Event-Triggered Fault Estimation for Stochastic Systems over Multi-Hop Relay Networks with Randomly Occurring Sensor Nonlinearities and Packet Dropouts

Wireless sensors have many new applications in which remote estimation is essential. Because a remote estimator is often located far from the process and the wireless transmission range of sensor nodes is limited, sensor nodes typically forward data packets to the remote estimator through a series of relays over a multi-hop link. In this paper, we consider a network of sensor nodes and relay nodes in which the relay nodes can forward estimated values to the remote estimator. An event-triggered remote estimator of state and fault, together with a corresponding data-forwarding scheme, is investigated for stochastic systems subject to both randomly occurring nonlinearities and randomly occurring packet dropouts governed by Bernoulli-distributed sequences, so as to achieve a trade-off between estimation accuracy and energy consumption. Recursive Riccati-like matrix equations are established to calculate the estimator gain that minimizes an upper bound on the estimation error covariance. Subsequently, a sufficient condition and a data-forwarding scheme are presented under which the error covariance is mean-square bounded over the multi-hop links with random packet dropouts. Furthermore, implementation issues of the theoretical results are discussed and a new data-forwarding communication protocol is designed. Finally, the effectiveness of the proposed algorithms and communication protocol is evaluated extensively on an experimental platform built with one sensor node and two relay nodes.

Introduction

The increasing use of battery-powered wireless sensors can improve productivity and reduce installation costs in industrial processes. Such sensors span a wide range of applications, including area detection, environmental sensing, and industrial monitoring and control [1]. In these applications, data packet loss is often encountered owing to bandwidth constraints, and wireless sensors frequently operate in harsh environments with both uncontrollable elements and aggressive conditions [2]. In such cases, estimator or observer designs based on a purely linear sensor model [3,4] may not provide a reliable solution. It should also be pointed out that the size and cost of sensor nodes impose constraints on resources such as energy, memory, and computation speed [5-8]. These constraints motivate the development of new estimators and data transmission schemes that explicitly account for them. On the other hand, component failures arise frequently in practical engineering systems. Faults in sensors, actuators, or the process (plant) itself can drastically modify system behavior, resulting in performance degradation or even instability. To increase the safety and reliability of networked control systems, fault diagnosis and its applications to a wide range of industrial and commercial processes have been investigated intensively over the past two decades [9-12], and many fruitful results for a variety of systems have been reported [13-21].
In the past few years, a number of results on state and/or fault estimation for systems with packet dropouts and/or sensor nonlinearities have been established using a variety of methods. Some examples are mentioned here. Linear and nonlinear estimation problems with missing measurements were tackled in [22], where the sensor nonlinearity was modeled by a sector-bound condition. A robust filter was designed in [23] against sensor saturation and packet losses such that the filtering error dynamics were mean-square stable and a prescribed performance index was satisfied. The problem of asynchronous filtering was addressed in [24] for stochastic Markov jump systems with probabilistically occurring sensor nonlinearities. More recently, reducing redundant transmissions by the wireless transmission module has been formulated as event-triggered data transmission, first presented in [25] based on the send-on-delta concept. This kind of transmission scheme, which takes both system performance and energy conservation into account, has become an active area of research, and some outstanding results have been obtained [26-30]. For instance, a modified Kalman filter using the send-on-delta method was designed in [26]. The study in [27] extended this to a varying-threshold send-on-delta transmission scheme for stochastic nonlinear systems, deriving an easily implemented recursive algorithm that accounts for linearization errors, time delays, and packet losses. The work in [28] proposed optimal and suboptimal consensus filters with event-triggered communication protocols to achieve energy efficiency by reducing unnecessary interactions among neighboring sensors. More related studies can be found in recent publications [31-41] and the references therein.

As mentioned above, most existing research focuses on single-hop networks, where sensor nodes collect measurements and their wireless transmission modules transmit data directly to the remote estimator for fault and state estimation at each time step. However, a sensor node cannot work properly once the estimator lies beyond its transmission range. An event-triggered transmission scheme designed for single-hop networks can, in principle, be carried over to the multi-hop case, with the sensors making transmission decisions and the relay nodes simply forwarding information to the remote estimator. Nevertheless, the relay nodes may fail to complete their data-forwarding duty under network failures (e.g., packet dropouts and jamming attacks), and adding antennas would increase the power consumption of the sensor nodes. Under these circumstances, studying remote estimation over multi-hop relay networks is clearly of significance. In this paper, we consider the situation in which a remote estimator is located far from the process, so a wireless sensor node has to forward its data packets to the remote estimator through a series of relay nodes over multi-hop links subject to random packet dropouts. This article focuses on how to derive an event-triggered estimator of state and fault that copes with both randomly occurring nonlinearities and randomly occurring packet dropouts, and on how to design a data-forwarding scheme that realizes a trade-off between estimation performance and energy consumption.
In particular, we design a new data-forwarding protocol, verified on an experimental platform, that allows a sensor and a series of relay nodes to maintain the multi-hop network even when a "sleep" command is active in the transmission module. The main contributions of this paper are summarized as follows: (1) A co-design algorithm for an event-triggered state and fault estimator is presented for a class of linear stochastic systems, for the first time, to deal with simultaneous randomly occurring nonlinearities and randomly occurring packet dropouts, which reflects reality closely. An upper bound on the state and fault error covariance is minimized by appropriately designing the estimator gain. (2) A sufficient condition and a data-forwarding scheme are given such that the error covariance is mean-square bounded over the multi-hop relay links with random packet dropouts. This data-forwarding scheme enables each relay node to forward the estimated values to the remote estimator. (3) Implementation issues of the theoretical results are discussed. A new data-forwarding communication protocol applicable to the addressed topology is designed, covering the hardware design and the corresponding procedure implementation. The proposed communication protocol and theoretical results are verified on a classical industry-like process.

Nomenclature: Prob(x) denotes the probability of the event x. N and R denote the sets of natural and real numbers, respectively; R^{m×n} denotes the set of m-by-n real-valued matrices, and R^n is short for R^{n×1}; R^{n×n}_+ and R^{n×n}_{++} are the sets of n × n positive semi-definite and positive definite matrices, respectively. When X ∈ R^{n×n}_+, we simply write X ≥ 0 (or X > 0 if X ∈ R^{n×n}_{++}). For X ∈ R^{m×n}, X^T denotes the transpose of X. For x ∈ R^{m×n}, (x)^2 denotes xx^T. I is an identity matrix with appropriate dimensions. Furthermore, E(·), Var(·) and trace(·) denote mathematical expectation, variance, and the trace of a matrix, respectively.

Problem Statement

A block diagram of a multi-hop relay network is given in Figure 1. The process is a discrete-time linear stochastic system defined for the discrete time index k ∈ L, where L = {0, 1, . . .}. The variables x̄_k ∈ R^n and f̄_k ∈ R^n are the state vector and the fault signal to be estimated, respectively. The noise w_k ∈ R^n is independent and identically distributed (i.i.d.) zero-mean Gaussian with known variance. The sensor measurement model with both randomly occurring nonlinearities and randomly occurring packet dropouts is described by

ȳ_k = β^i_k [α_k C̄ x̄_k + (1 − α_k) φ̄(x̄_k)] + v_k,   (3)

where the measurement output ȳ_k ∈ R^m, and the measurement noise v_k ∈ R^m is another i.i.d. zero-mean Gaussian signal with known variance; the system matrix Ā and output matrix C̄ are known with appropriate dimensions. Figure 1 illustrates that data packets are transmitted to the remote estimator over a wireless medium through N successive relay nodes. Each relay node receives data packets only from its predecessor and then forwards them to the next relay node. The sensor node is treated as relay 0, and the other relay nodes are denoted relay i (i = 1, 2, . . . , N). Additionally, let γ^i_k be the decision variable: if γ^i_k = 1, the data packets at relay node i are sent to the next relay node; if γ^i_k = 0, they are not.
The random variables α_k ∈ R and β^i_k ∈ R are Bernoulli-distributed white sequences with probabilities

Prob{α_k = 1} = ᾱ, Prob{β^i_k = 1} = β̄^i,

where ᾱ, β̄^i ∈ [0, 1] are known constants. All random variables α_k and β^i_k are assumed to be independent in k and uncorrelated with the noise signals w_k and v_k. The nonlinear function φ̄(x̄_k) is further assumed to be known and analytic everywhere. The dynamic model of the fault vector f̄_k, borrowed from [42,43], is

f̄_{k+1} = M̄ f̄_k,   (6)

where M̄ is a known matrix with appropriate dimensions.

Remark 1. For the co-design problem of state and fault estimation in a stochastic system model, a robust fault estimation filter design based on Riccati-like difference equations was proposed in [44,45]. Under the assumption that the sampling interval is sufficiently small, the fault increment was there supposed to be negligible. In practice, however, faults can produce large amplitude changes over a short time, especially when time-varying faults occur. Compared with the assumption in [44,45], the time-varying fault model in Equation (6) covers constant faults as a special case and is therefore less restrictive.

Remark 2. The measurement model in Equation (3) provides a unified framework to account for both randomly occurring sensor nonlinearities and random packet dropouts. The stochastic variable α_k captures the probabilistic sensor nonlinearities, while the random variable β^i_k represents random packet dropouts. Specifically, if α_k = 1 and β^i_k = 1, the sensor works normally; if α_k = 0 and β^i_k = 1, the sensor is subject to the nonlinearity only; and if β^i_k = 0, the measurement output contains the noise signal v_k only, implying that a packet dropout has occurred.

By introducing the augmented vector x_k = [x̄_k^T f̄_k^T]^T, Equations (1) and (3) can be rewritten in compact form with correspondingly augmented matrices. Before giving the main results, the following lemma will be needed.

Lemma 1 (Lemma 1 of [46]). Let A, D, E and F be real matrices of appropriate dimensions with FF^T ≤ I. For any matrix P = P^T > 0 and scalar ε > 0 such that ε^{-1} I − EPE^T > 0, we have (A + DFE) P (A + DFE)^T ≤ A (P^{-1} − ε E^T E)^{-1} A^T + ε^{-1} D D^T.

A Co-Design Algorithm for the Event-Triggered State and Fault Estimator

Based on the measurement y_k, the estimate x̂^0_k at the sensor node (relay node 0) is computed recursively in Equation (10), where K_k is the estimator gain to be designed. The estimate at relay node i is then given by Equation (11), and the corresponding estimation error covariance P^i_k is given by Equation (12), with estimation error e^i_k = x_k − x̂^i_k.

Remark 3. Traditionally, the remote estimator needs the measurements collected by the sensors at every time instant k. However, reducing the number of relay-to-relay transmissions extends the lifetime of the relay nodes and saves energy. Under this circumstance, multi-hop links create a problem: measurements cannot be obtained at every time instant, so the estimated values cannot be computed by the remote estimator itself. Because the ultimate goal of remote estimation is to obtain estimated values at each time instant, Equation (11) lets the relay nodes forward the estimated values, rather than raw measurements, to the remote estimator.
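To fix ideas before the formal design, the following minimal Python sketch simulates the sensor-node estimator recursion under the Bernoulli dropout/nonlinearity model; the measurement structure follows the case analysis of Remark 2, while the system matrices, the tanh nonlinearity, and the fixed placeholder gain K are illustrative assumptions of ours (the paper computes the time-varying gain K_k from Equation (24)).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not the paper's values).
A = np.array([[0.95, 0.10], [0.0, 0.90]])    # system matrix
C = np.eye(2)                                 # output matrix
Qw, Rv = 0.01 * np.eye(2), 0.04 * np.eye(2)   # noise covariances
alpha_bar, beta_bar = 0.95, 0.9               # Prob{alpha_k=1}, Prob{beta^i_k=1}

def phi(x):
    # Hypothetical sector-bounded sensor nonlinearity.
    return np.tanh(x)

x = np.array([1.0, -0.5])    # true state
x_hat = np.zeros(2)          # sensor-node estimate
K = 0.5 * np.eye(2)          # placeholder gain; the paper designs K_k recursively

for k in range(50):
    alpha_k = rng.random() < alpha_bar   # 1: linear output, 0: nonlinearity only
    beta_k = rng.random() < beta_bar     # 0: packet dropout (noise-only output)
    w = rng.multivariate_normal(np.zeros(2), Qw)
    v = rng.multivariate_normal(np.zeros(2), Rv)

    y = beta_k * (alpha_k * C @ x + (1 - alpha_k) * phi(x)) + v  # Remark 2 structure
    x_hat = A @ x_hat + K @ (y - C @ x_hat)                      # Eq. (10)-style update
    x = A @ x + w
```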
The purpose of this section is to design an estimator of the form of Equation (10) for the stochastic system in Equation (1) and the sensor in Equation (3) with incomplete information (randomly occurring sensor nonlinearities and randomly occurring packet dropouts). More specifically, we seek a filter parameter K_k such that the following requirements are met simultaneously: (a) in the presence of packet loss and randomly occurring sensor nonlinearities, an upper bound on the error covariance P^0_k is derived, i.e., there exists a sequence of positive-definite matrices P̄^0_k with P̄^0_k ≥ P^0_k; and (b) the sequence of upper bounds P̄^0_k is minimized by the designed estimator gain K_k through a recursive scheme. We first obtain an upper bound on the error covariance P^0_k.

Theorem 1. Consider the stochastic system described by Equation (1) with measurements in Equation (3) suffering from both packet loss and randomly occurring sensor nonlinearities. For an arbitrary positive constant γ and a given initial condition P̄^0_0 ≥ P^0_0, the recursion in Equation (15) yields an upper bound P̄^0_k ≥ P^0_k for all k, as stated in Inequality (14).

Proof. First, the error dynamics of the addressed system are obtained by subtracting Equation (10) from Equation (1). With the results in [47] and a Taylor series expansion of φ(x_k) around x̂^0_k, the nonlinearity can be written as its first-order term plus a remainder o(e^0_k). The high-order remainder can further be expressed in the factored form H_k N_k L_k e^0_k, where H_k is a problem-dependent matrix of appropriate dimensions, L_k provides the estimator with an additional degree of freedom, and N_k is an unknown discrete-time matrix accounting for the linearization error and satisfying N_k N_k^T ≤ I. Inserting Equations (17) and (18) into Equation (16) expands the estimation error as in Equation (19). By the definition of the error covariance P^0_k, Equation (16) yields the covariance recursion in Equation (20). Starting from the initial condition P̄^0_0 ≥ P^0_0, the upper bound is proved by induction: assuming P̄^0_k ≥ P^0_k, we must show that P̄^0_{k+1} ≥ P^0_{k+1}. Using the elementary inequality xy^T + yx^T ≤ xx^T + yy^T and the results of Lemma 1, P^0_{k+1} can be bounded as in Inequality (21). Noticing that E[x_k x_k^T] = P^0_k + x̂^0_k (x̂^0_k)^T, Inequality (21) can be rewritten as Inequality (22). Since the nonlinear function φ(x̂^0_k) is known and analytic everywhere, E[φ(x̂^0_k) φ^T(x̂^0_k)] is computable, and P^0_{k+1} can be bounded as in Inequality (23), which implies that Inequality (14) is true.

In what follows, the gain matrix K_k is determined by minimizing the upper bound on the error covariance given by Equation (14).

Theorem 2. Consider the stochastic system described by Equation (1) with measurements in Equation (3) suffering from both packet dropouts and randomly occurring sensor nonlinearities. The gain matrix K_k is given by Equation (24), and the upper bound on the estimation error covariance P̄^0_{k+1} is recursively calculated by the Riccati-like difference Equation (15).

Proof. We show that the gain in Equation (24) is optimal in the sense that it minimizes the upper bound P̄^0_{k+1}. Note that three terms in Equation (15) are quadratic in K_k, so the matrix differentiation formulas can be applied to Equation (15). Differentiating trace(P̄^0_{k+1}) with respect to K_k, setting the derivative to zero, and solving for the optimal gain gives exactly Equation (24).
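The extracted text does not preserve the closed form of Equation (24), but the derivation pattern in the proof of Theorem 2 (differentiate the trace of a covariance bound that is quadratic in K_k and set the derivative to zero) is the classical one. The sketch below applies it to the standard Riccati recursion; the dropout- and nonlinearity-dependent terms of the paper's bound are omitted, so this illustrates the technique rather than the paper's exact gain.

```python
import numpy as np

def optimal_gain(A, C, P, Rv):
    # Minimize trace(P_next) for
    #   P_next = (A - K C) P (A - K C)^T + K Rv K^T + Qw.
    # d trace / dK = 0  =>  K (C P C^T + Rv) = A P C^T.
    return A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + Rv)

def riccati_step(A, C, P, Qw, Rv):
    K = optimal_gain(A, C, P, Rv)
    Acl = A - K @ C
    return Acl @ P @ Acl.T + K @ Rv @ K.T + Qw, K

# One illustrative recursion with assumed matrices:
A = np.array([[1.0, 0.1], [0.0, 0.9]])
C = np.array([[1.0, 0.0]])
P = np.eye(2)
Qw, Rv = 0.01 * np.eye(2), np.array([[0.04]])
for _ in range(20):
    P, K = riccati_step(A, C, P, Qw, Rv)   # P converges to the bound's fixed point
```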
The estimator gain is thus optimal in the sense that it minimizes the upper bound P̄^0_{k+1} on the estimation error covariance.

Data Forwarding with Packet Dropouts

Thus far, we have derived an upper bound on the estimation error covariance and minimized it by properly designing the estimator gain. However, as shown in Section 3.1, only packet dropouts in the sensor transmission stage have been considered; packet loss in the multi-hop links has been neglected. In the following, the mean-square boundedness of the error covariance P^i_k is established.

Theorem 3. Consider relay node i and the stochastic system described by Equation (1) subject to random packet loss in the multi-hop links. Let ρ_s(A) be the s-th eigenvalue of the matrix A, s = 1, . . . , n. If the system matrix A is unstable and satisfies |ρ_s(A)| < 1/√β̄^i, then the error covariance P^i_k is mean-square bounded.

Proof. The upper bound on the error covariance P^i_k at relay node i is updated according to Equations (12) and (14); taking expectations on both sides yields Equation (27). The difference of expectations between two adjacent sampling instants follows as in Equation (28), with the initial condition P̄^i_0 = P^0_0 > 0. According to the topology in Figure 1 and the unstable system matrix A, the relations in Equation (29) hold. From Equation (29), we can infer that E[P^i_1] > E[P^i_0] via Lemma 2.2 of [48], which implies that E[P^i_1] ≤ Θ_1 for some Θ_1 > 0. By induction and the continuity of Equation (27), E[P^i_k] ≤ Θ for some Θ > 0. Further, denote by P^i_∞ the steady-state value of E[P^i_k] at relay node i; P^i_∞ solves the matrix equation (30) with E[P^{i−1}_k] > 0. This is equivalent to an extended Lyapunov equation and has a unique positive solution if |ρ_s(A)| < 1/√β̄^i. As a result, E[P^i_k] ≤ Θ, where Θ is the unique positive solution, so the boundedness and convergence of E[P^i_k] are guaranteed.

Remark 4. From Equations (29) and (28), it follows that E[P^i_1] ≤ E[P^i_0] = P^0_0 when the system matrix A is stable. Therefore, the error covariance P^i_k is also mean-square bounded if the stochastic system in Equation (1) is stable.

Remark 5. Owing to the random packet dropouts, the error covariance P^i_k is time-varying for any given positive initial state. However, P^i_k is bounded with probability one if E[P^i_k] is bounded [49]. Therefore, E[P^i_k] ≤ Θ with Θ > 0 can be interpreted as mean-square stability of the estimation error.

Although many event-triggered sensor schedules (e.g., [50,51]) can be utilized in multi-hop networks, wireless network failures may leave the relay nodes unable to complete their data-forwarding tasks. Thus, it is necessary to design an energy-efficient data-forwarding scheme for relay nodes that is robust to network data dropouts.

Theorem 4. Given a positive constant δ^i < ∞, if the event condition of relay node i in Equation (32) is satisfied, where |·| denotes the absolute value, then the proposed estimator in Equation (11) ensures that trace(E[P^i_k]) is bounded, i.e., trace(E[P^i_k]) ≤ Ω for a unique positive solution Ω.

Proof. Recall the expression for E[P^i_k] in Theorem 3, restated in Equation (33). Using the properties of the matrix trace, Equation (33) becomes Equation (34), and for the sequence trace(E[P^i_k]) we obtain Inequality (35).
Then, substituting the event condition in Equation (32) into Inequality (35) yields Inequality (36). The remainder of the proof of Theorem 4 is similar to that of Theorem 3 and is therefore omitted.

In Algorithm 1, the condition γ^{i−1}_k = 1 and β^i_k = 1 declares that relay node i has successfully received data packets; in this case, to achieve more accurate remote estimation, the data packets at relay node i are sent on to the next relay node without entering the event-triggered decision. We now elaborate the scheme described in Algorithm 1 for relay node i. First, the measurements are collected locally at each time instant, and the state values are estimated by a steady-state Kalman filter. Next, relay node i forwards the estimated state values to the next relay node. If γ^{i−1}_k = 1 and β^i_k = 1, relay node i successfully receives the estimated state values from relay node i − 1 at time instant k, i.e., x̂^i_k = x̂^{i−1}_k, and the corresponding error covariance is P̄^i_k = P̄^{i−1}_k. To achieve more accurate estimates of the system state, relay node i then forwards the data packets to the next relay node without entering the event-triggered decision rule. Conversely, if γ^{i−1}_k = 0 or β^i_k = 0 at time instant k, relay node i − 1 does not send the estimated state values to relay node i (for energy conservation), or relay node i cannot receive the data packets because of losses. Without state information from relay node i − 1, the estimated state values and error covariance at relay node i are updated by propagating the previous estimate through the system model. The event-triggered decision rule then determines whether relay node i sends the current estimate x̂^i_k to the next relay node.
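The following is a minimal sketch of the Algorithm 1 logic at relay node i, as described above; the RelayState container, the open-loop propagation step, and the trace-based trigger are our own rendering of an event condition in the spirit of Equation (32), whose exact form is not preserved in the extracted text.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class RelayState:
    x: np.ndarray   # local estimate at relay i
    P: np.ndarray   # local covariance bound at relay i

def relay_step(node, packet, A, Qw, delta):
    """One Algorithm-1-style step at relay node i.
    packet is (x, P) from relay i-1, or None when gamma^{i-1}_k = 0 or beta^i_k = 0.
    Returns True when relay i should forward to relay i+1."""
    if packet is not None:
        node.x, node.P = packet        # adopt the upstream estimate
        return True                    # forward without the event-triggered check
    trace_before = np.trace(node.P)
    node.x = A @ node.x                # no packet: propagate through the model
    node.P = A @ node.P @ A.T + Qw
    # Event-triggered rule: forward only when the covariance-trace
    # increment exceeds the threshold delta^i.
    return abs(np.trace(node.P) - trace_before) > delta
```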
Experimental Verification

In this section, the effectiveness of the proposed theoretical algorithms is evaluated on a test bed: a scaled-down industrial process of twin water tanks. Based on the architecture in Figure 1, a sensor node (Node 1) and two relay nodes (Node 2 and Node 3) form a multi-hop network transmitting water-level information from Node 1 to Node 2 and on to Node 3. At Node 3, the water-level information is fed to a remote computer. This section is organized as follows. A new transmission protocol, including the hardware implementation and the corresponding procedure, is presented in Section 4.1. Section 4.2 introduces the system description and modeling. Experimental results on estimation quality and energy conservation are given in Section 4.3.

A New Transmission Protocol for the Data-Forwarding Scheme

In most industrial applications, a wireless transmission module (WTM) consumes more energy than a computation module. This is why we designed the data-forwarding scheme to reduce the time spent sending and receiving data, thereby extending the lifetime of the wireless nodes. However, stopping communication does not stop energy consumption, because the WTM of each node keeps monitoring whether data have arrived. Although the WTM can be put to sleep to conserve energy in single-hop wireless networks [52], existing wireless transmission technologies do not readily provide such a sleeping capability in multi-hop networks. For example, in Wi-Fi, station (STA) mode and access point (AP) mode must both exist in a relay node, but a WTM operating in AP mode takes a long time to wake up (or may fail to wake up at all) once it enters a sleep state; this extends the transmission time and limits real-world applicability. ZigBee cannot be applied to the network topology in Figure 1, because the coordinator and router, which must be added as relay nodes in a multi-hop network, cannot go to sleep. In addition, Bluetooth is not suitable for the WTM of relay nodes owing to its long pairing time and limited transmission distance. All of this motivated us to devise a new transmission protocol suitable for any data-forwarding scheme in multi-hop relay networks.

First, we introduce the components of each relay node: (i) the wireless transmission module, which forwards data packets between relay nodes; (ii) the computation module, which determines when to forward data packets via our forwarding scheme; (iii) the switching module, which turns the power of the WTM on and off; and (iv) a transmitter and receiver, a pair of low-rate wireless transceivers distinct from the WTM. The transmitter and receiver serve as a medium to wake up the WTM quickly and to preserve network connectivity while the WTM is in sleep mode. The procedure of the new transmission protocol is presented in Algorithm 2.

Algorithm 2. The implementation steps for the new transmission protocol. When a data packet is to be sent from relay node i to relay node i + 1, the following steps are performed:
Step 1: At relay node i, the computation module sends a specified digital signal to the transmitter through the I/O ports.
Step 2: At relay node i, the switching module turns on the power of the WTM.
Step 3: The transmitter of relay node i sends a signal to the receiver of relay node i + 1.
Step 4: At relay node i + 1, the receiver sends a specified digital signal to wake up the computation module via the I/O ports.
Step 5: At relay node i + 1, the computation module instructs the switching module to power on the WTM.
Step 6: The WTM of relay node i forwards the data packets to the WTM of relay node i + 1.
Step 7: At relay node i, the switching module turns off the power of the WTM after the transmission ends.

Figure 2 is a photograph of the components of Node 2. Owing to limited space, the structures of Node 1 and Node 3 are omitted; they are similar to that of Node 2 except that the receiver is absent in Node 1 and the transmitter is absent in Node 3. As shown in Figure 2, the node contains the following components: an STM32L162ZD microcontroller [53] (STMicroelectronics, Geneva, Switzerland) with an ARM Cortex-M3 CPU, 384 Kbytes of Flash memory, and 48 Kbytes of RAM, used as the computation module; and an HC-11 [54] (a 433 MHz UART serial wireless transceiver module), selected as the WTM for its simple and flexible operation. The switching module is an S9013 NPN triode, and the power management system is composed of an X6206 voltage regulator and a lithium-ion battery. In particular, the 315 MHz transmitter and receiver were chosen for their extremely low power consumption: even though their transmission rate is very limited, the idle-state current is approximately 0 mA and the transmission-state current is below 2 mA.
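The handshake of Algorithm 2 can be summarized as a message sequence for one hop. The following sketch merely prints the seven steps as a trace; the function and log wording are ours, with module names following Figure 2.

```python
def forward_packet(i, payload):
    """Trace of the Algorithm 2 handshake between relay i and relay i+1 (sketch)."""
    return [
        f"node {i}: MCU -> 315 MHz transmitter (wake-up signal on I/O)",     # Step 1
        f"node {i}: switching module powers WTM (HC-11) on",                 # Step 2
        f"node {i} -> node {i+1}: 315 MHz wake-up frame",                    # Step 3
        f"node {i+1}: 315 MHz receiver wakes MCU via I/O",                   # Step 4
        f"node {i+1}: MCU instructs switching module to power WTM on",       # Step 5
        f"node {i} -> node {i+1}: WTM forwards {len(payload)}-byte packet",  # Step 6
        f"node {i}: switching module powers WTM off",                        # Step 7
    ]

for line in forward_packet(2, b"\x01\x02\x03"):
    print(line)
```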
Implementation of the Experiment

Two operating modes are used in the experiment, described in Algorithms 3 and 4. If a transmission command is issued by the STM32 according to the designed forwarding scheme, the active mode is executed; otherwise, the sleep mode is activated.

Algorithm 3. The active mode for Node i. When Node i sends a data packet to Node i + 1, the following steps are performed:
Step 1: At Node i, the STM32 sends a signal to the 315 MHz transmitter and activates the HC-11.
Step 3: Node i forwards the data packets to Node i + 1.
Step 4: At Node i, the HC-11 is turned off.

Algorithm 4. The sleep mode for Node i. When Node i is not allowed to send a data packet to Node i + 1, the following steps are performed:
Step 1: At Node i, the STM32 and the 315 MHz transmitter enter an idle state, and the HC-11 is not turned on.
Step 2: At Node i + 1, the STM32 determines whether to send data packets based on the proposed data-forwarding scheme; the 315 MHz receiver enters an idle state, and the HC-11 is not turned on.

Additionally, the received data packets may be incomplete or contain erroneous information because of network failures, so a data validation algorithm is presented in Algorithm 5, which also reduces the probability of data-packet loss to some extent. We introduce two indicators flag1 ∈ {success, failure} and flag2 ∈ {success, failure}. If either flag1 = failure or flag2 = failure, a re-transmission command from Node 2 (or Node 3) is fed back to Node 1 (or Node 2). Conversely, if both flag1 = success and flag2 = success, the end command is executed.

System Description and Modeling of the Twin Water-Tank System

In this subsection, the feasibility and practicality of the proposed theoretical results and transmission protocol are examined on a continuous-time linear model [55]. Figure 3 is a photograph of the twin water-tank system, which comprises two small tanks and a reservoir. In the system state-space equations, for i = 1 and 2, h^(i) is the water level in tank i, calculated from the voltage measured by the input-type level transmitter placed in each tank. The flow rates are calculated as q^(i) = f^(i)/98 and q^(in) = f^(in)/98, where f^(i) and f^(in) are measured by the flow meters. In addition, A^(1) and A^(2) are the cross-sectional areas of the water tanks, and r^(1) and r^(2) are the water resistances. Furthermore, y^(1) and y^(2) are the output variables. Based on the parameters of the experimental platform, the continuous-time model in Equation (37) is discretized with a sampling period of 5 s, where the noise processes {w_k} and {v_k} are assumed mutually independent, white, and zero-mean, with known variances Q_w ≥ 0 and R_v > 0, respectively. The error accuracy e_m of the level transmitters is ±0.5 cm. Considering the main technical specifications of the water-level sensors, the parameters are chosen as M̄ = 1, Q_w = diag(1, 1), and R_v = diag(0.25, 0.25). The nonlinear function φ̄(x_k) is taken as cosh(·), and the high-order expansion term is characterized by H_k = diag(0.1, 0.2).
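The paper discretizes the continuous-time twin-tank model of Equation (37) with a 5 s sampling period. Since the platform's physical parameters are not reproduced here, the sketch below shows the zero-order-hold discretization step itself, using assumed illustrative values for the tank areas and resistances and assuming SciPy is available.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative physical parameters (the platform's true values are not given here).
A1, A2 = 15.0, 15.0      # cross-sectional areas of the tanks
r1, r2 = 10.0, 10.0      # water resistances
Ts = 5.0                 # sampling period used in the paper (5 s)

# Linearized cascade dynamics dh/dt = Ac h + Bc q_in, with q_i = h_i / r_i.
Ac = np.array([[-1.0 / (A1 * r1), 0.0],
               [ 1.0 / (A2 * r1), -1.0 / (A2 * r2)]])
Bc = np.array([[1.0 / A1], [0.0]])

# Zero-order-hold discretization via the augmented matrix exponential.
M_aug = np.zeros((3, 3))
M_aug[:2, :2], M_aug[:2, 2:] = Ac, Bc
Md = expm(M_aug * Ts)
Ad, Bd = Md[:2, :2], Md[:2, 2:]   # discrete-time system and input matrices
```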
Assessment of the Effectiveness of the Theoretical Results

In this part, the effectiveness of the proposed estimator and data-forwarding scheme is assessed through the following experiments.

(1) Experiment 1: In the first experiment, the accuracy of state estimation is evaluated using the proposed data-forwarding scheme. We temporarily ignore the system fault f̄_k in Equation (1) for convenience of discussion. The running time of the system is set to 50 steps, and the initial water levels of the twin tanks are 53 and 24 cm, respectively. To verify the practicability of the proposed algorithm, the parameters are set as θ^i = Prob{β^i_k = 1} = 0.9 (i = 1, 2), Prob{α_k = 1} = 0.95, γ = 0.002, L_k = diag(0.01, 0.01), and the transmission threshold δ^i = 0.032 (i = 1, 2). Figures 4 and 5 show the two water levels measured by the level transmitters and the water levels estimated at each node via the proposed estimator and data-forwarding scheme. As shown in Figures 4 and 5, the measured and estimated values coincide as time increases; the estimation accuracy obtained with the proposed data transmission scheme is clearly satisfactory. Moreover, the corresponding communication behaviors of β^1_k, γ^2_k and β^2_k at each time instant are shown in Figure 6. Our data-forwarding scheme effectively reduces the update frequency compared with the traditional time-triggered mechanism.

(2) Experiment 2: To verify the performance of event-triggered fault estimation, a constant fault and a time-varying fault are injected in the second experiment. The estimated signals of the constant fault and the time-varying fault are illustrated in Figures 7 and 8, respectively. For comparison, the fault estimates produced by the time-driven learning observer (TDLO) borrowed from [4] and the evolution of the event-triggered communication behaviors are also depicted in Figures 7 and 8. Compared with the TDLO, the proposed event-triggered fault estimation (ETFE) not only estimates faults more rapidly but also achieves robust reconstruction of both constant and time-varying actuator faults. Further, the effect of different ᾱ and β̄^i (i = 1, 2) on the estimation performance is examined in Tables 1 and 2, respectively. A larger probability corresponds to a smaller error bound; that is, when the randomly occurring sensor nonlinearity and packet dropouts have smaller probabilities of occurrence, the fault estimation achieves better performance. All of this makes the ETFE easy to implement in practice.

(3) Experiment 3: Energy conservation is verified using a 50 mAh battery. The battery voltages at Node 2 and Node 3 are compared in Figure 9, where the battery voltage under the periodic forwarding scheme drops to 3.28 V after 66 min; Node 2 then cannot work normally because its operating voltage must exceed 3.3 V [53]. By contrast, the battery voltage at Node 2 under the VDFS reaches 3.3 V only after 77 min. Node 2 consumes more energy than the other nodes because both the 315 MHz transmitter and the receiver are installed at Node 2. Since the network topology in Figure 1 is fixed, the system stops operating once the battery at Node 2 is exhausted. The working life of the battery is prolonged by 16.7%.

Remark 6. The battery voltage of Node 1 is not shown.
Because γ^0_k can never equal zero under the VDFS, a sensor data transmission schedule (e.g., [31,32]) can be used at Node 1 to achieve energy saving in practical applications.

Conclusions and Further Work

In this work, we have addressed the co-design problem of state and fault estimation with an event-triggered data-forwarding scheme for systems with both randomly occurring nonlinearities and randomly occurring packet dropouts governed by Bernoulli-distributed sequences in multi-hop relay wireless networks. Recursive Riccati-like matrix equations were established to calculate the estimator gain that minimizes an upper bound on the error covariance. A sufficient condition and a data-forwarding scheme were derived to achieve mean-square boundedness of the error covariance over the multi-hop relay links with random packet dropouts; this data-forwarding scheme enables each relay node to forward the estimated values to the remote estimator. Furthermore, a new transmission protocol was developed that supports the desired event-triggered transmission scheme under a fixed network topology, in which each relay node knows its previous and next relay nodes. The effectiveness of the proposed technique was evaluated on a twin water-tank system with one sensor node and two relay nodes. Some open problems remain for future research. First, time delays should be considered in this kind of network topology; constant (or random) time delays can occur when the number of relays is large. Second, the S9013 switching module is used to turn the wireless transmission module on and off, but frequent switching may shorten the module's operating life; it would be preferable for the wireless transmission module to implement self-dormancy for energy saving. Finally, combining the event-triggered transmission scheme with coding technologies may be an interesting direction for improving energy conservation in multi-hop relay networks.
Multi-UAV Redeployment Optimization Based on Multi-Agent Deep Reinforcement Learning Oriented to Swarm Performance Restoration

Distributed artificial intelligence is increasingly being applied to multiple unmanned aerial vehicles (multi-UAVs). This poses challenges for the distributed reconfiguration (DR) required to optimally redeploy multi-UAVs in the event of vehicle destruction. This paper presents a multi-agent deep reinforcement learning-based DR strategy (DRS) that optimizes multi-UAV group redeployment in terms of swarm performance. To generate a two-layer DRS spanning multiple groups and single groups, a multi-agent deep reinforcement learning framework is developed in which a QMIX network determines the swarm redeployment and each deep Q-network determines a single group's redeployment. The proposed method is simulated in Python, and a case study demonstrates its effectiveness as a high-quality DRS for large-scale scenarios.

Introduction

Recently, mission planning for unmanned aerial vehicles (UAVs) has received considerable attention [1,2], and distributed artificial intelligence (AI) technologies have been extensively applied to multiple-UAV (multi-UAV) mission planning, enabling efficient decision-making and yielding high-quality solutions [3,4]. For missions in geographically decentralized environments, the focus is on deploying UAVs to their destinations and repositioning them to adapt to changing circumstances [5]. To minimize the costs of positioning UAVs, Masroor et al. [6] proposed a branch-and-bound algorithm that determines the optimal UAV deployment in emergency situations. Savkin et al. [7] employed a range-based reactive algorithm for autonomous UAV deployment. Nevertheless, many existing distributed algorithms lack the security necessary to achieve the global objective.

For the UAVs in a swarm, the placement of each individual vehicle is important, but completion of the swarm mission is the ultimate goal. Wang et al. [8] proposed a K-means clustering-based UAV deployment scheme that significantly improves the spectrum and energy efficiency of cellular uplinks at limited cost, while Yu et al. [9] introduced an evolutionary game-based adaptive dynamic reconfiguration mechanism that provides decision support for the cooperative mode design of unmanned swarm operations. These algorithms address static multi-swarm problems. However, some UAVs may be destroyed or break down during a mission [10]. To deal with situations in which the swarm suffers unexpected destruction, adaptive swarm reconfiguration strategies are required [11].

Learning-based methods are gaining increasing attention for their flexibility and efficiency [12,13]. Deep reinforcement learning (DRL) has shown promising results in solving task assignment problems for multi-UAV swarms [14]. Samir et al. [15] combined DRL with joint optimization to improve learning efficiency, although changes in the dynamic environment can hinder the implementation of this strategy. Zhang et al. [16] investigated a double deep Q-network (DQN) framework for long-period UAV swarm collaborative tasks and designed a guided reward function to address the convergence problem caused by the sparse returns of long-period tasks.
Huda et al. [17] investigated a surveillance application scenario using a hierarchical UAV swarm, employing a DQN to minimize a weighted sum cost; their DRL method exhibited better convergence and effectiveness than traditional methods. Zhang et al. [18] designed a DRL-based algorithm to find the optimal attack sequence for a large-scale UAV swarm so as to disable a target communication system. Mou et al. [19] built a geometric method that projects a 3D terrain surface onto many weighted 2D patches and proposed a swarm DQN reinforcement learning algorithm for leader UAVs to select patches, covering the target area with little redundancy. Liu et al. [20] focused on minimizing the communication and computation latency in a maritime UAV swarm mobile edge computing network, proposing a DQN and a deep deterministic policy gradient algorithm to optimize the multi-UAV trajectories and the configuration of virtual machines. However, multi-agent DRL (MADRL) captures real-world situations more readily than single-agent DRL [21,22], so MADRL is considered an important research topic. Xia et al. [22] proposed an end-to-end cooperative multi-agent reinforcement learning scheme that enables a UAV swarm to make decisions on the basis of the past and current states of the target. Lv et al. [23] proposed a MADRL-based UAV swarm communication scheme to optimize relay selection and power allocation, together with a DRL-based scheme to improve anti-jamming performance. Xiang et al. [24] established an intelligent UAV swarm model based on a multi-agent deep deterministic policy gradient algorithm, significantly improving the success rate of the UAV swarm in confrontations.

In summary, developments in distributed AI mean that swarm intelligence is now of vital strategic importance, and it is essential to develop multi-agent algorithms. However, few reconfiguration studies have investigated this distributed multi-agent scenario. Therefore, this paper proposes a MADRL-based distributed reconfiguration strategy (DRS) for UAV swarm reconfiguration after large-scale destruction. The main contributions of this paper are as follows: (1) UAV swarm reconfiguration is formulated so as to generate a swarm DRS that accounts for detection missions and destruction; the finite number of UAVs forms the constraint, and the coverage area forms the objective. (2) MADRL-based swarm reconfiguration employs multi-agent deep learning and a QMIX network. Each agent, representing one group, uses reinforcement learning to select the optimal distributed reconfiguration (DR) actions, and the QMIX network synthesizes the actions of all agents and outputs the final strategy. (3) Once the network has been trained, the algorithm can effectively exploit various kinds of UAV swarm information to support DR decision-making, enabling efficient and steady multi-group swarm DR toward the mission objective.

The remainder of this paper is organized as follows. Section 2 presents the swarm mission framework. Section 3 elucidates the DRS, and Section 4 introduces a UAV swarm reconfiguration case study of detection missions. Finally, Section 5 presents the concluding remarks.

Mission

A detection mission containing M irregular detection areas is considered. As shown in Figure 1a, the detection areas (colored yellow) are divided into hexagons, which are inscribed hexagons of the mission areas (colored green).
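The hexagonal decomposition just described can be reproduced with a few lines of geometry. In the sketch below, R is the hexagon circumradius and the tiling uses a flat-top convention; the paper does not specify its tiling convention, so this is one common choice of ours.

```python
import math

def hexagon_area(R):
    """Area of a regular hexagon with circumradius R, i.e. the portion of a
    detection disc of radius R that tiles the plane without gaps."""
    return 3 * math.sqrt(3) / 2 * R ** 2

def hex_centers(rows, cols, R):
    """Centers of a flat-top hex tiling: columns spaced 1.5*R apart,
    odd columns offset by half the row pitch sqrt(3)*R."""
    dy = math.sqrt(3) * R
    return [(1.5 * R * c, dy * r + (dy / 2 if c % 2 else 0.0))
            for r in range(rows) for c in range(cols)]

# Example: a 6 x 7 grid of candidate UAV mission-area centers.
centers = hex_centers(6, 7, R=3 * math.sqrt(3))
```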
Problem Formulation

The swarm detection mission area set can be expressed as

MA = {MA_1, MA_2, . . . , MA_M},

where each group mission area MA_m, m ∈ {1, 2, . . . , M}, is covered by a certain number of hexagons:

MA_m = {ma_m1, ma_m2, . . . , ma_mn, . . . , ma_mN_m},

where N_m is the total number of hexagons in group mission area MA_m, and each hexagon represents a single UAV mission area ma_mn, n ∈ {1, 2, . . . , N_m}. A UAV swarm, whose size is determined by the detection area, is dispatched for the detection mission. Each area requires a group to execute the detection mission, and the number of UAVs in the group depends on the number of hexagons in the mission area. Furthermore, each group is formed of one leader UAV and several follower UAVs. To execute a detection mission, as shown in Figure 1b, the radius R of the UAV detection area is determined by the detection equipment installed on the UAVs. The UAV swarm can then be expressed as

G = {G_1, G_2, . . . , G_M},

where each group G_m, m ∈ {1, 2, . . . , M}, performs detection in the group mission area MA_m and is given by

G_m = {U_m1, U_m2, . . . , U_mn, . . . , U_mN_m},

where U_mn is the n-th UAV in group m and performs detection in UAV mission area ma_mn. The first UAV in each group is the leader of that group.
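A direct data-structure rendering of G = {G_1, . . . , G_M} follows; the UAV class and field names are ours, introduced for illustration.

```python
from dataclasses import dataclass

@dataclass
class UAV:
    group: int          # m
    index: int          # n (index 1 is the group leader)
    alive: bool = True
    cell: tuple = None  # assigned hexagon ma_mn

def build_swarm(hexes_per_area):
    """Construct G = {G_1, ..., G_M}: one group per mission area,
    one UAV per hexagon of that area."""
    return [[UAV(m + 1, n + 1, cell=c) for n, c in enumerate(cells)]
            for m, cells in enumerate(hexes_per_area)]

# Toy example with M = 2 mission areas of 2 and 3 hexagons.
swarm = build_swarm([[(0, 0), (1, 0)], [(5, 2), (5, 3), (6, 2)]])
leaders = [g[0] for g in swarm]   # the first UAV of each group leads it
```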
Destruction

The UAV swarm may be subject to local and random destruction, in which some UAVs are destroyed. The effects of this destruction are used as inputs. Each UAV has two states: normal working and complete failure. When a UAV suffers destruction, it enters the failure state. When a leader UAV is destroyed, a follower UAV in the same group assumes the role of leader of that group. The scope of local destruction is represented by a circle with center coordinates (i_d, j_d) and radius r_d, as illustrated in Figure 2a; the values of (i_d, j_d) and r_d are randomly generated. Random destruction is characterized by a destruction scale, denoted S_rand, which is also generated randomly. When random destruction occurs, S_rand randomly chosen UAVs transition from the normal state to the faulty state, as depicted in Figure 2b.

Reconfiguration

UAV swarm reconfiguration is an autonomous behavior that adapts to changes in the environment so that the task can still be executed. When the swarm is affected by dynamic changes during task execution, the system can use a DRS to achieve global mission performance recovery and reconfiguration, thus ensuring mission continuity. When destruction occurs, the state of the UAV swarm is input into the reconfiguration algorithm, and the resulting strategy is communicated back to each UAV group. In-group reconstruction and inter-group reconstruction are applied to certain UAVs, as shown in Figure 3a. After the reconstruction is completed, all mission areas should be covered by the detection ranges of the UAVs, as shown in Figure 3b.
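The two destruction modes can be sampled as follows; the distributions for the circle center and radius are illustrative assumptions of ours (the case study in Section 4 uses a Poisson law with λ = 1 for the random-destruction scale).

```python
import numpy as np

rng = np.random.default_rng(1)

def apply_destruction(positions, alive, lam=1.0):
    """Sample one local plus one random destruction event (sketch).
    positions: (N, 2) UAV coordinates; alive: boolean mask updated in place."""
    # Local destruction: circle with random center (i_d, j_d) and radius r_d.
    center = positions[rng.integers(len(positions))] + rng.normal(0, 5, 2)
    r_d = rng.uniform(2.0, 12.0)
    alive &= np.linalg.norm(positions - center, axis=1) > r_d
    # Random destruction: S_rand ~ Poisson(lam) surviving UAVs fail independently.
    s_rand = rng.poisson(lam)
    survivors = np.flatnonzero(alive)
    hit = rng.choice(survivors, size=min(s_rand, len(survivors)), replace=False)
    alive[hit] = False
    return alive

pos = rng.uniform(0, 100, size=(42, 2))
alive = np.ones(42, dtype=bool)
apply_destruction(pos, alive)
```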
Objective, Constraints, and Variables

Over a finite time τ_thr, swarm reconfiguration aims to maximize the total coverage area (TCA) ε_tot, which is the mission area detected by the UAVs:

ε_tot(τ) = Σ_{m=1}^{M} Σ_{n=1}^{N_m} ε_mn,

where ε_tot(τ) is the TCA at the current time τ and ε_mn is the detected area of mission area ma_mn; if ma_mn is not covered, ε_mn = 0. The problem is solved at the swarm level. Considering the number of remaining UAVs, the number of UAVs to be repositioned must be less than the number of normal-working UAVs. Furthermore, a minimum area detected by the UAVs in each mission area must be set. The reconfiguration problem is therefore subject to

ε_m ≥ ε_m^min, N_m^move ≤ N_m^normal, d ≥ d_min,

where ε_m is the coverage area of group G_m, ε_m^min is the specified minimum coverage area for group G_m, N_m^move is the number of UAVs in G_m that can be repositioned, N_m^normal is the number of normal-working UAVs in G_m, d is the distance between two normal-working UAVs, and d_min is the minimum allowable distance, i.e., the safety distance between UAVs. The problem considers only UAVs within communication range: if a UAV exceeds the communication distance, it enters the faulty state due to communication failure.

The initial deployment status depends on whether there is a normal-working UAV in each hexagon of each group mission area MA_m. The UAV swarm deployment status can then be represented by an I × J matrix S, whose element s_ij = 1 if there is a normal-working UAV U_mn in hexagon H_ij and s_ij = 0 otherwise.

MADRL-Based DR Method

An MADRL framework was developed to solve the DR problem described in the previous section, as shown in Figure 4. The framework consists of three parts: a reconfiguration decision-making process, agent decision-making, and a neural network. These three parts are described in this section, and the reconfiguration decision-making process is illustrated in Figure 4A.
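Before detailing the decision process, a sketch of the status matrix S and the objective and constraint checks defined above; the per-cell area assumption in total_coverage simplifies ε_mn to a constant for occupied hexagons, which is our simplification.

```python
import numpy as np

def status_matrix(I, J, occupied_cells):
    """Deployment status S: s_ij = 1 iff a normal-working UAV occupies hexagon H_ij."""
    S = np.zeros((I, J), dtype=int)
    for (i, j) in occupied_cells:
        S[i, j] = 1
    return S

def total_coverage(S, cell_area):
    """TCA, assuming each occupied hexagon contributes the same detected area."""
    return S.sum() * cell_area

def feasible(eps_m, eps_min, n_move, n_normal, pair_dists, d_min):
    """Constraint check mirroring the reconfiguration problem's conditions."""
    return (eps_m >= eps_min) and (n_move <= n_normal) \
        and all(d >= d_min for d in pair_dists)

S = status_matrix(6, 7, [(0, 0), (1, 2), (5, 6)])
print(total_coverage(S, cell_area=70.0))
```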
Reconfiguration Decision Process

The group agents choose the DRS for the UAV groups, taking the swarm's status matrix S_t as their main input. This can be expressed as mov^m_t = Agent_m(S_t), where mov^m_t is the movement feature selected by agent m at time step t. A swarm agent then uses a QMIX network to combine the outputs of all group agents and choose the most effective one. This can be expressed as mov_t | [S_t, M_t], where M_t is the movement set of all group agents at time step t, M_{t−1} is the previous swarm movement feature set, consisting of the swarm's recent history of movement features {mov_{t−1}, mov_{t−2}, mov_{t−3}}, and the output mov_t | [S_t, M_t] is the final movement feature chosen for the swarm. The DR process consists of mission and destruction features, DR action generation, and renewal features, described in the following subsections.

Mission and Destruction Features

The destruction is randomly initialized at time t_d, and the status matrix S is then generated; the coverage area at this time is ε_tot(t_d). To reconfigure the swarm and reach the maximum coverage rate, M agents, representing the M UAV groups, execute a sequence of DR actions. The DR action set A_t collects the actions act_t|mn, where act_t|mn is the DR action of UAV U_mn at time step t. A DR action is defined as act_t|mn = (cen(H_ij), cen(H_i′j′)), meaning that the UAV U_mn in hexagon H_ij moves to the target hexagon H_i′j′; here cen(H_ij) denotes the center location of hexagon H_ij, and act_t|mn is generated according to the movement feature mov_t. The DR actions of group m form the corresponding subset of A_t. After a DR action has finished, agent m uses a search algorithm to select the next DR action act_t|mn for UAV U_mn in group G_m, or chooses to finish the reconfiguration process. This is repeated at each time step t. The neural network of agent m (see Section 3.2) outputs Q_m(S_t, mov^m_t), the value of the movement feature mov^m_t at time step t. Each time step corresponds to a realistic period of time whose length is proportional to the distance the UAV moves in that step.

Reconfiguration Action Generation

Once a DR action act_t|mn is complete, the moving UAV is considered to perform the detection mission at its new location, and the status matrix S_t is updated; ε_tot(t) can then be calculated according to (5). The objective of agent m is to achieve the maximum coverage area as efficiently as possible, so the reward includes both the coverage area and the reconfiguration time. All agents use the same reward function. The reward R_t at time step t is built from the TCA gain over the initial value and the reconfiguration time, where τ_{t+ζ} is the reconfiguration time of time step (t + ζ), τ_{t+ζ−1} is that of time step (t + ζ − 1), ε_0 is the initial TCA, δ is the discount factor, and τ_T is the time to finish reconfiguration (TTFR).
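Since the exact formula for R_t is not preserved here, the following sketch implements a reward of the stated shape, namely coverage gain relative to the initial TCA minus a per-step time penalty, with assumed weights of our own.

```python
def reward(eps_tot_t, eps_0, tau_step, time_weight=0.1):
    """Reward of the stated shape: coverage gain over the initial TCA,
    penalized by the duration of this reconfiguration step (weights are ours)."""
    return (eps_tot_t - eps_0) - time_weight * tau_step

def discounted_return(rewards, delta=0.99):
    """Discounted return sum_i delta^i * R_{t+i} with discount factor delta."""
    g = 0.0
    for r in reversed(rewards):
        g = r + delta * g
    return g
```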
For agent m, an optimization algorithm is used to select the best movement feature of UAV group m. At each time step t, the DQN of agent m outputs a movement value Q_m(S_t, mov^m_t), and the agent outputs a movement feature mov^m_t. A QMIX network is then used to select the most effective action from all possible actions. The mixing network has two parts: a parameter-generating network and an inference network. The former receives the global state S_t and generates the neuron weights and biases; the latter receives the value Q_m(S_t, mov^m_t) from each agent and generates the global utility value Q_tot with the help of those weights and biases. The movement utility value Q_tot is used to formulate the final decision for the whole swarm (see Section 3.3), as expressed in (15).

Renewal Features

Once the swarm has finished act_t|mn, the state matrix and feature set [S_t, A_t] is used as the new input to the algorithm, which continues to run and outputs new movement actions or decides to end the reconfiguration process.

Deep Q-Learning for Reconfiguration

The agents use the deep Q-learning algorithm to evaluate movement actions, with the action-value function represented by a deep neural network parameterized by ϑ. The movement feature mov^m_t has the movement value function Q_m(S_t, mov^m_t) = E[Σ_{i=0}^{∞} δ^i R_{t+i}], where Σ_{i=0}^{∞} δ^i R_{t+i} is the discounted return and δ is the discount factor. The transition tuple of each movement action of group agent m is stored as [S, mov^m, R, S′], where S is the state before the movement, mov^m is the selected movement feature, R is the reward for this movement, and S′ is the state after the movement has finished. ϑ is learned by sampling batches of b transitions and minimizing the squared temporal-difference error

L(ϑ) = Σ_{i=1}^{b} (γ^DQN − Q_m(S, mov^m; ϑ))^2,

where γ^DQN = R + δ max_{mov′} Q_m(S′, mov′^m; ϑ^−), ϑ^− denotes the parameters of the target network, which are periodically copied from ϑ and held constant for several iterations, b is the batch size of transitions sampled from the replay buffer, and Q_m(S, mov^m; ϑ) is the utility value of mov^m.

QMIX for Multi-Agent Strategy

The QMIX network is applied to generate the swarm-level DR action. The network represents Q_tot as a monotone function that mixes the individual value functions Q_m(S_t, mov_t) of the agents:

Q_tot(S, mov) = f(Q_1(S, mov^1), . . . , Q_M(S, mov^M)),   (15)

where {Q_m(S, mov^m)}_{m=1}^{M} is the movement value set and Q_tot(S, mov) is the joint movement value of the swarm. The monotonicity of (15) can be enforced by the partial-derivative relation ∂Q_tot/∂Q_m ≥ 0, m = 1, . . . , M. To ensure this relationship, QMIX consists of agent networks, a mixing network, and a set of hypernetworks, as shown in Figure 4C. For each agent m, one agent network represents the individual value function Q_m(S, mov^m). The agent networks are deep recurrent Q-networks (DRQNs). At each time step, the DRQNs receive the status S_t and the last movement as input and output a value function Q_m(S, mov^m) to the mixing network. The mixing network is a feedforward neural network that mixes all Q_m(S, mov^m) monotonically, with nonnegative weights. The weights of the mixing network are generated by separate hypernetworks, each of which generates the weights of one layer from the status S_t. The biases of the mixing network are produced in the same manner but need not be nonnegative; the final bias is produced by a two-layer hypernetwork. The whole QMIX network is trained end-to-end to minimize the loss

L(ϑ) = Σ_{i=1}^{b} (γ^DQN − Q_tot(S, mov; ϑ))^2,

where γ^DQN = R + δ max_{mov′} Q_tot(S′, mov′; ϑ^−), and Q_tot(S, mov; ϑ) is the global utility value of mov.
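A minimal PyTorch rendering of the mixing network and hypernetworks described above follows; the layer sizes are illustrative. Taking the absolute value of the hypernetwork outputs enforces the nonnegative mixing weights and hence the monotonicity condition ∂Q_tot/∂Q_m ≥ 0.

```python
import torch
import torch.nn as nn

class QMIXMixer(nn.Module):
    """Minimal QMIX mixing network (sketch). Hypernetworks map the global
    state to nonnegative mixing weights, enforcing monotonicity of Q_tot."""
    def __init__(self, n_agents, state_dim, embed=32):
        super().__init__()
        self.n_agents, self.embed = n_agents, embed
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed)
        self.hyper_w2 = nn.Linear(state_dim, embed)
        self.hyper_b1 = nn.Linear(state_dim, embed)
        self.hyper_b2 = nn.Sequential(nn.Linear(state_dim, embed), nn.ReLU(),
                                      nn.Linear(embed, 1))  # two-layer final bias

    def forward(self, agent_qs, state):
        # agent_qs: (batch, n_agents); state: (batch, state_dim)
        b = agent_qs.size(0)
        w1 = torch.abs(self.hyper_w1(state)).view(b, self.n_agents, self.embed)
        b1 = self.hyper_b1(state).view(b, 1, self.embed)
        h = torch.relu(torch.bmm(agent_qs.unsqueeze(1), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(b, self.embed, 1)
        b2 = self.hyper_b2(state).view(b, 1, 1)
        return (torch.bmm(h, w2) + b2).squeeze(-1).squeeze(-1)  # Q_tot: (batch,)

mixer = QMIXMixer(n_agents=7, state_dim=64)
q_tot = mixer(torch.rand(8, 7), torch.rand(8, 64))  # -> shape (8,)
```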
Case Study

A case study of UAV swarm reconfiguration was simulated using Python. The numerical simulation is described from the perspective of optimal UAV swarm reconfiguration. The effectiveness of the proposed DR decision-making method is validated using the reconfiguration results under different scenarios. In this section, a fixed-wing UAV swarm is considered, although the proposed method is also applicable to other types of UAV swarms.

Mission

A detection mission containing seven irregular detection areas is randomly generated, as shown in Figure 5. The yellow areas represent the detection areas, and the map is divided into hexagons. The hexagons of the mission areas (colored green) need to cover the detection areas. A UAV swarm with 7 × 6 UAVs is simulated to execute this detection mission; the initial location information of each UAV is presented in Table 1. From the UAV swarm deployment in Figure 5, the initial detection mission state is shown in Figure 6a, where each light-gray circle represents the detection area of one UAV. In this case, all UAVs in the swarm are assumed to have the same detection radius of 3√3 km, and the initial TCA is ε_tot = 770.59 km². Furthermore, the safety distance is assumed to be 0.2 km. The detection radius and safety distance can also be assigned based on the actual regions.

Destruction

The destruction states were randomly generated. Two kinds of destruction, namely local and random destruction, were considered simultaneously. For local destruction, the destruction center is a randomly sampled point on the mission area, and the destruction area is a randomly generated irregular polygon. For random destruction, the number of destroyed UAVs is assumed to follow the Poisson distribution with λ = 1.
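As a concrete illustration, the Python sketch below samples both destruction mechanisms. The disc-shaped local destruction area is a simplification of the paper's irregular polygon, and the identifiers and demo grid are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_random_destruction(uav_ids, lam=1.0):
    # Number of randomly destroyed UAVs follows Poisson(lam), as in the text.
    uav_ids = list(uav_ids)
    k = min(rng.poisson(lam), len(uav_ids))
    picked = rng.choice(len(uav_ids), size=k, replace=False)
    return {uav_ids[i] for i in picked}

def sample_local_destruction(uav_positions, center, radius):
    # Disc-shaped approximation of the irregular local destruction polygon.
    cx, cy = center
    return {uid for uid, (x, y) in uav_positions.items()
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2}

# Hypothetical demo: a 2 x 3 grid of UAVs on a plain coordinate grid (km).
positions = {(m, n): (10.0 * m, 10.0 * n) for m in range(1, 3) for n in range(1, 4)}
destroyed = sample_local_destruction(positions, center=(10.0, 10.0), radius=12.0)
destroyed |= sample_random_destruction(positions, lam=1.0)
```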
For the mission and swarm deployment case in Figure 6a, the generated destruction states are illustrated in Figure 6b and consist of two local destruction areas and a random destruction of three UAVs. The destruction centers of the two local destruction areas are (18, 17.32) and (5, 83.13), and the radii are 11 and 4, respectively. The destroyed UAVs in Figure 6b can be described as {U_{1,1}, U_{1,5}, U_{3,2}, U_{4,3}, U_{5,4}, U_{6,1}, U_{6,2}, U_{6,3}, U_{6,4}, U_{6,5}, U_{6,6}}, covering both local and random destruction. After this destruction process, the current total coverage area is ε_tot = 615.02 km². All of the destruction information is presented in Table 2. For the reconfiguration process, the initial and final time steps are shown in Figure 6c,d, respectively. The UAV colored yellow represents the initial location in this reconfiguration process, while the UAV colored blue represents the final location. The red arrow represents the reconfiguration route from the initial location to the final location; each reconfiguration action along this route is generated by an agent of the proposed multi-agent framework from the movement feature set M according to (9). For the case in Figure 6c,d, the DR action set Φ is listed in Table 3. After this reconfiguration process, the UAV swarm has finished its redeployment. The resulting detection state is shown in Figure 6e. All UAVs in the swarm are assumed to have the same speed of 50 km/h; the speed can also be assigned based on the actual regions. The TCA is considered as a metric of UAV swarm performance. During this reconfiguration process, the UAV swarm performance exhibits a fluctuating upward trend, as shown in Figure 6f. The black dashed line in Figure 6f represents the TCA threshold ε_thr, which is assumed to be 714 km². This TCA threshold can also be assigned on the basis of actual conditions. After the reconfiguration process has finished, the final TCA is ε_tot(τ_T) = 732.31 km².

Discussion

In addressing the UAV swarm reconfiguration problem, the main objective is to generate an optimal feasible strategy. Extended analyses are now presented covering the method's performance and the influence of various factors.
Different Algorithms

This section evaluates the performance of the proposed QMIX method; the DQN method and a cooperative game (CG) method [25] have also been used to generate the UAV swarm DRS for comparison. We used a single machine with one Intel i9-7980XE CPU and four RTX 2080 Ti 11G GPUs to train the QMIX network and the DQN network. During the training process, each episode generated a DRS for the randomly generated mission and destruction, as described in Section 4.1. This section presents the results of the following assessment process: for each training procedure, the training is paused every 100 episodes and the method runs 10 independent episodes with greedy action selection. Figure 7 plots the mean reward across these 10 runs for each method with independent mission and destruction details. As the 10 independent episodes are fixed, the mean reward of the CG method is a constant value; thus, the reward curve of the CG method is a straight line. The shading around each reward curve represents the standard deviation across the 10 runs. Over the training process, 100,000 episodes were executed for each method. The reward curves of the two learning methods fluctuate upward. In the first 17,000 episodes, the DQN method exhibits faster growth than the QMIX method. However, QMIX achieves a higher upper bound of the reward curve after 20,000 episodes. QMIX is noticeably stronger in terms of the final DR decision-making performance. The superior representational capacity of QMIX combined with the state information provides a clear benefit over the DQN method.
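The assessment protocol just described can be summarised by the short sketch below. The two episode functions are hypothetical hooks standing in for the actual training and evaluation routines, which the paper does not list.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_episode():
    """Hypothetical hook: one training episode on a random mission/destruction."""

def evaluate_episode(greedy=True):
    """Hypothetical hook: returns the episode reward under greedy selection."""
    return float(rng.normal())

EVAL_EVERY, EVAL_RUNS, TOTAL_EPISODES = 100, 10, 100_000
history = []  # (episode, mean reward, standard deviation), as plotted in Fig. 7
for episode in range(1, TOTAL_EPISODES + 1):
    train_episode()
    if episode % EVAL_EVERY == 0:
        rewards = [evaluate_episode(greedy=True) for _ in range(EVAL_RUNS)]
        history.append((episode, np.mean(rewards), np.std(rewards)))
```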
Different Destruction Cases

For a given mission and swarm scale, the destruction process was randomly generated. The redeployment results were obtained by executing the QMIX reconfiguration strategy, as shown in Figure 8. The three subgraphs demonstrate the initial deployment status, the destruction status, and the redeployment results of the proposed QMIX algorithm. The geographical distributions of all mission areas and the swarm with 5 × 6 UAVs are the same in the three subgraphs, while the destruction states are different. After the reconfiguration process, the redeployment results in the three subgraphs demonstrate that the proposed QMIX method exhibits stable performance for this reconfiguration decision-making problem with different destruction patterns. This is because, during the training process, UAV destruction is generated randomly.

In addressing the UAV swarm redeployment, the main objective was to obtain an optimal feasible DRS. Extended analyses of the optimization strategy were conducted to determine the influence of the different methods. The QMIX method was proposed for this optimization strategy because of its high efficiency, while the DQN method and the CG method were also used to solve the three destruction cases in Figure 8. The QMIX method gives optimal solutions with better TCA ε_tot(τ_T) and less TTFR τ_T than the other methods, as shown in Figure 9. The proposed method achieves the better solution because the other two methods may lead to local optima, such as a situation in which multiple UAVs have to spend more time moving during the reconfiguration process. The efficiencies of the methods are analyzed in Table 4. According to these results, the solution speeds of QMIX and DQN are close, while the solution speed of the CG method is significantly slower than the other two methods.
Different Swarm Scales

Under different missions and swarm scales, the redeployment results obtained by executing the QMIX reconfiguration strategy are shown in Figure 10. The three subgraphs demonstrate the different deployment missions, the destruction status, and the redeployment results of the proposed method. The geographical distributions of all mission areas were randomly generated in the three subgraphs, and the initial swarm scales were 5 × 6, 7 × 6, and 9 × 6. Then, the destruction states were randomly generated. After the reconfiguration process, the redeployment results in the three subgraphs demonstrate that the proposed QMIX method exhibits stable performance under the different missions and swarm scales. During the training process, the missions and swarm scales were generated randomly. Thus, the superior representational capacity of QMIX combined with the mission-state and swarm-state information provides a clear benefit in terms of reconfiguration decision-making performance.

Again, keeping the same cases as in Figure 10 and using the QMIX method, the DQN method, and the CG method, we also analyzed the differences in algorithm
performance. Under different missions and swarm scales, the QMIX method again gives optimal solutions with better TCA ε_tot(τ_T) and less TTFR τ_T than the other methods, as shown in Figure 11. The efficiencies of the methods are analyzed in Table 5. According to these results, the solution speeds of QMIX and DQN are close for each case, while the solution speed of the CG method is significantly slower than the other two methods. Furthermore, the solution speeds of QMIX and DQN are stable and do not decrease exponentially as the swarm scale increases, whereas the solution speed of the CG method clearly decreases as the swarm scale increases. Thus, these results show that the proposed QMIX method exhibits stable DR decision-making performance for swarms of different scales.

Conclusions

Distributed AI is gradually being applied to multi-UAV systems. This paper has focused on DR decision-making for UAV swarm deployment optimization using a proposed MADRL framework. A two-layered decision-making framework based on MADRL enables UAV swarm redeployment that maximizes swarm performance. Simulations using Python have demonstrated that the proposed QMIX method can generate a globally optimal DRS for UAV swarm redeployment. Furthermore, the results of the case study show that the QMIX method achieves better swarm performance with less reconfiguration time than the other methods and exhibits a stable and efficient solution speed. The DR decision-making problem considered in this paper is one of redeployment decision-making; initial deployment planning was not addressed. Future research should emphasize the integration of UAV swarm initial deployment into the decision-making framework.

Figure 11. Reconfiguration under different swarm sizes.

Table 5. Running time (in seconds) of different methods under different swarm sizes.

Different Swarm Cases      QMIX     DQN      CG
Case (a) in Figure 10      20.542   19.942   44.845
Case (b) in Figure 10      27.031   26.531   70.275
Case (c) in Figure 10      32.816   32.116   120.389

Figure 5. Mission and UAV swarm deployment. Table 1. Location of each UAV. Table 4. Running time (in seconds) of different methods under different destruction cases.
Antireflective grassy surface on glass substrates with self-masked dry etching

Although recently developed bio-inspired nanostructures exhibit superior optical performance, their practical applications are limited due to cost issues. We present highly transparent glasses with grassy surfaces fabricated with a self-masked dry etch process. Nanoclusters generated in situ during a reactive ion etch process with a simple gas mixture (i.e., CF4/O2) enable lithography-free, one-step nanostructure fabrication. The resulting grassy surfaces, composed of tapered subwavelength structures, exhibit antireflective (AR) properties in the 300 to 1,800 nm wavelength range as well as improved hydrophilicity for antifogging. Rigorous coupled-wave analysis calculations provide design guidelines for AR surfaces on glass substrates.

Background

Antireflective (AR) coatings/structures are needed for most existing optical components and optoelectronic devices, ranging from glasses, polymers, and fibers to solar cells, photodetectors, light-emitting diodes, and laser diodes, to remove undesired optical loss and improve optical performance [1][2][3]. For AR properties beyond those of conventional AR coatings (i.e., very low reflection over broad wavelength ranges and large incident angles), subwavelength structures (SWSs) with a tapered profile, inspired by the insect eye, have been developed [4][5][6]. Because SWSs support only the zeroth diffraction order, it is possible to control the effective refractive index by changing the curvature of the SWSs. From the theoretical understanding of SWSs and precise control of their geometries (i.e., period, height, shape, and packing density), improved AR performance for various materials and their device applications has recently been reported [7][8][9]. There are a variety of fabrication processes for AR SWSs, such as electron-beam or laser interference lithography, nanoimprint lithography, nanosphere or colloid formation, metal nanoparticles, and Langmuir-Blodgett assembly [5,6,[8][9][10][11][12][13][14][15]. However, these techniques are still expensive, time-consuming, and sophisticated, which blocks their penetration into the commercial market. In the case of transparent glasses, despite the importance of AR structures for improving optical efficiency, cost issues have hindered the use of AR structures in applications such as photovoltaics and optoelectronics. In this letter, we present a simple, fast, and cost-effective method for fabricating AR grassy surfaces composed of tapered SWSs on glass substrates. A reactive ion etch (RIE) process on glasses with a gas mixture of CF4 and O2 generates nanoclusters that can be used as an etch mask. Control of the etch conditions provides optimal AR performance in the visible wavelength range.

Design and fabrication

According to theoretical analysis, SWSs with a high aspect ratio (i.e., fine period and tall height) and a continuously tapered shape from the air to the substrate show the widest bandwidth and almost omnidirectional AR properties [1]. However, fine tuning of the geometry increases process complexity and costs. It is therefore essential to find the optimal geometry based on theoretical calculation to obtain a reasonable AR performance. Figure 1 shows the color map of the reflectance of SWSs on glass substrates as a function of height (0 to 400 nm) and wavelength (300 to 800 nm), calculated by a rigorous coupled-wave analysis method [16].
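As a rough, self-contained cross-check of this kind of calculation, the following Python sketch replaces full RCWA with a stack-of-thin-slabs effective-medium approximation solved by a standard transfer-matrix method. The geometry follows the model described in the next paragraph (100-nm hexagonal lattice, apex diameter 50% of the base, ~150-nm height); the fixed glass index and the permittivity volume-averaging rule are simplifying assumptions, not the paper's method.

```python
import numpy as np

N_AIR, N_GLASS = 1.0, 1.47     # approx. BoroFloat 33 index (dispersion ignored)
HEIGHT_NM, LAYERS = 150.0, 30  # ~150-nm-tall SWSs sliced into thin slabs
BASE_D, APEX_D, PERIOD = 100.0, 50.0, 100.0  # nm, hexagonal lattice

def effective_index(z):
    """Volume-averaged permittivity at relative depth z (0 = apex, 1 = base)."""
    d = APEX_D + (BASE_D - APEX_D) * z
    fill = min((np.pi / (2 * np.sqrt(3))) * (d / PERIOD) ** 2, 1.0)
    return np.sqrt(fill * N_GLASS ** 2 + (1 - fill) * N_AIR ** 2)

def reflectance(wl_nm):
    """Normal-incidence reflectance of one structured surface (TMM)."""
    dz = HEIGHT_NM / LAYERS
    M = np.eye(2, dtype=complex)
    for i in range(LAYERS):
        n = effective_index((i + 0.5) / LAYERS)
        delta = 2 * np.pi * n * dz / wl_nm
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    num = N_AIR * (M[0, 0] + M[0, 1] * N_GLASS) - (M[1, 0] + M[1, 1] * N_GLASS)
    den = N_AIR * (M[0, 0] + M[0, 1] * N_GLASS) + (M[1, 0] + M[1, 1] * N_GLASS)
    return abs(num / den) ** 2

for wl in (400, 550, 700):
    print(f"{wl} nm: R = {100 * reflectance(wl):.2f}% (one surface)")
```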
A model was designed on a hexagonal lattice with a 100-nm period, which is small enough to satisfy the zeroth-order condition (Λ << λ). The dispersion of the glass material (BoroFloat 33, Schott, Louisville, KY, USA) was taken into account in this calculation. The apex diameter was set to 50% of the base diameter. The flat surface (height = 0 nm) of the glass substrate shows a reflectance of approximately 4%, as expected. This reflectance rapidly drops to 1% as the height increases from 0 to 150 nm; such a short structure suffices only because the refractive index difference between glass and air is small. For semiconductor materials such as silicon and GaAs, the height should be at least >300 nm to obtain broadband antireflection characteristics. In this study, SWSs with a height of approximately 150 nm were selected as the target to maintain a low surface reflection.

Uniform and high-density grassy surfaces were prepared by plasma etching in an RIE system with a gas mixture of CF4 (40 sccm) and O2 (10 sccm), as illustrated in Figure 2. First, borosilicate glass substrates (2 × 2 cm²), which are commonly used as optic components in various fields, were cleaned with acetone, isopropyl alcohol, and deionized (DI) water and loaded into the chamber. Afterward, the simultaneous process of self-masking and reactive etching of the unmasked area for grassy surface formation was done in one step. The process pressure was 50 mTorr, and the RF power was varied from 50 to 150 W. The fabricated samples were cleaned with DI water and analyzed using a field-emission scanning electron microscope (FE-SEM, S-4700, Hitachi, Ltd., Tokyo, Japan). The transmittance spectra of the samples were measured with a UV-Vis-NIR spectrophotometer (Cary 500, Varian, Inc., Palo Alto, CA, USA) in the wavelength range of 300 to 1,800 nm. The etched surfaces are shown in Figure 3. Grassy etched surfaces observed at low bias powers (up to 100 W) indicate the existence of nanoscale masks, while a smoother surface was obtained at a higher bias power of 150 W. This tendency can be found in other literature [17]. It is believed that during RIE etching at low RF power, nonvolatile nanoscale clusters are formed from the reaction of the glass and the reactive ions, and these clusters are uniformly distributed over the entire surface. Meanwhile, the CF4 and O2 plasma is responsible for etching the exposed surface. At 50 W RF power, the resulting grassy surface has tapered SWSs with a diameter of approximately 100 nm.

Results and discussion

The SEM images in Figure 4 show that grassy surfaces were successfully fabricated using the self-masked etch process with an RF power of 50 W. The resulting surfaces are uniform, and the average distance between neighboring SWSs is sufficiently short to satisfy the zeroth-order condition. As the etching time increases, the height of the SWSs increases vertically, whereas the density of the SWSs decreases because adjacent structures clump together. This tendency is directly related to the optical behavior. Figure 5A presents the transmittance curves of glasses with flat and grassy surfaces on both sides in the wavelength range of 300 to 1,800 nm. The glass with a flat surface has a transmittance of approximately 93%, which increases monotonically due to material dispersion. The grassy surface with a 1-min etch time has a curve very similar to that of the flat surface because the height of the grasses is very short. However, AR effects can be found in all the other grassy surfaces (with 4-, 7-, and 10-min etch times).
After a 7-min etch, the resulting grassy structure has heights of approximately 150 to 200 nm, as shown in the inset of Figure 5A. The average transmittance of glass with grassy surfaces on both sides for the 7-min etch time is 96.89% in the visible spectrum (390 to 700 nm), which is 4.15% higher than that of the flat surfaces (92.74%). In particular, this high transmittance is sustained over the UV-Vis-NIR range (i.e., T_ave at 300-1,800 nm = 96.64%). These broadband AR characteristics suggest the possibility of using this AR glass as a substrate or a cover glass for photovoltaic applications. In the case of the glass with a 10-min etch, the antireflective property seems to improve from 600 to 900 nm while the broadband AR property is degraded. One possible cause of this detrimental change is the reduced density of the grassy nanostructures compared to that of the glass with a 7-min etch. More systematic characterization and analysis are needed to determine the effect of the size, density, and shape of the randomly distributed nanostructures on the optical properties. The reflectance difference between the glasses with flat and grassy surfaces is revealed visually in Figure 5B. An intense light reflection from the flat glass is observed; as a result, reflections occurring at both sides of the glass make the words difficult to read. The grassy surface shows improved readability due to the reduced reflection. In addition to the AR property, the wetting property is also affected by both the structured surface [18] and the oxygen plasma treatment. To confirm the antifogging performance, the SWS-integrated glass and the bare glass were exposed to steam at the same time. Figure 5C shows the antifogging behavior of the glasses with flat and grassy surfaces. Water droplets beaded up on the flat surface of the bare glass substrate, and the bead-like water droplets caused light scattering, which degrades the readability of the words. However, the water droplets on the roughened surface of the SWS-integrated glass spread evenly over the whole surface; the hydrophilic glass remained transparent, and the words below it were clearly readable. Water contact angle measurements also support this hydrophilic effect. The contact angles of the glass with and without the grassy surface were 12.5° and 71.5°, respectively. The surface energy of the structured glass was 87.8 mN/m, which is higher than that of the bare glass (39.0 mN/m).

Conclusions

In summary, we demonstrated subwavelength-scale grassy surfaces on glass substrates using a simple one-step dry etch process without any lithography. The resulting grassy surfaces showed very high transmittance over very wide spectral ranges as well as antifogging effects. Optimization of the self-masked dry etching to further improve the optical/material properties remains as future work. We expect that these low-cost, high-performance optical materials are applicable in various optical and optoelectronic devices.
Corneal Sublayers Thickness Estimation Obtained by High-Resolution FD-OCT

This paper presents a novel processing technique which can be applied to corneal in vivo images obtained with optical coherence tomograms across the central meridian of the cornea. The method allows the thickness of the corneal sublayers (Epithelium, Bowman's layer, Stroma, Endothelium, and whole corneal thickness) to be estimated at any location, including the center and the midperiphery, on both the nasal and temporal sides. The analysis is carried out on both the pixel and subpixel scales to reduce the uncertainty in the thickness estimations. This technique allows quick and noninvasive assessment of patients. As an example of application and validation, we present the results obtained from the analysis of 52 healthy subjects, each with 3 scans per eye, for a total of more than 300 images. Particular attention has been paid to the statistical interpretation of the obtained results to find a representative assessment of each sublayer's thickness.

Introduction

Optical coherence tomography (OCT), based on low-coherence interferometry, is a well-established imaging technique thanks to its prominent axial resolution. Since 2006, commercially available OCT systems have performed visualization of tissue microstructure in the so-called Fourier domain (FD-OCT). In contrast to time-domain OCT (TD-OCT), the whole depth structure is obtained synchronously, providing higher-resolution imaging with faster acquisition times. FD-OCT can be used to provide in vivo cross-sectional imaging of the eye in a noninvasive and noncontact way [1]. To date, this technique has mostly been applied to capture retinal structure and the optic nerve, displaying and localizing discrete morphological changes in detail [2,3]. In this paper we use an FD-OCT to study the anterior segment of the eye, since this acquisition system can produce cross-sectional images of the cornea which can be properly processed to analyze the corneal sublayers: Epithelium, Bowman's layer, Stroma, Descemet's membrane, and Endothelium [4][5][6][7]. The precise measurement of these sublayers' thickness is very important in ophthalmics and optometrics, for therapeutic treatments, refractive surgery, and contact lens applications. Many works have presented corneal sublayer thickness estimation with techniques other than OCT. All these approaches have drawbacks and/or introduce some restrictions. Confocal microscopy [8] is an invasive technique that can cause lesions of corneal tissues, electron microscopy [8,9] deals only with histopathologic samples, and ultrasonic pachymetry [10] requires the instillation of a topical anaesthetic and well-trained operators. On the contrary, OCT has the advantage of allowing quick, noninvasive, and completely safe assessment of patients [2,3]. Unfortunately, traditional image processing techniques, such as the Sobel or Canny algorithms [11], fail at boundary localization in OCT cross-sections because, in general, these images present a low signal-to-noise ratio (SNR) and low contrast between the boundary and the internal corneal regions [5][6][7]. In this work, a novel technique based on an automated edge detection method is presented. This procedure, based on SNR enhancement and corneal sublayer segmentation, allows the sublayer thickness information to be accurately extracted from FD-OCT images.

FD-OCT Corneal In vivo Images and Sublayer Thickness Estimation Problem. An FD-OCT corneal in vivo image appears as in Figure 1.
Different reflectivity profile boundaries identify different corneal sublayers, as reported in histological examinations [5]. Since each image is available in grayscale format and is loaded as a matrix of pixels [5,6], the pixel intensity can be treated as the third coordinate. This reflectivity profile information can be considered as the amplitude of the signals to be constructed and analyzed; see Figure 1. In more detail, the region of interest (ROI) highlighted in Figure 1 is shown in Figure 2(a). As a reference, in Figure 2(b), a human corneal histological sample is presented and graphically compared with Figure 2(a). (The two pictures do not come from the same subject.) Human corneas, like those of other primates, are composed of five sublayers: the Epithelium is a layer of cells that covers the surface of the cornea; Bowman's layer protects the Stroma from injury; the Stroma is the thickest layer (90% of the corneal thickness), transparent and made of collagen fibrils; Descemet's membrane is the thinnest layer, only one cell thick (too thin to be detected [5]; it is not estimated in our study either); the Endothelium is a low cuboidal monolayer of mitochondria-rich cells. Knowledge of their thicknesses is of significant importance for ophthalmic and optometric examinations and treatments. To validate the proposed analysis protocol, a sample of 52 healthy patients has been considered: 25 females and 27 males, mean ± standard deviation age: 34 ± 11 years, range: 25 to 74 years. None of the subjects presented any ocular disease or any history of ocular surgery, and all were analyzed with the FD-OCT system described in Section 2.2. This group of patients can be defined as normal patients. In this paper, only the analyses of the right eyes are reported, since no significant difference occurred in the comparison between right and left eyes, nor between the male and female subgroups (for the statistical validation see Section 4). The considered problem consists of estimating the thickness of each sublayer starting from images like the one depicted in Figure 1. Unfortunately, these images are characterized by low SNR values, which prevents the application of classical techniques. Even though the algorithm is presented in general form and can be applied to any FD-OCT image, we provide here all the details of the experimental setup to allow reproducibility of the results and a better understanding of the analysis scenario.

Experimental Setup and Acquisition Procedure. An FD-OCT RTVue-100 Optovue device [12] was used. The reflectivity profile (A-scan) information was acquired simultaneously by a CCD camera. Thanks to the fast CCD camera line transfer rate and the fast Fourier transform algorithm, this FD-OCT could perform 26,000 A-scans/second. Each tomogram was the average of 16 images. The device's superluminescent diode (SLD) worked at a wavelength of 840 ± 10 nm. It was connected to a telecentric light delivery system and mounted on a standard slit lamp. This wavelength was adopted since it allowed a higher SNR than older OCT devices [13]. Furthermore, it had already been chosen for retinal imaging [14] and to obtain higher axial resolutions with the same bandwidth. Corneal imaging was performed with an auxiliary lens (CAM-L), helpful for magnifying corneal structures. The working distance between the patients and the OCT device was 22 mm. Subjects were asked to put their chin on the slit lamp and to watch the target in the central point of the OCT probe.
The exposure power at the pupil was 750 µW. This low value guaranteed no damage to the analyzed eyes, being below the maximum permissible exposure dictated by the American National Standards Institute (ANSI) at this wavelength [15]. The axial calibration of the OCT was performed using a set of polymethylmethacrylate (PMMA) lenses of known thickness (546 ± 1 µm) and constant index of refraction (1.4838 at 840 nm) [16,17]. The PMMA lenses were measured using a Mitutoyo micrometer [18,19]. The declared FD-OCT resolutions were 5.0 µm in depth (axial direction) and 15 µm in the transverse direction [12]. The investigated corneal area was 6 mm × 4 mm, corresponding to a matrix of 1016 × 640 pixels. Simply dividing these correlated values gives an axial resolution of 6.25 µm and a transverse one of 5.91 µm. To find a more reliable pixel-µm conversion factor, a calibration procedure was applied. By examining 10 OCT images of a set of PMMA contact lenses with known thickness and index of refraction, the axial pixel-µm conversion factors were found to be: 1 pixel = 4.13 µm, 1 subpixel = 0.52 µm [16][17][18] (for the chosen pixel-subpixel ratio see Section 4). These were the mean values of the conversion factors obtained from the analysis carried out on the complete set of lenses. As a simplification, and to avoid error propagation, neither the deterministic nor the statistical errors made in the PMMA lens thickness estimation and in their OCT acquisitions were considered. Unfortunately, OCT acquisitions of the PMMA lenses could not be used to estimate the transverse resolution; therefore, 5.91 µm/pixel was assumed. The difference between the chosen axial and transverse resolutions is close to the one presented in [6]. The OCT was connected to a computer to visualize and store the corneal images. In a second analysis, the acquired tomograms were processed to extract the features of interest. The average duration of the medical analysis was 10 minutes per patient, whereas the digital processing required only a few seconds.

The Algorithm for Estimating the Sublayer Thickness: Estimation Problem

In an OCT corneal tomogram, the cornea and its internal sublayers are represented by different grayscale regions, since each corneal tissue presents a different reflectivity; see Figures 2(a) and 2(b). In particular, the boundary between two consecutive sublayers presents a constant reflectivity profile [5,6]. By enhancing the SNR of each analyzed region, our algorithm quickly detects these boundaries (edges) and estimates the sublayer thicknesses by evaluating the distance between consecutive pairs of edges. As a first step, we need to identify on the tomogram the dimension and direction of the ROI to be analyzed, as in Figure 2(a). Due to the natural shape (curvature) of the cornea, particular attention must be paid to the chosen region. Taking, for instance, the central region of the cornea into account and working symmetrically about the apex of every meridian, it is mandatory not to consider pixels from different sublayers on the same row; see Figure 3. This kind of problem could arise if the considered region is too wide. The procedure for determining the maximum ROI dimension, denoted as the 2lag_MAX value, is depicted in Figure 3. As a result, this region can be assumed straight, or affected by negligible corneal curvature, with every sublayer represented on the same pixel row.
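Although the text defines 2lag_MAX only graphically (Figure 3), a standard sagitta bound reproduces its order of magnitude: the curvature-induced depth displacement across a half-width w is approximately w²/(2R), and requiring it to stay below one axial pixel gives the maximum window. The sketch below uses a typical anterior corneal radius, which is an assumption, not a value from the paper.

```python
import math

R_UM = 7800.0               # typical anterior corneal radius (assumption)
AXIAL_UM_PER_PIXEL = 4.13   # calibrated axial scale from Section 2.2

# sag ~= w^2 / (2R); keep the sag across the half-width below one axial
# pixel so that every sublayer lies on a single pixel row.
half_width_um = math.sqrt(2 * R_UM * AXIAL_UM_PER_PIXEL)
print(round(2 * half_width_um))  # ~508 um, same order as the ~532 um below
```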
In our case study, the maximum ROI dimension chosen according to this rule has been found, on average, to be 2lag_MAX = 90 pixels (∼532 µm; for the transverse pixel-µm conversion see Section 2.2). As a second step, the ROI area is divided into three slices, one next to the other; see Figure 4. Each slice is composed of the same number of pixel columns (25-30 in our case study), depending on the 2lag_MAX value chosen in the previous step. These three slices, being adjacent, are considered not to be affected by significant differences in thickness. Note that this subdivision into three slices is also essential for the statistical validation (Section 4) of the presented approach. With reference to the histological model (Figure 2(b)), corneal sublayers can be identified from the different reflectivity profiles of the anatomical boundaries. Suppose that a slice is composed of one (central) pixel column. Different reflectivities are mapped to proportional pixel grayscale intensities, valued between 0 and 255 [5,6]. If the intensity depth profile is plotted as a function of the pixel rows, the search for minima and maxima leads to the localization of the beginning of the corneal sublayers. In order to reduce noise (flicker, speckle, etc.), the pixel intensity reflectivity profile is linearly averaged over the number of columns composing the slice (25-30 in our case study). We will refer to this procedure as the averaging technique. This procedure is justified by the evidence that, inside each ROI slice, pixel rows of the same region show a Gaussian reflectivity profile distribution (Pearson's chi-square test [20], α = 0.05). The same sublayer boundary shows nearly the same reflectivity index and is represented by a continuous line in an OCT tomogram (see Figures 2(a) and 2(b)). Conversely, regions inside two consecutive boundaries present nonconstant reflectivity values (due to the anatomy of the analyzed tissues [5]). In the current analysis, this behavior can be considered as additive noise that makes it harder to find the peak values that delimit the sublayer regions. Averaging the pixel intensity values over nearby columns preserves the boundary pixel values and reduces the uncertainty introduced by pixels coming from the inner regions. The robustness of this procedure can be estimated by comparing the SNR evaluated on a single column with the one calculated on the averaged columns (see Section 4). Figures 5 and 6 show the results of this procedure; for the chosen pixel-subpixel ratio see the following section. The Epithelium, Bowman's layer, Stroma, and Endothelium sublayers can be identified. The global maximum corresponds to the anterior surface of the Epithelium; the global minimum is the end of the Epithelium. Bowman's layer starts immediately after the end of the Epithelium and ends at the following (second) global minimum. The Stroma starts after this second minimum and ends at the last maximum on the right side of the pixel intensity profile. The Endothelium starts after this last maximum, and its end is assumed (as an approximation) to be where the signal goes below two RMS of the pixel intensity values coming from the region to the right outside the cornea (noise). Descemet's membrane is too thin to be detected, being, if present, nearly one pixel wide or less [21]. A customized MATLAB [22] program has been developed to automatically process all the images and accurately segment the inner corneal sublayers. All analyses have been carried out on a MATLAB platform.
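The paper's implementation is a customized MATLAB program; as an illustration only, the Python/SciPy sketch below reproduces the column-averaging and peak-search logic just described. The peak-prominence threshold is an assumed tuning value.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_boundaries(roi):
    """roi: 2-D array (rows = depth samples, columns = adjacent A-scans of
    one slice). Returns the row indices of the four main boundaries."""
    profile = roi.astype(float).mean(axis=1)      # averaging technique
    maxima, _ = find_peaks(profile, prominence=5)
    minima, _ = find_peaks(-profile, prominence=5)
    front = maxima[np.argmax(profile[maxima])]    # global max: anterior Epithelium
    epi_end = minima[minima > front][0]           # 1st minimum: end of Epithelium
    bowman_end = minima[minima > epi_end][0]      # 2nd minimum: end of Bowman's
    stroma_end = maxima[maxima > bowman_end][-1]  # last maximum: end of Stroma
    return front, epi_end, bowman_end, stroma_end
```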
3.1. Subpixel Procedure. The approach described in the previous section has been carried out on the three slices (see Figure 2(b)) of the same ROI to evaluate the uncertainties in the sublayer thickness estimations by means of weighted averages. It is worth noting that, when using the automated procedure on the pixel scale, the original image is not altered. As a further step, a subpixeling technique has been applied to reduce the uncertainty in the thickness estimations expressed on the pixel scale. It was obtained with a bicubic interpolation [22,23] along both image directions and is not a super-resolution technique. It simply helped, in the statistical analysis, to distinguish cases that presented the same thickness value estimated on the pixel scale for two, or all three, slices of the same ROI. Interpolating does not increase the axial/transverse resolution but only the digital resolution of the image. The chosen linear ratio between pixel and subpixel was set to one to eight. For ease of elaboration it had to be a power of two [22], and eight was the first value that greatly reduced the aforementioned cases presenting the same estimated thickness. As a result, the subpixel intensity profile is smoothed compared with the pixel-averaged sample, as shown in Figure 5.

Application to Different Regions. By interpolating the air-Epithelium boundary and working orthogonally to this sublayer, the procedure described in Section 3 can also be utilized to study the midperipheral corneal regions, on both the nasal and temporal sides; see Figure 7. For the same reason discussed previously, in order not to consider pixels coming from different sublayers on the same analyzed line (orthogonal to the investigated axis, blue solid segment in Figure 7), the width of the analyzed marginal ROI must be properly chosen (about 85-90 pixels in our case study), in accordance with the estimated curvature of the considered tomogram. With this approach, a more detailed investigation of the corneal sublayer thickness behavior can be obtained. In the literature, polynomial approximations of the corneal sublayers' two-dimensional profiles are widely assumed [6,24]. In this work, however, a circumference has been preferred, since this curve provides not only good agreement with the analyzed data (Pearson's chi-square goodness-of-fit test [20], α = 0.05), but also a reference (its center) from which to measure the midperipheral nasal and temporal angles at which the aforementioned method can be applied; see Figure 7. In principle, every patient's tomogram could be analyzed in every marginal region of the corneal image. In practice, the SNR of the considered subregion limits the application of our procedure. For example, in Figure 7 the tomogram has been investigated at 23 degrees on both the nasal and temporal sides. This chosen angle represents the highest value at which the averaging technique returns signals with an SNR high enough to be efficiently analyzed. The obtained results are reported and compared in the following section.
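The circumference fit just mentioned can be done with a simple algebraic least-squares (Kasa) fit; the sketch below is an illustration of that technique, not necessarily the fitting code used in the paper, and the demo points are hypothetical.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit: returns centre and radius.
    Model: x^2 + y^2 = c*x + d*y + e."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    c, d, e = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = c / 2.0, d / 2.0
    return cx, cy, np.sqrt(e + cx ** 2 + cy ** 2)

# Hypothetical demo: noisy points on an arc of radius 7.8 (arbitrary units).
t = np.linspace(-0.4, 0.4, 50)
x = 7.8 * np.sin(t) + np.random.default_rng(1).normal(0, 0.01, 50)
y = 7.8 * (1 - np.cos(t))
print(fit_circle(x, y))  # centre near (0, 7.8), radius near 7.8
```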
Study Results

To validate our technique, the procedure described in the previous section has been applied to the sample of 52 healthy patients reported in Section 2.1.

Table 1: Midperipheral nasal, central, and midperipheral temporal corneal sublayer thickness estimates, obtained by averaging the respective evaluations performed on 156 images (right eye). The total corneal thickness (TCT) value is calculated as the sum of all sublayers.

A statistical analysis of the proposed procedure is mandatory to validate our methodology and to build a reference of sublayer thickness values for normal patients. For every subject, three images of each eye have been recorded over a corneal area of 6 mm × 4 mm (1016 × 640 pixels), for a total of 312 images; one of them is shown in Figure 1. Even though FD-OCT devices present very good repeatability and accuracy [25], to further validate the reported measurements, the estimated sublayer thicknesses (on the pixel scale) were compared, for a fixed region (nasal, central, or temporal), between the first and the third acquisition of the same set of patients for the right eye. No significant differences were identified (paired-samples t-test [20], α = 0.05). The complete set of images has been processed with the customized algorithm introduced in Section 3. At the apex and on the midperipheral nasal and temporal sides of the horizontal meridian, the resulting thicknesses of all sublayers and of the total cornea are reported in Table 1. It is worth recalling that the small uncertainties are due to the weighted-average procedure. The sublayer estimates of the central corneal thicknesses are all in accordance with the results presented in [5] except for the Epithelium, where the difference of nearly 10 µm is clearly significant when compared with the standard deviations (unpaired-samples t-test [20], α = 0.05). A possible explanation of this discrepancy can be found in the way the Epithelium starting point was defined in Section 3. It was assumed to be the first pixel after the global reflectivity maximum. This highest value is due to the presence of tears and can be a plateau two or three pixels wide (corresponding to 10-12 µm). The midperipheral corneal thicknesses cannot be strictly compared with the results proposed in [5], since they refer to regions that differ by 3 degrees (angle separation). However, they remain statistically compatible (unpaired-samples t-test [20], α = 0.05), and the difference between the Bowman's layer estimates is due to the same reason explained for the central thickness analysis. The increase of the total corneal thickness toward the periphery is evident, as also reported in [6]. Furthermore, the contribution to the TCT increment is due to the Epithelium and the Stroma, while Bowman's layer remains nearly constant, in accordance with [5]. As introduced in Section 2.1, this paper reports the results for the right eye only (Table 1), since no significant difference occurred in the comparison between right and left eyes, nor between the male and female subgroups (respectively paired- and unpaired-samples t-test [20], α = 0.05). These statistical results are in accordance with the previously reported study [5]. A useful feature in the processing of an OCT corneal image is the SNR improvement obtained using the previously discussed averaging technique. The noise component was studied in the Stromal region only, since it is the thickest of all the corneal sublayers, with an average estimated thickness of the order of 100 pixels at the corneal apex. Carrying out this analysis on the central ROI (Figure 7) on the pixel scale, and evaluating the SNR on the subset of images coming from the third acquisition of the right eye for all 52 patients, the SNR distributions of the central pixel column and of the averaged columns composing the central slice were compared in every image. In both cases the values were Gaussian distributed (Pearson's chi-square test [20], α = 0.05).
The central column presented a mean SNR of 6.7 dB with an RMS of 0.8 dB, while the averaged signal showed a mean of 13.9 dB with an RMS of 1.7 dB. Taking into account the mean values of these two distributions, the improvement obtained with the averaging technique was 7.2 dB. This rise allowed accurate and robust detection of the sublayer boundaries and the consequent thickness estimations. Applying the same SNR estimation procedure to the midperipheral corneal regions (Figure 7) of the same subset of images, the SNR values remained Gaussian distributed (Pearson's chi-square test [20], α = 0.05). However, the central midperipheral columns presented a mean SNR reduced to 3.7 dB with an RMS of 1.2 dB, while the signals averaged over the peripheral ROIs showed a mean of 9.4 dB with an RMS of 2.0 dB. Considering again only the mean values, the improvement obtained with the averaging technique was 5.7 dB. This rise confirmed the validity of the averaging procedure, but the absolute SNR of the processed signals was at the lower limit for accurate and robust sublayer boundary detection. This is why more peripheral corneal regions cannot be processed with this approach on the considered tomograms. Note that these SNR evaluations on the central and peripheral ROIs come from images at pixel resolution, since no significant difference was found when carrying out this analysis on the subpixel scale (paired-samples t-test [20], α = 0.05).

Discussion

The number of patients considered in this work was more than double that of the studies presented in [5,6,21,24]. All subjects gave informed consent to the collection and use of the recorded data and were treated according to the tenets of the Declaration of Helsinki. Whereas the algorithm itself and the estimated results can be considered validated, the case study and its medical reliability present some minor limitations. Firstly, the database was composed of only 52 patients, which may be the reason why no significant age or gender differences were identified (a statistical power limitation). Secondly, the device returned a few cases of excessively dark OCT corneal images whose SNR was too low to allow any processing, especially on the midperipheral sides; in such cases new images of the same patients were acquired. Thirdly, Descemet's membrane was not distinguishable, as in the work reported in [5]. This sublayer, if present, could alter the Stroma thickness estimates by 10-15 µm for people without any ocular disease [21]. Fourthly, a precise estimate of the OCT transverse resolution was not obtained; however, it can be determined using, for example, special USAF targets [26]. To quantify the influence of this uncertainty at the considered midperipheral angle, with the procedure described in Section 3.2, a variation of ±0.5 µm in the transverse resolution leads to a variation of ±2.2% in the midperipheral results reported in Table 1. Notwithstanding that, they still remain in accordance with [5]. Finally, because of this technology, the ROI regions were assumed straight or affected by negligible curvature.

Conclusions

A robust technique for estimating corneal sublayer thicknesses starting from low-SNR FD-OCT images has been presented and validated by statistical analysis. The introduced procedure allowed a significant SNR enhancement and an analysis over a wide region of the considered OCT tomograms.
The method has been utilized for the study of sublayer thickness estimations on a sample of 52 healthy patients without any ocular disease, on both the central and peripheral regions of the horizontal meridian of the cornea, using more than 300 FD-OCT corneal images. From this analysis, the average values for all sublayers in three different corneal regions have been provided and, for a fixed corneal area, no significant difference between right and left eyes or between the male and female subgroups occurred. In addition, an averaging technique has been introduced to construct reference signals to be used for the thickness estimations; the resulting improvement in SNR has been quantified and discussed. The method is very useful for providing a fast, simple, yet robust and noninvasive estimation of the sublayer thickness. Its advantages and limitations have been discussed in the paper. As future work, a USAF target can be imaged to estimate the OCT transverse resolution. Furthermore, the processing of the corneal marginal regions can be applied to medical cases in order to quantify the effective change in the corneal sublayers produced by ophthalmological treatments. Finally, further development is in progress for studying thickness estimation for a special class of customized contact lenses.
A Comparison of Word Similarity Performance Using Explanatory and Non-explanatory Texts

Vectorial representations of words derived from large current-events datasets have been shown to perform well on word similarity tasks. This paper shows that vectorial representations derived from substantially smaller explanatory text datasets such as English Wikipedia and Simple English Wikipedia preserve enough lexical semantic information to make these kinds of category judgments with equal or better accuracy.

Introduction

Vectorial representations derived from large current-events datasets such as Google News have been shown to perform well on word similarity tasks (Mikolov, 2013; Levy & Goldberg, 2014). This paper shows that vectorial representations derived from substantially smaller explanatory text datasets such as English Wikipedia and Simple English Wikipedia preserve enough lexical semantic information to make these kinds of category judgments with equal or better accuracy. Analysis shows these results may be driven by a prevalence of commonsense facts in explanatory text. These positive results for relatively small datasets suggest that vectors derived from slower but more accurate analyses of these resources may be practical for lexical semantic applications.

Wikipedia

Wikipedia is a free Internet encyclopedia website and the largest general reference work on the Internet. As of December 2014, Wikipedia contained over 4.6 million articles and 1.6 billion words. Wikipedia as a corpus has been heavily used to train various NLP models. Features of Wikipedia are well exploited in research areas like the semantic web (Lehmann et al, 2014) and topic modeling (Dumais, 1988; Gabrilovich, 2007), but more importantly Wikipedia has been a reliable source for word embedding training because of its sheer size and coverage (Qiu, 2014), as recent word embedding models (Mikolov et al, 2013; Pennington et al, 2014) all use Wikipedia as an important corpus to build and evaluate their algorithms for word embedding creation.

Simple English Wikipedia

Simple English Wikipedia is a Wikipedia database in which all articles are written using simple English words and grammar. It was created to help adults and children who are learning English to look up encyclopedic information. Compared with full English Wikipedia, Simple English Wikipedia is much smaller. It contains around 120,000 articles and 20 million words, which is almost one fortieth the number of articles and one eightieth the number of words of full English Wikipedia, so the average article length is also shorter. Simple English Wikipedia is often used in simplification research (Coster, 2011; Napoles, 2010), where sentences from full English Wikipedia are matched to sentences from Simple English Wikipedia to explore techniques for simplifying sentences. It would be reasonable to expect that the small vocabulary size of Simple English Wikipedia may be disadvantageous when trying to create word embeddings using co-occurrence information. However, it may also be true that, despite its much smaller vocabulary and overall size, the explanatory nature of its text allows Simple English Wikipedia to preserve enough information for models trained on it to be comparable to models trained on full Wikipedia, and to perform equally well or better than models trained on non-explanatory texts like the Google News corpus.
Word2Vec The distributed representation of words, or word embeddings, has gained significant attention in the research community, and one of the most discussed works is Mikolov's (2013) word representation estimation research. Mikolov proposed two neural network based models for word representation: Continuous Bag-of-Words (CBOW) and Skip-gram. CBOW takes advantage of the context words surrounding a given word to predict it, summing all the context word vectors together to represent the word; whereas Skip-gram uses the word to predict the context word vectors at each position in the window, making the model sensitive to the positions of context words. Both models scale well to large quantities of training data; however, it is noted by Mikolov that Skip-gram works well with small amounts of training data and provides good representations for rare words, while CBOW performs better and achieves higher accuracy for frequent words when trained on larger corpora. The purpose of this paper is not to compare the models, but to use them to compare training corpora and see how different arrangements of information may impact the quality of the word embeddings. Task Description To evaluate the effectiveness of full English Wikipedia and Simple English Wikipedia as training corpora for word embeddings, the word similarity-relatedness task described by Levy & Goldberg (2014) is used. As pointed out by Agirre et al. (2009) and Levy & Goldberg (2014), relatedness may actually be measuring topical similarity and be better predicted by a bag-of-words model, while similarity may be measuring functional or syntactic similarity and be better predicted by a context-window model. However, when the models are held constant, the semantic information about the test words in the training corpora is crucial to allowing the model to build semantic representations for the words. It may be argued that when the corpus is explanatory, more semantic information about the target words is present; whereas when the corpus is non-explanatory, information around the words is merely related to the words. The WordSim353 (Agirre, 2009) dataset is used as the test dataset. This dataset contains pairs of words that were judged by human annotators to be either similar or related, and a similarity or relatedness gold standard score is also given to every pair of words. There are 100 similar word pairs, 149 related pairs and 104 pairs of words with very weak or no relation. In the evaluation task, the unrelated word pairs are discarded from the dataset. The objective of the task is to rank the similar word pairs higher than the related ones. The retrieval/ranking procedure is as follows. First, the cosine similarity scores are calculated using word embeddings from a given model; then the scores are sorted from highest to lowest. The retrieval step locates, in the sorted list, the last pair among the first n% of the similar word pairs and determines the percentage of similar word pairs in the sub-list ending at that pair. In other words, the procedure treats similar word pairs as successful retrievals and determines the accuracy rate when the recall rate is n%. Because the accuracy rate always eventually falls to the overall percentage of similar word pairs among all word pairs, the later and more suddenly it falls, the better the model is performing on this task.
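To make the retrieval procedure concrete, here is a minimal sketch that ranks word pairs by cosine similarity and reports precision at a given recall level. It is one plausible reading of the procedure described above, not the authors' code; the vector file path and the toy pairs are illustrative assumptions.

```python
import numpy as np
from gensim.models import KeyedVectors

# Hypothetical path; any vectors in word2vec format would work here.
vectors = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)

# (word1, word2, is_similar): True for "similar" pairs, False for merely
# "related" pairs, mirroring the WordSim353 split after dropping unrelated pairs.
pairs = [("physics", "chemistry", True), ("computer", "keyboard", False)]  # toy examples

scored = sorted(
    ((vectors.similarity(w1, w2), sim)
     for w1, w2, sim in pairs if w1 in vectors and w2 in vectors),
    reverse=True,
)

def precision_at_recall(scored_pairs, recall: float) -> float:
    """Precision over the ranking prefix that contains `recall` of all
    similar pairs (similar pairs count as successful retrievals)."""
    labels = [is_sim for _, is_sim in scored_pairs]
    n_similar = sum(labels)
    target = recall * n_similar
    seen_similar = 0
    for rank, is_sim in enumerate(labels, start=1):
        seen_similar += is_sim
        if seen_similar >= target:
            return seen_similar / rank
    return n_similar / len(labels)

print(precision_at_recall(scored, recall=0.5))
```

With a full ranking, sweeping `recall` over 10%, 20%, ... reproduces the per-recall-point accuracy rates that the paper sums into a cumulative score.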
Models The word2vec python implementation provided by the gensim package (Rehurek et al., 2010) is used to train all the word2vec models. For Skip-gram and CBOW, a 5-word window size is used to give them the same amount of raw information, and words appearing 5 times or fewer are filtered out. The dimension of the word embeddings from Skip-gram and CBOW is 300 in all cases. Both full English Wikipedia and Simple English Wikipedia are used as training corpora with minimal preprocessing: XML tags are removed and infoboxes are filtered out, yielding four models: Full English Wikipedia CBOW (FW-CBOW), Full English Wikipedia Skip-gram (FW-SG), Simple English Wikipedia CBOW (SW-CBOW) and Simple English Wikipedia Skip-gram (SW-SG). The pre-trained Google News skip-gram model with 300-dimensional vectors (GN-SG) is also downloaded from the Google word2vec website for comparison. This model is trained on the Google News dataset with 100 billion words, which is 30 times as large as full English Wikipedia and 240 times as large as Simple English Wikipedia. Results Table 1 shows the accuracy rate at every recall rate point, with the sum of all the accuracy rates as the cumulative score. It is shown that GN-SG, although not far behind, does not give the best performance despite being trained on the largest dataset. In fact, it never excels at any given recall rate point. It outperforms various models at certain recall rate points by a small margin, but there is no obvious advantage gained from training on a much larger corpus even when compared with the models trained on Simple English Wikipedia, despite the greater risk of sparse data problems on this smaller dataset. For models trained on Simple English Wikipedia and full English Wikipedia, it is also interesting to see that the models perform almost equally well. FW-CBOW, trained on full English Wikipedia, performs best among the models overall, but for the first few recall rate points it performs equally well or slightly worse than either SW-CBOW or SW-SG trained on Simple English Wikipedia. At the later points, although FW-CBOW is generally better than all the other models most of the time, the margin could be considered narrow, and it is as good as SW-CBOW at the first two recall points. Comparing FW-SG with SW-SG and SW-CBOW, there is almost no sign of a performance gain from training on full Wikipedia instead of the much smaller Simple Wikipedia. FW-SG performs equally well or often slightly worse than both Simple Wikipedia models. The main observation in this paper is that Google News does not substantially outperform the other systems and that full Wikipedia systems do not substantially outperform Simple Wikipedia systems (comparing the CBOW models to one another and the Skip-gram models to one another). The main result from the table is not that smaller training datasets yield better systems, but that systems trained on significantly smaller datasets of explanatory text perform very close to systems trained on very large datasets in this task, despite the large difference in training data size. Analysis As mentioned previously, similarity may be better predicted by a context-window model because it measures functional or syntactic similarity. However, it is not clear in these models that syntactic information is a major component of the word embeddings.
Instead, it may be that the main factor in the performance level of the models is the general explanatory content of the Wikipedia articles, as opposed to the current events content of Google News. For similar words such as synonyms or hyponyms, the crucial information making them similar is the shared general semantic features of the words. For example, for the word pair physics : chemistry, the shared semantic features might be that they are both academic subjects, both studied in institutions and both composed of different subfields, as shown in Table 2. The '@' sign in Table 2 connects a context word with its position relative to the word in the center of the window. These shared properties of the core semantic identities of these words may contribute greatly to the similarity judgments of humans and machines alike, and they may be considered general knowledge about the words. For related words, for example computer : keyboard, it may be difficult to pinpoint the semantic overlap between the components which build up the core semantic identities of these words, and none is observed in the data. General knowledge of a certain word may be found in explanatory texts about the word, such as dictionaries or encyclopedias, but rarely elsewhere. The writers of informative non-explanatory texts like news articles assume that readers are already well acquainted with the basic semantic information about the words, so repetition of such information would be unnecessary. For a similarity/relatedness judgment task where basic and compositional semantic information may prove useful, using a corpus like Google News, where the context of a particular word assumes one is already conversant with it, would not be as effective as using a corpus like Wikipedia, where general knowledge about a word is likely to be available and repeated. Also, the smaller vocabulary size of Wikipedia compared with Google News suggests that general knowledge may be conveyed more efficiently, with less data sparsity. Conclusion This paper has shown that vectorial representations derived from substantially smaller explanatory text datasets such as Wikipedia and Simple Wikipedia preserve enough lexical semantic information to make these kinds of category judgments with equal or better accuracy than news corpora. Analysis shows these results may be driven by a prevalence of commonsense facts in explanatory text. These positive results for small datasets suggest vectors derived from slower but more accurate analyses of these resources may be practical for lexical semantic applications. We hope that by providing this result, future researchers will be more aware of the viability of smaller-scale resources like Simple English Wikipedia (or, presumably, Wikipedias in other languages which are substantially smaller than English Wikipedia), which can still produce high-quality vectors despite their much smaller size.
2,921.8
2015-01-01T00:00:00.000
[ "Computer Science" ]
High expression of SLC20A1 is less effective for endocrine therapy and predicts late recurrence in ER-positive breast cancer Estrogen receptor-positive (ER+) breast cancer intrinsically confers satisfactory clinical outcomes in response to endocrine therapy. However, a significant proportion of patients with ER+ breast cancer do not respond well to this treatment. Therefore, to evaluate the effects of endocrine therapy, there is a need to identify novel markers that can be used at the time of diagnosis for predicting clinical outcomes, especially for early-stage disease and late recurrence. Solute carrier family 20 member 1 (SLC20A1) is a sodium/inorganic phosphate symporter that has been proposed to be a viable prognostic marker for the luminal A and luminal B types of ER+ breast cancer. In the present study, we examined the possible association of SLC20A1 expression with tumor staging, endocrine therapy and chemotherapy in the luminal A and luminal B subtypes of breast cancer. In addition, we analyzed the relationship between SLC20A1 expression and late recurrence in patients with luminal A and luminal B breast cancer following endocrine therapy. We showed that patients with higher levels of SLC20A1 expression (SLC20A1high) exhibited poorer clinical outcomes among those with tumor stage I luminal A breast cancer. In addition, this SLC20A1high subgroup of patients responded less well to endocrine therapy, specifically in the luminal A and luminal B subtypes of breast cancer. However, patients with SLC20A1high showed good clinical outcomes following chemotherapy. Patients classified as SLC20A1high at the time of diagnosis also showed a higher incidence of recurrence compared with those with lower expression levels of SLC20A1, at >15 years for luminal A breast cancer and at 10–15 years for luminal B breast cancer. Therefore, we conclude that SLC20A1high can be used as a prognostic biomarker for predicting the efficacy of endocrine therapy and late recurrence in ER+ breast cancer. Introduction Breast cancer is the most common malignancy among women and the leading cause of cancer-associated mortality in women worldwide [1]. In general, when the cancer is detected early, patients exhibit longer survival times with less extensive treatment regimens and minimal risk of cancer progression. Breast cancer is one of the most stratified types of cancer, where the treatment methodology is typically designed according to the subtype and tumor stage [2][3][4]. This stratification has led to improvements in clinical outcomes [2][3][4]. However, there remains a substantial proportion of patients who do not respond well to treatment. Furthermore, early prediction of patient prognosis provides an opportunity to maximize the range of treatment options available at the earliest possible tumor stage, which can confer significant benefits on quality of life. Therefore, the identification of novel biomarkers that can accurately predict the prognosis of patients with early-stage tumors, and in turn optimize the treatment strategy, remains in high demand. Estrogen receptor-positive (ER+) breast cancer is a major breast cancer subtype that accounts for 70-80% of all breast cancers [3]. Breast cancer is stratified into six subtypes in accordance with gene expression profiles (PAM50), with the main subtypes being normal-like, luminal A, luminal B, human epidermal growth factor receptor 2 (HER2)-enriched, claudin-low and basal-like [5][6][7][8][9].
In particular, the luminal A and luminal B subtypes fall under the ER+ subtype of breast cancer [10,11]. Patients with the luminal A and luminal B breast cancer subtypes are known to exhibit superior prognoses compared with the other subtypes [7][8][9][10][11][12]. However, 25-50% of the patients with these two subtypes become less responsive to endocrine therapy due to the heterogeneous phenotypes of tumor cells and the development of resistance to therapy [12,13]. In addition, another important obstacle blocking the effective treatment of patients with ER+ breast cancer is late recurrence. A small number of patients relapse after >5 years of endocrine therapy [14][15][16][17][18][19][20][21]. In one study, 15-year distant relapse rates were 27.8% for luminal A and 42.9% for luminal B breast cancer [17]. It has been previously reported that tumor size and lymph node metastasis are associated with late recurrence [15][16][17][18][19][20][21]. However, unlike early recurrence, Ki-67 and p53 expression are not likely to be associated with late recurrence [20,21]. In fact, there is currently a lack of accurate parameters that can be applied for the prediction of late recurrence. Although dormant micro-metastatic cells have been proposed to be one of the mechanisms underlying late recurrence, this hypothesis remains in its infancy [22,23]. Therefore, the identification of a biomarker for clinically predicting late recurrence in patients with ER+ breast cancer after medical treatment is required. In this field, analysis of gene expression profiles in breast cancer and evaluation of their corresponding clinical outcomes have been demonstrated to be beneficial for the systemic stratification of breast cancers [24,25]. This allowed the degree of tumor heterogeneity among patients to be more accurately reflected [24,25]. Therefore, biomarkers for predicting the outcome of endocrine therapy and late recurrence in patients with luminal A and luminal B breast cancer are in urgent demand. Solute carrier family 20 member 1 (SLC20A1) is a sodium/inorganic phosphate (Pi) symporter [26,27]. The expression of SLC20A1 is high in ER+ breast cancer and has previously been found to be associated with poor prognosis [24,25]. In addition, apart from the ER+ luminal A and luminal B subtypes, patients with higher levels of SLC20A1 expression (SLC20A1 high ) in the claudin-low and basal-like subtypes have been reported to show inferior clinical outcomes. Radiation therapy against SLC20A1 high claudin-low and basal-like breast cancer tumors has been demonstrated to be insufficient [25]. In addition, inhibiting SLC20A1 has been shown to delay cell cycle progression and impair mitosis, cytokinesis and cell proliferation in cancer cells [28]. SLC20A1 knockdown using small interfering RNA (siRNA) also reduced the viability of the luminal A subtype MCF7 cell line [25]. Although these previous findings suggest the potential prognostic value of SLC20A1 expression for predicting late recurrence in patients with early-stage ER+ tumors, its relationship with tumor stage, endocrine therapy outcomes and late recurrence remains to be clarified. To assess the potential prognostic implications of SLC20A1 expression in detail, we evaluated its association with tumor stage and endocrine therapy outcomes in patients with luminal A and luminal B breast cancers in the present study.
Additionally, we analyzed the possible relationship between late recurrence and SLC20A1 expression in patients with luminal A and luminal B breast cancer after endocrine therapy. Molecular Taxonomy of the Breast Cancer International Consortium dataset The Molecular Taxonomy of Breast Cancer International Consortium (METABRIC) dataset [29,30] was downloaded from cBioportal (http://cbioportal.org) [31,32] on July 29, 2020. The clinicopathological data from these patients are summarized in S1 Table and have been described previously [25,33,34]. This METABRIC dataset contains mRNA expression profile data (n = 1904). We defined the optimal cut-off thresholds using Youden's index to assign the patients into the SLC20A1 high and low-expression (SLC20A1 low ) groups through receiver operating characteristic (ROC) analysis. ROC analysis was performed using SLC20A1 expression against disease-specific survival (DSS) or relapse-free status (RFS) for each group, and Youden's index was calculated (S2 Table). Patients' vital status ('Living' and 'Died of Disease') was used to define DSS, and relapse-free status was used to define RFS. Analysis of patient prognosis using the Kaplan-Meier method Survival curves based on DSS and RFS were plotted using the Kaplan-Meier method. The curves were compared between the SLC20A1 high and SLC20A1 low groups using the log-rank (Cochran-Mantel-Haenszel) test. Kaplan-Meier survival curves were produced using BellCurve for Excel version 3.00 (Social Survey Research Information Co., Ltd.). Analysis of patient prognosis using the multivariate Cox regression method Multivariate Cox regression analysis was performed to evaluate the influence of high versus low SLC20A1 gene expression on patient outcome and to estimate the adjusted hazard ratios (HRs) of the SLC20A1 high group relative to the SLC20A1 low group for DSS or RFS. Age at the time of diagnosis was adjusted for as a confounding factor to remove the effect of age. We set the level of significance at 5% (two-sided). Multivariate Cox regression analyses were carried out using BellCurve for Excel version 3.00 (Social Survey Research Information Co., Ltd.). Analysis of the recurrence incidence rate The recurrence incidence rate was calculated as the number of recurrences divided by the observation time of the patients with luminal A and luminal B breast cancer after endocrine therapy. The observation period was divided into 5-year intervals, and the number of recurrences was counted within each interval; the denominator was the total observation time (in years) of the patients during that interval. The p-value was calculated from a statistic based on the normal distribution and corrected using the Holm method. The incidence rate ratio was calculated as the ratio of the recurrence incidence rate of the SLC20A1 high group to that of the SLC20A1 low group. SLC20A1 high confers poorer clinical outcomes for patients with early-stage breast cancer according to Kaplan-Meier analysis and multivariate Cox regression analyses It has been previously shown that the levels of SLC20A1 expression in ER+ breast cancer tissues are higher compared with those in normal tissues, and that high SLC20A1 expression is associated with poorer clinical outcomes [24,25]. However, the TCGA dataset used in the previous analysis had a small number of patients (n = 526) and did not include endocrine therapy data, and the possible association between SLC20A1 expression at each tumor stage and clinical outcomes in patients with breast cancer remained poorly defined.
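Before turning to the results, the two computations described in the Methods above (Youden's-index dichotomization and the recurrence incidence rate ratio) can be sketched as follows. This is a minimal illustration, not the authors' BellCurve workflow; the array contents and the counts passed to the rate-ratio helper are made up for demonstration.

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_cutoff(expression: np.ndarray, event: np.ndarray) -> float:
    """Cut-off maximizing Youden's index J = sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(event, expression)
    return float(thresholds[np.argmax(tpr - fpr)])

def incidence_rate_ratio(recur_hi: int, years_hi: float,
                         recur_lo: int, years_lo: float) -> float:
    """Ratio of recurrence incidence rates (events per person-year),
    high-expression group over low-expression group."""
    return (recur_hi / years_hi) / (recur_lo / years_lo)

# Toy data: expression values and a binary outcome flag (1 = event observed).
expr = np.array([2.1, 3.5, 1.8, 4.2, 2.9, 3.8])
died = np.array([0, 1, 0, 1, 0, 1])

cut = youden_cutoff(expr, died)
high = expr >= cut
print(f"cut-off = {cut:.2f}, high-expression patients: {high.sum()}")
print(f"IRR example: {incidence_rate_ratio(12, 800.0, 5, 950.0):.2f}")
```

In the study itself, the dichotomization is done separately for DSS and RFS, and the rate ratio is computed per 5-year interval before Holm correction.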
In the present study, therefore, to assess the role of SLC20A1 expression in ER+ breast cancer subtypes in detail, we analyzed the METABRIC dataset, which includes gene expression data from 1904 breast cancer patients. We first compared the levels of SLC20A1 expression in tissues among the various tumor stages. Box-plot analysis revealed no statistical difference in SLC20A1 expression among the tumor stages (S1A Fig). No statistical difference in SLC20A1 expression among the tumor stages was found in the ER+, luminal A and luminal B subtypes either (S1B and S1C Fig). We next performed Kaplan-Meier analysis to compare DSS and RFS between patients in the SLC20A1 high and SLC20A1 low groups at tumor stages I, II and III. Patients in the SLC20A1 high group showed poorer clinical outcomes at tumor stage I (DSS, p<0.001; RFS, p<0.001) (Fig 1A and 1D) and stage II (DSS, p = 0.0014; RFS, p<0.001) (Fig 1B and 1E). At tumor stage III, patients with SLC20A1 high did not show significant differences in their clinical outcomes (DSS, p = 0.12; RFS, p = 0.19) (Fig 1C and 1F). To verify the results from the Kaplan-Meier analysis, multivariate Cox regression analyses were then performed between the SLC20A1 high and SLC20A1 low groups for DSS and RFS with age as a confounding factor. Patients with SLC20A1 high at tumor stages I and II exhibited poorer clinical outcomes (DSS: stage I, HR = 1.92, 95% CI = 1.30-2.82; stage II, HR = 1.55, 95% CI = 1.18-2.03; RFS: stage I, HR = 1.91, 95% CI = 1.34-2.74; stage II, HR = 1.52, 95% CI = 1.20-1.95); however, this was not the case for patients at tumor stage III (DSS: HR = 0.65, 95% CI = 0.39-1.11; RFS: HR = 0.70, 95% CI = 0.42-1.18) (Table 1). These results strongly suggest that high SLC20A1 expression may be used as a prognostic biomarker for poor outcomes in patients with early-stage breast cancer. Among patients with early-stage luminal A breast cancer, SLC20A1 high confers poorer clinical outcomes according to Kaplan-Meier analysis and multivariate Cox regression analysis To examine the prognosis of patients with luminal A and luminal B breast cancer at each tumor stage, we next performed Kaplan-Meier analysis to compare DSS and RFS between the SLC20A1 high and SLC20A1 low groups at tumor stages I, II or III. Kaplan-Meier analysis showed that patients with SLC20A1 high luminal A breast cancer at tumor stages I, II and III showed poorer clinical outcomes (DSS: stage I, p<0.001; stage II, p<0.001; stage III, p = 0.030) (RFS: stage I, p<0.001; stage II, p = 0.0052; stage III, p = 0.38) (Fig 2A-2F). On the other hand, there was no significant difference between patients with SLC20A1 high and SLC20A1 low luminal B breast cancer at tumor stages I and II (DSS: stage I, p = 0.15; stage II, p = 0.42) (RFS: stage I, Table 2). These results strongly suggest that SLC20A1 may be applied as a prognostic biomarker for luminal A breast cancer at the early stages. Endocrine therapy is insufficient for SLC20A1 high luminal A and luminal B breast cancers by Kaplan-Meier analyses and multivariate Cox regression analyses We next examined the outcomes of using endocrine therapy as the main treatment for ER+ breast cancer. Chemotherapy is also used as part of the treatment regimen for luminal A and luminal B breast cancer [2-4, 10, 11].
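The group comparisons used throughout this section (Kaplan-Meier curves with the log-rank test, and age-adjusted Cox regression) can be sketched with the lifelines package as follows. This is an illustrative stand-in for the BellCurve for Excel analyses named in the Methods; the dataframe columns and all values are hypothetical.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Illustrative columns: months of follow-up, event flag (1 = died of disease),
# SLC20A1 group (1 = high expression) and age at diagnosis as the confounder.
df = pd.DataFrame({
    "months": [24, 60, 132, 48, 180, 96],
    "event":  [1, 0, 1, 0, 0, 1],
    "high":   [1, 0, 1, 0, 0, 1],
    "age":    [55, 62, 48, 70, 59, 66],
})

hi, lo = df[df.high == 1], df[df.high == 0]

# Kaplan-Meier curve per group, compared with the log-rank test.
kmf = KaplanMeierFitter()
kmf.fit(hi.months, hi.event, label="SLC20A1 high")
print("median survival (high):", kmf.median_survival_time_)
print("log-rank p:", logrank_test(hi.months, lo.months, hi.event, lo.event).p_value)

# Cox regression: hazard ratio of the high group adjusted for age.
cph = CoxPHFitter().fit(df, duration_col="months", event_col="event")
print(cph.hazard_ratios_)
```

The same pattern, repeated per subtype, tumor stage and treatment stratum, yields the p-values and hazard ratios reported below.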
Patients with luminal A and luminal B breast cancer who received chemotherapy in the analyzed dataset constituted only 8.0% (54/679) of the luminal A and 9.9% (47/475) of the luminal B population (S1 Table). Therefore, we first performed Kaplan-Meier analysis comparing DSS and RFS between the SLC20A1 high and SLC20A1 low groups in patients with luminal A and luminal B breast cancer who did or did not receive endocrine therapy. Patients with SLC20A1 high luminal A breast cancer showed poorer clinical outcomes (without endocrine therapy: DSS, p<0.001; RFS, p<0.001) (with endocrine therapy: DSS, p = 0.0049; RFS, p = 0.052) (Fig 3A-3D). Although patients with SLC20A1 high luminal B breast cancer who did not receive endocrine therapy did not display poorer clinical outcomes (DSS, p = 0.12; RFS, p = 0.58) (Fig 3E and 3G), patients who received endocrine therapy showed poorer clinical outcomes (DSS, p = 0.0058; RFS, p = 0.022) (Fig 3F and 3H). Endocrine therapy at early tumor stages is insufficient for SLC20A1 high luminal A type breast cancer by Kaplan-Meier analyses and multivariate Cox regression analyses To determine the relationship between the prognoses of patients with SLC20A1 high and endocrine therapy at each tumor stage, we performed Kaplan-Meier analysis of DSS and RFS between the SLC20A1 high and SLC20A1 low groups for patients with luminal A and luminal B breast cancer, without or with endocrine therapy, at tumor stages I, II or III. At tumor stage I, patients with SLC20A1 high luminal A breast cancer showed poor clinical outcomes regardless of whether they received endocrine therapy (without endocrine therapy: DSS, p<0.001; RFS, p<0.001) (with endocrine therapy: DSS, p = 0.024; RFS, p = 0.0091) (Fig 4A-4D). At tumor stage II, although patients with luminal A breast cancer who did not receive endocrine therapy in the SLC20A1 high group did not show poorer clinical outcomes (DSS, p = 0.090; RFS, p = 0.15) (Fig 4E and 4G), those with SLC20A1 high luminal A breast cancer who received endocrine therapy showed poorer clinical outcomes (DSS, p = 0.0023; RFS, p = 0.023) (Fig 4F and 4H). At tumor stage III, there were insufficient numbers of patients with luminal A breast cancer who did not receive endocrine therapy for analysis (Fig 4I and 4K). At tumor stage III, patients with SLC20A1 high luminal A breast cancer who underwent endocrine therapy showed shorter survival (DSS, p = 0.038; RFS, p = 0.34) (Fig 4J and 4L). By contrast, among patients with the luminal B subtype at tumor stage I, those in the SLC20A1 high group did not exhibit poor clinical outcomes (without endocrine therapy: DSS, p = 0.24; RFS, p = 0.26; with endocrine therapy: DSS, p = 0.22; RFS, p = 0.79) (Fig 5A-5D). At tumor stage II, patients with SLC20A1 high luminal B breast cancer likewise did not exhibit poor clinical outcomes, regardless of whether they received endocrine therapy (without endocrine therapy: DSS, p = 0.065; RFS, p = 0.46; with endocrine therapy: DSS, p = 0.16; RFS, p = 0.11) (Fig 5E-5H). At tumor stage III, there were insufficient numbers of patients with not only luminal A but also luminal B breast cancer who did not receive endocrine therapy for analysis (Fig 5I and 5K). Patients with SLC20A1 high luminal B breast cancer did not exhibit poorer clinical outcomes (DSS, p = 0.055; RFS, p = 0.27) (Fig 5J and 5L) (Table 4).
At tumor stage II, although patients with SLC20A1 high luminal A breast cancer who did not undergo endocrine therapy did not exhibit poorer clinical outcomes, patients with SLC20A1 high luminal A breast cancer who underwent endocrine therapy did (without endocrine therapy: DSS, HR = 2.42, 95% CI = 0.82-7.18; RFS, HR = 1.61, 95% CI = 0.63-4.14) (with endocrine therapy: DSS, HR = 2.64, 95% CI = 1.36-5.12; RFS, HR = 1.83, 95% CI = 1.07-3.14) (Table 4). Taken together, these results suggest that the administration of endocrine therapy beginning from the early tumor stages is less effective for patients with SLC20A1 high luminal A breast cancer. Chemotherapy is an effective treatment option for patients with SLC20A1 high luminal A and B breast cancer according to Kaplan-Meier analysis and multivariate Cox regression analysis Although there was a relatively small number of chemotherapy cases in the present study, chemotherapy was selected as the treatment option for some patients with ER+ breast cancer (luminal A: DSS, n = 51; RFS, n = 54) (luminal B: DSS, n = 41; RFS, n = 44). In addition, we previously reported that chemotherapy was sufficient for SLC20A1 high claudin-low and basal-like breast cancers [25]. Therefore, in the present study we also classified patients with luminal A or luminal B breast cancer into without- and with-chemotherapy categories before performing Kaplan-Meier and multivariate Cox regression analyses (Fig 6B, 6D, 6F and 6H, and Table 5). Taken together, these results suggest that chemotherapy is an effective treatment option for patients with SLC20A1 high luminal A or luminal B breast cancer. Patients with SLC20A1 high luminal A or luminal B breast cancer who underwent endocrine therapy are at higher risk of late recurrence Patients with luminal A or luminal B breast cancer typically exhibit superior prognoses compared with the other breast cancer subtypes [7][8][9][10][11][12]. However, some patients with the luminal A and luminal B subtypes relapse after the termination of long-term endocrine therapy [14][15][16][17][18][19][20][21]. Therefore, late recurrence is one of the key clinical issues of luminal A and luminal B breast cancer that needs to be addressed following endocrine therapy. Kaplan-Meier analyses of RFS showed marked differences between the SLC20A1 high and SLC20A1 low groups from 175 months onwards (Fig 3D and 3H). The recurrence periods and numbers of patients in the SLC20A1 high and SLC20A1 low groups among those with luminal A or luminal B breast cancer are shown in Fig 7A and 7B. We next analyzed the recurrence incidence rate and the rate ratio every 5 years from the time of diagnosis. Among patients with luminal A breast cancer who underwent endocrine therapy, those in the SLC20A1 high group showed similar recurrence incidence rates compared with those in the SLC20A1 low group at years 0-5, 5-10 and 10-15 (year 0-5: incidence rate ratio = 1.68, 95% CI = 1.03-2.74) (year 5-10: incidence rate ratio = 1.14, 95% CI = 0.64-2.02) (year 10-15: incidence rate ratio = 0.85, 95% CI = 0.39-1.86). In particular, patients with SLC20A1 high showed higher recurrence incidence rates compared with those with SLC20A1 low at >15 years (incidence rate ratio = 3.40, 95% CI = 1.02-11.27) (Fig 7C). Since none of the patients with SLC20A1 high recurred at >15 years, the incidence rate ratio could not be analyzed for this time period (Fig 7D).
To conclude, these results suggest that patients with SLC20A1 high exhibit a higher risk of late recurrence even with endocrine therapy. Discussion Patients with early-stage SLC20A1 high luminal A breast cancer showed poorer clinical outcomes in terms of both DSS and RFS (Fig 2A-2F and Table 2), suggesting that SLC20A1 high can be applied as a viable prognostic biomarker for early-stage luminal A breast cancer. By contrast, patients with SLC20A1 high luminal B tumors did not show poorer clinical outcomes (Fig 2G-2L and Table 2). Our previous study reported that SLC20A1 high HER2-enriched subtypes have superior clinical outcomes [25]. Since the luminal B type expresses HER2 strongly, differential HER2 status may be the reason for the different prognoses between the luminal A and luminal B subtypes with SLC20A1 high . As shown in Fig 3 and Table 3, endocrine therapy for patients with SLC20A1 high luminal A and luminal B subtypes is insufficient for the improvement of prognosis or the lengthening of the interval prior to relapse. In the luminal A subtype, patients in the SLC20A1 high group who received endocrine therapy showed poorer prognosis according to DSS, but they did not show significant differences in recurrence compared with the SLC20A1 low group until 175 months onwards (14.6 years) (Fig 3D). These results suggest that, among patients who relapsed within 175 months regardless of whether SLC20A1 expression was high or low, those with SLC20A1 high were at higher risk of mortality. Subsequently, we also analyzed the recurrence incidence rate every 5 years. One of the key clinical issues in treating ER+ breast cancer is late recurrence after the end of long-term endocrine therapy. Therefore, it would be beneficial if the prediction of late recurrence after a long period of therapy could be achieved at the time of diagnosis, before medical treatment commences. Although not statistically significant, patients with the luminal A subtype and SLC20A1 high were found to be at high risk of recurrence over 15 years (Fig 7C). Among patients with the luminal B subtype, SLC20A1 high combined with endocrine therapy showed poorer prognoses and shorter intervals to recurrence. This suggests that patients with SLC20A1 high have higher risks of late recurrence at year 10-15 if they received endocrine therapy (Fig 7D). The difference in RFS observed between the luminal A and luminal B subtypes may be due to differences in the late recurrence rate. In addition, a significant proportion of breast cancer cells in the luminal B subtype are HER2-positive. The reason for the difference in relapse rates between the luminal A and luminal B subtypes warrants further study. In stage I luminal A tumors, patients with SLC20A1 high who underwent endocrine therapy showed poorer prognoses and shorter intervals to recurrence. By contrast, patients with luminal A who received chemotherapy showed good clinical outcomes. These results indicate that endocrine therapy for patients with stage I SLC20A1 high luminal A breast cancer is less effective, such that other treatment options, such as chemotherapy, would be necessary. It has been proposed that the main cause of late recurrence is dormancy [22,23]. Dormancy of a cell is defined by the extension of the G0/G1 phase and cell cycle arrest [22,23,35,36]. SLC20A1 is a Pi symporter, and Pi is necessary for DNA synthesis and ATP generation. SLC20A1 depletion has been found to impair cell cycle progression and cell proliferation, in addition to causing cell death [28,37,38].
In addition, SLC20A1 siRNA knockdown has been reported to reduce cell viability in MCF7 cells, a line derived from the luminal A subtype of breast cancer [25]. Therefore, it is entirely possible that SLC20A1 overexpression can increase the Pi supply into cell cycle-arrested cells, leading to survival despite endocrine therapy and therefore to late recurrence. [Fig 7 caption: recurrence incidence rates of patients in the SLC20A1 high and SLC20A1 low luminal A or luminal B groups every 5 years, with or without endocrine therapy; p-values were calculated using a statistic based on the normal distribution and corrected using the Holm method; the incidence rate ratio was calculated as the ratio of the recurrence incidence rate of SLC20A1 high to that of SLC20A1 low; the 95% confidence intervals and the numbers of patients are shown on the right.] Dormancy has been reported to be one main feature of cancer stem cells (CSCs) [22,35,36]. We have previously shown that SLC20A1 knockdown suppressed tumor sphere formation by aldehyde dehydrogenase 1-positive CSCs from claudin-low and basal-like type breast cancer [25]. Therefore, SLC20A1 may contribute to the maintenance of dormant CSCs and lead to late recurrence in luminal A and luminal B breast cancer. By contrast, the suppression of cell proliferation in HeLa cells by SLC20A1 inhibition was found to be independent of Pi uptake [28]. Therefore, the detailed molecular mechanism underlying the effects of SLC20A1 high on late recurrence in ER+ breast cancer requires further study. SLC20A1 high may reduce the efficacy of endocrine therapy against ER+ breast cancer from early tumor stages onwards with regard to late recurrence. The present study analyzed the METABRIC dataset. To validate the results, another seven breast cancer datasets containing mRNA data were downloaded from cBioportal. None of these datasets included endocrine therapy data. Among them, the TCGA PanCancer Atlas included information on 1,084 patients with DSS, DFS, PFS and staging; however, the average observation period (DSS, 40.8 months; DFS, 37.9 months; PFS, 37.9 months) (S3 Table) was shorter than that of the METABRIC dataset (DSS, 123.6 months; RFS, 110.2 months) (S1 Table). Therefore, the TCGA PanCancer Atlas dataset was only used to examine DSS, DFS and PFS, tumor stage and breast cancer subtypes using Kaplan-Meier and multivariate Cox regression analyses between the patients in the SLC20A1 high and SLC20A1 low groups (S2 and S3 Figs, and S4 Table). The patients with stage II luminal A breast cancer in the SLC20A1 high group exhibited a tendency towards poor clinical outcomes; this was observed in the TCGA PanCancer Atlas dataset as well as in the METABRIC dataset (S2 Fig). In luminal B breast cancer, the TCGA PanCancer Atlas dataset did not reveal results similar to those of the METABRIC dataset (S3 Fig). This discrepancy between the results of the two cohorts may be due to the smaller number of patients, greater censoring and shorter observation period of the TCGA PanCancer Atlas dataset compared with the METABRIC dataset. Validation of the results of the present study needs to be performed in the future. Apart from breast cancer, it has been reported that high SLC20A1 expression is associated with poor prognoses in pancreatic cancer using Kaplan-Meier and Cox hazards analyses of overall survival [39,40]. Therefore, it is also crucial to perform similar analyses of SLC20A1 in cohorts of other types of cancer.
Since SLC20A1 high was identified as a prognostic marker using information-theoretic analysis [24], this approach may become a powerful tool for identifying, from genomic databases, novel biomarkers that predict clinical effects at early stages, and especially late recurrence, in a variety of cancers. Conclusions In the present study, we revealed that patients with SLC20A1 high showed poorer clinical outcomes at early tumor stages and tended to be less responsive to endocrine therapy for luminal A and luminal B breast cancer. However, patients with SLC20A1 high who underwent chemotherapy showed good clinical outcomes. In addition, patients classified as SLC20A1 high at the time of diagnosis showed higher recurrence incidence rates compared with those in the SLC20A1 low group at >15 years for luminal A and at 10-15 years for luminal B breast cancer. Therefore, SLC20A1 high can be used as a prognostic biomarker for predicting the effect of endocrine therapy and the likelihood of late recurrence in ER+ breast cancer.
6,363
2022-05-23T00:00:00.000
[ "Medicine", "Biology" ]
Performance Evaluation of a Stator Modular Ring Generator for a Shrouded Wind Turbine This paper presents the performance evaluation of a stator modular ring permanent-magnet generator to be embedded in a shrouded wind turbine. The shroud increases the power conversion for the same turbine area when compared with more conventional turbines. An adapted structure allows the prototype to be assembled and its performance verified under controlled conditions. To assess the accuracy of an analytical subdomain model for a large-diameter machine, the evaluation compares its results with those obtained by the electromagnetic finite element method and by experimental measurements. The results for the components of the air-gap flux density, back EMF and electromagnetic torque obtained by the proposed analytical model and the finite-element method are in good agreement with the experimental measurements. The experimental measurements of the iron loss and copper loss show that the prototype efficiency can reach approximately 90%. Introduction Efforts to reduce pollutant emissions and the impact of global warming have led countries around the world to invest in renewable energy sources such as solar and wind. Despite the advantages of wind energy, there are still many challenges for further expansion of the energy matrix, including the development of more efficient generators. Aiming to increase renewable energy usage, residences and commercial buildings have been encouraged to produce their own clean energy. However, constant changes in the direction of the wind produce turbulence that reduces the efficiency of the turbine in urban environments. Therefore, the development of mechanisms that allow the integration of wind turbines into buildings is essential for the operation of a wind system under these conditions [1]. Among the alternatives studied to increase the power extracted by turbines is the use of a wind concentrator that surrounds the turbine and increases the wind speed [2]. Studies of the dimensions and shape of the wind concentrator show that it is possible to obtain an increase in wind speed of up to 5 times when compared to a standard wind turbine [3]. To develop a wind system for urban environments that, in addition to operating at low wind speeds, provides security against the possibility of the blades detaching, the insertion of a diffuser is a solution that meets these two requirements. Based on the concept of using the diffuser, the proposal to install the rotor of the electric generator at the tips of the turbine blades, in the form of a ring, and to insert the stator into the structure of the diffuser is presented in [4]. However, the construction of this kind of machine introduces an issue concerning the turbine diameter, and to reduce its complexity, stator modularization simplifies its manufacturing and assembly. A comparison between the designs of two wind generators, one with a continuous core and the other with a modular stator, both with 400 kW power and a 2.1 m rotor diameter, shows that the modular construction is more efficient at full load and has less active-material mass [5]. A comparison of stator interior permanent-magnet generators (SIPMG) with power between 3 MW and 10 MW and stator internal diameters between 3.7 m and 9.7 m shows that the SIPMG has about 120% of the torque density and 78% of the cost per kilowatt of the conventional generator [6].
Another version connects each module of a flux-switching permanent-magnet generator to an electronic converter, with a total power of 450 kW, which allows the generator to continue operating even if one of the modules presents problems [7]. Recently, the concept of the modular stator has also been applied to the design of electric motors; however, in these cases the machines have diameters below 0.4 m and power below 3.5 kW [8][9][10]. In general, studies on modular stators present machines in which the torque transmission is carried out by the machine shaft. In addition, wind generators with a modular stator are of large power, while small-power machines have smaller diameters. The wind generator with a modular stator presented in this paper has characteristics not yet evaluated: despite having the dimensions of a wind turbine for urban applications, it has a large diameter, 1.5 m, and a power of 1 kW. Figure 1 presents the structure of the small, shrouded wind turbine used to design the stator modular ring generator with surface-mounted permanent magnets evaluated in this paper. This paper presents the electromagnetic performance evaluation of the generator as embedded in a small, shrouded wind turbine, tested under controlled conditions of rotation and load using a structure with the same dimensions as the wind turbine with diffuser, comparing the experimental results to analytical results obtained by means of a subdomain model and by a 2D finite element method (FEM). The stator modular ring generator aims to produce energy within the largest possible wind speed range in urban areas.
The design of the generator aims to connect it to the grid through an AC/AC converter, and the performance evaluation considers the generation operation prior to its grid connection. Methodology The development of the synchronous generator with modular stator is based on the relationship between the power and the rotation speed of a small wind turbine with diffuser. The design of the synchronous generator with modular stator evaluated different combinations of modules and numbers of poles as a function of copper and iron losses [11]. The parameters of the synchronous generator with modular stator were evaluated experimentally [12]. The experimental tests provide the results of mechanical torque, terminal voltage and current for different speeds in the operating range of the wind turbine. The results of the analytical modeling and the finite element method are compared with the experimental results to verify the accuracy of both methods. The following sections present the details of the wind generator with a modular stator. Stator Modular Ring Generator Model The performance evaluation considers as a reference the power and rotation of the wind turbine, calculated as P_T = (1/2) ρ C_W A_T V_w^3 (1), where ρ is the air density, C_W is the power coefficient of the turbine, V_w is the wind speed and A_T is the area swept by the movement of the blades of the turbine.
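To make Equation (1) concrete, the following minimal sketch evaluates the turbine power with and without the speed augmentation attributed to the diffuser. The power coefficient of 0.4 and the 1.5 augmentation factor come from the text; the blade radius is an assumption inferred from the 1.5 m generator diameter, and the wind speeds are illustrative.

```python
import math

def turbine_power(rho: float, c_w: float, area: float, v_wind: float) -> float:
    """Equation (1): P_T = 0.5 * rho * C_W * A_T * V_w^3."""
    return 0.5 * rho * c_w * area * v_wind ** 3

rho = 1.225                    # air density [kg/m^3]
c_w = 0.4                      # power coefficient from the text
radius = 0.75                  # assumed blade radius [m] (1.5 m ring diameter)
area = math.pi * radius ** 2   # swept area A_T [m^2]

for v in (4.0, 6.0, 8.0):      # free-stream wind speeds [m/s]
    p_bare = turbine_power(rho, c_w, area, v)
    p_shroud = turbine_power(rho, c_w, area, 1.5 * v)  # 1.5x speed augmentation
    print(f"V_w = {v:.0f} m/s: bare {p_bare:7.1f} W, shrouded {p_shroud:7.1f} W")
```

Note that the cubic law alone would give a 1.5^3 ≈ 3.4-fold increase; the approximately 2.2-fold gain reported below presumably reflects additional effects of the shrouded configuration that this simple sketch does not model.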
The application of Equation (1) leads to the graph presented in Figure 2, which shows the improvement in power of the shrouded wind turbine considering a power coefficient equal to 0.4 and a wind speed augmentation of 1.5. In this paper, the power curve of the shrouded wind turbine shows an increase of approximately 2.2 times in comparison to a standard wind turbine. As shown in Figure 1, the system structure uses mechanical parts to support the generator in the shrouded turbine. However, this study considers only the electromagnetic parts to develop the analytical model and the FEM simulation. The stator modular ring generator is made of a rotor back iron with 40 poles of ferrite magnets and 20 stator modules with three phases each. Considering the circumference size and making proper use of the symmetries, a reduced electromagnetic model can be employed. Thus, the analytical model and FEM simulation consider the smallest symmetric fraction, which is 1/20 of the entire machine, as shown in Figure 3. Table 1 presents the dimensions of the stator module of Figure 3. The generator assembly connects the stator modules in series. Hence, the results obtained for one fraction of the machine provide its overall performance when multiplied by the total number of modules, as described in the sections that follow. Analytical Model An analytical subdomain model is employed to solve this problem due to its accuracy in taking semi-closed slots under load conditions into account [13,14]. The analytical modeling considers the following assumptions: infinitely permeable iron materials; nonconductive stator/rotor laminations; negligible end effect; uniformly distributed current density in the conductor area; relative permeability of the permanent magnets equal to 1; slot openings and slots with radial sides. This work presents the modeling with semi-closed slots, eccentric permanent-magnet shapes and the armature reaction field using three regions, i.e., magnets and air gap (Region 1), slot openings (Region 2i, i = 1, 2, ..., 18) and winding slots (Region 3i, i = 1, 2, ..., 18), according to Figure 3.
The model solution uses the magnetic vector potential, obtained from ∇²A_z = −μ0 (∇×M)_z in the magnet subdomain, ∇²A_z = 0 in the air gap and slot openings, and ∇²A_z = −μ0 J_z in the slots. The work of Wu et al. [14] presents the development of these equations, leading to an accurate result for the flux density in the air gap. However, when the magnets have eccentric shapes, it becomes harder to define the limits of the magnet subdomain and, consequently, to apply the interface condition with the air-gap subdomain. To simplify the problem in this case, modeling the magnets by equivalent surface currents allows the magnet and air-gap subdomains to be united. The equivalent surface currents are determined by [15] K_m = M × n_s, where n_s is the normal vector of the evaluated surface, as shown in Figure 4. When modeling permanent magnets with equivalent surface currents, the particular solution of this subdomain is the following [16]: A_p = −(μ0 i_c/2π)[ln a − Σ_{n≥1} (1/n)(r/a)^n cos(nζ)] when a > r, and A_p = −(μ0 i_c/2π)[ln r − Σ_{n≥1} (1/n)(a/r)^n cos(nζ)] when r > a, where i_c is the current of the equivalent coil of each permanent-magnet side, a is the radial position of that coil and ζ is the angle between the current of the equivalent coil and the evaluated point. Zhou et al. [17] present the solution to the equation for the region of the permanent magnets and air gap, applying the boundary condition of the rotor yoke, as well as the description of eccentric permanent-magnet shapes for open slots. Semi-closed slots with eccentric permanent-magnet shapes, without winding currents, are presented in [18]; meanwhile, a model that predicts the armature reaction field with concentric permanent-magnet shapes is calculated in [19]. The solutions of the governing equations, after applying the boundary conditions of each subdomain, give the vector potential in the permanent-magnet and air-gap subdomain and in each slot-opening and slot subdomain. Based on these equations, the modeling aims to verify the accuracy of the model for a machine with a large diameter.
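As a numerical sanity check of the line-current particular solution reconstructed above (the case a > r), the sketch below compares the harmonic-series form against the closed-form log-distance expression. It is not the authors' full subdomain solution; the current, radii and angle are made-up values, and the harmonic count of 1470 is borrowed from the results discussion to hint at why so many terms are needed when the air gap is thin relative to the rotor radius.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def a_line_current_exact(i_c: float, a: float, r: float, zeta: float) -> float:
    """Vector potential of a line current i_c at radius a, observed at (r, zeta):
    A = -(mu0 i_c / 2pi) ln d, with d the distance to the current filament."""
    d = np.sqrt(r**2 + a**2 - 2.0 * a * r * np.cos(zeta))
    return -(MU0 * i_c / (2.0 * np.pi)) * np.log(d)

def a_line_current_series(i_c: float, a: float, r: float, zeta: float,
                          n_max: int = 200) -> float:
    """Harmonic-series form of the particular solution for the case a > r."""
    n = np.arange(1, n_max + 1)
    series = np.sum((1.0 / n) * (r / a) ** n * np.cos(n * zeta))
    return -(MU0 * i_c / (2.0 * np.pi)) * (np.log(a) - series)

i_c, a, r, zeta = 10.0, 0.75, 0.74, 0.3   # thin air gap: slow series convergence
print(a_line_current_exact(i_c, a, r, zeta))
print(a_line_current_series(i_c, a, r, zeta, n_max=1470))
```

With r/a close to 1, the (r/a)^n terms decay slowly, which is consistent with the large number of air-gap harmonics reported for this large-diameter machine.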
Appendix A details the interface conditions and the equations employed to determine the unknown coefficients. The evaluation of the generator's performance takes into account its parameters, previously determined in [12]; moreover, iron-loss estimation is also considered, according to [11]. After determining the flux-density components, the Maxwell stress tensor method provides the electromagnetic torque, according to [20]. FEM Simulation The FEM simulations aim to obtain the electromagnetic performance in a 2D model using the software ANSYS Electronics. The simulations connect resistive loads to the model and provide the waveforms of back EMF and current, and the electromagnetic torque obtained by the virtual work method. Table 1 shows the main parameters of the evaluated model. Prototype Structure To simulate the real operating conditions of wind speed, a prototype structure with the same dimensions as the shrouded wind turbine, driven by a motor, provides the rotation and torque expected in Figure 2, as shown in Figure 5. The wooden structure identified as Number 1 in Figure 5a has the same dimensions as the wind turbine, and structure Number 2 provides the support for fixing the stator modules with the same diffuser dimensions. Measurements of mechanical torque, terminal voltage and current at several rotation speeds provide the generator performance for no-load and on-load conditions over the wind speed range of the shrouded wind turbine. In addition, static measurements of the radial component of the flux density in the air gap under the no-load condition allow verifying the magnets' flux density directly and validating the analytical model and FEM results. Results and Discussion As aforementioned, static measurements with no current in the windings provide the radial component of the magnetic flux density, and Figure 6 shows the comparison between the experimental measurements, the analytical modeling and the FEM results. Figure 6 aids in validating the analytical model and FEM results; both methods, applied over a pole pair, show good agreement with the experimental measurements, including at the slot openings, where the biggest difference is approximately 21 mT. Concerning the magnitude of the air-gap flux density, it is important to remember that, due to the relation between the rotor diameter and rotation, the number of poles produces frequencies that considerably increase the stator iron losses. Thus, the lower flux density provided by ferrite magnets reduces the stator iron loss, as well as the required back-iron length and rotor height.
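The Maxwell stress tensor torque computation mentioned above can be sketched as follows, assuming the radial and circumferential flux-density components are already sampled uniformly along a circle in the air gap. The pole count matches the 40-pole machine, but the field amplitudes, gap radius and stack length are illustrative values, not the prototype's.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def maxwell_torque(b_r: np.ndarray, b_t: np.ndarray,
                   r_gap: float, stack: float) -> float:
    """Maxwell stress tensor torque: T = (L * r^2 / mu0) * closed integral of
    B_r * B_theta over the air-gap circumference (uniform angular samples)."""
    return stack * r_gap**2 / MU0 * 2.0 * np.pi * float(np.mean(b_r * b_t))

# Toy air-gap field: 20 pole pairs (40 poles), ferrite-level radial flux density
# and a small load-dependent circumferential component in phase with it.
theta = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)
p = 20
b_r = 0.35 * np.cos(p * theta)    # radial component [T]
b_t = 0.004 * np.cos(p * theta)   # circumferential component [T]

print(f"T = {maxwell_torque(b_r, b_t, r_gap=0.75, stack=0.03):.1f} N*m")
```

The same integral evaluated with the analytical and FEM field solutions is what allows the torque comparison against the measurements discussed next.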
Despite the impossibility of directly measuring the circumferential component of the magnetic flux density, Figure 7 presents the analytical-modeling and FEM results to verify the former accurately; this component is also necessary for the torque calculation using the Maxwell stress tensor method. Although there are differences between the peak values due to the slot openings, both waveforms show good agreement in shape. However, it is important to highlight that the analytical model uses 1470 harmonics in the air-gap subdomain to achieve this precision. This number of harmonics is necessary because the rotor diameter is larger than in the models previously presented; a higher number of harmonics does not improve the accuracy or the ripple of the waveform. To identify the stator loss caused by the magnet flux density, Figure 8 shows the torque measured with and without the stator modules and the modules' stator iron loss along the work range. Considering the weight of the rotor, it is necessary to measure the torque produced by the friction between the axis and the bearings without the stator modules. Thus, the difference between the torque measured with and without the stator modules provides the iron losses of the stator, as shown in Figure 8.
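The remark about the 1470-harmonic truncation can be made concrete with a small numerical convergence check. The sketch below is illustrative only: the coefficients are synthetic stand-ins (the real ones come from solving the subdomain interface conditions), and the function names are ours, not the paper's.

```python
import numpy as np

def b_radial(theta, coeffs):
    """Evaluate a truncated harmonic series for the radial air-gap flux density.

    coeffs maps harmonic order n -> (a_n, b_n). In the real model these come
    from the interface conditions; here they are synthetic placeholders used
    only to illustrate the truncation-order convergence check.
    """
    b = np.zeros_like(theta)
    for n, (a_n, b_n) in coeffs.items():
        b += a_n * np.cos(n * theta) + b_n * np.sin(n * theta)
    return b

theta = np.linspace(0.0, 2.0 * np.pi, 8192)
rng = np.random.default_rng(0)
# Synthetic spectrum with slowly decaying harmonics, mimicking a large-diameter
# machine where high orders still contribute.
full = {n: (rng.normal() / n, rng.normal() / n) for n in range(1, 1471)}
reference = b_radial(theta, full)
for n_trunc in (100, 500, 1000, 1470):
    approx = b_radial(theta, {n: c for n, c in full.items() if n <= n_trunc})
    err = np.max(np.abs(approx - reference))
    print(f"N = {n_trunc:5d}  max deviation from full series = {err:.3e}")
```

Stopping the refinement once the waveform stops changing is exactly the criterion the authors describe for settling on 1470 harmonics.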
As expected, the iron losses increase with the rotation speed, and, considering the turbine power at the initial operating speed, it is possible to verify the importance of reducing the iron loss. Figure 8 presents some points that do not agree well with this behavior, because the structure adapted for the tests presented some vibration at the stator part that affected the torque measurements. However, it is important to highlight that the mechanical behavior of the real shrouded wind turbine is different, which mitigates this problem. Figures 9 and 10 show the comparison of the rms terminal voltage under no-load and on-load conditions among the analytical model, the FEM results and the experimental measurements. The results of Figure 9 confirm the validation of the radial component of the magnetic flux density, as in Figure 6, and Figure 10 shows that the differences between the analytical method, the FEM and the experimental measurements are lower than approximately 4%. The comparison between the terminal rms voltages shows a voltage drop under the on-load condition, but the voltage behavior remains linear in both cases, which indicates that the armature reaction is not affecting the permanent magnets. The rms current and the copper loss in the armature winding presented in Figure 11 allow the comparison of the experimental measurements with the analytical modeling and the FEM. The results for the rms current, as well as for the voltage, lead to a validation of the analytical method and the FEM. By comparing the copper loss to the iron loss, it is possible to conclude that, due to the behavior of the former, the iron loss is more significant for the generator's efficiency; only at the maximum speed are both approximately equal in magnitude.
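As a brief aside on how these quantities relate, in our own notation (m is the number of phases and R_ph the per-phase winding resistance, neither stated here; a unity power factor is assumed for the resistive loads used in the tests):

```latex
% Copper loss and delivered load power for an m-phase winding under resistive load:
P_{cu} = m\, R_{ph}\, I_{rms}^{2}, \qquad P_{load} = m\, V_{rms}\, I_{rms}
```

Since P_cu grows with the square of the current while the iron loss grows with speed (frequency), the two curves only meet at the top of the speed range, as Figure 11 shows.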
According to the rms values of terminal voltage and current, Figure 12 shows the comparison of the load power along the work range. As aforementioned, the load-power results for the three methods present differences lower than approximately 6% and lie below the turbine power presented in Figure 2 due to the iron and copper losses. The electromagnetic torque comparison presented in Figure 13 allows verifying the accuracy of the analytical modeling, obtained by the Maxwell stress tensor, and of the FEM, calculated with the aid of the virtual work method, against the experimental measurements. The electromagnetic torque, the experimental measurements and the no-load torque are presented in Figure 8. The results in Figure 13 show that, despite the differences verified in the circumferential component of the magnetic flux density, both the Maxwell stress tensor and the virtual work method agree with the experimental measurements, with differences lower than approximately 6.8%. This means that, although a ripple exists in the circumferential component of the magnetic flux density, the analytical method can provide valid results for this machine size. Lastly, Figure 14 shows an estimation of the generator's efficiency using the results of Figures 2 and 12. Figure 14. Estimation of the stator modular ring generator efficiency over the work range. Figure 14 shows slight differences between the efficiencies, which come from the combined uncertainties of the torque, speed, voltage and current used to calculate the efficiency; overall, all methods present a similar behavior, with a measured efficiency approximately equal to 90% over a good range of the turbine operation.
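For clarity, the efficiency in Figure 14 is simply the ratio of the load power of Figure 12 to the turbine power of Figure 2 (the symbols are ours), with the gap between the two accounted for by the iron and copper losses discussed above:

```latex
\eta = \frac{P_{load}}{P_{turbine}}
```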
Conclusions This paper evaluated the performance of a stator modular ring generator for wind-generation purposes using an analytical model, FEM and experimental measurements. Previous works employed the analytical model used in the analysis of this prototype; however, none of them had such a large rotor diameter compared to its axial length. The results showed that, despite the large number of harmonics required by the analytical modeling, its results were in good agreement with the FEM and the experimental measurements. Moreover, both the electromagnetic torque obtained by the virtual work method and that obtained by the Maxwell stress tensor presented results quite similar to the experimental measurements, confirming the accuracy of both methods. The comparison between the prototype losses allowed verifying that the iron loss was more significant over most of the work range, due to the high number of poles and the rotation. Lastly, the efficiency evaluation showed similar behavior for all the methods, reaching approximately 90% in the part of the work range with the highest wind-turbine power values. The next stage of the research is to assemble the stator modular ring generator to the shrouded wind turbine to evaluate its performance and verify the structural conditions regarding safety during operation. The results of these tests could then allow the development of larger wind generators.
Modeling gene regulatory network motifs using statecharts Background Gene regulatory networks are widely used by biologists to describe the interactions among genes, proteins and other components at the intra-cellular level. Recently, a great effort has been devoted to giving gene regulatory networks a formal semantics based on existing computational frameworks. For this purpose, we consider Statecharts, which are a modular, hierarchical and executable formal model widely used to represent software systems. We use Statecharts for modeling small and recurring patterns of interactions in gene regulatory networks, called motifs. Results We present an improved method for modeling gene regulatory network motifs using Statecharts and we describe the successful modeling of several motifs, including those which could not be modeled, or whose models could not be distinguished, using the method of a previous proposal. We model motifs in an easy and intuitive way by taking advantage of the visual features of Statecharts. Our modeling approach is able to simulate some interesting temporal properties of gene regulatory network motifs: the delay in the activation and the deactivation of the "output" gene in the coherent type-1 feedforward loop, the pulse in the incoherent type-1 feedforward loop, the bistable nature of double-positive and double-negative feedback loops, the oscillatory behavior of the negative feedback loop, and the "lock-in" effect of positive autoregulation. Conclusions We present a Statecharts-based approach for the modeling of gene regulatory network motifs in biological systems. The basic motifs used to build more complex networks (that is, simple regulation, reciprocal regulation, feedback loop, feedforward loop, and autoregulation) can be faithfully described and their temporal dynamics can be analyzed. Background In order to understand how biological systems behave, a branch of systems biology [1,2] called "executable cell biology" [3] aims to construct computational models which mimic their behavior and which can be used for simulating, in a faithful and cost-effective way, their reactions to external stimuli. The computational model, which is built upon knowledge obtained by performing in vitro experiments, should be complete (it should be able to reproduce all the experimental data) and correct (it should be possible to reproduce its behavior experimentally). The correspondence between the in silico model and the in vitro observed behaviors is verified by applying model-checking techniques [4]. If the model is found to be inconsistent with the experimental data, it must be refined and experimentally validated again. A notable side effect of the model construction process is that the computational model may suggest new hypotheses about the behavior of the biological system, which can then be verified by performing in vitro or in vivo experiments. A largely studied class of biological systems is constituted by the systems which regulate the expression of genes in an organism. Their behavior is often represented by using gene regulatory networks (GRNs), which describe the interactions among genes, proteins and other components at the intra-cellular level. GRNs have been successful among biologists because they constitute an easy-to-use and intuitive tool which can be used to represent the biological model under consideration.
However, their lack of formal semantics prevents their direct use for performing reliable and consistent simulations and for model checking against experimental data. There have been several attempts to define formal mathematical and computational frameworks for modeling GRNs. They can be classified into quantitative approaches, using differential equations or stochastic models [5], and qualitative approaches, mostly based on boolean networks [6], Petri nets [7,8], and bayesian networks [9]. See [10] for a detailed analysis and survey of the modeling and analysis of GRNs. Motifs have been identified that are significantly overrepresented in biological networks [5,[11][12][13][14]. The same motifs have been found in organisms at different levels of complexity, ranging from bacteria to humans. The relationships between different types of motifs and their function have been explored in a number of simple cases, in silico and in vivo [15,16]. Recently, Shin and Nourani [17] have used Statecharts (SCs) [18], a computational framework with a visual language and well-defined semantics, for modeling some small and recurring patterns of interactions in GRNs, called motifs [13]. Gene Regulatory Network motifs GRN motifs are patterns of interconnections occurring in real GRNs with a frequency that is significantly higher than in a randomly generated GRN. Their high frequency suggests that they play an important role in GRN function and can thus be considered as its building blocks. The functional role of the most common GRN motifs has been extensively studied in some organisms, such as E. coli and other model organisms [19]. The simple regulation motif The simple regulation motif is one of the most basic interaction patterns. It is composed of two genes X and Y, where X regulates Y and the interaction is mediated by a signal S_X. The signal can act as an inducer molecule that binds X or can represent a modification of X which activates it. Since the regulation of X on Y is either activation or repression, and S_X can mediate the regulation through either its presence or its absence, four possible types of motifs can be described. A simple regulation motif is coherent if both effects have the same polarity, i.e. activation of Y in the presence of S_X (s1 in Figure 1A) or repression of Y in the absence of S_X (s2). It is incoherent if the effects have different polarity, i.e. repression of Y in the presence of S_X (s3 in Figure 1A) or activation of Y in the absence of S_X (s4). The feedback loop motif The feedback loop motif is composed of two genes X and Y, which regulate each other; their interactions are mediated by a signal S_X (for X regulating Y) and a signal S_Y (for Y regulating X). Since the reciprocal regulations between X and Y can be either activations or repressions, we have different feedback loop motifs. A feedback loop motif is double-positive if both reciprocal regulations of the two genes X and Y are positive, that is, X and Y activate each other (Figure 1B, left). Similarly, a feedback loop motif is double-negative if X and Y repress each other (Figure 1B, middle). If the effects of the reciprocal regulations of the two genes X and Y have different polarity, that is, X represses Y and Y activates X or vice versa, the feedback loop motif is said to be negative. Due to symmetry, we consider only the former negative feedback loop motif (see Figure 1B, right). The feedforward loop motifs The feedforward loop (FFL) motifs are commonly found in many GRNs of widely studied organisms like yeast and E.
coli. They are composed of three genes X, Y, and Z, where X regulates Y and Z, and Y regulates Z. For simplicity, from now on we discuss only the motifs where the regulatory effect depends on the presence of the mediating signals, but our findings also apply to the cases of their absence. Each type of regulation can be either activation or repression. Here we use the term coherent (resp. incoherent) to denote the case where the sign of the direct regulation from X to Z is the same as (resp. the opposite of) the overall sign of the indirect regulation path through Y, as in the seminal paper of Mangan and Alon [20]. Out of the eight possible FFL motifs, the most frequently encountered ones [20] are the coherent type-1 FFL motif c1 and the incoherent type-1 FFL motif i1, both shown in Figure 1C. The combination of the regulations on gene Z by genes X and Y can be given different interpretations [20]. In the following we will assume that such regulations are combined using the AND logic function, as in the arabinose system of E. coli [21]. Although other functions seem more appropriate for other systems, the AND and OR functions are sufficient to explain the most distinctive properties of the FFL. The autoregulation motifs The characteristic element of an autoregulation motif is a gene regulating itself. The autoregulation motif is positive if Y activates itself (see par in Figure 1D) and negative if Y represses itself (see nar in Figure 1D). Statecharts SCs extend state transition diagrams by adding concurrency (i.e., the capability of representing a state as made up of smaller components, all active at the same time) and hierarchy (i.e., the possibility of representing a state with a set of more detailed substates). The hierarchical structuring capabilities of SCs allow one to model systems at different levels of detail, while concurrency is useful for modeling multiple, mostly independent, portions of a system. Moreover, SCs are compositional, that is, they can be defined in terms of other SCs, thus making the specifications more reusable. These additional features, if correctly exploited, provide a solution to the scalability problems of other computational modeling techniques like, e.g., those based on boolean networks and Petri nets, whose effectiveness rapidly decreases when applied to larger systems [3]. We now summarize some of the SCs features that we believe are essential to understand their potential. Please refer to [18] for more complete and detailed information. A SC is composed of states and of transitions between states. A state is composite if it contains other states, and simple otherwise. A composite state is parallel if its sub-states are executed concurrently, and exclusive if exactly one of its sub-states is executed. The overall state of a SC is given by all the atomic states currently under execution. Transitions are used to specify how a system evolves, changing its internal state according to the external stimuli. They can be labeled by events which trigger their activation and the consequent change of state of the system, conditions for their applicability, and actions to be performed during their execution. SCs have an intuitive graphical representation: see Figure 2A, showing a SC modeling the movement and feeding of an organism by means of two concurrent substates. SCs have very good software tool support [22][23][24][25][26][27], which can be used to generate source code (e.g.
in Java) whose execution corresponds to the SCs semantics, and to interactively simulate the system execution. SCs have been extensively studied in software and systems engineering, and have been demonstrated to be particularly well-suited for modeling and designing reactive systems, that is, systems which evolve by reacting to internal or external events or to changed conditions. In the case of GRNs these events can be, for example, the introduction or removal of a protein or of another component. SCs have also been successfully used to model pancreatic organogenesis in the embryonic mouse [28], cell fate specification during C. elegans vulval development [29], and T-cell development in the thymus [30]. Shin and Nourani have used SCs to model GRN motifs [17]. In their approach, each element (gene, protein, signal) can be in one of two states: "on", which means that the gene is expressed or that the protein is present and active, and "off", which means that the gene is not expressed or that the protein is not present or is present in its inactive form. Moreover, activating interactions in GRNs are translated into transitions from the "off" state to the "on" state for the gene being activated. Similarly, inhibiting interactions correspond to transitions from the "on" state to the "off" state. Their SCs model of the coherent simple regulation motifs s1 and s2 is shown in Figure 2B, which in their approach also represents the autoregulation motifs. Results and discussion We present an improved method for modeling gene regulatory network motifs by using SCs and we show its application to model a number of motifs. As in the Shin and Nourani [17] approach, we use two states, "on" and "off", to model each element, with the same meaning. Transitions in our approach are labeled with a logical formula, expressed in terms of the presence or absence of genes and signals, which activates the transition when true. Whenever the transitions between the "on" and "off" states are not present in our SCs model of a motif, this means that the corresponding elements are the independent variables of the modeled motif and their state is possibly changed as a consequence of events outside the motif itself. A distinctive and novel feature of our method with respect to the method of Shin and Nourani is that we map the elements which are involved in the regulation to concurrent states. This offers a number of advantages that will be detailed in the following. We also study the temporal behavior of GRN motifs. Given the discrete nature of SCs, the temporal behavior of SCs models of GRN motifs is somewhat rough, but it nonetheless allows us to simulate some interesting temporal properties of GRN motifs. We are able to model the delay in the activation and the deactivation of the "output" gene in the coherent type-1 feedforward loop motif (c1 FFL), and the pulse in the incoherent type-1 feedforward loop motif (i1 FFL). We are also able to partially model the temporal dynamics of feedback loop motifs and autoregulation motifs, in the sense that the qualitative behavior is represented, but the boolean nature of our SCs-based approach does not allow us to model more sophisticated temporal mechanisms which require the use of quantitative aspects, like acceleration and damping. Model of simple regulation Our models of the simple regulation motifs s1 and s2 are shown in Figure 3A, left and right.
In both cases, all the elements involved in the regulation, the genes X and Y and the signal S_X, are modeled as concurrent states and, for each of them, we use two states for modeling its presence (and absence). The activation and deactivation of the regulated gene are modeled by two transitions connecting its presence states, which are triggered according to the truth value of logical formulas depending on the presence of the gene X and the signal S_X. Note that in the logical formulas the green symbol ∨ represents the logical connective OR, while the orange symbol ∧ represents the logical connective AND. Note also that in the logical formulas, for any element X, the expression X = 1 is abbreviated as X and the expression X = 0 is abbreviated as X̄. Our approach for modeling simple regulation is non-ambiguous, because motifs s1 and s2 are represented by two different SCs. See again Figure 3A for our model and compare it with the ambiguity deriving from the Shin and Nourani model shown in Figure 2B, where the same SC is used to describe both s1 and s2. Mapping different motifs onto the same SC is a potential source of problems when the mapping is inverted (i.e., from the SC to motifs), because it is not clear whether the SC should be mapped onto both the original motifs (thus possibly leading to an over-specification) or onto only one of them. Moreover, the Shin and Nourani model for coherent simple regulations shown in Figure 2B is incomplete, because it implicitly assumes that the regulating gene X is always expressed. But ignoring the situation where X is not expressed can be significant if, for example, the same gene has a repression role in other parts of the network. If we try to solve their incompleteness problem by adding another state for X = 0, then we have to duplicate the states for Y = 0 and Y = 1, thereby obtaining the SC of Figure 3B and losing the scalability advantage of SCs. In fact, their model does not fully exploit the concurrency features of SCs. This determines sub-optimality, because it does not allow the size of the system to be reduced. Their method is therefore not scalable: the complexity of their models grows faster than their size. Moreover, since the states of the regulated gene are modeled as substates of the regulating gene, and not as concurrent states, it is not possible to model networks containing genes which reciprocally regulate each other (see the model of the feedback loop presented below). Note that these problems of [17], just described with reference to coherent simple regulations, also affect the modeling of the other, more complex, motifs. Similar considerations also apply to the modeling of the incoherent simple regulation motifs s3 and s4, whose SCs models with our approach are shown in Figure 3C; a minimal executable sketch of this encoding is given below. Model of feedback loop The feedback loop motif is not addressed by the modeling approach defined by Shin and Nourani [17], and we will shortly prove that it cannot be. We first note that the authors themselves observe in the "Further Discussion" section of their paper [17] that the feedback loop motif is not part of their modeling scheme and that they intend to incorporate it in the future. We observe that this is not possible in their method, because it requires the states of the regulated gene to be substates of the states of the regulating gene. Since in the feedback loop motif X and Y act as both regulated and regulating genes, this requirement cannot be fulfilled.
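Before turning to our feedback loop model, here is the promised sketch: a minimal boolean approximation, in Python, of the modeling style just described, with each element as a concurrent on/off variable and each regulated gene updated by a guard formula over the current global state. This is our own illustrative rendering, not the authors' Statecharts tool or its generated code; the synchronous-update assumption follows the paper's statement that the regulated gene's state at t + 1 depends on its regulators' states at t.

```python
# Minimal boolean approximation of the SC-based motif models: each element
# (gene or signal) is a concurrent on/off state; a regulated gene takes the
# value of its guard formula evaluated on the current global state.
# Elements without a guard are the motif's independent variables.

def simulate(guards, state, steps):
    """Synchronously update `state` for `steps` steps and return the trace.

    guards: dict mapping an element name to a function of the current state
            that returns its next boolean value; unlisted elements keep
            their current value (independent variables).
    """
    trace = [dict(state)]
    for _ in range(steps):
        state = {k: (guards[k](state) if k in guards else state[k])
                 for k in state}
        trace.append(dict(state))
    return trace

# Coherent simple regulation s1: Y is activated in the presence of S_X,
# i.e. Y is "on" exactly when both X and S_X are present.
s1 = {"Y": lambda s: s["X"] and s["S_X"]}
# Coherent simple regulation s2: Y is repressed in the absence of S_X,
# i.e. Y is "off" exactly when X is present and S_X is absent.
s2 = {"Y": lambda s: not (s["X"] and not s["S_X"])}

init = {"X": True, "S_X": True, "Y": False}
for t, s in enumerate(simulate(s1, init, 3)):
    print(t, s)  # Y switches on at t = 1 and stays on while X and S_X hold
```

Note that s1 and s2 are two distinct guard sets, mirroring the non-ambiguity argument above: inverting the mapping from model to motif is unproblematic because each motif has its own encoding.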
Our modeling approach does not have this limitation because, as already mentioned, the genes and the signals are modeled as concurrent states. The double-positive feedback loop motif has two genes X and Y which reciprocally activate each other. The model for this motif can easily be obtained from the model of the coherent simple regulation motif s1 (previously shown in Figure 3A) by adding the states for the signal S_Y and the transitions between the states of the gene X which correspond to the regulation of gene X by Y. The resulting model and the motif are shown in Figure 4A. From now on we shall also discuss the temporal behavior of each SCs model representing a given in vitro motif, so as to determine how closely each model is able to reproduce the corresponding in vitro behavior. Note that, since a SC is a discrete model, the state of the regulated gene at time instant t + 1 depends on the state of its regulating gene at time instant t. Also note that the results of this investigation are a priori limited by the fact that, since our SCs models are boolean, any behavior requiring more than two values in the domain cannot be reproduced. The temporal behavior of the SCs model of the double-positive feedback loop motif is shown in the diagrams reported in Figure 4B. In particular, when X and Y are initially both present or both absent, it exhibits the "joint bistability" behavior [31], that is, X and Y are either both always "off" or both always "on", as shown in Figure 4B (left and middle). But, as can be seen in Figure 4B (right), when the initial states of X and Y differ, the temporal behavior, due to the approximation of the boolean domain where only two values are available, is not able to escape from the oscillating pattern and fall into one of the two steady states that are known from the in vitro experiments [5,31]. Our approach also allows us to build the model for the double-negative feedback loop motif, where the two genes X and Y reciprocally repress each other (see Figure 5A). Also in this case, our SCs model is able to reproduce the temporal behavior of the motif, that is, X always "on" and Y always "off", or vice versa (this is called "exclusive bistability" in [31]). The corresponding diagrams are reported in Figure 5B (left and middle). Once again, the roughness of the boolean model does not allow the temporal behavior to be attracted into one of the two steady states when the initial states of X and Y are the same; see Figure 5B (right). For completeness, we also show the SCs model of the negative feedback loop motif (Figure 6A) and the diagram of its temporal behavior (Figure 6B), where the oscillatory behavior known for this kind of motif [32] is reproduced. Some variations of this motif exhibit a damped oscillatory behavior: as said above, the roughness of the boolean model does not allow our modeling approach to reproduce it. We are working on an extension to overcome these limitations.
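Continuing the toy boolean rendering introduced above (again our illustration only, with the mediating signals assumed permanently present so that each gene's guard reduces to its partner's previous state), the three feedback loops and the behaviors just reported can be reproduced as follows:

```python
# simulate() is the helper defined in the earlier sketch. Signals S_X and S_Y
# are assumed permanently present, so each guard reads the partner gene only.
double_positive = {"X": lambda s: s["Y"], "Y": lambda s: s["X"]}
double_negative = {"X": lambda s: not s["Y"], "Y": lambda s: not s["X"]}
negative_loop   = {"X": lambda s: s["Y"], "Y": lambda s: not s["X"]}  # Y activates X, X represses Y

def show(name, guards, x0, y0, steps=6):
    trace = simulate(guards, {"X": x0, "Y": y0}, steps)
    print(name, [(int(s["X"]), int(s["Y"])) for s in trace])

show("joint bistability    ", double_positive, True, True)    # stays (1,1)
show("boolean oscillation  ", double_positive, True, False)   # artifact of the two-valued domain
show("exclusive bistability", double_negative, True, False)   # stays (1,0)
show("negative-loop cycle  ", negative_loop, False, False)    # period-4 oscillation
```

The second line reproduces exactly the limitation discussed for Figure 4B (right): from mixed initial states the boolean model oscillates instead of settling into a steady state.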
This motif, already shown in Figure 1C and reported for convenience in Figure 7A (top), can be modeled in our approach by using the SC of Figure 7A (bottom), which, despite its discrete nature, is able to exhibit the same temporal behavior as the in vitro system, consisting of (i) a delayed activation of Z after the activation of X, and (ii) an immediate deactivation of Z when X disappears (such a behavior is called a "sign-sensitive delay" in [13]). A diagrammatic representation of the temporal behavior of the considered SCs model is reported in Figure 7B, where it can be observed (right) that there is no delay in the deactivation of Z (Z and Y both become inactive at time instant t = 3, immediately after X disappears at time instant t = 2), but its activation (left) is delayed (only Y is active at time instant t = 3, right after X appears at time instant t = 2, and Z becomes active only in the step after Y's activation, that is, at time instant t = 4). Model of incoherent feedforward loop The i1 FFL motif (once again, with the AND combination of X's and Y's regulations on Z) has been used as a model of the galactose system in E. coli [33], where it produces an impulsive behavior in the regulated gene, which first rises very quickly and soon afterwards goes down. The i1 FFL motif, already illustrated in Figure 1C and reported for convenience in Figure 8A (top), is modeled by using the SC of Figure 8A (bottom), which can reproduce the pulse-like dynamics, as shown in the temporal diagram presented in Figure 8B. Soon after X becomes active at time instant t = 2 (left), Z also gets activated at time instant t = 3 together with Y but, after one more time step, the repressive action of Y deactivates Z at time instant t = 4. Of course, the approximation of the boolean domain only allows a unit-time impulse, but that is enough to show that our SCs model is able to reproduce the dynamic behavior typical of this motif. When X becomes inactive at time instant t = 2 (right) there is no effect on Z, which remains inactive, while Y becomes inactive in the next step, at time instant t = 3. On the other side, our SCs model is not able to express the response acceleration dynamics of the i1 FFL motif with respect to simple regulation [33], as previously said in the discussion of the intrinsic limitations of the boolean domain. We are currently working on the extension of our SCs-based approach to the more general case of a many-valued discrete domain. Model of autoregulation The negative autoregulation motif is a very common and widely studied pattern of regulation. Experimental results [34] have shown that it behaves as an accelerator of the gene response (with respect to the simple regulation motif) in the presence of a high initial concentration of the self-regulating gene. The opposite behavior is exhibited by the positive autoregulation motif, which slows down the production of the gene [35]. Our models for the negative autoregulation motif (see Figure 9A) and the positive autoregulation motif (see Figure 9B) are inherently boolean: therefore they do not have the means of reproducing the acceleration and deceleration which can be observed in vitro. The diagrams of their temporal behavior are shown in Figure 9C (left) and (right), respectively. As already mentioned, we plan to extend our modeling approach to take these aspects into account.
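In the same toy boolean rendering (our illustration; the AND combination is the one assumed in the text, while the exact guard shapes are our reading of Figures 7A, 8A and 10A), the two FFLs reproduce the sign-sensitive delay and the unit pulse, and the two autoregulation variants remain distinguishable:

```python
# simulate() is the helper defined in the simple-regulation sketch.
c1_ffl = {"Y": lambda s: s["X"],
          "Z": lambda s: s["X"] and s["Y"]}        # coherent type-1 FFL
i1_ffl = {"Y": lambda s: s["X"],
          "Z": lambda s: s["X"] and not s["Y"]}    # incoherent type-1 FFL

init = {"X": True, "Y": False, "Z": False}
print([int(s["Z"]) for s in simulate(c1_ffl, init, 4)])  # [0, 0, 1, 1, 1]: delayed activation
print([int(s["Z"]) for s in simulate(i1_ffl, init, 4)])  # [0, 1, 0, 0, 0]: unit pulse

# Autoregulation with an AND-gated activating input X (our reading of Figure 10A):
# positive autoregulation locks Y into its initial value while X is expressed,
# negative autoregulation makes Y alternate while X is expressed.
pos_auto = {"Y": lambda s: s["X"] and s["Y"]}
neg_auto = {"Y": lambda s: s["X"] and not s["Y"]}
```

Because pos_auto, neg_auto and the simple-regulation guards are all syntactically different, the boolean encoding itself already separates the motifs, which is the point made next about the comparison with [17].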
On the other side, note that Shin and Nourani have observed in [17] that, with their modeling approach, both negative and positive autoregulation are identical to simple regulation in the logical domain. But, as can be seen by comparing our SCs models for simple regulation (Figures 3A and 3C) to our SCs models for negative and positive autoregulation (to the right in Figures 9A and 9B), our modeling approach allows the various cases to be fully distinguished in the logical domain. This is true even if we build with our approach the SCs models for exactly the same autoregulation motifs considered by Shin and Nourani in [17] (shown in Figure 10A), where Y is regulated by the AND combination of itself and an additional activating gene X. Such SCs models are presented for completeness in Figures 10B (positive autoregulation) and 10C (negative), and the temporal dynamics of Y when X is expressed is the same as shown in Figure 9C. Conclusions We have presented a Statecharts-based approach for modeling motifs of gene regulatory networks which (i) avoids the representation problems (incompleteness, lack of concurrency, ambiguity) of a previous proposal [17], (ii) is able to model motifs that could not be modeled by following the approach of [17], (iii) produces more faithful models for the autoregulation motifs than [17], and (iv) is able to exhibit a temporal dynamics which qualitatively follows the actual biological dynamics. More specifically, we have been able to represent simple regulation, feedforward loop, feedback loop, and autoregulation, which are the basic motifs that can be used to model more complex networks. Furthermore, our approach, even if intrinsically boolean and discrete, allows us to give a faithful qualitative description of the temporal behavior of the coherent type-1 feedforward loop motif (c1 FFL), of the incoherent type-1 feedforward loop motif (i1 FFL), of the feedback loop motifs, and of the positive autoregulation motif. We are now planning, as future work, to extend our approach to also consider quantitative information, so as to provide a more realistic executable model of GRN motifs and their temporal dynamics.
Apoptotic β-cells induce macrophage reprogramming under diabetic conditions Type 2 diabetes mellitus (T2DM) occurs when insulin-producing pancreatic β-cells fail to secrete sufficient insulin to compensate for insulin resistance. As T2DM progresses, apoptotic β-cells need to be removed by macrophages through efferocytosis, which is anti-inflammatory by nature. Paradoxically, infiltrating macrophages are a main source of the inflammatory cytokines that lead to T2DM. It is unclear how apoptotic β-cells impact macrophage function. We show that, under diabetic conditions, phagocytosis of apoptotic β-cells causes lysosomal permeabilization and generates reactive oxygen species that lead to inflammasome activation and cytokine secretion in macrophages. Efferocytosis-induced lipid accumulation transforms islet macrophages into foam cell-like cells outside the context of atherosclerosis. Our study suggests that, whereas macrophages normally play a protective anti-inflammatory role, the increasing demand for clearing apoptotic cells may trigger them to undergo proinflammatory reprogramming as T2DM progresses. This shift in the balance between opposing macrophage inflammatory responses could contribute to the chronic inflammation involved in metabolic diseases. Our study highlights the importance of preserving macrophage lysosomal function as a therapeutic intervention against diabetes progression. Type 2 diabetes mellitus (T2DM) is becoming a global epidemic. Growing recognition of chronic islet inflammation as a critical factor in the pathogenesis of T2DM opens up a new area in β-cell research and highlights the importance of macrophages in islet biology (1). In healthy organs, apoptotic cells are rapidly cleared by phagocytic cells, making the presence of dead cells rare (2). The process of clearing apoptotic cells is termed efferocytosis, which involves engulfment of apoptotic cells by macrophages, followed by the targeting of cell debris to, and its degradation by, digestive enzymes in the acidic lysosomes. Under normal conditions, efferocytosis reprograms macrophages toward an anti-inflammatory phenotype that results in the resolution of inflammation (3). Pancreatic β-cells specialize in the synthesis and release of insulin in response to glucose.
Similar to that of lysosomes, the internal milieu of insulin granules creates an acidic environment, maintained through ATPases, that allows for the crystallization of insulin around zinc molecules (4). Unlike most materials, insulin crystals are notoriously slow to degrade in the lysosomes, as shown by studies of β-cells and liver cells as well as by in vitro processing with lysosomal proteases (5). A resting mouse β-cell has roughly 10,000 insulin granules, each of which may contain as many as 20,000 insulin crystals (6). Thus, when presented with an increased demand for β-cell efferocytosis (phagocytosis of apoptotic β-cells) in T2DM, macrophages must deal with the accumulation of insulin crystals that are not readily degraded by the lysosomes. To date there has been no study that addresses the impact of long-lived insulin crystals on macrophage function. After engulfment, pathogenic crystals (monosodium urate, calcium pyrophosphate dihydrate, cholesterol, and cysteine crystals) permeabilize the lysosomal membrane and activate the NLRP3 inflammasomes (7-9). The pathogenicity of a crystal is influenced by its size and shape, as not all crystal particles activate the inflammasome (10). In addition to crystals, substances that are generally thought to be inert can effect lysosomal dysfunction (11). The question of whether the insulin crystal can behave as a pathogenic crystal or a lysosomal antagonist prompted us to test the idea that β-cell efferocytosis may lead to lysosomal defects due to the accumulation of insulin crystals in macrophages. Chronic islet inflammation contributes to the pathogenesis of T2DM (1). Pancreatic islets from diabetic patients have increased macrophage infiltration (12). The proinflammatory cytokine interleukin-1β (IL-1β) is a major contributor to islet inflammation and T2DM, reducing β-cell function and promoting β-cell apoptosis (13). IL-1β secretion requires the activation of NLRP3 inflammasomes, as Nlrp3 knockout mice fed a high-fat diet showed reduced islet IL-1β protein expression and β-cell death compared with WT mice (14). However, the mechanisms of NLRP3 inflammasome activation in infiltrating macrophages are not fully understood (15). Endocannabinoids and human islet amyloid polypeptides are thus far the only identified activators of the NLRP3 inflammasomes in infiltrating islet macrophages (16,17). In this study, we conducted in vitro experiments to show that insulin crystals from β-cell efferocytosis can cause inflammasome activation and release of IL-1β from macrophages, thus acting as a previously unrecognized cause of islet inflammation in vitro. In vitro and in vivo experimental models for studying β-cell efferocytosis To study β-cell efferocytosis, we created an in vitro system using bone marrow-derived primary macrophages (BMMs) or J774a.1 macrophage-like cells (J7) incubated with UV-induced apoptotic MIN6 cultured β-cells (apMIN6) or apoptotic isolated islets. Presented in Fig. 1, A-C, are images demonstrating the in vitro experimental systems specifically set up for this study. Efferocytosis was carried out by overnight incubation of BMMs or J7 cells with apMIN6 cells, where the J7 cells were labeled with Alexa 488-cholera toxin subunit B (CtB) and apMIN6 cells were labeled with succinimidyl esters (NHS) of Alexa Fluor 546 (18). J7 cells (green) were observed to be engulfing pieces of apMIN6 cells (red) during efferocytosis (Fig. 1A, enlarged cells to the right).
To study primary islet macrophages derived in vivo, we induced islet cell death in vivo by injecting mice with five low doses of streptozotocin 2 weeks prior to islet isolation (19). We used F4/80, an extensively used surface marker for mouse macrophages (20), to identify macrophages. Because islet macrophages were few in number (12,21) and antibody penetration was severely limited in an intact islet, cells labeled with F4/80 were rarely found inside an islet. Therefore, we focused on islet macrophages that migrated out of an islet during in vitro culturing (Fig. 1B; the islet is not shown), similar to the examples shown in Fig. S1, A and B. We used F4/80 (blue) to detect macrophages, insulin (red) to verify that efferocytosis had taken place, and LAMP-1 (green) to identify the lysosomes as the compartment in which ingested insulin accumulated. Figure 1. A, co-culture of J7 with apMIN6. apMIN6 cells were labeled with NHS (red) before incubation with J7 cells overnight, followed by CtB labeling (green) and wide-field microscopy. B, primary islet macrophages from STZ-treated apoptotic islets. Single islets isolated from STZ-treated mice were seeded individually in imaging dishes and stained with F4/80, insulin, and LAMP-1 antibodies. Shown here is a confocal image of an islet macrophage that migrated out of the islet. C, interaction of BMMs with apoptotic islets. Isolated islets were cultured for 2 weeks to induce apoptosis before BMMs were added for 24 h. The co-culture was stained with F4/80 and insulin antibodies, and imaged by confocal microscopy. The differential interference contrast (DIC) panel shows the imaged portion of the islet. D, verification that NHS-labeled apoptotic bodies were phagocytosed by macrophages. Floating apMIN6 cells were labeled with NHS (red) before incubation with attached J7 cells for 2 h, with or without bafilomycin A1 (BafA1), followed by extensive rinsing, and imaged by wide-field microscopy. CtB (green) was used to label the J7 plasma membrane. NHS fluorescence inside CtB was quantified from 60 cells. *, p < 0.05. A.U., arbitrary units. To induce islet cell death in vitro, islets were kept in culture for 2 weeks followed by incubation with BMMs (Fig. 1C). Here we again observed insulin-positive cell debris (green) inside F4/80-positive BMMs (red). We also used flow cytometry to show insulin accumulation in J7 cells after phagocytosis of apMIN6 cells (Fig. S1C). Finally, we included bafilomycin A1, a partial inhibitor of phagocytosis (22), during the incubation of J7 cells with apMIN6 cells to make sure that the NHS-positive cell debris associated with macrophages truly reflected phagocytosed material rather than nonspecific sticking to macrophage cell surfaces (Fig. 1D). Bafilomycin treatment did not affect cell viability (data not shown). Phagocytosis of apMIN6 cells leads to insulin accumulation and enlarged lysosomes Phagocytosis of apMIN6 cells led to insulin staining in LAMP-1-positive lysosomes in J7 cells 1 day after efferocytosis (Fig. 2A, arrows). We used apoptotic 3T3-L1 cells (ap3T3-L1) as a control for efferocytosis of non-insulin cells. Incubation of J7 cells with monomeric insulin in solution did not cause accumulation of insulin (Fig. 2A, J7+insulin), indicating that the insulin detected in the J7 lysosomes came from phagocytosis of apMIN6 cells. After removing apMIN6 cells and waiting for 3 days, insulin from β-cell efferocytosis remained in J7 lysosomes (Fig. 2B, arrows) and caused marked lysosomal swelling (Fig.
2B, arrowheads; Fig. S2A). This phenomenon was not observed in J7 cells 3 days after phagocytosis of ap3T3-L1 cells (Fig. 2B) or after incubation with monomeric insulin for 3 days (Fig. S2B). Lysosome size increased in J7 cells after a 1-day incubation with either apMIN6 or ap3T3-L1 cells (Fig. 2C, gray bars) in response to efferocytosis. However, J7 cells incubated with apMIN6 cells, but not with ap3T3-L1 cells, still had enlarged lysosomes 3 days (Fig. 2C, black bars) or even 7 days (not shown) after removal of apMIN6 cells. Quantified by another method, J7 cells incubated with apMIN6 cells had significantly more LAMP-1-positive compartments that were greater than 2 μm in diameter (Fig. 2D, black). The comparison with ap3T3-L1 cells (Fig. 2D, gray) suggested insulin accumulation as the underlying cause of the change in lysosomal morphology. As a marker for lysosome biogenesis, we measured total LAMP-1 expression. When normalized to the lysosome area, there was a deficit in LAMP-1 as a result of β-cell efferocytosis (Fig. 2E). To see if enlarged lysosomes were observed in macrophages from a diabetic mouse model, we isolated pancreatic islets from wild-type (WT) and db/db diabetic mice, and stained dissociated islet cells with F4/80 to identify macrophages and LAMP-1 to label lysosomes. A db/db macrophage shown in Fig. S2D displayed enlarged lysosomes (yellow arrowheads) compared with that of WT (Fig. S2C). The histogram distribution analysis of LAMP-1 structures (Fig. S2E) within the expected size range (23) shows that, whereas the majority of the lysosomes (<0.5 μm², Fig. S2F) were similar in size between WT and db/db, the number of lysosomes that were >0.5 μm² (Fig. S2G) was significantly higher in db/db macrophages. These results show that the dynamic regulation of lysosomal biogenesis that normally accompanies efferocytosis was disrupted in cells with insulin accumulation, most likely owing to the prolonged presence of slowly degradable insulin crystals (4, 5, 24). Figure 2. Phagocytosis of apMIN6 cells leads to insulin accumulation and enlarged lysosomes. A and B, incubation with apMIN6, but not ap3T3-L1 or monomeric insulin, induced insulin accumulation and lysosome swelling. One day after efferocytosis, insulin was detected in LAMP-1-positive lysosomes, indicated by the arrows (A). By day 3, enlarged lysosomes (arrowheads) were found by confocal microscopy in J7 cells incubated with apMIN6 (B). Scale bars, 5 μm. C, two-dimensional lysosome area was measured by manually tracing the lysosomes outlined by LAMP-1. D, lysosome outlines were determined by LAMP-1, and those with a diameter greater than 2 μm were counted. E, LAMP-1 fluorescence intensity was measured and normalized to lysosome area. C-E, data are presented as fold-change relative to J7 cells alone. n = 48 lysosomes from 9 cells. *, p < 0.05 versus J7 alone. Phagocytosis of apMIN6 cells induces lysosomal dysfunction Because phagocytosis of apMIN6 cells induced morphological changes in macrophage lysosomes, we examined the consequences of β-cell efferocytosis on lysosomal function. LysoTracker concentrates in acidic organelles and is a general marker of lysosomal function (25). We performed a time-course study in which J7 cells were incubated with apMIN6 cells and then labeled with LysoTracker (Fig. 3A). Figure 3. LysoTracker staining was decreased by phagocytosis of apMIN6 cells.
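The lysosome-size quantification described above (tracing LAMP-1 outlines and counting compartments above a size cutoff) can also be approximated automatically. The sketch below, assuming an 8-bit LAMP-1 channel image and a known pixel size, is our illustration with scikit-image, not the authors' actual analysis pipeline:

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def lysosome_areas(lamp1_img, um_per_px):
    """Segment LAMP-1-positive compartments and return their areas in um^2."""
    mask = lamp1_img > threshold_otsu(lamp1_img)   # global threshold on the LAMP-1 channel
    regions = regionprops(label(mask))             # connected components = candidate lysosomes
    return np.array([r.area * um_per_px**2 for r in regions])

# Example with a synthetic image; a real analysis would load a microscopy frame.
rng = np.random.default_rng(1)
img = rng.integers(0, 60, size=(256, 256)).astype(np.uint8)
img[100:130, 100:130] = 200                        # one bright, "enlarged" compartment
areas = lysosome_areas(img, um_per_px=0.1)
print("compartments > 0.5 um^2:", int((areas > 0.5).sum()))
```

The same per-compartment areas would feed the histogram-style comparison made for the WT versus db/db macrophages.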
A, J7 cells were incubated with apMIN6 for 24 h, chased for various lengths of time, labeled with LysoTracker and imaged by wide-field microscopy. LysoTracker fluorescence per cell was quantified from 16 imaging fields. B, J7 cells were pulsed with fresh apMIN6 every 24 h for 96 h before being labeled with LysoTracker, with apoptotic 3T3-L1 (ap3T3) used as a control for general efferocytosis of non-insulin-containing cells. n = 12 imaging fields. C, BMMs were incubated with NHS-labeled apMIN6 for 24 h and chased for 3 days before LysoTracker fluorescence was measured from all cells (gray bars) or NHS-positive cells (black bars). Arrows point to NHS-positive BMMs, indicating efferocytosis. At least 20 cells were used for each condition. D, BMMs were incubated with apMIN6 for 24 h and chased for 3 days in HG or HGP medium before LysoTracker fluorescence was measured from all cells. n = 9 imaging fields. All measurements were normalized to those obtained in J7 cells or BMMs alone (control). *, p < 0.05 versus control. Scale bars, 50 μm. LysoTracker staining first increased immediately after efferocytosis (0 h chase), consistent with an up-regulation of lysosomal biogenesis that may decrease lysosomal pH to enhance the activity of lysosomal enzymes. The level of LysoTracker returned to normal 24 h after efferocytosis and continued to decrease until it was only half of that in the control cells 96 h after efferocytosis. When J7 cells were continuously pulsed with a fresh supply of apMIN6 cells every 24 h, to mimic the continuous presence of apoptotic β-cells in vivo, LysoTracker staining was still significantly decreased after 96 h, which was not the case with ap3T3-L1 cells (Fig. 3B). Using NHS-labeled apMIN6 cells, there was less LysoTracker accumulation in BMMs that actively phagocytosed NHS-apMIN6 cells (Fig. 3C, arrows) compared with NHS-negative BMMs. When LysoTracker staining was quantified in NHS-positive BMMs, a significant decrease was detected compared with BMMs alone (Fig. 3C, Day 3). The decrease in LysoTracker fluorescence was not simply due to an exclusion of LysoTracker by insulin accumulation, because dextran concentrated in, and co-existed with insulin in, the enlarged lysosomes in both J7 cells and BMMs (Fig. S2A). To study how elevated glucose and excess fatty acids affect macrophage function, we exposed BMMs to high glucose (HG) or high glucose with palmitate (HGP). A supraphysiological concentration of glucose (30 mM) was used as HG because it is a widely used tool to produce glucose-induced cellular dysfunction in vitro. We acknowledge that this is purely an in vitro experimental system. There was no difference in LysoTracker between control and HG-treated BMMs, or between control and apMIN6-exposed BMMs. When BMMs were treated with HG or HGP, LysoTracker staining was modestly but significantly decreased in BMMs exposed to apMIN6 cells compared with BMMs alone, quantified from all the cells (Fig. 3D). This result suggested that additional metabolic stress could exacerbate lysosomal defects. Because HG and HGP seemed to affect lysosomal function to a similar degree, we focused on the HG treatment. Phagocytosis of apMIN6 cells induces functional defects in lysosomes An increase in lysosomal pH could contribute to the reduced intensity of the acidotropic LysoTracker (26). We used fluorescence ratio imaging of lysosomes containing endocytosed fluorescein-rhodamine-dextran (27) to determine whether efferocytosis altered lysosomal pH.
Fluorescein fluorescence increases as pH increases, whereas rhodamine fluorescence is pH-independent, thus rendering the fluorescein/rhodamine ratio pH-dependent (Fig. S3). BMMs and BMMs exposed to apMIN6 cells cultured in normal medium did not show a significant difference in lysosomal pH (data not shown). When cultured in HG medium, BMMs + apMIN6 showed larger lysosomes with a higher fluorescein/rhodamine ratio, indicating higher lysosomal pH (Fig. 4A). When individual lysosomes were quantified, there were more lysosomes with higher pH values (Fig. 4B). Because lysosomes with insulin accumulation were larger, we measured lysosomal pH according to the size of the lysosomes. Interestingly, all lysosomes had reduced acidification after β-cell efferocytosis (Fig. 4C), suggesting a general defect in the lysosomal system. Because increased pH can greatly compromise the proteolytic power of lysosomes, we next examined whether reduced lysosomal acidification due to β-cell efferocytosis could lead to inefficient degradation of lysosomal contents. We took advantage of a fluorogenic substrate for proteases, DQ-ovalbumin. Hydrolysis of DQ-ovalbumin by lysosomal proteases relieved the self-quenched BODIPY FL dyes, resulting in increased fluorescence intensity (28) and formation of excimers that can be visualized using a red-emission long-pass filter (Fig. 4D, panel ii). There was a noticeable decrease in the number of excimer puncta observed in cells exposed to apMIN6 cells (Fig. 4D, panel iv versus panel ii). When quantified, total fluorescence intensity was reduced in both DQ channels (Fig. 4E), suggesting a diminished proteolytic capacity of the lysosomes.

Lysosome permeabilization in phagocytic macrophages exposed to high glucose

The reduced LysoTracker staining after β-cell efferocytosis could be due to either a rise in lysosomal pH or leakage in the lysosomal membrane. To see if there was lysosomal leakage, we used pH-independent, lysine-fixable dextran compatible with co-staining with other markers. Although the morphology of the lysosomes was altered in both J7 cells (Fig. 5A) and BMMs (Fig. 5B, Fig. S2A), dextran was confined to the LAMP-1-positive compartments and co-existed with insulin aggregates (Fig. 5A, arrowheads). Two of the enlarged lysosomes in a BMM are shown in Fig. 5B, outlined by LAMP-1 (cyan) with dextran (red) and insulin (green) inside. These results show that although insulin aggregates caused lysosome swelling, they were not as potent as other pathogenic crystalline particulates in inducing lysosome permeabilization (7-9). We next imaged dextran-loaded BMMs in high glucose culture (HG-BMMs) to test whether additional cell stress would induce lysosome permeabilization. Dextran remained in distinct puncta after efferocytosis in control BMMs, indicating intact lysosomes (Fig. 5C, yellow box). When HG-BMMs were exposed to apMIN6, less dextran was found in enlarged insulin-positive lysosomes (Fig. 5D, arrowhead). More importantly, dextran appeared as additional diffuse staining in the cytoplasm, indicating leakage from lysosomes (Fig. 5D, yellow box). Even at single confocal planes, dextran was present in the cytoplasm (Fig. S4B, yellow box). HG treatment alone did not induce either enlarged lysosomes or lysosomal leakage (Fig. S4C). We quantified the distribution of dextran in control and HG-BMMs after efferocytosis (Fig. 5, C and D).
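One way such a distribution can be quantified is to compare dextran fluorescence inside punctate structures with total cellular dextran fluorescence, so that lysosomal leakage lowers the punctate fraction. A hedged sketch follows; the file name, top-hat footprint size, and background percentile are illustrative assumptions rather than the authors' parameters.

```python
import numpy as np
from skimage import io, filters, morphology

# Illustrative leakage metric: the fraction of total dextran fluorescence that
# remains in punctate (lysosomal) structures; leakage lowers this fraction.
dextran = io.imread("dextran_channel.tif").astype(float)  # hypothetical image
dextran = np.clip(dextran - np.percentile(dextran, 5), 0, None)  # crude background subtraction

# White top-hat keeps small bright puncta and suppresses diffuse cytoplasmic signal
puncta = morphology.white_tophat(dextran, morphology.disk(5))
puncta_mask = puncta > filters.threshold_otsu(puncta)

punctate_fraction = dextran[puncta_mask].sum() / dextran.sum()
print(f"dextran in puncta / total dextran = {punctate_fraction:.2f}")
```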
Compared with BMMs, HG-BMMs had a 21.4 ± 4% decrease in the ratio of dextran intensity in the lysosomes (punctate structures) to total dextran intensity (n = 8 imaging fields per condition). We next performed live-cell imaging of BMMs co-cultured with apoptotic islets to mimic the in vivo interaction of macrophages with islets. This would also rule out the possibility that fixation and permeabilization were the cause of the dextran observed in the cytoplasm, even when lysine-fixable dextran was used. Shown in Fig. 5, E and F, are two regions of the same islet co-cultured with HG-BMMs prelabeled with dextran. BMMs situated outside the islet (Fig. 5E, box 1) had small punctate dextran staining, similar to BMMs cultured alone (Fig. S5A). For BMMs in contact with the islet cells (Fig. 5F, box 2), either on the coverslip plane (Fig. 5F) or inside the islet away from the coverslip (Fig. S5B), several BMMs showed enlarged lysosomes (Fig. 5F, white arrowheads) along with the presence of dextran in the cytoplasm (Fig. 5F, yellow arrows). This suggested that glucose-induced cell stress resulted in a weakening of the lysosomal membrane, thus enabling insulin aggregates to permeabilize it. If this were true, we should see less dextran in a portion of the lysosomes in phagocytic HG-BMMs due to insulin-induced permeabilization. When cultured in normal medium, there was a slight right shift in the dextran intensity from phagocytic BMMs compared with BMMs alone (Fig. 5G). On the other hand, when BMMs were pretreated with HG, there was a significant left shift in the dextran intensity in phagocytic BMMs, indicating reduced dextran in the permeabilized lysosomes (Fig. 5H).

Figure 4 (legend, partial): B, histogram of lysosomal pH values from BMMs alone and BMMs exposed to apMIN6. C, lysosomes were pooled and an average of their pH values is presented as a function of lysosome size. D and E, the proteolytic capacity of the lysosomes was measured using a fluorogenic substrate for lysosomal proteases, DQ-ovalbumin (DQ), whose green fluorescence increased dramatically upon hydrolysis by proteases. Concentrated DQ fragments gave rise to red fluorescence emission. BMMs were treated with HG prior to loading with DQ. DQ fluorescence per cell was quantified from 12 imaging fields. *, p < 0.05; **, p < 0.01. Scale bars, 20 µm.

High glucose induces pro-inflammatory responses from phagocytic BMMs

Lysosome permeabilization by pathogenic crystals results in NLRP3-inflammasome activation and IL-1β secretion. To see if efferocytosis-induced lysosomal leakage would promote macrophage inflammation, we show altered gene expression of TNFα, IL-18, and IL-10 (Fig. S6A). For control BMMs, β-cell efferocytosis had little effect on IL-1β release, most likely due to the very low levels of cytokine production in unprimed macrophages (3). When BMMs were primed with bacterial lipopolysaccharide (LPS) to stimulate cytokine production, an increase in IL-1β secretion was detected. Interestingly, β-cell efferocytosis significantly decreased LPS-stimulated IL-1β secretion (Fig. 6A), exhibiting an anti-inflammatory response usually elicited by efferocytosis. Similar results showed that efferocytosis of apoptotic neutrophils by BMMs inhibited cytokine release (3). If BMMs were first cultured in HG, efferocytosis of MIN6 but not 3T3-L1 cells significantly increased IL-1β secretion (Fig. 6B, Fig. S6B).
HG treatment did not further increase IL-1β secretion from LPS-treated samples (208 ± 36 versus 196 ± 41 pg/ml). The absolute concentrations of secreted IL-1β are shown in Fig. S6, C and D. Please note that the physiological effect of IL-1β secretion cannot be evaluated based on the concentrations of IL-1β collected in solution in vitro, because in vitro IL-1β secretion is measured during a fixed time period in an arbitrary volume. Even when in vivo circulating IL-1β is low, the local concentration at the interface between islet macrophages and islet β-cells could be high enough to produce a significant effect, especially with continuous exposure. For example, it has been shown that the IL-1 receptor is expressed at a particularly high level in islet β-cells, which can respond to low levels of IL-1β to induce hyperinsulinemia and proinflammatory responses during prediabetes (29, 30). Although HG treatment alone did not induce IL-1β secretion (Fig. 6B), it promoted Il1b gene expression (Fig. 6C), consistent with the finding that the mRNA and protein expression of the major inflammasome components NLRP3, caspase-1, and IL-1 were up-regulated by high glucose treatment (31). These results suggested that high glucose and β-cell efferocytosis together were sufficient to trigger IL-1β release. IL-1β processing and release from macrophages depend on the NLRP3-inflammasome, whose activation requires the assembly of the proteolytic complex containing active caspase-1. To directly examine inflammasome activation, we used a fluorescent probe (FAM-YVAD-FMK FLICA) to specifically detect the activated caspase-1 enzyme (Fig. 6, D-G). Upon inflammasome assembly, active caspase-1 translocated to punctate "specks" in HG-BMMs (Fig. 6D, arrows), suggesting NLRP3-inflammasome activation (32). Although the total cell-associated fluorescence increased only slightly (Fig. 6E), there was a significant increase in the number of specks in HG-BMMs (Fig. 6F). When NHS-labeled apMIN6 cells were used, we detected phagocytosed NHS within the specks that contained active caspase-1 (Fig. 6G) in 64.6 ± 2.1% of the cells (n = 194), consistent with the finding that inflammasome activation can lead to colocalization between inflammasomes and autophagosomes (33). The high degree of colocalization between FLICA and NHS was not an artifact of signal crossover, as shown in Fig. S6E. Many of the known mechanisms for NLRP3-inflammasome activation converge on the production of reactive oxygen species (ROS), which is also a requisite for inflammasome activation (34). Using NHS to indicate apMIN6 cell fragments, we show that phagocytic HG-BMMs displayed significantly higher intensity of carboxy-H2DFFDA, a fluorescent probe for ROS (35), than cells with less NHS (Fig. 6H, arrows). For control BMMs, β-cell efferocytosis induced ROS production; when BMMs were pretreated with HG, β-cell efferocytosis caused a dramatic increase in ROS production (Fig. 6I). Finally, when cells were treated with trehalose, a disaccharide that induces lysosome biogenesis (36), or the antioxidant N-acetylcysteine (NAC), IL-1β secretion was significantly reduced (Fig. 6J).

HG-BMMs undergo foam cell-like transformation after β-cell efferocytosis

Compared with BMMs alone, β-cell efferocytosis increased free cholesterol (labeled by filipin, Fig. 7A) and neutral lipids (labeled by LipidTOX, Fig. 7B), as quantified in Fig. 7C. However, it did not induce lipid droplet formation (no puncta in Fig. 7B).
HG treatment induced the formation of a massive number of lipid droplets in BMMs after β-cell efferocytosis (Fig. 7, D and E). This result suggested that when high glucose was present, macrophages accumulated lipids and took on foam cell characteristics as a consequence of clearing apoptotic β-cells. We next verified that the increased LipidTOX staining was detected in islet macrophages. When apoptotic islets containing native islet macrophages cultured in HG were imaged, there were cells that stood out with substantial amounts of lipid droplets labeled by LipidTOX. They were much larger than the majority of islet cells, most of which were β-cells (Fig. 7F). This was observed in 17 of 19 LipidTOX-positive cells. The number and size of these lipid-laden cells were consistent with them being macrophages (12, 21), although it is possible that they are not. Because LipidTOX labeling was not compatible with antibody staining, we identified islet macrophages by F4/80 in separate images, which showed that F4/80-positive islet macrophages were indeed larger than their surrounding cells (Fig. 7G) in all the islets stained with F4/80 (n = 5). The islet macrophages were also readily distinguishable in the DIC images (Fig. 7, F and G, Fig. S7), likely due to their lipid accumulation.

Phagocytosis of apMIN6 cells induces cellular characteristics of inflammation

As we examined hundreds of images of phagocytic BMMs, we observed that β-cell efferocytosis resulted in dramatically more BMMs that were multinucleated (Fig. 8A, arrows) and enlarged in size (Figs. 5, A and F, 6D, and 8A). The multinuclear morphology was specific to phagocytosis of insulin-containing apMIN6 cells, as it did not occur with ap3T3-L1 cells (Fig. 8B). Interestingly, the lipid-laden cells seen after β-cell efferocytosis were also more likely to be multinuclear (Fig. 8C). All LipidTOX-labeled cells that were multinuclear had distinct lipid droplets (n = 92 cells examined). Macrophages become multinucleated in response to crystals and when poorly degradable materials are present (37); this is a hallmark of chronic inflammation (38, 39). Furthermore, many of these BMMs no longer had the elongated morphology (Fig. 8, A and C), which has been shown to indicate a transition to the proinflammatory phenotype (40). Transcription factor EB (TFEB) is a master regulator of many lysosomal and autophagy genes (41). TFEB was activated by β-cell efferocytosis, demonstrated by nuclear translocation of TFEB (Fig. 8D). As a positive control, J7 cells were starved for 2 h to induce TFEB nuclear translocation (Fig. 8D). Efferocytosis of apMIN6 but not ap3T3-L1 cells induced long-term up-regulation of LAMP-1, a target of TFEB (Fig. 8E). The compensatory mechanism activated by lysosomal dysfunction (25) induced by β-cell efferocytosis would in turn promote macrophage inflammasome activation (9). Consistent with the finding that macrophages with active inflammasome complexes often undergo a form of cell death that is inherently inflammatory, we measured cell death by propidium iodide (PI) labeling (Fig. 8F).

Figure 6 (legend, continued): Arrows point to the specks where active caspase-1 was concentrated. E and F, total fluorescence and the number of specks per cell were measured from 10 imaging fields, respectively. G, apMIN6 cells were labeled with NHS before being added to HG-BMMs. Scale bar, 10 µm. H, HG-BMMs were incubated with NHS-apMIN6 cells and labeled with carboxy-H2DFFDA to measure ROS and Hoechst to stain nuclei.
I, BMMs kept in control or HG medium were cultured alone or overnight with apMIN6, and labeled with carboxy-H2DFFDA and Hoechst. ROS was measured as carboxy-H2DFFDA fluorescence per cell and normalized to that from control BMMs. n = 12 imaging fields. Bar, 20 µm. *, p < 0.05; **, p < 0.01; #, not significant by Student's t test. J, BMMs in control or HG medium were cultured overnight alone or with apMIN6 in the presence or absence of 10 mM trehalose or 0.5 mM NAC. Medium was collected after 24 h for ELISA of secreted IL-1β. *, p < 0.05 by ANOVA test.

Figure 7 (legend, partial): Total fluorescence intensity per cell was quantified in C from 10 imaging fields. D and E, BMMs cultured in HG medium showed increased lipid droplet formation (labeled by LipidTOX) after β-cell efferocytosis. The number of lipid droplets per cell was quantified in E from 9 imaging fields. A-E, scale bars, 10 µm; *, p < 0.05 versus BMMs alone. F, isolated islets were cultured in HG, labeled with LipidTOX, and imaged by confocal microscopy. Three images of the same islet show islet macrophages present in different focal planes. G, isolated islets were cultured in HG, labeled with F4/80 and insulin antibodies, and imaged by confocal microscopy. DIC, differential interference contrast. Scale bars, 50 µm.

Discussion

In this study, we show that phagocytosis of apoptotic β-cells by macrophages, a process that is significantly increased in T2DM, could contribute to islet dysfunction through macrophage proinflammatory reprogramming. Although there is abundant knowledge about the role of adipose tissue macrophages in T2DM, information on islet macrophages is critically lacking. By investigating how β-cell efferocytosis and elevated glucose work in synergy to impact macrophage function, we propose that the normally protective physiological process of apoptotic β-cell removal could generate destructive proinflammatory responses under diabetic conditions in vitro. Please note that this study was conducted entirely in vitro, and validation of our findings in vivo is important for understanding how islet macrophages contribute to diabetes. Many pathogenic crystals have been shown to induce lysosomal membrane permeabilization. However, they have only been studied in systems in which they were in direct contact with the macrophages. To our knowledge, this study is the first to examine the accumulation of crystals in macrophage lysosomes through phagocytosis of apoptotic cells. Our data show that insulin aggregates stayed in macrophage lysosomes for days after phagocytosis, indicating resistance to lysosomal degradation. This prompted us to test the idea that β-cell efferocytosis could activate the NLRP3 inflammasomes in macrophages via lysosomal destabilization, as do other nondegradable materials (17). Surprisingly, our results show that although accumulation of insulin crystals induced lysosomal swelling, it did not destabilize the lysosomal membrane to the extent of causing significant lysosomal leakage. This might be because insulin crystals were much smaller in size than many of the established pathogenic crystal clusters. A more likely explanation may be that the anti-inflammatory nature of efferocytosis provided anti-oxidative measures to protect the macrophage lysosome (3, 43). On the other hand, when macrophages were exposed to high glucose, lysosomal leakage was clearly detected after β-cell efferocytosis.
Similarly, we detected inflammasome activation, cytokine release, and lipid accumulation only when metabolic stress was present in addition to β-cell efferocytosis. It has been suggested that the metabolic and inflammatory phenotypes of a macrophage are interdependent (44). Cholesterol crystals induce phagolysosomal damage and inflammasome activation, contributing to inflammation and atherosclerosis (9). It is unknown whether ingested insulin, by its crystalline nature, behaves similarly to cholesterol crystals in inducing inflammatory responses. Inflammatory activation from foam cells has been studied extensively, yet almost exclusively in the context of atherosclerosis. Because lysosomal dysfunction inhibits cholesterol efflux (45), we showed that formation of lipid-laden macrophage cells was induced by β-cell efferocytosis, but only under hyperglycemic conditions, suggesting that elevated glucose shifted β-cell efferocytosis from an anti-inflammatory to a pro-inflammatory mediator in macrophage polarization. This may contribute to the acceleration of atherogenesis in patients with T2DM (46). ROS lies at the interface of metabolic and inflammatory pathways. Because all NLRP3-inflammasome activators examined to date trigger ROS generation, whereas ROS scavengers inhibit NLRP3 activation (34), it has been proposed that multiple activation pathways of NLRP3-inflammasomes converge on ROS signaling (47). Our data showed an increase in ROS production in BMMs after β-cell efferocytosis, especially with high glucose treatment. Among the various signals that activate NLRP3-inflammasomes, the lysosome rupture model is associated with pathogenic particulates (e.g., silica crystals, aluminum salts, and β-amyloid). Our data support lysosome leakage induced by β-cell efferocytosis in hyperglycemic macrophages. This study shows that phagocytosis of apoptotic β-cells disrupts macrophage immune homeostasis, and that a diabetic milieu may tip the balance between pro- and anti-inflammatory responses. Slowly degradable insulin crystals can switch the anti-inflammatory action of efferocytosis to the pro-inflammatory action of inflammasome activation. This challenges our understanding of how apoptotic β-cells uniquely impact islet macrophage biology in the face of metabolic stress. Our results provide a new perspective on the causes of islet inflammation and highlight the importance of preserving macrophage lysosomal function as a therapeutic intervention against diabetes progression.

Figure 8 (legend, partial): … were labeled with CellTracker (green) and Hoechst (purple). Bar, 20 µm. The percentage of multinucleated cells (arrows) was quantified from 9 imaging fields. *, p < 0.05 versus BMMs alone. C, HG-BMMs with or without β-cell efferocytosis were labeled with LipidTOX. Bar, 20 µm. D, J7 cells with or without β-cell efferocytosis were stained with TFEB antibody. TFEB translocation was measured by the ratio of TFEB fluorescence intensity in the nucleus to that in the cytoplasm. As a positive control, cells were starved for 2 h to induce TFEB translocation. n = 9 imaging fields. *, p < 0.05 versus J7 alone. n.s., not significant. E, total LAMP-1 fluorescence was quantified and normalized to cell area. Data are presented as fold-change relative to J7 cells alone. n = 12 imaging fields. *, p < 0.05 versus J7 alone. F, BMMs or HG-BMMs with or without β-cell efferocytosis were labeled with PI to measure cell death. n = 9 imaging fields. *, p < 0.05 versus BMMs alone.
Cell culture

J774a.1 macrophage-like cells (American Type Culture Collection, Manassas, VA) were maintained in DMEM supplemented with 10% FBS, 50 units/ml of penicillin, and 50 µg/ml of streptomycin in a humidified, 5% CO2 atmosphere at 37 °C. MIN6 cultured β-cells were grown in DMEM supplemented with 100 units/ml of penicillin, 100 µg/ml of streptomycin, 10% FBS, 2 mM L-glutamine, and 50 µM 2-mercaptoethanol. Mouse bone marrow macrophages were differentiated as previously described (48). Briefly, fibula and tibia bones were dissected from euthanized mice. The bone marrow was flushed using a 25-gauge needle and syringe under aseptic conditions. Bone marrow cells were incubated in DMEM supplemented with 10% heat-inactivated FBS and 20% L929-conditioned medium. After 3 days, cell cultures were supplemented with an additional dose of L929-conditioned medium (20% of volume). After 6 days of total culture, cells were visually inspected for attachment to untreated culture plates. For treatment with HG or HGP, DMEM containing 30 mM glucose or 30 mM glucose and 0.1 mM palmitate (49) was used after exposure to 11 mM glucose for 1 day. The control cells were kept in 11 mM glucose. The high glucose concentration does not represent a human physiologic condition; rather, it is a widely used tool in β-cell studies of glucotoxicity.

Islet isolation

Animal protocols were approved by the Institutional Animal Care and Use Committee at Weill Cornell Medical College. Islet isolation was performed with minor modifications (50). Collagenase P was first perfused through the common bile duct for digestion. Islets were separated from exocrine cells by gradient centrifugation using Histopaque 1077 and 1119, and further purified by manual picking.

Efferocytosis assay

Imaging of efferocytosis was carried out as described, with modification (18). To induce apoptosis, MIN6 cells were irradiated with 365 nm UV for 30 min and then cultured for 16 h to allow apoptosis to develop. To induce islet apoptosis in vivo, ICR mice were injected intraperitoneally with 40 µg of STZ/g of body weight daily for 5 days to induce β-cell death (19). Two weeks after the last injection, islets were isolated and seeded in imaging dishes overnight. To induce islet apoptosis in vitro, we cultured isolated islets with or without high glucose for 2 weeks. Apoptotic MIN6 (apMIN6) cells were incubated with J7 cells or BMMs at a ratio of 1:5 for 24 h, after which the co-culture was washed 4 times to remove floating cells. Imaging took place at the indicated time points after washing. To label with Alexa NHS dye, apMIN6 cells were incubated with Alexa Fluor NHS Ester diluted in 1 M NaOH, pH 9.0, for 2 h at 37 °C, 5% CO2. After incubation, cells were washed 2 times to remove NaOH. CtB-Alexa Fluor 488 was used to label J7 cells on ice for 2 min, followed by fixation for 15 min in 3% paraformaldehyde.

Fluorescence labeling

Cells were incubated with 500 nM LysoTracker or 10 µg/ml of DQ-ovalbumin for 30 min before imaging in KRBH buffer. For ROS, cells were incubated in 20 µM carboxy-H2DFFDA for 1 h in culture media prior to imaging. Cells were washed and counterstained with Hoechst 33342 to quantify nuclei. Lysosomal pH was determined by FITC/rhodamine dextran as reported previously (27).
To image dextran with antibody staining, BMMs were first cultured overnight alone or with apMIN6, then pulsed with 0.5 mg/ml of lysine-fixable tetramethylrhodamine-dextran for 16 h with a 4-h chase period, followed by fixation in 4% paraformaldehyde for 15 min, then blocking and permeabilization in 0.1% Tween 20 supplemented with 10% goat serum for 30 min. Cells were incubated with primary antibody for 1 h and with secondary antibody for 30 min, at the concentrations described below.

Immunofluorescence of islets containing BMMs

BMMs were identified by F4/80 or by preloaded dextran. Where dextran is shown, BMMs were first labeled with lysine-fixable dextran overnight and then added to islets in control or HG medium for 7 days. Islets containing BMMs were fixed in 3% paraformaldehyde for 1 h, permeabilized in 0.1% Triton X-100 for 30 min, and blocked with 10% goat serum in 0.2% Tween 20 for 1 h, all at room temperature. The following primary antibodies were used overnight at 4 °C: anti-F4/80 (1:100), anti-insulin (1:300), and anti-LAMP-1 (1:125); secondary antibodies were used at room temperature for 4 h at 1:400 with Alexa 546 anti-rat, Alexa 488 anti-guinea pig, and Alexa 633 anti-rabbit, respectively.

Fluorescence microscopy

Cells and islets were seeded in MatTek imaging dishes. Wide-field fluorescence microscopy utilized a Leica DMIRB microscope (Leica Mikroskopie und Systeme GmbH, Germany) equipped with a Princeton Instruments (Princeton, NJ) cooled charge-coupled device, using MetaMorph Imaging System software (Molecular Devices). Images were acquired at room temperature using a 40× 1.25 NA oil-immersion objective. Hoechst and filipin images were captured using the UV filters. The green probes (DQ-ovalbumin, Alexa Fluor 488, LysoTracker, FITC, FAM FLICA, carboxy-H2DFFDA, CellTracker) were imaged using a standard fluorescein filter cube (Chroma) on the wide-field microscope, and 488-nm laser excitation with a 505-530-nm emission filter on the confocal microscope. The red probes (Alexa Fluor 546, Alexa Fluor 555, LipidTOX, tetramethylrhodamine, and DQ-ovalbumin excimers) were imaged using a standard rhodamine filter cube (Chroma) on the wide-field microscope, and 543-nm laser excitation with a 560-615-nm emission filter on the confocal microscope. Alexa Fluor 633 was imaged using a standard Cy5 filter cube (Chroma) on the wide-field microscope and 633-nm laser excitation with a 650-nm long-pass emission filter on the confocal microscope. For confocal microscopy, the channels were scanned alternately with only one laser line and one detector channel on at a time. Images were corrected for background (51) and analyzed using MetaMorph software (Molecular Devices). Images were quantified either as individual cells or as whole fields normalized to cell counts.

Flow cytometry

UV-induced apMIN6 cells were stained with Alexa Fluor 647 NHS ester and added to J7 cells. After 24 h of co-culture, the mixture was stained with cholera toxin subunit B conjugated with Alexa Fluor 488. Next, the cells were fixed and stained with insulin antibody and a secondary antibody conjugated with Alexa Fluor 546. After staining, the cells were examined by flow cytometry (BD Fortessa). The gate was first set for cells double positive for Alexa 488 and Alexa 647. The selected population was analyzed for insulin staining by Alexa 546 intensity.

Statistical analysis

Unless otherwise indicated, each experiment was repeated three times.
The number of cells or imaging fields used for quantification under each condition, i.e., the value of n used for statistical analysis, is listed in the figure legends. When individual lysosomes were quantified, the sample sizes were similar among groups, and the reported n refers to the smallest n used. Data are presented as the mean ± S.E. Statistical significance was analyzed using unpaired two-tailed Student's t tests for comparisons between two groups, and one-way repeated measures ANOVA with post hoc comparisons using the Tukey HSD test for data from more than two groups, with p values less than 0.05 deemed significant.
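A minimal sketch of the two comparisons described above, with hypothetical per-field values. Note that the repeated-measures structure mentioned in the text would require a repeated-measures or mixed model; for illustration this sketch uses an ordinary one-way ANOVA before the Tukey HSD step.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-field fluorescence values, normalized to control
control = np.array([1.00, 0.95, 1.08, 1.02, 0.97])
apmin6 = np.array([0.55, 0.61, 0.49, 0.58, 0.52])
ap3t3 = np.array([0.93, 1.01, 0.88, 0.97, 0.99])

# Two groups: unpaired two-tailed Student's t test
t_stat, p_val = stats.ttest_ind(control, apmin6)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# More than two groups: one-way ANOVA followed by Tukey HSD post hoc tests
f_stat, p_anova = stats.f_oneway(control, apmin6, ap3t3)
values = np.concatenate([control, apmin6, ap3t3])
groups = ["control"] * 5 + ["apMIN6"] * 5 + ["ap3T3"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```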
Whither the roads lead to? Estimating the association between urbanization and primary healthcare service use with Chinese prefecture-level data in 2014

With the rapid economic development across China over recent decades, examining how urbanization may affect healthcare service use, and its implications, is more than urgent. This study estimates the association between urbanization and primary healthcare services use in China. We construct a prefecture-level dataset on healthcare services utilization and urbanization. We regress the proportion of residents using healthcare services in primary healthcare centers versus secondary or tertiary hospitals on a set of prefecture-level control variables. Results suggest that use of primary healthcare center outpatient services is positively associated with being in the proximity of a provincial capital, but negatively correlated with the percentage of the urban population and the availability of public transportation. A higher likelihood of seeking care in major hospitals instead of primary healthcare centers is associated with urbanization, justifying a need for primary care physicians as gatekeepers in China's healthcare delivery system.

Introduction

Primary healthcare services are often provided by local community health centers for convenience and efficiency. However, at times patients may seek care at major hospitals away from where they live, even for conditions treatable at local community health centers, a practice commonly known as bypassing. In countries where primary care physicians serve as gatekeepers, bypassing is less likely to be an issue for the healthcare system. However, patients in China have more latitude in choosing a provider within their home province than in most other healthcare systems, and thus bypassing is more common in China. In addition, major Chinese hospitals are motivated to compete for patients, leading to widespread bypassing and a significant economic burden and inefficiencies in the healthcare system. Bypassing causes congestion in major hospitals and underutilized resources in lower-tiered providers, resulting in wasted resources, low patient satisfaction, and medical malpractice disputes [1]. Prior research argued that primary care-centered healthcare delivery systems are more efficient than hospital-centered systems [2]. Patient flow managed by primary care physicians plays a crucial role in proper patient management and in reducing patients' search costs in accessing specialty care services [3]. Earlier studies provide ample support for the impact of gatekeeping in reducing inefficiencies and trimming costs [4-6]. With this backdrop, reducing bypassing in the healthcare delivery system has been one of the focal areas of China's health reform since 1997. In a recent national policy, China's central government pushed for gatekeeping by using the phrase 'Fen Ji Zhen Liao' (i.e., tiered healthcare) to describe the process of 'patients visiting a primary healthcare center (PHC) as a point of entry into the healthcare system, followed by a two-way referral' [7]. This is commonly known as gatekeeping, with a referral after the first point-of-contact service [1,8,9].

China's healthcare system and referral practices

From 1949 to the 1970s, China had a tiered healthcare system operating under the urban-rural dual social structure and the planned economy.
Healthcare was provided through village clinics, township health centers, county hospitals, and city hospitals to rural residents, and through employer-sponsored clinics, community health centers, and district and city hospitals in urban areas. Community clinics, health centers, rural village clinics, and township health centers are considered PHCs in China. Before 1978, Chinese residents' mobility was restricted by their residential registration (known as hukou) and employment. Patients could receive reimbursement if they followed the referral policies set in place by their employers or villages. A national policy promulgated in 1978, 'Rectifying and Enhancing Free Medical Services', banned reimbursement for medical expenses without documentation of a proper referral or pre-approval from a physician. In 1978, China launched a national economic reform from a planned economy to a market economy, with no exception for the healthcare sector. Higher utilization of healthcare services has accompanied stronger economic growth. However, bypassing was rare during the 1980s, when most PHCs could treat common health conditions. To address the imbalance between the strong demand for quality health services and a shortage in the supply of health services, the Chinese government promulgated market-oriented health policies to improve health financing [10]. The central government issued policies encouraging hospitals to broaden their sources of revenue, expanding hospitals' autonomy, and permitting hospitals to use their surplus funds [11-13]. During this period, the quantity and size of public hospitals were included in the performance evaluation of local governments, and therefore healthcare delivery systems experienced a substantial expansion [14]. In 2000, China's then Ministry of Health enacted a policy allowing patients to choose a physician when seeking healthcare [15], which officially ended the practice of mandated referral. On the financing side, China's social medical insurance schemes have relaxed the traditional referral rules. China established the national Urban Employee Basic Medical Insurance (UEBMI) to cover the medical needs of all formal urban employees in 1998. The funds in a UEBMI personal account are owned by the enrollee and can be spent at any level of the delivery system without referral permission. The funds in the pooled account follow the 'first-come-first-serve' rule and can be used in any contracted hospitals and pharmacies [16]. Two other social insurance schemes, namely the New Rural Cooperative Medical Scheme (NRCMS) and the Urban Resident Basic Medical Insurance (URBMI), established in 2003 and 2007 respectively, have followed UEBMI in their policies regulating the reimbursement procedure, allowing beneficiaries to choose providers at any level [16]. Given China's economic development and the misaligned incentive mechanism for providers, small PHCs with limited facilities and inadequate capacity tend to lose in the competition for patients and fall into a feedback loop of losing medical staff and then losing patients again. Even with a tiered reimbursement rate in favor of PHCs, the price effect does not offer enough incentive to attract patients to PHCs. Bypassing has thus become common. China's healthcare delivery system faces two further challenges: a social norm, formed over more than 20 years, of patients using hospitals as the primary source of healthcare services, and a shortage of a capable and motivated primary care workforce.
To rebuild the referral system, China has focused on supply side efforts. Since 1997, China has issued at least 60 policies aimed at addressing bypassing in its healthcare delivery system. Measures implemented include investing in PHC infrastructure and medical equipment, providing training and assistance to improve PHC capacity, and subsidizing medical students from rural areas to work at PHCs in targeted areas [1,7,17,18]. The policy objective of these measures is to enhance PHC capability to treat patients and thus to reduce bypassing. However, the impact of these supply side efforts has not been systematically assessed.

Urbanization and bypassing

Recent studies on bypassing have taken two different perspectives, demand side and supply side. Demand side research focuses on the determinants of patient choices [17,19], while supply side research concerns how physician services and payment schemes drive patient flow [8,20,21]. Prior studies on this topic often collected individual-level data from face-to-face surveys and assumed a static demographic and location profile. However, because individual-level studies are often limited in sample size and thus relevant only to regional settings, few studies have examined macro-level determinants, including urbanization, one of the most remarkable changes to have occurred in China within the last 40 years. From 1979 to 2016, China's urban population increased annually by 4.1%, while its rural population decreased by 0.8% each year on average. In 2016, the migrant population in China totaled 254 million, which is more than the population of England, Germany, France, and Greece combined [22]. Urbanization affects human health [23], in areas such as non-communicable diseases [24] including cancer [25], aortic stiffness [26], kidney stones [27], obesity [28], mental health [29,30], infectious diseases [31-33], and PM2.5-related mortality [34]. Studies also show an association of urbanization with health services utilization, such as hospital admissions for alcohol and drug abuse [35] and liver transplants [36]. Other studies have emphasized the impact of urbanization on health disparity [37] and income inequality [38]. How urbanization relates to health services utilization across different tiers of healthcare is a critical question requiring attention and research. Urbanization increases the proportion of people living in urban areas and changes the ways society adapts to economic transition [39]. As the population movement from villages to cities intensifies, urbanization may lead to reduced utilization of rural PHCs. Urbanization is often seen as a double-edged sword for rural health services utilization. While urbanization improves the service capability of PHCs and thus has a 'pull-back effect' from high-tier healthcare providers, it also has a 'consumption upgrade effect' that leads to more intensive bypassing [40]. From 1978 to 2017, the percentage of non-agricultural labor in China increased from 30.2% to 73.0%. Disposable income reached RMB 33,616 Yuan per person (about $4,831 at an exchange rate of $1 = RMB 6.96 as of July 23, 2019) among the urban population and RMB 12,363 Yuan per person ($1,776) in rural areas, both of which are about ten times more than in 1978. The proportion of vehicle ownership in China increased by 13.8% and social medical insurance savings accounts rose by 20.7% each year.
With these rapid changes, population mobility due to urbanization has a profound impact on health services utilization. Fig 1 illustrates how urbanization can affect health services utilization on both the supply and demand sides. On the demand side, the adoption of innovative agricultural technologies has increased production efficiency and freed a great deal of labor from agriculture. In China, migrant workers on average earn more than farmers, so their ability to afford services at high-tier providers has improved, leading to an income effect [39]. Besides a reduced share of the agricultural population and increasing urban population density, urbanization also causes improvements in transportation infrastructure and additional internet users. These demand side changes may cause a redistribution of the medical workforce. Urbanization affects bypassing in combination with other factors. The first factor we consider is that insurance policies have eased migrant workers' movement and employment but create an unintended incentive for bypassing. China has four main insurance plans for its citizens, namely UEBMI for urban formal workers, NRCMS for rural residents, URBMI for urban residents with informal jobs or no jobs, and Social Medical Aid (SMA) for eligible low-income individuals. Migrant workers are mostly covered by NRCMS and URBMI. If a migrant worker wants to bypass PHCs and visit secondary or tertiary providers directly, they can inform the insurance administrative department by phone and submit the required documents after their visit. The reimbursement rate is the same whether they have used a referral or not. Although this was designed to ensure that migrant workers have adequate access to health services, it has the unintended consequence of increased bypassing. In addition, one investigation showed that migrant workers in China consider a referral from a PHC physician burdensome, because, if a PHC is unable to treat their condition, they will have to find a specialist and pay extra fees [41]. A personal level of referral service is not required and is often considered a luxury among Chinese patients. The second factor that has facilitated bypassing is the reduced cost of transportation. The development of the bullet train and public transportation systems in urban areas has drawn patients from rural areas, who bypass their local rural PHCs. Because of their high wage rates and thus high opportunity cost, migrant workers living in urban areas usually do not intend to spend additional time traveling back to their hometown for PHC services, even though a higher reimbursement rate may apply there. They also consider urban major hospitals more competent than rural hospitals; thus the absolute number of rural PHC visits may decrease. In urban areas, those who are employed may visit the urgent care unit of a major hospital when they are off work. A recent study showed that Chinese urban residents have two modes of health services use [42]. When they fall sick, they tend to self-diagnose or rely on informal sectors. If the symptoms are not relieved, they bypass PHCs and visit major hospitals, in part to save time and transportation costs. The third factor relates to the financing of China's PHCs. The Chinese government subsidizes PHCs following the 'Separating Revenue and Expenditure' (shouzhi fenkai) policy, through which PHCs transfer all revenues at the end of the year to upper-level management and in turn receive a fixed amount of budgeted funding.
If one PHC secures more revenue than expected, it will in effect be subsidizing other PHCs in the same tax district. The policy was designed and promulgated to reduce "induced demand" but led to misaligned incentives as an unintended consequence. Many PHC workers have left for better-paid jobs, and those who stay have little incentive to provide more or better-quality services, as their workload is not related to their income. In the ten years after this policy was implemented, PHCs saw an average reduction of 10% in hospital beds and clinical services [1]. However, high-tier hospitals are free from this policy and can reward their workers with a revenue surplus; thus they have an incentive to attract patients. This paper examines the association between urbanization and health services utilization using prefecture-level data from China in 2014. The research seeks to understand bypassing against the backdrop of rapid urbanization and to examine how aspects of urbanization, including population structure and transportation, affect health services utilization. The study fills a critical knowledge gap in understanding how urbanization is related to health services utilization in rural and urban settings by using an innovative dataset linking health services utilization and prefecture-level indicators. The results of our study will have important implications for formulating policies to reduce bypassing. We describe the data and methods below, then present the results and discussion, and end with our conclusions.

Data and variables

Health services utilization data are from the National Health Financial Yearbook of 2014. China's public hospitals annually submit a set of statutory data to the local health administrative department through an online system. The data include hospital income and expenditure statements, balance sheets, and service provision portfolios. The data are collected and reported from county health bureaus to prefecture-level authorities, and then to provincial and national-level health authorities. To link health services utilization with city-level indicators, we use the prefecture as the unit of study. Prefecture-level information comes from the China City Statistics Yearbook of 2015 [43], including population, economic, and geographic data for 2014. Cities in Xinjiang and Tibet are excluded due to their sparsely populated geography. We group the prefecture-level districts in each province-level metropolitan city (i.e., Beijing, Shanghai, Tianjin, and Chongqing) as one, because the distance within these cities is not an obstacle for patients seeking care outside their residential districts. The National Health Financial Yearbook does not record China's privately operated hospitals, but we believe that the data from public hospitals are representative, at least up until 2014. First, most Chinese citizens trust and use public hospitals, which provide more than 90% of outpatient and hospitalization services in the country [44]. Second, the data for health services utilization were extracted directly from each hospital's management information system, ensuring their quality. We are unable to obtain private providers' utilization data at the prefecture level. By using the year 2014, we avoid the impact of the national policy enacted in 2015 that aimed to increase health services utilization at PHCs [45], a topic for future research. Our final dataset is a cross-section of 274 records of Chinese cities in 2014.
The dependent variables are the four variables listed at the top of Table 1. They represent the outpatient and inpatient utilization of rural or urban PHCs, equal to the annual outpatient (inpatient) service volume of urban or rural PHCs divided by the total annual urban or rural outpatient (inpatient) service volume of the city. The other variables are as follows. Capital is a dummy variable of city location, taking the value of one if a city is the provincial capital city or borders the capital city, and zero otherwise. The percentage of the urban population (Urban Population) reflects the extent of urbanization in the administrative areas of the prefecture; it is the most used urbanization index in previous studies [24,30,35,40,46]. The ratio of non-agricultural labor (Non-Agricultural Labor) reflects how many residents work in non-agricultural industries, including both manufacturing and services, thus revealing the economic structure of the prefecture. The population of the prefecture or metropolitan city (Population) is included as a control variable. The population density of the urban area (Urban Density) is equal to the number of urban residents divided by the urban area in square kilometers. GDP per capita (GDP) is used to capture the level of economic development of the prefecture. The number of doctors per thousand population in the prefecture (Doctor), including those in the public and private sectors, is a proxy for the healthcare workforce in the city. We were unable to find data on the income level of each prefecture, so average resident bank savings (Savings) is used as a proxy for the wealth of the average resident in the prefecture. Chinese people have a tradition of saving for catastrophic events such as major illnesses, and bank savings are correlated with the propensity and ability to use high-quality healthcare services. Internet users per thousand population (Internet) correlates with the accessibility of information: residents find it easier to obtain information about their health and their providers where internet use is more accessible. The number of public buses per ten thousand residents (Bus) is a proxy for the convenience and cost of transportation. The city road area in square meters per capita (Road) reflects urban public transportation capability, with higher numbers indicating more public transportation facilities and more vehicles in the prefecture.

Empirical model

Consider the following model:

y_{ij} = \beta_{i0} + \sum_{k} \beta_{ik} x_{ikj} + \mu_{ij},  (1)

where y_{ij} represents the dependent variable i of prefecture j, with i indexing the four dependent variables; x_{ikj} is the independent variable k of prefecture j for dependent variable i, with k indexing the independent variables; \beta_{i0} is the intercept term, \beta_{ik} is the parameter of each x_k, and \mu_{ij} is the disturbance term of prefecture j for dependent variable i. The dependent variables in our estimation are the annual percentages of PHC services utilization, as explained previously. Patients may choose PHC outpatient and inpatient services because of preference, distance, and provider reputation. If these unobserved factors affect the utilization of outpatient and inpatient services simultaneously, then the residuals are correlated. We use Seemingly Unrelated Regression (SUR) estimation to address this concern [47].
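A sketch of how such a SUR system could be estimated in Python with the linearmodels package follows. The file and column names are hypothetical stand-ins for the variables defined above, and this is not the authors' code.

```python
import pandas as pd
import statsmodels.api as sm
from linearmodels.system import SUR

# Hypothetical prefecture-level table; all file and column names are
# illustrative stand-ins for the variables defined in the text.
df = pd.read_csv("prefectures_2014.csv")

# Dependent variables as defined above: PHC service volume divided by the
# total service volume of the same residence group.
df["urban_out_share"] = df["urban_phc_outpatient"] / df["urban_outpatient_total"]
df["urban_in_share"] = df["urban_phc_inpatient"] / df["urban_inpatient_total"]

exog = sm.add_constant(df[["capital", "urban_pop", "non_agri_labor",
                           "population", "urban_density", "log_gdp",
                           "doctor", "savings", "internet", "bus", "road"]])

# One SUR system per residence group; the two equations share regressors but
# are allowed correlated disturbances, which is the rationale for SUR here.
equations = {
    "urban_outpatient": {"dependent": df["urban_out_share"], "exog": exog},
    "urban_inpatient": {"dependent": df["urban_in_share"], "exog": exog},
}
print(SUR(equations).fit(method="gls"))
```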
Because China's urban and rural areas differ in many important aspects, and because urbanization is the main factor we intend to examine, separate regressions of rural and urban PHC healthcare service utilization are warranted. SUR was estimated for two groups of dependent variables separately, i.e., the urban group (Urban Outpatient and Urban Inpatient) and the rural group (Rural Outpatient and Rural Inpatient). Results of OLS regressions are listed for comparison. Table 1 presents descriptive statistics. In general, urban residents use PHC health services less frequently. Rural residents have much higher usage than their urban counterparts, with 35% of outpatient services and 23% of inpatient services taking place within rural PHC facilities. About 35% of the prefectural population are urban residents, while about 87.5% of the labor force works outside the agricultural industry. The average prefectural population is 4.45 million, with significant variation across prefectures. The average GDP per capita is about RMB 48,603 Yuan, and average per person savings are RMB 33,737 Yuan. The number of buses per ten thousand residents averages 7.92, and the road area per capita is about 12.97 square meters. Internet users per thousand average 177 persons.

Results

The regression results are in Tables 2 and 3. The location factor (Capital) is positive and statistically significant in the regression of rural PHC outpatient services utilization. The coefficient of the percentage of urban population (Urban Population) is negative and statistically significant in the equations for rural PHC outpatient services, but positive in the regressions of urban PHC outpatient and inpatient services. The total city population (Population) is positively correlated with rural PHC outpatient services and with urban PHC inpatient and outpatient services utilization. In contrast, the urban population density of the prefecture (Urban Density) is not statistically significant in any of the regression equations, suggesting that it is population movement, rather than density, that is related to healthcare utilization. The percentage of non-agricultural labor (Non-Agricultural Labor) is statistically significant and positively related to rural residents' use of PHC services. The quadratic term for Non-Agricultural Labor is opposite in sign to the level term, implying an inverted-U effect for rural PHC services. The natural log of per capita GDP (GDP) is negative and statistically significant in the regressions of rural and urban residents' PHC inpatient service utilization and urban PHC outpatient service utilization. Its quadratic term is positive in the equations for both rural and urban residents' PHC inpatient services, and the effect for urban outpatient and inpatient services is statistically significant, indicating a nonlinear, U-shaped income effect on PHC utilization. Resident bank savings (Savings) is negatively associated with rural PHC inpatient service utilization but not with any other dependent variable, implying a negative income elasticity. The coefficient of the indicator for internet users (Internet) is negative in the regression for urban PHC inpatient service utilization, showing the impact of information accessibility on PHC utilization. The coefficient for the number of buses (Bus) is negative and statistically significant in the equations for rural PHC outpatient and inpatient service utilization, consistent with our hypothesis that convenient transportation reduces residents' travel cost of bypassing.
Road coverage per capita (Road) is also negative and statistically significant in the regression of urban PHC outpatient services. To test the effect of the demographic and geographic features of a prefecture, we examined the null hypothesis that the coefficients of Urban Population, prefecture Population, and Urban Density are jointly zero in each regression. The regression results imply that this null hypothesis is rejected at the 0.05 significance level.

Discussion

The results of this study suggest an important role of urbanization in residents' decisions to use PHC health services. Geographic location, public transportation, economic development, and internet usage are all correlated with the use of PHC inpatient or outpatient services. The quadratic curvature of Non-Agricultural Labor in the regressions for rural areas indicates that, as the percentage of non-agricultural labor of a city increases, rural residents at first become more likely to use PHC services rather than major hospitals, but once the percentage reaches 85.3% (outpatient) and 81.4% (inpatient), the rate of PHC service utilization starts to decline. Considering the average level of non-agricultural labor of 87% in 2014 and its increasing trend, we predict a declining trend in PHC service utilization, which is consistent with the fact that China's PHC ratio (with rural and urban PHC ratios aggregated) was about 65% in 2009 but less than 35% in 2014 [48]. Our regressions have a somewhat surprising result: if a prefecture sits geographically next to a provincial capital city, its rural residents are more likely to use local PHC outpatient services. This result appears to contradict our hypothesis that cities near provincial capitals lose patients because of the presumed siphon effect of major hospitals in the provincial capital city. However, we compared the mean inpatient ratio of cities bordering the provincial capital with that of the other cities using t-tests and found that the other cities have significantly higher rates of hospital service utilization than the cities bordering the provincial capital. This suggests that patients near capital cities are more likely to go to the capital city for healthcare services. We have two explanations for this finding. First, prefectures near provincial capitals tend to be wealthier than prefectures far away from the capital; thus rural PHCs in these prefectures may be more competent in retaining local patients. Second, residents in these prefectures are attracted to providers in the provincial capital; thus they bypass the local secondary and tertiary hospitals as well and visit the major hospitals in the provincial capital directly. The latter scenario is consistent with the high rates of overall health services utilization and PHC outpatient services use in the near-capital cities we have observed. Industrialization offers rural residents stable employment and incomes. Our results on GDP per capita suggest that this is positively associated with rural PHC services utilization. We examined several industrialized cities, including Dongguan, Yiwu, and Kunshan, and found that most of the rural PHCs in these cities were established to provide health services to nearby manufacturing plants, which increases PHC services utilization. GDP per capita had a quadratic effect on PHC services use. A possible explanation is that PHCs in cities with a growing GDP would have improved capacity, so the quality of PHCs offsets the income effect for urban residents.
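The turning points quoted above follow directly from the quadratic specification. For any regressor entering model (1) with a level and a squared term, utilization peaks where the derivative with respect to that regressor vanishes:

```latex
y = \beta_0 + \beta_1 x + \beta_2 x^2 + \cdots, \qquad \beta_1 > 0,\; \beta_2 < 0,
\\
\frac{\partial y}{\partial x} = \beta_1 + 2\beta_2 x = 0
\quad\Longrightarrow\quad
x^{*} = -\frac{\beta_1}{2\beta_2}.
```

Applying this to the estimated Non-Agricultural Labor coefficients in Tables 2 and 3 (not reproduced here) yields x* of approximately 85.3% for the rural outpatient equation and 81.4% for the rural inpatient equation.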
Rural residents' choice among providers of different levels is sensitive to their wealth. Bypassing is more likely to occur if residents can afford high-quality medical services in major urban hospitals. Another explanation is that a high income enables rural residents to migrate to urban areas, so the utilization of rural PHCs declines as per capita savings increase. The number of internet users was used to capture the effect of information and the cost of finding a proper provider. Many tertiary hospitals in China offer internet appointments to improve accessibility, and this may explain why urban PHC inpatient services utilization is negatively associated with internet users: larger hospitals have advantages in setting up an online presence. Urbanization affects residents' choices of healthcare service providers. We anticipate that as more of the population moves into urban areas in the near future, rural and urban PHCs in China's prefecture-level cities will face additional challenges.

Strengths and limitations

This study is one of the first attempts to use city-level data to examine the impact of urbanization, among other factors, on the pattern of PHC health service utilization. By including an indicator of proximity to the provincial capital, we observed the siphon effect of provincial capital cities on health services utilization at secondary and tertiary hospitals in nearby cities. However, this study has at least two limitations. First, we used an aggregate-level dataset, and therefore the ecological fallacy applies if we make inferences at the individual level. We refrained from doing so, and our conclusions apply to prefecture-level percentages and rates and may be relevant for policymaking at the prefecture (or city) level. Second, we are unable to examine the impact of more recent policies, which is relegated to future investigations when the required data become available to the public. Nonetheless, our results offer important implications for policies governing health services utilization and/or urbanization.

Conclusions

This study draws three major conclusions. First, geographic location and changes in the percentage of the urban population significantly affect bypassing in rural areas. As China experiences further urbanization, with more physical and human capital flowing into urban centers, health services in rural areas may not be able to keep up with rural residents' demand for high-quality medical services requiring specialized equipment. Bypassing arises not only from the gaps between the capacity of PHCs and that of higher-level hospitals, but also from such gaps between rural and urban areas, and between near-capital areas and other areas. Future policymaking efforts and additional investment in rural PHCs need to take into consideration urbanization and the associated population mobility. Second, residents' wealth and prefecture-level transportation are associated with reduced PHC utilization, which may indicate that recent tiered healthcare ('Fen Ji Zhen Liao') policies to reduce bypassing are offset by these macroeconomic factors. Inadequate gatekeeping, competition between hospitals, and an increase in the number of wealthy residents contribute to the rise in bypassing, straining PHCs' finances and medical capabilities. Third, information flow encourages bypassing as long as the gap in the quality of health services provided by PHCs and hospitals continues to exist.
Our analysis found that the number of internet users is associated with a lower probability of using urban PHC inpatient services, because residents usually go to hospitals that can offer them convenient online appointment and payment facilities. Investing in telemedicine and health information infrastructure may improve PHC capacity and should therefore be assessed. To summarize, urbanization has a complex effect on the utilization of healthcare services. Whether the roads pave the way to enhanced PHC capacity for serving their constituents or lead residents to urban centers for healthcare services will depend on location, economic development, and rurality. More in-depth research is needed to examine the consequences of urbanization for China's healthcare systems.
6,699
2020-06-03T00:00:00.000
[ "Economics", "Geography", "Medicine" ]
Standard Intensity Deviation Approach based Clipped Sub Image Histogram Equalization Algorithm for Image Enhancement The limitations of the hardware and dynamic range of digital cameras have created the demand for post-processing software tools to improve image quality. Image enhancement is a technique that helps to improve the finer details of an image. This paper presents a new algorithm for contrast enhancement, where the enhancement rate is controlled by a clipped-histogram approach that uses the standard intensity deviation. Here, the standard intensity deviation is used to divide and equalize the image histogram. The equalization process is applied to the sub-images independently, and they are then combined into one complete enhanced image. Conventional histogram equalization stretches the dynamic range, which leads to large gaps between adjacent pixels and produces the over-enhancement problem. This drawback is overcome by defining a standard intensity deviation value to split and equalize the histogram. The selection of a suitable threshold value for clipping and splitting the image provides better enhancement than other methods. The simulation results show that the proposed method outperforms other conventional histogram equalization (HE) methods and effectively preserves entropy. Keywords—Standard intensity deviation; histogram clipping; histogram equalization; contrast enhancement; entropy

I. INTRODUCTION

Digital imagery plays an important role in the fields of medicine, industry, civil engineering, security, astronomy, animation, forensics, and web design. Image processing supports human visual perception and visual quality and is used in many areas, such as image enhancement, image compression, image de-noising, and image sharpening. Image enhancement is the process of making a digital image more suitable for visualization or for further analysis and for identifying key features of the image. Contrast enhancement is one of the well-known techniques in image enhancement. Contrast is created by the difference in luminance reflected from two adjacent surfaces [1], [2], and enhancement is a technique of changing the pixel intensities of the input image. The quality of image contrast is reduced by various factors, such as poor ambient light conditions and the aperture size and shutter speed of the camera [3]. Histogram equalization is a technique that improves image contrast by adjusting image intensities and is used in a wide range of applications, as it is a simple method that can be implemented easily. Its performance is limited because it tends to shift the mean brightness of the image to the middle of the gray-level range and creates undesirable effects that lead to over-enhancement [4]. This method does not preserve image brightness because it is a global operation, and it introduces noise artifacts. To overcome this problem, different histogram equalization methods have been proposed to preserve the image brightness [5].
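For reference, conventional HE remaps each gray level through the normalized cumulative distribution function of the image histogram. A minimal Python/NumPy sketch of this baseline (assuming an 8-bit grayscale image) is given below; the methods discussed next all modify this basic scheme.

import numpy as np

def histogram_equalize(img: np.ndarray, levels: int = 256) -> np.ndarray:
    # Histogram and normalized CDF of the input image.
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / img.size
    # Transfer function: stretch gray levels over the full dynamic range.
    mapping = np.round((levels - 1) * cdf).astype(np.uint8)
    return mapping[img]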
To preserve the mean brightness of the image, the brightness preserving bi-histogram equalization (BBHE) method was proposed [6]. Here, the mean value is used to bisect the histogram, and the two sub-images are then equalized independently. Another method, dualistic sub-image histogram equalization (DSIHE), follows BBHE but differs in that it uses the median value instead of the mean to create the sub-images [7]. However, these methods fail to preserve the brightness of the image effectively. To improve brightness preservation, minimum mean brightness error bi-histogram equalization (MMBEBHE) was tried [8]. This method uses histogram separation based on a threshold value and is an extension of BBHE; however, it fails to control over-enhancement. To improve the visual quality of the image, multi-histogram equalization approaches have come into existence. They are recursive mean separate histogram equalization (RMSHE) [9], which performs BBHE recursively, and recursive sub-image histogram equalization (RSIHE), which divides the histogram based on the median value [10]. In both methods, the selection of the number of iterations is a troublesome issue and may lead to over-enhancement. To overcome the over-enhancement problem, the histogram clipping approach was used; it helps avoid the saturation effect and preserves the details of the image by controlling the high-frequency bins [11]-[13]. Exposure based sub-image histogram equalization (ESIHE) enhances low-exposure images by using an exposure threshold value for image sub-division [14]. This method equalizes the sub-images individually and uses the clipped histogram to control over-enhancement. It does not consider the variation of the exposure value over the gray levels and stretches the contrast in the high-intensity region. The method is well suited for low-exposure images and produces images of better visual quality. This paper builds on ESIHE and proposes a contrast enhancement algorithm that defines a new standard intensity deviation value instead of the exposure threshold value. This improves the enhancement in terms of average information content. The performance of the proposed method was analyzed by enhancing low-exposure images and low-exposure underwater images. We find that the proposed method enhances the contrast, visual quality, and average information content of the images compared with other methods. The paper is structured as follows: Section 2 describes the proposed method. Experimental results and discussion are given in Section 3, and the conclusion is given in Section 4.

II. PROPOSED ALGORITHM

Conventional histogram equalization methods improve the image contrast by stretching the dynamic range of the image using the cumulative distribution function, which can create large gaps between adjacent gray levels (Fig. 1(d), (f) and (h)). The gap denotes the number of gray levels between two neighboring gray values, and a large gap between adjacent pixels leads to the over-enhancement problem [15]. From Fig.
1, it is found that the gap between adjacent pixels affects the enhancement quality, and the selection of the threshold value used to divide the image histogram also plays an important role. To overcome these problems, the standard intensity deviation based clipped sub-image histogram equalization (SIDCSIHE) algorithm is presented, which defines a new threshold value. This value is used to divide the image histogram, and it helps to enhance image contrast by preserving maximum information and minimizing the gap between adjacent pixels. The algorithm consists of three steps, namely standard intensity deviation value calculation, histogram clipping, and histogram equalization.

A. Standard Intensity Deviation Value Calculation

To measure the volatility of the image intensity, the standard deviation is computed from the variance between each intensity's histogram count and the mean image histogram, as given by (1) [16]. The mean image histogram of the low-contrast image is given by (2), where h(k) is the image histogram at intensity k and L is the total number of gray levels. The normalized standard deviation value is expressed as in (3), and its range is [0, 1]. Another parameter, the standard intensity deviation value X_sd, is defined in (4) using the normalized standard deviation value; it is used to modify each input gray level by dividing the image into two sub-images.

B. Histogram Clipping

The process of clipping the histogram controls the enhancement rate. The clipping threshold T_c given in (5) is calculated as the average number of gray-level occurrences,

T_c = (1/L) * Σ_{k=0}^{L-1} h(k),   (5)

and the histogram bins greater than the clipping threshold are clipped, as given by (6),

h_c(k) = min{h(k), T_c},   (6)

where h(k) and h_c(k) are the original and clipped histograms, respectively. The clipped histogram is shown in Fig. 2(b). Histogram clipping consumes little time, as it needs only a small number of computations [14].

C. Clipped Histogram Sub-Division and Equalization

Based on the standard intensity deviation value X_sd, the clipped histogram is divided into two sub-images with ranges varying from 0 to X_sd and from X_sd + 1 to L − 1, respectively. The probabilities of these two sub-images are P_1(k) = h_c(k)/N_1 and P_2(k) = h_c(k)/N_2, respectively, where N_1 and N_2 are the total numbers of pixels in the two sub-images, and the corresponding cumulative distribution functions C_1(k) and C_2(k) can be defined accordingly. Histogram equalization is applied individually to the two sub-images using the transfer function f(k) expressed in (11). The sub-images are then combined into one final image by the transfer function for further analysis. Fig. 3(a) shows the processed image, which appears cleaner and overcomes over-enhancement in the highlighted boxes. Fig. 3(b) shows its histogram, in which the gaps between adjacent pixels have been reduced, leading to better image quality.

III. RESULTS AND DISCUSSION

The pre-eminence of the proposed method is illustrated by comparing both objective and subjective assessments with well-known methods.

A. Objective Assessment

To verify the effect of enhancement, an objective assessment has been carried out and compared on the basis of entropy. Entropy is the average information content, a measure of the richness of image details; a higher value indicates more information content and is perceived as better image quality. Entropy is defined in (12) [17] as

E = − Σ_{k=0}^{L-1} p(k) log2 p(k),   (12)

where p(k) is the probability density function at intensity level k and L is the total number of gray levels of the image.
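Because Eqs. (1)-(4) are not reproduced above, the following Python sketch of the full SIDCSIHE pipeline is an approximation: the clipping threshold implements the stated definition (the average number of gray-level occurrences), whereas the mapping from the normalized standard deviation of the histogram to the split point X_sd is an assumption made here for illustration.

import numpy as np

def sidcsihe(img: np.ndarray, levels: int = 256) -> np.ndarray:
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    # Histogram clipping, Eqs. (5)-(6): clip bins above the mean bin count.
    t_c = hist.mean()
    clipped = np.minimum(hist, t_c)
    # Assumed split point derived from the normalized standard deviation.
    sigma_n = hist.std() / hist.max()
    x_sd = min(max(int(round((levels - 1) * sigma_n)), 1), levels - 2)
    # Equalize the two sub-histograms independently over their own ranges.
    out_map = np.zeros(levels)
    for lo, hi in ((0, x_sd), (x_sd + 1, levels - 1)):
        part = clipped[lo:hi + 1]
        cdf = np.cumsum(part) / max(part.sum(), 1.0)
        out_map[lo:hi + 1] = lo + (hi - lo) * cdf
    return np.round(out_map).astype(np.uint8)[img]

def entropy(img: np.ndarray, levels: int = 256) -> float:
    # Eq. (12): E = -sum_k p(k) log2 p(k) over the nonzero bins.
    p = np.bincount(img.ravel(), minlength=levels) / img.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())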
Fifteen test images of different types, including low-exposure underwater images and other low-exposure images, are used. The performance of the proposed method is compared with the existing methods HE, BBHE, DSIHE, MMSICHE, NMHE and ESIHE in terms of entropy. From Table I, it is evident that the proposed (SIDCSIHE) method has the highest entropy values compared with the other methods, and its value is very near the input average entropy value (6.269), which indicates that more information is extracted from the input image. To check its robustness, the execution time of the proposed method is compared with the other methods, because most studies focus on execution time as well as image quality. The methods were executed on a 64-bit Windows 8 computer with an Intel i3 processor and 4 GB of RAM. The average execution times are tabulated in Table I. The execution time of the proposed method is compared with the advanced methods MMSICHE, NMHE and ESIHE, as the conventional methods HE, BBHE and DSIHE do not possess better visual quality.

B. Subjective Assessment

The performance of an algorithm in contrast enhancement provides information about over-enhancement and any unnatural look of the image, obtained by inspecting the visual quality of the processed image. Although objective assessment provides quantitative information, quality evaluation can also be accomplished by subjective assessment, which is the most direct approach to judging the quality of an image from an observer's point of view. To prove the robustness and versatility of the proposed method, standard images are chosen from different fields, such as underwater images and low-exposure images (tank, fish, elk, plane, sanctuary), as shown in Fig. 4 to 8. A low-contrast image is shown in Fig. 6(a). The supremacy of the proposed method (Fig. 6(h)) can be analysed by comparing Fig. 6(b), (c), (d), (e) and (g) of HE, BBHE, DSIHE, MMSICHE and ESIHE, respectively. Due to the over-enhancement problem, information is unclear, especially in the skin of the elk and the grass, as highlighted in the square boxes. Fig. 6(f), the result of applying NMHE, appears dark compared with Fig. 6(h) obtained by the proposed SIDCSIHE method. In addition, Fig. 6(h) has better contrast and visual quality, resembling the natural look of the scene.

IV. CONCLUSION

The proposed method has promising performance in terms of both entropy and overall visual quality. The selection of the standard intensity deviation value provides a new optimal threshold to split the clipped histogram and equalize the sub-images effectively, and it gives control over the over-enhancement rate. The visual quality, entropy values, and execution speed show the robustness of the proposed method compared with existing algorithms for low-exposure images. Given the efficiency of the proposed method in preserving detailed information, especially for low-exposure underwater images, the method can be extended further to improve the average information content by decomposing the histogram into multiple segments based on suitable thresholds and equalizing each segment independently.

Fig. 1. (a), (c), (e) and (g) represent the original image, HE image, BBHE image and ESIHE image, respectively. Fig. 1(b), (d), (f) and (h) are the histograms of the respective images. Fig.
1(c), (e) and (g) illustrate the over-enhancement problem, where the road, the top of the truck, and the soil portions of the image texture are very bright, as highlighted in the red square boxes. The reason for this problem is the large gap between two adjacent gray values of the histogram (Fig. 1(d), (f) and (h)).

Fig. 2. Sub-division process: (a) image histogram sub-division and clipping; (b) clipped image histogram.

Fig. 4. Fig. 4(a) is a low-contrast image. Fig. 4(b), (c) and (d), processed by HE, BBHE and DSIHE, exhibit over-enhancement, as can be seen in the visual appearance and the corresponding histograms. Fig. 4(e), (f) are the results of MMSICHE and NMHE, respectively. These images seem gloomy, and Fig. 4(f) is too dark compared with the other images. Fig. 4(g), processed by ESIHE, shows better enhancement than the other methods. However, its histogram detail is equalized more in the lower intensity region, due to which the tank appears dark compared with Fig. 4(h) produced by SIDCSIHE. The image in Fig. 4(h) looks more natural, with better visual quality than the other methods.

Fig. 5. Fig. 5(a) is an example of a low-exposure underwater fish image. Fig. 5(b) is processed by HE, which enhances the image considerably, but the white pebbles of the image have become brighter and cannot be seen clearly. Fig. 5(c), (d), (e) and (f) are processed by the BBHE, DSIHE, MMSICHE and NMHE methods, respectively, and these methods fail to distinguish the fish from the background. Fig. 5(g), (h) are the results of ESIHE and SIDCSIHE, and these images show better enhancement than the other methods.

Fig. 7. Fig. 7(a) is a low-contrast plane image. Fig. 7(b), (c), (d), (e) and (g) are the results of HE, BBHE, DSIHE, MMSICHE and ESIHE. The texture of the plane and the surrounding cloud area in these images is not clear and has unpleasant visual artefacts. Fig. 7(f), processed by NMHE, has a clear look but lacks the visual effect of Fig. 7(h), the result of the proposed method. The proposed method gives an image with a clear outer surface, as highlighted in the square boxes. The resulting image of the proposed method has a natural look, preserving maximum information.

Fig. 8. Fig. 8(a) is a low-contrast sanctuary image. Fig. 8(b)-(g) are the results of HE, BBHE, DSIHE, MMSICHE, NMHE and ESIHE, respectively. These methods fail to enhance the texture of the mountain, grass and road in all the images except Fig. 8(f), as highlighted in the square boxes. The same texture information in the SIDCSIHE result (Fig. 8(h)) is clearly visible, and the image looks more natural without any unpleasant artefact.
3,227
2018-01-01T00:00:00.000
[ "Computer Science" ]
Wafer-fused 1300 nm VCSELs with an active region based on superlattice The 1300 nm range vertical-cavity surface-emitting lasers with the active region based on an InGaAs/InGaAlAs superlattice are fabricated using molecular-beam epitaxy and the double wafer-fusion technique. Lasers with a buried tunnel junction diameter of 5 μm have shown single-mode CW operation with an output optical power of ∼6 mW at 20°C. Open eye diagrams are observed up to 10 Gbps. 1.0% to 1.6%, the D-factor, which defines the rising rate of the resonance frequency with current, can be increased by 60% [9]. In addition, the modulation bandwidth of InP-based VCSELs with AlGaInAs QWs can be increased up to 11.5 GHz by optimization of the cavity photon lifetime, which in turn enables error-free data transmission at 25 Gbps. An InGaAs/InGaAlAs superlattice (SL) can be used as an alternative approach to enhance the differential gain. The use of the molecular-beam epitaxy (MBE) technique, in comparison with metalorganic vapour-phase epitaxy, allows creating sharp heterointerfaces, which made it possible to implement an active region based on an SL. In fact, the investigation of 1550 nm range edge-emitting lasers revealed higher optical gain for the SL-based active region in comparison with the InGaAs QW-based active region [10]. Here, we report on the realization of 1300 nm MBE-grown double wafer-fused VCSELs with an active region based on an InGaAs/InAlGaAs SL, which demonstrate an output optical power of 6 mW in a single-mode CW regime. Small- and large-signal modulation experiments revealed the possibility of efficient stable operation at 10 Gbps.

Device structure: The VCSEL heterostructure was fabricated by double wafer fusion of AlGaAs/GaAs DBRs, grown on GaAs substrates, to both sides of the InAlGaAsP optical cavity grown on an InP substrate. The InP-based and GaAs-based heterostructures were grown using the MBE technique. A double intra-cavity contacted VCSEL design with a buried tunnel junction (BTJ) for current confinement was applied as our basic device design [11]. Figure 1 shows a cross-sectional scanning electron microscope (SEM) image of the completed VCSEL heterostructure in the microcavity region and the corresponding distribution of the electromagnetic field intensity of the cavity mode, along with the refractive index profile. The GaAs-based VCSEL heterostructure consists of a bottom DBR based on 35.5 pairs of quarter-wave Al0.91Ga0.09As/GaAs layers, a bottom intra-cavity n-InP contact layer with a thin heavily doped n-InGaAs contact layer, an active region based on the SL (24 periods of 0.8 nm-thick In0.57Ga0.43As / 2 nm-thick In0.53Ga0.20Al0.27As layers), a p-In0.52Al0.48As emitter, an n++-In0.53Ga0.47As/p++-In0.53Ga0.47As/p++-In0.53Ga0.31Al0.16As composite BTJ with an etching depth of ∼25 nm, a top intra-cavity n-InP contact layer with a modulated doping profile, and a top DBR based on 21.5 pairs of quarter-wave Al0.91Ga0.09As/GaAs layers. The wavelength of the photoluminescence peak of the active region emission was about 1280 nm at 20°C. The total thickness of the optical microcavity is 3λ (the cavity boundaries are determined by the fused interfaces). The BTJ layers and heavily doped contact layers were placed at nodes of the electromagnetic field intensity of the optical cavity mode to reduce optical absorption loss, whereas the active region was placed at its antinode to increase the optical confinement factor.
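As a quick sanity check on the quarter-wave DBR design mentioned above, the nominal layer thicknesses at the 1300 nm design wavelength can be estimated as λ/(4n). The refractive indices used in the Python sketch below are rough assumed values for GaAs and Al0.91Ga0.09As near 1300 nm, not values taken from the paper.

wavelength_nm = 1300.0
assumed_indices = {"GaAs": 3.4, "Al0.91Ga0.09As": 3.0}  # assumed values

for material, n in assumed_indices.items():
    thickness = wavelength_nm / (4.0 * n)  # quarter-wave optical thickness
    print(f"{material}: ~{thickness:.0f} nm per quarter-wave layer")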
Experimental results: The CW light-current-voltage (LIV) characteristics of typical VCSELs at 20°C are presented in Figure 2. Despite the use of an InGaAs-based BTJ for efficient current confinement and relatively high mirror losses, a threshold current of about 1.25 mA is achieved. Thermal rollover occurs at about 15 mA and limits the maximum output optical power to 6.1 mW. Such high output optical power is due to the high differential quantum efficiency reached for these devices, ∼70%. The differential quantum efficiency decreases by only 11% at 70°C and by 22% at 90°C. The threshold voltage is 1.9 V, and the differential resistance, calculated at half the rollover current, is ∼85-90 Ω, owing to the optimized doping profile and the high quality of the wafer-fused interfaces. The wall-plug efficiency also reaches values up to ∼30%. Single-mode operation over the entire current range with a side-mode suppression ratio (SMSR) of at least 40 dB is revealed. The inset in Figure 2 shows spectra at different currents. To estimate the high-speed performance of the present VCSELs, the small-signal modulation response S21(f) was measured using the Keysight N4375D 26.5 GHz lightwave component analyser. The RF signal was combined with the direct current bias through a 45 GHz bias tee and fed to on-wafer VCSELs by a high-frequency ground-source-ground probe head. The results of the small-signal modulation analysis for the VCSELs at different bias currents are presented in Figure 3. The −3 dB cut-off modulation bandwidth reaches a value of ∼8 GHz at about 10 mA with a modulation current efficiency factor of ∼2.9 GHz/mA^0.5, then saturates at 8 GHz and drops down to 5 GHz at higher currents (near the rollover current). According to the corresponding S21(f) fits by the three-pole transfer function, the resonance frequency f_R significantly exceeds the −3 dB cut-off modulation bandwidth at high currents, while the K-factor, derived from the dependence of the damping coefficient on the squared resonance frequency, is about 0.4 ns (a further decrease in this value is possible only by reducing the length of the optical cavity). The parasitic cut-off frequency reaches only ∼4 GHz, and hence the high-speed performance of the developed VCSELs is limited by electrical parasitics. Large-signal modulation experiments were performed at various bit rates to determine the data transmission capacity of the fabricated devices. The Keysight M8195A 65 Gbps arbitrary waveform generator was used to generate a non-return-to-zero bit pattern (a pseudo-random bit sequence with a pattern length of 2^7 − 1). The Keysight 86100D Infiniium DCA-X wide-bandwidth oscilloscope combined with a 20 GHz optical module was used to record the large-signal modulation. The VCSEL was biased at 10 mA with 0.7 V peak-to-peak modulation voltage at 20°C. The eye amplitude changes only weakly as the bit rate increases up to 10 Gbps; beyond this rate, however, a decrease in the eye height is observed. The inset in Figure 3 shows a typical eye diagram at 10 Gbps.

Conclusions: We have studied 1300 nm MBE-grown wafer-fused VCSELs with an InGaAs/InGaAlAs SL-based active region and a BTJ diameter of 5 μm. Single-mode CW operation with an output optical power of 6 mW and SMSR > 40 dB at 20°C has been obtained. A small-signal modulation bandwidth of more than 8 GHz and clearly open eye diagrams up to 10 Gbps were observed.
We believe that further optimization of the length of the optical cavity, the SL design and the photon lifetime, as well as a reduction of the parasitic capacitance, would lead to better dynamic characteristics of VCSELs with the SL-based active region compared with the results of InP-based VCSELs with AlGaInAs QWs.
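For readers who wish to reproduce the shape of such S21 curves, a Python sketch of the standard two-pole intrinsic response combined with a first-order parasitic low-pass is given below. The K-factor (~0.4 ns) and the parasitic cut-off (~4 GHz) follow the values reported above; the resonance frequency and the damping offset are illustrative assumptions, so this is a qualitative model rather than the authors' fit.

import numpy as np

def s21_db(f_ghz, f_r=7.0, k_ns=0.4, f_par=4.0, gamma0=2.0):
    # Damping rate (1/ns) from the K-factor; gamma0 is an assumed offset term.
    gamma = k_ns * f_r**2 + gamma0
    intrinsic = f_r**2 / (f_r**2 - f_ghz**2 + 1j * f_ghz * gamma / (2 * np.pi))
    parasitic = 1.0 / (1.0 + 1j * f_ghz / f_par)
    return 20 * np.log10(np.abs(intrinsic * parasitic))

f = np.linspace(0.1, 15.0, 300)
resp = s21_db(f)
f_3db = f[np.argmax(resp < resp[0] - 3.0)]  # first point 3 dB below low-f level
print(f"approximate -3 dB bandwidth: {f_3db:.1f} GHz")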
1,617
2021-06-03T00:00:00.000
[ "Physics" ]
Rationalizable strategies in games with incomplete preferences This paper introduces a new solution concept for games with incomplete preferences. The concept is based on rationalizability, and it is more general than the existing ones based on Nash equilibrium. In rationalizable strategies, we assume that the players choose nondominated strategies given their beliefs of what strategies the other players may choose. Our solution concept can also be used, e.g., in ordinal games where the standard notion of rationalizability cannot be applied. We show that the sets of rationalizable strategies are the maximal mutually nondominated sets. We also show that no new rationalizable strategies appear when the preferences are refined, i.e., when the information gets more precise. Moreover, noncooperative multicriteria games are suitable applications of incomplete preferences. We apply our framework to such games, where the outcomes are evaluated according to several criteria and the payoffs are vector valued. We use sets of feasible weights to represent the relative importance of the criteria. We demonstrate the applicability of the new solution concept with an ordinal game and a bicriteria Cournot game.

Introduction

This paper examines games with incomplete preferences (Bade 2005). A player has incomplete preferences when he is unable to compare or is indecisive between some of the outcomes. This may be due to the fact that the outcomes are represented by multiple conflicting criteria, that the outcomes are simply uncertain, or that the player represents a group of individuals. Incomplete preferences have mainly been studied in the context of decision-making problems (Aumann 1962; Ok 2002; Heller 2012). In non-cooperative games, incomplete preferences have been studied in the special case of multicriteria games (Shapley 1959; Blackwell 1956; Corley 1985; Borm et al. 1988; de Marco and Morgan 2007; Marmol et al. 2017). Excluding the multicriteria games, only a few papers have examined incomplete preferences in non-cooperative game models (Bade 2005; Park 2015; Shafer and Sonnenschein 1975). This paper extends the solution concept of rationalizability to games with incomplete preferences. Nash equilibrium is the main solution concept in game theory, and it assumes that the players have correct beliefs about their opponents' strategies, i.e., they know the strategies that their opponents are going to choose. Rationalizability is a more general solution concept that allows the players to have erroneous but rational beliefs (Bernheim 1984; Pearce 1984), which means that the set of rationalizable strategies always contains the set of Nash equilibria. Standard rationalizability means that each player maximizes the expected utility given her probabilistic belief. However, the maximization of expected utility may not be possible under incomplete preferences or nonprobabilistic beliefs. This may, e.g., happen in ordinal games (Durieu et al. 2008), where the players only have a preference order but no numeric values are assigned to the outcomes. Thus, we need to generalize the definition of rationalizability. We propose a notion of rationalizable strategies where the players have nonprobabilistic beliefs and choose nondominated strategies given their beliefs. Nonprobabilistic beliefs mean that the players only reason about what pure strategies the opponents may choose but do not need to assign any probabilities to these strategies.
Nondomination means that a player does not select a strategy if another strategy yields a better outcome under all combinations of the possible strategies of the other players given her belief. Probabilistic and nonprobabilistic beliefs have been discussed in Perea (2014). Chen et al. (2016) extend rationalizability beyond probabilistic beliefs and expected utility maximization. Non-expected utility models have also been examined in Jungbauer and Ritzberger (2011) and Beauchêne (2016). Moreover, the closed under rational behavior (curb) sets (Basu and Weibull 1991) are closely related to the rationalizable strategies and to our solution concept as well. In particular, the maximal tight curb sets are very close to our notion when the players' preferences are complete, except that we use nondominated strategies whereas the curb sets are defined by the best-response correspondences. To our knowledge, this paper is the first to study rationalizability in games with incomplete preferences. Bade (2005) has shown that the Nash equilibria in games with incomplete preferences correspond to the union of the Nash equilibria in all the completions of the game. More recently, Park (2015) has examined the existence of Nash equilibrium in potential games with incomplete preferences. Incomplete preferences have also been considered in nonmonetized games (Li 2013; Xie et al. 2013), where the preference order is defined in a common outcome space for all players, whereas in our framework the preference orders for each player are defined directly over the strategy combinations. The literature on nonmonetized games is also focused on generalizations of Nash equilibrium. In this paper, we show that the sets of rationalizable strategies are the maximal mutually nondominated sets. Mutual nondominance is defined so that sets of strategies are mutually nondominated if they contain no dominated strategies with respect to each other. We also show that providing more precise preference information does not enlarge the sets of rationalizable strategies. That is, if new preferences are added over some pairs of outcomes for which a player was previously indecisive and all the original preferences are maintained, then there will be no new rationalizable strategies. We apply our framework to multicriteria games (Shapley 1959; Blackwell 1956; Corley 1985; Borm et al. 1988; de Marco and Morgan 2007; Marmol et al. 2017), which are a special class of games with incomplete preferences. In multicriteria games, the outcomes are evaluated as vector-valued payoffs, where each component describes how good the outcome is with respect to that particular criterion. Since an outcome can be better in one criterion but worse in another, it is natural that the players may have incomplete preferences over the outcomes in multicriteria games. The main solution concept in the existing literature on multicriteria games is the multicriteria extension of Nash equilibrium (e.g., Shapley 1959; Corley 1985; Borm et al. 1988; de Marco and Morgan 2007; Marmol et al. 2017). A combination of strategies is an equilibrium if the strategy of each player is nondominated when the other players play the equilibrium strategies. However, this equilibrium concept does not take into account information about the relative importance of the criteria. In this paper, we use weights to represent the importance of the criteria and allow the weights to be set-valued.
This idea has been proposed in the multicriteria/multiattribute decision analysis literature (e.g., White et al. 1982; Kirkwood and Sarin 1985; Hazen 1986; Weber 1987; Salo and Hämäläinen 1992, 2010). In the game context, Monroy et al. (2009) have considered sets of feasible weights in a cooperative bargaining setting, while in the noncooperative setting the use of sets of feasible weights has been proposed only by us and, independently, by Marmol et al. (2017), who use Nash equilibrium as the solution concept. Our framework enables analyzing the impact of additional preference information about the relative importance of the criteria on the solutions of multicriteria games. To our knowledge, this is the first paper to consider rationalizable strategies in multicriteria games while representing incomplete preference information as sets of feasible weights. The paper is structured as follows. Games with incomplete preferences as well as the rationality concept are defined in Sect. 2, and the rationalizable strategies are defined in Sect. 3. Sect. 4 provides properties of the sets of rationalizable strategies. First, the characterization in terms of mutual nondominance is given in Sect. 4.1. Then, the relation between rationalizable strategies and the iterative elimination of dominated strategies is discussed in Sect. 4.2. The existence of the rationalizable strategies in the case of finite strategy sets and the possible nonexistence in the infinite case are shown in Sect. 4.3. The result that adding preference information does not lead to any additional rationalizable strategies is shown in Sect. 4.4. Multicriteria games with incomplete preference information are discussed in Sect. 5. Examples of a game with incomplete preferences with a finite number of strategies as well as of a multicriteria game with an infinite number of strategies are given in Sect. 6. Finally, concluding remarks are presented in Sect. 7.

Elements of the game

Definition 1 (Bade 2005) A game with incomplete preferences consists of the following components: • The set of players I = {1, . . . , n}. • For each player i ∈ I, the set of strategies S_i. • For each player i ∈ I, the preference relation: a transitive and reflexive binary relation ≽_i defined on S = S_1 × · · · × S_n.

The players are assumed to select their strategies simultaneously. The combination of the selected strategies then implies the outcome of the game and, thus, the outcome is formally defined as the combination of the selected strategies. Player i ∈ I strictly prefers the outcome implied by strategies (s_1, . . . , s_n) ∈ S to the outcome implied by strategies (s'_1, . . . , s'_n) ∈ S, denoted by (s_1, . . . , s_n) ≻_i (s'_1, . . . , s'_n), if (s_1, . . . , s_n) ≽_i (s'_1, . . . , s'_n) but not (s'_1, . . . , s'_n) ≽_i (s_1, . . . , s_n). If both (s_1, . . . , s_n) ≽_i (s'_1, . . . , s'_n) and (s'_1, . . . , s'_n) ≽_i (s_1, . . . , s_n) hold, player i is indifferent between the outcomes. The relation ≽_i is allowed to be incomplete, so that neither (s_1, . . . , s_n) ≽_i (s'_1, . . . , s'_n) nor (s'_1, . . . , s'_n) ≽_i (s_1, . . . , s_n) holds for some pair of outcomes. This means that the outcomes are incomparable for the player.

Rationality concept

We do not assume that the players possess probabilistic assessments about what strategies the opponents may select. Instead, we only assume that a player holds a nonprobabilistic belief of the possible selections of strategies taken by the opponents.
Without probabilities, the only assumption that can be made about the preferences over strategies is that a rational player does not select a strategy that leads to worse outcomes than some other strategy no matter which strategies the opponents select among those that are possible according to her belief. Thus, we take rationality to mean that the players select nondominated strategies with respect to their beliefs. The set of nondominated strategies for player i with belief B_i ⊆ S_{−i} is hereafter denoted by ND(i, B_i) and defined as

ND(i, B_i) = { s_i ∈ S_i : there is no s'_i ∈ S_i such that (s'_i, s_{−i}) ≻_i (s_i, s_{−i}) for all s_{−i} ∈ B_i }.   (1)

It should also be noted that only the strict preferences are important for our rationality concept, since it is based on domination. A useful property of ND is that all strategies nondominated with respect to a belief remain nondominated if the belief is replaced with another belief containing additional possible strategies of the opponents. This is known as the monotonicity property, and it is formalized in the following remark.

Remark 1 If B_i ⊆ B'_i ⊆ S_{−i}, then ND(i, B_i) ⊆ ND(i, B'_i).

Our rationality concept formulated in this section can be seen as a relaxation of a similar rationality concept based on nonprobabilistic beliefs, viz. rationality* defined by Chen et al. (2007). Here, the difference is that the preferences can be incomplete and we use nondominated strategies.
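To make the ND operator concrete, the following Python sketch computes ND(i, B_i) for a finite two-player game. The function prefers is an assumed user-supplied encoding of player i's (possibly incomplete) strict preference over outcome pairs; everything else follows the definition above.

# Outcomes are (own strategy, opponent strategy) pairs. A strategy s is
# dominated given the belief set if some alternative t yields a strictly
# preferred outcome against every opponent strategy in the belief.
def nondominated(strategies_i, belief, prefers):
    def dominated(s):
        return any(
            all(prefers((t, o), (s, o)) for o in belief)
            for t in strategies_i
            if t != s
        )
    return {s for s in strategies_i if not dominated(s)}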
Bernheim (1984) motivates the concept of rationalizable strategies as the logical conclusion of assuming that the players view the selections of the opponents' strategies as uncertain events, that the players follow Savage's axioms of individual rationality, and that the latter as well as the game (i.e., the strategies and the payoffs) are common knowledge. In this paper, we assume the selection of nondominated strategies in the sense of Sect. 2.2 instead of Savage rationality. Furthermore, in our case, the game being common knowledge means that the strategies and the preference relations are common knowledge. The definition of rationalizable strategies used here is obtained by modifying the standard definition of Bernheim (1984) accordingly.

Definition of rationalizable strategies

The following notation is taken from Bernheim (1984). Let Δ_i^I be the set of sequences (i_1, . . . , i_m), i_j ∈ I for 1 ≤ j ≤ m, where 1 ≤ m < ∞, i_1 ≠ i, and i_j ≠ i_{j+1} for all 1 ≤ j < m. Denote the last element of δ ∈ Δ_i^I by l(δ), and a sequence formed by adding j to the end of δ by δ + (j). Similarly, a sequence formed by concatenating δ_1 and δ_2 is denoted by δ_1 + δ_2.

Definition 2 A system of beliefs of player i is a mapping Θ that assigns to each δ = (i_1, . . . , i_m) ∈ Δ_i^I a set of strategies Θ(δ); Θ(δ) is the set of strategies s for which player i believes that i_1 may believe that i_2 may believe that . . . that i_{m−1} may believe that i_m may select s. Naturally, such strategies must belong to the set of strategies of player i_m, i.e., Θ(δ) is a subset of S_{l(δ)}.

Bernheim (1984) requires Θ(δ) to be a Borel subset. In our framework with nonprobabilistic beliefs, no such assumption is necessary, and thus the systems of beliefs are allowed to contain any subsets of the strategy sets S_k. The common knowledge of rationality implies that i believes that i_1 believes that . . . i_{m−1} believes that i_m is rational. Hence, the strategies that i believes that i_1 believes that . . . i_{m−1} believes that i_m may select must be nondominated with respect to the strategies that i believes that i_1 believes that . . . i_{m−1} believes that i_m believes that her opponents may select. This implies the following consistency condition for the systems of beliefs.

Definition 3 A system of beliefs Θ of player i is consistent if

∀δ ∈ Δ_i^I : Θ(δ) ⊆ ND(l(δ), ∏_{j≠l(δ)} Θ(δ + (j))).   (2)

Bernheim (1984) requires that each strategy in Θ(δ) is a best response to some probability distribution over the strategies of l(δ)'s opponents, where only strategies that i believes that i_1 believes that i_2 believes that . . . that l(δ) considers possible may have nonzero probability. Our definition captures the idea without probability distributions and with nondominance instead of best responses. Finally, the rationalizable strategies of player i are the strategies that are nondominated with respect to some consistent system of beliefs for player i, which leads to the following definition.

Definition 4 Strategy s_i ∈ S_i is a rationalizable strategy iff a consistent system of beliefs Θ of player i exists such that

s_i ∈ ND(i, ∏_{j≠i} Θ((j))).   (3)

The set of rationalizable strategies for player i is hereafter denoted by S_i^R. The following remark shows that the strategies contained in a consistent system of beliefs are rationalizable.

Remark 2 Let Θ be a consistent system of beliefs for player i and δ ∈ Δ_i^I. Then, every strategy s_j ∈ Θ(δ) is a rationalizable strategy of player l(δ). Note that if δ' ∈ Δ_{l(δ)}^I, then δ + δ' ∈ Δ_i^I. Therefore, we may define a system of beliefs Θ' for player l(δ) as follows: Θ'(δ') = Θ(δ + δ'). Consistency of Θ then implies consistency of Θ', as well as that the condition of Eq. (3) is fulfilled for any s_j ∈ Θ(δ). Thus, any such s_j is rationalizable.

Properties of rationalizable strategies

In this section, we first characterize the sets of rationalizable strategies as the largest mutually nondominated sets of players' strategies. Then, we show that the iterative elimination of dominated strategies never removes rationalizable strategies. Furthermore, in the case of finite strategy sets, the iterative elimination converges exactly to the rationalizable strategies, which in turn implies the existence of rationalizable strategies in the finite case. These results are close in spirit to those of Chen et al. (2007), who showed that their iterative elimination concept, IESDS*, converges to the largest stable set with respect to dominance, which in turn is the implication of common knowledge of rationality* as defined in their paper. We give independently developed proofs of our results, since our rationality concept differs from rationality* in that we allow the preferences to be incomplete. Our solution concept is also related to the maximal tight curb sets (Basu and Weibull 1991; Jungbauer and Ritzberger 2011), except that we allow incomplete preferences and use nondominated strategies rather than best responses. Note that while IESDS* uses an uncountably infinite number of iterations, we focus only on countable iteration, since our main motivation for studying the iterative elimination is to show the existence of rationalizable strategies in the finite case, and to use it as a practical algorithm for actually finding the rationalizable strategies in games. In Sect. 4.4, we formulate and prove the result that adding preference information to the preference relations does not lead to any new rationalizable strategies. This result is novel and has no correspondence in the rationality* framework of Chen et al. (2007), since they do not consider incomplete preferences.

Characterization

The sets of rationalizable strategies are characterized here in terms of mutually nondominated subsets. We define mutually nondominated subsets as subsets consisting of strategies that are nondominated with respect to the belief that the opponents select from the same mutually nondominated sets.
This concept is an adaptation of the best response property used by Bernheim (1984) to the rationality concept used in this paper. In the following, we show (1) that the sets of rationalizable strategies must be mutually nondominated, (2) that any mutually nondominated sets are subsets of the rationalizable strategies, and (3) that the union of all mutually nondominated sets is mutually nondominated. This implies that the set of rationalizable strategies is the union of all mutually nondominated sets. In other words, the rationalizable strategies comprise the maximal mutually nondominated set.

Definition 5 An n-tuple of sets of strategies for each player (S'_1, . . . , S'_n) is mutually nondominated, denoted by (S'_1, . . . , S'_n) ∈ MND, if ∀i ∈ I : S'_i ⊆ ND(i, ∏_{j≠i} S'_j).

Lemma 1 The sets of rationalizable strategies are mutually nondominated, i.e., S^R = (S_1^R, . . . , S_n^R) ∈ MND.

Proof All strategies contained in consistent systems of beliefs must be rationalizable (Remark 2). On the other hand, a rationalizable strategy must be nondominated with respect to strategies contained in a consistent system of beliefs. Therefore, a rationalizable strategy is nondominated with respect to a subset of the rationalizable strategies. Based on Remark 1, this implies that a rationalizable strategy is nondominated with respect to the rationalizable strategies. Hence, the sets of rationalizable strategies are mutually nondominated.

Lemma 2 All strategies contained in an n-tuple of mutually nondominated sets of strategies are rationalizable, i.e., (S'_1, . . . , S'_n) ∈ MND implies S'_i ⊆ S_i^R for all i ∈ I.

Proof Define the following system of beliefs for all players: ∀δ ∈ Δ_i^I : Θ(δ) = S'_{l(δ)}. Mutual nondominance implies consistency of Θ and that Θ rationalizes all strategies in (S'_1, . . . , S'_n).

Definition 6 We denote by ∪MND the ordered n-tuple (∪MND_1, ∪MND_2, . . . , ∪MND_n), where ∪MND_i is the union of the ith components of the members of MND. Note that this is an abuse of notation, since ∪MND is not technically the union of MND but the tuple of the unions of the components of the members of MND. See the example after Theorem 1.

Lemma 3 ∪MND ∈ MND.

Proof For any strategy s_i in ∪MND_i, there exists an S' ∈ MND such that s_i ∈ S'_i. By the definition of MND, this implies s_i ∈ ND(i, ∏_{j≠i} S'_j). Furthermore, ND has the property that all nondominated strategies remain nondominated if the belief set is replaced with its superset (Remark 1). Therefore, s_i ∈ ND(i, ∏_{j≠i} ∪MND_j), and thus ∪MND satisfies Definition 5.

Theorem 1 The sets of rationalizable strategies are the unions of the mutually nondominated sets, i.e., S^R = ∪MND.

Proof Lemma 3 states that ∪MND ∈ MND. Hence, Lemma 2 implies that ∪MND ⊆ S^R. On the other hand, Lemma 1 states that S^R ∈ MND. Hence, S^R ⊆ ∪MND. Therefore, S^R = ∪MND.

Iterative elimination of dominated strategies

Next, we show that the iterative elimination of dominated strategies never removes rationalizable strategies. If the strategy sets are finite, the iterative elimination of dominated strategies converges to the rationalizable strategies. However, in the case of infinite strategy sets, nonrationalizable strategies may survive the iterative elimination.

Definition 7 We define the sets of strategies surviving k steps of the iterative elimination recursively as follows: ∀i ∈ I : S_i^0 = S_i and S_i^{k+1} = ND(i, ∏_{j≠i} S_j^k) ∩ S_i^k. Note that clearly ∀i, k : S_i^{k+1} ⊆ S_i^k. Then, the strategies surviving the iterative elimination of dominated strategies are ∀i : S_i^∞ = ∩_{k=0}^∞ S_i^k.

Theorem 2 All rationalizable strategies survive the iterative elimination of dominated strategies, i.e., S_i^R ⊆ S_i^∞.

Proof We show by induction that the set of rationalizable strategies of player i is a subset of the strategies surviving k steps of the iterative elimination for all k. Initially, S_i^R ⊆ S_i = S_i^0. Assuming that S_j^R ⊆ S_j^k for all j ∈ I, Lemma 1 and the monotonicity of ND (Remark 1) imply that S_i^R ⊆ ND(i, ∏_{j≠i} S_j^k), and hence S_i^R ⊆ S_i^{k+1}.

Theorem 3 If the strategy sets are finite, the iterative elimination of dominated strategies converges to the rationalizable strategies, i.e., S_i^∞ = S_i^R for all i ∈ I.
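As a computational illustration of Definition 7 for finite two-player games (and hence, by Theorem 3, of finding the rationalizable strategies), the Python sketch below repeatedly applies the nondominated function from the previous sketch until no strategy is removed. The preference functions prefers1 and prefers2 over outcome pairs (s_1, s_2) are assumptions supplied by the modeller.

# Iterative elimination of dominated strategies for a finite two-player game.
# Outcomes are (player 1 strategy, player 2 strategy) tuples; the helper
# 'nondominated' expects the player's own strategy first, so player 2's
# preference arguments are reversed before the comparison.
def rationalizable(s1, s2, prefers1, prefers2):
    s1, s2 = set(s1), set(s2)
    while True:
        n1 = nondominated(s1, s2, prefers1)
        n2 = nondominated(s2, s1, lambda a, b: prefers2(a[::-1], b[::-1]))
        if n1 == s1 and n2 == s2:
            return n1, n2
        s1, s2 = n1, n2

For instance, once the preference relations of the finite example in Sect. 6 are encoded as prefers1 and prefers2, this loop reproduces the eliminations carried out there by hand.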
Remark 3 There exists a game with incomplete preferences for which S_i^R ≠ S_i^∞.

Proof Consider the following game with two players, where the players pick numbers from the set of natural numbers extended with a "small infinity" (∞') and a "big infinity" (∞). The player who selects the largest number wins. A player always prefers a win to a tie and a tie to a loss. Furthermore, between two losing outcomes, the players prefer ones where they select larger numbers. However, the players have no preference between two winning outcomes. Formally, the sets of strategies are S_1 = S_2 = {0, 1, 2, . . . , ∞', ∞}, and the preference relation of player 1, ≻_1, is defined so that (n_1, n_2) ≻_1 (m_1, m_2) if and only if
• m_1 < m_2 and n_1 ≥ n_2, or
• m_1 = m_2 and n_1 > n_2, or
• m_1 < m_2 and n_1 > m_1,
where the relations > and < are extended so that ∀k ∈ N : ∞', ∞ > k and ∞ > ∞'. The iterative elimination removes at each step the smallest number from the remaining strategy sets of both players, and thus S_1^k = S_2^k = {k, k + 1, . . . , ∞', ∞} and, therefore, S_1^∞ = S_2^∞ = {∞', ∞}. However, these sets are not mutually nondominated, as only ∞ is nondominated with respect to the belief that the opponent selects from {∞', ∞}.

Existence

Here, we show in Theorem 4 that when the strategy sets are finite, the sets of rationalizable strategies are nonempty. However, in the case of infinite strategy sets, the existence of rationalizable strategies is not guaranteed, which is illustrated in Remark 4.

Theorem 4 If the strategy sets are finite, the sets of rationalizable strategies are nonempty.

Proof Theorem 3 implies that the iterative elimination of dominated strategies converges to the set of rationalizable strategies. When the strategy sets are finite, a step of the iterative elimination of dominated strategies will never remove all strategies. Hence, the sets of rationalizable strategies are nonempty.

Remark 4 When the strategy sets are allowed to be infinite, the sets of rationalizable strategies may be empty.

Proof Consider the game discussed in Remark 3 with the strategies ∞' and ∞ removed. For this game, S_1^k = S_2^k = {k, k + 1, . . .}, and thus S_1^∞ = S_2^∞ = ∅. Then, by applying Theorem 2, one can conclude that S_1^R = S_2^R = ∅.

Effect of additional preference information

Next, we consider the effect of taking into account additional information about the preferences of the players; that is, how the sets of rationalizable strategies change when we extend the preference relations. We denote the dependence of the rationalizable strategies and the mutually nondominated sets on the preference relations by S^R(≽_1, . . . , ≽_n) and MND(≽_1, . . . , ≽_n).

Definition 8 The preference relations (≽'_1, . . . , ≽'_n) extend the preference relations (≽_1, . . . , ≽_n) if, for each player i ∈ I and all pairs of outcomes, (s_1, . . . , s_n) ≻_i (s'_1, . . . , s'_n) implies (s_1, . . . , s_n) ≻'_i (s'_1, . . . , s'_n).

This is interpreted as adding strict preferences and indifferences between pairs of outcomes that were incomparable. This corresponds to the definition of completion by Bade (2005), except that we do not require the extended preference relations to be complete. However, since our solution concept is based on strict preferences only, the possible addition of indifferences is irrelevant, and thus for simplicity we formulate the result in terms of Definition 8.

Theorem 5 If the preference relations (≽'_1, . . . , ≽'_n) extend the preference relations (≽_1, . . . , ≽_n), then S^R(≽'_1, . . . , ≽'_n) ⊆ S^R(≽_1, . . . , ≽_n).

The above result can be interpreted as follows. The new preference relations ≽'_i are more accurate than ≽_i. Here, more accurate is understood to mean that while all preference statements composing the original incomplete preference information are correct, the more accurate information contains additional preferences between outcomes that were incomparable according to the original information.
Theorem 5 then shows that incomplete preference information will not cause the exclusion of strategies that might be selected with more accurate information about the preferences. On the other hand, more accurate preference information may lead to ruling out more strategies. It should be noted that Theorem 5 holds also when we allow adding both indifferences and strict preferences into the preference relations (see Remark 5).

Multicriteria games

Multicriteria games (Shapley 1959; Blackwell 1956; Corley 1985; Borm et al. 1988; Ghose and Prasad 1989; Zhao 1991; de Marco and Morgan 2007; Monroy et al. 2009) are games where players evaluate outcomes according to several criteria, and thus the outcomes correspond to vector-valued payoffs. If one outcome is better than another with respect to one criterion but worse with respect to another criterion, a player may not be able to state her preference between these outcomes. Therefore, the following multicriteria game is naturally a game with incomplete preferences.

Definition 9 A multicriteria game is a game with incomplete preferences (cf. Definition 1) defined so that each player i ∈ I has a vector-valued payoff function f_i : S → R^m and s ≽_i s' iff f_i(s) ≥ f_i(s') componentwise.

Remark 6 In a multicriteria game according to Definition 9, the strict preferences are such that s ≻_i s' iff f_i(s) ≥ f_i(s') componentwise and f_i(s) ≠ f_i(s').

Multicriteria games have been analyzed in the literature mainly from the point of view of equilibrium strategies (e.g., Shapley 1959; Corley 1985; Borm et al. 1988; de Marco and Morgan 2007). Ghose and Prasad (1989) additionally considered solutions based on the so-called security strategies, and Zhao (1991) discussed cooperative solutions. Since multicriteria games are a special case of games with incomplete preferences, the rationalizable strategies and the rationality concept elaborated in this paper can be applied to multicriteria games as well.

Incomplete preference information

In the existing literature (e.g., Corley 1985; Borm et al. 1988), information about the relative importance of criteria has been taken into account by introducing weight vectors so that each component of the payoff vectors is weighted according to its relative importance. A multicriteria game is then turned into a scalar game by multiplying the payoff vectors by these weight vectors. However, defining the weights would require complete information about the preferences of the players. Monroy et al. (2009) mentioned the possibility of using incomplete information about the weights, in the form of inequality constraints, to obtain a unique bargaining solution. Preference programming (Salo and Hämäläinen 2010) is a similar idea in the multicriteria decision analysis literature. In preference programming, the incomplete preference information of a decision maker is represented by a set of feasible weights. In this paper, we apply the concept of preference programming to multicriteria games as follows. The preferences of player i are described by a set of feasible weights W_i ⊆ W^0, where W^0 = {w ∈ R^m : w_k ≥ 0, Σ_k w_k = 1}. Here, the kth component of a weight vector w describes the relative importance of the kth criterion. The player prefers an outcome to another if it is better with at least some weight vector in the set of feasible weights and at least as good with all weights in the set of feasible weights. This leads to the following game:

Definition 10 A multicriteria game with incomplete preference information is a game with incomplete preferences (cf.
Definition 1) defined so that for each player i ∈ I:

s ≽_i s' iff w^T f_i(s) ≥ w^T f_i(s') for all w ∈ W_i.   (17)

Note that w^T f_i is linear in w, and thus W_i can be replaced with the set of the extreme points of W_i in Eq. (17) if the W_i are polyhedral sets. Note that the relation defined by Eq. (17) is reflexive and transitive, and thus Definition 10 indeed defines a game with incomplete preferences following Definition 1. Furthermore, the multicriteria game of Definition 9 is a special case of Definition 10, where the sets of feasible weights are equal to the set of all possible weights W^0.

Remark 7 In a multicriteria game with incomplete preference information according to Definition 10, the strict preference s ≻_i s' is equivalent to w^T f_i(s) ≥ w^T f_i(s') for all w ∈ W_i and w^T f_i(s) > w^T f_i(s') for some w ∈ W_i.

In preference programming, additional preference information is treated by constraining the set of feasible weights, i.e., replacing W_i with W'_i ⊆ W_i. A known result (see, e.g., Liesiö et al. 2007) is that limiting the set of feasible weights extends the preference relation in the sense of Definition 8, under a certain technical condition shown in Lemma 4 below.

Lemma 4 Assume that a new multicriteria game with incomplete preference information is formed from an original multicriteria game with incomplete preference information so that the original weight sets W_i are replaced with weight sets W'_i such that W'_i ⊆ W_i and int(W_i) ∩ W'_i ≠ ∅. Denote by ≽_i the preference relations of the original game and by ≽'_i the preference relations of the new game. Then, ≽'_i extends ≽_i (cf. Definition 8). That is, for any pair of outcomes such that player i prefers (s_1, . . . , s_n) over (s'_1, . . . , s'_n) in the original game, she has the same preference in the new game.

Theorem 5 can then be applied to multicriteria games with incomplete preference information as follows.

Theorem 6 If a new multicriteria game with incomplete preference information is formed from an original multicriteria game with incomplete preference information so that the original weight sets W_i are replaced with weight sets W'_i such that W'_i ⊆ W_i and int(W_i) ∩ W'_i ≠ ∅, the sets of rationalizable strategies of the new game are subsets of the sets of rationalizable strategies of the original game.

Proof The result is directly implied by Lemma 4 and Theorem 5.

Examples

In this section, we present two examples. The first one is an ordinal game with a finite number of strategies that is solved by the iterative elimination of dominated strategies. The second example deals with a multicriteria game containing an infinite number of strategies that is solved by deriving equations for the maximal mutually nondominated sets. The effect of adding preference information by limiting the set of feasible weights is also illustrated.

Game with finite strategy sets

Consider a game with two players having strategy sets S_1 = {T, M, B}, S_2 = {L, C, R} and the preference relations illustrated in Fig. 1. The strategy sets are finite, and thus the rationalizable strategies can be found by the iterative elimination of dominated strategies, as shown in Sect. 4.2. For player 2, strategy C yields a preferred outcome to strategy R no matter which strategy player 1 selects. For player 1, no strategies are dominated. Thus, S_1^1 = {T, M, B}, S_2^1 = {L, C}. Both remaining strategies of player 2 are nondominated with respect to S_1^1, as player 2 has no preferences defined for the relevant outcomes. For player 1, all strategies are nondominated, as the preference order of the outcomes is reversed when the strategy of player 2 is switched.
Thus, as all remaining strategies of both players are nondominated, the rationalizable strategies of the game are S_1^R = {T, M, B} and S_2^R = {L, C}. Note that strategy M of player 1 is not nondominated with respect to any singleton belief among the rationalizable strategies of player 2, since it is dominated by strategy T if player 2 selects strategy L and by strategy B if player 2 selects strategy C. However, strategy M of player 1 is rationalized by the belief that player 2 may select either strategy L or strategy C. Now, let us incorporate additional preference information into the game. Define new preference relations as illustrated in Fig. 2. Now, for player 2, strategy C is dominated by strategy L, and the only nondominated strategy is L. Therefore, S_2^R = {L}. When player 2 is known to select strategy L, the only nondominated response of player 1 is strategy T. Thus, the rationalizable strategies of the new game are S_1^R = {T} and S_2^R = {L}. Note that the new rationalizable strategies are subsets of the rationalizable strategies of the original game, as implied by Theorem 5.

Multicriteria game with infinite strategy sets

A bicriteria Cournot game is discussed by Bade (2005) from the point of view of equilibrium strategies. In this game setting, there are n players representing firms that select produced quantities simultaneously; see also Marmol et al. (2017). Denote the quantity selected by player i by s_i. Assume that the market clearing price is 2 − Σ_k s_k and that the unit cost of production is 1. The profit for player i is then s_i (1 − Σ_k s_k). Besides profits, the firms desire to maximize sales as long as the profits are nonnegative, i.e., when 1 − Σ_k s_k ≥ 0. When the firm cannot make a profit, i.e., 1 − Σ_{k≠i} s_k ≤ 0, we set the utility function to take the constant value 1 − Σ_{k≠i} s_k. This makes the utility function concave and continuous in the firm's own action s_i. This can be expressed as a bicriteria game where the strategy sets are ∀i ∈ I = {1, . . . , n} : S_i = [0, ∞), and the payoff vectors, consisting of the profit criterion and the sales criterion described above, are given by (20).

Two players

Here, we obtain the rationalizable strategies of the bicriteria Cournot game with 2 players by deriving the maximal mutually nondominated sets of the game. We consider the game first as a multicriteria game in the sense of Definition 9. Then, we consider additional incomplete preference information in the sense of Definition 10. If the opponent selects strategy s_j, the payoff vector of player i is given by Eq. (21). For player i, any strategy s_i > 1 is dominated by s_i = 1, as the profits will be less than and the sales at most equal to what is obtained by selecting s_i = 1, no matter which strategy the opponent selects. Hence, no strategy s_i > 1 belongs to any mutually nondominated set. When player i believes the opponent may select s_j ∈ [0, 1], 1 − s_j is a nondominated strategy, since the sales criterion is lower with any strategy s_i < 1 − s_j and the profit criterion is lower with any strategy s_i > 1 − s_j. This implies that S'_1 = [0, 1] and S'_2 = [0, 1] are mutually nondominated. As no strategies s_i > 1 belong to any mutually nondominated sets, it has been shown that S_1^R = [0, 1] and S_2^R = [0, 1]. Now, let us add preference information in the sense of Definition 10. Assume that one unit of profits is known to be at least as valuable as α units of sales for both firms. Thus, the sets of feasible weights are W_i = {w ∈ W^0 : w_1 ≥ α w_2}, where the first component is the weight of the profit criterion. The extreme points of these sets are (1, 0) and (α/(1+α), 1/(1+α)). Thus, the weighted payoffs at the extreme points are given by Eq. (22). When s_j is fixed, the value of Eq. (22) attains its maximum on the interval [max(0, (1 − s_j)/2), 1 − s_j].
Next, we argue that if the infimum of $S_j$ is $s_{\min}$ and the supremum of $S_j$ is $s_{\max}$, the set of nondominated strategies with respect to $S_j$ is
$$\Big[\max\!\Big(0, \tfrac{1-s_{\max}}{2}\Big),\ \min\!\Big(\tfrac{\alpha(1-s_{\min})+1}{2\alpha},\, 1-s_{\min}\Big)\Big].$$
First, any $s_i < \max\!\big(0, \tfrac{1-s_{\max}}{2}\big)$ is dominated by $\max\!\big(0, \tfrac{1-s_{\max}}{2}\big)$, and any $s_i > \min\!\big(\tfrac{\alpha(1-s_{\min})+1}{2\alpha},\, 1-s_{\min}\big)$ is dominated by that upper bound. The rationalizable strategies are then obtained as the fixed point of the equations
$$s_{\min} = \max\!\Big(0, \tfrac{1-s_{\max}}{2}\Big), \qquad s_{\max} = \min\!\Big(\tfrac{\alpha(1-s_{\min})+1}{2\alpha},\, 1-s_{\min}\Big),$$
whose solution for $\alpha > 1$ is $s_{\min} = \tfrac{1}{3} - \tfrac{1}{3\alpha}$ and $s_{\max} = \tfrac{1}{3} + \tfrac{2}{3\alpha}$.

To summarize, with no information about the relative importance of the criteria, all strategies in $[0, 1]$ are rationalizable. For example, $s_1 = 1$, $s_2 = 1$ is not an equilibrium (Bade 2005), but in the absence of any equilibrium selection mechanism, it is possible that both players select strategy 1, each unaware that the opponent will also select strategy 1. When the game is considered with the additional preference information that the players regard one unit of profit as at least as valuable as $\alpha$ units of sales, then for $\alpha \le 1$ the rationalizable strategies given by Eq. (26) are $S_1^R = S_2^R = [0, 1]$, i.e., the preference information does not change the rationalizable strategies. However, when $\alpha > 1$, the rationalizable strategies given by Eq. (26) are $S_1^R = S_2^R = \big[\tfrac{1}{3} - \tfrac{1}{3\alpha},\ \tfrac{1}{3} + \tfrac{2}{3\alpha}\big]$. Increasing the value of the bound $\alpha$ corresponds to adding preference information to the game and, as we have shown in Sect. 4.4 (Theorem 5), no new rationalizable strategies appear. When $\alpha \to \infty$, i.e., profits become all-important, the rationalizable strategies approach $S_1^R = S_2^R = \{\tfrac{1}{3}\}$, i.e., the equilibrium (Bade 2005) and the rationalizable strategies (Bernheim 1984) of the single-criterion profit-maximizing Cournot game.

Multiple players

With $n > 2$, the analysis of the bicriteria Cournot game is similar to the two-player case. When the opponents are believed to select strategies from $[s_{\min}, s_{\max}]$, the total quantity produced by the opponents varies in $[(n-1)s_{\min}, (n-1)s_{\max}]$. Hence, the rationalizable strategies are obtained by solving the equations
$$s_{\min} = \max\!\Big(0, \tfrac{1-(n-1)s_{\max}}{2}\Big), \qquad s_{\max} = \min\!\Big(\tfrac{\alpha(1-(n-1)s_{\min})+1}{2\alpha},\, 1-(n-1)s_{\min}\Big).$$
With some values of $n$ and $\alpha$, these equations have multiple solutions. Since the rationalizable strategies are the largest mutually nondominated sets, the solution corresponding to the rationalizable strategies is the one whose interval contains the intervals produced by the other possible solutions. When $\alpha \le 1$, all strategies in $[0, 1]$ are rationalizable, as in the two-player case, whereas with $\alpha > 1$ the rationalizable strategies are $\big[0, \tfrac{1+\alpha}{2\alpha}\big]$. When $\alpha \to \infty$, the rationalizable strategies approach those of the single-criterion profit-maximizing Cournot game, i.e., $[0, \tfrac{1}{2}]$ (Bernheim 1984).
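Since the rationalizable intervals are fixed points of monotone bound maps, they can also be obtained numerically. The sketch below iterates the interval map written above, starting from the largest candidate set $[0, 1]$; the two best-response bounds are the reconstructed expressions used in this section and should be read as assumptions rather than a verbatim copy of Eq. (26).

```python
# Sketch: iterate the interval map for the n-player bicriteria Cournot game.
def rationalizable_interval(n=2, alpha=2.0, iters=200):
    s_min, s_max = 0.0, 1.0  # start from the largest candidate set [0, 1]
    for _ in range(iters):
        new_min = max(0.0, (1.0 - (n - 1) * s_max) / 2.0)
        new_max = min((alpha * (1.0 - (n - 1) * s_min) + 1.0) / (2.0 * alpha),
                      1.0 - (n - 1) * s_min)
        s_min, s_max = new_min, new_max
    return s_min, s_max

print(rationalizable_interval(n=2, alpha=2.0))   # ~ (1/6, 2/3) = (0.1667, 0.6667)
print(rationalizable_interval(n=2, alpha=1e9))   # -> approaches (1/3, 1/3)
print(rationalizable_interval(n=3, alpha=2.0))   # -> (0.0, 0.75) = [0, (1+alpha)/(2*alpha)]
```

The iteration contracts toward the largest fixed interval when started from $[0, 1]$, which matches the selection rule stated above for the multi-player case.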
Conclusions

In this paper, we considered normal-form games with incomplete preferences (Bade 2005). We proposed a new and more general solution concept for these games, based on rationalizable strategies (Bernheim 1984; Pearce 1984). This is an alternative to the standard notion of Nash equilibrium, in which the players are assumed to possess correct beliefs about their opponents' choices. With rationalizable strategies, the players choose reasonable strategies but may hold incorrect beliefs; see the motivation in Perea (2014). We revised the standard notion of rationalizable strategies, which uses probabilistic beliefs: instead, we assume nonprobabilistic beliefs, in which the players only specify the strategies that they consider possible but do not assign probabilities to these strategies; see Perea (2014) for earlier use of such models. Moreover, the players select nondominated strategies given these beliefs.

Our framework can be used in games where expected utility maximization is not possible and the information needed to define probabilities and utilities does not exist. We showed that no new rationalizable strategies appear in a game with incomplete preferences when preference information is added, in the sense of adding new preferences over pairs of outcomes to the preference relations. Another interpretation of this result is that no rationalizable strategies disappear when preferences are relaxed, in the sense of removing preferences over pairs of outcomes from the preference relations. Thus, the game can be analyzed using only such preference information as one is definitely willing to assume, with confidence that no strategies are wrongly ruled out.

We considered multicriteria games as a special case of games with incomplete preferences and introduced a way of adding preference information to multicriteria games by modeling incomplete preferences with sets of feasible weights for the criteria, following the treatment in the literature on multicriteria/multiattribute decision analysis (e.g., Salo and Hämäläinen 2010). The idea of using sets of feasible weights in noncooperative multicriteria games has recently been proposed also by Marmol et al. (2017), who considered equilibrium solutions. Besides multicriteria games, the game and solution concept developed in this paper could be applied, for example, to interval games (Levin 1999), where the payoffs are not known exactly but only as intervals of possible values.

We showed that the sets of rationalizable strategies as defined in this paper are nonempty in the case of finite strategy sets. In the infinite case, nonemptiness is not guaranteed; however, the nonexistence of rationalizable strategies might be due to an unreasonable structure of the preference relations. A possible topic for future research would be to define intuitively reasonable conditions on the preference relations that guarantee the existence of rationalizable strategies. Another topic for future research is the extension to probabilistic beliefs, mixed strategies, and/or extensive-form games; this requires defining preferences over lotteries and introducing appropriate restrictions on belief updating.
Stochastic-field approach to the quench dynamics of the one-dimensional Bose polaron

We consider the dynamics of a quantum impurity after a sudden interaction quench into a one-dimensional degenerate Bose gas. We use the Keldysh path integral formalism to derive a truncated-Wigner-like approach that takes the back action of the impurity onto the condensate into account already on the mean-field level and further incorporates thermal and quantum effects up to one-loop accuracy. This framework enables us to calculate not only the real-space trajectory of the impurity but also the absorption spectrum. We find that quantum corrections and thermal effects play a crucial role for the impurity momentum at weak to intermediate impurity-bath couplings. Furthermore, we see a broadening of the absorption spectrum with increasing temperature.

I. INTRODUCTION

The interaction of a mobile impurity with a surrounding many-body quantum system is one of the most prominent and oldest problems in condensed matter physics. The polaron, initially considered by Landau and Pekar [1,2] to describe an impurity electron interacting with the lattice vibrations of a solid, is a prototypical scenario for studying quasi-particle formation. In more recent years, neutral atoms immersed in a surrounding ultra-cold gas have attracted widespread attention due to the great experimental controllability, which enables the study of novel exotic regimes of the polaron; for example, the impurity-bath coupling can be tuned via Feshbach resonances [3]. Here, the fundamental principles at work are the same as in the original problem: the impurity forms a polaron by interacting with the collective excitations of the surrounding superfluid. While the Fermi polaron has been subject to extensive experimental study [4-12], the Bose polaron has only been realised in a few experiments [13-17]. These experiments hint towards a delicate interplay between equilibrium and out-of-equilibrium effects.

There has been extensive work addressing the Bose polaron in equilibrium [18-25]. In this work, we focus, in contrast, on the out-of-equilibrium Bose polaron. More precisely, we consider quench dynamics, involving the abrupt immersion of an impurity into a homogeneous Bose gas, at finite temperature. The quench can be realised through a Feshbach-resonant radiofrequency pulse [3] that switches on the impurity-condensate interaction nearly instantaneously. Such quench dynamics have so far been studied either at zero temperature or within the (extended) Fröhlich models (or very similar approximations) [26-39]. In much of the equilibrium and some of the out-of-equilibrium treatments, the Lee-Low-Pines (LLP) transformation [40], which eliminates the impurity from the problem at the expense of adding an additional quartic vertex, has proven extremely useful; however, at finite temperature this requires further approximations. The vast majority of the existing literature has focused on three-dimensional systems, where the (extended) Fröhlich model is widely applicable. In 1D, the situation is different. In [21] it was shown that the applicability of Fröhlich-type approximations is somewhat limited in 1D, and while the mass-balanced case is integrable for the Fermi gas [41], no such limit exists in the case of a Bose gas. Instead, it is natural to incorporate the effects of the impurity on the condensate already at the mean-field level.
This can be done efficiently for a single impurity at zero temperature using the LLP transformation, but that method does not extend to several impurities and is not straightforward to generalise to finite temperature. To circumvent these complications, several approaches do not eliminate the impurity from the problem and can address finite temperature [30,42], for example treating the impurity in a manner related to its coherent state representation [43]. It has been shown in [33,44,45] that a product wave function within the tree-level approximation yields results inconsistent with those obtained in the more accurate LLP frame in 1D. Therefore, it is perhaps more appropriate to treat the impurity in a position-momentum path integral, as this highlights the particle nature of the impurity. Using the coherent state path integral for the condensate has been shown to yield good results in 1D, not only for the polaron but also for the bipolaron problem [33,46,47]. This is conceptually close to the approach originally developed by Feynman [42] and applied to the Bose polaron in [19,24], with the main difference that we do not expand the condensate around a homogeneous density, and our focus lies on out-of-equilibrium phenomena.

In this work, we develop a conceptually simple and numerically tractable approach to address quench dynamics at finite temperature in a general manner. Ultimately, this is achieved by mapping the dynamics onto a set of deterministic differential equations with stochastic initial conditions. By averaging over the different trajectories, expectation values can be calculated with one-loop accuracy. We then use this methodology to study an impurity's evolution after a sudden interaction quench into the bath. We find that the impurity delocalises quickly for weak impurity-bath couplings and that observables like the impurity's velocity depend crucially on the incorporation of quantum and thermal effects. In the case of strong impurity-bath couplings, we observe self-trapping, and quantum corrections and thermal effects are considerably less pronounced.

This work is organised as follows. In Section II A we derive the truncated Wigner approximation from the Keldysh path integral representation of the problem at hand; in this section, we also show how to obtain the absorption spectrum in the language of semi-classical dynamics. We specify the initial Wigner function in Section II B and show how to regularise the divergences that arise in the one-dimensional setting. We briefly outline the numerical considerations in Section II C. To conclude, we discuss the results for an impurity at rest and at finite momentum in Section III and outline further directions in Section IV.

II. METHODOLOGY

A. The Keldysh formalism for the Bose polaron

In this section, we outline the derivation of the equations of motion and discuss the truncated Wigner approximation for the polaron problem. We start by considering the Hamiltonian
$$\hat H = \frac{\hat P^2}{2M} + \int dx\, \hat\phi^\dagger(x)\Big(-\frac{\partial_x^2}{2m} + \frac{g_{BB}}{2}\,\hat\phi^\dagger(x)\hat\phi(x)\Big)\hat\phi(x) + \int dx\, V(x-\hat X)\,\hat\phi^\dagger(x)\hat\phi(x), \qquad (1)$$
where $m$ ($M$) denotes the mass of the bosons (impurity atom), $\hat\phi(x)$ is the Bose field operator, $g_{BB}$ ($g_{IB}$) is the boson-boson (boson-impurity) interaction strength, and $\hat X$ ($\hat P$) denotes the position (momentum) operator of the impurity. Moreover, we keep the interaction potential between the impurity and the condensate general instead of directly assuming s-wave scattering; for numerical reasons discussed later, we will not employ a delta function but a smoothed-out version of it.
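To make the "deterministic equations with stochastic initial conditions" structure concrete before the derivation, the following is a schematic driver, not the authors' code: the condensate obeys a plain 1D Gross-Pitaevskii equation coupled to a Newtonian impurity through a narrow Gaussian pseudo-potential, the initial noise is a crude zero-temperature white-noise Wigner sample (half a quantum per grid point, ignoring the Bogoliubov structure and thermal occupations developed in Section II B), and observables are averaged over trajectories. Units with $\hbar = m = 1$ and all parameter values are illustrative.

```python
import numpy as np

L, Ngrid = 40.0, 256
x = np.linspace(0.0, L, Ngrid, endpoint=False)
dx = L / Ngrid
k = 2.0 * np.pi * np.fft.fftfreq(Ngrid, d=dx)
g_bb, g_ib, M_imp, n0 = 0.1, 0.5, 5.0, 1.0   # illustrative couplings and masses

def V_imp(X):
    """Narrow Gaussian pseudo-potential centred on the impurity (periodic box)."""
    w = 2.0 * dx
    u = (x - X + L / 2.0) % L - L / 2.0
    return g_ib * np.exp(-u**2 / (2.0 * w**2)) / (np.sqrt(2.0 * np.pi) * w)

def step(phi, X, P, dt):
    """Second-order split-step for the condensate plus a symplectic impurity kick."""
    phi = np.fft.ifft(np.exp(-0.25j * dt * k**2) * np.fft.fft(phi))
    U = g_bb * np.abs(phi)**2 + V_imp(X)
    phi = phi * np.exp(-1j * dt * U)
    phi = np.fft.ifft(np.exp(-0.25j * dt * k**2) * np.fft.fft(phi))
    F = np.sum(np.gradient(V_imp(X), dx) * np.abs(phi)**2) * dx   # F = -dH/dX
    P = P + dt * F
    X = X + dt * P / M_imp
    return phi, X, P

def trajectory(rng, t_final=5.0, dt=0.005):
    # half a quantum of white noise per grid point: <|dphi|^2> = 1/(2 dx)
    noise = (rng.normal(size=Ngrid) + 1j * rng.normal(size=Ngrid)) * np.sqrt(0.25 / dx)
    phi = np.sqrt(n0) + noise
    X = L / 2.0 + 0.5 * rng.normal()   # Gaussian impurity Wigner sample
    P = 1.0 * rng.normal()
    for _ in range(int(t_final / dt)):
        phi, X, P = step(phi, X, P, dt)
    return X

rng = np.random.default_rng(1)
print(np.mean([trajectory(rng) for _ in range(50)]))   # <X(t_final)>_W
```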
In the following, we apply the Keldysh formalism to (1). The expectation value of an arbitrary observable $\Omega(\hat X, \hat P, \hat\phi^\dagger(x), \hat\phi(x), t)$ (see [48] for a detailed discussion) is given by Eq. (2), where $W$ is the Wigner function, which depends on the initial density matrix and will be specified below, and $\Omega_W(X_c(\tau), P_c(\tau), \phi_c^*(x,\tau), \phi_c(x,\tau), t)$ is the Weyl symbol of the observable one wants to calculate (again, see [48] for more details). The subscript $c$ denotes the classical field and the subscript $q$ the quantum field, which describes the quantum fluctuations around the classical saddle-point solution; these two fields arise when mixing the forward and backward contours in the Keldysh formalism. The $\mathcal D$ denotes integration over all field configurations in space and time, whereas the first integral in (2) can be understood as an ordinary integral over $X_0$ and $P_0$.

The one-loop approximation consists in dropping all terms of second and higher order in $\hbar$, which corresponds to an expansion up to second order in the quantum fields. We then find that, in fact, only terms linear in the quantum fields remain in the action. It is now easy to see that all the quantum fields can be integrated out, yielding functional delta distributions that enforce the classical equations of motion. The only challenge remaining at this point is to integrate over the initial conditions weighted by the Wigner function. In practice, this is done by sampling initial conditions according to the Wigner function, solving the classical equations of motion, and then averaging the desired observable over the calculated trajectories. In this framework it is now straightforward to calculate the impurity dynamics.

Before proceeding, we make some remarks about the classical equations of motion and make contact with the equilibrium case, to show that even without the first-order correction these equations give satisfactory results in the limiting case of heavy impurities. For the equilibrium case we assume the impurity to travel at constant velocity, $\frac{d}{d\tau}X_c(\tau) = v$, implying $\frac{d}{d\tau}P_c(\tau) = 0$, which in turn tells us that $|\phi_c(x,\tau)|^2$ is symmetric around the impurity position. Together with the equilibrium assumption, this directly leads to the conclusion that the bosonic field takes a co-moving form. Putting it all together and defining $\tilde x = x - X_c(\tau)$, we find the equilibrium equation for the condensate in the impurity frame. Comparing this equation with the one obtained by performing a Lee-Low-Pines transformation, we find that it has in fact the same form as the one found in [33,44,47,49], where it was shown that quantities obtained in this way, like the effective polaron mass or the polaron energy, are in excellent agreement with results obtained by quasi-exact quantum Monte Carlo methods. The only difference is that the boson mass, instead of the reduced mass, appears in front of $\partial_x^2$, which can be traced back to the fact that the effect of normal ordering is lost in this derivation; this effect is, however, unimportant for heavy impurities. To summarize, we showed that the equations obtained by employing a coherent state ansatz for the bosonic field, paired with a position-momentum representation for the impurity, reduce to the correct mean-field equations in the equilibrium case if the impurity mass is large.

Another quantity of great interest is the impurity Green's function [18,27,31], from which the absorption spectrum can be calculated by taking the Fourier transform.
The impurity Green's function is defined as a trace involving both $\hat H_0$, which stands for (1) with $V(x - \hat X) = 0$, and the full Hamiltonian, where $\hat\rho$ is the initial density matrix. We now observe that this has the same structure as the trace considered in deriving the Keldysh path integral, with the only difference that the forward and backward contours differ by an extra interaction term. We can therefore perform the same steps as when deriving (2). The only difference in $S$ is the resulting impurity-boson interaction: the magnitude of the interaction is changed by a factor of $1/2$, and a new, purely classical term arises. Lastly, we note that there is now also a term quadratic in $\phi_q(x,\tau)$ and $X_q(\tau)$. If we want to keep the accuracy up to one-loop order, this term has to be considered, which somewhat complicates matters. An ad hoc approximation is to drop this term altogether and thereby stay in a strictly semi-classical regime.

To see when this approximation is justified, one can simply compare the arising terms and their orders of magnitude. We note that the $|\phi_q(x,\tau)|^2$ term competes directly with the $|\phi_c(x,\tau)|^2$ term. As long as one is within the applicability region of a general c-field treatment, $|\phi_q(x,\tau)|^2$ will be small compared with $|\phi_c(x,\tau)|^2$ whenever the condensate deformation is not large, which corresponds to small and intermediate $\eta$. For the corrections in the impurity degrees of freedom, one has to compare the corresponding terms. We note that, through partial integration, the derivative terms can be brought onto the bosonic field variables. One then realizes that the second derivative of the fields is small compared with the first derivative for weak coupling. Additionally, for large impurity masses $M$ the magnitude of $X_q$ stays small. These two considerations show that one expects the absorption spectrum to yield reliable results for weak to intermediate impurity-boson coupling, and potentially over a wider range of couplings if the impurity is sufficiently heavy. We refer to Section II D for more details on the validity of the approach presented here. We also note that this approximation is made only when calculating the absorption spectrum and the impurity Green's function; none of the dynamical results rely on it. Henceforth, the additional stochastic term will be dropped. This leaves us with an expression for $G(t)$ as an average over the initial Wigner function, which we denote by $\langle \ldots \rangle_W$. We are now in the position to calculate the real-space trajectories of the impurity at finite temperature as well as the absorption spectrum.

B. The quench protocol and the initial Wigner function

In the following, we specify the quench protocol and the initial Wigner function. We start by briefly outlining the initial state and then specify the Wigner function for a 1D quasicondensate; here, we also discuss all the regularisations necessary to arrive at a divergence-free quasicondensate description. The quench protocol is the following: we start with a free impurity and an interacting superfluid at temperature $T$. At $t = 0$, the interaction between the superfluid and the impurity is turned on instantly; experimentally, this is realised through a Feshbach resonance [3]. We assume that the initial density matrix can be written as a direct product of the state of the impurity (which is assumed to be pure) and the thermal density matrix of the superfluid. As a consequence, the Wigner function also factorises, and we can sample the initial conditions independently.
For the condensate, we employ a quasicondensate description; in 1D, this is best done in a density-phase representation. We note that this has been used before in [50] for the trapped-gas case; since we focus on a homogeneous gas in the continuum here, we need to regularise the non-condensed part. In this representation, the condensate field operator can be written as $\hat\phi = e^{i\hat\theta}\sqrt{\hat\rho}$, and within the Bogoliubov approximation [51] the density and phase operators can be expanded in terms of the usual Bogoliubov modes $u_k$ and $v_k$. For this treatment to be valid, one has to be in the vicinity of a weakly interacting Bose gas, characterised by a Tonks parameter $\gamma = 1/(2n_0^2\xi^2)$ [52,53] less than unity, where $\xi = 1/\sqrt{2 m g_{BB} n_0}$ is the healing length. We refer to Section II D for a detailed discussion of the validity of the presented approach. In the path integral, this corresponds to a change of variables: instead of integrating over the fields $\phi_k$, one integrates over the amplitudes $a_k$ corresponding to the operators $\hat a_k^\dagger$, $\hat a_k$. In the standard way, we can now write down the thermal Wigner function (within the coherent state representation) for the $a_k$ [48]; it is a Gaussian whose widths are set by the thermal occupations at the Bogoliubov dispersion $\omega_k$. From this it can be seen immediately that the average condensate particle number is given by $N_0/L = \rho_0$.

In order to account for the quantum and thermal depletion, we fix the total particle number $N$ and then choose $N_0$ according to $N_0 = N - N_d$ (this is done for every realisation), where $N_d$ is obtained, after proper regularisation (see [54,55]), in terms of the single-particle dispersion $e_k = k^2/(2m)$. Here, a first-order T-matrix approximation was employed and, in line with Bogoliubov theory up to one loop, $\mu$ is the chemical potential within the Bogoliubov approximation and hence is not temperature dependent. The factor $1/2$ is needed to cancel the extra contribution stemming from symmetric ordering. After averaging, this reproduces the expected result for a thermal quasicondensate in 1D. It should be noted that $\mu$ has to be chosen consistently with the total particle number, which can be done by fixing one reference point where the total particle number is known. Henceforth we assume a mean-field density at $T = 0$; this fixes $\mu$ in the Bogoliubov approximation through $\mu = g_{BB} n_0(T = 0)$. We can now use (16) to determine the total particle number, which remains fixed throughout the calculation.

We can now sample individual realisations of the condensate, whose description is free of IR and UV divergences. Upon closer inspection, one might realise that even though the means of the phase and density corrections are zero, their variances scale up to linearly with the system size $L$. A direct result of this is that the computational time needed to achieve convergence also scales with the system size. This computational challenge can be tackled by increasing the system size gradually until the results are independent of it, and then validating selected data points with a larger system size. Furthermore, this restricts the discretisation of space, as outlined in [56], which will be discussed in the next section. Because the effect of the impurity is local, we find a relatively low dependence on the system size already for small systems. Lastly, we assume that the impurities are not entangled (namely, they can be represented by a product wave function) and are localised in space around $x_0$ or, equivalently, localised in momentum space around $q$ at $t = 0$.
It is therefore natural to choose a Gaussian wave packet as the initial wave function, $\psi(X) \propto \exp\!\big(-a_0^2 (X - x_0)^2 + i q X\big)$. Here $a_0$ is an external parameter that determines how localised the initial state is. The Wigner function in this setting is well known to be [48]
$$W(X_0, P_0) = 2 \exp\!\big(-2 a_0^2 (X_0 - x_0)^2 - (P_0 - q)^2/(2 a_0^2)\big).$$

C. Numerical considerations

In this section, we show that three quantities describe the whole parameter space, and we briefly outline the discretisation of space and the subtleties involved. We define scaled variables using $\tau_s = \tilde\xi/c$ and $x_s = \sqrt{2}\,\xi = \tilde\xi$, where the speed of sound is given by $c = \sqrt{g_{BB} n_0/m}$. Dropping the tilde, we then find a dimensionless Hamiltonian in which the impurity-boson coupling $\eta$ remains unchanged and $\alpha = m/M$; from it, the classical equations of motion can be obtained.

In order to tame the extensive variance and ensure numerical stability, we have to choose the discretisation $l = L/N_{\mathrm{grid}}$ as outlined in [56]: $l$ has to be large enough to satisfy $\rho_0 l \gg 1$, while at the same time ensuring that the energy cut-off introduced by $l$ does not alter the physics. This translates to $l < \xi, \lambda$, where $\xi$ is the healing length that sets the natural length scale of our problem and $\lambda$ is the thermal de Broglie wavelength. Lastly, we note that we choose a smoothed interaction potential that converges to the delta distribution as $l \to 0$ but has the advantage of being smoother than $\delta_x/l$, where $\delta_x$ is the Kronecker delta.
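As a small illustration, the sketch below draws impurity phase-space initial conditions from the Gaussian Wigner function written above and checks the grid constraints just stated; the parameter values and the factor standing in for "$\gg 1$" are arbitrary choices, with $\hbar = 1$.

```python
import numpy as np

def sample_impurity(rng, x0, q, a0, size):
    """Sample (X0, P0) from W ~ exp(-2 a0^2 (X0-x0)^2 - (P0-q)^2/(2 a0^2))."""
    X0 = rng.normal(loc=x0, scale=1.0 / (2.0 * a0), size=size)   # var = 1/(4 a0^2)
    P0 = rng.normal(loc=q, scale=a0, size=size)                  # var = a0^2
    return X0, P0

def grid_ok(l, rho0, xi, lam):
    # "10" is an arbitrary stand-in for the condition rho0 * l >> 1
    return rho0 * l >= 10.0 and l < min(xi, lam)

rng = np.random.default_rng(0)
X0, P0 = sample_impurity(rng, x0=0.0, q=1.0, a0=3.0, size=10_000)
print(X0.var(), P0.var())          # ~ 0.028 and ~ 9: a minimal-uncertainty pair
print(grid_ok(l=0.05, rho0=500.0, xi=0.7, lam=0.3))
```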
D. Validity of the formalism

In this subsection we address the regime of validity of the formalism used in this work. We start with some general arguments for the validity of the approach, followed by a term-by-term discussion of the higher-order corrections and their orders of magnitude. While for the absorption spectrum we have to restrict our considerations to weak or moderate boson-impurity couplings, we stress that all dynamical properties calculated here (excluding the absorption spectrum) are strictly non-perturbative in $g_{BB}$ and $g_{IB}$. Hence we can safely say that the method is valid for $\gamma \le 1$, which is the same range as for the Gross-Pitaevskii equation, corresponding to the tree-level approximation of (1). The same reasoning applies to $g_{IB}$, leading to the conclusion that our results, at least for short times, are valid across the whole range from weak to strong impurity-boson couplings; for longer times one has to consider the deformation of the condensate. Another interesting point is that as soon as $\alpha$ is small (meaning the impurity is heavy), the accuracy of the presented approach improves further, since the impurity behaves more classically. In fact, $\alpha$ can be seen as a strict control parameter for the corrections arising from higher-order terms in $X_q(\tau)$. Combining these considerations with the known fact that the truncated Wigner approximation is in general exact for short time scales [48], we conclude that for short to intermediate time scales our results are trustworthy for all values of the boson-boson and boson-impurity couplings. For weak coupling it is a priori reasonable to assume that the presented results hold for longer time scales, since higher-order quantum corrections should accumulate slowly, if at all; no such universal statement can be made for strong couplings.

There are two contributions of order $\hbar^2$ or higher that were neglected in our approach and would have to be added to (9) if one wanted to solve the problem exactly: the first coming from the boson-boson interaction and the second from the impurity-boson interaction. The first is closely related to the standard Bogoliubov approximation, with the main difference that the classical field here is taken to be the deformed field. We note that at no point in our simulations does the expectation value of $|\phi_c(x,\tau)|^2$ fall below the value of $3/\xi$, meaning that it is safe, in the spirit of the well-established Bogoliubov approximation, to neglect higher-order terms in the quantum fields, which scale only linearly in $\phi_c$ and are of $O(\phi_c \phi_q^3)$, certainly small compared with $O(|\phi_c(x,\tau)|^3 \phi_q)$ as long as the condensate density stays large on the scale set by the healing length. We note that for very low-density gases and strong boson-impurity coupling the above argument comes into question, and it is a priori not clear whether the outlined method is reliable in that regime; this regime was not investigated in the present work. The argument is further underlined by the remarkable accuracy of pure c-field methods (which do not take first-order quantum corrections into account) for the equilibrium polaron, see [35,44,46,49], where the c-field approximation was shown to hold even for low-density gases and strong coupling, and by the fact that the boson-boson interaction is already taken into account in our calculations to all orders in $g_{BB}$ and to first order in $\hbar$.

A somewhat more complicated expression is obtained when considering the impurity-boson interaction term. Here, one finds that already in (9) all orders of $X_q(\tau)$ are present. For approximating the impact of the higher-order terms, it is again convenient to bring the derivatives onto the bosonic field variables. It then becomes clear that all corrections can be understood as a gradient expansion in the bosonic fields, whose impact is certainly small for short time scales, and also for longer time scales as long as the coupling stays moderate. Besides the gradient terms, we also note that each correction is accompanied by an increasing power of $X_q(\tau)$, which is likewise small at short times and whose impact can be controlled by $\alpha$. To summarize, while the validity of this approach for very large couplings and long time scales cannot be judged a priori, the results presented here hold for short time scales regardless of the impurity-boson coupling strength, and $\alpha$ serves as a control parameter for the approximation in the impurity degrees of freedom.

III. RESULTS

A. Post-quench density profile

In this subsection, we focus on the density of the condensate for repulsive and attractive impurity-bath couplings at different times after the quench, and supplement these findings with the variance of the position, $\sigma_X^2 = \langle (X - \langle X \rangle)^2 \rangle$, and the variance of the velocity, $\sigma_V^2 = \langle (\dot X - \langle \dot X \rangle)^2 \rangle$. We find dynamically distinct behaviour for weak and strong couplings on both the repulsive and the attractive side. In Fig. 1, the condensate density at different times and the evolution of the position and velocity variances for repulsive interactions are shown.
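Estimating the two variances from a truncated-Wigner ensemble is simple post-processing once the trajectories are stored. The snippet below is a generic sketch (the trajectory array is a placeholder, here filled with free ballistic motion for reference):

```python
import numpy as np

def position_velocity_variance(X, dt):
    """X has shape (trajectories, time steps); returns sigma_X^2(t), sigma_V^2(t)."""
    V = np.gradient(X, dt, axis=1)   # finite-difference velocity per trajectory
    return X.var(axis=0), V.var(axis=0)

# toy ensemble: free impurities with Gaussian Wigner initial conditions
rng = np.random.default_rng(2)
t = np.arange(0.0, 5.0, 0.01)
X0 = rng.normal(0.0, 0.2, size=(2000, 1))
V0 = rng.normal(0.0, 1.0, size=(2000, 1))
X = X0 + V0 * t                      # ballistic spreading for reference
var_X, var_V = position_velocity_variance(X, 0.01)
print(var_X[0], var_X[-1])           # grows ~ sigma_V0^2 * t^2 for a free particle
```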
Before discussing the results in more detail, we notice that even for $\eta = 50$ the minimum of the condensate density is still larger than $3/\xi$, indicating that the approximations made later for the absorption spectrum are justified. We note that for weak coupling, the impurity delocalises faster than a free impurity would; for stronger couplings, the impurity stays localised much longer in time, indicating self-trapping. The velocity variance saturates after a finite time, with a time scale inversely proportional to the impurity-boson coupling. This can be explained by realising that two competing effects determine the dynamics: the impurity tends to distribute the repulsion equally throughout the condensate, causing the impurity to delocalise, while the opposing effect of self-trapping has the impurity deform the condensate and then trap itself in the deformation. It is intuitively clear that self-trapping will not occur for weak couplings, which explains the different behaviours seen in Fig. 1. It can also be seen that the variance of the impurity velocity can exceed the speed of sound, which is associated with the emission of nonzero-energy excitations, indicating an energy transfer from the impurity to the bath; this has also been observed in [57]. The temperature influences the values at which the position and velocity variances saturate, but it does not significantly influence the time scale. For strong coupling, the temperature dependence becomes relatively weak, which can be explained by noting that the impurity-boson scattering length determines the relevant energy scale, which is much larger than the thermal scale in this case.

(Figure 1 caption: Condensate density at different times and evolution of the position and velocity variances for repulsive interactions; in all plots the parameters are $\alpha = 0.2$, $\gamma = 0.04$, $n_0 = 5/\xi$ and $a_0 = 3/\xi$. The impurity visibly deforms the condensate over time and delocalises; this effect is slowed down with increasing $\eta$, which can be understood by the self-trapping argument given in the main text.)

(Figure 2 caption: The same situation for attractive couplings, with $\alpha = 0.2$, $\gamma = 0.04$, $n_0 = 5/\xi$ and $a_0 = 3/\xi$. The impurity attracts surrounding particles, which in turn depletes the condensate; as seen in (b), this appears as a repulsively interacting polaron.)

In Fig. 2 the same situation is shown for attractive couplings. Here the difference between strongly and moderately attractive couplings becomes obvious. We observe that for moderately attractive couplings, the impurity not only diffuses but also forms a purely attractive polaron. In contrast, for strongly attractive couplings, an attractive polaron with apparently repulsive interactions is observed, and the time scales of polaron formation are prolonged. This difference is a dynamical effect and can be understood by noting that when the interaction is turned on, particles from the condensate start to accumulate around the impurity, which in turn depletes the condensate around it. The superfluid is itself interacting, and therefore the depletion is refilled by the surrounding particles, with the time scale set by the boson-boson interaction. Meanwhile, the impurity-boson interaction strength determines the number of particles that can accumulate around the impurity before the boson-boson repulsion prevents further accumulation.
At the same time the impurity delocalises, which prevents the formation of a well-defined peak around the impurity, and the interaction can thus appear repulsive over a long time scale; ultimately, this is of course only a metastable state. This process continues for longer when the impurity-boson interaction is large, resulting in a polaron that looks repulsive, as observed in Fig. 2(b).

B. Impurity velocity

Another quantity of great interest is the impurity velocity. The out-of-equilibrium case gives insight into polaron formation and the time scales at work; it is also of great importance for the equilibrium case, since it can be used to calculate the effective mass of the polaron [58]. Here, the impurity is not at rest when the quench occurs but instead carries some finite momentum. The sudden quench of the impurity-boson interaction leads to a momentum transfer from the impurity to the surrounding particles, and we expect a slowdown of the impurity. The time evolution of the impurity velocity is depicted in Fig. 3. It becomes apparent that quantum corrections have a noticeable impact on the evolution of the velocity at weak to intermediate coupling. This can easily be understood by noting that the impurity is treated as a point particle on the mean-field level which, given the rapid delocalisation observed in Fig. 1, is not a valid approximation for weak couplings; this also explains why the mean-field velocity is lower than the corrected solution. We also note that the steady-state velocity increases with temperature, which can be understood by noting that the surrounding gas has a higher average squared velocity at higher temperature, so the momentum transfer is smaller. In contrast, for strong couplings the impurity stays localised, and approximating it as a point particle is therefore less of a simplification. The same holds for the temperature dependence, which is more important for weak and intermediate couplings, again because the scattering length is larger than the thermal length in the strong-coupling case. We also note that the impurity transfers some of its momentum to the Bose gas and then relatively quickly reaches an equilibrium velocity for not-too-strong interactions. For stronger interactions we observe different behaviour: after an initial abrupt slowdown, the impurity velocity changes sign, a direct result of the back action of the condensate. A similar abrupt slowdown has been observed in the three-dimensional setting in [59]. The velocity then performs a damped oscillation around its final value.

C. The absorption spectrum

Next we turn our attention to the (injection) absorption spectrum $A(\omega) = 2\,\mathrm{Re} \int_0^\infty G(t)\, e^{i\omega t}\, dt$ [10,12], which can be measured using Ramsey spectroscopy [9,10,14]. It can be calculated by taking the Fourier transform of the impurity Green's function, which characterises the dephasing of the system and is closely related to the Loschmidt echo [60]. The absorption spectrum gives essential information about polaron formation and can be used to estimate the polaron energy and lifetime [61,62].

(Figure 4 caption: $A(\omega) = 2\,\mathrm{Re}\int_0^\infty G(t)e^{i\omega t}dt$ for different temperatures $T$, calculated using the truncated Wigner approximation for $\alpha = 0.2$, $\gamma = 0.04$, $n_0 = 5/\xi$ and $a_0 = 3/\xi$.)
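Numerically, the spectrum follows from the sampled overlap by a one-sided Fourier transform; the sketch below is a generic illustration (not the paper's implementation), with a soft exponential window standing in for the upper integration limit and a synthetic decaying quasi-particle signal as input.

```python
import numpy as np

def absorption_spectrum(G, dt, eta=0.05):
    """A(w) = 2 Re int_0^inf G(t) e^{i w t} dt, as a windowed Riemann sum."""
    t = np.arange(len(G)) * dt
    Gw = G * np.exp(-eta * t)                       # damp the tail to regularise
    w = 2.0 * np.pi * np.fft.fftfreq(len(G), d=dt)
    A = 2.0 * np.real([np.sum(Gw * np.exp(1j * wi * t)) * dt for wi in w])
    return np.fft.fftshift(w), np.fft.fftshift(A)

# synthetic overlap: a quasi-particle peak at w0 with decay rate gamma
w0, gamma, dt = 1.5, 0.1, 0.05
t = np.arange(0.0, 60.0, dt)
G = np.exp(-1j * w0 * t - gamma * t)
w, A = absorption_spectrum(G, dt)
print(w[np.argmax(A)])   # ~ +1.5: a Lorentzian of width ~ gamma + eta at the peak
```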
At this point, we also want to stress that pure mean-field (MF) calculations in the position-momentum basis are not sufficient to calculate the impurity Green's function. This can be seen by noting that there is no averaging in a classical calculation, which means there is no dephasing between different trajectories; one therefore always obtains $|G| = 1$ in purely classical calculations. Our results are depicted in Fig. 4, where it can be observed that the quasi-particle peak widens with increasing temperature, as also reported in 3D [31]. However, in contrast to the 3D case [27], where the extended Fröhlich model was considered, we do not find several peaks on the repulsive side. We also note that the overall amplitude decreases with $\eta$ and the quasi-particle peak gets washed out with increasing $\eta$, a direct consequence of the orthogonality catastrophe [63-65]; hence the emergence of a clear quasi-particle peak comes into question. We note that the absorption spectrum shows a functional dependence associated with quasi-particle behaviour, and we do not find an infrared-dominated regime as observed in other one-dimensional systems [61]. These findings are supplemented by the overlap $G(t)$. Here we see another feature of the one-dimensional case that differs vastly from the three-dimensional one [27,31]: $|G|$ approaches zero even for moderate couplings, signalling the onset of the orthogonality catastrophe. As expected, the dephasing becomes more rapid with increased temperature and increased $g_{IB}$.

IV. SUMMARY & OUTLOOK

In summary, by leveraging the Keldysh formalism, we derived a truncated Wigner approach to study dynamical properties of the Bose polaron in 1D. This allowed us to reduce the problem to simulating semi-classical equations of motion with stochastic initial conditions. We showed how to adequately account for temperature effects of the surrounding bath by sampling the phase and density of the condensate, and discussed how to regularise the divergences that typically arise in such one-dimensional systems. The method presented here takes the back action of the impurity onto the condensate into account and is therefore applicable from weak to strong impurity-bath couplings. We then used this framework to calculate the dynamics of an impurity after sudden immersion into a surrounding bath, as well as the absorption spectrum. By considering the condensate density and the position and velocity variances, we showed that there is distinct dynamical behaviour associated with the strong- and weak-coupling regimes: our results indicate self-trapping of the impurity for strongly repulsive interactions, and we also find a repulsive-looking polaron on the attractive side. We investigated the temperature dependence of polaron formation and found a substantial influence of quantum corrections on dynamical properties like the velocity of the impurity, showing the necessity to go beyond pure mean-field considerations. Lastly, we considered the absorption spectrum and the impurity Green's function. Here, we observed a clear quasi-particle peak for weak to intermediate couplings; in contrast, the quasi-particle peak is washed out for strong couplings, and temperature effects widen it. In contrast to the higher-dimensional case, the impurity Green's function approaches zero even for weak couplings. Finally, we stress that our approach is limited neither to 1D nor to a single impurity; it could therefore serve as an exciting starting point to explore higher-dimensional systems as well as the interplay of several impurities.
While the generalisation to several impurities is quite straightforward, we stress that, as pointed out for example in [66-68], the generalisation to higher dimensions is in general highly non-trivial. First, in higher dimensions it is not possible to use bare contact interactions for the boson-boson and boson-impurity interactions simultaneously within this approach; one has to resort to more realistic interaction potentials for at least one of them, as has been done, for example, in the three-dimensional context in [35,64]. Another major challenge is purely numerical, as it becomes more costly to sample the Bose fields in higher dimensions. Nevertheless, we expect that the presented method, paired with some mild approximations, can address higher-dimensional systems.
New exact wave solutions to the space-time fractional-coupled Burgers equations and the space-time fractional foam drainage equation

Abstract The space-time fractional-coupled Burgers equations and the space-time fractional foam drainage equation are important as an electro-hydro-dynamical model for the local electric field and ion acoustic waves in plasma, for shallow water wave problems, and for the flow of liquid through foam driven by gravity and capillarity. In this article, we determine new and more general exact wave solutions to these space-time fractional equations using the generalized (G′/G)-expansion method with the assistance of the fractional complex transformation. It is shown that the method is effective, convenient, and can be used to establish new solutions for other kinds of non-linear fractional differential equations arising in mathematical physics. Finally, we depict 3D and 2D figures of the obtained wave solutions in order to interpret them in a geometrical sense.

PUBLIC INTEREST STATEMENT Fractional differential equations have gained much importance and popularity among researchers. For a better understanding of complex phenomena, exact solutions play a vital role. Fractional modeling has recently proved to be a valuable tool in various fields, including mathematical physics, biology, population dynamics, engineering, and fluid-dynamic traffic models. The results obtained from a fractional system are of a more general nature. In the present article, we use the generalized (G′/G)-expansion method to investigate closed-form wave solutions of the space-time fractional-coupled Burgers equations and the space-time fractional foam drainage equation. Consequently, we obtain abundant closed-form wave solutions of these two equations, among them some new solutions. We expect that the new closed-form solutions will be helpful in explaining the associated phenomena.

Diverse groups of researchers have therefore developed and extended different methods for investigating closed-form solutions to NLEEs. In the recent literature, the space-time fractional-coupled Burgers equations (Kurudy, 2010) and the space-time fractional foam drainage equation (Hutzler, Weaire, Saugey, Cox, & Peron, 2014) have been investigated through the modified trial equation method (Bulut, Baskonus, & Pandir, 2013), the Adomian decomposition method (Dahmani, Mesmoudi, & Bebbouchi, 2008), the tanh-function method (Helal & Mehanna, 2007), the variational iteration method (Inc, 2008), the exp-function method (Khani, Hamedi-Nezhad, Darvishi, & Ryu, 2009), the homotopy perturbation method (Fereidoon, Yaghoobi, & Davoudabadi, 2011), He's projected differential transform method (Elzaki, 2015), the modified Kudryashov method (Ege & Misirli, 2014), etc. To the best of our knowledge, the space-time fractional-coupled Burgers equations and the space-time fractional foam drainage equation have not been investigated by means of the generalized (G′/G)-expansion method. Therefore, our object is to examine new and more general exact wave solutions to these equations by making use of the generalized (G′/G)-expansion method, and to study the effect of the emerging parameters on the obtained solutions; the attained solutions may be useful and realistic for the analysis of both evolution equations.
The generalized (G′/G)-expansion method is a recently developed, efficient, and promising method for finding new wave solutions to non-linear fractional equations. Its results are straightforward, more general, and useful. The rest of the article is organized as follows: In Section 2, we explain Jumarie's modified Riemann-Liouville derivative. In Section 3, we describe the outline of the generalized (G′/G)-expansion method. In Section 4, we derive new and more general solutions to the fractional evolution equations mentioned above. In Section 5, we present the results and discussion. In Section 6, we give the physical explanation together with graphical representations, and in Section 7, we draw our conclusions.

Modified Riemann-Liouville derivative

Jumarie's modified Riemann-Liouville derivative of order $\alpha$ is defined as follows (Jumarie, 2006):
$$D_x^{\alpha} f(x) = \frac{1}{\Gamma(1-\alpha)} \frac{d}{dx} \int_0^x (x-\zeta)^{-\alpha}\,\big(f(\zeta) - f(0)\big)\, d\zeta, \qquad 0 < \alpha < 1. \qquad (2.1)$$
Some properties of the proposed modified Riemann-Liouville derivative are listed in (2.2)-(2.5).

Remark 2.1 In formulas (2.2)-(2.5), the function $f(x)$ is non-differentiable in (2.2)-(2.4) and differentiable in (2.5), while $g(x)$ is non-differentiable in (2.4). In this article, based on the generalized (G′/G)-expansion method, we use formulas (2.3) and (2.5) to obtain the solutions of the mentioned fractional differential equations. We derive an effective way of solving fractional partial differential equations, and it will be shown that the generalized (G′/G)-expansion method permits us to obtain new exact closed-form solutions from known seed solutions. In (He, 2012; He, Elagan, & Li, 2012), He et al. introduced the fractional complex transform to convert an FDE into its differential partner easily; hence, the above formulae play an important role in fractional calculus and fractional differential equations.

Outline of the method

Let us consider a general non-linear fractional differential equation of the form (3.1), where $u = u(x,t)$ is the wave function and $H$ is a polynomial in $u$ and its fractional derivatives, containing the highest-order derivatives and the highest-order non-linear terms; subscripts denote fractional derivatives. To obtain the solution of Equation (3.1) using the generalized (G′/G)-expansion method, we execute the following steps:

Step 1: Let us consider $u(x,t) = u(\xi)$ together with the traveling wave variable
$$\xi = \frac{k\,x^{\alpha}}{\Gamma(1+\alpha)} - \frac{v\,t^{\alpha}}{\Gamma(1+\alpha)}, \qquad (3.2)$$
which permits us to transform Equation (3.1) into an ordinary differential equation (ODE), Eq. (3.3), where $R$ is a polynomial in $u(\xi)$ and its derivatives, wherein $u'(\xi) = du/d\xi$.

Step 2: If possible, Equation (3.3) can be integrated term by term one or more times, yielding constant(s) of integration. The integration constants can be taken as zero, since we look for solitary wave solutions.

Step 3: Assume that the wave solution of Equation (3.3) can be written in the form
$$u(\xi) = \sum_{i=0}^{N} a_i\, M(\xi)^{i} + \sum_{i=1}^{N} b_i\, M(\xi)^{-i}, \qquad (3.4)$$
where either $a_N$ or $b_N$ may be zero, but $a_N$ and $b_N$ cannot both be zero at a time; $a_i$ $(i = 0, 1, 2, \ldots, N)$, $b_i$ $(i = 1, 2, \ldots, N)$, and $d$ are arbitrary constants to be evaluated, and $M(\xi)$ is given by
$$M(\xi) = d + \frac{G'(\xi)}{G(\xi)}, \qquad (3.5)$$
where $G = G(\xi)$ satisfies the following auxiliary non-linear ordinary differential equation:
$$A\,G G'' = B\,G G' + E\,G^2 + C\,(G')^2, \qquad (3.6)$$
where the prime stands for the derivative with respect to $\xi$, and $A$, $B$, $C$, and $E$ are real parameters.
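Before moving on to Step 4, the auxiliary equation (3.6) can be checked symbolically. Writing $M = G'/G$, Eq. (3.6) implies the Riccati equation $A M' = E + B M - (A - C) M^2$. The sketch below uses SymPy to verify that a hyperbolic branch of the solution families (3.7)-(3.9), here the $r_2 = 0$ special case written from the standard form of the method and thus an assumption rather than a quotation, indeed satisfies it.

```python
import sympy as sp

xi, A, B, C, E = sp.symbols('xi A B C E', real=True)
psi = A - C                     # as defined in Section 4
Omega = B**2 + 4*E*psi          # discriminant separating the cases of (3.7)-(3.9)

# Hyperbolic (Omega > 0) candidate for M = G'/G with r2 = 0:
M = B/(2*psi) + sp.sqrt(Omega)/(2*psi) * sp.tanh(sp.sqrt(Omega)*xi/(2*A))

# The auxiliary ODE A*G*G'' = B*G*G' + E*G**2 + C*(G')**2 is equivalent to the
# Riccati equation below for M; the residual should simplify to zero.
residual = sp.simplify(A*sp.diff(M, xi) - (E + B*M - psi*M**2))
print(residual)   # -> 0
```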
Step 4: The positive integer $N$ arising in Equation (3.4) can be determined by balancing the highest-order non-linear terms against the highest-order derivatives appearing in Equation (3.3).

Step 5: Substitute Equation (3.4), together with Equations (3.5) and (3.6), into Equation (3.3), collect the coefficients of each power of $M(\xi)$, and set them to zero; this yields a system of algebraic equations.

Step 6: The values of the constants $a_i$ $(i = 0, 1, 2, \ldots)$, $b_i$ $(i = 1, 2, 3, \ldots)$, $d$, and $k$ can be found by solving the algebraic equations obtained in Step 5. Since the general solution of Equation (3.6) is known, substituting the values of $a_i$, $b_i$, $d$, and $k$ into Equation (3.4) yields more general and new exact traveling wave solutions of the non-linear fractional differential Equation (3.1). Using the general solution of Equation (3.6), we obtain the solutions of Equation (3.5) for the cases $\Omega > 0$, $\Omega < 0$, and $\Omega = 0$, given in Equations (3.7)-(3.9).

Formulation of the solutions

In this section, we determine new and more general solutions to the space-time fractional-coupled Burgers equations and the space-time fractional foam drainage equation.

The coupled Burgers equations

In this subsection, we determine some new closed-form traveling wave solutions to the space-time fractional-coupled Burgers equations by making use of the generalized (G′/G)-expansion method. Let us consider the space-time fractional-coupled Burgers equations in the form of Equation (4.1.1), where $\alpha$, $p$, and $q$ are emerging parameters. The fractional-coupled Burgers equations describe the variation over time of a physical structure in fractional fluid mechanics, ion acoustic waves through a gas-filled pipe, models of turbulent motion, certain steady-state viscous fluids, and waves in bubbly liquids; they are also important in financial mathematics. By means of the traveling transformation (3.2), Equation (4.1.1) is converted into the non-linear ODE (4.1.2). Now, balancing the highest-order linear term against the highest-order non-linear term occurring in (4.1.2) yields $N_1 = N_2 = 1$. Then the solution of Equation (4.1.2) takes the form (4.1.3), where $a_0$, $a_1$, $b_1$, $c_0$, $c_1$, and $e_1$ are arbitrary constants to be determined, such that either $a_1$ or $b_1$ may be zero but not both at a time, and likewise either $c_1$ or $e_1$ may be zero but not both at a time.

We collect each coefficient of the resulting polynomials and set them to zero, which yields a set of simultaneous algebraic equations (for simplicity, not presented here) for $a_0$, $a_1$, $b_1$, $c_0$, $c_1$, $e_1$, $d$, and $k$. Solving these algebraic equations with the help of symbolic computation software such as Maple, we obtain three sets of solutions (Set-1, Set-2, Set-3), where $\psi = A - C$ and $a_0$, $A$, $B$, $C$, $E$, and $d$ are free parameters. For simplicity, we discuss only the Set-1 solutions, arranged in Equation (4.1.4); the other sets of solutions are omitted here.

Since $r_1$ and $r_2$ are integration constants, we may choose their values arbitrarily. If we choose $r_1 = 0$ but $r_2 \neq 0$, the solutions (4.1.7) and (4.1.8) simplify accordingly; if instead we choose $r_2 = 0$ but $r_1 \neq 0$, the solutions (4.1.7) and (4.1.8) simplify to other traveling wave solutions, and the analogous choices in the remaining cases likewise yield the corresponding traveling wave solutions.
It is worth noticing that the traveling wave solutions $u_1, u_2, \ldots, u_9$ and $v_1, v_2, \ldots, v_9$ of the space-time fractional-coupled Burgers equations are new and more general, and have not been established in the previous literature. The obtained solutions are convenient for modeling waves in fractional fluid-mechanics systems, shallow water waves, models of turbulent motion, and the electro-hydro-dynamical description of the local electric field and ion acoustic waves in plasma.

The foam drainage equation

In this section, we determine some suitable closed-form traveling wave solutions to the space-time fractional foam drainage equation by making use of the generalized (G′/G)-expansion method. Let us consider the space-time fractional foam drainage equation in the form of Equation (4.2.1). The fractional foam drainage equation models the flow of liquid through the channels and nodes between bubbles, driven by gravity and capillarity. It also serves as a model of waves on a shallow water surface and of ion acoustic waves in plasma, and the waves on foam may be described in terms of a non-linear PDE for the foam density as a function of time and vertical position; the equation is also important in financial mathematics. By means of the traveling transformation (3.2), Equation (4.2.1) is converted into a non-linear ODE, and the further transformation $w = v^{-1}$ brings this ODE into a tractable form. We collect each coefficient of the resulting polynomials and set them to zero, which yields a set of simultaneous algebraic equations (for simplicity, not presented here) for $a_0$, $a_1$, $b_1$, $d$, and $k$. Solving these algebraic equations with the help of symbolic computation software such as Maple, we obtain three sets of solutions.

Since $r_1$ and $r_2$ are integration constants, we may choose their values arbitrarily. If we choose $r_1 = 0$ but $r_2 \neq 0$, the solution (4.2.9) simplifies accordingly; choosing $r_2 = 0$ but $r_1 \neq 0$ instead yields the corresponding traveling wave solutions. When $B \neq 0$, $\psi = A - C$, and $\Omega = B^2 + 4E(A - C) < 0$, substituting the values of the constants arranged in Equation (4.2.5) into Equation (4.2.4), using the relation $w = v^{-1}$, choosing $r_1 = 0$ but $r_2 \neq 0$, and simplifying, we attain further traveling wave solutions; the choice $r_2 = 0$ but $r_1 \neq 0$ again yields the corresponding solutions.

It is remarkable that the traveling wave solutions $w_1, w_2, \ldots, w_9$ of the space-time fractional foam drainage equation are new and more general, and have not been established in the previous literature. The obtained solutions are convenient for modeling waves on a shallow water surface, the flow of liquid through the channels and nodes between bubbles driven by gravity and capillarity, ion acoustic waves in plasma, and waves in the foam density as a function of time and vertical position.

Results and discussion

It is very important to point out that our obtained solutions go beyond the already published solutions.
We notice that, using He's projected differential transform method (PDTM), Elzaki (2015) obtained only two wave solutions (see Appendix A) for the space-time fractional-coupled Burgers equations, whereas we have obtained many more new wave solutions for these equations by using the generalized (G′/G)-expansion method, solutions which have not been reported in the previous literature. We also mention that, using the modified Kudryashov method, Ege and Misirli (2014) obtained only two wave solutions (see Appendix B) for the space-time fractional foam drainage equation, whereas our approach again yields many more new wave solutions not reported in the previous literature. Hence, comparing our solutions with theirs, we conclude that ours are more general and comprise a large number of new exact traveling wave solutions.

Physical explanation

In this section, we give the physical explanation and the graphical representations (3D and 2D figures, produced with symbolic software such as Mathematica) of the obtained solutions of the non-linear space-time fractional-coupled Burgers equations and the space-time fractional foam drainage equation, as follows:

(1) Solutions $u_2$, $u_7$, $v_2$, and $v_7$ are kink-type solutions. Figure 1 shows the shape of the exact kink-type solution. The figures of solutions $u_2$, $u_7$, $v_2$, and $v_7$ are similar; the other figures of these solutions are therefore omitted for convenience.

(2) Solutions $u_1$, $u_6$, $v_1$, and $v_6$ are singular kink solutions. Figure 2 shows the shape of the exact singular kink-type solution. The figures of these solutions are similar; the others are omitted for convenience.

(3) Solutions $u_1$, $u_2$, $u_6$, $u_7$, $v_1$, $v_2$, $v_6$, and $v_7$ are singular kink solutions. Figure 3 shows the shape of the exact singular kink-type solution. The figures of these solutions are similar; the others are omitted for convenience.

(4) Solutions $u_3$, $u_8$, $v_3$, and $v_8$ are singular periodic solutions. Figure 4 shows the shape of the exact singular periodic-type solution. The figures of these solutions are similar; the others are omitted for convenience.

(5) Solutions $u_4$, $u_9$, $v_4$, and $v_9$ are singular periodic solutions. Figure 5 shows the shape of the exact singular periodic-type solution. The figures of these solutions are similar; the others are omitted for convenience.

(6) Solutions $u_5$ and $v_5$ are singular kink solutions. Figure 6 shows the shape of the exact singular kink-type solution.

(7) Solutions $w_2$ and $w_7$ are kink solutions. Figure 7 shows the shape of the kink-type solution. The figures of solutions $w_2$ and $w_7$ are similar; the other is omitted for convenience.

(8) Solutions $w_1$ and $w_6$ are kink solutions. Figure 8 shows the shape of the kink-type solution. The figures of solutions $w_1$ and $w_6$ are similar; the other is omitted for convenience.

(9) Solutions $w_3$, $w_4$, $w_8$, and $w_9$ are singular periodic solutions.
Figure 9 shows the shape of the exact singular periodic-type solution. The figures of solutions w3, w4, w8, and w9 are similar; the remaining figures are therefore omitted for convenience. (10) Solution w5 is a singular kink-type solution. Figure 10 shows the shape of the singular kink-type solution.

Conclusion
In this article, we have determined new and more general solitary wave solutions to the space-time fractional-coupled Burgers equations and the space-time fractional foam drainage equation by using the efficient and powerful technique known as the generalized (G′/G)-expansion method. The obtained solutions are in more general form; many known solutions to these equations are only special cases of them, and particular values of the included parameters yield diverse known soliton solutions. We have also shown that, compared with He's projected differential transform method (PDTM) and the modified Kudryashov method, the generalized (G′/G)-expansion method offers more general solutions and a large number of new exact traveling wave solutions with several free parameters. Finally, we have graphically interpreted the physical phenomena described by both equations. The established results also show that the generalized (G′/G)-expansion method is powerful and effective, and can be applied to many other fractional equations to obtain feasible solutions of tangible physical phenomena.
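As an independent sanity check on the kink-type profiles, the snippet below uses SymPy to verify that a tanh-shaped traveling wave exactly satisfies the classical (integer-order, single-component) Burgers equation, to which fractional Burgers-type equations reduce in a special case. The particular solution form and the reduction to a single equation are assumptions for illustration; the paper's coupled fractional solutions are not reproduced here.

```python
import sympy as sp

x, t, c, nu = sp.symbols("x t c nu", positive=True)

# Candidate kink: a classical traveling-wave solution of the
# (integer-order) Burgers equation u_t + u*u_x = nu*u_xx.
u = c * (1 - sp.tanh(c * (x - c * t) / (2 * nu)))

residual = sp.diff(u, t) + u * sp.diff(u, x) - nu * sp.diff(u, x, 2)
print(sp.simplify(residual))  # prints 0: the kink satisfies the PDE exactly
```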
4,517.8
2018-01-01T00:00:00.000
[ "Physics" ]
Levodopa-based device-aided therapies for the treatment of advanced Parkinson's disease: a social return on investment analysis

Introduction: Parkinson's disease (PD) is an incurable, progressive, neurodegenerative disorder. As PD advances and symptoms progress, patients become increasingly dependent on family and carers. Traditional cost-effectiveness analyses (CEA) only consider patient and payer-related outcomes, failing to acknowledge impacts on families, carers, and broader society. This novel Social Return on Investment (SROI) analysis aimed to evaluate the broader impact created by improving access to levodopa (LD) device-aided therapies (DATs) for people living with advanced PD (aPD) in Australia.

Methods: A forecast SROI analysis over a three-year time horizon was conducted. People living with aPD and their families were recruited for qualitative interviews or a quantitative survey. Secondary research and clinical trial data were used to supplement the primary research. Outcomes were valued and assessed in a SROI value map in Microsoft Excel™. Financial proxies were assigned to each final outcome based on willingness-to-pay, economic valuation, and replacement value. Treatment cost inputs were sourced from Pharmaceutical Benefits Scheme (PBS) and Medicare Benefits Schedule (MBS) published prices.

Results: Twenty-four interviews were conducted, and 55 survey responses were received. For every $1 invested in access to LD-based DATs in Australia, an estimated $1.79 of social value is created. Over 3 years, it was estimated $227.16 million will be invested and $406.77 million of social return will be created. This value is shared between people living with aPD (27%), their partners (22%), children (36%), and the Australian Government (15%). Most of the value created is social and emotional in nature, including reduced worry, increased connection to family and friends, and increased hope for the future.

Discussion: Investment in LD-based DATs is expected to generate a positive social return. Over 50% of the value is created for the partners and children of people living with aPD. This value would not be captured in traditional CEA. The SROI methodology highlights the importance of investing in aPD treatment, capturing the social value created by improved access to LD-based DATs.
Introduction
Parkinson's disease (PD) is an incurable neurodegenerative disorder characterised by the progressive loss of dopamine-producing neurons in the brain, which impairs an individual's ability to control and coordinate movement (1). In its early stages, PD presents with three main symptoms: uncontrollable shaking (tremor), slowness of movement (bradykinesia), and muscle stiffness (rigidity). Other symptoms include postural instability, nerve pain, cognitive dysfunction, and mood and sleep disturbances (1).

Oral levodopa (LD) is the mainstay of PD treatment (2,3) and is commonly prescribed in combination with a dopamine decarboxylase inhibitor (commonly carbidopa or benserazide). It works to replenish and maintain dopamine levels in the brain to reduce PD symptoms (2). Oral levodopa is initially effective at controlling PD symptoms, but its effectiveness decreases as the disease progresses. This is due to several reasons, including the short half-life of LD leading to variable plasma concentration, and erratic gastric emptying. Additionally, as PD progresses and more dopaminergic neurons lose the capacity to store dopamine, the therapeutic window during which patients experience adequate symptom control narrows, and patients experience periods of severe symptom onset (referred to as "Off" time). To maintain symptom control, people living with advancing PD often require higher and more frequent doses of oral LD, resulting in an increased risk of medication side effects such as dyskinesia (involuntary, erratic movement of the limbs), an increasingly complex oral dosing regimen and, in turn, a greater medication-related burden (2). As a result, people living with advancing PD often require higher levels of care from their partner and family. This has significant impacts on the quality of life (QoL) of both the person living with PD and their family, leading to an increased physical, mental, social, economic, and emotional burden (4)(5)(6).

While there is no universally agreed definition of the term 'advanced PD (aPD)', it is generally defined as PD which is poorly controlled by oral LD, often based on '5-2-1' criteria (≥5 times daily oral LD use, ≥2 daily hours of "Off" time, or ≥1 daily hour with troublesome dyskinesia). It is characterised by significantly decreased bilateral mobility, severe motor deficits including tremors and rigidity, increased risk of falls, and cognitive and mental health decline (7). Studies have shown a wide variation in the prevalence of aPD among those with PD, ranging from 10 to 60% depending on the setting (8). Additionally, data suggest up to 40% of people with PD will experience symptoms of advanced disease within 5 years of initiating oral LD (9). As aPD patients experience worsening symptoms, LD device-aided therapies (DATs) may be considered to maintain a consistent LD plasma concentration, provide better symptom control, and reduce the medication-related burden of PD oral medication regimens (2,9). LD-based DATs provide a continuous infusion of treatment, resulting in a stable plasma concentration of LD and a reduction in aPD symptoms (2,9,10).
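Since the '5-2-1' screening rule above is a simple disjunction, it can be expressed directly in code. The sketch below is a minimal illustration of that rule only; the field names and example values are hypothetical, and real aPD assessment involves clinical judgment beyond these three items.

```python
from dataclasses import dataclass

@dataclass
class DailySymptoms:
    oral_ld_doses: int        # number of oral levodopa doses per day
    off_hours: float          # hours per day in the "Off" state
    dyskinesia_hours: float   # hours per day with troublesome dyskinesia

def meets_5_2_1(day: DailySymptoms) -> bool:
    """Return True if any '5-2-1' advanced-PD criterion is met:
    >=5 daily oral LD doses, >=2 h daily 'Off' time, or >=1 h daily
    troublesome dyskinesia."""
    return (
        day.oral_ld_doses >= 5
        or day.off_hours >= 2.0
        or day.dyskinesia_hours >= 1.0
    )

# Hypothetical example: 4 doses/day but 3 h of daily "Off" time -> flagged
print(meets_5_2_1(DailySymptoms(oral_ld_doses=4, off_hours=3.0, dyskinesia_hours=0.5)))  # True
```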
In Australia, LD/CD intestinal gel (Duodopa®) is the only LD-based DAT currently subsidised by the Pharmaceutical Benefits Scheme (PBS). Initiation of Duodopa® requires a percutaneous endoscopic gastrostomy (PEG) procedure in which a jejunostomy tube (J-tube) is permanently placed. The medication is then administered through the PEG/J tube by an external device (referred to as "the pump") (2,3). The clinical efficacy and safety of Duodopa® are well established; however, analysis of PBS data suggests only 5% of people living with aPD are treated with Duodopa®. LD-based DAT uptake is currently limited by health system capacity constraints and the requirement for surgical initiation.

A continuous subcutaneous infusion of foslevodopa/foscarbidopa (prodrugs of LD and CD that are converted into their active form in the body) (Vyalev®) is now being investigated in the clinical trial setting (11). Like Duodopa®, Vyalev® will provide continuous drug administration and a stable LD plasma concentration, and thus reduce aPD symptoms. However, unlike Duodopa®, Vyalev® does not require surgical initiation.

Some of the economic impacts of PD have been previously documented in traditional cost-effectiveness analyses (CEA). Such analyses consider the direct and tangible costs experienced by the patient and health system but often fail to capture the indirect burden of disease; that is, the intangible costs, including the impact of disease on families and broader society. In 2014, the annual cost of PD in Australia was estimated to be over $1 billion (12). Direct health system costs contributed most of this estimate. However, much of this data is now out of date, and thus existing economic assessments of PD likely underestimate its true cost. Further, Australian clinicians have called for patients' QoL, as well as more subjective non-motor symptoms, to be considered when assessing access to advanced treatment (8). As there is a known impact on the QoL of both the person living with PD and their family (4-6), an assessment including the social, emotional, and intangible consequences should also be conducted to understand the true cost of PD.

Social Return on Investment (SROI) is a principles-based research method used to understand, measure, and report the broader social, economic, and environmental consequences of an intervention (13). SROI analyses rely on extensive and robust stakeholder engagement, including with families, carers, and broader society, to measure change in ways that are relevant to those impacted (13). This process of stakeholder engagement captures and complements outcomes which are often underrepresented or excluded from traditional CEAs. To date, no SROI analysis of aPD device-aided therapies has been undertaken. This novel study aimed to undertake a SROI analysis to understand, measure, and report the broader social, economic, and environmental consequences of aPD treatment with LD-based DATs (Duodopa® or Vyalev®).

Methods
The SROI framework has been described in detail elsewhere (13). Briefly, an SROI involves six key stages: (1) establish the scope and identify stakeholders; (2) map outcomes; (3) evidence outcomes and give them a value; (4) establish the impact; (5) calculate the SROI; and (6) report to stakeholders, use the results, and embed the SROI process.
For this analysis, a forecast-type SROI with a 3-year time horizon was chosen, to limit uncertainty associated with reduced clinical effectiveness over time and to capture the short- and medium-term changes in health and social impacts expected to result from treatment with a LD-based DAT. This time horizon was considered reasonable given that clinical data has demonstrated people receiving treatment with LD-based DATs continue to experience statistically significant outcomes up to 36 months after commencing treatment (14).

Stakeholder engagement and mapping outcomes
Stakeholder group identification
The stakeholders to be considered were groups that may affect or be affected by improving access to LD-based DATs. Some stakeholders were considered appropriate to be consulted as proxies for other groups included in the analysis (Table 1). Seven stakeholder groups were identified as likely to be materially impacted or able to act as a proxy for other groups: people living with aPD, their partners, their children, neurologists, Parkinson's nurses, patient advocacy groups, and the Australian Government.

Participant recruitment and interviews
Neurologists treating people living with aPD were identified by the study sponsor and contacted via email from May to June 2022. One follow-up email was sent to each clinician and no further contact was attempted after this, to avoid potential coercion or pressure from the researchers. Once a clinician indicated their willingness to participate, an introductory meeting was held and the project was outlined in more detail. People living with aPD who were eligible for DATs were identified by their treating clinician. Partners and family members were identified by people living with aPD. Invitations for interviews were sent via email, including the consent form and brief details about the research aims, ethical considerations, and confidentiality. PD advocacy and research organisations throughout Australia, specifically Parkinson's Australia and Shake It Up, were also contacted to recruit people living with PD for surveys and interviews regarding their experience. Participation and information flyers for the study were disseminated across the respective networks via newsletters and email to inform people of the study being conducted.

It was made clear to all stakeholders that the research was being conducted separately from any ongoing clinical trial (specifically relating to the investigational product Vyalev®) and that participation in the study would not impact their relationship with the study sponsor or jeopardise their current or future treatment for PD or any other condition.

Interviews were semi-structured and conducted virtually over Zoom or Microsoft Teams and, in some cases, over the phone. The aim of the interviews was to understand changes in aPD symptoms that had occurred after commencing treatment with a LD-based DAT and any downstream changes in QoL or daily and leisure activities that arose due to symptom changes.

Interviews were analysed thematically to identify common experiences among stakeholders. Interview transcripts were uploaded to Dovetail (15), coded, and tagged by a single researcher. Codes were reviewed by another researcher. Coded transcripts were used to inform the Theory of Change and determine final outcomes.
Surveys were conducted using Qualtrics (16). The aim of the surveys was to understand the relative importance of individual symptoms and outcomes of treatment. Participants were asked to rank which symptoms were most important for them to control and which outcomes were most important for them to experience. The outcomes to be ranked were based on the findings from the interviews and the literature. Ranked outcomes were used to inform the importance of each final outcome and the SROI filters.

Secondary research
Phase III clinical trial data from the M15-736 trial was used to determine the proportion of people expected to experience a change as a result of treatment with a LD-based DAT (11). The M15-736 trial compared treatment with Vyalev®, a LD-based DAT, to continued treatment with oral LD/CD immediate-release tablets. The M15-736 clinical trial was considered the most relevant study to assess the clinical impact of LD-based DATs due to its recency compared to Duodopa® clinical trials, capturing the current standard of care in aPD treatment. Efficacy outcomes were measured using the PDQ-39 and the Parkinson's Disease Sleep Scale (PDSS). Individual domains from the PDQ-39 were analysed to determine the difference in the proportion of people who experienced an improvement in their PD symptoms after commencing treatment. Additionally, a literature search was conducted to identify patient-reported outcomes considered relevant in people living with PD.

Theory of change (ToC) maps were developed for each stakeholder based on consultation and secondary research. Thematic analysis of interview transcripts and narrative analysis of secondary literature were used to identify recurring themes and understand outcomes of importance to people living with aPD and their families.

Valuing outcomes and establishing impact
Outcomes were valued and assessed in a SROI value map created in Microsoft Excel, which was an adaptation of the Social Value International Value Map available online (17).
An importance weight was applied to each final outcome to account for the degree to which the outcome was valued from the perspective of stakeholders, informed by the stakeholder surveys. Each outcome was also assigned a financial proxy based on one of three valuation approaches: willingness-to-pay (the value of an outcome based on how much stakeholders are willing to pay/accept), economic valuation (the financial value representing the actual savings/costs to the stakeholder), or replacement value (the cost of other goods or services which would achieve the same amount of change). SROI filters including deadweight, attribution, displacement, and drop-off were determined via stakeholder consultation and secondary research and then quantified. Specifically, deadweight for each outcome was quantified using a six-point scale from never (0%), very probably not (20%), might (40%), probably (60%), very probably (80%) to certainly (100%). This transformation scale was extracted from a previously assured SROI report (18) and accounted for the amount of change that could have happened without the intervention. Similarly, attribution was measured on a six-point Likert scale which measured the contribution of external factors to the outcome. The scale ranged from 0% (the change was completely the result of the intervention) to 100% (the intervention had nothing to do with the change). Displacement and drop-off were determined using data from the M15-736 pivotal clinical trial, including the incremental change between treatments and treatment discontinuation rates (19).

In health economic evaluations, discounting is intended to reflect the difference in how society values future outcomes compared to present outcomes. In Australia, the recommended discount rate is 5% (20). This discount rate was applied to each final outcome. Alternative discount rates of 3.5 and 0% are also recommended and were tested in the sensitivity analyses (20).

The true cost of the medicine to the Government (also called the 'effective price' or 'net price') is commercial-in-confidence and thus was not used in this SROI. The published drug cost of Duodopa® was used as an input for the cost of LD-based DATs. This cost was separated into Pharmaceutical Benefits Scheme (PBS) costs (21) (paid by the Australian Government) and the co-payment (paid by the patient). The annual cost of Vyalev® per patient was assumed to be equal to that of Duodopa®. The 'list price' of Duodopa® was discounted using an average rebate estimate. The average PBS rebate across all listed medicines was calculated based on publicly available PBS expenditure reports for the financial year 2020-2021 (22). This rebate was applied to the Duodopa® list price to calculate an estimated effective price. The cost of medical services associated with commencing LD-based DATs was also included as an input, based on hospital costs and specialist fees (23,24).

In order to avoid double counting, it was assumed that other stakeholders (e.g., partners and children of people living with aPD) did not have any monetary or in-kind investment into the treatment, and all financial inputs were incurred by the patient themselves.

The number of people living with aPD who would access treatment with LD-based DATs each year totalled 1,228. This was calculated from the number of people currently living with aPD based on '5-2-1' criteria, analysis of PBS data, and clinician feedback and expertise (Supplementary Table S1).
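The adjustments described above can be summarised in a small value-map calculation. The sketch below shows, under stated assumptions, how a single outcome's annual value would be filtered (deadweight, attribution, displacement), decayed by drop-off, discounted at 5%, and rolled into an SROI ratio. All numerical inputs are hypothetical placeholders, not the study's actual inputs.

```python
def outcome_present_value(
    people: int,            # stakeholders expected to experience the outcome
    proportion: float,      # share experiencing the change (e.g., from trial data)
    proxy: float,           # financial proxy per person per year (AUD)
    importance: float,      # importance weight from stakeholder surveys
    deadweight: float,      # change that would have happened anyway
    attribution: float,     # change attributable to external factors
    displacement: float,    # value displaced from elsewhere
    drop_off: float,        # annual decay of the outcome
    years: int = 3,
    discount_rate: float = 0.05,
) -> float:
    base = people * proportion * proxy * importance
    base *= (1 - deadweight) * (1 - attribution) * (1 - displacement)
    total = 0.0
    for year in range(1, years + 1):
        value_in_year = base * (1 - drop_off) ** (year - 1)
        total += value_in_year / (1 + discount_rate) ** year
    return total

# Hypothetical single-outcome example (all inputs illustrative):
value = outcome_present_value(
    people=1228, proportion=0.4, proxy=20_000.0, importance=0.8,
    deadweight=0.2, attribution=0.2, displacement=0.05, drop_off=0.1,
)
investment = 79.44e6 * 3  # stylised 3-year investment, before discounting
print(f"outcome PV: ${value/1e6:.1f}M; ratio contribution: {value/investment:.2f}")
```

In the full analysis, one such present value is computed per final outcome per stakeholder group, summed, and divided by the total discounted investment to yield the SROI ratio.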
Ethical considerations
Ethics approval from the Bellberry Human Research Ethics Committee (HREC) was received on the 27th of April 2022 (Application No. 2022-01-082). Additional ethics approval was received from the Gold Coast Hospital and Health Service on the 3rd of August (HREC/2022/QGC/87501) to enrol stakeholders treated within public hospitals.

Results
A total of 79 participants were recruited from May 2022 to March 2023 for semi-structured interviews (n = 24) and surveys (n = 55) (Table 2). Survey responses were collected between December 2022 and January 2023.

Fifteen unique outcomes were included in this SROI: six outcomes for people living with aPD, five outcomes for partners of people living with aPD, two for children of people living with aPD, and two for the Australian Government (see Table 3).

The financial proxies for the included outcomes are shown in Table 4. Five financial proxies were based on willingness-to-pay/accept, six on economic valuation, and four on replacement valuation. SROI filters (deadweight, attribution, displacement, and drop-off) and importance weights are included in Supplementary Table S2. By convention, outcomes which are already financial in nature were assigned an importance weighting of 100%.

The total investment into LD-based DATs over the three-year time horizon was calculated to be $227.16 million, or $79.44 million per year. Over the 3-year time horizon, it was estimated $227.16 million will be invested and $406.77 million of social value will be created.

The SROI ratio was estimated to be 1:1.79. This indicates that for every AUD$1 invested to improve access to LD-based DATs in Australia, an estimated AUD$1.79 of social value is created. This value (to be understood as the return on investment) is shared between people living with aPD (27%), their partners (22%), children (36%), and the Australian Government (15%). Most of the value is created from social and emotional outcomes (AUD$221.34 million, 54%), followed by role functioning (AUD$75.02 million, 18%), economics (AUD$62.92 million, 15%), productivity (AUD$27.44 million, 7%), and wellbeing (AUD$20.06 million, 5%). Sensitivity analyses of recommended discount rates, cost inputs, valuation approaches, time horizon and duration, and SROI filters showed all alternative scenarios yield a positive SROI (range 1:1.07 to 1:2.24) (Supplementary Table S3).

"I spend approximately 65% of my waking day in the "Off" state when my medication is not working. This causes me to have difficulty moving independently, feeding myself, and performing basic tasks. The 35% I manage in the "On" state is with troublesome dyskinesia, very violent movements that again prevent me from doing most basic activities" - Person living with aPD discussing the burden of disease progression (25)
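As a quick consistency check on the headline figures, the snippet below recomputes the SROI ratio and the per-domain shares from the values reported above; it is arithmetic only, using the paper's published numbers.

```python
investment = 227.16e6    # total investment over 3 years (AUD)
social_value = 406.77e6  # total social value created (AUD)
print(f"SROI ratio: 1:{social_value / investment:.2f}")  # -> 1:1.79

domains = {  # AUD, as reported
    "social and emotional": 221.34e6,
    "role functioning": 75.02e6,
    "economics": 62.92e6,
    "productivity": 27.44e6,
    "wellbeing": 20.06e6,
}
for name, v in domains.items():
    print(f"{name}: {v / social_value:.0%}")  # matches the 54/18/15/7/5% split
```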
The analysis identified an increased burden of discomfort associated with the external pump required for both Duodopa® and Vyalev®. However, PD nurses noted that this burden could often be avoided if patients receive adequate support and education prior to commencing treatment with a LD-based DAT. PD nurses help to support patients with questions about their disease and can provide education regarding treatment optimisation. The lack of access to specialised PD nurses was noted as a barrier to uptake of LD-based DATs in Australia, especially in rural and remote areas. Improving access to PD nurses for patients with aPD could help to increase uptake of LD-based DATs in Australia, support treating health care professionals in training patients on use of the pump, and provide ongoing follow-up support to ensure that patients have the best chance of successful therapy, thus improving QoL for aPD patients and their families and increasing the social benefit associated with LD-based DATs.

Discussion
This study is the first published SROI to evaluate the impact of improving access to LD-based DATs for people living with aPD. Results showed that the value of LD-based DATs is experienced not only by patients (27%) and the Government (15%), but that over half of the value is generated for the partners (22%) and children (36%) of people living with aPD. In addition, results highlighted the importance of improvements in non-motor symptoms, such as mental wellbeing, cognition, and speech. These improvements positively impact social and emotional outcomes, including increased hope for the future and increased connection to family and friends. Finally, this research also revealed gaps in aPD care, specifically the need for increased access to specialised nurses with expertise in aPD and LD-based DATs.

Previous studies have evaluated the burden of aPD, including its economic impact and effect on families and carers of people living with aPD (5,12,26). Previously published CEAs showed treating aPD with LD-based DATs such as Duodopa® generates a positive return (27,28). However, these analyses did not consider the broader social impacts of treatment on partners and children of people living with aPD, despite the known impact of PD on carer quality of life (4,5). Noting the limitations of existing research, this study employed an SROI approach which aimed to capture the potential social, economic and environmental impacts of LD-based DATs. Based on the key principles of SROI, this research included extensive stakeholder consultation and outcome mapping to ensure that all relevant stakeholders and material outcomes were identified and included. This method allows for a more complete measurement of value from a societal perspective and highlights the extensive impact of improved treatment for aPD. Our findings demonstrate over half of the value created by LD-based DATs is experienced by partners and children of people living with aPD. Additionally, this research found the benefits of LD-based DATs extend beyond improvement in motor symptoms. For example, improvements in mental wellbeing, cognition, and speech improve a person's ability to connect with others, thereby improving connection to family and friends. This then has broader impacts on the person's partner and children, who experience an increased connection to the person living with aPD. These social and emotional outcomes were found to generate most of the value in this SROI but would typically be excluded from a CEA. Previous research has acknowledged the impact of non-motor symptoms (29), and our findings have identified the additional value created by LD-based DATs for both the partners and children of people living with aPD.
This study has some limitations related to the method and the scope of analysis. Firstly, the SROI method has known limitations (13) which can introduce bias. However, this analysis followed SROI best practice and underwent assurance assessment through Social Value International, increasing confidence in the results. Secondly, this analysis only considered the impact of LD-based DATs in people living with aPD who reside in the community, excluding people living in out-of-home care such as residential aged care. This was a pragmatic decision, as it was not considered feasible to engage people living in out-of-home care. People living in aged care are likely to have materially different experiences compared to those living in the community, and aged care costs are a significant contributor to the total economic burden of PD (12). Therefore, further research should be undertaken to better quantify the benefits of LD-based DATs in this setting. Findings could help to inform treatment options for these patients and understand the broader impact of LD-based DATs for this population. Lastly, the avoided cost of healthcare services and utilisation included in this analysis was informed by previous economic evaluations (30). While the cost estimate used in this analysis included hospitalisation, medical services, and allied health, limited data exist detailing the true cost of healthcare services and utilisation related to PD; thus it is likely the healthcare utilisation cost savings are underestimated. Moreover, as sensitivity analyses of varying healthcare costs still showed a positive return on investment (Supplementary Table S3), this research shows that investing in aPD treatment has social and economic benefits. Future research focusing on the health system cost of PD should be conducted, with a focus on understanding the changing health care resource utilisation as PD progresses. This research will also be useful to inform the broader burden of PD, specifically relating to costs associated with the non-motor symptoms of PD, including choking risk and mental health symptoms.

Similarly, the cost of improving access to LD-based DATs for people living with aPD presented in this analysis is based on an estimate of the effective price, which may over- or underestimate the true price. A sensitivity analysis conducted using the list price still showed a positive return on investment (Supplementary Table S3), demonstrating that even in the most conservative scenario, increased access to LD-based DATs is expected to generate positive social value.

Further research should also assess the potential differential impact of young onset PD. While the average age of PD diagnosis is above 65, approximately 10% of people are diagnosed with PD before the age of 50 (8). People living with young onset PD are more likely to have younger, dependent children, and will likely still be an active part of the workforce. As such, they may experience additional or different outcomes compared to the broader aPD population assessed in this SROI. While some individuals living with young onset PD were consulted as part of this research, additional research focusing exclusively on this population may reveal additional impacts.
[Tables 1-4 (the preliminary list of stakeholders, the stakeholder engagement summary, the included final outcomes, and the financial proxy allocated to each outcome) are not reproduced here. They include illustrative stakeholder quotes such as the following.]

Increased connection to family and friends (social and emotional): "The best part is we have a social life again! Reconnecting with my friends and spending time with my family has brought me so much joy and happiness." - Person living with aPD receiving treatment with a LD-based DAT (25)

Increased ability to remain in the workforce (productivity): "My primary goal is to stay at work and retire when I want to retire, not when Parkinson's makes me retire…" - Person living with aPD receiving treatment with a DAT

Increased hope for the future (social and emotional): "[Access to treatment with a LD-based DAT] has improved my life enormously compared to what it was like on the tablets… I just do not have the down times anymore." - Person living with aPD receiving treatment with a LD-based DAT

Increased burden of discomfort (social and emotional): "Having a tube and pump took some time getting used to, but the independence is worth it…" - Person living with aPD receiving treatment with a LD-based DAT

"[Treatment with LD-based DATs] straightens out your life a little bit more. It gives you a bit more hope for the future." - Partner of patient with aPD receiving treatment with a LD-based DAT

Reduced worry about partner's health (social and emotional): "We are so much happier. We were given life back. My wife does not have to worry anymore." - Person living with aPD receiving treatment with a LD-based DAT (25)

Increased carer wellbeing (wellbeing): "I used to ask him if he could just give me some respite because there was only me" - Partner of person living with PD

Increased connection to family and friends (social and emotional): "I could go for walks with my husband, go to the movies, go back to work" - Partner of person living with aPD receiving treatment with a LD-based DAT (25)

"[The voice of the person living with aPD had] become very weak and he really could not have phone conversations with his sons who both live interstate. That had become a real issue because he felt he was losing touch… [After starting treatment with a LD-based DAT] his sons were really blown away by having these great long conversations with their dad." - Nurse caring for people living with aPD receiving treatment with LD-based DATs

Reduced worry about parent (social and emotional): "They told us to go to a retirement village" - Person living with aPD talking about their children

"[Oral medication] just did not work for me at all. So, I had 3 months in which I had a number of falls, and I damaged my shoulder quite badly… and then when I went on the [LD-based DAT] infusions, things changed rapidly" - Person living with Parkinson's disease receiving treatment with a LD-based DAT

Abbreviations: PD, Parkinson's disease; DATs, device-aided therapies; LD, levodopa; aPD, advanced Parkinson's disease.

Funding: The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. AbbVie Pty Ltd. sponsored this research.
6,673.8
2024-06-06T00:00:00.000
[ "Medicine", "Economics" ]
Synthesis, Electrical Properties and Na+ Migration Pathways of Na2CuP1.5As0.5O7: A new member of the sodium metal diphosphate-diarsenate family, Na2CuP1.5As0.5O7, was synthesized as a polycrystalline powder by a solid-state route. X-ray diffraction followed by Rietveld refinement shows that the studied material, isostructural with β-Na2CuP2O7, crystallizes in the monoclinic system of the C2/c space group with the unit cell parameters a = 14.798(2) Å, b = 5.729(3) Å, c = 8.075(2) Å, β = 115.00(3)°. The structure of the studied material is formed by Cu2P4O15 groups connected via oxygen atoms, resulting in infinite chains, wavy saw-toothed along the [001] direction, with Na+ ions located in the inter-chain space. Thermal study using DSC analysis shows that the studied material is stable up to its melting point at 688 °C. The electrical investigation, using impedance spectroscopy in the 260-380 °C temperature range, shows that the Na2CuP1.5As0.5O7 compound is a fast-ion conductor with σ(350 °C) = 2.28 × 10⁻⁵ S cm⁻¹ and Ea = 0.6 eV. Simulation of the Na+ ion pathways using bond-valence site energy (BVSE) supports the fast three-dimensional mobility of the sodium cations in the inter-chain space.

Introduction
The exploration of new inorganic materials with open frameworks constructed of polyhedra sharing faces, edges and/or corners, forming 1D channels, 2D inter-layer spaces or 3D networks where cations are located, is currently an area of intense activity spanning several disciplines, in particular solid-state chemistry. In particular, alkali metal phosphates were found to have various applications because of their electric, piezoelectric, ferroelectric, magnetic, and catalytic properties [1-4]. Among those, the families of materials with the melilite structure [5], the olivine structure [6] and the natrium super ionic conductor (NaSICON) structure [7] attracted attention for their ionic conduction and ion exchange [6,7]. More recently, in a series of studies, arsenate analogs have been synthesized [8-10]. However, until today phosphate compounds have been studied more as cathodes [11,12] compared to arsenates, perhaps because of the toxicity of arsenic(III) oxide (As2O3); the oxide of arsenic(V) (As2O5) is less toxic. In addition, the introduction of arsenic into a structure changes its physical and chemical properties.

Materials and Methods
A mixture of Cu(NO3)2·2.5H2O, NH4H2PO4 and Na2HAsO4·7H2O in the molar ratio Na:Cu:P:As equal to 2:1:1.5:0.5 was placed in a porcelain crucible and heated to 350 °C for 24 h to eliminate the volatile products H2O, NO2, and NH3. The obtained powder was ground manually using an agate mortar and shaped into cylindrical pellets with a uniaxial press. The obtained pellets were heated to 600 °C. After 72 h, the sample was cooled slowly at a rate of 10 °C/h down to room temperature. After fine grinding, a blue polycrystalline powder was obtained. X-ray powder diffraction (XRD) was used to control and ensure the purity of the obtained powder. The analysis was carried out using an XRD-6000 diffractometer (Shimadzu, Japan) with a graphite monochromator (Cu Kα, λ = 0.154178 nm) and a scan range of 2θ = 10°-70° with a step of about 0.02°. The structure was refined using the Rietveld method by means of the GSAS computer program [16] (EXPGUI, Gaithersburg, Maryland, USA). The crystallographic data of Na2CuP2O7 [17] were used as a starting set.
The obtained structural model was confirmed by the charge distribution (CHARDI) validation model. The CHARDI calculation was performed using the CHARDI2015 computer program (Nespolo, IUCr) [18]. An FTIR spectrometer (Agilent Technologies Cary 630 model) was used to allow a direct indexation of the peaks over the 1300-400 cm⁻¹ wavenumber range. Differential scanning calorimetry (DSC), with the SDT Q600 model, was used to study the thermal behavior of the prepared sample. The device contains two crucibles, one serving as a reference and the other containing the sample to be analyzed. These two crucibles were heated to 750 °C at a rate of 10 °C/min. The thermal analysis was carried out under a nitrogen atmosphere to avoid reaction of the sample with the oxygen in the air. Energy-dispersive X-ray spectroscopy (EDX) and scanning electron microscopy (SEM, Thermo Fisher Scientific model) were used to identify the elements present and the microstructure of the studied material, respectively.

The electrical measurements were preceded by a pretreatment of the sample in order to densify the measured sample by reducing the mean particle size of the synthesized powder. Mechanical grinding for 100 min was carried out using a FRITSCH planetary micromill Pulverisette 7. The polycrystalline sample was shaped into a cylindrical pellet using a uniaxial press. The pellet was sintered in air at an optimal temperature of 610 °C for 2 h with 5 °C/min heating and cooling rates. The geometric factor of the dense ceramic is g = e/S = 0.793 cm⁻¹, where e and S are the thickness and face area of the pellet, respectively. Gold metal electrodes ~36 nm thick were deposited using a SC7620 mini sputter coater. The sample was then placed between two platinum electrodes that were connected by platinum cables to the frequency response analyzer (HP 4192A), which was controlled by a microcomputer. Impedance spectroscopic measurements were performed via the Hewlett-Packard 4192-A automatic bridge supervised by an HP workstation. Impedance spectra were recorded with a 0.5 V AC signal in the 5 Hz-13 MHz frequency range.

The bond-valence site energy (BVSE) model [19,20] was used to simulate the alkali migration in the 3D anionic framework. The BVSE model is the latest extension of the bond-valence sum (BVS) model developed by Pauling [21] to describe the formation of inorganic materials. The BVS model was improved by Brown & Altermatt [22], followed by Adams [23], resulting in the expression

s(A-X) = exp[(R0 − R(A-X))/b]

where s(A-X) is the individual bond valence, R(A-X) is the distance between the counter-ions A and X, R0 and b are fitted empirical constants, and R0 is the length of a bond of unit valence. The BVSE model has been extensively used to simulate cation motion in anionic frameworks by following the valence unit as a function of migration distance [24]. The valence unit was also recently related to a potential energy scale and electrostatic interactions [19,20]. The BVSE method has been used successfully to simulate the transport pathways of monovalent cations (Na+, K+ and Ag+) in numerous materials including Na2CoP1.5As0.5O7 [15], Na1.14K0.86CoP2O7 [25] and Ag3.68Co2(P2O7)2 [26]. The BVSE calculations were performed using the softBV code [27] and the visualization of isosurfaces was carried out using the VESTA3 software (version 3, Koichi Momma and Fujio Izumi, 2018).
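The Brown-Altermatt expression above is easy to evaluate directly. The sketch below computes the bond-valence sum for a hypothetical Na+ site from a list of Na-O distances; the parameter values R0 = 1.803 Å and b = 0.37 Å are the commonly tabulated Na-O values (an assumption here), and the distances are illustrative, not the refined distances from Table 4.

```python
import math

R0_NA_O = 1.803  # Å, tabulated bond-valence parameter for Na+-O2- (assumed)
B = 0.37         # Å, commonly used softness parameter (assumed)

def bond_valence(r: float, r0: float = R0_NA_O, b: float = B) -> float:
    """Individual bond valence s = exp((R0 - R)/b) for one Na-O contact."""
    return math.exp((r0 - r) / b)

# Hypothetical Na-O coordination shell (distances in Å, for illustration only)
na_o_distances = [2.32, 2.38, 2.44, 2.51, 2.60, 2.75]

bvs = sum(bond_valence(r) for r in na_o_distances)
print(f"Bond-valence sum: {bvs:.2f} v.u.")  # should come out close to +1 for Na+
```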
X-ray Powder Diffraction
The crystallographic study started with a simple comparison between the XRD pattern of the materials prepared in the Na2O-CuO-P2O5-As2O5 system and those of previous studies of the diphosphates Na2MP2O7 [5,7,14,28,29] and Na2CoP1.5As0.5O7 [15]. Only the Na2CuP1.5As0.5O7 diffractogram showed a similarity with that of the β-Na2CuP2O7 diphosphate [17]. It crystallizes in the monoclinic system of the C2/c space group. This result prompted us to carry out a precise refinement using the Rietveld method as implemented in the GSAS computer software [16]. The final agreement factors are Rp = 5.4% and Rwp = 6.9%. No additional peaks were detected. The final Rietveld plot is presented in Figure 1. The unit cell parameters obtained from the Rietveld refinement are a = 14.798(2) Å, b = 5.729(3) Å, c = 8.075(2) Å, β = 115.00(3)° (Table 1). The details of the crystallographic data, data collection and final agreement factors are given in Table 2. The atomic coordinates and isotropic displacement parameters are listed in Table 3. The main bond distances are given in Table 4. The charge distribution analysis and the bond-valence computation are summarized in Table 5 (charge distribution analysis of the cation polyhedra in Na2CuP1.5As0.5O7).

Comparing the unit cell parameters of the studied material with those of β-Na2CuP2O7 shows that the P/As substitution increases the volume of the unit cell (Table 1), which is explained by the As-O bond distances being greater than the P-O ones.

Infrared Spectroscopy
The IR absorption spectrum of the studied Na2CuP1.5As0.5O7 material is shown in Figure 2. The spectrum shows a series of distinct bands attributed to the asymmetric and symmetric stretching vibrations of the P-O-P and As-O-As bridges. These bands are characteristic of the pyrophosphate (P2O7)4− and diarsenate (As2O7)4− groups (Table 6, proposed assignment of the vibration bands of Na2CuP1.5As0.5O7) [30] and are similar to those of the Li2CuP2O7 spectrum [31].

DSC Thermal Analysis
In order to determine the thermal stability of the studied compound, DSC analysis was performed from room temperature to 750 °C. The result is illustrated in Figure 3. An endothermic peak was observed at 688 °C; this peak corresponds to the melting point of our compound. Meanwhile, an exothermic peak observed at 743 °C, after the fusion, probably corresponds to the oxidation of a fraction of the Cu2+ to Cu3+ in the obtained liquid phase. Overall, the thermal analysis via DSC shows that the Na2CuP1.5As0.5O7 material is stable up to its melting point at 688 °C. The sharpness of the endothermic peak in the DSC analysis suggests good crystallinity of our synthesized powder.
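The claim that P/As substitution enlarges the cell can be checked directly from the refined lattice parameters. The snippet below evaluates the monoclinic cell volume V = a·b·c·sin β for the parameters above; the comparison value for β-Na2CuP2O7 would come from Table 1, which is not reproduced here.

```python
import math

# Refined lattice parameters of Na2CuP1.5As0.5O7 (this work)
a, b, c = 14.798, 5.729, 8.075  # Å
beta_deg = 115.00

# Monoclinic cell volume: V = a * b * c * sin(beta)
V = a * b * c * math.sin(math.radians(beta_deg))
print(f"V = {V:.1f} Å^3")  # ≈ 620.4 Å^3
```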
Here we can also compare the thermal stability of Na2CuP1.5As0.5O7 to that of the recently studied Co analog Na2CoP1.5As0.5O7. The Cu material is stable from room temperature up to its melting temperature of around 688 °C. In contrast, the Na2CoP1.5As0.5O7 material undergoes a phase transition at 675 °C before melting at ~700 °C. This shows that the Na2CuP1.5As0.5O7 material is more stable than the Na2CoP1.5As0.5O7 material [15].

SEM Microstructure and EDX Analysis of Na2CuP1.5As0.5O7
Energy-dispersive X-ray spectroscopy (EDX) and scanning electron microscopy (SEM) were used to confirm the chemical composition and examine the polycrystalline morphology, respectively (Figure 4). The EDX analysis of the polycrystalline powder revealed the presence of the expected elements, i.e., sodium, copper, phosphorus, arsenic, and oxygen. The SEM micrograph of the sample shows an agglomeration of uniform parallelepiped crystallites. The elemental mapping analysis of Na2CuP1.5As0.5O7 confirmed the uniform distribution of the constituent elements (Figure 5).
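EDX confirms which elements are present; as a complementary back-of-the-envelope check, the snippet below computes the theoretical mass fractions of Na2CuP1.5As0.5O7 from standard atomic weights, i.e., the values one would compare against quantitative EDX. The atomic weights are standard tabulated values, not data from the paper.

```python
# Standard atomic weights (g/mol)
W = {"Na": 22.990, "Cu": 63.546, "P": 30.974, "As": 74.922, "O": 15.999}
composition = {"Na": 2, "Cu": 1, "P": 1.5, "As": 0.5, "O": 7}

molar_mass = sum(W[el] * n for el, n in composition.items())
print(f"M(Na2CuP1.5As0.5O7) = {molar_mass:.2f} g/mol")  # ~305.4 g/mol

for el, n in composition.items():
    print(f"{el}: {W[el] * n / molar_mass:.1%} by mass")
```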
It contains two P2O7 units connected by a vertex with two CuO4 of square planar geometry. The charge neutrality of the structural unit is ensured by four sodium ions (Na + ). Crystal Structure Description The structural unit of Na 2 CuP 1.5 As 0.5 O 7 is presented in Figure 6. It contains two P 2 O 7 units connected by a vertex with two CuO 4 of square planar geometry. The charge neutrality of the structural unit is ensured by four sodium ions (Na + ). The structure of our material differs from that of the allotropic form α-Na2CuP2O7 [17], which has a two-dimensional anionic framework formed by the connection of vertices of PO4 tetrahedra, and CuO5 polyhedra. Electrical Properties: Effect of P/As Doping The prepared pellet of the Na2CuP1.5As0.5O7 compound was sintered at 550 °C for 2 hours with a 5 °C/min heating and cooling rate. The relative density of the obtained pellet is D = 88%. The thickness and surface of the pellet are e = 0.36 cm and S = 0.454 cm 2 , respectively. The electrical measurements of the obtained sample were carried out using complex impedance spectroscopy in the temperature range of 260-380 °C. The recorded spectra are shown in Figure 9. The best fits of impedance spectra were obtained when using a conventional electrical circuit Rg//CPEg-Rgb//CPEgb, where CPE are constant phase elements (Figure 9a) and subscripts g and gb indicate bulk grain and grain boundary contribution, respectively: The true capacitance was calculated from the pseudo-capacitance according to the following relationships: (where ω0 is the relaxation frequency, A is the pseudo-capacitance obtained from the CPE, and C is the true capacitance. The structure of our material differs from that of the allotropic form α-Na 2 CuP 2 O 7 [17], which has a two-dimensional anionic framework formed by the connection of vertices of PO 4 tetrahedra, and CuO 5 polyhedra. Compared to the sodium cobalt diphosphate-diarsenate Na 2 CoP 1.5 As 0.5 O 7 investigated recently by Marzouki et al. [15], we notice that despite a similar composition, Na 2 CuP 1.5 As 0.5 O 7 crystallizes in a different structure type. Indeed, the cobalt material crystallizes in the tetragonal system of the P4 2 /mnm space group with the unit cell parameters a = 7.764(3) Å, c = 10.385(3) Å. In contrast, the studied material Na 2 CuP 1.5 As 0.5 O 7 , crystallizes in the monoclinic system of the C2/c space group with the unit cell parameters a = 14.798(2) Å; b = 5.729(3) Å; c = 8.075(2) Å; β = 115.00(3) • . The difference is undoubtedly determined by the preference of the Jahn-Teller active d 9 Cu 2+ to adopt square coordination ( Figure 6). Electrical Properties: Effect of P/As Doping The prepared pellet of the Na 2 CuP 1.5 As 0.5 O 7 compound was sintered at 550 • C for 2 h with a 5 • C/min heating and cooling rate. The relative density of the obtained pellet is D = 88%. The thickness and surface of the pellet are e = 0.36 cm and S = 0.454 cm 2 , respectively. The electrical measurements of the obtained sample were carried out using complex impedance spectroscopy in the temperature range of 260-380 • C. The recorded spectra are shown in Figure 9. 
The best fits of the impedance spectra were obtained using a conventional equivalent circuit Rg//CPEg - Rgb//CPEgb, where CPE denotes a constant phase element (Figure 9a) and the subscripts g and gb indicate the bulk grain and grain boundary contributions, respectively. The true capacitance was calculated from the pseudo-capacitance according to the relationships ω0 = (R·A)^(−1/p) and C = A·ω0^(p−1), where ω0 is the relaxation frequency, A is the pseudo-capacitance obtained from the CPE, p is the CPE exponent, and C is the true capacitance. The fitted parameters and derived conductivities at each temperature are collected in Table 7 (columns: T (°C), T (K), Rg (10⁴ Ω), Ag (10⁻¹⁰ F s^(p−1)), Cg (10⁻¹¹ F cm⁻¹), Rgb (10⁴ Ω), Agb (10⁻¹⁰ F s^(p−1)), Cgb (10⁻¹⁰ F cm⁻¹), Rt (10⁴ Ω), ρt (10⁴ Ω cm), σ (10⁻⁵ S cm⁻¹), σd (10⁻⁵ S cm⁻¹); values not reproduced here). The values of the capacitances Cg and Cgb are approximately 10⁻¹¹ and 10⁻¹⁰ F cm⁻¹ for the bulk and grain boundaries, respectively [15]. With a relative density of D = 88%, the conductivity of the prepared sample (Table 7) increases from 0.35 × 10⁻⁵ S cm⁻¹ at 260 °C to 3.13 × 10⁻⁵ S cm⁻¹ at 380 °C.

On the other hand, the 12% porosity of our sample prompted us to estimate the conductivity values of a fully dense sample of Na2CuP1.5As0.5O7 using the empirical formula proposed by Langlois and Coeuret [32], σd = 4σ/(1 − P), where σ and σd are the electrical conductivities of the porous and dense samples, respectively, and P is the porosity of the sample. This correction has been used in previous works such as Na2CoP1.5As0.5O7 [15]. Taking into account the porosity factor P = 0.12, the conductivity value of the dense material is σd = 4σ/0.88. The conductivity values of the dense sample calculated at different temperatures are summarized in Table 7. In this case, the experimental conductivity of 3.5 × 10⁻⁶ S cm⁻¹ corresponds to the corrected value of 1.59 × 10⁻⁵ S cm⁻¹ at 260 °C.

The curve Ln(σ × T) = f(1000/T) is linear (Figure 10), satisfying the Arrhenius law Ln(σT) = Ln σ0 − Ea/kT (k = Boltzmann constant). The activation energy calculated from the slope of this curve is Ea = 0.60 eV. The activation energy, which is unaffected by porosity and thus easier to use for comparison, decreases for Na2CuP1.5As0.5O7 compared to that of Na2CuP2O7 [33], i.e., 0.60 eV versus 0.89 eV, respectively. Consequently, the P/As substitution increases the electrical conductivity of the parent material Na2CuP2O7 at lower temperatures [33].
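The activation energy can be recovered from the tabulated endpoints alone. The sketch below fits the Arrhenius law Ln(σT) = Ln σ0 − Ea/(kT) to the two conductivity values quoted above (0.35 × 10⁻⁵ S cm⁻¹ at 260 °C and 3.13 × 10⁻⁵ S cm⁻¹ at 380 °C); a real fit would use all points in Table 7, which are not reproduced here.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

# Endpoint conductivities quoted in the text (S/cm) at T (K)
points = [(260 + 273.15, 0.35e-5), (380 + 273.15, 3.13e-5)]

# Two-point slope of ln(sigma*T) versus 1/T equals -Ea/k
(t1, s1), (t2, s2) = points
slope = (math.log(s2 * t2) - math.log(s1 * t1)) / (1 / t2 - 1 / t1)
ea = -slope * K_B
print(f"Ea ≈ {ea:.2f} eV")  # ≈ 0.60 eV, matching the reported value
```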
Overall, a comparison of the conductivity values of the studied material Na2CuP1.5As0.5O7 (at T = 350 °C, σ(D = 88%) = 2.28 × 10⁻⁵ S cm⁻¹; σd = 2.28 × 10⁻⁴ S cm⁻¹; Ea = 0.60 eV) with those found in the literature shows that our material can be classified among the fast ionic conductors, as shown in Table 8 (activation energies of the ionic conductivity for some sodium-ion materials; not reproduced here).

The BVSE calculation revealed, in addition to the equilibrium site Na1, the presence of two interstitial sites (i1 and i2) and ten saddle points (s1 to s10) (Table 9, bond-valence site energies and positions of the equilibrium site (Na1), the interstitial sites (i1 and i2) and the saddle points (s1 to s10)). Thus, there are ten local pathways, as shown in Table 10. Figure 11 shows the positions of the equilibrium and interstitial sites in the unit cell.

Figure 11 shows that the migration along the b direction does not involve any interstitial sites, and the diffusion occurs from the equilibrium site Na1 to its symmetry image, with a jump distance of approximately 3.119 Å and an activation energy of approximately 0.466 eV (Figure 12a). Along the c direction, Figure 11 shows that the sodium moves from the Na1 position to the interstitial site i2, then to i1, to reach the equivalent Na1 site, with an activation energy along this direction of approximately 0.96 eV (Figure 12b). Along the a direction, the sodium atoms pass through the following sites: Na1-i1-i1-Na1-i2. The activation energy along this direction is approximately 0.96 eV (Figure 12c).
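To see why the b-axis pathway should dominate, one can compare Boltzmann factors for the reported barriers. The sketch below estimates relative jump probabilities at 350 °C from the BVSE barriers quoted above (0.466 eV along b; 0.96 eV along a and c); this is a rough order-of-magnitude illustration, not a full hopping-rate model (attempt frequencies are assumed equal).

```python
import math

K_B = 8.617e-5    # Boltzmann constant, eV/K
T = 350 + 273.15  # K

barriers = {"b axis": 0.466, "a axis": 0.96, "c axis": 0.96}  # eV, from BVSE

weights = {d: math.exp(-ea / (K_B * T)) for d, ea in barriers.items()}
ref = weights["b axis"]
for direction, w in weights.items():
    print(f"{direction}: relative jump probability ≈ {w / ref:.1e}")
# b axis -> 1.0; a and c -> ~1e-4, i.e. diffusion is strongly favored along b
```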
Thus, the activation energies of the title compound for 1D and 3D ionic conductivity are approximately 0.466 eV and 0.96 eV, respectively. Figure 13 shows the isosurfaces of the conduction pathways. Consequently, based on the BVSE calculations, the fast ionic conductivity observed for the material can be explained by the three-dimensional mobility of Na+ ions in the inter-ribbon space, likely with more favorable diffusion along the b-axis. Conclusions A new quaternary oxide Na2CuP1.5As0.5O7 was identified in the Na2O-CuO-P2O5-As2O5 system. It crystallizes in the monoclinic C2/c space group and is isostructural to β-Na2CuP2O7. The partial substitution of P appears to be beneficial for ionic conductivity, as the material exhibits a lower activation energy of 0.60 eV versus 0.89 eV for the parent β-Na2CuP2O7. According to the impedance spectroscopy performed on the 88% dense pellet, the bulk ionic conductivity reaches 2.28 × 10⁻⁵ S cm⁻¹, which allows Na2CuP1.5As0.5O7 to be classified as a fast ion conductor.
The bond-valence site energy calculations suggest that the Na+ diffusion is three-dimensional, with some preference for transport along the b-axis.
Aβ and Tau Interact with Metal Ions, Lipid Membranes and Peptide-Based Amyloid Inhibitors: Are These Common Features Relevant in Alzheimer’s Disease? In the last two decades, the amyloid hypothesis, i.e., the abnormal accumulation of toxic Aβ assemblies in the brain, has been considered the mainstream concept sustaining research in Alzheimer’s Disease (AD). However, the course of cognitive decline and AD development better correlates with tau accumulation rather than amyloid peptide deposition. Moreover, all clinical trials of amyloid-targeting drug candidates have been unsuccessful, implicitly suggesting that the amyloid hypothesis needs significant amendments. Accumulating evidence supports the existence of a series of potentially dangerous relationships between Aβ oligomeric species and tau protein in AD. However, the molecular determinants underlying pathogenic Aβ/tau cross interactions are not fully understood. Here, we discuss the common features of Aβ and tau molecules, with special emphasis on: (i) the critical role played by metal dyshomeostasis in promoting both Aβ and tau aggregation and oxidative stress in AD; (ii) the effects of lipid membranes on Aβ and tau (co)-aggregation at the membrane interface; (iii) the potential of small peptide-based inhibitors of Aβ and tau misfolding as therapeutic tools in AD. Although the molecular mechanism underlying the direct Aβ/tau interaction remains largely unknown, the arguments discussed in this review may help reinforce the current view of a synergistic Aβ/tau molecular crosstalk in AD and stimulate further research toward mechanism elucidation and next-generation AD therapeutics. Introduction At the end of 1901, Alois Alzheimer, a German neuropathologist, described the presence of neurofibrillary tangles (NFTs) and "senile plaques" in post-mortem neuronal tissues of a patient who experienced memory failure and gradual mental decline [1]. That pioneering article is considered the first report describing senile dementia, a chronic neurodegenerative condition that would later be commonly identified as Alzheimer's Disease (AD). AD is known to be the prevalent form of dementia, especially in aged people: 60-70% of all cases of dementia are diagnosed with AD [2], and about 32% of people 85 years old and older are affected by AD [3]. Presently, the different types of protein aggregates, i.e., extracellular deposits of the amyloid β (Aβ) peptide [4] and the intracellular hyperphosphorylated forms of tau protein (NFTs) [5], observed in AD brains represent the two distinctive pathological traits of AD [6]. Accumulating evidence suggests that Aβ plays a significant role in AD, while human genetics established the relationship between tau malfunction and neurodegeneration. It is demonstrated that inherited Frontotemporal Dementia (FTD) and parkinsonism, with extensive filamentous tau deposits in the brain in the absence of Aβ deposits, are caused by mutations in MAPT, the microtubule-associated protein tau gene [7]. These pathological mutations involve tau hyperphosphorylation, which leads to detachment of the functional protein from the microtubules, with consequent intracellular accumulation. When present in low quantities, soluble Aβ isoforms play a vital physiological role in the CNS, contributing to normal brain function [28].
Due to its hydrophobic interaction with lipidic membranes, vesicles, and transmembrane receptors [29], as well as neurotrophic or neurotoxic effects depending on its concentration, monomeric Aβ homeostasis appears to be crucial in the modulation of synaptic function. In addition to its role in synapse regulation, the Aβ monomer has neuroprotective properties that are mediated through the activation of several pathways. Aβ monomers have been shown to increase neuronal survival by activating the phosphatidylinositol-3-kinase (PI-3-K) pathway, which appears to be mediated by IGF-1/insulin receptor stimulation [30]. The activation of this route in neurons has been shown to cause functional synaptogenesis, or the development of new synapses [31]. Additional non-toxic yet atypical functions of soluble Aβ have been observed. These include: protective antimicrobial properties [32]; protection against cancer [33]; assisting the brain to recover from traumatic and ischemic injuries by participating in blood-brain barrier repair; and regulation of synaptic function [34]. The two Aβ peptides, 40 or 42 residues long (Aβ40 and Aβ42, respectively), are produced by the concerted proteolysis of the amyloid precursor protein (APP) [4,35,36]. APP processing may occur via two distinct pathways, i.e., non-amyloidogenic and amyloidogenic. The generation of Aβ peptides results from the activity of two transmembrane proteolytic enzymes (the β-secretase and the γ-secretase) on the membrane-bound APP. APP can also be processed by a different protease (α-secretase) that cleaves APP between amino acids 16 and 17 of the Aβ peptide, thus blocking Aβ peptide generation [37,38]. Aβ's primary structure is composed of a hydrophilic N-terminal region (1-16) alongside a hydrophobic C-terminal domain (17-40/42). Tau is also a substrate for the ubiquitin-proteasome system (UPS) and for chaperone-mediated autophagy [46]. A role for tau in regulating the functional maturation and survival of new-born neurons, the selectivity of neuronal death following stress, and neuronal responses to external stimuli was also reported [47]. Tau is subject to different post-translational modifications, i.e., phosphorylation, glycosylation, glycation, prolyl-isomerization, cleavage or truncation, nitration, polyamination, ubiquitination, sumoylation, oxidation and aggregation [48]. Could Aβ and Tau Be Colocalized in Lipid Membranes? The role played by lipid membranes in modulating amyloid aggregation and toxicity has been largely investigated [49,50]. The plasma cell membrane is not a simple target for amyloidogenic proteins; rather, it is an active actor which can foster peptide aggregation. Many reports suggest that lipid abnormalities exist in the AD brain, implying that aberrant Aβ amyloid interactions with the plasma membrane may cause toxicity [23,51]. Aβ-membrane interactions can occur when the peptide is inserted into the membrane and a pore-like structure forms, or when it is bound to the membrane's surface [23]. The channels generated in the plasma membrane have the potential to harm neuronal cells by impairing signal transmission and ultimately causing apoptosis. Surprisingly, just as the Aβ peptide can influence the properties and state of the membrane, so the membrane can influence the fibrillation process of Aβ peptides.
Due to the easy accessibility of cytosolic tau to cellular and organelle membranes, research into tau-lipid bilayer interactions has become more helpful in understanding disease pathophysiology, with multiple studies implicating membranes as major targets for oligomeric tau aggregation [24]. Indeed, similar to the Aβ peptide, tau protein is able to interact with both biological and artificial membranes [24,52-55]. Moreover, binding of tau to membranes in vitro is enhanced by the presence of anionic lipids, as observed for the Aβ peptide [56,57]. Interestingly, Katsinelos and colleagues showed that tau is also capable of disrupting large unilamellar vesicle (LUV) membranes in a PI(4,5)P2-dependent manner [58]. Thus, such similarities in the mechanism and topology of the interactions of Aβ and tau with lipidic membranes raise a question: could lipid membranes be the interface that mediates the cross correlation between Aβ and tau in the etiology of Alzheimer's Disease? Each lipid component of the cell membrane is an active molecule which can affect the conformation, function and behavior of several transmembrane proteins [59]. Cholesterol, gangliosides, in particular monosialotetrahexosylganglioside (GM1), and sphingomyelin have been shown to play a pivotal role in Alzheimer's Disease [60,61], and are all strictly correlated with raft domains [62]. Significant alterations in raft domain lipid composition in the frontal cortex of AD patients were described [63,64]. Interestingly, there is some evidence that both tau protein and amyloid beta are strictly correlated, directly and/or indirectly, with raft domains in the membrane, and in particular with cholesterol and gangliosides [23,55]. Cholesterol can interact with Aβ monomers, protofibrils and fibrils [65-67]. Matsuzaki et al. showed that a reduction in cholesterol almost abolished the formation of Aβ42 amyloids in rat pheochromocytoma PC12 cells [68]. Moreover, cholesterol and sphingomyelin are also involved in tau secretion through the plasma membrane. Treatment with methyl-β-cyclodextrin to reduce cell membrane cholesterol decreased tau secretion by 47%, while increasing cellular cholesterol increased tau secretion by 75% [69]. Thus, an increase in membrane cholesterol content, associated with Alzheimer's Disease [70], could potentially promote a local coexistence of high concentrations of both proteins. Yet, recent studies suggest that gangliosides play essential and complicated roles in Alzheimer's Disease by accelerating the oligomerization of Aβ in the neuronal membrane environment [71,72]. According to the studies by Matsuzaki et al., Aβ40 specifically bound to GM1 clusters containing sphingomyelin and cholesterol. A confocal laser microscopy study further demonstrated that fluorescein-labeled Aβ selectively bound to GM1-enriched domains of cell membranes in a time- and concentration-dependent manner [73]. Tau protein also requires the presence of cholesterol and sphingomyelin to interact with and penetrate membranes, indicating that raft domains play a key role in this process [69,74,75]. The interactions between lipid membranes and tau have not been fully characterized, particularly the mutually disruptive structural perturbations. Several works suggest that tau aggregation may be modulated by plasma membranes [55]. Tau has been shown to interact with the plasma membrane through its amino-terminal domain and to accumulate with Aβ in raft microdomains [54,74,76].
Moreover, anionic lipid vesicles have been shown to promote the aggregation of the microtubule binding domain of tau (K18) at sub-µM concentrations [77]. Tau has also been shown to have a strong tendency to associate with and intercalate into negatively charged lipid monolayers and bilayers, being able to seed the formation of paired helical filaments in the inner leaflet of plasma membranes. Furthermore, tau interaction with anionic lipid membranes has been demonstrated to disrupt lipid packing and compromise membrane structural integrity [78]. Taken together, these data suggest that raft lipid domains could play the role of "mediator". Indeed, accumulating evidence suggests the possibility of a local and transient co-presence of Aβ and tau, at high local concentration, in the raft microdomain region of the plasma membrane, enhanced by pathological conditions [74,79]. Thus, the lipid membrane could be the interface where Aβ and tau proteins mutually cross-interact. Whether this cross interaction occurs directly or indirectly remains to be determined, and more investigation is necessary. The plasma membrane might also mediate the mutual modulation of the activity of Aβ and tau without a direct cross-interaction. The importance of tau in mediating Aβ toxicity has been clearly demonstrated by the resistance of tau knock-out neurons to Aβ-induced neurotoxicity [80]. Aβ is known to activate calpain and increase tau proteolysis in primary neurons [81]. Membrane cholesterol content regulates the rate of calcium influx and calpain activation in neurons by increasing the activity of glutamatergic receptors and membrane-associated calcium transporters. Nicholson and Ferreira [82] suggested a link between cholesterol levels in plasma membranes and tau toxicity in the context of AD, by showing that Aβ-mediated production of 17 kDa calpain-cleaved tau fragments increases with neuronal development and is correlated with membrane cholesterol level in neurons. However, a direct neurotoxic effect of the calpain-cleaved 17 kDa tau species has not yet been demonstrated [83]. Aβ and Tau Can Interact with Metal Ions It is widely accepted that unbalanced metal homeostasis is associated with neurodegeneration through different pathogenic pathways including oxidative stress, microglia activation, and inflammation [57,84]. In AD, metal ions are directly and indirectly involved in Aβ/tau processing [85-90] and can impact the physiological functions of these proteins. In particular, Aβ production is regulated by zinc-dependent (α-secretase) and copper-dependent (β-secretase) enzymes that will not properly work if metal ions are dysregulated. On this ground, abnormally high concentrations of zinc increase the resistance of Aβ peptides to α-secretase cleavage and, therefore, promote an increase in Aβ content [91]. Moreover, it has been found that zinc can inhibit the α-secretase cleavage activity on APP, resulting in elevated β- and γ-secretase processing of APP and a further increased generation of extracellular Aβ plaques [85]. Some proteases responsible for Aβ degradation are the zinc-dependent neprilysin (NEP) and insulin degrading enzyme (IDE) [92]. Metal ion dyshomeostasis could result in improperly formed metal complexes of NEP or IDE, thereby affecting the Aβ turn-over and the formation of toxic aggregates. Likewise, tau phosphorylation and aggregation could be influenced by metals such as zinc, copper and iron, which have been shown to modulate kinases that phosphorylate tau, further worsening tau pathology [93].
Moreover, zinc has also been demonstrated to induce tau hyperphosphorylation by activating glycogen synthase kinase-3beta (GSK-3β) and inactivating phosphatases such as protein phosphatase 2A (PP2A) [94,95], and it has been implicated in tau aggregation and neurotoxicity, as suggested by a recent study carried out on a tau-R3(303-336) peptide fragment [96]. Metals can also directly interact with either Aβ peptides or tau proteins in AD by influencing their respective folding stability as well as their mutual interaction. As an example, Cu(II) and Zn(II) ions have been reported to catalyze toxic Aβ or tau self-assembly and, in turn, generate toxic amyloid fibrils promoting neuronal loss [97,98]. In addition to their effects on amyloid aggregation, metal ions could exert multifaceted effects on protein clearance [99]. In particular, it has been reported that the reuptake of transition metal ions such as Zn(II) and Cu(II), after neuronal signaling, is slower in AD brains. As a result, higher concentrations of metal ions may be allowed to persist within the synapse, which in turn might promote Aβ aggregation [100-102]. Another important consequence of metal ion dyshomeostasis is oxidative stress caused by the production of reactive oxygen species (ROS) via Fenton-like reactions [97]. Metal−Aβ complexes [i.e., Cu(I/II)−Aβ and Fe(II/III)−Aβ] have been observed to generate ROS similarly to redox-active metal ions [103,104]. Several studies indicated the reduction of Cu(II)−Aβ to Cu(I)−Aβ and the transfer of one electron to O2 to yield O2•− [105,106]. Moreover, the D1, H13/H14, and M35 residues in Aβ may also be involved in the mechanism proposed for the ROS generation mediated by Cu(I/II)−Aβ [107,108]. From the above it is clear that an interplay among Aβ, tau, and metal ion dyshomeostasis plays a significant role in AD pathogenesis [109]. All these factors have been shown to mutually influence each other, with adverse effects on disease progression. A better understanding of their relationship at a molecular level could be beneficial for elucidating AD pathogenesis. In this regard, determining the affinity of metal ions for tau and Aβ proteins is essential to understand the biological relevance of these metal complexes and to predict which biomolecule could effectively compete with Aβ and/or tau for metal ion complexation. Affinity measurements of the copper complexes with Aβ and tau indicated dissociation constants ranging from nanomolar to attomolar values [110-112] and from micromolar to high picomolar values [113,114], respectively. Since extracellular Cu(II) levels in the brain interstitial fluid are 100 nM, a picomolar affinity for Cu(II) would allow Aβ or tau to compete for Cu(II) ions with other extracellular Cu(II) ligands [110]. This is especially true in the synaptic region, where Cu(II) can be released during neurotransmission [115]. In particular, the concentrations of ionic copper in the synaptic cleft, after excitatory release, may reach 15 mM [116]. Aβ is also present in the synaptic cleft region [117], while recent studies indicate that tau can be detected in synaptic vesicles under pathological conditions [118]. The data mentioned above indicate the ability of Aβ and tau proteins to interact with copper ions under physiological conditions. The binding sites, the conformational changes, and the affinity constants of Cu(II) and Zn(II) complexes with Aβ have been widely investigated [119,120].
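To see why these orders of magnitude matter, the back-of-the-envelope sketch below solves the exact 1:1 mass-action equilibrium for a hypothetical peptide at an assumed 1 nM concentration competing for the ~100 nM extracellular Cu(II) pool quoted above; the Kd values are illustrative stand-ins for the picomolar (Aβ) and micromolar (tau fragment) ends of the reported ranges.

```python
def fraction_peptide_bound(L_tot, M_tot, Kd):
    """Exact 1:1 binding: solve [ML] from the quadratic mass-action equation
    ML^2 - (L+M+Kd)*ML + L*M = 0 and return the fraction of peptide loaded."""
    b = L_tot + M_tot + Kd
    ML = (b - (b * b - 4.0 * L_tot * M_tot) ** 0.5) / 2.0
    return ML / L_tot

Cu = 100e-9  # extracellular Cu(II), ~100 nM (value quoted in the text)
L = 1e-9     # hypothetical peptide concentration, 1 nM (assumption)
print(fraction_peptide_bound(L, Cu, Kd=1e-12))  # picomolar Kd -> ~1.0, saturated
print(fraction_peptide_bound(L, Cu, Kd=1e-6))   # micromolar Kd -> ~0.1, little Cu bound
```

Under these assumed numbers, a picomolar-affinity peptide ends up essentially fully copper-loaded, while a micromolar-affinity one captures only about a tenth of its capacity, consistent with the competition argument above.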
Different coordination modes, stability constant values, and metal-assisted polypeptide secondary structure changes have been proposed. The 1-16 residue domain was generally considered the binding region for the Cu(II) ion in Aβ [121,122]. In particular, at a low metal-to-Aβ ratio and physiological pH, macrochelate complex species mainly form, while, upon increasing the metal-to-ligand ratio, more than one complex species can form with different coordination modes, as a consequence of the metal-assisted deprotonation of the amide nitrogens [123]. Little is known about the Cu(II) complexes with oligomeric Aβ species. Recently, the Cu(II) coordination properties of a synthetic Aβ(1-16) dimer have been investigated [124]. Interestingly, an increased affinity of Cu(II) for the His13 and His14 residues, compared with the Cu(II) coordination modes reported for the Aβ(1-16) monomer, was observed. This result was in keeping with a recent study that reported the involvement of the imidazole nitrogens of His13 and His14 in Cu(II) coordination with Aβ(1-40) fibrils [125]. Interestingly, a solid-state NMR study on Aβ fibrils indicated that Zn2+ causes structural changes in the N-terminal domain of Aβ42 by interaction of the metal ion with the side chains of the His13 and His14 residues. Moreover, the metal ion is able to disrupt the salt bridge between the side chains of Asp23 and Lys28, considered to be critical in the Aβ42 aggregation process [127]. The studies reported above revealed that Cu(II) and Zn(II) exhibit a different binding site preference within the N-terminal region of the Aβ peptide. Indeed, the addition of Zn(II) in excess is not able to completely remove Cu(II) from its primary binding sites. In particular, the copper ion cannot be displaced from the N-terminal group, which is its preferred coordinating site. Interestingly, Cu(II) can be shifted from binding to the two His residues, His13 and His14, when Cu(II) to Zn(II) ratios are low [128]. The formation of this ternary metal complex may play a role in the redox activity of metal-Aβ complexes and justify the protective role of zinc in comparison with copper [85,129]. The amino acid sequence of tau protein is relatively rich in histidine residues, and the imidazole side chains can be considered potential metal binding sites (see Figure 2). Almost all complexation studies reported in the literature have focused on the pseudo-repeats (R1-R4 domains) located in the microtubule-binding region of tau protein [86-89,114]. These studies confirmed the involvement of the histidyl residues and the deprotonated amide nitrogens as the primary metal binding sites of the microtubule region. Few works have been reported on the metal complexes with peptides derived from the region outside the microtubule-binding domain of tau protein. In particular, recent studies described for the first time the Cu(II) binding ability of tau peptides from the N-terminal region of tau protein [130,131]. The results indicate that Cu(II) can bind the N-terminal domain using the histidine residues or the amino group as anchoring sites. It was also revealed that the complexes formed with the peptides containing the H32 residue predominate over those of H14 at physiological pH values, with the formation of imidazole- and amide-coordinated species.
Interestingly, the copper binding affinity of the histidyl residue (H32) is greater than that of the histidines in the microtubule domain, suggesting that the N-terminal domain of tau protein could be an additional coordinating site for copper ions [86]. No data are available on the formation of mixed metal complexes involving Aβ and tau, despite molecular in vitro evidence for a direct Aβ/tau interaction. In particular, computational simulations revealed a metal binding affinity of peptide fragments bearing the metal-interacting sequence from tau protein lower than that reported for the Cu(II)-Aβ system [90]. Under these conditions, the formation of a ternary Aβ-Cu-tau complex might occur, as observed in the study of the copper complexes with the Aβ N-terminus and octarepeat peptide fragments derived from the N-terminal part of the prion protein, where computational simulations revealed that the amino terminus is a more effective anchoring site for metal binding [132,133]. Therefore, the possibility of a direct, Cu-mediated interaction between Aβ and tau cannot be ruled out. Cu-bridging coordination might have an important impact on the aggregation behavior because the overall affinity of the peptide-peptide interactions could be considerably increased. Can Targeted Peptide-Based Inhibitors Prevent Aβ/Tau Cross-Seeding in AD? The interplay between the tau protein and Aβ is proving to be more than casual in AD [134,135], and the search for the cross-interaction sites on both proteins may lead to the design of new molecules capable of inhibiting cross-seeded toxic aggregation [136]. While a variety of peptide-based inhibitors of Aβ aggregation have been developed and extensively reported [137-140], fewer examples of peptide inhibitors of tau protein aggregation exist in the literature [86,141]. So far, most AD therapeutic research has focused on Aβ, while fewer efforts have been directed to developing therapeutic compounds targeting tau. The development of peptide drugs can be hampered by their short half-life in vivo due to protease susceptibility; however, the bioavailability of peptides, as well as their blood-brain barrier (BBB) permeability, can be improved by chemical modifications, i.e., incorporation of conformationally constrained amino acids, modifications of the peptide backbone, end-protection and the use of D-enantiomeric amino acids. Recently, Cuadros et al. [143] showed that the w-tau peptide (KKVKGVGWVGCCP-WVYGH), containing two tryptophan residues and derived from the 18-residue unique sequence of w-tau (a new human-specific tau splicing isoform), is able to inhibit not only tau protein assembly but also amyloid β peptide polymerization. A computational seeding model predicts that the amyloid core of Aβ can form intermolecular β-sheet interactions with VQIINK or VQIVYK [144]. It is plausible that cross-seeding of tau by Aβ promotes tangle formation in AD, which could be prevented not only by inhibiting Aβ aggregation but also by disrupting the binding site of Aβ with tau. On this basis, it can be hypothesized that an inhibitor capable of targeting the amyloid core, which is itself an important sequence for Aβ aggregation [145-147], might block both Aβ aggregation and tau seeding by Aβ. The design and synthesis of small peptides and/or peptidomimetics might be a viable strategy to break the pathological cross interaction and prevent protein aggregation.
Rationally designed peptides should prevent polypeptide chain aggregation by an ensemble of concurrent mechanisms operating at (i) the intermolecular interfaces, which may include hampering the electrostatic interaction, (ii) hydrophobic capping of the "hot spot" responsible for molecular recognition, or (iii) managing the metal coordination properties of either Aβ or tau. Many studies have been inspired by the well-known hydrophobic core Aβ16-20 (KLVFF) [148] and the β-sheet breaker peptide LPFFD [149] (see Figure 3) sequences to generate a variety of modified β-sheet breaker peptides [150]. For instance, considering the key role of hydrogen-bonding and electrostatic interactions in the inhibitory activity of β-sheet breaking peptides based on the KLVFF sequence, modifications such as the incorporation of N-methyl-amino acids [151-153], the addition of "disrupter" groups such as oligolysine chains at the C-terminus (KLVFFKKKKK) [154,155], or RG-/-GR residues at the N- and C-terminal ends (RGKLVFFGR or RGKLVFFGR-NH2) have been designed [156]. Other KLVFF-derived synthetic peptides include multiple-peptide conjugates such as a 4-branched KLVFF-dendrimer [157] and six copies of ffvlk (the retro-inverso analogue of KLVFF) linked to branched hexameric polyethylene glycol (PEG) [158]. Moreover, by reducing the conformational freedom of the peptide, a cyclic KLVFF-derived peptide aggregation inhibitor was developed as well [159,160]. The introduced modifications enhanced the inhibitory effect on Aβ aggregation compared to the unmodified sequences KLVFF or LPFFD, allowing a potential use of the compounds as therapeutics in Alzheimer's Disease. The antifibrillogenic and neuroprotective ability of a zinc-porphyrin-KLVFF conjugate was also reported [161]. The authors demonstrated enhanced amyloid suppression properties and inhibition of the cytotoxic effects of Aβ42's oligomers by this derivative with respect to the unconjugated KLVFF parent peptide [161]. Moreover, the fluorescent porphyrin endows this peptide conjugate with theragnostic properties, enabling the identification and visualization of soluble Aβ aggregates in biological fluids and photodynamic therapy. The β-sheet breaker peptide LPFFD has been subjected to chemical modifications as well: LPFFD-PEG [162], LPFFD bearing an N,N-di-methyltaurine or taurine moiety at the N-terminus [163], and N- and C-terminal protections to minimize exopeptidase cleavage, together with modifications at the catabolic sites of the sequence to prevent proteolytic degradation, have been reported [164]. Furthermore, LPFFD-modified analogues were screened by a set of in vitro and in vivo assays to study their application as peptide drug candidates, increasing stability while simultaneously maintaining (or enhancing) potency, brain uptake, compound solubility, and low toxicity [164]. The authors suggest that by introducing a series of chemical modifications, the pharmacological profile of LPFFD can be improved, and these strategies could be beneficial in the treatment of Alzheimer's Disease. A trehalose moiety has been covalently attached to the LPFFD peptide at different sites of the sequence to endow these systems with optimal bioavailability in terms of higher stability toward proteolytic degradation within biological fluids and hence better opportunities for potential clinical trials [165-167]. Trehalose was chosen because of its protein-stabilizing ability and neuroprotective action [165-167].
All the LPFFD conjugates interfered with Aβ's fibrillation process by recognizing the "hot spots" responsible for Aβ oligomerization and fibril formation. A significant cytoprotective effect toward pure cultures of rat cortical neurons was also observed for all the synthesized derivatives, alongside a concomitant activation of cell viability signaling pathways [167]. It would be of interest to explore whether the above-described peptides, known to prevent aggregation of amyloid-β, would exhibit a promising dual role in preventing amyloid-β as well as tau aggregation, as demonstrated by Gorantla et al., who screened LPFFD, KLVFF, and related derivatives containing thymine and sarcosine units (Table 1) [168]. Griner et al. designed several peptide-based inhibitors whose effectiveness for both Aβ and tau suggested that there is a common binding interface on the Aβ and tau aggregates [169] (Table 1). Starting from the crystal structure of the segment 16-26, containing the mutation D23N, the authors identified two octapeptides of the D-series designed against Aβ 16-26 D23N: (D)-LYIWIWRT and (D)-LYWIQKT. Moreover, they found that the solvent-accessible residues K16, V18 and E22 of Aβ are important for tau seeding. The dual efficacy of the inhibitor (D)-LYIWIWRT suggests that Aβ and tau share a common structural motif in AD and that there is a direct interaction between the Aβ core and the amyloid-prone regions of tau. Interestingly, these inhibitors are specific for Aβ and tau, and not for other proteins such as hIAPP [169]. (Figure 3 caption: aggregation-prone Aβ segments of residues 11-16, NKGAII (residues 27-32), and GGVVIA (residues 37-42) [142]; the hydrophobic core region KLVFF [148,169]; and the β-sheet breaker peptide LPFFD [149].) The VQIVYK fragment has been used as a model system for the development of cyclic tau aggregation peptide inhibitors: the cyclic D,L-α-peptides [lJwHsK] (square brackets indicate the cyclopeptide; upper- and lower-case letters represent L- and D-amino acids, respectively; J indicates norleucine), whose self-assembled structure is similar to amyloids. These designed cyclic D,L-α-peptides (Table 1) may cross-react with Aβ through a complementary sequence of hydrogen-bond donors and acceptors to modulate Aβ aggregation and toxicity. These cyclic peptides can inhibit the formation of toxic aggregates and can disassemble preformed Aβ fibrils by interacting with several regions of the soluble Aβ sequence and inducing structural changes in unfolded Aβ [170,171]. An emerging approach relies on the use of multifunctional peptides, having metal-chelating and antifibrillogenic properties, as potential elements to counteract metal-induced amyloid toxic aggregation [173]. In this regard, two Aβ/tau chimera peptides, namely Ac-EVMEDHAKLVFF-NH2 (τ9-16-KLVFF) and Ac-QGGYTMHQKLVFF-NH2 (τ26-33-KLVFF), have been designed and synthesized (Table 1) [90,141]. These peptides present, at their C-terminal part, the Aβ-recognizing hydrophobic sequence Aβ16-20, whereas the metal binding sites are represented by the amino acid strings of the 9-16 or 26-33 region of tau's N-terminal domain. The interaction between Aβ and the chimera peptides was studied by means of thioflavin-T (ThT) fluorescence and limited proteolysis MALDI-TOF mass spectrometry. These studies aimed at determining the anti-fibrillogenic activity of the two "chimera" peptides under different environmental conditions.
The study was carried out in the presence of lipid membranes consisting of TLBE, as well as in the presence of copper or zinc ions, and the peptides were proposed as potential candidates for future in vitro studies addressing the inhibition of pathogenic Aβ/tau accumulation. Forthcoming studies are in progress to elucidate the cross-interaction with tau-related peptides/protein. Conclusions Beyond the initial amyloid cascade hypothesis, which postulated no interaction between Aβ and tau, several lines of evidence, either molecular or clinical, indicate the existence of an interplay between Aβ and tau accumulation. It is now well established that the Aβ peptide and tau protein are mutually interconnected in AD pathogenesis, but neither how the misfolding of one of these two amyloid proteins may affect the other, nor the molecular pathways underlying aberrant Aβ/tau cross-interaction, have been fully elucidated yet. Accumulating evidence suggests that tau is physiologically released into the extracellular space, independently of cell death or neurodegeneration, where it can interact with the Aβ peptides [174,175]. Several hypotheses suggest that peptide patches of both Aβ and tau can interact together, either in the monomeric or aggregated forms, to facilitate cross-seeding [25,142,176]. Interestingly, in the case of the Aβ peptide, the mainly involved amino acid patches may include the Aβ16-21 region [142], whereas the repeat domains or PHF6 peptide fragments have been invoked for the tau protein [177]. However, for a more comprehensive understanding of the molecular events triggering Aβ/tau interactions, additional tau amino acid regions should be considered [178]. In this review, we have extended the discussion by also including those papers describing the cross-relationships between the full-length Aβ peptides and some peptide fragments belonging to the N-terminal domain of the tau protein [130,141]. These studies have only recently appeared in the literature, with dedicated emphasis on the role played by metal ions in modulating the Aβ/tau interplay, also under different environmental conditions. Indeed, besides a brief description of the putative physiological roles of Aβ and tau, the present review illustrates the recent studies carried out on the potential role of membranes in the Aβ/tau cross-interactions, with the aim to better define whether membranes could be the preferred site where protein-protein interactions may occur [179]. It turned out that the lipid membrane can effectively drive tau's structural transition from random coil to β-sheet, and this might potentially occur also in the presence of Aβ. Finally, this review addressed the importance of appropriately designed multifunctional peptides as modulators of the protein-protein interactions, also in the presence of metal ions. In this context, we believe that there are margins for potentially therapeutic interventions against AD, using multifunctional peptides that are capable of inhibiting Aβ/tau co-aggregation by also modulating metal-driven cross-interactions between Aβ and tau protein in specific, biologically relevant compartments, including the cell membrane interface. Considering all the above, we can conclude that most of the literature cited in this review supports the theory that mutual Aβ and tau interactions strongly contribute to exacerbate AD pathology [26]. However, the understanding of the molecular relationship and mechanism of the Aβ and tau interaction remains in its infancy.
Such a conclusion comes from the awareness that, at present, there is no clear in vivo experimental evidence describing the molecular association between Aβ and tau in the various regions of the central nervous system. Nevertheless, the emerging scenario continues to stimulate scientific research toward therapeutic solutions that consider both Aβ and tau as therapeutic targets for an effective fight against Alzheimer's Disease. This is a quite ambitious, yet compelling, goal to achieve, but other intrinsic (i.e., genetic), environmental and lifestyle aspects must be taken into consideration, in addition to the molecular traits we have considered in this review. In any case, it is apparent that the latest research developments in the molecular events underlying AD increasingly underscore the need to restrict the pathological synergism downstream of the aberrant interaction between Aβ and tau proteins. Funding: The authors thank the CNR (Italy)-HAS (Hungary) bilateral agreement for partial support. Conflicts of Interest: The authors declare no conflict of interest.
Modeling and Numerical Simulation of Semitensioned Mooring Line under Taut-Slack State. In this paper, considering that the mooring system can alternate between taut and slack states during motion, the dynamic equations of the mooring system are derived using the theory of large deformation. The nonlinear dynamic response of the semitensioned mooring line is numerically simulated. Assuming the platform motion is known, the effect of the platform on the mooring line is simplified as an end-point excitation. By directly applying the finite difference method to numerically solve the partial differential equations of the mooring line, the dynamic responses can be obtained. Then, the causes of the nonlinear state are analyzed, and the location where the taut-slack phenomenon occurs can be identified by calculating the tension of the mooring. The results show that the mooring line is more likely to go taut-slack under tangential excitation, with the accompanying tension change, while the mooring remains taut under normal excitation. The taut-slack state of the mooring line is concentrated near the anchor point. Through the amplitude-frequency curve and the bifurcation point set, it is found that the taut-slack region is accompanied by multiperiod motion, and the taut-slack phenomenon leads to unstable motion. Introduction With the development of ocean engineering technology, offshore oil production has gradually shifted from shallow sea to deep sea. As an indispensable part of the positioning of deep-sea oil platforms, the mooring system has become one of the key technologies that must be studied. Due to its structural characteristic of a large length-diameter ratio, the mooring line's dynamic tension fluctuates greatly, resulting in an alternation of taut and slack states. Meanwhile, the constitutive relation of the mooring line also has nonlinear characteristics. Therefore, the analysis of the dynamics of a mooring line under the taut-slack state has always been a concern in the field of marine engineering. The numerical analysis of mooring lines is basically realized by establishing equations and developing finite element models. Perkins and Mote [1] derived the three-dimensional dynamic equations of an elastic cable fixed at both ends by the Hamilton principle. Tangential, normal, and binormal motions along the cable were studied, respectively, and a theoretical model of the floating body was established for the analysis of cable vibration in water [2]. According to this model, Tang et al. [3,4] studied the tension characteristics and impact tension under taut-slack conditions. Ivan and Neven [5] developed a finite element model considering the diameter and axial deformation of the mooring line, which can calculate the hydrodynamic force more accurately when simulating the cable's large motion. Meng et al.
[6] designed a mooring system for a wave energy converter using the second-order Stokes theory and verified the stability of the mooring system and the whole device through model experiments. Behbahani-Nejad and Perkins [7] studied longitudinal-transverse coupled waves propagating along elastic cables; a mathematical model describing the three-dimensional nonlinear response of tensile elastic cables was presented, and the asymptotic form of the model was derived from the linear response of a cable with a small equilibrium curvature. Luo and Huang [8] studied a theoretical hydrodynamic model developed to describe the coupled dynamic response of a submerged floating tunnel (SFT) and mooring lines under regular waves. In that model, wave-induced hydrodynamic loads are estimated by the Morison equation for a moving object, and the simplified governing differential equation of the tunnel with mooring cables is solved using the fourth-order Runge-Kutta and Adams numerical methods. All the above scholars obtained the continuity equations of the system by establishing continuity models; the equations obtained this way are generally more practical but relatively complex and difficult to solve. In methods such as the lumped mass method, the stiffness matrix and the recursive relations of the mooring can easily be written through the constitutive relations between elements, but the deformation of the elements and the geometric nonlinearity of the mooring are easily ignored. Therefore, in this research, the dynamic equations of the mooring system are deduced from the large deformation theory. This model can be solved by numerical methods, and the geometric nonlinearity of the mooring is considered. More and more software packages can also be used to simulate mooring systems, such as AQWA [9], FhSim [10], and OrcaFlex [11]. However, it is difficult to capture the taut-slack phenomenon with such software simulations. Problems of the taut-slack effect and impact tension have been studied by several scholars. Huang and Vassalos [12] used a spring-mass model to solve for the impact tension of a cable passing from the taut to the slack state. Qiao et al. [13] analyzed the impact tension in the taut-slack state of a tensioned mooring using the finite element method. Wang et al. [14] studied the influence of a submerged buoy on the taut-slack state of the mooring line using the lumped mass method. Zhang et al. [15] studied the impact tension of a taut mooring cable using the finite element method. Touzon et al. [16] used the catenary model to study the mooring system of a wave energy converter and found that the sudden force generated by the floating body has a significant influence on cable tension and resistance. Hsu et al.
[17] studied the damping and impact load of an anchor chain in a shallow-water environment by experimental methods. Xu and Guedes Soares [18-20] studied the hydrodynamic response of a point absorber and the dynamic response of its mooring system through a series of regular and irregular wave model tests on a buoy wave energy converter. Subsequently, based on the model experiment results, a new Markov chain by Bayesian inference method was proposed to study the short-term extreme mooring tension. A mixed-distribution model for extreme mooring tension and fatigue damage analysis was also proposed: the parameters of the mixed model were estimated by the expectation-maximization algorithm, and the extreme tension analysis was carried out by a de-clustering method to investigate the influence of the clustering settings on the prediction of extreme tension. Low et al. [21] studied the effect of line dispersion and the seafloor model on tension fluctuation using the spring-pad method and an improved seafloor reaction model. Hsu et al. [22] proposed a comprehensive probability distribution model to predict the impact load on the mooring line when the wind turbine experiences large wave- and wind-induced motions. Zhang et al. [23] used the hyperbolic tangent (tanh) method to transform the nonlinear partial differential equation of the taut-slack mooring system into a nonlinear algebraic equation for solving, and studied the tension changes of the mooring line in the process from slack to taut. The above scholars have studied the taut-slack state of moorings and found that this state is accompanied by impact loads. The impact load is fatal to the mooring in engineering, so it is very necessary to study the mechanism of the taut-slack state of the cable. In addition, some scholars have also studied the impact tension of mooring systems by experimental methods. Gomes et al. [24] conducted an experimental study on the buoy-swinging wave energy converters of a five-device array in a wave pool with different configurations, focusing on the analysis of the devices' motion and mooring tension. Liang et al. [25] evaluated the feasibility of a simplified mooring system by model experiments on a very large floating structure (VLFS) mooring system composed of 20 mooring chains. Gao et al. [26] conducted a series of tests in a tank with a model scale of 1:70 to study the dynamic response of a catenary-type mooring line when wave and heave motions were applied; the dynamic tension and trajectory of the mooring lines were measured. Furthermore, the numerical results of this paper show the same trend as the experimental results of Gao et al. [26]. In this paper, the dynamic equations of the mooring system are deduced from the large deformation theory. The nonlinear dynamic response of the semitensioned mooring line is numerically simulated. Assuming the platform motion is known, the effect of the platform on the mooring line is simplified as an end-point excitation. By directly applying the finite difference method to numerically solve the partial differential equations of the mooring line, the dynamic responses can be obtained. Then, the causes of the nonlinear state are analyzed, and the location where the taut-slack phenomenon occurs can be identified by calculating the tension of the mooring. This research is based on the assumption that the model ignores the wave load generated by fluid motion, i.e., a quasi-static fluid, and the difference scheme of the finite difference method has been verified in previous research.
Model When the ratio of the sag to the length of the mooring is greater than 1/8, the mooring is generally considered to be in a slack state and cannot be regarded as a taut string, so the static stretching theory is no longer applicable [27]. As the mooring is forced to vibrate at large amplitude and low frequency by the end excitation, local positions of the mooring will exhibit the two alternating taut-slack states. For an elastic elongated structure, this is a form of local geometric large deformation, so it is necessary to introduce the theory of large deformation to describe the dynamics model of the mooring system. Considering an infinitesimal body of the mooring structure undergoing large deformation, the coordinate system is established as shown in Figure 1. The generalized Serret-Frenet formulas are introduced [28], where r is the radius of curvature, τ is the radius of torsion, and s is the natural coordinate of the mooring. Figure 1 shows the force analysis of the infinitesimal body: R is the position vector of the element, T is the tension, F is the external load acting on the mooring, G is gravity, ρ is the mooring line density, and A is the effective cross-sectional area. From the momentum theorem, the equation of motion is decomposed in three directions: tangential, normal, and binormal. Substituting equations (1)-(3) into (6a), equation (6b) is obtained; then, the three-dimensional dynamic equations of the infinitesimal body can be obtained, where U1, U2, and U3 represent the displacements of the cable along the tangential, normal, and binormal directions relative to the equilibrium position, respectively, and l_t and l_n are the direction cosines of the body coordinates in the absolute coordinate system [29]. The absolute coordinate system is a Cartesian coordinate system fixed on the seabed. In the study of the mooring system, the mooring and the excitation load lie in the same plane, so the binormal motion is smaller than that in the other directions and can be ignored; hence, the in-plane motion is mainly considered [27], and the mooring system can be shown as in Figure 2. Ignoring the spatial state of the mooring, 1/τ = 0 [28], so that equations (7) and (8) reduce to equations (10) and (11), where P is the static equilibrium tension, E is the elastic modulus, A is the effective cross-sectional area, κ is the curvature function of the mooring line in the static state, ε is the dynamic tensile strain of the mooring line along the neutral layer in Lagrangian coordinates, s is the body coordinate, ρ_w is the fluid density, and g is the gravitational acceleration. The obtained dynamic equations (10) and (11) are consistent with those established by the Hamiltonian-principle method in the literature [2]. This means that the mooring's motion conforms to a conservative system, but the boundary excitation needs to be considered separately. Since it is assumed that the mooring line is in a quasi-static fluid and its diameter is smaller than the wavelength, the hydrodynamic force comes from the motion of the mooring structure and can be approximated by the Morison formula [5]. The tangential and normal hydrodynamic forces acting on a unit mooring length can then be expressed as F1 and F2, respectively, where C_at and C_dt are the inertia and drag force coefficients in the tangential direction, C_an and C_dn are the inertia and drag force coefficients in the normal direction, and d is the effective diameter of the mooring.
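Since the paper's expressions for F1 and F2 are not reproduced here, the following Python sketch shows a generic Morison-type load per unit length of line, with an inertia (added-mass) term and a quadratic drag term; the coefficient placement, diameter, and kinematics are assumptions for illustration only.

```python
import numpy as np

rho_w = 1025.0          # sea water density, kg/m^3
d = 0.09                # effective mooring diameter, m (placeholder value)
A = np.pi * d**2 / 4.0  # effective cross-sectional area, m^2

def morison_per_length(v, a, C_a, C_d, width):
    """Inertia (added-mass) plus quadratic drag force per unit length;
    v, a: structure velocity and acceleration in the chosen direction."""
    inertia = rho_w * C_a * A * a
    drag = 0.5 * rho_w * C_d * width * np.abs(v) * v
    return inertia + drag

# normal direction: drag acts on the projected width d (assumed coefficients)
F2 = morison_per_length(v=1.0, a=0.5, C_a=1.0, C_d=1.2, width=d)
print(F2)
```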
The boundary condition at the anchor point of the mooring line is fixed, while the upper point U_D(t) is given different displacement functions according to the working conditions: U_D(t) = U_Dτ(t) + U_Dn(t), where U_Dτ(t) is the tangential displacement and U_Dn(t) is the normal displacement. Selection of Parameters and Difference Scheme The structural and environmental parameters of the mooring line are shown in Table 1. Due to the forced vibration of the mooring system under the given platform excitation, the two taut-slack states occur repeatedly and alternately during the motion. During the slack state the tension tends to 0, and the impact at re-tautening suddenly increases the tension several times [5], which makes the vibration state of the mooring system very complex. The PDEs (partial differential equations), equations (10) and (11), have the hyperbolic form of wave equations, and this is a dynamic boundary problem. In addition, the fluid drag force is also a nonlinear term. Therefore, the finite difference method is the most suitable for direct numerical solution [30]. During the motion of the mooring system, a local taut-slack phenomenon occurs, so high accuracy is required in the numerical calculation. The difference scheme adopted is as follows [31]; some scholars have used this scheme to solve similar equations of flexible bodies and obtained fairly accurate results [29,32]. In the spatial and time difference schemes, subscript j represents the spatial coordinate and superscript k represents the time coordinate; u is the displacement, v is the velocity, a is the acceleration, ∆t is the time step, and α1, α2, β1, and β2 are the weights of the acceleration and velocity at adjacent time steps, respectively, with values {1, 1, 1, 2} [29]. Dynamic Response Analysis of Mooring Line with Excitation. In establishing the mathematical model, the body coordinates have been used to divide the mooring motion into tangential and normal components, and there is a nonlinear coupling relationship among the dynamic strains generated by the mooring motion in different directions. To study the patterns of the nonlinear phenomena between tangential and normal motion, the normal motion caused by tangential excitation and the tangential motion caused by normal excitation are calculated; it is assumed that the excitation applied at the upper point of the mooring replaces the large motion of the floating structure. Dynamic Response of Mooring Line Endpoint Only Subjected to Tangential Excitation. Generally, the period of the first-order simple harmonic ocean wave ranges from 5 to 25 seconds. The swing period of a semisubmersible platform is roughly greater than 100 seconds, and the heave period is generally greater than 20 seconds [27]. Taking the South China Sea as an example, the average wave height can reach 6 m during typhoons. Therefore, it is assumed that there is a sinusoidal displacement excitation in the tangential direction at the upper point of the mooring line, where A_max = 6 m is the tangential excitation amplitude and the excitation frequency is f = 0.2865 Hz (circular frequency ω_A = 1.8 rad/s). The dynamic response of the mooring system under tangential excitation can be obtained from the numerical solution of equations (10) and (11).
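The exact difference formulas of [31] are likewise not reproduced in the text, so the sketch below uses a plain explicit central-difference scheme for a 1D wave-type equation as a simplified stand-in: it shows how the fixed anchor and the prescribed sinusoidal end excitation U_Dτ(t) = A_max sin(ω_A t) enter the time stepping. The wave speed, line length, and node count are placeholder values.

```python
import numpy as np

L, N, c = 1500.0, 300, 100.0   # line length (m), nodes, wave speed (m/s) - assumed
ds = L / (N - 1)
dt = 0.8 * ds / c              # CFL-stable time step
A_max, omega_A = 6.0, 1.8      # excitation amplitude (m) and frequency (rad/s)

u_prev = np.zeros(N)           # displacement at step k-1
u = np.zeros(N)                # displacement at step k
for k in range(1, 5000):
    t = k * dt
    lap = np.zeros(N)          # central second difference in space
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / ds**2
    u_next = 2.0 * u - u_prev + (c * dt)**2 * lap
    u_next[0] = 0.0                            # anchor point fixed
    u_next[-1] = A_max * np.sin(omega_A * t)   # end-point excitation
    u_prev, u = u, u_next
print(u[:5])
```

The paper's actual scheme additionally carries velocity and acceleration with the weights {1, 1, 1, 2} and includes the nonlinear drag and curvature terms; the stand-in above only illustrates the marching structure and boundary handling.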
In order to avoid the influence of the initial calculation error, the steady-state part of the system motion, after t = 80 s, is selected for study. Figure 3 shows the tension distribution curves of the mooring at different times from 83 s to 84.7 s. It can be seen that the tension of the mooring within 280 m of the anchor point drops to 0 N during the motion, so the taut-slack phenomenon occurs there, while the other parts of the mooring remain taut. Meanwhile, the maximum tension change occurs at the anchor point; the tension change along the mooring gradually decreases, reaching its minimum near 1110 m from the anchor point, and then gradually increases up to the excitation point. According to the tension distribution on the mooring, Figure 3 is divided into four areas: I, II, III, and IV. I is the area where the taut-slack phenomenon can occur. II is the area where the total tension cannot reach 0 and the tension change is of the same order as the pretension. III is the area of minimum dynamic tension change, where the amplitude of the dynamic tension is one order of magnitude smaller than the pretension. IV is the area near the excitation point. Four points A1, A2, A3, and A4 are taken from the four different areas of the mooring for analysis. A1 is 130 m from the anchor point and has the maximum total tension during the motion. A2 is 650 m from the anchor point, at the maximum sag of the mooring's static configuration. A3 is 1110 m from the anchor point and has the smallest tension change during the motion. A4 is the excitation point. Figure 4 shows how the maximum displacement amplitude of the tangential motion is distributed along the mooring coordinate. The tangential amplitude reaches its maximum near 1110 m, which is exactly the point with the smallest tension change. This indicates that the motion of the region near this point is relatively synchronous, so the dynamic strain is small and the tension changes little. Figure 5 shows the distribution of the maximum displacement amplitude of the normal motion along the mooring; the maximum displacement appears near the anchor point. Since the mooring is subjected to tangential displacement excitation, its normal motion is caused by the geometric nonlinear coupling of the system structure. This indicates that the mooring near the upper point mainly moves under the displacement excitation, but the closer the mooring is to the anchor point, the greater the normal motion that appears through the coupling effect. The dynamic responses of points A1, A2, A3, and A4 on the mooring during the motion are shown in Figures 6 to 8.
It can be seen from Figure 6 that the amplitude of the total tension gradually decreases from the excitation point along the mooring line toward the anchor point, reaching its minimum at point A3; the amplitude then increases again and reaches its maximum at point A1. This means the tension near the anchor point changes drastically, so the taut-slack phenomenon is more likely to appear there.

The mooring configuration can be described from the calculated dynamic response. According to Figure 7, since the length of the mooring is large relative to the excitation amplitude, the overall configuration does not change significantly. The labels x (m) and y (m) represent the horizontal and vertical span in an absolute coordinate system fixed on the seabed. The local magnification shows that the mooring near the anchor point has unstable motion, while the motion of most other regions remains stable. Therefore, the region of the mooring near the anchor point is worth studying.

As shown in Figures 8 and 9, the periods of the tangential and normal motions are consistent with the excitation period. The tangential motion is forced vibration, while the normal motion is caused by nonlinear coupling, so its period is the same as that of the tangential motion.

Taking the motion of the mooring 10 m from the anchor point as an example, the time histories of the tension are shown in Figure 10. In Figure 10(a), when a point of the mooring becomes slack, the total tension at that point is 0, and the slack state lasts for a while. As mentioned previously, the mooring enters the slack state within 280 m of the anchor point. As shown in Figure 10(b), the closer the slack state appears to the anchor point, the longer its duration: at 280 m from the anchor point the slack state lasts 0.02 s in one period, while at the 10 m point it lasts 0.579 s.

The response of the 10 m point is shown in Figure 11. In one period, the point moves rapidly to the wave trough in the negative direction and then moves to the peak in the positive direction at a slower speed. The reason is the taut-slack phenomenon: when the point moves toward the wave trough, the tension at that point is 0 and the mooring is slack; as the point moves toward the peak, the mooring gradually tautens. When the mooring changes from slack to taut, the sudden increase in tension tightens the mooring instantly, and because of elasticity the mooring falls and rises regularly in the normal direction.

The Fourier transform of the normal motion is shown in Figure 12, giving the amplitude-frequency curve. It can be seen that the regular falling and rising of the normal motion is composed of multiple periodic components, with motion frequencies at integer multiples of the excitation frequency.

Furthermore, it can be seen from Figure 11 that the normal motion is in phase with the tension, while the tangential motion is out of phase with it. This is because T = EA((∂U₁/∂s) − κU₂), and ∂U₁/∂s is in phase with the tension and the normal motion. The position 10 m from the anchor point tends to be flat, so the curvature is very small (κ = 0.0005); therefore the tension is in phase with ∂U₁/∂s and similar to it in shape.
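A minimal sketch of how an amplitude-frequency curve like that of Figure 12 can be obtained from a sampled displacement record, assuming a uniformly sampled steady-state time series; the synthetic signal used here (excitation frequency plus harmonics) is illustrative only, not the paper's data.

```python
import numpy as np

# Sketch: one-sided amplitude spectrum of a normal-displacement record.
dt = 0.01                          # sampling interval [s] (assumed)
t = np.arange(0, 200, dt)          # steady-state window
f_exc = 0.2865                     # excitation frequency [Hz]
# Synthetic stand-in for the measured displacement: fundamental plus higher
# harmonics, mimicking the multi-period response near the anchor point.
y = (0.5 * np.sin(2 * np.pi * f_exc * t)
     + 0.2 * np.sin(2 * np.pi * 2 * f_exc * t)
     + 0.1 * np.sin(2 * np.pi * 3 * f_exc * t))
amp = np.abs(np.fft.rfft(y)) * 2 / len(y)   # one-sided amplitude spectrum
freq = np.fft.rfftfreq(len(y), dt)
print("dominant frequencies [Hz]:", np.round(freq[amp > 0.05], 4))  # multiples of f_exc
```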
Under this displacement excitation, the bifurcation diagram of the normal displacement at each point along the cable coordinate can be obtained. As shown in Figure 13, 0 is the anchor point. Points from the excitation point down to 50 m of the mooring are in single-period motion, which corresponds to region II, while from 50 m to the anchor point multiperiod motion appears in region I. The closer to the anchor point, the more multiperiod solutions appear; therefore multi-order frequencies occur in the motion near the anchor point. This is because, when the mooring is slack, the part of the mooring close to the anchor point recovers its original length with 0 tension and rests on the seafloor; when the mooring becomes taut again, this part is lifted and strained once more, so the multiperiod solutions appear.

In addition, the maximum Lyapunov exponents of the displacement at each point of the mooring can be calculated by the Wolf method [33]. In Figure 14, within 280 m of the anchor point the maximum Lyapunov exponent is greater than 0, meaning that the closer the tangential motion is to the anchor point, the more unstable it is. This part of the mooring is also the region where the taut-slack phenomenon occurs, which indicates that the taut-slack phenomenon leads to unstable motion. From 280 m to the excitation point the maximum exponent is approximately 0 or negative, indicating that this part of the mooring is in stable periodic motion.

As shown in Figure 15, the exponent of the normal motion is mostly around 0. According to the previous calculation results, the normal motion is periodic and depends mainly on the geometric nonlinear coupling of the mooring structure, so no unstable motion state appears there.

Moreover, not every excitation amplitude and frequency leads to the taut-slack phenomenon. As shown in Figure 16, the plane of displacement excitation amplitude (1 to 12 m) and excitation frequency (0 to 0.4456 Hz, circular frequency 2.8 rad/s) is divided into two parts. In region I the mooring is always taut, while in region II (including the boundary line) a slack state occurs in which the local tension of the mooring is 0. From Figure 16 it can be seen that when the excitation frequency is 0.1 Hz, the displacement excitation amplitude must reach 10 m to produce a slack state; as the amplitude decreases, a slack state requires an increasing frequency. This means the taut-slack phenomenon appears only when the energy in the system reaches a certain level.
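The paper computes the maximum Lyapunov exponents with the Wolf method [33], which is not reproduced here. As a rough stand-in, the sketch below estimates the largest exponent from a scalar displacement time series with a simplified Rosenstein-style procedure (delay embedding, nearest-neighbour divergence, slope fit); it is not the Wolf algorithm, but it illustrates the same diagnostic: a positive slope indicates unstable motion. The embedding parameters and the test series are assumptions, not values from the paper.

```python
import numpy as np

def largest_lyapunov(x, dt, dim=5, tau=10, min_sep=100, horizon=60):
    """Rosenstein-style estimate of the largest Lyapunov exponent of series x."""
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    # pairwise distances without building an n x n x dim intermediate
    sq = (emb ** 2).sum(axis=1)
    dist = np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2.0 * emb @ emb.T, 0.0))
    # nearest neighbour of each point, excluding temporally close samples
    for i in range(n):
        dist[i, max(0, i - min_sep) : i + min_sep + 1] = np.inf
    nn = np.argmin(dist, axis=1)
    # mean log separation of neighbour pairs after k steps
    div = []
    for k in range(horizon):
        idx = np.arange(n - k)
        ok = nn[idx] < n - k
        seps = np.linalg.norm(emb[idx[ok] + k] - emb[nn[idx[ok]] + k], axis=1)
        seps = seps[seps > 0]
        div.append(np.log(seps).mean())
    return np.polyfit(np.arange(horizon) * dt, div, 1)[0]  # > 0: unstable motion

# usage with a synthetic periodic stand-in (expect an exponent near 0):
t = np.arange(0.0, 100.0, 0.05)
x = np.sin(1.8 * t) + 0.3 * np.sin(3.6 * t)
print("estimated largest exponent: %.4f 1/s" % largest_lyapunov(x, 0.05))
```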
Dynamic Response of Mooring Line Endpoint Only under Normal Excitation. When only the normal excitation is applied at the upper point of the mooring, the boundary condition can be expressed as follows:

where B_max = 6 m is the excitation amplitude and the excitation frequency is f = 0.2865 Hz (ω_B = 1.8 rad/s). Figures 17 to 19 describe, respectively, the time history curves of the total tension, tangential motion, and normal motion at 440 m, 530 m, 820 m, 1020 m, and 1230 m from the anchor point. Figure 17 shows that the total tension at each point of the mooring does not change much. The tangential motion of the mooring is caused by nonlinear coupling with the normal motion, but its amplitude is an order of magnitude smaller than that of the normal motion, and the amplitude of the normal motion itself does not change much. This shows that motion under normal displacement excitation cannot be transmitted far: it mainly affects the mooring near the excitation point, and the closer to the anchor point, the smaller the change in displacement amplitude. More importantly, there is no sudden change in the motion, which means there is no taut-slack phenomenon; the taut-slack phenomenon is mainly induced by tangential displacement excitation.

Conclusions

In this paper, the nonlinear dynamic response of a semi-tensioned mooring line in quasi-static fluid is numerically simulated. Assuming that the motion of the platform is known, the effect of the platform on the mooring is simplified as an endpoint displacement excitation, and the dynamic response of the mooring line is calculated numerically by the finite difference method. The calculation results are consistent with the experimental results in the literature [26], and the following conclusions are obtained:

(1) Considering that the mooring system can alternate between taut and slack states during motion, the dynamic equations of the mooring system are derived using large-deformation theory.

(2) Under tangential and normal excitation, the conditions for taut-slack motion of the mooring are analyzed. It is found that when tangential excitation is applied, the mooring enters the taut-slack state, whereas under normal excitation the system always remains taut. For mooring lines with the parameters used in this paper, the region of excitation amplitude and frequency that causes the taut-slack motion state is given.

(3) The taut-slack state of the mooring line is concentrated near the anchor point. During the transition from slack to taut, the sudden increase of tension tightens the mooring line instantly and makes the mooring in normal motion fall and rise regularly. Through the amplitude-frequency curve and the bifurcation point set, it is found that the taut-slack region is accompanied by period-doubling motion.

(4) The stability analysis of the motion time series under tangential excitation shows that the taut-slack phenomenon leads to unstable motion, while the normal motion depends mainly on the geometric nonlinear coupling of the mooring structure, so no unstable motion state appears there.
In conclusion, the numerical simulation results show that tangential excitation of the mooring line can change the tension greatly, so it easily produces the taut-slack state, and large-amplitude normal motion is also induced through the coupling effect. When the system is excited in the normal direction, tangential motion is not easily excited; the system remains taut, and its tension fluctuates within a certain range. Therefore, in order to avoid the impact caused by the taut-slack transition and the sudden tension changes in the system under actual working conditions, the excitation regions where this can happen should be avoided, and the tangential component of the excitation should be limited in particular.

Figure 2: Schematic diagram of mooring line motion.
Figure 3: Tension distribution of mooring line at different times.
Figure 4: Amplitude distribution of tangential motion of mooring line.
Figure 9: Time history of normal motion at A1, A2, A3, and A4.
Figure 10: (a) Time history of total tension at 10 m from the anchor point and (b) time history of total tension at positions where the cable is slack.
Figure 11: Time history of normal motion at 10 m from anchor point.
Figure 12: Amplitude-frequency curve of normal motion at 10 m from anchor point.
Figure 13: Bifurcation diagram of normal displacement at each position on the cable: (a) the whole mooring structure and (b) part of the mooring structure, 0-200 m.
6,683.6
2023-03-20T00:00:00.000
[ "Engineering" ]
Tunable SnO2 Nanoribbon by Electric Fields and Hydrogen Passivation

Under external transverse electric fields and hydrogen passivation, the electronic structure and band gap of tin dioxide nanoribbons (SnO2NRs) with both zigzag and armchair shaped edges are studied using the first-principles projector augmented wave (PAW) potential within the density functional theory (DFT) framework. The results show that the electronic structures of zigzag and armchair edge SnO2NRs exhibit an indirect semiconducting nature and that the band gaps are remarkably reduced as the external transverse electric field intensity increases, demonstrating a giant Stark effect. The critical electric field for bare Z-SnO2NRs is smaller than that for A-SnO2NRs. In addition, the differently hydrogen-passivated nanoribbons (Z-SnO2NRs-2H and A-SnO2NRs-OH) show different band gaps and a slightly weaker Stark effect: the band gap of A-SnO2NRs-OH is obviously enhanced while that of Z-SnO2NRs-2H is reduced. Interestingly, Z-SnO2NRs-OH presents a metal-semiconductor-metal transition under external transverse electric fields. Finally, the electronic transport properties of SnO2NRs with different edges are studied. These findings provide useful routes for nanomaterial design and band engineering for spintronics.

Introduction

Low-dimensional materials have attracted extensive attention due to their excellent physical properties and novel electronic properties over the past decade. Graphene in particular, discovered in 2004 [1], provided a new foundation for the research of low-dimensional systems [2][3][4][5][6]. However, graphene still faces many challenges, such as toxicity, the lack of an intrinsic band gap, and incompatibility with current silicon-based electronic technology, which stand in the way of its application in electronic devices [7]. Since the discovery of graphene, various 2D materials have been predicted and synthesized [8,9], particularly the group IV (Si, Ge, and Sn) analogs of graphene. Stanene may be a competitive alternative to graphene because of its high conductivity [10], and its electronic structure can be easily tuned [11]. Existing works have studied the effects of strain, substrate, external electric field, modification by functional groups, and so on [12,13]. For example, Modarresi et al. [14] tuned the band gap through the strain effect, and an external electric field (E-field) tends to open an energy gap, as predicted by Ren et al.
[15]. In addition, Li and Zhang [16] studied tunable electronic structures and magnetic properties in two-dimensional stanene. Recently, experimental and theoretical studies of one-dimensional (1D) graphene nanoribbons (GNRs) have been widely reported. In theoretical research, zigzag GNRs can display half-metallicity under an applied transverse electric field, making them promising candidate materials for spintronic devices [2,4]. The electronic and optical properties of GNRs can also be modulated by changing the edge termination [17]. Experimental results prove that GNRs with various widths and crystallographic orientations are easier to fabricate and more flexible than carbon nanotubes (CNTs), through standard lithographic procedures or chemical methods [3,5]. Inspired by the fruitful results based on GNRs, much interest has recently been drawn to nanoribbons of wide-band-gap semiconductors, such as boron nitride (BNNRs) [18], molybdenum disulfide (MoS2NRs) [19,20], and zinc oxide (ZnONRs) [21]. In calculations on ZnONRs, the polarized spin density of states is a function of the nanoribbon width. Kou et al. demonstrated that the band gap of a ZnO nanoribbon can be reduced monotonically with increasing transverse field strength, demonstrating a giant Stark effect [22]. For titania nanoribbons (TiO2NRs) it is likewise predicted that the edge states and band gaps are sensitive to edge termination and ribbon width [23]. Tin dioxide (SnO2) is also a typical wide-gap semiconductor with a band gap of 3.6 eV; because of their fascinating electronic properties and optical transparency, SnO2 materials have been widely used in photovoltaic devices, transparent conducting electrodes, solar cells, chemical gas sensors, and panel displays [24][25][26][27][28][29][30]. Recently, many theoretical studies have reported the electronic structures and geometric properties of SnO2 nanomaterials using first-principles methods [31,32]. Moreover, ultrathin SnO2 nanofilms and one-dimensional (1D) SnO2 nanoribbons have been synthesized by thermal evaporation and post-processing in experiments [33]. Motivated by the outstanding properties of GNRs and wide-band-gap semiconductors, we focus on SnO2 nanoribbons, which may yield further advances. In previous work, Huang et al. studied the modulation of the electronic structure and band gap of SnO2 nanoribbons by ribbon width [34].

In this paper, we perform first-principles calculations to study the band gap modulation, electronic structure, and hydrogen passivation of SnO2NRs under transverse electric fields. We find that both armchair and zigzag SnO2NRs without hydrogen passivation are indirect-gap semiconductors and that a semiconductor-metal transition occurs at a certain critical electric field. More interestingly, passivating only the edge O atoms or passivating all the edge O and Sn atoms of SnO2NRs yields distinct properties. Our work offers a promising route toward modulating the band gap of low-dimensional SnO2 nanomaterials, which have potential applications in nanoscale optical devices.

Model and Computational Methods

The electronic and optical properties of the 1D SnO2 nanostructures are calculated with spin-polarized first-principles methods as implemented in the Vienna ab initio simulation package (VASP) [35], and the subsequent electronic transport calculations are performed with the Atomistix ToolKit (ATK) package. Projector augmented wave (PAW) potentials and the Perdew et al.
[36] functional under the spin-polarized generalized gradient approximation are used to describe the exchange and correlation interaction. The supercells are large enough to ensure a vacuum space of at least 15 Å, so that the interaction between the nanostructures and their periodic images is safely avoided. Following the Monkhorst-Pack scheme [37], 12 k-points were used to sample the 1D Brillouin zone, and the convergence thresholds were set to 10⁻⁴ eV in energy and 0.02 eV/Å in force. The numbers of valence electrons for Sn, O, and Ag are 14 (Sn: 4d¹⁰5s²5p²), 6 (O: 2s²2p⁴), and 11 (Ag: 4d¹⁰5s¹), respectively. The positions of all atoms in the supercell were fully relaxed during the geometry optimizations.

The model structures of the SnO2NRs were generated by cutting an SnO2 nanosheet (SnO2NS) along the armchair and zigzag orientations, giving zigzag SnO2 nanoribbons (Z-SnO2NRs) and armchair SnO2 nanoribbons (A-SnO2NRs), as shown in Figure 1. The external electric field is modelled by transversely adding a sawtooth-like potential to the nanoribbon. We note that DFT functionals always underestimate the band gaps of semiconductors, but they reliably predict the correct trend of the band gap change and efficiently unveil the physical mechanism [38,39].

The Band Gap Modulation under External Electric Field in Bare SnO2 Nanoribbons. In both the zigzag and armchair nanoribbons, the unit cell contains 8 tin atoms and 16 oxygen atoms, so in our calculations we choose zigzag and armchair SnO2 nanoribbons with width = 8. The results show that the electronic structures of the zigzag and armchair edge SnO2 nanoribbons exhibit an indirect semiconducting nature with band gaps of about 2.19 eV and 2.08 eV, respectively, in good agreement with our previous results [34]. First, we modulate the band gaps of the bare SnO2 nanoribbons by applying transverse electric fields. Considering the symmetric structures of the Z-SnO2NRs and A-SnO2NRs, the external electric fields are applied in the direction shown in Figure 1. The evolution of the band gaps of Z-SnO2NRs and A-SnO2NRs under different external electric field intensities is shown in Figure 2: the band gaps of the bare ribbons are remarkably reduced when an external electric field is applied, and they eventually close when the field reaches a critical value for each nanoribbon, a phenomenon similar to the armchair ZnO nanoribbon [22]. The critical electric fields for bare Z-SnO2NRs and A-SnO2NRs are 0.124 V/Å and 0.210 V/Å, respectively, which means the semiconductor-metal transition occurs at a lower critical field for Z-SnO2NRs than for A-SnO2NRs.

To further investigate the underlying physical mechanism, the charge redistribution of Z-SnO2NRs and A-SnO2NRs is studied with and without an external electric field. Figures 3(a)-3(b) show the band structure, density of states (DOS), and the corresponding charge densities for the conduction band minimum (CBM) and the valence band maximum (VBM) of Z-SnO2NRs at 0 V/Å (a) and 0.13 V/Å (b). From Figure 3, the band gap of the Z-SnO2NRs is 2.19 eV under a field of 0 V/Å, while the band gap obviously narrows and the DOS near the Fermi level is mainly ascribed to O 2p and Sn 5d states under a field of 0.13 V/Å. In addition, the charge densities for the CBM and VBM of Z-SnO2NRs in Figure 3 show that when an external electric field (0.13 V/Å) is applied, the CBM charge is evenly distributed, whereas the VBM becomes a localized state whose wave function mainly concentrates on the O atoms on both sides of the nanoribbon.
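To first order, the giant Stark effect described above can be rationalized by a linear gap closure E_g(F) ≈ E_g(0) − e·l_eff·F, where l_eff is an effective polarization length of the edge states. The sketch below back-solves l_eff from the gaps and critical fields reported in the text; the linear model itself is our simplifying assumption, not the paper's.

```python
# Sketch: first-order Stark picture of the field-induced gap closure,
# E_g(F) = max(0, E_g0 - l_eff * F) with F in V/Angstrom and E_g in eV
# (the electron charge cancels in these units). l_eff is an effective
# polarization length inferred from the reported critical fields.
ribbons = {
    "Z-SnO2NR": {"gap0": 2.19, "F_crit": 0.124},   # values quoted in the text
    "A-SnO2NR": {"gap0": 2.08, "F_crit": 0.210},
}
for name, p in ribbons.items():
    l_eff = p["gap0"] / p["F_crit"]                # Angstrom, from E_g(F_crit) = 0
    gap_at_01 = max(0.0, p["gap0"] - l_eff * 0.10) # gap at 0.10 V/Angstrom
    print(f"{name}: l_eff ~ {l_eff:.1f} A, gap at 0.10 V/A: {gap_at_01:.2f} eV")
```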
In Figures 4(a)-4(b), the band structure, DOS, and corresponding charge densities for the CBM and VBM of the A-SnO2NRs are illustrated at 0 V/Å and 0.2 V/Å. From Figure 4 we find that the band gap of A-SnO2NRs is 2.08 eV without an electric field, and that the charge densities of the CBM and VBM differ from those of the Z-SnO2NRs: they are edge states with the wave function localized at the edge O atoms. The band gap becomes narrower and the DOS near the Fermi level is mainly contributed by O 2p states under an external field of 0.2 V/Å. Meanwhile, the charges are redistributed and a giant Stark effect appears, similar to ZnO nanoribbons [22,40]. For the Z-SnO2NRs under the external field, the band gap behaves similarly to that of the A-SnO2NRs. In addition, the distribution of the wave function for the CBM shows no obvious change, while the wave function of the VBM gathers at the edge O atoms with hardly any weight at the ribbon center, as shown in Figure 3(b). All in all, when an external electric field is applied, the splitting of subband levels caused by the charge redistribution decreases the band gaps of the SnO2 nanoribbons.

The Band Gap Modulation under External Electric Field in Hydrogen-Passivated SnO2 Nanoribbons. To explore the effect of edge hydrogen (H) termination on the electronic properties of the SnO2 nanoribbons, two different edge hydrogen terminations have been studied: (i) both the edge Sn and O atoms are passivated with H atoms (denoted SnO2NR-2H), and (ii) only the edge O atoms are passivated (denoted SnO2NR-OH), as shown in Figures 1(b)-1(c) and 1(e)-1(f). Compared to the bare SnO2NRs, the behavior of the H-passivated nanoribbons changes significantly. The A-SnO2NRs-OH and Z-SnO2NRs-2H retain semiconducting properties (with band gaps of 2.64 eV and 2.41 eV, respectively), but the band gap of A-SnO2NRs-OH is obviously enhanced while that of Z-SnO2NRs-2H is reduced. Taking the A8-SnO2NR-OH (width = 8) as an example, the calculated band gap is 2.42 eV, slightly larger than the 2.08 eV of the bare armchair ribbon; for Z8-SnO2NR-2H, the calculated band gap is 1.71 eV, obviously smaller than the 2.19 eV of the bare zigzag nanoribbon. The A8-SnO2NR-2H and Z8-SnO2NR-OH present metallic properties, supplied by the 2p states of the edge O atoms and the 5s states of the edge Sn atoms.

To further study the effect of the external electric field on the H-passivated SnO2NRs, we calculate the evolution of the band gaps under different external electric fields in Figure 5. No pronounced differences are found when the field direction is reversed, owing to the symmetric structure. A-SnO2NRs-OH shows a slower decrease of the band gap with increasing electric field and requires a larger critical field to close its gap, similar to Kou et al. [22]; a vital reason is that A-SnO2NRs-OH has a larger band gap than the bare ribbon and the number of unpaired electrons is reduced when H atoms are introduced. For Z-SnO2NRs-2H, the band gap decreases faster than for A-SnO2NRs-OH, with a lower critical field of 0.13 V/Å to close its gap. This corresponds well to the behavior of the bare SnO2 nanoribbons and illustrates that Z-SnO2NRs-2H is more sensitive to the external electric field than A-SnO2NRs-OH. A special case is Z-SnO2NRs-OH, which is metallic (band gap 0 eV) without an external field; interestingly, it transforms from metal to semiconductor when the field reaches 0.07 V/Å. Its band gap reaches a maximum of 1.3 eV when the field increases to 0.15 V/Å and subsequently decreases as the field increases further, similar to Zheng et al. [41].
To reveal the underlying physical mechanism for the different H-passivation models, the band structures and the charge redistributions of the CBM and VBM are examined. Figures 6(a)-6(b) show that the charge distribution of the CBM spreads over the whole ribbon except for the ribbon edges, while the charge distribution of the VBM is totally concentrated at the Sn and O atoms of the ribbon edges, similar to the VBM of A-SnO2NRs. Once an external electric field is applied, the charge distributions of the CBM and VBM are driven to localize at opposite edges owing to the different electrostatic potentials, as analyzed above and shown in Figure 6(b). This shifts the CBM down, because the charge redistributes easily in external electric fields, while the VBM shifts up toward the Fermi level, further narrowing the energy gap.

The band structures and charge redistributions of the H-passivated Z-SnO2NRs are presented in Figures 7 and 8. From Figure 7 we conclude that it is an indirect-gap semiconductor regardless of whether the electric field is applied. The charges of the CBM and VBM mainly concentrate at the O and Sn atoms of the ribbon edges, as shown in Figure 7(a); this differs from the bare Z-SnO2NRs, in which the charge mainly concentrates at the O atoms across the whole nanoribbon. From Figure 7(b), under an electric field of 0.2 V/Å, the charge distributions of the CBM and VBM are driven to localize at opposite edges, similar to A-SnO2NRs-OH. In Figure 8, the wave function of the CBM primarily localizes at the O and Sn atoms of the nanoribbon center, whereas the wave function of the VBM mainly concentrates at the O atoms of the whole nanoribbon and the Sn atoms of the right nanoribbon edge. The hydrogenation of Z-SnO2NRs breaks the symmetry of the nanoribbon.

Electronic Transport.
Lower-dimensional semiconductors such as graphene and silicene nanoribbons are often used to fabricate electronic transport devices [42,43]. The electronic transmission of SnO2 nanoribbons can be explored in terms of their transport properties. The transmission of the SnO2 nanoribbon is determined with a two-probe method: the two ends of the SnO2 nanoribbon are held within the electrodes, and the widths of the left and right electrodes are each 3.15 Å, as shown in Figure 1(h). The electrons near the Fermi level contribute to the electronic transport of the SnO2 nanoribbon; orbital delocalization near the Fermi level results in high electron mobility, which corresponds to certain peak amplitudes in the transmission spectrum [44,45]. The current through the system is calculated using the Landauer-Büttiker formula I = (2e/h) ∫ T(E, V)[f(E − μ_L) − f(E − μ_R)] dE [46,47], where e is the electron charge, h is Planck's constant, T(E, V) is the transmission coefficient at energy E and bias V, and f(E − μ_L) and f(E − μ_R) are the Fermi-Dirac distributions of the left and right electrodes (a minimal numerical sketch of evaluating this integral is given after the Conclusions below). The electronic transmission is characterized by the magnitude of the transmission in a particular energy interval.

Figure 9 presents the transmission spectra of A-SnO2NRs and Z-SnO2NRs. The width of the central scattering region is 0.02 eV and 0.05 eV for A-SnO2NRs and Z-SnO2NRs, respectively. For A-SnO2NRs, transmission peaks are observed at −2.8 eV, −2.4 eV, and −1.7 eV in the valence band and at 1.5 eV and 2.8 eV in the conduction band. Compared with A-SnO2NRs, more peaks of Z-SnO2NRs are observed in the valence band, located at −2.8 eV, −2.5 eV, −1.7 eV, and −1.1 eV, while the locations of the conduction-band peaks are similar to those of A-SnO2NRs. This agrees with the electron density of Z-SnO2NRs, which is more intense than that of A-SnO2NRs. In addition, compared to Z-SnO2NRs, A-SnO2NRs shows a larger region of suppressed transmission near the Fermi level, which is closely related to its larger band gap; this also shows that the transmission spectrum of a material correlates closely with its electronic density of states.

The calculated I-V characteristics of the SnO2NRs are illustrated in Figure 10. From Figure 10 we find that the A-SnO2NR and Z-SnO2NR are in the off state when the bias voltage is between −0.3 V and 0.3 V. When the bias voltage exceeds 0.3 V, the current increases nonlinearly with the same trend for both A-SnO2NR and Z-SnO2NR, similar to the electron conduction of other semiconductors such as ZnO and TiO2. Beyond 0.8 V, the change of the bias voltage not only produces a conduction current but also makes the current grow nonlinearly; this part of the I-V characteristic curve is similar to that of general semiconductor devices. The difference in the I-V curves can be explained by the different band gaps and voltage sensitivities of A-SnO2NR and Z-SnO2NR. The thickness of the SnO2NR is about three atomic layers, so a two-dimensional electron gas can be formed; in addition, similar to graphene, SnO2 can be self-assembled in experiments. Based on these special electronic transport properties, SnO2 nanoribbons have enormous application potential in switching and rectifying devices and in the semiconductor industry in general.

Conclusions

In summary, the electronic structures and band gap modulations of Z-SnO2NRs and A-SnO2NRs influenced by an external transverse electric field and hydrogen passivation have been investigated using the first-principles PAW potential within the DFT framework under GGA. Under a transverse electric field, the band gaps of bare Z-SnO2NRs and A-SnO2NRs are remarkably reduced, and the semiconductor-metal transition appears at a lower critical electric field for Z-SnO2NRs; the charge density near the Fermi level is redistributed. For SnO2NRs with edge hydrogen termination applied in different ways, Z-SnO2NRs-2H and A-SnO2NRs-OH remain semiconducting, but the band gap changes of A-SnO2NRs-OH and Z-SnO2NRs-2H are obviously different, and Z-SnO2NRs-2H is more sensitive to the external electric field than A-SnO2NRs-OH. Z-SnO2NRs-OH transforms from metal to semiconductor when the field reaches 0.07 V/Å. Additionally, the electronic transport calculations show that more peaks of Z-SnO2NRs are observed in the valence band compared with A-SnO2NRs, and the I-V characteristics of A-SnO2NR and Z-SnO2NR show a similar variation tendency. Our results may serve as a useful theoretical reference for designing nanoelectronic and spintronic devices.
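As referenced in the transport discussion above, here is a minimal numerical sketch of the Landauer-Büttiker current integral, assuming a given transmission function T(E); the Lorentzian-peak T(E) used here is a synthetic stand-in loosely inspired by the quoted peak positions, not the ATK-computed spectrum, and the symmetric bias drop is an assumption.

```python
import numpy as np

# Sketch: I(V) from the Landauer-Buttiker formula
#   I = (2e/h) * integral T(E,V) [f(E - mu_L) - f(E - mu_R)] dE
e = 1.602176634e-19   # C
h = 6.62607015e-34    # J s
kT = 0.025            # eV, room temperature

def fermi(E, mu):
    return 1.0 / (1.0 + np.exp(np.clip((E - mu) / kT, -60.0, 60.0)))

def transmission(E):
    # synthetic stand-in: peaks outside a transport gap around the Fermi level
    peaks = [-1.7, -1.1, 1.5]                     # eV, loosely inspired by the text
    return sum(0.8 / (1 + ((E - p) / 0.1) ** 2) for p in peaks)

def current(V, n=4001):
    E = np.linspace(-3.0, 3.0, n)                 # energy grid [eV]
    mu_L, mu_R = +V / 2, -V / 2                   # symmetric bias drop (assumed)
    integrand = transmission(E) * (fermi(E, mu_L) - fermi(E, mu_R))
    return (2 * e / h) * np.trapz(integrand, E) * e   # trailing e converts eV -> J

for V in (0.2, 0.5, 1.0):
    print(f"V = {V:.1f} V  ->  I = {current(V) * 1e6:.3f} uA")
```

With the synthetic transmission gap, the current at 0.2 V bias is negligible, qualitatively reproducing the off state reported below 0.3 V.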
Figure 1: The structure models of the (4 × 1 × 1) zigzag and (1 × 4 × 1) armchair SnO2 nanoribbons for different hydrogen passivations (a-f) and the transport model (h) (the growth direction of the nanoribbons and the direction of the external electric field are indicated). The big gray balls are Sn atoms, the small red balls are O atoms, and the pink balls are H atoms. The red dotted lines represent the unit cell.
Figure 2: Band gap evolution under various external electric fields in bare Z-SnO2NRs and A-SnO2NRs.
Figure 3: Band structure and DOS (left) and the corresponding charge densities for the CBM (right upper) and VBM (right lower) of Z-SnO2NRs at 0 V/Å (a) and 0.13 V/Å (b). The isosurface is 5.0 × 10⁻³ e/Å³. The Fermi level is shifted to zero. Different colors in the curves represent different energy levels.
Figure 4: Band structure and DOS (left) and the corresponding charge densities for the CBM (right upper) and VBM (right lower) of A-SnO2NRs at 0 V/Å and 0.2 V/Å. The isosurface is 5.0 × 10⁻³ e/Å³. The Fermi level is shifted to zero. Different colors in the curves represent different energy levels.
Figure 5: Band gap evolutions under various external electric fields in hydrogen-passivated zigzag and armchair SnO2 nanoribbons with width = 8.

It is important to evaluate the stability of the SnO2NRs, because it determines whether these nanostructures can be realized experimentally. The relative stability of the bare and H-passivated SnO2NRs is determined by the binding energy E_b, defined as E_b = (Σ_i n_i μ_i − E_tot) / Σ_i n_i, where i represents the different elements (Sn, O, and H where present), n_i and μ_i are the number of atoms and the chemical potential of the corresponding element, respectively, and E_tot is the total free energy of the system. According to the laws of thermodynamics, the larger the binding energy, the more stable the structure. The values of E_b for the different configurations of SnO2NRs are shown in Table 1 (binding energy per atom and band gap of the SnO2NRs for the different configurations). From Table 1, SnO2NR-OH has a higher binding energy than the bare SnO2NRs and SnO2NRs-2H. The binding energy of Z-SnO2NRs-2H is 2.41 eV, slightly lower than that of the A-SnO2NRs, which demonstrates that Z-SnO2NRs-2H is more reactive than the other SnO2 nanoribbons.
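A minimal sketch of the per-atom binding energy defined above, E_b = (Σ n_i μ_i − E_tot)/Σ n_i; the chemical potentials and total energies below are illustrative placeholders, not the paper's DFT values, and only the Sn/O atom counts (8 and 16 per cell) come from the text.

```python
# Sketch: per-atom binding energy E_b = (sum_i n_i * mu_i - E_tot) / sum_i n_i.
# Chemical potentials mu (eV/atom) and total energies E_tot (eV) are
# illustrative placeholders, not the paper's DFT values.
mu = {"Sn": -3.8, "O": -4.9, "H": -3.4}

def binding_energy(counts, e_tot):
    n_total = sum(counts.values())
    return (sum(n * mu[el] for el, n in counts.items()) - e_tot) / n_total

# e.g. a bare ribbon cell with 8 Sn and 16 O atoms (composition from the text)
eb_bare = binding_energy({"Sn": 8, "O": 16}, e_tot=-160.0)
eb_2h = binding_energy({"Sn": 8, "O": 16, "H": 4}, e_tot=-178.0)
print(f"E_b(bare) = {eb_bare:.2f} eV/atom, E_b(2H) = {eb_2h:.2f} eV/atom")
```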
5,133.8
2017-01-01T00:00:00.000
[ "Materials Science", "Physics", "Engineering" ]
Study on the gas-liquid annular vortex flow for liquid unloading of gas wells.

The vortex tool is a new technique for liquid unloading in gas wells, but a mathematical model to describe and predict its effect has been lacking. In the present work, the governing equations of a vortex flow model are established from the axial, radial, and circumferential momentum balances of the gas and liquid phases. The thickness of the liquid film, the gas and liquid vortex flow intensities, and the pressure drop gradient can then be calculated. The calculation results and previous experiments indicate that the pressure drop of the gas-liquid flow can be reduced by 5%-25% with the vortex tool, and the vortex flow model has an average relative difference of 6.01%. The model results show that there are two mechanisms for reducing the pressure drop under the vortex flow condition. In addition, the results show that vortex tools with bigger helical angles lead to higher vortex flow intensity. The decay rate of the vortex flow intensity along the pipe decreases as the liquid velocity increases, and the vortex flow working distance can be calculated from the vortex intensity gradient and the initial vortex flow intensity.

Introduction

Liquid loading is one of the significant issues to be overcome in gas-well production: when liquid loading occurs, gas production is blocked. Therefore, the U.S. and China have successively adopted vortex tools in liquid-loading gas wells. Mingaleeva (2002) found that the minimum pressure loss is obtained when the gas-liquid fluid moves upward in a vortex flow regime; based on this theory, the downhole vortex tool was designed to decrease the pressure drop along the wellbore. The vortex tool transforms the common gas-liquid flow into a swirling annular flow with a certain stability; as a result, gas well production is increased and the liquid-carrying capacity is improved. Alekseenko et al. (1999) proposed a solution for a symmetrical vortex accounting for its helical shape and conducted experiments on helical vortices in a vertical hydrodynamic vortex pipe. Ali et al. (2005) first carried out an experimental study on vortex tools and pointed out that the pressure drop and critical gas velocity are reduced in vortex flow. Facciolo et al. (2007) made new measurements in a vortex pipe flow and obtained the axial average velocity distribution. Hein (2007) described the field tests of vortex tools conducted by the U.S. Department of Energy from 2002 to 2006. Milliken (2008) installed several Vortex Flow surface tools of different sizes in liquid gathering systems and found that the tool created two separated flows inside the stratified flow. Surendra et al. (2009) carried out a numerical simulation of the gas-liquid two-phase flow through the vortex tool. Singh et al. (2016) applied the vortex technology to gas lift in two cases, with simultaneous field experiments on liquid-loading wells; the results indicated that the downhole vortex tool provided artificial lift optimization. Subsequently, a large number of field tests of vortex tools were conducted in China (Du, 2015; Yang et al., 2012; Zhang et al., 2012; Zhu et al., 2013), and experiments and theoretical research on vortex tools were also gradually carried out there. Wu et al.
(2016) analyzed the forces on liquid droplets in swirling flow based on two-phase fluid dynamics theories and optimized the helical angle of vortex tools. Liu and Sun (2017) studied the vortex tool with numerical methods and conducted laboratory and downhole tests on its geometrical parameters, finding the preferred helical angles to be 50° and 55°. Zhou et al. (2018) established a critical liquid film model for gas wells under vortex flow conditions based on experimental data. Shi et al. (2018) simulated the gas-liquid flow through the vortex tool under different flow regimes with the Fluent software. Zhang et al. (2018) also used a CFD method to simulate the flow with different vortex tools and analyzed the influence of the number and distribution of vortex blades on the liquid-carrying performance.

Like other downhole liquid-unloading tools, the vortex tool cannot supply energy to the gas-liquid two-phase flow, but it can increase the efficiency with which formation energy is used by improving the flow conditions; at times, however, downhole tools can add extra energy losses. The vortex flow has a certain effective working distance, reflected in the stability of the spirally rising liquid ring: the working distance decreases as the angular velocity drops, until finally the vortex flow disappears and the flow returns to its original state. Previous field experiments show that the vortex tool has certain application conditions and an optimal operating condition, and its unloading function can be fully developed through reasonable well selection and vortex tool design. Until now, however, there have been no theoretical models for the tool operating characteristics and vortex flow characteristics, and this insufficient understanding of the unloading mechanism limits the further application of vortex tools. The present work therefore focuses on building a theoretical vortex flow model based on two-phase flow theory; experiments were also conducted to validate the model.

Theoretical vortex flow model

When the gas-liquid mixture flows through the vortex tool, liquid converges on the surface of the vortex channel and a liquid ring appears, as shown in Figure 1; it is formed by water converging on the channel surface, and the ring is pulled down by gravity until, eventually, the gravitational force on the liquid ring is balanced by the gas drag force. Flow patterns are the basis of two-phase flow study (Li et al., 2017), and the same holds for the vortex flow model. Because of the vortex flow, this model takes into account not only the axial mechanical balance but also the circumferential and radial balances. For simplicity, heat transfer between liquid and gas is ignored, so the vortex model describes an isothermal flow, and the liquid and gas are assumed to be incompressible.

In the r (radial) direction, the radial velocities of the gas core and the liquid film are both zero. Owing to the angular velocity, the centripetal acceleration of the liquid makes the liquid film pressure higher than the gas core pressure over a cross section. The Navier-Stokes (N-S) equation in the cylindrical coordinate system under the condition of zero radial velocity (u_r = 0) is written as:

The angular velocities of the gas phase and liquid phase are constant over the same cross section; p_g and p_L are the average pressures of the gas core and the liquid film, respectively.
The integration of equation (1) is:

where ω_g and ω_L are the angular velocities of the gas core and liquid film, respectively, and d is the liquid film thickness (d_D = d/D). At the same time, it is assumed that the liquid film thickness does not change within one vortex unit. The vortex flow can be seen as a special annular-mist flow with a gas-liquid interface fluctuation in the axial direction, so the momentum equations of the gas core and liquid film are written as (Barnea, 1986; Petalas and Aziz, 1998):

Considering the angular momentum balance in the circumferential direction, the loss of gas angular momentum overcomes the torque caused by the friction on the gas-liquid interface; similarly, the variation of the liquid angular momentum in the axial direction overcomes the torques of the interfacial friction and the liquid-wall friction. Therefore, the angular momentum balances are written as:

The vortex intensities of the gas phase and liquid phase are defined as the dimensionless variables W_g and W_L, respectively, the same as proposed by Liu and Bai (2015). Furthermore, the variables C_g and C_L are defined as follows:

Combining equations (7)-(10):

2.1 Average liquid film thickness d

Combining equations (2)-(4) and eliminating p_g and p_L gives:

Equation (12) is used to calculate the average liquid film thickness d. W reflects the effect on d of the centrifugal force caused by the vortex flow. Experiments show that the vortex flow gradually decays along the pipeline, which is expressed as dW/dz < 0. If W = 0, equation (10) is the same as the common annular-mist model (Ansari et al., 1990; Taitel and Dukler, 1976). W is calculated by equation (14).

In this equation, W < 0 means the liquid film thickness is smaller than that of the common annular-mist model. Therefore, the vortex flow decreases the gravitational pressure loss, which is one of the mechanisms of the vortex tools.

The interfacial relations. The axial and circumferential shear stresses are written as follows:

where τ_zw, τ_uw, τ_zi, and τ_ui are the axial liquid-wall shear stress, the circumferential liquid-wall shear stress, the axial gas-liquid interfacial shear stress, and the circumferential gas-liquid interfacial shear stress, respectively. τ_gf in equation (17) is the gas-liquid interfacial shear stress in common annular flow, which can be calculated by equation (29).

Because of the vortex tool, liquid converges into the vortex channel and forms a liquid ring that flows upward along the pipeline. The protruding liquid ring occupies part of the cross section of the gas core; therefore, the gas-liquid interfacial shear produces a drag force on the liquid ring. To simplify the model, the shear stress is assumed equal to the drag force. The result shows that γ > 1 and τ_zi ≥ τ_gf, which means that under the effect of the vortex tool the drag force on the liquid increases and the thickness of the liquid film calculated by equation (13) decreases, leading to a decrease of the gravitational pressure drop.

The relations for the friction factors f are as follows:

where the axial gas-liquid interfacial friction factor f_zi was proposed by Wei et al. (2014), and u_g in equation (17) is defined as u_g = u_zg(1 + W_g²)^0.5.
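The closure of the model is an implicit equation for the average film thickness d (equation (12)), whose explicit form is not reproduced above; below is a minimal sketch of its numerical treatment with a bisection root search. The `film_residual` callable is a hypothetical stand-in for the paper's equation (12), included only to illustrate the solution procedure.

```python
def solve_film_thickness(film_residual, d_lo=1e-5, d_hi=0.02, tol=1e-9):
    """Bisection on the implicit film-thickness equation F(d) = 0 (eq. (12)).

    film_residual: callable returning the residual of equation (12) for a
    trial thickness d [m]; its actual form is not reproduced here.
    """
    f_lo = film_residual(d_lo)
    for _ in range(200):
        d_mid = 0.5 * (d_lo + d_hi)
        f_mid = film_residual(d_mid)
        if abs(f_mid) < tol or (d_hi - d_lo) < tol:
            return d_mid
        if f_lo * f_mid < 0:          # root lies in the lower half
            d_hi = d_mid
        else:                         # root lies in the upper half
            d_lo, f_lo = d_mid, f_mid
    return 0.5 * (d_lo + d_hi)

# usage with a made-up residual (illustration only; not equation (12)):
print(solve_film_thickness(lambda d: d**2 + 0.01 * d - 2e-6))
```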
Calculation of shear stresses. When the gas core flows through the protruding liquid ring, there is a drag force on the liquid ring, which is calculated as:

In the vortex flow, the total cross-sectional area of the protruding liquid ring is shown in red in Figure 2. Assuming the drag force equals the shear stress on the liquid ring:

To compare with common two-phase annular flow, equation (24) can be rewritten as:

In equation (27), τ_gf denotes the shear stress between the gas core and the liquid film without the liquid ring, which is calculated as:

In equation (28), C_D is the drag coefficient proposed by Morsi and Alexander (1972):

where the Reynolds number is defined as Re = ρ_g u_zg h_a / μ_g. The forces applied to the liquid phase include the drag force on the liquid ring and the shear stress on the liquid film, so the resultant force is written as:

where h_a is the thickness of the liquid ring and h_aD is its dimensionless form. Ryu and Park (2011) proposed a correlation for the interface fluctuation amplitude in two-phase annular flow, which is introduced into the present work; the empirical parameter h_aD is calculated as:

where a and b are set to 18 and −0.8, respectively, and C_w is a function of the gas and liquid physical properties, defined as follows:

Solution for the model. According to mass conservation, the mass flow rates of gas and liquid are constant; thus the axial average velocities of gas and liquid are calculated as:

where A is the cross-sectional area, and the subscripts t, r, and f denote the test pipeline, the liquid ring, and the liquid film, respectively.

where S_fi is the wetted perimeter between the liquid film and the pipe wall, and S_f is the interfacial perimeter. The liquid holdup of the cross section is given by:

When calculating the pressure profile along the pipeline, the pipe is segmented into units, an appropriate method is chosen to calculate the pressure gradient of each unit (Fu et al., 2015; Zhang and Li, 2011), and the outlet pressure is then obtained. The pressure calculation procedure is shown in Figure 3.

3 The two-phase flow experiments with vortex tools

Experimental arrangement. Air and water are used as the gas-phase and liquid-phase flow media, respectively. The experimental flow loop is shown in Figure 4. Water is injected through a screw pump with a flow rate ranging from 0.1 m³/h to 6.3 m³/h. Air is supplied through a compressor with a maximum rate of 5 m³/min and a maximum outlet pressure of 1.2 MPa. A vertical transparent test section is set up to observe the vortex flow, and different vortex tools can be installed at the bottom of the section. The test section is 7.5 m long and consists of stainless-steel pipe and transparent pipe. Three vortex tools with different helical angles (the angle between the liquid film ring and the horizontal plane), 30°, 45°, and 55°, are used to produce different vortex flow conditions, as shown in Figure 5.

Flow parameter measurement. Three Rosemount pressure gauges with a range of 0-500 kPa are installed below and above the vortex tool and at the outlet of the test pipeline to measure the pressure drop. Several flowmeters are also installed on the pipeline, including a turbine flowmeter for the water volume flow rate with a range of 0.1-1.5 m³/h and a gas volume flowmeter with a range of 15-3000 L/min. The helical liquid film can be observed in the transparent section downstream of the vortex tool; based on the helical angle of the liquid film ring, the vortex intensity can be obtained.
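A minimal sketch of the pressure-profile march described above (segment the pipe, evaluate the local pressure gradient for each unit, and accumulate to the outlet). The `pressure_gradient` function is a hypothetical placeholder for the model's per-unit calculation (film thickness, holdup, friction, and gravity terms), not a reproduction of it, and the numbers in the usage example are illustrative.

```python
def pressure_profile(p_inlet, length, n_units, pressure_gradient):
    """March the pressure from inlet to outlet along a vertical pipe.

    pressure_gradient(p, z) -> dp/dz [Pa/m] is a placeholder for the
    vortex-flow model's per-unit calculation described in the text.
    """
    dz = length / n_units
    p, z, profile = p_inlet, 0.0, [p_inlet]
    for _ in range(n_units):
        p += pressure_gradient(p, z) * dz   # simple forward (Euler) step
        z += dz
        profile.append(p)
    return profile

# usage with an illustrative gradient (gravity-dominated, mildly p-dependent):
profile = pressure_profile(
    p_inlet=170_270.0, length=7.5, n_units=15,
    pressure_gradient=lambda p, z: -1200.0 * (p / 170_270.0),
)
print(f"outlet pressure ~ {profile[-1] / 1000:.2f} kPa")
```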
According to the experiments, the liquid velocity drops because of the friction between the liquid film and the pipe wall, which is reflected by the increase of the helical angle.

Experimental procedure. First, air is injected from the compressor into the gas tank and fed into the pipe at a fixed flow rate. Once the gas flow is stable throughout the pipeline, water is injected into the mixer at a low rate from the water tank, and the water flow rate is gradually increased until a liquid ring forms. The gas-water flow is kept in a steady state for several minutes, and the pressure data, gas velocity, and liquid velocity are recorded. The above steps are repeated with different gas flow rates, liquid flow rates, and vortex tools. At the three inlet pressures, the maximum relative differences between the vortex flow model and the experiments are 30.06%, 8.10%, and 1.76%, respectively, and the average relative differences are 11.54%, 5.41%, and 1.08%, respectively. In addition, the model performs better when the inlet pressure is higher.

Results comparison between the vortex flow model and the pressure drop experiments. In the preceding section it was noted that Ali et al. (2005) only gave common experimental results and did not report parameters related to the vortex flow, which are of great importance for the liquid unloading process; their study mainly focused on the improvement of the liquid unloading capacity. The gas-water mixture flows through the vortex tool channel, and the vortex flow intensity is determined by the helical angle when the mixture flows out of the tool. In the present experiments, the helical angles of the vortex tools are 30°, 45°, and 55°, and the corresponding vortex flow intensities are 1.73, 1, and 0.70. The pressure drops against gas superficial velocity under three liquid superficial velocities with vortex tools #1, #2, and #3 are plotted in Figures 9-11, respectively; the scattered points are experimental data.

As the gas superficial velocity increases, the cross-section liquid holdup and the pressure drop decrease; but when the gas superficial velocity becomes higher still, the friction increases and the pressure drop rises again. When the liquid superficial velocity is 0.05 m/s, Figure 9 shows that the pressure drop decreases from 18.21 kPa to 7.51 kPa and then rises to 10.51 kPa as the gas superficial velocity increases. Under high liquid superficial velocity, the effect of the decreasing liquid holdup is more obvious, so the pressure drop decreases continuously, whereas under low liquid velocity the pressure-drop inversion is not obvious. Figures 10 and 11 show that the pressure drop trends for vortex tools #2 and #3 are similar to tool #1.

In Figures 9-11, the data analysis for tool #1 shows that the maximum and minimum relative differences between the vortex model and the experimental data are 31.7% and 0.1%, respectively, with an average difference of 9.6%. For tool #2, the maximum and minimum relative differences are 29.3% and 2.5%, with an average of 10.7%; for tool #3, they are 15.6% and 0.2%, with an average of 7.5%.
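A small sketch of how the maximum, minimum, and average relative differences quoted above can be computed from paired model predictions and measurements; the arrays below are illustrative placeholders, not the experimental data.

```python
import numpy as np

def relative_differences(model, measured):
    """Max/min/mean of |model - measured| / measured, in percent."""
    model, measured = np.asarray(model), np.asarray(measured)
    rel = np.abs(model - measured) / measured * 100.0
    return rel.max(), rel.min(), rel.mean()

# illustrative placeholder data (kPa), not the paper's measurements
model = [18.0, 12.4, 8.1, 7.9, 10.2]
measured = [18.21, 11.9, 7.51, 8.3, 10.51]
mx, mn, avg = relative_differences(model, measured)
print(f"max {mx:.1f}%  min {mn:.1f}%  avg {avg:.1f}%")
```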
Generally, the model fits the experimental results well. Figure 12 shows the pressure drop against gas superficial velocity with different vortex tools at a liquid superficial velocity of 0.05 m/s. It can be found that the helical angle of the vortex tool has opposite influences on the two-phase flow under different gas superficial velocities: at low gas superficial velocity (12 m/s-14 m/s) the pressure drop increases as the helical angle becomes bigger, while at high gas superficial velocity (16 m/s-18 m/s) it decreases as the helical angle becomes bigger. For example, vortex tool #1 has the smallest helical angle and therefore, according to equation (43), the biggest vortex flow intensity; based on equation (31), it has the biggest interfacial force, so the liquid holdup decreases at low gas superficial velocity, while the friction increases at high gas superficial velocity. Figure 12 also shows that the minimum pressure drop is 15%-35% lower than the maximum pressure drop across the different vortex tools; therefore, the actual flow conditions should be considered when deploying vortex tools in a gas well.

Results comparison between the vortex flow model and the decay experiments of vortex flow intensity. The angular velocities of gas and liquid gradually decrease because of the interfacial friction. The average vortex flow intensity gradient dW_L/dz in the test section is calculated from the helical angle and the vortex flow intensities at the inlet and outlet of the vortex tool. Figures 13-15 show the theoretical and experimental liquid vortex flow intensities under different conditions. Both the experimental and calculated results show that the vortex flow intensity decay rate decreases along the pipe as the liquid velocity increases. There is also a rising trend of −dW_L/dz as the gas velocity increases in Figures 13-15, because the liquid film thickness decreases with rising gas velocity; this rise of the decay rate follows from equation (13).

In Figures 13-15, the differences between the experimental data and the calculated results are large under low liquid superficial velocity and become smaller under high liquid superficial velocity, mainly because the vortex flow is then fully developed. The vortex intensity gradients show similar trends as the gas superficial velocity increases, and −dW_L/dz drops markedly as the helical angle becomes higher for the different vortex tools. The maximum and minimum relative differences between the vortex model and the experimental data are 63.3% and 2.1% for tool #1 (average 25.3%), 37.8% and 6.0% for tool #2 (average 13.9%), and 9.0% and 2.1% for tool #3 (average 4.1%); the calculation results of the vortex model are acceptable.

According to the green solid and dotted lines in Figure 13, which represent −dW_L/dz of tool #1 and tool #3, respectively, a lower initial velocity causes smaller friction between the liquid and the pipe wall, which leads to a smaller vortex flow intensity gradient.
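The working-distance estimate used in the next paragraph is simply the initial vortex intensity divided by its decay gradient, L ≈ W₀ / (−dW/dz); a minimal sketch using the values quoted in the text:

```python
# Sketch: vortex-flow working distance L = W0 / (-dW/dz), using the tool #1
# values quoted in the text (initial intensity 1.73; gradients per case).
cases = {
    "v_L = 0.05 m/s": 0.10,   # -dW/dz in 1/m
    "v_L = 0.1 m/s":  0.04,   # note: the text quotes 40 m for this case,
    "v_L = 0.2 m/s":  0.02,   # implying rounding or a slightly different W0
}
W0 = 1.73
for label, grad in cases.items():
    print(f"{label}: working distance ~ {W0 / grad:.1f} m")
```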
Based on this conclusion, it can be inferred that the vortex flow intensity decreases upward along the pipe and that the decay rate gradually becomes smaller. The vortex flow working distance can be calculated from the vortex flow intensity: for example, when the liquid and gas superficial velocities are 0.05 m/s and 16 m/s, the vortex flow intensity gradient is 0.10 m⁻¹ and the vortex flow intensity of tool #1 is 1.73, so the theoretical working distance is 17.3 m (1.73/0.10); the actual working distance is somewhat larger than 17.3 m. The vortex intensity gradient becomes lower as the liquid velocity rises, which leads to a longer working distance: for tool #1, when the liquid velocity is 0.1 m/s and the vortex intensity gradient is 0.04 m⁻¹, the working distance is 40 m; when the liquid velocity is 0.2 m/s and the gradient is 0.02 m⁻¹, the working distance is 86.5 m. However, the liquid velocity cannot increase without limit: as it increases, the pressure drop rises and the stability of the gas-liquid interface decreases, which leads to collapse of the liquid ring and causes liquid loading. When the vortex flow intensity is near 0, the effect on lowering the pressure drop is negligible. Therefore, actual gas production conditions should be taken into account when designing the spacing of vortex tools.

Conclusion

Under the vortex flow condition, there is a pressure difference between the gas core and the liquid film due to the centrifugal force caused by the circumferential flow in two-phase annular flow, and a drag force is added to the liquid phase due to the structural characteristics of the liquid ring. Both effects decrease the two-phase pressure drop under vortex flow. The present study proposes the gas and liquid momentum equations in the axial, circumferential, and radial directions, from which the liquid film thickness, the vortex flow intensity, and the pressure drop can be calculated. Compared with previous experiments and models, the vortex flow model performs better than other models; its average relative differences are 11.54%, 5.41%, and 1.08% for inlet pressures of 170.27 kPa, 239.22 kPa, and 308.17 kPa, respectively. The analyses indicate that the vortex flow model fits the experimental results well.

Vortex tools with bigger helical angles lead to higher vortex flow intensity. As the gas superficial velocity increases, on one hand the liquid holdup and the pressure drop decrease; on the other hand the friction increases, which raises the pressure drop. Therefore, under high gas velocity, higher vortex flow intensity tends to increase the flow friction, while under low gas velocity the trend is the opposite. The theoretical vortex flow working distance can be calculated from the vortex intensity gradient and the initial vortex flow intensity. A larger liquid superficial velocity leads to a longer working distance, but the liquid velocity cannot increase without limit, as this may lead to collapse of the liquid ring and cause liquid loading.
A comprehensive review on the advancements and challenges in perovskite solar cell technology

Perovskite solar cells (PSCs) have emerged as a revolutionary technology in the field of photovoltaics, offering a promising avenue for efficient and cost-effective solar energy conversion. This review provides a comprehensive overview of the progress and developments in PSCs, beginning with an introduction to their fundamental properties and significance. Herein, we discuss the various types of PSCs, including lead-based, tin-based, mixed Sn–Pb, germanium-based, and polymer-based PSCs, highlighting their unique attributes and performance metrics. Special emphasis is given to halide double PSCs and their potential in enhancing the stability of PSCs. Charge transport layers and their significance in influencing the overall efficiency of solar cells are discussed in detail. The review also explores the role of tandem solar cells as a solution to overcome the limitations of single-junction solar cells, offering an integrated approach to harness a broader spectrum of sunlight. The review concludes with the challenges associated with PSCs and a perspective on their future potential, emphasizing their role in shaping a sustainable energy landscape. Through this review, readers will gain a comprehensive insight into the current state of the art in PSC technology and the avenues for future research and development.

Introduction

Non-renewable fossil fuels are the main contributors to reliable energy supply throughout the world. The global energy demand rises steadily as the earth's population increases over time. The depletion of these non-renewable energy sources, along with their environmental issues, has caused major concerns. Owing to the discharge of carbon and other hazardous elements into the environment, the world is now dealing with a serious environmental problem. This has led to the utilization of alternative energy sources, especially renewable energy. In this context, solar energy is recognized as an essential renewable energy source due to its quantity, purity, and inexhaustibility.1 Furthermore, the use of solar energy does not harm the environment.
Numerous methods have been developed in the last few years to capture and utilize solar energy, including solar heating, artificial photosynthesis, photovoltaics, photocatalytic water splitting, and solar architecture.2,3 Among the various solar technologies, photovoltaics (PV) has attracted major attention. This technology is based on the photovoltaic effect, in which the photon energy of sunlight is directly converted into electrical energy through a device called a PV cell or solar cell. By switching to PV technology, the amount of pollutants in the physical environment and the release of poisonous gases can be reduced. According to reports, there must be a decrease in carbon emissions of 25 000 GW by 2050 for a sustainable environment.

Solar cell technology is often divided into three generations based on the materials used in the devices. Silicon wafer-based solar cells make up the first generation, whereas thin-film-based solar cells make up the second generation. The third-generation cells comprise organic, dye-sensitized, quantum dot, and perovskite materials. The PV market is dominated by the first- and second-generation solar cells. However, these technologies are associated with high fabrication costs, complicated production procedures, and low efficiency. This has led researchers towards developing innovative, low-cost, and efficient materials for solar cells.

Numerous types of solar cells have been discovered thus far, including perovskite and organic cells, polycrystalline silicon (mc-Si) cells, single-crystalline silicon (c-Si) cells, CIGS solar cells, CdTe-based solar cells, and quantum dot-sensitized solar cells.4,5 However, to realize the wide commercialization of solar cells, their low-cost manufacture and high conversion efficiency are important. To date, first-generation silicon-based solar cells have been the most popular in the market because of their high power conversion efficiency (PCE) of 25-26% and durability. However, they are hampered by their long fabrication time and high cost. Recently, perovskite solar cells (PSCs) have emerged as an alternative to silicon solar cells. PSCs belong to the third-generation PV technology and have achieved remarkable breakthroughs over the past decade,6 reaching an exceptional PCE of more than 25% with the potential to outperform the Shockley-Queisser limit.7 An incredible advantage of PSCs is their compatibility with first- and second-generation solar cell technologies, opening the door to unlimited possibilities. Perovskite materials can be combined with conventional solar cells such as silicon and CIGS to create cohesive tandem solar cells, exploring the untapped potential of high-performing PV cells.8 Furthermore, extensive research is ongoing to further enhance the performance of perovskite solar cells and their applications.9 Kojima et al.10,11 carried out the ground-breaking work on the use of halide perovskites in solar cells. In 2009, these researchers achieved a PCE of up to 3.8% using methylammonium lead bromide (MAPbBr3) perovskite as a light sensitizer in dye-sensitized solar cells. This paved the way for future research in this area. The accomplishments achieved by perovskite materials during the last ten years are shown in Fig. 1, focusing on lead-based, tin-based, germanium-based, polymer-based, lead-free halide double perovskite-based and tandem PSCs.
For PSCs to thrive in the commercial landscape and effectively compete with established technologies, they must satisfy three critical requirements. Firstly, PSCs need to exhibit remarkable energy conversion efficiency, ensuring that they can generate power effectively. Secondly, these solar cells should demonstrate a prolonged operational lifespan, assuring users of their durability and long-term performance. Lastly, they need to be produced at low cost to enable the widespread implementation of low-cost PV technology. Achieving these requirements is essential to deliver power at an exceptionally competitive rate per watt, making PSCs a financially viable and attractive option for a broad range of applications. However, although PSCs have shown great promise, the current commercial systems still do not meet all these specifications completely. Accordingly, there is still a lot of room for further exploration in PSC technologies to achieve the full potential of highly efficient, durable, and cost-effective photovoltaic solutions.

Considering the pressing global need for sustainable energy solutions, the exploration and development of PSCs present a promising avenue towards achieving efficient, durable, and cost-effective photovoltaic technologies. This review presents a comprehensive analysis of the advancements and challenges in the field of PSCs, ranging from their foundational principles to the latest innovations in materials and design. By delving into the intricacies of the charge transport layers, the merits and demerits of lead- and tin-based PSCs, and the pioneering work in germanium-based PSCs, we aim to provide a complete understanding of the current landscape. Furthermore, the exploration of A-site modification, the integration of polymer-based PSCs, the prospects of lead-free halide double perovskite solar cells, and the innovative tandem solar cells underscore the versatility and adaptability of PSCs. The journey of PSCs, from their inception to their current state, is a testament to the relentless pursuit of excellence in the renewable energy sector. As the global energy landscape evolves, it becomes increasingly evident that PSCs, with their promising attributes, hold significant potential to address the significant energy challenges of the present day. Accordingly, the aim of this review is to provide researchers, industry professionals, and enthusiasts with a consolidated resource, guiding future endeavors in the quest for sustainable and efficient photovoltaic solutions.
The tunable bandgap of perovskite solar cells, which ranges from approximately 1.3 to 2.2 electron volts (eV), provides several distinct advantages. Firstly, it allows the optimization of their absorption in the solar spectrum. Given that the solar spectrum is broad, a tunable bandgap enables these cells to be specifically optimized for absorbing different parts of the spectrum, thereby enhancing the overall energy conversion efficiency. This feature is particularly beneficial in the development of tandem solar cells, where perovskite layers with varying bandgaps can be stacked together or with other materials such as silicon to absorb different wavelengths, potentially surpassing the efficiency limits of traditional single-junction cells. Moreover, their tunable bandgap makes perovskite solar cells adaptable to various environmental conditions. For example, different geographic locations with unique solar irradiance profiles can benefit from cells optimized for specific conditions, increasing their efficiency in diverse climates.13 This adaptability is especially useful for maintaining higher efficiency in hotter climates, given that the efficiency of solar cells generally decreases with an increase in temperature, where a tunable bandgap can help mitigate this effect. Additionally, the versatility offered by the tunable bandgap extends to a range of applications. For instance, in building-integrated photovoltaics (BIPV), where aesthetic considerations are crucial, the ability to adjust the bandgap allows for different colorations and light absorption characteristics. This versatility also supports the use of thinner perovskite layers, reducing material costs and enabling the production of lighter and more flexible solar cells. In summary, the tunable bandgap of perovskite solar cells is a key attribute that significantly contributes to their efficiency, versatility, and potential for widespread application in various environments and integrations.14

Another critical attribute is their high absorption coefficient, which is notably around 5.7 × 10⁴ cm⁻¹ at 600 nm. This high absorption coefficient signifies that perovskite materials can absorb a substantial amount of sunlight very efficiently, even when applied as thin films, which is particularly advantageous for several reasons. Firstly, it contributes to the high efficiency of perovskite solar cells. The ability to absorb a significant amount of light with a relatively thin layer of material means that more photons are converted into electrical energy, boosting the overall power conversion efficiency of the cell. This high efficiency is crucial for making solar energy a more viable and competitive source of renewable energy. Secondly, the high absorption coefficient enables the production of lightweight and flexible solar cells. Given that a thinner layer of material is required to achieve the desired absorption, perovskite solar cells can be made with less bulk, reducing their weight and allowing for more flexibility in their application.15,16
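As a rough check on the thin-film claim, a single-pass Beer-Lambert estimate with the quoted 5.7 × 10⁴ cm⁻¹ coefficient shows how quickly absorption saturates with thickness; this sketch ignores reflection and light trapping, and the sample thicknesses are illustrative:

```python
import math

alpha_cm = 5.7e4  # absorption coefficient at 600 nm, cm^-1 (from the text)

def absorbed_fraction(thickness_nm: float) -> float:
    """Single-pass Beer-Lambert absorbed fraction 1 - exp(-alpha*d);
    ignores reflection and light trapping, so this is only a rough bound."""
    d_cm = thickness_nm * 1e-7  # nm -> cm
    return 1.0 - math.exp(-alpha_cm * d_cm)

for d in (100, 300, 500):  # representative film thicknesses in nm
    print(f"{d} nm film absorbs ~{absorbed_fraction(d):.0%} at 600 nm")
```

Even a few-hundred-nanometre film absorbs the large majority of 600 nm light under this estimate, which is why such thin absorber layers suffice.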
This aspect opens up new possibilities for the integration of solar cells, such as in portable and wearable electronics, unconventional building surfaces, and other areas where traditional, heavier solar panels may not be feasible. Furthermore, efficient light absorption at a thinner thickness implies lower material usage, which can result in reduced manufacturing costs. This cost-effectiveness is essential for the widespread adoption and deployment of solar energy technologies, making renewable energy more accessible and affordable. Lastly, the high absorption efficiency at a specific wavelength such as 600 nm demonstrates the potential for perovskite solar cells to be finely tuned for specific parts of the solar spectrum, further enhancing their compatibility with multi-junction solar cell technologies. This compatibility can lead to the creation of highly efficient, multi-layered solar cells that can outperform traditional single-junction cells.

Continuing the discussion on the advantages of perovskite solar cells, the efficient carrier mobility of these materials, ranging from 1 to 10 cm² V⁻¹ s⁻¹, stands out as a key attribute. This high carrier mobility is crucial in several aspects of the performance of solar cells. Firstly, it leads to improved charge collection. The rapid movement of charge carriers (electrons and holes) to the electrodes minimizes recombination losses, thereby enhancing the overall efficiency of the solar cell.17 In addition, this efficient carrier movement reduces internal resistive losses, ensuring that less energy is lost as heat and maintaining high efficiency during operation. This high carrier mobility is also beneficial under low-light conditions. It compensates for the lower number of photons, ensuring that solar cells maintain a good level of efficiency even when the light conditions are not ideal. Furthermore, the compatibility of high carrier mobility with thin-film technology is particularly advantageous for perovskite solar cells. Despite the use of thin layers, the movement of charge carriers is not significantly hindered, which is vital for sustaining high efficiency in thin-film solar technologies.18 Another exciting aspect of the high carrier mobility in perovskites is its potential in applications beyond photovoltaics. For instance, in photodetectors and other optoelectronic devices, this property can lead to faster response times and higher sensitivities. Additionally, the high mobility in perovskites reduces the impact of impurities and defects, which typically hinder the performance of materials with lower carrier mobility. This means that even with minor imperfections, carriers in perovskites can still move effectively, reducing the need for ultra-high purity in material production.

The low exciton binding energy in perovskite materials is crucial for efficient solar energy conversion. In photovoltaic materials, excitons (bound pairs of electrons and holes) need to be separated into free charge carriers for the generation of electricity. The low exciton binding energy in perovskites means that less energy is required to separate these pairs, thereby facilitating efficient charge carrier generation under normal sunlight conditions. This leads to higher efficiency in converting solar energy into electrical energy. Moreover, the high dielectric constant of perovskite materials plays a pivotal role in enhancing their solar cell performance. A high dielectric constant leads to the efficient screening of charge carriers, reducing their recombination rate.19
This efficient photogeneration of electrons and holes ensures that more of the absorbed sunlight is converted into usable electrical energy, improving the overall power conversion efficiency of solar cells. In terms of practical solar cell metrics, these properties translate into significantly high short-circuit current densities and open-circuit voltages. The short-circuit current density indicates how much current the solar cell can produce under optimal sunlight conditions, while the open-circuit voltage represents the maximum voltage the cell can generate when not connected to an external circuit. Both parameters are crucial for determining the overall efficiency of a solar cell, and perovskite materials excel in these aspects due to their inherent physical properties. Lastly, the long charge diffusion lengths in perovskite materials further contribute to their efficiency. The charge diffusion length is the average distance that charge carriers can travel before recombining. Longer diffusion lengths in perovskites imply that electrons and holes can travel farther, increasing the likelihood of their successful extraction and conversion into electrical energy. This characteristic is particularly beneficial in thin-film solar cells, where efficient charge transport across the material is essential for high performance. Table 1 shows the bandgap, binding energy and carrier mobility of different perovskite materials.

Presently, organic-inorganic hybrid perovskite (OHIP) materials have emerged as the optimal choice for cost-effective solar cell production with exceptional performance. Mitzi et al. first showed that an OHIP material could be used in light-emitting diodes and transistors in the 1990s.36,37 Moreover, OHIPs display distinct optical and electrical features in comparison to common organic and inorganic semiconductors. These OHIP materials also have a weak binding energy, large Bohr radius, high dielectric constant, long diffusion length and high carrier diffusion velocity, in addition to an exceptional light absorption capacity.38,39 Owing to all these benefits, OHIP materials have risen to the top of the list of contenders for the fabrication of inexpensive, highly effective solar cells.

In solar cells, perovskite materials such as CH3NH3PbX3 and CH3CH2NH3SnX3 are used as the absorber. Additionally, an electron transport layer (ETL) and hole transport layer (HTL) are positioned on either side of the absorber. When exposed to light, the perovskite absorber generates charge carriers (electrons, e−, and holes, h+), which are transported to the n-type and p-type charge transport layers (CTL), respectively, thereby producing free charge carriers. Subsequently, electrons migrate towards the cathode through the ETL and the external circuit. Simultaneously, the oxidized perovskite is regenerated and returns to its ground state with the assistance of the compact portion of the HTL. Consequently, the holes in the HTL diffuse in the opposite direction towards the electrodes, where they recombine with the electrons, ultimately forming a current at the terminus of the circuit. Importantly, a relationship exists between the thickness of the perovskite material and the generated current.40 Fig. 4 illustrates the energy levels involved in the charge transfer process in PSCs.
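The carrier mobilities and diffusion lengths discussed above (and listed in Table 1) are linked through the Einstein relation D = μkT/q and L = √(Dτ); the sketch below uses the quoted 1-10 cm² V⁻¹ s⁻¹ mobility range together with an assumed ~100 ns carrier lifetime, which is an illustrative assumption rather than a figure from this review:

```python
import math

kT_over_q = 0.0259  # thermal voltage at ~300 K, volts

def diffusion_length_um(mobility_cm2_Vs: float, lifetime_s: float) -> float:
    """L = sqrt(D * tau) with D = mu * kT/q (Einstein relation).
    Returns the diffusion length in micrometres."""
    D = mobility_cm2_Vs * kT_over_q  # diffusion coefficient, cm^2/s
    L_cm = math.sqrt(D * lifetime_s)
    return L_cm * 1e4  # cm -> um

# Mobility range from the text; the 100 ns lifetime is an assumed value.
for mu in (1.0, 10.0):
    print(f"mu = {mu} cm^2/Vs -> L ~ {diffusion_length_um(mu, 100e-9):.2f} um")
```

With these inputs the estimate lands in the sub-micrometre to micrometre range, consistent with the long diffusion lengths described above.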
According to research, OHIP materials are particularly intriguing candidates for solar cell applications. Additionally, atmospheric solution processing and easy preparation procedures such as vacuum deposition have been developed because of the abundant availability of the precursor components of OHIPs.41,42 Initially, OHIPs were used in solar cell applications by Miyasaka and colleagues, who employed the CH3NH3PbX3 sensitizer in dye-sensitized solar cells (DSSC), with a PCE of 3.81%.42 With significant advancements in technology, the PCE has increased to as high as 25.8% within a short period.43

Charge transport layers

The role of ETLs in the advancement of PSCs is important.44 ETLs consist of highly n-doped colloidal thin films designed to facilitate the transport of electrons from the perovskite layer to the cathode. The commonly employed ETL materials in PSCs include ZnO, TiO2, SnO2, and their mesoporous counterparts, which are traditionally used in conventional perovskite solar cells. However, oxide-based ETLs are characterized by broad grain boundaries and suffer from interfacial recombination losses.45 Notably, issues such as oxygen vacancies and trap-assisted recombination contribute to the emergence of defects in semiconductor ETLs. Thus, to address these limitations, researchers have explored alternative materials, including single-crystalline substances, to enhance the ETL performance in perovskite solar cells. This approach suggests that nanosheets or atomically thin forms of transition metal dichalcogenides such as WS2, TiS2, and MoS2 can serve as promising ETL candidates. Nanosheets are advantageous due to their one-atom-thick crystal structure, which minimizes defects.46,47 Moreover, their thin configuration facilitates rapid charge carrier transfer to the electrode.48,49 MoS2 is frequently chosen as the ETL due to its favorable attributes, including low trap density and robust carrier mobility.50 Malek et al. introduced a novel approach by demonstrating the direct production of MoS2 nanosheets on an indium tin oxide (ITO) substrate. Their investigation revealed the optimal homogeneity of the nanosheets at 200 °C. When these synthesized materials were employed as ETLs, they notably enhanced the interfacial charge transfer capabilities, stability, and overall performance of PSCs. It was observed that reducing the thickness of the MoS2 layer resulted in an increase in the PCE of the solar cell. Specifically, the thicker MoS2 nanosheet ETL exhibited a V_OC of 0.56 V, fill factor (FF) of 37%, J_SC of 16.24 mA cm⁻², and PCE of 3.36%. Remarkably, even after continuous exposure to peak sunlight intensity for 80 s, these solar cells retained 90% of their initial PCE.51 Furthermore, owing to its ambipolar characteristics, MoS2 is sometimes employed as an HTL.52 An example of its versatility was demonstrated in the work by Kim et al. in 2016, where MoS2 was utilized as an HTL in a perovskite solar cell (PSC), resulting in an impressive PCE of 9.53%.53 Subsequently, Das et al. employed MoS2 as an HTL in an inverted p-i-n heterojunction planar PSC, achieving a PCE of 6.01%.54

In addition to ETLs, HTLs play a crucial role in optimizing the efficiency of solar systems. HTLs consist of highly p-doped materials designed to facilitate the movement of holes from the perovskite layer to the anode. Extensive research has been undertaken to enhance the conductivity of HTLs by doping with various substances, aiming to prevent charge carrier recombination at the HTL/perovskite interface.55
Among the HTL candidates, spiro-OMeTAD stands out due to its unique characteristics, including a low glass transition temperature and good solubility. However, unprocessed spiro-OMeTAD exhibits a low PCE due to its insufficient oxidation states. In PSCs, the photovoltaic performance often relies on a prolonged oxidation process.56 Kim et al. addressed this challenge by expediting the oxidation of spiro-OMeTAD through exposure to oxygen plasma.57 Nevertheless, exposure to plasma can trigger the decomposition of the perovskite phase into PbI2. Thus, to circumvent this issue and enhance the hole-carrying capacity of spiro-OMeTAD, doping with trivalent (p-dopant) materials has been explored. This approach aims to simultaneously mitigate the decomposition of the perovskite and improve the hole transport in the HTL.

Presently, the utilized p-dopants range from metal-organic complexes to metal oxides and organic molecules.58 However, although these dopants exhibit potential benefits for PSCs, their limited solubility and intricate degradation processes hinder their widespread application. Thus, to address this challenge, innovative approaches have been explored. Cobalt (Co)60 and FeCl3 complexes have shown promise as efficient p-type dopants by oxidizing spiro-OMeTAD, generating new holes and enhancing the conductivity.59 These dopants offer potential solutions to counteract the rapid aging of PSCs. Moreover, introducing an acid into the system can expedite the oxidation process and extend the operational life of these solar cells.60 Recent research efforts have focused on enhancing HTLs by incorporating acids in spiro-OMeTAD. In the study by Guan et al., the impact of benzoic acid on the oxidation of spiro-OMeTAD was investigated, building on prior findings.61 Their results indicated that increasing the concentration of benzoic acid accelerated the oxidation and improved the hole-transmitting properties of the HTL. Moreover, optimizing the doping concentration effectively reduced the hysteresis in PSCs based on this HTL, resulting in an enhanced PCE of 16.26% under conventional AM 1.5G illumination.

Yang et al.62 created a fluorinated spiro-OMeTAD for solar cell applications that was inexpensive and dopant-free. They claimed that the material was sensitive enough to be used as the HTL in CsPbI2Br-based PSCs. Next, they used 2,3,5,6-tetrafluoro-7,7,8,8-tetracyanoquinodimethane (C12F4N4) to modify the surfaces of the CsPbI2Br perovskite and the fluorinated spiro-OMeTAD. In comparison to the doped PSCs, the modified, dopant-free CsPbI2Br perovskite solar cells exhibited a very high PCE of 14.42% with a high V_OC of 1.23 V. Even after 30 days of open-air aging without encapsulation, the solar cells manufactured from the recommended HTL materials maintained 94% of their initial PCE, demonstrating exceptional longevity. Thus, high-performance CsPbI2Br PSCs can be fabricated using the produced dopant-free HTL. As a new technique for solution processing, a novel solvent mixture consisting of organic amines (H2O/ETA/EDA/DTA) in the volumetric ratio 2:6:1:1 was introduced by Zhao et al.63 to produce CuSeCN thin films for HTL application in p-i-n PSCs.
For the developed HTL-based PSCs, they reported a PCE of 15.61% in the forward I-V analysis and 15.97% in the reverse I-V analysis. Importantly, these CuSeCN-based PSCs demonstrated minimal hysteresis and exceptional long-term stability. These results highlight the significant potential of CuSeCN films as HTL materials in solar applications.

Another ETL, ZnO, has attracted attention as a promising candidate for solar cell applications owing to its high electron mobility, diverse nanostructured morphologies, and versatile growth methods.64,65 It is well established that the efficiency of PSCs depends on the surface morphology and crystalline characteristics of the perovskite top layer. In this case, the morphology of the active perovskite layer, including surface roughness and particle size, is influenced by the choice of solvent during its creation, which can significantly impact the solar cell performance.

Recently, Ahmadi et al. introduced an economical method involving an ultrasonic bath to produce ETLs for perovskite solar cells using ZnO nanoparticles synthesized in three different solvents: isopropyl alcohol (IPA), 2-methoxyethanol (2-ME), and ethanol.66 The research findings on the structure, morphology, and device performance revealed that the ZnO layers produced using 2-ME as the solvent exhibited the highest quality. Notably, a PSC utilizing ZnO (2-ME) as the electron transport layer and methylammonium lead iodide (MAPbI3) as the perovskite layer achieved an impressive power conversion efficiency (PCE) of 22%. This superior performance was attributed to the excellent MAPbI3 surface coverage, larger grain sizes, and the lowest defect density at the ETL/MAPbI3 interface. Consequently, ZnO-ETL-based solar cells emerge as a compelling choice for various solar cell applications.

Furthermore, due to its exceptional optical transparency, ZnSnO (ZTO) holds promise as an ETL material for solar cells. The oxygen vacancies naturally present in ZTO significantly facilitate charge carrier transmission. However, the presence of multiple oxygen vacancies in ZTO is considered a major drawback. Thus, Miao et al. addressed this issue by fabricating solar cells using ZTO doped with varying concentrations of silicon to investigate the impact of oxygen vacancies in ZTO and strategies for their management.67 They achieved this by producing amorphous metal oxide films through RF magnetron sputtering and adjusting the silicon concentration. Their research revealed a reduction in oxygen vacancies with increasing silicon content, which was corroborated by X-ray photoelectron spectroscopy (XPS) analysis. Notably, the decrease in oxygen vacancies in silicon-doped ZTO (SZTO) contributed to improved charge extraction and conduction capabilities. Utilizing the synthesized SZTO as the ETL, they developed a PSC with impressive performance metrics, including a peak PCE of 13.4%, J_SC of 21.6 mA cm⁻², FF of 0.67, and V_OC of 1.04 V.

TiO2-based solar cells have achieved remarkable PCEs exceeding 20%. However, they have certain drawbacks; the choice of TiO2 as the ETL significantly influences device performance in all of the aforementioned methods. When utilized as an ETL in n-i-p structured PSCs, TiO2 can exhibit a rapid drop in J_SC on exposure to UV light and make the cells unstable.
Thus, to address these limitations and protect PSCs from UV-induced degradation, several studies have explored the use of an interfacial layer positioned between the perovskite layer and the TiO2 ETL.69

Consequently, it is crucial to develop stable and high-performance PSCs, which has attracted substantial attention in this research direction.70 The stability of various ETL materials under ultraviolet (UV) radiation has driven this interest. MgxZn1−xO (MZO), with its robust electron mobility and deeper conduction band, has emerged as a promising ETL material for PSCs.71 Accordingly, Han et al. recently demonstrated the exceptional stability of PSCs utilizing MZO-based ETLs when exposed to UV light.72 They highlighted that MZO exhibits enhanced carrier mobility and a more efficient conduction mechanism compared to TiO2. This characteristic prevents charge accumulation at the perovskite/MZO interface and facilitates efficient charge conduction between the two materials. The researchers achieved an MZO-based device with an impressive V_OC of 1.11 V and efficiency of 19.57%.

Notably, the MZO-based device retained 76% of its original J_SC, whereas the TiO2-based device retained only 12% of its initial J_SC. Both cells were analyzed following a year of aging under ambient conditions, including 40% to 80% relative humidity (RH) and 8 h of UV exposure. This exceptional UV resistance was attributed to the reduced electron capture site density in the MZO ETL when exposed to UV light. The oxygen vacancies and zinc interstitials in the MZO ETL played a pivotal role in the ability of this material to withstand UV radiation without compromising the integrity of the perovskite active layer. Therefore, MZO as an ETL holds promise for the fabrication of durable PSCs resilient to UV-light-induced degradation.

Teimouri et al.73 demonstrated that lithium (Li) doping of TiO2 enhances the conductivity and electron transport in the ETL. Ultrasonication was employed to fabricate Li-doped TiO2 films, which exhibited improved conductivity and reduced solar power loss compared to undoped TiO2. Simulations conducted using a solar cell capacitance simulator (SCAPS) unveiled the impact of varying Li concentrations on the efficiency of perovskite solar cells. These simulations demonstrated a notable enhancement in performance, with the Li-doped electron transport layer (ETL) achieving a significantly higher PCE of 24.23% compared to the undoped ETL, an impressive increase of 1.97%. Additionally, Li-doped TiO2 showed a lower trap density between the absorber and the ETL. These findings establish Li-doped TiO2 as a strong contender as the ETL in PSCs. In another study, Yang et al. employed a sol-gel production process with varying TiO2 concentrations to fabricate compact TiO2 ETLs (c-TiO2).74 Among them, the ETL prepared with the highest TiO2 concentration of 2.0 M exhibited the most noteworthy characteristics. The PSCs utilizing c-TiO2 achieved an impressive PCE of 16.11% and a high V_OC of 1.1 V. This innovative use of c-TiO2 in PSCs represents a promising avenue for enhancing the efficiency of low-temperature solar panels. Fig. 5 provides a schematic representation of PSCs incorporating c-TiO2.
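Many of the studies in this section quote V_OC, J_SC, FF, and PCE together; under standard test conditions these are tied by PCE = V_OC × J_SC × FF / P_in, with P_in = 100 mW cm⁻² under AM 1.5G illumination. A minimal sketch of that relation (the input numbers below are illustrative, not taken from any specific device above):

```python
def pce_percent(voc_V: float, jsc_mA_cm2: float, ff: float,
                p_in_mW_cm2: float = 100.0) -> float:
    """Power conversion efficiency (%) from standard J-V metrics;
    p_in defaults to the AM 1.5G irradiance of 100 mW/cm^2."""
    return voc_V * jsc_mA_cm2 * ff / p_in_mW_cm2 * 100.0

# Illustrative numbers for a high-performing PSC (not a device from the text):
print(pce_percent(voc_V=1.10, jsc_mA_cm2=22.0, ff=0.75))  # ~18.2%
```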
In another study, Zhang et al.75 employed a straightforward sintering technique to fabricate MgTiO3-coated TiO2 mesoporous scaffold layers with diverse treatment concentrations, targeting their application in solar cells. The photovoltaic performance was much improved once the manufactured scaffolds were used as shell layers. Furthermore, the MgTiO3 shell effectively blocked charge carriers from recombining at the MAPbI3/TiO2 interface. The crystallinity of MAPbI3 was improved by the incorporation of MgTiO3, which was crucial for the manufacture of excellent-quality perovskite films. The PSC treated with an optimal concentration of 0.10 M achieved a remarkable PCE of 10.39%. Even after 1008 h of exposure to normal humidity, the device retained 88.35% of its initial PCE. The durable, highly efficient MgTiO3-coated TiO2 mesoporous scaffold layers are promising materials for future solar systems, given their ease of manufacture and exceptional performance.

The high electron mobility, good chemical stability, anti-reflectiveness and wide bandgap of indium oxide (In2O3) thin films make them a promising ETL material in PSCs.76 However, the hygroscopic nature of In3+ causes pinholes, fractures, and unfavorable morphology, as demonstrated in prior research, due to the reaction between In3+ cations and water molecules during the fabrication of the samples.77 This hinders the potential of In2O3 as an ETL frontrunner in the solar cell industry. Thus, to enhance the PCE of photovoltaic devices, it is preferable to manufacture In2O3 free of flaws.

Zhang et al. pioneered the synthesis of stable In2O3 films at low temperatures using an exceptionally stable indium precursor solution.78 These films were intended for use as reliable ETLs in PSCs. With a water content of only 0.2%, the indium precursor exhibited exceptional stability in ethanol. Adding the chelating ligand acetylacetone (acacH) to the solution prevented further hydrolysis of the indium.78

Recently, high-performance PSCs were fabricated by Zhu et al. by introducing a thin m-TiO2 layer at the interface between the perovskite film and compact TiO2 in a planar PSC.80 This interfacial modifying layer boosted the hardness and particle size of the perovskite films. Solar cells incorporating this layer achieved a superior performance, with an impressive PCE of 18.5% and a low hysteresis coefficient of 4.5%, surpassing conventional planar and mesoporous cells. The interfacial modifying layer holds promise for next-generation PSCs, enabling high transport capacity and improved carrier separation efficiency.

Hu et al. introduced a distinctive approach known as the multifunctional interface layer (MFIL) technique to enhance the PCE of inverted PSCs.81 The MFIL serves multiple purposes, including trap passivation, electron transport, ion migration suppression, moisture barrier, and near-infrared photocurrent enhancement. This multifunctional approach contributes to the enhanced efficiency and durability of devices. Under environmental stresses such as heat, moisture, and light, the MFIL-fabricated device showed outstanding stability for up to 1700 h without encapsulation and a considerable PCE of 21%. Molecular orientation studies at the perovskite/MFIL interface have shed light on maximizing the device performance by increasing the molecular bonding at the interface and decreasing the trap density via the design of sophisticated interlayers.
The suggested MFIL method offers a new opportunity to boost the efficiency of future perovskite-based photovoltaic systems, as shown by the findings of the above-mentioned study. For perovskite solar cell applications, Yun et al.82 demonstrated the production of well-ordered ZnO nanorods on an FTO substrate using a low-temperature water bath method. These nanorods exhibited varying lengths depending on the reaction time and offered advantages such as improved electron (e−) transport, increased contact area, high visual transmittance, and a compact interface. By utilizing these ZnO nanorods as the ETL in solar cells, an impressive PCE of 14.22% was achieved under AM 1.5G illumination. This highlights the potential of ZnO-based materials as effective ETLs in high-performance PSCs.

In summary, the intricate landscape of ETLs and HTLs in PSCs has been thoroughly examined, revealing their pivotal role in optimizing the efficiency and stability of these solar systems, as shown in Fig. 6. From traditional oxide-based ETLs to innovative materials such as nanosheets, MoS2, and fluorinated spiro-OMeTAD, the advancements in this domain underscore the relentless pursuit of higher PCEs. The exploration of various doping techniques, the introduction of multifunctional interface layers, and the utilization of novel materials such as ZnO nanorods further emphasize the in-depth research and innovation in this field. Table 2 summarizes the different CTLs discussed above and their impact on the performance of PSCs. As the global demand for sustainable energy solutions continues to grow, the findings presented in this section highlight the potential of PSCs to revolutionize the solar industry. The continuous endeavors to address the challenges, enhance the performance, and ensure the long-term stability of PSCs pave the way for a brighter, more sustainable future.

Lead-based PSCs

Kojima et al. employed MAPbBr3 and MAPbI3 as the first perovskite materials used in DSSCs as sensitizers.42 Since then, Pb-halide-based perovskites have drastically transformed the field of PV. The photocurrent and photovoltage of the device are significantly influenced by the inclusion of I and Br ions as halide anions in PSCs. Pb-based perovskites possess several exceptional qualities, making them suitable for optoelectronic applications. They are direct-bandgap (1.6 eV) semiconductor materials, close to the ideal value (1.43 eV) of the Shockley-Queisser limit for single-junction solar cells. They exhibit an exceptionally high absorption coefficient (5 × 10⁴ cm⁻¹), which surpasses that of GaAs and is nearly 25 times greater than that of Si.83 The balanced effective masses of electrons (e−) and holes (h+) facilitate enhanced charge carrier transport.84 In addition, photogenerated charge carriers have a long lifespan, complemented by a 1 μm diffusion length.85 There are two architectural designs for perovskite single-crystal devices, i.e., planar and lateral structures. Between them, the more commonly used solar cell design is the planar, or conventional sandwiched, structure. Malinkiewicz et al. employed MAPbI3 (a donor material paired with fullerene derivatives) to fabricate a planar-structured single-crystal device. Their planar devices were constructed at room temperature using a thin 285 nm MAPbI3 coating, thermally positioned between materials that acted as electron acceptors (PCBM) and electron blockers (polyTPD).86
The resulting device achieved a pioneering PCE of 12%, setting a benchmark for fullerene-based organic solar cells. Subsequently, small-molecule-based hybrid solar cells have reported PCEs exceeding 18%.87 However, whereas planar-structured devices maintain a planar design, lateral-structured single-crystal devices offer enhanced mechanical and thermal properties, endowing the device with increased stability.

Lateral structures do not require costly substrates such as ITO and are free-standing.88 This directly improves their capacity to absorb light, given that absorption by the glass substrate and conductive electrode is avoided, which improves the photocurrent and efficiency in comparison to conventional PSCs. Thus, as integrated back-contact structures, lateral structures are more effective and produce devices that are less expensive. By simply improving the anode contact, Y. Song et al. created a stable and effective single-crystal MAPbI3 lateral structure that was surface-treated with MAI, resulting in improved V_OC and FF as well as an increased PCE reaching 11%.89 The fundamental challenge in producing this type of structure on a large scale is the need for demanding photolithography and laborious deposition procedures. Although MAPbI3 is an effective material, due to its primary drawback of low stability, researchers have attempted to tailor it by adjusting its composition via the synthesis of chloride or bromide counterparts (e.g., MAPbI3−xClx and MAPbBr3, respectively).90,91 For example, compared to iodide-based perovskite solar cells (MAPbI3, 1.15 V), the V_OC jumped to 1.3 V in MAPbBr3.92

The optoelectronic characteristics of polycrystalline organic-inorganic hybrid perovskite (OIHP) thin films may be negatively impacted by the excess charge traps at their grain boundaries.93 Furthermore, polycrystalline perovskites are known to be susceptible to moisture and photodegradation. Single-crystal OIHPs, such as MAPbX3, usually have structural characteristics such as crack-free, smooth surfaces and well-shaped boundaries, and they are also thermally more durable than their polycrystalline counterparts.94 Additionally, single crystals have diffusion lengths and carrier mobilities that are around two orders of magnitude greater than those of their polycrystalline phases.95 These improved characteristics have caused the PCE of single-crystal PSCs to increase quickly, rising from 6.53% to 22.8% during the last three years.96 Consequently, single-crystal perovskites are excellent candidates for the fabrication of solar cells that are both stable and effective for industrial applications. However, single-crystal perovskites are associated with some drawbacks, such as the difficulty of growing large-area thin films of excellent quality. The presence of Pb in the perovskite crystal is a significant disadvantage despite the high PCE of MAPbX3 perovskite materials. Exposure to Pb is very poisonous and harmful to health; humans who ingest it may suffer hyperactivity and neurological, reproductive, and renal organ damage.97 Thus, researchers have focused on lowering the content of lead or removing it from PSCs, given that these toxicity concerns restrict the use of Pb-containing PSCs.98
As another strategy to reduce the total Pb consumption of the device while keeping a high PCE, Zheng et al.99 recommended the concept of physical Pb reduction. Perovskite technology is now making gradual progress toward Pb-free materials by giving priority to research on workable substitutes.

Table 3 presents a summary of recent developments in Pb-based PSCs. The advancement of Pb-halide-based perovskites has undeniably ushered in a transformative era in PV, with their remarkable optoelectronic properties setting new benchmarks in solar cell efficiencies. From the pioneering work of Kojima et al. to the innovative approaches of Malinkiewicz and Song, the versatility and potential of these materials have been consistently demonstrated. However, the journey of Pb-based PSCs also has challenges. The inherent instability of MAPbI3 and the detrimental effects of the grain boundaries in polycrystalline structures have driven researchers towards the exploration of single-crystal perovskites, which promise enhanced stability and performance. However, the overarching concern remains the toxic nature of lead, which poses significant environmental and health risks. The endeavors of researchers to mitigate the lead content in PSCs underscore the commitment of this industry to developing sustainable and safe perovskite technologies. As the field progresses, the quest for efficient, stable, and lead-free perovskite materials will undoubtedly remain at the forefront, guiding the future trajectory of perovskite solar cell research and applications.

Tin (Sn)-based PSCs

Lead (Pb) is commonly present in PSCs. However, although lead-based PSCs exhibit an impressive performance, the toxicity of Pb poses significant environmental concerns and hinders the mass production and commercialization of PSCs. Thus, to make PSCs more affordable and environmentally friendly, it is necessary to develop Pb-free or Pb-reduced perovskites.100,101 When considering replacements for the hazardous Pb in perovskite unit cells, factors such as the ionic radius and the stability of the perovskite structure are vital. In this case, divalent metallic ions such as strontium (Sr), tin (Sn), calcium (Ca), and barium (Ba) have emerged as potential candidates to replace Pb, aligning with Goldschmidt's octahedral factor and tolerance principles.102,103 Among them, Sn has attracted significant attention due to its electron configuration and coordination geometry, which are similar to those of Pb. Generally, Sn-based perovskites, represented by the chemical formula ASnX3, are considered the best alternatives to Pb. Sn-based PSCs boast attributes such as low exciton binding energies, superior carrier mobility, and a theoretical PCE of 33%, making them favorable compared to Pb-based devices. However, their practical efficiency of about 10% is notably lower than that of Pb-based perovskite solar cells. Moreover, the oxidation of Sn2+ to Sn4+ adversely affects the stability of PSCs.104 The partial replacement of lead with divalent metal ions may enhance the performance of perovskite solar cells while posing no environmental hazards.
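Goldschmidt's tolerance factor referenced above is t = (r_A + r_X)/(√2(r_B + r_X)), with values of roughly 0.8-1.0 favoring a stable perovskite. A numeric sketch for the Pb → Sn swap, using approximate effective ionic radii from the general literature (assumed values for illustration, not taken from this review):

```python
import math

def tolerance_factor(r_A: float, r_B: float, r_X: float) -> float:
    """Goldschmidt tolerance factor for an ABX3 perovskite; values of
    roughly 0.8-1.0 favor a stable perovskite structure."""
    return (r_A + r_X) / (math.sqrt(2.0) * (r_B + r_X))

# Approximate effective ionic radii in angstroms (assumed literature values):
# MA+ ~2.17, I- ~2.20, Pb2+ ~1.19, Sn2+ ~1.10
r_MA, r_I = 2.17, 2.20
for name, r_B in (("MAPbI3", 1.19), ("MASnI3", 1.10)):
    print(name, round(tolerance_factor(r_MA, r_B, r_I), 3))
```

Under these assumed radii both compositions fall in the stable window, which is consistent with Sn being singled out as a viable B-site substitute for Pb.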
Considering this, Ji et al.105 fabricated a PSC with a Pb-Sn mixed triple cation, which displayed a PCE of 16.10%. Bulky organic ligands are also included, which affect the orientation and development of the grains, causing an increase in spin-orbit coupling (SOC) and out-of-plane photoinduced bulk polarization. These factors determine how well 2D/3D perovskite solar cells function photovoltaically. The SOC specifically increases the photovoltaic activity, i.e., a higher SOC greatly enhances the spin conversion from optically generated states.

A good device performance in PSCs is facilitated by a suitable ETL.107 In this case, metal oxides have attracted significant interest in the development of ETL materials for solar cell applications due to their intrinsic characteristics, including robust thermal and chemical durability, elevated permittivity, and superior electrical conductivity.108 Presently, TiO2 is the most common material used to create very effective PSCs.109 However, it has certain drawbacks, including limited electron mobility (0.1-1 cm² V⁻¹ s⁻¹), the need for high sintering temperatures (>450 °C), and perovskite deterioration in the presence of light.110 Therefore, researchers are focused on the introduction of alternative ETL materials to avoid these problems. Binary metal oxides such as ZnO and SnO2 are considered feasible replacements for TiO2 because of their improved electron mobility and convenient low-temperature production.111,112 Nevertheless, ZnO-based photovoltaic setups suffer from inadequate stability arising from residual OH on the surface of ZnO, leading to the degradation of the perovskite structures. Hence, SnO2 has recently become popular as an ETL for PSCs. Superior-quality SnO2-based devices have demonstrated excellent PCEs comparable to those of TiO2-based devices. Additionally, SnO2-based devices are more reliable than TiO2- and ZnO-based ones.113 Presently, significant research is devoted to altering the structure of ETL materials to increase the durability and efficiency of PSCs.

Generally, the oxidation characteristic of Sn has a significant impact on the operation of the device by generating vacancies inside cells. Thus, to avoid this, Mohammadian et al.114 showed how to make inexpensive, environmentally friendly tin-based PSCs without an HTL by using a natural antioxidant, uric acid (UA). They claimed that the performance of the device is enhanced by the addition of UA because it lowers oxidation and suppresses carrier recombination. This indicates that uric acid enhances the operation of the device by stopping the oxidation of Sn. Additionally, Ghahremani et al.115 employed intense pulsed light for the first time to quickly anneal the ETL (SnO2) and triple-cation perovskites with the aim of fabricating effective PSCs. The inclusion of the alkyl halide diiodomethane (CH2I2) prevented the regular crystallization during intense pulsed-light annealing and enhanced the surface characteristics of the perovskite layer by delivering iodine via UV radiation. They reported that the greatest efficiency of 12.56% was achieved by the SnO2-based PSC produced by intense pulsed-light annealing. Additionally, SnO2 quantum dots were used as the ETL in the creation of exceptionally proficient PSCs by Vijayaraghavan et al.116
They created SnO2 quantum dots using a low-temperature solution processing approach. The electron extraction and hole-blocking capabilities of the SnO2 quantum dots were better than those of high-temperature-produced ETLs. Additionally, the device created employing SnO2 quantum dots as the ETL exhibited a high PCE of 13.64%.

In another study, Deng et al.117 presented a novel method for the preparation of tin-doped TiO2 ETL materials for PSC applications. To passivate the TiO2 film and surface while doping TiO2 with Sn, they utilized hydriodic acid (HI) for the first time. Initially, HI regulates the hydrolysis of TiO2 and eliminates the trap centers of associated oxygen vacancies. Subsequently, TiO2/SnO2 films are created by incorporating Sn into HI-passivated TiO2. The use of SnO2 significantly improves the electron mobility while suppressing flaws over the whole film. Additionally, they reported that the 0.05 M SnO2-doped TiO2-based perovskite device showed low hysteresis, outstanding stability, and an efficiency of 17.77% among the prepared samples. The TiO2/SnO2 (0.05 M) device maintained 86% of its original efficiency even after sustained heating at 100 °C for 21 h. Therefore, compared to pristine TiO2-based devices, the inclusion of SnO2 improved the stability and efficiency of perovskite solar cells. To enhance the electron coupling, passivate trapping defects, and align the energy levels optimally at the junction of the perovskite and ETL layers, Zhang et al.118 implemented compact and ultra-thin SnOx coatings produced from SnCl4. The PCE of the PSC based on the Cl-SnO2/SnO2 ETL was 18.6%, while that of the same device without Cl was 16.3%. Additionally, Huang et al.119 reported the addition of LiCl to the SnO2 ETL using a straightforward low-temperature procedure. The conductivity of SnO2 was greatly increased by the addition of LiCl, which enhanced the charge transport and prevented charge recombination. They reported that while the same device exhibited a PCE of 18.35% in the steady state, the PCE of the PSC based on the Li:SnO2 ETL reached 19%.

Du et al.120 added an amino acid (glycine) self-assembled film to the SnO2 ETL at a reduced temperature, serving as a buffer layer to enhance the performance of the SnO2-based PSCs. In effect, the lattice mismatch between the SnO2 and perovskite layers was modulated by the buffer layer. Additionally, the interaction between SnO2 and the perovskite layer at the interface was improved by the electrostatic interactions between the amino group and the perovskite framework. A schematic illustration of the SnO2-based PSC device architecture is shown in Fig. 7.
This resulted in reduced recombination of charge carriers and improved charge carrier transport efficiency. They reported that the SnO2-based PSC modified with glycine had a maximum efficiency of 20.68%, an FF of 0.78, a V_OC of 1.10 V and a J_SC of 24.15 mA cm⁻²; as seen from the improved efficiency, glycine on SnO2 may serve as an effective electron buffer layer for extremely efficient PSCs. Additionally, ternary metal oxides have superior characteristics compared to their binary counterparts. In ternary metal oxide materials, the ratios of the cations can be changed, and consequently their optoelectronic characteristics, such as electrical resistivity and bandgap, can also be controlled. Therefore, ternary metal oxides, such as Zn2SnO4 (ZSO), SrTiO3 and BaSnO3,121 are potential ETL materials for developing highly efficient PSCs. Among the ternary metal oxides, ZSO has the best features, including strong electron mobility (10-30 cm² V⁻¹ s⁻¹), a broad optical bandgap (3.8 eV), and an appropriate conduction band edge, making it a desirable electrode choice for PSCs.121 Oh et al. reported the fabrication of ZSO-ETL-based PSCs with a PCE of 7% for the first time. Later, Shin et al. reported a novel technique for creating ZSO nanoparticles for solar applications. The produced ZSO nanoparticle-based PSCs exhibited a PCE of 15.3%.122 Subsequently, a solution-processed ZSO film was employed by Jung et al.123,124 as an ETL in a PSC, resulting in a record-breaking efficiency of 20.02%. Recently, Zheng et al.125 created a Zn2SnO4 single crystal using a straightforward, affordable hydrothermal synthesis process. The particle size and shape of the ZSO single crystal were controlled in the proposed method based on the duration of the hydrothermal reaction. Additionally, the ZSO-based perovskite solar cell displayed an elevated PCE of 18.32% together with a high J_SC of 24.79 mA cm⁻². Moreover, the device remained stable even after 15 days in air with a humidity level of 20%. Thus, ZSO exhibits great potential as an ETL candidate for manufacturing exceptionally efficient photovoltaic devices, as demonstrated by all the above-mentioned findings.

Recently, ZSO was employed as an ETL in PSCs by Sadegh et al.126 They employed the chemical bath deposition (CBD) approach to modify the surface of the ZSO layer. The density and surface shape of the perovskite film were changed by CBD; a perovskite layer with high surface exposure and enlarged grains was thereby produced. This reduced the losses caused by the recombination of charge carriers. These modifications significantly increased the charge extraction at the ETL/perovskite interface and successfully inhibited trap-assisted recombination. Consequently, the photovoltaic performance was also improved. The highest PCE of 21.3% was shown by the CBD-treated ZSO (ETL)-based PSCs. Specifically, the assembled device exhibited remarkable stability, maintaining 90% of its original PCE even after 1000 h of continuous illumination. New types of Sn-based materials are continuously being developed to enhance the performance of PSCs in ongoing research.
In conclusion, the exploration and development of Sn-based PSCs have emerged as a promising avenue in the field of photovoltaics, addressing the environmental concerns associated with their Pb-based counterparts. The intrinsic properties of Sn, coupled with its compatibility with various metal oxides and organic ligands, have paved the way for innovative device architectures and enhanced photovoltaic performance, as shown in Fig. 8. The incorporation of diverse strategies, from the use of bulky organic ligands to the modulation of ETL materials, underscores the versatility and potential of Sn-based perovskites, as summarized in Table 4. Notably, the endeavors of researchers have showcased the potential of mixed cations, organic cations, and ETL modifications in achieving impressive PCEs. The recent advancements in ternary metal oxides, particularly ZSO, further highlight the continuous evolution of Sn-based PSCs towards achieving both high efficiency and stability. Although challenges persist, especially regarding the oxidation of Sn and the synthesis of large-area films, the collective efforts of the scientific community signal a bright future for Sn-based PSCs.

Mixed Sn-Pb PSCs

Pb-free perovskites have resolved the toxicity issue of lead-based perovskites, but their performance in practical applications is inferior to that of their Pb-based counterparts owing to their poor efficiency and stability. Thus, by combining lead and tin in perovskite structures, tin-lead perovskites may strike a balance between reduced toxicity and excellent efficacy and stability.127 However, Sn-based perovskites solidify before Pb-based perovskites owing to the disparity in their crystallization rates. Accordingly, the Chen, Liu, and Guo research groups have made attempts to control the crystallization rates of these materials and promote vertical crystal growth. This approach aims to mitigate nonuniform growth, which results in the formation of numerous traps in the film that impede carrier transport.128 The device quality is increased by the improved crystallinity, which lowers the perovskite residual stress (or strain).129 An intriguing observation is that the bandgap of the material is narrowed to the range of 1.2-1.3 eV, which is smaller than that of perovskites based solely on lead or tin. This can be attributed to the "bowing effect" caused by the sharing of the octahedral cage between lead and tin in the perovskite structure.

The first Sn-Pb PSCs were reported in 2014 by the Hayase and Kanatzidis research groups, with efficiencies of 4.18% and 7.37%, respectively.130,131 To date, the Wakamiya group has reported the greatest efficiency of 23.6%, surpassing the Hayase group's most recent achievement of 23.2%.132 Presently, attempts are being devoted to improving the effectiveness of Sn-Pb PSCs in a manner comparable to lead- and tin-based PSC investigations, including n-doping of the self-p-doped perovskite, surface formation of 2D perovskite, and antioxidation additives.133,134 The special qualities and prospective uses of Sn-Pb PSCs are covered in the following section.

Fig. 9a depicts the bandgap bowing effect, demonstrating the distinct alteration in the bandgap that occurs when Sn and Pb perovskites are alloyed. This effect leads to a reduction in the bandgap below that of each individual pure composition.135
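The bowing effect described here is commonly parameterized as E_g(x) = x·E_g,Sn + (1−x)·E_g,Pb − b·x(1−x), where x is the Sn fraction at the B site and b is the bowing parameter. The endpoint gaps and b in the sketch below are assumed illustrative values chosen to reproduce a ~1.25 eV minimum near mid-composition, not figures from this review:

```python
def bowed_bandgap(x_sn: float, eg_sn: float = 1.30, eg_pb: float = 1.55,
                  b: float = 0.70) -> float:
    """Quadratic (Vegard-plus-bowing) interpolation of the alloy bandgap.
    x_sn is the Sn fraction on the B site; eg_sn, eg_pb and the bowing
    parameter b are assumed illustrative values in eV."""
    return x_sn * eg_sn + (1.0 - x_sn) * eg_pb - b * x_sn * (1.0 - x_sn)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x_Sn = {x:.2f}: Eg ~ {bowed_bandgap(x):.2f} eV")
```

With these assumed parameters the gap dips below both endpoints for intermediate compositions, mirroring the 40-70% Sn window of minimum bandgap discussed next.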
The variation in both the energy level and lattice strain contributes to the bandgap bowing of alloy perovskites. The band-edge formations are mostly caused by inconsistencies in the energy levels of the atomic orbitals of Sn and Pb. The VBM and CBM of Sn perovskite are shifted upwards (Fig. 9b), as described in the section on Sn-based perovskites.136 These discrepancies cause Sn-Pb perovskites to have a narrow bandgap. The smaller Sn ion has an indirect influence on the bandgap value by causing the octahedron to tilt and the lattice to compress.137 Changes in the A- and/or X-site ions have an impact on the bandgap values in lead- and tin-based perovskites, as covered in the prior sections. The bandgaps of mixed Sn-Pb perovskites exhibit variations influenced by the organic cation and halide present. The lowest bandgap of ABX3 perovskites was reported when the ratio of Sn to the B element (Sn/Pb) was in the range of 40% to 70%.138 Also, the bandgap can be controlled, the defects can be reduced, and the oxidation can be suppressed. As shown in Fig. 9c, the Snaith group showed that an Sn concentration in the perovskite in the range of 0.5% to 20% of the metal content caused faults, but an Sn content in the range of 30% to 50% recovered the optoelectronic quality.139 Trap sites were created, and the photoconductivity, photoluminescence lifetime, and photoluminescence quantum efficiency all decreased with the inclusion of a modest concentration of Sn. The Sargent group demonstrated that deep-level traps were formed when the Sn concentration was less than 30%.140 Alternatively, the 50% Sn mixed alloy showed extended carrier lifetimes and improved defect tolerance without deep traps (Fig. 9d). The oxidation level varied depending on the Sn concentration. Sn4+ is quickly produced on the perovskite surface, according to the theory of the Angelis group that Sn-poor conditions enhance the oxidation of Sn because it functions as a dopant.141 Sn is more readily oxidized than Pb, which leads to more flaws and severe oxidation. However, it has been shown that increasing the stability and film quality of perovskites requires an Sn concentration of roughly 50%. The advantage of having a low bandgap opens up a range of applications, including photodetectors and tandem solar cells,142 given that the bandgap of Sn-Pb perovskites with 50% Sn content is close to 1.2 eV and may potentially attain sufficiently high theoretical efficiency (32.74%). Recently, the wide absorption spectrum (up to 1000 nm) of Sn-Pb mixed perovskites has been applied in several investigations on photodetectors. Pb-based perovskite photodetectors struggle to detect light in the NIR region because their absorption spectrum is restricted to the range of 300 to 800 nm.143 Thus, to extend the absorption spectrum to 1000 nm, organic bulk-heterojunction (BHJ) layers capable of absorbing in the near-infrared (NIR) region are deposited on the perovskite film.144
In organic photovoltaics, the morphology of the BHJ layer is a critical factor in determining the device performance, particularly its light-absorption capabilities. This layer, typically a blend of electron-donating and electron-accepting organic materials, creates a nanoscale network of interpenetrating phases, each with a distinct role in device operation. The surface morphology of the BHJ layer is pivotal for several reasons. Firstly, it defines the interfacial area between the donor and acceptor materials, which is crucial for effective exciton dissociation. A larger interfacial area means more opportunities for light absorption and exciton generation. This is particularly important in the context of extending the absorption spectrum, given that a greater interfacial area results in more effective light harvesting, including in the NIR region.

Additionally, the morphology dictates the pathways available for charge transport. An optimal network with efficient pathways ensures that the photogenerated carriers (electrons and holes) can reach their respective electrodes effectively, contributing to the overall current of the device. The morphology also affects the exciton diffusion length, which is the distance excitons can travel before they recombine.145,146 A favorable morphology with appropriate domain sizes increases the likelihood of excitons reaching a donor-acceptor interface for dissociation, thereby enhancing the efficiency of light absorption and conversion. Another crucial aspect influenced by morphology is the rate of recombination. Poorly connected phases can lead to increased charge recombination, where electrons and holes recombine prematurely, leading to energy loss. Thus, optimizing the morphology is key to minimizing these recombination losses. Furthermore, the way light interacts with the active layer is significantly influenced by the surface morphology of the BHJ layer. This morphological structure can scatter or trap light, directly affecting the absorption profile of the material.147 For instance, aggregated domains or irregular structures in the BHJ layer may enhance light trapping, which is particularly beneficial for extending the absorption into the NIR region. This effect is critical for devices designed to absorb a broader spectrum of solar radiation, including wavelengths that standard photovoltaic materials cannot efficiently capture. Moreover, the morphological features at the interfaces with other layers, such as the electrodes, are vital in impacting the charge injection and extraction processes. An optimal morphology ensures efficient charge transport and extraction, minimizing the energy losses at these crucial interfaces. Therefore, achieving the appropriate morphology is essential for efficient device operation, particularly in ensuring that the perovskite layer and the BHJ layer work synergistically.

However, the NIR detectivity of Pb-based perovskite photodetectors in combination with BHJ layers is poor. Fortunately, better detectivity and sensitivity can be achieved by Sn-Pb-based perovskites, which absorb NIR light.148
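For reference, the exciton diffusion length mentioned above follows the standard diffusion relation (a textbook expression, not one derived in this review):

$$L_D = \sqrt{D\,\tau}$$

where D is the exciton diffusion coefficient and τ the exciton lifetime; BHJ domain sizes on the order of L_D or smaller let most excitons reach a donor-acceptor interface before they recombine, which is why domain size is so tightly linked to performance.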
Perovskite tandem solar cells are also used. In most cases, a perovskite tandem solar cell comprises a front wide-bandgap cell that absorbs high-energy photons and a rear smaller-bandgap cell that absorbs low-energy photons. To date, the perovskite top cells are generally paired with organic, CIGS, and Si bottom cells,149 which are further explained in the next section. Sn-Pb-based perovskite is a possible contender for the fabrication of the narrow-bandgap subcells150 due to its high efficiency, low production cost, and solution processability. Thus, to achieve a remarkable efficiency of over 20% in Sn-Pb perovskite solar cells (PSCs), the composition of the perovskite material, specifically the mixed A cation, needs to be fine-tuned, as demonstrated in Fig. 10a. The major materials utilized are FA and MA, with a small quantity of Cs added on occasion. The best efficiency was observed when the FA-based perovskite comprised 30 mol% MA, as demonstrated in the efficiency trend of the ASn0.5Pb0.5I3 PSCs. The Podraza group discovered a connection between the Urbach energy (UE) and the increased efficiency of mixed-cation Sn-Pb PSCs. Using photothermal deflection spectroscopy, they evaluated the UE of perovskites with diverse cation compositions. The UE and VOC of thin-film perovskites exhibit a clear correlation, with the reduction in UE happening in conjunction with a favorable decline in the VOC deficit. They demonstrated that a lower UE and VOC deficit occur when FA and MA are blended in an equivalent ratio (Fig. 10b and c, respectively).151

A well-known substance, inorganic Cs, considerably increases the moisture and light stability of perovskite films.152 The Jen group established that Sn-Pb-based perovskites with MA+ or FA+ partially substituted by Cs+ exhibit a slowed crystallization rate, promoting the creation of homogeneous films.153 In particular, in compositions including high concentrations of Sn, the device stability and performance were improved (Fig. 10d-f). The Sn-Pb perovskite has an intriguing property, where its bandgap changes irregularly when mixed A-site cations are present. When smaller A-site cations predominate, APbI3 develops a larger bandgap, whereas ASnI3 exhibits the reverse trend.154 However, there is no obvious regularity in the case of Sn-Pb perovskites due to variations in the extent of orbital binding and deformation of the perovskite lattice caused by the mixed B site.155
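Returning to the Urbach energy correlation noted above: the UE parametrizes the exponential absorption tail below the band edge. A standard form (a textbook relation, not a fit from the cited study) is:

$$\alpha(E) = \alpha_0 \exp\!\left(\frac{E - E_0}{E_U}\right), \quad E < E_g$$

A smaller E_U indicates a sharper band edge and less energetic disorder, which is why it tracks so closely with a smaller VOC deficit in the Podraza group's measurements.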
In summary, the evolution of mixed Sn-Pb PSCs underscores the relentless pursuit by the scientific community to merge environmental sustainability with optimal device performance. The amalgamation of tin and lead in perovskite structures has unveiled a promising pathway, bridging the gap between the high efficiency of Pb-based perovskites and the environmental benignity of their Sn-based counterparts, as shown in Fig. 11. The intricate interplay among the "bowing effect," crystallization rates, and defect dynamics has been pivotal in shaping the optoelectronic properties of these mixed perovskites. Noteworthy strides by various research groups have illuminated the potential of manipulating the Sn concentration to enhance the film quality, suppress oxidation, and fine-tune the bandgaps, opening doors to diverse applications such as photodetectors and tandem solar cells. The extended absorption spectrum of Sn-Pb perovskites, especially in the NIR region, further accentuates their versatility and potential in photodetection, as summarized in Table 5. As this field continues to flourish, the insights gleaned from these studies will undoubtedly serve as a foundation for future advances.

Germanium-based PSCs

Germanium (Ge), another member of group IV, can also serve as a substitute for Pb or Sn in the creation of perovskites. However, Ge-based perovskites exhibit distinct characteristics compared to their Pb2+ and Sn2+ counterparts, primarily because Ge is a significantly lighter element. Due to the propensity of Ge2+ to oxidize to the more stable Ge4+, germanium-based perovskites tend to be less stable than tin-based ones. Furthermore, the considerably smaller radius of Ge (73 pm) does not align well with the ABX3 structure of perovskites based on Goldschmidt's tolerance factor. These perovskites display broader optical bandgaps exceeding 1.6 eV and markedly different energy levels. For instance, the conduction band minimum (CBM) of FAGeI3 is 3.15 eV, whereas that of FAPbI3 and FASnI3 is 4.0 eV and 3.79 eV, respectively.156,157

Despite the numerous theoretical investigations on Ge-based perovskites, the use of germanium-based perovskites in solar cells has been scarcely documented.158 The first germanium-based PSCs with CsGeI3, MAGeI3, or FAGeI3 photoactive layers were reported in 2015.157 Fig. 12 illustrates that all the produced PSCs, including the CsGeI3 solar cells, performed poorly, achieving a PCE of less than 0.20%. This was attributed to the fact that the bandgaps of MAGeI3 and FAGeI3 are too wide to effectively absorb light. In 2018, by substituting 10% of the iodide in germanium-based PSCs with bromide, the PCE was increased to 0.57%.159 These subpar performances appear to originate from the formation of imperfect crystals, instability, and the poor surface configuration of Ge-based perovskites.157

Mixed tin- and lead-based perovskites (Sn-Pb) are considered alternatives to lead perovskites. However, given that Ge is nontoxic and theoretically offers a performance comparable to lead PSCs, it should also be considered. The simulation results by Raman et al. suggest that Ge-Pb-based PSCs using an MA(Pb,Ge)I3 light absorber can potentially achieve efficiencies of up to 30%.160 Nevertheless, there have been limited studies on Ge-Pb devices due to the instability of the Ge-Pb perovskite crystal (tolerance factor of >1). Another simulation-based study indicated that ternary B-site mixed cations, including Pb, Sn, and Ge, can be used to develop stable and efficient devices.161,162 Experimental studies have identified In, Cu, and Zn as feasible substitutes for reduced-lead perovskites.163 Currently, germanium-based perovskites are primarily used in combination with tin-based perovskites due to performance concerns.164

Polymer-based PSCs

Researchers have been focused on improving the durability of PSCs by interfacial modification, device encapsulation and inverted orientation, together with other methods.165
According to reports, intrinsic modification may also increase the stability of perovskites, particularly by reducing moisture corrosion.166 The interfacial modification is often applied at the active layers, such as the ETL, HTL, photovoltaic operational layer and buffering intermediate layer.167 However, the perovskite/perovskite contact, which is called the intergranular interface, has a higher defect density, which makes it easier for moisture to penetrate the perovskite layer.168,169 Thus, to optimize the morphology, charge transfer, and separation in the perovskite layer, the crystallinity at the intergranular interface must be increased.170 An excellent photovoltaic performance is achieved as a result of better charge separation and transportation in the perovskite film. Consequently, PSCs with moisture-resistant perovskite intergranular interfaces display enhanced stability.171

Researchers are focused on improving the interactions between the perovskite layer grains, such as acid-base interactions, stacking, electrostatic interactions, and hydrogen bonding, to provide a moisture-resistant intergranular interface.168 The moisture-resistant intergranular interface in perovskite solar cells significantly impacts the exciton depletion and electron/hole mobilities, which are essential for the overall efficiency of the cells. Firstly, regarding exciton depletion, grain boundaries in the perovskite materials are crucial. These boundaries, when stable and moisture-resistant, introduce fewer deep defect states in the bandgap, maintaining high electronic quality. Grain boundaries can provide additional pathways for exciton dissociation and charge separation, enhancing the exciton depletion.145 Furthermore, the ordered perovskite/perovskite heterojunctions, formed through molecular modification of the perovskite layer, aid in efficient exciton dissociation and charge-carrier transport in the grains. The moisture resistance at these interfaces is critical for maintaining the integrity of the perovskite layer and preventing the entry of water, which reduces the charge recombination and thereby effectively depletes excitons. Regarding the electron and hole mobilities, the ETL interface plays a significant role. Moisture-resistant modifications at the perovskite/ETL interface, such as incorporating bifunctional molecules and fullerene derivatives, optimize the electronic structure and passivate recombination processes. This results in enhanced electron mobility by reducing the number of trap sites and improving the interface band alignments. Similarly, at the perovskite/HTL interface, functional modifications can facilitate improved hole extraction and electron blocking, leading to better hole mobility in the perovskite layer. The development of ordered perovskite/perovskite heterojunctions also influences the charge-carrier mobilities, where the well-aligned interlayers and minimized defects in these heterojunctions promote efficient electron and hole transport.

Substituting methylamine (CH3NH2) with phenethylamine (PhCH2CH2NH2) may improve the hydrophobic interactions and stacking. Additionally, formamidine is used to replace methylamine (CH3NH2) to increase the hydrogen-bonding contact. Consequently, the stability and PCE of PSCs are improved. Many polymers, including polyethylene oxide (PEO), polymethylmethacrylate (PMMA), polyethyleneimine (PEI) and polyvinylpyrrolidone (PVP), display significant intergranular interactions as a result of the presence of many active sites.172,173
Also, polymer-based PSCs display great stability and enhanced PCE because of the presence of significant intergranular contacts.174 To date, in all the reported polymers, the curly macromolecular arrangement has had to be altered to modify the intergranular contacts of the perovskite, because this arrangement otherwise has a negative effect, such as reducing the perovskite crystallinity or diminishing its photoelectric properties. Researchers are actively developing improved polymers, specifically dendritic polymers or dendrimers, to address these challenges. These 3D spherical polymers have generated significant interest due to their ability to make slight configuration adjustments when interacting with the perovskite grain surfaces, thereby preventing the local aggregation seen in linear macromolecular structures.175 Consequently, the crystallinity of dendrimer-modified perovskites is enhanced. Therefore, the stability and effectiveness of perovskite devices are significantly enhanced by the use of dendritic polymers.

Consistent with this, Du et al. created a novel molecular roadmap to illuminate the efficacy of PSCs. The intergranular perovskite contact was successfully regulated by the suggested model, which enhanced the PCE. Polyamidoamine (PAMAM) dendrimers were employed by Du et al. as a template for the dendritic crystallization process that formed the perovskite.176 PAMAM contains methyl esters at its molecular perimeter, and these groups have tremendous potential to interact with the grain surfaces of the perovskites. These interactions, likely involving chemical bonding or other molecular associations, are poised to significantly influence the surface morphology of the perovskite layer. The primary objective of this interaction between the PAMAM functional groups and the perovskite grains is to enhance the intergranular interface interactions, strengthening the overall structure of the perovskite layer. These interactions with PAMAM induce alterations in the chemical structure of the active perovskite layer, which affect the organization of the perovskite grains, leading to changes in the surface morphology and the interfacial interactions between the grains. The aim is to enhance the compactness and uniformity of the perovskite film, which includes reducing the number of defects such as pinholes and bolstering the intergranular connections.146 These chemical and morphological modifications in the perovskite layer can substantially impact the performance of the device. A more compact and uniformly distributed perovskite film, achieved through the PAMAM interactions, facilitates improved charge separation and reduced recombination rates, thereby enhancing the overall stability and efficiency of the device. The fortification of the intergranular interactions is especially vital for the durability and performance of perovskite solar cells. Furthermore, the interaction between the methyl esters in the PAMAM dendrimers and the perovskite grain surfaces results in notable changes in the chemical structure of the active layer. This alteration positively influences the surface morphology, leading to enhancements in device performance. The methyl esters in PAMAM interact with the perovskite grain surfaces, forming bonds through the amino and carbonyl groups. The PAMAM dendrimers act as a dendritic crystallization framework, guiding the formation of the perovskite layer. This process crosslinks the perovskite grains, substantially strengthening the intergranular interfacial interactions.
The result is a marked improvement in the perovskite phase morphology, characterized by a reduction in the number of grain boundaries and the elimination of pinholes. These morphological improvements caused by the PAMAM modification synergistically enhance the power conversion efficiency and stability of perovskite solar cells. The increase in the short-circuit current density, leading to a significant enhancement in power conversion efficiency, is chiefly attributed to the improved perovskite morphology, and in particular, the robust intergranular interactions facilitated by PAMAM. Because of this, the dendritic-polymer backbone of the perovskite grains exhibits strong intergranular interfacial contacts. Additionally, the phase morphology of the perovskite is significantly improved by eliminating the pinholes and reducing the number of grain boundaries. To create a homogeneous surface, the compact, pinhole-free dendritic PAMAM crosslinked the perovskite grains, strengthening the contacts at the intergranular interface. This resulted in a substantial PCE enhancement of 42.6% for the unencapsulated PSCs using the dendritic PAMAM polymer backbone under ambient conditions. The PAMAM-modified device also retained 73% of its original PCE after 400 h. The major factor in achieving a high PCE is the enhancement of the perovskite intergranular interactions by the PAMAM modification. A schematic depiction of the PAMAM dendrimers controlling the perovskite morphology is shown in Fig. 13, together with the device architecture of a PAMAM-modified PSC.

Interlayers also significantly contribute to increasing the PCE of PSCs. Recently, polyelectrolytes have been shown to have an impact on the device performance when utilized as buffer layers in both n-type substrate (n-i-p) and p-type substrate (p-i-n) geometries, according to Kang et al.177 To create the buffer layers, they employed non-conjugated polymer electrolytes (NPEs) with a PEI backbone and a variety of counterions, including tetrakis(imidazole) borate (BIm4−), bromide (Br−), and iodide (I−). Additionally, the size of the counterion affects the performance of perovskite solar cells. The non-conjugated polymers produced electric dipoles at the NPE/metal electrode interface, which could be used to adjust the energy levels and work functions of the electrodes. Consequently, in the n-i-p and p-i-n configurations, the solar cells incorporating the NPE buffer layer displayed PCEs of 14.71% and 13.79%, respectively.

At the interface between the perovskite and the electrode, the HTL generally extracts the holes and inhibits the recombination of charge carriers, which affects the PCE of PSCs. Therefore, developing HTLs is equally crucial for creating high-performance PSCs.178 In this case, due to its good film-forming ability, high conductivity, and low-temperature solution processability, poly(3,4-ethylenedioxythiophene):polystyrene sulfonate (PEDOT:PSS) has been one of the most often utilized HTLs in inverted PSCs thus far.179 However, despite its benefits, the main limitation of the PEDOT:PSS-based HTL in PSCs is its acidic nature.180 Thus, numerous researchers are looking for novel strategies to advance the PV device performance by addressing the drawbacks of PEDOT:PSS HTLs.181 Recently, a unique method for synthesizing new HTLs using the readily available copper thiocyanate (CuSCN) was suggested by Xu et al.182
According to their findings, adding CuSCN to PEDOT:PSS and then annealing it at a low temperature lowers the energy barrier and improves the charge extraction yield, while also reducing the acidity. Consequently, the PCE of the CuSCN-modified PEDOT:PSS HTL-based PSC was 15.3% at 1.0 V, which was 16% higher than that of the PEDOT:PSS HTL-based PSCs. Additionally, the decreased acidity produced exceptional longevity, as shown by the retention of 71% of the original PCE of the device after 175 h of exposure to N2 under full sun. N,N′-bis(1-naphthalenyl)-N,N′-bis(phenyl)-(1,1′-biphenyl)-4,4′-diamine (NPB) is a small triphenylamine-based molecule, which Ma et al.183 added to a perovskite solar cell as a multifunctional buffer layer to further improve its PCE. The device configuration of the NPB-based PSC is shown in Fig. 14. According to their findings, the use of NPB as a buffer layer reduced the number of pinholes and imperfections in the perovskite films and corrected the energy misalignment between the perovskite structure and the PEDOT:PSS film. Due to the diminished defects and pinholes at the interface of the perovskite/PEDOT:PSS layers, electron-hole recombination was severely constrained in the NPB-modified device. Consequently, a PCE of 18.4% was shown by the NPB-modified PSC. The same device displayed a PCE of 14.4% under UV light, without hysteresis and with great stability. According to the suggested method, creating the next generation of effective, exceptionally stable and flexible PSCs may rely heavily on NPB as a buffer layer. This is because the NPB buffer layer plays a pivotal role in enhancing the surface morphology of the perovskite films. This improvement is evident in the scanning electron microscopy (SEM) images, which demonstrated a denser and more uniform film structure upon the introduction of NPB. The increased coverage and uniformity provided by the NPB layer are crucial in diminishing the presence of pinholes in the film. Secondly, the use of NPB affects the wettability of the perovskite layer. Contact-angle measurements revealed a modest decrease in wettability with the incorporation of NPB. This reduction in wettability is associated with improved film quality, given that it aids in minimizing the number of pinholes and surface imperfections. The enhanced film formation due to the altered wettability contributed to a smoother and more consistent perovskite layer. Furthermore, the chemical interaction between NPB and the perovskite film was instrumental in suppressing the formation of defects, particularly at the interface with PEDOT:PSS.184 This chemical synergy not only enhanced the quality of the perovskite film but also contributed to a more integrated and coherent layer structure. Lastly, the presence of NPB helps reduce the interfacial defects that typically arise due to the chemical reaction between PEDOT:PSS and the perovskite precursor. In the absence of an NPB buffer layer, these reactions can lead to the formation of detrimental interfacial defects. However, the introduction of NPB effectively suppresses these defects, leading to an overall improvement in the film quality.
Thus far, nearly all the reported flexible PSCs have small surface areas. However, given that the films inevitably experience reduced uniformity, it is widely recognized that the PCE decreases when the device area is scaled up to a large extent. Therefore, the performance of large-area flexible PSCs is directly influenced by the large-scale thin-film deposition process. Accordingly, wide-area manufacturing methods must be developed to create flexible PSCs with all their layers. To further reduce the manufacturing cost, alternative technologies should be offered, which should ideally encourage the development of useful applications.

Table 6 summarizes the recent developments in polymer-based PSCs. The exploration of polymer-based PSCs has unveiled a plethora of opportunities and challenges in the field of perovskite solar technology. The intrinsic modifications, especially at the intergranular interface, have emerged as a pivotal strategy to enhance both the efficiency and stability of PSCs. The innovative use of dendritic polymers, particularly PAMAM, has showcased the potential of molecular engineering in optimizing the morphology and performance of perovskites. Furthermore, the development of novel HTLs and buffer layers, such as CuSCN-modified PEDOT:PSS and NPB, respectively, has set new benchmarks in device efficiency and stability. However, as we venture into the domain of large-area flexible PSCs, the challenges of maintaining uniformity and efficiency at a larger scale become evident. Therefore, the future of polymer-based PSCs hinges on the development of scalable deposition processes and innovative materials that can maintain high performance across larger device areas.

Halide double perovskite solar cells (HDPs)

The general chemical formula for Pb halide perovskites can be described as APbX3, where A represents either an organic or inorganic cation (such as MA+, Cs+, and FA+), B signifies Pb2+, and X denotes a halogen (Cl−, Br−, and I−); the A-site cations occupy the cavities formed by eight [PbX6]4− octahedra. Halide double perovskites (HDPs) are created by substituting one monovalent cation B+ and one trivalent cation B3+ for two Pb2+, giving the formula A2B+B3+X6. With alternating [B+X6]5− and [B3+X6]3− octahedra, the double perovskites have a three-dimensional structure similar to that of Pb perovskites, and the A-site cations are located in the cavities created by the octahedra.

It is important to note that HDPs are much more stable and typically less hazardous or non-hazardous compared to Pb-based perovskites (with the exception of Tl-based compounds). Equally significant, the two distinct B-site cations provide access to a broad variety of potential combinations and rich substitutional chemistry. The choices for both the A cation and the X anion are limited, with the A cation consisting mostly of Cs+ and CH3NH3+ (MA+) and the X anion consisting primarily of Cl−, Br−, and I−. Alternatively, the selection of the B-site cations is more flexible and may include Ag+, Na+, Li+, Au+, Bi3+, Sb3+, In3+, Fe3+, and Tl3+. Given that the related HDPs are more desirable for phosphors or light-emitting diodes, other B3+ cations such as rare-earth ions are not covered here.185
These components may be easily permuted and combined to produce hundreds of HDPs. Meanwhile, the two distinct B-site metal ions offer numerous possibilities for alloying and doping HDPs with a variety of elements, thereby extending the broad family of HDPs and providing enormous prospects for HDP-based photovoltaics.

The study on Cs2AgBiBr6 in 2016 prompted significant interest in HDPs. Bein and coworkers created the first double perovskite solar cell devices in 2017 after resolving the poor solubility of the precursors in DMSO at 75 °C and spin-coating thin films.186 They underlined that obtaining a pure Cs2AgBiBr6 double perovskite phase requires the use of a high annealing temperature (250 °C). Additionally, they created the first Cs2AgBiBr6 solar cell device with the conventional mesoporous structure and attained a PCE of 2.43%.186 These Cs2AgBiBr6-based devices impressively demonstrated exceptional stability under continuous illumination for 100 min or under ambient conditions for at least 25 days. Using Cs2AgBiBr6 single crystals as the precursor solution, we simultaneously produced a highly uniform, high-quality Cs2AgBiBr6 thin film made of single-layer nanocrystals. With the structure ITO/TiO2/Cs2AgBiBr6 (205 nm)/spiro-OMeTAD/Au, we further demonstrated the first planar Cs2AgBiBr6 solar cells. With a VOC of 1.06 V, JSC of 1.55 mA cm−2 and FF of 74%, the champion device exhibited a PCE of 1.22%.187 This planar device structure showed little hysteresis.

Thus far, a variety of deposition techniques has been used to explore the mechanisms that affect the PV performance of Cs2AgBiBr6 devices. Additionally, these devices retained 90% of their original PCEs after 10 days of storage and hardly exhibited any performance loss when annealed for 60 min at 100 °C. Employing a dye-sensitized ETL or HTL is an intriguing method to boost the JSC of Cs2AgBiBr6 solar cells. For instance, a device based on C-Chl-sensitized mesoporous TiO2 pushed the PCE to 3.11% by increasing the JSC from 3.22 to 4.09 mA cm−2.193 Similarly, by sensitizing the TiO2 ETL with D149 indoline dye and adding Ti3C2Tx MXene nanosheets to Cs2AgBiBr6, the JSC of the final device reached up to 8.85 mA cm−2. According to the external quantum efficiency (EQE) spectra, the dye increased the absorption of sunlight between 500 and 650 nm, which was the primary source of the enhanced photocurrent. Consequently, a very high PCE of 4.47% was attained.194 Additionally, after 1000 h of storage in air (approximately 20% relative humidity) without encapsulation, the D149-Cs2AgBiBr6@Ti3C2Tx-based devices exhibited improved long-term stability with just 14% PCE loss. In Cs2AgBiBr6 devices, a photoactive dye called Zn-Chl was used as an HTL in addition to changing the ETL. The Zn-Chl-sensitized solar cell exhibited a PCE of 2.79% and JSC of 3.83 mA cm−2, which is 22-27% greater than the devices using traditional hole-transport materials (HTMs), such as spiro-OMeTAD, P3HT, and PTAA.195 Although this technique produced reasonably high JSC values and PCEs, the dyes rather than the Cs2AgBiBr6 absorber are responsible for the improvement.
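As context for the EQE argument above, the short-circuit current density follows from integrating the EQE against the incident photon flux; a standard relation (not specific to these devices):

$$J_{SC} = q \int EQE(\lambda)\, \Phi_{\mathrm{AM1.5G}}(\lambda)\, \mathrm{d}\lambda$$

where q is the elementary charge and Φ_AM1.5G the spectral photon flux of the standard solar spectrum, so any dye-induced gain in EQE between 500 and 650 nm adds to JSC in proportion to the photon flux in that window.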
An effective method to increase the inherent absorption capabilities of Cs2AgBiBr6 is element doping or alloying. A series of Cs2AgSbxBi(1−x)Br6 (x = 0, 0.25, 0.50, and 0.75) thin films with progressively smaller bandgaps was produced by substituting Sb3+ for Bi3+ in the structure. According to a study,196 the solar cell made utilizing a Cs2AgSb0.25Bi0.75Br6 thin film exhibited a clear enhancement in PCE compared to the Cs2AgBiBr6 reference solar cell. However, following Sb3+ alloying, the JSC was reduced rather than increased as anticipated, presumably because of the existence of large pinholes. An additional unique Cs2AgSbBr6 HDP could be created by completely substituting Sb3+ for Bi3+. The Cs2AgSbBr6-based solar cells only produced a very low PCE of 0.01%, with VOC = 0.35 V, JSC = 0.08 mA cm−2, and FF = 35.9% (ref. 197), due to the presence of impurity phases, as stated previously. Recently, the bandgap of the Cs2AgBiBr6 film was considerably reduced from 2.18 to 1.64 eV using a new hydrogen-atom interstitial doping approach. With record PCE values of 5.64% and 6.37% for the forward and backward scans, respectively, the JSC of the solar cell impressively increased from 1.03 to 11.4 mA cm−2.198 Upon treatment under nitrogen at 20 °C with light illumination, and at 85 °C without or with light illumination for 1440 h, the devices retained almost 95%, 91%, and 84% of their original PCE, respectively. This indicates that the hydrogenated Cs2AgBiBr6 solar cells displayed good stability. Recently, the mixed-valence HDP MA2AuBr6 was employed as an absorber instead of Cs2AgBiBr6, although the device exhibited a very low PCE of 0.007%.199

In summary, lead-free halide double perovskite solar cells have emerged as a promising alternative to traditional Pb-based perovskites (Fig. 15), offering enhanced stability and reduced toxicity. The versatility in the choice of the B-site cations in HDPs opens up a plethora of possibilities for fine-tuning their properties, leading to a rich landscape of potential combinations and substitutional chemistry. The pioneering work on Cs2AgBiBr6 and subsequent innovations in deposition techniques, dye sensitization, and elemental doping have paved the way for significant advancements in device efficiency and stability. However, although the achievements to date are commendable, as summarized in Table 7, the journey of HDPs is still in its initial stages. Thus, the exploration of new combinations, improved fabrication techniques, and a deeper understanding of the underlying mechanisms will be crucial in realizing the full potential of HDPs in the field of photovoltaics. As research continues to evolve, HDPs stand poised to redefine the future of sustainable and efficient solar energy solutions.
Tandem solar cells

Photoactive layers with a low bandgap cannot achieve the large voltages attainable with wide-bandgap materials; conversely, wide-bandgap materials cannot harvest low-energy photons, which limits their short-circuit current. These trade-offs constrain single-junction solar cells (SJSCs). Accordingly, tandem solar cells can be used to overcome these constraints. This arrangement integrates two or more single-junction solar cells for better use of the solar spectrum. When photons with a short wavelength are absorbed by the top cell (perovskite 1), which is made of a large-bandgap semiconductor material, a photocurrent is produced. Longer-wavelength photons pass through the top cell and into the bottom cell, where they are effectively absorbed. In essence, the bottom cell may use photons that are transmitted through the top cell, reducing the heat loss in PV devices even further. By properly combining the separate cells in series and parallel, the produced photovoltage and photocurrent can be increased. Utilizing tandem PSCs, record efficiencies of over 29% have been achieved; a record PCE of 29.52% was reported by Oxford PV for perovskite-silicon tandem solar cells, outperforming silicon solar cells by a wide margin.206 The monolithic perovskite-silicon tandem solar cell, which was designed with sinusoidal nanotextures, is very important for the control of light in PV devices. The benefit of perovskite bandgap tuning is that it provides opportunities for use in tandem devices. The theoretical efficiency limit of two-bandgap tandem solar cells reaches approximately 45%, surpassing the roughly 33% limit of their single-bandgap counterparts. The theoretical bandgap diagram for tandem devices is shown in Fig. 16a.208,209 The combination of a 0.95 eV bottom cell and a 1.6 eV top cell yields a power conversion efficiency (PCE) of 45.8%, assuming the absolute absorption of sunlight. This can be achieved by changing the thickness of the light-absorbing layer. A regulated quantity of photons may be matched to the current of each device by adjusting the thickness of the wide-bandgap front cell. As shown in Fig. 16b, a large range of bandgap values is made possible by current matching, and combining two cells with bandgap values of 0.7-1.3 eV and 1.4-1.8 eV may provide a power conversion efficiency of 42%. The bandgaps of the Sn-Pb PSC and Pb PSC subcells in perovskite tandem solar cells (PTSCs) are around 1.2 and 1.8 eV, respectively.

The two-terminal (2-T) PTSC structure is schematically shown in Fig. 16c. In 2-T PTSCs, the subcells are merged monolithically by a recombination layer, whereas 4-T PTSCs are mechanically stacked in series. For a series connection, the bottom and top cells should have identical polarity, which is described as either planar (n-i-p) or inverted (p-i-n) depending on the polarity. Given that PTSCs with p-i-n structures have received greater research attention, Sn-Pb PSCs with planar structures have a lower PCE than those with inverted structures (Fig. 16d). However, a larger power conversion efficiency and open-circuit voltage in a planar structure are possible with Pb-based PSCs employing wide-bandgap cells (Fig. 16e).
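For a monolithic 2-T series connection such as in Fig. 16c, the subcell currents and voltages combine according to the usual series rules; a simplified idealization (ignoring recombination-layer losses):

$$J_{\mathrm{tandem}} = \min\left(J_{\mathrm{top}},\, J_{\mathrm{bottom}}\right), \qquad V_{OC,\mathrm{tandem}} \approx V_{OC,\mathrm{top}} + V_{OC,\mathrm{bottom}}$$

This is why the front-cell thickness is tuned, as described above: the tandem current is capped by the weaker subcell, so the two subcells must deliver matched photocurrents.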
Researchers are examining Sn-Pb PSCs with a planar structure and Pb PSCs with an inverted structure, both of which have inferior performance, to create a high-efficiency tandem device. The Grätzel group used chemical vapor deposition to manage the grain uniformity in the case of Sn-Pb PSCs with an n-i-p structure to generate large, uniform films for high efficiency, while the Hayase group added a passivation layer between the metal oxide and SnI2 to reduce the number of defects.210,211 In addition, p-i-n-structured PSCs have been investigated for use in tandem solar cells and have recently attained PCEs surpassing 26%.212

Although the efficiency of PTSCs is much lower than the theoretical value, they may still be used for a variety of applications, including water splitting.213 Water splitting is a procedure that uses photons incident on PTSCs to break down water into pure O2 and H2, and it has lately gained greater attention as a subject in green hydrogen generation. Carbon-free methods such as thermoelectric, pyroelectric, triboelectric, and photoelectric power unlock the potential to produce hydrogen without pollution, paving the way for a sustainable future of clean energy.214

The PEC system has a straightforward design and low device cost. However, it requires an external bias voltage and has a lower solar-to-hydrogen (STH) efficiency than PV-EC devices, making H2 production in PEC systems uneconomical.215 In contrast, a PV-EC system that separates the catalyst and the cell may use two established technologies. Specifically, 1.23 V is the thermodynamic voltage required to split water; however, a greater voltage is often needed to overcome the activation energy barrier. H2 is formed when a voltage higher than the activation threshold is provided, and the quantity of hydrogen conversion is controlled by the current flow supplied by the PV component. The Grätzel group demonstrated the first perovskite PV-EC. Two series-connected MAPbI3 solar cells were used to split water; however, because of the poor JSC, the STH efficiency was only 12.3%. They later employed a silicon/perovskite tandem device rather than a series-connected device to boost the STH efficiency; consequently, the reduced JSC loss permitted an STH efficiency of 18.7%.216 According to theoretical estimates, silicon/perovskite tandem cells for water splitting can attain a solar-to-hydrogen efficiency of 25% and a levelized hydrogen cost of less than 3 $ per kg.217 With perovskite/silicon tandem solar cells, the STH efficiency has now reached 21.32%, suggesting that PTSCs have the potential to achieve STH efficiency levels higher than 20%. The best VOC and JSC of the obtained PTSCs are 2 V and 16 mA cm−2, respectively, as the performance of Sn-Pb PSCs increases.218 According to these VOC and JSC values, an STH efficiency of 19.68% can be attained, assuming that the voltage of the device is greater than the activation energy.213 It is anticipated that further study on Sn-Pb PSCs and PTSCs will lead to additional advancements in the device performance and STH efficiency.
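The 19.68% figure above can be checked with the standard STH definition, taking the thermodynamic water-splitting voltage of 1.23 V, the quoted operating current density of 16 mA cm−2, and one-sun illumination (100 mW cm−2, an assumed standard input, not stated explicitly in the text):

$$\eta_{\mathrm{STH}} = \frac{1.23\,\mathrm{V} \times J_{\mathrm{op}}}{P_{\mathrm{in}}} = \frac{1.23 \times 16\ \mathrm{mW\,cm^{-2}}}{100\ \mathrm{mW\,cm^{-2}}} \approx 19.7\%$$

consistent with the value quoted, provided the cell voltage exceeds the total (thermodynamic plus overpotential) requirement.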
In conclusion, tandem solar cells, particularly those based on perovskite technologies, represent a significant leap in the field of photovoltaics, as shown in Fig. 17. By harnessing the unique properties of perovskites and their capacity for bandgap tuning, tandem configurations have successfully addressed the inherent limitations of single-junction solar cells. The advancements in PTSCs, not only in achieving record efficiencies but also in their potential applications such as water splitting, underscore their transformative potential in the renewable energy sector. The integration of perovskites with other materials, such as silicon, further amplifies the potential of these cells to achieve even higher efficiencies and broader applications. Table 8 shows a concise overview of the latest advancements in the performance of PTSCs. As research continues to push the boundaries of what is possible with tandem configurations, it is evident that the future of solar energy is bright, with PTSCs poised to play a pivotal role in driving sustainable and efficient energy solutions for the world.

Crystal structure

An ideal perovskite crystal structure typically exhibits a cubic configuration, composed of the ABX3 material combination, where 'A' represents a larger monovalent cation, 'B' indicates a smaller divalent cation, and 'X' represents a monovalent anion. The stability of a perovskite structure can be predicted using Goldschmidt's tolerance factor, 't', which can be calculated as follows:

$$t = \frac{R_A + R_X}{\sqrt{2}\,(R_B + R_X)}$$

where RA, RB, and RX are the ionic radii of the A, B, and X ions, respectively. The tolerance factor helps in assessing whether a particular combination of ions will form a stable perovskite structure. A stable perovskite crystal structure usually has a tolerance factor in the range of 0.7 to 1.0. Typically, a cubic structure is observed when the factor is in the range of 0.9 to 1. In this range, the ionic sizes are suitable for forming a cubic lattice, where the BX6 octahedra are undistorted and the A-site cation fits snugly into the cuboctahedral void formed by the twelve X anions.231,232 Alternatively, tetragonal or hexagonal structures are formed when the factor exceeds 1. This usually indicates that the A-site cation is too large for the ideal cubic structure, leading to a distortion of the perovskite structure. Rhombohedral or orthorhombic structures are found in the range of 0.7 to 0.9, where the A-site cation is relatively small, potentially leading to tilting of the BX6 octahedra. A tolerance factor below 0.7 usually indicates that the ionic radii are not conducive to forming a perovskite structure; the small size of the A-site cation compared to the B-site cation and X-site anion often prevents the formation of a stable perovskite lattice. Accordingly, the tolerance factor of the cubic structure can be controlled, where altering the ionic radii of the A-site, B-site, or X-site ions by substituting them with different ions is a common method. For example, replacing a larger A-site ion with a smaller one (or vice versa) can lead to a decrease or increase in the tolerance factor. This substitution can be partial or complete, depending on the desired outcome.233
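As an illustration, plugging commonly cited ionic radii into the tolerance factor above for MAPbI3 (R_MA ≈ 2.17 Å, R_Pb ≈ 1.19 Å, R_I ≈ 2.20 Å; these are literature values assumed here, not radii reported in this review) gives:

$$t = \frac{2.17 + 2.20}{\sqrt{2}\,(1.19 + 2.20)} \approx \frac{4.37}{4.79} \approx 0.91$$

which falls in the 0.9-1 window where a (pseudo)cubic perovskite phase is expected.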
However, perovskite crystals tend to have inherent defects, such as vacancies of lead (Pb) and iodine (I) ions, especially near the grain boundaries, which can compromise the stability and efficiency of PSCs. The presence of defects in perovskite crystals primarily affects their electronic properties by introducing trap states in their bandgap, which can capture charge carriers (electrons and holes), leading to non-radiative recombination. This process significantly reduces the efficiency of devices, given that it decreases the number of available charge carriers. Furthermore, defects can also aggravate ion migration, which not only contributes to hysteresis in the current-voltage characteristics of perovskite solar cells but also leads to long-term degradation of the material. This migration can alter the composition of the material over time, leading to phase instability and a decrease in device performance. Additionally, the presence of defects at the grain boundaries and interfaces can compromise the structural integrity of perovskite films, leading to mechanical weaknesses that make them more susceptible to environmental degradation factors such as moisture, oxygen, and heat. In solar cell applications, this degradation can manifest as a decline in the power conversion efficiency and longevity of the device.234

To overcome the above-mentioned issues, doping with metal ions has been shown to be a successful approach.235-239 The resulting improvements in the structural and electronic properties of the perovskite material directly contribute to the enhanced efficiency and stability of solar cells. Additionally, doping with bivalent cations such as zinc (Zn2+), manganese (Mn2+), and cobalt (Co2+) has also been employed to increase the stability of PSCs. Trivalent metal cations such as indium (In3+), europium (Eu3+), and aluminum (Al3+) are frequently used to mitigate deep flaws, refine the film morphology, and boost both the efficiency and stability of PSCs.233 Notably, devices with Eu3+ doping have demonstrated remarkable endurance, maintaining 92% of their original PCE even after 1500 h of constant illumination. In a study at the University of Electronic Science and Technology of China, researchers made substantial strides in enhancing the stability of perovskite solar cells (PSCs). By passivating grain-boundary defects using a fluorinated oligomer derived from 4,4-bis(4-hydroxyphenyl)pentanoic acid (FO-19), they significantly reduced the charge recombination, thus improving both the humidity and thermal stability of the cells. This approach led to an impressive PCE of 21.23% in MAPbI3-based PSCs.234 The coordination of the FO-19 carboxyl group with Pb ions in the perovskite crystals effectively passivates defects, enhancing the overall stability of the perovskite film. This longevity is a significant step forward in the practical application of PSCs, making them more viable for long-term use in various settings.

Stability

Perovskite solar cells (PSCs), while offering high power conversion efficiencies (PCEs) and lower manufacturing costs compared to silicon solar cells, exhibit substantial stability issues, hindering their path to commercialization. Various degradation mechanisms, unique to each solar cell type, need to be addressed, particularly for PSCs. Factors such as moisture, oxygen, elevated temperatures, and UV illumination significantly affect their performance, necessitating specialized encapsulation strategies for protection.232
The core stability issues in PSCs originate from the inherent phase instability of the perovskite crystal structure and its sensitivity to moisture, which decomposes the perovskite into lead iodide (PbI2) and other byproducts that dramatically reduce the cell efficiency. Oxygen exposure can lead to oxidative degradation, especially under photo-induced conditions, forming reactive oxygen species that attack the perovskite structure. Similarly, UV light can degrade the organic components in the perovskite, creating defects and trap states that diminish the cell efficiency. Additionally, thermal instability at high temperatures can cause phase segregation, introduce crystalline defects, and alter the material composition, further impacting the performance of the cell. Encapsulation strategies play a crucial role in enhancing the lifespan of PSCs by acting as barriers against oxygen and moisture.240 Recent advancements include implementing glass-to-glass encapsulation, applying hydrophobic coatings, and replacing the reactive metal electrodes with more stable materials such as carbon and transparent conducting oxides. These methods effectively shield the perovskite from environmental stressors. For example, electron-beam-deposited SiO2 layers in a glass cover have shown promising long-term stability. To further improve stability, researchers are exploring a variety of encapsulation materials, such as ethylene methyl acrylate, ethylene vinyl acetate, and polyisobutylene, with glass-polymer-glass structures proving particularly effective in preventing moisture ingress.

Addressing thermal instability is also crucial, given that the perovskite components can degrade at temperatures above 100 °C, forming more PbI2 and organic salts.240 Innovations such as replacing TiO2 with CdS as the electron transport layer and using carbon nanotubes wrapped with conductive polymers as the hole transport layer have shown potential in enhancing the thermal stability and improving the morphology of the device. However, even with these improvements, the maximum stable lifespan of PSCs under continuous light exposure typically reaches around 4000 h, with their efficiency declining as degradation progresses.

Photobleaching effects and the absence of encapsulation can cause further photo-instability, particularly in devices utilizing TiO2 layers, which are sensitive to UV light.241 In this case, implementing anti-UV coatings on the front glass and adding interlayers with high light transmission and good electrical properties can significantly enhance the stability. These interlayers serve multiple functions, including suppressing charge recombination, modifying the perovskite surface, preventing moisture ingress, and blocking the diffusion of other materials.

Noteworthy research in this field includes the work by Henry J.
Snaith's team,242 who investigated inverted perovskite solar cells using a copper phthalocyanine (CuPc) hole transport layer. This study, notable for its insights into long-term stability, showed that these solar cells demonstrated remarkable endurance. The cells maintained their efficiency after more than 5000 h of storage and 3700 h at an elevated temperature of 85 °C in a nitrogen environment. The utilization of CuPc as a hole transport layer is particularly significant, highlighting the crucial role played by material selection in enhancing the performance and durability of perovskite solar cells. These findings not only underline the robustness of the devices but also their potential applicability in real-world settings, contributing valuable knowledge to the ongoing development of solar energy technologies. In another study, Zhou et al.243 explored the device configuration of ITO/HTL-free/MAPbI3/C60/BCP/Ag. This perovskite solar cell demonstrated impressive stability, maintaining its performance after 1000 h of light soaking at the maximum power point (MPP) under continuous illumination and achieving a PCE of 16.9%. This configuration highlights the potential of HTL-free structures in achieving stable and efficient PSCs. Similarly, Xie et al.244 focused on FTO/NiMgLiOx/FAMAPbI3/PCBM/Ti(Nb)Ox/Ag. The cell showed a notable PCE of 20.6% and maintained its stability for 500 h under continuous light soaking at the MPP. The use of NiMgLiOx as an interlayer in this structure suggests its effectiveness in enhancing both the efficiency and stability.

Arora et al.245 focused on a device with FTO/meso-TiO2/CsFAMAPbI3−xBrx/CuSCN-rGO/Au. The solar cell experienced only a 5% drop in PCE over more than 1000 h at MPPT, achieving an efficiency of 20.2%. The inclusion of reduced graphene oxide (rGO) and copper thiocyanate (CuSCN) indicated their roles in prolonging the operational life of the cell. In another study, Jung et al.246 studied FTO/c-TiO2/m-TiO2/(FAPbI3)0.95(MAPbBr3)0.05/WOx/spiro/Au. This cell demonstrated a 5% drop in PCE over 1300 h at MPPT at an ambient temperature of 25 °C, with a PCE of 22.7%. The use of a mixed-halide perovskite layer and WOx as a buffer layer indicated their effective contribution to stability. Similarly, Akin et al.247 studied the device configuration of FTO/Ru-doped SnO2/perovskite/spiro-OMeTAD(Zn-TPP)/Au, which showed exceptional stability, with only a 3% decrease in efficiency over 2000 h, achieving a PCE of 21.8%. The use of Ru-doped SnO2 and Zn-TPP incorporated in spiro-OMeTAD as a modified hole transport layer underscores the potential of doping and molecular engineering in enhancing the stability and efficiency of PSCs.

Further advancements in perovskite solar cell technology are evident in the development of perovskite/silicon tandem solar cells. Using a combined evaporation-solution technique, the research team led by Li et al.248 successfully fabricated a p-i-n type perovskite layer atop a fully textured silicon cell, achieving a PCE of 27.48%. Remarkably, these tandem cells demonstrated stability in nitrogen for over 10 000 h, showcasing a viable solution to overcome the efficiency and stability limitations commonly associated with single-junction perovskite cells. Table 9 shows the impact of various PSC device configurations on the stability and PCE.
However, despite these advancements, the overall lifetime of PSCs, commonly assessed as the time to a 25% loss in performance at a standard degradation rate, remains considerably shorter than that of crystalline silicon solar cells.256 Silicon solar cells have benefited from over five decades of research and development, leading to highly stable, efficient, and commercially viable solar technologies with lifespans nearing 30 years. Nevertheless, the rapid progress of perovskite solar cells within just a decade of research, achieving high PCEs, fuels optimism that ongoing research and technological advancements will eventually address the stability issues plaguing perovskite cells, paving the way for their wider adoption in sustainable energy applications.

Perovskite fabrication

The fabrication of PSCs presents several challenges that are pivotal to their performance and commercial viability. One of the primary challenges is achieving uniform film formation. Perovskites are typically deposited as thin films, and inconsistencies during this process can lead to defects, such as pinholes and non-uniform crystal sizes, which significantly impact the efficiency and stability of the cells. Another hurdle is controlling the crystallization process. The crystallization rate and conditions determine the quality of the perovskite layer, with factors such as temperature, solvent choice, and deposition technique playing crucial roles. Crystallization that is too fast can lead to poor crystal formation, while crystallization that is too slow may result in large grains that impair charge transport. Additionally, the sensitivity of perovskites to environmental factors such as moisture and oxygen during fabrication necessitates controlled atmospheric conditions, adding complexity and cost to the manufacturing process. This sensitivity also extends to the instability of perovskite materials under operational conditions, particularly for lead-based perovskites, which raises concerns about their long-term durability and environmental impact.257

The synthesis of hybrid organic-inorganic metal halide perovskite crystals, although based on stoichiometric reactions, involves a complex interplay among various process parameters that critically influence the quality of perovskite thin films. Research in this field has predominantly focused on refining parameters such as the ratio of the precursors in solution, the processing temperatures, and various fabrication techniques to optimize the formation of the perovskite layer. The quality of the perovskite film is paramount in determining the performance of PSCs, given that it directly impacts crucial factors such as the light-absorption efficiency, charge recombination rates, and carrier diffusion lengths. Consequently, enhancing the quality of the perovskite film is a key strategy in improving the overall performance of PSCs.
In the development of high-quality perovskite films, several synthesis factors must be meticulously controlled. These include the temperature at which the process is conducted, the concentration of solutions, the choice of precursors and solvents, the use of surfactants, the ambient atmosphere during synthesis, the duration of the process, and the rates of flow and distribution of materials. Each of these factors can significantly affect the properties of the resultant perovskite films. Additionally, managing the growth of perovskites on various substrates is essential to produce films with desirable characteristics such as large grain size, high levels of crystallinity, and smooth surface morphology. In this context, the deposition method plays a crucial role, given that it directly influences the structural morphology of the perovskite layer. Techniques such as one-step deposition, two-step spin coating, two-source vapor deposition, sequential vapor deposition, and vapor-assisted solution deposition represent some of the diverse approaches utilized in the fabrication of perovskite thin films. Each of these deposition methods offers unique advantages and challenges, and the choice of method can be pivotal in achieving the desired film characteristics. The majority of MAPbI3 films reported to date have been prepared using a two-step deposition technique, where the composition and concentration of the precursor solution are critical in the solution processing technique. The electronic structure of MAPbI3 exhibits a high degree of stoichiometric flexibility and surface defect tolerance during fabrication, making it relatively stable against a variety of compositional fluctuations. However, the optimal ratio of MAI to PbI2 in the precursor solution for enhancing the perovskite performance remains a subject of debate. Various mechanisms have been suggested to explain the notable improvements in perovskite materials. The precursor concentration plays a vital role in dictating the crystallinity, morphology, and colloidal nature of halide perovskites. Colloidal particles in the precursor act as nucleation points, influencing the quality of the resultant perovskite films. Researchers such as Hong, Xie, and Tian have experimented with MAPbI3 films under different conditions, adjusting the MAI and PbI2 ratios to either I-rich or Pb-rich environments. Their findings indicated that solvent engineering and stoichiometry modifications can significantly impact the efficiency, photo-stability, surface morphology, and coverage of MAPbI3 films.258 Additionally, introducing excess MAI in the precursor solution, particularly when combined with a Lewis acid-base adduct deposition technique, has been shown to effectively reduce the non-radiative recombination at the grain boundaries of the films.259 Chen et al. 260,261 observed that the release of organic species during annealing enabled the presence of PbI2 phases at the perovskite grain boundaries, potentially enhancing the carrier behavior and stability. They also noted that DMF is an effective solvent for PbI2 and MAI, capable of controlling the crystallization speed and aiding the formation of compact perovskite films. Wieghold et al.
262 highlighted that higher concentrations of precursors lead to the formation of larger, more oriented grains in MAPbI3 films. Additionally, the research by Park BW and others 263 demonstrated the significance of excess lead iodide in the perovskite precursor solution for achieving over 20% power conversion efficiency, primarily by reducing the number of halide vacancies. These studies collectively contribute to a deeper understanding of the fabrication of perovskite solar cells, offering insights into how various factors influence the overall performance and efficiency of these promising photovoltaic materials. Hasan et al. conducted a detailed investigation into the diffusion and crystallization of perovskite materials, particularly focusing on the interplay between PbI2 and MAI in the perovskite structure. Utilizing a synchrotron source-based XRD instrument, they examined the perovskite material at varying incidence angles and for different molar ratios of PbI2 to MAI. Their findings revealed that a 1:1 ratio of PbI2 to MAI facilitated the most effective interdiffusion of these components, leading to a fully converted perovskite material. This complete conversion was crucial for enhancing the chemical bonding and stability in the perovskite structure.265,266 In another significant study, Bahtiar et al. 267 explored the fabrication of a perovskite solar cell (PSC) with the structure FTO/PEDOT:PSS/CH3NH3PbI3 using a sequential deposition method. Their approach involved a meticulous two-step perovskite deposition process, which greatly influenced the performance and structural properties of the PSCs. Initially, a PbI2 precursor solution was prepared by dissolving 900 mg of PbI2 in 2 mL of DMF, followed by continuous stirring at 70 °C for 24 h. This solution was then spin-coated over a PEDOT:PSS layer under varying conditions of spin speed, annealing temperature, and time. Subsequently, an MAI precursor solution was prepared using 90 mg of MAI in 2 mL of IPA and spin-coated atop the PbI2 layer. This method, particularly the single-step spin coating, was found to enhance the surface morphology of PEDOT:PSS, reduce the number of pinholes, and consequently increase the power conversion efficiency. This study highlighted that specific conditions, such as a spin-coating speed of 1000 rpm for 20 s, annealing temperatures of 40 °C and 100 °C, and annealing times of 180 and 300 s, resulted in perovskite films with improved structural properties, including a pinhole-free surface and grain sizes exceeding 500 nm. Minhuan Wang and colleagues conducted an insightful study comparing the one-step and two-step deposition methods for fabricating CH3NH3PbI3-based perovskite solar cells. Their research focused on analyzing how these methods influence the performance and structural integrity of the perovskite layer. In the one-step process, they prepared a perovskite precursor solution by combining PbI2 and MAI in a DMF + DMSO solution with a 1:4 volume ratio. This solution was then directly spin-coated on a substrate at 3000 rpm for 50 s. In contrast, the two-step method involved first spin-coating PbI2 at 5000 rpm for 5 s, followed by applying the MAI solution at 500 rpm for 30 s, and then annealing the substrate at 150 °C for 20 min.
268,269 Their X-ray diffraction (XRD) analysis revealed that the films fabricated using the two-step method exhibited distinct and pure perovskite peaks, indicating better crystallinity and phase purity compared to the one-step method, where the XRD peaks were less defined, with evidence of degraded I2. Additionally, scanning electron microscopy (SEM) images corroborated that the two-step method resulted in superior structural properties in the perovskite layer. The two-step fabricated films showed no pinholes and dense coverage, which are crucial for minimizing the leakage current, and thus enhancing the power conversion efficiency (PCE) of the perovskite solar cells. In the study by Liu et al. on perovskite solar cells with the device architecture ITO/ZnO/CH3NH3PbI3/P3HT/Ag, they further delved into the impact of the perovskite layer thickness on PSC performance. They employed two-step spin coating and thermal deposition methods to fabricate the perovskite layer with thicknesses in the range of 100 to 600 nm. Their findings indicated that the PCE increased with layer thickness up to 330 nm, beyond which a further increase in thickness led to a decrease in efficiency. The optimal thickness of 330 nm yielded efficiencies of 1.3% for thermal deposition and 1.8% for sequential deposition.263 Notably, for thicker CH3NH3PbI3 films, the deposition of the P3HT polymer hole transport material faced challenges in adequately penetrating the perovskite layer, potentially leading to increased series resistance and decreased PCE. These studies, together with numerous other reports on varying the precursor concentrations using different fabrication methods, highlight the critical importance of the deposition techniques and layer thickness in optimizing the performance of perovskite solar cells. The meticulous control of these factors is essential for enhancing the efficiency, stability, and overall viability of perovskite-based photovoltaic technologies. Table 10 presents the most common perovskite film fabrication techniques.
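Recipes like Bahtiar et al.'s (900 mg of PbI2 in 2 mL of DMF; 90 mg of MAI in 2 mL of IPA) are easier to compare across reports once converted to molar concentrations. The sketch below performs that conversion; the molar masses are standard approximate literature values, and the function is purely illustrative.

```python
# Approximate molar masses (g/mol)
M_PBI2 = 461.0   # PbI2
M_MAI  = 159.0   # CH3NH3I

def molarity(mass_mg, molar_mass, volume_mL):
    """Molar concentration of a precursor solution."""
    return (mass_mg / 1000.0) / molar_mass / (volume_mL / 1000.0)

print(f"PbI2: {molarity(900, M_PBI2, 2):.2f} M")  # 900 mg in 2 mL DMF -> ~0.98 M
print(f"MAI:  {molarity(90,  M_MAI,  2):.2f} M")  # 90 mg in 2 mL IPA -> ~0.28 M
```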
Band gap alignment & band offsets

Developing efficient electron and hole transport layers (ETLs and HTLs) with compatible energy levels is essential for creating high-efficiency and stable PSCs. The electrical and optical properties of perovskites are significantly influenced by their band structure, which is determined by the quantum mechanical wave functions permissible in the crystal. The bandgap of the materials used in PSCs is particularly crucial for effective visible light absorption, reducing the capacitive effects at the interfaces, and maintaining device stability. A notable aspect is the impact of the large bandgap of the ETL, which can restrict light absorption. Similarly, the matching of band structures in PSCs is a key factor in the rapid separation of electrons and holes, which aids in quickly dissipating capacitive charges and minimizing the hysteresis effect. The degree of conduction band alignment between the perovskite material and the ETL dictates the flow of electrons. Ideal alignment ensures smooth flow of charge carriers, while any offset can create barriers hindering this flow. Similarly, the alignment of the valence bands between the perovskite and the HTL is crucial for efficient hole transport. Achieving high operational efficiency in PSCs requires the minimum conduction band offset (CBO) and maximum valence band offset (VBO) at the perovskite-ETL interface. This alignment allows for the efficient transfer of electrons from the perovskite to the ETL while blocking holes. Conversely, the interface between the perovskite and HTL should aim for minimal VBO and maximal CBO, facilitating the movement of holes to the HTL and blocking electrons.275 When the conduction band (CB) of the perovskite layer is higher than the CB of the ETL (creating a cliff, or negative CBO), it can adversely affect the performance of the PSC by reducing the activation energy against recombination at the heterojunction and lowering the built-in potential (Vbi), which results in a decreased open-circuit voltage (Voc). In contrast, if the CB of the perovskite is lower than the CB of the ETL (forming a spike, or positive CBO), it leads to an increased Vbi, enhancing the Voc. However, excessively large spikes can impede electron transport, increasing charge recombination due to the reduced activation energy. Similarly, the alignment of the valence bands affects hole transport. If the valence band (VB) of the perovskite layer is below the VB of the HTL, it creates a cliff-like discontinuity (negative VBO), reducing the Vbi and impacting efficiency. If the VB of the perovskite is above the VB of the HTL, a spike (positive VBO) occurs, increasing the Vbi. As with the CBO, overly large spikes in the VBO can create barriers to hole transport.276 Band gap engineering is a crucial aspect in the advancement of highly efficient perovskite solar cells (PSCs). For instance, standard anatase TiO2, with a band gap of 3.2 eV, can only absorb about 5% of solar energy, which limits its effectiveness in solar cell applications. Thus, to enhance UV-visible light photocatalysis and broaden the absorption range, researchers have investigated doping TiO2 with various metals (such as V, Fe, Cr, and Ni) and non-metals (such as S, F, C, N, and B). This doping not only improves the quality of the semiconductor material by expanding its absorption range but also increases the mobility of charge carriers.
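The sign conventions above can be made concrete with a simple Anderson-rule estimate, in which the conduction band edge sits at the electron affinity below the vacuum level. The values used below for anatase TiO2 and MAPbI3 are assumed, commonly quoted figures, and the calculation is only a zeroth-order sketch: interface dipoles and Fermi-level pinning are ignored.

```python
def band_offsets(chi_etl, eg_etl, chi_pvk, eg_pvk):
    """Conduction/valence band offsets at a perovskite/ETL interface from
    electron affinities (chi) and band gaps (Eg), all in eV, Anderson-style.
    Sign convention as in the text: CBO < 0 is a cliff, CBO > 0 a spike."""
    cbo = chi_pvk - chi_etl                        # CB edge difference
    vbo = (chi_etl + eg_etl) - (chi_pvk + eg_pvk)  # VB edge difference
    return cbo, vbo

# Assumed example values: anatase TiO2 (chi ~ 4.0 eV, Eg = 3.2 eV) and
# MAPbI3 (chi ~ 3.9 eV, Eg = 1.55 eV)
cbo, vbo = band_offsets(chi_etl=4.0, eg_etl=3.2, chi_pvk=3.9, eg_pvk=1.55)
print(f"CBO = {cbo:+.2f} eV (cliff), VBO = {vbo:+.2f} eV (hole-blocking)")
```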
277 The charge separation and transportation in PSCs are significantly influenced by the energy band alignment and the built-in internal electrical field. Ming Wang et al. 278 developed a perovskite solar cell with the configuration ITO/PEDOT:PSS/MAPbI3-xClx/PCBM/Rhodamine/LiF/Ag to explore how band gap tuning in the perovskite layer leads to rapid hole extraction. They found that as the concentration of MAI increases, reducing the band gap of the material, charge transportation is enhanced, thereby increasing the current density. With an MAI concentration of 4 mg mL⁻¹, the Jsc increased to 23.52 mA cm⁻², resulting in a high PCE of 16.67% in MAPbI3-xClx-based PSCs. Zhang et al. 279 studied the effect of band gap tuning on the performance of perovskite solar cells by incorporating Sb in the CH3NH3PbI3 material. This adjustment regulated the band gap from 1.55 to 2.06 eV. A larger band gap was observed due to the reduced Pb bonding caused by the stronger interaction of Sb with the CH3NH3PbI3 material. The optimal Sb doping resulted in increased electron density in the conduction band and raised the quasi-Fermi energy level. Consequently, the built-in potential in the Sb-1%-doped cells increased, leading to a significant enhancement in Voc and improved electron transport. The Sb-1%-doped solar cell outperformed the Sb-100%-doped cell, primarily because the Jsc in Sb-1% increased due to the longer charge diffusion length, ensuring efficient charge transport and collection. In contrast, an increase in trap states in the Sb-100%-doped devices led to a degradation in Jsc. Prasanna et al. 280,281 delved into the impact of band gap tuning of perovskite materials for solar photovoltaic applications. They highlighted that tin and lead iodide perovskite semiconductors are prominent candidates in PSCs partly due to their adjustable band gaps through compositional modification. Lead iodide-based perovskites exhibit an increase in band gap with the partial replacement of formamidinium with cesium, due to octahedral tilting. Conversely, tin-based perovskites show a reduction in band gap without octahedral tilting. The band gaps achieved through this compositional tuning are ideal for tandem-based perovskite solar cells, capable of harvesting light up to approximately 1040 nm in the solar spectrum. This study underscores that ideal perovskite solar cells require specific material properties, such as a direct and suitable band gap, a sharp band edge, a long charge carrier lifespan, a long diffusion length, and a low exciton binding energy. Thus, band gap engineering strategies are vital for optimizing the energy band structures, significantly impacting the light harvesting and PCE. Fig. 18 shows the energy levels of the different materials used in PSCs.

Future opportunities

A promising avenue is the development of flexible and lightweight PSCs. The inherent properties of perovskite materials make them suitable for incorporation in flexible substrates, paving the way for a new generation of lightweight, portable, and even wearable solar-powered devices. This flexibility can revolutionize the integration of solar cells into everyday objects and structures, ranging from clothing and portable chargers to building facades and vehicles.
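For reference, the photovoltaic figures quoted in this section are tied together by the standard efficiency relation PCE = Jsc x Voc x FF / Pin. The sketch below reproduces Ming Wang et al.'s reported PCE from their Jsc; the Voc and fill factor are assumed placeholder values (they are not given in the text), chosen only to illustrate the arithmetic.

```python
def pce(jsc_mA_cm2, voc_V, ff, pin_mW_cm2=100.0):
    """Power conversion efficiency (%) under AM1.5G illumination (100 mW/cm^2)."""
    return jsc_mA_cm2 * voc_V * ff / pin_mW_cm2 * 100.0

# Jsc = 23.52 mA/cm^2 is reported in the text; Voc = 0.92 V and FF = 0.77
# are assumed values that happen to reproduce the quoted PCE of ~16.67%
print(f"PCE = {pce(23.52, 0.92, 0.77):.2f} %")
```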
Building-integrated photovoltaics (BIPV) represents a growing sector where perovskite solar cells (PSCs) can offer substantial advancements.283-287 The unique advantage of PSCs in BIPV applications originates from their high power conversion efficiency, lightweight nature, and potential for aesthetic integration. Unlike traditional photovoltaic systems, PSCs can be fabricated with varying colors and transparency levels, making them more architecturally versatile for integration into building surfaces without compromising the design aesthetics. Furthermore, the potential of semi-transparent PSCs allows for their use in windows and glass facades, where they can generate electricity while allowing some natural light to pass through. This dual functionality is particularly valuable in urban settings, where space constraints limit the installation of conventional solar panels. The use of PSCs in BIPV also contributes to reducing the heat gain inside buildings, potentially lowering cooling costs and enhancing overall energy efficiency. In addition, the ease of manufacturing and the possibility of creating flexible perovskite modules extend the range of architectural applications. They can be integrated into curved surfaces and unconventional building shapes, opening new avenues for innovative and sustainable architectural designs. The incorporation of PSCs in BIPV systems can significantly contribute to the generation of renewable energy at the point of use, reducing the reliance on grid-supplied power and the carbon footprint of buildings. However, challenges such as ensuring the long-term stability, weather resistance, and scalability of PSCs are areas of ongoing research. Addressing these challenges is crucial for the successful implementation of PSCs in BIPV, which holds the promise of transforming buildings from passive structures into active energy producers, aligning with global goals for sustainable development and energy efficiency.

Conclusion and perspective

Perovskite solar cells (PSCs) have emerged as a promising contender in the field of photovoltaic technology, demonstrating rapid advancements in efficiency and adaptability. This review discussed these materials in depth, highlighting their unique attributes and the challenges they present. The stability and efficiency of PSCs have been enhanced by techniques such as engineering the electron and hole transport layers, using interfacial layers, and considering tandem solar cells. In-depth research has been done on lead-based, carbon-based, and tin-based PSCs, each of which has benefits and disadvantages. The investigation of tin- and carbon-based replacements has resulted from efforts to minimize or remove the lead content in PSCs. Also promising are polymer-based PSCs and lead-free halide double perovskite solar cells (HDPs). To promote the commercialization and broad acceptance of PSCs in industrial applications, future research should concentrate on improving stability, scalability, and cost-effectiveness, and on creating lead-free perovskite materials and innovative ETLs. In the coming years, PSCs are expected to make further strides and breakthroughs. Although significant development has been achieved, there are still several issues that need to be resolved before these solar cell technologies can reach their full potential.
Enhancing the stability and long-term performance of PSCs should be the main topic of future study. To ensure the viability and longevity of PSCs, efforts must be made to avoid moisture intrusion, enhance interfacial engineering, and create stable materials that can tolerate environmental variables. Additionally, lead-free alternatives should be further investigated, with an emphasis on creating effective and durable perovskite materials based on tin and carbon. The large-scale commercialization of PSCs necessitates the optimization of scalability and cost-effectiveness. PSCs will need to be more affordable to become more widely available, which will require streamlining manufacturing processes, improving deposition methods, and creating cost-effective materials. Device engineering and interface improvement will continue to be crucial in addition to material developments. To improve the overall device performance, research should also concentrate on creating new electron and hole transport layers, investigating alternative interfacial layers, and increasing charge extraction. The commercialization and broad acceptance of PSCs will be facilitated by the coordinated efforts of researchers, industry players, and policymakers. Realizing the full potential of PSCs in offering clean and sustainable energy solutions will be accelerated by continued investment in research and development, information exchange, and cooperation. In conclusion, PSCs have enormous potential to revolutionize the solar energy conversion industry. Perovskite materials have several outstanding characteristics, and current research and development efforts are opening the door to highly effective, inexpensive, and environmentally friendly solar cells. PSCs have the potential to change the clean energy landscape and contribute to a sustainable and environmentally friendly future with continuing improvements in stability, scalability, cost-effectiveness, and lead-free substitutes.

Fig. 1 Progress in the PCE of PSCs in the last decade.
Fig. 7 Schematic of the device architecture of an SnO2-based PSC.
Fig. 9 (a) Variation in the bandgap of Sn-Pb perovskite with changes in the Pb and Sn ratio in the B site. (b) Schematic illustrating how the bandgap is formed in Pb-Sn alloy perovskite. (c) Substituting Pb with Sn in perovskite leads to the formation of defect sites, causing nonradiative voltage loss. (d) Time-resolved photoluminescence spectra obtained for FA0.75Cs0.25SnxPb1-xI3 perovskite layers with varying Sn amounts. (a) Ref. 132. (b) Ref. 133. (c) Ref. 134. (d) Ref. 135.
Fig. 10 (a) PCE of MA1-xFAxSn0.5Pb0.5I3 PSCs. UE in relation to the changing perovskite composition for (b) MA1-xFAxPb0.4Sn0.6I3 and (c) (FASnI3)x(MAPbI3)1-x. (d-f) Thermal and air stability tests conducted on the device at 85 °C with 10% Cs addition; stability and efficiency increase with the inclusion of Cs in SnxPb1-x PSCs. (b and c) Ref. 143. (d-f) Ref. 144.
Fig. 14 Structure of the PSC based on NPB.
Fig. 15 Device architecture and photovoltaic parameters of HDP-based solar cells.
Fig. 16 (a) Maximum efficiency limits for a 2-T tandem solar cell and (b) panel. (c) Configuration of a 2-T PTSC. (d) Efficiency of narrow bandgap (NBG) Sn-Pb PSCs with a bandgap range of 1.17-1.3 eV. (e) Efficiency and Voc graph of wide bandgap (WBG) Pb-based PSCs with a bandgap range of 1.7-1.8 eV. (f) PCE for 2-T and 4-T PTSCs over time. (a and b) Ref. 212.
Fig. 17 Recent progress in perovskite-based tandem PV cells.
Fig. 18 Energy levels of different materials used in PSCs.
Table 1 Band gap, carrier mobility, and exciton binding energy of different perovskite materials.
Table 2 Progress in charge transport layer materials for PSCs.
Table 3 Progress in lead-based PSCs.
Table 4 Progress in tin-based PSCs.
Table 5 Progress in Pb-Sn based PSCs.
Table 6 Recent developments in polymer-based PSCs.
Table 7 Recent development in halide double perovskite solar cells (HDPs).
Table 8 Concise overview of the latest advancements in the performance of PTSCs.
Table 9 Impact of various PSC device configurations on stability and PCE.

Alternatively, the vertically oriented grains in 2D/3D perovskites with condensed traps confer benefits in terms of aligning optical transition dipoles, thereby amplifying the photovoltaic performance. Recently, Zhang et al. 106 fabricated significantly proficient PSCs by introducing PEA+ (bulky organic cations) into 2D/3D Pb-Sn alloys using a solvent optimization approach. The growth orientations and SOC in Pb-Sn perovskites are modulated by the organic cations after inclusion, which results in photoinduced bulk polarization. The photovoltaic activities are improved by the increased SOC and bulk polarization. They stated that a high PCE of 15.93% is obtained in the 2D and 3D Pb-Sn alloy-based PSCs. While spin-forbidden recombination creates dark states, spin-allowed recombination creates bright states.

Controlling the crystallinity, shape, orientation, thickness, phase purity, etc. of the film, which may affect the PCE of solar cell devices, is the driving force for the use of various manufacturing procedures for Cs2AgBiBr6 solar cells.188 Low-pressure-aided solution processing was used by Xiao and colleagues to create planar solar cells with an optimum PCE of 1.44%.189 Cs2AgBiBr6 thin films were successfully created by Liu and colleagues using a sequential vapor deposition method. The device displayed a PCE of 1.37%, Voc of 1.12 V, Jsc of 1.79 mA cm⁻², and FF of 68%.190 They also emphasized the need for extra BiBr3 to produce stoichiometric Cs2AgBiBr6 double perovskite thin films, which Yang and coworkers later verified. The vapor-processed film displayed a deviating composition stoichiometry as a result of a greater loss of Br.191 After 350 h of ambient storage without encapsulation, the vapor-processed devices retained over 90% of their original PCEs. The method was also utilized to create Cs2AgBiBr6 thin films with ultrasmooth morphology, micro-sized grains, and high crystallinity. Antisolvent dropping, a common technique for the fabrication of Pb-based perovskite solar cells, resulted in Voc = 1.01 V, Jsc = 3.19 mA cm⁻², and FF = 69.2%, and the resultant solar cells exhibited an optimum PCE of 2.23%.192

The key factors contributing to this instability include exposure to water (H2O), oxygen (O2), ultraviolet light, and heat.240 Methylammonium lead iodides, common components in PSCs, are particularly susceptible to hydrolysis when exposed to moisture. This process leads to material degradation, forming lead iodide (PbI2).
29,107.4
2024-02-07T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
Kinetic Methods of Analysis with Potentiometric and Spectrophotometric Detectors – Our Laboratory Experiences

Introduction

The basic types of reactions used for determinative purposes encompass the traditional four in equilibrium-based measurements: precipitation (ion exchange), acid-base (proton exchange), redox (electron exchange) and complexation (ligand exchange). These four basic types, or cases that can be reduced to them, are also found in kinetic-based measurements, with some distinguishable trends. The influence of concentration on the position of a chemical equilibrium is described in quantitative terms by means of an equilibrium-constant expression. Such expressions are important because they permit the chemist to predict the direction and completeness of a chemical reaction. However, the size of an equilibrium constant tells us nothing about the rate (the kinetics) of the reaction. A large equilibrium constant does not imply that a reaction is fast. In fact, we sometimes encounter reactions that have highly favorable equilibrium constants but are of little analytical use because their rates are low. Commonly used kinetic methods have been classified according to the chemistry of the reaction employed [1,2]. Kinetic methods of analysis are based on the fact that, for most reactions, the rate of the reaction and the analytical signal increase with an increase of the analyte concentration. In kinetic methods, measurement of the analytical signal is made under dynamic conditions in which the concentrations of reactants and products are changing as a function of time. Generally, in analytical chemistry many methods of analysis are based on the equilibrium state of the selected reaction. In contrast to kinetic methods, equilibrium or thermodynamic methods are performed on systems that have come to equilibrium or steady state, so that the analytical signal is stable during measurements. The kinetic and equilibrium parts of a selected chemical reaction are illustrated in Figure 1. The most important advantage of kinetic methods of analysis is the ability to use chemical reactions that are slow to reach equilibrium. By using kinetic methods, determination of a single species in a mixture may be possible when the species differ sufficiently in their reaction rates. In this chapter we present two analytical techniques where experimental measurements are made while the analytical system is under kinetic control: i) chemical kinetic techniques, and ii) flow injection analysis. The use of potentiometric and spectrophotometric detectors in kinetic methods is discussed. Also, the preparation and potential response of solid-state potentiometric chemical sensors are described.
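The kinetic/equilibrium distinction illustrated in Figure 1 can be mimicked with a short simulation. The sketch below generates a pseudo-first-order signal S(t) = S_inf(1 - exp(-kt)) and estimates the initial rate for several analyte concentrations; all numbers are invented for illustration. The initial slope scales with the concentration, which is exactly what kinetic (initial-rate) methods exploit.

```python
import numpy as np

def signal(t, c_analyte, k=0.5, s_per_conc=1.0):
    """Pseudo-first-order signal growth S(t) = S_inf * (1 - exp(-k t)),
    with the limiting signal S_inf proportional to the analyte concentration."""
    return s_per_conc * c_analyte * (1.0 - np.exp(-k * t))

t = np.linspace(0.0, 1.0, 11)                  # early, kinetic part of the curve
for c in (1.0, 2.0, 4.0):
    rate0 = np.polyfit(t, signal(t, c), 1)[0]  # crude initial-rate estimate
    print(f"c = {c}: initial rate ~ {rate0:.3f} (scales with c)")
```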
Kinetic methods of analysis with potentiometric detector

A potentiometric chemical sensor or ion-selective electrode conforms to the Nernst equation

E = E0 + k log ai    (1)

where E is the measured cell potential, E0 is a constant for a given temperature, ai is the activity of the analyte ion in aqueous solution, and k = RT ln 10/(nF), where R is the gas constant, T is the absolute temperature, F is Faraday's constant and n is the number of electrons discharged or taken up by one ion (molecule) of the analyte. Usually, but not necessarily, n equals the charge (with sign) on the ionic form of the analyte. In practice, for constructing a calibration graph, it is normal to use solution concentrations instead of activities, since concentration is a more meaningful term to the analytical chemist than activity. There are several points which should be noted about the response behavior of potentiometric chemical sensors when calibration graphs are constructed [3]. The electrode potential developed by an ion-selective electrode in a standardizing solution can vary by several millivolts per day between measurements. For accurate measurement, therefore, the electrodes should be restandardized several times during the day. For a single determination, an error of 0.1 mV in the measurement of the electrode potential results in an error of 0.39% in the value of a monovalent anion activity [4]. Direct potentiometric measurements are usually time-consuming experiments. Kinetic potentiometric methods are a powerful tool for analysis, since they permit sensitive and selective determination of many samples within a few minutes, in many cases with no sample pretreatment. The application of kinetic potentiometric methods offers some specific advantages over classical potentiometry, such as improved selectivity due to measurement of the evolution of the analytical signal with the reaction time. To construct calibration graphs, the initial rate of the complex (product) formation reaction or the change in potential during a fixed time interval is used.

Use of potentiometric chemical sensors in aqueous solution

The fluoride ion-selective electrode (FISE) with a LaF3 membrane has proved to be one of the most successful ion-selective electrodes. The FISE has a great ability to indirectly determine a whole series of cations which form strong complexes with fluoride (such as Al3+, Fe3+, Ce4+, Li+, Th4+, etc.). Combining the simplicity of the kinetic method with the advantages of this sensor (low detection limit, high selectivity) produces an excellent analytical technique for the determination of metal ions that form complexes with fluoride. The suitability of the FISE for monitoring the reasonably fast reaction of the formation of FeF2+ in acidic solution was established by Srinivasan and Rechnitz [5]. The determination of Fe(III) based on monitoring the formation of FeF2+ using the FISE has been described [6]. In that work, the kinetics of the FeF2+ formation reaction were studied in acidic solution (pH = 1.8 and 2.5). The initial rates of iron(III)-fluoride complex formation in the solution, calculated from the non-steady-state potential values recorded after addition of Fe(III), were shown to be proportional to the analytical concentration of this ion in the cell solution. The initial rate of the complex formation reaction, or the change in potential during a fixed time interval (1 minute), was used to construct calibration graphs. Good linearity (r = 0.9979) was achieved in the range of iron concentration from 3.5 × 10⁻⁵ to 1.4 × 10⁻⁵ mol L⁻¹.
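The 0.39% figure quoted above follows directly from differentiating the Nernst equation: da/a = n ln(10) dE/k. A minimal check, assuming n = 1 and 25 °C:

```python
import math

R, F, T = 8.314, 96485.0, 298.15
k = R * T * math.log(10) / F * 1000.0   # Nernst slope in mV/decade, n = 1

def activity_error(delta_E_mV, n=1):
    """Relative error in the ion activity caused by a potential error dE."""
    return math.log(10) * n * delta_E_mV / k

print(f"Nernst slope: {k:.2f} mV/decade")
print(f"0.1 mV error -> {activity_error(0.1) * 100:.2f} % error in activity")
```

Running this prints a slope of 59.16 mV/decade and an activity error of 0.39%, in agreement with the value cited in [4].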
The described procedure can usefully be applied to the determination of free Fe(III) or labile Fe(III), as fluoride may displace weaker ligands. The determination of aluminium using the FISE has mostly been performed in solutions buffered with acetate at pH 5, where fluoride is in the F⁻ form [7,8]. Potential-time curves recorded during the Al-F complex formation reaction, using a potentiometric cell with the FISE, constitute the primary data in this study [7]. The initial rates of decrease of the free fluoride ion concentration were calculated and shown to be proportional to the amount of aluminium in the reaction solution. The initial rates of aluminium-fluoride complex formation in this acidic solution, calculated from the non-steady-state potential values recorded after addition of aluminium, were shown to be proportional to the amount of this ion added [9]. The kinetics of aluminium fluoride complexation have been studied over a large pH range: from 0.9 to 1.5 [5] and from 2.9 to 4.9 [10]. Due to the toxicity of monomeric aluminium in the free (aquo) and hydroxide forms, its rate of complexation with fluoride in the acidified aquatic environment is very important. A kinetic investigation of the rate and mechanism of the reaction between Al(III) ions and fluoride in buffered aqueous solution (pH values 2 and 5) has been described; the important paths of complex formation and the ecological importance of aluminium fluoride complexation in acidified aquatic environments were discussed [11]. When a laboratory solution, or the aquatic environment, contains aluminium and fluoride ions, the following reactions may be considered the important paths for aluminium-fluoride formation:

Al3+ + F⁻ ⇌ AlF2+    (2)
Al3+ + HF ⇌ AlF2+ + H+    (3)
AlOH2+ + F⁻ ⇌ AlF2+ + OH⁻    (4)
AlOH2+ + HF ⇌ AlF2+ + H2O    (5)

In these reactions, coordinated water has been omitted for simplicity. Under the experimental conditions, where cAl ≪ cF, the formation of the AlF2+ complex may be expected through one of the four possible paths (Eqs. (2)-(5)), depending on the solution acidity. According to the theoretical consideration, after addition of aluminium the recorded change in potential of the cell with the FISE was higher at pH 2 than at pH 5. However, the rate of aluminium fluoride complexation is slightly slower at pH 2 than at pH 5. A kinetic method for the potentiometric determination of Fe(III) with a copper(II)-selective electrode, based on a metal displacement reaction, has been described [12]. Addition of various amounts of iron(III) to the buffered (pH 4) Cu(II)-EDTA cell solution alters the concentration of free copper(II) ion in the solution. EDTA is the well-known abbreviation for ethylenediaminetetraacetic acid, a compound that forms strong 1:1 complexes with most metal ions. EDTA is a hexaprotic system, designated H6Y2+. When iron(III) is added to a buffered aqueous solution containing the CuY2⁻ species, some cupric ion will be displaced because KFeY⁻ > KCuY2⁻:

Fe3+ + CuY2⁻ ⇌ FeY⁻ + Cu2+

This ligand exchange between two metals is often sluggish because the reaction involves breaking a series of coordinate bonds in succession [2]. As already noted [7], the rate of change of the potential, expressed as dE/dt, is directly proportional to the rate of change of the concentration of the potential-determining ion, Cu2+ in this experiment, with time. The plot of the calculated values ΔE/Δt versus log cFe(III) was found to be linear for different concentrations of the Cu-EDTA complex, which was used as a "kinetic substrate". The linear or analytical range for each tested concentration of Cu-EDTA was close to one decade of iron concentration.
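A calibration of the kind just described (ΔE/Δt versus log c) reduces to a simple linear regression. The sketch below fits synthetic, invented data points; it is meant only to show the construction of the graph, not to reproduce the published results.

```python
import numpy as np

# Synthetic (invented) initial rates dE/dt for increasing Fe(III) concentration
log_c = np.array([-5.0, -4.7, -4.4, -4.1])   # log10 c(Fe(III)), ~1 decade
rate  = np.array([1.9, 3.0, 4.1, 5.3])       # dE/dt in mV/min

slope, intercept = np.polyfit(log_c, rate, 1)
r = np.corrcoef(log_c, rate)[0, 1]
print(f"dE/dt = {slope:.2f} * log c + {intercept:.2f}, r = {r:.4f}")
```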
A kinetic potentiometric method for the determination of thiols (RSH): L-cysteine (cys), N-acetyl-L-cysteine (NAC), L-glutathione (glu) and D-penicillamine (pen) has been presented [13]. The proposed method is based on the reaction of formation of the sparingly soluble salts RSAg between RSH and Ag+. During the kinetic part of this reaction, potential-time curves were recorded using a commercial iodide ion-selective electrode with an AgI-based sensitive membrane versus a double-junction reference electrode as the potentiometric detector. The change of the cell potential was continuously recorded at 3.0 s intervals. When the potential change ΔE, recorded in the 5th minute after RSH had been added to the reaction solution, was plotted versus the negative logarithm of the RSH concentration, p(RSH), rectilinear calibration graphs were obtained in the concentration range from 1.0 × 10⁻⁵ to 1.0 × 10⁻⁵ mol L⁻¹. The applicability of the proposed method was demonstrated by the determination of the chosen compounds in pharmaceutical dosage forms.

Use of potentiometric chemical sensors in non-aqueous solution

A change in solvent may cause changes in the thermodynamic as well as the kinetic properties of the selected chemical reaction. Also, the solubility of the sensing membrane of a potentiometric chemical sensor, the stability of the complexes formed, the adsorption of reactants on the membrane and any undefined surface reaction may be strongly solvent dependent. Furthermore, the main properties of the sensor which are important for analytical application, such as sensitivity, selectivity, response time and lifetime, may be altered in non-aqueous solvents. In our experiments we have investigated different aqueous + organic solvent mixtures and their influence on the thermodynamics and kinetics of the chemical reaction employed. Baumann and Wallace showed that a cupric-selective electrode and a small amount of the copper(II)-EDTA complex could be used for end-point detection in chelometric titrations of metals for which no electrode was available [14]. In the case of the compleximetric titration of mixtures of copper(II) and another metal ion in aqueous solution, only the sum of both metals can be determined [15]. Titration measurements in ethanol-aqueous media using the cupric ion-selective electrode as the titration sensor showed the possibility of direct determination of copper(II) in the presence of different quantities of iron(III) [16]. A sulfide ion-selective electrode was used as a potentiometric sensor for the determination of lead(II) in aqueous and non-aqueous media. The initial rate of PbS formation was studied for a series of solutions at various concentrations of sodium sulfide and different pH values. Measurements of lead sulfide formation in the presence of ethanol at 50% V/V were carried out in order to study the effect of the organic solvent on the formation of the lead sulfide precipitate. After addition of Pb(II) ion in water-ethanol mixtures, ethanol yielded higher potential jumps than aqueous media [17]. Generally, titrations in aqueous and non-aqueous media offer numerous advantages over direct potentiometry [18]. As mentioned, a change in solvent may cause changes in the thermodynamic as well as the kinetic properties of the ions present. Also, the solubility of the FISE membrane, the stability of other metal fluorides, the adsorption of fluoride ion and/or metal ions on the membrane and any undefined surface reaction may be strongly solvent dependent.
Furthermore, the main properties of the electrode used, such as sensitivity, selectivity, response time and lifetime, may be altered in non-aqueous solvents. Many papers have been concerned with the behavior of the FISE in a variety of organic solvents and their mixtures with water. Potentiometric titration of aluminium with fluoride in organic solvent + water mixtures, using an electrochemical cell with the FISE, has been performed [19]. The potential break in the titration curve is not evident when the titration is performed in aqueous solution. When the complexometric titration is performed in non-aqueous solution, well-defined S-shaped titration curves are obtained, which suggests a simple stoichiometry of the titration reaction. In non-aqueous solutions, the formation of the complex with the maximum number of ligands (six) is presumably preferred. On the basis of the potentiometric titration experiments, the overall conditional formation constants of the AlFi(3-i)+ complexes have been calculated. Among the solvents tested (namely ethanol, p-dioxane, methanol, n-propanol and tert-butanol), p-dioxane yielded a greater potential break than the other solvents, and the measurements in mixtures with this solvent and with ethanol also showed the best precision. The formation of the aluminium hexafluoride complex in organic solvent + water mixtures may be accepted for the titration of higher concentrations of aluminium (>10⁻⁵ mol L⁻¹). However, at low concentrations of aluminium, the stoichiometric ratio between aluminium and fluoride was constant only for a narrow range of aluminium concentrations and can be determined by experiment only. The potentiometric determination of aluminium in 2-propanol + water mixtures has been described [20]. The theoretical approach for the determination of aluminium using two potentiometric methods (potentiometric titration and analyte subtraction potentiometry) was discussed. The computed theoretical titration curves show that the equivalence point is signaled by a great potential break only in media where aluminium forms the hexafluoride complex. On the basis of the potentiometric titration and the known subtraction experiments in 2-propanol + water mixtures, the overall conditional constants of the aluminium fluoride complexes were determined.

Kinetic methods of analysis with spectrophotometric detector

In this chapter, kinetic spectrophotometric methods are concerned with the determination of thiols and similar compounds in pharmaceutical dosage forms. In fact, the spectrophotometric technique is the most widely used in pharmaceutical analysis, due to its inherent simplicity, economic advantage, and wide availability in most quality control laboratories. Kinetic spectrophotometric methods are becoming of great interest for pharmaceutical analysis. The application of these methods offers some specific advantages over classical spectrophotometry, such as improved selectivity due to the measurement of the evolution of the absorbance with the reaction time. The literature is still poor regarding analytical procedures based on kinetic spectrophotometry for the determination of drugs in pharmaceutical formulations. Surprisingly, to the authors' knowledge, there are only a few published kinetic spectrophotometric methods for the determination of N-acetyl-L-cysteine (NAC) [21][22][23]. Also, only one of the cited methods for the determination of NAC has used Fe3+ and 2,4,6-tripyridyl-s-triazine (TPTZ) as the reagent solution. The reported method [23] is based on a coupled redox-complexation reaction.
In the first (redox) step of the reaction, NAC (an RSH compound) reduces Fe3+ to Fe2+ (Eq. (6)). In the second step of the reaction, the reduced Fe2+ is rapidly converted to the highly stable, deep-blue coloured [Fe(TPTZ)2]2+ complex (Eq. (7)) with λmax at 593 nm:

2Fe3+ + 2RSH → 2Fe2+ + RSSR + 2H+    (6)

Fe2+ + 2TPTZ → [Fe(TPTZ)2]2+    (7)

The initial rate and fixed-time (at 5 min) methods were utilized in this experiment. Both methods can easily be applied to the determination of NAC in pure form or in tablets. In addition, the proposed methods are sensitive enough to enable the determination of near-nanomole amounts of NAC without expensive instruments and/or critical analytical reagents. The kinetic manifold for the spectrophotometric determination of NAC or other thiols is shown in Fig. 2. A kinetic spectrophotometric method for the determination of tiopronin {N-(2-mercaptopropionyl)glycine, MPG} has been developed and validated. This method is also based on the coupled redox-complexation reaction (Eqs. (6) and (7)) [24]. The use of TPTZ as the chromogenic reagent has improved the selectivity, linearity and sensitivity of the measurements. The method was successfully applied to the determination of MPG in pharmaceutical formulations. The initial rate and fixed-time (at 3 min) methods were utilized for constructing the calibration graphs. The graphs were linear in the concentration range from 1.0 × 10⁻⁶ to 1.0 × 10⁻⁴ mol L⁻¹ for both methods, with limits of detection of 1.3 × 10⁻⁷ mol L⁻¹ and 7.5 × 10⁻⁸ mol L⁻¹ for the initial rate and fixed-time method, respectively. Under the optimum conditions, the absorbance-time curves for the reaction at varying MPG concentrations (1.0 × 10⁻⁶ to 1.0 × 10⁻⁴ mol L⁻¹) with fixed concentrations of Fe(III) (5.0 × 10⁻⁴ mol L⁻¹) and TPTZ (5.0 × 10⁻⁴ mol L⁻¹) were generated (Fig. 3). The initial reaction rates (K) were determined from the slopes of these curves. The logarithms of the reaction rates (log K) were plotted as a function of the logarithms of the MPG concentrations (log c) (Fig. 4). The regression analysis was performed by fitting the data to the following equation:

log K = log k' + n log c    (8)

where K is the reaction rate, k' is the rate constant, c is the molar concentration of MPG, and n (the slope of the regression line) is the order of the reaction. A straight line with a slope value of 0.9686 (≈1) was obtained, confirming a first-order reaction. However, under the optimized reaction conditions, the concentrations of Fe(III) and TPTZ were much higher than the concentration of MPG in the reaction solution. Therefore, the reaction was regarded as a pseudo-first-order reaction. A simple kinetic method for the spectrophotometric determination of L-ascorbic acid (AA) and thiols (RSH) in pharmaceutical dosage forms, based on a redox reaction of these compounds with Fe(III) in the presence of 1,10-phenanthroline (phen) at pH = 2.8, has been described [21]. Before RSH or AA is added to the reaction solution, Fe(III) and phen form a stable complex, Fe(phen)33+. The mechanism of the redox reaction of AA or RSH with the formed complex was established in that work. The orange-red iron(II)-phen complex produced by the reaction in Equation (12) absorbs at 510 nm, and the Cu2+ ions turn back to the reaction with RSH. The rates and mechanisms of the chemical reactions on which all these determinations are based play a fundamental role in the development of the analytical (spectrophotometric) signal. Therefore, it was of great importance to establish the kinetics of the chemical reactions applied in developing the proposed method.
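The order-of-reaction analysis around Eq. (8) is straightforward to reproduce numerically. The sketch below synthesizes initial rates from an assumed rate law K = k'·c^n (the values are invented, not the published data) and recovers n by linear regression of log K on log c.

```python
import numpy as np

# Illustrative initial rates K at several MPG concentrations c;
# generated from an assumed rate law, not taken from the published work
c = np.array([1e-6, 5e-6, 1e-5, 5e-5, 1e-4])   # mol/L
K = 2.0e3 * c ** 0.97                          # assumed K = k' * c^n

n, log_k = np.polyfit(np.log10(c), np.log10(K), 1)
print(f"order n = {n:.4f} (~1 -> pseudo-first order), k' = {10 ** log_k:.1f}")
```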
This investigation is also important for the optimization of the flow injection method used for the determination of the same compounds.

Flow injection analysis

In flow-injection analysis (FIA) the sample (analyte) is injected into a continuously flowing carrier stream, where the mixing of the sample with the reagent(s) in the stream is controlled by the kinetic processes of dispersion and diffusion. Since the concept of FIA was first introduced in 1975 [26], it has had a profound impact on how modern analytical procedures are implemented. FIA with different detectors has rapidly developed into a powerful analytical tool with many merits, such as broad scope and rapid sample throughput. The analytical signal, monitored by a suitable detection device, is always the result of two kinetic processes that occur simultaneously, namely the physical process of zone dispersion and the superimposed chemical processes resulting from the reaction between analyte and reagent species. As a result of growing environmental demands for reduced consumption of sample and reagent solutions, the first generation of FIA, which utilizes continuous pumping of carrier and reagent solutions, was supplemented in 1990 by the second generation, termed sequential injection analysis (SIA). In 2000, the third generation of FIA, the so-called lab-on-valve (LOV), appeared [27].

FIA-methods with potentiometric detector

Over many years, there has been a great deal of research and development in FIA using ion-selective electrodes as detectors. Different designs of flow-through potentiometric sensors have been investigated, but the incorporation of a tubular ion-selective electrode into the conduits of FIA has been used as a nearly ideal configuration, because the hydrodynamic flow conditions can be kept constant throughout the flow system [28]. For the determination of compounds containing sulphur, a simple FIA system was developed. A simple tubular solid-state electrode with an AgI-based membrane hydrophobized by PTFE (powdered Teflon) was constructed and incorporated into a flow-injection system. The flow system and the configuration of the constructed tubular flow-through electrode unit have been described [29,30]. For the experimental measurements, a two-channel FIA setup has been used. The tubular electrode and the reference electrode are located downstream, after the mixing of the two channels. A constant representing the dilution of the sample and/or reagent after mixing of the two solutions depends on the flow rates in the channels and can be calculated as shown previously [30]. In this experiment, the iodide electrode with the (Ag2S + AgI + PTFE) membrane responds primarily to the activity of the silver ion at the sample solution-electrode membrane interface downstream of the confluence point of the two channels. The preparation and performance of a silver iodide-based pellet hydrophobized by PTFE have been described [31]. In the FIA experiment, using a two-line flow manifold, the potential of the cell with the sensing tubular electrode is given by

E = E0 + S log(α c m f)    (13)

where S, c, m, α and f denote the response slope of the electrode, the total or analytical concentration of silver ions in the reagent solution, the dilution constant, the fraction of Ag+, and the activity coefficient, respectively.
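The dilution constant m mentioned above follows from a simple mass balance at the confluence point of the two channels. A minimal sketch, with assumed example flow rates:

```python
def dilution_constant(q_sample, q_reagent):
    """Dilution of the reagent stream after the confluence point of a
    two-channel manifold, from the volumetric flow rates (mL/min)."""
    return q_reagent / (q_sample + q_reagent)

# Assumed example flow rates; the relation itself is just mass balance
m = dilution_constant(q_sample=0.8, q_reagent=0.8)
print(f"m = {m:.2f}")   # each stream is diluted twofold at equal flow rates
```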
In the absence of ions in the streaming solution that form sparingly soluble silver salts or stable silver complexes, and at constant ionic strength, the potential of the sensor can be expressed by the following equation:

E = const + S log(c m)    (14)

When a sample containing a compound with sulfur (designated also as RSH), at a concentration sufficiently high to cause precipitation of RSAg, is injected into the carrier stream, the silver ion concentration will be lowered to a new value. If cRSH/d ≫ c m, where the dispersion of the sample is represented by the constant d, the free silver ion concentration at equilibrium can be expressed as follows:

[Ag+] = Ksp,RSAg [H+] d / (Ka,RSH cRSH)

where Ksp,RSAg is the solubility product of the silver salt, while Ka,RSH is the dissociation constant of the sulfhydryl group. In the flow-injection measurements of compounds with a highly reactive sulfhydryl group, the potential of the peak may be described by the following equation:

Epeak = const + S log(Ksp,RSAg [H+] d / (Ka,RSH cRSH))    (21)

The peak height h in these measurements is equal to the potential difference h = E - Epeak, and using equations (14) and (21),

h = S log(c m Ka,RSH cRSH / (Ksp,RSAg [H+] d))

The application of the FIA system was exemplified by the determination of different compounds containing sulfur in 0.1 mol L⁻¹ HClO4 as the supporting electrolyte. For compounds with an -SH group, a rectilinear calibration graph was obtained. The experimental slope was in good agreement with the theoretical value postulated on the basis of the precipitation process and the formation of RSAg in the carrier stream or at the sensing part of the detector. The equilibrium concentration of Ag+ ions will also be lowered if a sample contains an RSH that forms Ag(SR)i(1-i) complexes instead of a precipitate. Hence, if the injected concentration of RSH is much higher than the silver concentration in the streaming solution, the potential of the peak may be described by the following equation:

Epeak = const + S log(c m / (β [RS⁻]^i))

where β is the stability constant and [RS⁻] is the free concentration of the ligand. The concentration of the ligand can be expressed through the total injected RSH concentration and the dissociation equilibrium of the sulfhydryl group.

FIA-methods with spectrophotometric detector

Recently, stricter regulation related to quality control in pharmaceuticals has led to increased demands for automation of the analytical assays carried out in the appropriate control laboratories. FIA has become a versatile instrumental tool that has contributed substantially to the development of automation in pharmaceutical analysis due to its simplicity, low cost and relatively short analysis time. A simple, rapid and sensitive flow-injection spectrophotometric method for the determination of NAC has been successfully developed and validated [35]. In this work, TPTZ was proposed as the chromogenic reagent for the determination of NAC in aqueous laboratory samples, instead of the frequently employed 1,10-phenanthroline. The reaction mechanism of the method is based on the coupled redox-complexation reaction between NAC, Fe(III) and TPTZ. The use of TPTZ as the chromogenic reagent has improved the selectivity, linearity and sensitivity of the measurements. The method was successfully applied to the determination of NAC in pharmaceutical formulations. The flow-injection manifold for the spectrophotometric determination of NAC is shown in Figure 5. In order to evaluate the potential of the proposed method for the analysis of real samples, the flow-injection spectrophotometric procedure was applied to different pharmaceutical formulations (granules, syrup and dispersible tablets) for the determination of NAC. The recorded peaks referring to samples A, B and C are shown in Figure 6. A FIA spectrophotometric procedure for the determination of N-(2-mercaptopropionyl)glycine (MPG), tiopronin, has also been proposed [36].
The determination was also based on the coupled redox-complexation reaction between MPG, Fe(III) and TPTZ. This coupled reaction was also usefully employed in the development of a FIA method for the determination of ascorbic acid in pharmaceutical preparations [37]. The proposed method is simple and inexpensive, does not involve any pre-treatment procedure, and has a high sample analysis frequency.

Potential response of solid state potentiometric chemical sensors, theoretical approach and analytical teaching experiment

In this chapter, the solid-state potentiometric chemical sensors (PCS) used as detectors in the presented kinetic methods, performed in batch or flow-injection mode, are discussed. A PCS makes use of the development of an electrical potential at the surface of a solid material when it is placed in a solution containing species which can exchange (or react reversibly) with the surface. The species recognition process is achieved with a PCS through a chemical reaction at the sensor surface. Thus, the sensor surface must contain a component which will react chemically and reversibly with the analyte in the contacting solution. The response of a solid-state PCS to sensed ions in solution is governed by ion exchange or redox processes occurring between the electrode membrane and the solution. Since the transfer of the ions or electrons occurs across this membrane-solution interface, it is readily apparent that any changes in the nature and composition of the membrane surface will affect these processes and hence the response of the sensor. The potential of a PCS in kinetic experiments is formed due to the heterogeneous reaction at the surface of the membrane and the homogeneous reaction in the contacting solution. The potential response of a solid-state PCS with an Ag2S + AgI membrane has been extensively investigated in our laboratory. For a better understanding of the behavior of this sensor in kinetic experiments, the following questions are discussed: i) Which chemical compound on the surface of the membrane is important for the response of the sensor? ii) Which heterogeneous chemical reaction (or reactions), occurring between the electrode membrane and the sensed ions in solution, forms the interfacial potential? iii) Which homogeneous chemical reaction (or reactions) in solution is (are) important for the potential response of the sensor? Potentiometric measurements with a PCS containing a membrane prepared by pressing sparingly soluble inorganic salts can be used for teaching homogeneous and heterogeneous equilibria. The learning objective is to distinguish between homogeneous and heterogeneous equilibria, and between single-component and multicomponent systems [38,39]. As has been discussed, the determination of penicillamine was based on batch and FIA experiments using a PCS with an AgI membrane. The membrane was prepared by pressing silver salts (AgI, Ag2S) and powdered Teflon (PTFE). This AgI-based membrane detector, sensitive to the sulfhydryl group, can be applied to the flow-injection determination of different compounds containing sulfur. In order to understand the effect of stirring or flowing on the potential response of the sensor, for both kinds of kinetic experiments (batch and FIA), it is necessary to develop a picture of the liquid flow patterns near the surface of the sensor in a stirred or flowing solution. According to Skoog [40], three types of flow can be identified. Turbulent flow occurs in the bulk of the solution away from the electrode and needs to be considered only in stirred solutions during batch kinetic experiments.
Near the surface of the electrode, laminar flow takes place. In the FIA experiment, only laminar flow exists in the tube. For both kinds of kinetic experiments (batch and FIA), at 0.01-0.50 mm from the surface of the electrode the rate of laminar flow approaches zero, giving a very thin layer of stagnant solution which is called the Nernst diffusion layer (Ndl). According to Equation (13), the potential of the sensor is determined by the activity of the Ag+ ion in the Ndl. When the membrane of the sensor, containing both Ag2S and AgI, is immersed in a solution with Ag+ or I⁻ ions, a heterogeneous equilibrium at the phase boundary is established. The potential difference between the solution phase and the solid phase of the sensor is built up by a charge separation mechanism in which silver ions distribute across the membrane/solution interface, as shown in Figure 7. The stable potential of the PCS with the AgI + Ag2S membrane in contact with a penicillamine (RSH) solution can be explained by the following consideration. According to the picture of the liquid flow near the surface of the sensor in a stirred or a flowing solution (Fig. 7), the potential of the sensor is determined by the activity of the Ag+ ion in the Ndl. In the FIA experiment, the PCS with the AgI + Ag2S membrane (before injection of penicillamine) was in contact with a flowing solution of Ag+ ion, and the concentration of Ag+ ions in the solution, including the Ndl, was 6.30 × 10⁻⁶ mol L⁻¹. The formation of a new solid-state phase in the Ndl and/or at the surface of the sensing part of the tubular flow-through electrode unit may be expressed by the following reaction:

RSH + Ag+ ⇌ RSAg(s) + H+

The calculated value of the equilibrium constant suggests completeness of the new-phase formation reaction at the surface of the membrane. In addition, it can be supposed that, by an adsorption process, both parts of the membrane, AgI and Ag2S, are covered with a thin layer of RSAg precipitate (Fig. 8). Under these conditions, the equilibrium activity of Ag+ ions and the corresponding response of the PCS are governed by the new heterogeneous equilibrium. Now we can calculate the minimal concentration of penicillamine, or any other RSH compound, which causes precipitation of RSAg in acidic media.
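That threshold follows from requiring the ion product [Ag+][RS⁻] to exceed the solubility product, with [RS⁻] fixed by the acid dissociation of the sulfhydryl group. The sketch below evaluates it, together with the corresponding peak height from the relations of the previous section; Ksp, Ka, the dispersion constant d and the pH are assumed example values, while the Ag+ level of 6.30 × 10⁻⁶ mol L⁻¹ is the one quoted above.

```python
import math

S    = 59.16      # electrode slope, mV/decade (25 degC, n = 1)
c_ag = 6.30e-6    # Ag+ in the carrier stream, mol/L (value quoted above)

# Assumed example constants for an RSH/RSAg system (illustrative only)
K_sp, K_a, h_plus, d = 1e-20, 1e-10, 0.1, 2.0

# Smallest total RSH concentration that precipitates RSAg:
# requires [RS-] = K_a * c_RSH / [H+] to reach K_sp / [Ag+]
c_min = K_sp * h_plus / (K_a * c_ag)
print(f"c_min ~ {c_min:.2e} mol/L")

# Peak height for an injected RSH concentration well above the threshold
c_rsh = 1e-3
rs_minus = K_a * (c_rsh / d) / h_plus   # free thiolate after dispersion
ag_peak  = K_sp / rs_minus              # [Ag+] pinned by the precipitate
print(f"h ~ {S * math.log10(c_ag / ag_peak):.0f} mV")
```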
7,123.2
2012-01-01T00:00:00.000
[ "Chemistry" ]
Flux vacua and modularity Geometric modularity has recently been conjectured to be a characteristic feature for flux vacua with $W=0$. This paper provides support for the conjecture by computing motivic modular forms in a direct way for several string compactifications for which such vacua are known to exist. The analysis of some Calabi-Yau manifolds which do not admit supersymmetric flux vacua shows that the reverse of the conjecture does not hold. Introduction Modularity is a theme that arises in quite different ways in string theory. From the early days of its development modular invariance on the worldsheet has been one of the cornerstones of the foundations of the theory. Historically far removed and much older than two-dimensional conformal field theory is the notion of geometric modularity, which can be traced to Klein in the late 19th century, but which emerged as a more central part of mathematics only in the 1950s and 60s. First steps in this direction were taken in the work of Eichler, Taniyama, Shimura and Weil [1] on elliptic curves, which eventually led to the insights of Langlands [2,3] concerning higher dimensions. The latter work was originally aimed at a nonabelian generalization of class field theory, i.e. a nonabelian generalization of the relation between Artin's Galois theoretic L-functions and Hecke's modular L-series, but today the Langlands program has absorbed Grothendieck's notion of motives and subsumes in particular the grand conjecture that all motives are automorphic. In a first approximation, motives can simply be thought of as subsectors of the cohomology of the variety. The definition of automorphic forms is not canonical in the mathematics literature and different objects are called automorphic. One clear distinction that can be drawn is between modularity in the classical sense of Klein and Hecke, which views modular forms as objects associated to the group SL(2, R), while automorphic forms are associated to higher rank groups. It is this latter class of objects that the Langlands conjectures are concerned with, and in the present paper the distinction between modular and automorphic forms will be made along these lines. The notion of geometric modularity did not play a role in the first exploration of string theory compactifications in the 1980s and 1990s, perhaps because Langlands' conjectures are not very precise, they are computationally not immediately accessible, and, more importantly, a physical interpretation of the purported modular and automorphic forms was lacking. A first idea for such a physical interpretation came from the question whether geometrically induced modular forms can be related to string theoretic modular forms on the worldsheet. This was pursued in a march through the dimensions, starting with the simplest possible string compactifications of complex dimension one [4]. Extensions to higher dimensions were then constructed for K3 surfaces [5], Calabi-Yau threefolds [6,7], as well as for Fano-type mirrors of rigid CYs [8]. Modularity in families of CY varieties was explored in [9]. Related work in this direction was done in the context of elliptic compactifications in [10,11]. Recently it was suggested that modularity might also serve as an indicator for the existence of supersymmetric flux vacua in the framework of Calabi-Yau varieties [12].
For flux compactifications there are cohomological constraints for the field G_3 = F_3 − τH_3, which are conjectured to lead to modular forms in the classical sense for flux compactifications with vanishing superpotential W. This was supported in [12] with computations at several points in the complex moduli space of the two-parameter octic embedded in the configuration P(1,1,2,2,2)[8]. There are other prominent Calabi-Yau configurations that are known to admit such supersymmetric flux vacua and it is of interest to test the conjecture beyond this octic. A second issue is whether the modularity conjecture can be reversed in the way suggested in ref. [12], where the authors note that one can imagine running the conjecture in reverse to use modularity results to find new supersymmetric flux compactifications. The idea that modularity implies the existence of supersymmetric flux vacua would be very useful because modularity is expected to be a common occurrence. It is however not universal, as the example of the quintic threefold already shows, and in general the Langlands conjectures only suggest the existence of automorphic forms. Hence modularity is selective in the sense that not every manifold leads to classical modular forms, and it is of interest to analyze manifolds that have been shown not to admit flux vacua with W = 0. The outline of this paper is as follows. Section 2 contains a description of the methods used to compute the motivic L-series derived from weighted projective hypersurfaces. Section 3 extends the analysis of [12] to several prominent Calabi-Yau threefolds that have been considered in the literature and for which flux vacua are known to exist, adding also a remark about modular black hole attractors. Section 4 addresses the issue of the reverse of the flux vacua modularity conjecture, and Section 5 contains the conclusions. Motivic L-series in weighted hypersurfaces In this section we describe a method that allows one to efficiently identify rank two motives M for manifolds with high dimensional cohomology groups and to compute their L-series L(M, s). These L-series are used to check for modularity and to read off the levels of their modular groups. This method will then be applied in the remainder of the paper to several manifolds that have been of interest in the context of flux vacua. There exist different methods that can be used to compute motivic L-series of weighted hypersurfaces. Most common in the mathematical literature is the p-adic approach, but in the following the emphasis will be on methods developed in a series of papers that were aimed at relating the resulting geometrically induced modular forms to forms on the worldsheet. This was initiated in [4] in the context of elliptic curves and extended to higher dimensions in [5,6,8,7] and to families in [9]. The advantage of this method is its directness and simplicity, allowing the computation of motivic L-series without computing the full cohomology with subsequent factorization. In the case of elliptic curves the motivic framework is not necessary because of the simple structure of their cohomology, but it becomes important when considering K3 surfaces and higher dimensional varieties, as discussed in the above references. In order to prepare for the structure discussed in this paper it is useful to briefly recall some of the structures that enter the arithmetic L-series.
The conceptual starting point is the zeta function Z(X/F_p, t) of Artin, Schmidt, and Weil, defined as a series expansion that collects the cardinalities N_{r,p}(X) of the variety X defined over the finite field extensions F_{p^r} of F_p as

Z(X/F_p, t) = exp( Σ_{r≥1} N_{r,p}(X) t^r/r ).   (1)

Dwork's proof [13] of the rationality of the zeta function in terms of polynomials Q_p, R_p gives this series (1) a finite form that makes it useful, because it shows that a finite amount of computation determines the complete structure of Z(X/F_p, t). The detailed structure of Q_p, R_p was outlined by Weil [14] and proven by Grothendieck [15] to be given in terms of individual factors P^j_p(X, t) that are associated to the cohomology groups of the variety, with degrees given by the dimension of the j-th group. The numerator collects the factors arising from the odd-dimensional groups, while the denominator runs through the even-dimensional cohomology:

Z(X/F_p, t) = ∏_{j odd} P^j_p(X, t) / ∏_{j even} P^j_p(X, t).

The full cohomology group of complex deformations is often a high-dimensional object, hence the polynomials P^j_p(X, t) are not very useful and neither are their completely factorized forms. This motivates the search for smaller building blocks, first introduced by Grothendieck as motives. There are several ways in which motives can be described. The most familiar approach perhaps is via cohomological realizations using the standard cohomology groups, but this is not immediately useful in an arithmetic context. An alternative formulation via the concept of correspondences provides a more geometric picture that makes contact with the mathematical goal of constructing an appropriate category of these objects. In the present paper, following [6], motives are viewed as representations of the Galois group because this provides the simplest and most effective approach. The Galois theoretic framework of motives is based on the idea that associated to a manifold X is a number field K_X and that the Galois group Gal(K_X/Q) of K_X acts on the cohomology. In the following all Galois groups are of the type Gal(K/Q) and the reference to Q will be dropped for simplicity of notation. The action of this Galois group is in general reducible, leading to a decomposition of the full group into irreducible sectors. These sectors can be viewed as realizations of the motives. A detailed introduction to general motives à la Grothendieck can be found in a string physics context in ref. [6], which also contains references to the standard mathematical literature on motives. This paper also describes the concrete realization of these objects that allows one to specialize the abstract categorical treatment of the mathematics literature to concrete computations in the case of weighted hypersurfaces. In the case of the manifolds of interest here the abstract definition of motives just outlined can be made concrete. The important factor comes from the intermediate cohomology and can be written as a product determined by Jacobi sums j_p(α), where the α are rational vectors of dimension five that essentially parametrize the cohomology for certain primes p. More generally, the set of α for a weighted hypersurface of Brieskorn-Pham type of complex dimension n with weights (w_0, ..., w_{n+1}) and degree d is given by the corresponding set of rational vectors built from the weights w_i and the degree d. For a vector α the Jacobi sum is determined in terms of characters χ_{α_i}(u) on finite fields F_p, where m is determined in terms of the generator g of F_p as u = g^m.
With these characters the Jacobi sums can be written as finite character sums over F_p. As noted above, the full cohomology group of complex deformations of weighted hypersurfaces is often a high-dimensional space and it is more efficient to think in terms of the arithmetic building blocks of the manifolds. The simplification that arises from the motivic structure is that given any of the vectors α and the Galois group Gal(K_X) of the number field K_X associated to the manifold X one can consider motives generated by the orbits O_α of α under the action of this Galois group on the vector α. This leads to a computationally useful representation of the motive as the span of the Galois orbit O_α. While for general varieties X the field K_X of the variety is determined by the factorization of the polynomials P_p that enter the Dwork-Grothendieck factorization of the zeta function, for weighted hypersurfaces this field is immediately determined as the cyclotomic field Q(µ_d), where µ_d is the group of d-th roots of unity determined by the degree d of the manifold. The rank of the resulting motives is generically given by the order of the associated Galois group Gal(Q(µ_d)), which is isomorphic to (Z/dZ)^×, and whose order is given by Euler's totient function φ(d), which can be computed via the product formula φ(d) = d ∏_{p|d} (p − 1)/p, where the product is over all prime divisors of the degree d. Hence the generic rank of the motive is determined by the degree of the hypersurface. The L-series of this motive M_α can then be obtained via the motivic polynomials as an Euler product of their inverses evaluated at t = p^{−s}. With the motive M_α defined as the Galois orbit (9), the combinations of Jacobi sums that determine the coefficients in the L-series are rational integers even though the Jacobi sums themselves are complex. A proof, based on the fact that the action of an element g ∈ Gal(Q(µ_d)) on α induces an action on the Jacobi sum j_p(α), can be found in [5]. A second type of L-series are those obtained from modular forms f(q) defined relative to Hecke congruence subgroups Γ_0(N) of the modular group SL(2, Z) via a tensor type transformation behavior (see e.g. [6] for a physical discussion). Associated to modular forms are L-series via the Mellin transform, which via the expansion f(q) = Σ_n a_n q^n leads to the same type of series L(f, s) = Σ_n a_n n^{−s}. The question becomes whether these L-series match for some weight and level N. The main interest in the earlier work cited above was to apply this construction to the holomorphic threeform Ω, which leads to the concept of the Ω-motive [5] (see also [6]), and to relate the resulting modular and automorphic forms to the modular structure on the worldsheet. In the present paper the focus is on the existence of rank two motives and their modularity. The focus will be on Calabi-Yau varieties that have been of interest in the context of flux compactifications. An application of the arithmetic of CYs that is aimed at the connection with the string worldsheet conformal field theory, but is independent of the modular structure, can be found in ref. [16]. Flux vacua with modular forms Flux vacua with contributions from both the RR field F_(3) and the NSNS field H_(3) in type IIB theory lead to a superpotential W that is usually written in terms of the complex axion-dilaton τ = C_0 + i e^{−φ} as

W = ∫_X G_3 ∧ Ω,   G_3 = F_3 − τ H_3,

where Ω is the holomorphic threeform. The specific form of τ adopted here is motivated by the fact that it lives in the upper halfplane, Im(τ) > 0.
The vacuum constraints for the complex deformations z_a, defined as the Ω-periods via a homology basis {A_a, B_b} and its dual, as well as the axion-dilaton τ, are given by

D_a W = 0,   D_τ W = 0,   (13)

where D_τ W ≡ ∂_τ W + W ∂_τ K and D_a W = ∂_a W + W ∂_a K, and where K = K_τ + K_cs is the sum of the Kähler potentials of τ and of the complex deformations, respectively. These constraints are sometimes called F-flatness. If in addition to the criticality constraints (13) the superpotential vanishes as well, W = 0, the resulting vacua are called supersymmetric. While the criticality constraints determine G_3 to be of type H^{2,1} ⊕ H^{0,3}, the vanishing of the superpotential imposes the further constraint G_3 ∈ H^{2,1} [17]. In-depth reviews of flux compactifications can be found in [18,19,20]. Flux vacua for one-parameter weighted hypersurfaces The class of smooth Calabi-Yau hypersurfaces in weighted projective space consists of four spaces, all of which were considered first in the context of flux vacua in [21]. It was shown there that of these four only the degree six hypersurface X^6_3 ∈ P(1,1,1,1,2)[6] leads to flux vacua with W = 0, while the remaining three, given by the quintic X^5_3 ∈ P_4[5], the octic X^{8A}_3 ∈ P(1,1,1,1,4)[8] and the degree ten hypersurface X^{10}_3 ∈ P(1,1,1,2,5)[10], do not. Thus, while all these manifolds share the property that they have the simplest possible Kähler sector with h^{1,1} = 1, they behave quite differently. The reason for this is to be found in the fact that the number fields K_X associated to these manifolds have different degrees, in that the smooth degree six hypersurface leads to a quadratic extension of the rationals Q, while the remaining spaces have fields of higher degree. Thus, if modular motives of rank two exist for any of these three manifolds (at the relevant ψ) then it is established that modularity is not sufficient for the existence of supersymmetric flux vacua. It is therefore of interest to check whether among these manifolds there exist spaces that are modular in the sense discussed here. In the case of the quintic threefold X^5_3 it follows from the fact that the Galois group Gal(K_{X^5_3}) = (Z/5Z)^× = {1, 2, 3, 4} has order four, in combination with the structure of the H^{2,1} cohomology, that there are no rank two motives, consistent with the modularity conjecture. Similarly, for the degree ten hypersurface the Galois group also has order four, Gal(K_{X^{10}_3}) = (Z/10Z)^× = {1, 3, 7, 9}, and it follows again from the cohomological structure that all motives are of rank four. Thus in both of these cases the manifold is at best automorphic of higher rank. A similar analysis shows that the same holds for the configurations X^{14}_3 ∈ P(1,2,2,2,7)[14] and X^{15}_3 ∈ P(1,3,3,3,5)[15], which have (h^{1,1}, h^{2,1}) = (2, 122) and (h^{1,1}, h^{2,1}) = (3, 75), respectively. These considerations already provide additional support for the modularity conjecture, and the discussion leaves among the one-parameter manifolds the degree six manifold, which does admit supersymmetric flux vacua, and the smooth octic, which does not. The first of these two will be computed presently, while the octic X^{8A}_3 will be considered in the next section.
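Since the order of Gal(Q(µ_d)) is φ(d), the generic motive rank of every hypersurface discussed in this paper follows from a one-line computation. A minimal Python sketch of the product formula quoted in the previous section (the helper names are ours):

```python
# Generic motive rank for the hypersurface degrees appearing in this paper,
# via the product formula phi(d) = d * prod_{p|d} (p - 1)/p.

def prime_divisors(n: int) -> list[int]:
    """Return the distinct prime divisors of n by trial division."""
    divisors, p = [], 2
    while p * p <= n:
        if n % p == 0:
            divisors.append(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        divisors.append(n)
    return divisors

def totient(d: int) -> int:
    """Euler's totient function via the product formula."""
    phi = d
    for p in prime_divisors(d):
        phi = phi * (p - 1) // p
    return phi

for degree in (5, 6, 8, 10, 12, 18, 24):
    print(f"d = {degree:2d}: generic motive rank phi(d) = {totient(degree)}")
```

The output — ranks 4, 2, 4, 4, 4, 6, 8 for d = 5, 6, 8, 10, 12, 18, 24 — matches the Galois group orders quoted throughout the text.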
3.2 The smooth degree six hypersurface X^6_3 ∈ P(1,1,1,1,2)[6] An example that appeared early in the discussions of flux vacua based on the class of weighted hypersurfaces constructed in [22,23,24] is the degree six Brieskorn-Pham manifold in the configuration P(1,1,1,1,2)[6], defined by

z_0^6 + z_1^6 + z_2^6 + z_3^6 + z_4^3 = 0.

This is a smooth hypersurface, hence it has only one Kähler form inherited from the ambient space, h^{1,1} = 1, and the cohomology H^{2,1}(X) is of monomial type, h^{2,1} = 103. It was shown in ref. [21] that supersymmetric flux vacua exist for this manifold at the Landau-Ginzburg point, i.e. the constraints D_a W = 0 = D_τ W can be solved as well as W = 0. Further discussions of flux vacua derived from this manifold can be found in a number of papers, including [25,26]. In the context of a string theoretic interpretation of geometric modularity this variety was discussed briefly in [6], where the modular structure was established at the Landau-Ginzburg point described by the Brieskorn-Pham geometry. In ref. [6] the focus was on the Ω-motive, which leads to the set of associated L-series L(M_{A±}, s). The coefficients of these L-series are not directly fundamental though, because each coefficient at a prime p contains p as a factor. It is therefore natural to shift the argument s of the L-function in order to eliminate these p-factors. This shift corresponds to a twist in Hodge degree of the cohomology and has been important already in [6], as well as in the context of mirrors of rigid Calabi-Yau varieties considered in [8]. In the latter context this twist can involve higher powers of the primes p. A selection of the L-function coefficients a_p that are obtained by dividing out these prime factors is shown in Table 1. Table 1. Coefficients of the motivic L-series L(M_{A±}, s + 1) of X^6_3. A perusal of various databases, starting with the list of Cremona of weight two forms [27], as well as the more extensive list of forms by Meyer [28], allows an identification of these L-series as arising from modular forms f_{A±}(q) = Σ_n a_n q^n, q = e^{2πiτ}, in the sense that their L-series agree with the L-series of the motives, L(f_{A±}, s) = L(M_{A±}, s + 1), where the shift in s implements the twist just discussed. Here the f_{A+} are modular forms at levels N = 432, 144, 108, and the f_{A−} are modular forms of weight two at levels N = 27, 36, 432, respectively. All these forms are relative to the Hecke congruence group Γ_0(N). The identification via the level N is sufficient for N = 27, 36 and N = 108, but is not unique when the space of rational forms at a given level has more than one dimension. This was taken into account by Cremona [27] by introducing further alphabetical counting labels, a strategy that was adopted in a slightly different way by the L-function and modular form database (LMFDB) [29]. This is of relevance in the present discussion at the levels N = 144, 432. In the former case the modular form has the Cremona label 144A(A) and the LMFDB designation 144.2.a.a. In the latter case the two forms f_{I+} and f_{III−} both belong to the same space and are distinguished by such labels. Forms of this type appeared earlier in [4] and [30], where they were shown to admit a worldsheet interpretation for elliptic compactifications. It was explained in [30] that these forms admit complex multiplication, a symmetry that imposes a certain sparseness of the nonvanishing coefficients a_p. All the forms determined in the above discussion have complex multiplication.
The modular forms identified above can be shown to arise from the genus ten and genus four curves that are embedded in the hypersurface X^6_3. Here the a_{A±} ∈ N describe the multiplicities of the motives, and the modular form f_Ω of the Ω-motive is a cusp form of weight four and level N = 108 relative to the Hecke congruence subgroup Γ_0(N), i.e. f_Ω ∈ S_4(Γ_0(108)). This shows that the cohomology of the degree six threefold hypersurface leads to a total of seven different modular forms f_Ω, f_{A±}. Combining the flux vacua analysis of [31] with the modularity analysis above thus provides further support for the flux vacua modularity conjecture of ref. [12]. A modular rank two attractor on X^6_3 A third arithmetic theme in string theory is provided by black hole attractors, initially emphasized in the number theoretic context by Moore [32], and further developed in the context of complex multiplication in ref. [33]. Recently, an extensive search for rank two attractors was reported in ref. [34]. It is in this context worth noting that the modularity of the Ω-motive M_Ω of the degree six hypersurface X^6_3, established in [6] and recalled briefly in the previous section, in terms of a weight four cusp form at level N = 108, i.e. f(M_Ω(X^6_3), q) ∈ S_4(Γ_0(108)), shows that the rank two attractor derived from the smooth degree six hypersurface is modular in the classical sense. The form f(M_Ω(X^6_3), q) of X^6_3 has complex multiplication, which provides a second way to recognize complex multiplication as a characteristic of at least some black hole attractors. This is independent of the approach in [33] via the Shioda-Katsura decomposition of the cohomology of curves embedded in weighted hypersurfaces. The Deligne conjecture, discussed in the context of black hole attractors in [33], has been proven for motives with complex multiplication [35], and hence leads to a relation between the periods of the variety and the L-function values of M_Ω. Since the black hole potential can be written in terms of the periods, it immediately follows from the work of [33] and [6] that the entropy can be expressed in terms of these L-function values. This was recently made explicit in a different example in [34] and is further discussed in [36]. Flux vacua for two-parameter hypersurfaces 3.4.1 The octic X^{8B}_3 ∈ P(1,1,2,2,2)[8] The resolved Brieskorn-Pham octic hypersurface has Hodge numbers h^{1,1} = 2 and h^{2,1} = 86, where both sectors receive contributions from the resolution of the singular set. This variety is a K3 fibration with a typical fiber in the configuration of quartic surfaces X^4_2 ∈ P_3[4], and it has been considered in the context of flux vacua in a number of papers, including [25,37,12]. In the latter reference this is the main configuration considered. Aspects of a string theoretic interpretation of geometrically induced modular forms of weight two were discussed for this variety in [38], where it was shown that this manifold leads to a weight two modular form at level N = 64. This modular form arises from the algebraic curve C_4 of degree four that defines the singular set embedded in this manifold. While this curve has genus three, its L-function factors, in the process leading to an elliptic curve of degree four, which is modular at the given level. This form was also considered in the paper by Kachru, Nally and Yang [12] as support for the modularity conjecture.
In the motivic framework considered in this paper the starting point is the Galois group of the field K_X, which in the present example has order four, Gal(K_X) = (Z/8Z)^× = {1, 3, 5, 7}. Hence the generic orbit has length four, which is in particular the case for the Ω-motive M_Ω. These rank four motives are not modular but are expected to be automorphic according to the Langlands conjecture. Having a Galois group whose order is larger than two in general does not prevent the existence of orbits of shorter length, leading to the possibility of rank two motives that can be modular. For the present case of the two-parameter octic, modular motives of rank two can be found, for example, at levels N = 32, 64 for the motives parametrized by α = (1/4, 1/4, 1/4, 1/2, 3/4) and (1/4, 1/4, 1/4, 1/4, 1/2), respectively, after implementing the Tate twist. The focus here has been on the Landau-Ginzburg point, geometrically given by the Brieskorn-Pham point in the configuration. However, the observation just made, that higher order Galois groups can lead to rank two motives, generalizes to the case of deformations of the variety away from the diagonal form. This can be seen in the motivic framework by considering the theory of deformed motives in ref. [9]. For the second prominent two-parameter configuration, of degree twelve, the order of the Galois group is φ(12) = 4, hence generically the motives will again be of rank four, much like in the case of the octic hypersurface considered above. Similar to the octic case there exist rank two motives for this variety as well. It turns out that most of these rank two motives have already been encountered above in the discussion of the degree six threefold X^6_3 even though the orders of the underlying Galois groups are different. As a result, their modularity follows immediately from the analysis of section 3.2. The remaining rank two motives of this manifold can either be recovered from the two-parameter model X^{8B}_3 considered above, or can be computed separately, leading to modular forms already encountered in our previous computations. On the reverse of the modularity conjecture A natural question in the context of the flux vacua modularity conjecture is whether its reverse also holds, an issue that was addressed in [12]. A positive answer to this question would be very interesting because it would imply that the existence of modular motives provides a diagnostic for the existence of flux vacua D_a W = 0 = D_τ W for which W = 0. This can be discussed by testing the existence of modular rank two motives in compactifications for which such W = 0 vacua do not exist. As noted above already, it has been known for quite some time that among the one-parameter hypersurfaces there are points in the moduli space for which the flux criticality conditions together with W = 0 cannot be solved [21]. An example of this type that was analyzed in [31,43,21,25] is the smooth octic hypersurface X^{8A}_3 ⊂ P(1,1,1,1,4)[8] at the Brieskorn-Pham point, or Landau-Ginzburg point. As noted above already, it was furthermore shown in [21,25] that among the smooth weighted CY hypersurfaces there is only one manifold that admits such supersymmetric flux vacua, namely the degree six hypersurface X^6_3 ∈ P(1,1,1,1,2)[6], analyzed in the previous section, while the remaining three do not. It was shown in the previous section that both the quintic and the degree ten hypersurface are not modular in the classical sense defined here, which leaves the smooth octic. The Galois group of the field K_X of this degree eight hypersurface is of order four, Gal(K_X) = (Z/8Z)^× = {1, 3, 5, 7}, and the first coefficients of the L-series of the resulting rank two motives M_± are a_p(M_±) = ±2, ∓6, +2, ±10, ±2, +10 (Table 2). Table 2.
L-series coefficients for the motives M_± of X^{8A}_3. The existing databases again allow one to identify these L-series as arising from weight two modular forms f_±(q), which shows that the motives M_± are modular after the Tate twist, L(f_±, s) = L(M_±, s + 1). Here the weight two modular forms f_± are at levels N = 64 and N = 32, respectively. At these levels the spaces of rational forms are one-dimensional, hence N determines these forms uniquely. These modular forms have appeared previously in the string theory analyses of [38] and [30]. The analysis here thus leads to the conclusion that the one-parameter octic Brieskorn-Pham hypersurface embedded in P(1,1,1,1,4) is modular in the sense of carrying rank two motives that are modular in the classical sense, while not admitting supersymmetric flux vacua. 4.2 The two-parameter hypersurface X^{18}_3 ∈ P(1,1,1,6,9)[18] The above analysis for X^{8A}_3 can be applied to other weighted hypersurfaces that have been considered in the context of the existence/nonexistence of W = 0 flux vacua. One example that was considered early on in many papers on flux vacua, and which has received recent attention, is the degree 18 hypersurface X^{18}_3 ∈ P(1,1,1,6,9)[18]; see e.g. [49,50,51,25,52,53,41,54,55,56,57]. This is an elliptic fibration with h^{1,1} = 2, h^{2,1} = 272, hence χ = −540, with a typical fiber in the configuration E_6 ∈ P(1,2,3)[6]. An analysis of DeWolfe [25] based on R symmetries first showed that there are no W = 0 flux vacua associated to this manifold. It is therefore again of interest to consider the motivic structure of this manifold. In this case the Galois group has order six,

Gal(K_{X^{18}_3}) = {1, 5, 7, 11, 13, 17},   (29)

and hence the rank of the motives will generically be six. This holds in particular for the Ω-motive, which therefore is expected to be automorphic, not modular. However, despite this larger Galois group this manifold does have rank two motives. As before, we consider the image of α-vectors under the Galois group. An example of a rank two motive M is given by a short orbit of one of these α-vectors. The comparison of this motive with the rank two motive M_{I+}(X^6_3) of the degree six hypersurface X^6_3 ∈ P(1,1,1,1,2)[6] considered above shows that M is identical to M_{I+} even though the Galois groups have different orders. It follows that the L-series has the same expansion as in the case of the motive M_{I+} of the degree six hypersurface, hence leads again to the level N = 432 modular form designated by the LMFDB as 432.2.a.e. There are further rank two motives that arise in this manifold and these can be analyzed in a similar way. 4.3 The three-parameter hypersurface X^{24}_3 ∈ P(1,1,2,8,12)[24] As a final example discussed here, consider the variety X^{24}_3 ∈ P(1,1,2,8,12)[24]. The configuration is iteratively structured in that it is not only an elliptic fibration with typical fiber in the configuration E_6 ∈ P(1,2,3)[6], but also a K3-fibration with typical fiber in the configuration X^{12}_2 ∈ P(1,1,4,6)[12]. Its flux vacuum structure was considered by DeWolfe [25] as a three-parameter model with Hodge numbers h^{1,1} = 3 and h^{2,1} = 243, leading to a Kähler structure that is more involved than the often considered two-parameter examples. Other work on this manifold in the context of flux compactifications includes [58,59]. The manifold X^{24}_3 has the highest degree d = 24 of the manifolds considered in this paper, leading to the largest Galois group of all examples. Since the totient function here is φ(24) = 8, a typical Galois orbit leads to a motive of rank eight.
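The short-orbit computations invoked above for the octic, degree 18 and degree 24 examples are mechanical and easy to script. The following minimal Python sketch (the helper name galois_orbit is ours) computes the orbit of an α-vector under the componentwise action α → gα mod 1 of the units (Z/dZ)^×, so that the motive rank can be read off as the orbit length; the two octic α-vectors quoted in section 3.4.1 both yield orbits of length two:

```python
from fractions import Fraction
from math import gcd

def galois_orbit(alpha, d):
    """Orbit of the rational vector alpha under g: alpha -> g*alpha mod 1,
    with g running over the units (Z/dZ)^x, as described in the text."""
    units = [g for g in range(1, d) if gcd(g, d) == 1]
    orbit = set()
    for g in units:
        orbit.add(tuple((g * a) % 1 for a in alpha))
    return orbit

d = 8  # degree of the two-parameter octic
for alpha in [
    (Fraction(1, 4), Fraction(1, 4), Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)),
    (Fraction(1, 4), Fraction(1, 4), Fraction(1, 4), Fraction(1, 4), Fraction(1, 2)),
]:
    orbit = galois_orbit(alpha, d)
    print(f"alpha = {tuple(map(str, alpha))}: motive rank = {len(orbit)}")
# Both orbits have length two, matching the rank two motives at levels N = 32, 64.
```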
It turns out that despite the large order of this group, the modular structure of this manifold is in part similar to that of the smooth degree six hypersurface X^6_3 because their motivic structures partially overlap. The modular analysis of section three thus immediately leads to the existence of modular rank two motives for the manifold X^{24}_3. Discussion In this paper the modular and automorphic structure of most of the Calabi-Yau manifolds considered in the context of flux compactifications has been established. The methods used here are direct and allow the computation of the necessary L-functions from the motive itself, rather than from the factorization of the full cohomology. These results have implications for the modularity conjecture for supersymmetric flux vacua formulated in [12]. Support for the conjecture has been provided above by establishing the existence or non-existence of modular rank two motives for all one-parameter smooth Calabi-Yau hypersurfaces of dimension three. While the smooth degree six and eight weighted hypersurfaces admit such modular motives, the quintic and degree ten hypersurfaces do not. A second issue is raised by the question whether rank two modularity of motives in the cohomology sector H^{2,1}(X) ⊕ H^{1,2}(X) can be used as an indicator for the existence of supersymmetric flux vacua. Since modularity is expected to be common, this would lead to the expectation that the existence of supersymmetric flux vacua is in some sense also generic. This was first addressed here by considering the one-parameter hypersurface of degree eight in the weighted projective space P(1,1,1,1,4)[8]. For this variety modular rank two motives can be constructed, thereby establishing that modularity is not a definitive diagnostic for the existence of flux vacua with vanishing superpotential. Other manifolds that have been considered in the flux vacua literature have a similar structure. Among these are the degree 18 configuration X^{18}_3 ∈ P(1,1,1,6,9)[18] and the degree 24 configuration X^{24}_3 ∈ P(1,1,2,8,12)[24], both of which were considered in the flux context in [25], where it was shown that these two spaces do not admit flux vacua with vanishing superpotential. The first of these is a prominent two-parameter model, while the latter has a three-dimensional Kähler sector. Both of these spaces have rank two motives that are modular, similar to the one-parameter octic. The results of this paper thus strengthen the modularity conjecture of ref. [12] but indicate that its reverse does not hold: the existence of modular rank two motives does not guarantee the existence of supersymmetric flux vacua with vanishing superpotential.
8,094.8
2020-03-02T00:00:00.000
[ "Physics" ]
Direct visualization of dispersed lipid bicontinuous cubic phases by cryo-electron tomography Bulk and dispersed cubic liquid crystalline phases (cubosomes), present in the body and in living cell membranes, are believed to play an essential role in biological phenomena. Moreover, their biocompatibility is attractive for nutrient or drug delivery system applications. Here the three-dimensional organization of dispersed cubic lipid self-assembled phases is fully revealed by cryo-electron tomography and compared with simulated structures. It is demonstrated that the interior is constituted of a perfect bicontinuous cubic phase, while the outside shows interlamellar attachments, which represent a transition state between the liquid crystalline interior phase and the outside vesicular structure. Therefore, compositional gradients within cubosomes are inferred, with a lipid bilayer separating at least one water channel set from the external aqueous phase. This is crucial to understand and enhance controlled release of target molecules and calls for a revision of postulated transport mechanisms from cubosomes to the aqueous phase. Amphiphilic lipids can self-assemble into a variety of structures [1][2][3]. In particular, the inverse bicontinuous cubic phases and their applications and role in nature have received significant attention [4][5][6][7]. These structures have been reported to occur spontaneously in mitochondrial membranes, in stressed or virally infected cells 5,8 and are believed to be essential for understanding vital mechanisms such as cell fusion 6,9 and food digestion 3,10. They are used for the growth of protein crystals to study their structure 11. In medicine and industrial applications, they can control the release of drugs and flavours 12, and have been shown to enhance the yield of Maillard reaction products 4. There is overwhelming evidence that the macroscopic properties of lipid structures depend on their fine structure. It is therefore of prime interest to develop methods for the accurate determination of those structures. The most commonly used technique for this purpose is small-angle X-ray scattering (SAXS). This method relies on constructive interferences, in reciprocal space, from a large number of ordered scattering planes, and therefore does not provide a straightforward visualization of the structure in direct space. Further limitations arise when materials are dispersed into water to form submicrometer particles. The scattered signal is then often restricted due to the small size of the objects, which leads to low coherence and limited information 13. In standard cryo-transmission electron microscopy (cryo-TEM), the signal results from the complete thickness of the vitrified sample, which limits the resolution and subsequent structural interpretations. To circumvent these limitations, we used cryo-electron tomography (CET) to unveil the three-dimensional (3D) organization at the nanometric scale of self-assembled structures formed by a dispersed phase composed of biologically and industrially relevant unsaturated monoglycerides. CET enables the reconstruction of 3D information in the native state and the investigation of large structures with unique topologies. In colloidal science and in biology, the development of CET gives access to the fine structure of biological features such as organelles at a high resolution of only a few nanometres [14][15][16][17][18].
Emulsification of monoglyceride/surfactant mixtures in water results in the formation of particles with an interior displaying liquid crystalline organization. We performed CET on those particles to demonstrate the presence of an internal bicontinuous cubic structure and to study the water-particle interface. Employing subtomogram averaging on the liquid crystalline region, we directly characterized the internal 3D organization and compared it with the prevailing mathematical model of the bicontinuous cubic structure, demonstrating that the particle interior is constituted by two continuous water channels separated by lipid bilayers. We then investigated the interface organization between dispersed particles and the water involved in particle stabilization, which has a strong influence on the release of active elements solubilized within the cubosomes. It was found that the transition between the particle structured core and outer vesicles is made possible by the presence of interlamellar attachments (ILAs). This work enables the structure of the particle interior to be unambiguously determined and forges a new understanding of the structural gradient within the particles. Results Cryo-electron microscopy of self-assembled structures. In view of applying CET to cubosomes, it is crucial to select compositions and dispersion conditions leading to well-ordered structures and to a lattice parameter as large as possible. We used an optimum amount of polyglycerol ester to tune these parameters (Supplementary Figs 1 and 2). Cryo-TEM images reveal that particles are internally ordered and have a diameter ranging from 100 to 500 nm (Fig. 1). They coexist with vesicles and attached vesicular structures (Fig. 1; Supplementary Fig. 3), as usually reported 13,19. As indicated by SAXS and cryo-TEM crystallographic analyses, the internally ordered particles have the space group symmetry Im3m and a lattice parameter of about 16 nm (Fig. 1; Supplementary Figs 1 and 4). Note the presence of a well-ordered structure (for example, red box) inside the particle and of a vesicular structure close to the interface with the water matrix. The inset shows the fast Fourier transform (FFT) of the red box area, which is used for the structure determination of the liquid crystalline particles, independently confirmed by SAXS analysis. Scale bar, 100 nm. In previous literature, the self-assembled structures of lipid mesophases, and in particular their bicontinuous nature, have been inferred from crystallographic arguments based on X-ray scattering, although not unequivocally. The independence of the two distinct water channel networks in cubic phases has been demonstrated before, but indirectly via the introduction of transport membrane proteins into the lipid bilayers 12. Stimuli-triggered opening of those protein pores linked the two independent water channel networks, which resulted in faster diffusion of ions and molecules. No direct observation of the water and lipid network of the bicontinuous phases has yet been produced. Bicontinuous cubic structure visualized by CET. Here we show how it has been possible to directly visualize the 3D networks by CET and we demonstrate unambiguously the presence of the bicontinuous cubic structure and of two independent water channels. Cubosomes in a size range of 100-300 nm were chosen for 3D reconstruction, since their low thickness leads to a high signal-to-noise ratio.
The sequences of images extracted from the tomogram in the z direction show holes belonging to the water channels (Fig. 2a). When progressing along the z direction, the periodic network is shifted by half of the diagonal of the cube (a√3/2, where a is the lattice parameter). This provides a direct and separate visualization of the two independent networks indicated by the red and blue boxes (first and second network). To reconstruct and visualize the native 3D structure of the crystalline part of the cubosome, image-processing techniques were used to increase the contrast and the resolution of this region. It was first determined that the liquid crystal has a periodicity of 16.8 nm (Supplementary Fig. 5) for the particular particle studied in Fig. 2. Then, the central region of the tomogram volume was selected to extract 150 boxes. Finally, we took advantage of the structure periodicity and symmetry to duplicate and rotate each box accordingly, which compensated for missing information in the axial orientation 20. An average map was created from all extracted boxes, as presented in Fig. 2b-e. This subtomogram averaging approach in direct space on a single cubosome differs fundamentally from the indirect space summation by photon interferences, which underpins scattering methods. The resulting subtomogram averaging reveals in unprecedented detail the 3D organization of the bicontinuous Im3m structure. The 'top' and 'side' views of the filtered 3D image clearly show how the channels are organized in two independent networks, indicated by the red and blue arrows (Fig. 2d,e). Moreover, one of the water channels is represented in Fig. 2c, giving insight into the localization and diffusion of potential solubilized hydrophilic molecules. Comparison between the mathematical and the CET reconstruction. The bilayer of the bicontinuous cubic phase forms infinite periodic minimal surface structures that can adopt the gyroid (G), diamond (D) and primitive (P) surfaces 21. We used here the concept of the periodic nodal surface, which for the inverted bicontinuous cubic structures leads to a structure very close to the corresponding periodic minimal surface 22. For the P surface, the equation of the periodic nodal surface is 23:

cos(2πx/a) + cos(2πy/a) + cos(2πz/a) = 0.   (1)

Figure 3a gives the 3D image of the P surface obtained using equation (1). A direct comparison in 3D between the average map obtained by electron tomography (Fig. 3d) and the 3D theoretical surface (Fig. 3a) is difficult and cannot be quantitative. Therefore, following an approach taken from condensed material sciences, we took slices from the experimental tomogram and decomposed them into a series of 20 images showing the progression along the z direction, describing one unit cell. These images can be compared with the curves generated by equation (1) at different fixed z values (Fig. 3b). The similarity between the series is excellent, thus demonstrating that the bicontinuous cubic structure really corresponds to the primitive surface. With CET coupled with subtomogram averaging, it has been possible to visualize the 3D structure of the unit cell and to compare it directly with the mathematical model. This confirms the bicontinuous character of the Im3m phase and its similarity with the P type. Interface between cubosomes and water. Having resolved the interior structure of a dispersed cubic phase, we focused on the structure of the interface. Numerous reports have been devoted to solving the interfacial structure between a cubic phase particle and water.
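Equation (1) makes the slice-by-slice comparison of Fig. 3 straightforward to reproduce: fixing z and contouring the zero level set of the nodal function over one unit cell generates the curve series against which the tomogram slices are matched. A minimal sketch, assuming matplotlib is an acceptable stand-in for the plotting actually used:

```python
import numpy as np
import matplotlib.pyplot as plt

a = 16.8  # lattice parameter in nm, taken from the particle analysed in Fig. 2

def p_nodal(x, y, z):
    """Periodic nodal approximation of the P minimal surface, equation (1)."""
    k = 2 * np.pi / a
    return np.cos(k * x) + np.cos(k * y) + np.cos(k * z)

# One unit cell in x and y; 20 slices along z, as in the comparison of Fig. 3b.
x, y = np.meshgrid(np.linspace(0, a, 200), np.linspace(0, a, 200))
fig, axes = plt.subplots(4, 5, figsize=(10, 8))
for ax, z in zip(axes.flat, np.linspace(0, a, 20, endpoint=False)):
    ax.contour(x, y, p_nodal(x, y, z), levels=[0.0])  # zero level set = surface
    ax.set_title(f"z = {z:.1f} nm", fontsize=7)
    ax.set_xticks([]); ax.set_yticks([])
plt.tight_layout()
plt.show()
```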
A first stabilization mechanism was proposed by Larsson 21 and was described in more detail later on by Anderson and Jacobs 24. Gustafsson et al. 19 suggested that the Anderson model (Supplementary Fig. 6) could be seen as an idealized model for cubosome stabilization, but could not explain why the surface of cubosomes is consistently observed to be crowded with disordered vesicular structures (Fig. 1). Later on, based on freeze fracture images, Angelov et al. 25 proposed another model, which again did not involve disordered vesicular structures, but in which particles were built from repeated elementary structures also present at the surface. In a later study, Boyd et al. 26, using scanning electron microscopy, deduced that the surface was compatible with the model proposed by Angelov et al. In the present work, CET makes it further possible to directly visualize the organization of the surface between the cubic phase particle and the surrounding water, and to study how lipids build structural order and complexity when going from the water/particle interface to the interior. We propose a new model for cubosome stabilization that CET enables us to validate. In the spatial transition from the external shell to the ordered core of the cubosome, we were able to identify structures that are similar to the transient intermediates reported in the fusion of lamellar membranes 27. In cubosomes, structural transitions from the lamellar to the reverse bicontinuous cubic phase (Fig. 4a), involving stalks and interlamellar attachments (ILAs) as intermediates, have so far been reported only as a function of time 27. ILAs were also reported to be present during the formation of sponge and discontinuous cubic phases (Q_L) 28 or in the bulk phase 29. An initial 'cis' (apposed) membrane contact between two bilayers first occurs, followed by the formation of a stalk 30. By contact of the 'trans' (non-apposed) monolayers of the original two bilayers in the stalk, a transmembrane contact is formed (Fig. 4a). A pore or ILA 9 appears by rupture of the transmembrane contact. As the number of ILAs per unit area increases when moving towards the particle centre, they organize in square lattices, yielding the final bicontinuous cubic phase without any open bilayer 29. Figure 4c shows an experimental tomogram slice passing through the core of the particle. It is clear that the structure gradually evolves from disordered vesicular-like structures on the outer surface to a highly ordered structure in the core. The simulated model of an ILA (Fig. 4b) indicates that ILAs should appear as circular holes when viewed from the top (Fig. 4bIII). Membrane undulations and fusion can be observed in the centre slice (Fig. 4c), although the visualization of ILAs has been difficult due both to the low resolution and to the missing wedge effect 31. The thickness of the vitreous layer is a parameter that contributes to the low contrast of the CET. For this reason, small particles or intermediate structures are more favourable for a direct visualization of the membrane fusion (Supplementary Fig. 7; Supplementary Movie 2). The volume rendering (Fig. 4d) shows that the structure of the object is elongated, preventing the visualization of details in the external shell of the cubosome. In slices (Fig. 4e) that are tangent to the cubosome surface (Fig. 4d), ILAs could be clearly visualized as small circles (orange arrows). Lipid bilayers that are not parallel to the slicing plane appear as thick black lines (black arrows).
Animations of the electron tomographic reconstruction indicate that the number and order of the ILAs increase when moving from the particle top (or bottom) to the centre (Fig. 4f; Supplementary Movie 3), in agreement with the proposed model. In Fig. 4c, membrane fusion events are observed but, owing to the missing wedge, the same ILAs cannot be observed in the side view; ILAs are instead observed from the bottom (top view, Fig. 4e). In Fig. 4g, the small thickness of the object allows ILAs to be seen along the side view, confirming their presence. Therefore, our results demonstrate that the stabilization of cubosomes involves the transition between lamellar liquid crystalline and cubic phases generated by membrane fusion of the lamellar structure and ILAs. Discussion In the literature, cryo-TEM experiments performed on cubosomes obtained with different stabilizers 13,32, different manufacturing processes 33,34, different lipids (phytantriol and phospholipids) 35,36 and different bicontinuous space groups 13 demonstrate the presence of disordered vesicular structures at the interface (Supplementary Figs 8-11). Supplementary Fig. 12 shows membrane fusion events for a dispersion in which the particles have a diamond structure (space group Pn3m). This dispersion was heated for 20 min at 120°C to obtain a more stable dispersion 37. Similar features are also observed for phospholipid dispersions 36. The model proposed by Anderson (Supplementary Fig. 6) can be seen as a particular case of the stabilization mechanism described in this study and as the simplest means to stabilize cubosomes. In particular, vesicular caps prevent the lipophilic part of the emulsifier from being in contact with water at the water-cubosome interface, and a regular distribution of ILAs at the interface ensures the transition to the bicontinuous cubic structure. The schematic of Supplementary Fig. 6, and the similarity between the schematic and the experimental image, strongly suggest that the Anderson model applies when no apparent lamellar structure is present. We can therefore conclude that the mechanism of stabilization proposed here (including the Anderson model) applies for different compositions and processes and is probably very general for lipids. For non-lipids, the situation may be completely different. In particular, in cubosomes obtained using block-copolymers, no lamellar structure is observed and both water channels are open to the outside 38. However, the physical/chemical rules for block-copolymer cubosomes are very different, since those are stable after removing water, which is not the case for lipid cubosomes. For lipids, the reason dictating the presence or absence of a large lamellar structure at the interface is not fully clear. It could be that, since the diamond bicontinuous cubic structure (symmetry Pn3m) is stabilized using a more lipophilic lipid, fewer vesicular structures could be present compared with the primitive one (Im3m). However, as discussed before, large amounts of lamellar structure at the interface are also observed for the Pn3m structure (Supplementary Fig. 12 and images in de Campo et al. 39). The nature of the stabilizer is probably more important than the space group. Proteins seem to produce fewer vesicular structures (Supplementary Fig. 9), while partially hydrolyzed lecithin increases them (Supplementary Fig. 8). Likely, proteins interact less with monoglyceride (or phytantriol) than partially hydrolyzed lecithin does.
In addition, it was reported that the amount of vesicular structure at the interface increases with the Pluronic amount 19. A further question is whether the stabilization mechanism and the structures described in the present study are at or close to equilibrium. Barauskas et al. 37 used heat treatment at 125°C to obtain a stable dispersed bicontinuous cubic phase at or close to equilibrium. In that case, as shown in Supplementary Fig. 12, a lamellar phase at the particle/water interface is present and membrane fusion events are evidenced. To confirm that for the composition used in the present work the stabilization mechanism corresponds to an equilibrium situation, we also performed heating at high temperature followed by cooling (Fig. 5; Supplementary Fig. 13). An extended amount of lamellar phase surrounds the cubic phase. In addition, with the knowledge gained from the CET analysis (Fig. 4; Supplementary Fig. 7), ILAs and membrane fusion events are easily identified (Fig. 5) in conventional cryo-TEM images. This indicates that the stabilization mechanism proposed here, involving ILAs, corresponds to situations at or close to equilibrium and confirms that its appearance is very general. In addition, de Campo et al. 39 showed that the internal structure of the bicontinuous cubic particles was at equilibrium, since the crystallographic information obtained from SAXS was identical to that of the non-dispersed phase and there was no change of structure after heating the dispersion up to 94°C and cooling back to room temperature. At 94°C, the particles adopt the reversed microemulsion structure. For the heating experiment performed in the present study, the fast Fourier transform analysis indicates a lattice parameter of the bicontinuous cubic internal structure in the range of 15-16 nm, in good agreement with that of the non-heated dispersion (about 16 nm), in accordance with the study of de Campo et al. Our work indicates that the internal structure has essentially no defects, which confirms the study of de Campo et al. 39, which, as mentioned before, shows that the SAXS pattern of a dispersion containing monoglycerides and Pluronic F127 is the same as that of the bulk phase containing only monoglyceride. This indicates that the particle interior contains little or no stabilizer. Landh 40 studied the ternary phase diagram Pluronic F127-unsaturated monoglyceride-water. A lamellar phase was observed for compositions containing almost equal amounts of Pluronic F127 and monoglyceride, strongly suggesting that the stabilizing layer is rich in Pluronic.

Figure 5 | Cubosomes obtained after heating at 100°C. Note that the resulting particles are surrounded by an extended amount of lamellar structure. Membrane fusion events (short black arrows) and ILAs (long red arrows) are also evidenced. Images were obtained after cubosomes, used in the present work, were heated at 100°C and cooled down to room temperature. Scale bar, 100 nm.

The high Pluronic content of the disordered vesicular structure external to the particle is also in agreement with the fact that the amount of vesicles increases with the content of Pluronic F127, as mentioned earlier. From all the above findings and the new experiments carried out here, the structure discussed in the present work is deduced to be at thermodynamic equilibrium. The cubosome interface reported here is very different from the ones previously reported using electron microscopy imaging. Angelov et al. 25 and Boyd et al.
26 proposed that the particles are composed of repeating units also present at the interface, with no ILAs or disordered vesicular structures at the interface included. With this model, no composition gradient across cubosomes is expected and only discrete sizes and forms of cubosomes are present, which is not the case for our model. Angelov et al. used freeze fracture, where the interpretation of the fracture is not always straightforward and where only a cut through the object is visible. Boyd et al. used sublimation, which removes water and may affect the particle surface structure. Our work illustrates the power of CET, where the full object is imaged in its native state in the presence of water, allowing 3D reconstruction at high resolution. In our model, one water channel is isolated from the outside aqueous matrix by a bilayer, as is the case in the Anderson model (Supplementary Fig. 6). If complex vesicular structures are formed at the interface, which represents the majority of the cases, water release from the second channel is restricted to a limited number of locations or is suppressed when a lipid bilayer completely surrounds the cubosome. This situation favours sustained release of solubilized hydrophilic molecules (that is, it decreases the effective release rate). It was recently found that the diffusion coefficients of glucose and proflavine in the lamellar phase are between 10 and 100 times lower than those of the same molecules when solubilized in the inverted bicontinuous cubic phase 41. This also calls for a re-examination of previous models for the release of hydrophilic drugs from cubosomes, which assumed direct molecule transfer from the inner water channels to the external water without bilayer crossing. The heating and cooling experiments favour the presence of a thick lamellar phase at the particle-water interface (Fig. 5; Supplementary Fig. 13) compared with the initial dispersion (Fig. 1; Supplementary Fig. 3). This offers a way to increase the thickness of the lamellar structure. In addition, more stabilizers could be used to achieve this task. Furthermore, in the future, diffusion could be further limited by inserting other molecules (for example, cholesterol) into the vesicular structure. In summary, we use CET to resolve directly the structure of lipid cubosomes. Not only have our results conclusively validated the bicontinuous structure of cubosomes, previously inferred from indirect reciprocal space techniques (X-ray scattering) or molecular transport measurements such as diffusion, conductivity and self-diffusion NMR, but they also demonstrate that the mechanism of cubosome stabilization involves membrane fusions and ILAs present in the transition between the infinite periodic minimal surface and lamellar structures, giving insight into the structural variation within the particles. These results advance our understanding of these systems and pave the way to a rational understanding of the biological phenomena associated with reversed bicontinuous cubic phases and of molecular transport properties in these fascinating internally structured colloids. Methods Cubosome preparation. Pluronic F127 (0.075 g, Sigma) was dispersed into 19 g of milliQ water. Glycerol monolinoleate (0.69375 g; emulsifier TSPH039 from Danisco, which has a purity of 93.8% monoglyceride and contains 4.1% diglycerides; the fatty acids are composed of 91.8% linoleate, 6.8% oleate and about 1% saturated fatty acids) was mixed with 0.23125 g of polyglycerol ester (PGE 080D from Danisco) to form the lipid mixture.
The lipid mixture was added to the Pluronic aqueous solution. Ultrasonication was applied at cycle 1 and 70% intensity for 5 min with a probe sonicator (Ultraschallprozessor 400, Hielscher). Small-angle X-ray scattering. Laboratory SAXS measurements were performed with a MicroMax-002+ microfocused X-ray machine (Rigaku), operating at 4 kW, 45 kV and 0.88 mA. The Kα X-ray radiation of wavelength λ = 1.5418 Å emitted at the Cu anode is collimated through three pinholes of respective sizes 0.4, 0.3 and 0.8 mm. The scattered intensity was collected on a two-dimensional Triton-200 X-ray detector (20-cm diameter, 200-µm resolution) for 16 h. The scattering wave vector is defined as q = 4πsin(θ)/λ, where 2θ is the scattering angle. The sample chamber used gives access to a q range of 0.01-0.44 Å⁻¹. Silver behenate was used for q-vector calibration. Scattered intensity data were azimuthally averaged using the SAXSgui software (Rigaku). Dispersion samples were filled into 1.5-mm diameter quartz capillaries, sealed with epoxy glue (UHU). The X-ray machine is thermostated at 22.0 ± 0.5°C, taken as room temperature. Cryo-electron microscopy and 3D tomography. A 5-µl sample was deposited onto a lacey copper grid; the excess was blotted with a filter paper, and the grid was plunged into a liquid ethane bath cooled with liquid nitrogen inside a climate chamber (Vitrobot Mark IV, FEI, Eindhoven). The climate chamber temperature was 22.5 (± 0.5)°C and the relative humidity was kept close to saturation (100%) to prevent evaporation of the sample during preparation. Specimens were maintained at a temperature of −170°C using a cryo holder 626 (Gatan) and were observed with a JEOL 2200FS electron microscope operating at 200 kV, equipped with an omega energy filter, at a nominal magnification of 40,000 under low-dose conditions. Images were recorded with a 2 × 2k slow-scan charge-coupled device camera (Gatan). For 3D tomography, tilt series were collected automatically from −60° to +60° with 2° angular increment using the Recorder (JEOL) software. Images were recorded on the charge-coupled device camera at a defocus level between −2 and −3 µm. For image processing, using colloidal gold particles as fiducial markers, the two-dimensional projection images, binned by a factor of one, were aligned with the IMOD software 42, and tomographic reconstructions were then calculated by TomoJ 43. Subtomogram averaging. A total of 150 subtomograms of dimension 200 × 200 × 200 pixels (pixel = 0.3 nm) were boxed inside the cubosome and extracted using Bsoft 44. These subtomograms were aligned in 3D using Spider. To compensate for the missing wedge, and based on the internal symmetry of the cubosome, we then duplicated all extracted boxes and rotated them by 90°. After several iterations of averaging (Supplementary Fig. 14) using the output as a new reference, we obtained a final map and visualized it using ImageJ 45 and the 3D representation using UCSF Chimera 46. Heating the dispersion at 100°C. A digital heat block (VWR International, model number 949307) was set up at 100°C. A measure of 10 ml of the dispersion was introduced into Pyrex culture tubes (100 × 26 mm). Once the heating block temperature reached 100°C, the Pyrex tube was introduced into the heating block and heated for 40 min. After this time, heating was switched off. After 1 h, the dispersion was removed from the heating block.
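To make the q-vector definition above concrete, the short script below (illustrative only) converts scattering angles to q and q to real-space repeat distances d = 2π/q, reproducing the accessible size window of the instrument. The Pn3m-type indexing in the final comment is an assumption for illustration, not a claim about this sample's space group.

```python
import numpy as np

wavelength = 1.5418  # Å, Cu K-alpha, as in the SAXS setup above

def q_from_two_theta(two_theta_deg):
    """Scattering vector q (1/Å) from the scattering angle 2-theta."""
    theta = np.radians(two_theta_deg) / 2.0
    return 4.0 * np.pi * np.sin(theta) / wavelength

def d_spacing(q):
    """Real-space repeat distance (Å) for a given q."""
    return 2.0 * np.pi / q

# The quoted q range of 0.01-0.44 1/Å corresponds to repeat distances
# from roughly 630 Å down to about 14 Å:
for q in (0.01, 0.44):
    print(f"q = {q:.2f} 1/Å  ->  d = {d_spacing(q):.0f} Å")

# A 16 nm (160 Å) cubic lattice parameter a gives reflections at
# q = 2*pi*sqrt(h^2 + k^2 + l^2) / a; e.g., a (110) reflection of a
# Pn3m-type phase would fall near q ≈ 0.056 1/Å, well inside the window.
print(q_from_two_theta(1.0))  # q at 2θ = 1° is ≈ 0.071 1/Å
```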
High-Resolution, High-Throughput, Positive-Tone Patterning of Poly(ethylene glycol) by Helium Beam Exposure through Stencil Masks In this work, a collimated helium beam was used to activate a thiol-poly(ethylene glycol) (SH-PEG) monolayer on gold to selectively capture proteins in the exposed regions. Protein patterns were formed at high throughput by exposing a stencil mask placed in proximity to the PEG-coated surface to a broad beam of helium particles, followed by incubation in a protein solution. Attenuated Total Reflectance–Fourier Transform Infrared Spectroscopy (ATR–FTIR) spectra showed that SH-PEG molecules remain attached to gold after exposure to beam doses of 1.5–60 µC/cm2 and incubation in PBS buffer for one hour, as evidenced by the presence of characteristic ether and methoxy peaks at 1120 cm−1 and 2870 cm−1, respectively. X-ray Photoelectron Spectroscopy (XPS) spectra showed that increasing beam doses destroy ether (C–O) bonds in PEG molecules, as evidenced by the decrease in the carbon C1s peak at 286.6 eV and the increased alkyl (C–C) signal at 284.6 eV. XPS spectra also demonstrated protein capture on beam-exposed PEG regions through the appearance of a nitrogen N1s peak at 400 eV and a carbon C1s peak at 288 eV binding energies, while the unexposed PEG areas remained protein-free. The characteristic activities of avidin and horseradish peroxidase were preserved after attachment on beam-exposed regions. Protein patterns created using a 35 µm mesh mask were visualized by localized formation of insoluble diformazan precipitates by alkaline phosphatase conversion of its substrate bromochloroindolyl phosphate-nitroblue tetrazolium (BCIP-NBT) and by avidin binding of biotinylated antibodies conjugated on 100 nm gold nanoparticles (AuNP). Patterns created using a mask with smaller 300 nm openings were detected by specific binding of 40 nm AuNP probes and by localized HRP-mediated deposition of silver nanoparticles. Corresponding BSA-passivated negative controls showed very few bound AuNP probes and little to no enzymatic formation of diformazan precipitates or silver nanoparticles. Introduction Surface patterning of biomolecules is important in the study of cell adhesion, in tissue engineering, and in the development of diagnostics and biomedical assays such as protein nanoarrays [1,2,3,4]. Controlled attachment of biomolecules can be achieved by approaches generally categorized as "top-down" (e.g., photolithography, focused ion beam lithography, electron beam lithography, nanografting) or "bottom-up" (e.g., self-assembly of monolayers, dip-pen nanolithography, micro/nano contact printing or stamping), or combinations of these techniques. Top-down techniques manipulate an instrument to modify the bulk material to create patterns. Photolithography is a high-throughput lithographic process, but its resolution is diffraction-limited below the micron scale, and it is expensive to use for low-volume manufacturing. Electron beam and focused ion beam lithography have very high resolution but very low throughput. Bottom-up techniques take advantage of the spontaneous organization of molecules to produce a more complex and functional patterned material. These methods can allow patterning below the current resolution of lithographic techniques, but often involve high costs (for large areas) and imperfect patterning. Current approaches to biomolecule patterning, therefore, often employ combinations of top-down and bottom-up techniques [4,5,6].
Several approaches have been taken to biomolecule, specifically protein, patterning on surfaces. Utilizing self-assembly and electron-beam lithography, Geyer et al. [7] and Golzhauser et al. [8] have demonstrated the use of nitro-terminated aromatic thiols assembled on gold surfaces to create surface patterns of chemical functionality. Upon exposure to the electron beam, the nitro groups are converted to amines while the underlying aromatic groups are dehydrogenated and cross-linked. The amine groups were then used to attach molecules (carboxylic acid anhydrides and rhodamine dyes) to the exposed regions of the surface. Perhaps the most popular approach, however, involves the use of poly(ethylene glycol) (PEG) [9,10,11], which is well known for its resistance to non-specific adsorption of proteins through a combination of entropic (steric stabilization) and enthalpic (hydrogen bonding) mechanisms [12,13]. Entropic passivation is favored by longer chains, which can assume a greater variety of configurations [13,14,15], but even smaller ethylene oxide chains of 3-6 monomer units can resist protein adsorption as long as the molecular conformation is helical or amorphous, favoring a more stable interfacial layer of tightly bound water [12,13,15,16,17,18]. Electron beam lithography is commonly used to create protein patterns on PEG surfaces. Hong et al. [19] observed that proteins selectively attach to electron beam-exposed regions of amine-terminated PEG spin-coated on silicon. Patterns consisting of multiple proteins also can be formed by introducing PEG bearing different functionalities (e.g., biotin, maleimide, aminooxy or metal chelate) that are patterned over several exposures [20]. Harnett et al. [21] used low-energy electron beam exposure to destroy the amine functionality in selected regions of an amine-functionalized silane-PEG on silicon surfaces, allowing proteins to attach only on the unexposed regions (negative-tone patterning). Inert silane-PEG on silicon and thiol-PEG on gold have also been used to create a protein-resistant surface, with electron beam exposure used to remove the SAM on exposed regions and protein-reactive PEGs used to backfill the exposed regions (positive-tone patterning) [22,23]. In this study, we explored the use of helium beam proximity lithography to create micron and nanoscale patterns of proteins on grafted PEG in a high-throughput manner. Unlike focused ion beam lithography techniques, proximity printing forms its patterns by exposing a thin membrane with etched openings corresponding to the desired pattern (a "stencil mask") to a broad beam of helium particles, which are either stopped in the opaque regions of the mask or pass through the openings. In this way, the pattern can be formed very quickly and without the need for expensive ion optics. After exposure to the beam, protein patterns were detected and visualized using three different methods chosen to be representative of common approaches: (1) formation of localized diformazan precipitates by patterned alkaline phosphatase (AP) from its substrate BCIP-NBT [24,25,26], (2) binding of gold nanoparticle probes (40 nm and 100 nm gold particles conjugated with antibodies) by patterned antibodies and avidin [27,28,29], and (3) localized redox formation of silver deposits mediated by patterned horseradish peroxidase (HRP) [30,31,32,33].
Preparation of assembled thiol-polyethylene glycol on gold Gold surfaces were prepared by thermally evaporating 100 nm gold (with a 20 nm nickel-chromium adhesion layer) on 4-inch silicon wafers. Before any further treatment, these gold surfaces were cleaved (ca. 4 cm2) and cleaned by dipping into anhydrous ethanol for at least 2 minutes, then thoroughly rinsed with deionized water (18 MΩ), and finally dried with a stream of compressed nitrogen. The clean surfaces were then immersed in a 1 mM solution of SH-PEG in 90% ethanol and allowed to incubate overnight (at least 18 hours) at room temperature. Afterwards, the surfaces were washed thrice with DI water and dried with compressed nitrogen. Helium beam exposure of PEG and surface characterization The helium beam is generated in a saddle-field ion source that is based on the designs of Hogg [35] and Franks [36]. In this source, a plasma is ignited at low pressure when electrons are trapped along long oscillatory paths through a saddle-point in the electric potential distribution, and ions escape through a small aperture machined into one end of the source. As the ions escape, a fraction is neutralized through charge-exchange collisions with the neutral helium gas ambient, and a beam of mixed atoms and ions drifts along the length of a vacuum beam line to an exposure chamber located about 1.5 meters from the source [37]. To form patterns, a stencil mask is held in proximity to the wafer by a fixture, and the wafer can be moved below the mask using an in-vacuum x-y stage to allow for step-and-repeat patterning of large surfaces. The exposure time is controlled through a computer that actuates a mechanical shutter. Patterns of beam-modified PEG (and, subsequently, proteins) on the gold surfaces were formed by casting shadows using a stencil mask in proximity to the substrate (see Figure 1). The mixed ion and atom flux was equivalent to a helium ion beam current density of about 70 nA/cm2 with a beam energy of about 6.5 keV (for a source power supply setting of 10 kV at 1 mA), and three doses were tested: 30, 150 and 600 seconds (approximately 2, 10 and 45 µC/cm2, respectively). After exposure, samples were stored dry at 4°C until use. Surfaces were analyzed by Attenuated Total Reflectance-Fourier Transform Infrared Spectroscopy, ATR-FTIR (Nicolet 4700 FT-IR, Thermo Scientific), before and after beam exposure, and after PBS buffer incubation. In addition, surfaces before and after incubation with protein (15 µg/mL avidin) were analyzed by X-ray Photoelectron Spectroscopy (Physical Electronics model 5700 XPS instrument) using a monochromatic Al-Kα X-ray source (1486.6 eV) operated at 350 W. The analyzed area, collection solid cone and take-off angle were set at 0.8 mm diameter, 5° and 45°, respectively. Applying a low pass energy of 11.75 eV resulted in an energy resolution of better than 0.51 eV. All spectra were acquired at room temperature under a vacuum of 5 × 10⁻⁹ torr or better. A survey scan was first performed to determine the major elements present, followed by element-specific scans of at least 15 minutes per scan. Data processing was carried out using the Multipak™ software package (Physical Electronics, Inc.). A Shirley background [38] subtraction routine was applied.
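The exposure doses quoted above follow directly from the beam current density and exposure time (dose = J·t); a two-line check, for orientation:

```python
# Exposure dose from the equivalent beam current density:
# dose (C/cm^2) = J (A/cm^2) * t (s).
J = 70e-9  # A/cm^2, the value quoted in the exposure subsection above
for t in (30, 150, 600):
    dose_uC = J * t * 1e6  # convert C/cm^2 -> uC/cm^2
    print(f"{t:4d} s  ->  {dose_uC:4.1f} uC/cm^2")
# 30 s -> 2.1, 150 s -> 10.5, 600 s -> 42 uC/cm^2, matching the
# "approximately 2, 10 and 45" values within rounding.
```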
Protein patterning on PEG After helium beam exposure, surfaces were further cleaved into smaller pieces (ca. 25 mm2) to fit into micro-centrifuge tubes. Samples used for patterning were incubated with 15 µg/mL of protein solution (either avidin, streptavidin-polyHRP, goat anti-mouse antibodies or goat anti-rabbit antibodies) in PBS buffer for 1 hour at room temperature, while negative control surfaces were incubated with protein-free PBS buffer. Before pattern detection, all surfaces were immersed in 4% BSA in PBS for 1 hour to further passivate the back and edges of the gold-coated silicon chips, as well as the walls of the micro-centrifuge tube. All surfaces were washed at least three times with PBS (except where specified) between reagents. For pattern detection by formation of alkaline phosphatase diformazan precipitate, surfaces were incubated on an orbital shaker in a solution of 2 µg/mL of biotinylated enzyme (with ca. 3 biotin molecules per enzyme molecule, as assessed by HABA assay [39]) in 100 mM diethanolamine, 100 mM NaCl, 5 mM MgCl2, pH 9.2 (DEA buffer), for 1 hour at room temperature, then washed with DEA buffer.
Figure 3. Elemental XPS spectra of beam-protected (unexposed) and irradiated (exposed at different doses) PEG before and after protein incubation. Before ("unexp") and after ("exp") helium beam exposure, carbon C1s signals (A) show characteristic alkyl and ether peaks at 284.6 eV and 286.6 eV binding energies, respectively. The presence of oxygen O1s signals (B) at 532 eV and the absence of nitrogen N1s signals (C) at 400 eV were also observed. Subsequent incubation with avidin shows an additional C1s peak at 288 eV (D), similar O1s signals at 532 eV (E) and the existence of an N1s peak at 400 eV (F) for beam-exposed PEG. doi:10.1371/journal.pone.0056835.g003
BCIP-NBT (0.15 mg/mL BCIP and 0.30 mg/mL NBT) substrate was then added and allowed to react for 10 minutes. The surfaces were then washed with water, dried with compressed nitrogen, and imaged using an optical microscope. For pattern testing by gold nanoparticle probes, 300 µL of a suspension of ca. 10⁹ gold nanoparticles/mL conjugated with antibodies (100 nm particles with biotinylated goat anti-rabbit polyclonal antibodies, or 40 nm particles with goat anti-mouse antibodies) in PBS buffer was added to the surface and allowed to incubate for at least 12 hours at room temperature, with mixing on an orbital shaker. For HRP-mediated silver staining, surfaces were incubated with streptavidin-polyHRP conjugates (20 µL spot, 10 µg/mL) for at least one hour at room temperature, and then washed thrice with PBS and twice with DI water. Surfaces were then silver-stained using the EnzMet™ staining solution, with 2 minutes incubation time each for the manufacturer's reagents Detect A, B and C (20 µL each). Finally, surfaces were washed thrice with water, dried with compressed nitrogen and imaged by scanning electron microscopy (Zeiss/LEO 1525 Field Emission SEM). Results and Discussion As shown in Figure 2, ATR spectra confirmed the presence of SH-PEG assembled on gold surfaces before helium beam exposure, after beam exposure and after post-exposure incubation in PBS buffer for one hour. Peaks near 2870 cm−1 and 1120 cm−1 were observed, corresponding to the C-H stretch of methoxy and the C-O stretch of ether groups, respectively [40]. We found that our MW 5000 SH-PEG monolayers at least partially survived 6.5 kV helium beam exposures with doses of 1.5-60 µC/cm2.
It is notable that previous researchers [22,23] found that monolayers of smaller PEG molecules (MW: 290 and 750) were damaged by doses of 5-80 µC/cm2 and completely removed by doses over 160 µC/cm2 when using 1 kV electrons. The radiation chemistry of electron and ion/atom beams on PEG and alkanethiols would be expected to be very similar, mainly involving hydrogen abstraction from the PEG hydrocarbons [41] and dissociation of C-H, C-C, C-S and substrate-SH bonds [23,34]. Hydrogen abstraction and bond dissociation form radicals, which eventually cause cross-linking within the polymer [42,43], affect hydrogen bonding and disrupt the highly organized interface between water and PEG. Our results are consistent with a similar effect of helium beam exposure. Cross-linking of PEG was observed on helium beam-exposed surfaces, as evidenced by the water insolubility of exposed spin-coated thiol-PEG on silicon (unexposed thiol-PEG film on silicon is soluble). Furthermore, disruption of PEG chains was confirmed by XPS, as shown in Figure 3. As presented in Figure 3A, prominent carbon (C1s) peaks were observed at 284.6 eV and 286.6 eV binding energies, which correspond to alkyl and ether C1s bonding states, respectively [44,45]. As the helium beam dose was increased, the C1s ether peak at 286.6 eV decreased while the alkyl peak at 284.6 eV increased, indicating the destruction of ether bonds and the formation of more alkyl bonds in the PEG molecules. A decrease in the oxygen (O1s) signal (532 eV binding energy) also was observed as the beam dose was increased. Helium beam exposure renders the grafted PEG on the surface less hydrophilic through a decrease in C-O ether and an increase in C-C alkyl functionalities, and thus potentially more amenable to protein adsorption. There also is literature evidence that electron [9] or ion (argon) beam [10,11] exposure can create carbonyl functionalities (which could be charged carboxylates or aldehydes reactive toward protein amines) in PEG samples. A relatively small number of these functional groups could alter the local protein-capture properties of the PEG surface, while being difficult to detect against the background of the much more numerous ether and alkyl functionalities. Elemental XPS spectra of exposed and unexposed samples incubated with avidin are shown in Figures 3D, 3E and 3F for carbon (C1s), oxygen (O1s) and nitrogen (N1s) signals, respectively. In Figure 3D, aside from the observed decrease in the C1s peak at 286.6 eV and increase in the C1s peak at 284.6 eV discussed previously, the C1s peak at 288 eV, a characteristic binding energy of amide C1s [46], appeared as the helium beam dose was increased, confirming that avidin attaches to the beam-exposed surfaces. This is further supported by the increase in the nitrogen N1s signal (400 eV) with increasing beam dose, as shown in Figure 3F. Unlike in Figure 3B, the differences in oxygen signal intensities in Figure 3E were not distinct, particularly for the unexposed and exposed (30 sec and 150 sec) samples. This might be due to the added attenuation length for photoelectrons in XPS provided by the avidin layer (there is little nonspecific adsorption of avidin on unexposed PEG, as shown by the nitrogen signal in Figure 3F). The oxygen signal intensities for the unexposed and exposed (30 sec and 150 sec) PEG samples are similar because the avidin layer somewhat overshadows the small oxygen signal differences observed in Figure 3B. However, since more avidin molecules were found to attach on the exposed (600 sec) sample, its oxygen signal is distinctly different from that of the other samples.
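The peak comparisons above rest on a Shirley background subtraction (see Methods). As a rough illustration of that step, the sketch below implements the standard iterative Shirley algorithm and integrates the two C1s components of a synthetic spectrum; the spectrum, peak widths and integration windows are invented for illustration, and this is not the authors' processing code (they used the Multipak package).

```python
import numpy as np

def shirley_background(energy, intensity, max_iter=50, tol=1e-6):
    """Iterative Shirley background for an XPS region. `energy` is an
    ascending binding-energy grid (eV); the first and last intensity
    values are used as the background anchor points."""
    I = np.asarray(intensity, dtype=float)
    B = np.full_like(I, I[0])
    i_lo, i_hi = I[0], I[-1]
    for _ in range(max_iter):
        peak = I - B
        # Trapezoidal cumulative area of the background-subtracted signal
        # from the low-binding-energy end up to each grid point.
        cum = np.concatenate(([0.0], np.cumsum(
            0.5 * (peak[1:] + peak[:-1]) * np.diff(energy))))
        B_new = i_lo + (i_hi - i_lo) * cum / cum[-1]
        if np.max(np.abs(B_new - B)) < tol * np.max(I):
            return B_new
        B = B_new
    return B

# Hypothetical C1s region: alkyl (284.6 eV) and ether (286.6 eV) Gaussians
# on an inelastic step rising toward high binding energy.
E = np.linspace(282.0, 292.0, 501)
spec = (100.0 * np.exp(-0.5 * ((E - 284.6) / 0.6) ** 2)
        + 60.0 * np.exp(-0.5 * ((E - 286.6) / 0.6) ** 2)
        + 20.0 / (1.0 + np.exp(-(E - 285.5))))
signal = spec - shirley_background(E, spec)
dE = E[1] - E[0]
for name, center in (("alkyl", 284.6), ("ether", 286.6)):
    window = np.abs(E - center) < 1.0  # crude window; real work fits peaks
    print(f"{name}: area ~ {signal[window].sum() * dE:.1f}")
```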
Among the helium beam doses tested, we chose to use the 150 second exposure time for further work as a balance between protein attachment effectiveness and processing throughput. Patterning of proteins using a 35 µm mesh is presented in Figures 4 and 5, wherein attachment of proteins on beam-exposed regions is indicated by enzymatic formation of localized diformazan precipitates and also by binding of gold nanoparticle probes. As shown in Figure 4, the helium beam was able to pattern avidin over a large surface area (25 mm2, with 2.3 mm2 shown in Figure 4).
Figure 5. Gold nanoparticle probes on protein-PEG patterns (35 µm mesh). Images in the left column show the schematic diagram (not to scale) of helium beam-patterned PEG incubated with avidin (A), polyclonal anti-rabbit antibodies (B), or BSA (C), followed by the addition of 100 nm gold nanoparticles conjugated with biotinylated rabbit antibodies. Electron microscope images in the right column show the protected and irradiated PEG regions after incubation with gold nanoparticle probes. doi:10.1371/journal.pone.0056835.g005
Figure 6. Gold nanoparticle probes and silver nanoparticle deposition on protein-PEG patterns (300 nm mask). Electron microscope images in the left column (with zoomed-in pictures as insets) correspond to helium beam-patterned PEG incubated with polyclonal anti-mouse antibodies (A), streptavidin-polyHRP conjugates (B), or biotinylated lysozyme (C), while images on the right display beam-patterned PEG surfaces incubated with PBS buffer (no proteins); all samples were then passivated with BSA. Patterns were visualized by binding of 40 nm gold nanoparticles conjugated with D1.3 mouse antibodies (A), or by silver staining through HRP conjugates (B). Captured biotinylated lysozyme (C) was detected by addition of streptavidin-polyHRP conjugates and silver staining. doi:10.1371/journal.pone.0056835.g006
The darkened regions indicate the formation of diformazan precipitates by captured biotinylated alkaline phosphatase via dephosphorylation and reduction of the substrate BCIP-NBT. These regions signify the successful attachment of avidin in active form on beam-exposed PEG. Similar results were obtained using 100 nm gold nanoparticles, as shown in Figure 5A. In Figure 5B, where the surface was incubated with polyclonal anti-rabbit antibodies after beam exposure, gold nanoparticle probes conjugated with biotinylated rabbit antibodies were shown to bind to the irradiated PEG regions. BSA control surfaces, by contrast, showed little to no formation of diformazan precipitate (Figure 4) and bound very few gold nanoparticles (Figure 5C). To demonstrate that the helium beam could create high-resolution nanoscale protein patterns on PEG surfaces, a smaller mask with 300 nm openings at 1 µm spacing was used. As shown in Figure 6A, gold nanoparticles conjugated with monoclonal mouse antibodies were observed to bind to the 300 nm irradiated PEG regions incubated with polyclonal anti-mouse antibodies, while very few bound nanoparticles were seen on a BSA control surface. Similar results were obtained by silver staining, as shown in Figures 6B and 6C.
Silver particles of approximately 300 nm diameter were formed on the beam-exposed areas, both on surfaces incubated with streptavidin-HRP and then silver-stained with EnzMet™ solution (Figure 6B) and on surfaces incubated with biotinylated lysozyme, followed by streptavidin-HRP conjugate and silver staining (Figure 6C). These results showed that helium beam exposure through stencil masks can be used to activate PEG and create protein patterns on the order of hundreds of nanometers. Corresponding negative controls showed very few bound gold nanoparticles and little to no formation of silver nanoparticles, as BSA was able to effectively passivate the beam-exposed regions. Summary We have demonstrated that helium beam exposure through a stencil mask in proximity to the substrate can be used for the massively parallel creation of micro- and nano-scale protein patterns on PEG-grafted surfaces. Proteins captured on irradiated PEG regions were shown to retain their functionality, as the patterned avidin could bind biotinylated molecules and patterned HRP conjugates were able to produce silver nanoparticles. Protein attachment on helium beam-exposed PEG may be due to hydrophobic interactions, electrostatic interactions, formation of reactive oxidized products, or a combination of these mechanisms. Further studies will be needed to establish the governing mechanism(s) and also to maximize processing throughput.
Exposing students to a simulation of the online platform used by the South African revenue service Purpose – Students completing their tertiary education at a university may be equipped with theoretical knowledge with little to no practical experience. In order to bridge this gap in practical skills, a computer simulation was developed based on the e-filing platform of the South African Revenue Service (SARS). Students were exposed to this self-developed computer simulation to answer the question: to what extent will the e-filing simulation improve students' confidence to practically apply their theoretical knowledge? Design/methodology/approach – The research applied a pre-post questionnaire research method to gauge the students' ability to apply their theoretical knowledge to a practical scenario before and after the simulation. Findings – From the results, it is apparent that the students were inspired with confidence in getting to terms with the application of their theoretical knowledge in a real-life scenario. The computer simulation provided the platform for learning to take place in a practical environment without the risk of errors that would translate into real financial consequences. Originality/value – The contribution of this research can be found in a teaching intervention that may support the training of future tax professionals in practical application skills. The contribution can be extended to the enhancement of education in the field of taxation, particularly with the results' showing that the students experienced high levels of increased confidence in their application of theoretical knowledge to real-life scenarios. Introduction As the world is currently within the grasp of the Fourth Industrial Revolution, the use of the digital space is of vital importance. For this reason, the learning environment, where students prepare themselves for the real world, must adapt accordingly. For this study, an alternative experience of real-world interaction was identified as a possible solution and presented to students in the form of a computer simulation teaching intervention. A computer simulation can be defined as a representation of reality in a digital environment, where the user manipulates data in order to learn principles, procedures, concepts and values (Barton and Maharg, 2007; Gibson et al., 2006). In order to support learning in a socio-economic system, the taxonomy of computer simulations (Maier and Grobler, 2000) classifies the current research as a modelling-oriented simulation and, more specifically, as a feedback-oriented continuous simulation (see Figure 1). Two issues need to be addressed when considering the use of simulations in a teaching environment. The first issue is to ensure that the simulation is a representation of the real world. The second issue is to establish whether the simulation is a useful tool when confronted with the complex concepts and values of the real-life environment. Both issues are addressed and embodied through the current research, leading to the research question: to what extent will the e-filing simulation improve students' confidence to practically apply their theoretical knowledge? The research followed a quantitative approach and made use of pre- and post-questionnaires. A prequestionnaire was administered before the e-filing simulation and the postquestionnaire afterwards. The participants were postgraduate students registered for a taxation module in the Department of Taxation.
The first contribution of the research is that the simulation is based on the real-life e-filing platform (for tax compliance in South Africa) and therefore extends the training of future tax professionals to incorporate practical skills. The second contribution is thus the enhancement of education in the field of taxation. The confidence of taxation students in practically applying their theoretical knowledge to a real-life scenario was extensively improved through the e-filing simulation. In the next section, the literature around teaching interventions and simulations is explored. A description of the research method is then given, followed by the data analysis, a discussion of the results, and a conclusion. Literature review Various teaching interventions have been explored and studied by researchers, all with the goal to improve teaching, increase student success, and deliver educational content more efficiently (Ilic et al., 2015). The various teaching interventions are shown in Figure 1. The current research focuses on simulation as a teaching intervention where an artificial representation of the real world is presented in a noninvasive way to evaluate practical situations (Barton and Maharg, 2007; Fricke and Lux, 2015). Such a teaching intervention offers the benefits of enhancing students' critical thinking, their content knowledge and their self-confidence (Cant and Cooper, 2017). The first recorded use of simulations can be traced back as far as 1777 with the "needle experiment" (Goldsman et al., 2010). The slow development of manual simulations continued until 1910, when the first flight simulator was developed and brought into use for the training of pilots (Henry, 2018; Kincaid and Westerlund, 2009). During the late 1940s, with the development of computers, the use of the Monte Carlo simulation method was expanded, and the first general-purpose simulator was built. In 1964 Tocher constructed the activity-cycle diagram (ACD) that became the cornerstone for teaching simulations in the United Kingdom (Goldsman et al., 2010). During the 1970s, flight simulators changed to computer-generated simulators with more technology and processing power (Henry, 2018). Teaching and training with simulations were further introduced in the field of medicine, leading to the development of simulations in computer-based games for this purpose (Kincaid and Westerlund, 2009). Computer-based games further developed to the extent that several branches of the military are exploring the use of simulation games for training purposes. The use of simulations is rapidly expanding to fields such as maintenance, law enforcement, healthcare, transportation, athletics, crane simulations and emergency management scenarios (Henry, 2018; Kincaid and Westerlund, 2009). The use of simulations is interdisciplinary and is premised on the representation of reality, mainly with the objective to teach. Barton and Maharg (2007), for example, confirmed the idea that simulations could be used to teach problem-solving. Furthermore, simulations aid in integrated learning and can be adapted to fit the purpose: to represent the desired business environment and the desired scenario (Jain et al., 2010; Kolb and David, 2014; Mainemelis et al., 2002). Indeed, Maier and Grobler (2000, p.
136) point out: 'Just as a pilot learns to fly with a flight simulator, one can learn to manage a company with the help of a management flight simulator.' Simulations offer an advantageous intervention for teaching and learning, with extended growth benefits for both students and facilitators. Some of the benefits of using simulation-based training are as follows: first, the training is authentic and a replica of a real-life scenario. Students can therefore train in an environment that is risk-free and where the event can be repeated to obtain the skills necessary. Second, the simulation may assist in gaining insights into the different variables of the system with the purpose of finding the highest performance setting (Future learn, n.d.). Third, the simulation will provide accurate and immediate feedback (Hurix digital, 2023). Finally, the simulation is cost-effective. Once developed, the simulation can be repeated without additional acquisition costs (Hurix digital, 2023). The disadvantages of simulations may include, among others, the cost of maintaining and updating the simulation, the misconception that the simulation is perfect while in real life there is always room for errors (Knilt, 2022), time-consuming training and the need for the assistance of a supervisor in simulation training (Future learn, n.d.). A simulation may be carried out in various forms, depending on the objective of the simulation project. In the field of taxation, there are mainly three forms of simulation: the Monte Carlo simulation, the computable general equilibrium simulation and computer simulations (see Figure 1). Fogarty and Goldwater (1996) applied the Monte Carlo simulation in tax education; the simulation was described by Johansen et al. (2010) as a collection of computational techniques for the solution of mathematical problems. Meng and Siriwardana (2017) define the computable general equilibrium simulation as an economic model that has the ability to reveal information on the whole economy and on detailed industries. Bonga-Bonga and Perold (2014) made use of a computable general equilibrium simulation in researching the effect of a reduction in value-added tax on the South African economy when either a flat rate or a progressive tax system is applied. Haar (2021) aptly describes computer simulations as dynamic models that are capable of representing complex systems. Marriott (2004) used computer simulations in researching how best to assist accounting educators in overcoming some of the challenges they face in teaching by providing educators with a simulated educational setting. Computer simulations may indeed prove to be a great tool for teaching new concepts and exposing individuals to a real-world platform in a safe environment; these simulations are thus a powerful tool for learning, with great potential for formal educational use (Akilli, 2007).
Three aspects comprise computer simulations, namely the underlying model, the human-computer interface and various functionalities (Maier and Grobler, 2000). Maier and Grobler (2000) developed a taxonomy for computer simulations by dividing them into modelling-oriented simulations and gaming-oriented simulations (as seen in Figure 1). The current research applies the modelling-oriented simulations, which can be divided further into feedback-oriented continuous simulations and process-oriented simulations. The e-filing simulation used for the research can be described as a feedback-oriented continuous simulation. The main goals of this type of simulation are learning, problem-solving and gaining insights, and the achievement of these goals is virtually undoubted (Maier and Grobler, 2000). Due to the ever-changing nature of taxation, changes in tax systems and tax legislation are often necessitated. These changes lead to continuous development in education and in the extended profession. New employees may find this constant change challenging. Best and Schafer (2017) highlight how tax practitioners often note that new staff lack certain core skills in carrying out their duties. Computer simulations may be the solution to this challenge in the teaching and learning environment of taxation, and Summers (2006) in fact confirms that computer tax simulations do have a material impact on the educational side of tax policies. In the USA, Best and Schafer (2017) developed a computer simulation with the objective of providing students with realistic corporate tax return experiences. In this simulation, students were given the role of an accountant at an accounting firm and were provided with relevant information to complete a return (they were also required to consult with the client for additional required information). Students were provided with a skills assessment questionnaire prior to and after the simulation to gauge their experience, their ability to think critically and their level of comfort in performing a similar exercise (completion of a corporate tax return) in future. Significant improvements (in critical thinking, in preparedness to work in a professional environment and in ability to calculate taxable income) in postquestionnaires were noted, indicating that students did learn through the simulation. Furthermore, according to Best and Schafer (2017, p. 74), students provided excellent feedback after being exposed to the simulated environment. Examples of feedback comments given by students after this exposure are as follows: I like how the project had a real world feel to it. I liked the way it made you go through all the steps of preparing a corporate tax return and simulated the real process of having incomplete information and meetings with supervisors. I liked that it resembled what we may see outside of the classroom. (Best and Schafer, 2017, p.
74) Method The South African Revenue Service (SARS) employs the e-filing platform to empower taxpayers to conduct their tax affairs online. It is the interface of SARS with taxpayers. The services rendered on this platform include, among others, the submission of tax returns, the issuing of assessments by SARS, and the lodging of an objection to or an appeal against a decision made by SARS. The computer simulation used in this research is a representation of the e-filing platform, built from screenshots of the e-filing platform itself. The e-filing system was deemed appropriate to simulate, as students will encounter this platform in real life, and the simulation illustrates to them how the knowledge obtained during the course is applied in a real-life scenario. In the simulation, postgraduate taxation students were exposed to the simulated SARS e-filing platform to afford them the opportunity to apply the Income Tax Act (58 of 1962, which contains the guidelines for calculating the normal tax payable by taxpayers in South Africa) and the Tax Administration Act (TAA) (28 of 2011, which guides the tax administration and tax compliance of taxpayers in South Africa) in a simulated real-world environment which mimics the platform that the students will encounter when they commence their professional careers. The simulation was developed by one of the researchers (who is also a tax practitioner), in cooperation with a computer programmer. The researcher provided the programmer with extensive screenshots and explanations of what the real-life e-filing platform looks like and how the interactions between the platform and the end-user take place. The development of the simulation took several months, during which both the researcher and the programmer worked continuously to refine the final product. The simulation was therefore based on the real-life e-filing platform, and the validity and reliability of the simulation can be confirmed. The researchers who presented the session to the two groups are chartered accountants who regularly use the e-filing system and could also confirm the validity of the simulation. The e-filing simulation was rolled out to two postgraduate taxation groups in the Department of Taxation at the University of Pretoria, South Africa. Both groups were enrolled for the same module in taxation in the specific academic year. The students were selected for the simulation, as they would move into the work environment in the following year. The students also had extensive knowledge of taxation, as they had already completed three years of undergraduate studies in taxation at tertiary level before being accepted for this postgraduate module. All the students forming part of these two postgraduate groups acted as participants in the current research. The simulation was conducted after both groups had engaged with the theoretical content necessary to complete the e-filing simulation: on 25 June 2021 for Group 1 and on 27 September 2021 for Group 2. The simulation was presented online, using desktop computers or laptops.
Before the commencement of the simulation, the students were invited to participate in a prequestionnaire to gather information regarding their presimulation TAA knowledge and their confidence in applying that knowledge. Before the commencement of the prequestionnaire, students were given the opportunity to consent to their participation or withdraw from the research. After the simulation, the students were requested to complete a postquestionnaire. Students were informed that the completion of the pre- and post-questionnaires was voluntary. They were, however, encouraged to complete both questionnaires, and it was explained to them that their feedback would indicate whether they had benefited from the session and that the feedback would be used to improve the simulation in the future. Furthermore, students were asked to create a unique username to ensure anonymity in the questionnaire, and the unique usernames were used to link the pre- and post-questionnaires. Students were informed that the teaching intervention session would be a simulation of the e-filing interaction on behalf of a fictitious taxpayer for the 2021 year of assessment. Students were given a set of facts and supporting documents for the fictitious taxpayer. The steps listed below were to be followed. 1. Calculate the normal tax due/refundable. 2. Complete and submit the income tax return on the simulated SARS e-filing platform. 3. Check the simulated original assessment issued by SARS against their own calculation. 4. Upload supporting documents to the simulated SARS e-filing platform. 5. Object to an additional simulated assessment issued by SARS. The simulation was only available on a desktop computer or laptop. Some students used two devices during the session; for example, a laptop for participating in the simulation and a smartphone for remotely joining the live session for interaction and instructions. Other students used one device and moved between tabs as needed. The fact that the simulation was only developed for a desktop computer or laptop is one of the limitations identified by the researchers for improvement in the future, where the platform will be converted for use on a mobile device (smartphone or tablet) as well. Students were awarded marks for completing the simulation and for attending the online session. They were also encouraged to ask questions during the sessions. However, no marks were awarded for completing the pre- and post-questionnaires, as these were voluntary. When the computer simulation commenced and the link to the simulated SARS website was provided for students to access the simulation, the simulation continued as outlined below (a schematic sketch of this flow is given after the list). 1. Students calculated the normal tax due by or refundable to the fictitious taxpayer. 2. Students completed the tax return for the fictitious taxpayer and submitted it on the simulated platform. The session was interactive, and students could post a question in the chat box at any time. 3. Once the original assessment was issued on the simulated platform, students had to compare their calculations to the assessment and identify any differences or errors. 4. Students were made aware that 'SARS' had requested supporting documentation, which the students then independently uploaded onto the simulated platform. 5. 'SARS' then issued an additional assessment on the simulated platform, disallowing one of the deductions claimed. Students had to object to this additional assessment.
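The five steps above can be read as a small state machine. The toy Python sketch below is a hypothetical illustration of that flow; all class, method and document names, as well as the amounts, are invented for illustration, and this is not the authors' actual simulation code (which was built by a programmer from e-filing screenshots).

```python
from dataclasses import dataclass, field

@dataclass
class EFilingSimulation:
    taxpayer: str
    state: str = "RETURN_PENDING"
    events: list = field(default_factory=list)

    def submit_return(self, declared_tax: float) -> None:
        # Steps 1-2: the student's own calculation goes in with the return.
        assert self.state == "RETURN_PENDING"
        self.events.append(("return submitted", declared_tax))
        self.state = "ORIGINAL_ASSESSMENT"

    def compare_assessment(self, own_calc: float, assessed: float) -> float:
        # Step 3: reconcile the student's calculation with the assessment.
        assert self.state == "ORIGINAL_ASSESSMENT"
        self.state = "DOCS_REQUESTED"
        return assessed - own_calc

    def upload_documents(self, docs) -> None:
        # Step 4: supporting documents requested by 'SARS'.
        assert self.state == "DOCS_REQUESTED"
        self.events.append(("documents", list(docs)))
        self.state = "ADDITIONAL_ASSESSMENT"

    def object_to_assessment(self, disputed_item: str) -> None:
        # Step 5: objection against the additional assessment.
        assert self.state == "ADDITIONAL_ASSESSMENT"
        self.events.append(("objection", disputed_item))
        self.state = "OBJECTION_LODGED"

sim = EFilingSimulation("fictitious taxpayer, 2021 year of assessment")
sim.submit_return(declared_tax=48_250.00)
print(sim.compare_assessment(own_calc=48_250.00, assessed=51_000.00))  # 2750.0
sim.upload_documents(["income certificate", "medical scheme certificate"])
sim.object_to_assessment("disallowed deduction")
print(sim.state)  # OBJECTION_LODGED
```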
Once all five steps of the simulation were completed, the link to the post-questionnaire was posted in the chat box. The only responses that were included in the data analysis were responses where students had completed both the pre- and post-questionnaire. Of the 232 registered students, 172 completed both questionnaires, resulting in a 74% response rate. Results and discussion The data collected through the pre- and post-questionnaires were downloaded from Qualtrics and analysed by frequency and by applying cross-tabulations. The data analysis is structured in such a way as to gain an understanding of the change in students' confidence when applying theory to practice. Before analysing the knowledge component, one needs to understand the exposure students previously had to the actual e-filing platform. Students were thus asked if they had ever visited the platform. The responses revealed that 48.8% of the students had visited the platform before; 45.9% had never accessed the platform; and 5.2% were not sure if they had ever visited the platform. Subsequently, to get a more detailed understanding of what functionality the students had previously visited or used on the actual platform, those who answered 'yes' to the previous question were asked to select all [1] of the functions they had visited or made use of. The results are summarised in Figure 2, showing that 70.2% of students who had previously been exposed to the platform (48.8% of the total respondents) had completed a tax return, while only 39.3% had submitted a tax return on e-filing. These data lead to the conclusion that the students in fact had limited exposure to the functionalities of the actual e-filing platform. Although the majority of students had not had extensive practical exposure to the actual e-filing platform, all of the students had been exposed to theoretical training regarding the TAA in their current course content. To gauge students' confidence in their own theoretical knowledge, the students were asked to rate their knowledge of the TAA. Figure 3 provides the results for this question: the majority of students felt that their knowledge of the TAA was average (51.2%), above average (35.5%) or even far above average (5.8%). Only 7.6% of the students rated their knowledge as below average (6.4% as somewhat below average and 1.2% as far below average). The results therefore indicate that 92.5% of the students had confidence in their theoretical knowledge of the TAA. To gain an understanding of students' perceptions of the practical usefulness of the e-filing simulation, students were asked, in the prequestionnaire, if they believed that the simulation would assist them in applying their theoretical knowledge of the TAA in a practical scenario. The same question was asked again once the students had been exposed to the simulation. The responses are summarised in Figure 4. It is clear from the data that the students did believe that the simulation would assist in enhancing their theoretical knowledge, and their perceptions only changed slightly once they were exposed to the simulation.
A cross-tabulation (refer to Figure 5) was also performed on the above questions to show how the expectation of a change in confidence prior to the simulation agrees with the actual change reported after the simulation. 93.0% of the students expected an improvement prior to the simulation, while 90.7% reported an improvement after the simulation. Therefore, not all the students who expected an improvement reported one. However, 86.6% of the students who stated that they expected an improvement also reported an improvement in confidence after the simulation. 5.8% of the students were uncertain whether they would experience an improvement; of these, 3.5% (of all students) went on to report an increase in confidence after the simulation. A further 0.6% of students did not expect an improvement but then reported an improvement after the simulation. A Fisher-Freeman-Halton exact test was used to determine if there was a significant association between the expectation of a possible improvement and the reported improvement after the simulation. The results indicate that there is a statistically significant association between these two variables at a 1% level of significance (p = 0.004). Once the students had been exposed to the simulation, they were asked for their feedback on how they had experienced the simulation; this feedback was overwhelmingly positive. The results (included in Figure 6) were that 91.9% of the students indicated that their experience had been good (68.0% extremely good and 23.8% somewhat good); 6.4% indicated that their experience had been fair (neither good nor bad); and just 1.7% indicated that their experience had been bad (1.7% somewhat bad and 0.0% extremely bad). Despite the positive responses to the simulation, it is important to gauge whether it was valuable to the students' actual learning. The questions in the postquestionnaire therefore asked whether the experience had enhanced the students' TAA knowledge and whether they believed that the simulation would help them with practical real-life scenarios. The results of these questions are summarised in Figure 7. Of the respondents, 86.7% were positive that the simulation enhanced their TAA knowledge, and 98.3% indicated that the simulation would indeed assist them in future practical situations. These results confirm the conclusion from Best and Schafer (2017) that a simulation can improve preparedness to work in a professional environment. The results also support the goals of a simulation through their indication that learning and gaining insights using simulations are virtually undoubted (Maier and Grobler, 2000).
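The Fisher-Freeman-Halton test used above is the R×C generalization of Fisher's exact test. SciPy's fisher_exact handles only 2×2 tables, so the sketch below approximates the exact test by Monte Carlo permutation with both margins fixed, ordered by a chi-square statistic. The row totals and "both yes" cells are reconstructed from the percentages reported above (n = 172; 160 expected an improvement, 149 of whom reported one, and so on), but the remaining cells are illustrative guesses, so the printed p-value only demonstrates the strong association, not the study's exact p = 0.004.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

# Expectation before (rows: yes / maybe / no) vs reported improvement
# after (columns: yes / maybe / no). Cells not pinned down by the text
# are guesses for illustration.
table = np.array([[149, 8, 3],
                  [  6, 4, 0],
                  [  1, 0, 1]])

def permutation_association_test(table, n_perm=10_000):
    """Monte Carlo stand-in for the Fisher-Freeman-Halton exact test:
    permute one variable's labels (keeping both margins fixed) and
    compare chi-square statistics against the observed table."""
    rows = np.repeat(np.arange(table.shape[0]), table.sum(axis=1))
    cols = np.repeat(np.arange(table.shape[1]), table.sum(axis=0))
    observed = chi2_contingency(table, correction=False)[0]
    hits = 0
    for _ in range(n_perm):
        t = np.zeros_like(table)
        np.add.at(t, (rows, rng.permutation(cols)), 1)
        hits += chi2_contingency(t, correction=False)[0] >= observed
    return (hits + 1) / (n_perm + 1)

print(permutation_association_test(table))  # very small p: strong association
```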
Further confirmation of the positive value of the simulation for the students can be found in the following comments received from the students: Thought it was a fantastic learning experience. This will make learning much better because we get to see where we will apply the things we learn. The experience was enjoyable to see how to apply the theory in a practical way. This was really such a great learning opportunity as I could see how everything fits in together. Ultimately, a comparison was drawn between the students' confidence levels in applying their knowledge of the TAA before (prequestionnaire) and after (postquestionnaire) their exposure to the computer simulation. The results of this comparison are included in the cross-tabulation in Figure 8, in which it is apparent that after the simulation, 101 students (58.7%) felt that their confidence had definitely improved and 55 students (32.0%) felt that their confidence had probably improved. This means that 90.7% of the students felt more confident after the simulation. Prior to the simulation, 111 students (64.5%) had a lack of confidence in practically applying their knowledge. After being exposed to the simulation, 100 (the sum of the shaded area in Figure 8) of these 111 students felt that their confidence had improved. This means that 90.1% of the students who had lacked confidence ultimately felt that their confidence had improved (or 58.1% of all students). A Fisher-Freeman-Halton exact test was used to determine if there was a significant association between confidence in practical application prior to the simulation and reported confidence after the simulation. The results indicate that there is a statistically significant association between these two variables at a 1% level of significance (p = 0.003). Improvement in confidence and in general understanding of the TAA is also apparent in the following comments received from the students: Absolutely loved this experience. It was amazing to see how our work is actually applied. I now know that I am able to help myself and my family members to submit their e-Filing. As soon as I say to anyone that I'm studying to be a CA(SA) they ask if I can fill out their tax return; after today I feel like I could definitely help them! This was extremely helpful and made e-Filing less daunting. The experience was great and I felt like it was real, like when you are in a company assisting clients. I enjoyed working on the platform and think that exposure to the practical application adds value to my learning experience. I found the simulation to be very beneficial to apply my theoretical knowledge to a practical scenario. I really enjoyed the session. Conclusion The current research asked: to what extent will the e-filing simulation improve students' confidence to practically apply their theoretical knowledge? The results indicate that the confidence of students improved extensively through the use of the e-filing simulation, leading to the students' feeling confident enough to advise family and friends on the functionalities available on the actual platform. From the results, it is apparent that the students were inspired with confidence in getting to terms with the application of the TAA in a real-life scenario. The computer simulation provided the platform for learning to take place in a practical environment without the risk of errors that would translate into real financial consequences. The contribution of the research is therefore confirmed, as it highlights a possible teaching intervention to support the training of future tax professionals in practical application skills. A further contribution is thus the enhancement of education in the field of taxation, particularly with the results' showing that the students experienced high levels of increased confidence in their application of theoretical TAA knowledge to real-life scenarios.
As a final conclusion, the results of this study confirm that the use of simulations (more specifically, computer simulations) in the field of taxation is of vital importance, as the future follows a digital pathway. The education environment can gain substantial benefits by further investigating the use of computer simulations in the training of taxation professionals. Notes 1. Students were able to make multiple selections, meaning that the percentages cited do not add up to 100%.
Figure 1. A presentation of a computer simulation as a teaching intervention
Figure 2. E-filing functions visited or performed
Figure 3. Students' ratings of their own theoretical knowledge of the TAA
Figure 5. Cross-tabulation: confidence in applying theoretical knowledge
Figure 7. Value of simulation to actual learning
Figure 8. Cross-tabulation of change in confidence in applying the TAA before and after the simulation
Poly (ADP-Ribose) Polymerase-1 (PARP1) Deficiency and Pharmacological Inhibition by Pirenzepine Protects From Cisplatin-Induced Ototoxicity Without Affecting Antitumor Efficacy Cisplatin remains an indispensable drug for the systemic treatment of many solid tumors. However, a major dose-limiting side-effect is ototoxicity. In some scenarios, such as treatment of germ cell tumors or adjuvant therapy of non-small cell lung cancer, cisplatin cannot be replaced without undue loss of efficacy. Inhibition of poly(adenosine diphosphate-ribose) polymerase-1 (PARP1) is presently being evaluated as a novel anti-neoplastic principle. Of note, cisplatin-induced PARP1 activation has been related to inner ear cell death. Thus, PARP1 inhibition may exert a protective effect on the inner ear without compromising the antitumor activity of cisplatin. Here, we evaluated PARP1 deficiency and PARP1 pharmacological inhibition as a means to protect the auditory hair cells from cisplatin-mediated ototoxicity. We demonstrate that cisplatin-induced loss of sensory hair cells in the organ of Corti is attenuated in PARP1-deficient cochleae. The PARP inhibitor pirenzepine and its metabolite LS-75 mimicked the protective effect observed in PARP1-deficient cochleae. Moreover, the cytotoxic potential of cisplatin was unchanged by PARP inhibition in two different cancer cell lines. Taken together, the results from our study suggest that the negative side-effects of cisplatin anti-cancer treatment could be alleviated by a PARP inhibition adjunctive therapy. INTRODUCTION Cisplatin [cis-diammine dichloroplatinum(II)] belongs to the essential cytotoxic drugs for the treatment of solid tumors. Applications include a wide range of cancers in children and adults. The introduction of cisplatin has increased the cure rate of metastatic germ cell tumors in an unprecedented way, from approximately 20% in the era before cisplatin to 80-90% with cisplatin-based combination chemotherapy. Nephrotoxicity (renal dysfunction) and bone marrow toxicity (anemia, neutropenia, and thrombocytopenia), together with ototoxicity, are among the major dose-limiting side-effects of cisplatin (Ekborn et al., 2003). Ototoxic symptoms persisted in 20% of adult testicular cancer patients (median age of 31 years) but may reach a prevalence of over 50% in patients receiving cumulative doses of cisplatin above 400 mg/m2 (Bokemeyer et al., 1998). About 62% of children and adolescents treated with platinum-based chemotherapy acquired bilateral hearing loss in the conventional audiometric frequency range. This number increased to 81% for a decrease in evoked distortion product otoacoustic emission (DPOAE) amplitudes and dynamic ranges. When extended high frequency (EHF) range audiometry was applied, functional deficits increased further to 94% (Knight et al., 2007). These data from sensitive audiological tests suggest that cisplatin-induced ototoxicity is more frequent than measured by conventional audiometry (Knight et al., 2007). Cisplatin ototoxicity is mediated by reactive oxygen species, including superoxide anions (Dehne et al., 2001; Rybak et al., 2007), and transcription factors such as NF-κB have been suggested as further cell death mediators (Rybak et al., 2007). However, the damage caused by cisplatin cannot solely be explained by its interaction with DNA (Sherman et al., 1985); it also depends on the cell's ability to detect and respond to DNA damage (Kerr et al., 1994).
One important DNA repair mechanism is activation of poly (ADP-ribose) polymerase-1 (PARP1). Even 50 years after Paul Mandel first described a nuclear enzymatic activity that synthesizes an adenine-containing RNA-like polymer (Chambon et al., 1963), PARP1 and its activation mechanism in the ear have not been thoroughly investigated. PARP1 detects and marks damaged DNA sequences and acts as a scaffolding protein to orchestrate other factors in the DNA repair pathway. Similar to many other DNA repair mechanisms, its action can have two opposing effects (Rouleau et al., 2010): (1) sufficient repair of the damage can allow for survival of the cell, while (2) in cases of more severe damage, the failed efforts to repair the damage can result in fatal ATP depletion and cell death. Poly-ADP ribosylation, which is primarily catalyzed by PARP1, is a post-translational modification of nuclear proteins in which NAD+ is used as a substrate (Burkle, 2005). Strong up-regulation of PARP1 can be observed in the early stages of cell death (Yu et al., 2002). This mechanism plays a crucial role in several pathophysiological processes and can be therapeutically exploited through the use of PARP1 inhibitors (Virag and Szabo, 2002), including in the inner ear, where the PARP1 inhibitor 3-aminobenzamide (3-AB) has been shown to be otoprotective (Tabuchi et al., 2001, 2003). In other sensory systems, such as the retina, PARP1 gene knock-out increases resistance to retinal degeneration (Sahaboglu et al., 2010), and PARP inhibitors, including the anti-cancer drug olaparib, improve retinal viability (Sahaboglu et al., 2016). Recently, more evidence was presented that the ototoxicity of cisplatin can be influenced through the regulation of PARP1 (Kim et al., 2014, 2016). It was reported that a decrease in SIRT1 activity and expression facilitated increased PARP1 activity and aggravated cisplatin-mediated ototoxicity. The expression level and activity of SIRT1 in turn were suppressed by the reduction of intracellular NAD+ levels. Hyperactivation of PARP1 led to depletion of NAD+ and ATP production, followed by cochlear cell death. Restoring cellular NAD+ levels by application of β-lapachone or dunnione, as substrates for NAD(P)H dehydrogenase quinone-1 (NQO1) promoting conversion of NADH to NAD+, prevented the toxic effects of cisplatin. These results suggest that direct modulation of cellular NAD+ levels by pharmacological agents could be a promising therapeutic approach for protection from cisplatin-induced ototoxicity (Kim et al., 2014). Thus, PARP1 inhibition may provide for otoprotection during cisplatin treatment. Here, we tested pirenzepine and several other well-validated PARP inhibitors of various potencies. The prototypical PARP inhibitor 3-AB has an EC50 of 200 µM. The structure of 3-AB is similar to that of NAD+, so that it binds to PARP and prevents it from using up NAD+. Another PARP inhibitor is PJ34, a cell-permeable, water-soluble phenanthridinone derivative that is a 10,000-fold more potent PARP inhibitor with an EC50 of 20 nM. DPQ is also a potent and selective inhibitor of PARP (EC50 = 40 nM). PARP inhibition is a general neuroprotective principle, likely attenuating the intrinsic, i.e., mitochondrial, pathway of apoptosis, which can be induced by a variety of pathological or toxic conditions (Schrattenholz and Soskic, 2006).
Pirenzepine (Gastrozepin) is an approved muscarinic M1 receptor selective antagonist (Del Tacca et al., 1989) that reduces gastric acid secretion and muscle spasms and is used in the treatment of peptic ulcers. It is also being investigated for use in myopia (Mak et al., 2018). The PARP1 inhibitor pirenzepine would allow for a direct entry into clinical development and use after a "proof of concept" on the basis of drug repositioning, as pirenzepine is an active substance already on the market with a different indication area and mechanism of action. For further development, an already identified metabolite (LS-75) with a highly specific effect on PARP1 is available. LS-75 is a metabolite of pirenzepine (Riss et al., 2009). In vivo, pirenzepine is metabolized to 5,11-dihydro-benzo[e]pyrido[3,2-b][1,4]diazepin-6-one, tagged LS-75 (Schrattenholz, 2006; Schrattenholz and Soskic, 2006). Otoprotective drug screening for the prevention of cisplatin-induced ototoxicity is unfortunately impeded by the high mortality encountered in animal models due to the systemic toxicity of cisplatin, which may precede ototoxic effects. This might be reflected in the many different protocols utilizing single (Schmitt and Rubel, 2013) or multiple (Roy et al., 2013) cisplatin applications (see also Tropitzsch et al., 2014 for review). Toward the development of an in vivo model, a quantitative assay employing the lateral line of zebrafish larvae was introduced to facilitate drug screening for ototoxic and otoprotective agents (Ton and Parng, 2005; Ou et al., 2007). The model provided evidence of dose-dependent cisplatin-induced hair cell loss. The limitations of in vivo screening may also be overcome by the development of validated in vitro models. Previous in vitro models to study cisplatin ototoxicity in the mammalian cochlea utilized tissue culture techniques involving isolated tissue fragments dissected from the postnatal inner ear organ (Liu et al., 1998; Zhang et al., 2003; Yarin et al., 2005; Previati et al., 2007). In the present study, we utilized a rotating bioreactor culture system under simulated microgravity conditions (Hahn et al., 2008; Arnold et al., 2010; Tropitzsch et al., 2014). The system allows for maintaining the entire postnatal mouse inner ear organ in culture for up to seven days. In this model, controlled sensory cell lesions can be induced by ototoxic agents as toxicological models of hair cell degeneration and hair cell loss. We use this model to show that both genetic inactivation of PARP1 and pharmacological inhibition of PARP protect inner and outer hair cells from cisplatin-induced damage. MATERIALS AND METHODS Animals PARP1-deficient mice (Wang et al., 1995) and wild-type 129SV mice used in this study were obtained from an in-house breeding colony maintained in a specified pathogen free (SPF) animal facility. Animal use for organ explants was approved by the Committee for Animal Experiments of the Regional Council (Regierungspräsidium) of Tübingen. PARP1-deficient mice were originally purchased from Jackson Laboratories (Parp1tm1Zqw; Mouse Strain Datasheet 002779) and bred with 129SV wildtype mice obtained from the Leibniz Institute on Aging - Fritz Lipmann Institute (FLI; Prof. ZQ Wang, Jena, Germany). Genotyping was performed on each animal by PCR (according to the JAX Standard PCR protocol, 2003; primers: CCA GCG CAG CTC AGA GAA GCC A for wildtype; CAT GTT CGA TGG GAA AGT CCC for common; AGG TGA GAT GAC AGG AGA TC for mutant). Cell Culture System Setup Details of the methods are given in Hahn et al. (2008), Arnold et al.
(2010), and Tropitzsch et al. (2014). In short, explants of the inner ear organ of mice at postnatal day 7 were obtained. Animals had a bodyweight of 5-6.5 g. The explants were cultured in 55-ml HARV culture vessels mounted on a Rotary Cell Culture System (RCCS™-4; Synthecon Inc., Houston, TX, United States). All dissections were performed in a laminar flow cell culture hood under sterile conditions. After decapitation, the complete inner ear bony labyrinth capsules were dissected from the skull base in ice-cold Hank's balanced saline solution (HBSS) supplemented with 10 mM HEPES buffer (pH 7.3). The perilymphatic fluid spaces were opened via micro-dissection to provide access of the culture medium to the inner ear sensory epithelia. Immediately after completing the micro-dissection, inner ear organs were inoculated in a HARV vessel filled with pre-warmed culture medium, and rotation was started for 24 h at a rotation speed of 25-35 rounds per minute (RPM). The bioreactor was placed in a 37 °C humidified 5% CO2/95% air incubator. The culture medium was Neurobasal™ Medium (Invitrogen, Inc., Carlsbad, CA, United States) supplemented with 1× B27 supplement (Invitrogen, Inc.), 5 mM glutamine, 10 mM HEPES (Invitrogen, Inc.), and 100 U penicillin (Sigma, St. Louis, MO, United States). Cisplatin (MW: 300.05 g/mol, Sigma, St. Louis, MO, United States) was applied to the culture medium at concentrations ranging from 0.1 to 5 µg/ml for 24 h. After termination of the culture, inner ear explants were fixed with 4% paraformaldehyde, and the organ of Corti was micro-dissected as a whole mount preparation and divided into basal, middle, and apical segments. The surface morphology of the organ of Corti, in particular the hair cell stereocilia bundles, was visualized by phalloidin labeling of F-actin and immunohistochemical labeling for myosin VIIa. Whole mount specimens were washed with phosphate-buffered saline (PBS), permeabilized with 0.1% Triton X-100 in PBS for 5 min, and incubated with phalloidin conjugated to the dye Alexa488 (Molecular Probes Inc., Eugene, OR, United States) for 20 min in the dark at room temperature. After washing in PBS, the whole mount specimens were mounted using Vectashield-DAPI (Vector Laboratories, Burlingame, CA, United States). Quantification of Sensory Hair Cell Maintenance and Statistical Analysis Sensory hair cell maintenance and hair cell loss were quantified for organ preservation and toxicity assays (Tropitzsch et al., 2014). The effects of these experimental paradigms were quantified in whole mount preparations of the organ of Corti. These preparations consisted of three or four segments of the organ of Corti, designated basal, middle, and apical segments. The whole mount preparations were analyzed using a Zeiss Axioplan 2 epifluorescence microscope. Photomicrographs were taken and segments analyzed off-line. The length of each cochlea segment was determined at the midline between the inner and outer hair cell area along the longitudinal axis of the organ of Corti. A software module (Soft Imaging System, Stuttgart, Germany) allowing curved length measurements was used. Total cochlear length was determined by adding the length values of each segment. The inner hair cells and outer hair cells were counted using the ImageJ (Abramoff et al., 2004) plugin "Cellcounter", which also allows storing the coordinates of the cells.
These coordinates were used to determine the longitudinal position along the cochlear duct to allow for the construction of cochleograms (custom made program). Cisplatin-induced hair cell loss showed a marked base-to-apex gradient, with the base being more vulnerable. Hair cell preservation was quantified as the fraction of cochlear length with less than 10% hair cell loss, as seen in the cochleogram. All values of length measurements are presented as the mean fraction ± SD for the inner ear organ preservation assay. Differences between experimental groups were assessed using JMP (Version 14, SAS Institute). t-tests were performed to assess significance (*p < 0.05, **p < 0.01 and ***p < 0.001 were considered significant); multiple testing was accounted for using Dunnett's test, comparing a number of treatments/conditions with a single control. SigmaPlot (Version 8, SPSS Inc.) was used to fit a typical dose-response curve with a variable slope parameter (four-parameter logistic equation) to obtain a median effective concentration (EC50): y = min + (max − min)/(1 + (x/EC50)^Hillslope), where min is the bottom of the curve, max is the top of the curve, EC50 is the median effective concentration, and Hillslope characterizes the slope of the curve at its midpoint. In brief, the cell lines NT2 and 2102EP were rinsed with PBS (Biochrom), trypsinized and re-suspended in 1 ml of the appropriate culture medium to count the cells in a hemocytometer chamber. A total of 4 × 10³ cells/well were seeded in 96-well plates. Cells were allowed to adhere overnight. The cells were exposed to drugs (cisplatin or a test drug) for up to 24 h. The medium was then removed, and 0.2 ml MTT solution (final concentration: 0.5 mg/mL MTT; Sigma) was added. The plates were incubated for 2 h, the medium was removed, 0.1 ml DMSO was added, the plates were agitated for 15 min, and the optical density was read using a photometer (MRX Revelation, Dynex Technologies, VWR International, Bruchsal, Germany) at 570 nm. The optical density readout was averaged across wells, corrected for the water value, and expressed as the percent survival relative to untreated cells. Error bars were calculated using error propagation for divisions (fractional uncertainties add in quadrature). The PARP inhibitors and their respective concentrations are described below. Pirenzepine Pirenzepine is used as a gastric acid secretion inhibitor worldwide and was re-introduced as a potential neuroprotective compound via PARP1 modulation. In vivo, it is metabolized to 5,11-dihydro-benzo[e]pyrido[3,2-b][1,4]diazepin-6-one, also known as LS-75. This compound was previously found to be an inhibitor of PARP1 (Riss et al., 2009). Cisplatin Dose-Response Curve The effects of cisplatin exposure on cochleae obtained from wildtype, heterozygous, and PARP1-deficient mice in microgravity culture were quantified from whole mount preparations of the organ of Corti for control groups and cisplatin-exposure groups. Cisplatin concentrations ranged from 0.1 µg/ml to 5 µg/ml. Hair cell preservation was calculated as the fraction of cochlear length with less than 10% hair cell loss. The EC50 values of inner hair cell preservation were 1.77 µg/ml for wildtype, 2.45 µg/ml for heterozygous, and 2.79 µg/ml for PARP1-deficient cochleae (Figure 1). For outer hair cell preservation, the EC50 values for the three different genotypes were 1.52 µg/ml, 1.998 µg/ml, and 1.89 µg/ml, respectively (Figure 1). The hillslope was also affected by the genotype.
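As a side note on the methodology, the four-parameter logistic fit described in the statistical analysis above can be reproduced with standard curve-fitting tools. The sketch below is a minimal illustration, not the authors' code: the dose and preservation values are hypothetical, and the sign convention chosen for the hillslope (positive for a curve decreasing with dose) is our assumption.

```python
# Minimal sketch: fitting the four-parameter logistic dose-response model
# y = min + (max - min) / (1 + (x / EC50)**hillslope) to hypothetical data.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hillslope):
    # At x -> 0 the curve approaches `top`; at x -> inf it approaches `bottom`.
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** hillslope)

dose = np.array([0.1, 0.25, 0.5, 1.0, 1.5, 2.0, 3.0, 5.0])  # µg/ml, hypothetical
preserved = np.array([1.0, 0.98, 0.95, 0.90, 0.60, 0.20, 0.05, 0.0])

p0 = [0.0, 1.0, 1.5, 4.0]  # initial guesses: bottom, top, EC50, hillslope
params, _ = curve_fit(four_pl, dose, preserved, p0=p0, maxfev=10000)
print("bottom=%.2f top=%.2f EC50=%.2f µg/ml hillslope=%.2f" % tuple(params))
```

With real cochleogram data, the fitted EC50 values could then be compared across genotypes as in the results that follow.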
The steepness of the fitted curves (hillslope) was 5.3 for the inner hair cells and 6.2 for the outer hair cells of wildtype cochleae. The hillslope values were 3.85 and 3.73 for inner and outer hair cells of heterozygous animals, respectively, and 3.48 and 3.98 for homozygous animals, respectively. The cisplatin tolerance of cochlear hair cells was significantly higher in heterozygous and homozygous animals compared to wildtype (Table 1). When the cisplatin concentration exceeded 1.75 µg/ml, a statistically significant increase in hair cell preservation rate was observed in both heterozygous and homozygous PARP1-deficient genotypes (Table 1). FIGURE 1 | Dose-response curves after cisplatin exposure calculated for the fraction of preserved inner and outer hair cells in wildtype (red; n = 3-9), heterozygous (blue; n = 5-6), and homozygous PARP-deficient (green; n = 4-13) mice (11 concentrations for each genotype). Regression lines were obtained using non-linear regression analysis. For outer hair cells, nearly complete preservation (0.9) in wildtype cochleae was maintained up to a cisplatin concentration of 1 µg/ml. A complete loss of outer hair cells was observed when the concentration exceeded 2 µg/ml cisplatin. The tolerance of outer hair cells to cisplatin in heterozygous animals increased compared to wildtype cochleae (Table 1). Inhibition of PARP Activity Through Cisplatin Treatment Pharmacological inhibition of PARP was tested to verify the increased cisplatin tolerance of hair cells observed in the PARP-deficient genotypes. Using 1.75 µg/ml cisplatin, wildtype cochleae were treated with the PARP inhibitors pirenzepine or LS-75 for 24 h at concentrations ranging from 0.1 to 100 µM (Figure 2 and Table 2). Cultures treated with pirenzepine or LS-75 at low concentrations of 0.1-1 µM showed poor hair cell preservation, comparable to control levels (ears treated with cisplatin only). Cultures treated with pirenzepine or LS-75 at intermediate concentrations of 3-30 µM showed increased hair cell preservation (Figure 2 and Table 2 for statistics). Applying pirenzepine increased preservation rates for inner hair cells significantly (p < 0.05, Dunnett's test) from 0.44 (±0.15) to 0.80 ± 0.11 (3 µM), 0.78 ± 0.12 (10 µM), and 0.88 ± 0.09 (30 µM). Similar observations were made for both inner and outer hair cells. The hair cell preservation rate in cultures that were treated with 100 µM pirenzepine was unchanged compared to controls (p > 0.05, Dunnett's test). Using LS-75, preservation rates of inner and outer hair cells were significantly (p < 0.05, Dunnett's test) different from those in wildtype control cochleae at 3 and 10 µM, reaching maxima of 0.75 ± 0.04 for the outer hair cells and 0.86 ± 0.06 for the inner hair cells at 10 µM. The hair cell preservation rate in cultures that were treated with 30 or 100 µM LS-75 was unchanged (p > 0.05, Dunnett's test) compared to controls. Cytostatic Potency of Cisplatin Is Not Reduced by PARP Inhibitors To study the influence of PARP inhibitors on the cytostatic potency of cisplatin, the effects of different PARP inhibitors on the survival of cisplatin-treated tumor cells were tested. The germ cell tumor lines 2102Ep and NT2 (Podrygajlo et al., 2009) were treated with 1.4 µM cisplatin and the PARP inhibitors, including LS-75.
[Figure caption: hair cell preservation rates (cf. Figure 1) are plotted on the left columns. The white bars indicate the hair cell preservation rate of control cochleae, and the colored portions of the bars indicate the difference to the wildtype (cisplatin-treated) control (green: preservation; red: loss). Differences to the wildtype control were calculated using Dunnett's test (***p < 0.001 considered significant).] DISCUSSION In this study, we used a genetic mouse model and demonstrated that PARP deficiency exhibits an otoprotective effect by increasing the tolerance to cisplatin exposure. We also showed that the effect observed in the genetic model was reproduced pharmacologically by applying the PARP inhibitor pirenzepine or its metabolite LS-75. We furthermore extend upon our previous studies (Hahn et al., 2008; Arnold et al., 2010; Tropitzsch et al., 2014) regarding the use of a rotating bioreactor system that allows for successful culture of the whole inner ear organ. Several models of cisplatin ototoxicity in mammals have been introduced. Previati et al. (2007) used an immortalized cell line (OC-k3 cells) derived from the organ of Corti of transgenic mice (Immortomouse™). Other experiments on cochlear and utricular organotypic cultures were performed on preparations obtained from postnatal day 3-4 rats (Zhang et al., 2003). Similarly, Yarin et al. (2005) used organotypic cultures of the organ of Corti from 3- to 5-day-old rats, while earlier studies used organ of Corti explants from 3-day-old rats (Liu et al., 1998). However, the otoprotective effect of PARP1 inhibition has thus far only been described in models of ischemia-reperfusion (Tabuchi et al., 2001) and acoustic trauma (Tabuchi et al., 2003) in vivo, but not in the context of cisplatin ototoxicity. The primary biological target of cisplatin is DNA (reviews: Jamieson and Lippard, 1999; Wang et al., 2004), whose duplex it distorts. Subsequently, cell death occurs through apoptosis and necrosis via several complex signal pathways. In primary cultures of mouse proximal tubular cells, apoptotic and necrotic cell death is dependent on cisplatin concentration. High concentrations of cisplatin (800 µM) led to necrotic cell death over a few hours, whereas much lower concentrations of cisplatin (8 µM) led to apoptosis (Lieberthal et al., 1996). The study furthermore concluded that reactive oxygen species played a role in mediating apoptosis, but not necrosis, induced by cisplatin. DNA damage repair is mediated to a large extent by the PARP1 enzyme (De Vos et al., 2012), which is generally considered a factor beneficial for cell physiology. However, in cancer, PARP-mediated repair of DNA damage allows cells to survive and can potentially contribute to carcinogenesis. Therefore, numerous PARP inhibitors have been developed clinically to be used as adjuvant and maintenance therapies, together with DNA damaging agents, including cisplatin (Ledermann, 2016). Consequently, extensive human tolerability and efficacy data are already available (Sonnenblick and Piccart, 2015). Paradoxically, overactivation of PARP is also associated with increased cell death, primarily in postmitotic, non-dividing cells and tissues (Paquet-Durand et al., 2007; Wang et al., 2011). This may explain why, in the post-mitotic cells of the inner ear and also in the retina (Sahaboglu et al., 2016), PARP inhibitors can have beneficial, pro-survival effects, while in dividing cancerous cells, loss of PARP activity increases cell death.
In this context, it is interesting to note that the heterozygous organ cultures (i.e., PARP1+/−) showed a better hair cell preservation than the full knockout cultures, indicating that some level of PARP activity is still required for their survival. This would then correspond to the effects of pharmacological PARP inhibition, where it is likely that PARP was only partially inhibited, and high inhibitor concentrations eventually showed a detrimental effect. Interestingly, overactivation of PARP may also be involved in the pathology of kidney damage, where the PARP inhibitor 3-AB protected from acute kidney injury after septic shock (Wang et al., 2018). Pirenzepine is a topical antiulcerative M1 muscarinic antagonist that inhibits gastric secretion (Carmine and Brogden, 1985). It promotes the healing of duodenal ulcers and, due to its cytoprotective action, is beneficial in the prevention of duodenal ulcer recurrence. It is generally well tolerated by patients. To date, pirenzepine does not appear to have been tested for possible beneficial effects on hearing, and to the best of our knowledge this is the first study to show that pirenzepine can offset the ototoxic effects of cisplatin chemotherapy. Remarkably, pirenzepine is known to have a marked effect on the retina, leading to a decrease in eye length growth and preventing the progression of myopia in children and adolescents (Smith and Walline, 2015). In the retina, other selective PARP inhibitors, such as PJ34 and olaparib, were found to have protective effects on photoreceptor degeneration (Paquet-Durand et al., 2007; Sahaboglu et al., 2010, 2016). On a similar level of evidence, in the inner ear, PARP activation was associated with the degeneration and loss of hair cells (Jaumann et al., 2012). Together, these studies support the concept of the use of PARP inhibitors to protect postmitotic sensory cells. In conclusion, the PARP1-deficient model revealed a protective effect against cisplatin treatment on hair cell survival. This effect was confirmed using the PARP inhibitor pirenzepine and its metabolite LS-75, which protected against cisplatin ototoxicity in a dose-dependent manner. Additionally, the cytostatic potential of cisplatin was not hampered in two germ cell tumor lines, making PARP an attractive target for an adjuvant therapy aimed at limiting the negative ototoxic side-effects of cisplatin anti-cancer treatments. DATA AVAILABILITY All datasets generated for this study are included in the manuscript and/or the supplementary files. ETHICS STATEMENT Animal use for organ explants was approved by the Committee for Animal Experiments of the Regional Council (Regierungspräsidium) of Tübingen. AUTHOR CONTRIBUTIONS AT, MM, FM, and AM performed the experiments. AT, MM, H-GK, and HL analyzed the data. MM, H-GK, AS, and HL conceived the study. AT, MM, FP-D, AM, and HL wrote and revised the manuscript. FUNDING Parts of this study were funded by ProteoSys AG, Mainz, Germany, under a contract research agreement with the University of Tübingen Medical Center (Universitätsklinikum Tübingen) titled "Otoprotektion in vitro."
Urinary MMP-9/UCr association with albumin concentration and albumin-creatinine-ratio in Mexican patients with type 2 diabetes mellitus Background Chronic kidney disease is one of the most common complications of type 2 diabetes mellitus (T2DM), causing an increased risk of cardiovascular morbidity and mortality. Matrix metalloproteinase (MMP) activity has been proposed as a useful biomarker for diabetic renal and vascular complications. Methods A cross-sectional study was conducted among T2DM patients who attended a public secondary hospital in Mexico. We performed clinical, biochemical, and microbiological assessments, as well as chronic kidney disease diagnosis according to the KDIGO guideline. Urinary MMP-9 was quantified by ELISA and adjusted using urinary creatinine (UCr). Results A total of 111 patients were included. Most participants were women (66%). Mean age was 61 ± 10 years and median T2DM duration was estimated at 11 years. Through multivariate analysis, MMP-9/UCr was found to be associated with albumin concentration and albumin to creatinine ratio. Discussion Validation of non-invasive biomarkers of chronic kidney disease among T2DM patients is necessary. Here, we demonstrate MMP-9/UCr as a potential biomarker of albumin concentration and albumin to creatinine ratio in Mexican patients with T2DM. INTRODUCTION The prevalence in Mexico in 2016 of previously diagnosed type 2 diabetes mellitus (T2DM), and the combined prevalence of overweightness and obesity, were 9.4% and 72.5%, respectively (Rojas-Martínez et al., 2018). Chronic kidney disease (CKD) is a common complication of T2DM, which confers an increased risk of cardiovascular morbidity and mortality (Górriz et al., 2019). Urine albumin concentration is normally used according to the KDIGO guideline (Kidney Disease Improving Global Outcomes (KDIGO), CKD Work Group, 2012). Conditions that could affect pre-analytical requirements, as well as levels of MMP-9, were controlled, and subjects presenting menstruation, hematuria, presence of sperm in urine (Hirsch et al., 1992), antibiotic treatment, anuria, liver cirrhosis, rheumatoid arthritis, chronic obstructive pulmonary disease, autoimmune disorder or cancer were excluded, as were those who had had intercourse or exercised vigorously in the 72 h preceding the clinical sampling. The GFR and ACR were each estimated twice, with the second estimation in each case performed some 3 months after the first. Subjects who did not coincide in terms of GFR and ACR categories were excluded. Sampling of subjects To randomly select subjects, sampling was conducted as previously described (Hernández-Hernández et al., 2017; García-Tejeda et al., 2018). A convenience sample size was defined as at least 100 participants. To achieve this sample size, a sample frame was defined as at least 290 participants, considering the response and inclusion rates seen in previous studies (Hernández-Hernández et al., 2017; García-Tejeda et al., 2018). Ethics The research protocol was approved by the investigation committee of the "Clínica Hospital del Instituto de Seguridad y Servicios Sociales de los Trabajadores del Estado 25 A 300 400" (Register No. 001/073/0230). This study was conducted according to the principles of the Declaration of Helsinki, and included subjects signed an informed consent. MMP-9 quantification Aliquots of urine samples were centrifuged and stored as previously described (Hernández-Hernández et al., 2017; García-Tejeda et al., 2018).
Human active MMP-9 (82 kDa) and pro-MMP-9 (92 kDa) were measured in urine using a commercially available enzyme-linked immunosorbent assay kit (DMP900 Quantikine; R&D Systems, Minneapolis, MN, USA). Each sample, standard curve and set of controls was assayed following the instructions of the supplier. Blind analysis of the samples was conducted. According to the supplier, the ELISA kit we employed has a sensitivity of 0.156 ng/mL. Intra- and interassay coefficients of variation were previously established at 5.9% and 7.9%, respectively (García-Tejeda et al., 2018). MMP-9 levels were adjusted by UCr. Variables T2DM was defined as a previous medical diagnosis, which was verified in the medical records. The GFR was estimated using sex-specific CKD-EPI equations based on serum creatinine (Kidney Disease Improving Global Outcomes (KDIGO), CKD Work Group, 2012). Urine ACR categories were defined according to the KDIGO guideline (Kidney Disease Improving Global Outcomes (KDIGO), CKD Work Group, 2012). Subjects were classified as "No CKD" or "With CKD". No CKD: GFR ≥60 mL/min/1.73 m² (GFR categories G1 and G2) and ACR A1 (normal to mildly increased) present for >3 months; "With CKD": GFR ≤59 mL/min/1.73 m² and ACR A1, A2 (moderately increased) or A3 (severely increased), or GFR ≥60 mL/min/1.73 m² and ACR A2 or A3, present for >3 months. Urinary tract infection (UTI) for women and men was defined according to the SIGN guideline (Scottish Intercollegiate Guidelines Network, 2013). Vigorous exercise was defined as practicing athletics, tennis, swimming, or soccer. Statistical analysis Categorical variables were summarized using absolute frequencies and percentages. Continuous variables of non-normal distribution were expressed through medians and the interquartile range. Correlation analysis was conducted using Spearman's rank correlation. Multivariate analysis was performed in order to identify the association between the independent variable, MMP-9/UCr, and the dependent variables, albumin concentration and ACR. Three models were generated for each dependent variable. Regression coefficients and 95% CIs were calculated by multivariate linear regression. A value of p ≤ 0.05 was considered statistically significant. EpiDat version 3.1 and IBM SPSS Statistics version 23 were used for analysis. RESULTS A total of 292 patients were invited to participate in the study, with signed informed consent obtained from 99% of those invited. Fifty-eight participants were excluded for not meeting the requirements for calculating GFR, antibiotic treatment, anuria, liver cirrhosis, rheumatoid arthritis, or cancer. Subsequently, 75 participants were excluded for having practiced vigorous exercise in the 72 h preceding the clinical sampling (n = 1), failure to provide clinical samples (n = 14), hematuria (n = 58) and sperm in the urine sample (n = 2). GFR and ACR were estimated in 158 participants and, more than 3 months later, 137 patients returned for a second estimation of GFR and ACR. The study finally comprised a total of 111 patients with GFR and ACR categories in concordance. Characteristics of the population Most of the participants were women and their mean age was 61.3 ± 10.3 years, with 31.5% reporting an education level higher than primary (Table 1). The mean duration of T2DM was 11 years. Most of the patients were prescribed oral hypoglycemics and more than half reported having a diagnosis of hypertension. A total of 18% of the participants had been diagnosed with CKD (Table 1).
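As an aside, the creatinine adjustment and the analyses described above (Spearman rank correlation and multivariate linear regression) could be reproduced along the following lines in Python. This is a hedged sketch only: the file name t2dm_cohort.csv and all column names are hypothetical placeholders, not the study's data, and the study itself used EpiDat and SPSS.

```python
# Illustrative sketch of the statistical workflow described in the text.
import pandas as pd
from scipy.stats import spearmanr
import statsmodels.api as sm

df = pd.read_csv("t2dm_cohort.csv")  # hypothetical data file

# Adjust urinary MMP-9 by urinary creatinine, as in the study.
df["mmp9_ucr"] = df["mmp9_ng_ml"] / df["ucr_mg_dl"]

# Spearman rank correlation with age (repeat for glucose, ACR, CFU).
rho, p = spearmanr(df["mmp9_ucr"], df["age"], nan_policy="omit")
print(f"Spearman rho = {rho:.3f}, p = {p:.4f}")

# Multivariate linear regression of albumin concentration on MMP-9/UCr,
# adjusted for the covariates listed in the text ("Enter" method).
covars = ["mmp9_ucr", "age", "bmi", "waist_cm", "sbp", "dbp", "glucose"]
X = sm.add_constant(df[covars].dropna())
y = df.loc[X.index, "albumin_mg_l"]
print(sm.OLS(y, X).fit().summary())
```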
Markers of kidney damage in urine sediment (granular casts) were observed in one subject with ACR A2. In the rest of the subjects, no markers of kidney damage were observed. Fasting serum glucose levels were above the expected value for patients with T2DM (70-130 mg/dL) (Table 1). The median value of triglycerides was also above reference values (Table 1). Univariate and multivariate analysis Urinary MMP-9 levels were determined by ELISA and adjusted to UCr. Seventy-four percent of samples had no detectable MMP-9 (Table 1). There was a positive correlation, with a correlation coefficient of at least 0.3230, between the MMP-9/UCr value and age, serum glucose, ACR and CFU (Table 2). Six multivariate models were generated to identify the possible association between urinary MMP-9/UCr and albumin concentration and ACR. Age, body mass index (BMI), waist size, systolic blood pressure, diastolic blood pressure and serum glucose were included in the models (Tables 3 and 4). In the unadjusted (n = 101), adjusted (n = 101), and adjusted in participants without UTI (n = 94) models, urinary MMP-9/UCr was the only variable consistently statistically significant for albumin concentration and ACR; BMI also showed a statistically significant association in most of the models. The remaining variables were not found to be significantly associated (Tables 3 and 4). DISCUSSION Through two indicators of CKD, our results show that urinary MMP-9/UCr is associated with albuminuria in patients with T2DM, although MMP-9 was detectable in only 26% of the subjects, probably due to the sensitivity of the ELISA. A similar association has been reported by van der Zijl and colleagues between MMP-9 activity and incipient diabetic nephropathy (Van der Zijl et al., 2010). Through an MMP activity assay, van der Zijl and colleagues detected urinary MMP-9 activity in 89% and 74% of increased-albuminuria and normal-albuminuria patients, respectively. Besides the selection of the technique (determining activity instead of levels), the inclusion criteria employed by van der Zijl and colleagues differed from our study in terms of hematuria, UTI, GFR equation and chronicity. Age, serum glucose and CFU were found to be associated with urinary MMP-9/UCr, which is consistent with the literature (Thrailkill et al., 1999; Morais et al., 2005; Kundu et al., 2013; García-Tejeda et al., 2018). The pathological mechanism of kidney fibrosis involves a series of complex molecular events that produce an excess of ECM deposition. In response to noxious stimuli, renal cells secrete pro-inflammatory and pro-fibrotic molecules that promote the recruitment of inflammatory cells. This inflammatory state can lead to the epithelial-mesenchymal transition (EMT), thus contributing to the progression of fibrosis (Garcia-Fernandez et al., 2020). Studies have shown that MMP upregulation promotes fibrosis, possibly by interaction with overexpressed TIMPs or as a result of the capacity of MMPs to degrade ECM, the degradation products of which can induce EMT (Garcia-Fernandez et al., 2020). The ECM is a versatile and dynamic compartment that harbors cryptic biological functions that can be activated by proteolysis (Visse & Nagase, 2003), as well as being involved in modulating cell proliferation, migration, differentiation and apoptosis (Sato & Yanagita, 2017). Type IV collagen is the principal component of the glomerular basement membrane and mesangial matrix.
Excretion of type IV collagen in urinary samples has been postulated as an indication of matrix turnover in diseased kidneys (Tomino et al., 2001). Since MMP-9 is involved in the degradation of type IV collagen, changes in its expression or activity might reflect early renal damage in T2DM, prior to any increased level of albuminuria (Van der Zijl et al., 2010). MMP-9 has been associated with vascular kidney damage by diverse pathological mechanisms, including matrix deposition, kidney fibrosis, EMT, arterial stiffening, vascular calcification, and atherogenesis (Provenzano et al., 2020). [Notes to Table 3: Model 1: corrected square R = 0.04, p-value = 0.16, inclusion method of the variables: Enter; ten participants were excluded for lack of blood pressure or waist size data. Model 2: corrected square R = 0.08, p-value = 0.007, inclusion method: Forward. Model 3: corrected square R = 0.13, p-value = 0.001, inclusion method: Forward. Models 2 and 3 were adjusted for the following excluded variables (if p-value > 0.05): urinary creatinine (mg/dL), age (years), waist size (cm), systolic blood pressure (mm Hg), diastolic blood pressure (mm Hg) and serum glucose (mg/dL); ten participants were excluded for lack of blood pressure or waist size data.] MMP-9 has been proposed as a potential therapeutic target to prevent kidney fibrosis in CKD (Zhao et al., 2013). The EMT is characterized by the loss of epithelial markers such as cytokeratin and E-cadherin, and by nuclear translocation of β-catenin accompanied by expression of mesenchymal markers such as vimentin and α-smooth muscle actin (Tanabe et al., 2020). In patients with T2DM, elevated plasma levels of TGF-β1, CXCL-16 and angiopoietin-2 have been shown to be independent predictors of albuminuria, and these molecular markers can improve renal risk models beyond established clinical risk factors (Scurt et al., 2019). Through the direct inhibition of Mmp-9 using a neutralizing antibody in a murine model of unilateral ureteral obstruction, Tan et al. (2013) have suggested a potential mechanism underlying the contribution of Mmp-9 to kidney fibrosis: (1) Mmp-9 cleaves osteopontin; (2) osteopontin plays a key role in macrophage recruitment; (3) infiltration of macrophages; (4) tubular cell EMT; and (5) kidney fibrosis. Moreover, Mmp-9 can be upregulated by TGF-β1 in mouse renal peritubular endothelial cells, causing endothelial-mesenchymal transition (Zhao et al., 2017). No association was found between urinary MMP-9/UCr and BMI. [Notes to Table 4: Model 1: corrected square R = 0.06, p-value = 0.08, inclusion method of the variables: Enter; ten participants were excluded for lack of blood pressure or waist size data. Model 2: corrected square R = 0.08, p-value = 0.006, inclusion method: Forward. Model 3: corrected square R = 0.11, p-value = 0.002, inclusion method: Forward. Models 2 and 3 were adjusted for the excluded variables listed above.] Among reproductive-aged women, a positive correlation has been demonstrated between some serum MMPs,
including MMP-9, and BMI, as well as significantly higher concentrations of MMPs being found in obese subjects compared to patients of normal BMI (Grzechocińska et al., 2019). Plasma gelatinase MMP-9 activity has been reported as being significantly higher in obese compared to non-obese patients, which could be reversed by bariatric surgery (García-Prieto et al., 2019). A novel function for MMPs as modulators of adipogenesis has been reported as a consequence of abnormal ECM metabolism (Derosa et al., 2008). In animal models, lipid deposition is found in tubular and glomerular portions of the kidneys, the major sites of diabetic nephropathy lesions (Thongnak, Pongchaidecha & Lungkaphin, 2020). Given the pathophysiological processes that encompass DKD, such as hyperfiltration and pro-fibrotic, pro-inflammatory and angiogenic processes, it has been questioned whether albuminuria, or any other single biomarker, is individually capable of accurately forecasting the development and progression of renal damage in T2DM (Norris et al., 2018). Moreover, a consensus classification of type 1 and type 2 diabetic nephropathies, covering the entire kidney, states that the early event is injury to the tubules and vessels, rather than glomerular lesions (Tervaert et al., 2010). In our population with G1 and G2 GFR categories, the existence of extra-glomerular lesions, and even hyperfiltration, is therefore possible. The negative association found between BMI and albumin concentration and ACR is conflicting: higher BMI has been positively correlated with higher ACR and an increased risk of major renal events in patients with T2DM (Mohammedi et al., 2018), although Raikou & Gavriil (2019) found that obesity is not a significant risk factor for an increased degree of albuminuria in hypertensive patients with a poor estimated GFR when both diabetes mellitus and a low eGFR value act as confounders. This study presents some limitations related to the sensitivity of the ELISA kit, and the absence of MMP-9 in many urinary samples might therefore be questionable. Another limitation of this study is the inclusion of patients that received renin-angiotensin system drugs, given their effect on the expression of some MMPs (Lods, 2003). The strengths of this study are the well-defined population and robust statistical analysis. We consider this study particularly relevant, since CKD prevalence and mortality rate have doubled in Mexico over the last 20 years (Dávila-Torres, González-Izquierdo & Barrera-Cruz, 2015; Salinas Martínez et al., 2020). For this reason, larger-scale studies are required in the Mexican population in order to determine the predictive values of urinary MMP-9/UCr, including sensitivity, positive predictive value, negative predictive value, probability ratio for a positive test, probability ratio for a negative test and area under the ROC curve. Ideally, these studies should include other kidney diseases in order to demonstrate specificity. CONCLUSIONS Non-invasive urine biomarkers are required to assess DKD development and progression. The present study demonstrated that MMP-9/UCr correlated positively with urine albumin concentration and ACR in Mexican patients with T2DM. We propose MMP-9/UCr as a possible early biomarker of CKD in T2DM patients.
PRODUCTION OF COMPOSITE PELLETS FROM WASTE COFFEE GROUNDS, MILL SCALE AND WASTE PRIMARY BATTERIES TO PRODUCE FERROMANGANESE; A ZERO-WASTE APPROACH This study aimed to produce ferromanganese using waste batteries as the manganese source, mill scale as the iron source and waste coffee grounds as the reducing agent and carbon source. Waste batteries were collected from waste battery collection bins. Mill scale was collected from a hot rolling workshop. Waste coffee grounds were used household coffee. All starting materials were characterized. The weighed raw materials were blended with the addition of bentonite as a binder. Pelletizing equipment was used to produce composite pellets. The produced pellets were dried and then used for reduction experiments. Reduction experiments were conducted in an argon-purged tube furnace at 1250 °C, 1300 °C and 1400 °C according to the thermodynamic background. The produced ferromanganese samples were characterized for chemical composition and metallization rate. INTRODUCTION Waste management is gaining importance because of depleting natural resources. The increasing consumption of materials by a growing population has brought popularity to recycling and material recovery technologies. Energy efficiency is another important issue in recycling operations. Recycling a material decreases energy consumption by cutting out the raw-material preparation steps of production. Waste batteries are considered hazardous waste because of the acidic or basic electrolyte in the battery system [1]. Primary batteries, or in other words single-use batteries, are designed for one-time use, with irreversible chemical reactions producing the electrical current [2]. There are different types of batteries with different ingredients; the best-known and most used primary battery chemistry is Zn-MnO2, in alkaline and Zn-C varieties [3]. The difference between Zn-C and alkaline batteries is the electrolyte used: Zn-C batteries use ZnCl2 as the electrolyte, while alkaline batteries use KOH [4]. Cathode and anode structures are similar in both battery types [2]. Recycling of single-use batteries has been studied by different researchers with different methods. The general approach for recycling single-use batteries is the pyrometallurgical method, because the process is fast and efficient [3]. Primary waste battery recycling consists of reduction-evaporation of Zn as a first step, followed by total or partial reduction of the manganese oxides to produce various manganese-based chemicals or alloys [5,6]. The high vapour pressure of Zn at temperatures relatively low compared to Mn makes the vaporization-condensation route available for Zn recovery in battery recycling [6]. Vacuum processes can be applied to decrease the Zn evaporation temperature [6,7]. After Zn recovery, batteries can be used as a Mn raw material and are generally considered for ferromanganese production [3,6]. Waste coffee grounds are evaluated in various recycling applications due to the large amount of waste [8]. Coffee is the second most traded good after petroleum worldwide [9]. Waste coffee grounds are used with and without treatment in various applications. Waste coffee grounds are generally valorized to produce carbon-based materials [8,10,11]. Activated carbon [12], pharmaceutical compounds [11], biodiesel [10] and anode material [13] are produced via valorization of waste coffee grounds. Chemical treatment is applied to waste coffee grounds to remove unwanted compounds in order to obtain desired products without valorization in different applications [14,15].
Large consumption amounts and an accessible waste stream worldwide offer different recycling and reuse opportunities for waste coffee grounds. Hot rolling is applied to steel slabs in order to produce thinner steel sheet. Mill scale is the material lost from the steel slab due to oxidation at the annealing step of hot rolling [16]. Mill scale consists of oxides of the elements in the steel, which is Fe in large part. Recycling of mill scale is generally the reuse of the material in the steelmaking process as an iron source [17]. Mill scale can be used in the production of iron-based materials as well [18]. Mill scale contains a high amount of iron and thus can be evaluated as a raw material for steel production instead of being blended into other resources [19,20]. Recycling of mill scale and other steelmaking dusts requires pre-treatment of the sample in order to be used in the conventional steelmaking process. Agglomeration is applied to iron-containing dusts due to their small particle size, which restricts use in blast furnace operation [16,17,21-23]. Pelletizing can be used as an agglomeration process in the recycling of mill scale. This study aims to produce ferromanganese using waste coffee grounds as a reductive carbon source, waste battery paste as the manganese source and mill scale as the iron source. Pelletizing is applied to produce composite pellets for reduction. Reduction of the pellets is conducted in a controlled-atmosphere tube furnace. Characterization of the starting materials and the end product is carried out. Raw Materials Waste coffee grounds were collected from household wastes. The collected waste coffee grounds were washed with distilled water and dried at 105 °C for 24 hours. DTA-TG analysis of the waste coffee grounds was applied to a 1 mg sample. Waste batteries were obtained from municipal battery collection points. The waste batteries were shredded; the steel casing and battery paste were separated. Chemical analysis of the battery paste was done with the standard procedure of Atomic Absorption Spectroscopy (AAS). The battery paste was washed with distilled water to wash out the battery electrolyte. Mill scale was chemically analyzed with AAS, and the XRD technique was used to determine the oxide phases. Further characterization results for the raw materials are discussed in the Results. The blend composition is given in Table 1. The bentonite used in the pelletizing process as a binder is a Na-based bentonite obtained from Tokat, Turkey. The chemical composition of the Na-bentonite was analyzed with AAS and the phases were determined by X-ray analysis. Thermodynamic Calculations Reduction of metallic oxides by carbon is the general practice for many manufacturing processes. Production of ferromanganese needs an extensive CO partial pressure for the reduction of MnO [24]. CO is produced in reduction systems via the Boudouard reaction, which is the basis of the carbothermic reduction system. The Boudouard reaction is given as: 2CO ↔ C + CO2 (1). CO is produced or consumed in the reduction system according to the equilibrium of the Boudouard reaction. CO is more stable at elevated temperatures than CO2. On this thermodynamic background, reduction reactions are expected to start with solid C and then proceed to CO gas-solid reactions. Free energy calculations for the reduction of Zn, Fe and Mn oxides were done with the HSC 6.1 thermodynamic database. The Gibbs free energies of the oxides, along with those of CO and CO2, are given in Figure 1. The Ellingham diagram shows the reduction conditions for the metal oxides in the system. The reduction reactions of ZnO and MnO are expected to take place in a CO atmosphere.
On the other hand, carbothermic reduction of FeO lies at the equilibrium temperature of the Boudouard reaction. That means FeO can be reduced via both C and CO; in other words, direct and indirect reactions take place simultaneously in FeO reduction. The reduction reactions of the metal oxides are given as: MnO + CO = Mn + CO2 (2); ZnO + CO = Zn + CO2 (3); FeO + CO = Fe + CO2 (4); FeO + C = Fe + CO (5). Reduction Process A reduction roasting process is proposed based on the thermodynamic calculations and literature knowledge. The proposed flow diagram of the process is given in Figure 2. The raw materials (waste coffee grounds, mill scale and waste battery paste) were blended with bentonite powder. The blend was pelletized with water addition in a laboratory-scale pelletizing disc with a diameter of 50 cm. The pellets were dried and then directly reduced in an argon-purged (1 L/min) tube furnace at 1250, 1300 and 1400 °C. Chemical analysis of the pellets was conducted with Atomic Absorption Spectroscopy. Condensed ZnO was found at the furnace exit; the ZnO could not be collected in these experiments due to the lack of a condensation chamber. RESULTS AND DISCUSSION The raw materials were characterized with the techniques mentioned in the Materials and Methods section. The DTA-TG analysis of the coffee grounds is given in Figure 3; the coffee grounds were analyzed under an argon atmosphere, in which pyrolysis occurred. Stable carbon was found in the leftover mass after gasification in the DTA-TG analysis. The stable carbon amount is 18.5% in the spent coffee grounds used. Pyrolysis of the spent coffee grounds starts at about 300 °C and finishes at about 500 °C. Mass loss of the spent coffee grounds continues with exposure to higher temperatures, but to a limited extent. The 18.5% stable carbon of the spent coffee grounds is taken into account in the calculations as the reductive carbon source. Pyrolysis of the waste coffee grounds produces the reducing carbon in the pellets. Carbothermal reduction of zinc, iron and manganese oxides requires high energy and CO partial pressure [6,24]. Pyrolysis and reduction take place simultaneously in the reduction vessel. The thermodynamic calculations, along with the DTA/TG analysis, lead to the conclusion that pyrolysis is followed by the reduction reactions. The chemical composition of the washed, dried waste battery paste is given in Table 2. The waste battery paste is rich in manganese and zinc oxides. The alkaline electrolyte in alkaline batteries is the cause of the K in the battery paste. The manganese is the main manganese source for ferromanganese production. The zinc is planned to be evaporated and condensed as zinc oxide in the furnace. The mill scale chemical analysis and XRD analysis are given in Table 3 and Figure 4, respectively. The main compounds in mill scale are iron and iron oxides. The mill scale was collected from a hot rolling workshop, and its chemical composition represents the original chemical composition of the steel slab in oxide form. Table 3 shows a large amount of iron without excessive alloying elements. The Na-bentonite chemical analysis is given in Table 4. The bentonite consists of aluminum calcium silicates with the addition of Na2O; the relatively high Na2O level of the bentonite is the reason for the Na-bentonite name. The pelletizing experiment results are grouped into three subjects: pelletizing efficiency, free fall test and moisture content. All results are given in Table 5. The free fall numbers and moisture content of the green pellets meet the green pellet requirements for pelletizing. The efficiency of the pelletizing process is adequate to consider the process feasible.
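Returning briefly to the thermodynamic background above, the temperature at which the Boudouard equilibrium starts to favor CO can be estimated with a back-of-the-envelope calculation. The sketch below uses approximate textbook standard-state values for the reaction C + CO2 -> 2CO (our assumption, not data from this study) and treats ΔH° and ΔS° as temperature-independent.

```python
# Rough estimate of the Boudouard crossover temperature and of dG at the
# reduction temperatures used in this study. Thermochemical values are
# approximate textbook numbers, assumed constant with temperature.
dH = 172_000.0  # J/mol for C + CO2 -> 2CO
dS = 176.0      # J/(mol*K)

def gibbs(T):
    """Standard Gibbs free energy change (J/mol) at temperature T in kelvin."""
    return dH - T * dS

T_eq = dH / dS  # dG = 0: above this temperature CO formation is favored
print(f"Boudouard crossover: {T_eq:.0f} K ({T_eq - 273.15:.0f} °C)")

for T_c in (1250, 1300, 1400):
    g = gibbs(T_c + 273.15)
    state = "CO favored" if g < 0 else "CO2 favored"
    print(f"{T_c} °C: dG = {g / 1000:.0f} kJ/mol ({state})")
```

At all three experimental temperatures the equilibrium lies well on the CO side, consistent with the expectation that the reduction proceeds through CO gas-solid reactions.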
The metallization rates of iron and manganese were calculated after the reduction experiments. The produced ferromanganese alloy was characterized with the AAS technique. Zinc evaporated from the reaction chamber, and the zinc evaporation rate was calculated from the zinc left over in the ferromanganese. The results of the ferromanganese chemical analysis are given in Table 6, and the metallization rate is visualized in Figure 5. The produced ferromanganese alloy is lower in manganese than the calculated value. This result shows that the alloy cannot be considered ferromanganese but rather a master alloy for ferromanganese production. The use of waste coffee grounds as the reducing agent in composite pellets is efficient for the reduction of manganese, iron and zinc oxides. Pyrolysis of waste coffee grounds produces an amorphous carbon skeleton of the coffee grain with a high surface area [13]. The high surface area and the direct contact of the reagents ease the reduction process; reagent contact and a short diffusion pathway for the reducing gases help the reduction along. The reduction temperature of the pellets is lower than the thermodynamically calculated reduction point; nevertheless, a metallization of 70% for manganese was achieved in this study. XRD analysis of the pellets reduced at 1400 °C was conducted to determine what was produced. The XRD results explain the low metallization rate of manganese: MnO peaks are exhibited along with manganese peaks, while the rest is cementite (Fe3C). The thermodynamic background supports this result, as MnO needs a high CO pressure and a high temperature to be reduced [24]. Further reduction of MnO needs a higher CO partial pressure; hence approximately 29% of the Mn remains in MnO form. Increasing the reduction rate of MnO can be achieved by increasing the temperature or the CO partial pressure. The XRD result of the reduced pellets is given in Figure 6. Conclusion Pelletizing of waste coffee grounds, waste battery paste and mill scale is a promising solution for recycling these wastes. The produced pellets have sufficient properties for producing ferromanganese. The pelletizing process efficiency was calculated as 91.18%, which can be evaluated as successful. This process can convert three different wastes into two useful products: ferromanganese and zinc oxide. Waste batteries are classified as hazardous waste, and by this process they can be turned into a useful product. The pelletizing process and the solid-state reaction ease the operation of waste utilization. The zinc oxide, on the other hand, can be used to produce zinc metal or various compounds. The wastes are totally converted into useful products and economic value is obtained.
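The metallization and evaporation rates quoted above are simple mass ratios; a minimal sketch with hypothetical numbers is given below for clarity (the input values are placeholders, not the study's measurements).

```python
# Definitions of the metallization and Zn evaporation rates used in the text.
def metallization_rate(metallic_mass, total_mass):
    """Percentage of an element present in metallic (reduced) form."""
    return 100.0 * metallic_mass / total_mass

def zn_evaporation_rate(zn_initial, zn_leftover):
    """Percentage of the input Zn removed from the pellet by evaporation."""
    return 100.0 * (zn_initial - zn_leftover) / zn_initial

# Hypothetical masses (g) per pellet batch:
print(metallization_rate(7.0, 10.0))  # 70.0, cf. the ~70% Mn metallization
print(zn_evaporation_rate(2.0, 0.1))  # 95.0
```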
VARIATIONAL APPROACH FOR THE RECONSTRUCTION OF DAMAGED OPTICAL SATELLITE IMAGES THROUGH THEIR CO-REGISTRATION WITH SYNTHETIC APERTURE RADAR In this paper the problem of reconstruction of damaged multi-band optical images is studied in the case where we have no information about the brightness of such images in the damage region. Mostly motivated by the crop field monitoring problem, we propose a new variational approach for the exact reconstruction of damaged multi-band images using the results of their co-registration with Synthetic Aperture Radar (SAR) images of the same regions. We discuss the consistency of the proposed problem, give a scheme for its regularization, derive the corresponding optimality system, and describe in detail the algorithm for the practical implementation of the reconstruction procedure. Introduction It is well-known that the visible red, green, and blue bands, and also the near-infrared (NIR) and SWIR regions of the electromagnetic spectrum of optical satellite images, have been used successfully to monitor crop cover, crop health, soil moisture, nitrogen stress, and crop yield (see, for instance, [14,37,43]). In view of this, qualitative analysis of vegetation and detection of changes in vegetation patterns are the keys to natural resource assessment and monitoring. Thus, it comes as no surprise that the detection and quantitative assessment of crop cover and green vegetation biomass is one of the major applications of remote sensing for environmental resource management and decision making. However, in spite of the fact that optical satellite multi-band images have a high resolution and can be easily captured by low-cost cameras, they are often corrupted because of poor weather conditions, such as rain, clouds, fog, and dust. Moreover, it is a typical situation that the degradation of optical images is such that we cannot rely even on the brightness recovery of the damaged regions. As a result, some subdomains of such images become invisible, or their coverage by the spectral vegetation indices is far from being reliable and consistent. However, in contrast to optical observation, radar images do not depend on reflected sunlight, and they can be used at night and under poor weather conditions. In the vegetation case, instead of giving an indication of biophysical processes in the plant, the radar backscatter rather contains information on the structure and moisture content of vegetation and the underlying soil. Therefore, the fusion of SAR and optical images is very important for the classification of land cover [42] and the estimation of soil moisture, to remove vegetation cover effects from the radar backscattering coefficient [19,20,47]. At the same time, because of the distinct natures of SAR and optical images, there exist huge radiometric and geometric differences between optical and synthetic aperture radar images. As a result, their structure and texture are drastically different. Because of this, it would be naive to suppose that the gray level of the original color image u_0 or its brightness on the damaged region D can be successfully recovered at a high level of accuracy through SAR images of this region.
Thus, in spite of the fact that the literature contains many approaches to the reconstruction of an image when information about the colors is not everywhere available (see, for instance, [26,35,48,49,51]), the traditional approaches to the exact reconstruction of damaged color images are no longer applicable in this case, and this makes the problem challenging. The aim of this paper is to propose and study a variational model for the exact reconstruction of damaged multi-band optical satellite images using results of their co-registration with SAR images of the same regions. The variational approach we consider is inspired, in some sense, by the famous ROF model for denoising, introduced by Rudin, Osher, and Fatemi in the context of grey-level functions (see [45]): minimize over $u \in BV(\Omega)$ the functional

$$|Du|(\Omega) + \lambda \|u - u_0\|^2_{L^2(\Omega)}, \tag{1.1}$$

where Ω ⊂ R² denotes the image domain, u_0 ∈ L²(Ω) is the given image, and λ > 0 is a tuning parameter. In order to be able to reconstruct edges in the image, the image is represented by a function of bounded variation u ∈ BV(Ω). So, the first term in (1.1) is the total variation |Du|(Ω), which has a regularizing effect but at the same time allows for discontinuities which may represent edges in the image. The second term is the fidelity term, which measures the distance to the given image. Often, some weaker norms, such as the H^{-1}-norm, are considered to define the latter term. However, the non-differentiability of the total variation is challenging from a computational point of view. Moreover, in some papers (see, for instance, [3,23]), it was shown that natural images are incompletely represented by BV(Ω) ∩ L²(Ω) functions. In fact, it can be shown that BV(Ω) ∩ L^∞(Ω) is contained in the fractional Sobolev space H^s(Ω) for 0 < s < 1/2 [52]. As a result, it was proposed to replace the total variation term in (1.1) by a squared fractional Sobolev norm; in other words, the minimization involves a fractional energy with parameters 0 < s < 1 and β ∈ [0, 1], where (−∆)^s denotes the fractional power of the Laplacian with zero Neumann boundary conditions. Then the first necessary and sufficient optimality condition determines the unique minimizer u via a linear elliptic partial differential equation that can be efficiently solved using, for instance, the Fourier spectral method [3] or the Stinga–Torrea extension [44]. Since, from the reconstruction point of view, it is desirable that the regularity of the solution to (1.2) is low in places in Ω where edges or discontinuities are present in u_true, and high in places where u_true is smooth or contains homogeneous features, it is of interest to consider (1.2) where s : Ω → [0, 1] is not a constant. For the details we refer to the recent paper [4]. Later on it was shown that the ROF model can be extended to various image processing problems, one of which is the TV inpainting method [11]. Since the colorization task can also be understood as inpainting the colors (following Sapiro's insight [48]), the TV minimizing approach has been widely used for different problems related to the colorization of black-and-white images in computer graphics, and also to the reconstruction of damaged color images [18].
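To make the ROF model (1.1) concrete, here is a minimal numerical sketch: it performs gradient descent on an eps-smoothed total variation plus the L² fidelity term. The function name, parameter values, and the smoothing trick are illustrative assumptions for exposition, not part of the original paper, which works with the exact (non-smooth) total variation.

```python
import numpy as np

def rof_denoise(u0, lam=10.0, eps=1e-3, step=1e-3, n_iter=500):
    """Gradient-descent sketch of the ROF model:
    minimize |Du|(Omega) + lam * ||u - u0||_{L^2}^2.

    The total variation is smoothed as sqrt(|grad u|^2 + eps^2) so that
    plain gradient descent applies; lam, eps, step, n_iter are
    illustrative values, not choices from the paper."""
    u = u0.astype(float).copy()
    for _ in range(n_iter):
        # forward differences (last row/column replicated, i.e. zero at the border)
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        px, py = ux / mag, uy / mag
        # approximate divergence via backward differences
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # Euler-Lagrange gradient: -div(grad u / |grad u|) + 2*lam*(u - u0)
        u -= step * (-div + 2.0 * lam * (u - u0))
    return u
```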
However, in contrast to the standard setting of the reconstruction problem for damaged color images, where the starting point is either the knowledge of the gray level of the original color image u_0 on a given open subset D of Ω (the damaged region) together with exact information about u_0 on Ω \ D (the undamaged region), or grey-level information in the damage region D ⊂ Ω modeled as a nonlinear distortion of the colors, in this paper we deal with the case where we do not have any information about u_0 inside D, but instead we assume that a SAR image u_SAR : Ω → R of the same region is given. When dealing with multi-band optical satellite images, the color of each pixel can be identified with a vector ξ = (ξ_1, ..., ξ_M)^t ∈ R^M, where the components ξ_i correspond to the different channels, which are crucial for the calculation of the major vegetation indices that have a wide application in many agricultural monitoring services. Such indices are well-established proxies for crop conditions and give us early insight into how well the crops are doing and whether they are in need of water or nutrients [56]. So, when we speak about multi-band optical satellite images, we suppose that they have at least red, green, blue, NIR, and SWIR channels. Let M be the number of channels that we are allowed to use. There are two preferred ways to represent multi-band images mathematically. The first is the model where an image u ∈ BV(Ω; R^M) is represented via its M channels u = (u_1, ..., u_M). The other way to represent an image is called the Chromaticity/Brightness (or CB) model, where a multi-band image u ∈ BV(Ω; R^M) is decomposed into two components: its chromaticity C := u/|u| = (u_1/|u|, ..., u_M/|u|) and its brightness $B := |u| = \sqrt{u_1^2 + \cdots + u_M^2}$. It is well known that treating the brightness separately from the chromaticity can give more flexibility in detail recovery (see, for instance, [17,27]). In particular, in [10], the authors showed that the CB model gives better color control and detail recovery for color image denoising compared to different color settings. In this paper, we use the CB multi-band model for the reconstruction of optical satellite images.
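The CB decomposition described above is straightforward to compute; the following sketch (with an assumed small guard eps against division by zero) illustrates it for an image array with M channels in the last axis.

```python
import numpy as np

def cb_decompose(u, eps=1e-12):
    """Sketch of the Chromaticity/Brightness (CB) decomposition of a
    multi-band image u with shape (H, W, M): B = |u| is the Euclidean
    norm over the M channels and C = u / B, so that C(x) lies on the
    unit sphere S^{M-1}. The eps guard is an assumption."""
    B = np.sqrt(np.sum(u**2, axis=-1))
    C = u / (B[..., None] + eps)
    return C, B

def cb_recompose(C, B):
    """Inverse map u = C * B used for the final image recovery."""
    return C * B[..., None]
```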
The paper is organized as follows. After recalling basic notions and background in Section 2, we give in Section 3 the precise setting of the reconstruction problem for multi-band damaged optical images. We do it in several steps. First we discuss the procedure of denoising and selective smoothing for the SAR data of the same region, assuming that these data are well co-registered with the original optical image; the main focus of this procedure is to preserve the edges inside the damage region D after the transformations related to the denoising and selective smoothing of SAR images. At the second step, we set the reconstruction problem of the brightness B_0 = |u_0| ∈ BV(Ω \ D) of a multi-band optical image in the damage domain D ⊂ Ω. The novelty of the model we propose is that the edge information for the brightness reconstruction is derived from the SAR data. In some sense, this model is related to piecewise smooth Mumford–Shah segmentation [39], since our model yields smooth spectral bands for each homogeneous region, while the edge information is enforced by a special weight function. We also present in this section a variational model for the recovery of the chromaticity data C in the damage domain D ⊂ Ω, provided the chromaticity components C_0 of the original multi-band image u_0 are well defined in Ω \ D. With that in mind, we introduce a special constrained minimization problem in which the minimization of the cost functional with respect to the chromaticity C is proposed to be affected by the brightness data B_0 in Ω \ D and by the smoothed SAR data u*_SAR in the damage region D. As a result, we expect that the diffusion of chromaticity is inhibited across the edges of B_0 in Ω \ D and the edges of u*_SAR in D, yielding a sharp transition in the function C in the remaining regions. After the chromaticity C_rec and the brightness B_rec are recovered by solving the corresponding minimization problems, the full multi-band image u in Ω can be restored by the rule u = C_rec B_rec. Thus, we provide a way to reconstruct the major vegetation indices in the damage region D. We notice that the proposed reconstruction scheme is rather flexible. Each spectral band diffuses in D as long as it meets the boundary of a region enforced by the SAR information. Moreover, by using the chromaticity model, natural spectral band blending is possible with respect to the geodesic direction in the chromaticity space S^{M-1}. In a homogeneous region, if different bands are given, this model will naturally diffuse the spectral bands by diffusing the vector values in S^{M-1} in the geodesic direction between bands. Our main intention in Section 4 is to show that the constrained minimization problems for the chromaticity and the brightness recovery are consistent and the corresponding sets of their solutions are nonempty. Since the chromaticity recovery problem is not trivial in its practical implementation (because of the nonconvex state constraint C(x) ∈ S^{M-1}), we discuss its relaxation and the asymptotic properties of the sequence of minimizers for the penalized minimization problems in Section 5. Optimality conditions for the penalized chromaticity recovery problem and the brightness reconstruction problem are studied in Sections 6-7. The last section is devoted to a short description of the crucial steps of the alternating minimization method that we suggest for the numerical simulation of the spectral indices from damaged multi-band satellite optical images.

Notation and Preliminaries

We begin with some notation. For vectors ξ ∈ R^N and η ∈ R^N, (ξ, η) = ξ^t η denotes the standard inner product in R^N, where t denotes the transpose operator, and $|\xi| = \sqrt{(\xi, \xi)}$ is the Euclidean norm. Let Ω ⊂ R² be a bounded open set with a Lipschitz boundary ∂Ω. For any subset D ⊂ Ω we denote by |D| its 2-dimensional Lebesgue measure L²(D). For a subset D ⊆ Ω, $\overline{D}$ denotes its closure and ∂D its boundary. The characteristic function χ_D of D is defined by χ_D(x) = 1 for x ∈ D and χ_D(x) = 0 otherwise. For a function u, we denote by u|_D its restriction to the set D ⊆ Ω, and by u|_{∂D} its trace on ∂D. Let C^∞_0(Ω) be the space of infinitely differentiable functions with compact support in Ω. For a Banach space X, its dual is X* and ⟨·, ·⟩_{X*;X} is the duality form on X* × X. By ⇀ and ⇀* we denote the weak and weak* convergence in normed spaces, respectively. For given M ∈ N and 1 ≤ p ≤ +∞, L^p(Ω; R^M) stands for the usual Lebesgue space of vector-valued functions; the inner product of two functions f and g in L²(Ω; R^M) is $(f, g)_{L^2(\Omega;\mathbb{R}^M)} = \int_\Omega (f(x), g(x))\,dx$. Let D'(Ω) be the dual of the space C^∞_0(Ω), i.e., D'(Ω) is the space of distributions in Ω. By H¹_0(Ω) we denote the closure of C^∞_0(Ω) with respect to the H¹-norm; since Ω is bounded, the norm in H¹_0(Ω) can be defined by $\|u\|_{H^1_0(\Omega)} = \|\nabla u\|_{L^2(\Omega)}$. We denote the dual of H¹_0(Ω) by H^{-1}(Ω).
Then (see [46, p. 401]) H^{-1}(Ω) is isometrically isomorphic to the Hilbert space of all distributions F ∈ D'(Ω) that can be written as $F = f_0 + \partial f_1/\partial x_1 + \partial f_2/\partial x_2$ with $f_0, f_1, f_2 \in L^2(\Omega)$. It can be shown that the standard norm in H^{-1}(Ω) is equivalent to a more convenient one (for the details we refer to [24]). The space of functions of bounded variation BV(Ω; R^M) consists of all u ∈ L¹(Ω; R^M) whose distributional gradient Du is a finite matrix-valued Radon measure; it is endowed with the norm $\|u\|_{BV(\Omega;\mathbb{R}^M)} = \|u\|_{L^1(\Omega;\mathbb{R}^M)} + |Du|(\Omega)$, where the total variation is $|Du|(\Omega) = \sup\big\{\int_\Omega u \cdot \operatorname{div}\varphi\,dx : \varphi \in C^1_0(\Omega;\mathbb{R}^{M\times 2}),\ |\varphi(x)| \le 1 \text{ in } \Omega\big\}$, the supremum being taken over this set of test fields.

Remark 2.1. For a Borel set B ⊂ Ω and an arbitrary function u ∈ BV(Ω; R^M), the mapping B → |Du|(B) is a Radon measure that is lower semi-continuous with respect to the L¹-convergence of sets. We recall that if a sequence {f_k}_{k∈N} ⊂ BV(Ω; R^M) converges strongly to some f in L¹(Ω; R^M) and $\sup_{k\in\mathbb{N}} \int_\Omega d|Df_k| < +\infty$, then (see, for instance, [1] and [5]) f ∈ BV(Ω; R^M) and the measures Df_k converge weakly* to Df, so that the notation $\int_\Omega \varphi\,dDf_k$ should be interpreted in the sense of this measure convergence. A simple criterion for weak* convergence can therefore be stated as follows (see [1, p. 125]): a sequence {f_k} weakly* converges to u in BV(Ω; R^M) if and only if it is bounded in BV(Ω; R^M) and converges to u strongly in L¹(Ω; R^M). The embedding results for BV-functions (see [5, p. 378]) play a crucial role for the variational problems that we study in this paper. We also recall the Poincaré-Wirtinger inequality: in the two-dimensional case, there exists a constant C_PW such that, for any u ∈ BV(Ω; R^M), $\|u - \overline{u}\|_{L^2(\Omega;\mathbb{R}^M)} \le C_{PW}\,|Du|(\Omega)$, where $\overline{u}$ stands for the mean value of u over Ω.

By analogy with the theory of Sobolev spaces, the notion of the trace operator can be extended to BV-functions. Namely, it is well known that each u ∈ BV(Ω) admits a trace on ∂Ω and that Green's formula holds for all φ ∈ C¹(Ω̄; R²), where ν(x) is the outer unit normal at H¹-almost all x in ∂Ω.

A special case of functions of bounded variation are the characteristic functions of sets of finite perimeter, which were introduced by R. Caccioppoli in [8]. Let E ⊆ Ω and let χ_E be its characteristic function. We say that E is a set with finite perimeter in Ω if χ_E ∈ BV(Ω). This means that the distributional gradient Dχ_E is a vector-valued measure with finite total variation. The total variation |Dχ_E|(Ω) is called the perimeter of E in Ω, i.e., P(E, Ω) = |Dχ_E|(Ω). It is known that for every set E with finite perimeter in Ω a generalized Gauss-Green formula holds, where ν_E is the inner unit normal to E. Since sets with finite perimeter are not smooth in general, the correct way to represent the measure Dχ_E is to introduce the so-called reduced boundary ∂*E; in this way, for every set E of finite perimeter the measure |Dχ_E| is carried by ∂*E. The following properties are well known (we mention two of them): (c) E → P(E, Ω) is lower semicontinuous with respect to the convergence in measure in Ω; (d) for any two sets E_1 and E_2 with finite perimeters in Ω, the relation $P(E_1 \cup E_2, \Omega) + P(E_1 \cap E_2, \Omega) \le P(E_1, \Omega) + P(E_2, \Omega)$ holds, with equality if the distance between these sets in Euclidean space R² is positive. For further information concerning functions of bounded variation, we refer to [1,5].

We also recall some auxiliary results concerning vector fields z ∈ L^∞(Ω; R²) whose divergence in the sense of distributions is an L²(Ω)-function. We denote by L^∞_{2,div}(Ω; R²) the space of all such vector fields z. Then the trace of the normal component of a field z ∈ L^∞_{2,div}(Ω; R²) on ∂Ω can be defined as a distribution Tr(z, ∂Ω) in the sense of Anzellotti (see [2]). In particular, if the domain Ω ⊂ R² is of class C¹ and z is a piecewise continuous vector field that can be extended continuously to Ω̄, then this trace coincides with the pointwise normal component (see [12, p. 22]). Utilizing the property ∂Ω = ∂*Ω, we have the following result (the so-called Gauss-Green formula; see [12, Theorem 5.1] and [28, Proposition 6.12] for the details). Lemma 2.1.
For any u ∈ BV(Ω) and z ∈ L^∞_{2,div}(Ω; R²) there exists a Radon measure on Ω, denoted by (z, Du), and a function Tr u|_{∂Ω} ∈ L¹(∂Ω, dH¹) that stands for the trace of u ∈ BV(Ω) on ∂Ω. The measures (z, Du) and |(z, Du)| are absolutely continuous with respect to |Du|; the defining identities hold for any open set Ω' ⊂ Ω, all φ ∈ C^∞_0(Ω'), and all Borel sets Ω' ⊂ Ω. Moreover, it turns out that in this case the estimate $|(z, Du)|(\Omega') \le \|z\|_{L^\infty(\Omega;\mathbb{R}^2)}\,|Du|(\Omega')$ holds true.

Setting of the Problem

Let Ω ⊂ R² be a bounded image domain with Lipschitz boundary ∂Ω, and let D ⊂ Ω be a Borel set with nonempty interior, sufficiently regular boundary, and such that |Ω \ D| > 0. We call D the damage region of a given multi-band image u_0 ∈ BV(Ω \ D; R^M). As mentioned in the Introduction, we deal with the case where we have no information about the original image u_0 inside D. So, the brightness B_0 = |u_0| and the chromaticity components C_0 = u_0/B_0 are only defined in Ω \ D. Instead of this, we assume that a SAR image u_SAR : Ω → R of the same region is given, and that this image is well co-registered with u_0 ∈ BV(Ω \ D; R^M) in Ω \ D. By default, we assume that all functions C_0, B_0, and u_SAR take values in the set of strictly positive real numbers R_+. The problem is to reconstruct the original multi-band image u in the damaged domain D ⊂ Ω. We will do it in several steps. First, we perform the denoising and selective smoothing procedure for the SAR data u_SAR in order to preserve the edges inside the damage region D. With that in mind, we propose to make use of the Perona-Malik equation [41] in combination with the image empirical mode decomposition method (IEMD) [38].

Denoising and selective smoothing procedure for the SAR data

Formally, this procedure can be described as follows.

A2. Perform the procedure of selective smoothing and denoising for the image J_n : Ω → R. With that in mind, we define a smooth version V_n of J_n as a solution of the initial-boundary value problem (3.1), where ∂V_n/∂ν is the normal derivative of V_n at the boundary of Ω, g : [0, ∞) → (0, ∞) is a continuous monotone decreasing function such that g(0) = 1 and g(t) > 0 for all t > 0 with lim_{t→+∞} g(t) = 0, and G_σ ∗ I denotes the convolution of a function I with the Gaussian kernel $G_\sigma(x) = \frac{1}{2\pi\sigma^2} e^{-|x|^2/(2\sigma^2)}$. Here, σ determines the width of the Gaussian kernel and we will refer to σ as the inner scale.

A3. Find the IEMD of Z_n := V_n + R_n, i.e., represent Z_n as the sum of its intrinsic oscillatory modes (known as IMFs) C_{k,n} and the last residue R_{L(n),n}.

A4. Set n := n + 1, R_n := R_{L(n),n}, and $J_n = \sum_{k=1}^{L(n)} C_{k,n}(x, y)$.

A5. If n ≤ N_0, then go to step A2. Otherwise, the procedure is stopped and the image u*_SAR := Z_n is declared a denoised and selectively smoothed version of the SAR image u_SAR.

We notice that the time variable t in the evolution equation (3.1) corresponds to a spatial scale analogous to σ, and by the regularity result in [41], we have V_n ∈ C¹_loc(0, ∞; H¹(Ω)) for each n ∈ N. Typical examples of the edge function g are $g(t) = \frac{1}{1 + (t/a)^2}$ and $g(t) = e^{-(t/a)^2}$ with a constant a > 0 (cf. (3.6)).
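Returning to step A2, a minimal discrete sketch of the Perona-Malik selective smoothing is given below. It uses the classical edge function g(t) = 1/(1 + (t/a)²); the time step, the iteration count, and the value of a are illustrative choices rather than values from the paper.

```python
import numpy as np

def perona_malik(img, n_steps=50, dt=0.15, a=10.0):
    """Minimal sketch of Perona-Malik selective smoothing (cf. step A2).

    g(t) = 1 / (1 + (t/a)^2) satisfies g(0) = 1 and g(t) -> 0 as
    t -> infinity, as required of the edge function above."""
    g = lambda t: 1.0 / (1.0 + (t / a) ** 2)
    v = img.astype(float).copy()
    for _ in range(n_steps):
        # differences towards the four neighbors (Neumann boundary via edge padding)
        p = np.pad(v, 1, mode="edge")
        dn = p[:-2, 1:-1] - v
        ds = p[2:, 1:-1] - v
        de = p[1:-1, 2:] - v
        dw = p[1:-1, :-2] - v
        # anisotropic update: diffusion is inhibited across strong edges
        v += dt * (g(np.abs(dn)) * dn + g(np.abs(ds)) * ds
                   + g(np.abs(de)) * de + g(np.abs(dw)) * dw)
    return v
```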
Reconstruction of the brightness

At this stage, we deal with the reconstruction problem of the brightness B_0 = |u_0| ∈ BV(Ω \ D) in the damage domain D ⊂ Ω. In order to recover the brightness data everywhere in D, we propose to solve the constrained minimization problem (3.7) (see [50] for comparison), where χ_{Ω\D}(x) = 1 for x ∈ Ω \ D and χ_{Ω\D}(x) = 0 otherwise. Here, from the physical point of view, we suppose that a feasible brightness should be non-negative. As for the regularizing term R(B), its role is to fill in the brightness content in the damage domain D, e.g., by diffusion and/or transport. After the pioneering works of Masnou and Morel [40] and Bertalmio et al. [7], the typical choice of regularizing term is the total variation, i.e., $R(B) = |DB|(\Omega) = \int_\Omega d|DB|$. Other examples of R(B) to be mentioned are the active contour model based on Mumford and Shah's segmentation [54], the inpainting scheme based on the Mumford-Shah-Euler image model [16], and the Euler elastica model with positive weights a and b. As follows from the coarea formula, the total variation admits a representation through the level lines Γ_s = {x ∈ Ω : B(x) = s} of the brightness for the gray value s. This circumstance motivates us to specify the cost functional J(B) in (3.7) as (3.9), where λ > 0 is a weight coefficient, g : [0, ∞) → (0, ∞) represents an edge function with the properties described in A2 (see Subsection 3.1), u*_SAR is the denoised and selectively smoothed version of the SAR image u_SAR, ∗ denotes the convolution operator, and G_σ stands for the Gaussian kernel (3.4). As follows from (3.7), the minimization of the functional (3.9) with respect to the brightness B is now affected not only by the brightness data B_0 in Ω \ D but also by the smoothed SAR data u*_SAR in the damage region D. The first term in the functional J is the total variation weighted by the linear combination of the edge functions g (see [27,57] for its utilization in color image inpainting problems). In view of the properties of the edge function g, we see that the value of χ_{Ω\D} g(|∇G_σ ∗ B_0|) + χ_D g(|∇u*_SAR|) is close to one in regions of Ω \ D where the original brightness B_0 is slowly varying, and in regions of D where the smoothed SAR image u*_SAR is also slowly varying. At the same time, this value is small at the edges of the brightness and of the SAR image, if both σ and the constant a in (3.6) are small enough. Hence, the first term of J acts as a regularization functional such that the diffusion of brightness is inhibited across the edges of B_0 in Ω \ D and across the edges of u*_SAR in D, yielding a sharp transition in the function B. At the same time, the second term in the cost functional J requires the BV-function B to be close to the brightness data B_0 in Ω \ D. Thus, the novelty of model (3.7), (3.9) is that the edge information for the brightness reconstruction is derived from the SAR data. In some sense, this model is related to piecewise smooth Mumford-Shah segmentation [39], since our model yields smooth spectral bands for each homogeneous region, while the edge information is enforced by the weight function χ_{Ω\D} g(|∇G_σ ∗ B_0|) + χ_D g(|∇u*_SAR|).
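The weight function that steers the brightness functional (3.9) (and, below, the chromaticity functional (3.10)) can be assembled as follows. This is a hedged sketch in which the parameter values and the zero-extension of B_0 inside D are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def edge_weight(B0, u_sar_smooth, mask_D, sigma=2.0, a=10.0):
    """Sketch of the weight
    chi_{Omega\\D} g(|grad(G_sigma * B0)|) + chi_D g(|grad u*_SAR|).

    B0 is assumed to be extended (e.g. by zero) inside D; mask_D is a
    boolean array marking the damage region; sigma and a are illustrative."""
    g = lambda t: 1.0 / (1.0 + (t / a) ** 2)
    grad_B0 = gaussian_gradient_magnitude(B0, sigma=sigma)  # |grad(G_sigma * B0)|
    gx, gy = np.gradient(u_sar_smooth)
    grad_sar = np.hypot(gx, gy)                             # |grad u*_SAR|
    return np.where(mask_D, g(grad_sar), g(grad_B0))
```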
Recovery of chromaticity via SAR-weighted harmonic map

In this subsection we present a variational model for the recovery of the chromaticity data C in the damage domain D ⊂ Ω, provided the chromaticity components C_0 of the original multi-band image u_0 are only defined in Ω \ D. We consider the cost functional (3.10) for the reconstruction of the chromaticity data C in D, where α > 0 is a weight coefficient and the remaining counterparts are as in the cost functional (3.9). We associate with (3.10) the corresponding constrained minimization problem (3.11). So, by analogy with the previous subsection, the minimization of the functional (3.10) with respect to the chromaticity C is also proposed to be affected by the brightness data B_0 in Ω \ D and by the smoothed SAR data u*_SAR in the damage region D. As a result, we expect that the diffusion of chromaticity is inhibited across the edges of B_0 in Ω \ D and the edges of u*_SAR in D, yielding a sharp transition in the function C. Thus, the novelty of model (3.11) is that the edge information for the chromaticity recovery is derived from the brightness and SAR data and, what is more important, this problem can be solved separately from the brightness reconstruction one.

Final recovery of the multi-band optical image

After the chromaticity C_rec and the brightness B_rec are recovered by solving the minimization problems (P_1) and (P_2), respectively, the full multi-band image u in Ω is given by u = C_rec B_rec. As a result, we obtain a way to reconstruct the major vegetation indices in the damage region D. In order to weigh up all pros and cons of this approach, we notice that the proposed reconstruction scheme is rather flexible. Each spectral band diffuses in D as long as it meets the boundary of a region enforced by the SAR information. Moreover, by using the chromaticity model, natural spectral band blending is possible with respect to the geodesic direction in the chromaticity space S^{M-1}. In a homogeneous region, if different bands are given, this model will naturally diffuse the spectral bands by diffusing the vector values in S^{M-1} in the geodesic direction between bands.

Existence Results

Our main interest in this section is to show that the constrained minimization problems (3.7) and (3.11) are consistent and the corresponding sets of their solutions are nonempty. We begin with the following result, stating the existence of a unique minimizer for the brightness reconstruction problem (for comparison, we refer to [9,36,50]).

Proof. First we notice that the minimization problem (3.7)-(3.9) is consistent, that is, J(B) < +∞ for any feasible B ∈ Ξ. Indeed, since Ω is a Lipschitz domain in R², it follows from the Poincaré inequality (2.4) that the space BV(Ω) is continuously embedded into L²(Ω). On the other hand, by the Sobolev embedding theorem we have a compact injection H¹_0(Ω) ↪ L²(Ω). Hence, by duality arguments and the Riesz representation theorem, we deduce that L²(Ω) = L²(Ω)* ↪ H¹_0(Ω)* = H^{-1}(Ω) with a compact embedding as well. To finalize the remark about consistency, it is enough to observe that B_0 ∈ L²(Ω) and that g(|∇G_σ ∗ B_0|) together with g(|∇G_σ ∗ u*_SAR|) are continuous functions. Hence J(B) < +∞ provided B ∈ Ξ, and the infimum in (3.7) is finite. Let {B_k}_{k∈N} be a minimizing sequence for (3.7), i.e., $\lim_{k\to\infty} J(B_k) = \inf_{B\in\Xi} J(B)$. Let us show that, in fact, the minimizing sequence {B_k}_{k∈N} is bounded in BV(Ω). Indeed, by the smoothness of |∇G_σ ∗ B_0| and |∇G_σ ∗ u*_SAR|, and by the positiveness of B_0 and u*_SAR, we deduce the existence of a constant γ > 0 bounding the weight function from below in Ω. Hence, $\int_\Omega d|DB_k|$ is uniformly bounded. Then, by the Poincaré-Wirtinger inequality (2.4), there exists a constant C_PW > 0, depending only on Ω, controlling the L²-distance of B_k to its mean value. Let us show that, in fact, we can indicate a constant C > 0 bounding {B_k} in L²(Ω). Arguing as in [36], we set $w_k = \big(\frac{1}{|\Omega|}\int_\Omega B_k\,dx\big)\chi_\Omega$ and v_k = B_k − w_k. Then it is clear that w_k, v_k ∈ BV(Ω) for all k ∈ N. Moreover, by compactness of the corresponding embedding, it follows from (4.5) and the triangle inequality that these quantities are bounded with some constant C > 0 independent of k. Thus, due to the compactness of the embedding L²(Ω) ↪ H^{-1}(Ω), there exists a constant C* > 0 depending on Ω such that {B_k} is bounded in L²(Ω). From this and (4.3), we finally deduce that the minimizing sequence is bounded in BV(Ω) uniformly with respect to k ∈ N.
Therefore, there exists a subsequence of {B_k}_{k∈N}, still denoted by the same index, and a function B_rec ∈ BV(Ω) such that B_k ⇀* B_rec in BV(Ω). Moreover, passing to a subsequence if necessary, we have B_k → B_rec almost everywhere in Ω and B_k ⇀ B_rec weakly in L²(Ω) (4.10). Since B_k ≥ 0 in Ω for all k ∈ N, it follows from (4.10)_1 that the limit function B_rec is subject to the same restriction. Thus, B_rec is a feasible solution of the reconstruction problem (3.7)-(3.9). Taking into account the weak convergence in L²(Ω) (see (4.10)_2) and the compactness of the embedding L²(Ω) ↪ H^{-1}(Ω), we conclude that B_k → B_rec strongly in H^{-1}(Ω) (4.11). Therefore, utilizing properties (4.9)_2 and (4.11), together with the lower semicontinuity of the weighted total variation, it follows from the above consideration that $J(B_{rec}) \le \liminf_{k\to\infty} J(B_k) = \inf_{B\in\Xi} J(B)$. Since the set of feasible solutions Ξ is convex and closed in BV(Ω), by Mazur's theorem this set is sequentially closed with respect to the weak* convergence in BV(Ω). Thus, B_rec ∈ Ξ and B_rec is a minimizer for the constrained minimization problem (3.7)-(3.9). It remains to show that B_rec is the unique minimizer of this problem. Indeed, let us assume the converse, and let B' ∈ Ξ and B'' ∈ Ξ be two distinct minimizers. Then, by the strict convexity of the norm ‖·‖_{H^{-1}(Ω)} and the convexity of the set of feasible solutions Ξ, the midpoint of B' and B'' would have a strictly smaller cost, which brings us into conflict with the initial assumptions. Thus, B_rec is the unique minimizer of problem (3.7)-(3.9). The proof is complete.

We proceed further with the study of the SAR-weighted chromaticity recovery problem (3.11). Our aim is to show that this problem has a solution. The proof relies on a Poincaré-type inequality (4.13) (see [13, Lemma 4.1]). We are now in a position to prove the existence result concerning the constrained minimization problem (3.11) with the convex functional F : BV(Ω; R^M) → R_+ and the non-convex sphere constraint C(x) ∈ S^{M-1} for a.a. x ∈ Ω. Since the indicated type of constraint is not trivial to satisfy, in practice the problem (P_2) requires a relaxation, passing from the non-convex set |C| = 1 to the convex unit ball |C| ≤ 1, with the corresponding penalization of this constraint in the minimized functional. This issue will be considered in detail in the next section. Let Ω ⊂ R² be a bounded image domain with Lipschitz boundary ∂Ω, and let D ⊂ Ω be a damage region such that |Ω \ D| > 0 and int D ≠ ∅. Then, for any given B_0 ∈ BV(Ω \ D) and u_SAR ∈ L^∞(Ω) such that B_0 > 0 and u*_SAR > 0 a.e. in Ω, the minimization problem (P_2) has a solution, i.e., there exists a function C_rec ∈ BV(Ω; S^{M-1}) such that $F(C_{rec}) = \inf_{C \in BV(\Omega; S^{M-1})} F(C)$.

Proof. Since 0 ≤ F(C) < +∞ for all C ∈ BV(Ω; S^{M-1}), it follows that there exists a non-negative value µ ≥ 0 such that $\mu = \inf_{C \in BV(\Omega; S^{M-1})} F(C)$. Let {C_k}_{k∈N} ⊂ BV(Ω; S^{M-1}) be a minimizing sequence for the problem (3.11), i.e., C_k ∈ BV(Ω; S^{M-1}) for all k ∈ N and lim_{k→∞} F(C_k) = µ. So, we can legitimately suppose that F(C_k) ≤ µ + 1 for all k ∈ N. By the smoothness of |∇G_σ ∗ B_0| and |∇G_σ ∗ u*_SAR|, and by the positiveness of B_0 and u*_SAR, we deduce the existence of a positive constant γ such that χ_{Ω\D} g(|∇G_σ ∗ B_0|) + χ_D g(|∇G_σ ∗ u*_SAR|) > γ in Ω. Hence, $\int_\Omega d|DC_k| \le \gamma^{-1}(\mu + 1)$ (4.16). Taking this estimate into account, it follows from Lemma 4.1 that the sequence {C_k} is bounded in BV(Ω; R^M). So, there exists a function of chromaticity C_rec ∈ BV(Ω; R^M) such that, up to a (not relabeled) subsequence, C_k → C_rec strongly in L¹(Ω; R^M) and C_k(x) → C_rec(x) almost everywhere in Ω. Since C_k(x) ∈ S^{M-1} for a.a.
x ∈ Ω, it follows from the pointwise convergence property that C_rec ∈ BV(Ω; S^{M-1}). It remains to take into account estimate (4.16) and the lower semi-continuity property (2.3) in order to deduce $|DC_{rec}|(\Omega) \le \liminf_{k\to\infty} |DC_k|(\Omega)$ (4.17). Utilizing again the pointwise convergence C_k(x) → C_rec(x) almost everywhere in Ω and Fatou's lemma, we obtain the corresponding inequality (4.18) for the fidelity terms. Hence, by (4.17) and (4.18), we finally obtain $F(C_{rec}) \le \liminf_{k\to\infty} F(C_k) = \mu$, so that C_rec ∈ BV(Ω; S^{M-1}) is a solution of the minimization problem (3.11).

On Relaxation of the Chromaticity Recovery Problem

As mentioned in the previous sections, the constrained minimization problem (P_2) is not trivial in its practical implementation because of the non-convex state constraint C(x) ∈ S^{M-1}. It is worth noticing that in recent years minimization problems over manifold-valued data have attracted much interest (see, for instance, [6,55,57] and the references therein). Many interesting and well-suited approaches for non-convex optimization problems have been proposed, including augmented Lagrangian methods, penalty methods, alternating direction minimization, and others. In this section, by analogy with the recent results developed in [6] (see also [29-32,34]), we make use of the relaxation approach, passing from the non-convex constraint set |C(x)| = 1 to the convex unit ball |C(x)| ≤ 1 in R^M, with further penalization of this constraint in the corresponding cost functional. With that in mind, for any real number ε > 0 and any given B_0 ∈ BV(Ω \ D) and u_SAR ∈ L^∞(Ω), we consider the convex functional F_ε and the corresponding minimization problem (P_ε) with a convex constraint. Hereinafter, we assume that the parameter ε varies within a strictly decreasing sequence of positive real numbers which converges to 0; when we write ε > 0, we consider only the elements of this sequence. We begin with an existence and uniqueness result for the penalized variational problem (P_ε).

Proof. The proof of the existence result is similar to that of Theorem 4.1; for the uniqueness of such a solution it is enough to apply convexity arguments. Therefore, we sketch only the main points. Due to the standard properties of the convolution operator, we see that |∇G_σ ∗ B_0| and |∇G_σ ∗ u*_SAR| are bounded functions in the closure of Ω \ D and D, respectively. Hence, by the definition of the edge function g, it follows that there exists a positive constant γ bounding the weight from below. As a result, we deduce that the cost functional F_ε is coercive on the set of feasible solutions Λ = {C ∈ BV(Ω; R^M) : |C(x)| ≤ 1 for a.a. x ∈ Ω} and lower semi-continuous with respect to the weak* convergence in BV(Ω; R^M). Then a solution of the variational problem (P_ε) exists for any ε > 0. Let C^rec_ε be a solution of (P_ε) for a given ε > 0. Let us show that the sequence {C^rec_ε}_{ε>0} is bounded in BV(Ω; R^M). To this end, let Ĉ be a constant vector field in R^M such that |Ĉ| = 1; then it is clear that Ĉ ∈ Λ and F_ε(C^rec_ε) ≤ F_ε(Ĉ) (5.3). Since the right-hand side of (5.3) does not depend on ε, it follows from the coercivity estimate and the Poincaré inequality (4.13), with a constant C > 0 coming from the latter, that the sequence {C^rec_ε}_{ε>0} is bounded in BV(Ω; R^M) (5.6). We proceed further with the study of the asymptotic properties of the sequence of minimizers {C^rec_ε}_{ε>0} as ε → 0. For the technique and more details, we refer to [33].
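As an illustration of the relaxation step, the sketch below shows one common way to penalize the relaxed sphere constraint and to project onto the unit ball; the specific quadratic penalty is an assumption for exposition, since the paper does not display its penalty term here.

```python
import numpy as np

def sphere_penalty(C, eps):
    """Illustrative quadratic penalty for the relaxed constraint |C(x)| = 1:
    (1/eps) * sum over pixels of (1 - |C|^2)^2. This concrete form is an
    assumption, not the paper's formula."""
    norm2 = np.sum(C**2, axis=-1)
    return (1.0 / eps) * np.sum((1.0 - norm2) ** 2)

def project_to_ball(C):
    """Pointwise projection onto the convex unit ball |C(x)| <= 1."""
    norm = np.sqrt(np.sum(C**2, axis=-1, keepdims=True))
    return C / np.maximum(norm, 1.0)
```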
Theorem 5.1. Let {C^rec_ε}_{ε>0} be a sequence of minimizers of the penalized minimization problems (P_ε). Then this sequence is compact with respect to the weak* convergence in BV(Ω; R^M), and each of its weak* cluster points is a solution of the chromaticity recovery problem (3.11).

Proof. Since the set Λ is sequentially closed with respect to the weak* convergence in BV(Ω; R^M), and since sets bounded in the BV-norm are relatively compact in L¹(Ω), it follows from (5.6) that there exist a function C_0 ∈ Λ ⊂ BV(Ω; R^M) and a subsequence {C^rec_{ε_k}}_{k∈N} such that C^rec_{ε_k} → C_0 strongly in L¹(Ω; R^M) and almost everywhere in Ω as k → ∞. From this we deduce (5.7). Since, for any ε > 0, the penalty term in F_ε is uniformly bounded, utilizing this fact together with (5.7) we obtain that |C^rec_{ε_k}| → 1 strongly in L²(Ω) and |C_0(x)| = 1 for a.a. x ∈ Ω. Hence, C_0 ∈ BV(Ω; S^{M-1}). It remains to show that the function C_0 solves the chromaticity recovery problem (P_2). Indeed, for an arbitrary function C ∈ BV(Ω; S^{M-1}), we obviously have |C(x)| = 1 a.e. in Ω and, therefore, F_{ε_k}(C^rec_{ε_k}) ≤ F_{ε_k}(C) = F(C) (5.9). Then, passing in the right-hand side of (5.9) to the limit as k → ∞ and taking into account Fatou's lemma and the lower semi-continuity property of the total variation (as we did in the proof of Theorem 4.1), we obtain F(C_0) ≤ F(C). Hence, this implies that $F(C_0) = \inf_{C \in BV(\Omega; S^{M-1})} F(C)$.

Optimality System for the Penalized Chromaticity Recovery Problem

The main object of our consideration in this section is the constrained minimization problem (5.2). Following in general aspects the technique of Temam for the problem of minimal surfaces [53] and the duality results from [15], we derive in this section the necessary optimality conditions in order to characterize the solution of the problem (P_ε). To begin with, we reformulate problem (5.2) as an equivalent problem on the space X := L²(Ω; R^M). For this reason we define the corresponding functionals on X, where K(B_0, u_SAR) is given by (4.12); notice that all of these functionals are well defined on X. In view of this, we extend the energy functional F_ε to the entire set L²(Ω; R^M) by the rule (6.5). Then it is clear that a minimizer C^rec_ε ∈ Λ of (5.2) is also a minimizer of the modified problem (6.6), because the inclusion C ∈ Λ is equivalent to the consistency condition of the problem (6.6), i.e., C ∈ Λ ⇔ F_ε(C) < +∞ and G(C) ≤ 0 (6.7). Our next step is to derive some necessary conditions for minimizers of the constrained minimization problem (6.6). A fundamental difficulty that typically appears in this case is the lack of differentiability of the energy functional (6.5). Since the functionals F_ε : L²(Ω; R^M) → R and G : L²(Ω; R^M) → R are convex, a necessary condition for a minimizer C^rec_ε ∈ Λ should employ the convex subdifferential ∂F_ε(C^rec_ε), which is a subset of the dual space L²(Ω; R^M)* = L²(Ω; R^M). Before proceeding further, we recall the definition of the subdifferential ∂F(u) of a convex proper functional F : X → R ∪ {+∞} at some element u ∈ X. If X = L²(Ω), then X* = X and, for given u ∈ X, an element ξ ∈ X belongs to ∂F(u) if and only if $F(v) \ge F(u) + (\xi, v - u)_X$ for all v ∈ X. Thus, ξ ∈ ∂F(u) if and only if u is a minimizer on X of the variational problem $\min_{v \in X}\,[F(v) - (\xi, v)_X]$. We begin with the following technical results (for the proof and their substantiation we refer to [24, Section 3]). Proposition 6.1. Let B_0 ∈ BV(Ω \ D), C_0 ∈ BV(Ω; S^{M-1}), and u_SAR ∈ L^∞(Ω) be given functions. Then the functionals E_2, E_3 : L²(Ω; R^M) → R are convex and Gâteaux differentiable in L²(Ω; R^M), with derivatives given for all H ∈ L²(Ω; R^M) by the corresponding formulas.
(6.10) Our next step is to describe the structure of the subdifferential of the functional E_1 given by the rule (6.1). To do so, we make use of the following result (for the proof we refer to [24, Proposition 4.4]). As immediately follows from (6.11), a vector field z can formally be identified with the quotient DC/|DC| provided |DC| is nonzero and well defined at a given point x ∈ Ω. However, due to Anzellotti's theory of pairing, the correct interpretation of the quotient DC/|DC| can be made through the equality (z, DC) = |DC|, where the field z ∈ L^∞_{2,div}(Ω; R^{M×2}) is such that ‖z‖_{L^∞(Ω;R^{M×2})} ≤ 1 (see [12] for the details). We are now in a position to derive the necessary conditions for the unique minimizer C^rec_ε ∈ Λ ⊂ BV(Ω; R^M) of the constrained minimization problem (5.2). Since the functionals J_i : L²(Ω; R^M) → R and G : L²(Ω; R^M) → R are convex, the necessary conditions should employ the convex subdifferentials ∂J_i(C^rec_ε) and ∂G(C^rec_ε). As a result, utilizing Proposition 6.3 in [28] and the sum rule for subdifferentials of convex functionals, we deduce the existence of elements of the corresponding subdifferentials and, in view of Propositions 6.1 and 6.2, arrive at the announced relations (6.13)-(6.16).

Optimality Conditions for the Brightness Reconstruction Problem

The main object of our consideration in this section is the constrained minimization problem (7.1), where the set of admissible solutions Ξ is defined in (3.8) and the weight multiplier K(B_0, u_SAR) is given by (4.12). By analogy with the previous section, we define the corresponding functionals on L²(Ω), extended by +∞ for B ∈ L²(Ω) \ BV(Ω); notice that all of these functionals are well defined on L²(Ω). In view of this, we extend the energy functional J to the entire set L²(Ω) by the same rule. Then it is clear that a minimizer B_rec ∈ Ξ of (7.1) is also a minimizer of the modified problem, because the inclusion B ∈ Ξ is equivalent to the consistency condition of the problem (7.1).

Schemes for Numerical Simulations

In this section we briefly describe the crucial steps of the alternating minimization method that we suggest for the numerical simulation of the spectral indices from damaged multi-band satellite optical images. Since the reconstruction problem is split into two independent constrained minimization problems (P_1) and (P_ε), we begin with the chromaticity recovery problem (3.11). Due to the operator splitting technique (see, for instance, [21,22]), we pass to the following regularized version of problem (5.2): find C⁰_ε ∈ BV(Ω; R^M) and V⁰_ε ∈ L²(Ω; R^M) such that $F_\varepsilon(C^0, V^0) = \inf_{C,V} F_\varepsilon(C, V)$ subject to V = C, |V| = 1 (8.1). Following the method of multipliers and gradients proposed in [25], we associate with the constrained minimization problem (8.1) an augmented Lagrangian functional L(C, V, λ_1, λ_2), where λ_1 = λ_1(x) ∈ R^M and λ_2 = λ_2(x) ∈ R are the Lagrange multipliers associated with the constraints V = C and |V| = 1, respectively, and r_1 > 0 is the penalty parameter corresponding to the Lagrange multiplier λ_1. Let, at some iteration k ∈ N, C^k, V^k, λ_1^k, and λ_2^k be given. The idea is then to exploit the alternating minimization approach to find the saddle points of the Lagrangian L(C, V, λ_1, λ_2) iteratively. With that in mind, we propose an iteration scheme (see [57, Section 2] for comparison) in which $\frac{\tau}{2}\int_\Omega |C - C^k|^2\,dx$, with τ > 0, plays the role of the proximal term.
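A schematic rendering of the iteration scheme just described is sketched below. The two inner solvers are placeholders for the unconstrained subproblems, and the exact multiplier update steps are assumptions modeled on the standard method of multipliers rather than a verbatim transcription of the scheme in [57].

```python
import numpy as np

def alternating_minimization(C0, solve_C, solve_V, r1=1.0, n_iter=100):
    """Schematic augmented-Lagrangian iteration for problem (8.1).

    solve_C and solve_V stand for the two inner unconstrained
    minimizations (the paper solves them separately per spectral
    channel); lam1 and lam2 are the multipliers for the constraints
    V = C and |V| = 1, and r1 is the penalty parameter."""
    C = C0.copy()
    V = C0.copy()
    lam1 = np.zeros_like(C0)            # multiplier for V = C
    lam2 = np.zeros(C0.shape[:-1])      # multiplier for |V| = 1
    for _ in range(n_iter):
        C = solve_C(V, lam1)            # C-subproblem (weighted TV + proximal term)
        V = solve_V(C, lam1, lam2)      # V-subproblem (pointwise, sphere penalty)
        lam1 = lam1 + r1 * (C - V)      # ascent step on the coupling constraint
        lam2 = lam2 + (np.sqrt(np.sum(V**2, axis=-1)) - 1.0)  # ascent on |V| = 1
    return C, V
```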
So, the original chromaticity recovery problem can be reduced to a sequence of two unconstrained minimization problems and two multiplier updates per iteration. Moreover, their solutions can be obtained separately for each spectral channel. For the details of this procedure, its substantiation, and its practical implementation, we refer to the recent paper [57]. As for the numerical scheme for the brightness reconstruction problem (7.1), we refer to [50], where a generalization of Chambolle's algorithm to the case of an H^{-1}-constrained minimization of the total variation for the TV-H^{-1} inpainting problem has been proposed (see [50, Section 2.3]).
Integration of Single-Photon Emitters in 2D Materials with Plasmonic Waveguides at Room Temperature Efficient integration of a single-photon emitter with an optical waveguide is essential for quantum integrated circuits. In this study, we integrated a single-photon emitter in a hexagonal boron nitride (h-BN) flake with a Ag plasmonic waveguide and measured its optical properties at room temperature. First, we performed numerical simulations to calculate the efficiency of light coupling from the emitter to the Ag plasmonic waveguide, depending on the position and polarization of the emitter. In the experiment, we placed a Ag nanowire, which acted as the plasmonic waveguide, near the defect of the h-BN, which acted as the single-photon emitter. The position and direction of the nanowire were precisely controlled using a stamping method. Our time-resolved photoluminescence measurement showed that the single-photon emission from the h-BN flake was enhanced to almost twice the intensity as a result of the coupling with the Ag nanowire. We expect these results to pave the way for the practical implementation of on-chip nanoscale quantum plasmonic integrated circuits.

Introduction

Efficient coupling of a single-photon emitter with an optical waveguide is essential for implementing a quantum photonic integrated circuit [1]. In particular, when the single-photon emitter is coupled to a plasmonic waveguide, the local density of states (LDOS) of the emitter is increased, and the photon emission is enhanced [2-8]. This feature was observed by integrating single-photon emitters with randomly dispersed metal nanowires on a substrate [9-12]. However, as the optical properties of the emitter strongly depend on its distance and direction relative to the metal nanowire, it is necessary to precisely place the emitter and nanowire in the desired positions in order to yield a large photon enhancement and a high coupling efficiency. Recently, the positions of metal nanowires and metal nanoparticles have been controlled by using an atomic force microscopy (AFM) tip to couple them with single-photon emitters such as nanodiamonds containing color centers [13-15], quantum dots [16], and defects in hexagonal boron nitride (h-BN) [17]. Single-photon emitters have also been reported to couple with surface-plasmon-polariton (SPP) waveguides formed on a metal surface by electron-beam lithography or with plasmonic V-grooves fabricated by focused ion-beam (FIB) milling [7-20]. However, alignment of the plasmonic waveguide at an accurate angle is necessary to investigate the dependence of the coupling efficiency on the polarization of the single-photon emitter, which has yet to be demonstrated. The N_B V_N defect in h-BN is well known for its ability to form a promising single-photon source at room temperature, exhibiting ultrabright light emission with linear polarization and a narrow linewidth [21-24]. Optically detected magnetic resonance (ODMR) originating from the h-BN defect has also been reported [25]. In this study, we demonstrate efficient coupling between the single-photon emitter in h-BN and a plasmonic waveguide in the form of a Ag nanowire. Numerical simulation shows that the efficiency of light coupling between the emitter and the Ag nanowire depends on the position and polarization of the emitter.
To precisely control the position and the direction of the nanowire in the experiment, we used a poly(dimethylsiloxane) (PDMS)/poly(propylene carbonate) (PPC) stamping method. The photoluminescence (PL) measurements before and after the coupling of the emitter with the Ag nanowire at room temperature verify that the Ag nanowire enhances the single-photon emission. The propagation of single photons along the plasmonic waveguide is also demonstrated.

Sample Preparation

h-BN flakes (model: BLK-h-BN solution (2D Semiconductors, Scottsdale, AZ, USA)) in solution were dispersed onto a SiO2 substrate. The sample was annealed at 850 °C for 30 min at 1 Torr under Ar at a flow rate of 100 sccm to prevent oxidation and increase the emitter density. The PDMS surface was exposed to O2 plasma with a power of 30 W and a flow rate of 40 sccm. The PPC was spin-coated onto the PDMS at 1000 rpm for 30 s and baked at 60 °C for 3 min. The Ag nanowires dispersed on the PPC were transferred onto the h-BN flakes using the PDMS/PPC method, where the nanowire's axis was perpendicular to the polarization direction of the emitters. The PDMS and PPC layers were then separated by heating at 140 °C for 20 min. Then, the samples were immersed in acetone for 10 min to completely remove the PPC from the SiO2 substrate.

Numerical Simulations

A three-dimensional (3D) finite-difference time-domain (FDTD) method was used for numerical modeling. The simulation domain was divided by a 2.5 nm mesh, and a perfectly matched layer (PML) was employed as the boundary condition. The refractive index and extinction coefficient of Ag were set at 0.266 and 4.05, respectively; those of Au were set at 0.162 and 2.95, respectively. The thickness of the SiO2 substrate was 280 nm. The refractive indices of Si and SiO2 were 3.93 and 1.46, respectively, at a wavelength of 600 nm. Then, an electric dipole with a wavelength of 600 nm was introduced as the single-photon emitter. The power emitted via various channels was calculated by normalizing it with the power of the dipole on the SiO2 substrate. We varied the distance between the dipole and the center of the nanowire from 50 to 200 nm, and we varied the angle between the polarization direction of the dipole and the nanowire axis from 0 to 90°.

Experimental Setup

The optical properties of the single-photon emission were measured using a home-built confocal microscope setup consisting of two sets of galvanometers to control the excitation and detection paths individually. Pulsed (SuperK EXTREME (NKT Photonics, Birkerød, Denmark)) or continuous-wave (CW) lasers (OBIS 532 nm LS (Coherent Inc., Santa Clara, CA, USA)) with a wavelength of 532 nm were used as the pumping source. Emission from the h-BN was collected by a 100× objective lens with a numerical aperture of 0.90 and coupled to a single-mode fiber to send the PL to either an avalanche photodiode (APD) or the spectrometer (Acton SP2500 (Teledyne Princeton Instruments, Trenton, NJ, USA)). To measure the second-order correlation, we used a Hanbury Brown and Twiss (HBT) interferometer setup composed of two APDs and a time-correlated single-photon counting module (PicoHarp 300 (PicoQuant, Berlin, Germany)). The emission from the sample was divided between the two APDs using a 50:50 fiber beam splitter. Polarization measurements were conducted by placing a linear polarizer in front of the fiber coupler.
Results

Figure 1a schematizes the coupling of single photons emitted from a defect in h-BN to the Ag nanowire. The photons propagate along the nanowire surface as single plasmons and are converted to single photons again at the end of the nanowire. While propagating along the Ag nanowire, certain photons are converted to surface plasmons, whereas other photons are absorbed by the metal. This was simulated using our home-built 3D FDTD software. In our simulation, a Ag nanowire with a length of 3 µm and a diameter of 100 nm was placed on the SiO2 substrate. An electric dipole with a wavelength of 600 nm was located on the substrate, whereas the distance (D) between the nanowire and the dipole and the polarization direction (θ) of the dipole were varied (Figure 1b, left). We first calculated the electric field intensity distribution when D = 105 nm and θ = 0° (inset of Figure 1a). Light scattering is observed at both ends of the nanowire, whereas the direct emission from the dipole into the air occurs in the middle of the nanowire.

Figure 1 caption (in part): The schematic on the left shows the distance (D) between the emitter and the Ag nanowire and the polarization direction (θ) of the emitter. The schematic on the right shows the closed surfaces used to calculate the power emitted via various channels: P_E, total power from the dipole emitter; P_Rad, total radiative power; P_fs, direct radiative power from the emitter to free space; P_c, power transferred to the nanowire; P_A, nonradiative power; P_S, power scattered by the nanowire. (c-e) Calculated 2D maps of the total power enhancement (c), radiative power enhancement (d), and scattered power enhancement (e), as functions of D and θ; θ varies from 0 (normal to the nanowire) to 90° (parallel to the nanowire), and D varies from 50 to 200 nm.
To investigate the effects of D and θ on the coupling efficiency, we calculated the power emission via various channels as functions of D and θ. We considered all possible channels in terms of the following parameters: total power from the dipole emitter (P_E), total radiative power (P_Rad), direct radiative power from the emitter to the free space (P_fs), power from the emitter to the nanowire (P_c), total nonradiative power (P_A), and power scattered by the nanowire (P_S). As shown in the panel on the right in Figure 1b, P_E, P_Rad, and P_A were calculated by integrating the Poynting vectors over the closed surface E (surrounding the dipole), the closed surface T (surrounding the dipole and nanowire), and the closed surface A (surrounding only the nanowire), respectively. P_S was calculated by integrating the Poynting vectors of the scattered fields over the closed surface A, and P_c was the sum of P_S and P_A [26,27]. Figure 1c-e shows P_E, P_Rad, and P_S normalized by the total emitted power without the Ag nanowire, P_0, when θ varies from 0 (normal to the nanowire) to 90° (parallel to the nanowire), and D varies from 50 (the radius of the nanowire) to 200 nm. As the dipole emitter approaches the Ag nanowire and the polarization direction of the dipole is normal to the axis of the nanowire, the dipole is efficiently coupled to the Ag nanowire with an increased total power enhancement (Figure 1c). When D is 50 nm and θ is 0°, the total power enhancement is 3.97. In addition, the radiative power enhancement increases as D and θ become smaller (Figure 1d). When D is 50 nm and θ is 0°, the radiative power enhancement is 2.40. Because of the power absorbed by the metal, the radiative power enhancement is less than the total power enhancement. Furthermore, it is important to enhance the radiative power to greater than 1 to obtain a strong single-photon emission. Even if the Ag nanowire is placed near the emitter (D = 50 nm), θ should be smaller than 60° to achieve this objective (i.e., for the radiative power enhancement to be greater than 1). When θ is 0° (normal to the nanowire), the radiative power enhancement is greater than 1 if D is smaller than 200 nm. This result indicates that both D and θ affect the coupling of single photons to the Ag nanowire. Next, we calculated the power enhancements depending on the dimensions and materials of the nanowires (Figure 2). Figure 2a shows the results obtained when only changing the length of the Ag nanowire from 1 to 3 µm, while keeping the nanowire diameter, D, and θ fixed at 100 nm, 50 nm, and 0°, respectively. The simulation shows that the enhancements of P_E, P_Rad, and P_S are almost the same even when the nanowire length changes. The power fluctuation of each curve may result from the Fabry-Perot resonance in the Ag nanowire [28]. In addition, we studied the enhancement of P_Rad as a function of D for Ag and Au nanowires with different diameters (Figure 2b).
Diameters of 100 and 70 nm for the Ag nanowire and a diameter of 100 nm for the Au nanowire were examined, while the other parameters were fixed. In all cases, the radiative power enhancement decreases as D increases. For a Ag nanowire with a smaller diameter, the electric field intensity of surface plasmons is weaker and P_Rad is smaller for the same D. For the Au nanowire, the power enhancement is also weaker, due to the increased optical loss of Au, when the other structural parameters are all the same.
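The power bookkeeping used in Figures 1 and 2 reduces to a few identities; the following sketch (with illustrative argument names, not an actual FDTD API) collects them.

```python
def power_channels(P_E, P_fs, P_S, P_A, P_0):
    """Sketch of the power bookkeeping from the simulation section.

    Inputs are Poynting-flux integrals from an FDTD run: P_E is the total
    emitted power, P_fs the direct free-space radiation, P_S the power
    scattered by the nanowire, P_A the power absorbed in the metal, and
    P_0 the power of the same dipole without the nanowire (normalization).
    P_c = P_S + P_A follows the text; P_Rad = P_fs + P_S is the implied
    radiative total."""
    P_c = P_S + P_A            # power coupled into the nanowire
    P_Rad = P_fs + P_S         # total radiative power
    return {
        "total_enhancement": P_E / P_0,
        "radiative_enhancement": P_Rad / P_0,
        "scattered_enhancement": P_S / P_0,
        "coupling_efficiency": P_c / P_E,
    }
```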
To verify these theoretical results, we conducted a coupling experiment between a single-photon emitter (h-BN defect) and a Ag nanowire. First, h-BN flakes were dispersed on a SiO2 substrate with markers for alignment (Figure 3a). The optical properties of the h-BN defects were measured using a confocal scanning fluorescence microscope setup with a 532 nm CW pump laser. The emitted photons were collected by a 100× microscope objective lens with a numerical aperture of 0.9 that was coupled to a single-mode fiber to send the photons to the APD or spectrometer. Confocal fluorescence scanning of the sample revealed bright and localized spots (inset of Figure 3b). The spectrum of the bright spot exhibits a sharp zero-phonon line (ZPL) peak at a wavelength of 600 nm with a linewidth of 7 nm (Figure 3b). We also measured the second-order autocorrelation (g²(τ)) of the photon emission using a Hanbury Brown and Twiss interferometer setup (Figure 3c). The measured g²(τ) data were fitted using a three-level model equation of the standard form $g^{(2)}(\tau) = 1 - (1 + a)e^{-|\tau|/\tau_1} + a\,e^{-|\tau|/\tau_2}$, where the parameters τ_1 and τ_2 represent the radiative transition lifetime and the metastable state lifetime, respectively, and a is a bunching factor. The measured g²(0) value was 0.18, exhibiting a clear characteristic of single-photon emission. Additionally, we measured the polarization at the ZPL wavelength. The photon emission is linearly polarized with a polarization visibility of 0.64 (Figure 3d).
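A fit of the measured histogram to the three-level model can be done with standard tools; the sketch below assumes the functional form given above and uses hypothetical data file names and initial guesses.

```python
import numpy as np
from scipy.optimize import curve_fit

def g2_three_level(tau, tau1, tau2, a):
    """Three-level model for the second-order autocorrelation, in the
    form stated above (an assumption about the exact fit function)."""
    return 1.0 - (1.0 + a) * np.exp(-np.abs(tau) / tau1) \
               + a * np.exp(-np.abs(tau) / tau2)

# Illustrative usage with hypothetical delay/coincidence data:
# tau_ns, g2_data = np.loadtxt("hbt_histogram.txt", unpack=True)
# popt, _ = curve_fit(g2_three_level, tau_ns, g2_data, p0=(2.0, 100.0, 0.2))
# print("fitted g2(0) =", g2_three_level(0.0, *popt))
```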
Next, the PDMS/PPC stamping method was used to place the Ag nanowire at the desired position near the h-BN flake. The stamping method comprises the following steps (additional details are given in Section 2): First, the PPC was spin-coated on the PDMS at 1000 rpm for 30 s and baked at 60 °C for 3 min. After dispersing the Ag nanowires onto the PPC, an isolated single Ag nanowire was selected. This target Ag nanowire was transferred onto a single-photon emitter in the h-BN flake, with the direction of the Ag nanowire perpendicular to the emitter polarization (Figure 4a, (i)). Then, the PDMS film was separated from the PPC film by heating. The remaining PPC on the sample was removed with acetone for 10 min (Figure 4a, (ii)). Optical microscope images of the fabricated samples before and after transferring the Ag nanowire onto the h-BN flake are shown in Figure 4a, (iii) and (iv). The single-photon emissions from a defect in the h-BN, coupled to the Ag nanowire, were measured using a home-built confocal microscope setup that consisted of two sets of galvo mirror scanners to scan the pump position and the detection position independently (Figure 4b). One galvo mirror scanner was used to efficiently pump the h-BN single-photon emitter, whereas the other was used to acquire PL images and position-dependent spectra of the entire sample, including the h-BN and the Ag nanowire. Figure 5a shows the confocal PL image of the single-photon emission coupled to a Ag nanowire while the pump laser was focused on the h-BN flake. Strong photon emission from the h-BN defect in the middle of the Ag nanowire (Figure 5a, yellow circle) was coupled to and propagated along the Ag nanowire. As a result, weak photon emission was observed at both ends of the Ag nanowire (Figure 5a, white circles). The spectra measured at both ends of the nanowire were almost the same, showing a well-resolved peak at 600 nm (Figure 5b), and were also fairly similar to the spectrum measured before coupling with the Ag nanowire (Figure 3b). This result indicates that the single-photon emission from the h-BN flake was converted to SPPs and propagated along the Ag nanowire. In addition, we measured the time-resolved PL before and after coupling with the Ag nanowire (Figure 5c). The measured data were fitted with a double-exponential decay function. The lifetimes were 2.09 and 1.11 ns before and after the coupling, respectively.
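A fit of this type is straightforward to reproduce; the sketch below fits a double-exponential decay to a synthetic stand-in trace with scipy.optimize.curve_fit. All numbers in it are placeholders, and the amplitude-weighted mean lifetime is just one common reporting convention, not necessarily the one used here.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, t1, a2, t2):
    """Double-exponential PL decay: a1*exp(-t/t1) + a2*exp(-t/t2)."""
    return a1 * np.exp(-t / t1) + a2 * np.exp(-t / t2)

t = np.linspace(0.0, 20.0, 400)                 # delay after pulse (ns)
rng = np.random.default_rng(2)
counts = biexp(t, 900.0, 2.09, 100.0, 0.3) + rng.poisson(5, t.size)

popt, _ = curve_fit(biexp, t, counts, p0=(800.0, 2.0, 100.0, 0.5))
a1, t1, a2, t2 = popt
tau_mean = (a1 * t1 + a2 * t2) / (a1 + a2)      # amplitude-weighted mean
print(f"components: {t1:.2f} ns, {t2:.2f} ns; mean: {tau_mean:.2f} ns")
```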
The lifetime reduction by the Ag nanowire was attributed to the increased nonradiative and plasmonic decay channels, which increase the LDOS of the single-photon emitter. Specifically, the experimentally measured 1.88-fold reduction in lifetime was comparable to the simulated total power enhancement in Figure 1c. In the simulation, this enhancement was obtained when the polarization direction of the single-photon emitter was perpendicular to the Ag nanowire and the emitter was positioned at a distance of approximately 100 nm from the center of the Ag nanowire. A comparison between the measured and simulated results indicates that the PDMS/PPC stamping method enabled the Ag nanowire to be placed near the single-photon emitter with a placement accuracy of better than 100 nm. [Figure 5 caption, panels (c) and (d): comparison of the time-resolved photoluminescence (TRPL) measurements and of the fluorescence saturation curves before and after coupling of the single-photon emitter with a Ag nanowire; the red and black curves were measured before and after coupling, respectively.] Finally, we measured the fluorescence saturation curves before and after coupling with the Ag nanowire (Figure 5d). The saturation measurements were fitted with the equation I(P) = I∞·P/(P + PSat), where I∞ and PSat are the saturated emission rate and the excitation power at saturation, respectively.
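The same fitting approach applies to the saturation curve. The sketch below fits the I(P) = I∞P/(P + PSat) model quoted above to synthetic data; the power range and noise level are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturation(P, I_inf, P_sat):
    """I(P) = I_inf * P / (P + P_sat)."""
    return I_inf * P / (P + P_sat)

power = np.linspace(0.05, 5.0, 25)              # pump power (mW), placeholder
rng = np.random.default_rng(3)
rate = saturation(power, 4.9e5, 1.2) * (1 + 0.03 * rng.standard_normal(power.size))

popt, _ = curve_fit(saturation, power, rate, p0=(1e5, 1.0))
print(f"I_inf = {popt[0]:.3g} counts/s, P_sat = {popt[1]:.2f} mW")
```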
The saturated emission rates were estimated to be 3.52 × 10^5 counts/s and 4.90 × 10^5 counts/s before and after coupling, respectively. This improvement in the saturated emission rate signifies efficient coupling of single photons to the Ag nanowire.
Conclusions
In summary, we successfully demonstrated the efficient coupling of h-BN single-photon emitters and Ag nanowire plasmonic waveguides. We analyzed the power enhancement of the emitter as a function of its polarization and position relative to the Ag nanowire by calculating the Poynting vectors through various channels. The simulations showed that the distance between the single-photon emitter and the Ag nanowire and the polarization direction of the single-photon emitter affected the coupling efficiency. In the experiment, a Ag nanowire was accurately positioned close to the h-BN single-photon emitter using the PDMS/PPC stamping method. The optical measurements demonstrated enhanced single-photon emission from h-BN as a result of the emitter coupling with the Ag nanowire. Certain photons were converted to single plasmons, propagated along the surface of the Ag nanowire, and were emitted into free space at the ends of the nanowire. In addition, the lifetime of the emitter was reduced by a factor of 1.88 by the Ag nanowire. A comparison of this result with the simulation result suggested that the h-BN single-photon emitter was positioned at a distance of 100 nm from the Ag nanowire.
By fine-tuning the emission wavelength of the single-photon emitter, we expect our results to pave the way toward the practical implementation of quantum plasmonic integrated circuits based on 2D materials and Ag nanowires.
7,287
2020-08-25T00:00:00.000
[ "Physics", "Materials Science" ]
Frequency-diverse multimode millimetre-wave constant-ϵr lens-loaded cavity This paper presents a physical frequency-diverse multimode lens-loaded cavity, designed and used for the purpose of the direction of arrival (DoA) estimation in millimetre-wave frequency bands for 5G and beyond. The multi-mode mechanism is realized using an electrically-large cavity, generating spatio-temporally incoherent radiation masks leveraging the frequency-diversity principle. It has been shown for the first time that by placing a spherical constant dielectric lens (constant-ϵr) in front of the radiating aperture of the cavity, the spatial incoherence of the radiation modes can be enhanced. The lens-loaded cavity requires only a single lens and output port, making the hardware development much simpler and cost-effective compared to conventional DoA estimators where multiple antennas and receivers are classically required. Using the lens-loaded architecture, an increase of up to 6 dB is achieved in the peak gain of the synthesized quasi-random sampling bases from the frequency-diverse cavity. Despite the fact that the practical frequency-diverse cavity uses a limited subset of quasi-orthogonal modes below the upper bound limit of the number of theoretical modes, it is shown that the proposed lens-loaded cavity is capable of accurate DoA estimation. This is achieved thanks to the sufficient orthogonality of the leveraged modes and to the presence of the spherical constant-ϵr lens which increases the signal-to-noise ratio (SNR) of the received signal. Experimental results are shown to verify the proposed approach. Scene information is encoded in terms of the quasi-randomness of the measurement modes (N) across a frequency bandwidth. The computational imaging work in 25-29 is limited to the near-field, where the frequency-diverse aperture acts as both transmitter and receiver; in the DoA estimation problem, by contrast, the highly directive frequency-diverse antenna aperture acts purely as a receiver, and the technique must operate in the far-field. For this application, channel information within a field of view (FoV), in the form of a far-field radiation source, must be reconstructable from the quasi-random measurement modes, in conjunction with the mode-mixing cavity transfer functions and computational techniques such as the least-squares algorithm and matched filtering over a given bandwidth 30. A physical-layer multiplexing technique for far-field microwave imaging using an ultra-wideband (UWB) antenna array is shown in 31, while the concept of passive multiplexing for imaging was introduced in 32. It is important to stress that compressed imaging for the channel-sounding application has not been demonstrated before. Whereas a preliminary theoretical investigation in this domain was carried out in 24 with a hypothetical high-Q-factor frequency-diverse antenna, in this paper we demonstrate the first numerical and experimental validation of a computational frequency-diverse cavity-backed metasurface antenna loaded with a lens for channel characterization in the form of a DoA estimation problem. In the work we now present, we show for the first time that usable results can be achieved even after these theoretical requirements on the frequency-diverse antenna are significantly relaxed. A radiating aperture used as a transmitter with a Luneburg lens is shown in 33.
In the approach we describe in this paper, the radiating aperture of a relatively low Q-factor mode-mixing cavity is covered with a high-gain constant-ϵr lens 34,35 and is coupled through a curved surface with sub-wavelength holes. This geometric configuration concentrates the radiation intensity in an angular sector in front of the lens-loaded cavity, helping to overcome propagation losses, and additionally requires only a single radio frequency (RF) channel. The compressed signal received at the RF output of the lens-loaded cavity is computationally processed to give DoA estimates of the incoming mmWave signal angle(s) of arrival. The motivation behind our approach can be summarized as follows. Firstly, the known methods of DoA estimation require an array of antennas, with each antenna requiring a separate connection to the base-band processing unit via a dedicated RF chain, resulting in increased hardware cost, especially in the mmWave spectrum. Secondly, the known methods using mode-mixing cavities (as in 25,26,28,29) have their core capability limited to near-field microwave imaging. The size of a mode-mixing cavity is large at sub-6 GHz 5G frequencies, making it difficult to mount on radio sub-systems and/or base-station antennas, while the mode-mixing cavity size is practical at mmWave frequencies.
Contributions. Our approach offers the following advantages over the current state of the art. Firstly, the lens-loaded cavity not only generates frequency-diverse modes but, through the spherical constant-ϵr lens operation, also enhances the gain of the radiation mask. Secondly, the estimation scheme in our proposed architecture uses spatio-temporally orthogonal modes, and needs only a relatively modest frequency sweep and a single RF chain to estimate DoA. For the first time, it is shown that by leveraging the concept of frequency diversity together with the focusing capability of a spherical constant-ϵr lens, the DoA of a plane wave striking the lens-loaded cavity can be accurately retrieved. Thirdly, with the help of antenna characterization, it is shown that the size of the lens-loaded cavity makes the proposed solution viable for mmWave operation. Fourthly, it is shown that the sub-wavelength coupling arrangement used equalizes the radiated energy from the lens-loaded cavity across a wide FoV coverage sector.
Methods
System architecture. The presented system architecture requires only a single lens-loaded cavity radiator with a single input/output; the system block diagram is shown in Fig. 1. The architecture consists of a lens-loaded cavity connected to an RF chain, which is subsequently connected to a computational unit. The lens-loaded cavity in Fig. 1 comprises a multi-mode cavity with a volume ranging from 15λ × 15λ × 15λ to 18λ × 18λ × 18λ over the frequency range 27-29 GHz, air-space coupled through a sub-wavelength hole array to a spherical constant-ϵr lens with a diameter of 12.4λ at the central frequency of operation, 28 GHz. The structural configuration of the lens-loaded cavity is presented in Fig. 2, which shows the mmWave metallic cavity and the curved radiating surface with sub-wavelength holes of diameter 0.8λ at 28 GHz. The centre-to-centre distance between consecutive holes on the curved surface is 1.2λ at 28 GHz, and coupling through it follows the principle of extraordinary microwave energy transmission through an array of sub-wavelength holes 36.
The curved surface follows the circumference of the full spherical constant-ϵr lens, while the rear hemispherical surface of the lens is placed 3.5 mm from the hole array. A metallic sheet of size 30 × 52 mm² is placed inside the cavity at an arbitrary location and orientation in order to enhance the mode-mixing capability of the cavity by introducing quasi-random, disorderly reflections without requiring any external mode-mixing mechanism. In this way we avoid the superposition of wave vectors in reciprocal space, which leads to symmetry breaking in the measurement modes. The presence of the mode-mixing scatterer between the sub-wavelength hole opening and the input/output channel attenuates the direct paths between the feed and the lens, which helps to limit the level of spatial correlation within the cavity. The mmWave chaotic cavity, constructed from copper, is terminated into a standard WR28 waveguide.
Lens-loaded cavity. A spherical constant-ϵr lens has the unique property, for a specific range of ϵr values, of being able to focus incoming microwave energy to locations outside the lens surface defined by its Petzval curvature 37. Typically, suitable lens operation is possible for ϵr values greater than 2.0 and less than 3.5 34. The selection of the constant-ϵr lens material is generally governed by two choices: the maximum utilizable spherical area of the lens, and the position off the lens surface at which the energy is to be focused (Fig. 3). In our case, this choice yields ϵr = 2.5, with the resulting focal surface 3.5 mm from the spherical constant-ϵr lens surface, chosen so as to prevent the generation of unwanted reflections between the cavity and the constant-ϵr lens as well as to minimise energy leakage into the external environment around the lens when a 28 GHz plane wave strikes the lens surface. The material chosen for the lens realisation is Rexolite 38 with ϵr = 2.53. This material has a low loss tangent, tanδ ≈ 0.00066, which means that the signal passing through it suffers very low attenuation compared to the gain achieved through its energy-focusing capacity. The benefit of using a lens for antenna gain enhancement is reinforced in 34, while details of the effect of the lens material on losses can be found in 39. Rexolite is readily machined, highly resistant to moisture absorption, and exhibits low dimensional variation under external environmental influences such as temperature and humidity, thus helping to mitigate long-term calibration issues. Consider now the lens-loaded cavity in Fig. 2 facing towards the +x-direction in a Cartesian coordinate system. A plane wave is incident upon the lens-loaded cavity from the FoV along the −x-direction; θ and φ represent the azimuthal and elevation angles of the incident wave direction. The lens-loaded cavity operates from 27 to 29 GHz and has a single WR28 channel output connected to the RF chain. The exponential decay associated with the cavity dictates the time-domain impulse response h(t) of the lens-loaded cavity; its decay constant τ is tied to the loaded Q-factor of the structure in Fig. 2 through Q = πf0τ 40, where f0 is the central frequency, here 28 GHz. The channel impulse response is modelled as an exponentially decaying random process, h(t) = n(t)·e^(−t/(2τ)), whose samples n_m ∼ N(0, σ²), where N denotes the normal distribution with zero mean and variance σ². This suggests that a high Q-factor is desirable, since it will enhance the frequency-diverse antenna aperture impulse response h(t), i.e., reduce the correlation between multiple modes 19,24. The number of useful modes in frequency-diverse antenna apertures is given in 40 as N_max = QB/f0, where B is the bandwidth of operation and f0 is the centre frequency. The Q-factor of the lens-loaded cavity is calculated to be Q = 4636 at 28 GHz using the energy decay profile, simulated in CST Microwave Studio 41, using the method given in 42. Hence, the upper bound on the number of modes, N_max, is 331 modes.
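The mode-budget arithmetic above is easy to verify; the short Python sketch below evaluates τ = Q/(πf0) and N_max = QB/f0 with the stated values (Q = 4636, B = 2 GHz, f0 = 28 GHz) and reproduces the 331-mode upper bound.

```python
import math

f0 = 28e9           # centre frequency (Hz)
B = 2e9             # operating bandwidth, 27-29 GHz (Hz)
Q = 4636            # loaded Q-factor from the simulated energy decay profile

tau = Q / (math.pi * f0)        # decay constant, from Q = pi * f0 * tau
N_max = Q * B / f0              # upper bound on useful frequency-diverse modes

print(f"tau   = {tau * 1e9:.1f} ns")    # ~52.7 ns
print(f"N_max = {N_max:.0f} modes")     # ~331
```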
Later it will be shown that we can still characterize the channel within the FoV even when the practical N is significantly less than 331, namely 41. It should be noted that, in this work, we use the 27-29 GHz band to demonstrate the application of the lens-loaded frequency-diverse cavity across the entire 28 GHz 5G spectrum. Whereas a 5G channel may occupy a smaller bandwidth, the developed lens-loaded cavity can readily generate N_max = 331 modes. Therefore, for a 5G channel with a smaller bandwidth, the presented technique can still produce a sufficient number of modes to achieve DoA estimation. For example, considering a 5G channel with 500 MHz bandwidth, the developed cavity can produce 66 orthogonal modes, which is above the number of modes we show to be sufficient for high-fidelity DoA retrieval in this work, i.e., 41. Moreover, by further increasing the Q-factor of the antenna, we can further grow the number of frequency-diverse modes sampling the incident plane wave. Following the same definition of the characterization plane of a frequency-diverse aperture as given in 24, we evaluated the mode generation capacity of the lens-loaded cavity, first through simulation and then by measurement. The transfer function of a frequency-diverse aperture can be obtained experimentally 28 or analytically 24. The radiated field from the lens-loaded cavity can be approximated by creating a metasurface loaded with a number of meta-elements generating repeatable field maps. This is analogous to Huygens' metamaterial surfaces 43, in which electric and magnetic sheet impedances provide the required currents to generate a prescribed radiating wave. The projection of a radiating aperture onto a characterization plane can be written as
E_ω(r′) = ∫ m_ω(r) G_ω(r, r′) dr, (2)
where r′ denotes the coordinates on the characterization plane and r those on the equivalent aperture plane, m_ω(r) is the radiation of a single meta-element on the antenna aperture, and G_ω(r1, r2) is the Green's function, defined as
G_ω(r1, r2) = e^(−jk0|r1−r2|) / (4π|r1−r2|), (3)
in which k0 is the wavenumber. Owing to the far-field approximation, the magnitude term in Eq. (3) is dropped. The DoA estimation problem is then calculated using the far-field projection
p(θ, φ) = e^(−jk0(y sinθ cosφ + z sinθ sinφ)). (4)
Note that the calculations in Eq. (4) assume that the far-field source remains constant across the frequency band of operation, and we do not correct any time-domain dispersion since the bandwidth is rather narrow. Therefore, the wavenumber k0 in (4) is fixed at 28 GHz. It should be noted that this is not a fundamental limitation of the presented technique, but rather an assumption that simplifies the mathematical model of the far-field source. This assumption is valid for scenarios in which coherence intervals exceed 0.5 ns, which is reasonable for mmWave 5G channels with low terminal mobility 44. Moreover, given the central frequency of 28 GHz, the frequency variation is limited to ±1 GHz around 28 GHz, i.e., a maximum fractional bandwidth window of only ±3.5% centred on 28 GHz.
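For concreteness, the far-field projection of Eq. (4) can be evaluated numerically as follows; the characterization-plane extent and sampling in this sketch are arbitrary assumptions, not the values used in the paper.

```python
import numpy as np

c = 299_792_458.0
k0 = 2 * np.pi * 28e9 / c                   # wavenumber fixed at 28 GHz

# Characterization plane sampled in y-z (the aperture faces +x);
# the extent and resolution here are placeholders.
y = np.linspace(-0.25, 0.25, 101)           # metres
z = np.linspace(-0.25, 0.25, 101)
Y, Z = np.meshgrid(y, z, indexing="ij")

def plane_wave_projection(theta_deg, phi_deg):
    """Eq. (4): p = exp(-j k0 (y sin(th)cos(ph) + z sin(th)sin(ph)))."""
    th, ph = np.radians(theta_deg), np.radians(phi_deg)
    return np.exp(-1j * k0 * (Y * np.sin(th) * np.cos(ph)
                              + Z * np.sin(th) * np.sin(ph)))

p = plane_wave_projection(20.0, -20.0)      # e.g. case 4 of Table 1
print(p.shape)
```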
So, for a consistent, normalized plane wave incident on the lens-loaded cavity, the compressed signal at the output port of the lens-loaded cavity after the RF chain (see Fig. 1) can be written in the form
g_N = E_(N×M) p_M, (5)
and this is then followed by the DoA estimator using a matched-filter reconstruction algorithm. In order to elaborate the contribution of the lens-loaded cavity to the system architecture of Fig. 1, consider the effect of an input signal excitation at the waveguide port in Fig. 2. The simulation results for this setup at the centre frequency, 28 GHz, are given in Fig. 4. The resonance modes and the slope of the energy decay shown in Fig. 4a,b show that the lens design and coupling scheme used result in high-quality antenna matching and a correspondingly high modal Q-factor. The benefit of the spherical lens placed in front of the frequency-diverse cavity can be seen in Fig. 4c, with the lens significantly helping to confine the radiated energy into the FoV along the +x-direction. Also, from Fig. 4d, when the far field is represented in the u-v plane, defined as
u = sinθ cosφ, v = sinθ sinφ, (6)
the sectoral coverage for the cavity with the spherical constant-ϵr lens is found to be lower than for the same cavity without the lens. This is a result of the peak gain at 28 GHz being increased to 10.9 dBi with the lens from 6.8 dBi without it. Over the frequency range 27 to 29 GHz, using the spherical constant-ϵr lens enhances the system gain by as much as 6 dB. This enhancement in gain increases the spatial incoherence between the orthogonal modes, improving the mathematical conditioning of the inverse problem, as will be further explained from the measurement results and the singular value decomposition (SVD) analysis in the next section. Note that, in the DoA estimation context, the measurement modes are field patterns on the characterization plane at discrete frequencies within the bandwidth of interest. This is analogous to the measurement modes in microwave imaging 28,30, in which the field pattern changes as the driving frequency of the frequency-diverse antenna aperture is varied. This leads to a diverse set of measurement modes obtained simply by sweeping frequency. From the perspective of a coded aperture 29, the radiation modes are very different from those of a regular antenna, in that we essentially use "a multiplicity of sidelobes" to illuminate multiple pixels in the FoV. It is evident in the field plots of Fig. 4c that the "sharper" the beams become, the less likely they are to overlap with the sidelobes of the next mask. By increasing the gain of the sidelobes probing the FoV information, we sharpen the sidelobes and reduce the overlap between masks. To further elaborate this point, correlation-coefficient contours for 11 radiation masks with and without the spherical lens are given in Fig. 5a,b, from which the positive impact of the presence of the lens can be observed. As discussed before, for simplicity of the model we consider modes at discrete frequencies that are equidistant in the frequency spectrum. Figure 5b also confirms that our approximation is valid, since the fields recorded at these discrete frequencies have low correlation. The superposition of the far-field radiation masks for the first 21 modes, corresponding to the frequencies at which the lens-loaded cavity radiates most efficiently, is shown in Fig. 5c.
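A correlation analysis of this kind can be sketched as below: pairwise correlation-coefficient magnitudes between flattened complex radiation masks. The random stand-in masks are placeholders for the simulated or measured field maps; a lens-enhanced mode set would show lower off-diagonal values.

```python
import numpy as np

def mask_correlation(masks):
    """Pairwise correlation-coefficient magnitudes between complex masks.
    masks: shape (N, npix), one flattened radiation mask per frequency."""
    m = masks - masks.mean(axis=1, keepdims=True)
    norms = np.linalg.norm(m, axis=1)
    return np.abs((m @ m.conj().T) / np.outer(norms, norms))

rng = np.random.default_rng(4)
N, npix = 11, 101 * 101                     # 11 masks, as in Fig. 5a,b
masks = rng.standard_normal((N, npix)) + 1j * rng.standard_normal((N, npix))
C = mask_correlation(masks)
print(C[0, 0], C[0, 1])                     # unit diagonal, low off-diagonal
```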
The result shows that the entire forward sector within −60° < θ < 60° and −60° < φ < 60° FoV can be sampled with a multiplicity of spatial modal diversity, with no shadow zones and with approximately equal power per spatial mode.
Measurements and results
A proof-of-concept hardware was constructed in which the oversized mmWave chaotic cavity was created using 180 × 180 mm² single-sided copper-coated substrate sheets. The curved surface with sub-wavelength holes was created using carefully bent copper strips, while sub-wavelength holes of diameter 5.34 mm were machined using an LPKF Protomat H100 milling machine. A plane copper sheet placed within the cavity served as a randomly oriented scatterer, and metal adhesive tape was used to seal the cavity edges. A spherical constant-ϵr lens was created by machining a solid sphere from a Rexolite plastic rod. In addition to the electrical properties mentioned before, other properties of Rexolite, such as its density of 1.05 g/cm³, coefficient of linear thermal expansion of 70 × 10⁻⁶ °C⁻¹, and thermal conductivity of 0.146 W m⁻¹ °C⁻¹, make it an excellent choice for mmWave lens development. The synthesis approach given in 34 placed the Rexolite lens focal point at 77.5 mm from the lens's spherical origin, while the lens diameter is 133 mm (radius 66.5 mm). A machined Styrofoam holder offsets the lens by 3.5 mm in front of the cavity. A K-type to WR28 converter, positioned at 38 mm and 68 mm from the bottom-left corner, connects the lens-loaded cavity to a wideband coaxial input/output. The entire assembly was placed in a planar NSI near-field anechoic chamber, where a mmWave horn antenna operating from 27 to 29 GHz was used to record the field on a plane 0.5 m from the lens-loaded cavity. The fabricated lens-loaded cavity and the measurement setup are shown in Fig. 6. Separate measurements were taken in the vertical and horizontal planes, and the results, in the form of the magnitude and phase of N = 41 modes, are given in Fig. 7a,b. The field peaks are randomly projected, and the sectoral coverage provided by the lens-loaded cavity is evident from the magnitude plots, as predicted (Fig. 7a). Lowering the mode number is key to reducing the computational complexity of the retrieval problem, as well as the calibration over frequency for each operating mode. We now show how this is possible, thanks to the fact that the measurement modes are highly orthogonal due to the presence of the lens. This is evident from the correlation between modes shown in Fig. 7c. In addition, by using a smaller number of points to sample the frequency band (N = 41 is much lower than N_max = 331), we increase the frequency interval between our measurement modes, further de-correlating the measurements. A detailed version of the phase plots in Fig. 8 further indicates that the mode symmetry is broken, which is beneficial for accurate DoA estimation as it reduces the chance of ghost DoA estimates. The reason symmetrical irregularities in the modes are required to avoid ghost images can be found in 42. Consider, for example, a radiation source at θ = 0° and φ = 10°: with a symmetrical phase response there might be a chance of a ghost DoA estimate at θ = 0° and φ = −10°, which is not the case for the measured phase response of the lens-loaded cavity shown in Fig. 8.
In 42 the coded apertures are positioned in an irregular manner in order to break the regular periodicity, while in this study a metallic scatterer internally positioned in the cavity is mainly responsible for breaking the mode symmetry.
DoA estimation from measurement modes
The DoA estimation problem is an estimate of the plane-wave projection pattern on the characterization plane, which can be analysed from the compressed measurement as follows 24:
p_est,M = E†_(N×M) g_N, (7)
where the conjugate-transposed (denoted by (·)†) characteristic aperture transfer function of the lens-loaded cavity is applied to the compressed measurement (also known as matched filtering). The number of modes is denoted by N and the number of pixels on the characterization plane by M. Note that we use a plane-wave projection model, which accurately mimics a real-world signal. As in 24, to minimize the error residue in the estimation of the objective function, we use the iterative least-squares reconstruction algorithm given as
p_est+1,M = arg min ‖g_N − E_(N×M) p_est,M‖²₂. (8)
It is evident in (8) that the least-squares solution is an iterative process, making use of the matched-filter solution in (7) as an initial estimate. After retrieval of the source projection pattern estimate, the DoA estimation can be achieved directly by a Fourier transform operation applied to the evaluated p_est, while the incident angles θ, φ can be retrieved from the patterns using a peak-finding algorithm. Following this approach with the measurements of the lens-loaded cavity, we successfully evaluated the DoA for the four sample cases given in Fig. 9, while the comparison between the evaluated DoA and the ground truth is provided in Table 1. Note that for the reconstructions provided in Fig. 9, 20 dB Gaussian noise was added to the measurements, according to 30:
g_n = g + n(σ), (9)
where g_n is the measurement with added noise and n(σ) is the Gaussian independent and identically distributed (i.i.d.) noise model with zero mean and variance σ² = SNR|g|. The noise is defined with respect to the average received signal over all frequency samples, |g|. As indicated in the previous section, although a very high number of measurement modes is theoretically desirable for a frequency-diverse aperture antenna to work, reliable DoA estimation is practically possible using only 41 modes with the proposed lens-loaded cavity structure. The modes exploitable in our lens-loaded cavity correspond to independent degrees of freedom used to reconstruct the interrogated space. The problem is relaxed compared to a classical imaging system because the depth is not probed, restricting the dimensions of the unknowns to the transverse components of the plane-wave vectors. Under the conditions of this experiment, it is possible to reconstruct a maximum of 41 independent pieces of information related to the DoAs of the incident signals, making the lens-loaded cavity a good candidate for mmWave channel-sounding applications at a reasonable SNR level 30.
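To make the reconstruction pipeline of Eqs. (5) and (7)-(9) concrete, the sketch below runs a matched-filter initial estimate followed by a simple gradient (Landweber-style) iteration of the least-squares problem on a random stand-in transfer matrix. The dimensions, noise scaling and step size are illustrative assumptions, and the gradient iteration is just one way to realize Eq. (8), not necessarily the authors' solver.

```python
import numpy as np

rng = np.random.default_rng(5)
N, M = 41, 64 * 64            # modes x pixels on the characterization plane

# Stand-in transfer matrix; in practice E holds the 41 measured field masks.
E = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(M)
p_true = np.zeros(M, dtype=complex)
p_true[2150] = 1.0                                  # one plane-wave source pixel
g = E @ p_true                                      # Eq. (5): compressed signal

# Eq. (9): additive i.i.d. Gaussian noise, scaled from the mean |g| (20 dB).
sigma2 = np.mean(np.abs(g)) / 10 ** (20 / 10)
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
g_n = g + noise

p_mf = E.conj().T @ g_n                             # Eq. (7): matched filter

# Eq. (8): least-squares refinement via gradient (Landweber) steps,
# initialised with the matched-filter estimate.
p_est, step = p_mf.copy(), 0.5
for _ in range(50):
    p_est = p_est + step * (E.conj().T @ (g_n - E @ p_est))

print(int(np.argmax(np.abs(p_est))))                # expected: 2150
```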
One of the assumptions made in this work is that the power spectrum of the source field incident upon the lens-loaded cavity does not vary as a function of frequency. Note that this is not a fundamental limitation of the presented technique, but an assumption that simplifies the mathematical model of the far-field source. The fractional bandwidth of the studied 5G channel is around 7%, and the assumption that the spectrum of the incident plane wave remains constant is justified by the much higher frequency of the plane wave, 28 GHz, relative to the variation around the centre frequency, ±1 GHz. To elaborate, we take one of the studied scenarios (case 4: θ = 20°, φ = −20°) and model the incident plane wave, p in (5), as frequency-independent (28 GHz) and as frequency-dependent (between 27 and 29 GHz). Comparing the reconstructed DoA estimation patterns in Fig. 10, it can be seen that this assumption does not have a significant effect on the reconstruction. The SVD is a measure of the diversity of the field patterns generated by frequency-diverse cavities looking towards a coverage area. The mode-orthogonality enhancement discussed in the previous section, due to the presence of the spherical constant-ϵr lens, has an implication for the SVD contour, as can be seen in Fig. 11. The general form of the SVD can be written as 29
E = U S Vᵀ, (10)
where U and V are unitary matrices and S is a rectangular diagonal matrix with the singular values in descending order, while (·)ᵀ is the matrix transpose operation. The Q = 100 and Q = 10,000 patterns correspond to the analytical approach in 24 with the same number of measurement modes, 41, in comparison to the fabricated lens-loaded cavity. The Q-factor of the ideal model is approximately 4600, and the SVD result obtained from the experimental cavity in Fig. 11 remains inside the region bounded by the Q = 100 and Q = 10,000 patterns calculated in 24 within the same frequency band of operation, 27-29 GHz. As the number of modes increases, the normalized singular values decrease, but not drastically, ensuring that the proposed lens-loaded cavity modes have reasonable orthogonality. The Q-factor of the fabricated cavity could be further enhanced by carefully exploring the mmWave power coupling mechanism and reducing the coupling parameter and coupling coefficient.
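The SVD diagnostic of Eq. (10) is a one-liner once the transfer matrix is available; in the sketch below a random stand-in matrix replaces the measured E, so only the procedure, not the resulting singular-value contour, is meaningful.

```python
import numpy as np

rng = np.random.default_rng(6)
N, M = 41, 64 * 64
E = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))  # stand-in

U, S, Vh = np.linalg.svd(E, full_matrices=False)    # Eq. (10): E = U S V^T
S_norm = S / S[0]                                   # normalised, descending
print(S_norm[0], S_norm[-1])                        # slow decay -> diverse modes
```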
Conclusion
In this paper, we have presented a novel structural configuration of a lens-loaded cavity operating as a frequency-diverse antenna, created using an oversized mmWave chaotic cavity and a spherical constant-ϵr lens, for DoA estimation. A proof-of-concept lens-loaded cavity was developed using a metallic cavity and a Rexolite spherical constant-ϵr lens, operating in the 27-29 GHz mmWave 5G frequency bands. The presented lens-loaded cavity captures the channel information and compresses the incoming plane-wave source patterns into a single channel, requiring only a single RF chain to successfully retrieve the DoA information and resulting in an aggressive hardware reduction compared to classical DoA estimation methods. A set of near-field measurements in the vertical and horizontal planes was taken in an anechoic chamber, and computational methods were used with the measured data to retrieve the direction from which an arbitrary far-field plane-wave source in the mmWave frequency bands strikes the lens-loaded cavity. Although demonstrated for the DoA estimation of individual far-field sources, our initial studies also suggest that the presented technique can be scaled to the DoA estimation of multiple sources assigned independent phase references. This aspect of the presented computational DoA system will be pursued in future work. For the spherical constant-ϵr lens development we used high-quality Rexolite material and ensured controlled experimental work; however, other low-cost, low-loss plastic materials, such as poly(methyl methacrylate) (PMMA), can be used for lens development. The primary application of the proposed approach is mmWave channel sounding for 5G and beyond, while extensions of the same system architecture can find further applications in smart antenna systems, navigation systems, radar tracking, mmWave communication, and radio astronomy.
6,368.6
2020-12-01T00:00:00.000
[ "Physics" ]
Operational Risk Management in Corporate and Banking Sector of Pakistan This paper examines the current status of operational risk management in Pakistan in the corporate and banking sectors and explores the reasons for the adoption, or lack of adoption, of an integrated approach to operational risk management. It identifies the imperatives for implementing comprehensive risk management solutions leading to enterprise risk management (ERM). The mode of research is qualitative. The paper shows that effective risk management can enhance organizational performance, but the appropriate infrastructure is not available in companies. It highlights the fact that knowledge of risk management in the corporate sector of Pakistan is insufficient and that sample companies hesitate to respond, thinking that it may reflect inefficiencies, whereas in the banking sector the concept of operational risk management can be seen to some extent. Benefits of operational risk management: The benefits that can be gained from operational risk management are:
1. Reduction of operational losses.
2. Lower compliance/auditing costs.
3. Early detection of unlawful activities.
4. Reduced exposure to future risks.
Risk management involves four stages: risk identification, measurement, monitoring, and management. Operational risk and control assessments are often the first process that a firm uses to carry out operational risk management. Frequently, the assessment is carried out without an operational risk management framework in place and without much thought being given to high-quality corporate governance around the multiple interlocking processes of operational risk management. Operational risk management provides a set of tools that allow us to attain greater and more consistent results by using a systematic method to approach issues rather than relying on experience. Risks have to be assessed against benefits; the purpose of ORM is to lessen risk and thus improve the ratio of benefit to cost. Whenever a risk arises, it is first identified, then prioritized according to its importance, and then managed, as shown in the following diagram. Risk management must be an integrated part of planning and executing any operation, routinely applied by management, not a way of reacting when some unforeseen problem occurs. Managers are responsible for the routine use of risk management at every level of activity, beginning with the planning of that activity and continuing through its completion. Key points of operational risk management include:
- ORM is systematic, not merely intuitive.
- ORM focuses on excellence, not the standard.
- ORM addresses all dimensions of organizational risk, not just safety risk.
- ORM does not aim solely at reducing risk but instead at optimizing it.
- ORM enables a safety role in emergency situations.
- ORM transforms safety from a "cost" to an "investment".
- ORM is "upstream" management instead of "downstream".
- ORM emphasizes getting it right the first time.
- ORM is empirical and data-based.
- ORM occurs from within the process, not from outside.
1.3 Enterprise risk management (ERM)
ERM in business includes the methods and processes used by organizations to manage risks and seize opportunities related to the accomplishment of their objectives. ERM provides a framework for risk management, which usually involves identifying particular events or circumstances significant to the organization's objectives (risks and opportunities).
This is an area that has not been explored so far by any researcher; in particular, nobody has worked on operational risk management in Pakistan. P.K. Gupta carried out qualitative research in Indian companies through structured questionnaires based on both closed-ended and open-ended questions, and also conducted interviews in 130 companies. It is a very recent article, published in 2011. In this paper, I examine the field of operational risk management in the context of Pakistan. I carried out my research in different Pakistani companies, including Platinum Pharmaceutical Company, Mukhtar Oil and Soap Company, Indigo Textile, and State Life Insurance, and in the banking sector, which includes National Bank of Pakistan (NBP), HBL (Habib Bank Limited), Allied Bank, MCB Bank, JS Bank, Bank ALHABIB Limited, and Meezan Bank. The design/methodology used is qualitative research based on a questionnaire.
PURPOSE/AIM OF RESEARCH: The motive of my research is to gauge the level of awareness of risk management in the context of Pakistan.
Hypothesis: My hypothesis is that the risk management function is not yet fully developed in Pakistani companies.
REVIEW OF LITERATURE: Operational risk is, for many organizations, the most common form of risk, and managing it is often regarded as the most important function or task to be performed. Managing risks is of course not new; typically, risks have been managed by insight and experience. Operational risk is clearly very common, since it is "the risk of loss resulting from either inadequate or failed internal processes, systems or people or from external events" (Basel Committee on Banking Supervision, 2003). A similar discussion has been made by Chenhall (2003), who differentiates between what he calls "uncertainty" or "unpredictability" and risk. Risk is concerned with all those situations in which relevant databases can be built and probabilities thus attached to specific events occurring. Risks such as capital, credit and physical security fall into this category. This opens the way to advanced statistical techniques and model building, and gives clear indications for parameter estimation. It implies that risks should be managed by a department of statisticians and risk specialists reporting to upper-level management. The problems involved in defining or elaborating what data are relevant to operational risk and its management are very complex, because there can be no standard procedure (e.g., Power, 2003). Barki et al. (2001) noted that the risk management profile of a project needs to differ according to the level of risk posed by the project itself, with riskier projects needing broader and more extensive risk resolution. Risk management begins by sitting down and identifying possible risks. Risk management encompasses cost-benefit analyses that help policy-makers allocate scarce resources (Wolf, 1998). Every enterprise is subject to numerous types of risk, and the concentration varies across organizations. Risk has been identified, classified and interpreted from various perspectives. Gupta (2004a, b) says: "Risk reflects the possibility of deviation from the standard path. These deviations reduce or minimize the value and imply unhappy situations".
The classification of risk into finance, market and operational risk is a widely accepted concept and methodology (Lam, 2001; BCBS, 2003). Risk is not uniform across different enterprises, but the absolute need to manage risk applies to all types of entities. Operational risk management is all about maintaining growth in the business, in a highly competitive environment, at optimal operating costs. The move toward the portfolio approach has led to the development of enterprise risk management (ERM) as a top management concern in several companies. To manage risk and protect the business, we need to know exactly: how the business is now functioning; what the key risks and issues are; and how we are going to manage them. Recognition of risk management as a separate managerial function entails a number of advantages. Inclusion of risk management as a strategy in the general management function helps to enhance or promote value (Suranarayana, 2003). According to Jorion (2001), the success of an organization depends upon risk management, and manufacturing firms are still at a primitive or initial stage in properly understanding the firm's sensitivity to numerous types of risk. KPMG (2001) traces the change of the risk management approach from an individualistic, narrow silo type (a structure for storing bulk material) to a portfolio type, with risk management beginning to be perceived as a new means of strategic business management, linking business strategy to day-to-day risks. Tonello's (2009) study on risk management in financial institutions shows that the role of chief risk officers (CROs) has extended dramatically, with more than half of them recurrently involved in firm-level strategic decisions. This indicates that although the idea is catching on in developed countries, it still has a long way to go in India. As a matter of recent development, amendments to clause 49 of the listing agreement between companies and stock exchanges in India make it mandatory for companies to institute internal controls and report on deficiencies. The boards of companies are now required to review the company's risk management framework. According to some risk consultants, most companies do not have articulated risk management policies, and even those that do infrequently have them linked to their business plans (www.expresscomputersonline.com). Shimell's (2002) study of risk management practices indicates that the focus of risk management is now shifting to a strategic one, and risk involvement must be universal and thorough across the enterprise. Doherty (2000) argues that risk management suffers from the problem of duality, in the sense that the enterprise can either remove the risk or accommodate its effect. Risk management should include a wide range of approaches to accommodate the variations in industry risk measurement and management practices. Risk optimization is important to value creation (Murphy and Davies, 2006), and EWRM enhances an organization's capabilities to counter risk and seize opportunities (Miccolis, 2001). Berinato (2006) argues that risk management is critical because balancing risk is becoming the only effective way to manage a corporation in a complex world.
Researchers have shown that firms feel an aggregate measure should include all risks facing the enterprise, while acknowledging that some risks, like operational risk, are complicated to quantify in a consistent way. Training is the best opportunity for developing, implementing, and managing the operational risk management process (Loflin and Kipp, October 1997). A well-run and efficient risk management program incorporates the key functions of the organization; its components must cover every division within the department. In fact, one of the ingredients of successful risk management is the effective linking of all loss-avoidance activities into a single, unified program (Wilder, 1997). Huchzermeier and Cohen (1996) analyze operational flexibility, which they define as the capability to switch among different global manufacturing strategy options. We observed that risk management is quickly gaining the attention of market participants and that regulation is slowly moving in the same direction. The EWRM framework is geared to achieving an entity's objectives and goals, set forth in four categories:
1. Strategic: high-level goals, aligned with and supporting its mission.
2. Operations: effective and efficient use of its resources.
3. Reporting: reliability of reporting.
4. Compliance: compliance with applicable laws and regulations.
Risk became a dominant preoccupation within Western society towards the end of the 20th century, to the point where we are now said to live in a 'risk society' (Beck, 1992), with an emphasis on uncertainty, individualization and culpability. There has been a concurrent growing distrust of professionals in social work and an increased reliance by the profession on complex systems of assessment, monitoring and quality control (Stalker, 2003). Risk management in organizations has undergone a paradigm shift. It has moved from being "hazard type" to "strategic type". Risks are now perceived not as threats (adverse financial effects) but as potential opportunities. The focus of risk management has changed from all risks to critical risks (KPMG LLC, 2001). The risk management process is a tool that can keep the crisis-services business a step ahead. It is not a cure-all, but it can identify problems or risks that can influence the organization.
DATA AND METHODOLOGY: Our hypothesis is that the risk management function is not yet fully developed in Pakistan's corporate and banking sectors. The method proposed for the research is qualitative, chosen because it enables exploring issues, understanding phenomena, and answering questions. My sample consists of both large and medium-sized companies in manufacturing and services, and of different banks in both the private and government sectors. For conducting the qualitative research, a structured questionnaire was prepared, comprising closed-ended questions framed on a Likert-type scale. It consists of five major building blocks:
a) Risk awareness
b) Risk communication
c) Risk responsibility
d) Risk measurement and analysis
e) Risk implementation and integration
These factors help to analyze a company's specific risk profile and the current utilization and future outlook of various risk-recognition techniques. The respondents for my research are executives, supervisors and any staff of the firm directly involved in managing and minimizing the risks faced by the organization.
Data collection method: A qualitative method is used to collect the data in this empirical research. This method generally assists researchers in gaining an in-depth and detailed description of the phenomenon being studied. It also provides a wealth of detailed information and is a generalizable, useful tool for testing the hypothesis.
Questionnaire development: The development of a suitable and reliable questionnaire is one of the major tasks of the research, which needs prominent consideration and appropriate wording.
EMPIRICAL FINDINGS: As already mentioned, we conducted our research on the banking sector and the corporate sector. The results we found regarding operational risk management in the two sectors are quite different. The findings are tabulated separately:
- Findings from the banking sector
- Findings from the corporate sector
Findings from the banking sector: The tabulated results show that in the banking sector a proper system of operational risk management exists: there is full awareness, the resources required to identify, minimize and manage risk are properly allocated, and this is communicated to all relevant employees so that the risk control measures are implemented. [Chart: overall distribution of banking-sector responses across the Likert scale, from "strongly agree" to "strongly disagree".]
Findings from the corporate sector: The tabulated results show that in the corporate sector, companies, especially medium-sized or small ones, lack awareness of operational risk management. In most of the companies a proper system to manage operational risks is not executed, which causes them to face huge losses. [Chart: overall distribution of corporate-sector responses across the Likert scale, from "strongly agree" to "strongly disagree".]
RESULT AND DISCUSSION: The findings show that in the corporate sector the concept of operational risk management is not fully developed: 28% of respondents answered "neither agree nor disagree", which shows a lack of awareness, and 29% of respondents denied that an operational risk management process is implemented in their organization. In the banking sector, by contrast, a proper system is in place: 18% of respondents strongly agreed and 52% agreed that an operational risk management system is implemented, and only 10% disagreed. These findings therefore bifurcate the hypothesis into two parts: H0 is accepted for the corporate sector and rejected for the banking sector.
CONCLUSION: The key point to be concluded here is that companies do not have sufficient awareness and tools to avoid and minimize operational risks, as a result of which they usually incur losses. In the banking sector, on the contrary, all the banks have a proper channel, staff and resources to identify, assess, prioritize and manage upcoming risks. The fact to be emphasized is that operational risk management has become a core management function with its own specific importance. No company can move towards success unless it carries out this function. Operational risks have become a sole focus of attention in several companies. However, there are numerous risks, some of which may be relatively more important; hence, a drastic change in the approach to risk assessment is required in companies, while in banks there is only a need to improve the operational risk management process in order to enhance performance. Operational risk management in the corporate sector of Pakistan is currently facing problems of integration and implementation: the risk management function is not suitably blended into corporate strategy, and the use of information technology for risk management is minimal. In the banking sector, by contrast, awareness of operational risk management exists at an appropriate level, and implementation is done through proper planning and procedures. ERM can be considered a risk-based approach to managing an enterprise, integrating concepts of internal control and strategic planning. The strategies to manage risk include transferring the risk to other parties, avoiding the risk, reducing the negative effect or probability of the risk, or even accepting some or all of the consequences of a particular risk.
RECOMMENDATIONS: We recommend that this topic be further explored in other sectors, such as the agriculture, education and health sectors. Researchers have a good opportunity to work on these sectors because, to our knowledge, no researcher has yet explored this topic in these sectors in the context of Pakistan.
4,193
2014-01-01T00:00:00.000
[ "Business", "Economics" ]
Tuning recombinant protein expression to match secretion capacity.
BACKGROUND The secretion of recombinant disulfide-bond containing proteins into the periplasm of Gram-negative bacterial hosts, such as E. coli, has many advantages that can facilitate product isolation, quality and activity. However, the secretion machinery of E. coli has a limited capacity and can become overloaded, leading to cytoplasmic retention of product, which can negatively impact cell viability and biomass accumulation. Fine control over recombinant gene expression offers the potential to avoid this overload by matching expression levels to the host secretion capacity. RESULTS Here we report the application of the RiboTite gene expression control system to achieve this by finely controlling cellular expression levels. The level of control afforded by this system allows cell viability to be maintained, permitting production of high-quality, active product with enhanced volumetric titres. CONCLUSIONS The methods and systems reported expand the tools available for the production of disulfide-bond containing proteins, including antibody fragments, in bacterial hosts.

Background
Microbial cells have evolved phenotypic traits and cellular functions matched to their endogenous environmental niches; however, they have not necessarily evolved with the cellular production capacity requirements often demanded in a biotechnological context. With respect to recombinant protein production, host cells are required to produce large quantities of heterologous protein, but may not exhibit the appropriate intracellular processing capacity to match this biotechnological demand imposed upon them. For example, they may not exhibit the required cellular synthetic capacity, folding capacity or indeed secretion capacity. In such scenarios, high levels of recombinant protein production overload the host's capacity, resulting in deleterious outcomes for the recombinant protein and/or the production host [1][2][3][4][5][6][7]. A number of potential solutions are available to address these imbalances: (i) increase the host's capacity, e.g. by overexpression of endogenous genes encoding helper proteins such as chaperones, secretion machinery, and ancillary factors; (ii) add new capability, e.g. expression of heterologous genes encoding helper proteins; or (iii) seek to match expression demand with the host's capacity [3,[8][9][10][11]. Secretion of recombinant protein offers a number of potential advantages, by allowing segregation of the protein product away from the cytoplasmic components to (i) reduce the chance of any deleterious interactions of the recombinant protein with the host and reduce molecular crowding effects, (ii) reduce the exposure of the recombinant protein to host cytoplasmic proteases, (iii) aid disulfide bond formation, away from the reducing cytoplasmic environment, and (iv) produce recombinant proteins with a true N-terminus (absence of methionine). In Gram-negative bacteria, protein secretion across the inner membrane into the periplasmic space occurs predominantly via the SecYEG translocon [12].
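The capacity-matching idea described in this background can be illustrated with a minimal kinetic sketch: protein synthesized faster than a saturating export step accumulates as cytoplasmic precursor. This is a toy model of our own, assuming Michaelis-Menten-like export with invented rate constants; it is not a model from the study.

```python
# Toy model of secretion overload: protein is synthesized at rate k_syn and
# exported through a capacity-limited (saturating) step; any excess piles up
# as cytoplasmic precursor. All rate constants are illustrative assumptions.
def simulate(k_syn, v_max=5.0, k_m=1.0, t_end=10.0, dt=0.001):
    """Forward-Euler integration of cytoplasmic precursor p and secreted s."""
    p = s = 0.0
    for _ in range(int(t_end / dt)):
        export = v_max * p / (k_m + p)  # Michaelis-Menten-like export capacity
        p += (k_syn - export) * dt
        s += export * dt
    return p, s

for k_syn in (1.0, 4.0, 8.0):  # below, near, and above the v_max = 5 capacity
    p, s = simulate(k_syn)
    print(f"k_syn={k_syn}: precursor={p:.1f}, secreted={s:.1f}")
```

With synthesis below capacity, precursor settles at a low steady state and nearly all product is exported; above capacity, precursor grows without bound, mirroring the cytoplasmic retention the authors seek to avoid.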
Pre-proteins contain an N-terminal signal sequence (signal peptide), 18-30 amino acids in length, that targets the protein for secretion [13]. The hydrophobicity of the signal peptide determines whether secretion occurs via the SecB-dependent or the signal recognition particle-dependent (SRP) pathway [14]. The classical distinction is that translocation via the SecB pathway occurs post-translationally, and via the SRP pathway co-translationally. Both the SecB and SRP pathways maintain the pre-protein in an unfolded 'translocation-competent state' [15]. Both pathways involve three key steps: (i) sorting and targeting, (ii) translocation, and (iii) release. The efficiency of each step is dependent upon the dynamic, transient interactions between the target protein and the various stages of the respective pathways, and hence secretion efficiency is highly dependent upon the biophysical characteristics of the recombinant protein [12,14,15]. Although Sec-dependent secretion is widely used, there are well-documented examples where the secretion machinery becomes overloaded and the Sec translocon becomes 'jammed', resulting in accumulation of the target protein in the cytoplasm and cell toxicity [16]. Above a certain optimal rate of translation, secretion rates can rapidly decrease [17]. This is most likely due to the limited secretion capacity of the E. coli transport machinery compared to the rate of translation [5]. When this secretion capacity is overwhelmed, the excess target protein is likely to accumulate in inclusion bodies, affecting protein titres and cell viability, highlighting the need to carefully optimize expression levels and the rate of recombinant protein production [18]. The commonly employed inducible bacterial expression systems mostly operate at the transcriptional level. For instance, lactose- or arabinose-regulated systems generate a heterogeneous cell population upon induction, where some cells are fully induced and other cells remain un-induced [19,20]. Tuneable expression systems can address some of these limitations by modulating gene expression to adjust to the physiological needs of the bacterial host and provide optimal parameters for recombinant protein production [21,22]. The RiboTite technology has been demonstrated to robustly control the expression of a variety of recombinant genes encoding therapeutic proteins in E. coli, and provides cellular-level titratable control of gene expression and very tight control of basal gene expression in the absence of induction [23]. The system operates at both the transcription and translation level to afford a gene regulatory cascade, by using an inducible promoter-operator-repressor (IPTG, P/Olac, lacI) and a small-molecule pyrimido-pyrimidine-2,4-diamine (PPDA) inducible translational ON orthogonal riboswitch (ORS), to control both a chromosomal copy of T7 RNAP and an episomal copy of the recombinant gene of interest (GOI) (Fig. 1a). Production of therapeutically important proteins such as cytokines and antibody fragments in E. coli commonly employs the SecYEG translocon to secrete the proteins into the periplasmic space [24,25].
Antibody fragments are truncated and engineered versions of antibodies, usually derived from the IgG isotype, that contain the complementarity-determining regions (CDRs) and retain binding capacity to specific antigens [26]. A single-chain Fv (scFv) consists of heavy- and light-chain variable domains associated through a short synthetic peptide linker. Antibody fragments have extensive applications for diagnostics and detection of a wide repertoire of agents, as well as for therapeutic treatment of a range of health disorders [27]. A range of scFv agents and derivatives are currently in clinical trials, with one anti-VEGF scFv that successfully completed Phase III trials in 2017 [28,29]. In this study, we explored whether the precise control of gene expression offered by the RiboTite system would avoid the previously observed overload of the Sec translocon [5,16], and permit isolation of protein with increased product quality, activity and titres.

Concept and workflow of applying the RiboTite expression system for titratable secretion
The RiboTite expression system [23] was employed in order to regulate SecYEG-dependent secretion of single-chain antibody fragments (scFv) into the periplasm of E. coli (Fig. 1b). Here, expression plasmids were constructed where the gene of interest (GOI) was placed in-frame with sfGFP to generate a fusion protein (pENTRY) (Fig. 1c). The pENTRY plasmid permits rapid evaluation and selection of signal sequence variants from a synonymous codon library. Following selection of variants with enhanced expression and regulatory performance, the fusion protein was removed by sub-cloning the GOI into the pDEST plasmid, and secretion performance was assessed. The selected clones were then assessed under fed-batch fermentation control to validate their performance under high-cell-density culture conditions. In this study we utilised the single-chain antibody fragments anti-β-galactosidase (scFvβ) [30], anti-histone (scFvH) [31], and anti-tetanus (scFvT) [32].

Design and construction of the expression strain and plasmids
In this study, expression strains and plasmids were developed to simultaneously achieve enhanced basal control and integration of the 5′ encoded signal peptide sequence, respectively. The E. coli expression strain BL21(LV2) was designed from the previously reported BL21(IL3) strain [23] by (i) replacing the repressor gene with a stronger repressor (lacIq), (ii) inverting its orientation to the opposite direction to the T7 RNAP gene, and (iii) incorporating an additional operator (O3) to further tighten the basal expression (Additional file 1: Fig. S1). To assess this modification we benchmarked the performance of various T7 RNAP-dependent strains for expression and regulatory control (Additional file 1: Table S1, Fig. S2). The analysis was performed by monitoring expression of eGFP (cytoplasmic) under different induction conditions, times and growth media. The BL21(LV2) strain demonstrated total expression comparable to the most commonly used expression strain, BL21(DE3), but with significantly greater regulatory control (> 1000-fold vs. ~ 30-fold) in the presence of the respective inducers, and was used for all subsequent analysis. Expression-secretion plasmids (pENTRY, pDEST) were designed to direct the produced recombinant protein towards the SecYEG translocon for periplasmic secretion. Four different signal peptide encoding sequences (SP) were cloned upstream of the GOI: two SecB-dependent signal peptides (Piii and PelB) and two SRP-dependent signal peptides (DsbA and yBGL2) [33,34].

Fig. 1 Concept and workflow of applying the RiboTite expression system for titratable secretion. a The RiboTite system operates at both the transcription and translation level, to afford a gene regulatory cascade controlling both T7 RNAP and the gene of interest (GOI). Transcriptional control is mediated by the lacI repressor protein, induced by isopropyl β-D-1-thiogalactopyranoside (IPTG). Translational control is mediated by an orthogonal riboswitch (ORS), which releases and sequesters the ribosome binding site (RBS) in the presence and absence of the inducer pyrimido-pyrimidine-2,4-diamine (PPDA), respectively. The system is composed of the E. coli expression strain BL21(LV2) and expression plasmids containing the T7 promoter. Shown are the pENTRY and pDEST expression plasmids used, which incorporate the signal peptide sequence (SP) to direct the produced protein for periplasmic translocation, and GOI and GOI-sfGFP fusions also under orthogonal riboswitch (ORS) and T7 promoter control. For further description of the BL21(LV2) cassette see Additional file 1: Fig. S1. b Riboswitch-dependent translation control of the RiboTite system is employed to match the expression rate to the secretion capacity of the Sec pathway. c Schematic diagram of the workflow. The pENTRY vectors were used to integrate the 5′UTR riboswitch with the 5′ encoded SP sequences. (1) A synonymous codon signal peptide library was generated, and (2-3) screened to select for clones that exhibit high protein expression and high regulatory control over basal induction. (4) Selected clones were sub-cloned into the pDEST vectors, and (5) screened for expression and secretion at small scale in shaker flasks (6) and in fed-batch bioreactors (7).

Integration of signal peptide sequences with the regulatory RiboTite system permits tuneable control of gene expression
The performance of cis-encoded regulatory RNA devices is known to be highly sensitive to flanking nucleotide sequence and structure [35,36]. This poor modularity limits the facile integration of RNA devices, e.g. riboswitches, into alternative coding contexts. Close to an open reading frame, RNA regulatory performance, e.g. translation initiation from the ribosome-binding site (RBS), has been shown to be sensitive to secondary structure in the 5′ coding region [37][38][39]. Building on this approach, we recently developed a riboswitch integration method that permits selection of codon variants with expanded riboswitch-dependent regulatory control over gene expression [40]. To optimise the regulatory performance of the cis-encoded translation ON riboswitch located in the 5′UTR and the 5′ encoded signal peptide sequences, the recently developed codon context integration method was used [40]. The method is based on the introduction of synonymous codons immediately downstream from the start codon; this conserves the amino acid sequence of the resulting signal peptide that interacts with the secretory apparatus (i.e. SRP or SecB), whilst permitting codon usage and RNA folding space to be explored. The synonymous codon libraries encoding the signal peptides of interest were generated by site-directed mutagenesis, to produce variants at codons 2 through to 6 using pENTRY (Additional file 1: Table S2). The theoretical library sizes ranged from 48 to 256 variants depending on the specific signal peptide; sufficient colonies were screened to ensure 95% coverage (> 3 times the theoretical size per library), using the BL21(LV2) expression strain.
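The library-size and coverage arithmetic just described can be made concrete. The sketch below uses standard genetic-code degeneracies and the usual sampling-with-replacement coverage estimate; the example signal-peptide residues are a hypothetical placeholder, not one of the paper's sequences.

```python
import math

# Standard codon degeneracy (number of synonymous codons) per amino acid.
DEGENERACY = {"A": 4, "R": 6, "N": 2, "D": 2, "C": 2, "Q": 2, "E": 2, "G": 4,
              "H": 2, "I": 3, "L": 6, "K": 2, "M": 1, "F": 2, "P": 4, "S": 6,
              "T": 4, "W": 1, "Y": 2, "V": 4}

def library_size(aa_positions: str) -> int:
    """Theoretical synonymous-codon library size for the given residues."""
    return math.prod(DEGENERACY[aa] for aa in aa_positions)

def colonies_for_coverage(size: int, coverage: float = 0.95) -> int:
    """Colonies to screen so each variant is seen with probability >= coverage
    (sampling with replacement): n = ln(1 - c) / ln(1 - 1/size)."""
    return math.ceil(math.log(1 - coverage) / math.log(1 - 1 / size))

# Hypothetical residues at codons 2-6 of a signal peptide (placeholder).
residues = "KKTAI"
v = library_size(residues)
print(v, colonies_for_coverage(v))  # 192 variants -> 574 colonies (~3x the size)
```

The result reproduces the rule of thumb used in the text: screening roughly three times the theoretical library size gives about 95% coverage.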
Hits were selected on the basis of expanded riboswitch-dependent expression control relative to the starting (WT) sequence. Strains with the selected codon-optimised and WT signal peptide sequences were treated with increasing inducer concentrations to assess expression and titratability (Additional file 1: Fig. S3). All selected codon variant strains exhibited higher maximum expression compared to their respective WT. Most variants showed a modest increase of maximum expression (up to twofold), whereas the Piii-E5 variant showed the highest expression increase, 577-fold higher than the strain with the WT signal peptide (Table 1). In the absence of any inducer, all strains showed minimal fluorescence signal. Expression in the presence of only the transcriptional inducer (IPTG = 150 µM) was reduced relative to wild type for the SRP-dependent pathway, whereas the reverse was observed for the SecB-dependent signal peptides. In terms of regulatory performance, the strain with the Piii-E5 signal peptide exhibited the largest dynamic range, both for riboswitch-dependent control (IP/I) (16-fold) and for total expression control (IP/UI) (127-fold) (Fig. 2a). The strains with DsbA-E1 and yBGL2-H1 also presented good riboswitch-dependent control of expression (IP/I), of 11-fold and 13-fold, and total expression control (IP/UI) of 33-fold and 60-fold, respectively. This is in comparison to other inducible T7 RNAP expression systems that have been reported to display twofold expression control of secretion [41]. All strains with codon-optimised signal peptide constructs were PPDA-titratable and showed improved expression and titratability compared to WT constructs, indicating a good integration of the riboswitch (Additional file 1: Fig. S3). Due to resource limitation and metabolic burden upon the host, higher protein production usually negatively impacts the cell density of a bacterial culture [42,43]. However, strains containing the codon-optimised signal peptides DsbA-E1 and Piii-E5 both displayed increased biomass (OD600) and higher expression per cell (RFU/OD) than the respective strains with WT signal peptides. Indeed, induction-dependent inhibition of cell growth was more prominent for the WT signal peptides (Additional file 1: Fig. S3). This observation seems to indicate that the optimised signal peptides permit more efficient expression and reduced host burden. Both the SecB-dependent (Piii-E5) and the SRP-dependent (DsbA-E1) constructs displayed significant improvements, in terms of expression and control, over their respective constructs with wild-type signal sequences (Fig. 2a). Overall, the SecB-dependent Piii-E5 construct presented the highest maximum expression, best regulatory performance and biomass accumulation, while the SRP-dependent DsbA-E1 construct exhibited the best dose-response profile (Additional file 1: Fig. S3). The DsbA-E1 and Piii-E5 constructs were selected and sub-cloned (pDEST) to remove the GFP fusion ("Methods" section).

Codon optimised signal peptides permit tuneable expression and secretion of scFvβ
Expression and secretion performance of pDEST-scFvβ containing the DsbA-E1 and Piii-E5 signal peptides was assessed using the E. coli BL21(LV2) expression strain, following induction at 30 °C for 14 h ("Methods" section). A lower maximum protein production per cell (yield), expressed as mg of recombinant protein per g of dry cell weight (mg/g DCW), was achieved for the strain with the DsbA-E1 signal peptide (9 mg/g DCW) compared to the Piii-E5 (29 mg/g DCW) (Fig. 2b) (Additional file 1: Table S3).
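The induction metrics used throughout (IP/I for riboswitch-dependent control, IP/UI for total dynamic range) are simple ratios of reporter readings under the different inducer combinations (UI = uninduced, I = IPTG only, IP = IPTG + PPDA). A minimal helper is sketched below with invented readings chosen to mirror the reported Piii-E5 fold-changes; these are not measured data.

```python
def induction_metrics(ui, i, ip):
    """Fold-change metrics from mean reporter readings (e.g. RFU/OD):
    ui = uninduced, i = IPTG only, ip = IPTG + PPDA."""
    return {"IP/I (riboswitch control)": ip / i,
            "IP/UI (total dynamic range)": ip / ui}

# Invented readings chosen to reproduce ~16-fold IP/I and ~127-fold IP/UI.
print(induction_metrics(ui=10.0, i=80.0, ip=1270.0))
```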
Both strains displayed excellent basal control, with no detectable production of scFvβ in the absence of induction. Further, both strains displayed good riboswitch-dependent (IP/I) control of expression, of 7- and 11-fold for the DsbA-E1 and Piii-E5 signal peptides, respectively. In terms of secretion, the strain with DsbA-E1 displayed a good yield and secretion efficiency (7.6 mg/g and 81%, respectively), whereas the strain with Piii-E5 displayed a slightly lower yield and poorer efficiency (5.6 mg/g and 19%), due to greater total production and retention of scFvβ in the spheroplast fraction. Addition of the inducers did not greatly compromise the biomass, with only a small reduction (15%) in final OD600 (Additional file 1: Table S3). Both strains displayed good riboswitch-dependent (IP/I) control of secretion, of 6- and 13-fold for DsbA-E1 and Piii-E5 respectively, demonstrating that the control afforded by the system permits attenuation of scFvβ through the SecYEG translocon via both the SRP- and SecB-dependent pathways (Fig. 2b). Analysis of the half-maximal effective concentration (EC50) indicates that expression with DsbA-E1 (10 ± 2 μM) is saturated at a lower inducer concentration compared to Piii-E5 (23 ± 6 μM). In terms of secretion, both signal peptides/pathways displayed similar sensitivity/saturation (EC50): DsbA-E1 (7 ± 2 μM) and Piii-E5 (9 ± 6 μM) (Additional file 1: Table S3). Interestingly, this closer matching of the EC50 values between expression and secretion for the DsbA-E1 seems to reflect the greater degree of coordination between translation and secretion in the co-translational SRP pathway [14]. Under these conditions, both signal peptides/pathways displayed similar yield, with the co-translational (SRP) pathway performing with greater secretion efficiency (Additional file 1: Table S3). To assess the utility of using the pENTRY (GOI-GFP fusion) plasmid to select signal peptide sequences with optimised codon usage for use in the final secretion pDEST plasmids, we sought to correlate induction-dependent regulatory control from strains with these plasmids (pENTRY vs. pDEST) (Fig. 2c, d). For both signal peptides, expression from the scFv-GFP fusion (pENTRY) displayed a linear regression coefficient (slope ~ 1) with total expression of the scFv protein (pDEST). Expression from the pENTRY also displayed a close-to-linear coefficient with secretion from pDEST for the DsbA-E1 signal peptide (slope ~ 0.8), whereas the coefficient with secretion for the Piii-E5 signal peptide was reduced (slope ~ 0.2).

Performance of codon optimised signal peptides in the absence of translational riboswitch control
To evaluate and benchmark protein production and secretion in the RiboTite system compared to standard expression systems, the scFvβ gene bearing the same signal peptide sequences (DsbA-E1 and Piii-E5) was sub-cloned into a compatible expression plasmid (pET), and expression was assessed in the most commonly used T7 RNAP expression strain, BL21(DE3) ("Methods" section). Bacterial cell cultures were grown under the same conditions, and induced for 14 h at 30 °C. The non-riboswitch-containing strains (BL21(DE3)-pET) produced scFvβ in yields of 22 and 12 mg/g DCW, and periplasmic secretion yields of 3.5 and 3.6 mg/g DCW, for DsbA-E1 and Piii-E5 respectively, affording periplasmic secretion efficiencies of 16 and 30% (Additional file 1: Fig. S4).
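EC50 values such as those quoted above are typically obtained by fitting a four-parameter logistic dose-response curve, the same functional form the authors name in their methods. A minimal SciPy equivalent is sketched below; the titration data are invented for illustration and loosely consistent with an EC50 near 10 µM.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

# Made-up PPDA titration (µM) and normalised responses, illustration only.
dose = np.array([0.1, 1, 3, 10, 30, 100, 400], dtype=float)
resp = np.array([0.02, 0.08, 0.25, 0.52, 0.80, 0.95, 1.00])

popt, _ = curve_fit(four_pl, dose, resp, p0=[0.0, 1.0, 10.0, 1.0])
bottom, top, ec50, hill = popt
print(f"EC50 ≈ {ec50:.1f} µM, Hill slope ≈ {hill:.2f}")
```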
The final OD600 achieved for the BL21(DE3)-pET strains was 1.5 and 3.7 with the Piii-E5 and DsbA-E1 signal peptides, compared to OD600 values of 10 and 11 for the respective signal peptides in the BL21(LV2)-pDEST strains. This compromise in final biomass led to lower total expression and periplasmic secretion titres for scFvβ in the non-riboswitch DsbA-E1 (25.3 ± 3.6 and 4.5 ± 0.7 mg/L) and Piii-E5 strains (6.2 ± 0.5 and 1.9 ± 0.3 mg/L). This is compared to expression and secretion titres in the BL21(LV2) DsbA-E1 (36.7 ± 10.4 and 26.2 ± 7.3 mg/L) and Piii-E5 strains (101.7 ± 31.1 and 17.0 ± 0.2 mg/L) (Table 2). Regulatory control of 17- and 3-fold was observed for the DsbA-E1 and Piii-E5 signal peptides, respectively, in the BL21(DE3) strain. No basal expression was detected for either signal peptide in the BL21(LV2) within the western blot detection limit. This analysis was performed using a highly sensitive near-infrared fluorescent detection technique, which is capable of detecting down to 50 pg of scFvβ, equivalent to 0.01 mg/L based on a biomass of OD600 = 10. In summary, the BL21(LV2) strain permitted better secretion per cell (yield), better secretion efficiency, and better biomass accumulation than the BL21(DE3) strain. The cumulative benefits of these improvements led to a significant improvement, up to a ninefold increase, in scFvβ secretion titres.

Codon optimised signal peptides permit tuneable expression and secretion of alternative scFv's
To explore the modularity of both the approach and the selected signal peptides, expression and secretion of the alternative single-chain antibody fragments anti-histone (scFvH) [31] and anti-tetanus (scFvT) [32] was explored (Fig. 3) (Additional file 1: Table S3 and Fig. S5). In terms of total expression, the scFv's were produced at different levels, ranging from 5 to 16 mg/g for strains with the DsbA-E1 signal peptide, and between 8 and 138 mg/g with Piii-E5. Despite this variability, the rank order of scFv expression was maintained (scFvT > scFvβ > scFvH). All strains displayed riboswitch-dependent (IP/I) control of expression of between 5- and 11-fold, with the Piii-E5 generally outperforming the DsbA-E1. In terms of secretion, Piii-E5-scFvT displayed the best yield but the poorest efficiency; secretion efficiency was highly variable, but greater efficiency was observed for the SRP-dependent pathway (DsbA-E1, 37-81%) compared to the SecB-dependent pathway (Piii-E5, 9-23%). The strains with DsbA-E1-scFvT and Piii-E5-scFvβ displayed the best riboswitch-dependent (IP/I) control of protein secretion, of 7- and 13-fold respectively. Intriguingly, clear attenuation of scFvβ and scFvT in the periplasmic fraction is observed with DsbA-E1, up to a maximum of ~ 7 mg/g (Fig. 3a-c). However, above a certain level (> 4 mg/g), greater retention of scFv is observed in the spheroplast fraction, indicating a system capacity overload at these higher production levels. Similarly, at higher production levels (> 6 mg/g), release of scFv into the media fraction was observed. To verify proper post-secretion processing of the scFv from the higher-producing constructs, scFvβ and scFvT (Fig. 2a-d), intact mass spectrometry was used (Additional file 1: Fig. S6), which showed that all scFv's isolated from the periplasm were correctly processed mature proteins (signal peptide absent), following correct signal peptidase-I processing. The scFvT isolated from the media fraction was also analysed by intact mass spectrometry, which also validated the correct processing of the recombinant protein (Additional file 1: Fig. S7).
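The yield, titre and secretion-efficiency figures compared above are related by simple arithmetic: titre is product mass per litre of culture, yield normalises that by dry cell weight, and secretion efficiency is the periplasmic share of total product. A sketch with invented numbers follows, using the OD600-to-DCW factor (0.35 mg/mL per OD unit) given later in the paper's methods.

```python
def dcw_g_per_l(od600, factor=0.35):
    """Dry cell weight (g/L) from OD600 (0.35 mg/mL per OD unit, per methods)."""
    return od600 * factor

def yield_mg_per_g(titre_mg_per_l, od600):
    """Specific yield (mg product per g DCW) from a volumetric titre."""
    return titre_mg_per_l / dcw_g_per_l(od600)

def secretion_efficiency(periplasm_mg_per_l, total_mg_per_l):
    """Fraction of the total product recovered in the periplasmic fraction."""
    return periplasm_mg_per_l / total_mg_per_l

# Invented example: total titre 100 mg/L at OD600 = 10, 25 mg/L periplasmic.
print(f"yield = {yield_mg_per_g(100, 10):.1f} mg/g DCW")    # ~28.6
print(f"efficiency = {secretion_efficiency(25, 100):.0%}")  # 25%
```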
The scFvβ and scFvT proteins were also assessed by size-exclusion chromatography coupled with multi-angle light scattering; this indicated that both were monomeric, with apparent molecular mass values that correspond to the expected protein molecular weight (Additional file 1: Fig. S8). In order to further assess the precursor protein processing and spheroplast retention of scFvβ and scFvT, proteins located in the spheroplast and periplasm fractions were analysed by western blot, following SDS-PAGE using an extended running time to separate the protein forms (Fig. 4a-d). Analysis of DsbA E1-scFvβ indicates that the target protein located in both the spheroplast and periplasm fractions has the same retention time (Fig. 4a), and the same is also observed for DsbA E1-scFvT (Fig. 4c). In contrast, analysis of Piii E5-scFvβ and Piii E5-scFvT (Fig. 4b, d) indicates that the spheroplast fractions contain two species, the processed scFv and presumably the precursor, with the precursor being the dominant species. Due to the small difference in molecular weight between the DsbA and Piii signal peptides (1.99 vs. 2.16 kDa), the processed and precursor forms for DsbA-dependent constructs should, in principle, be resolved by SDS-PAGE, as per the Piii-dependent constructs. On this basis, it appears possible that the spheroplast fraction for the DsbA E1-dependent samples contains the mature processed protein. The same periplasm fractions for scFvβ and scFvT were also assessed for disulfide bond formation under reducing and non-reducing conditions (Fig. 4e-h). The faster migration of the non-reduced samples is due to their more compact structure, which indicates correct disulfide bond formation of the scFv's within the periplasm. Finally, the scFvβ and scFvH isolated from the periplasm fraction were also analysed for binding activity to β-galactosidase and histone substrates, respectively ("Methods" section), and displayed binding affinity values (Additional file 1: Fig. S9 and Fig. S10) comparable to literature values [44].

Expression and secretion control performance is maintained under fed-batch fermentation
Fed-batch fermentation experiments were performed on the Ambr 250 multi-parallel bioreactor system. Initial trials focused on scFvβ production with both the DsbA-E1 and Piii-E5 signal peptides in the BL21(LV2)-pDEST strain (Fig. 5a, b). Following inoculation, bioreactor cultures were grown in batch mode until a sharp dissolved oxygen increase, used as an indicator of nutrient limitation; an exponential glucose feed was then initiated to achieve a specific growth rate (μ = 0.2 h−1) until the end of the fermentation (22.5 h). The cultures were induced at OD600 = 20-30, with fixed IPTG (100 µM) and different PPDA (0, 4, 40, 400 µM) concentrations. Following addition of the inducers, all cultures grew with similar growth kinetics for the first 2 h, whilst between 4 and 6 h post-induction the culture with the highest concentration of inducers displayed reduced biomass accumulation (Fig. 5a). At 8 h post-induction, the final biomass varied from OD600 = 80 to 50 depending on the inducer concentration. This inverse trend between inducer concentration and final biomass was consistent with cell viability (Additional file 1: Fig. S11). Samples taken 4 h post-induction (18 h) were analysed for protein production and secretion (Fig. 5b, Table 3; Additional file 1: Table S4); the highest production was observed with the DsbA-E1 signal peptide (43 mg/g DCW, 859 mg/L). No expression 'leak' was observed prior to induction (14 h).
Induction with IPTG only led to basal protein production in both strains (2-3 mg/g DCW, 36-66 mg/L). Addition of PPDA (400 µM) resulted in riboswitch-dependent expression control of 27-fold and 17-fold for the DsbA-E1 and Piii-E5, respectively (Fig. 5b). In terms of secretion, similar yields and titres were observed, with the DsbA-E1 (12 mg/g DCW, 248 mg/L) slightly outperformed by the Piii-E5 (14 mg/g DCW, 269 mg/L). Secretion efficiency was slightly higher for the DsbA-E1 (29%) than the Piii-E5 (27%) at the highest inducer concentration. At low inducer concentrations, secretion efficiency increased significantly, up to ~ 80%, for both DsbA-E1 and Piii-E5 (Fig. 5b). To further explore the secretion productivity seen with the DsbA-E1 signal peptide, another fermentation experiment was performed using the DsbA-E1 signal peptide with scFvβ and scFvT (Fig. 5c, d). As very tight control of expression in the absence of induction was observed in the initial trial, but reduced biomass accumulation was also observed for induction times ≥ 4 h, a modified growth/induction strategy was implemented. The batch-to-fed-batch transition was maintained as before ("Methods" section), but cultures were induced later, at OD600 = 55-65, with fixed IPTG (100 µM) and different PPDA (20, 40, 200, 400 µM) concentrations. Prior to induction, no leaky expression was detected (Additional file 1: Table S5). Following addition of the inducers, all cultures grew with similar growth kinetics for the first 4 h, attaining a biomass of OD600 = 95-110 at 4 h post-induction; further induction time led to a plateau in growth and a drop in biomass, possibly due to dilution by the continuous feed (Fig. 5c). Good viability was observed for all induction conditions and times (Additional file 1: Fig. S11). The scFvβ and scFvT from the periplasm fraction were purified and analysed using intact mass spectrometry to confirm that the protein is correctly processed by cleavage of the signal peptide (Additional file 1: Fig. S7). To demonstrate the correct cell fractionation procedure, an indicative Coomassie-stained SDS-PAGE and western blot analysis is shown (Fig. 6). Western blot analysis against the cytoplasm-specific marker (sigma 70) indicates correct fractionation, due to the absence of signal in the periplasmic fractions. Precursor protein processing and spheroplast retention of scFvβ and scFvT were assessed by western blot, following SDS-PAGE (Additional file 1: Fig. S12). Consistent with the shake-flask analysis, both DsbA E1-scFvβ and DsbA E1-scFvT from the spheroplast fractions were composed of only one species. The same periplasm fractions were also assessed for disulfide bond formation under reducing and non-reducing conditions, which demonstrated disulfide bond formation for both scFvβ and scFvT isolated from the periplasm (Additional file 1: Fig. S12). The highest scFv production was achieved with the highest concentration of inducers (IPTG: 100 µM, PPDA: 400 µM), with higher yields and titres observed for the scFvβ (17 mg/g DCW, 572 mg/L) compared to scFvT (11 mg/g DCW, 392 mg/L) at 4 h post-induction (Fig. 5d) (Additional file 1: Table S5 and Fig. S13).
Production levels at 6 h for scFvβ decreased slightly (14 mg/g DCW, 436 mg/L), whereas those for scFvT increased slightly (13 mg/g DCW, 417 mg/L) (Additional file 1: Fig. S13 and Table S6). In terms of secretion yields and titres, scFvβ had maximal production at 4 h post-induction (11 mg/g DCW, 352 mg/L), whereas scFvT had maximal production at 6 h post-induction (7 mg/g DCW, 219 mg/L) (Table 3) (Additional file 1: Table S5 and Table S6). Both displayed similar secretion efficiency (53-62%) at the highest inducer concentration. Additionally, secretion efficiency was modulated under riboswitch-dependent control, achieving up to 90% efficiency at lower inducer concentrations (Fig. 5d). Comparing the two fermentation trials, the total production yield and titre of DsbA-E1-scFvβ were higher (2.5-fold and 1.5-fold, respectively) in the initial fermentation trial (Additional file 1: Tables S4 and S5). However, the second trial displayed enhanced secretion efficiency (29% vs. 62%), in addition to enhanced biomass accumulation (OD600 57 vs. 90) and cell viability (CFU/mL/OD 2.4 × 10^9 vs. 8.5 × 10^10); this led to a similar titre for scFvβ secretion in both trials (248 vs. 352 mg/L). However, secreted scFv was all contained within the periplasm in trial 2, whereas trial 1 exhibited substantial leakage (25%) across the outer membrane into the media fraction.

Discussion
Here we have shown that use of the multi-layered gene expression control system, RiboTite, in combination with codon-optimised signal peptide sequences, permits attenuation of recombinant expression and periplasmic secretion of single-chain antibody fragments (scFvs). In this study we employed an orthogonal translation riboswitch control element (ORS), which releases and sequesters an RBS in the presence and absence of the small-molecule inducer (PPDA) [23]. A modified T7 RNAP-dependent E. coli expression strain, BL21(LV2), was developed and benchmarked for expression and control against BL21(DE3). This system uses two small-molecule inducers (IPTG and PPDA) that operate at the transcriptional and translational levels, respectively, controlling expression of both the T7 RNAP and the gene of interest [23]. This new strain displayed excellent riboswitch-dependent control (> 40-fold) and extremely large small-molecule-dependent (IPTG + PPDA) control of expression (> 1200-fold), which, as far as we are aware, is an unprecedented induction dynamic range for T7 RNAP-dependent expression systems (Additional file 1: Table S1). The exemplar gene of interest, coding for the single-chain antibody fragment anti-β-galactosidase (scFvβ) [30], was initially expressed as a GFP fusion protein (pENTRY) to permit the rapid selection of signal peptide sequences from a synonymous codon library (Fig. 1). Codon usage is an important feature for optimal heterologous gene expression [45], and a large number of algorithms have been developed to optimize codon usage for recombinant genes [46][47][48][49]. The 5′ coding regions of genes for secreted proteins are known to be enriched with 'non-optimal' or rare codons [50][51][52]. Clustering of non-optimal codons in the N-terminal region of the signal peptide is believed to slow the rate of translation and allow efficient engagement with the secretion apparatus [51]. An alternative to this 'ramp' hypothesis is derived from the observation that non-optimal codons have a higher proportion of A-T pairs, affording transcripts with reduced local secondary structure [37,53,54].
For secretion of recombinant proteins, non-optimal codon usage in the signal peptide sequence has been shown to positively impact protein folding and export [33,55]. Recent work on optimal integration of the orthogonal riboswitch (ORS) into the 5′UTR demonstrated that codon selection is determined by structural features rather than codon rarity [40]. Here, the codon selection method permitted functional, context-dependent integration of the orthogonal riboswitch in the 5′UTR to afford a broad, inducer-dependent dynamic range of gene expression control. The small sample size (n = 8) in the current study did not permit thorough statistical analysis of codon and mRNA folding metrics for the identified signal peptide sequences to support or exclude either the ramp or the structural hypothesis. The selected clones included codon-optimised signal peptide sequences for both the SecB-dependent (Piii-E5) and SRP-dependent (DsbA-E1) pathways, permitting good riboswitch-dependent (up to 16-fold) and total (up to 127-fold) control over eGFP reporter gene expression. Removal of the reporter fusion afforded pDEST, which demonstrated absolute control of basal expression in the absence of induction and an excellent dynamic range of control over gene expression and secretion (Fig. 2b, Table 2). Under batch shake-flask conditions, scFvβ was less well expressed and secreted with the DsbA-E1 signal peptide; however, this difference in performance was reduced under fed-batch fermentation conditions (Fig. 5, Table 3). Correlation of inducible expression performance between fusion and non-fusion constructs (pENTRY vs. pDEST) indicated that the dynamic range of expression control for both showed excellent linearity and good regression coefficients (slope), validating the approach and the utility of the pENTRY screen to select for optimal codon variants (Fig. 2c, d). Interestingly, the iterative (post-translational) mechanism of the SecB-dependent pathway was clearly demonstrated by a small regression coefficient (shallow slope) between expression of the GFP fusion and secretion performance of the non-fusion construct. In the same manner, comparison of the secretion efficiencies and the dose-response curves (EC50) indicated the better coordination and coupling between expression and secretion for the SRP-dependent pathway (Additional file 1: Table S3). When compared against the classical T7 RNAP-dependent inducible promoter/operator E. coli expression strain, BL21(DE3), the RiboTite system permitted greater control over scFvβ expression and secretion and displayed an enhanced secretion titre (up to ninefold) (Additional file 1: Fig. S4). Exchange of the scFvβ for the alternative antibody fragments anti-histone (scFvH) [31] and anti-tetanus (scFvT) [32] was performed; these related proteins display amino acid and nucleotide sequence identity down to 87%. Regulatory control was maintained for all scFvs; however, expression yields were very sensitive to the gene of interest (Fig. 3). This observation is consistent with previous reports, which indicate that variability within the complementarity-determining regions (CDRs) of antibody fragments significantly affects production yields [56]. For each scFv protein expressed, the DsbA-E1 gave lower total expression compared to the Piii-E5, but better secretion efficiency, for both shake-flask and fed-batch fermentation experiments. For all samples analysed, scFv isolated from the periplasm was correctly processed, with disulfide bond formation and activity.
For all batch and fed-batch experiments, retention of the target scFv was observed in the spheroplast fraction. Product retained in the spheroplast for Piii-E5 (SecB) was predominantly precursor protein, which had not been translocated or cleaved, indicative of an overload of the secretion pathway. However, product retained in the spheroplast for DsbA-E1 (SRP) appears to be processed mature scFv, based on identical SDS-PAGE retention to scFv from the periplasm, indicating scFv protein insolubility in the periplasm and/or overload of the periplasmic folding capacity. Previous studies have also shown that use of SRP-dependent signal peptide sequences increased the secretion yield and efficiency of recombinant proteins, by avoiding the premature cytoplasmic folding associated with the SecB-dependent pathway [10,57]. The ability to secrete recombinant proteins into the E. coli periplasmic compartment is limited by the periplasm size and the secretion capacity of the cell. The smaller periplasmic compartment accounts for less than 20% of the total cell volume [58]. Depending on the strain, signal peptide and protein of interest used for secretion, there is a certain threshold on the amount of protein that can be exported into the periplasmic compartment. For recombinant human proinsulin, an upper secretion limit of 7.2 mg/g DCW was previously reported [5]. Previous studies on the secretion of scFv's under both batch and fed-batch conditions have reported titres between 50 and 90 mg/L [10,59]; higher values have been reported, but periplasmic titres above 400 mg/L resulted in significant cell lysis [60]. Here, under fed-batch fermentation, a periplasmic secretion yield for scFvβ of 14 mg/g DCW with the Piii-E5, and 12 mg/g DCW with the DsbA-E1, was observed (Additional file 1: Table S4). Exceeding a specific limit in each condition led to accumulation of protein in the media fraction. In our studies, intact mass spectrometry analysis showed that the scFv protein detected in the culture media was processed correctly: the signal peptide had been cleaved from the recombinant protein, indicating that the protein was translocated across the inner membrane via SecYEG and then released across the outer membrane (Additional file 1: Fig. S6). It has been recognised that recombinant protein secretion can lead to release of the protein into the cultivation media [25,61]. The exact mechanism is not yet known, but outer membrane protein and lipid composition have been shown to be altered during prolonged fermentation conditions [62,63]. It is also well known that the SecYEG-dependent secretion apparatus can easily become overloaded [17,41,64]. To overcome this, careful optimisation is required to match the recombinant expression rate to the secretion capacity of the host, so as to maximise translocation efficiency. In previous studies we demonstrated, with a closely related strain, that the RiboTite system produced recombinant GFP fourfold slower (RFU/OD/h) than the classical E. coli T7 RNAP-dependent strain, BL21(DE3), and that the rate of expression could be reduced a further eightfold at lower inducer concentrations [23]. In this study, the slower expression kinetics of the RiboTite system and the ability to attenuate the expression rate permitted a range of expression rates to be assessed and matched to the host secretion rate, to maximise secretion efficiency (Fig. 3). In this study, the RiboTite system produced industrially relevant titres of scFv under fed-batch fermentation conditions.
Further improvements in secretion titres could be achieved by co-expression with periplasmic chaperones and helpers. Indeed, co-expression of molecular chaperones has been reported to favour increased secretion of recombinant proteins by promoting correct protein folding and/or disulfide-bond formation [3,8,9]. Specifically for scFv expression, co-expression of the Skp chaperone [59,65,66], the FkpA peptidyl-prolyl isomerase [59,67], and the disulfide bond isomerase DsbC [68,69] have been shown to improve recombinant protein solubility and increase titres.

Conclusion
We demonstrate that tuning gene expression, and therefore protein secretion, with the RiboTite system is a viable approach for the secretion of recombinant proteins. Codon optimisation of the signal peptide sequences allowed integration of the orthogonal riboswitch, permitting fine-tuning of protein production. The RiboTite system permits (i) robust control over basal expression in the absence of induction, and (ii) finely tuned control over expression, to avoid overload of the Sec-dependent secretion pathway. Under fed-batch fermentation, protein production and secretion titres of up to 1.0 g/L and 0.35 g/L, respectively, were achieved, whilst cell viability and biomass accumulation were maintained. High product titre, quality and activity were achieved irrespective of the Sec-dependent pathway employed, although greater secretion efficiency was observed with the SRP pathway. Increasing host secretion efficiency and productivity is an important cost consideration for the manufacture of recombinant antibody fragments. Enhanced protein production capability can facilitate the transition of candidate therapeutic proteins towards the clinic by limiting manufacturing failure during early-stage development. Additionally, reduced manufacturing costs could lessen the financial burden upon healthcare providers and permit more equitable global access to protein-based therapeutic medicines.

Construction of BL21(LV2) and K12(LV2) strains of E. coli
To build on the RiboTite system for periplasmic secretion purposes, the BL21(IL3) strain [23] was modified to generate the BL21(LV2) strain, for tighter control of the secretion of recombinant proteins into the bacterial periplasm. A cassette was constructed using the pZB insertion plasmid (containing a chloramphenicol expression cassette flanked by two dif sites [70], modified by the addition of regions of homology to the araD and araC genes). Relative to the insertion cassette of the BL21(IL3) strain, the lacI was switched to lacIq and its orientation was inverted. The modified cassette was amplified by PCR and inserted by homologous recombination into the genome of E. coli BL21 (F− ompT gal dcm lon hsdSB(rB− mB−) [malB+]K-12(λS)), within the araC-D locus, with pSIM18 [71], to generate BL21(LV2). The same cassette was also inserted into the E. coli strain K12 W3110 (F− λ− INV(rrnD-rrnE) rph-1) to generate K12(LV2).

Bacterial cell culture
All cell cultures were grown in TB media (2.7% yeast extract, 4.5% glycerol, 1.3% Bactotryptone) supplemented with 0.2% glucose. The cell cultures for codon and strain selection assays were also grown in LB media (0.5% yeast extract, 0.5% NaCl, 1% Bactotryptone) with the addition of 0.2% glucose. Plasmids were selected using ampicillin (100 µg/mL) or kanamycin (50 µg/mL), all purchased from Sigma. Cultures were inoculated directly from freshly plated recombinant colonies. For strain selection, pre-cultures were grown at 37 °C with shaking (180 rpm) to an OD600 = 0.
Selection of signal peptides with synonymous codons
Mutagenesis was performed as per the manufacturer's (NEB) protocol using Phusion HF DNA Polymerase. Mutagenic libraries of the pENTRY template were generated by PCR mutagenesis with primers randomized at the wobble position of codons 2 to 6. Depending on the codon degeneracy of each specific amino acid, the appropriate randomised nucleotide base (Y, R or N) was incorporated at the positions within the mutagenic primer corresponding to the third nucleotide of each codon, permitting generation of a synonymous codon library. The product was DpnI-treated to remove the template (37 °C, 4 h) and transformed into Top10 F' competent E. coli cells. Individual colonies (N > 10) were picked and screened to confirm complete template removal and library diversity. Colonies were screened to ensure 95% coverage (threefold) of the theoretical library size; variants were selected on the basis of expanded riboswitch- (PPDA-) dependent control.

Fractionation of E. coli
Cultures were grown as described in the "Bacterial cell culture" (shake flask) section. At specific times post-induction, a culture volume equivalent to ODV 10 (OD600 × mL = 10) was collected by centrifugation (6000 g, 15 min, 4 °C) and the pellet was resuspended in 250 μL Buffer 1 (100 mM Tris-acetate pH 8.2, 500 mM sucrose, 5 mM EDTA), followed by addition of lysozyme (0.16 mg/mL) and Milli-Q water (250 μL). Cells were left on ice for 5 min and then MgSO4 (20 mM) was added to stabilise the spheroplasts. The periplasm (supernatant) fractions were collected by centrifugation, while the spheroplasts (pellet) were washed once with Buffer 2 (50 mM Tris-acetate pH 8.2, 250 mM sucrose, 10 mM MgSO4) and resuspended in Buffer 3 (50 mM Tris-acetate pH 8.2, 2.5 mM EDTA, 0.1% sodium deoxycholate and 250 U/μL Benzonase). Spheroplasts were lysed by freezing at −20 °C overnight and thawing at room temperature prior to being analysed. The media and cell fractions were stored at 4 °C short-term or at −20 °C for long-term storage. All fractions were prepared from biological triplicates.

Western blot analysis
The media, periplasm and spheroplast fractions were analysed by western blot, using a LI-COR Odyssey Sa infrared imager. Samples were re-suspended in SDS-PAGE loading buffer (Thermo Fisher), supplemented with 50 mM dithiothreitol, and boiled for 10 min. Samples were diluted to be within the linear range of the protein standard curve (below). Equal volumes of sample were loaded and separated by SDS-PAGE, and protein amounts were confirmed and quantified by western blot. The membrane was blocked with phosphate-buffered saline (PBS) containing 5% skimmed milk for 20 min (~ 55 rpm, room temperature). The His-tagged scFv protein was detected with a mouse monoclonal anti-His antibody (Pierce, 1:3000 in PBS 5% milk) and IRDye 680RD donkey anti-mouse IgG (LI-COR, 1:20,000 in PBST 5% milk). The protein bands were visualised at 700 nm with the Odyssey Imaging System (LI-COR), and the signal intensity was quantified with the Image Studio 5.0 software for densitometry analysis. Protein quantification was performed using purified recombinant scFv protein standard curves (50 pg to 120 ng). All data were measured in biological triplicates.

scFv binding assay
The activity assay was performed using a Bio-Dot device (Bio-Rad). A pre-wet 0.2 µm nitrocellulose membrane (Amersham Hybond) was placed in the apparatus and the membrane was rehydrated with PBS.
The substrates, β-galactosidase or core histone mix (Sigma), and the negative control (bovine serum albumin) were diluted in Buffer 1 and added onto the membrane (0.5 ng/dot). The membrane was left to dry by gravity flow. A serial dilution of the periplasmic fraction containing scFv13R4 or scFvT in Buffer 1 was then added to the membrane. The membrane was again left to dry by gravity flow. A vacuum was applied to the apparatus to remove any excess liquid. The membrane was taken from the apparatus and blocked for 20 min with 5% milk PBS (50 rpm, room temperature). His-tagged scFv13R4 or scFvT protein was detected as described above ("Western blot analysis" section). The signal intensity was quantified with the Image Studio 5.0 software for densitometry analysis, and GraphPad Prism 7 was used for curve fitting using a four-parameter logistic function. All data were measured in biological triplicates.

Fed-batch fermentation
Starter cultures were grown overnight in 25 mL of LB with 0.2% glucose and 50 µg·mL−1 kanamycin at 30 °C. Overnight cultures were used to inoculate 50 mL of LB with 0.2% glucose and 50 µg·mL−1 kanamycin in a 250 mL baffled shake flask, which was incubated at 30 °C at 200 rpm until an OD600 of between 2 and 4. Fed-batch fermentations used the Ambr 250 modular system (Sartorius Stedim), which comprises 250 mL single-use bioreactors. Fermentations started with 150 mL of batch medium and 100 mL of feed. The batch medium was from [72] and comprised batch salts and kanamycin. Batch salts were sterilised by autoclaving. All other culture medium components were filter-sterilised and added to the fermentation vessels before use. The pH was maintained at 6.8 using 10% NH4OH and 1 M HCl. Polypropylene glycol (PPG 2000) was used as antifoam. The dissolved oxygen (DOT) was maintained above 20% when possible, using cascade control (increasing the stirrer speed, followed by an increase in the air flow rate and, if not sufficient, by addition of O2). Bioreactors were inoculated to an OD600 of 0.1. Exponential feeding was used according to Eq. 1:

F = (μ/Y_XS + m) · X0 · e^(μt) / S (1)

where F is the feed rate in L·h−1, S is the substrate concentration in the feed (depending on the fermentation run, 220 g·L−1 or 440 g·L−1 glucose monohydrate), μ is the required specific growth rate (0.2 h−1), Y_XS is the yield coefficient (0.365 g biomass per g glucose), m is the maintenance coefficient (0.0468), X0 is the biomass in g at the start of the feed, and t is time. The feed was started when the DO increased, indicating nutrient limitation.
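The exponential feed profile of Eq. 1 is straightforward to evaluate; a short sketch using the stated coefficients follows. The starting biomass X0 is a placeholder, since it depends on the culture at the start of feeding.

```python
import math

def feed_rate_l_per_h(t_h, x0_g, s_g_per_l=220.0, mu=0.2, y_xs=0.365, m=0.0468):
    """Exponential feed rate F(t) = (mu/Y_XS + m) * X0 * exp(mu*t) / S (Eq. 1).
    Coefficients are those stated in the methods; x0_g (biomass in g at the
    start of the feed) is a placeholder depending on the actual culture."""
    return (mu / y_xs + m) * x0_g * math.exp(mu * t_h) / s_g_per_l

# Feed profile over the first 8 h of feeding for an assumed 5 g of biomass.
for t in range(0, 9, 2):
    print(f"t = {t} h: F = {feed_rate_l_per_h(t, x0_g=5.0) * 1000:.1f} mL/h")
```

Cell viability assay (CFU)
Culture samples taken post-induction were serially diluted in PBS and plated onto LB agar to evaluate cell culturability, used as an indication of cell viability. LB agar plates were incubated at 37 °C overnight.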
Data processing and statistical analysis
Data were processed and analysed using Microsoft Excel, GraphPad Prism 7 and OriginPro 8.5.1. Each data point used for analysis was from three biological experimental repeats and was used for fitting a logistic growth curve. The EC50 value represents the amount of PPDA needed to achieve half of the maximum induction response. Error bars represent calculated standard deviations. For western blot quantification, the Image Studio 5.0 software was used for densitometry analysis. A calibration curve was constructed using 3 to 6 scFv standards. Data were fitted by linear regression to a straight line, and the linear equation from the scFv calibration curve was used to normalise and convert the western blot sample signal into ng of protein. The measured OD600 values were used to normalise and calculate the mg of protein per g of dry cell weight. Dry cell weight was determined by collecting culture in dry, pre-weighed 2 mL tubes; the samples were centrifuged for 10 min at 6000 g, the cell pellets were dried at 100 °C for 48 h, and the tubes were re-weighed. Replicate values were used to determine an OD-to-g-DCW conversion factor (0.35 mg/mL). Subsequently, the dry cell weight of the E. coli culture was calculated as the OD600 multiplied by this conversion factor. Linear regression was employed to analyse the correlation at 30 °C induction between pENTRY and pDEST (secretion and expression). The relationship between western blot data and dot blot data was also investigated by linear regression. Semilog regression analysis evaluated the relationship between pENTRY and pDEST. Pearson's correlation coefficient and the best-fit line were calculated. P < 0.05 was considered statistically significant.

Protein purification
All scFv proteins expressed have a hexa-histidine tag to allow purification by standard immobilized metal affinity chromatography (IMAC) using HisPur™ Ni-NTA resin (ThermoFisher). The proteins used as standards for western blot quantification were purified using whole-cell lysates from cell cultures expressing the genes of interest ("Bacterial cell culture" section). Cell pellets were collected by centrifugation (9000 g for 30 min) and resuspended in lysis buffer (50 mM Tris-HCl pH 7.5, 500 mM NaCl, 10 mM imidazole) supplemented with EDTA-free protease inhibitor (Roche), DNase (10 U/mL) and lysozyme (1 mg/mL). Cells were sonicated and the supernatant was collected by high-speed centrifugation (42,000 g). The supernatant was incubated with the Ni-NTA beads for at least 1 h at 4 °C. Protein was washed three times with lysis buffer and then eluted with 100 mM imidazole. Protein was then concentrated using 5000 MWCO Vivaspin centrifugal units and dialysed with dialysis buffer (25 mM Tris-HCl pH 7.5, 50 mM NaCl). Protein purity was assessed by SDS-PAGE, and protein concentration was determined using a NanoDrop 2000 spectrophotometer and the extinction coefficient. Protein was stored at −80 °C.

Intact mass spectrometry
A 1200 series Agilent LC was used to inject 5 µL of sample into 5% acetonitrile (0.1% formic acid), and the sample was desalted inline. This was eluted over 1 min with 95% acetonitrile. The resulting multiply charged spectrum was analysed on an Agilent QTOF 6510 and deconvoluted using Agilent MassHunter software.

Size-exclusion chromatography coupled with multi-angle light scattering (SEC-MALS) analysis
Samples were loaded onto a Superdex 75 26/600 column (GE Healthcare) pre-equilibrated in protein dialysis buffer (25 mM Tris pH 7.5, 50 mM NaCl) running at a flow rate of 0.75 mL/min. Samples were analysed using a DAWN Wyatt HeliosII 18-angle laser photometer with an additional Wyatt QELS detector. This was coupled to a Wyatt Optilab rEX refractive index detector, and the molecular mass moments, polydispersity, and concentrations of the resulting peaks were analysed using Astra 6.1 software (Wyatt, Santa Barbara, USA).

Additional file
Additional file 1. Additional figures and tables.

Authors' contributions
LGH performed experiments, analysed data, compiled figures, and wrote the manuscript. SH performed experiments and analysed data. TSC performed experiments. CJW performed experiments.
CFROM performed experiments. DSY performed experiments. RK analysed data. RM analysed data. SGW planned experiments. DCS planned the project. ND planned the project, analysed data, and wrote the manuscript. All authors read and approved the final manuscript. 1 Manchester Institute of Biotechnology, School of Chemistry, University of Manchester, Manchester M1 7DN, UK. 2 Cobra Biologics Ltd, Keele ST5 5SP, UK.
11,650.8
2018-12-01T00:00:00.000
[ "Engineering", "Biology" ]
Preliminary Sizing of High-Altitude Airships Featuring Atmospheric Ionic Thrusters: An Initial Feasibility Assessment
When it comes to computing the values of the variables defining the preliminary sizing of an airship, a few standardized approaches are available in the existing literature. However, when a disruptive technology must be included in the design, sizing procedures need to be amended, so as to be able to deal with the features of any additional novel item. This is the case for atmospheric ionic thrusters, a promising propulsive technology based on electric power, where the thrusters feature no moving parts and are relatively cheap to manufacture. The present contribution proposes modifications to an existing airship design technique, originally conceived for standard electro-mechanical thrusters, so as to cope with the specific features of new atmospheric ionic thrusters. After introducing this design procedure in detail, its potential is tested by showing results from feasibility studies on an example airship intended for a high-altitude mission. Concurrently, the results obtained allow a demonstration of the sizing features corresponding to the adoption of atmospheric ionic thrusters at the current level of technology, comparing them to what is obtained for the same mission when employing a standard electro-mechanical propulsion system.

Introduction
Lighter-than-air (LTA) flying craft, currently employed or still in a design stage, are usually lofted according to two major paradigms: as passively controlled, non-propelled balloons, or as actively controlled, propelled airships. In the latter case, existing realizations invariably make use of rather standard propulsion techniques, with propellers coupled to piston engines or electric motors [1,2]. As is widely known [3][4][5], among the missions that LTA craft can cover, high-altitude observation missions are particularly interesting, since the peculiar atmospheric conditions (especially the good predictability and overall stability of the thermodynamic and chemical state of the atmospheric mixture at altitudes around 18-20 km above the ground) potentially allow the inherent weaknesses of these platforms, primarily bound to controllability [6][7][8][9], to be overcome, unfolding their potential as an alternative to more expensive aircraft and satellites. Currently, passively controlled balloons are employed for several missions, especially for gathering signals or measurements during the ascent, or for relatively short-term signal relay in the higher layers of the atmosphere [10][11][12]. However, for image collection and signal relay while in a station-keeping attitude, active control and propulsion are required, at least to counter the stratospheric wind encountered at the target stationing altitude. The latter mission, often referred to as high-altitude pseudo-satellite (HAPS), is of special relevance for both civil and military purposes, yet it is very challenging from the design standpoint [13][14][15][16][17].
First, the exposure to a high energy intensity and to a chemically active gaseous mixture tends to degrade materials more quickly than in other design problems in the aeronautical domain. Second, the design of the propulsion system for a high time between overhauls (TBO), so as to allow an almost-permanent deployment at altitude, may become an issue similar to the case of satellites, given the articulated and uneasy procedures required for a safe descent and recovery, and for the re-deployment as well, which an overhaul would invariably require. As a matter of fact, high-altitude airships (HAA) are actually still in their infancy, with conceptual and experimental machines proposed and tested over the years [18][19][20], but with no standardized model having entered production to date.

With the aim of increasing the TBO of HAA platforms, while additionally markedly decreasing the chance of detection, a novel technology has been proposed for generating thrust, namely atmospheric ionic thrusters (sometimes referred to as ion-plasma thrusters) [21][22][23][24][25]. Working on the principle of ionizing the air in proximity to electrodes and accelerating the plasma through a voltage differential, these thrusters are currently capable of producing a modest thrust, which, however, can be profitably employed for propelling and controlling airships, since these platforms do not rely on aerodynamic lift (hence on wings pushed by a thrust force, nor even on a powered rotor) for staying aloft. Furthermore, featuring no moving parts at all, this type of thruster is generally simpler and cheaper to manufacture, and its TBO can be substantially extended over that typical of more standard propulsion systems.

Several technological aspects related to ion-plasma-based propulsion are currently under investigation within the scope of the EU-funded project IPROP [26]. Besides the optimal geometry and arrangement of the thrusters, the characterization of their performance is the focus of the technological part of the project. Another crucial step foreseen within IPROP, required to enable the adoption of the novel atmospheric ionic thrusters on board, is the synthesis of corresponding preliminary sizing and lofting techniques. While being inspired by existing sizing techniques prepared for conventionally powered airships [27], the new design algorithms need to cope with the specific features of atmospheric ionic thrusters.

The present work deals especially with the latter topic. Drawing on the technological data made available by the investigations carried out within the first stage of the project, where stable predictions of the performance associated with drafted thruster geometries have been made available, a preliminary sizing technique for airships employing atmospheric ionic thrusters has been envisaged. This technique has been inspired by existing ones, typically employable for standard-propelled airships, and it will be detailed in the methodological section. In particular, an inner sizing loop, which computes the weight of an airship corresponding to an assigned geometry and compliant with the performance requirements coming from a target mission, is described at first. Then, an optimization algorithm is employed on top of the sizing loop, so as to steer the selection of the geometrical parameters according to a weight-optimization logic.
The results from the application of the proposed algorithm have been investigated recently for two target missions, namely a demonstration mission, where an airship mounting atmospheric ionic thrusters is sized for a short flight in proximity to the ground (the manufacture of this flying demonstrator is among the goals of the project IPROP), and a HAPS mission, where the machine is sized according to a totally different profile. An analysis of the first results of the application of this technology to the design of a low-altitude demonstrator has been presented elsewhere [28]. In the present contribution, numerical results will be shown concerning a high-altitude mission (the analysis of the feasibility and the preliminary design of this type of platform are among the long-term goals of the project IPROP). Parametric analyses, showing, in particular, the effect of changing values of the technological parameters associated with the thrusters, allow not only an assessment of the overall feasibility and expected performance of airships based on atmospheric ionic thrusters for propulsion, but also the identification of the most critical technological features in the corresponding design, in view of their further development.

Baseline Sizing Methodology for an Airship Featuring Standard Electric Propulsion

To more easily explain the peculiarities of the design methodology proposed for an airship pushed by atmospheric ionic thrusters, we recall at first a preliminary design procedure introduced by the authors for high-altitude airships [5], subsequently amended to allow for the design of airships stationing at an arbitrary altitude [29]. Those design schemes, already documented in full elsewhere, had been developed originally for the case of a standard electro-mechanical propulsion system. The modifications needed to cope with the specific features of atmospheric ionic thrusters are reported in a later section, after a review of the technical characteristics of atmospheric ionic thrusters relevant to the airship sizing procedure.

Referring to Figure 1 [29], for an airship featuring standard electro-mechanical propulsion (i.e., electric motors driving propellers), it is possible to envisage two logical layers, namely a sizing loop and an optimum-seeking loop. The sizing loop is conceptually an algorithm to compute preliminary sizing quantities (in particular, the weight breakdown, the envelope volume, and the installed thrust of an airship) starting from the assignment of a set of quantities defining a target mission (stationing altitude, climb and cruise speed, time duration, etc.), the technology adopted for the airship subsystems and components (e.g., battery chemistry, envelope density, etc.), as well as the general shape of the envelope (which influences its aerodynamic characteristics). The optimum-seeking loop implements a suitable optimization algorithm, which steers a set of key sizing parameters in an automated fashion, so as to reduce the take-off mass to a minimum while satisfying a set of physical constraints.
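As a concrete illustration of this two-layer structure, a minimal sketch follows, with the inner sizing loop wrapped by a gradient-based optimum-seeking loop. The sizing relations inside size_airship are deliberately simplified, illustrative placeholders (as are all numerical values), not the models and regressions of the actual Morning Star suite; only the architecture is the point.

```python
# Minimal runnable sketch of the two-layer architecture: an inner sizing loop
# (toy placeholder formulas) wrapped by a gradient-based optimum-seeking loop.
import numpy as np
from scipy.optimize import minimize

RHO_ENV = 0.2    # envelope areal density, kg/m^2 (illustrative)
RHO_AIR = 0.12   # air density at the stationing altitude, kg/m^3 (illustrative)
W_FIXED = 900.0  # payload + systems mass, kg (illustrative)
BR_MIN = 0.93    # target buoyancy ratio

def size_airship(p):
    """Inner sizing loop: parameters p = (L_r, FR) -> (total mass, BR margin)."""
    L_r, FR = p
    R = L_r / (2.0 * FR)                    # top radius, from FR = L_r / (2R)
    volume = 0.55 * np.pi * R**2 * L_r      # bi-ellipsoid volume (toy coefficient)
    surface = 0.75 * np.pi * 2.0 * R * L_r  # wetted area (toy coefficient)
    W = RHO_ENV * surface + W_FIXED         # crude weight breakdown
    buoyant_mass = RHO_AIR * volume         # displaced-air mass at altitude
    return W, buoyant_mass - BR_MIN * W     # margin >= 0 when BR is satisfied

# Outer optimum-seeking loop: minimize the mass subject to the buoyancy margin.
res = minimize(lambda p: size_airship(p)[0], x0=[120.0, 3.0],
               constraints=[{'type': 'ineq', 'fun': lambda p: size_airship(p)[1]}],
               bounds=[(50.0, 400.0), (2.0, 6.0)], method='SLSQP')
print(res.x, res.fun)  # optimal (L_r, FR) and the corresponding minimum mass
```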
An advantage of the proposed procedure (logically described by the scheme in Figure 1) is the automated computation of a complete set of design variables describing an airship sizing solution, which would otherwise have to be negotiated one by one. Furthermore, the optimization algorithm ensures compliance with some technological constraints, which are naturally taken into account in the formulation of the optimal problem. This procedure has been practically implemented in the suite Morning Star of the Department of Aerospace Science and Technology, Politecnico di Milano [5,28,29].

To illustrate this procedure in more depth, consider (as a possible practical instance) taking the shape of the airship envelope as a low-drag bi-ellipsoidal solid. The corresponding geometrical sizing can be fully assigned through its length L_r and fineness ratio FR. The latter is defined as the ratio between the airship length L_r and the top diameter 2R of the airship envelope (R being the top radius). Concurrently, highly flexible solar cells are considered, capable of adjusting to the local shape of the envelope. A symmetric placement of the cells to the left and right of the longitudinal plane of symmetry of the airship is hypothesized. The size of the cells and their placement are therefore assigned through limit azimuth values on a cross-section, namely ϑ_in and ϑ_out, and through the longitudinal positions of the extremities of the panels, measured along the longitudinal axis of the airship, namely through the corresponding coordinates x_le and x_te. This takes the overall set of parameters managed by the numerical optimizer to those in the array p = {L_r, FR, ϑ_in, ϑ_out, x_le, x_te}.

Other environmental and technological parameters for the sizing need to be assigned, yet they are considered constant in the optimization process. These parameters can be collected in a few major containers, as follows:

• Mission parameters. These are, first, the stationing altitude and the geographical position (coordinates) on the start date of the mission. These features influence the thermodynamic state of the atmosphere, as well as the wind (including its intensity and direction) and the solar irradiance (including daylight time and radiation direction) along the mission profile. A reference profile with altitude for these characteristics has been worked out, obtained by weighting and averaging the values corresponding to geographical and temporal samples over the surface of the Earth and the time of the year. Such a profile can be employed to uncorrelate the sizing of the airship from a specific location and start time of the mission. Further flight mechanics quantities include the buoyancy ratio at altitude, the climb/descent angle and velocity, and the cruise velocity with respect to the ground (typically null for a station-keeping mission).

• Payload parameters. Payload mass and the related power supply.

• Envelope and systems parameters. Maximum acceptable wind speed for envelope sizing, envelope material (specified through its mass density as well as its top-stress characteristics), and lifting gas (including its purity level). Characteristics of the fins, septa, and stringers. Characteristics of the ballonet system.

• Power system parameters. Battery chemistry (yielding the specific energy and specific power characteristics), solar cell material (specified through its mass density and energy conversion efficiency), and motor characteristics (including the conversion efficiency of the electric motor and the propulsive efficiency of the propeller).
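In an implementation, these constant-parameter containers can be kept separate from the optimization array p; a minimal sketch follows (all field names and values are illustrative assumptions, not the actual Morning Star data structures).

```python
# Sketch of the constant-parameter containers listed above, kept separate from
# the optimization array p; field names and values are illustrative only.
from dataclasses import dataclass

@dataclass
class MissionParams:
    h_station: float   # stationing altitude, m
    BR_min: float      # target buoyancy ratio at altitude
    gamma_c: float     # climb/descent angle, rad
    V_c: float         # climb speed, m/s
    V_cr: float        # cruise ground speed, m/s (0 for station-keeping)

@dataclass
class PayloadParams:
    mass: float        # kg
    power: float       # W

@dataclass
class EnvelopeParams:
    rho_area: float    # envelope areal density, kg/m^2
    sigma_allow: float # allowable envelope stress, N/cm
    V_wind_max: float  # maximum wind speed for envelope sizing, m/s

@dataclass
class PowerSystemParams:
    e_bat: float       # battery specific energy, J/kg
    p_bat: float       # battery specific power, W/kg
    eta_m: float       # electric motor efficiency
    eta_p: float       # propeller propulsive efficiency

mission = MissionParams(h_station=17e3, BR_min=0.93,
                        gamma_c=0.07, V_c=5.0, V_cr=0.0)  # illustrative values
```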
Starting from the assignment of these constant parameters, and for a choice of the optimization parameters in p, it is possible to invoke a set of models and regressions producing the sizing of all major subsystems on board. This set of instructions, constituting the sizing loop, makes for a procedure to be called repeatedly by the optimum-seeking loop, which changes the input parameters p in search of a mass-optimal design solution compliant with a set of constraints. These two logical components of the sizing algorithm are described in the following paragraphs.

Sizing Loop

The operations required to complete the sizing of an airship, starting from the values of the parameters specified in p and from the assigned technological properties mentioned in the previous listing, are wrapped in the sizing loop, which can be synthetically described as follows.

1. Geometrical sizing of the envelope and solar cells. Through the assignment of the parameters in p, it is immediately possible to build the complete geometry of the airship envelope, thus computing, in particular, its volume, the area of the external surface, and the areas of the front and longitudinal sections. Correspondingly, the exact size of the solar cells and their orientation in space (dictated by the local orientation of the envelope surface) can be computed. An estimation of the zero-lift drag coefficient C_D,0 and of the drag-due-to-lift coefficient K, configuring a classical parabolic polar for the airship, can be carried out based on regressions for the envelope, in particular involving its fineness ratio FR [27]. In the same fashion, it is possible to obtain estimations of the sensitivities C_Lα and C_Qβ (the latter representing the sensitivity of the aerodynamic side force to the sideslip angle). Refinements of this preliminary estimation can be carried out based on the size of the fins and gondola (if any) [30], themselves in turn obtained from regressions of statistical data, given the size of the envelope (or of the payload and energy system, in the case of the gondola).

2. Environmental conditions at the stationing altitude and during climb. Through the assignment of the position on Earth and the time of the year for the ascent, it is possible to compute from dedicated models the temperature, pressure, and density of the air, the wind intensity and direction at altitude, and the solar irradiance (in terms of both intensity and direction). In particular, the International Standard Atmosphere (ISA) model can be employed for the static air characteristics [31], whereas the Horizontal Wind Model (HWM) can be employed for obtaining the wind characteristics [32,33]. The SMARTS model can be adopted for the computation of the direct and diffused irradiance at altitude [34].
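As a concrete instance of the static-air model invoked at step 2, a minimal ISA implementation, valid up to the 20 km altitudes of interest here, can be sketched as follows (the HWM wind and SMARTS irradiance models are full-fledged codes and are not reproduced).

```python
# Minimal sketch of the ISA static-air model used at step 2, valid up to 20 km
# (troposphere with constant lapse rate, then isothermal stratosphere).
import math

G0, R_AIR = 9.80665, 287.05287            # m/s^2, J/(kg K)
T0, P0, LAPSE = 288.15, 101325.0, 0.0065  # sea-level ISA: K, Pa, K/m
H_TROPO, T_TROPO = 11000.0, 216.65        # tropopause altitude and temperature

def isa(h):
    """Return (temperature K, pressure Pa, density kg/m^3) at altitude h [m]."""
    if h <= H_TROPO:
        T = T0 - LAPSE * h
        p = P0 * (T / T0) ** (G0 / (LAPSE * R_AIR))
    else:  # isothermal layer between 11 and 20 km
        p11 = P0 * (T_TROPO / T0) ** (G0 / (LAPSE * R_AIR))
        T = T_TROPO
        p = p11 * math.exp(-G0 * (h - H_TROPO) / (R_AIR * T_TROPO))
    return T, p, p / (R_AIR * T)

print(isa(17000.0))  # conditions at the 17 km stationing altitude of Section 4
```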
As pointed out earlier in this section, as an alternative to the selection of a location and time of the year for the mission, and to the corresponding explicit computation of all the quantities just mentioned through models accounting for such information, it is possible to carry out this step of the sizing procedure employing an averaged model for the state of the atmosphere, as well as for the wind and solar irradiance, where each output quantity is only a function of the altitude. This averaged model has been prepared starting from the original ones [32][33][34], sampling the profiles with altitude at nodal positions on Earth and time-wise along a yearly period. This choice is especially interesting for making comparisons among concurrent designs (for instance, design solutions obtained by changing the value of some parameters) without binding the solution to a specific location or start time of the mission.

3. Computation of the total power and energy required. Having assigned the target stationing altitude and having computed the wind characteristics along the climb and at altitude, it is possible to define the power P_r (and the corresponding peak power P_r,max) and the energy E required for a mission profile composed of an ascent, a stationing phase at altitude for a certain time, and a descent.

In particular, it is assumed that the airship flies in climb/descent with a given climb angle γ_c and an assigned heading and course. The angle of attack and the sideslip angle are computed correspondingly, considering the actual intensity of the horizontal wind. Conversely, at the stationing altitude, the airship is hypothesized to be always oriented into the wind, so that, in particular, no sideslip occurs. With these assumptions, it is possible to estimate the lift coefficient C_L, the side force coefficient C_Q, and correspondingly the drag coefficient C_D, the latter being a function of both C_L and C_Q (besides a constant C_D,0 component independent of the two) through the polar of the airship. The computation of the power for propulsion is therefore possible in climb/descent by further assigning the climb speed V_c, and at the stationing altitude as well, having computed the wind speed and having assigned a cruise speed V_cr with respect to the ground (i.e., a ground speed in cruise). The latter may be null, typically in the case of a station-keeping mission (yet in that case the velocity of the wind will not be null, and the power required for propulsion in cruise will be correspondingly non-null). The value of the power can be computed at any time as P_r = DV, as usual in flight performance computations, where the drag D is obtained from the dynamic pressure at each considered altitude along the flight profile, with the drag coefficient C_D computed as just mentioned.

The power for propulsion is complemented by the power required by the payload and by the other plants on board, including losses (estimated via regressions). Once the time history of the power along the mission profile is known, the peak power and the energy required for the mission are easily obtained.
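A minimal sketch of the required-power evaluation at step 3 follows, combining a parabolic polar with P_r = DV; all numerical coefficients are illustrative assumptions rather than the regressed values used in the actual procedure.

```python
# Sketch of the required-power computation at step 3: drag from a parabolic
# polar and P_r = D*V. The coefficient values are illustrative assumptions.
def power_required(rho, V, S_ref, C_L, C_Q, C_D0=0.025, K=0.3):
    """Propulsive power P_r = D*V for air density rho, airspeed V, and
    reference area S_ref; C_D = C_D0 + K*(C_L^2 + C_Q^2) (toy polar)."""
    q = 0.5 * rho * V**2                 # dynamic pressure
    C_D = C_D0 + K * (C_L**2 + C_Q**2)   # parabolic polar incl. side force
    D = q * S_ref * C_D
    return D * V

# Station-keeping at altitude: airspeed equals the local wind speed, no sideslip.
print(power_required(rho=0.14, V=20.0, S_ref=900.0, C_L=0.0, C_Q=0.0))  # W
```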
4. Computation of the available power and energy. The available power and energy are estimated starting from the geometry of the solar cells and from the mission profile. The latter provides a flight trajectory and the orientation of the airship along it (through the assumptions introduced at the previous point). This knowledge can be employed to define the power capture, based on the irradiance data coming from the corresponding model. By comparing the power available and the power required (previous point), a power balance can be carried out, yielding the size of the batteries required for covering the mission.

In particular, the energy quotas considered for battery sizing are obtained by integrating the difference between the power available from solar irradiance and the power required by the airship (for propulsion and systems operation), considering those conditions where this difference is negative. This means that when the power from irradiance exceeds the power required, as is typical in daylight at the cruising altitude, the airship is powered by solar energy. Conversely, at night and during climb and descent, the airship is typically fed by the batteries. Considering all the segments in the mission profile where the power required exceeds the power available, the corresponding energy quotas ∆E_i are obtained. The maximum among them (namely ∆E_max) is selected within the sizing algorithm. Additionally, since batteries are associated with a top value of the power that they can treat, the top value of the power flow from the batteries is considered as well, as a possible constraint for battery sizing. The sizing operation can therefore be written in mathematical terms as the problem in Equation (1), where the values of e_bat and p_bat represent the specific energy and specific power of the battery, η_d the battery discharge efficiency, η_m the efficiency of the motors, and η_p the efficiency of the propeller. The values of e_bat and p_bat are typically related to one another through a specific choice of the chemistry of the battery [35].
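Since Equation (1) is not reproduced here, the following sketch encodes one plausible reading of the battery sizing it describes: the battery mass must cover both the worst energy quota ∆E_max (discounted by the efficiencies listed above) and the peak discharge power. All numerical values are illustrative assumptions.

```python
# Sketch of the battery sizing at step 4, as a plausible reading of the
# (unreproduced) Equation (1): size for the largest energy deficit dE_max and
# for the peak discharge power, whichever dominates. Values are illustrative.
def battery_mass(dE_max, P_dis_max,
                 e_bat=450e3, p_bat=300.0,   # J/kg and W/kg (assumed chemistry)
                 eta_d=0.95, eta_m=0.95, eta_p=0.85):
    """Battery mass in kg from the worst energy quota and the peak power flow."""
    m_energy = dE_max / (e_bat * eta_d * eta_m * eta_p)  # energy-driven sizing
    m_power = P_dis_max / (p_bat * eta_d)                # power-driven sizing
    return max(m_energy, m_power)

# Example: 400 kWh worst nightly deficit, 40 kW peak discharge.
print(battery_mass(dE_max=400 * 3.6e6, P_dis_max=40e3))
```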
5. Mass of the power system. The power required for flight allows the assignment of the power of the motors and propellers. These are turned into corresponding masses W_m, complemented by those of the power-trains and sub-plants (cables, power electronics, etc.), wrapped in W_el. Finally, the mass of the power system includes that of the batteries, W_bat, and of the solar cells, W_sc, obtained starting from their respective sizing (see the previous points).

6. Stress analysis on the envelope. With a knowledge of the dynamic pressure along the mission, of the maximum wind speed to be sustained, and of the external pressure, it is possible to compute the pressure differential and its maximum ∆P_in-out,max along the mission profile, and from it the hoop stress and the longitudinal stress on the envelope, via regressions. It should be noted that, in the presence of inflatable ballonets, the pressure differential is typically constant during any altitude change. Conversely, when the ballonets are not present, as for almost-constant-altitude missions, or when the airship is operating above its target altitude and the ballonets are empty, the differential may indeed change with the altitude. When present, the ballonets are associated with a weight W_bal, computed via statistical regressions from their volume, the latter intended to allow reaching the target altitude without increasing the stress on the envelope.

7. Mass of the envelope. From the knowledge of the sizing of the envelope, its mass W_env can be readily computed. It is noteworthy that the thickness of the envelope is assigned a priori, since it is not considered a continuous variable: it is based on the number of superimposed layers of the same material, and it is hence not practical to change it in an optimization algorithm. In other words, the number of layers and their corresponding thickness are assigned among the constant parameters, and the sizing of the envelope is carried out accordingly. The mass of the lifting gas, W_lg, and that of the pressure system required to fill and pressurize the envelope are computed at this step as well, together with the masses of structural parts like the fins (W_t) and the inner diaphragms (W_str), which are functions of the size of the envelope.

Following the definition of all the components in the breakdown of the total weight of the airship, it is possible to assemble the latter by simple summation, yielding

W = W_env + W_lg + W_bal + W_t + W_str + W_m + W_el + W_bat + W_sc + W_pl, (2)

where the component W_pl represents the weight of the payload, which is a known parameter in the design.

Optimum-Seeking Loop

The procedure just outlined produces a complete candidate sizing, corresponding in particular to a total weight W (as well as to its breakdown, see Equation (2)), which has been chosen as the cost function of the minimization problem solved by the optimizer.

However, as shown in Figure 1, the outcome of the sizing loop also needs to guarantee the satisfaction of three constraints.

1. Buoyancy. The buoyancy ratio BR of the airship should be above an assigned minimum. The latter is typically chosen very close to unity for HAAs for safety reasons, unless the wind is expected to provide a steady and sufficiently predictable contribution to lift. This analytically yields

B_h_cr / W ≥ BR, (3)

where the target buoyancy ratio BR needs to be matched by that obtained from the computations in the sizing loop, in particular considering the buoyancy force found at the cruising altitude of the airship (B_h_cr), which is the lowest value encountered, hence making the satisfaction of the constraint more demanding.

2. Envelope stress. The stress values on the envelope computed in the previous procedure are compared to the nominal stress value σ̄_max, which the material adopted for the envelope can sustain (as obtained from a corresponding characterization). This yields

σ_max ≤ σ̄_max, (4)

where the value of σ_max along the mission profile has been obtained from the sizing loop, starting from the pressure differential ∆P_in-out,max.

3. Battery power flow. Within the sizing loop, the sizing of the power system (including, in particular, the battery) is carried out without considering the ability of the system to recharge the battery in preparation for covering the energy requirement of those phases of the flight where the power harvested from the solar cells is lower than the power required. A corresponding constraint is therefore added, imposing that the energy stored in the batteries during those time frames when the power harvested is larger than that required be at least equal to the energy released by the battery when the power flow from the solar cells is incapable of covering the requirements of the airship.
In analytic terms, this constraint can be written by conceptually defining three time instants. First, a time instant t_rec corresponds to the start of a phase where solar power is recharging the battery, meaning that sufficient power is harvested for that task and for covering the power required by the airship. Second, a time instant t_dis corresponds to the inversion of the power flow, which is no longer charging the battery: power is flowing from it, since the harvested power is no longer sufficient to cover the power required by the airship. Finally, t′_rec corresponds to the end of the latter discharging phase, and to the start of a new recharge-discharge cycle. It is possible to find this triple of time instants for each of the N_cyc recharge-discharge cycles during a mission, thus allowing the identification of the time boundaries for evaluating the integrals of power. Additionally, for clarity, we introduce the power harvested by the solar cells as P_sc and that flowing from the batteries as P_dis. All these definitions allow the construction of the following constraint:

∫ over [t_rec, t_dis] of (P_sc − P_r) dt ≥ ∫ over [t_dis, t′_rec] of P_dis dt, for each of the N_cyc cycles. (5)

Having introduced the set of constraints for the optimal problem, a corresponding analytic description of the latter can be given as

min over p of W(p), subject to Equations (3)-(5). (6)

Given the general regularity of the solution with respect to the proposed optimization parameters, even considering the action of the constraints, a gradient-based algorithm has been employed to numerically solve the optimal problem. The stability of convergence generally displayed by the proposed algorithm has allowed its adoption as a sizing tool to carry out the several parameterized analyses presented in the application section.
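As a concrete check of the battery power-flow constraint in Equation (5), a discrete-time verification over sampled power histories can be sketched as follows; the time histories below are purely illustrative.

```python
# Sketch of a discrete-time check of the battery power-flow constraint
# (Equation (5)): over each recharge-discharge cycle, the energy banked while
# P_sc > P_r must cover the energy drawn while P_sc < P_r. Data are illustrative.
import numpy as np

def recharge_balance_ok(t, P_sc, P_r):
    """True if every surplus phase stores at least the energy of the
    following deficit phase (trapezoidal integration per phase)."""
    net = P_sc - P_r                      # > 0: charging, < 0: discharging
    sign = np.sign(net)
    # indices where the power flow inverts (t_rec, t_dis, t'_rec, ...)
    cuts = [0] + [i for i in range(1, len(t)) if sign[i] != sign[i - 1]] \
           + [len(t) - 1]
    E = [np.trapz(net[a:b + 1], t[a:b + 1]) for a, b in zip(cuts[:-1], cuts[1:])]
    # pair each charging phase with the discharging phase that follows it
    return all(E[k] + E[k + 1] >= 0.0 for k in range(len(E) - 1)
               if E[k] > 0.0 > E[k + 1])

t = np.linspace(0.0, 48 * 3600.0, 2000)                         # 48 h mission
P_sc = np.maximum(0.0, 60e3 * np.sin(2 * np.pi * t / 86400.0))  # toy harvest, W
P_r = np.full_like(t, 25e3)                                     # toy demand, W
print(recharge_balance_ok(t, P_sc, P_r))
```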
Airship Sizing Methodology Accounting for Atmospheric Ionic Thrusters

The sizing loop and the optimal-weight-seeking algorithm presented in the previous Section 2 can be largely retained when dealing with atmospheric ionic thrusters instead of standard electric motor and propeller assemblies. However, in order to accurately describe the modifications required to the baseline design procedure when including this novel type of thruster on board, a quick review of their geometry and of the associated technological parameters is required. This is presented in the next subsection, followed by the proposed corresponding amendments to the design methodology.

Atmospheric Ionic Thrusters for Airships

Atmospheric ionic thrusters exploit the formation of ionized plasma from air and its acceleration between two electrodes, set at a distance from one another and subject to an assigned voltage differential. A couple of such electrodes, composing the basic nucleus of the thruster, is made of an emitter and a collector. Clusters of emitters and collectors, geometrically arranged in parallel (with the emitters to the front and the collectors to the back), allow the upscaling of the propulsive yield of a single couple while retaining most of the power-conditioning apparatus and load-bearing structure, thus providing a way to set up a thruster in this fashion.

Current research efforts [21][22][23] (including the most recent developments within project IPROP) are investigating the properties of these thrusters with respect to the geometrical characteristics of the setup. In particular, the number of emitters and collectors (which might not be defined according to a 1:1 rule in a thruster, but may be arranged in such a way that more than one emitter feeds a single collector), their mutual positioning and distances, and the lofting and sizing of the emitters and (especially) of the collectors all bear an impact on the performance of the resulting thruster. Concerning the shaping of the collectors, the adoption of thin airfoils has proved to be a smart way of obtaining good aerodynamic and electrical properties for this component (the target for aerodynamic performance, in particular, being the minimization of drag). Further options are being investigated at the time of writing within project IPROP, but for the present paper, dealing with an initial assessment of feasibility and performance, we assume to work with this type of constructive technology. Therefore, whereas the emitters are typically thin wires with no special aerodynamic properties, the collectors need more care in terms of manufacture, choice of the material (so as to optimize weight vs. strength), and construction strategy as well (for instance, they might be built either as hollow structures, similar to typical aircraft empennages, or conversely as filled-in structures).

With the aim of setting up a computational procedure able to manipulate the variables that assign the geometry of the thruster according to an automated algorithm, the definitions in Figure 2 are assumed. Considering a single emitter-collector couple in that figure, c is the chord of the collector, and d is the distance between the emitter and the leading edge of the collector. The sum of these distances gives l = c + d. Considering a cluster of more emitters and collectors forming a thruster, the distance between two adjoining collectors is defined as ∆s, whereas the total radial extension of the thruster is s, which can be obtained from the number of collectors N_c and the distance ∆s as s = N_c ∆s. The length l and the height s define two components of the overall dimensions of the thruster. The last one, namely a measure of width w, is bound to the span of the collector. Due to the physics underlying the plasma-based propulsive effect, it is typical to have minimum-size constraints, in terms of a minimum gap d_min between the emitters and collectors, a minimum span w_min to reduce the boundary effects close to the tips of the collectors, as well as minimum radial and longitudinal extensions (respectively, s_min and l_min) of the overall thruster.

The supporting structure of the thruster is composed of two side plates and load-bearing rods connecting them. Thanks to the generally low values of the forces involved, a light material can be employed for the supporting structure.
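The geometric bookkeeping of Figure 2 can be condensed into a small helper; a sketch follows, with illustrative numbers chosen to be consistent with the thruster unit sized later in Section 4 (l = 0.055 m, s = w = 2 m with 80 collectors) and with assumed minimum-size bounds.

```python
# Sketch of the thruster geometry bookkeeping of Figure 2: overall dimensions
# (l, s, w) from the emitter-collector layout, with the minimum-size checks
# mentioned above. All numerical values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ThrusterGeometry:
    c: float        # collector chord, m
    d: float        # emitter-to-collector gap, m
    delta_s: float  # spacing between adjoining collectors, m
    N_c: int        # number of collectors
    w: float        # collector span (thruster width), m

    @property
    def l(self):    # streamwise extension: l = c + d
        return self.c + self.d

    @property
    def s(self):    # radial extension: s = N_c * delta_s
        return self.N_c * self.delta_s

    def feasible(self, d_min=0.02, w_min=0.5, s_min=0.1, l_min=0.03):
        # minimum-size bounds d_min, w_min, s_min, l_min are assumed values
        return (self.d >= d_min and self.w >= w_min
                and self.s >= s_min and self.l >= l_min)

t = ThrusterGeometry(c=0.03, d=0.025, delta_s=0.025, N_c=80, w=2.0)
print(t.l, t.s, t.feasible())  # 0.055 m, 2.0 m, True
```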
The thruster is electrically fed, with the voltage employed for regulating the intensity of the thrust. The manipulation of the electrical variables requires the employment of specific electric components, including a voltage booster. Typically employed outside the aeronautical domain, existing realizations of this component feature limited weight performance, expressed through a high value of the weight-to-power ratio. However, novel and better-performing exemplars of this technology are under development within project IPROP, to allow for an easier adoption of this component on flying machines.

In terms of performance, similarly to the domain of turbomachinery, it is typical to define, as characteristic figures of the proposed atmospheric ionic thruster structure, a thrust-to-frontal-area ratio T/A_m, a thrust-to-weight ratio T/W_m, and a thrust-to-volume ratio T/V_m. Furthermore, a power efficiency measure can be defined through the parameter T/P_m, where P_m is the power flow required from the electrical system to produce the corresponding thrust. From experiments, preliminary figures have been obtained for all these quantities within project IPROP, according to the general layout of the proposed specific implementation of the thrusters.

The material arrangement of atmospheric ionic thrusters on airships is currently a matter of investigation [23]. Different promising solutions are currently under study, up to now set up mostly considering the advantages and shortcomings of the mutual placement of multiple thrusters in a streamwise direction on the side of the envelope. The interaction of multiple streamwise-aligned thrusters (which, in that configuration, take the name of longitudinal stages) is still under investigation [24,25], and the results currently available appear to indicate the existence of constraints on the minimum longitudinal distance between the stages and on a maximum recommendable number. This is explained by a significant increase in drag, besides thrust, associated with an increase in the number of stages. This, in turn, produces a progressively less steep increase of the net thrust per additional stage when the number of longitudinal stages is increased.

An option considered in the present study is that of arranging multiple thrusters sharing the frontal area, but set sufficiently apart in the longitudinal direction to allow considering the interaction between streamwise-aligned thrusters negligible. Whereas the study of the detailed lofting of multiple thrusters on board the airship has not been included in the present work, the adoption of this layout allows some flexibility in the longitudinal placement of the thrusters, which is of great relevance for longitudinal balancing in static or near-hover conditions [7,9], as well as for maneuverability. These static balance (i.e., trimmability) and dynamic performance aspects (including both the configuration of the eigenmodes of the free response, bound to the inertia of the system, and controllability, bound to the positioning of the thrust forces in the configuration), which of course constrain the positioning of the thrusters, as well as that of the stages in a streamwise close-coupled multi-stage arrangement, will be the focus of further studies within IPROP.
A Sizing Methodology for Atmospheric Ionic Thrusters on Airships

From the standpoint of a sizing algorithm, the adoption of atmospheric ionic thrusters requires the definition of a series of geometrical and technological parameters, while leaving one design parameter (or a set of them) free to tune so as to cope with the design requirements. The actual sizing value of this parameter (or set of parameters) shall be defined based on the same type of requirement leading to the adoption of a certain electric motor in the baseline procedure (Section 2), namely the need to satisfy the equilibrium conditions along the mission profile.

A difference between the sizing of atmospheric ionic thrusters and that of electric motor/propeller assemblies lies in the fact that the former, when scaled up in size to increase the thrust, tend to concurrently increase the drag, mostly due to the side plates and the thruster cover (the latter has been assumed in the current implementation, but it is not necessarily always present), forming a nacelle for the thrusters, similar to ducted fans or jet engines on aircraft. Such an increase in drag needs to be assessed, so as to avoid over-estimating the net-thrust advantage of increasing the size of the thrusters.

According to the topology presented in Figure 3, considered in the present work, it is assumed that the thrusters are placed on the lower half of the airship for obvious balance reasons (center of buoyancy above the center of gravity). Their number at a certain longitudinal section, N_t,l, combined with the number of longitudinal stations, N_l, is such that the total number of thrusters is N_t = N_l N_t,l. Of course, the actual specific positioning of the thrusters on board will have an impact on the free dynamics (through the corresponding positioning of the center of gravity and the values of the components of the inertia tensor) and on the controlled response of the airship. However, this level of detail will be dealt with through an analysis of lofting (further on within project IPROP), which is triggered by the preliminary sizing presented in this work.

In the proposed sizing methodology, the amendment due to the adoption of atmospheric ionic thrusters is included at the level of the estimation of the thrust required for the mission profile (Section 2.1), as detailed in the following paragraphs.

Assigned Geometrical and Technological Parameters

First, it is assumed to work with assigned data concerning the following aspects.

• Geometrical features of each thruster. In particular, referring to Figure 2, the quantities l, w, and s are assigned (they are optimized in the laboratory, starting from basic theoretical models currently being employed at Politecnico di Milano in conjunction with practical testing). This yields an a priori knowledge of the geometrical sizing of each thruster unit. In particular, it is assumed for simplicity that s = w, implying a square front section of each thruster.
• Thrust-to-frontal-area ratio. The value of the ratio T/A_m follows an assigned behavior with the altitude (similarly to the geometrical characteristics of the thruster, experimentation on the optimization of this value through the selection of the thruster configuration, supported by a dedicated theoretical model, is well underway within project IPROP). This relevant assumption is supported by the adoption of a certain geometry and general arrangement of the components within the thrusters (e.g., the relative numbers and positioning of the emitters and collectors, the sizing of the basic components like c, d, and ∆s, etc.). This behavior, displayed in Figure 4, shows that the T/A_m ratio generally increases with the altitude [24,36].

• Thrust-to-power ratio. The ratio T/P_m is a measure of the efficiency of the thruster, and it bears an impact on the actual value of the power required from the electrical system (and from the batteries in particular). The behavior of this quantity with the altitude is currently a matter of investigation (among the aims of project IPROP). For the present work, assumptions on the behavior of the thermodynamic state of the atmosphere at altitude, known to bear an impact on the T/P_m ratio, have been employed to feed a preliminary first-principle model [24,36], thus producing the behavior in Figure 5. From the figure, it is immediate to check that T/P_m decreases with the altitude, yielding a less efficient conversion of the power fed to the thrusters into thrust when the airship approaches the higher layers of the atmosphere.

• Arrangement and number of thrusters. Based on the assumed size of the frontal area, and in particular considering the width w, it is possible to compute the maximum number of thrusters N_t,l to be placed at a longitudinal station of the airship hull, by simply considering the bottom half-circumference of that station and dividing it by the width of each thruster (a short sketch of this rule is given after this list). However, it is interesting to study the effect of the placement of the thrusters, also considering a reduction of the top number of units that can be arranged at a longitudinal station (a dedicated paragraph is correspondingly included in the application section). To this aim, we introduce a blockage parameter ξ, considered assigned and constant, representing the share of the bottom half-circumference of a longitudinal station that can be taken over by the thrusters. When ξ = 1, the entire bottom half-circumference is available for the thrusters. Conversely, when (for example) ξ = 0.5, only half of the bottom half-circumference is available for placing the thrusters, and correspondingly, gaps will appear between the thrusters at the same longitudinal station.

• Voltage booster. The voltage booster is associated with a technological figure, namely a ratio of the weight over the power, and with a voltage level. These quantities are considered assigned and constant.
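The counting rule referenced in the "Arrangement and number of thrusters" item above can be sketched as follows; the radius and width values are illustrative.

```python
# Sketch of the thruster-count rule: the number of units at a longitudinal
# station comes from the bottom half-circumference of that station, scaled by
# the blockage parameter xi. Values are illustrative.
import math

def thrusters_per_station(R_station, w, xi=1.0):
    """Max thrusters of width w on the bottom half-circumference (local radius
    R_station), with share xi of that arc available for installation."""
    return math.floor(xi * math.pi * R_station / w)

# Example: a 20 m local radius, 2 m wide thrusters, full arc available.
print(thrusters_per_station(R_station=20.0, w=2.0, xi=1.0))  # 31
```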
It should be remarked that the assignment of these parameters corresponds to the definition of a certain structure of the thrusters and to a corresponding characterization of their performance. These choices clearly influence the results. Given the relative immaturity of the atmospheric ionic thruster technology, and since many of these quantities have been estimated through experiments in laboratories and not always from measurements in an environment relevant to high-altitude employment, the results presented in the application section take the meaning of a preliminary feasibility assessment (as stated in the title of this work). However, the employment of provisional figures of performance within the sizing procedure allows the illustration of its potential, and it allows drawing some interesting preliminary results as well.

Amendments to the Baseline Sizing Procedure

As can be argued from the listing of parameters just introduced, the geometry of a single thruster is considered to be completely assigned, and its technological characterization is similarly available. Conversely, the number of thrusters to be put on board has not been assigned; in particular, the number of longitudinal stations N_l is unknown, and the number of thrusters N_t,l at a given longitudinal section is a function of the circumference of the cross-section of the airship.

Within the sizing loop proposed for airship sizing in the baseline scenario (Section 2.1), it is possible to carry out the computation of these additional parameters in a way that satisfies the equilibrium along the mission profile. This is conceptually similar to the baseline case, where the thrust is obtained from an electric motor/propeller assembly. However, a significant difference with respect to the baseline case lies in the relationship between the thruster size (hence its nominal thrust) and the drag. Actually, at the level of the computation of the drag coefficient C_D,0, an additional component due to the presence of the nacelles of the thrusters must be taken into account. To this aim, an inner iterative loop, in which the value of N_l is solved, has been envisaged as follows. The starting point is a first-guess sizing, where the number of longitudinal stations is set to N_l = 1 and, consequently, the number of thrusters is N_t = N_t,l. Starting from this first-guess candidate solution, the following amendments to the original sizing procedure (Section 2.1) are included.

1. Compute the drag coefficient C_D,0 associated with the configuration of the thrusters. This can be performed starting from the drag coefficient value obtained for the airship without thrusters, and estimating the additional contribution due to the nacelles of the thrusters. This step can be performed based on a model of the nacelle sides as flat plates. The drag coefficient of the plate is obtained as a function of the relative velocity and viscosity of the air and of the length of the plate, which compose the Reynolds number. Then, the drag coefficient obtained for one nacelle can be multiplied by N_t to obtain the total additional drag ∆C_D,0, hence the actual value of C_D,0.

2. Compute the drag at each node along the mission profile. From this time series of values, the maximum drag encountered over the mission, D_tot,max, can be computed.
3. Compute the nominal available thrust. Based on the number of thrusters on board, N_t, assumed in the current run of the sizing loop, and based on the value of the thrust-to-area ratio T/A_m, it is possible to compute the actual value of the thrust at the altitude corresponding to each time node along the mission, multiplying the total front area of all the thrusters, A_m = w s N_t, by that ratio, thus yielding

T_av = (T/A_m) w s N_t. (7)

4. Check the minimum difference between the available thrust and the drag. Following the computation of the thrust available at every altitude and, correspondingly, of the drag, it is possible to check whether the installed thrust is sufficient to compensate for the drag, especially in the worst conditions encountered along the profile. In analytic terms, that check corresponds to the evaluation of the constraint

min over the mission profile of (T_av − D_tot) ≥ 0. (8)

In case the constraint in Equation (8) is not satisfied, thrusters are added at a further longitudinal station, increasing the number N_l by 1 and restarting from point 1 of this cycle, with a new total number of thrusters N_t = N_l N_t,l. Conversely, if the inequality in Equation (8) is satisfied, the procedure is over.

Upon reaching convergence, the number of the thrusters will be such that their thrust is able to balance the requirement of the mission even in the worst condition, accounting for the drag penalty due to the nacelles. With this inner sizing constraint satisfied, it is possible to size the batteries according to point 4 in Section 2.1 and Equation (1) therein (with the only caveat that the propeller efficiency η_p and the electric motor efficiency η_m need to be taken out of the expression). The sizing loop then proceeds along points 5-7 of Section 2.1.

As a remark, it should be noted that the drag associated with the stream flowing within the thrusters has not been accounted for explicitly. This is due to the fact that the T/A_m figure specified for a certain atmospheric ionic thruster construction represents a net thrust. This is in accordance with what is typically done to provide the characteristic performance of any thruster (e.g., the thrust figure typically specified for a jet engine is not such that the drag of the flow blowing through the compressor and turbine vanes needs to be subtracted from it in order to obtain the actual thrust).
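A compact sketch of this inner loop follows; the drag and thrust figures are simplified placeholders (not the regressions of the actual procedure), and serve only to show the N_l iteration of points 1-4.

```python
# Sketch of the inner loop that solves the number of longitudinal stations
# N_l: stations are added until the installed thrust covers the worst-case
# drag, nacelle penalty included. All numerical values are illustrative.
def solve_stations(N_tl, A_unit, profile, S_ref, CD0_clean,
                   dCD0_per_thruster, N_l_max=100):
    """Smallest N_l with min(T_av - D) >= 0 along the mission profile.
    profile: list of (dynamic pressure q [Pa], T/A_m [N/m^2]) samples."""
    for N_l in range(1, N_l_max + 1):
        N_t = N_l * N_tl
        CD0 = CD0_clean + dCD0_per_thruster * N_t       # point 1: nacelle drag
        margins = [TA * A_unit * N_t - q * S_ref * CD0  # points 2-3: Eq. (7)
                   for q, TA in profile]
        if min(margins) >= 0.0:                         # point 4: Eq. (8)
            return N_l
    raise RuntimeError("no feasible N_l within bounds")

# Illustrative call: 41 thrusters per station, 4 m^2 frontal area each.
profile = [(28.0, 0.8), (8.0, 1.6)]   # (q, T/A_m) at two mission nodes
print(solve_stations(N_tl=41, A_unit=4.0, profile=profile,
                     S_ref=900.0, CD0_clean=0.025, dCD0_per_thruster=1e-5))
# -> 6 longitudinal stations for these toy numbers
```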
Once the inner sizing loop producing the geometry and the thrust of the propulsive system has converged, the corresponding mass can be assessed, in order to amend the computation of the weight breakdown within the optimal design procedure presented in Section 2.1. The weight components associated with the atmospheric ionic thrusters can be listed as follows:

• Wires employed as emitters. Due to their very limited diameter, these components are associated with a negligible weight.

• Collectors. Depending on the material employed and on the structural sizing (i.e., hollow or filled structure), the weight may vary significantly.

• Load-bearing structure. Thanks to the relatively low value of the force exerted by each thruster, its load-bearing structure can be manufactured from a relatively light and flexible material. The cage structure naturally resulting from the setup of this type of thruster allows obtaining overall good levels of rigidity at the price of a mild global weight of the structure.

• Nacelle. The material of the nacelle may be the same as that of the load-bearing structure. The sides of the nacelle may actually be part of it. The structural role of the nacelle top is typically not relevant; hence, this component can be manufactured from a very light material.

• Voltage booster. As pointed out, this component is typically not found in powerplants for aviation, and its corresponding weight-to-power figure may be somewhat penalizing, albeit already compatible with airship flight operations at the current level of technology. Ways to obtain a better value of this parameter are currently under study (within project IPROP).

The weight corresponding to the atmospheric ionic thrusters can be related to their thrust through a thrust-to-weight parameter T/W_m, which accounts for the aggregated contribution of all the components in the previous listing, except the voltage booster. This allows the definition of the weight of the propulsive units W_m from the knowledge of the top thrust, whereas the weight W_vb of the voltage booster is typically computed separately, based on the power it has to deal with, which in turn is again related to the top thrust to be obtained from each thruster.

The weight breakdown in Equation (2) can therefore be built up according to a modified set of components, as

W = W_env + W_lg + W_bal + W_t + W_str + W_m + W_vb + W_el + W_bat + W_sc + W_gon + W_pl, (9)

wherein it should be observed that the weight of the gondola, W_gon, can be obtained as a function of the weight of the voltage booster W_vb, as well as of the other components stored in the gondola, according to the adopted baseline architecture.

The optimal problem introduced in Equation (6) remains unchanged. The addition of a further parameter (namely N_l) to be solved by the sizing algorithm does not alter the set of optimization parameters p. From a mathematical-numerical perspective, such an additional variable is not continuous, and it cannot be managed directly by a gradient-based algorithm working on the continuous variables listed in p. For this reason, this quantity needs to be treated via the inner loop just introduced, nested inside the algorithm leading to the evaluation of the merit function W. A value of N_l can be obtained for each assigned set of parameters in p encountered along the numerical solution of the optimal problem.

It is relevant to remark that the solution obtained from the sizing procedure just outlined is totally dependent on the many assumptions on the geometrical and technological features of the thruster (as pointed out in Section 3.2.1). For this reason, an evaluation of the sensitivity of the outcome of the design procedure with respect to these parameters is of great interest, and it is assessed in the application Section 4.
Application Studies

As pointed out in the introduction (Section 1), it is interesting to show the results of the application of the comprehensive sizing procedure described in this work. It has been explained in Section 3.2 how the original sizing algorithm recalled in Section 2, intended for airships featuring standard electro-mechanical propulsion and practically wrapped in the suite Morning Star (which works in Matlab® R2019b), can be enriched with features allowing the sizing of an airship accounting for the specific features of atmospheric ionic thrusters. As an example application, considering the goals of the project IPROP, a high-altitude airship mission has been considered, with a realistic payload and mission profile, which will be introduced next. A comparative study is then presented, where an airship featuring standard electro-mechanical propulsion is sized to be able to fly that mission, concurrently with another airship featuring atmospheric ionic thrusters. The outcome of this first analysis allows the comparison of the effects of the inclusion of atmospheric ionic thrusters in the design of airships for a high-altitude mission. The so-obtained design of a machine based on atmospheric ionic thrusters is then employed as a baseline for further analyses of the outcome of the optimal sizing obtained for changing values of some parameters related to the mission profile or to the technology of the components.

Baseline Mission Characteristics and Payload

In the following, the results of the optimal sizing will be presented with reference to a baseline high-altitude station-keeping observation mission, where the airship takes off, climbs to a cruising altitude h_cr = 17 km (chosen according to the payload requirements and to an a posteriori justification, shown later in this section), and keeps there for an active observation phase of the mission. It then descends back to the original take-off level. The entire mission duration is T = 48 h. This duration, which involves two daily recharge-discharge cycles besides the climb and descent phases, is well representative of a longer (virtually permanent) multi-day flight mission. All along the mission profile (including the ascent and descent phases and, of course, the station-keeping phase at altitude), the airship faces realistic characteristics of the environment, and in particular values changing with the altitude of the atmospheric thermodynamic state (including temperature, pressure, and density), as well as of the horizontal wind and of the solar radiation (both in intensity and direction). As pointed out in the methodological section (Section 2.1), in order to simplify the analysis, thus focusing on the aspects of more direct interest for this example application, a representative profile for all the cited quantities has been elaborated, averaging samples from different geographical locations over an entire yearly cycle, obtained from state-of-the-art models (in particular, those described in [31,33,34]), so as to create a realistic reference profile with the altitude, uncorrelated from any specific position or time of the mission.
The basic data defining the desired mission profile are reported in Table 1. It should be noted that, for a station-keeping mission, the ground speed in cruise is null, and the airspeed in that condition is dictated by the velocity of the wind at the cruising altitude. The latter is, therefore, not a specification by the designer. The payload for the mission has been designed so as to obtain state-of-the-art detection capabilities in the visual and infrared spectra, as well as a radar scanning ability through a synthetic aperture radar (SAR), thus yielding a monitoring capability of the ground from altitude through the weather. The selection of the detection systems is reported in Table 2. This payload capability configures a complete airborne detection platform, merging the abilities of different existing flying systems in the same machine. The resulting weight of the payload reaches W_pl = 576 kg, absorbing a nominal power P_pl = 14.85 kW.

Comparison of Design Solutions on the Baseline Mission

The technology of the onboard systems and components has been chosen to be as standard as possible, so as to avoid unrealistic projections in the characteristics of the resulting design. Furthermore, both the conventional airship based on electro-mechanical thrusters and the one based on atmospheric ionic thrusters share most of the choices concerning the components, including, in particular, key contributors to the weight breakdown like the envelope and the batteries. The data employed for the envelope material, battery, and electrical wiring in both designs are reported in Table 3. Concerning the envelope material, the data in Table 3 refer to a laminated material, based on a sandwich composed of Zylon® fibers (Madrid, Spain) in the middle as the load-bearing component, a Mylar® (PET) film (Chester, VA, USA) as an internal gas-barrier layer, a Kapton® (PI) film (Tianjin, China) with an aluminum deposit and a Corrosion Resistant Coating (CRC) as the top weathering layer, and two EVOH films as adhesive layers between the other functional layers [37]. The battery technology data refer to off-the-shelf products based on lithium-polymer chemistry. The electric cables are manufactured from standard copper wires. The airship based on standard electro-mechanical propulsion employs electric motors and propellers. In this case, the sizing of the propulsion system requires the definition of a technological relationship between the power needed for propulsion and the weight of the motor, so as to define W_m in Equation (2) based on the value of the required power P_r,max resulting from the mission. In particular, a linear relationship based on motors for industrial applications of a power comparable to that of the airship at hand has been employed, yielding P_r,max/(g W_m) = 1905 kW/kg. Furthermore, as required according to Equation (1), values of the propeller and motor efficiency of η_p = 85% and η_m = 95%, respectively, have been assumed [38].
For the solution based on atmospheric ionic thrusters, the definition of a feasible solution requires not just the definition of the installed thrust (as in the electro-mechanical case); rather, it comes together with the definition of a general arrangement and of the number of thrusters on board. As explained (Section 3.2.2), the reason for that is the dependence of both the propulsion and the drag on the number and sizing of the thrusters for this technology. As parameters for the sizing, a square frontal area has been hypothesized for each thruster, yielding a height of the thruster of s = w = 2 m. The length in the streamwise direction is l = 0.055 m, dictated by the arrangement and sizing of the emitters and collectors currently under study within IPROP. The number of emitters and collectors within a thruster has been set to 80. It should be noted that these figures are very preliminary, since atmospheric ionic thrusters are currently being developed primarily for smaller-scale applications [28]. The density of the collectors within the atmospheric ionic thrusters has been assumed to be 50 kg/m³, corresponding to a styrofoam-based structure coated in a light metal alloy, from experiments on prototypes developed within the activities of project IPROP. A 50% infill allows the actual reduction of this figure by the same amount, without apparently producing any detrimental effect on the structural stiffness. The density of the external structure of the thruster is assumed to be 50 kg/m³, similar to that of the collectors. It has been practically checked that a load-bearing structure made of this material, where suitably lofted, allows supporting the range of loads to which the thruster is subjected. The blockage parameter ξ, defining the share of the bottom half-circumference of a cross-section of the airship available for placing the thrusters (see Section 3.2.1), has been set to ξ = 1. The thrust-to-area ratio T/A_m and the thrust-to-power ratio T/P_m have been assigned as functions of the altitude, as shown in Figures 4 and 5, respectively. Finally, a currently achievable value of W_vb/P_m = 0.83 kg/kW has been assumed for the voltage booster. The outcome of the weight-optimal sizing for both concurrent designs is reported in Table 4. Concerning the constraints in the optimal sizing, for both designs the value of the buoyancy ratio has been set to BR = 0.93, the nominal stress limit to σ̄_max = 970 N/cm, and the specific energy of the battery to 450 J/kg. The number of thrusters, defined only for the case of atmospheric ionic thrusters, is N_l = 30 and N_t,l = 41, yielding a total of N_t = 1230 thrusters.
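As a quick cross-check of these figures, reusing the counting rule sketched in Section 3.2.1: 41 units of width w = 2 m at a station imply a local radius of roughly 26 m (an inference for illustration, not a value quoted here), and 30 stations give the stated total of 1230 thrusters.

```python
# Worked cross-check of the figures above: the local radius implied by the
# per-station count N_t,l = floor(pi * R / w) (an inference, not a quoted
# value), the total thruster count, and the total frontal area with s = w.
import math

w, N_tl, N_l = 2.0, 41, 30
R_implied = N_tl * w / math.pi   # local radius implied by the counting rule
N_t = N_l * N_tl                 # total number of thrusters
A_front_total = N_t * w * w      # total frontal area, with s = w = 2 m
print(round(R_implied, 1), N_t, A_front_total)  # ~26.1 m, 1230, 4920 m^2
```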
To help appreciate the features of the respective design solutions, the corresponding geometrical sizing is reported in Figure 6. In these figures, it is possible to check the actual shape and geometrical sizing of the envelope, the extension and positioning of the solar cells, and, for the atmospheric ionic solution, the sizing of the thrusters. For the latter, the longitudinal positioning in the sketch is the result of an arbitrary assumption. As pointed out, future research will be devoted to the actual optimal placement of the thrusters on board, in view of obtaining a desired dynamic performance. From Table 4, it can be noticed that both airship designs correspond to a significant geometrical size. To allow for a direct visual comparison, despite its very limited scientific validity due to the completely different missions for which the respective ships have been sized, it can be observed that the conventional airship compares well in size with the Zeppelin NT, whereas the one pushed by atmospheric ionic thrusters compares with the Zeppelin LZ129. According to the same comparison, the weight of the conventional design is less than one half that of the Zeppelin NT, with which it shares the general non-rigid construction; similarly, the weight of the design based on atmospheric ionic thrusters is less than one half that of the Zeppelin LZ129, which, however, featured a radically different, rigid structure. The relatively contained overall weight figures for the designs presented here are a result of the comparatively low payload with respect to the transport airships just cited.

The fineness ratio of the ion-based design is higher, yielding a significantly more slender shape compared to the standard propulsion case. The weight breakdown for the two designs at hand features some relevant differences, which can be linked to the constitutionally lower weight efficiency of the novel thrust-generation system, which includes, for instance, a voltage-boosting component creating a weight offset with respect to the conventional design. A corresponding increase in the lifting volume, hence in the weight of the envelope and of the lifting gas, is obtained. This, in turn, corresponds to an increase in size, which pushes the energy balance towards higher values of drag and thrust, and tends to increase the requirement on the energy harvesting and storage systems. Despite these differences, both designs appear feasible, with the novel one based on ion technology bringing in the strategic advantages mentioned in the introduction (Section 1), which include a very low detectability and a higher TBO.

As a remark, it should be recalled that the results presented in this work have been deliberately obtained as a preliminary forecast based on the current level of technology, i.e., without accounting for any assumption on the yield of future developments. The so-obtained preliminary results are, therefore, especially encouraging, showing that even under the strict assumptions just mentioned, a stratospheric design based on atmospheric ionic thrusters is theoretically feasible. Clearly, since project IPROP is largely dedicated to the increase in the technological readiness of atmospheric ionic thrusters and of all the related components (including the voltage booster), it is very likely that the figures just presented will be further improved as the research in this field unfolds.
Effect of Mission and Technology Parameters on the Design Solution

The design solution obtained from the optimal sizing procedure in Section 4.2 shows that a significant difference exists between an airship sized considering conventional thrusters and one based on atmospheric ionic thrusters. As anticipated, it is interesting to investigate the sensitivity of the design solution obtained when the novel atmospheric ionic thrusters are mounted on board to changing values of some parameters, either pertaining to the mission or bound to technological assumptions.

Effect of Stationing Altitude on the Design Based on Conventional and Atmospheric Ionic Thrusters

Considering the high-altitude mission of interest here, a relevant quantity to consider as a changing parameter is the target altitude for the stationing phase. In Figure 7, the change in the weight breakdown corresponding to weight-optimal solutions at different target altitude values is presented for the conventionally propelled airship (top plot) and the airship based on atmospheric ionic thrusters (bottom plot). Looking at the results for the conventionally propelled airship (top plot in Figure 7), the choice of the reference altitude for the baseline design at h_cr = 17 km can be reinforced a posteriori. As already pointed out, such an altitude value is ideal for the selected payload. Additionally, from the plot, it appears to correspond to a typical sweet spot for the overall weight of the machine. It should be underlined that the very existence of a condition of minimal weight is interesting in itself.

Considering the optimal altitude of h_cr = 17 km, the trade-off resulting in the behavior of the weight breakdown portrayed in the plot can be explained according to at least four drivers. A lower level of radiation intensity and a more intense wind are encountered at a lower altitude, corresponding, respectively, to a higher weight of the batteries (which need to store more of the energy required for the mission) and of the installed thrust (needed to overcome a more intense drag). On the other hand, at higher altitudes, an increase in the ballonet sizing and a higher lifting volume are observed, due, respectively, to the (non-linear) decrease in air pressure as more altitude is gained and to the need to produce a buoyancy force with a lower density of the external air. The balance of all these antagonistic effects produces the trend shown in the plot of Figure 7.
From the bottom plot in Figure 7, the novel design based on atmospheric ionic thrusters appears to follow a more complex trend. In particular, it can be observed that the global weight minimum is obtained for an altitude of h_cr = 18 km, which differs from that of the conventional propulsion case, and that the increase in weight for a changing altitude is generally steeper than in the other case. Furthermore, the weight of the ballonet system is not monotonically increasing, in contrast with the previous case. To help explain these differences, it should be recalled that the optimal sizing algorithm accounting for atmospheric ionic thrusters computes the actual number of thrusters (this is not true for the conventional case, where only the overall installed power is computed), choosing it so as to simultaneously guarantee sufficient propulsive power and produce a minimal-weight solution. However, the thrust required (due to drag) is also a function of the number of thrusters, as pointed out. This feature of the design procedure reflects an additional complexity in the sizing of the airship in the presence of atmospheric ionic thrusters, i.e., the inner link between the sizing of the thrusters and drag, besides the more obvious relation with thrust. The number of thrusters corresponding to the optimal solutions at increasing altitude presented in the bottom plot of Figure 7 is reported in Table 5.

Table 5. Number of thrusters for the weight-optimal design solution with atmospheric ionic thrusters, for changing values of the top altitude (as in the bottom plot of Figure 7).

The total number of thrusters N_t for each sizing solution in Table 5 is obtained by multiplying N_l and N_t,l. Cross-checking Table 5 against the bottom plot in Figure 7 reveals a correlation between the number of thrusters and the overall weight, which offers an interpretation of the trend just observed. As an additional driver of that trend, it should be pointed out that in the case of atmospheric ionic thrusters, the thrust produced increases with altitude, through an increase in T/A_m, whereas the value of the thrust-to-power ratio T/P_m decreases, as mentioned in Section 3.2.1. This further trade-off contributes to producing an offset of the optimal altitude for this case, compared to the conventional propulsion case, and also supports the very existence of an optimum.
Effect of Envelope Density on the Design Based on Atmospheric Ionic Thrusters

Considering the significant share taken by the envelope in the baseline design (see Section 4.2) and the level of innovation required to manufacture the airship envelope for a mission in the higher layers of the atmosphere (due to the mechanical stress bound to the wind and to the extreme intensity of the solar radiation), a second interesting parameter is the density of the envelope. The plot in Figure 8 reports the result of the weight-optimal sizing of an airship featuring atmospheric ionic thrusters for two additional values of the envelope density, respectively 80% and 120% of the baseline value. The trend appears regular in all these cases for all weight components in the breakdown, corresponding to the fact that the optimal number of thrusters remains unchanged. For increasing values of the envelope density, an upscaling effect is found especially on the batteries and lifting gas (corresponding to an increase in size, not portrayed) and, of course, on the envelope itself. Furthermore, it can be noticed that the trend of the total output weight is considerably steep, showing a significant change in weight for the considered (relatively small) change in density. A slightly non-linear increase can also be noticed: the weight penalty when increasing the envelope density above the baseline is bigger than the weight saving when decreasing the density by the same percentage.

These results show, on the one hand, that the effect of the density of the envelope is, as expected, sizeable. However, at least for the considered changes in this parameter, the outcome is not different to the point of altering the general geometric sizing of the airship (as can be seen from the lifting gas, which is proportional to volume), which remains in the size class of the baseline.

Effect of the Sizing of Atmospheric Ionic Thrusters

Considering the focus put in this paper on the development of a reliable technique capable of sizing an airship pushed by atmospheric ionic thrusters, it is interesting to show how this procedure copes with two key parameters concerning the general sizing and arrangement of the atmospheric ionic thrusters. In particular, two parameters are changed. The first is the maximum affordable blockage of the front section, already introduced as a non-dimensional parameter ξ ranging between 0 (an extreme, non-physical value) and 1. Considering Figure 3, a value of ξ = 1 implies that the entire half-circle corresponding to the bottom half of a cross-section circumference of the airship can be taken up by the thrusters. A lower value of this parameter reduces the occupation of the frontal area, thus reducing the blockage of the front section of the airship due to the thrusters. Setting this parameter restricts the maximum number of thrusters per section, i.e., N_t,l, to an extent that depends also on their geometrical sizing and on the actual sizing of the airship envelope, as sketched in the example below. Correspondingly, as a second parameter, the height s and width w of the front section of the thruster are increased while retaining a square shape (thus, they are treated as a single parameter, given that s = w, as observed).
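As a minimal sketch of how the blockage constraint bounds the thruster count per section, the snippet below estimates N_t,l from the share of the bottom half-circumference of a cross-section made available by ξ. The envelope radius used here is a hypothetical placeholder, and the simple arc-length argument stands in for the actual geometric placement logic of the sizing suite.

```python
import math

def max_thrusters_per_section(radius_m: float, xi: float, w: float) -> int:
    """Estimate the thruster count per cross-section allowed by blockage.

    The bottom half-circumference of a cross-section is pi * radius; only the
    share xi of it is available for thrusters of frontal width w.
    """
    available_arc = xi * math.pi * radius_m
    return int(available_arc // w)

# Hypothetical envelope radius; w = 2 m as in the baseline sizing.
R = 27.0
for xi in (0.5, 1.0):
    print(xi, max_thrusters_per_section(R, xi, w=2.0))
# xi = 1.0 with R = 27 m gives floor(pi * 27 / 2) = 42 thrusters per section,
# of the same order as the N_t,l = 41 of the baseline design.
```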
The results obtained by combining the changes in these parameters are reported in Figure 9. A blockage parameter of ξ = 0.5 or 1 is considered, respectively, for the top and bottom plots. In each plot, the value of the width and height of the thruster section is increased as reported. The corresponding number of thrusters is reported in Table 6. Comparing the plots in Figure 9, it is possible to notice a stark change in the scale of the overall mass values of the airship, with a significantly higher weight (in total, and proportionately for each component within it) for the lower value of the blockage parameter (top plot) than for the higher value (bottom plot). Furthermore, on both plots (i.e., for both values of ξ), it is possible to notice a decreasing trend of weight with respect to the sizing of the frontal area of a single thruster. The results reported in Table 6 help to explain this behavior. For the lower value of the blockage parameter (ξ = 0.5), a lower number of thrusters per section (N_t,l) is encountered, yielding a generally higher number of sections over which the thrusters are spread (N_l). The total number of thrusters is generally significantly higher for ξ = 0.5 than for ξ = 1.0, implying a generally less efficient sizing solution.

Table 6. Number of thrusters for the weight-optimal design solution with atmospheric ionic thrusters, for changing values of the blockage parameter and frontal area sizing (as in Figure 9).

The results in Figure 9 and Table 6, albeit interesting and apparently indicating a clear best practice of choosing a higher value of the blockage parameter, should be considered with caution. Actually, they ignore the possible positive coupling effect of thrusters working in series along the streamwise direction. Current research results within project IPROP tend to support a positive interaction effect of thrusters working in that configuration. Accounting for this effect (still to be assessed with precision) is likely to improve the overall efficiency of thrusters put in series, thus potentially reducing the need for a higher N_l when the blockage ξ is lower than unity, and thus, in turn, altering the balance shown in the results just introduced (Figure 9 and Table 6).

Conclusions

The present contribution has introduced an amendment to an original preliminary sizing algorithm for airships, developed by the authors for electro-mechanical motors, so as to include atmospheric ionic thrusters (under development within project IPROP) in the design. The resulting procedure keeps the number of free sizing parameters to a minimum while coping with the specific features of this novel type of thruster and making use of the technological data pertaining to them as obtained from experiments. The sizing algorithm, which works in an automated fashion within the airship sizing suite Morning Star, returns a general sizing of the envelope and of the solar cells needed for energy harvesting, the weight breakdown including the propulsion system and batteries, and the number and general arrangement of the atmospheric ionic thrusters, when they are considered in the design. The sizing algorithm is based on a weight-optimal criterion, capable of producing a minimum-weight sizing complying with constraints coming from the mission profile and from technological limits.
Preliminary sizing results for the design of a high-altitude airship (HAA) with a payload and mission granting a top-tier high-altitude pseudo-satellite (HAPS) observation capability (as requested among the milestones of project IPROP) show that an airship based on atmospheric ionic thrusters is indeed feasible at the current technological level, albeit in association with a more demanding sizing solution compared to a case based on standard electro-mechanical propulsion.

An exploration of the space of design solutions has been carried out, considering changing values of the stationing altitude, the envelope material, and the assumptions on the thruster geometrical characteristics and arrangement. The study of the dependence of the optimal solution on the stationing altitude reveals the existence of an optimum, which is different yet not excessively dissimilar between the conventional and ion cases. The analysis of the dependence on the envelope material and the thruster characteristics shows clear trends in the overall weight and its breakdown, as well as a generally detrimental upscaling effect when the arrangement of the thrusters is such that it leaves gaps between units on the same cross-section. A word of caution concerning the latter result is bound to the limits of the modeling of the interaction among thrusters in a streamwise series, which, when accounted for, may significantly alter the outcome of the analysis.

Further research on the design of HAAs featuring atmospheric ionic thrusters will follow within project IPROP, as the technology of these thrusters and the related systems is developed (also thanks to the experience gained through the design and planned manufacture of a low-altitude flying demonstrator). Besides updating the values pertaining to technological variables, research will especially target the implications of the dynamic behavior of the airship and its controllability, bound to a detailed lofting of the weight components and the thrusters on board.

Figure 1. Logical flowchart illustrating the airship design scheme based on a sizing loop and an optimum-seeking loop.
Figure 2. Schematic representation of an atmospheric ionic thruster, defining some characteristic geometrical parameters. (Left) side view. (Right) three-quarters view.
Figure 3. Basic working topology adopted in the design algorithm.
Figure 4. Considered model of the ratio T/A_m with altitude.
Figure 5. Considered model of the ratio T/P_m with altitude.
Figure 6. Geometry of the baseline designs, showing the geometrical sizing of the envelope, solar cells, and thrusters (the latter for the ionic case only). (Top plot) conventionally propelled airship. (Bottom plot) airship featuring atmospheric ionic thrusters.
Figure 7. Effect of the target altitude on the weight breakdown. (Top plot) conventionally propelled airship. (Bottom plot) airship featuring atmospheric ionic thrusters.
Figure 8. Effect of a change in the envelope material density on the weight breakdown of an airship featuring atmospheric ionic thrusters.
Figure 9. Effect of the frontal blockage parameter ξ and of the side of the square front capture area of the atmospheric ionic thruster. (Top plot) ξ = 0.5. (Bottom plot) ξ = 1.
Table 1. Basic data for high-altitude mission sizing.
Table 2. Components of the payload.
Table 3. Assigned values of technological parameters.
Table 4. Results of weight-optimal sizing for airships based on conventional or atmospheric ionic thrusters. Baseline technology data are assumed for the computations.
A new twist in the photophysics of the GFP chromophore: a volume-conserving molecular torsion couple

Dynamics of a nonplanar GFP chromophore are studied experimentally and theoretically. Coupled torsional motion is responsible for the ultrafast decay.

Introduction

Photoswitches play important roles in biology. Examples include the primary step in the vision pigment rhodopsin, bacterial phototaxis stimulated by the photoactive yellow protein, and fluorescent protein (FP) photochromism used in super-resolution microscopy. [1-3] These efficient protein-based photoswitches inspired the design of diverse molecular photoswitches, which power a variety of nano- and micro-scale phenomena. [4-9] In the following we investigate the photophysics of a novel sterically crowded variant of the chromophore of the green fluorescent protein (GFP), by means of ultrafast spectroscopy and high-level quantum chemical calculations. This bridge-methylated derivative (see Scheme 1) shows an exceptionally fast excited-state decay which is almost independent of solvent viscosity. In contrast to the native GFP chromophore, whose decay is calculated to be governed by ring rotation, the presently calculated excited-state structural evolution suggests that the methylated derivative follows a barrierless, volume-conserving coordinate composed of ring rotation and pyramidalization of the central carbon. We further show that this arises from a nonplanar form of the chromophore analogous to that found in some photoactivated FPs. 10

The FP family is established as one of the most important tools in bioimaging and cell biology. [11-14] A significant and intriguing feature is the wide range of photophysical phenomena exhibited by the covalently bound chromophore common to most of them (Scheme 1). 15 In recent years this diversity has afforded FPs a range of applications beyond bioimaging. The chromophore exhibits, depending on its environment: photochromism, critical to applications in super-resolution imaging; 3,16,17 excited-state proton transfer; [18-20] photo-isomerization; 21 intermolecular photochemical reaction (exploited in "optical highlighter" proteins); 22-24 and electron transfer (generating reactive oxygen species leading to photo-stimulated cell death). 25 In an effort to understand this diversity, the synthetic analogue of the FP chromophore (4′-hydroxybenzylidene-2,3-dimethylimidazolinone, p-HBDI, Scheme 1) has been studied intensely, through both experiment and quantum chemical calculation. [26-38] The extended delocalisation leads to a visible-absorbing chromophore which is approximately planar in its electronic ground state and adopts the cis (Z) isomer (although it may be twisted or even trans (E) in the protein environment). The photophysics of p-HBDI are dominated by ultrafast radiationless decay and isomerization. 21,28,39,40 Quantum chemical calculations suggest that excited-state population decay occurs at a conical intersection (CI) reached by an approximately 90° rotation about the τ or φ twisting coordinate (Scheme 1). 34,35,37,41-43 Such large-scale molecular motions are opposed by solvent friction, and thus predict a solvent viscosity dependence, which is not observed experimentally. Consequently, other radiationless decay coordinates which displace less solvent, notably the "hula-twist" or pyramidalization at the central bridging methyne group, have been considered.
42,44,45 These observations led to efforts to control motion along coordinates involving the bridging bonds. Chou and co-workers made a 'locked HBDI' with a 5-membered ring restraining the φ coordinate. 46 The excited-state lifetime was only slightly extended compared to p-HBDI. Remarkably, when the intramolecularly H-bonded ortho-hydroxy derivative was studied, the fluorescence lifetime increased by three orders of magnitude, to the nanosecond range. In contrast, o-HBDI itself has a more modest lifetime enhancement of about one order of magnitude. Thus, both the τ and φ coordinates should be constrained to recover the high quantum yield associated with imaging FPs, consistent with earlier studies of a boron coordination complex. 47 Very recently, other routes to τ-locked HBDI-like structures were reported. Although no direct comparison was made with HBDI, the excited-state decay remained sub-picosecond. 48,49 These data suggest a key role for the bridging carbon. In this work we synthesized the bridge-methylated derivative of HBDI (1-(4-hydroxyphenyl)ethylidene-2,3-dimethylimidazolinone, I, Scheme 1), which yields a sterically crowded nonplanar structure where neither bond is rigidly constrained. Steric crowding has previously been seen to accelerate the excited-state decay of stilbene derivatives, 50 and a similar acceleration has also been recently observed when the bridge carbon of a model bilirubin chromophore, structurally related to HBDI, is methylated. 51 The synthesis of I affords us the opportunity to investigate in detail the excited-state dynamics of a nonplanar GFP chromophore. It has been reported, from a systematic study of the structures of GFP-like proteins, that the chromophore is often nonplanar, a phenomenon ascribed to the steric effect of the surrounding matrix. 52 It was also reported that a variation in the size of the Y145 residue adjacent to the chromophore could have a controlling influence on the fluorescence quantum yield, again probably due to a steric effect. 53 A more extreme perturbation of chromophore structure by the host protein is the generation of the less common trans form, which is then often significantly distorted from a planar structure, and exhibits both weak fluorescence and photochromism. 10,16,54 The latter point is central to the application of photoconvertible proteins such as Dronpa in super-resolution fluorescence bioimaging. These factors all suggest that an investigation of the photophysics of a nonplanar chromophore outside the protein matrix may be important in assessing the role of nonplanar geometries in the photophysics of GFP-like proteins. Thus, we present a detailed experimental and theoretical study of the excited-state chemistry of I. The decay is ultrafast, even compared to the sub-picosecond decay of p-HBDI, and is only a very weak function of the environment. These observations are explained through high-level calculations combining time-dependent density functional theory (TD-DFT) using the CAM-B3LYP functional with the complete active space second-order perturbation (CASPT2) method. The ultrafast decay arises from a unique excited-state structural reorganization, which reveals the sterically crowded I as a volume-conserving molecular "torsion couple". The molecular motions required are driven by bond inversion promoting phenol ring planarization, coupled to imidazole ring rotation through the methyl group.
In the ground state the rings are already twisted, and after excitation they rotate in the same direction, so that the decay to the ground state can occur with minimal changes in volume. The generalization of this mechanism may explain some of the results outlined above, such as the short lifetimes of some 'locked' derivatives.

Results and discussion

Electronic structure and photochemistry

The absorption spectra of the neutral protonated form of I are shown in Fig. 1a, in a series of alcohol solvents of different viscosity and polarity. The absorption is a single asymmetric band with a maximum near 360 nm, and is only a weak function of solvent. This is slightly to the blue of p-HBDI, with a smaller extinction coefficient of ca. 14 000 M⁻¹ cm⁻¹ compared to 32 000 M⁻¹ cm⁻¹. NMR data (ESI 1†) show that I is synthesized in the cis (Z) and trans (E) isomers in the ratio 9 : 1. We refer to these isomers as Z-I and E-I, respectively. The absence of a strongly bimodal line shape suggests the isomers have similar spectra. Photochemical measurements show that irradiation into S1 with 365 ± 20 nm light converts Z to E in an analogous fashion to p-HBDI, 21 confirming that the Z and E isomers have similar spectra, although E has an additional shoulder below 300 nm (Fig. 1b); that no additional photoproducts are formed was confirmed by NMR. Photoconversion kinetics were measured and analysed to recover yields for the Z → E and E → Z reactions (Fig. 1). Analysis of these data (Table 1 and ESI 4†) shows that the Z → E photoconversion yield is ca. 4 ± 2%, but significantly larger for the reverse reaction at 25 ± 10%. The ground-state E to Z relaxation rates in water and methanol are 3 and 7 times greater, respectively, than for p-HBDI (see ESI 4†), which corresponds to differences in the activation energy of 0.7-1.2 kcal mol⁻¹ at room temperature. For comparison, the calculated gas-phase barriers for the thermal E → Z pathway (MS-CASPT2 energies on CASSCF geometries, see ESI 3†) are 44.5 and 44.0 kcal mol⁻¹ for p-HBDI and I, respectively (Fig. 2). This corresponds to a 2.5 times faster relaxation rate for I, in reasonable agreement with the experimental data. However, the recovery time of p-HBDI and I in the aprotic solvent acetonitrile is very long (ESI 4†), suggesting a significant solvent dependence of the E → Z reaction rate. A similar observation has been made for p-HBDI. 21 Such medium effects will be important in understanding relaxation kinetics in photochromic proteins and will be studied in a more extensive range of solvents and solvent mixtures.

The fluorescence of I is very weak, but the spectra have an approximately mirror-image relationship to the absorption, and are a weak function of the solvent (Fig. 1a). All the alcohol solvents studied show a maximum I* emission wavelength around 427 nm, although the diol, ethylene glycol, is slightly blue-shifted. In this paper we focus throughout on the neutral protonated chromophore, which is often the most photochemically active form of the FP chromophores, but very similar experimental results were obtained for the deprotonated chromophore in basic solvents (ESI 5†).

The calculated minimum-energy ground-state structure of I at the MP2/cc-pvtz level has the Z configuration around the C5-C7′ double bond and is markedly non-planar (C1 symmetry), with φ = 29.8° and τ = 2.1° (Fig. 2). This is traced to the steric interaction between the methyne methyl substituent and the two rings, which prevents formation of a stable planar minimum.
The departure from planarity gives rise to the observed blue-shift and reduced oscillator strength of Z-I compared to the planar p-HBDI. The barrier for rotation around φ, which goes through a planar transition structure and leads to a mirror-inverted structure where the rings are rotated in opposite directions, is 0.6 kcal mol⁻¹ (Fig. 2). The E-I minimum is 0.7 kcal mol⁻¹ higher in energy than Z-I and is slightly more twisted, with φ = 44.5° and τ = −176.7°. The calculated energy difference predicts that approximately 69% of the ground-state population will correspond to the Z form at 300 K, in line with the 9 : 1 ratio seen in NMR (ESI 1†).

Table 1. Rate of photoconversion (dominant Z → PSS (photostationary state)) and rate of dark relaxation PSS → Z for I. All photoconversion measurements were made under identical conditions (excitation at 365 nm with an irradiance of ca. 2 mW cm⁻², and a sample absorbance of 0.14 at 365 nm). Relaxation measurements were conducted on the same samples immediately after stopping irradiation. For further analysis and cross-section calculation see ESI 4, Table 1.

The ground-state energy profile for the rotation of the methyl group is particularly significant, as it illustrates the function of the proposed torsional couple discussed below. The rotation of the methyl group (ρ coordinate, Scheme 1) is accompanied by rotation of the phenoxy group, and the calculated barrier for methyl rotation in the Z-I form is 0.7 kcal mol⁻¹ (see Fig. ESI3†). For this barrier we estimate room-temperature methyl group rotation to occur on a timescale of 0.3-3 ps (assuming a pre-exponent of 10¹²-10¹³ s⁻¹ (ref. 55)), which is slow on the 100 fs timescale of the excited-state dynamics (see below) but fast on the NMR timescale (hence only a single NMR peak is seen for the methyl group). The vertical absorption energies were calculated with MS-CASPT2 and TD-CAM-B3LYP at the MP2/cc-pvtz optimized ground-state geometries, assuming a gas-phase environment (which is appropriate given the weak solvent effect, Fig. 1). Table 2 shows the calculated vertical excitation energies for Z-I. Fluorescence decay traces were measured 56 as a function of emission wavelength and solvent. The analysis in terms of a sum of two exponential decay terms, which accurately fits all data, is presented in Table 3. In all cases the decay is exceptionally fast, being dominated (ca. 90%) by a component of 70 ± 20 fs. The second component decays on a slightly slower timescale, but always faster than 400 fs. We obtained the same result for the anion I⁻, although the decay times are slightly longer and the longer-lived component has a higher weight (ESI 5†). These data show that the excited-state decay of I is approximately a factor of four faster than that of p-HBDI, which is already sub-picosecond (Fig. 4). 39 The measurements were made in a chemically similar series of alcohol solvents, in which the viscosity varied by a factor of 30. The effect on the mean excited-state lifetime is at most a factor of 1.5, showing that the coordinate leading to radiationless decay is essentially independent of viscosity (Fig. 4b), in contrast to the case of sterically hindered stilbenes. 50 Inspection of Table 3 shows that even this small viscosity dependence is mainly carried by the lower-amplitude 'slow' component. Further, no dependence on solvent polarity was observed.
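As a quick numerical check of the methyl-rotation timescale quoted above (0.3-3 ps from a 0.7 kcal mol⁻¹ barrier), the sketch below applies the simple Arrhenius estimate with the stated pre-exponential range; no values beyond those quoted in the text are assumed.

```python
import math

# k = A * exp(-Ea / (R*T)) with Ea = 0.7 kcal/mol, T = 300 K,
# and a pre-exponential factor A of 1e12-1e13 s^-1 (ref. 55).
R_KCAL = 1.987e-3                      # gas constant, kcal mol^-1 K^-1
Ea, T = 0.7, 300.0
boltz = math.exp(-Ea / (R_KCAL * T))   # Boltzmann factor, ~0.31

for A in (1e12, 1e13):
    k = A * boltz                      # rotation rate, s^-1
    print(f"A = {A:.0e} s^-1 -> timescale = {1.0 / k * 1e12:.1f} ps")
# -> roughly 3.2 ps and 0.3 ps, matching the 0.3-3 ps range in the text.
```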
Fig. 4c shows that the emission wavelength dependence is weak, although on the red edge of the spectrum the longer component has an increased amplitude, which is reflected in a small but reproducible wavelength dependence of the mean relaxation time (Table 3). The absence of a rise time in the emission at any wavelength shows that the fluorescent state is formed within the time resolution of the measurement. The near wavelength-independence shows that neither the shape nor the energy of the fluorescence spectrum evolves significantly in time, as was also observed for p-HBDI. 57

The observed ultrafast, viscosity- and polarity-independent decay, combined with a time-independent spectrum, is difficult to reconcile with the most well-established mechanisms for excited-state isomerization. These typically invoke diffusive motion along a reaction coordinate (for example a single-bond rotation) in the excited electronic state to access a CI, a mechanism which is expected to show a strong viscosity effect. [58-60] Such models have been successfully applied to picosecond-timescale excited-state reactions. More recently, a number of sub-picosecond excited-state isomerization reactions have been reported, where ultrafast relaxation suggests reaction coordinates which do not displace large solvent volumes and are therefore less sensitive to solvent viscosity. Examples include ultrafast isomerization in cis-stilbene and rhodopsin, in which hydrogen out-of-plane (HOOP) motion plays a key role in driving the reaction to the CI on an ultrafast timescale. 61,62 Such measurements stimulated our synthesis of I, where simple kinematics suggested that methylation might slow the reaction relative to p-HBDI. The observed acceleration is therefore inconsistent in principle with HOOP modes playing a key role in the radiationless decay of I. A similar low-volume coordinate which has been invoked in sub-picosecond excited-state reactions is pyramidalization at an ethylenic carbon atom. 63 However, there is again no reason to predict that motion along such a volume-conserving coordinate would be markedly accelerated by methylation.

To better understand the origin of the remarkably fast decay in I, we calculated the excited-state minimum energy path (MEP), which reveals that the acceleration arises from the sterically crowded non-planar structure and, further, that this gives rise to a highly cooperative excited-state structural reorganization, resulting in a transfer of torsion between the two rings. The MEP calculations for the Z-I and E-I isomers are summarized in Fig. 5, where displacements are measured in atomic units, a.u. (bohr amu^(1/2)). The paths (Fig. 5a) were obtained with TD-CAM-B3LYP optimizations, and the energies refined with MS-CASPT2 (see Computational details). Both isomers have similar bimodal MEPs characterized by an initial steep decay (approximately 0-1 a.u. displacement) associated with bond inversion, followed by an extended flat region (a plateau) where the main motion is ring rotation. Note that the initial part of the path (0-5 a.u.) is plotted on a different scale for clarity. The path leads without a barrier to an S1/S0 CI that lies 1 eV below the vertical excitation (2.45 and 2.33 eV for Z-I and E-I, respectively). Representative structures along the decay paths are shown in Fig. 5b, namely the FC and CI structures for both isomers and the structures at the beginning of the plateau (1.2 a.u. displacement), labeled plat. The structural changes are further detailed in ESI 3
†. We center our discussion on the Z isomer (see the angle definitions in Scheme 1), but for comparison we also provide the data for the E form, which follows a similar course. From the structural point of view (Fig. 5b), the initial steep decay phase is characterized by inversion of the central bonds; for Z-I, the C7′-C5 bond, which is a double bond in the ground state (Scheme 1), stretches from 1.37 to 1.42 Å at Z-plat, while the C1′-C7′ bond, which has single-bond character in S0, is shortened from 1.47 Å to 1.42 Å (see Fig. S4† for the evolution of the distances along the whole path). This is in line with the character of the orbitals involved in the excitation, since the occupied orbital is bonding along the C7′-C5 bond and antibonding along C1′-C7′, and the virtual orbital has the opposite character (Fig. 3). At the beginning of the plateau, the calculated S1/S0 energy gap is approximately 2.8 eV and decreases slowly. The calculated value is in agreement with the measured emission maximum of 2.9 eV, and suggests that the fluorescence comes mainly from the fraction of molecules that resides in this plateau region. To reach the plateau, I undergoes changes only in the bond lengths, which is consistent with the absence of a rise time in the fluorescence and with the mirror-image relationship between the absorption and emission spectra. The remaining part of the decay path is characterized by rotation of the two rings. For the major Z-I isomer, there is a continuous decrease of the oscillator strength along the path (see Fig. S5†). This supports the idea that most of the fluorescence comes from the molecules in the initial plateau region. In contrast, E-I has a lower oscillator strength that hardly changes along the path. The bond rotation along the path occurs in response to the inversion in bond character (see Fig. 5b and ESI 3†). The phenol ring, which is initially rotated out of plane, becomes co-planar with the central C5-C7′-C1′ unit, i.e. φ decreases from approximately 30° to nearly 0° and further to −17°. In turn, the imidazolone ring twists out of the plane until it becomes perpendicular to the central plane, i.e. τ increases from 2.1° to reach a final value of approximately 90° at the CI (see Fig. S6†). The CI structure is consistent with previous studies of the neutral and anionic forms of HBDI, where the minimum-energy CI is found at φ = −30 to −25° and τ = 75-103°. 35,37,43 The MEP is also similar to the one calculated for the sterically crowded bilirubin model chromophore, although there the decay, measured by transient absorption, takes place on a slower timescale. 51 Significantly, in our case the methyl group (dihedral angle ρ, Scheme 1) rotates simultaneously with the phenol and imidazolone rings (see Fig. S6†). This behavior is also seen for the E-I isomer and demonstrates the idea that the ring rotations are coupled by the methyl group. The importance of ring torsion for the radiationless decay may seem at odds with the absence of a viscosity effect. Large-scale structural changes should be opposed by solvent friction, which has not been observed (Fig. 4b). However, this can readily be understood considering that the two rings rotate in the same direction, which results in small volume changes during the decay. This can be quantified in terms of a 'flapping' angle γ (see Scheme 1), which is an effective measure of the relative motion of the two rings. Along the decay MEP (see Fig.
S7a†), there is only a small increase of γ for Z-I (from 27° at Z-FC to 51° at Z-CI), whereas γ stays almost unchanged at values of 45-50° for E-I. From this perspective, the main coordinate that drives the decay is the pyramidalization of the central C7′ carbon (see Fig. S7† for a quantitative discussion of the pyramidalization). Our analysis of the flapping and pyramidalization angles supports a mechanistic picture where the CI is characterized, for both the Z and E forms, by a flapping angle of approximately 50° between the two rings and a pyramidalized central carbon. Due to the steric crowding, the ground-state structure is already significantly pre-twisted. This probably accelerates the decay compared to the unsubstituted structure, as suggested for related chromophores. 29,51 More importantly, it reduces the changes in volume required to access the CI. This is illustrated in Fig. 6, where the FC and CI structures are superimposed for both isomers. The images show that for both isomers, one of the main differences between the FC and CI structures is the position of the methyl group. The importance of this volume-conserving pyramidalization coordinate for the decay is thus consistent with the ultrafast decay and the lack of a viscosity effect. The passage through the CI can lead to double-bond isomerization if the direction of imidazolone rotation is maintained after the decay. However, the small changes required to access the CI, and the larger slope of the ground-state surface at the CI compared to the excited state, suggest that return to the reactant configuration will be favoured. This is consistent with the small isomerization quantum yields, which are estimated at about 4 and 25% for Z-I and E-I, respectively. Remarkably, the two isomers have significantly different yields, in spite of the structural similarity between the Z-CI and E-CI structures. To explain this, one must consider that the two CIs are probably part of an extended seam of intersection, 64,65 similar to the one characterized in detail for HBDI, 43 and the different yields may be due to the dynamics of the decay at the seam.‡

The excited-state dynamics of a molecular system depend crucially on how the excitation energy is converted into nuclear motion. In the HBDI family of rotors, as well as in others such as retinal, 66 azobenzene 67 or fulvene-based systems, 68,69 the modes initially excited are bond-stretching modes associated with bond inversion (Fig. 5). The excited-state lifetime then depends on how the energy flows from these to the torsional modes, i.e. on intramolecular vibrational energy redistribution (IVR). There are two IVR-related features that distinguish I from other rotors, which explain its extremely fast dynamics and suggest its role as a photoswitch. The first is the coupling between the orientations of the two rings provided by the methyl group. Initially the main torque is applied to the phenyl group, which is driven towards planarity by bond inversion, and the coupling through the methyl group translates this rotational impulse to the imidazole ring, which then has to twist to reach the conical intersection. This additional driving force distinguishes I from other photoswitches such as the azobenzenes. 8 Secondly, I is already twisted in its ground-state structure, and this favours efficient IVR from the stretching to the torsional modes. This result is more general than the present specific case of I.
The GFP chromophore is an example of a wider family of mono-methyne dyes, many of which exhibit strong absorption and weak fluorescence and undergo excited-state structure change. 70 It is likely that bridge methylation in these cases would also lead to steric crowding, a nonplanar ground state and thus to excited-state torsional coupling, as illustrated in Fig. 5. The advantage of access to this broad family of dyes is the variety of aromatic rings available, which offers a range of synthetic targets, providing synthetic chemists an opportunity to produce torsional couples with higher isomerization yields than I and larger angular displacements between the rings. Further, such a range of nonplanar methyne-bridged ground states could be coupled to different molecular and supramolecular structures to exploit the structure change. Of course, the exploitation of such a torsional couple depends on the synthesis of derivatives with larger photochemical cross-sections and bigger displacements.

Finally, we discuss the role of non-planarity in relation to the chromophore of GFP-like proteins, where it arises due to steric crowding in the protein matrix. Our observation of an accelerated excited-state decay in the nonplanar I compared to HBDI (Fig. 3) is at first sight consistent with the observation of weak fluorescence from highly nonplanar trans forms of the chromophore in GFP-like proteins. 71 However, the relationship between planarity and quantum yield is generally not a simple one, with fluorescence being observed for proteins with a wide range of angles about both the τ and φ twists, for both cis and trans chromophores; 52 as far as we are aware, there is no systematic study of fluorescence yield as a function of chromophore geometry. Of perhaps greater significance is our characterization of a volume-conserving pyramidalization at the bridging carbon in the radiationless decay coordinate of the nonplanar I. It is not straightforward to see how a simple steric effect in the protein might suppress such a motion, and thereby enhance the fluorescence quantum yield to the very high values characteristic of FPs used in bioimaging. To gain further insight into this point, we will reassess the decay of the unsubstituted HBDI chromophore in future work. However, in addition to modifying the steric environment of the chromophore, the protein also alters its electronic structure, through intermolecular interactions and through changes to the electrostatic environment. These changes can result in significant 'protein shifts' in the energy of the electronic transitions. 72 It is possible that such interactions, rather than steric effects, play a role in determining the fluorescence quantum yield. This may occur if such interactions either modify the position of the conical intersections, or otherwise steer the excited-state structural evolution away from them. In this connection, the recent characterization of a key role for the electrostatic environment of the chromophore in controlling photoprotein fluorescence yield may be significant. 73

Conclusions

We have synthesized a novel nonplanar form of the GFP chromophore, I, and investigated its photophysics. A number of GFP chromophores have previously been synthesized with a view to stabilizing the planar geometry and enhancing fluorescence. In contrast, I is sterically crowded and has a nonplanar ground state, yet it undergoes an extremely fast excited-state decay.
Ultrafast uorescence shows that the excited state decay is greatly accelerated compared to planar p-HBDI. Quantum chemical calculations reveal a barrierless MEP to an S 1 /S 0 intersection where radiationless decay and Z/E isomerization occurs. The calculations reveal a plateau in the early part of the relaxation, where the excited state resides for its ca. 70 fs lifetime. As a consequence of the pre-twisted ground-state geometry, the CI can be reached by a volume conserving coordinate composed of torsion of the rings in the same direction and simultaneous pyramidalization of the methyne bridge carbon, consistent with the observed negligible solvent viscosity effect. Signicantly, the MEP also reveals strong coupling between the torsional angle of the two rings. Electronic excitation leads to bond inversion in the bridging methyne, which drives the phenol ring towards planarity. The orientation of the phenyl ring is coupled to the imidazolone ring orientation via the methyl substituent, causing it to be driven out-of-plane. This can be viewed as a light driven molecular torsion couple. Importantly, the excited state evolution described is unlikely to be restricted to I. There is a large family of related monomethyne dyes which share a similar bridging motif and electronic structure to p-HBDI. 74 This promises a range of possibilities to both modify MEPs to optimise isomerization yield and to design sites which allow the proposed light driven torsion couple to be incorporated into molecular machines. Experimental methods The synthesis and characterization of I are described in detail in ESI 1. † The synthesis was based on the methods described by Burgess and Wu 75 for the preparation of the imidazolone moiety, and Hsu et al. 46 for the coupling between the ketone and the imidazolone compounds. Time resolved uorescence measurements were made with a previously described up-conversion spectrometer (see ESI 2 †). 56 The excitation wavelength was 400 nm and emission at wavelengths from 470 nm to 550 nm was up-converted with 800 nm pulses. The sample was contained in a 1 mm pathlength cell. Computational Details Ground-state optimizations of the neutral form of I in the gas phase were carried out at the MP2/cc-pvtz level of theory. The excited state decay path from the Franck-Condon (FC) region to the CI was mapped with a series of constrained optimizations where points on the potential energy surface are optimized on a hypersphere with a xed radius centred on an initial pivot point. 76 The path was obtained using the optimized point of every calculation as pivot point for the following step, and the mass-weighted displacements are given in atomic units (a.u.), i.e. bohr amu 1/2 . For optimizations we used TD-CAM-B3LYP/6-311G** and the Gaussian program. 77 CASSCF/6-311G** excitedstate optimizations were discarded because this method fails to give the right state order near the FC region, 35 and it overestimates the S 1 -S 0 energy gap in the vicinity of the CI. Z-CI and E-CI are the last points of each decay path, where the TD-CAM-B3LYP optimizations fail to converge. The energies along the paths were recomputed at the MS-CASPT2/ANO-S level of theory with Molcas 78 to provide a uniform picture of the reaction paths at the multireference, dynamically correlated level. Further computational details are provided in the ESI. † Conflicts of interest There are no conicts of interest to declare. Notes and references ‡ The small differences in the shape of the MEPs near the CI visible in Fig. 
5a, where the MEP for Z-I appears to have a minimum close to the CI, are due to the failure to converge the last step of the TD-CAM-B3LYP MEP optimizations, and not to differences in the seam topography at the CI. MS-CASPT2 calculations along the gradient difference coordinate showed that the seam has a very similar topography at Z-CI and E-CI.
High Resolution X-Ray Spectroscopy with Compound Semiconductor Detectors and Digital Pulse Processing Systems

The advent of semiconductor detectors has revolutionized the broad field of X-ray spectroscopy. Semiconductor detectors, originally developed for particle physics, are now widely used for X-ray spectroscopy in a large variety of fields, such as X-ray fluorescence analysis, X-ray astronomy and diagnostic medicine. The success of semiconductor detectors is due to several unique properties that are not available with other types of detectors: the excellent energy resolution, the high detection efficiency and the possibility of developing compact detection systems. Among the semiconductors, silicon (Si) detectors are the key detectors in the soft X-ray band (<15 keV). Si-PIN diode detectors and silicon drift detectors (SDDs), with moderate cooling by means of small Peltier cells, show excellent spectroscopic performance and good detection efficiency below 15 keV. Germanium (Ge) detectors are unsurpassed for high resolution spectroscopy in the hard X-ray energy band (>15 keV) and will continue to be the choice for laboratory-based high performance spectrometers. However, there has been a continuing desire for ambient-temperature and compact detectors with the portability and convenience of a scintillator but with a significant improvement in resolution. To this end, numerous high-Z and wide band gap compound semiconductors have been exploited. Among the compound semiconductors, cadmium telluride (CdTe) and cadmium zinc telluride (CdZnTe) are very appealing for hard X-ray detectors and are widely used for the development of spectrometer prototypes for medical and astrophysical applications. Besides the detector, the readout electronics also plays a key role in the development of high resolution spectrometers. Recently, many research groups have been involved in the design and development of high resolution spectrometers based on semiconductor detectors and on digital pulse processing (DPP) techniques. Due to their lower dead time, higher stability and flexibility, digital systems, based on direct digitizing and processing of detector signals (preamplifier output signals), have recently been favored over analog electronics, ensuring high performance in both low and high counting rate environments. In this chapter, we review the research activities of our group in the development of high throughput and high resolution X-ray spectrometers based on compound semiconductor detectors and DPP systems. First, we briefly describe the physical properties and the signal formation of semiconductor detectors for X-ray spectroscopy.

Principles of operation of semiconductor detectors for X-ray spectroscopy

Semiconductor detectors for X-ray spectroscopy behave as solid-state ionization chambers operated in pulse mode (Knoll, 2000). The simplest configuration is a planar detector, i.e.
a slab of semiconductor material with metal electrodes on the opposite faces (Figure 2). Photon interactions produce electron-hole pairs in the semiconductor volume through the interactions discussed above. The interaction is a two-step process in which the electrons created in the photoelectric or Compton process lose their energy through electron-hole ionization. The most important feature of photoelectric absorption is that the number of electron-hole pairs is proportional to the photon energy. If E₀ is the incident photon energy, the number of electron-hole pairs N is equal to E₀/w, where w is the average pair creation energy. The generated charge is Q₀ = eE₀/w. The electrons and holes move toward the opposite electrodes, the anode and the cathode for electrons and holes, respectively (Figure 2). The movement of the electrons and holes causes a variation ΔQ of the induced charge on the electrodes. It is possible to calculate the induced charge Q by the Shockley-Ramo theorem (Cavalleri et al., 1971; Ramo, 1939; Shockley, 1938), which makes use of the concept of the weighting potential ψ. The weighting potential is defined as the potential that would exist in the detector with the collecting electrode held at unit potential, while holding all other electrodes at zero potential. According to the Shockley-Ramo theorem, the charge induced by a carrier q moving from xᵢ to x_f is given by

ΔQ = −q [ψ(x_f) − ψ(xᵢ)],

where ψ(x) is the weighting potential at position x. It is possible to calculate the weighting potential by analytically solving the Laplace equation inside the detector. In a semiconductor, the total induced charge is given by the sum of the induced charges due to both the electrons and the holes. For a planar detector, the weighting potential ψ of the anode is a linear function of the distance x from the cathode:

ψ(x) = x/L,

where L is the detector thickness. Neglecting charge loss during the transit of the carriers, the charge induced on the anode electrode by N electron-hole pairs generated at a distance x from the cathode is

Q = (Ne/L)(v_e t_e + v_h t_h) = Ne,

where t_h = x/v_h and t_e = (L − x)/v_e are the transit times of the holes and electrons, respectively, and v_h and v_e are their drift velocities. Charge trapping and recombination are typical effects in compound semiconductors and may prevent full charge collection. For a planar detector with a uniform electric field, neglecting charge de-trapping, the charge collection efficiency (CCE), i.e. the induced charge normalized to the generated charge, is given by the Hecht equation (Hecht, 1932):

CCE = (λ_h/L)[1 − exp(−x/λ_h)] + (λ_e/L)[1 − exp(−(L − x)/λ_e)],

where λ_h = μ_h τ_h E and λ_e = μ_e τ_e E are the mean drift lengths of the holes and electrons, respectively, x is the interaction depth measured from the cathode and E is the applied electric field.
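As an illustration of how the Hecht equation behaves, the following sketch evaluates the CCE across the detector depth for transport parameters of the order of those typical of CdTe; the specific μτ values, thickness and field are representative assumptions, not measured values from this chapter.

```python
import numpy as np

def hecht_cce(x, L, mu_tau_e, mu_tau_h, E_field):
    """Charge collection efficiency of a planar detector (Hecht equation).

    x        : interaction depth measured from the cathode, cm
    L        : detector thickness, cm
    mu_tau_* : mobility-lifetime products, cm^2/V
    E_field  : applied electric field, V/cm
    """
    lam_e = mu_tau_e * E_field   # electron mean drift length, cm
    lam_h = mu_tau_h * E_field   # hole mean drift length, cm
    return (lam_h / L) * (1.0 - np.exp(-x / lam_h)) \
         + (lam_e / L) * (1.0 - np.exp(-(L - x) / lam_e))

# Representative CdTe-like values (assumed): 1 mm thickness, 4000 V/cm field.
L = 0.1
x = np.linspace(0.0, L, 5)
cce = hecht_cce(x, L, mu_tau_e=1e-3, mu_tau_h=1e-4, E_field=4000.0)
print(np.round(cce, 3))   # CCE drops for interactions far from the cathode
```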
The CCE depends not only on λ_h and λ_e, but also on the incoming photon interaction position. Small λ/L ratios reduce the charge collection and increase the dependence on the photon interaction point. The random distribution of the interaction point therefore increases the fluctuations of the induced charge and thus produces peak broadening in the energy spectra. The charge transport properties of a semiconductor, expressed by the hole and electron mobility-lifetime products (μ_h τ_h and μ_e τ_e), are key parameters in the development of radiation detectors. Poor mobility-lifetime products result in short λ and therefore small λ/L ratios, which limit the maximum thickness and energy range of the detectors. Compound semiconductors are generally characterized by poor charge transport properties, especially for holes, due to charge trapping. Trapping centers are mainly caused by structural defects (e.g. vacancies), impurities and irregularities (e.g. dislocations, inclusions). In compound semiconductors, μ_e τ_e is typically of the order of 10⁻⁵-10⁻² cm²/V, while μ_h τ_h is usually much worse, with values around 10⁻⁶-10⁻⁴ cm²/V, as reported in Table 1. Therefore, the corresponding mean drift lengths of electrons and holes are 0.2-200 mm and 0.02-2 mm, respectively, for typical applied electric fields of 2000 V/cm. As pointed out in the foregoing discussion, the poor carrier transport properties of CdTe and CdZnTe materials are a critical issue in the development of X-ray detectors. Moreover, the significant difference between the transport properties of the holes and the electrons produces well-known spectral distortions in the measured spectra, i.e. peak asymmetries and long tails. To overcome the effects of the poor transport properties of the holes, several methods have been employed (Del Sordo et al., 2009). Some techniques concern the particular irradiation configuration of the detectors. The planar parallel field (PPF) configuration is the classical one used in planar detectors, in which the detectors are irradiated through the cathode electrode, thus minimizing the hole trapping probability. In an alternative configuration, denoted planar transverse field (PTF), the irradiation direction is orthogonal (transverse) to the electric field. In such a configuration, different detector thicknesses can be chosen in order to meet the required detection efficiency without modifying the inter-electrode distance and thus the charge collection properties of the detectors. Several techniques are used in the development of detectors based on the collection of the electrons (single charge carrier sensing), which have better transport properties than the holes. Single charge carrier sensing techniques are widely employed in compound semiconductor detectors through careful electrode designs (Frisch grid, pixels, coplanar grids, strips and multiple electrodes) and through electronic methods (pulse shape analysis). As is clear from the above discussion, the charge collection efficiency is a crucial property of a radiation detector that affects the spectroscopic performance and in particular the energy resolution. High charge collection efficiency ensures good energy resolution, which also depends on the statistics of the charge generation and on the noise of the readout electronics.
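As a quick numerical cross-check of the drift lengths quoted above, the snippet below evaluates λ = μτE over the stated μτ ranges at the typical 2000 V/cm field; no values beyond those given in the text are assumed.

```python
# Mean drift length lambda = mu*tau * E for the mu*tau ranges quoted above.
E_FIELD = 2000.0  # typical applied electric field, V/cm

mu_tau = {
    "electrons": (1e-5, 1e-2),   # cm^2/V
    "holes":     (1e-6, 1e-4),   # cm^2/V
}

for carrier, (lo, hi) in mu_tau.items():
    # Factor of 10 converts cm to mm for direct comparison with the text.
    print(f"{carrier}: {lo * E_FIELD * 10:g}-{hi * E_FIELD * 10:g} mm")
# -> electrons: 0.2-200 mm, holes: 0.02-2 mm
```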
Generally, the energy resolution of a radiation detector, estimated through the full width at half maximum (FWHM) of the full-energy peaks, is mainly influenced by three contributions. The first is the Fano noise due to the statistics of charge carrier generation. In compound semiconductors, the Fano factor F is much smaller than unity (0.06-0.14). The second contribution is the electronic noise, which mainly depends on the readout electronics and on the leakage current of the detector, while the third is the contribution of the charge collection process.

Electronics for high resolution spectroscopy: the digital pulse processing (DPP) approach

Nowadays, the dramatic performance improvement of analog-to-digital converters (ADCs) has stimulated intensive research and development on digital pulse processing (DPP) systems for high resolution X-ray spectroscopy. The availability of very fast and high precision digitizers has driven physicists and engineers to realize electronics in which the analog-to-digital conversion is performed as close as possible to the detector. This approach is the reverse of more traditional electronics, which were made of mainly analog circuits with the A/D conversion at the end of the chain. Figure 3 shows the simplified block diagrams of analog and DPP electronics for X-ray detectors. In a typical analog chain, the detector signals are amplified by a charge sensitive preamplifier (CSP), shaped and filtered by an analog shaping amplifier, and finally processed by a multichannel analyzer (MCA) to generate the energy spectrum. In a DPP system, the preamplifier output signals are directly digitized by an ADC and then processed using digital algorithms. A DPP system leads to better results than an analog one, mainly due to (i) stability, (ii) flexibility and (iii) higher throughput (i.e. the rate of useful counts in the energy spectrum). With regard to the improved stability of a DPP system, the direct digitizing of the detector signals minimizes the drift and instability normally associated with analog signal processing. In terms of flexibility, it is possible to implement complex algorithms for adaptive processing and optimum filtering that are not easily implementable through a traditional analog approach. Moreover, a DPP analysis requires considerably less overall processing time than the analog one, ensuring lower dead time and higher throughput, which is very important under high rate conditions. The dead time of an analog system is mainly due to the pulse processing time of the shaper and to the conversion time of the MCA. The pulse processing time is generally related to the temporal width of the pulse, which depends on the shaping time constant of the shaper, and can be described through the well known paralyzable dead time model (Knoll, 2000). The MCA dead time, generally described by the nonparalyzable dead time model (Knoll, 2000), is often the dominant contributor to the overall dead time. In a DPP system there is no additional dead time associated with digitizing the pulses, so the equivalent of the MCA dead time is zero. Therefore the overall dead time of a digital system is generally lower than that of an analog one. Another positive aspect of DPP systems is the possibility of performing off-line analysis of the detector signals: since the signals are captured, more complex analyses can be postponed until the source event has been deemed interesting.
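To make the two dead-time models just mentioned concrete, the sketch below compares the measured-versus-true rate curves for a paralyzable and a nonparalyzable stage with the same dead time; the 10 μs value is an illustrative assumption, not a figure from this chapter.

```python
import numpy as np

def paralyzable(n, tau):
    """Measured rate for a paralyzable stage: m = n * exp(-n * tau)."""
    return n * np.exp(-n * tau)

def nonparalyzable(n, tau):
    """Measured rate for a nonparalyzable stage: m = n / (1 + n * tau)."""
    return n / (1.0 + n * tau)

tau = 10e-6                      # illustrative dead time, s
n = np.array([1e3, 1e4, 1e5])    # true input rates, counts/s
print(paralyzable(n, tau))       # rolls over at high rate (max at n = 1/tau)
print(nonparalyzable(n, tau))    # saturates toward 1/tau instead
```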
A digital CdTe X-ray spectrometer for both low and high counting rate environments

In this section, we report on the spectroscopic performance of a CdTe detector coupled to a custom DPP system for X-ray spectroscopy. We first describe the main characteristics of the detector and the DPP system, and then we present the results of the characterization of the overall detection system at both low (200 cps) and high photon counting rates (up to 800 kcps), using monoenergetic X-ray sources (¹⁰⁹Cd, ¹⁵²Eu, ²⁴¹Am, ⁵⁷Co) and a nonclinical X-ray tube with different anode materials (Ag, Mo). This work was carried out as a continuation of previously developed DPP systems (Abbene et al., 2007, 2010a, 2010b, 2011; Gerardi et al., 2007), with the goal of developing a digital spectrometer based on DPP techniques and characterized by high performance in both low and high photon counting rate environments.

CdTe detector

The detector is based on a thin CdTe crystal (2 × 2 × 1 mm³), wherein both the anode (indium) and the cathode (platinum) are planar electrodes covering the entire detector surface. The Schottky barrier at the In/CdTe interface ensures low leakage current even at high bias voltage operation (400 V), thus improving the charge collection efficiency. The thickness of the detector guarantees a very good photon detection efficiency (99%) up to 40 keV. A Peltier cell cools both the CdTe crystal and the input FET of the charge sensitive preamplifier (A250, Amptek, U.S.A.) to a temperature of -20 °C. Cooling the detector reduces the leakage current, allowing the application of higher bias voltages to the electrodes; moreover, cooling the FET increases its transconductance and reduces the electronic noise. The detector, the FET and the Peltier cooler are mounted in a hermetic package equipped with a light/vacuum-tight beryllium window (modified version of the Amptek XR100T-CdTe, S/N 6012). To increase the maximum counting rate of the preamplifier, a feedback resistor of 1 GΩ and a feedback capacitor of 0.1 pF were used. The detector is equipped with a test input to evaluate the electronic noise.

Digital pulse processing system

The DPP system consists of a digitizer and a PC on which the digital analysis of the detector pulses (preamplifier output pulses) was implemented. The detector signals are directly digitized by a 14-bit, 100 MHz digitizer (NI5122, National Instruments). The digital data are acquired and recorded by a LabVIEW program on the PC platform and then processed off-line by a custom digital pulse processing method (C++ coded software) developed by our group. The analysis time is about 3 times the acquisition time. The DPP method, implemented on the PC platform, performs a height and shape analysis of the detector pulses. Combining fast and slow shaping, automatic pole-zero adjustment, baseline restoration and pile-up rejection, the digital method allows precise pulse height measurements in both low and high counting rate environments. Pulse shape analysis techniques (pulse shape discrimination, linear and nonlinear pulse shape corrections) to compensate for incomplete charge collection were also implemented. The digitized pulses are shaped using the classical single delay line (SDL) shaping technique (Knoll, 2000). Each shaped pulse is obtained by subtracting from the original pulse its delayed and attenuated fraction. The attenuation of the delayed pulse eliminates the undesirable undershoot following the shaped pulse, i.e.
acting as a pole-zero cancellation. For a single digitized pulse, consisting of a defined number of samples, this operation can be represented by the following equation:

V_shaped(nT_s) = V_preamp(nT_s) - A · V_preamp((n - n_d)T_s)

where T_s is the ADC sample period, n_d T_s = T_d is the delay time, A is the attenuation coefficient, V_shaped(nT_s) is the shaped sample at the discrete time instant nT_s and V_preamp(nT_s) is the preamplifier output sample at the discrete time instant nT_s. The width of each shaped pulse is equal to T_d + T_p, where T_p is the peaking time of the related preamplifier output pulse.

Our DPP method is characterized by two shaping modes: a fast SDL shaping mode and a slow SDL shaping mode, operating at different delay times. The "fast" shaping operation, characterized by a short delay time T_d,fast, is optimized to detect the pulses and to provide a pile-up inspection. If the width of a shaped pulse exceeds a maximum width threshold, the pulse is classified as representative of pile-up events; whenever possible, each overlapped event is recognized through a peak detection analysis. Obviously, these events are not analyzed by the "slow" shaping procedure. The delay time of the "fast" shaping operation is a dead time for the DPP system (paralyzable dead time) and must be as small as possible, depending on the detector and ADC characteristics. With regard to the paralyzable model, the true photon counting rate n is related to the measured photon counting rate m through the following equation (Knoll, 2000):

m = n · exp(-n T_d,fast)    (7)

It is possible to evaluate the true rate n from the measured rate m by solving equation (7) iteratively. The DPP system, through the "fast" shaping operation, gives the estimation of the true rate n through equation (7) and the measured rate m. We used T_d,fast = 50 ns.

The "slow" shaping operation, which has a longer delay time T_d,slow than the "fast" one, is optimized to shorten the pulse width and minimize the ballistic deficit. To obtain a precise pulse height measurement, a convolution of the shaped pulses with a Gaussian function was performed. The slow delay time T_d,slow acts as the shaping time constant of an analog shaping amplifier: the proper choice depends on the peaking time of the preamplifier pulses, the noise and the incoming photon counting rate.

To ensure good energy resolution at high photon counting rates as well, a standard detection system is typically equipped with a baseline restorer which minimizes the fluctuations of the baseline. The digital method performs a baseline recovery by evaluating the mean value of the samples, within a time window equal to T_d,slow/2, before and after each shaped pulse. This operation sets a minimum time spacing between the pulses, equal to T_a = 2T_d,slow + T_p, for which no mutual interference must exist in the baseline measurement. The minimum time spacing T_a is used to decide whether events must be discarded; in particular, if the time spacing does not exceed T_a, the two events are rejected. It is clear that too long a T_d,slow value reduces the number of counts in the measured spectrum (analyzed events), and its optimum value is the best compromise between the required energy resolution and throughput. The time T_a is a paralyzable dead time for the slow shaping operation. Both fast and slow procedures gave consistent values of the incoming photon counting rate.
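The SDL shaping step and the iterative inversion of the paralyzable dead time relation can both be sketched in a few lines. This is an illustrative reimplementation, not the authors' C++ code; the pulse model, sampling period and rates are assumptions made for the example.

```python
import numpy as np

def sdl_shape(v_preamp, n_d, A):
    """Single delay line shaping: v_shaped[n] = v_preamp[n] - A * v_preamp[n - n_d]."""
    delayed = np.zeros_like(v_preamp)
    delayed[n_d:] = v_preamp[:-n_d]
    return v_preamp - A * delayed

def true_rate(m, tau, iterations=100):
    """Solve m = n * exp(-n * tau) for the true rate n by fixed-point iteration
    n = m * exp(n * tau); converges quickly when n * tau << 1."""
    n = m
    for _ in range(iterations):
        n = m * np.exp(n * tau)
    return n

# Illustrative preamplifier pulse: step at 2 us with a 50 us exponential decay, 10 ns sampling
t = np.arange(0.0, 20e-6, 10e-9)
pulse = np.where(t > 2e-6, np.exp(-(t - 2e-6) / 50e-6), 0.0)

# Choosing A = exp(-T_d / tau_decay) exactly cancels the decay-induced undershoot,
# i.e. it acts as a pole-zero cancellation for an exponential preamplifier decay.
n_d = 5                                   # delay of 5 samples = 50 ns
A = np.exp(-n_d * 10e-9 / 50e-6)
shaped = sdl_shape(pulse, n_d, A)

print("min of shaped pulse (~0, no undershoot):", shaped.min())
print(f"true rate for m = 500 kcps, tau = 50 ns: {true_rate(5e5, 50e-9):.0f} cps")
```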
Pulse shape discrimination

A pulse shape discrimination (PSD) technique was implemented in our DPP system by analyzing the peaking time distribution of the pulses and their correlation with the energy spectra. Pulse shape discrimination, first introduced by Jones in 1975 (Jones & Woollam, 1975), is a common technique for enhancing the spectral performance of compound semiconductor detectors. Generally, this technique is based on the selection of a range of peaking time values for which the pulses are less influenced by incomplete charge collection. As previously discussed, incomplete charge collection, mainly due to the poor transport properties of the holes, is a typical drawback of compound semiconductor detectors, producing long tailing and asymmetry in the measured spectra. As is well known, the pulses most influenced by the hole contribution are generally characterized by longer peaking times. These effects become more prominent as the energy of the radiation (i.e. the depth of interaction) increases; events with a greater depth of interaction take place close to the anode electrode, producing pulses mostly due to the hole transit.

To perform the pulse shape analysis, we carried out the measurement of the peaking time of the analyzed pulses. We first evaluate the rise time of the pulses, i.e. the interval between the times at which the shaped pulse reaches 10% and 90% of its height (after baseline restoration). The times corresponding to the exact fractions (10% and 90%) of the pulse height are obtained through a linear interpolation. We estimate the peaking time to be equal to 2.27 times the rise time (i.e. about five times the time constant). Due to the precise measurement of the pulse height and baseline and to the interpolation, the method allows fine peaking time estimations (with a precision of 2 ns) at both low and high photon counting rates.

Figure 4(a) shows the pulse peaking time distribution of ⁵⁷Co events measured with the CdTe detector. The distribution has an asymmetric shape and suffers from a tail, which is attributed to the slow peaking time events. The correlation between the peaking time and the height of the pulses is pointed out by the bi-parametric distribution (⁵⁷Co source) shown in Figure 4(b). It is clearly visible that, for longer peaking times, the photopeak shifts to lower energies, as expected. This distribution is very helpful to better understand the tailing in the measured spectra and to implement correction methods.
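A minimal sketch of the 10%-90% rise time measurement with linear interpolation follows; the pulse model is an assumption used only to exercise the function, and the 2.27 factor is the one quoted above.

```python
import numpy as np

def rise_time(t, v, lo=0.10, hi=0.90):
    """10%-90% rise time of a baseline-restored pulse, with linear
    interpolation at the exact threshold crossings."""
    peak = v.max()
    def crossing(frac):
        thr = frac * peak
        i = int(np.argmax(v >= thr))               # first sample at/above threshold
        return np.interp(thr, [v[i - 1], v[i]], [t[i - 1], t[i]])
    return crossing(hi) - crossing(lo)

# Illustrative pulse: exponential rise with a 100 ns time constant, 10 ns sampling
t = np.arange(0.0, 2e-6, 10e-9)
v = 1.0 - np.exp(-t / 100e-9)

tr = rise_time(t, v)
print(f"rise time = {tr*1e9:.1f} ns, peaking time ~ {2.27*tr*1e9:.1f} ns")
```

For a pure exponential rise the 10%-90% rise time is about 2.2 time constants, so 2.27 times the rise time is indeed about five time constants, consistent with the estimate used in the text.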
As proposed by Sjöland in 1994 (Sjöland & Kristiansson, 1994), pulse shape discrimination can also be used to minimize peak pile-up events, i.e. overlapped preamplifier pulses within the peaking time that are not detectable through the "fast" shaping operation. Because the shape (peaking time) of a peak pile-up pulse differs from that of a pulse not affected by pile-up, analyzing the measured spectra at different peaking time regions (PTRs) of the peaking time distribution is helpful to reduce peak pile-up. Figure 5(a) shows some selected PTRs in the peaking time distribution of the pulses from the ¹⁰⁹Cd source (at 820 kcps), while Figure 5(b) shows the ¹⁰⁹Cd spectra for each PTR. These results point out the characteristics of the peak pile-up events, which have a longer peaking time than the correct events, and thus the potential of the PSD technique to minimize these spectral distortions.

Following the approach proposed in (… et al., 1996), the method corrects all pulses to a hypothetical zero peaking time. The method requires a preliminary calibration procedure, strictly dependent on the characteristics of the detector, based on the analysis of the behaviour of the photopeak centroid versus the peaking time. Figure 6(a) shows the photopeak centroid vs. the peaking time values for some photopeaks of the measured spectra (¹⁰⁹Cd, ¹⁵²Eu, ²⁴¹Am and ⁵⁷Co). We analyzed the photopeak centroid shift by using the following linear function:

E_det = m_E · T_p + E_cor    (8)

where E_det is the photopeak centroid, m_E is the slope of the linear function, T_p is the peaking time and E_cor is the corrected centroid at zero peaking time. The corrected centroid E_cor is the result of correcting E_det to an ideal point of zero peaking time, and it is the desired height for a pulse. It is interesting to note that both the slope m_E and the corrected centroid E_cor are linear functions of the photon energy E, as shown in Figures 6(b) and 6(c). The fitting equations are:

m_E = k_1 · E + k_2    (9)
E_cor = k_3 · E + k_4    (10)

where k_1, k_3 and k_2, k_4 are the slopes and the y-intercepts of the linear functions, respectively. Combining equations (9) and (10) to eliminate E yields:

m_E = A · E_cor + B,  with A = k_1/k_3 and B = k_2 - k_1 k_4/k_3    (12)

Combining equations (8) and (12) yields:

E_cor = (E_det - B · T_p) / (1 + A · T_p)    (14)

By using equation (14) it is possible to adjust the pulse height E_det of a pulse through the knowledge of the bi-parametric distribution and of the constants A and B obtained from the calibration procedure.

We also implemented a nonlinear pulse shape correction method, using the following function:

E_det = n_E · T_p² + m_E · T_p + E_cor    (15)

where the coefficients n_E, m_E and E_cor are again linear functions of the photon energy E (equations (16), (17) and (18)). Combining equations (16), (17) and (18) with (15) yields the corresponding nonlinear correction formula (19).

The bi-parametric correction method presents an important limitation: it is only applicable to pure photoelectric interactions, i.e. when the energy of each incident photon is deposited at a single point in the detector. If the photon Compton scatters at one depth in the detector and then undergoes photoelectric absorption at a second depth, the height-peaking time relationship can differ from that of a single interaction. For high atomic number compound semiconductors, such as CdTe and CdZnTe, photoelectric absorption is the main process up to about 200 keV (Del Sordo et al., 2009).
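The linear correction of equation (14) is straightforward to apply pulse by pulse. Below is a minimal sketch; the function name and the calibration constants A and B are hypothetical values standing in for the ones obtained from the actual calibration.

```python
def linear_psc(E_det, T_p, A, B):
    """Linear pulse shape correction: corrected (zero-peaking-time) height
    E_cor = (E_det - B * T_p) / (1 + A * T_p), cf. equation (14)."""
    return (E_det - B * T_p) / (1.0 + A * T_p)

# Hypothetical calibration constants (A in 1/s, B in keV/s) and a pulse
# measured at a height of 59.0 keV with a peaking time of 300 ns
A, B = -1.0e3, -2.0e6
print(f"corrected height: {linear_psc(59.0, 300e-9, A, B):.2f} keV")
```

With these illustrative constants the 59.0 keV pulse corrects upward by about 0.6 keV, consistent with the photopeak shifting to lower energies at longer peaking times.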
Spectroscopic characterization

To investigate the spectroscopic performance of the system, we used X-ray and gamma ray calibration sources (¹⁰⁹Cd: 22.1, 24.9 and 88.1 keV; ²⁴¹Am: 59.5 and 26.3 keV and the Np L X-ray lines between 13 and 21 keV; ¹⁵²Eu: 121.8 keV and the Sm K lines between 39 and 46 keV; ⁵⁷Co: 122.1 and 136.5 keV and the W fluorescent lines, Kα1 = 59.3 keV, Kα2 = 58.0 keV, Kβ1 = 67.1 keV, Kβ3 = 66.9 keV, produced in the source backing). The 14 keV gamma line (⁵⁷Co) is shielded by the source holder itself. For high rate measurements, we also used another ²⁴¹Am source, with the Np L X-ray lines shielded by the source holder. To obtain different rates (up to 820 kcps) of the photons incident on the detector (through the cathode surface), we changed the irradiated area of the detector by using collimators (Pb and W) with different geometries. To better investigate the high-rate performance of the system, we also performed measurements of Mo-target X-ray spectra at the "Livio Scarsi" Laboratory (LAX) located at DIFI (Palermo). The facility is able to produce X-ray beams with an operational energy range of 0.1-60 keV (tube anodes: Ag, Co, Cr, Cu, Fe, Mo, W), collimated over a length of 10.5 m with a diameter at full aperture of 200 mm (Figure 7). In this work, we used Mo and Ag targets. No collimators were used for these measurements. To characterize the spectroscopic performance of the system, we evaluated, from the measured spectra, the energy resolution (FWHM) and the FW.25M to FWHM ratio, defined in agreement with the IEEE standard (IEEE Standard, 2003). We also evaluated the area of the energy peaks (photopeak area) through the high side area (HSA) (IEEE Standard, 2003), i.e. the area between the peak center line and the peak's high-energy toe; the photopeak area was calculated as twice the HSA. The measured spectra were analyzed by using a custom function model, which takes into account both the symmetric and the asymmetric peak distortion effects (Del Sordo et al., 2004). Statistical errors with a confidence level of 68% were associated with the spectroscopic parameters.
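The FWHM and the FW.25M/FWHM ratio can be extracted from a peak by simple interpolation, as in the sketch below; the Gaussian test peak is synthetic and the function is ours, not the IEEE reference implementation.

```python
import numpy as np

def width_at_fraction(x, y, frac):
    """Full width of the highest peak at a given fraction of its maximum,
    interpolating linearly on both flanks."""
    i_pk = int(np.argmax(y))
    level = frac * y[i_pk]
    left = np.interp(level, y[:i_pk + 1], x[:i_pk + 1])          # rising flank
    right = np.interp(level, y[i_pk:][::-1], x[i_pk:][::-1])     # falling flank
    return right - left

# Synthetic Gaussian photopeak centred at 59.5 keV with sigma = 0.5 keV
x = np.linspace(55.0, 65.0, 2001)
y = np.exp(-0.5 * ((x - 59.5) / 0.5) ** 2)

fwhm = width_at_fraction(x, y, 0.50)
fw25m = width_at_fraction(x, y, 0.25)
print(f"FWHM = {fwhm:.3f} keV, FW.25M/FWHM = {fw25m/fwhm:.3f}")  # -> ~1.414 for a Gaussian
```

The printed ratio reproduces the ideal Gaussian value of about 1.41 quoted in the next section; tailing from incomplete charge collection pushes the measured ratio above this value.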
Low count rate performance

In this section we present the performance of the system at a low photon counting rate (200 cps), using the PSD and the linear and nonlinear PSC techniques. We first used the PSD technique looking for the best performance, despite the strong reduction of the photopeak area (about 90%). With the PSD technique, we obtained the following results: energy resolution (FWHM) of 2.05%, 0.98% and 0.68% at 22.1, 59.5 and 122.1 keV, respectively. Figure 8 shows the measured ²⁴¹Am and ⁵⁷Co spectra with and without the PSD technique. To better point out the spectral improvements of the PSD technique, we report in Figures 8(b) and 8(d) a zoom of the 59.5 and 122.1 keV photopeaks, normalized to the photopeak centroid counts. As widely shown in several works, PSD produced a strong reduction of the peak asymmetry and tailing in the measured spectra: the 122.1 keV photopeak of the ⁵⁷Co spectrum, after PSD, is characterized by an energy resolution improvement of 57% and low tailing; the FW.25M to FWHM ratio is reduced to 1.46, quite close to the ideal Gaussian ratio (FW.25M/FWHM_Gaussian = 1.41). The reduction of trapping effects allows the use of semi-empirical models of the energy resolution function, which can be used to estimate some characteristic parameters of compound semiconductors, such as the Fano factor F and the average pair creation energy w. For low trapping, the energy resolution, as proposed by Owens (Owens & Peacock, 2004), can be described by the following equation:

FWHM(E) = [ΔE_el² + (2.355)² F w E + a E^b]^(1/2)    (20)

where ΔE_el is the electronic noise contribution and a and b are semi-empirical constants that can be obtained by a best-fit procedure. Equation (20) can be used to estimate the Fano factor F and the average pair creation energy w. In our case, we used a tabulated value of w (4.43 eV) and obtained F, a and b by a best-fit procedure; we also measured the energy resolution of the 17.77 keV Np L X-ray line to obtain at least one degree of freedom (dof).

Best fitting equation (20) to the measured energy resolution points (with no PSD) resulted in a bad fit, due to the high trapping contribution. We obtained a good fit with the measured data points after PSD (χ²/dof = 1.21; dof = 1), as shown in Figure 9. The fitted value of the Fano factor (0.09 ± 0.03) is in agreement with the literature values (Del Sordo et al., 2009). Figure 9 also shows the individual components of the energy resolution (after PSD). It is clearly visible that the electronic noise (dashed line) dominates the resolution function below 60 keV, whereas the Fano noise (dotted line) dominates the charge collection noise (dot-dashed line) over the whole energy range (up to 122 keV). After PSD, the charge collection noise is mainly due to electron trapping and diffusion. We stress that this analysis was performed in order to better point out the performance enhancements of the PSD technique and not for precise measurements of the Fano factor. Nevertheless, the potential of the technique for precise Fano factor estimations is evident: it would be sufficient to measure a greater number of monoenergetic X-ray lines to ensure more precise Fano factor estimations.
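The best-fit procedure for the resolution function can be sketched with a standard least-squares routine. The parameterization below (electronic, Fano and a semi-empirical a·E^b collection term added in quadrature) is our reading of the Owens-type model, and the data points are synthetic stand-ins for the measured ones.

```python
import numpy as np
from scipy.optimize import curve_fit

W_PAIR = 4.43e-3  # average pair creation energy in keV (4.43 eV)

def fwhm_model(E, F, a, b, dE_el=0.9):
    """Assumed resolution function (keV), cf. equation (20): quadrature sum of
    electronic noise, Fano noise and a semi-empirical collection term a*E**b."""
    return np.sqrt(dE_el**2 + (2.355**2) * F * W_PAIR * E + a * E**b)

# Synthetic 'measured' points generated from the model with F = 0.1, a = 1e-4, b = 1.2
E = np.array([17.77, 22.1, 59.5, 122.1])
fwhm_meas = fwhm_model(E, 0.1, 1e-4, 1.2)

popt, pcov = curve_fit(fwhm_model, E, fwhm_meas, p0=(0.08, 2e-4, 1.0))
print(f"fitted Fano factor: {popt[0]:.3f}")  # recovers 0.100
```

With four energies and three free parameters the fit has a single degree of freedom, matching the χ²/dof quoted above; in practice, more monoenergetic lines would make the Fano estimate more robust.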
Table 2 shows the performance of the detector with no correction and after PSD, linear PSC and nonlinear PSC. We proceeded as follows: (i) we used the PSD technique selecting the PTR which produced no reduction of the photopeak area; (ii) we applied both linear PSC and PSD so as to obtain no reduction of the photopeak area; and finally (iii) we applied both the linear and nonlinear PSC techniques to all peaking time values, obtaining no reduction of the total counts. Despite the similar results of the linear and nonlinear corrections, the implementation of the nonlinear correction opens up the possibility of charge collection compensation for thicker detectors, wherein hole trapping effects are more severe. We stress that the flexibility of the digital pulse processing approach also allows the easy implementation of more complicated correction methods for the minimization of incomplete charge collection.

High count rate performance

Figure 11 shows the performance of the detection system (with T_d,slow = 3 μs), irradiated with the ¹⁰⁹Cd source, at different photon counting rates (up to 820 kcps). The throughput of the system (i.e. the rate of the events in the spectrum, or the rate of the events analyzed by the slow pulse shaping), the 22.1 keV photopeak centroid and the energy resolution (FWHM) at 22.1 keV were measured. The photopeak centroid shift was less than 1%, and the system showed only a small worsening of the energy resolution. Despite the low throughput of the system, it is possible to measure the input photon counting rate through the fast pulse shaping. The system, through both slow and fast pulse shaping, is able to determine the input count rate and the energy spectrum with high accuracy and precision even at high photon counting rates. Figure 12 shows the measured ¹⁰⁹Cd spectra (a) at 200 cps with no correction, (b) at 820 kcps with no correction and (c) at 820 kcps after PSD (100 ns ≤ PTR ≤ 148 ns). The results (summarized in Table 3) highlight the excellent high rate capability of our digital system. Moreover, PSD allowed a strong reduction (96%) of the number of peak pile-up events in the measured spectrum, as shown in Figure 12(c). High-rate ²⁴¹Am spectrum measurements (Figures 12(d), (e) and (f)) also show that both PSD and linear PSC can be used for the compensation of charge trapping and peak pile-up. With regard to the ²⁴¹Am spectra, we first minimized peak pile-up (with a reduction of about 96%) by selecting a proper PTR (100 ns ≤ PTR ≤ 154 ns) and then applied the linear PSC within the selected PTR. We also measured X-ray spectra from a nonclinical X-ray tube with different anode materials (Ag, Mo). Figure 13 shows the measured Ag-target X-ray spectra (32 kV) at 8.8 kcps with no correction, at 258 kcps with no correction and at 258 kcps after PSD. At a high photon counting rate, the measured Ag spectrum, despite the good energy resolution of the peaks (22.1 and 24.9 keV), is characterized by a high background beyond the end point energy, due to peak pile-up; after PSD, this background is quite similar to that of the spectrum at a low photon counting rate. These results open up the possibility of precise estimations of
the end point energy, i.e. the peak voltage of an X-ray tube, even at high photon counting rates. As is well known, precise peak voltage measurements are essential for accurate quality controls on clinical X-ray tubes. Figures 13(d), (e) and (f) also show the measured Mo X-ray spectra.

Fig. 13. Ag-target X-ray spectra (32 kV) (a) at 8.8 kcps with no correction, (b) at 258 kcps with no correction and (c) at 258 kcps after PSD. Mo-target X-ray spectra (32 kV) (d) at 9.9 kcps with no correction, (e) at 363 kcps with no correction and (f) at 363 kcps after PSD.

Direct measurement of mammographic X-ray spectra with the digital CdTe spectrometer: A medical application

Knowledge of the energy spectra from X-ray tubes is a key tool for quality controls (QCs) in mammography. As is well known, X-ray spectra directly influence the dose delivered to the patients as well as the image quality. X-ray spectra can be used for accurate estimations of the peak voltage (kVp) of the tubes, the energy fluence rate, the inherent filtration and the beam-hardening artifacts, and for the correct implementation of the new dual-energy techniques. By way of example, the peak voltage of a diagnostic X-ray tube should be routinely monitored, since small kVp changes can modify both the absorbed dose and the image contrast in mammography. With regard to dosimetric investigations, X-ray spectra can also be used to estimate the exposure, the air kerma and the absorbed energy distribution inside a breast tissue or a test phantom, overcoming the well-known problem of the energy dependence of the response of the dosimeters (solid state detectors and ionization chambers) commonly used for the measurements of the absorbed energy distribution. Dosimeter calibrations, which usually involve complicated and time-consuming procedures, are a critical issue for routine investigations. The spectrum emitted by a mammographic X-ray tube is typically obtained by analytical procedures based on semi-empirical models and Monte Carlo methods. In routine quality controls, insufficient information about some characteristic parameters of the X-ray tubes, such as the anode angle, the filters and the exact value of the applied tube voltage, can compromise the precision and the accuracy of such spectra. Of course, the measurement of X-ray spectra would be the best procedure for accurate quality controls in mammography. Currently, routine measurement of mammographic X-ray spectra is quite uncommon due to the complexity of the measurement procedure. The measurement of mammographic X-ray spectra is a difficult task because of the limitations of high energy resolution measurements at high photon counting rates, as well as geometrical restrictions, especially in a hospital environment. In this section, we report on direct measurements of clinical molybdenum X-ray spectra performed with the digital CdTe detection system.

Experimental set-up

Mo-target X-ray spectrum measurements were performed under clinical conditions (Istituto di Radiologia, Policlinico, Palermo). We used a Sylvia mammographic unit (Gilardoni) with a Mo anode tube (MCS, 50MOH), characterized by an additional filtration of 0.03 mm Mo and a 10° anode angle. The compression paddle was removed during the measurements. The detector was placed on the cassette holder at a focal spot-detector distance of 59.5 cm.
To reduce the photon counting rate on the detection system to an acceptable level, we used a pinhole collimator: a tungsten collimator disk, 1 mm thick with a 100 μm diameter circular hole, placed in front of the detector (over the beryllium window). Using this collimation setup, we measured X-ray spectra with photon counting rates up to 453 kcps. It is well known that the choice of a proper collimation system is a critical issue for accurate measurements of X-ray spectra. An excessive reduction of the aperture and thickness of the collimator can produce several distortions in the measured spectra (peaks and continuum events). These distortions are mainly due to (i) the penetration of photons through the collimator material, (ii) scattered photons from the collimator edges and (iii) characteristic X rays from the collimator material. The first effect can be reduced by choosing a proper collimator material and thickness. In the investigated energy range (1-30 keV), the penetration of photons through the 1 mm thick tungsten collimator is negligible, as demonstrated by the estimated values of the transmission equivalent aperture (TEA). Using the tabulated tungsten mass attenuation coefficient values (Boone & Chavez, 1996), we obtained a TEA equal to the collimator aperture area, showing that no photon penetration occurs. The other collimation distortions mainly depend on the alignment of the collimator with the X-ray beam and on the energy of the X-ray beam. Misalignment between the X-ray beam and the collimator can produce scattered photons and characteristic X rays from the collimator edge. Obviously, accurate alignment becomes more difficult as the thickness of the collimator increases and its aperture diameter decreases. The optimum collimation setup should be a trade-off between the reduction of the collimation distortions and the photon counting rate. To optimize the beam-detector alignment, the detector was mounted on an aluminum plate equipped with three micrometric screws. A preliminary focal spot-detector alignment was carried out with a laser pointer, taking into account the reference marks positioned on both sides of the tube head, while a more accurate alignment was obtained by changing the plate orientation, looking for the maximum photon counting rate and the absence of distortions in the measured spectra. We also performed the measurement of attenuation curves through the measured Mo spectra. This curve, which is usually measured to characterize the spectral properties of a beam, was compared with those measured with a standard mammographic ionization chamber (Magna 1cc together with Solidose 400, RTI Electronics) and a solid state dosimeter (R100, RTI Electronics). Exposure values from the spectral data were obtained through the estimation of the energy fluence and the air mass energy absorption coefficients, as described in our previous work (Abbene et al., 2007). To measure the attenuation curves, we used a standard aluminum filter set (type 1100, Al 99.0% purity, RTI Electronics). To minimize the effects of scattered radiation, we performed the measurements in a "good geometry" condition, as suggested by several authors (Johns & Cunningham, 1983); in particular, the experimental set-up for these measurements was characterized by a filter-detector distance of 42 cm.
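The fluence-to-exposure step mentioned above can be sketched as a sum over the spectrum bins. The sketch below is a simplified illustration of that calculation, not the procedure of Abbene et al. (2007): the spectrum, the air mass energy-absorption coefficients and the unit detector efficiency are all assumptions made for the example.

```python
import numpy as np

KEV_TO_J = 1.602e-16

def air_kerma_uGy(E_keV, counts, mu_en_rho, aperture_cm2):
    """Air kerma from a measured spectrum:
    K = sum over bins of fluence(E) * E * (mu_en/rho)(E),
    with fluence = counts / aperture area (detector efficiency taken as 1)."""
    fluence = counts / aperture_cm2                      # photons / cm^2 per bin
    energy_fluence = fluence * E_keV * KEV_TO_J          # J / cm^2 per bin
    kerma_Gy = np.sum(energy_fluence * mu_en_rho) * 1e3  # J/g -> J/kg (Gy)
    return kerma_Gy * 1e6                                # Gy -> uGy

# Hypothetical three-bin mammographic spectrum behind the 100 um pinhole
E = np.array([15.0, 17.5, 20.0])                 # keV
counts = np.array([4.0e5, 6.0e5, 2.0e5])         # photons per bin
mu_en_rho = np.array([1.6, 1.0, 0.6])            # approximate air values, cm^2/g
aperture = np.pi * (50e-4)**2                    # 100 um diameter hole, cm^2

print(f"air kerma ~ {air_kerma_uGy(E, counts, mu_en_rho, aperture):.0f} uGy")
```

In a real analysis the detector efficiency, escape effects and the energy binning of the tabulated (μ_en/ρ) values would all have to be folded in.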
Clinical X-ray spectra measurements

Figure 14(a) shows the Mo-target X-ray spectra measured under clinical conditions. The tube settings were tube voltages of 26, 28 and 30 kV and a tube current-time product of 20 mAs. The photon counting rates were 278, 362 and 453 kcps at 26, 28 and 30 kV, respectively. Figure 14(b) shows the attenuation curves (28 kV and 20 mAs) obtained from the spectra measured with the digital system, from simulated spectra (IPEM Report 78) and from the exposure values directly measured with the ionization chamber (Magna 1cc together with Solidose 400, RTI Electronics) and with the solid state dosimeter (R100, RTI Electronics). The good agreement among the curves obtained from the detector, the simulation and the ionization chamber is evident. The disagreement with the attenuation curve obtained from the solid state dosimeter points out the energy dependence of the dosimeter response. Since aluminum filters harden the X-ray beam and alter the energy spectrum, if the dosimeter does not have a flat response for different spectra, the attenuation curve will be in error. The correction of the energy dependence of the dosimeter response needs accurate calibrations, which involve complicated and time-consuming procedures, critical for routine investigations. These comparisons highlight two main aspects: (i) the ability of the digital system to perform accurate mammographic X-ray spectrum measurements without excessively time-consuming procedures and (ii) the possible use of this system both as a dosimeter and for the calibration of dosimeters.

Conclusion

High-Z and wide band gap compound semiconductors are very promising materials for the development of portable high resolution spectrometers in the hard X-ray energy band (>15 keV). In particular, CdTe and CdZnTe detectors, due to their high atomic number, high density and wide band gap, ensure high detection efficiency and good room temperature performance, and are very attractive for several X-ray applications. CdTe/CdZnTe detectors coupled to digital readout electronics show excellent performance and are very appealing for high-rate X-ray spectroscopy. The digital pulse processing (DPP) approach is a powerful tool for the compensation of the effects of incomplete charge collection (typical of CdTe and CdZnTe detectors) and of the effects due to high-rate conditions (baseline fluctuations, pile-up). The performance of the presented prototype, based on a CdTe detector and a custom DPP system, highlights the high potential of these systems, especially under critical conditions. The digital system, combining fast and slow shaping, automatic pole-zero adjustment, baseline restoration, pile-up rejection and some pulse shape analysis techniques (pulse shape discrimination, linear and nonlinear corrections), is able to perform a correct estimation of the true rate of the impinging photons, a precise pulse height measurement and the reduction of the spectral distortions due to pile-up and incomplete charge collection. High-rate measurements (up to 800 kcps) highlight the excellent performance of the digital system: (i) low photopeak centroid shift, (ii) small worsening of the energy resolution and (iii) the minimization of peak pile-up effects. Measurements of clinical X-ray spectra also show the high potential of these systems for both the calibration of dosimeters and advanced quality controls in mammography. The results open up the development of new detection systems for spectral X-ray imaging in mammography, based on CdTe/CdZnTe pixel detectors coupled with a DPP
system. Recently, single photon counting detectors have become very appealing for digital mammography, allowing the implementation of dual-energy techniques and improvements in image quality. In this context, CdTe/CdZnTe pixel detectors can ensure better performance (energy resolution <5% at 30 keV) than the current prototypes based on silicon detectors (energy resolution of about 15% at 30 keV). Future work will address the development of a real-time system, based on the digital method, using field programmable gate array (FPGA) technology.

Abbene, L. et al. (2007). X-ray spectroscopy and dosimetry with a portable CdTe device.

Fig. 1. (a) Linear attenuation coefficients for photoelectric absorption and Compton scattering of CdTe, Si and Ge. (b) Total and photoelectric efficiency of 1 mm thick CdTe detectors compared with Si and Ge.

Fig. 2. Planar configuration of a semiconductor detector (left). Electron-hole pairs, generated by radiation, are swept towards the appropriate electrode by the electric field. (right) The time dependence of the induced charge for three different interaction sites in the detector (positions 1, 2 and 3). The fast rising part is due to the electron component, while the slower component is due to the holes.

Fig. 4. (a) Pulse peaking time distribution of the CdTe detector (⁵⁷Co source). The peaking time is equal to 2.27 times the rise time of the pulses. (b) 3D plot of ⁵⁷Co spectra measured for different peaking time values (bi-parametric distribution).

Fig. 5. (a) Pulse peaking time distribution of the CdTe detector (¹⁰⁹Cd source) at a photon counting rate of 820 kcps; the selected peaking time regions (PTRs) are also visible. (b) ¹⁰⁹Cd spectra for the selected PTRs (820 kcps). It is evident that the peak pile-up events are characterized by longer peaking times than the correct events.

Fig. 6. (a) Photopeak centroid vs. the peaking time values for some photopeaks of the measured spectra (¹⁰⁹Cd, ¹⁵²Eu, ²⁴¹Am and ⁵⁷Co). (b) Slope m_E and (c) corrected centroid E_cor vs. the radiation energy.

Fig. 7. (a) An overview of the LAX facility. (b) The detector chamber located at 10.5 m from the X-ray tube; the detection system was mounted on an XYZ microtranslator system.

Fig. 8. Measured (a) ²⁴¹Am and (c) ⁵⁷Co spectra using PSD and no PSD techniques. After PSD, we obtained an energy resolution of 0.98% and 0.68% FWHM at 59.5 keV and 122.1 keV, respectively. Zoom of the (b) 59.5 and (d) 122.1 keV photopeaks, normalized to the photopeak centroid counts.

Fig. 9. Energy resolution (FWHM) using no PSD and PSD techniques. The solid line is the best-fit resolution function (equation (20)). The components of the energy resolution are also shown: the noise due to carrier generation (Fano noise), the electronic noise and the charge collection (trapping) noise.

Figure 10(a) shows the enhancements in the ⁵⁷Co spectra after the linear PSC and PSD techniques, without any photopeak area reduction. Figure 10(b) shows the enhancements in the ⁵⁷Co spectra after nonlinear PSC, applied to all peaking time values (with no reduction of the total counts). No spectral improvements were obtained in the ¹⁰⁹Cd spectrum with the PSC methods.
Fig. 10. (a) Measured ⁵⁷Co spectra with no correction and using both linear PSC and PSD. The linear PSC was applied to a selected PTR which ensured no photopeak area reduction. After linear PSC, we obtained an energy resolution of 0.73% FWHM at 122.1 keV. (b) Measured ⁵⁷Co spectra with no correction and using nonlinear PSC. The nonlinear PSC was applied to all peaking time values, obtaining no reduction of the total counts. After nonlinear PSC, we obtained an energy resolution of 0.87% FWHM at 122.1 keV.

Fig. 11. Performance of the detection system, irradiated with a ¹⁰⁹Cd source: throughput, 22.1 keV photopeak centroid and energy resolution (FWHM) at 22.1 keV at different input photon counting rates.

Fig. 12. Measured ¹⁰⁹Cd spectra (a) at 200 cps with no correction, (b) at 820 kcps with no correction and (c) at 820 kcps after PSD. Measured ²⁴¹Am spectra (d) at 200 cps with no correction, (e) at 255 kcps with no correction and (f) at 255 kcps after linear PSC.

Fig. 14. (a) Mo-target X-ray spectra (26, 28 and 30 kV; 20 mAs) measured with the digital system under clinical conditions; the counts were normalized to the total number of detected events. (b) Attenuation curves obtained from measured and simulated spectra and from direct exposure measurements (ionization chamber and solid state detector). The tube settings were 28 kV and 20 mAs.

Table 2. Spectroscopic results for the CdTe detector at low photon counting rate (200 cps) using pulse shape analysis techniques. Changes of the photopeak area and total counts are calculated with respect to the spectra with no correction. The photopeak area was calculated as twice the HSA. We used T_d,slow = 15 μs.
Signal Game Analysis on the Effectiveness of Coal Mine Safety Supervision Based on the Affective Events Theory

The main cause of coal mine safety accidents is the unsafe behavior of miners, who are affected by their emotional state. Therefore, the implementation of effective emotional supervision is important for achieving the sustainable development of coal mining enterprises in China. Assuming rational players, a signaling game between miners (emotion-driven and judgement-driven) and managers is established from the perspective of the Affective Events Theory in order to examine the impact of managers' emotions on coal miners' behavior; it analyzes the players' strategy selections as well as the factors influencing the equilibrium states. The results show that the safety risk deposits paid by managers and the costs of emotion-driven miners disguising any negative emotions affect the equilibrium. Under the separating equilibrium state, the emotional supervision system faces "the paradox of almost totally safe systems" and will be broken; the emotion-driven miners disguising any negative emotions will be permitted to work in the coal mine, creating a safety risk. Under the pooling equilibrium state, strong economic constraints, such as setting suitable safety risk deposits, may achieve effective emotional supervision of the miners, reducing the safety risk. The results are verified against a case study of the China Pingmei Shenma Group. Therefore, setting a suitable safety risk deposit to improve emotional supervision and creating punitive measures to prevent miners from disguising any negative emotions can reduce the number of coal mine safety accidents in China.

Introduction

Emotions can affect people in many aspects of their lives and define a person's happiness, stress, and longevity [1,2]. In high-risk industries, such as coal mining, emotions directly cause unsafe behavior [3] and are, thus, an "invisible killer" that is detrimental to the life safety of employees. Scholars have found that people's emotions are important contributing factors to the social production of safety [4][5][6] and are caused by various affective events in an employee's work and life [7]. Research in the field of organizational behavior has stated that affective events inevitably occur when employees interact with colleagues and leaders during work [8,9]. Weiss and Cropanzano [10] proposed the Affective Events Theory (AET) to explore the relationships between affective events and reactions and between attitudes and behaviors of organization members. Scholars have since conducted a series of verifications that developed AET [9,11,12]. AET provides natural support for the study of the induction and prevention of emotions that create an unsafe environment from a dynamic perspective but has not garnered enough attention in the field of safety management. Few scholars have tried to analyze the sources of unsafe emotions in the workplace from the perspective of AET [13]. Coal consumption accounts for more than 60% of total energy consumption in China, reflecting the importance of coal mine safety production in one of the pillar industries that play a leading role in China's economic development. Safety management in China differs significantly from that in developed countries. Developed countries have mature production management systems due to their high levels of economic development.
In addition, employees in developed countries have a higher average educational level, stronger self-safety awareness, and better control over their emotions than those in developing countries do. Under these conditions, enterprise safety management is based on the individual psychological motivations of employees and attaches importance to a human-centered, flexible enterprise safety performance management [14]. While China has experienced rapid economic development during the 30 years of reform and opening up, there still exist gaps in safety management between it and developed countries. In China, while production management and management systems have gradually improved, employees' educational level and safety awareness remain significantly different from those in developed countries. Additionally, Chinese employees generally express emotions more implicitly than do their counterparts in Western countries [15]. Under these conditions, enterprise safety management is based on the system, culture, and atmosphere and attaches importance to a system-centered, fixed management of individual safety behaviors [16,17]. Therefore, under a system-centered, fixed management, improving the safety management of Chinese coal mining enterprises is critically dependent upon effectively suppressing the inducing effect of internal factors, such as emotions, on unsafe behaviors and establishing the causes of, and countermeasures against, coal mine safety accidents. Given the current conditions of coal mine production in China, a "miner-manager" signal game model is constructed from the perspective of AET. By calculating and analyzing the conditions of the separating equilibrium and the pooling equilibrium, possible suggestions for improving miners' emotional supervision are provided.

Literature Review

The work conditions and environment in China's high-risk industries are determined by the nature of production. In particular, the coal mining industry operates in small spaces underground, under the harsh conditions of high temperatures and humidity, and is constantly changing because of the mining process. The requirements for safety are also extremely demanding. The government has always focused on the supervision of coal mine safety production. For example, in 2018, General Secretary Xi Jinping set eight important instructions on coal mine safety management [18], three of which addressed coal mine safety production. Studies have shown that anxiety, anger, and even excessive happiness can lead to the unsafe behavior of employees [19][20][21] and may lead to safety accidents, but a clear, common definition of such emotions has not been found. Based on the existing research, the emotions that may lead employees to perform unsafe behavior are hereafter defined as "unsafe emotions." The research that addressed preventing unsafe emotions generally followed the logic of "emotional recognition/measurement-unsafe behavior prevention" [22][23][24]. Although reasonable, this logic actually amounts to the prevention of unsafe behavior, not unsafe emotions; in fact, research on preventing unsafe emotions lags behind that on preventing unsafe behavior. Existing research is based on the assumption that emotional outbursts, or sudden changes in mood, can, to some extent, be identified by instruments or visual observations. However, latent emotions are difficult to observe and monitor.
As it has been noted that, culturally, the emotional expressions of people in China are generally implicit, the abovementioned research limitations are relevant to this study. Thus, it is crucial to design a new theoretical path. Scholars have studied the influence mechanisms of emotions on individual behavior from different perspectives. These theories can be divided into two categories, depending on how emotions affect behavior. One category is based on emotional contagion, which is the idea that emotions are transmitted from one person to another and that people subconsciously copy other people's facial expressions, sounds, postures, and movements simultaneously. The duration of the imitation of these emotional characteristics is very short and almost synchronous [25,26]. Such research is represented by the emotional contagion theory (ECT) [27]. The other category is based on individual cognitive appraisal, which holds that the influence of emotion on individual behavior is not through simple emotional contagion but is the result of individual processing and judgement of the emotional information from others. In other words, when in the presence of others' emotions, individuals may show different emotional or behavioral characteristics [28][29][30]. Such a view is known as the Cognitive Appraisal Theory of Emotion (CATE) [31]. However, further studies found that the impact of others' emotions on an individual's behavior may be driven by both emotional contagion and cognitive appraisal [10], which may be integrated. Thus, AET was proposed. Compared to ECT and CATE, AET has advantages in scope and explanatory power. Therefore, this paper conducts research based on AET. AET is gradually emerging in the field of organizational behavior and naturally provides new methods for solving the abovementioned dilemmas. AET considers the dynamic nature of emotions and holds that the characteristics of the work environment and work events, for example, affect the emotional response of employees and, thus, their behavior (see Figure 1). For example, the simple and tedious nature of work tasks can easily provoke employees' emotional reaction of boredom [32]. Likewise, challenging events can stimulate an employee's emotional response of attentiveness, and interrupting events can trigger an emotional reaction of indignation [33]. A leader's behavior and leadership style can also result in different emotional reactions from their employees [34]. In terms of the impact of emotional responses on behavior, AET proposes two types of behaviors, affect-driven and judgement-driven, although the theory speaks of affective responses rather than emotional responses (affective responses in actual research are often measured by emotions) [35]. AET is based on the "symmetry assumption" [36]; that is, the leader's negative emotion leads to a negative emotional event. AET is focused on explaining the negative influence of the leader's negative emotion on employee performance and has obvious limitations in explaining the positive effects of leaders' negative emotions [37]. On the basis of AET, Van Kleef introduced the idea of information processing into the study of leaders' emotions and developed the Emotion As Social Information (EASI) model [38], which indicates that observers respond to emotional expressions through one of two paths, affective reactions or inferential processes, according to the judgements they make from those expressions.
In the context of coal miners, employees who choose the affective reactions path, or emotional contagion, are infected with the leader's negative emotions, resulting in reduced performance; employees who choose the inferential processes path, or cognitive appraisal, will extract information about the completion of their task from the leader's negative emotions and respond by improving their performance (see Figure 2). From the above discussion, we can find that although existing research has greatly promoted the development of AET by providing a theoretical explanation of the impact of affective events on employee behavior, most of it is based on the perspective of employees. There are few studies on how managers conduct effective emotional supervision for different types of employees. As such, the existing research does not address the following two questions well:

Question 1: with the continuous improvement of the hardware conditions of coal mine safety production, such as supporting technology and equipment, the safety status of coal mine production has been significantly improved. However, safety accidents caused by the unsafe behavior of miners still prevail, and emotion is the direct cause of this unsafe behavior. How can the causes of, and countermeasures against, repeatedly prohibited yet recurring coal mine safety accidents be found at the emotional level?

Question 2: does the safety risk mortgage incentive mechanism implemented by coal mines affect the emotional supervision of miners? How should the amount of the safety risk deposit be set to achieve effective supervision of miners' emotions?

In order to answer the above two questions, this paper introduces the AET from the field of organizational behavior. It combines the actual reward and punishment system of coal mines to construct a "miner-manager" signal game model from the perspective of affective events and ultimately provides possible advice on the efficiency of miners' emotional supervision.

Model Assumptions

(1) Due to economic reasons, the expected return of miners' downhole operations is positive. Only under this assumption do the miners have the motivation to work underground, and the coal mining enterprises operate.

(2) Referring to Hsieh's method, we divide miners into two categories: miners who are easily driven by emotion (emotion-driven miners) and miners who are driven by cognitive appraisal (judgement-driven miners) [39].

(3) If emotion-driven miners are affected by the negative emotions of the leader and, as a result, are prohibited from downhole work, the miners' psychological damage is denoted as X. As they have a negative outlook, they will consider X to be bigger than the fixed wage W: X > W.

(4) According to the EASI model, miners who choose emotional contagion experience a reduction in safety performance, while miners who choose cognitive appraisal improve their safety performance. It is assumed that a miner who chooses the latter improves their safety performance to such an extent that they do not cause a safety accident.

Game Subject

Before the miners go downhole to work, they have a prework meeting during which tasks are assigned and the managers assess their mental and physical states. The miners who fail this assessment are asked to take steps to improve. However, if the miners continue to not meet the requirements for downhole work, they are prohibited from doing so by the managers.
In summary, it is assumed that Player 1 of the game represents the miners who are preparing for downhole work, divided into emotion-driven miners k_E and judgement-driven miners k_D. The miners' classification into emotion-driven and judgement-driven miners is determined by questionnaires in conjunction with their daily performance. Player 2 represents the managers. Both players behave rationally; they have the ability to make optimal response strategies under given conditions to maximize their own interests.

Game Signal

Before the miners are lowered into the mine, the managers conduct a safety status judgement on each miner. To be able to work downhole, the miners may disguise their emotional state. Therefore, the emotional state m exhibited by the miner is the signal in the signal game model, and it forms the signal space M = {m_N, m_P} (m_N represents a stable emotional state, and m_P represents a negative emotional state). Thus, Player 1 is the signal sender, and Player 2 is the signal receiver.

Parameter Assumptions

W is the fixed salary of the miner; A is the amount of fines the miner pays after a safety accident; I is the safety performance salary of the miner; S is the safety risk deposit paid by the manager; G is the fixed amount of the manager's salary; J is the safety performance salary of the manager; ΔP is the change in the safety performance of the emotion-driven miners after being affected by the negative emotions of the leader; X is the additional damage to the emotion-driven miners' state of mind from being affected by the leader's negative emotions and, consequently, being forbidden to work downhole; C is the cost of emotional supervision (such as time, money, and energy); M is the cost the emotion-driven miners incur for disguising their emotional state; M′ is the cost the judgement-driven miners incur for disguising their emotional state; α is the probability of an emotion-driven miner causing a safety accident; θ is the safety performance transfer coefficient (0 ≤ θ ≤ 1); and φ is the accident responsibility transfer coefficient (φ > 0).

Game Model

The chronological order of the game is as follows:

(1) Nature (N) first selects the type k ∈ K of Player 1, with type space K = {k_E, k_D} and prior probabilities P(k_D) = p and P(k_E) = 1 - p.

(2) After observing the natural choice, Player 1 knows its type and selects a signal m ∈ {m_N, m_P} from the feasible signal set to send.

(3) After observing the signal from Player 1, Player 2 corrects the prior probability P(k) using Bayes' rule to obtain the posterior probability P(k | m) of the exact type of Player 1, and selects an action from the action set a ∈ {a_L, a_NL} (a_L: the managers allow the miners to work downhole; a_NL: the managers do not allow the miners to work downhole).

(4) The payoffs for Player 1 and Player 2 can be expressed as u_1(k, m, a) and u_2(k, m, a), respectively.

Based on a field investigation of a coal mine in Pingdingshan City, China, the salary standards of miners and managers were found to be quite different. The actual salary calculation methods for miners and managers can be expressed as follows:

Miners' salary = basic salary + safety performance salary
Managers' salary = basic salary + safety performance salary + safety risk deposit refund

The signal game model of the miners and managers from the perspective of AET is shown in Figure 3.
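The posterior-probability correction in step (3) is a direct application of Bayes' rule. The sketch below illustrates it for the "stable emotion" signal, with ρ denoting the probability that an emotion-driven miner disguises negative emotions (introduced in the next section); the function name and numeric values are ours.

```python
def posterior_judgement_driven(p, rho):
    """P(k_D | m_N): probability that a miner showing a stable emotional state
    is judgement-driven, given the prior p = P(k_D) and the disguise
    probability rho = P(m_N | k_E), with P(m_N | k_D) = 1.

    Bayes' rule: P(k_D | m_N) = p / (p + rho * (1 - p)).
    """
    return p / (p + rho * (1.0 - p))

# rho = 0: separating (a stable signal fully reveals a judgement-driven miner)
# rho = 1: pooling (the signal is uninformative; the posterior equals the prior)
for rho in (0.0, 0.5, 1.0):
    print(f"rho = {rho}: P(k_D | m_N) = {posterior_judgement_driven(0.7, rho):.3f}")
```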
Game Equilibrium Analysis of the Miners and Managers from the Perspective of Emotional Events

The perfect Bayesian equilibrium of the signal game model is a combination of a strategy profile (m*(k), a*(m)) and a posterior probability P(k | m), which needs to satisfy the following two conditions [40]:

(1) a*(m) ∈ argmax_a Σ_k P(k | m) u_2(k, m, a), that is, a*(m) is the optimal response of Player 2 to the signal sent by Player 1, given the posterior probability P(k | m);

(2) m*(k) ∈ argmax_m u_1(k, m, a*(m)), that is, m*(k) is the optimal strategic decision of Player 1, given that the optimal response of Player 2 has been predicted.

If the managers display negative emotions during the prework meeting, the rational judgement-driven miners choose to display a stable emotional state, while the emotion-driven miners become infected and would display a negative emotional state. However, in reality, some emotion-driven miners realize that they may lose the opportunity to work downhole once their negative emotions are discovered by the managers, resulting in certain economic losses. To improve their own interests, they may disguise their real emotional state in order to be able to work. Suppose that, in the prework meeting, the emotion-driven miners hide their negative emotions and signal fake information with probability ρ, 0 ≤ ρ ≤ 1; that is, P(m_N | k_E) = ρ, P(m_P | k_D) = 0, P(m_P | k_E) = 1 - ρ, and P(m_N | k_D) = 1. According to Bayes' rule, the posterior probabilities of Player 2 can be calculated as follows:

P(k_D | m_N) = p / (p + ρ(1 - p)),  P(k_E | m_N) = ρ(1 - p) / (p + ρ(1 - p)),
P(k_E | m_P) = 1,  P(k_D | m_P) = 0.

The perfect Bayesian equilibrium of the signal game can be divided into separating equilibrium, pooling equilibrium, and quasi-separating equilibrium. According to the above, the probability ρ has a significant influence on the corrected posterior probability P(k | m) of Player 2, and the magnitude of ρ has a direct influence on the equilibrium state of the signal game. When ρ = 0, the signal game is in the separating equilibrium state; when ρ = 1, the signal game is in the pooling equilibrium state; and when 0 < ρ < 1, the signal game is in the quasi-separating equilibrium state. Since there are only four pure strategies in the game, the quasi-separating equilibrium can be ignored in the perfect Bayesian equilibrium; only the separating equilibrium and the pooling equilibrium states will be analyzed.

Proposition 1. The signal game model has a unique separating perfect Bayesian equilibrium. When s < 12[(θ - α)J + C - (1 - α)θΔP - φA]/α is met, the emotion-driven miners choose to express negative emotions, the judgement-driven miners choose to express stable emotions, and the managers agree to let the emotion-driven miners go downhole. At this point, the regulatory system is caught in the "paradox of almost totally safe systems" [41] and is, thus, paralyzed.

Proposition 1 explains the "paradox of almost totally safe systems" in the field of safety science from an economic perspective: even highly reliable organizations cannot be immune to disasters. Coal mine safety and a sound production system may become vulnerable when miners are in an unstable mood. The reason is that the miners have the qualifications specified by external regulations (such as safety training) to work downhole, but their actual behaviors are largely driven by internal emotions. When the safety risk deposit paid by the manager is small enough, the emotional supervision of the miners cannot be effectively improved through economic means.
The managers are likely to be either indifferent to emotional supervision or inclined to save on emotional supervision costs and allow the emotion-driven miners to go downhole, the latter of which poses a safety hazard to coal mine production. Proposition 1 also reflects that coal mining enterprises should strengthen the intangible, emotional safety management and improve the tangible, hardware conditions for safe production. Such emotional safety management methods include training programs that improve the managers' awareness and ability to provide emotional supervision and the miners' ability to recognize and handle emotions.

Proof of Proposition 1. Player 1's separating strategies include (k E , k D ) ⟶ (m P , m N ) and (k E , k D ) ⟶ (m N , m P ). The latter separating strategy (k E , k D ) ⟶ (m N , m P ) has no analytical significance in reality, because emotion-driven miners may not be allowed to work downhole if they do not disguise their emotional state, and at the same time it is meaningless and counterintuitive for judgement-driven miners to disguise their stable emotional state as a negative emotional state. Therefore, this paper only discusses the first separating strategy, that is, the emotion-driven miners choosing to hide their negative emotions with probability ρ = 0, leading to the separating equilibrium state in the signal game. The equilibrium strategy of the signal sender is (k E , k D ) ⟶ (m P , m N ); that is, Player 1 of the k E type sends signal m P , and Player 1 of the k D type sends signal m N . The posterior probability of Player 2 is therefore P(k E | m P ) = P(k D | m N ) = 1, and P(k D | m P ) = P(k E | m N ) = 0.

(1) After Player 2 receives the signal "negative emotion" from Player 1, the expected return E 1 of selecting a L and the expected return E 2 of selecting a NL are computed in equation (2). According to equation (2), when s < (12[(θ − α)J + C − (1 − α)θΔP − φA])/α, E 1 > E 2 , and the expected payoffs for Player 2 allowing the miner to work downhole are higher than the payoffs for not allowing them to work downhole. Therefore, the optimal response strategy of Player 2 is to allow the miner to work downhole, a * (m P ) = a L . Conversely, when s > (12[(θ − α)J + C − (1 − α)θΔP − φA])/α, the optimal response strategy of Player 2 is to prevent the miner from working downhole, a * (m P ) = a NL . After Player 2 receives the signal "stable emotion" from Player 1, the expected payoffs E 1 ′ of selecting a L and the expected payoffs E 2 ′ of selecting a NL are computed in equation (3). According to equation (3), E 1 ′ > E 2 ′ , which means that the expected payoffs for Player 2 allowing the miner to work downhole are higher than the payoffs for not allowing them to work downhole. The optimal strategy is thus a * (m N ) = a L .

(2) When the action selection of the signal receiver is (m P , m N ) ⟶ (a L , a L ), i.e., when s < (12[(θ − α)J + C − (1 − α)θΔP − φA])/α is satisfied, the payoffs for Player 1 on the equilibrium path are given in equations (4) and (5). According to equations (4) and (5), u 1 (k D , m N , a * (m N )) > u 1 (k D , m P , a * (m P )) and u 1 (k E , m P , a * (m P )) > u 1 (k E , m N , a * (m N )); the payoffs for Player 1 on the equilibrium path are higher than those on the nonequilibrium path. In this case, there is no incentive to deviate from equilibrium. Therefore, when the safety risk deposit paid by the manager is small enough, at s < (12[(θ − α)J + C − (1 − α)θΔP − φA])/α, {(m P , m N ), (a L , a L ), P(k E | m P ) = 1, P(k E | m N ) = 0} is the separating perfect Bayesian equilibrium of the game.
When s > (12[(θ − α)J + C − (1 − α)θΔP − φA])/α, the action selection of the given signal receiver is (m P , m N ) ⟶ (a NL , a L ), and the payoffs for Player 1 on the equilibrium path are given in equation (6). According to equation (6), u 1 (k E , m P , a * (m P )) < 0. In comparison, the expected payoffs of working downhole are positive, and, in order to maximize their interests, Player 1 will disguise their emotional state and display a stable emotional state. Proposition 1 is validated.

The actual meaning of Proposition 2 is that when the cost incurred by the emotion-driven miners to disguise their emotional state is small enough, they choose to disguise their negative emotions in exchange for the opportunity to work downhole. In this situation, all the miners show stable emotions in the prework meeting. It is thus necessary to strengthen the managers' emotional supervision and awareness by setting a suitable safety risk deposit, thereby achieving effective supervision of the miners who choose the emotional contagion paths and, thus, have emotional safety risks.

Proof of Proposition 2. The strategies of Player 1 in the pooling equilibrium include (k E , k D ) ⟶ (m N , m N ) and (k E , k D ) ⟶ (m P , m P ). For the latter strategy (k E , k D ) ⟶ (m P , m P ), according to the rational player hypothesis, judgement-driven miners will not disguise their stable emotional state as a negative emotional state, as it has no practical significance. Therefore, only the former kind of pooling equilibrium state is considered. When the probability that the emotion-driven miners choose to disguise their negative emotions is 1 (ρ = 1), the signal game reaches the pooling equilibrium state. The equilibrium strategy of the signal sender is (k E , k D ) ⟶ (m N , m N ), meaning that Player 1 of the type k E sends signal m N , and Player 1 of the type k D also sends signal m N . The posterior probability of Player 2 is P(k D | m N ) = p, P(k E | m N ) = 1 − p, P(k E | m P ) = 1, and P(k D | m P ) = 0.

(1) After Player 2 receives the equilibrium signal "stable emotion" from Player 1, the expected payoffs E 1 ″ of selecting a L and the expected payoffs E 2 ″ of selecting a NL are given in equation (7). According to equation (7), if E 1 ″ > E 2 ″ , the condition in equation (8) must be satisfied. The expected payoffs for Player 2 choosing a L will then be higher than the expected payoffs for choosing a NL , so the optimal response strategy of Player 2 is a * (m N ) = a L . The above two conditions indicate that if the safety risk deposit is small enough when both types of miners send stable emotional signals, the managers will choose to allow both types of miners to go downhole, and the emotional supervision will have failed. After Player 2 receives the nonequilibrium signal "negative emotion" from Player 1, the expected payoffs E 1 ‴ of selecting a L and the expected payoffs E 2 ‴ of selecting a NL are given in equation (9). According to equation (9), if E 1 ‴ > E 2 ‴ , it is necessary to satisfy s < 12[((C + (θ − α)J − (1 − α)θΔP)/α) − φA], and the optimal response strategy of the managers is a L , that is, a * (m P ) = a L ; otherwise, if s > 12[((C + (θ − α)J − (1 − α)θΔP)/α) − φA], the optimal response strategy of the managers is a NL , that is, a * (m P ) = a NL .
(2) When the optimal selection strategy of Player 2 is (m N , m P ) ⟶ (a L , a L ), it is necessary to satisfy the condition that s be smaller than the minimum of the two thresholds derived above. The payoffs for Player 1 on the nonequilibrium path are given in equations (10) and (11). It is assumed in the model hypothesis that W − X < 0, and the expected payoffs for the miners working downhole are positive; thus, it is only necessary to ensure that u 1 (k E , m N , a * (m N )) > 0, and the payoffs for Player 1 on the equilibrium path are higher than those on the nonequilibrium path.

Real Case Demonstration. The selected case study belongs to the China Pingmei Shenma Group. The coal mine was officially opened in February 1964 and generated an annual output of 2.9 million tons of coal, bringing more than 20 million yuan in annual tax revenue to the local government. The geological conditions of the coal mine are complex; a typical underground coal mine requires the coal seam to be buried relatively deep. The values of the parameters are mainly obtained from the official announcements of the coal mining enterprise and further investigations. Due to the fluctuation of some values within a certain range, the mean values are adopted for simplicity. Table 1 shows the value assignment of the parameters.

Separating Equilibrium. Once the values of the above parameters are substituted into equations (2)-(6), it can be concluded that when the safety risk deposit paid by the managers is S < 2,400, the separating equilibrium occurs. In this situation, the managers fail to realize effective emotional supervision, resulting in the failure of the safety risk deposit system.

Pooling Equilibrium. Assuming 1 − p = 0.5, i.e., the prior probability that a miner is emotion-driven is 0.5, and substituting the above parameter values into equations (7)-(11), it can be concluded that when 7,200 < S < 36,000 and M < 4,000, the system reaches a pooling equilibrium; the safety risk deposit system works, and the managers are motivated to perform effective emotional supervision of the miners' emotional states. However, when 2,400 < S < 7,200 or S > 36,000, the separating equilibrium and pooling equilibrium coexist, and the system falls into chaos. According to the investigated data, the average safety risk deposit of this coal mine is 12,000, which is within a rational range.

Conclusions and Implications

A signal game model on emotional supervision is constructed from the AET perspective, the separating equilibrium and pooling equilibrium states are analyzed, and the perfect Bayesian equilibrium paths of the two equilibrium states are obtained. The research not only proves that the repeated emergence of coal mine safety accidents is the consequence of "the paradox of almost totally safe systems" but also explains the reason for this paradox from an economic perspective. When the amount of the safety risk deposit paid by the manager is small enough, the economic constraint may not motivate the managers to improve their awareness and ability to provide effective emotional supervision. In this situation, the managers only consider the miners' external, formalistic safety conditions (such as qualifications and health condition), which are easy to obtain, and ignore their internal, emotional safety states. Thus, the miners with hidden emotional safety risks may pose a risk to the mining enterprise's operations.
Consequently, when the cost of disguising their emotional state is small enough, the emotion-driven miners may choose to disguise negative emotions in order to be able to work downhole. Therefore, effective economic constraints should be formed by setting a suitable safety risk deposit to strengthen emotional supervision and creating punitive measures to prevent miners from disguising their emotions.

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
7,347.2
2020-06-26T00:00:00.000
[ "Economics" ]
Band gap maps beyond the delocalization limit: correlation between optical band gaps and plasmon energies at the nanoscale

Recent progress in nanoscale semiconductor technology has heightened the need for measurements of band gaps with high spatial resolution. Band gap mapping can be performed through a combination of probe-corrected scanning transmission electron microscopy (STEM) and monochromated electron energy-loss spectroscopy (EELS), but such maps are rare owing to the complexity of the experiments and the data analysis. Furthermore, although this method is far superior in terms of spatial resolution to any other technique, it is still fundamentally resolution-limited due to inelastic delocalization of the EELS signal. In this work we have established a quantitative correlation between optical band gaps and plasmon energies using the Zn1−xCdxO/ZnO system as an example, thereby side-stepping the fundamental resolution limits of band gap measurements, and providing a simple and convenient approach to achieve band gap maps with unprecedented spatial resolution. In this work, using monochromated EELS in probe-corrected STEM, we investigate the relationship between band gaps and plasmon energies, and establish a robust quantitative correlation using the ZnO-CdO alloys as an example. This provides a fast and effortless pathway to carry out band gap mapping that can be performed without the need for special hardware such as monochromators or probe Cs correctors, and with much less complexity in the experimental acquisition and data analysis. We furthermore show that using this approach we achieve a higher spatial resolution than the conventional method, without compromising the accuracy.

Background

In the periodic table, Cd is located directly below Zn and can therefore be considered iso-electronic. However, Cd has a significantly larger ionic radius than Zn, and when Cd 2+ ions (radius 0.097 nm) replace Zn 2+ (radius 0.074 nm) in the wurtzite ZnO matrix, the unit cell volume is increased while the band gap is reduced 1,4,24,25 . Meanwhile, higher Cd content is also associated with a drop in plasmon energy, since the valence electron density decreases as the unit cell volume expands. Plasmons are collective excitations of the valence electrons, triggered by their response to the repulsive field carried by the incident electron. In a free electron model, the plasmon energy is given by 25

E p,F = ħω p = ħ√(Ne 2 /(ε 0 m 0 V(x))), (1)

where E p,F is the free electron plasmon energy in the EELS spectrum, ω p is the plasmon frequency, ħ is the reduced Planck constant, N is the number of valence electrons per unit cell, e is the elementary charge, V(x) is the Cd-concentration dependent volume of the unit cell, m 0 is the electron mass, and ε 0 is the permittivity of free space. This free electron model assumes that the valence electrons behave as simple harmonic oscillators, which is an obvious simplification when considering real materials. Nevertheless, many simple metals (e.g. Be, B, Na, Al) and semiconductors (e.g. Si, Ge, GaAs) have sharp plasmon peaks near the value predicted by this model 26 , and it has also been shown that Equation (1) is quite successful in estimating the plasmon energy of more complex materials 21 . For wide band gap semiconductors, the free electron model can be modified by introducing a bound oscillation with frequency ω b = E g /ħ, so that a semi-free electron plasmon energy can be obtained 25 ,

E p = √(E p,F 2 + E g 2 ), (2)

thereby improving the correspondence with the experimental values somewhat.
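As a quick numerical illustration of Equations (1) and (2) above, the short Python sketch below (not part of the paper; the input values are simply the ZnO numbers quoted in the next section, used here only as a consistency check) evaluates the free and semi-free electron plasmon energies.

```python
import math

HBAR = 1.054571817e-34     # J*s
E_CHARGE = 1.602176634e-19 # C
M0 = 9.1093837015e-31      # kg
EPS0 = 8.8541878128e-12    # F/m

def ep_free(n_valence: float, volume_m3: float) -> float:
    """Free-electron plasmon energy in eV, Eq. (1): E = hbar*sqrt(N e^2 / (eps0 m0 V))."""
    omega_p = math.sqrt(n_valence * E_CHARGE**2 / (EPS0 * M0 * volume_m3))
    return HBAR * omega_p / E_CHARGE

def ep_semifree(ep_f_ev: float, eg_ev: float) -> float:
    """Semi-free electron plasmon energy in eV, Eq. (2): E = sqrt(E_pF^2 + E_g^2)."""
    return math.hypot(ep_f_ev, eg_ev)

if __name__ == "__main__":
    # Values quoted later in the text for pure ZnO (illustrative consistency check):
    ep_f = 18.64   # eV, free-electron prediction
    eg = 3.22      # eV, measured optical band gap
    print(f"semi-free model: {ep_semifree(ep_f, eg):.2f} eV")   # ~18.92 eV
```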
As both the band gap and plasmon energy depend on the Cd concentration (x), these models predict a relationship between the band gap and the plasmon energy: via the volume alone in the case of the free electron model, while in the semi-free model the band gap energy also appears directly. This indicates that a quantitative connection between the plasmon and band gap energies may be formulated. This theory is elaborated further in the Supplementary Information section 5, and forms the basic justification for the approach taken in the present work. Results Shifts of E g and E p of ZnO after incorporation of Cd. Figure 1 illustrates experimental spectra obtained from the pure ZnO buffer layer and the alloyed layer with a large amount of Cd. Both the plasmon peaks and the band gap energy loss onsets are easily identified. To extract the band gap value, a power-law model was used to reliably subtract the background signal at energies E < E g , and a parabola was fitted to the remaining spectrum, further details about this procedure can be found elsewhere 18,19 . In pure ZnO, the average band gap is found to be 3.22 ± 0.02 eV using the fitting range 2.4-2.9 eV for background subtraction ( Supplementary Fig. S8a), in agreement with previous investigations 19 . In order to measure the plasmon energy, we first employed the Fourier-log method, as implemented in DigitalMicrograph, to remove plural scattering. Thereafter the EELS spectra within a selected narrow energy range were fitted with a Gaussian function to determine the plasmon peak positions. The average plasmon energy of ZnO is 18.88 ± 0.02 eV ( Supplementary Fig. S1), consistent with previous reports 27, 28 . In comparison, the free electron model (Equation (1)) predicts a theoretical value of 18.64 eV, while the semi-free model (Equation (2)) using the observed band gap as an input leads to a higher value of 18.92 eV. Thus, the two models, and the semi-free model in particular, are in good agreement with the experimental observation. For the Cd-containing layer, Fig. 1 shows that both E g and E p are shifted to lower energies, as expected from the unit cell volume expansion caused by the incorporation of Cd atoms into the structure. Simultaneous E g and E p maps. Figure 2 shows band gap and plasmon energy maps of Zn 1−x Cd x O/ZnO taken from two different areas. Red and yellow colors indicate a high band gap or plasmon energy, while blue and green imply a lower value. As demonstrated by energy-dispersive X-ray spectroscopy (EDX) maps obtained for Cd and Zn (see Supplementary Figs S2 and S3), the film exhibits the expected two-layer structure, with an inner buffer layer consisting of pure ZnO, and an outer layer of Cd-containing ZnO 19 . The α-Al 2 O 3 substrate is situated at the bottom. The thickness of each layer is about 120 nm. The transition from ZnO to the Zn 1−x Cd x O layer is clearly visible as a rather abrupt drop in E g or E p values. However, the interface between ZnO and Zn 1−x Cd x O is rough. Furthermore, we observed that there are significant spatial variations of band gaps and plasmon energies internally in the Cd-containing layers. Intriguingly, their variations match very well, and are supported by the elemental EDX maps in Supplementary Information. These EDX measurements confirm that the higher Cd content is correlated with decreasing band gap and plasmon energy, and that the maximum Cd content (x) in our specimens is ≈0.51. 
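The peak- and onset-fitting steps described above can be sketched as follows. This is an illustrative Python outline only (not the authors' analysis code); the power-law background and parabolic onset for the band gap, and the Gaussian fit for the plasmon peak, follow the text, but the specific window values and starting guesses used here are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(e, amp, e0, sigma):
    """Single Gaussian used to locate the plasmon peak position e0."""
    return amp * np.exp(-0.5 * ((e - e0) / sigma) ** 2)

def plasmon_energy(energy_ev, counts, window=(16.0, 22.0)):
    """Fit a Gaussian to the low-loss spectrum within `window` and return the peak energy."""
    m = (energy_ev >= window[0]) & (energy_ev <= window[1])
    p0 = [counts[m].max(), energy_ev[m][np.argmax(counts[m])], 1.0]
    popt, _ = curve_fit(gaussian, energy_ev[m], counts[m], p0=p0)
    return popt[1]

def band_gap_onset(energy_ev, counts, bg_window=(2.4, 2.9), fit_window=(3.0, 3.6)):
    """Power-law background below the gap, then a parabola counts ~ a*(E - Eg)^2 above it."""
    b = (energy_ev >= bg_window[0]) & (energy_ev <= bg_window[1])
    # Background counts ~ A * E^r (r negative), fitted on log-log axes.
    r, logA = np.polyfit(np.log(energy_ev[b]), np.log(counts[b]), 1)
    signal = counts - np.exp(logA) * energy_ev ** r
    f = (energy_ev >= fit_window[0]) & (energy_ev <= fit_window[1])
    parab = lambda e, a, eg: a * np.clip(e - eg, 0, None) ** 2
    popt, _ = curve_fit(parab, energy_ev[f], signal[f], p0=[1.0, 3.2])
    return popt[1]
```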
Previous work 2 shows that Zn 1−x Cd x O stabilizes single-phase wurtzite with Cd content x up to 0.67; this is also confirmed by our X-ray diffraction investigations. In comparison, the variations observed in Fig. 2 within the pure ZnO layer are much smaller, suggesting a high spectral precision. Note that the optical band gaps and plasmon energies were simultaneously acquired pixel by pixel. Therefore these maps with point-to-point correspondence are well suited to investigate their relationship and establish a quantitative correlation. Importantly, the plasmon energy map displays a significantly better spatial resolution than the band gap map, and there are some small energy variations that are undetectable in the band gap map. This is expected from the difference in EELS spatial resolving ability, which depends on the energy loss as well as on the high tension of the microscope. As an example, in the current experimental setup with 60 keV incident electrons, the EELS spatial resolution as expressed by the inelastic delocalization length (L 50 ) is estimated to be approximately 5.41 nm for the band gap transitions in ZnO, and 1.46 nm in the case of its plasmon excitations 23,25,29 . See the Supplementary Information for further details.

Quantitative correlation between E g and E p . To establish a quantitative relationship between the plasmon energy and band gap, several experiments such as those in Fig. 2 were performed on different regions of the sample. In Fig. 3 we have plotted the observed band gap and plasmon energy in 13876 pixels (spectra) originating from eleven different spectrum images, sufficient for establishing the quantitative correlation. Owing to the poorer spatial resolution of the band gap map, several different plasmon energies may be observed for each particular value of the band gap. This is indicated by the error bars, which show the spread (one standard deviation) of plasmon energies around the average value. As shown in the Supplementary Information, a relationship between the plasmon energy and band gap can be derived. In the free electron model this relationship takes the form of equation (3), while for a semiconductor or insulator the semi-free model should be used, resulting in equation (4). Here, a, b, c, d, e, f, g and N are all constants that can be found in the literature; see Table 1 for an overview. The relationships between E g and E p predicted by the free and semi-free electron models using the literature inputs from Table 1 are plotted in Fig. 3. It can be seen that the two models are somewhat successful in estimating the plasmon energy in the low gap and high gap ranges, respectively, but neither of the models offers satisfactory results over the entire range. We now follow two different routes to establish the quantitative relationship between the band gap and plasmon energy. First, a polynomial function relating E g and E p is fitted on the basis of the experimental data, as shown in Supplementary Fig. S5. Although this results in a rather exact fit, it does not directly relate to any of the physical parameters that serve as determining factors in the variation of band gap and plasmon energy. Instead, we take Equation (4) above as a starting point, and use the constants as fitting parameters, arriving at a correlation described by the black line in Fig. 3 and the parameters in Table 1. An excellent match with the observed correlation can be achieved, while at the same time retaining physically realistic and meaningful fitting parameters.
It is particularly encouraging that reasonable values of the unit cell volume and the band gap are kept. It needs to be pointed out that the Cd compositional range in our work differs significantly from the literature 3 , resulting in a significant discrepancy between the fitted and the literature values of f and g.

Reconstructed E g map with improved spatial resolution. The proposed relationship between the plasmon energy and band gap can now be employed to reconstruct band gap maps from plasmon energy maps. It was not possible to obtain an analytical solution of Equation (4) for E g in terms of E p . Instead, the equation was solved numerically by slowly increasing E g until a value equal to or larger than E p was found, for each pixel of a plasmon energy map. See the Supplementary Information for the attached Python code. This was applied to the two data sets shown in Fig. 2. The resulting reconstructed band gap maps are shown in Figs 4a and 5a. For convenience, the color scale here remains the same as the directly measured E g maps in Fig. 2a,c. As expected, the directly measured and reconstructed maps show a strong similarity, but the reconstructed maps clearly resolve several additional variations not observable in the directly measured E g maps. Line profiles from the reconstructed maps are shown in Figs 4b,c and 5b,c together with the corresponding line profiles from the directly measured maps, as indicated by red and black arrows. These line profiles confirm that a greater resolution is achieved in the reconstructed maps. Furthermore, as shown in Supplementary Fig. S8a,b, the reconstructed band gaps in the chemically homogeneous ZnO layer are very close to the directly measured values. By averaging over 260 pixels an average value of the reconstructed band gap of 3.24 ± 0.02 eV is obtained, compared to the directly measured value of 3.22 ± 0.02 eV, thereby showing that a good accuracy is retained. Note that, in Figs 4b,c and 5b,c, the band gap values are revealed by the data points, and the polynomial curves are just added as a guide to the eye. Both the directly measured and reconstructed maps exhibit some standard deviations, which cause shifts of the maximum (or minimum) position of the polynomial curves.

Discussion

Compared to the directly measured band gap maps, the reconstructed maps have significantly higher spatial resolution. As shown in Figs 4 and 5, the two methods capture the same general features, but the reconstructed maps reveal both higher and lower absolute values. There are two effects contributing to this difference. First, in the case of the directly measured band gap maps, the incident electron not only experiences energy transfer to excitations "locally", but also to transitions taking place some distance away from the position where it penetrates the specimen. This inelastic delocalization of the signal causes a spectrum from one position on the sample to have contributions from a larger volume, quantified as an inelastic delocalization length (L 50 , contributing 50% of the signal) of approximately 6 nm (see Supplementary Information for more details). Second, to create the band gap map, the experimental spectra need to be analyzed in order to identify the onset of energy loss corresponding to the band gap. In this process, even small or moderate contributions from adjacent areas can weigh heavily and thereby cause a broadening of the features 19,22 .
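A minimal sketch of the pixel-by-pixel numerical inversion described above is given below. This is not the Python code from the Supplementary Information: the function ep_model is only a placeholder standing in for Equation (4), with an assumed linear dependence of the free-electron term on E g for illustration, and all numerical values are hypothetical.

```python
import numpy as np

def ep_model(eg: float) -> float:
    """Placeholder for Equation (4): model plasmon energy for a trial band gap.
    Illustrated with the simple semi-free form E_p = sqrt(E_pF(E_g)^2 + E_g^2),
    where E_pF(E_g) is taken to be linear (illustrative assumption only)."""
    ep_free = 16.5 + 0.66 * eg          # hypothetical stand-in for E_pF(E_g)
    return np.hypot(ep_free, eg)

def invert_pixel(ep_measured: float, eg_start=1.5, step=1e-3, eg_max=4.0) -> float:
    """Slowly increase E_g until the modeled E_p reaches the measured value (as in the text)."""
    eg = eg_start
    while eg < eg_max and ep_model(eg) < ep_measured:
        eg += step
    return eg

def reconstruct_map(ep_map: np.ndarray) -> np.ndarray:
    """Apply the scan to every pixel of a plasmon-energy map."""
    return np.vectorize(invert_pixel)(ep_map)

if __name__ == "__main__":
    demo = np.array([[18.9, 18.4], [17.8, 17.2]])   # synthetic E_p values in eV
    print(reconstruct_map(demo))
```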
For a feature to be resolved using this direct mapping method, it must therefore be quite large both spatially and spectrally.

[Caption fragment for Figs 4 and 5: (Fig. 2b) using the semi-free electron fitting. The arrows display the start and end points of the two lines chosen for analysis in (b), (c). Directly measured (Fig. 2a) and reconstructed E g along the horizontal (b) and vertical (c) line profiles. Polynomial curves are superimposed to guide the eye.]

These problems are greatly reduced when using the E p -to-E g reconstruction approach. First, the inelastic delocalization is much smaller in the case of plasmon losses, and second, the data processing now only requires determination of the plasmon peak position, a procedure that is much less influenced by contributions from adjacent areas than attempting to find an energy loss onset. The improvements in resolution are clearly evident in the finer details of Fig. 5. Here several features that were not resolved in the directly measured maps now become apparent. For example, in Fig. 5c (dashed box) we see a sharp drop in the band gap between adjacent points in the reconstructed data. The distance between these points is only 4.14 nm, while the E g reduction is well above the statistical error. In comparison, this feature is completely missing in the directly measured data. As mentioned in Supplementary Information section 3, the spatial resolution of the directly measured band gaps could in theory be improved by using a different experimental setup. If a specific spatial resolution is required for a given excitation energy, the resolution could in principle be improved by lowering the accelerating voltage of the microscope. However, as shown in Supplementary Fig. S4, a dramatic reduction in accelerating voltage is needed to reduce the inelastic delocalization to levels comparable to that predicted for plasmon losses at 19 eV: an L 50 of 1.46 nm is achieved for 3 eV band gaps only by reducing the accelerating voltage below 0.18 kV. While monochromatic electron beams with such low energy can indeed be generated, the increase in beam size (and resulting reduction in resolution) far outweighs the improvement in inelastic delocalization. In comparison, the E p -to-E g reconstruction approach can be used on regular samples at standard accelerating voltages, while still achieving excellent spatial resolution, accuracy and precision.

Conclusions

In summary, taking advantage of state-of-the-art monochromated EELS in conjunction with probe-corrected STEM, the local optical band gaps and plasmon energies of Zn 1−x Cd x O/ZnO were simultaneously mapped on the nanometer scale with a high level of spectral precision, and their quantitative correlation was successfully established. This provides a practical method to acquire semiconductor band gap values via plasmon energies, with drastically improved spatial resolution as compared to the direct measurement of the band gap. These findings pave the way for studies of band gap engineered semiconductor nanostructures with spatial resolution beyond the traditional delocalization limits, with the added benefit of greatly relaxed requirements on hardware and data processing.

Methods

A thin-film specimen of Zn 1−x Cd x O with variable Cd concentration was synthesized by metal organic vapour phase epitaxy (MOVPE) on an α-Al 2 O 3 substrate along the [0001] axis, buffered with a pure ZnO layer. The sample for STEM investigations was prepared by cutting slices along the [0001] direction of α-Al 2 O 3 .
These slices were then mechanically ground to 150 μm, after which the slices were made into wedges with a tilt angle of 2.5°, using the MultiPrep System (Allied High Tech Products, USA). One side of the slices was further ground down to 20 μm. Thereafter the ground side was thinned by the low-angle ion milling & polishing system (Fischione 1010) with gun voltages of 5 kV/4 kV/3 kV, gun currents of 5 mA/4 mA/3 mA, and an incident beam angle of 8°. The total milling time was about 3 h. Immediately before the STEM experiments were performed, the sample was plasma cleaned in a Fischione Model 1020 plasma cleaner. The STEM investigations were undertaken in a probe-corrected and monochromated FEI Titan G2 60-300, equipped with a Fischione HAADF detector (3000), a Gatan GIF Quantum 965 EELS spectrometer, and four FEI Super-X EDX detectors. The actual content of Cd in Zn 1−x Cd x O was studied by EDX maps at a 300 kV accelerating voltage. Zn Kα and Cd Lα characteristic X-ray peaks were used for EDX quantification of content, and Cd content was determined to be 0.01 < x < 0.51. To prevent the Cherenkov radiation from overlapping with the band gap, monochromated EELS spectrum imaging were operated at a high tension of 60 kV, close to Cherenkov limit of ZnO. The specimens were finally thinned to be approximately 20~30 nm, further efficiently eliminating the unwanted retardation losses. The dispersion of the 2048-channel EELS spectrometer was set at 0.025 eV/channel in order to simultaneously collect the signals of band gap and plasmon energy pixel by pixel. To guarantee sufficient signals for EELS spectrum image, the time for exposure at each pixel was very close to the limiting exposure for the CCD. Before carrying out the structural and spectral mapping, the α-Al 2 O 3 substrate was tilted to the [2 11 0] orientation to make the electron beam perpendicular to the film growth direction. During the EELS or EDX spectrum imaging, software correction of the spatial drift was employed. The energy resolution in the monochromated EELS measurements was approximately 0.175 eV, as determined by the full width at half maximum (FWHM) of the zero-loss peak (ZLP). The band gap is identified as the onset of energy loss (similar to the onset of absorption in optical absorption experiments) followed by fitting a parabolic model to the spectrum after background subtraction. An EELS spectroscopic image is composed of many pixels (spectra). The parabolic fitting performed at each spectrum would eventually lead to a two-dimensional band gap image. A detailed explanation of the steps taken for band gap fitting has been published previously 19,22 . The plasmon energy values in EELS spectrum were obtained by fitting a Gaussian function in DigitalMicrograph, before which plural scattering was removed using the Fourier-log method.The datasets generated during and/or analyzed during the current study are available from the corresponding author upon a reasonable request.
4,670.4
2018-01-16T00:00:00.000
[ "Materials Science", "Physics" ]
High power, frequency agile comb spectroscopy in the mid-infrared enabled by a continuous-wave optical parametric oscillator

While mid-infrared optical frequency combs have been widely utilized in areas such as trace gas sensing, chemical kinetics, and combustion science, the relatively low power per comb tooth has limited acquisition times and sensitivities. We have developed a new approach in which an electro-optic frequency comb is utilized to pump a continuous-wave singly-resonant optical parametric oscillator in order to spectrally translate the comb into the mid-infrared. Through the use of electro-optic combs produced via chirped waveforms we have produced mid-infrared combs containing up to 2400 comb teeth. We show that a comb can be generated on the non-resonant idler when the pump modulation is non-synchronous, and we use these combs to perform high resolution spectroscopy on methane. In addition, we describe the underlying theory of this method and demonstrate that phase matching should allow for combs as broad as several THz to be spectrally translated to the mid-infrared. The high power and mutual coherence as well as the relatively low complexity of this approach should allow for broad application in areas such as chemical dynamics, quantum information, and photochemistry.

Introduction

Molecular spectroscopy at mid-infrared wavelengths has been widely utilized for gas sensing [1], chemical dynamics [2], quantum information [3], and astrochemistry [4] due to the large absorption cross-sections present in this region. Direct optical frequency comb spectroscopy (DCS) can offer advantages in spectral bandwidth, resolution, and acquisition time over other spectroscopic techniques, but it is difficult to simultaneously realize these capabilities in the mid-infrared. In particular, while broadband, high-resolution DCS has been demonstrated in the mid-infrared, the low power per comb tooth has largely precluded fast acquisition times and high sensitivities [5], [6]. Achieving higher power per comb tooth in the mid-infrared has been an ongoing challenge. Mode-locked quantum cascade lasers (QCLs) can provide milliwatts of power per tooth, but gigahertz tooth spacings limit the spectral resolution [7]. High-resolution mid-infrared comb approaches have traditionally offered tens to hundreds of nanowatts of power per comb tooth [8], [9], with at most 100 μW per comb tooth generated through the use of a 57 W pump laser [8]. Recently, we have developed a new approach for mid-infrared comb generation in which a continuous-wave singly-resonant optical parametric oscillator (CWSRO) is used for frequency conversion of near-infrared electro-optic frequency combs [9]. This approach allows for high power per tooth (approximately 80 mW), and correspondingly high acquisition rates (up to 50 MHz) while also offering broad wavelength tunability. These results indicate that this method may fulfill the promise of fast, sensitive absorption spectroscopy in the mid-infrared. In the present study, we describe the theory governing the frequency conversion of the modulated pump and derive the conditions under which the pump comb is replicated on the non-resonant, mid-infrared CWSRO idler. Our initial DCS demonstration [9] utilized over-driven electro-optic phase modulators (EOMs) to produce roughly twelve comb teeth in each frequency comb with a comb tooth spacing of 2.55 GHz.
Here we pump the CWSRO with a single electro-optic frequency comb combined with a local oscillator. To generate this comb, a near-infrared EOM is driven with chirped waveforms to produce an optical frequency comb with up to 2400 individual comb teeth of similar amplitude. This flat, high-resolution comb allows us to experimentally confirm the derived theory for replication of the pump comb on the idler, and results in a mid-infrared comb that is widely tunable between 2.19 µm and 4.00 µm with Watt-level power and approximately 450 µW per comb tooth in a relatively simple setup. We have then utilized the comb spectrometer to perform high resolution molecular spectroscopy in the critical mid-infrared spectral region.

Theory

In this section we derive the conditions under which the spectrum of a pump laser is replicated on the non-resonant idler of a CWSRO. In particular we show that the pump spectrum should contain no spectral components which are separated by the signal cavity's free spectral range (FSR), and that the phase matching bandwidth of the CWSRO provides the ultimate limit on the bandwidth of the idler spectrum.

We follow the derivation of Yariv for three waves interacting in a nonlinear medium [10]. There, differential equations are derived assuming that the three waves have no time dependence except oscillation at their carrier frequencies. Here we allow for variation in time at rates much slower than the pump carrier frequency. The real electric field ℰ for pump, signal, and idler waves j = (p, s, i) is written in terms of a complex field E which varies slowly in time and space:

ℰj(z, t) = (1/2)Ej(z, t)e^(i(ωj t − kj z)) + c.c., (1)

where ωj are the center angular frequencies of the three waves and kj are their wavevectors. This expression for the electric field represents a plane wave, and we model the field as such for simplicity. In practice, the pump, signal, and idler in our CWSRO have spatial modes defined by the pump laser and optical cavity. However, this plane wave approximation has adequately modeled the behavior of similar CWSROs [11], [12], [13], and the transfer of modulation to the idler that we explore here is independent of the spatial mode. We employ Type 0 phase matching in periodically-poled lithium niobate (PPLN), so that all electric field components have the same polarization. The equations which govern propagation of the pump, signal, and idler in the nonlinear crystal are then given by the coupled equations (2)-(4), where nj are the nonlinear crystal refractive indices at ωj, and c is the speed of light in vacuum. The wavevector mismatch is Δk = kp − ks − ki − Kg, where Kg = 2π/Λg is the wavevector of the quasi-phase matching (QPM) grating of period Λg, and deff is the effective nonlinear coefficient for ideal first-order QPM. In PPLN deff = 17 pm/V [14]. We are concerned with the evolution of the pump, signal, and idler spectra, so we take the Fourier transform of these equations with respect to angular frequency detuning Ω to obtain equations (5)-(7). As each wave in the nonlinear crystal propagates it generates new spectral components from the convolution of the spectra of the other two waves.
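The convolution picture can be checked with a toy numerical example. The sketch below (illustrative only, not from the paper) convolves a pump comb spectrum with a signal spectrum on a detuning grid: a single-line (delta-function) signal reproduces the pump comb tooth-for-tooth on the idler, while a second signal mode generates additional idler components.

```python
import numpy as np

def idler_spectrum(pump: np.ndarray, signal: np.ndarray) -> np.ndarray:
    """Idler spectral amplitudes obtained schematically as the convolution of the
    pump spectrum with the signal spectrum (toy model of the Fourier-domain coupling)."""
    return np.convolve(pump, signal, mode="same")

# Pump: comb of 11 teeth on a detuning grid; signal: single mode at zero detuning.
grid = np.zeros(201)
pump = grid.copy()
pump[100 - 50:100 + 51:10] = 1.0          # teeth every 10 bins
single_mode_signal = grid.copy()
single_mode_signal[100] = 1.0             # delta function -> idler replicates the pump
two_mode_signal = single_mode_signal.copy()
two_mode_signal[110] = 0.5                # a second cavity mode -> extra idler teeth

print(np.count_nonzero(idler_spectrum(pump, single_mode_signal)))  # 11 teeth
print(np.count_nonzero(idler_spectrum(pump, two_mode_signal)))     # more than 11
```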
The nonlinear crystal is located inside an optical cavity which is resonant only for the signal wave. The optical cavity supports signal frequencies which correspond to longitudinal modes, and all other signal frequencies will interfere destructively. If the signal oscillates on only a single longitudinal cavity mode, then its spectrum is practically a delta function with respect to the pump spectrum, and Eqn. 7 for the idler reduces to Eqn. 8. Therefore, if the signal is single mode, then a spectral component detuned Ω from the pump carrier will be generated on the idler detuned Ω from the idler carrier. Thus, the frequency spacing of the idler spectral components will match that of the pump. The relative amplitudes of these spectral components on the idler nearly match the pump. The ωi + 2Ω factor on the right-hand side of Eqn. 8 will slightly skew the idler spectrum. As described below, the maximum detuning considered here is on the order of 1 THz, which is much smaller than the 75 THz to 135 THz idler carrier frequency. Thus, if the signal is single mode, then the idler spectrum will replicate the pump spectrum except with a slope in amplitude of approximately 1%.

We note that there are modulation conditions which can cause the signal to oscillate on multiple modes. An etalon in the CWSRO cavity is used to force single longitudinal mode operation in the case of an unmodulated pump. The etalon has a bandpass full-width at half maximum (FWHM) of approximately 30 GHz and the cavity has an FSR near 530 MHz. One cavity mode under the etalon bandpass has the highest gain and dominates when the pump is unmodulated, but approximately 60 modes have significant gain. When the pump is modulated, then cavity modes which neighbor the highest-gain mode can also oscillate. An extreme case of pump modulation coupling to the resonant wave is a synchronously pumped OPO [17], [18]. There, the pump and OPO cavity are stabilized so that the comb mode spacing exactly matches the cavity FSR. From Eqns. 5-7, a comb spectrum on the pump and signal leads to cascaded generation of comb teeth on the pump, signal, and idler. To ensure that only a single cavity mode oscillates we avoid synchronous modulation. The signal oscillates on a center cavity mode at ωs, and in general it can carry modulation and have spectrum Ẽs(z, Ω). However, the signal can only support modulation at frequencies which correspond to longitudinal modes, where the detuning Ω is equal to multiples of the cavity FSR. If the pump spectrum Ẽp(z, Ω) does not have any spectral components which are separated by multiples of the cavity FSR, then pump modulation cannot couple to the signal cavity.

Thus, the first condition for successful replication of the pump spectrum on the idler of a CWSRO is that the pump spectrum does not contain spectral components which are separated by the cavity FSR. The second condition relates to the bandwidth of the pump comb. The PPLN poling period can be chosen so that Δk = 0 at the center frequencies of the pump, signal, and idler. However, the second term in Eqn. 3 provides a first-order correction to the phase accumulation in each of the three waves due to dispersion. This will result in non-zero Δk for spectral components which are detuned from the center frequencies. This phase mismatch results in decreased gain, and thus decreased power for idler comb teeth. The full-width at half-maximum (FWHM) of the phase matching gain curve for the CWSRO used here was calculated by SNLO [15] and is plotted as a function of idler wavelength in Fig. 1.
It shows that the widest FWHM of the idler spectrum is several terahertz when the idler wavelength is near the degeneracy point and decreases to approximately 100 GHz far from degeneracy. We note that the CWSRO could be optimized for broader phase matching. A similarly designed CWSRO with a broadband incoherent pump utilized non-collinear phase matching via tight focusing to generate an idler that was 5.7 THz wide (FWHM) at 3.4 µm [16].

Experiment

A schematic of the present mid-infrared optical frequency comb spectrometer can be found in Figure 2a. Light from a continuous-wave external cavity diode laser with a wavelength near 1064 nm had its fiber-coupled output split into two paths: a probe path and a local oscillator (LO) path in a self-heterodyne configuration [17], [18]. An electro-optic frequency comb is produced on the probe path by driving an EOM with the output of a chip-scale direct digital synthesizer (DDS) [19]. The DDS generates periodic frequency chirps as shown in Fig. 2b and accepts sweep and timing information from a programmable microcontroller. This periodic phase modulation results in a frequency comb where the comb tooth spacing is equal to the chirp repetition rate and the comb bandwidth is twice the bandwidth of the chirp. Importantly, this approach allows for agile, ultraflat frequency combs to be generated over a wide range of comb tooth spacings (100's of Hz to GHz) [19]. An acousto-optic modulator (AOM) placed on the LO path provided a 54.2 MHz shift to ensure that the positive and negative order comb teeth occur at unique radiofrequencies once combined on a photodiode. A representative near-infrared self-heterodyne comb spectrum can be found in Fig. 2c. 240 comb teeth which are spaced by 10 MHz can be observed, as well as the carrier tone which occurs at the AOM frequency. The combined probe and LO paths were then amplified by an ytterbium fiber amplifier which increased the seed power from 5 mW to 10 W. This amplified pump was then injected into the CWSRO which generates tunable mid-infrared radiation by down-converting near-infrared pump photons into near-infrared signal and mid-infrared idler photons through the use of a PPLN crystal. The CWSRO used here has been previously described in detail in Ref. [20]. The poling period of the PPLN varies along the crystal height in a fan-out structure. Vertical translation of the crystal relative to the pump beam changes the phase matching conditions, which widely tunes the signal (1450 nm to 2070 nm) and idler (2190 nm to 4000 nm). Rotation of the etalon and continuous tuning of the seed laser finely tunes the idler to a target mid-infrared wavelength. The depleted pump and signal beam were sent to a wavemeter for a continuous wavelength measurement. Measurement of the pump and signal wavelengths with 150 MHz accuracy allowed calculation of the idler wavelength with 210 MHz accuracy. The CWSRO had a nominal output power of 2 W in the depleted pump, 3 W in the signal, and 2 W in the idler.
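The statement that the comb tooth spacing equals the chirp repetition rate can be reproduced with a short simulation. The following Python sketch is illustrative only (it is not the instrument's control or analysis code, and all parameter values are arbitrary demonstration choices): a periodic linear frequency chirp is applied as a phase modulation and the resulting field is Fourier transformed.

```python
import numpy as np

# Illustrative simulation (not the paper's code): a periodic linear frequency chirp
# applied to a phase modulator produces a comb whose tooth spacing equals the chirp
# repetition rate. All parameters are arbitrary demonstration values.
fs = 1.0e9           # sample rate, Hz
f_rep = 1.0e6        # chirp repetition rate -> expected comb tooth spacing, Hz
bw = 100.0e6         # total frequency range swept within each chirp period, Hz
n = int(fs / f_rep) * 200            # 200 chirp periods
t = np.arange(n) / fs
t_mod = t % (1.0 / f_rep)            # time within the current chirp period

# Instantaneous frequency sweeps linearly from -bw/2 to +bw/2 each period;
# the optical phase is the integral of the instantaneous frequency.
phase = 2 * np.pi * (-0.5 * bw * t_mod + 0.5 * bw * f_rep * t_mod**2)
field = np.exp(1j * phase)

spectrum = np.abs(np.fft.fft(field)) / n
freqs = np.fft.fftfreq(n, d=1.0 / fs)
teeth = np.sort(freqs[spectrum > 0.01])      # crude threshold to pick out comb teeth
print(f"tooth spacing ~ {np.median(np.diff(teeth)):.3e} Hz (expected {f_rep:.3e} Hz)")
```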
Results and Discussion

The three outputs of the CWSRO (depleted pump, signal, and idler) were recorded on fast photodiodes using a digitizer operating at 3 gigasamples/s. Fast Fourier transforms (FFTs) of these interferograms can be found in Figure 3 for both 1 MHz and 10 MHz comb spacings (left and right panels, respectively). The depleted pump spectrum contains both the carrier tone at 54.2 MHz as well as the 2400-MHz-wide comb which was initially produced in the near-infrared prior to amplification (i.e., Fig. 2c). The shown pump combs contained 2400 and 240 individual comb teeth for the 1 MHz and 10 MHz comb spacing cases, respectively. In order to further visualize the flatness and extent of the optical frequency combs we extracted the FFT magnitudes at the known comb tooth frequencies and normalized them against the carrier tone for the 1 MHz pump, signal, and idler combs (see Fig. 4). The shown idler combs had powers per comb tooth of approximately 40 µW and 450 µW for the 1 MHz comb and 10 MHz combs, respectively, and are therefore well suited for high signal-to-noise molecular spectroscopy. As previously described, complete replication of the pump comb on the idler requires that the pump comb does not contain spectral components spaced by multiples of the cavity FSR. The EO comb carrier and the 54.2 MHz-detuned LO are the dominant components of the pump, so separations from these components at multiples of the cavity FSR are the dominant contributors to multimode oscillation of the signal. While the 10 MHz spaced pump comb is replicated with minimal distortion on the idler, the 1 MHz spaced comb contains teeth which are separated from the comb carrier and LO by multiples of the cavity FSR. As a result, we see significant depletion of the pump and enhancement of the signal and idler at multiples of 530 MHz (see Fig. 3b and 4). Weaker features can also be observed which arise at frequencies which are separated from the LO frequency by the cavity FSR (e.g., 476 and 584 MHz). The signal should only have gain at harmonics of the cavity FSR, so the observation of comb teeth on the signal at frequencies between multiples of the cavity FSR was unexpected. We believe these weak comb teeth are present as there can be some transfer of phase to the signal in a single pass through the crystal.

As an initial spectroscopic demonstration, we passed the idler beam through a 40 cm long cell containing 48 Pa of methane. Normalization of the resulting comb spectrum was performed via a background spectrum recorded when the cell was removed. The resulting transmission spectrum is plotted in Fig. 5 and compared to a HITRAN 2020 [21] fit in which only the center wavelength and background level were floated to account for uncertainty associated with the wavelength meter and optical cell coupling losses. This spectrum contains 2400 individual comb teeth and was recorded in only 0.5 s. The use of the 1 MHz comb for this demonstration allows us to see the impact of the slight idler comb perturbations when used for spectroscopy. Distortions are clearly present on the recorded spectrum at multiples of the cavity FSR, but the features viewed here are much broader than these distortions and an accurate spectrum can be measured. We note that these distortions can be removed by using a widely spaced comb (e.g., 10 MHz) which does not contain comb components at the cavity FSR.
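For readers who want to reproduce this style of analysis on their own data, the outline below shows one way to extract comb-tooth magnitudes from a recorded interferogram and form a transmission spectrum. It is an illustrative sketch only (not the authors' processing code); the assumed radio-frequency layout of the teeth and the nearest-bin sampling are simplifying assumptions.

```python
import numpy as np

def comb_tooth_magnitudes(interferogram, fs, f_rep, n_teeth, f_offset=54.2e6):
    """Return |FFT| of a real photodiode record sampled at the expected tooth frequencies.
    Assumes tooth m sits at f_offset + m*f_rep (the true RF mapping depends on the
    self-heterodyne layout); the nearest FFT bin is used for each tooth."""
    spec = np.abs(np.fft.rfft(interferogram))
    freqs = np.fft.rfftfreq(len(interferogram), d=1.0 / fs)
    tooth_freqs = f_offset + f_rep * np.arange(n_teeth)
    idx = np.clip(np.searchsorted(freqs, tooth_freqs), 0, len(spec) - 1)
    return tooth_freqs, spec[idx]

def transmission(sample_teeth, background_teeth):
    """Per-tooth transmission: sample magnitudes normalized by a background (no-cell) record."""
    return np.asarray(sample_teeth) / np.asarray(background_teeth)
```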
We believe that the present technique holds numerous advantages in comparison to other mid-infrared comb approaches such as QCL combs [7], difference frequency generation (DFG) combs [8], [22], [23], and synchronously pumped femtosecond OPO-based combs (SySRO combs) [24], [25]. The present approach offers far higher tunability and agility with respect to repetition rate than QCL or SySRO combs where the repetition rate is essentially fixed and limited by either the QCL cavity length (generally near 10 GHz) [7] or the SySRO cavity length (generally a few hundred MHz). The repetition rates used in the present work are determined by the driving frequencies of an EOM, which are frequency agile and digitally controlled. Thus, a single EOM-comb CWSRO system can be used for applications requiring spectral resolution ranging from well less than 1 MHz to more than 1 GHz. In addition, since the local oscillator (or a second comb) can be generated from the same near-infrared laser and simultaneously translated into the mid-infrared by the same CWSRO, there is no need for complicated phase locking, phase correction, or a second comb source, in contrast to these other methods. Finally, in comparison to DFG combs the present method offers far higher optical powers for a given pump power [8], [22], [23].

One area of future work will be extending the bandwidth of the pump combs to reach the bandwidth limits imposed by the phase matching. Broader pump combs could be generated via cascaded modulators or non-linear spectral broadening. As shown earlier, the phase matching condition of the CWSRO is very broad and is expected to accommodate combs as wide as several THz, allowing for wideband multiplexed spectroscopy.

The combination of high resolution, high power, and mutual coherence provided by CWSRO EOM combs is ideally suited to applications such as sub-Doppler spectroscopy, where narrow features must be located within a broad spectral region. In addition, we see strong applications for this approach in areas such as chemical kinetics, optical metrology, communications, and quantum sensing where the flexibility, agility, and high optical power are expected to be transformative.

Figure 1. Full-width at half-maximum phase matching bandwidth of the idler of the continuous-wave singly resonant optical parametric oscillator pumped at 1064 nm.

Figure 2. (a) Schematic of the continuous-wave optical-parametric-oscillator-based optical frequency comb spectrometer. The output of an external-cavity diode laser (ECDL) is split into two paths, one containing an electro-optic phase modulator (EOM) and the other an acousto-optic modulator (AOM), which serve as probe and local oscillator (LO) paths, respectively. The EOM is driven by a periodic chirp to generate an electro-optic frequency comb. The chirp repetition rate (frep) and frequency range are controlled by the output of a direct digital synthesis (DDS) chip (see panel (b)). The LO path passes through an AOM to produce a frequency shift of 54.2 MHz. The probe and LO paths are then recombined and amplified with a fiber amplifier before being injected into the continuous-wave optical parametric oscillator (CWSRO) containing a periodically poled lithium niobate (PPLN) crystal. The resulting mid-infrared (MIR) idler frequency comb was attenuated and then passed through a gas cell containing 48 Pa of methane before being detected on a photodetector (DET). A representative near-infrared (NIR) self-heterodyne spectrum can be seen in panel (c).

Figure 3. Radiofrequency spectra of the pump (a), signal (b), and idler (c) when driven by near-infrared electro-optic combs with 1 MHz (left panels) and 10 MHz (right panels) repetition rate. The insets show magnified versions of the radiofrequency spectra which are centered near the OPO cavity's free spectral range. Each of the shown power spectra is the average of one hundred power spectra, each of which was acquired in 0.5 ms.

Figure 4. Comb tooth magnitudes normalized against the carrier tone for the pump, idler, and signal outputs of the CWSRO for a 1 MHz spaced comb. The shown spectra are the average of ten spectra, each of which was acquired in 1 ms.

Figure 5. Transmission spectrum (black points) of a gas cell containing 48 Pa of methane as well as a spectral fit (red line) using HITRAN 2020 [21] parameters. Only the center wavelength and background level were adjusted to account for uncertainty in the wavelength meter and coupling losses of the optical cell. The shown spectrum contains 2400 individual comb teeth spaced by 1 MHz and was the average of 500 spectra, each of which was acquired in 1 ms. As predicted by the theory, small distortions in the spectrum are observed for comb teeth occurring at multiples of the CWSRO cavity's free spectral range from the carrier wavelength.
4,793.6
2023-09-26T00:00:00.000
[ "Physics", "Engineering" ]
Mathematical formulation to predict the harmonics of the superconducting Large Hadron Collider magnets. II. Dynamic field changes and scaling laws

A superconducting particle accelerator like the LHC (Large Hadron Collider) at CERN can only be controlled well if the effects of the magnetic field multipoles on the beam are compensated. The demands on a control system solely based on beam feedback may be too high for the requirements to be reached at the specified bandwidth and accuracy. Therefore, we designed a suitable field description for the LHC (FIDEL) as part of the machine control baseline to act as a feed-forward magnetic field prediction system. FIDEL consists of a physical and empirical parametric field model based on magnetic measurements at warm and in cryogenic conditions. The performance of FIDEL is particularly critical at injection when the field decays, and in the initial part of the acceleration when the field snaps back. These dynamic components are both current and time dependent and are not reproducible from cycle to cycle since they also depend on the magnet powering history. In this paper a qualitative and quantitative description of the dynamic field behavior substantiated by a set of scaling laws is presented.

I. INTRODUCTION

The baseline of the LHC control system includes feed-forward control intended to reduce the burden on the beam-based feedback. Known as the field description for the LHC (FIDEL) [1], this feed-forward system will predict the main field and the harmonics of the superconducting magnets during the whole machine operation cycle. This system is particularly critical during the beam injection and the initial phase of the particle acceleration where the machine magnetic state is dynamic and its reproducibility is, to some extent, unknown. During beam injection, the LHC superconducting magnets need to have a constant magnetic field of 0.537 T and therefore are kept at a constant current of 760 A. However, the magnetic field multipoles drift when the magnets are on a constant current plateau. This appears as a "decay" of the persistent current contribution to the multipoles and causes significant changes in the beam tune and machine chromaticity [2]. The present understanding of the origin of this dynamic magnetic behavior is the diffusion of a nonuniform current distribution along the Rutherford cable originating from spatial gradients in the field sweep rate and gradients in the cable properties (e.g. cross-contact resistances). Even at constant transport current, as is the case on the injection plateau, these currents produce spatially modulated changes in the local field. These field changes locally reduce the magnetization and hence cause the decay [3][4][5][6][7]. In turn, when the external field is increased during the first few seconds of the current ramp, the magnetization is restored to its original hysteresis state, hence canceling out the decay. This phase, called snap-back [8], can be too fast to be compensated solely using beam diagnostics. In addition, these dynamic field changes are not reproducible from one powering cycle to another and they are dependent on the powering history of the magnet [9]. Extensive research on these phenomena has been done at the hadron electron ring facility (HERA) [10,11] and for the superconducting super collider (SSC) [12]. These ef-
A. Time dependence in LHC dipoles

The LHC dipole magnets have a cost-saving twin-aperture design, where two particle beam apertures with separate coil systems are incorporated within the same magnet [14]. The standard decay magnetic measurements executed on dipoles consist of rotating coil measurements [15] in both apertures during a 1000 s simulated particle injection plateau at 0.537 T. The injection conditions are reached following a standard powering cycle consisting of a cleansing quench, a ramp to 8.33 T at 50 A/s, a 1000 s flattop, and a ramp-down to 0.25 T at 50 A/s. The purpose of this precycle is to simulate the LHC operation at 7 TeV, while the purpose of the cleansing quench is to erase the memory of previous powering cycles and thus make the measurements comparable. The sample measured consists of 352 apertures (corresponding to 176 magnets) and is almost equally distributed amongst the three different manufacturers (Alstom®, Ansaldo Superconduttori®, and Babcock Noell®).

As generally accepted for accelerator magnets and for use in beam optics simulations, the magnetic field B in the two-dimensional (x, y) plane can be expressed using the harmonic expansion

B y + iB x = Σ n≥1 C n [(x + iy)/R ref ] n−1 , (1)

where C n indicates the generic non-normalized complex harmonic of order n given in the reference frame aligned with the main field direction. R ref is the reference radius (17 mm for the LHC) and is representative of the maximum beam size. For convenience, the normalized harmonic coefficients, indicated as c n , can be defined as

c n ≡ b n + ia n = 10 4 C n /B m , (2)

where B m is the main magnetic field expressed in a reference frame where the main skew component is zero. b n and a n are the normal and skewed multipole coefficients, respectively. The factor 10 4 is used to produce practical relative dimensions for the normalized coefficients. The normalized c n are expressed in the form above in so-called "units."

Figure 1 shows the variation of b 1 , b 3 , and b 5 during injection, arbitrarily shifted along the vertical axis to make the initial value at injection equal to zero. Note that only 58 apertures are included in Fig. 1 so as to limit the amount of data in one graph. However, the average decay is computed from the entire magnet population. A quantity of specific interest to analyze the properties of the magnet population is the decay amplitude at the end of the injection. This is summarized in Fig. 2 and Table I, reporting, respectively, the average decay amplitude ± std of the main field and the harmonics. It should be noted that the values in Table I are normalized to the dipole field at 17 mm reference radius multiplied by 10 −4 , as indicated in Eq. (2). From Fig. 2 we clearly see that the decay manifests itself as a systematic behavior only in the allowed harmonics, and hence it must be modeled and compensated in the machine. From this observation, the decay modeling is limited to the main field and the first two allowed harmonics, which can be compensated by using corrector magnets. We use our understanding of the physical origin of the decay to develop a mathematical formulation that describes the decay evolution in time. In particular, we assume that the decay driver is current diffusion in the superconducting cable. Making the hypothesis that the cable current distributes continuously among the strands of a uniform cable, the time evolution of the currents is governed by an infinite series of harmonic modes damped by an exponential with time constants τ n = τ/(2n − 1) 2 [3].
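As a small numerical illustration of the multipole expansion and the normalized units in Eqs. (1) and (2) above, the sketch below (not from the paper; the harmonic values are invented for demonstration) evaluates the field from a set of complex harmonics and converts them to units of 10^−4 of the main field.

```python
import numpy as np

def field_from_harmonics(C, x, y, r_ref=0.017):
    """Evaluate B_y + i*B_x from complex harmonics C[n-1] (cf. Eq. (1)); C is in teslas."""
    z = (x + 1j * y) / r_ref
    return sum(cn * z**(n - 1) for n, cn in enumerate(C, start=1))

def normalized_units(C, B_main):
    """Normalized multipoles c_n = b_n + i*a_n = 1e4 * C_n / B_main (cf. Eq. (2))."""
    return 1e4 * np.asarray(C) / B_main

# Illustrative numbers only (not measured LHC values): a 0.537 T dipole with small
# sextupole and decapole content expressed at the 17 mm reference radius.
C = [0.537, 0.0, 4e-4, 0.0, 5e-5]
print(normalized_units(C, B_main=0.537))            # "units" of 1e-4 of the main field
print(abs(field_from_harmonics(C, x=0.010, y=0.0))) # field magnitude at x = 10 mm
```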
The time constants depend on the cable geometry (affecting the line inductance) and the interstrand resistances. A direct solution of current diffusion is not practical, as it depends on too many parameters that are not measured (such as the cross-contact resistance and its variation along the coils). In our model we substitute these unknowns by constants that can be determined by fitting to cold magnetic measurement data. Under these assumptions, the normalized decay can be modeled by the following equation:

δ(t; t_inj, τ, d) = d [1 - e^{-9(t - t_inj)/τ}] + (1 - d) [1 - e^{-(t - t_inj)/τ}] + HOT,   (3)

which holds for I = I_inj and t > t_inj. t is the instantaneous time, t_inj is the time when injection starts, I_inj is the current at injection, τ is the time constant, and HOT stands for the higher order terms in the series expansion. Neglecting the higher order terms, the parameter d gives the normalized weight of the fast mode of the decay (time constant τ/9) and its complement to one, 1 - d, gives the normalized weight of the slow mode (time constant τ). The contribution of the decay to the main field is obtained by rescaling this function with an amplitude parameter Δ_m, which represents the decay amplitude at a reference time t_inj^std; B_inj^b is the field at the beginning of injection and I_inj is the injection current. The contribution of the decay to the transfer function, where the transfer function (TF) is defined as the ratio of the field generated to the operating current, TF = B/I, is modeled in the same way. The contribution to the harmonics is given by

c_n^decay(t) = Δ_n (I_inj/|I|) δ(t; t_inj, τ_n, d_n) / δ(t_inj^std; t_inj, τ_n, d_n),   (7)

where Δ_m and Δ_n are in units. Figure 3 shows the decay model for b_3 for an injection plateau of 10 000 s. The values of the parameters obtained from the fits of the average decay, as well as the standard deviation of the difference between the sample average and the model, are reported in Table II.

B. Decay scaling

So far we have discussed modeling of a finite population in one specific cycling condition. Operation in the LHC will depend on many factors that will surely cause deviations from the measurement conditions used during cold tests. For this reason, it is planned to improve the model using data from direct beam measurements as well as offline reference magnet measurements. The adjustment will be effective only if the model of the average, or a reference magnet measurement, can be scaled to be representative of the whole magnet population, which is not obvious in principle. Observing the single magnet data, it seems that a simple scaling factor applied to the decay of a single magnet, i.e., stretching the measured data in the y direction, could be enough to match the average curve. This is clearly true if the dynamics of the decay do not change from magnet to magnet. Starting with this assumption, it was sought whether the scaling law

δ_n^avg(t) = f_n^decay δ_n^i(t)   (8)

produces a satisfactory result. In Eq. (8), δ_n^avg is the average decay (i.e., the value for the sector or for the ring), δ_n^i is the decay of the reference magnet i, and f_n^decay is the scaling factor. The latter is determined as the ratio of the measured decays for the sample average and for the reference magnet, taken at the end of the simulated injection, i.e., in the above notation,

f_n^decay = δ_n^avg(t_end) / δ_n^i(t_end).   (9)

It should be noted that there is no free parameter in the above scaling, all quantities being known once the measurement on the beam is performed or once the reference magnet, or a suitable sample, has been measured in cold conditions. Equations (8) and (9) have been used to scale the decay of each magnet measured, producing curves of the type represented in Fig. 4 for a selected magnet (in this case the sextupole harmonic of magnet 3154 aperture 1).
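As an illustration, a minimal numerical sketch of the decay model of Eqs. (3) and (7) and of the scaling of Eqs. (8) and (9) is given below; the parameter values are placeholders, not the fitted values of Table II:

```python
import numpy as np

def decay_shape(t, t_inj, tau, d):
    """Normalized decay shape of Eq. (3): a fast mode (time constant tau/9,
    weight d) plus a slow mode (time constant tau, weight 1 - d); the higher
    order terms (HOT) of the diffusion series are neglected."""
    dt = np.asarray(t, dtype=float) - t_inj
    return d * (1.0 - np.exp(-9.0 * dt / tau)) + (1.0 - d) * (1.0 - np.exp(-dt / tau))

def c_n_decay(t, I, Delta_n, I_inj, t_inj, tau_n, d_n, t_std=1000.0):
    """Decay contribution to a normalized harmonic, Eq. (7): the amplitude
    Delta_n (in units, referred to the standard time t_std after the start of
    injection) rescaled by the shape function."""
    ref = decay_shape(t_inj + t_std, t_inj, tau_n, d_n)
    return Delta_n * (I_inj / abs(I)) * decay_shape(t, t_inj, tau_n, d_n) / ref

def scale_to_average(ref_curve, avg_end):
    """Eqs. (8)-(9): stretch a reference-magnet decay curve so that its value
    at the end of injection matches the population (or beam-based) average."""
    ref_curve = np.asarray(ref_curve, dtype=float)
    f_decay = avg_end / ref_curve[-1]
    return f_decay * ref_curve

# Placeholder parameters: a sextupole decay of 1.2 units at t_std = 1000 s on
# the 760 A injection plateau.
t = np.array([0.0, 250.0, 1000.0, 10_000.0])
curve = c_n_decay(t, I=760.0, Delta_n=1.2, I_inj=760.0, t_inj=0.0,
                  tau_n=900.0, d_n=0.6)
print(np.round(curve, 3))                          # reaches 1.2 units at 1000 s
print(np.round(scale_to_average(curve, 1.5), 3))   # scaled to a 1.5-unit average
```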
The difference between the scaled decay and the average of the magnet population has been computed at all times during the injection plateau. To quantify the goodness of the scaling, we have taken the maximum of the absolute value of this difference. Histograms of the maximum residual error of all the magnets, together with log-normal distributions, are shown in Fig. 5 for b_1, b_3, and b_5, respectively. The log-normal distribution is used because it can fit a data set that is skewed and can also describe data that cannot fall below zero but that might increase without limit. The goodness of fit is tested using the Kolmogorov-Smirnov test [16], which is satisfied for b_3 and b_5. b_1 does not pass the test, and we attribute this to the noise inherent in the measurement (see Fig. 1). The scaling law tested produces typical maximum residual scaling errors in the range 0.1 to 5 units @ 17 mm. There are a few outliers that are not shown in the figure; these are generally related to magnets that have a large scaling factor or that have anomalous behavior, and they appear as a tail in the distributions. Since the data distribution is skewed, as shown in the histograms, the most probable residual errors (i.e., the mode) are less than the medians of the distribution. A conservative choice can be made by taking the median as an indication of the typical scaling error, i.e., 0.5 units @ 17 mm for b_1, 0.06 units @ 17 mm for b_3, and 0.02 units @ 17 mm for b_5. In fact, in principle, it would be possible to achieve better results by defining the scaling factor based on a general optimization over the time span available in the measured data. This is not done here, to keep the reasoning simple and because it has little influence on the final conclusions.

C. Tevatron dipoles

As a part of the overall optimization of the Tevatron Run II, several dipole magnets were remeasured at the magnet test facility at Fermilab [17,18], aiming at reducing beam losses associated with residual correction errors during injection and snap-back. Thanks to the copious results obtained in this measurement campaign, it was possible to compare the behavior of the sextupole during injection in specific magnets to the chromaticity measurements taken during the injection plateau in the accelerator [19]. The result of this test is shown in Fig. 6, which demonstrates that the good agreement between the average behavior of a magnet population and the scaled results from a single magnet is not accidental. In the case reported in Fig. 6, the scaled magnet behavior reproduces the dynamics of the Tevatron chromaticity evolution to within 0.04 units @ 25.4 mm over a time span of nearly 2 hours. Beyond this time there is a deviation, which is due to a difference in the dynamics of the decays. However, the deviation remains small. This gives confidence that the scaling of Eq. (8) can produce results accurate enough for precise control.

D. HERA dipoles

The correction scheme employed by HERA at DESY makes use of online reference magnets and look-up tables. Two reference magnets, one for each magnet production line, have been chosen to represent the behavior of the two halves of the proton ring. The reference magnets were chosen to be at the center of the drift spread of their respective magnet families. The beam parameters can be controlled automatically using NMR probes in the reference magnets to detect the b_1 change, and rotating coils to measure the drift of the b_3 component [20].
The corrections obtained are applied without scaling to the corrector magnets in the ring. This corresponds to the scaling procedure outlined above for the LHC magnets, where the scaling factor f^decay of the single magnet to the average of the population is 1 because of the magnet selection adopted. As shown in [21,22], the effect of decaying persistent currents leads to a change in the horizontal and vertical chromaticities in opposite directions. Without correction, the chromaticity reaches unacceptable values within a few minutes. However, if the correction system is switched on, the use of reference magnet data counteracts the decaying persistent-current sextupole fields and the chromaticity in both planes is kept close to the desired values. As in the case of the Tevatron dipoles, these results show that a single magnet can be taken to represent the behavior of a whole family, and they support the scaling property observed for the LHC magnets.

III. MODEL OF THE SNAP-BACK

When the current ramp starts at the end of injection, the multipoles snap back to return to the hysteresis branch that would have been measured without the injection stop (dashed line in Fig. 7).

A. LHC dipoles

The measurement of the snap-back was performed with a snap-back analyzer [23] that outperforms rotating coils [15], which are too slow to provide the time resolution necessary for accurate modeling (1 to 10 Hz). A typical snap-back measurement campaign consists of several LHC cycles with the precycle parameters changed so as to vary the decay amplitude. The cycles are separated by a quench to erase the memory of previous powering. It was found experimentally [24], and proven analytically in [25], that during the snap-back the first allowed harmonics b_3 and b_5 follow an exponential law. For the normal sextupole, this law was written as follows:

b_3^snap-back(t) = b_3^decay e^{-(I(t) - I_inj)/ΔI_3},   (10)

where b_3^snap-back(t) is the sextupole variation during the snap-back and I(t) is the instantaneous value of the current, initially at the injection value I_inj. The amplitude b_3^decay of the snap-back and the current change ΔI_3 are the two fitting constants. However, given that the multipoles are continuous in time, the snap-back amplitude is equal and opposite to the magnitude of the decay at the end of the injection. This implies that b_3^decay is not an independent parameter in the overall model. Figure 8 shows the exponential fit of the sextupole snap-back data of Fig. 7, demonstrating that the model is well suited to the data. The standard deviation of the fit is in general less than 0.03 units during the whole snap-back. Based on this observation, the snap-back of the main field, transfer function, and all harmonics is modeled as follows:

B^snap-back(I) = B_m^decay(t_ramp) e^{-(I - I_inj)/ΔI_m},   (11)
TF^snap-back(I) = TF^decay(t_ramp) e^{-(I - I_inj)/ΔI_m},   (12)
c_n^snap-back(I) = c_n^decay(t_ramp) e^{-(I - I_inj)/ΔI_n},   (13)

where the factors B_m^decay(t_ramp), TF^decay(t_ramp), and c_n^decay(t_ramp) are the changes of the main field, the transfer function, and the normalized harmonics, respectively, during the decay, evaluated at the time of the beginning of the ramp, t_ramp. These parameters can hence be determined from the double exponential fit of Eq. (3). The only remaining parameters are the characteristic currents of the exponential change, ΔI_m and ΔI_n. Analyzing data obtained for a single magnet during measurements of snap-back following different magnet powering sequences, it can be observed that both the amplitude parameters (B_m^decay(t_ramp), TF^decay(t_ramp), and c_n^decay(t_ramp)) and the characteristic currents ΔI_m and ΔI_n change from run to run.
This corresponds to the well-known fact that the snap-back (as the decay) is a function of the magnet powering history. We have found, however, that the two sets of fit parameters are strongly correlated, and once represented in a scatter plot they lie on a straight line. Furthermore, a very interesting property is that the correlation between the fit parameters is approximately the same for all magnets tested. An example of this correlation of the sextupole fit parameters, b_3^decay(t_ramp) vs ΔI_3, is shown in Fig. 9 for the 138 measurements on LHC dipoles tested to date using the snap-back analyzer [23]. This finding is substantiated by the fact that the magnets tested were not specially selected (e.g., with respect to cable properties), and comparable results are found performing the same measurements and data analysis on both the LHC and Tevatron dipoles, as discussed later. Hence, it seems that the correlation plot can be used to characterize the behavior of the dipoles in the whole accelerator, i.e., it can act as a scaling law. The implication is that only one of the two fit parameters, either c_n^decay(t_ramp) or ΔI_n, is strictly necessary to predict the sextupole change. In practice, the waveform of the snap-back can be predicted by taking the observed decay c_n^decay(t_ramp) at the end of injection [e.g., computed using Eq. (7)] and computing the corresponding ΔI_n using the linear correlation coefficient g_n^SB:

c_n^decay(t_ramp) = g_n^SB ΔI_n.   (14)

Figure 9 reports b_3^decay(t_ramp) (units @ R_ref = 17 mm) vs ΔI_3 for the sets of different powering cycles in the LHC dipoles tested and analyzed to date. The data have been fitted with a linear regression and compared to the theoretical prediction presented in [25]. For the sextupole, which is in practice the only harmonic that could be sampled reliably, the value obtained from measurements is g_3^SB = 0.172 units/A, which is comparable to the theoretical value g_3^SB,theoretical = 0.19 units/A calculated in [25]. The R-squared value of the correlation line is 0.882. To have a better indication of the quality of the snap-back scaling law, the same procedure as used in the decay scaling analysis described above is employed. This is done by taking the residual error as the maximum deviation of the fit parameter b_3^decay(t_ramp) from the correlation of Eq. (14) for all measurement sets analyzed. The histogram and the log-normal distribution of the difference between the sextupole snap-back amplitudes and the correlation line are shown in Fig. 10. The use of the log-normal distribution is justified by the same reasoning discussed earlier. The residual errors range from 0.01 to 0.6 units @ 17 mm, with a median value of 0.14 units @ 17 mm. The above value for the median residual error can be taken as an estimate for the deviation between the predicted and the actual snap-back waveforms in the accelerator.

B. Tevatron dipoles

In support of the above discussion, we report here a summary of sextupole snap-back measurements of the same type as described above that were performed on 12 Tevatron dipoles [26]. Following the same analysis procedure as for the LHC dipoles, the result is represented in the scatter plot of Fig. 11 and leads to the same conclusion, namely, that the two parameters c_n^decay(t_ramp) and ΔI_n are strongly correlated.
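A minimal sketch of how the correlation of Eq. (14) is used in practice is given below; only the quoted sextupole slope g_3^SB = 0.172 units/A is taken from the measurements, while the decay amplitude is a placeholder:

```python
import math

def predict_snapback(c_decay_at_ramp, g_sb=0.172):
    """Predict the snap-back waveform of Eq. (13) from the observed decay
    alone: the correlation of Eq. (14), c_decay(t_ramp) = g_sb * dI, supplies
    the characteristic current, so no additional fit parameter is needed.
    g_sb = 0.172 units/A is the measured sextupole value quoted above."""
    dI = c_decay_at_ramp / g_sb
    return lambda I, I_inj: c_decay_at_ramp * math.exp(-(I - I_inj) / dI)

# Hypothetical case: 1.2 units of b3 decay accumulated at the start of the ramp.
b3_sb = predict_snapback(1.2)
for I in (760.0, 770.0, 780.0, 800.0):
    print(I, round(b3_sb(I, 760.0), 3))
# The residual sextupole perturbation vanishes within a few tens of amperes
# above the injection current.
```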
The fact that the same result is obtained on two different families of dipole magnets, with major design and manufacturing differences (both in the superconducting cable and in the coil), supports the idea that the correlation found has some fundamental origin and can thus be used for a robust prediction.

IV. MODEL OF THE POWERING HISTORY DEPENDENCE

The decay and snap-back of allowed multipoles in the LHC magnets are known to be strongly dependent on the powering history of the magnet [4,5,8,9,27]. This dependence can be explained by the way the nonuniform current distributions are formed and diffused in the Rutherford cable during magnet powering. The studies and analysis performed on short dipole models, dipole prototypes, and series dipole magnets have concentrated on the measurement of decay and snap-back following a quench, erasing all previous memory, and a current cycle whose current values and durations have been varied parametrically. The prototype of this cycle is shown in Fig. 12, which also defines the main parameters varied. The measurements cited above have shown that three parameters mostly affect the injection decay amplitude and the subsequent snap-back. These are the flattop current I_FT, the flattop time t_FT, and the time spent on the preinjection plateau t_preparation. In terms of the notation introduced in the previous section, the change in the decay amplitude can be described through a change of the amplitude parameter Δ introduced in Eq. (7), where, taking the example of the harmonic of order n, we have in general that

Δ_n = Δ_n(I_FT, t_FT, t_preparation).   (15)

To model the changes in Δ_n, we use the following parametrization:

Δ_n / Δ_n^std = {[E_0^n - E_1^n e^{-I_FT/(τ_E^n dI/dt)}] / [E_0^n - E_1^n e^{-I_FT^std/(τ_E^n dI/dt)}]}
× {[T_0^n - T_1^n e^{-t_FT/τ_T^n}] / [T_0^n - T_1^n e^{-t_FT^std/τ_T^n}]}
× {[P_0^n - P_1^n e^{-t_preparation/τ_P^n}] / [P_0^n - P_1^n e^{-t_preparation^std/τ_P^n}]},   (16)

where Δ_n^std is the decay measured for a standard precycle, i.e., with a flattop current of I_FT^std = 11850 A, a flattop time of t_FT^std = 1000 s, and no preinjection time (t_preparation^std = 0 s). The time constants τ_E^n, τ_T^n, and τ_P^n describe the length of the magnet memory vs the flattop current, flattop time, and preinjection time, respectively. dI/dt is the precycle current ramp rate, which is taken to be 50 A/s for both ramp-up and ramp-down. The fitting parameters in Eq. (16) are the above time constants and the variables E_0^n, E_1^n, T_0^n, T_1^n, P_0^n, and P_1^n. Equation (16) can be seen as a direct consequence of the assumption of exponential decay during constant current excitation, i.e., Eq. (3), where only the longest time constant has been retained for simplicity. The same equation can be applied to Δ_m. The parametrization was tested against the measured effect of the three precycle parameters, as sampled on a total of 19 magnets, listed in Table III. When testing the influence of one parameter (e.g., the flattop current), the second and third parameters were held constant (e.g., the flattop time and the preinjection time) at the values corresponding to the standard precycle. In addition, it should be noted that, owing to the long test time (each measurement requires a quench and a complete precycle that last several hours), in some cases only the influence of one of the three parameters was measured. Figure 13 shows the measurement results and the average variation of the decay amplitude vs the precycle flattop current for the measurements listed in Table III. b_1, b_3, and b_5 all show an approximately linear dependence on I_FT.
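A minimal numerical sketch of the parametrization of Eq. (16) is given below; the fit constants used are placeholders, not the values of Table V:

```python
import math

def history_factor(x, x_std, a0, a1, tau):
    """One factor of Eq. (16): a saturating-exponential dependence on a single
    precycle parameter, normalized to its value for the standard precycle."""
    return (a0 - a1 * math.exp(-x / tau)) / (a0 - a1 * math.exp(-x_std / tau))

def decay_amplitude(Delta_std, I_FT, t_FT, t_prep, p, dIdt=50.0,
                    I_FT_std=11_850.0, t_FT_std=1000.0, t_prep_std=0.0):
    """Scale the decay amplitude measured for the standard precycle to an
    arbitrary precycle, following the structure of Eq. (16).  The dictionary p
    holds the fitted constants; the numbers below are placeholders only."""
    return (Delta_std
            * history_factor(I_FT / dIdt, I_FT_std / dIdt, p["E0"], p["E1"], p["tauE"])
            * history_factor(t_FT, t_FT_std, p["T0"], p["T1"], p["tauT"])
            * history_factor(t_prep, t_prep_std, p["P0"], p["P1"], p["tauP"]))

p = {"E0": 1.0, "E1": 0.8, "tauE": 150.0,   # flattop-current term (via ramp time)
     "T0": 1.0, "T1": 0.5, "tauT": 2000.0,  # flattop-time term
     "P0": 1.0, "P1": 0.3, "tauP": 600.0}   # preinjection-time term
print(round(decay_amplitude(1.2, I_FT=6000.0, t_FT=600.0, t_prep=300.0, p=p), 3))
```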
We remark, however, that the b_1 dependence is very close to the measurement accuracy limit. Figures 14 and 15 show the measurement results and the average variation of the decay amplitude vs the precycle flattop duration and vs the preparation duration, respectively, for the measurements listed in Table III. b_1, b_3, and b_5 all show a generally asymptotic exponential dependence. However, the dependence of b_1 and b_5 in both cases is considered to be negligible, since it is comparable to the repeatability of the rotating-coil measurements and is not reproducible on a magnet-by-magnet basis. Therefore, these dependencies are only considered to be important for b_3. We can assess the importance of the three precycle parameters on the main field and on the harmonics considered by comparing the range of variation of the measurement averages. The effect of each powering history parameter on the decay amplitude is summarized in Table IV. The I_FT dependence is relevant for the main field, sextupole, and decapole, while in practice the other two parameters t_FT and t_preparation only affect the sextupole. The fit of the parametrization of Eq. (16) yields the parameters reported in Table V. The surfaces in Fig. 16 show how the parametrization of Eq. (16) describes the average magnet data scaled to the entire magnet population using Eq. (8). The parameters of Table V can be used in Eq. (16) to compute the difference between the scaled behavior of a single magnet [using Eq. (8)] and the 4D fits. The maximum residual error between these two can be taken as a measure of the quality of the scaling. The histograms and the log-normal distributions for the three powering history parameters are shown in Fig. 17. The use of the log-normal distributions is justified by the reasons described in Sec. II B. Because of the modest number of measurements, the histograms in Fig. 17 may not clearly indicate a log-normal distribution; however, the goodness of fit is confirmed by the Kolmogorov-Smirnov test [16]. As done earlier, the medians can be taken as an indication of the residual error for a magnet selected at random.

V. CONCLUSIONS

The decay and snap-back behavior of a set of several magnets in different magnetic states can be deduced using simple models of the data. We have given suitable mathematical models for the scaling laws and shown how to apply them to represent a portion or the whole of the LHC ring. Following the discussion of our results, the basic information to establish and adapt the scaling can be derived from (a) the series measurements in operating conditions, available on a sample of the dipole magnets, (b) extended measurements on selected magnets that will be available as an offline reference for LHC operation, and (c) direct beam measurements, e.g., taken during machine development time. In the case of measurements on a single magnet, the residual error of the scaled predictions does not depend drastically on the magnet selected, so that the scaling of a single magnet to a portion or the whole of the LHC ring will not be a critical process. In practice, following the reasoning of this paper, half of the magnets produced can be used as LHC references. Table VI reports a summary of the maximum expected residual errors due to the dynamic model and the scaling procedure. For the injection plateau, this estimate is obtained as the quadratic sum of the residual error on the decay and on the prediction of the powering history dependence.
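As a small illustration of that quadrature combination (the two input values below are hypothetical, not the Table VI entries):

```python
import math

def total_expected_error(*components):
    """Combine independent residual-error contributions in quadrature, as done
    for the injection-plateau estimate."""
    return math.sqrt(sum(c * c for c in components))

# Hypothetical example: decay-scaling residual plus powering-history residual.
print(round(total_expected_error(0.06, 0.10), 3))   # -> 0.117 units
```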
To put these values in perspective, the maximum residual sextupole error corresponds to about 7 units of chromaticity in the LHC, which is an excellent result.
6,797
2006-01-09T00:00:00.000
[ "Physics", "Mathematics" ]
Thermal Stability, Formability, and Mechanical Properties of a High-Strength Rolled Flame-Resistant Magnesium Alloy alloy used in this study was twin roll cast (TRC) Mg–10Al–0.2Mn–1Ca (mass%) alloy (AMX1001). The material properties of this alloy were compared with those of samples of of fine-grained rolled A6N01 alloy. A study of the damping properties of various alloys showed that they improved in the order steel ≤ aluminum alloys ≤ magnesium alloys. Overall, the properties of high-strength AMX1001 rolled sheet are superior to those of fine-grained aluminum alloys. In particular, this Mg alloy shows excellent thermal stability, damping properties and formability. Introduction Lightweight magnesium (Mg) alloys with excellent shock-absorption properties are being actively adopted for electronic information devices and automotive parts [1]. Mg alloys shows excellent environmental properties because of their light weight, which can lead to improved energy efficiency and hence a reduction in emissions of carbon dioxide [1][2]. For use in structural applications, Mg alloys need to have adequate ductility, thermal stability, and strength. However, Mg alloys often exhibit low ductility, low tensile yield strength, and poor formability as a result of limited slip in their hexagonal close-packed structures [2]. The known ways for effectively improving the mechanical properties and formability of Mg alloys include grain refinement [3][4] and control of the texture of the alloy [4][5]; both these techniques promote prismatic slip and facilitate the creation of large plastic deformations. It is well known that the ignition temperature of Mg alloys is lower than that of other materials for vehicles [6]. However, the ignition temperature of Mg alloys can be markedly raised by the presence of small amounts of calcium (Ca) [6]. Recently, AZX and AMX (X = Ca) alloys have been shown to have higher ignition temperatures than other Mg alloys [6][7]. The effect of adding Ca to improve the flame resistance of Mg alloys has been demonstrated experimentally, as shown in Figure 1. At a temperature at 550 °C, AZ61 Mg alloy ignites, the surface of A6N01 alloy begins to change color, and AZX611 alloy is unaffected. In general, Mg alloys that contain other elements are known to have greatly enhanced mechanical properties, but their ductility can be maintained only processing the cast metal through extrusion and/or plastic deformation [10][11]. It has been suggested that grain refinement of the Mg phase occurs during extrusion deformation [3,[10][11][12]. The strength and microstructure of extruded Mg-Ca and Mg-Ca-Al(Zn) [6][7][9][10][11][12][13] alloys have been examined, together with the distribution of compounds within these materials [7,12,15]. Most of the plastic deformation processes to which Ca-containing Mg alloys have been subjected are extrusion processes [9][10], and there have been few attempts to examine improvements in the strength of these materials by means of rolling and working processes. It is widely known that the plastic deformation of alloys of Mg with rare earth (RE) metals [8] or Ca [11][12] requires many working cycles and high processing temperatures in comparison with commercial Mg alloys [5,7]. If wrought Mg materials are to be more widely used, it will be important to develop techniques for the production of rolled sheets in addition to extruded materials. 
It will also be necessary to elucidate the mechanical properties of such materials and to establish a rolling process for Mg alloys that is faster than the current extrusion processes. It has been reported that a rolling process has been carried out at processing temperatures (sample temperature, roll surface temperature, and reheating temperature) above the static recrystallization temperature [6][7][8][9][10][11][12][13]. However, there have been no previous studies on the relationship between the microstructural changes responsible for strength enhancement and rolling processes without reheating or of the maximum reduction in thickness achievable in single-pass rolling. In this study, we investigated the changes in the microstructure and tensile properties at room temperature produced by controlled rolling of samples of Mg-Al-Mn-Ca cast alloy produced by a twin-roll casting process. The heat resistance, formability, and damping properties of the rolled sheet produced were compared with those of the A6N01 alloy currently used in highspeed rail vehicles [14]. Appearance of samples of A6N01, AZ61+0.5Ca and AZ61+1Ca alloys subjected to flame-resistance testing (a). Ignition temperature of Ca addition magnesium AZX611 alloy (c) higher than AZ61 (b) magnesium alloys. Experimental procedure The alloy used in this study was twin roll cast (TRC) Mg-10Al-0.2Mn-1Ca (mass%) alloy (AMX1001). The material properties of this alloy were compared with those of samples of Mg-3Al-1Zn-1Ca (AZX311) alloy and Mg-6Al-1Zn-1Ca (AZX611) alloy. Ingots were prepared in an electric furnace in an atmosphere of argon. The as-received samples of the TRC material measured 100 mm wide by 2000 mm long by 4 mm thick. Samples for rolling, measuring 60 mm wide and 120 mm long, were cut from the as-received materials. The TRC direction and rolling direction were parallel to one another, and the microstructure was observed from the direction perpendicular to the direction of rolling and casting. Rolling was carried out on samples preheated to 100 to 400 °C for 10 min in a furnace. The surface temperature of the rollers was maintained at 250 °C by means of embedded heating elements. The sample was rolled from a thickness of 4 mm to one of 1 mm in several passes (1 mm per pass) without reheating during the rolling process, or by single-pass rolling. The roll diameter was 180 mm and roll speed was set at 5, 10, 15, or 25 m/min. We chose to roll the sheets to a thickness of 1 mm to permit comparison of the mechanical properties of sheets produced by multipass rolling with those of sheets rolled in a single pass. The samples were cooled with water within 5 s of the final rolling pass. The AMX1001 alloy consisted of an α-Mg phase and Al-Ca compounds [13]. Samples for tensile testing with gauge sections 2.5 mm in width and 15 mm in length were machined from samples of the rolled and annealed materials. Tensile tests were carried out at an initial strain rate of 5.0 × 10 -4 s -1 at room temperature. Samples were annealed at temperatures of between 100 and 400 °C in an electric furnace for various times between 1 and 1000 h, with subsequent cooling in water. The formability of the rolled sheets was investigated by conical cup tests performed temperatures between room temperature and 250 °C at an initial strain rate of 2.7 × 10 1 s -1 . The microstructures of the rolled and annealed samples were observed by optical microscopy (OM), scanning electron microscopy (SEM), and electron backscattering diffraction (EBSD). 
The optical and Electron probe micro-analyser (EPMA) maps of as-cast AMX1001 alloy are shown in Figure 2. The initial mean grain size of AMX1001 cast alloy was 53 µm. The brighter areas in Figure 2 correspond to Al 2 Ca compounds. The Al 2 Ca compounds were present as networks and as coarse agglomerations. Structure of the cast materials and the limited reduction in thickness diagram Rapidly cooled AMX1001 TRC alloy (the as-received material) has a finer grain structure than AZX311 or AZX611 gravity-cast alloys. However, the mean grain size of the AZX311 and AZX611 alloys was similar to that of AMX1001 TRC. The initial optical microstructure of these materials is shown in Figure 3. It is necessary to know the relationship between the sample temperature and the maximal reduction in thickness per pass in order to prepare thin sheets without cracking. Figure 4(a) shows the maximum reduction in thickness per pass in strip pressing of samples of AZX311, AZX611, and AMX1001 cast alloy; optical micrographs of samples of AZX311 and AMX1001 subjected to single-pass rolling at 200 °C are shown in Figure 4(b) and (c). The maximum reduction in thickness at a deformation temperature of 200 °C for samples of AZX311 and AZX611 cast alloys was 9%, whereas that of AMX1001 cast alloy was 30% [ Figure 4(a)]. This shows that AMX1001 TRC alloy has a higher deformability than AZX311 and AZX611 cast alloys. Furthermore, the maximum reduction in thickness reached 50% for AMX1001 TRC alloy when the deformation temperature was raised to 300 °C; the maximum reduction in thickness therefore increases significantly at deformation temperatures between 200 and 300 °C. Optical micrographs recorded after applying a reduction in thickness of 20% to AMX1001 TRC alloy at a sample temperature at 200 °C and a roll surface temperature of 250 °C are shown in Figure 4(b) and (c). It can be seen that, in comparison with the cast material, shear deformation is introduced into the microstructure. As will be discussed later, dynamic recrystallization (DRX) occurs on elevating the sample temperature after the rolling process. As a result of the rolling process, the α-Mg phase becomes finer, Al-Ca compounds are finely crushed, and structural rearrangement occur; furthermore, the Mg phase and the Al-Ca compounds exhibit a lamellar structure. Curves showing the maximum reduction in thickness of extruded AZX311 and AZX611 alloys are presented in Figure 5 [7]. The maximum reduction in thickness of AMX1001 cast material was the same as that of extruded AZX311 and AZX611 alloys. In other words, the TRC material shows a good rollability in comparison with the gravity-cast materials when the mean grain size is less than 100 µm. The TRC process, which involves rapid cooling, therefore has advantages in terms of the rollability of the product, in addition to its greater productivity. By the way, Figure 5 shows that the maximum reduction in thickness of AZX611 alloy decreases when the deformation temperature is increased to more than 400 °C. This decrease is probably the result of growth of grains of α-Mg phase and the proximity of the annealing temperature to the solution-treatment temperature [13]. These observations are consistent with the forging and molding properties of AZ31 alloys [15] and Mg-Zn-Y alloys [8], and also with the fact that AZ31 alloy is a nonaging alloy, whereas AZ61 alloy is an aging alloy. 
In other words, the roll surface temperature and the reduction in thickness are important factors in strip processing of AMX1001, AZX311, and AZX611 alloys, and a fine dispersion of Al 2 Ca compounds occurs as a result of the refinement of the α-Mg phase. Effects of rolling conditions on the microstructure of flame-resistant Mg alloy To investigate the effect of rolling conditions and grain refinement, we used 10-mm-thick samples of AZX611 gravity-cast material with a coarse grain, because increasing the total reduction in thickness eliminated the fine structure of the cast material. MPa, its ultimate tensile strength (UTS) was 350 MPa, and its elongation was 4%. The YS and UTS of the 400 °C rolled sheet were 220 and 270 MPa, respectively, and its elongation was 11%. In the single-pass rolling at a sample temperature of 400 °C, we were able to reduce the thickness by 90%, because the sample temperature was close to the solution temperature and the Mg phase and the Al-Ca compounds could be easily deformed. The YS and UTS of the resulting sample were lower than those of the alloy sample rolled at 200 °C, but the elongation was improved. The difference between the temperature of the sample (200 °C) and the roll surface temperature (250 °C) suppresses loss of heat during the rolling process, thereby inducing DRX and increasing plastic deformability. Optical micrographs of the 200 °C and 400°C rolled materials are shown in Figure 6(b) and (c). The Al-Ca compounds are finely crushed and dispersed during the rolling process, and fine Al-Ca compounds are formed at grain boundaries. It is well known that finely crushing Al-Ca compounds contributes to control of grain growth [7]. A comparison of the textures in optical micrographs of the 200 °C rolled sheet after nine passes and in the 400 °C rolled sheet after a single pass showed that the former contained relatively equiaxial grains, whereas the latter showed a microcrystalline structure between the grain boundaries and a completely uniform texture was not formed. Therefore, strip processing for a few passes under a small load to produce a uniform texture is effective in reducing the total load and increasing the strength of the alloy without reheating. In the sample rolled at 200 °C, the recrystallized region amounted to 81% of the rolled sheet, and the mean grain size was reduced from 1140 µm to 3.5 µm (the mean grain sizes in the recrystallized and the nonrecrystallized regions were 1.5 µm and 9.4 µm, respectively) ( Figure 7a). The sample and roll surface temperatures of 200 °C and 250 °C, respectively, are close to the static and dynamic recrystallization temperatures of Mg alloy [10]. The microstructure was refined by DRX, resulting in a duplex grain structure consisting of partially elongated grains, and shear deformation was observed. As can be seen in Figs. 7(b-c), the intensity of the basal texture of the AZX611 rolled sheet was 10.2; the intensities of the basal textures of the recrystallized and nonrecrystallized regions were 9.8 and 19.6, respectively. As shown in Figures 7(d-f), the sample rolled at 400°C contained a region with elongated grains and fine grains. The recrystallized region of the rolled sheet accounted for 47% of the total, and the mean grain sizes of the recrystallized and nonrecrystallized regions were 2.9 and 27.8 µm, respectively. 
As can be seen in Figures 7(ac), the intensity of the basal texture of the AZX611 rolled sheet was 13.9, and the intensities of the basal textures of the recrystallized and nonrecrystallized regions were 4.5 and 21.4, respectively. The sample temperature of 400 °C is near the grain growth and solution temperature of AZX611 and AZ61 alloys [7,[12][13]. The microstructure was refined by DRX, but grain growth occurred immediately after rolling. As a result, the alloy rolled at 400 °C showed a duplex grain structure, the intensity of the basal texture of the nonrecrystallized region was higher than that for the sample rolled at 200 °C, and the area frequency of the nonrecrystallized region was more than 50%. Therefore, if there is a difference in the sample temperature, but no observed significant differences in the mean grain size, provided the nonrecrystallized region is taken into consideration, the grain-refinement mechanism is dependent on the sample temperature and the reduction in thickness. Next, we focused on the sample temperature after rolling in the nine-pass process that produced high-strength AZX611 rolled sheet. Figure 8(a) shows the results of measurements of the sample temperature after rolling and the mechanical properties for various total reductions in thickness. Figures 8(b-d) show optical micrographs for the several reductions in thickness, as indicated in Figure 8(a). From Figure 8(a), it is apparent that when the total reduction in thickness was less than 60%, the sample temperature after rolling increased slightly as the number of rolling passes increased. On the other hand, the sample temperature after the rolling process did not show any increase for a total reduction in thickness of more than 60%. The strength increased after a total reduction in thickness of 60%, but the elongation was reduced. Figures 8(b-d) show that the dendrite structure was also elongated in the rolling direction after a total reduction in thickness of 40% and that Al-Ca compounds remained in the grain boundary. The existence of shear deformation of the microstructure when the total reduction in thickness reached 60% was confirmed and, at this stage, Al-Ca compounds were beginning to be crushed. When the total reduction in thickness reached 80%, bending of the microstructure occurred as a result of shear deformation and elongation of the grain in the direction parallel to the rolling direction. By performing multipass rolling and introducing shear deformation by the rolling process, we were able to increase the strength of the Mg phase by DRX, while the Al-Ca compounds were crushed. This was possible because the deformation process induced heating of the alloy during each rolling pass, thereby maintaining the sample at a temperature near its solution temperature. Al-Ca compounds that had been crushed were rearranged in layers in the Mg phase in the direction parallel to the rolling direction. Production of high-strength Mg-10Al-0.2Mn-1Ca rolled sheet by a rolling process From Figures 2 and 3, the initial grain size of AMX1001 TRC alloy was lower, by a ratio of 1:20, than the mean grain size of AZX311 and AZX611 gravity-cast alloy, so that improved rollability would be expected. Therefore, a sample of AMX1001 was rolled from a thickness of 4 mm to 1 mm in three passes. Figures 9(a-c) show the relationship between the mechanical properties and various rolling conditions for AZX1001 alloy. 
Figure 9(a) shows that, as a result of the multipass rolling process, the YS decreased slightly from 390 to 340 MPa and UTS also decreased slightly from 410 to 380 MPa, probably due to the increase in sample temperature from 100 to 350 °C, whereas the elongation improved from 3.5 to 8.5%. Additionally, it was clear that the YS and UTS of samples subjected to single-pass rolling were lower than those of samples subjected to multipass rolling process, but the single-pass rolling markedly improved the elongation to 20% [ Figure 9(b)]. These results suggest that the elongation of rolled materials depends on the sample temperature, whereas the YS and UTS depend on the number of rolling passes. The multipass rolling regime that we used did not involve reheating between individual passes, and we suggest that recovery and DRX do not act effectively during the initial passes of the rolling process. From Figures 7 and 8, it is clear, however, that recovery and DRX do act effectively when the reduction in thickness reaches more than 70%. Figure 9(c) shows that the YS and UTS decrease with increasing rolling speed for a total reduction in thickness of 80%, even with a multipass rolling process. This is because there is an increase in the sample temperature after rolling, as in the case of extrusion [8]. It is possible to combine single-and multipass rolling processes to suit particular applications; the rolling speed and sample temperature are the important factors in the rolling process. In addition, a total reduction ratio of 60% or more is necessary to produce high-strength rolled material. Figures 10(a-c) show optical micrographs of the as-cast sample and samples of rolled sheet subjected to rolling for three passes, together with the corresponding IPF and PF maps for a sheet rolled at a speed of 10 m/min at a sample temperature of 200 °C and a roll surface temperature of 250 °C. The microstructure of the cast ingot consisted of a coarse grain structure with Al 2 Ca and Al compounds. After the three-pass rolling process, the recrystallized region of the rolled sheet accounted for 70% of the total, and the grain size was reduced from 53 to 3.8 µm. The rolling temperature of 200 °C is close to the recrystallization temperature of both AZX and AZ series alloys [11][12]15]. The microstructure was refined by DRX to form a duplex grain structure that was partially elongated, and shear deformation was observed into the elongated grain. To increase the extent of the recrystallized region, an increase in the rolling temperature or a greater total reduction thickness was required, but the strength then tended to decrease because of the influence of grain growth. The Al-Ca compounds were finely crushed and dispersed during the rolling process, and therefore these compounds contributed to control of the grain growth that would otherwise result from an increase in the rolling temperature. As can be seen in Figure 10(c), the intensity of the basal texture of the AMX1001 rolled sheet was 8.2. This basal texture was lower than those of AZ31 alloy samples subjected a single rolling process at 200 or 400 °C, until the rolling reduction reached 86%, at which point the intensities were 7 and 5, respectively [16]. Figure 11 shows the tensile properties of the ascast and rolled (single-and three-pass schedules) alloys for a roll surface temperature of 250°C and a rolling speed of 10 m/min; multipass rolling was performed at 100 °C, and singlepass rolling was performed at a sample temperature of 400 °C. 
The YS and UTS of the sheet subjected to three-pass rolling were 380 and 400 MPa, respectively, and the elongation was 8%. A sheet rolled from a thickness of 4 mm to one of 1 mm in a single pass showed a UTS of 320 MPa, a YS of 220 MPa, and an elongation of 15%. The failure to improve the strength of the material by the three-pass rolling process was the result of the heat generated during the metal-forming process. In fabricating the 1-mm-thick rolled sheet by the three-pass rolling process, grain refinement of the Mg phase and crushing of the Al-Ca compounds occurred. The difference between the temperature of the sample and that of the roll suppresses heat removal during the rolling process, thereby inducing DRX and increasing deformability. If no reheating is incorporated into the rolling process, the sample and roll-surface temperatures are close to the recrystallization temperature of the Mg alloy [7,15]. As a result, it is possible to induce DRX through the increase in sample temperature generated by repeated rolling. As a result of the temperature difference between the sample and roll surface, the sample temperature approaches the roll surface temperature more closely as the number of passes increases, making this an important factor. as-cast Single-pass three-passes Figure 11. Nominal stress-strain curves for as-rolled AZX1001 alloy samples subjected to multipass rolling at 200 °C or single pass rolling at 400 °C. Figure 12(a) shows the relationship between the mechanical properties and the annealing temperature for rolled sheets of AMX1001. Figure 12(b)-(e) shows optical micrographs of rolled sheets of AMX1001 annealed at various temperatures between 150 and 400 °C. The YS and UTS decreased by about 40 MPa and the elongation increased to 11% when the annealing temperature was raised from 150 °C to 200 °C. A further increase in the annealing temperature to 300 °C resulted in a significant decrease in the YS to 260 MPa and in the UTS to 310 MPa, whereas differences in the YS, UTS, and elongation for annealing temperatures between 300 and 400 °C were minimal. An AMX1001 rolled sheet subjected to annealing at 200 °C for 1 h showed no significant reduction in strength or tensile properties, which were are similar to those of high-strength Mg alloy. In other words, the AMX1001 alloy showed excellent thermal stability as a result of the addition of a small amount of Ca [10][11]. The addition of Al, however, did not appear to have any effect on the thermal stability. The changes in the strength and elongation of the AMX1001 rolled sheet suggest that static recrystallization occurs at 200 to 250 °C [Figures 12 (a) and (b)-(c)]. The deterioration in mechanical properties can be effectively controlled by suppressing grain growth. Figure 13 shows that Al-Ca compounds form along the grain boundaries after annealing, but that some compounds form in the grain. Focusing on the grain size, fine grains are seen in the material annealed at 300 °C, whereas 18-µm grains are seen in the material annealed at 350 °C; furthermore, there is also a decrease in the formation of Al-Ca compounds. As the annealing temperature was increased, the Al-Ca compounds were able to control the grain growth, and grain coarsening occurred rapidly at temperatures close to the solution temperature. The other elements in the alloy are considered to be partially soluble at 400 °C, and the formation of Al 2 Ca compounds has been reported to be effective in improving the ductility of alloys [13]. 
The alloy in this work, in which Al 2 Ca compounds were formed as a result of the addition of 1 mass% of Ca, is considered to retain its ductility while showing a greater strength and larger elongation than other Mg alloys. The rolled samples of AMX1001 alloy did not show any marked loss of strength or changes in microstructure on annealing at 200 °C for 1 h. We therefore extended the annealing time to 1000 h to test the thermal stability of the AMX1001 rolled sheet. Figure 14(a) shows the YS and elongation for a sample of rolled AMX1001 alloy annealed at 200 °C for various times up to 1000 h, as tested at room temperature. The YS and UTS decreased gradually with increasing annealing time, whereas the elongation markedly improved. The tensile properties of samples annealed at 200 °C for 1000 h did not depend on the annealing time. Figure 14(b) shows optical micrographs of samples annealed at 200 °C for various annealing time. Figure 14(a) shows that when the annealing temperature was maintained at 200°C for 1000 h, even though the YS was reduced from 390 to 280 MPa, the elongation improved from 8 to 22%. Although the α-Mg phase grew from 4 to 10 µm, and a lamellar microstructure in which the Al 2 Ca compounds was finely dispersed in the α-Mg phase was formed, no substantial changes in the microstructure were observed, even after annealing for 1000 h. An examination of the optical micrographs in Figure 14(b) shows that when the sample was annealed for 1000 h, the lamellar microstructures of the α-Mg and Al 2 Ca compounds were the same as those observed before annealing. In other words, degradation of the mechanical properties of the AMX1001 rolled sheet after annealing at 200 °C is due to static recovery of the alloy. With regard to the reinforcing factor of this material, Al 2 Ca compounds control the grain growth of α-Mg phase even after annealing at 200 °C for 1000 h. The formation of Al 2 Ca compounds by adding 1 mass% of Ca is therefore an effective way of increasing the heat resistance of the alloy. The YS and UTS of rolled AMX1001 alloy were lower than those of Mg-Zn-Y extruded alloy [8] after heat treatment at 200 °C for 1000 h; however, AMX1001 alloy can be fabricated into thin rolled sheet at rolling temperature below 200 °C with a small number of passes. In the case of Mg alloys, it is important that they retain a high strength and a high ductility if they are to be used as industrial materials, and the Ca-containing Mg alloy AMX1001 is a material that possesses such properties. Formability and damping property of flame-resistant magnesium alloy The formability of AMX1001 high-strength rolled sheets was examined by means of a conical cup tests performed at room temperature to 250 °C and an initial strain rate of 2.7 × 10 -1 s -1 . Specimens measuring 36 mm in diameter were cut from AMX1001 rolled sheet and subjected to conical-cup tests, the results of which are shown in the Figure 15. For comparison, Figure 15 also shows the conical cup value for AZ61 Mg rolled sheet and for high-strength rolled sheet 6N01 Al alloy (Al-0.58Mg-0.6Si mass% alloy) [14], which has a YS of 480 MPa, a UTS of 497 MPa, and an elongation of 8%, and is used in high-speed rail vehicles. The conical cup value of AMX1001 rolled sheet was 27 at 150 °C or above; this value was not significantly improved by increasing the testing temperature to 200 °C. 
At test temperatures of up to 100 °C, the 6N01 Al rolled sheet showed a better formability than the AZ61 and AMX1001 Mg rolled sheets; however, at a test temperature of 150 °C, the conical cup values for the Mg and Al alloys were very similar. The conical cup value for the AMX1001 rolled sheet was therefore excellent at test temperatures of 150 °C or more. At high testing temperatures, samples of AZ61 and AMX1001 alloy produced by low-load processing showed higher conical cup values than did the high-strength 6N01 Al rolled sheet. AMX1001 rolled sheet fabricated by the rolling process described in this study therefore showed better formability than fine-grained 6N01 Al rolled sheet. Figure 15. Appearance of the conical cup samples and the relationship between the conical cup value and the testing temperature; the conical cup tests were performed at various temperatures at an initial strain rate of 2.7 × 10^-1 s^-1. Figure 16 shows optical micrographs and IPF maps of AMX1001 rolled sheet after conical cup testing at 150 °C and 200 °C. The observation area was 500 µm from the fracture tips of the crown part. The microstructure after conical cup testing at 150 °C showed elongated grains in comparison with the as-rolled microstructure shown in Figure 10. Along with the improvement in formability demonstrated by the conical cup test, equiaxial grains were formed in the microstructure at a testing temperature of 200 °C; additionally, the nonrecrystallized region changed to a recrystallized region, and a fine grain structure formed without elongation when the deformation temperature was high. In other words, to improve the formability of the Mg alloy, it is necessary to select an appropriate temperature and to make use of DRX during plastic deformation. The IPF maps in Figure 16 show that at a testing temperature of 200 °C the crystal orientation was random, and the texture of the sample after the conical cup test was weak in comparison with that observed after testing at 150 °C. We found that at a testing temperature of 200 °C, DRX occurred during plastic deformation. It is well known that Mg alloys have excellent vibration-absorbing properties. The vibration properties of Mg alloys are often reported [17][18], but few comparisons have been made with steel or Al alloys. We examined the damping properties of samples of rolled and/or extruded steel, Al alloys, and Mg alloys by analyzing the waveforms produced after a tip displacement of 0.45 mm in a cantilever vibration test. The cantilever test specimens measured 18 mm wide, 200 mm long, and 1 mm thick. We investigated the damping ratio at a strain amplitude of 2 × 10^-4 and a displacement of 0.45 mm at the tip of the cantilever. The damping properties of steel (SUS 304), aluminum alloys (A7075, A5083, A6063, A6N01), and magnesium alloys (AZ series, Mg-RE, AZX, and AMX) are shown in Figure 17. From Figure 17, the damping properties improved in the order steel ≤ aluminum alloys ≤ magnesium alloys. The damping ratios of the Mg alloys depended on the type of alloy, as in the case of the mechanical properties, and they were affected by the nature of the alloying elements. However, the variation of the damping ratio with changes in elemental content was small. The material type and the damping ratio showed a linear relationship.
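The extraction method for the damping ratio is not stated above; a common approach, shown here only as an assumed sketch with hypothetical peak readings, uses the logarithmic decrement of successive peak amplitudes of the decaying waveform:

```python
import numpy as np

def damping_ratio_from_peaks(peak_amplitudes):
    """Estimate the damping ratio of a free cantilever vibration from the
    successive peak amplitudes of the decaying waveform, using the
    logarithmic decrement delta = ln(x_i / x_{i+1})."""
    x = np.asarray(peak_amplitudes, dtype=float)
    delta = np.mean(np.log(x[:-1] / x[1:]))          # average log decrement
    return delta / np.sqrt(4.0 * np.pi**2 + delta**2)

# Hypothetical peak amplitudes (mm) read from a decaying waveform.
print(round(damping_ratio_from_peaks([0.45, 0.36, 0.29, 0.23]), 4))
```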
In the case of Mg alloys, those lying in the region between the AZ series and the Mg-RE alloys, where the flame-resistant Mg alloys occur, had damping properties inferior to those of the commercial AZ-series Mg alloys, showing that these properties are weakened by the addition of Ca. However, no effect of the addition of Al on the damping ratio could be identified.

Summary

We investigated various properties of flame-resistant Mg alloys. By subjecting TRC materials to a total reduction in thickness of up to 75% by multipass rolling without reheating, we produced a rolled sheet with a tensile strength of 400 MPa and an elongation of 8%. During the multipass rolling process, grain refinement occurred as a result of dynamic recrystallization of the Mg phase and crushing of Al2Ca compounds. A study of the heat-treatment properties of AMX1001 high-strength rolled sheet found that the yield strength was reduced from 330 to 250 MPa on heating at temperatures between 200 and 300 °C for 1 h, whereas the elongation improved from 8 to 17%, suggesting that static recrystallization had occurred. When we investigated the mechanical properties after heating the material at 200 °C for 1000 h, the yield strength remained at 280 MPa and the elongation improved to 22%. In other words, AMX1001 rolled sheet has excellent thermal stability. Furthermore, the conical cup value for AMX1001 rolled sheet reached its maximum at test temperatures of 150 °C or more, and this value was superior to that of fine-grained rolled A6N01 alloy. A study of the damping properties of various alloys showed that they improved in the order steel ≤ aluminum alloys ≤ magnesium alloys. Overall, the properties of high-strength AMX1001 rolled sheet are superior to those of fine-grained aluminum alloys. In particular, this Mg alloy shows excellent thermal stability, damping properties, and formability.
7,530.6
2014-06-11T00:00:00.000
[ "Materials Science" ]
Fusion to Snowdrop Lectin Magnifies the Oral Activity of Insecticidal ω-Hexatoxin-Hv1a Peptide by Enabling Its Delivery to the Central Nervous System

Background The spider-venom peptide ω-hexatoxin-Hv1a (Hv1a) targets insect voltage-gated calcium channels, acting directly at sites within the central nervous system. It is potently insecticidal when injected into a wide variety of insect pests, but it has limited oral toxicity. We examined the ability of snowdrop lectin (GNA), which is capable of traversing the insect gut epithelium, to act as a “carrier” in order to enhance the oral activity of Hv1a. Methodology/Principal Findings A synthetic Hv1a/GNA fusion protein was produced by recombinant expression in the yeast Pichia pastoris. When injected into Mamestra brassicae larvae, the insecticidal activity of the Hv1a/GNA fusion protein was similar to that of recombinant Hv1a. However, when proteins were delivered orally via droplet feeding assays, Hv1a/GNA, but not Hv1a alone, caused a significant reduction in growth and survival of fifth stadium Mamestra brassicae (cabbage moth) larvae. Feeding second stadium larvae on leaf discs coated with Hv1a/GNA (0.1–0.2% w/v) caused ≥80% larval mortality within 10 days, whereas leaf discs coated with GNA (0.2% w/v) showed no acute effects. Intact Hv1a/GNA fusion protein was delivered to the insect haemolymph following ingestion, as shown by Western blotting. Immunoblotting of nerve cords dissected from larvae following injection of GNA or Hv1a/GNA showed high levels of bound proteins. When insects were injected with, or fed on, fluorescently labelled GNA or Hv1a/GNA, fluorescence was detected specifically associated with the central nerve cord. Conclusions/Significance In addition to mediating transport of Hv1a across the gut epithelium in lepidopteran larvae, GNA is also capable of delivering Hv1a to sites of action within the insect central nervous system. We propose that fusion to GNA provides a general mechanism for dramatically enhancing the oral activity of insecticidal peptides and proteins.

Introduction

Arthropod venoms contain a rich diversity of compounds, including a significant number of neurotoxic disulphide-rich peptides. Most spiders prey exclusively upon insects and other arthropods, and thus it is not surprising that many spider-venom peptides have been shown to modulate the activity of arthropod ion channels. One such example is ω-hexatoxin-Hv1a (formerly ω-atracotoxin-Hv1a; hereafter referred to as Hv1a), the best-studied member of a family of 36-37 residue insecticidal neurotoxins isolated from the venom of the Australian funnel-web spider Hadronyche versuta. Hv1a specifically inhibits insect but not mammalian voltage-gated calcium channels [1,2,3]. Structurally, Hv1a comprises a disordered N-terminus (residues 1-3), a disulphide-rich globular core (residues 4-21), and a highly conserved C-terminal β-hairpin (residues 22-37) that protrudes from the disulphide-rich core and contains the key residues for insecticidal activity [4,2]. The three disulphide bonds form an inhibitor cystine knot motif that provides many spider-venom peptides with extreme chemical and thermal stability, as well as resistance to proteases [5,6]. Hv1a is highly toxic by injection towards many different insect pests, including species from the orders Lepidoptera, Coleoptera, Diptera and Dictyoptera [7][8][9][10]. Its potency and phyletic specificity make Hv1a an ideal candidate for the development of novel bioinsecticides.
However, whilst toxic by injection, Hv1a and many other insecticidal venom peptides are typically ineffective, or at least much less potent, when delivered orally, and this is thought to be due to ineffective delivery of the toxins to their sites of action in the central (CNS) or peripheral nervous system (PNS). This lack of oral activity clearly limits their potential application as bioinsecticides. In order to access the nervous system after oral delivery, peptide toxins must be resistant to proteolytic degradation in the insect gut, and they must be able to cross the insect gut epithelium. The latter factor is thought to be the major limitation in oral toxicity of protein and peptide toxins. The mannose-specific lectin GNA (Galanthus nivalis agglutinin; snowdrop lectin) is resistant to proteolytic activity in the insect gut. Moreover, following ingestion, GNA binds to gut epithelial glycoproteins and is transported into the haemolymph [11]. This property of GNA can be used to transport peptides across the insect gut [12]. Previous results have shown that the oral insecticidal activity of peptides derived from the venom of spiders and scorpions can be dramatically enhanced by fusing them to GNA [13][14][15][16], presumably because GNA mediates their delivery to the hemolymph where the toxins can subsequently reach their sites of action in the CNS or PNS. The present paper reports on the insecticidal activity of a fusion protein comprising Hv1a linked to the N-terminus of GNA. We show that the Hv1a/GNA fusion protein, expressed in yeast and purified from culture supernatant, is biologically active by injection, indicating that fusion to GNA does not compromise the insecticidal activity of Hv1a. Whereas Hv1a alone was not orally active against the cabbage moth Mamestra brassicae, the Hv1a/GNA fusion protein had significant oral activity against this lepidopteran crop pest. Moreover, for the first time, we present direct evidence for binding of orally delivered GNA to the CNS of lepidopteran larvae. This suggests that in addition to providing a mechanism for delivery of peptide toxins across the insect gut, GNA may further facilitate toxin activity by delivering covalently attached toxins to the CNS of insects. Thus, fusion to GNA provides a general mechanism for dramatically enhancing the oral activity of insecticidal peptide neurotoxins. Synthetic Gene and Fusion Protein Construct Assembly A synthetic gene encoding the mature Hv1a amino acid sequence was assembled using a series of overlapping oligonucleotides, with codon usage optimised for expression in yeast (Table 1). Following assembly, the coding sequence was amplified by PCR and ligated into a yeast expression vector (derived from pGAPZαB) that contained a sequence coding for the mature GNA polypeptide (amino acid residues 1-105). The 37-residue Hv1a peptide was fused to the N-terminus of GNA via a tri-alanine linker sequence as depicted in Fig. 1A. The Hv1a/GNA construct was cloned such that the N-terminal yeast α-factor prepro-sequence would direct the expressed protein to the yeast secretory pathway. The final Hv1a/GNA fusion protein is predicted to contain an additional two alanine residues at the N-terminus (after removal of the prepro-sequence) and terminate at residue 105 of the mature GNA protein, giving a predicted molecular mass of 16.36 kDa. The Hv1a/GNA-pGAPZαB construct was cloned into E. coli and the coding sequence was verified by DNA sequencing. 
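The 16.36 kDa figure quoted above is a prediction derived from the designed construct. Purely as an illustration of how such a prediction can be computed from an amino acid sequence, the minimal Biopython sketch below uses a placeholder sequence that only mimics the described layout (two residual alanines, a 37-residue toxin moiety, a tri-alanine linker and the 105-residue mature GNA); it is not the actual Hv1a/GNA sequence, which is not reproduced here.

```python
# Minimal sketch (illustrative only): predicting the average molecular mass of a
# fusion construct from its amino acid sequence with Biopython. The sequence
# below is a PLACEHOLDER mimicking the described construct layout; it is NOT
# the real Hv1a/GNA sequence.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

placeholder_fusion = (
    "AA"          # two residual alanines left after prepro-sequence removal
    + "A" * 37    # 37-residue toxin moiety (placeholder residues)
    + "AAA"       # tri-alanine linker
    + "A" * 105   # mature GNA moiety, residues 1-105 (placeholder residues)
)

mass_da = ProteinAnalysis(placeholder_fusion).molecular_weight()
print(f"Predicted average molecular mass: {mass_da / 1000:.2f} kDa")
# With the real fusion sequence substituted in, the same call would be expected
# to return a value close to the 16.36 kDa quoted in the text.
```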
Expression and Purification of Recombinant Hv1a/GNA Fusion Protein DNA from a verified Hv1a/GNA-pGAPZαB clone was linearised, transformed into the protease-deficient P. pastoris strain SMD1168H, and selected on antibiotic-containing plates. Ten clones were analysed for expression of recombinant protein by Western blot (using anti-GNA antibodies) of supernatants derived from small-scale cultures (results not shown). This allowed selection of the best expressing clone for fusion protein production by bench-top fermentation. For fusion protein production, P. pastoris cells were grown in a BioFlo 110 laboratory fermenter. Recombinant GNA was expressed and purified as previously described [14]. The Hv1a/GNA fusion protein was purified from clarified culture supernatant by hydrophobic interaction chromatography followed by a second gel-filtration step to remove high molecular weight contaminating yeast proteins. Two major proteins of ~20 kDa and ~14.5 kDa were recovered following fermentation and purification of recombinant Hv1a/GNA (Fig. 1B). The 20-kDa protein migrates at a higher molecular weight than the 16.36 kDa predicted for intact fusion protein. However, Western blot analysis (Fig. 1C) using anti-GNA and anti-Hv1a antibodies confirmed that the higher molecular weight protein represents intact fusion protein as it is immunoreactive with both anti-GNA and anti-Hv1a antibodies. The lower molecular weight band, which does not show positive immunoreactivity with anti-Hv1a antibodies, represents GNA from which the Hv1a peptide has been cleaved. Analysis of samples taken during fermentation confirmed that cleavage of the fusion protein occurs during expression and not during purification (results not shown). Intact Hv1a/GNA fusion protein was expressed at levels of ~50 mg/l of culture supernatant. The ratio of intact fusion protein to cleaved GNA was consistently 2:1 as judged by SDS-PAGE gels and Western blots. Injection Toxicity of Hv1a/GNA and Hv1a The biological activity of Hv1a/GNA was verified by injection of 5-20 µg of purified fusion protein into fifth stadium M. brassicae larvae (40-70 mg). Injections of comparable molar amounts of recombinant Hv1a (2.3-9.2 µg) were also conducted. Larval mortality occurred over a period of 4 days (Table 2) but was observed predominantly within the first 48 h following injection. Larvae injected with higher doses of fusion protein (10 µg and above) or toxin alone (4.6 µg and above) displayed symptoms of paralysis, and survival was significantly reduced as compared to the control treatment (Kaplan-Meier survival curves; Mantel-Cox log-rank tests; P < 0.001). Levels of mortality were comparable between fusion protein-injected and toxin-injected treatments (e.g., 80% mortality for larvae injected with 92 µg toxin/g insect compared to 90% mortality for larvae injected with 100 µg toxin as a component of fusion protein/g insect). Oral Toxicity of Hv1a/GNA and Hv1a Several experiments were performed to assess whether fusion to GNA was able to improve the oral toxicity of Hv1a. First, fifth stadium M. brassicae larvae were fed daily for four days on droplets containing 40 µg of purified fusion protein or 9.6 µg Hv1a (Fig. 2A). Ingestion of daily droplets of fusion protein was found to result in a complete cessation of larval feeding, as evidenced by the significantly reduced mean weight recorded for this treatment as compared to the control group. 
After four days, 40% of the treated larvae were dead and the remaining insects did not survive to pupation. In striking contrast, no reduction in larval growth as compared to the control BSA treatment was observed for larvae fed on droplets containing Hv1a, indicating that the oral toxicity of Hv1a is dramatically enhanced by fusion to GNA. In a second assay, fourth stadium larvae were fed on a single droplet containing 40 µg of Hv1a/GNA (Fig. 2C) and this was shown to cause a reduction in larval growth as compared to control-fed larvae over a period of approximately six days. By day 7, control larvae had attained their maximum weight, after which a reduction in weight was observed as insects entered the pre-pupal phase (days 6-7). By contrast, larvae that had ingested a single Hv1a/GNA-containing droplet exhibited a reduced growth rate, reaching maximal weight at day 8-9, after which larvae pupated. The oral toxicity of the Hv1a/GNA fusion protein was further investigated by feeding 2nd instar M. brassicae larvae on cabbage discs coated with purified recombinant proteins, an assay that might be more representative of situations in which Hv1a is employed on crops as a foliar bioinsecticide. [Table 1. Oligonucleotide sequences used for assembly and amplification of a synthetic gene encoding the mature Hv1a toxin.] The survival of larvae was significantly reduced when insects were fed on Hv1a/GNA-coated discs (Fig. 3) such that 15% and 20% of larvae remained after 10 days of exposure to discs coated with Hv1a/GNA at concentrations of 0.2% w/w and 0.1% w/w, respectively. In contrast, 80% survival was recorded for larvae reared for 10 days on discs coated with 0.2% w/w GNA, which was not significantly different to the 90% survival recorded for the control (no added protein) treatment. Fusion protein treatment survival curves were significantly different to both the GNA and control treatments (Kaplan-Meier; Mantel-Cox log-rank tests; P < 0.001). Exposure to Hv1a/GNA-coated discs also retarded larval growth in surviving larvae. The reduction in growth was dose-dependent, so that by day 7 the average weight of surviving larvae fed on 0.2% or 0.1% w/w Hv1a/GNA was reduced by 90% and 76%, respectively, compared to the control treatment. GNA was also shown to reduce larval growth, so that by day 7 the average weight of larvae fed 0.2% w/w GNA was reduced by 45% compared to the control treatment. Delivery of Ingested Hv1a/GNA to the Circulatory System and Binding of Injected Hv1a/GNA and GNA to the Central Nerve Chord We have previously shown that GNA is capable of transporting covalently attached peptides across the insect gut into the hemolymph [11,13]. To determine if the toxic effects observed in oral bioassays were attributable to GNA-mediated delivery of Hv1a to the circulatory system of M. brassicae larvae, haemolymph was extracted from insects fed on diets containing Hv1a/GNA and analysed for the presence of fusion protein by Western blotting using anti-GNA antibodies. A representative blot, depicted in Fig. 4A, confirms immunoreactivity of a major band corresponding to the molecular weight of intact fusion protein in samples from larvae fed Hv1a/GNA, but not control insects. As shown previously in Fig. 1C, fusion protein samples contain two GNA-immunoreactive bands corresponding to intact fusion protein and GNA from which the Hv1a peptide has been cleaved. 
Thus, the presence of a second smaller immunoreactive band in haemolymph samples from fusion protein-fed larvae suggests uptake of both intact Hv1a/GNA and cleaved GNA, or cleavage of intact fusion protein after absorption in the insect gut. Cross-reactivity and poor sensitivity of the anti-Hv1a antibodies did not allow the detection of fusion protein or toxin when these antibodies were used to probe Western blots of larval haemolymph. The above results indicate that the major reason for the improved oral activity of Hv1a when it is fused to GNA is the ability of this lectin to mediate delivery of Hv1a to the insect hemolymph. However, we also wondered whether GNA might also be able to enhance delivery of Hv1a to its sites of action in the insect nervous system. To investigate if GNA is able to bind to the nerve tract of lepidopteran larvae, intact nerve chords were dissected from insects injected with either GNA or Hv1a/GNA and analysed by Western blotting using anti-GNA antibodies. Nerve chords and haemolymph samples, pooled from 3-6 insects, were typically extracted 3-12 h following the injection of 10-20 µg of GNA or Hv1a/GNA. Fig. 4B shows positive immunoreactivity of bands corresponding in size to GNA and intact Hv1a/GNA fusion protein in both nerve chord and haemolymph samples taken from injected insects, which suggests that GNA is able to bind to the nerve tract of lepidopteran larvae. Bands corresponding to GNA or Hv1a/GNA fusion protein were not observed in nerve tissue extracted from insects fed on GNA or Hv1a/GNA (at 2.5 mg/5 g wet wt. diet), presumably due to the levels of bound protein being below the limits of detection of the anti-GNA antibodies. Further evidence of the ability of GNA to bind to the central nerve chord was sought by visualisation of nerve chords dissected from insects that had been injected with, or fed on, fluorescently-labelled GNA or Hv1a/GNA. Control treatments were FITC-labelled ovalbumin or FITC alone. The visualisation of nerve chords dissected following injection was carried out on four separate occasions where typically 2-3 nerve chords per treatment were analysed and comparable results obtained. A composite showing different regions of M. brassicae nerve chords from different treatments is presented in Fig. 5. Low background fluorescence was observed in control FITC alone and FITC-labelled ovalbumin nerve chords. By contrast, fluorescence was observed along the entire length of the nerve tracts, including the terminal brain ganglion, of insects injected with FITC-labelled GNA or Hv1a/GNA. Fluorescence appeared to be predominantly localised to the nerve chord sheath. Reduced fluorescence was observed in instances where FITC-labelled GNA had been pre-incubated in the presence of mannose, suggesting that localisation to the nerve chord was mediated by binding of GNA to mannose-containing polypeptides in the nerve chord epithelium. However, binding was not completely inhibited under the conditions tested (results not shown). Similar results were obtained in experiments where larvae had been fed on diets containing FITC-labelled proteins although the levels of fluorescence were lower than those visualised from injected larvae (Fig. 5). This was attributed to lower levels of GNA and Hv1a/GNA being delivered to the circulatory system following ingestion as compared to the levels present in injected insects. 
Hv1a Retains Insecticidal Activity when Fused to GNA Hv1a is the most studied member of the ω-HXTX-1 family of insecticidal toxins isolated from the venom of the Australian funnel web spider Hadronyche versuta [9]. It has been shown to be highly toxic by injection to a wide range of insects [1-3,9]. Toxins of the ω-HXTX-1 family contain three conserved disulphide bonds that form an inhibitor cystine knot motif that is critical for toxin activity. We therefore used Pichia pastoris, a host capable of correctly forming disulphide cross-links, to create a fusion protein containing Hv1a linked to the N-terminal region of the lectin GNA. The use of a secretory signal enabled facile purification of recombinant fusion protein from fermented culture supernatants. As observed previously for SFI1/GNA and ButaIT/GNA [13,14], some cleavage of the Hv1a/GNA fusion protein occurred during expression, despite the use of a protease-deficient host strain. Differences were observed in the degree of proteolysis and cleavage patterns (as assessed by Western blot analysis) for the three fusion proteins, although all appear to be prone to cleavage at the junction between the toxin sequence and GNA. For Hv1a/GNA, a single cleavage site is indicated by the presence of two major proteins in purified fractions, corresponding to intact fusion protein and GNA protein from which the Hv1a toxin has been cleaved. Nevertheless, the majority of expressed Hv1a/GNA was present as intact fusion protein, as evidenced by molecular mass on SDS-PAGE gels and positive immunoreactivity with anti-GNA and anti-Hv1a antibodies. The C-terminus of Hv1a (residues 33-36) includes the sequence VKRC, which is similar to the signal sequence (EKRE) present in the α-factor signal sequence of the expression vector that is cleaved between R and E by the KEX2 gene product. Further analysis is required to establish whether this or another site is the precise location of cleavage between the Hv1a peptide and GNA protein. Previously reported values for toxicity by injection of recombinant and synthetic Hv1a are highly variable, even when considering different species of the same genus. For example, the ED50 reported for synthetic Hv1a against the cotton bollworm Heliothis armigera is 3 nmol/g [7], which is more than 10-fold higher than the PD50 dose of 250 pmol/g reported for the tobacco budworm Heliothis virescens [8]. In our hands, the doses of injected recombinant Hv1a and Hv1a/GNA required to induce flaccid paralysis and significant mortality of fifth stadium M. brassicae larvae were comparable (50-100 µg toxin/g insect, equivalent to 12-25 nmol/g), suggesting that Hv1a activity is not significantly compromised by C-terminal linkage to GNA. However, these doses are somewhat higher than those typically reported for recombinant Hv1a (e.g., LD50 of 77 pmol/g and 716 pmol/g respectively for the housefly Musca domestica and lone star tick Amblyomma americanum; [10]). Differences in the toxicity of Hv1a towards different species must, to a large degree, be determined by differences in the ability of the toxin to disrupt ion channel function. However, variability also derives from the use of different toxicity parameters (e.g., LD50, ED50 and PD50), different sources of toxin (i.e. synthetic, recombinant or native peptide) and the suitability and/or ease of injection. 
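The mass-to-molar conversion quoted above is straightforward to check; taking a molecular mass of roughly 4 kDa for the 37-residue Hv1a peptide (an assumption made only for this back-of-envelope check, not a value stated in the text), the injected doses convert as follows.

```latex
% Back-of-envelope check, assuming M(\mathrm{Hv1a}) \approx 4\,\mathrm{kDa}
\frac{50\ \mu\mathrm{g\ toxin/g\ insect}}{4000\ \mathrm{g\,mol^{-1}}} \approx 12.5\ \mathrm{nmol\,g^{-1}},
\qquad
\frac{100\ \mu\mathrm{g/g}}{4000\ \mathrm{g\,mol^{-1}}} = 25\ \mathrm{nmol\,g^{-1}}
```

which matches the 12-25 nmol/g range quoted in the text.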
In previous studies we assessed the injection toxicity of the spider toxin SFI1 and the scorpion toxin ButaIT, when fused to GNA, towards larvae of the tomato moth Lacanobia oleracea and the cotton leafworm Spodoptera littoralis. Whilst SFI1/GNA was more toxic than ButaIT/GNA towards L. oleracea [12,14], ButaIT/GNA was more toxic than SFI1/GNA towards S. littoralis [16]. In addition, ButaIT/GNA was found to be more generally toxic than SFI1/GNA when tested against a range of insect pests including lepidopteran larvae, dipteran adults, coleopteran adults and larvae and dictyopteran nymphs [16]. Variability in the susceptibility of different insect species prevents strict comparative analysis. However, we note that the amounts of Hv1a/GNA necessary to cause toxicity to M. brassicae larvae (50-100 µg toxin/g insect causing >50% mortality) are comparable to the amounts required for other GNA-toxin fusion proteins (i.e., SFI1/GNA: 20-125 µg toxin/g insect causing >50% mortality in Lacanobia oleracea [12]; ButaIT/GNA: 50-135 µg toxin/g insect causing >50% mortality in Spodoptera littoralis [16]). Fusion to GNA Massively Enhances the Oral Toxicity of Hv1a Hv1a has been reported to be orally active against ticks, and it appears to be orally active against the lepidopterans Spodoptera littoralis and Helicoverpa armigera when expressed in plants [10,17]. However, we found that Hv1a alone was not orally active when fed to fifth stadium M. brassicae larvae. This is consistent with the observation that the LD50 for Hv1a in the sheep blowfly Lucilia cuprina is 90-fold higher when the toxin is delivered per os compared with injection (V. Herzig and G.F.K, unpublished data). In striking contrast, the Hv1a/GNA fusion protein was orally toxic towards M. brassicae larvae in both cabbage leaf disc and droplet feeding assays. High levels of mortality and reduced growth were observed for second instar larvae exposed to discs coated with purified fusion protein. The oral toxicity observed in these assays must be a result of the Hv1a/GNA fusion protein, since GNA at a comparable dose did not reduce survival (although some reduction in larval growth was observed). These results are comparable to previously published data for the oral insecticidal activity of SFI1/GNA, and growth inhibition by GNA, towards L. oleracea larvae [12]. The consumption of droplets containing 40 µg of Hv1a/GNA fusion protein by fifth stadium larvae was seen to result in a complete cessation of feeding and larvae appeared relatively immobile, consistent with the previously described paralytic activity of the toxin [1,9]. Larvae failed to survive to pupation following droplet consumption of a total of 160 µg of fusion protein over four days. By contrast, larvae exposed to droplets containing an equivalent dose of Hv1a showed no evidence of reduced feeding or paralysis and all survived to pupation. The absence of oral toxicity for Hv1a contrasts with previous results reporting 100% mortality of Heliothis armigera and S. littoralis exposed to transgenic tobacco expressing Hv1a [17]. One possibility is that natural insecticidal compounds produced by these plants might produce disturbances in the insect gut epithelium and thereby act synergistically with Hv1a to improve its oral activity. 
Khan and co-workers [17] also reported contact insecticidal activity for Hv1a, although in their assays the fusion protein was applied topically in a solution containing high levels of imidazole, a compound known to have contact insecticidal activity [18]. GNA Mediates Delivery of Hv1a to Insect CNS Most spider toxins act peripherally at neuromuscular junctions but Hv1a acts at sites within the central nervous system [1,8]. Bloomquist (2003) previously demonstrated that Hv1a is able to cross the nerve sheath; whereas Hv1a acted instantaneously in Drosophila melanogaster nerve preparations that had been transected to facilitate toxin penetration, an 18 minute delay in the blockage of nerve firing occurred when intact nerve preparations were used, and this delay was consistent with the time taken to observe paralysis following injection of the toxin. Surprisingly, Western blot analysis of nerve chords dissected from insects injected with GNA and Hv1a/GNA indicated that GNA binds to the central nerve chord of lepidopteran larvae and is therefore capable of mediating the delivery of Hv1a to sites of action within the CNS. Further direct evidence for GNA localization to the CNS was provided by fluorescence imagery of nerve chords dissected from larvae that had been injected with, or fed on, FITC-labelled proteins. That GNA binds to mannose-containing membrane-bound polypeptides was indicated by intense fluorescence of the nerve chord sheath and also by reduced binding in tissues extracted from insects injected with GNA that had been pre-incubated with mannose. Neurophysiological studies with cockroaches, lepidopteran and dipteran larvae have indicated that Hv1a impairs ganglionic neural transmission, rather than conductance along the nerve chord. The characteristic delay in paralysis observed after injection of the toxin is thought to be attributable to the time required for the toxin to cross the nerve sheath and enter the CNS [1,8]. The results presented here suggest that GNA may help to localise covalently attached insecticidal neurotoxins, such as Hv1a, to the CNS of exposed insects and thereby facilitate toxin action within the CNS. In conclusion, the data presented here indicate that GNA not only mediates delivery of insecticidal peptides across the insect gut but that it is also capable of delivering peptides to the insect central nervous system. In the case of Hv1a, the massive improvement in oral activity upon fusion to GNA can be attributed to both of these properties. Many insecticidal peptides have been isolated from arachnid venoms [9,19,20], and fusion to GNA would appear to provide a general mechanism for dramatically enhancing their oral activity. GNA-toxin fusion proteins could be used for crop protection either as exogenously applied treatments or as endogenous proteins expressed in transgenic plants or entomopathogens. Materials and Recombinant Techniques General molecular biology protocols were as described in [21] except where otherwise noted. Subcloning was carried out using the TOPO cloning kit (pCR2.1 TOPO vector; Invitrogen). The Pichia pastoris SMD1168H (protease A-deficient) strain, the expression vector pGAPZαB, and the Easycomp Pichia transformation kit were from Invitrogen. Oligonucleotide primers were synthesised by Sigma-Genosys Ltd. T4 polynucleotide kinase was from Fermentas. Restriction endonucleases, T4 DNA ligase, and Pfu DNA polymerase were supplied by Promega. Plasmid DNA was prepared using Promega Wizard miniprep kits. 
GNA was produced as a recombinant protein in yeast using a clone generated as previously described [22]. Anti-GNA antibodies (raised in rabbits) were prepared by Genosys Biotechnologies, Cambridge, UK. Anti-Hv1a polyclonal antibodies (raised in rabbits) were prepared by the Institute of Medical and Veterinary Science, Adelaide, Australia. Recombinant Hv1a was prepared as described previously [2,4]. All DNA sequencing was carried out using dideoxynucleotide chain termination protocols on Applied Biosystems automated DNA sequencers by the DNA Sequencing Service, School of Biological and Biomedical Sciences, University of Durham, UK. Sequences were checked and assembled using Sequencher software running on Mac OS computers. The Hv1a/GNA sequence has been deposited in GenBank (#1527166). Assembly of expression constructs for production of Hv1a/GNA fusion protein. The Hv1a amino acid sequence (UniProtKB P56207) was used as the basis for assembly of a synthetic Hv1a gene. Codon usage was optimised for expression in yeast (www.yeastgenome.org/community/codonusage.shtml). The coding strand was subdivided into two fragments and the complementary strand was subdivided into three fragments, such that the coding fragments overlapped the complementary strand fragments by 21 bases. Five oligonucleotides based on these fragments were synthesised and used to assemble the mature Hv1a coding sequence (Table 1). All primers were individually 5′-phosphorylated using T4 polynucleotide kinase. An equimolar solution of 100 pmol of each phosphorylated primer was boiled for 10 min to denature secondary structures, then the solution was slowly cooled to room temperature (RT) to allow the primers to anneal. After addition of T4 DNA ligase, the annealed oligonucleotides (in ligase buffer) were left to ligate for 15 h at 4 °C. To obtain sufficient DNA for cloning into the yeast expression vector pGAPZαB, the Hv1a coding sequence was amplified by PCR using primers containing 5′ PstI and 3′ NotI restriction sites. Following amplification, gel purification and restriction digest, the PCR product was ligated into a previously generated yeast expression construct [14] containing the mature GNA coding sequence (amino acids 1-105 derived from LECGNA2 cDNA; [23]) to create the plasmid Hv1a/GNA-pGAPZαB. The sequence of the Hv1a/GNA expression construct has been given the accession number JQ898015 by GenBank. Expression and Purification of Hv1a/GNA Fusion Protein Plasmid Hv1a/GNA-pGAPZαB DNA was transformed into chemically competent P. pastoris cells (strain SMD1168H) according to protocols supplied by Invitrogen. Transformants were selected by plating on medium containing zeocin (100 µg/ml). A clone expressing recombinant Hv1a/GNA was selected for production by bench-top fermentation by Western analysis, using anti-GNA antibodies (1:3300 dilution), of supernatants from small-scale cultures grown at 30 °C for 2-3 days in YPG medium (1% w/v yeast extract; 2% w/v peptone; 4% v/v glycerol; 100 µg/ml zeocin) (results not shown). For protein production, P. pastoris cells expressing Hv1a/GNA fusion protein or GNA encoding sequences were grown in a BioFlo 110 laboratory fermenter. Briefly, 3 × 100 ml YPG cultures (grown for 2-3 days at 30 °C with shaking) were used to inoculate 3 l of sterile minimal media supplemented with PTM1 trace salts [24,25]. Cultivation was conducted at 30 °C, pH 4.5-5.0, 30% dissolved oxygen (cascaded agitation 250-750 rpm) with a glycerol feed (5-10 ml/h; 1.3 l over 72 h). 
Secreted proteins were separated from cells by centrifugation (30 min at 7500 g, 4 °C). NaCl was added to the supernatant to a final concentration of 2 M. Recombinant proteins were purified by hydrophobic interaction chromatography on a phenyl-Sepharose (Amersham Pharmacia Biotech) column (1 cm dia., 25 ml), run at 2 ml/min. After loading, the phenyl-Sepharose column was washed with 2 M NaCl and a linear salt gradient (2 M-0 M NaCl) applied over 60 min. Recombinant Hv1a/GNA eluted at ~1 M NaCl. Fractions containing purified proteins (analysed by SDS-PAGE) were then pooled, dialysed against distilled water and lyophilised. Lyophilised fusion protein and GNA were subjected to gel filtration on Sephacryl S-200 columns (1.6 cm diameter, 90 cm length, flow rate 0.3 ml/min) to remove high molecular weight yeast proteins as described previously [14]. Fractions containing purified recombinant proteins were again dialysed and lyophilised, or desalted and concentrated using Microsep™ centrifugal concentrators (VivaScience AG, Hannover, Germany). Electrophoresis and Western Blotting Proteins were routinely analysed by SDS-PAGE (17.5% acrylamide gels). Samples were prepared by adding 5× SDS sample buffer (containing 10% β-mercaptoethanol) and boiling for 10 min prior to loading. Gels were either stained with Coomassie blue or transferred to nitrocellulose for Western blotting using a Bio-Rad Trans-Blot SD semi-dry transfer cell according to the manufacturer's recommendations. Western blotting of recombinant proteins and larval samples (haemolymph and nerve chord) using anti-GNA (1:3300 dilution) or anti-Hv1a (1:1000 dilution) antibodies was carried out as described [26]. FITC Labelling Recombinant GNA, Hv1a/GNA, and ovalbumin (control) were fluorescently labelled with a 2:1 molar excess of fluorescein isothiocyanate (FITC, Sigma). Recombinant proteins (1 mg) were re-suspended at 2 mg/ml in 500 mM carbonate buffer pH 9.0, then incubated with 50 µl FITC (1 mg/ml in DMSO) with rotation for 4 h at RT, under dark conditions. Samples were dialysed against phosphate-buffered saline (PBS pH 7.4) at RT to remove excess FITC. FITC labelling of Hv1a was unsuccessful, presumably due to the scarcity of primary amines available for FITC attachment. Insect Rearing M. brassicae were originally obtained from cultures held at the Food and Environment Research Agency (FERA) and were reared at the University of Durham continuously on artificial diet [27]. Injection Bioassays Purified recombinant Hv1a peptide and Hv1a/GNA were tested for biological activity by injecting 4-5 µl of aqueous samples (lyophilised protein re-suspended in PBS) into newly eclosed fifth stadium M. brassicae larvae (40-70 mg). For each concentration tested, 10-20 larvae were injected and toxic effects were monitored over 4 days. PBS was injected as a negative control. Recombinant GNA is known to have no effect upon M. brassicae larvae when injected at up to 200 µg/larva (unpublished data). Feeding Bioassays Droplet feeding assays: M. brassicae. Several droplet-feeding assays were conducted to assess the oral activity of Hv1a/GNA towards M. brassicae fourth and fifth stadium larvae. Final sample numbers were relatively small (n = 7-8 per treatment) as larvae were reluctant to ingest daily droplets and insects that did not consume a full 5-µl droplet were discarded from data sets. Two representative assays are described herein. Droplet assay 1. 
Newly moulted fifth stadium larvae were fed daily for 4 days with a 5-µl droplet containing 40 µg of Hv1a/GNA or 9.6 µg of Hv1a toxin in 1× PBS and 10% sucrose solution. Control larvae were fed on droplets containing 40 µg bovine serum albumin (BSA). To encourage droplet consumption, larvae were starved for ~2-3 h prior to feeding. Larval weight was recorded daily ~1 h after droplet feeding. Treated larvae were placed individually in ventilated plastic pots (250 ml) with standard artificial diet. After 4 days of daily droplet feeding, larvae were maintained on optimal diet until the onset of pupation. Droplet assay 2. Newly moulted fifth stadium larvae were fed on a single 5-µl droplet containing 40 µg of Hv1a/GNA or 40 µg BSA (control) in 1× PBS and 10% sucrose. Larvae were maintained as described above and weights recorded daily for 10 days. Leaf disc assays: M. brassicae. The oral activity of Hv1a/GNA was further tested by feeding second instar M. brassicae larvae on cabbage (Brassica oleracea) discs coated with purified fusion protein at concentrations of 0.2% w/w and 0.1% w/w (i.e., 10 mg/5 g and 5 mg/5 g leaf wet weight, respectively) or recombinant GNA at 0.2% w/w. Discs (~20 mm dia., 140 mg fresh wt.) were prepared by adding droplets of protein (re-suspended in 0.5× PBS and 0.1% v/v Tween) onto upper and lower surfaces of discs and air-dried. Control discs were prepared with 0.5× PBS, 0.1% v/v Tween. Larvae were reared from hatch for 72 h on non-treated cabbage and then placed into ventilated plastic pots (250 ml) containing coated leaf discs and moist filter paper to prevent desiccation. Freshly prepared discs were provided every 2-3 days. Two replicates of 10 larvae per treatment were assayed. Survival was recorded for 10 days. To minimise handling time, larval weights were recorded on days 4, 7, and 10. Haemolymph Extraction and Nerve Chord Dissection Haemolymph samples were extracted and prepared for Western analysis [12] from day 2 fifth instar larvae fed for 24 h on diet containing Hv1a/GNA at 2 mg/5 g wet wt. (~2% dietary protein). Typically, aliquots of two replicate samples containing pooled haemolymph (3-5 larvae per sample) were run on SDS-PAGE gels and analysed by immunoblotting using anti-GNA antibodies. To investigate if GNA or Hv1a/GNA were localized to the CNS after oral delivery or injection, nerve chords were analysed by one of two methods. Nerve chords were dissected from sixth stadium larvae 4-24 h after injection or after being fed on droplets containing 20-50 µg GNA or fusion protein. Nerve tissue was subsequently analysed by Western blotting or visualised by fluorescent microscopy (section 2.11). Nerve chords were dissected as follows. Pre-chilled larvae were immersed in ice-cold distilled water prior to making a ventral incision from the tail to the head capsule. The resulting flaps of cuticle were fixed with pins into dissecting wax. The entire gut was carefully removed and the head capsule split to expose the terminal brain ganglia. The intact nerve chord and brain were then separated (using scissors) from the cuticle and head capsule and immersed immediately either in SDS sample buffer for Western analysis or in 3.7% w/v paraformaldehyde (PFA) for microscopy. Fluorescent Microscopy Nerve chords were extracted from sixth stadium larvae 4 h after injection of ~10 µg of FITC-labelled GNA or FITC-labelled Hv1a/GNA. Larvae were also injected with GNA that had been pre-incubated for 1 h at RT with 0.2 M mannose (methyl α-D-mannopyranoside). 
Nerve chords were also extracted from larvae after feeding on artificial diet containing FITC-labelled GNA or FITC-labelled Hv1a/GNA such that each larva consumed 50-100 µg labelled protein. Control treatments included FITC-labelled ovalbumin (10 µg per injection, 50-100 µg by ingestion) and FITC alone (0.5 µg per injection, 2.5 µg by ingestion). Following dissection and immersion in PFA (30-60 min), nerve chords were washed 3× in ice-cold PBS (15 min per wash), mounted onto glass slides and overlaid with coverslips. Nerve chords were visualized using a fluorescent microscope (Nikon) under a FITC filter (absorbance 494 nm; emission 521 nm) and images were captured in OpenLab. Statistical Analysis Data were analysed using Prism 5.0 (GraphPad Software Inc.). Kaplan-Meier insect survival curves were compared using Mantel-Cox log-rank tests. Insect weights were compared using either Student's t-tests or one-way analysis of variance (ANOVA), followed by Tukey-Kramer post hoc means separation. The accepted level of significance was P < 0.05 in all cases.
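The survival comparisons above were run in Prism; purely as an illustration of the same kind of analysis in code, the sketch below applies Kaplan-Meier estimation and a Mantel-Cox log-rank test with the Python lifelines library to made-up data. The day values, censoring flags and group labels are hypothetical and are not the study's data.

```python
# Illustrative only: Kaplan-Meier estimation and a Mantel-Cox (log-rank) test of
# the kind described above, using the 'lifelines' library on MADE-UP survival data.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical 10-day bioassay: day of death for each larva; an event flag of 0 on
# day 10 means the larva was still alive when the assay ended (right-censored).
days_fusion   = [3, 4, 4, 5, 6, 7, 8, 10, 10, 10]
event_fusion  = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
days_control  = [10] * 10
event_control = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]

kmf = KaplanMeierFitter()
kmf.fit(days_fusion, event_observed=event_fusion, label="Hv1a/GNA (hypothetical)")
print(kmf.survival_function_)  # Kaplan-Meier survival curve for the treated group

result = logrank_test(days_fusion, days_control,
                      event_observed_A=event_fusion,
                      event_observed_B=event_control)
print(f"log-rank p-value: {result.p_value:.4f}")
```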
Different patterns of short-term memory deficit in Alzheimer's disease, Parkinson's disease and subjective cognitive impairment It has recently been proposed that short-term memory (STM) binding deficits might be an important feature of Alzheimer's disease (AD), providing a potential avenue for earlier detection of this disorder. By contrast, work in Parkinson's disease (PD), using different tasks, has suggested that the STM impairment in this condition is characterised by increased random guessing, possibly due to fluctuating attention. In the present study, to establish whether a misbinding impairment is present in sporadic late-onset AD (LOAD) and increased guessing is a feature of PD, we compared the performance of these patient groups to two control populations: healthy age-matched controls and individuals with subjective cognitive impairment (SCI) with comparable recruitment history as patients. All participants performed a sensitive task of STM that required high-resolution retention of object-location bindings. This paradigm also enabled us to explore the underlying sources of error contributing to impaired STM in patients with LOAD and PD using computational modelling of response error. Patients with LOAD performed significantly worse than other groups on this task. Importantly, their impaired memory was associated with increased misbinding errors. This was in contrast to patients with PD, who made significantly more guessing responses. These findings therefore provide additional support for the presence of two doubly dissociable signatures of STM deficit in AD and PD, with binding impairment in AD and increased random guessing characterising the STM deficit in PD. The task used to measure memory precision here provides an easy-to-administer assessment of STM that is sensitive to the different types of deficit in AD and PD and hence has the potential to inform clinical practice. Introduction With ~45% of individuals aged >85 years being diagnosed with Alzheimer's disease (AD) (Liu, Liu, Kanekiyo, Xu, & Bu, 2013), one of the key priorities of healthcare has become the identification of affected individuals using sensitive measures that can be administered relatively rapidly. Cognitive deficits, specifically memory-related impairments, are an important feature of AD. Although much of the focus previously has been on long-term memory (LTM) or episodic memory, recent investigations have shown that patients with either familial AD (FAD) or late-onset AD (LOAD) can also have significant deficits in short-term memory (STM) (Guazzo, Allen, Baddeley, & Sala, 2020; Liang et al., 2016; Parra et al., 2009, 2010, 2011, 2015). These findings intersect with recent models of memory which propose that the medial temporal lobes (MTL) – and specifically the hippocampus, a region often implicated relatively early in AD – is involved not only in LTM but also plays a role in STM. According to this perspective the hippocampus might play a role in a specific computation: retention of high-resolution binding of features belonging to a memory episode, regardless of retention duration, short or long (Olson, Page, Moore, Chatterjee, & Verfaellie, 2006; Pertzov et al., 2013; Yonelinas, 2013). 
Consistent with this proposal, several studies have now provided evidence for binding deficits in STM in individuals with focal MTL lesions as well as those with AD (Della Sala, Parra, Fabi, Luzzi, & Abrahams, 2012; Koen, Borders, Petzold, & Yonelinas, 2016; Liang et al., 2016; Parra et al., 2009, 2010, 2011; Pertzov et al., 2013; Zokaei, Nour, et al., 2018). A series of pioneering investigations by Parra and colleagues that provided evidence for binding impairments in patients with AD (Della Sala et al., 2012; Guazzo et al., 2020; Parra et al., 2009, 2010, 2011, 2015) used variants of a change-detection task in which LOAD or FAD cases were presented with memory arrays consisting of either single features (e.g., colours), or multiple features bound together in a single object (e.g., coloured objects). Participants were asked to keep these in mind and later, following a brief delay, detect any changes in a second array compared to the one held in memory. Individuals with AD consistently performed worse in the binding conditions only (Della Sala et al., 2012; Guazzo et al., 2020; Kozlova, Parra, Titova, Gantman, & Sala, 2020; Parra et al., 2009, 2010, 2011, 2015). The change-detection studies described above employed a paradigm in which participants make either correct or incorrect (binary) responses. Performance on the task can be used to estimate the number of items which people can recall correctly from STM (Luck & Vogel, 1997). However, simply because an individual fails to recall an item does not mean that all the information regarding that item was completely lost from memory. In other words, change detection tasks do not provide a measure of the quality of memory representations when an observer makes an incorrect response. Moreover, the condition of interest in AD, the binding condition, required an additional operation compared to single-feature trials (Della Sala et al., 2012; Guazzo et al., 2020; Parra et al., 2009, 2010, 2011, 2015). Thus, participants had to remember both single features as well as their associations with one another, hence potentially limiting any direct comparisons made with trials in which only single features were to be retained. A recent theoretical and empirical approach to STM employs a different means to probe STM. It allows researchers to examine the resolution with which items are retained in memory by asking participants to respond using a continuous, rather than binary, response (for a review see: Ma, Husain, & Bays, 2014; Fallon, Zokaei, & Husain, 2016), thereby addressing some of the limitations of change detection methods raised above. In these continuous reproduction tasks, participants are required to reproduce the exact quality of remembered features in an analogue response space, which provides a more sensitive measure of STM (P.M. Bays, Catalao, & Husain, 2009; Gorgoraptis, Catalao, Bays, & Husain, 2011; Pertzov, Dong, Peich, & Husain, 2012; Zokaei, Gorgoraptis, Bahrami, Bays, & Husain, 2011). One such paradigm, which has also been validated in patients with focal MTL lesions (Pertzov et al., 2013; Zokaei, Nour, et al., 2018) and in patients with FAD (Liang et al., 2016), examines the resolution with which object-location bindings are retained in STM. 
The results showed that FAD and MTL lesion cases do indeed have deficits in feature binding (Liang et al., 2016; Pertzov et al., 2013; Zokaei, Nour, et al., 2018), supporting previous studies using change detection tasks in FAD (Parra et al., 2010). However, this task has not yet been tested in sporadic LOAD cases. Continuous reproduction STM paradigms that measure recall precision can also provide a means to dissect out the sources of error contributing to the pattern of performance using modern analytical techniques (P.M. Bays et al., 2009; Grogan et al., 2019). Specifically, three different contributions to impaired performance can be separated using these methods: error due to imprecision (noisiness) of recall, increased misbinding (or swap) errors in which participants report a feature associated with another item in memory, or alternatively an increased proportion of random guesses. For example, in an object-location binding task, a swap occurs when participants report the location of another item in memory, hence misbinding the objects and their corresponding locations. Therefore, without needing to separate trial-types depending on the type of information that is retained (single features vs. bound objects), it is possible to isolate the underlying impairment in STM: whether the errors are driven largely by imprecision (noisiness) of recall, random guessing or misbinding (swaps). This dissection of the nature of errors contributing to STM impairments is important because it has the potential to provide mechanistic insights into the cognitive processes that are dysfunctional in a brain disorder. It is now known that several different neurodegenerative conditions can lead to STM deficits (e.g., Panegyres, 2004) but the underlying mechanisms might be different across different diseases. For example, patients with Parkinson's disease (PD) have long been known to exhibit STM impairments, apparent at the very earliest stages of the disease (e.g., Dujardin, Degreef, Rogelet, Defebvre, & Destee, 1999; Muslimovic et al., 2005; Owen et al., 1992, 1993; Verbaan et al., 2007). In contrast to work in AD, research on STM deficits in PD using a different type of continuous response paradigm (which examined colour-orientation binding) has shown that these individuals and those at risk of developing PD make significantly more random guessing responses than healthy controls (Rolinski et al., 2015; Zokaei, McNeill, et al., 2014). Thus, the mechanism underlying the STM deficit in PD might be distinct from that observed in patients with hippocampal deficits, such as patients with AD. To the best of our knowledge, however, LOAD and PD cases have not previously been compared directly using a continuous reproduction task, although other researchers have compared LOAD cases to PD dementia using a change detection task (Della Sala et al., 2012). This study reported increased misbinding in LOAD but no visual STM deficit in PD patients who had developed dementia. A subsequent investigation by the same group compared LOAD patients to PD cases with or without dementia (Kozlova et al., 2020). The authors concluded again that, although misbinding is increased in LOAD, the PD cases – either with or without dementia – show no significant visual STM deficit compared to healthy controls on change detection performance. 
It remains to be established, therefore, whether LOAD and PD cases have doubly dissociable patterns of underlying STM deficit – with increased misbinding in AD and increased guessing in PD – using the same reproduction task to test both groups. To put this hypothesis to the strongest test, it would be important to compare LOAD cases with PD patients without dementia, because it is now known that mean onset of dementia is 10 years after the diagnosis of PD (Aarsland & Kurz, 2010). If it is possible to demonstrate, on the same task, an underlying cause of impaired STM performance in PD cases without dementia that is doubly dissociable from that in LOAD, at comparable times since diagnosis, that would potentially provide strong evidence for distinctly different cognitive mechanisms contributing to STM dysfunction in the two diseases. In this study, therefore, we examined visual STM performance and the sources of error in LOAD and PD cases without dementia, who were not significantly different from each other in terms of diagnosis duration, on the same continuous reproduction task. In addition, we examined the performance of two control groups. First, we studied individuals with subjective cognitive impairment (SCI). These patients were included as they present to clinics complaining of everyday memory difficulties, but are not diagnosed with any neurological disorder at the time of testing (Stewart, 2012). They therefore provide a potentially important second comparison group as their subjective experience of their memory abilities is impaired, as is often the case in AD, but they do not have objective evidence of a significant neurodegenerative condition. Therefore, we would not expect most SCI patients to show a visual STM deficit characterized by misbinding as we would in AD, despite the fact that both groups of patients might complain of memory deficits. The definition of SCI we use here is different from that of Jessen et al. (2014), who specifically wished to develop criteria for individuals with subjective cognitive decline (SCD) who are in the pre-clinical phase of AD, prior to mild cognitive impairment (MCI). Our definition is the wider one of all patients who report difficulties with their memory but do not have evidence of significant objective deficits and are not given a diagnosis of a neurodegenerative disorder (Howard, 2020). Lastly, in addition to SCI cases, we also examined a group of healthy controls without significant memory complaints, as they provide a second control or baseline of performance. In the present study, we used an object-location continuous reproduction binding task to examine STM performance (Pertzov et al., 2013; Zokaei et al., 2017; Zokaei, Cepukaitytė, et al., 2018) across all four groups of individuals: LOAD, PD, SCI and healthy controls. The task required participants to report the exact location of remembered objects and, importantly in addition, enabled us to explore the underlying sources of error contributing to impaired STM using computational modelling of response error. The paradigm was developed for clinical use following a series of studies in healthy people that challenged the view that the best way to characterize STM might be in terms of the number of items it can hold (P.M. Bays & Husain, 2008; Ma et al., 2014; Wilken & Ma, 2004). Instead, data from several investigations have demonstrated that the use of continuous (rather than discrete) error measures provides a view of STM that is far more flexible than previously envisaged. 
Moreover, these tasks – sometimes referred to as precision STM tasks – readily permit modelling of the sources of error contributing to memory performance (P.M. Bays et al., 2009; Ma et al., 2014). Participants No part of the study procedure was pre-registered prior to the research being conducted. Overall, eighty-nine individuals participated in this study. This included: 20 patients with a diagnosis of LOAD based on the NIA-AA core clinical criteria for probable AD (McKhann et al., 2011), 13 of whom were on donepezil; 20 patients with a diagnosis of PD based on the UK Parkinson's Society Brain Bank criteria (Hughes, Daniel, Kilford, & Lees, 1992) (mean daily levodopa equivalent dose = 658 mg); 24 people with SCI, defined as people who presented with complaints about their memory but clinically did not present with symptoms of MCI or dementia (Howard, 2020), on the basis of the history obtained from the patient and an informant, and on the basis of performance on the Addenbrooke's Cognitive Examination-III (ACE-III); and 25 healthy controls (HCs). Of the 24 participants with SCI, 6 had anxiety (one on anxiety medication), 6 had depression (two were on antidepressants) and 3 reported poor sleep (though none were on any specific medication for this), one of whom also reported anxiety. Patients were recruited over three years through a neurology clinic with a specialist interest in cognitive disorders at the John Radcliffe Hospital, Oxford, and were tested on one occasion. Control participants were recruited from the Oxford Dementia and Ageing Research database. Demographics, patient information and details of statistical comparisons are presented in Table 1. There was no significant difference in age between patients with AD, PD, SCI and HCs. The Addenbrooke's Cognitive Examination (ACE-III) was administered as a general cognitive screening test to patients with AD, PD, SCI and HCs. Patients with AD scored significantly lower on the ACE compared to healthy controls, patients with PD and individuals with SCI (all Bonferroni-corrected p < .001). There was no significant difference in ACE scores between the PD, SCI and healthy control groups. On average, PD cases had been diagnosed for slightly longer than AD patients but this difference was not significant. An approximation of the sample size was determined based on previous studies of short-term memory performance, using a similar task to the one employed here, in various patient groups and individuals at risk of developing neurodegenerative disorders (Liang et al., 2016; Rolinski et al., 2015; Zokaei, McNeill, et al., 2014; Zokaei et al., 2017; Zokaei, Nour, et al., 2018; Zokaei, Cepukaitytė, et al., 2018). All participants had normal or corrected-to-normal vision, and HC participants and individuals with SCI had no neurological disorders at the time of testing. The study was approved by the local NHS ethics committee and all participants provided fully informed consent to the task procedure. 2.2. Short-term memory task The STM task was identical to one previously used (Zokaei et al., 2017; Zokaei, Nour, et al., 2018) (Fig. 1). It was presented on a touchscreen (Inspiron All-in-One 2320; DELL) with a 1920 × 1080 pixel resolution (corresponding to 62° × 35° of visual angle) at a viewing distance of approximately 62 cm. In brief, in each trial, participants were presented with 1 or 3 abstract images (fractals) comprising the memory array, for 1 or 3 s. 
The memory array was then followed by either 1 or 4 s of a black screen, before recall, when participants were presented with 2 fractals on the vertical meridian at screen centre. One of the fractals had appeared in the preceding memory array while the other was a foil, i.e., a novel fractal. Participants were asked to select the fractal that had appeared in the memory array by touching it (identification accuracy). Once one of the fractals was selected, participants had to drag it on the touchscreen to its remembered location (localization memory), and confirm their response with a key press. The localization phase of the task provides a continuous measure of error, rather than a binary correct/incorrect response. Stimuli were randomly selected from a pool of 60 coloured fractals, with a maximum height and width of 120 pixels (4° of visual angle). The location of each fractal was random, but with a minimum distance of 3.9° from the monitor edge, and a minimum distance of 6.5° from screen centre. Participants completed between 2 and 4 blocks of the task, depending on availability. Each block consisted of 16 trials in which 1 item was presented in the memory array (8 per delay duration) and 32 trials in which 3 items had to be retained in memory (16 per delay duration). The full task took approximately 30 min to complete. Participants were familiarized with the task procedure prior to testing by completing 8 practice trials with increasing difficulty. Behavioural analysis Identification accuracy and localization error were used as overall measures of performance. Identification accuracy is calculated as the proportion of trials in which participants correctly select the item that was previously in the memory array. Trials in which the correct item was not identified were excluded from subsequent analysis. Localization error was then calculated as the distance, in pixels, between the location of the item in the memory array and the reported location. Mixture modelling of error STM precision tasks, such as the one employed here, also provide a means to dissect out sources of error contributing to the pattern of performance (P.M. Bays et al., 2009; Gorgoraptis et al., 2011; Ma et al., 2014). In these paradigms, error can potentially arise from three distinct sources. First, error can be due to variability in memory for the probed item (Imprecision); in other words, how well a feature, here location, is stored in memory. Second, participants may make random errors, because on some trials they may simply be guessing (Guesses). Lastly, error can arise from misreporting features of the non-probed (other) items that were presented in the memory array (Swaps). In such cases, participants' responses might be systematically biased by other items that were encoded into STM. This general model has successfully been applied previously to one-dimensional features in memory such as motion, orientation or colour, both in the healthy population (e.g., P.M. Bays et al., 2009; Gorgoraptis et al., 2011; Zokaei et al., 2011) as well as in the ageing population and patients with various neurological disorders (Mok, Myers, Wallis, & Nobre, 2016; Peich, Husain, & Bays, 2013; Rolinski et al., 2015; Zokaei, McNeill, et al., 2014). Here, to identify sources of error contributing to overall STM performance, a specific model for this type of task was applied to localization error data for set size three trials (Grogan et al., 2019). 
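For concreteness, a minimal sketch of how the two behavioural measures defined above could be computed from trial-level data is given below; the column layout and values are hypothetical and this is not the authors' analysis code. The model applied to these localization errors is described next.

```python
import numpy as np

# Illustrative only: computing the two behavioural measures described above from
# hypothetical trial-level data (not the authors' code or data).
trials = [
    {"correct_id": True,  "true_xy": (400, 300),  "resp_xy": (420, 310)},
    {"correct_id": True,  "true_xy": (900, 700),  "resp_xy": (870, 640)},
    {"correct_id": False, "true_xy": (1200, 200), "resp_xy": (500, 900)},
]

# Identification accuracy: proportion of trials on which the correct fractal was chosen.
id_accuracy = np.mean([t["correct_id"] for t in trials])

# Localization error (pixels): Euclidean distance between the true and reported
# locations, computed only for trials on which the correct item was identified.
loc_errors = [
    np.hypot(t["resp_xy"][0] - t["true_xy"][0], t["resp_xy"][1] - t["true_xy"][1])
    for t in trials if t["correct_id"]
]

print(f"Identification accuracy: {id_accuracy:.2f}")
print(f"Mean localization error: {np.mean(loc_errors):.1f} px")
```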
According to this model, as in previous applications to other stimuli noted above, error can arise due to increased imprecision (Fig. 3a, left panel), random responses due to guesses (Fig. 3a, middle panel), or swap/misbinding errors (Fig. 3a, right panel). In this case, imprecision refers specifically to the noisiness (variability) of response around the true location of the probed item which had appeared in the memory display. Random guessing responses are those that are classed as occurring at locations other than the probed item or any of the other items that had been in the memory display. Finally, swap (misbinding) errors are those in which the responses fall in the locations of items that had been in the memory display but were not actually probed. Thus, swap errors arise in trials in which participants pick the correct fractal but place it in the location of one of the other (non-probed) items from the memory array. The model is described by the following equation: p(θ̂) = α ψκ(θ̂ − θ) + (β/m) Σᵢ ψκ(θ̂ − φᵢ) + γ/A, where the free parameters α, β, γ and κ correspond to the proportion of target responses, swaps, guesses and the imprecision, respectively. Moreover, θ̂ corresponds to the response, θ to the target, φᵢ to the non-probed items' coordinates (with m the number of non-probed items), ψ to the bivariate Gaussian distribution, and A to the screen dimensions. In this model, swaps are assumed to be similar to target responses, except that they are centred on the locations of non-probed items. Thus, they take the form of a multivariate Gaussian distribution with the same imprecision parameter as the probed target item. Guesses, however, are assumed to be entirely unrelated to any stimulus locations and take the form of a uniform distribution across the entire screen. This, therefore, reflects a random guess, similar to what would happen if the participant had either entirely forgotten all the stimuli, or effectively had their eyes shut during stimulus presentation. Put simply, responses close to non-probed items are more likely classed as swaps (depending on the imprecision parameter), while responses far away from both probed and non-probed items (hence all items in memory) are more likely classed as guesses. Separate mixed ANOVAs were used, with set size and delay as within-subject factors, and participant group as a between-subject factor. For non-normally distributed data, an appropriate transformation was applied to meet the requirements of ANOVA. An estimate of effect size is reported as eta-squared (reported for significant effects). We report how we determined our sample size, all data exclusions (if any), all inclusion/exclusion criteria, whether inclusion/exclusion criteria were established prior to data analysis, all manipulations, and all measures in the study. Legal restrictions that are beyond our control prevent us from publicly archiving the task and analysis scripts used in this research. Specifically, for commercial use, these can be obtained through a licensing agreement with Oxford University Innovation Ltd. These digital materials will, however, be shared freely on request with research groups and non-profit-making organisations provided they agree in writing not to share them with commercial parties or use them for profit. The conditions of our ethics approval do not permit public archiving of the data supporting this study. Readers seeking access to this data should contact the lead author, Prof Masud Husain. 
Access will be granted to named individuals in accordance with ethical procedures governing the reuse of sensitive data. Specifically, to obtain the data, requestors must complete a formal data sharing agreement, including conditions for secure storage of sensitive data. Results Due to participant availability, a few patients and healthy controls did not complete sufficient trials to examine the effect of memory delay on performance. Hence, for the purposes of this analysis, performance across the two retention delays (1 or 4 s) was collapsed to allow for the investigation of the impact of memory set size on performance. All post-hoc t-tests were Bonferroni corrected. Behavioural performance For identification accuracy, that is, the proportion of trials in which participants correctly identified the fractal, a repeated measures ANOVA was performed with set size as a within-group factor (1 or 3 items) and group as a between-subjects factor. There was a significant effect of set size on identification accuracy (F(1,85); Fig. 2a, Identification), with reduced identification accuracy when 3 items had to be remembered compared to when only 1 had to be retained. Fig. 1 (Short-term memory task) shows a schematic of the task: participants were presented with a memory array followed by a delay; they were then presented with two fractals, one from the memory array and a foil; on a touchscreen computer, participants first had to touch the fractal they had seen before (in the memory array) and drag it to its remembered location. In addition, there was a significant main effect of group (F(3,85) = 24.4, p < .001, ηp² = .46) and a significant interaction between set size and group (F(3,85) = 4.17, p = .008, ηp² = .13), indicating that memory load affected the groups differently. This interaction was followed up with two one-way ANOVAs, one per memory set size. For set size 1, there was a significant effect of group on performance (F(3,85) = 8.7, p < .001, ηp² = .24). Bonferroni corrected post-hoc tests revealed significant differences in performance for AD patients compared to HC participants (p < .001), patients with PD (p < .001) and individuals with SCI (p < .001). For set size 3, there was also a significant effect of group (F(3,85) = 32.4, p < .001, ηp² = .53), with AD patients performing significantly worse than HCs, individuals with SCI and PD (all p < .001), in Bonferroni corrected post-hoc comparisons. Patients with PD did not perform significantly differently compared to HCs and individuals with SCI. We next examined localization memory by measuring the distance between the reported and true location of the probed item. There was a significant main effect of set size (F(1,85) = 672, p < .001, ηp² = .9), with larger localization error in trials with 3 compared to 1 fractal, and a significant main effect of group (F(3,85) = 18, p < .001, ηp² = .39). Post-hoc, Bonferroni corrected comparisons revealed AD patients had significantly greater localization error compared to HC, PD and SCI groups (all p < .001) (Fig. 2b, Localization). Patients with PD did not perform significantly differently compared to HCs and individuals with SCI. 3.2. Mixture modelling of error Application of mixture modelling to data from STM precision tasks, such as the one employed here, provides a means to dissect out sources of error contributing to the pattern of performance (P.M. Bays et al., 2009; Gorgoraptis et al., 2011; Ma et al., 2014).
A recent additional analytical technique for the type of task used here (Grogan et al., 2019) allowed us to estimate the proportion of responses arising from three sources of error: (i) imprecision of response around the true location of the correctly identified item (Fig. 3a left panel); (ii) random responses due to guesses (Fig. 3a middle panel), where the correctly identified item was dragged to a location which was neither its true location nor the location of the other two (non-probed) items that had appeared in the memory display; and (iii) swap or misbinding errors (Fig. 3a right panel), where participants select the correct fractal at probe but place it in the location of one of the other two (non-probed) items from the memory array. The parameters returned from the model reflect the proportion of responses classed as each type of error (see Table 2 for means and standard deviations of model estimates per participant group). For the model estimates of imprecision, proportion of random responses and proportion of swaps, a repeated measures ANOVA was performed with group as a between-subjects factor. There was no significant effect of group on the model estimate of imprecision (F(3,85) = 2.09, p = .108, ηp² = .069; Fig. 3b, Imprecision). For the model estimate of the proportion of guesses, however, there was a significant main effect of group (F(3,85) = 3.94, p = .011, ηp² = .122; Fig. 3b, Guesses). Post-hoc, Bonferroni corrected comparisons revealed that patients with PD made significantly more guesses compared to both HC (p = .013) and SCI participants (p = .001). AD patients did not make significantly more guesses compared to HCs and SCI participants. Lastly, we examined the effect of group on the proportion of swap (binding) errors using a one-way ANOVA with group as a between-subjects factor. There was a significant main effect of group (F(3,85) = 3.79, p = .013, ηp² = .12; Fig. 3b, Swaps) and Bonferroni corrected post-hoc comparisons revealed that patients with AD made significantly more swaps compared to HC (p = .004) and SCI participants (p = .017) as well as patients with PD (p = .005). Discussion In the present study we examined STM performance in patients with LOAD versus PD, SCI and healthy controls using a sensitive, continuous analogue reproduction task that measures the retention of bound object-locations (Fig. 1). Fig. 2 (Short-term memory performance) shows behavioural task performance for identification accuracy (a) and localization error (b), for the 1- and 3-item conditions, for patients with AD, PD, SCI and healthy controls (HC). In line with previous research, we found a selective impairment of feature binding in the STM performance of patients with LOAD compared to all other tested groups (Fig. 3b). A previous study in FAD cases also demonstrated increased misbinding in asymptomatic cases, prior to the onset of dementia (Liang et al., 2016). Increased binding errors in patients with LOAD are also consistent with the results of a series of previous studies, using a different (change-detection) methodology, which demonstrated higher rates of misbinding in patients with LOAD and FAD (Della Sala et al., 2012; Guazzo et al., 2020; Parra et al., 2009, 2010, 2011, 2015). Together, these findings provide growing support for the view that AD is associated not only with LTM but also STM impairments, and that increased misbinding might be an important signature of STM deficits in the condition.
Classically, it has been proposed that the MTL, and the hippocampus in particular, play a key role in retention of relational binding of features belonging to an episode in LTM (e.g., Davachi, 2006). However, deficits of STM retention of object-location bindings, as demonstrated here and by change-detection studies in patients with LOAD, who typically have MTL atrophy, point to a general role of the MTL that extends beyond the traditional distinction between long- vs. short-term memories. In fact, it highlights a computation that might be shared between STM and LTM, namely the high-resolution binding of features to perceive and maintain coherent and bound objects (Yonelinas, 2013). Complementary to this view, precise retention of object-locations even for short durations has been found to rely on MTL structures (Koen et al., 2016; Liang et al., 2016; Libby, Hannula, & Ranganath, 2014; Pertzov et al., 2013; Zokaei, Nour, et al., 2018). Although the results of these studies point to a key role of the MTL, across different pathologies, in feature binding in visual STM, in the context of neurodegenerative disorders it would be important to consider whether binding deficits can distinguish AD from other conditions that are either associated with neurodegeneration or memory complaints. To this end, in this study we compared LOAD cases to three groups: PD patients with a diagnosis duration that is not significantly different to the AD cases; people with SCI who present with subjective memory complaints but are not considered to have AD after investigation; and healthy controls. Previously, other investigators have reported, using change detection tasks, that PD patients, with and without dementia, do not show increased misbinding as observed in LOAD (Della Sala et al., 2012; Kozlova et al., 2020). Our results also show that the type of impairment observed in patients with LOAD is distinctly different to those observed in patients with PD, a neurodegenerative disorder which is also associated with STM deficits (Fallon, Mattiesing, Muhammed, Manohar, & Husain, 2017; Owen, Iddon, Hodges, Summers, & Robbins, 1997; Zokaei, Burnett Heyes, et al., 2014; Zokaei, McNeill, et al., 2014). Here, we used a recently developed computational model of response error for this task (Grogan et al., 2019) to demonstrate doubly dissociable underlying sources of error for LOAD compared to PD. Even without dementia, but with non-significantly different disease durations, PD patients show increased guessing compared to both HCs and individuals with SCI (Fig. 3b). Importantly, this deficit was observed despite the fact that, on simple indices of identification and localization performance, PD patients were not significantly impaired compared to healthy controls (Fig. 2a). That the nature of STM impairments in patients with PD is different to that in LOAD has been suggested by the results of previous investigations which used a continuous response paradigm testing colour-orientation bindings. Those studies reported that PD patients and people at risk of developing PD make significantly more random guessing responses (Rolinski et al., 2015; Zokaei, McNeill, et al., 2014). However, the performance of LOAD and PD cases has not previously been compared directly on the same continuous reproduction task, as here. It is possible that increased guessing on STM tasks in PD is a manifestation of lapses in attention, resulting in an all-or-none memory recall.
There is now considerable evidence of fluctuations in attention in disorders associated with Lewy body pathology, as in PD (O'Dowd et al., 2019). It is also possible that visuospatial processing deficits in patients, independent of any impairments in attentional fluctuations, might be a contributing factor to increased random responses. Future research might profitably focus on understanding the link between attentional or visuospatial deficits and the type of STM impairment observed in PD. In this study, we further explored the selectivity of STM deficits on our task by comparing performance in patients with LOAD to a group of individuals with SCI. Individuals with SCI report subjective deficits in cognition but do not demonstrate any clinical symptoms at the time of testing. However, recent studies have shown that SCI represents a heterogeneous group of individuals, many with psychiatric disorders such as depression, anxiety or mood disorders but a few at risk of developing dementia in the longer term (Amariglio et al., 2012; Buckley et al., 2013; Hohman, Beason-Held, & Resnick, 2011; Mitchell, Beaumont, Ferguson, Yadegarfar, & Stubbs, 2014; Slavin et al., 2010; Stogmann et al., 2016). This group, who present to the clinic with memory concerns, provides an interesting control to test the selectivity of the STM impairments we observed. Interestingly, in the present study, compared to LOAD or PD, SCI patients overall did not demonstrate any impairment in short-term retention of object-location bindings. Thus, as a group, they do not show the pattern of misbinding that we have observed in LOAD here and in presymptomatic FAD (Liang et al., 2016). Nevertheless, this task might be useful to detect and track longitudinally 'outliers' who show abnormally high misbinding at presentation, despite performing normally on standard cognitive screening. Evidence in favour of pursuing this possibility comes from a study (Koppara et al., 2015) that assessed visual STM in patients with subjective cognitive decline (SCD) using the same change detection task developed by Parra and colleagues (Della Sala et al., 2012; Guazzo et al., 2020; Parra et al., 2009, 2010, 2011, 2015). Unlike our findings, that investigation reported that SCD cases showed increased misbinding compared to healthy controls, but not as high as patients with mild cognitive impairment (MCI). It is possible that the sample of SCD cases in that study might be different to the SCI group in our study. It is now widely acknowledged that a very small proportion of such cases will go on to develop dementia, but most will not (Howard, 2020). The overall findings in any group might therefore depend upon the percentage who are in the earliest preclinical phases of AD, and that proportion might be relatively small in our sample, while it might have been larger in the study of Koppara and colleagues. Long-term follow-up of cases is therefore crucial to establish whether increased visual STM misbinding in any one individual is an important early cognitive marker of preclinical AD, in the context of patients who present to memory clinics and are suspected to have an underlying neurodegenerative condition, even though cognitive screening does not reveal significant deficits.
Together, our findings provide support for a selective impairment in short-term retention of bound features in patients with LOAD that is distinct from that observed in healthy controls, the SCI group we studied, and patients with PD who, even without dementia, demonstrated a separate, distinctively different pattern of STM impairment. The task used here provides a relatively rapid means to measure STM and sources of error in performance. It has the potential to inform clinical practice and assessment. Neuroimaging is supported by core funding from the Wellcome Trust (203130/Z/16/Z). The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.
8,507.8
2020-08-20T00:00:00.000
[ "Psychology", "Biology" ]
Solution of Combined Economic Emission Dispatch Problem with Valve-Point Effect Using Hybrid NSGA II-MOPSO This chapter formulates a multi-objective optimization problem to simultaneously minimize the objectives of fuel cost and emissions from the power plants to meet the power demand subject to linear and nonlinear system constraints. These conflicting objectives are formulated as a combined economic emission dispatch (CEED) problem. Various meta-heuristic optimization algorithms have been developed and successfully implemented to solve this complex, highly nonlinear, non-convex problem. To overcome the shortcomings of evolutionary multi-objective algorithms, such as slow convergence to the Pareto-optimal front, premature convergence and local trapping, it is natural to think of integrating various algorithms. This chapter proposes a hybrid evolutionary multi-objective optimization framework using Non-Dominated Sorting Genetic Algorithm II and Multi-Objective Particle Swarm Optimization to solve the CEED problem. The hybrid method along with the proposed constraint handling mechanism is able to balance the exploration and exploitation tasks. This hybrid method is tested on the IEEE 30 bus system with a quadratic cost function considering transmission loss and the valve-point effect. The Pareto front obtained using the hybrid approach demonstrates that the approach converges to the true Pareto front, finds a diverse set of solutions along the Pareto front and confirms its potential to solve the CEED problem. Introduction In order to operate the power system economically and also to protect the environment from pollution, the power system operator has to carry out optimal scheduling of active power to simultaneously minimize the fuel cost and the emissions from the fossil fuel-fired power plants. These objectives are desirable to obtain great economic benefit [1] and to reduce the nitrogen oxide (NOx), sulfur oxide (SOx) and carbon dioxide (CO2) pollutants which cause harmful effects on human beings [2]. These conflicting objectives can be formulated as a multi-objective combined economic emission dispatch (CEED) problem. This CEED problem can be solved using traditional mathematical programming techniques such as lambda iteration and gradient search [1], and can also be solved using modern heuristic optimization techniques. The advantages of solving the CEED problem using heuristic optimization methods compared to traditional mathematical programming techniques are that they are population-based, do not require any derivative information, do not use gradient information in the search process, use stochastic operators in the search process, are simple to implement and flexible, have an inbuilt parallel architecture, are scalable and are also computationally quick. A single optimal solution cannot be obtained for a multi-objective CEED problem which simultaneously minimizes the conflicting objectives of fuel cost and emission. Thus the simultaneous minimization of conflicting objectives in a multi-objective optimization problem (MOP) gives rise to a set of trade-off solutions called Pareto-optimal (PO) solutions [3], which needs further processing to arrive at a single preferred solution.
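To make the notion of Pareto optimality used throughout this chapter concrete, the following minimal sketch (an illustration of the standard definition, not code from the chapter) checks Pareto dominance for two minimization objectives such as fuel cost and emission and filters a set of candidate schedules down to its non-dominated subset; the numerical values are hypothetical.

```python
from typing import List, Tuple

def dominates(u: Tuple[float, float], v: Tuple[float, float]) -> bool:
    """u dominates v (minimization) if it is no worse in every objective
    and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def non_dominated(points: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    """Return the Pareto-optimal (non-dominated) subset of (cost, emission) pairs."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (fuel cost $/h, emission ton/h) pairs for four candidate schedules.
candidates = [(605.0, 0.220), (615.0, 0.205), (610.0, 0.230), (600.0, 0.225)]
print(non_dominated(candidates))   # only the trade-off (non-dominated) schedules remain
```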
In literature domination based framework using multi-objective evolutionary algorithms (MOEA) which simultaneously minimizes the fuel cost and emission have been employed to solve the CEED problem. These population-based approaches can obtain the multiple non dominated solutions in a single simulation run. These non-dominated solutions portray the tradeoff between fuel cost and emission objectives of CEED problem. Modern meta-heuristic optimization algorithms like Genetic Algorithm [4,5], Biogeography Based Optimization [6], Particle Swarm Optimization [7], Bacterial Foraging Algorithm [8], Scatter Search [9], Teaching Learning Based Optimization [10], Differential Evolution [11] and Harmony Search Algorithm [12] have been developed and successfully implemented to solve this complex, highly nonlinear, non-convex CEED problem. The multiple objective CEED problem can also be transformed into a single objective problem using a weighted sum approach and h parameter values. The h parameters are used to overcome the dimensionality problem when combining multi-objectives and the converted single objective problem is then solved using evolutionary algorithms [13][14][15]. Another technique to solve CEED problem without the h parameter is to normalize the fuel cost and emission components [6] and solve the single objective function using evolutionary algorithms (EA). In these approaches for the chosen value of weights will give one particular PO solution at a time. However, the disadvantage of these methods is that it requires multiple runs to find the set of PO solutions. Each evolutionary algorithm has its own characteristics and merits; therefore it is natural to think of integrating these different algorithms to handle a complex problem like CEED. In the research field of Evolutionary Algorithms merging of two or more optimization algorithms into a single framework is called hybridization. In [16][17][18][19][20][21] hybrid multi-objective optimization algorithms have been successfully applied to solve CEED, various complex engineering problems, and standard test functions. The results indicate that the hybrid algorithms are effective, can exchange elite knowledge within the hybrid framework, can do parallel processing, can improve the exploration and exploitation capabilities and can yield more favorable performance than any single algorithm. In order to obtain a globally optimal solution without being trapped in local optima requires a tradeoff between exploration and exploitation task in the search process. Exploration phase in any algorithm is important to search every part of the solution domain to provide an estimate of the global optimal solution. On the other hand exploitation phase in any algorithm is important to improve the best solutions found so far by searching in their neighborhood. In this chapter, a hybrid framework using Non-Dominated Sorting Genetic Algorithm II (NSGA II) [22] and Multi-objective Particle Swarm Optimization (MOPSO) [23] is used to solve the CEED problem. This hybrid framework integrates the desirable features of the NSGA II and MOPSO while curbing their individual flaws. These population-based approaches use different techniques for exploring the search space and when they are combined will improve the tradeoff between the exploration and exploitation tasks to converge around the best possible solutions. The main purpose of this hybridization technique is to obtain a well-spread and well-diverse PO solution. 
When the proposed hybrid algorithm is used to solve the highly complex CEED problem, the PO solution is obtained in fewer iterations and is also computationally faster when compared to MOPSO. The rest of the chapter is organized as follows. The next section formulates the CEED problem. In Section 3, the transmission loss handling procedure and the constraint handling procedure are explained. In Section 4 a short review of NSGA II and MOPSO is provided. Section 5 is devoted to explaining the hybrid algorithm. In Section 6 the hybrid algorithm is applied to the standard IEEE 30 bus system and the simulation results are discussed. Finally, the conclusion is drawn in Section 7. Formulation of combined economic emission dispatch (CEED) problem The combined economic emission dispatch problem has two conflicting objectives. The first objective can be stated as determining the optimal power generation schedule from a set of online generating units to satisfy the load demand, subject to several physical and operational constraints, so as to minimize the fuel cost. The second objective can be stated as determining the optimal power generation schedule from a set of online generating units to satisfy the load demand so as to minimize the pollutant emissions produced by the generating units. Both conflicting objectives have to be minimized at the same time, because operating the system with minimum cost will result in higher emission, while considering only the minimum environmental impact is not practical as it results in high production cost of the system. This section formulates the objective functions of the CEED problem along with equality and inequality constraints to maintain rigorous standards to meet the practical requirements of the power system. The goal of this chapter is to find the Pareto-optimal solutions of the CEED problem which minimize both these objectives subject to constraints. The mathematical formulation is as follows. Objective functions of CEED problem The general formulation for a multi-objective optimization problem (MOOP) is to minimize a number of objective functions simultaneously. In the general mathematical model [21], f(x) represents the vector of objectives and f_i(x), i = 1, 2, ..., m, are scalar objective functions which map the decision variable x into the objective space, f_i : R^n → R. The n-dimensional variable x is restricted to lie in a feasible region D which is constrained by j inequality constraints and k equality constraints. The decision variable x can be written more suitably as x = [x_1, x_2, ..., x_n]^T, where T denotes transposition. The decision variables are restricted to take values within their lower and upper bounds; these bounds define the decision space [3]. In the MO CEED problem the number of objectives is m = 2. The mathematical model of CEED is the minimization of the two objectives subject to the power balance equality constraints h(x) and bounds. The function f_1(x) represents the minimization of the total fuel cost and the function f_2(x) represents the minimization of the emissions from the fossil fuel fired plants. The decision variable x consists of the real power generation of the n generating units and can be written as x = [Pg_1, Pg_2, ..., Pg_n]^T, where Pg_i is the real power output of the i-th generator. Power plants commonly have multiple valves that are used to control the power output of the units.
In a practical generating unit, when the steam admission valves in thermal units are first opened, a sudden increase in losses is registered, which results in ripples in the cost function. In order to model these ripples accurately, sinusoidal functions are added to the quadratic cost function [24]. The resulting cost function contains higher-order nonlinearity and makes the problem non-differentiable and non-convex. Hence there are two versions of the fuel cost function: the quadratic function represented by f_1(x) and the combination of a quadratic and a sinusoidal (valve-point) function represented by f_1,V(x). In the two versions of the fuel cost function, a_i, b_i, c_i represent the cost coefficients of generator i, while e_i and f_i are coefficients modelling the valve-point effect of generator i. The second objective function f_2(x) is an emission function which takes into account the major pollutants caused by the fossil fuel fired power plants. The main pollutants from the power plants are the sulfur oxides and nitrogen oxides. The sulfur oxide emissions are proportional to the fuel consumed by the power plants and have the same form as the fuel cost function given by (6); the sulfur oxide emission function is stated in (8) [7]. The nitrogen oxide emissions are difficult to evaluate, as nitrogen is available both in the air and in the fuel. The production of nitrogen oxides is related to boiler temperature and air content, and their modeling consists of straight lines and exponential terms; the nitrogen oxide emission function is stated in (9). The total emission function is obtained by adding the coefficients of (8) and (9), which gives the combination of the mixture of sulfur oxide and nitrogen oxide pollutants [7], and is stated in (10). The total emission function given by (10) has a quadratic term and an exponential term, which makes the function highly nonlinear. In (10), α_i, β_i, γ_i, η_i, δ_i are the emission coefficients of generator i. The modeling of the emission function is very important because, according to the Amendments of the Clean Air Act, regulatory agencies might decide to limit power plant emissions in areas where there are high concentrations of harmful contaminants. Active power balance equality constraint and bounds In order to ensure that the total real power generation exactly matches the total load demand Pd plus the transmission loss Pl in the system, the power balance equality constraint given in (11) should be satisfied. The transmission losses in the power network are a function of Pg and can be represented using B-matrix coefficients (Kron's loss formula [1]), where B_ij, B_0i, B_00 are the transmission loss coefficients. There are instances in the literature where the power losses in the system are neglected and the power balance equation given by (11) is curtailed as in (13). The equations given by (11) and (13) are the most common forms of the power balance equation found in the literature. The power output of each generator i should lie within its minimum limit Pg_i^min and maximum limit Pg_i^max, as given by (14). Combined economic emission dispatch The purpose of the CEED problem is to determine the Pareto-optimal real power generation schedule that minimizes the two conflicting objectives given by (7) and (10) while satisfying the real power equality constraint given by (11) and the bounds given by (14).
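As an illustration, the sketch below implements the textbook forms that the description above implies: a quadratic fuel cost with a rectified-sine valve-point term, a quadratic-plus-exponential total emission function, and Kron's B-coefficient loss formula. The coefficient symbols follow the text, but the function names are ours and the sketch is a hedged approximation of equations (6)-(12), not a reproduction of the chapter's data or code.

```python
import numpy as np

def fuel_cost(pg, a, b, c, e=None, f=None, pg_min=None):
    """f1(x): quadratic fuel cost; when e, f, pg_min are supplied, the rectified-sine
    valve-point term |e_i * sin(f_i * (Pg_i^min - Pg_i))| is added (f1,V form)."""
    pg = np.asarray(pg, dtype=float)
    cost = a * pg**2 + b * pg + c
    if e is not None:
        cost = cost + np.abs(e * np.sin(f * (pg_min - pg)))
    return float(cost.sum())

def total_emission(pg, alpha, beta, gamma, eta, delta):
    """f2(x): quadratic term plus exponential term, summed over all units."""
    pg = np.asarray(pg, dtype=float)
    return float((alpha * pg**2 + beta * pg + gamma + eta * np.exp(delta * pg)).sum())

def kron_loss(pg, B, B0, B00):
    """Transmission loss Pl = Pg' B Pg + B0' Pg + B00 (Kron's loss formula)."""
    pg = np.asarray(pg, dtype=float)
    return float(pg @ B @ pg + B0 @ pg + B00)

def power_balance_residual(pg, pd, B, B0, B00):
    """Equality constraint h(x): sum(Pg) - Pd - Pl(Pg), which should be (near) zero."""
    pg = np.asarray(pg, dtype=float)
    return float(pg.sum() - pd - kron_loss(pg, B, B0, B00))
```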
The bi-objective CEED problem can therefore be formulated as the simultaneous minimization of (7) and (10) subject to (11) and (14). In the MO CEED problem, the economic and emission objectives conflict with each other and it is not possible to satisfy them simultaneously. There is no way of improving these objectives without degrading at least one of them, and the resulting set of non-dominated solutions thus obtained is called the Pareto-optimal set. The objective function values of all elements of the PO set in the objective space constitute the Pareto front. When a sufficient number of PO solutions is available for the CEED problem, it is possible to find a convex curve containing these solutions to produce the Pareto front. The two main goals of the MO CEED problem are: 1. Find a set of non-dominated solutions which lie on the Pareto-optimal front. 2. Find a wide spread of non-dominated solutions to represent the entire range of the Pareto-optimal front. Constraint handling mechanism At any stage of the algorithm, whenever a new population is being generated it is very important to make sure that the population lies within the decision space. While solving the CEED problem this implies that the population should satisfy the equality constraints and bounds. If the transmission losses are neglected, then the k-th variable of the candidate solution, Pg_k, can be calculated by subtracting the sum of the power generations (excluding Pg_k) from the power demand Pd. If the power transmission losses are considered, determining Pg_k while maintaining the equality constraint becomes harder. It is done using the following steps. Step 1. Update the variables belonging to the set α_n by the normal optimization process of an evolutionary algorithm. Here rand is a uniformly distributed random number in the range [0, 1]. The set α_n contains all the integers in the range [1, n] except k, where k is a randomly generated integer which lies in the range [1, n]. Step 2. If updating of the variables is carried out using any other technique, then regulate the updated variables which violate the lower bounds as Pg_i = Pg_i^min, i ∈ α_n, and regulate the updated variables which violate the upper bounds as Pg_i = Pg_i^max, i ∈ α_n. Step 3. Obtain the value of the k-th variable of the candidate solution, Pg_k, by solving the quadratic equation (17), whose coefficients are associated with the variables belonging to the set α_n and the transmission loss coefficients [7]. To improve the potential candidate solution and also to improve the flexibility and diversity of the optimization algorithm, the value of k is a randomly generated integer between 1 and n. Out of the two roots of the quadratic equation (17), one root will be selected as the value of the variable Pg_k using the following procedure. If both roots of the quadratic equation lie within the bounds, then the root which has the minimum value is selected. If only one root lies within the bounds, this root is selected as the value of Pg_k and the other root, which lies outside the bounds, is neglected. If both roots lie outside the bounds, the value of Pg_k is set equal to Pg_k^min. Step 4. Calculate the residue P_RD by subtracting the total system demand Pd and the total system transmission loss Pl from the sum of the total power generation, Σ Pg_i (i = 1, ..., n). If |P_RD| < tol, then go to step 7; otherwise go to step 5. Here, tol is the demand tolerance, usually set as 0.001 p.u. Step 5. Recalculate Pg_i using Eq. (16). Step 6. Repeat step 3, step 4 and step 5 until |P_RD| < tol.
This step will ensure that the candidate solution always lies within the decision space. Step 7. Stop the constraint handling procedure. The main purpose of this constraint handling mechanism is to increase the flexibility and diversity of the algorithm and to make sure that the candidate solution generated at any point of the algorithm always lies within the decision space. NSGA II and MOPSO algorithms for solving CEED problem Several evolutionary multi-objective (EMO) algorithms like NSGA II, MOPSO, SPEA 2 (Strength Pareto Evolutionary Algorithm) and GDE3 (Generalized Differential Evolution 3) have been designed and used in solving numerous complex real-world problems involving two or more objectives. All these algorithms can find multiple Pareto-optimal solutions in a single run. Out of all the available algorithms, two of the widely used, reliable methods for solving bi-objective optimization problems are NSGA II and MOPSO. This section provides a review of these two EMO algorithms. NSGA II was proposed in [22] as an improvement of the NSGA proposed in [25]. The NSGA II algorithm is a revised version of NSGA designed to overcome the following criticisms: • Computational complexity associated with non-dominated sorting. • Lack of an elite-preserving strategy. • Lack of a mechanism for maintaining diversity among obtained solutions. The NSGA II algorithm is very efficient for solving multi-objective optimization problems since it incorporates an efficient elitism-preserving technique using non-domination sorting. The population is ranked based on non-domination sorting before selection is performed. All non-dominated individuals are classified into one category. The next layer of non-dominated individuals is then identified after the group of already classified individuals is set aside. This process is continued until all individuals in the population are classified. NSGA II also uses a mechanism for preserving the diversity and spread of the solutions without specifying any additional parameters (NSGA uses fitness sharing). This crowding-distance operator guides the selection process towards a uniformly spread-out Pareto-optimal front. The NSGA II algorithm for solving the CEED problem is stated below: • Specify the parameters for the CEED problem. In order to handle multiple objectives, Pareto dominance is incorporated into the PSO algorithm, yielding the MOPSO algorithm proposed in [23]. The algorithm proposed in [23] uses an external repository of particles to keep a record of the non-dominated vectors found along the search process. At each generation, for each particle in the swarm, a leader is selected from the external repository using roulette-wheel selection. This leader then guides other particles towards better regions of the search space by modifying the flight of the particles. A special mutation operator is applied to the particles of the swarm, and also to the range of each design variable of the problem to be solved, to improve the explorative behavior of the algorithm. The value of the mutation operator is decreased during the iterations. To produce well-spread Pareto fronts the MOPSO algorithm in [23] uses an adaptive grid.
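Both of the algorithms reviewed above, as well as the hybrid framework described in the next section, repeatedly rank the population into non-dominated fronts and break ties using a crowding-distance measure. The following compact sketch illustrates that ranking step for two minimization objectives; it is a simplified illustration, not the exact implementation of [22].

```python
import numpy as np

def non_dominated_sort(F):
    """Sort objective vectors F (n x 2, minimization) into fronts of indices."""
    F = np.asarray(F, dtype=float)
    n = len(F)
    dominated_by = [set() for _ in range(n)]   # indices of solutions dominated by i
    dom_count = np.zeros(n, dtype=int)         # number of solutions dominating i
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if np.all(F[i] <= F[j]) and np.any(F[i] < F[j]):
                dominated_by[i].add(j)
            elif np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                dom_count[i] += 1
    fronts = [[i for i in range(n) if dom_count[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]                          # drop the trailing empty front

def crowding_distance(F):
    """Crowding distance of each objective vector within a single front."""
    F = np.asarray(F, dtype=float)
    n, m = F.shape
    dist = np.zeros(n)
    for k in range(m):
        order = np.argsort(F[:, k])
        dist[order[0]] = dist[order[-1]] = np.inf       # boundary solutions kept
        span = F[order[-1], k] - F[order[0], k]
        if span == 0:
            span = 1.0
        dist[order[1:-1]] += (F[order[2:], k] - F[order[:-2], k]) / span
    return dist
```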
The MOPSO algorithm for solving the CEED problem is stated below: • Specify the parameters for the CEED problem. Hybrid NSGA II and MOPSO algorithm for solving CEED problem The mechanism of the proposed hybrid approach for solving the CEED problem is to integrate the desirable features of NSGA II (retaining the elitism feature) and MOPSO (exploitation capability) while curbing their individual flaws (NSGA II does not have an efficient feedback mechanism; PSO tends to over-utilize resources). The mechanism used to explore the search space differs in the two algorithms. GA uses mutation and crossover operators, which enhance the exploration task of the hybrid algorithm. The particles in PSO are influenced by their own knowledge and by information shared among swarm members. PSO enhances the exploitation task of the hybrid algorithm by finding better solutions from the good ones, searching the neighborhood of good solutions. In this hybrid algorithm, at every generation the Pareto dominance of the population is computed and, based on these values, non-dominated sorting is performed [19]. In order to avoid premature convergence, the elite upper half of the population is enhanced by the NSGA II algorithm, while the lower half of the population is treated as swarm particles and optimized by MOPSO to make them converge around the best possible solutions. The hybrid NSGA II-MOPSO algorithm for solving the CEED problem is stated below: • Specify the parameters for the CEED problem. Numerical tests In order to validate the proposed hybrid algorithm, the CEED problem was solved for the IEEE 30 bus system and the results are presented in this section. The fuel cost coefficients with valve-point loading, emission coefficients, and generator limits are adapted from [26] and are given in Table 1. The transmission loss B-matrix coefficients, obtained by running a load flow program in [26], are adapted here and given in Table 2. The total power demand in the system is 2.834 p.u. on a 100 MVA base. A program in MATLAB was developed for the hybrid algorithm to perform CEED and executed on a 1.60 GHz Intel T2050 processor, 1.5 GB RAM HP Pavilion laptop with the WINDOWS 7 operating system. Various test cases are considered to compute the Pareto front of the multi-objective CEED problem. The Pareto-optimal front is obtained using the NSGA II algorithm and also using the MOPSO algorithm given in Section 4. The Pareto front obtained from the hybrid approach given in Section 5 is then compared with the Pareto fronts obtained using the NSGA II and MOPSO algorithms. In case 1 the fuel cost function is modeled as a quadratic function with a sine term to incorporate the valve-point effect. The transmission losses are also considered in this case. The Pareto fronts obtained using NSGA II, MOPSO, and hybrid NSGA II-MOPSO are shown in Figures 1, 2 and 3 respectively. In all these figures there is a discontinuity in the Pareto front due to the modeling of the valve-point loading effect of the generators. We can observe from Figure 2 and Table 3 that the MOPSO algorithm has difficulties in obtaining a well-spread Pareto front and also converges very slowly to the Pareto front when compared to NSGA II. This can be improved if the proposed hybrid approach is used to solve the CEED problem. The parameter settings for the hybrid algorithm are the same as those given above, except for the settings provided here: population size nPop = 200; maximum number of iterations MaxIt = 50; repository size nRep = 20.
The extreme points of the Pareto front and the execution time of the proposed NSGA II-MOPSO hybrid algorithm are provided in Table 3. From Table 3 it is clear that the extreme points found by the hybrid algorithm are better than those of the NSGA II and MOPSO algorithms (Figure 3 shows the Pareto-optimal curve for the IEEE 30 bus system obtained using the hybrid NSGA II-MOPSO algorithm). Even though the execution time of the hybrid algorithm is longer than that of NSGA II, it is able to find a better-spread Pareto front than NSGA II. The hybrid algorithm is far superior to MOPSO in terms of convergence speed and also in finding a well-spread Pareto-optimal front. In case II the valve-point effect is neglected from the fuel cost curve and the problem is solved using the proposed hybrid approach with the same parameters. The Pareto front obtained is shown in Figure 4 and is a continuous curve, in contrast to the Pareto front shown in Figure 3, where the front is discontinuous due to the effect of the valve-point loading in the cost curve. Both these case studies indicate that the hybrid approach is effective for solving the CEED problem. Conclusion In this chapter, a hybrid multi-objective optimization algorithm based on NSGA II and MOPSO has been proposed to solve the highly nonlinear, highly constrained combined economic emission dispatch problem. At any stage of the algorithm, only feasible solutions are created because of the incorporation of the proposed constraint handling mechanism. During every iteration of the hybrid algorithm a new population is created; NSGA II is applied to the best-performing individuals whereas MOPSO is applied to the lower-ranked individuals to strengthen the exploration and exploitation capability of the algorithm. This hybrid approach is tested on the IEEE 30 bus system. The results obtained show that the hybrid approach is efficient for solving the CEED problem and is also able to converge quickly to a better Pareto-optimal front when compared to the MOPSO algorithm. The results obtained by the hybrid approach also demonstrate that it is able to yield a wide spread of solutions and converge to the true Pareto-optimal front.
5,887
2017-12-20T00:00:00.000
[ "Computer Science" ]
A Practical Method of Optimal RSUs Deployment for Smart Internet of Vehicle Abstract: The roadside units (RSUs) are absolutely indispensable elements in the sparse highway scenario for the smart Internet of Vehicle (IoV). Most recent RSU deployment methods consider vehicle mobility, warning message reception probability and time delay separately, and do not address the situation in which the accident vehicle cannot send the warning message. In this paper, we present a comprehensive analysis that integrates all these factors. In particular, we model three kinds of common highway scenarios and give a closed-form expression for the number of RSUs deployed along the highway. The proposed method has been validated by extensive simulations using Matlab and NS2, and its performance has been compared with the TAPC method. Results reveal that our proposed method has better performance under the condition of high warning message probability. A. Background and Significance The Global Status Report on Road Safety 2013 shows that 1.24 million people were killed on the world's roads in 2010. This is unacceptably high. Road traffic injuries take an enormous toll on individuals and communities as well as on national economies. Middle-income countries, which are motorizing rapidly, are the hardest hit [1]. The smart Internet of Vehicle (IoV) is a promising Intelligent Transportation System (ITS) technology, which enables vehicles to communicate with each other and with Road Side Units (RSUs) by wireless radio. Recently, smart IoV has developed rapidly due to its safety applications, such as emergency warning message propagation and rescue message dissemination [2]. The roadside units (RSUs) are an absolutely indispensable element on the sparse highway for smart IoV, because Vehicle-to-Roadside (V2R) communication is essential when Vehicle-to-Vehicle (V2V) communication alone is not enough to support safety message propagation. Meanwhile, vehicles nearby, driving in the same direction and at almost the same velocities, can form a cluster to reduce the communication blind areas [2]-[6]. Distinct from the urban scenario, the traffic flow density is low in the highway scenario. Once an accident happens in an area where there are no vehicles within its communication range, or the accident is so severe that the communication devices on the vehicle are damaged, the following vehicles could take more time to notice it, which increases the probability of a second accident and furthermore may cause a traffic jam. However, the cost may be high to cover the whole area, while the message time delay will be relatively large if only a few RSUs are deployed. A rational number of RSUs along the highway is therefore necessary.
The problem of RSU deployment along the highway can be thought of from two aspects: (1)What is the least number of RSUs that should be deployed while smart IoV can provide a certain level of connectivity and Quality of Service(QoS) for safety applications? (2)What is the best RSU distribution along the roads that satisfies the safety message propagation as well as emerging the alert message? We consider that RSU deployment should be uniformly distributed between two hot spots along the highway, because the vehicle and the accident site distribution is a stochastic process. In this respect, our contribution in this paper is presenting a RSU generating warning message method. Then we propose a new RSUs deployment method, that considers a wide range of scenarios unlike the other related works, which consider only a few aspects such as RSU coverage radius from technical view or the traffic flow density from traffic view. In this paper, we concentrate on the delay analysis for emergency message propagation both in the single vehicle and vehicle in the cluster scenario. We only calculate the meaningful time delay that is useful for the vehicles approaching the accident site. Due to the long distance from the accident site, vehicles have enough time to respond to the accident including reducing speed or leaving from the nearby exit. Finally we give the closed-form expression of RSU numbers along the highway. B. Ralated Works As former work concerning RSUs deployment, Liu et al. consider delay analysis through the curve fitting method, and present a mathematical model and analysis for broadcasting delay in cluster based Vehicular Ad Hoc Networks (VANETs) [7]. However, the optimal results are equal to full coverage numbers, so that the cost of deployment is high. From the economic point of view, this is unacceptable. O Trullols et al. maximize the number of vehicles that get in contact with the RSUs over the considered area. They formulate the problem as a maximum coverage problem [8]. They find the knowledge of vehicular mobility is a main factor in achieving an optimal deployment, while our simulation results are obtained under given traffic flow densities. Baber Aslam et al. present an analytical Binary Integer Programming(BIP) method and a novel Balloon Expansion Heuristic(BEH) method for placement of a limited number of RSUs in an urban region, then after comparison finds that BEH is more versatile and performs better than BIP [9]. Both two methods need to use enumeration method, which becomes inefficient with the increase in the size of the region. Tao et al. study the vehicle mobility characteristics along the highway and propose a Cluster-based RSU deployment scheme with Traffic-Aware Power Control Method to maximize the network performance, as well as minimizing the energy consumption of RSUs [10]. However the model does not consider the situation without the clusters, while we take consideration of both with and without the cluster in our model. Cumbal et al. present a RSU deployment method for an ideal Multi-hop communication environment between vehicles and RSUs [11]. We think the safety message propagation is unreliable through multi-hop communication. Xiong et al. transform the gateway deployment problem into a vertex selection problem in a graph, and propose a heuristic algorithm to find the optimal places. 
The algorithm utilizes the fine-grained statistical characteristic of mobility [12].The starting point of the algorithm is trying to keep the connection between vehicles and RSUs, which we think is not necessary. Cheng et al. propose a genetic algorithm-based sparse coverage with statistical analysis. They model a resourceconstrained problem as an NP-hard budget coverage problem [13]. However the algorithm needs a lot of iterations to achieve the results. Filippini et al. study the dynamics of infrastructure deployment by using game theoretical tools [14], however, the model is not suitable for safety message propagation. Renzo Massobrio et al. introduce a multi-objective formulation of the problem of locating roadside infrastructure for vehicular networks over realistic urban areas. They consider the deployment cost, and a traffic and coverage model for quality-of-service is defined, but they do not take consideration of accidents as a more realistic scenario in the problem, which we define in this paper [15]. Ahmed Makkawi et al. introduce a cumulative weight based method as a solution to the placement problem in the urban, rural and mountainous areas. The method gets the weight of each site of interest and adds the weights of the surrounding neighbors to its weight and considers the highest weight first in the distribution process [16]. The problem of RSU placement is formulated to binary integer programming by Nik Mohammad Balouchzahi et al. They present an optimization method based on BIP to find optimal locations for RSU installation in highway and urban scenarios. However, the model does not consider the influence of the traffic densities [17]. Xu Liya et al. investigate the problem of optimal RSU placement by developing a randomized algorithm, which gives an approximation to the optimal distance to guarantee the information can be passed to RSUs from the accident site via the VANET. However, the algorithm ignores the speed distribution function, which we take consideration into our model [18]. Tsung-Jung Wu et al. propose a capacity maximization placement scheme. Although it adapts to different vehicle population distribution and different vehicle speeds on the road, when the vehicle distribution exhibits more fluctuations, the set of RSUs is spaced apart more uniformly on the road [19]. The scheme also does not consider the situation of accidents, which will fluctuate the vehicle distribution and make it less accurate. And it also does not give the closed-form expression of the RSU numbers. A. Highway model Here we suppose all the vehicles are equipped with communication devices which support both V2V and V2I mode. Highways are usually segregated by toll stations or rest stops. Due to the convenience of connecting to other networks, RSU should be deployed on the toll stations or the rest stops. Therefore we only need to consider the road segment between two toll stations or rest stops. We suppose L miles straight path between two stations, and the inter vehicle distances are i.i.d and exponentially distributed with parameter ρ. Specifically, the CDF of the inter-vehicle distance x is given by Vehicles nearby are apt to form clusters. Once the cluster is formed, we suppose the vehicle will maintain its speed for stable communications. Taking account of the performance of clusters, we assume that vehicles within the cluster communicate with each other no more than two-hops, so the cluster size can make a simple approximation by where R denotes the maximum communication range. 
R also satisfies the SNR threshold, and φ is the cluster size expansion coefficient, which is flexible and depends on the number of communication hops. Suppose that the RSUs are uniformly distributed along the road segments, and there are N RSUs deployed between the stations. For simplicity of analysis we assume the same communication range for the V2V and V2I modes. The blind area distance between two adjacent RSUs is then given by (3); Figure 1 shows the relationship among the parameters in (3). When the traffic flow density (TFD) is high, there will always be a V2V communication link, so the lower bound of the time delay is equal to the radio transmission delay. However, when the TFD is low, V2V communication links may be rare. If an accident happens outside the RSU coverage area, which we define as a blind area, the alerting message cannot be disseminated through the IoV; likewise, if the accident happens within an RSU coverage area but is so severe that the communication devices are damaged, the vehicle cannot generate the alert message. In these cases the RSU will generate it, as shown in Figure 2. The RSU-generated warning message time delay t_rsu is then expressed in terms of v, the speed of the vehicle reported to the former RSU the last time: if the next RSU does not receive the heartbeat message within t_rsu, it will generate the warning message and broadcast it to the previous RSUs. We ignore the wireless propagation delay and the time delay between RSUs. The position where the accident happens obeys a uniform distribution. The probability density function of the vehicle speed v is a truncated Gaussian PDF, where µ_v is the average speed, σ_v is the standard deviation of the vehicle speed, the maximum speed is v_max = µ_v + 3σ_v, the minimum speed is v_min = µ_v − 3σ_v, and erf(·) is the error function [20]. Since the position and the speed are independent, the PDF of the RSU time delay follows. If the vehicle is located close to the accident site, it will have less time to decelerate or dodge. We only discuss the situation of vehicles in the closest blind area, as described in Figures 3, 4 and 5. Figure 3 illustrates the low TFD scenario: a vehicle at a moderate speed will obtain the warning message when it comes into the RSU coverage area. Figure 4 illustrates the medium TFD scenario, in which there is a cluster where the vehicle will stay for some purpose such as cooperative driving or infotainment sharing. Figure 5 shows another situation, in which the warning message is delivered by the vehicles behind. Here we do not consider the radio transmission delay or the data processing delay. Similar to other works, the warning message is sent without feedback or retransmission. To avoid a secondary accident, the following vehicle needs a time advance. Let the minimum advance be Ψ. Here we simply assume Ψ is the time a vehicle needs to decelerate from 100 km/h to 0 km/h, defined as Ψ = 15 s. For the sake of simplicity, the minimum safety distance is S_safemin = Ψv. We first calculate the vehicle time delay under the situation in Figure 3. The larger t_rsu is, the higher the probability that the following vehicle passes the RSU radio coverage without receiving an alerting message. If drivers then depend only on visual observation, this is a loss of the safety functionality of the RSU deployment plan. The position of the following vehicle satisfies 0 ≤ S_position ≤ S_blind. The CDF of S_position is approximately S_blind/(S_blind + 2R) ≈ 1, since S_blind ≫ 2R. The distance traveled by the following vehicle after the accident happens satisfies 0 ≤ S_vehicle ≤ v·t_rsumax. The alerting distance satisfies 0 ≤ S_alert ≤ S_blind + v·t_rsumax.
In order to guarantee that the following vehicle has an absolute time advance, we need to ensure S_blind + 2R − S_alert ≥ S_safemin, that is, 2R − v·t_rsumax ≥ S_safemin; substituting S_safemin gives the required condition. Another calculation of the vehicle time delay concerns the situation in Figure 5, which is also a low traffic flow density situation. Suppose the yellow car receives a warning message through RSU broadcasting and is faster than the green car. According to common human behaviors or the smart safety functionality, the yellow vehicle will then decelerate. We consider the situation in which the green car keeps its speed and receives the warning message through V2V communication from the yellow car. Since the PDF of the velocity is f(v), the PDF of the velocity difference can be derived, where v_d is the velocity difference, which is always greater than 0. Further, we obtain the corresponding time delay. Last, we consider the medium traffic flow density situation. As shown in Figure 4, there is a cluster formed by a few vehicles in the blind area, and we suppose they keep their speed until they reach the RSU radio coverage areas. Here, with respect to the cluster size, we need to focus on the probability that the cluster receives the warning message, for which the corresponding expression follows. B. Optimal deployment method of RSUs As the time delay is a random variable, we take the expectation of the variable when calculating the results. The expectation of t_rsu is computed first; we then take the expectation on both sides of inequality (9) and substitute (3), which yields the deployment condition. III. EXPERIMENTS Time delay is the critical index in the high-speed vehicular environment. Only a prompt time delay may help the following vehicle, which could then have enough time to decelerate. Some simulation parameters are shown in Table I. The expected numbers of RSUs along the highway (100 km), calculated with different average speeds and standard deviations of speed under the Figure 3 scenario, are shown in Table II. According to the simulation results, the standard deviation of speed has an impact on the number of RSUs: when the standard deviation is larger, the expected number of RSUs is smaller. Figure 10 shows other conditions whose occurrence probability is lower than that of the above scenario. When the highway length is 100 km and the total average speed and standard deviation are 90 km/h and 27 km/h, the expected number of RSUs is 28, whereas the corresponding result in Table II is 73. From these comparisons we can learn that different models change the results. Based on the simple principle that the larger the number of RSUs, the smaller the blind areas, and since we should take the result from the higher-occurrence-probability scenarios, we assume the larger number of RSUs is more reliable. Under the medium TFD situation, when N is larger than 106, we assume the cluster can receive the RSU-generated warning message with probability 1 and has enough distance to decelerate. According to (3), the blind area distance is then 335 m, which is smaller than the largest cluster size, which means the cluster can provide temporary coverage for the blind area. Furthermore, we use NS2 to simulate the scenario described in Figure 3; the parameters are shown in Table III. IV. RESULTS AND DISCUSSION Figure 6 shows the relationship between the number of RSUs and the average speed. We can also obtain the corresponding probability distribution of the RSU number; in a similar way, we can obtain the probability of the RSU number with different average speeds when the standard deviation is fixed, as shown in Figure 8.
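To illustrate the style of calculation, the following Monte Carlo sketch samples the vehicle speed from the truncated Gaussian described above and evaluates the expected RSU-generated delay and the expected safety margin 2R - v*t_rsu - Ψ*v for a given N. The uniform spacing S_blind = L/(N+1) - 2R, the heartbeat timeout (S_blind + 2R)/v for the accident vehicle, and the numeric values of R and L are our assumptions for illustration, not the paper's exact equations or parameters.

```python
import numpy as np

def sample_truncated_speed(mu, sigma, size, rng):
    """Vehicle speed: Gaussian truncated to [mu - 3*sigma, mu + 3*sigma] (m/s)."""
    v = rng.normal(mu, sigma, size=4 * size)
    v = v[(v >= mu - 3 * sigma) & (v <= mu + 3 * sigma)]
    return v[:size]

def expected_delay_and_margin(N, L, R, mu_v, sigma_v, psi=15.0,
                              n_samples=50_000, seed=0):
    """Monte Carlo estimate of the mean RSU-generated warning delay and of the
    mean safety margin 2R - v_f*t_rsu - psi*v_f for N uniformly spaced RSUs.

    Assumed model (ours): blind area S_blind = L/(N+1) - 2R, heartbeat timeout
    t_rsu equal to the time the accident vehicle (speed v_a) would need to reach
    the next RSU, and an independent following vehicle of speed v_f that needs
    psi seconds of advance. All distances in metres, speeds in m/s.
    """
    rng = np.random.default_rng(seed)
    v_a = sample_truncated_speed(mu_v, sigma_v, n_samples, rng)   # accident vehicle
    v_f = sample_truncated_speed(mu_v, sigma_v, n_samples, rng)   # following vehicle
    s_blind = max(L / (N + 1) - 2 * R, 0.0)
    t_rsu = (s_blind + 2 * R) / v_a
    margin = 2 * R - v_f * t_rsu - psi * v_f
    return t_rsu.mean(), margin.mean(), (margin >= 0).mean()

# Hypothetical parameters: 100 km segment, 300 m radio range, 90 km/h mean speed,
# 27 km/h standard deviation (converted to metres and m/s).
for n_rsu in (30, 70, 110):
    print(n_rsu, expected_delay_and_margin(n_rsu, L=100_000, R=300.0,
                                           mu_v=90 / 3.6, sigma_v=27 / 3.6))
```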
In terms of the Figure 5 situation, we take the expectation on both sides of inequality (12) and substitute (3). The relationship between the average velocity difference and the number of RSUs is shown in Figure 9. For a two-lane highway, there is always a velocity difference between the express lane and the slow lane, so we can obtain the relationship between the RSU number and the probability of the velocity difference with fixed average speed and standard deviation, shown in Figure 10. Finally, we compute the probability of a cluster vehicle receiving the warning message with (13), (14), and (15). For the sake of simplicity, we assume the cluster velocity is the average vehicle speed. The probability is close to zero when N equals 27 and L = 100 km. In addition, we compare the blind area with the cluster size: to keep the blind area longer than the cluster size, N should be smaller than 82. The probability is near 1 when N equals 106, so we take N from 27 to 106. The relationship among the cluster velocity, the number of RSUs, and the probability of a cluster vehicle receiving the warning message is shown in Figure 11. We test the packet loss rate of a vehicle in the blind area nearest to the accident site, as in Figure 3; the results are shown in Figure 12. The results illustrate that the nearer the vehicle is to the margin of the RSU coverage, the higher the packet loss rate, because the warning message has not yet been generated by the RSU while the vehicle keeps driving forward. Even so, the vehicle still receives part of the warning message packets, and wherever the vehicle is, it will receive the warning message, although the packet loss rate may be high. Owing to the nature of safety application messages, once the vehicle receives the message, it helps the driver become aware of the danger ahead. So the model is quite suitable for the scenario in Figure 3. The comparison with the TAPC method [10] can be seen in Figure 13. When the probability of receiving the warning message is higher than 92%, the interval between two RSUs calculated by our proposed method is larger than that of the TAPC method, which means our method outperforms the TAPC method under the high-receiving-probability condition. Because of the needs of safety applications, the high-probability circumstance is our main focus. The TAPC method is calculated mainly based on power consumption, which we do not think is the primary concern when receiving important safety information. Above all, for the sake of safety applications, the optimal number of RSUs along the highway should be the largest number given by the three models above. For better performance, RSUs should be uniformly distributed along the highway. From a realistic point of view, gas stations, rest areas, street lamp sites, etc., will be the RSU locations, so the actual blind area range is smaller than the theoretical calculation.

V. CONCLUSION

In this paper, we proposed a practical model for computing the optimal RSU number based on a single vehicle and clusters of vehicles in a smart IoV. Our model integrates three typical highway situations, and we derived closed-form expressions for the different situations, validated by extensive simulations. The proposed model and analysis provide guidelines for the design and management of a smart IoV to balance economic factors and safety application performance. In future work, one can consider extending our model to more complicated road shapes.
SemEval-2016 Task 8: Meaning Representation Parsing

In this report we summarize the results of the SemEval 2016 Task 8: Meaning Representation Parsing. Participants were asked to generate Abstract Meaning Representation (AMR) (Banarescu et al., 2013) graphs for a set of English sentences in the news and discussion forum domains. Eleven sites submitted valid systems. The availability of state-of-the-art baseline systems was a key factor in lowering the bar to entry; many submissions relied on CAMR (Wang et al., 2015b; Wang et al., 2015a) as a baseline system and added extensions to it to improve scores. The evaluation set was quite difficult to parse, particularly due to creative approaches to word representation in the web forum portion. The top scoring systems scored 0.62 F1 according to the Smatch (Cai and Knight, 2013) evaluation heuristic. We show some sample sentences along with a comparison of system parses and perform quantitative ablative studies.

Introduction

Abstract Meaning Representation (AMR) is a compact, readable, whole-sentence semantic annotation (Banarescu et al., 2013). It includes entity identification and typing, PropBank semantic roles (Kingsbury and Palmer, 2002), individual entities playing multiple roles, as well as treatments of modality, negation, etc. AMR abstracts in numerous ways, e.g., by assigning the same conceptual structure to fear (v), fear (n), and afraid (adj). Figure 1 gives an example for the sentences "The soldier was not afraid to die." and "The soldier did not fear death." There has been substantial interest in creating parsers to recover this formalism from plain text. Several parsers were released in the past couple of years (Flanigan et al., 2014; Wang et al., 2015b; Werling et al., 2015; Wang et al., 2015a; Artzi et al., 2015; Pust et al., 2015). This body of work constitutes many diverse and interesting scientific contributions, but it is difficult to adequately determine which parser is numerically superior, due to heterogeneous evaluation decisions and the lack of a controlled blind evaluation. The purpose of this task, therefore, was to provide a competitive environment in which to determine one winner and award a trophy to said winner.

Training Data

LDC released a new corpus of AMRs (LDC2015E86), created as part of the DARPA DEFT program, in August of 2015. The new corpus, which was annotated by teams at SDL, LDC, and the University of Colorado, and supervised by Ulf Hermjakob at USC/ISI, is an extension of previous releases (LDC2014E41 and LDC2014T12). It contains 19,572 sentences (subsuming, in turn, the 18,779 AMRs from LDC2014E41 and the 13,051 AMRs from LDC2014T12), partitioned into training, development, and test splits, from a variety of news and discussion forum sources. The AMRs in this corpus have changed somewhat from their counterparts in LDC2014E41, consistent with the evolution of the AMR standard. They now contain wikification via the :wiki attribute, they use new (as of July 2015) PropBank framesets that are unified across parts of speech, they have been deepened in a number of ways, and various corrections have been applied.

Other Resources

We made the following resources available to participants:

• The aforementioned AMR corpus (LDC2015E86), which included automatically generated AMR-English alignments over tokenized sentences.
• The tokenizer (from Ulf Hermjakob) used to produce the tokenized sentences in the training corpus.
• The AMR specification, used by annotators in producing the AMRs.
• A deterministic, input-agnostic trivial baseline "parser", courtesy of Ulf Hermjakob.
• The JAMR parser (Flanigan et al., 2014) as a strong baseline. We provided setup scripts to process the released training data but otherwise provided the parser as is.
• The same Smatch scoring script used in the evaluation.
• A Python AMR manipulation library, from Nathan Schneider.

Evaluation Data

For the specific purposes of this task, DEFT commissioned and LDC released an additional set of English sentences along with AMR annotations that had not been previously seen. This blind evaluation set consists of 1,053 sentences in a roughly 50/50 discussion forum/newswire split. The distribution of sentences by source is shown in Table 1.

Task Definition

We deliberately chose a single, simple task. Participants were given English sentences and had to return an AMR graph (henceforth, "an AMR") for each sentence. AMRs were scored against a gold AMR with the Smatch heuristic F1-derived tool and metric. Smatch is calculated by matching instance, attribute, and relation tuples to a reference AMR (see Section 7.2). Since variable naming need not be globally consistent, heuristic hill-climbing is done to search for the best match in sub-exponential time. A trophy was given to the team with the highest Smatch score under consistent heuristic conditions.

Participants and Results

11 teams participated in the task. Their systems and scores are shown in Table 2. Below are brief descriptions of each of the various systems, based on summaries provided by the system authors. Readers are encouraged to consult individual system description papers for more details.

CAMR-based systems

A number of teams made use of the CAMR system from Wang et al. (2015a). These systems proved among the highest-scoring and had little variance from each other in terms of system score.

6.1.1 Brandeis / cemantix.org / RPI (Wang et al., 2016)

This team, the originators of CAMR, started with their existing AMR parser and experimented with three sets of new features: 1) rich named entities, 2) a verbalization list, and 3) semantic role labels. They also used the RPI Wikifier to wikify the concepts in the AMR graph.

6.1.2 ICL-HD (Brandt et al., 2016)

This team attempted to improve AMR parsing by exploiting preposition semantic role labeling information retrieved from a multi-layer feed-forward neural network. Prepositional semantics was included as features in CAMR. The inclusion of the features modified the behavior of CAMR when creating meaning representations triggered by prepositional semantics.

6.1.3 RIGA (Barzdins and Gosko, 2016)

Besides developing a novel character-level neural translation based AMR parser, this team also extended the Smatch scoring tool with the C6.0 rule-based classifier to produce a human-readable report on the error pattern frequencies observed in the scored AMR graphs. They improved CAMR by adding to it a manually crafted wrapper fixing the identified CAMR parser errors. A small further gain was achieved by combining the neural and CAMR+wrapper parsers in an ensemble.

6.1.4 M2L (Puzikov et al., 2016)

This team attempted to improve upon CAMR by using a feed-forward neural network classification algorithm. They also experimented with various ways of enriching CAMR's feature set. Unlike ICL-HD and RIGA they were not able to benefit from feed-forward neural networks, but were able to benefit from feature enhancements.
Other Approaches

The other teams either improved upon their existing AMR parsers, converted existing semantic parsing tools and pipelines to AMR, or constructed AMR parsers from scratch with novel techniques.

6.2.1 CLIP@UMD (Rao et al., 2016)

This team developed a novel technique for AMR parsing that uses the Learning to Search (L2S) algorithm. They decomposed the AMR prediction problem into three problems: predicting the concepts, predicting the root, and predicting the relations between the predicted concepts. Using L2S allowed them to model the learning of concepts and relations in a unified framework which aims to minimize the loss over the entire predicted structure, as opposed to minimizing the loss over concepts and relations in two separate stages.

6.2.2 (Flanigan et al., 2016)

This team's entry is a set of improvements to JAMR (Flanigan et al., 2014). The improvements are: a novel training loss function for structured prediction, new sources for concepts, improved features, and improvements to the rule-based aligner in Flanigan et al. (2014). The overall architecture of the system and the decoding algorithms for concept identification and relation identification are unchanged from Flanigan et al. (2014).

6.2.3 (Butler, 2016)

No use was made of the training data provided by the task. Instead, existing components were combined to form a pipeline able to take raw sentences as input and output meaning representations. The components are a part-of-speech tagger and parser trained on the Penn Parsed Corpus of Modern British English to produce syntactic parse trees, a semantic role labeler, and a named entity recognizer to supplement the obtained parse trees with word sense, functional, and named entity information. This information is passed into an adapted Tarskian satisfaction relation for a Dynamic Semantics that is used to transform a syntactic parse into a predicate logic based meaning representation, followed by conversion to the required Penman notation.

6.2.4 (Bjerva et al., 2016)

This team employed an existing open-domain semantic parser, Boxer (Curran et al., 2007), which produces semantic representations based on Discourse Representation Theory. As the meaning representations produced by Boxer are considerably different from AMRs, the team used a hybrid conversion method to map Boxer's output to AMRs. This process involves lexical adaptation, a conversion from DRT representations to AMR, as well as post-processing of the output.

6.2.5 (Goodman et al., 2016)

This team developed a novel transition-based parsing algorithm using exact imitation learning, in which the parser learns a statistical model by imitating the actions of an expert on the training data. They used the imitation learning algorithm DAGGER to improve the performance, and applied an alpha-bound as a simple noise reduction technique.

6.2.6

This team applied Markov Chain Monte Carlo (MCMC) algorithms to learn Synchronous Hyperedge Replacement Grammar (SHRG) rules from a forest that represents likely derivations consistent with a fixed string-to-graph alignment (extracted using an automatic aligner). They drew an analogy between string-to-AMR parsing and phrase-based machine translation and came up with an efficient algorithm to learn graph grammars from string-graph pairs. They proposed an effective approximation strategy to resolve the complexity issue of graph compositions. They then used the Earley algorithm with cube-pruning for AMR parsing given new sentences and the learned SHRG.
6.2.7 CU-NLP (Foland and Martin, 2016)

This parser does not rely on a syntactic pre-parse or heavily engineered features, and uses five recurrent neural networks as the key architectural components for estimating AMR graph structure.

Result Ablations

We conduct several ablations to attempt to determine empirically which aspects of the AMR parsing task were more or less difficult for the various systems.

Impact of Wikification

The AMR standard has recently been expanded to include wikification, and the data used in this task reflected that expansion. Since this is a rather recent change to the standard and requires some kind of global external knowledge of, at a minimum, Wikipedia's ontology, we suspected performance on :wiki attributes would suffer. To measure the effect of wikification, we performed two ablation experiments, the results of which are in Figure 2. In the first ("no wiki"), we removed :wiki attributes and their values from reference and system sets before scoring. In the second ("bad wiki"), we replaced the value of all :wiki attributes with a dummy entry to artificially create systems that did not get any wikification correct. The "no wiki" ablations show that the inclusion of wikification in the AMR standard had a very small impact on overall system scores. No system's score changed by more than 0.01 when wikification was removed, indicating that systems appear to wikify about as well as they handle the rest of AMR's attributes. The "bad wiki" ablations show a performance drop of around 0.02 to 0.03 for six of the systems when wikification is corrupted, and a negligible performance drop for the remaining systems. This result indicates that the systems with a performance drop are doing a fairly good job at wikification.

Performance on different parts of the AMR

In this set of ablations we examine systems' relative performance on correctly identifying the instances, attributes, and relations of the AMRs. Instances are the labeled nodes of the AMR. In the example AMR of Figure 1, the instances are fear-01, soldier, and die-01. To match an instance one must simply match the instance's label. Attributes are labeled string properties of nodes. In the example AMR, there is a polarity attribute attached to the fear-01 instance with a value of "-". There is also an implicit attribute of "TOP" attached to the root node of the graph, with the node's instance as the attribute value. To match an attribute one must match the attribute's label and value, and the attribute's instance must be aligned with the corresponding instance in the reference graph. Relations are labeled edges between two instances. In the example AMR, the relations (f, s, ARG0), (f, d, ARG1), and (d, s, ARG1) exist. To match a relation, the labeled edge between two nodes of the hypothesis must match the label of the edge between the correspondingly aligned nodes of the reference graph. It should not be surprising that systems tend to perform best at instance matching and worst at relation matching. Note, however, that the best performing systems on instances and relations were not the overall best performing systems. Ablation results can be seen in Table 3.

Performance on different data sources

As discussed in Section 8, less formal sentences, sentences with misspellings, and sentences with non-standard representations of meaning were the hardest to parse. We ablate the results by domain of origin in Table 4.
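To make the instance/attribute/relation matching rules above concrete, here is a minimal, hypothetical Python sketch (not the official Smatch implementation). It scores a hypothesis against the Figure 1 reference under one fixed variable alignment, whereas real Smatch additionally hill-climbs over alignments.

# Triples for the Figure 1 reference AMR ("The soldier did not fear death.")
ref = {
    ("instance", "f", "fear-01"), ("instance", "s", "soldier"),
    ("instance", "d", "die-01"),
    ("attribute", "f", ("polarity", "-")), ("attribute", "f", ("TOP", "fear-01")),
    ("relation", ("f", "s"), "ARG0"), ("relation", ("f", "d"), "ARG1"),
    ("relation", ("d", "s"), "ARG1"),
}
# A hypothesis that misses the (d, s, ARG1) relation; variables pre-aligned.
hyp = set(ref) - {("relation", ("d", "s"), "ARG1")}

matched = len(ref & hyp)
precision, recall = matched / len(hyp), matched / len(ref)
f1 = 2 * precision * recall / (precision + recall)
print(f"P={precision:.3f} R={recall:.3f} F1={f1:.3f}")  # -> P=1.000 R=0.875 F1=0.933

The set intersection implements exactly the label-and-alignment matching described above; a missing relation lowers recall but not precision.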
While the strongest-performing systems tended to perform best across ablations, we note that the machine-translated and informal corpora were overall the hardest sections to parse.

Qualitative Comparison

In this section we examine some of the sentences that the systems found particularly easy or difficult to parse.

Easiest Sentences

The easiest sentence to parse in the eval corpus was the sentence "I was tempted." It has a gold AMR of:

(t / tempt-01 :ARG1 (i / i))

The mean score for this sentence was 0.977. All submitted systems except one parsed it perfectly. Another sentence that was quite easy to parse was the sentence "David Cameron is the prime minister of the United Kingdom." Two systems parsed it perfectly and a third omitted wikification but was otherwise perfect. Figure 3 shows a detailed comparison of each system's performance on the sentence. In general we see that shorter sentences from the familiar and formal news domain are parsed best by the submitted systems. Among the difficult items, by contrast, were short non-standard fragments, such as one with the gold AMR (y / yes) and the headline "MEDIA ADVISORY", whose gold AMR is (a / advise-01 :ARG1 (m / media)).

Data noise was another confounding factor. In the next example, which had an average score of 0.17, parsers were confused both by the misspelling ("lie" for "like") and by the quoted title, which all systems except UCL+Sheffield tried to parse for meaning.

Why not a title lie "School Officials Screw over Rape Victim?"

(t / title-01 :polarity - :ARG1-of (r / resemble-01 :ARG2 (t2 / title-01 :wiki "A_Rape_on_Campus" :name (n2 / name :op1 "School" :op2 "Officials" :op3 "Screw" :op4 "Over" :op5 "Rape" :op6 "Victim"))) :ARG1-of (c / cause-01 :ARG0 (a / amr-unknown)))

We note that none of these difficult sentences are conceptually hard for humans to parse. Humans have far less difficulty in resolving errors or processing non-standard tokenization than computers do.

There Can Be Only One?

We intended to award a single trophy to the single best system, according to the narrow evaluation conditions (balanced F1 via Smatch 2.0.2 with 5 restarts, to two decimal places). However, the top two systems, Brandeis/cemantix.org/RPI and RIGA, scored identically according to that metric. Hoping to elicit some consistent difference between the systems, we ran Smatch with 20 restarts, looked at four decimal places, and re-ran five times. Each system scored a mean of 0.6214 with a standard deviation of 0.00013. We thus capitulate in the face of overwhelming statistics and award the inaugural trophy to both teams, equally.

Conclusion

The results of this competition and the interest in participating in it demonstrate that AMR parsing is a difficult, competitive task. The large number of systems using released code lowered the bar to entry significantly but may have led to a narrowing of diversity in approaches. Low-level irregularities such as creative tokenization and misspellings befuddled the systems. We hope to conduct another AMR parsing competition in the future, in the biomedical domain, and also to conduct a generation competition.
Multiscale Numerical Simulations of Branched Polymer Melt Viscoelastic Flow Based on Double-Equation XPP Model

The double-equation extended Pom-Pom (DXPP) constitutive model is used to study the macro and micro thermorheological behaviors of branched polymer melt. The energy equation is deduced based on a slip tensor. The flow model is constructed based on a weakly-compressible viscoelastic flow model combined with the DXPP model, the energy equation, and the Tait state equation. A hybrid finite element method and finite volume method (FEM/FVM) is introduced to solve the above-mentioned model. The distributions of viscoelastic stress, temperature, backbone orientation, and backbone stretch are given in 4 : 1 planar contraction viscoelastic flows. The effect of Pom-Pom molecular parameters and a slip parameter on thermorheological behaviors is discussed. The numerical results show that the backbones are oriented along the direction of fluid flow in most areas and are in a spin-oriented state near the wall of the downstream channel where the shear is stronger. The temperature along y = −1 is slightly higher in the entropy-elastic case than in the energy-elastic case. The results demonstrate good agreement with those given in the literature.

Introduction

Branched polymers have attracted more and more attention because of their unique structural characteristics and properties; their development is now among the fastest in macromolecular materials. A branched polymer has more complex thermorheological behavior than other polymers, and its rheological behavior depends on the topological structure of the branched molecules [1,2]. Compared with a linear polymer, when the main chain of a branched polymer carries a certain number and length of branched chains, the viscoelasticity is significantly different. A branched polymer in shear flow shows similar strain softening but has a longer relaxation time at the ends of the branched molecular chains because of the constraint of the branches; moreover, a branched polymer in elongational flow shows entirely different strain softening. Therefore, branching has a great influence on polymer viscoelastic properties. In recent decades, researchers have developed many viscoelastic constitutive models for describing the rheological behavior of polymers based on different theories [3]. Among them, models based on molecular theory can more truly reflect the rheological properties of the fluid and can more fully reflect the flow of the fluid [4]. As far as we know, a branched polymer melt can be considered as a melt in which a certain concentration of branched molecules is embedded in a viscous melt. On this basis, McLeish and Larson [1] proposed the Pom-Pom model based on the Doi-Edwards reptating tube theory. In the Pom-Pom model, each branched molecule is simplified to a molecule with only two branch points, one at each end, and a certain number of arms at each branch point. This model is not completely consistent with the topological structure of branched molecules, but it is an important breakthrough in the field of viscoelastic constitutive models. The model introduces the important branching information and distinguishes the orientation relaxation time and the stretch relaxation time of the backbone. It can also describe the relaxation time of the branched molecules and its effect on the two relaxation times above. Subsequently, Verbeeten et al.
[5] improved the Pom-Pom model and proposed the extended Pom-Pom (XPP) model by using a slip tensor. This model overcomes some defects of the Pom-Pom model, such as the discontinuity of the steady-state stretch, the unbounded orientation at high strain, and the absence of a predicted second normal stress difference. In addition, Clemeur et al. [6,7] proposed a Double Convected Pom-Pom (DCPP) model to solve the problem that the solution of the XPP model is not unique. However, the DCPP model suffered from numerical instability in numerical simulation. On this basis, Clemeur and Debbaut [8] proposed a modified DCPP model, and Wang et al. [9] gave the Simplified Modified Double Convected Pom-Pom (S-MDCPP) model with good numerical stability and easy programmability.

Generally, there are two kinds of Pom-Pom molecular constitutive models: single-equation models and double-equation models. Because the single-equation model is simple to solve and easy to program, many studies have used the single-equation XPP model to simulate viscoelastic flows [10-17], but it cannot describe some microscopic information. The double-equation XPP (DXPP) model can describe the microscopic orientation and stretch of the backbone and can be used to study the influence of microscopic molecular parameters on the rheological behavior of branched polymers. However, owing to the complexity of the DXPP model, there are few reports on its numerical simulation. Therefore, the DXPP model is used in this paper to study the microscopic information on the orientation and stretch of branched molecules.

In the past twenty or thirty years, the numerical simulation of viscoelastic flow has developed rapidly; the main numerical methods are the finite element method, the finite volume method, and meshless methods. Although there are many numerical methods, each has its own advantages and disadvantages. No single method "dominates the world"; a method is merely appropriate or not for a given problem. Therefore, combining the merits of various methods into a hybrid algorithm is a trend in numerical simulation [16,18,19]. In this paper, a hybrid finite element method and finite volume method (FEM/FVM) is proposed based on the advantages of the finite element method and the finite volume method and on the characteristics of the problem to be solved.

In addition, since actual polymer processing is often a nonisothermal viscoelastic flow problem, the effects of temperature are also considered. The slip tensor of a viscoelastic fluid actually affects the energy equation; that is, the energy equation differs for different slip tensors [20,21]. Therefore, we derive the energy equation based on the slip tensor and study the influence of the slip parameter on the temperature.

Above all, the DXPP model is used to study the macro- and micro-rheological information of a branched polymer melt. The energy equation based on the slip tensor is deduced and used to study the influence of the slip parameter on the temperature. Subsequently, based on the weak compressibility and high specific heat capacity of the polymer melt, the hybrid FEM/FVM method is used to solve the above model, and the macro and micro thermorheological properties of the branched polymer are discussed according to the numerical simulation results.
Mathematical Models

The backbone tube orientation evolves according to (1), where ∇S ≡ ∂S/∂t + u·∇S − S·∇u − (∇u)ᵀ·S denotes the upper-convected time derivative of the orientation tensor S, and D is the rate-of-deformation tensor. The slip tensor B is defined in (2), where α is a material parameter defining the amount of anisotropy; λ_0b is the relaxation time of the backbone tube orientation; the exponential stretch relaxation time λ_s = λ_0s·exp(−ν(Λ − 1)) ensures that the stretch relaxes very fast and stays bounded at high strains; λ_0s is the relaxation time for the stretch; ν = 2/q, where q is the number of arms at the end of a backbone; tr(·) is the trace; and Λ is the backbone tube stretch, whose material derivative is governed by the stretch equation (4). Substituting (2) into (1) gives the orientation equation (3). The viscoelastic stress equation is (5), where G_0 is the plateau modulus and I is the unit tensor. In conclusion, (3) and (4) constitute a DXPP model describing the backbone tube stretch and orientation using two decoupled equations, and (5) gives the viscoelastic stress. Here the model is extended with a second normal stress difference when α ≠ 0. By defining η = G_0·λ_0b as the viscosity of the polymer, We = λ_0b·U/L as the Weissenberg number, and r = λ_0b/λ_0s as the relaxation time ratio, the dimensionless DXPP model can be written as (6)-(8), where β is the ratio of the Newtonian viscosity to the total viscosity, and U and L are the characteristic velocity and length used for nondimensionalization, respectively.

Governing Equations. In polymer processing, the weak compressibility of the polymer melt cannot be ignored. Therefore, weakly-compressible conservation equations are used to describe the melt flow. For weakly-compressible viscoelastic flows, the conservation equations of mass and momentum can be written as (9) and (10), respectively, where Re denotes the Reynolds number, and the density and viscosity appear in dimensionless form.

The energy equations of different viscoelastic fluids also differ because of their different slip tensors. The derivation of the energy equation based on the XPP slip tensor is given below. The general form of the energy equation based on the slip tensor is (11), where c_p is the specific heat, T is the temperature, q is the heat flux, and σ is the Cauchy stress tensor; their expressions are given in (12). Substituting (2) and (12) into (11) gives the energy equation (13). The second term on the right-hand side of (13) reflects the contribution of entropy elasticity; the last term reflects the contribution of energy elasticity, with a splitting parameter taking values in [0, 1]. The dimensionless energy equation follows, where Pe is the Peclet number, Br is the Brinkman number, and the temperature, specific heat, and heat-transfer coefficient appear in dimensionless form.

In addition, a P-V-T equation of state is necessary for the completeness of the governing equations, because the compressibility of the polymer melt is considered. The Tait state equation [18] is usually considered the classical empirical equation and is capable of describing both the liquid and solid regions, so the Tait state equation is used in this paper.
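To illustrate the role of the exponential stretch relaxation time λ_s = λ_0s·exp(−ν(Λ − 1)), the following Python sketch integrates the standard Pom-Pom stretch equation DΛ/Dt = Λ(D : S) − (Λ − 1)/λ_s in homogeneous simple shear with a frozen orientation component. The parameter values and the fixed orientation are illustrative assumptions, not values from this paper.

import numpy as np

q = 5                      # assumed number of arms per branch point
nu = 2.0 / q               # ν = 2/q as in the DXPP model
lam_0s = 1.0               # assumed stretch relaxation time (s)
gamma_dot = 2.0            # assumed shear rate (1/s)
S_xy = 0.3                 # assumed (frozen) shear component of S

def dLambda_dt(Lam):
    """Pom-Pom stretch equation with exponential stretch relaxation."""
    lam_s = lam_0s * np.exp(-nu * (Lam - 1.0))   # λ_s = λ_0s exp(-ν(Λ-1))
    stretching = Lam * gamma_dot * S_xy          # Λ (D : S) in simple shear
    relaxation = (Lam - 1.0) / lam_s
    return stretching - relaxation

# explicit Euler integration to steady state
Lam, dt = 1.0, 1e-3
for _ in range(200_000):
    Lam += dt * dLambda_dt(Lam)
print("steady backbone stretch Λ ≈", round(Lam, 4))

Because λ_s shrinks as Λ grows, the relaxation term rises steeply and the stretch settles at a bounded steady value, which is the boundedness property the text attributes to the exponential form.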
Numerical Methods

The flow of branched polymer melts is governed by the conservation equations of mass, momentum, and energy, the Tait state equation, and the DXPP constitutive model. The numerical simulation of the above model employs the hybrid FEM/FVM [18] method. The momentum equations are solved by the FEM, in which a discrete elastic viscous stress splitting (DEVSS) scheme is used to overcome the elastic stress instability, and an implicit iterative weakly-compressible Crank-Nicolson-based split scheme (WCNBS) is used to avoid the Ladyzhenskaya-Babuška-Brezzi (LBB) condition. The energy and DXPP equations are solved by the FVM, in which an upwind scheme is used for the strongly convection-dominated energy equation.

To analyze the accuracy of the algorithm mentioned above, we construct the DEVSS scheme based on (10) and consider its discretization in the time domain within a typical time subinterval, which gives the Wilson-θ form (16), where D is an added variable for constructing the DEVSS scheme and all the scheme parameters lie in [0, 1]. The truncation error of (16) is given in (17). Based on formula (17), scheme (16) has second-order accuracy when all the scheme parameters equal 0.5, which is adopted in this study as the Crank-Nicolson scheme.

Since the energy equation is deduced based on the slip tensor and the viscoelastic stress is calculated using a DXPP model that can describe the backbone orientation and stretch of the polymer molecules, we detail the solution of the energy equation and the DXPP model on a nonstaggered grid under the framework of the FVM. The energy equation and the DXPP model can be normalized as (18), where the coefficients are constants and the transported quantity and source term are defined in Table 1. The terms from left to right in (18) represent the time, convective, diffusive, and source contributions, respectively. The discretizations of the energy equation (13), orientation equation (6), and stretch equation (7) can all be written in the form (19) for a generalized quantity, where the source term after discretization comes from (13), (6), and (7).

In this paper, the hybrid FEM/FVM method described above is used to solve the weakly-compressible flow model based on the DXPP constitutive model. Details are as follows.

Step 2. Solve the momentum and mass conservation equations to calculate the velocity, pressure, and density under the framework of the FEM.

Step 3. Solve the energy equation and the DXPP constitutive equation to obtain the temperature, orientation S, and stretch Λ under the framework of the FVM.

Step 4. Use expression (8) to calculate the polymer stress.

Step 5. Substitute the polymer stress into the momentum equation (10) to ensure the coupled calculation of the physical quantities.

Numerical Simulation and Analysis

The 4 : 1 planar contraction flow is a benchmark test example and has been widely studied [9-11]. In the 4 : 1 planar contraction flow, the fluid flows from the wider channel into the narrower channel, with a simple shear flow far from the contraction region, a pure elongational flow along the central axis, a complex strong shear flow near the wall, and a mixture of shear and elongational flows near the reentrant corners. In fact, contraction flows widely exist in the processing of polymer materials, such as polymer extrusion and injection molding. Therefore, this example not only verifies the correctness of the proposed algorithm and model but also provides a basis for the processing of polymer materials.
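Before detailing the contraction-flow setup, the second-order-accuracy claim for the θ = 0.5 (Crank-Nicolson) choice can be checked on a scalar model problem; the sketch below is a generic Python demonstration, not the paper's FEM/FVM code.

import numpy as np

def theta_step(u, dt, theta, lam=-1.0):
    """One step of the θ-scheme for du/dt = lam*u:
    (u_new - u)/dt = lam*(theta*u_new + (1-theta)*u)."""
    return u * (1.0 + (1.0 - theta) * lam * dt) / (1.0 - theta * lam * dt)

def solve(theta, n_steps, T=1.0):
    dt, u = T / n_steps, 1.0
    for _ in range(n_steps):
        u = theta_step(u, dt, theta)
    return u

exact = np.exp(-1.0)
for theta in (1.0, 0.5):                 # backward Euler vs Crank-Nicolson
    e1 = abs(solve(theta, 100) - exact)
    e2 = abs(solve(theta, 200) - exact)
    order = np.log2(e1 / e2)
    print(f"theta={theta}: observed order ≈ {order:.2f}")
# prints ≈ 1.00 for theta=1.0 and ≈ 2.00 for theta=0.5

Halving the step size reduces the error fourfold only for θ = 0.5, matching the truncation-error statement for scheme (16).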
The sketches of the lower half of the 4 : 1 planar contraction geometry and the computational mesh are shown in Figure 1. The lengths of the upstream and downstream channels are both 16L, where L denotes the height of the downstream channel. Structured triangular and rectangular meshes are used in the FEM and FVM, respectively. Note that the mesh is refined near the reentrant corner. The initial and boundary conditions are as follows.

We choose the polymer High-Density Polyethylene (HDPE) Sclair 2714, made by Nova Chemicals Inc., as the fluid. The material parameters of HDPE, obtained from the materials database of the Moldflow software, are shown in Tables 2 and 3. A series of meshes is used for the FEM/FVM method to ensure spatial convergence based on the salient-corner vortex cell size. Mesh characteristics, detailing the numbers of elements (FEM/FVM) or volumes (SLFV) [22], the smallest mesh spacing employed, and the salient-corner vortex cell size, are provided in Table 4. Moreover, the quantitative information regarding mesh convergence of the salient-corner vortex cell size is compared with the results in the literature [22] for Re = 0, covering the range 1 ≤ We ≤ 10. The information in Table 4 demonstrates that convergence with mesh refinement has been achieved for the range of parameters considered. Figure 3 shows the numerical results for the stress along the axis of symmetry with different values of the number of arms q and the relaxation time ratio r. The stress values increase as q increases, and they change markedly near the reentrant corner. However, the stress hardly changes once q increases beyond a certain level. In addition, the stress values for different q tend to coincide once the flow is fully developed. For different values of r, the stress shows trends similar to those for different q, except that it decreases as r increases.

Numerical Solutions of Temperature. Figure 4 shows the distribution of the temperature near the reentrant corner at different Peclet numbers when the Weissenberg number is fixed at 1.0. It is observed from Figure 4 that the temperature is lower near the wall, and the low-temperature region becomes smaller and smaller as the Pe number increases. This is because the effect of heat convection gradually increases with the Pe number, which changes the ratio of heat transported by convection to that removed by dissipation.

The temperature along y = −1 with different slip parameters is shown in Figure 5. The temperature values are slightly higher in the pure entropy-elasticity case (splitting parameter equal to 1) than in the pure energy-elasticity case (splitting parameter equal to 0); that is, the temperature of the entropy elasticity is slightly higher than that of the energy elasticity. This is consistent with the result in the literature [20]. The most intuitive way to describe the backbone orientation of branched polymer molecules on the molecular scale is to use the detailed information in the second-order orientation tensor S. The ellipse method is adopted to obtain the backbone orientation state in two-dimensional cases. In the ellipse method, the eigenvalues and eigenvectors of the second-order matrix corresponding to S are first computed; the eigenvectors and eigenvalues then represent the directions and lengths of the axes of the orientation ellipse, respectively.
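The ellipse construction just described amounts to an eigen-decomposition of the 2 × 2 orientation tensor; a minimal Python sketch follows, with an assumed illustrative value of S.

import numpy as np

# Assumed 2x2 second-order orientation tensor (symmetric, trace ≈ 1)
S = np.array([[0.70, 0.15],
              [0.15, 0.30]])

# Eigenvalues give the ellipse axis lengths, eigenvectors the axis directions
eigvals, eigvecs = np.linalg.eigh(S)          # eigenvalues in ascending order
major_len, minor_len = eigvals[1], eigvals[0]
major_dir = eigvecs[:, 1]
angle = np.degrees(np.arctan2(major_dir[1], major_dir[0]))

print(f"major axis length {major_len:.3f}, minor {minor_len:.3f}")
print(f"orientation angle of the major axis: {angle:.1f} degrees")

A major axis much longer than the minor axis indicates strong alignment along the major-axis direction, which is how the near-wall and centerline orientation states in the next paragraph are read off the ellipse plots.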
The backbone orientation in the 4 : 1 planar contraction flow is shown in Figure 6. As can be seen in Figure 6, the backbones are oriented along the direction of fluid flow in most areas; the backbones are in a spin-oriented state near the wall of the downstream channel where the shear is stronger. This is because the abrupt contraction of the flow area and the rapid increase of the fluid velocity cause the backbones to spin quickly near the reentrant corner. Near the wall and away from the reentrant corner region, the shear stress is largest, and the backbone first orients along the wall and then spins with the flow. Along the symmetry axis, the backbone near the reentrant corner is first stretched and exhibits a uniaxial tension state as the velocity gradient increases; the other backbones are then oriented in turn along the horizontal axis. The backbone stretch is closely related to the size of the corner vortex, the strain thickening of the fluid [17], and the shear rate distribution; this fact helps in understanding the phenomena of polymer wall slip and extrusion instability. Figure 8 shows the numerical results for the backbone stretch Λ along the axis of symmetry with different values of the number of arms q and the relaxation time ratio r for We = 10. The value of Λ increases as q increases, with obvious changes near the reentrant corner. However, the value of Λ hardly changes once q increases beyond a certain level. In addition, the values of Λ for different q tend to coincide once the flow is fully developed. For different values of r, the backbone stretch shows trends similar to those for different q, except that it decreases as r increases.

Conclusions

In this paper, the DXPP constitutive model, which describes backbone orientation and stretch, is used to study the thermorheological behaviors of a branched polymer melt. The hybrid FEM/FVM method is used to solve the nonisothermal weakly-compressible viscoelastic flow model coupled with the DXPP model. The distributions of viscoelastic stress, temperature, and backbone orientation and stretch are given. The effect of Pom-Pom molecular parameters and a slip parameter on thermorheological behaviors is studied. All numerical results prove that the models and numerical methods mentioned above are valid.

For the 4 : 1 planar contraction flow, the backbone orientation is along the flow direction in most of the contraction area and spins in the strong-shear region near the downstream wall. The stress increases as q increases and decreases as r increases. The backbone stretch increases with the We number and q, and it decreases as r increases. The trends of the stress and the backbone stretch for the different parameter values coincide once the polymer melt flow is fully developed. In addition, the temperature along the center line is slightly higher in the entropy-elastic case than in the energy-elastic case. The macroscopic thermorheological behavior in the flow field is a true reflection of the microscopic topological structure of the polymer melt. The characterization of the microscopic information helps in further studying flow-induced residual stresses and other complicated behaviors in polymer melt processing and provides a theoretical basis for improving polymer product performance.
4.3. Numerical Solutions of Backbone Orientation. The Pom-Pom molecular model describes the relaxation of branched macromolecules separately on two different time scales and introduces a backbone stretch parameter to describe the tensile behavior. Using the DXPP model allows one to investigate the complex rheological behavior on the molecular scale.

Figure 3: The influence of different parameters on the stress: (a) the number of arms q and (b) the relaxation time ratio r. Figure 4: The temperature distribution near the reentrant corner at different Peclet numbers.

The coefficients a_P, a_E, a_W, a_N, and a_S can be expressed as combinations of the convection and diffusion terms. The form of A(|P_Δ|) differs under different discretization schemes for the convection term. In order to handle the convection-dominated problem caused by the high specific heat capacity and the high Weissenberg number, A(|P_Δ|) is set equal to 1 for the upwind scheme in this paper, and all the coefficients are formulated accordingly.

Table 3: Property parameters of HDPE.

The numerical results for the normal stress, the first normal stress difference N₁, and the second normal stress difference N₂ near the reentrant corner are illustrated in Figures 2(a), 2(b), and 2(c), respectively. It is seen that the stress contours near the reentrant corner are smooth and N₂ is not zero. This proves that the given FEM/FVM method is feasible.
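The coefficient relations sketched above follow standard finite-volume practice; the minimal 1D Python sketch below assembles upwind convection-diffusion coefficients with A(|P_Δ|) = 1, in Patankar-style notation (all grid and flux values are assumed for illustration).

import numpy as np

def upwind_coeffs(F_e, F_w, D_e, D_w):
    """1D finite-volume coefficients for convection-diffusion with the
    upwind scheme, where A(|P|) = 1:
    a_E = D_e*A(|P_e|) + max(-F_e, 0), a_W = D_w*A(|P_w|) + max(F_w, 0)."""
    a_E = D_e + max(-F_e, 0.0)
    a_W = D_w + max(F_w, 0.0)
    a_P = a_E + a_W + (F_e - F_w)   # continuity: neighbors plus net outflow
    return a_P, a_E, a_W

# assumed cell-face convective flux F and diffusive conductance D
a_P, a_E, a_W = upwind_coeffs(F_e=2.0, F_w=2.0, D_e=0.5, D_w=0.5)
print(a_P, a_E, a_W)   # -> 3.0, 0.5, 2.5

With a strong positive flow, the upstream coefficient a_W dominates, which is what keeps the scheme stable in the convection-dominated regime described above.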
Nonlinear Stochastic Multiobjective Optimization Problem in Multivariate Stratified Sampling Design

Decision-making in survey sampling planning is a tricky situation; it sometimes involves multiple objectives, with various decision variables emanating from heterogeneous and homogeneous populations. Dealing with the entire population under study and its uncertain nature becomes a challenging issue for researchers and policymakers. Hence, an appropriate sampling design and optimization methodology are imperative. The study presents a useful theoretical discussion of stochastic multiobjective multivariate stratified sampling (MSS) models, and the concepts are illustrated with numerical examples. It has also been found that the linearization of the sampling variance in survey sampling does not help determine the optimal sampling allocation with minimum variability. Optimal allocation problems under the weighted goal programming, stochastic goal programming, and Chebyshev goal programming methods are also discussed with numerical examples. Finally, the study discusses the linear approximation of the MSS problem with examples. The study is a conceptual and theoretical framework for MSS under a stochastic environment. The numerical data are simulated using the stratifyR package.

Introduction

The classical method of optimization based on differential calculus is too restrictive and challenging in terms of applicability to many statistical areas in real-life situations. The lack of numerical algorithms suitable for solving optimization problems poses some severe limitations in this regard and has hence led to the use of some inefficient statistical procedures in choosing the objective functions and constraints. For decades, a better technique for optimization with broader applicability in statistics, implementable with increasing computing power, has been sought. Mathematical programming is one such evolving method with potential applications in statistical methodologies. Several optimization techniques have various applications in statistical problems such as designing a specific experiment, extensive surveys for data collection, characterizing observed data using a model, drawing inferences about a population based on sample data, testing of hypotheses, and estimation in the decision-making process [1]. In all these applications, one has to optimize (minimize or maximize) an objective function subject to a set of constraints, such as cost or other input parameters. The sampling problem concerns deriving statistical information on the characteristics of several populations. In a sampling survey, the objectives are to minimize the sampling variance and the cost; these depend on the sample size, the sampling scheme, the size of the sampling unit, and the scope of the study. Alternatively, a different formulation may be to minimize survey inaccuracy, given that the survey cost is within the budgetary limits. Thus, the research aims at finding a solution to this challenging problem of optimal sample size or sampling scheme that could help in estimating the desired characteristics of a population with prescribed properties. The objective of this research is to successfully formulate the problems of sample surveys as mathematical programming problems and to develop an efficient algorithm or technique to solve them.
The objective is to identify existing and future work on allocation problems in survey sampling, to investigate and suggest solutions to them, and also to study the problems in an uncertain, i.e., stochastic, environment. The uncertainty that exists in real life has motivated us to work on this aspect. The problems become more complicated when some or all of the parameters involved are uncertain; they may be either stochastic or fuzzy. The objective is to develop an efficient algorithm to solve such types of sample surveys. The formulated problems may be single-objective or multiobjective. For solving the multiobjective optimization problems, we need to develop efficient algorithms for the formulated problems. Goal programming, fuzzy goal programming, and other new modified or extended versions of these techniques will be used to solve the multiobjective optimization problems. This study comprises the modeling and optimization of different sampling designs that help in providing an efficient allocation of samples, simultaneously achieving the highest accuracy and minimizing the sampling variances. The study provides useful insight into decision-making for implementing strategies in different socio-economic sectors of the country based on sampling results. Our contribution is to propose new models and techniques for the sampling scheme to determine the optimal allocation of samples, based on which policymakers can suggest what kinds of additional efforts can simultaneously be taken in the planning of socio-economic sectors. The problems related to the case studies are usually complicated, but they become even more complicated when some or all of the input parameters involved are uncertain. The study provides mathematical optimization problems in survey sampling, which is a powerful tool for making the best policy on national planning and industries. Therefore, the study is an integration of sample surveys, operational research, and computational modeling. Optimal sampling techniques can play an essential role in annual budgeting and in income and expenditure forecasting for the preparation of five-year plans in national planning and budgeting. They can also be used in the scheduling of major projects of national interest and in the estimation of a country's population, agricultural yields, employment, gross national product (GNP), and gross domestic product (GDP), amongst others. Optimal sampling techniques provide the best (optimal) solutions to the problem under study. There is a need for statistical information in modern society now more than at any time before, in particular when data are to be collected periodically to satisfy the information need on a specified set of elements, known as finite populations. Surveys play a significant role in issues relating to real life when we want to get a sense of a massive population. Sampling is the best tool that gives us a fresh idea about the whole population. A sample survey is one of the most critical data collection modes for meeting this need. Over time, an extensive literature on survey sampling has developed into a vast array of theories, processes, and operations that are used every day throughout the globe. It is appropriate to speak of a worldwide survey industry with different sectors, namely, a government sector, an academic sector, a private and mass media sector, and a residual sector consisting of ad-hoc and in-house surveys. Optimization is the science of selecting the best among many possible decision alternatives in a complicated real-life situation.
The ultimate target of any decision is either to maximize the desired benefit or to minimize the effort (cost or time) required or incurred in a particular course of action. In recent times, several authors have formulated different types of sampling problems as nonlinear mathematical programming problems or integer programming problems and tried to find the best solution [2]. An integer compromise allocation in MSS has been determined using the goal programming approach [3]. A multiobjective all-integer nonlinear model for MSS design considering some travelling costs has been developed, and a compromise solution was obtained using the value function approach, the ε-constraint method, and a distance-based method [4]. Also, uncertainties in the MSS problem have been investigated where some cost parameters were considered fuzzy parabolic numbers [5]. The authors formulated a fuzzy multiobjective nonlinear programming problem with a quadratic cost function and solved it using fuzzy programming. A case of nonresponse in the MSS problem has been studied and modeled as an all-integer multiple objective problem [6]. The solution was sought using four different procedures. Several authors have worked on optimum allocation problems in sampling and parameter estimation; for instance, a multiobjective integer nonlinear programming problem has been formulated and converted to a single-objective problem using the value function technique [7]. The authors also used the Lagrange multipliers technique to obtain a continuous sample-size formula, which approximates the optimal solutions. Similarly, traveling costs within strata have been considered, and a multiple objective nonlinear stochastic programming problem was formulated for finding a compromise allocation in the sample survey [8]. The problem was solved using the D₁-distance, goal programming, and the Chebyshev approximation technique. A problem of estimating p population means considering nonresponse and nonlinear cost functions has been investigated, and solution procedures were suggested using lexicographic goal programming [9]. The dynamic programming technique has been employed in proposing an efficient methodology for optimum stratum boundaries and determining optimum sample sizes in survey variables under the Neyman allocation [10]. A multiple pooling of the standard deviations of the estimates in an MSS with more than three strata has been studied and formulated as a multiobjective (MO) problem, which was solved using fuzzy programming [11]. Others considered compromise allocation problems under stratified samples with two-stage randomized and multiresponse models [12,13]. An optimum allocation problem in MSS has been treated as an integer nonlinear stochastic programming problem and solved with five different techniques [14]. The authors suggested the use of coefficients of variation instead of variances. Also, the MSS problem has been studied with stochastic optimal design [15,16], with flexible goals [17], and with integer solutions [18]. Several mathematical models have been designed based on multiobjective optimization for solving different aspects of human endeavors. For instance, a mixed-integer linear programming (MILP) model has been developed for addressing a closed-loop supply chain network problem during the coronavirus pandemic. The study considered different items such as recycling, reusing, quarantine, collection, distribution, production, supply, and location within a multiperiod, multiechelon, and multiproduct supply chain [19].
A multiple criteria decision-making tool has been used in determining the supply chain performance in a petrochemical industry incorporating sustainable strategies [20]. An optimization method has been designed to optimize the distribution and allocation of scarce resources amongst individuals during a crisis, based on credibility theory and a harmony search algorithm considering random simulation [21]. A scheduling problem has been studied, and a mathematical model developed with a view to obtaining near-optimal solutions using meta-heuristic algorithms (MHA) [22]. Multiobjective optimization has been widely used in different sectors considering diverse applications and scenarios. For instance, robust optimization with artificial intelligence (AI) has been hybridized as multiobjective optimization applied to the product portfolio problem [23]. Location, allocation, and routing problems have been studied with the help of an improved harmony search algorithm [24]. Another important application area is that of dairy product demand prediction, where an integrated approach based on AI and novel MHA has been used in achieving the desired future demands [25]. The MOOP has been used to formulate socio-economic and environmental issues related to sustainable development goals in several countries, such as India [26], Nigeria [27], and Saudi Arabia [28], and in other areas, such as municipal waste management systems [29].

Organization of the Paper. The introduction of the subject matter, the background of the study, the literature review, and the paper organization are presented in Section 1. In Section 2, the multiobjective MSS techniques are presented. Section 3 provides single-objective stochastic MSS models. Section 4 discusses the MO stochastic MSS models. The linear approximation of MSS is discussed in Section 5. Section 7 concludes the article.

Multiobjective Multivariate Stratified Sampling

Let N be the size of the population, partitioned into L strata of sizes N_h, h = 1, 2, ..., L. Suppose p characteristics (p ≥ 2) are measured on each unit of the population, and the interest is in estimating the p population characteristics. Let n_h, h = 1, 2, ..., L, be the units taken randomly from stratum h without replacement.

Sampling Variance Function. The population mean Ȳ_j for the j-th character is Ȳ_j = Σ_{h=1}^{L} W_h Ȳ_jh, where W_h = N_h/N and Ȳ_jh = (1/N_h) Σ_{i=1}^{N_h} y_jhi is the stratum mean. The sample mean of the j-th character is given as ȳ_j = Σ_{h=1}^{L} W_h ȳ_jh. The sampling variance of the estimator of the mean for the j-th characteristic is given as

V(ȳ_j) = Σ_{h=1}^{L} W_h² S_jh²/n_h − Σ_{h=1}^{L} W_h² S_jh²/N_h,   (3)

where S_jh² are the stratum variances and y_jhi is the value of the i-th unit in the h-th stratum for the j-th characteristic (j = 1, 2, ..., p and h = 1, 2, ..., L).

Sampling Cost Function. In survey sampling, when the enumeration cost, traveling cost, and labor cost are all high [30,31], the total cost function is defined as in (4), where c_h is the per-unit cost of measurement in the h-th stratum, t_h is the travel cost for enumerating a unit of the j-th character in the h-th stratum, and ω is the cost of labor per unit time. The labor time is accounted for through the time for a sampling unit within a stratum and follows an exponential distribution with rate λ. If in (4) the labor expenses are not significant, then we have the quadratic cost function (5), C = C_0 + Σ_{h=1}^{L} c_h n_h + Σ_{h=1}^{L} t_h √n_h. If in (5) the traveling cost is not significant, then we have a linear cost function with a fixed overhead cost of sampling C_0, given in (6) as C = C_0 + Σ_{h=1}^{L} c_h n_h. A numerical sketch of the variance function (3) and an allocation minimizing it is given below.
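As a numerical illustration of the variance function (3) and the linear cost structure (6), the following Python sketch computes the classical Neyman allocation n_h ∝ W_h S_h (the fixed-sample-size special case discussed next) and compares it with proportional allocation; all data values are invented for illustration.

import numpy as np

# assumed strata: sizes N_h and stratum standard deviations S_h for one character
N_h = np.array([400, 300, 300])
S_h = np.array([4.0, 10.0, 6.0])
N = N_h.sum()
W_h = N_h / N                      # stratum weights
n_total = 100                      # fixed total sample size

# Neyman allocation: n_h = n * W_h S_h / sum(W_h S_h)
n_h = n_total * (W_h * S_h) / np.sum(W_h * S_h)

def strat_var(n_h):
    """V(ȳ) = Σ W_h² S_h²/n_h − Σ W_h² S_h²/N_h, as in (3)."""
    return np.sum(W_h**2 * S_h**2 / n_h) - np.sum(W_h**2 * S_h**2 / N_h)

prop = n_total * W_h               # proportional allocation, for comparison
print("Neyman n_h:", np.round(n_h, 1), " V =", round(strat_var(n_h), 4))
print("Proportional n_h:", prop, " V =", round(strat_var(prop), 4))

The Neyman allocation oversamples the high-variance stratum and achieves a strictly smaller value of (3) than proportional allocation, which is the sense in which it is optimal for a fixed sample size.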
In a particularly important case of (6), if c_h = c, that is, if the per-unit cost in all strata is assumed to be the same, then the enumeration cost term becomes constant, and the fixed-cost optimum allocation reduces to the fixed-sample-size optimum allocation, known as the Neyman allocation [32].

Multiobjective Optimization Problem. Using the above definitions, the multiobjective optimization problem (MOOP) can be stated as the problem (7) of minimizing the p sampling variances subject to the cost constraint, where X = X_1, X_2, or X_3 is the feasible space of the problem.

Weighted Goal Programming for Optimum Allocation Problem. In the goal programming approach, the p objective function goals are identified by solving the problem for each individual j-th objective function, ignoring the other (p − 1) objective functions, over the feasible set defined by the constraints. The general form of goal programming minimizes the deviation, in some selected norm, between the objective vector and the targeted goal vector f = (f_1, f_2, ..., f_p) obtained from the individual solutions. In the l_1 norm, the function (7) becomes the weighted sum of absolute deviations from the goals, Σ_{j=1}^{p} w_j |f_j(n_h) − f_j|, where w_j ≥ 0 is the weight assigned to the j-th objective function. The goal program can be converted into a single-objective optimization problem by introducing auxiliary deviation variables. Finally, the weighted goal programming model is (13). In (13), d_j^+ and d_j^- are the overachievement and underachievement variables, respectively, for the j-th goal value. It is further noted from (13)(iv) that d_j^+ and d_j^- can never be positive simultaneously: when the overachievement is greater than zero, the underachievement is zero, and vice versa. If the objective is a maximization-type function, the underachievement is undesirable; in this situation w_j^+ = 0 and w_j^- = 1 in (13), where w_j^+ and w_j^- are the weights assigned to the overachievement and underachievement variables, respectively. Conversely, for minimization-type objective functions, the overachievement is undesirable, that is, w_j^+ = 1 and w_j^- = 0, and the objective function in (13)(i) reduces to min Σ_{j=1}^{p} w_j^+ d_j^+.

Single-Objective Stochastic Multivariate Stratified Sampling Models

Deciding under uncertainty is challenging and unavoidable in most real-life problematic situations. The problems mainly aim to optimize a set of function(s) under uncertain conditions faced by the decision-maker(s). If some or all of the constraint parameters are unknown and are considered random, then such an optimization problem becomes a stochastic programming problem. Any modeling framework that optimizes a problem under uncertainty can be viewed as a stochastic programming problem. The ultimate goal of these modeling types is to obtain a set of solution(s) that is feasible and optimal for some set of data. Most of the models in this category involve parameters that follow probability distributions, which can be known in advance or estimated using established procedures. In general terms, stochastic programming can also be called probabilistic programming if some or all data of the optimization function follow probability distributions. In other words, variables that behave randomly in optimization problems can be regarded as stochastic or probabilistic, as the case may be.
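To make the chance-constrained idea concrete before turning to the literature, the following Python sketch converts a single normal chance constraint P(Σ c_h n_h ≤ C) ≥ β into its deterministic equivalent via the standard normal quantile Φ⁻¹(β); all means, variances, and the budget are assumed values.

import numpy as np
from scipy.stats import norm

# assumed per-stratum costs: c_h ~ N(mu_c, sd_c^2), independent; fixed budget C
mu_c = np.array([3.0, 4.0, 5.0])
sd_c = np.array([0.3, 0.5, 0.4])
n_h = np.array([25, 47, 28])       # a candidate allocation
C, beta = 450.0, 0.95

# P(sum c_h n_h <= C) >= beta  <=>  mu + z_beta * sigma <= C,
# where z_beta = Phi^{-1}(beta) and sigma^2 = sum n_h^2 sd_c^2
mean_cost = float(mu_c @ n_h)
sigma_cost = float(np.sqrt((n_h**2 * sd_c**2).sum()))
z_beta = norm.ppf(beta)
lhs = mean_cost + z_beta * sigma_cost
print(f"deterministic equivalent: {lhs:.1f} <= {C} ?", lhs <= C)

The random budget constraint thus becomes an ordinary nonlinear inequality in n_h, which is exactly the kind of deterministic equivalent pursued in the conversions below.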
Charnes and Cooper [33] developed the chance-constrained programming technique and converted it into equivalent deterministic nonlinear constraints. Many authors have discussed the stochastic optimization problem, among them Prékopa [34] and Charnes and Cooper [35]. In the context of response surface methodology, Díaz-García et al. [36] studied the problem under several stochastic optimization techniques. Díaz-García and Ramos-Quiroga [15,16] formulated the stratified sampling problem treating the sampling variances as random variables: the sample variances s_h^2 have an asymptotically normal distribution, and the problem is converted into an equivalent deterministic problem by using a modified E-model. Similarly, Díaz-García and Garay-Tapia [14] formulated a stratified sampling problem using stochastic programming to minimize the cost function subject to a known bound on the estimated variance of the mean. The problem is converted into an equivalent deterministic problem by using chance constraints: the constraint P( Σ_{h=1}^{L} (1/n_h − 1/N_h) W_h^2 s_h^2 <= V_0 ) >= α is replaced by

Σ_{h=1}^{L} (1/n_h − 1/N_h) W_h^2 E(s_h^2) + K_α √( Σ_{h=1}^{L} (1/n_h − 1/N_h)^2 W_h^4 Var(s_h^2) ) <= V_0,

where V_0 is a known non-negative constant and K_α is the corresponding value of the standard normal variable.

Multiobjective Stochastic Multivariate Stratified Sampling Models

In this section, we discuss various nonlinear optimization sampling models under stochastic approaches. For instance, a problem of attaining several goal targets under probabilistic intervals was formulated as a linear stochastic model [37]. Problems involving stochastic MO have been analyzed considering different efficiency concepts and establishing the relationships between the identified concepts [38]. Multivariate stratified random sampling has been investigated where the asymptotic normality of the optimal solution was established, as well as the perturbation effect of the stratum variance on the optimal solution [39]. Similarly, a problem of estimating several population means in an MSS design has been investigated [40]; the authors formulated an all-integer nonlinear model and proposed a solution using dynamic programming concepts with numerical illustrations. A multiobjective goal optimization in stratified sampling design was conducted by trading off between the sampling cost and its variance [41]. Estimation of more than a single parameter in a stratified sampling problem has been studied with a fixed budget and nonlinear cost [42]. Beale described convex function minimization as a linear programming problem with random coefficients [43].

Consider a multiobjective nonlinear programming problem (MNLPP)

min { f_1(n_h), f_2(n_h), ..., f_p(n_h) } subject to: n_h ∈ X.  (18)

Then (18) is defined under the stochastic assumption as the following stochastic nonlinear programming problem (SNLPP) for the p characteristics:

min { f_1(n_h, ξ), ..., f_p(n_h, ξ) }  (19)
subject to: P( g_j(n_h, ξ) <= 0 ) >= β, j = 1, ..., p, n_h ∈ X.  (20)

Definition 1. A point is called feasible if and only if the probability measure of the event g_j(n_h, ξ) <= 0, j = 1, ..., p, is at least β; equivalently, the constraints may be violated at most a fraction (1 − β) of the time. When the joint chance constraint is instead imposed separately for each j, the constraints are referred to as separate chance constraints.

Now, by applying minimax chance-constrained programming, (18) becomes

max β subject to: P( f_j(n_h, ξ) <= f̄_j ) >= β, j = 1, ..., p, n_h ∈ X,  (21)

where β is the predetermined confidence level and f̄_j is the target value of the j-th variance term. We can also formulate a stochastic goal programming version of the problem defined in (21).
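A short sketch of the chance-constraint conversion just described: under independent, normally distributed s_h^2, the constraint P( Σ_h a_h s_h^2 <= V_0 ) >= α becomes Σ_h a_h E[s_h^2] + K_α √( Σ_h a_h^2 Var[s_h^2] ) <= V_0 with K_α the standard normal quantile. All numerical inputs below are assumed values.

import numpy as np
from scipy.stats import norm

def deterministic_lhs(a, mean_s2, var_s2, alpha):
    # left-hand side of the deterministic equivalent of the chance constraint
    a = np.asarray(a)
    K_alpha = norm.ppf(alpha)                 # standard normal quantile
    return a @ mean_s2 + K_alpha * np.sqrt((a**2) @ var_s2)

a       = np.array([0.02, 0.015, 0.01, 0.005])  # (1/n_h - 1/N_h) * W_h^2 (assumed)
mean_s2 = np.array([25.0, 16.0, 9.0, 4.0])      # E[s_h^2] (hypothetical)
var_s2  = np.array([3.0, 2.0, 1.0, 0.5])        # Var[s_h^2] (hypothetical)
V0 = 1.2
print(deterministic_lhs(a, mean_s2, var_s2, alpha=0.95) <= V0)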
Remark 1. (i) The stochastic objective constraints coincide with the form in (20) by defining g_j(n_h, ξ) = f_j(n_h, ξ) − f̄_j. (ii) The stochastic goal constraints coincide with the form in (20) by defining g_j(n_h, ξ) = f_j(n_h, ξ) − f̄_j − δ^+, where δ^+ is an overachievement goal. (iii) The stochastic problem (cost) constraints coincide with the form in (20) by defining g(n_h, ξ) = C(n_h, ξ) − C*. (iv) The stochastic problem constraints with an overachievement goal coincide with the form in (20) by defining g(n_h, ξ) = C(n_h, ξ) − C* − δ^+, where δ^+ is the overachievement goal. (v) For a continuous random variable ξ, the relation P(k_β <= ξ) = 1 − Φ(k_β) always holds.

Conversion of Stochastic Inequalities to Equivalent Deterministic Form. In (18)(i), the term s_{jh}^2 is assumed to be a random variable. In practice, approximations of these parameters, known from some preliminary or recent survey, may be used. The concept of the limiting distribution of the sample variances in a sampling problem is used in [39], considering the random variable ξ_h defined as

ξ_h = (1/n_h) Σ_{i=1}^{n_h} ( y_{jhi} − Y̅_{jh} )^2,

where Y̅_{jh} = (1/N_h) Σ_{i=1}^{N_h} y_{jhi}. Note that ξ_h has an asymptotically normal distribution with mean and variance

E(ξ_h) ≈ S_{jh}^2 and Var(ξ_h) ≈ ( μ_{4jh} − S_{jh}^4 ) / n_h,

respectively, where μ_{4jh} is the fourth central moment of the j-th character in the h-th stratum. The sequence of sample variances S_{jh}^2 differs from ξ_h by terms in which n_h/(N_h − 1) → 1 and (y̅_{jh} − Y̅_{jh})^2 → 0 in probability; hence, by the asymptotic normality property, S_{jh}^2 →_a N(E(ξ_h), Var(ξ_h)), h = 1, 2, ..., L, independently. Based on the above discussion, the multivariate stratified sampling variance function has expected value

E[ f_j(n_h) ] = Σ_{h=1}^{L} (1/n_h − 1/N_h) W_h^2 E(s_{jh}^2)

as n_h becomes sufficiently large, and variance

Var[ f_j(n_h) ] = Σ_{h=1}^{L} (1/n_h − 1/N_h)^2 W_h^4 ( μ_{4jh} − S_{jh}^4 ) / n_h

as n_h becomes sufficiently large.

Theorem 1. Assume that the stochastic vector ζ degenerates to a random variable ξ with probability distribution Φ, and that the function g_j(n_h, ζ) has the form g_j(n_h, ζ) = g_j(n_h) − ξ; then the chance constraint P( g_j(n_h, ζ) <= 0 ) >= β holds if and only if g_j(n_h) <= k_β, where k_β satisfies P(k_β <= ξ) = β. Note that the probability P(k_β <= ξ) increases if k_β is replaced with a smaller number.

Theorem 2. Assume that the stochastic function g(n_h, ξ) has the form g(n_h, ξ) = Σ_{h=1}^{L} a_h(n_h) s_{jh}^2 − f̄_j. If the s_{jh}^2 are assumed to be independent normally distributed random variables, then P( g(n_h, ξ) <= 0 ) >= β if and only if

Σ_{h=1}^{L} a_h(n_h) E(s_{jh}^2) + Φ^{−1}(β) √( Σ_{h=1}^{L} a_h(n_h)^2 Var(s_{jh}^2) ) <= f̄_j,

where Φ is the standardized normal distribution function.

Proof. In the probability model of survey sampling, the probability that the sampling variance is smaller than or equal to an absolute goal value is maximized; that is,

max β subject to: P( f_j(n_h, ξ) <= f*_j ) >= β,  (44)

where f*_j is the minimum target goal value for the j-th objective function. Recall from (18)(i) that the s_{jh}^2 are independently normally distributed random variables; hence the covariance terms vanish and only the variance terms remain. Standardizing,

P( η <= ( f*_j − E[f_j(n_h)] ) / √( Var[f_j(n_h)] ) ) >= β,

where η is the standardized normally distributed random variable. This is equivalent to

( f*_j − E[f_j(n_h)] ) / √( Var[f_j(n_h)] ) >= Φ^{−1}(β),

and the maximum of β is then searched for in the interval (0, 1). For convenience, the following function is defined:

h(n_h, β) = E[f_j(n_h)] + Φ^{−1}(β) √( Var[f_j(n_h)] ) − f*_j.

Here, we assume that for an optimal solution (n*_h, β*) to the problem (44), β* > 0.5 holds. Under this assumption Φ^{−1}(β*) > 0, and for the fixed value β* the stated inequality follows. □

Proposition. Let (n*_h, β*) be an optimal solution to the problem (21) with a target value f̄_j larger than f_{oj}, i.e., f̄_j > f_{oj}. Then β* > 0.5 holds.

Proof. From the condition f̄_j > f_{oj} > 0 of the proposition, f̄_j − f_{oj} > 0, and therefore (n_h, β) with β = 0.5 is a feasible solution to the problem (44) with the target value f̄_j. Since (n*_h, β*) is the optimal solution of (44), β* >= 0.5 holds, and the strict inequality follows. □

An analogous result holds for the cost constraint: the chance constraint on the cost is satisfied with probability at least β if and only if

E(C) + Φ^{−1}(β) √( Var(C) ) <= C*,

where Φ is the standardized normal distribution function.

Proof. Consider the cost chance constraint. It is assumed that C, c_h, and t_h are normally distributed random variables.
Moreover, assume that they are all independent of each other. We note that the standardized quantity ( C − E(C) ) / √( Var(C) ) is a standard normal random variable N(0, 1), and it follows that

P( C <= C* ) = P( η <= ( C* − E(C) ) / √( Var(C) ) ),

where η is the standardized, normally distributed random variable. The above constraint holds if and only if

( C* − E(C) ) / √( Var(C) ) >= Φ^{−1}(β).

Hence, the chance constraint (12)(ii) can be transformed into

E(C) + Φ^{−1}(β) √( Var(C) ) <= C*.  (56)

It can further be assumed that only c_h and t_h are normally distributed random variables, independent of each other, while the total budget for the survey is fixed. Following the same procedure as discussed above, the cost constraint defined in (56) becomes

Σ_{h=1}^{L} E(c_h) n_h + Σ_{h=1}^{L} E(t_h) √(n_h) + Φ^{−1}(β) √( Σ_{h=1}^{L} Var(c_h) n_h^2 + Σ_{h=1}^{L} Var(t_h) n_h ) <= C_0.  (57) □

Stochastic Goal Programming Sampling Variance Model. In light of the above discussion, the problem formulated in (57) is transformed equivalently as follows:

min Σ_{j=1}^{p} w_j^+ d_j^+
subject to: E[f_j(n_h)] + Φ^{−1}(β) √( Var[f_j(n_h)] ) + d_j^− − d_j^+ = f*_j, the cost constraint (57), and d_j^+, d_j^− >= 0.  (58)

The individual sampling variance goal value f*_j can be obtained by minimizing each E[f_j(n_h)] + Φ^{−1}(β) √( Var[f_j(n_h)] ) separately subject to the cost constraint.  (59)

Chebychev Goal Programming Sampling Model. In this method, we first set goals for each objective that we want to attain. Let goals g = (g_1, g_2, ..., g_k)' be identified for the objective vector f = (f_1(n_h), ..., f_k(n_h))', which is to be kept as close as possible to g. The difference between f and g is measured by a deviation function D(f(n_h), g). In a sampling optimization problem, the aim is to find an n*_h ∈ X which minimizes D(f(n_h), g), that is,

min_{n_h ∈ X} D( f(n_h), g ), where D( f(n_h), g ) = max{ D(f_1(n_h), g_1), ..., D(f_k(n_h), g_k) }  (61)

is the maximum deviation over the individual goals. A preferred solution is then defined as one that minimizes the maximum deviation from the goals. In light of the above discussion, the problem formulated in (55) is transformed equivalently, with an auxiliary variable δ, as follows:

min δ
subject to: E[f_j(n_h)] + Φ^{−1}(β) √( Var[f_j(n_h)] ) − g_j <= δ, j = 1, ..., k, the cost constraint with target goal value C*, and n_h ∈ X,  (62)

where C* is the target goal value.

Linear Approximation of the Multivariate Stratified Sampling Problem

The objective function f_j in (7) is linearized at the individual optimum points [44]. Thus, for j = q, at the point n*_q = (n*_{q1}, n*_{q2}, ..., n*_{qL}), f_q may be approximated by the linear function of n_h

f_q'(n_h) = f_q(n*_{qh}) + ∇'f_q(n*_{qh}) ( n_h − n*_{qh} ),

where ∇'f_q(n*_{qh}) is the vector of partial derivatives of f_q with respect to n_{qh} (h = 1, 2, ..., L) at the point n*_{qh}, with components

∂f_q / ∂n_h = − W_h^2 S_{qh}^2 / n_h^2.

After dropping the constant terms in the linear objective function, the NLPP (7) can be approximated, and the final problem is equivalent to maximizing (− f_q'), that is,

max Σ_{h=1}^{L} ( W_h^2 S_{qh}^2 / n*_{qh}^2 ) n_h subject to: n_h ∈ X.  (67)

Numerical Results

This section presents some numerical examples to illustrate the various theoretical concepts discussed above.

Example 1. A simulation study is used to show the computational procedure for the theoretical discussion of multiobjective MSS. The R package stratifyR [45] is used to simulate the data for the two different characteristics, which are divided into four strata. The information on the simulation study is given in Table 1. The available budget for the survey is C_0 = $2500. Using (7) in Section 2.3, the best individual optimal solutions for the characteristics j = 1, 2 are obtained as follows: f_1 = 1218.183, n_1 = 26, n_2 = 33, n_3 = 99, n_4 = 74, and f_2 = 494.3353, n_1 = 29, n_2 = 36, n_3 = 92, n_4 = 77. The weighted goal programming discussed in Section 2.4 is applied to obtain the compromise allocation using (13), with the help of the LINGO optimization package [46], as follows: f_1 = 1219.624, f_2 = 496.7824, n_1 = 27, n_2 = 35, n_3 = 96, n_4 = 75.
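A minimal sketch of the Chebychev (minimax) goal programming model (62) discussed above, using an auxiliary variable δ and a deterministic variance function; for brevity it omits the stochastic correction term. All stratum data, goals, and budget values are assumed for illustration.

import numpy as np
from scipy.optimize import minimize

N  = np.array([400.0, 300.0, 200.0, 100.0])
W  = N / N.sum()
S2 = np.array([[25.0, 16.0, 9.0, 4.0],
               [36.0, 25.0, 16.0, 9.0]])
g  = np.array([0.05, 0.08])                    # goal vector (hypothetical)
c, C0, budget = np.array([2.0, 3.0, 4.0, 5.0]), 100.0, 500.0

def f(n):
    # the k sampling variances, eq. (3)
    return ((1.0 / n - 1.0 / N) * W**2 * S2).sum(axis=1)

# decision vector x = (n_1..n_4, delta); minimize delta
obj = lambda x: x[-1]
cons = ([{"type": "ineq", "fun": lambda x, j=j: x[-1] - (f(x[:4])[j] - g[j])}
         for j in range(len(g))]               # f_j(n) - g_j <= delta
        + [{"type": "ineq", "fun": lambda x: budget - C0 - c @ x[:4]}])
x0 = np.append(np.full(4, 10.0), 1.0)
res = minimize(obj, x0, constraints=cons,
               bounds=[(2.0, Nh) for Nh in N] + [(0.0, None)])
print(np.round(res.x[:4]), res.x[-1])          # allocation and max deviation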
Example 2. A simulation study is used to show the computational procedure of the stochastic multiobjective multivariate stratified sampling. The R package stratifyR [45] is used to simulate the data for the two different characteristics, which are divided into four strata. The information on the simulation study is given in Table 2. The available budget for the survey is C_0 = $2000. The calculated parameters used in this study are presented in Table 3. The individual solutions of Example 2 for the characteristics j = 1, 2 are obtained using (59) of Section 4.2 as follows: f_1 = 0.1517723, n_1 = 20, n_2 = 26, n_3 = 64, n_4 = 52, and f_2 = 0.3982894, n_1 = 19, n_2 = 22, n_3 = 67, n_4 = 52. The stochastic goal programming discussed in Section 4.2 is applied to obtain the compromise allocation using (58) as follows: f_1 = 0.03982894, f_2 = 0.03982894, n_1 = 19, n_2 = 22, n_3 = 67, n_4 = 52. The Chebychev goal programming discussed in Section 4.3 was likewise applied to obtain a compromise allocation using (62). The stochastic sampling cost model discussed in Section 4.4 is applied to obtain the compromise allocation using (63) as follows: C = 1997.716, n_1 = 21, n_2 = 24, n_3 = 67, n_4 = 49.

Example 3. Here, the linearization of the sampling variance is examined numerically. Using the data of Table 1 in (67), the sample allocation for j = 1 is obtained as n_11 = 2, n_12 = 195, n_13 = 75, n_14 = 2. Solving the nonlinear sampling variance problem defined in (7) with the same data, the sample allocation is n_11 = 26, n_12 = 33, n_13 = 99, n_14 = 74 with a sampling variance of f_nonlinear = 1218.183. It is observed that the sample allocation from the nonlinear problem is better than the linearized one, and the sampling variability is higher in the linearized model than in the nonlinear one. Therefore, it can be concluded that linearization of the sampling problem yields neither better sample allocations nor minimum sampling variance.

Example 4. Here, the linear approximation of the sampling variance is again presented numerically. Using the data of Table 1 in (67), the sample allocation for j = 2 is obtained as n_21 = 2, n_22 = 3, n_23 = 203, n_24 = 2, with sampling variance f_linear = Σ_{h=1}^{L} (1/n_h − 1/N_h) W_h^2 S_{jh}^2 = 595.91. Solving the nonlinear sampling variance problem with the same data, the sample allocation is n_21 = 29, n_22 = 36, n_23 = 92, n_24 = 77 with a sampling variance of f_nonlinear = 494.3353. It is again observed, as in Example 3, that the sample allocation from the nonlinear problem is better than the linearized one, and the sampling variability is higher in the linearized model than in the nonlinear one. Therefore, linearization of the sampling problem does not give optimal sample allocations or minimum sampling variance. In general, it can be concluded that linearizing the nonlinear sampling variance in a survey sampling problem does not help determine the optimal sample allocation with minimum variability, since approximating the nonlinear function by a linear one loses information in the transformation and the function value is not adequately optimized. In such cases, it has been observed that the globally (Pareto) optimal solution of the problem can suffer.
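The gap between the exact and linearized variance observed in Examples 3 and 4 can be reproduced with a short sketch of the first-order linearization used in (67): f_q(n) ≈ f_q(n*) + ∇f_q(n*)'(n − n*), with ∂f_q/∂n_h = −W_h^2 S_{qh}^2 / n_h^2. The stratum data and evaluation points below are assumed values, not the Table 1 data.

import numpy as np

N  = np.array([400.0, 300.0, 200.0, 100.0])
W  = N / N.sum()
S2 = np.array([25.0, 16.0, 9.0, 4.0])    # stratum variances, one character

f    = lambda n: ((1.0 / n - 1.0 / N) * W**2 * S2).sum()
grad = lambda n: -(W**2) * S2 / n**2     # partial derivatives of f

n_star = np.array([40.0, 30.0, 20.0, 10.0])   # individual optimum (assumed)
n_new  = np.array([35.0, 32.0, 22.0, 11.0])   # a different allocation
approx = f(n_star) + grad(n_star) @ (n_new - n_star)
print(f(n_new), approx)                  # exact vs. linearized variance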
If the original nonlinear function is convex and we approximate it by a linear function, the properties of the nonlinear function are not preserved: the transformation can compromise several features of the problem, and the curvature information that drives the optimal allocation is lost. Hence, the optimal allocation obtained from the linearized sampling variance shows high variability compared with that of the actual sampling problem, and linearizing the sampling variance function in the hope of obtaining the optimal sample allocation is not a sound decision in a sample survey. Since the linearization of the sampling design in the deterministic setting does not give an efficient solution, there is no need to carry out the same exercise in the stochastic setting; therefore, this study did not consider linearization under uncertainty. Interested researchers can explore this in the context of different sampling designs.

Conclusion

In sample design, allocating samples efficiently and attaining maximum accuracy by minimizing variances play an important role. Various techniques, theorems, propositions, and stochastic models were studied, discussed, and presented for the multiobjective multivariate stratified sampling scheme, and the discussion is supported with numerical examples in each case. This research provides a theoretical framework and conceptual methodology for optimal allocation problems in survey sampling under both deterministic and stochastic environments. Based on the discussion and numerical illustrations, it can be deduced that the sampling variance values resulting from linearization have higher variability than in the nonlinear sampling variance case. Therefore, the study suggests that there is no need to linearize the original sampling variance function in the hope of obtaining an optimal decision regarding sampling allocation in survey sampling. Interested researchers can further demonstrate the usefulness and power of the techniques and methods presented in this study. In the future, the study could be extended to more sampling designs for optimal allocation problems in survey sampling.

Data Availability

Not applicable.

Conflicts of Interest

The authors declare that there are no known conflicts of interest regarding financial or authorship arrangements for this research.
7,778.4
2022-08-29T00:00:00.000
[ "Mathematics" ]
Using Excel to Explore the Effects of Assumption Violations on One-Way Analysis of Variance (ANOVA) Statistical Procedures

To understand any statistical tool requires not only an understanding of the relevant computational procedures but also an awareness of the assumptions upon which the procedures are based, and of the effects of violations of these assumptions. In our earlier articles (Laverty, Miket, & Kelly [1]; Laverty & Kelly [2] [3]) we used Microsoft Excel to simulate both a hidden Markov model and heteroskedastic models, showing different realizations of these models and the performance of the techniques for identifying the underlying hidden states using simulated data. The advantage of using Excel is that the simulations are regenerated when the spreadsheet is recalculated, allowing the user to observe the performance of a statistical technique under different realizations of the data. In this article we show how to use Excel to generate data from a one-way ANOVA (analysis of variance) model and how the statistical methods behave both when the fundamental assumptions of the model hold and when these assumptions are violated. The purpose of this article is to provide tools for individuals to gain an intuitive understanding of these violations using this readily available program.

One-way ANOVA requires that the population being sampled has a normal distribution and that the observations in the sample are independent. If these underlying assumptions do not hold, the desired performance of the statistical procedure may no longer hold true. Sometimes the effect of an invalid assumption on a property of the procedure is minimal, sometimes not. If the population is non-normal but has a finite mean and variance (so that the Law of Large Numbers and the Central Limit Theorem apply), the departure from normality will have little effect on the properties of confidence intervals computed assuming normality when the sample size is adequately large; this is a consequence of the Central Limit Theorem. The purpose of this paper is to show how to use Excel to simulate data for which the statistical technique of one-way analysis of variance (ANOVA) is used. When you press the recalculate button, under the Formulas menu, the randomly generated data are regenerated, the statistical calculations are recalculated, and the relevant graphs are redrawn, allowing the user to observe the variation in these procedures over different realizations of the data. See Figure 1.

A Model for Non-Normality (the Cauchy Distribution and the t-Distribution)

For most cases in which one-way ANOVA is applied, the normality assumption is taken to hold. The standard Cauchy distribution is equivalent to the t-distribution with 1 degree of freedom. A graph of the standard normal distribution, the t-distribution with 5 degrees of freedom, and the Cauchy distribution is given in Figure 2. The Cauchy distribution is an example of a distribution for which the Law of Large Numbers and the Central Limit Theorem do not apply [4]. For these two laws to hold, both the mean and the higher moments must exist and be finite, which is not the case for the Cauchy distribution. There is no convergence of the distribution of the sample mean to the central value; in fact, the distribution of the sample mean is the Cauchy distribution for any sample size (i.e.,
the distribution of the sample mean is the same as that of any individual observation when the data come from the Cauchy distribution). The Cauchy distribution is a heavy-tailed distribution. The t-distribution is also heavy-tailed (though not as extreme) when the degrees of freedom ν are small; as the degrees of freedom increase, the t-distribution approaches the standard normal distribution. Tsay [5] uses the t-distribution with 5 degrees of freedom to model the random disturbances that appear in various time series models of financial data, which accounts for the sometimes extreme changes that appear in financial data. The Cauchy distribution is appropriate if extreme values are prevalent in the data (the t-distribution with degrees of freedom higher than 1 in the less extreme case). This could occur in surveys where individuals are asked to make a continuous measurement of some quantity and extreme values are prevalent in the populations; for example, measurements of blood pressure, IQ, or the performance of a political leader could result in non-normal data with extreme values at either end. In such cases alternatives to ANOVA are appropriate. 1 We have not considered these alternatives in this paper. The t-distribution with ν degrees of freedom can also be shown to be a mixture of normal distributions with mean 0 and variance W, where the weighting distribution for W is the inverse gamma distribution with α = ν/2 and β = ν/2 (Cook [6]). This implies that a random variable T will have the t-distribution with ν degrees of freedom if W is selected from the inverse gamma distribution with α = ν/2 and β = ν/2 and then T is selected from the normal distribution with mean 0 and variance W.

Simulation of Data from a Continuous Distribution in Excel

Uniform random variates on [0, 1] can be generated in Excel with the function "RAND()". The generation of random variates from a continuous distribution with measure of central location μ and measure of scale σ can be carried out using the inverse-transform method (Fishman [7]), namely Y = F−1(U), where F(u) is the desired cumulative distribution of Y and U has a uniform distribution on [0, 1] (see Figure 3). In Excel this is achieved for the normal distribution (mean μ, standard deviation σ) with the function "μ + σ*NORMSINV(RAND())" and for the Cauchy distribution (t with 1 d.f.) with location parameter μ and scale parameter σ, "μ + σ*TINV(2*(1-RAND()),1)" (Figure 3). Comment: the Excel function TINV(U,df) does not calculate F−1(U) for the t-distribution with degrees of freedom df; however, the Excel function TINV(2*(1-U),df) does achieve the desired calculation.

Setting Up the Excel Worksheet to Simulate ANOVA Data

The data simulated will come from 3 populations (this can easily be generalized to more than 3 populations). The parameters of the populations are:
1) mean (central location), stored in cells C2:E2
2) standard deviation (scale parameter), stored in cells C3:E3
3) sample size, stored in cells C4:E4
4) a parameter that determines normality versus non-normality of the data, stored in cells C1:E1. This parameter is set to zero if the desired data are normal. If this parameter is set to an integer ν greater than 0, the data will come from a t-distribution with ν degrees of freedom. The t-distribution is a non-normal, heavy-tailed distribution, centered at and symmetric about zero.
5) a final parameter (precision), located in cell A2, which specifies the number of decimal places to which the raw data are rounded (Table 1).
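For readers who prefer a scripting environment, the following short Python sketch mirrors the Excel inverse-transform recipes above; the location and scale values are arbitrary illustrations.

import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(0)
U = rng.uniform(size=5)                     # plays the role of RAND()

mu, sigma = 10.0, 2.0
normal_draws = mu + sigma * norm.ppf(U)     # ~ "mu + sigma*NORMSINV(RAND())"
cauchy_draws = mu + sigma * t.ppf(U, df=1)  # ~ "mu + sigma*TINV(2*(1-RAND()),1)"
print(normal_draws)
print(cauchy_draws)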
Generating Simulated Data

Copy the observation numbers (1 to 10) into cells B7:B16. Paste in cell C7 the formula

=IF($B7>C$4,"",ROUND(C$2+C$3*IF(C$1=0,NORMSINV(RAND()),TINV(2*(1-RAND()),C$1)),$A$2))

and copy this formula to cells C7:E16. If the normality parameter is 0, the data generated will be from the normal distribution with mean = "loc. par." and standard deviation = "scale par.". If the normality parameter is an integer greater than 0, the data will be a random number with a t-distribution scaled by the "scale par." and location-shifted by the "loc. par.". The data will be rounded to the number of decimals specified by "precision".

Computation of Statistics Required for One-Way ANOVA

Suppose we have data from k normal populations with means μ1, μ2, ..., μk. The test statistic is

F = [ Σ_{i=1}^{k} n_i (ȳ_i − ȳ)² / (k − 1) ] / [ Σ_{i=1}^{k} Σ_{j=1}^{n_i} (y_ij − ȳ_i)² / (N − k) ],

where ȳ_i is the mean of the i-th sample, ȳ is the grand mean, and N = Σ n_i. This statistic has an F-distribution with ν1 = k − 1 degrees of freedom in the numerator and ν2 = N − k degrees of freedom in the denominator (a short code sketch of this computation follows the conclusion below). The computing formulae for the sums of squares can be implemented directly in the worksheet, and the testing for one-way ANOVA is carried out using the analysis of variance table (Table 2). Place the formula "=SUM(C18:E18)" in cell G18 to compute the grand total; the remaining sums of squares and mean squares can be placed in the adjacent cells (e.g., cell N22). The formula for a (1 − α)100% confidence interval for the mean of the i-th sample is

ȳ_i ± t_{α/2, N−k} √( MS_Within / n_i ).

To construct box-whisker plots of the data: 1) select a range containing the data, C6:E16, for 10 observations from each sample from the 3 populations; 2) the menu item for box plots can be found under the histogram item (Figure 5). Comment: there is a problem with Excel's method of drawing box plots. If there is a blank cell in the data range, Excel treats that cell as containing a zero when drawing the box plot rather than treating the observation as non-existent.

Exercises That Can Be Performed to Illustrate the Effects of Assumption Violations on ANOVA

In these exercises we generate samples under different ANOVA assumptions to examine the effects of violations of these assumptions on the ANOVA calculations.

Discussion

In applying any statistical procedure it is important to understand the assumptions on which it is based, and the effects on the procedure when these assumptions are violated. Sometimes the effects of the violations can be extreme, sometimes minimal. The purpose of this article is to provide tools for individuals to gain an intuitive understanding of these violations using the readily available program Microsoft Excel: when you press the recalculate button, under the Formulas menu, the randomly generated data are regenerated, the statistical calculations are recalculated, and the relevant graphs are redrawn. The statistical procedure that we have chosen to illustrate these tools is one-way ANOVA, an important component of introductory statistics courses and textbooks. The tools can easily be extended to other, more advanced univariate procedures.

Conclusion

Excel is a very useful tool for examining the performance of one-way analysis of variance (ANOVA), both when the assumptions hold and, more importantly, when the assumptions are violated.
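As a complement to the worksheet, here is a minimal Python sketch of the one-way ANOVA F computation described above, using simulated data for k = 3 groups; the group means and sizes are illustrative choices.

import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(42)
groups = [rng.normal(10, 2, 10),   # population 1 (illustrative parameters)
          rng.normal(12, 2, 10),   # population 2
          rng.normal(11, 2, 10)]   # population 3

k = len(groups)
N = sum(len(g) for g in groups)
grand_mean = np.concatenate(groups).mean()

ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within  = sum(((g - g.mean()) ** 2).sum() for g in groups)
F = (ss_between / (k - 1)) / (ss_within / (N - k))
p = f_dist.sf(F, k - 1, N - k)     # upper-tail p-value
print(F, p)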
2,286.2
2019-08-05T00:00:00.000
[ "Mathematics", "Computer Science" ]
Synthetic Covalently Linked Dimeric Form of H2 Relaxin Retains Native RXFP1 Activity and Has Improved In Vitro Serum Stability

Human (H2) relaxin is a two-chain peptide member of the insulin superfamily and possesses potent pleiotropic roles, including regulation of connective tissue remodeling and systemic and renal vasodilation. These effects are mediated through interaction with its cognate G-protein-coupled receptor, RXFP1. H2 relaxin recently passed Phase III clinical trials for the treatment of congestive heart failure. However, its in vivo half-life is short due to its susceptibility to proteolytic degradation and renal clearance. To increase its residence time, a covalent dimer of H2 relaxin was designed and assembled through solid phase synthesis of the two chains, including a B-chain bearing a judiciously sited monoalkyne, followed by their combination through regioselective disulfide bond formation. Use of a bisazido PEG7 linker and "click" chemistry afforded a dimeric H2 relaxin with its active site structurally unhindered. The resulting peptide possessed a secondary structure similar to native monomeric H2 relaxin and bound to and activated RXFP1 equally well. It showed a lower propensity to activate RXFP2, the receptor for the related insulin-like peptide 3. In human serum, the dimer had a modestly increased half-life compared to monomeric H2 relaxin, suggesting that additional oligomerization may be a viable strategy for producing longer-acting variants of H2 relaxin.

Introduction

Relaxin, one of the first hormones to be discovered, is a member of the insulin superfamily of peptides [1]. It is a small two-chain, three-disulfide-bonded peptide [2,3]. Once much ignored by the international research community [4], relaxin, like its sister hormones insulin and the insulin-like growth factors (IGFs), is now known to be a multifunctional hormone. It is primarily involved in the maintenance of reproduction and pregnancy and in facilitating the delivery of the young. Its native G-protein-coupled receptor, relaxin family peptide receptor 1 [5], RXFP1 (previously known as LGR7), has been shown to be widely distributed in various organs in both males and females. Human (H2) relaxin, the major stored and circulating form of human relaxin, is now known to play a key role in inflammatory and matrix remodeling processes and possesses potent vasodilatory, angiogenic, and other cardioprotective actions [6,7]. At physiological concentrations, H2 relaxin exists as a monomer [3]. However, Eigenbrot et al. reported the crystal structure of H2 relaxin and showed that it can exist as a noncovalent dimer [8], which was confirmed by sedimentation equilibrium analytical ultracentrifugation studies [9]. This probably corresponds to the stored form of H2 relaxin, and such a dimer is likely to be biologically inactive because the known key receptor-binding residues (RB13, RB17, and IB20) of H2 relaxin [2,7] form part of the dimer interface [8]. This is supported by the fact that the monomer of the related peptide, insulin, is involved in activating its tyrosine kinase receptor, whereas its dimeric and hexameric forms are involved in stabilizing the molecule during storage [10,11]. A covalently linked (through a disulfide bridge) dimeric insulin peptide was recently prepared by recombinant DNA technology. It neither bound to the insulin receptor nor induced a metabolic response in vitro [12], which is consistent with the view that the monomeric form is the active form of insulin.
However, the insulin dimer was shown to be extremely thermodynamically stable in vitro, which highlighted the importance of oligomerization for insulin stability [12]. This suggests that a covalently linked dimeric relaxin may also be more stable in vitro, as well as in vivo, compared to its monomeric form. Such a compound, if it retained biological activity, would be very valuable given that H2 relaxin recently passed Phase III clinical trials for treating acute heart failure [13] despite having a short in vivo half-life of approximately 10 min [14,15], which is characteristic of many peptides and proteins. It is for this reason that H2 relaxin requires continuous intravenous infusion into patients over 48 hours [13]. Therefore, there is a clear need to improve the pharmacokinetic properties of H2 relaxin in order to potentially improve its therapeutic value. In this study, we undertook to design and develop a covalently linked dimeric analogue of synthetic H2 relaxin using both click chemistry and a small polyethylene glycol (PEG) spacer to link the two monomers in a structural orientation chosen to retain biological activity (Figure 1). We studied its structure and function using circular dichroism (CD) spectroscopy and RXFP1- and RXFP2-expressing cells, respectively. The dimeric H2 relaxin was shown to possess a high degree of secondary structural similarity to native H2 relaxin. Importantly, unlike the insulin dimer, the dimeric H2 relaxin is equipotent to the native H2 relaxin monomer and exhibits improved in vitro serum stability.

Synthesis

2.2.1. Chemical Peptide Synthesis. The H2 relaxin A- and B-chains were assembled as C-terminal amides on an automated Protein Technologies Tribute peptide synthesizer (Tucson, AZ) or a CEM Liberty microwave peptide synthesizer (AI Scientific, Australia) using Fmoc chemistry. Side-chain protecting groups of trifunctional amino acids were TFA-labile, except for the tert-butyl- (tBu-) protected cysteine in position A11 and the acetamidomethyl- (Acm-) protected cysteines in positions A24 and B23. Using the instruments' default protocols, the A- and B-chains were separately synthesized at either 0.1 mmol or 0.2 mmol scale, activated with a 4- or 5-fold molar excess of HBTU (0.4 or 0.5 mmol for a 0.1 mmol scale of resin; 0.8 mmol for a 0.2 mmol scale of resin) in the presence of 5 equivalents of DIEA. Resin-attached peptides were treated with 20% v/v piperidine/DMF to remove the Nα-Fmoc protecting groups. When using the CEM Liberty microwave synthesizer, coupling and deprotection steps were carried out at 75 °C for 5 min at 25 W and for 3 min at 60 W microwave power, respectively. Coupling and deprotection steps were carried out for 30 and 10 min, respectively, when using the Tribute synthesizer. Upon complete coupling of the final amino acid of the native B-chain peptide sequence, an extra amino acid containing the alkyne group (Fmoc-L-propargylglycine) was coupled by manual coupling procedures.

Peptide-Resin Cleavage. Upon completion of solid phase synthesis, the A- and B-chains were cleaved from the solid supports by treatment with TFA containing anisole/TIPS/DODT (94%/2.5%/2%/1.5%, 20 mL) for 2 h. Cleaved products were concentrated by N2 bubbling, precipitated with ice-cold diethyl ether, and centrifuged at 3000 rpm for 5 min. The centrifuged pellet was then washed with ice-cold diethyl ether and centrifuged again; this process was repeated at least three times.

Peptide Purification.
All RP-HPLC analyses and purifications were carried out on analytical and preparative Vydac C18 columns (pore size 300 Å; dimensions 4.6 × 250 mm and 22 × 250 mm, respectively) in gradient mode with eluent A (0.1% aqueous TFA) and eluent B (0.1% TFA in acetonitrile). Following purification of the crude A- and B-chains, stepwise formation of the three disulfide bonds was carried out via successive oxidation, thiolysis, and iodolysis as previously described [18].

Receptor Binding Assays. Competition binding assays were performed as previously described [17] in the absence or presence of increasing concentrations of unlabelled peptides. Nonspecific binding was determined with an excess of unlabelled peptide (500 nM H2 relaxin). Fluorescence measurements were carried out at an excitation of 340 nm and emission of 614 nm on a Victor plate reader (Perkin-Elmer, Melbourne, Australia). All data are presented as the mean ± S.E. of the total specific binding percentage (in triplicate wells), repeated in at least three independent experiments, with curves fitted using one-site binding curves in GraphPad Prism 5 (GraphPad Inc., San Diego, CA). Statistical differences in pIC50 values were analyzed using one-way analysis of variance (ANOVA) coupled to a Newman-Keuls multiple comparison test for multiple group comparisons in GraphPad Prism 5.

Functional cAMP Assay. The influence of the synthetic analogues on cAMP signaling in HEK-293T cells expressing either human RXFP1 or RXFP2 receptors was assessed using a cAMP reporter gene assay as described previously [20]. Briefly, HEK-293T cells cotransfected with a cAMP response element (pCRE) β-galactosidase reporter plasmid were plated out in a Corning CellBIND 96-well plate at 50,000 cells per well per 200 μL. 24 h later, the cotransfected cells were treated with increasing concentrations of the H2 relaxin analogues in parallel with native H2 relaxin or human INSL3. After 6 h of incubation at 37 °C, the cell media were aspirated and the cells were frozen at −80 °C overnight. A β-galactosidase colorimetric assay measuring absorbance at 570 nm on a Benchmark Plus microplate spectrophotometer (Bio-Rad, Gladesville, Australia) was then used to measure relative cAMP responses. Each concentration point was measured in triplicate and each experiment performed independently at least three times. Ligand-induced stimulation of cAMP was expressed as a percentage of the maximum H2 relaxin response for RXFP1 cells or of the maximum human INSL3 response for RXFP2 cells. GraphPad Prism 5 was used to analyze the cAMP activity data, expressed as the mean ± S.E.M. Statistical analysis was conducted using one-way ANOVA with Newman-Keuls post hoc analysis.

2.2.11. In Vitro Serum Stability. The stability of the synthetic analogue was measured against native H2 relaxin in human serum. The purchased serum was not heat inactivated, in order to retain as much of its proteolytic enzyme activity as possible. The peptides tested were normalized to a final concentration of 1.0 mg/mL with deionized water, and 10.0 μL of peptide was added to 590 μL of 100% human serum. After addition of peptide to the serum, 50 μL samples were removed and quenched with 250 μL of ice-cold acetonitrile/0.1% TFA. The solution was then spun down in a benchtop centrifuge at 13,500 rpm (Eppendorf Centrifuge 5804R, Melbourne, Australia) for 15 min at 4 °C to pellet the larger, precipitated serum proteins. The remaining peptide/serum solution was placed immediately into a 37 °C incubator.
Degradation of the peptides was monitored by manual RP-HPLC injections of the supernatant using a Phenomenex Aeris Widepore 3.6 μm C4 analytical column (pore size 100 Å; 4.6 × 250 mm) in gradient mode with eluent A (0.1% aqueous TFA) and eluent B (0.1% TFA in acetonitrile). Samples were taken at the time points 0.5, 1.0, 1.5, 2.0, 2.5, 3, 4, 5, and 6 h, and each assay was carried out in triplicate. Elution of the target peptide was identified by retention-time analysis and characterization with MALDI-TOF MS at each time point. Kinetic analysis of the target peptide was carried out by least-squares analysis of the logarithm of the integrated peak area versus incubation time (a short code sketch of this fit is given at the end of this section). Nonspecific peptide degradation was measured from the peptide degradation in serum at 0 h. Correction for serum peptides that might coelute with the target peptides was carried out by subtracting the integrated peak areas of an equivalent serum-only solution at all time points (background subtraction). GraphPad Prism 5 was used to analyze the peptide degradation in serum, expressed as the mean ratio ± S.E.M., and Microsoft Excel was used to graph all data. Statistical analysis was conducted using one-way ANOVA with Newman-Keuls post hoc analysis.

Results

The selectively S-protected A- and B-chains were of high purity as determined by analytical RP-HPLC and MALDI-TOF MS. An Fmoc-propargylglycine was manually coupled onto the N-terminal end of the B-chain prior to cleavage from the solid support and subsequent purification of the crude peptide. Following this, stepwise regioselective disulfide bond formation between the A-chain and the monoalkyne B-chain was carried out according to established protocols [21][22][23] to form the two-chain, three-disulfide-bonded alkyne H2 relaxin. Copper-catalyzed alkyne-azide cycloaddition was then utilized to "click" two molecules of alkyne-H2 relaxin onto a bisazido PEG, (PEG)7, hence forming a dimeric relaxin molecule separated by a short PEG spacer (Figure 2). This was termed the H2 (PEG)7 H2 dimer and was shown to be of high purity as assessed by analytical RP-HPLC and MALDI-TOF MS (Figure 1). The CD spectra of H2 (PEG)7 H2 were measured in 10 mM PBS (pH 7.4) buffer (Figure 3). The dimer was found to retain a very similar secondary structure and a high degree of α-helical conformation (with pronounced double minima at approximately 208 nm and 222 nm), along with some β-sheet and random coil structure. The α-helical content of H2 relaxin was found to be 49% compared with 48% for the H2-PEG dimer. These values were calculated from the MRE at 222 nm, the [θ]222 values for relaxin and the H2 (PEG)7 H2 dimer being −17511.4 and −17679.4, respectively. The similarity of the MRE and helix content of these two peptides suggests that the dimer essentially retains the native H2 relaxin-like structure. To assess the biological activity of the dimer, in vitro binding and activity assays were undertaken in comparison with native, recombinantly produced H2 relaxin. These assays were carried out in HEK-293T cells stably expressing RXFP1, the native receptor of H2 relaxin. The starting alkyne H2 relaxin was also assessed for binding and signaling through the RXFP1 and RXFP2 receptors. This was performed to confirm that the disulfide bonds within the two monomeric analogues were assembled in the correct form, as other combinations of disulfide pairings have previously been shown to result in no interaction with the RXFP1 receptor (neither binding nor activity) [24].
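An illustrative sketch of the kinetic analysis described above: fit the logarithm of the peak area against incubation time and convert the slope to a half-life assuming first-order decay. The time points match the assay schedule; the peak areas are made-up numbers, not measured data.

import numpy as np

t_h  = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0, 6.0])  # hours
area = np.array([95, 88, 76, 64, 52, 44, 30, 21, 15.0])         # hypothetical

slope, intercept = np.polyfit(t_h, np.log(area), 1)  # least-squares line
half_life = np.log(2) / -slope                       # first-order kinetics
print(round(half_life, 2), "h")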
Following confirmation that the alkyne monomer possessed full H2 relaxin-like activity (data not shown), the H2 (PEG)7 H2 dimer was tested for binding and cAMP activity on the same HEK-293T cells expressing RXFP1. It was found to have similar affinity and potency to H2 relaxin at the RXFP1 receptor (Figures 4(a) and 4(b)). The dimer was then also tested with RXFP2, the native receptor for INSL3. Interestingly, the peptide displayed a significantly weaker activation propensity (Figure 5). The degradation kinetics and serum stability of the synthetic H2 (PEG)7 H2 dimer were measured against native H2 relaxin in male human serum (Figure 6). The purchased serum was not heat inactivated, in order to retain as much activity of its proteolytic enzymes and other reductants as possible. This provided a better representation of human serum and a direct in vitro measurement, in an experimental setting, of the degradation kinetics of the relaxin dimer when compared with native relaxin. The H2 (PEG)7 H2 dimer was observed to retain its original dimeric form in vitro with a half-life of 2.52 h, significantly longer than that of native relaxin at 2.22 h.

Discussion

Nearly nine decades since its discovery, the pleiotropic peptide H2 relaxin recently completed a successful Phase III clinical trial for the treatment of acute heart failure [13,25]. Despite this important success, the short in vivo half-life of the peptide (10 min) [14,15] necessitates its continuous intravenous infusion for optimum activity. As the in vivo half-life of a peptide or protein is affected both by its degradation by enzymes and by renal clearance [26,27], it can be improved by, for example, conjugation with large-molecular-weight compounds such as PEG or by polymerization/aggregation, which acts to (at least partially) shield it from proteolytic enzymes and also slows clearance, although the best results for the latter occur with a molecular mass greater than 40 kDa [26]. Conjugation of PEG molecules to protein-based biopharmaceuticals has recently proved highly successful in increasing their therapeutic index and clinical use [28]. For example, a combination treatment of PEGylated α-2 interferon and ribavirin has successfully eradicated hepatitis C virus in approximately 50% of treated patients, leading to several PEG-based proteins being approved for therapeutic purposes [29]. PEG moieties are also highly hydrated, hence increasing the hydrodynamic radius of the conjugates and correspondingly improving solubility and reducing the urinary glomerular filtration rate [30]. However, such modifications are not without disadvantages. The polydispersity of PEG can complicate accurate quality control, and it introduces physicochemical modifications that make chemical characterization by RP-HPLC and MS extremely challenging due to peak broadening, even peak disappearance, and a lack of ionization. For this reason, oligomerization strategies are gaining increasing attention as a means of increasing in vivo stability [31]. A dimeric erythropoietin formed by chemical crosslinking of the monomer showed an increased plasma half-life in rabbits of more than 24 h compared with the monomer's 4 h; the dimer also possessed 26-fold higher activity in vivo [32]. A dimer of the antibacterial peptide A3-APO possessed a substantially increased serum half-life (100 min) compared with the monomer (4 min) [33].
For this reason, we undertook to examine the feasibility of developing a synthetic dimer of H2 relaxin as a first step towards obtaining improved pharmacokinetics. This was designed and assembled using a bisazido PEG7 moiety between two molecules of synthetic functionalized H2 relaxin via click chemistry. This linker was chosen because it is sufficiently long to space the two peptide units apart and also because it is monodisperse, thus simplifying characterization. Our previous studies have shown that the N-terminus of the B-chain can be truncated by six residues [34] or modified to accommodate a functional moiety such as a biotin [35] or a large fluorophore [18] without significant loss of activity. This is because this site is far from the primary active site, which consists of the B-chain C-terminal α-helical region [2]. Consequently, the bisazido PEG7 moiety was conjugated at both of its ends with the N-terminal B-chain alkyne H2 relaxin employing the now well-established click reaction. The CD spectral data showed that the H2 (PEG)7 H2 dimer had a secondary structure comparable to native H2 relaxin (Figure 3), strongly suggesting that the dimeric form of H2 relaxin has native relaxin-like structural integrity. Further evidence was also provided by the near-native RXFP1 receptor binding and activation activity (Figure 4). This result confirmed that the presentation of the active site of H2 relaxin is essentially unaffected by tethering the molecule to the PEG7 linker via the N-terminus of the B-chain. This is reflected in the in vitro RXFP1 binding and cAMP data: despite the dimer being at least twice the molecular size of native relaxin, its size and spacer length did not impair its ability to bind to and activate downstream RXFP1 signaling. The RXFP1 binding domain within the B-chain of the dimer was still exposed and able to interact with the receptors, akin to native relaxin. This observation is consistent with previous work in which prorelaxin with its C-chain intact was still able to interact with and signal through the RXFP1 receptor [36,37]. Interestingly and positively, at RXFP2, the receptor for the related peptide insulin-like peptide 3, the dimer was significantly less active than H2 relaxin itself (Figure 5). Importantly, in the in vitro serum stability assay, the H2 (PEG)7 H2 dimer showed an improved half-life compared with native H2 relaxin (Figures 6(a) and 6(b)), which is probably due to increased molecular shielding. However, as evidenced by the disparity between the in vitro serum half-life (ca. 2 h; this study) and the reported in vivo half-life (a few minutes [14,15]) of native H2 relaxin, it is clear that renal clearance is the greater issue. It remains to be determined whether the increased hydrodynamic volume due to the larger molecular weight of the H2 relaxin dimer will also moderate its renal clearance; past experience suggests that this will be the case [33]. Indeed, it was also recently shown that a covalent dimer of exendin-4 had a 2.7-fold longer biological half-life than the native monomer [38]. In conclusion, efficient solid phase peptide synthesis in combination with click chemistry techniques was used to successfully prepare a structurally well-defined homodimer of H2 relaxin, H2 (PEG)7 H2. This peptide was shown to possess increased in vitro serum stability compared with native H2 relaxin while retaining relaxin-like binding affinity, activity, and structural integrity in vitro.
The dimer peptide also has the potential to slow the urinary glomerular filtration rate [30], which will be investigated in the future. Similar approaches can be adopted for the preparation of higher-order multimeric forms of H2 relaxin using, for example, a benzene core bearing three or more azido moieties [39] or short PEG-linked dendrons [31].
4,614.4
2015-01-22T00:00:00.000
[ "Chemistry", "Biology" ]
High-temperature thermoelectric properties of Na- and W-doped Ca3Co4O9 system

The detailed crystal structures and high-temperature thermoelectric properties of polycrystalline Ca3−2xNa2xCo4−xWxO9 (0 ≤ x ≤ 0.075) samples have been investigated. Powder X-ray diffraction data show that all samples are phase pure, with no detectable traces of impurity. The diffraction peaks shift to lower angle values with increasing doping (x), which is consistent with the larger ionic radii of the Na+ and W6+ ions. X-ray photoelectron spectroscopy data reveal that a mixture of Co2+, Co3+ and Co4+ valence states is present in all samples. It has been observed that the electrical resistivity (ρ), Seebeck coefficient (S) and thermal conductivity (κ) are all improved by dual doping of Na and W in the Ca3Co4O9 system. A maximum power factor (PF) of 2.71 × 10−4 W m−1 K−2 has been obtained for the x = 0.025 sample at 1000 K. The corresponding thermoelectric figure of merit (zT) for the x = 0.025 sample is calculated to be 0.21 at 1000 K, which is ∼2.3 times higher than the zT value of the undoped sample. These results suggest that Na and W dual doping is a promising approach for improving the thermoelectric properties of the Ca3Co4O9 system.

Introduction

Thermoelectric (TE) power generation from waste heat is considered a promising renewable energy technology. 1,2 TE devices convert thermal energy into electricity via the Seebeck effect, and electrical power into solid-state refrigeration via the Peltier effect. 3 In order to convert waste heat into electrical energy efficiently, good TE materials with high values of the dimensionless figure of merit (zT) are required: 4

zT = S²T / (ρκ),

where S (V K−1) is the Seebeck coefficient, T (K) is the absolute temperature, ρ (Ω m) is the electrical resistivity, and κ (W m−1 K−1) is the thermal conductivity. For practical devices with high waste-heat-to-electricity conversion efficiency, zT > 1 is an essential prerequisite. Therefore, robust TE materials with a large thermoelectric power factor, PF = S²/ρ, and a small thermal conductivity are required. In addition, TE materials must be stable in air at high operating temperatures over long periods of time and should be made of earth-abundant, low-cost elements. Conventional thermoelectric materials such as Bi2Te3 (Tmax = 550 K), SiGe (Tmax > 1300 K, expensive and oxidation sensitive) and half-Heusler compounds (Tmax = 850 K) do not meet all requirements for high-temperature thermoelectric applications. [5][6][7] Transition metal oxides are promising candidates and have been explored for their potential applications in high-temperature thermoelectric devices. A number of transition metal oxides, such as CaMnO3, 2 Al-doped ZnO 8 and Ta-doped SrTiO3 (ref. 9), show good thermoelectric properties and are stable in air at high temperatures of around 1000 K. Moreover, metal oxides can be synthesized from non-toxic and inexpensive precursors 10 and can possibly be segmented with non-oxide materials in TE modules to increase the efficiency of devices. 11 Consequently, significant research efforts have recently been devoted to the development of thermoelectric generators (TEGs) for automotive applications. 12 Among the transition metal oxides, misfit-layered cobaltites such as NaxCoO2, 13 Ca3Co4O9, 14 CuAlO2 (ref. 15) and Bi2Sr2Co2Ox (ref. 16) are considered to be promising p-type thermoelectric oxides for high-temperature applications.
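A minimal sketch of the figure-of-merit arithmetic defined above. The input values are illustrative, of the order reported for doped Ca3Co4O9, and are not taken from the paper.

S     = 170e-6   # Seebeck coefficient, V/K (hypothetical)
rho   = 1.1e-4   # electrical resistivity, Ohm m (hypothetical)
kappa = 1.3      # thermal conductivity, W/(m K) (hypothetical)
T     = 1000.0   # absolute temperature, K

PF = S**2 / rho               # power factor, W m^-1 K^-2
zT = S**2 * T / (rho * kappa) # dimensionless figure of merit
print(PF, zT)                 # ~2.6e-4 and ~0.2 for these inputs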
Mist-layered Ca 3 Co 4 O 9 (abbreviated as C-349 in the following text) cobaltite is especially an interesting candidate material due to its good thermoelectric performance (zT $ 0.83 at 973 K for single crystal Ca 3 Co 4 O 9 and $0.64 at 1073 K for heavily doped polycrystalline Ca 3 Co 4 O 9+d materials with metallic nanoinclusions), and its high thermal and chemical stabilities in air. 17,18 Ca 3 Co 4 O 9 cobaltite has a monoclinic mist structure with superspace group (X2/m(0b0)s0) crystal symmetry. C-349 compound is generally described as [Ca 2 CoO 3 ][CoO 2 ] 1.61 and its high performance is linked with its unique layered crystal structure. 14 It consists of two subsystems: a NaCl-type rocksalt (RS) Ca 2 CoO 3 layer [subsystem 1] sandwiched between two CdI 2type (H) CoO 2 hexagonal layers [subsystem 2]. 19 These two subsystems share the same a, c and b lattice parameters, and stack alternatively along the c axis. The mismatch of two unit cells results in dissimilar lattice parameters along the b axis i.e., b 1 [subsystem 1] and b 2 [subsystem 2] with a ratio b 1 (RS)/b 2 (H) $ 1.61. The Ca 2 CoO 3 (RS)-type block is an insulating layer whereas CoO 2 (H) sheet is conductive. 20 Recently, a number of research studies have focused on improving the TE performance of C-349 polycrystalline materials by using innovative synthesis methods such as spark plasma sintering (SPS), 21 hot pressing, 22 auto-combustion synthesis and solgel based electrospinning followed by SPS 23,24 etc. Chemical substitution of alternate metal cations at both Ca-and Co-sites of Ca 3 Co 4 O 9 is another approach that has been used to ne tune the electrical and thermal transport properties of TE oxides. These studies include partial substitution of Na, Bi, Y, Ag, Nd, Sr and Pb ions [25][26][27][28][29][30][31][32] at Ca-sites, which adjusts the carrier concentration without changing much the band structure of materials, and substitution of Fe, Mn, Cu, Ti, Ga, Mo, W and In ions [33][34][35][36][37][38] at Cosites with signicant changes in the band structure and transport mechanism. In another research work, it was reported that doping of Na ions at Ca-sites resulted in decrease of electrical resistivity and as a consequence increase of thermoelectric power factor to $5.5 Â 10 À4 W m À1 K À2 at 1000 K, though thermal conductivity (k) of these samples was still too high (4.0 W m À1 K À1 ), impeded the further improvement of zT values. 25 On the other hand, high valence 4d and 5d transition metal-doped C-349 samples exhibited much smaller thermal conductivity with reasonably good zT values. 39 There are some research studies on simultaneous substitution of two different metal cations at Ca-and Co-sites in C-349 system with signicant improvement in TE properties with zT values of $0.20-0.25. [40][41][42] However, there are no reports published on dual doping of Na and W metals in Ca 3 Co 4 O 9 cobaltite as yet. This prompted us to prepare a series of Ca 3À2x Na 2x Co 4Àx -W x O 9 (0 # x # 0.075) oxides by the conventional solid-state reaction method, and investigate their structural and hightemperature thermoelectric properties. We anticipated that Na and W dual doping in C-349 system would increase the Seebeck coefficient and electrical conductivity while thermal conductivity would decrease due to the W substitution. In this way, we expected to achieve much better zT values for these materials. 
Experimental

Polycrystalline samples of the Ca3−2xNa2xCo4−xWxO9 (0 ≤ x ≤ 0.075) series were prepared by the conventional solid-state reaction method. Stoichiometric quantities of CaCO3 (≥99.5%; Sigma-Aldrich), Co3O4 (≥99.5%; Sigma-Aldrich) and Na2WO4·2H2O (≥99.5%; Sigma-Aldrich) were ground, thoroughly mixed, pressed into pellets, and initially sintered at 700 °C for 8 h. The sintered pellets were reground, pressed into pellets again, and sintered twice at 900 °C for 8 h, with intermediate grinding and pelletizing, at a heating rate of 10 °C min−1 in air, and then slowly cooled down to room temperature. Powder X-ray diffraction (XRD) data were collected in the range 5° ≤ 2θ ≤ 60° with a step size of 0.02° using a Bruker D8 Advance diffractometer at room temperature with Cu Kα (λ = 1.5406 Å) radiation. Rietveld refinements of the XRD data were performed using the computer program JANA2006. 43 The surface morphology of the samples was studied using an FEI Nova NanoSEM 450 scanning electron microscope (SEM). X-ray photoelectron spectroscopy (XPS; Thermo Electron Limited, Winsford, UK) was used to examine the oxidation states of the Co and W ions in the C-349 based materials. Analyses were performed using a monochromatic Al-Kα X-ray source at room temperature with a takeoff angle of 90° from the surface plane. High-resolution Co 2p and W 4f XPS spectra were recorded using a 50 eV detector pass energy and 10 scans. The binding energies were calibrated by referencing to the Au 4f peak at 84.0 eV. Hall measurements were carried out at room temperature using the van der Pauw method with a 5.08 T superconducting magnet. The Seebeck coefficient (S) and electrical resistivity (ρ) were simultaneously measured from room temperature to 1000 K with an ULVAC-RIKO ZEM-3 under a low-pressure helium atmosphere. The thermal diffusivity (α) was measured with a NETZSCH LFA-457 laser flash system under vacuum. The heat capacity (Cp) was estimated using the temperature-independent Dulong-Petit law. The thermal conductivity (κ) was calculated using the equation κ = α·ρ·Cp, where Cp, ρ and α are the specific heat capacity, mass density and thermal diffusivity, respectively. The mass density of the samples was measured by the Archimedes method using water with a few drops of surfactant.

Crystal structure and surface morphology

The crystal structures of the Ca3−2xNa2xCo4−xWxO9 (0 ≤ x ≤ 0.075) samples were analyzed by collecting powder X-ray diffraction data at room temperature. The diffraction peaks in the XRD patterns of all samples (Fig. 1(a)) are identical to the standard JCPDS card (21-139) of the C-349 system, 44 indicating the formation of phase-pure compounds. Enlarged portions of the (0020) diffraction peaks are presented in the inset of Fig. 1(a) to illustrate the effect of Na and W dual doping on the C-349 crystal structure. It can be clearly seen that the diffraction peaks shift to lower 2θ values with increasing doping content (x). The XRD data were Rietveld refined using the computer program JANA2006 (ref. 43) in the superspace group X2/m(0b0)s0, and the resulting structural parameters are listed in Table 1. The refined XRD pattern of the x = 0.05 sample is shown in Fig. 1(b) as an example. It can be seen from Table 1 and Fig. 2 that the lattice parameters a, b1, c and the unit cell volumes (V1 and V2) all slightly increase with increasing doping content (x), which is consistent with the observed shifting of the diffraction peaks to lower 2θ values.
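As an aside on the thermal-conductivity calculation described in the Experimental section, the following sketch evaluates κ = α·ρ·Cp with Cp estimated from the Dulong-Petit law (3R per mole of atoms). The diffusivity and density values are assumed, not taken from the paper.

R = 8.314           # gas constant, J/(mol K)
atoms_per_fu = 16   # Ca3Co4O9: 3 + 4 + 9 atoms per formula unit
molar_mass = 0.500  # kg/mol for Ca3Co4O9 (approximate)
Cp = 3 * R * atoms_per_fu / molar_mass   # J/(kg K), Dulong-Petit estimate

alpha   = 5.0e-7    # thermal diffusivity, m^2/s (hypothetical)
density = 4300.0    # mass density, kg/m^3 (hypothetical)
kappa = alpha * density * Cp
print(Cp, kappa)    # heat capacity and thermal conductivity, W/(m K)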
Table 1: Crystallographic parameters for the Ca3-2xNa2xCo4-xWxO9 (0 ≤ x ≤ 0.075) samples obtained from the Rietveld refinement analysis of powder X-ray diffraction data. [Only part of the table survived extraction; recoverable entries include a = 4.8229(2) Å for x = 0.0, subsystem-1 cell volumes V1 of 236.6(4)-237.0(4) Å^3, subsystem-2 cell volumes V2 increasing from 145.2(7) to 146.0(7) Å^3 across the series, and the reliability factors.]

The morphology of the samples in two different directions, parallel (∥p) and perpendicular (⊥p) to the pellet pressing axis (Fig. 3), was studied by scanning electron microscopy in order to find out whether there is any micro-dimensional anisotropy in these layered materials. The grain morphology in both the parallel (∥p) and perpendicular (⊥p) directions of the pressure axis appears almost identical, suggesting that no, or only negligible, anisotropy can be observed on the micrometer scale. The SEM images show a plate-like crystal grain morphology, which is a typical feature of materials, including the C-349 system, prepared by the conventional solid-state chemistry method. 46 Close inspection of the SEM micrograph for the x = 0.025 sample reveals that its crystal grains are larger (≈2.52 μm) and more compact than the crystal grains of the other samples (0.83-1.45 μm). The measured densities for all samples are in the range ≈86-94% of the theoretical density (Table 2). The binding energies of the Co 2p and W 4f sub-shells of selected samples were estimated from the high-resolution XPS measurements as shown in Fig. 4. As reported elsewhere, the XPS spectrum of Co 2p splits into two parts, Co 2p3/2 and 2p1/2, with an intensity ratio of approximately 2:1 due to spin-orbit coupling. 47 The line shapes of both the Co 2p3/2 and 2p1/2 spectra are similar to the results reported in the literature. 48 The main peaks corresponding to the Co 2p3/2 energy are located at 778.8 eV, 779.69 eV and 781.0 eV for the x = 0.0, 0.025 and 0.075 samples, respectively. Shake-up satellite peaks due to metal-to-ligand charge transfer processes at higher binding energies than the 2p3/2 and 2p1/2 main peaks are also detected. The observed variations in the Co 2p3/2 binding energies can be explained by the larger electronegativity of tungsten (2.36, Pauling scale) than cobalt (1.88, Pauling scale). 49 From careful analysis of the XPS data, we can infer that the Co ions have three types of valence states, Co2+, Co3+ and Co4+, in all samples. However, the average valence state of Co is most likely between 3+ and 4+, as reported elsewhere. 50,51 The observed increase in the binding energies of the Co 2p3/2 peaks suggests that the relative population of Co3+ ions decreases with increase in doping. The 4f5/2 and 4f7/2 peaks for the W ions closely resemble the reference peaks of WO3, indicating that W is present in the W6+ valence state in all samples.

Thermoelectric properties
The temperature-dependent electrical resistivities, ρ(T), as a function of Na and W co-doping (x) are shown in Fig. 5(a). The ρ(T) curve for the x = 0.0 sample exhibits semiconducting-like behavior (dρ/dT < 0) from room temperature to around 500 K, and then shows a transition to metallic-like behavior (dρ/dT > 0) from 600 K onwards. This kind of behavior in the resistivity of the C-349 system has previously been attributed to a spin-state transition, 52 removal of oxygen atoms from porous layered cobaltites 38 and structural distortion in the Ca2CoO3 sheets. 53 On the other hand, all doped samples show metallic behavior at low temperatures before showing a transition to semiconducting behavior above 400 K.
The absolute values of the resistivity at 1000 K for the x = 0.025 and 0.05 samples are smaller than that of the undoped sample, but higher for x = 0.075. This shows that dual doping of small amounts of Na and W has a beneficial effect on the resistivity of our samples. We can describe the high-temperature electrical resistivity of cobaltites using the small polaron hopping model, 54 which is given by the relation ρ(T) = ρ0 T exp(Ea/(kB T)), where ρ0 is a constant factor called the residual resistivity, kB is the Boltzmann constant and Ea is the activation energy of electrical conductivity for polaron hopping. The linear fits of ln(ρ/T) versus 1/T above 600 K, as shown in Fig. 5(b), suggest that the small polaron hopping model applies well to the electrical resistivity of these materials. The slopes of the straight lines (Ea/kB) were used to estimate the activation energies for all samples, as listed in Table 2. It has been observed that the Ea values for the doped samples are relatively higher than for the pristine C-349 system. This suggests that the energy demand for carriers to jump from the top of the valence band to the bottom of the conduction band, in general, increases with doping in our samples. However, this variation could also be due to the creation of some in-gap states which would change with doping. As discussed elsewhere, hopping of carriers occurs between Co3+ and Co4+ in the CoO2 layer and, as a consequence, the ratio of Co3+ to Co4+ ions directly affects the hopping distance in these materials. 55 We anticipate that the concentration of Co4+ ions would decrease with increase in doping of W6+ ions, resulting in an increase of the hopping distance and therefore an increase in the activation energies with doping. Similar results have been reported for the activation energies of Fe, Ag, Gd and Y doped misfit layered cobaltites. 41,55,56 Hall effect measurements were carried out at room temperature for all samples to determine the carrier concentration and carrier mobility as a function of dopant content. It has been observed that the carrier concentration (n_300K) initially increases from 5.12 × 10^19 cm^-3 (x = 0.0) to 6.59 × 10^19 cm^-3 (x = 0.025) with doping and then decreases again to 4.48 × 10^19 cm^-3 (x = 0.075) with further increase in doping content, as shown in Fig. 6. This increase in carrier concentration at low doping level is probably due to the substitution of Na+ for Ca2+ ions in the C-349 system, which results in an increase of the number of hole carriers. With further increase in doping content (x), structural distortions and the electron-doping-like behavior of W6+ ions, due to the higher valence state of W6+ compared with the Co3+ and Co4+ ions, start to dominate and result in a decrease of the carrier concentration. The carrier mobilities (μ_300K) follow the same trend and decrease to a value of 0.65 cm^2 V^-1 s^-1 with increase in doping after showing a maximum value for the x = 0.025 sample (Table 2). These trends in n_300K and μ_300K, together with the larger grain sizes (Table 2) of the Na and W dual doped samples, can be used to explain the electrical resistivity of these materials. With the largest values of n_300K, μ_300K and grain size, the x = 0.025 sample has the lowest electrical resistivity; ρ then increases with increase in doping according to the equation 1/ρ = neμ. Fig. 7(a) shows the temperature dependence of the thermopower (S) of the Ca3-2xNa2xCo4-xWxO9 samples. The sign of S is positive for all samples, suggesting that holes are the majority charge carriers.
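The activation-energy extraction described above, a straight-line fit of ln(ρ/T) versus 1/T above 600 K whose slope gives Ea/kB, can be sketched in a few lines of Python; the data arrays below are synthetic placeholders used only for a sanity check, not the measured curves.

```python
import numpy as np

K_B = 8.617333e-5  # Boltzmann constant, eV/K

def polaron_activation_energy(T, rho, T_min=600.0):
    """Fit ln(rho/T) = ln(rho0) + (E_a/k_B) * (1/T) above T_min;
    return the activation energy E_a in eV (slope times k_B)."""
    mask = T >= T_min
    x = 1.0 / T[mask]
    y = np.log(rho[mask] / T[mask])
    slope, _intercept = np.polyfit(x, y, 1)  # slope = E_a / k_B
    return slope * K_B

# Placeholder data: a synthetic curve generated with E_a = 0.10 eV.
T = np.linspace(300.0, 1000.0, 50)
rho = 1e-5 * T * np.exp(0.10 / (K_B * T))
print(f"recovered E_a = {polaron_activation_energy(T, rho):.3f} eV")  # ~0.100
```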
The values of S increase with increase in temperature for all samples. It is also evident from Fig. 7(a) (inset) that the thermopower values increase with increase in doping content (x). The maximum value of 216 μV K^-1 at 1000 K for the x = 0.075 sample is higher than previously reported S values for the Na doped C-349 system (187 μV K^-1) at this temperature. 25,40 As discussed above, the x = 0.025 sample shows the largest values of n_300K and μ_300K, and the values of these quantities then decrease with further increase in doping. This is consistent with the observed behavior of S with doping in our samples. 26,28 The contribution of the carrier concentration and the energy-dependent carrier mobility μ(ε) in describing S is given by Mott's formula (originating from the Sommerfeld expansion), 57

S = (π² kB² T / 3e) [d ln σ(ε)/dε] at ε = εF. (3)

By using σ = e n μ(ε) in eqn (3), we can get

S = (π² kB² T / 3e) [(1/n)(dn(ε)/dε) + (1/μ)(dμ(ε)/dε)] at ε = εF,

from which the following observations can be made. (1) S depends on the carrier concentration n and the electronic specific heat (Ce), and the effect of Ce dominates over n, which results in larger S values for the doped samples. 59 (2) We could assume that the slope of the density of states at the Fermi level is the main contribution to the second part of the above equation for the undoped sample. (3) Partial substitution of W6+ for Co3+/Co4+ ions decreases the hole carriers and thus results in an increase of the thermopower. According to the Pisarenko relation for degenerate semiconductors,

S = (8π² kB² T / (3qh²)) m* (π/(3n))^(2/3),

where kB, h, q and m* are the Boltzmann constant, Planck's constant, the unit charge of the electron and the effective mass of the carriers, respectively. We have calculated a value of m*/me ≈ 0.9 for all samples by plotting the room-temperature S values vs. n^(-2/3), as shown in the inset of Fig. 5(b). We can apply a simple parabolic band model by using the measured n and estimated m* values, as described by the following equations: 60

S = ±(kB/q) [((2 + λ) F(λ+1)(ξ)) / ((1 + λ) F(λ)(ξ)) − ξ],
n = 4π (2m* kB T / h²)^(3/2) F(1/2)(ξ),

where F(1/2)(ξ) is the Fermi integral, ξ is the reduced electrochemical potential and λ is a scattering parameter whose value is taken as 0 for acoustic phonon scattering, 1 for optical phonon scattering, and 2 for ionized impurity scattering. 60 The calculated S values at room temperature as a function of carrier concentration (n) are shown in Fig. 7(b). The three scattering mechanisms are represented by three lines in this plot. The measured and calculated values of S match very well when we take λ = 0, which suggests that acoustic phonon scattering is the dominant scattering mechanism for all samples. We have used the electrical resistivity and thermopower values to calculate the thermoelectric power factor PF = S²/ρ for all samples, as shown in Fig. 8. The PF values increase with increase in temperature for all samples due to the increase of the thermopower with temperature. We can also see from Fig. 8 that the PF values are significantly improved with Na and W dual doping in the C-349 system. Among all doped samples, the x = 0.025 sample exhibits the highest PF of 2.71 × 10^-4 W m^-1 K^-2 at 1000 K, which is about 2.3 times more than the PF, 1.27 × 10^-4 W m^-1 K^-2, of the undoped sample. The PF obtained in this work is higher than the previously reported value of ≈2.1 × 10^-4 W m^-1 K^-2 at 1073 K for a Ca3Co3.97Cu0.03O9 sample prepared by the conventional solid-state reaction method. 61

3.2.1. Thermal conductivity. The temperature-dependent total thermal conductivity (κ_Total) for the Ca3-2xNa2xCo4-xWxO9 samples is shown in Fig. 9(a). We can clearly see that κ(T) decreases with increase in temperature for all samples in the measured temperature range. For the x = 0.0 sample, the measured value of κ at 1000 K is 1.36 W m^-1 K^-1, and it decreases to 1.26 W m^-1 K^-1 for the x = 0.025 sample.
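The single parabolic band estimate discussed above can be reproduced numerically. The sketch below assumes the standard SPB forms quoted in the text (Fermi integral F_j, reduced potential ξ, scattering parameter λ = 0 for acoustic phonons); the printed values are illustrative, not fitted to the samples.

```python
import numpy as np
from scipy.integrate import quad

K_B = 1.380649e-23   # J/K
E = 1.602177e-19     # C
H = 6.626070e-34     # J s
M_E = 9.109384e-31   # kg

def fermi_integral(j, xi):
    """F_j(xi) = integral_0^inf x^j / (1 + exp(x - xi)) dx, evaluated numerically."""
    val, _ = quad(lambda x: x**j / (1.0 + np.exp(x - xi)), 0.0, 60.0)
    return val

def spb_seebeck(xi, lam=0):
    """SPB Seebeck coefficient (V/K); lam = 0 for acoustic phonon scattering."""
    num = (2 + lam) * fermi_integral(lam + 1, xi)
    den = (1 + lam) * fermi_integral(lam, xi)
    return (K_B / E) * (num / den - xi)

def spb_carrier_conc(xi, m_star, T=300.0):
    """Carrier concentration n (m^-3) for effective mass m_star (kg)."""
    return 4.0 * np.pi * (2.0 * m_star * K_B * T / H**2) ** 1.5 * fermi_integral(0.5, xi)

# Sweep the reduced potential and report the S(n) curve for m*/m_e = 0.9:
for xi in (-2.0, 0.0, 2.0):
    S = spb_seebeck(xi) * 1e6                  # micro-V / K
    n = spb_carrier_conc(xi, 0.9 * M_E) / 1e6  # cm^-3
    print(f"xi = {xi:+.1f}: S = {S:7.1f} uV/K, n = {n:.2e} cm^-3")
```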
On further increase in doping, κ at 1000 K slightly increases again, but its value remains lower than that of the pristine C-349 system. In order to understand the observed changes, we have investigated the contributions of the electronic (κ_el) and lattice (κ_Lattice) parts of the thermal conductivity separately. Fig. 9(b) shows the values of κ_el as determined from the experimentally measured ρ values by using the Wiedemann-Franz law (κ_el = LT/ρ), where L is the Lorenz number, whose value is 2.44 × 10^-8 W Ω K^-2 for free electrons. 3 The values of κ_Lattice were calculated using the relation κ_Lattice = κ_Total − κ_el and are shown in Fig. 9(c). It is evident from the plot that κ_Lattice, and not κ_el, is the major contributing factor to the total thermal conductivity of our samples. Hence, the changes in κ_Total with increase in doping content (x) mainly originate from the changes in κ_Lattice. 33 We can attribute these changes in κ_Lattice to the larger ionic radius of W6+ compared with the Co3+/Co4+ ions, resulting in structural distortions and therefore an increase in phonon scattering. However, we cannot rule out some other unexplained microstructural aspects of these materials that could also be responsible for the irregular behavior in the thermal conductivity of the doped samples.

3.2.2. Figure of merit. The thermoelectric figure of merit (zT = S²T/(κρ)) for the Ca3-2xNa2xCo4-xWxO9 samples as a function of temperature and doping content (x) is shown in Fig. 10. It is evident that the zT values of the doped samples are significantly higher than those of the pristine C-349 system. The x = 0.025 sample has the highest zT value of 0.21 at 1000 K among all samples, which is about 2.3 times higher than the zT value of the undoped sample. This increase in zT value is due to the increase in the Seebeck coefficient, and the decrease in the electrical resistivity and thermal conductivity, of this sample. The zT values of the other doped samples are also reasonably good, as listed in Table 2. As a comparison, the zT value of our x = 0.025 sample is comparable to or slightly better than previously reported results for Na doped Ca2.5Na0.5Co4O9 (zT = 0.18 at 1000 K), 25 the Bi and Na dual doped C-349 system (zT = 0.18 at 1073 K), 62 and other dual doped C-349 systems. 42 It has been observed that the electrical resistivity of our samples is still higher than in most of the previously reported results, and therefore the zT values of these materials are moderate. We believe that the zT values of these samples can be further improved by preparing more compact materials under optimized synthesis conditions.

Conclusion
Polycrystalline samples of Ca3-2xNa2xCo4-xWxO9 (0 ≤ x ≤ 0.075) have been synthesized by the conventional solid-state reaction method. Powder X-ray diffraction data revealed that the Na+ and W6+ ions enter into the Ca2CoO3 and CoO2 layers of the Ca3Co4O9 system, respectively. High-resolution XPS data showed that the average valence state of Co in our samples is between 3+ and 4+. Significant improvements in the ρ, S and κ values of the Na and W dual doped samples have been observed. These results are also supported by the carrier concentrations (n) and carrier mobilities (μ) as confirmed by the Hall effect measurements. The observed power factor (PF) and thermoelectric figure of merit (zT) of 2.71 × 10^-4 W m^-1 K^-2 and 0.21, respectively, at 1000 K for the x = 0.025 sample are comparable to or slightly higher than most of the reported results for C-349 based materials prepared by the conventional solid-state reaction method.
These results show that Na and W dual doping is an effective route for improving the thermoelectric properties of the Ca3Co4O9 system.

Conflicts of interest
There are no conflicts to declare.

Fig. 9 (a) Total thermal conductivity (κ_Total = κ_el + κ_Lattice); (b) doping content (x) dependence of κ_Total at 1000 K; (c) electronic part of the thermal conductivity (κ_el) and (d) lattice part of the thermal conductivity (κ_Lattice) for the Ca3-2xNa2xCo4-xWxO9 (0 ≤ x ≤ 0.075) samples as a function of temperature. Fig. 10 Thermoelectric figure of merit (zT) for the Ca3-2xNa2xCo4-xWxO9 (0 ≤ x ≤ 0.075) samples as a function of temperature; the inset shows the doping content (x) dependence of zT at 1000 K.
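As a closing numerical check of the quantities reported above, the following sketch evaluates the Wiedemann-Franz decomposition and the figure of merit. The PF and κ_Total values are those reported for the x = 0.025 sample at 1000 K; the resistivity used for the κ_el estimate is an illustrative assumption, since its numeric value is only plotted, not quoted, in the text.

```python
L0 = 2.44e-8  # Lorenz number for free electrons, W Ohm K^-2

def kappa_electronic(rho_ohm_m, T):
    """Wiedemann-Franz law: kappa_el = L * T / rho."""
    return L0 * T / rho_ohm_m

def zt_from_pf(pf_w_m_k2, kappa_total, T):
    """Dimensionless figure of merit zT = S^2 T / (rho kappa) = PF * T / kappa."""
    return pf_w_m_k2 * T / kappa_total

T = 1000.0
pf, kappa = 2.71e-4, 1.26   # reported for x = 0.025 at 1000 K
rho = 1.5e-4                # Ohm m; illustrative assumption, not a reported value
k_el = kappa_electronic(rho, T)
print(f"kappa_el ~ {k_el:.2f} W/(m K), kappa_lattice ~ {kappa - k_el:.2f} W/(m K)")
print(f"zT ~ {zt_from_pf(pf, kappa, T):.2f}")  # ~0.22, consistent with the reported 0.21
```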
Benefits of Intermediate Annotations in Reading Comprehension

Complex compositional reading comprehension datasets require performing latent sequential decisions that are learned via supervision from the final answer. A large combinatorial space of possible decision paths that result in the same answer, compounded by the lack of intermediate supervision to help choose the right path, makes learning particularly hard for this task. In this work, we study the benefits of collecting intermediate reasoning supervision along with the answer during data collection. We find that these intermediate annotations can provide two-fold benefits. First, we observe that for any collection budget, spending a fraction of it on intermediate annotations results in improved model performance, for two complex compositional datasets: DROP and Quoref. Second, these annotations encourage the model to learn the correct latent reasoning steps, helping combat some of the biases introduced during the data collection process.

Introduction
Recently many reading comprehension datasets requiring complex and compositional reasoning over text have been introduced, including HotpotQA (Yang et al., 2018), DROP (Dua et al., 2019), Quoref, and ROPES (Lin et al., 2019). However, models trained on these datasets (Hu et al., 2019; Andor et al., 2019) only have the final answer as supervision, leaving the model guessing at the correct latent reasoning. Figure 1 shows an example from DROP, which requires first locating various operands (i.e. relevant spans) in the text and then performing filter and count operations over them to get the final answer "3". However, the correct answer can also be obtained by extracting the span "3" from the passage, or by adding or subtracting various numbers in the passage. The lack of intermediate hints makes learning challenging and can lead the model to rely on data biases, limiting its ability to perform complex reasoning.

[Figure 1: DROP example. Question: "How many touchdown passes did Cutler throw in the second half?" Answer: 3. The passage is an NFL game summary in which Cutler's three second-half touchdown passes must be located and counted among many other scoring plays by both teams.]

In this paper, we present three main contributions. First, we show that annotating relevant context spans, given a question, can provide an easy and low-cost way to learn better latent reasoning. To be precise, we show that under low budget constraints, collecting these annotations for up to 10% of the training data (2-5% of the total budget) can improve performance by 4-5% F1.
We supervise the current state-of-the-art models for DROP and Quoref by jointly predicting the relevant spans and the final answer. Even though these models were not designed with these annotations in mind, we show that the annotations can still be successfully used to improve model performance. Models that explicitly incorporate these annotations might see greater benefits. Our results suggest that future dataset collection efforts should set aside a fraction of their budget for intermediate annotations, particularly as the reasoning required becomes more complex.

[Figure 2: Quoref example. Question: "What record do the children that Conroy teaches play back to him?" Answer: Beethoven's Fifth Symphony. The passage describes Conroy teaching island children about the outside world, including Beethoven's Fifth Symphony; as he leaves the island for the last time, a student plays the symphony's opening movement on a record. Answering requires following the coreference chain to the queried entity.]

Second, these annotations can help combat biases that are often introduced while collecting data (Gururangan et al., 2018; Geva et al., 2019). This can take the form of label bias (in DROP, 18% of questions have answers 1, 2, or 3) or annotator bias, where a small group of crowd workers creates a large dataset with common patterns. By providing intermediate reasoning steps explicitly, the annotations we collect help the model overcome some of these biases in the training data. Finally, the intermediate annotations collected in this work, including 8,500 annotations for DROP and 2,000 annotations for Quoref, will be useful for training further models on these tasks. We have made them available at https://github.com/dDua/Intermediate_Annotations.

Intermediate Annotations
Intermediate annotations describe the right set of context spans that should be aggregated to answer a question. We demonstrate their impact on two datasets: DROP and Quoref. DROP often requires aggregating information from various events in the context (Figure 1). It can be challenging to identify the right set of events directly from an answer when the same answer can be derived from many possible event combinations. We annotate the entire event span, including all the attributes associated with the specific event. Quoref requires understanding long chains of coreferential reasoning, as shown in Figure 2, which are often hard to disentangle, especially when the context refers to multiple entities. We specifically annotate the coreference chains which lead to the entity being queried. Collection process: We used Amazon Mechanical Turk to crowd-source the data collection. We randomly sampled 8,500 and 2,000 QA pairs from the training sets of DROP and Quoref, respectively. We showed a QA pair and its context to the workers and asked them to highlight "essential spans" in the context.
In the case of DROP, crowd workers were asked to highlight complete events with all their corresponding arguments in each span. For Quoref, they were asked to highlight the coreference chains associated with the answer entity in the context. Cost of gathering intermediate annotations: Each HIT, containing ten questions, paid $1 and took approximately five minutes to complete. Overall, we spent $850 to collect 8,500 annotations for DROP and $200 to collect 2,000 annotations for Quoref. If these annotations are collected simultaneously with dataset creation, it may be feasible to collect them at a lower cost, as the time taken to read the context again will be avoided.

Experiments and Results
In this section, we train multiple models for the DROP and Quoref datasets, and evaluate the benefits of intermediate annotations as compared to traditional QA pairs. In particular, we focus on the cost vs. benefit tradeoff of intermediate annotations, along with evaluating their ability to mitigate bias in the training data.

Setup
We study the impact of annotations on DROP on two models at the top of the leaderboard: NABERT 1 and MTMSN (Hu et al., 2019). Both models employ an arithmetic block similar to the one introduced in the baseline model (Dua et al., 2019) on top of contextual representations from BERT (Devlin et al., 2019). For Quoref, we use the baseline XLNet (Yang et al., 2019) model released with the dataset. We supervise these models with the annotations in a simple way, by jointly predicting the intermediate annotation and the final answer. We add two auxiliary loss terms to the marginal log-likelihood loss function. The first is a cross-entropy loss between the gold annotations (g) and the predicted annotations, which are obtained by passing the final BERT representations through a linear layer to get a score per token p, then normalizing each token's score of being selected as an annotation with a sigmoid function. The second is an L1 loss on the sum of predicted annotations, encouraging the model to select only a subset of the passage. The hyper-parameters α1 and α2 are used to balance the scale of both auxiliary loss terms with the marginal log-likelihood.

Cost vs Benefit
To evaluate the cost-benefit trade-off, we fix the total collection budget and then vary the percentage of the budget that goes into collecting intermediate annotations. As shown in Figure 3, the model achieves better performance (+1.7% F1) when spending $7k with 2% of the budget used for collecting intermediate reasoning annotations, compared to the model performance when spending $10k on collecting only QA pairs. Overall, from Figure 3 we can see that allocating even 1% of the budget to intermediate annotations provides performance gains. However, we observe that allocating a large percentage of the budget to intermediate annotations at the expense of QA pairs reduces performance. In our experiments, we find that the sweet-spot percentages of the budget and of the training set that should be allocated to intermediate annotations are 2% and ∼10%, respectively.

Bias Evaluation
Unanticipated biases (Min et al., 2019; Manjunatha et al., 2019) are often introduced during dataset collection for many reasons (e.g., domain-specific contexts, crowd-worker distributions, etc.). These "dataset artifacts" can be picked up by the model to achieve better performance without learning the right way to reason. We explore two examples of such dataset artifacts in DROP and Quoref. In DROP, around 40% of the passages are from NFL game summaries.
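A minimal PyTorch-style sketch of the auxiliary supervision described in the Setup section follows: a per-token linear scorer over the final BERT representations, a sigmoid cross-entropy term against the gold annotations, and an L1-style term on the total predicted-annotation mass. The module and tensor names are hypothetical, and the alpha1/alpha2 weights are placeholders mirroring the paper's hyper-parameters.

```python
import torch
import torch.nn as nn

class AnnotationAuxLoss(nn.Module):
    """Auxiliary losses over token-level 'relevant span' annotations.

    hidden: (batch, seq_len, dim) final BERT representations
    gold:   (batch, seq_len) 0/1 gold annotation mask
    Returns alpha1 * CE(sigmoid(scores), gold) + alpha2 * L1(sum of predictions).
    """
    def __init__(self, dim, alpha1=1.0, alpha2=0.01):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)   # per-token annotation score p
        self.alpha1, self.alpha2 = alpha1, alpha2
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, hidden, gold):
        scores = self.scorer(hidden).squeeze(-1)         # (batch, seq_len)
        ce = self.bce(scores, gold.float())              # cross-entropy vs gold g
        sparsity = torch.sigmoid(scores).sum(-1).mean()  # select only a small subset
        return self.alpha1 * ce + self.alpha2 * sparsity

# In training: total_loss = marginal_log_likelihood_loss + aux(hidden_states, gold_mask)
```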
The frequency of counting and arithmetic questions from this portion of the data resulted in the answers 1, 2, and 3 making up 18% of the entire training set. To study the effect of a biased answer distribution on model performance, we randomly sample 10k QA pairs with answers ∈ [0,9] from the training set as a biased training set. We also sample QA pairs from the validation set uniformly for each answer ∈ [0,9], thus ensuring that each answer has equal representation in the unbiased validation set. In Quoref, we found that around 65% of the answers are entity names present in the first sentence of the context. Similar to DROP, we create a biased training set with 5k QA pairs from the original training data, and an unbiased validation set with equal representation of answers from the first sentence and from the rest of the context. We investigate the effects of spending a small additional budget, either by adding more QA pairs (from the biased data distribution) or by collecting intermediate annotations, on this bias. We use two metrics to measure the extent to which bias has been mitigated. The first is the original metric for the task, i.e. F1, which measures how accurate the model is on the unbiased evaluation. Further, we also want to evaluate the extent to which the errors made by the model are unbiased; in other words, how much the error is diffused over all possible answers, rather than concentrated on the biased labels. We compute confusion loss (Machart and Ralaivola, 2012) as the metric for this, which measures error diffusion by computing the highest singular value of the unnormalized confusion matrix after setting the diagonal elements (i.e. true positives) to zero (Koço and Capponi, 2013); lower confusion loss implies more diffusion. In an ideal scenario, all labels should have an equally likely probability of being a mis-prediction. Higher confusion loss implies that, if we consider the mis-classifications of a model, it has a tendency to over-predict a specific label, making it biased towards that class. Table 1 shows that, along with higher improvements in F1 on providing annotations as compared to more QA pairs, we also see a reduction in the confusion loss with annotations, indicating bias mitigation. Further, we also find that for DROP, the false positive rate for the top-3 common labels fell from 47.7% (baseline) to 39.6% (with annotations), while the false positive rate for the bottom-7 increased from 30.4% (baseline) to 36.3% (with annotations), further demonstrating mitigation of bias. The confusion matrices are included in the Appendix.

Qualitative Result
Figure 4 shows a DROP example where the model trained without annotations is not able to determine the right set of events being queried, returning an incorrect response. The model trained with annotations can understand the semantics behind the query terms "first half" and "Cowboys" to arrive at the correct answer.

[Figure 4: DROP example. Question: "How many times did the Cowboys score in the first half?" The passage is an NFL game summary of a Bengals-Cowboys game whose first half includes Nick Folk's 30-yard field goal, Felix Jones' 33-yard TD run, and Tony Romo's 4-yard TD pass to Jason Witten for Dallas, among other scoring plays by both teams.]
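The confusion loss used in Table 1, i.e. the largest singular value of the unnormalized confusion matrix after zeroing its diagonal, can be computed as in this short sketch; the two toy matrices are made-up examples.

```python
import numpy as np

def confusion_loss(conf_matrix):
    """Largest singular value of the confusion matrix after zeroing the
    diagonal (true positives); lower values mean the errors are diffused
    across labels instead of piling onto one over-predicted class."""
    off_diag = np.array(conf_matrix, dtype=float).copy()
    np.fill_diagonal(off_diag, 0.0)
    return np.linalg.svd(off_diag, compute_uv=False)[0]

# Toy 3-label example: errors concentrated on one label vs. spread out.
biased  = np.array([[50, 20,  0], [18, 40,  2], [19,  1, 30]])
diffuse = np.array([[50, 10, 10], [10, 40, 10], [10, 10, 30]])
print(confusion_loss(biased), ">", confusion_loss(diffuse))  # ~26.2 > 20.0
```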
Related Work
Similar to our work, Zaidan et al. (2007) studied the impact of providing explicit supervision via rationales, rather than generating them, for varying fractions of the training set in text classification. In contrast, we study the benefits of such supervision for complex compositional reading comprehension datasets. In the field of computer vision, Donahue and Grauman (2011) collected similar annotations for visual recognition, where crowd-workers highlighted relevant regions in images. Within reading comprehension, various works like HotpotQA (Yang et al., 2018) and CoQA (Reddy et al., 2019) have collected similar reasoning steps for the entire dataset. Our work shows that collecting intermediate annotations for a fraction of the dataset is cost-effective and helps alleviate dataset collection biases to a degree. Another line of work (Ning et al., 2019) explores the cost vs. benefit of collecting full vs. partial annotations for various structured prediction tasks. However, they do not focus on the intermediate reasoning required to learn the task. Our auxiliary training with intermediate annotations is inspired by extensive related work on training models using side information or domain knowledge beyond labels (Mann and McCallum, 2008; Chang et al., 2007; Ganchev et al., 2010; Rocktaschel et al., 2015). Especially relevant is work on supervising models using explanations (Ross et al., 2017), which, similar to our annotations, identify parts of the input that are important for prediction (Lei et al., 2016; Ribeiro et al., 2016).

Conclusion
We show that intermediate annotations are a cost-effective way to not only boost model performance but also alleviate certain unanticipated biases introduced during dataset collection. However, it may be unnecessary to collect these for the entire dataset, and there is a sweet spot that works best depending on the task. We propose a simple semi-supervision technique to expose the model to these annotations. We believe that in the future they can be used more directly to yield better performance gains. We have also released these annotations for the research community at https://github.com/dDua/Intermediate_Annotations.
[Figure 9: Predicted relevant spans for a Quoref question answered correctly with annotations (prediction: "Charles Spencer Cowper") and incorrectly without annotations (prediction: "Lord Palmerston") by XLNet. The passage describes the inheritance of the Sandringham estate, which Motteux bequeathed to Charles Spencer Cowper, the third son of his close friend Emily Lamb, the wife of Lord Palmerston.]
Comparing Gravitation in Flat Space-Time with General Relativity

General relativity (GR) and gravitation in flat space-time (GFST) are covariant theories describing gravitation. The metric of GR is given by the form of the proper time, whereas the metric of GFST is the flat space-time form, which is different from that of the proper time. GR has as its source the matter tensor, and the Einstein tensor describes the gravitational field, whereas the source of GFST is the total energy-momentum including gravitation, and the field is described by a non-linear differential operator of order two in divergence form. The results of the two theories agree for weak gravitational fields to the order of measurable accuracy. It is well known that homogeneous, isotropic, cosmological models of GR start from a point singularity of the universe, the so-called big bang. The density of matter is infinite. Therefore, our observable universe implies an expansion of space, in particular an inflationary expansion in the beginning. This is at present the most accepted model of the universe, although doubts exist because infinities don't exist in physics. GFST starts in the beginning from a homogeneous, isotropic universe with uniformly distributed energy and no matter. In the course of time, matter is created out of energy, where the total energy is conserved. There is no singularity. The space is flat and may be non-expanding.

Introduction
Einstein's general theory of relativity is at present the most accepted theory of gravitation. The theory gives, for weak gravitational fields, agreement with the corresponding experimental results. But the results for homogeneous, isotropic, cosmological models imply difficulties. The universe starts from a point singularity, i.e. from a point with infinite density of matter. The observed universe is very big. Hence, the space of the universe must expand very quickly, which implies the introduction of an inflationary universe in the beginning. GFST has a pseudo-Euclidean geometry, and the proper time is defined similarly to that of general relativity, i.e. space-time and proper time are different from one another. GFST starts from an invariant Lagrangian which gives, by standard methods, the field equations of gravitation. The source is the total energy-momentum tensor including gravitation. The energy-momentum of gravitation is a tensor. The field is described by non-linear differential equations of order two in divergence form. The theory is generally covariant. The gravitational equations, together with the conservation law of the total energy-momentum, give the equations of motion for matter. The application of the theory implies, for weak gravitational fields, the same results as GR to experimental accuracy, e.g. gravitational redshift, deflection of light, perihelion precession, radar time delay, the post-Newtonian approximation, gravitational radiation of a two-body system and the precession of the spin axis of a gyroscope in the orbit of a rotating body. But there are also differences between the results of these two theories. GFST gives non-singular cosmological models, and Birkhoff's theorem doesn't hold. GFST may, e.g., be found in the book [1] and in the cited references. Additionally, non-singular cosmological models are given, e.g., in the articles [2]-[6]. Subsequently, homogeneous, isotropic, cosmological models will be summarized. Let us use the pseudo-Euclidean geometry. The received universe is non-singular under the assumption that the sum of the density parameters is greater than one, e.g.
a little bit greater than one. This implies that the universe may become hot in the course of time. It starts without matter and without radiation, and all the energy is gravitational energy. Matter and radiation emerge from this energy by virtue of the conservation of the total energy. The space is flat and the interpretation of a non-expanding space is natural. But it is also possible to state an expansion of space by a suitable transformation, as a consequence of the general covariance of the equations. For a zero cosmological constant, matter increases for all times, whereas radiation first increases and the universe becomes hot; after that, radiation decreases to zero as time goes to infinity. A short time after the universe has reached the maximal temperature, the production of matter is finished, i.e. the universe appears nearly stationary. Under the assumption of a positive cosmological constant, a certain time after the beginning, matter goes to zero and the universe converges to dark energy as time goes to infinity. Hence, a universe given by GFST appears more natural than that received by GR, which gives a singular solution with infinite densities. The universe of GR starts from a point and therefore space must expand to be in agreement with the observed big universe. The geometry is in general non-Euclidean, but the observed universe implies a flat space. Section 2 contains GFST; Section 3 contains cosmological models; and Section 4 contains the comparison of GFST and GR.

GFST
The theory of GFST is shortly summarized. The metric is the flat space-time given by

(ds)² = η_ij dx^i dx^j, (1)

where (η_ij) is a symmetric tensor. Pseudo-Euclidean geometry has the form

(η_ij) = diag(−1, −1, −1, 1), (2)

where (x^1, x^2, x^3) are the Cartesian coordinates and x^4 = ct. The gravitational field is described by a symmetric tensor (g_ij), written similarly to (1). The proper time τ is defined by

(c dτ)² = g_ij dx^i dx^j. (6)

The Lagrangian of the gravitational field is an invariant quadratic expression in the covariant derivatives of the potentials (g_ij), where the bar denotes the covariant derivative relative to the flat space-time metric (1). The Lagrangian of dark energy is given by the cosmological constant Λ, with κ the gravitational constant. The mixed energy-momentum tensors of gravitation, of dark energy and of the matter of a perfect fluid then follow, where ρ, p and u^i denote the density, pressure and four-velocity of matter, and the normalization of the four-velocity holds by (6). A covariant differential operator of order two is defined; the field equations for the potentials (g_ij) then take divergence form with the total energy-momentum as source. Defining the symmetric energy-momentum tensor, the equations of motion follow in covariant form. In addition to the field equations (13) and the equations of motion (16), the conservation law of the total energy-momentum holds, i.e. the divergence of the total energy-momentum relative to the flat space-time metric vanishes. The field equations of gravitation are formally similar to those of GR, where T^i_j is the energy-momentum without that of gravitation, since the energy-momentum of gravitation is not a tensor in GR; in GR the differential operator is the Einstein tensor, which may give a non-Euclidean geometry. The results of this chapter may be found in the book [1] and in many other articles of the author, e.g. in [5].
Homogeneous, Isotropic, Cosmological Models
In this chapter, GFST is applied to homogeneous, isotropic, cosmological models. The pseudo-Euclidean geometry (1) with (2) is used. The matter tensor is given by a perfect fluid with four-velocity (u^i), where the indices m and r denote matter and radiation. The equations of state for matter (dust) and radiation are p_m = 0 and p_r = ρ_r/3. The potentials are, by virtue of (18) and the homogeneity and isotropy, described by two scalar functions a(t) and h(t), and the four-velocity follows from equations (18) and (6). Let t_0 = 0 be the present time and assume as initial conditions at present a(0) = h(0) = 1, ȧ(0) = H_0 and ḣ(0) = h_0, where the dot denotes the time derivative; H_0 is the Hubble constant and h_0 is a further constant; ρ_m0 and ρ_r0 denote the present densities of matter and radiation. It follows from (16), under the assumption that matter and radiation do not interact, how the densities of matter and radiation evolve with a. The field equations (13) imply, by the use of (21), two nonlinear differential equations, in which the expression L_G/κ is the density of gravitation. The conservation law of the total energy introduces a constant of integration λ. Equations (25), (26) and (27) give, by the use of the initial conditions (23), equation (28); integration of (28) yields (29). Equation (27) gives, for the present time t_0 = 0 and by the use of the initial conditions (23), relation (30). It follows from (27), by the use of the standard definitions of the density parameters of matter, radiation and the cosmological constant, together with a suitable abbreviation, the differential equation (33) for a(t). The initial condition is, by (23), a(0) = 1. The solution of (33) with (30) describes a homogeneous, isotropic, cosmological model by GFST. Relation (31) can be rewritten in a form from which a necessary and sufficient condition to avoid singular solutions of (33) follows, namely that a(t) stays positive for all t ∈ ℝ. Subsequently, this condition is assumed. Then, by virtue of (38) and (41), it follows from (32) that the sum of the density parameters is a little bit greater than one. Hence, a(t) starts from a positive value, decreases to a small positive value, and then increases for all t ∈ ℝ. The proper time from the beginning of the universe till time t follows by integration of relation (6). The differential equation (33a) is rewritten by the use of (30), and hence a differential equation (45) for the function a in terms of the proper time is obtained. This differential equation is, by virtue of (41) and for a not too small function a(t), identical with that of GR for a flat, homogeneous, isotropic universe. Therefore, away from the beginning of the universe, the result for the universe by GFST agrees with that of GR. These results may be found in the book [1] and in the article [5]. The subsequent considerations can be found in the book [1]. We introduce, in addition to the proper time τ, the absolute time t′. This gives for the proper time in the universe the relation (47), in which ||dx|| denotes the Euclidean norm of the vector dx = (dx^1, dx^2, dx^3). Relation (47) implies that the absolute value of the light velocity is equal to the vacuum light velocity c for all times t′. The introduction of the absolute time t′ into the differential equation (45) gives (48). Assume that a light ray is emitted at distance r at time t′_e. The age of the universe since the minimal value of a can then be computed; the age of the universe measured with absolute time is greater than 1/H_0, independent of the density parameters, i.e.
there is no age problem. We will now calculate the redshift of light emitted from a distant object at rest and received by the observer at the present time. It is useful to introduce the absolute time. Assume that an atom at a distant object emits a photon at time t′_e; the energy of the emitted photon then follows. The energy of the photon moving to the observer in the universe is constant by virtue of (47), i.e. by the constant light velocity. The corresponding received frequency then follows, where ν_0 is the frequency emitted at the observer from the same atom; the redshift is given by the ratio of these frequencies. Differentiation of equation (48) yields, by neglecting small expressions, the redshift formula (52). The detailed calculations of formula (52) can be found in the book [1]. Higher-order Taylor expansion gives higher-order redshift approximations.

Differences of Theory and Results of GFST and GR
1) It is worth mentioning that the space of the universe by GFST is also flat by the use of (6) with (21). This is important because the experiment implying flatness of space in GR uses formula (6). This is the result of the flat space-time geometry of GFST. The results for the universe of GFST and GR away from the beginning of the universe agree for a flat space.
2) The metric of GFST is a flat space-time and the space of GFST is flat by the use of (1) and (2). The gravitational field is a tensor of rank two and it is described in flat space-time. The left-hand side of the field equations is a non-linear differential operator of order two and the right-hand side is the total energy-momentum tensor including that of gravitation, which is a tensor in GFST. Proper time is defined by the use of the gravitational field. The metric of GR is identical with the definition of the proper time, which is formally identical with that of GFST. The energy-momentum of gravitation in GR is not a tensor. The left-hand side of the field equations of GR is a linear combination of the Ricci tensor and the right-hand side is the matter tensor. Both gravitational theories are covariant. The theory of GR implies in general a non-Euclidean geometry. Experimental results indicate that our universe is flat.
3) The space of the universe by GFST is, by (1) and (2), non-expanding. Experimental results of Lerner [7] also yield a non-expanding universe. The space of the universe by GR is singular in the beginning, i.e. it starts from a point. The observed universe is very big. Therefore, the space must expand, and perhaps this implies an inflationary universe.
4) The universe received by GFST is non-singular, i.e. all the physical quantities are defined, in contrast to GR, where the universe starts with a singularity in the beginning, i.e. the space consists of a point with infinite density of matter.
5) The redshift is an intrinsic gravitational effect by GFST, whereas GR explains the redshift as a Doppler effect of an expanding universe.
6) Linear perturbation theory of cosmological models by GFST can give, in the matter-dominated universe, a quick increase of the inhomogeneity (see [1], chapter 9.4), which may explain the galaxies, whereas by the use of GR the increase of the inhomogeneity is much too slow.
7) The theory of GFST gives non-expanding cosmological models. Hence, gravitational waves cannot be generated in the beginning. In the beginning of the universe by GR, gravitational waves can be implied by virtue of inflation. Signals from the birth of the universe were claimed to be measured by BICEP2, but shortly after this announcement the result was retracted.
8) Studies of supernovae are used to measure distances in space. It seems that the ancient supernovae aren't as distant as believed. This means that the cosmological constant is smaller than assumed until now. A vanishing cosmological constant (no dark energy) is perhaps not excluded if a modified Hubble law is used, where it is assumed that every object is surrounded by a medium (see [1], chapter 12.4 and article [8]). This gives a new redshift formula.
9) A non-singular, non-expanding universe with a vanishing cosmological constant has already been studied in article [9].
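Away from the beginning, the text states that equation (45) coincides with GR's equation for a flat, homogeneous, isotropic universe; under that assumption, the scale factor can be integrated numerically as in the sketch below. The Hubble constant and density parameters shown are illustrative values only, chosen so that their sum is a little bit greater than one, as assumed above.

```python
import numpy as np

# Explicit-Euler integration of  da/dtau = a * H0 * sqrt(Om/a^3 + Or/a^4 + Ol),
# the flat-universe form that eq. (45) is stated to coincide with away from the
# beginning. All numerical inputs are illustrative assumptions.

def scale_factor(omega_m, omega_r, omega_l, H0=2.2e-18, tau_end=4.5e17, steps=100000):
    a = 1.0                  # normalized to a = 1 at the present time, cf. (23)
    dtau = tau_end / steps
    for _ in range(steps):
        a += dtau * a * H0 * np.sqrt(omega_m / a**3 + omega_r / a**4 + omega_l)
    return a

# Density parameters summing to slightly more than one, as assumed in the text:
print(scale_factor(0.3, 1e-4, 0.7))  # scale factor after ~1/H0 of proper time
```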
TSCMF: Temporal and social collective matrix factorization model for recommender systems

In real-world recommender systems, user preferences are dynamic and typically change over time. Capturing the temporal dynamics of user preferences is essential to designing an efficient personalized recommender system and has recently attracted significant attention. In this paper, we consider that user preferences change individually over time. Moreover, based on the intuition that social influence can affect users' preferences in a recommender system, we propose a Temporal and Social Collective Matrix Factorization model called TSCMF for recommendation. We jointly factorize the users' rating information and social trust information in a collective matrix factorization framework by introducing a joint objective function. We model user dynamics within this framework by learning a transition matrix of user preferences between two successive time periods for each individual user. We present an efficient optimization algorithm based on stochastic gradient descent for solving the objective function. Experiments on a real-world dataset illustrate that the proposed model outperforms the competitive methods. Moreover, the complexity analysis demonstrates that the proposed model can be scaled up to large datasets.

Introduction
Recommender systems are very useful tools for overcoming the information overload problem of users. These systems provide personalized recommendations to a user that he/she might like based on past preferences or observed behavior about one or various items. An essential problem in real-world recommender systems is that users are likely to change their preferences over time. A user's preference dynamics is known in the literature as temporal dynamics (Koren 2010) and may be caused by various reasons. According to Koren (2010), Rafailidis et al. (2017), and Lo et al. (2018), the most important of these reasons are: (i) User experiences: The past interactions of users with items make users like some items and dislike some others. For example, if a user is satisfied with a purchase on an auction website, then he/she will probably continue buying from it in the future. (ii) New items: The appearance of new items may change the focus of users. For example, users usually like to explore new items over time instead of interacting multiple times with the same items. (iii) Social influence: Friends' preferences may affect a user's decisions and change the user's preferences over time. (iv) Item popularity: Popular items may affect a user's interactions, regardless of his/her past preferences. For example, if there is a popular action movie but the user is interested in romantic films, the user may prefer to watch this action movie instead. Modeling the temporal dynamics of user preferences is essential to designing a recommender system (Koren 2010; Shokeen and Rana 2018), as it leads to significant improvements in recommendation accuracy (Zafari et al. 2019; Rana and Jain 2015; Cheng et al. 2015). The need to model the dynamics of user preferences over time in recommender systems poses several essential challenging problems. First of all, because the amount of available data is dramatically reduced in a particular time period, the data sparsity issue (Yusefi Hafshejani et al. 2018) is more intense in this situation (Lo et al. 2018). Moreover, based on the intuition that the time change pattern for each user may differ (Rafailidis and Nanopoulos 2016; Tang et al.
2015), how can the temporal information be incorporated to capture each individual user's preference dynamics? Finally, what is an efficient approach to model the dynamics of user preferences in order to generate more accurate recommendations? For this purpose, in this paper, we present a Temporal and Social Collective Matrix Factorization model called TSCMF. The model captures user preference dynamics based on the collective matrix factorization (CMF) (Singh and Gordon 2008) framework to perform temporal recommendation. CMF is an extension of MF which takes into account side information, leading to more effective latent features. We take into account that user preferences can change individually over time and, based on the intuition that social influence can affect users' preferences in a recommender system, we jointly factorize the users' rating matrix and the social trust matrix by introducing a joint objective function. We adopt the stochastic gradient descent (SGD) method and present an efficient optimization algorithm for solving the objective function. In our model, we assume that user preferences change smoothly (Lo et al. 2018; Tang et al. 2015) and that a user's preferences in the current time period depend on his/her preferences in the previous time period. Therefore, we introduce and learn a transition matrix of user preferences for each individual user to model user dynamics in two successive time periods within CMF. Experimental results on a real-world dataset, Epinions, illustrate that our proposed model outperforms the competitive methods. In addition, the complexity analysis implies that our model can be scaled up to large datasets. The remainder of this paper is structured as follows. The next section presents the related works. Section 3 defines our problem and details our proposed model. Section 4 reports the experimental results. Finally, Section 5 provides the conclusions and future research directions.

Related work
Some studies on capturing the dynamics of user preferences in recommender systems are based on computing user or item neighborhoods. These approaches generally boost recent ratings and penalize older ratings that possibly have less relevance at recommendation time, by employing time windows or a decay function (Vinagre 2012). For instance, Su et al. (2015) and Liu et al. (2010) give more weight to recently rated items and gradually reduce the importance of past rated items in rating prediction using an exponential time decay function. They consider the preference dynamics to be homogeneous for all users, whereas the changes in user preferences may be individual. A similar method was proposed in Cheng and Wang (2020), which takes into account that different users have different degrees of sensitivity to time. However, the primary challenge in these approaches is that it is hard to estimate an appropriate weighting scheme (Rabiu et al. 2020; Zhang 2015). The most widely used technique to implement temporal recommender systems is matrix factorization (MF) (Yin et al. 2014). The MF technique has the advantage of relatively high accuracy and scalability (Lo et al. 2018; Yang et al. 2017). In this technique, each user and item is characterized by a series of features representing latent factors of the users and items in the system.
It decomposes the matrix of users' ratings on items into two low-dimensional matrices, which directly profile users and items in the latent feature space, respectively, and these latent features are later used to make user behavior predictions. TimeSVD++ (Koren 2010) is the first popular MF-based method for modeling user preference dynamics. This model adopts singular value decomposition (SVD), which is the most basic matrix factorization technique (Yang et al. 2014). TimeSVD++ incorporates time-varying rating biases of each item and user into the MF. It assumes that older ratings are less important in rating prediction. The parameters of this method for different aspects and time periods must be learned individually, so it needs considerable effort for parameter tuning (Lo et al. 2018). A temporal MF method to capture the temporal dynamics in each individual user's preferences was proposed in Lo et al. (2018). This model uses both the rating information within a specific time period and the overall rating information to learn the latent feature vector of each user at each time period by introducing a modified SGD algorithm. The method learns a linear model to extract the transition pattern for each user's latent feature vector using Lasso regression. An approach based on multi-task non-negative MF was presented in Ju et al. (2015) that uses a transition matrix to map between the latent features of users in two successive time periods in order to track the temporal dynamics of user preferences. The transition matrix used in this method needs to be fixed, while in practice this matrix is different for each user and each time period. A temporal MF (TMF) approach was proposed in Zhang et al. (2014) that captures the temporal dynamics of user preferences by designing a transition matrix for each user's latent feature vectors between two successive time periods. This approach is then extended to a fully Bayesian treatment called BTMF by introducing priors for the hyperparameters to control the complexity and improve the accuracy of TMF. A dynamic MF approach based on collaborative Kalman filtering was proposed in Sun et al. (2014). This method extends Gaussian probabilistic MF to capture user preference dynamics using a transition matrix of users' features based on a dynamical state-space model. For learning the model parameters from historical users' preferences, it exploits an expectation-maximization (EM) algorithm that uses a Kalman filter in the expectation step. Despite the comprehensiveness of this method, the transition matrix used in it is homogeneous for all users. Moreover, the method is impractical for large datasets due to its run-time performance. The aforementioned methods exploit only a single type of user-item interaction (users' rating information) without any side information. Exploiting the side information of users or items (Sun et al. 2019) besides the users' rating information can help to alleviate the data sparsity problem and thus provides users with better-personalized recommendations (Pan 2016). In this regard, a series of MF-based studies exploit side information in temporal recommendation systems. A method based on MF was proposed in Wu et al. (2018) that fuses ratings, review texts, and the relationships between items while considering the temporal dynamics of user preferences to improve prediction results. The authors use TimeSVD++ as part of the model to capture temporal dynamics. However, rating prediction for new users is difficult in this method.
Moreover, this method assumes that the number of latent factors in ratings is equal to the number of hidden topics in reviews, while, as the authors point out, the number of latent factors is larger than the number of hidden topics. CMF is an effective method that can be employed in recommender systems to simultaneously factorize multiple related matrices such as rating and trust matrices. A temporal CMF method to generate recommendations was proposed in Li and Fu (2017). This work jointly factorizes the multimodal user-item interactions to extract users' temporal patterns. The method introduces a transition matrix of users' preferences between two successive user latent feature matrices. Similarly, a dynamic CMF approach to predict the behavior of users was proposed in Rafailidis et al. (2017), which introduces a transition matrix of users' behaviors. This method models the temporal dynamics between the purchase activity and click response behavior of users. It exploits side information to alleviate the sparsity problem. The transition matrix used in these last two methods is homogeneous for all users, which is a major limitation. Social trust information accumulated in social networks is a rich source of information for addressing the aforementioned sparsity problem (Shokeen and Rana 2018), and many researchers have recently incorporated it into their recommendation models (Guo et al. 2016; Wu et al. 2016). A user is more likely to be affected by users whom he/she trusts. Therefore, the trust relations between users affect users' preferences. Although trust information is also very sparse, especially within a time period, it is complementary to rating information. Taking collective preferences and social trust between users in a social recommendation system as additional input can be helpful in making more accurate and personalized recommendations (Bao et al. 2013). An SVD-based method was presented in Tong et al. (2019) that integrates rating, trust and time information to model user preference dynamics. This method includes time-variant biases for each item and each user. However, in this method, the feature vectors of users are not optimized with temporal information. In Aravkin et al. (2016), a framework was developed that incorporates trust relations into a dynamic MF model to capture user preference dynamics. The method defines a transition matrix of users' preferences, assumes that trust relations among users form a graph at each time period, and considers a regularization term for dynamics that can incorporate known trust relations via the graph Laplacian. This method assumes that the preference dynamics are homogeneous for all users. In Liu et al. (2013), an approach was proposed in which heterogeneous user feedback as well as time and social network information is exploited for more accurate movie recommendation. It proposes a ranking-based MF model for combining both implicit and explicit user feedback, and extends the model to a sequential MF model to enable time-aware parameterization. An approach based on social probabilistic MF was proposed in Bao et al. (2013) which exploits both temporal and social information to predict user preferences in micro-blogging. In this method, an exponential time decay function is employed to relate the users' current latent features and topics to their previous latent features. 
The method treats the importance of all previous time periods, as well as the current one, as the same for all users and assigns the same weight to every user. However, in practice, the importance of previous time periods varies for each user. Some studies exploit tensor factorization (TF) (Frolov and Oseledets 2017; Oh et al. 2019) to model user preference dynamics. In these studies, TF extends MF into a three-dimensional tensor by adding the temporal effects to the model. In Xiong et al. (2010), a movie recommendation method was proposed based on Bayesian probabilistic TF. This method introduces a set of additional time features and adds constraints in the time dimension of the tensor to model the evolution of data over time. Temporal link prediction methods based on TF were proposed in Dunlavy et al. (2011) and Spiegel et al. (2011). In Dunlavy et al. (2011), time-evolving bipartite graphs were employed and several methods were presented based on both matrix and tensor factorizations for predicting future links. In Spiegel et al. (2011), the importance of past user preferences was reduced using a smoothing factor. This method gives all user preferences the same weight at a specific time period, whereas the preference dynamics of each user may vary individually (Rafailidis and Nanopoulos 2016). A temporal recommendation model based on coupled TF was proposed in Rafailidis and Nanopoulos (2016). In this model, the importance of users' past preferences is weighted based on a proposed user preference dynamics rate. The user demographics as side information are coupled with temporal interactions of users in this model. Despite the success of temporal recommendation methods based on TF, solving the tensor decomposition is hard (Lo et al. 2018) and usually leads to very high computing costs in practice (Zou et al. 2015), especially when the tensor is large and sparse (Lo et al. 2018). Different from the aforementioned methods, in the present study, we model the temporal dynamics of user preferences by extending the CMF formulation to jointly factorize two matrices of user-item rating and social trust. Under the assumptions that the time change pattern for each user differs and that user preferences change smoothly, we learn a transition matrix for each individual user to capture user dynamics in two successive time periods. Proposed model In this section, first we describe the problem definition and introduce the notations used throughout the paper. Then we present our TSCMF model. Table 1 presents the important notations used throughout this paper. Problem definition Suppose we have a social recommender system including m users indexed by i = 1, 2, ..., m and n items indexed by j = 1, 2, ..., n. We consider two types of information sources with timestamps: user-item ratings and social trusts between users. Given P predefined time periods indexed by t = 1, 2, ..., P, we define R (t) ∈ R m×n to be the user-item rating matrix in time period t, and R (t) ij indicates the rating given by user i on item j in time period t. The ratings are normally integer values between 0 and R max (e.g., 0 to 5), where 0 denotes that the user has not rated that item in time period t. A higher rating indicates greater satisfaction. In practice, each user rates only a few items and thus R (t) is usually very sparse. 
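To make this setup concrete, the following minimal Python sketch assembles the per-period sparse rating matrices R (t) from timestamped rating tuples; the function name, the half-open period boundaries, and the toy data are illustrative assumptions rather than part of the paper.

```python
import numpy as np
from scipy.sparse import csr_matrix

def build_rating_matrices(ratings, period_edges, m, n):
    """Split timestamped ratings (user, item, rating, time) into P sparse
    user-item matrices R[t], one per time period; unrated entries stay 0."""
    P = len(period_edges) - 1
    R = []
    for t in range(P):
        lo, hi = period_edges[t], period_edges[t + 1]
        rows, cols, vals = [], [], []
        for (i, j, r, ts) in ratings:
            if lo <= ts < hi:          # rating falls inside period t
                rows.append(i)
                cols.append(j)
                vals.append(r)
        R.append(csr_matrix((vals, (rows, cols)), shape=(m, n)))
    return R

# toy usage: 3 users, 4 items, 2 periods split at time 10
ratings = [(0, 1, 4, 3), (1, 2, 5, 7), (2, 0, 3, 12), (0, 3, 2, 15)]
R = build_rating_matrices(ratings, period_edges=[0, 10, 20], m=3, n=4)
print(R[0].toarray())
print(R[1].toarray())
```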
Table 1: R (t): user-item rating matrix in time period t; T (t) ik: trust value between users i and k in time period t; R̂ (t): prediction of R (t) through a prediction algorithm; R̂ (t) ij: predicted rating of user i on item j in time period t; U (t), V (t): latent feature matrices of users and items in time period t; U (t) i, V (t) j: latent feature vectors of user i and item j in time period t; B (t), W (t): latent feature matrices of trusters and trustees in time period t; B (t) i, W (t) k: latent feature vectors of truster i and trustee k in time period t; M (t) i: transition matrix between the user latent feature vectors U (t−1) i and U (t) i; I R(t) ij: indicator function that takes 1 if user i rated item j in time period t, and 0 otherwise; I T(t) ik: indicator function that takes 1 if user k is trusted by user i in time period t, and 0 otherwise; N R, N T: numbers of nonzero entries in R (t) and T (t). In social recommender systems, a user can not only rate items but also specify other users as trusted friends. Let T (t) ∈ R m×m be the user-user trust matrix in time period t, where T (t) ik ∈ [0, 1] denotes the extent to which user i trusts user k in time period t. T (t) ik = 1 indicates that user i extremely trusts user k in time period t and T (t) ik = 0 denotes that user i does not trust k in this time period. Based on the intuitions that users' preferences change individually over time and that social influence can affect users' preferences in a recommendation system, our goal is to provide a model that predicts R (t) by capturing the preference dynamics of each individual user through integrating the rating and trust matrices. Temporal and social collective matrix factorization (TSCMF) In this section, we first formulate the objective function of our TSCMF model to capture the user preference dynamics based on CMF for performing temporal recommendation. We then devise an optimization algorithm for solving the objective function. Finally, we analyze the complexity of our model. Figure 1 shows the framework of the proposed TSCMF model. Objective function As mentioned before, when a user is rating, the existing ratings of users whom he/she trusts will more likely affect his/her rating. Based on this intuition, as well as considering the temporal dynamics of user preferences, we present an approach to fuse the users' rating matrix and social trust matrix under a CMF framework by considering the temporal information to model each individual user's preference dynamics. The standard CMF ignores temporal dynamics and can only exploit all previous data for model training. The old data may not be useful and may even have a negative impact on making recommendations at the current time, since user preferences might change dramatically over a long period of time (Li and Fu 2017; De Pessemier et al. 2010). Therefore, unlike the standard CMF, we do not exploit the training data from all previous time periods. However, since users' rating information at a time period is very sparse, the social trust information that we use beside the users' rating information can alleviate the sparsity problem. Let U (t) ∈ R m×d and V (t) ∈ R n×d be the latent feature matrices of users and items in time period t, respectively, with row vectors U (t) i and V (t) j. Also, let B (t) ∈ R m×d and W (t) ∈ R m×d be the latent feature matrices of trusters and trustees in time period t, respectively, with row vectors B (t) i and W (t) k. 
Since the users in the rating matrix and the trusters in the trust matrix are the same (Yang et al. 2017; Guo et al. 2016), based on CMF we jointly factorize these matrices by associating them through a shared common user latent feature space. We consider the user feature matrix U (t) as the latent space commonly shared by R (t) and T (t). Therefore, every vector U (t) i simultaneously characterizes how user i rates items and also how the same user trusts others in time period t. In addition, without loss of generality, similar to Yang et al. (2017), Yu et al. (2018), and Jamali and Ester (2010), we map the raw ratings R (t) ij to the interval [0,1] by adopting the function f (x) = x/R max. Also, we exploit the logistic function g (x) = 1/(1 + exp(−x)) to bound the inner product of latent feature vectors within the range [0,1]. Thus, we formulate the objective function of CMF for time period t, denoted (1), in which the first two sum terms represent the approximation errors. I R (t) ij and I T (t) ik are indicator functions; I R (t) ij takes 1 if user i rated item j in time period t, and 0 otherwise. Also, I T (t) ik takes 1 if user i trusted user k in time period t, and 0 otherwise. The parameter λ T controls how much a user's trusters influence his/her preferences. The last three terms in (1) are regularizations to avoid overfitting. λ is the regularization parameter and ‖·‖ 2 F denotes the squared Frobenius norm, i.e., the sum of the squared entries of a matrix. In practice, user preferences change smoothly over time (Lo et al. 2018; Tang et al. 2015; Li and Fu 2017); therefore, the users' latent features should not change significantly within a short time period. Based on this intuition, we assume that the users' latent features in time period t (t > 1) have a temporal dependence on the users' latent features in time period t-1. We introduce a transition matrix M (t) i ∈ R d×d between the user latent feature vectors U (t−1) i and U (t) i in two successive time periods t-1 and t for each user i. The transition matrix M (t) i captures the mapping between the previous user latent feature vector U (t−1) i and the current user latent feature vector U (t) i for user i. We add a temporal smoothness term to (1) to account for the temporal dynamics in user preferences, penalizing the difference between U (t) i and the mapped previous vector U (t−1) i M (t) i for each user i. Rewriting the objective function in (1) with this term yields the final objective (3), in which the third term, with its regularization parameter λ 1, is the smoothness regularization based on the intuition that user preferences should change smoothly over time. The last regularization term, the sum of ‖M (t) i ‖ 2 F over all m users, is used to control the model complexity, and λ 2 is its regularization parameter. We let λ 1 = λ 2 in our implementation for the sake of simplicity. Choosing the proper length of the time period is critical to the performance of our model. We study its impact on recommendation accuracy in Section 4.7. Optimization algorithm The objective function L in (3) is not jointly convex in all variables simultaneously, but L is convex with respect to each variable separately. Therefore, we can obtain a local minimum of L using the SGD method. SGD has recently become very popular for non-convex optimization problems (Sidiropoulos et al. 2017). It usually has a very good convergence property (Li and Fu 2017). Each variable is updated by taking a step in the direction of the negative gradient of L, scaled by the learning rate η. We derive the gradients of L with respect to each variable, in which g′(x) = exp(−x)/(1 + exp(−x))^2 is the derivative of the logistic function g(x). The pseudocode of our proposed TSCMF model is presented in Algorithm 1. First, the raw ratings R (t) and R (t−1) are mapped to the interval [0,1] in line 1. 
Then, in line 2, the transition matrix M (t) i for each user i is initialized by setting M (t) i = I, where I is a d × d identity matrix. Also, in line 3 the latent feature matrices U (t), V (t) and W (t) are initialized with small random values. In line 4, we perform the MF on R (t−1) by applying the LIBMF library (Chin et al. 2016) to compute the user latent feature matrix U (t−1). In our iterative optimization algorithm in lines 5-12, after selecting a pair of random entries R (t) ij and T (t) ik, the latent feature vectors and the transition matrix M (t) i are updated using (4)-(7). In line 11, the objective function L in (3) is calculated based on the updated variables. The algorithm repeats until L has converged or the maximum number of iterations has been reached. Convergence is achieved when the change of L between the current and the previous iteration is smaller than a predefined convergence threshold. In our implementation, we set the convergence threshold to 10 −6 and the maximum number of iterations to 10 5. Finally, in lines 13-15, the predicted rating matrix R̂ (t), the output of the algorithm, is computed. Complexity analysis The main computational cost of learning our model is to evaluate the objective function L and its gradients with respect to the variables. The computational complexity of evaluating the objective function L is O (dN R + dN T), where N R and N T are the numbers of nonzero entries in the matrices R (t) and T (t), respectively. The number of latent features d is fixed. The cost of calculating the gradients of L with respect to the individual variables does not exceed this order. Therefore, the overall computational complexity for each iteration is O (dN R + dN T), which is linear with respect to the number of nonzero entries in the rating and trust matrices R (t) and T (t) (see the illustrative sketch below). Thus, our model can be scaled to large datasets with millions of users and items. Dataset and evaluation methodology Epinions is a popular product review site on which users can assign numerical ratings on a 1-5 scale and review items. An item may be a product or a service. In addition, Epinions provides a social network with trust relations where users can add other users to their trust networks. We conduct experiments on the Epinions dataset (Tang 2019). This dataset contains rating information, social trust relations, and temporal information for both ratings and trust relations, which makes it ideal for our experiments. The Epinions dataset used in our experiments contains 22,166 users who have assigned ratings to at least one of a total of 296,277 items. The total numbers of ratings and trust relations are 922,267 and 300,548, respectively. The rating data span from July 5, 1999 to May 8, 2011. The whole dataset was split into 11 time periods in chronological order. Since the temporal information about the trust relations before January 11, 2001 is not available, the first time period contains the data before January 11, 2001 and the last time period covers data after January 11, 2010. Each of the other time periods contains data for one year. For example, the second time period contains data from January 12, 2001 to January 11, 2002. We use time-dependent cross-validation based on an increasing time window (Campos et al. 2014) as the evaluation methodology. This method ensures that time dependencies between data are held in each train-test set pair. Based on this method, the data in each time period (except the first time period) are considered as the test set and all data prior to that time period as the training set. 
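As a rough illustration of the training procedure sketched in Algorithm 1, the following Python sketch runs SGD over the observed rating and trust entries of one time period. It is a minimal sketch under stated assumptions: the gradient expressions follow a plausible reconstruction of the objective (shared user factors for ratings and trust, logistic link g, a smoothness term tying U (t) i to U (t−1) i M (t) i, and λ1 = λ2), not the authors' exact equations (1)-(7), and all names, initialization details, and the dense-matrix representation are illustrative.

```python
import numpy as np

def g(x):          # logistic link bounding predictions to (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def dg(x):         # derivative of the logistic link
    ex = np.exp(-x)
    return ex / (1.0 + ex) ** 2

def objective(R, T, U, V, W, M, U_prev, lam, lam_T, lam_1):
    """Plausible reconstruction of the joint objective for one time period."""
    pred_R, pred_T = g(U @ V.T), g(U @ W.T)
    smooth = sum(np.sum((U[i] - U_prev[i] @ M[i]) ** 2) for i in range(U.shape[0]))
    return (0.5 * np.sum(((R - pred_R) ** 2)[R > 0])
            + 0.5 * lam_T * np.sum(((T - pred_T) ** 2)[T > 0])
            + 0.5 * lam_1 * (smooth + np.sum(M ** 2))
            + 0.5 * lam * (np.sum(U ** 2) + np.sum(V ** 2) + np.sum(W ** 2)))

def train_tscmf(R, T, U_prev, d=10, lam=0.001, lam_T=5.0, lam_1=0.001,
                eta=0.003, max_iter=200, tol=1e-6, seed=0):
    """SGD for one period t: R (m x n ratings in [0,1], 0 = missing),
    T (m x m trust in [0,1], 0 = missing), U_prev = user factors at t-1."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    U = 0.01 * rng.standard_normal((m, d))
    V = 0.01 * rng.standard_normal((n, d))
    W = 0.01 * rng.standard_normal((m, d))
    M = np.stack([np.eye(d) for _ in range(m)])   # one transition matrix per user
    r_idx = np.argwhere(R > 0)                    # observed rating entries
    t_idx = np.argwhere(T > 0)                    # observed trust entries
    prev_loss = np.inf
    for _ in range(max_iter):
        for i, j in r_idx[rng.permutation(len(r_idx))]:
            x = U[i] @ V[j]
            e = g(x) - R[i, j]
            gU = e * dg(x) * V[j] + lam * U[i] + lam_1 * (U[i] - U_prev[i] @ M[i])
            gV = e * dg(x) * U[i] + lam * V[j]
            gM = lam_1 * (-np.outer(U_prev[i], U[i] - U_prev[i] @ M[i]) + M[i])
            U[i] -= eta * gU; V[j] -= eta * gV; M[i] -= eta * gM
        for i, k in t_idx[rng.permutation(len(t_idx))]:
            x = U[i] @ W[k]
            e = g(x) - T[i, k]
            U[i] -= eta * (lam_T * e * dg(x) * W[k] + lam * U[i])
            W[k] -= eta * (lam_T * e * dg(x) * U[i] + lam * W[k])
        loss = objective(R, T, U, V, W, M, U_prev, lam, lam_T, lam_1)
        if abs(prev_loss - loss) < tol:           # stop once L has converged
            break
        prev_loss = loss
    return g(U @ V.T)                             # predicted ratings in [0, 1]

# toy usage with dense 0-filled matrices (0 = unobserved), ratings pre-scaled to [0, 1]
rng = np.random.default_rng(1)
R = np.zeros((5, 8)); R[rng.integers(0, 5, 10), rng.integers(0, 8, 10)] = rng.uniform(0.2, 1.0, 10)
T = np.zeros((5, 5)); T[rng.integers(0, 5, 6), rng.integers(0, 5, 6)] = 1.0
U_prev = 0.01 * rng.standard_normal((5, 10))
R_hat = train_tscmf(R, T, U_prev)
```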
This procedure yields 10 different train-test splits in total. Finally, the average results over the test sets are reported. We use the threshold-based relevant item condition (Campos et al. 2014) to determine the favorite items of each user. Based on this condition, the items in the user's test set rated higher than or equal to a threshold value are considered favorite items. Accordingly, similar to Yang et al. (2017), we consider items in the user's test set with ratings higher than or equal to 4 as his/her favorite items. We conduct all the experiments using MATLAB 2016a on a Windows 10 PC with an Intel Core i5 2.53 GHz processor and 8 GB memory. Evaluation metrics We adopt the two most popular rating prediction evaluation metrics, i.e., Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) (Yang et al. 2014), to evaluate the rating accuracy of our proposed model in comparison with other methods. MAE is the average of the absolute differences between the real and predicted ratings over the test set, and RMSE is the square root of the average of the squared differences, where R test is the set of ratings in the test set, r ij is the real rating of user i on item j, and r̂ ij is the predicted rating of user i on item j. Lower MAE and RMSE values indicate better predictive accuracy. In addition, we use the metrics Recall@K (R@K for short), Precision@K (P@K for short), and F1@K (Yang et al. 2017) to assess the quality of the top-K recommendations. R@K is the fraction of a user's favorite items that appear in the top-K recommendation list, P@K is the fraction of the top-K recommended items that are favorite items, and F1@K is their harmonic mean, averaged over users, where F av i is the set of favorite items of user i in the test set and Rec i is the set of top-K recommended items for user i, which is generated by selecting the K items with the highest predicted ratings (a small computational sketch of these metrics appears after the list of comparison methods below). Comparison methods We compare our TSCMF model with the following approaches:
- Probabilistic Matrix Factorization (PMF) (Salakhutdinov and Mnih 2008): This method is the baseline MF approach. It does not consider the temporal dynamics.
- Collective Matrix Factorization (CMF) (Singh and Gordon 2008): This method jointly factorizes two matrices that share one-side information and does not consider the temporal dynamics. We use the user-item rating and social trust matrices in this method. CMF is the basis of our proposed model.
- TimeSVD++ (Koren 2010): This method is a baseline for modeling the user preference dynamics. It incorporates the time-varying rating biases of each item and user into MF and generates the recommendations.
- Bayesian Temporal Matrix Factorization (BTMF) (Zhang et al. 2014): This is a Bayesian temporal MF approach that captures the temporal dynamics of user preferences by learning a transition matrix for the user latent feature vectors between two successive time periods.
- Dynamic Multi-Task Non-Negative Matrix Factorization (DMNMF) (Ju et al. 2015): This method models the user preference dynamics by fusing multi-task non-negative MF and a transition matrix of users' latent features.
- Temporal Matrix Factorization (TMF) (Lo et al. 2018): This method models the user preference dynamics by extracting a transition pattern for each user's latent feature vector.
- Dynamic Matrix Factorization with Social Influence (Aravkin et al. 2016): This method incorporates trust relations into a dynamic MF model to capture user preference dynamics. It introduces a transition matrix of users' preferences and assumes that trust relations among users form a graph at each time period. To facilitate comparison, we refer to this model as DMF.
- TimeTrustSVD (Tong et al. 2019): This method integrates rating, trust and time information. It adopts the time-variant biases for each item and each user into the model to capture temporal dynamics of user preferences. 
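Below is the small computational sketch of the evaluation metrics referred to above, using the standard definitions of MAE, RMSE, and user-averaged top-K precision, recall, and F1; the data structures and function names are illustrative assumptions.

```python
import numpy as np

def mae_rmse(test_ratings, predict):
    """test_ratings: list of (user, item, true rating); predict(i, j) -> predicted rating."""
    errs = np.array([predict(i, j) - r for (i, j, r) in test_ratings])
    return np.mean(np.abs(errs)), np.sqrt(np.mean(errs ** 2))

def precision_recall_f1_at_k(favorites, recommended, k):
    """favorites: dict user -> set of favorite items (rating >= 4 in the test set);
    recommended: dict user -> ranked list of items; metrics are averaged over users."""
    precisions, recalls = [], []
    for u, fav in favorites.items():
        if not fav:
            continue
        top_k = set(recommended.get(u, [])[:k])
        hits = len(top_k & fav)
        precisions.append(hits / k)
        recalls.append(hits / len(fav))
    p, r = float(np.mean(precisions)), float(np.mean(recalls))
    f1 = 2 * p * r / (p + r) if (p + r) > 0 else 0.0
    return p, r, f1
```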
The PMF, TimeSVD++, BTMF, DMNMF, and TMF methods exploit only the user-item rating matrix without any side information. Parameter settings The optimal parameters for each method are determined by cross-validation. Accordingly, we set the learning rate η to 0.001 in PMF and 0.003 in TimeSVD++, CMF, TMF, TimeTrustSVD, and TSCMF. We also set υ 0 = d, β 0 = 2, W 0 = Z 0 = I, μ 0 = 0 for BTMF, α = 0.6 in CMF, λ = 10 −2 in DMF, λ T = 0.8, and λ T = 5 in TSCMF. For a fair comparison, we fix the dimension of the latent feature vectors to 10 in all compared methods. In addition, we set the regularization parameters to 0.001 in all our experiments. Experimental results The performance of the compared methods in terms of MAE and RMSE on the Epinions dataset is shown in Table 2. We observe that PMF performs worse than the other methods. There are significant differences in terms of both MAE and RMSE between PMF and the other methods. This is because PMF does not consider the temporal dynamics of user preferences and also does not exploit any side information such as trust relations. The results show that the proposed TSCMF method has the best performance in terms of both MAE and RMSE among the compared methods. The improvements of TSCMF over the competitive methods indicate that our model can significantly improve the accuracy of rating prediction. The performance of the compared methods in terms of R@K, P@K, and F1@K (with K=5, 10) is shown in Table 3. We observe that the temporal methods achieve significantly higher R@K, P@K, and F1@K than PMF. This implies that considering the temporal dynamics of user preferences is useful for improving the recommendations. From Table 3, we can see that the proposed TSCMF has the best performance in terms of R@K, P@K, and F1@K among the compared methods for both values of K. In comparison with the other temporal competitors, the results indicate that our TSCMF method can better capture the temporal dynamics of user preferences. We believe that the transition matrix introduced in our model is a key element that contributes to this improvement. Compared to the transition matrix used in DMNMF, BTMF, TMF, and DMF, this matrix is dynamic and is trained individually for each user. From Tables 2 and 3, we can observe that CMF, which uses both ratings and trust information and does not consider the temporal dynamics, outperforms the temporal methods TimeSVD++, DMNMF, and TMF. This finding indicates that, regardless of temporal information, incorporating the social trust relations is effective in improving recommendation accuracy. On the other hand, we can see that the temporal methods that use both ratings and trust information (i.e., DMF, TimeTrustSVD, and our TSCMF method) perform better than CMF. The better results obtained for these methods than for CMF imply that temporal dynamics and trust relations could be complementary to each other in boosting the accuracy of recommendations. The superiority of TSCMF over the competitive methods indicates that the latent features learned from the previous time period are helpful. Also, capturing the dynamics of user preferences in our model, given the fact that user preferences change individually over time, improves the recommendation accuracy. In order to evaluate the efficiency of our TSCMF model, we compare the running time of our model with that of the other methods. Table 4 reports the experimental results, in seconds. We can see that PMF has the lowest running time. 
This is because PMF only exploits ratings to learn latent features and also does not consider the temporal dynamics. The trust-based recommendation methods CMF, DMF, and TimeTrustSVD have a higher running time than the other methods, which is mainly due to the use of trust information in these methods. Among the trust-based methods, our TSCMF method outperforms the others in terms of running time. Also, compared to the methods that do not exploit any trust information, the running time of TSCMF is lower than that of BTMF and DMNMF. The main reason is that TSCMF only uses the data of the previous time period to learn latent features, and thus the running time is reduced. We notice from Table 2 that the relative improvement of TSCMF over BTMF in MAE is around 6.38%. Comparing the results of Tables 2 and 3, we observe that when the performance is improved from 0.9722 to 0.9102 with respect to MAE, more than 10 percent relative improvement in precision is achieved. Since the running time of the proposed TSCMF method is lower than that of BTMF, this amount of improvement in the quality of the recommendations can be valuable. Impact of parameter λ T The parameter λ T plays an important role in our TSCMF model by controlling the impact of social trust on users' preferences. Larger values of λ T indicate more influence of the social trust information on users' preferences. To assess how different values of λ T affect the final recommendation accuracy, we set λ T to 0.1, 0.5, 1, 2, 5, 10, and 20 in our model. We perform this assessment for each of the 10 train-test splits. Figure 2 presents the average MAE and RMSE of our model for different values of λ T. As can be seen, λ T affects the recommendation results dramatically, suggesting that fusing the users' rating matrix and social trust matrix can help to improve the recommendation accuracy. As λ T increases, the average MAE and RMSE values decrease at first, indicating that the recommendation accuracy increases. However, when λ T exceeds a certain threshold, the average MAE and RMSE values increase. These findings demonstrate that merely exploiting the user-item rating matrix or merely exploiting the social trust information cannot generate better results than appropriately fusing these two resources together in our model. As shown in Fig. 2, TSCMF achieves its best results for λ T = 5. Impact of the length of the time period Choosing the optimal length of the time period is critical in temporal models (Li and Fu 2017), and usually depends on the application of the recommender system (Rafailidis and Nanopoulos 2016). For example, in a news recommender system, users' interest in specific news topics may last only a few days, while in a movie recommender system, users' preferences for movies may change slowly over time. Therefore, choosing a shorter time period may be more appropriate for capturing users' preferences for news than for movies (Sahoo et al. 2012). In such a situation, choosing too long a time period may miss changes in the behavior of users within that time period. In order to study the effect of the time period length on the methods' performance, we only consider the methods that incorporate the temporal dynamics of user preferences into their models. Since the trust information in the Epinions dataset used in our experiments is available only at a yearly granularity, the shortest time period length that we select is 1 year. Figure 3 shows the performance for three different lengths of the time period in terms of average MAE and RMSE. 
From this figure, we see that all methods achieve their best results when the length of the time period is set to 1 year. As the length of the time period increases, the performance of all methods decreases. Another interesting finding in this regard is that for all three examined time period lengths, our model outperforms the other compared methods. Conclusion In this paper, we proposed the Temporal and Social Collective Matrix Factorization (TSCMF) model to capture the temporal dynamics of user preferences for temporal recommendation. We jointly factorized the users' rating information and social trust information in a collective matrix factorization framework by introducing a joint objective function. We assumed that the user preferences in the current time period have a temporal dependence on the user preferences in the previous time period, and we modeled user dynamics within the collective matrix factorization framework by learning a transition matrix of user preferences between two successive time periods for each individual user. We presented an efficient optimization algorithm based on the stochastic gradient descent method for solving the objective function. The experiments on a real-world dataset collected from a popular product review website, i.e., Epinions, show that our proposed model outperforms the other compared methods. In addition, the proposed model can be scaled to large datasets with millions of users and items. Our findings strengthen the idea that modeling the dynamics of user preferences based on the fact that changes in user preferences vary individually leads to improvements in recommendation accuracy and, consequently, user satisfaction. In addition, temporal dynamics and trust relations could be complementary to each other in the development of social recommender systems. The proposed method can help to improve the quality of social recommender systems. However, in some social recommender systems, trust information is not explicitly available. For future work, we plan to extract implicit trust based on users' interactions whenever explicit trust is not available and to use it in our model. We also want to extend the model to address the problem of cold-start users who do not have any rating or any trust relation in either the previous or the current time period (i.e., new users). One possible approach to deal with this problem is exploiting additional side information such as users' attributes. In some social recommender systems, users can also express distrust toward other users. Additionally, we want to exploit distrust relations among users, together with temporal information, in addition to trust relations in our model for generating better-personalized recommendations. Compliance with Ethical Standards Conflict of interests The authors declare that they have no conflict of interest. Funding Information Open Access funding provided by Projekt DEAL. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. 
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
9,626.4
2020-08-15T00:00:00.000
[ "Computer Science" ]
Epigenome-wide change and variation in DNA methylation from birth to late adolescence DNA methylation (DNAm) is known to play a pivotal role in childhood health and development, but a comprehensive characterization of genome-wide DNAm trajectories across this age period is currently lacking. We have therefore performed a series of epigenome-wide association studies in 5,019 blood samples collected at multiple time-points from birth to late adolescence from 2,348 participants of two large independent cohorts. DNAm profiles of autosomal CpG sites (CpGs) were generated using the Illumina Infinium HumanMethylation450 BeadChip. Change over time was widespread, observed at over one-half (53%) of CpGs. In most cases DNAm was decreasing (36% of CpGs). Inter-individual variation in linear trajectories was similarly widespread (27% of CpGs). Evidence for nonlinear change and inter-individual variation in nonlinear trajectories was somewhat less common (11% and 8% of CpGs, respectively). Very little inter-individual variation in change was explained by sex differences (0.4% of CpGs) even though sex-specific DNAm was observed at 5% of CpGs. DNAm trajectories were distributed non-randomly across the genome. For example, CpGs with decreasing DNAm were enriched in gene bodies and enhancers and were annotated to genes enriched in immune-developmental functions. By contrast, CpGs with increasing DNAm were enriched in promoter regions and annotated to genes enriched in neurodevelopmental functions. These findings depict a methylome undergoing widespread and often nonlinear change throughout childhood. They support a developmental role for DNA methylation that extends beyond birth into late adolescence and has implications for understanding life-long health and disease. DNAm trajectories can be visualized at http://epidelta.mrcieu.ac.uk. Introduction DNA methylation (DNAm), an epigenetic process whereby DNA is modified by the addition of methyl groups, has gained increasing attention over the past few decades, due to its pivotal role in development. In utero, DNAm is involved in a range of essential processes including cell differentiation 1-3 , X-chromosome inactivation 4 and fetal growth 5 . Its role extends well beyond birth, e.g. by maintaining cell type identity and genome stability [6][7][8] , responding to environmental exposures [9][10][11] , and contributing to immune 12 and neural development 13 . Since it is influenced by both genetic and environmental factors 14,15 , DNAm has also emerged as a key mechanism of interest for understanding the gene-environment interplay in normal ageing and disease development. Numerous studies have identified strong associations between DNAm and age. Most have relied on cross-sectional data [16][17][18] , but a few have utilized longitudinal measurements of DNAm within individuals [19][20][21][22][23] . Longitudinal measurements allow one to distinguish intra-individual change from inter-individual differences in change, thereby greatly improving the power to detect change over time and to identify differences between individuals 24 . Identifying and characterizing CpGs for which DNAm changes differently over time between individuals (i.e. inter-individual variation in change) is a necessary step in identifying genetic and environmental influences on the methylome as well as their potential impact on health outcomes 25 . 
Moreover, longitudinal designs facilitate the study of nonlinear trajectories 26,27 , which might help to identify sensitive periods for DNAm change in development. To date, the largest epigenome-wide longitudinal study on DNAm included 385 elderly individuals who were followed up to five times over a maximum period of 18 years, identifying DNAm change at 1,316 CpG (Cytosine-phosphate-Guanine) sites 19 and inter-individual variation in change at 570 CpGs 20 . Yet, little is known about DNAm trajectories across early development, as existing studies of childhood DNAm have typically been limited by small sample sizes 21,23 , short time periods 22,28 , or a focus on specific CpGs in relation to maternal smoking 29 , birthweight 30 , or maternal BMI 31 . In the current study, we aim to provide a benchmark of typical epigenome-wide age-related DNAm trajectories within individuals, spanning the first two decades of life. This study combines repeated measurements of DNAm at nearly half a million CpG sites across the genome from two large population-based cohorts, the Generation R Study and the Avon Longitudinal Study of Parents and Children (ALSPAC). After the DNAm datasets of the two cohorts underwent joint functional normalization (see Supplementary Figure 1 for distributions of mean DNAm levels), within-cohort stability of DNAm at birth and 6 or 7 years (in Generation R and ALSPAC, respectively) was compared. Stability of DNAm at individual CpG sites (437,864 autosomal sites) was estimated in three ways: relative concordance using Spearman correlations between time points, absolute concordance using intraclass correlations between time points (children with data for both time points: n Generation R=476, n ALSPAC=826), and change over time using change estimates from a linear mixed model (Model 1, online Methods) applied within each cohort (children with data for at least one of the two time-points: n Generation R=1,394, n ALSPAC=944). Estimates of all stability measures for both cohorts are depicted in Figure 2. Next, agreement of these stability estimates between the two cohorts was estimated with correlations between the datasets. The Spearman correlation of the relative concordance was ρ=0.62, the Pearson correlation of the absolute concordance was ρ=0.60, and the Pearson correlation of the change estimates was r=0.86, indicating strong agreement between datasets. Based on these results the two datasets were joined to form one set with four different time-points of DNAm (birth, age 6/7 years, 10 years, 17 years). Linear DNAm change Linear change over time (Model 1) was widespread, and the estimated change in DNAm per year at individual CpGs was generally small in magnitude (Supplementary Table 2). From this it follows that typically in (cord-/peripheral) blood tissue, DNAm levels for CpGs do not change from a fully unmethylated to fully methylated state, or vice versa, over the course of 18 years. Further, we observed substantial inter-individual variation in linear DNAm changes over time at 27.4% of all CpGs (i.e. random slope variance was greater than zero at the Bonferroni-corrected threshold P<1x10 -07 ; Figure 3c). On average, this variation accounted for 2.7% (SD=1.5%) of all estimated inter-individual variation (for intercept, age, batch, and residual) at these CpGs. At 17.3% of all CpGs, we observed both change and inter-individual variation in change. Nonlinear DNAm change Model 2 (see online Methods) was identical to Model 1 but permitted slope changes at ages 6 and 9 years to test for nonlinear DNAm trajectories. At 11.0% of CpGs a nonlinear trajectory was detected. Specifically, at 4.8% of all CpGs, DNAm increased from birth and remained stable from age 6 years onward (Positive-Neutral; Figure 3d). 
Second, at 3.1% of all CpGs, DNAm decreased from birth and then remained stable from age 6 years onward (Negative-Neutral; Figure 3e). The remaining 3.0% of all CpGs followed other nonlinear trajectories (e.g. Figure 3f), with each trajectory observed in <1.0% of all CpGs. Overall, linear and/or nonlinear changes in Model 1 or 2 were observed at 52.6% of CpGs (Figure 3), indicating that most nonlinear patterns were also detected as linear patterns in Model 1. Inter-individual differences in change (i.e. random variance in slopes) from birth onward were detected at 3.4% of all sites (Figure 3g), inter-individual differences in slope change at 6 years at 0.2% (Figure 3h), and inter-individual differences in slope change at 9 years at 8.2% of CpGs (Figure 3i). Inter-individual differences in slope (change) at each time-point were detected more often at CpGs with an increasing rather than decreasing overall DNAm change in Model 1 (P=2.37x10 -144 ). Last, both Positive-Neutral and Negative-Neutral changes coincided more often with inter-individual variation from birth (P<9.88x10 -324 ). Inter-individual differences in change detected by either Model 1 or 2 were observed at 27.9% of CpGs. In total, Models 1 and 2 detected age-related change (linear or nonlinear) or inter-individual differences in change at 62.8% of all CpG sites (Figure 3). Sex differences in longitudinal DNAm and DNAm change According to Model 3 (online Methods), sex differences in DNAm were present at 4.9% of (autosomal) CpGs (Figure 3). Specifically, stable longitudinal sex differences (main sex effects) were observed at 4.8% of all (autosomal) CpGs (Figure 3j), and sex differences in DNAm change (sex by age interaction effects) were found at 0.4% of all (autosomal) CpGs (Figure 3k). At sites with stable sex differences, DNAm levels were higher in girls at 3.6% (Figure 3j) and lower at 1.2% of CpG sites. DNAm at sites with higher DNAm in girls tended to increase over time, whereas DNAm at sites with higher DNAm in boys tended to decrease (P=4.20x10 -205 ). Most commonly (at 0.2% of all CpGs), DNAm was higher in girls at birth but DNAm in boys increased at a higher rate. Both CpGs with stable sex differences and those with sex differences in DNAm change were less likely to show inter-individual variation than other sites (20.8% versus 27.5% and 18.1% versus 27.3%; P=5.36x10 -111 and P=7.57x10 -18 ). Finally, CpGs with stable sex differences or sex differences in DNAm change detected in Model 3 were much more likely to follow an overall Positive-Neutral trajectory of DNAm change detected in Model 2 than other CpG sites were (24.2% of CpGs with stable sex differences followed a Positive-Neutral trajectory versus 3.8% of other CpGs, and 53.9% of CpGs with sex differences in DNAm change followed a Positive-Neutral trajectory versus 4.6% of other CpGs; P<9.88x10 -324 , P<9.88x10 -324 ; Figure 3l). Albeit less prominently so, CpGs with stable sex differences or sex differences in DNAm change also more often followed a Negative-Neutral trajectory than other CpGs did (stable sex differences: 5.0% versus 3.0%, P=5.43x10 -62 ; sex differences in DNAm change: 7.7% versus 3.1%, P<7.11x10 -28 ). Follow-up analyses Follow-up analyses were performed to understand how different types of age-related DNAm trajectories are distributed across the genome (Supplementary Tables 3-5). All reported enrichments have significance below a Bonferroni-corrected threshold of P<4.46x10 -04 , corrected for the number of chi-square tests (n=112). 
We further report enrichment of Gene Ontology (GO) pathways (nominal P<0.05) for genes annotated to CpG sites in each trajectory (Supplementary Tables 5-7). Last, we study enrichment of age-related DNAm trajectories in the reported hits of different EWASs (Figure 6). All reported EWAS enrichments are below a Bonferroni-corrected threshold of P<2.16x10 -04 , corrected for the number of Fisher's exact tests (n=231; Supplementary Table 8). Patterns of DNAm change and CpG location CpG sites with different patterns of DNAm change were characterized by gene-associated regions, CpG island-associated regions, and enhancer elements. Although many exceptions exist, low levels of DNAm in the promoter area but high levels of DNAm in the gene body are generally associated with increased gene transcription 32,33 . CpGs annotated to TSS200 regions more often showed an overall DNAm increase (Model 1) than other CpGs (19.0% versus 15.6%), whereas CpGs annotated to the gene body more often showed an overall DNAm decrease than other sites (38.8% versus 33.7%). TSS200 CpGs showed less inter-individual variation in overall DNAm change than other sites (22.2% versus 28.1%), whereas gene body CpGs showed somewhat more inter-individual variation in overall DNAm change than other sites (28.9% versus 26.5%). Promoter areas often coincide with CpG islands 34 . Here, 63.3% of TSS200 CpGs were also annotated to CpG islands. As in TSS200 areas, CpGs annotated to CpG islands had lower DNAm levels (mode M1 intercept 2.4% (SD=30.2%)), and more often showed an overall DNAm increase than other sites (25.2% versus 12.0%). DNAm sex differences were especially present in the shores of CpG islands compared to all other island-associated regions (stable sex differences: 7.5% versus 4.0%; sex differences in DNAm change: 0.6% versus 0.3%). Enhancers act on promoters to regulate gene transcription 35 . CpGs annotated to enhancer elements (2.0% of CpGs) tended to have low DNAm levels (mode M1 intercept 5.07%; SD=31.4%) and increased with age more than other CpGs (23.9% versus 15.9%). Inter-individual variation in change from birth was more common at enhancer sites than at other sites (5.6% versus 3.3%). Functional associations Enrichment of Gene Ontology categories was tested for genes linked to CpGs with different DNAm trajectories. In short, genes annotated to CpGs with overall decreasing DNAm levels were enriched in immune-developmental functions, whereas those annotated to CpGs with increasing levels were enriched in neurodevelopmental functions. This pattern seemed even more pronounced for genes annotated to nonlinear Negative-Neutral and Positive-Neutral CpGs, with the former more often associated with immune development and the latter with neurodevelopment. Genes linked to CpGs with stable sex differences and sex differences in DNAm change were enriched in pathways associated with sexual development, such as genital development, as well as pathways associated with neurodevelopment. Genes linked to CpGs with sex differences in DNAm change were also enriched in functions related to tooth and hair development. Enrichment in EWASs We further investigated the functional relevance of CpG sites with age-related DNAm trajectories by testing enrichment with published EWAS associations (Figure 6) 28,36-61 . 
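As an illustration of the enrichment testing described here, the following sketch performs one 2x2 Fisher's exact test for the overlap between a set of trajectory CpGs and a set of published EWAS hits, against the background of all tested CpGs; the function and variable names are illustrative, and this is a generic sketch of such a test rather than the exact pipeline used in the study.

```python
from scipy.stats import fisher_exact

def ewas_enrichment(trajectory_cpgs, ewas_hits, background_cpgs):
    """2x2 Fisher's exact test: is the EWAS hit set over-represented among
    CpGs following a given DNAm trajectory, relative to all tested CpGs?"""
    background = set(background_cpgs)
    traj = set(trajectory_cpgs) & background
    hits = set(ewas_hits) & background
    a = len(traj & hits)                       # trajectory CpGs that are EWAS hits
    b = len(traj) - a                          # trajectory CpGs not in the EWAS
    c = len(hits) - a                          # EWAS hits not in the trajectory set
    d = len(background) - a - b - c            # neither
    odds_ratio, p_value = fisher_exact([[a, b], [c, d]], alternative="greater")
    return odds_ratio, p_value
```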
Unsupervised clustering of the enrichments shows that CpG sites with inter-individual variation in change over time have distinct enrichments and cluster differently from those with age-associated change that is consistent among individuals. The CpG sites of each age-associated DNAm trajectory were enriched for published age associations in adulthood. Multiple smoking EWASs clustered together, with enrichment patterns exhibiting the strongest enrichments among CpG sites with Negative-Neutral trajectories and mostly weak enrichments among CpG sites with inter-individual variation in change. Further, despite adjusting for cell count heterogeneity in our models, we observed enrichments of CpG sites that differ by white blood cell type among sites following nearly all age-associated trajectories. Finally, we observed enrichments of CpG sites associated with gestational age and prenatal smoking among sites with sex-specific DNAm. Discussion In this study we described changes in DNAm levels through the first two decades of human life. We examined DNAm levels per CpG by their linear association with age, their nonlinear trajectories and inter-individual variation in change, as well as sex differences and CpG characteristics. We found that about half of sites change: consistent linear and/or nonlinear DNAm change was found at 53% of sites. We further found that over a quarter of sites, 28%, were characterized by substantial inter-individual differences in the direction of this change. DNAm sex differences were present, but not abundant: 5% of autosomal sites displayed different DNAm levels or differences in change over time for girls and boys. Specifically, we determined that DNAm at 52% of the measured methylome shows some form of linear change from birth to late adolescence, with DNAm decreasing at 36% and increasing at 16% of CpGs. CpGs with decreasing DNAm tended to have high levels of DNAm and were more often located in gene bodies. CpGs with increasing levels of DNAm tended to have low levels of DNAm and were more likely to be located in promoter regions and at enhancers. The predominance of decreasing CpGs is in agreement with literature on epigenome-wide DNAm and age in cross-sectional research on children and adults 18,62 , as well as with longitudinal research in adults 19 . Nonlinear DNAm trajectories were detected at 11% of CpGs, mostly involving changes in DNAm from birth to age 6 years, after which DNAm was more stable. We note that this could be due to cord blood being used to generate DNAm profiles at birth, whereas peripheral blood was used at later ages. A previous study 23 including eight children showed that the cord blood DNAm profile at birth clustered separately from later peripheral profiles, after which DNAm changed gradually from 1, to 2.5, to 5 years. Such differences between DNAm in cord and peripheral blood might be due to differences between the two tissues, such as their cell type composition. Stable sex differences were found at 5% of autosomal CpGs, and sex differences in DNAm change were found at 0.4% of all CpGs. In general, where there were stable sex differences, girls had higher levels of DNAm (4% of all CpGs); in the case of sex differences in DNAm change, boys had an accelerated upward change (0.2% of all CpGs). The direction of the stable sex differences detected is congruent with a cross-sectional study on newborns, in which girls had higher DNAm levels than boys for the large majority of the 3,031 significant autosomal CpGs 54 . 
Sex-discordant associations with age seemed to be more prevalent from birth to age 6 years than afterwards, suggesting that any phenotypic sex differences associated with DNAm would be established in early childhood. Their enrichment in the shores of CpG islands, areas at which DNAm has been associated with tissue differentiation and tissue-specific gene expression 64 , is consistent with the critical role that these processes play in sexual differentiation. Studies into sex differences in epigenetic regulation might want to focus on these locations. We also found the other DNAm trajectories to be arranged throughout the genome in a non-random fashion. Earlier studies 32,65 have shown that, for active genes, lower DNAm towards the promoter area (TSS200) and higher DNAm in the gene body relate to increased gene transcription. Here we add the observation that promoter DNAm tends to increase and gene body DNAm tends to decrease with age. From this finding, one might infer that a downregulation of gene expression takes place from birth to late adolescence. Enrichment analyses of published EWAS associations further showed that different traits and exposures exhibited distinct enrichment patterns among DNAm trajectories. For example, there were clear differences between smoking and BMI-related traits. Enrichment of sites with DNAm sex differences in EWASs on prenatal maternal smoking is consistent with studies finding that prenatal smoking affects traits such as birth weight 66 , brain development 67,68 , and attention 69 differently in boys and girls. Clustering for prenatal maternal smoking EWASs also showed enrichment for CpGs with consistent change among individuals, not for CpGs with inter-individual variation in change. This may suggest a link with the well-known effects of prenatal smoking on childhood development, since consistent DNAm change is more likely related to developmental or aging programming than inter-individual variation. This may explain why changes associated with prenatal smoking persist throughout life 70 . Notably, this pattern of change without inter-individual variation is visible in cg05575921, the AHRR CpG site strongly and persistently associated with prenatal smoking 71,72 (Supplemental Figure 2; http://epidelta.mrcieu.ac.uk/). 'Epigenetic age acceleration' is a term coined to indicate the deviation of age as estimated by an 'epigenetic clock' from chronological age, and it is associated with disease risk and mortality 73 . Existing clocks are all linear models based on DNAm. Consequently, one might expect that all CpGs included in a clock model change linearly with age. Furthermore, to detect age acceleration, one would expect that these CpG sites would also vary between individuals. Surprisingly, many CpG sites included in the most popular clocks do not match these expectations 74,75 (Supplemental Tables 9, 10). For example, we observe that over one-quarter and nearly one-half of the CpG sites included in the Horvath and Hannum clocks, respectively, follow non-linear DNAm trajectories in childhood. Given the widespread use of clocks to investigate biological aging, further investigation is warranted to better understand how, and perhaps if, associations using these clocks should be interpreted in child DNAm profiles. 
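To make the clock discussion concrete, the sketch below shows the generic form of an epigenetic-clock prediction: a weighted sum of DNAm beta values at the clock CpGs plus an intercept (the Horvath clock additionally applies an age transformation, omitted here). The CpG names, weights, and example values are placeholders, not the published clock coefficients.

```python
def clock_age(betas, coefficients, intercept):
    """betas: dict CpG -> DNAm beta value for one sample;
    coefficients: dict CpG -> clock weight (placeholders here, not published weights)."""
    return intercept + sum(coefficients[cpg] * betas.get(cpg, 0.0)
                           for cpg in coefficients)

# hypothetical example: chronological age 10, predicted age 12 -> acceleration of +2 years
coefs = {"cg0000001": 15.0, "cg0000002": -8.0}     # placeholder weights
betas = {"cg0000001": 0.8, "cg0000002": 0.1}
predicted = clock_age(betas, coefs, intercept=0.8)
acceleration = predicted - 10
```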
We note three main limitations of our findings. First, the use of different tissue types (cord blood and peripheral blood) could account for some of the differences between birth and later time points, e.g. sites that increased or decreased between birth and age 6 years but did not show change after that. Generation of DNAm profiles from a single tissue or cell type collected across childhood would be needed to disentangle this issue further. Unfortunately, such a dataset is not currently available, as most cohorts have generated DNAm profiles from peripheral blood and cord blood 63 . Analysis of these complex tissues has nevertheless yielded many valuable insights. Second, since DNAm at 9 years was measured only in Generation R and at 17 years only in ALSPAC, DNAm differences from 9 to 17 years may be to some extent driven by batch effects or cohort differences. This may explain some of the inter-individual differences in slope changes from 9 towards 17 years. However, the high level of agreement in both stability and change among the corresponding time points of the two cohorts is reassuring. Moreover, it is not entirely surprising that inter-individual variation in the directionality of change was higher for the largest age interval. This interval, furthermore, encompasses the period of adolescent development, a time in which many inter-individual phenotypic differences arise. Finally, it should be noted that the current study only included children of European ancestry. Considerable DNAm differences have been found between populations 76-78 , but research on age-associated DNAm differences is scarce. One study 79 reported evidence for overlap in age-associated CpGs in two African populations with studies on European-ancestry populations, but more research is needed to map the generalizability of longitudinal DNAm changes among different populations. In conclusion, in the first comprehensive CpG-by-CpG characterization of DNAm from birth to late adolescence, we found that DNAm at more than half of the studied CpG sites changes consistently between individuals, and that considerable inter-individual variation in change exists. Further, characteristics such as child sex, CpG location, and environmental and disease traits have distinct associations with patterns of DNAm change. Further analysis of these patterns is made readily available at http://epidelta.mrcieu.ac.uk/, which we hope can be used in future studies to test developmental hypotheses that promote our understanding of the developmental nature of DNAm, its role in gene functioning, and the associated biological pathways leading to health and disease. Setting Data were obtained from two population-based prospective birth cohorts, the Dutch Generation R Study (Generation R) and the British Avon Longitudinal Study of Parents and Children (ALSPAC). Pregnant women residing in the study area of Rotterdam, the Netherlands, with an expected delivery date between April 2002 and January 2006 were invited to enroll in Generation R. A more extensive description of the study can be found elsewhere 80 . Pregnant women residing in the study area of the former county of Avon, United Kingdom, with an expected delivery date between April 1991 and December 1992 were invited to enroll in the ALSPAC study. Detailed information on the study design can be found elsewhere 3,81 . The ALSPAC website contains details of all available data through a fully searchable data dictionary and variable search tool (http://www.bristol.ac.uk/alspac/researchers/our-data/). Ethical approval for the study was obtained from the ALSPAC Ethics and Law Committee and the Local Research Ethics Committees. Consent for biological samples has been collected in accordance with the Human Tissue Act (2004). 
Informed consent for the use of data collected via questionnaires and clinics was obtained from participants following the recommendations of the ALSPAC Ethics and Law Committee at the time. Study Population In the Generation R Study, 9,778 pregnant women were enrolled. DNA methylation Cord blood was drawn after birth for both cohorts, and peripheral blood was drawn at the later childhood and adolescent time points. In Generation R, quality control was performed on all 2,467 available DNAm samples with the CPACOR workflow 84 . Arrays with observed technical problems such as failed bisulfite conversion, hybridization or extension, as well as arrays with a mismatch between the sex of the proband and the sex determined by the chromosome X and Y probe intensities, were removed from subsequent analyses. Additionally, only arrays with a call rate >95% per sample were processed further, resulting in 2,355 samples, 22 of which belonged to half of an excluded sibling pair; hence 2,333 samples were carried forward into normalization. In ALSPAC, quality control was performed on 6,057 samples (3,286 belonging to children, 2,771 to their mothers), using the meffil package 85 . To minimize cohort effects as much as possible, we normalized both cohorts together as a single dataset. Functional normalization (10 control probe principal components, slide included as a random effect) was performed with the meffil package in R 85 . Normalization took place on the combined Generation R and ALSPAC set comprising a total of 5,178 samples for a total of 485,512 CpGs. One hundred and fifty-nine ALSPAC samples belonging to non-European children or children with missing data on gestational age were excluded, leading to a final ALSPAC set of 2,686 samples. Covariates Sample plate number (N=29 in Generation R and N=31 in ALSPAC) was used to correct for batch effects and was added as a random variable in the models (see below). White blood cell (WBC) composition was estimated with the reference-based Bakulski method 87 for cord blood and the Houseman method 88 for peripheral blood (Supplemental Table 11). Nucleated red blood cells were not further analyzed due to their specificity to cord blood, leaving CD4+ T-lymphocytes, CD8+ T-lymphocytes, natural killer cells, B-lymphocytes, monocytes, and granulocytes. Other covariates included gestational age in weeks, sex of the child, and cohort. Statistical analyses Step 1: Assessing cross-cohort comparability in DNA methylation stability To ascertain comparability between the two cohorts, we compared within-cohort DNAm stability between the time points that were present in both cohorts, i.e. birth and 6/7 years (Generation R/ALSPAC, respectively). Longitudinal stability per CpG within each cohort was assessed by studying estimates of concordance and change. For concordance, DNAm data were first residualized within each cohort for all variables present in the longitudinal models except the 'cohort' variable, in order to remove between-cohort differences due to other covariates. Concordance was then measured both with Spearman correlation (the data at most CpGs are not normally distributed) as a measure of relative concordance, and with intra-class correlations as a measure of absolute concordance (children with data for both time points: n Generation R=476, n ALSPAC=826). Longitudinal change from birth to 6/7 years was assessed by studying the estimates of the change in DNAm per year obtained by applying Model 1 (see below) within each cohort (children with data for at least one of the two time-points: n Generation R=1,394, n ALSPAC=944). 
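A minimal sketch of these Step 1 computations, assuming residualized DNAm matrices are already in memory: per-CpG Spearman correlations between the two overlapping time points within a cohort, and the cross-cohort Pearson correlation of per-CpG change estimates used in the comparability assessment described next. All names are illustrative, and the intraclass correlations and the Model 1 change estimates themselves are taken as given.

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

def per_cpg_spearman(dnam_birth, dnam_age6):
    """dnam_birth, dnam_age6: arrays (n_children x n_cpgs) of residualized DNAm
    for children measured at both time points; returns one rho per CpG."""
    return np.array([spearmanr(dnam_birth[:, c], dnam_age6[:, c]).correlation
                     for c in range(dnam_birth.shape[1])])

def cross_cohort_agreement(change_cohort_a, change_cohort_b):
    """Pearson correlation of per-CpG change estimates (e.g., Model 1 slopes)
    between two cohorts, as a measure of cross-cohort comparability."""
    return pearsonr(change_cohort_a, change_cohort_b)[0]
```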
In a second step, cross-cohort comparability was assessed with Spearman (ρ) correlations of the concordance estimates of the CpGs of each cohort (which were not normally distributed) and Pearson correlations (r) amongst the change estimates of the CpGs of each cohort (which were normally distributed).

Step 2: Longitudinal modelling of DNA methylation using combined Generation R and ALSPAC data The combined Generation R and ALSPAC dataset had four time points of collection (birth, age 6/7 years, age 9 years and age 17 years). Model 1: Linear change. To identify linear changes in DNAm, we applied:

M1: M_ijk = β0 + u0i + (β1 + u1i)·Age_ij + u0k + ε_ijk

Here, participants are denoted by i, time points by j, and sample plates by k. M denotes the DNAm level, β0 the fixed intercept, u0i the random intercept, β1 the fixed age coefficient, u1i the random age coefficient, and u0k the random intercept for sample plate. Hence, β1 represents the average change in DNAm per one year. Variability in this change amongst individuals was captured with u1i. To avoid problems with model identification, the random slope of age was kept uncorrelated with the random intercept (i.e., a diagonal random-effects matrix was used).

Model 2: Nonlinear change. To identify nonlinear changes in DNAm, we extended Model 1 to allow slope changes at ages 6 and 9 30,31 :

M2: M_ijk = β0 + u0i + (β1 + u1i)·Age_ij + (β2 + u2i)·(Age_ij − 6)+ + (β3 + u3i)·(Age_ij − 9)+ + u0k + ε_ijk

where a+ = a if a > 0 and 0 otherwise, so that β2 represents the average change in DNAm per year from 6 years of age onward, after accounting for the change per year from birth onward, as denoted by β1. Likewise, β3 represents the average change in DNAm per year from 9 years of age onward, after accounting for the change per year from 6 years of age onward. Hence, with these variables we are able to detect slope changes at 6 and 9 years of age. These slope changes were used to identify different types of nonlinear patterns. With u2i and u3i, the inter-individual variation in slope changes at 6 and 9 years was captured, respectively. General linear hypothesis testing 89 was applied to our fitted models to determine whether there were changes in DNAm per year from 6-9 years and from 9-18 years.

Model 3: Sex differences in change. To identify CpGs for which DNAm changes differently over time for boys and girls, we applied the following model:

M3: M_ijk = β0 + u0i + (β1 + u1i)·Age_ij + β2·Sex_i + β3·(Age_ij × Sex_i) + u0k + ε_ijk

Here, Sex_i denotes the sex of child i. Both main and interaction effects for sex were studied. The three mixed models were fitted using maximum likelihood estimation in R with the lme4 package 90 . Continuous covariates (WBCs, gestational age) were z-score standardized. Random slopes were kept uncorrelated with random intercepts and the NLopt optimizer was used, enabling us to improve computational speed compared to the default settings. P-values for the fixed effects were computed with a z-test. P-values for random slopes of the age effects were obtained by refitting the model without the random slope and comparing the fit estimates of the two models with a likelihood-ratio test. Within each model, P-value thresholds were Bonferroni-corrected for the number of tested CpGs (i.e., to P < 1×10−7).
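The core of Models 1-2 can be sketched briefly. The original analysis used R's lme4; the Python/statsmodels approximation below (hypothetical column names, a correlated rather than diagonal random-effects structure, and the sample-plate random intercept omitted for brevity) only illustrates the random-slope fit and the likelihood-ratio test for the random age slope:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical long-format table: one row per sample with columns 'meth'
# (DNAm level), 'age' (years) and 'id' (child); the file name is a placeholder.
df = pd.read_csv("cpg_long.csv")

# Spline basis for Model 2: (age - 6)+ and (age - 9)+ allow slope changes at
# 6 and 9 years, as described in the text.
df["age6"] = np.clip(df["age"] - 6.0, 0.0, None)
df["age9"] = np.clip(df["age"] - 9.0, 0.0, None)

# Model 1, simplified: fixed age effect with a per-child random intercept and
# random age slope.
m1 = smf.mixedlm("meth ~ age", df, groups=df["id"], re_formula="~age").fit(reml=False)

# Likelihood-ratio test for the random age slope, mirroring the procedure in
# the text: refit without the slope and compare the two fits.
m0 = smf.mixedlm("meth ~ age", df, groups=df["id"], re_formula="~1").fit(reml=False)
lr = 2.0 * (m1.llf - m0.llf)
p_random_slope = stats.chi2.sf(lr, df=2)  # slope variance + covariance terms
```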
Step 3: Functional characterization of probes with comparable patterns of change To interpret the functionality of the age-related DNAm patterns from the three models, CpG sites adhering to 8 different age-related patterns (M1: linear change and inter-individual variation in linear change; M2: nonlinear trajectories, and inter-individual variation in change from birth, in slope change at 6 years, and in slope change at 9 years; M3: stable sex differences and sex differences in DNAm change) were tested for enrichment in relative genomic regions (TSS1500, TSS200, 5'UTR, 1st exon, gene body, 3'UTR, and intergenic regions 64 ).

Disclosure declaration The authors declare that they have no competing interests.
6,753.6
2020-06-10T00:00:00.000
[ "Biology", "Medicine" ]
Comparison of Battery Architecture Dependability This paper presents various solutions for organizing an accumulator battery. It examines three different architectures: series-parallel, parallel-series and the C3C architecture, which spreads the cell output current flux to three other cells. Alternatively, to improve the reliability of a several-cell system, it is possible to insert more cells than necessary and to solicit them less. Classical RAMS (Reliability, Availability, Maintainability, Safety) solutions can be deployed by adding redundant cells or by tolerating some cell failures. With more cells than necessary, it is also possible to choose the active cells with a selection algorithm and place the others at rest. Each variant is simulated for the three architectures in order to determine the impact on battery operative dependability, that is to say the duration during which the battery complies with its specifications. To explain why the conventional RAMS solutions are not deployed to date, this article examines their influence on operative dependability. While the conventional variants extend the time before the battery stops being operational, an algorithm with a suitable optimization criterion extends the battery mission time even further.

Context An electrical energy storage system (EESS) may be an association of electrochemical cells in a battery [1]. These cells can belong to different technologies and chemistries. The oldest are lead-acid cells. Then, there are the cells using alkaline metals. Finally, for a quarter of a century, lithium cells have been marketed. Lithium-ion cells are more stable than the first lithium cells and have higher energy and power densities than the lead-acid and nickel-based technologies [2]. They also have a longer lifespan, which partly explains their strong growth. The lithium-ion battery market continues to grow, to the point that the unit manufacturing price decreases steadily because of the growing number of units produced [3]. Naturally, in a battery, all cells belong to the same technology. This paper presents three architectures that can be used in cell batteries: two classic ones, using permanent series and parallel connections between cells, and one innovative, described later, allowing different connections between the cells. On these architectures, different variants are tested: common solutions (balancing between cells, use of more cells than necessary to provide the nominal power), other conventional solutions not deployed in current industrial practice (tolerance of some cell failures with over-solicitation of the others, addition of redundant cells to replace the first failing cells) and a new variant (redundant cell management with a control law using the cells' fundamental states as a choice criterion). The performances of the architectures and their variants are compared by simulation under Matlab, using a cell model that includes aging phenomena. The cell's main electrical characteristic is its maximum storable capacity Q0. It is well known that this capacity decreases with aging. When a cell is new, this capacity is optimal; it is noted in this paper as Q*. The battery's physical behavior can be modeled by a second-order Thevenin model scheme [4]. To express the physical state of a cell, two indicators are commonly used: SoC and SoH, respectively the state of charge and the state of health [5]. The SoC describes, in percent, the amount of electrical charge Q(t) the cell contains at time t, compared to its maximum storable capacity.
The SoC is determined by Equation (1). Moreover, when a cell ages, it cannot store as much electrical charge as when it is new. The maximum storable capacity Q0 decreases over time, continuously and gradually, from the optimal capacity Q*. So, a cell's state of health (SoH) is defined by the ratio between these two capacity values, as shown by Equation (2). For applications requiring significant power rather than energy delivery, SoH can also be defined relative to the ESR [6]. The ISO 12405-2 standard for electric vehicles [7] specifies test procedures for lithium-ion batteries and electrically propelled road vehicles. It specifies that a cell enters the old-age phase when its SoH goes down to 80%. Manufacturers communicate a lifetime value for cells that incorporates, as specified in their datasheet, the two aging modes that a cell faces: an age-related calendar mode [8] and a cyclic mode related to cell use [9]. These two phenomena [10] can be described by the combined evolution of a linear aging term and an "aging" power of time, whose principles are described by Equation (3), where the aging parameter is between 0.5 and 2 and A1 and A2 are two constants. Cyclic aging is aggravated mainly by three parameters: the operating temperature [11], the amount of electrical charge extracted in a cycle [12] and the current [13]. Despite the standardization of cell manufacturing, disparities can occur between cells from the same batch. These are reflected in a variability of the optimal capacity Q* and of the equivalent series resistance ESR [14]. Thus, when cells are associated in an EESS, their disparities lead to imbalances in currents and cell temperatures, and then in their aging. Consequently, a battery may remain operational for a shorter time than a single cell's lifetime. Operative dependability Odep is defined as the time, expressed in hours or in number of cycles, during which a multi-cellular battery can meet the external load specifications [15]. In other words, Odep is the time before a downtime related to a full discharge or a too-high age; that is to say, the time during which the battery contains enough operational cells to provide the requested power. For a single cell, this cessation comes from:
− a cell aging failure, resulting in a SoH of less than 0.8;
− a sudden random failure (open or short circuit);
− a complete discharge, with the SoC dropping to zero in the operating phase.
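The referenced equations are not reproduced here; a plausible reconstruction from the definitions above (the additive form of Equation (3) is an assumption) reads:

```latex
\mathrm{SoC}(t) = 100\,\frac{Q(t)}{Q_0(t)}\ [\%] \qquad (1)

\mathrm{SoH}(t) = \frac{Q_0(t)}{Q^{*}} \qquad (2)

Q_0(t) = Q^{*} - A_1\,t - A_2\,t^{\,\mathrm{aging}}, \qquad 0.5 \le \mathrm{aging} \le 2 \qquad (3)
```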
For a battery, Odep depends on the architecture (how cells are connected), on whether some cells are in redundancy for the current mission and on whether failed cells can be isolated or not. It is therefore necessary to monitor the cell states. In order to monitor cells, in particular to control the end of charging and to prevent overheating, EESS cells are managed by a BMS (Battery Management System) [16]. Today, multi-cellular EESS are constituted according to two conventional architectures (series-parallel and parallel-series) and sometimes include variants, as explained in the next section, to improve their availability. From a formal point of view, the operative dependability Odep is defined by the equation set (4), according to the SoC and SoH of the cells. In the next section, different architectures and possible variants to improve the battery operative dependability are presented. Then, part III presents simulations performed on each variant, whatever the cell technology, in order to determine the impact on operative dependability, for the example of a LiFePO4 cell. The next part presents and compares the simulation results, especially for operative dependability.

Architectures and Variants In the literature, to improve the operative dependability, various solutions for reconfiguration of the internal structure are proposed, such as the power tree solution presented in [17] or the DESA architecture (Dependable, Efficient, Scalable Architecture) [18]. The first solution does not allow use of a battery at full power, and the second only improves the operative dependability by an amount of a similar order to the share of additional cells (typically 50% with 50% additional cells). Three architectures are compared to determine which has the best operative dependability: two conventionally used and a new one.
These architectures are reconfigurable by using a limited number of switches associated with each cell. To increase the battery voltage, the cells are associated in series. To increase the current, they are associated in parallel. Thus, batteries are generally associated in an SP (series-parallel) architecture, as described in Figure 2, which presents an n-row, m-column structure, noted (n, m). Switches make it possible to isolate a column, following the failure of one of its cells, for instance. By duality, the same structure can be inserted into a PS (parallel-series) architecture, as shown in Figure 3. One switch per cell is needed in order to isolate a failed cell. In an SP architecture, voltage disparities can appear between same-column cells. In a PS architecture, the same-row cell currents can differ. To deliver the same power while keeping the same structure, another architecture is possible: the C3C [19], depicted in Figure 4. The architecture consists of an association of elements. Each C3C element comprises a cell and three switches, as shown by Figure 5. Each includes one upstream connection and three downstream ones. The A-indexed switch is placed on the upstream connection. Switch B is on the first downstream connection, C on the third. The middle connection is intended to be connected to the upstream connection of the same-column, downstream-row cell. The C3C architecture combines the advantages of both conventional architectures, such as the self-balancing of PS, and allows a series association of cells located on different columns. The current that leaves Cell i,j in the ith row and the jth column can be directed to one of the three following-row cells, Cell i+1,j−1, Cell i+1,j, Cell i+1,j+1, in a recharge phase, respectively through the switches S i,j B, S i+1,j A and S i,j C, or to one of the three upstream-row cells, Cell i−1,j−1, Cell i−1,j, Cell i−1,j+1, in a discharge phase. This is done by activating the appropriate switches, respectively S i−1,j−1 C, S i,j A and S i−1,j+1 B. Thanks to the architecture, the mth-column cells are connected with those of the first column. The C3C architecture involves two particularities. Firstly, to take advantage of its very large number of possible configurations [20], it must be managed by an algorithm that chooses the best cell combination, with the aim of reducing cell aging. The cell selection parameters can be chosen to define the optimal combination. Only two are considered in this paper: SoC and SoH. Secondly, whatever the architecture, to increase battery reliability, a minimum amount of redundancy must be included [21]. This minimal part consists of using a column (especially the mth) as a redundant column. Thus, an (n, m) battery is calibrated to provide a nominal power Pn given by Equation (5).
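As a rough illustration of such a selection rule, the following Python sketch (the paper's simulations were written in Matlab) implements a hypothetical greedy per-row rule that rests the most aged cell; the actual criterion and combinatorial search used in the paper may differ:

```python
import numpy as np

def select_active_cells(soh: np.ndarray) -> np.ndarray:
    """soh: (n, m) array of cell states of health. Returns a boolean (n, m)
    mask with (m - 1) active cells per row: the most aged cell of each row
    rests, so that aging stays balanced across the row."""
    n, m = soh.shape
    active = np.ones((n, m), dtype=bool)
    rest = soh.argmin(axis=1)                 # weakest cell of each row
    active[np.arange(n), rest] = False
    return active

# Example: a (3, 4) battery; cells (0, 3), (1, 0) and (2, 1) rest.
soh = np.array([[0.95, 0.93, 0.94, 0.90],
                [0.88, 0.96, 0.92, 0.95],
                [0.97, 0.91, 0.93, 0.94]])
mask = select_active_cells(soh)
```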
As a result, if the conventional architectures include a redundant column, they can also be managed by the same algorithm that chooses the best cell combination; the number of possible combinations is nevertheless much lower. Therefore, if the battery must supply its nominal current Inom, depicted in Equation (6), one cell in each row stays at rest. In the PS and C3C architectures, this cell can be any of the m cells of each row. In SP, all the same-column cells are placed at rest. To do this, it requires adding a switch in series with each cell in PS and one per column in SP, as shown in Figures 2 and 3. If SP and PS batteries include a redundant column, it also becomes possible to use this redundancy classically. That is, the battery can work with its base cells, corresponding to the first (m−1) columns, until one of them stops being operational; it is then replaced by the spare cell.
In this variant, the mth switches in Figure 3 are initially off for a PS architecture. For an SP architecture, in Figure 2, all cells in the mth column are redundant. Another variant can also be examined, tolerating a cell failure by replacing it with a short circuit in the SP architecture and allowing the battery to continue to fulfill its mission, even if it requires the active cells to carry a greater current than their nominal current. To do so, two switches should be placed around each cell, one in series and one in parallel: S i,j A is opened and S i,j B closed when cell i,j must be isolated, as depicted for cell 22 in Figure 6. In this example, cell 22 fails and the current of the second column passes through S 22 B. The cell is marked with a red cross to symbolize its failure. This variant does not require redundant cells; the same power as in Equation (5) is obtained with an (n, m−1) structure. In a PS architecture, the cell may simply be disconnected by a single switch, as shown for cell 22 in Figure 7, in which cell 1,m−1 and cell 22 are faulty and marked with a red cross. It is also possible to use a battery comprising m columns to provide a power corresponding to the nominal power of Equation (5). In this way, the battery has an over-capacity compared to the external load specifications. Moreover, in order to reduce the disparities between the SoCs when cells are associated in series, balancing circuits are often used in an SP architecture. They make it possible to homogenize the electrical charges of all cells in the string [22]. These balancing circuits are controlled by a BMS [23]. They allow better use of the stored energy [24] and an improvement of the battery operative dependability [25,26]. To evaluate the balancing impact, the basic SP of Figure 2 (with (m−1) columns), with and without balancing circuits, is simulated. All compared batteries must provide the same power of Equation (5), so this structure only includes (m−1) columns. Several balancing techniques exist in an SP architecture:
− dissipative balancing [27], consisting of balancing the electrical charges from below by removing excess energy through the Joule effect;
− redistributive balancing, sending excess energy from the most charged cell(s) to the least charged cell(s) in the same column [28]. Its principle is described by the Figure 8 scheme, relating to a single column j in an SP architecture. When Cell a,j is more charged than Cell b,j, the intermediate capacitor Cb j, associated with this column j, is placed in parallel by switching on the S a,j+ and S a,j− switches. Then, these switches are switched off and the S b,j+ and S b,j− switches are switched on. At the end, the electric charge of Cell a,j has been reduced and that of Cell b,j has been increased.
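The redistributive principle can be sketched numerically. The following Python fragment is a deliberately simplified model (cells treated as large capacitors, ideal switches, illustrative component values, no losses), not the paper's circuit:

```python
def balance_step(q_a, q_b, c_cell=36_000.0, c_cap=10.0):
    """One switching period of capacitor-based redistribution between the
    more charged cell a and the less charged cell b of the same column.
    Cells are crudely modeled as large capacitors (c_cell in farads, sized so
    that a 10 Ah cell spans about 1 V); all values are illustrative."""
    v_a, v_b = q_a / c_cell, q_b / c_cell
    dq = c_cap * (v_a - v_b)      # charge carried by the capacitor per period
    return q_a - dq, q_b + dq

q_a, q_b = 36_000.0, 32_400.0     # 10 Ah cell at SoC 100% vs. 90% (coulombs)
for _ in range(10_000):           # repeated switching equalizes the charges
    q_a, q_b = balance_step(q_a, q_b)
# q_a and q_b both approach 34,200 C, i.e., about SoC 95% for both cells.
```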
In this study, only redistributive balancing circuits from one cell to another are considered. This leads to adding two switches per cell in the SP schemes, each connecting a terminal of the cell to a terminal of an intermediate capacitor, used to temporarily store the energy to be transferred. In this way, the different variants combining architecture and improvement are listed in Table 1. The structure column number mc and the number of switches associated with each cell are also specified. For the basic PS and the over-capacity PS variants, the switches in Figure 3 are not useful because cells are not managed individually: the cells are all active or all inactive. In the same way, those of Figure 2 are not useful in the without-balancing SP variant. Apart from the variant reported without balancing, all other SP variants include balancing circuits, which add two switches per cell. The variants deployed in present industrial solutions are the basic PS, the basic SP with or without balancing, and the over-capacity variants.
Simulations The different variants and architectures were modeled using Matlab. The program simulates cell associations according to each architecture for different (n, m) structures. For each variant, it submits the battery to regular cycling, leading to the current shown in Figure 9 for a single 10 Ah cell. The battery is initially full. The cycle consists, firstly, of a discharge under a current I equal to the battery nominal current Inom divided by the active column number mc (that is to say, (m−1) or m) for a duration of 2500 s. So, except for the over-capacity variants, I = 10 A in the discharging phase. In this way, the battery is discharged by 70% of its initial capacity. This discharge value is relevant for quantifying cell aging; indeed, it allows the discharging phase to be stopped before a complete discharge. The more Q0(t) decreases, the more the SoC at the end of the discharging phase decreases, because a cell has to provide the same amount of energy. Then, the cells are recharged, for the same duration, to return to full charge. Finally, the battery is placed at rest for an identical duration, allowing the return to internal balance.
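Using the figures given in the text (10 Ah cell, 10 A phases of 2500 s), the cycle and the resulting SoC trajectory can be sketched as follows; the Python form is illustrative, since the paper's program was written in Matlab:

```python
import numpy as np

dt = 1.0                                          # time step in seconds
q_full = 10.0 * 3600.0                            # 10 Ah capacity in coulombs
current = np.concatenate([np.full(2500, -10.0),   # discharge at Inom / mc
                          np.full(2500, +10.0),   # recharge
                          np.zeros(2500)])        # rest
soc = 1.0 + np.cumsum(current) * dt / q_full      # battery initially full
# soc.min() is about 0.31, i.e., roughly the 70% depth of discharge described
# in the text (exactly 70% would need about 2520 s at 10 A).
```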
To model a cell, a characteristic equation that describes the OCV evolution as a function of the SoC is used. The typical shape of this curve is described for an amorphous iron phosphate (LiFePO4) cell in Figure 10. Batteries with an amorphous iron phosphate positive electrode support high intensities in charge and discharge as well as fast charges. These cells have a higher power density and lower fire risks than cells with a lithium cobalt oxide (LiCoO2) positive electrode [29]. On this curve, four points are identified by a red dot. These four coordinate points, (S0, E0), (SL, EL), (SV, EV) and (SM, EM), delimit three sectors. The curve being continuous, it is possible to perform a partial regression. In the sector where the state of charge is low (SoC < SL) and the OCV low (OCV < EL), the curve can be described by an exponential function. The same holds for high states of charge (SoC > SV and OCV > EV). The central sector can be described by a linear function. Ultimately, the characteristic equation can be described by the equation set (7) as a function of the SoC (written as a dummy variable x). The ν and ξ parameters are empirical. ν is between 10 and 20; it corresponds to the rapid growth of the voltage when the SoC approaches its maximum. The ξ parameter is its complement when the voltage collapses, the SoC decreasing towards its minimum.
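An illustrative piecewise OCV(SoC) function in the spirit of equation set (7) can be written as below; the breakpoints and parameter values are placeholders, not the paper's fitted values:

```python
import numpy as np

def ocv(x, e0=2.0, e_l=3.0, e_v=3.35, e_m=3.6, s_l=0.1, s_v=0.9, nu=15.0, xi=15.0):
    """Piecewise OCV(SoC): exponential collapse below s_l, linear central
    sector, exponential rise above s_v. Approximately continuous for large
    nu and xi; x is the SoC expressed as a fraction in [0, 1]."""
    x = np.asarray(x, dtype=float)
    low = e_l - (e_l - e0) * np.exp(-xi * x / s_l)        # SoC -> 0 sector
    mid = e_l + (e_v - e_l) * (x - s_l) / (s_v - s_l)     # linear sector
    high = e_v + (e_m - e_v) * np.exp(nu * (x - 1.0))     # SoC -> 1 sector
    return np.where(x < s_l, low, np.where(x > s_v, high, mid))

voltage = ocv(np.linspace(0.0, 1.0, 101))
```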
In these simulations, the cells had an initial capacity Q* = 10 Ah and an average ESR of 20 mΩ. The ambient temperature was set to 25 °C. Since the Q0(t) degradation is continuous and progressive, it is logical to consider that the SoH, which is a representation of this degradation, also decreases continuously. This degradation is proportional to the conditions of use: at each cycle, a cell ages, which translates into an SoH decrease. The cell temperature evolves as a function of the current flowing through it. By convection, radiation and conduction, this heat can spread to other cells; for simplicity, the model used does not integrate thermal coupling phenomena. Finally, the initial conditions for each cell were given randomly around a nominal value with a variability of plus or minus 10%. The same Cell i,j is used for all variants in all architectures, so as to ensure a true comparison of the intrinsic performance of the variants. A cell is considered faulty in two cases: first, when it is completely discharged whereas it should still supply energy (SoC = 0); second, when it is too old, with SoH = 0.8. This corresponds, for a single cell, to the operative dependability definition given above. Depending on the architecture, the variant and the failed cell location(s), the battery may or may not continue to provide the requested power. If, at some moment, it is no longer able to provide this power, Odep is reached. The simulation was performed several times with different initial conditions. Only the mean values are reported here, even if the figures illustrating this demonstration relate to a single simulation among all those performed. From these different simulations, it is possible to extract the operative dependability. Odep corresponds to the time when the battery stops meeting the specifications: delivery of the battery nominal current Inom. According to the variant, some cells may have failed (by aging, random failure or complete discharge) before this time and been isolated or replaced by others. The simulations presented here were carried out for several structures: with n = 2 and m varying from 3 to 10 on the one hand, and with m = 3 and n varying from 2 to 4 on the other.
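The cycle-level bookkeeping that yields Odep for a single cell can be sketched as follows, assuming the additive capacity-fade form of Equation (3); the constants are illustrative, not the paper's calibration:

```python
q_star, a1, a2, aging = 10.0, 1.0e-3, 5.0e-5, 1.5   # Ah, Ah/cycle, Ah/cycle^aging, -

def soh_after(cycles: int) -> float:
    """SoH after a number of cycles, combining a linear fade term and a
    power-of-time fade term in the spirit of Equation (3)."""
    q0 = q_star - a1 * cycles - a2 * cycles ** aging
    return q0 / q_star

# Odep for a single cell: the first cycle at which SoH drops below 0.8.
odep = next(c for c in range(1, 10_000) if soh_after(c) < 0.8)   # ~820 here
```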
Results and Comparisons Results from a (3,4) structure simulation performed with a C3C architecture and an SoH-based optimization algorithm are illustrated in Figure 11, which presents the SoC, SoH, OCV and Q0 evolution. In this example, only the first-row cells are shown because it is in this row that the first two cells fail. Cell 14 (cyan curve) fails first. Odep is limited by the aging of Cell 12 (green curve). This simulation was deliberately performed in accelerated mode, with an announced cell lifetime of only 50 cycles. In order to read the SoC evolution on the curve, the aging parameter was set to its maximum value (aging = 2) to better differentiate the SoH curves as the battery lifespan approaches. With this variant, the aging control is visible on curve (b): all cells age together, and their SoH decreases together. Table 2 records the average operative dependability obtained for the different structures. This average is given for cells with lifetimes of 1000 cycles. In the same way that the reliability of a system of several identical, all-necessary elements decreases as the number of elements increases, the more cells a battery contains, the lower its operative dependability. For instance, for a basic PS architecture with an n = 2 structure, Odep reduces when the column number j increases: respectively to 772, 754, 750, 748, 747 and 674 cycles as j varies from 4 to 10. The variant with the lowest operative dependability is the PS architecture without redundant cells (basic PS); for instance, Odep = 772 cycles for a (2,4) structure. Its performance serves as a reference value. The operative dependabilities of the variants are compared relatively (base 100 for the basic PS) in the bar graph in Figure 12, drawn for a (3,4) structure. Thus, the performances can be grouped into three clusters: those between 100% and 120% of the basic PS operative dependability, those between 120% and 140%, and those higher [30]. Among the less powerful variants are the classic solutions: balancing in the SP architecture, redundancy, fault tolerance and over-capacity. In the second family are the SoC-based optimization algorithm variants for the three architectures. Finally, those with the best operative dependability are the variants using the SoH-based optimization algorithm, regardless of the architecture. Cluster 3 variants show a different Odep improvement per architecture, as summarized in Table 3. On average, this improvement is close to 36%. Unlike the DESA architecture, whose management cannot be deployed in another architecture, with the optimization algorithm the operative dependability improvement is greater than the share of redundant cells. For example, for a (2,4) structure, Odep is improved by 50% with only 33% redundant cells. In the same way, for a (2,10) structure, Odep is improved by almost 25%, regardless of the architecture, for 10% more cells.
Conclusions In this paper, some variants to improve the operative dependability of EESS are described and compared. For this, a formal model integrating the aging of a cell is used. The different architectures and the possible solutions to improve the performance in duration of use are simulated under Matlab. Whatever the architecture, classical variants such as over-capacity and balancing only improve Odep by up to 20%. By using a minimal portion of redundant cells and an SoH-based optimization algorithm, it is possible to improve the battery operative dependability, on average, by more than 35%, regardless of its architecture. When the current flowing through the battery is below the nominal current, in a C3C architecture, it is possible to perform specific cell-to-cell balancing while using other cells to meet the current demand. No other architecture allows this differentiated use of cells. It will be necessary to continue this work by comparing the cost of adding the redundant cells, the switches and the over-cost induced on the BMS in terms of computing capacity with the gain in additional mission time. Author Contributions: C.S. wrote the paper; P.V., L.P., A.S. and É.N. made corrections; all authors discussed the results and decided on next steps. Funding: This research received no external funding. Conflicts of Interest: The authors declare no conflicts of interest.
10,249.4
2018-07-03T00:00:00.000
[ "Engineering", "Materials Science" ]
Neglected Facts on Mycobacterium Avium Subspecies Paratuberculosis and Type 1 Diabetes Civilization factors have been responsible for the increase of human exposure to mycobacteria from the environment, water, and food during the last few decades. Urbanization, lifestyle changes and new technologies in the animal and plant industry are involved in frequent contact of people with mycobacteria. Type 1 diabetes is a multifactorial polygenic disease; its origin is conditioned by the mutual interaction of genetic and other factors. Environmental factors and certain pathogenetic pathways are shared by some immune-mediated chronic inflammatory and autoimmune diseases, which are associated with triggers originating mainly from Mycobacterium avium subspecies paratuberculosis, an intestinal pathogen which persists in the environment. Type 1 diabetes and some other chronic inflammatory diseases thus pose a global health problem which could be mitigated by measures aimed at decreasing human exposure to this neglected zoonotic mycobacterium.

Introduction Type 1 diabetes (T1D) is an insulin-dependent type of diabetes caused by the destruction of pancreatic β-cells leading to major insulin deficiency. T1D represents around 10% of all cases of diabetes [1]. It develops either on an autoimmune basis, which is the cause of the disease in 70-90% of cases, or idiopathically, with a not entirely clear pathogenesis [2]. The disease is most often diagnosed in children and adolescents, in whom the first symptoms are significant polyuria, polydipsia, and polyphagia. T1D may also manifest in adulthood in middle-aged and elderly patients. The disease often resembles type 2 diabetes at first, but insulin production gradually weakens and its exogenous supply is required. The term latent autoimmune diabetes in adults (LADA) is used for this specific form. The correct diagnosis can be verified in the laboratory by confirming specific LADA autoantibodies and decreasing C-peptide levels [3]. The prevalence of T1D is increasing globally; the incidence of T1D in children worldwide has increased at a rate of up to 5% per year since the 1970s, in particular in fast-developing countries [4]. The results of the meta-analysis performed by Mobasseri et al. [5] showed that the incidence of T1D in the world is 15 per 100,000 of the population and the prevalence is 9.5 per 10,000 of the population. The incidence of T1D in Europe is 15 per 100,000 of the population and the prevalence is 12.2 per 10,000 people. In the Czech Republic, in the last 30 years, the incidence of T1D increased more than threefold, reaching 25 new cases per 100,000 children under the age of 15. T1D is the most common diabetes type at this age (95% of cases), along with monogenic diabetes (4%) and type 2 diabetes (1%) [6]. Not only T1D, but also other autoimmune diseases such as inflammatory bowel diseases, autoimmune thyroiditis and juvenile idiopathic arthritis show increasing incidence [7] without known specific triggering factors.

T1D Etiology and Environmental Factors T1D is a multifactorial polygenic disease; its origin is conditioned by the mutual interaction of genetic and other factors. The influence of genetics on the etiology of T1D is approximately 50%, most notably the strong association of T1D with genes for HLA-II (human leukocyte antigen) molecules. The role of HLA molecules present on the surface of leukocytes is to present antigens to T-lymphocytes [8].
After a certain time, the interaction of internal and external factors can lead to the failure of immunoregulatory mechanisms. Destructive autoimmune insulitis develops, involving autoantibodies and autoreactive T-cells targeted against specific antigenic structures of pancreatic β-cells. In the first phase, the islets of Langerhans are infiltrated by leukocytes and macrophages. Subsequent activation of antigen-presenting cells (APCs: B-lymphocytes, macrophages, and dendritic cells) leads to binding of β-cell antigens to high-risk HLA-II molecules and to presenting these antigens to potentially autoreactive T-cells. Both T-lymphocytes and APCs then produce the cytokines IFN-γ, IL-1 and TNF-α, which act cytotoxically towards β-cells through the induction of apoptosis and the formation of free radicals. The clinical manifestation of diabetes occurs only after the death of 80-90% of all insulin-producing cells [8,9]. The main environmental factors triggering T1D include viral infections caused mainly by enteroviruses, early administration of cow's milk to infants and excessive consumption of foods containing gluten and nitrates [10]. Environmental factors (e.g., diet, viruses, and chemicals) may trigger the induction of diabetes and act as primary injurious agents damaging pancreatic beta cells or stimulating an autoimmune process. Some viruses, such as encephalomyocarditis virus and Mengo virus 2T, may directly infect mouse pancreatic beta cells. In contrast, persistent infection with cytomegalovirus and rubella virus may induce islet cell autoantibodies against a 38 kDa islet cell protein [11]. Other potential risk factors for the development of T1D are respiratory infections in early childhood [12], enteroviruses [13] and the group B coxsackieviruses [14]. It is suggested that the gut microbiome may protect from the development of T1D by promoting intestinal homeostasis [15]. There is evidence that reduced gut microbial diversity of the Clostridium leptum group in children may lead to a decreased number and function of regulatory T-cells, promoting the autoimmune response [16]. The lack of Vitamin D supplementation in infancy increases the subsequent risk of T1D, as a European case-control study (the EURODIAB Substudy 2 Study Group, 1999) indicated [15]. Vitamin D is important in the prevention of islet cell death, might improve the survival of islet cell grafts, and ameliorates the production of insulin. Low Vitamin D levels were shown to have a negative effect on β-cell function [17]. Both early and delayed introduction of gluten have been implicated in the risk of autoimmunity and T1D [10]. A study conducted by Norris et al. [18] suggests there may be a window of exposure to cereals in infancy outside which initial exposure increases autoimmune risk in susceptible children. The TRIGR study was an international double-blind randomized clinical trial of 2,159 infants with human leukocyte antigen-conferred disease susceptibility and a first-degree relative with T1D, recruited from May 2002 to January 2007 in 78 study centers in 15 countries [19]. Extensively hydrolyzed casein formula was given to one group of children, and regular commercial milk-based formula plus casein hydrolysate in a 4:1 proportion was given to the control group up to 6-8 months after weaning. The results were preliminarily reported in 2011 [20] and finally in 2018.
Among infants at risk for T1D, weaning to a hydrolyzed formula compared with a conventional formula did not reduce the cumulative incidence of T1D after a median follow-up of 11.5 years. These findings do not support a need to revise the dietary recommendations for infants at risk for T1D [21]. Niegowska et al. [22] included 23 children at risk for T1D, formerly involved in the TRIGR study, and 22 healthy controls (HCs). Positivity to anti-Mycobacterium avium subsp. paratuberculosis (MAP) peptides and homologous human peptides was detected in 48% of at-risk subjects compared to 5.85% of HCs, preceding the appearance of islet autoantibodies. Since MAP is easily transmitted to humans through contaminated cow's milk and has been detected in retail infant formulas, MAP epitopes could still be present in extensively hydrolyzed formula and act as antigens stimulating β-cell autoimmunity. Hydrolyzed milk formula is not guaranteed to be MAP-free, since MAP can easily resist the enzymes used; even if the bacteria were not viable, MAP components such as MAP3865c, homologous to zinc transporter 8 (ZnT8) and proinsulin, could still be present and trigger the immune response. As this review shows, many known and as yet undiscovered biomolecules may be involved in triggering the pathological pathways that lead not only to T1D but to many other chronic immune-regulated inflammatory and autoimmune diseases. Causality studies could focus on the T-cell response against the specific epitopes of MAP homologous to ZnT8 and proinsulin (in addition to the B-cell studies) in animal models, in patients at risk for T1D (presence of one or more autoantibodies) and in T1D patients at onset. Mycobacterium tuberculosis releases triggers just like other mycobacteria. This is evidenced by the high number of patients with comorbidity of tuberculosis and diabetes as well as other chronic inflammatory diseases [23][24][25]. For further references, see Box 3 (M. avium subsp. paratuberculosis) in [26] and a review by Hruska and Pavlik [27].

T1D and Mycobacterium avium Subsp. Paratuberculosis (MAP) MAP is a fastidious, slow-growing mycobacterium which infects a large range of ruminant species and is responsible for paratuberculosis (also known as Johne's disease). It belongs to the M. avium complex, a group of slow-growing mycobacteria including M. avium, M. intracellulare and M. chimaera, grouped on the basis of gene sequence similarity. M. avium has four subspecies, namely M. avium subsp. avium, M. avium subsp. hominissuis, M. avium subsp. paratuberculosis and M. avium subsp. silvaticum [28]. Opportunistic infections caused by M. avium are among the most frequent in patients suffering from chronic respiratory infections and/or from acquired immunodeficiency syndrome [28]. However, only MAP has been associated with a large number of autoimmune and inflammatory diseases, and investigations all over the world are under way [29]. The target of MAP is the digestive system; infection leads to weight loss and might cause the death of the animal. This subspecies also infects wild animals such as red deer, rabbits or buffalo, which raises serious ecological concerns [28]. In humans, MAP has been associated with a long list of inflammatory and autoimmune diseases: Crohn's disease, sarcoidosis, Blau syndrome, Hashimoto's thyroiditis, autoimmune diabetes (T1D), multiple sclerosis (MS), rheumatoid arthritis, lupus, and Parkinson's disease [26,[29][30][31][32][33][34][35][36][37][38][39].
The inhalation of aerosolized MAP-contaminated manure by women in the first four weeks of pregnancy, and intrauterine transmission to the embryo, may be responsible for the development of anencephaly in the fetus. The cluster of babies with anencephaly, with a reported rate of over 60 times the national average is likely associated with application of cow's feces to agricultural fields at the rate of 1000 gallons per acre in the rural Yakima Valley community in state Washington, USA [39]. Furthermore, MAP as a causative agent of bovine paratuberculosis, is shed in cow's milk and has been shown to survive pasteurization. According to many epidemiologic studies, MAP is presented as a causative agent of T1D [40][41][42]. In the Sardinian and Italian population, MAP has been previously associated with T1D as an environmental agent triggering or accelerating the disease [41][42][43]. Human exposure to MAP has increased owing to the expansion of the dairy industry in developed countries as a result of dairy cattle breeding [44]. Humans are exposed to MAP also from milk and cheese from sheep and goats which suffered from paratuberculosis. For instance, Sardinia contains high numbers of MAP infected sheep and cattle, which excrete the bacteria into the environment where they persist within the protists. Environmental cycling facilitates not only the re-infection of livestock through deposition of extracted slurry from water treatment plants but also the dispersal of MAP via aerosols directly infecting human populations [45,46]. Devitalized MAP and their decay products might be therefore present in pasteurized milk and in infant formula [47,48]. The often-cited protective effect of breastfeeding tends to indicate the burden of MAP components from formula on the baby. Municipal tap water is colonized with nontuberculous mycobacteria (NTM) which are resistant to disinfectant, heavy metal, and antibiotics [49,50]. Hence, normal water treatment processes such as filtration and chlorination prefer mycobacteria organisms by killing off their competitors. Mycobacteria grow on tap water pipes in biofilms and on plastic water bottles and survive in amoebas in soil and water [31,[51][52][53][54]. Additionally, surface water and soil can expose humans to mycobacteria. Vegetables from hydroponic plants and pork from farms affected by M. avium pig infections may also be involved in human exposure to mycobacteria. Comprehensive data regarding the ecology of mycobacteria and their impact on human and animal health were published by Kazda et al. [55], by Falkinham [49,52], and by Hruska and Pavlik [27]. Epitopes, Proteins and Genes of Nontuberculous Mycobacteria Associated with T1D T1D is characterized by uplifted immune responses targeted against several autoantigens including heat shock protein 60 (Hsp60), insulin, insulinoma-associated protein-2 (IA-2), and pancreatic glutamic acid decarboxylase (GAD65) [56]. In 1994, Rabinovitch [57] noticed that destruction of pancreatic islet β-cells and insulin-dependent diabetes mellitus (IDDM) has a genetic basis with modulating effects of environmental factors. Microbial agents including certain viruses and extracts of bacteria, fungi, and mycobacteria may have a protective action against diabetes development. 
Protective effects of administering microbial agents, adjuvants, and a β-cell autoantigen (GAD65) may result from activation of a Th2 subset of T-cells that produce the cytokines IL-4 and IL-10 and consequently downregulate the Th1-cell-mediated autoimmune response. Cow's milk feeding is considered an environmental trigger of immunity to insulin in infancy. Since cow's milk contains bovine insulin, the development of insulin-binding antibodies may occur in children fed with cow's milk formula. This immune response to insulin may later be diverted into auto-aggressive immunity against β-cells in some individuals [58]. Molecular mimicry of mycobacterial antigens with human self-epitopes has been revealed, which supports the theory of MAP being an infectious trigger of T1D [46,59]. According to Songini et al. [60], MAP infects the intestine, and activated T-cells migrate to the pancreatic lymph nodes and to the pancreas, where they attack β-cells which present antigens structurally similar to those of MAP. Thus, MAP mimicry triggers an autoimmune process. For instance, the mycobacterial heat-shock protein of MAP (Hsp65) and GAD65 expressed in the β-cells of human islets have similar amino acid sequences and conformation [32,56,60,61]. Hsp65 is a 65 kDa protein which participates in cytokine expression and stabilizes cellular proteins in response to stress or injury. Hsp65 is presented to human CD4+ T-cells in association with multiple HLA-DR molecules [62]. Therefore, it has been proposed that MAP, being the source of mycobacterial Hsp65, is an environmental trigger for T1D [32,63]. Recognition of ZnT8, proinsulin, and homologous MAP peptides in Sardinian children at risk of T1D precedes detection of classical islet antibodies. ZnT8, which is related to insulin secretion, has recently been identified as an autoantibody antigen in T1D. ZnT8 is a membrane protein involved in Zn2+ transport, expressed in insulin-containing secretory granules of β-cells; it might participate in insulin biosynthesis and release and may subsequently be involved in deteriorated β-cell function [64]. MAP3865c, a MAP cell membrane protein, has a relevant sequence homology with ZnT8. Furthermore, antibodies recognizing MAP3865c epitopes have been found to cross-react with ZnT8 in T1D patients [65]. Masala et al. [66] previously also reported that MAP3865c and ZnT8 homologous sequences were cross-recognized by antibodies in Sardinian T1D adults. The MAP3865c 281-287 epitope emerged as the major C-terminal epitope recognized. Similarly, Niegowska et al. [67,68] observed increased serum reactivity to ZnT8 transmembrane regions and their homologous MAP peptides (MAP3865c) in both Sardinian and Italian cohorts. Niegowska et al. [68] conducted the first study aimed at the evaluation of MAP as an infective agent in LADA pathogenesis. Serum reactivity against MAP-derived peptides and their human homologs of PI and ZnT8 was analyzed in the Sardinian population. Significantly elevated positivity for MAP/proinsulin was detected among LADA patients [68]. Rosu et al. [46] confirmed the association of MAP with T1D through the detection of an mptD protein (MAP3733c) in blood plasma from T1D patients using a phage-specific sandwich ELISA method. The mptD protein is a membrane protein expressed during infection stages and a significant virulence determinant, since the mptD gene emerged as an important factor for the iron uptake and metabolic adaptation of MAP required for persistence in the host [69,70]. 
Alongside mptD protein, Cossu et al. [69] detected a strong immune response against MAP3738c recombinant protein in T1D sera using ELISA method. It is assumed that MAP3738c protein is involved in mycolic acid biosynthesis as cyclopropanation enzyme or methyltransferase on methoxy-mycolic acids. Positive humoral immune response was revealed only in sera from T1D patients and not in T2D subjects. Accordingly, Niegowska et al. [41,67] also proved the role of molecular mimicry through which MAP might contribute to T1D development. Since the MAP peptides identified within different proteins (MAP2404c, MAP1,4-α-glucan branching protein and MAP3865c) were characterized by sequence homology with proinsulin (PI) and ZnT8, they evaluated levels of antibodies directed against MAP epitopes and their human homologs in children from mainland Italy. Indirect ELISA to detect antibodies specific for MAP3865c/ZnT8, MAP1, 4αgbp/PI and MAP2404c/PI homologous peptide pairs was performed with positive results. Moreover, intact bacilli were isolated from certain blood samples. A lipid-rich cell wall is involved in the virulence of mycobacteria. Moreover, the lipids exposed on the bacterial surface are highly antigenic [28]. Unlike other mycobacteria, MAP does not produce glycopeptidolipids on the surface of the cell wall but rather a lipopentapeptide (L5P) which was demonstrated to be unique for this subspecies. This L5P antigen contains a pentapeptide core, in which the N-terminal end is linked with a fatty acid [71]. Biet et al. [28] showed in their study that L5P induces a strong host humoral response involving IgM, IgG 1 , and IgG 2 antibodies. Niegowska et al. [67] compared titers of the previously detected antibodies with serum-reactivity to L5P. It was discovered that anti-L5P antibodies appeared constantly in individuals with a stable immunity against MAP antigens. The overall coincidence in positivity to L5P and the other MAP epitopes exceeded 90%. Other anti-MAP antibodies which were investigated using ELISA, were those against heparin-binding hemagglutinin (HBHA) and glycosyl transferase (GSD) [44,56]. Molecular characterization of the recombinant HBHA from the MAP was reported by Sechi et al. [72] HBHA plays a significant role in the adaptation of MAP to the gastrointestinal tract of ruminants. It is an adhesin important for binding of the mycobacteria to epithelial cells and other non-phagocytic cells via heparin and heparan sulfate, present on the eukaryotic cell surface [73]. GSD is an enzyme which catalyzes the glycosidic bond formation of many oligosaccharides and glycoconjugates implicated in mycobacterial cell-wall biosynthesis [74]. Sechi et al. [44] and Rani et al. [56] observed in their studies significant humoral immune responses to recombinant HBHA and GSD, and the MAP whole-cell lysate in T1D patients. However, these responses could not be indicative of an active infection since HBHA and GSD are encoded by a wider range of mycobacteria, which raised an issue of cross-reactivity with tubercle bacilli in the bacillus Calmette-Guerin (BCG) vaccinated individuals [56]. Some of the mentioned antigens/epitopes involved in T1D pathophysiology are expressed not only in MAP, but also in other nontuberculous mycobacteria that may contribute to T1D development as well. For instance, Hsp65 is also produced in M. bovis [75,76]. Horváth et al. [75] measured the epitope specificity of antibodies against peptide p277 of human Hsp60 and of M. bovis Hsp65 as well as for human Hsp60 and M. 
bovis Hsp65 proteins by ELISA. Both anti-human and anti-M. bovis peptide p277 antibody levels were significantly higher in the diabetic children. Antibodies to two epitope regions on Hsp60 and Hsp65 were detected in high titers; the first region was similar to the sequence found in GAD65, whereas the second one overlapped with the p277 epitope. A major adhesin of mycobacteria, HBHA, is encoded by a wide range of mycobacterial species. Lefrancois et al. [73] compared sequence alignments of HBHA from various mycobacteria. The sequences were similar in HBHA from MAP, M. avium subsp. hominissuis and M. bovis, while M. smegmatis showed a slightly different structural arrangement. Similarly, GSD can also be found within different mycobacteria [74]. The control of mycobacterial infection depends on the recognition of the pathogen and the activation of both the innate and adaptive immune responses. Toll-like receptors (TLR) were shown to play a critical role in such recognition [77]. TLRs are mainly found on the surface of macrophages and dendritic cells, but they are also expressed by tissue cells in the central nervous system, the kidneys, and the liver [78]. Mycobacterial cell-wall components such as PIM, LM, and LAM are recognized mainly by TLR2 in association with TLR1/TLR6, or by TLR4, resulting in rapid activation of cells of the innate immune system [77]. The balance between PIM, LM, and LAM synthesis by pathogenic mycobacteria might provide pro- or anti-inflammatory, immunomodulatory signals during primary infection but also during latent infection [62,77]. In general, these signals lead to an increased concentration of proinflammatory cytokines, antimicrobial peptides, and type I IFNs in the cellular microenvironment. These events, involving infection and an increase in TLR activity, lead to a release of islet antigens, which are picked up by dendritic cells and presented to pathogenic T-cells. These processes are followed by β-cell destruction leading to T1D development [78,79]. According to Adamczak et al. [78], Vitamin D seems to protect against T1D by reducing the TLRs' level of activation. It was found that the expression of TLR2 and TLR4 decreases with increasing 25-hydroxycholecalciferol serum concentrations. Analogously, the expression of the Vitamin D receptor and of Vitamin D-1-hydroxylase is upregulated by activation of TLRs. Therefore, the supplementation of Vitamin D might be one such potential intervention. Recently, it has been found that single nucleotide polymorphisms in protein tyrosine phosphatase non-receptor type 2 and 22 (PTPN2/22) affect several immunity genes, leading to an overactive immune system which is involved in the pathogenic process of inflammatory autoimmune disorders. Genes for PTPN2/22 are expressed in T-cells, β-cells, and a majority of epithelial cell types, including synovial joint and intestinal tissues, where they control apoptosis and chemokine production [80,81]. These genes may thus have a fundamental role in the development of immune dysfunction, and their polymorphisms are associated with rheumatoid arthritis, T1D, systemic lupus erythematosus, or Crohn's disease [82]. A single nucleotide polymorphism in PTPN22, rs2476601, is associated with increased risk of T1D, reduced age at onset, and reduced residual β-cell function at diagnosis. It affects T-cell receptor and B-cell receptor signaling as well as other adaptive and innate immune cell processes [83]. Sharp et al. 
[80] demonstrated that single nucleotide polymorphisms in PTPN2/22 are found significantly in patients with Crohn's disease and lead to an increase in T-cell proliferation due to loss of negative regulation, an increase of pro-inflammatory cytokines such as IFN-γ, and an increase of susceptibility to mycobacterial infections. MAP DNA was detected in 61% of patients with Crohn's disease in comparison with only 8% of healthy controls. In the field of autoimmunity, the most frequently mentioned is the PTPN22-C1858T polymorphism. A correlation between PTPN22-C1858T polymorphism and increased risk of autoimmune diseases as well as bacterial infections was shown in genome-wide association studies. Li et al. [84] showed in their study that the PTPN22-C1858T polymorphism is relevant to increased susceptibility to the infection of M. leprae. The BCG vaccine has recently shown a therapeutic effect for T1D. The BCG vaccine is an attenuated form of M. bovis originally developed 100 years ago for tuberculosis prevention. Repeated BCG vaccinations in long-term diabetics can restore blood sugars to near normal by resetting the immune system (by restoring regulatory T-cells and selectively killing pathogenic T-cells) and by increasing glucose utilization through a metabolic shift from oxidative phosphorylation, a state of minimal sugar utilization, to aerobic glycolysis, a high glucose-utilization state [85]. Kuhtreiber et al. [86] observed in their study that BCG vaccination of long-standing T1D patients, followed by a booster in 1 month, resulted in the control of blood sugar seen after a delay of 3 years. After year 3, BCG lowered hemoglobin A1c to near normal levels for the next 5 years. Dow et al. [32] proposed an alternative explanation that the positive response to BCG in T1D individuals is due to a mitigating action of BCG upon MAP that allows recovery of pancreatic function. Klein et al. also evaluated the inhibitory effect of BCG on T1D. Their results indicate that early post birth vaccination and boosting is sufficient to reduce T1D prevalence of respective cohorts. According to their work, vaccination stimulates T-regulatory cells and natural suppressor cells that inhibit the autoimmune (diabetogenic) response against β-cells [87]. Dow and Chan [88] recently reported that BCG has shown benefit in T1D mellitus and multiple sclerosis, autoimmune diseases that have been linked to MAP via Hsp65 and disease-specific autoantibodies. Obviously, a number of factors lend credence to the notion of a pathogenic link between environmental mycobacteria and Sjogren's syndrome (SS), including the presence of antibodies to mycobacterial Hsp65 in SS, the homology of Hsp65 with SS autoantigens, and the beneficial effects seen with BCG vaccination against certain autoimmune diseases. Furthermore, given that BCG may protect against NTM, has immune modifying effects, and has a strong safety record of billions of doses given, BCG and/or antimycobacterial therapeutics should be studied in SS [88]. Conclusions Mycobacteria as a source of triggers of chronic inflammatory and autoimmune diseases. One of the possible T1D triggers is often mentioned to be MAP infection from the environment or from contaminated food. MAP molecular mimicry can induce the production of antibodies and thus affect the production of insulin. This etiology is fully consistent with the definition of T1D as an autoimmune disease. 
However, many molecules, epitopes, genes, and metabolic products produced or released also from other mycobacteria are associated with the development of T1D. Many of them belong to the group of the nontuberculous mycobacteria that people commonly encounter. NTM often colonize drinking water supplies and are inhaled as an aerosol mainly in showers and whirlpools. They usually do not multiply in the affected organism and do not cause pathological changes in the lungs, skin, and internal organs. However, if the long-term and massive exposure to mycobacteria encounters the host organism with a particular genetic and health predisposition, mycobacterial breakdown products may act as triggers of chronic inflammatory and autoimmune diseases, including T1D. Ongoing diseases that may have the same etiology significantly support the risk. Triggers from other pathogens may be also related to T1D as they can affect the host organism due to persistent acute infectious diseases during which these triggers may stimulate the immune system to develop chronic disease later in life. Civilization Factors Involved in Human Exposure to Mycobacteria Chronic inflammatory and autoimmune diseases are referred to as diseases of civilization. Their incidence grows in parallel with factors that are accompanied by increasing human exposure to NTM. Urbanization has significantly increased the number of people using drinking water from municipal distribution that is used for showering and hydrother-apy during which an aerosol is formed, and mycobacteria are inhaled. The severity of the consequences is associated with genetic factors, thus individuals who already have a chronic inflammatory disease or their blood relatives may be at greater risk. Exposure to mycobacteria is also associated with a change in lifestyle. The popularity of fast food based on ground beef and pork has increased and has become part of the regular food menu. Therefore, meat contaminated with MAP or M. avium spp. can be for its consumers a source of mycobacterial triggers. New technologies in vegetable growing such as hydroponics in conjunction with fish farming and aeroponics may also increase consumer exposure to mycobacteria. Children who cannot be breastfed may be exposed to dead and live mycobacteria in infant formula. Additionally, the popularity of baby swimming may contribute to the mycobacterial exposure of children during the maturation of their immune systems. The number of people who frequently use indoor swimming pools has increased in general. Daily use of whirlpools, air humidifiers and water mist for cooling can be dangerous if household water is heavily colonized with mycobacteria. Jogging in cities where the air contains high concentrations of airborne dust can cause the inhalation of mycobacteria, for which solid nanoparticles are a suitable carrier. Civilization development has also influenced animal production technologies and international trade with food and animals. In a short period of several decades, paratuberculosis and the associated contamination of bovine meat and milk have spread globally. However, wild ruminants, wild pigs, camels, and buffaloes suffer from paratuberculosis as well. In some areas with a high density of large cattle farms, humans may be endangered by aerosols generated by spraying liquid manure on the field. Chronic inflammatory and autoimmune diseases affect not only countries with advanced economies, but increasingly also developing countries due to the influence of civilization factors. 
Nontuberculous mycobacteria are thus a global health problem. Highlights for Intervention and Control of T1D Reducing human exposure to nontuberculous mycobacteria will be a challenging and long-term task. It is crucial to thoroughly understand the triggers of immune-mediated chronic inflammatory and autoimmune diseases by physicians and to expand the study programs at universities for students of medicine, veterinary medicine, environmental studies, and food and agriculture technology. The active participation of the public is needed for successful intervention and implementation of adequate measures. Therefore, it is necessary to disseminate the knowledge patiently so that people would be able to reduce their NTM exposure effectively. For instance, breastfeeding should be preferred to formula feeding if possible. If domestic water is heavily colonized by NTM, showering should be restricted for infants. Households should have an opportunity to control the water at an affordable price. NTM in the environment, in water, and in food are not subjected to any control. It is necessary to determine the limits on the maximum permitted number of NTM in drinking water, in the air, and in the water of indoor swimming pools, fitness centers, and hydrotherapy facilities. Operators of these facilities should disclose to their customers the results of the inspections carried out by themselves. Strict supervision by veterinary inspections at slaughterhouses, dairies, and during industrial preparation of ground beef and pork should help to identify sources of contamination while appropriate incentives should be applied to reduce the burden. Certain recommendations and orders will need to be included in technical standards and regulations to ensure occupational safety health, food safety and consumer protection. For the semi-quantitative determination of NTM, methods that allow processing of a large number of samples as well as equipment for on-site inspection (care-of-point) based on biosensors are needed [89]. Many scientists with experience in studying MAP as a cause of Crohn's disease have long called for measures to rapidly eliminate MAP from the milk and meat supply through effective MAP control measures including biosecurity and hygiene, vaccination, and testand-cull programs [26,29,36,37,[90][91][92][93][94][95][96][97][98]. Our contribution shows on the example of T1D that interest must be aimed not only at reducing the incidence of Crohn's disease by reducing human exposure to MAP from milk and meat, but also to many other chronic inflammatory and autoimmune diseases. Millions of people around the world suffer from chronic inflammatory and autoimmune diseases. The illnesses of these people and the economic consequences of their absence from the labor market and their costly treatment are therefore a global health and environmental problem that needs to be addressed urgently. All measures to reduce human exposure to mycobacteria should be supported by scientists, clinicians and by macro economists and gradually applied throughout the European Union and G7 countries by competent and well-informed politicians. The combat must be moved from the scientific journals and academic field to parliaments.
7,156.8
2022-03-26T00:00:00.000
[ "Biology", "Medicine" ]
AN IMPROVED DOUBLE HETEROGENEITY MODEL FOR PEBBLE-BED REACTORS IN DRAGON-5 In pebble-bed reactors, the fuel is contained in small grains, which are included in a graphite matrix. Some burnable poison particles may also be present. In this work, an additional ’double heterogeneity’ model is introduced in the DRAGON5 lattice code. The model is based on the legacy work of She (INET) and has been improved to overcome intrinsic limitations. It is based on a simplified physical model whereas the two already existing models were based on the collision probability analysis or on renewal theory. The advantage of this new model is its simplicity to implement. The theory shows that the correction suggested in the original model should not be arbitrary, but a constant equivalent particle fraction of 0.63. Numerical comparison between the models is generally good. However it does not support the theory of a constant equivalent fraction. Additional work is needed to reduce the discrepancy between the models in some cases. CONTEXT AND INTRODUCTION In pebble-bed reactors, the fuel is contained in small grains, which are included in a graphite matrix. Here, the fuel dispersion reactors is studied and a new 'double heterogeneity' model is introduced in the DRAGON5 lattice code, based on the legacy work of She (INET). With a deterministic approach, a direct and exact geometry representation would be too expensive, and a homogenized mixture would be highly inaccurate. Several models have been introduced in the past, one based on the collision probability analysis (Hébert [1]), one based on renewal theory (Sanchez and Pomraning [2]), and finally, one based on a simplified physical model (She [3]). The later is very easy to implement, but some of its extensions to resonance self-shielding and fuel depletion had to be tested. DRAGON5 code already included the first two models, the later was thus programmed as proposed by She. The advantage of using the same code is that no bias is introduced when comparisons are done. The only differences come from the calculations of the equivalent cross-sections in the composite material, not the resonance treatment or the tracking options for examples. In this paper, the model of She is briefly presented in section 2. The computation of the source term needed to solve the transport equation and the flux is detailed both for single type and multiple types of particles in the composite. Then, in section 3 an improvement to a major limit of the model is proposed. Test cases results are presented in section 4, before conclusions are drawn. THEORY As mentioned previously, several approaches can be found in the literature to take into account properly the double heterogeneity of geometries such as in pebble bed. The main goal is to compute the average cross-sections in the composites and to be able to obtain the flux in each of their components. This section presents the theory and model proposed by She et al. [3] harmonized with the notation used by Hebert [4]. It also defines several quantities needed by the different steps of DRAGON calculations and for the improvement presented in Section 3. The model proposed by She is introduced in a subroutine of DRAGON which computes the equivalent cross-sections of a composite. This subroutine is called by the self-shielding modules as well as the flux solver. Unique Type of Microstructure in a Composite First, the case with only one type of microstructure in the composite is studied, and the basic assumptions are presented. 
In addition to Hébert's notation, and to remove any possibility of mistakes, quantities carry the superscript '1' when referring to a composite with a unique microstructure. To compute the different properties of the composite, She et al. [3] make the assumption that the composite i behaves as a cylinder with a spherical grain in its centre (Figs. 1 and 2 in Ref. [3]). The neutrons travel through the cylinder parallel to its axis. Thus, they enter on one round side, travel through the matrix, then the grain of type j (all its layers) and the matrix again, to finally exit on the other round side. Some of them collide along the way, some do not, which defines p^1(i, j), the probability for a neutron to go through without collision. Then, the total equivalent macroscopic cross-section Σ̃^1_i in a composite i is given by: where the length of the cylinder, L^1(i), is chosen to keep the volume ratio of microstructure / matrix. To simplify the notations, all references to the grain type j are removed in this section, since only one type of grain is considered. The conservation of reaction rates equation (Eq. 1 of [3]) is then given by: where the indexes i, dil and k represent, respectively, the composite, its matrix and the layer of the microstructure j, φ̄ is the average flux in the whole composite, φ_dil(i) and φ_k(i) are the fluxes in the different subregions of the composite, and V, V^1_dil and W_k are the total, matrix and microstructure subregion volumes, respectively. This can be rewritten as follows: where f are the volume fractions of the different subvolumes, with f^1(i) = Σ_k f^1_k(i), and S are the self-shielding factors of the matrix and of the layers of the microstructure j. Moreover, the total equivalent macroscopic cross-section Σ̃_i in a composite i can also be expressed as the average of the equivalent cross-sections of all its subvolumes: A comparison of Eqs. 2 to 4 shows that the spatial self-shielding factors are: As She et al. mentioned in [3], the self-shielding can be applied to all reaction types. Thus, using notations from [4], the within-group scattering equivalent macroscopic cross-section Σ̃^w_i in a composite i is given by: Applying the definition of the spatial self-shielding factor in Eq. 5 to the within-group scattering equivalent microstructure cross-sections Σ̃^w_k(i, j), and using Eqs. 24 and 25 of Ref. [3], we can write: where p^1_k(i, j) and p^1_dil(i) are referred to as p_kj in Ref. [3]. Similarly to p^1(i), p^1_k(i) is obtained by following the path the neutrons have to travel to have their first collision in layer k of microstructure j of composite i. Note that the meanings of indexes j and k are switched with respect to Ref. [3]. Moreover, the conservation of neutrons in terms of collision probabilities can be rewritten as follows: Using Eqs. 7 and 8 in Eq. 6, we obtain: By taking a closer look at the previous equation, we can see that the within-group scattering equivalent macroscopic cross-section is equal to the total equivalent cross-section times a factor. This factor is defined as a weighted average, over all microstructures (and the matrix), of the ratio of within-group scattering to total cross-section. The weight is the probability of first collision in each subregion. Note that the weights (p^1_k(i) and p^1_dil(i)) are normalized by their sum, which is the total collision probability (1 − p^1(i)). By definition, the source is the product of a cross-section (scattering or fission) and a flux. 
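Since the explicit formulas of Ref. [3] are not reproduced in this text, the short Python sketch below only illustrates one plausible reading of the cylinder picture described above: straight-line transmission through the matrix and the concentric grain layers with exponential attenuation, and an equivalent cross-section recovered from the transmission probability over the cylinder length. The function name, the assumption that the cylinder radius equals the grain outer radius, and the inversion Σ̃ = −ln(p)/L are illustrative assumptions, not She's exact equations.

```python
import math

def equivalent_total_xs(sigma_dil, layer_radii, layer_sigmas, f_grain):
    """Hypothetical sketch of the single-grain-type cylinder model.

    sigma_dil    : total macroscopic cross-section of the matrix (1/cm)
    layer_radii  : outer radii of the concentric grain layers, innermost first (cm)
    layer_sigmas : total macroscopic cross-section of each layer (1/cm)
    f_grain      : volume fraction of the grain (all layers) in the composite
    """
    r_out = layer_radii[-1]
    # Cylinder radius taken equal to the grain outer radius; the length is fixed
    # so that the grain/matrix volume ratio of the composite is preserved:
    # f_grain = (4/3 pi r^3) / (pi r^2 L)  =>  L = 4 r / (3 f_grain)
    length = 4.0 * r_out / (3.0 * f_grain)

    # Chord of each layer along the axis through the grain centre
    # (innermost layer: full diameter; outer layers: twice the shell thickness).
    chords, r_prev = [], 0.0
    for r in layer_radii:
        chords.append(2.0 * (r - r_prev))
        r_prev = r

    # Optical path: matrix before/after the grain plus every layer.
    tau = sigma_dil * (length - 2.0 * r_out) + sum(
        s * c for s, c in zip(layer_sigmas, chords))
    p_no_collision = math.exp(-tau)

    # Equivalent cross-section reproducing the same transmission over length L.
    return -math.log(p_no_collision) / length

# Example with made-up numbers (a fuel kernel with one coating layer in graphite).
print(equivalent_total_xs(sigma_dil=0.4,
                          layer_radii=[0.0125, 0.0175],
                          layer_sigmas=[0.9, 0.5],
                          f_grain=0.07))
```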
When the equivalent cross-section for the composite is computed, the reaction rates are preserved, which means that the sources also are. This leads to the following simple equation for the equivalent source: Note that the source has to be computed with the local flux, NOT the average composite flux, to keep the previous equation valid. Once all the equivalent values are obtained for all composites, the average flux can be computed. To recover the flux in each matrix and in the layers of the microstructures, Eq. 5 is used. Local fluxes can then be used to perform depletion calculations, for example. Note that if there is no microstructure (homogeneous macro-volume), then in Eqs. 10 and 11, p^1_k(i) = 0 and f^1_k(i) = f(i) = 0. Thus, according to Eq. 9, p^1_dil(i) and 1 − p^1(i) are equal, which leads to:
Several Types of Microstructures in a Composite
For a composite with several types of microstructures, She et al. propose the following procedure. First, the main assumption is that the grains do not have a shadowing effect on each other, regardless of their number and type. The matrix can then be split between them according to their volume fraction f(i, j) in the composite i. This procedure defines a representative volume for each microstructure j with a total volume V_eq(i, j), a microstructure fraction f_eq(i, j), and its layer fractions f_eq,k(i, j), with: where f(i) is the volume fraction of all microstructures together, and W_k(i, j) are the volumes of each layer in the microstructure. Since there is no shadowing effect according to the main assumption, the composite equivalent macroscopic cross-section is then given by the average of the equivalent macroscopic cross-sections over all representative volumes V_eq(i, j): Similarly, for the composite equivalent within-group macroscopic cross-section and sources (excluding within-group scattering), we have: It is highly important here to note that the equivalent values Σ̃^1_i(j), Σ̃^{w,1}_i(j) and q^{as*,1}_i(j) are computed with the equivalent fraction f(i) (Eq. 12) and NOT their actual volume fraction. The procedure presented in the previous section is followed for each microstructure independently, before the average is made. Again, once all the equivalent values are obtained for all composites, the average flux can be computed. To recover the flux in each layer of the microstructures, Eq. 5 is used. For the matrix, a similar equation in the representative volume of each microstructure is used. This leads to a flux value potentially different for each microstructure; the average is then performed according to the volume fractions as follows: With Eqs. 12, 13, 16 and 17, the flux distribution can be recovered in all subregions (matrix and layers of microstructures) and used for further calculations such as reaction rates, depletion, etc.
LIMITATIONS AND IMPROVEMENT
As mentioned by She et al. [3], the equivalent volume approach using a cylinder has an intrinsic flaw. Indeed, no verification is done on the length vs. radius of the cylinder. As the authors underlined, this becomes an issue when the microstructure volume fraction becomes small: the cylinder then becomes too long, much longer than the composite itself. To overcome this issue, She et al. propose to increase the radius of the microstructure (R'), as if a layer of matrix mixture were applied around it. The new volume fraction f' of this augmented microstructure is then larger, which means that the length of its cylinder is reduced (L'). 
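To make the correction concrete, the sketch below applies it numerically under two assumptions of our own: the cylinder radius equals the pseudo-grain radius, and the total grain-to-matrix volume ratio of the composite is preserved. Under these assumptions R' = R (f'/f)^{1/3} and L' = 4R'/(3f'); these relations are our reading of the augmented-microstructure idea, not the exact expressions of Ref. [3].

```python
def augmented_cylinder(radius, f_actual, f_min=0.63):
    """Apply a minimum-equivalent-fraction correction to the cylinder model.

    radius   : outer radius of the real microstructure (cm)
    f_actual : real volume fraction of the microstructure in the composite
    f_min    : minimum (equivalent) fraction; 0.63 is the constant suggested
               by the tetrahedron argument, 2/3 would be the geometric maximum
    Returns (augmented radius, cylinder length, equivalent fraction used).
    """
    if f_actual >= f_min:
        # No correction needed: keep the real fraction.
        return radius, 4.0 * radius / (3.0 * f_actual), f_actual

    # Grow the pseudo-grain by a matrix shell so that its fraction in the
    # (shortened) cylinder reaches f_min, while pi*R'^2*L' keeps the original
    # composite volume per grain, i.e. (4/3 pi R^3) / f_actual.
    r_aug = radius * (f_min / f_actual) ** (1.0 / 3.0)
    length = 4.0 * r_aug / (3.0 * f_min)
    return r_aug, length, f_min

# The smaller the real fraction, the stronger the reduction of the cylinder length.
for f in (0.005, 0.05, 0.2, 0.63):
    print(f, augmented_cylinder(radius=0.0175, f_actual=f))
```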
The relative dimensions of the initial and augmented cylinders are presented in Ref. [3]. She et al. showed on an example that this correction is mandatory when the fraction is under 5%. Similar results will be presented later. They also suggest a value between 5 and 10% for the minimum arbitrary volume fraction f'. In the remainder of this section, we explore what a specific arbitrary choice actually means in terms of physics and of the approximations made. On the one hand, as mentioned before, when the volume fraction is small and the length of the cylinder is artificially very long, the neutrons have to travel a very long path through the matrix before and after they reach the microstructure. This leads to excessive and unrealistic absorption in the matrix. On the other hand, the length of the cylinder could be chosen as short as possible while still fitting the pseudo-microstructure, i.e., equal to its diameter. In that case, the pseudo-volume fraction is 2/3. This approach is closer to Hébert or Sanchez, where the authors approximate the representative volume as a sphere of matrix around spherical microstructures. Now, one could ask which approach makes more sense physically. The former may not be physically acceptable, and the latter has the shortest matrix path. To reduce the approximations made in the cylinder model, a better choice should reflect the actual length travelled by a neutron between two microstructures. Even though the general approximation is to suppose that all microstructures are independent and randomly dispersed in the matrix, it would be more realistic if the length of the cylinder were equal to the average distance between microstructures. Then, the relative radii of the microstructure and of the equivalent cylinder would be better suited to represent the proportion of neutrons in the composite that travel across the microstructure or only through the matrix along the cylinder length. The first step is to calculate the average distance between the microstructures of one type. For that, we assume momentarily that the microstructures are arranged on a regular tetrahedron pattern, in which all the closest microstructures are at the same distance. In that case, when the spheres touch each other, the maximum density is obtained: where L is the length of the tetrahedron edges and of the equivalent cylinder we are looking for, V_S is the volume occupied by the sphere in the tetrahedron, and V_T is the tetrahedron volume. For the same edge length, but smaller spheres (radius R), the volume fraction is given by: Thus, for the cylinder shape, the volume fraction f and the equivalent fraction f' are given by: The previous equation shows that the 'best' equivalent volume fraction is actually a constant, close to the maximum.
Test Case Description
To test the implementation and the improvement of the double heterogeneity model presented in the previous sections, pebbles similar to those used in the HTR-10 reactor [6] are considered. They consist of a spherical composite of carbon matrix with particles. The composite is 5 cm in diameter, covered by a layer of the same carbon matrix (0.5 cm thick). The particles include a core of UO2 fuel with a diameter of 250 µm. Similar cases with additional burnable poison particles of ^10B4C are also considered. Both particle contents and geometries are described completely in Ref. [5].
Simulations
Computations were performed with DRAGON using a Jeff 3.2 library and the SHEM361 energy group structure. 
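Before turning to the numerical results, here is a small, hedged sketch of the multi-type averaging procedure described earlier: the matrix is split among the grain types in proportion to their volume fractions, each representative volume is treated with the single-type routine (it reuses the hypothetical equivalent_total_xs helper from the sketch above, and the equivalent fraction f(i) of all microstructures together), and the composite value is the volume-weighted average. All names and the input layout are illustrative assumptions.

```python
def composite_average(grain_types, sigma_dil):
    """grain_types: list of dicts with keys
         'f'      -> volume fraction of this grain type in the composite
         'radii'  -> outer radii of its layers (cm)
         'sigmas' -> total cross-sections of its layers (1/cm)
    Returns the composite equivalent total cross-section (no shadowing assumed).
    """
    f_all = sum(g['f'] for g in grain_types)   # all microstructures together
    sigma_eq, v_eq_total = 0.0, 0.0
    for g in grain_types:
        # Representative volume of this type: its grains plus a share of the
        # matrix proportional to f(i, j); inside it the grain fraction is f(i).
        v_eq = g['f'] / f_all                  # relative representative volume
        sigma_j = equivalent_total_xs(sigma_dil, g['radii'], g['sigmas'], f_all)
        sigma_eq += v_eq * sigma_j
        v_eq_total += v_eq
    return sigma_eq / v_eq_total

fuel   = {'f': 0.06, 'radii': [0.0125, 0.0175], 'sigmas': [0.9, 0.5]}
poison = {'f': 0.01, 'radii': [0.0050, 0.0060], 'sigmas': [5.0, 0.4]}
print(composite_average([fuel, poison], sigma_dil=0.4))
```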
The results for pebbles with a single type of particle are presented in Table 1. Several amounts of uranium particles (w in grams per ball) were simulated with the methods of Hébert (HEBE), Sanchez (SAPO) and She (SLSI). For the SLSI method, several minimum equivalent fractions of particles were tested: none, 5%, 10%, 20%, 35% and 63%. The results show that the Hébert and Sanchez methods give very similar results. The minimum fraction has a large influence on the results (several hundred pcm between 5% and 63%). Similar tests performed with XPZ with ENDF-B7.0 by the author of Ref. [5] show the same phenomenon. When compared to the results of two Monte-Carlo based codes (SERPENT with Jeff 3.1.1 and RMC with ENDF-B7.0), the theoretical fraction of 63% calculated previously does not always give the closest results. By interpolation of the results, an approximate equivalent fraction of 1.5 times the real fraction gives the same results as SERPENT. However, when compared to the XPZ results, an equivalent fraction between 5 and 10% is the closest. Table 2 presents the results for the simulations with burnable absorbers: B4C particles*. In this case, the RMC results presented in Ref. [5] are taken as reference. The larger the amount of absorbent is, the larger the discrepancy becomes. The influence of the minimum equivalent fraction is even more important in these cases, where the theoretical equivalent fraction of 63% gives the closest results.
CONCLUSIONS
The method developed by She to simulate the double heterogeneity in composites, such as in pebble-bed fuel, has been successfully implemented in DRAGON. Moreover, the limitations of the method regarding small fractions of particles in composites have been studied. In this paper, it has been demonstrated theoretically that the equivalent fraction used to define the representative volume is actually a constant close to the maximum that a cylinder approach can allow. The numerical results support the theory and the improvement proposed in this work for cases with larger amounts of absorbent. The general comparison between the different methods shows that they can be in agreement as long as the amount of absorbents (either fuel or poison) is not too large, and that the equivalent fraction is chosen adequately for the case. This observation actually represents a drawback to the theory that a constant fraction should be taken. More work is in progress to identify the source of the discrepancy. However, since it increases with the fuel or burnable absorber content, we can suppose that the approximation of independent particles may not be valid in those cases.
* Note that a composition error of the B4C was introduced in the numerical results of [5] and [3]. This error was intentionally reproduced here to be able to compare our results with those found in the literature.
* RMC used as reference: k∞ and Δk∞ = (k∞^HEBE − k∞) × 10^5 are given underneath.
3,995.6
2021-01-01T00:00:00.000
[ "Engineering", "Physics" ]
A Fresh Look at a Well-Known Solid: Structure, Vibrational Spectra, and Formation Energy of NaNH2
Sodium amide (NaNH2) in its α form is a common compound that has recently seen renewed interest, mainly for its potential use as a solid-state hydrogen storage material. In this work, we present a synergic theoretical and experimental characterization of the compound, including novel measured and simulated vibrational spectra (IR and Raman) and X-ray diffraction patterns. We put forward the hypothesis of a low-temperature symmetry breaking of the structure to space group C2/c, while space group Fddd is commonly reported in the literature and experimentally found down to 80 K. Additionally, we report a theoretical estimate of the heat of formation of sodium amide from ammonia, equal to −12.2 kcal/mol at ambient conditions.
■ INTRODUCTION
The typical reaction that takes place leads to the formation of amides MNH2 (M = alkali metal), and it can be schematized as M + NH3 → MNH2 + 1/2 H2 (1). This reaction has been characterized from different points of view, from kinetics [7] to the equilibrium constant [8], from catalysis [7,9] to electrical conductivity [1], and so forth. NaNH2 can also act as an intermediate in the decomposition of ammonia to nitrogen and hydrogen [10−21]. Moreover, sodium is not as critical a raw material as lithium. Sodium amide has been characterized from different points of view: structural and electronic properties [22−27] have been investigated together with vibrational properties [23,28,29], both at ambient conditions and at high pressures. Nonetheless, the number of theoretical studies of solid NaNH2 does not seem to be very large to date [23,25,30]. In this paper we intend to partly fill this gap by (i) studying its structural stability and actually unveiling a possible low-temperature symmetry breaking; (ii) simulating IR and Raman spectra, with subsequent vibrational mode assignment; and (iii) simulating powder XRD patterns. All computed properties are complemented by and compared with new, original experimental data. Furthermore, we also evaluate computationally the heat of reaction of eq 1 at ambient conditions and at the athermal limit. The paper is organized as follows: in the next section, experimental and computational details are reported, followed by the results of our work, unveiling a symmetry breaking from the Fddd to a C2/c structure; powder X-ray diffraction patterns, heat of reaction calculations, as well as vibrational spectra (IR and Raman) are discussed; in the last section, final conclusions of our work are summarized.
■ METHODS
Computational Details. We used a development version of the CRYSTAL23 code [31] for all the calculations, which adopts atom-centered Gaussian-type functions, along with the PBE0 [32] hybrid exchange-correlation functional. Van der Waals dispersion interactions were accounted for through the empirical DFT-D4 method [33−35], which improves upon the D3 dispersion correction scheme, especially for ionic systems. The adopted basis sets are pob-TZVP-rev2 [36] for N and H and pob-TZVP for Na [37]. The integration over the Brillouin zone in reciprocal space was performed using an 8 × 8 × 8 Monkhorst−Pack grid for NaNH2, a 12 × 12 × 12 grid for Na (considering a 2 × 2 × 2 supercell), and a 6 × 6 × 6 grid for NH3. The thresholds that control the five truncation criteria (T_i) of the Coulomb and exchange infinite lattice series have been set to 7 (T_1−T_4) and 25 (T_5). The threshold for convergence on the total energy has been set to 10. 
The vibrational mode frequencies are evaluated according to the harmonic approximation, the Hessian is evaluated as the numerical derivative of first-order analytical gradients, and intensities are computed through a coupled-perturbed scheme [38].
Experimental Details. From the experimental side, all work was carried out trying to exclude moisture and air, in an atmosphere of dried and purified argon (5.0, Praxair), using high-vacuum glass lines and a glovebox (MBraun). Liquid ammonia was dried by storage over Na. The glass vessels were flame-dried under fine vacuum several times before utilization.
Synthesis. Na (Acros, >99.5%) was freed from any crusts under hexanes and placed into a flame-dried Schlenk vessel under Ar. After pumping off any residual hexanes, the Na was melted in vacuum using a Bunsen burner, slowly poured into an attached glass ampule, and flame-sealed. Any hydroxides or oxides present stuck to the glass surface of the Schlenk vessel, and pure Na flowed into the ampule. A flame-dried borosilicate glass ampule with 8 mm inner diameter was charged in the glovebox with Na metal (12 mg, 0.5 mmol); the ampule was closed using a glass valve, taken out of the glovebox, and attached to a Schlenk line for the work with anhydrous NH3. In an Ar counter stream, a trace amount of rust was added to catalyze the reaction. The ampule was evacuated and cooled to −78 °C using dry ice/isopropanol; ca. 2 mL of NH3 was distilled into it, and the Na dissolved first with bronze and then with blue color. The ampule was then cooled to liquid N2 temperature and flame-sealed under vacuum. The sealed-off tube was stored for 6 h at room temperature, during which the solution became colorless and colorless NaNH2 precipitated in quantitative yield. To grow crystals large enough for the diffraction experiment, the flame-sealed ampule was placed into a heating block at 40 °C for 5 days. The ampule was cooled to liquid nitrogen temperature and cut open under Ar, and the residual NH3 evaporated at room temperature and then under vacuum. NaNH2 was obtained as a white powder in quantitative yield.
Powder X-ray Diffraction. The sample was filled into a predried borosilicate glass capillary with a diameter of 0.3 mm. The powder X-ray pattern was recorded with a StadiMP diffractometer (Stoe & Cie) in the Debye−Scherrer geometry. The diffractometer was operated with Cu−Kα1 radiation (1.5406 Å, germanium monochromator) and equipped with a MYTHEN 1K detector. The diffraction pattern was indexed using the WinXPOW suite [39].
IR and Raman Vibrational Spectroscopy. Infrared spectra were measured on a Bruker Alpha Platinum FT-IR spectrometer using the ATR Diamond module with a resolution of 4 cm−1. The spectrometer was located inside a glovebox under argon (5.0, Praxair) atmosphere. For data collection, the OPUS 7.2 software was used [40]. The Raman spectra were measured at room temperature with a Monovista CRS+ confocal Raman microscope (Spectroscopy & Imaging GmbH) using a solid-state laser (488/532/633 nm) and a 300 grooves/mm grating (low-resolution mode, fwhm: <5.50 cm−1 at 488 nm, <4.62 cm−1 at 532 nm, <3.25 cm−1 at 633 nm) [41]. The sample was measured inside a borosilicate glass ampule.
X-ray Structure Determination at 80 K. 
Single crystals were selected under a predried argon stream in perfluorinated polyether (Fomblin YR 1800, Solvay Solexis) and mounted using the MiTeGen MicroLoop system at ambient temperature. X-ray diffraction data were collected using the monochromated Cu−Kα (λ = 1.54186 Å) radiation of a Stoe StadiVari diffractometer equipped with a Xenocs Microfocus Source and a Dectris Pilatus 300K detector. Evaluation, integration, and reduction of the diffraction data were carried out using the X-AREA software suite [42]. Multiscan absorption correction was applied with the LANA module of the X-AREA software suite [43−45]. All atoms were located by difference Fourier synthesis, and non-hydrogen atoms were refined anisotropically. Hydrogen atoms were located from difference Fourier syntheses and freely refined isotropically. CCDC 2250356 contains the supplementary crystallographic data for this paper. These data can be obtained free of charge from The Cambridge Crystallographic Data Centre.
Structure and Stability. Hypothesis for a Symmetry Breaking in the NaNH2 Structure. The literature reports that the thermodynamically most stable polymorph of NaNH2 at ambient conditions is the orthorhombic α structure (space group Fddd, No. 70) [14,22,23]. After a first geometry optimization in that symmetry, however, we found one imaginary frequency (about 112i cm−1, B2g symmetry), indicating that this structure is not a true local minimum. We show in Figure 1 the computed potential energy surface along this normal mode, which actually shows that a lower-energy structure follows from symmetry breaking; the corresponding subgroup is monoclinic, C2/c. In Figure 1, the red dots are the results of our calculations, while the blue curve is the result of cubic spline interpolation of our data. We reoptimized the structure from the observed minimum, and detailed information is found in Table 1. Such symmetry breaking is not uncommon and is similar to what we recently observed in a different system, namely Li6PS5Cl [46]. Clearly, the energy barrier between the optimized structures in the Fddd and C2/c space groups is very small (only about 0.02 eV), and thus, already at this stage, we do not expect such symmetry breaking to be measurable at room temperature conditions. Figure 2 shows that the structural difference between the optimized geometries is not very large, the most notable features being a slight change in the alpha angle, a variation in the a and b lattice parameters, and a change in Na−N distances. As expected, the change in the computed electronic structure is also not too relevant: the band gap is 4.29 eV for NaNH2 in space group C2/c and 4.27 eV in space group Fddd. Such values, computed with hybrid functionals, are larger than those obtained from LDA calculations in previous works [23,30]. Compared with the experiments, our optimized cell is about 6% smaller in volume. Considering that our calculations are carried out at the athermal limit, and the delicate balance of the whole computational setup, we deem such a result more than satisfactory. 
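The spline curve mentioned above (energy vs. displacement along the imaginary B2g mode) can be reproduced with a few lines of Python; the sampled points below are placeholders, not the actual computed energies, so this is only a sketch of the interpolation and minimum search.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

# Hypothetical (amplitude, energy) samples along the imaginary mode;
# amplitude 0 corresponds to the Fddd saddle point, energies in eV.
amp = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
energy = np.array([0.01, -0.015, 0.0, -0.015, 0.01])

pes = CubicSpline(amp, energy)                       # smooth double-well curve
res = minimize_scalar(pes, bounds=(0.0, 1.0), method='bounded')
print(f"minimum at amplitude {res.x:.3f}, {float(res.fun):.4f} eV below Fddd")
```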
X-ray Diffraction Pattern. In order to further investigate the Fddd vs C2/c puzzle of solid NaNH2, we carried out single-crystal X-ray diffraction of sodium amide single crystals at 80 K. A splitting of the reflections at high angles would potentially indicate a lowering to monoclinic symmetry. The structure, however, turned out to be still orthorhombic Fddd, and no splitting of reflections was observed. Considering the low energy difference between the two structures and that all atoms show very strong thermal vibrations [22], it is likely that the phase transition occurs at lower temperatures. For this reason, additional investigations could be performed, such as measurements of heat capacities down to 2 K and neutron diffraction patterns, which we leave for future work. We computed the powder X-ray diffraction (PXRD) pattern for NaNH2 in both the Fddd and C2/c space groups, considering an incident wavelength equal to 1.5406 Å, corresponding to the Cu−Kα1 line. In Table 2 and Figure 3 we compare such computed patterns with our experimental one and with experimental data from the literature [24]. The Fddd-NaNH2 structure-type pattern compares better to the experimental one, confirming the hypothesis that the monoclinic structure is just the low-temperature polymorph of sodium amide. When the symmetry is changed, the peak indexing changes and a splitting of reflections occurs. For this reason, a direct comparison of peak assignments between the calculated pattern of the structure in the C2/c space group and the experimental results would not be consistent and is not presented. (Footnote to Table 2: experimental results are obtained at room temperature, while the calculated ones refer to the optimized structure at the athermal limit; in parentheses, the deviations of calculated values from observed ones are given.) The detailed peak assignment can be found in the Supporting Information. The maximum deviation from the experimental 2θ angle is about 4.5% for NaNH2 in the Fddd space group. Differences between the calculated and experimental patterns of the structure in the Fddd space group can be noticed. This is expected, since the experimental pattern was recorded at room temperature, whereas the calculated ones refer to structures simulated at the athermal limit. Therefore, reflection positions will differ due to the temperature dependence of the lattice parameters, leading to some discrepancy. Moreover, the intensities could differ because of the thermal motion of the atoms. At higher diffraction angles the comparison becomes difficult due to the low intensity of the experimental peaks (a consequence of the X-ray atomic form factors and of the interaction of X-rays with the electrons of the atoms). For this reason, in Table 2 we report just the comparison of the most intense peaks. The reflections we could not assign are marked with an x symbol in the pattern in Figure 3. In any case, these peaks are surely related to NaNH2, because reflections of foreign phases would additionally show up at lower diffraction angles. 
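For completeness, converting calculated d-spacings into 2θ positions for the Cu−Kα1 wavelength used here is a one-line application of Bragg's law; the d values below are placeholders, and peak indexing itself is of course done by the diffraction software.

```python
import math

WAVELENGTH = 1.5406  # Cu-Kalpha1, in angstrom

def two_theta(d_spacing):
    """Bragg's law n*lambda = 2*d*sin(theta) with n = 1, result in degrees."""
    return 2.0 * math.degrees(math.asin(WAVELENGTH / (2.0 * d_spacing)))

# Placeholder d-spacings (angstrom) of a few reflections.
for d in (4.45, 3.10, 2.55):
    print(f"d = {d:.2f} A  ->  2theta = {two_theta(d):.2f} deg")
```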
Prediction of the Heat of Formation from Ammonia. In order to evaluate the formation energy of NaNH2 (eq 1), we first performed a full geometry optimization for all the other structures involved, namely metallic Na, solid NH3, and lastly the H2 molecule. The optimized cell parameter for Na is 4.027 Å (space group Im3̅m, No. 229; lattice parameter a = 4.235 Å [47]), while for NH3 it is 4.949 Å (space group P2₁3, No. 198; lattice parameter a = 5.048 Å [48,49]). As expected, the band structure for Na shows metallic character, and the band gap of solid ammonia is equal to 7.02 eV. A reduction (about −5%) in the lattice parameter of sodium with respect to the values reported in the literature is obtained [47]. It is well known, in fact, that the choice of the basis set for the treatment of metallic systems is crucial, and very diffuse functions are needed to correctly reproduce the density characterizing them. However, in order to obtain results comparable with the other studied compounds, a coherent level of theory was necessary, leading to the choice of the pob-TZVP basis set for Na, which Peintinger et al. [37] indicated as suitable for the study of metallic sodium too, without the addition of diffuse valence functions. We computed the heat of the reaction as ΔH_r = Σ_p H_p − Σ_r H_r, where p is the index for the products and r is the index for the reagents. The thermodynamic functions at ambient conditions were obtained from the frequency calculations (using a 2 × 2 × 2 supercell in the case of metallic Na). Results are reported for all the species involved in the reaction in Table 3. Our estimate for the heat of reaction at 298.15 K is −12.2 kcal/mol (−51.1 kJ/mol) for NaNH2 in space group Fddd, since it is the stable polymorph at ambient conditions, which is in very good agreement with previously reported experimental values of −12.3 [50] and −11.7 kcal/mol [51]. At the athermal limit, our ΔH_0 estimate is −12.9 kcal/mol (−54.1 kJ/mol) for NaNH2 in space group C2/c, since it is supposed to be the equilibrium structure at low temperature. By increasing the supercell size, such results change only marginally (0.35 kcal/mol).
Vibrational Spectra. Infrared Spectrum. We simulated the IR spectrum of NaNH2 in both the Fddd and C2/c space groups, which we report in Figure 4 and Table 4. The experimental IR spectrum was recorded at ambient temperature, while the simulated ones refer to the athermal limit. At first glance it is evident that the computed spectra are both in general agreement with our experimental one, which, in turn, is fully coherent with the available literature data [28,29]. The most notable discrepancy is found in the range 800−1500 cm−1, where some unpredicted features appear in the experimental spectrum. These peaks are not visible even in other experimental IR spectra reported in the literature, and we were not able to identify their origin. (Footnote to Table 4: experimental results are obtained at room temperature, while the calculated ones refer to the optimized structure at the athermal limit; peaks marked with the x symbol were not assigned.) Overall, the simulated IR spectra for the orthorhombic and monoclinic structures do not show significant differences. Let us look more in detail at the specific frequency ranges. 
The band reported at the 609 cm−1 wavenumber in the work of Liu and Song 28 and corresponding to the 591 cm−1 wavenumber in our experimental spectrum is a very broad one, and it may be related to the convolution of three or more bands, since its shape is not as well defined as the other ones. In all our simulations we notice a rigid shift of this group of frequencies, a direct consequence of the underestimation of the cell volume reported in the previous section and of the harmonic approximation, which is reflected in these collective vibrational modes. In the IR spectrum simulated for the structure with C2/c symmetry, the appearance of three bands can be observed, while the Fddd spectrum shows in this region an intense band at 634 cm−1 and a weak one at 580 cm−1. However, even if the broadening of the experimental band can be related to the convolution of more bands, its origin could possibly be of a different nature. In the other regions we see no decisive feature in the spectra for the different structures, all in good agreement with the experiment. An extra band with weak intensity is estimated to be around 1590 cm−1 in the simulated spectrum for the C2/c space group, and it is related to bending of the N−H bond. After the usual rescaling of the frequencies due to anharmonicity (we adopted here a factor of 0.95), the high-frequency H-stretching modes also compare well. Raman Spectrum. We computed powder (direction-averaged) Raman spectra for the different structures of NaNH2, using the same algorithm as described above for band positions and analytical evaluation of intensities. 52 We also took into account the experimental conditions (temperature and laser wavelength) to calculate the Raman intensities. The experimental Raman spectrum was measured using three different lasers working at 488, 532, and 633 nm. For comparison with the calculated data, we considered the 488 nm one. In Figure 5, a comparison among calculated and experimental spectra is presented, while in Table 5 the frequencies of the most intense bands are reported for a comparison among data from the literature, our experimental results, and the calculated ones. The experimental Raman spectrum was recorded at room temperature, and the simulated spectrum was obtained by setting the temperature equal to 298 K and the incoming laser wavelength equal to 488 nm. In the low-frequency region of the Raman spectrum, the number of bands in the calculated spectra is higher than that of the experimental one. It is worth remembering here that the computed intensity is interpreted as the band height and a uniform broadening is applied, while this quantity should more properly be interpreted as the band area. It is clear here that the experimental bands, especially those in the 400−600 cm−1 region, all have different broadenings, much larger than the reported ones, and hence they may largely overlap with each other. For instance, the experimental bands at 470 and 533 cm−1 may well be the convolution of two of the four bands which are identified in the calculated spectra for the Fddd and C2/c structures, which are reported in Table 5. The same can be observed for the experimental bands at 177, 247, and 380 cm−1. A blue shift of the bands can be noticed, as observed in the IR, but no crucial differences can be identified between the simulated Raman spectra of the orthorhombic and monoclinic structures.
In the region around 1500 cm−1, only one band appears, corresponding to the symmetric bending of the N−H bond. A better agreement between the experimental frequency and the calculated one is found for the Fddd structure, as expected, since the orthorhombic polymorph is the most stable one at room temperature. However, an extra shoulder band appears in the Fddd system. The calculated frequencies of the bands in the high-wavenumber region have once more been downscaled to empirically account for missing anharmonic effects (scale factor = 0.95). However, we found no valuable information in the analysis of these particular bands. Furthermore, we calculated the anisotropic displacement parameters (ADPs) for all the structures. The ADP is a 3 × 3 tensor associated with each atom of the unit cell. Diagonalizing the resulting tensors, we obtained three positive eigenvalues for each irreducible atom of the cell, which define the lengths of the principal axes of the ellipsoids. The ellipsoids define the surfaces of constant probability of atomic displacement. In Figures 6 and 7 we show the calculated ADPs for the structures in the Fddd and C2/c space groups, respectively. As can be seen, the ADPs of all atoms of the structure in the Fddd space group, especially those of the hydrogens, are larger than those of the monoclinic structure. This observation seems to be in agreement with that reported by Nagib et al. 22 We observed the same behavior by considering NaND2 in both the Fddd and C2/c space groups. This observation supports our hypothesis of a symmetry breaking and suggests that the hydrogen atoms are responsible for it. In fact, since XRD cannot localize them precisely and they vibrate strongly in the Fddd-type structure, it is reasonable to expect a lowering of symmetry. Neutron diffraction on NaND2 should be performed to verify our hypothesis. CONCLUSIONS In this work, we present a synergic computational and experimental study of NaNH2. All calculations are ab initio and carried out at the athermal limit. In the Raman spectra, which are evaluated within the Placzek approximation, 54 room temperature and laser wavelength are approximately taken into account as a simple prefactor to the peak intensities. From vibrational calculations, we found its Fddd structure to be metastable. A lower-energy structure was identified, characterized by C2/c symmetry. However, from the X-ray diffraction data acquired on single crystals of sodium amide at 80 K, the crystal structure turned out to still be orthorhombic rather than monoclinic. Further analysis should be performed in order to verify whether a phase transition occurs at lower temperature. However, the analysis of the anisotropic displacement parameters of the sodium amide atoms in both the Fddd and C2/c space groups supports our hypothesis of a lowering of symmetry because of the very large displacements of the H (and D) atoms in the Fddd-type structure. Finally, due to the role of sodium amide in hydrogen storage applications, we also calculated the NaNH2 formation reaction enthalpy from Na and NH3, which leads to the evolution of H2. The resulting value at ambient conditions is equal to −12.2 kcal/mol for the orthorhombic system, which is in very good agreement with experimental results (−12.3 kcal/mol). 50
ASSOCIATED CONTENT
Supporting Information. The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acs.jpcc.3c02059. Supplementary material, including (i) additional details on the optimized NaNH2 structures and standardization of the monoclinic crystal structure; (ii) analysis of XRD patterns for the monoclinic system, structure factors for both Fddd- and C2/c-NaNH2, and simulated PXRD pattern considering experimental lattice parameters.
Figure 2. On the left, representations of the crystal structures and conventional cells (top: structure in the Fddd space group; bottom: structure in the C2/c space group) with highlighted α angle; on the right, representation of the primitive cells with highlighted bonds (Å). Color code: Na purple, N blue, H white.
Figure 3. Experimental and calculated XRD patterns comparison. Experimental results are obtained at room temperature, while the calculated ones refer to the optimized structure at the athermal limit. Peaks marked with the x symbol were not assigned.
Figure 4. On the left, experimental (room temperature) and calculated (athermal limit) IR spectra comparison of NaNH2 for the different optimized geometries; on the right, zoomed-in IR spectra zones (in order from top to bottom: 500−800, 1350−1700, and 3100−3300 cm−1 (scaled by a factor of 0.95)). The calculated spectra are convoluted with a full width at half-maximum of 8 cm−1.
Figure 5. On the left, room temperature experimental and calculated Raman spectra comparison of NaNH2 for the different optimized geometries; on the right, zoomed-in Raman spectra zones (in order from top to bottom: 50−800, 1300−1700, and 3100−3350 cm−1 (scaled by a factor of 0.95)). The calculated spectra are convoluted with a full width at half-maximum of 8 cm−1.
Table 1. Cell Parameters (Å), α Angle, and Volume (V) (Å3) of Experimental Lattice Parameters 22 Obtained from Powder Diffraction at Room Temperature, Experimental Geometry Determined at 80 K from Single Crystal X-ray Diffraction, and Optimized Geometries. a In parentheses, standard uncertainties of experimental values and deviations of the calculated results from the experimental geometry at 80 K are shown.
Table 2. Comparison of Powder X-ray Diffraction Pattern's Peaks among Experimental and Calculated Results. a Experimental results are obtained at room temperature, while the calculated ones refer to the optimized structure at the athermal limit; in parentheses, the deviations of calculated values from observed ones are given.
Table 4. Experimental (Room Temperature) and Calculated (Athermal Limit) IR Frequencies (cm−1) Comparison of NaNH2 for the Different Geometries. a In parentheses, scaled values of frequencies (scale factor = 0.95) can be found.
Table 5. Experimental (Room Temperature) and Calculated (298 K) Raman Frequencies (cm−1) Comparison of NaNH2 for the Different Geometries. a In parentheses, scaled values of frequencies (scale factor = 0.95) can be found.
5,685.4
2023-06-13T00:00:00.000
[ "Chemistry", "Physics", "Materials Science" ]
Procedural Terrain Generation Using Generative Adversarial Networks Synthetic terrain realism is critical in VR applications based on computer graphics (e.g., games, simulations). Although fast procedural algorithms for automated terrain generation do exist, they still require human effort. This paper proposes a novel approach to procedural terrain generation, relying on Generative Adversarial Networks (GANs). The neural model is trained using terrestrial Points-of-Interest (PoIs, described by their geodesic coordinates/altitude) and publicly available corresponding satellite images. After training is complete, the GAN can be employed for deriving realistic terrain images on-the-fly, by merely forwarding through it a rough 2D scatter plot of desired PoIs in image form (a so-called "altitude image"). We demonstrate that such a GAN is able to translate this rough, quickly produced sketch into an actual photorealistic terrain image. Additionally, we describe a strategy for enhancing the visual diversity of the trained model's synthetic output images, by tweaking input altitude image orientation during GAN training. Finally, we perform an objective and a subjective evaluation of the proposed method. Results validate the latter's ability to rapidly create life-like terrain images from minimal input data. I. INTRODUCTION Manually created virtual terrains are still superior in quality to those derived with automated means, albeit at the cost of significant labour and time expenses. The complexity of the real world (rocks, grass, trees, mountains) still renders the creation of plausible, original terrain content a challenging task. This issue can be bypassed using Procedural Content Generation (PCG), i.e., a set of methods for (semi-)automatically creating new content for 2D/3D graphics on-the-fly, thus replacing the artistic part of content generation with a choice of tweakable parameters and random elements. PCG algorithms can be used for creating, on-the-fly, 2D terrain images that encode 3D characteristics (e.g., altitude); such a terrain image can then be transformed into a 3D terrain mesh in a final post-processing step. Typical noise-based terrain generators (e.g., Worley [1], simplex [2], Perlin [2], value [3] or diamond-square [4]) suffer from limitations with regard to memory/computational requirements and/or output quality. More recent PCG approaches that have been applied to terrain generation, such as Software Agents [5], Erosion Modeling [6] and Evolutionary Algorithms [7], also typically require significant manual post-processing (e.g., applying an image overlay to achieve a realistic look) and/or extensive manual parameter tuning. The research leading to these results has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 951911 (AI4Media).
Thus, Deep Neural Networks (DNNs) such as Generative Adversarial Networks (GANs) [8] have alternatively been explored for visual content generation. In [9] a GAN-based method is presented for multi-scale terrain texturing with reduced tiling artifacts. It involves training a GAN to upsample and texture-map a low-resolution terrain input. Thus, during the inference stage, low-resolution terrain images can be translated on-the-fly to high-resolution ones; the terrain is, however, needed upfront as an input to be up-scaled. Other GAN-based methods [10] [11] create mountain-like 3D terrains, using information extracted from training height map data. Acquiring height maps is not trivial, while the generated results need to be heavily post-processed, since they are missing textures and realistic visual features (e.g., grass, rivers, forests, etc.). In comparison, this paper presents a novel GAN-based method for procedural terrain generation with significantly more relaxed input data requirements (only very loose constraints are imposed upon the input data) and a higher diversity of terrain results. We call this proposed method GAN-terrain. Unlike other GAN-based terrain generation methods, it does not require sophisticated input data types (e.g., height maps). Thus, after training, it incurs only minimal manual supervision, since its required input simply consists of easily constructed (in a matter of seconds), rough 2D scatter plots of desired Points-of-Interest (PoIs) in image form; we call such a plot an "altitude image". The output is a 2D textured terrain resembling a satellite image, with colour encoding height and/or geomorphological properties (e.g., snow, water-body, forest, etc.), so that it can then be trivially post-processed and converted into a semantically annotated 3D terrain mesh. During training, the model learns to extract altitude/spatial information from the colour density/distances of the input PoIs. GANs can easily learn complex real-world semantic content, like mountains, sea, deserts, islands, or flora, in a way that follows natural spatial alignment constraints (e.g., no jungles depicted in frozen Arctic regions, no rivers flowing uphill, etc.). However, simply training a GAN on a large set of ground-truth terrain images does not guarantee that the Generator will learn to produce complex content that obeys similar restrictions. Therefore, we opted for an Image-to-Image Translation GAN, training it using geographic coordinates and altitude information from a dataset of neighbouring landmarks, paired with the corresponding satellite image of their region. The main advantage of GAN-terrain lies in its novel input strategy, which simplifies the actual use of the deployed DNN model in the field: new inputs for the trained network, i.e., novel altitude images at the inference stage, can be trivially created in a matter of seconds with any image processing software. In fact, although the training/evaluation dataset for this paper was constructed using real geographic data, we have successfully tested the trained GAN-terrain model with arbitrary input images; the Generator still predicts relatively realistic terrain images. The only existing methods partly similar to GAN-terrain are [12] and [13]. However, the first one also requires height maps for training, while both of them rely on unconditional GANs for 2D terrain image/texture generation. In contrast, GAN-terrain does not require height maps and is built upon the Image-to-Image Translation framework for increased robustness.
II. GAN-TERRAIN METHOD Generative Adversarial Networks (GANs) are employed as the primary tool for completing the procedural terrain generation task. GANs are composed of two sub-networks that are trained jointly, namely a Generator (G) and a Discriminator (D). After training, only the Generator is typically retained for content generation purposes. In this paper, the conditional GAN variant for Image-to-Image Translation tasks is employed [14]. GAN theory and training are briefly presented below (details in [8] [14]). A. Generative Adversarial Networks In an image synthesis scenario, GANs are generative models that learn a mapping G : z → Y from a random noise vector z to an output image Y. The Generator G is trained to produce outputs that cannot be distinguished from "real" images by an adversarially trained Discriminator D, which gradually learns to discern the synthetically generated images from real ones. The objective of a conditional GAN, which additionally observes an input image X, can be expressed as: L_cGAN(G, D) = E_{X,Y}[log D(X, Y)] + E_{X,z}[log(1 − D(X, G(X, z)))] (1), where G tries to minimize this objective against an adversary D that tries to maximize it: G* = arg min_G max_D L_cGAN(G, D). In the unconditional variant, where the Discriminator does not observe X, it holds that: L_GAN(G, D) = E_Y[log D(Y)] + E_{X,z}[log(1 − D(G(X, z)))] (2). It is best practice to augment the GAN objective with a more traditional loss, such as the L1 or L2 norm. Although the Discriminator's job remains unchanged, the Generator is additionally constrained to stay near the corresponding ground-truth output as follows: L_L1(G) = E_{X,Y,z}[||Y − G(X, z)||_1] (3). The overall training objective is: G* = arg min_G max_D L_cGAN(G, D) + λ L_L1(G) (4). B. GAN-Terrain The proposed GAN-terrain method consists in training a conditional GAN for image synthesis so that it learns to map rough 2D Point-of-Interest (PoI) scatter maps (so-called altitude images) into realistic satellite terrain images containing geomorphological details. In the inference/deployment stage, after training has been completed, a similar altitude image can be easily crafted at minimal labour and time expense (within seconds), in order to be fed to the trained model as the observed input image X. The corresponding model output Y will be a procedurally generated 2D terrain image with rich, color-coded geomorphology that typically does not violate spatial intuitions.
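As a minimal, hedged sketch of how the combined objective (1)-(4) could be implemented for conditional training of this kind: the snippet below assumes a PyTorch-style patch Discriminator D that receives the altitude image X concatenated channel-wise with a terrain image, and the weight lambda_l1 = 100 follows the common Pix2Pix default rather than a value stated in the paper.

```python
# Sketch of the Pix2Pix-style training losses; D, X, and the terrain tensors
# are assumed to be provided elsewhere (e.g., by a Pix2Pix-like model).
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()   # adversarial term, eqs. (1)-(2)
l1 = nn.L1Loss()               # reconstruction term, eq. (3)

def discriminator_loss(D, x, y_real, y_fake):
    pred_real = D(torch.cat([x, y_real], dim=1))
    pred_fake = D(torch.cat([x, y_fake.detach()], dim=1))
    return 0.5 * (bce(pred_real, torch.ones_like(pred_real)) +
                  bce(pred_fake, torch.zeros_like(pred_fake)))

def generator_loss(D, x, y_real, y_fake, lambda_l1=100.0):
    pred_fake = D(torch.cat([x, y_fake], dim=1))
    adv = bce(pred_fake, torch.ones_like(pred_fake))   # try to fool D
    return adv + lambda_l1 * l1(y_fake, y_real)        # overall objective, eq. (4)
```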
To train the desired conditional GAN model under this framework, we initially collect a set of N earth surface PoIs .., N , composed of longitude λ i , latitude φ i and altitude R i components.The altitude is rescaled and quantized to integer interval [0, 253], assuming the height of the mount Everest (8.848m) is the maximum possible value.These N vectors can be grouped into geographic patches, i.e., rectangle-shaped earth regions defined from 4 PoIs.Subsequently, this set is uniformly sampled to select a set of M geographic patches, so that most earth region terrain variations are represented on the training dataset.Such a representation of all earth terrain variations is essential for high-quality, diverse content generation.Finally, for each of the M geographic patches, we collect a random number of PoIs falling geographically within it, as well as a satellite image of the patch.Patch PoIs p ji ,i = 1, ...N are employed to construct a 2D altitude image (λ j , φ j ), of patch j = 1, ...M where the horizontal/vertical coordinate corresponds to PoI latitude/longitude(λ ji , φ ji ), respectively, while the luminance of each point encodes PoI normalized altitude R ji .Such altitude images are very sparse, since typically we sample only few Earth surface points p ji ,j = 1, ...M per patch.All other altitude image pixels have a value of 255 (white on grayscale) or (255,255,255) (white on RGB) and are excluded from altitude evaluation.This 2D altitude image, converted into image form, is an observed input image X j , j = 1, ...M .The corresponding satellite image (depicting actual geomorphology of the patch region) is employed as Each 2D altitude image X j can be constructed in two slightly different ways: a) a grayscale one-channel image can be derived by encoding normalized altitude R ji per-PoI as a pixel luminance value.Alternatively, a linear color palette can be used to convert normalized altitude R ji into RGB color values, in order to finally obtain a three-channel colored image (e.g., one from blue to yellow, where the deepest blue/yellow denotes sea level/highest mountain peak level, respectively).Both approaches were implemented and compared in the context of this paper, as described in Section III. As shown in Figure 1, the visual properties of the generated content are correlated with the color-coded altitude of the input PoIs; in all other respects GAN-terrain has realistically filled-in the generated terrain details fully autonomously.At model deployment-time, random input altitude images can be constructed very rapidly on-the-fly in an automated manner, thanks to the very minimal amount of required information.Even manually drawn, swiftly sketched arbitrary images can be utilized as inputs; a trained GAN-terrain model will successfully interpret them as altitude images, as shown in Figure 4. 
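As a rough illustration of the altitude-image construction just described, the sketch below rasterises the PoIs of one patch into a grayscale X_j; the 512x512 resolution matches the satellite images used in Section III, but the exact rounding and single-pixel marker placement are assumptions, since the paper does not specify them.

```python
# Build a grayscale altitude image from (longitude, latitude, altitude) PoIs.
import numpy as np

MAX_ALT = 8848.0   # Mount Everest height in metres, used for normalisation

def altitude_image(pois, lon_min, lon_max, lat_min, lat_max, size=512):
    """pois: iterable of (longitude, latitude, altitude_in_m) tuples of one patch."""
    img = np.full((size, size), 255, dtype=np.uint8)          # white = "no PoI"
    for lon, lat, alt in pois:
        col = int((lon - lon_min) / (lon_max - lon_min) * (size - 1))
        row = int((lat_max - lat) / (lat_max - lat_min) * (size - 1))
        img[row, col] = int(round(np.clip(alt, 0.0, MAX_ALT) / MAX_ALT * 253))
    return img

# A colour variant can be obtained by mapping the 0-253 values through a
# linear blue-to-yellow palette instead of using them directly as luminance.
```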
In general, output diversity is an important property of a successful PCG system. In the GAN-terrain case, the purpose of the final trained GAN model during system deployment is not to precisely translate the input altitude image into an actual satellite image, but to procedurally generate a new, realistic but imaginary terrain, which may be only vaguely based on the given input. Thus, in order to enhance trained model output diversity, we optionally perform random rotations and/or flipping of each X_j, j = 1, ..., M to augment the training dataset, without changing the corresponding Y_j, j = 1, ..., M. Below, we refer to GAN-terrain models trained with/without this optional augmentation strategy as "Augmented"/"Non-augmented", respectively. As shown in Section III, this training set augmentation strategy allows the final GAN to synthesize terrain images of greater apparent diversity, by forcing it to ignore input orientation during training. Thus, during deployment of the trained model, small rotations of the input altitude image may produce arbitrarily large rotations of the output, since the output orientation is in fact arbitrarily "decided" by the model and not constrained by the input orientation. As a result, the Non-augmented model is forced more intensely to mimic the ground-truth, while the Augmented one typically provides a more diverse result. III. GAN-TERRAIN EVALUATION We employed publicly available geographical data [15] in order to construct the training and testing sets for our method. We initially collected N = 11.2 million world PoIs, which were utilized to create M = 4300 geographic patches and attach their corresponding satellite images (of 512x512 pixels resolution) using the Microsoft Bing Maps API. The employed GAN architecture was based on the Pix2Pix network [14]. The network was trained using 3000 input/output patch pairs {X_j, Y_j} and was evaluated using a test set of 1300 input/output patch pairs. Color and grayscale variants of the dataset were used for training separate GAN-terrain models. Color 2D altitude images resulted in predicted network outputs with a higher level of detail than the ones obtained using grayscale inputs, thus GAN-terrain evaluation proceeded with the color variant only. The results were impressive, as GAN-terrain successfully created highly realistic complex terrain images from very simplistic inputs. Training was completed in 300 epochs, on a 24-core Intel Xeon PC with 256 GB RAM and an NVIDIA GeForce GTX2080Ti GPU. Evaluating the quality of synthesized images is an open and difficult problem [16]. In this paper we chose a simple objective evaluation approach, exploiting the fact that pixel color in the output image encodes semantic information. Thus, we measured the Normalized Histogram Intersection (NHI) similarity [17] between the 64-bin joint HSV color histograms [18] of each GAN-terrain prediction and its corresponding ground-truth image from the test set. The minimum/maximum NHI similarity values are 0.0/1.0, respectively. High NHI similarity between the ground-truth and predicted image histograms can be interpreted as high semantic concordance between them, with regard to the distribution of visible geomorphological details (water bodies, forest, snow, mountains, etc.).
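A minimal sketch of the NHI measure described above is given below; the 4x4x4 quantisation of the HSV channels (yielding 64 joint bins) and the assumed [0, 255] channel scaling are illustrative choices, as the paper only states that 64-bin joint HSV histograms are used.

```python
# Normalized Histogram Intersection between 64-bin joint HSV histograms.
import numpy as np

def joint_hsv_histogram(img_hsv, bins=(4, 4, 4)):
    """img_hsv: HxWx3 array with HSV channels scaled to [0, 255]."""
    pixels = img_hsv.reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=bins,
                             range=((0, 256), (0, 256), (0, 256)))
    return hist.ravel() / hist.sum()            # normalise to unit mass

def nhi(hist_a, hist_b):
    """NHI of two normalised histograms: 1.0 = identical, 0.0 = disjoint."""
    return np.minimum(hist_a, hist_b).sum()
```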
Quantitative results indicate that the mean NHI similarity between the 1300 ground-truth and predicted images is indeed relatively high (0.7665). This implies that, when the trained GAN model is given a previously unseen 2D altitude image, it synthesizes a highly similar terrain image in terms of semantic concordance. Although NHI similarity does not capture differences between the two terrain images in terms of the exact landmass/coastline shape/orientation, this is rather irrelevant to the terrain image generation task, since our goal is not to replicate the ground-truth terrain. Examples of altitude image, ground-truth terrain image and predicted output image triplets are presented in Figure 1. The NHI similarity for each triplet is included for visual inspection purposes. The occasional phenomenon of semantic disconcordance between ground-truth and predicted image (example in Figure 2) can be attributed to the minimal information content of the input altitude images, which is however the main advantage of the proposed method: such 2D scatter plot inputs can be rapidly and easily constructed at model deployment-time.
Fig. 2: Test set example of semantic disconcordance between prediction and ground-truth. Here, the network avoided synthesizing snow (NHI score: 0.58). However, the predicted terrain image is still realistic-looking.
Additionally, we performed a subjective evaluation of generated terrain images, using 40 terrain images from our test set and 10 observers. The goal of the subjective evaluation was to let observers deduce in a systematic manner: a) whether the predicted terrain images resemble a real satellite terrain image ("plausibility"), and b) the spatial correspondence between the input 2D altitude image PoIs and the predicted terrain image ("correspondence"). We employed 20 predicted terrain images shuffled with 20 ground-truth terrain images for control purposes, totalling 40 terrain images. The participating subjects did not know whether each terrain image they saw was a ground-truth or a predicted one. For each image, they recorded two integer score values in the range [1, 5] for plausibility and correspondence evaluation, respectively.
TABLE I: Evaluation results of the Non-augmented GAN-terrain model. Correspondence and plausibility are scored using a scale in [1, 5], while NHI similarity is a percentage. In all cases higher is better.
Subjective evaluation results, shown in Table I, indicate that ground-truth and predicted images are nearly indistinguishable by human subjects: mean correspondence for predicted/ground-truth images was 4.6633/4.6138, respectively, while mean plausibility for predicted/ground-truth images was 4.4682/4.5955, respectively. In fact, artificial GAN-terrain images performed even better than the real ones.
Subjective evaluation was necessarily performed with a GAN model trained using the non-augmented training dataset variant of the proposed method, due to the nature of the employed "correspondence" qualitative metric. Disabling the proposed training data augmentation strategy, which was described in Section II, imposes shape/orientation constraints to be learned by GAN-terrain. Thus, the absence of training dataset augmentation may reduce the diversity of GAN-terrain outputs during deployment. To quantify this possibility, we trained a second GAN-terrain model using training data augmentation and then compared the predictions of the two GAN-terrain models on the test set. Evaluation consisted in calculating a GIST global image description vector [19] for each predicted terrain image in the test set, once for the Non-augmented and once for the Augmented model, and subsequently computing the mean global dispersion of these descriptors. This can be measured by averaging over the total variance (i.e., the trace of the covariance matrix) of the 1300 960-dimensional GIST vectors f_i, i = 1, ..., 1300, separately for the two models. The results, shown in Table II, indicate that the mean global dispersion/total variance of test set predictions is significantly greater for the Augmented model variant, where our input augmentation strategy was enabled during training: it is 0.23454/0.18995 for the Augmented/Non-augmented variant, respectively. To grasp a sense of the significance of this difference in total variance magnitude, we report that the mean/variance of the main diagonal of the GIST covariance matrix in the Augmented variant is 0.000244/3.55e-8, respectively. On the other hand, the mean NHI similarity of joint HSV histograms between ground-truth and predictions is slightly higher for the Non-augmented GAN-terrain model: it is 0.7665, versus 0.74172 for the Augmented case. This indicates a slight trade-off between semantic concordance and output diversity. IV. CONCLUSIONS The proposed GAN-terrain method is able to derive realistic 2D terrain images resembling satellite images at model deployment-time, given only 2D altitude images containing rough PoI scatter plots that encode the spatial distribution and altitude of desired geographic landmarks. Although the altitude images employed for training were constructed using real geographic data, similar arbitrary input images can easily and rapidly be created at the inference stage using any image processing software. In contrast, all competing GAN-based terrain generation methods require more sophisticated deployment-time inputs that are comparatively difficult to construct. The output images can be easily transformed into semantically rich 3D terrain meshes by trivial post-processing. Extensive evaluation of the generated terrain images indicates a relatively high degree of semantic concordance between the expected terrain geomorphology and the actually GAN-terrain generated one, as well as very realistic and plausible generated terrains. Additionally, GAN-terrain evaluation results indicate a predicted terrain image diversity gain, at a very low penalty in semantic concordance, when using the proposed training data augmentation strategy. Future work may focus on actually synthesizing 3D terrain content, generating both terrain image texture and geometry data.
Fig. 3: Outputs of the pre-trained Non-augmented/Augmented model ("Predicted Image I/II", respectively) using two different altitude image inputs from the test set: a) and b).
Fig. 4: An example input/predicted image pair, using: a) the pre-trained Augmented GAN-terrain model, and b) an arbitrary input (smiling face with glasses), manually drawn in less than 30 seconds.
TABLE II: Predicted images diversity comparison between the Non-augmented and Augmented model, using GIST descriptors and total variance.
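For completeness, a small sketch of the diversity measure reported in Table II follows: the total variance (trace of the covariance matrix) of the 960-dimensional GIST descriptors of the predicted test images, computed separately for the Augmented and Non-augmented models. Variable names here are hypothetical; GIST extraction itself is assumed to be done elsewhere.

```python
# Total variance of GIST descriptors as a global dispersion (diversity) measure.
import numpy as np

def total_variance(gist_vectors):
    """gist_vectors: (n_images, 960) array of GIST descriptors."""
    cov = np.cov(gist_vectors, rowvar=False)   # 960 x 960 covariance matrix
    return np.trace(cov)                       # sum of per-dimension variances

# Example (shapes only): gist_aug and gist_nonaug would each be (1300, 960).
# diversity_aug = total_variance(gist_aug)
# diversity_nonaug = total_variance(gist_nonaug)
```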
4,289.6
2021-08-23T00:00:00.000
[ "Computer Science" ]
Modern Radiotherapy for Head and Neck Cancers: Benefits and Pitfalls - a Literature Review
1 University of Medicine and Pharmacy, Craiova, Romania; 2 Regional Institute of Oncology, Iasi, Romania; 3 „Gr. T. Popa" University of Medicine and Pharmacy, Iasi, Romania; 4 „Sf. Spiridon" University Hospital, Iasi, Romania; 5 EUROCLINIC Oncological Center, Iasi, Romania.
Corresponding author: Roxana Irina Iancu, Department of Pathophysiology, „Gr. T. Popa" University of Medicine and Pharmacy, 16 University Street, 700115, Iasi, Romania. E-mail: riiancu@umfiasi.ro
REVIEW
Abstract
INTRODUCTION Curative radiotherapy is part of the multimodal treatment of locally advanced squamous cell carcinoma of the head and neck (HNSCC), as a unique method of treatment or in combination with concurrent or sequential chemotherapy. The last decades have brought changes and significant progress in improving irradiation techniques and the implementation of multidisciplinary treatments by combining induction or concurrent chemotherapy or Cetuximab-based biological therapy. The goal of quality-based radiotherapy is to improve the therapeutic ratio that establishes the efficacy/toxicity ratio. The IMRT technique has become a standard in the treatment of HNSCC following the Phase III trial of Intensity Modulated Radiation Therapy (IMRT) versus conventional radiotherapy in head and neck cancer (PARSPORT trial). The purpose of this study was to demonstrate the advantage of the IMRT technique in the proportion of patients presenting xerostomia of grade 2 or higher, 1 year after the completion of treatment for head and neck cancers. The control group received radiotherapy by the 3D-Conformal technique (3D-CRT) 1 . Xerostomia associated with dry mouth syndrome is one of the complications that most severely affect the quality of life of patients treated with radiotherapy and radio-chemotherapy for head and neck cancers. The irradiation of the salivary gland region leads to changes in the glands' volume, and in the consistency, pH and composition of the secreted saliva. These phenomena are also implicated in the pathogenesis of the tooth diseases that affect structure and resistance. Xerostomia is also related to infections of the oral cavity.
Intensity Modulated Radiation Therapy (IMRT) in HNSCC Reducing the volume of the parotid glands receiving a high dose of radiation has been shown to be a necessary condition for reducing severe xerostomia, but irradiation of level II lymph nodes makes it difficult to spare the parotid glands with standard radiotherapy techniques. Limiting the dose to at least one parotid gland or, if that is not possible, reducing the dose received by a sub-volume of the parotid glands is a goal for reducing post-irradiation xerostomia. The severity of salivary gland damage depends on the total radiation dose and irradiated volume, recent studies being focused on the possibility of parotid gland sparing. Evaluation of salivary flow or salivary gland scintigraphy, as well as subjective evaluation of patients' reports of dry mouth syndrome, demonstrated the ability of the IMRT technique to protect the parotid glands compared to the 3D-CRT irradiation technique. However, avoiding the parotid glands to reduce xerostomia should be approached with caution in order not to affect the tumor target volume dosimetric coverage. Eisbruch et al. note that an improvement in saliva production over time associated with dose reduction in the parotid glands may lead to a reduction in late xerostomia. The authors identify the average dose of the oral cavity as an independent predictor of xerostomia, stressing the need to reduce the doses received by the tumor-free oral cavity 2 . However, the IMRT technique also has disadvantages, requiring quality assurance of the treatment plan by verifying it in the phantom or with the help of portal dosimetry, and the large number of fascicles can increase the irradiation time, with possible radiobiological consequences if the treatment time per fraction exceeds 20 minutes. The large number of monitor units (MU) is associated with the scattering of small doses into large volumes of tissue 3 . Dysphagia associated with irradiation in patients receiving radiotherapy for HNSCC was defined as aspiration or stricture evidenced by video-fluoroscopy or endoscopy, gastrostomy tube or aspiration pneumonia diagnosed at ≥12 months after treatment completion. Evaluation of this toxicity associated with IMRT treatment for multimodally treated patients with oropharyngeal cancer revealed an age-dependent incidence increasing from 5% to 20% for patients from <50 years of age to >70 years receiving a dose >60 Gy to the superior pharyngeal constrictor, evaluated on dose-volume histograms (DVH) 4 . A systematic review analyzed the benefit in terms of xerostomia, overall survival (OS) and quality of life (QOL); it included patients from randomized controlled trials diagnosed with locally advanced, non-metastatic HNSCC who received radiotherapy with curative intent. The analysis included 1155 patients treated with conventional IMRT and 2D or 3D-CRT radiotherapy in HNSCC. The use of the IMRT technique led to a 36% relative reduction in the risk of acute grade 2 xerostomia, as well as a reduction in the risk of late xerostomia, compared with 2D/3D-CRT.
Intensity Modulated Volumetric Therapy (VMAT) in HNSCC Volumetric Intensity Modulated Arc Therapy (VMAT) is a newer radiotherapy technique than "step and shoot" IMRT, in which the treatment is delivered while the gantry of the linac performs a continuous rotation simultaneously with the modulation of the beam intensity using the multi-leaf collimator (MLC). The advantage of the technique is the increase in conformity of the treatment plan and the delivery of the treatment dose in a shorter time compared with the IMRT technique. Some studies also highlight the possibility of reducing the number of monitor units (MU) and obtaining a better dose homogeneity compared to the plans obtained by the IMRT technique 3,5 . Verbakel et al. evaluated the potential advantages of the VMAT technique over IMRT by comparing treatment plans for nasopharynx, oropharynx and hypopharynx cancers. The treatment plans based on 2 complete arcs were superior in terms of dose delivery time and dose homogeneity, while in terms of organ-at-risk (OAR) sparing, the VMAT plans were similar to the IMRT plans 6 . A study that included 222 patients diagnosed with oropharyngeal cancers (134 who received radiation therapy using the IMRT technique and 88 who were irradiated by the VMAT technique) analyzed dysphagia and xerostomia of grade 2 or greater. All the treatment plans were dosimetrically compared. In the group of patients irradiated by the VMAT technique, the toxicities were significantly lower 7 . The same superiority of the VMAT technique, in reducing the number of MU to 30% compared with IMRT radiotherapy, is also demonstrated by a retrospective study by Fung-Kee-Fung and collaborators in patients treated with concurrent radio-chemotherapy for head and neck cancer (stages II-IV) 8 . Although irradiation techniques have improved considerably, xerostomia remains a difficult problem to solve even in the era of modern techniques, the rate of xerostomia remaining high even when using new techniques. By retrospectively analyzing dosimetry data from 609 patients, the average dose and the average percentage of the volume of the salivary gland that received at least 26 Gy (V26), evaluated for the contralateral parotid gland, were 24.50 Gy and 40.92%. Identifying an average dose of 48.18 Gy for the submandibular glands, the study authors conclude that even if the submandibular glands are not sufficiently considered as OARs, the "level one" priority should be target volume coverage, while the protection of OARs associated with xerostomia remains only a "second level priority" 9 . Late xerostomia may occur at large intervals after IMRT irradiation, with a decreasing tendency over time. Although toxicities at large intervals after treatment are rare, Baudelet et al. reported an increase in the risk of dysphagia between 5 and 8 years after treatment, observing a non-linearity of the phenomenon, both for dysphagia and for fibrosis of the neck 10 . The superiority of the IMRT technique in nasopharyngeal cancer has been demonstrated by a meta-analysis including 8 studies and 3570 patients (1541 treated by the IMRT technique and 2029 treated by 2D and 3D-CRT techniques). The authors compared the clinical results of the treatment, progression-free survival (PFS) and OS, in both patient groups, and the late toxicities of intensity modulated radiotherapy (IMRT) with those obtained with two-dimensional radiotherapy (2D-RT) or three-dimensional conformal radiotherapy (3D-CRT) in nasopharyngeal carcinoma. The IMRT technique has shown superiority in OS as well as in tumor control. There was also a lower rate of trismus and temporal lobe neuropathy in the IMRT radiotherapy treated patients 11 .
CONCLUSIONS The IMRT technique has become a therapeutic standard in curative treatment, proving the ability to reduce acute and late toxicity and superiority in survival and loco-regional control in nasopharyngeal cancer. The VMAT technique offers a reduction of the irradiation time, avoiding the negative radiobiological consequences of the prolonged dose delivery time, increasing the patient's comfort level and limiting the risk of ballistic errors in dose delivery by reducing the immobilization time. Combining the dosimetric qualities of the IMRT «step and shoot» plan with the advantages of the VMAT treatment in reducing the delivery time could replace the IMRT method in the treatment of HNSCC. Compliance with ethics requirements: The authors declare no conflict of interest regarding this article. The authors declare that all the procedures and experiments of this study respect the ethical standards in the Helsinki Declaration of 1975, as revised in 2008 (5), as well as the national law. Informed consent was obtained from all the patients included in the study.
2,122.4
2019-09-25T00:00:00.000
[ "Medicine", "Physics" ]
A Values-Based Approach to Exploring Synergies between Livestock Farming and Landscape Conservation in Galicia (Spain) The path to sustainable development involves creating coherence and synergies in the complex relationships between economic and ecological systems. In sustaining their farm businesses farmers’ differing values influence their decisions about agroecosystem management, leading them to adopt diverging farming practices. This study explores the values of dairy and beef cattle farmers, the assumptions that underpin them, and the various ways that these lead farmers to combine food production with the provision of other ecosystem services, such as landscape conservation and biodiversity preservation. This paper draws on empirical research from Galicia (Spain), a marginal and mountainous European region whose livestock production system has undergone modernization in recent decades, exposing strategic economic, social and ecological vulnerabilities. It applies a Q-methodology to develop a values-based approach to farming. Based on a sample of 24 livestock farmers, whose practices promote landscape conservation and/or biodiversity preservation, the Q-methodology allowed us to identify four ‘farming styles’. Further analysis of the practices of the farmers in these groups, based on additional farm data and interview material, suggests that all 24 farmers valorize landscape and nature and consider cattle production and nature conservation to be compatible within their own farm practices. However, the groups differed in the extent to which they have developed synergies between livestock farming and landscape conservation. We conclude by discussing how rural development policy in Galicia could strengthen such practices by providing incentives to farmers and institutionally embedding a shift towards more diversified farming and product development. Introduction Although the globalization of food supply chains has brought benefits for consumers, such as year-round availability of food at relatively low prices [1,2], this has come at considerable, albeit hidden, costs which include an acceleration in climate change, increased risks to public health and the depletion of scarce resources [3], challenges that are especially marked with livestock farming. These negative impacts have led to calls for a territorially-rooted approach, which is better suited to adaptors. The difference is that our sample contains farmers who are actively constructing synergies between livestock farming and the available physical and human resources ('eco-pioneers') as opposed to those adopting externally designed technologies. In our research we tested how producers value and link or look for synergies between livestock farming and landscape conservation. We conclude that all the respondents in the sample consider that integrating the production of ES is compatible with their farm practices. However, the farmers differed in terms of the means and abilities to improve synergies between ES and their productive activities. Policy programs should address these differences if they are to be effective. A Value-Based Approach From a socio-economic perspective, farmers tend to perceive and express the natural environment in terms of the monetary revenue that the values it contains generates. Farmers have to earn a living, and their farm strategies are usually based on an economic calculus, which implicitly or explicitly results in trade-offs between economic and environmental assets. 
Alongside food provisioning, farmers (can) provide a range of other ecosystem services but usually they optimize farm production according to what 'adds up' and to what 'remains below the line'. Some of their activities protect and enhance natural resources (e.g., improving soil structure or maintaining hedgerows), while certain forms of farming have a negative impact on the environment (e.g., increased specialization of production and dependency on external resources, such as fertilizers and pesticides, which can pollute soils and water resources). Yet, current and future adaptations to the emerging environmental and resource vulnerabilities may lead to adjustments in land-use and farm practices that reconnect man and living nature, and result in (and stem from) a broader understanding of 'economic value'. Humans can play a role in improving the natural environment, and when they do so, this results in and represents 'objectified and accumulated labor' [24], or 'ecological capital' [25]. Government representatives, researchers and other intermediaries need to understand these (potential) environmental benefits, and how an increase in endogenous natural resources (improvement of the stock and/or quality of nature in an area) can be achieved [25][26][27] through co-production with nature [28,29], also conceptualized as 'transformative values' [30]. In this context it is useful to situate the provision of ES in relation to wider rural development, and, in order to identify departure points for successful territorial strategies, to study farm heterogeneity. One can identify three distinct, yet mutually interdependent, aspects that shape the heterogeneity of farms [31]: • notions or ideas about 'how to farm', i.e., the drivers and motivations for farming that are based on a farmer's reality and needs and his or her cultural beliefs; • actual farm practices, the strategic actions that are an expression of those beliefs; and • the different kinds of internal and external relationships, such as those with markets, technology, and administrative and policy frameworks [31,32]. Van der Ploeg and Ventura [33] (p. 23) have argued that farm practices result from the "goal-oriented, knowledgeable and strategic behavior of actors" which is framed, and can be either blocked or facilitated, by the institutional environment [31,34,35]. The values of farmers and other stakeholders (anthropocentric values) and the environment (intrinsic values) mutually influence each other, in what is sometimes referred to as the 'convergence principle' [15], and shape the development trajectories, which can follow different directions [36,37]. The turn to multifunctionality and more localized production and consumption is often a response to the negative effects of globalization [8,16,38]. Over the past decade, scholars have identified how the negative effects of globalization have encouraged farmers to participate in (or even initiate) programs to enhance ecosystem services [39]. Shifting from production-oriented land-use and expanding the provision of multiple ES often fosters a progressive understanding that the 'local' also, and perhaps paradoxically, contains a strong element of connectivity (among stakeholders in the region and with the 'outside world') [40] and a realization that food systems are not inevitably shaped by external forces but can be created and actively reshaped through changing local practices [13,41].
This, in turn, deepens actors' understanding of how to combine human and non-human elements in order to achieve these changes [42]. There are now many examples of strategies for successfully linking livestock farming with the maintenance of biodiversity and even technical guidelines for doing so [43]. Scholars have also examined why voluntary agri-environmental programs sometimes fail to meet their objectives [44] and often do not bring about the anticipated changes in farmers' behavior or motivations [45]. The changes needed in farm management require the creation of coherence and synergies between economic systems and the environment, at the level of the individual farm, the regional economy and in relation to wider contexts (e.g., policy frameworks, markets) [13]. This means that we should not solely consider farm practices', which include the management of the fields and biodiversity and the farmer's relations with the farm animals, but should adopt a broader view that encompasses markets and regional policy frameworks. This implies that the focus should not be strictly limited to the 'local' but, following Massey [46], should be considered as an 'activity space': a spatial network of links and activities within which an actor (in our case: the farmer) operates. The concept of activity space describes an assemblage of spatial practices that transcends the local-global duality [40], and includes all actors (animate and inanimate) that shape the farm and its trajectory. The concept offers a heuristic device for studying the socio-spatial connections between the physical and human resources for production and consumption [40,47] that frame farmers' strategic actions and how farmers value, and give significance to, the natural environment. In the remainder of this paper we use the Q-methodology to explore how multifunctional livestock farmers in Galicia value ES and perceive the combination of productive activities (food and fibers) and socio-cultural services (landscape and biodiversity). We then discuss the different ways in which these livestock producers view the (potential) synergies between livestock farming and providing ecosystem services and how they adapt their farming practices to make use of the physical and human resources that they have access to. Materials and Methods In Galicia, an autonomous region in northwest Spain, the socio-spatial connections between the physical and human resources for production and consumption have been oriented towards economically optimizing livestock production. Since the 1980s, milk production in areas with good conditions (in terms of soil, climate, and slope) has increased, whilst dairy farming in mountainous areas has virtually disappeared [48]. These areas are becoming economically vulnerable, due to depopulation and the ageing of the remaining population. This is accompanied by a high level (at least 20 per cent) of land abandonment; especially in mountainous areas [49] where the total surface area used for extensive grazing and for crop production has diminished significantly [50]. Galicia is extremely rural (about 97% of its total area is defined as rural) and more than half of its population owns land. Out of a population of around 2.8 million there are 1.6 million land owners, although less than 65,000 of these landowners can be classed as farmers. The average landowner has 1.8 ha of land, spread across an average of seven plots, creating a large 'minifundio' (smallholding) sector [50] (in total Galicia has 11.4 million plots of land). 
Small farm sizes and the small and scattered pattern of field parcels limit farmers' abilities to run profitable farms. Out of a total surface of almost three million hectares of land in Galicia, almost 400,000 ha are cultivated with crops and just over 300,000 ha are permanent grassland, so the total Utilized Agricultural Area is less than a quarter, with forests occupying a further 45% [50]. Table 1 provides selective data about the intensification of the Galician dairy sector over recent decades. It illustrates the dramatic decrease in the number of dairy farms and the land used for dairy farming, whilst total milk production almost doubled between 1982 and 2009. Since the number of cows has remained more or less the same, milk production per ha, per cow, and per farm has increased. The intensification of dairy production was largely driven by the (until recently) attractive price ratio of milk to animal feed and the relatively high prices paid to milk producers until the second half of 2013 [50,53]. The relatively high dependence on purchased animal feed among Galician livestock farmers [48], extreme fluctuations in feed prices on the world market, and the dependency of Galician farmers on these external inputs (dependence on animal feed is a major contributor to Galicia's negative agrifood trade balance) made many livestock producers economically vulnerable. Mountain farmers (at altitudes between 800 and 1400 m) face less favorable natural conditions, and have half the number of animals on twice the amount of land (0.6 livestock units per ha) compared with lowland farmers. Only a few mountain farmers have dairy cattle, and, although not a statistically significant sample, subsidies make up over one-third of their farm revenues. In addition, the average family size is higher and owners are older [48], reflecting the general demographic of rural Galicia [50]. The income that mountain farmers derive from livestock production is often not the only, or most important, income source (which is often supplemented with the retirement pensions received by household members), while the incomes of many lowland farmers (often dairy farmers) largely depend on revenues from livestock production [48]. Galicia has around 10,000 dairy producers and 350,000 dairy cows and accounts for around 40 per cent of total Spanish milk production [53]. In this context, the vulnerabilities mentioned in the previous paragraphs challenge the competitiveness and future of Galician livestock production. The abandoned land (hundreds of thousands of hectares of land that could potentially be an endogenous resource base) could help livestock producers in the lowlands to significantly reduce their costs (e.g., through accessing nearby land and reducing external fodder input), in combination with a different farm optimization [54]. In addition, a revival of livestock and crop production in remote mountainous areas could significantly reduce the risk of forest fires, which have been increasing as a result of land abandonment [50,55]. Farmers' opinions or ideas about how they might farm better may differ from how they actually operationalize their farms and livestock production, due to structural conditions, such as laws and a lack of market opportunities and public support schemes. Since there are very few formally supported agro-environmental schemes in Spain, we set out to explore the potential for farmers adopting management strategies that support ecosystem services as a result of their socio-cultural values [54,56].
To do so we applied the Q-methodology, a simple and fast exercise in which respondents are asked to reply to a number of statements, the responses to which can help classify them as following one of several management styles. Respondents respond to a selection of statements (the 'Q-sort') on a scale ranging from 'strongly agree' to 'strongly disagree' in interviews that last approximately 40 minutes [22]. The list of statements was built from an in-depth study of the main features of the Galician farm sector and the general landscape characteristics that involved interviewing key-informants (researchers, government representatives and farmers), participating in a regional event on the future of rural Galicia ('Encontro Rural Imaxinado: do presente ao future porvir', held 13 November 2015 in Lalín) and carrying out additional desk studies (literature, reports, and newspaper articles on Galician farming). Together this provided us with the necessary background information to develop a set of statements, which was tested in pilot interviews and subsequently reduced from 54 to 49 statements (see Appendix A). The second stage consisted of the application of the Q-methodology and additional on-farm interviews that provided us with farm data in order to deepen our understanding of the interrelations between livestock farming and landscape conservation. Since we could not draw on a list of farmers participating in agri-environmental schemes we selected our sample from recommendations from key-informants about farmers who were pursuing more ecological farming strategies and seeking to create synergies between livestock farming, landscape conservation, and biodiversity preservation. Twenty-four farmers participated in the second stage (of whom 21 were full-time farmers). The sample was made up of five organic dairy farmers (DO), six organic beef cattle farmers (BO), four conventional dairy farmers (DC), and nine traditional beef cattle farmers (BC). The farmers (some women, but mostly men) in the sample aged from 26 to 70 years old. Farm sizes ranged from 35 to 100 ha for beef farms and 30 to 100 ha for dairy farms. The number of beef cattle varied between 25 and 73 head at individual farms and between 114 and 324 at farms grazing communal land (a special feature of Galician farming). The number of milking cows at the dairy farms ranged from 21 to 90 head, and total milk production from about 200,000 to 800,000 kg per year. In general dairy farmers had a relatively high milk production (between 7500 and 8500 kg milk per cow/year) on both smaller-and larger-scale farms. The sample included specialized beef and dairy producers and farmers who had diversified their farm production. Among the diversification activities were horticulture production (onions, tomatoes), processing raw materials (milk and meat), and selling farm production (raw and processed materials such as cheese and beef products) through short food supply chains. In some cases farmers sold their produce directly to consumers, in other cases they sold specialty products through a cooperative. All the respondents were first asked to organize the Q-sort in three simple piles: 'agree', 'neutral', and 'disagree'. Next, farmers scored the statements in a grid scale that forces the statements into a quasi-normal arrangement, with −5 represented 'strongly disagree' and +5 'strongly agree'. Scores around zero meant that farmers were indifferent to that statement. 
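As a purely illustrative sketch (the study itself used PQMethod 2.35, as described next), the code below shows how Q-sort score matrices of this kind are typically inter-correlated, factored by principal components with the eigenvalue > 1 criterion, and Varimax-rotated; the routine is a generic implementation with hypothetical variable names, not the one used in the paper.

```python
# Generic Q-methodology factor extraction: correlate respondents' Q-sorts,
# extract principal components, keep factors with eigenvalue > 1, rotate.
import numpy as np

def q_factor_analysis(scores):
    """scores: (n_respondents, n_statements) array of -5..+5 Q-sort scores."""
    corr = np.corrcoef(scores)                    # respondent-by-respondent correlations
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]             # sort factors by decreasing eigenvalue
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    k = int(np.sum(eigvals > 1.0))                # Kaiser criterion: eigenvalue > 1
    loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])
    return varimax(loadings), eigvals

def varimax(L, gamma=1.0, max_iter=100, tol=1e-6):
    """Standard Kaiser Varimax rotation of a loading matrix L (p x k)."""
    p, k = L.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        LR = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (LR ** 3 - (gamma / p) * LR @ np.diag((LR ** 2).sum(axis=0))))
        R = u @ vt
        if s.sum() < var_old * (1 + tol):
            break
        var_old = s.sum()
    return L @ R
```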
After that farmers briefly explained why they had selected the statements that they ranked as −5/+5. The PQMethod software version 2.35 [57] was used for the data analysis. The PQMethod software calculated the correlations among Q-sorts and performed a principal components analysis (PCA). The resulting factors were rotated using Varimax rotation. The default analysis produced eight un-rotated factors so, in order to select the number of factors to be rotated, the standard protocol for Q-methodology was followed in which an eigenvalue of more than 1.0 is needed to be statistically significant, as well as at least two of the Q-sorts loading significantly on that factor. This resulted in four significant factors, discussed in the following section. Results The outcome of the factor analysis using PCA and subsequent Varimax rotation is presented in Table 2. Four outcome factors were identified that represented the different attitudes of the farmers towards farm development: 'Diversifying Farmers' (A), 'Conventional Farmers' (B), 'Businessmen' (C), and 'Economical Farmers' (D). These factors represented 63% of the total variance and accounted for 21 of the 24 participants. All the organic producers (types DO and BO) were classified as 'Diversifying farmers' (factor A) and around one third of non-organic farmers also fell into this group. The other conventional farmers were distributed among the three other groups. The differentiation in scores on the statements resulted in the distinction of four patterns of coherence which were based on the on-farm interviews, and further analyzed and interpreted in terms of how farmers differently valorize the natural environment. However, we were not able to classify all the farmers in the sample: 12% of the sample did not fit in with any of the identified groups, whether due to error or because of their individual, idiosyncratic attitudes. Management Orientations and Ecosystem Service Provisioning All the farmers in the sample recognized the aesthetic and biodiversity values of the Galician landscape. However, the integration of these values into daily practices differed between farmers. Diversifying farmers (group A) most clearly expressed their interrelations with the natural environment, and built their farm strategy upon the locally-available resource base. They combine productive farm activities (milk and beef production) with other opportunities provided by the natural environment, such as locally marketing food products and agro-tourism activities. The conventional farmers (group B) farmed more intensively and perceived limitations in terms of the productivity of the land, which necessitated the use of artificial fertilizers to improve the productivity of their grassland. In comparison to the other groups, these farmers put less emphasis on their interrelations with landscape and nature. The businessmen (group C) ran larger holdings (in terms of the number of cows and the size of the holding) and use productive cow breeds. The main limitation that they perceive is access to land, which often leads them to rent land, either close by or at a distance. They focus on the financial aspects of feeding their cattle but pay little attention to cost reduction strategies (fertilizers, fodder input, medications, etc.). The economical farmers (group D) often apply a cost reduction strategy in combination with a less intensive farming practice and tend to keep less productive, but more robust, breeds. 
They value living in the countryside and look to make their living from farming. One notable feature of farmers in this group was the high value that they place on living in a family setting. Table 3 provides examples of how farmers in the different groups expressed their relationship to the landscape and nature, and how they valorize ecosystem services. Conventional farmers perceived activities such as closing nutrient cycles and improving soil quality to be less relevant for farm development than the farmers in the other groups (see for example farmers BC17 and DC8), whilst economical farmers (for example farmer BC22) look to optimize farm performance by boosting the resilience of their natural resource base. One example of this was to keep traditional breeds (e.g., farmers BO12 and BC22), an optimization strategy in which farmers keep hardier, but less productive, breeds of livestock that are better adapted to marginal, mountainous areas and can thrive on poor grazing land, with minimal intervention. Farmer BO12, who indicated that natural production processes and limits inspire his farm strategy, regarded his contribution to animal biodiversity and landscape aesthetics as also providing socio-cultural services. Farmer DO3, who diversified his strategy along similar lines, also related his way of livestock production to community building and the provisioning of recreational and aesthetic landscape values. While farmer DO3 adds value through direct farm sales and agro-tourism (offering rooms), the non-monetized assets he produces (e.g., the aesthetic landscape value) can be described as a socio-cultural service. Farmer BO12 was aware that environmental management schemes could increase the value of his beef production, as he would receive payments for certain ES in addition to the subsidies he currently receives by virtue of being an organic farmer. Farmer DC9 not only recognized the aesthetic and ecological value of hedges and trees, but also that they provide valuable ES: reducing erosion and the runoff of nutrients from the fields, attracting pollinators and insects that control pests and diseases, and providing shelter for the animals. The analysis shows that farmers in the diversifying group are more motivated than the others to provide ecosystem services, because they are more cognizant of their benefits, and also recognize their broader societal value. Table 3. Farmers' reflections on the provisioning of ecosystem services. Diversifying Farmers 'Hedges and trees delimit the plots and restrict the access of the cattle to other areas. They function as natural fences and also create a microclimate, protecting the cattle from the wind.' (DC9) 'I believe that our type of production is more oriented to improving our quality of life as well as the quality of life of our animals; hence, we aim to enhance the relationship with nature.' (BO12) 'Working with living beings is a huge responsibility. You cannot compare it to working with inert things. It is essential to give the animals proper living conditions as well as taking their welfare into account.' (DO2) Conventional Farmers 'A reduction in the use of chemicals would be better for the human health and the animals but in this area you need a lot of chemical fertilizers in order to produce enough fodder; we spend a lot of money on chemical fertilizers, since there is not enough manure to fertilize all the plots and because the cows are permanently in the paddocks, there is no chance to collect the manure.' 
(BC17) 'Cows can get sick when they eat pasture that has been sprayed with pesticides but it is not profitable to convert to organic production.' (DC8) Businessmen 'We have too many cows but not enough land to maintain them hence we have to rent more land for the cattle, so we rent land in an area close to Leon [called 'Las Brañas', distant pastures located just outside Galicia at an altitude of between 1000 and 1300 m where cattle can stay from the end of April to the end of November, AOT]. It is around eight hours from here by foot.' (BC16) 'If you have a good income but do not know how to manage it then the farm will have financial problems, so it will collapse. [ . . . ] I used to take the cattle to Las Brañas but I consider it too far and too time consuming so I now rent land closer to my farm.' (BC19) 'It is important to re-invest the money you earn in the farm.' (BC20) Economical Farmers 'I try not to use pesticides unless it is unavoidable, pesticides are not good for the environment, health nor the animals. I would prefer to lose a potato rather than to treat it with sulphate, but it's different with animals if they get sick I prefer to give them the antibiotic instead of letting the animal die.' (BC22). The dairy and beef cattle producers in our sample recognized the value of traditional landscape elements and made use of them in their production practices. The diversifying farmers provided the most detailed description of the benefits derived from traditional landscape elements. Apart from recognizing the benefits of supporting services, such as enhanced soil fertility and animal biodiversity (mainly livestock but some also spoke about pollinators and beneficial pests), they frequently mentioned the socio-cultural benefits provided by traditional houses and buildings, stone walls, and hedgerows. During the interviews we noticed that quite a few farmers rented nearby plots which, given the Galician context of small, scattered plots, is often not easy to do. One conventional beef producer in a mountainous area had been able to rent land close to his farm, which meant he no longer had to take his cattle to more distant land, higher in the mountains. This tendency, of allowing upland pastures to revert to forestry poses a dilemma. From a societal point of view it increases the risk of forest fires devastating the upland ecology. Yet from the individual farmers' point of view it is completely logical as driving the (beef) cattle to upland pastures and regularly checking on their welfare is extremely time consuming. Dairy farmers do not have the option of summer grazing their livestock on distant pastures, but the high ratio of cows to land poses other problems: specifically, transporting grass onto the farm. One large-scale dairy farmer planned to address this by reducing the number of cows and using their milk to make cheese on-farm in order to increase the value added per kg of milk. Discussion Attempts are being made to diversify rural economies through secondary and tertiary activities (such as services, tourism, SMEs, technology, and industries). While these activities are becoming increasingly important as sources of incomes and jobs, primary activities (agriculture and forestry) still largely determine rural land-use patterns. 
Given the gradual decline in production subsidies and the shift in European policies towards promoting more balanced and more sustainable territorial development, farmers face the challenge of adjusting their farm businesses so that they are less reliant on productivist agriculture. One way of doing this is by becoming involved in conserving the landscape and biodiversity. This can bring multiple benefits; it can: • make farmers more reliant on their own resource base and less so on purchasing inputs (thereby reducing their costs); • (under the right circumstances) be a direct source of income (through 'Pillar 2' subventions); and • improve the attractiveness of the rural areas and provide the basis for other rural development activities from which farmers can, directly or indirectly, benefit. Our study shows that dairy and beef cattle farmers involved, to some extent, in providing ecosystem services have different perceptions about the benefits and potential of doing so. Understanding this heterogeneity in farmers' perceptions, and their farm practices, is useful in shedding light on possible departure points for promoting more sustainable agricultural practices in Galicia and other marginal European agricultural regions. Farming styles research is a well-established approach for studying farm heterogeneity. According to farming style theory, farm practices are the expressions of the strategic actions of actors, and are influenced by cultural beliefs [31]. In order to identify differences in perceptions and attitudes about the natural environment among Galician livestock producers we applied Q-methodology, a research tool that merges quantitative and qualitative techniques in the analysis of subjectivity ('viewpoints' or 'discourses') [58]. Its quantitative aspect makes use of statistics and mathematical techniques in data collection and analysis, and its qualitative component focuses on respondents' values and beliefs. This allowed us to group the farmers into different farming styles, according to their values and goals, and, to some extent, to grasp how these values influence their practices. We did experience some limitations with applying the Q-methodology. Since Q-methodology involves farmers responding to a pre-determined set of questions, there is a risk that it does not fully capture farmers' attitudes and perceptions but that these are pre-filtered through a structure established by the researcher. The methodology delivers a classification of management perceptions and attitudes but does not require respondents to clarify the classification (in a way, this influences the classification). In our research we asked the farmers to briefly explain the two extremes of their individual Q-sorts, but this did not yield clarifications of the differences in attitudes and perceptions among the farmers in the sample. By contrast, farming style research combines analyses of farm economic data and in-depth interviews in which farmers are asked to explain how their farm management strategies differ from those of their peers, thus inviting them to think about their farming strategy from a comparative perspective. As such, farming style analysis provides a tool to study farmers' perceptions, realities and development trajectories, and how these compare to those of others. 
As such it captures the crucial interrelations between farmers' cultural ideas about how to farm well, the farming practices they employ and the networks (the market, technology, and administrative and policy frameworks) in which farmers and their farms are embedded [31]. Further, while the sorting of statements enabled us to classify respondents according to their attitudes and values towards farm management, it was not sufficient to draw hard conclusions concerning their influence on farming practices. While farmers' ideas and motivations are often congruent with farm practice we could not verify whether the differences in value orientations revealed by the Q-methodology led to farmers' adopting different farming practices. This leads us to conclude that there is a need to complement this research tool with other methods that allow for better identifying differences in farming practices and farmers' interactions with institutional environments. This would enable future research to ascertain how empirically-grounded differences in farmers' value orientations influence their practices and the institutional embedding of their combined provisioning of food and ecosystem services (ES). One theme to emerge from the research is how farmers, in order to reduce their vulnerability and keep their farm business viable, struggle with balancing the land-animal ratio. This is expressed in terms of both the number of animals in relation to available farmland, as well as matching the cow breed with the grassland conditions (less productive cow breeds that are more robust and can be more easily maintained in less favored areas). Farmers' ideas and motivations are often congruent with their farm practices, and stem from their belief that they can make a living from the resources available to them, which includes their cows, machines, and the locally-available biodiversity. Yet, these resources only become valuable when farmers recognize their potentials, and how to turn these potentials into benefits. This can result in monetary income (through market exchanges) but can also be re-invested in the farm (strengthening and sustaining it). The differences in farmers' attitudes and strategies stem from the resources that they perceive to be available to them: both externally (such as artificial fertilizers or credit) and internally (the natural resource base that farmers turn into ecological capital). When farmers face difficulties in accessing land, they are more likely to buy more feed and fodder and, in order to finance these inputs, are often led to increase the farm scale (either in terms of land or the number of animals), a strategy that is often in the interests of upstream industries and the banking sector. Farmers with access to sufficient land appear to develop their farm differently from farmers who lack such access. We noted a similar split between those farmers who rely on veterinary services and medicines and accept the monetary costs of keeping animals productive (according to scientifically-based risk reduction) and those who work towards, and rely on, improving the soil-plant-animal-manure balance in their own fields in order to improve resistance to infections and diseases (a strategy based on farmers' experiences, as one of the farmers explained). 
Although all the farmers in our sample have some interest in strengthening their farms' interrelations with landscape and nature, the farmers who keep hardier but less productive breeds (a cost reduction strategy that lowers reliance on externally-provided inputs and technologies) were more enthusiastic and optimistic about strengthening these interrelations. We also noted that, for very practical reasons, farmers tend to abandon more distant plots when they can do so (either by acquiring access to more conveniently located plots or by restructuring their farm business to allow them to abandon the more distant plots). This raises questions about the ongoing problem of land abandonment on remoter, upland areas and whether this can be countered through policy measures. While farmers' attitudes towards benefitting from potential synergies between livestock production and landscape conservation will play a role in this, this will far more critically depend on the Spanish and Galician authorities tapping into European funding mechanisms to protect these ecologically, economically and socially vulnerable areas. Finally, we explore how policy can further encourage the provision of ES, and landscape conservation in particular, by being more explicitly and directly aligned with the differentiated value orientations and practices of Galician farmers. To really understand which farming styles are best suited to delivering ecological and social services, and under what conditions, one also needs to consider three policy issues: To this end Spain's Rural Development Plan should include more measures to support land-use activities that maintain and enhance the natural resource base and strengthen the provision of ecological and socio-cultural services. There are mechanisms within the European policy framework to encourage farmers to collaborate (Measure 16-collaboration) and jointly design land-use strategies (Measure 10-environmental protection). Such mechanisms enable farmers to engage in landscape protection and encourage biodiversity in ways that transcend the limits of the individual farm unit-and the strategies available to individual farmers (such as reducing the use of agrochemicals). The funding that such mechanisms can bring can provide a valuable counterbalance to the increased volatility of market prices that often threatens farms' economic security. Such schemes, that reward the provision of public goods, alongside the adoption of cost reduction strategies that involve less use of external inputs and technologies, are increasingly important elements in ensuring the continuity and success of livestock farming in other European regions [27]. Apart from providing some degree of financial incentive to farmers to adopt more sustainable farming practices, agro-environmental, and related schemes also provide crucial structures that enable such practices to become embedded in local farming repertoires, especially in the initial stages. Additionally, while the European Rural Development Regulation [59] allows space for measures that encourage the restoration, preservation and enhancement of ecosystems (priority 4) such measures are currently absent from Spain's Rural Development Program (RDP) [60]. The Galician RDP acknowledges this priority and has made a start in pursuing priority 4 measures [61] (pp. 
169-170), but these could be extended by creating platforms for local groups of stakeholders (under Measure 16, 'Cooperation'), in order to identify the types of nature, landscape, and biodiversity that they wish to protect and develop and to design strategies for so doing (under Measure 10, 'Environmental Protection'). The current Galician policy framework also provides other opportunities for promoting more multifunctional livestock production and more localized production and consumption patterns. EU Rural Development priorities [59], such as knowledge transfer and innovation (priority 1), farm viability and competitiveness (priority 2), and food chain organization and risk management (priority 3), are all mentioned in the Galician RDP, and could help farmers to adjust their land-use and farm practices so as to reduce environmental and economic vulnerabilities. At the same time, the inclusion of new optimization features (ecosystem services beyond food production alone, included in priority 4) has the potential to enhance farm productivity and incomes, which could improve the economic viability of dairy and beef cattle farmers in Galicia and improve the attractiveness of the region's rural areas for tourism and for new residents. Establishing structures to enable participatory design of landscape conservation (under Measures 10 and 16) could have a beneficial impact on the economic performance of farms through providing support for diversification (priority 3, 'New Investments in Small-Scale Processing, Marketing and Product Development' [61] (p. 168)) and through adding value to primary production by linking landscape-specific biodiversity to the reputation of products and their quality features. In this way the first three priorities of the Galician RDP could be used to catalyze a shift in the strategic behavior of farmers, aligning anthropocentric values (of farmers and other stakeholders) and intrinsic values (the natural environment), and deliver ecological and social services through product development. Acknowledgments: Case study research for this article was carried out in the context of the 'Plan Galego de Investigación, innovación e crecemento 2011-2015' of the Xunta de Galicia. The Xunta de Galicia provided financial support for the postdoctoral project 'POS-B/2016/028', which covered the costs of writing and language editing and of publishing the article in an open access format. We are grateful to all who commented on earlier versions of this article. Our special thanks go to the anonymous reviewers for their useful suggestions on how to improve this article. Many thanks go to Nicholas Parrott (TextualHealing.eu) for providing English language editing and editorial advice. Responsibility for the views and the argumentation provided in this article lies with the authors. Author Contributions: All authors contributed to the conception and design; Amanda Onafa Torres, Paul Swagemakers, and Lola Domínguez García were involved in acquiring the field data, and all the authors were involved in data analysis and interpretation. Paul Swagemakers drew up the original draft paper, with the other co-authors making significant intellectual contributions. Conflicts of Interest: The authors declare no conflict of interest. Table A1 contains the statements used in the Q-sort exercise done with the 24 farmers in the sample. The statements were presented to the farmers in Spanish (Table A1 presents them translated into English). 16. 
I only intensify my milk/meat production with resources I already have. 17. By improving the fertility of my cattle I will improve the quality of the milk/meat and my income will also increase. 18. I improve the quality of my pastures, in order to raise the milk quality and my income. 19. I am satisfied with the present level of development on my farm and I intend to develop it further by renting some more land. 20. I am satisfied with the amount of land I have to farm now; and since land is very scarce in this area, nobody wants to rent it out or sell it. 21. I am not interested in having a big farm, or increasing my production. 22. The land I own is enough to produce, so I do not need to rent more land. 23. The land I have is not enough to produce, so I rent most of the land. 24. The land I have is made up of several scattered plots, which increases my workload and makes it unattractive to increase the number of animals. 25. My farm produces (most of) its own fodder. 26. I sometimes/often employ external labor. 27. Family members come and help with the tasks on the farm and provide the main labor force. 28. Government loans and subsidies are very important and/or helpful. 29. My goal is to reduce my workload and improve the quality of life of my family. 30. A good farmer concentrates his energies on the farm and is not sidetracked by interests or activities outside the farm. 31. The best part of farming is to have your family working alongside you. 32. I am a farmer because I like what I do. 33. I am a farmer because it is the family tradition; the family has owned the farm for many generations. 34. Farm work needs to be done but there is no great joy in it. 35. My long-term goal is to learn how to manage resources in cooperation with nature. 36. I consider it important to maintain a basic relationship between animals and human beings. 37. In order to maintain healthy animals, a good farmer considers three levels: the physical, the biological and the social. 38. Organic farmers feel more satisfaction knowing that they are doing things 'right'. 39. Farm tasks must take priority over family time. 40. The cattle spend all their time in the stable. 41. I would prefer to have an extensive farm. 42. Calves and cows graze freely in the paddocks and are able to eat as much as they want. 43. A good farmer gives the animals proper care, considering them as living beings and part of nature. 44. Farmers today need to be sensitive to the environment by reducing their use of agro-chemicals. 45. I am doing everything I can to be environmentally aware and conserve the land I farm. 46. Working close to nature is difficult and unrewarding. 47. I consider reducing pesticide use as one way to improve living and working conditions on the farm. 48. I want to increase biodiversity on my farm even if it means taking land out of production. 49. I do not know the effects that pesticides may have on my farm.
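The factor-analytic procedure reported above (correlating the 24 Q-sorts, extracting principal components, retaining factors with eigenvalues above 1.0, and applying a Varimax rotation) was performed with PQMethod. The Python snippet below is only a minimal, illustrative sketch of that kind of pipeline, with randomly generated placeholder scores standing in for the actual Q-sort data; it is not the analysis used in the study.

    import numpy as np

    def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
        """Orthogonal Varimax rotation of a (variables x factors) loading matrix."""
        p, k = loadings.shape
        rotation = np.eye(k)
        criterion = 0.0
        for _ in range(max_iter):
            rotated = loadings @ rotation
            # SVD of the gradient of the Varimax criterion.
            u, s, vt = np.linalg.svd(
                loadings.T @ (rotated ** 3
                              - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0))))
            rotation = u @ vt
            new_criterion = s.sum()
            if new_criterion < criterion * (1.0 + tol):
                break
            criterion = new_criterion
        return loadings @ rotation

    # Placeholder Q-sort scores: 24 respondents x 49 statements, values in [-5, +5].
    rng = np.random.default_rng(0)
    qsorts = rng.integers(-5, 6, size=(24, 49)).astype(float)

    corr = np.corrcoef(qsorts)                      # correlations among the 24 Q-sorts
    eigenvalues, eigenvectors = np.linalg.eigh(corr)
    order = np.argsort(eigenvalues)[::-1]
    eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

    n_factors = int((eigenvalues > 1.0).sum())      # keep factors with eigenvalue > 1.0
    loadings = eigenvectors[:, :n_factors] * np.sqrt(eigenvalues[:n_factors])
    rotated = varimax(loadings)                     # rows: respondents, columns: rotated factors

The additional requirement that at least two Q-sorts load significantly on each retained factor would be checked on the rotated loadings before interpreting the factors; that step is omitted here for brevity.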
9,482.2
2017-10-31T00:00:00.000
[ "Environmental Science", "Agricultural and Food Sciences", "Economics" ]
A Time-continuous Compartment Model for Building Evacuation We propose here a general framework to estimate global evacuation times of complex buildings, and to dynamically investigate the dependence of this evacuation time upon various factors. This model relies on a network, which is in some way the skeleton of the building, the nodes of which are the bottlenecks or exit doors. Those nodes are connected by edges which correspond to portions of egress paths located within a given room. Such models have been proposed in a discrete setting. The model we propose takes the form of a continuous evolution equation of the differential type. It relies on a limited number of variables, namely the number of people gathered upstream of each node, together with the number of people on their way from a node to the next one. The basic parameters of the model are the capacities of doors, and the time needed to walk from one node to the next one. In spite of its macroscopic character (the motions of pedestrians are not described individually), this approach allows one to account for complex and nonlinear effects such as capacity drop at bottlenecks, congestion-induced speed reduction, and possibly some dispersion in evacuees' behaviors. We present here the basic version of the model, together with the numerical methodology which is used to solve the equations, and we illustrate the behavior of the algorithm by a comparison with experimental data. Introduction Providing accurate and robust estimates for evacuation times in complex buildings is a long-term challenge in public safety. A common dilemma resides in the opposition between microscopic and macroscopic approaches. Microscopic descriptions (see e.g. [5,10,11]) allow for a precise description of evacuees' interactions, possibly accounting for non-uniformity of individual behavior (social tendencies, speed, ...), but they lead to higher, possibly prohibitive, computational times. Besides, they call for an accurate knowledge (at least statistically) of people's characteristics, which is most of the time out of reach. On the other hand, macroscopic models ([6,8]) handle the crowd as a continuum, represented by a local density. The evolution of this density typically follows a conservative transport equation which expresses the "people conservation", and the core of the model lies in the manner in which the effective velocity is determined, based on individual tendencies and local density. This approach makes it possible to account for very large numbers of people at a reasonable computational cost. Yet, most models of this type are not able to reproduce some observable effects, like the Faster-is-Slower effect ([4]), the Capacity Drop phenomenon ([1]), or the fluidizing role of an obstacle ([12]). Both approaches rely on a fine description of the behavior of individuals and their interactions with neighbors (finite number of finite-size individuals for the microscopic setting, infinitely many point particles for the macroscopic setting). If one aims at predicting the evacuation time of a complex building, it may be of interest to use a coarse-grained description of the crowd, based on quantities that are directly observable and measurable. This approach, which we may call systemic, is based on a decomposition of the building into various compartments, corresponding to distinct areas (like rooms, halls, or corridors). Those compartments are connected by doors, and the balance of global headcounts in compartments is driven by fluxes between them. 
The model we propose here is based on a network defined as follows: the nodes of the network are the exits of the various compartments. Each compartment has a certain number of entrances, each of which is the exit of a compartment upstream. Each entrance of a given room is connected by an oriented edge to the exit. We shall follow the convention that the exit points to the entrances, in such a way that the arrows (oriented edges) express a dependence relation. More precisely, a node i points to a node j if i is influenced by the situation of j (possibly in the past). The model which is presented here is not new in its constitutive principles. In particular, the Capacity Constrained Routing Approach presented in [7] is based on the same type of network. The same type of model is also proposed in [3] to represent car traffic networks. The main novelty of the approach lies in the nature of the model, which is continuous in time, whereas previous approaches were essentially discrete. This continuous character gives a sound theoretical grounding to the approach, which can be used in particular to design rigorous methodologies for parameter identification. It also makes it possible to tune the time step, which is in our setting a discretization parameter, depending on the situation which is considered. While allowing for faster-than-real-time computations, the large granularity of the description level a priori rules out the possibility of properly describing small-scale interactions between individuals, and this model may not be used to investigate in any way the cause of the aforementioned phenomena. Yet, the most relevant parameters (in particular capacities and node-to-node travel times) can be made dynamic, and allow for an account of phenomena like the Capacity Drop, or the reduction of the walking speed in case of congestion. Model description, mathematical formulation Let us start with a toy problem: A certain quantity of people is accumulated upstream of a single exit door. Since we aim at setting up a continuous model, applicable to large numbers of entities, we represent this quantity by a real number N ∈ R_+. The capacity of the exit, that is the maximal number of individuals which can go through it per unit time, is denoted by C. We denote by f = f(t) ≥ 0 the incoming flux, that is the upstream flux of pedestrians, and by Φ = Φ(t) ≥ 0 the instantaneous flux through the door. The incoming flux is assumed to be known. The balance at the door writes N′(t) = f(t) − Φ(t). The core of the model lies in the expression of Φ as a function of the dynamic variables N, f, and the static parameter C. By definition of the capacity it holds that Φ ∈ [0, C], and Φ = C whenever N > 0. When N = 0, Φ lies between 0 and C. Its value is f when f < C, but it may happen that Φ saturates to C if f > C. To sum up, the evolution problem can be written N′ = f − Φ, with Φ = C if N > 0 and Φ = min(f, C) if N = 0. The extension to a many-room building is built in a similar manner, by accounting for the fact that people flowing through some passage node reach the next one after some transit time, which is a parameter of the model. We denote by R the number of rooms/compartments, and by m_i the number of inlet accesses to room i. For n = 1, ..., m_i, we denote by α_i^n the index of the room upstream of access n to room i. The time spent by an individual to walk (in room i) from entrance n to the exit is T_i^n. Finally, Φ_i is the instantaneous flow rate of people through the exit of i. 
The problem then writes, for i = 1, ..., R, N_i′(t) = Σ_{n=1}^{m_i} Φ_{α_i^n}(t − T_i^n) − Φ_i(t), (2.2) with each Φ_i determined from N_i and the incoming fluxes as in the single-door case. Numerical solution In spite of non-trivial mathematical issues (see [9]), designing numerical algorithms to solve this sort of problem is straightforward. Let τ > 0 denote a time step. To simplify the presentation, we shall assume that the transfer times are whole multiples of this time step, and denote by T̄_i^n = T_i^n/τ ∈ N the corresponding dimensionless times. We shall furthermore assume that people are initially gathered in the neighborhood of exits. The approximation of N_i at time t_k = kτ is denoted by N_i^k. The scheme is actually simpler than the continuous model, since it simply expresses that, at some time t_k, the number of individuals walking through exit i between t_k and t_{k+1} is C_i τ whenever there are enough people available to achieve this full capacity, and N_i otherwise. In other words, the average flux in this time interval is either C_i or N_i/τ: Φ_i^k = min(C_i, N_i^k/τ), N_i^{k+1} = N_i^k + τ Σ_{n=1}^{m_i} Φ_{α_i^n}^{k−T̄_i^n} − τ Φ_i^k, i = 1, ..., R, where Φ_i^k is set to 0 for all k < 0 (no evacuation before the initial time), and R is the number of rooms. Assuming, as we did in the continuous setting, that all people are initially gathered upstream of doors, we supplement this system with initial conditions N_1^0, ..., N_R^0. We may add an extra equation to keep an account of people who have evacuated the building at time k. It is also straightforward, for each room i, to keep an account of the number N_{i,n} of individuals who are on their way to the exit of room i, coming from the exit of room α_i^n. Global people balance is straightforwardly obtained by summing up all discrete equations. Illustration: comparison to experimental data The model is tested in the configuration presented in [2], where a full set of experimental evacuations is presented. The topography is represented in Fig. 3.1 (left): it is made of 10 interconnected compartments. The underlying oriented network is represented on the left-hand side of the figure. Transit times are automatically computed as follows: the common target is defined as the gate upstream of the stairs, denoted by 2 on the figure. The geodesic distance to the target, which corresponds to the length of the shortest path (accounting for walls and obstacles) to the target, is quasi-instantaneously computed by means of a fast marching algorithm. More precisely, distances are computed at the centers of cells of a Cartesian grid which covers the whole domain, in a frontal way, starting from the target, where it is set to 0, and then propagating backward upstream through the domain. We refer to [9] (Chapter 8) for methodological and implementation details concerning those computations. The travel times are computed from those distances by setting the speed of pedestrians at 1 m/s. Isolines of this geodesic distance are represented in Fig. 3.1 (middle). To illustrate the evacuation, we also represent some egress paths computed with a microscopic model ([10]), based on an initial random distribution of individuals (Fig. 3.1, right). The time needed to get from one node to the other can then be computed as the difference between the geodesic distances divided by the speed of pedestrians. The initial number of individuals is 86, distributed over rooms 2 to 10 (circled black indices in Fig. 3.1, left). The headcount in each room is indicated by the numbers in red. The effect of various gate widths has been experimentally studied, and we reproduce with the present model the two scenarios, referred to as scenario 4 and scenario 5, respectively, as in [2]. 
In scenario 4, the exit gate is wide open (1.24 m), and the corresponding capacity is estimated at 2.78 persons per second. In scenario 5, the door is reduced to half its width (0.62 m), with a capacity of 1.72 persons per second. We run the numerical model in both situations, and we illustrate the results by representing the computed number of people gathered upstream of the exit gate versus time. Fig. 4.1 represents the plots in the two scenarios. The model makes it possible to recover the "state" of this sensitive point as the evacuation goes on. As expected (see the figure at the top), this number first decreases. Then, around time 8 s, people coming from upstream rooms start to reach this node. The gate continues to work at full capacity, but the incoming fluxes are higher, which explains the increase. The small irregularities on the curve between times 8 s and 17 s correspond to instants at which the first people coming from some given upstream room reach the gate node. Then, in a final phase (after time 25 s), all people still in the building are gathered at the gate, and the crowd flows out at full door capacity. The second plot corresponds to a smaller capacity: the initial decreasing phase is less efficient; then, in the second phase, the slope is higher (the incoming flux is the same as previously, but the outflow is smaller). We recover, as in the experiments, a larger evacuation time, around 50 s, compared to the 32 s of scenario 4.
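To make the discrete scheme of the previous section concrete, the following Python sketch implements the update Φ_i^k = min(C_i, N_i^k/τ) together with the delayed inflow from upstream exits. The network layout, capacities, lags and initial head counts below are hypothetical placeholders, not the geometry or calibrated parameters of the experiment reproduced above.

    def evacuate(capacity, inlets, n0, tau, n_steps):
        """Simulate the compartment network of the time-discretised model.

        capacity[i] : door capacity C_i of exit i (persons per unit time)
        inlets[i]   : list of (j, lag) pairs; people leaving exit j reach exit i
                      after `lag` time steps (lag = transit time / tau)
        n0[i]       : head count initially gathered upstream of exit i
        Exit 0 is taken here to be the final exit of the building.
        """
        rooms = len(capacity)
        n = list(n0)                      # N_i^k: people queued upstream of exit i
        flux = []                         # flux[k][i] = Phi_i^k (implicitly 0 for k < 0)
        evacuated = [0.0]
        for k in range(n_steps):
            phi = [min(capacity[i], n[i] / tau) for i in range(rooms)]
            flux.append(phi)
            for i in range(rooms):
                inflow = sum(flux[k - lag][j] for j, lag in inlets[i] if k - lag >= 0)
                n[i] += tau * (inflow - phi[i])
            evacuated.append(evacuated[-1] + tau * phi[0])
        return n, evacuated

    # Hypothetical three-exit chain: room 2 -> room 1 -> final exit 0.
    capacity = [2.0, 2.5, 2.5]                 # persons per second
    inlets = [[(1, 80)], [(2, 60)], []]        # (upstream exit, lag in time steps)
    queues, evacuated = evacuate(capacity, inlets, [15.0, 25.0, 30.0], tau=0.1, n_steps=1500)
    print(f"evacuated after {1500 * 0.1:.0f} s: {evacuated[-1]:.1f} persons")

Halving the capacity of exit 0 in this toy network lengthens the final full-capacity phase and increases the global evacuation time, which is qualitatively the behaviour observed when comparing scenario 5 to scenario 4.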
2,889.6
2018-09-12T00:00:00.000
[ "Computer Science" ]
Portrait of a colour octet New colour octets stand out among the new physics proposals to explain the anomalous forward-backward asymmetry measured in tt̄ production by the CDF experiment at the Tevatron. We perform a fit to tt̄ observables at the Tevatron and the LHC, including total cross sections, various asymmetries and the top polarisation and spin correlations, to find the most likely parameters of a light colour octet to be consistent with data. In particular, an octet coupling only to right-handed quarks gives a good fit to all measurements. The implications from the general fit are drawn in terms of predictions for top polarisation observables whose measurements are not yet very precise, and observables which simply have not been measured. Introduction Almost twenty years after the discovery of the top quark by the CDF and D0 Collaborations at the Tevatron, top physics has entered the era of precision measurements, with the large samples collected not only at the Tevatron but also at the Large Hadron Collider (LHC). Among the many measurements performed, only one of them, namely the tt̄ forward-backward (FB) asymmetry (see [1] for a recent review), showed a significant disagreement with respect to the Standard Model (SM) predictions [2][3][4][5][6]. This asymmetry can be defined as A_FB = [N(∆y > 0) − N(∆y < 0)] / [N(∆y > 0) + N(∆y < 0)], with ∆y = y_t − y_t̄ the difference between the rapidities of the top quark and antiquark in the laboratory frame. When this discrepancy first appeared [7], and especially when the deviations surpassed 3σ [8], it motivated a plethora of new physics explanations [9][10][11][12][13][14], as well as SM ones [15]. After the full Tevatron data set has been analysed, the situation is rather unclear. The updated CDF result in the semileptonic channel [16] still shows an excess, which is not confirmed by the D0 experiment [17], and the naive average of all measurements is 1.7σ above the SM predictions. The tt̄ lepton-based asymmetries A_FB^ℓ [18,19] and A_FB^ℓℓ [20,21] are above the SM predictions [6] as well. In the case of A_FB^ℓ the statistical significance of the deviation is around 1.5σ when naively combining results from the two experiments. On the other hand, most of the precision tt̄ measurements at the LHC have shown good consistency with the SM predictions and exclude some of the new physics models proposed, at least in their simplest forms. Among the surviving ones, a new light colour octet G exchanged in the s channel is the best candidate to explain the anomaly in case it corresponds to new physics: 1. When fitting the tt̄ asymmetry, it does not distort higher-order Legendre momenta of the cos θ distribution, also measured by the CDF Collaboration [22]. (Models explaining the excess with the exchange of light t-channel particles, for example a new Z′ boson, do.) 2. A colour octet can be consistent with measurements of the tt̄ invariant mass (m_tt̄) spectrum [23][24][25][26]. 
If either the couplings to the light quarks or to the top quark are axial, the interference with the SM is identically zero. If the resonance is within kinematical reach, it will show up anyway, unless it is very wide [27][28][29][30] or below threshold [30,31]. On the other hand, models with t-channel exchange of new particles lead to departures at the high-mass tail [32][33][34]. For u-channel exchange the deviations are also present but less pronounced. 3. It is compatible with top polarisation measurements at the LHC [35,36]; for example, the polarisation in the helicity axis is identically zero if the coupling to the top quark is purely axial. (Models where the coupling to the top has a definite chirality, for example colour sextets and triplets, predict too large a polarisation [37].) Furthermore, an octet G is compatible with the measured value of the top-antitop helicity correlation parameter C [36,38], which is currently 1.5σ below the SM prediction [39]. 4. It can fit, albeit with some parameter fine tuning, an asymmetry excess at the Tevatron and no excess at the LHC [40][41][42][43], or even an asymmetry below the SM prediction, if the couplings to the up and down quarks have different sign [44,45]. On the negative side, a light octet (which in this context means a mass of a few hundred GeV) can be copiously produced in pairs, with each octet decaying into two light jets. This would give an unobserved dijet pair signal [46]. The dijet pair excess can be avoided, but at the cost of introducing additional new physics to suppress the decays into dijets. In this paper we perform a fit to tt̄ observables to find the favoured parameter space of a light colour octet, to determine in the first place to what extent it can improve the global agreement with experimental data, in comparison with the SM. In addition, we explore potential signals in top polarisation at the Tevatron and the LHC, as well as in spin correlations. (Previous studies [37,47,48] have focused on specific points in the parameter space of octet couplings.) The method used for the fit and the observables used as input are explained in section 2. The results of the fit are given in section 3. In section 4 we use these results to give predictions for polarisation observables. Conversely, the possible impact of the upcoming measurements is discussed in section 5. In section 6 we draw our conclusions. Fit methodology In addition to its mass and width, a colour octet exchanged in uū, dd̄ → G → tt̄ has vector and axial couplings to the up, down and top quarks, g_{A,V}^u, g_{A,V}^d, g_{A,V}^t, totalling eight parameters. The ss̄ and cc̄ initial states do not contribute to the asymmetries because the parton distribution functions are the same for quarks and antiquarks, and the contribution to the cross section is marginal for reasonable values of the colour octet couplings; therefore, we set them to zero. The relevant interaction Lagrangian is given in [33]. We therefore make some simplifications to reduce the dimensionality of the parameter space, while maintaining a broad applicability of our results. In the first place, we select a mass M = 250 GeV, below threshold, and a large width Γ/M = 0.2, possibly resulting from new physics decays [46,49]. Then, in our fit we only use inclusive observables that are integrated over the full m_tt̄ spectrum, so that the dependence of our results on the particular mass value chosen is milder. 
For completeness, in the appendix we present the results of the fit in the limit of very large M, which are qualitatively very similar. The six couplings are not all independent parameters in the processes considered, since a rescaling of the light couplings by a factor κ and the top ones by a factor 1/κ gives the same amplitudes. Also, it is assumed that the coupling to the left-handed up and down quarks is the same, g_L^u = g_L^d. We therefore have only four independent parameters. All couplings have to be real to ensure the hermiticity of the Lagrangian, and we also choose g_A^u ≥ 0 without loss of generality. The couplings can be written in terms of four independent parameters. We only consider A ≠ 0, in which case the denominator of r_V is defined. That is, we consider that either the up or down quark coupling to G has an axial component, so that the interference term with the SM amplitude generates an asymmetry. The A parameter determines the 'overall' strength of the octet contribution to tt̄ production, and a 2σ global agreement with all measurements considered (see below) requires A ≲ 3. For r_V we consider 0 ≤ r_V ≤ 2, which turns out to be the region of main interest. (This restriction is also reasonable since large vector couplings to the light quarks might enhance dijet production in uū → uū, dd̄ → dd̄.) Note that for φ_l = π/4 one has r_V ≥ 1 in order to fulfill the equality g_L^u = g_L^d, whereas for φ_l ≠ π/4 smaller values are possible. The parameter space is scanned using a grid in the variables φ_l, φ_h, A, r_V of 4 × 10^5 points. For each parameter space point, a Monte Carlo calculation for pp → tt̄ is run using Protos [50] to find the new physics corrections to the observables considered. We use 10^5 Monte Carlo points for the Tevatron, 5 × 10^5 points for the LHC with a CM energy of 7 TeV and 5 × 10^5 points for the LHC with 8 TeV. This amounts to 4.4 × 10^11 evaluations of the 2 → 6 phase space and squared matrix element, which is computationally demanding. The observables used for the fit are collected in table 1. They comprise the total cross sections σ at the Tevatron and the LHC; the asymmetries A_FB, A_FB^ℓ and A_FB^ℓℓ at the Tevatron; the charge asymmetry A_C and dilepton asymmetry A_C^ℓℓ at the LHC; the polarisation P_z and spin correlation C_hel in the helicity basis at the LHC; and the spin correlation C_beam in the beamline basis at the Tevatron. The precise definitions of all these observables can be found in the corresponding references. For the parameter space points where the overall agreement is of 2σ or slightly above, a refined calculation of the tt̄ observables is made with higher statistics (2 × 10^5 points for the Tevatron and 2 × 10^6 points for the LHC at each CM energy), and the fit is repeated with these values. Fit results When considered globally, the agreement of the SM predictions with data is good, around 1.3σ for the 12 observables considered. Even when looking at the Tevatron and LHC asymmetries together, the agreement is within 1.3σ for six observables. But the still-intriguing feature is that the most significant deviations are found precisely in the three Tevatron asymmetries, for which the agreement is reduced to 1.8σ. A colour octet can significantly improve this, while maintaining or improving a good fit to the rest of the observables. The results are presented in figure 1, in terms of products of light and heavy couplings. Orange points correspond to 2σ global agreement and green points to 1σ agreement. 
We also mark 'best fit' points that have a global agreement of 0.5σ, a 0.5σ agreement for the six charge asymmetries, and individual agreement of 1.5σ for each observable. The upper left plot corresponds to the chirality for the top coupling. The preference is for an axial to right-handed coupling, which is welcome from model building since it avoids potential problems in low-energy B physics [64,65]. The upper right plot represents the axial coupling of the up and down quark. There is a preference for couplings of opposite sign, so as to fit the Tevatron and LHC asymmetries at the same time [44]. The lower two plots in figure 1 show the vector versus axial coupling of the up and down quark. There are two points to notice here. First, that the light quarks can have non-negligible vector couplings of opposite sign, in which case the interference contribution to the cross section has opposite sign in uū → tt and dd → tt. This may be achieved with nearly right-handed couplings, where also g u A ∼ −g d A , and corresponds to the central regions in the two plots. Second, there are disconected regions where there is a cancellation between linear and quadratic octet contributions to the cross section. These regions are allowed by the observables considered here but are not the most compelling from the point of view of model building. To conclude this section, we remark that the simple case of an octet with right-handed couplings to all quarks gives a good fit to all data, yet with only two independent parameters g u R g t R and g d R g t R . We collect in table 2 the predictions for the observables considered for the best-fit point g u R g t R ≃ 0.25, g d R g t R ≃ −0.5. Noticeably, the spin correlations can be driven below the SM prediction. Points with C hel closer to the SM value are also possible, but are not favoured by the experimental data used for the fit. For octets with purely axial couplings the agreement with data is comparable to the SM. Predictions for spin observables The polarisation of the top (anti-)quarks produced in pairs has not been measured at the Tevatron. The D0 Collaboration examined in [68] the charged lepton distribution in the top quark rest frame, which depends on the top polarisation, and found it compatible with the JHEP08(2014)172 Table 2. Predictions for the best-fit points corresponding to an octet with right-handed couplings to all quarks. The global χ 2 is 8.1. no polarisation hypothesis. However, an unfolded measurement was not provided. Polarisation measurements at the Tevatron are feasible given the available statistics, nevertheless. Given the size of the samples used for the semileptonic asymmetry measurements [16,17], one would expect a precision of ±0.08 or better per experiment. We use the helicity basis for our predictions, introducing in the top quark rest frame a reference system (x, y, z) withẑ in the direction of the top quark 3-momentum in the tt rest frame, p t . Theŷ axis is chosen orthogonal to the production plane spanned by p t and the proton momentum in the top rest frame p p -which has the same direction as the initial quark momentum in the qq subprocesses. Finally, thex axis is orthogonal to the other two. That is,ẑ The polarisations in theẑ,x andŷ directions are denoted respectively as 'longitudinal', 'transverse' and 'normal'. The normal polarisation is small since a non-zero value requires complex phases in the amplitude, which can arise from the gluon propagator if produced on its mass shell [67]. 
This is not the case for the G mass value selected. On the other hand, P z and P x can be sizeable, as it can be observed in figure 2 (left). Even if one considers that P z may not be of order O(0.4) given the D0 results on the charged lepton distribution at the reconstruction level [68], the transverse polarisation can reach few tens of percent. At the LHC, one needs some criterion to select amont the two proton directions to specify the orientation of theŷ,x axes. We use the direction of motion of the tt pair in the laboratory frame [67], which the majority of the time coincides with the initial quark direction in the qq subprocesses. The resulting polarisations are presented in figure 2 (right). Part of the allowed range for P z is disfavoured by the current average P z = −0.014 ± 0.029. But even if one assumes that P z is small, P x might be measurable, provided the experimental uncertainties are similar to the ones for the current P z measurements. In this respect, we note that P x is diluted by the 'wrong' choices of the proton direction, when the direction of motion of the tt pair does not correspond to that of the initial quark. (This is analogous to the well-known dilution of the charge asymmetry A C [45].) Then, P x may be quite enhanced if one, for example, sets a lower cut on the tt velocity in the laboratory frame β = |p z t + p z t |/|E t + Et| [69]. The cut on β not only reduces the dilution but also increases the qq fraction of the cross section, and the enhancement expected in P x is similar to the one found for the charge asymmetry A C , around a factor of two. A specific analysis and optimisation of the sensitivity is beyond the scope of this paper. Deviations are also possible in the spin correlation coefficients C beam and C hel at the Tevatron and the LHC, respectively. We define ∆C beam = C beam − C SM beam , ∆C hel = C hel − C SM hel the deviations with respect to the SM predictions, and plot these two quantities in figure 3. Part of the ∆C beam range is disfavoured by the current average ∆C beam = −0.21 ± 0.20 from table 1. But for ∆C beam around its central value, there may still be some deviations in C hel at the LHC. In order to observe these devations one would need a better precision, with smaller systematic uncertainties than in current measurements in the dilepton decay mode [36,38]. This might be achieved in the upcoming analyses in the semileptonic channel. Implications of upcoming measurements The top longitudinal polarisation P z and spin correlation parameter C hel will certainly be measured with good precision at the LHC with 8 TeV data, and perhaps the top quark polarisation will be also measured at the Tevatron. As discussed in the previous section, there is room for departures from the SM predictions. But then the question arises, how would these improved measurements affect the fit? In particular, it is interesting to know whether SM-like measurements of these observables would imply that one could not reproduce the Tevatron and LHC asymmetries any longer with a colour octet. In order to answer that, we plot these four observables (P z,x at the Tevatron; P z and C hel at the LHC) in figure 4 with three colour codes according to the size of the new physics contribution to the tt asymmetry ∆A FB : (i) red for ∆A FB ≤ 0.03, as is the case of the latest D0 measurement [17]; (ii) orange for 0.03 ≤ ∆A FB ≤ 0.06, as favoured by the current Tevatron average in table 1; (iii) green for 0.06 ≤ ∆A FB , as it corresponds to the CDF measurement [16]. 
From these plots one can conclude that the polarisation measurements, albeit very useful to probe possible deviations from the SM due to the octet contribution (and new physics in general), are not conclusive with respect to the presence or not of an anomalously large asymmetry A FB , which can be reproduced even with SM-like measurements of those observables. In figure 5 we do the same but considering instead possible correlations with the new physics contribution to A ℓ FB : (i) red for ∆A ℓ FB ≤ 0.02, as given by the combined D0 measurement [19]; (ii) orange for 0.02 ≤ ∆A ℓ FB ≤ 0.04, as it corresponds to the average in table 1; (iii) green for 0.04 ≤ ∆A FB , as for the CDF combination [21]. In this case we can also see that the measurements of polarisation observables are not conclusive with respect to A ℓ FB . Notice, however, that larger A ℓ FB has some preference for larger P x , in agreement with the simplified analysis of [66]. Conclusions The possible presence of elusive new physics in tt production that shows up in the Tevatron asymmetries remains yet unsolved, despite the many efforts to uncover it or explain the anomaly otherwise. In this respect, one cannot just ignore the results of a Tevatron experiment to focus on the other one, but a further understanding is needed. In this paper we have used a benchmark model of a light colour octet exchanged in the s channel to investigate to what extent the several measurements in tt production at the Tevatron and the LHC are compatible with new physics that yields these asymmetries. When considered globally, the fit is good within the SM, χ 2 = 15.8 (1.3σ) for 12 observables. A light colour octet (with 4 independent coupling parameters) improves the fit to χ 2 = 6.4. Half of the contribution to the χ 2 in this case comes from the total cross sections, and the asymmetries and polarisation observables are very well reproduced. Analogous results hold for heavy colour octets (see the appendix). But apart from the actual χ 2 improvement, the remarkable feature is precisely that one can at the same time reproduce (i) the Tevatron asymmetries above the SM value, in particular A FB and A ℓ FB , whose measurements are more precise; (ii) the LHC asymmetries, in agreement with the SM; and (iii) the top polarisation and spin correlation at the LHC. Then, at least, one can affirm that a colour octet that would explain the Tevatron anomalies is not inconsistent with other tt data. Further LHC measurements, and possible late analyses of Tevatron samples, might be very illuminating. We have seen that SM-like outcomes of these measurements would not be conclusive, as there are regions of the parameter space for which A FB and A ℓ FB (and also A ℓℓ FB ) can be significantly larger than in the SM, yet the remaining measurements can be consistent with the SM expectation. In this case, the solution to the Tevatron asymmetry puzzle may arrive from other kinds of measurements [45,70]. Yet, for the JHEP08(2014)172 parameter space that gives a global 1σ agreement with data, we have seen that sizeable deviations are possible in top polarisation observables, both at the LHC and the Tevatron. These observables then deserve a detailed experimental scrutiny. 
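To make the helicity-basis construction of section 4 concrete, the short sketch below builds the (x, y, z) triad from a top-quark 3-momentum in the tt̄ rest frame and the proton direction in the top rest frame. The input momenta are placeholders, and the sign conventions (for instance the orientation of the y axis) may differ from the ones adopted in the paper.

    import numpy as np

    def helicity_axes(p_top_in_ttbar_frame, p_proton_in_top_frame):
        """z along the top momentum, y normal to the production plane, x = y x z."""
        z = p_top_in_ttbar_frame / np.linalg.norm(p_top_in_ttbar_frame)
        normal = np.cross(z, p_proton_in_top_frame)   # orthogonal to the production plane
        y = normal / np.linalg.norm(normal)
        x = np.cross(y, z)                            # completes the right-handed triad
        return x, y, z

    # Placeholder 3-momenta (GeV); in practice these come from boosted event kinematics.
    x_hat, y_hat, z_hat = helicity_axes(np.array([30.0, 10.0, 120.0]),
                                        np.array([0.0, 0.0, 980.0]))

The longitudinal, transverse and normal polarisations discussed above are then the projections of the top spin direction on z, x and y, respectively, accessed experimentally through the charged-lepton angular distributions.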
A Fit results for a high-mass octet For a heavy octet with a mass M much larger than the typical energy scales involved in tt production, the results are qualitatively very similar to the ones for M = 250 GeV, except for the fact that the axial coupling to the up and top quarks must have opposite sign in order to generate a positive asymmetry at the Tevatron. We present in figure 6 the results of our fit. The favoured regions are analogous to the ones for a light octet but with the replacement g t A → −g t A , g t V → −g t V . In particular, a good fit to data can be achieved with couplings g/M ∼ 1 TeV −1 . The overall agreement with data is comparable with the one achieved for M = 250 GeV, either in the general case (χ² = 7.8) or for octets with right-handed couplings (χ² = 9.5). Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
5,296.2
2014-08-01T00:00:00.000
[ "Physics" ]
Visually Grounded Compound PCFGs Exploiting visual groundings for language understanding has recently been drawing much attention. In this work, we study visually grounded grammar induction and learn a constituency parser from both unlabeled text and its visual groundings. Existing work on this task (Shi et al., 2019) optimizes a parser via REINFORCE and derives the learning signal only from the alignment of images and sentences. While their model is relatively accurate overall, its error distribution is very uneven, with low performance on certain constituent types (e.g., 26.2% recall on verb phrases, VPs) and high on others (e.g., 79.6% recall on noun phrases, NPs). This is not surprising, as the learning signal is likely insufficient for deriving all aspects of phrase-structure syntax and gradient estimates are noisy. We show that using an extension of the probabilistic context-free grammar model we can do fully-differentiable end-to-end visually grounded learning. Additionally, this enables us to complement the image-text alignment loss with a language modeling objective. On the MSCOCO test captions, our model establishes a new state of the art, outperforming its non-grounded version and, thus, confirming the effectiveness of visual groundings in constituency grammar induction. It also substantially outperforms the previous grounded model, with the largest improvements on more 'abstract' categories (e.g., +55.1% recall on VPs). Introduction Grammar induction is the task of finding the latent hierarchical structure of language. As a fundamental problem in computational linguistics, it has been extensively studied for decades (Lari and Young, 1990; Carroll and Charniak, 1992; Clark, 2001; Klein and Manning, 2002). Recently, deep learning models have been shown to be very effective across NLP tasks and have also been applied to grammar induction, greatly advancing the area (Shen et al., 2018, 2019; Kim et al., 2019a,b; Jin et al., 2019). These neural grammar-induction approaches have generally been limited to relying on text, without considering learning signals from other modalities. In contrast, a crucial aspect of natural language learning is that it is grounded in perceptual experiences (Barsalou, 1999; Fincher-Kiefer, 2001; Bisk et al., 2020). We thus anticipate improved language understanding by leveraging grounded learning. Promising results from grounded learning have been emerging in areas such as representation learning (Bruni et al., 2014; Kiela et al., 2018; Bordes et al., 2019). Typically, these works use visual images as perceptual groundings of language and aim at improving continuous vector representations of language (e.g., word or sentence embeddings). In this work, we consider a more challenging problem: can visual groundings help us induce syntactic structure? We refer to this problem as visually grounded grammar induction. Shi et al. (2019) propose a visually grounded neural syntax learner (VG-NSL) to tackle the task. Specifically, they learn a parser from aligned image-sentence pairs (e.g., image-caption data), where each sentence describes the visual content of the corresponding image. The parser is optimized via REINFORCE, where the reward is computed by scoring the alignment of images and constituents. While straightforward, matching-based rewards can, as we will discuss further in the paper, make the parser focus only on more local and short constituents (e.g., 79.6% recall on NPs) and perform poorly on longer ones (e.g., 26.2% recall on VPs) (Shi et al., 2019).
While for the former it outperforms the text-only grammar induction methods, for the latter it substantially underachieves. This may not be surprising, as it is not guaranteed that every constituent of a sentence has a visual representation in the aligned image; the reward signals can be noisy and insufficient to capture all aspects of phrase-structure syntax. Consequently, Shi et al. (2019) have to rely on a language-specific inductive bias to obtain more informative reward signals. Another issue with VG-NSL is that the parser does not admit tractable estimation of the partition function and the posterior probabilities for constituent boundaries needed to compute the expected reward in closed form. Instead, VG-NSL relies on Monte Carlo policy gradients, potentially suffering from high variance. To alleviate the first issue, we propose to complement the image-text alignment-based loss with a loss defined on unlabeled text (i.e., its log-likelihood). As re-confirmed with neural models in Shen et al. (2019) and Kim et al. (2019a), text itself can drive induction of rich syntactic knowledge, so additionally optimizing the parser on raw text can be beneficial and complementary to visually grounded learning. To resolve the second issue, we resort to an extension of the probabilistic context-free grammar (PCFG) parsing model, the compound PCFG (Kim et al., 2019a). It admits tractable estimation of the posteriors needed in the alignment loss with dynamic programming and leads to fully-differentiable end-to-end visually grounded learning. More importantly, the PCFG parser lets us complement the alignment loss with a language modeling objective. Our key contributions can be summarized as follows: (1) we propose a fully-differentiable end-to-end visually grounded learning framework for grammar induction; (2) we additionally optimize a language modeling objective to complement visually grounded learning; (3) we conduct experiments on MSCOCO (Lin et al., 2014) and observe that our model has a higher recall than VG-NSL for five out of the six most frequent constituent labels. For example, it surpasses VG-NSL by 55.1% recall on VPs and by 48.7% recall on prepositional phrases (PPs). Compared to a model trained purely via visually grounded learning, extending the loss with a language modeling objective improves the overall F1 from 50.5% to 59.4%. Background and Motivation Our model relies on compound PCFGs (Kim et al., 2019a) and generalizes the visually grounded grammar learning framework of Shi et al. (2019). We will describe the relevant aspects of both frameworks in Sections 2.1-2.2, and then discuss their limitations (Section 2.3). Compound PCFGs Compound PCFGs extend context-free grammars (CFGs) and, to establish notation, we start by briefly introducing them. A CFG is defined as a 5-tuple G = (S, N, P, Σ, R), where S is the start symbol, N is a finite set of nonterminals, P is a finite set of preterminals, Σ is a finite set of terminals, and R is a set of production rules in the Chomsky normal form: S → A, A → B C, and T → w, with A ∈ N, B, C ∈ N ∪ P, T ∈ P, and w ∈ Σ. PCFGs extend CFGs by associating each production rule r ∈ R with a non-negative scalar π r such that Σ_{r: A→γ} π_r = 1, i.e., the probabilities of production rules with the same left-hand-side nonterminal sum to 1. The strong context-free assumption hinders PCFGs and prevents them from being effective in the grammar induction context.
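To make the normalized-PCFG machinery concrete, the following NumPy sketch computes the probability of a short sentence under a fixed PCFG in Chomsky normal form using the inside algorithm, which the compound PCFGs described below also rely on. The toy grammar is illustrative only (its rule probabilities are not carefully normalized, and every symbol may act as both preterminal and nonterminal for brevity); a real implementation would work in log space and be batched.

```python
import numpy as np

def inside(sent, term_probs, binary_probs, root_probs):
    """Return P(sentence) under a CNF PCFG.
    term_probs[A, w]      = P(A -> w)
    binary_probs[A, B, C] = P(A -> B C)
    root_probs[A]         = P(S -> A)"""
    n, K = len(sent), len(root_probs)
    beta = np.zeros((n, n, K))                 # beta[i, j, A]: inside prob of span w_i..w_j
    for i, w in enumerate(sent):               # width-1 spans: terminal rules
        beta[i, i] = term_probs[:, w]
    for width in range(2, n + 1):              # wider spans: binary rules over split points
        for i in range(n - width + 1):
            j = i + width - 1
            for k in range(i, j):
                beta[i, j] += np.einsum('abc,b,c->a', binary_probs, beta[i, k], beta[k + 1, j])
    return float(root_probs @ beta[0, n - 1])

# Tiny toy grammar with two symbols and a three-word vocabulary (placeholder numbers).
term = np.array([[0.6, 0.3, 0.1],
                 [0.2, 0.3, 0.5]])
binary = np.array([[[0.3, 0.2], [0.1, 0.4]],
                   [[0.25, 0.25], [0.25, 0.25]]])
root = np.array([0.7, 0.3])
print(inside([0, 2, 1], term, binary, root))
```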
Compound PCFGs (C-PCFGs) mitigate this issue by assuming that rule probabilities follow a compound probability distribution (Robbins, 1951): π_r = g_r(z; θ) with z ∼ p(z), where p(z) is a prior distribution over the latent z, and g_r(·; θ) is parameterized by θ and yields a rule probability π r . Depending on the rule type, g_r(·; θ) takes one of a few parametric forms, where u is a parameter vector, w_N is a symbol embedding and N ∈ {S} ∪ N ∪ P, [·; ·] indicates vector concatenation, and f_s(·) and f_t(·) encode the input into a vector (parameters are dropped for simplicity). A C-PCFG defines a mixture of PCFGs (i.e., we can sample a set of PCFG parameters by sampling a vector z). It satisfies the context-free assumption conditioned on z and thus admits exact inference for each given z. Learning with C-PCFGs involves maximizing the log-likelihood of every observed sentence w = w_1 w_2 . . . w_n : log p_θ(w) = log ∫ Σ_{t ∈ T_G(w)} p_θ(t|z) p(z) dz, where T_G(w) consists of all parses of the sentence w under a PCFG G. Though for each given z the inner summation over parses can be efficiently computed using the inside algorithm (Baker, 1979), the integral over z makes optimization intractable. Instead, C-PCFGs rely on variational inference and maximize the evidence lower bound (ELBO): ELBO(w) = E_{q_φ(z|w)}[log p_θ(w|z)] − KL(q_φ(z|w) || p(z)), where q_φ(z|w) is a variational posterior, a neural network parameterized with φ. The expected log-likelihood term is estimated via the reparameterization trick (Kingma et al., 2014); the KL term can be computed analytically when p(z) and q_φ(z|w) are normally distributed. Visually grounded neural syntax learner The visually grounded neural syntax learner (VG-NSL) comprises a parsing model and an image-text matching model. The parsing model is an easy-first parser (Goldberg and Elhadad, 2010). It builds a parse greedily in a bottom-up manner while at the same time producing a semantic representation for each constituent in the parse (i.e., its 'embedding'). The parser is optimized through REINFORCE (Williams, 1992). The reward encourages merging two adjacent constituents if the merge results in a constituent that is concrete, i.e., if its semantic representation is predictive of the corresponding image, as measured with a matching function. We omit details of the parser and how the semantic representations of constituents are computed, as they are not relevant to our approach, and refer the reader to Shi et al. (2019). However, as we will extend their image-text matching model, we explain this component of their approach more formally. In their work, this loss is used to learn the textual and visual representations. For every constituent c (i) of a sentence w (i) , they define a triplet hinge loss h(c (i) , v (i) ) (Equation 2), in which a matching function measures the similarity between the constituent representation c and the image representation v. The expectation is taken with respect to 'negative examples', c′ and v′. In practice, for efficiency reasons, a single representation of an image v′ and a single representation of a constituent (span) c′ from another example in the same batch are used as the negative examples.
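A minimal PyTorch sketch of this kind of triplet hinge loss with single in-batch negatives is given below; the cosine matching function and the 0.2 margin are assumptions made for the sketch rather than details taken from VG-NSL.

```python
import torch
import torch.nn.functional as F

def hinge_loss(c, v, c_neg, v_neg, margin=0.2):
    """Aligned pair (c, v) should outscore pairs built with negatives by at least `margin`."""
    m = lambda a, b: F.cosine_similarity(a, b, dim=-1)   # assumed matching function
    return (torch.clamp(m(c, v_neg) - m(c, v) + margin, min=0.0) +
            torch.clamp(m(c_neg, v) - m(c, v) + margin, min=0.0))

# Toy usage: four constituents of one caption against its image, with in-batch negatives.
c = torch.randn(4, 512)
v = torch.randn(1, 512).expand(4, 512)
c_neg, v_neg = torch.randn(4, 512), torch.randn(4, 512)
print(hinge_loss(c, v, c_neg, v_neg).sum())   # summed over constituents, cf. Equation 3
```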
Intuitively, an aligned image-constituent pair (c (i) , v (i) ) should score higher than an unaligned one. The total loss for an image-sentence pair (v (i) , w (i) ) is obtained by summing losses for all constituents in a tree t (i) sampled from the parsing model (we write c (i) ∈ t (i) ): s(v (i) , w (i) ) = Σ_{c (i) ∈ t (i)} h(c (i) , v (i) ). (3) In their work, training alternates between optimizing the parser using rewards (relying on image and text representations) and optimizing the image-text matching model to refine image and text representations (relying on the fixed parsing model). Once trained, the parser can be directly applied to raw text, i.e., images are not used at test time. Limitations of the VG-NSL framework While straightforward, there are several practical issues inhibiting the visually grounded learning framework. First, contrastive learning implicitly assumes that every constituent of a sentence has a visual representation in the aligned image. However, this is not guaranteed in practice and can result in noisy reward signals. Besides, the loss in Equation 2 (and a similar component in the reward, see Shi et al. (2019)) focuses on constituents corresponding to short spans. Long spans, independently of their syntactic structure, tend to be sufficiently discriminative to distinguish the aligned image v (i) from an unaligned one. This implies that there is not much learning signal for such constituents. The tendency to focus on short spans and those more easily derivable from an image is evident from the results (Shi et al., 2019; Kojima et al., 2020). For example, their parser is accurate for noun phrases (recall 79.6%), which are often short in captions, but performs poorly on verb phrases (recall 26.2%), which have longer spans, are more complex compositionally and are also harder to predict from images (see our analysis in Section 4.3.2). While there may be ways to mitigate some of these issues, we believe that any image-text matching loss alone is unlikely to provide sufficient learning signal to accurately capture all aspects of syntax. Instead of resorting to language-specific inductive biases as done by Shi et al. (2019) (i.e., the head-initial bias (Baker, 2008) of English), we propose to complement the image-text matching loss with the objective derived from the unaligned text (i.e., log-likelihood), jointly training a parser to both explain the raw language data and the alignment with images. Moreover, their learning is likely to suffer from large variance in gradient estimation as their parser does not admit tractable estimation of the partition function, and thus they have to rely on sampling decisions. This will be even more of a problem if we attempt to use it in the joint learning setup. Also note that similar parsing models do not yield linguistically-plausible structures when used in the conventional (i.e., non-grounded) grammar-induction set-ups (Williams et al., 2018; Havrylov et al., 2019). In the next section, we will use compound PCFGs and describe an improved visually grounded learning framework that can tackle these issues neatly. Visually grounded compound PCFGs We use compound PCFGs (Kim et al., 2019a) and develop visually-grounded compound PCFGs (VC-PCFGs) within the contrastive learning framework. Instead of sampling a tree and computing a point estimate of the image-text matching loss, we can compute the expected image-text matching loss under a tree distribution and use end-to-end contrastive learning (Section 3.1).
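As spelled out in the next section, this expectation over trees reduces to a sum over candidate spans weighted by their marginal probabilities; the sketch below assumes the span marginals and per-span hinge losses are already available as matrices (in the model they would come from the inside algorithm with automatic differentiation and from the matching model, respectively).

```python
import torch

def expected_matching_loss(span_marginals, span_losses):
    """s(v, w) = sum over spans c of p(c|w) * h(c, v).
    span_marginals[i, j] stands for p(c_{i,j} | w); span_losses[i, j] for h(c_{i,j}, v)."""
    return (span_marginals * span_losses).sum()

n = 8                                   # toy sentence length
marginals = torch.rand(n, n).triu(1)    # placeholder marginals for spans with i < j
losses = torch.rand(n, n).triu(1)       # placeholder hinge losses for the same spans
print(expected_matching_loss(marginals, losses))
```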
Since it is inefficient to compute constituent representations relying on the chart, we will introduce an additional textual representation model to encode constituents (Section 3.2). Moreover, VC-PCFGs let us additionally optimize a language modeling objective, complementing the visually grounded contrastive learning (Section 3.3). End-to-end contrastive learning In the visually grounded grammar induction framework, the parsing model is optimized through learning signals derived from the alignment of images and constituents, as scored by the image-text matching model. Denoting a set of image representations by V = {v (i) } and the corresponding set of sentences by W = {w (i) }, the image-text matching model is optimized via contrastive learning. We define s(v (i) , w (i) ) as the loss of aligning v (i) and w (i) . In VG-NSL, it is estimated via point estimation (see Equation 3). In VC-PCFGs, by contrast, given an aligned image-sentence pair (v, w), we compute the expected image-sentence matching loss under a tree distribution p θ (t|w), leading to end-to-end contrastive learning: s(v, w) = E_{p θ (t|w)} [ Σ_{c ∈ t} h(c, v) ], (5) where h(c, v) is the hinge loss of aligning the unlabeled constituent c and the image v (defined in Equation 2). Minimizing the hinge loss encourages an aligned image-constituent pair to rank higher than any unaligned one. Expanding the right-hand side of Equation 5 gives s(v, w) = Σ_c p(c|w) h(c, v), where p(c|w) is the conditional probability (i.e., marginal) of the span c given w. It can be efficiently computed with the inside algorithm and automatic differentiation (Eisner, 2016). Span representation Estimation of the expected image-text matching scores relies on span representations. Ideally, a span representation should encode the semantics of a span, with its computation guided by its syntactic structure (Socher et al., 2013). The reliance on the predicted tree structure will result in propagating learning signals derived from the alignment of images and sentences back to the parser. To realize this desideratum, we could follow the inside algorithm and recursively compose span representations (Le and Zuidema, 2015; Stern et al., 2017; Drozdov et al., 2019), which is, however, time- and memory-inefficient in practice. Instead, we produce span representations largely independently of the parser, as we will explain below. The only way the parser model influences this representation is through the predicted constituent label: we use its distribution to compute the representation. Specifically, as a trade-off for better training efficiency, we adopt a single-layer BiLSTM to encode spans. A mean-pooling layer is applied over the hidden states h of the BiLSTM and followed by a label-specific affine transformation f_k(·) to produce a label-specific span representation c_k for a span c_{i,j} = w_i . . . w_j (0 < i < j ≤ n). The BiLSTM encoding model operates at the span level and encodes the semantics of a span. Unlike using a single sentence-level (Bi)LSTM encoder, it guarantees that no information from words outside of the span leaks into its representations. More importantly, it can run in O(n) for a sentence of length n with a parallel implementation. While the produced representation does not reflect the structural decisions made by the parser, it can be sensitive to word order and may be affected by its syntactic structure (Blevins et al., 2018).
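A minimal PyTorch sketch of this span encoder is shown below; the 512-dimensional states and 30 labels follow the numbers given in the experimental setup, while packing all label-specific affine maps into a single linear layer is an implementation convenience of this sketch and not necessarily the authors' choice.

```python
import torch
import torch.nn as nn

class SpanEncoder(nn.Module):
    """Span-level BiLSTM + mean pooling + label-specific affine maps f_k."""
    def __init__(self, dim=512, n_labels=30):
        super().__init__()
        self.lstm = nn.LSTM(dim, dim // 2, bidirectional=True, batch_first=True)
        self.f = nn.Linear(dim, dim * n_labels)   # all f_k packed into one layer

    def forward(self, span_words):
        # span_words: (1, span_len, dim) embeddings of the span only, so no
        # information from words outside the span can leak in.
        h, _ = self.lstm(span_words)
        pooled = h.mean(dim=1).squeeze(0)                        # mean-pool BiLSTM states
        return self.f(pooled).view(-1, span_words.size(-1))     # label-specific c_k, one row per label

enc = SpanEncoder()
c_k = enc(torch.randn(1, 5, 512))
print(c_k.shape)   # torch.Size([30, 512]); these are then averaged under p(k | c, w)
```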
In order to compute the representation of an unlabeled constituent c, we average the label-specific span representations c_k under the distribution of labels defined by the parser: c = Σ_k p(k|c, w) c_k, where p(k|c, w) is the probability that the span c has label k, conditioned on having this constituent span in the tree. To further reduce computation, we estimate the matching loss only using the n(n−1) … spans; this is the case anyway (see discussion in Section 2.3), so we expect that this simplification would not hurt model performance significantly. Joint objective Rather than simply optimizing the contrastive learning objective, we additionally maximize the log-likelihood of the text data. As with C-PCFGs, we optimize the ELBO. This learning objective complements contrastive learning. As contrastive learning optimizes a parser solely by matching images and constituents, the parser would only focus on simple and local constituents (e.g., short NPs). Moreover, in practice, since not every constituent can be grounded in an image, contrastive learning would suffer from misleading or ambiguous learning signals. To summarize, the overall loss function combines the negative ELBO with the contrastive loss weighted by α, where α is a hyper-parameter balancing the relative importance of the contrastive learning. Parsing The parser can be directly used to parse raw text after training, without requiring access to visual groundings. Parsing seeks the most probable parse t* of w. Still, though for a given z the maximum a posteriori (MAP) inference over p θ (t|w, z) can be solved by the CYK algorithm (Kasami, 1966; Younger, 1967), exact inference over p θ (t|w) becomes intractable due to the integration over z. The MAP inference is instead approximated by t* ≈ argmax_t ∫ p θ (t|w, z) δ(z − µ φ (w)) dz, where δ(·) is the Dirac delta function and µ φ (w) is the mean vector of the variational posterior q φ (z|w). As δ(·) has zero mass everywhere but at the mode µ φ (w), this is equivalent to solving argmax_t p θ (t|w, µ φ (w)). Datasets and evaluation Datasets: We use MSCOCO (Lin et al., 2014). It consists of 82,783 training images, 1,000 validation images, and 1,000 test images. Each image is associated with 5 caption sentences. We encode images into 2048-dimensional vectors using the pre-trained ResNet-101 (He et al., 2016). At test time, only captions are used. We follow Shi et al. (2019) and parse test captions with Benepar (Kitaev and Klein, 2018). We use the same data preprocessing as in Shen et al. (2019) and Kim et al. (2019a), where punctuation is removed from all data, and the top 10,000 most frequent words in the training sentences are kept as the vocabulary. Evaluation: We mainly compare VC-PCFGs with VG-NSL (Shi et al., 2019). To verify the effectiveness of the use of visual groundings, we also compare our model with a C-PCFG trained only on the training captions. All models are run four times with different random seeds and for at most 15 epochs with early stopping (i.e., training stops when the image-caption loss / perplexity on the validation captions no longer decreases). We report both averaged corpus-level F1 and averaged sentence-level F1 numbers as well as the unbiased standard deviations. Settings and hyperparameters We adopt the parameter settings suggested by the authors for the baseline models. For VG-NSL we run the authors' code. We re-implement C-PCFG using automatic differentiation (Eisner, 2016) to speed up training. Our VC-PCFG comprises a parsing model and an image-text matching model. The parsing model has the same parameters as the baseline C-PCFG; the image-text matching model has the same parameters as the baseline VG-NSL. Concretely, the parsing model has 30 nonterminals and 60 preterminals.
Each of them is represented by a 256-dimensional vector. The inference model q φ (z|w) uses a single-layer BiLSTM. It has a 512-dimensional hidden state and relies on 512-dimensional word embeddings. We apply a max-pooling layer over the hidden states of the BiLSTM and then obtain 64-dimensional mean vectors µ φ (w) and log-variances log σ φ (w) by using an affine layer. The image-text matching model projects visual features into 512-dimensional feature vectors and encodes spans as 512-dimensional vectors. Our span representation model is another single-layer BiLSTM, with the same hyperparameters as in the inference model. α for visually grounded learning is set to 0.001. We implement VC-PCFG relying on Torch-Struct (Rush, 2020), and optimize it using Adam (Kingma and Ba, 2015) with the learning rate set to 0.01, β 1 = 0.75, and β 2 = 0.999. All parameters are initialized with the Xavier uniform initializer (Glorot and Bengio, 2010). Main results Our model outperforms all baselines according to both corpus-level F1 and sentence-level F1 (see Table 1). Notably, it surpasses VG-NSL+HI by 10% F1. The right branching model is a strong baseline on image captions, as observed previously on the WSJ corpus, including in recent work (Shen et al., 2018; Kim et al., 2019a). Compared with C-PCFG, which is trained solely on captions, VC-PCFG achieves a much higher mean F1 (+5.7% F1), demonstrating the informativeness of visual groundings. However, VC-PCFG suffers from a larger variance, presumably because the joint objective is harder to optimize. Visually grounded contrastive learning (w/o LM) has a mean F1 of 50.5%. It is further improved to 59.4% when additionally optimizing the language modeling objective. Moreover, we show recall on six frequent constituent labels (NP, VP, PP, SBAR, ADJP, ADVP) in the test captions. Unsurprisingly, VG-NSL is best on NPs because the matching-based reward signals optimize it to focus only on short and concrete NPs (recall 64.3%). It performs poorly on other constituent labels such as VPs (recall 28.1%). In contrast, VC-PCFG exhibits a relatively even performance across constituent labels, e.g., it is most accurate on SBARs and ADVPs and works fairly well on VPs (recall 83.2%). Meanwhile, it improves over C-PCFG for NPs, which are usually short and 'concrete', once again confirming the benefits of using visual groundings. Visually grounded contrastive learning (w/o LM) tends to behave like the right branching baseline. (Table 1: Recall on six frequent constituent labels (NP, VP, PP, SBAR, ADJP, ADVP) in the MSCOCO test captions and corpus-level F1 (C-F1) and sentence-level F1 (S-F1) results. The best mean number in each column is in bold. † indicates results reported by Shi et al. (2019); a second marker denotes results obtained by running their code. Notice that the results from Shi et al. (2019) are not comparable to ours because they keep punctuation and include trivial sentence-level spans in evaluation.) Additionally optimizing the language modeling objective brings a huge improvement for NPs (+19.3% recall). Analysis We analyze model performance for constituents of different lengths (Figure 1). As expected, VG-NSL becomes weaker as constituent length increases, and the drop is very dramatic. C-PCFG and its grounded version VC-PCFG consistently outperform VG-NSL on constituents longer than four tokens and display a more even performance across constituent lengths.
Meanwhile, VC-PCFG beats C-PCFG on constituents of length below 5, confirming that visual groundings are beneficial for short spans. We further plot the distribution over constituent length for different phrase types (Figure 2) and find that around 75% of the constituents in our dataset are shorter than six tokens, and 60% of them are NPs. Thus, it is not surprising that the improvement on NPs, brought by visually grounded learning, has a large impact on the overall performance. Next, we analyze induced tree structures. We compare model predictions against gold trees, left branching trees, and right branching trees. As there is little performance difference between corpus-level F1 and sentence-level F1, we focus on sentence-level F1 in this analysis. We report self F1 (Williams et al., 2018) to show model consistency across runs. The self F1 is computed by averaging over six model pairs from four different runs. All results are presented in Table 2. Overall, all models have self F1 above 70%, indicating a relatively high consistency. We observe that using the head-initial bias pushes VG-NSL closer to the right-branching baseline, while visually grounded learning … In Figure 3 we visualize a parse tree predicted by the best run of VC-PCFG. We can see that VC-PCFG identifies most NPs but makes mistakes in PP attachment and consequently fails to identify the VP. Related work Grammar Induction has a long history in computational linguistics. Following observations that direct optimization of the log-likelihood with the Expectation Maximization algorithm (Lari and Young, 1990) is not effective at producing good grammars, a number of approaches have been developed, embodying various inductive biases or assumptions about the language structure and its relation to surface realizations (Klein and Manning, 2002; Smith and Eisner, 2005; Cohen and Smith, 2009; Spitkovsky et al., 2010). The recent advances in the area have been brought by flexible neural models (Jin et al., 2019; Kim et al., 2019a,b; Drozdov et al., 2019). All these methods, with the exception of Shi et al. (2019), rely solely on text. Visually grounded learning is motivated by the observation that natural language is grounded in perceptual experiences (Steels, 1998; Barsalou, 1999; Fincher-Kiefer, 2001; Roy, 2002; Bisk et al., 2020). It has been shown effective in word representation learning (Bruni et al., 2014; Silberer and Lapata, 2014; Lazaridou et al., 2015) and sentence representation learning (Kiela et al., 2018; Bordes et al., 2019). All this work uses visual images as the perceptual experience of language and exploits visual semantics derived from images to improve continuous vector representations of language. In contrast, we induce structured representations, i.e., the discrete tree structure of language, by using visual groundings. We propose a model for the task within the contrastive learning framework. Learning involves estimating the concreteness of spans, which generalizes word-level concreteness (Turney et al., 2011; Kiela et al., 2014). In the vision and machine learning community, unsupervised induction of structured image representations (aka scene graphs or world models) has been receiving increasing attention (Eslami et al., 2016; Burgess et al., 2019; Kipf et al., 2020). However, these methods typically rely solely on the visual signal. An interesting extension of our work would be to consider joint induction of structured representations of images and text while guiding learning by an alignment loss.
Conclusion We have presented visually-grounded compound PCFGs (VC-PCFGs) that use compound PCFGs and generalize the visually grounded grammar learning framework. VC-PCFGs exploit visual groundings via contrastive learning, with learning signals derived from minimizing an image-text alignment loss. To tackle the issues of misleading and insufficient learning signals from purely agreement-based learning, we propose to complement the image-text alignment loss with a loss defined on unlabeled text. We resort to using compound PCFGs which enables us to complement the alignment loss with a language modeling objective, resulting in a fully-differentiable end-to-end visually grounded learning. We empirically show that our VC-PCFGs are superior to models that are trained only through visually grounded learning or only relying on text.
6,112.4
2020-09-25T00:00:00.000
[ "Computer Science" ]
CNT Enabled Co-braided Smart Fabrics: A New Route for Non-invasive, Highly Sensitive & Large-area Monitoring of Composites The next generation of hierarchical composites needs to have built-in functionality to continually monitor and diagnose their own health states. This paper presents a novel strategy for in-situ monitoring of the processing stages of composites by co-braiding CNT-enabled fiber sensors into the reinforcing fiber fabrics. This represents a tremendous improvement over present methods, which focus excessively on detecting mechanical deformations and cracks. The CNT enabled smart fabrics, fabricated by a cost-effective and scalable method, are highly sensitive and can monitor and quantify various events of composite processing, including resin infusion, onset of crosslinking, gel time, and degree and rate of curing. By varying curing temperature and resin formulation, the clear trends derived from the systematic study confirm the reliability and accuracy of the method, which is further verified by rheological and DSC tests. More importantly, by suitably configuring the smart fabrics with a scalable sensor network, localized processing information of composites can be obtained in real time. In addition, the smart fabrics that are readily and non-invasively integrated into composites can provide life-long structural health monitoring of the composites, including detection of deformations and cracks. … deformations, cracks and failure modes in hierarchical composites 26,27 . In addition to monitoring the health state of FRPs in their service stage, it is equally important to be able to monitor, in situ and in line, the resin infiltration and curing kinetics during the manufacturing process of composites. Because of the complicated fibrous preforms, resin infiltration is always non-uniform and hard to predict. This may introduce problematic flow defects such as resin-starved (dry spots) and resin-rich areas 28 . In addition, the curing kinetics is also susceptible to variations of the processing conditions. In the absence of in-line monitoring approaches, these manufacturing issues in turn introduce part-to-part variations and degrade the macroscopic properties of the final composites. Aiming to resolve these issues, Zhang et al. used an electrophoretic method to fabricate CNT coated glass fiber for probing the curing process of epoxy resin 29 . By taking advantage of the unique porous structure of CNT and graphitic nanoplatelet (GNP) thin films, our group invented CNT and GNP thin film based fiber sensors for in-situ monitoring of the manufacturing process of fiberglass prepreg laminates 30,31 . Given the progress being made, however, there is still a lack of a highly sensitive, reliable and scalable method for large-area monitoring of the manufacturing stage of hierarchical composites. To approach this ultimate goal and advance the above mentioned emerging technology, we report here a novel strategy for developing smart fabrics comprised of co-braidable, scalable and designable CNT fiber sensors, which can be fabricated through a cost-effective and highly efficient dip coating process. The smart fabrics can be used as one layer of the woven roving preforms. As such, they can be readily integrated into a composite structure through the well-developed vacuum assisted resin transfer molding (VARTM) technique.
The smart fabrics could provide a precise way to monitor and quantify various events occurred during the composite manufacturing stage, including resin infusion, gelation as well as curing kinetics. This unique functionality has been firmly corroborated through off-line rheological and DSC studies of a series of composite samples processed under varied curing temperatures and resin formulations. The most distinctive feature of this smart fabric is the built-in multiple fiber sensors that can serve as a sensing network for large area monitoring. This allows for mapping the localized information of composites in real time and, is highly valued for the quality assurance of composite manufacturing. Lastly, the sensor network offered by the smart fabrics naturally exists in the composite laminate and is capable of strain mapping and detection of cracks in the service stage. Considering the non-invasiveness, robustness, and large-area deployment capabilities as well as its built-in dual functionalities for structural health monitoring of composites over its lifespan, we expect the high advantage of smart fabrics for enhancing safety, performance and reliability of future lightweight composites. Results and Discussion Fabrication, Integration and Characterization of Smart Fabrics. The smart fabric composites were prepared by three steps, including fabrication of CNT enabled fiber sensors, formation of smart fabrics, and composite manufacturing. First, a home-made roll-to-roll continuous process was established to fabricate the CNT enabled fiber sensors (see schematics of Figure S1, Supporting Information). This assembly was composed of a computer controlled motor (speed set at 1 cm/min) and a series of pulleys, which were used to pass a long fiberglass roving respectively through a multi-walled carbon nanotube (MWCNT) dispersion for thin film coating 32 , a deionized water bath for removing surfactant molecules (Triton-X-100), and a heating station for drying at ~200 °C. The smart fabrics were subsequently prepared by manually braiding the as-prepared fiber sensors into a woven roving cloth (Fig. 1a). Following the braiding process, they were stacked with other pristine fibrous plies to form dry composite preform, into which a mixture of vinyl ester resin and methyl ethyl ketone peroxide (MEKP) resin hardener were then immediately introduced through VARTM process operated in a vacuumed plastic bag to cause simultaneous in-plane and transverse resin wetting of the preform. During VARTM, the electric signal of fiber sensors embedded in the smart fabrics was simultaneously recorded to evaluate its capability for in-line monitoring of resin infusion and curing. After the manufacturing process, the smart fabrics existed in the composite laminates can serve for detecting various mechanical deformations and cracks. Additionally, it is scalable to a fiber sensor network to cover a large piece of composites. For demonstration purpose, the largest smart fabrics specimen we have made was ~300 cm 2 with an incorporated 5 × 5 sensor array (Fig. 1a). Certainly, more sensors can be included to cover a larger composite. To characterize the CNT structure coated on the fiber core, a variety of microscopy and spectroscopy methods have been used. First, based on visual inspection (Fig. 1a), the dark appearance of a fiber sensor as compared to the white color of a pristine fiber roving is a clear indication of a dense CNT film formed on the fiber surface through dip coating. 
In addition to visual inspection, SEM images provide detailed evidence of CNT coating as shown in Fig. 1b,c (low magnification) and 1d (high magnification). Albeit the small diameter of an individual fiberglass filament (15-20 μ m), CNT nanoparticles were successfully coated on the fiber surface and assembled as rope/bundle entangled network morphologies. Similar morphologies can also be found on a large-area 2D substrate 33,34 . To further examine the successful coating of CNT on a glass fiber, we performed energy-dispersive X-ray (EDX) and Raman spectroscopy measurements on the fiber prior to and after the CNT coating. As normalized to the silicon peak intensity, EDX spectra ( Fig. 1e) clearly show that the carbon peak is increased from 5.77 wt.% to 22.24 wt.% upon CNT coating. Similarly, the signature Raman features (Fig. 1f) of MWCNTs, i.e. G-band around 1600 cm −1 , D-band around 1300 cm −1 and 2D-band around 2600 cm −1 , confirm the coating process. To achieve a uniform CNT coating, one needs to bear in mind that a high quality CNT dispersion is a critical prerequisite. Thus, the quality of the CNT dispersion has been quantitatively analyzed using preparative ultracentrifuge method [35][36][37] . According to the experimentally determined sedimentation function and subsequent model fitting (Fig. 1g), we have calculated the averaged length (1845.2 nm) and diameter (3.74 nm) of the nanotube in dispersion. Based on weight measurements, the CNT content is determined to be less than 0.5 wt.% of the fiber sensor. With this piece of information and taking the market price of MWCNT (~$1 per gram), we estimated the cost of raw materials of the fiber sensor as low as ~$1.5 per 100 meters. Sensing Capability of Smart Fabrics for Process Monitoring. Smart fabrics with a single fiber sensor have been studied principally to establish and quantify its sensing capability for monitoring resin infusion and curing. To specify, the fiber sensor with equal length in the fibrous ply was placed parallel to the direction of the resin infusion. To study its capability, real-time resistance signal of a representative smart fabric sensor was demonstrated in Fig. 2a. The total processing time of 24 h spans across the complete curing process, which is mainly categorized into three stages: 1) resin infusion stage including the time for injecting resin and hardener (1.25 wt.%) mixer into the pre-vacuumed plastic bag; 2) dwelling or pot-life stage when the resin has fully filled the bag but keeps a low viscosity; and 3) curing stage from the onset to the end of the polymer cross-linking reaction. Following the suggested curing protocols by the resin vendor, the curing temperature was isothermally controlled at 25 °C; and the bagging pressure was kept at 0.1 MPa. By monitoring resistance change (dR/R 0 ) of the single fiber sensor embedded in smart fabrics, as shown in Fig. 2a, a rapid increase of dR/R 0 from 0 to ~11 (0-6 min) was accordingly observed and it was then smoothly merged into a milder increase (6-28 min) approaching to a stabilized value of ~16 (highlighted in blue). Subsequently, this maximum dR/R 0 value was kept almost constant from 28 min to 55 min (highlighted in green). As the continuity of the process, a pronounced decrease of dR/R 0 from ~16 to ~7 was initially observed (1 h -3 h) and then gradually leveled off to a plateau value of ~4 toward the end of the process (3 h to 24 h, highlighted in red). 
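As a rough illustration of how these three stages could be picked out automatically from such a trace, the sketch below segments a synthetic dR/R0 curve shaped like the one just described; the curve and the two thresholds are illustrative placeholders, not the measured data or the criteria adopted later in the paper.

```python
import numpy as np

t = np.linspace(0, 24 * 60, 2000)                               # minutes over a 24 h process
dR = np.where(t < 28, 16 * t / 28, 16.0)                        # infusion ramp toward ~16
dR = np.where(t > 55, 4 + 12 * np.exp(-(t - 55) / 90.0), dR)    # curing decay toward ~4

def segment_stages(t, dR, ramp_frac=0.98, decay_frac=0.96):
    """Return (end of infusion, onset of curing): infusion ends when the signal first
    reaches ramp_frac of its maximum; curing starts when the normalized signal later
    drops below decay_frac. Both thresholds are illustrative."""
    norm = dR / dR.max()
    end_infusion = t[np.argmax(norm >= ramp_frac)]
    decaying = (t > end_infusion) & (norm < decay_frac)
    onset_cure = t[np.argmax(decaying)] if decaying.any() else None
    return end_infusion, onset_cure

print(segment_stages(t, dR))
```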
By correlating the resistance changes with physical states in composite manufacturing, it is interesting that the high repeatable trends of the sensor signal described above mimic closely to all the three stages of whole curing progression. First, as resin injecting and wetting the fibrous preform with fiber sensors, two types of flows compete with each other throughout the infusion process, namely, the inter-and intra-roving flow. Comparing to inter-roving distance ranging from hundreds of microns to several millimeters depending on density of the woven fabrics, intra-roving space is only micro-or nanoscale (clued in Fig. 1c). Thus the former speed to fill the voids among rovings is much faster than the latter speed to penetrate inside rovings. Following this line of thought, the different incremental rate of dR/R 0 before and after the inflection point can be explained. For the first 6 min, the inter-roving flow dominates and it allows the resin molecules to wet the MWCNT network deposited on outer surface of the fiber roving to result in expansion and even breakage of tube/tube contacts, which causes the significant resistance increase (indicated by schematics of Fig. 2b). From 6 min to 28 min, the intra-roving flow dominates. Comparing to the first 6 min, it slowly but continuously penetrates/infiltrates the fiber roving to further disrupt MWCNT network. The hypothesis can be indirectly proved by video-taping of the resin flow ( Figure S2, Supporting Information). It indicates after 6 min and 10s, the resin fills the bag. However, the detailed intra-fiber flow cannot be revealed. This reversely reflects the unique capability of smart fabrics such as for detecting detailed resin flow and tracking the resin flow front. In addition to the infusion stage, we attribute the rest of the sensor signal to physical/chemical changes in the cross-linking reaction and the concomitant variations of system viscosity, as well as the development of matrix shrinkage caused by phase transformation of gelation and vitrification. Under low levels of cross-linking, the resin molecules retain low viscosity and do not disrupt the equilibrium state of the vacuumed system. As a result, the constant value of dR/R 0 from 28 min to 55 min was observed. Continuing with the curing process, sufficiently high levels of cross-linking density would cause a drastic increase of the system viscosity and volumetric shrinkage of the resin. Consequently, the MWCNT network with infiltrated resin shrinks accordingly to cause its conductive paths closing together to have higher packing density, which is illustrated in the right scheme of Fig. 2b. Thus, the substantial decrease of dR/R 0 from ~16 to ~4 was observed when the processing time ranges from 1 h to 24 h. To further corroborate the argument, we compared the CNT coated fibers to carbon fibers as another type of embedded roving sensors. The results demonstrated in Figure S3 (Supporting Information) indicate that the sensitivity of the smart fabric co-braided with CNT fibers is at least two orders of magnitude higher. With the rationale of carbon fibers with more densely packed graphitic structures, it provides clear evidence that the loosely packed CNT network can be interrupted much easier by physical/chemical changes of the resin. It is also important to note that due to exothermal reaction, the intrinsic transport property of the MWCNT network and thermal expansion of the laminates could contribute to resistance change in the curing stage. 
However, due to the relatively small temperature coefficient of MWCNT network (− 0.137% K −1 ) 38 and thermal expansion coefficient of fiberglass composites (~20 ppm/°C) 30 , these two effects are negligible as compared to the cross-linking effect. To better understand the correlation between the sensor signal and cross-linking reaction of the polyester resin, we further argued that the dynamic resistance decay shown in Fig. 2a has a strong connection with the resin curing kinetics, such as gel time/point, onset and end of cure, percentage and rate of curing. To verify this hypothesis, we systematically investigated the effect of curing temperature and resin formulation by comparing the results from three different techniques, namely, smart fabric sensor, rheometer and differential scanning calorimetry (DSC). First, Fig. 3a,b respectively show the real-time signals of two series of smart fabric sensors for monitoring the curing process under varied curing temperatures and MEKP concentrations. To better convey the data, the resistance change (dR/R 0 ) was normalized to its maximum value. By keeping the MEKP at 1.25 wt.%, Fig. 3a presents a clear trend that for a given resin formulation, the higher the curing temperature, the faster the decay of the sensor signal with respect to the processing time. The inset thermal images in Fig. 3a reveal the strategy for controlling curing temperature using ice water or heating stage (More details are shown in Figure S4, Supporting Information). As a representative example of 0 °C, dR/R 0 keeps staying high after ramped up. This strongly indicates that the resin molecules hardly crosslink under this extreme condition. In addition, as increasing the controlled temperature from 15 °C to 50 °C, the dwelling duration that maintains the highest dR/R 0 substantially decreased from ~70 min to ~6 min. For another instance, the elapsed time for normalized dR/R 0 decayed from 1 to 0.5 was decreased from ~120 min (15 °C) to ~25 min (50 °C). Similar trends were also observed by varying the MEKP concentration from 0.4 wt.% to 1.25 wt.% while keeping the whole curing process at room temperature (~25 °C). For safety considerations, 1.25 wt.% MEKP was set as the highest level as the vendor suggested. Again, as raising the MEKP concentration, the sensor signal decays more rapid. It is also interesting that there is a large discrepancy on the final stabilized dR/R 0 as the resin formulation is varied. For example, this value drops from 0.59 to 0.33 as MEKP increases from 0.4 wt.% to 1.25 wt.%. This shows a large contrast when compared with the case in Fig. 3a, in which all dR/R 0 values were stabilized to 0.3 ± 0.05. We argue that the curing temperature only modifies the crosslinking velocity but the resin formation determines not only the curing rate but also the final degree of polymerization. To quantify the ability for unveiling key parameters of curing kinetics, we further correlated the results of smart fabric sensors with that of a rheometer and DSC. Figure 3c,d show the viscosity profile of two series of pure resin samples monitored by a parallel-plate rheometer with a temperature-controlling chamber. Instead of emphasizing the fact that cure temperature and resin formulation have a profound influence on the profile, we stress that all viscosity curves have a critical upturn moment before diverging toward infinity. And, this critical moment could be used to represent gel time. 
Then, by extracting the approximate gel time from each viscosity profile and matching it accordingly in the resistive curve, it is rather interesting that all the determined dR/R 0 (normalized) values are almost identical (~0.96). This behavior is confirmed in both the temperature and the MEKP series, as shown in the insets of Fig. 3c,d, respectively. Thus, the critical moment when the normalized dR/R 0 decays to 0.96 can be used as an indicator of gel time. In addition to rheological results, DSC provides more detailed information on curing kinetics by measuring the quantitative heat flow as a function of time. Again, by changing the isothermal temperature or varying the MEKP concentration, Fig. 3e,f show the heat flow curves of various resin samples, with the onset and end of cure defined as the time of the extrapolated baseline and the moment when the signal levels off. Based on the DSC curves, the degree of cure (α) and the rate of reaction (dα/dt) as a function of time can be estimated according to α(t) = ΔH i (t)/ΔH tot and dα/dt = (1/ΔH tot)·dH(t)/dt, where ΔH i (t) is the partial heat of reaction at a certain time and ΔH tot is the total heat of reaction. Calculated by Equation 1, the insets of Fig. 3e,f present the plot of α versus time at different isothermal temperatures or MEKP concentrations. It is clear that the higher the isothermal temperature or the MEKP concentration, the higher the fractional conversion of resin curing at a certain time. We found this behavior to be highly related to the varying tendency of the smart fabric resistance. By selecting the resistive data in accordance with the onset and end of curing determined by DSC, we defined the decay of resistance change as a function of curing time as D(t) = (dR i − dR(t))/(dR i − dR f ), where dR i and dR f are the normalized resistance changes at the onset and end of curing. As clearly demonstrated with the dotted curves in both insets of Fig. 3e,f, the D(t) curves preserve a strong correlation with the DSC results for disclosing the quantitative degree of cure. Although the two sets of data are not perfectly coincident with each other, presumably because of inaccurate temperature control using different heating sources and a time mismatch arising from different resin handling and transporting, the results of the smart fabric indeed restore the detailed features of the curing conversion in the DSC curves. For instance, compared to the high temperature and high MEKP samples, which show a fast ramping stage and a long leveling stage, the low temperature and low MEKP samples show a much clearer three-stage behavior, where the data ramp up slowly, followed by a fast ramping stage and a slow conversion stage. By further comparing the rate of reaction (dα/dt) with the decay rate of resistance change (dD/dt), similar conclusions hold (Figure S5, Supporting Information). Therefore, as compared with the rheometer and DSC, which are only suited to off-line monitoring, the smart fabric is highly useful for in-line quantification of resin curing in the manufacturing stage of hierarchical composites. Large-area Monitoring of Smart Fabrics. The smart fabric with a single sensor introduced in the previous section can be easily scaled up to a large sensing network by co-braiding multiple fiber sensors to monitor every local spot of the composites. As schematically demonstrated in Fig. 4a and Figure S6 (Supporting Information), the aligned horizontal and vertical fiber sensors arranged in separate layers (not in contact with each other) cover the whole piece of composites.
Thus, for an n × m sensor network with "n" horizontal fiber sensors (labeled as Hi from "H1" to "Hn") and "m" vertical fiber sensors (labeled as Vj from "V1" to "Vm"), each sensor monitors the whole resistance change (R Hi stands for the resistance change of the ith horizontal sensor and R Vj stands for the resistance change of the jth vertical sensor) of the line region it covers during the whole process of resin infusion and curing. To extract the information on resin changes at the location defined by the cross point of the Hi and Vj sensors, we defined Rij as the local resistance change of the ith horizontal sensor crossed by the jth vertical sensor. Thus, we formulated a model to describe each Rij by proportionally allocating R Hi based on the ratios of all R Vj , as seen in Equation 4. One needs to notice that Rij is nothing but a technically defined quantity, which is R Hi weighted by a factor. The approximation symbol (≈) is used because polynomial terms in the model are omitted, but this does not alter the basic trend of resistance changes: R ij ≈ R Hi · R Vj / Σ_{j=1..m} R Vj . (4) By simultaneously monitoring every horizontal and vertical fiber sensor in the smart fabric, Fig. 4b to e show the dR/R 0 distribution at four representative moments of the resin process with a 5 × 5 sensor array. The optical image in Fig. 4b indicates that the curved resin head was just crossing the rightmost vertical sensor line. At this critical moment, the resistance change of every local point was allocated based on Equation 4. Displaying each resistive value by a three-dimensional bar plot with a color scale, it is striking that the shape of the dR/R 0 distribution faithfully captured the position of the resin head. Clearly, the left 20 points remain unchanged because the resin has not started to wet those regions. In sharp contrast, the dR/R 0 of the right five points is strongly dependent upon their positions. Specifically, the center point experiences the highest dR/R 0 . Moving away from the center location, the resin head gradually lags behind, and the same is true for the resistance change of the corresponding points. Thus, we observed that R 55 < R 45 < R 35 > R 25 > R 15 . As the resin head continued to move to the left, Fig. 4c,d clearly show that more and more regions were impacted by the resin transfer. A clear gradient in the dR/R 0 distribution was also found: the longer a region had been exposed to the resin infusion, the higher its resistance change. This was caused by the combination of inter-roving and intra-roving flow explained in the previous section. In addition to resin infusion, the dR/R 0 distribution during the curing stage was also monitored. Figure 4e shows the corresponding results after three hours of processing. With similar blue-green colors, the magnitudes of all bars fell between ~5 and ~7, indicating that all local regions were subjected to similar degrees of curing. Based on its ability to map local areas with scalable size and density, we anticipate that the smart fabric technique presented will help ensure full cure and no voids when manufacturing high quality composites. In addition to monitoring the manufacturing process, the smart fabric sensors embedded in composite structures are ready to diagnose the health state of the host structure after it is de-molded or debagged. Figure 4f gives an example demonstrating the capability to capture the distribution of compression forces loaded on the composite laminate with the same 5 × 5 sensor array.
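Using the proportional allocation of Equation 4, the mapping from the n + m line measurements to the n × m local values reduces to an outer product; the numbers below are placeholders chosen only to mimic the moment when the resin front has just crossed the rightmost vertical sensor.

```python
import numpy as np

def allocate_local_changes(R_H, R_V):
    """R_ij ≈ R_Hi * R_Vj / sum_j R_Vj (cf. Equation 4)."""
    R_H, R_V = np.asarray(R_H, float), np.asarray(R_V, float)
    return np.outer(R_H, R_V / R_V.sum())

R_H = [0.2, 0.9, 1.6, 0.9, 0.2]    # placeholder dR/R0 of horizontal sensors H1..H5
R_V = [0.0, 0.0, 0.0, 0.0, 3.0]    # placeholder dR/R0 of vertical sensors V1..V5
print(allocate_local_changes(R_H, R_V))
```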
The schematic picture shows that the cylindrical compression plates (50 mm in diameter) of the mechanical tester could only provide forces on the central part of the host composites. By applying a compression load of 450 N, the resistive response of each horizontal and vertical fiber sensor was converted to local dR/R 0 values based on the same algorithm (Equation 4). The distribution results show that with negligible resistance changes on the peripheral area, clear dR/R 0 changes near the central portion of composites were displayed in brighter colors, indicating its capability to detect local stresses and deformations. In addition to monitoring loadings and deformations, it is also highly valuable for detecting different failure modes of the composites, such as fiber/matrix delamination and crack initiation ( Figure S7, Supporting Information). For instance, different modes of the composite laminate, i.e. elastic deformation (0-1.5%), initiation and development of micro-cracks or delaminations (1.5-5.5%) and catastrophic failure (> 5.5%), were coincided with the substantially distinctive piezoresistive performance of the embedded sensor. Specifically, it has a gauge factor (GF) of 1.48/15.69/infinite value under elastic/crack/failure mode of composites, respectively. Conclusion In conclusion, we demonstrated the robust and versatile sensory technology of smart fabric for diagnosing and evaluating the health states of polymeric composites from the manufacturing process to the service stage and finally to failure. By co-braiding MWCNT enabled fiber rovings into a fiberglass woven preform, we first demonstrated the use of the smart fabric sensor to provide in situ resin infusion and curing information during the vacuum assisted resin transfer molding (VARTM) process of composite manufacturing. Confirmed by rheological and DSC methods, the key processing parameters were quantitatively revealed with respect to resin flow, gelation and curing. We believe these findings are highly valuable and critical for quality assurance of the host composite structures. Then, the unique smart fabric sensor readily and noninvasively integrated into the laminate proved to be desirable for monitoring the strain and stress states, as well as for detecting the failures of the host structure. More importantly, the scalable size and adjustable sensing range of the smart fabric allows for covering a laminate of a comparatively large size and also suitable for monitoring the local information of resin processing and mechanical deformation. The multipurpose sensing capabilities in conjunction with their unique scalability and noninvasiveness make the smart fabric sensor highly valuable for life-long structural health monitoring of high-performance polymeric composites. Methods Fabrication and Characterization of Smart Fabric Sensors. CNT dispersion was prepared by dispersing 100 mg MWCNT (SWeNT ® SMW, bulk density of 0.22 g/cm3, averaged diameter of 5.5 nm, aspect ratio of 1000 and carbon content of > 98%) in 200 mL of deionized water with 5 mL of nonionic surfactant triton X-100 (Sigma-Aldrich) in an ice bath using a Misonix 3000 probe sonicator (20 kHz). The sonicator was operated in a pulse model (on 10 s, off 10 s) with the power set at 45 W for 1 h. The length and diameter of CNTs in the resulting dispersion were characterized using preparative ultracentrifuge method (PUM) with the Optima MAX-XP ultracentrifuge (TLA-100.3, 30° fixed angle rotor, 13000 g-force, Beckman Coulter Inc.) 
and a Delsa Nano C particle size analyzer (Beckman Coulter Inc.). The geometrical dimensions of the CNTs were also compared with images (Figure S8, Supporting Information) from a MultiMode AFM (Veeco Instruments, Inc.). ScanAsyst and PeakForce Tapping probes (SCANASYST-AIR, Bruker) were used for AFM imaging. To fabricate smart fabrics, a fiberglass roving extracted from the woven fabrics (part # 223, Fibre Glast Developments Corp.) was used as the substrate of the fiber sensors. As introduced in the previous section, the roll-to-roll coating process, including a series of pulleys, a fixed heat gun (HG-301A, Master Appliance Corp.) and a computer-controlled stepper motor (Silverpak 17C, Lin Engineering Corp.), was used for converting the neat fiber roving into fiber sensors. They were then manually braided into the same woven fabric to form the smart fabric. Depending on the requirements, a single fiber sensor or five parallel fiber sensors were incorporated in the smart fabric. The structure of the CNT coating was examined by scanning electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDX) and Raman spectroscopy. SEM was performed with a JEOL 7400 at 10 kV to examine the morphologies of the smart fabrics. The sample was sputter coated with gold prior to SEM imaging. The same instrument was used to acquire EDX spectra to determine the carbon content of the sensors. A Renishaw inVia Raman microscope was used for collecting the Raman spectra of the smart fabric sensor with a 785 nm excitation laser at a power of 1 mW. Manufacturing of Composites. Fiber sensors were respectively placed on top and at the bottom of the preform stack. The top (bottom) layer has the sensors parallel (vertical) to the resin flow direction. Copper electrodes were then connected to the two ends of each fiber sensor. Subsequent to sensor integration and preform stacking, the fibrous stack was placed in a vacuumed plastic bag to induce the resin infusion and curing process. The VARTM process was operated close to standard atmospheric pressure (0.1 MPa) with a controlled temperature. Then the mixture of vinyl ester resin (part # 1110, Fibre Glast Developments Corp.) catalyzed by methyl ethyl ketone peroxide (MEKP-925, Norac, Inc.) with a certain mixing ratio was introduced to fill the vacuum bag. The total processing was isothermally maintained for 24 h. Subsequently, the composite laminate was debagged. Sensing Performance of Smart Fabrics for SHM of Composites. To monitor the process of composite manufacturing, the resistance of the embedded sensors was recorded by a Keithley 2401 Sourcemeter controlled by a homemade LabVIEW user interface during the whole curing process 39,40. The curing temperature during each process was measured by a FLIR E40 thermal imaging camera. The rheological tests were performed with an ARES-LS3 rheometer (TA Instruments) with a 25 mm parallel-plate fixture. Oscillatory shear flow experiments with 5% strain and 1 Hz frequency were carried out under an isothermal cure temperature controlled in the furnace. The differential scanning calorimetry tests were performed with a Q-100 DSC (TA Instruments) by monitoring the heat flow under an isothermally controlled temperature. The stretching and compression tests were performed using a Shimadzu AGS-J micro test frame with a 5000 N load cell. Based on the tension-to-failure test, mechanical properties of the composites, including tensile modulus (7.15 GPa) and tensile strength (259.2 MPa), were measured as reference information.
6,903.4
2017-03-08T00:00:00.000
[ "Materials Science" ]
Third-order spatial correlations for ultracold atoms We present here the first measurement of the third-order spatial correlation function for atoms, made possible by cooling a metastable helium cloud to create an ultracold thermal ensemble just above the Bose–Einstein condensation point. The resulting large correlation length well exceeds the spatial resolution limit of the single-atom detection system, and enables extension of our earlier temporal measurements to evaluate the third-order correlation function in the spatial plane of the detector. The enhancement of the spatial third-order correlation function above a value of unity demonstrates the presence of spatial three-atom bunching, as expected for an incoherent source. Introduction The coherence of a light source can be characterized by determining the statistical distribution of the time of arrival between photons from the source, as described theoretically by Glauber [1]. For pairs of photons, the normalized second-order correlation function is defined as the arrival probability of a second photon as a function of the delay following the detection of the first photon, normalized by the individual photon probabilities. Likewise, the third- and higher-order correlation functions are obtained from the arrival times between triplets and larger groups of photons. A perfectly coherent source is characterized by a uniform arrival probability, yielding a correlation function of unity for all orders. By contrast, a thermal (incoherent) light source is characterized by photon bunching, whereby the probability of detecting groups of photons at short delay times is higher than at long delay times. Theoretically, the maximum bunching enhancement factor for the nth-order correlation function is n! at zero delay time [1], but this may be significantly reduced in experimental measurements due to the finite resolution of the detection system. However, provided that the detector temporal resolution is significantly smaller than the width of the bunching signal, then the bunching width corresponds to the correlation time of the source. An early experiment by Hanbury Brown and Twiss [2] measured for the first time the second-order correlation function of an incoherent light source and demonstrated two-photon bunching. The same measurement can be applied in the spatial domain, where the probability of pairs of photons with particular spatial (rather than temporal) separations is measured. Applying this to stellar light sources, Hanbury Brown and Twiss [3] were able to measure the second-order spatial correlation function to determine the angular width of stars. Their pioneering experiments represent the early foundations of quantum optics. The same concepts apply to particles, including atoms, where in addition both bosonic and fermionic species exist (cf. bosonic photons). Second-order correlation functions have been determined for both incoherent (thermal) sources [4] and coherent (Bose–Einstein condensate (BEC)) atomic sources [5]. Thermal and BEC sources exhibited bunching and unity correlation values respectively, with antibunching being demonstrated for fermions [6,7]. However, until recently, there has been no matter-wave experimental verification of Glauber's conjecture that coherent sources exhibit unity correlation functions to higher order.
Using a new experimental technique in our laboratory, we were able to make the first measurements of the third-order correlation function [8], which demonstrated temporal three-atom bunching for a thermal gas of ultracold bosonic helium, and a correlation value of unity for a BEC. This result is consistent with BECs being characterized by coherence to higher orders, in the same way that an optical laser is higher-order coherent. More recently, temporal correlations up to fourth order have been measured for trapped thermal atoms [9]. In subsequent experiments using ultracold helium atoms guided in multiple modes of an all-optical waveguide [10], we were able to demonstrate the connection between the first observation of spatial atomic speckle and temporal two-atom bunching [11], both characteristic of second-order incoherence. This correspondence between spatial and temporal correlations is to be expected from Glauber's formalism, as was demonstrated for photons by Hanbury Brown and Twiss. In similar experiments using atoms, second-order spatial correlation measurements have also demonstrated atom bunching [5]. Spatial auto-correlation measurements have yielded similar information about the spatial structure of the ensemble [12][13][14][15][16]. Experiments determining the third-order spatial correlation function have thus far only been performed using photons, whereby three separate point detectors were used to measure the spatial coincidence of photon triplets [17]. The simulated (and experimental) results show that the probability of jointly detecting three randomly radiated photons from a chaotic thermal source using three separate detectors is 6 (and ∼5) times greater if the events fall within the coherence time and volume of the radiation field. The imperfect three-photon-bunching enhancement is attributed to the finite detector resolution. In this paper, we apply a similar concept to measure the third-order spatial correlation function of an atomic ensemble for the first time. (Other experiments have indirectly measured the effect of third-order correlations on collision processes [18][19][20].) The significance of spatial correlation measurements is fourfold. Firstly, for an isotropic (three-dimensional spatially symmetric) cloud of atoms, spatial coherence measurements can, interchangeably with temporal coherence measurements, be used as a diagnostic for the overall coherence properties of the ensemble. Secondly, in situations where there is a constrained dimensionality of the system, e.g. for atoms confined in one-dimensional and two-dimensional trapping geometries, the measurement of the spatial correlation functions in each trapping dimension can provide additional information on the coherence properties of the atomic source [15]. Thirdly, higher-order spatial correlations can be used to enhance image visibility, such as in the photon experiments undertaken in [17]. There the authors also perform a 'ghost imaging' experiment using three-photon coincidences, and show that the visibility is significantly improved over similar experiments performed using two-photon coincidences. Fourthly, while the formalism developed by Glauber [1] indicates that third-order correlation functions can be expressed via Wick's theorem in terms of lower-order correlation functions (see supplementary online materials in [8]), the third-order correlation functions can be related directly to processes involving three-body physics.
This includes three-body collisions (such as studied in [18][19][20]), and Efimov physics where stable three-body bound states can exist even when two-body interactions are too weak to create pairing [21][22][23]. Spatial third-order correlation measurements are therefore of interest in their own right through the elucidation of three-body physics, as well as for the information provided on the overall coherence, geometric dimensionality and imaging resolution. Experiment Here, we extend our measurements of spatial first-order coherence from the matter-wave imaging techniques employed previously [10], to enable single-atom measurements that yield both second- and third-order correlation functions in the spatial domain. We employ metastable helium atoms (He*) in the 2 3S1 state, which has a very long lifetime (∼8000 s [24]) and lies ∼20 eV above the ground state, enabling efficient single-atom detection [25]. We use the same apparatus as employed earlier to create a He* BEC [26] and a pulsed He* atom laser [27] to generate a cloud of ultracold He* atoms. In our previous temporal third-order correlation experiments [8] we used multiple radiofrequency (RF) output coupling pulses to create a large number of thermal samples just above the relatively high (∼1 µK) critical temperature T_c for BEC formation. (The temperature is determined by measuring the time-of-flight distribution of atoms released from the trap, with the uncertainty dominated by shot-to-shot fluctuations.) However, in the current experiments we progressively evaporatively cool the atoms in the magnetic trap to much lower temperatures (∼100 nK) in order to increase the correlation length to minimize the effects of detector resolution, while at the same time increasing the probability for detecting pairs of atoms within the spatial correlation length. The starting point is a bi-planar quadrupole Ioffe configuration (BiQuic) magnetic trap [26] shown schematically in figure 1, which has trapping frequencies in the x, y and z (or equivalently time) directions of 50, 550 and 550 Hz respectively. We load atoms into the trap, and continuously evaporatively cool the atoms using a swept RF field to just above T_c, yielding ∼10^6 atoms at a temperature of ∼1 µK. At this point, the correlation length is of order the spatial resolution of the detector (∼150 µm [8]). The correlation length at the detector l_d scales linearly with the time t taken for the atoms to drop from the trap to the detector, and is given by [28] l_d = ħt/(m s_t), where m is the particle mass and s_t = (k_B T/m)^{1/2}/ω is the size of the cloud in the trap for trap temperature T, frequency ω and Boltzmann constant k_B. By comparison, the transverse de Broglie wavelength in trap is given by λ_dB = h/(m v_trans) ∼ l_d (2π/(tω)). To increase the correlation length, we continue the evaporative cooling process to reduce the temperature (and hence the trap size) while simultaneously smoothly attenuating the atom number to avoid condensation. The attenuation is achieved during the final 2.7 s of the evaporation process by using additional broadband RF pulses (with a 50% duty cycle over a 100 µs pulse width) to remove atoms uniformly over the entire ensemble. This expels the vast majority of atoms from the trap, leaving a very cold cloud (∼95 ± 10 nK) of ∼1000 atoms, whose minimum temperature is limited by the stability of the (already highly stabilized) magnetic trap [30].
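As a rough numerical check of the scaling above, the following Python sketch evaluates l_d for the quoted experimental parameters (helium-4, ∼95 nK, 416 ms fall time, 50 Hz and 550 Hz trap frequencies); the explicit form s_t = (k_B T/m)^{1/2}/ω is an assumption consistent with the correlation lengths quoted in the text, not a formula taken verbatim from the paper.

```python
import numpy as np

hbar = 1.054571817e-34    # reduced Planck constant, J s
kB = 1.380649e-23         # Boltzmann constant, J/K
m = 4.0026 * 1.66054e-27  # mass of helium-4, kg

T = 95e-9                 # cloud temperature, K
t = 0.416                 # fall time from trap to detector, s

for label, f in [("x-axis (50 Hz)", 50.0), ("y-axis (550 Hz)", 550.0)]:
    omega = 2 * np.pi * f
    s_t = np.sqrt(kB * T / m) / omega   # assumed thermal cloud size in the trap
    l_d = hbar * t / (m * s_t)          # correlation length at the detector
    print(f"{label}: in-trap size {s_t * 1e6:.0f} um, "
          f"detector correlation length {l_d * 1e3:.2f} mm")
```

With these numbers the y-axis value comes out near 1.6 mm and the x-axis value near 0.15 mm, in line with the ∼1.5 mm and detector-resolution-limited figures quoted in the text.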
The correlation length at the detector along the y-axis is then ∼1.5 mm, which is an order of magnitude larger than the detector resolution. The magnetic trap is then switched off, thereby releasing the atoms, which fall ∼848 mm over 416 ms under gravity onto an 80 mm diameter micro-channel plate (MCP) detector. By optimizing the evaporation process, we can achieve a highly reproducible ensemble number which yields ∼350 detection events per experimental cycle, which are then averaged over nearly 4000 cycles. As the He* atoms arrive at the MCP their ∼20 eV internal energy creates an electron pulse which is then incident upon a delay-line detector (Roentdek DLD80). The DLD consists of a wire coil in both the x and y dimensions (figure 1) in which a current pulse is generated for each MCP event. By measuring the arrival time at the end of each wire the three-dimensional position of the detection event can be reconstructed. The detector spatial resolution (∼150 µm) is set by a combination of the MCP pore size and the DLD detection electronics. The detector resolution was determined by passing He* atoms through a fiducial mask which acts as a point source of atoms. The point spread function determined by the system geometry was then deconvolved from the resulting image to obtain the detector resolution. The correlation functions are then determined using a similar technique to that described in [8], except that here we measure correlations in the plane of the detector (defined by the x- and y-axes in figure 1) rather than in time (or its equivalent, the z-axis). For a spherical trap the isotropic nature of the expanding atomic cloud yields the same correlation length at the detector along both axes. However, in our system, the difference in the x- and y-axis trapping frequencies (50 and 550 Hz respectively) yields more than an order-of-magnitude greater correlation length in the y-direction. Since the x-axis correlation length is of order the detector resolution, we have concentrated on measuring the y-axis correlation function. A detection event is analysed to determine whether another event occurs within a given spatio-temporal bin to determine g^(2)(Δy). The bin size used was 250 µs in the z-direction (corresponding to Δz = 1 mm at a velocity of 4 m s⁻¹), 1 mm in the x-direction and 200 µm in the y-direction. The x, y, t bin sizes need to be at least as large as the detector resolution, and the smaller y bin size reflects the optimization of the data sets needed to achieve the best signal-to-noise ratio. To measure the third-order spatial correlation function g^(3)(Δy_1, Δy_2), the data is analysed to determine whether a further particle arrives within the same bin volume centred on the second particle position. Figure 2. Normalized second-order spatial correlation function g^(2)(Δy) for ∼95 nK thermal atoms; the red line shows a Gaussian fit to the data. Results The second-order correlation function g^(2)(Δy) determined by this process is shown in figure 2 for a thermal ensemble at ∼95 nK. Here the clear atom-bunching signature can be seen at small particle separations, with a correlation length of 1.30 ± 0.03 mm. The bunching enhancement factor is 1.131 ± 0.015 compared with the expected value of 2.0 (2!) given perfect detector resolution. A simple theoretical model [8] using our experimental values with no free parameters predicts an enhancement of 1.14 ± 0.02 and a correlation length of 1.5 ± 0.2 mm, in good agreement with the experiment.
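To make the binned counting procedure described above more concrete, here is a minimal Python sketch of one common way to estimate g^(2)(Δy) from per-cycle lists of detected y-positions; normalizing same-cycle pair separations by pairs taken from different cycles is an assumption of this sketch, not necessarily the exact normalization used in the paper.

```python
import numpy as np

def g2_estimate(shots, dy_bins):
    """Binned estimate of g^(2)(dy). 'shots' is a list of 1D arrays, each holding
    the detected y-positions from one experimental cycle. Pairs within a cycle
    carry the correlation signal; pairs across cycles serve as the
    uncorrelated reference used for normalization."""
    same = np.zeros(len(dy_bins) - 1)
    cross = np.zeros(len(dy_bins) - 1)
    for i, yi in enumerate(shots):
        yi = np.asarray(yi, dtype=float)
        # all unordered pairs within one experimental cycle
        sep = np.abs(yi[:, None] - yi[None, :])[np.triu_indices(len(yi), k=1)]
        same += np.histogram(sep, bins=dy_bins)[0]
        for yj in shots[i + 1:]:
            yj = np.asarray(yj, dtype=float)
            cross += np.histogram(np.abs(yi[:, None] - yj[None, :]).ravel(),
                                  bins=dy_bins)[0]
    # ratio of the normalized separation distributions approaches g^(2)(dy)
    return (same / same.sum()) / (cross / cross.sum())

# Hypothetical usage with 200 um bins out to 10 mm (positions given in mm):
# g2 = g2_estimate(shots, dy_bins=np.arange(0.0, 10.0, 0.2))
```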
The simple theoretical model is the same as employed in our previous experiment [8, supplementary online material] and is based on the method of Gomes et al [28], which employs the theory of Naraschewski and Glauber [29]. The theory allows the determination of g^(2)(Δy) at the detector for a trapped atomic cloud released in the absence of interparticle interactions (which applies at our very low atomic number densities). The correlation function depends on the cloud temperature, the trap frequencies, and the time taken for the atoms to fall under gravity from the trap to the detector, all of which we determine empirically from the experiment. The model can also be extended to higher-order correlation functions via Wick's theorem. However, the finite experimental resolution will reduce the peak bunching amplitude to a value less than n!, the ideal value for the nth-order correlation function for incoherent bosons. In addition to the finite detector spatial resolution (∼150 µm), as indicated above the data is binned along the various axes to improve the signal-to-noise ratio, and if the bin size is comparable to or greater than the correlation length, it will also contribute to a loss of resolution. The correlation functions determined using [28] are averaged over the bin volume, where the position of each detected atom is degraded by the detector resolution. The reduction in bunching amplitude is dominated by this imperfect resolution, which can be approximated by convolving the correlation function with a single Gaussian function that represents the effective resolution (the combination of bin size and detector resolution). The resulting peak bunching amplitude at second order is then determined by l_d^α(t), the correlation length at the detector; S_α(t) = t(k_B T/m)^{1/2}, the cloud size at the detector; and d_α, the effective rms resolution. To further improve this, corrections are added to represent the response of the MCP and DLD system and temperature fluctuations, using a Monte Carlo method to average the correlation functions over the three-dimensional effective bin volume. The resulting model therefore has no free parameters as all the variables are determined directly from the experiment. The measured third-order correlation function g^(3)(Δy_1, Δy_2) is shown in figure 3 (top left). This is plotted as a function of the y-axis separation for each particle pair in the triplet, i.e. Δy_1 = y_2 − y_1 and Δy_2 = y_3 − y_2. The measured bunching enhancement is 1.44 ± 0.02 compared with an expected enhancement of 6.0 (3!) with perfect detector resolution. The simple theoretical model [8] with no free parameters yields 1.47 ± 0.05 (top right). Plan views of both three-dimensional plots are also shown in the plots underneath. Note that the second-order correlation function in figure 2 can in principle be derived from the third-order function in figure 3 when one atom is a large distance from the other two in each triplet (for example, g^(2)(Δy_1) can be found by averaging g^(3)(Δy_1, Δy_2) for values of Δy_2 much greater than the correlation length). Apparent in all the three-dimensional and plan views is a ridge of enhancement along the diagonal, where the separation of the first and third particles from the second is the same. This is to be expected, as when Δy_1 = Δy_2 there is an enhanced probability of two of the three particles being close together and therefore interfering. The same phenomenon is present in the third-order spatial correlation measurements using photons reported in [17].
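For reference, the Wick-theorem extension mentioned above is usually written, for a chaotic (Gaussian) source and in terms of the first-order coherence function, in the following standard form; this is an assumed textbook expression rather than a quotation of the paper's own equation, whose notation may differ.

\[
g^{(3)}(\mathbf{r}_1,\mathbf{r}_2,\mathbf{r}_3) = 1
+ \bigl|g^{(1)}_{12}\bigr|^{2} + \bigl|g^{(1)}_{13}\bigr|^{2} + \bigl|g^{(1)}_{23}\bigr|^{2}
+ 2\,\mathrm{Re}\!\left[g^{(1)}_{12}\,g^{(1)}_{23}\,g^{(1)}_{31}\right],
\qquad g^{(1)}_{ij} \equiv g^{(1)}(\mathbf{r}_i,\mathbf{r}_j).
\]

At zero separations each term reaches its maximum, giving the ideal value of 3! = 6; the three pairwise terms are the two-body contributions discussed below, and the final triple product is the purely three-particle interference term.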
This ridge line is not present in the temporal third-order correlation function presented in [8], since time-ordering is implicit in the detection of particle three after the detection of a particle pair (allowing intervals without time ordering, i.e. enabling time reversal of the first two particles, would yield the same diagonal ridge). Finally, it is interesting to note that g^(3) contains contributions from g^(2) since (as noted above) when one of the three particles is taken far away, g^(3) reduces to g^(2). It is possible to remove these two-body contributions from the three-body correlation function (see equation S2 in the supporting online material of [8]). Such a decomposition has been used previously for ultracold atoms [31] and weakly correlated plasmas [32]. Figure 4 shows the three-body correlation function (g^(3)(Δy_1 = Δy_2 = Δy) − 1, dashed red line) as well as the contribution towards this signal from solely three-particle interference (solid blue line). Conclusion Using an improved experimental technique, we have extended our temporal third-order correlation measurements to determine the spatial third-order correlation function for atoms for the first time. By reducing the temperature of the atomic cloud by over an order of magnitude to ∼95 nK, we have been able to increase the correlation length in the y-direction to more than an order of magnitude greater than the spatial resolution of the detector. This also increases the number of pair and triplet events measured within the bin size, despite the reduction in the number of atoms detected as a result of the evaporative cooling process. The result is a large enhancement of the spatial atom-bunching signal, with values of 1.131 ± 0.015 for g^(2)(Δy) and 1.44 ± 0.02 for g^(3)(Δy_1, Δy_2), compared with 1.022(2) and 1.061(6) respectively for the temporal second- and third-order correlations measured previously [8]. This strong atom-bunching signal is a clear signature of the incoherent nature of the thermal atomic ensemble. The measurement of higher-order spatial correlation functions in atomic ensembles holds promise for enhancing imaging visibility [17], and for probing quantum mechanical phenomena such as entanglement and the violation of Bell's inequalities [33]. By improving our correlation techniques to increase the atom-bunching signals, we aim to make higher-order correlation measurements more accessible for such applications.
4,352.6
2013-01-01T00:00:00.000
[ "Physics" ]
Bmc Medical Informatics and Decision Making Coupling Computer-interpretable Guidelines with a Drug-database through a Web-based System – the Presguid Project Background: Clinical Practice Guidelines (CPGs) available today are not extensively used due to lack of proper integration into clinical settings, knowledge-related information resources, and lack of decision support at the point of care in a particular clinical context. Background CPGs integrate generic recommendations for specific medical circumstances. They have been defined as "systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific medical circumstances" [1]. They are designed to compile the best medical knowledge in order to provide physicians with a practical decisional aid. Clinical guidelines aim to eliminate clinician errors and promote best medical practice. Most CPGs are now easily accessible on specialized web sites. For instance, the National Guideline Clearinghouse [2] alone has almost 1,000 publicly accessible guidelines. The ANAES (Agence Nationale d'Accréditation et d'Evaluation en Santé: the agency tasked by the French government with the production of medical references and CPGs) provides CPGs in PDF format on its Web site [3]. The wide diffusion of CPGs does not however solve the problem of their effective use in daily practice. Research on the lack of adherence to CPGs has not been fully carried out. Davis' paper [4] shows that compliance with guidelines increases if they are developed with the direct participation of clinician users (50% increase in relative compliance). Attaching guidelines to patients' records is a second important factor (20% increase). The guidelines are usually issued as long, sealed electronic narratives (i.e., static PDF or HTML documents) that are difficult to use at the point of care during time-limited medical consultations. Disseminating CPGs in such a textual format has proved inefficient. The practitioner has to read several pages in order to find the appropriate care recommendation for a specific clinical circumstance. Their practical aspect rather than their content is at fault. Guidelines would prove much more efficient if they were available in the healthcare setting, integrated in the health care information system, easily adaptable to given clinical situations/scenarios and able to avoid overloading physicians with non-essential information. Minimizing the time spent consulting CPGs is crucial when attempting to improve their usage in everyday practice. Physicians need both timely and pragmatic information to provide patients with the most appropriate care, and computerized CPGs have proved valuable in this respect [5,6]. Many authors report integration in the information system as a crucial point for the dissemination of CPGs. The heterogeneity of legacy schemata and local vocabularies is an important reason why software systems are not interoperable. The community effort to develop prescribing aid systems coupled with computerized CPGs and requiring Electronic Medical Records (EMRs) is considerable [7,8]. CPG-EMR integrations are desirable but not always feasible, for the moment, when CPGs are to be extensively disseminated [9,10]. EMRs are not always available and the data they contain may not be complete or structured well enough for computer interpretation. Few practitioners use EMRs able to manage structured and standardized data in France today [11].
Their large number and their heterogeneity are obstacles to their proper interoperability (e.g., there are more than 120 medical practice software packages currently available on the French market and most of these include an EMR module). In other words, producing CPGs today which are integrated in the physician's private information system is very expensive and the results are not easily accessible. However, more and more physicians are connected to the Internet and use it to search for the available CPGs [12]. The Internet becomes an efficient means to improve CPGs' impact on daily practice if these online CPGs actually provide practitioners with pragmatic assistance [13]. Providing current and precise information on drug prescription within the CPGs will help improve both drug usage and routine CPG compliance. The PRESGUID project objectives From a pragmatic point of view, very few French physicians, in private practice, currently use an electronic patient record at their place of work. Without discarding the fully-integrated approach for the future, we have chosen, in the PRESGUID project (PREScriptions and GUIDelines), a pragmatic approach for the first step of implementation. We assume that producing more precise, customized and shortened advice for drug prescription than the textual formats generally used by health agencies significantly enhances CPG dissemination and health care quality. The project aims to meet the information needs of the practicing physician by combining clinical guidelines with documentation of regularly updated scientific evidence and reliable documentation on drug descriptions. The CPGs produced by ANAES usually recommend the most appropriate therapeutic drug classes given the clinical setting. However, this help is not sufficient in the daily healthcare process. Physicians must also choose among the recommended classes the appropriate specific medication to be prescribed. Ideally, clinicians should know every drug available and keep up-to-date with the current pharmacopoeia (new molecules, market recalls, contraindications, marketing authorizations, etc.). In addition, they should make their selection based on both medical and economical criteria (using different incentives, the government encourages the prescription of cheaper generic medications). Computerizing CPGs and coupling them with drug databases for more complete information and improved clinical adaptability represents a definite improvement as far as physicians are concerned. Vidal® is currently commercializing such a database, which is both authoritative and extensively used by French physicians (Vidal is equivalent to the PDR in the USA) [14]. The online version gives access to medication monographs and allows checking for drug interactions among a list of drugs selected by the practitioner. Yet, it does not include any module guiding the users towards CPGs or definite medical references. The PRESGUID project is designed as an online service enabling physicians to consult computerized CPGs linked with drug databases, allowing patient-specific decision support. It will consequently improve CPG consultation and integration into the healthcare process. We use a pragmatic approach both to implement the textual CPGs from the ANAES into a computer-interpretable format and to enhance treatment recommendation by referring to the Vidal® drug database.
The purpose is to provide physicians with an interactive CPG consultation interface displaying recommendations that match relevant patient data and the clinical setting. PRESGUID provides a web-based guideline system that takes, as input, clinical data on a particular patient and returns, as output, the customized recommendation. Should the recommendations require prescribing drugs, the system will query the drug database and will display detailed information about the relevant specific medications. Methods The guideline task calls for data requirements, abstraction (patient characteristics, classes of medications) and modeling requirements. Computable CPG development is facilitated by the use of an architecture integrating modular components and including user-friendly authoring tools and execution facilities [15]. The PRESGUID project is based on such a Web architecture, including a CPG development and distribution platform as well as a drug database. The platform includes various tools required for the development and distribution of computerized CPGs. This platform was further described in a previous paper [16]. Figure 1 presents the architecture of the platform and its main features are presented below. CPG model and graphic editor Representing CPGs in a computer-interpretable format is a critical issue for guideline development, implementation, and evaluation. Our knowledge representation model (Figure 2) is inspired by the GLIF model [17]: a CPG is a sequence of decision and action steps. Action steps are used in order to collect patient data, compute and assign data values (e.g., compute the body mass index from weight and height) and display messages or recommendations. Decision steps enable evaluation of a condition composed of one or more criteria. These criteria concern the data values derived from effecting the previous action steps. We use classical comparative operators, logical operators and mathematical functions to produce the expressions to be evaluated. Patient data inferred while performing a CPG are represented by the DataItem class (Figure 3). The DataItem class is composed of one or more attribute-value pairs. The value type is either numeric or string. Values are constrained by a range (minimum, maximum) or a domain of values defined in a given list of items. Following the patient data model, guideline authors have various options to create DataItem instances according to their own needs for a computable CPG. At this stage, we do not want to impose a particular patient data classification because there is no standard clearly established and widely used for patient data representation and classification. However, use of a standard classification is a key point to allow interoperability and coupling CPGs with EMRs. In order to take this into account, DataItem and Attribute instances are identified by an "Id" attribute. Future implementations may use this "Id" in order to map patient data inferred in a guideline with targeted classifications (e.g., ICD-10, MeSH, ...). (Figure 1. Architecture of the PRESGUID project.) Further, we have elaborated a visual guideline-authoring tool which provides a graphical interface to create and maintain guideline logic and content (Figure 4). Our editor expresses the content of a CPG, structured as an algorithm, in XML format. The syntax we use within XML CPG documents is specific to our CPG engine. Classes of our model are instantiated as XML tags containing sub-elements and attributes which detail the CPG logical structure and its content.
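As an illustration of the step and patient-data model just described, the following Python sketch mirrors the decision/action step and DataItem concepts with hypothetical class and field names; the actual PRESGUID representation is an XML schema specific to its engine, so this is only an assumed, simplified rendering.

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Attribute:
    id: str                                   # identifier used for later mapping (e.g. ICD-10)
    value: Union[float, str, None] = None     # numeric or string value
    domain: List[Union[float, str]] = field(default_factory=list)  # allowed range or list

@dataclass
class DataItem:
    id: str
    attributes: List[Attribute] = field(default_factory=list)

@dataclass
class ActionStep:
    kind: str                 # "collect", "compute" or "display"
    data_items: List[DataItem] = field(default_factory=list)
    message: str = ""

@dataclass
class DecisionStep:
    criterion: str            # e.g. "bmi >= 30"
    if_true: object = None    # next step when the criterion holds
    if_false: object = None   # next step otherwise

# Tiny guideline fragment: collect weight and height, compute BMI, branch on its value.
bmi = DataItem("bmi", [Attribute("value")])
steps = [
    ActionStep("collect", [DataItem("weight"), DataItem("height")]),
    ActionStep("compute", [bmi]),
    DecisionStep("bmi >= 30",
                 if_true=ActionStep("display", message="Obesity-related recommendation"),
                 if_false=ActionStep("display", message="Standard recommendation")),
]
```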
The main objective of the authoring tool is to help CPG authors easily create accurate CPGs. A major consideration was to ensure flexibility and comprehensiveness for the authors. Within the authoring tool, a CPG is presented as a decision tree handled with drag-and-drop operations. The tool also provides specific forms to structure each element of the guideline (steps, decision criteria, patient data which have to be inferred, recommendations, etc.). Clinician experts involved in developing computable CPGs are familiar with the decision tree presentation often used to draw up classical paper-based guidelines. It is therefore easier for them to take part in the authoring process. Within the authoring tool, patient data and other elements of the CPG can be linked to didactic documents (bibliographic references, definitions of medical concepts, additional multimedia documents, relevant web site URLs, ...) which can be displayed while consulting the CPG. During the authoring process, the CPG as well as its linked documents are identified by a set of properties (authors, summary, usage, MeSH keywords, ...) and are indexed on the CPG server. Moreover, guideline recommendations are ranked by evidence grading as defined by the Oxford Center for EBM [18]. CPG server and engine Fully developed CPGs are stored and indexed on a web server. To improve accuracy and consistency of guidelines we have opted for a procedure in which the guidelines can be tested in real conditions before putting them at users' disposal. An 'approved' status is delivered by the specialist in charge of the tests once he has judged the results satisfactory: the CPG then becomes available for any clinicians who have access to the server. The guideline engine carries out a CPG according to patient data values. It links the steps, infers the conditional criteria and initializes the actions to be performed (patient data requests, data calculation, recommendation display, ...). The user interface is a dynamically generated web page. Initially, the information presented in the user interface is in XML format. We use eXtensible Stylesheet Language (XSL) style sheets to generate HTML in order to format information and create data entry forms and controls. Drugs database The Vidal® database provides information about those drugs that are available on the French market and it is accepted as a reference by all healthcare professionals. It is implemented in an Oracle® relational database and delivered with a set of APIs providing access to detailed drug-related data and documents (monographs, dosage, chemical composition, etc.). The Vidal® database uses both the ATC (Anatomical Therapeutic Chemical) classification as well as its own classification, which although not formally recognized is nevertheless extensively used by healthcare professionals throughout France. CPG and drugs database coupling Knowledge coupling between CPGs and the drug database is based on XML message exchanges. We have developed a software component matching the XML description of therapeutic classes recommended in CPGs with appropriate medications within the Vidal® database. To describe a drug prescription within a CPG we use an XML tag (DRUG_PRESCRIPTION) containing the recommended therapeutic class codes (either ATC or Vidal® classifications can be used). Since CPGs are expressed in XML format, the original XML DRUG_PRESCRIPTION tag is nested within a second tag standing for a message recommending a drug-related therapy (Figure 5).
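A minimal Python sketch, using the standard library's ElementTree, of how such a recommendation message with a nested DRUG_PRESCRIPTION tag could be built; only the DRUG_PRESCRIPTION tag name comes from the text, while the surrounding element names, attributes, and the two ATC example codes (sulfonylureas and alpha-glucosidase inhibitors) are illustrative assumptions.

```python
import xml.etree.ElementTree as ET

# Build a hypothetical recommendation message embedding a DRUG_PRESCRIPTION tag.
message = ET.Element("RECOMMENDATION")
text = ET.SubElement(message, "TEXT")
text.text = "Start oral monotherapy with one of the recommended drug classes."

prescription = ET.SubElement(message, "DRUG_PRESCRIPTION")
for atc_code in ("A10BB", "A10BF"):   # sulfonylureas, alpha-glucosidase inhibitors
    ET.SubElement(prescription, "CLASS", {"system": "ATC", "code": atc_code})

print(ET.tostring(message, encoding="unicode"))
```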
The CPG authoring tool provides specific forms that allow browsing the drug classifications, selecting the required classes and, finally, automatically embedding the DRUG_PRESCRIPTION tags within recommendation messages. (Figure 4. Guideline and 'Display Recommendation' step edition.) In addition, we have developed a drug-database access component (DDBAC) designed to query the Vidal® database whenever a DRUG_PRESCRIPTION tag is included within a recommendation. This component uses the Vidal® APIs to find the medications which belong to the specified therapeutic classes. It produces and delivers an XML root tag including the appropriate medications, for each of which are specified: identifier, trade name, status (generic/referent), URL of the web page including the full monograph, and the therapeutic sub-classes if any (Figure 6). The results are included within the initial CPG XML document. Users may consult the different medications and select the one to prescribe by consulting the detailed monographs from the drug database. Typical uses of the PRESGUID CPGs are described in the next section. Results We have computerized CPGs from the ANAES library. Two textual CPGs issued by the ANAES (hypertension management and diabetes mellitus treatment) have been coded and implemented to test and validate the computerization process (a demonstration version is available at http://cybertim.timone.univ-mrs.fr/presguid/). These CPGs have been computerised using our guideline-authoring tool by the designer of this tool. Furthermore, two GPs, not directly involved in the PRESGUID project, have used the authoring tool in order to propose a computerised version of five CPGs from ANAES (high blood pressure, diabetes mellitus, cerebrovascular accident, benign prostatic hypertrophy and cancer pain treatment). They were trained in the use of this tool over four hours and had help available if needed. Their work is not yet fully completed. But so far, it appears that the tool is user-friendly because it is mainly focused on structuring a decision tree with especially intuitive drag-and-drop operations. Only a dozen minutes or so is required to get an initial version of a functional algorithm. It can be executed immediately and improved easily step by step. When initial textual CPGs are ambiguous and/or incomplete, it of course takes more time and expert advice is necessary. (Figure 5. A DRUG_PRESCRIPTION tag nested in a recommendation message.) For example, the hypertension CPG indicates that smokers have higher cardiovascular risk (the recommendation is risk dependent) but does not specify which patients must be considered as smokers (Current smokers? Former smokers? Passive smokers? Active smokers? Tobacco quantity? ...). Similarly, it gives a recommendation for patients with renal dysfunction and another one for patients with cardiac dysfunction but no explicit advice about patients who have both diseases. We realise that the initial CPGs should be improved dramatically. From this point of view, the guideline-editor tool does not solve the problems of ambiguity and incompleteness of the given guidelines, but it highlights these crucial problems. It helps towards validating the guidelines given by experts. A typical consultation scenario involving the diabetes mellitus CPG is presented below. During CPG consultation, clinicians have to fill in patient data forms dynamically generated by the CPG engine. The collected data are summarized on the left-hand side of the screen (Figure 7).
Once the data have been collected and inferred, the system displays the recommendation adapted to the patient's profile. Clinicians can simulate data modifications and observe the effects on the recommendations. A DRUG_PRESCRIPTION tag nested in a recommendation is presented as a hyperlink (Figure 7, right-hand side). It provides access to all the specific medications that can be used in the present context. Physicians may then browse the medications (displayed in a tree structure) and consult up-to-date monographs from the Vidal® database (Figure 8). Discussion Key points to improve CPG effectiveness concern efficiency, usefulness, information content, a user-friendly interface and CPG integration into the clinician workflow processes prior to delivering recommendations on time, at the point of care. Moreover, access to up-to-date information and guided processes also improve drug prescription safety [19]. In order to achieve these goals, we propose computable and interactive CPGs that can be used on computers connected to our server. (Figure 6. A RESULT tag produced by the DDBAC.) The recommendations provided are related to specific patient data collected by clinicians during medical consultations. Furthermore, recommendations regarding prescription drugs are coupled with a knowledge database widely used by clinicians. Within the CPG consultation interface, practitioners have direct access to detailed information which will help them choose among the medications potentially appropriate for the clinical context. We have paid special attention to the accessibility and the level of detail of CPGs. CPG impact depends on the assumption that clinicians will access the CPGs and will apply them in patient-specific circumstances. They must be easy to consult, integrated in the care process, and able to produce patient-specific recommendations rapidly. The computerization of CPGs needs a strict and detailed structuring of their contents. The variability of CPG types and their complexity sometimes lead developers to construct over-specific systems, thus narrowing their field of application. The authoring and maintenance of guidelines is a difficult process, which involves a detailed review of all the alternatives by evaluating current scientific evidence, a consensus process among medical experts, a specification of optimal decisions and finally a documentation of the recommendations. CPGs are often complex and difficult to maintain in a patient-specific computer-based form. (Figure 7. A diabetes mellitus therapy recommendation with the evidence level given as a hyperlink «grade». The recommendation is displayed on the right-hand side of the screen; the link with the drug database is activated by clicking on the recommended drug family (in this example, "sulfamides and alpha glucosidase inhibitors"). On the left-hand side, the data collected from the patient.)
This approach goes along with other significant efforts made to take the whole treatment strategy into account within systems using both CPGs and drug-knowledge databases: PRODIGY [7] and ASTI [8] are known examples of these systems. CPG consultation in PRES-GUID is similar to the "guided mode" and "interacting mode" respectively of the ASTI and PRODIGY systems. In both these systems, the CPG module is intended to be provided as a plug-in/extension of commercial EMRs (one specific EMR for ASTI, several EMRs for PRODIGY which have to be compliant with minimum standards edited by the National Health Service Information Authority of the United Kingdom). Such an approach has great benefits in improving CPG integration within clinical practice but narrows their usage for the GPs who have these specific EMRs. In France there are presently no edited standards with which EMR software providers are supposed to be compliant. Moreover, there are many EMR editing software companies sharing this market. As a consequence, developing computerised CPGs is still restricted to very few systems and de facto few GPs have experience in the use of computerised CPGs connected to their EMR in France. This evidence convinces us that we have first to find a way to providing access to computerised CPGs potentially accessible to a large number of GPs without the needs of specific systems. We are aware that it is a first step to improving CPG use, but we also believe that it is a necessary step to show to a large number of practitioners and medical software providers the potentialities of computerised CPGs. In our opinion, it is also a way to influence the authoritative textual CPGs developers: computerising a CPG formatted as a textual document Recommended specialities (left hand side) and monographic information (right hand side) Figure 8 Recommended specialities (left hand side) and monographic information (right hand side). Generic specialities appear in green, referring ones in red, others in blue. will reveal its ambiguities, its lack of knowledge and will consequently improve its logical structure. This can be illustrated for example, by the fact that in the French ASTI project the hypertension management CPG from the Canadian Medical Association has been preferred to the French CPG from ANAES because the latter was less detailed and less complete. Thus, computerisation of CPGs is not just a solution to improve their use by practitioners, it is also, as a first step, a quality assessment procedure to test the guidelines, and ideally to improve their contents. With the PRESGUID project, we designed a decision system based on patient data directly collected from clinical users that allows for easy switching between the CPGs and the clinical system (this is a key requirement as far as patient-specific decision support is concerned). This system can be used as a stand-alone CPG system whose main advantage is that it is ready for use whatever the structure of the health information system may be. The associated CPGs can be widely disseminated over the Internet and easily shared between care providers. Further, this system is interactive and physicians can easily modify patient data to observe the effects on the resulting recommendations. We believe that this feature reduces the 'black box' negative effect that some physicians may feel when using decision-aid systems. This feeling is one of the obstacles to the wider adoption of computerized CPGs [20]. 
Data collection, presently assumed by the clinician user when he/she consults the guideline, could be completed or replaced by the use of a mapping module with an EMR. Linking CPGs and EMRs can improve their daily use. However, standalone CPGs can also have positive results in helping clinicians to choose and memorise the appropriate medication for a given patient [21]. As shown by Wang et al. [22], sharing CPGs among institutions can be achieved by using a mapping process between a guideline model and generalized guideline execution tasks. Such evidence reinforces our conviction of the necessity of basing computable CPG developments on consensual models. Our developments take advantage of the GLIF model, which presents many similarities with most models currently in use [23]. As we have mentioned above, we use our own syntax to format CPGs in XML. We believe that this syntax could be enriched and modified using XSL transformations in order to be compliant with the GLIF expression language if needed. However, such transformations have not been tested. Other researchers have also used this emerging model as well as the XML format in order to structure CPGs [24][25][26]. Suggestions were made to improve CPG modeling based on patient data representation [27,28], logic specification [29,30] and additional information [31]. Nonetheless, taking drug prescription into account is also an important factor in improving CPG modeling and implementation. This research has potential implications both for CPG developers and providers of electronic drug databases. Defining consensual methods to match drug databases and computable CPGs will enhance the whole treatment strategy. We suggest using a 'drug prescription' element embedding the ATC code associated with the recommended therapies. Classifications may be different according to the drug databases that are being targeted. However, it is still preferable to use nationally or internationally recognized classifications facilitating CPG sharing and re-use. On the whole, we must bear in mind that CPG shareability is not only a question of the syntax used, of knowledge representation and of coupling functionalities with EMRs. CPG shareability also requires integration into the health care workflow and similar/uniform health care processes. We need further analyses to find a model which can take into account all these aspects so as to be fully satisfactory when implemented in the real world. PRESGUID is a decision support system based both on authoritative CPGs and on an authoritative drug database. The project also has potential implications for healthcare institutions because the Vidal® database is now available free of charge for institutions. Our developments are thus applicable to the information systems used in these institutions. The coupling with other online drug databases using ATC coding would be similar. To improve accessibility, the guideline server uses a standard vocabulary (MeSH) and a set of properties to reference the guidelines (subjects, authors, public concerned, date, etc.). Thanks to these properties, we can structure a catalogue of the available guidelines. Depending on his/her needs, each user can build his/her own CPG list. In order to offer an even better structured guideline catalogue we plan to use more properties, following Shiffman et al. [31]. An evaluation study is currently in progress.
Evaluations of the user interface of the PRESGUID system have already been carried out by independent evaluators who performed a standard usability inspection (heuristic evaluation). The usability study revealed some problems and produced recommendations. These recommendations have already been taken into account in the re-engineering of the Human-Computer Interface. The current online demonstration version integrates these recommendations and reflects corrections in line with the criteria of Scapin and Bastien [32]. An evaluation of PRESGUID by GPs is already planned. This evaluation will be performed using five CPGs issued by the ANAES. The study protocol plans to evaluate the use of the system through interviews and video recordings of GPs using the system. We can certainly improve the PRESGUID decision system by taking advantage of the existing functionality that checks for drug interactions. The next PRESGUID version will use the API modules provided in the Vidal® database to check for interactions between the drugs recommended in the CPGs and those already administered to patients. Finally, we propose a method to disseminate the CPGs produced by the ANAES in a user-friendly format and subsequently improve their application in daily practice. Experience shows that computerisation has to be taken into account from the first stage of CPG development to improve their clarity and their comprehensiveness.
6,136.2
0001-01-01T00:00:00.000
[ "Computer Science", "Medicine" ]
Sewer Inlet Localization in UAV Image Clouds: Improving Performance with Multiview Detection: Sewer and drainage infrastructure are often not as well catalogued as they should be, considering the immense investment they represent. In this work, we present a fully automatic framework for localizing sewer inlets from image clouds captured from an unmanned aerial vehicle (UAV). The framework exploits the high image overlap of UAV imaging surveys with a multiview approach to improve detection performance. The framework uses a Viola–Jones classifier trained to detect sewer inlets in aerial images with a ground sampling distance of 3–3.5 cm/pixel. The detections are then projected into three-dimensional space where they are clustered and reclassified to discard false positives. The method is evaluated by cross-validating results from an image cloud of 252 UAV images captured over a 0.57 km² study area with 228 sewer inlets. Compared to an equivalent single-view detector, the multiview approach improves both recall and precision, increasing average precision from 0.65 to 0.73. The source code and case study data are publicly available for reuse. The Need for Urban Drainage Network Infrastructure Data Urban drainage network infrastructure is foundational to public health and safety in urban areas. As such, great investments have been made into such infrastructure, especially in developed countries. In Switzerland, for example, the replacement value of all public sewer and drainage infrastructure is estimated at 66 billion Swiss francs [1], which corresponds to around 7000 euros per capita. To manage and maintain this infrastructure in the long term, it is essential to catalog the constituent assets and geographical layout of the networks. Comprehensive and detailed network layout information also plays a role when assessing flood risks. According to Hürter and Schmitt [2], the inclusion of sewer inlets in the model has a clear impact on the simulation results for urban pluvial floods caused by medium-sized rain events. This finding speaks against the common engineering practice of considering manholes as the sole interface between surface flows and the drainage network. Going a step further, Simões et al. [3] looked at the impact of capacity restriction of sewer inlets due to debris during flood events. Using a stochastic modeling approach, the authors showed that sewer inlet capacity does indeed have a large impact on flooding occurrence.
Despite these reasons, data pertaining to urban drainage networks are often found lacking, inaccurate, or hard to access. Again in Switzerland, a report from 2012 states that low data availability characterizes the whole water management sector [4]. While no international surveys on the topic are … Unmanned Aerial Vehicles Enable Low-Cost Collection of Aerial Image Clouds Unmanned aerial vehicles (UAVs) are natural contenders for multiview aerial detection applications: when UAVs are used for orthoimage creation, aerial images must be captured with a high overlap and processed because of the low flight height and consequently high perspective distortion. In practice, it is recommended to have between 60% and 80% overlap in both lateral and longitudinal directions [16]. To create the orthoimage, the UAV images are undistorted, reprojected with a digital surface model, and stitched together into a single orthoimage using a mosaicking approach. If the end goal is object detection, however, the individual images could also be used directly in a multiview detection framework, as has been shown to be successful with ground-level imagery [11,13]. Scope and Novelty of the Present Study In this work, we present a multiview framework for detecting small, static objects in UAV image clouds. The framework is applied to the detection of sewer inlets in a municipality near Zurich, Switzerland. The performance of multiview detection is compared to that of an equivalent single-view approach in which objects are detected in an orthoimage of similar resolution and geographical extent as the individual UAV images. This study is (to the best of our knowledge) the first demonstration of multiview detection with UAV image clouds. From the standpoint of urban water management, while UAVs have been tested for surface imperviousness observation [17] and elevation model generation [18], this study is also the first investigation of UAVs for drainage system inventory mapping. Finally, the detection framework and data presented in this study have been published and are free for anyone to reuse or build upon.
Single-View and Multiview Detection

The single- and multiview detection approaches compared in this work (Figure 1) differ in essence only in the image medium used for detection: in the first case, objects are detected within a single georeferenced and orthorectified aerial image, whereas in the second, individual aerial images are used. There are three main advantages expected from using a multiview approach. Firstly, thanks to the multiple perspectives provided by the individual images, the issue of visual obstruction from trees and moving vehicles should be mitigated. Secondly, the additional information should increase detection accuracy (i.e., fewer false alerts). Finally, the individual UAV images are not processed as is the orthoimage, so a higher image quality and resolution can be expected. The steps necessary for both approaches are detailed in the following sections and can be summarized as follows, as illustrated in Figure 1. In step 1, images are clipped using road network information to limit the search area. In step 2, sewer inlets are detected in each image using a sliding window classifier. In step 3, which only applies to the multiview approach, detections are cast from each image into three-dimensional (3D) space. Overlapping detections are clustered together in step 4, and properties are computed for each cluster. In the final step, the clusters are classified and filtered based on their properties to remove false positives. The detection results are cross-validated in five folds of training and testing data.

Image Clipping

Sewer inlets are most often situated on the side of the road, and knowledge of this fact can be leveraged to dramatically restrict the area that needs to be searched when localizing sewer inlets, thereby also reducing the number of false positives. In this work, a mask (Figure 2b) was created from land use data by adding an inner and outer buffer to road edge lines. Since sewer inlets in Switzerland are usually situated on the inner edge of the road and have a principal dimension of around 50 cm, an external buffer of 50 cm and an internal buffer of 100 cm were chosen for the mask. Thus, only image data from the roadsides are retained (Figure 2c) from the original orthoimage (Figure 2a). While image clipping is trivial for the orthoimage, some processing is required to transform the mask into the projections of the individual (nonrectified) aerial images. First, the mask vertices were enhanced with elevation values extracted from a digital elevation model. Then, the 3D mask was back-projected into the 2D space of each image, using the known camera poses, by back-projecting polygon vertices according to

v ∝ K R (X − t),

where v is the point coordinate vector in normalized image space, X is the point coordinate vector in the world coordinate system, K is the [3 × 3] camera lens distortion matrix, R is the [3 × 3] camera rotation matrix, and t is the [3 × 1] camera position vector in the world coordinate system. The K, R, and t camera parameters are determined prior to the detection process using photogrammetry software. We used the Geospatial Data Abstraction Library (GDAL) and OGR Simple Features Library [19] for reading, writing, and processing geospatial raster and vector data. Numpy [20] was used for matrix operations. Pix4Dmapper [21] was used to estimate camera parameters.
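To make the back-projection step concrete, the sketch below implements the pinhole relation above with NumPy. It is a minimal illustration, not the authors' exact code: the camera parameters, the nadir-looking rotation, and the polygon coordinates are made-up values, and lens distortion is ignored.

```python
import numpy as np

def backproject_vertex(X, K, R, t):
    """Project one 3D mask vertex X (world coordinates) into 2D image space.

    Implements v ~ K R (X - t): the vertex is expressed in the camera frame,
    mapped through the camera matrix, and normalized by depth.
    """
    x_cam = R @ (np.asarray(X, dtype=float) - t)   # world -> camera frame
    v = K @ x_cam                                  # homogeneous image coordinates
    return v[:2] / v[2]                            # normalize by depth

def backproject_polygon(vertices_3d, K, R, t):
    """Back-project a 3D mask polygon (N x 3 array) into one aerial image."""
    return np.array([backproject_vertex(X, K, R, t) for X in vertices_3d])

# Illustrative parameters only (not the case study values)
K = np.array([[2000.0, 0.0, 2304.0],
              [0.0, 2000.0, 1536.0],
              [0.0, 0.0, 1.0]])
R = np.diag([1.0, -1.0, -1.0])          # nadir-looking camera
t = np.array([0.0, 0.0, 90.0])          # camera roughly 90 m above the ground
polygon = np.array([[10.0, 5.0, 0.4], [12.0, 5.0, 0.4], [12.0, 7.0, 0.4]])
print(backproject_polygon(polygon, K, R, t))
```

In the actual pipeline, the resulting 2D polygons would then be rasterized to clip each aerial image, mirroring the mask applied to the orthoimage.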
Object Detection in Images

To detect objects in the images, a sliding window approach is used in conjunction with a cascaded boosted image classifier of the type presented by Viola and Jones [22]. The sliding window approach is a simple way of performing object detection for objects that do not fill the whole image frame. Conceptually, it consists of incrementally sliding a window across the image, classifying the content of the window at each step. The size of the window can be varied according to the range of expected object sizes, but in the present work the window size was kept constant since all images are taken from a similar distance to the ground. The method proposed by Viola and Jones is characterized by (i) the concept of integral images, a preprocessing step that accelerates feature computation; (ii) a learning algorithm (Adaboost) that gives weight to discriminative image features among a large pool of candidates; and (iii) a structured decision cascade that discards negative images early in the detection process. The implementation is extremely efficient at detection time despite the sliding window approach because at each window location, features are computed stage by stage. Since the vast majority of proposals are discarded after the first stages, only a fraction of all features need to be computed. This aspect is relevant when many images must be processed, as is the case with multiview detection with hundreds of images. Certain developments have been made since the original implementation by Viola and Jones [22], some of which were found useful for the present work: instead of the originally used features based on Haar basis functions [23], an extended feature set with rotated Haar-like features [24] was used to improve the detection of rotated objects. Also, Gentle Adaboost [25] was used to train the classifier, which is reportedly more numerically stable, more resistant to over-fitting, and less sensitive to outliers. The same classifier was used for both multiview and single-view approaches.
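A minimal sketch of the detection step with OpenCV's CascadeClassifier is given below. The cascade file path, window size, and scale and neighbour settings are illustrative assumptions rather than the study's exact configuration; detectMultiScale3 is used here as one way to also retrieve a per-detection confidence from the last cascade stage.

```python
import cv2

# A cascade previously trained with opencv_traincascade (path is illustrative)
cascade = cv2.CascadeClassifier("sewer_inlet_cascade.xml")

def detect_inlets(image_path, window=32):
    """Slide the cascaded classifier over one (clipped) aerial image.

    Returns raw detection boxes and the last-stage scores; grouping of the
    many neighbouring detections is deferred to the 3D clustering step.
    """
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    boxes, reject_levels, level_weights = cascade.detectMultiScale3(
        img,
        scaleFactor=1.05,          # small scale step: object size barely varies
        minNeighbors=0,            # keep individual detections, no local grouping
        minSize=(window, window),
        maxSize=(2 * window, 2 * window),
        outputRejectLevels=True,   # expose per-detection scores
    )
    return boxes, level_weights
```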
A feature of sliding window classifiers is that the window slides over the image with a step that is much smaller than the width of the window. Therefore, as the window passes over an object, multiple neighboring detections of the object are documented (Figure 3a-c). Often, these are aggregated directly after detection, but in the present study they are aggregated in step 4, after they have been projected into 3D space along with detections from the other images. OpenCV [26] and specifically the CascadeClassifier class was used as the framework for training and executing the classifier. This class allows the use of different feature sets and boosting methods. The main settings used for training the boosted classifier are listed in Table 1. The number of stages corresponds to the number of successive "strong" classifiers by which an image sample must pass in order to be classified as an object. If any one of the strong classifiers rejects the sample, it is immediately discarded and not evaluated by the following classifiers. This architecture makes it acceptable for each stage to have a moderate false alarm rate, but requires a high hit rate, since the overall performance is estimated as the performance of each stage to the power of the number of stages. Each strong classifier is composed of a number of weak classifiers, which in this study are decision trees of unit depth computed with Extended Haar-like features. The boosting algorithm serves to prioritize training samples from one stage to the next, to help the algorithm identify the most discriminating features for classification. Apart from the feature and boosting type, which were chosen for the reasons explained previously, the other settings were chosen by trial and error as an acceptable compromise between performance and training time.

Projection of Proposals into Three-Dimensional Space

Objects detected in individual UAV images are projected back into world coordinates by casting a ray from the camera projection center into 3D space, i.e.,

X(s) = t + s R⁻¹ K⁻¹ v, s > 0,

and intersecting the ray with a digital mesh model of the terrain surface. Because UAV images are captured with a high overlap, each sewer inlet can be detected in multiple images, which leads to clusters of points around the actual sewer inlet locations (Figure 4a). This is accentuated by the fact that in each image, sewer inlets are detected multiple times due to the sliding window. Intersections are computed with the Visualization Toolkit (VTK) Library [28], which internally implements OBBTree [29], a data structure for efficiently detecting interference between geometries.

Clustering Proposals

We use the Density-Based Spatial Clustering for Applications with Noise (DBSCAN) algorithm [30] for identifying clusters (Figure 4b). The algorithm identifies clusters based on a minimum density threshold set by the user, where the density threshold is roughly defined by a minimum number of points within a given area. DBSCAN is well-suited to the sewer inlet detection problem because it scales well with large numbers of points and clusters. Additionally, the points are clustered in 3D space, which is useful if elevation needs to be accounted for. However, the algorithm is sensitive to the density threshold set by the user. In this work, the threshold was adjusted with a simple grid search, based on the typical sewer inlet area of around 0.25 m² and the expected image overlap. Different clustering parameters were used for single-view detection. The clustering is performed with the scikit-learn Python package [31].
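The clustering step can be reproduced with scikit-learn's DBSCAN as sketched below. The eps and min_samples values are placeholders within the range explored by the grid search described above, and the helper names are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_detections(points_3d, eps=0.20, min_samples=5):
    """Group projected detections (N x 3 world coordinates) into clusters.

    eps is the neighbourhood radius in metres (on the order of the ~0.25 m^2
    sewer inlet footprint); min_samples sets the minimum core density.
    DBSCAN labels noise points with -1, so they form no cluster.
    """
    points_3d = np.asarray(points_3d)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_3d)
    clusters = {}
    for lbl in set(labels) - {-1}:
        members = points_3d[labels == lbl]
        clusters[lbl] = {"center": members.mean(axis=0),
                         "detection_count": len(members)}
    return labels, clusters
```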
Removal of False Positives Based on Cluster Properties

The clusters of individual detections are characterized and classified in order to remove false positives. For each cluster, the following properties are computed:

• Detection count: the number of detections that are part of the cluster.
• Image count: the number of images contributing detections to the cluster.
• Maximum, average, and summed detection scores: each of the detections comes with a score from the last stage of the Viola-Jones classifier. For the ensemble of detections belonging to a cluster, the maximum score is found, and the arithmetic average and the sum of the scores are computed.
• Surface area: the surface area of the bounding box containing the detections is computed in map units. This property informs on how spread out the detections are.
• Density: the density is computed as the number of detections divided by the surface area.
• Histogram of detections per image: a vector x, with each element x_i containing the number of UAV images contributing i detections to the cluster, with i varying from 1 to 49. The vector is generally quite sparse.
• Average and maximum detections per image: for images contributing detections to the cluster, the average and maximum number of detections are computed.

These properties are used as features to predict whether the object candidate actually corresponds to a sewer inlet. This is a typical two-class classification problem for which we tested three established classification algorithms: Linear SVM, Logistic Regression (LR), and Artificial Neural Network (ANN). With SVM, a hyperplane is fitted in between the two classes of data such that the margin between the data and the hyperplane is maximized. SVMs are not well suited to nonseparable classes of data and by default do not provide confidence scores for predictions. However, it is possible to estimate confidence scores using Platt scaling [32], with n-fold cross-validation to avoid overfitting the scaling parameters. With LR, a logistic curve is fitted to the data by maximizing the likelihood of observing the data. In contrast to SVM, LR does not assume that the classes are separable, and the predictions provided by LR have a direct probabilistic interpretation. In this work, the ANN used is a multilayer perceptron (MLP) with a single hidden layer of 100 neurons. The MLP is trained by adjusting the connection weights between neurons to minimize prediction error. ANNs are valuable when the boundary between classes is non-linear. As with LR, the output of an ANN is given in terms of confidence scores. Thanks to these classification methods, each cluster can be assigned a confidence score, as illustrated in Figure 4c. For the details of these methods, we refer to standard textbooks such as [33,34]. Since multiview clusters have fundamentally different properties due to the additional information they contain, the cluster classifier must be trained for single-view and multiview clusters separately. In all cases, classification was performed with the scikit-learn Python package [31].
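The sketch below illustrates how the cluster properties listed above can be assembled into a feature vector and fed to the MLP cluster classifier. It is a simplified reconstruction: the tuple layout of the detections, the exact feature ordering, and the MLP hyperparameters other than the single 100-neuron hidden layer are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def cluster_feature_vector(detections):
    """Build the property vector for one cluster.

    `detections` is a list of (image_id, x, y, score) tuples for the cluster.
    The histogram of detections per image is capped at 49 bins, as above.
    """
    scores = np.array([d[3] for d in detections])
    xy = np.array([(d[1], d[2]) for d in detections])
    image_ids = [d[0] for d in detections]
    per_image = np.bincount(np.unique(image_ids, return_inverse=True)[1])
    hist = np.bincount(np.clip(per_image, 1, 49), minlength=50)[1:]
    width, height = xy.max(axis=0) - xy.min(axis=0)
    area = max(width * height, 1e-6)            # bounding-box area in map units
    base = [len(detections), len(per_image),    # detection count, image count
            scores.max(), scores.mean(), scores.sum(),
            area, len(detections) / area,       # surface area, density
            per_image.mean(), per_image.max()]  # avg / max detections per image
    return np.concatenate([base, hist])

# MLP with a single hidden layer of 100 neurons, as described above
clf = MLPClassifier(hidden_layer_sizes=(100,), max_iter=1000)
# clf.fit(X_train, y_train)
# confidence = clf.predict_proba(X_test)[:, 1]   # per-cluster confidence score
```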
Assessing Detection Performance

Both the multiview and the single-view detection methods that were implemented provide point detections and not bounding boxes, as is otherwise often the case for object detection. It was therefore not possible to use, for example, the intersection-over-union ratio to evaluate whether an object was matched. Instead, since the representative object size is assumed to be around 50 cm, an object was considered matched to a cluster if the centers of the two are situated within 50 cm of each other.

The cluster classification step described in the previous section assigns to each cluster a confidence score based on the cluster's properties. By increasing the confidence threshold for accepting a cluster as an object, the classification is made more conservative (fewer false positives but also fewer true positives). To measure this tradeoff, we use the well-known precision and recall metrics:

Precision = true positives / (true positives + false positives)
Recall = true positives / (true positives + false negatives)

Both precision and recall can take values between 0 and 1, where a low precision means many false positives and a low recall means many objects were missed. Precision and recall are often computed for a range of confidence thresholds between 0 and 1 and plotted against each other in a precision-recall curve. The shape of the precision-recall curve can be summarized by the average precision (AP), which corresponds to the area below the curve:

AP = Σ_n (R_n − R_{n−1}) P_n,

where n designates the n-th probability threshold for which precision P_n and recall R_n are computed. Perfect classification yields an AP of 1, and the chance level is an AP of 0.5, given a balanced number of positive and negative samples. Object detection problems, however, are not balanced since there are practically infinite negative samples; therefore, the actual chance level is much closer to zero.

To assess whether the performance of the multiview localization is statistically different from that of the single-view localization, we perform a paired difference t-test on the AP of the cross-validation folds. Student's t-test is a statistical test commonly used to verify whether two sets of data are significantly different. It assumes that both sets of data follow a normal distribution of unknown standard deviations. The pairing eliminates confounding effects due to differences between folds, thereby increasing the statistical power of the test. First, we must compute the difference ΔAP_i between the multiview and single-view AP of each fold:

ΔAP_i = AP_multiview,i − AP_single-view,i,

where i stands for the fold index, taking values between 1 and the number of folds n. Under these conditions, the test value t is then computed as:

t = (X̄_D − μ_0) / (s_D / √n),

where X̄_D and s_D are the mean and standard deviation of the differences ΔAP_i, n is the number of folds, and μ_0 is equal to zero under the null hypothesis that the multi- and single-view deliver the same performance.
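The evaluation metrics above can be computed with a few lines of NumPy and SciPy, as sketched below. The per-fold AP numbers are placeholders, the AP is obtained here by trapezoidal integration of the precision-recall curve (an approximation of the sum above), and the variable names are illustrative.

```python
import numpy as np
from scipy.stats import ttest_rel

def precision_recall_ap(scores, is_true_positive, n_objects,
                        thresholds=np.linspace(0.0, 1.0, 101)):
    """Precision, recall, and average precision for point detections.

    `scores` are cluster confidences, `is_true_positive` marks clusters whose
    centre lies within 50 cm of a labelled inlet, and `n_objects` is the total
    number of labelled inlets (so unmatched objects count as false negatives).
    """
    scores = np.asarray(scores)
    tp_flags = np.asarray(is_true_positive, dtype=bool)
    precision, recall = [], []
    for th in thresholds:
        accepted = scores >= th
        tp = np.sum(accepted & tp_flags)
        fp = np.sum(accepted & ~tp_flags)
        precision.append(tp / max(tp + fp, 1))
        recall.append(tp / n_objects)
    recall, precision = np.array(recall), np.array(precision)
    order = np.argsort(recall)
    ap = np.trapz(precision[order], recall[order])  # area under the PR curve
    return precision, recall, ap

# Paired t-test on per-fold AP values (numbers below are placeholders)
ap_multi = [0.71, 0.74, 0.72, 0.75, 0.73]
ap_single = [0.63, 0.66, 0.64, 0.67, 0.66]
t_stat, p_value = ttest_rel(ap_multi, ap_single)
```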
Analyzing Sensitivity to Clustering Parameters

The clustering algorithm used in this work, DBSCAN [30], is known to be sensitive to the two parameters that define minimum cluster core density. To elucidate the influence of these parameters, we conducted a sensitivity analysis by varying the two parameters: (i) ε is the maximum cluster core size and should be chosen according to object size and localization accuracy, and (ii) N is the minimum number of points that should be found within the cluster core. N depends on how the sliding window iterates over the images and on the number of UAV images in which each object is visible. These two parameters ε and N were varied on a grid between values of 0.15-0.25 m and 1-15 samples, respectively. These ranges were selected based on a preliminary sensitivity analysis that was broader and coarser than the one presented in the results.

For each combination of parameter values, both the single-view and multiview detections were clustered and classified using the three classifiers (SVM, LR, and ANN). The quality of the resulting clusters is measured by means of the average precision (AP) and differentiated by the cluster classification algorithm used.

Analyzing Hard Negatives

Hard negatives, which are detection errors committed with high confidence by the classifier, offer insight into the shortcomings of the method and potential paths for improvement. However, such an analysis remains fundamentally qualitative since the actual underlying causes of detection errors can only be presumed based on visual inspection of the images. In the present analysis, the 15 highest-scoring false positives and 15 lowest-scoring false negatives are extracted and their causes of misclassification are hypothesized. The causes are then ranked according to frequency of occurrence.
Data Collection and Preprocessing The UAV used to collect data in this study is a fully autonomous electric fixed-wing UAV, the eBee (1st generation) produced by senseFly SA (Cheseaux-sur-Lausanne, Switzerland).With a foam body, a wingspan just under one meter, and a rear-facing propeller, it is safe and well-suited to urban or suburban areas.The UAV carries a 16 MP Canon IXUS 125 HS compact digital camera that is controlled by the UAV autopilot.At typical cruise heights of 100 to 300 m, GSD of the images is between 3 and 10 cm/pixel.The UAV was flown over a residential area near Zurich, Switzerland, in conformity to the Swiss regulations for special category aircraft [35].These regulations allow autonomous flight without a special license or permit under the following conditions: manual, line-of-sight flight override must be possible at all times; flight must be specified distances away from certain protected nature areas, military facilities, gatherings of people, airports, landing strips, and heliports; for airplanes heavier than 500 grams, a liability insurance of at least 1 million Swiss francs is needed; privacy and data protection laws must be respected.In total, 252 images were taken at a flight height of 90 m, giving a GSD of 3-3.5 cm/pixel (due to variations in perspective and topography).In the study area, 228 sewer inlets were identified manually in the UAV images (Table 2).The UAV images were processed with Pix4Dmapper [21] to estimate internal and external camera parameters and generate an orthoimage (Figure 5) as well as a digital surface model (DSM) for the case study.The orthoimage is created by making a mosaic from projections of the UAV images, during which resolution is slightly reduced.While minimal loss of image quality is experienced for flat horizontal surfaces (like roads), objects with sharp or complex edges (like buildings) can suffer from distortions and artefacts.These issues are generally of no concern for sewer inlet detection, unless such an object is situated over or right next to a sewer inlet.Ten ground control points were used to georeference the project (placement shown in Figure 5, registration error documented in Table A1).The processing time for estimating camera parameters was 7 min, and the subsequent processing time for generating the orthoimage and DSM was 3 h (using an Intel i7-4790K CPU @ 4 GHz, with 16 GB RAM and a 12 GB NVIDIA GeForce GTX TITAN X graphics card). 
Training and Testing Data

The same case study area was used both for training and testing the detection methods. Because of the limited number of objects contained within the area, we cross-validate the results with five folds of training (80%) and testing (20%) data. In each fold, the objects labeled for training were used to extract positive training images from the UAV images (resulting in multiple images/views per object). Negative training images were extracted from locations taken from the highest-scoring hard negatives from previous detection runs. The positive and negative sample images were then transformed to greyscale and augmented with random rotations and reflections (positive samples were augmented by a factor of 3 and negative samples by a factor of 2). The final images were of 32 × 32 pixel resolution, some of which are shown in Figure 6. To train the Viola-Jones classifier, 2661 positive and 3936 negative samples were used. For training the cluster classifier, the clusters were first divided into three categories: (i) clusters that match to objects labeled for training (training positives); (ii) clusters that match to objects labeled for testing (testing positives); and (iii) clusters that do not match to an object (false positives). Of the false positives, 80% were randomly selected to be used for classifier training, along with the training positives. The remaining 20% were combined with the testing positives to form a test set.

Multiview Significantly Outperforms Single-View Detection

In Figure 7, the precision-recall curves of the best multiview detector (red) are compared with those of the best single-view detector (black). Both detectors use ANN for cluster classification, but are optimal with different clustering parameters, as illustrated in Figure 8. The comparison is made for each fold of test data (Figure 7a-e), as well as for all test data combined (Figure 7f). The results show that in terms of average precision, the multiview approach improves performance consistently across all folds of data. Using the paired t-test described in Section 2.7, we obtain a t score of 7.2 standard errors, corresponding to a p-value of 0.002 for a two-tailed test, which is significant at p < 0.01. Overall, average precision for the combined test data is increased from 0.652 to 0.730 (Figure 7f), which is a relative increase of about 12%.

Sensitivity to Clustering Algorithm Parameters

The results of the clustering parameter sensitivity analysis (Figure 8) show that overall, multiview detection performs better than single-view detection regardless of the cluster classification algorithm. Additionally, multiview detection is less sensitive to clustering parameters than single-view detection, as can be seen in the broad color gradient of the single-view results. While SVM, LR, and ANN all perform comparably, ANN was able to produce detections with the highest average precision for both approaches.
For single-view detection (Figure 8), optimal clustering parameter values are situated around N = 2 and ε = 16.The low value of N is expected, since there is only one image in which objects can be detected.For multiview detection, performance is best for N values above N = 3, after which there is no clear dependency on clustering parameters except for the ANN, for which performance begins to degrade at N = 11. Analysis of Nature of Hard Negatives In Figure 9, the 15 highest-scoring locations falsely classified as sewer inlets are shown.In this sample, the main reasons for false detection are apparently (i) geometric patterns on the ground with high contrast (7 cases); (ii) manholes or round sewer inlets (4 cases).For the remaining four cases, no clear reason can be identified. Comparison to Previous Work As stated earlier, there are no studies to our knowledge investigating sewer inlet identification in aerial images, let alone in UAV image clouds.If compared to recent work on manhole cover detection in aerial imagery, which is a similar problem, our results are 15-20 percentage points better in terms of precision.At 40% recall, Pasquet et al. [14] report 80% precision (we achieve 95% precision) and at 50% recall, Commandre et al. [15] report 69% precision (we achieve 90% precision).Besides the use of a multiview approach, there are two other aspects of our study that could give an edge, namely the slightly higher image resolution used and the preliminary image-clipping step.The differences between the two detection subjects must also be stated: sewer inlets are smaller than manholes, but they also have visual patterns with higher contrast than manholes. Advantages of Multiview Detection While the performance increase thanks to multiview detection is significant, it is not exceptionally large as compared to single-view detection (Figure 7).One could have expected the multiview approach to greatly improve detection performance, especially for difficult-to-detect objects (e.g., partially hidden by nearby obstacles) since the multiview approach provides information from many angles of view as compared to the single-view approach.However, this phenomenon, which would translate to higher performance on the right side of the precision-recall curve, is not marked.One possible explanation is that since the images were taken vertically (configured to 0 • from nadir), the difference in perspective was not sufficient to make a strong difference for detection.Another explanation is that there were simply not many objects hidden from view in such a way (e.g., vegetation is not a major issue because of the winter season of the flights). Sensitivity to Clustering Parameters The sensitivity analysis (Figure 8) reveals another advantage of multiview detection, which is that it is less sensitive to clustering parameters than single-view detection.Thus, when applying the method to new locations, there is a greater chance that the clustering parameters identified in this study function well, despite inevitable differences in the data. For both approaches, there is some noise in the performance level, particularly for the ANN cluster classifiers.This noise can be explained by the stochastic and iterative nature of classifier optimization algorithms, which due to high class overlap sometimes fail to reach the global optimum.In practice, the noise may be a disadvantage if one desires to identify optimal parameter values. 
Role of Digital Surface Model Accuracy While not investigated in this work, the accuracy of the DSM used to project detections into 3D space can affect detection results.Indeed, if the surface model is inaccurate in the proximity of a sewer inlet, then the projected detections (as in Figure 3) will be erroneous and cause the resulting cluster of points (as in Figure 4) to be more dispersed and possibly shifted.Due to the fact that road surfaces are relatively flat and can be well-described with UAV photogrammetry [18], DSM accuracy is not expected to be a major factor of error in the present study.Indeed, in the analysis of hard negatives, only one of 15 sewer inlets appeared to suffer from localization issues (Figure 10f). Improvement Potential and Directions for Future Work The results presented in this work could possibly be improved upon if certain changes were made to the data collection and methodology.The analysis of hard negatives revealed that some of the main reasons for error, namely high contrast patterns on the ground and unusual sewer inlet shapes, could be solved by increasing the amount and diversity of training data.Other causes of error, namely differentiating manholes from round sewer inlets and low image quality, could be solved by increasing image quality (e.g., by flying lower or using a better camera).The problem of obstruction by vegetation could be mitigated by increasing the tilt angle at which images are taken.In terms of the method, improvements could be made by applying deep convolutional neural networks (DCNN) instead of Viola-Jones.DCNNs, such as Faster R-CNN [36] (a combined region proposal and convolutional neural network), are currently state-of-the-art for most object detection challenges, although they are computationally expensive and require much training data.It is also questionable whether current DCNNs are well suited to small objects like sewer inlets, which have little internal structure [37].Another change that could be made to the methodology can be illustrated by the failure cases in Figure 10k-o.In these examples, which are probably at the edges of the study area, only one to four images are able to see the sewer inlets.However, the current cluster properties do not account for the visibility of a sewer inlet-thereby penalizing objects in areas where images are less frequent or where obstructions block the view.Finally, sewer inlets, like most public infrastructure objects, have typical distributions (e.g., one manhole every 20 m and rarely two manholes next to each other).While these patterns are regional, they are often known a priori and could be taken into account in the cluster classification process. 
Inherent Limitations of Aerial Sewer Inlet Mapping Despite the many possibilities for improvement, there are two main limitations that are intrinsic to the approach of automated aerial sewer inlet localization.First, there is an unavoidable risk that a portion of the objects are not visible in aerial images because they are momentarily covered by vehicles or debris.This risk can be partially mitigated by performing multiple flights, at different times of day and different seasons in the year.Second, there is a large variety in the form and situation of sewer inlets, with some being integrated into the curbstone.To accommodate for this variety, one must not only increase the variety within the training data, but also adapt how images are captured, e.g., by further increasing camera tilt.Therefore, depending on the completeness required of the data and the relevance of the aforementioned limitations, it may be necessary to adjust the detection method or to manually verify the detection results. Practical Considerations for Urban Water Management As stated in the introduction, we understand the scarcity of urban drainage infrastructure to be widespread.Even when urban drainage asset managers hold a catalog of assets, it is common that this catalog is incomplete or outdated when it comes to sewer inlets.Based on our experience with establishing the case study ground truth for the present study, for which no reliable ground truth was originally available, having a pool of proposals greatly improves the speed and accuracy of manual object localization.In this context, the primary application of the UAV-sourced data would be to suggest likely sewer inlet locations, and therefore the classifier confidence threshold should be selected to favor data completeness (i.e., recall) over precision.The proposals can then be manually validated to update the inventory.In practice, this can be done in the form of a dedicated field survey or integrated into the routine tasks of municipal workers (e.g., street sweeping or sewer inlet cleaning).In cases where street-level imagery is available, objects can also be validated remotely. Thanks to the flexibility of UAV-based data collection, such an inventory update would probably benefit from multiple data collection campaigns, e.g., in winter (low vegetation cover), under different lighting conditions, and to randomize the visibility of obstructions such as parked cars).In the context of operational urban water management, regular UAV flights would also be of value for detecting blockages and scheduling maintenance.Based on the results of [38], such an application would reduce the risk of urban pluvial flooding. Reusability and Generality of the Multiview Methodology Although the methodology described in this work is presented in the context of urban drainage infrastructure mapping, using sewer inlets as a case study and a UAV as a platform for image capture, it is in fact of general applicability.Still within the context of infrastructure mapping, one could apply the methodology to manhole covers, rainwater tanks, or power transformers.In the realm of environmental research, it is applicable to the detection of plant species or animal nests.Even in the context of medical science and dermatology, with only slight adaptation it could be turned into a tool for identifying birth marks and moles. 
Conclusions

This work demonstrates that the use of a multiview framework significantly improves the detection performance for sewer inlets from UAV imagery. In a cross-validated case study with 228 sewer inlets, we show that the use of additional image information increases average precision from 0.652 to 0.730 as compared to an equivalently trained single-view detector. The gain is attributed not only to the additional perspectives made available, but also to the ability to exploit the full resolution of the raw UAV images. The multiview approach is further able to identify 60% of the sewer inlets with a precision of 80% and localize them in three dimensions. Both precision and recall are substantially better than the latest reported results for the comparable problem of manhole cover detection. For urban water practitioners seeking to create or update their inventory, the value added by multiview detection is more than the incremental improvement that is usually gained by tuning the image classification method. Thus, this sewer inlet detection solution can be used to address the frequently mentioned scarcity of urban drainage infrastructure data. The methodology, for which the code has been released, can easily be adapted for reuse within other infrastructure or environmental mapping projects.

Supplementary Materials: The data used in this paper are available online at https://zenodo.org/record/1197592, and the code is available online at https://github.com/Eawag-SWW/raycast.

Author Contributions: M.M.d.V. collected the data, processed the data, and drafted the manuscript. K.S. provided essential support for designing and implementing the object detection methods. J.P.L. and J.R. were principal investigators of the project, providing valuable support in the orientation and coordination of project execution, and paper drafting. All authors were involved in editing and reviewing the manuscript.

Figure 1. Single-view and multiview detection approaches. The multiview approach uses all available image information for detection and performs clustering in three-dimensional (3D) space.

Figure 2. (a) A small part of an unclipped orthoimage. The object in the upper left is part of a car and the object in the lower middle is a sewer inlet; (b) the clipping mask overlaid on the orthoimage; (c) the clipped orthoimage.

Figure 3. (a-c) Results of the moving window classification for 3 of 10 unmanned aerial vehicle (UAV) images in which the sewer inlet was detected. The points represent individual detections as the moving window moved across the original image, with the orientation of the detections reflecting the images' orientation. The outline of the sliding window (square with dotted border) is to scale.

Figure 4. (a) The combined detections from all ten images in which the sewer inlet was detected, forming an obvious cluster around the object; (b) Cluster centers identified from the combined detections; (c) Cluster centers with associated confidence scores, as computed by the cluster classifier.

Figure 5. Study area near Zurich, Switzerland used as a case study for this work. The orthoimage shown was generated from UAV images. Taken during winter, the image reflects a situation with little vegetation to obstruct sewer inlet visibility.
Figure 6. Examples of positive (left) and negative (right) training images used to train the Viola-Jones classifier. Despite the small resolution (32 × 32 pixels), sewer inlets are easily identifiable to the trained human eye, given adequate image quality.

Figure 7. Precision-recall curves for the best-performing multiview and single-view detectors. (a-e) Precision-recall curves for individual folds of test data; (f) Precision-recall curves for all folds combined. In terms of average precision, the multiview approach is consistently better than the single-view approach. When the folds are combined, the multiview approach outperforms single-view for the whole reach of the curve.

Figure 8. Average precision of detection for different cluster classifiers, as a function of clustering parameters. Multiview outperforms single-view for any given combination of clustering parameters. The optimal clustering parameter configurations are highlighted with a black outline. Grey areas indicate where clustering parameters were too exclusive for classification to succeed.

Figure 9. Examples of locations falsely classified as sewer inlets. Each subplot shows all views available for a given location. The apparent main causes for false classification are: (a,b,d,f,h,k,o) patterns on the ground with high contrast and/or strong geometric patterns; (e,i,l,m) manholes or possibly round sewer inlets. (c,g,j,n) were falsely classified as sewer inlets for unknown reasons.

Figure 10. Examples of sewer inlets missed by the multiview classifier. Each subplot shows all views available for a given sewer inlet. The apparent main causes for nondetection are: (k-o) too few images in which the sewer inlet is visible; (a,c,d,e,g) insufficient image quality or obstruction by vegetation; (f,o) the object surroundings are complex or unusual; (b,e,h,i,j) they are visually different from typical inlets (compare to example training data in Figure 6); (f) the image poses or the DSM are imprecisely determined.

Table 1. Settings used for the storm drain classifier used in this study. Explanations for the settings can be found in the OpenCV user manual [26].

Table 2. Characteristics of the study area, UAV flight, and images.
8,969.8
2018-05-04T00:00:00.000
[ "Environmental Science", "Computer Science" ]
Integrated photonic metasystem for image classifications at telecommunication wavelength

Miniaturized image classifiers have the potential to revolutionize applications in optical communication, autonomous vehicles, and healthcare. With subwavelength-structure-enabled directional diffraction and dispersion engineering, the light propagation through multi-layer metasurfaces achieves wavelength-selective image recognition on a silicon photonic platform at telecommunication wavelengths. The metasystems implement high-throughput vector-by-matrix multiplications, enabled by nearly 10³ nanoscale phase shifters as weight elements within 0.135 mm² footprints. The diffraction-manifested computing capability incorporates the fabrication- and measurement-related phase fluctuations, and thus the pre-trained metasystem can handle uncertainties in inputs without post-tuning. Here we demonstrate three functional metasystems: a 15-pixel spatial pattern classifier that reaches near 90% accuracy with femtosecond inputs, a multi-channel wavelength demultiplexer, and a hyperspectral image classifier. The diffractive metasystem provides an alternative machine learning architecture for photonic integrated circuits, with densely integrated phase shifters, spatially multiplexed throughput, and data processing capabilities.

Metasurface-based multi-layer systems, named metasystems, expand the functionality of metasurfaces in the out-of-plane dimension [38][39][40]. Lithographically assisted alignment and bonding between metasurface layers are required for providing sufficient precision and robustness in functional metasystems [39,40]. The integrated photonics platform provides such alignment with one-step lithographically defined multiple metasurface layers. Compared to the waveguide-based integrated photonic processors [41][42][43], the metasystem architecture offers higher-throughput vector-by-matrix multiplication, which can be further expanded by wavelength-division multiplexing (Supplementary Note 1) [44,45]. The metamaterial-manifested weight element density, combined with diffraction-strengthened inter-layer connectivity, enables the passive system to accomplish machine learning tasks of spatial pattern classification (Fig. 1a). The diffraction-manifested data processing capacity allows the training process to incorporate the random phase offsets caused by nanofabrication and measurement. Unlike the other integrated photonic processors [41][42][43], the passive photonic metasystems are fully functional without active layers for phase correction. The passive integrated metasystem can grasp the key information with a femtosecond single-shot exposure, and thus save the time and energy consumption of subsequent electronic processing for on-the-fly data compression. As the depth of a machine learning system outweighs the number of elements per layer, here we demonstrate high-accuracy image classifications at telecommunication wavelengths in multi-layer one-dimensional metasurface systems. Arrays of rectangular slots are defined in the silicon layer. The slot lengths in those phase-only transmissive arrays are pre-trained by deep diffractive neural networks. Beyond conventional classification functions, the metasystems also demonstrate unique functions of wavelength demultiplexing and multi-wavelength pattern classifications, with potential applications in spatial-division-multiplexing-based optical interconnects and machine vision [34,46].

Results

Design of the metasystem.
The metasystems are defined on a silicon-on-insulator (SOI) substrate with single-step lithography and dry etching (Methods). As individual cells in the metasurface, the geometries of the rectangular slots are learnable parameters. Each pair of slots represents a weight element and connects to the following layers through diffraction and interference of the in-plane waves (Fig. 1a, b). Both the amplitude modulation and the phase shift of the transmitted wave can be programmed by adjusting the width and length of the subwavelength slots, respectively [47,48] (Supplementary Fig. S2). With a fixed slot width of 100 nm and a lattice constant of 500 nm, the phase shift of the transmission coefficient can be continuously tuned from 0 to 2π with the slot length, while the amplitude stays above 95% (Fig. 1c). Figure 1d shows the angle-dependent complex transmission coefficient. The amplitude of the transmission reduces to half as the incident angle increases from 0° to 28°, with phase distortion less than 0.1 (in units of 2π rad). The results in Fig. 1d are insensitive to the slot length (Supplementary Fig. S3). In contrast to our prior demonstration of gradient-metasurface-based mathematical operators, large phase contrasts between neighboring cells are required in the metasurfaces for machine learning tasks. As the transmission coefficient of each metasurface design is numerically calculated from a periodic array, single-slot implementation of each phase shifter in the gradient metasurface design results in unexpected discrepancies, and thus two subwavelength slots are employed here for representing one phase shifter in the designed network [48] (inset of Fig. 1d).

The diffractive metasystem is first designed in Python and then verified by finite-difference time-domain (FDTD) simulations and experiments. During the training stage, the phase shifts in each metasurface layer are iteratively updated by following the gradient descent algorithm (Supplementary Note 2) [49]. In the forward propagation step, we calculate outputs of the metasystem with input data from the training dataset. The difference to the target outputs (the loss) is then derived for the next step. In the backpropagation step, we calculate the gradient of the phase for every cell and then update the phase value to decrease the loss. Random phase offsets with a uniform distribution within the interval [0, 0.5π) are introduced to each cell during the training stage, to improve the system's robustness against nanofabrication variations and free-space phase fluctuations in measurement (Supplementary Fig. S1a). The photon propagation from layer l with k neurons to the next layer with n neurons resembles the vector-by-matrix multiplication

m_{l+1}(q) = Σ_{p=1}^{k} W(p, q) t_l(p) m_l(p),

where t_l(p) = a · exp[jϕ_l(p)] represents the transmission coefficient of the p-th neuron in the l-th layer. The amplitude a is near 1 for the slot width of 100 nm. The phase shift ϕ_l(p) is proportional to the slot length. m_l(p) and m_{l+1}(q) are the amplitudes of the input photons towards the p-th neuron in the l-th layer and the q-th neuron in the (l + 1)-th layer, respectively. The inter-layer connectivity W is a k × n transfer matrix derived from the Rayleigh-Sommerfeld diffraction equation, representing the wave propagation in the SOI slab waveguide (Fig. 1b). The (p, q)-th element of W [50] depends on r, the distance between the p-th neuron in layer l and the q-th neuron in layer l + 1, and on λ, the effective wavelength in the planar waveguide.
Considering the angle-dependent transmission amplitude (Fig. 1d), an additional factor of U(Δy) ∝ exp[−(πΔyσ/(λa))²] is superimposed onto the outputs of each layer, where Δy is the relative distance along the y-direction and a is the spacing along the x-direction. σ is 0.45 μm for the first layer and 0.08 μm for the subsequent layers, obtained by fitting the model to the numerical simulation results.

Metasystem for spatial pattern classification.

As an example, we implement an integrated two-layer metasystem for letter classification. Each metasurface layer contains 450 phase shifters. The inter-layer distances are selected to be 100 μm, balancing the insertion loss and classification accuracy (Discussion section). The metasystems and grating couplers are defined on the SOI substrate with a single-step lithography and etching process (Methods). The setup for characterizing the metasystem is illustrated in Fig. 2a. The input patterns are reshaped from a two-dimensional (2D) matrix to a one-dimensional (1D) vector and then projected onto the 1D grating coupler array through a digital micromirror device (DMD). The input patterns are binary letter images with 15 pixels (bottom insets in Fig. 2a). The outputs are collected by a single-mode fiber through a grating coupler and delivered to a broadband infrared (IR) photodiode. A digital IR camera monitors the alignment between the reflected patterns from the DMD and the grating coupler array. The optical image (left in Fig. 2a) shows the perspective view of one device under test (DUT). A single-mode fiber picks up the signal from the output grating couplers on the DUT (Fig. 2b). The scanning electron microscope (SEM) images show the detailed nanostructures of the grating coupler array (Fig. 2c) and the pre-trained metasurface (Fig. 2d) on the DUT.

The testing dataset consists of binary letter images with amplitude flipping in random pixels (Supplementary Fig. S1b). The two-layer metasystem is pre-trained with 10,000 such matrices. Numerical testing with another 1000 datasets predicts 98% accuracy in letter classification. Figure 3a shows an example optical field intensity distribution of the optical diffractive network. Three waveguides are placed 100 µm apart on the output plane, representing three channels of classification results. Channels 1, 2, and 3 correspond to the input letter patterns 'X', 'Y', and 'Z', respectively. With an input image of the letter 'X', the light intensity is significantly higher near the spatial position of channel 1 (Ch1) on the output plane. The detailed light intensity distribution along the output plane is plotted as gray lines in Fig. 3b. The experimentally measured data (squares with error bars) are consistent with FDTD simulations (Fig. 3b). The blue, red, and yellow squares are the light intensities from the grating couplers connected to Ch1, 2, and 3, respectively. With a 1550 nm continuous-wave (CW) input, the numerically simulated (Fig. 3c) and measured (Fig. 3d) confusion matrices show classification accuracies of 96% and 92%, respectively. The metasystem's response is consistent for CW inputs across the C and L bands (Supplementary Note 3). The broadband operation is critical for ensuring high classification accuracy for single-shot ultrafast pulsed inputs. Under 90-femtosecond pulsed light (centered at 1551.6 nm with a bandwidth of 30 nm), the measured confusion matrix shows 89% classification accuracy in this metasystem (Fig. 3e). Numerical simulation shows the insertion loss in the metasystem classifier is 9.3 dB.
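The forward model described above can be prototyped in a few lines of NumPy, as sketched below. This is a simplified illustration rather than the authors' design code: the propagation kernel uses a generic spherical-wave term in place of the exact Rayleigh-Sommerfeld prefactor, the effective index of the SOI slab is an assumed value, and the random phases stand in for trained slot lengths.

```python
import numpy as np

def transfer_matrix(y_cells, distance, wavelength, sigma, spacing):
    """Inter-layer connectivity W for in-plane propagation in the slab.

    Couples every source cell p to every destination cell q through a
    spherical-wave-like kernel exp(j*2*pi*r/lambda)/sqrt(r), multiplied by
    the Gaussian angular-response factor U(dy) described above.
    """
    dy = y_cells[:, None] - y_cells[None, :]
    r = np.sqrt(distance ** 2 + dy ** 2)
    kernel = np.exp(1j * 2 * np.pi * r / wavelength) / np.sqrt(r)
    angular = np.exp(-((np.pi * dy * sigma) / (wavelength * spacing)) ** 2)
    return kernel * angular

def forward(m_in, phase_layers, y_cells, distance, wavelength,
            spacing=0.5e-6, sigma=0.08e-6):
    """Propagate an input field through phase-only metasurface layers."""
    m = np.asarray(m_in, dtype=complex)
    W = transfer_matrix(y_cells, distance, wavelength, sigma, spacing)
    for phases in phase_layers:
        t = np.exp(1j * phases)      # amplitude ~1, phase set by slot length
        m = W @ (t * m)              # vector-by-matrix multiplication per layer
    return np.abs(m) ** 2            # output intensity distribution

# Toy two-layer system: 450 cells, 100 um spacing, assumed n_eff ~ 2.8 at 1550 nm
n_cells = 450
y = np.arange(n_cells) * 0.5e-6
rng = np.random.default_rng(0)
phases = [rng.uniform(0, 2 * np.pi, n_cells) for _ in range(2)]
pattern = np.zeros(n_cells)
pattern[200:215] = 1.0               # toy 15-pixel input
out = forward(pattern, phases, y, distance=100e-6, wavelength=1.55e-6 / 2.8)
```

In the training stage described above, the phases would be treated as learnable parameters and updated by gradient descent on a classification loss; the σ value used here corresponds to the later layers, with 0.45 μm applying to the first layer.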
The dispersion engineering of the metasystem.

The dispersion of the metasurface system can be tailored for expanding device applications to machine vision and hyperspectral imaging. To show the spectral engineering capability, we implement a three-layer metasystem that can effectively separate input signals centered at 1490, 1530, and 1570 nm (Fig. 4). The distances between the input plane, metasurfaces, and output plane are fixed at 100 µm. Three parallel output waveguides are spaced 30 µm apart along the x-direction. Under CW tunable laser excitation at 1490 nm, light converges at the designated x-position on the output plane, where the channel 1 waveguide is located (Fig. 4a). The numerically predicted spectra along the output plane (Fig. 4b and dashed curves in Fig. 4c) align with experiments collected from the three output channels (solid curves in Fig. 4c). The blue, red, and orange curves represent the outputs for channels 1, 2, and 3, respectively. The measured insertion losses for such a three-layer system are 13.1 dB, 16.8 dB, and 18.9 dB for the wavelengths of 1490 nm, 1530 nm, and 1570 nm, respectively. The spectral resolution of such a metasystem is limited by the number of output ports. A spectral resolution of 7 nm can be achieved with 11 output ports.

The complicated diffraction and interference allow a one-to-one correspondence between the spatial distribution of the light and the laser wavelength [51]. Combining both features, we design and experimentally demonstrate a two-wavelength pattern classification system (Fig. 5). An optical image of the designed metasystem is shown in Fig. 5a. The input grating coupler design is the same as that in Fig. 3. The metasystem is composed of two metasurface layers with 600 phase shifters per layer. The six output ports correspond to the patterns "X", "Y", and "Z" at the two input wavelengths of 1530 nm and 1570 nm. For the input pattern "Y" at 1570 nm, the simulated light distributions on the output plane (gray curves in Fig. 5b) are consistent with the measured data points (solid squares in Fig. 5b). The measured confusion matrix (Fig. 5c) indicates a hyperspectral pattern classification accuracy of 70%, with an insertion loss of 14.2 dB.

Discussion

Compared to the 2D metasystem in free space, the metasurface on the integrated platform is limited to a smaller number of cells and out-of-plane to in-plane couplers, with the advantages of lower insertion loss and feasible fabrication of multi-layer structures. With the same total cell number, classification accuracy is more sensitive to the depth of the metasystem than to the size of each layer (Supplementary Fig. S7a). Currently, the fabrication-limited metasurface cell number is 10⁴, which is sufficient for the standard testing databases with propagation matrix compression (Supplementary Note 2.2). We numerically explored the 1D metasystem's computing capability by designing one for the Modified National Institute of Standards and Technology (MNIST) handwritten digit database with 784-pixel inputs (Supplementary Note 4). Three epochs bring the metasystem's accuracy to 96% (Supplementary Fig. S6). Currently, the main technical challenge is the layout design of a large number of I/O ports on an integrated photonic platform with tolerable phase distortions from nanofabrication. Theoretically, a 2D metasurface with subwavelength cells owns significant computing capabilities.
However, experimental implementation of such a system for machine learning has never been reported at telecommunication or infrared wavelengths, but it is feasible if the fabrication or alignment errors are considered in the training process (Supplementary Note 2.1, 2.5). Commercially available components (DMDs or diffractive optical elements) have a typical cell number of 10⁴-10⁶. Single-layer components have been utilized for high-accuracy image classifications [35,36]. The integrated photonic platform can eliminate out-of-plane light diffraction, and thus results in orders of magnitude lower insertion loss compared to free-space optical systems. Based on the Toeplitz matrix, the training algorithm of the 1D metasystem requires less memory and time during the training process (Supplementary Note 2). The time- and computational-cost-efficient design algorithm facilitates systematic design studies of the MNIST classifiers (Supplementary Note 4). Given a sufficient weight matrix size, a one-layer metasystem can only achieve 88% accuracy. A 5-10% accuracy boost is observed with increased layer number (Supplementary Fig. S7a). The diffraction distance is proportional to the inter-layer distance, which results in higher classification accuracy and insertion loss (Supplementary Fig. S7b). Reconfigurability and nonlinear activation functions can be introduced into the metasystem platform via hybrid integration of active materials. For example, phase-change materials with high refractive index contrast can fill those slots and provide sufficient phase tunability [52] for a fully reprogrammable metasystem. Certain active materials exhibit exceptionally high nonlinear responses (such as two-photon-absorption-related free carrier absorption or absorption saturation) and are transparent in telecommunication wavelength ranges, which can be integrated into the diffractive networks as nano-scale activation functions with solution processing [53,54].

Using designs obtained from diffractive optical networks, we experimentally demonstrate cascaded metasurface systems for wavelength-selective pattern classifications at telecommunication wavelengths. The miniaturized metasystem is fabricated on an SOI substrate with one-step lithography and etching. Compared to conventional integrated photonic circuits, the manifested throughput and computing capability in the metasystem are attributed to the dense phase shifters and efficient diffractions. With proper training, the integrated metasystem can be robust against input noise and random nanofabrication offsets. As a spatial pattern classifier, 92% and 89% accuracy are achieved in a two-layer metasystem under narrow-band CW excitation and broadband femtosecond pulse excitation, respectively. The broadband operation of the pattern classifier allows single-shot image classification with boosted parallelism for optical signal processing. The wavelength selectivity of such a metasystem can be co-designed with the pattern classification function for hyperspectral imaging, machine vision, and hardware accelerators.

Methods

Device fabrication. The integrated metasystem is fabricated on an SOI substrate from Soitec, with a 250 nm silicon layer and a 3 μm thermal oxide layer. The designed patterns of the metasurface, waveguides, and grating couplers are first defined in a CSAR 6200.09 positive resist layer by using a Vistec EBPG5200 electron beam lithography system with 100 kV acceleration voltage, followed by optimized resist development and single-step dry etch procedures.
A 300-nm-thick silicon dioxide protection layer is finally deposited on the sample by plasma-enhanced chemical vapor deposition (PECVD). The losses of the grating couplers and channel waveguides used in the devices are less than 6 dB and 1 dB, respectively.

Optical measurements. Tunable lasers (ANDO AQ4321A and AQ4321D) generate coherent and linearly polarized light with 1 pm spectral resolution. For the pulsed signal measurement, a femtosecond laser centered at 1551.6 nm with a duration of less than 90 fs and a spectral bandwidth of around 30 nm (Calmar laser CFL-10CFF) is used to replace the continuous-wave light source. The infrared light travels through a polarization controller, a beam expander, a DMD (Texas Instruments DLP650LNIR), a lens, and a long-working-distance objective (a Mitutoyo Plan Apo 20× infinity-corrected objective), and is then incident onto the input grating couplers. A single-mode fiber probe collects the optical outputs and sends them to an InGaAs photodiode and optical power meter (Newport 818-IG-L-FC/DB and 1830-R-GPIB). A digital IR camera with a 640 × 512-pixel format and 25 µm pitch size (Goodrich SU640KTSX) monitors the alignment of the input pattern with the substrate.

Data availability
The datasets generated during the current study are available in the Zenodo repository, https://doi.org/10.5281/zenodo.6345622.

Code availability
The Python scripts used in this paper have been deposited in the Zenodo repository, https://doi.org/10.5281/zenodo.6339743.
3,850
2021-05-20T00:00:00.000
[ "Physics" ]
Motivation, Definition, Application and the Future of Edge Artificial Intelligence – The term "Edge Artificial Intelligence (Edge AI)" refers to the part of a network where data is analysed and aggregated. Dispersed networks, such as those found in the Internet of Things (IoT), have enormous ramifications when it comes to "Edge AI," or "intelligence at the edge". Smartphone applications that use real-time traffic and facial recognition data, as well as semi-autonomous smart devices and automobiles, are included in this class. Edge AI products include wearable health monitors, security cameras, drones, robots, smart speakers and video games. Edge AI emerged from the marriage of Artificial Intelligence with cutting-edge Edge Computing (EC) systems. Edge Intelligence (EI) is the term used to describe the model learning or inference processes that happen at the system edge, employing the computational resources and data available from the edge nodes down to the end devices under the cloud computing paradigm. This paper sheds light on "Edge AI" and the elements that contribute to it. In this paper, Edge AI's motivation, definition, applications, and long-term prospects are examined.

I. INTRODUCTION

Over the past few decades, the application of Artificial Intelligence (AI) has significantly increased around the globe. Due to the expansion of commercialized activities in business, the cloud computing paradigm has become a pivotal element of AI's progress. More businesses are realising the need to make their technology available on smartphones so that they may better meet the needs of their customers. As a result, growth in the Edge Computing (EC) [1] business is projected in the coming years. Edge Artificial Intelligence (Edge AI) analyses data generated by a local hardware device using Machine Learning (ML) algorithms. Processing and decisions happen locally within milliseconds and do not need an Internet connection. As a consequence, the communication costs of the cloud model are drastically lowered. Data and processing are two areas where Edge AI focuses its efforts, moving datasets and processing nearer to the point of interaction with users. Examples of this technology in action include Google's Home, Amazon's Alexa, and Apple's HomePod, all of which employ ML to learn and recall local words and phrases. Artificial Intelligence (AI) analyses the user's inquiry through an edge network, which translates the user's speech into text. Without an edge network, the response time would be on the order of 400 milliseconds longer than with edge processing. Research on Edge AI, or cutting-edge AI [2], is still in its infancy. In order to demonstrate the benefits and applications of Edge AI, Microsoft created a mobile voice command recognition prototype in 2009. Because research on Edge AI is still at an early stage, there is no precise definition of the term. "Edge AI" is currently being used to describe the practice of running AI algorithms directly on an end device utilising data produced on the device itself (sensor data or signals). Although this is now the most common usage, it is important to remember that this definition severely limits the range of possible solutions (e.g., by requiring high-end AI processors). High-end CPUs are required, for example, to run DNN models locally due to their computationally demanding nature.
Additionally, these rigorous criteria are incompatible with existing legacy end devices, which have limited computing capacity, pushing up the cost of Edge AI further. As we explain in this section, Edge AI need not be confined to running AI models only on the edge devices or servers. According to a dozen recent studies, running Deep Neural Network (DNN) models [3] with edge-cloud synergy decreases end-to-end delay and energy consumption in contrast to running them purely locally. There are several reasons why we believe a collaborative hierarchy should be integrated into the design of cutting-edge information-gathering systems. Training and inference are widely assumed to take place in powerful cloud datacenters. Since training consumes much more resources than inference, this is a reasonable assumption to base our design decisions on. Edge AI is the result of the marriage of cutting-edge computing with AI. Using data and computing resources spanning the cloud datacenters, edge nodes and end devices to train or infer models at the network edge is what is meant by "Edge AI". Edge AI's motivation, definition, applications, and long-term prospects are all examined in this research. In that regard, the content is arranged as follows: Section II focusses on the motivation of the research while Section III presents a critical definition of Edge AI. Section IV reviews the advantages of Edge AI in different real-life contexts. Section V focusses deeply on the fundamental applications of Edge AI. Section VI projects the future of Edge AI in a web-linked ecosystem while Section VII presents concluding remarks on the paper. II. MOTIVATION BEHIND THE STUDY This article discusses the benefits of Edge AI and how to put it to use. When it comes to applications, edge AI technology has no boundaries. Because of the Covid-19 conundrum, organisations have turned to AI to offer real-time data solutions. As an example, AI is being used in the monitoring, assessment, and treatment of patients. The developer community's creativity and imagination are the only real limitations of Edge technology. Because of this, a number of collaborative projects are already underway to educate Science, Technology, Engineering and Mathematics (STEM) professionals and students about this new technology. A joint effort between Intel and Udacity is developing the Intel Edge AI for IoT Professionals Nanodegree initiative, which teaches computer vision and deep learning. This category includes software developers, ML technologists, data experts, and professionals engaged in the creation of cloud-centered AI devices and applications. Due to the programme, it is projected that EI applications will become more user-centric. III. DEFINITION OF EDGE AI The term "Edge Computing" refers to any programme that reduces latency by moving computation closer to the user's request. By "Edge Computing," Li and Hong [5] refer to computation that occurs outside the cloud, at the network's edges, and in applications that need real-time processing of data; they explained this in their keynote talk at IEEE DAC 2014 and in an invited session at MIT's MTL Seminar in 2015. EC, in contrast to cloud computing, makes use of data that is constantly being generated by sensors or users. Edge AI is the most popular term used to describe fog computing. When it comes to the EC concept, servers located in the "final mile" of a network are regarded as a part of it.
According to [6], anything that is not a normal data centre may be an 'edge'. As an edge node, gamelets broadcast games to clients that are one or two hops distant. In order to meet the response time requirements for real-time gaming, edge nodes are often one or two hops removed from the mobile client. In the context of EC, virtualization technologies may make it easier to set up and manage many different types of applications on edge servers. When data is analysed and combined at the network's outer edge, it is called "Edge AI". Using the term "Edge AI", or "intelligence at the edge", has significant ramifications when it comes to scattered networks like the IoT. System nodes placed outside the system's core may now execute functions that were previously only available to the system's core. Traditional storage and retrieval of data from IoT-connected devices and sensors in a central data warehouse or repository has a number of disadvantages. It is possible that the system will be more vulnerable and inefficient if the data is not secured. These nodes may intelligently analyse the data as it moves through the edge network and into the data warehouse in a smart edge architecture. Because of this, data-handling systems may become more responsive and safer. As a result of these and other factors, a number of cloud providers and other enterprises with experience in IoT structure and nature advocate the use of an Edge AI. IV. BENEFITS OF EDGE AI Edge AI covers smartphone apps such as feature extraction and real-time traffic data, as well as semi-automated vehicles and smart gadgets. Gaming consoles, smart speakers, robotics, drones and surveillance cameras are among the Edge AI-enabled gadgets. Here are a few additional applications for Edge AI that we hope to see in the future: • It will aid security cameras in identification by providing real-time information. Traditional video security cameras capture long-exposure photos that may be stored and retrieved as needed. It's a different story with Edge AI, which uses real-time algorithmic processes to identify and analyse suspicious actions in real time, making monitoring less costly and more efficient. • Increased real-time processing of data and photos will allow autonomous cars to recognize traffic signs, people and other vehicles as well as roadways. This will improve transportation safety. • In smartphones, it might be used to analyze images and videos, create reactions to audio-visual stimuli, or perform real-time locale or scene identification. • It will minimize costs and enhance safety with the IIoT (Industrial IoT). Edge AI will survey machines for suspected flaws or faults, while ML will assemble real-time data from the whole manufacturing chain. • It will be utilized in the field of emergency medicine to analyze photos. • The development of 5G technology networks will lead to faster and more reliable mobile data transfer, which will make Edge AI more effective. For example, Red Hat and IBM have collaborated to establish 5G-oriented edge technologies, making it easier for firms to effectively manage activities across larger numbers of devices from multiple suppliers and providing communications firms with the agility they require to promptly provide the required services to consumers. Since EC and AI have many commonalities, it makes sense that the two will come together. Artificial Intelligence (AI) and EC both try to mimic the behaviors of machines and devices by learning from data generated by edge servers and devices.
In addition to obtaining the rewards of EC, the following advantages accrue when AI is pushed to the edge. Data created at the periphery of the network requires AI to unlock its full potential. End devices of many kinds, from security surveillance systems, wearable sensors and smart sensors, to IoDs (Internet of Drones), have been interconnected with the web in recent times. As shown in Fig. 3 (IoT configuration), a plain Internet of Things (IoT) setup is used to connect devices and sensors directly to the web; in this scenario, raw data is sent to backend servers where ML algorithms are operated. Fig. 4, in contrast, shows the configuration of Edge AI, where ML algorithms are run locally on the embedded systems or hardware devices rather than on remote servers. Based on the AI application or the category of the device, there are different hardware categories for the performance and activity of edge AI processing, e.g., SoCs, FPGAs, ASICs, GPUs, and CPU accelerators. A considerable amount of data (such as video, image, and audio) is continually detected at the device end as a consequence of the proliferation of these different gadgets. The potential of AI to swiftly evaluate and derive insights from such vast amounts of data will make it a functional need in this environment. With deep learning, which is one of the most widely used AI approaches, the edge device can automatically discover patterns and detect abnormalities based on the data it collects from the surrounding environment. Real-time predictive decision-making based on sensed data (e.g., planning for public transit and traffic management) is therefore able to respond more quickly and effectively to rapidly changing surroundings. Deep learning algorithms outperform classical intelligence methods that rely on tracking whether numerical limits are fulfilled. PTC ThingWorx, Amazon AWS IoT, IBM Watson IoT, and Microsoft Azure IoT are the leading IIoT systems embedded with predictive AI capacities. In contrast with today's 10%, Liu et al. [7] project that by the end of 2022, more than 85% of firms' IoT initiatives will integrate an AI element. EC, on the other hand, allows AI to be enriched with large amounts of data and application scenarios. Deep learning's current rise has been attributed to four factors: algorithms, hardware, data, and application contexts. Data and application scenarios have received much less attention than algorithmic and hardware factors in deep learning research. The most popular method for improving the performance of deep learning algorithms is to add extra layers of neurons to the DNN. This requires learning additional DNN parameters, and the amount of data needed for training increases as a result. We can see that, over the last 30 years, the average delay between significant methods and the accompanying advancements has been approximately eighteen years, whereas similar advancements between vital datasets have taken less than 3 years on average. This is a clear demonstration of the relevance of data in the advancement of AI. Following the recognition of the value of data, the next question is, "Where do we get our data from?" Mega-scale datacenters have traditionally been the primary source of data generation and storage.
However, with the fast expansion of the IoT, this tendency is gradually being reversed. Chen et al. [8] predicted that by the end of 2022, all people, machines, and things will create roughly 850 ZB, up from 220 ZB generated in 2016. Smart finance, cancer diagnosis, and drug development are just a few of the new frontiers that AI is helping to unlock. The objective of "developing AI for each and every business at any moment in time" has been expressed by major IT firms, with a broad variety of applications and possibilities. This requires bringing AI up close to the people, information, and edge devices. When it comes to accomplishing this goal, EC is definitely better suited than cloud technology. Edge workstations, as contrasted with cloud data centres, are located nearer to end users, data sources, and other computing equipment. As a result, EC is cheaper and simpler to adopt than cloud computing systems and can be used to get around these obstacles. AI technologies, on the other hand, may help popularise EC even more. There have always been high-demand services in the cloud computing realm that edge technology might take to the next level and that cloud computing could not serve well. To dispel any confusion, Microsoft's research team, which co-invented the cloudlet concept, has been investigating since 2009 what sorts of applications should be transferred from the cloud to the edge, including voice control identification, AR/VR, immersive cloud gameplay, and real-time video monitoring. A much more compelling use case for cutting-edge computing would be real-time video analytics. Using machine vision, real-time video intelligence continually extracts high-definition video from security camera feeds and analyses it, necessitating a large amount of computing, high bandwidth, confidentiality, and reduced latency. Evidently, EC is the only conceivable option that can meet these stringent criteria. A dozen cognitive support applications, such as machine vision, voice recognition, and computational linguistics, have been the primary focus of CMU experts in their efforts to promote the cloudlet idea. The rise of EC may be traced back to the rise of AI applications, which have had a significant influence on its popularity. Most mobile and IoT-related AI solutions have this in common: they are computationally and energy expensive, privacy-sensitive, or otherwise affected by latency and other delays. This makes them a good fit for EC. Edge AI has received a lot of attention lately because of its advantages and the requirements of operating AI applications at the edge. Cloud-edge AI systems were proposed as a significant research topic in December 2017 by UC Berkeley (University of California, Berkeley) in the paper "A Berkeley View of Systems Challenges for AI". "Edge AI" made its debut in Gartner's Hype Cycle in August of 2021 [9]. Gartner predicts that edge AI, which is still in the innovation trigger stage, will reach the plateau of productivity within the next 5 to 10 years. Many pilot projects for cutting-edge AI have been carried out in the sector as well. The conventional cloud service providers, e.g., Google, Microsoft, and Amazon, have built service frameworks to extend the knowledge of cloud servers to the edge, allowing network nodes to conduct ML inference locally using pre-trained models.
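As a rough illustration of the kind of local inference these frameworks enable, the sketch below loads a hypothetical pre-trained TensorFlow Lite classifier and runs it entirely on the device; the model file, input shape and preprocessing are illustrative assumptions, not artifacts of any framework named above.

```python
# Minimal sketch: on-device inference with a pre-trained TensorFlow Lite model.
# Assumption: "classifier.tflite" exists locally and takes one float32 NHWC image tensor.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="classifier.tflite")
interpreter.allocate_tensors()

input_info = interpreter.get_input_details()[0]
output_info = interpreter.get_output_details()[0]

def classify(frame: np.ndarray) -> int:
    """Run one inference locally and return the index of the top-scoring class."""
    _, height, width, _ = input_info["shape"]          # shape the model expects
    x = tf.image.resize(frame[np.newaxis, ...], (height, width))
    x = tf.cast(x, tf.float32) / 255.0                 # simple 0-1 normalisation
    interpreter.set_tensor(input_info["index"], x.numpy())
    interpreter.invoke()                               # no network round trip needed
    scores = interpreter.get_tensor(output_info["index"])[0]
    return int(np.argmax(scores))

# Example: classify a dummy 480x640 RGB frame captured from a local camera.
print(classify(np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)))
```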
Google Edge TPU, Huawei Ascend 910, Intel Nervana NNP, and Ascend 310 are four examples of high-end AI processors designed for executing ML models currently available on the marketplace. V. APPLICATION OF EDGE AI AI and Retail: Shopping Experience It is critical for small and big merchants alike to provide their customers with an enjoyable buying experience, since it has a significant impact on consumer loyalty. With the help of AI, companies can keep consumers happy and keep them coming back for more transactions. Another of the many Edge AI implementations used to assist workers in their day-to-day operational processes and to improve the client experience is using AI to ascertain when product lines being sold need newer products and when product lines have to be resupplied (mostly appropriate for perishable commodities). Smart checkout structures utilise computer vision (a branch of ML that assists computer systems in determining the characteristics of objects, classifying them, and drawing inferences from digital images acquired from video content and webcams) to verify that the barcodes on the objects being scanned are those that belong to the items being identified. Smart surveillance analytics is also used by retailers to learn more about their consumers' preferences and better plan their shop layouts. Customers' buying habits are also analysed by merchants in order to enhance the consumer shopping experience by evaluating data about transactions and abstracted information from videos. AI in Smart Industries: Business Experience Fast Moving Consumer Goods (FMCG) precision manufacturing companies must constantly guarantee the accuracy and safety of their products. The manufacturing plant will be safer and more effective as a consequence of the use of AI and EC in firm operations. In-plant inspections may be carried out using Edge AI applications, such as those created by BMW and Procter & Gamble. To assist its employees, Procter & Gamble uses an inspection camera, and the video footage is analysed to ensure that no defective products leave the factory. Quality control and safety initiatives are bolstered as a consequence. For instance, Edge AI can detect quality assurance, verification, and deviation issues that the human visual system could overlook. Industrial quality control may be effectively monitored using computer vision. Other automakers are also using EC and AI on the assembly line for real-time views and insights. This has resulted in a more efficient production process. Powering Smart Hospitals: Medical Service Experience Healthcare experts constantly require technological assistance from time to time. Using AI and cutting-edge computing in the medical profession will improve patient care and operational effectiveness. Data security is a major concern for smart hospitals, and edge AI applications can assist with that. It is possible to undertake high-accuracy thermal scanning, inventory control, remote patient surveillance and even disease prediction with Edge AI in the medical industry. Smart healthcare facilities may become a reality with the help of Edge AI apps. The COVID-19 pandemic and the emergence of 5G networks, as well as the growing interest in telehealth and other connected solutions, have prompted more people to consider the notion of a smart hospital in recent months.
Medical professionals who have access to cutting-edge technology will be able to rapidly and readily obtain patient health measurements, test results and other information in a smart hospital. A renowned data and analytics business called GlobalData [10] argues it will be necessary to coordinate technologies from a number of areas to make this a reality. The rising use of robotics in hospitals is an excellent illustration of the necessity for smart hospitals to have interconnectivity across many technologies. Medication and medical equipment delivery robots, assistive robots for doctors and nurses to transfer patients or execute operations, and surgical robots are some examples. In order to keep the hospital running well, all of the robots in the facility must be able to be supervised and controlled at all times. In hospitals, fewer people would be needed to supervise the network of robots since computer software would take care of it first. The smart hospital's network infrastructure will be its most important component. Intrahospital networks will be required to swiftly exchange, transport, and distribute data between equipment and departments in these hospitals. There will be a wide range of medical instruments in the hospital, including on-patient sensors and magnetic resonance imaging scanners, that will require ongoing connectivity. However, 5G might eliminate these issues while allowing for faster interhospital links. These may then be utilised for patient or physician discussions, enhancing treatment efficiency. It's currently possible to build smart hospitals, but it will need a big investment from all parties. Manufacturers of medical equipment must make sure that new products are both functional and safe from cyber-attacks, while also being simple to use for the patient. Hospitals will need to focus on areas that provide the greatest value, while ensuring that their fundamental IT infrastructure is safe and adequate to meet the needs of their patients. To take advantage of 5G and the smart hospital, telecoms will have to make significant investments in network improvements. Telecommunications: Communication Experience For telecommunications companies, Edge AI Applications can provide different and innovative experiences in the application of 5G for quality assurance checking in factories equipped with sensors and cameras, in the manner that softwaredefined systems are utilised to computerise self-checkout processes in stores, and for customer perception powered by AI. Network operators will benefit greatly from the combination of these factors. Drones: Visual Analysis In addition to traffic control, engineering, and mapping, drones equipped with edge AI may be utilised for a variety of other purposes. Image classification, document classification, and object identification and tracking are all common features in drones today. Drones equipped with AI are able to identify and locate objects by replicating human visual search patterns. As a result of using edge AI technologies in drones, drone data may be successfully examined. In addition to facial object recognition and monitoring in real-time, it helps with predictive management and logo identification, as shown in Table 1. Fire Fighting Robots Similar to how drones are used to inspect dangerous regions that are too dangerous or difficult for humans, robots may be utilised to put out fires. As they go around a structure, AI-enabled robots may inspect and map it for structural damage while saving lives and extinguishing fires. 
The Howe and Howe Innovations Thermite Robot is an illustration of a firefighting robot; it was initially built for the US Army and includes cameras and a linked water line that can pump 500 gallons per minute. With Edge AI installed, it can access risky areas without the need for a remote control, and can recognise harmful situations that humans cannot safely encounter and take action to relieve them. Autonomous Vehicles Autonomous vehicles (also known as self-driving automobiles) are a great example of cutting-edge AI. Data processing may be done on the same hardware as the device itself with Edge AI-enabled devices [11]. Preventing fatal accidents requires the processing capability to be included inside that same hardware: if processing is too sluggish, it might result in severe consequences. Data from this Edge AI application is collected and analysed in real time in order to help drivers and other road users avoid accidents and potential hazards. Automation is not a new phenomenon in commercial aviation, since sensors are constantly being monitored and analysed in order to ensure the safety of the aircraft. Table 1. Benefits of drones for visual analysis (significance and definition):
Forewarned repairs: It is possible to do preventative maintenance on old and deteriorating infrastructure in order to avoid collapse and total erosion. Structures, bridges, and roadways, for example, deteriorate and lose strength with time, posing a danger to many people if they are not adequately maintained in a timely manner. There are several ways in which drones may assist in ensuring that the correct structures are repaired quickly in order to prevent further deterioration.
Detection of logos: There are a variety of uses for drones in marketing, including gathering data on advertising campaigns or logo placements to better understand how such efforts are being received by target audiences.
Real-time object identification and tracking: Drones may be used to keep an eye on congestion and detect and track moving objects in real time. Tracking traffic violators and fugitives ensures security and safety.
Face recognition: Drones equipped with AI may be used to monitor pedestrian movement, particularly in high-risk areas with a large volume of traffic. A variety of applications, including law enforcement, call for its use.
Cartography and mapping: Using drones for cartography and mapping is advantageous since they are able to access locations that are inaccessible to people. An AI-powered drone can achieve what a team of professionals would have to do in a fraction of the time and at a fraction of the expense. To create 3D pollution maps, they can collect data on things like noise, air, or radioactive pollution, and they can scan things like bridges, structures, and roads to see whether damage has occurred.
Industrial IoT (IIoT) The Industrial IoT relies heavily on the digitalization of operations and production in order to increase productivity. Using Edge AI, industrial IoT devices can perform visual examination and manage robots more quickly and at reduced cost. Energy The usage of smart grids is one visible example of edge AI applications in the energy industry. These smart grids can use renewable energy, monitor usage and decentralise energy production because of EC-driven AI. In order for smart grids to work, data must be sent directly between devices, hence it is preferable not to use the cloud for this [17][18][19].
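To make the smart-grid point concrete, here is a minimal sketch of how a meter-side device could flag anomalous consumption locally and report only the anomaly upstream, keeping raw readings off the cloud; the window length, threshold, and readings are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: local anomaly flagging on smart-meter readings at the edge.
# The 30-sample window and 3-sigma threshold are illustrative assumptions.
from collections import deque
import statistics

class EdgeMeterMonitor:
    def __init__(self, window: int = 30, threshold: float = 3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def update(self, reading_kw: float) -> bool:
        """Return True if the new reading looks anomalous given recent history."""
        anomalous = False
        if len(self.history) >= 10:  # need a minimal baseline first
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            anomalous = abs(reading_kw - mean) > self.threshold * stdev
        self.history.append(reading_kw)
        return anomalous

# Example: only anomalies would be reported upstream, keeping raw data local.
monitor = EdgeMeterMonitor()
for reading in [1.1, 1.0, 1.2, 1.1, 1.0, 1.3, 1.1, 1.2, 1.0, 1.1, 9.5]:
    if monitor.update(reading):
        print("anomalous reading:", reading)
```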
Safe, Smart Road Networks: Transport Experience Roads are becoming smarter in an effort to make them safer for drivers in the future. In addition to establishing smart roads, some tech individuals and organizations are hoping to create entirely smart cities where roads, cars, and buildings all communicate with one another. Edge AI techniques are frequently used to offer cameras that study traffic in real time to detect obstructions, irresponsible driving, disasters, and hold-ups. Edge AI solutions are also being used by businesses to improve traffic flow and make roadways safer for all road users, including pedestrians, cars, motorcyclists, and others. There are also edge AI applications in surveillance cameras, which do not need to send raw video signals intended for computation to the cloud. It is possible for Edge AI-powered cameras to analyse photos on the spot and in real time. Using this technology, it is possible to monitor and track things in order to take rapid action and to prevent fatalities from occurring [20][21][22]. VI. THE FUTURE OF EDGE AI We conduct a complete review of the literature on cutting-edge AI and provide our findings. Because it allows for the development of AI in the last mile and enables consumers to benefit from high-efficiency intelligent services without having to rely on central cloud servers, Edge AI is an apparent plus. It is worth reiterating that there are still unresolved issues in achieving Edge AI. Identifying and analysing these problems is essential, as is looking for new theoretical and technological answers. Some of the most pressing issues in Edge AI are discussed here, along with some potential solutions. Problems with data scarcity at the edge, inconsistency on edge devices, poor flexibility of statically trained models and incentive mechanisms are only a few of the difficulties faced [23][24][25]. Data scarcity at the edge For most ML algorithms, particularly supervised ML, high-quality training data is a prerequisite for high performance. Human activity recognition (HAR) and speech-recognition applications, for example, are Edge AI settings where the gathered data is sparse and unlabeled. In contrast to typical cloud-based intelligent services, which collect all of the training data into a single database, edge devices develop models from data they collect or data they produce themselves. Good picture features, for example, are missing from these datasets. Most current efforts disregard this problem by assuming that the training examples are of high quality. In addition, the training data is often unlabeled. To address the issue of unlabeled training examples, several studies advocate the use of active learning. This strategy may be employed in cases where there are just a few examples and categories. The decentralised nature of the data may be exploited in a federated learning strategy to tackle the issue. Federated learning, on the other hand, is best suited for group training rather than the one-on-one instruction required for personalised models. The following is a list of potential remedies to this issue. • Adopt models that can be trained with a limited amount of data. In general, an ML method that learns more easily from tiny datasets will perform better. In certain cases, a basic model, such as Naive Bayes, a linear model, or a decision tree, is sufficient to cope with the issue, since these models are simply attempting to learn less. As a result, when confronted with real-world issues, it is important to choose the right model.
• Incremental techniques of learning. In order to integrate their fresh data, edge devices might retrain a regularly used pre-trained model in an unprecedented manner. Fewer training sessions are needed to create a bespoke model in this approach. • Methods based on transfer learning, such as few-shot learning. • The cold-start issue is often avoided, and the quantity of training data needed is reduced, thanks to the use of transfer learning, which applies what has been learnt from one model to another. Transfer learning may be an option if the target training data is few and the source and target domains share a few characteristics. • Methods based on data augmentation. To enhance the effectiveness of a specific model, data augmentation is used throughout the training process. For instance, flipping, rotating, scaling, translating, cropping, and the like may increase the number of photos while maintaining the semantic meaning of the labels. The network's performance on new data will be improved because of the training on enhanced data, which will make it more resistant to deformations. Data consistency on edge devices Edge AI applications, such as voice recognition, event detection, and face recognition, typically use sensors at the network's edge to collect data. If the collected data is not consistent, it could have an impact on the results [12]. Since sensors can be found in so many different places, this problem is made worse. Environmental noise (e.g., wind or rain) and its conditions can have an impact on the sensor data collected (e.g., library or street). Due to the heterogeneity of sensors, data collected by sensors may also be subject to variation (e.g., hardware and software). In terms of sensitivity, sampling rate, and sensing efficiency, sensors can differ greatly. Sensor data collected from the same source may be interpreted differently by different sensors. Training parameters like feature variables will change as the data changes because they are dependent on the training data. Despite the wide ranges of variance, existing sensing solutions are still hampered. As long as the model is taught centrally, this problem can be easily solved. A large training set is centrally located to ensure that the invariant properties of the variants can be learned. While this situation isn't covered by Edge AI, it's a common one. It is imperative that future attempts to fix this issue take into account the detrimental effect that variance has on model accuracy. We may explore data augmentation and reinforcement learning in this regard. Data augmentation may be used to improve the model's capacity to handle noise. As an example, a variety of background noises may be used in speech recognition applications for mobile devices in order to avoid the varying effects of the environment. As an analogy, sensor hardware noise might be used to solve the problem of inconsistent findings. Because of the usage of enhanced data, the algorithms are better equipped to handle these changes. The representation of data has a significant impact on model performance. There are many different approaches to learning the representation of data in order to extract more useful characteristics when building models. If we could translate between two sensors that use the same data source, we'd see a massive boost in the model's performance. As a result of this, representation learning has the potential to reduce the detrimental consequences of inconsistent data. 
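As a concrete illustration of the augmentation strategy suggested above for both scarce and inconsistent edge data, the sketch below perturbs a one-dimensional sensor or audio window with background noise, a random time shift, and a random gain; the perturbation magnitudes are illustrative assumptions rather than values from this paper.

```python
# Minimal sketch: noise-injection and time-shift augmentation for 1-D sensor/audio
# signals, to make an edge model more robust to environmental and hardware variation.
# The noise scale, maximum shift, and gain range are illustrative assumptions.
import numpy as np

def augment(signal: np.ndarray, rng: np.random.Generator,
            noise_scale: float = 0.05, max_shift: int = 100) -> np.ndarray:
    """Return a randomly perturbed copy of the input signal."""
    out = signal.astype(np.float32).copy()
    # 1) Add Gaussian background noise proportional to the signal's spread.
    out += rng.normal(0.0, noise_scale * np.std(out) + 1e-12, size=out.shape)
    # 2) Randomly shift in time (circular shift keeps the length unchanged).
    out = np.roll(out, rng.integers(-max_shift, max_shift + 1))
    # 3) Random gain to mimic differences in sensor sensitivity.
    out *= rng.uniform(0.8, 1.2)
    return out

# Example: generate five augmented variants of one recorded window.
rng = np.random.default_rng(0)
window = np.sin(np.linspace(0, 20 * np.pi, 4000))   # stand-in for a recording
augmented_batch = np.stack([augment(window, rng) for _ in range(5)])
print(augmented_batch.shape)  # (5, 4000)
```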
Possible future developments in this area include more efficient pipelines and data processing. Bad adaptability of statically trained models In most AI applications, the system is first trained on a centralized server and then distributed to edge devices. Once the training phase is complete, the model will not be retrained. Users will have an unpleasant user experience because of the poor performance of statically trained systems when confronted with unexpected data or tasks. Decentralized learning, on the other hand, simply takes into account data from a single location. As a result, these models may only become specialists in their limited field. The level of service suffers as the region served expands. Lifelong deep learning and knowledge exchange are two potential answers to this challenge. Continuous learning and self-improvement are possible with the lifelong machine learning (LML) paradigm, which is a more sophisticated approach to learning. In the future, machines will be able to learn new things on their own rather than being taught by humans. LML differs from meta learning, which allows computers to learn new models autonomously. To cope with new data and changing contexts, edge devices might employ LML and a sequence of learnt tasks. Remember that LML is not meant for low-end devices; it requires a high level of processing power. As a result, if LML is used, model design, model decompression, and offloading mechanisms should all be taken into account. Knowledge sharing facilitates the exchange of information between several edge servers. One way an edge server may get help when it receives a job it cannot do is by sending knowledge requests to other network edges. Because the knowledge is distributed among several network edges, the server that has the necessary information responds to the request and completes the job for the user. In a knowledge sharing model like this, a technique for assessing and querying one's own expertise is necessary. Privacy and security issues Diverse smart objects and edge servers must work together to supply computational power in order to realise Edge AI. There is a chance that the publicly cached data and computing jobs (either learning or inference tasks) will be transmitted to unfamiliar machines in this operation for further processing. As a result of the data's potential to contain personally identifiable information, such as photos and tokens, this raises the possibility of data breaches and attacks by malicious users. Without encryption, unscrupulous people have easy access to sensitive data. To hide personal data and compress the data conveyed, some initiatives propose performing some preparatory processing locally. However, the processed data may still be used to obtain private information. Viruses inserted into computing processes may also be used by malevolent people to attack and take control of computing devices. Users' privacy and security are at risk due to the absence of appropriate privacy and security policies or methods. Another option is to use a credit system. This is similar to the commercial banking system, which verifies each user and checks their credit information. Those with bad records would be removed from the database. Because of this, all computers that supply processing power are trustworthy and safe for all users. Several works, e.g., by Han, Pan, and Li [13], have previously employed encryption to secure their subjects' private information as a privacy safeguard.
However, the encrypted data must be decrypted before training or inference activities can be carried out, which necessitates an additional amount of processing. This issue could be addressed in the future by an increased focus on homomorphic encryption. With homomorphic encryption, computations performed directly on ciphertexts produce encrypted outputs which, once decrypted, match the results of the same calculations carried out on the unencrypted data [42]. Homomorphic encryption therefore makes it possible to directly perform training or inference tasks on encrypted files. Incentive mechanism Edge AI relies heavily on data gathering and model training/inference. During data gathering, it is difficult to guarantee the accuracy and usefulness of the obtained data. All the resources and time needed to perceive and gather data are consumed by data collectors. Preprocessing steps such as data cleaning, feature extraction, and encryption need additional resources that cannot be assumed to be shared by all data collectors. Participants in collaborative model training/inference must work together unselfishly on a particular task. AI architectures such as [14]'s have one 'master' and many 'slaves', for example. Pipelines allow workers to recognise items in a certain mobile visual environment and provide masters with training examples. This kind of architecture works best in private settings, like a person's house, where all the equipment is compelled to work together to build a superior intelligent model for its owner. However, in situations where the master initiates a task and assigns subtasks to unknown players, this approach would fail. Typically, in smart environments where all devices are not owned by a single master, an additional incentive issue arises. To encourage data collection and task completion, participants must be rewarded. Efforts in the future should examine the use of reasonable incentive systems. Data gathering, data management, and data processing all require different amounts of resources from participants. Everyone who takes part hopes to receive the biggest reward possible. The operator, on the other hand, is looking for the highest predictive performance at the lowest feasible cost. The most difficult part of building the best incentive system is figuring out how to measure the workloads of the various tasks so as to match them with the related rewards. These problems might be the focus of future initiatives. Predictive AI Predictive analytics powered by AI [15] is helping many firms better forecast customer behaviour and take preventative steps. Predictive AI built on Apache Hivemall, for example, may be used by a customer data platform (CDP) to assess consumers based on factors like churn affinity or upselling potential. Marketers may then target specific consumers based on these rankings. Businesses in various industries might use this as a source of inspiration. For example, a top game developer was able to accurately forecast, using Treasure Data's CDP and ML, which sorts of in-game prizes would keep players interested. Suppose there is a retail business attempting to hold onto its most loyal clients. AI-based predictive scoring may be used to identify and flag the customers who are most likely to discontinue their purchases. Predictive AI may be able to identify customers who are about to abandon a purchase and then learn over time which offers or interactions tempt them back.
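As a rough sketch of the churn scoring just described, the snippet below trains a logistic-regression scorer on a handful of made-up behavioural features and flags high-risk customers; the feature names, data, and 0.7 threshold are illustrative assumptions and are not taken from any platform mentioned in the paper.

```python
# Minimal sketch: predictive churn scoring with logistic regression.
# Features (days since last purchase, purchases last quarter, support tickets)
# and the example data are fabricated for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical customers: [days_since_last_purchase, purchases_last_quarter, tickets]
X_train = np.array([
    [5, 12, 0], [40, 2, 3], [10, 8, 1], [60, 1, 4],
    [3, 15, 0], [55, 0, 2], [20, 6, 1], [75, 1, 5],
])
y_train = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = churned

model = LogisticRegression().fit(X_train, y_train)

# Score current customers and flag those whose churn probability exceeds 0.7,
# so the marketing team can target them with a retention offer.
X_current = np.array([[50, 1, 2], [7, 10, 0]])
churn_probability = model.predict_proba(X_current)[:, 1]
for features, p in zip(X_current, churn_probability):
    if p > 0.7:
        print(f"high churn risk {p:.2f} for customer with features {features.tolist()}")
```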
With the help of artificial intelligence, it is possible to determine which characteristics and patterns of behaviour suggest that a consumer is on the verge of making a purchase. The system may issue an invitation to an in-store demo when it recognises a consumer who meets a certain profile and therefore assist enhance sales. Predictive AI in the back office will have an impact on customer service. Inventory management systems will enhance their ability to search for predicted trends and select when and where to distribute products, similar to what Amazon is currently doing with AI-based approaches like anticipatory delivery. As a consequence of this new trend, customers will have greater in-store options and reduced wait times. Retailers could see more happy customers if they can provide faster, simpler, and more convenient in-store pick-up choices to their customers. In the near future, new automated ML tools may make predictive AI available to both small and large organisations, democratising the technology. Northstar, an interactive prediction tool, was developed by MIT and Brown University researchers [16]. On any touchscreen device, Northstar's drag-and-drop graphical interface lets users to input datasets and build predictive ML models. With predictive AI, any organisation, no matter how large or small, may benefit without having to hire in-house data scientists. Using Northstar, a small business owner may estimate sales based on historical data, and then choose which items they want to keep on hand. VII. CONCLUSION With the application of distributed Edge Computing (EC), data storage and processing are nearer to the source as opposed to traditional cloud computing. Reaction times are expected to improve, but bandwidth is expected to decrease. In this case, there are two ways to look at it: "Misconceptions about Internet of Things (IoT) and EC abound. Using EC, the IoT is an example of how this type of distributed computing can be used." It's an architectural term, not a specific technology. To deliver web and video content from servers close to users, content dispersed networks were created in the late 1990s. This is where the origins of EC can be traced back to. It wasn't until the early 2000s that services such as shopping carts and dealer locators with real-time aggregates of data and ad insertion machines were developed for commercial use. This paper has focussed on Edge Artificial Intelligence (Edge AI) motivation, definition, applications, benefits and the future. As an example, this research demonstrates how edge AI can improve customer experiences, eliminate human risk, supplement human healthcare efforts, and make roads safer. In a wide range of industries, businesses are relying on edge AI applications to improve operational efficiency and real-time monitoring. Edge AI protects data, maintains privacy, eliminates latency and bandwidth issues, and lowers hardware costs. Using Edge AI applications in your company requires a willingness to embrace new technology and an understanding of business practises. With AI, you may make use of a wide range of sensors, including those found in drones, robots, inspection cameras, and many more.
9,907.8
2022-07-05T00:00:00.000
[ "Computer Science" ]
Spectro-Temporal Analysis of the Ionospheric Sounding of an NVIS HF Sensor † : In communications, channel models are useful approximations to the performance of a real channel, which most of the time is not available for repeated tests. In this work we present the problem of sounding the real Near Vertical Incidence Skywave (NVIS) ionospheric channel, and evaluate the channel propagation characteristics in terms of frequency and time spread, with the final goal of designing a channel model. An NVIS channel model can be obtained from the evaluated channel parameters; however, on one hand, there is the problem of missing data due to bad channel performance at some frequencies, and, on the other hand, the measured parameters have strong dependencies between them that are not directly evident. In this work, we conduct a first set of analyses of the measured parameters of the soundings to determine the dependencies in terms of quality of the channel propagation, referring mainly to the Doppler spread and the delay spread observed by the sensor. This classification approach allows us to address the second part of the research, focused on the design of the channel model for the ionospheric communication of remote sensors. Introduction Interest in radio communication using a beyond line of sight technique named Near Vertical Incidence Skywave (NVIS) propagation has increased, mainly for both emergency communications and for sensing in remote areas [1]. This ionospheric propagation technique enables communication over large areas without the need for fixed infrastructure or satellite time. A radio signal taking advantage of the NVIS propagation mode is broadcast straight up into the ionosphere. The returning signal is refracted back to Earth and spread out over a much larger area (see Figure 1). Thanks to the near vertical radiation angles, large obstructions such as buildings and mountains do not shadow or obscure the radio signal transmit paths [2], making NVIS a true beyond line of sight technique. In order to design the physical layer of an NVIS modem [4] for communications at mid-latitudes [5], we designed a set of ionospheric soundings to measure the channel parameters. These soundings were conducted with a Lowell Digisonde 4D ionospheric sounder [6] located at the Ebro Observatory (40.8° N, 0.5° E). The goal of these tests was to collect and analyse data in order to model the performance of an ionospheric NVIS channel. The purpose of this paper is to describe the parameters analyzed in the designed soundings, and to conduct a first approach to clustering that data, in order to reach the definition of a channel model. We aim to define a model with a good, a regular and a bad scenario (which we will call the good, the bad and the ugly). For that purpose, we intend to cluster the collected information about the channel performance, taking into account the most relevant parameters defining the propagation of the channel, which, as noted, we already know have strong dependencies between them [5]. This paper is structured as follows: The NVIS sounding characteristics are detailed in Section 2, followed by the first clustering approaches described in Section 3, and finally the conclusions and future work are detailed in Section 4. NVIS Sounding Characteristics In this section we describe the basic principles of the ionosphere as a communications channel, and its effects on the transmitted signal. We also describe the particular NVIS parameters that are measured in the designed soundings.
They will be used to describe the channel performance depending on the frequency of transmission and the hour of the day. The Ionosphere as a Communications Channel The ionosphere is a region of the upper atmosphere where sufficient ionisation exists to affect the propagation of radio waves. Generally this region is considered to extend between approximately 50 km of altitude and several Earth radii [5]. Ionisation in this region is a result of the interaction between atmospheric molecules and solar radiation. Many complex factors influence the free electron density resulting from the ionisation of atmospheric molecules. The main contributing factors are the incidence angle of solar radiation (hence latitude is important), the amount of solar radiation (hence factors such as season and the diurnal cycle), and the relative density and composition of the atmosphere (hence altitude is important). In addition, the factors leading to the creation of free electrons and many other phenomena such as atmospheric turbulence, weather and even gravitational waves can have a significant influence on the localised electron density. Many factors lead to the creation and destruction of free electrons and as such characterise the ionosphere as an extremely hostile communications channel. Even with this hostility, modern requirements for beyond line of sight communications demand practical solutions to mitigate or compensate for these effects. One of the common mitigation strategies is to utilise a wide spread of transmission frequencies and intelligently select the one that maximises throughput or propagation quality. Obviously the distance a radio wave propagates within such a hostile region limits the communication effectiveness of the channel. In this sense, NVIS propagation is an excellent compromise for the challenges of such a hostile channel. The NVIS channel uses the beyond line of sight characteristics of typical ionospheric channels but limits the distance traversed within the hostile environment. NVIS Parameters of Interest We will characterise our NVIS channel as a tapped delay line as presented in Equation (1). This tapped delay line combines all the parameters we collected during this sounding campaign. The stratification typically associated with the ionosphere leads to a scenario where a single point source transmission may have multiple points of refraction, and hence multiple propagation paths between a transmitter and a receiver; we refer to this effect as multipath propagation. Each of the N p paths from Equation (1) will arrive at the receiver with a different delay, τ sp . For each path, we collect data on the spreading in both time and frequency using a near impulsive transmission signal. We define the time spreading of each path at delay τ sp (n) as the single path delay spread P n (τ sp (n)). The total delay spread of a channel at time sample t, τ(t), is the delay from the first received path to the final received path for a single channel, so τ(t) = τ sp (N p ) − τ sp (1). Due to the turbulent nature of the ionosphere, each propagating path will shift in frequency depending on the relative motion encountered along the propagation path. This frequency change will have two components, a Doppler shift and a Doppler spread. We use Doppler shift to represent the frequency change resulting from the average relative motion of the ionosphere on a propagation path. Doppler spread is modelled as a distribution representing the non-deterministic motion of the ionosphere along the propagation path.
In this equation, A n (t) is a scalar that represents the relative amplitude factor of the n-th path at time t, P n (t) represents the single path delay spread of the n-th path, and finally G n (t) represents the Doppler shift and Doppler spread of path n-th at time t. In this paper we detail the evaluation of these parameters, because they are crucial for the future definition of the channel model. They influence the channel propagation in a different manner; if the total delay spread of a channel is longer than the duration of a transmitted symbol, several symbols will mix together, effectively interfering with each other. Clearly if the current symbol is mixed with previous symbols, the reception and decoding is more complex. It should be clear that relative amplitude for each path significantly changes the ratio of the desired SNR having a relevant impact on reception quality. Finally, the Doppler shift and Doppler spread will change the synchronisation between the transmitter and the receiver, resulting in uncertainty. Depending on the relative motion of the ionosphere and hence magnitude of the Doppler, this uncertainty may increase the complexity of decoding any transmitted symbols in the receiver. Selection of Channels with Similar Performance An NVIS channel model could be obtained from the previously described data, but we have, on one hand, some data missing due to bad channel performance for some frequencies. On the other hand, the dependencies between the measured parameters are not easily separable, so an algorithm for detailing the good, the bad and the ugly channel performance in terms of measured parameters is needed. Description of the Parameters Only two of the five described channel sounding parameters have been used to perform the tests. We leave the entire set of parameters for future work. In this work, the parameters we used are: • Doppler spread of the main path • Delay spread for all paths We decided to consider these two parameters for the first clustering due to their relevance in the quality of the channel propagation. Description of the Algorithms In this section, we describe the K-Means algorithm which has been used to classify the NVIS data-set [7]. K-Means is an unsupervised learning algorithm that groups the samples in a dataset into a fixed number of (K) clusters. A cluster is a subset of samples sharing similarities in their features. In K-Means, clusters are defined around centroids. A centroid is a point in space which is located at the same distance from multiple samples. The default implementation of the K-Means uses the squared Euclidean distance to compute the separation between a sample and a centroid. Despite the existence of non-standard implementations using alternative distance functions, they were not used in our work. The algorithm uses an iterative refinement approach to find an optimal solution from an initial state. The initial state consists of K centroids, which are randomly chosen unless prior knowledge about the dataset can be leveraged. For each iteration two steps are performed. The first step is to assign each sample to its nearest centroid using the distance function. Secondly, the position of all centroids is redefined by computing the arithmetical mean of all the samples which belong to them. These steps are repeated until convergence is achieved or a maximum number of iterations is reached. The convergence of the algorithm to a global optimum is not guaranteed. 
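As a minimal sketch of the clustering step just described, the snippet below runs scikit-learn's K-Means on a hypothetical table of (Doppler spread, delay spread) measurements with several randomized restarts (the initialization issue the next paragraph returns to); the measurement values and the three-cluster setting are illustrative assumptions, whereas the study itself obtained eight clusters on the full dataset.

```python
# Minimal sketch: K-Means over (Doppler spread, delay spread) samples,
# with several randomized restarts, as described in the text.
# The measurement values below are made up for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Each row is one sounding: [Doppler spread (Hz), delay spread (ms)]
measurements = np.array([
    [0.2, 0.5], [0.3, 0.6], [1.5, 2.0], [1.4, 2.2],
    [0.25, 0.55], [3.0, 4.5], [2.8, 4.0], [1.6, 1.9],
])

# n_init > 1 restarts the algorithm from different random centroids and keeps
# the best solution, mitigating sensitivity to the initial state.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(measurements)

print(kmeans.labels_)           # cluster index assigned to each sounding
print(kmeans.cluster_centers_)  # centroid of each cluster in feature space
```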
Depending on the implementation, results may be influenced by the points chosen as centroids in the initial state. To minimize the negative impact of this behaviour, the results presented in this paper were obtained using a multi-pass implementation of K-Means which uses randomized restarts between each pass. Preliminary Clustering Results The results of the preliminary clustering are shown in Figure 2. Up to eight different clusters are found in the three situations. Figure 2a takes into account only the Doppler spread measured in the sounding of the channel. Figure 2b considers only the information coming from the delay spreads. Figure 2c corresponds to the clustering of all the data, considering both groups of parameters. The vertical axis shows the sounding frequency, and the horizontal axis the time of day when the sounding was performed. The different colors in the figure show the classification into the eight resulting clusters. The results show that the influence of the two parameters is not equivalent: Figure 2c presents a group of eight clusters very similar to the ones shown in Figure 2b (which correspond to delay spread only), with slight changes due to the influence of the Doppler spread. Conclusions and Future Work In this paper we present the results of a first approach to the ionospheric parameters measured in a sounding campaign conducted with a Lowell Digisonde from the Ebro Observatory in October 2014. The preliminary results given by the K-Means algorithm show that both tests conducted, one with the Doppler spread measurements and the other with the delay spread measurements, lead us to eight clusters, which is a large number in terms of channel design. Future work will focus on re-evaluating the data to reduce the number of clusters to 3-4, with which we can manage to design a variant ionospheric channel model. Another interesting fact is that, in these first tests, where we selected only two of the five described measured parameters, the delay spread configuration of the clusters was mainly maintained when we conducted the final test aggregating the two parameters (both Doppler spread and delay spread). The preliminary results shown in this paper encourage us to follow this research line, clustering the measured parameters in order to design an NVIS channel model. Future work is also focused on testing the clustering with the three other parameters, and even including more variables in the clustering algorithm to evaluate whether the clusters fit closer to what in communications theory would be described as a good, a bad and a regular channel propagation.
2,970.8
2019-11-14T00:00:00.000
[ "Engineering", "Physics" ]
Structural basis for the tRNA-dependent activation of the terminal complex of selenocysteine synthesis in humans Abstract O-Phosphoseryl-tRNASec selenium transferase (SepSecS) catalyzes the terminal step of selenocysteine (Sec) synthesis in archaea and eukaryotes. How the Sec synthetic machinery recognizes and discriminates tRNASec from the tRNA pool is essential to the integrity of the selenoproteome. Previously, we suggested that SepSecS adopts a competent conformation that is pre-ordered for catalysis. Herein, using high-resolution X-ray crystallography, we visualized tRNA-dependent conformational changes in human SepSecS that may be a prerequisite for achieving catalytic competency. We show that tRNASec binding organizes the active sites of the catalytic protomer, while stabilizing the N- and C-termini of the non-catalytic protomer. Binding of large anions to the catalytic groove may further optimize the catalytic site for substrate binding and catalysis. Our biochemical and mutational analyses demonstrate that productive SepSecS•tRNASec complex formation is enthalpically driven and primarily governed by electrostatic interactions between the acceptor-, TΨC-, and variable arms of tRNASec and helices α1 and α14 of SepSecS. The detailed visualization of the tRNA-dependent activation of SepSecS provides a structural basis for a revised model of the terminal reaction of Sec formation in archaea and eukaryotes. INTRODUCTION Synthesis and co-translational insertion of selenocysteine (Sec) is one of only two events in nature to expand the genetic code and incorporate a nonstandard amino acid into the proteome (1,2). Though resembling L-cysteine (Cys), Sec is distinct as it carries a selenol (SeH) moiety in place of a thiol. The comparatively lower pKa (5.2 versus 8.3) and redox potential (−488 mV versus −233 mV) of SeH render Sec fully ionized under physiological conditions (3,4), while its increased nucleophilicity causes Sec to be more reactive than Cys, thus arming selenoenzymes with both enhanced catalytic efficiencies (5,6) and resistance to oxidative inactivation (7). In higher organisms, selenoproteins and selenoenzymes play important biological roles and are pivotal for survival. Glutathione peroxidases and thioredoxin reductases remove reactive oxygen species and protect the cell membrane and DNA from oxidative damage (8)(9)(10), iodothyronine deiodinases maintain thyroid hormone homeostasis (11)(12)(13), and SelenoP regulates selenium (Se) levels (14)(15)(16). The systemic deletion of the cognate tRNA (tRNA Sec ) is embryonically lethal in mice (17), and replacement of Sec with L-serine (Ser) or Cys compromises selenoenzyme activity and selenoprotein folding (18)(19)(20). Moreover, mutations and deficiency of selenoproteins cause disorders affecting various organ systems (21). In contrast to the 20 canonical amino acids and pyrrolysine, there is no cellular pool of free Sec and the cognate SecRS never evolved (22)(23)(24). Instead, Sec synthesis occurs directly on tRNA Sec in all organisms. The cycle commences with a misacylation event during which a promiscuous seryl-tRNA synthetase (SerRS) attaches Ser to tRNA Sec (25), generating the first reaction intermediate, Ser-tRNA Sec (26,27). In the subsequent steps, the bacterial and archaeal/eukaryotic Sec cycles diverge. 
Whereas the bacterial SelA directly converts Ser to Sec (28,29), archaea and eukaryotes employ L-seryl-tRNA Sec kinase (PSTK) and O-phosphoseryl-tRNA Sec selenium transferase (SepSecS) to improve the efficiency of SeH substitution (30). PSTK first activates the hydroxyl leaving group of Ser by ATP-dependent phosphorylation (31), and then SepSecS exchanges the phosphoryl group for SeH in a reaction dependent on mono-selenophosphate and a pyridoxal phosphate (PLP) co-factor (32,33). While many studies have helped elucidate these pathways, questions remain about how these enzymes distinguish tRNA Sec and interact with one another to reliably generate Sec. The evolution of both Sec pathways relied on specialized synthetic and translational machinery to form Sec on tRNA Sec and recode an in-frame UGA stop codon (34). In all species, tRNA Sec features structural elements distinct from canonical tRNAs that are central to the specificity, fidelity, and efficiency of the Sec synthetic enzymes. In contrast to the 7/5 acceptor-T C helix found in canonical tRNAs, tRNA Sec adopts a longer 13-base pair (bp) acceptor-T C helix, resulting in an 8/5 fold in prokaryotes (35) and a 9/4 fold in archaea and eukaryotes (36). As the length of the acceptor arm impacts positioning of both the 5 -phosphate group and the CCA-end (37), its extension may influence productive interactions of tRNA Sec with Sec-synthetic enzymes. Additionally, tRNA Sec harbors enlarged D-and variable arms that could serve as auxiliary recognition determinants and/or anti-determinants. Moreover, the lack of otherwise conserved interactions between the 8th nucleotide of the acceptor arm and the D-arm may engender tRNA Sec with some conformational malleability (38,39). This flexibility could allow productive interactions with SerRS while retaining specificity for SelA, PSTK, and SepSecS. The divergence in the mechanisms of SeH substitution between prokaryotic and archaeal/eukaryotic systems is evident in the differences between SelA and SepSecS. Both enzymes are Fold Type I PLP-dependent enzymes with catalytic sites positioned at the dimer interfaces. Along with SepCysS, SelA and SepSecS are the only Type I PLPdependent enzymes that act on a tRNA substrate, yet each of these enzymes occupy phylogenetically distinct branches (40). Whereas SelA is a functional homodecamer that binds up to 10 tRNA Sec molecules (29), SepSecS is a tetramer (41). SelA primarily recognizes the extended D-and T C arms of tRNA Sec (29), while SepSecS approaches tRNA Sec from the opposite side where it establishes contacts with the variable arm and the minor groove of the acceptor arm (32). Early structural work revealed a cross-dimer substrate binding mode for complex formation wherein SepSecS is pre-ordered for binding and catalysis (42). Despite possessing four equivalent tRNA-binding and active sites, SepSecS only acts on up to two tRNA Sec molecules at a time (43), leading to a half-sites occupancy. In this arrangement, one SepSecS dimer, designated the non-catalytic protomer, docks two tRNAs and situates the CCA-ends near the catalytic sites in the neighboring catalytic protomer. The other dimer is the catalytic protomer which establishes tRNA Sec identity and provides sites for catalysis (32). Surprisingly, with the exception of minor side-chain rearrangements in the phosphate binding loop (P-loop), the catalytic and noncatalytic promoters largely resembled one another in crystal structures (32,43). 
Although previous studies indicated that SepSecS utilizes a tRNA-binding mechanism dissimilar to its closest orthologs (44), the structural elements in SepSecS and tRNA Sec governing formation of the productive ter-minal complex remained poorly understood. Overall, the originally proposed model of SepSecS catalysis failed to explain the half-sites occupancy, the supposed pre-ordered conformation of the enzyme for catalysis, the absence of any substrate-induced conformational changes in SepSecS, and the mechanism whereby the enzyme senses leaving groups and reaction products. To address these outstanding questions, we performed a thorough structural and biophysical analysis of the human holo SepSecS and SepSecS•tRNA Sec binary complex. Our new high-resolution crystal structures reveal that tRNA binding induces a conformational change of the P-loop in the active sites of the catalytic protomer, while also stabilizing the extreme N-and C-termini of the non-catalytic protomer. The structural adjustment of the N-terminus allows the CCA-end of tRNA Sec to access the active-site pocket, while the stabilization of the C-terminus may regulate the overall complex architecture. Furthermore, our data show that complex formation between SepSecS and tRNA Sec is enthalpically driven and mediated by electrostatic interactions between helices ␣1 and ␣14 of the enzyme and the sugar-phosphate backbone of the acceptor-T C arms of tRNA Sec . Moreover, residues of ␣14 help establish the catalytically competent state of the binary complex. Altogether, this study clarifies how enzyme-substrate interactions mediate the specificity and formation of a catalytically competent complex, revising the paradigm for the terminal reaction of Sec synthesis in archaea and eukaryotes. Crystallization and data collection Crystals were obtained by the vapor diffusion, sitting drop method in a 96-well plate format (Hampton Research). Prior to assembly, tRNA Sec was heat denatured for 1 min at 90-95 • C (20 mM Tris, pH 8.0, 150 mM NaCl) and allowed to cool to room temperature and renature on the bench. For crystallization, the holoenzyme and complexes were assembled in the assembly buffer: 20 mM Tris, pH 8.0, 200 mM NaCl, 5% (v/v) glycerol, 10 M PLP, and either 0.5 mM TCEP (native SepSecS) or 5 mM TCEP (SeMet-SepSecS). Crystals of the holoenzyme were grown with 4 mg/ml human SepSecS mixed with 1 mg/ml unacylated tRNA Sec in 0.36 M lithium citrate, 15% (w/v) PEG 3350, and 0.1 M sodium cacodylic acid titrated with 0.04 M HCl to pH 6.3. Crystals of the underivatized binary complex were grown using 2.5-7.5 mg/ml human SepSecS•tRNA Sec (with a 2fold molar excess of SepSecS) in 0.24 M lithium citrate, 9-10% (w/v) PEG 3350, and 0.1 M sodium cacodylic acid titrated with HCl to a pH of 6.2-6.4. Crystals of the SeMetderivatized binary complex were produced with 5.0 mg/ml of human SeMet-SepSecS•tRNA Sec (with a 2-fold molar excess of SepSecS) in 0.28-0.3 M ammonium acetate, 19.8% (v/v) MPD, and 0.1 M sodium citrate titrated with 0.057 M HCl to a pH of 5.5. For all setups, 1 l of the protein or complex was mixed with 1 l of reservoir buffer and crystals were grown at +12 • C. After structure determination, we identified that the originally cloned SepSecS gene harbored a V491A mutation. Crystals grown in the presence of PEG 3350 were cryoprotected with 20% (v/v) ethylene glycol prior to X-ray exposure, and those obtained with MPD were cryoprotected using 30% (v/v) MPD. 
The diffraction data were collected at cryogenic temperatures at the Life Sciences Collaborative Access Team (LS-CAT) beamline at APS-ANL. For the SeMet complex, a Se fluorescence spectrum scan in SepSecS-tRNA Sec crystals indicated that wavelength of 0.979439Å was optimal for anomalous diffraction data collection. The X-ray diffraction data were processed in HKL-2000 (45). Structure determination and refinement The holo SepSecS structure was solved by molecular replacement using PDBID 3HL2 as a starting model and Phaser (46) within the Phenix software package (47). For the underivatized binary complex, the 3HL2 complex, in which tRNA Sec is bound to the tetramer in two alternative conformations (32), was used as an initial model for refinement. The crystal structure of SeMet-SepSecS in complex with human tRNA Sec was determined by single-wavelength anomalous diffraction (SAD) phasing based on SeMet. SHELX was used to determine the positions of 58 (including two alternate confirmations) out of a possible 68 Se atoms (48). To improve the phase estimate, several rounds of density modification in DM (49) were performed. Iterative model building and structure refinement were done in Coot (50) and Phenix (51), respectively. For the purposes of model building, the tRNA conformations were split into two separate molecules and then combined into one molecule for refinement. The occupancy for each conformation of tRNA Sec during refinement was fixed at 0.5. All figures were made using PyMOL Molecular Graphics System, Version 2.4.2 Schrödinger, LLC. Structural analysis To improve visualization of ␣16, feature-enhanced maps (FEM) that minimize noise and model bias were calculated using Phenix (52). The electrostatic potential surface for holo SepSecS and tRNA Sec were calculated in PyMOL (version 2.4.2) with continuum electrostatic calculations using the Adaptive Poisson-Boltzmann Solver (APBS) software package plugin (53). Briefly, holo SepSecS was superimposed onto SepSecS complexed with tRNA Sec in PyMOL. The holoenzyme and tRNA Sec from the binary complex structure were converted to a PQR file using PDB2PQR. The PQR file was then analyzed by APBS using the default settings with a solvent probe radius of 1.4Å, surface sphere density of 10 grid points/Å 2 . Temperature was set to 310 K, ionic strength to 0.15 M in monovalent salt, and the dielectric constants for solute (protein and ligands) and solvent to 2.0 and 78.00, respectively. Tycho unfolding profiles Protein stability and integrity of the WT and SepSecS mutants were evaluated by comparing thermal unfolding profiles generated by a Tycho instrument (NanoTemper Technologies). For sample preparation, all proteins were diluted to 1 mg/ml (17.3 M) in 20 mM HEPES, pH 8.0, 150 mM NaCl, 5% (v/v) glycerol, and 0.05% (v/v) Tween-20. The diluted protein samples were spun for 5 minutes at 12000 rpm to pellet and remove any aggregated protein. Finally, samples were loaded into Tycho capillaries (NanoTemper Technologies) and analyzed in duplicate. Micro-scale thermophoresis (MST) binding assay To follow binding during MST, each SepSecS mutant was labeled using the Monolith Protein Labeling Kit RED-NHS 2nd Generation (NanoTemper Technologies). The labeling reaction was performed according to the manufacturer's protocol. Briefly, 20 M of protein was mixed with the dye (in the supplied buffer), keeping a dye-to-protein molar ratio of 3:1 and incubated in the dark for 30 min. 
Unreacted dye was removed with the supplied, dye-removal column equilibrated with 20 mM HEPES, pH 8.0, 150 mM NaCl, 5% (v/v) glycerol, and 0.05% (v/v) Tween-20. The protein concentration and degree of labeling were determined using UV/VIS spectrophotometry at 650 and 280 nm. A degree of labeling of ∼0.8-1 was typically achieved. Subsequently, bovine serum albumin (BSA) was added to the labeled protein to a final concentration of 0.4 mg/ml. For the MST experiment, the labeled SepSecS was adjusted to 10 nM with MST assay buffer (with 20 mM HEPES, pH 8.0, 150 mM NaCl, 5% (v/v) glycerol, 0.05% (v/v) Tween-20, and 0.4 mg/ml BSA). Prior to complex assembly, tRNA Sec was heat denatured for 1 min at +90-95 • C (20 mM HEPES, pH 8.0, 150 mM NaCl) and allowed to cool to room temperature and renature on the bench. Dilution series were then prepared according to the MO.Control software-protocol (NanoTemper Technologies) generated from an estimated K d . A series of 2-fold dilutions of tRNA Sec were prepared in 10 l of MST assay buffer to yield a range of tRNA Sec concentrations. For each measurement, 10 l of each ligand dilution was mixed with 10 l of labeled SepSecS, which led to a working SepSecS concentration of 5 nM. After 10 min, the samples were loaded into Monolith NT.115 Premium Capillaries (NanoTemper Technologies). MST for WT SepSecS was measured using the Monolith NT.Automated (NanoTemper Technologies) using 15% LED power and medium MST power. All other measurements were performed on a Monolith NT.115Pico instrument (NanoTemper Technologies) at room temperature using 5% LED power and medium MST power. For the R398A and R398E mutants of SepSecS and Mut5 of tRNA Sec , the setup was adjusted to maximize the ligand concentration. A series of 2-fold dilutions of tRNA Sec were prepared in 20 l, and then 18 l of each ligand dilution was mixed with 2 l of 50 nM labeled SepSecS. The tRNA Sec concentrations were input into the MO.Control software and run in Expert Mode, using 5% LED, medium MST power, and an MST on-time of 20 s. For all studied interactions, replicates (n = 3-6) from independently pipetted measurements were analyzed (MO.Affinity Analysis software version 2.3) using the signal from a 5 s MST on-time. van't Hoff calculations To determine the enthalpy and entropy of binding between SepSecS and tRNA Sec , MST for a single set of capillaries Nucleic Acids Research, 2023, Vol. 51, No. 8 4015 was run at +24, +26, +28, +30, +32 and +34 • C to determine the K d for the same sample at each temperature. Replicates of five or six per species were run using 5% LED power and medium MST power and analyzed using an MST ontime of 5 s. From the temperature and the associated K d value, we generated the corresponding van't Hoff plot by plotting ln(K a ) versus 1/T (54). A linear regression of the data (Equation 1) determined the slope and y-intercept, allowing calculation of the enthalpy ( H • ) and entropy ( S • ) of binding according to Equation (1): E. coli SepSecS complementation assay The activity of WT and the human SepSecS variants was assessed by evaluating their ability to rescue the loss of SelA in selA JS2(DE3) cells via the activity of the selenoenzyme, formate dehydrogenase (FDH) (55). The day prior to the assay, we inoculated LB broth supplemented with 1% (w/v) glucose, carbenicillin (100 g/ml), and chloramphenicol (34 g/ml) and grew each strain aerobically for 16 h at +37 • C. Cells were centrifuged and resuspended in sterile PBS to a cell density of 4 × 10 9 cells/ml. 
Each strain was then serially diluted in PBS to a cell density of 4 × 10 5 cells/ml. Subsequently, 10 l of each dilution series was plated onto a row of square LB agar plates containing carbenicillin (100 g/ml), chloramphenicol (34 g/ml), 10 M IPTG, 1 M Na 2 MoO 4 , 1 M Na 2 SeO 3 , 50 mM HCOONa and 0.5% (w/v) glucose. On a separate LB plate for downstream validation experiments, 250 l of each undiluted culture was plated. Plates were incubated in an anaerobic chamber (Type A vinyl 110V, Coy Lab Products) with a gas mix of 90% N 2 , 5% H 2 , 5% CO 2 for 24 h at +25 • C. The next day, the LB top agar (0.75% (w/v) agar) was prepared and supplemented with 1 mg/ml benzyl viologen (BV), 250 mM HCOONa and 25 mM KH 2 PO 4 (pH 7.0). For each assay plate, 10 ml of the supplemented top agar was poured on and gently distributed to cover the plate. To visualize the BV reduction, plates were imaged 30 min after the overlay with the top agar. High resolution structures of holo SepSecS and SepSecS in complex with unacylated human tRNA Sec Optimization of crystallization conditions and purification protocols (32,42) improved the diffraction quality of crystals containing either holo SepSecS, native SepSecS•tRNA Sec , or SeMet-derivatized SepSecS•tRNA Sec . Crystals of holo SepSecS diffracted to 2.25Å, whereas the native and SeMet binary complex crystals diffracted to 2.32 and 2.07Å resolution, respectively (Supplementary Table S1). Diffraction power at higher angles of binary complex crystals grown in the presence of PEG 3350 was of limited quality. In contrast, the complex crystals obtained from MPD-containing buffers consistently yielded well-defined reflections in higher resolution shells, thus permitting SAD phasing experiments. The final maps were of outstanding quality, allowing construction of the most comprehensive models of human SepSecS to date. Experimental SAD phases showed strong peaks in the anomalous difference maps ( Supplementary Figures S1A, B), which allowed positioning of Se atoms in SepSecS and provided an additional layer of confidence for structural analysis (Supplementary Figures S1C-E). It is prudent to mention we later discovered that the SepSecS used for crystallization harbored an inadvertent Val491Ala mutation. The mutation was corrected for all downstream experiments, and given the similarity between Ala and Val, this mutation is unlikely to have any effect on the structural results and interpretations. Structural superimposition of SepSecS tetramers derived from our structures yielded RMSD values within ∼0.4 A (Supplementary Figure S2), establishing that human SepSecS adopted the same quaternary structure in all crystal forms. Both binary complexes exhibited a common tetrameric architecture with the enzyme binding tRNA Sec in a cross-dimer fashion ( Figure 1A). The primary binding elements mediating complex formation are helices ␣1, ␣9, and ␣14 of SepSecS and the acceptor-T C and variable arms of tRNA Sec ( Figure 1B). The non-catalytic protomer employs helices ␣1 and ␣9 to dock the acceptor-T C and variable arms of tRNA Sec , whereas conserved regions of ␣14 in the catalytic protomer position the 3 -end of the acceptor arm of tRNA Sec near the catalytic groove. tRNA Sec binding induces conformational changes in the noncatalytic SepSecS protomer Previous structures suggested that SepSecS adopted a fold pre-ordered for tRNA Sec binding (32), with bindinginduced conformational changes occurring only in the tRNA substrate. 
Yet, such a model could not explain how the enzyme recognizes substrate binding to initiate catalysis nor perceives product formation for release after catalysis. New high-resolution crystal structures allowed us to further probe these questions. Our results showed that tRNA Sec binding induces both short-and long-range restructuring of the extreme termini of the non-catalytic protomer. In the new complex structures, we could trace the protein backbone out to Arg11, thus adding seven residues to the previously visualized protein register. Importantly, in the non-catalytic protomer, a turn of N-terminal ␣1 (residues 18-20) unwinds and the segment encompassing residues 11-20 assumes a coiled conformation ( Figure 2A). The extreme N-terminus folds upwards and away from the active site entrance. Given its proximity to the CCA-end of the bound tRNA Sec , the structural adjustment and movement of the extreme N-terminus may help the aminoacylated tRNA Sec substrate access the active site of the neighboring catalytic protomer. tRNA binding also reshapes the extreme C-terminus of the non-catalytic protomer. The more detailed mF o -DF c electron density difference maps divulged an additional ␣-helix sandwiched between ␣14 of the non-catalytic protomer and ␣1 of the catalytic protomer ( Figure 2B). While this helical density was also present in the SeMetderivatized structure, the maps derived from the native complex crystals were of higher quality in this region. The lack of electron density for a linker between the new helix and the rest of the protein created an ambiguity as to whether the helix belonged to the N-terminus of the catalytic protomer or the C-terminus of the non-catalytic protomer. Moreover, secondary structure prediction algorithms suggested that SepSecS possesses additional ␣-helices at both the N-(residues 3-11) and C-termini (residues 481-491) (Supplementary Figure S3). The new ␣-helix features side-chain densities reaching out towards Arg398 of the non-catalytic protomer, suggesting the new helix possesses acidic residues that engage in electrostatic interactions with ␣14 ( Figure 2B). Importantly, in the catalytic protomer, Arg398 interacts with the Hoogsteen face of the G73 discriminator base to establish tRNA Sec identity ( Figure 2C). Given that the extreme C-terminus of SepSecS is markedly acidic, we modeled residues from Glu477 to Leu493 as helix ␣16 (Supplementary Figure S4). The resulting register positions Glu482 and Asp489 within H-bonding distance from the guanidium group of Arg398 ( Figure 2D). These close contacts with ␣16 prevent Arg398 of the non-catalytic protomer from engaging with G73 of tRNA Sec as the analogous Arg398 residues from the catalytic protomer do ( Figure 2D). The rest of ␣16 sterically blocks the active site in the non-catalytic monomers, thereby precluding the non-catalytic protomer from catalyzing the reaction ( Figure 2B). Interestingly, the overall occupancy of ␣16 was 100%, whereas tRNA occupancy in each binding site was approximately 50%. Thus, the crystal structure suggested that a single tRNA binding event alters the conformation of the extreme C-termini in two monomers, breaking the equivalency of the tRNA-binding sites in human SepSecS. In other words, docking of the first tRNA induces conformational changes that define the catalytic and non-catalytic nature of the SepSecS protomers. 
Altogether, our results demonstrated that tRNA-induced conformational changes in the N-and C-termini of SepSecS lead to the structural asymmetry of the SepSecS•tRNA Sec complex (43), which may be functionally relevant. tRNA Sec and anions stabilize the active site conformation in the catalytic SepSecS protomer The similarly modeled P-loop (residues Gly96-Lys107) in the active sites of all previous structures (32,41,43) implied such a pre-ordered P-loop conformation was catalytically competent. Phosphate and sulfate anions stabilized the P-loop in murine and archaeal holo SepSecS, respectively (33,41), while phosphoserine and thiophosphate stabilized the same conformation in the initial human SepSecS•tRNA Sec crystal structure (32). However, with a minimally altered P-loop and no obvious structural changes in the active site, it was unclear how the SepSecS catalytic cycle would proceed. Our new structures demonstrated that both tRNA Sec and small ligands induce structural rearrangements in the P-loop that may organize the SepSecS active site into a catalytically competent state. Our 2.25-Å resolution structure of holo SepSecS possessed a phosphate ion bound to the P-loop ( Figure 3A) in a distinct binding pocket as previously observed (33,41). Additionally, our new crystal structure of the native SepSecS•tRNA Sec complex, obtained under high-citrate concentrations, revealed that citrate bound to a similar site near the P-loop in both the catalytic and non-catalytic protomers ( Figures 3B). An intriguing prospect of citrate binding is that cellular citrate or similar metabolites may regulate Sec synthesis. The overall positive electrostatic potential of the catalytic groove of SepSecS accommodates large anions mimicking selenophosphate, thiophosphate, or the sugar-phosphate backbone of the tRNA (Supplementary Figure S5). By contrast, the isomorphous SeMet-SepSecS•tRNA Sec complex structure, obtained under lowcitrate concentrations, harbored active sites devoid of large anions. Remarkably, while the P-loops are ordered in the catalytic protomer, they are disordered in the non-catalytic protomer ( Figure 3C), presumably adopting two or more conformations. Moreover, in the absence of tRNA Sec or large anions, the predominant conformation of P-loop residues (Ala103-Lys107) clashes with the placement of tRNA Sec in the catalytic protomer. Thus, positioning of tRNA Sec into the active site requires organization of the Ploop. Based on our structural data, we propose that tRNA Sec binding is a pre-requisite for ordering the P-loop into a catalytically competent state that accommodates entry of the CCA-end into the active site, while small ligand binding may additionally stabilize the active sites in SepSecS. Together the tRNA and ligand binding pockets could help the enzyme distinguish different steps in the reaction cycle. Polar interactions govern binding of SepSecS to tRNA Sec in solution Mapping the electrostatic potential onto the surfaces of SepSecS and tRNA Sec illustrated that positively charged catalytic pockets in SepSecS are complementary to the negative charges on the tRNA Sec backbone and phosphoserine and selenophosphate ligands ( Figure 4A). Further examination of these surfaces in our crystal structures revealed that solvent-exposed residues in helices ␣1 and ␣14 of SepSecS and the sugar-phosphate backbone of the acceptor and T C arms of tRNA Sec comprise the complementary electrostatic surfaces ( Figures 4B, C). 
Surprisingly, sequence conservation of polar residues of ␣1 is weak (Supplementary Figure S3). This lack of conservation implies that the presence of hydrophilic amino acids, and not their identity, is sufficient to engage with tRNA Sec . Conversely, hydrophobic residues of ␣1 are conserved as they anchor ␣1 within the ␣1-␣2-␣1-␣2 tetramerization motif. Consistent with their direct role in recognizing and orienting 73 GCCA 76 of tRNA Sec , stronger sequence conservation is present in ␣14 (Supplementary Figure S3), especially in positions 396-398 ( Figure 4C). To corroborate the significance of electrostatic interactions in mediating the SepSecS:tRNA Sec interaction, we performed MSTbased assays to determine the dissociation constant (K d ) and thermodynamic parameters (i.e. H • and S • ) of complex formation. We determined that WT SepSecS binds unacylated tRNA Sec with K d of 134 nM ( Figure 5A-B), which is in good agreement with the K d of 78 nM obtained using tryptophan fluorescence quenching (43). To calculate H • and S • , we performed MST experiments at +2 • Cintervals over a temperature range of +24 to +34 • C (Fig-ure 5C). As the temperature increased the binding affinity decreased, indicating that complex formation is an exothermic process ( Figure 5D), as expected for an interaction mediated by electrostatics. The MST-derived van't Hoff plot (R 2 = 0.9132) yielded a H • of −64.65 ± 11.30 kJ/mol and S • of −0.0869 ±0.0374 kJ/(mol•K) ( Figure 5E). These data characterize the SepSecS:tRNA Sec interaction as an enthalpically driven and entropically restricted process, whereby electrostatic interactions drive complex formation. A pairwise alignment using the anticodon stem of SepSecS-bound and free tRNA Sec illustrates the entropic cost of binding, as SepSecS induces strain in the acceptor, T C, and variable arms of tRNA Sec ( Figure 5F). Thus, the thermodynamic data support a model in which a favorable enthalpy, derived from electrostatic interactions between the enzyme and tRNA, drives complex formation to overcome the cost of conformational stabilization. Probing the role of helices ␣1 and ␣14 of SepSecS in tRNA Sec binding After establishing that polar interactions mediate SepSecS•tRNA Sec complex formation, we sought to investigate the contributions of individual residues in helices ␣1 and ␣14 in tRNA binding. Consequently, we engineered a series of enzyme mutants (e.g. R26A, S27A, H30A, E37L, K38M, F396V, T397V, R398A, R398E and Q399A) and evaluated their binding to tRNA Sec using MST. Our results showed that primarily positive and solvent-exposed side chains in these helices are important for tRNA Sec binding. We first probed the structural integrity of mutant enzymes by monitoring their thermal unfolding profiles. The similar initial ratios and ratios of the SepSecS mutants indicated the mutants have a similar aggregation status, while their comparable inflection temperatures (T i ) suggest they follow the same unfolding trajectory and adopt the same structure as the WT enzyme (Supplementary Table S2). Subsequent MST analyses of ␣1 and ␣14 mutants provided a nuanced view on the role of individual side chains in tRNA Sec binding and recognition. For instance, R26A and K38M caused an increase in the K d , whereas R33A marginally increased the affinity (Table 1, Supplementary Figures S6 and S7). S27A, H30A, and E37L recapitulated the WT K d value, suggesting their negligible role in tRNA binding. In the case of the ␣14 mutants, we observed a similar range of effects. 
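To make the van't Hoff analysis and the reported thermodynamic parameters above easier to follow, the sketch below works through the arithmetic in Python. The per-temperature Kd values are illustrative placeholders (chosen only to be roughly consistent with the reported enthalpy), not the measured MST data; only the final consistency check uses numbers quoted in the text.

```python
import math
import numpy as np

R = 8.314  # J/(mol*K)

# Illustrative (not measured) Kd values; replace with the MST-derived Kd at each temperature.
temps_C = np.array([24.0, 26.0, 28.0, 30.0, 32.0, 34.0])
kd_M = np.array([110e-9, 130e-9, 155e-9, 185e-9, 220e-9, 260e-9])

T = temps_C + 273.15
ln_Ka = np.log(1.0 / kd_M)                 # Ka = 1/Kd

# van't Hoff: ln(Ka) = -dH/(R*T) + dS/R, so slope = -dH/R and intercept = dS/R.
slope, intercept = np.polyfit(1.0 / T, ln_Ka, 1)
dH, dS = -slope * R, intercept * R         # J/mol and J/(mol*K)
print(f"dH = {dH/1e3:.1f} kJ/mol, dS = {dS/1e3:.4f} kJ/(mol*K)")

# Consistency check with the values reported above (-64.65 kJ/mol and -0.0869 kJ/(mol*K)):
dG = -64.65e3 - 298.15 * (-86.9)           # dG = dH - T*dS at ~25 C, in J/mol
kd_25C = math.exp(dG / (R * 298.15))       # Kd = exp(dG/RT)
print(f"Kd(25 C) ~ {kd_25C * 1e9:.0f} nM") # ~160 nM, close to the directly measured 134 nM
```

The back-calculated Kd of roughly 160 nM agrees reasonably with the 134 nM measured directly, supporting the internal consistency of the reported enthalpy and entropy of binding.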
Perhaps the most striking result was from probing the functionally relevant and highly conserved Arg398, which forms H-bonds with the Hoogsteen face of the G73 discriminator base. Its replacement with Ala (R398A) weakened affinity by more than an order of magnitude, while substitution with Glu (R398E) abolished binding (Table 1, Supplementary Figure S8E). The Q399A mutant, which coordinates the 5 -phosphate binding pocket, slightly diminished the affinity, whereas S393A resembled the WT enzyme (Table 1, Supplementary Figures S6 and S8). Surprisingly, F396V and T397V were stronger tRNA Sec binders, just like R33A (Table 1, Supplementary Figures S6 and S8). Here, we speculated that the removal of a flexible side chain (e.g. Phe and Arg) would decrease the entropy, permitting a closer contact with tRNA Sec to further stabi- lize electrostatic interactions that could then increase the enthalpy of binding. Indeed, these higher-affinity mutants all displayed a marked reduction in the entropy of binding (Table 2) and a greater enthalpy of binding when compared to WT SepSecS (Supplementary Figure S9 and S10). Taken together, our MST data validate a model of complex formation whereby ␣1 provides electrostatic interactions to aid tRNA docking, while ␣14 supplies specific residues that establish tRNA identity. However, the data revealed nuances of how SepSecS refines the strength of the interaction. For example, SepSecS appears to use bulkier side chains to weaken the binding affinity. Maintaining the binding affinity within a certain range may be important for ensuring efficient turnover of the product to the eEFSec and the Sec translational machinery. While our crystal structures revealed that Ser27, His30, Glu37 and Ser393 may form hydrogen bonds with the tRNA backbone atoms, their substitution with Ala had minimal effect on binding affinity. Perhaps, these contacts act in synergy or solvent molecules and/or protein backbone atoms could replace these interactions with ease. Significance of ␣1 and ␣14 of SepSecS in selenoprotein synthesis MST assays characterized the binding of SepSecS mutants to unacylated tRNA Sec but were uninformative regarding their contribution to catalysis. Thus, we delineated whether any of the residues in helices ␣1 and ␣14 play a functional role during selenoprotein synthesis using a well-established benzyl viologen (BV)-based Escherichia coli complementation assay (33). This indirect activity assay evaluated whether co-expression of SepSecS and archaeal PSTK could compensate for the loss of SelA to enable synthesis of a bacterial selenoenzyme, formate dehydrogenase (FDH) under anaerobic conditions in a selA bacterial strain ( Figure 6A). This system demonstrated that SepSecS catalytic competency is tolerant to mutations in ␣1, but especially sensitive to mutations in ␣14. Thus, the role of ␣14 residues in orienting the 73 GCCA 76 end is essential to the enzyme achieving catalytic efficiency. Only co-expression of catalytically active SepSecS and PSTK rescued FDH expression, allowing reduction of the BV substrate from its colorless oxidized form to a reduced, purple form ( Figure 6B). Interestingly, the co-expression conveyed a growth advantage, likely due to the ability of the host E. coli cells to metabolize formate as an energy source (Supplementary Figure S11). The advantage was evident by the denser growth of E. coli cells on agar plates. 
Further, though displaying a range of K d values for tRNA Sec binding (29 nM-1.7 M), ␣1 mutants of SepSecS reduced BV equally well as the WT enzyme, arguing that SepSecS can form a productive complex with tRNA Sec over a wide range of binding affinities ( Figure 6C). Conversely, apart from Q399A, mutations affecting solvent-exposed residues in ␣14 largely led to catalytic impairment. Residues S393A and T397V exhibited a minor deficiency in catalysis at the lowest dilution level, whereas R398A and R398E were completely inactive over the entire dilution range, consistent with earlier functional results (32). Given that R398E was unable to bind tRNA Sec , the lack of catalysis was expected (Table 1). Surprisingly, the high affinity F396V mutant was also incapable of catalysis. Western blots confirmed that all strains expressed PSTK and either WT or a mutant SepSecS (Supplementary Figure S12), thus the absence of BV reduction was solely due to a loss of function and not lack of expression. The acceptor-T C arm of tRNA Sec is the major recognition determinant for SepSecS Because human SepSecS binds primarily to the acceptor-T C and variable arms, we speculated that these two elements (Supplementary Figure S13A) may be the major recognition motifs in tRNA Sec (32). To assess their significance for complex formation, we employed mutational studies and MST binding assays. We engineered bacteriallike 8/5-fold (Mut 3) and canonical-like 7/5-fold (Mut 5) tRNA Sec mutants (Supplementary Figure S13B, versus C) (26), as well as hybrid constructs which either completely Figure S13D). Our MST data established that Mut3, Var, and vSer bind to WT SepSecS (Table 3 and Supplementary Figure S14). Given its promiscuity towards bacterial tRNA Sec , the binding of SepSecS to the 8/5-fold Mut3 was expected. However, the binding affinity was significantly lower (∼2 M versus 134 nM) compared to WT tRNA Sec . Conversely, the SepSecS interaction with Mut5 tRNA Sec exhibited a binding curve with the right part of the curve trailing up with no plateau, indicating that SepSecS cannot specifically engage with Mut5 (Supplementary Figure S14D). Consequently, we concluded that the 13 bp-long acceptor-T C arm of tRNA Sec is the major determinant for SepSecS recognition. On the other hand, the extended variable arm of vSer raised the K d to 449 nM, while binding to the Var mutant that lacked a variable arm resembled the K d value for WT tRNA Sec . Taken together, the variable arm of tRNA Sec does not appear to be a recognition element for SepSecS but may help the enzyme discriminate against tRNAs with extended variable arms, such as tRNA Ser . DISCUSSION Recognition of a specialized tRNA Sec and its discrimination from canonical tRNAs was crucial for the expansion of the genetic code to incorporate Sec into selenoproteins while maintaining translation fidelity (56). tRNA Sec possesses extended acceptor, T C, D-and variable arms compared to canonical tRNAs that aid the Sec synthetic machinery in their recognition and discrimination of tRNA Sec . Here, we sought to delineate the precise elements governing formation of a productive complex between human SepSecS and tRNA Sec . Previous studies proposed that human SepSecS adopts a pre-ordered conformation for a high-affinity interaction with tRNA Sec (32,41,43). 
Substrate binding was believed to occur by a sequential mode of allosteric regulation, where binding of one tRNA Sec molecule facilitated binding of second tRNA Sec to the cross-dimer and reduced the binding affinities of the non-catalytic protomer (43). Within this model, it remained unclear how a pre-ordered enzyme could perceive substrate acquisition or product release and what could be the mechanism for allosteric regulation (40,57). Recently, we obtained new high-resolution crystal structures which revealed novel features of SepSecS that hinted at an alternative mechanism of complex formation. To resolve these questions within the framework of our new structures, we deployed a combination of biochemical, biophysical, and functional assays. Electrostatic potential mapping indicated that SepSecS employs charge-based interactions to recognize and engage tRNA Sec . MST data confirmed that the enzyme uses ␣1 and ␣14 residues to engage in polar interactions with tRNA Sec to generate a favorable binding enthalpy that compensates for the entropic cost of stabilizing the tRNA Sec conformation and elements in SepSecS. Our measurements affirmed that SepSecS primarily recognizes the extended 13-bp long acceptor-T C fold of tRNA Sec . On the other hand, the variable arm of tRNA Sec may serve as a discriminatory element. While most tRNAs have four or five nucleotides in their variable loop, class II tRNAs (including tRNA Sec ) have 10 or more nucleotides (58). Our MST data suggests that SepSecS may discriminate against other class II tRNAs with extended variable arms, such as tRNA Ser . Additionally, this element may serve as an anti-determinant preventing false recognition by aminoacyl-tRNA synthetases other than SerRS and other enzymes and factors involved in protein translation (37). This quality check could help prevent mis-incorporation of Ser and phospho-Ser, but not Cys, at Sec UGA codons. Surprisingly, ␣1 residues that contribute to tRNA binding minimally impact catalysis, as the enzyme could sustain catalysis over a wide range of binding affinities (K d from 29 nM-1.7 M). By contrast, nearly all ␣14 mutants exhibited impaired catalysis. This impairment concurs with structural data showing that conserved ␣14 residues deliberately engage the 73 GCCA 76 -3 end of tRNA Sec ( Figure 4C). Thus, ␣14 residues do not merely aid substrate binding, but actively participate in orienting and positioning the CCA-end and the attached phosphoseryl moiety within the active-site groove for catalysis. Since Arg398 directly engages with the G73 discriminator base, its import is clear. However, the role of Phe396 was not as unambiguous. We had previously speculated that Phe396 forms -stacking interactions with one of the nucleobases of the CCA-end. However, the CCA-end was poorly resolved in our crystal structures, and the F396V mutation minimally strengthened binding affinity, indicating that Phe396 is not essential to the binding energy. Our MST data also demonstrated that F396V caused a significant loss of entropy, implying that the F396V complex adopts a productive-like conformation but with a more rigid CCA-end. Such rigidity could impair optimal positioning of the phosphoseryl moiety near the P-loop and PLP or hinder movement of the aminoacyl group through the catalytic cycle. Likewise, introduction of Val in place of Thr397, which interacts with N7 of the G73 discriminator just upstream of the CCA-end, decreased the entropy of binding and impaired catalysis. 
Altogether, our results argue that the CCA-end requires some flexibility for optimal catalysis. Conservation of an aromatic residue in the Phe396 position in archaea and eukaryotes, and conservation of Thr397 among vertebrates supports proposed roles for these residues in catalysis (Supplementary Figure S3). The apparent contradiction that ␣1 residues that participate in tRNA binding negligibly affect catalysis could be because the functional assay relied on the interaction between human SepSecS and bacterial SelC and not human tRNA Sec . Given the strict conservation of 73 GCCA 76 -3end across all tRNA Sec species, the interaction of ␣14 with either SelC or tRNA Sec should be similar (39), but significant differences may be present at the interface between ␣1 and the acceptor arm. The difficulties in synthesizing large quantities of Sep-tRNA Sec (55,59) limited our structural and in vitro experiments to using unacylated tRNA Sec . Thus, we could not interrogate the role of the aminoacyl moiety in the binding and catalysis by SepSecS. Hence, enzymatic studies that could directly determine k cat and K M values of the SepSecS-catalyzed reaction would further elucidate the mechanisms of enzyme turnover and catalysis. Alternatively, the similar catalytic efficiency of SepSecS over a wide range of tRNA Sec binding affinities ( Figure 6C, D) may instead reflect that SepSecS and tRNA Sec are components of a multi-enzyme Sec-synthetic complex in the cell (60,61). Such a complex would improve the efficiency of Sec synthesis and limit Se toxicity, while being compatible with the half-sites occupancy of SepSecS. Within such a larger complex, a single mutation in ␣1 could have minimal impact on tRNA Sec binding and catalysis, as we observed in our study. Altogether, binding and functional studies combined with our high-resolution crystal structures of holo and tRNA-bound SepSecS delineate a revised model of the terminal Sec-synthetic reaction (Supplementary Video S1). In holo SepSecS, all four monomers are equivalent, possessing disordered catalytic P-loops and C-termini ( Figure 7A). Upon binding of the first Sep-tRNA Sec , the N-terminus of the docking SepSecS monomer extends, unwinds into a coiled conformation, and tilts away from the active site entrance to accommodate the acceptor arm of tRNA Sec and allow access of its CCA-end to the active site. Binding stabilizes ␣16 in the neighboring monomers, causing steric occlusion of their tRNA-binding and catalytic sites, thus breaking the equivalency of these sites within the tetramer ( Figure 7B, top). Since the N-terminus participates in the tetrameric interface, tRNA-induced changes in this region could relay substrate binding across the enzyme, such that tRNA Sec binding in one monomer promotes stabilization of ␣16 in the two neighboring monomers (31). These conformational changes lead to a clear demarcation of the 'docking', noncatalytic and catalytic SepSecS protomers. Binding of substrates, large anions and/or tRNA Sec , is sufficient to organize the P-loop of the enzyme ( Figure 7B, bottom), perhaps via a mechanism of induced fit or conformational selection. Given that the anions may mimic the selenophosphate donor and phosphate leaving group (32), their binding may inform the enzyme about its state along the reaction coordinate. In the end, our results show that tRNA Sec binding initiates a series of conformational adjustments that facilitate transition of the holoenzyme into a catalytically competent state. 
However, additional studies addressing the physiological relevance of the half-sites occupancy of SepSecS are warranted. Our study provides a foundation for further manipulation of the SepSecS•tRNA Sec interaction to address unanswered questions about the Sec translational machinery and selenoprotein synthesis. Moreover, because the catalytic mechanism of SepSecS involves the anhydroalanyl species, modulation of this enzyme could lead to engineering of a direct system for synthesis of covalently modified proteins, which would be of immense value in the realm of synthetic biology. DATA AVAILABILITY The coordinates and structure factors are deposited in PDB with the accession codes 7L1T (for holo SepSecS), 7MDL (for SepSecS•tRNA Sec ) and 8G9Z (for SeMet-SepSecS•tRNA Sec ).
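For readers who wish to inspect the deposited models listed above, the following is a minimal, hedged sketch for retrieving the coordinates with Biopython; the output directory and file format are arbitrary choices, and the chain count is only a quick sanity check on the downloaded files.

```python
from Bio.PDB import MMCIFParser, PDBList

entries = {"7L1T": "holo SepSecS",
           "7MDL": "SepSecS-tRNASec",
           "8G9Z": "SeMet-SepSecS-tRNASec"}

pdbl = PDBList()
parser = MMCIFParser(QUIET=True)

for code, label in entries.items():
    # Downloads e.g. ./7l1t.cif from the wwPDB mirrors and returns the local path.
    path = pdbl.retrieve_pdb_file(code, pdir=".", file_format="mmCif")
    structure = parser.get_structure(code, path)
    n_chains = sum(1 for _ in structure.get_chains())
    print(f"{code} ({label}): {n_chains} chains")
```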
9,874.2
2023-03-17T00:00:00.000
[ "Biology", "Chemistry" ]
GeneSis: A Generative Approach to Substitutes in Context The lexical substitution task aims at generating a list of suitable replacements for a target word in context, ideally keeping the meaning of the modified text unchanged. While its usage has increased in recent years, the paucity of annotated data prevents the finetuning of neural models on the task, hindering the full fruition of recently introduced powerful architectures such as language models. Furthermore, lexical substitution is usually evaluated in a framework that is strictly bound to a limited vocabulary, making it impossible to credit appropriate, but out-of-vocabulary, substitutes. To assess these issues, we proposed GeneSis (Generating Substitutes in contexts), the first generative approach to lexical substitution. Thanks to a seq2seq model, we generate substitutes for a word according to the context it appears in, attaining state-of-the-art results on different benchmarks. Moreover, our approach allows silver data to be produced for further improving the performances of lexical substitution systems. Along with an extensive analysis of GeneSis results, we also present a human evaluation of the generated substitutes in order to assess their quality. We release the fine-tuned models, the generated datasets, and the code to reproduce the experiments at https://github.com/SapienzaNLP/genesis. Introduction The lexical substitution task (McCarthy and Navigli, 2009) requires a system to provide adequate replacements for a target word in a given context. Through the years, two lexical substitution variants have been proposed, i.e., candidates ranking and substitutes prediction (Melamud et al., 2015). While the former aims at ranking a list of predefined candidate substitutes for a word in a given context, the latter is more challenging, requiring a system to output a sorted list of replacements without any predefined substitutes inventory. Although it is not explicitly required by either of the two tasks, a good substitution system is expected to capture the semantics of its input and implicitly perform a soft disambiguation. For example, denoting bright as target word in the context sentence "She is a bright student", we expect a good substitution system to provide a set of substitutes closer to {intelligent, clever, smart} than to {luminous, clear, light}. Thanks to this implicit disambiguation capability, lexical substitution has shown its usefulness in several scenarios, such as word sense induction (Başkaya et al., 2013;Amrami and Goldberg, 2018;Arefyev et al., 2019), data augmentation (Jia et al., 2019;Arefyev et al., 2020), word sense disambiguation (Hou et al., 2020) and semantic role labeling (Bingel et al., 2018). However, despite having been employed in numerous downstream tasks, the lexical substitution task still presents unresolved issues that need to be addressed. First, the shortage of large-scale corpora annotated with the expected substitutes hinders the use of supervised techniques, including powerful Transformer-based language models, thus leaving the task in a possibly sub-optimal setting. Second, the evaluation metrics provided for the task are bound to the test vocabulary, hence they fail to capture the quality of substitutes outside the vocabulary; moreover, the vocabulary is usually small and often biased by the particular linguistic style and background of the annotators who developed the datasets. 
1 In this paper, we focus on substitutes prediction and address the above problems by proposing GEN-ESIS, a generative approach to lexical substitution. We find that not only is this novel approach effec-tive when tested on the lexical substitution task, but also that it can be applied to generate substitutes from raw text, enabling the effortless construction of large-scale silver data. Moreover, we conduct an annotation task to analyze the results of our model and to validate out-of-vocabulary generations. Our contribution is threefold: • A novel generative approach to lexical substitution that outperforms the state of the art. • An automated method to produce high-quality silver data for lexical substitution. • An annotation task to evaluate out-ofvocabulary generations. Related Work Through the years, several approaches have been developed to tackle the lexical substitution tasks, but, to the best of our knowledge, ours is the first attempt to apply a generative approach to it. In what follows, we first review the principal approaches and resources for the lexical substitution tasks, and then provide a brief overview of generative methods across different fields. Lexical Substitution Approaches Since its presentation by McCarthy and Navigli (2007), a variety of different approaches have been explored to produce the substitutes that better fit the context. Earlier methods made use of external knowledge bases such as WordNet (Miller, 1995) to extract possible substitutes and construct delexicalized features (Szarvas et al., 2013), or they employed word embeddings to represent both the target and the substitutes in their context and rank them through ad-hoc metrics (Melamud et al., 2015(Melamud et al., , 2016. However, the recent spread of pre-trained language models has deeply reshaped approaches to lexical substitution, standardizing the use of contextualized word representations to provide a context-aware distribution over the output vocabulary. The first work in this direction was that of Garí Soler et al. (2019), where ELMo (Peters et al., 2018) embeddings are used to rank substitutes according to their cosine similarity to the target. In Zhou et al. (2019), instead, the input context is represented through a BERT model (Devlin et al., 2019). The authors partially mask the target word in its context, in order to obtain a representation that includes a faded target information; this representation is then used to obtain a probability distribution over the BERT output vocabulary that is not biased towards the target. Finally, the top scoring substitutes are reranked with a measure of similarity that takes into account both the cosine similarity and the relative attention scores between the target and the substitute. In a similar vein, Arefyev et al. (2020) proposed an extensive comparison of how several pretrained language models perform on the task, also injecting information about the target from word embeddings or rephrasing the input with dynamic patterns. Their best performing method produces an XLNet (Yang et al., 2019) contextualized embedding of the target word combined with static frequency information about proximity between target and substitute. This combined representation is then used to obtain a ranking of substitutes from the XLNet vocabulary that is further refined with postprocessing. 
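To make the language-model-based ranking idea behind these approaches concrete, the snippet below queries a masked language model for in-context replacements. It is a deliberately simplified illustration, not the exact procedure of Zhou et al. (2019) or Arefyev et al. (2020): it fully masks the target rather than partially masking it, and it omits the reranking and postprocessing steps; the model name and top-k value are arbitrary choices.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-cased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-cased")

context, target = "She is a bright student.", "bright"
masked = context.replace(target, tok.mask_token, 1)   # fully mask the target (simplification)
enc = tok(masked, return_tensors="pt")

with torch.no_grad():
    logits = mlm(**enc).logits

mask_pos = (enc.input_ids[0] == tok.mask_token_id).nonzero(as_tuple=True)[0]
probs = logits[0, mask_pos[0]].softmax(dim=-1)
top = torch.topk(probs, 10)
print([tok.convert_ids_to_tokens(int(i)) for i in top.indices])
```

Because the output is restricted to the model's sub-word vocabulary and the target information is discarded entirely, such rankers require exactly the kind of refinements (partial masking, target injection, reranking) described above.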
Despite the improvement in performances that large language models brought to the task, these methods work in a potentially sub-optimal setting, since they are used as feature extractors and are not finetuned, due to the paucity of large-scale annotated data (Garí Soler et al., 2019). Lexical Substitution Resources The first dataset released was the Lexical Substitution Task (LST), proposed as test set for the task by McCarthy and Navigli (2007). It contains 2010 sentences with a single target word per sentence, including around 200 distinct targets. Each instance is associated with several substitutes that were chosen by five native English speaker annotators. The small coverage of LST led to the creation of the Turk bootstrap Word Sense Inventory (Biemann, 2012, TWSI), a first attempt to collect a large-scale dataset. The author deployed Amazon Mechanical Turk to annotate 25K sentences from Wikipedia, which, however, only cover noun targets. To overcome this shortcoming, Kremer et al. (2014) proposed Concept In Context (CoInCo), a dataset of 2474 sentences covering 3874 distinct targets with diverse part-of-speech tags. Each sentence has one or more targets, for a total of 15k instances annotated through Amazon Mechanical Turk. Generative Approaches Generative pre-trained language models such as GPT (Radford et al., 2018) have shown to be highly effective in Natural Language Generation, catching the attention of the research community. Indeed, pre-trained models such as BART (Lewis et al., 2020) suit a wide range of NLP applications. Thanks to the flexibility of seq2seq learning, these models can be easily adapted to different tasks, including sequence and token classification or sequence generation, inter alia. Interestingly, generative models have also been employed in tasks that are not usually formulated as sequence-to-sequence learning; for example, there have been effective applications of seq2seq architectures to definition modeling (Bevilacqua et al., 2020), cross-lingual Abstract Meaning Representation (Blloshmi et al., 2020), end-to-end Semantic Role Labeling (Blloshmi et al., 2021 and Semantic Parsing (Procopio et al., 2021;Bevilacqua et al., 2021a). Inspired by these successful applications of generative approaches we here propose applying, for the first time, a generative seq2seq model to the lexical substitution task. Differently from previous approaches in the field, we finetune a pre-trained model to produce substitutes starting from a word in its context. Moreover, our method can be used to generate silver data for the lexical substitution task, reducing the lack of annotated data. GENESIS The task of substitutes prediction requires finding replacements for a target word in a context that ideally do not modify the overall meaning of that context. More formally, given a target word w t occurring in a context sentence x = w 1 , . . . , w n , a substitution system has to assemble a ranked list s of possible replacements for w t according to its context x. Consider as an example the context The roses are bright. (1) where the target w t = bright appears. As output of our system, we expect a generated list of substitutes, such as s = [vivid, luminous, shining]. 
We tackle the lexical substitution task with a two-stage process: first, we use a seq2seq model that takes as input both the context and the target, and generates several possible lists of substitutes (substitutes generation, Section 3.1); second, we process the substitutes collected with the first step to obtain the final, ranked list (substitutes ranking, Section 3.2). The whole process is described in Figure 1. Throughout the paper, we will consider each target word w t to be univocally associated with its part-of-speech (POS) tag. To improve readability we discard POS tags from the notation. Substitutes Generation We assume to have a seq2seq model M that, given a context x where a target word w t occurs, is able to generate a sequence of substitutes s by modeling the probability where s 0 is a special start token. In order to structure both the target and the context as a single input sequence for M , we identify the target in its context by surrounding it with two special tokens. Formally, for a target word w t in x we define the input m wt,x as: Thus, example (1) is structured as: The roses are <t>bright</t>. The expected output s, instead, is a commaseparated sequence s = s 1 , . . . , s q where each word s i is a possible substitute for w t in x. At training time, we provide the model with a sequence of gold substitutesŝ =ŝ 1 , . . . ,ŝ k also structured as a comma-separated list. Thus, we can train M by minimizing the cross-entropy loss between the gold and the generated sequences. At inference time, for each input sequence m wt,x we actually produce several substitute sequences s 1 , . . . , s b obtained with beam-search decoding (Figure 1(a)). Substitutes Ranking Once the model has produced a set of substitute sequences, we collect the unique substitutes and rank them according to the context. Collection and Filtering First, we create the set W of words 2 that occur across the sequences s 1 , . . . , s b . W could contain inappropriate substitutes, such as the target itself or words that are closely related to the target but have a different part of speech (Figure 1(b)). To provide a cleaned list of substitutes, for each target word we define a possible output vocabulary and remove from W all the words that are not part of it, including the target itself (Figure 1(b), bold)). We denote this reduced set as W clean . The building of the output vocabulary is detailed in Section 4. Figure 1: A schematic representation of GENESIS. The input is fed to a seq2seq model that produces several sequences of substitutes (a); the substitutes are collected and filtered according to an output vocabulary (b). We create a new sentence for each substitute by using it as replacement for the target in the original context (c); finally, we use the contextualized representations of the substitutes to rank them by similarity with the target (d). Contextualization We denote with j the index of the target word w t in context x, and the contextualized representation of w t in x as: where N LM (x) is the representation of x obtained through an arbitrary neural language model. Then, for each valid substitute w c ∈ W clean , we obtain a modified context x wc by replacing the target word w t with w c (Figure 1(c)). 
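The generation stage just described can be sketched as follows, assuming a BART checkpoint already fine-tuned as in Section 3.1 (the "path/to/genesis-bart" path is a placeholder; the authors release their fine-tuned models in the GitHub repository, and the tokenizer is assumed to already contain the <t> and </t> target markers). The beam size follows the value the paper reports as optimal.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained("path/to/genesis-bart")        # placeholder checkpoint
model = BartForConditionalGeneration.from_pretrained("path/to/genesis-bart")

context, target = "The roses are bright.", "bright"
marked = context.replace(target, f"<t>{target}</t>", 1)             # mark the target in its context

enc = tok(marked, return_tensors="pt")
out = model.generate(**enc, num_beams=50, num_return_sequences=50,  # beam size 50, no sampling
                     do_sample=False, max_length=64)
beams = tok.batch_decode(out, skip_special_tokens=True)

# Each beam is a comma-separated list of substitutes; pool the unique words and drop the target.
candidates = {w.strip() for beam in beams for w in beam.split(",") if w.strip()} - {target}
```

From here, each surviving candidate is substituted into the original context and ranked as described next.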
Now we can obtain a contextualized representation of each substitute as: Ranking To produce the final ranking of the substitutes (Figure 1(d)) we compute the cosine similarity of the target word vector with respect to that of each substitute, i.e., cossim(v x,j , v xw c ,j ) ∀w c ∈ W clean , and order the substitutes by their descending cosine similarity with the target. Vocabulary Definition One of the challenges in the lexical substitution task is the lack of a predefined substitute inventory, i.e., for each target word we lack a reference list of possible replacements. Importantly, with GEN-ESIS we can produce approximately any word in the English vocabulary as substitute, although standard evaluation benchmarks consider valid only the words in the test vocabulary. To reach a suitable trade-off between the generative power of the model and the necessity of a fair evaluation, we define an output vocabulary that the model has to stick to, i.e., we discard any generated word that is not contained in it (Section 3.2). To build our vocabulary, we take advantage of WordNet 3.0, a widely-used lexicographic resource structured as a graph. Each WordNet node is a synset, i.e., a set of different lexicalizations with the same meaning and POS, while edges represent semantic relations between synsets, such as hyponymy and hypernymy. For example, one of the synsets for the adjective bright is {bright, brilliant, vivid}, that is connected through the similar-to semantic relation to the synset {colorful, colourful}. For each target w t we compute a set of synsets D wt that defines the output vocabulary. We initialize D wt as the set of synsets S wt where w t appears. 3 Then, for each s wt ∈ S wt we expand D wt by collecting all the neighbors N (s wt ) of s wt ; finally, for each neighbor n that is connected to s wt through a hyponymy, hypernymy, similar-to or see-also relation, we add all the neighbors of n to D wt . We define as possible substitutes for w t the union of all the lexicalizations appearing in D wt . This procedure, visualized in Figure 2, builds a vocabulary that covers all the senses enumerated in WordNet for a given target, defining a reduced range of available substitutions that is still challenging for the task. To provide a quantification of the coverage of the output vocabulary, we report that, when computed for the LST targets, it includes 25 842 distinct substitute words, while the Figure 2: A visual example of how the vocabulary is constructed. We consider s w , one of the two synsets where the noun wine appears (purple oval). First, we consider all its neighbors (orange ovals), then for each neighbor n of s w connected through hypernymy, hyponymy, similar-to and see-also relations (double orange oval), we include all the neighbors of n (green ovals). The neighbors of synsets connected through different relations to s w are discarded (grey ovals). original test set has 3154 possible substitutes. Their intersection covers 2013 words. Dataset Generation GENESIS is able to generate substitutes starting from a word in its context. Thus, starting from a source dataset C of target words in context, we can exploit GENESIS to produce ranked lists of substitutes and, associating the generated substitutes with the targets, obtain silver datasets for the lexical substitution task. 
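The ranking step above (Equations (2)-(3) and Figure 1(d)) can be sketched as follows, using BERT large cased and the average of the last four hidden layers specified in the experimental setup; pooling sub-word pieces by averaging is an assumption of this sketch rather than a detail stated in the paper.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-large-cased")
bert = AutoModel.from_pretrained("bert-large-cased", output_hidden_states=True)
bert.eval()

def word_vector(words, j):
    """Contextualized vector of the j-th word: average of the last four hidden
    layers, pooled (by averaging) over the word's sub-word pieces."""
    enc = tok(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**enc).hidden_states            # tuple of (1, seq_len, dim) tensors
    layers = torch.stack(hidden[-4:]).mean(dim=0)[0]  # (seq_len, dim)
    pieces = [i for i, w in enumerate(enc.word_ids()) if w == j]
    return layers[pieces].mean(dim=0)

words, j = ["The", "roses", "are", "bright", "."], 3
v_target = word_vector(words, j)                      # v_{x,j} for the original context

candidates = ["vivid", "luminous", "shining", "intelligent"]
scores = {c: torch.cosine_similarity(
              v_target, word_vector(words[:j] + [c] + words[j + 1:], j), dim=0).item()
          for c in candidates}                        # similarity in the modified contexts x_{w_c}
print(sorted(scores, key=scores.get, reverse=True))
```

The same generate-filter-rank pipeline is what produces the silver data from a source corpus C, as described above.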
To this end, first, we finetune GENESIS on a gold dataset for the lexical substitution task; then, we give as input to the finetuned model the corpus C, generating as output a list of replacements for each input instance. The input instances, associated with the generated substitutes, constitute the silver corpus. To ensure the quality of the generated substitutes, we apply a similarity threshold λ on the ranking step of GENESIS (cf. Section 3), keeping only the substitutes whose similarity to the target is higher than λ. As source dataset C we exploit SemCor (Miller et al., 1993), a manually annotated corpus where instances are sense-tagged according to the WordNet sense inventory 4 . While it is typically used as a training corpus for English Word Sense Disambiguation (WSD), as we show, its manually-curated sense distribution is also beneficial for lexical substitution. Indeed, having a frequency of target words that Experimental Setup In this section, we specify the setting used to tackle the lexical substitution task. Model We use BART (Lewis et al., 2020) as seq2seq model, trained through the RAdam optimiser (lr 10 −5 ); we train it for a maximum of 100 epochs, with early stopping and patience set to 2 epochs. The input is fed to the model in batches of up to 600 tokens. To obtain the contextualized representations in Equations (2) and (3) we use the average of the last four 5 hidden layers of BERT large cased. Both BART and BERT are used through the HuggingFace (Wolf et al., 2020) implementations. Datasets We finetune BART on the concatenation of CoInCo and TWSI. Indeed, the former is originally distributed without training split, with only test and dev sets released; the latter, instead, contains only nouns, so it is not suitable for training alone. Thus, we concatenate the two datasets and produce new train and dev splits by randomly reserving 30% of the target contexts for the dev set CT D and the remaining 70% as training split CT T . As test set we use LST, the dataset originally released for the SemEval-2007 task. As regards the generated datasets, we denote with GENSEMCOR n the dataset obtained by generating substitutes for a sample of n contexts randomly drawn from Sem-Cor. Starting from the size of CT T , i.e., 37k sentences, we generate four different samples by doubling the dataset size each time, with each sample including all the sentences in the previous one, i.e GENSEMCOR 37k ⊂ GENSEMCOR 74k and so on. The final dataset, that includes all the SemCor sentences for which at least one substitute has been generated, is identified by GENSEMCOR. We highlight that, when training on GENSEMCOR datasets, we use only silver data, without concatenating gold corpora. The properties of the gold and generated datasets are summarised in Table 1. Evaluation Metrics We evaluate the performance of our model using the metrics originally proposed for the task (McCarthy and Navigli, 2009), i.e., best and out-of-ten (oot), together with their mode variations. The best metric allows a system to produce as many substitutes as are considered useful, by dividing the credit for each correct guess by the number of produced guesses. The best substitute should ideally be provided first. For each test instance, we provide the scorer only with the first substitute from the ranking detailed in Section 3.2. The oot metric evaluates up to ten candidates that have all the same relevance for the target, without dividing the credit of each correct guess by the number of produced guesses. 
In this case, we provide the scorer with the first ten substitutes as ranked by GENESIS. The mode variations of the best and oot metrics evaluate only the subset of the test set where a mode exist, i.e., where a majority of the annotators selected a single substitute as the best replacement. The formalization of the metrics employed is detailed in the supplementary materials, Section A. In addition to the standard metrics, we follow Arefyev et al. (2019) and also report p@1, p@3 and r@10. Fallback Strategy When evaluating a system on the oot metrics, there is no advantage in providing less than ten substitutes. For this reason, whenever the procedure described in Section 3 results in less than ten substitutes, we apply a two-stage fallback strategy. First, we include in the substitutes all those words generated that were discarded when cutting on the output vocabulary. Second, if the list still has less than ten candidates, we extract the substitutes from the vocabulary that are not produced by the model, rank them according to their cosine similarity with the target (cf. Section 3.2) and extend the sequence produced until it reaches ten substitutes. Parameter Selection GENESIS has several features that can be personalized, from the model configuration to the generation parameters. With the aim of obtaining the best-performing setting for the lexical substitution task, we conduct an extensive tuning of GENESIS configuration, testing how each parameter affects the results on the dev dataset (CT D ). Here we briefly describe the best-performing settings, while we report the results for each variation of the parameters in the supplementary material (Sections B, C). Model Parameters We experiment with different values of dropout, encoder layer dropout and decoder layer dropout. We investigate how the variation of each parameter influences the performances by exploring values in the range [0, 0.6]. When training on the CT T dataset, the best performing setup uses dropout = 0.5, encoder layer dropout = 0.2 and decoder layer dropout = 0.6. This setting is used to perform all the experiments on the CT T dataset and to generate the datasets from SemCor. Then, a new selection of parameters is made on the GENSEMCOR 37k dataset, resulting in a new configuration with dropout = 0.1, encoder layer dropout = 0.6 and decoder layer dropout = 0.2. This configuration is used for the experiments on the GENSEMCOR datasets. Generation Parameters Several decoding strategies are available for seq2seq models. We experiment with beam sizes and check whether the use of sampling is beneficial for the task. The optimal configuration has beam size 50 and no sampling. Dataset Generation Parameters The generated substitutes are filtered through the similarity threshold λ. We tune it experimenting with the values in [0.5, 0.7, 0.8], with the best performing dataset obtained with 0.7. Experiments Once all the parameters have been tuned, we train GENESIS on CT T , cut the substitutes generated according to the output vocabulary, apply the fallback strategy and test on the LST dataset. Baseline Using a predefined vocabulary limits the possible outputs of our model; therefore, to assess whether the performances of GENESIS are mainly influenced by the restricted vocabulary, we Table 2: Results on the lexical substitution task of GENESIS trained on the CT T dataset (third block) and on GENSEMCOR (fourth block). We compare GENESIS with the two latest approaches to the task (first block) and to a baseline (second block). 
-and -indicate that the output vocabulary cut and fallback strategy are discarded, respectively. For all the metrics, the higher the better. devise a baseline that for each target word ranks by cosine similarity all the substitutes contained in the vocabulary built for the target, deploying the contextualization detailed in Section 3. Comparison Systems We choose as comparison systems the two most recent approaches to the task, i.e., the BERT-based system proposed by Zhou et al. (2019) and the best-performing solution presented by Arefyev et al. (2020), i.e., an XLNetbased model enhanced with the injection of specific embedding information about the target word. These two models achieve the currently highest reported results on the task. In these approaches the language models are used in a feature-based approach, i.e., they are not finetuned for the task. As already noted by Arefyev et al. (2020), both models output a probability distribution over a BPE-based vocabulary, making it tricky to reconstruct words at inference time. GENESIS, instead, overcomes this limit by relying on the decoding strategy of the generative model. Results We report our results in Table 2. The baseline (second block) performs poorly when it comes to predicting the most appropriate substitute (best, p@1, p@3), while it is quite strong in evaluating the top ten substitutes (oot, r@10). This is somehow to be expected: the average number of substitutes in the test set is four (cf . Table 1), hence, there is a good chance that the ten substitutes in WordNet that are closest to the target include the gold ones. As regards the results obtained with GENESIS, in each configuration we report the average of five runs with their standard deviation. First, we inspect the performances of GENESIS without output vocabulary and without fallback strategy (GENESIS -). The generative approach alone is noisy, showing performances that are lower than the state of the art in any metric. Indeed, when adding the cut on the output vocabulary (GENESIS -), the scores increase on best, best-mode, p@1 and p@3, reaching performances that are higher than the previous state of the art on best and p@3. At the same time, though, reducing the substitutes to the output vocabulary leads to the production of less than ten substitutes, thus decreasing the recall scores, as shown by the drop on r@10 and oot. This is further confirmed by the improvement on the oot and r@10 metrics given by the use of the fallback strategy (GENESIS), i.e., when using the complete system and always providing ten substitutes. In the fourth block of the table we present the results obtained when finetuning BART over the generated datasets. We start with GENSEMCOR 37k , that is comparable in size with CT T . In this case, the silver dataset performs better than the gold one, with results that are better than the state of the art on p@3, best and best-mode, besides being way more stable, as shown by the reduced variance across the metrics. We believe this improved behavior is due to the wider variety of targets (and consequently substitutes) to be found in the generated datasets (cf. Table 1), which helps the model to generalize more effectively. Increasing the size of the sample considered to 148k helps improve the results, achieving state-of-the-art performances on five metrics out of seven 6 . With 148k sentences the system seems to reach a stable point, and keeping on adding sentences does not bring any additional useful information to the model. 
Qualitative Evaluation Our quantitative analysis shows that the substitutes produced by GENESIS are good enough to outperform the previous approaches to the task. The evaluation setting, though, is inherently limited to a fixed vocabulary 7 . Hence, we devise an annotation task to assess the quality of generated substitutes, investigating whether GENESIS is able to generate substitutes that, even when not appearing in the gold standard, are judged good replacements by human annotators. Annotation Task We set up a test where an annotator is provided with a target word in context and a set of substitutes that are equally distributed among the gold and the generated ones. The annotator is required to select, if there are any, all the substitutes that are not suitable replacements for the target in the given context. We select three annotators with certified proficiency in English and previous experience in linguistic annotation tasks and present them with a sample of 322 test instances drawn from the LST dataset 8 . The annotators are asked to select the inappropriate substitutes from an anonymized shuffled set of three gold and three generated substitutes, obtained with GENESIS trained on CT T . For all the instances, the gold substitutes do not appear in the generated ones and vice versa. The annotation guidelines are reported in the supplementary material (Section D). Inter-Annotator Agreement Since each annotator may select more than one substitute, we measure the inter-annotator agreement (IAA) using Kraemer's κ coefficient (Kraemer, 1980), an extension of the better known Cohen's κ (Cohen, 1960) that allows multiple answers to be provided by annotators. We follow Landis and Koch (1977) to interpret κ values in the range (0.4, 0.6] as moderate agreement, values in (0.6, 0.8] as substantial agreement and those in (0.8, 1.0) as almost perfect agreement. Annotations are usually considered reliable if their IAA agreement is equal or higher than 7 We recall from Section 4 that the vocabulary of LST has slightly more than 3000 words. 8 The sample size is significant with respect to the source dataset with confidence level of 95% and a margin error of ± 5. The annotation interface was developed through Label Studio https://github.com/heartexlabs/ label-studio#try-out-label-studio. (Eugenio and Glass, 2004). Results As expected, the percentage of bad substitutes is higher in the generated dataset than in the manually produced gold, with 21% of the generated replacements considered inappropriate, versus the 13% discarded from the gold dataset, with an interannotator agreement of 0.71. The high percentage of accepted substitutes among the generated ones reflects the good quality of the replacements provided by GENESIS, confirming the validity of the approach. The results on the gold set, instead, raise some questions on its completeness and on the validity of an evaluation setting entirely dependent on such a restricted vocabulary. Indeed, more than 10% of gold substitutes are considered inappropriate by the annotators, and 40% of the discarded substitutes are gold. Moreover, we recall that, for each instance given to the annotators, the gold and the generated substitutes are disjoint, thus meaning that all the generated substitutes accepted by the annotators (79% of the ones proposed) are missing from the gold but still considered as suitable replacements. 
To give a deeper insight into the incompleteness of the gold dataset, in Table 3 we provide an example of substitutes generated by GENESIS compared with the gold ones. GENESIS is able to provide a richer variety of appropriate substitutes compared to the gold, which lacks several valid substitutes. GENESIS shows its effectiveness in particular when the target is an adjective (e.g. tremendous, bright); with nouns and verbs, it still manages to provide additional good substitutes in comparison to the gold (e.g. rest, skip), but it shows shorter generations, leading to less substitutes. On the adverbs, instead, it sometimes fails to capture the semantics of the target, producing replacements that are not appropriate for the context (e.g. late) or that do not fit syntactically in the sentence (e.g earlier). Conclusions In this paper we presented GENESIS, the first generative approach to lexical substitution. The method is simple but versatile: by finetuning a seq2seq model and post-processing its output we are able to generate appropriate substitutes for target words in contexts. Testing GENESIS on the lexical substitution task, we show performances that surpass the state of the art on several measures. At the same time, our approach can be used to produce large-scale silver data, which, when used as train- Context Gold Substitutes Generated Substitutes I think this idea has tremendous promise. If you wish to collect your robes earlier you should contact the above number to arrange collection. beforehand, sooner, prior to that, by then, before previously, before He was bright and independent and proud. intelligent, clever sharp, enthusiastic, intelligent, talented Let your child pick one bug to glue on the lid. insect fly, insectoid, critter, insect, creature Table 3: An excerpt of the GENESIS output for LST sentences when training on GENSEMCOR 74k , compared with gold substitutes. The generated substitutes reported do not include those added through the fallback strategy. ing corpora for GENESIS, lead to outperformance over the state of the art on five out of seven metrics. Moreover, large-scale datasets could possibly be deployed to finetune lighter models for the task. Finally, we conduct an annotation task to evaluate the quality of generated substitutes, which results in recognizing 79% of the proposed replacements as good substitutes and also highlights some weaknesses of the current evaluation setting, in that it is strictly bound to an incomplete output vocabulary. As future work, we plan to extend GENESIS to other languages for which the lexical substitution task has been proposed, such as Italian (Toral, 2009) and German (Miller et al., 2015). Moreover, we will investigate how the substitutes produced can be deployed in lexical-semantic tasks such as WSD (Bevilacqua et al., 2021b) or Lexical Simplification (Paetzold and Specia, 2017 A Metrics We measured the performances on the task with the standard metrics for lexical substitution, i.e., best and oot. Preliminaries We define T as the set of test instances and H as the set of annotators for the test set. Then, A is the set of instances in T for which the system provides at least one substitute. For the item i ∈ A we identify the set of substitutes provided by the system as a i , while h i represents the set of responses for the item i provided by annotator h ∈ H. 
Finally, for each i we compute the multiset union H i for all h i for each annotator; each unique type res in H i will have an associated frequency f req res for the number of times it appears in H i . For example, let us consider an item for happy and assume the annotators had supplied answers as follows: annotator id responses 1 glad, merry 2 glad 3 cheerful, glad 4 merry 5 jovial then H i would be [glad glad glad merry merry cheerful jovial]. The res with associated frequencies would be glad 3, merry 2, cheerful 1, jovial 1. As regards the mode variations, we define as the mode m i the most frequent response for instance i ∈ T , if it exists. The sets where this mode exists are T M and AM , respectively, for the gold substitutes and the system ones. best and best-mode Defining P b and R b as best precision and best recall respectively, we formulate best as where As regards the mode variation, we modify precision and recall as respectively, where bg is the best guess in the list of substitutes provided by the system. Then, bestmode is computed as in Equation 4. oot and oot-mode In this case, we define P o and R o as oot precision and oot recall, respectively. Then we compute oot as where while in the mode variation precision and recall are slightly modified as respectively. Once again, oot-mode can be computed by following Equation 9. B Parameter Selection As regards the model parameters, we explored how dropout, encoder layer dropout and decoder layer dropout affect the performances of the model. We found two different groups of better performing parameters when training on CT T and on GENSEM-COR 37k . In both cases, the variation of the results was measured on the CT D set. To maintain a feasible number of experiments, we set one parameter at a time, setting the dropout first, then the encoder layer dropout and finally the decoder layer dropout. In Figures 4, 5 and 6 there are the results of each experiment for the tuning of parameters for the models trained on CT T , while Figures 7, 8 and 9 report the performances when training on GENSEMCOR 37k . Often there is no single value for which the model performs uniformly better on all the metrics. In these cases, we tried to select the values that maximised more than one metric, or those that provided a higher improvement. C Generation Parameters We compared the results obtained on CT D after finetuning GENESIS on CT T , without any kind of filtering on the output vocabulary and without fallback strategy, in order to evaluate how each decoding parameter directly affects the generation quality. As evaluation metrics, we considered only the metrics that affect the whole dataset, i.e., we excluded the mode variations from our analysis. As generation parameters we experimented with beam size and top-k sampling. We compared the results for k = 5, 10 with those obtained without sampling, i.e., always picking the most probable one. In all the three cases, we used beam size 5 and postprocessed the generations of each beam as described in Section 3. We can see in Figure 10 that sampling increases the variety of the generated sequences, and consequently the precision's scores, while sticking to the most probable candidate results in a higher recall and oot score, as we can see in the graph. Considering that there is no configuration that achieves the best results on all the metrics, and the increase in memory and time requirements to keep track of the k higher ranked words, we decided not to use sampling at decoding time. 
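For reference, under the standard definitions of McCarthy and Navigli (2009) that the notation above follows, the best and oot precision and recall can be written as

P_b = \frac{1}{|A|} \sum_{i \in A} \frac{\sum_{res \in a_i} freq_{res}}{|a_i|\,|H_i|},
\qquad
R_b = \frac{1}{|T|} \sum_{i \in A} \frac{\sum_{res \in a_i} freq_{res}}{|a_i|\,|H_i|},

P_o = \frac{1}{|A|} \sum_{i \in A} \frac{\sum_{res \in a_i} freq_{res}}{|H_i|},
\qquad
R_o = \frac{1}{|T|} \sum_{i \in A} \frac{\sum_{res \in a_i} freq_{res}}{|H_i|},

while the mode variations restrict attention to the instances for which a mode exists: best-mode credits an instance when the best guess bg equals m_i, oot-mode when m_i \in a_i, with precision normalised by |AM| and recall by |TM|. This is a hedged reconstruction of the displayed formulas rather than a verbatim copy of them.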
As regards beam size, we compared the results obtained when using 5, 15, 25 or 50 beams, described in Figure 11. As expected, generating more sequences results in a higher variety of words generated, thus leading to higher oot and r@10. At the same time, a broader generation may imply "dirtier" substitutes, with words that are close to the target but are not appropriate replacements, slightly decreasing best and p@k scores. D Annotators Guidelines For the annotation task, we provided each annotator with a set of instances comprising a context with a single target word (in bold) and six possible substitutes. The annotator had to select all the inappropriate substitutes, sticking to the following guidelines: Figure 10: The results on the dev set when not using sampling (left) and when using it with top-5 (center) and top-10 (right) most probable elements in the output distribution. 1. A substitute is wrong if it is an inflection of the target (1). 2. A substitute is wrong if its replacement modifies the meaning of the sentence (2, 6). 3. A substitute is wrong it its replacement in the context results in a wrong structure of the sentence (6). 4. A substitute is wrong if it has an inflected form that is different from that of the target (5). 5. A substitute is correct if it is in its base form and not in the same inflection as that of the target (3, 4).
8,885.8
2021-01-01T00:00:00.000
[ "Computer Science" ]
Extension of B-spline Material Point Method for unstructured triangular grids using Powell–Sabin splines The Material Point Method (MPM) is a numerical technique that combines a fixed Eulerian background grid and Lagrangian point masses to simulate materials which undergo large deformations. Within the original MPM, discontinuous gradients of the piecewise-linear basis functions lead to the so-called grid-crossing errors when particles cross element boundaries. Previous research has shown that B-spline MPM (BSMPM) is a viable alternative not only to MPM, but also to more advanced versions of the method that are designed to reduce the grid-crossing errors. In contrast to many other MPM-related methods, BSMPM has been used exclusively on structured rectangular domains, considerably limiting its range of applicability. In this paper, we present an extension of BSMPM to unstructured triangulations. The proposed approach combines MPM with C1\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$C^1$$\end{document}-continuous high-order Powell–Sabin spline basis functions. Numerical results demonstrate the potential of these basis functions within MPM in terms of grid-crossing-error elimination and higher-order convergence. Introduction The Material Point Method (MPM) has proven to be successful in solving complex engineering problems that involve large deformations, multi-phase interactions and historydependent material behaviour. Over the years, MPM has been applied to a wide range of applications, including modelling of failure phenomena in single-and multi-phase media [1,35,40], crack growth [14,20] and snow and ice dynamics [12,31,34]. Within MPM, a continuum is discretised by defining a set of Lagrangian particles, called material points, which store all relevant material properties. An Eulerian background grid is adopted on which the equations of motion are solved in every time step. The solution on the background grid is then used to update all material point properties such as displacement, velocity and stress. In this way, MPM incorporates both Eulerian and Lagrangian descriptions. Similarly to other combined Eulerian-Lagrangian techniques, MPM attempts to avoid the numerical difficulties arising from non-linear convective terms associated with an Eulerian problem formulation, while at the same time preventing grid distortion, typically encountered within mesh-based Lagrangian formulations (e.g. [10,32]). Classically, MPM uses piecewise-linear Lagrange basis functions, also known as 'tent' functions. However, the gradients of these basis functions are discontinuous at element boundaries. This leads to the so-called grid-crossing errors [2] when material points cross this discontinuity. Gridcrossing errors can significantly influence the quality of the numerical solution and may eventually lead to a lack of convergence (e.g. [28]). Different methods have been developed to mitigate the effect of grid-crossings. For example, the Generalised Interpolation Material Point (GIMP) [2] and Convected Particle Domain Interpolation (CPDI) [25] methods eliminate grid-crossing inaccuracies by introducing an alternative particle representation. 
The GIMP method represents material points by particle-characteristic functions and reduces to standard MPM, when the Dirac delta function centred at the material point position is selected as the characteristic function. For multivariate cases, a number of versions of the GIMP method are available such as the uniform GIMP (uGIMP) and contiguous-particle GIMP (cpGIMP) methods. The CPDI method extends GIMP in order to accurately capture shear distortion. Much research has been performed to further improve the accuracy of the CPDI approach and increase its range of applicability [16,22,26]. On the other hand, the Dual Domain Material Point (DDMP) method [41] preserves the original point-mass representation of the material points, but adjusts the gradients of the basis functions to avoid grid-crossing errors. The DDMP method replaces the gradients of the piecewise-linear Lagrange basis functions in standard MPM by smoother ones. The DDMP method with sub-points [7] proposes an alternative manner for numerical integration within the DDMP algorithm. The B-spline Material Point Method (BSMPM) [28,29] solves the problem of grid-crossing errors completely by replacing piecewise-linear Lagrange basis functions with higher-order B-spline basis functions. B-spline and piecewise-linear Lagrange basis functions possess many common properties. For instance, they both satisfy the partition of unity, are non-negative and have a compact support. The main advantage of higher-order B-spline basis functions over piecewise-linear Lagrange basis function is, however, that they have at least C 0 -continuous gradients which preclude grid-crossing errors from the outset. Moreover, spline basis functions are known to provide higher accuracy per degree of freedom as compared to C 0 -finite elements [15]. On structured rectangular grids, adopting B-spline basis functions within MPM not only eliminates grid-crossing errors but also yields higher-order spatial convergence [3,11,28,29,37]. Previous research also demonstrates that BSMPM is a viable alternative to the GIMP, CPDI and DDMP methods [11,19,30,39]. While the CPDI and DDMP methods can be used on unstructured grids [16,22,41], to the best of our knowledge, BSMPM for unstructured grids does not yet exist. This implies that its applicability to real-world problems is limited compared to the CPDI and DDMP methods. In this paper, we propose an extension of BSMPM to unstructured triangulations to combine the benefits of Bsplines with the geometric flexibility of triangular grids. The proposed method employs quadratic Powell-Sabin (PS) splines [23]. These splines are piecewise higher-order polynomials defined on a particular refinement of any given triangulation and are typically used in computer-aided geometric design and approximation theory [9,18,24,27]. PS-splines are C 1 -continuous and hence overcome the grid-crossing issue within MPM by design. We would like to remark that although this paper focuses on PS-splines, other options such as refinable C 1 splines [21] can be used to extend MPM to unstructured triangular grids. The proposed PS-MPM approach is analysed based on several benchmark problems. The paper is organised as follows. In Sect. 2, the governing equations are provided together with the MPM solution strategy. In Sect. 3, the construction of PS-spline basis functions and their application within MPM is described. In Sect. 4, the obtained numerical results are presented. In Sect. 5, conclusions and recommendation are given. 
Material Point Method We will summarise the MPM as introduced by Sulsky et al. [32] to keep this work self-contained. First, the governing equations are presented, afterwards the MPM strategy to solve these equations is introduced. Governing equations The deformation of a continuum is modelled using the conservation of linear momentum and a material model. It should be noted that MPM can be implemented with a variety of material models that either use the rate of deformation (i.e. the symmetric part of the velocity gradient) or the deformation gradient. However, for this study it is sufficient to consider the simple linear-elastic and neo-Hookean models that are based on the deformation gradient. Using the Einstein summation convention, the system of equations in a Lagrangian frame of reference for each direction x k is given by where u k is the displacement, t is the time, v k is the velocity, ρ is the density, g k is the body force, D kl is the deformation gradient, δ kl is the Kronecker delta, σ kl is the stress tensor, x 0 is the position in the reference configuration, λ is the Lamé constant, E kl is the strain tensor defined as 1 2 (D kl + D lk ) − δ kl , J is the determinant of the deformation gradient and μ is the shear modulus. Equations 2 and 4 describe the conservation of linear momentum and the material model, respectively. Initial conditions are required for the displacement, velocity and stress tensor. The boundary of the domain Ω can be divided into a part with a Dirichlet boundary condition for the displacement and a part with a Neumann boundary condition for the traction: where x = [ x 1 x 2 ] T and n is the unit vector normal to the boundary ∂Ω and pointing outwards. We remark that the domain Ω, its boundary ∂Ω = ∂Ω u ∪ ∂Ω τ and the normal unit vector n may all depend on time. For solving the equations of motion in the Material Point Method, the conservation of linear momentum in Eq. 2 is needed in its weak form. For the weak form, Eq. 2 is first multiplied by a continuous test function φ that vanishes on ∂Ω u and is subsequently integrated over the domain Ω: in which a k = ∂v k ∂t is the acceleration. After applying integration by parts, the Gauss integration theorem and splitting the boundary into Dirichlet and Neumann part, the weak formulation becomes whereby the contribution on ∂Ω u equals zero, as the test function vanishes on this part of the boundary. Equation 8 can be solved using a finite element approach by defining n bf basis functions φ i (i = 1, . . . , n bf ). The acceleration field a k is then discretised as a linear combination of these basis functions: Discretised equations in whichâ k, j is the time-dependent j th acceleration coefficient corresponding to basis function φ j . Substituting Eq. 9 into Eq. 8 and expanding the test function in terms of the basis functions φ i leads to which holds for i = 1, . . . , n bf . By exchanging summation and integration, this can be rewritten in matrix-vector form as follows: where M denotes the mass matrix,â k the coefficient vector for the acceleration, while F trac k , F int k and F body k denote, respectively, the traction, internal and body force vector in the x k -direction. When density, stress and body force are known at time t, the coefficient vectorâ k is found from Eq. 12. The properties of the continuum at time t + Δt can then be determined using the acceleration field at time t. Solution procedure Within MPM, the continuum is discretised by a set of particles that store all its physical properties. 
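For reference, a standard form of the governing equations, constitutive relations and semi-discrete system consistent with the definitions above is the following sketch (the precise conventions may differ in detail):

\frac{\mathrm{d}u_k}{\mathrm{d}t} = v_k,
\qquad
\rho \frac{\mathrm{d}v_k}{\mathrm{d}t} = \frac{\partial \sigma_{kl}}{\partial x_l} + \rho g_k,
\qquad
\frac{\mathrm{d}D_{kl}}{\mathrm{d}t} = \frac{\partial v_k}{\partial x_m} D_{ml},

with the linear-elastic and neo-Hookean stress relations, respectively,

\sigma_{kl} = \lambda E_{mm}\,\delta_{kl} + 2\mu E_{kl},
\qquad
\sigma_{kl} = \frac{\lambda \ln J}{J}\,\delta_{kl} + \frac{\mu}{J}\left(D_{km} D_{lm} - \delta_{kl}\right).

The matrix-vector form of the momentum balance (Eq. 12) then reads

M \hat{a}_k = F^{\mathrm{trac}}_k + F^{\mathrm{body}}_k - F^{\mathrm{int}}_k,

with entries

M_{ij} = \int_{\Omega} \rho\,\varphi_i \varphi_j \,\mathrm{d}\Omega,
\quad
F^{\mathrm{int}}_{k,i} = \int_{\Omega} \sigma_{kl}\,\frac{\partial \varphi_i}{\partial x_l}\,\mathrm{d}\Omega,
\quad
F^{\mathrm{body}}_{k,i} = \int_{\Omega} \rho\, g_k\, \varphi_i \,\mathrm{d}\Omega,
\quad
F^{\mathrm{trac}}_{k,i} = \int_{\partial\Omega_\tau} \tau_k\, \varphi_i \,\mathrm{d}\Gamma.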
At each time step, the particle information is projected onto a background grid, on which the momentum equation is solved. Particle properties and positions are updated according to this solution, as illustrated in Fig. 1. To solve Eq. 12 at every time step, the integrals in Eq. 11 have to be evaluated. Since the material properties are only known at the particle positions, these positions are used as integration points for the numerical quadrature, and the particle volumes V are adopted as integration weights. Assume that the total number of particles is equal to n p . Here, subscript p corresponds to particle properties. A superscript t is assigned to particle properties that change over time. The mass matrix and force vectors are then defined by in which m p denotes the particle mass, which is set to remain constant over time, guaranteeing conservation of the total mass. The coefficient vectorâ t k at time t for the x k -direction is determined by solving Unless otherwise stated, Eq. 17 is solved adopting the consistent mass matrix. Next, the reconstructed acceleration field is used to update the particle velocity: The semi-implicit Euler-Cromer scheme [5] is adopted to update the remaining particle properties. First, the velocity field v t+Δt k is discretised as a linear combination of the same basis functions, similar to the acceleration field (Eq. 9): in whichv t+Δt k is the velocity coefficient vector for the velocity field at time t + Δt in the x k -direction. The coefficient vectorv t+Δt k is then determined from a density weighted L 2projection onto the basis functions φ i . This results in the following system of equations: where M t is the same mass matrix as defined in Eq. 13. P t+Δt k denotes the momentum vector with the coefficients given by P t+Δt Particle properties are subsequently updated in correspondence with Eqs. 1-4. First, the deformation gradient and its determinant are obtained: Here, I denotes the identify matrix and ε t+Δt p denotes the symmetric part of the velocity gradient: For linear-elastic materials, the particle stresses are computed as follows: For neo-Hookean materials, the stresses are obtained from The determinant of the deformation gradient is used to update the volume of each particle. Based on this volume, the density is updated in such way that the mass of each particle remains constant: Finally, particle positions and displacements are updated from the velocity field: The described MPM can be numerically implemented by performing the steps in Eqs. 13-31 in each time step in the shown order. Note that all steps can be applied with a variety of basis functions, without any essential difference in the algorithm. In this paper, we investigate the use of Powell-Sabin spline basis functions, which are described in the next section. Powell-Sabin grid refinement To construct PS-splines on an arbitrary triangulation, a Powell-Sabin refinement is required, dividing each of the original main-triangles into six sub-triangles as follows (see also Fig. 2). 1. For each triangle t j , choose an interior point Z j (e.g. the incentre), such that if triangles t j and t m have a common edge, the line between Z j and Z m intersects the edge. The intersection point is called Z jm . 2. Connect each Z j to the vertices of its triangle t j with straight lines. 3. Connect each Z j to all edge points Z jm with straight lines. In case t j is a boundary element, connect Z j to an arbitrary point on each boundary edge (e.g. the edge middle). 
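As an illustrative sketch of steps 1 and 3 above (choice of the incentre and construction of the edge points Z jm), assuming a minimal representation of triangles as vertex triples; the helper names are illustrative only.

import numpy as np

def incentre(v1, v2, v3):
    # Incentre of a triangle: vertices weighted by the opposite edge lengths.
    a = np.linalg.norm(np.subtract(v2, v3))
    b = np.linalg.norm(np.subtract(v1, v3))
    c = np.linalg.norm(np.subtract(v1, v2))
    return (a * np.asarray(v1) + b * np.asarray(v2) + c * np.asarray(v3)) / (a + b + c)

def edge_point(z_j, z_m, e0, e1):
    # Z_jm: intersection of the segment Z_j-Z_m with the common edge e0-e1
    # (assumed to exist by construction of the interior points).
    A = np.column_stack([np.subtract(z_m, z_j), np.subtract(e0, e1)])
    t, _ = np.linalg.solve(A, np.subtract(e0, z_j))
    return np.asarray(z_j) + t * np.subtract(z_m, z_j)

# Two triangles sharing the edge (1,0)-(0,1):
t1 = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
t2 = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
z1, z2 = incentre(*t1), incentre(*t2)
print(z1, z2, edge_point(z1, z2, (1.0, 0.0), (0.0, 1.0)))   # Z_jm lies on the shared edge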
The area consisting of the main-triangles around V i is referred to as the molecule Ω i of V i (see Fig. 2). Each PSspline is associated with a main vertex V i and will be defined on the molecule Ω i . Control triangles For each main vertex V i , the set of PS-points is given by the union of V i and the sub-triangle edge midpoints directly around it, see Fig. 3. A control triangle for V i is then defined as a triangle that contains all its PS-points. Note that this control triangle is not uniquely defined. However, it preferably has a small area to ensure good stability properties of the resulting PS-splines [8]. A sufficiently good control triangle can be constructed by considering only control triangles with two or three edges shared with the convex hull of the PS-points and taking the one with smallest surface. Further details about the implementation of such an algorithm can be found in [6]. Each control triangle defines three basis functions, all associated with vertex V i and the molecule Ω i . Therefore, the basis functions are indexed φ The triplets of the three PS-spline basis functions corresponding to vertex V i are determined from the control triangle and the position of V i . Let the Cartesian coordinates of the control triangle vertices be 4 A control triangle lifted to z = 1 in one of its vertices for determining a triplet of a basis function. The location in the lifted plane above the vertex V i in the middle is marked with a dot; the value and gradient there correspond to the triplet of the associated basis function The three triplets are then determined by solving The control triangle has a direct geometric interpretation for the triplet. Solving for the triplet α Bernstein-Bézier formulation for quadratic splines i is constructed as a piecewise quadratic polynomial over each sub-triangle in the PS-refinement in barycentric coordinates. Cartesian coordinates (x, y) can be converted to barycentric coordinates (η 1 , η 2 , η 3 ) with respect to the considered triangle vertices Next, we define 6 quadratic Bernstein polynomials B i, j,k for each sub-triangle in barycentric coordinates, which are non-negative and form a partition of unity. Any desired quadratic polynomial b(η) can be uniquely constructed as a linear combination of the 6 quadratic Bernstein polynomials B i, j,k over a sub-triangle, where b i, j,k are called the Bézier ordinates. The desired quadratic polynomial can thus be fully defined by its Bézier ordinates, which can be schematically represented by associating Bézier ordinate b i, j,k with barycentric coordinates i 2 , j 2 , k 2 , as shown in Fig. 5. Construction of PS-splines Next, consider the molecule Ω i around a vertex V i and one of the main-triangles t j in the molecule, as depicted in Fig. 6. The grey main-triangle has been divided into 6 sub-triangles in the PS-refinement. In each of the sub-triangles, the 6 locations with barycentric coordinates i 2 , j 2 , k The Bézier ordinates for basis function φ 1 i , corresponding to the vertex V i considered in Fig. 6. There are three different basis functions, note that for each of these basis functions, α, β and γ should correspond with the triplet (α q , β q , γ q ) corresponding to PS-triangle vertex Q q with β = β(x 2 − x 1 ) + γ (y 2 − y 1 ) and The resulting PS-splines corresponding to the shown control triangle of vertex V i are shown on the associated molecule in Fig. 3. 
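As a small illustration of the Bernstein-Bezier machinery above, the barycentric conversion and the six quadratic Bernstein polynomials can be sketched as follows (standard definitions; variable names are illustrative).

import numpy as np

def barycentric(p, v1, v2, v3):
    # Cartesian point p -> (eta1, eta2, eta3) with respect to the triangle v1, v2, v3.
    T = np.column_stack([np.subtract(v1, v3), np.subtract(v2, v3)])
    e1, e2 = np.linalg.solve(T, np.subtract(p, v3))
    return np.array([e1, e2, 1.0 - e1 - e2])

def bernstein_quadratic(eta):
    # The six quadratic Bernstein polynomials B_{ijk}(eta) = (2!/(i! j! k!)) eta1^i eta2^j eta3^k.
    e1, e2, e3 = eta
    return {(2, 0, 0): e1 * e1, (0, 2, 0): e2 * e2, (0, 0, 2): e3 * e3,
            (1, 1, 0): 2 * e1 * e2, (1, 0, 1): 2 * e1 * e3, (0, 1, 1): 2 * e2 * e3}

def evaluate(bezier_ordinates, eta):
    # b(eta): linear combination of the Bernstein polynomials with the Bezier ordinates.
    B = bernstein_quadratic(eta)
    return sum(bezier_ordinates[k] * B[k] for k in B)

# With all Bezier ordinates equal to one, b(eta) = 1 everywhere, reflecting the
# partition-of-unity property mentioned above.
eta = barycentric((0.2, 0.3), (0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
print(evaluate({k: 1.0 for k in bernstein_quadratic(eta)}, eta))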
Projecting on PS-splines and lumping within PS-MPM In this section, we show by numerical experiments that the projection on PS-splines leads to third-order spatial convergence in the L 2 -error between an analytic function and its projection. Furthermore, we also investigate the effect of performing a projection on PS-splines using a lumped variant of the mass matrix instead of the consistent one. Lumping of the mass matrix is often used in MPM with piecewise-linear basis functions, but we will show that lumping is less suitable for MPM with PS-spline basis function when no special measures are taken. We consider the projection of a two-dimensional sine function onto a basis of PS-splines, using a consistent and a lumped mass matrix, respectively. The results are also com-pared with a projection onto a basis of piecewise-linear basis functions. We project the function f = sin(x/π ) sin(y/π ) on the unit square onto a basis of PS-splines using a standard L 2 -projection: The entries of the mass matrix M and the right-hand side b are evaluated using high-order Gauss integration such that the integration error is insignificant. Finally, we may obtain the projectionf by either solving M c = b consistently, or by using the lumped mass matrix, b i = c i /M ii , in which the lumped mass matrixM has been created by lumping all the mass of each row onto its diagonal element,M ii = n bf j=1 M i j ,M i j = 0 for i = j. A basis of PS-splines, and of piecewise-linear functions were adopted, respectively, both defined on a structured triangulation. Figure 8 shows the L 2 -error in the function value and xderivative for various refinements of the grid. When using a consistent mass matrix, the expected orders of convergence are obtained for piecewise-linear basis functions and PS-spline basis functions, both in the function value and the x-derivative. The use of a lumped mass matrix leads to a firstorder accurate approximation of the function value for both types of basis functions. However, the x-derivative does not converge at all when adopting a lumped mass matrix within a PS-spline basis. As the gradient of the reconstructed field is often used in MPM (e.g. Eq. 24), lumping within PS-MPM will often lead to spatial oscillations as will be shown in more detail in Sect. 4.3. Numerical results To validate the proposed PS-MPM, several benchmarks involving large deformations are considered. The first benchmark describes a thin vibrating bar, where the displacement is caused by an initial velocity field. For this benchmark, PS-MPM on a structured triangular grid is compared with a reference solution. A second benchmark considers a unit square undergoing axis-aligned displacement with known analytical solution. The spatial convergence of PS-MPM on an unstructured grid is determined for this benchmark. The third benchmark describes a solid column under self-weight. In this benchmark, we investigate the use of a lumped mass matrix in PS-MPM to stabilise the simulation when part of the original domain becomes empty. We will show that a Vibrating bar In this section, a thin linear-elastic vibrating bar is considered with both ends fixed. A one-dimensional UL-FEM [36] solution on a very fine grid serves as reference. The grid used for PS-MPM and the initial particle positions are shown in Fig. 9. The bar is modelled with density ρ = 25 kg/m 3 , Young's modulus E = 50 Pa, Poisson ratio ν = 0, length L = 1 m, width W = 0.05 m and initial maximum velocity of 0.1 m/s. 
The chosen parameters result in a maximum normal strain in the x-direction of approximately 7%. At the left and right boundary, homogeneous Dirichlet boundary conditions are imposed for both x-and y-displacement, whereas at the top and bottom boundary, a homogeneous Dirichlet boundary condition is imposed only for the y-displacement, and a free-slip boundary condition for the x-direction. The initial displacement equals zero, but an initial x-velocity profile is set to v x (x 0 , y 0 ) = 0.1 sin(x 0 π/L). The initial y-velocity is equal to zero. The time step size for the simulation is Δt = 5·10 −3 s. The Courant number for two-dimensional problems is defined as C = 2Δt h √ E/ρ, in which h is the typical element length, and √ E/ρ is the characteristic wave speed. Due to the ambiguity of h for a PS-refinement on an unstructured triangular grid, we estimate the average edge length of the sub-triangles by h ≈ 0.025 m. In this case, the Courant number is C ≈ 0.56 < 1, satisfying the CFL condition [4]. Figure 10 shows the displacement in the middle of the bar over time and the stress profile through the entire bar at the end of the simulation. Although a relatively coarse PS-MPM grid and few particle are used, the method yields accurate results and a smooth stress profile. In case of small and large deformations, the energy in the system over time is shown in Fig. 11. Results for small deformations have been obtained by setting the initial maximum velocity equal to 0.001 m/s. In the limit of small deformations, the PS-MPM solution is described exactly by a harmonic oscillator, and the simulation with small deformations is indeed in good agreement with the (sampled) exact solution. For large deformations, however, the solution is no longer perfectly periodic. Nonetheless, the energy over time obtained with PS-MPM strongly resembles the UL-FEM reference solution. Only after 7 s, the PS-MPM simulation Axis-aligned displacement on an unstructured grid In this section, we consider two-dimensional axis-aligned displacement on the unit square (L × W , with L = W = 1 m) to investigate the grid-crossing error and spatial convergence of PS-MPM on an unstructured grid. An analytical solution for this problem is constructed using the method of manufactured solutions: the analytical solution is assumed a priori, and the corresponding body force is calculated accordingly. This benchmark has been adopted from [25] and [38]. The analytical solution for the displacement in terms of the reference configuration is given by u y = u 0 sin 2π y 0 sin E/ρ 0 π t + π . Here, ρ 0 = 10 3 kg/m 3 , u 0 = 0.05 m and Young's modulus E = 10 7 Pa. The corresponding body forces [25] are in which the Lamé constant λ, the shear modulus μ and the normal components of the deformation gradient D x x and D yy are defined as D yy = 1 + 2u 0 π cos(2π y 0 ) sin E/ρ 0 π t + π . (46) Here, ν = 0.3 and all solutions are again given with respect to the reference configuration. This benchmark was simulated with standard MPM and PS-MPM, using an unstructured triangular grid with material points initialised uniformly over the domain, as shown in Fig. 12. A time step size of Δt = 2.25 · 10 −4 s was chosen, which results in a Courant number of C ≈ 0.72. For the adopted parameters, the imposed solution in Eqs. 40-41 has period T = 0.02 s. Grid-crossing error First, it is shown that the grid-crossing error typical for standard MPM does not occur in PS-MPM, by comparing the normal horizontal stress resulting from these methods (see Fig. 13). 
The configurations for PS-MPM and standard MPM contain the same number of particles (5120, 4 times as many as in Fig. 12) and a comparable number of basis functions, 289 for standard MPM and 243 for PS-MPM. As expected, the stress field in standard MPM suffers severely from grid-crossing, whereas with PS-MPM a smooth stress field is observed. The same conclusion can also be drawn when investigating individual particle stresses over time, as is shown in Fig. 14. For this figure, the particle positioned at (x 0 , y 0 ) ≈ (0.25 m,0.47 m) has been traced throughout the simulation. The figure depicts the stress and trajectory of the particle. For standard MPM, it is observed that the stress profile is highly oscillatory, resulting in a disturbed trajectory. PS-MPM shows no oscillations, but only a small offset, which was found to disappear upon further refinement of the grid. Spatial convergence PS-MPM with quadratic basis functions is expected to show third-order spatial convergence. To determine the spatial convergence, the time averaged root-mean-squared (RMS) error is used: where n t denotes the total number of time steps of the simulation. Under the assumption that the spatial error is much larger than the error produced by the numerical integration and time-stepping scheme, the spatial convergence of standard MPM and PS-MPM is determined by varying the typical element length h. This length was defined as the average edge length for standard MPM and the average sub-triangle edge length for PS-MPM. It has been observed that the timeintegration error is indeed sufficiently small, but the number of particles required for an adequately accurate integration increases rapidly as h decreases. Figure 15 shows the spatial convergence of both standard MPM and PS-MPM for different numbers of particles per element on an unstructured grid. Provided that a sufficient number of particles is adopted, standard MPM shows secondorder spatial convergence, and PS-MPM converges with third order. Both orders of convergence correspond to the order of the projection-error, as shown in Sect. 3.5. Besides higherorder spatial convergence, PS-MPM also results in a smaller RMS-error compared to standard MPM, even for the coarsest grid. To achieve the optimal convergence order for both standard MPM and PS-MPM, the number of integration particles required increases rapidly, as the integration error otherwise dominates the total error, as shown in Fig. 15. Note that the convergence rate in the number of particles for both standard MPM and PS-MPM is measured to be of first order. However, for a fixed typical element length h and number of particles per element, the use of PS-MPM leads to a lower RMS-error, illustrating the higher accuracy per degree of freedom. The inaccurate integration in MPM due to the use of particles as integration points is known to limit the spatial convergence. Different measures have been proposed to decrease the quadrature error in MPM based on function reconstruction techniques like MLS [13], cubic splines [37] and Taylor least squares [39]. The combination of PS-MPM with function reconstruction techniques to obtain optimal convergence rates with a moderate number of particles is subject to future research to further improve PS-MPM for practical applications. Column under self-weight In the previous examples, we considered PS-MPM with a consistent mass matrix. 
However, lumping of the mass matrix is common practice in many applications of MPM, as it speeds up the simulation and increases the numerical stability. In case elements become (almost) empty, the consistent mass matrix in Eq. 13 has very small entries and becomes ill-conditioned, leading to stability issues, which is a wellknown phenomena in MPM [32]. Using a lumped version of the mass matrix to solve for the acceleration and velocity fields is known to overcome the ill-conditioning [32]. However, as explained in Sect. 3.5, lumping within PS-splines can cause spatial oscillations. We show that this is indeed the case and propose an alternative based to lumping to overcome the ill-conditioning of the mass matrix while mitigating spatial oscillations as much as possible. First, we demonstrate that the standard lumping technique is poorly suited for PS-MPM, by considering a solid col- Fig. 12 The exact solution in which particles (marked with dots) move back and forth along the marked vectors (left) and the unstructured grid with the initial particle configuration (right) Fig. 13 The interpolated particle stress in the x-direction at t = 0.016s umn under self-weight, as shown in Fig. 16. The column in this exemplary benchmark is modelled as a linear-elastic material. It is fixed in all directions at the bottom, and completely free at the top. At the left and right boundary, we impose a free-slip boundary condition for movement in the y-direction, but the displacement in the x-direction is fixed to be zero. The column is compressed by gravity due to its self-weight. The solid column is modelled with density ρ = 1 · 10 3 kg/m 3 , Young's modulus E = 1 · 10 5 Pa, Poisson ratio ν = 0, gravitational acceleration g = −9.81 m/s 2 , height H = 1 m and width W = 0.1 m. The maximum strain when adopting these parameters is approximately 18%. Figure 16 illustrates the grid considered for this benchmark as well as the particle positions at maximal deformation. Due to the fact that part of the grid becomes empty, the use of a consistent mass matrix leads to stability issues with unlumped PS-MPM. The use of a lumped mass matrix within PS-MPM restores the stability, but causes spatial oscillations in the stress profile, as was also discussed in Sect. 3.5. The oscillations degrade the solution of the displacement and velocity as well, as shown in Fig. 17 (left column). Since these spatial oscillations are typical for lumped PS-MPM, common remedies to solve problems due to lumping the mass matrix, like the momentum formulation method [33] or distribution coefficient method [17], do not reduce the spatial oscillations in lumped PS-MPM. Instead, we propose the use of partial lumping. Partial lumping to mitigate spatial oscillations As full lumping within PS-MPM causes spatial oscillations, we will instead consider a partial-lumping approach to overcome the stability issues due to almost empty elements, while at the same time minimising spatial oscillations due to lumping. Partial lumping only lumps those rows responsible for the ill-conditioning. These rows typically correspond to the basis functions in the part of the grid where very few particles are left. By only lumping these few rows, the simulation is stabilised, while introducing a much smaller lumping error compared to full lumping. Note that the choice of rows to lump is crucial to ensure stability. Lumping more rows increases the robustness of the simulation, but decreases the accuracy. 
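A minimal sketch of this partial-lumping operation is given below; the criterion used to select the rows is benchmark-specific, as discussed next, and the helper names are illustrative.

import numpy as np

def partially_lumped(M, rows_to_lump):
    # Lump only the selected rows of the consistent mass matrix M: the full row mass is
    # moved onto the diagonal, while all remaining rows are kept consistent.
    Mp = M.copy()
    for i in rows_to_lump:
        row_sum = M[i, :].sum()
        Mp[i, :] = 0.0
        Mp[i, i] = row_sum
    return Mp

# Example usage (the selection criterion below is hypothetical):
# flagged = [i for i in range(n_bf) if molecule_has_empty_element(i)]
# a_hat = np.linalg.solve(partially_lumped(M, flagged), F_trac + F_body - F_int)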
The trade-off between stability and accuracy within partially lumped PS-MPM should be reconsidered for each benchmark. For the column under self-weight, we implemented partial lumping by lumping all rows corresponding to a basis function with at least one empty main-triangle in its molecule. Additional empty elements are added at the top of the column, to ensure that the top-most basis functions are always lumped. Figure 17 (right column) shows the displacement Fig. 18 The equilibrium stress at individual particles in a damped solid column. The numerical solutions were determined with fully lumped and partially lumped PS-MPM at t = 10 s, at which the solution was in equilibrium and velocity of a particle over time, as well as the stress profile at t = 2.5 s obtained with partially lumped PS-MPM. Compared to the results obtained with fully lumped PS-MPM, shown in Fig. 17 (left), the stress profile, the velocity and displacement over time significantly improve. Furthermore, the total energy fluctuates less for partially lumped PS-MPM than for fully lumped PS-MPM and also shows less numerical dissipation. However, for very long simulations, partially lumped PS-MPM requires smaller time steps than fully lumped PS-MPM in order to remain stable. Finally, we compared the equilibrium solution for the stress profile in the solid column with partial and full lumping. The analytical equilibrium solution is given by [41]: σ yy (y eq ) = σ yy (0) (1 − y eq /H ) + κ(y eq /H ) where y eq refers to the equilibrium position of the particles. Furthermore, σ yy (0) = H ρg and κ = σ yy (0) 2E . To obtain the static solution based on the dynamic calculations, a damping term has been added: where, α = 0.6. The damping force damps the PS-MPM solution at each time step in the opposite direction of the velocity field. The equilibrium solution has then been determined by simulating the system with both fully lumped and partially lumped PS-MPM until a stable solution was reached. Both the analytical and numerical static solutions are shown in Fig. 18. For fully lumped PS-MPM, the spatial oscillations in the stress profile remain visible in the equilibrium solution. However, the stress profile of partially lumped PS-MPM is in close agreement with the exact solution. Hence, the use of partial lumping within PS-MPM combines the advantages of adopting a consistent and lumped mass matrix, while minimising their drawbacks, and thereby improves the overall performance of the method. Future research will focus on techniques to further mitigate or fully overcome the oscillations caused by lumping, in particular when PS-MPM is applied for practical problems. Conclusion In this paper, we presented an alternative for B-spline MPM suited for unstructured triangulations using piecewise quadratic C 1 -continuous Powell-Sabin spline basis functions. The method combines the benefits of smooth, higher-order basis functions with the geometric flexibility of an unstructured triangular grid. PS-MPM yields a mathematically sound approach to eliminate grid-crossing errors, due to the smooth gradients of the basis functions. As a first validation, a vibrating bar was considered, for which the PS-MPM solution yields accurate results on a relatively coarse grid. Numerical simulations obtained for a unit square undergoing axis-aligned displacement have shown higher-order convergence for the particle stresses and displacements. 
The use of a lumped mass matrix to increase stability within PS-MPM leads to spatial oscillations in the stress profile. A partial lumping strategy was proposed to combine the advantages of adopting a consistent and a lumped mass matrix, which successfully mitigates this issue. Investigation of alternative unstructured spline technologies, in particular refinable C¹ splines on irregular quadrilateral grids [21], is underway. Compliance with ethical standards Conflict of interest Pascal de Koster has received funding support from Dutch research institute Deltares. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
8,253
2020-03-26T00:00:00.000
[ "Engineering", "Materials Science" ]
A novel caged Cookson-type reagent toward a practical vitamin D derivatization method for mass spectrometric analyses Rationale 25-Hydroxylated vitamin D is the best marker for vitamin D (VD). Due to its low ionization efficiency, a Cookson-type reagent, 1,2,4-triazoline-3,5-dione (TAD), is used to improve the detection/quantification of VD metabolites by liquid chromatography/tandem mass spectrometry (LC/MS/MS). However, the high reactivity of TAD makes its solution stability low and inconvenient for practical use. We here describe the development of a novel caged Cookson-type reagent, and we assess its performance in the quantitative and differential detection of four VD metabolites in serum using LC/MS/MS. Methods Caged 4-(4′-dimethylaminophenyl)-1,2,4-triazoline-3,5-dione (DAPTAD) analogues were prepared from 4-(4′-dimethylaminophenyl)-1,2,4-triazolidine-3,5-dione. Their stability and reactivity were examined. The optimized caged DAPTAD (14-(4-(dimethylamino)phenyl)-9-phenyl-9,10-dihydro-9,10-[1,2]epitriazoloanthracene-13,15-dione, DAP-PA) was used for LC/MS/MS analyses of VD metabolites. Results The solution stability of DAP-PA in ethyl acetate dramatically improved compared with that of the non-caged one. We measured the thermal retro-Diels-Alder reaction enabling the release of DAPTAD and found that the derivatization reaction was temperature-dependent. We also determined the detection limit and the lower limits of quantification for four VD metabolites with DAPTAD derivatization. Conclusions DAP-PA was stable enough for mid- to long-term storage in solution. This advantage shall contribute to the detection and quantification of VD in clinical laboratories, and as such to the broader use of clinical mass spectrometry. Because 25(OH)D3 accounts for most of serum 25(OH)D, differential determination of 25-hydroxyvitamin D2 (25(OH)D2) is desirable when VD2-containing supplements are used by the subject (patient). Because significant levels of the C-3 epimer of 25(OH)D3, 3-epi-25(OH)D3, are present in the serum in both infants and adults, 3 its presence may result in the overestimation of 25(OH)D if it is not properly resolved by chromatography. Furthermore, the accurate quantification of 24R,25-dihydroxyvitamin D3 (24,25(OH)2D3) is essential for the differential diagnosis of infantile hypercalcemia of unknown etiology. 4 Thus, there is an increasing demand for the quantitative and routine mass spectrometric measurement of differential VD metabolites. 5 Because VD metabolites exhibit a low ionization efficiency under the conditions used in VD analysis by liquid chromatography/tandem mass spectrometry (LC/MS/MS), attempts to improve ionization efficiency by derivatization have been reported. 6 VD metabolites have a particular structural feature: a conjugated s-cis diene. Consequently, VD-selective derivatization reagents that take advantage of the reactive dienophile, 1,2,4-triazoline-3,5-dione (TAD), which is known as a Cookson-type reagent, have been developed for enhancing their detection limits in mass spectrometric analyses. 7 DAPTAD solution without purification needs to be stored at −18°C and is recommended for use within 2 months. 10 In addition, on-site preparation of the chemicals is inherently associated with a risk of contamination, which may affect the yield of the derivatization reaction.
Furthermore, water condensation can also disturb or hamper the derivatization reaction, and ambient or near-ambient temperature (refrigerator level) storage of the reagent solution is thus desirable, which, on the other hand, will affect the reagent's stability. Overcoming these limitations would enable not only easier operational handling, but may also open the way to the automated and routine detection/quantification of VD metabolites. Effective TAD reagents, including DAPTAD, for VD metabolites have been reported; 13,14 however, their solution stability was often low or, in many cases, not investigated. Although 2-nitrosopyridine 15 might be a better derivatization reagent for VD metabolites than TAD reagents, the popularity and accumulated knowledge of 2-nitrosopyridine as a derivatization reagent are still limited compared with those of the well-known and widely used TAD reagents. In addition, the long-term solution stability of 2-nitrosopyridine has not yet been confirmed. These factors motivated us to use TAD in our experiment. The stability of TAD reagents is dependent on the TAD group and not on the conjugated aromatic functional group. Therefore, we focused on DAPTAD as a representative TAD molecule. | Analytical high-performance liquid chromatography (HPLC) | Materials for LC/MS/MS | Solution stability of DAPTAD and DAP-PA Although TAD is one of the most reactive dienophiles and provides an excellent derivatization tag, its solution instability has been reported. [17][18][19][20] In nucleophilic solvent systems (alcohols or water), it undergoes a nucleophilic attack at its oxygen functional groups or a loss of nitrogen, yielding dimeric compounds. We first examined the solution stability of DAPTAD, which was synthesized according to the published protocol, 10 using NMR. The amount of residual DAPTAD was estimated by the integration of aromatic proton peaks (Figures 5 and S1, supporting information). As expected, a time-dependent decrease in the proton signals was observed in ethyl acetate, which is a common solvent for DAPTAD derivatization. 10 The addition of molecular sieves 4A at 4°C increased the solution stability of DAPTAD, which strongly suggested that a nucleophilic attack by residual moisture occurred in the non-caged TAD. On the other hand, the solution stability of DAP-PA in ethyl acetate dramatically improved (Figure 6) in comparison with that of the non-caged TAD, which indicated that a DA-type protection (Figure 3) of the dienophile, TAD, was effective. | RDA reaction of caged DAPTAD The RDA reaction is a reversible reaction, which can be controlled by thermal regulation (Figure 3). 21,22 This dynamic, reversible covalent bond formation and cleavage makes it possible to consider thermal protection and deprotection. The thermal RDA reaction enabling the release of DAPTAD was examined using analytical HPLC. DAPTAD itself was difficult to detect using reversed-phase HPLC due to its instability in aqueous solvents, and we thus monitored the presence of 1,4-diphenyl-1,3-butadiene and 9-phenylanthracene at 70°C in ethyl acetate, which showed that the reaction with 1,4-diphenyl-1,3-butadiene was irreversible (Figure 3). On the other hand, the release of 9-phenylanthracene was observed, in contrast to the irreversible diene tag, 1,4-diphenyl-1,3-butadiene (Figures 4 and S8, supporting information). The release rate of 9-phenylanthracene from DAP-PA in ethyl acetate was temperature-dependent (Table 2).
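As an aside, temperature-dependent release rates like those reported in Table 2 are commonly summarised by an Arrhenius analysis; the short sketch below (Python, with purely hypothetical rate constants rather than the measured values from Table 2) shows how an activation energy could be estimated from such data.

```python
import numpy as np

# Hypothetical first-order release rate constants k (1/min) at temperatures T (K);
# placeholders only, not the measured values from Table 2.
T = np.array([313.15, 333.15, 353.15])   # 40, 60, 80 °C
k = np.array([2.0e-3, 1.5e-2, 8.0e-2])

R = 8.314  # gas constant, J/(mol K)
# Arrhenius relation: ln k = ln A - Ea/(R T); fit ln k against 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R          # activation energy in J/mol
A = np.exp(intercept)    # pre-exponential factor in 1/min
print(f"Ea ≈ {Ea/1000:.1f} kJ/mol, A ≈ {A:.2e} min^-1")
```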
21,22 The reaction temperature (Table S9, supporting information), time (Table S10, supporting information), solvent (Table S2, supporting information), and concentration ( Figure S11, supporting information) were also investigated using LC/MS/MS. The reaction was saturated in about 15 min. Although the reaction with aromatic solvents (toluene, anisole, and o-dichlorobenzene) gave better rate constants than that with ethyl acetate (Table S2, supporting information), the peak area in the SRM chromatogram of derivatized VD metabolites produced in ethyl acetate was larger than that produced in toluene ( Figure S11, supporting information). Considering their boiling points (indicating the easiness to remove), we concluded that ethyl acetate at 80°C was the best reaction condition. | LC/MS/MS analyses of VDs without serum Four VD metabolites with and without DAPTAD derivatization in the absence of serum were detected using the previously reported procedure. 12 Their retention time (t R ), LOD values, and sensitivity increase are summarized in Table 3. | LC/MS/MS analyses of VDs in SRM972a level 2 serum LLOQs of four VD metabolites in SRM972a level 2 serum with DAPTAD derivatization are summarized in Table 4. Their SRM chromatograms are shown in Figure 7. In our study 1α, 25-dihydroxyvitamin D 3 (1,25(OH) 2 D 3 ) could not be detected quantitatively due to its extremely low concentration and the difficulty in chromatographic separation of 1,25(OH) 2 D 3 and other dihydroxylated VD metabolites such as 4β, 25-dihydroxyvitamin D. 23 This problem could be solved by combining our present protocol with immunoaffinity extraction. 23 | CONCLUSIONS We screened several diene groups to develop the caged Cooksontype reagent, DAP-PA, which was stable enough for mid-to longterm storage in solution. Highly stable reagents are essential for data reproducibility in clinical laboratories. The stability of the reagents in a solution is guaranteed by the production of pure products, which is generally achieved by crystallization, and the caged DAPTAD is easy to crystallize, which is a strong advantage in terms of quality control. In addition, from a practical viewpoint, the caged DAPTAD is available in large quantities, and thus market supply is stable and ample. This advantage will contribute to the field of VD detection and quantification in clinical laboratories, and thus to the broader use of clinical mass spectrometry. 24
1,877.4
2019-11-12T00:00:00.000
[ "Chemistry", "Medicine" ]
First principle band calculations of Mg2Si thin films with (001) and (110) orientations JJAP Conf. Proc. 8, 011001 (2020), https://doi.org/10.7567/JJAPCP.8.011001; 5th Asia-Pacific Conference on Semiconducting Silicides and Related Materials (APAC-Silicide 2019). Introduction Magnesium silicide, Mg2Si, has been known as a promising candidate for thermoelectric applications at mid-range temperature (600-900 K). The fundamental semiconducting properties of the bulk crystal have been investigated thoroughly, such as the band gap properties [1], and the electrical properties and the Seebeck effect [2]. The band structure of Mg2Si has been discussed theoretically [3,4].
The Mg2Si film fabrication has been demonstrated in several works, including the (111) texture by molecular beam epitaxy (MBE) [5][6][7], and the (110) texture by MBE [8]. Mg-Si films have been deposited by ion beam sputtering [9], and Mg2Si thin films have been grown on Si(111) by the solid-phase annealing method [10,11]. Thin films have also been prepared by radio-frequency magnetron sputtering, and their Seebeck coefficient has been measured [12]. Thin films have further been grown by an industrial rapid thermal annealing process [13]. Recently, the origin of the p-type conductivity in thin films of Mg2Si was investigated [14]. The experimental results show that the possible Mg2Si thin-film orientations are the (001), (110), and (111) surfaces. First-principles calculations of Mg2Si thin films have been performed in recent works. Liao et al. investigated surface electronic properties of Mg2Si thin films with the (001) surface [15]. The band structure and the local density of states were calculated to point out the difference between the Si-terminated surface and the Mg-terminated surface. Balout et al. investigated thermoelectric properties of Mg2Si thin films with the (001), (110), and (111) surfaces for various film thicknesses [16]. Migas et al. investigated the band structure of monolayer Mg2Si with the (111) surface as well as the monolayers of Ca2Si, Sr2Si, and Ba2Si [17]. In thin films, the quantum confinement effect emerges, and the electronic states are quantized. This leads to a sharper density-of-states profile, which would be favorable for thermoelectric characteristics. In addition, recent studies of topological materials show that surface electronic states can appear inside the band gap. This also helps to improve the thermoelectric properties. In this work, we focus on the surface band structure in Mg2Si thin films with the (001) and (110) surfaces for various film thicknesses. Although the band structures of the (001) and (110) surfaces have been investigated theoretically [15,16], the nature of the surface band structure has not been addressed in detail. For this purpose, we perform first-principles calculations to clarify the difference in the surface band structures between the films with (001) and (110) orientations, and the film-thickness dependence of the band structure and the density of states for the Si-terminated and Mg-terminated films. Method The Mg2Si crystal structure belongs to the Fm-3m (No. 225) space group. We consider thin films stacked along the [001] and [110] directions, as shown in Figs. 1(a) and (b). The direction of the stacking of the atomic layers is indicated in the figure. The electronic structures are calculated within the density functional theory (DFT) framework using the WIEN2k program, which employs the full-potential linearized augmented plane-wave (FLAPW) plus local orbitals method [18]. We used the generalized gradient approximation (GGA) proposed by Perdew, Burke and Ernzerhof [19]. The calculations have been done for the pure Mg2Si crystal structure with the lattice constant a = 6.35 Å. We use RKmax = 7.0 and 25 × 25 × 1 k-mesh points for the thin-film calculations. The energy convergence criterion is set to 0.0001 Ry. Figure 1(c) shows the band structure of the bulk Mg2Si crystal. Mg2Si is a semiconductor with an indirect band gap [20,21] between the Γ point and the X point. Results Now we investigate the band structure of Mg2Si thin films.
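For readers who want to reproduce the slab geometries described in the Method section, the following minimal sketch (Python with the ASE library, which is our own assumption; the original calculations were performed with WIEN2k's FLAPW method, so this only illustrates the structural setup, not the electronic-structure calculation) builds the antifluorite Mg2Si cell with a = 6.35 Å and cuts (001) and (110) films from it.

```python
from ase.spacegroup import crystal
from ase.build import surface

# Antifluorite Mg2Si (space group Fm-3m, No. 225): Si on the 4a sites,
# Mg on the 8c sites, lattice constant a = 6.35 Å as used in the paper.
bulk = crystal(("Si", "Mg"),
               basis=[(0.0, 0.0, 0.0), (0.25, 0.25, 0.25)],
               spacegroup=225,
               cellpar=[6.35, 6.35, 6.35, 90, 90, 90])

# Cut slabs stacked along [001] and [110]; 'layers' and the vacuum thickness
# are illustrative choices, not the values used in the paper.
film_001 = surface(bulk, (0, 0, 1), layers=5, vacuum=10.0)
film_110 = surface(bulk, (1, 1, 0), layers=5, vacuum=10.0)
print(len(film_001), len(film_110))  # number of atoms in each slab
```

Note that selecting a Si- or Mg-terminated (001) surface would still require trimming the outermost atomic layers by hand; the surface() helper does not control the termination.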
For the films along the [001] direction, the surface layers consist of either Si atoms or Mg atoms, each of which is referred to as a Si-terminated or Mg-terminated film, respectively. For the [110] direction, the surface layers consist of both Mg and Si atoms. The band structures for these thin films are shown together with the thin-film atomic arrangements next to each band diagram in Fig. 2. For the [001] direction, the band structures consist of two parts: the surface band structures are indicated by the red arrows, while the quantum confinement band structures are indicated by the blue arrows. For the [110] direction, the surface band is not clear and is almost merged with the quantum confinement band structure. Note that a direct band gap appears at the Γ point. The appearance of the direct band gap at the Γ point has been pointed out in Ref. [17] for the film with the (111) surface. This result is in contrast to the bulk crystal, where only the indirect gap appears [20,21]. This point will be an advantage for photodetection devices using Mg2Si [22][23][24]. To understand the surface electronic states, we now consider the layer-resolved density of states (L-DOS) for thin films stacked along the [001] direction; Figure 3 shows the L-DOS of the Si-terminated Mg2Si film, while Figure 4 shows the L-DOS of the Mg-terminated film. A common feature in both cases is that a large L-DOS at the Fermi energy appears near the surface layers, and the L-DOS is suppressed as the atomic layer moves towards the center of the film. This result indicates that the surface band structure in Fig. 2 actually originates from the surface electronic states. There is a qualitative difference in the L-DOS results. For the Si-terminated film, both Si and Mg atoms have a large L-DOS around the Fermi energy near the surface layers. For the Mg-terminated film, the L-DOS changes its magnitude near the Fermi energy. A large DOS at the Fermi energy generally leads to instability, which implies that the Mg-terminated film is more stable than the Si-terminated film. This conclusion is consistent with the result in Ref. [15]. Finally, we show further evidence for the surface band structure: the surface band structure should be unchanged when the film thickness is changed. Figure 5 shows that the surface band structure is indeed almost unchanged as the film thickness changes. The quantum confinement band structure shows, on the other hand, that the energy separation of the sub-bands decreases as the film thickness increases because of the confinement over the film. Conclusions We performed first-principles band structure calculations for Mg2Si thin films stacked along the [001] direction, both with Si-terminated and Mg-terminated surfaces, and along the [110] direction, aiming at understanding the surface band structure of Mg2Si films. Clear surface band structures appear in the [001]-direction films. The surface band structures can be directly detected by angle-resolved photoemission spectroscopy. We calculated the layer-resolved density of states and the band structure for various film thicknesses. These results show that the surface band structure comes from the electronic states near the surface atomic layers.
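To make the thickness dependence of the confinement sub-bands concrete, the following back-of-the-envelope sketch (Python; a simple infinite-well estimate with the free-electron mass, not the DFT result of the paper, and the thickness values are illustrative) shows how the sub-band spacing shrinks as the film becomes thicker.

```python
import numpy as np

hbar = 1.054571817e-34  # J s
m_e = 9.1093837015e-31  # kg (free-electron mass; the true effective mass differs)
eV = 1.602176634e-19

def subband_energies(thickness_nm, n_max=3):
    # Infinite square well along the stacking direction:
    # E_n = n^2 * pi^2 * hbar^2 / (2 m L^2)
    L = thickness_nm * 1e-9
    n = np.arange(1, n_max + 1)
    return (n**2 * np.pi**2 * hbar**2) / (2 * m_e * L**2) / eV

for t in (1.0, 2.0, 4.0):  # film thicknesses in nm (illustrative)
    E = subband_energies(t)
    print(f"L = {t} nm: E1..E3 = {np.round(E, 3)} eV, spacing E2-E1 = {E[1] - E[0]:.3f} eV")
```

The spacing scales as 1/L², so thicker films show more closely spaced confinement sub-bands, consistent with the trend described above, while a genuine surface state is insensitive to L.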
2,206.4
2020-01-01T00:00:00.000
[ "Materials Science", "Physics" ]
Understanding Progress : A Heterodox Approach This paper examines the possibility of understanding and measuring well-being as a result of “progress” on the basis of today’s dominant epistemological framework. Market criteria distort social values by allowing purchasing power to define priorities, likening luxury goods to basic needs; in the process they reinforce patterns of discrimination against disadvantaged social groups and women, introducing fatal distortions into the analysis. Similarly, because there are no appropriate mechanisms to price natural resources adequately, the market overlooks the consequences of the abuse of natural resources, degrading the quality of life, individually and collectively, or—in the framework of Latin American indigenous groups—foreclosing the possibility of “living well”. We critique the common vision of the official development discourse that places its faith on technological innovations to resolve these problems. The analysis points to the need for new models of social and environmental governance to promote progress, approaches like those suggested in the paper that are inconsistent with public policies currently in place. At present, the social groups forging institutions to assure their own well-being and ecological balance are involved in local processes, often in opposition to the proposals of the political leaders in their countries. Introduction The dominant epistemological framework in the social sciences, shaped by the neoclassical paradigm of economics, responds to the question "What is progress?"with statistical indicators based on market valuations of advances in material well-being, modified by other quantitative measures of the quality of life [1][2][3][4][5].The definitions of the concept are conditioned by the political contexts in which we operate, or, in some cases, by the proposals of new strategies that we would like to use in order to (re)build the world.In this short essay we focus on the latter: those proposals that can guide us in moving forward to overcome the growing socio-political, economical, and environmental obstacles that prevent current societies from advancing towards good living or "buen vivir".We focus on the underlying factors that define the way in which we can advance towards an improvement in the quality of life. To begin, it is useful to present an alternative proposal for measuring well-being, in contrast to the measures of Gross Domestic Product (GDP) or its components.We refer to the 1972 proposal by King Jigme Singye Wangchuck of Bhutan, to implement an alternative system of assessing a country's wellbeing, according to an index of "Gross Domestic Happiness" (GDH).This concept proposes to measure the richness of nations by evaluating the real well-being of their citizens, their happiness, measuring smiles instead of money or material possessions, as does the GDP [6,7].The initial idea was to assure that "prosperity is shared by the whole of society and well-balanced concerning cultural traditions conservation, environmental protection with a government that responds to the needs of those being governed"; rather than proposing an ideal system, a "utopia", the proposal aims pragmatically for success: "an economic system that maximizes the capability each person has to "be" and "do" what they value and have reason to value…" [8]. 
Although personal income in Bhutan is one of the lowest in the world, life expectancy increased around 20 years from 1984 to 1998, from 43 to 66 years; the literacy rate jumped from 10% in 1982 to 60% today, and the infant mortality rate fell from 163 deaths per one thousand inhabitants to 43 [8].This change in approach to development has been reinforced by the country's strong commitment to environment conservation.Bhutan's legislation defines 70% of the country as "green areas", including 60% as forests.Although this small country faces high unemployment, the perception of its inhabitants concerning their quality of life as "good" has been significant enough for the indicator GDH to be considered seriously in many other countries.Since Bhutan's initiative became well-known, interest in the problem of well-being has become quite widespread.The "economics of happiness" has become a burgeoning field [9] and the incorporation of well-being as a complementary policy goal, to complement orthodox economic management instruments, has become significant.Evidence of this is the "blue ribbon" Commission on the Measurement of Economic Performance and Social Progress, convened by President Sarkozy and headed by Stiglitz, Sen, and Fitoussi, to offer suggestions for alternative ways of "advancing the progress of society, as well as for assessing and influencing the functioning of economic markets", considering that there is a "marked distance between standard measures of important socio economic variables (…) and widespread perceptions" [10] (Executive Summary).In efforts to measure the phenomenon, the World Values Survey reports that Latin American countries, for example, recorded much more subjective "happiness" than their economic levels would suggest [11].Likewise, a multinational team organized by the Inter-American Development Bank, using a different methodology, concluded that individual perceptions and values in a variety of countries of the region reveal huge discrepancies with statistics concerning living conditions or the opinions of government agencies [12].In fact: "The evidence suggests that once people have their basic material needs adequately met, the correlation between income and happiness quickly begins to fade" [13].This is because measuring happiness includes subjective aspects, not material ones, such as the influence of social relations, autonomy, and self-determination, among others; for the poor, the problem exists because increases in national output often do not generate corresponding changes in well-being.Of course, sustainability indicators are an integral part of these measurement efforts. 
This effort to measure happiness has become so 'mainstream' that even in Mexico, a bulwark of orthodox economic management and measurement, the National Statistics Institute (INEGI) recently (November 2012) published the results of a sample survey used to construct such an index.Supporting the results mentioned above, the organization reported that in spite of the fact that more than one-half of Mexico's families have incomes below the poverty line, 84% of families said that they were satisfied (or moderately satisfied) with their lives.Using a methodology adapted from the "European Social Survey", the Mexican study clearly reflects the profound contradictions that the population faces: on the individual level, the results indicate a high level of satisfaction with their family life (8.6 on a scale of 10), autonomy (8.5), and health (8.2), while considerably less satisfaction with the economic situation (6.5), the country (6.8), and the education system (6.9)[14].As we will show in this paper, this distinction between the individual and the collective is a striking feature of these data; they reflect the inability of the market to attend to peoples' needs and the public sector's inability or unwillingness to provide the basic social services required by the population.As a result, the decision by many communities to implement alternative collective strategies for assuring their welfare is an interesting social development that points to the existence of significant social capacities, once they decide to build their own institutions. The Concept of Progress This is not the place to review the endless discussions about poverty indicators or their meaning.In many other circles, scholars are trying to understand what makes people happy and the determinants of a good quality of life.The academic community seems incapable of defining the concept, because of the difficulty of recognizing that it is the very structure of society and the operation of the global market that creates inequality and limits the possibilities for generating opportunities that would allow people to progress [15].Furthermore, current definitions dominant in the social sciences do not contribute to an appropriate understanding of either poverty or progress [16]. 
In this situation, then, new definitions of progress are more urgent than ever.An essential question is: What elements would offer an advance in our understanding?An answer would include some of the GDH's index components, such as education, health and medical services.This would require a change in emphasis of social policy; as in Bhutan, where life expectancy increased as a result of the new priorities in public policy.Similar results were achieved in Cuba, demonstrating the lack of correspondence between social benefits and economic growth [17].It is now clear that our efforts to advance towards a better quality of life cannot be limited to the instruments of the social policy tied to the market economy; in spite of improvements in education and medical care, it is evident that throughout the world we are suffering environmental degradation and a deterioration in our quality of life, resulting from the weakening of the social and solidarity networks (with a direct increase in personal and social violence) [15].The inability to guarantee a basic package of social services and economic assistance, accompanied by a shocking deterioration of environmental quality, have extreme effects on the quality of life everywhere, exacerbated by the prospects of a further deterioration occasioned by the intensification of the process of climate change [18].This is a multi-factorial theme and, for this reason, questioning the essential meaning of progress requires a multidisciplinary vision and revaluing some of the fundamental elements that we normally associate with the "traditional" society. Generally speaking, when problems such as well-being or progress are being discussed, we must refer to the development policies that create the social dynamics that prevent improvements in the quality of life.These policies are promoting a form of development that distances society from its stated objectives.It is evident that the advances offered by orthodox economists do not offer appropriate solutions.This is clear once we examine the process of development; Gilbert Rist describes this process with an enlightening definition of development in his classic work: "Development" consists of a set of practices, sometimes appearing to conflict with one another, which require-for the reproduction of society-the general transformation and destruction of the natural environment and of social relations. Its aim is to increase the production of commodities (goods and services) geared, by way of exchange, to effective demand. [19] (p.13). It is not necessary to analyze this definition in greater detail -as Rist did in his classic analysis of the concept-to realize how inappropriate the present development policies to promote a better quality of life are.Rist offers an interesting explanation, starting by pointing out that although cooperation and international help are necessary and often valuable, they "have little impact, compared with the many measures imposed by the implacable logic of the economic system" [19] (p.xi).He identifies three suppositions underlying development practice that impede progress: social evolutionism, individualism and economism [19] (p.9). 
Alternative Paradigms In this section we introduce two paradigms offering alternatives to "development"; they are philosophical and analytical approaches that are stimulating the intellectual work that must accompany the search for new ways of understanding.Even though these notes are limited to the academic literature, important social movements are underway, motivating and triggering scholarly work in spite of the resolute resistance from official institutions to any exploration of alternative models.These two important alternatives are: Degrowth and "Good Living", as it is called by the Andean groups where the term originated (in Quechua and Aymara).Other areas of academic work related to these two paradigms include ecological economics and social and solidarity economics.While they are not the only alternatives being proposed and implemented by communities around the world, and their practice reveals the substantial hurdles to be overcome as well as the difficulties in implementation, the description offers some insights into the steps that might be taken to forge different kinds of societies capable of generating a better quality of life along with social and environmental justice [20][21][22]. Degrowth The "new" field of "degrowth" emerged from the critical diagnosis of the current situation: "An international elite and a "global middle-class" are causing havoc to the environment through conspicuous consumption and the excessive appropriation of human and natural resources.Their consumption patterns lead to further environmental and social damage when imitated by the rest of society in a vicious circle of status-seeking through the accumulation of material possessions" [23]. During the international meeting where this statement emerged, its proponents offered a critique that extended to transnational corporations, financial institutions and governments, insisting on the profound structural causes of the crisis.Likewise, they indicated that the measures to confront crises by promoting economic growth will only deepen social inequalities and accelerate environmental degradation, creating a social disaster and generating economic and environmental debts for future generations, especially for those who live in poverty. 
Those attending the Conference declared that the main challenge is how to conduct the necessary transition (as they see it) to economic degrowth, transforming production to attend a smaller consumption package requiring fewer resources and less energy with beneficial effects for the environment, in a process that would be implemented in an equitable manner at national and global levels.The proposals offered by participants in this school of thought embraced all the dimensions of productive and social activity.A significant portion of the persons proposing these alternatives are optimistic with regard to the possibility of implementing changes in life styles and community organization to reduce the ecological footprint of the different social groups.In their critique of the current model there is a clear tendency to protect and strengthen individuals' rights and to reduce the scale of social and productive activity, emphasizing the local over the global.At this Second Conference on Economic Degrowth, however, there was a persistent effort to focus on the design of reforms that could be discussed and implemented within the current organizational framework of rich societies from which most of the participants came; the few efforts to question the possibility of implementing these changes in the current system of capitalist organization came to naught [24]. Although this school of thought takes its intellectual impulses from the field of ecological economics, it does not propose mechanisms to challenge the fundamental contradictions arising from current organization of society and its economy.On the basis of their ambiguous commitment to reduce the scale of production and consumption of the wealthy in the "advanced" countries, their proposals are committed to the possibility of a soft transition towards a "de-scaling", towards a "stationary state" economy.This school of thought proposes the possibility of reorganizing "rich" societies to release resources that would create political and productive spaces so that they could redeploy their energy to their own social fulfillment and guarantee appropriate living standards for their people.Many of their proposals are technological, offering new physical and productive solutions that ignore institutional and corporate structures that would prevent these changes, while also completely ignoring their dependency on the countries from the "south" for even a more austere lifestyle. Good Living (Sumak Kawsay) The concept of "Good Living" is a translation or adaptation of the expression in Quechua and Aymara, the languages of descendants of the Incan peoples in Ecuador, Peru and Bolivia.It is defined in the preface of the new Ecuadorian Constitution as "a new form of citizens' coexistence, in harmony and diversity with nature, in order to achieve a good life, or sumak kawsay".Elevated to constitutional principle in Bolivia and Ecuador [25,26], sumak kawsay recognizes the "Rights of Nature" and a new complex citizenry, "that accepts social as well as environmental commitments.This new citizenry is plural, because it depends on its multiple histories and environments, and accepts criteria of ecological justice that goes far beyond the traditional dominant vision of justice" [27,28]. 
As expressed by Alberto Acosta, one of its important protagonists in the Ecuadorian scene, the basic value of an economy, in a Good Living regime, is solidarity.A different economy is being forged, a social and solidarity economy, different from economies characterized by a supposedly free competition, that encourage economic cannibalism among human beings and feed financial speculation.In accordance with this constitutional definition, they hope to build relations of production, exchange and cooperation that promote efficiency and quality, founded on solidarity.We talk about systematic productivity and competitiveness, based on collective advances rather than individuals who are arbitrarily added together as is often the practice at present [29]. In contrast to current policies for facing the problem of the existence of growing segments of society that require charity or official transfer payments for their survival, this approach towards a social and solidarity economy offers a stark contrast with the proletarian organization of community life.Its approach far exceeds the reforms proposed by many participants in the debates based on economistic visions which do not consider abandoning individual or corporate accumulation at the expense of collective well-being.Sumak kawsay requires reorganizing social life and economic production, transforming the essential function of the market, shaping it so it can serve society rather than determining social relations, as it does at present [30]. Sumak kawsay proposes a holistic integration of economic, social, and political processes to support a different organization of society and its relationship with nature.The new social dynamic is expected to generate equality and freedom, social justice (productive and distributive) as well as environmental justice; it is evident that dramatic actions are required to reverse the currently existing inequalities [29].If this principle were applied, it would constitute a solid base for reorienting the productive apparatus and political and cultural relations, reversing inequalities that violate rights and prevent the possibilities of an effective democracy.Progress, in this sense, would be defined in terms of a social and productive organization that generates equality directly, that produces social justice through direct democracy. Constructing a Different Way of Life The principles examined in this text are integral part of a long tradition of social movements challenging the elites that shape institutions preventing the fruits of progress to improve the lot of the majority.They take us back to the dawn of the French Revolution in the Paris Commune, to Richard Owens' commune and to the intentional communities of Protestant and Jewish sects, as well as to the workers' struggles in the 19th century.Most of them were suppressed in one way or another with tragic massacres committed by forces at the service of a particular model of the concept of "progress" that has betrayed humanity and the planet. Today, individuals who are looking for another model of progress realize that Schumacher's "Small is Beautiful" still has a lot to teach us [31].We are also obliged to consider that Marshall Sahlins' affirmation might now be truer than ever: hunter-gathers offer a model of a really affluent society. The world's most primitive people have few possessions, but they are not poor. Poverty is not a certain small amount of goods, nor is it just a relation between means and ends; above all, it is a relation between people. 
Poverty is a social status. As such it is the invention of civilization. It has grown with civilization, at once as an invidious distinction between classes and more importantly as a tributary relation that can render agrarian peasants more susceptible to natural catastrophes than any winter camp of Alaskan Eskimo… Sahlins concludes by asking, rhetorically: "Might we not ask, as do some scholars and critics: Did medieval peasants work less than today's industrial working-class?" [32]. Although these reflections can provide some indicators, they certainly raise many questions.In order to document the fruitless dynamic of current efforts of programs such as the Millennium Development Goals, or the destructive effects of society's current organization, we can turn to measurements of life expectancy, educational levels, morbidity and mortality rates by age, social or gender groups.Similarly, we can include diverse indicators of economic and geographic inequality, and of indices of access to social and cultural infrastructures.We can add diverse efforts to identify the relationship between production and human well-being; for example, variables related to the freedom of association in unions and their effectiveness to protect internationally recognized working rights; the quantification of measures of healthiness and work safety, and a welfare system after workers retire, would also be associated with this dimension. Most of these measurements, however, avoid the fundamental criticism of alternative visions; in other words, a description of society's current organization and its productive apparatus, with all measurements already mentioned, does not consider the way in which the process contributes to the enrichment of a few at the expense of the majority.After all, while this concentrated (and dynamically growing) control persists, the possibility of reverting the deepening poverty and exclusion of huge social groups will be minimum (or null).Perhaps one of the greatest barriers to improvements in the quality of life so deeply engrained in the present discussions of "progress" is its emphasis on the role of the individual and the absence of any analysis of the benefits of collective action for a society's advance [32]. In our search for alternative explanations, we focus on how many societies continue to persist in their stubborn ties to the land, to their traditional structures of production and reproduction.Although some of our colleagues who work within the dominant epistemologies are convinced that these societies are condemned to disappear, to sink into a miasma of sub-proletarian misery, our research suggests that what appears as poverty in many rural societies is the result of deliberate collective choices made by their members to shape or reshape their communities on the basis of different principles [34,35].The communities focus on satisfying their own basic needs and assuring an ever more effective ability to govern themselves and negotiate their autonomy in the face of intensifying efforts to integrate them into global markets and the logic of rationalities based on individual benefit and monetary valuations of social relations and natural resources [36]. 
The evidence for this peculiar situation is the concerted efforts by societies throughout the Americas to forge solutions on their own, or in alliance with other communities or in collaboration with outside agents.What seems clear is that these efforts are not exceptional cases of peoples trying to do things differently; rather they are rooted in alternative visions of how the world operates and their relationship to the planet.This is poignantly examined in a detailed methodological discussion of the implications of being indigenous, of the need for learning about different epistemologies already available and being used to better understand these alternative proposals [22,37].The relatively recent recognition of the significance of these non-western epistemologies reflects their legitimacy in international academic institutions; unfortunately, this recognition has not extended to their incorporation into the methodologies of "orthodox" social science analysis.Throughout the world, however, there are numerous social movements in defense of their territory, in proposals for building alternatives that lead to a better quality of life, although not necessarily more consumption that are derived from these epistemologies.What is striking is the volume of literature documenting these efforts, both those that are "bringing up to date" long traditions of many groups who tenaciously defend their ideological and cultural heritages [38] as well as those who are searching out new paths, directly controllable by themselves, such as the Zapatistas in Mexico and the MST (Landless People's Movement) in Brazil [39][40][41]. The process is not limited to ethnic communities [42][43][44].It is interesting to note the significance for many peasant communities of the consolidation of one of the largest social organizations (and movements) in the world, Vía Campesina [45,46].This group integrates local small-scale farmer organizations from around the world, with a view to promoting local capacities for self-sufficiency based on technologies that combine the benefits of organic cultivation where appropriate with intensive use of the producer's own equipment and knowledge to increase production, with an important focus on food self-sufficiency.This approach, that combines agroecology with the reorganization and strengthening of local institutions, is widely acknowledged to be appropriate for overcoming many of the considerable obstacles impeding the successful expansion of small-scale farming in the third world [47][48][49].Evaluations of the implementation of these strategies reflect the benefits not just of the productive gains from a production system reoriented to local needs and distribution systems, but their contribution to strengthening local communities and environmental balance [50,51]. 
There is no space in this text to delve into the details of these innovative strategies, many of which do not offer material solutions to poverty when measured by ownership or access to a certain package of commodities.Instead, they address a much more thorough-going re-conceptualization of the possibilities for a different meaning of the concept of "quality of life", and therefore of the social and material significance of poverty [52,53].In this different context, then, it might be that much of the poverty to which most of the literature is addressed, has its origins in the individualism and the alienation of the masses whose behavior is embedded in the Western model of modernity, a model of concentrated accumulation based on a system of deliberate dispossession of the majority by a small elite [54,55].The collectivism implicit in the proposals offered by the communities implementing their own areas of conservation is accompanied by the social concomitant of solidarity that pervades the processes inherent in these alternative strategies.The realization of the importance of people becoming involved in identifying and protecting their territories is an integral part of a complex dynamic that examines the importance of the place-based nature of cultures and their survival.As a result, peoples around the world are finding accompaniment in their efforts to protect these areas by a global alliance of such communities and organizations seeking to promote this effort; similarly, the communities are organizing their own circles for mutual support and broader understanding of their capabilities to improve measurably their living conditions as part of processes that enables them to govern themselves more effectively while also contributing to ecosystem protection and rehabilitation [52,56]. In this context we have distilled five underlying principles for this construction-derived from the practice of many recent experiences-that contribute to avoid the "syndrome" of poverty: autonomy and communality; solidarity; self-sufficiency; productive and commercial diversification; and sustainable management of regional resources [50].In many of these circles, the collective commitment to ensure that there are no individuals without access to their socially defined basic needs implies a corresponding obligation of all (and of each one) to attend to the strengthening of the community's productive capacity, to improve its infrastructures (physical, social, environmental), and to enrich its cultural and scientific capabilities.Poverty, in this light, is an individual scourge-created by the dynamics of a society based on individualism and its isolation-that is structurally anchored in the very fabric of society.To escape from this dynamic, the collective subject that is emerging in the process offers a meaningful path to overcoming the persistence of poverty in our times. 
But true social and environmental progress will also require taking note of societies' growing dependence on the extraction of natural resources, both renewable and non-renewable.The increasing intensity of the organized protests against the social and environmental degradation that this expansion has wrought is part of the same process of collective construction of alternative organizations and productive structures [44].Communities' actions make it evident that the present patterns of territorial expansion and resource use are not viable; together with changes in consumption patterns and energy use, it will be necessary to reduce our dependency on these natural resources, to reduce the generation of diverse pollutants, and especially the most toxic, in addition to the emission of greenhouse gases associated with climate change.Alternative models of community organization, involving collective commitments to social reorganization and respect for the planet with a deliberate reorientation towards guaranteeing basic needs and attending the demands for the quality of life rather than the amount of consumption, will have to guide our search for more effective paths to sustainability. These measures would have to be accompanied by efforts to develop mechanisms to identify the need for ecosystem rehabilitation and the possibilities of effectively protecting some vulnerable areas and species in danger of extinction, incorporating processes to integrate local populations in these tasks, taking advantage of their knowledge and own organizations, with appropriate recognition that would allow them to live with dignity.These tasks are not readily quantified, in spite of our recognition of the importance of revaluing the significance of these environments relative to material production. In contrast, there are other indicators in the process of development in international circles, to facilitate the challenge of identifying environmental problems.Some include the indicators already mentioned above, as well as the energy intensity of production and the volume of greenhouse gases generated globally and by different productive sectors.Among ecological economists there is an effort to develop and systematize the study of these processes; among the most intensively studied at present are: the Human Appropriation of net Primary Productivity (HANPP), the ecological rucksack, and material balances [57][58][59].Current mechanisms to control global emissions (such as the market for carbon emissions permits and the program for "reduction of emissions for degradation and deforestation"), however, are allowing major polluters to continue their practices and their customers to maintain their consumption patterns, by simply purchasing underpaid environmental services from producers in the Third World.It would be necessary to become much more critical about the use of current environmental quality indicators in order to try to establish processes to really advance towards "progress".If we were to insist on a global ethic, it would not be permissible to postpone recognizing every human being's moral right to satisfy his/her basic needs, to fulfill his/her wishes of having a better life, to conserve the necessary vital functions of ecosystems, and to have a fair access to global resources. 
Conclusion This reflection about "progress" introduces a critical, yet pessimistic vision of the possibility of understanding and measuring the concept within today's dominant epistemological context.It rejects the orthodox evaluation of the growth of production, which equates basic needs with superfluous ones, accepting the discrimination against diverse social groups and gender, condemning most current indicators to a fatal distortion.We also criticize the tendency to underestimate the consequences of (ab)using natural resources and "sub-altern" groups; when the environment is degraded by human activity, it contributes to degrading individuals and their societies; in other words, this degradation limits the possibility of "good living".We reject the vision of centering our hope to overcome current contradictions on technological innovations since this leaves us with insuperable difficulties. The search for alternative strategies, such as those mentioned in this document, do not offer solutions that are consistent with existing institutions, shaped by global capital and served by accounting agencies and social and economic performance assessment agencies.For that reason, after all is said and done, an effort to measure well-being and "progress" would require people, their communities, and their regions to forge their own "niches of sustainability" in a sea of social disintegration and inequality plagued by problems of environmental degradation and define ways of quantifying the results.
7,019.6
2013-01-30T00:00:00.000
[ "Economics", "Philosophy" ]
Higher bottomonium zoo In this work, we study higher bottomonia up to the $nL=8S$, $6P$, $5D$, $4F$, $3G$ multiplets using the modified Godfrey-Isgur (GI) model, which takes account of color screening effects. The calculated mass spectra of bottomonium states are in reasonable agreement with the present experimental data. Based on spectroscopy, partial widths of all allowed radiative transitions, annihilation decays, hadronic transitions, and open-bottom strong decays of each state are also evaluated by applying our numerical wave functions. Comparing our results with the former results, we point out difference among various models and derive new conclusions obtained in this paper. Notably, we find a significant difference between our model and the GI model when we study $D, F$, and $G$ and $n\ge 4$ states. Our theoretical results are valuable to search for more bottomonia in experiments, such as LHCb, and forthcoming Belle II. I. INTRODUCTION Since J/ψ and Υ(1S ) were observed in 1974 and 1977 [1][2][3][4], respectively, a heavy quarkonium has become an influential and attractive research field because its physical processes cover the whole energy range of QCD. This energy range provides us an excellent place to study the properties of perturbative and non-perturbative QCD [5]. An exhaustive and deeper study of a heavy quarkonium helps people better understand the QCD characteristics. Since the bottomonium family is an important member of heavy quarkonia, people have made great effort to investigate a bottomonium experimentally and theoretically in the past years. Thirty years ago, a couple of higher excited bottomonia have been experimentally observed in succession, e.g., Υ(nS ) with the radial quantum numbers n from 2 to 6 and P-wave spin-triplet states χ bJ (1P) and χ bJ (2P) with J = 1, 2, 3 [3,4,[6][7][8][9][10][11]. With the upgrade and improvement of two B-Factories BaBar and Belle together with LHC, new breakthroughs in experiments have been achieved in recent years. First, the pseudoscalar partners η b (1S ) and η b (2S ) were identified by the BaBar Collaboration in 2008 and 2011, respectively [12,13]. Subsequently, the follow-up studies by the CLEO [14,15] and Belle Collaborations [16] confirmed the existence of these two bottomonia, where Belle did the most accurate measurements of the mass to date with values of 9402.4±1.5±1.8 MeV and 9999.0±3.5 +2.8 −1.9 MeV, respectively [16]. The χ b1 (3P) state is a new state discovered by LHC in the chain decay of χ b1 (3P) → γΥ(1S )/γΥ(2S ) → γµ + µ − [17]. This state has been confirmed by the D∅ Collaboration [18]. BaBar discovered a possible signal of the Pwave spin-singlet h b (1P) state in 2011 [19], and subsequently, Belle firstly and successfully confirmed the observation of the h b (1P) in the process of Υ(5S ) →π + π − h b (1P), where the first radial excited state h b (2P) has been also observed with mass value 10259.76 ± 0.64 +1. 43 −1.03 MeV [20]. Thus, both the ground and first radial excited states of the P-wave bottomonium have been fully established experimentally. However, this is not enough to make people satisfied because many of experimental information is still incomplete, including the total width and branching ratios of some significant decay channels, which still require both of experimental and theoretical efforts. 
With regard to the search for D-wave bottomonia, there has also been some progress by the CLEO and BaBar Collaborations [21,22]: the spin-triplet Υ(1^3D_2) was observed with a 5.8 σ significance, while the confirmation of the Υ(1^3D_1) and Υ(1^3D_3) states remains dubious on account of their lower significances [22]. Such plentiful experimental achievements in the bottomonium family not only increase our motivation to search for more higher excited bottomonia in future experiments but also provide an excellent opportunity to test various theories and phenomenological models.

To better understand the properties of bottomonia, both progress in experimental measurements and calculations with theoretical and phenomenological methods are necessary. Lattice QCD is usually considered the most promising solution to the non-perturbative difficulty in the low-energy region. In principle, the spectrum of heavy quarkonium can be obtained directly from lattice QCD; in practice, however, it is quite demanding, because the large heavy-quark mass m_Q requires far greater computational effort than for light quarkonia [23]. In spite of this, lattice QCD still has advantages in the calculation of bottomonia [24,25]. Until the lattice method becomes more comprehensive, the mass spectra of hadrons are usually treated with phenomenological models, namely potential models, which take into account the non-perturbative effects of QCD in the interaction potentials and can give satisfactory results consistent with experiment. In the past decades, a variety of potential models have been used to study the bottomonium system [26][27][28][29][30][31][32][33][34][35][36][37][38][39][40][41][42][43][44][45], among which the most well-known is the Godfrey-Isgur relativized quark model. This model has been widely used in studies ranging from light mesons to heavy mesons [26,46]. Recently, Godfrey and Moats used the GI model to systematically investigate the properties of the bb system, including a large number of higher excited states, and gave results for production in e+e− and pp collisions so that experimentalists can look for the most promising new bottomonia [26]. However, as in other meson families with abundant experimental information, the GI model tends to predict masses for high-lying states that are too large. In comparison with the measured bb states, one of the most salient examples is the Υ(11020) or Υ(6S), whose theoretical mass is about 100 MeV larger than the experimental value. In Ref. [47], the present authors proposed a modified GI model with color screening effects to investigate higher excited charm-strange mesons. Namely, they took into account the effect that the confinement potential is softened in the long-range region by light quark pairs induced from the vacuum [48], and found that the addition of this effect describes the properties of the higher charm-strange mesons well. Hence, the recent study of bottomonia with the GI model [26] motivates us to explore what different conclusions can be drawn after including screening effects in the GI model.
With this motivation, we carry out a comprehensive study of the properties of the bottomonium family with the modified GI model including screening effects, primarily focusing on the higher excited states. In this work, we calculate the mass spectra and decay behaviors of bottomonia using the modified GI model, including computations of radiative transitions, hadronic transitions, annihilation processes, and OZI-allowed two-body strong decays, where the meson wave functions also come from the modified GI model. The model is thoroughly introduced in the next section. After a χ² fit to the abundant experimental information, we obtain a fairly good description of the bottomonium mass spectra, in which the masses of the higher excited states are dramatically improved compared with the estimates of the GI model [26]. At the same time, we also predict the masses of higher bottomonia, which are valuable for experiments such as BaBar, Belle, and LHC in the search for these missing particles. From the mass spectra of the bb states, we discuss bottomonia by dividing them into those below and above the open-bottom thresholds. For the states below the thresholds, since strong decay channels are not open, radiative transitions and annihilation decays are usually dominant; dipion hadronic transitions are, of course, also major decay modes. Comparing the calculated partial widths of these modes with those of the GI model, we find that the two models give almost the same results for the low-lying states but differ for the higher excited states such as the 1F or 1G states. This shows that the screening effect has little influence on the low-lying states but is crucial for the higher ones. For the states above the thresholds, we calculate the strong decays using the 3P0 model, where we adopt numerically obtained meson wave functions rather than SHO wave functions with corresponding effective β values. Because the states above the thresholds have relatively large masses, the influence of the screening effect becomes more pronounced for them, and our results, which differ substantially from the GI model, should give better predictions for the higher excited states. We emphasize that the discussion and predictions on the higher bottomonia are the main points of this work. We hope that the present investigation can not only reveal the inherent properties of the observed bottomonia but also provide valuable clues to the experimental search for more missing bb states in the future.

This paper is organized as follows. In Sec. II, we give a brief introduction to the modified GI model with the screening effect and compare the results of the GI model and the modified GI model; the bottomonium spectroscopy obtained from our mass spectrum is also analyzed in this section. Combining the information from the mass spectra and decay behaviors, we study the properties of the states below the BB threshold and those above it in Secs. III and IV, respectively, where plentiful predictions are given. In Sec. V, we compare the results of the modified GI model with those of a coupled-channel quark model. We give a summary in Sec. VI. Finally, all the theoretical tools for the various decay processes and physical quantities, such as partial decay widths, branching ratios, annihilation decays, radiative transitions, hadronic transitions, and total widths, are presented in Appendix A.
A complete list of the interaction potentials of the modified GI model is given in Appendix B.

II. SPECTRUM

A. Modified GI model with a screening effect

In this work, we use the modified Godfrey-Isgur model with a screening potential [47] to calculate the bottomonium mass spectrum and wave functions, which are then employed in the calculation of decay processes. The Godfrey-Isgur relativized quark model [46] is one of the most successful models in predicting the mass spectra of mesons. Even though the GI model has achieved great success, for the higher orbital and radial excitations the predicted masses are on the whole larger than the observed masses of particles newly discovered in recent decades. Coupled-channel effects usually play an important role for the higher excitations, and a screening effect can partly substitute for the coupled-channel effect [49,50].

In 1985, Godfrey and Isgur proposed a relativized quark model motivated by QCD [46]. Compared with other models, relativistic corrections, a universal one-gluon-exchange interaction, and a linear confinement potential are the most important features of this model. In the following, before presenting our modified GI model, we first introduce the GI model itself. The Hamiltonian of a meson system is composed of the kinetic energy and the interaction between quark and anti-quark, where the kinetic energy adopts the relativistic form

$H = \sqrt{\mathbf{p}^2 + m_1^2} + \sqrt{\mathbf{p}^2 + m_2^2} + \tilde{V}(\mathbf{p},\mathbf{r})$,

where m_1 and m_2 are the constituent quark masses of the quark and anti-quark, respectively. The interaction between quark and anti-quark includes the short-range one-gluon-exchange potential G(r), the long-range confinement S(r), the spin-orbit interaction, and the color hyperfine interaction (contact and tensor terms). G(r) and S(r) take the forms

$G(r) = -\frac{4\alpha_s(r)}{3r}$ (2)

and

$S(r) = br + c$,

where α_s(r) is a running coupling constant. Relativistic contributions enter in two ways. First, the model applies a smearing transformation to G(r) and S(r); writing V(r) generically for either of them,

$\tilde{V}(r) = \int d^3 r'\, \rho(\mathbf{r}-\mathbf{r}')\, V(r')$, with $\rho(\mathbf{r}-\mathbf{r}') = \frac{\sigma^3}{\pi^{3/2}}\, e^{-\sigma^2 (\mathbf{r}-\mathbf{r}')^2}$,

where σ is a smearing parameter. Second, an important reflection of relativistic effects lies in the momentum dependence of the interactions between quark and anti-quark; momentum-dependent factors are therefore introduced. The smeared one-gluon-exchange potential is modified as

$\tilde{G}(r) \to \left(1 + \frac{p^2}{E_1 E_2}\right)^{1/2} \tilde{G}(r) \left(1 + \frac{p^2}{E_1 E_2}\right)^{1/2}$,

while the contact, tensor, vector spin-orbit, and scalar spin-orbit terms are changed as

$\tilde{V}_i(r) \to \left(\frac{m_1 m_2}{E_1 E_2}\right)^{1/2+\epsilon_i} \tilde{V}_i(r) \left(\frac{m_1 m_2}{E_1 E_2}\right)^{1/2+\epsilon_i}$,

where E_1 and E_2 are the energies of the quark and anti-quark, and ε_i corresponds to the i-th type of interaction (ε_c, ε_t, ε_sov, and ε_sos). One readily notices that in the non-relativistic limit these factors become unity; the particular values of ε_i can be obtained from Table I.

After this brief review of the GI model, we introduce the modified GI model with a screening potential. Our previous work [47] presented the modified GI model with a color screening effect and revisited the properties of the charm-strange mesons; subsequently, the charm mesons were also studied in the framework of the modified GI model [51], where the description of the higher excitations was greatly improved. The screening effect is implemented by modifying the confinement potential as

$S(r) = br + c \to S^{scr}(r) = \frac{b\,(1 - e^{-\mu r})}{\mu} + c$, (8)

where μ is a parameter that characterizes the strength of the screening effect. As described in the preceding paragraphs, the modified confinement potential Eq. (8) also needs relativistic corrections. Details of the techniques can be found in Ref. [47], and a complete list of the interaction potentials of the modified GI model is given in Appendix B.
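To make the effect of Eq. (8) concrete, the following minimal sketch (ours, not from the paper; the values of b, c, and μ are illustrative placeholders, not the fitted parameters of Table I) compares the linear and screened confinement potentials:

```python
import numpy as np

# Illustrative parameters (placeholders, NOT the fitted values of Table I):
b = 0.2    # string tension, GeV^2
c = -0.3   # constant shift, GeV
mu = 0.07  # screening strength, GeV

def s_linear(r):
    """Unscreened linear confinement S(r) = b*r + c (r in GeV^-1)."""
    return b * r + c

def s_screened(r):
    """Screened confinement S_scr(r) = b*(1 - exp(-mu*r))/mu + c."""
    return b * (1.0 - np.exp(-mu * r)) / mu + c

for r in [1.0, 5.0, 10.0, 20.0]:  # quark-antiquark separations in GeV^-1
    print(f"r = {r:5.1f} GeV^-1: linear = {s_linear(r):6.3f} GeV, "
          f"screened = {s_screened(r):6.3f} GeV")
# At small r the two agree (S_scr ~ b*r + c for mu*r << 1), while at large r
# the screened potential saturates at b/mu + c, lowering high excitations.
```

The saturation value b/μ + c bounds the confinement energy at large separations, which is why the masses of high radial and orbital excitations are lowered the most.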
It is worth mentioning that the modified GI model also employs simple harmonic oscillator (SHO) wave functions, which can be regarded as a complete basis in which to expand the exact meson wave functions. In momentum space, an SHO wave function has the form

$\Psi_{nLM_L}(\mathbf{p}) = R_{nL}(p)\, Y_{LM_L}(\Omega_p)$, with $R_{nL}(p) = \frac{(-1)^n (-i)^L}{\beta^{3/2}} \sqrt{\frac{2\, n!}{\Gamma(n+L+3/2)}} \left(\frac{p}{\beta}\right)^L e^{-p^2/2\beta^2}\, L_n^{L+1/2}\!\left(\frac{p^2}{\beta^2}\right)$,

where $L_n^{L+1/2}$ is an associated Laguerre polynomial and β is the parameter of the oscillator radial wave function. In the next subsection, we calculate the bottomonium mass spectra with the modified GI model, which helps us to understand the bb family and is also used in the subsequent decay calculations.

B. Mass spectrum

Due to the introduction of the new parameter μ in our modified GI model, we need to combine experimental information to fix all of the model parameters.

[Fig. 2: Mass spectrum of bottomonium. Bottomonia are classified by the quantum numbers $^{2S+1}L_J$; in each category, from left to right, black, pink, and purple lines stand for our results from the modified GI model, the predictions of the GI model [26,46], and the experimental values taken from the PDG [52], respectively. The position of the open-bottom threshold is indicated by the dashed line. The notation J denotes all total angular momenta of the triplet states, such as J = 0, 1, 2 for the P-wave states and J = 1, 2, 3 for the D-wave states, and so on.]

At the same time, the richness of observed bottomonia provides an excellent opportunity to determine these model parameters, for which the χ² fitting method is adopted. The χ² fitting method finds the minimum χ² value and thereby a set of fitting parameters for which the theoretical predictions of the phenomenological model and the experimental results are most consistent overall. χ² is defined as

$\chi^2 = \sum_i \left(\frac{V_{Exp}(i) - V_{Th}(i)}{V_{Er}(i)}\right)^2$,

where V_Exp(i), V_Th(i), and V_Er(i) are the experimental value, theoretical value, and error of the i-th data point, respectively. We select eighteen bottomonia, whose measured masses are taken from the PDG [52], to fit our model parameters, and a uniform value V_Er(i) = 5.0 MeV is chosen as the fitting error for all the states, which is larger than their respective experimental uncertainties. The reason is that the experimental errors of these particles are relatively small and unevenly distributed; in other words, if the error of one particle were much smaller than the others, its mass would be weighted too strongly, which is unfavorable for the overall fit. The final fitted χ² value is 11.3 with this uniform error, and all of the corresponding fitted parameters of the modified GI model are listed in Table I, where the parameters of the GI model are also given for comparison. In Table II, the theoretical mass values of the GI model are also listed, and the corresponding χ² value is calculated to be 31.4. Comparing the χ² values, one can easily see that the fitted modified GI model clearly improves the overall description of the bottomonium spectrum.
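As an illustration of this fitting procedure, here is a schematic sketch (ours; the mass function below is a toy stand-in for a real modified GI spectrum calculation, and the data array is a placeholder rather than the eighteen states of Table II):

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder data: experimental masses (MeV); in the paper these are the
# eighteen observed bottomonia of Table II, with a uniform 5.0 MeV error.
m_exp = np.array([9460.3, 10023.3, 10355.2])
v_err = 5.0

def model_masses(params):
    """Toy stand-in for the modified GI spectrum; a real calculation would
    solve the relativized Hamiltonian for the given (b, c, mu, ...)."""
    a0, a1 = params
    return a0 + a1 * np.arange(len(m_exp))

def chi2(params):
    residuals = (m_exp - model_masses(params)) / v_err
    return np.sum(residuals**2)

best = minimize(chi2, x0=[9400.0, 400.0], method="Nelder-Mead")
print("fitted parameters:", best.x, " chi^2 =", best.fun)
```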
Although the GI model was successful in investigating the bottomonium spectrum, comparison with the observed masses reveals two main shortcomings in its theoretical estimates. The first is that the theoretical masses of the low-lying states are universally 10-20 MeV smaller than the experimental values. The second is that the theoretical predictions for the higher excited states are larger than experiment, because the screening effect is not taken into account; this is especially true for the Υ(6S) state, whose mass deviation is close to 100 MeV. It is worth mentioning that the experimental mass of the Υ(10860) or Υ(5S) state may be overestimated; that is, the recent BaBar experiment tends to give a value about 30 MeV higher than the initial experimental one [52]. This means that the theoretical prediction of the modified GI model for this state will inevitably be small, which also explains why the fitted χ² value cannot be much smaller. In fact, other studies with nonrelativistic potential models that include a screening effect [28,29,43] are also not very satisfactory in describing this state, Υ(5S). Except for Υ(10860), our results for the bottomonium spectrum are quite good, as can be seen from Table II. Our model not only resolves the two shortcomings of the GI model mentioned above but also gives fairly precise theoretical estimates, with the calculated masses of many particles agreeing with the experimental results within the fitting errors. Based on this good description of the bottomonium spectrum, we also predict, in Table III, the masses of the higher bottomonia from the S wave to the G wave that have not yet been observed. To help readers grasp the bottomonia more intuitively, the mass spectra are also shown in Fig. 2, which gives an overall view of the spectroscopy and is convenient for comparing the results of the two models and experiment.

As seen from Fig. 2, after introducing the screening effect, the impact of the screening potential on the ground states and low-lying states is not obvious. In contrast, the screening effect begins to be important in the region of the higher excited states, especially those with larger radial quantum numbers n: the greater the n value, the more prominent the effect, as reflected in the increasingly obvious mass differences between the GI model and the modified GI model. This feature is also reflected in the wave functions of the bottomonia; taking the S-wave Υ family as an example, the radial wave functions are shown in Fig. 3. The wave functions of the Υ(mS) with m = 1, · · · , 4 are almost indistinguishable between the two models, but the higher radial excitations show visible differences in the positions of the nodes and in the radial distributions. Of course, to thoroughly understand the impact of the screening effect and the nature of the bottomonium family, we need to analyze the decay behaviors, which is the main task of the later sections.

III. BOTTOMONIA BELOW THE OPEN-BOTTOM THRESHOLDS

A. The η_b states

The η_b family, with quantum numbers ^1S_0, comprises the partners of the spin-triplet Υ states. The ground state η_b(1S) and the radially excited state η_b(2S) have been established experimentally, and their measured average masses are 9399.0 ± 2.3 MeV and 9999.0 ± 3.5 +2.8 −1.9 MeV [52], respectively, in good agreement with our theoretical values in Table II. A more interesting physical quantity is the hyperfine mass splitting between the spin-singlet and spin-triplet states, ∆m(nS) = m[Υ(nS)] − m[η_b(nS)], which reflects the spin-dependent interaction and can be used to test various theoretical models. For the 1S state, our theoretical hyperfine mass splitting is 65 MeV, which agrees within errors with the experimental result of 62.3 ± 3.2 MeV [52] and the lattice calculation of 60.3 ± 7.7 MeV [53].
The ∆m(2S) from our modified GI model is estimated as 28 MeV, which is likewise consistent with the measured value of 24.3 +4.0 −4.5 MeV [16] and the lattice computation of 23.5−28.0 MeV [53]. It is worth mentioning that the mass splittings predicted by the GI model and by our modified GI model are exactly the same, although there are some differences in the respective masses. Together with this successful description of the η_b masses, we also explore their decay behaviors. We list the partial widths and branching ratios of the electromagnetic decays, annihilation decays, and hadronic transitions for the η_b states below the open-bottom thresholds, from η_b(1S) up to η_b(4S), in Table VI. Here, the OZI-allowed two-body strong decay channels are not yet open, so the annihilation into two gluons is dominant, with a branching ratio of almost 100%. From Table VI, we see that the decay widths from our modified GI model are not very different from the GI results [26]; this is because the screening effects have little influence on the wave functions of the low-lying states, as discussed in Sec. II. The decay predictions for η_b(1S) and η_b(2S) are also consistent with those of the nonrelativistic constituent quark model [27]. For the two unobserved states η_b(3S) and η_b(4S), we predict masses of 10336 and 10597 MeV and total widths of 5.52 and 4.07 MeV, respectively. The corresponding hyperfine mass splitting ∆m(3S) is 20 MeV, while the result of the nonrelativistic constituent quark model [27] is 19 MeV; our estimate for ∆m(4S) is 15 MeV. These predictions require validation by future experiments. In addition to the dominant two-gluon annihilation decay, other possible main decay channels of η_b(3S) are η_b(1S)ππ and h_b(2P)γ, which have almost the same branching ratios. The decay mode h_b(1P)γ is also estimated to be important in our calculation, but the corresponding prediction of the nonrelativistic constituent quark model [27] is two orders of magnitude smaller than our result. Similarly, the hadronic transition to η_b(1S)ππ and the E1 radiative transition to h_b(3P)γ are predicted to be important decay modes of η_b(4S), with branching ratios of 9.1 × 10⁻³ and 3.7 × 10⁻⁴, respectively.

B. The Υ states

From Fig. 2, we can see clearly that the Υ(4S) state lies slightly above the open-bottom threshold. Hence, the states below the threshold are only Υ(1S), Υ(2S), and Υ(3S), which were the first bottomonia discovered, by the E288 Collaboration at Fermilab, through the study of muon pairs produced with invariant masses larger than 5 GeV [3,4]. At present, these three particles raise little controversy thanks to the high accuracy of the experimental measurements of their masses, which can usually be matched by the calculations of most potential models and lattice QCD. The mass differences between our theoretical estimates and the experimental central values for these three states are 3, 5, and 1 MeV, respectively, which also indicates the reliability of our modified GI quark model for the mass spectrum. Additional measurements have been reported by the BaBar collaboration [54]. The partial widths and branching ratios of the electromagnetic decays, annihilation decays, and hadronic transitions, together with the total widths for Υ(1S), Υ(2S), and Υ(3S), are listed in Table VII.
Compared with the S-wave spin-singlet η_b family, the experimental information on the spin-triplet Υ states is much more abundant, including the total widths and partial rates of most decay processes. We first analyze the common decay properties of Υ(1S), Υ(2S), and Υ(3S). From Table VII, the annihilation decay to three gluons is dominant, with branching ratios of 89%, 63.8%, and 58.7% from our model, respectively; the contributions to the total width from the other three annihilation modes, ℓ+ℓ−, γgg, and γγγ, are much smaller (the standard lowest-order width formulas are recalled below). In particular, the γγγ decay is difficult to search for experimentally because of its very small branching ratio, of order 10⁻⁵ ∼ 10⁻⁶, while the leptonic annihilation decay and the γgg mode have almost the same predicted partial widths. All of the experimental widths and branching ratios basically agree with our theoretical calculations, although the errors for the Υ(3S) state are large. Additionally, it should be emphasized that the experimental partial widths marked with footnote b in Table VII are obtained by combining the measured total widths and branching ratios of the PDG [52]. The M1 radiative transition Υ(1S) → η_b(1S)γ is the unique electromagnetic decay of the Υ(1S) state, for which there is no experimental information so far; the calculations of the three models, this work and Refs. [26,27], give a consistent estimate of 0.01 keV. Numerous radiative decay modes are open for Υ(2S), including the transitions to χ_bJ(1P), all of which have been measured. By comparison, we can see that the experimental data on the radiative transitions of Υ(2S) are well reproduced by our model and by those of Refs. [26,27], except for the decay channel η_b(1S)γ. It should be mentioned that the branching ratios of the nonrelativistic constituent quark model [27] are computed with the measured total widths, which is why our theoretical branching ratios are smaller even though our partial-width estimates are close to the values of Ref. [27]. Finally, the hadronic transition Υ(2S) → Υ(1^3S_1)ππ has been used to fix the unknown parameter C_1 in the QCD multipole expansion approach, Eq. (A11) in Appendix A.

There are some difficulties in the theoretical description of Υ(3S) as a whole. Our theoretical total width of Υ(3S) is 35.8 keV, which is larger than the PDG result of 20.32 ± 1.85 keV; the excess mainly comes from the annihilation mode ggg and the hadronic mode Υ(1S)ππ. Considering the uncertainty of the phenomenological models, we compare the radiative transitions using experimental partial widths rather than the directly measured branching ratios. Our predictions for χ_bJ(2P)γ are satisfactory and, in spite of some deviations, the χ_bJ(1P)γ and η_b(1S)γ modes are reproduced at the level of the order of magnitude.
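For reference, the lowest-order expressions commonly used for these annihilation widths (the paper's exact forms, including QCD correction factors, are in Appendix A; the versions below are the standard textbook ones and may differ by such corrections) read

$\Gamma(n\,{}^3S_1 \to \ell^+\ell^-) = \frac{4\,\alpha^2 e_b^2\, |R_{nS}(0)|^2}{M_{nS}^2}$, \qquad $\Gamma(n\,{}^3S_1 \to ggg) = \frac{10\,(\pi^2 - 9)\,\alpha_s^3\, |R_{nS}(0)|^2}{81\,\pi\, m_b^2}$,

where e_b = −1/3 is the bottom-quark charge, M_nS the meson mass, and R_nS(0) the radial wave function at the origin. Both widths scale with |R_nS(0)|², so the decrease of the ggg and ℓ+ℓ− widths from Υ(1S) to Υ(3S) mainly tracks the decrease of |R_nS(0)|² with n.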
C. The P-wave states

The P-wave states χ_bJ(1P) and χ_bJ(2P) with J = 0, 1, 2 were first discovered in 1982, in searches for the radiative processes Υ(2S) → χ_bJ(1P)γ [10] and Υ(3S) → χ_bJ(2P)γ [11]. Their corresponding spin-singlet partners are h_b(1P) and h_b(2P).

[Table VI: Partial widths and branching ratios of annihilation decays, radiative transitions, and hadronic transitions, and the total widths for the S-wave η_b states below the open-bottom thresholds. Experimental results are taken from the PDG [52]; the theoretical results of the Godfrey-Isgur relativized quark model [26] and the nonrelativistic constituent quark model [27] are given in the rightmost columns. Widths are in units of keV. Footnote a: from the summation of the partial widths of Ref. [27]. Footnote b: from combining the experimental total widths and branching ratios of the PDG [52].]

In the past few years, the Belle collaboration has studied the decay mode h_b(1P) → η_b(1S)γ, whose branching ratio was measured as 49.2 ± 5.7 +5.6 −3.3 % in 2012 [16] and 56 ± 8 ± 4% in 2015 [56]. These results can be combined with the average ratios of the PDG [52].

[Table VIII: Partial widths and branching ratios of annihilation decays and radiative transitions, and the total widths for the 1P bottomonium states. Experimental results are taken from the PDG [52]; the theoretical results of the GI model [26] and the nonrelativistic constituent quark model [27] are given in the rightmost columns. Widths are in units of keV.]

In addition, it is worth noting that our results for the critical hadronic decay channels χ_bJ(2P) → χ_bJ(1P)ππ with J unchanged and h_b(2P) → h_b(1P)ππ, calculated with the QCD multipole expansion approach, are roughly consistent with those of the GI model [26]. However, our results for the processes χ_bJ(2P) → χ_bJ′(1P)ππ with J ≠ J′ are incompatible with the GI model: our estimates are strongly suppressed, similar to the calculations of the nonrelativistic constituent quark model [27]. An analogous situation also exists for the 3P and D-wave bottomonium states, which needs to be clarified by future experimental measurements.

Although the first 3P state has been experimentally observed, experimental information on its decay behavior is still lacking. Our predictions for the various decay channels of the 3^1P_1, 3^3P_0 and 3^3P_1, 3^3P_2 states are listed in Tables X and XI, respectively. By comparing our results with those of the GI model [26], we find that although most decay modes do not change much, the screening effects demonstrate their power in the M1 radiative transitions. Taking the mode h_b(3P) → χ_bJ(1P)γ as an example, there is an order-of-magnitude difference between the two models. Such a large difference mainly comes from the change of the bottomonium wave functions rather than from the phase space. To illustrate this, we compare the squared M1 matrix element $|\langle f | j_0(\omega r/2) | i \rangle|^2$, where the photon energy ω is almost the same for both models (a toy numerical sketch of such an overlap is given at the end of this subsection). For the final states χ_bJ(1P)γ with J = 0, 1, 2, the values calculated with the GI model are 2.56 × 10⁻⁴, 9.61 × 10⁻⁴, and 1.02 × 10⁻³ [26], while those of our modified GI model are 1.38 × 10⁻³, 1.08 × 10⁻⁴, and 1.13 × 10⁻⁴, respectively. This comparison once again proves the importance of the screening effects in describing the inner structure of meson states. In addition, the total widths of the 3P states predicted by us are larger on the whole.

In addition to the annihilation decay ggg, the decay modes η_b(1S, 2S, 3S)γ and h_b(1P)ππ are also important for the h_b(3P) state. In particular, the process h_b(3P) → η_b(3S)γ has a predicted branching ratio of 10.4%; if h_b(3P) is confirmed in the future, we suggest that experiments search for the missing η_b(3S) state via this radiative process. Likewise, compared with the other radiative transitions, the decays to the S-wave Υ states are very significant for the χ_bJ(3P) states. It should be noted that the χ_b1(3P) state was found precisely in the chain decay of the radiative transition to Υ(1S, 2S)γ followed by Υ → µ+µ−.
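The matrix element above can be evaluated numerically once radial wave functions are available. The toy sketch below (ours; it uses single SHO basis functions with guessed β values instead of the model's numerically obtained wave functions, so the numbers are purely illustrative) shows the computation of |⟨f| j0(ωr/2) |i⟩|²:

```python
import math

import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, genlaguerre

def R_sho(n, l, beta):
    """Normalized position-space SHO radial wave function R_{nl}(r),
    with int_0^inf R_{nl}(r)^2 r^2 dr = 1 (r in GeV^-1, beta in GeV)."""
    norm = beta**1.5 * np.sqrt(2.0 * math.factorial(n) / gamma(n + l + 1.5))
    lag = genlaguerre(n, l + 0.5)
    return lambda r: norm * (beta * r)**l * np.exp(-0.5 * (beta * r)**2) * lag((beta * r)**2)

# Toy stand-ins for a 3P -> 1P transition: initial (n=2, l=1) and final
# (n=0, l=1) SHO states with guessed beta values; omega is a rough photon
# energy in GeV. None of these are the paper's actual inputs.
R_i = R_sho(2, 1, 0.9)
R_f = R_sho(0, 1, 1.2)
omega = 0.8

def j0(x):
    # Spherical Bessel j0(x) = sin(x)/x; np.sinc(y) = sin(pi*y)/(pi*y).
    return np.sinc(x / np.pi)

overlap, _ = quad(lambda r: R_f(r) * j0(0.5 * omega * r) * R_i(r) * r**2, 0.0, 50.0)
print("|<f| j0(omega*r/2) |i>|^2 =", overlap**2)
```

Because j0 is close to unity over the wave-function support, the overlap is controlled by the near-orthogonality of the initial and final radial wave functions, which is why small wave-function changes from the screening effect translate into order-of-magnitude changes of the M1 widths.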
[Table IX: Partial widths and branching ratios of annihilation decays, radiative transitions, and hadronic transitions, and the total widths for the 2P bottomonium states. Experimental results are taken from the PDG [52]; the theoretical results of the GI model [26] and the nonrelativistic constituent quark model [27] are given in the rightmost columns. Widths are in units of keV.]

D. The 1D and 2D states

The ground states and first radial excitations of the D-wave bottomonia, with orbital angular momentum L = 2, are estimated to lie at 10150-10170 MeV and 10440-10460 MeV, respectively, below the BB thresholds. For the D-wave states, the annihilation decays into three gluons or two gluons are generally more suppressed than for the S-wave states; therefore, all of the 1D and 2D bottomonium states are expected to be narrow.

[Table XII: Partial widths and branching ratios of annihilation decays and radiative transitions, and the total widths for the 1D bottomonium states. Experimental results are taken from the PDG [52]; the theoretical results of the GI model [26] and the nonrelativistic constituent quark model [27] are given in the rightmost columns. Widths are in units of keV.]

In 2010, the BaBar Collaboration observed the D-wave spin-triplet Υ(1^3D_J) states through decays into Υ(1S)π+π− [22]; the J = 2 member was confirmed with a significance of 5.8 σ, while the other two states, Υ_1(1^3D_1) and Υ_3(1^3D_3), have lower significances of 1.8 and 1.6 standard deviations, respectively. In general, the experimental study of the D-wave bottomonia is presently insufficient; the total widths and the branching ratios of typical decay channels are still unknown. For the Υ_2(1^3D_2) state, the masses predicted by the GI model [26] and the nonrelativistic constituent quark model [27] are 10147 and 10122 MeV, respectively, both well below the measured value of 10163.7 ± 1.4 MeV [52]. Our theoretical mass is 10162 MeV, and this agreement again illustrates the excellent description of the bottomonium mass spectrum in this work. Furthermore, we also predict the mass splittings among the 1D multiplet members. Ignoring the small hyperfine spin splittings, the mass of the 2D bottomonia is estimated to be about 10450 MeV, in good agreement with the value calculated in Ref. [26]. The experimental study of the 2D states is also an interesting issue.

Partial widths and branching ratios of annihilation decays, radiative transitions, and hadronic transitions, and the total widths for the 1D and 2D bottomonia are shown in Table XII and in Tables XIII-XIV, respectively. We emphasize that our results for most decay channels are comparable with those given by the GI model [26]. However, there is still a significant difference in the M1 radiative transition widths between the present work and the GI model, which mainly stems from the difference between the wave functions obtained in the quenched and unquenched quark models. The same occurs when comparing the screened potential model with the GI model for the decays of the 1F and 1G states, which will be discussed later. For the 1D bottomonium states, we notice that the process Υ_2(1^3D_2) → Υ(1S)π+π− can determine the constant C_2 of Eq.
(A11), and the predicted branching ratios of the corresponding hadronic processes for the other two members with J = 1, 3 are also consistent among different theoretical models [26,27]. The radiative transition η_b2(1D) → h_b(1P)γ may be an optimal channel to detect the η_b2(1D) state, given our estimated branching ratio of 96%. Accordingly, the radiative process Υ(1^3D_J) → χ_b(J−1)(1P)γ is the dominant decay mode of the spin-triplet 1D bottomonia, with estimated branching ratios of 44.1%, 74%, and 90.5% for J = 1, 2, 3, respectively. Of course, the non-forbidden radiative decays to the other P-wave χ_bJ states are also significant for the Υ(1^3D_J) states. The 2D bottomonium states show similar decay behaviors: the radiative transitions to P-wave bottomonium states are still dominant, except for the Υ_1(2^3D_1) state, half of whose total width is contributed by the annihilation mode ggg. The analysis of the other decay channels of the 2D bottomonium states from Tables XIII and XIV can be summarized as follows:

1. The branching ratios of the M1 radiative transitions 2D → 1D and 2D → 2D with spin flip are about 10⁻⁵ ∼ 10⁻⁷, which indicates that it is difficult to observe these decay modes in experiments.

2. The E1 spin-non-flip radiative transitions 2D → 1F with J_i = J_f − 1, where J_i(f) denotes the total angular momentum of the initial (final) state, can be used to study the first F-wave bottomonia, given the predicted partial widths of 0.16 ∼ 1.4 keV and branching ratios of 0.7% ∼ 6%.

3. Compared with the radiative transitions, the contributions of the dipion hadronic decays are much smaller: the decays to the S-wave η_b or Υ states and to the D-wave η_b2 or Υ_J states occupy about 10⁻³ ∼ 10⁻¹ % and 10⁻¹⁰ ∼ 10⁻² %, respectively.

4. The width of the leptonic annihilation decay of a D-wave Υ state is three orders of magnitude smaller than that of the S-wave Υ states, as can be seen by comparing Table VII with Tables XII-XIII; this can be used to distinguish the two kinds of Υ states with the same J^PC.

E. The 1F and 1G states

As high-spin states, the F-wave and G-wave bottomonia have no experimental signals at present. If these states can be observed experimentally, it would be a good confirmation of the theoretical calculations of the potential model. The predicted decay properties of the 1F and 1G bottomonia, namely the partial widths and branching ratios of annihilation decays, electromagnetic transitions, and hadronic transitions, and the total widths, are listed in Tables XV and XVI, respectively, where one can find a very interesting common decay feature. That is, the dominant decay modes of all eight particles are the radiative transitions 1F_(J) → 1D_(J−1) or 1G_(J) → 1F_(J−1), with branching ratios of almost always more than 90%, where the subscripts J and J − 1 represent the total angular momenta. This also indicates that experiments are likely to observe the first F-wave bottomonia via the radiative processes h_b3(1^1F_3) → η_b2(1^1D_2)γ and χ_bJ(1^3F_J) → Υ_(J−1)(1^3D_(J−1))γ once all the ground states of the D-wave bottomonia are experimentally established. In the same way, the search for the first G-wave bottomonia requires the confirmation of the F-wave states in experiment. This forms a chain-like relationship, which means that the search for these high-spin states will probably be achieved step by step.
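These E1 dominance patterns follow from the standard E1 width formula. For reference, in one common convention (the paper's exact expressions are collected in Appendix A and may differ slightly in conventions),

$\Gamma_{E1}(i \to f + \gamma) = \frac{4}{3}\, C_{fi}\, \delta_{SS'}\, e_b^2\, \alpha\, \omega^3\, |\langle f | r | i \rangle|^2$, with $C_{fi} = \max(L, L')\,(2J'+1) \begin{Bmatrix} L' & J' & S \\ J & L & 1 \end{Bmatrix}^2$,

where ω is the photon energy, primes denote the final state, and the curly bracket is a 6j symbol. The ω³ factor and the large radial overlap ⟨f|r|i⟩ between neighboring multiplets explain why the 1F → 1D and 1G → 1F transitions dominate while the open-bottom channels are closed.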
Our predicted masses of the 1F and 1G states are shown in Table III, where the average masses of the spin-triplet 1F and 1G states are estimated as 10366 MeV and 10535 MeV, respectively; these are almost the same as those of the spin-singlet states. We hope that these predictions will be helpful for future experimental studies of the high-spin F- and G-wave states. In addition, it can be seen from Tables XV-XVI that the 1F (1G) states can also make transitions to specific P (D)-wave states by emitting two light mesons ππ; however, the partial widths of these hadronic decay processes are calculated to be relatively small. The predicted total widths of the ground states of the F- and G-wave bottomonia are all about 20 keV, consistent with the estimates of the GI model [26].

IV. ANALYSIS AND PREDICTIONS OF HIGHER BOTTOMONIA

A. Higher η_b radial excitations

As the pseudoscalar partners of the S-wave Υ states, the established members of the η_b family are much fewer than those of the Υ family. Based on this fact, we make a systematic study of the higher excited states of the η_b family, which will not only provide meaningful clues for the experimental search above the thresholds but also further reveal their inner nature. The bb states above the thresholds usually decay mainly into a pair of bottom or bottom-strange mesons. The OZI-allowed strong decay behaviors can be studied with the QPC model, whose details are given in Appendix A. In this work, the parameter γ of the QPC model for the bottomonium system is determined by fitting the experimental widths of the Υ(4S), Υ(5S), and Υ(6S) states; the fitted results are shown in Table V. For the input masses of the bottomonium states, the experimental masses are used where available; if the members of a bottomonium multiplet are only partially discovered, the input masses of the missing states are obtained by combining the measured mass and the predicted splitting. For the predicted bottomonia, we calculate the strong decays using our theoretical masses in Table III as input. Besides the input masses, the spatial wave functions of the bottomonium states are directly taken from our modified GI model. The input masses of the bottom and bottom-strange mesons are summarized in Table IV, where the two 1+ bottom mesons B(1P_1) and B(1P'_1) are mixtures of the ^3P_1 and ^1P_1 states, with the mixing angle θ_1P = −arcsin(√(2/3)) = −54.7° adopted in the heavy-quark limit [57][58][59] (a short numerical sketch of this mixing is given below). Finally, the wave functions of the bottom and bottom-strange mesons are taken from the calculations of Ref. [60], and the quark mass parameters are given in Appendix A.

In this subsection, the higher radial excitations from η_b(5S) to η_b(8S) are systematically studied; their masses are predicted in Table III. Their decay properties, including the total widths and the partial widths and branching ratios of the OZI-allowed strong decays, annihilation decays, and radiative transitions, are listed in Tables XIX-XXI. Here, we compare the decay results of η_b(5S) and η_b(6S) from this work with those of the GI model [26], which generally shows that the two quark models reach the same goal.
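As referenced above, here is a minimal numerical sketch of the heavy-quark-limit 1P mixing (ours; purely a consistency check of the quoted angle, not a calculation from the paper):

```python
import numpy as np

# Heavy-quark-limit mixing angle for the two 1+ bottom mesons:
theta_1p = -np.arcsin(np.sqrt(2.0 / 3.0))
print(np.degrees(theta_1p))  # approximately -54.7 degrees

# The physical states as a rotation of the quark-model basis {|1P1>, |3P1>}:
# |B(1P1)>  =  cos(theta)|1P1> + sin(theta)|3P1>
# |B(1P1')> = -sin(theta)|1P1> + cos(theta)|3P1>
rot = np.array([[np.cos(theta_1p),  np.sin(theta_1p)],
                [-np.sin(theta_1p), np.cos(theta_1p)]])
print(rot @ rot.T)  # identity matrix: the rotation preserves orthonormality
```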
The mass of the η_b(5S) state is calculated as 10810 MeV by the modified GI model. We give another estimate of the η_b(5S) mass, 10870 MeV, by combining the theoretical mass splitting with the measured mass of Υ(10860); this estimate is also used as an input in the strong decay calculations. In Table XIX, one can see that the total width of η_b(5S) is predicted to be 28.4 MeV; it mainly decays into BB*, gg, B*_sB*_s, B_sB*_s, and B*B*, with corresponding branching ratios of 71.5%, 11.5%, 10.2%, 3.91%, and 2.71%, respectively. The dominant radiative decay mode is h_b(4^1P_1)γ, with a partial width of 33.5 keV. Note that the notation BB* is shorthand for the sum of the BB* and B*B final states, and similarly for the other modes. The predicted mass of the η_b(6S) is 10991 MeV, which, like that of the Υ(6S), is lowered compared with the estimate of the GI model. The total width of η_b(6S) is estimated to be 20.4 MeV, about the same as that of η_b(5S). The dominant decay channel of η_b(6S) is still BB*, with a branching ratio of 73%, and the decay modes gg, BB(1^3P_0), B_sB*_s, and B*B* also contribute substantially to the total width. In general, our decay results indicate that η_b(5S) and η_b(6S) are most likely to be detected in the BB* mode. In parallel with the Υ(nS) states with n = 7, 8, which are likely to become a new research area of bottomonium physics since their lower radial excitations have been observed experimentally, we also study the η_b(7S) and η_b(8S) states here. We predict their masses as 11149 and 11289 MeV, and the hyperfine mass splittings ∆m(7S) and ∆m(8S) as 8 and 7 MeV, respectively. There are obviously large differences between the masses predicted by the GI model and by the modified GI model, as can be seen in Table III. Their detailed decay properties are given in Tables XX and XXI.

B. Higher Υ radial excitations

The S-wave Υ states with J^PC = 1^−− have always been a significant research field of bottomonium physics. Up to now, it is also the unique place to experimentally study some interesting effects above the open-bottom thresholds, involving Υ(4S), Υ(10860), and Υ(11020). Therefore, further theoretical exploration of the higher Υ bottomonia is necessary.

[Table XVII: Partial widths and branching ratios of the OZI-allowed strong decays, annihilation decays, and radiative transitions, and the total width for Υ(4S). Experimental results are taken from the PDG [52]; the theoretical results of the GI model [26] and the nonrelativistic constituent quark model [27] are given in the rightmost columns. Widths are in units of keV.]

There is no controversy in the experimental measurements of the Υ(4S) state, whose total width was first measured as 25 ± 2.5 MeV and 20 ± 2 ± 4 MeV by the CUSB [7] and CLEO [6] collaborations, respectively; BaBar's measurements in 2005 gave similar resonance parameters [61], and the current PDG average of the total width is 20.5 ± 2.5 MeV [52]. For the Υ(5S) and Υ(6S), however, the situation is less settled, since the early data and the recent measurements of the resonance parameters are incompatible with each other. In short, after 2010 the Belle collaboration released consistent mass values of Υ(10860) from the production cross sections of e+e− → Υ(nS)π+π− with n = 1, 2, 3 and e+e− → bb, as well as e+e− → h_b(nP)π+π− with n = 1, 2 [62][63][64]. These values are larger than the theoretical calculations and the early experimental values.
In addition, the total width measured by Belle disagreed with the earlier experimental conclusion that Υ(10860) is a broad state [6,7]. Although there are only a few experimental results on Υ(11020), the central mass values from BaBar [65] and the recent Belle measurements [63,64] are consistently about 20 MeV smaller than the earlier results [6,7]. It can be seen from the PDG [52] that the total width of Υ(11020) is measured to be about 30-60 MeV when the present experimental information is combined. At the same time, because the amount of data is not sufficient, the determinations of these two resonances from the analysis of the R value in the earlier experiments, where R = σ(e+e− → hadrons)/σ(e+e− → µ+µ−), are probably not very convincing. Therefore, we hope to see more accurate experimental results for Υ(10860) and Υ(11020) in the forthcoming Belle II experiment. It is worth mentioning that in the Belle measurements [63], the resonance parameters from R_Υ(nS)ππ and R_b are completely consistent within errors [63], although the use of a flat continuum in the R_b fit brings some inconsistencies between the fitted amplitudes for R_Υ(nS)ππ and R_b; furthermore, the measured masses and total widths from R_Υ(nS)ππ have rather large statistical and systematic errors. Therefore, in the calibration of the parameter γ, we select the latest Belle measurements of the process e+e− → bb, which give the resonance parameters of Υ(10860) and Υ(11020). Based on the above considerations, a systematic study of the Υ(nS) states above the thresholds is performed in the following; the higher bottomonia Υ(7S) and Υ(8S) are also discussed in this subsection.

[Table XVIII: Partial widths and branching ratios of the OZI-allowed strong decays, annihilation decays, and radiative transitions, and the total widths for Υ(10860) and Υ(11020). Experimental results are taken from the PDG [52]; the theoretical results of the GI model [26] and the nonrelativistic constituent quark model [27] are given in the rightmost columns. Widths are in units of keV.]

Several reasons may underlie the remaining deviations for Υ(10860): firstly, some of the relevant channels have not yet been well measured in experiment; secondly, the calculated partial width of a strong decay depends on the particle's physical mass, so a more accurate mass value may reduce the possible deviation from the phenomenological decay model; finally, there may be other physical effects and mechanisms at work in the strong decay process of Υ(10860). In Ref. [66], the authors discussed the application of the Franck-Condon principle, which is common in molecular physics, to the anomalously high branching ratio of B*_sB*_s relative to B_sB*_s. For the Υ(11020) state, there are at present no experimental data on the open-bottom decay channels. We predict that the dominant modes of Υ(6S) are BB*, BB(1P_1), BB, and B*B*, with corresponding branching ratios of 43%, 21.6%, 20.4%, and 11.5%, respectively. These partial widths are quite different from the predictions of the GI model [26], although the predicted total widths are close; this is expected to be tested by the forthcoming Belle II.

Υ(7S) and Υ(8S)

In Ref. [26], the authors did not study the properties of Υ(7S) and Υ(8S) in the framework of the GI model. Considering the importance of the screening effect for higher bottomonia, we employ the modified GI model to study the nature of these two particles in this work.
In Table III, we predict the masses of Υ(7S) and Υ(8S) as 11157 MeV and 11296 MeV, which are higher than the mass of Υ(11020) measured by Belle [63] by 154 and 293 MeV, respectively. We suggest that future experiments search for these two bottomonia in the vicinity of these energies. The partial widths and branching ratios of the strong decays, annihilation decays, and radiative transitions, and the total widths for Υ(7S) and Υ(8S) are shown in Tables XX and XXI, respectively. The total width of Υ(7S) is estimated to be 101.4 MeV, which means a broad state. There are thirteen open-bottom modes, among which the important decay channels are B*B(1^3P_2), BB*, B*B*, B*B(1P'_1), B*B(1P_1), and BB, with corresponding partial widths of 28.1, 22.0, 20.4, 9.26, 9.03, and 6.79 MeV, respectively (see also the arithmetic sketch at the end of this subsection). From Table XX, we find that combinations of a vector meson B* with a P-wave bottom meson account for a large portion of the total width of Υ(7S), while the contributions from the BB(1P) and bottom-strange modes are almost negligible. Some typical ratios of partial widths are also obtained, which can be tested by future experiments. In Table XXI, the total width of Υ(8S) is predicted to be 66.6 MeV, and the dominant decay channels are B*B* and BB*, with branching ratios of 40.9% and 31.0%, respectively; these are excellent channels in which to search for the Υ(8S) bottomonium state in future experiments. The decay modes BB, BB(1P'_1), and BB(1^3P_2) each have a partial width of about 4−5 MeV, which accounts for almost all of the remaining contributions to the total width. It is difficult to experimentally observe bottom-strange meson configurations in the Υ(8S) decays.
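As a quick consistency check of the Υ(7S) numbers quoted above, a simple arithmetic sketch (ours; it only re-derives branching fractions from the quoted partial widths):

```python
# Quoted partial widths (MeV) of the six main open-bottom modes of
# Upsilon(7S); the remaining seven open-bottom channels (not listed here)
# account for the rest of the predicted 101.4 MeV total width.
partial = {
    "B*B(1 3P2)": 28.1, "BB*": 22.0, "B*B*": 20.4,
    "B*B(1P1')": 9.26, "B*B(1P1)": 9.03, "BB": 6.79,
}
total = 101.4
for mode, width in partial.items():
    print(f"{mode:11s}: Gamma = {width:5.2f} MeV, BR = {width / total:6.1%}")
listed = sum(partial.values())
print(f"listed modes: {listed:.2f} MeV = {listed / total:.1%} of the total width")
```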
C. P-wave states

From Fig. 2, the first above-threshold P-wave bottomonia are the 4P states, which include the spin-singlet h_b(4P) and the spin-triplet χ_bJ(4P) with J = 0, 1, 2, and which exceed the BB threshold by about 200 MeV according to our estimates; these above-threshold particles have not yet been found in experiments. In Table III, the masses of the higher radial P-wave bottomonia, the 5P and 6P states, are estimated to be about 10940 and 11100 MeV, lower than the predictions of the GI model by about 70 and 110 MeV, respectively. In this subsection, we give a theoretical analysis of the decay properties of these states, including the higher radial 5P and 6P bottomonia, in the framework of the modified GI model; we hope this provides useful information for the search for these bottomonia in future experiments. The decay behaviors of the 4P, 5P, and 6P bottomonia are given in Tables XXII-XXVI. For the 4P states, and even more so for bottomonia with higher radial quantum numbers or higher orbital angular momenta, the screening effect manifestly shows its power in the description of the decay properties, because the theoretical calculations of the various decay processes depend directly on the masses and wave functions of the hadrons involved. Therefore, in the following discussion, including the subsequent higher D-, F-, and G-wave states, we perform a detailed comparison of the predicted decay properties with and without the screening effect. In Ref. [26], the authors studied the P-wave bottomonia only up to the 5P states in the GI model; thus, we focus on the comparison for the 4P and 5P bottomonia, and our predicted decay properties of the 6P bottomonia are discussed at the end.

According to the numerical results in Tables XXII-XXIV and Ref. [26], we reach the following conclusions. Although the difference in total widths is not conspicuous for most of the 4P and 5P states, the predicted dominant decay modes of the 4P and 5P states from the two models are almost all different, which illustrates that the influence of the screening effect on the higher bottomonia is quite significant; the present work should therefore be of value in revealing the nature of bottomonia. The dominant decay channels of the h_b(5P) state are BB* and B*B*, with branching ratios of 75.7% and 21.1%, respectively. We predict the total widths of the χ_bJ(5P) states to be about 40 ∼ 50 MeV; the modes BB and B*B* are critical for the χ_b0(5P) state, the dominant channels of χ_b1(5P) are BB* and B*B*, and for χ_b2(5P) there are three important modes, BB*, B*B*, and BB, all of which contribute substantially to its total width. These conclusions can be examined by experiments in the future. The average mass of the 6P bottomonium states predicted by the modified GI model is 11099 MeV, about 80 ∼ 100 MeV above the experimental mass of the observed Υ(11020) state; hence, more strong decay channels of the 6P states are open. From Tables XXV and XXVI, our results indicate that h_b(6P) and χ_bJ(6P) are all broad bottomonium states, with predicted total widths of about 107 ∼ 140 MeV. Additionally, BB*, B*B*, and B*B(1^3P_2) are simultaneously the main decay channels for the spin-singlet h_b(6P) and the triplets χ_bJ(6P) with J = 1, 2, and the sum of their contributions exceeds 70% of the total decay width in each case. The decay modes B*B(1^3P_2), B*B*, and BB are dominant for the χ_b0(6P) state, with branching ratios of 28.8%, 25.7%, and 24.5%, respectively. A common decay feature of the 6P bottomonium states is clearly visible: the roles of the B*B* and B*B(1^3P_2) modes are quite considerable.

D. D-wave states

In this subsection, we discuss the features of the D-wave bottomonia. Among them, the D-wave vector bottomonia with J^PC = 1^−− are quite interesting because, unlike in the charmonium system, there is no clear signal of D-wave vector states, although many S-wave vector bottomonia have been discovered experimentally. Hence, studying the intrinsic properties of these missing Υ_1(n^3D_1) states is helpful for understanding this puzzle and, more broadly, the behavior of bottomonia. Only the 1D and 2D bottomonia lie below the BB threshold. The mass of the 3D states is located at around 10680 MeV according to our estimates, about 120 MeV above the threshold, and we predict the masses of the 4D and 5D bottomonia to be about 10880 and 11050 MeV, respectively. In Table III, we notice that the predicted mass of Υ_1(4^3D_1) is 10871 MeV, very close to the measured mass of the Υ(10860) state [63]; however, this candidate assignment for Υ(10860) can essentially be excluded by the following analysis. Next, we discuss the decay properties of the D-wave bottomonia up to the 5D states in detail. In Tables XXVII-XXX, we list the numerical results of the 3D and 4D bottomonium decays. Comparing ours with the calculations of Ref. [26], we conclude the following.
1. Compared with the GI model [26], almost all of our estimated radiative transition partial widths for the 3D and 4D bottomonia become smaller, except for the electromagnetic processes 3D → 1P and 4D → 2P, which become larger by amounts ranging from 20% to 300%.

2. Similar to the situation of χ_b0(4P), our prediction of the total width of Υ_1(3D) is 54.1 MeV, which differs greatly from the estimate of 103.6 MeV by the GI model. Overall, the 3D bottomonia are broad resonances: the predicted total widths of the other three particles, η_b2(3D), Υ_2(3D), and Υ_3(3D), are 143.0, 96.3, and 223.8 MeV, respectively. Our results for the partial widths and branching ratios of the strong decay channels of the 3D and 4D states are still quite different from those of the GI model. The strong decay modes of η_b2(3D) are only B*B* and BB*, with similar predicted ratios, and they account for almost all of the contributions to the total width. The dominant decay mode of Υ_1(3D) can be read from the same tables. Additionally, its branching ratio for annihilation to the leptonic pair ℓ+ℓ− is three orders of magnitude smaller than that of the Υ(4S) state; hence, it is hard to detect this D-wave vector bottomonium in µ+µ− final states, and the same applies to the Υ_1(4D) and Υ_1(5D) states. Finally, one can easily find that the modes B*B* and BB* are dominant for both Υ_2(3D) and Υ_3(3D).

3. The total widths of the 4D bottomonia, including the spin-singlet η_b2(4D) and the three triplet states Υ_J(4D), are estimated to be about 70 ∼ 90 MeV; our results are generally 10 ∼ 20 MeV larger than those of the GI model. The η_b2(4D) mainly decays into BB* and B*B*, with branching ratios of 63.2% and 33.4%, respectively, and its partner Υ_2(4D) has similar decay behavior. Although the mass of Υ(10860) is compatible with our prediction for the Υ_1(4^3D_1) state in Table III, we can safely rule out this assignment, given the two-orders-of-magnitude difference between the theoretical and measured branching ratios of the leptonic decay ℓ+ℓ−. We predict that the main decay channels of Υ_1(4D) are B*B*, BB, and BB*, while the corresponding annihilation widths are very small, as mentioned before. This also explains why the S-wave Υ states have been found in succession experimentally while the D-wave Υ_1 states have so far shown no experimental signal.

The detailed decay information of the 5D bottomonium states is presented in Tables XXXI and XXXII. Our results indicate that the spin-singlet η_b2(5D) and the triplet states Υ_J(5D) with J = 1, 2, 3 are all broad states, with predicted total widths of 110.7, 121.7, 101.6, and 86.0 MeV, respectively. We also notice that the dominant decay channels of η_b2(5^1D_2), Υ_2(5^3D_2), and Υ_3(5^3D_3) are all B*B* and BB*, and the other relatively important decay modes are the strong decay channels containing P-wave bottom mesons. It is very interesting to study the properties of Υ_1(5^3D_1), because the mass of Υ_1(5D) is only 40 MeV larger than that of Υ(6S) according to our prediction in Table III, which is nearly 100 MeV smaller than that of the GI model. Our numerical results show that Υ_1(5^3D_1) is a broad state with ten open-bottom decay modes; its main decay channels are B*B*, BB, BB*, BB(1P'_1), and BB(1^3P_2), with corresponding branching ratios of 38.7%, 16.4%, 15.8%, 14.8%, and 7.59%, respectively. The contributions of bottom-strange mesons remain below 1%.
Finally, we hope that these results can provide valuable clues for future experimental searches for more D-wave bottomonia.

E. F-wave states

In the following, we focus on the higher F-wave bottomonia, i.e., the 2F, 3F, and 4F states. The theoretical masses of the F-wave bottomonia are presented in Table III. The average masses of the multiplets are essentially the same as those of the spin-singlet states, namely 10609, 10812, and 10988 MeV for radial quantum numbers n = 2, 3, 4, respectively. From Fig. 2, it is also easy to see that these masses are close to those of the S-wave bottomonia with radial quantum numbers n = 4, 5, 6, respectively. In Tables XXXIII-XXXV, we present the numerical decay results of the 2F and 3F bottomonia, which were also studied in Ref. [26] in the framework of the GI model. By analyzing and comparing our calculation with the GI model [26], we conclude the following.

1. The radiative transitions of the F-wave states follow a rule very similar to that of the P-wave and D-wave bottomonia: our partial widths of radiative decays are on the whole smaller than those of the GI model, except for the radiative processes 2F → 1D and 3F → 2D.

2. The 2F bottomonia are states near the threshold, and their open-bottom channels are either BB or BB*, whose partial widths from the GI model are much larger than our predictions. We predict the total widths of the h_b3(2F) and χ_b3(2F) states to be 0.413 and 0.543 MeV, respectively, about one-thirtieth of the GI values. The 2F states are thus all narrow, which means that it should not be difficult to distinguish, in particular, the spin-triplet J = 2 member of the 2F bottomonia in experiment. Some important radiative decay modes of these narrow states may also contribute substantially to the total widths; hence, the processes h_b3(2F) → η_b2(2D)γ and χ_b3(2F) → Υ_2(2D)γ may be useful for their detection.

The decay properties of the 4F states are listed in Tables XXXVI and XXXVII, and their predicted total widths are located near 70 MeV. In their open-bottom decay channels, the contributions from bottom-strange mesons are all small, the largest being no more than 3%. The dominant decay modes of h_b3(4F) and χ_bJ(4F) with J = 2, 3 are B*B* and BB*, the sum of whose branching ratios exceeds 80%. The χ_b4(4^3F_4) state is governed by the single mode B*B*, with a branching ratio of 92.1%. The BB channel is also important for the χ_b2(4^3F_2) state but negligible for the χ_b4(4^3F_4) state, with branching ratios of 17.5% and 1.27 × 10⁻⁴, respectively.

F. G-wave states

The G-wave bottomonia are typical high-spin states, which have so far been difficult to observe experimentally. Nevertheless, we give our predictions of the decay behaviors for the higher G-wave bottomonia with radial quantum numbers up to 3, including the 3G bottomonium states, which were not calculated in Ref. [26]. The hyperfine mass splittings between the spin-singlet and spin-triplet states of the 2G and 3G multiplets are quite small, and the average masses of the 2G and 3G bottomonia in Table III are 10747 and 10929 MeV, respectively, close to those of the 4P and 5P bottomonia. The decay information on the partial widths of several decay types and the total widths of the 2G and 3G bottomonia is shown in Tables XXXVIII-XL. For the radiative transitions of the 2G states, we obtain behaviors similar to those of the P-, D-, and F-wave states, so we do not repeat the explanation here.
Furthermore, the total widths of 2G bottomonia are estimated to be about 110 ∼ 160 MeV, which means that they are all broad states, although these values are significantly smaller than those of the GI model. Since the mass values of 2G bottomonia lie below the thresholds of channels containing a P-wave bottom meson, the main decay modes of the 2G states are BB, BB* and B*B*. The mode BB* is dominant for the Υ_3(2^3G_3) and Υ_4(2^3G_4), with branching ratios 50.9 % and 62.5 %, respectively. The mode B*B*, with a branching ratio of 76.8 %, is critical for the Υ_5(2^3G_5). Finally, the modes BB* and B*B* are of equal importance for the η_b4(2^1G_4) state. Similarly, the 3G bottomonium states cannot decay into P-wave bottom mesons on account of their mass values. In Tables XXXIX and XL, the predicted total widths of the η_b4(3G) and Υ_J(3G) with J = 3, 4, 5 states are 53.3, 39.8, 50.4, and 67.5 MeV, respectively. The contributions of bottom-strange mesons to the total widths of the 3G states are all at the level of about 4 %. In addition, the strong decay behaviors of the channels BB, BB*, and B*B* of the 3G bottomonia are almost exactly the same as in the 2G case, which also indicates that the dominant decay modes of each 2G bottomonium state and its corresponding radially excited 3G state are similar to each other. The screening potential model adopted in this work and the coupled-channel quark model are typical unquenched quark models. In Ref. [50], the authors compared the results from the screening potential model and the coupled-channel model by taking the charmonium spectrum as an example, which, to some extent, reflects the equivalence between the two approaches. We notice that Ferretti and Santopinto [31] studied the bottomonium spectrum below 10.6 GeV with the coupled-channel quark model, which gives us a chance to test the equivalence of the two unquenched quark models further. In Fig. 4, we compare the results of the present work, obtained with the screening potential model, with those of the coupled-channel quark model [31]. In general, these two unquenched quark models give comparable results, which again supports the conclusions of Ref. [50]. We do, however, find differences between the two unquenched quark models for the 3P and 1D states. This fact shows that there are some differences between the screening potential and coupled-channel quark models; after all, they are two distinct approaches to describing the unquenched effects phenomenologically. Besides, the effects of nearby thresholds are important for some higher bottomonia, and they cannot normally be captured by the screening potential model. To explore the strength of this effect in the bottomonium system, we take the Υ(4S) and Υ(5S) states as examples and calculate their mass shifts in the coupled-channel model [67]. In Ref. [67], the inverse meson propagator can be written as
$P^{-1}(s) = m_0^2 - s + \sum_n \left[\mathrm{Re}\,\Pi_n(s) + i\,\mathrm{Im}\,\Pi_n(s)\right]$,
where $m_0$ is the mass of the bare state, and $\mathrm{Re}\,\Pi_n(s)$ and $\mathrm{Im}\,\Pi_n(s)$ are the real and imaginary parts of the self-energy function for the n-th decay channel, respectively. The bare mass $m_0$ can be obtained from the GI model [46]. To obtain the mass of a physical state, $P^{-1}(s)$ can be expressed in the Breit-Wigner representation
$P^{-1}(s) = m(s)^2 - s + i\, m_{\rm BW}\, \Gamma_{\rm tot}(s)$,
where $m(s)^2 = m_0^2 + \sum_n \mathrm{Re}\,\Pi_n(s)$ and $\Gamma_{\rm tot}(s) = \sum_n \mathrm{Im}\,\Pi_n(s)/m_{\rm BW}$. The physical mass $m_{\rm BW}$ can be determined by solving the equation $m(s)^2 - s = 0$.
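To make the last step concrete, the following is a minimal numerical sketch of solving $m(s)^2 - s = 0$ for the Breit-Wigner mass, assuming a toy parametrisation of $\sum_n \mathrm{Re}\,\Pi_n(s)$; the function toy_re_pi and all numerical values are illustrative placeholders rather than quantities taken from this work.

```python
import numpy as np
from scipy.optimize import brentq

def toy_re_pi(s):
    """Placeholder for the summed real part of the self-energy,
    sum_n Re Pi_n(s), in GeV^2.  In a real calculation this would be
    obtained from the 3P0 amplitudes via the dispersion relation."""
    return -0.8 - 0.02 * (s - 110.0)   # smooth, slowly varying toy model

def mass_squared(s, m0):
    """m(s)^2 = m0^2 + sum_n Re Pi_n(s)."""
    return m0 ** 2 + toy_re_pi(s)

def physical_mass(m0):
    """Solve m(s)^2 - s = 0 and return the Breit-Wigner mass sqrt(s)."""
    f = lambda s: mass_squared(s, m0) - s
    s_root = brentq(f, (m0 - 1.0) ** 2, (m0 + 1.0) ** 2)
    return np.sqrt(s_root)

print(physical_mass(10.635))   # illustrative bare mass in GeV, not a fitted value
```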
Furthermore, the imaginary of the selfenergy function ImΠ n (s) can be related to the amplitude of the 3 P 0 model by applying the Cutkosky rule and the corresponding ReΠ n (s) can also be obtained by the dispersion relation. The parameter γ 0 in the 3 P 0 model can be fixed by fitting the experimental width of Υ(4S ) and Υ(5S ), which gives us γ 0 = 0.337 for the bottomonium system. By calculation of the coupled-channel model, we obtain the physical mass m BW (4S ) = 10.592 GeV and m BW (5S ) = 10.838 GeV. Comparing the above results and those from our screening potential model, we find that mass differences between two models are 20 MeV and 16 MeV for Υ(4S ) and Υ(5S ) states, respectively. These results show that the effects of nearby thresholds are essential for the bottomonium system, but on the other hand, they are also within our expectations. Generally, the contributions of effects of nearby thresholds to higher bottomonia should be systematically calculated in the coupledchannel model, which can be considered for further research in the future. In addition, when comparing our results of average mass of these 3P states with those from other models [26,27,31], we notice that our results are in agreement with those given in Refs. [26,27,31]. In Ref. [31], the authors specified the coupled-channel effect on the mass splittings for the χ bJ (3P) states. If further focusing on the mass splittings of χ bJ (3P) states, there exist differences among different model calculations [26,27,31]. It is obvious that the theoretical and experimental study on this mass splittings of χ bJ (3P) states will be an intriguing research topic in future. VI. SUMMARY As we all know, the screening effect usually plays a very important role for highly excited mesons. It affects the mass values, wave functions of mesons and hence, estimates of corresponding decay behaviors. Motivated by the recent studies of bottomonium properties in the framework of the Godfrey-Isgur relativized quark model [26], we have performed the most comprehensive study on the properties of bottomonium family by using the modified Godfrey-Isgur model with a screening effect. We have studied radiative transition, annihilation decay, hadronic transition, and OZI-allowed two-body strong decay of bb states. Our calculated numerical results indicate that our predictions for the properties of higher bottomonia are quite different from the conclusion of GI model. Hence, we also expect that this work could provide some valuable results to the future research of bottomonium in experiment. Our work in this paper can be divided into two parts, study of mass spectrum and study of decay behaviors of bottomonium states. Furthermore, we have focused our main attention on the prediction and analysis of higher bottomonia due to significant reflection of the screening effect. Firstly, we have taken advantage of the measured mass of 18 observed bottomonium states in Table II to fit eight undetermined parameters of the modified GI model in Table I. It can be found that our theoretical mass values have been greatly improved compared to those of the GI model. At the same time, our results have been well matched with experimental results. Based on the above preparation, the predicted mass spectrum of bottomonium states has been given in Table III. It is interesting to note that the mass values of Υ(7S ) and Υ(8S ) are predicted as 11157 and 11296 MeV, respectively, which are higher than the measured mass of Υ(11020) only by 154 and 293 MeV, respectively. 
Classified by L, the decay properties of the bottomonium states have been discussed separately according to whether the mass values lie above or below the open-bottom threshold. We have found that the screening effect is weak for the decay behaviors of most bottomonium states below the threshold, whose estimates are similar to those of the GI model. For the higher bottomonium states above the threshold, the screening effect becomes important. The characteristic decay behaviors of bottomonium mesons predicted by the GI model with and without the screening effect are therefore fairly inconsistent with each other. Here, we have provided results that allow the validity of our model to be checked in future experiments. In the coming years, the exploration of higher bottomonium states will become a major topic at LHCb and the forthcoming Belle II experiment, where the highly excited states that are still missing are likely to be found. Moreover, the experimental information on the observed bb states can be further refined. In this work, we have provided abundant theoretical information on higher bottomonia, which should be helpful for guiding experimental searches for these missing bottomonium states.
The final transition rate is given by [88]
$\Gamma(\phi_I \to \phi_F + \pi\pi) = \delta_{l_I l_F}\,\delta_{J_I J_F}\left(G|C_1|^2 - \tfrac{2}{3}H|C_2|^2\right)|A_1|^2 + (2l_I+1)(2l_F+1)(2J_F+1)\,\cdots$
The intermediate hybrid states can be described by the quark confining string (QCS) model [89-91], in which the quark and antiquark are connected by an appropriate color electric flux tube, or string. If the string is in its ground state, the quark-antiquark system is an ordinary meson, with the string corresponding to the strong confinement interaction. A vibration of the string corresponds to a new state with gluonic excitation, composed of the excited gluon field and the quark-antiquark pair, i.e., the so-called hybrid state. For this vibrational mode, assuming both ends of the string are fixed because the quark masses are very heavy, the effective vibrational potential can be written as [90]
$V_n(r) = \sigma(r)\,r\,\bigl[1 - 2n\pi/\bigl(2n\pi + \sigma(r)[(r-2d)^2 \cdots]\bigr)\bigr]$,
where d is the correction for the finite heavy quark mass and n indicates the excitation level. The parameter $\alpha_n$, which is related to the shape of the vibrational string [90], is taken as $\sqrt{1.5}$; this choice is consistent with Ref. [27], and our mass spectrum of hybrid states is insensitive to it. The potential of a hybrid meson can be expressed as [91]
$V_{\rm hyb}(r) = V_G(r) + V_S(r) + [V_n(r) - \sigma(r)\,r]$,
where $V_G(r)$ is the one-gluon exchange potential and $V_S(r)$ is the color confining potential. It is easy to see that the above potential reduces to the usual $Q\bar{Q}$ interaction when n = 0 in the vibrational potential $V_n$. For theoretical self-consistency, the forms of $V_G(r)$ and $V_S(r)$ are taken from our modified GI model; owing to the screening effect, the effective string tension $\sigma(r)$ is not a constant but a function of the distance r between $Q$ and $\bar{Q}$. The specific expressions of the potentials $V_G(r)$ and $V_S(r)$ follow accordingly. Solving the Schrödinger equation for a hybrid meson, one obtains the mass spectrum and the corresponding wave function of the hybrid state, which are used to calculate the width of the hadronic transition by Eq. (A11). Nevertheless, we have to emphasize that the QCD multipole expansion depends on the inputs and carries its own theoretical uncertainties. Hence, the calculated widths of hadronic transitions should be regarded as rough estimates rather than precise results.
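For reference, one standard way to realise a distance-dependent string tension in screened-potential models is the screened linear confinement written below; the identification of $\sigma(r)$ given here is an illustrative assumption and is not necessarily the exact expression adopted in this work.

```latex
% Screened linear confinement commonly used in screened-potential models;
% the identification of sigma(r) below is illustrative.
\begin{align}
  S(r) &= \frac{b\left(1 - e^{-\mu r}\right)}{\mu} + c , \\
  \sigma(r) &\equiv \frac{S(r) - c}{r}
            = \frac{b\left(1 - e^{-\mu r}\right)}{\mu\, r} .
\end{align}
```

With this form, $\sigma(r) \to b$ at short distances and vanishes at large $r$, which matches the qualitative behaviour of an effective string tension that is screened at long range.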
In addition, considering that the mechanism of hadronic transitions may be more complicated for highly excited states, we focus in this work on the hadronic transitions of bottomonium states below the open-bottom threshold only. The numerical results for the hadronic decays are discussed in Sec. III. It should be noted that the GI results for hadronic transitions in Ref. [26] are derived from reduced matrix elements, which are obtained from measured transition rates rather than from direct calculations. Here, we adopt the direct QCD multipole-expansion method to calculate the partial widths of hadronic transitions of bottomonium states, which also makes a comparison with the results of Ref. [26] useful.
Two-body OZI-allowed strong decays
The quark-pair creation (QPC) model is applicable to the calculation of OZI-allowed strong decays of hadrons. The model was first proposed by Micu [93] in 1968 and was further developed by the Orsay group [94-97]; it is one of the most popular phenomenological methods for treating OZI-allowed strong decays and has been used extensively in such calculations. The model assumes that the quark-antiquark pair created from the vacuum is in a $^3P_0$ state with quantum numbers $J^{PC} = 0^{++}$, so the model is also known as the $^3P_0$ model. In the following, we briefly introduce this model. For the OZI-allowed strong decay process A → B + C, the transition operator T can be expressed as
$T = -3\gamma \sum_m \langle 1m; 1\,{-m} | 00 \rangle \int d\mathbf{p}_3\, d\mathbf{p}_4\, \delta^3(\mathbf{p}_3 + \mathbf{p}_4)\, \mathcal{Y}_{1m}\!\left(\tfrac{\mathbf{p}_3 - \mathbf{p}_4}{2}\right) \chi^{34}_{1,-m}\, \phi^{34}_0\, \omega^{34}_0\, b^{\dagger}_3(\mathbf{p}_3)\, d^{\dagger}_4(\mathbf{p}_4)$,
where $\mathcal{Y}_{lm}(\mathbf{p}) = |\mathbf{p}|^l\, Y_{lm}(\theta_p, \phi_p)$ is the solid spherical harmonic, $b^{\dagger}_3$ ($d^{\dagger}_4$) is the quark (antiquark) creation operator, $\phi^{34}_0 = (u\bar{u} + d\bar{d} + s\bar{s})/\sqrt{3}$ and $\omega^{34}_0$ are the SU(3) flavor and color wave functions of the vacuum quark pair, respectively, and the dimensionless parameter γ describes the strength of quark-antiquark pair creation from the vacuum. The γ value for $s\bar{s}$ pair creation is suppressed by a factor of $1/\sqrt{3}$ relative to that for $u\bar{u}/d\bar{d}$ pair creation; this factor of $1/\sqrt{3}$ accounts for SU(3) symmetry breaking [51,94-98]. The transition matrix element of the decay process can be written as
$\langle BC | T | A \rangle = \delta^3(\mathbf{P}_B + \mathbf{P}_C - \mathbf{P}_A)\, \mathcal{M}^{M_{J_A} M_{J_B} M_{J_C}}$,
with the meson state defined in the mock-meson convention,
$|D\rangle = \sqrt{2E}\, \sum_{M_L, M_S} \langle L M_L; S M_S | J M_J \rangle \int d\mathbf{p}_1\, d\mathbf{p}_2\, \delta^3(\mathbf{P}_D - \mathbf{p}_1 - \mathbf{p}_2)\, \Psi^D_{nLM_L}(\mathbf{p}_1, \mathbf{p}_2)\, \chi^D_{S M_S}\, \phi_D\, \omega_D\, |q_1(\mathbf{p}_1)\,\bar{q}_2(\mathbf{p}_2)\rangle$,
where the front factor E is the particle energy, $\Psi^D_{nLM_L}(\mathbf{p}_1, \mathbf{p}_2)$, $\chi^D$, $\phi_D$ and $\omega_D$ denote the spatial, spin, flavor and color wave functions of meson D, respectively, and $\langle L M_L; S M_S | J M_J \rangle$ is a Clebsch-Gordan coefficient. For the spatial wave functions of the initial and final states, we use the exact eigenfunctions obtained by solving the Schrödinger equation in the potential model rather than simple harmonic oscillator (SHO) wave functions. Combining Eqs. (A23)-(A24) with Eq. (A25), the decay amplitude $\mathcal{M}^{M_{J_A} M_{J_B} M_{J_C}}$ can be derived. For the convenience of comparison with experimental measurements, the decay amplitudes can be related to the helicity partial wave amplitudes $\mathcal{M}^{JL}$ by the Jacob-Wick formula [100], where J and L are the total and orbital angular momenta between the final states B and C, respectively, and $\mathbf{P} = \mathbf{P}_B$. Finally, the partial width of A → BC can be written in terms of these partial wave amplitudes, where $m_A$ is the mass of the initial state A. In addition, in the calculations of the strong decays, the constituent quark masses of the bottom, up/down and strange quarks are taken as 5.027, 0.22 and 0.419 GeV, respectively. The momentum-independent corrections to the one-gluon exchange potential can be obtained from Eq. (7); their specific expressions involve the constant parameters ǫ_t and ǫ_c, respectively.
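Since the explicit partial-width expression is not reproduced above, we quote, for orientation, one convention that is commonly used together with the Jacob-Wick decomposition in the $^3P_0$ literature; the overall normalisation of $\mathcal{M}^{JL}$ varies between papers, so the forms below should be read as reference expressions rather than as the exact ones used in this work.

```latex
% Reference forms commonly used in 3P0-model analyses; |P| is the
% final-state momentum in the rest frame of the initial meson A.
\begin{equation}
  \mathcal{M}^{JL}(\mathbf{P}) = \frac{\sqrt{2L+1}}{2J_A+1}
    \sum_{M_{J_B}, M_{J_C}}
    \langle L\,0;\, J\,M_{J_A} \,|\, J_A\,M_{J_A} \rangle
    \langle J_B\,M_{J_B};\, J_C\,M_{J_C} \,|\, J\,M_{J_A} \rangle\,
    \mathcal{M}^{M_{J_A} M_{J_B} M_{J_C}} ,
\end{equation}
\begin{equation}
  \Gamma(A \to BC) = \frac{\pi^{2}\,|\mathbf{P}|}{m_A^{2}}
    \sum_{J,L} \bigl|\mathcal{M}^{JL}(\mathbf{P})\bigr|^{2} .
\end{equation}
```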
The last term, $V_{so}$, is the spin-orbit coupling, which contains vector and scalar spin-orbit potentials; the $\tilde{V}_{ii}$ are also the momentum-dependent corrections for the vector and scalar spin-orbit interactions, respectively, where the subscripts i(j) = 1, 2 denote the bottom and anti-bottom quark, respectively.
Evolution of the galaxy stellar mass function: evidence for an increasing $M^*$ from $z=2$ to the present day Utilising optical and near-infrared broadband photometry covering $>5\,{\rm deg}^2$ in two of the most well-studied extragalactic legacy fields (COSMOS and XMM-LSS), we measure the galaxy stellar mass function (GSMF) between $0.1<z<2.0$. We explore in detail the effect of two source extraction methods (SExtractor and ProFound) in addition to the inclusion/exclusion of Spitzer IRAC 3.6 and 4.5$\mu$m photometry when measuring the GSMF. We find that including IRAC data reduces the number of massive ($\log_{10}(M/M_\odot)>11.25$) galaxies found due to improved photometric redshift accuracy, but has little effect on the more numerous lower-mass galaxies. We fit the resultant GSMFs with double Schechter functions down to $\log_{10}(M/M_\odot)$ = 7.75 (9.75) at z = 0.1 (2.0) and find that the choice of source extraction software has no significant effect on the derived best-fit parameters. However, the choice of methodology used to correct for the Eddington bias has a larger impact on the high-mass end of the GSMF, which can partly explain the spread in derived $M^*$ values from previous studies. Using an empirical correction to model the intrinsic GSMF, we find evidence for an evolving characteristic stellar mass with $\delta \log_{10}(M^*/M_\odot)/\delta z$ = $-0.16\pm0.05 \, (-0.11\pm0.05)$, when using SExtractor (ProFound). We argue that with widely quenched star formation rates in massive galaxies at low redshift ($z<0.5$), additional growth via mergers is required in order to sustain such an evolution to a higher characteristic mass. The evolution of the galaxy stellar mass function The galaxy stellar mass function (GSMF) is a measurement of the cumulative effects of physical processes that enhance or hinder star formation within galaxies.These processes include merger events, internal feedback mechanisms (both supernova and active galactic nuclei (AGN) driven) and environmental effects.Understanding the balance between these processes is key to understanding how galaxies have been assembled over cosmic time.Measurements of the local GSMF reveal a steep cut-off in the number of high-mass galaxies above a characteristic mass log 10 ( * / ) ∼ 10.7 (e.g.Baldry et al. 2012).In addition to this, the population of very high-mass galaxies becomes increasingly quenched with time (e.g.Davidzon et al. 2017;McLeod et al. 2020).Many theories have been proposed to explain why there is significant suppression in the star formation ★ E-mail<EMAIL_ADDRESS>(often called quenching) of galaxies above this stellar mass, examples include but are not limited to: starvation/strangulation (Larson et al. 1980;Kawata & Mulchaey 2008;McCarthy et al. 2008;Feldmann et al. 2011;Bahé et al. 2013;Feldmann & Mayer 2015), virial shock heating (Birnboim & Dekel 2003;Dekel & Birnboim 2006;Cattaneo et al. 2006) and AGN feedback (Binney & Tabor 1995;Di Matteo et al. 2005;Springel et al. 2005;Bower et al. 2006;Croton et al. 2006;Cattaneo et al. 2009;Fabian 2012;Bongiorno et al. 2016;Beckmann et al. 2017).To adequately test these theories and increase our understanding of how these processes influence galaxy evolution, simulations are required.Key milestones in testing theories such as these include being able to accurately reproduce the observed galaxy population through Luminosity/Mass Functions (see Somerville & Davé 2015;Vogelsberger et al. 
2020, for recent reviews).The evolving shape of the GSMF is dependent on all forms of stellar mass growth, including growth via merging events in addition to internal star formation (e.g.Rodriguez-Gomez et al. 2015;Qu et al. 2017;O'Leary et al. 2020).Consequently, to increase our understanding of both quenching and merger rates, stellar mass functions have become a key benchmark for simulations in the past few years (Henriques et al. 2013;Schaye et al. 2015;Pillepich et al. 2018;Lagos et al. 2018) and hence, require observations to be reliable in order to tune and test these simulations. The advance in deep extra-galactic surveys over the past decade has led to a better measurement of the GSMF at high-redshift.Using predominantly photometric redshifts, several studies have now mapped out the evolution of the GSMF from = 0-3 (e.g.Fontana et al. 2004;Pérez-González et al. 2008;Marchesini et al. 2009;Pozzetti et al. 2010;Ilbert et al. 2013;Muzzin et al. 2013;Davidzon et al. 2017;Wright et al. 2018) and even up to 7 (e.g.Verma et al. 2007;McLure et al. 2009;Stark et al. 2009;Grazian et al. 2015;Song et al. 2016;Thorne et al. 2020).Despite the increased cosmological volume probed by these studies, there is no consensus on the exact form of the GSMF over this epoch.For example Wright et al. (2018), McLeod et al. (2020) and Thorne et al. (2020) find an approximately constant characteristic mass with redshift, but offset from each other by up to * ∼ 0.5 dex.Alongside this, other studies of high mass systems suggest merger events are required to explain the observed change in the size-mass relation of these objects (e.g.McLure et al. 2013).The rate of mergers required would consequently influence the evolution of the shape of the stellar mass function at the high mass end.To solve these issues, the precise number densities of galaxies with masses greater than * are required.Uncertainties in both the stellar mass and number density of these objects can have a much larger impact on the measured shape and characteristic mass of the derived GSMF than the more numerous low-mass sources.This is due to the need to quantify a steep exponential fall off, with relatively few galaxies, that suffer from higher levels of cosmic variance (e.g.Moster et al. 2011).In this work we exploit deep, wide-area optical and IR imaging to measure the GSMF over > 5 deg 2 , leveraging the large area to understand, in particular, the evolution in the number density of the most massive galaxies between 0.1 < < 2.0. Measuring accurate photometric redshifts and stellar masses In the coming decade, the Vera Rubin Observatory (Ivezić et al. 2019) and Euclid (Racca et al. 2016) survey programmes will be conducting broadband observations to further improve both the area and depths achieved within popular multi-wavelength fields such as COSMOS and XMM-LSS, among others.In parallel to these photometric surveys, a number of spectroscopic campaigns are also set to begin operations in the next couple of years.MOONS (Cirasuolo et al. 2012) and WAVES (Driver et al. 
2019) are examples of multiobject spectrograph (MOS) surveys and both have an immediate need for high quality photometric redshift and physical parameter estimates in order to plan survey operations and develop final target catalogues ahead of commissioning.Consequently, there is demand for a maintained compilation of broadband photometry and photometric redshifts based on the improved optical and NIR data that has been obtained in recent years in order to best prepare for these upcoming projects (past examples including: Laigle et al. 2016;Alarcon et al. 2020).Moreover, with next-generation telescopes and survey programmes comes next-generation software and analysis pipelines.A workhorse in photometric source extraction for over 20 years has been SExtractor (Bertin & Arnouts 1996), but in recent years there has been a renewed effort in tackling the problem of obtaining accurate flux measurements from images with robust uncertainties and improved handling of source blending.One product of this ef-fort has been ProFound (Robotham et al. 2018), which potentially provides improved photometry, particularly for extended sources, over a variety of wavelengths due to the non-parametric apertures used (e.g.Davies et al. 2018;Hale et al. 2019;Bellstedt et al. 2020).In this study we produce multiple catalogues of broadband photometry and using these different source extraction methods.The use of two source extraction tools, as well as varying the use of Spitzer/IRAC data, allows for any potential bias' in the GSMF due to these different effects to be explored. This paper is presented as follows: In Section 2 we describe the data used in producing our photometric catalogues.In Section 3 we describe the methods used in source extraction, fitting for photometric redshifts and derivation of basic physical properties of galaxies.In Section 4 we present the measured GSMF in the redshift range 0.1 < < 2.0 and in Section 5 we present the results of modelling the GSMF, and compare the models with the results from previous studies.In section 6 we explore the time evolution of the GSMF and how our observations compare to results from simulations.We finally present our conclusions in Section 7. Throughout this paper we use the AB magnitude system (Oke 1974;Oke & Gunn 1983) and assume ΛCDM cosmology with 0 = 70 km s −1 Mpc −1 , Ω M = 0.3 and Ω Λ = 0.7. DATA The following sub-sections describe the data in the two fields used in the construction of our photometric catalogues and subsequent measurement of the GSMF. COSMOS The COSMOS field (Scoville et al. 2007) is one of the most widely studied multi-wavelength fields in extra-galactic astrophysics, with data spanning the X-ray through to the radio domains over ∼ 2 deg 2 of the sky centred on the J2000 coordinates of RA = 150.12deg (10:00:28.6)DEC = +2.21deg (+02:12:21.0). The imaging data over this field that are used in the construction of our catalogues is derived from four telescopes.The bluest coverage is from the Canada-France-Hawaii Telescope Legacy Survey (CFHTLS; Cuillandre et al. 2012) which has an ultra-deep pointing in the central square degree of the field, and we restrict our analysis to this area for this reason.From this survey we make use of the * -band.For the optical coverage we take data from the ultra-deep component of the HyperSuprimeCam Strategic Survey Programme DR1 (HSC; Aihara et al. 2018b,a).Near-infrared imaging is sourced from the UltraVISTA survey (McCracken et al. 
2012).We make use of the fourth data release of UltraVISTA (DR4) which has a tiered observing strategy, leading to a striped pattern of near-infrared coverage across the field.The 'deep' component is approximately 1 magnitude shallower than the 'ultra-deep' region.3.6 and 4.5 micron photometry is obtained from the Spitzer Extended Deep Survey (SEDS; Ashby et al. 2013), Spitzer Matching Survey of the Ultra-VISTA ultra-deep Stripes survey (SMUVS; Ashby et al. 2018) and Spitzer Large Area Survey with Hyper-Suprime-Cam (SPLASH; Capak et al. 2012).The 5 detection limits for the 2 (2.8) arcsecond photometry in each optical/NIR (Spitzer) bands are outlined in Table 1.This table also highlights the impact of the tiered structure of the UltraVISTA survey. XMM-LSS With a much greater area (at 4.5 deg2 ), the XMM-Large Scale Structure field is one of 3 deep fields that make up the Vista Infrared Deep Extra-galactic Observations (VIDEO) survey (Jarvis et al. 2013).Located at RA = 35.5 deg (02:22:00.0)DEC = -4.8deg (-04:48:00.0),our study focuses on regions of the field with high quality HSC observations.We mask parts of the edge of the field due to the lack of overlapping coverage between HSC and VISTA (see Bowler et al. 2020, for more information on overlapping coverage from the different surveys and variable depths).The total area of the field that is used after considering the overlap of telescope footprints is 4.23 deg 2 (giving a total of 5.23 deg 2 when combined with COSMOS).We use the same photometric bands as those used in the COS-MOS field.Unlike in the COSMOS field, the optical coverage in XMM-LSS is not uniform while the near-infrared is more uniform.-band imaging over the full XMM-LSS field was obtained from the CFHTLS Wide survey in addition to the CFHTLS-D1 field which covers a 1 deg 2 patch where * observations are 1 magnitude deeper than the rest of the field (Cuillandre et al. 2012).HSC SSP covers a different 1.5 deg 2 region of the field (centred on UKIDSS UDS Lawrence et al. 2007) which has 'ultra-deep' coverage.Nearinfrared photometry is derived from the final data release of VIDEO (Jarvis et al. 2013) and Spitzer data is sourced from the Spitzer Extragalactic Representative Volume Survey (SERVS; Mauduit et al. 2012).The depths of the images in each broadband filter are also outlined in Table 1. PHOTOMETRIC CATALOGUES AND DERIVED PRODUCTS For our study we define and produce four separate stellar mass estimates for all galaxies identified in the near-infrared.The various stellar mass estimates are determined by using two source extraction algorithms and including/excluding the Spitzer/IRAC 1 and 2 bands.The goal here is to examine the impact of these two variables in the methodology, such as how different measurements of total flux translates to differences in stellar-mass estimation and how the use of Spitzer/IRAC bands affects photometric redshift performance and the final distributions in stellar mass.We refer to these catalogues as 'SExtractor/SE' and 'Pro-Found/PF' when using them in a lengthened/shortened format.When the two Spitzer/IRAC bands are included in any analysis, the suffix '+IRAC' is added to the catalogue name. 
SExtractor photometry Source finding is performed in SExtractor (Bertin & Arnouts 1996) with the -band image used for object selection.The fiducial photometry was derived using 2 arcsecond diameter circular apertures placed at the location of sources found by SExtractor.As Spitzer has a larger point spread function, for the IRAC bands we use 2.8 arcsecond diameter circular apertures (Bowler et al. 2020).The optical and NIR photometry is aperture corrected using a point-spreadfunction (PSF) generated in each band by PSFE (Bertin 2011) based on cut-outs of point-sources.This is calculated separately for the COSMOS field and over each of the three VISTA/VIDEO tiles in XMM-LSS.For Spitzer photometry, we use an aperture correction derived in the Spitzer handbook1 .Alongside this we also measure the MAG_AUTO parameter from SExtractor to estimate the total flux for each object.The flexible aperture used in the measurement of MAG_AUTO is generated from the -band detection image.Due to the significantly larger PSF of the two Spitzer bands, carrying over the same aperture from the band would lead to an underestimation of the total flux.To solve this problem, we derive a correction to translate aperture flux ( ) to total flux ().This correction follows the average value of , = , × , , for objects in bins of magnitude and redshift used in the later calculation of the GSMF.The use of this correction is sanity checked against the methodology we use with the P F catalogues and is found to agree well for luminous objects.Errors on the SExtractor photometry are calculated based on local depth maps generated by inserting apertures in empty locations of the field (the same method as applied in Bowler et al. 2020). Profound photometry In addition to the SExtractor photometry, we produce P F (Robotham et al. 2018) catalogues selected on a weighted stack of the VISTA- , , and s bands 2 .P F generates two photometric measurements for each object, a 'total' flux and a 'colour' flux.P F operates by iteratively dilating the aperture encompassing each galaxy in each image until it meets the local background.This results in a morphologically derived aperture around each identified object for each photometric band.For our purposes we only make use of the total flux measurements and the associated errors from the P F output.P F flux errors are calculated by combining the errors resulting from the sky root mean square (RMS), errors from the sky subtraction process and object shot noise.Because this dilation process is performed for each photometric band, the larger Spitzer/IRAC PSF is taken into account and thus no alterations are required to obtain total fluxes in these bands.To validate both the ProFound and SExtractor photometry.We compare our measured total photometry against measurements made in other catalogues which have targeted these fields.These comparison catalogues are the COSMOS2015 catalogue for the COSMOS field (Laigle et al. 2016) and the Subaru/XMM-Newton Deep Field (SXDF) catalogue (Mehta et al. 
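The aperture-to-total correction described above can be sketched as follows, assuming matched arrays of aperture fluxes, total fluxes, magnitudes and redshifts are already available; the bin edges, the minimum number of objects per bin and the function name are illustrative choices rather than the exact values used in this work.

```python
import numpy as np

def aperture_to_total_correction(f_aper, f_total, mag, z, mag_edges, z_edges):
    """Median ratio f_total / f_aper in bins of magnitude and redshift.

    Returns a 2D array C[i, j] such that an IRAC aperture flux can be
    scaled to an estimate of the total flux via f_aper * C.
    """
    corr = np.full((len(mag_edges) - 1, len(z_edges) - 1), np.nan)
    for i in range(len(mag_edges) - 1):
        for j in range(len(z_edges) - 1):
            sel = ((mag >= mag_edges[i]) & (mag < mag_edges[i + 1]) &
                   (z >= z_edges[j]) & (z < z_edges[j + 1]) &
                   (f_aper > 0))
            if sel.sum() > 10:          # require a minimal sample per bin
                corr[i, j] = np.median(f_total[sel] / f_aper[sel])
    return corr
```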
2018).We find that the vast majority of our photometric bands are consistent ( 0.1 mag median offsets) with the measurements from these reference catalogues.The methodology we use to determine total SExtractor Spitzer fluxes for bright, resolved sources is also found to be consistent with these catalogues.The greatest difference in photometry is found when running ProFound on Spitzer images in XMM-LSS.For bright sources ( < 20), the photometry is consistent with our SExtractor catalogue and the reference catalogue.However, for fainter objects ( > 20) the measured flux in ProFound is found to be fainter than measured with SExtractor (up to 0.2 mags offset at ∼ 24).Similarly, colour distributions match those of the reference catalogues, with the exception of the aforementioned ProFound IRAC offsets in XMM-LSS. Photometric Redshifts The procedure we follow for obtaining photometric redshift estimates is the same as that used in Adams et al. (2020), with the only exception being the use of observer-frame NIR selection instead of optical selection.To summarise here, we make use of the minimising 2 code L P (Arnouts et al. 1999;Ilbert et al. 2006) to fit templates of galaxies, active galactic nuclei (AGN) and stars to our SExtractor derived aperture fluxes.These templates are modified for dust attenuation according to the Calzetti et al. (2000) extinction law with E(B-V) = 0, 0.05, 0.1, 0.15, 0.2, 0.3, 0.6, 1.0, 1.5.The result is a Probability Density Function (PDF) for the redshift and a simple classification as a likely galaxy, star or QSO-like object.The template sets used are those from Ilbert et al. (2009) and are sourced from Polletta et al. (2007) and from Bruzual & Charlot (2003).To conduct object classification, template spectra for AGN from Salvato et al. (2009) and stars from Hamuy et al. (1992Hamuy et al. ( , 1994)); Bohlin et al. (1995); Pickles (1998); Chabrier et al. (2000) were also fit.Photometric errors are set to a minimum of 5 per cent in flux during the template fitting process, this is to alleviate the consequences of using templates that probe the colour space discretely while the real galaxy population is continuous. Zero-point calibration with spectroscopic samples The two fields have been targeted by a large number of spectroscopic campaigns which can be used to calibrate our methods and examine photometric redshift accuracy.We make use of the spectroscopic catalogue compiled by the HSC team3 .Included are spectra from the VIMOS VLT Deep Survey (VVDS; LeFèvre et al. 2013), z-COSMOS (Lilly et al. 2009), Sloan Digital Sky Survey (SDSS-DR12; Alam et al. 2015), 3D-HST (Skelton et al. 2014;Momcheva et al. 2016), Primus (Coil et al. 2011;Cool et al. 2013) and the Fiber-Multi Object Spectrograph (FMOS; Silverman et al. 2015).From these we select only those with high quality flags (>95 per cent confidence) to ensure secure redshifts are being used.Together these provide a spectroscopic sample of 22,409 in COSMOS and 35,125 in XMM-LSS. 
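As an aside, two of the ingredients of the template-fitting step described above, the 5 per cent flux-error floor and the chi-squared of a single template against the observed aperture fluxes, can be sketched as follows; this is a generic illustration of the quantity that LePhare minimises, not the LePhare implementation itself.

```python
import numpy as np

def apply_error_floor(flux, flux_err, floor=0.05):
    """Impose a minimum fractional flux uncertainty (5 per cent here)
    before template fitting, as described in the text."""
    return np.maximum(flux_err, floor * np.abs(flux))

def chi2_template(flux, flux_err, model_flux):
    """Chi-squared of one redshifted, attenuated template against the
    observed fluxes, with the template normalisation solved analytically."""
    w = 1.0 / flux_err ** 2
    scale = np.sum(w * flux * model_flux) / np.sum(w * model_flux ** 2)
    return np.sum(w * (flux - scale * model_flux) ** 2)
```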
We use these spectroscopic redshifts to examine the accuracy of our photometric redshift estimates.In addition to this, we also make use of L P functionality to make iterative adjustments to the zero-points of each photometric filter in order to optimise the results against the spectroscopic sample.Minor shifts in the zero-points can occur as a result of inaccurate filter transmission functions, through biases within the choice of SED templates and from the calibration of the images.The inclusion of a very large sample of spectroscopically confirmed objects from a variety of different surveys minimises the risk of introducing an additional bias through calibration on a non-representative sample.For each catalogue, we run L P once on the spectroscopic sample to obtain the zero-point corrections, these offsets are applied and the entire field is then run.We show the results of this process in Table 2 and the majority are small compared to the errors ( 0.1 mags). Photometric redshift accuracy The quality of the photometric redshift catalogues can be described with two numerical values.The outlier rate, defined as percentage of objects satisfying abs( − ℎ )/( + 1) > 0.15, and the Normalised Mean Absolute Deviation Hoaglin et al. (NMAD;1983), defined as 1.48 × median[| − ℎ |/(1+ )].These two values quantify a) the rate at which the photometric redshift method produces a redshift value that is in significant tension to the measured spectroscopic redshift and b) the spread around − ℎ in a manner that is resistant to influence from the relatively small number of extreme outliers.Comparing the two fields, COSMOS has greater depth and uniformity while XMM-LSS is shallower and wider, and has around 1 magnitude variability in its optical coverage .It is therefore expected that XMM-LSS would produce photometric redshifts of lower quality.The results of this comparison are displayed in Table 3.For each field, we produce two sets of photometric redshift estimates, one with and one without the inclusion of the Spitzer IRAC 3.6 and 4.5 m bands.The addition of the Spitzer IRAC bands to COSMOS makes minimal difference in the quality of the photometric redshifts.However, redshifts are found to decrease in quality for the faintest objects in the XMM-LSS catalogue. Table 3. Summary of the photometric redshift statistics.Displayed are the outlier rates and Normalised Median Absolute Deviation (NMAD) of the catalogues when compared to a large spectroscopic sample.We cut show these numbers in cuts of i-band magnitudes to enable comparison with results from Laigle et al. (2016).The cut of < 22 corresponds to the brightest 50 per cent of the spectroscopic sample we compare to in COSMOS and the brightest 60 per cent in XMM-LSS. Stellar mass determination With photometric redshifts and object classification determined for each source, we proceed to measure the stellar mass.This is performed by fixing the redshift to the best-fit value (template and redshift with minimum 2 ) and rerunning LePhare using the total flux measurements, rather than aperture fluxes.For SExtractor we use fluxes from the MAG_AUTO parameter and for ProFound this is magt.Compared to the aperture fluxes, the total fluxes are essential to making accurate measurements of the normalisation of the SED for resolved objects and hence the total luminosity of the galaxy and stellar mass.In the case that an object has a spectroscopic redshift from one of the surveys described in Section 3.2.1, this value is used instead of the photometric redshift. 
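The two quality statistics defined above translate directly into code; the following sketch assumes matched arrays of photometric and spectroscopic redshifts for the high-confidence spectroscopic sample.

```python
import numpy as np

def photoz_quality(z_phot, z_spec):
    """Outlier fraction and NMAD as defined in the text."""
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    outlier_rate = np.mean(np.abs(dz) > 0.15)
    nmad = 1.48 * np.median(np.abs(dz))
    return outlier_rate, nmad
```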
MEASURING THE GSMF We select objects for use in measuring the GSMF based on a number of criteria to maintain purity and completeness. (i) The source exists in both SExtractor and ProFound derived catalogues.The majority of sources that fail this are either artefacts on the edge of manual masking or are a consequence of the initial ProFound selection on a VISTA stack verses just the -band.Such sources are also removed by the magnitude cuts detailed below. (ii) The source has a 2 arcsecond aperture magnitude following the condition < 24.5 in COSMOS and < 23.2 in XMM-LSS.This corresponds to a SNR cut of 8 and is employed to minimise the potential for contamination while enabling * to be well constrained up to ∼ 2. (iii) The source has a best fit SED that is a galaxy or AGN with a redshift between 0.1 and 2.0 ( 2 Gal/QSO < 2 Star ).In the case a source has a spectroscopic redshift, that value is used in place of the photometric redshift. (iv) We apply an upper limit on the quality of the photometric redshift of 2 < 250 (removing the worst 0.5 per cent of objects). This results in a sample of ∼ 320, 000 galaxies in the combined COSMOS and XMM-LSS fields used in measuring the GSMF.We present the galaxy number counts in each redshift bin in Tables A1 and A2.The inclusion of Spitzer bands leads to a significant reduction in the number of highly massive galaxies at redshifts 0.1 < < 2. This is the result of Spitzer bands breaking degeneracies between stellar templates and certain combinations of galaxy template, leading to a number of these massive objects being reclassified as stars.The use of ProFound photometry over SExtractor leads to minimal difference to the general population of objects, cases will be discussed in Section 5.1.1. The 1/𝑉 max method We first compute the GSMFs using the 1/Vmax methodology (Schmidt 1968;Rowan-Robinson 1968).We determined the max for each galaxy by redshifting the best-fitting template (from the 2 arcsecond aperture photometry) until the object no longer meets our -band magnitude limit. The GSMF is then determined using: where is stellar mass, Δ is the bin width which we set to 0.25 dex and max, is the maximum volume for which galaxy could have been successfully detected. Stellar mass completeness Towards lower stellar masses, galaxies become intrinsically less luminous.This ultimately leads to a regime where the detection limits of the data are reached and galaxy number counts begin to fall as they are lost to noise.As the science goals of this study focus on the massive end of the GSMF, we adopt a conservative approach while still probing a significant mass range.In our model fitting procedures we elect to only use bins of redshift and mass where over 95 per cent of the galaxy sample are brighter than the 8 2 arcsecond aperture detection limit of the respective field.Due to the shallower coverage in XMM-LSS, the criteria for completeness is approximately 0.5-0.75dex higher in stellar mass than in COSMOS.We apply a simple correction to the survey areas based on the fraction of the field that is occupied by other sources or masked by foreground stars.For COSMOS this is 15 per cent and XMM it is 7 per cent.This corrects the GSMF for the probability that sources are highly blended (> 50 per cent overlap) with other sources or significantly effected by bright foreground stars. 
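A minimal sketch of the 1/Vmax estimator used above (the standard Schmidt 1968 form, Phi(M) dlogM = sum_i 1/V_max,i) is given below, assuming that the maximum volume has already been computed for each galaxy by redshifting its best-fitting template to the detection-band magnitude limit; the Poisson-style error estimate is an illustrative simplification.

```python
import numpy as np

def gsmf_vmax(log_mass, v_max, bin_edges):
    """1/Vmax estimate of the GSMF.

    log_mass : log10 stellar masses of the selected galaxies
    v_max    : maximum comoving volume (Mpc^3) over which each galaxy
               would still satisfy the detection-band magnitude limit
    bin_edges: edges of the log-mass bins (0.25 dex wide in the text)
    """
    dlogm = np.diff(bin_edges)
    phi = np.zeros(len(bin_edges) - 1)
    phi_err = np.zeros_like(phi)
    idx = np.digitize(log_mass, bin_edges) - 1
    for k in range(len(phi)):
        w = 1.0 / v_max[idx == k]
        phi[k] = np.sum(w) / dlogm[k]
        phi_err[k] = np.sqrt(np.sum(w ** 2)) / dlogm[k]   # Poisson-like error
    return phi, phi_err
```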
Cosmic Variance Our measurements of the GSMF are based on data that only probe a limited volume of the Universe.As a result, they are susceptible to biases that are a consequence of large-scale fluctuations in Here we display the cosmic variance for each field if treated independently and the result of combining the fields using the cosmic variance calculator from Moster et al. (2011) and our measured number counts.COSMOS is shown in gold, XMM-LSS in blue and the combination of the two fields in black.Where XMM-LSS becomes incomplete the cosmic variance value for the combined case is just the COSMOS cosmic variance.density in the galaxy distribution.This is commonly referred to as cosmic variance ( 2 ).As we are measuring the GSMF across a wide range of mass and redshift, there is no single quantitative value that can be used to describe this effect.In order to model the effects of cosmic variance on our measurements, we use the treatment from Moster et al. (2011), which provides an estimate of the cosmic variance as a function of both stellar mass and redshift.Our dataset consists of two fields with differing area and dimensions, thus allowing us to mitigate some of the effects of cosmic variance.Where both the XMM-LSS and COSMOS fields are used in measuring the GSMF we calculate the cosmic variance for each field independently ( 2 , ) and combine the values together with a co-moving volume weighting (Equation 7 in Moster et al. 2011).The output is a percentage error on the GSMF, and so to combine the fields this value is converted back into variance ( 2 ) using our galaxy number counts.When data from the XMM-LSS field drops out of consideration due to its shallower depth, cosmic variance is determined from the area of the COSMOS field alone.Fig. 1 shows the results of our cosmic variance calculations for two redshift bins (0.2 < < 0.3 and 0.75 < < 1.0), highlighting how the increased area from including the XMM-LSS field, coupled with combining two widely separated fields, results in a factor of ∼ 2 decrease in the cosmic variance uncertainty. 
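The bookkeeping for combining the per-field cosmic-variance estimates can be sketched as follows, assuming the relative uncertainties from the Moster et al. (2011) calculator and the comoving volumes of the two fields are already in hand; the simple volume weighting below, which treats the fields as independent, is an illustration of the procedure rather than a reproduction of their Equation 7.

```python
import numpy as np

def combine_cosmic_variance(sigma_rel, volumes):
    """Combine per-field relative cosmic-variance uncertainties for
    independent fields, weighting each field by its comoving volume."""
    sigma_rel = np.asarray(sigma_rel, dtype=float)
    w = np.asarray(volumes, dtype=float)
    w = w / w.sum()
    return np.sqrt(np.sum((w * sigma_rel) ** 2))

# Illustrative call with placeholder uncertainties and volumes (Mpc^3):
# combine_cosmic_variance([0.15, 0.08], [1.0e6, 4.2e6])
```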
Eddington bias The steep drop in the GSMF beyond the knee can lead to a bias in the derived number densities of the most massive galaxies due to Eddington bias (Eddington 1913).All galaxies in the sample have an uncertainty in the derived stellar mass, however as low-mass galaxies are significantly more numerous this leads to more galaxies scattering to higher stellar mass than the number that scatter to lower masses.To account for this effect and hence, determine an estimate of the intrinsic GSMF, we require an estimate of the uncertainty in the stellar masses derived for our sample.With this as a part of our study.We show the probability of a certain mass scatter ( = original − perturbed ) as a result of photometric errors in the SED fitting process.The lower plot shows identical data to the upper plot, however we have logged the -axis to reveal the low-amplitude wings of the distribution.The non-parametric distribution derived directly from our data is shown as the thick red line.The black dashed line is the best fit Gaussian to this data and the orange line is for a Gaussian multiplied by a Lorentzian.distribution we can then deconvolve (or in reality, fit a convolved double Schechter form) to our observations to determine the intrinsic GSMF.To measure the uncertainty in the stellar masses derived in this study, we repeat both the photometric redshift measurement and the SED fitting process after perturbing the photometry of our sources according to the photometric errors in each band.This process is repeated multiple times to produce the distributions shown in Fig. 2. Based on this, we examine three possible methodologies to uncover the intrinsic stellar mass function from this observed distribution in our analysis described in Section 4.5. The measured galaxy stellar mass functions The GSMFs, as measured from the samples produced from our four catalogues, are presented in Fig. 3.They probe stellar masses from 7.75 < log 10 ( ) < 11.75 and are split into nine redshift bins between 0.1 < < 2.0. To each GSMF we fit a double Schechter function (Schechter 1976, Eqn. 3) using a Markov-Chain Monte Carlo (MCMC) implemented in emcee (Foreman- Mackey et al. 2013).In this redshift range, past studies have found the double Schechter functional form better describes the GSMF due to the underlying bimodality in the galaxy population (e.g.Strateva et al. 2001;Driver et al. 2006;Baldry et al. 2012;Ilbert et al. 2013;Davidzon et al. 2017) and we clearly see an upturn at the low mass end of our GSMFs. A series of priors are applied to prevent parameters from flipping due to the symmetry of the double Schechter functional form shown in Equation 3. 
The normalisation (Φ 1 & Φ 2 ) follows the condition Φ 1 > Φ 2 , the low-mass slopes ( 1 & 2 ) are limited to being between [-1.8,1.5] and [-3.0,-0.9]respectively for the two components.To compare against past studies, we make the same assumption that each Schechter component has a single, shared A1 and A2).The black line shows the median of the MCMC results for the SExtractor+IRAC data when fit with a double Schechter function convolved with our Eddington correction.The grey shaded region shows the region contained within 1 of the model fit and is based on 10,000 random samples of the final posterior.The gold line and shaded region follows the same process with the ProFound+IRAC data.The high redshift bins in the right column have the x-axis truncated to higher masses to focus on the complete regime.In the final panel the Eddington corrected ProFound models are shown simultaneously and cut where data is incomplete. value of * in the range 10 < log 10 ( * /M ) < 12.We only fit to data points where the bins in stellar mass are greater than 95 percent complete.The MCMC is set up with 500 walkers that burn in for 100,000 steps before conducting a further 20,000 steps for use in mapping the posterior distributions.Each walker is distributed uniformly in the parameter space and limited by the above priors. We perform the fitting procedure four times, once on the observed GSMF and three times with the double Schechter function modified with the convolution with one of three Eddington bias methods that we describe below.First, we modify the fit to be a convolution of the double Schechter function with a Gaussian distribution, where the standard deviation ( ) of the distribution is calculated by fitting a Gaussian to the measured scatter in masses shown in Fig. 2.This is a commonly used method in studies of the GSMF (e.g.Wright et al. 2018).In the second method we extend this model by multiplying the Gaussian with a Lorentzian in the same manner as described in Ilbert et al. (2013); Davidzon et al. (2017).This adds extended wings to the function, which more adequately reproduces the distribution, however it continues with the assumption that the mass scatter is symmetric.Any asymmetry is important to account for, as it means that there is a greater probability of scattering to lower masses than towards higher masses (due to the photometric redshift uncertainty), and this would impact on the measured GSMF.Therefore, in the third case, we do not fit any parametric distribution to the mass scatter, instead we use the smoothed histogram of the scatter convolved directly with the double Schechter function.This should improve upon the use of the analytic forms because it captures the observed asymmetry in the mass scatter (). 
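The forward-modelling step described above, in which an intrinsic double Schechter function is convolved with the empirical distribution of mass scatter before being compared with the observed GSMF, can be sketched as follows; the double Schechter form is written per dex in log10 mass (one common convention for the Equation 3 referred to above), and the kernel offsets and weights stand in for the smoothed histogram of Fig. 2.

```python
import numpy as np

def double_schechter(logm, logm_star, phi1, alpha1, phi2, alpha2):
    """Double Schechter function per dex in log10(M), with x = M / M_star."""
    x = 10.0 ** (logm - logm_star)
    return (np.log(10.0) * np.exp(-x) * x *
            (phi1 * x ** alpha1 + phi2 * x ** alpha2))

def convolved_model(logm_grid, params, kernel_offsets, kernel_weights):
    """Convolve the intrinsic model with an empirical kernel P(Delta logM).

    kernel_offsets / kernel_weights tabulate the measured mass-scatter
    distribution; the weights should sum to unity."""
    model = np.zeros_like(logm_grid, dtype=float)
    for dm, w in zip(kernel_offsets, kernel_weights):
        model += w * double_schechter(logm_grid - dm, *params)
    return model
```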
Visual inspection of the distribution in mass scatter derived in Section 4.4 reveals there to be minimal dependence on redshift.The two lowest redshift bins are slightly broader as a consequence of photometric redshift uncertainty, but these bins contain a much higher fraction of galaxies with spectroscopic redshifts (30 per cent).As a result, the real scatter within these bins is likely much smaller.Using the method which convolves a Gaussian distribution with the measured GSMF, we find values with 0.08 < < 0.10 across all redshifts.For the second method, which uses a Gaussian distribution multiplied with a Lorentzian, we find 0.43 < < 0.58.We note that the values for the two methodologies are not directly comparable due to the different functional forms.Since minimal evolution was found, we remove the redshift dependence on the Lorentzian component that was introduced in Ilbert et al. (2013) to minimise total fit parameters.The resultant formula for the extended wings is thus , which is equivalent to fixing the redshift to 1.0 in the original formula from Ilbert et al. (2013).The range of 0.43 < < 0.58 for the Gaussian component is in agreement with the findings of Ilbert et al. (2013) and slightly larger than the value found by Davidzon et al. (2017).In addition, Grazian et al. (2015) and Davidzon et al. (2017) both find evidence for redshift and stellar mass dependence on the measured mass scatter when approaching the completeness limited regime.The probable explanation for the lack of such a dependence in our data is the conservative SNR cuts that have been implemented. Following previous studies, we fix the values in our final fits to 0.09 in the Gaussian case and 0.5 for the Gaussian × Lorentzian case.The shapes of these distributions are presented in Fig. 2. We discuss the impacts of this correction on the measured GSMF in Section 5.2. Our preferred method for correcting for Eddington bias is to use the histogram presented in Fig. 2 directly in the convolution.This method directly uses the results of the perturbed catalogues and captures the subsequent asymmetry found in the distribution.Such an asymmetry has previously been described in recent studies exploring the Eddington bias (Grazian et al. 2015).The best-fitting double Schechter function fit parameters using this Eddington bias correction are presented in Table 4. Corner plots showing the posterior probability distribution for the parameters in this model will be provided in an online resource.For completeness, we also present our results without the application of the Eddington bias correction in Appendix A alongside the results obtained using the various parametric fits to the mass scatter. Our double Schechter model is only fit up to stellar masses of 10 11.75 .While there are a small number of objects with stellar masses greater than this limit in our sample, these are increasingly likely to be subject to forms of contamination such as AGN activity, source blending, misclassification of stars or artefacts.In the high redshift regime we are also unable to constrain the lowmass Schechter component.If we instead fit with a single Schechter function for > 1.25, we find the shift in the fit parameters to be minimal (< 1) compared to results obtained with a double Schechter function.For consistency we proceed with the double Schechter functional form for all redshift bins. 
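A sketch of the corresponding emcee set-up, with the priors listed above (Phi_1 > Phi_2, the two slope ranges, and a shared M* between 10 and 12), is given below; it reuses the convolved_model function from the previous sketch, the data arrays are placeholders, and a simple Gaussian likelihood is assumed for illustration.

```python
import numpy as np
import emcee

# double_schechter and convolved_model are defined in the sketch above.

def log_prior(theta):
    logm_star, phi1, alpha1, phi2, alpha2 = theta
    if not (10.0 < logm_star < 12.0):      # shared characteristic mass
        return -np.inf
    if not (phi1 > phi2 > 0.0):            # normalisation ordering
        return -np.inf
    if not (-1.8 < alpha1 < 1.5):          # slope range, first component
        return -np.inf
    if not (-3.0 < alpha2 < -0.9):         # slope range, second component
        return -np.inf
    return 0.0

def log_prob(theta, logm, phi_obs, phi_err, kern_dm, kern_w):
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    model = convolved_model(logm, theta, kern_dm, kern_w)
    return lp - 0.5 * np.sum(((phi_obs - model) / phi_err) ** 2)

# With data in hand (placeholders here), the sampler would be run as:
# sampler = emcee.EnsembleSampler(500, 5, log_prob,
#                                 args=(logm, phi_obs, phi_err, kern_dm, kern_w))
# sampler.run_mcmc(p0, 120000)   # burn-in plus production, as in the text
```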
Visual inspection of the measured GSMFs reveal a change in the shape of the massive-end between the redshift bins of 0.75 < < 1.0 and 1.0 < < 1.25 (see the last panel of Fig. 3).This evolution is present in the results obtained using both source extraction methods (SExtractor/ProFound) and both sets of photometric redshifts (including/excluding Spitzer data).Inspection of the redshift distributions reveal no significant features, such as peaks or troughs, within these redshift bins.The total shift in the high-mass component amounts to around 2 sigma in * and the normalisation Φ 1 between these two redshift bins.So, a combination of statistical errors/cosmic variance or an unknown systematic could be the driver of such a change. Changes in the GSMF with varying methodology Within each redshift bin, we measure a total of four stellar mass functions.First we compare the results with and without the inclusion of Spitzer/IRAC [3.6] and [4.5] data, and second we compare the GSMF derived from SExtractor based photometry in comparison to that derived with Profound.While the low mass component of the GSMF is consistent between these different methods, we find some differences in the results at stellar masses greater than * . The impact of including Spitzer/IRAC photometry on the measured GSMF The most immediately apparent feature is the offset between the GSMFs that include or exclude Spitzer/IRAC data in the redshift bins of 0.5 < < 0.75 and 1.0 < < 1.25.This is due to two effects.Firstly, many high-mass objects that lie in the 0.5 < < 0.75 bin have a different redshift solution ( ∼ 0.1) when Spitzer/IRAC is included.They consequently have smaller masses in this lower redshift bin and their influence is negated by the high number densities within this mass-redshift range.The cause of the different redshift solutions is the uncertainty of the SED slope redder than the band, with a red slope favouring the higher redshift solutions and a blue slope favouring low redshift solutions.Secondly, there is a likely contamination from stars in the 0.5 < < 0.75 and 1.0 < < 1.25 bins.Many high mass objects have smooth red optical slopes that turn over around the or bands.With a limited wavelength range, some black body-like spectra and red galaxies are indistinguishable.Introducing the Spitzer/IRAC bands vastly increases the 2 of the galaxy models and reduces the 2 of the stellar models for a significant number of these high mass objects.Consequently these objects no longer meet our selection criteria (either the 2 increases above 250 or the classification switches to a star) when using the values associated with the inclusion of the mid-infrared bands.Both of these cases are examples of degeneracies between template sets as a result of the use of a limited number of broadband filters.Thus we find that the inclusion of the [3.6] and [4.5] bands makes a significant difference to the derived number density of the most massive galaxies in our data.Therefore, we focus on the GSMFs measured with the inclusion of the Spitzer/IRAC when discussing redshift evolution and comparisons to simulations. 
The impact of varying the choice of source extraction software While the global mass distributions between SExtractor and Pro-Found based catalogues are broadly very similar, there are individual cases where mass estimates can vary widely between objects.Issues can typically be put down to artefacts affecting the photometry, proximity to bright sources and disagreement between the two source extraction measurements in the Spitzer bands.Instances of significant differences in mass estimations tend to occur in high redshift and/or low mass objects that fall within our incomplete regime and so do not affect our final results. At low redshifts, galaxies become increasingly resolved and so any systematic variation between the SExtractor and ProFound photometry would be expected to be more apparent.In our measurements we find any such variation between the two to be very small, with the low-mass components ( < * ) between 0.1 < < 1.5 being highly consistent between the two source extraction methods.We find that at the lowest redshift (0.1 < < 0.2) the ProFound based GSMF produces more galaxies of very high mass compared to the SExtractor derived measurements.This is then reversed at higher redshifts > 0.5, where the ProFound GSMF produces fewer galaxies of very high mass.However, these differences are of relatively low significance (of order 1) and demonstrate that over the redshift and mass ranges probed in this study, the choice of source extraction software makes no significant difference to GSMF results. The intrinsic GSMF corrected for Eddington bias To recover the intrinsic GSMF from our observations, which is affected by the uncertainties in the estimate of stellar mass, we trial three different methods described in Section 4.4.In the first method, we assume that the scatter induced by uncertainties on stellar mass are described by a Gaussian distribution of = 0.09.We find that this imposes minimal changes to the shape of the high-mass end of the MF, corresponding to a shift in * of order 0.05 dex lower when compared to the observed GSMF.With the second method, where we convolve a Gaussian × Lorentzian combination with = 0.5, we find the shift between the observed and intrinsic parameters to be more significant with * shifting to lower masses by ∼ 0.1 dex compared to the observed GSMF.For completeness, we display the The impact that differing models of the Eddington Bias have on the measurement of the intrinsic GSMF at high masses.Shown is the intrinsic GSMF at 0.5 < < 0.75 as measured with SExtractor photometry when recovered using the three methods applied in this study.In blue we show the results of using a simple Gaussian to model mass errors, in red we expand the model to include Lorentzian wings and in black we show the results when using a non-parametric approach.Shaded regions indicate the 1 sigma uncertainty and are derived from 10,000 random samples of the posterior probability distribution.The edges of the shaded regions are made more bold to assist in readability. MCMC results for these two methods in Tables A3 and A4 respectively.Each of these methods make a fundamental assumption in that the scatter in the derivation of the stellar mass is symmetric in logarithmic stellar mass space.We find this assumption to work best when redshifts are confident e.g. 
if we conduct the photometry scatter procedure on just the mass calculations and assume the photometric redshift is correct, or the objects have spectroscopic redshifts. However, when the uncertainty on the photometric redshifts is included, the measured distribution is found to be broader and more asymmetric. This results in the Gaussian × Lorentzian method underestimating the amount of scattering in certain regimes (small up-scatter and most of the down-scattering), even with the extended wings of the function. It is for this reason that we elect to instead use the measured distribution of mass scatter to convolve with the intrinsic double Schechter function (see Fig. 4 for an example of the impact).

The GSMF, when corrected with this distribution of mass scatter, undergoes a similar shift in the Schechter parameters to that of the Gaussian × Lorentzian. The shift in M* is 0.12 dex compared to the observed GSMF, and all the fit parameters lie within 1σ of the results found with the Gaussian × Lorentzian method. While the broad wings of the distribution of mass scatter are very small in probability beyond shifts of 0.3 dex (see Fig. 2), the fact that the GSMF spans many orders of magnitude in number density around the knee requires these wings to be modelled in order to capture the impact of a small number of objects scattering into the less populated, high-mass bins. This highlights that the intrinsic GSMF is very sensitive to the strength of the Eddington bias correction, which could be a key source of inconsistency between the results of observational studies. The best-fit parameters for the double Schechter function when using our preferred non-parametric Eddington correction are shown in Table 4.

The use of high-completeness spectroscopic surveys would reduce the uncertainty on stellar mass due to photometric redshift uncertainties (Pozzetti et al. 2010; Moustakas et al. 2013; Leauthaud et al. 2016). While such studies have been limited to the brighter and more massive objects (e.g. log10(M/M⊙) > 10.5), it is these objects that are most at risk of having their number counts inflated by Eddington bias. With new surveys and instruments using multi-object spectrographs coming online in the coming years (e.g. DEVILS and WAVES), such studies will soon be capable of measuring the GSMF to lower masses and higher redshifts (Davies et al. 2018; Driver et al. 2019).

Comparisons to previous studies

To put this study into greater context we compare against a number of past studies. The studies selected for these comparisons are Davidzon et al. (2017), which utilised only the COSMOS field with the Laigle et al. (2016) catalogue; Wright et al. (2018), which uses a combination of the GAMA (Driver et al. 2011), COSMOS (Davies et al. 2015; Andrews et al. 2017) and 3D-HST surveys (Brammer et al. 2012; Skelton et al. 2014; Momcheva et al. 2016); and the recent study conducted by McLeod et al. (2020), which uses components of the COSMOS & XMM-LSS fields. The results of the double Schechter fits conducted in these studies are presented alongside our own in Fig. 5, and all were calculated with the same cosmological model as used in this study.

Examining the Schechter parameters individually, the strongest agreement is that of the high-mass normalisation (Φ1), where there is very close agreement between our results and those from McLeod et al. (2020). In contrast, the slope of the high-mass component α1 is found to agree more with the results from Wright et al.
(2018); however, this is subject to degeneracy-based effects arising from our poorer capability to constrain the low-mass slope α2. Inspection of the corner plots shows that the more negative values of α1 are driven by the steeper slopes found within our broad uncertainties on the low-mass slope (see accompanying online resources).

The Eddington bias corrections implemented in previous work all use different functional forms. Wright et al. (2018) and McLeod et al. (2020) use corrections that are independent of mass and redshift, as in our study, whereas Davidzon et al. (2017) implement a minor redshift dependence in the Lorentzian terms. Furthermore, Wright et al. (2018) use a Gaussian of width 0.1 dex, while Ilbert et al. (2013) and Davidzon et al. (2017) use the Gaussian × Lorentzian distribution with widths of 0.5 dex and 0.35 dex respectively. McLeod et al. (2020) use a log-normal distribution of width 0.15 dex to correct for the Eddington bias, leading to larger changes in their fit parameters, with M* changing by up to 0.2 dex compared to the raw observations. Other examples of recent studies include Thorne et al. (2020) and Leja et al. (2020), who fold their mass uncertainties into their model fitting procedures. They find a constant M* of 10.78 and mildly evolving values between 10.8 and 10.9. The work conducted in Leja et al. (2020) is an example of a study that included redshift evolution terms within a model fitting procedure that did not bin by redshift. Such a methodology can result in a smoothing of the evolution of the model parameters, which has its pros and cons depending on the timescales examined and the assumptions made (e.g. fixing the slope α1). For the purpose of this study, we have elected to assume no particular evolutionary form for the GSMF.

These past studies, combined with our work trialling different functional forms to account for Eddington bias, suggest that up to 0.15 dex of variation in the characteristic mass M* can be attributed to implementing different methodologies. While significant enough to cause changes of a few σ, this is not enough to explain the ∼ 0.5 dex discrepancy among M* values found between these studies, indicating that there are further systematics at play between the data sets.
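The non-parametric correction described above amounts to convolving a trial double Schechter model with the measured distribution of mass scatter before comparing it with the observed number densities. The sketch below illustrates that operation; the parameter values, grid spacing and the toy asymmetric kernel are illustrative assumptions standing in for the measured distribution, not the actual pipeline used in this work.

```python
import numpy as np

def double_schechter(log_m, log_mstar, phi1, alpha1, phi2, alpha2):
    """Double Schechter function per dex (number density per log-mass interval)."""
    x = 10.0 ** (log_m - log_mstar)
    return np.log(10) * np.exp(-x) * x * (phi1 * x ** alpha1 + phi2 * x ** alpha2)

# Model grid in log10(M/Msun); the step must match the kernel's dex binning.
log_m = np.arange(8.0, 12.5, 0.05)
model = double_schechter(log_m, log_mstar=10.8, phi1=1e-3, alpha1=-0.5,
                         phi2=5e-4, alpha2=-1.5)

# Mass-scatter kernel P(delta) on the same 0.05 dex step. In the paper this is
# measured from the data; here a skewed toy distribution stands in for it.
delta = np.arange(-1.0, 1.0 + 0.05, 0.05)
kernel = np.exp(-0.5 * (delta / 0.09) ** 2) + 0.02 * np.exp(-np.abs(delta - 0.1) / 0.3)
kernel /= kernel.sum()

# "Eddington-convolved" model: what the intrinsic GSMF would look like once
# stellar-mass uncertainties scatter galaxies between mass bins.
convolved = np.convolve(model, kernel, mode="same")
```

In a fit of this kind it is the convolved model, rather than the intrinsic one, that would enter the likelihood evaluated by the MCMC sampler.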
Time evolution of the high-mass component of the GSMF

Including Spitzer/IRAC photometry is found to be important in obtaining accurate galaxy classifications and photometric redshifts; we therefore discuss the redshift evolution in the context of the GSMFs that make use of these data. We show the time evolution of the best-fit Schechter parameters (Φ1, M*, α1, α2), corrected for Eddington bias via our non-parametric method, in Fig. 5. Examining these best-fit Schechter parameters reveals the evolution to be initially driven by the normalisation (Φ1) of the double Schechter components, which increases with time. This evolution is found to be much stronger at higher redshifts, with a change of over 0.5 dex between z = 2 and z = 1.25. For z < 1.25, the normalisation Φ1 stabilises and the evolution of the mass function becomes dominated by evolution in M* (there is also a possible flattening of α1, but this is degenerate with α2, which is poorly constrained at higher redshift).

Under the assumption that M* is constant for 0 < z < 2, a best-fitting value of log10(M*/M⊙) = 10.73 ± 0.04 is found for both SExtractor and ProFound derived photometry. However, we find that a constant M* model gives a poor quality of fit, with χ²_red = 3.4 (3.4) for SExtractor (ProFound). The observed rise in the value of M* at z < 1.25 is present in all versions of the intrinsic GSMF that we produce with the differing Eddington corrections. We therefore fit a simple linear model to the evolution of M*, of the form log10(M*/M⊙) = a + b·z, where a and b are free parameters fit by χ² minimisation. Introducing this simple time dependence into M* produces significantly better fits (χ²_red = 1.4 (2.4), with a Pearson correlation of 0.7 (0.75), for SExtractor (ProFound)). The evolutionary fits can be described by

Δlog10(M*/M⊙)/Δz = −0.164 ± 0.047 (SExtractor), −0.113 ± 0.054 (ProFound),   (4)

a 2-3σ disagreement with the evolution described by a constant M*. However, the observed evolution in M* could also be attributed to a steeper evolution over a smaller redshift range, rather than over the entire redshift range probed (see the top-right panel of Fig. 5). Such an evolution could be indicative of a break in the balance of growth channels that maintains a constant M* above z ∼ 1 while Φ1 increases. Two such channels are star formation and the rate of major mergers. Observations and simulations show that the contribution towards new stellar mass from both mechanisms falls at z < 1.5 in massive galaxies (Rodriguez-Gomez et al. 2015; Tomczak et al. 2016; Qu et al. 2017; O'Leary et al. 2020). The natural consequence of this would be to stabilise the high-mass end of the GSMF towards lower redshifts, as we find (e.g. the bottom-right panel of Fig. 3 for z < 0.75).

The evolving quenched fraction of galaxies

The effects of the evolving in situ star formation rate and merger rates of galaxies result in the changing shape of the GSMF. In order to probe this further, we examine the population of massive galaxies over the redshift range 0.5 < z < 1.5 in more detail. We separate our sample of massive galaxies into two broad specific star formation rate (SSFR) bins, based on the same SED template used to derive the stellar mass. We term these bins 'star-forming' and 'quenched', defined as −8.5 > log10(SSFR/yr⁻¹) > −10.0 and log10(SSFR/yr⁻¹) < −10.0 respectively. The star-forming bin is selected to cover the majority (∼ 95 per cent) of the star-forming main sequence at z ∼ 1.5 as found in both observational and simulation based studies (e.g. Sparre et al. 2015; Tomczak et al. 2016). The quenched galaxies are thus defined as falling below this star-forming main sequence. We do not enforce our definitions to be redshift dependent (i.e. a certain level above or below the evolving star-forming main sequence), in order to capture the global fall in star formation post-cosmic noon and to produce results similar to those that use rest-frame colour selection (e.g. Davidzon et al. 2017; McLeod et al. 2020). We restrict the redshift range to 0.5 < z < 1.5, as SSFR estimates for galaxies at redshifts z < 0.5 become increasingly unreliable as the rest-frame UV exits the wavelength coverage of our observations.

We reproduce the GSMF for these two populations in Fig. 6. Our definitions for star-forming and passive galaxies exclude a small number of galaxies undergoing starbursts (log10(SSFR/yr⁻¹) > −8.5), and consequently the GSMFs in Fig. 6 do not sum directly to the total GSMF presented in Fig.
3. We find the change in the total number density of galaxies with M > M* to be driven by a strong evolution in the passive population. As a consequence of this trend, the most massive component of the GSMF becomes increasingly dominated by passive galaxies, while the number densities of star-forming galaxies are more constant, agreeing with Davidzon et al. (2017) and McLeod et al. (2020). Assuming a constant star formation rate, a borderline quenched galaxy with mass M ≥ M* and log10(SSFR/yr⁻¹) = −10 at z = 1.5 would grow by of order 0.15 dex in stellar mass during the ∼ 4 Gyr that passes between 0.5 < z < 1.5. The majority of the objects in our high-mass sample will produce fewer stars than predicted by this overly simple estimate, as most start with SSFR values that are lower and there is a global trend for the SSFR to decrease with time. Consequently, the growth of galaxies through main-sequence evolution alone is insufficient to produce the observed increase in high-mass galaxy number densities.

Since these quenched systems are not expected to form enough stars to move between bins of stellar mass (0.25 dex), the two primary ways of increasing the quenched number densities at high masses are consequently merger events and the addition of newly quenched systems taken from the star-forming population. As our star-forming number densities at high mass remain near constant, this implies that these galaxies must be replenished by slightly lower mass star-forming galaxies at approximately the same rate at which they undergo quenching. Major mergers (∼ 1:4 mass ratio) are capable of generating mass gains of order ∼ 0.1 dex for passive galaxies on top of internal star formation. Mergers themselves can trigger starbursts in the aftermath of the event. However, at later times (z < 1), such mergers are increasingly 'dry' as massive galaxies tend to be gas poor (e.g. De Lucia & Blaizot 2007; Lin et al. 2008; Davidzon et al. 2016; Tomczak et al. 2017). Merger events come at the cost of reducing the total number of galaxies, and their effects will be embedded in the evolving shape of the GSMF. In addition to providing pathways for stellar mass growth of passive galaxies, merger events can also be used to solve issues surrounding the radial size of passive galaxies. At higher redshifts (z ∼ 1.5), quenched galaxies are found to be compact relative to passive systems observed at z = 0 (e.g. Williams et al. 2010; Wuyts et al. 2011; McLure et al. 2013). The passive nature of these galaxies requires that external processes must be responsible for the observed shift in galaxy radius. The study by McLure et al. (2013) finds that combinations of major and minor mergers can simultaneously reproduce both the growth in stellar mass and radius observed in these systems between z ∼ 1.5 and today.

Figure 6. The GSMF broken down into star-forming and passive components by SSFR. The quenched population is shown with the red lines and the blue lines show the star-forming population. Darker shading indicates mass functions at higher redshifts. Shading widths indicate observational errors through both cosmic variance and counting statistics. The bin 0.5 < z < 0.75 is not shown to reduce clutter and is found to be near identical to the bin 0.75 < z < 1.0. The star-forming population is found to be near constant at higher masses while the passive population is found to evolve significantly between z = 1.5 and z = 0.75.

Comparison to simulations

In this section we compare our measured GSMF with the semi-analytic model (SAM) SHARK (Lagos et al.
2018) and the hydrodynamical simulations Simba (Davé et al. 2019) and EAGLE (Schaye et al. 2015). In Fig. 7 we show the GSMF results from these alongside our intrinsic (Eddington-bias corrected) double Schechter functions. The SHARK SAM has the GSMF as one of the physical measurements that it is tuned to reproduce at z = 0, 1 and 2. As a result, it is unsurprising to find excellent agreement with our results in the low-mass slope and the location of the primary 'knee'. However, there is an excess in the number density of the most massive objects (log10(M/M⊙) > 11.5) in the simulation. Given the close matching to all other components of the GSMF, a likely source for this discrepancy could be, in part, the choice of GSMF used to tune the simulation. The Lagos et al. (2018) models elect to calibrate against studies such as Muzzin et al. (2013) and Wright et al. (2018), all studies with M* values on the higher end of the parameter space covered by observational studies (10.8 < log10(M*/M⊙) < 11.0). For our comparisons we utilise the GSMF derived from the total masses provided by the primary SHARK run. The study by Lagos et al. (2018) finds that implementing a 30 kpc aperture to measure stellar mass has minimal impact for log10(M/M⊙) < 12.0.

A similar situation is present when comparing against the results of the Simba hydrodynamical simulation (Davé et al. 2019). Simba implements a new, torque-limited accretion model (Anglés-Alcázar et al. 2017) for cold gas alongside more conventional Bondi accretion (Bondi 1952) for hot gas. The energy built up in accretion is used to fuel feedback and quench galaxies. While this model for black hole growth and AGN activity was considered to be more physically motivated than those of previous simulations, it is found to still over-produce galaxies of very high stellar mass. Our observations reinforce the findings of Davé et al. (2019): the largest discrepancy in high-mass galaxies is at z ∼ 2 (even when compared to the SHARK results) and there is a slight under-production of galaxies around M* at low redshifts (z < 1). This results in a much shallower 'knee' than would be described by an exponential cut-off. The authors explain that possible causes of the over-production of massive galaxies include over-merging of galaxies that blend, due to the use of a friends-of-friends (FoF) algorithm to count star particles, and the over-production of large halo masses. The EAGLE simulation (Schaye et al. 2015) implements AGN feedback by injecting a fraction of the accreted gas as thermal energy into the local surroundings. It is found to produce mass functions with a slightly higher low-mass normalisation but much sharper cut-offs at the high-mass end. This results in an over-production of galaxies on the low-mass slope by a factor of around 2, but a closer match at M ≥ M*. The inconsistencies at the low-mass end are thought to originate from the lack of quenching of lower mass galaxies at higher redshifts, or from the need to implement more burst-like star formation histories (Furlong et al. 2015). These GSMF measurements implement three-dimensional apertures of radius 30 kpc when calculating stellar masses, which is found to reduce the mass assigned to the largest, most massive systems in the simulation and prevent some of the over-merging issues described in Davé et al. (2019). This was found to reduce the number density of galaxies at log10(M/M⊙) ∼ 11.5 by around half a dex, imposing a steeper cut-off in the number of high-mass systems. In the work conducted in Furlong et al.
(2015), the EAGLE GSMF is fit with a double Schechter function and is found to exhibit a strong evolution in M* over 0.1 < z < 2.0. The value of log10(M*/M⊙) = 10.44 ± 0.08 at z = 2 is notably small, even lower than the results from McLeod et al. (2020), which lie at the lower extreme of the range of observational constraints. The increase from log10(M*/M⊙) = 10.74 ± 0.05 at z = 1 to log10(M*/M⊙) = 10.93 ± 0.03 at z = 0.1 matches our ProFound-based GSMF to within 1σ, while the value of M* at z = 0.1 from our SExtractor-derived GSMF is lower. However, it is worth noting that Furlong et al. (2015) suggest that such an evolution in the characteristic mass could be the result of overly strong AGN feedback limiting the production of massive galaxies between 2 < z < 4. It remains difficult to determine if the faults lie with the observations (mass errors, systematics etc.) or with the simulations (e.g. the fine tuning of AGN feedback) due to the sensitivity of the exponential decline in number density to all of these factors.

Within EAGLE, the stellar mass growth in the most massive galaxies arises from multiple sources. Qu et al. (2017) show that in the redshift range z < 1.5, around 68 per cent of the massive galaxies (log10(M/M⊙) > 11) in EAGLE undergo at least one major merger (at least a 1:4 mass ratio, leading to > 0.1 dex growth) and have the average fraction of stellar mass originating from outside the primary galaxy increase by a factor of two (from a median of 10 per cent to 20 per cent), with most of this external contribution fuelled by these major merger events. At z = 1, over half of the simulated galaxies with log10(M/M⊙) > 11 are defined as quenched (log10(SSFR/yr⁻¹) < −10), falling significantly below the EAGLE star-forming main sequence (Furlong et al. 2015). This leads to a fraction of quiescent galaxies similar to our sample. These galaxies consequently experience mass gains of less than 0.15 dex over the 4 Gyr between 0.5 < z < 1.5 under the simple assumption of a constant star formation rate. Creating this evolving value of M* while maintaining a near constant value of Φ1 thus likely requires a balance of merger events and internal star formation in order to sustain the growing number density of M > M* systems.

CONCLUSIONS

Utilising new photometric catalogues generated from optical and near-infrared data in the COSMOS and XMM-LSS fields, we have measured the GSMF over the redshift range 0.1 < z < 2.0, covering 60 per cent of the age of the Universe. The use of the two fields greatly reduces the impact of cosmic variance, due to both the wider area and the fact that they are widely separated on the sky, and allows for tight constraints on the GSMF at M > M*. Simultaneously, the depth of the photometry available allows for constraints on the GSMF down to log10(M/M⊙) = 7.75 for galaxies at z ∼ 0.1 and log10(M/M⊙) = 9.75 for galaxies at z ∼ 2.0. We have investigated the use of ProFound and SExtractor source extraction software and also the impact of Spitzer/IRAC [3.6] and [4.5] photometry on the GSMF. Our main conclusions are summarised as:

(i) The inclusion of Spitzer/IRAC [3.6] and [4.5] photometry alleviates the degeneracies in select cases where the SEDs of red, massive galaxies in the redshift ranges 0.5 < z < 0.75 and 1.0 < z < 1.25 are confused with low-redshift or stellar templates. Thus the inclusion of these data leads to fewer contaminants and a small decrease in the derived M* (of up to 0.1 dex) in these redshift ranges.
(ii) Both SExtractor and ProFound derived photometry produce consistent faint-end components of the GSMF. Differences between the mass functions at higher masses are greater when examining the extreme parts of our results (e.g. at the lowest and highest redshifts), but the resultant double Schechter fits are found to agree to within ∼ 1σ.

(iii) The measured GSMF is found to disagree with the assumption that the characteristic mass M* is constant with time between 0.1 < z < 2.0 at the 3σ (2σ) level for the SExtractor (ProFound) derived results. Such an evolution in M* between 0.5 < z < 1.5 can be seen in some (but not all) previous work and also in the predictions from the EAGLE hydrodynamical simulation. However, the significance is low, and caveats in both the understanding of observational systematics and the AGN feedback strength in simulations (Furlong et al. 2015) mean that a claim of an evolving M* is presently very mild.

(iv) Eddington bias, and the methodology used to correct for it, is found to be highly influential on the shape of the high-mass end when attempting to retrieve the intrinsic GSMF from observations. There is presently still no consensus on the handling of the observational and systematic errors that can impact stellar mass estimates (see also discussions within Grazian et al. 2015; Davidzon et al. 2017). For our data, we find the required correction to be asymmetric and poorly described by commonly used analytic forms. When comparing to the observed GSMF, applying a simple Gaussian treatment of the Eddington bias is found to reduce M* by around 0.05 dex. With the Lorentzian wings added into the description, the shift in M* doubles to around 0.1 dex. Utilising a non-parametric form, based on the measured error distribution, leads to M* shifts of 0.12 dex when compared to the observed GSMF.

(v) When splitting our galaxy sample by specific star formation rate (SSFR), our results confirm the findings of previous studies that show increasing number densities of quenched galaxies are responsible for the rise in the GSMF for M > M* (e.g. Davidzon et al. 2017; McLeod et al. 2020). Examination of the growth channels for stellar mass in the EAGLE simulation shows this to be the result of a combination of internal star formation and merger events. This is because internal star formation alone would amount to less than 0.15 dex of stellar mass growth for the majority of the population of massive galaxies (M > M*). Between 0.5 < z < 1.5, the constant number densities of star-forming galaxies at M > M* indicate that these galaxies quench at approximately the same rate that lower mass galaxies replace them through their own in situ star formation.

(vi) Comparisons to simulations reveal that the semi-analytic model SHARK and the hydrodynamical simulation Simba both over-produce massive galaxies at low and intermediate redshifts. Likewise, EAGLE is found to over-predict the number of low-mass galaxies by a factor of around 2. This highlights that even in an era where such simulations can be fine tuned to better replicate observations, discrepancies still exist. It also further emphasises the conclusions drawn by these studies that additional work is required on all sides, from the observational data with which to calibrate and compare simulations, to the development of the physical models used in the simulations. However, we find that the evolution of the high-mass component of the EAGLE simulation replicates our observations of an evolving M*.
Table A1. The median and standard deviations of the MCMC samples for the double Schechter function when using SExtractor based total photometry. Priors are set to ensure that α1 & log10(Φ1) refer to the high-mass component and α2 & log10(Φ2) refer to the low-mass component. Alongside are the number of galaxies from each of the two fields that contribute to the GSMF as shown in Fig. 3. These mass functions are based on the raw observations and are not corrected for Eddington bias.

Figure 1. The estimated cosmic variance for two example redshift bins. The solid lines show 0.2 < z < 0.3, the dashed lines show 0.75 < z < 1.0. Here we display the cosmic variance for each field if treated independently and the result of combining the fields using the cosmic variance calculator from Moster et al. (2011) and our measured number counts. COSMOS is shown in gold, XMM-LSS in blue and the combination of the two fields in black. Where XMM-LSS becomes incomplete the cosmic variance value for the combined case is just the COSMOS cosmic variance.

Figure 2. The shapes of the three Eddington bias corrections implemented as a part of our study. We show the probability of a certain mass scatter (the difference between the original and perturbed mass estimates) as a result of photometric errors in the SED fitting process. The lower plot shows identical data to the upper plot, however we have logged the y-axis to reveal the low-amplitude wings of the distribution. The non-parametric distribution derived directly from our data is shown as the thick red line. The black dashed line is the best-fit Gaussian to this data and the orange line is for a Gaussian multiplied by a Lorentzian.

Figure 3. The GSMF from our analysis of the COSMOS and XMM-LSS data. In blue and red are the results from SExtractor and ProFound, while navy and brown data points denote the points measured after including Spitzer/IRAC photometry. Unfilled data points indicate those with less than 95 per cent completeness and are not used in any fitting procedure. Displayed data points are based on raw observations and do not correct for Eddington bias (see Tables A1 and A2). The black line shows the median of the MCMC results for the SExtractor+IRAC data when fit with a double Schechter function convolved with our Eddington correction. The grey shaded region shows the region contained within 1σ of the model fit and is based on 10,000 random samples of the final posterior. The gold line and shaded region follow the same process with the ProFound+IRAC data. The high redshift bins in the right column have the x-axis truncated to higher masses to focus on the complete regime. In the final panel the Eddington corrected ProFound models are shown simultaneously and cut where data is incomplete.

Figure 4. The impact that differing models of the Eddington bias have on the measurement of the intrinsic GSMF at high masses. Shown is the intrinsic GSMF at 0.5 < z < 0.75 as measured with SExtractor photometry when recovered using the three methods applied in this study. In blue we show the results of using a simple Gaussian to model mass errors, in red we expand the model to include Lorentzian wings and in black we show the results when using a non-parametric approach. Shaded regions indicate the 1σ uncertainty and are derived from 10,000 random samples of the posterior probability distribution. The edges of the shaded regions are made more bold to assist in readability.
Figure 5. The time evolution of the double Schechter parameters as measured with photometry derived by SExtractor (blue) and ProFound (red), including the use of Spitzer/IRAC photometry in both the redshift and mass calculations. These data points are for the fits that utilise the non-parametric correction for Eddington bias. Since the parameter Φ2 is largely unconstrained for a significant portion of the redshift range, we do not show those results in this figure. Alongside our results we display the results of Davidzon et al. (2017), Wright et al. (2018) and McLeod et al. (2020) in black, gold and green respectively. The lowest redshift data point from McLeod et al. (2020) is a modified value from Baldry et al. (2012).

Figure 7. The intrinsic Galaxy Stellar Mass Function (GSMF) from Table 4 with Eddington bias removed. SExtractor derived results are in blue while ProFound is in red, with the shading indicating the 1σ confidence interval. Presented alongside are examples of previously conducted simulations. For data from external sources, we present the closest redshift bin available for our comparisons. The studies used are the primary results from the SHARK run conducted in Lagos et al. (2018) as the black dotted lines, EAGLE (Schaye et al. 2015) as yellow dashed lines and Simba as the dot/dashed purple lines (Davé et al. 2019).

Table 1. Summary of the 5σ detection depths within the COSMOS and XMM-LSS fields. Depths are calculated in 2 arcsec diameter circular apertures (2.8 arcsec for IRAC bands due to Spitzer's larger point spread function) that were placed on empty regions of the image. Values are grouped into Deep (D) and Ultra-Deep (UD) sub-regions. The XMM-LSS field has a single ultra-deep pointing from HSC centred on the UDS field, so a third of XMM-LSS has this deeper optical coverage. Similarly, the u* coverage has a single square degree of ultra-deep coverage in a separate part of XMM-LSS as part of the CFHT Legacy Survey observing programme.

Table 2. The list of zero-point offsets applied to the 2 arcsecond aperture photometry after optimising the template fitting process against a large spectroscopic sample. Offsets listed are in magnitudes and are calculated separately for each field.

Table 4. The best-fitting double Schechter function parameters derived from our observed GSMF when corrected for Eddington bias through the direct implementation of the mass scatter rather than fitting an analytic form. Priors are set to ensure that α1 & log10(Φ1) refer to the high-mass component and α2 & log10(Φ2) refer to the low-mass component.

Table A2. The median and standard deviations of the MCMC samples for the double Schechter function when using ProFound based total photometry. Priors are set to ensure that α1 & log10(Φ1) refer to the high-mass component and α2 & log10(Φ2) refer to the low-mass component. Alongside are the number of galaxies from each of the two fields that contribute to the GSMF as shown in Fig. 3. These mass functions are based on the raw observations and are not corrected for Eddington bias.
Table A3. The median and standard deviations of the MCMC samples for the double Schechter function when corrected for Eddington bias by convolving with a Gaussian distribution of width σ = 0.09. Priors are set to ensure that α1 & log10(Φ1) refer to the high-mass component and α2 & log10(Φ2) refer to the low-mass component.

Table A4. The median and standard deviations of the MCMC samples for the double Schechter function when corrected for Eddington bias through the scattering of photometry in the photo-z process, leading to a convolution with a Gaussian × Lorentzian functional form with a strength of 0.50. Priors are set to ensure that α1 & log10(Φ1) refer to the high-mass component and α2 & log10(Φ2) refer to the low-mass component.
17,150
2021-01-18T00:00:00.000
[ "Physics" ]
Transcription Factor Sp1 Promotes the Expression of Porcine ROCK1 Gene Rho-associated, coiled-coil containing protein kinase 1 (ROCK1) gene plays a crucial role in maintaining genomic stability, tumorigenesis and myogenesis. However, little is known about the regulatory elements governing the transcription of porcine ROCK1 gene. In the current study, the transcription start site (TSS) was identified by 5’-RACE, and was found to differ from the predicted one. The region in ROCK1 promoter which is critical for promoter activity was investigated via progressive deletions. Site-directed mutagenesis indicated that the region from −604 to −554 bp contains responsive elements for Sp1. Subsequent experiments showed that ROCK1 promoter activity is enhanced by Sp1 in a dose-dependent manner, whereas treatment with specific siRNA repressed ROCK1 promoter activity. Electrophoretic mobility shift assay (EMSA), DNA pull down and chromatin immunoprecipitation (ChIP) assays revealed Sp1 can bind to this region. qRT-PCR and Western blotting research followed by overexpression or inhibition of Sp1 indicate that Sp1 can affect endogenous ROCK1 expression at both mRNA and protein levels. Overexpression of Sp1 can promote the expression of myogenic differentiation 1(MyoD), myogenin (MyoG), myosin heavy chain (MyHC). Taken together, we conclude that Sp1 positively regulates ROCK1 transcription by directly binding to the ROCK1 promoter region (from −604 to −532 bp) and may affect the process of myogenesis. Introduction As a downstream effector of the small GTP-binding protein Rho, rho-associated, coiled-coil containing protein kinase (ROCK) acts as a molecular switch controlling a variety of cellar functions, such as the regulation of stress fiber formation, actin polymerization and so on [1,2]. ROCK1 and ROCK2, two isoforms of ROCK, have distinct roles and cannot be replaced by each other [3]. ROCK1 participates in multiple biological and physiological processes [4,5]. Besides the important role in the progress of tumorigenesis, obesity, and inflammation [5][6][7], ROCK1 also participates in the regulation of skeletal muscle [8,9]. Additionally, numerous elements such as RhoA, medicine, sex hormone, and nitric oxide can regulate the activity of ROCK1 [10][11][12].The activation of ROCK1 is necessary and sufficient to control glucose transport in myoblasts [13]. During myogenesis, ROCK1 is reported to act as a negative regulator of the process [9]. ROCK1 is required for myoblast proliferation, but prevents commitment to differentiation [8]. Despite the researchers increasing focus on the biological role of ROCK1 gene, little is known about the transcriptional regulation of porcine ROCK1 gene. Therefore, it is crucial to elucidate the molecular mechanisms involved in its expression and transcriptional regulation. Sp1, a member of the SP/KLF transcription factor family, is an important regulator in many tissues that binds to GC-rich motifs, which plays a key role in cellular functions [14,15]. It always works through binding to the promoter region of its target genes [16,17], and can activate or repress the transcription in response to physiological and pathological stimuli [18]. The promoter activity of rat ROCK1 gene is reduced by Sp1, whereas it is enhanced by Sp6 in dental epithelial cells [19]. 
To investigate the transcriptional regulation of the ROCK1 gene, we isolated the promoter of the porcine ROCK1 gene, analyzed its upstream regulatory elements and revealed that transcription factor Sp1 directly binds to the core promoter region of ROCK1 and stimulates its expression.

Identification of the Promoter Region and Regulatory Elements of Porcine ROCK1 Gene

A 2552-bp 5'-flanking region of the porcine ROCK1 gene was obtained and 11 progressive deletions were introduced upstream of the luciferase reporter gene. It was noticed that the ROCK1-P7 to P10 fragments had no activity relative to the negative control; an increase of activity was detected in P6 and P5, especially the P5 fragment (Figure 1), indicating that the region from P5 to P6 (−744 to −402 bp) was important for transcriptional activity (Figure 1).

Figure 1. 5'-Deletion analysis of the porcine ROCK1 promoter activity. Schematic representation of the progressive deletions of the porcine ROCK1 5'-flanking region in the pGL3-Basic vector and the relative activities of the ROCK1 promoter corresponding to the progressive deletions. The predicted transcription start site (TSS, the red arrow in the figure) was set as +1 and differs from the TSS in the NCBI database. The pGL3-control/basic vectors were used as positive/negative controls, while pRL-TK was used as internal control. Data were expressed as means ± SD of three replicates.

In addition, differing from the predicted transcription start site (TSS), the TSS obtained by 5'-RACE was located at −430 bp (Figure S1), suggesting the importance of the region from −744 to −402 bp. According to TFsearch and the JASPAR database, three potential Sp1 binding sites (−604/−595 bp, −561/−554 bp and −543/−532 bp) are located in this region (Figure S1).
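The binding-site prediction step can be illustrated with a simple consensus scan. The sketch below searches a promoter sequence for matches to the Sp1 GC-box consensus 5'-(G/T)GGGCGG(G/A)(G/A)(G/T)-3' (quoted later in the Discussion) on both strands. The example sequence and coordinate handling are placeholders, not the actual ROCK1 promoter, and tools such as TFsearch/JASPAR use position weight matrices rather than a strict consensus.

```python
import re

# Sp1 GC-box consensus, 5'-(G/T)GGGCGG(G/A)(G/A)(G/T)-3', written as a regex.
SP1_CONSENSUS = re.compile(r"[GT]GGGCGG[GA][GA][GT]")
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def find_sp1_sites(promoter: str, tss_index: int):
    """Return (start, end, strand, site) tuples with coordinates relative to the TSS.

    tss_index is the 0-based position of the +1 site within `promoter`.
    """
    hits = []
    for strand, seq in (("+", promoter), ("-", reverse_complement(promoter))):
        for m in SP1_CONSENSUS.finditer(seq):
            start = m.start() if strand == "+" else len(promoter) - m.end()
            hits.append((start - tss_index, start - tss_index + len(m.group()),
                         strand, m.group()))
    return sorted(hits)

# Toy example only -- not the real ROCK1 promoter sequence.
example = "ATATGGGGCGGGGTCCATTTTGGGCGGAGTAAA"
print(find_sp1_sites(example, tss_index=30))
```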
The Importance of Sp1 Binding Sites in Porcine ROCK1 Promoter

To functionally determine the importance of the Sp1 binding sites, site-directed mutagenesis was performed (Figure 2A). The modification of the −604/−595 bp and −561/−554 bp regions obviously blocked the Sp1-stimulated transcription activity (Figure 2B,C) in both PK and C2C12 cells. These results revealed that the two binding sites (located at −604/−595 bp and −561/−554 bp) are essential for ROCK1 promoter activity.

Figure 2. (B,C) Luciferase activity of site-directed mutagenesis constructs in PK and C2C12 cells. Statistical differences of relative activities were analyzed within the same cells; ** p < 0.01. Data were expressed as means ± SD of three replicates.

Sp1 Binds to the Porcine ROCK1 Promoter in Vitro and in Vivo

The EMSA and DNA pull down assays were used to determine whether transcription factor Sp1 could bind to the promoter region of the porcine ROCK1 gene in vitro.
As shown in Figure 3A, the incubation of nuclear extract (NE) from PK cells with Sp1 probe 1 gave rise to the formation of a DNA-protein complex (Lane 2), which could be observed with the competitor-mut probe (Lane 4) but not with the competitor probe (Lane 3). Moreover, DNA-protein bands of the other two probes were also detected (Figure 3B,C); when incubated with the same NE, each site showed a different binding ability, whereas the molecular sizes of the three DNA-protein complexes were similar (Figure 3D). Moreover, the proteins obtained from the DNA pull down assay were detected using anti-Sp1 by Western blotting (Figure 3E), suggesting that the protein bound to the ROCK1 promoter region was indeed the transcription factor Sp1. Similar EMSA and DNA pull down results were observed in NE of pig longissimus dorsi muscle (LM) (Figure S2A-E).

To determine the in vivo binding of Sp1 to the ROCK1 promoter, ChIP analysis was performed in PK cells. The position information of the ChIP-PCR primers is shown in Figure 3F, where the three sites are considered as a cluster. A DNA fragment of the expected size was obtained when anti-Sp1 was added (Figure 3G). However, when the antibody for Sp3 (another member of the SP/KLF transcription factor family) was used, the expected DNA fragment did not appear (Figure 3G). The results showed that Sp1, rather than Sp3, directly interacted with the ROCK1 promoter. Taken together, these findings suggested that the proximal Sp1 binding sites of the ROCK1 promoter region were capable of binding Sp1 protein both in vitro and in vivo.

Figure 3. The first (A), the second (B) and the third (C) biotin-labeled probes were incubated with the NE of PK cells. Lane 1 was the negative control without NE; the reagents were incubated in the absence of competitor probes in Lane 2 or in the presence of a 50× excess of competitor (Lane 3)/competitor-mutant (Lane 4) probes, respectively. (D) The three probes were incubated with PK NE, respectively. (E) Proteins of PK extracted from DNA pull down materials were detected by Western blot. The total non-denaturing proteins/Streptavidin MagneSphere® Paramagnetic Particles were taken as positive/negative controls (PC/NC). The three potential Sp1 binding sites were named Sp1.1, Sp1.2, and Sp1.3 in (D,E). The competitor/competitor-mutant probes were in 50-fold excess and arrows indicate the specific DNA-protein complex bands. (F) Schematic diagram of the Sp1 binding sites in the porcine ROCK1-P5 fragment. (G) ChIP assay of Sp1 binding to the porcine ROCK1-P5 fragment in PK cells. The in vivo interaction of Sp1 and Sp3 with the porcine ROCK1 promoter was determined by ChIP assay, in which normal mouse IgG was used as negative control. DNA isolated from immunoprecipitated materials was used for PCR amplification, whereas total chromatin was used as input (positive control). The antibodies used in the ChIP assay are listed on the right of the figure and the corresponding amplification product obtained here was 107 bp.
Sp1 Stimulates ROCK1 Gene Expression

According to the prediction of cis-acting elements, overexpression and inhibition of Sp1 were performed. When overexpressing Sp1 in both PK and C2C12 cells, the ROCK1-P5 activity was significantly increased depending on the amount of Sp1 (Figure 4A,B). Furthermore, overexpression of Sp1 significantly promoted ROCK1 expression at the mRNA level (p < 0.05), which was also dependent on the amount of Sp1 (Figure 4C). Meanwhile, a similar tendency was observed at the protein level (Figure 4D). Additionally, when inhibiting Sp1 with specific siRNAs, a clear decrease in ROCK1 promoter activity and ROCK1 expression was observed in both PK and C2C12 cells (Figure 4F-H). Taken together, our data showed that Sp1 acted as a positive regulator of ROCK1 transcription.

Sp1 Stimulates the Process of Myogenesis

To determine whether ROCK1 affects the process of myogenesis via Sp1, Sp1 was forcibly expressed in C2C12 cells. Overexpression of Sp1 resulted in the up-regulation of MyoD, MyoG, and MyHC at the mRNA level (Figure 5), indicating that Sp1 can stimulate the process of myogenesis.

Discussion

The promoter region is reported to regulate transcription initiation and gene expression, and is therefore critical for the regulation of gene expression [20].
Our experimental analysis indicated that the 5'-flanking region from −744 to −402 bp of the porcine ROCK1 gene significantly affected the promoter activity (Figures 1 and 2), suggesting that the core promoter is located in this region and that the regulatory elements of this region may enhance the promoter activity of ROCK1. The in silico analysis of the promoter region of porcine ROCK1 reveals that this region exhibits an extremely high GC content (up to 54.82%), particularly the core promoter region (from −744 to −402 bp, accounting for 74.72%), and the region contains several GC boxes. Further analysis indicated that the core promoter region of porcine ROCK1 contains several potential binding sites for Sp1, in line with the report that Sp1 binding sites often occur as multiple repeats [21].

Sp1, a regulator in many tissues, plays a vital role in numerous cellular functions such as apoptosis and invasion [22,23] and usually regulates the expression of its target genes via binding to their promoters [24]. It often works through binding to GC-rich decanucleotide recognition elements (GC boxes) with a consensus sequence 5'-(G/T)GGGCGG(G/A)(G/A)(G/T)-3' [25,26]. The EMSA, DNA pull down and ChIP assays revealed that Sp1 does bind to the ROCK1 promoter directly and that these interactions are important determinants of basal promoter activity. Furthermore, the specific DNA-protein complexes detected by EMSA indicate that Sp1 can bind independently to each of the potential GC boxes, in accordance with former research [27]. Previous studies indicate that Sp1 and Sp3 can cooperate or compete to regulate the expression of target genes [28,29]. The presence of the expected amplification products when DNA was precipitated with the Sp1 antibody, and their absence when DNA was precipitated with the Sp3 antibody, indicate that Sp1, not Sp3, directly binds to the ROCK1 promoter to transcriptionally regulate the expression of ROCK1.

Sp1 can promote or suppress the expression of its target genes [18]. The significant changes in ROCK1 promoter activity and endogenous ROCK1 expression upon overexpression or inhibition of Sp1 indicate that Sp1 can stimulate the expression of ROCK1 via the regulation of transcription activity, in contrast to the previous report that Sp1 reduces the promoter activity of the rat ROCK1 gene in dental epithelial cells [19].

The ROCK1 gene has been implicated in the regulation of skeletal muscle growth [30,31]. Several studies via miRNA, overexpression or inhibition of ROCK1 have demonstrated that ROCK1 acts as a negative regulator in the myogenic process [8,9,32]. Myf5 (myogenic factor 5), Myf6 (MRF4), MyoD (myogenic differentiation) and myogenin (MyoG) are members of the myogenic regulatory factors (MRFs), and play crucial roles in the complex process of skeletal myogenesis, including commitment and proliferation, muscle fiber formation, and postnatal maturation and muscle function [33,34]. Myosin heavy chain (MyHC) is the essential component of myosin, the most abundant contractile molecule in mammalian skeletal muscles [35]. The significant increase of MyoD, MyoG, and MyHC in C2C12 cells upon forced expression of Sp1 suggests a significant role of Sp1 in regulating myoblast differentiation, implying that ROCK1 might participate in, or partly inhibit, the regulation of myoblast differentiation via Sp1. Additionally, Sp1 can affect the phosphorylation of multiple genes, which can further affect their functions [36,37]. The function of ROCK will be changed when its activity is modified [38].
Therefore, further research needs to be performed to determine whether Sp1 can affect the phosphorylation of the porcine ROCK1 gene. Taken together, Sp1 acts as a critical regulatory factor for porcine ROCK1 transcription and may regulate the development of pig skeletal muscle via an Sp1-ROCK1-MRFs pathway, thus providing a novel regulation mechanism of porcine ROCK1 and myogenesis.

Animals

Pigs (S. scrofa) used for this study were obtained from the Jingpin pig station of Huazhong Agricultural University (Wuhan, China). All of the studies involving animals were conducted according to the regulation (No. 5 proclamation of the Standing Committee of Hubei People's Congress) approved by the Standing Committee of Hubei People's Congress, China. The sample collection was approved by the Ethics Committee of Huazhong Agricultural University with the permit number No. 30700571 for this study. The animals were humanely sacrificed as necessary to ameliorate suffering. The methods were carried out in accordance with the approved guidelines. Four blood samples were prepared for genomic DNA, and protein samples of the longissimus dorsi muscle (LM) were collected from 60-day-old Yorkshire pigs (3 pigs in total).

Rapid Amplification of 5' cDNA Ends (5'-RACE)

The LM of a 60-day-old Yorkshire pig was used for RNA isolation, with the total RNA from mouse heart provided in the kit as positive control. 5'-RACE was performed using the SMARTer™ RACE cDNA synthesis kit (Clontech, Shiga, Japan) according to the manufacturer's instructions, and the primers are listed in Table S1.

Quantitative Real-Time Polymerase Chain Reaction (qRT-PCR)

Total RNA was extracted with the Total RNA Isolation Kit (Omega, Bienne, Switzerland) and the cDNA was synthesised as previously described [39]. Subsequently, expression was measured by qRT-PCR on a LightCycler480 (Roche, Basel, Switzerland), using gene-specific primers (Table S1). HPRT, eEFγ, and PPIA were selected as housekeeping genes for PK cells [39], while β-Actin was used for C2C12 cells.

Plasmids' Construction, Cell Culture, Transfection and Analysis

Serial deletions of the porcine ROCK1 5'-flanking genomic region were amplified and denoted ROCK1-P0-P10-Luc. Subsequently, the recombinant fragments were digested and inserted into the pGL3-Basic vector (Promega, Madison, WI, USA). The overexpression plasmids were created by inserting the coding sequence (CDS) of the porcine Sp1 gene into the pcDNA3.1 (+) vector (Invitrogen, Carlsbad, CA, USA). All constructs were sequenced for verification. Primers used for amplification are listed in Table S2. The PK (pig kidney cell line) and C2C12 (mouse myoblast cell line) cells were maintained at 37 °C in a humidified 5% CO2 atmosphere in Dulbecco's modified Eagle medium (Hyclone, Logan, UT, USA) supplemented with 10% fetal bovine serum (Gibco, New York, NY, USA). Cells were seeded in suitable plates and cultured overnight. The cells were then transfected using Lipofectamine 2000 (Invitrogen). Site-directed mutagenesis of the ROCK1-P5 (−744/+737)-Luc construct was performed using the MutanBEST Kit (Takara, Tokyo, Japan) and specific primers (Table S3). Twenty-four hours after transfection, the luciferase activity was measured with a PerkinElmer 2030 Multilabel Reader (PerkinElmer, Boston, MA, USA).
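The reporter readings are normalised to the pRL-TK (Renilla) internal control and expressed relative to the empty vector. A minimal sketch of that normalisation is given below; the readings and replicate structure are placeholders, not the measured values.

```python
import numpy as np

def relative_luciferase(firefly, renilla, firefly_basic, renilla_basic):
    """Firefly activity normalised to the Renilla internal control,
    expressed relative to the empty pGL3-Basic vector."""
    ratios = np.asarray(firefly) / np.asarray(renilla)
    basic_ratio = np.mean(np.asarray(firefly_basic) / np.asarray(renilla_basic))
    return ratios / basic_ratio

# Placeholder triplicate readings for one promoter construct and the empty vector.
rel = relative_luciferase(
    firefly=[15800, 16950, 16200], renilla=[5200, 5500, 5100],
    firefly_basic=[900, 850, 880], renilla_basic=[5000, 5300, 5100],
)
print(f"relative activity: {rel.mean():.2f} +/- {rel.std(ddof=1):.2f}")
```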
Electrophoretic Mobility Shift Assay (EMSA)

Nuclear extract (NE) of PK cells and pig LM was prepared with the Nucleoprotein Extraction Kit (Beyotime, Shanghai, China). Sequence-specific probes (Sangon, Shanghai, China) were synthesized and annealed into double strands. The DNA binding ability was detected by EMSA with the Thermo Scientific LightShift EMSA Kit (Grand Island, NY, USA). Briefly, the required components were added to the reaction, including 20 fmol of biotin-labeled oligonucleotides; the control groups were supplemented with a 50-fold excess of competitor/competitor-mut oligonucleotides. After incubation, the mixtures were run on polyacrylamide gels, transferred onto a nylon membrane and analyzed with a GE ImageQuant LAS4000 mini (GE Healthcare, Little Chalfont, UK). Details of the oligonucleotide probes are shown in Table S3.

Chromatin Immunoprecipitation (ChIP) Assay

To measure the binding activity of Sp1 in vivo, the ChIP assay was conducted in PK cells with the EZ-ChIP™ Kit (Millipore, Bedford, MA, USA) according to the manufacturer's protocol. Briefly, DNA-protein complexes were cross-linked and neutralized. After sonication, fragmented chromatin was added to ChIP dilution buffer and incubated overnight with anti-Sp1 (Abcam, ab13370, rabbit polyclonal antibody) or anti-Sp3 (Santa Cruz, sc-644x, rabbit polyclonal antibody). Normal mouse IgG was added as negative control antibody. Immunoprecipitated products were collected after incubation with Protein A + G coated magnetic beads. The bound chromatin was eluted and digested with proteinase K, and the DNA was then purified for PCR analysis (the primers are listed in Table S1).

DNA Pull Down Assay

Non-denaturing proteins of PK cells and pig LM were extracted with Non-denaturing Lysis Buffer (Sangon). The non-denaturing proteins were then bound to biotin-labeled DNA probes by rotation, after which Streptavidin MagneSphere® Paramagnetic Particles (Promega) were added. The reactions were further rotated and washed. DNA-bound proteins were then collected with 10% SDS (sodium dodecyl sulfate) and analyzed by Western blotting, taking the non-denaturing proteins/Streptavidin MagneSphere® Paramagnetic Particles as positive/negative controls.

Western Blotting

Samples were heated in SDS buffer, separated by SDS-PAGE and transferred to PVDF (polyvinylidene fluoride) membranes. The membranes were then blocked and separately probed with anti-ROCK1 (Abcam, Cambridge, MA, USA, ab134181) and anti-Sp1 (Abcam, ab13370) overnight. β-Actin (Santa Cruz, Santa Cruz, TX, USA, sc-130656) was used as a loading control. After washing, the membranes were incubated with secondary antibodies (Santa Cruz) and visualized using the ECL (enhanced chemiluminescence) Western Blotting Detection System (Tiangen, Beijing, China).

Statistical Analysis

All experiments were performed at least three times in triplicate. Data are presented as mean ± SD of three replications. Statistical significance was assessed with the Bonferroni t-test in SAS 9.1. Differences were considered statistically significant at p < 0.05 (* p < 0.05; ** p < 0.01).
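As a rough illustration of the stated analysis (not the authors' SAS code), pairwise t-tests with a Bonferroni correction could be run as follows; the group values are placeholders.

```python
from itertools import combinations
from scipy import stats

# Placeholder triplicate measurements for three treatment groups.
groups = {
    "control": [1.00, 0.95, 1.05],
    "Sp1_low": [1.60, 1.75, 1.68],
    "Sp1_high": [2.40, 2.55, 2.35],
}

pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    p_adj = min(p * len(pairs), 1.0)  # Bonferroni: multiply by the number of comparisons
    flag = "**" if p_adj < 0.01 else "*" if p_adj < 0.05 else "ns"
    print(f"{a} vs {b}: t = {t:.2f}, adjusted p = {p_adj:.4f} ({flag})")
```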
5,738.6
2016-01-01T00:00:00.000
[ "Biology" ]
The Labor Productivity Gap between the Agricultural and Nonagricultural Sectors, and Poverty and Inequality Reduction in Asia The objective of this paper is to examine how agricultural and nonagricultural labor productivities have grown over time and whether the growth pattern affected poverty in low- and middle-income economies in Asia. We first examine whether labor productivities in the agricultural and nonagricultural sectors have converged, finding evidence that they did not, as the latter has grown faster. We then confirm that both agricultural and nonagricultural labor productivities have converged across economies and that the convergence effect is stronger for the nonagricultural sector. We have also observed that, despite the relatively slower growth in agricultural labor productivity, the agricultural sector played an important role in promoting nonagricultural labor productivity and thus in nonagricultural growth. Finally, we have found some evidence that the labor productivity gap reduces rural and urban poverty, as well as national-level inequality. I. Introduction The objective of this paper is to examine (i) how labor productivities in the agricultural and nonagricultural sectors in Asia have grown over time, and (ii) whether the growth pattern, proxied by the labor productivity gap between the two sectors, affected poverty and inequality in low- and middle-income economies in Asia. We focus on these economies because the interaction between the agricultural and nonagricultural sectors has become increasingly important as these economies have experienced structural transformation. We will first investigate the convergence of labor productivity in the agricultural and nonagricultural sectors with a focus on both intersectoral convergence and within-sector convergence across different economies over time. The issue of intersectoral convergence versus divergence is reviewed in the literature, which investigates allocations or misallocations of inputs into the agricultural and nonagricultural sectors. For instance, using microlevel data, Gollin, Lagakos, and Waugh (2013) found that a large gap between the two sectors persists, suggesting the misallocation of labor at the macro level. However, the extent of the gap and how it has changed over time differs across economies depending on their initial capital and labor endowments, the stage of economic development, and the nature of their public policies. As the degree of the misallocation of resources in dual-economy settings explains variations in national income and productivity growth (Vollrath 2009), it is important to examine how the gap has changed over time. To investigate whether the growth pattern impacts poverty and inequality in low- and middle-income economies in Asia, we draw upon the large empirical literature to test the convergence hypothesis in line with the neoclassical growth model: that is, whether poorer economies or regions grow faster than richer economies or regions (Barro 1991, Barro and Sala-i-Martin 1992, Barro et al. 1991). For instance, Barro et al. (1991) and Barro and Sala-i-Martin (1992) used state-level data on personal income for 48 states in the United States (US) during 1940-1963 and found clear evidence of convergence.
As for convergence across economies, while the earlier literature suggests that there was convergence across a wide range of economies (Barro [1991] observes 98 economies) and that convergence was also observed for productivity growth (Baumol, Nelson, and Wolff 1994), it has been debated whether the convergence occurred for a subset of economies or for different specifications (Levine and Renelt 1992, Quah 1996). The results partly depend on the extent to which the economies are integrated, for instance, through international trade (Ben-David 1996). Given that East and South Asian economies are becoming more integrated, an interesting question is whether productivity converged among Asian economies. We will also investigate whether the gap is associated with poverty and inequality reduction in rural and urban areas. While the literature has focused on the poverty-reducing effect of agricultural sector income or productivity growth, little is known about whether the gap between agricultural and nonagricultural productivity influences poverty or inequality. 1 A point of departure is that we treat the labor productivity gap as endogenous by using the fixed-effects instrumental-variable (FE-IV) model, where the cropping pattern is used as an instrument. Finally, we will discuss whether the labor productivity gap will dynamically affect the labor allocation between rural and urban sectors. Our paper draws upon the following three strands of the literature. The first is the literature on the empirical investigations of the gap between agricultural and nonagricultural productivities in the dual-economy model, consisting of the traditional and modern sectors. A seminal work in this strand of the literature is Gollin, Lagakos, and Waugh (2013), who used both national accounts and household data to show that value added per worker is much higher in the nonagricultural than the agricultural sector in developing economies. They call this the "agricultural productivity gap." As Gollin, Lagakos, and Waugh (2013, p. 942) note, the investigation of the agricultural productivity gap has been viewed as an important topic in the early literature on development economics as it can offer valuable insights into the analysis of economic growth and inequality in developing economies (e.g., Lewis 1955, Kuznets 1971). In recent years, the agricultural and nonagricultural sectors have become more integrated within economies through structural transformation, while the agricultural (or nonagricultural) sector of one economy has become more closely linked with the same sector of other economies under globalization. Given the nature of the data that Gollin, Lagakos, and Waugh (2013) used, their analysis is essentially static. However, it is important to analyze the gap in a dynamic context. Drawing upon the panel data of Asian economies, the present study focuses on how agricultural and nonagricultural labor productivities have grown, with their interactions taken into account. It also estimates the effect of the gap on poverty and inequality. Second, our study is closely related to the large body of the literature on the role of the agricultural sector in development and the reduction of poverty and inequality (e.g., Christiaensen, Demery, and Kuhl 2011). A point of departure of the recent literature (Christiaensen, Demery, and Kuhl 2011; Imai, Cheng, and Gaiha 2017) is that the role of agriculture is captured by dynamic interactions between the agricultural and nonagricultural sectors.
The present study extends these arguments and focuses on the effect of the labor productivity gap between the two sectors on poverty and inequality. Third, the present study is also closely related to the literature on structural transformation, in particular rural transformation (or agricultural transformation), and its effect on development and/or poverty in low- and middle-income economies in Asia and elsewhere (e.g., Reardon and Timmer 2014, Dawe 2015, Barrett et al. 2017). As the structural transformation implies a closer and more intricate relationship between the agricultural and nonagricultural sectors, our empirical investigation of the gap between agricultural and nonagricultural productivity can provide useful insight into the literature on structural transformation. The rest of the paper is organized as follows. In the next section, we briefly summarize the theoretical foundations underlying our empirical investigation. In section III, we examine the convergence of labor productivity in the agricultural and nonagricultural sectors. Section IV estimates the effects of the labor productivity gap on poverty, inequality, and the sectoral population share. The final section offers our concluding observations. II. Theoretical Foundations Our empirical investigation of the gap between agricultural and nonagricultural labor productivity is associated with a large body of theoretical literature on the dual-economy model, which originated from Arthur Lewis (1954) and was later developed by many authors (e.g., Dixit 1973, Mundlak 2000). More recently, Vollrath (2009) constructs a dual-economy model in which the productivity differences between the two sectors arise endogenously. In Vollrath's model, agricultural production is a constant returns to scale function of labor effort and land (Vollrath 2009, p. 8). Total agricultural production is denoted as Y^A_t = A^A_t F(R, E^A_t), where Y^A_t is agricultural production at time t (and superscript A denotes the agricultural sector), A^A_t is total factor productivity of the agricultural sector, R is the total amount of land (or resources in general) in the agricultural sector, and E^A_t is the total labor effort: that is, E^A_t = s_t a_t L_t. F is a well-behaved function with constant returns to scale. Net income for a representative farmer in the agricultural sector is the value of farm output net of the rental cost of land, p^A_t A^A_t F(r_t, s_t) − ρ_t r_t, where r_t is the land employed by the farmer at time t. Each individual has a unit of time, with the share s_t ∈ (0, 1) allocated to productive work in the agricultural sector and the remaining 1 − s_t spent in nonfarm activity at time t. ρ_t is the rental price of land, and p^A_t is the price of agricultural goods relative to manufactured goods. The manufacturing (nonagricultural) sector is assumed to be perfectly competitive so that labor effort is paid its marginal product (Vollrath 2009, p. 9). The wage rate per unit of effort in the nonagricultural sector is specified as w_t = A^M_t w(a_t), where the wage rate depends on the productivity of the nonagricultural sector, A^M_t, as well as on a well-behaved function w of the number of people in agriculture, a_t (w′ > 0 and w′′ > 0), given the assumption that the nonagricultural sector is competitive, while the agricultural sector is not. These properties imply that the nonagricultural wage increases as the number of people in the nonagricultural sector (1 − a_t) decreases. Net income for nonagricultural workers is simply their labor effort valued at this wage rate. Under these settings, Vollrath (2009, p.
11) showed that in equilibrium a dual economy exists where nonagricultural workers allocate more time to productive work than agricultural workers, and the marginal product of a worker is higher for nonagricultural (manufacturing) workers. 2 As a result, gross domestic product (GDP) per capita can be increased by a transfer of labor from the agricultural sector to the nonagricultural sector. Vollrath's model (2009, p. 13) also implies that sustained increases in agricultural productivity will help industrialize the economy, but this will be accompanied by a growing disparity in productivity between sectors. In contrast, increases in nonagricultural productivity will not only industrialize the economy but also induce agricultural workers to work more efficiently. 3 This model prediction is intuitively valid given close interactions between the two sectors through migration, particularly in emerging economies such as India, the People's Republic of China, and Viet Nam. The above model would predict, in our empirical context, that the gap in labor productivity between the agricultural and nonagricultural sectors expands as the economy grows. As the gap in labor productivity between the two sectors implies an improvement in the relative productivity of the nonagricultural sector, it is likely to reduce poverty. So, we will test the hypotheses directly related to Vollrath (2009) that (i) the labor productivity gap between the agricultural and nonagricultural sectors expands over time, and (ii) the labor productivity gap between the two sectors reduces poverty. As we will discuss later, our empirical results are broadly consistent with Vollrath (2009). Vollrath's (2009) model also implies that agricultural productivity and nonagricultural productivity interact in a complicated way. However, the model does not explicitly consider the interactions with factors outside the economy. Assuming the concavity of the production function in both sectors, we will empirically investigate whether agricultural productivity will converge or not across Asian economies by taking account of the effect of the lagged nonagricultural productivity on agricultural productivity. The convergence of nonagricultural productivity will also be examined by incorporating the effect of agricultural productivity on nonagricultural productivity. This empirical model is grounded in the literature testing the convergence of economic growth (Barro 1991, Barro and Sala-i-Martin 1992). Vollrath (2009) predicts that in the long term the agricultural sector's productivity growth will exacerbate the inefficiencies of a dual economy and produce slower overall growth than will nonagricultural sector productivity improvements, and therefore the dual economy will disappear. This is consistent with empirical observations of developed Asian economies such as Japan and the Republic of Korea. While both of these economies improved their agricultural productivity in the late 20th century, the GDP share of the agricultural sector declined as they industrialized and eventually achieved higher overall productivity. In the meantime, the overall inequality of these economies remained relatively low and stable. 4 However, Vollrath (2009) has two limitations. First, the effect of the persistence of the dual economy on income distribution is not explicitly analyzed.
Second, focusing on the long-term effect, Vollrath's model may not fully capture the positive role of agriculture in economic growth and the reduction of poverty and inequality, which is important in most low- and middle-income economies in Asia such as India. For instance, Ravallion and Datt (1996) used 35 household surveys of India between 1951 and 1991 and found that the growth of the primary sector (mainly agriculture) and the tertiary sector (mainly services) reduced national, rural, and urban poverty significantly, while growth of the secondary sector (mainly manufacturing) increased national poverty. They also showed that rural growth is more important for poverty reduction than urban growth. It is evident that a separate theoretical model is necessary to analyze the effect of a dual economy on income distribution and poverty. Some authors have explored the relationship between growth and income distribution with a focus on the dual economy (e.g., Robinson 1976, Bourguignon 1990, Fields 1993, Bourguignon and Morrisson 1998). Bourguignon (1990) offers a theoretical ground for Kuznets' hypothesis in detail. The dual economy is modeled in a general equilibrium framework by taking account of the entire distribution, which generates a Lorenz curve rather than summary measures. Bourguignon (1990, p. 219) first derived a proposition that a "necessary and sufficient condition for growth to shift the Lorenz curve of the income distribution upward is that the share of the traditional sector in GDP increases with growth." That is, an increase of the share of the agricultural sector in the growth process tends to reduce inequality. However, as Bourguignon notes, it is unlikely that the agricultural sector share increases with growth. Bourguignon (1990, pp. 226-27) then derives the proposition that a "necessary condition for growth to be unambiguously egalitarian, despite a fall in the GDP share of the traditional sector, is that capital-labor substitution be inelastic in the modern sector," implying that "observing a falling GDP share of the traditional sector, together with elastic capital-labor substitution in the modern sector, is sufficient to rule out unambiguously egalitarian growth in a dual economy." That is, the model predicts that the disparity between the agricultural and nonagricultural sectors tends to increase inequality with elastic capital-labor substitution in the nonagricultural (modern) sector. Bourguignon's model motivates our empirical analysis of the relationship between the agricultural-nonagricultural labor productivity gap and inequality and poverty. III. Convergence of Labor Productivity in the Agricultural and Nonagricultural Sectors Drawing upon the theoretical discussion in the last section, this section will examine the relationship between agricultural labor productivity and nonagricultural labor productivity with a focus on whether (i) these two converge or diverge over time, (ii) agricultural labor productivity converges across different economies, and (iii) nonagricultural labor productivity converges across different economies. For (ii) and (iii), the intersectoral effects are also taken into account in one case. That is, for (ii), the effect of lagged nonagricultural labor productivity on agricultural labor productivity is considered; for (iii), the effect of lagged agricultural labor productivity on nonagricultural labor productivity is taken into account.
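Before turning to the measurement, it may help to make the variables concrete: sectoral labor productivity below is value added per worker, and the gap is the log difference between the nonagricultural and agricultural sectors. The following Python sketch uses purely hypothetical numbers, not data from the paper, to show the calculation.

```python
import numpy as np

# Hypothetical economy-year observation (values are illustrative only):
# value added (constant PPP dollars) and number of workers, by sector.
ag_value_added,  ag_workers  = 52_000_000_000, 40_000_000    # agricultural sector
nag_value_added, nag_workers = 310_000_000_000, 60_000_000   # nonagricultural sector

aglp  = ag_value_added / ag_workers     # agricultural labor productivity (VA per worker)
naglp = nag_value_added / nag_workers   # nonagricultural labor productivity

# Labor productivity gap measured as the log difference between the two sectors
gap = np.log(naglp) - np.log(aglp)

print(f"AGLP  = {aglp:,.0f} per worker")
print(f"NAGLP = {naglp:,.0f} per worker")
print(f"gap (log NAGLP - log AGLP) = {gap:.2f}")
```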
For simplicity, the labor productivity of the agricultural (nonagricultural) sector is defined as value added in the agricultural (nonagricultural) sector divided by the number of workers in the agricultural (nonagricultural) sector. Table 1 compares labor productivity in these sectors by economy and region, and for Asia as a whole. The comparison is also made for the entire period as well as before and after the year 2000. Table 1 reports labor productivity growth as well as the labor productivity gap as defined by the gap between the logarithm of agricultural value added per worker and the logarithm of value added per worker in the nonagricultural sector. Consistent with earlier literature (e.g., Mitra 2001, Bernard and Jones 1996), nonagricultural labor productivity is higher in all cases except the Federated States of Micronesia before 2000. Also, the labor productivity gap is higher after 2000 in all cases except Fiji. Our results strongly confirm the labor productivity divergence between the two sectors. That is, nonagricultural labor productivity was higher than agricultural labor productivity to start with, and the gap has expanded over time. However, there is a great degree of heterogeneity in terms of the speed of divergence. For instance, in a few economies (e.g., Indonesia and the Federated States of Micronesia), the gap has only moderately increased, but in other economies (e.g., Bhutan, India, and the People's Republic of China), the gap dramatically increased after 2000. It is thus safe to conclude that there is no evidence of labor productivity convergence between the agricultural and nonagricultural sectors. This is due to the fact that while agricultural labor productivity has grown substantially since 2000, nonagricultural labor productivity has grown even faster in many economies. Figures 1 and 2 confirm these results graphically. Figure 1 plots labor productivity in the agricultural and nonagricultural sectors in South Asian economies over time. The productivity gap was initially small in many economies (in the 1960s and 1970s), but it has expanded over the years. Figure 2 indicates that the above results are broadly similar for East and Southeast Asian economies. If we aggregate these data, the divergence of labor productivity between the agricultural and nonagricultural sectors can be confirmed for all of Asia. Next, we will examine whether agricultural labor productivity (or nonagricultural labor productivity) has converged across different economies based on the following simple static model (FE model) and dynamic panel model (system generalized method of moments). The idea is similar to Ghosh (2006), who examined the convergence of agricultural productivity among Indian states during 1960-2001. He found that there was significant divergence in labor productivity, particularly after the early 1990s, while there was no significant convergence or divergence in land productivity and per capita agricultural output. The static FE model for agricultural labor productivity growth is specified as d log AGLP_it = β_0 + β_1 log AGLP_it−1 + β_2 T + β_3' X_it + β_4 d log NAGLP_it−1 + μ_i + ε_it (1), where d log AGLP_it stands for the annual agricultural labor productivity growth at time t for economy i, and log AGLP_it−1 is the level of agricultural productivity one period earlier in order to capture the convergence effect, following the empirical literature to test the Solow growth model. Our main hypothesis for convergence is to test whether β_1 is negative. T is the linear time trend.
X_it is a vector of control variables, such as the logarithm of schooling years, the logarithm of the share of the mining sector in GDP (in order to capture the economy's resource dependency), and the lagged level of inequality (based on the Gini coefficient). The selection of explanatory variables draws upon the recent literature, which investigated the interactions between agricultural growth and nonagricultural growth (Christiaensen, Demery, and Kuhl 2011; Imai, Cheng, and Gaiha 2017). The average years of total schooling is based on the Barro-Lee data, which has been commonly used in the empirical macroeconomics literature as it is a broad measure of the human capital stock of the economy. 5 It is assumed that as the economy's educational attainment improves, agricultural or nonagricultural labor productivity improves. The GDP share of the mining sector captures the extent to which the economy relies on natural resources, which may undermine sectoral labor productivity. The degree of inequality influences sectoral labor productivity in various ways. For instance, if there exists a threshold (based on the nutritional requirement) below which workers cannot work efficiently in the labor market, a high level of inequality may undermine either agricultural or nonagricultural labor productivity. d log NAGLP_it−1 is the lagged annual nonagricultural productivity growth to capture the transmission effect of labor productivity growth in the nonagricultural sector. This draws upon Vollrath's (2009) model, which showed that nonagricultural labor productivity enhances agricultural labor productivity over time in a dual-economy setting. μ_i is the economy's unobservable fixed effect (e.g., cultural or institutional factors). ε_it is an error term. We estimate this model with and without control variables, or the nonagricultural labor productivity growth term, while the results are robust to inclusion (exclusion) of a few other explanatory variables. As an extension, equation (1) has been estimated using the dynamic panel model (system generalized method of moments) drawing upon the Blundell and Bond (1998) robust estimator. Here, d denotes the first difference. The lagged dependent variable captures the persistent effect of agricultural labor productivity growth. Control variables have been dropped as they are statistically insignificant. Exactly the same models can be estimated for nonagricultural labor productivity growth by static and dynamic panel models as in equations (7) and (8). The same models have been applied to subsamples for South Asia and for East and Southeast Asia. In Table 2, the above models are estimated by using the 5-year average data. Here, the presence of a convergence effect can be tested by checking whether the coefficient on the lagged agricultural labor productivity (agricultural value added per worker [t − 1]) is negative and statistically significant in Cases 1-4, and whether the coefficient on the lagged nonagricultural labor productivity (nonagricultural value added per worker [t − 1]) is negative and statistically significant in Cases 5-8. The result of a positive effect of agricultural productivity on nonagricultural productivity (Cases 1-4) is important as this is consistent with the prediction of Vollrath's (2009) model that there is diffusion from the agricultural sector.
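The static fixed-effects convergence regression in equation (1) can be estimated by within-demeaning followed by OLS. The sketch below is an illustration of the estimator on simulated data, not the authors' code; most control variables are omitted and all variable names are hypothetical.

```python
# Minimal fixed-effects estimation of equation (1) via the within transformation.
# The panel is randomly generated for illustration; beta_1 < 0 indicates convergence.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
economies, periods = 20, 10
df = pd.DataFrame({
    "economy": np.repeat(np.arange(economies), periods),
    "T": np.tile(np.arange(periods), economies),
    "log_aglp_lag": rng.normal(8.0, 1.0, economies * periods),
    "d_log_naglp_lag": rng.normal(0.02, 0.01, economies * periods),
})
# Simulated growth with a negative coefficient on the lagged level (convergence built in)
df["d_log_aglp"] = (-0.05 * df["log_aglp_lag"] + 0.3 * df["d_log_naglp_lag"]
                    + 0.001 * df["T"] + rng.normal(0, 0.01, len(df)))

# Within transformation: subtract economy-specific means to absorb the fixed effects mu_i
cols = ["d_log_aglp", "log_aglp_lag", "d_log_naglp_lag", "T"]
demeaned = df[cols] - df.groupby("economy")[cols].transform("mean")

y = demeaned["d_log_aglp"]
X = sm.add_constant(demeaned[["log_aglp_lag", "d_log_naglp_lag", "T"]])
fit = sm.OLS(y, X).fit()
print(fit.params)   # a negative coefficient on log_aglp_lag indicates convergence
```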
This is important in terms of the literature on structural transformation in Asia (Reardon and Timmer 2014), which suggests that the transformation of the agricultural sector (e.g., commercialization and product diversification) is becoming closely linked to changes in dietary patterns; supply chain and retail revolution; and integrated labor, land, and credit markets. Here, the whole process of structural transformation implies a positive diffusion effect of agricultural labor productivity on nonagricultural labor productivity. However, contrary to Vollrath's prediction, a positive effect of nonagricultural labor productivity on agricultural labor productivity was not observed, as many Asian economies were primarily dependent on the agricultural sector during our data period. In Table 2, we confirm that labor productivity converges in both the agricultural and nonagricultural sectors, and the convergence effect is significant in all the cases except Case 2. This implies "a catching-up effect" in which the economies with relatively low agricultural labor productivity tend to catch up with those having relatively high agricultural labor productivity. The catching-up effect is also found for nonagricultural labor productivity. We have also found that lagged nonagricultural labor productivity growth deters agricultural labor productivity growth (Cases 3 and 4). This is not consistent with the theoretical model of Vollrath (2009), in which an improvement of nonagricultural productivity induces agricultural workers to work more efficiently. However, the result is reversed when we use the annual panel data in which nonagricultural labor productivity is lagged by 5 years. Here, lagged nonagricultural labor productivity growth is found to promote agricultural labor productivity growth as predicted by the theoretical model. 6 On the other hand, we have found, based on the 5-year average panel, that lagged agricultural labor productivity growth promotes nonagricultural labor productivity growth (Cases 5, 7, and 8). In Case 8, the lagged agricultural productivity growth is treated as an endogenous variable. Other covariates are mostly statistically insignificant, but a large lagged inequality increases nonagricultural labor productivity growth in Case 7. We have estimated the same models using the 5-year average data only for South Asia. A statistically significant convergence effect is found in the case of agricultural labor productivity growth. For the cross-sectoral effects, lagged agricultural labor productivity growth is found to promote nonagricultural labor productivity growth. For South Asia, a higher level of inequality tends to reduce overall agricultural labor productivity growth with some lag. Given that inequality can dampen the productivity of the disadvantaged group of agricultural workers or poor smallholders, this is a plausible result. 7 When we replicate the same regressions for East and Southeast Asia, we find that convergence effects are generally found to be significant. For the cross-sectoral effect, lagged agricultural labor productivity growth positively affects nonagricultural labor productivity growth. 8 6 The results based on the annual panel will be provided on request.
7 For South Asian economies, the Gini coefficient is positively correlated with the agricultural commercialization index based on the extent to which an agricultural product is processed (Imai, Gaiha, and Bresciani 2016); the coefficient of correlation is 0.067. For East and Southeast Asian economies, the correlation is negative with a coefficient of -0.4. This could explain the negative correlation between inequality and agricultural labor productivity for South Asia, though the causality will have to be examined carefully in future studies. 8 The disaggregated results will be provided on request. IV. Effects of the Labor Productivity Gap between the Agricultural and Nonagricultural Sectors on Poverty, Inequality, and the Sectoral Population Share We have so far examined the pattern of (i) the convergence of labor productivity between the agricultural and nonagricultural sectors, and (ii) the convergence of agricultural or nonagricultural productivity across different economies. Overall, agricultural labor productivity growth has promoted nonagricultural productivity growth and the sectoral gap has widened, while the between-economy disparity of the sectoral labor productivity has narrowed. These findings are broadly consistent with the theoretical model of Vollrath (2009). An interesting empirical question is how this process will dynamically affect poverty and inequality as well as labor allocation across different sectors over time. As we discussed in section II, the theoretical model implies that an increase in the sectoral gap tends to make growth less egalitarian; that is, inequality tends to increase when both sectors grow (Bourguignon 1990). However, it is not straightforward to answer the question because of the difficulty in disentangling the complex causal links from the labor productivity gap between the agricultural and nonagricultural sectors to poverty (or inequality or the sectoral population share). For instance, an increase in the labor productivity gap may imply a divergence: that is, a change toward higher nonagricultural labor productivity (reflecting technological development) and/or lower or more stagnant agricultural productivity. On the other hand, a reduction in the gap may imply a change toward convergence due to stagnant nonagricultural labor productivity and/or an increase in agricultural labor productivity. However, while a larger gap may affect poverty or inequality, higher poverty rates or inequality might also influence the gap. For instance, poor people in rural areas cannot undertake profitable agricultural investments that require a certain amount of physical and human capital (e.g., machinery or high-yielding crops), which hinders the growth of labor productivity in agricultural areas. Thus, there is a need for instrumenting the labor productivity gap because it may be endogenous. We have tackled the endogeneity by instrumenting the labor productivity gap by (i) the lagged agricultural product diversity index (Imai, Gaiha, and Bresciani 2016) and (ii) the lagged logarithm of the production share of the mining sector in GDP. 9 The first instrument is used as a proxy for agricultural transformation by Imai, Gaiha, and Bresciani (2016), and is supposed to affect the labor productivity gap, mainly by influencing agricultural labor productivity. 9 This draws upon Remans et al. (2014), who use an index called the Shannon Entropy Diversity Metric to capture production diversity at the country level using FAOSTAT.
It is defined as H = −Σ_{i=1}^{R} p_i ln p_i, where R is the number of agricultural products and p_i is the share of production for item i, available from FAOSTAT. The production share, p_i, is defined in terms of the monetary value at a local price for each product, i. If the country produces more agricultural products, including processed and unprocessed crops, and the monetary value of all products is more evenly divided among different items, the diversity index, H, takes a larger value. On the contrary, if the country produces a smaller number of agricultural products and the monetary value of one or two specific products is large, H is smaller. However, the change of the production pattern itself cannot directly influence poverty or inequality. We cannot deny the possibility that the process of specialization could increase poverty, for instance, as there may be less demand for manual labor; but we can reasonably assume that poverty can change through adjustments in farm production or income (per worker). The second instrument could also reduce the labor productivity gap because dependence on the mining sector could deter the overall effort for technological progress in the industrial sector, without directly affecting poverty. The reliance on the mining sector could affect poverty directly (e.g., the impoverishment of manual workers in the mining sector), but we assume that this does not have a direct impact on poverty, particularly in rural areas. We assume that the productivity or income effect is larger than the direct effect on poverty, while we admit limitations in using the second instrument. 10 We have applied the IV model in the panel framework using the FE-IV model, whereby the unobservable country effect is taken into account. Because we focus on the relatively longer-term effect, we use only the 5-year average data. In the first stage, we estimate the determinants of the labor productivity gap between the two sectors: Gap_it = α_0 + α_1 Gap_it−1 + α_2 d log AGLP_it−1 + α_3 d log NAGLP_it−1 + α_4 S_it−1 + α_5 Mining_it−2 + α_6 Diversity_it−2 + μ_i + ε_it. Here, t stands for each 5-year period: t = 1 for 1960-1964, t = 2 for 1965-1969, . . . , t = 11 for 2010-2014. Gap_it−1 is the first lag of the normalized difference between nonagricultural value added per worker and agricultural value added per worker (at purchasing power parity [PPP] in US dollars, divided by 1,000). d log AGLP_it−1 is the lag of the first difference in the log of agricultural value added per worker: that is, the agricultural labor productivity growth during the preceding period. Likewise, d log NAGLP_it−1 is the nonagricultural labor productivity growth during the preceding period. S_it−1 is the lag of schooling years. μ_i is the unobservable country fixed effect and ε_it is an error term (independent and identically distributed). Instruments for the labor productivity gap between the agricultural and nonagricultural sectors are the second lag of the production share of the mining sector (Mining_it−2) and the second lag of the agricultural product diversity index (Diversity_it−2).
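The Shannon Entropy Diversity Metric used as the first instrument can be computed directly from production shares. The short Python sketch below uses hypothetical production values rather than FAOSTAT data.

```python
import math

def shannon_diversity(values):
    """Shannon Entropy Diversity Metric H = -sum_i p_i * ln(p_i),
    where p_i is each product's share of total production value."""
    total = sum(values)
    shares = [v / total for v in values if v > 0]   # items with zero value drop out
    return -sum(p * math.log(p) for p in shares)

# Hypothetical production values (monetary value at local prices) for four products
diversified = [25, 25, 25, 25]    # value spread evenly across products -> larger H
concentrated = [85, 5, 5, 5]      # dominated by one product            -> smaller H

print(round(shannon_diversity(diversified), 3))   # ~1.386 (= ln 4)
print(round(shannon_diversity(concentrated), 3))  # ~0.588
```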
These instruments, despite the limitations, are justified on the following grounds. Since the mining sector share is a variable closely associated with the (broadly predetermined) factor endowment of the economy, it will have a direct effect on the economy's labor allocations across different sectors, including the rural agricultural sector, rural nonagricultural sector (nonmining or mining), and urban nonagricultural sector (nonmining or mining). Depending on the degree of dependence on mining resources, the allocation of labor across sectors and worker efforts in each sector are influenced directly. It is surmised here that the mining sector share first influences sectoral labor productivity, rather than poverty. While the mining sector share may influence poverty directly (e.g., through the impoverishment of mining workers), we assume that it mainly influences the relative sectoral productivity. The second instrument, the product diversity index, affects agricultural labor productivity directly as more diversified production implies the economy's adoption of profitable and marketable agricultural products (e.g., vegetables, fruits, meat). The index also influences nonagricultural labor productivity as the introduction of these products influences the productivity of the food processing sector. However, it is unlikely that the product diversity index directly affects poverty or inequality. These instruments, despite the limitations, have been validated by specification tests. In the second stage, poverty is estimated as a function of the (instrumented) labor productivity gap as well as other determinants. Equations (9) and (10) are estimated using the FE-IV model. Poverty is defined in various ways, including (i) the national poverty headcount, or poverty gap, based on the international poverty line of $1.9 (extreme poverty) or $3.1 (moderate poverty) per day at PPP in 2011 (World Bank 2016); (ii) the rural poverty headcount, poverty gap, or poverty gap squared, based on $1.25 (extreme poverty) or $2 (moderate poverty) at PPP in 2005; and (iii) the same poverty indexes as in (ii) for urban areas, based on urban household data. 11 In one case, we have replaced poverty by the Gini coefficient evaluated at the national or subnational level (for rural and urban areas separately). Finally, given the data limitations, we have derived the population share of the rural sector, nonagricultural sector, and urban sector, and used each share as a dependent variable in the second-stage regression (Imai, Gaiha, and Garbero 2017). This aims to examine how the labor productivity gap will influence the labor allocation in the middle to long run. In all cases, the endogeneity of the labor productivity gap is instrumented. First, we have estimated national poverty in the second stage (the upper left panel of Table 3). 12 In the first stage, one of the instruments, the agricultural product diversity index in the preceding period, reduces the labor productivity gap. That is, if the structural transformation in the rural sector progresses and agricultural production is more diversified, then the gap will be reduced, presumably because agricultural sector productivity will catch up with nonagricultural productivity. However, the first lagged agricultural productivity growth increases the gap. This is counterintuitive, but if agricultural productivity growth promotes nonagricultural growth without a lag, the period with faster agricultural productivity growth may even match the period with faster nonagricultural growth.
The coefficient estimate of nonagricultural labor productivity growth is negative, but not statistically significant. 13 Education tends to increase the gap. The question arising from the analysis in the last section is why the labor productivity gap has grown in some economies and not in other economies. It is not easy to provide a definite answer, but our results imply that the agricultural transformation reduces the gap and that improved human capital widens the gap. Effects of the Labor Productivity Gap between the Agricultural and Nonagricultural Sectors on Poverty and Inequality In the second stage, we do not find any evidence that the gap influences poverty at the national level, with the coefficient estimate being negative (except in the second column) and statistically insignificant (the upper left panel of Table 3). 14 We find that the coefficient on the number of schooling years is negative and statistically significant. The F-statistic of excluded instruments is 16.34, which is above the threshold of 10, and the Sargan overidentification test of all instruments is not significant (p-value of 0.331), validating the IV estimation. Next, we examine whether the labor productivity gap has affected poverty. Because the sample is reduced, the results from the first stage have changed slightly. For instance, nonagricultural productivity growth is now negative and significant, while one of the instruments, the product diversity index, is now positive and significant. So, with a smaller sample, the progress of the agricultural transformation tends to increase the labor productivity gap. The reason is not clear, but in this case the agricultural transformation may have an instant impact on improving both agricultural and nonagricultural labor productivity, with the magnitude of the latter being comparatively larger. In the second stage, the increase of the labor productivity gap tends to reduce poverty in the rural regions regardless of the choice of poverty thresholds and for all different measures of poverty (e.g., headcount, poverty gap, and poverty gap squared, except the third column for the extreme poverty gap squared), as shown in the upper right panel of Table 3. That is, as nonagricultural labor productivity grows faster than agricultural labor productivity, rural poverty significantly declines in every dimension, including the share of the poor, the depth of rural poverty, and inequality among the rural poor. 13 The correlation between the labor productivity gap and nonagricultural labor productivity growth is positive with a correlation coefficient of 0.034. The correlation coefficient between the gap and agricultural labor productivity growth is 0.036. Not surprisingly, the correlation coefficient between the agricultural and nonagricultural sector growth terms is high at 0.614. The highest variance inflation factor of the first-stage regression is 2.44, which is below the threshold of 10 and which would justify the inclusion of labor productivity growth in both sectors at the same time. 14 We have also estimated the second-stage regressions by using the FE model without using IV. In this case, the sample size is larger, but we have found that the lagged labor productivity gap reduces significantly both extreme and moderate poverty, for both the headcount ratio and poverty gap.
This result may not be consistent with the theoretical prediction by Bourguignon (1990) as the model suggests that the gap between the agricultural and nonagricultural sectors tends to increase inequality given elastic capital-labor substitution assumed in the modern sector. However, Vollrath's (2009) model implies that as nonagricultural labor productivity increases, the efficiency of workers in the agricultural sector improves. If this helps the rural poor escape from poverty, we expect that nonagricultural labor productivity growth has the effect of reducing rural poverty. Here, the test of excluded instruments (F-statistic) is 9.55, which is below the threshold of 10, partly because of the small sample size, and so the results need to be interpreted with caution. The Sargan statistic is not significant, justifying the use of IV. 15 We have also estimated urban poverty in the second stage of the FE-IV model. The results are shown in the lower left panel of Table 3. We have found that the size of the poverty-reducing effect is much larger for urban poverty than rural poverty. That is, as the gap between nonagricultural and agricultural labor productivity expands, both urban poverty and rural poverty decrease, but urban poverty tends to decline at a much faster rate. However, the results will have to be interpreted with caution, particularly in cases where the F-statistic for excluded instruments in the first stage is low (columns 2 and 3). Finally, we have estimated the effect of the lagged labor productivity gap on the Gini coefficient at the national, rural, and urban levels. As the sample sizes differ, the result in the first column cannot be compared with the results in the second and third columns. However, after controlling for the endogeneity of the labor productivity gap, we have found evidence that the gap significantly reduces the national Gini coefficient (the lower right panel of Table 3). In this case, the first-stage F-statistic is larger than 10. The result is robust if we do not instrument the labor productivity gap or if we use the smaller sample for which disaggregated inequality data are available. Using the disaggregated data, we have also estimated the effects of the lagged labor productivity gap on the sectoral population share, drawing upon Imai, Gaiha, and Garbero (2017). The results will have to be interpreted with caution, specifically in the first and the second columns (due to the small sample size) where the specification tests for IV do not validate the specifications. However, we have found some evidence that the labor productivity gap reduces the rural population share and increases the share of the rural nonagricultural sector. When we use a larger sample size, we have found that the lagged productivity gap increases the population share of the urban sector significantly. These results are broadly consistent with the theoretical model of Vollrath (2009) where increases in nonagricultural productivity will help industrialize the economy and induce agricultural laborers to work more efficiently, while the share of the agricultural sector declines over time. If this process benefits much of the population in rural and urban areas, inequality is likely to decline over time. However, our result is not consistent with Bourguignon's (1990) model, which implies that the gap between the agricultural and nonagricultural sectors tends to increase inequality. 
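The FE-IV estimation used throughout this section amounts to two-stage least squares after removing economy fixed effects. The sketch below is an illustrative reconstruction on simulated data, with hypothetical variable names; it is not the paper's implementation and ignores details such as standard-error corrections and the specification tests reported above.

```python
# Illustrative FE-IV (within-demeaned 2SLS): the lagged labor productivity gap is treated
# as endogenous and instrumented by the twice-lagged mining share and product diversity
# index. All data below are simulated; this is a sketch of the estimator only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_econ, n_t = 25, 8
idx = pd.MultiIndex.from_product([range(n_econ), range(n_t)], names=["economy", "period"])
df = pd.DataFrame(index=idx).reset_index()

df["mining_lag2"] = rng.normal(0.05, 0.02, len(df))       # instrument 1
df["diversity_lag2"] = rng.normal(1.2, 0.3, len(df))      # instrument 2
df["schooling_lag"] = rng.normal(6.0, 1.5, len(df))       # exogenous control
# Endogenous regressor: labor productivity gap driven partly by the instruments
df["gap_lag"] = (0.8 * df["diversity_lag2"] - 2.0 * df["mining_lag2"]
                 + 0.1 * df["schooling_lag"] + rng.normal(0, 0.2, len(df)))
# Outcome: a poverty measure declining in the gap (true coefficient is -4.0)
df["poverty"] = (30 - 4.0 * df["gap_lag"] - 0.5 * df["schooling_lag"]
                 + rng.normal(0, 1, len(df)))

def demean(cols):
    # Within transformation: subtract economy means to absorb the fixed effects
    return df[cols] - df.groupby("economy")[cols].transform("mean")

y = demean(["poverty"]).to_numpy()
X = demean(["gap_lag", "schooling_lag"]).to_numpy()                          # endogenous + exogenous
Z = demean(["mining_lag2", "diversity_lag2", "schooling_lag"]).to_numpy()    # instruments + exogenous

# 2SLS: project X on Z (first stage), then regress y on the fitted values (second stage)
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]
print("2SLS estimates for [gap_lag, schooling_lag]:", beta.ravel())
```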
In sum, we have found that the increase in the lagged labor productivity gap, which is treated as endogenous, will reduce both urban and rural poverty as well as national-level inequality. In particular, there is robust evidence confirming that the labor productivity gap reduces urban poverty evaluated at the poverty threshold of $2 per day. V. Conclusions First, we have examined whether labor productivities in the agricultural and nonagricultural sectors have converged by using the 5-year average panel dataset. We have found robust evidence that nonagricultural labor productivity and agricultural labor productivity did not converge; the former has grown faster and the gap has increased significantly over time. We have also observed that within Asia (i) agricultural labor productivity has converged across economies, (ii) nonagricultural labor productivity has converged across economies, and (iii) the convergence effect is stronger for the nonagricultural sector. Agricultural labor productivity growth was found to promote nonagricultural productivity growth with some lag. That is, despite the slower growth in agricultural labor productivity, the agricultural sector played an important role in promoting nonagricultural labor productivity and thus in nonagricultural growth. As we used the 5-year average panel data, we can identify the middle- to long-term effects by controlling for short-term fluctuations. In the second part of the study, we examined whether the labor productivity gap between the agricultural and nonagricultural sectors reduced poverty and inequality and affected the sectoral population shares over time. While the results vary depending on the specifications, we have found some evidence that the labor productivity gap reduces both urban and rural poverty over time as well as national-level inequality. The gap also increases the share of the population in the urban sector. Our results provide the following policy implications. While improvement in agricultural labor productivity also brings about improvement in nonagricultural labor productivity, the latter has increased faster than the former over time, resulting in a gap between the two sectors. The widening gap was found to reduce poverty and inequality. These results are important in light of the literature on structural transformation in Asia (e.g., Reardon and Timmer 2014; Imai, Gaiha, and Bresciani 2016), which underscores diffusion from the agricultural sector. Our results suggest that as the agricultural sector experiences structural changes, it plays a central role in improving nonagricultural labor productivity and reducing poverty and inequality within an economy. Policy makers need to facilitate the process of structural transformation (e.g., commercialization and product diversification of agriculture; revolutions in supply chain and retail networks; and integration of labor, land, and credit markets) to improve agricultural labor productivity and reduce poverty and inequality.
9,826.6
2019-03-01T00:00:00.000
[ "Economics" ]
Changes in N6-Methyladenosine Modification Modulate Diabetic Cardiomyopathy by Reducing Myocardial Fibrosis and Myocyte Hypertrophy In this study, we aimed to systematically profile global RNA N6-methyladenosine (m6A) modification patterns in a mouse model of diabetic cardiomyopathy (DCM). Patterns of m6A in DCM and normal hearts were analyzed via m6A-specific methylated RNA immunoprecipitation followed by high-throughput sequencing (MeRIP-seq) and RNA sequencing (RNA-seq). m6A-related mRNAs were validated by quantitative real-time PCR analysis of input and m6A immunoprecipitated RNA samples from DCM and normal hearts. A total of 973 new m6A peaks were detected in DCM samples and 984 differentially methylated sites were selected for further study, including 295 hypermethylated and 689 hypomethylated m6A sites (fold change (FC) > 1.5, P < 0.05). Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analyses indicated that unique m6A-modified transcripts in DCM were closely linked to cardiac fibrosis, myocardial hypertrophy, and myocardial energy metabolism. Total m6A levels were higher in DCM, while levels of the fat mass and obesity-associated (FTO) protein were downregulated. Overexpression of FTO in DCM model mice improved cardiac function by reducing myocardial fibrosis and myocyte hypertrophy. Overall, m6A modification patterns were altered in DCM, and modification of epitranscriptomic processes, such as m6A, is a potentially interesting therapeutic approach. INTRODUCTION There are more than 450 million patients with diabetes worldwide, and by 2045 this number is predicted to increase to 693 million (Cho et al., 2018). Cardiovascular disease accounts for 50.3% of total deaths of patients with diabetes (Einarson et al., 2018). Diabetes not only increases the risk of heart failure, but also increases the mortality rate from heart failure by 2.5 times (Tan et al., 2020). Diabetic cardiomyopathy (DCM) is a metabolic cardiovascular disease resulting in decreased myocardial glucose consumption, modestly increased ketone metabolism, and significantly increased utilization of fatty acids (Ritchie and Abel, 2020; Tan et al., 2020). The main features of DCM are myocardial hypertrophy, cardiac fibrosis, coronary microvascular dysfunction, left ventricular enlargement, and weakened ventricular wall motion (Ritchie and Abel, 2020); however, the causal relationships among these complications are not clear. To date, therapies for DCM are limited and cannot prevent the eventual development of the disease. Therefore, additional treatment options are needed. Although multiple aspects of epigenetic regulation, from DNA modification to protein modification, have been extensively studied in DCM, the role of RNA modification in the regulation of gene expression is just beginning to be elucidated (Gluckman et al., 2009; Zhang et al., 2018). The most pervasive internal mRNA modification is m6A methylation, which affects RNA metabolism throughout its life cycle (Nachtergaele and He, 2018). The m6A methyltransferase complex consists of at least three components, METTL3, METTL14, and WTAP, and m6A is catalyzed by a core writer complex comprising the catalytic enzyme, METTL3, and its allosteric activator, METTL14 (Wang P. et al., 2016; Wang X. et al., 2016).
WTAP, a mammalian splicing factor, is an indispensable component of the m6A methyltransferase complex; it does not have methyltransferase activity, but can interact with METTL3 and METTL14 to influence cellular m6A deposition (Liu et al., 2014). In addition to this core complex, a writer complex comprising VIRMA, RBM15 or RBM15B, and ZC3H13 subunits has been identified (Patil et al., 2016; Wen et al., 2018; Yue et al., 2018). The m6A modification is dynamic and can be demethylated by FTO and ALKBH5 (Jia et al., 2011; Zheng et al., 2013; Mauer et al., 2017; Wei et al., 2018). FTO was the first m6A demethylase to be discovered (Jia et al., 2011) and it can demethylate internal m6A, cap m6Am, and tRNA m1A (Wei et al., 2018), whereas ALKBH5 specifically demethylates m6A by affecting RNA metabolism and mRNA export (Zheng et al., 2013). In addition, RNA-binding proteins that bind to m6A modification sites are regarded as m6A readers, which regulate the functions of m6A-modified RNAs through various mechanisms. YTH domain-containing proteins bind to RNA in an m6A-dependent manner (Zhu et al., 2014). YTHDC1 is predominantly expressed in the nucleus, where it regulates mRNA splicing (Roundtree and He, 2016). YTHDC2 is expressed in both the nucleus and the cytosol and promotes translation (Wojtas et al., 2017). The family of cytosolic YTHDF proteins includes YTHDF1, YTHDF2, and YTHDF3 (Du et al., 2016; Shi et al., 2017; Patil et al., 2018). YTHDF1 can facilitate the translation of m6A-modified mRNAs alongside translation initiation factors, YTHDF2 can cause degradation of m6A-containing RNAs, and YTHDF3 can facilitate both translation and degradation functions (Du et al., 2016; Shi et al., 2017; Patil et al., 2018). Besides the YTH domain-containing proteins, HNRNPA2B1, HNRNPC, HNRNPG, and IGF2BP1-3 have also been reported to preferentially bind to m6A-modified mRNAs (Alarcon et al., 2015a; Liu et al., 2015, 2017; Edupuganti et al., 2017; Huang et al., 2018). The development of methods for analysis of genome-wide m6A topology has allowed extensive study of m6A-dependent regulation of RNA fate and function (Alarcon et al., 2015b; Wang X. et al., 2015). Two independent studies reported m6A RNA methylomes in mammalian genomes for the first time using an m6A-specific methylated RNA immunoprecipitation approach, followed by high-throughput sequencing (MeRIP-seq) (Dominissini et al., 2012; Meyer et al., 2012). Emerging evidence indicates that m6A modification is associated with normal biological processes and with the initiation and progression of different types of heart disease (Dorn et al., 2019; Kmietczyk et al., 2019; Mathiyalagan et al., 2019; Mo et al., 2019; Song et al., 2019; Berulava et al., 2020; Gao et al., 2020; Kruger et al., 2020; Lin et al., 2020). Dysregulation of m6A is associated with cardiac homeostasis and diseases, such as cardiac hypertrophy, cardiac remodeling, myocardial infarction, and heart failure (Dorn et al., 2019; Mathiyalagan et al., 2019; Mo et al., 2019; Song et al., 2019; Berulava et al., 2020; Gao et al., 2020; Lin et al., 2020); however, the transcriptome-wide distribution of m6A in DCM remains largely unknown. In this study, we report the m6A methylation profiles of heart tissue samples from db/db mice, which are a well-established DCM model for diabetic complications, and from normal control mice (db/+), and demonstrate highly diverse m6A modification patterns in the two groups.
We show that abnormal m6A RNA modifications in DCM likely modulate cardiac fibrosis, myocardial hypertrophy, and myocardial energy metabolism. Our results provide evidence that m6A modification is closely associated with DCM pathogenesis, and will facilitate further investigations of the potential targeting of m6A modification in DCM therapy. Animals This study was approved by the Ethics Committee of the Capital Institute of Pediatrics with permit number DWLL2019003. All procedures performed in the study complied with the relevant ethical standards. Leptin receptor-deficient (db/db) mice and control mice (db/+) were purchased from Shanghai Model Organisms Center; their genetic background is C57BL/6J. All mice were used for experiments at 8-12 weeks old and were housed in cages kept at a constant 24 °C with a 12 h alternating light/dark cycle and free access to water and food. To construct a diabetic heart disease model (DCM), mice were continuously fed to 24 weeks of age, then euthanized, hearts collected in 1.5 ml RNase-free centrifuge tubes, immediately immersed in liquid nitrogen to prevent RNA degradation, and finally stored at −80 °C. Five pairs of db/db and db/+ heart samples were selected for RNA sequencing, and the remaining samples were saved for validation. Echocardiographic Assessment Echocardiographic evaluation was blinded and conducted on mice at 24 weeks of age using a Vevo 2100 imaging system, with two-dimensional guided M-mode images used to determine left ventricular size at the papillary muscle level in the parasternal views (short axis and long axis) and to calculate ejection fraction (EF) and fractional shortening (FS) using standard equations (EF = (EDV − ESV) × 100%/EDV; FS = (LVEDD − LVESD)/LVEDD × 100%). All measurements were averages from at least three beats. Histological Analysis Mouse hearts were fixed in 4% paraformaldehyde for 12 h. After dehydration in 5% sucrose, samples were embedded in paraffin and hematoxylin and eosin (HE) staining was performed on 5 µm thick sections. In addition, frozen heart tissue samples were cut into 7 µm sections, incubated with wheat germ agglutinin (1:100) in the dark for 1 h, and then washed three times with PBS. DAPI (1:1000) was used to stain cell nuclei (10 min), followed by three washes with PBS. Water-soluble anti-fade mounting medium was dripped onto samples, which were then covered with glass cover slips and observed by confocal microscopy (Leica SP8). The cross-sectional areas of cardiomyocytes were calculated using Image-Pro Plus 6.0 software. To assess cardiac fibrosis, heart sections were stained using the standard Masson's trichrome method. Overexpression of FTO Empty adeno-associated virus (AAV-EV) and adeno-associated virus expressing FTO (AAV-FTO) under the control of a heart-specific cTNT promoter with an EGFP tag were constructed by Hanbio Biotechnology Ltd (Shanghai, China). The virus titer was approximately 1 × 10^12 vg/ml. At 16 weeks, db/db mice were injected with AAV-EV or AAV-FTO virus via a tail vein (120 µl per mouse). m6A Dot Blot Assay TRIzol (Invitrogen) was used to extract total RNA from mouse hearts. For mRNA denaturation, samples were heated at 95 °C for 5 min, then immediately chilled on ice. Next, RNA samples were spotted on a positively charged nylon membrane (GE Healthcare) and cross-linked at 80 °C in a hybridizer.
Histological Analysis Mouse hearts were fixed in 4% paraformaldehyde for 12 h. After dehydration in 5% sucrose, samples were embedded in paraffin, and hematoxylin and eosin (HE) staining was performed on 5 µm thick sections. In addition, frozen heart tissue samples were cut into 7 µm sections, incubated with wheat germ agglutinin (1:100) in the dark for 1 h, and then washed three times with PBS. DAPI (1:1000) was used to stain cell nuclei (10 min), followed by three washes with PBS. Water-soluble anti-fade mounting medium was dripped onto the samples, which were then covered with glass coverslips and observed by confocal microscopy (Leica SP8). The cross-sectional areas of cardiomyocytes were calculated using Image-Pro Plus 6.0 software. To assess cardiac fibrosis, heart sections were stained using the standard Masson's trichrome method.
Overexpression of FTO Empty adeno-associated virus (AAV-EV) and adeno-associated virus expressing FTO (AAV-FTO) under the control of a heart-specific cTNT promoter with an EGFP tag were constructed by Hanbio Biotechnology Ltd (Shanghai, China). The virus titer was approximately 1 × 10 12 v.g./ml. At 16 weeks of age, db/db mice were injected with AAV-EV or AAV-FTO via a tail vein (120 µl per mouse).
m 6 A Dot Blot Assay TRIzol (Invitrogen) was used to extract total RNA from mouse hearts. For mRNA denaturation, samples were heated at 95 °C for 5 min, then immediately chilled on ice. Next, RNA samples were spotted onto a positively charged nylon membrane (GE Healthcare) and cross-linked in a hybridization oven at 80 °C. Uncrosslinked RNA was eluted with PBS containing 0.01% Tween 20 for 5 min. After blocking with 5% skim milk (in PBS containing 0.01% Tween 20) for 1 h, membranes were incubated with anti-m 6 A antibody (1:500 in PBS containing 0.01% Tween 20) at 4 °C for 12 h. Membranes were then incubated with horseradish peroxidase-conjugated anti-rabbit IgG secondary antibody with gentle agitation at room temperature for 1 h, washed four times with PBS for 10 min, and developed by chemiluminescence. Methylene blue staining was used to confirm that duplicate dots contained the same amount of total RNA.
MeRIP-Seq After five 24-week-old mice in each group were euthanized, total RNA samples were harvested from heart tissue specimens and quantified using a NanoDrop ND-1000 (Thermo Fisher Scientific, MA, United States). mRNA was then purified using the Arraystar Seq-Star TM poly(A) mRNA Isolation Kit and broken into fragments of approximately 100 nucleotides by incubation in fragmentation buffer [10 mM Zn 2+ and 10 mM Tris-HCl (pH 7.0)] at 94 °C for 5-7 min. RNA fragments containing m 6 A methylation sites were enriched by immunoprecipitation using anti-m 6 A antibody (Synaptic Systems, 202003). A KAPA Stranded mRNA-seq Kit (Illumina) was used to construct sequencing libraries from the post-enrichment m 6 A mRNA and the input mRNA, which were then subjected to 150 bp paired-end sequencing on the Illumina NovaSeq 6000 platform. FastQC (v0.11.7) was used for quality inspection of the raw sequence data, and raw reads were filtered using Trimmomatic (v0.32). Filtered high-quality reads were aligned to the Ensembl reference genome with HISAT2 (v2.1.0), and exomePeak (v2.13.2) was used to identify peaks in each sample and differentially methylated peaks between compared samples. Peaks were annotated according to Ensembl annotation information, peaks located in different regions [5' untranslated region (5'UTR), coding sequence (CDS), and 3' untranslated region (3'UTR)] of each transcript were counted in every sample, and the resulting data were used for motif analysis with MEME-ChIP software.
mRNA-Seq After five 24-week-old mice in each group were euthanized, total RNA samples were harvested from heart tissue specimens. RNA concentrations were determined using a NanoDrop ND-1000; total RNA samples were enriched for mRNA using oligo(dT) (removing rRNA) and libraries were constructed using a KAPA Stranded RNA-Seq Library Prep Kit (Illumina). Constructed libraries were quality-assessed using an Agilent 2100 Bioanalyzer and quantified by qPCR. Pooled libraries containing different samples were sequenced on the Illumina NovaSeq 6000 sequencer. The Solexa pipeline (Off-Line Base Caller software, version 1.8) was used for image processing and base calling. FastQC (v0.11.7) was applied to evaluate sequencing read quality after adapter removal, HISAT2 (v2.1.0) to align reads to the reference genome, and StringTie (v1.3.3) to estimate transcript abundance, with reference to official database annotation information. The Ballgown package in R was applied to calculate fragments per kilobase of transcript per million mapped reads (FPKM) at the gene and transcript levels, and to screen for genes differentially expressed between samples or groups.
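FPKM follows a standard normalization formula; as a minimal, hedged sketch (illustrative numbers only, not values from this dataset):

```python
def fpkm(fragments, transcript_length_bp, total_mapped_fragments):
    """FPKM = fragments * 1e9 / (transcript length in bp * total mapped fragments in the library)."""
    return fragments * 1e9 / (transcript_length_bp * total_mapped_fragments)

# Hypothetical example: 500 fragments on a 2,000 bp transcript in a library of 30 million mapped fragments.
print(round(fpkm(fragments=500, transcript_length_bp=2000, total_mapped_fragments=30_000_000), 2))  # 8.33
```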
MeRIP-qPCR Validation Five genes with differentially methylated sites according to MeRIP-seq were tested by reverse transcription quantitative PCR (RT-qPCR). After total RNA was harvested from heart tissue specimens, mRNA-specific enrichment and fragmentation were carried out using the Arraystar Seq-Star TM poly(A) mRNA Isolation Kit (AS-MB-006-01/02). The fragmented mRNA was then enriched with an m 6 A antibody [affinity-purified anti-m 6 A rabbit polyclonal antibody (Synaptic Systems, 202003)] or a control IgG antibody [Dynabeads TM M-280 Sheep Anti-Rabbit IgG (Invitrogen, 11203D)]. Next, the RNA bound by the m 6 A and IgG antibodies was eluted and reverse transcribed into cDNA with random primers. RT-qPCR was performed on the input control and m 6 A-IP-enriched samples using gene-specific primers (Table 1).
Statistical Analysis All data are from three or more independent experiments and are presented as mean ± standard deviation. Statistical analyses were conducted using GraphPad Prism 5.0 software. Comparisons of DCM and normal control (NC) samples were conducted using the paired Student's t-test. Differences among three or more groups were assessed by one-way analysis of variance (ANOVA). Differences with P < 0.05 were defined as statistically significant.
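These group comparisons can be reproduced with standard statistical libraries; a minimal sketch with made-up measurements (not data from this study):

```python
from scipy import stats

nc  = [52.1, 48.7, 50.3, 49.9, 51.5]   # hypothetical readings for control (db/+) mice
dcm = [38.4, 41.2, 36.9, 40.1, 39.3]   # hypothetical readings for db/db (DCM) mice

# Two-group comparison; the paper reports a paired Student's t-test,
# for which stats.ttest_rel(nc, dcm) would be the paired variant.
t_stat, p_ttest = stats.ttest_ind(nc, dcm)

# Three or more groups: one-way ANOVA.
third_group = [45.0, 44.2, 46.8, 43.9, 45.5]
f_stat, p_anova = stats.f_oneway(nc, dcm, third_group)

print(f"t-test P = {p_ttest:.3g}, ANOVA P = {p_anova:.3g}; significant if P < 0.05")
```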
Cardiac Dysfunction and Myocardial Fibrosis in DCM Are Linked to Changes in Global m 6 A Levels Leptin receptor-deficient db/db mice are well-established animal models of type 2 diabetes mellitus and diabetic cardiomyopathy (DCM). Cardiac hypertrophy and fibrosis are pervasive characteristics of DCM and are usually present in mice with DCM at the age of 24 weeks (Tan et al., 2020). In our study, db/db mice manifested obvious cardiac hypertrophy, with significantly enlarged hearts compared with db/+ mice (Figures 1A,B), and also had markedly elevated heart weight to tibia length ratios (n = 5, P < 0.001; Figure 1C). Further, db/db mice had clearly increased interstitial fibrosis (n = 5, P < 0.001; Figures 1D,E), as well as markedly elevated cardiomyocyte cross-sectional area (n = 5, P < 0.0001; Figures 1F,G). To provide further evidence of impaired cardiac function, we performed echocardiography in NC and DCM group mice at 24 weeks of age. The db/db mice showed decreased cardiac function at 24 weeks, with significantly reduced left ventricular ejection fraction (LVEF) and fractional shortening (LVFS) compared with NC mice (n = 5, P < 0.0001; Figures 1H,I,L). Furthermore, mice with DCM showed a significantly decreased E to A wave ratio (E/A) compared with the NC group (n = 5, P < 0.0001; Figures 1J,K), indicating that diastolic function of the heart was abnormal in DCM mice. Blood glucose (BG), triglyceride (TG), and total cholesterol (TC) were also significantly increased in db/db (DCM) mice at 16 weeks compared with db/+ mice (Supplementary Figure 1), indicating that db/db mice had abnormal blood glucose and lipid metabolism. Overall, these data demonstrate that we successfully established a mouse model of DCM. Next, to assess global m 6 A levels in db/db and db/+ mouse hearts, we performed dot blot analysis on heart samples from both groups and found relatively higher total m 6 A levels in db/db mice than in db/+ mice (Figures 1M,N; P < 0.05). Therefore, we conducted transcriptome-wide MeRIP-seq and RNA-Seq to generate an m 6 A methylation map of DCM.
Transcriptome-Wide MeRIP-Seq Demonstrates Differential m 6 A Modification Patterns in DCM Compared With NC Mouse Hearts Diabetic cardiomyopathy hearts had unique m 6 A modification patterns that differed from those of NC heart samples. We identified 4,968 m 6 A peaks, representing 3,704 gene transcripts, in the DCM group by model-based analysis using exomePeak (v2.13.2) (Figure 2A). In the NC group, 5,297 m 6 A peaks were identified, corresponding to 3,863 gene transcripts (Figure 2A). We detected 3,995 m 6 A peaks, associated with 3,230 transcripts, that were common to both groups. Relative to the NC group, the DCM hearts gained 973 peaks and lost 1,302 peaks. This finding indicates that global m 6 A modification patterns in the DCM group differed from those in the NC group (474 vs. 633; Figure 2B). Analysis of m 6 A peak distribution showed that approximately 69.47% of modified genes had a single m 6 A-modified peak, and the majority of genes had one to three m 6 A modification sites (Figure 2C). m 6 A methylation was further mapped using MEME-ChIP software, which identified the top consensus motif in m 6 A peaks as GGACU (Figure 2D), similar to the previously identified RRACH motif (where R = G or A; A = m 6 A; and H = U, A, or C) (Dominissini et al., 2012;Meyer et al., 2012). Within genes, m 6 A peaks were predominantly distributed in coding sequences (CDS), followed by the 3' untranslated region (3'UTR) and the immediate vicinity of the stop codon (Figure 2E). Total and unique m 6 A peaks were analyzed in the DCM and NC whole-transcriptome data and divided into 5'UTR, transcription start site region (TSS), CDS, stop codon, and 3'UTR regions, based on their locations in the RNA transcripts. The major regions of m 6 A peak enrichment were the CDS, 3'UTR, and stop codon vicinity regions (Figure 2F), consistent with previous m 6 A-seq results (Meyer et al., 2012). The distribution of DCM-unique m 6 A peaks differed from that of NC-unique peaks, with a relative increase of m 6 A residues in CDS regions and a relative decrease in the 3'UTR (35.57 vs. 34.14%; 37.71 vs. 40.5%; Figure 2F). Next, we identified differentially methylated transcripts and analyzed them using Gene Ontology (GO), Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway, and protein interaction network analyses. We compared the abundance of m 6 A peaks between NC and DCM samples and found that, among the 3,995 m 6 A peaks detected in both groups, 984 were differentially methylated and were selected for further study. Among them, 295 hypermethylated and 689 hypomethylated m 6 A sites were found in the DCM group [fold change (FC) > 1.5, P < 0.05; Figure 3A]. Differentially methylated sites in both groups showed significantly altered intensity on analysis using Integrative Genomics Viewer (IGV) software. Representative m 6 A-methylated mRNA peaks in the zinc finger domain transcription factor (Zfp69) and plakophilin-4 (Pkp4) genes are shown in Figure 3B as examples of sites with decreased and increased m 6 A levels, respectively. To determine the potential biological significance of changes in m 6 A methylation associated with DCM, we conducted GO analysis of differentially methylated RNAs.
The results revealed that, compared with db/+ mice, hypermethylated and hypomethylated RNAs in db/db mice were particularly associated with metabolism-related terms, for example, regulation of metabolic process, RNA metabolic process, organic substance metabolic process, cellular metabolic process, regulation of gene expression, nucleobase-containing compound metabolic process, and nitrogen compound metabolic process, indicating that the differentially methylated RNAs were closely associated with metabolism (Figures 3E,F). Further, KEGG pathway analysis showed that RNAs differentially methylated in DCM were mainly associated with cardiac fibrosis, myocardial hypertrophy, and myocardial energy metabolism, for example, the cAMP signaling pathway, dilated cardiomyopathy, and the cGMP-PKG signaling pathway, which are strongly associated with myocardial hypertrophy (Figures 3C,D). The chemokine signaling pathway, adrenergic signaling in cardiomyocytes, the TNF signaling pathway, and the advanced glycation end products (AGEs)-receptors for AGEs (RAGE) pathway (which is involved in diabetic complications) also appear to be important mechanisms associated with cardiac fibrosis (Figures 3C,D). Further, pathways associated with myocardial energy metabolism, including glycerophospholipid metabolism, apoptosis (multiple species), glycerolipid metabolism, fat digestion and absorption, ether lipid metabolism, and the calcium signaling pathway, were enriched among differentially methylated transcripts (Figures 3C,D). Overall, our data indicate that these differentially methylated RNAs may be involved in DCM pathogenesis. Protein interaction network analysis of genes with differentially methylated transcripts was performed using Cytoscape software. BCL2L2, MEF2A, and VEGF-A were the most central proteins and are particularly associated with the AGEs-RAGE pathway, which is involved in diabetic complications (Supplementary Figure 2). Therefore, these genes and the corresponding signaling pathways are likely of great importance in the protein-protein interaction networks and molecular events underlying DCM. In summary, transcripts with DCM-unique m 6 A peaks were closely related to cardiac fibrosis, myocardial hypertrophy, and myocardial energy metabolism, which are major pathological features of left ventricle remodeling in DCM.
Combined Analysis of MeRIP-seq and RNA-Seq Data Reveals That Unique m 6 A-Modified Transcripts Were Highly Relevant to Left Ventricle Remodeling Pathological Features of DCM RNA-Seq data showed that 127 mRNAs were significantly dysregulated in DCM samples compared with NCs, including 61 downregulated and 66 upregulated mRNAs (FC > 1.5, P < 0.05; Figure 4A). Hierarchical clustering analysis of the RNA-Seq data showed that the trend in differential gene expression between the groups was consistent among individual samples within each group (n = 5 per group) (Figure 4B). Further, principal component analysis showed that samples from the DCM and NC groups clustered separately, with only small differences among samples within each group (Supplementary Figure 3). Interestingly, GO and KEGG pathway analyses showed that the differentially expressed genes were mainly associated with cardiac fibrosis, myocardial hypertrophy, and myocardial energy metabolism (Supplementary Figures 4A,B, 5A,B), consistent with involvement in myocardial remodeling pathology (Ritchie and Abel, 2020). Next, we performed combined analysis of MeRIP-seq and RNA-Seq data to identify target genes that were modified by m 6 A.
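A minimal sketch of how such a conjoint ('quadrant') classification can be implemented, assuming per-gene fold changes and P values are already available from the exomePeak and RNA-Seq pipelines; the gene records below are placeholders, and the FC > 1.5, P < 0.05 cut-offs simply mirror those quoted earlier:

```python
def quadrant(m6a_fc, m6a_p, mrna_fc, mrna_p, fc_cut=1.5, p_cut=0.05):
    """Classify a gene by joint m6A and mRNA change; both must pass the FC and P cut-offs."""
    if m6a_p >= p_cut or mrna_p >= p_cut:
        return "not significant"
    if max(m6a_fc, 1 / m6a_fc) < fc_cut or max(mrna_fc, 1 / mrna_fc) < fc_cut:
        return "not significant"
    methylation = "hypermethylated" if m6a_fc > 1 else "hypomethylated"
    expression = "up" if mrna_fc > 1 else "down"
    return f"{methylation} / mRNA {expression}"

# Placeholder records: (gene, m6A fold change, m6A P, mRNA fold change, mRNA P)
records = [("GeneA", 2.1, 0.01, 1.8, 0.02),
           ("GeneB", 0.4, 0.003, 0.6, 0.04),
           ("GeneC", 1.2, 0.30, 1.9, 0.01)]
for gene, m_fc, m_p, e_fc, e_p in records:
    print(gene, quadrant(m_fc, m_p, e_fc, e_p))
```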
We detected 166 hypermethylated m 6 A peaks within mRNA transcripts, of which 57 and 109 lay in significantly downregulated and upregulated transcripts, respectively (Figure 4C). Further, 308 hypomethylated m 6 A peaks were detected in mRNA transcripts, with 158 in clearly upregulated and 150 in clearly downregulated transcripts, respectively (Figure 4C). The top three ranking genes showing the most distinct changes in m 6 A and mRNA levels in DCM samples relative to NC in each quadrant of Figure 4C are presented in Table 2. In whole-transcriptome comparisons of DCM versus NC, m 6 A-methylated genes were more often upregulated than downregulated in DCM; this trend did not exist for non-m 6 A-methylated genes (Figure 4D). Furthermore, the combined analysis of MeRIP-seq and RNA-Seq data demonstrated that unique m 6 A-modified transcripts were highly relevant to cardiac fibrosis, myocardial hypertrophy, and myocardial energy metabolism; therefore, we focused on genes critical for these processes, including Mef2a, Klf15, Bcl2l2, Cd36, and Slc25a33 (Figure 4C and Table 2). We used quantitative reverse-transcription PCR (RT-qPCR) to validate the key genes Mef2a, Klf15, Bcl2l2, Cd36, and Slc25a33, which are associated with DCM pathophysiology, and found that all of them were significantly enriched in the immunoprecipitation (IP) pull-down samples (Figure 4E). Further, the mRNA levels of these genes were measured in db/+ and db/db heart samples, and the results showed that their mRNA expression tendencies were consistent with their m 6 A methylation levels (Figure 4F). In summary, these data suggest that differentially methylated RNAs affect cardiac fibrosis, myocardial hypertrophy, and myocardial energy metabolism, thereby affecting protein homeostasis in a transcription-independent manner.
FTO Is Downregulated in DCM, and Overexpression of FTO Improves Cardiac Function by Reducing Myocardial Fibrosis and Myocyte Hypertrophy Based on the results of the GO/KEGG analysis of unique genes and the combined analysis of MeRIP-seq and RNA-Seq data in DCM and NC hearts, we suspected that the demethylase FTO, which is closely related to the regulation of energy metabolism, may have an important role in DCM pathogenesis. Therefore, in the next experiment, we assessed the role of FTO in the mechanism underlying DCM. FTO was downregulated in db/db mice compared with the db/+ group. The mRNA and protein expression levels of the FTO demethylase were measured in hearts from the db/db and db/+ groups to determine whether fibrosis is related to m 6 A modification. FTO protein levels estimated by western blotting were significantly decreased in the db/db group compared with the db/+ group (P < 0.001; n = 6/group; Figures 5A,B). Fto mRNA expression was also significantly decreased in db/db mice compared with db/+ controls (P < 0.001; n = 6/group; Figure 5C). In addition, immunostaining showed that FTO was predominantly expressed in cell nuclei, and that the integrated optical density of FTO staining was decreased in db/db mice compared with the db/+ group (P < 0.05; n = 6/group; Figures 5D,E). We also measured the expression of other important methyltransferases and demethylases, including Mettl3, Mettl14, and ALKBH5; however, no significant differences were detected (Supplementary Figure 6). Overexpression of FTO significantly improved cardiac function by reducing myocardial fibrosis and myocyte hypertrophy in db/db mice.
To determine whether overexpression of FTO protected hearts against DCM, adeno-associated virus vectors encoding FTO were injected into 16-week-old db/db or db/+ mice via a tail vein, and FTO expression was assessed 8 weeks later (Supplementary Figures 7A-C). Overexpression of FTO led to decreased total m 6 A levels in db/db mice (Supplementary Figures 7D,E; P < 0.05). Eight weeks after virus injection, reconstitution of FTO efficiently prevented myocardial fibrosis by reducing interstitial fibrosis in db/db mouse hearts (Figures 6A,B). Further, reconstitution of FTO efficiently reduced myocyte hypertrophy, as evidenced by decreased cardiomyocyte cross-sectional area in db/db mouse hearts (Figures 6A,C). Moreover, reconstitution of FTO significantly enhanced cardiac function in db/db mice by increasing LVEF and LVFS (Figures 6D,E). Furthermore, Doppler echocardiography indicated that FTO reconstitution also alleviated diastolic dysfunction in db/db mice by elevating the E/A ratio (Figure 6F). Overexpression of FTO decreased the mRNA levels of Bcl2l2 and Cd36 (Figures 6G,H). Taken together, these data indicate that reconstitution of FTO prevented myocardial fibrosis and myocyte hypertrophy, and that overexpression of FTO improved cardiac systolic and diastolic function in db/db mice.
DISCUSSION Diabetic cardiomyopathy is a specific type of cardiomyopathy caused by abnormal metabolism during diabetes. The main features of DCM are myocardial hypertrophy, cardiac fibrosis, coronary microvascular dysfunction, left ventricular enlargement, and weakened ventricular wall motion (Ritchie and Abel, 2020;Tan et al., 2020). In the early stage of DCM, echocardiography mainly indicates diastolic dysfunction, while later-stage disease is manifested as abnormal systolic function (Tan et al., 2020).
FIGURE 5 | Fat mass and obesity-associated (FTO) protein levels were significantly downregulated in diabetic cardiomyopathy. (A,B) Representative western blots and quantitative analysis of FTO/GAPDH protein expression in hearts from DCM and NC mice (n = 6 per group). (C) Fto mRNA levels in hearts from DCM and NC mice (n = 6 per group). (D) Confocal immunofluorescence using specific antibodies against FTO (red) in DCM hearts. Nuclei were stained with DAPI (blue); merged images show co-localization. Scale bar, 100 µm. (E) Quantification of FTO fluorescence intensity. Data are presented as mean ± SEM; differences between two groups were analyzed using the Student's t-test; *P < 0.05 vs. NC, **P < 0.01 vs. NC.
Although multiple aspects of epigenetic regulation, from DNA to protein modification, have been extensively studied in DCM, the role of RNA modification in the regulation of gene expression is just beginning to be explored. Previous studies have demonstrated that m 6 A dysregulation is associated with cardiac homeostasis and diseases, such as cardiac hypertrophy, cardiac remodeling, myocardial infarction, and heart failure (Dorn et al., 2019;Mathiyalagan et al., 2019;Mo et al., 2019;Song et al., 2019;Berulava et al., 2020;Gao et al., 2020;Lin et al., 2020); however, the transcriptome-wide distribution of m 6 A in DCM remains largely unknown. Our study reveals that m 6 A RNA methylation is altered in db/db mice, which exhibit unique m 6 A modification patterns that differ from those in db/+ mice at the transcriptome-wide and gene-specific scales.
GO and KEGG analyses revealed that genes with m 6 A RNA methylation differences between db/db and db/+ mice were particularly associated with cardiac fibrosis, myocardial hypertrophy, and myocardial energy metabolism. FTO was downregulated in db/db mice compared with db/+ mice, and overexpression of FTO in db/db mice improved cardiac function and significantly reduced myocardial fibrosis and myocyte hypertrophy. FTO is a critical RNA-modifying enzyme that may control cardiomyocyte function by catalyzing the demethylation of m 6 A on specific subsets of mRNAs, for example, Mef2a, Klf15, Bcl2l2, Cd36, and Slc25a33. The m 6 A modification patterns in db/db mice differed from those of db/+ mice at both the transcriptome-wide and gene-specific scales. We detected 4,968 m 6 A peaks in DCM, consistent with the 3,208 and 3,922 peaks described in heart failure and myocardial hypertrophy, respectively (Mathiyalagan et al., 2019;Berulava et al., 2020). Together, these results indicate that m 6 A is a ubiquitous post-transcriptional RNA modification in cardiovascular diseases.
FIGURE 6 | Reconstitution of FTO alleviated cardiac hypertrophy and fibrosis in diabetic db/db mice. (A) The gross morphology of hearts stained with HE, Masson, and WGA. Scale bar, 2 mm. (B,C) Quantitative analysis of interstitial fibrosis and cardiomyocyte cross-sectional area. Scale bar, 20 µm. Mice with reconstituted FTO showed significantly increased ejection fraction (D), fractional shortening (E), and E/A ratio (F). Mice with reconstituted FTO showed reduced Cd36 (G) and Bcl2l2 (H) expression, as determined by qPCR. Data are expressed as mean ± SD and were analyzed using the Student's t-test. *P < 0.05; **P < 0.01; ****P < 0.0001 vs. NC group (n = 3 per group).
Furthermore, we investigated differentially methylated RNAs during DCM development and found that their numbers were higher than those of genes with differential mRNA levels (296 vs. 127), indicating that the changes in m 6 A RNA methylation far exceeded those in gene expression. Importantly, we found that, in DCM, m 6 A was primarily present in the CDS and 3'UTR regions, which may influence mRNA stability, translation efficiency, and alternative splicing. Therefore, we speculate that the differential methylation of RNA in DCM may influence RNA at the post-transcriptional and translational levels, particularly translation efficiency. Gene Ontology and Kyoto Encyclopedia of Genes and Genomes analyses showed that transcripts with differential m 6 A methylation in DCM were significantly enriched in processes and pathways associated with myocardial energy metabolism, such as glycerophospholipid metabolism, glycerolipid metabolism, and regulation of cellular metabolic processes (Figures 4C-F). DCM is defined as a loss of flexibility in myocardial substrate metabolism, which leads to mitochondrial dysfunction, inflammation, and myocardial fibrosis (Peterson and Gropler, 2020). Cardiomyocyte ATP is mainly (60-90%) derived from fatty acid β-oxidation under physiological conditions (Peterson and Gropler, 2020); however, in the diabetic state, fatty acid oxidation can produce numerous lipid intermediates, which accumulate in cardiomyocytes to cause lipotoxicity and ultimately lead to impaired heart function. Further, excessive fatty acid oxidation can cause accumulation of reactive oxygen species (ROS), leading to oxidative stress, which damages myocardial cells (Peterson and Gropler, 2020;Ritchie and Abel, 2020).
We propose that m 6 A methylation may be involved in the key pathogenic processes underlying DCM, including abnormal myocardial substrate metabolism. Furthermore, our RNA-Seq data demonstrate that abnormally up-regulated genes in the DCM samples were significantly enriched in biological processes involving lipid metabolism, cellular lipid metabolism, and fatty acid metabolism (Supplementary Figure 3A), consistent with myocardial energy metabolism pathology (Ritchie and Abel, 2020;Tan et al., 2020). Moreover, pathway analysis showed that unsaturated fatty acid biosynthesis, fatty acid elongation, butanoate metabolism, the PPAR signaling pathway, and the HIF-1 signaling pathway were significantly altered among the up-regulated genes (Supplementary Figure 4B). These results support the view that changes in m 6 A RNA methylation mainly occur in transcripts coding for proteins involved in cardiac metabolic processes, with differences in gene expression also linked to metabolic regulation. In addition, combined analysis of MeRIP-seq and RNA-Seq data identified the target genes Cd36 and Slc25a33, which were validated by MeRIP-qPCR (Figures 4E,F). CD36 deficiency rescues lipotoxic cardiomyopathy by preventing myocardial lipid accumulation in MHC-PPARα mice (Yang et al., 2007). In our study, Cd36 m 6 A methylation and mRNA expression levels were upregulated, indicating that m 6 A modification may influence mRNA stability or translation efficiency. Our GO and KEGG analyses showed that transcripts differentially m 6 A methylated in DCM were significantly enriched in processes and pathways associated with cardiac fibrosis and myocardial hypertrophy. The cAMP signaling, cGMP-PKG signaling, and dilated cardiomyopathy pathways are closely related to myocardial hypertrophy, while adrenergic signaling in cardiomyocytes, the TNF signaling pathway, the mitogen-activated protein kinase (MAPK) signaling pathway, the AGEs-RAGE pathway, and chemokine signaling are strongly associated with cardiac fibrosis (Figures 4C,D). The main pathological changes in DCM include myocardial interstitial fibrosis, cardiomyocyte hypertrophy, cardiomyocyte apoptosis, and microvascular disease (Peterson and Gropler, 2020;Ritchie and Abel, 2020;Tan et al., 2020). In our study, GO analysis showed that metabolic processes involving nitrogen compounds were up-regulated, and the cGMP-PKG signaling pathway was also enriched; this is consistent with a previous study showing that decreased NO signaling in endothelial cells and cardiomyocytes leads to cardiomyocyte hypertrophy in DCM by reducing soluble guanylate cyclase (sGC) activity and cyclic guanosine monophosphate (cGMP) content, and by depriving cardiomyocytes of the protective effects of protein kinase G (PKG) (Park et al., 2018;Tan et al., 2020). Hence, our data suggest that m 6 A methylation could be involved in the important pathogenic processes underlying myocardial hypertrophy in DCM. Furthermore, AGEs promote an imbalance of inflammatory gene expression by binding to specific cell surface receptors, thus increasing matrix protein expression through the MAPK pathway in vascular and heart tissues. Simultaneously, AGEs are involved in increasing ROS production and promoting myocardial inflammation and fibrosis (Wang Y. et al., 2020). Our KEGG analysis of transcripts differentially m 6 A methylated in DCM showed that MAPK signaling was up-regulated, while the AGEs-RAGE pathway was down-regulated.
Interestingly, enriched GO terms for genes differentially expressed in DCM based on the RNA-seq data included extracellular matrix organization, myofibril assembly, and collagen-containing extracellular complex organization, indicating that m 6 A methylation may contribute to important pathogenic processes underlying cardiac fibrosis in DCM. We also validated two target genes, Mef2a and Klf15, by MeRIP-qPCR (Figures 4E,F). KLF15 affects myocardial hypertrophy by inhibiting MEF2 and GATA4 transcription (Zhao et al., 2019), and can reduce myocardial fibrosis by downregulating the expression of transforming growth factor-β (TGF-β), connective tissue growth factor, and myocardin-related transcription factor-A in myocardial fibroblasts (Zhao et al., 2019). Further, knockout of the MEF2A gene improves cardiac dysfunction and collagen deposition in DCM, while inhibition of MEF2A can reduce extracellular matrix accumulation by regulating the Akt and TGF-β1/Smad signaling pathways (Chen et al., 2015). We found significant associations between differentially m 6 A methylated transcripts and cardiac fibrosis, myocardial hypertrophy, and myocardial energy metabolism in DCM, suggesting that DCM may be regulated by epitranscriptomic processes, such as m 6 A RNA methylation. FTO is downregulated in DCM, and overexpression of FTO improves cardiac function by reducing myocardial fibrosis and myocyte hypertrophy. The FTO gene was discovered in 2007 in a genome-wide association study of type 2 diabetes (Frayling et al., 2007). Further, a population cohort study found that the effect of FTO risk variants is related to energy intake (Haupt et al., 2009); however, the mechanisms by which FTO influences obesity and the specific pathways related to energy metabolism remain unclear. Animal experiments showed that the FTO-related increase in energy metabolism does not involve physical exercise and may be caused by increased activity of the sympathetic nervous system (SNS) (Church et al., 2010). Further, the increased SNS activity may promote lipolysis in fat and muscle tissues and improve fat-burning efficiency, thereby reducing the occurrence of obesity (Church et al., 2010). Our KEGG pathway analysis of differentially methylated RNAs showed that they were mainly associated with adrenergic signaling in cardiomyocytes (Figure 4F). In adipogenesis, FTO can also improve the binding of C/EBPs to methylated or unmethylated DNA, thereby enhancing the transcriptional activity of the corresponding gene promoters and stimulating preadipocyte differentiation (Wu et al., 2010). In summary, FTO plays an important role in energy metabolism. Recent studies have shown that FTO expression is downregulated in failing mammalian hearts and hypoxic cardiomyocytes, thereby increasing m 6 A levels in RNA and decreasing cardiomyocyte contractile function (Mathiyalagan et al., 2019). At the same time, overexpression of FTO in mice reduces fibrosis and promotes angiogenesis (Mathiyalagan et al., 2019). In our study, overexpression of FTO also reduced myocardial fibrosis (Figures 6A,B). In addition, FTO knockout can lead to impaired cardiac function and promote heart failure (Berulava et al., 2020). In our study, overexpression of FTO improved heart function by increasing the LVEF and LVFS (Figures 6D,E). In summary, overexpression of FTO improves cardiac function by reducing myocardial fibrosis and myocyte hypertrophy. This study has limitations.
First, no human heart samples were analyzed; in the future, we will seek human heart samples to further explore the association of m 6 A with the pathogenic processes of DCM. Second, although the target genes modified by m 6 A were detected, the mechanism by which methylation readers regulate these target genes was not explored. In the future, we will investigate whether readers influence the stability, translation efficiency, or degradation of the target genes. Third, although we purposely overexpressed FTO in the heart to improve cardiac function, future experiments will use conditional knockout mice and additional DCM models to study the exact mechanism by which FTO mediates DCM. In conclusion, m 6 A RNA methylation was altered in db/db mice, which had unique m 6 A modification patterns that differed from those in db/+ mice at the transcriptome-wide and gene-specific scales. GO and KEGG analyses indicated that differentially methylated genes were particularly associated with cardiac fibrosis, myocardial hypertrophy, and myocardial energy metabolism. FTO was downregulated in db/db mice compared with db/+ mice, and overexpression of FTO in db/db mice improved cardiac function and significantly reduced myocardial fibrosis and myocyte hypertrophy. FTO is a critical RNA-modifying enzyme that may control cardiomyocyte function by catalyzing the demethylation of m 6 A on specific subsets of mRNAs, including Mef2a, Klf15, Bcl2l2, Cd36, and Slc25a33.
DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: GSE173384, https://www.ncbi.nlm.nih.gov/bioproject/?term=GSE173384.
ETHICS STATEMENT The animal study was reviewed and approved by the Ethics Committee of the Capital Institute of Pediatrics.
9,011.8
2021-07-21T00:00:00.000
[ "Biology" ]
DETERMINATION OF MOLECULAR GENETIC MARKERS IN PROGNOSIS OF THE EFFECTIVENESS OF TREATMENT OF MALIGNANT INTRACEREBRAL BRAIN TUMORS
Intracerebral malignant brain tumors remain one of the most complex problems of neuro-oncology. Promising results have now been obtained with targeted drugs, which underlines the important diagnostic and predictive value of molecular genetic markers of glial and metastatic brain tumors. Aim: The study of the prevalence of MGMT (O6-methylguanine-DNA methyltransferase) and PTEN (phosphatase and tensin homologue deleted on chromosome 10) gene expression by real-time polymerase chain reaction in tumor tissue of gliomas and brain metastases.
Introduction Intracerebral malignant brain tumors, regardless of their genesis, primary or metastatic, remain one of the most difficult and unresolved issues of neuro-oncology [1,2]. Since the 1970s, surgery together with radiotherapy (RT) has remained the main method of treating patients with malignant intracerebral tumors. The combination of these two treatments allowed the survival rate to be doubled [3,4]. The survival of patients with malignant gliomas has improved with the development of therapeutic protocols, especially since the introduction of the Stupp chemotherapy (CT) protocol into clinical practice in 2005. According to the Stupp protocol, the cytotoxic agent temozolomide (TMZ) is added to postoperative RT, followed by TMZ CT for six cycles. This allowed the two-year overall survival rate to be increased to 26.5 % compared with only 10.4 % for postoperative RT alone [5]. It is known that, in the complex treatment of patients with glioblastoma (GBM), TMZ CT increases the average life expectancy of some categories of patients to 24 months in 39 % of cases [6]. However, the limitations of TMZ CT are also widely known and are associated, first of all, with resistance to the drug. Although TMZ CT has been used for more than ten years in the treatment of malignant glial tumors, the causes and mechanisms of TMZ resistance development are not fully understood today and are the subject of scientific research [7,8]. Independent risk factors associated with shorter long-term survival in malignant glial tumors are patient age (≥60 years), non-radical tumour removal, low preoperative functional status (KPS <70), absence of postoperative RT and CT, and a reduced course of postoperative TMZ CT (<4 cycles) [10,11]. At the molecular level, a more favourable prognosis may be indicated by methylation of the promoter of the O 6 -methylguanine-DNA methyltransferase (MGMT) gene or by the presence of mutations in the isocitrate dehydrogenase (IDH) gene (Hartmann et al., 2013; Reifenberger et al., 2014). It is because of such differences in molecular biology that tumors of the same histotype differ in their response to therapy. Therefore, in modern neuro-oncology, molecular genetic markers of tumors acquire special diagnostic and prognostic significance as an essential component of the selection of treatment tactics [12]. The development of molecular biology and the orientation towards personalized medicine have led to a fundamentally new approach to the treatment of cancer patients with the use of molecular targeted drugs that block the proliferation of tumour cells by selective inhibition of their major signalling pathways: ligands, membrane receptors, intracellular proteins, etc. [6].
CT is an important component of the integrated therapy of cancer patients. The heterogeneity of tumors of one histotype, in terms of the molecular mechanisms of their development and of chemoresistance, has driven the personalization of therapy. Today, along with traditional cytotoxic chemotherapy, targeted therapy is becoming increasingly important. The essence of the approach lies in the preliminary identification of key genetic mutations that trigger uncontrolled cell proliferation and inhibition of apoptosis. Targeted therapy is therefore based on blocking processes that are abnormally activated by mutations [13,14]. When prescribing cytostatics, it is important to determine the expression of the corresponding chemosensitivity genes. Studies are conducted using biopsy or postoperative tumour tissue. Thus, the prescription of antitumour CT, whether traditional or targeted, should be based on the current level of knowledge about molecular genetic prediction. Deepening the search for molecular genetic markers that allow the effectiveness of therapy for intracerebral tumors to be predicted remains relevant in neuro-oncology.
Aim of research The study of the prevalence of MGMT (O 6 -methylguanine-DNA methyltransferase) and PTEN (phosphatase and tensin homologue deleted on chromosome 10) gene expression by real-time polymerase chain reaction in glioma tumour tissue and brain metastases.
Materials and methods The research was conducted at the State Institution "Institute of Neurosurgery named after A.P. Romodanova NAMS of Ukraine" (certificate of quality management registered in the Register of Certification Systems of UkrSEPRO № UA.2.166/0909-15, dated June 10, 2015 and valid until June 09, 2020), in the Department of Neurobiology (Certificate of Attestation No. PT-221/14, valid from 04.07.2014 to 03.07.2019), which certifies that the department is accredited under the Law of Ukraine "On Metrology and Metrological Activity" and meets the criteria for attestation of measuring laboratories in accordance with the Rules of Authorization and Certification in the State Metrology System. The department is certified to carry out measurements of the relevant indicators, as listed in the appendix to the certificate, which is an integral part of it. All patients included in the study were treated at the State Institution "Institute of Neurosurgery named after A.P. Romodanova NAMS of Ukraine". The study included patients with different brain tumour lesions, both primary and metastatic. Between April 2017 and September 2018, 30 patients were examined and operated on to obtain tumour material for further study: 1 case of anaplastic astrocytoma (AA); 26 cases of GBM (grade IV anaplasia), of which 22 were first-time GBM diagnoses, 3 were continued-growth (recurrent) GBM and 1 was secondary GBM; 1 case of continued growth of anaplastic oligodendroglioma (AOG); 1 case of anaplastic ganglioglioma; and 1 case of metastatic adenocarcinoma in the brain. Two of the 30 patients (both with GBM) were lost to follow-up; the outcomes of the remaining 28 patients are known. All patients were informed about the research and gave informed consent to participate. The research was carried out in accordance with the ethical norms established by Ukrainian legislation.
Real-time polymerase chain reaction (PCR) method Material for the study was obtained from fragments of brain tumour tissue extracted during surgical intervention. Immediately after sterile removal, the tissue was treated as follows: fragments of blood vessels and necrotized fragments were removed, and the cleaned tissue was placed in sterile cryotubes and stored in liquid nitrogen at -196 °C until use in the experiment.
Isolation of total RNA Total RNA was isolated from tissue samples frozen in liquid nitrogen using the commercially available PureLink RNA Mini Kit (Applied Biosystems, USA) according to the kit instructions. The isolation steps consisted of cell lysis, RNA binding to the membrane, purification of RNA and elution. Guanidine isothiocyanate is included in the lysis and washing buffers to prevent RNA degradation. The isolated RNA was placed on ice, checked for quality and used immediately for the synthesis of cDNA in the reverse transcription reaction.
Determination of the quality of the isolated RNA The degree of purification of RNA from proteins and guanidine was estimated from the absorption spectrum of the total RNA solution. The 260 nm/280 nm absorbance ratio should be between 1.9 and 2.1; samples with lower or higher values were not included in the experiment. The RNA was quantified by absorbance at 260 nm on a spectrophotometer.
Synthesis of the first strand of cDNA cDNA was obtained with the TaqMan Reverse Transcription Reagents kit (Applied Biosystems, USA) according to the kit instructions. The reaction was carried out in a volume of 20 μl, with oligo(dT) primers.
Determination of the level of gene expression by real-time PCR The expression levels of the MGMT (Hs01037698_m1) and PTEN (Hs02621230_s1) genes were analyzed on a CFX96 Touch real-time PCR fluorescence detection system (Bio-Rad, USA) using TaqMan Universal PCR Master Mix (Applied Biosystems, USA) and TaqMan Gene Expression Assays (Applied Biosystems, USA) with the following protocol: initial denaturation at 95 °C for 10 min (1 cycle), followed by 40 amplification cycles of denaturation at 95 °C for 15 s and annealing of primers, elongation and acquisition of the fluorescence signal at 60 °C for 1 min. All reactions were carried out in separate test tubes. The volume of the reaction mixture was 20 μl. To evaluate the reproducibility of the threshold cycle values, all samples were amplified in duplicate with each pair of primers; the difference between duplicates was no more than 0.5 cycles. Accumulation of amplification products was detected using TaqMan-type oligonucleotide probes complementary to the central portion of the amplified transcript fragment. A FAM fluorophore is attached to the 5' end of the probe, while the 3' end carries an MGB quencher. Upon amplification of the transcript, the fluorophore and the quencher are separated, and the instrument records the fluorescence signal. The oligonucleotide probes used to determine the expression of the studied genes are shown in Table 1. To confirm that the amplified transcript regions did not originate from genomic DNA contamination, parallel amplification reactions were performed in advance on RNA specimens without reverse transcription and on cDNA.
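The acceptance criteria described above (an A260/A280 ratio between 1.9 and 2.1, and duplicate Ct values differing by no more than 0.5 cycles) amount to simple checks; a minimal sketch with hypothetical readings:

```python
def rna_purity_ok(a260, a280, low=1.9, high=2.1):
    """Accept an RNA sample only if its A260/A280 absorbance ratio lies within [low, high]."""
    return low <= a260 / a280 <= high

def duplicates_ok(ct1, ct2, max_diff=0.5):
    """Accept a duplicate amplification if the two Ct values differ by at most max_diff cycles."""
    return abs(ct1 - ct2) <= max_diff

print(rna_purity_ok(a260=0.82, a280=0.41))  # ratio 2.0 -> True
print(duplicates_ok(ct1=24.3, ct2=24.6))    # difference 0.3 cycles -> True
```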
Table 1. Nucleotide sequences of the fluorescent probes used to determine the level of gene expression (columns: Gene; Sequence of the oligonucleotide probe).
For the real-time PCR method, the glucuronidase gene GUSB (Hs00939627_m1), which belongs to the family of "housekeeping genes" and is characterized by a stable expression level among various specimens, was selected as the reference gene for MGMT and PTEN. It is assumed that changes in the amount of its cDNA depend only on the factors that determine the kinetics of reverse transcription; this allows the level of gene expression to be compared between different samples. Data analysis was based on the threshold fluorescence, which is the main criterion for evaluating the results. It defines the PCR threshold cycle (Ct), at which fluorescence increases significantly above the background level and the exponential phase of the PCR begins. The absolute number of copies of the MGMT and PTEN mRNAs in tumour samples was determined using tenfold dilutions of a plasmid-cloned fragment of the MGMT gene and a standard curve constructed from these data; GeneArt Gene Synthesis (Invitrogen, Thermo Fisher Scientific, Germany) was used for this purpose. The calculations used the equation of the standard curve, y = -0.3415x + 13.275. Differences in the level of gene expression between specimens were evaluated as the normalized number of copies of the MGMT and PTEN mRNAs relative to the number of copies of the housekeeping gene GUSB.
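This quantification can be sketched as follows. The published standard curve, y = -0.3415x + 13.275, is read here as giving log10(copy number) as a function of the threshold cycle Ct; that interpretation, the variable names, and the example Ct values are our assumptions for illustration, not statements from the paper:

```python
def copies_from_ct(ct, slope=-0.3415, intercept=13.275):
    """Copy number from a Ct value, assuming the standard curve gives log10(copies) = slope*Ct + intercept."""
    return 10 ** (slope * ct + intercept)

def normalized_expression(ct_target, ct_gusb):
    """Target copies divided by GUSB (housekeeping) copies; any additional scaling to 'c. u.' is not specified here."""
    return copies_from_ct(ct_target) / copies_from_ct(ct_gusb)

# Hypothetical Ct values, not measurements from the study:
print(normalized_expression(ct_target=28.4, ct_gusb=24.1))
```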
Results In all 30 (100 %) tumour samples, we determined the normalized expression of the MGMT and PTEN genes. Table 2 presents the normalized expression of the MGMT and PTEN genes in the tissue of glial tumour fragments and brain metastases extracted during surgical removal; subsequent adjuvant treatment was also traced. In 18 (69 %) of 26 patients with GBM, adjuvant radiotherapy (RT) was conducted with concomitant TMZ CT. In 14 (54 %) of 26 patients with GBM, TMZ CT was performed after completion of postoperative adjuvant RT. In 16 (53 %) of the 30 examined patients, low normalized expression of the MGMT gene (<40 c. u.) was observed. In 6 (20 %) patients, this indicator was in the range of 40 c. u. to 100 c. u. The use of TMZ in such cases is considered appropriate and effective, as evidenced by overall survival (OS) in the study group of patients (Table 2). In one case, in a patient with recurrent GBM (observation No. 27, Table 2), the normalized expression of the MGMT gene significantly exceeded that of the remaining patients and amounted to 2,292.8 c. u., which is a negative prognostic factor. In this observation, the recurrence-free period (RFP) was 18 months and overall survival (OS) was 25 months; the patient died 7 months after the onset of relapse. In the case of secondary GBM (observation № 25, Table 2), the normalized expression of the MGMT gene was 10.3 c. u.; RT was performed with concomitant TMZ CT, followed by 6 courses of adjuvant TMZ CT. This patient is alive, and the follow-up has lasted 22 months. In our observation group, low normalized expression of the PTEN gene (<40 c. u.) was detected in the majority of patients, 22 (73 %) cases. In the remaining 8 (27 %) patients, the normalized expression of the PTEN gene exceeded 40 c. u. and ranged from 61.6 c. u. to 21,390.5 c. u. No correlation was established between the normalized expression of the PTEN gene and either the survival of the patients or the level of normalized MGMT expression. The highest normalized PTEN expression, 21,390.5 c. u., was observed in a patient with GBM who is still alive and has been followed for 10 months; no evidence of tumour relapse has been obtained (observation № 19, Table 2). Two patients with GBM with a high OS of 22 months are still alive (observations № 1 and 8, Table 2).
Discussion As a result of our work, the prevalence of expression of the MGMT and PTEN genes, as measured by real-time PCR in glial tumour tissue and brain metastases, has been studied. In all 30 (100 %) tumour samples, normalized expression of the MGMT and PTEN genes was determined. Such cases were considered prognostically favorable for the response to TMZ CT, which is confirmed by survival rates in the study group. Our results indicate a correlation between the level of expression of the MGMT gene and the efficacy of TMZ CT. Unfortunately, due to the small sample size, we were unable to analyze the statistical significance of the results obtained or to estimate the predictive significance of MGMT and PTEN gene expression. This is a weak point of our study and requires further analysis with a larger number of samples and a longer observation period. It would also be interesting to investigate mutations in the EGFR gene (epidermal growth factor receptor, ErbB-1) in the samples presented, the correlation of such mutations with the level of PTEN gene expression, and their prognostic value for assessing the effectiveness of EGFR-targeted therapy in neuro-oncology patients. Thus, there is a need to continue such a study, deepen the analysis and implement the results in clinical practice. The subject of our research is relevant in neuro-oncology and corresponds to the demands of modern scientific research. To date, molecular biology has identified several genetic loci in the human genome whose functioning determines the effectiveness of the antitumour effects of CT and RT. A deepening understanding of the molecular-biological properties of brain tumors leads to the expansion of therapeutic options, primarily targeted therapy. In recent years, the number of publications devoted precisely to the clinical aspects of molecular biology has increased. Thus, Brandes A.A. and co-authors (2017) studied the role of MGMT methylation status at various stages of GBM pathomorphosis. It has been determined that one of the key factors to be taken into account when treating malignant glioma is the functioning of the gene encoding the reparative enzyme MGMT. The reparative enzyme MGMT rapidly repairs the DNA damage inflicted on glioma cells (and all other proliferating cells of the patient's body) by the alkylating agent. MGMT gene expression depends on the functioning of other genes (for example, DNA methylation of the gene promoter), on certain gene ensembles that affect apoptotic processes, etc. [15]. Published in 2018, the work of Nguyen, H. S., and co-authors is devoted to the study of molecular markers of resistant GBM and ways to overcome therapeutic resistance [16].
The authors emphasize that studying the expression of the MGMT gene, whose product reduces the effectiveness of alkylating cytostatics, is necessary when administering TMZ CT. The main modern trend in neuro-oncology is a person-oriented approach to the prescription of antitumour chemotherapy. First of all, this is due to the heterogeneity of tumors of one histotype in the molecular mechanisms of their development and of chemoresistance. The basis of targeted therapy, unlike traditional cytotoxic chemotherapy, is the blocking of processes that are abnormally activated due to mutations. In the cells of many tumors, abnormal EGFR receptors are detected as a consequence of mutation of this gene. In cells carrying the mutation, activation of the EGFR signaling pathway occurs, which, in turn, initiates the processes of malignant transformation in most tumors. The presence of EGFR gene mutations allows the selection of a group of patients who may respond positively to inhibitors of the epidermal growth factor receptor tyrosine kinase. The work of Kwatra M. M. and co-authors (2017) is devoted to the analysis of EGFR-targeted GBM therapy [17]. The EGFR transmembrane receptor, which binds extracellular ligands, belongs to the family of transmembrane receptors EGFR (ErbB-1), HER2/c-neu (ErbB-2), HER3 (ErbB-3) and HER4 (ErbB-4), whose intracellular part exhibits tyrosine kinase activity. Amplification of the EGFR gene is found in 60 % of GBM, and EGFR overexpression is observed in half of GBM. Studies of EGFR blockers and antibodies (erlotinib, gefitinib, nimotuzumab) are currently ongoing and require further scientific research [17,18]. Eskilsson E. and co-authors (2018) presented a work concerned with EGFR heterogeneity in GBM and the therapeutic aspects of this problem. Such studies provide an opportunity for the proper administration of specific tyrosine kinase inhibitors such as erlotinib and gefitinib [19]. Elsamadicy A. A. and co-authors (2017) report positive results for the EGFRvIII-targeted vaccine Rindopepimut (CDX-110) added to first-line therapy in patients who did not receive TMZ. A distinct EGFR variant, the EGFR variant III (EGFRvIII) mutation, which occurs in 20 % of GBM, is now distinguished. It should be emphasized that the best results in this study were obtained in patients with unmethylated MGMT [20]. Recent studies have shown the value of molecular genetic prediction of the effectiveness of targeted therapy for brain tumors, based on the detection of mutations in the EGFR gene (abnormal splicing and mutations in exons 18-21, a total of 19 mutations) and on the expression of the PTEN gene. Recent work has also been published in which the role of the PTEN protein in neuro-oncogenesis has been thoroughly characterized. In articles by Kang Y. J. and Benitez J. A., the molecular-biological significance of the PTEN protein and of mutations in the PTEN gene is analyzed in detail [21,22]. The PTEN protein, encoded by the PTEN gene, plays a role in the regulation of the cell cycle, is a tumor suppressor and functions as a phosphatase. The main function of PTEN is inhibition of the signaling pathway controlled by phosphoinositide 3-kinase, one of the main pathways responsible for cell proliferation and survival. PTEN is part of a signaling pathway that stops cell division and induces apoptosis. Loss of apoptosis results in uncontrolled cell growth and progression of the oncological process.
With malignant transformation, the cells accumulate mutations that ensure the progression of tumors. Mutations in the PTEN gene lead to the development of many tumors. Genetic inactivation of PTEN has been described in GBM, uterine cancer and prostate cancer, and decreased PTEN activity is observed in lung cancer, breast cancer and others. In 2014, a meta-analysis was conducted on the PTEN gene and its predictive value in gliomas [14]. It has been shown that study of the PTEN gene is needed to evaluate the efficacy of EGFR-targeted therapy, since PTEN is one of the few negative regulators of the PI3K/AKT/mTOR signaling pathway and plays an important role in the regulation of cellular proliferation, apoptosis and tumor invasion. The sensitivity of brain tumors to therapy with EGFR inhibitors is significantly higher when the EGFRvIII oncogene and the PTEN tumor suppressor are co-expressed. Conversely, loss or mutation of the PTEN gene makes the tumor resistant to EGFR inhibitors, as the PI3K/AKT/mTOR signaling pathway is no longer switched off. It has been established that decreased PTEN expression is a negative prognostic marker of survival in GBM, as well as a marker of therapy response. All this confirms the relevance of our research and the value of further scientific work aimed at studying the molecular genetic features of intracerebral tumors. The authors report that there is no conflict of interest.
6. Conclusions 1. A study of MGMT gene expression, a marker of temozolomide chemotherapy sensitivity, indicates a trend towards a correlation between expression levels and therapeutic efficacy. To establish statistically significant prognostic effects and the informativeness of MGMT expression, it is necessary to expand the sample and continue the study with a larger number of patients and a longer observation period. 2. A study of the expression of the PTEN gene, a blocker of the PI3K/AKT signalling pathway, revealed differing degrees of expression of this enzyme in the tumour samples studied. The predictive value of this indicator for targeted chemotherapy could be assessed in combination with EGFR mutation status in these tumour tissue samples. 3. The study of the molecular biological characteristics of intracerebral tumors and the implementation of these data in clinical practice provide an opportunity to develop more effective therapeutic protocols based on a personalized approach to the treatment of neuro-oncology patients.
5,103
2019-07-31T00:00:00.000
[ "Medicine", "Biology" ]
The Sentinel-3 SLSTR Atmospheric Motion Vectors Product at EUMETSAT : Atmospheric Motion Vectors (AMVs) are an important input to many Numerical Weather Prediction (NWP) models. EUMETSAT derives AMVs from several of its orbiting satellites, including the geostationary satellites (Meteosat), and its Low-Earth Orbit (LEO) satellites. The algorithm extracting the AMVs uses pairs or triplets of images, and tracks the motion of clouds or water vapour features from one image to another. Currently, EUMETSAT LEO satellite AMVs are retrieved from georeferenced images from the Advanced Very-High-Resolution Radiometer (AVHRR) on board the Metop satellites. EUMETSAT is currently preparing the operational release of an AMV product from the Sea and Land Surface Temperature Radiometer (SLSTR) on board the Sentinel-3 satellites. The main innovation in the processing, compared with AVHRR AMVs, lies in the co-registration of pairs of images: the images are first projected on an equal-area grid, before applying the AMV extraction algorithm. This approach has multiple advantages. First, individual pixels represent areas of equal sizes, which is crucial to ensure that the tracking is consistent throughout the processed image, and from one image to another. Second, this allows features that would otherwise leave the frame of the reference image to be tracked, thereby allowing more AMVs to be derived. Third, the same framework could be used for every LEO satellite, allowing an overall consistency of EUMETSAT AMV products. In this work, we present the results of this method for SLSTR by comparing the AMVs to the forecast model. We validate our results against AMVs currently derived from AVHRR and the Spinning Enhanced Visible and InfraRed Imager (SEVIRI). The release of the operational SLSTR AMV product is expected in 2022. Introduction In the field of wind observations, Atmospheric Motion Vectors (AMVs) prevail as a globally available, mesoscale measurement of winds. AMVs are routinely assimilated in Numerical Weather Prediction (NWP), where they have a significant positive impact on weather forecasting and nowcasting [1][2][3]. EUMETSAT already derives AMVs from its Meteosat Second Generation (MSG) satellites, Meteosat-11 at 0 • East, Meteosat-10 at 9.5 • East in rapid scan mode and Meteosat-8 at 41.5 • East, and from its Low-Earth Orbit (LEO) satellites, Metop A, B and C. AMV producers like EUMETSAT are always encouraged by end-users to propose AMV products from new sensors, to increase the coverage and density of wind observations. In particular, LEO satellite sensors allow observing atmospheric motion at high latitudes, which are not visible for geostationary satellite sensors [4,5]. Since the launch of Sentinel-3A (S3A) in 2016 and Sentinel-3B (S3B) in 2018, the Sea and Land Surface Temperature Radiometer (SLSTR) represents a viable new opportunity to derive AMVs in these critical areas. AMVs are derived from pairs of images from satellite sensors, by tracking clouds, or water vapour features, from one image to the other. Such tracking is easily achieved for sensors on board geostationary satellites, considering that successive images are co-registered spatially, have the same ground resolution and the time gap between images is small [6]. The situation is different for LEO satellites. Indeed, the time gap between successive overpasses is much higher, either 101 min (time needed to complete one orbit of Metop or Sentinel-3), or less when using images from different satellites. 
Not only does this increased time gap make the tracking of features from one image to another harder, but it also limits the derivation of AMVs to the area of overlap between the images. Additionally, the ground resolution of pixels varies depending on the viewing angle: pixels at nadir usually represent areas of about 1 km × 1 km each for SLSTR, while pixels at the border of the swath can represent areas as large as 5 km × 5 km each. The current implementation of AMV computation from EPS-Advanced Very-High-Resolution Radiometer (AVHRR) [7] consists in locating pixels of the second image inside the first image using their geographical coordinates, and then interpolate to fill the gaps between remapped pixels. The AMV derivation methodology we use for SLSTR is a slightly modified version of that used for AVHRR. In the following, we present this methodology, used to derive AMVs from images from band S8 (centred at 10,854 nm) of SLSTR. In Section 2, we present the input data and the algorithm used to derive AMVs, and give details about the final product. In Section 3, we give an overview of the performance of the SLSTR AMVs, by comparing them to ECMWF forecast model winds, MSG -Spinning Enhanced Visible and InfraRed Imager (SEVIRI) AMVs, and AVHRR AMVs. Algorithm and Product Description The derivation of AMVs is based on SLSTR Level 1B (L1B) products. L1B products are available in the form of Product Dissemination Units (PDUs) of 2 or 3 min of data. Every orbit allows the production of 33 PDUs of 3 min and one PDU of 2 min, resulting in 484-485 PDUs per day. Currently, only the thermal infrared band of SLSTR, the band S8, centred at 10.854 micrometres, is used to derive AMVs. The L1B S8 data is disseminated in the form of Brightness Temperatures (BT), which are used by the AMV algorithm for the height assignment. However, features are tracked using the radiance images. Any pixels declared clouds by any masks within the L1B product are considered clouds in the AMV processing here. This permissive choice ensures the derivation of as many AMVs as possible, with the risk of sometimes deriving ground-level null vectors which should not be interpreted as movements. 2.1.2. Difference between S3A and S3B Products SLSTR AMVs are derived using L1B data from both Sentinel-3 satellites, S3A and S3B. The orbits of S3A and S3B are identical, but S3B flies 140 degrees out of phase with S3A. The 1420 km swath width of SLSTR has little overlap between successive overpasses of the same satellite (see Figure 1a). Furthermore, the temporal resolution of the observations is lower in a single platform approach (101 min) compared to a dual-platform approach (61.5 min or 39.5 min). Consequently, only dual-satellite (S3A and S3B combined) products will be disseminated. The SLSTR AMV product consists of two complementary products from the S3A and S3B processing chains, even if both satellites data are used in both cases. The S3A (also called dual S3A/S3B) product considers S3A data as the reference image, and tracks the features backwards in S3B data (earlier pass), the second image; and conversely for the pair S3B/S3A. The phase between the two Sentinel-3 satellites implies a difference in the acquisition time gap for the two dual products: 39.5 min for S3A products, and 61.5 min for S3B products. This difference has three main consequences. • S3A products cover all the globe, while S3B products are restricted to latitudes polewards of 50 • (see Figure 1). • S3A allows the production of many more AMVs than S3B. 
According to our study, about twice as many AMVs are derived from S3A as from S3B (see Section 3). • The quality of AMVs, relative to the ECMWF forecast model, is higher for S3B than for S3A (as explained in Section 3). (Figure 1a shows the trace of two successive overpasses of Sentinel-3A, 101 min apart, over the North Pole.) Projection of Scans The main difference between the processing of SLSTR AMVs and that of AVHRR AMVs is the co-registration of the images, prior to the derivation of AMVs. Currently, pairs of AVHRR L1B images are co-registered by mapping the second image onto the reference image. Values are assigned to the nearest neighbouring pixels, and then interpolated with a 2D bilinear interpolation (see Figure 2). The approach described above has several drawbacks. • Depending on the viewing angle, pixels of the reference frame may represent areas from 1 km × 1 km to 5 km × 5 km. The varying spatial resolution over the images induces a scale factor effect, possibly resulting in tracking inconsistency. • The tracking is limited to the frame of the reference image. Clouds in both images which moved from outside of the frame (in the second image) to the inside of the frame cannot be tracked. • This approach uses only one PDU from each orbit. However, successive SLSTR scans may be put in different PDUs, resulting in the loss of potential features lying in these scans. For all these reasons, another methodology is used to co-register SLSTR data: L1B scans are projected onto an equal-area grid (Figure 3 illustrates which grid pixels receive values from the reference image, the dual image, or both). EUMETSAT developed a Generic Projection Tool [8] based on the ground track oblique Cassini projection proposed by Mills [9] for VIIRS data. This tool is generic and can be used for all other LEO satellites like the future EPS-SG/METimage at EUMETSAT. The projection solves the aforementioned problems: it ensures a constant ground resolution over both images; it makes it possible to track features leaving either image frame; and by projecting successive PDUs together on the same grid, features lying at the edges of the reference image may be tracked. All these new elements together allow deriving many more AMVs, and a more consistent tracking across the images. The resolution of the projection, for the operational product to come, is set to 1 km, the nominal spatial resolution at the nadir of the thermal channel of SLSTR. The pixel field-of-view degrades down to almost 5 km on the edges of the swath. This means that a bilinear interpolation is necessary to achieve a constant ground resolution over the whole image. In practice, this interpolation makes the features on the sides of the swath blurry, when compared to features at the centre. However, using the wind guess (see Section 2.3.2) allows the derivation of AMVs of acceptable quality, no matter the native ground resolution (as explained in Section 3.2). For this reason, the finest native ground resolution, 1 km, is preferred to other projection resolutions, which give fewer AMVs, although of marginally better quality. Target Selection From here on, the algorithm is similar to those used for all other sensors utilised at EUMETSAT, including EPS-AVHRR and MSG-SEVIRI. The reference frame is divided into a regular grid of cells of 24 pixels. For each of these cells, we define a 32-pixel sized window, with the same centre as the cell.
Within this window, the algorithm screens every frame of 24 × 24 pixels in search of the one with highest contrast. To be selected as a target, the frame must meet the following criteria. • At least 25 % of the target's pixels shall be cloudy. Whether a pixel is cloudy is determined by the cloud mask available in the L1B product. • Have a stricly positive contrast. • The whole target lies in the frame of the reference image. • The whole search area (as defined in Section 2.3.2) lies in the trace of the second image. The selection of a target leads to the derivation of an AMV, unless the final correlation found with the matching target, at the motion tracking step, is below a given threshold. Motion Tracking The tracking is performed backwards, that is, from the reference image to the dual image. Given the significant time gap between images for LEO satellites, a first guess is computed from wind forecasts to locate the search area's position. It reduces computing time drastically and improves the robustness of AMV tracking. The impacts of the use of the guess are explained by Borde et Garcia Pereda [10]. The forecasted wind is retrieved at the location of the centre of the target, and at the height where the atmospheric temperature equals the average brightness temperature of the 20 % coldest pixels in the target. This rough estimation of the cloud height is also applied to AVHRR data [11]. We limit the search of a matching target to a search area of size 100 × 100 pixels around the guessed feature location in the dual image. The cross-correlation between the starting target and the search area is computed, producing a correlation surface. The size of the search area relies on the expected error on the forecast wind. Since the time gap between the images varies among the AMV products, that size should also vary. This capability was introduced in the AVHRR AMV processing by the multiplication of the correlation surface with a Gaussian kernel. The objective is to maintain consistent quality between the two S3A and S3B products while decreasing the likelihood of deriving an outlier AMV. Consider (x, y) the coordinates of a pixel in the correlation surface, (0, 0) being its centre, v the speed (in m/s) of the guess vector, ∆t the time gap between the acquisitions of the images (in seconds) and r the spatial resolution of the projection (in metres), then g(x, y), the Gaussian weight at (x, y) is given by Equation (1). In Equation (1), the parameters a, b and c can be adjusted depending on the confidence put in the forecast model. For the operational product, the values are a = 0.5, b = 10 m/s and c = 2. The maximum of the multiplied correlation surface defines the centre of the matching target. This approach is summarised in Figure 4. Height Assignment The height assignment is based on the Cross-Correlation Contribution (CCC) method [12]. Basically, pixels that contribute the most to the final cross-correlation of the matched targets are selected, and the temperature used for the height assignment is the weighted mean of the temperatures of these pixels, the weights being the cross-correlation contributions. This temperature is placed in the ECMWF temperature profile, and the corresponding height is assigned to the AMV. Profiles are checked for temperature inversion and tropopause before assignment. Final Product The products will be disseminated in BUFR format [13], following the standard sequence 3-10-077. 
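As an illustration of the motion-tracking step described above (Section 2.3.2), the following Python sketch performs a Gaussian-weighted cross-correlation match. It is not the operational EUMETSAT code, and the exact functional form of Equation (1) is not reproduced: the kernel below is a generic isotropic Gaussian whose width is assumed to scale with the guess speed v, the time gap dt and the grid resolution r, and all function names and parameter scalings are illustrative placeholders.

# Illustrative sketch (not operational code) of the Gaussian-weighted
# cross-correlation matching used for motion tracking; the weighting kernel
# is a placeholder since the exact Equation (1) is not reproduced here.
import numpy as np
from scipy.signal import correlate2d

def gaussian_weight(shape, v, dt, r, a=0.5, b=10.0, c=2.0):
    # Hypothetical weighting surface centred on the first-guess position;
    # a sets the floor weight far from the guess, b and c tune the width.
    ny, nx = shape
    y, x = np.mgrid[-(ny // 2):ny - ny // 2, -(nx // 2):nx - nx // 2]
    sigma_px = (b + v / c) * dt / r            # assumed width scaling, in pixels
    return (1.0 - a) + a * np.exp(-(x**2 + y**2) / (2.0 * sigma_px**2))

def track_target(target, search_area, v_guess, dt, r):
    # Cross-correlation surface between the 24x24 target and the 100x100
    # search area (patches demeaned globally -- a simplification of full NCC).
    t = target - target.mean()
    s = search_area - search_area.mean()
    corr = correlate2d(s, t, mode="valid")
    local_norm = np.sqrt(correlate2d(s**2, np.ones_like(t), mode="valid"))
    corr /= (np.linalg.norm(t) * local_norm + 1e-12)
    corr *= gaussian_weight(corr.shape, v_guess, dt, r)
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    dy = iy - (corr.shape[0] - 1) / 2          # displacement of the best match
    dx = ix - (corr.shape[1] - 1) / 2          # relative to the guess position
    return dx, dy, corr.max()

# Usage: dx, dy, cc = track_target(target, search, v_guess=15.0, dt=39.5*60, r=1000.0)
# The wind speed estimate is ~ r*hypot(dx, dy)/dt (m/s); the vector is rejected
# if the final correlation cc falls below a given threshold.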
The satellite identifier for the SLSTR AMV product is coded 856, "combination of Sentinel-3 satellites". Up to one AMV product per L1B PDU can be derived, depending on the overlap and the cloudiness. In the offline data in production at EUMETSAT since June 2020, actual numbers of products are on average 480 per day for S3A, and 280 per day for S3B. Indeed, the two products are asymmetric, as explained in Section 2.1.2. Comparison to ECMWF Forecast Data Our AMV data are first checked against the ECMWF forecast model. The forecast is available in the form of GRIB files with a 3-h step. We interpolate this information spatially (in height and planar location) and temporally to get the forecasted wind vector at the location and time of the AMV. We study here one month of SLSTR AMV data, 15 December 2020 to 15 January 2021. Several statistics and plots are shown here to illustrate the comparison. First, we show the scatter plots of AMVs speeds against co-located forecast wind speeds, Figure 5. Second, we plot the biases, Root Mean Square (RMS) errors and Root Mean Square Vector Differences (RMSVD) of AMVs against forecast as functions of the altitude, Figure 6. Third, we report a series of statistics, including RMS error on the speeds, RMSVD, overall biases, and numbers of AVMs derived, Table 1. Statistics are reported for two sets of data: the full set, and the set of good quality AMVs, conventionally defined (for LEO satellites) as the AMVs faster than 2.5 m/s and of Quality Index (QI) [14] above 60 among the wind scientific community. AMVs derived from both satellites present an RMSVD around 5 m/s, which is in line with statistics of AMVs from other sensors exploited by EUMETSAT. The RMS and RMSVD are slightly better for S3B. The higher time gap between the images may explain that behaviour [15]. The motion tracking achieves the same precision, in both cases, but the displacement estimate is divided by a longer time gap, resulting in a lower uncertainty on the wind vector [11]. Overall biases remain low in both cases, although increasing (in absolute value) at high altitudes (<400 hPa). The vast majority of AMVs output by the algorithm are assigned at low and mid altitudes (>600 hPa). Important differences in biases appear at high altitudes between S3A and S3B. This is caused by the difference in coverage of the products: the S3A product is global, it can include many high AMVs in the tropics, where the bias is very high. The S3B product only covers latitudes polewards of 50 • , meaning it does not include all these high altitudes AMVs. When limiting the S3A products to AMVs derived polewards of 50 • (that is, the areas covered by S3B), AMV biases from S3A and S3B are similar at all altitudes (see Figure 6b,c). The stability of the statistics is also studied time-wise and location-wise. Figure 7 shows the variation of the RMS, RMSVD and bias for both satellites over the period covered by the dataset, and Figure 8 is a world map of the bias AMV-forecast model, for S3A, over the same period. The statistics remain consistent over time. Regarding spatial stability, the bias varies significantly across the globe, with a noticeable positive bias in the tropical zones. This behaviour is in agreement with observations from other sensors [7]. In some areas (around 10 • to 20 • latitude in Africa and Asia), too few AMVs of sufficient QI were derived to allow to compute a meaningful bias. 
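The comparison statistics quoted above can be reproduced, in principle, with a few lines of code. The sketch below follows the usual AMV-validation conventions for the speed bias, the RMS error on speed and the RMSVD; it is a minimal illustration with hypothetical array names, not the operational verification chain.

# Minimal sketch of the AMV-vs-forecast comparison statistics (speed bias,
# RMS error on speed, RMS vector difference). Definitions follow common
# AMV validation conventions; variable names are illustrative only.
import numpy as np

def amv_statistics(u_amv, v_amv, u_fc, v_fc, speed_min=2.5, qi=None, qi_min=60):
    # Returns bias, RMS speed error and RMSVD (all in m/s) for the AMVs kept
    # by the conventional 'good quality' selection (speed > 2.5 m/s, QI > 60).
    spd_amv = np.hypot(u_amv, v_amv)
    spd_fc = np.hypot(u_fc, v_fc)
    keep = spd_amv > speed_min
    if qi is not None:
        keep &= qi > qi_min
    bias = np.mean(spd_amv[keep] - spd_fc[keep])
    rms = np.sqrt(np.mean((spd_amv[keep] - spd_fc[keep]) ** 2))
    rmsvd = np.sqrt(np.mean((u_amv[keep] - u_fc[keep]) ** 2 +
                            (v_amv[keep] - v_fc[keep]) ** 2))
    return bias, rms, rmsvd, int(keep.sum())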
It is to be noted that SLSTR is tilted so that it never senses any area South of −86 • latitude, hence the absence of data at these latitudes in Figure 8. Impact of the Projection Grid Resolution To justify the choice for the projection grid resolution of 1 km, we report hereafter the statistics of AMVs on the same period, but produced with a 2 km projection grid resolution. Projecting with a resolution twice as coarse has two major consequences: the total number of AMVs is divided by four, but the target box covers a geographical area four times as large, as illustrated in Figure 9. The scatter plot of AMVs against the ECMWF forecast model is shown in Figure 10. The corresponding statistics are reported in Table 2. The results, for AMVs of speed > 2.5 m/s and QI > 60 are: RMS = 3.64 m/s, RMSVD = 5.00 m/s, bias = −0.25 m/s and 1,984,776 AMVs were derived. This is about four times less AMVs as for the 1 km resolution, which coincides with the decrease in the number of pixels. The performance statistics remain however similar. Thus, our choice is to keep 1 km as projection resolution to ensure the retrieval of the maximum number of AMVs. Comparison between SLSTR and AHVRR Performances Until the operational release of the SLSTR AMV product, EPS-AVHRR is the only source of AMVs derived from a LEO satellite at EUMETSAT, other sources being MODIS and VIIRS, operated by NOAA. In order to validate SLSTR AMVs, we compare their overall statistics to those of AVHRR AMVs. AVHRR AMVs and SLSTR AMVs are derived from very similar wavelengths, around 10,800 nm, and target and search box sizes are similar. However, there are two major differences between AVHRR and SLSTR AMVs. First, the swath width of SLSTR, 1,420 km, is less than half that of AVHRR, resulting in a much lower number of AMVs derived from each Sentinel-3 satellite. Second, the temporal gap varies a lot depending on which satellite is the reference. In particular, the time gap for S3B products is the highest, with more than a full hour of gap between two successive images. These properties are summarised in Table 3. Consequently, far fewer AMVs can be derived daily from either S3A or S3B. To illustrate this, we have computed the overall statistics of AVHRR AMVs for the same period of 15 December 2020 to 15 January 2021. Results are summarised in Table 4. The quality of AMVs is quite comparable across all satellites, in terms of RMSVD and bias. However, the reduced swath width results in significant differences in the number of AMVs. Especially for S3B, the monthly number of AMVs is less than half the equivalent number from any Metop satellite. Furthermore, a higher overlap between reference and dual images implies better performance, since the ground resolution of the pixels is more consistent across images. This fact is reflected in Table 4 by better performance statistics for Metop-C than for Metop-A, which is currently (February 2021, time of writing) drifting away from the nominal orbital plane. Overall, S3A and S3B still represent a monthly contribution of 15 million AMVs with QI > 60. This amount will be a significant source of additional data for NWP systems. Comparison to Co-Localised MSG Data Further validation of the product is made by comparing AMVs to co-located AMVs derived from MSG-SEVIRI. EUMETSAT derives AMVs from bands 2 (visible, at 810 nm), 5 (water vapour, 6,250 nm), 6 (water vapour, 7,350 nm), 9 (thermal infrared, 10,800 nm) and 12 (broadband). 
In the following, two Meteosat satellites are used for the comparison: Meteosat-11 (MET11), centred at 0 • , and Meteosat-8 (MET08), centred at 41.5 • East. Colocation of SLSTR AMVs with AMVs from SEVIRI in the geostationary disks is performed on the period 12 January 2021 to 7 February 2021, with the following criteria: • The QI of SLSTR vectors shall be above 60, and the QI of SEVIRI vectors shall be above 80. • There shall be less than 45 min between the two AMVs. • The distance between AMV locations shall be less than 50 km. • The difference in height between the two AMVs shall be less than 25 hPa. The scatter plots of SEVIRI AMVs, for MET08 and MET11, against SLSTR AMVs, for S3A and S3B, are shown in Figure 11. The statistics are reported in Table 5. Given that S3B AMVs cover latitudes above 50 • , there is minimal overlap between the S3B products coverage and the coverage of geo-winds products, resulting in a meagre number of matches for S3B. Nevertheless, in all cases, there is good agreement between the two sets of AMVs, with a noticeable positive bias. Considering that the biases against the model are high in the tropical regions (see Figure 8) which are the most represented regions in the geostationary disks, this is not surprising. Apart from the bias, the RMSVD suggests there is consistency between geostationary and LEO AMVs. Conclusions EUMETSAT presently derives offline and routinely AMVs from pairs of images from Sentinel-3A and B. When checked against the ECMWF forecast model, the quality of AMVs is similar to that of AMVs derived from AVHRR data. This will guarantee consistency between operational AMV products disseminated by EUMETSAT. The narrower swath of SLSTR, compared to AVHRR, results in lower coverage, and then, far fewer AMVs are derived daily from SLSTR than from AVHRR. However, using an equal-area projection grid allows the mitigation of some caveats of the current coregistration method used for AVHRR. SLSTR AMV products, like AVHRR AMV products, will help to measure winds at high latitudes (polewards of 50 • ), not covered by sensors on board geostationary satellites. After validation of the offline products by end users, EUMETSAT plans to start disseminating SLSTR AMV products operationally in 2022.
5,413.8
2021-04-28T00:00:00.000
[ "Environmental Science", "Physics" ]
Grape Seed Proanthocyanidins Induce Autophagy and Modulate Survivin in HepG2 Cells and Inhibit Xenograft Tumor Growth in Vivo Liver cancer is one of the leading causes of death worldwide. Although radiotherapy and chemotherapy are effective in general, they present various side effects, significantly limiting the curative effect. Increasing evidence has shown that the dietary intake of phytochemicals plays an essential role in the chemoprevention or chemotherapy of tumors. In this work, HepG2 cells and nude mice with HepG2-derived xenografts were treated with grape seed proanthocyanidins (GSPs). The results showed that GSPs induced autophagy, and inhibition of autophagy increased apoptosis in HepG2 cells. In addition, GSPs also reduced the expression of survivin. Moreover, survivin was involved in GSPs-induced apoptosis. GSPs at 100 mg/kg and 200 mg/kg significantly inhibited the growth of HepG2 cells in nude mice without causing observable toxicity and autophagy, while inducing the phosphorylation of mitogen-activated protein kinase (MAPK) pathway-associated proteins, p-JNK, p-ERK and p-p38 MAPK and reducing the expression of survivin. These results suggested that GSPs might be promising phytochemicals against liver cancer. Introduction Globally, liver cancer remains a serious threat to human health. Hepatocellular carcinoma (HCC) accounts for about 85-90% of all primary liver malignancies [1]. Most liver cancer patients are already in the advanced stage when they are diagnosed, unlike the early stage patients whose malignant tissue can be removed by surgical resection [2]. Although radiotherapy and chemotherapy are effective in general, they present various side effects, significantly limiting the curative effect [3]. Increasing evidence has shown that the dietary intake of proanthocyanidins plays an essential role in the chemoprevention or chemotherapy of tumors [4]. In vitro and in vivo toxicity experiments have demonstrated that proanthocyanidins are devoid of toxicity [5] and have an anticancer effect on various human cancers, such as colorectal cancer [6][7][8], pancreatic cancer [9], HCC [10,11], non-small cell lung cancer [12,13], squamous cell carcinoma [14], as well as head and neck squamous cancer [15]. Various fruits, beans, chocolates, fruit juices, wine, beer, and tea are rich in proanthocyanidins, and the proanthocyanidins in grape seeds are the most abundant [16]. Grape seed proanthocyanidins (GSPs) are formed by the polymerization of catechins and/or epicatechins, in the form of dimers, trimers, tetramers, and oligomers/polymers [17]. Although proanthocyanidins reportedly have anticancer effects on some human cancers, the precise mechanisms of HCC-associated cell death remain unclear. A previous study indicated that GSPs could markedly inhibit the growth of HepG2 liver cancer cells and induce apoptosis and the phosphorylation of mitogen-activated protein kinase (MAPK) Cell Culture The HCC HepG2 cells were generously gifted by Prof. Hongbo Hu (China Agricultural University, Beijing, China). HepG2 cells were cultured in DMEM supplemented with 10% heat-inactivated fetal bovine serum and 1% penicillin streptomycin. All cells were grown in a 5% CO 2 humidified incubator at 37 • C. RNA Isolation and Quantitative Real-Time Polymerase Chain Reaction (qPCR) Analysis The HepG2 cells were grown in 6-well plates, and the total RNA was extracted using Trizol (Takara, Dalian, China). 
cDNA was prepared according to the manufacturer's instructions using a HiFiScript cDNA Synthesis Kit (Cwbio, Beijing, China). The amplification of GAPDH was used as the internal reference gene to normalize the expression of the selected genes. The primer sequences were survivin-sense (5 -TACGCCTGTAATACCAGCAC-3 ); survivin-antisense (5 -TCTCCGCAGTTTCCTCAA-3 ) [19]; GAPDH-sense (5 -TCTGGTAAAGTGGATATTGTTG-3 ); and GAPDH -antisense (5 -GATGGTGATGGGATTTCC-3 ) [20]. Two-step qPCR was performed using a CFX96 Connect TM Real-Time PCR System (Bio-Rad, USA). Each reaction was conducted in triplicate with a reaction volume of 20 µL containing 0.4 µL of each primer (10 µM), 10 µL of the UltraSYBR mixture (Cwbio, Beijing, China), 7 µL of diluted cDNA, and 2.2 µL of ddH 2 O. A thermal cycling protocol, involving pre-denaturation at 95 • C for 10 min, followed by 40 cycles of amplification (denaturation at 95 • C for 15 s and annealing at 60 • C for 1 min), was used. The melting curve analysis was conducted from 65 • C to 95 • C. Relative gene expression was calculated using the 2 −∆∆CT method, as described by Livak & Schmittgen [21]. Untreated cells were considered to be the reference sample, which was defined as expression =1, and the results were expressed as the fold-change in comparison with the reference sample. Apoptosis Analysis with Flow Cytometry The apoptosis of HepG2 cells was analyzed with flow cytometry using the TransDetect Annexin V-FITC/PI Cell Apoptosis Detection Kit (Transgen Biotech, Beijing, China). Briefly, cells were plated into 6-well plates and cultured for 24 h. Following this treatment, the floating and adherent cells were collected, washed twice with cold PBS, and resuspended in 100 µL of ice-cold 1×Annexin V Binding buffer followed by a mixture of 5 µL of Annexin V-FITC and 5 µL of PI. The cells were incubated for 15 min in the dark at room temperature followed by mixing of 400 µL of ice-cold 1×Annexin V Binding buffer. The stained cells were then detected using a FACSCalibur flow cytometer (BD Biosciences, San Jose, USA). The data were analyzed by FlowJo 10 software (Tree Star, Inc., Ashland, OR, USA). Western Blot Analysis Briefly, cells were washed twice with cold PBS and then scraped off in a RIPA lysis buffer (50 mM Tris (pH 7.4), 150 mM NaCl, 1% Triton X-100, 1% sodium deoxycholate, 0.1% SDS, sodium orthovanadate, sodium fluoride, EDTA, and leupeptin) (Beyotime Biotechnology, Shanghai, China) containing protease inhibitor PMSF (1 mM). Tumor tissues were added to the RIPA lysis buffer containing protease inhibitor PMSF (1 mM), ground with a tissue grinder, and centrifuged to obtain the supernatant. The concentration of the protein samples was quantified using the BCA protein assay (Pierce ® BCA Protein Assay Kit, Thermo Fisher Scientific, MA, USA). Equal amounts of denatured proteins (20-40 µg/well) were subjected to SDS-PAGE gel (10% or 15%) electrophoresis and transferred onto polyvinylidene fluoride (PVDF) membranes (Immobilon ® -P Transfer Membrane, Millipore) via wet transfer. The PVDF membranes were then blocked in 5% skim milk in TBS-Tween-20 (TBST) for 1 h at room temperature and incubated overnight with specific primary antibodies at 4 • C. Each membrane was washed three times with TBST and incubated with secondary antibody-horseradish peroxidase (HRP) conjugated with anti-rabbit IgG diluted in 5% skim milk at room temperature for 1 h, followed by three washes with TBST. 
Finally, immunoreactive bands were exposed to enhanced chemiluminescence (ECL) reagents to visualize the HRP signal. Observation of Acidic Vesicular Organelles (AVOs) Formation The observation of AVOs formation was performed as previously described using acridine orange (AO) staining [22]. Briefly, GSPs-treated cells were washed twice with PBS, followed by staining with 1 µg/mL AO for 30 min at room temperature. Afterward, cells were washed twice with PBS and observed using a fluorescence microscopy (Nikon Eclipse Ti) equipped with a green filter. Cell Transfection Cells were grown on 6-well cell culture plates for 24 h and then transfected with 2.5 µg of pQCXIP-GFP-LC3 for 24 h, followed by treatment with GSPs for 24 h. Following GSPs treatment, the formation of autophagic puncta was detected with a fluorescence microscope equipped with a blue filter. For the transfection of pcDNA3.1-survivin, cells were grown on 6-well cell culture plates for 24 h and then transfected with 2.5 µg of pcDNA3.1-survivin for 24 h, followed by treatment with GSPs for 24 h. Afterward, apoptosis of the HepG2 cells was analyzed with flow cytometry. HepG2-Derived Nude Mice Xenograft Model Female BALB/C nude mice (4-5 weeks old) were purchased from the Beijing Vital River Laboratory Animal Technology Co., Ltd. (Beijing, China) and housed there in a SPF barrier environment that was maintained at a constant temperature (23-25 • C) and humidity (50-60%). All animals had free access to drinking water and food and received humane treatment. The animal protocol was approved by the Institutional Animal Care and Use Committee, Beijing Vital River Laboratory Animal Technology Co., Ltd. HepG2 cells (2.5 × 10 6 cells in 100 µL PBS per mouse) were injected subcutaneously on the right side of the back of nude mice. Thirteen days later, the tumor volume reached about 100 mm 3 , and mice were randomly divided into control (pure water) and GSPs treatment groups (100 mg/kg and 200 mg/kg body weight) via oral daily gavage. The length and width of the tumor were measured with a vernier caliper, and the mice were weighed every other day to determine their respective body weight. The volumes of the tumors were calculated using the formula: Volume = (length × width 2 )/2. At the termination of the experiment, the mice were sacrificed by CO 2 euthanasia, and the tumor mass was harvested and weighed. A portion of the tumor tissue was paraffin-embedded for immunohistochemistry, and the other part was frozen in liquid nitrogen and stored at −80 • C for further analysis. Immunohistochemistry Ki67 staining was performed according to an immunohistochemical staining standard protocol. The samples were incubated overnight with ki67 antibodies (1:1000) at 4 • C. The HRP-labeled secondary antibody was then incubated at room temperature. Staining with 3,3 -diaminobenzidine was used as a chromogen and Mayer's hematoxylin as a counterstain to the sections. Image-Pro Plus 6.0 (Media Cybernetics, Inc., Rockville, MD, USA) software was used to select the same brown-yellow cell nuclei as the unified standard for judging all photo-positive cells. The same blue cell nuclei as other cells were selected, and each photo was analyzed to obtain the number of positive cells and total cells, while the positive rate (%), positive cells / total cells × 100, was calculated. 
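For reference, the three simple quantities defined in the Methods (relative expression by the 2^-ΔΔCt method, xenograft tumour volume, and the Ki67 positive rate) can be computed as in the following minimal sketch; the numerical values in the usage comments are placeholders, not study data.

# Illustrative helpers for quantities defined in the Methods: relative
# expression by the 2^-ddCt method, xenograft tumour volume, and the Ki67
# positive rate. Numbers in the usage comments are placeholders.

def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    # Relative expression by 2^-ddCt (Livak & Schmittgen), normalised to GAPDH
    # and expressed relative to the untreated reference sample.
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

def tumor_volume(length_mm, width_mm):
    # Volume = (length x width^2) / 2, in mm^3.
    return length_mm * width_mm ** 2 / 2.0

def ki67_positive_rate(positive_cells, total_cells):
    # Positive rate (%) = positive cells / total cells x 100.
    return 100.0 * positive_cells / total_cells

# Example with placeholder numbers:
# fold_change(26.0, 18.0, 24.5, 18.0) -> 2^(-1.5) ~ 0.35 (down-regulation)
# tumor_volume(8.0, 5.0) -> 100.0 mm^3 (the size at which treatment started)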
Histopathological Examination The paraffin sections of the tumor were deparaffinized in xylene and rehydrated through descending concentrations of ethanol according to routinely used methods and were stained with hematoxylin and eosin (HE) as previously described [23]. All sections were examined under a light microscope. Statistical Analysis The data of three independent experiments were expressed as mean ± standard deviation (SD). Statistical analysis was performed with an analysis of variance using SPSS version 21 (SPSS Inc., Chicago, USA). Duncan's multiple range test was performed to determine the significant difference. Differences at p < 0.05 were considered to be statistically significant. GSPs Induced Autophagy in HepG2 Cells The change in autophagy marker LC3 was first detected with Western blotting to investigate whether GSPs could induce autophagy in HepG2 cells. The expression of LC3 II is increased when autophagy occurs [24]. As shown in Figure 1a, the expression of LC3 II increased dramatically after treatment with 10 mg/L GSPs for 24 h and 48 h, respectively in HepG2 cells compared with the control group. Further confirmation regarding whether GSPs could induce autophagy in HepG2 cells was obtained by transfecting these cells with pQCXIP-GFP-LC3 for 24 h followed by treatment with 10 mg/L GSPs for 24 h to observe autophagic puncta. Figure 1b shows the formation of autophagic puncta (red arrow indication) in GSPs-treated cells transfected with pQCXIP-GFP-LC3 using a fluorescence microscopy. Also, to further demonstrate that GSPs treatment could induce autophagy in vitro, HepG2 cells were further stained with AO. AO is a fluorescent dye that crosses the cell membrane and enters the cell nucleus to form a uniform green fluorescence indicating DNA. AO can be protonated and trapped in AVOs, resulting in its metachromatic shift to red fluorescence [25]. Therefore, the fluorescence intensity of AO can directly reflect the number of autophagic vacuoles formed in the cells, that is, a higher fluorescence intensity causes the formation of more autophagic vacuoles. As shown in Figure 1c, the red fluorescence in HepG2 cells was markedly enhanced after GSPs treatment for 24 h and 48 h, confirming that GSPs could induce autophagy in HepG2 cells. The transfection efficiency of pQCXIP-GFP-LC3 was detected with Western blotting, and the autophagic puncta (red arrow indication) were observed using a fluorescence microscope. (c) HepG2 cells were treated with 10 mg/L GSPs for 24 h and 48 h, then stained with AO (1 μg/mL), while AVOs formation was observed using a fluorescence microscope. The data of three independent experiments were expressed as mean ± SD. Duncan's multiple range test was performed to determine the significant difference. ** and *** indicate that the values of treatment were significantly different at p < 0.01 and p < 0.001, respectively. Inhibition of Autophagy Increased Early Stage Apoptosis of HepG2 Cells Results indicated that GSPs could induce both apoptosis [18] and autophagy ( Figure 1) in HepG2 cells. To investigate the relationship between apoptosis and autophagy, HepG2 cells were pretreated with the autophagy inhibitor, 3-MA (1 mM) for 1 h, and then treated with GSPs for 24 h, after which apoptosis was measured with flow cytometry (Figure 2). The results showed that cells in the early stage of apoptosis increased after the inhibition of autophagy, but no significant effect on the number of cells in the late stage of apoptosis was observed. 
These findings suggested that GSPs might cause the two forms of programmed death, apoptosis and autophagy, to cascade and transform, which constituted a complex system of programmed cell death together. GSPs Significantly Reduced the Expression of Survivin in HepG2 Cells Survivin plays an essential role in the regulation of apoptosis. Therefore, the changes in survivin at the mRNA and protein levels after treatment with GSPs were determined first. Results indicated that treatment with GSPs for 24 h and 48 h significantly reduced the expression of survivin at the mRNA and protein levels in HepG2 cells (Figure 3). The data of three independent experiments were expressed as mean ± SD. Duncan's multiple range test was performed to determine the significant difference. * and ** indicate that the values of treatment were significantly different at p < 0.05 and p < 0.01, respectively, compared with the untreated cell. Survivin Was Involved in GSPs-induced Apoptosis An overexpression vector of pcDNA3.1-survivin was constructed to further assess whether survivin was involved in GSPs-induced apoptosis. HepG2 cells were transfected with the overexpression vector for 24 h and then treated with 10 mg/L GSPs for 24 h. The apoptosis of the cells was measured with flow cytometry. The results showed that transfection of pcDNA3.1-survivin reduced the number of cells with early apoptosis induced by GSPs, but had no significant effect on the number of cells with late apoptosis (Figure 4). The data of three independent experiments were expressed as mean ± SD. Duncan's multiple range test was performed to determine the significant difference. Different letters indicate significant differences at p < 0.05. GSPs Inhibited the Growth of HepG2 Cells without Displaying Observable Toxicity in Nude Mice Since GSPs could significantly inhibit the growth of HepG2 cells in vitro [18], an investigation into whether GSPs could also inhibit the growth of HepG2 cells in vivo was conducted. The xenograft model in nude mice showed that 100 mg/kg and 200 mg/kg GSPs significantly reduced tumor size and tumor weight in nude mice (Figure 5a,b). The ki67 represented the degree of tumor cell proliferation, while the results of ki67 immunohistochemistry also showed that 200 mg/kg GSPs could significantly reduce the ki67 positive rate of the tumor (Figure 5c). Moreover, whether the dose of GSPs displayed observable toxicity in nude mice was also evaluated. Therefore, changes in the body weight of the nude mice during gavage with GSPs; two indicators of liver function, namely alanine aminotransferase (ALT) and aspartate aminotransferase (AST); an indicator of kidney function, creatinine (Cr); and HE staining of the liver were evaluated. The results showed that GSPs doses at 100 mg/kg and 200 mg/kg did not significantly affect body weight, ALT, AST, and Cr in nude mice, while the HE staining showed no damage to the liver of nude mice (Figure 5d-f).
When the tumor size was about 100 mm 3 , the nude mice were treated with GSPs (0 mg/kg, 100 mg/kg, and 200 mg/kg) by oral gavage and were weighed every other day. (e) Following GSPs treatment, the nude mice were sacrificed, blood was collected from the heart, and the levels of alanine aminotransferase (ALT), aspartate aminotransferase (AST) and creatinine (Cr) in serum were determined. The p value is the difference between each group and the group of 0 mg/kg GSPs. (f) HE staining of the liver. The data were expressed as mean ± SD (n = 5). Duncan's multiple range test was performed to determine the significant difference. * indicates that the values of the treatment are significantly different at p < 0.05 compared with the control. Different letters indicate significant differences at p < 0.05. GSPs Induced the Phosphorylation of the MAPK Pathway-Associated Proteins, and Decreased the Expression of Survivin In the nude mouse xenograft model, GSPs enhanced the phosphorylation levels of the MAPK pathway proteins, and the most significant increase was displayed in the p-ERK level (Figure 6a). GSPs also reduced the expression of survivin in tumor tissues (Figure 6b). The results of Western blotting showed that GSPs did not enhance LC3 II in tumor tissues (Figure 6c). The expression of LC3-I and LC3-II at the protein level was detected with Western blotting. The data of three independent experiments were expressed as mean ± SD. Duncan's multiple range test was performed to determine the significant difference. The p value is the difference between each group and the group of 0 mg/kg GSPs. Discussion There is growing evidence that phytochemicals, which target the autophagy pathway, present a promising approach for cancer therapy [26]. The anti-tumor mechanism of these phytochemicals was verified to induce autophagy in cancer cells and cause programmed cell death [26]. Resveratrol, a non-flavonoid polyphenolic compound, induced autophagy in different ovarian cancer cell lines [27]. B-group triterpenoid saponins from soybean could induce macroautophagy, while down-regulating Akt and up-regulating ERK protein in human colon cancer cells [28]. Vitamin D analog EB1089 induced autophagy in MCF-7 breast cancer cells [29]. Pterostilbene induced autophagy on human oral cancer cells through the modulation of the Akt and MAPK pathways [22].
Procyanidins from Vitis vinifera seeds induced apoptotic and autophagic cell death via the generation of reactive oxygen species in squamous cell carcinoma cells [14]. Furthermore, in this work, the results also indicated that GSPs could induce significant autophagy in HepG2 cells (Figure 1), which may be one of the mechanisms by which GSPs inhibited the growth of HepG2 cells. The in vitro studies have demonstrated that GSPs induced autophagy in HepG2 cells (Figure 1), while no significant autophagy was observed in the xenograft model ( Figure 6c). This may be due to the following reasons. On the one hand, proanthocyanidins may exert an effect by interacting with other components of the gut, such as lipids or iron [30]. On the other hand, polymeric proanthocyanidins were catabolized by human colonic microflora into low molecular weight phenolic acids, such as m-hydroxyphenylpropionic acid, m-hydroxyphenylacetic acid, and m-hydroxybenzoic acid [31,32]. The bioavailability of GSPs and their metabolites in tumor tissues was also an important aspect [4]. Therefore, many factors, including studies in vitro, studies in vivo, and clinical trials in anti-tumor studies, should be considered. Further studies regarding the bioavailability, metabolism, toxicity, and pharmacodynamics of GSPs are necessary to promote the understanding of their beneficial effects on human health. The standardization of the dosage, composition of the grape seeds extract, and duration of the studies are also a necessity to shed light onto the cause-effect relationship between the intake of GSPs and their health effects in a more accurate way [33]. Survivin is the smallest member of the inhibitor of apoptosis (IAP) gene family [34,35] and not only inhibits apoptosis, promotes cell mitosis, and regulates the cell cycle, but also increases cell proliferation [36]. Studies have shown that survivin was highly expressed in most human tumor tissues, such as liver cancer, lung cancer, gastric cancer, pancreatic cancer, breast cancer, and other malignant tumors, but not in most normal tissues [37,38]. A phosphorothioate antisense oligonucleotide was the first described molecular antagonist of survivin, suppressing it in mRNA and protein expression, and producing strong anticancer activity in preclinical models [39]. In this work, results showed that GSPs significantly reduced the expression of survivin at the mRNA and protein levels ( Figure 3). GSPs treatment significantly reduced the number of early apoptotic HepG2 cells transfected with the survivin overexpression vector (Figure 4). Survivin may be involved in GSPs-induced apoptosis, which is suggested as one of the pathways by which GSPs inhibit the proliferation of HepG2 cells. Studies have also shown that GSPs were able to inhibit tumors in vivo. Previous studies showed that GSPs could inhibited the tumor growth of HeLa and SiHa cervical cancer cells [40], HCT116 colorectal cancer cells [41] and PC3 prostate cancer cells [42] in xenograft tumor model. Consistent with these studies, it was found that GSPs inhibited the growth of HCC, HepG2 cells in the xenograft of nude mice (Figure 5a-c). Moreover, it was found that proanthocyanidin-rich extract from grape seeds displayed a lack of toxicity during in vitro and in vivo toxicity experiments [5]. 
In this work, changes in the body weights of nude mice during gavage with GSPs, ALT and AST indicators of liver function, Cr indicator of kidney function, and the pathological HE staining of the liver were studied, indicating that the doses of GSPs at 100 mg/kg and 200 mg/kg caused no observable toxicity in nude mice (Figure 5d-f). Conclusions In conclusion, GSPs observably induced autophagy in vitro but not in vivo, and inhibition of autophagy increased the early stage apoptosis of HepG2 cells. In addition, GSPs modulated the expression of survivin, and survivin was involved in GSPs-induced apoptosis. In vivo studies showed that GSPs inhibited the growth of HepG2 cells without observable toxicity in nude mice; induced the phosphorylation of the MAPK pathway-associated proteins, p-JNK, p-ERK and p-p38 MAPK; and decreased the expression of survivin. Overall, the data suggest GSPs as promising phytochemicals with anti-cancer properties that can be potentially used to target HCC. Author Contributions: L.W. conceived and designed the study, performed the experiment, analyzed the data, and wrote the manuscript. J.Z. conceived and designed the study, revised the manuscript, and secured the funds to support this research. W.H. conceived and designed the study, and secured the funds to support this research.
6,274.8
2019-12-01T00:00:00.000
[ "Biology", "Medicine" ]
Information leakage via side channels in freespace BB84 quantum cryptography While the BB84 protocol is in principle secure, real implementations suffer from imperfections. Here, we analyse a free space BB84 transmitter, operating with polarization encoded attenuated pulses. We report on measurements of all degrees of freedom of the transmitted photons in order to estimate potential side channels of the state preparation at Alice. Introduction Quantum key distribution (QKD) [1], especially in terms of key growing, can offer, in principle, a level of security [2] that cannot be reached with any classical method. When it comes to real implementations, however, security proofs for ideal protocols might have to be extended in order to also cover potential design flaws of a specific system. Otherwise, an adversary (usually called Eve) might be able to gather information about the transmitted key, for example by employing possible correlations between the different degrees of freedom of the photons that implement the qubits. Thereby, Eve would not cause quantum bit errors and hence would not be revealed. These so-called side channels are a major threat to the security of real QKD devices [3]- [5]. It is therefore essential to investigate QKD systems carefully with respect to their compliance with the theoretical assumptions and idealizations in the security proofs. In this work, the system in question is our freespace BB84 [6] QKD system extended to decoy states [7]- [10]. For practical reasons the transmitter uses eight distinct laser diodes to prepare the quantum states according to the protocol. The photons prepared by this QKD transmitter then feature three degrees of freedom besides the polarization used for the protocol: while remaining undetected by the BB84 protocol, an adversary can measure the spatial, spectral and temporal properties of the photons. As the value of these degrees of freedom might differ for the various laser diodes, their measurement would allow us to determine, with a certain probability of success, the key bit of the sender. In order to determine a possible information leakage in the case of distinct measurements on single transmitted pulses by an eavesdropper, we therefore characterized the degrees of freedom in question of the transmitted pulses. Even though this study analyses a particular system, the side channels described here can also be found in other implementations: whenever different sources or signal pathways are used to prepare the states in the transmitter, they might be distinguishable. QKD setup Our QKD setup, sketched in figure 1, implements the BB84 [6] protocol with polarization encoded qubits. In the transmitter, faint pulses (λ = 850 nm) are prepared, avoiding the extra effort for true single photon pulses. Therefore decoy states and the accordingly modified privacy amplification [2] are used to secure the key distribution. These photonic states have to be encoded in four different polarizations and with two distinct intensities, each. For that purpose often Pockels cells and variable attenuators are used. We opted for a different solution to keep hardware and electronics simple and also to enable an increase of the repetition frequency more easily. The signal pulses (mean photon number per pulse µ signal ) with relative linear polarizations of 0 • , 45 • , 90 • and 135 • are generated by one laser diode each, mounted with its intrinsic polarization aligned, respectively. Thus there is no need for fast polarization rotations. 
The pulse intensities are calibrated digitally using a separate laser driver for each channel. Already at a repetition frequency of 10 MHz, it is very hard to switch between different pulse intensities electronically. This is why we decided to use a second set of four laser diodes for the decoy pulses, calibrated for a mean photon number of µ decoy . When the system is used for QKD, random numbers control which laser diode is to be used for a certain pulse according to the chosen QKD protocol [8,10]. Using eight laser diodes to encode the bit values obviously leads to side channels if the emitted pulses are not perfectly indistinguishable: first of all, their spatial position and orientation differ and might be exploited by an adversary. (Figure 1, simplified diagram of the Alice and Bob setup: D: cube housing the laser diodes, F: fibre mode filter, Q: quarter- and half-wave plates for polarization compensation of the fibre, BS: beamsplitter, CD: detector for calibration of mean photon number, IF: interference filter, S: spatial filter consisting of two lenses and a pinhole, I: iris, A: polarization analysing unit (Bob module) as described in [12].) Therefore, in the Alice module, a custom design of conical and pyramidal mirrors combines the light of the eight diodes into a short piece of single mode fibre acting as a spatial filter [11]. Next, a beamsplitter reflects parts of the light into a single photon detector (silicon avalanche photodiode (APD)) for calibration of the mean photon numbers. Finally, a telescope transmits the remaining photons to Bob. At Bob's site, incoming light is filtered very restrictively, both spectrally and spatially, and analyzed afterwards with respect to its polarization in a randomly chosen basis. This takes place in the actual Bob module (see [12]). Information leakage through side channels Eve might try to measure the spatial (X), spectral (Λ) or temporal (T) properties of the transmitted pulses. If there are correlations of these degrees of freedom with the actual bit value (B) encoded by Alice, Eve can use this side channel to gain knowledge about the key without introducing errors. A quantitative measure of the amount of information, directly accessible by a single, immediate measurement on one pulse, is the mutual information I(X : B), I(Λ : B) and I(T : B), respectively. Most likely, more information on the bit value can be gathered from combined, e.g. temporal and spectral measurements, and even more sophisticated attacks. For example, Eve could measure one or more of the degrees of freedom and then, depending on her outcome, decide individually for every pulse whether to perform an intercept resend attack, to block the pulse or just to guess the bit value. This, however, is beyond the scope of this work. The definition of the mutual information (see e.g. [13]) for a random variable A and the bit value (random variable B = {0, 1}) can be written as I(A : B) = H(B) − H(B|A), (1a) where H(B|A) = −Σ_{α∈A} p(α) Σ_{b∈B} p(b|α) log2 p(b|α). (1b) In the experiments, it is now the task to determine the values of p(b|α). The measurements investigating the physical properties of the eight diodes, however, provide us with the conditional probabilities p(α|d) where d ∈ {H, V, +, −, H decoy , V decoy , + decoy , − decoy } denotes the laser diode, α ∈ {x ∈ X, λ ∈ Λ, t ∈ T}. To obtain p(b|α) instead we have to perform further calculations. As Eve will learn about the basis during sifting we have to treat two cases when averaging p(α|d) in order to get the conditional probabilities p(α|b) with the bit value b ∈ B.
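A minimal numerical sketch of this estimate is given below, assuming the conditional distributions p(α|b) have already been obtained for a discretised degree of freedom; it applies Bayes' theorem and evaluates I(A : B) = H(B) − H(B|A) in bits per pulse. The decoy-weighted averaging of p(α|d) and the per-basis treatment described next are not reproduced, and all names are illustrative.

# Minimal numerical sketch of the side-channel information estimate
# I(A:B) = H(B) - H(B|A), computed from conditional distributions p(alpha|b)
# over a discretised degree of freedom (position, wavelength or arrival time).
import numpy as np

def mutual_information(p_alpha_given_b, p_b=(0.5, 0.5)):
    # p_alpha_given_b: array of shape (2, n_bins); rows are bit values 0 and 1.
    p_alpha_given_b = np.asarray(p_alpha_given_b, dtype=float)
    p_b = np.asarray(p_b, dtype=float)
    p_alpha = p_b @ p_alpha_given_b                       # marginal over alpha
    # p(b|alpha) via Bayes' theorem
    p_b_given_alpha = (p_alpha_given_b * p_b[:, None]) / p_alpha[None, :]
    h_b = -np.sum(p_b * np.log2(p_b))                     # = 1 bit for p_b = (1/2, 1/2)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p_b_given_alpha > 0,
                         p_b_given_alpha * np.log2(p_b_given_alpha), 0.0)
    h_b_given_alpha = -np.sum(p_alpha * terms.sum(axis=0))
    return h_b - h_b_given_alpha                          # bits per detected pulse

# Example: two nearly identical arrival-time histograms leak little information
# p0 = np.array([0.20, 0.50, 0.30]); p1 = np.array([0.22, 0.48, 0.30])
# mutual_information([p0, p1]) -> roughly 5e-4 bits per pulse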
For this purpose we consider a decoy QKD protocol here that transmits both pulse classes equally frequently [8] with pulse intensities µ signal = 0.3 photons pulse −1 and µ decoy = 0.35 photons pulse −1 . We get, for the H/V basis, p(α|b = 0) = [µ signal p(α|H) + µ decoy p(α|H decoy )]/(µ signal + µ decoy ), (2a) p(α|b = 1) = [µ signal p(α|V) + µ decoy p(α|V decoy )]/(µ signal + µ decoy ), (2b) and analogously for the +45/−45 basis. From (2a) one can see that using a decoy protocol that transmits the pulse classes not equally frequently or that demands for high contrast in the pulse intensities may not be optimal with the hardware described here. Note that if Alice evaluates the decoy security parameters, whether a pulse is of signal or decoy type is never published. Eve may, however, try to guess this information based on her measurements and thereby try to compromise the decoy protocol. This case is not considered here. In order to get the values for p(b|α) in (1b), Bayes' theorem is used in the form p(b|α) = p(α|b) p(b)/p(α), with p(α) = Σ_{b∈B} p(α|b) p(b). (3) For example, we now get for the mutual information between the wavelength and the bit value I_β(Λ : B) with β ∈ {{H, V}, {+45, −45}} denoting the basis I_β(Λ : B) = H(B) + Σ_{λ∈Λ} p(λ) Σ_{b∈B} p(b|λ) log2 p(b|λ), (4a) where we already get H(B) = 1 as we expect the bit values to be equally distributed and totally random: p(b) = 1/2. Finally, as both bases are used equally frequently, the average I(Λ : B) = [I_{H,V}(Λ : B) + I_{+45,−45}(Λ : B)]/2 (4b) is the actual mutual information accessible by a measurement as described. Here, we assume mutual independence of the different degrees of freedom X, Λ and T. Obviously, this has to be tested in the specific setups, too. Measurements on the quality of state preparation in the transmitter In order to obtain the conditional probabilities as required for (1b) we performed a series of measurements on the pulses transmitted by Alice. All these measurements have been carried out under authentic conditions, i.e. using the same electronic parameters and mean photon numbers used during QKD runs. This implies that all analyses have to be performed at light levels below the single photon level per pulse. We used passively quenched APDs, which have a quantum efficiency of 30% including an interference filter (FWHM 10 nm at a wavelength of 850 nm) mounted as entrance window. The time jitter of the APD diode together with the pulse shaping electronics was found to be 600 ps. For the measurements of these detector characteristics, downconverted photon pairs were used. As described above, the usage of eight laser diodes in the Alice module, despite the many benefits, leads to obvious security concerns: the different light sources can be, in principle, distinguished spatially, by means of their wavelength or by a precise measurement of the pulse delay with respect to the 10 MHz beat. We therefore made huge efforts to anticipate these side channels right from the beginning. First, laser diodes with closely matching wavelengths were chosen and their special ordering in the Alice module minimizes the remaining distinguishability. The laser driver electronics features digitally programmable delay lines for each channel in order to overlap all emitters temporally and, finally, spatial information is erased in the single mode fibre. The following measurements are intended to quantify the remaining information accessible via measurements on the transmitted pulses. For the evaluation of the information leakage, we decided not to fit the acquired data to a model and not to use an integral form of (1b) because local aberrations of single diodes cannot be represented in the fit and therefore would not contribute to the calculated mutual information. Evidently, Eve will especially search for such distinguishabilities to determine the diode that produced a certain pulse. Spatial measurements.
Spatial measurements. Because of the short length of about 6 cm of the single mode fibre, we had to check whether spatial information is still transmitted by the fibre cladding, i.e. whether the fibre has enough attenuation for all higher modes. For this purpose we scanned a cross section of the collimated beam at a distance of 40 cm after the fibre, where the beam diameter was 3 mm (full width at half-maximum, FWHM) (figure 2). The diameter of the sensitive surface of the APD used for this measurement is 500 µm. On this scale, we could not find any correlations between the bit value B and the detection place X above the noise level. The information leakage I(X : B), calculated (in analogy to (4a)) from the data in figure 2, is of the order of 10^-5 bits per pulse.

Spectral. In order to determine the spectral distinguishability of the pulses coming from the eight different laser diodes, we acquired their emission spectra (figure 3) using a single photon spectrometer with a resolution of 0.4 nm. The width of these spectra

Temporal. The temporal overlap between the pulses from different laser diodes can be maximized in our system by digitally programmable delay lines with a resolution of better than 50 ps. We measured a histogram of pulse delays T relative to the 10 MHz beat for each diode by focusing its pulses onto an APD (see figure 4). The electronic pulses were registered with an oscilloscope with an additional time jitter of about 40 ps. Given the time resolution of the detectors, we can infer the pulse length of all diodes to be well below 1 ns (FWHM). The statistical analysis led us to a mutual information between the bit value and the detection time of I(T : B) = 2.8 × 10^-3 bits per pulse as an average over both bases (I_{H,V}(T : B) = 2.6 × 10^-3 bits per pulse, I_{+45,-45}(T : B) = 3.0 × 10^-3 bits per pulse).

Conclusions

Our experiments allow for an estimation of the amount of information a single measurement of one degree of freedom of a transmitted pulse can provide about the value of the sent key bit. Considering the above values for I(X : B), I(Λ : B) and I(T : B), the corresponding information leakage arising from the usage of eight separate laser diodes is small compared to the information leakage indicated by the QBER or the contribution of pulses with more than one photon. The analysis presented here gives a first indication of the possible information leakage, as Eve was allowed only to perform measurements on the side channels but not to manipulate the quantum channel depending on her measurement results. If Eve is allowed such conditional attacks, she can gather more information. This is, however, beyond the scope of this work but could be examined along the lines of [2]. In either case, the information leakage can be further reduced by narrower spectral filtering or shorter gate windows. Yet these countermeasures would require temperature-stabilized diodes and filters or novel, faster timing circuits. The design of the transmitter presented here, however, has already been demonstrated to be well suited for free-space QKD, as it allows for simple electronics and a small form factor, while potential weaknesses of this approach involving distinct laser diodes can be kept under control.
3,235.4
2009-06-03T00:00:00.000
[ "Computer Science" ]
Sustainable agricultural development: from sectoral to ecosystem approach

The article is devoted to the study of the prospects for using the ecosystem approach in agriculture. The research methodology is based on the concept of sustainable development, ecosystem theory and the platform economy. The research methods and materials comprise an economic and statistical analysis of the agricultural industry, the related branches of agricultural engineering and the land market in Russia. The main idea of the article is the assumption that, in order to ensure sustainable development, it is necessary to integrate the listed industries into a single ecosystem based on common technological standards and digital platform solutions, with a unified coordination center. The main problem in organizing such an ecosystem in Russian agriculture is the systemic dependence of the industry on institutional factors, primarily government subsidies, and the lack of specific design solutions in the field of its sustainable development. The authors see the development of another high-tech sector in the economy of the Russian Federation as a result of the organization of an ecosystem in agriculture.

Introduction

The concept of sustainable development, proposed in the early 1980s, is today recognized by countries as an imperative [1,2,3]. The most common approach to sustainable development is the study of macroeconomic trends associated with the analysis of global socio-economic phenomena and environmental problems that affect the life of society as a whole. At the same time, a number of researchers (for example, [4,5]) illustrate that adherence to the principles of sustainable development is becoming an increasing priority for individual companies as well. Building a business model based on the principles of economy, social responsibility and environmental sustainability is especially important for organizations in the real sector of the economy [6,7]. Against this background, the crisis caused by COVID-19 and price volatility in the oil market have led to a powerful technological shift in the activities of markets and individual firms. These changes presuppose the introduction of new business standards and challenge the government to determine the vector of economic recovery. Traditionally, practical approaches in such cases come down to supporting the affected industries and/or creating new innovative products that could become "drivers" of economic growth in the near future. The first approach makes it possible to stabilize the situation but does not create a new source of economic growth. The second approach carries the risk that the effect of the innovations remains local. At the same time, in practice there are already business models that are innovations in themselves. Such business models most often involve the configuration of platform and other ecosystems. The purpose of this study is to examine the possibilities of the ecosystem business model in agriculture and related industries for ensuring the sustainable development of both this sector and the Russian economy as a whole. The objectives of the study are to establish the principles of building an ecosystem approach, analyze the state of the agricultural market and related markets (primarily the agricultural machinery market), and identify opportunities and tools for their integration into a single ecosystem.
Theoretical framework Ecosystems as special market patterns [8] are understood as "a process of continuous formal and informal agreements between autonomous agents, as a result of which rules are created. These rules are shared by all participants and bring them mutual benefits " [9, p. 24]. Depending on the diversity and changes in the structure of participants [10], their density [11] and closeness of connection [12], different types of ecosystems are distinguished. Methods of measuring these ecosystems properties are based on the idea of their territorially local nature and natural analogues (biogeocenoses). A number of works, including [13,14,15], focuses on platform ecosystems. Technology in such ecosystems is the main reason for the unification of participants and is understood as a set of technological standards. Ecosystems involve cooperation, which leads to receiving relational rent through direct and indirect (cross) network effects. For high-tech business, which is the main unit of analysis in the concept of sustainable development and the "Industry 4.0" paradigm, R. Seiger et al. [16] distinguishes engineering coordination, which consists in using a consistent and uniform model of product production at all stages of development; and a unified environment for the exchange of information and resources. The modularity of an ecosystem means that all participants in the ecosystem are autonomous, but unlike market relationships, they are difficult to replace, since each of them creates additional value for the client, as well as due to the problem of technological and institutional identity. Connections in ecosystems are not built hierarchically, but rather horizontally. There is a stereotypical view that more high-tech industries should become a growth driver for supplying industries. However, we believe that agricultural enterprises can become a mediator (aggregator) of one of the ecosystems in Russia. Using the methods of economic and statistical analysis, the dynamics of indicators, the connectivity of the economic results of agriculture and related industries, we will assess the state and potential of ecosystem formation in agriculture of Russia. Analysis of the state and development potential of the agricultural market and related industries in the Russian Federation According to the Report on the State and Use of Agricultural Lands in the Russian Federation, prepared by the Ministry of Agriculture of the Russian Federation in 2020, on January 1, 2019, the area of the land fund amounted to 1,712.5 million hectares, of which agricultural land occupied 22.3%. The total area of agricultural land as part of agricultural land amounted to 197.7 million hectares, including the total area of arable land -58.8%, hayfields -9.5%. According to this indicator, the Russian Federation ranks 5-6 in the world. Rational and efficient use of agricultural land is an important factor in ensuring the food security of the Russian Federation. In this regard, the tasks aimed at identifying unused agricultural land, primarily agricultural land, and their involvement in agricultural turnover, are becoming priorities for the development of the country's agro-industrial complex. The area of unused agricultural land in the Russian Federation on 01.01.2019 accounts for 16.7% of the total area. 
In the structure of the identified unused arable land, it is advisable to single out arable land suitable for introduction into agricultural use that does not require preliminary special cultural and technical measures. The total area of such arable land is 9.8 million hectares (almost 50%). The number of peasant and private farms in Russia on January 1, 2020 was 176.3 thousand. The main growth point of the agricultural sector in 2019 was crop production, namely the production of grain and oilseeds. According to Rosstat, the gross grain harvest amounted to 121.2 million tons, the second-best result in the history of modern Russia (the 2017 harvest remains the record at 130 million tons). Crop production in value terms amounted to 3.16 trillion rubles, having increased over the year by 14.66%. Agricultural machine building (tractors, combine harvesters and harrows) is one of the key segments of the machine building complex, with an 18.8% share in the production of machines and equipment, and is of twofold importance for the Russian economy. Nevertheless, the contribution of domestic agricultural engineering enterprises to Russia's GDP today is only 0.13%. The reasons for this are insufficient effective demand (the average purchase of agricultural machinery over the past 5 years was about 3 times lower than the market capacity) and virtually absent exports (3.9% of total output). According to the "Strategy for the Development of Agricultural Engineering in Russia for the Period up to 2030", in order to renew the fleet of equipment in the country, taking into account the disposal of old machines, it is necessary to buy annually more than 50 thousand tractors worth over 300 billion rubles and more than 18 thousand combines with a total value of over 140 billion rubles. According to Rosstat, in October 2018 the average age of a tractor in the Russian fleet of agricultural machinery was 19 years.

The role of the state in building an agricultural ecosystem

The key problem in building a single ecosystem based on the dominant role of the agricultural market is still the subsidized nature of the development of the agro-industrial complex (AIC). According to the «Agricultural Market Review», the main factor in increasing the competitiveness of the agro-industrial complex is government support. According to Rosstat, 75% of the profits of agribusiness companies over the past four years have been generated through subsidies received from the state. Subsidies remain one of the key factors in making investment decisions in the agro-industrial complex. The development of the export of agricultural machinery is one of the main priorities of the domestic agricultural machinery industry. In 2017, the Government of the Russian Federation adopted the "Strategy for the development of exports in the agricultural engineering industry for the period up to 2025". Russian agricultural products are now supplied to 160 countries; China remains the main buyer of Russian food products, having increased its purchases in 2019 from $2.5 billion to $3.1 billion. To support the export of agricultural products, the government allocated 33.8 billion rubles for 2020, that is, 12% of the total financing of the industry. In addition, exporters will be reimbursed for the costs incurred in the certification of agricultural products sent to foreign markets. Assistance will be provided to agricultural enterprises that grow and process soybeans and rapeseed.
Since April 2019, the Rosagroleasing program has been in effect, designed specifically to level the costs of agricultural enterprises in the current situation. It assumes a preferential rate of 3%, a zero advance payment, an extension of the lease term and a deferred payment for one year. Manufacturers such as Rostselmash, PTZ and KLAAS owe their financial results to the direct support of the Russian state. So, in accordance with the data of the Ministry of Agriculture of the Russian Federation, exactly three listed companies accounted for 70% of subsidies provided to agricultural machine builders under the RF Government Decree No. 1432 dated December 27, 2012. At the same time, there is a slowdown in the growth rate of export deliveries of agricultural machinery, which is associated with a reduction in the size of subsidies under the program to compensate the costs of transporting industrial products (from 25-27.5% to 11-13% of the cost of products). Results and Discussion The prospects of the agro-industrial complex are visible against the background of its stable growth in the conditions of the crisis. According to Rosstat, in 2020 export wheat prices increased by $ 14-20 per ton, depending on the class. The competitive advantage of the country's agriculture can be achieved by reducing the cost of growing and harvesting crops, increasing yields through the automation of these processes. First of all, automation is required in terms of planting, care and harvesting. To a lesser extent in storage, transportation and processing. The need for automation processes will contribute to the emergence of new domestic industries that can meet the growing demand through the creation of innovative solutions for the needs of agricultural producers. Enterprises of the military-industrial complex can act as "innovators" as they have high scientific potential and experience in the transfer of technological solutions to civilian developments. Thus, the greatest role in the development of agriculture will be played by the accompanying high-tech areas of production, which, on the one hand, will increase the contribution of agriculture to GDP, on the other hand, will become an independent significant source of growth in the country's GDP. This is the development and production of modern high-tech agricultural machinery with robotization elements, integrated satellite geoinformation navigation and telecommunication services, integration and development of new generation software products and materials for remote sensing of the Earth and unmanned vehicles, construction of a modern agricultural infrastructure using control and management based on artificial intelligence and unmanned technologies, development of information infrastructure in rural areas, creation of technologies and decision support platforms for agricultural producers, etc. At the same time, the analysis of strategic documents in the field of digitalization of the economy showed the presence of a number of problematic issues that impede the active process of introducing digital technologies in the agro-industrial complex. Among them: -the lack of a unified coordination center for authorities, business, science and public organizations for the tasks of digitalization of the agro-industrial complex. 
- the lack of a link between indicators of the efficiency of digitalization of the agro-industrial complex and the final benefit for the agricultural producer (increasing the profitability of the farm, reducing the cost of seeds and fertilizers, increasing yields, etc.);
- the lack of a methodology for assessing the level of digitalization of agricultural enterprises.

Conclusion

Industries and firms as units of analysis are losing importance as technology evolves in the context of collaboration, resource sharing and distributed manufacturing [17, p. 46]. Building an ecosystem in the agro-industrial complex implies the creation of a complex integrator of digital solutions which, on the one hand, will offer the customer all the benefits he needs, taking into account the wishes and specifics of his activities, and, on the other hand, accumulates a wide range of cooperating enterprises to meet his requirements. Thus, what will be sold are not digital products, whose effectiveness the consumer may question, but platform solutions for farmers' tasks: increasing labor productivity, growing profits and reducing crop losses. To build an ecosystem, the following steps are seen as necessary:
1) Determine the complex of needs of agricultural producers, i.e. what tasks they want to solve using digital technologies;
2) Create a mechanism for selecting projects for the digitalization of the agro-industrial complex;
3) Determine the specific conditions and sources of project funding;
4) Determine a single institution that will act as a project office for the implementation and support of projects aimed at digitalizing the agro-industrial complex;
5) Identify priority pilot markets ("demand points") for testing digital solutions.
Ultimately, the ecosystem approach will make it possible, in parallel with the raw-materials economy, to create a sector that will not only become one of the world leaders in its own right, but will also act as an aggregator of high-tech solutions for its own needs, providing an innovation multiplier effect in the country's economy and, as a consequence, its growth in the coming years.
3,330.6
2021-01-01T00:00:00.000
[ "Economics" ]
Formation of the honeycomb-like MgO/Mg(OH)2 structures with controlled shape and size of holes by molten salt electrolysis

Synthesis of the honeycomb-like MgO/Mg(OH)2 structures, with controlled shape and size of holes, by electrolysis from a magnesium nitrate hexahydrate melt onto glassy carbon is presented. The honeycomb-like structures were made up of holes, formed from detached hydrogen bubbles, surrounded by walls built up of thin intertwined needles. For the first time, it was shown that the honeycomb-like structures can be obtained by molten salt electrolysis and not exclusively by electrolysis from aqueous electrolytes. Analogies with the processes of honeycomb-like metal structure formation from aqueous electrolytes are presented and discussed. Rules established for the formation of these structures from aqueous electrolytes, such as the increase in the number of holes, the decrease in hole size and the coalescence of neighbouring hydrogen bubbles observed with increasing cathodic potential, appeared to be valid for the electrolysis of the molten salt used.

INTRODUCTION

It is known that an electrochemical deposition process can produce nanostructured deposits in a controlled manner. 2-6 However, additional annealing of electrochemically produced magnesium hydroxide is needed to produce MgO. The additional thermal treatment causes calcination and a certain level of magnesium oxide crystallization. 4

In contrast to the experimental conditions applied in our earlier work, 8,10 in the linear sweep voltammetry (LSV) experiments the potential was swept from a starting potential, E_S, of 0.000 V to a final cathodic potential, E_C, of -1.000 V vs. Mg/Mg2+, and back to E_S. Experiments under the potentiostatic regime were carried out at -0.200, -0.700 and -1.000 V vs. Mg/Mg2+ applied to the working electrode at T = 373 K. Also, in all electrodeposition processes, the deposition charges were limited to 2 C. After electrolysis, the working electrode was taken out of the cell, rinsed with absolute ethanol (Zorka-Pharma, Šabac, Serbia) and dried at room temperature. The surface morphology and composition of the deposited samples were characterized by SEM (TESCAN digital microscope, model VEGA3, Brno, Czech Republic) equipped with an energy dispersive spectrometer (EDS). The crystal structure of the deposit obtained by deposition at -0.200 V vs. Mg/Mg2+ with the charge limited to 2 C was analysed by a Philips PW 1050 powder diffractometer, at room temperature, with Ni-filtered CuKα radiation (λ = 1.54178 Å) and a scintillation detector, within the 2θ range of 20-75° in steps of 0.05° and with a scanning time of 4 s per step.

RESULTS AND DISCUSSION

The glassy carbon (GC) reversible potential measured in the melt used under the experimental conditions was 1.400±0.050 V vs. Mg/Mg2+. An example of a cyclic voltammogram recorded with the glassy carbon working electrode in the melt used, scanning the potential range from 0.000 V to -1.000 V vs. Mg/Mg2+ and back, is shown in Fig. 1. There are two well defined broad current waves in the cathodic part of the voltammogram and no anodic counterparts. The first cathodic current wave (I) was observed with a maximum at -0.200 V vs. Mg/Mg2+, followed by a subsequent drop to a minimum at -0.330 V vs. Mg/Mg2+. The second cathodic current maximum (II) was observed at -0.700 V vs. Mg/Mg2+ and was followed by a constant current decrease to a minimum at -1.000 V vs. Mg/Mg2+.
In the anodic scan, there were no current waves which would indicate oxidation complementary to the two cathodic current waves. These recordings were in very good agreement with our previous results obtained on the GC electrode in the same melt, 7-10 where the starting potential was E_S = 1.400 V vs. Mg/Mg2+.

The most important electrochemical and chemical reactions responsible for the formation of the shown structures, out of the greater series of 13 reported elsewhere, 10,27,30 can be summarized as reactions (1)-(6). 4,6,10,30,31 Therefore, wave I in Fig. 1 includes the currents reflecting the more pronounced reactions given by Eqs. (1) and (3), and wave II recorded the currents reflecting the reactions given by Eqs. (1), (2) and (4). The synthesis of magnesium oxide and magnesium hydroxide from the products of the reactions cited here, Eqs. (2), (5) and (6), and elsewhere, 6,10,27,29-31 proceeds at the working electrode surface within the whole potential range applied. Hydrogen gas bubbles produced on the electrode surface and their detachment from the surface provide a fresh area for the electrochemical reactions. However, the gas bubbles do not detach from the electrode surface so easily. Very often, they are retained by the very fast growing needle-like and similar MgO/Mg(OH)2 deposit structures which surround and sometimes even cover them.

The chronoamperogram reflecting electrodeposition with restricted charge at -0.200 V vs. Mg/Mg2+ is shown in Fig. 2. The typical current-time transient during deposition showed that after about 2500 s the cathodic current density decreases down to -0.2 mA cm-2. However, with increasing deposition time the current density increased again and reached a maximum of -0.5 mA cm-2 at about 5000 s. This maximum is then followed by another slow decrease. The rise and fall of the current density observed during deposition at -0.200 V vs. Mg/Mg2+, with the amount of total charge passed during the deposition restricted to 2 C (Fig. 2), should therefore be attributed to the increase of the hydrogen evolution, magnesium cation and nitrate reduction rates on the freed electrode surface (gas bubbles leaving) and the subsequent fall of these rates due to a pseudo-passivation of the electrode surface (Mg oxides and hydroxides being formed). 29,32

X-ray analysis of the deposit synthesized under the potentiostatic regime with 2.0 C of applied charge is shown in Fig. 3. The XRD pattern revealed that the deposit is composed of a mixture of magnesium hydroxide (Mg(OH)2) and magnesium oxide (MgO), with 2θ peaks recorded at 32.8, 50.8 and 58°.
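For orientation, the interplanar spacings corresponding to the quoted 2θ positions can be recovered from Bragg's law, d = λ / (2 sin θ), using the CuKα wavelength stated in the experimental section. The sketch below is only an illustrative check and not part of the original analysis; the resulting d values would still have to be compared with reference powder diffraction data for MgO and Mg(OH)2 to assign the peaks, which is not reproduced here.

```python
import math

WAVELENGTH = 1.54178  # Angstrom, Ni-filtered CuK-alpha radiation, as stated above

def d_spacing(two_theta_deg: float, wavelength: float = WAVELENGTH) -> float:
    """Interplanar spacing from Bragg's law, n*lambda = 2*d*sin(theta), with n = 1."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# 2-theta positions quoted for the deposit obtained at -0.200 V vs. Mg/Mg2+
for two_theta in (32.8, 50.8, 58.0):
    print(f"2theta = {two_theta:5.1f} deg  ->  d = {d_spacing(two_theta):.3f} A")
```

For instance, the peak at 2θ = 32.8° corresponds to d ≈ 2.73 Å, and the one at 50.8° to d ≈ 1.80 Å.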
SEM photographs of the deposits obtained by electrolysis of the magnesium nitrate hexahydrate melt used, at working electrode potentials of -0.200, -0.700 and -1.000 V vs. Mg/Mg2+ using 2.0 C of charge, are shown in Figs. 4-6, respectively. The potentials for electrodeposition were selected by the analysis of Fig. 1 and they corresponded to the current waves I and II. Figs. 4-6 revealed that MgO/Mg(OH)2 mixture deposits with honeycomb-like structures were formed at all three potentials applied.

Fig. 4 shows the honeycomb-like structure obtained at a potential of -0.200 V vs. Mg/Mg2+. From Fig. 4a and b, it can be seen that the formed honeycomb-like structure showed regularly distributed holes, made by the detached hydrogen bubbles. The average size of the holes was estimated to be around 2 μm (Fig. 4c). It should be noted that the holes were surrounded by very thin nano-sized needles oriented in all directions. The tips of the needles were grouped in bundles, making a relatively compact wall structure around the holes. The deposit obtained at -0.200 V vs. Mg/Mg2+ was analysed by EDS and the chemical composition of the honeycomb-like structure is shown in Fig. 4d.

Figure 5 shows the honeycomb-like structure obtained at a potential of -0.700 V vs. Mg/Mg2+, using the same quantity of charge as in Fig. 4. The estimated average hole size was around 1 μm (Fig. 5c). The difference in structural characteristics between the deposits in Figs. 4 and 5 was immediately apparent. The increase in the number of holes formed by the detachment of hydrogen bubbles and the decrease in hole size have to be attributed to the electrodeposition potential applied (Fig. 5a and b). As expected, the changes in the structural characteristics of the obtained honeycomb-like structures can be ascribed to the intensification of hydrogen evolution with increasing cathodic potential. The increase of the magnesium overpotential applied led to the formation of thinner needles that were highly intertwined around the holes. A further increase in the applied cathodic potential led to further intensification of the hydrogen evolution. As a result of vigorous hydrogen evolution and a high electrocrystallization rate at the potential of -1.000 V vs. Mg/Mg2+, the honeycomb-like structure lost its recognizable regular appearance, as shown in Fig. 6. A number of holes remained captured in the interior of the deposit structure (Fig. 6a). It appears that there was coalescence of closely formed hydrogen bubbles (Fig. 6b). At the same time, there is a portion of needles which mutually coalesced to make a compact surface of this structure type (Fig. 6c). Hence, unlike the results presented in our previous reports, 7-10 where flower-like forms, as well as a mixture of dish-like holes and holes constructing the honeycomb-like structure, were obtained, here we show that it is possible to define conditions that enable the formation of uniform honeycomb-like structures with controlled shape and size of the holes formed from the detached hydrogen bubbles. This is attained by careful selection of the electrolysis conditions. Namely, in our recent investigation, 10 a pulse of 1.400 V vs. Mg/Mg2+ preceded the electrolysis at the selected potential, by which the flower-like forms and dish-like holes were formed. The absence of this pulse enabled the formation of uniform honeycomb-like structures. In any case, the analysis of the honeycomb-like structures presented in Figs. 4-6 showed that the honeycomb-like structure obtained at -0.200 V vs. Mg/Mg2+ was more uniform than those formed at potentials of -0.700 and -1.000 V vs. Mg/Mg2+. This can be ascribed to the fact that the increase in cathodic potential causes both intensification of the hydrogen evolution reaction and an increase of the magnesium oxide/magnesium hydroxide electrocrystallization rate.
From the presented results, it is apparent that the honeycomb-like structures can be synthesized not only by electrolysis from aqueous electrolytes, but also by electrolysis from a melt, as shown in this work. Additionally, it appears that similar rules are valid for the formation of honeycomb-like structures made of metals deposited from aqueous electrolytes and of honeycomb-like structures made of MgO/Mg(OH)2 mixtures obtained by the electrolysis of the magnesium nitrate melt. 12-15 Therefore, it seems logical to discuss the analogies between the formation of honeycomb-like metal structures obtained by electrolysis from aqueous electrolytes and the formation of honeycomb-like MgO/Mg(OH)2 structures obtained by the electrolysis of the magnesium nitrate hexahydrate melt.

According to Winand, 33 depending on the exchange current density, melting point and overpotential for hydrogen discharge, metals are classified into three groups: normal, intermediate and inert. The group of the normal metals, such as Pb, Sn, Ag, Cd and Zn, is characterized by high values of both the exchange current density and the overpotential for hydrogen evolution, as well as by low melting points. The group of the intermediate metals, such as Cu, Au and Ag (ammonium electrolyte), is characterized by lower values of the exchange current density and the overpotential for hydrogen evolution than the normal metals. Finally, the third group comprises the so-called inert metals, such as Ni, Co, Pt and Pd, which are characterized by both low exchange current densities and low overpotentials for hydrogen evolution, as well as by high melting points.

The fact that well defined needles oriented in all directions were formed among the holes indicates that the MgO/Mg(OH)2 mixture behaves almost as a normal metal. One of the main characteristics of the electrodeposition of the normal metals is diffusion control starting from very low potentials, 13,34 as confirmed here by the formation of well defined needles in the whole range of examined potentials. For comparison, needle-like, as well as other types of dendritic forms, are also formed starting from low cathodic potentials during the electrodeposition of Pb and Ag (the typical representatives of the normal metals) from aqueous electrolytes. 13 On the other hand, the MgO/Mg(OH)2 mixture shows certain characteristics that define the so-called inert metals. Low values of both the exchange current density and the overpotential for the hydrogen evolution reaction mean that there is a parallel between the metal electrodeposition process and the hydrogen evolution reaction in the whole range of potentials. 13,35
In this case, the honeycomb-like structures are formed throughout the magnesium OPD region. However, the fact that well defined needles, similar to those formed around the holes, are obtained in the magnesium UPD region, where there is no hydrogen evolution, 10 suggests that the formation of the MgO/Mg(OH)2 deposit still behaves as for a normal metal. This implies that the evolved hydrogen had no effect on the hydrodynamic conditions in the near-electrode layer and, hence, on the morphology of the deposits around the holes. A similar situation is observed during the formation of the honeycomb-like structure of Pb, one of the most important representatives of the group of the normal metals. 24 Very thin needles were formed around the holes during the electrodeposition of Pb in the honeycomb-like form.

CONCLUSION

The processes of electrolysis from the magnesium nitrate hexahydrate melt have been analyzed by linear sweep voltammetry (LSV) and chronoamperometry, while the morphology of the deposits obtained by the potentiostatic regime of electrolysis was characterized by the techniques of scanning electron microscopy (SEM) and EDS. The XRD analysis of the deposit showed that a MgO/Mg(OH)2 mixture was obtained by this electrolysis process. It was shown that the honeycomb-like structures made of the MgO/Mg(OH)2 mixture, constructed around the holes originating from the detached hydrogen bubbles and surrounded by the thin intertwined needles, were formed in a wide range of applied cathodic potentials. The shape and size of the holes were strongly controlled by the choice of the cathodic potential.

Fig. 2. Current density-time transient recorded on the GC electrode from the magnesium nitrate melt used; potential applied -0.200 V vs. Mg/Mg2+; the amount of total charge passed during the deposition restricted to 2 C at 373 K.
Fig. 3. X-ray diffraction pattern of the electrochemically produced MgO/Mg(OH)2 deposit on the GC working electrode from the magnesium nitrate melt at a potential of -0.200 V vs. Mg/Mg2+ at 373 K, with the charge limited to 2 C.
3,243.8
2018-12-29T00:00:00.000
[ "Materials Science" ]
Appraising HEI-community Partnerships: Assessing Performance, Monitoring Progress, and Evaluating Impacts Momentum of the creation of partnerships between higher education institutions (HEIs) and communities is strong. As their significance intensifies, the question of how to judge their value is garnering increasing attention. In this perspective article, we develop a framework for comprehensively appraising HEI-community partnerships. Constituent parts of the framework are unpacked, and application of the framework is then discussed. The appraisal framework provides a mechanism to document evidence of worth, and most importantly contributes to the continuous improvement and learning imperative of HEI-community partnerships. Introduction The relationship between higher education institutions (HEIs) and their community is being reimagined amidst pressing global challenges, the changing nature of knowledge production, and dialogue regarding the functions of higher education in contemporary society ( Jongbloed et al. 2008;Nelson 2021). Sentiment is widespread that HEIs should meaningfully contribute to society (e.g. Compagnucci & Spigarelli 2020;Hart & Northmore 2010). This outlook has captured the attention of governments, funders and non-governmental organisations (de Boer et al. 2015;Plummer et al. 2021a;Secundo et al. 2017). Over the past two decades, the emphasis on strengthening relationships with communities has increased (e.g. Bawa & Munck 2012;Tremblay 2017) and community engagement itself is being recognised as central to the mandate of HEIs (Nelson 2021;Plummer et al. 2021a;Secundo et al. 2017). HEI-community relationships take many forms and encompass the triumvirate of research, teaching and service. It is heuristic to conceive of these forms along a spectrum of participation, with consultation being the least interactive and collaboration being the most (IAP2 2014). We focus on partnerships between HEIs and community because they specifically help HEIs meet engagement mandates, as well as enable alignment of HEI functions with community needs (Groulx et al. 2021). Similarly to many terms that have gained prominence in popular lexicon, the term partnership in this context is imprecisely employed and variously understood Luger et al. 2020). We discuss this challenge in the section that follows, concentrating on formal partnerships because they are a prominent means for HEIs and the community to actively engage with diverse societal challenges and to realise opportunities; offer explicit and agreed upon parameters regarding goal(s), functioning and aspirations; afford context specificity as determined in the initiation phase by the partners themselves (Estrella & Gaventa 1998), and have been recognised as an appropriate unit of analysis, which is an intervening variable as well as an '… outcome of "impact" in itself ' (Cruz & Giles 2000, p. 31). The myriad of potential benefits from HEI-community partnerships is increasingly being recognised (Buys & Bursnall 2007;Groulx et al. 2021;Holton et al. 2015;Meza et al. 2016;Williamson et al. 2016). Such collaborations leverage access to knowledge and expertise, thereby facilitating diagnosis of community challenges as well as formulation of potential solutions (Buys & Bursnall 2007;Hart & Northmore 2010;Meza et al. 2016;Williamson et al. 2016). Engaging community partners through these types of partnerships aspires to eliminate tokenism and connect content experts (professionals, staff within each organisation, service providers, etc.) 
with context experts (those with lived experience (Attygalle 2017). Additionally, capacity-building opportunities for community partners increase (Holliday et al. 2015;Holton et al. 2015;Muse 2018;Williamson et al. 2016), leading to more resilient communities. At the same time, such collaboration broadens the range of perspectives considered and diversity of knowledge employed . It also ideally enables learning that is reciprocal (Holliday et al. 2015). Ultimately, partnerships serve to advance the exchange of knowledge and education, enhance community practice and inform decision-making (Buys & Bursnall 2007;Holton et al. 2015;Muse 2018;Wlliamson et al. 2016). As the importance of HEI-community relationships increases and excitement about potential benefits from partnerships grows, the need to gauge success is intensifying. This concerted concern for measuring success stems from the accountability agenda in higher education (Holton et al. 2015;Shephard 2018;Wiek et al. 2012). Despite these intensifying needs, 'there is a lack of consensus in the field regarding what defines partnership success …' (Brush et al. 2019, p. 1). As HEI-community partnerships continue to rise in prominence and importance, there is a clear need to devise corresponding mechanisms to identify, gauge and track the success of these unique arrangements (Holton et al. 2015;Nelson 2021;Plummer et al. 2021a). In this article, we examine how to gauge the success of formal HEI-community partnerships. As a perspective article, it is essential at the outset to detail the authors' positionality. The authors are all affiliated with the Environmental Sustainability Research Centre (ESRC) at Brock University in Canada. Sustainability science is an overarching imperative for the Centre, which aims '… to strengthen the exchange and integration of different disciplinary and non-academic knowledge, enabling mutual learning between scientists and practitioners' (Brandt et al. 2013, pp. 1-2). Innovative partnerships were initiated by the ESRC in the spirit of bridging the science-societal divide and as such encompass our research, education and service endeavours. Partnerships are established with both government and non-governmental organisations, with Memoranda of Understanding (MOUs) setting out the aims, associated responsibilities and budgetary contributions of all parties. The importance of communities was elevated at Brock University in the present strategic plan for the institution (Brock University 2018) and advanced in the associated Community Engagement Strategic Plan (Brock University 2020, p. 1): the '… connection to community is fundamental to the University's strategic mission. Community engagement will support each of Brock's strategic objectives'. In seeking to draw upon current experience, the President encouraged the authors to catalyse community connections across the institution as well as determine how to gauge the success of formal community partnerships. The authors thus embarked on a multi-year program of inquiry into the performance of higher education institutioncommunity partnerships (Plummer et al. 2021a), including measuring the performance of sustainability science initiatives ) and evaluating transdisciplinary partnerships for sustainability (Plummer et al. 2021b). While sustainability science informed some of the empirical investigations, the program of inquiry transcended disciplinary, geographic and institutional contexts. 
With the experience and perspective of the authors acknowledged, we start by scrutinising challenges that make this a particularly vexing dilemma -terminological and conceptual differences, multiple perspectives and ways of knowing, inconsistency about what constitutes success, and measurement messiness. We then provide a conceptual framework for measuring the success of a partnership, from both HEI and community partner perspectives, that is clear, cohesive and comprehensive. The manner by which the framework can be operationalised within HEI-community partnerships is then detailed. The proposed framework bolsters the capacity of HEI-community partnerships to navigate key challenges and gauge their success. Measuring the Success of HEI-community Partnerships: A Quagmire Individuals interested in measuring the success of HEI-community partnerships are immediately confronted with a paradox. On the one hand, there are numerous accounts appraising HEI-community partnerships in great detail (e.g. Brinkerhoff 2002;Butterworth & Palermo 2008;Holton et al. 2015;McNall et al. 2009;Plummer et al. 2021b;Tyndall et al. 2020;Wiek et al. 2012;Williamson et al. 2016). On the other hand, consensus does not exist between scholars as to what constitutes HEI-community partnership success (Brush et al. 2019). Often, studies focused on appraisal (e.g. Buys & Bursnall 2007;Holton et al. 2015;Wiek et al. 2012) address success from a HEI perspective and the view of the community partner(s) is absent (Hart & Northmore 2010). Additionally, many existing accounts and frameworks focus only on a single component of HEI-community partnerships. For example, Blackstock et al. (2007) provide a framework for undertaking a summative evaluation of participatory research. Similarly, Van Tulder (2016) provides an analytical framework for partnership impact assessments. While these frameworks offer a snapshot of the evaluation component, which takes place near the end of the partnership, they do not adequately capture every component throughout the duration of the partnership process, including how they are interconnected and shape HEI-community partnerships throughout their lifecycle. Contrastingly, Jagosh et al. (2015) undertook a realist evaluation of 11 concluded research-based partnerships in order to understand the contextual factors (e.g. trust, power sharing, resource sharing, etc.) that impacted the overall outcomes of the partnership. In this example, the appraisal is more invested in the partnership process and what circumstances led to various outcomes. Ultimately, these accounts and proposed frameworks do not present a holistic view of partnerships or encompass all essential components and considerations of HEI-community partnerships. Scholars are explicit regarding the requirement to better understand and appraise HEI-community partnerships Holton et al. 2015;Luger et al. 2020;Nelson 2021). These calls are complemented by arguments regarding the need for mechanisms to identify, gauge and track success (Holton et al. 2015;Plummer et al. 2021a;Van Tulder et al. 2016). In the following section we address key challenges which contribute to this contradiction and create an entangled dilemma in measuring the success of HEI-community partnerships. TERMINOLOGICAL AND CONCEPTUAL DIFFERENCES Terminological and conceptual differences pose a substantive challenge. The term partnership itself is imprecisely used and variously understood (Drahota et al. 2015;Luger et al. 2020;Nelson 2021;Plummer et al. 2021a). 
Drahota et al.'s (2015) systematic review on this matter found that terms used to describe partnerships ranged broadly, from community-academic partnerships and community-based participatory research (CBPR) partnerships to university-community partnerships, community-university partnerships and academic-community partnerships. This may be explicable in the light of varying degrees of formalisation (Brinkerhoff 2002) and/or discipline-specific terminology (Horton et al. 2009). In this article, we follow the comprehensive definition by Brinkerhoff (2002, p. 21), who describes a partnership as a '… dynamic relationship among diverse actors, based on mutually agreed objectives, pursued through a shared understanding of the most rational division of labour based on the respective comparative advantages of each partner. Partnership encompasses mutual influence, with a careful balance between synergy and respective autonomy, which incorporates mutual respect, equal participation in decision-making, mutual accountability and transparency.' As Plummer et al. (2021a, p. 2) observe, 'one immediate challenge within this area of literature is the imprecise and interchangeable uses of terms such as "relationship", "engagement", and "partnership"'. These terms are often colloquially used to describe partnerships when they actually describe very different circumstances. This is further complicated by the tendency for disciplines and communities of practice to also define partnership in different ways, leading to misunderstandings and inconsistencies across disciplinary boundaries and fields of practice (Horton et al. 2009). There are also more fundamental conceptual differences in HEI-community partnerships. Drahota et al. (2015) highlight the extent of these disparities in their systematic review. For example, CBPR partnerships are understood there as arrangements among structurally unequal groups that come together to address problems such as poverty, crime and housing (Drahota et al. 2015). However, Benoit et al. (2005, p. 265) describe partnership arrangements as 'a process of ongoing negotiation through which academic and community partners establish their respective expectations and responsibilities in the partnership, always taking into account changes in personnel, agendas, and budget allocations, among other things'. The conceptualisation of 'community-academic partnership' (CAP) is used by Drahota et al. (2015) to reconcile understanding across different disciplines and the need for equitable control. MULTIPLE PERSPECTIVES AND DIFFERENT WAYS OF KNOWING A considerable challenge of HEI-community partnerships is 'respecting, balancing, bridging, reconciling and/or sometimes integrating differing knowledge systems, values and processes among disciplines and with partner communities, which translate into potentially differing assumptions about what constitutes effective interaction and credible knowledge generation' (Steelman et al. 2021, p. 633). HEI-community partnerships are evident across the gamut of scholarly areas and, depending on the discipline, they can take different forms and encompass varying activities (Ansari et al. 2001;Plummer et al. 2021a). The involvement of individuals from different professions, disciplines and scholarly areas brings unique challenges with regard to integrating their different perspectives (Blecher & Hughes 2020). The magnitude of this challenge is amplified by the emphasis on transdisciplinarity (Brandt et al. 2013;Lang et al. 
2012), combined with an emphasis on community partnerships. Scholarship on measuring partnerships has largely emphasised the view of HEIs (e.g. Buys & Bursnall 2007;Sargent & Waters 2004;Wiek et al. 2012) and often excludes the community perspective (Sandy & Holland 2006). Hart and Northmore (2010, p. 6) observe that 'the rigorous and comprehensive incorporation of community perspectives in audit and benchmarking is almost entirely absent across the higher education sector ...'. However, the importance of the perspective of community partners is critical (Srinivas et al. 2015) as it provides a more complete picture of how the partnership is operating and/ or the overall success of the partnership (Plummer et al. 2021a). This situation is complicated as HEI and community partners may enter into a partnership for different reasons, have divergent expectations and uniquely perceive benefits, all of which affect gauging success (Klein 2008;Plummer et al. 2021a;Sandy & Holland 2006). Moreover, cogent areas, professions and communities may have implicitly and/ or explicitly established methods and associated criteria for evaluating partnership success (Belcher et al. 2016). For example, research evaluation approaches may emphasise measures of academic outputs, such as publication of books or prestige journals, whereas community organisations may emphasise the number of people attending a public partnership event or the number of responses to a community-based survey. Also, evaluation criteria from more than one discipline or community of practice may be contradictory or conflicting (Gaziulusoy et al. 2016;Klein 2006). HEI-community partnerships pose a particular challenge in this regard as they are often context-specific (Hansson & Polk 2018). Ultimately, 'the failure to grapple with understanding the community perspective may have potentially dire consequences because there is considerable room for misunderstandings between higher education and community partners …' (Sandy & Holland 2006, p. 31). It is imperative to be cognisant of the multiple perspectives that may be underpinned by unique ways of knowing and manifest knowledge systems Steelman et al. 2021) and that actors must accommodate, reconcile and/or integrate different knowledges (Norström et al. 2020). This poses challenges as each can be viewed as legitimate, valid and credible. For example, Indigenous methods of evaluation may be very different from those of Western academics. Western approaches tend to focus on objectivity and the systematic collection and analysis of data, whereas an Indigenous method of evaluation may be a decolonising and spiritual process of deep reflection (Evans et al. 2020). Further, it is important to acknowledge that certain ways of knowing have become entrenched, so HEIs may be reluctant to create space for different ways of knowing for fear that they may be perceived as less credible or legitimate. WHAT CONSTITUTES 'SUCCESS'? Measuring the success of HEI-community partnerships requires shared understanding, while defining success is difficult as there are multiple perspectives. Linquist-Grantz and Vaughn (2016) focus on the relationship between process and outcomes, whereby individual contexts within the processes of a partnership are critically examined to rank the success of the community-academic partnership. Ensuring not only multiple perspectives are included within the process, but also individual contexts are considered within the partnership evaluation framework is considered critical. 
As with the scholarship on measuring partnerships, often studies (e.g. Buys & Bursnall 2007;Holton et al. 2015;Wiek et al. 2012) lack inclusion of the perspectives of the community partner(s) (Hart & Northmore 2010). Much of the literature exploring what constitutes partnership success resides in the field of communitybased participatory research (e.g. Brush et al. 2019;Israel et al. 2020;Luger et al. 2020). Brush et al. (2019, p. 565) identified 28 indicators of success and concluded that the multi-dimensional construct goes beyond outcomes and includes 'some combination of characteristics of partners, relationships among/between partners, partnership characteristics, processes, resources and capacity, along with partnership outcomes'. Not only are indicators of success numerous, but much like indicators for evaluation, they tend to be discipline specific, with many disciplines having well-established criteria and processes for evaluation. Beyond the nature of the data itself, HEIs are ill-equipped to collect the required data to track community partnership work and, more specifically, measure the success of such partnerships as most existing enterprise systems focus on the institution's instruction and academic aspects (Holton et al. 2015). A MEASUREMENT MESS A final challenge is the actual measurement of success. Measurement nomenclature is often used interchangeably and employed imprecisely (see Plummer et al. 2021b). Each has a specific meaning and purpose, which we describe in the next section. There are also broad monikers for subject areas or fields, and developments within these further exacerbate confusion (Ansari et al. 2001). For example, evaluation is a field of study with numerous specific types therein (e.g. formative evaluation, program evaluation, impact evaluation, process evaluation, etc.), which causes blurring of boundaries and the potential to collect or focus on imprecise and/or irrelevant information in relation to the HEI-community partnership. Issues in nomenclature are compounded by operational and conceptual measurement considerations. Many measurement tools and case studies exist, but these come from diverse disciplines/fields (e.g. Benoit et al. 2005 in public health/medicine, and Weik et al. 2021 in sustainability science; see Drahota et al. 2016 andLuger et al. 2020 for others). However, they tend to be singularly focused and measure different aspects of HEI-community partnerships. For example, Buys and Bursnall (2007) focus on indicators for understanding the partnership process, without making connections to outcomes or impacts. Contrastingly, Azaroff et al. (2011) primarily concentrate on evaluating the impacts of a public health campaign on the choices of community members. Despite the existence of numerous measurement tools, '… empirical evaluation of [HEI-community partnerships] are inadequate' (Drahota et al. 2016, p. 195). The foregoing leads Luger et al. (2020, p. 509) to observe that while 'many practical lessons learned, and conceptual models can be found in the literature ... models and concepts of engaged research still remain muddy'. A Proposed Framework for Capturing HEI-community Partnership Success While the aforementioned challenges are substantial and difficult, we argue that appraising success is imperative to strengthen HEI-community partnerships as well as address matters of accountability, transparency and value. 
We endeavour to overcome these challenges and advance this subject by (1) providing clear definitions and parameters of assessment, monitoring and evaluation; and (2) conceptually establishing a comprehensive framework for the consistent measurement of HEI-community partnership inputs, processes, outcomes and impacts. Our framework (Figure 1) builds upon foundational work on assessing the performance of HEIcommunity partnerships by Plummer et al. (2020). It sets out and illustrates the relationship among three salient components for appraising HEI-community partnerships: assessment, monitoring and evaluation. Each component is briefly summarised and then the overall workings of the framework are addressed. At the outset, it is imperative to recognise that the three components are dynamic and not mutually exclusive; they connect in complementary ways throughout the appraisal process to measure the success of a given HEI-community partnership. Component 1: Assessment Assessment involves identifying the present status of a given partnership in reference to desired conditions and determining how to close gaps in performance. This includes 'comparing the current condition to the desired condition, defining the problem or problems, understanding the behaviours and mechanisms that contribute to the current conditions, determining if and how specific behaviors and mechanisms can be changed to produce the desired condition, developing solution strategies, and building support for action' (Gupta et al. 2007, pp. 14-15). It occurs through the systematic, continuous collection of data on specified indicators to provide insight into the current performance of the partnership itself (Caplan et al. 2007;Estrella & Gaventa 1998;Onyango 2018;Stem et al. 2005). Assessment takes place after the initiation/ formalisation of a partnership through a memorandum of understanding (MOU), and can occur iteratively throughout the lifecycle of the partnership, as illustrated in Figure 1. Qualities associated with the performance of partnerships are abundant. In drawing upon the recent synthesis of this scholarship in relation to HEI-community partnerships (Plummer et al. 2020), we highlight the qualities that comprise a good partnership: the circumstances surrounding their formation (inputs), how they function (processes), and what they may accomplish (outcomes). These three categories have recently been established and their associated qualities have been validated as appropriate indicators and measures for assessing partnership performance (Plummer et al. 2021a). We provide a synopsis of each below (see also Plummer et al. 2021a for the full suite of associated qualities and indicators). INPUTS Inputs are the contributions bestowed to the enterprise by the entities at the initiation of the partnership (Figure 1). Effective communication during initiation is imperative as each entity articulates their motivation for seeking entry into formal partnership and thereby a commitment to the jointly shared aims. Clarity at the outset on what each entity will contribute to the partnership is also important. These supports should align with the aims of the partnership as they will shape the process as well as the capacity for what can be accomplished. The availability of financial resources, for example, is often identified as a key influence on many other aspects of a partnership (e.g. scope, duration, activities, etc. (Buys & Bursnall 2007;Holton et al. 2015;Sargent & Waters 2004). 
Resources can also dictate the roles of particular collaborators (Sargent & Waters 2004; Schulz et al. 2003). PROCESS The second category in the assessment portion of the framework (Figure 1) is process (Plummer et al. 2020). Process refers to how the partnership functions (McNall et al. 2009). A process which promotes respectful and constructive interactions between/among actors is imperative for success as the partnership transitions from initiation to implementation (Sargent & Waters 2004). It is important that the process fosters mutual understanding of perspectives, a collective understanding of the problem, and a basis for joint decision-making (Amey & Brown 2005; McNall et al. 2009; Schulz et al. 2003). Effective communication, established in the initiation phase, continues to be imperative for smooth operation of a given partnership (Amey & Brown 2005; Bringle & Hatcher 2002; Holton et al. 2015; Schulz et al. 2003). The process also engenders the essential quality of trust, so that individuals come to expect no harm from the others involved, and ideally develop confidence that others will complete actions if their control is relinquished (Mayer et al. 1995). Reciprocity and mutual respect, which often develop as trust is built, are further qualities of successful partnerships. It is especially important to recognise in the context of HEI-community partnerships that individuals bring unique perspectives, skills and contributions to the process, and it is this diversity that strengthens the partnership process (Schulz et al. 2003). OUTCOMES The final category of outcomes in the assessment portion of the framework (Figure 1) encompasses what is produced as well as the effects on those directly involved in the partnership (Plummer et al. 2020). The realisation of outcomes is broadly influenced by the employment of available resources and the effectiveness of the process in goal attainment (Schulz et al. 2003). Outcomes are not temporally constrained and can occur throughout the life of the partnership (Koontz & Thomas 2012). The three-fold typology of outcomes for partnerships (Plummer et al. 2020; Sargent & Waters 2004) highlights objective outcomes, subjective outcomes and learning outcomes. Objective outcomes are tangible products (e.g. publications, reports, etc.) that are easily quantifiable and often used as measures of productivity in reporting. Subjective outcomes (e.g. satisfaction, trust, etc.) have value for, and are interpreted by, those involved in the partnership. Finally, learning outcomes include knowledge creation and acquisition, integration of diverse perspectives to address multi-party challenges, and skill development and attainment (Amey & Brown 2005; Sargent & Waters 2004). Component 2: Monitoring Monitoring involves the practice of conducting systematic observations to gain information about progress (Estrella & Gaventa 1998; Onyango 2018). Data are collected using indicators that provide insight into aspects of the partnership as well as activities (Stem et al. 2005). Monitoring should take place regularly over the life cycle of the HEI-community partnership, as illustrated in Figure 1. It provides routine feedback, and thereby a basis for continuous improvement (Estrella & Gaventa 1998) and learning. Although the importance of monitoring partnerships is recognised within the literature (Calderon & Mathies 2013; Kagan & Duggan 2009), HEIs acknowledge that the ability to track and measure partnerships is a key challenge in practice (Plummer et al. 2021a). Secundo et al. (2017, p. 
232) observe that 'the nature of relevant data required to track third mission activities is considered as invisible, tacit, unquantifiable, informal, and in most cases, it is not collected by administrators' (see also Shephard 2018). Reflective of, and contributing to, the challenges experienced in practice, there is currently no consensus within the literature regarding monitoring procedures/documentation or indicators of success (Kagan & Duggan 2009), or exactly how to institutionalise tracking across disciplines/types of partnerships. Monitoring of HEI-community partnerships is two-fold. First, it involves tracking performance over time and determining progress (Buys & Bursnall 2007; Pellegrino et al. 2014), while assessment considers a single point in time. Second, monitoring focuses on activities that take place throughout the partnership (Bäckstrand 2006) and, specifically, on the achievement of activities as milestones, signalling advancement toward their aims, goals and/or objectives. Monitoring is typically accomplished through tracking variables, or key performance indicators (KPIs). KPIs are measurable indicators used to signal progress or achievements against pre-defined standards or objectives. Traditionally, and especially in an academic context, KPIs are often quantitative and used in order to obtain 'objective' data for evaluation purposes (Garlick & Langworthy 2008). However, as partnerships require both objective and subjective outcomes, it is important that KPIs for appraisal represent a wide range of qualitative (e.g. perceptions) and quantitative (e.g. number of academic publications) information, which may be tracked for different purposes (Garlick & Langworthy 2008; Plummer et al. 2022). In line with the two-fold purpose of monitoring above, KPIs for HEI-community partnerships come from existing scholarship as well as the specific context of a given partnership. Monitoring HEI-community partnership performance over time, for example, can benefit greatly from adopting and systematically applying indicator measures which are established and validated in relation to HEI-community partnership performance for each category described above (Plummer et al. 2021a), and are also transferable to different types of partnerships (disciplines, stages, etc.). Conversely, KPIs regarding actions or activities that signal progress towards their aims, goals and/or objectives are unique, and therefore need to be established and agreed upon by the partners. Component 3: Evaluation Evaluation is a field of inquiry in its own right and comprises diverse conceptualisations (Ansari et al. 2001; Shephard 2020). Generally, evaluation entails determining the extent to which objectives have been achieved and intended impacts realised (Onyango 2018). It is prudent for evaluation to occur near the end of the partnership (see Figure 1) for three reasons. First, the duration of a partnership is typically identified in a formal agreement and should align with expectations that the objectives can be realistically accomplished over this time. Second, information gathered through monitoring should enable ongoing adjustments to keep the partnership on track and inform evaluation. Third, evaluation near the conclusion of a partnership serves as a mechanism to consider what has been achieved and how to move forward. The partners may wish to continue their collaboration, revise their formal agreement and the accompanying aims, or cease working together. 
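To make the monitoring discussion above a little more concrete, the sketch below shows one minimal way a partnership team might record mixed qualitative and quantitative KPIs over time. It is only an illustration under stated assumptions: the KPI names, targets and dates are invented, and nothing here is prescribed by the framework or by the cited literature.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class KPI:
    """One partnership indicator; quantitative KPIs carry a numeric target,
    qualitative KPIs are recorded as dated narrative observations."""
    name: str
    kind: str                       # "quantitative" or "qualitative"
    target: Optional[float] = None  # only meaningful for quantitative KPIs
    observations: list = field(default_factory=list)

    def record(self, when: date, value):
        # Append a dated observation (a number or a narrative note).
        self.observations.append((when, value))

    def progress(self) -> Optional[float]:
        """Share of target reached so far (quantitative KPIs only)."""
        if self.kind != "quantitative" or not self.observations or not self.target:
            return None
        latest = self.observations[-1][1]
        return latest / self.target

# Hypothetical example: one quantitative and one qualitative KPI.
publications = KPI("joint publications", "quantitative", target=6)
publications.record(date(2022, 12, 1), 2)
publications.record(date(2023, 12, 1), 4)

perceived_trust = KPI("partner-reported trust", "qualitative")
perceived_trust.record(date(2023, 12, 1), "partners report confidence in shared decision-making")

print(f"{publications.name}: {publications.progress():.0%} of target")
```

The point of the sketch is simply that qualitative and quantitative indicators can live in the same tracking record, with quantitative KPIs compared against an agreed target and qualitative KPIs preserved as dated observations for later interpretation by the partners.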
As evaluation of HEI-community partnerships is essential, a 'culture of evidence' has emerged whereby HEIs and institutions more broadly must increasingly document their performance, achievements and impacts (Calderon & Mathies 2013). This culture has largely been driven by the issue of accountability -the 'obligation to report to others, to explain, to justify, to answer questions about how resources have been used and to what effect' (Trow 1996, p. 310). Emergence of the accountability agenda is widespread (de Boer et al. 2015;Jongebloed 2018), being embraced and influential in shaping public administration in a businesslike way (Huisman & Currie 2004;Plummer et al. 2021a). Although clearly an essential process, evaluation of HEI-community partnerships is, surprisingly, an uncommon practice. For example, a recent study of Canadian HEIs found that most HEIs 'occasionally' use some form of monitoring and evaluation, and 25 percent of institutions reported that they do not employ any form of monitoring or evaluation (Plummer et al. 2021a). Evaluative studies of HEI-community partnerships are also rare. In one notable example, Bowen and Martens (2006) tested a collaborative 'utilisation focused' approach to evaluation, whereby community partners were meaningfully incorporated into planning, developing and evaluating all aspects of the partnership/project. More recently, Shephard (2018) investigated how HEI-community partnerships were valued at the University of Otago and showed that partnerships used different types of evaluation, including explicit and formalised evaluations, informal evaluations and implicit evaluations. Efforts in both practice and scholarship are hampered by the lack of guidelines and consensus on how to effectively evaluate HEI partnerships and their impacts. This often leads to the imposition of ineffective, inadequate and varied approaches to describe, understand and evaluate both their activities and impacts (Shephard 2020). This problem is exacerbated by the contextual specificity of both evaluation and partnerships. In an effort to address the above matters, evaluation of the proposed framework specifically focuses on effectiveness and impact. Effectiveness evaluation measures whether the objectives of the partnership have been achieved (Bowen & Martens 2006;Deniston et al. 1968). As illustrated in Figure 1, the agreed upon objectives are typically codified in a document (e.g. an MOU) and are an expression of the desired outcomes of the partnership. They imply one or more necessary conditions which must be met to result in accomplishment (Deniston et al. 1968). As such, the effectiveness indicators are context specific and determined in the initiation phase by the partners themselves (Estrella & Gaventa 1998). Relatedly, impact evaluation is concerned with the causal effects from an intervention (such as a partnership). It encompasses 'positive and negative, primary and secondary long-term effects, directly or indirectly, intended or unintended' (OECD-DAC 2010, p. 24). It is important to give consideration to the temporal dimension of impact assessment in this context. While some of the intended impacts from the partnership are set out during the initiation phase, it is important that the partners are open to unforeseen impacts and document these throughout the partnership life cycle. Additionally, impacts may develop after the partnership has officially ended (as determined in the MOU). 
It then becomes more difficult to isolate long-term effects or impacts, both logistically and resource wise (Van Tulder et al. 2016). Impacts are distinguished from outcome assessment and effectiveness evaluation in two ways. First, impacts refer to the extent to which the intervention (partnership) has made a societal difference (Van Tulder et al. 2016;Worton et al. 2017). Societal audiences, as illustrated in Figure 1, may include students, governments, non-government organisations, industry, academics, citizens, and so on. Second, impacts need to be measured from the perspective of these different audiences (Srinivas et al. 2015). A guide to assessing performance and evaluating the impacts of transdisciplinary partnerships for sustainability by Plummer et al. (2022) outlines key steps in the evaluation process and provides a navigational pathway forward through the process. While the techniques to carry out effectiveness and impact evaluation will be unique to each HEI-community partnership, the suggested framework can assist with overcoming challenges, offer mechanisms to document evidence of achievements and ultimately provide a foundation for transformative change. Practical Application of the Framework While undoubtedly some HEI-community partnerships implicitly incorporate one or more of the components set out above, to the best of our knowledge there is currently no appraisal framework which brings together assessment, monitoring and evaluation. The scholarly basis for such an appraisal framework is provided above. However, the appraisal framework has little heuristic value if it cannot be applied in practice. Here we consider application of the appraisal framework in relation to frequently observed life-cycle stages of HEI-community partnerships (cf Bringle & Hatcher 2002;Lewinson 2014). Although the timeframe in which HEI-community partnerships go through these stages varies, we illustrate them unfolding over a five-year formal partnership. Our intent is to provide a general appraisal guide applicable to HEI-community partnerships, while also acknowledging the specific parts that are context dependent and ultimately require the sound judgement of the entities involved in the appraisal. An appropriate entry point is creation of a HEI-community partnership. The reason(s) for collaboration is paramount at this initiation stage. Instrumental rationale for partnering includes gaining particular knowledge, leveraging complementary skills, gaining access to unique opportunities, and so on. Potential enjoyment from collaboration and cultivating enduring relationships is a common intrinsic motivation. These motivations often precipitate formal codification through a MOU, which sets out the aims, duration, scope, funding contributions, legal considerations, and so on. Ideally, the partners consider when/how often assessment, monitoring and evaluation will occur as well as what each will entail. The performance assessment component of the appraisal framework coincides with the implementation phase of the partnership lifecycle. In this phase, entities are concerned with enacting collaboration. Assessment gauges performance at a particular point in time. There is no prescribed timeline for conducting performance assessment; however, it is important that assessment is continuous so as to provide ongoing feedback for the duration of the partnership (Figure 1). 
From our experience with HEI-community partnerships, we suggest that performance assessment is undertaken annually as this allows sufficient time for tangible consideration while also permitting ideal response time for feedback. The HEI-Community Partnership Performance Index (HCPPI; Plummer et al. 2021a) is a recent validated rapid assessment tool that can easily be completed by HEI and community participants to assess the performance of their collaboration. It involves a 47-item questionnaire designed to provide a comprehensive understanding of a given HEI-community partnership's strengths and areas for improvement across the three broad categories of assessment (inputs, process, and outcomes). As partnerships make incremental adjustments based on this evidence, they employ an adaptive approach for continuous improvement (Folke et al. 2005;Lee 1994). This adaptive approach aligns well with the dynamism and complexity of HEI-community partnerships (Plummer et al. 2020). Monitoring coincides with the implementation phase of the partnership lifecycle and is considered iteratively throughout (Figure 1). It captures progress of an HEI-community partnership in terms of performance and progress towards its aims, goals and objectives. In regard to the former, monitoring provides information on how the partnership is performing over time. In applying the appraisal framework, this part of monitoring is a logical extension of performance assessment and is achieved by tracking the results of the HCPPI over time. More specifically, the results obtained within the first year provide a baseline or benchmark for subsequent performance. Monitoring performance thus affords a barometer by which the partners can gauge their maintenance, improvement and/or deterioration in particular aspects. Including all entities in the partnership is critical to accurately gauge performance as well as learn from feedback and make continuous improvements. Monitoring also concentrates on determining progress. These milestones should be established at the outset of the partnership and revisited as necessary. Determining KPIs that capture the essence of the overall goals or milestones should occur at the outset of the partnership. These KPIs manifest as indicators to measure progress towards goals. KPIs can be qualitative and/or quantitative, and may adapt or change over time as the partnership evolves. A given milestone or goal may have multiple relevant KPIs. The most important consideration regarding KPIs is that they reflect the overall goals and are agreed upon by all entities. Monitoring progress may occur at different times in different partnerships, depending on the length of the partnership and the intended milestones. Often, multi-year partnerships identify yearly milestones and deliverables. Evaluation of the framework offers guides to practically tackle critical questions about HEI-community partnership success, including effectiveness and impact. The intentions of the partnership are typically articulated at the outset in some type of document (Figure 1) or are developed in the formative stages of the partnership lifecycle. When setting out intentions, partnerships will benefit greatly from articulating what they aspire to accomplish as well as whom they seek to affect. In so doing, the stated aims offer a clear understanding of the desired effects and impacts. Specificity in these intentional statements (or substatements) is strongly encouraged to facilitate the operationalisation of evaluation. 
While intentions are mainly established at the outset, partners need to be aware of, and open to, emergent opportunities in terms of accomplishments as well as audiences. Evaluation typically occurs at the completion stage of the partnership life cycle (Sargent & Waters 2004) and entails two main considerations. Evaluating effectiveness is anchored to the goals and objectives established at the outset of the partnership, and is made operational by agreed upon indicators or milestones. Whereas monitoring documents' progress to evaluate effectiveness uses KPIs and other credible evidence to cumulatively judge achievements. Critically, all entities in the partnership should participate in evaluating effectiveness, thereby coming to a shared understanding of accomplishments. Impacts are a related consideration of evaluation when applying the framework. Aspirational societal impacts as a consequence of the partnership are commonly discussed during the initiation phase and evolve over the lifespan of the project. Identifying specific target audiences associated with the intended societal impacts, cultivating relationships with those audiences, and devising ways to assess if they have experienced changes due to the partnership are all essential. Engaging multiple, and often diverse, audiences in evaluation can be logistically difficult, expensive and time-consuming. There are, however, numerous quantitative and qualitative tools that are well suited to conducting impact evaluation. Questionnaires, interviews and/or workshops are powerful examples of means to gain insight into partnership impacts from the perspective of multiple and diverse audiences. Conclusion HEI-community partnerships are on a steady upward trajectory. Appraising their success is of increasing importance; however, a myriad of challenges immediately confronts those interested in this enterprise. These include terminological and conceptual differences, multiple perspectives and ways of knowing, inconsistency about what constitutes success, and measurement messiness. In addition to confronting this quagmire, a rigorous and dynamic framework for appraising HEI-community partnerships does not yet exist (Holton et al. 2015;Plummer et al. 2021a;Srinivas et al. 2015). In this perspective article, we set out a conceptual basis for a comprehensive appraisal framework which encompasses the complementary components of assessment, monitoring and evaluation. The framework provides a general guide to transferability across HEI-community partnerships. Assessment considers the present state in relation to aspects and qualities. Monitoring provides important information regarding how the partnership is performing, as well as the impacts created through the partnership. Evaluation provides evidence of goal attainment and demonstrates the societal benefits of a partnership. In discussing how to apply the framework in practice, we have taken an initial step in reconciling conceptual and applied considerations when appraising HEI-community partnerships. Employing the appraisal framework in a variety of HEI-community partnerships is needed next. Flexibility and adaptation are encouraged as consideration of context as well as the perspectives of the participants is paramount. As experience using the framework accumulates, opportunities will emerge for lesson learning, transferability and further refinement.
8,926.4
2022-06-30T00:00:00.000
[ "Education", "Business", "Sociology", "Economics" ]
The Jun N-terminal kinases signaling pathway plays a “seesaw” role in ovarian carcinoma: a molecular aspect Ovarian cancer is the most common gynecological malignancy that causes cancer-related deaths in women today; this being the case, developing an understanding of ovarian cancer has become one of the major driving forces behind cancer research overall. Moreover, such research over the last 20 years has shown that the Jun N-terminal kinase (JNK) signaling pathway plays an important role in regulating cell death, survival, growth and proliferation within the mitogen-activated protein kinases (MAPK) signaling pathway, an important pathway in the formation of cancer. Furthermore, the JNK signaling pathway is often abnormally activated in human tumors and is frequently reported in the literature for its effect on the progression of ovarian cancer. Although the FDA has approved some JNK inhibitors for melanoma, the agency has not approved JNK inhibitors for ovarian cancer. However, there are some experimental data on inhibitors and activators of the JNK signaling pathway in ovarian cancer, but related clinical trials need to be further improved. Although the Jun N-terminal kinase (JNK) signaling pathway is implicated in the formation of cancer in general, research has also indicated that it has a role in suppressing cancer as well. Here, we summarize this seemingly contradictory role of the JNK signaling pathway in ovarian cancer, which ‘seesaws’ between promoting and suppressing cancer, as well as summarizing the application of several JNK pathway inhibitors in cancer in general, and ovarian cancer in particular. Introduction Ovarian carcinoma (OC) is one of the most common of the gynecologic cancers as well as being the most prevalent cause of gynecologic tumor-related deaths worldwide [1]. To date there are some 239,000 new cases and 152,000 deaths due to OC each year [2]. In the United States during 2018 there were about 22,240 new OC cases resulting in 14,070 deaths [3]. In Europe [1], the OC incidence rate ranges from 6.0 to 11.4 per 100,000 women, and although it is relatively lower in China, there were at least 52,100 new cases and 22,500 deaths in 2015 alone [4]. Most ovarian carcinomas are diagnosed at an advanced stage, of which 51% are diagnosed at stage III and 29% at stage IV [3,5]. What, then, are the risk factors for such incidence levels of OC? Advancing age, overweight or obesity, a first full-term pregnancy after age 35, fertility therapy, hormone therapy after menopause, and a family history of OC, breast cancer or colorectal cancer might all be high-risk factors for OC [6]. In addition, about 50% of OC patients are more than 65 years old [7], and according to early studies in the Netherlands, patients with stage II and III ovarian cancer, even in the absence of comorbidities, did not achieve the same treatment effectiveness as younger patients [8]. This difference may be related to the relatively poorer physical condition of the elderly [8]. However, the latest study indicates that older women with OC are 50% less likely to receive standard treatment than younger women, regardless of the type of treatment. Furthermore, when elderly patients receive personalized treatment, it has been shown that the treatment effect on them can be significantly improved [9,10]. Age itself may not be a high-risk factor [11]. The etiology of OC remains unclear, but 5-10% of OC is thought to be hereditary. 
Hereditary OC, like breast cancer, follows an autosomal dominant pattern of inheritance due to mutations in the BRCA1 and BRCA2 genes. Such gene mutations alter the biological behavior of cells and tissues and, thus, play an indispensable role in promoting the occurrence and development of tumors. According to the dualism of OC, it can be divided into type I ovarian cancer and type II ovarian cancer. Concerning type I OC, the main gene mutations are KRAS, BRAF, PTEN, ARID1A, and PIK3CA; its onset is slow, the diagnosis is mostly made at an early clinical stage, and the prognosis is good. The main mutations in type II OC, however, are TP53 and BRCA1/2; the disease has a rapid, aggressive onset with no prodromal symptoms, and the diagnosis is mostly made at a late clinical stage. Ovarian tissue composition is very complex, and the ovary harbors more types of primary tumors than any other organ in the body. The different histological types differ greatly in structure and biological behavior. According to the histological classification of the World Health Organization (WHO) 2014 edition, ovarian tumors can be divided into 14 categories, the main histological types of which are epithelial tumors, germ cell tumors and cord-stromal tumors. Epithelial tumors are the most common histological type of ovarian tumors, and their histology can be further divided into serous, mucinous and endometrioid types. Serous tumors are the main type of ovarian cancer. In addition, the five-year survival rate of serous cancer is 43%, while that of endometrioid cancer is 82%, and that of mucinous cancer is 71% [3]. Although the optimal treatment is surgery as the main component, supplemented with appropriate chemotherapy such as TC (paclitaxel and carboplatin), TP (paclitaxel and cisplatin) and PC (cisplatin and cyclophosphamide), in about 70% of patients the ovarian cancer will recur. Clinical trials are testing new therapies and drug combinations, such as immunotherapy, targeted therapy, PARP inhibitors and anti-angiogenesis drugs. At present, many experiments and clinical trials have been carried out to improve the therapeutic effect on ovarian cancer with the single or combined treatment of potential candidates, and some results have been achieved. However, much remains to be done. Specifically, understanding the genes of abnormally activated or expressed signal transduction pathways in the development of ovarian cancer will help to improve the prognosis of ovarian cancer and the development of new therapies. In this regard, and according to the latest cancer genome research, many signaling pathways malfunction in the development of cancer [12,13]. For instance, the abnormal activation of the Jun N-terminal kinases signaling pathway, one of the MAPK signaling pathways, has been reported frequently in ovarian cancer, making it one of the most important signaling pathways in the treatment of ovarian cancer [14][15][16][17]. To inform our discussions in this paper on the roles of JNK in cell death, cancer cell growth, and chemotherapy resistance, it is worthwhile to first summarize the salient features of both the MAPK and JNK signaling pathways. Key features of the MAPK and the JNK signaling pathway The MAPK pathway includes a series of protein kinases such as ERK1/2, p38 α/β/γ/δ MAPK and c-Jun amino terminal kinase 1/2/3 (JNK1/2/3), which control cell proliferation, cell survival and cell death in various processes [18,19]. 
JNKs, also called 'stress-activated' protein kinases like the p38 MAPKs, respond to various extracellular stimuli in different organisms. JNK1/2/3 have been demonstrated in mammals, where alternative splicing yields more than ten different transcript subtypes. The JNK proteins range in size from 46 to 55 kDa. JNKs can be activated by a phosphorylation cascade involving MAPKKKs and MAPKKs. More than 20 types of MAPKKK have been identified, such as the MEKK family member proteins and the apoptosis-related kinases, which regulate and phosphorylate two different MAPKKs, MKK4 and MKK7; these in turn activate JNK by phosphorylating the conserved tripeptide motif, that is, the tyrosine 185 and threonine 183 residues [19,20]. Moreover, scaffold proteins, nuclear factor-kB and dual-specificity phosphatases also affect the activity of JNKs [19]. Various combinations of MAPKKK and MAPKK can control JNK signaling. Apoptosis signal-regulating kinase 1 (ASK1) is kept in an inactive state under normal conditions through its interaction with thioredoxin [21]. Activation of JNK by ASK1/MAPKKK5 is the mechanism underlying some kidney diseases [22]. Cypermethrin impairs astrocytes and disrupts the development of the extracellular matrix by regulating reactive oxygen species (ROS), Ca2+ and the JNK pathway [23]. Continuous activation of JNK in hepatocytes can lead to cell death or metabolic abnormalities [24]. TNF-related apoptosis-inducing ligand promotes apoptosis mediated by Bim, a homologue of Bcl-2, by activating JNK and its downstream substrates, and enhances anti-Fas-induced apoptosis of hepatocytes [25]. Increased ROS inhibits JNK phosphatases such as mitogen-activated protein kinase phosphatase 1 (MKP1/DUSP1), which contributes to the continuous activation of JNK [26]. ROS can also relieve the inhibition of ASK1 [27]. Furthermore, MAP2K/MAP3K, interleukin-1, epidermal growth factor, drugs, endoplasmic reticulum stress (ER stress) and environmental stresses such as heat, hypoxia and radiation can also activate JNK. Many molecules, such as STAT1/3, c-jun, c-Myc, FOXO4, Bcl-2, ATF2, Smad2/3, PPARγ1 and RXRα, have been demonstrated to be JNK substrates. JNKs increase the transcriptional activity of c-jun by binding and phosphorylating c-jun at Ser73 and Ser63 [28]. c-jun is a component of AP-1, which regulates gene expression; AP-1, like the JNKs, can be activated by radiation, various stresses or inflammatory cytokines. The activity of c-jun is essential for Ha-Ras-mediated carcinogenic transformation [29]. Phosphorylation of c-jun at Ser73 and Ser63 via Ha-Ras, c-Raf and v-Src induces carcinogenic transformation in embryonic fibroblasts [30,31]. JNK also regulates activating transcription factor 2 (ATF-2) and intracellular mitochondrial signaling through target proteins such as Sab [32,33]. The inhibitory Tat-Sab peptide interferes with the activity of JNK, blocking the increase in ROS. However, there is also contradictory evidence; for example, loss of the JNK1/2 genes induces fetal death in mice. JNKs also control cell autophagy, proliferation and migration. Furthermore, experiments involving the editing of mouse genes suggest that overactivation or loss of JNK pathway function induces cancer growth, inflammation and metabolism-related diseases. Some evidence supports the idea that JNK promotes tumor development, whilst other evidence demonstrates that JNK plays an important role in suppressing cancer. 
Note that c-jun is highly expressed in ovarian cancer tissues [34]. Furthermore, the JNK signaling pathway is abnormally active in platinum-resistant ovarian cancer, but platinum drugs can also promote apoptosis of ovarian cancer cells by activating the JNK signaling pathway [35][36][37]. Because the effect of the JNK signaling pathway in carcinoma is extremely complex but crucial, it may, therefore, be a potential target for molecularly targeted cancer therapy. JNK and ovarian tumors: role in cell death Cell death is important for maintaining intracellular homeostasis and is critically regulated by signaling pathways. The ability to increase the number of tumor cells depends not only on the rate of cell proliferation, but also on the rate of cell depletion [38]. Once the signal-regulated pathways of apoptosis, autophagy and other cell functions become abnormal, the rate of cell depletion may be weakened, which accelerates the development of cancer. Cell death includes necrosis and apoptosis. Apoptosis, a programmed death, is characterized by changes in the nucleus and the formation of apoptotic bodies. The death receptor pathway, mitochondrial pathway and endoplasmic reticulum pathway can cause cell death; in addition, autophagy can also cause apoptosis. After the discovery of lysosomes, the Greek term autophagy was proposed. The word can be divided into the roots 'auto' and 'phagy', referring to the process in which organelles in the cytoplasm are transported to lysosomes and degraded [39]. Autophagy proceeds through double-membrane vesicles and lysosomes, namely the autophagosome, which engulfs damaged or harmful proteins, organelles and complexes; the damaged organelles and proteins are then degraded within the autophagosome. Although autophagy protects both normal and cancer cells from apoptosis, especially in environments of cytotoxicity, malnutrition and other stimuli, when a certain limit is exceeded, autophagy can cause apoptosis of normal and cancer cells [40][41][42]. Autophagy can generally be divided into at least three major categories: macroautophagy, microautophagy and chaperone-mediated autophagy [43]. Usually, 'autophagy' refers to macroautophagy. Many studies have reported that autophagy plays a dual role, helping cell survival or inducing cell death, depending on the strength or kind of stimulus and the type of cell. The JNK pathway plays an essential role in OC autophagy and cell death. Autophagy mediated by the JNK signaling pathway can not only help OC cells avoid apoptosis when the tumor cells are in a nutrient-poor or cytotoxic environment, but can also induce autophagy-mediated cell death. Activation of the JNK signaling pathway induces autophagic cell death in OC cells by promoting the conversion of LC3-I to LC3-II and the formation of autophagosomes, accompanied by inhibition of the cell cycle in the G1 stage via up-regulation of the expression of p27 and p21 [44]. The unfolded protein response (UPR) and inositol-requiring enzyme 1α (IRE1α) are essential to autophagy. The endoplasmic reticulum stress (ER stress) response, caused by nutrient deprivation, hypoxia, mitochondrial dysfunction, chemotherapy, etc., can enable the activation of PERK (double-stranded RNA-activated protein kinase-like ER kinase), IRE1α and ATF6, and is also controlled by JNK [42]. 
ER stress results in the accumulation of incorrectly folded proteins in the cell. Although there are mechanisms that help deal with these proteins, such as the UPR mediated by three different ER transmembrane receptors, ATF6, PERK and IRE1, when these protective responses are not enough to cope with ER stress, cells ultimately undergo apoptosis [45]. IRE1α is also an essential regulator of the JNK pathway. UPR autophagy-mediated cell death might rely on modulation of IRE1α-JNK activity [46]. The ER chaperone proteins Gadd153 and GRP78 also play an important role in UPR-induced cell death or survival [46,47]. When the UPR in cells exceeds a certain limit, damaged cells will die. This may be due to activation of JNK/AP-1/Gadd153, which inhibits the expression of NF-kappa B or Bcl-2, with mediation by ATF6 and ATF4 in OC cells [46,48]. Furthermore, it has been reported that ER stress cannot induce apoptosis in caspase-12-deficient cells [49]. This result might be because IRE1 needs to recruit TRAF2 as well as caspase-12 in order to induce apoptosis [49,50]. The use of drugs related to ER stress or autophagy and silencing of the autophagy-related proteins Beclin 1 and LC3 can increase cell survival in OC cells [42]. Low glucose not only induces apoptosis via ER stress/IRE1α/JNK, but also impairs the generation of cellular ATP, resulting in cell death via the activation of ASK1 [51]. ASK1 activation leads to sustained activation of JNK [51,52]. Energy metabolism reprogramming is one of the hallmarks of cancer [38]. Dr. Otto Warburg pointed out that cancer cells mainly rely on glycolysis. Furthermore, due to the low efficiency of glycolysis, a high rate of glucose uptake is essential for cancer cells [53]. Low-glucose environments could, therefore, enhance the cytotoxicity of some drugs such as metformin via energetic stress that produces reactive oxygen species (ROS), influences the mitochondrial membrane potential and generates other byproducts of damaged cells originating from the process of mitochondrial oxidative phosphorylation [51,54]. ROS-mediated apoptosis might, therefore, involve activation of the ASK1/MKK4/7/JNK/mitochondrial signaling pathway [36]. AIF induces apoptosis through the mitochondrial pathway by translocating from the mitochondria to the nucleus [55]. In this process, Bcl-2 expression decreases and Bax/Bak expression increases, while caspase 3/8/9 does not change significantly [56][57][58]. SP600125, a specific inhibitor of JNK/SAPK, prevents the decrease in Bcl-2 expression, the increase in Bax expression and the nuclear translocation of AIF, so as to reduce apoptosis [59]. Activating transcription factor 2 (ATF-2), a component of AP-1, mediates JNK/SAPK pathway-induced apoptosis by increasing the trimethylation of histone H3K9 associated with the AP-1 binding region that interacts with the Bcl-2 promoter [58]. Inhibition of the Akt signaling pathway and c-FLIPL activation can induce the apoptosis of OC cells through up-regulation of p-JNK and subsequent activation of caspase-3 [60]. However, there are also conflicting experimental results suggesting that inhibiting JNK activity may promote cancer cell death. SP600458 can significantly enhance nilotinib-induced death of OC cells, accompanied by PARP1 protein cleavage and caspase-3 activation, by disrupting the mitochondrial membrane potential (MMP) [61]. This may be because different subtypes of JNK play different roles. It has been pointed out that WBZ_4, a new inhibitor of JNK1, or targeting of A2780CP20, SKOV3 and HEYA8 cells with JNK1 siRNA, can significantly inhibit the proliferation of OC cells [62]. 
Furthermore, a clinical trial of ovarian cancer (NCT01015118) indicated that, Nintedanib, which is a triple vascular kinase inhibitor of the VEGF receptor, a platelet-derived growth factor receptor and a fibroblast growth factor receptor, combined with carboplatin and paclitaxel can significantly improve progression-free survival in patients with advanced ovarian cancer [63,64]. Nintedanib can up-regulate the expression of pulmonary surfactant protein D in A549 cells through the JNK/AP-1 pathway, and alleviate lung fibrosis, however, this does not affect cell proliferation [65] but it can inhibit the proliferation of prostate cancer cells [66]. Concerning ovarian cancer, it is clear that further research is needed to demonstrate how Nintedanib affects the development of ovarian cancer through JNK signaling pathway. At present, there are only a few reports on JNK inhibitors in the treatment of ovarian cancer; this may be due to the dual role of the JNK signaling pathway, discussed above, in promoting ovarian cancer apoptosis. For instance, mice lacking only one mutant of jnk1/2/3 and either a mutant of jnk1/jnk3 or jnk2/jnk3 can survive normally, but lacking JNK1 and jnk2 at the same time leads to the death of mice embryos and a very serious abnormal apoptosis in their brain cells [67]. In addition, the phosphorylation of JNK1, JNK2 and c-jun is necessary for ultraviolet-induced apoptosis, but JNK2 activated by ER stress can promote the survival of cells [36,68]. The mechanism of JNK-induced cell death in ovarian cancer is depicted in Fig. 1. The function of JNK in the regulation ovarian cancer cell growth Normal cells need to be regulated by growth signals to change from a stationary state to a proliferative state. Without the stimulation of such signals, no normal cells will proliferate [69]. However, for cancer cells, the proliferation behavior relying on endogenous growth signals is greatly reduced, probably because cancer cells mainly rely on the regulation of carcinogenic genes and their own growth signals. In addition to self-sufficient growth signals and gene mutations, autophagy may also promote tumorigenesis and development by avoiding cell death through internal regulation. The type of high level serous ovarian cancer with highly aggressive ability is the cause of most, 70-80% [70], of ovarian cancer deaths. Furthermore, this pathological type is the most common type of ovarian cancer. TP53 mutation may induce cell loss through the normal function of anti-cancer, which covers or masks the role of the p53 in normal cell. Carcinomas with TP53 mutations generally have the characteristics of high aggressiveness, invasiveness, and poor-differentiation. According to the report, the frequency of mutation in ovarian cancer ranges from about 50 to 100% [70,71], such a high rate is attributed to the different histological types of ovarian cancers having different frequencies of mutation. It shows that the frequency of TP53 mutation is rare in low level serous cancer or borderline carcinomas, whereas TP53 mutation is very common in high level serous ovarian cancer, even reaching 100% in some samples [71,72]. The mutation of TP53 can weaken JNK/ROS/TP53 signaling pathway-mediating cell apoptosis. The tumorigenic effect of JNK is not only weakened by inhibition activity, but also can be promoted by JNK itself. 
Furthermore, there are other gene mutations in ovarian tumors, such as KRAS, BRAF, and PTEN, which induce the activation of the JNK signaling pathway to promote the growth of cancer cells [62]. In addition, mutation of RAS can also contribute to the growth of ovarian cancer. RAS family genes, including HRAS, KRAS and NRAS, are the most common genes to undergo mutations in tumors. Ras v12 mutations interact with JNK-Drosophila TGF-β-activated kinase 1 (dTAK1) signaling to mediate the growth of tumors, contributing to their uncontrolled proliferation [73]. This might be another reason for inducing the rapid growth of ovarian cells, but further experiments need to be done to confirm this. Fig. 1 A schematic diagram of the JNK signaling pathway promoting cell death in ovarian cancer. a Endoplasmic reticulum (ER) stress, reactive oxygen species (ROS), and the Akt signaling pathway can regulate IRE1α, the UPR and ASK1, thereby activating MKK4/7 to regulate JNK signaling pathway activity, leading to autophagic cell death, mitochondrial pathway-mediated cell death and AP-1-induced apoptosis. Death-related antibodies mediate cell death. In addition, the JNK signaling pathway can regulate the expression of p27/p21 and cause cell cycle arrest. b siRNA and the JNK-related inhibitor WBZ_4 inhibit JNK1 expression and cell proliferation. c The JNK inhibitor SP600458 affects the JNK signaling pathway and destroys MMP, thereby enhancing the expression of PARP1 and caspase-3 and, therefore, promoting cell death. As an important target molecule downstream of JNK, c-jun is expressed at a high level in ovarian cancer tissues, and its expression is correlated with malignancy [34]. Therefore, inhibiting the JNK1/c-jun signaling pathway should significantly decrease the capacity for cellular proliferation and invasion [74,75]. Tumor necrosis factor-alpha-mediated apoptosis can induce transient activation of JNK and regulate NF-kappa B to protect cells from apoptosis [68]. Furthermore, it is positively correlated with alpha 1,2-fucosyltransferase 1/2 (FUT1/2) as well as Lewis y, a type II sugar. The binding of c-jun to the promoter of FUT1 induces the proliferation of ovarian cancer cells, and the Lewis y/FUT1 complex might be regulated by TGF-β1 [76,77]. It has been pointed out that c-Fos can enhance the proliferation of ovarian cancer cells induced by TGF-β1/c-jun [77]. Knocking out c-Fos can down-regulate the expression of cyclin D1 and CD44 as well as inhibit the proliferation and invasion of ovarian cancer cells [77,78]. Aging human peritoneal mesothelial cells may promote the proliferation and migration of ovarian cancer cells through AP-1/c-jun, IL-6 and TGF-β, as well as tumor-secreted phenotypes such as angiogenic agents including vascular growth factor and CXCL1 [79]. Moreover, there is also evidence to suggest that the JNK signaling pathway promotes cancer cell survival in the regulation of cancer cell autophagy and growth [68]. Autophagy can not only induce tumor cell apoptosis, as discussed above, but can also induce tumorigenesis and development via the JNK signaling pathway [68,80]. Many studies have reported that autophagy helps cell survival and proliferation, and in many contexts, such as ER stress, chemotherapy and other diseases, it can do so via activation of the JNK signaling pathway when the stimulus is short-lived [68,81]. Different JNK subtypes may have different functions. 
ER stress can instantaneously activate JNK2 to up-regulate BiP, inhibit apoptosis and prevent cell death caused by ER stress, and to down-regulate the expression of CHOP, a transcription factor of cell death and apoptosis [68]. Bcl-2 is a downstream molecule of the JNK signaling pathway during autophagy. Also, the JNK signaling pathway, activated by IRE1α, may regulate Bcl-2 phosphorylation at Ser70 via JNK1 rather than JNK2, thus promoting cell survival [40,82]. Inhibiting the activity of JNK impairs cancer cell development [83]. TP53 negatively regulates Bcl-2, inducing alteration of the pro-apoptotic Bax/Bak, which initiates apoptosis [84]. Note that ovarian cancer, especially high level serous ovarian cancer, has a high frequency of TP53 mutation [71]. Inhibiting the activation of Bcl-2 contributes to apoptosis, which suggests that the JNK1-Bcl-2 signaling pathway is crucial for autophagy that helps cell survival. There is, however, another explanation: JNK mediates the dissociation of the Beclin1 and Bcl-2/Bcl-XL complexes and induces autophagy. Instantaneous activation of JNK1 can also induce cell survival, but long-term activation can lead to apoptosis [85]. When cells are in a starved state, it is JNK1, rather than JNK2 or JNK3, that mediates the phosphorylation of Bcl-2 so that Beclin1 dissociates from the complex and stimulates autophagy [40]. Autophagy-mediated evasion of apoptosis can help cancer cells maintain a proliferating cell base through intracellular regulatory mechanisms, and this self-regulation of autophagy may be one of the important reasons for drug resistance and recurrence of cancer. To sum up, the discussion shows that autophagy plays a positive role in the regulation of tumor cell survival via JNK signaling, and gene mutation also promotes tumor growth via activation of the JNK pathway. JNK1 induces phosphorylation of Bcl-2, depending on the duration of the stimulus, which promotes cell survival. Further research is, therefore, essential to clearly explain the mechanism by which JNK signaling prevents cell death. Although there are a number of studies in other cancer research areas concerning the promotion of cell survival and growth via the JNK pathway, there are very few in the realm of ovarian cancer, and those few only show that inhibiting the JNK pathway impairs the growth of ovarian cancer without demonstrating the underlying mechanism. A potential mechanism, as discussed, for JNK signaling pathway-mediated tumor cell survival and growth is displayed in Fig. 2. The role of the JNK pathway in chemotherapy resistance The development of chemotherapy resistance in cancer cells is one of the important reasons for the impaired effect of drugs, and may be due to an abnormally activated pathway. Such resistance of tumor cells generally occurs after the initial chemotherapy is given; in the later stage, drug resistance, aggravation and then recurrence may occur in a process similar to Darwinian natural selection, the survival of the fittest. As the duration of drug exposure increases, the resistance of tumor cells will gradually increase. Chemotherapy kills most ovarian cancer cells, but some cancer stem cells with unique hereditary characteristics survive and represent a hidden danger for future recurrence and drug resistance [86]. 
In addition, the body's immune system is also a very important accomplice in the recurrence of ovarian cancer: by exploiting immune cell recognition functions, tumor cells can modify their own immunogenicity, a process known as cancer immunoediting [87,88]. For instance, cisplatin, a first-line chemotherapy drug for the treatment of advanced ovarian tumors, induces cancer cell death at rates above 50%, but the subsequent treatment effect decreases owing to drug resistance or recurrence after long-term chemotherapy exposure [89]. Therefore, in order to look for potential therapeutic targets of chemotherapy resistance to improve the effect of treatment, many studies have focused on the mechanism of chemotherapy drug resistance. Activation of the JNK signaling pathway plays an important role in promoting drug resistance in cancer [90,91]. Furthermore, many studies report that c-jun is overexpressed in cisplatin-resistant ovarian cancer tissue [34]. Expression of a dominant-negative c-jun mutant can enhance the sensitivity of the human ovarian cancer cell lines Caov-3 and A2780 to cisplatin [92]. JNK activity is higher in cisplatin- or paclitaxel-resistant human ovarian cancer cell lines and is positively correlated with drug resistance. In addition, high expression of active JNK is reported to be closely associated with stage III and stage IV disease compared with stage I and stage II, and is also negatively correlated with the survival rate of patients with ovarian cancer [92]. Activation of JNK may play a key role in the DNA repair mechanism triggered by cisplatin therapy. When cisplatin and SP600125 were combined to treat ovarian cancer cells, the anti-cancer effect was improved compared with cisplatin alone. In contrast, when paclitaxel is combined with SP600125, the anti-cancer effect of paclitaxel is weakened, because the death signal caused by paclitaxel is related to JNK activity. However, pretreatment of ovarian cancer cells with SP600125 can improve the anti-cancer effects of cisplatin and paclitaxel [92]. Platinum-based drugs increase the level of ROS via the mitochondrial apoptosis signaling pathway. ROS, however, damages DNA, proteins and the integrity of other molecules. Note that sirtuin 6 (SIRT6), phosphorylated at serine 10 by the JNK pathway, modifies Kap1 so as to promote the silencing of L1 retrotransposons [93]; SIRT6 also mediates mono-adenosine diphosphate ribosylation of PARP1, which increases the poly-adenosine diphosphate ribosylation activity of PARP1 and promotes DNA double-strand break repair [93]. Resistance may also be related to JNK-induced autophagy or to gene mutations in the tumors themselves that help cells survive. After all, JNK can induce both autophagy and apoptosis by regulating the interactions of the Beclin1-Bcl2 and Bcl2-Bax complexes [94,95]. Activation of JNK leads to the destruction of the Bax-Bcl2 complex, which increases the expression of Bax and promotes apoptosis [96]. JNK-induced autophagy depends on the dissociation of Beclin1 from the Beclin1-Bcl2 complex via phosphorylation of Bcl2. The production of ROS results in mitochondrial damage. There is a small positive regulatory loop among ROS, JNK and p53 in activating apoptosis: ROS can activate the JNK signaling pathway, separation of the p53-MDM2 complex can lead to ROS accumulation, and JNK can activate the downstream gene p53 [95]. 
The ROS/JNK signaling pathway can promote rapid apoptosis through p53; however, mutation of p53 in some ovarian cancer tissues may allow cells to avoid rapid apoptosis mediated by the ROS/JNK/p53 signaling pathway. MicroRNAs (miRNAs), about 21 nucleotides in length, do not directly encode proteins; they are also called non-coding RNAs. MiRNAs regulate gene expression at the transcriptional level and are also associated with chemotherapy drug resistance as well as cancer progression and oncogenesis. MiR-21, highly overexpressed in many cancers such as breast and ovarian cancer, induces resistance to cisplatin or paclitaxel [75,97]. The promoter region of pri-miR-21 interacts specifically with p-c-jun, and c-jun is activated by JNK1, not JNK2 or JNK3, in the chemotherapy resistance of ovarian tumor cells [75]. MiR-21 inhibits the expression of programmed cell death 4 (PDCD4), which is directly related to poor prognosis of ovarian tumor patients [98]. In addition, miR-21 also regulates the expression of hypoxia-inducible factor-1α, influencing tumor cell metabolism [99]. Fra-1, a member of the Fos family, forms the AP-1 dimer with members of the Jun family. AP-1 regulates miR-134 so as to augment the function of the JNK signaling pathway in mediating chemotherapy insensitivity of ovarian tumor cells [100]. However, not all miRNAs promote the development of tumor chemotherapy resistance. For example, miR-139-5p can interact with c-jun by binding the 3′ UTR of its mRNA, which interferes with the cisplatin-induced association of c-jun and ATF2 and also inhibits the expression of Bcl-xl in cisplatin-resistant cells [101]. These studies suggest that the JNK signaling pathway plays an essential role in the chemotherapy resistance of ovarian cancer cells, although its functions in tumors are contradictory. Autophagy mediated by the JNK signaling pathway may provide resistance to chemotherapeutic drugs, leading to drug resistance in the later stage of relapse. Mutations in the TP53 gene may inhibit apoptosis induced by ROS/JNK/p53, although the JNK pathway may induce apoptosis in a caspase-dependent manner. Studies have shown that staggered use of JNK inhibitors and the chemotherapeutic drugs platinum or paclitaxel can increase chemotherapeutic sensitivity without increasing side effects. The mechanism of JNK-mediated chemotherapeutic drug resistance discussed above is shown in Fig. 3. It is clear that JNK-mediated drug resistance in ovarian cancer chemotherapy is a difficult problem that needs to be solved as soon as possible. Conclusion It is clear that understanding the dual role of the JNK signaling pathway, which can both promote tumorigenesis and inhibit tumor progression, the so-called seesaw role, is very important in the study of cancer. The immune escape mechanism of tumors may also be involved in JNK-mediated tumor cell survival, for example through interleukins and tumor necrosis factor. Although many studies have provided detailed evidence that JNK both promotes the survival of cancer cells and inhibits the development of cancer, many problems remain unsolved, including the following: (1) The JNK signaling pathway mediates cell death in the early stage and induces drug resistance in the later stage. Beyond the duration of activation and the phosphorylation of SIRT6 to repair DNA breaks, how this specifically affects the JNK signaling pathway has not been confirmed, and in vivo experiments and clinical studies need to be improved. 
(2) JNK1 can promote cell proliferation, and autophagy induced by external stimulation of the JNK signaling pathway can also help cancer cells avoid apoptosis induced by drugs. However, under what stress conditions these external stimuli and chemotherapeutics cause autophagic apoptosis rather than autophagy remains to be clarified. Fig. 3 JNK signaling pathway-mediated drug resistance. a Drug resistance mediated by the JNK signaling pathway in ovarian cancer can involve repair of DNA damage through SIRT6/PARP1 or reduced sensitivity of ovarian cancer tissues to chemotherapy via c-Jun. b Drug resistance mediated by the JNK signaling pathway also involves microRNAs: miR-21 inhibits the expression of PDCD4 and promotes resistance, or the interaction between miR-134 and the JNK signaling pathway promotes the activation of c-jun/ATF2 and promotes drug resistance. (3) At present, the evidence of many oncology experiments related to the JNK signaling pathway comes from cell experiments, especially on ovarian cancer cells. The study of mouse models and clinical trials will help to improve our understanding of the function of the JNK signaling pathway in tumors, because the occurrence and development of cancer is affected by the overall internal environment of the body, not by a single factor. The JNK signaling pathway regulates both cell death and survival. It is, therefore, important to study the role of the JNK signaling pathway in each of these conflicting situations and the mechanisms involved in "see-sawing" between them. In the future, it is necessary to determine the specific relationship between JNK structure and function. In view of this, we firmly believe that, in the near future, defining the specific mechanism of JNK in the process of tumorigenesis and development, and targeting the JNK signaling pathway with cancer treatment-related drugs, will provide tremendous help to spearhead a breakthrough in anti-cancer treatment.
7,785.4
2019-10-21T00:00:00.000
[ "Biology" ]
The Future of Higher Education Distance Learning in Canada, the United States, and France: Insights From Pre-COVID-19 Secondary Data Analysis Evolving information and communication technology creates new spaces, learning materials, and demands in training institutions. Higher education distance learning (HEDL) responses to these transformations are varied, and its development strategies differ from one country to another. Interpreting pre-COVID-19 secondary data, this article redefines the concept of distance learning and analyzes HEDL supply in Canada, the United States, and France, highlighting its main current trends and challenges. The COVID-19 crisis is triggering an online learning outbreak, and we do not know what will remain when the crisis is over. If we must rely on data projected before the crisis, how can we anticipate the evolution of distance learning in universities? Technological transformations will continue to grow around the world (Docebo, 2016; Organisation for Economic Co-operation and Development [OECD], 2017a), changing not only the landscape of trade and labor but also creating new training and learning situations. In fact, they influence the accessibility and availability of distance education and training. For example, more than 4.4 million learners are enrolled in more than 2,497 programs and 18,342 courses in all disciplines at 27 open universities in the Commonwealth, spread over four continents (Africa, Asia, Europe, and America), with 300% growth in 2017 compared to 1987 (Commonwealth of Learning, 2017). These transformations intensify empowerment in learning, creating new relationships to knowledge, generating genuine needs and forms of learning, and requiring new ways of achieving this learning. More than access, education must focus on quality and learning relevance (Peña-López, 2015) to ensure competitiveness in a changing labor market and the overall economic performance of a country (Zhang et al., 2017); develop an adaptive postsecondary distance learning (DL) supply; and implement interventions that strengthen digital and technical skills, STEM, and employability skills (complex problem solving, critical thinking, creativity, management, etc.; World Economic Forum, 2018). The objective of this study is to analyze higher education distance learning (HEDL) supply in the three target countries and recent developments in HEDL. The Distance Learning Top Rated: Evidence from the Market Social distancing has forced every mode of learning to shift toward DL, which increased unexpectedly during the COVID-19 pandemic. Either way, DL was already experiencing a deep change, suggesting its growing importance in the market. According to the literature, DL faces several challenges, including the emergence and rapid growth of learning needs, appropriate training delivery, and the adaptation of supply to technological advances. Contrary to Ambient Insight Research's 2021 projections of a declining training market, technological change is creating new needs, uses, and applications in education and training. The DL market generated revenues of US$42.7 billion in 2013 and US$46.7 billion in 2016 (Docebo, 2016). Between 2013 and 2018, it grew globally in all other world regions: Africa, 16.4%; Latin America, 9.7%; Asia, 8.9%; Eastern Europe, 8.4%; Central Europe, 6.3% (Ambient Insight Research, 2014, in OECD, 2015). With a projected annual growth of 10.26% between 2018 and 2023, it would represent US$286.62 billion.
Considering advanced improvements in artificial intelligence platforms and strong demand for flexible learning technology solutions, its revenue for 2021 is estimated at US$1,189 million in Latin America, US$16,967 million in North America, US$5,874.8 million in Asia, US$8.4 million in Europe, and US$636.3 million in Africa (Docebo, 2018; Figures 1 and 2). Technological applications offer evolving and emerging learning opportunities (virtual classes, mobile learning, rapid e-learning, etc.). Three quarters (74%) of the world's population currently has access to email, and each person will be connected to at least three devices by 2022 (OECD, 2019a). Internet use among 16- to 24-year-olds is around 100% in most OECD countries (90% in Israel and Italy, 85% in Mexico and Turkey) and is higher among those with postsecondary education (OECD, 2017a). With the growing popularity of online learning among Generation Z and millennials (Docebo, 2018), academic leaders consider DL a growing force (Seaman, 2014, in OECD, 2015). In 2013, about 90% of American university leaders indicated that a majority of students were likely to enroll in at least one online course within the next 5 years; 70.8% of these leaders (compared to 8.6% in 2014) considered DL critical to their institutions' long-term strategy. This vision of DL should be supported by the national education system to ensure the global competitiveness of DL and to balance DL organizational objectives with learners' practical learning needs. Several studies highlight the correlation between the environment and learning outcomes. Consequently, DL investments and solutions should maximize learning spaces to increase any benefits related to this type of training (Barrett et al., 2019). For the majority of Canadian postsecondary institutions (69%), DL is important for their future, regardless of sector; more than 60% of the institutions with 30,000 or more students believe it promotes education innovation; one fifth see it as a way to implement provincial government policies (Bates et al., 2017a). With increasing automation and artificial intelligence on the verge of the fifth mobile generation, technological applications offer multiple potentials for innovation and for the development of training environments. Rapidly evolving higher education DL models offer comparative advantages (Huynh et al., 2003; Wagner et al., 2008). To understand how DL is adapted to actual needs, a national HEDL system can be understood through the transformations of its supply in the last few years. The Study Objectives This study considers that any efficient response should take into account technology use trends, the technological environment, and learners' habits and customs. It aims to answer the following fundamental question: How do the national systems of the three countries respond to this growing demand? To do so, it examines two subquestions: What are the changing trends in distance learning? How do national strategies align with these trends? The Methodology This research is descriptive and uses secondary data. To analyze DL supply in the three countries, the adopted methodological approach is a systematic review of the literature based on the following selection criteria: the research strategy, the study design, the date of publication of documents and reports, and the quality of the documents. Data from the Google Scholar database and the websites of key stakeholders in distance training (educational institutions, associations, and organizations) have been analyzed.
Conceptual Framework DL is an umbrella concept with a rich vocabulary: technical, varied, evolutionary, and encompassing several quite different training situations. For Basak et al. (2018), digital learning brings together technical teaching solutions and learning in a format that fits today's digital world of work and learning. Thus, DL should be constantly redefined to include a variety of emerging devices and practices. It includes online learning and all forms of education delivery to off-campus students (Bates et al., 2017a, 2017c), based on any training approach that replaces face-to-face teaching in a traditional classroom at a specific time and place (Volery & Lord, 2000), allowing autonomous learning and requiring rare physical encounters between learners and their teachers (Ferreira, 2006). It is an educational process of teaching-learning (A. Martel, 1999), which implies, to a certain degree, a dissociation of teaching and learning in space and/or time (Conseil supérieur de l'éducation, 2015a, 2015b), volitional control of individual learning, noncontiguous communication between learner and trainer (Sherry, 1995), and technology (online or offline). Learning, which also includes teaching and its corollary, training, could be defined as a process of communicating instructive information between two parties (teacher-trainers and learners) that allows a learner to build the required knowledge. It includes courses, programs, and other educational experiences delivered through traditional means (print, paper, and radio; Sherry, 1995) and/or online, within the entire spectrum of providing instructive information remotely (Sener, 2010). Even through media devices, the act of communication sometimes includes two-way exchanges between learners and trainers or with peers. In the context of widespread 21st-century internet use, is there a distinction between "remote" and "online"? Many online tools (email, attachments, online review, viewing, printing, databases, syllabus images, etc.) are used synchronously or asynchronously, both in the classroom and remotely, by learners and teachers. This leads to blurred boundaries between these two modes of training, generating a hybridization or creating a continuum between DL and traditional training (Charlier et al., 2006). Online training provides access to educational experiences (Conrad, 2002; Moore et al., 2011). However, with the internet, technological distance denies time and space, referring to temporal/timeless, spatial/a-spatial, training/learning, and synchronous/asynchronous dimensions (Massé et al., 2014, in Grégoire, 2017). Thus, the various names such as distance education, multimedia training, open and distance learning, tailor-made training, e-learning (or e-training), and online training represent the same reality as DL (Deschênes & Maltais, 2006). DL is formal when it refers to a set of activities organized in the education system (public or private educational institutions, colleges, universities, and other educational institutions), which are the normal pathway to full-time or part-time student enrolment (OECD, 2017b). It is framed by the demands and constraints of pedagogy and, even more, by the transformation of training paths (A. Martel, 1999), the institutional accreditation of training, and the social recognition that accompanies it. The articulation of teaching/learning at the heart of the transmission/acquisition of information translates into another articulation, that of the supply of and demand for training.
Demand is defined by targets, their representativeness, and their characteristics (European Commission/EACEA/Eurydice, 2018), including ease of use of mediated communication, perceived utility, and the cost-benefit of training (Ferreira, 2006). Chitkushev et al. (2014) consider DL an educational tool that influences the higher education offer in different ways (popularity, mode of delivery), and it continues to spread and gain popularity day by day in the digital world. The offering is characterized by educational performance, success rate, external quality assurance, recognition of informal and nonformal learning, and its social dimension (European Commission/EACEA/Eurydice, 2018). It depends on its funding and on its capacity for systematic self-organization, which needs to be captured from a holistic perspective of the change process, the emerging properties of creating and controlling technological change (Wotto et al., 2017), the institutional-level digital training strategy, leadership (Ngamau, 2013), the technical infrastructure (Masoumi, 2010), and management support (Mavengere & Ruohonen, 2010; Ngamau, 2013). It is influenced by institutional, managerial, and ethical factors (Basak et al., 2016), socioeconomic factors (Van der Wende, 2003), cultural factors, the education system and its institutional organization, and the changing role of government policies (Middlehurst, 2001). National Education System Responses to DL Needs In the three countries, HEDL is characterized by rapidly increasing registrations, the existence and spread of national networks of suppliers giving birth to complex DL platforms (particularly in Canada and the United States), and internationalization, especially in France and the United States. In these systems, accredited DL and Massive Open Online Courses (MOOCs) coexist. Higher Education DL in Canada According to data from Bates et al. (2017a, 2017c), between 2011 and 2015 Canadian higher education DL experienced an overall increase of 58% (about 11% per year) in enrolment in online courses; in 2015, these registrations accounted for about 16% of total registrations at Canadian universities. Between 2015-2016 and 2016-2017, more than 65% of institutions experienced a growth of more than 10% in online course registrations over the previous year; in 2016-2017, 236,917 higher education students, or 67% of students, were taking courses online; 75% of institutions indicated that they expected enrolment to increase (Bates et al., 2019). In 2017, 8% of all registrations for credited courses were fully online, representing just over 1.3 million online registrations. Therefore, there is still a lot of room for growth, although, for most campus institutions, it is unlikely that it will far exceed 20% of all course registrations (Bates et al., 2019). According to Bates et al. (2017a, 2017b), the number of establishments transitioning to online training has increased by about 2% per year. DL is present in almost every Canadian university, and most institutions report good experience with it (Bates et al., 2019). Accredited DL is provided by 98.1% of universities; over 6 years, a growth of 6% has been driven mainly by medium-sized institutions (between 10,000 and 20,000 students), which make up half of the institutions that offer 87% of online courses. The average number of courses is almost the same in these institutions as in those with over 30,000 students. In addition, 824 additional DL courses are offered per year in Canada.
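The "58% overall, about 11% per year" figure quoted above is a simple annualization. The short Python sketch below shows the arithmetic; the choice of how many yearly intervals to use between 2011 and 2015 is an assumption about how the source annualized the number, not something stated in the report.

```python
# Minimal annualization check: what constant yearly rate produces 58% total growth?
def annual_rate(total_growth: float, intervals: int) -> float:
    """Constant per-interval growth rate implied by a total growth factor."""
    return (1.0 + total_growth) ** (1.0 / intervals) - 1.0

# Assumption: four yearly intervals between 2011 and 2015.
print(f"{annual_rate(0.58, 4):.1%}")  # ~12.1% per year
# With five intervals the implied rate is ~9.6%, bracketing the quoted ~11%.
print(f"{annual_rate(0.58, 5):.1%}")
```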
This represents an average of 15 additional courses per institution; 87% offered hybrid courses. Moreover, 97% of Anglophone institutions offered online courses in 2016, compared to 61% of French-language institutions, but these institutions offer at least three times as many credited courses as bilingual institutions (Bates et al., 2017a, 2017b, 2017c). At the undergraduate level, Anglophone institutions offer more courses than Francophone ones (an average of 146 vs. 114). At the graduate level, the opposite trend is observed (40 vs. 54). Quebec is the second-largest province in Canadian DL, considering the average number of distance learning programs per institution, and fifth in terms of the average number of international students enrolled. The internet remains the most widely used technology (98% of institutions). Canadian university DL offerings are centralized in clusters of Canadian public institutions and common registration platforms, such as OntarioLEARN, eCampus Ontario, Contact North (Ontario), eCampus Alberta, BCcampus (British Columbia), and the Virtual University of Canada (UVC), a consortium of Canadian universities in which Athabasca University demonstrates its leadership. According to the Bates et al. (2017a, 2017b) report, the majority of DL in the public sector is offered in disciplines such as administration, education, health sciences, information and technology, and community services. In Canada, institutions' views on the future of MOOCs are highly diversified, with almost one third of institutions either having no interest in them (32.60%) or believing existing MOOCs should be supported. Despite this limited enthusiasm for MOOCs, there is an average of only eight nonaccredited courses per institution. These courses are frequent in the Francophone institutional landscape. Moreover, 50% of French-Canadian institutions already offer (or plan to offer) one or more MOOCs. In 2014, all French-Canadian institutions already offering MOOCs reported having completed one or two, with the exception of EDUlib, which offered 12. However, Anglophones are more active than Francophones in offering noncredited courses (8 vs. 28). The American Higher Education DL With overall DL revenues of US$46.6 billion in 2016, North America will likely experience significant growth between 2016 and 2023 (Ambient Insight Research, 2016). Based on data from the U.S. Integrated Postsecondary Education Data System (IPEDS), enrolments are globally increasing, mainly in management and business, social sciences, education and training, medicine and health, and engineering and technology (Allen & Seaman, 2017). However, there is very low enrolment in journalism and media, agriculture, and forestry. More than 5.8 million students took at least one course online in 2014; this represents 28.4% of all students enrolled, compared to 27.1% in 2013 and 25.9% in 2012. Of these, more than 2.8 million students were enrolled exclusively in online courses. According to the same data, more than 147,169 new registrations took place in public higher education institutions in 2014. In general, 67% of registrations took place in public institutions and 33% in private for-profit and nonprofit institutions. Considering enrolments by cycle, 61% of graduate students are enrolled in private institutions (36% in nonprofit ones, 25% in for-profit ones) and 39% in public institutions.
Of the total number of registrations in distance training, 27% were first-cycle enrolments in private institutions, of which 12% were in nonprofit institutions and 15% in private for-profit institutions. Public institutions account for 73% of total registrations (Allen & Seaman, 2017). The majority of enrolments are in institutions with 1,000 or more students; smaller institutions receive less than half (48.8%). For Seaman et al. (2018), the total number of students enrolled in on-campus courses decreased by more than one million (1,173,805). This represents a decrease of 6.4% between 2012 and 2016. The largest decrease came from the for-profit private sector, which experienced a 44.1% decline over the period, while nonprofit institutions experienced a 4.5% decrease and public institutions 4.2%. The number of students who do not take distance courses declined by 11.2% (1,737,955) between 2012 and 2016. The for-profit private sector lost 50.5%, compared to a 9.5% decline in the nonprofit sector and a 7.7% decline in public institutions. Since 2015, MOOCs have been dominated by traditional teaching in the United States. U.S. platforms, through varied geographical university partnerships, deal with the most prestigious institutions. More than 70% of MOOCs are hosted by Coursera, and 60% of those available on edX are produced by universities included in the Shanghai Top 150 ranking; 70% of those registered on these platforms reside outside the U.S. territory (Delpech & Diagne, 2016). The survey designed, administered, and analyzed by the Babson Survey Research Group, with additional data from the National Center for Education Statistics and IPEDS, indicated that 11.3% of responding institutions offered MOOCs in 2015, compared to 8.0% in 2014 (Seaman et al., 2018). France Higher Education DL In recent years, French HEDL has especially favored centralization, promotion of MOOCs, and internationalization. It is mainly offered in engineering and business schools and in the law/economics/management disciplines; it is a niche offering of excellence courses at the second-cycle level, where it represents 40% of the supply. The French Fédération interuniversitaire de l'enseignement à distance (FIED) brings together 35 of the 85 universities and has 30,000 students each year for bachelor's and master's degrees in almost all disciplinary fields. According to data from France Stratégie, French DL has many facets, with its franchises, satellite campuses, and associated institutions. France exports knowledge through more than 600 international programs on international campuses, including 330 outsourced graduate programs and 138 DL programs that reach nearly 37,000 students worldwide. Internationalization is also observed through the MOOCs. The country has developed its platform France Université numérique (FUN) under the aegis of the Ministry of Higher Education and Research. According to Delpech and Diagne (2016), FUN hosts more than 140 MOOCs followed by more than 500,000 registrants in France and abroad. The authors stress that the French strategy is to develop a potential market of 400 million students by 2030 to catch up with the Anglo-Saxon supply. This would be achieved through supply diversification defined by: 1. Income from new markets, in particular continuing vocational training for employees of enterprises; 2. An offer meeting the needs of different audiences, giving priority to certification and customized MOOCs; 3.
A geographical customization that led, at the end of 2015, to more than 500,000 subscribers on the FUN platform, of which 70% were in France on national MOOCs. This result represents a reversal of the usual trend, since MOOCs generally serve 70% international students. Only the universities of Paris-Sorbonne and François Rabelais de Tours are present on the European site. Comparative Analysis of Higher Education DL Systems The data highlight that higher education DL is expanding in the three countries, undergoing several transformations and integrating technological evolution. Comparing enrolment rates in Canada in the fall of 2016 (Bates et al., 2017a, 2017c) with those in the United States at the same time (Seaman et al., 2018), it is observed that Canadian HEDL enrolment exceeds that in the United States (Seaman et al., 2018; Bates et al., 2017a, 2017b, 2017c). In the United States, the majority of students (55%) taking distance courses in 2012 resided in the same territory as their platform (Seaman et al., 2018). However, considering overall growth in enrolments in Canada and the United States, there is rather a decrease. In these two countries, it is difficult to say which audiences are involved globally in the national DL. Table 1 summarizes the main characteristics of national HEDL in terms of supply. Several universities in the three countries consider DL adoption a training diversification strategy to ensure profitability and consolidate their position in education. There is also a strategy of specialization without expanding the market: this is the case of Canadian universities, which primarily satisfy the national market. It is observed that, in Canada, the surveys considered in this literature review demonstrate a priority on meeting national needs with existing on-campus courses and programs complemented by DL. It is a matter of either centralization of services or very autonomous university strategies. Table 1 shows the main characteristics of HEDL in the three countries. A growing supply that looms in continuing training and diversifies from community-specific needs requires a consolidation of services. As Bates et al. (2019) point out, MOOCs are an interesting and useful development, but they have moved into a niche for continuing and in-company training rather than disrupting the current system. The development of unaccredited higher education DL in Canada appears to be in contrast to that observed in France and the United States. In France, the government strategy encourages the development of MOOCs. This "MOOCization" focuses on diversifying international and professional markets in the workplace. In the United States, DL shows a decline, leading to the development of private for-profit MOOC platforms based on a delegated logic. The fundamental differences among Canada, France, and the United States in higher education DL can be expressed in terms of hybridization, internationalization, level of development of MOOCs, strong centralization (the FUN platform), private sector participation, and functional delegation. MOOC adoption in Canada was considerably higher than in U.S. institutions in 2013 and 2014 (14%), but lower than in Europe (72% in 2014). In 2017, Bates et al.
(2019) found that there is not much interest in open-to-all online training (MOOCs) in Canada. Fewer than 20% of the responding institutions had offered them in the past 12 months, and those that had offered only a few courses. One third (32.60%) of Canadian institutions have no interest in MOOCs. In contrast, in 2014 in the United States, 51% of respondents disagreed with this statement, perhaps because it was still early to decide; 11.3% of institutions offered MOOCs in 2015 (2.6% in 2012, 5% in 2013, and 8.0% in 2014). In his survey, Grégoire (2016) stated that 50% of French-Canadian institutions already offer (or plan to offer) one or more MOOCs, a rate considerably higher than in U.S. institutions in 2013 and 2014 (14%) but lower than in European institutions (72% in 2014). Discussion Several reasons and strategies contribute to DL development in the three countries. However, this consistent transformation of HEDL following the digital transformation brings four further transformations to the fore: the MOOC explosion, the birth of megaportals, DL internationalization, and mobile and lifelong learning. These transformations expose national HEDL to emerging challenges, which should call for enhanced higher education strategies. However, the American edX platform, like the French FUN, is not-for-profit. Moreover, 3% of universities in France, against 80% in the United States, put their courses online. According to data from Open Education Europa (in Delpech & Diagne, 2016), the edX platform has 5 million registrants and FutureLearn 2.5 million, of which almost two thirds are outside the country. MOOCs are a showcase to promote higher education institutions, particularly abroad, and to reach new audiences through more flexible and personalized training offerings (Delpech & Diagne, 2016). Although they can play a formative role in higher education, they fall short in encouraging long-term personalized learning, training strategies, and accreditation or certification. Owing to the massive participation, the high heterogeneity of participants, the lack of target groups, and the varying commitment of learners, Henri (2017) considers that the analytic and prescriptive rigor of pedagogical engineering seems difficult to apply to the design of MOOCs. Except in France, the countries lack a national business and monetization model, adaptive learning, and a learning recognition framework. As stated by Kiers and van der Werff (2019), HEDL needs an operational model, which requires obtaining and implementing insight into factors in MOOC-based programs. Megaportals and HEDL Internationalization While international student enrolment continues to rise at U.S. universities (Hellmann & Miranda, 2015), HEDL internationalization takes place through megaportals such as Studyportals. This portal offers more than 170,000 courses from 3,050 educational institutions in 110 countries and 12,698 programs, including 2,464 bachelor's degrees, 6,475 master's degrees, 470 doctorates, and 2,998 short programs.
Its partners include the British Council, the European Commission, Nuffic, the German Academic Exchange Service (DAAD), the Austrian Academic Exchange Service, Universidades in Spain, the Academic Cooperation Association, and a pan-European network of several nonprofit organizations responsible for the internationalization of education and training, such as the UNESCO Institute for Lifelong Learning (a nonprofit organization), EADTU, Cambridge Assessment English (a division of Cambridge University), the International Council for Open and Distance Education (ICDE), the Swedish Institute, Open Education Europa, and so forth. In 2017, Studyportals reportedly helped more than 28 million students around the world explore curricula and make an informed choice. According to its website data, the megaportal registered 195,400 international student registrations in 2016. The number of registrations grew by 28.40% in 2014, while in 2012 this growth was 25.90%. The profile of the typical student is 51% women and 49% men. It offers programs in agriculture and forestry, applied sciences, art, design and architecture, management and business, computer and information technology, education and training, engineering and technology, environmental studies and earth sciences, recreational hospitality and sport, humanities, law, journalism and media, medicine and health, natural and mathematical sciences, and social sciences. In parallel with megaportals, higher education internationalization has become, in many countries, a major expansion issue and enrolment growth target. More than program outsourcing, online learning contributes to increasing enrolment by providing access and flexibility. Learners from all over the world increasingly invest time and money in personal progress. At the same time, HEDL in national education remains an extension of in-class education. Furthermore, it is limited to a few programs. In Canada, Bates et al. (2019) note that at least half of universities offer online programs in administration, arts, humanities, and education. However, none of the universities surveyed offered programs in dentistry, engineering, forestry, medicine, or pharmaceutical sciences. Bates et al. (2019) point to several reasons, such as the lack of appropriate staff to develop and deliver these courses or the additional faculty effort required to develop or deliver online courses, which were similar to the national response. In the United States, enrolments are mainly in management and business, social sciences, education and training, medicine and health, and engineering and technology. Mobile and Lifelong Learning Matter If higher education is to promote lifelong learning, strategic transformative approaches should not only broaden access but also increase learning quality and learner satisfaction (Yang et al., 2015). Despite the increasing enrolment in the United States, there has been a growing decline in student satisfaction: from 92% in 2012-2013 to 86% in 2016. The emerging issues in DL are social learning, mobile learning, microlearning, and learner skilling in organizations. Recent Docebo (2018) surveys noted that 53% of learners mentioned location as a barrier to online learning, so they turned to mobile learning. In addition, 64% of learners declared that learning on a mobile device is essential or very useful and that accessing their training content from a mobile device is essential.
Smartphone learners complete course material 45% faster than those using a computer; 89% of smartphone users download apps, 50% of which are used for learning; 43% of learners see improved productivity levels compared to nonmobile users; 71% of millennials say they connect more with mobile learning than with L&D activities delivered via desktop or formal methods; the number of mobile-only users (27%) has grown, now surpassing desktop-only users (14%); and 46% of learners use mobile learning before they go to sleep at night. In French-speaking Canada, a survey showed that 4 out of 5 participants say that the massive use of mobile devices in their community has had an impact on their way of teaching. As Merhan (2017) points out, another dimension to consider is the adoption of a HEDL vision that is more concerned with professionalization, upskilling, and reskilling for the market's increasing demand for competences, and with curricula adaptation. For this purpose, the markets of all three countries still have high potential from a vocational and business training perspective; that hypothesis should be tested. Furthermore, the learner's commitment to learning must be linked to a personal project. Promoting a high-quality, equitable, and global learning experience will help graduates prepare for and contribute to a globally interconnected society. Carré (2014) points out that the starting point lies in the analysis of the individual conditions of learning and skills development. For the author, the risk to training performance lies in underestimating the learner's logic. If online learning provides better access and flexibility for students, comparability between education systems and the transferability of qualifications obtained through HEDL are prerequisites for improving student mobility (Henri, 2017) and for developing their global competency. Furthermore, the reality of today's learning and training is no longer limited to institutionalized education. Adding new subjects or learning areas to taught curricula traditionally designed around specific disciplines and/or learning areas can lead to curriculum overload, while embedding them within existing subjects can prove challenging, given the conceptual complexity of some of these competencies (OECD, 2019b). As rapid labor market transformations challenge societies and individuals, lifelong learning becomes the foundation for continuous upskilling and reskilling. It promotes adaptation to learning and full work participation through core skills, knowledge, attitudes, and values that are prerequisites for further learning across the entire curriculum (OECD, 2019b). LLL also contributes to the continuous professional development of the active population, thus improving autonomy and internal flexibility (Wotto et al., 2017). An LLL vision could help to motivate MOOC learners, to better identify and understand how the uses of MOOCs may or may not participate in producing inequality (Vayre & Lenoir, 2019), and to show how the uses that e-learners make of these courses are shaped by the weight of social structures. Finally, through HEDL transformation, it becomes important to clarify the "different potential for technological deskilling/upskilling, namely the ability of ICTs to contribute to the moral deskilling of human users" (Vallor, 2015, pp. 107-124). Conclusions and Limits National higher education DL is adapting supply to technology transformations dominated by several forces, namely service centralization and internationalization.
In the United States, both public and private (for-profit and not-for-profit) institutions support these forces. In France, the national strategy is especially oriented abroad. In Canada, institutions develop their own HEDL strategies. The study shows that DL in Canada is growing to cover the national territory, but mainly for organizational effectiveness (institutional performance). However, like France, Canadian institutions must catch up on the international stage. HEDL development, however, does not follow learners' tendencies and DL market trends. MOOC development faces the lack of a business model. DL sometimes merely follows traditional education because of the lack of a new training strategy. Training institutions' interest in HEDL internationalization is growing, leading to the development of megaportals. Although the growth of digital platforms helps to concentrate many services, quality and credibility for organizations, and skills and employability for individuals, should outweigh any commercial reasoning. If m-learning can be capitalized on to enhance quality and access and to support learning and future employability, training geared to the acquisition of renewed skills expressed in terms of professionalization also aims at self-efficacy in learning for the mobilization of skills in context. In a knowledge-based economy with changes in skills and occupational profiles, we are a long way from the customized training that learner-centered learning requires. The comparison in this study should be understood in the context of this report, which considered that the training system falls within a sociopolitical setting with its norms, rules, and values, to which we have not alluded here. Furthermore, this study carries the cumulative limitations of the studies and reports selected. One of its main limits is the reliance on secondary data analysis. Further research should provide evidence to understand the trends in learning and training and their impacts.
7,484.4
2020-07-16T00:00:00.000
[ "Education", "Computer Science" ]
Preserving information from the beginning to the end of time in a Robertson–Walker spacetime Preserving information stored in a physical system subjected to noise can be modeled in a communication-theoretic paradigm, in which storage and retrieval correspond to an input encoding and output decoding, respectively. The encoding and decoding are then constructed in such a way as to protect against the action of a given noisy quantum channel. This paper considers the situation in which the noise is not due to technological imperfections, but rather to the physical laws governing the evolution of the Universe. In particular, we consider the dynamics of quantum systems under a 1 + 1 Robertson–Walker spacetime and find that the noise imparted to them is equivalent to the well known amplitude damping channel. Since one might be interested in preserving both classical and quantum information in such a scenario, we study trade-off coding strategies and determine a region of achievable rates for the preservation of both kinds of information. For applications beyond the physical setting studied here, we also determine a trade-off between achievable rates of classical and quantum information preservation when entanglement assistance is available. Introduction Data storage is relevant not only for accomplishing tasks in our day-to-day lives but also for keeping track of our history. Information can be stored on various physical media, ranging from modern compact disks to ancient papyrus. A fundamental goal of information storage is to preserve it for the longest possible time in a reliable way. That obviously depends on the technology used. However, even if we have achieved a perfect or ideal implementation of a given technology, we should realize that there are limitations on preserving information. These limitations are posed by physical theories and ultimately result from the evolution of the universe itself, which can cause unavoidable effects on any physical system. To address the issue of determining these fundamental limits, we require the theory of general relativity as well as that of quantum information. Very recently the interconnections between these two fields have received increased interest. Several previous works have developed a theory of communication between a sender and a receiver in relativistic settings [2,5,22,19,6,4] or in situations involving black holes [3,15]. In this paper, we investigate how well information stored in the remote past is preserved when going to the far future, by assuming evolution of the universe in a Robertson-Walker (RW) spacetime. Our main results are 1) that the noise imparted to spin-1/2 particles by the evolution of the universe is equivalent to an amplitude damping channel, and so we then 2) determine achievable rates for the simultaneous communication of classical and quantum information over this channel. As a result, we can interpret these rates as achievable rates for the storage of classical and quantum information from the early past to the far future in a Robertson-Walker spacetime.
The RW spacetimes are a reasonable description of the dynamics of the late universe, which, at large scale, appears to be homogeneous and isotropic. Most cosmological models are special cases of RW spacetimes [1]. When considering a quantum matter field evolving through a dynamical spacetime, the concept of the vacuum can no longer be considered unique. Indeed, to detect the presence of quanta it is also necessary to specify the details of the quantum measurement process, and in particular the state of motion of the measuring device. Particles possess an essentially observer-dependent quality, so that they can be observed by some detectors and not by others. Also, we can define positive and negative energy solutions of the differential equations governing matter fields only if the spacetime structure is invariant under the action of a time-like Killing vector field [24]. This is certainly true for Minkowski spacetime, and an RW universe which is Minkowskian in the early past and in the far future is a suitable choice. The simplest, nevertheless insightful, choice we can make is a 1+1 RW spacetime. There, we can consider any quantum state of the matter field before the expansion of the universe begins and define, without ambiguity, its particle content. We then let the universe expand and check how the state looks once the expansion is over. The overall picture can be thought of as a noisy channel into which some quantum state is fed. Once we have defined the quantum channel emerging from the physical model, we will treat the usual communication task as information transmission over the channel. Since we are interested in the preservation of any kind of information, we shall consider the trade-off between the different resources of classical and quantum information. The rest of the paper is organized as follows. In the next section, we discuss the physical model and show how the noise imparted to spin-1/2 particles is equivalent to an amplitude damping channel, which has been well studied in quantum information theory [25]. In the section thereafter, we calculate achievable rates for the simultaneous communication of classical and quantum information over this channel. We then conclude with a summary of our results and a discussion of some open questions. Appendixes A and B are devoted to proving the main results. There we also determine a trade-off between achievable rates of classical and quantum information preservation when entanglement assistance is available, which might be useful for applications beyond the physical setting studied in this paper.
Robertson-Walker spacetime The geometry of an RW spacetime, considered here for simplicity in 1+1 dimensions, is described by a line element representing the varying distance between two spacetime points, which depends on the conformal scale factor a(τ). The so-called conformal time τ is a function of the cosmological time t, defined by τ = ∫ a^{-1}(t) dt. The spatial coordinate is denoted by x. Now consider an expanding universe, Minkowskian in the early past and in the far future, filled with a Dirac field (that is, with matter made of spin-1/2 particles). We can associate a Hilbert space to each of the two regions with suitable basis vectors. The dynamics of the matter field ψ of mass m is governed by the Dirac equation expressed in covariant form (1). The index µ runs from 0 to 1 and the Einstein summation rule over repeated indices is used. Furthermore, γ̃^µ ≡ [a(τ)]^{-1} γ^µ, with γ^µ the 2 × 2 matrices representing the Dirac algebra. Finally, D_µ is the covariant derivative [1]. We look for solutions of (1), writing ψ = a^{-1/2} (γ^ν ∂_ν − M) ϕ, with M = m a(τ), so as to obtain (2), with g^{µν} being the flat metric as opposed to the actual spacetime metric g̃^{µν} = [a(τ)]^{-2} g^{µν}. Moreover, given flat spinors u and v satisfying γ^0 u = −iu and γ^0 v = iv, we set the mode expansions (3) and (4), with k the momentum. Inserting (3) and (4) into (2), the functions f^(±) must obey the differential equation (5). Define f^(±)_{in/out} and f^(±)*_{in/out} as the solutions behaving as positive and negative frequency modes with respect to conformal time τ near the asymptotic past/future, with M_in ≡ m a(τ → −∞) and M_out ≡ m a(τ → +∞). Then we can introduce spinors that behave like positive and negative energy spinors, respectively, in the asymptotic regions, as in (6) and (7), with appropriate normalization constants. Now the solutions of (1) can be expanded over (6) and (7) as in (8) and (9). The coefficients appearing in such expansions are in/out ladder operators for particles and antiparticles (a, a† and b, b†, respectively). They are connected by Bogoliubov transformations [10], where α, β ∈ C are such that |α|² + |β|² = 1 and αβ* − α*β = 0. Notice that such transformations do not mix solutions of different momenta, and so we can safely focus on a single momentum and omit the dependence on k. Therefore, any particle (antiparticle) quantum state lives in a 2-dimensional Hilbert space with orthonormal basis {|0⟩, |1⟩} denoting the absence or presence of a particle (antiparticle). The Bogoliubov coefficients are linked to physical quantities by |β|² = n/2, where n is the density of particles for the mode under consideration (0 ≤ n ≤ 2). The transformations (8) and (9) come from a unitary operator (10), where the parameters r and ϑ are related to α and β of Eqs. (8) and (9) by α = cos r and β = −e^{−iϑ} sin r.
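The display equations referenced above did not survive extraction. For orientation, the LaTeX fragment below sketches the standard conventions for a conformally flat 1+1 RW spacetime and a generic form of the Bogoliubov relation consistent with the constraints quoted in the text (|α|² + |β|² = 1, α = cos r, β = −e^{−iϑ} sin r); it reflects common textbook conventions, not necessarily the exact notation of the original equations (1)–(10).

```latex
% Sketch of standard conventions (assumed, not the original display equations).
\[
  ds^2 = a^2(\tau)\left(d\tau^2 - dx^2\right), \qquad
  \tau = \int a^{-1}(t)\, dt ,
\]
\[
  a_{\mathrm{out}} = \alpha\, a_{\mathrm{in}} + \beta\, b^{\dagger}_{\mathrm{in}},
  \qquad |\alpha|^2 + |\beta|^2 = 1, \quad \alpha\beta^{*} - \alpha^{*}\beta = 0 ,
\]
% with a, b particle and antiparticle ladder operators,
% alpha = cos r and beta = -e^{-i\vartheta} sin r.
```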
Robertson-Walker dynamics induces an amplitude damping channel Assuming we have access to particles in the out region only, the antiparticles play the role of an environment that is initially in the vacuum. Hence, from (10), we can single out a completely positive trace-preserving linear map from in particle states to out particle states, given by (11), where U is given by (10) and tr_{−p} stands for the partial trace over antiparticles. In terms of the so-called Kraus representation, we have (12), where the Kraus operators K_j = ⟨j|_{−p} U |0⟩_{−p} follow from (10). Expressing them in terms of outer products of particle states, we may observe that the quantum channel map A is an amplitude damping channel with so-called transmissivity η ∈ [0, 1] related to the physical observable n by (14). We now consider the toy model introduced in [10], with the conformal scale factor given in (15). The real and positive parameters ε and ρ control the total volume and the rapidity of the expansion of the universe, respectively. In the two asymptotic regions in and out, we have a(τ → −∞) = 1 and a(τ → +∞) = 1 + 2ε, respectively. Inserting (15) into (5) we get (16). Solutions of this equation can be found as in [10], with ₂F₁ denoting the ordinary hypergeometric function. Since f^(±)_{in/out}(τ) and f^(±)*_{in/out}(τ) are positive and negative frequency modes in the asymptotic regions, we can write the Bogoliubov transformation between them. Using linear transformation properties of hypergeometric functions, we can write down the coefficients as in [10], with Γ denoting the Euler Gamma function. These Bogoliubov coefficients are related to those of Eqs. (8) and (9), namely α and β [20]. In particular, one obtains a relation between them; hence, remembering from (14) that η = 1 − n/2 = |α|², we find (17). In Figure 1, we plot the transmissivity η in (17) as a function of the momentum k. Observe that it is equal to one (no damping) only for zero or large momentum. This is a consequence of the fact that modes such that 0 < √(k² + m²) < ρ are excited, implying particle creation for them. Also notice that the value of η never drops below 1/2, and it is equal to this minimum value in the limit as ρ, ε → ∞. Information trade-offs for the amplitude damping channel In the above development, observe that the region in which η falls below one is the most important for information storage. In fact, in order to save energy, one would like the momentum k to be as low as possible. However, it is unreasonable to freeze particles such that k = 0. Hence, we have to face the problem of non-negligible information damping, and this motivates us to consider the best strategy for preserving it. In particular, we would like to preserve both classical and quantum information in the RW spacetime, and so we consider trade-off strategies for doing so [9,26], modeling the noise as an amplitude damping channel (as motivated in the previous section). To do so, we can model this problem in communication-theoretic language, in which we say that the device encoding information at the beginning of the evolution is the "sender" and the device recovering information at the end of the evolution is the "receiver."
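To make the channel concrete, the short Python sketch below builds the standard qubit amplitude damping Kraus operators for a given transmissivity η = 1 − n/2 and applies them to a density matrix. The Kraus form is the textbook one; the mapping from the particle density n to η follows the relation quoted above, while the specific input state is just an illustrative choice.

```python
import numpy as np

def amplitude_damping_kraus(eta: float):
    """Standard qubit amplitude damping Kraus operators with transmissivity eta."""
    K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(eta)]])
    K1 = np.array([[0.0, np.sqrt(1.0 - eta)], [0.0, 0.0]])
    return [K0, K1]

def apply_channel(rho: np.ndarray, kraus) -> np.ndarray:
    """Apply a channel given in Kraus form to a density matrix."""
    return sum(K @ rho @ K.conj().T for K in kraus)

# Particle density n for the mode (0 <= n <= 2) gives eta = 1 - n/2.
n = 0.5
eta = 1.0 - n / 2.0

# Illustrative input: the equal superposition |+><+|.
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
rho_out = apply_channel(plus, amplitude_damping_kraus(eta))
print(np.round(rho_out, 3))  # trace-preserving output of the damped qubit
```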
A simple strategy for trading between classical and quantum communication is known as time sharing: in a time-sharing strategy, the sender and receiver use a classical communication code for a fraction of the channel uses, a quantum communication code for another fraction, etc. For some channels, such as the quantum erasure channel [13], time sharing is an optimal communication strategy, but in general it cannot outperform a more general strategy known as "trade-off coding" [26]. This allows for transmitting classical and quantum information at net rates (C, Q) that lie in a two-dimensional capacity region. To proceed with our development for the amplitude damping channel, we begin by recalling that the trade-off region between classical and quantum communication (without the help of entanglement assistance) for any quantum channel A_{A'→B} is given by the inequalities (18)-(19), involving the quantum mutual information, coherent information, and Holevo information of a quantum state ρ_{XAB}, with the von Neumann entropies defined as usual (see, for example, Chapter 11 of [25] for more on these definitions). These entropies are actually with respect to a classical-quantum state of the form (20), with |φ^x⟩⟨φ^x|_{AA'} a purification of the input state ρ^x_{A'} corresponding to the letter x. Taking the union of the region specified by (A.1)-(A.3) over all ensembles of the form {p_X(x), |φ^x⟩⟨φ^x|_{AA'}} then gives what is known as the single-letter triple trade-off region (meaning that the formulas are a function of a single instance of the channel). We should clarify that the above rate region is an achievable rate region, and for some channels it is known to be optimal as well [5,26]. The above rate region is not known to be optimal for the amplitude damping channel. For the amplitude damping channel A, and hence for the channel (11), we have the following characterization of the single-letter trade-off region: Theorem 1 The single-letter trade-off region (18)-(19) for the qubit amplitude damping channel is the union of the following polyhedra over all p_0, q_x, ν_x ∈ [0, 1] for x ∈ {0, 1}, where p_1 = 1 − p_0 and p ≡ Σ_{x∈{0,1}} p_x q_x, with h_2 denoting the binary Shannon entropy. The proof of Theorem 1 is given in Appendix A. We can significantly simplify the characterization of the region when η ≥ 1/2, which is the case of most interest for the physical setting of this paper. Theorem 2 The single-letter trade-off region (18)-(19) for the qubit amplitude damping channel when η ≥ 1/2 is the union of the following polyhedra over all p, ν ∈ [0, 1], where g(p, z, ν) is defined in Theorem 1, and it can be achieved with the ensemble given below. The proof of Theorem 2 is given in Appendix B. Notice that the ensemble that attains the trade-off interpolates between the strategy that achieves the quantum capacity of the amplitude damping channel and that which achieves the product-state classical capacity of the amplitude damping channel, as ν varies from zero to one. That is, when ν = 1, the ensemble reduces to one that has been proved to be optimal for the product-state classical capacity (the single-letter classical capacity) [12]. When ν = 0, the ensemble reduces to one of the diagonal form that achieves the quantum capacity of the amplitude damping channel [12]. The communication strategy resulting from the state in (B.1) is very different from a naive time-sharing one and outperforms it (see Figure 2).
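As an illustration of the ν = 0 (quantum-capacity) end of this trade-off, the Python sketch below numerically maximizes the coherent information of the qubit amplitude damping channel over diagonal inputs, using the well-known closed form I_c(p) = h_2(ηp) − h_2((1 − η)p) for η ≥ 1/2. The formula is the standard one for diagonal input states; the grid search is only a quick illustration, not the optimization used in the paper's proofs.

```python
import numpy as np

def h2(x: float) -> float:
    """Binary Shannon entropy in bits."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * np.log2(x) - (1.0 - x) * np.log2(1.0 - x)

def coherent_information(p: float, eta: float) -> float:
    """Coherent information of the amplitude damping channel for the diagonal
    input diag(1-p, p); the quantum-capacity objective when eta >= 1/2."""
    return h2(eta * p) - h2((1.0 - eta) * p)

eta = 0.75
grid = np.linspace(0.0, 1.0, 10001)
values = [coherent_information(p, eta) for p in grid]
best = int(np.argmax(values))
print(f"Q(eta={eta}) ~ {values[best]:.4f} qubits/use at p ~ {grid[best]:.3f}")
```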
Discussions and Conclusions In this paper, we have investigated how well information stored in the remote past is preserved when going to the far future, by assuming evolution of the universe in a Robertson-Walker spacetime. We proved, under certain assumptions, that the noise imparted to spin-1/2 particles by the evolution of the universe is equivalent to an amplitude damping channel, and we then determined achievable rates for the simultaneous communication of classical and quantum information over this channel. In fact, we have established an achievable rate region (and the ensemble attaining it) characterizing communication trade-offs for the qubit amplitude damping channel, thus also generalizing the results given in Ref. [12]. Our results refer to single-letter rate regions, so it remains open to determine whether a multi-letter characterization could achieve strictly higher rates of communication. For this purpose, one might consider recent approaches developed in [7]. A more physically relevant scenario is the 3 + 1 dimensional spacetime with the same evolutionary model adopted here. In this situation, the spin degrees of freedom of the quantum field become relevant, making the physics somewhat more involved but richer. An extension of our study to this case is foreseeable thanks to the Bogoliubov transformations given in Ref. [10]. Still, we are supposing that the in and out regions of spacetime admit natural particle states and a privileged quantum vacuum. If we were to employ a more realistic evolutionary model with no static in or out regions, an approximate definition of particles could be made by selecting those mode solutions of the field equation that come, in some sense, "closest" to the Minkowski space limit. Physically, this might be envisaged as a construction that "least disturbs" the field by the expansion and in turn leads to the concept of "adiabatic states" (introduced for scalar fields a long time ago [23], then put on rigorous mathematical footing [17], and later extended to Dirac fields [16]). In future work, one could also cope with the degradation of the stored information by intervening from time to time and actively correcting the contents of the memory during the evolution of the universe. In this direction, channel capacities taking this possibility into account have been introduced in [21]. In another direction, and much more speculatively, one might attempt to find a meaningful notion of entanglement-assisted communication in our physical scenario by considering Einstein-Rosen bridges along the lines of [18], or entanglement between different eras of the universe, related to dark energy [8]. Appendix A. Proof of Theorem 1 The two-dimensional trade-off region of Theorem 1 is a special case of a theorem determining the triple trade-off region in which, in addition to C and Q, the net rate E of entanglement consumption/generation is also considered.
First we recall that the triple trade-off region for any quantum channel A_{A'→B} is given by a union of polyhedra, each of which is specified by the formulas (A.1)-(A.3) [25,26]. Theorem 3 The single-letter triple trade-off region (A.1)-(A.3) for the qubit amplitude damping channel is the union of the following polyhedra over all p_0, q_x, ν_x ∈ [0, 1] for x ∈ {0, 1}, where p_1 = 1 − p_0 and p ≡ Σ_{x∈{0,1}} p_x q_x. Proof. From Refs. [25,26] we have that the so-called "quantum dynamic capacity formula" characterizes the optimization task set out in (A.1)-(A.3) (i.e., the task of computing the boundary of the region specified by (A.1)-(A.3)). That is, we should optimize the quantum dynamic capacity formula for all non-negative values of the Lagrange multipliers λ and µ, and doing so allows us to simplify the form of the ensembles necessary to consider in the computation of the boundary of the region. The quantum dynamic capacity formula is given by a maximization (A.4), with the entropies referring to the state of (20). As detailed in [25,26], this is equivalent to an expression in which the various von Neumann entropies H can be specified as follows, where Z is the Pauli Z operator. This augmentation can only increase communication rates due to the covariance of the amplitude damping channel with respect to {I, Z}. Let σ_{XJABE} denote the corresponding classical-quantum state that results from purifying each state in the A system and then sending the A' system through an isometric extension of the channel, that is, with U^N_{A'→BE} an isometric extension of the channel A_{A'→B}. We then have an upper bound for the r.h.s. of (A.5), where the inequality follows from concavity of entropy and from defining p ≡ Σ_x p_X(x) q_x (A.15). The other steps follow from the covariance of the amplitude damping channel with respect to the I and Z operations. As a consequence of (A.14), we see that to compute (A.4), it suffices to optimize the following function of {(p_X(x), q_x, γ_x)} for fixed values of λ and µ: (A.16). Theorem 4 The single-letter triple trade-off region (A.1)-(A.3) for the qubit amplitude damping channel when η ≥ 1/2 is the union of the following polyhedra over all p, ν ∈ [0, 1], where g(p, z, ν) is defined in Theorem 1, and it can be achieved with a suitably simplified ensemble. To prove Theorem 4 we have to show that ensembles of this simplified form optimize (A.16). This is equivalent to showing that, for every {p_X(x), q_x}_{x∈{0,1}} such that Σ_x p_X(x) q_x = p, there exists a value of ν such that (B.1) holds. Let us have a closer look at the function F_1 of Eq. (A.18). Its first derivative with respect to γ, ∂F_1(q,γ)/∂γ, is a linear function of µ; hence we can determine a critical value µ* below (resp. above) which ∂F_1(q,γ)/∂γ is always negative (resp. positive). The second derivative of F_1 with respect to q, ∂²F_1(q,γ)/∂q², is also a linear function of µ, and there exists a critical value µ** below (resp. above) which ∂²F_1(q,γ)/∂q² is always negative (resp. positive). By inspection, it follows that µ* ≤ µ** for η ≥ 1/2 (for η < 1/2 one can always find a large enough value of λ that invalidates the condition). In any case, this is the only relevant regime for our purposes, since for η < 1/2 the quantum capacity of the amplitude damping channel vanishes. Let us then distinguish the following two situations: Appendix B.1. µ ≤ µ**, i.e., F_1 is concave with respect to q. In this case F_1 is a monotonic function of γ. It is decreasing with increasing γ for µ ≤ µ* ≤ µ** and increasing with increasing γ for µ* ≤ µ ≤ µ**.
So this proves (B.1) for this case. Notice that when F_1 is a decreasing function of γ the optimal value of γ is 0, while when F_1 is an increasing function of γ the optimal value of γ is the maximum allowed one, i.e., q − q².

Appendix B.2. µ ≥ µ**, i.e., F_1 is convex with respect to q. In this case F_1 is a monotonically increasing function of γ. Hence, we should look for a suitable value of γ, say γ̄, such that the following inequality (equivalent to (B.1)),

p_0 F_1(q_0, γ_0) + p_1 F_1(q_1, γ_1) ≤ F_1(p_0 q_0 + p_1 q_1, γ̄),   (B.7)

is satisfied for arbitrary points (q_0, γ_0) and (q_1, γ_1) in the (q, γ) plane (again remembering that it suffices to consider two letters). As a consequence of Theorem 4, the communication strategy involving entanglement also turns out to be quite different from a naive time-sharing one and outperforms it (see Figure B1).

Figure 2. A comparison of a trade-off coding strategy (blue points) versus a time-sharing strategy (red line) for an amplitude damping channel with transmissivity η = 0.75. The figure demonstrates that an ensemble of the form in Theorem 2 outperforms a naive time-sharing strategy between the product-state classical capacity and the quantum capacity.

Figure B1. A comparison of a trade-off coding strategy (blue points) versus a time-sharing strategy (red line) for an amplitude damping channel with transmissivity η = 0.75. The figure demonstrates that an ensemble of the form in Theorem 4 outperforms a naive time-sharing strategy between the product-state classical capacity and the entanglement-assisted classical capacity.
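As a small numerical companion to the proof above, the sketch below maximizes the coherent information of the qubit amplitude damping channel over diagonal inputs. The expression I_c(p) = h_2(ηp) − h_2((1−η)p) for transmissivity η is the standard textbook formula and is assumed here rather than taken from Theorems 3-4; the η values in the example are illustrative. Consistent with the remark in the proof, the resulting capacity vanishes for η < 1/2.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def h2(x):
    """Binary entropy in bits; h2(0) = h2(1) = 0."""
    x = np.clip(x, 1e-15, 1 - 1e-15)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def coherent_information(p, eta):
    """I_c for a diagonal input diag(1-p, p) sent through an amplitude
    damping channel of transmissivity eta (damping parameter 1 - eta)."""
    return h2(eta * p) - h2((1 - eta) * p)

def quantum_capacity(eta):
    """Single-letter quantum capacity Q = max_p I_c(p); zero for eta <= 1/2."""
    if eta <= 0.5:
        return 0.0
    res = minimize_scalar(lambda p: -coherent_information(p, eta),
                          bounds=(0.0, 1.0), method="bounded")
    return -res.fun

if __name__ == "__main__":
    for eta in (0.6, 0.75, 0.9):   # illustrative transmissivities
        print(f"eta = {eta:.2f}:  Q ≈ {quantum_capacity(eta):.4f} qubits/use")
```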
5,521.6
2014-05-11T00:00:00.000
[ "Physics" ]
Characterization of Factor VII Association with Tissue Factor in Solution: HIGH AND LOW AFFINITY CALCIUM BINDING SITES IN FACTOR VII CONTRIBUTE TO FUNCTIONALLY DISTINCT INTERACTIONS*

Protein-phospholipid as well as protein-protein interactions may be critical for tight binding of the serine protease factor VIIa (VIIa) to its receptor cofactor tissue factor (TF). To elucidate the role of protein-protein interactions, we analyzed the interaction of VII/VIIa with TF in the absence of phospholipid. Binding of VII occurred with similar affinity to solubilized and phospholipid-reconstituted TF. Lack of the γ-carboxyglutamic acid (Gla)-domain (des-(1-38)-VIIa) resulted in a 10- to 30-fold increase of the Kd for the interaction, as did blocking the Gla-domain with Fab fragments of a specific monoclonal antibody. These results suggest that the VII Gla-domain can participate in protein-protein interaction with the TF molecule per se rather than only in interactions with the charged phospholipid surface. Gla-domain-independent, low affinity binding of VII to TF required micromolar Ca2+, indicating involvement of high affinity calcium ion binding sites suggested to be localized in VII rather than TF. Interference with Gla-domain-dependent interactions with TF did not alter the TF-VIIa-dependent cleavage of a small peptidyl

associated with a phospholipid surface in the absence of its cofactor TF (4-6). Function of the TF·VIIa complex is optimal when TF is cell surface-expressed or reconstituted into phospholipid vesicles. Binding of VIIa to TF under these conditions results in a profound increase in its catalytic efficiency to activate the substrate factors X and IX (5, 6). The extracellular domain of TF mediates protein-protein interactions with VIIa which are required for catalytic enhancement of substrate cleavage by the VIIa catalytic domain (6). Binding of VII/VIIa to cell surface TF occurs with high affinity at 2-5 mM CaCl2 (1, 7, 8), and the VIIa Gla-domain has been implicated in the high affinity binding to cell surface TF (9). From these findings, one can speculate that the Gla-domain of VIIa may interact with charged phospholipid surfaces during assembly with TF.
Binding of VII/VIIa to unidentified sites on TF has been suggested to be mediated in part by residues 195-206 located in the catalytic domain of the protease (10). T o further evaluate the significance of the Gla-domain and other interactive regions of VIIa for binding to TF, we have estimated binding constants for the interaction of VI1 with TF in solution. We characterize here the interaction of both full length, detergent-solubilized TF and the isolated TF extracellular domain with VIIa, as well as with VIIa lacking the Gla-domain (des-(l-38)-VIIa). We propose participation of both high and low affinity Ca2+ binding sites on VIIa which are critical for the formation of the TF.VIIa complex. In addition, we provide evidence that these two types of CaY+-dependent interactions enhance the catalytic activity of VIIa by increasing the function of the catalytic center of VIIa and by improving extended recognition of protein substrates. EXPERIMENTAL PROCEDURES Proteins-Human coagulation proteins were purified from plasma as previously described (6). Prothrombin fragment 1 was produced from purified prothrombin as described (6). T F was immunoaffinitypurified from human brain (11) and reconstituted into mixed phosphatidylcholine/phosphatidylserine (70:30, w/w) vesicles (12). The isolated T F extracellular domain (TFl-21g) was produced by inserting a stop codon in the T F coding sequence and expressing the recombinant protein in Chinese hamster ovary cells as described (6). The secreted protein was purified from the culture medium using a twostep procedure of monoclonal antibody (mAb) affinity purification followed by gel filtration using Sephadex G-75 Superfine. All proteins were homogeneous by SDS-polyacrylamide gel electrophoresis (13). Recombinant VIIa and the partially proteolyzed des-(l-38)-VIIa which lacks the Gla-domain were prepared as described (9,14). mAbs to VI1 were produced by immunization of mice with plasma-derived VII/VIIa using standard hybridoma technology. Hybridomas were initially screened by solid phase radioimmunoassay for binding of 1261~VII (2 nM). mAb F4-2.1B was selected for binding of fluid phase VI1 in the presence of CaCI,, but not in the presence of EDTA. Hybridomas were recloned two to three times, and antibody was produced by ascites growth of stable cells. mAbs were purified as previously described (15). Fab fragments of mAbs were produced by cleavage of purified IgG with immobilized papain (16) in 20 mM NaH2P04, 20 mM cysteine-HC1, 10 mM EDTA, pH 7.0, for 16 h, followed by removal of the Fc portion and uncleaved mAb by binding to immobilized protein A (17). Functional Assays-Catalytic activity of VIIa bound to TF was analyzed by hydrolysis of a peptidyl chromogenic substrate, as well as by limited proteolytic activation of factor X. Peptidyl hydrolysis was determined by mixing TF and VIIa in the presence of 1.25 mM Spectrozyme FXa and calcium ions and continuously monitoring the increase in the absorbance at 405 nm in a Molecular Devices (Menlo Park, CA) kinetic microplate reader, essentially as described (6). Cleavage of factor X by the TF.VIIa complex was analyzed by a coupled amidolytic assay as described (18). Briefly, detergent-solubilized TF or TF1.,,, and VIIa were preincubated in the presence of 5 mM CaC12 for 10 min followed by addition of substrate factor X. Samples were removed and quenched in EDTA each minute. Generated factor Xa was determined with a chromogenic substrate, and initial rates of factor Xa formation were calculated. 
Analysis of the TF-Factor VI1 Interaction in Solution-The interaction of VII/VIIa with TF or TF1.219 in solution was analyzed using '""IVII, unlabeled TF, and a noninhibitory anti-TF monoclonal antibody (TF9-1OH10) as a capture antibody (15). Proteins were radiolabeled using the coupled lactoperoxidase/glucose oxidase method as described (18). Falcon MicroTest 111 plates (Becton Dickinson Labware, Oxnard, CA) were incubated with 10 pg/ml TF9-1OH10 (100 pl) overnight followed by blocking with 200 pl of 5% dry milk dissolved in TBS (20 mM Tris, 130 mM NaCl, pH 7.4). After three washes with TBS, a mixture of varying concentrations of lZ51-VII and T F or TF,.,,, in the presence of 5 mM CaCIZ, 0.5% BSA were added to the wells and incubated at 37 "C for 90 min. Radioactivity bound to the wells was determined after 10 rapid washes of the plates with ice-cold TBS, 5 mM CaC12, 0.5% BSA. Nonspecific binding was determined for each dilution of radiolabeled ligand from wells which had not been coated with the mAb, but which contained an identical reaction mixture. The specifically bound radioactivity was assumed to correspond to captured TF. '"I-VII complexes. TF. VIIa complexes formed rapidly in solution based on factor Xa formation which occurred at maximal rate after 2 min of preincubation of VIIa with T F a t 37 "C. TF binding to the mAb was in equilibrium after 60 min. Under the conditions of the binding assay, the parameters determined will therefore reflect VI1 binding to TF at equilibrium. Free ligand was determined by counting an aliquot of the supernatant of each well and by correcting for free TF.VII complexes in the sample as described below. Typically, data for eight dilutions were used for Scatchard analysis which was performed using the EBDA subroutine of the PC-adapted version (Elsevier Biosoft) of the LIGAND program (19). Analysis of IZ5I-TF and of '251-TF1.219 binding to TF9-lOH10 was performed in the presence of 5 mM CaC12, 0.5% BSA in TBS with or without 300 nM VII. Incubation times and washing procedures were as above. lZ51-VII binding to TF or TF,.,,, in the presence of different calcium concentrations was determined by varying the calcium concentration in the incubation mixture from 50 p M to 5 mM. The concentration in the washing buffer was 50 p M for all the calcium concentrations determined. No difference in the binding data at 5 mM CaCI2 was observed between assays which were washed at 50 p~ or 5 mM CaC12. Displacement of bound ligand was studied by mixing T F or TF,.,,, with 100 nM lz'I-VII in the presence of 5 mM Ca2+ followed by an incubation (90 min) in the TF9-1OH10-coated wells to allow assembly with the antibody. The mAb TF9-5G4 (1 FM) (15) which does not compete with TF9-1OH10 but competes with VI1 binding to T F was added for various times to prevent reassociation of VI1 after dissociation from the TF. VI1 complex. Residual specifically bound radioactivity was determined after washes as described above. This analysis demonstrated a slow dissociation of the bound TF. "'I-VI1 or TF,.,,,. '251-VII complex with half-dissociation times of 20 to 40 min at 37 "C, consistent with the slow dissociation of VI1 from cell surface TF (7). This validates that the rapid washes (<1 min) with cold buffers do not allow substantial dissociation of complexes. Binding analysis of '''I-VII to cell surface T F was performed as described (18). 
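As a rough illustration of the Scatchard analysis described above, the sketch below fits bound/free against bound to recover Kd and Bmax from equilibrium binding data. It is a generic linearized fit, not the EBDA/LIGAND routine actually used in the study, and the binding values are invented for illustration.

```python
import numpy as np

def scatchard_fit(bound, free):
    """Linear Scatchard fit: bound/free = (Bmax - bound)/Kd, so a plot of
    bound/free (y) against bound (x) has slope -1/Kd and intercept Bmax/Kd."""
    bound = np.asarray(bound, dtype=float)
    y = bound / np.asarray(free, dtype=float)
    slope, intercept = np.polyfit(bound, y, 1)
    kd = -1.0 / slope
    bmax = intercept * kd
    return kd, bmax

if __name__ == "__main__":
    # Hypothetical specifically-bound and free ligand concentrations (nM)
    # for eight dilutions; illustrative numbers, not measured data.
    bound = [0.9, 2.0, 3.9, 6.5, 9.2, 11.4, 12.9, 13.8]
    free  = [1.1, 2.7, 6.0, 13.0, 29.0, 62.0, 132.0, 270.0]
    kd, bmax = scatchard_fit(bound, free)
    print(f"Kd ≈ {kd:.1f} nM, Bmax ≈ {bmax:.1f} nM")
```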
Characterization of the Anti-VI1 mAb F4-2.1B-Binding of solution phase VI1 to mAb F4-2.1B was analyzed by coating Falcon MicroTest 111 plates with purified IgG at 10 pg/ml overnight, followed by blocking with 5% nonfat dry milk in TBS. IZ5I-VII, '2sI-VIIa, or '"1-des-(l-38)-VIIa (5 nM) were incubated for 60 min followed by rapid washes with ice-cold buffer. For competition analysis between mAbs, a 100-fold molar excess of competitor over lZ5I-VII was added in the incubation mixture. For Western blot analysis of mAb reactivity, 1 pg of reduced VIIa per gel lane was separated by SDS-polyacrylamide gel electrophoresis, followed by transfer to nitrocellulose and blocking with 5% nonfat dry milk in TBS. A 1:2 dilution of culture supernatant which was adjusted to 5 mM CaClz was incubated with the membrane followed by detection using a secondary, alkaline phosphatase-conjugated antibody as described (6). VI1 binding to immobilized phospholipid was performed as described (6). Inhibition of VI1 binding by F4-2.1B was tested by including the mAb (1 p~) in the incubation mixture of VII, 5 mM CaC12, and the immobilized phospholipid. NMR Spectroscopy-The effect of Ca2+ on the conformation of TF,.,,, was studied by NMR spectroscopy. 'H NMR spectra were acquired on a Bruker AM-500 spectrometer; 1024 transients were collected with 8192 data points at 310 K. Solvent suppression was achieved by presaturation of the H 2 0 resonance, and a Hahn-echo pulse scheme was used for detection to eliminate baseline artifacts (20). Ca2+ was removed from the protein sample (TF1.219) by incubation with 10 mM EDTA for 2 h followed by ultrafiltration to adjust to 62 mM NaCl, 50 p M EDTA, and 2.5 mM phosphate, pH 6.5. Analysis was performed with 840 p~ TF1.219. Ca2+ was titrated into the sample in 2 aliquots. First, 3.5 mM CaCL was added which yielded approximately 1 mM free CaZ+, since 2.5 mM phosphate precipitated upon the addition of CaCI2. The second addition of 9 mM Ca" yielded a net Ca2+ concentration of 10 mM. RESULTS Analysis of VII Binding to TF in Solution-The noninhibitory anti-TF mAb TF9-lOHlO was used to provide an efficient phase separation in studies which characterize the T F . VI1 interaction in solution. This mAb does not interfere with VI1 binding to cell surface TF (15). Conversely, binding of detergent-solubilized lZ5I-TF or soluble 1251-TF1.219 to the mAb TF9-10H10 was not inhibited in the presence of 300 nM VI1 (Fig. 1). If VI1 binding to TF is analyzed, the TF .VI1 complexes bound to the mAb should reflect the concentration of complexes formed in solution. However, the fraction of the complexes in the solution which binds to the plate will depend on the concentration of T F in the reaction. Scatchard analysis can be used to characterize the equilibrium of ligand (L) binding to a receptor (R). The dissociation . Parameters for VI1 binding to T F a t 2 nM or TF1.219 at 10 nM (Table I) were calculated based on this approximation. For most of the experiments, higher concentrations of T F were used to achieve higher antibody saturation. Increasing the concentration to 100 nM TF1.219 and 20 nM T F resulted in a lower fraction of bound complexes which was 7% and 30% of the respective receptor concentration (Fig. 1). 
The free receptor complexes therefore represented a considerable fraction in the supernatant of the wells, and we obtained Table I demonstrates that the dissociation constants obtained by the two procedures are similar and that the maximal number of sites (Bmax) is in reasonable agreement with the fraction of receptor bound to the mAb ( f -based on Fig. 1. TABLE I Binding constants for the TF-VII interaction in solution Binding of plasma-derived VI1 (VII), recombinant VIIa (VIIa), or recombinant VIIa without the Gla-domain (des-(l-38)-VIIa) to TF in Triton X-100, T F acetone-extracted (-Triton), or TF,.,,, at the given concentration (TF, TF1.219) was characterized at indicated CaC12 concentrations (Ca). Prothrombin fragment 1 a t 100 PM (+ Fragment 1) or Fab fragments of the mAb F4-2.1B at 1 PM (+ F4-2.1B) were added to certain experiments. Mean and standard deviation for the K d and the maximal number of binding sites (Elmax) were calculated for the indicated number of exDeriments f n ) . VII Binding to Solubilized TF and TFl.21g-The binding of plasma-derived human VI1 to human brain TF which was solubilized with Triton X-100 (<0.02% in the assay) was characterized. We determined a Kd of 9.2 nM for the binding of VI1 to solubilized T F ( Table I, Fig. 2B) which is comparable to the interaction with TF reconstituted into phosphatidylcholine vesicles. This interaction has been shown to occur with a Kd of 13.2 nM at 37 "c and 5 mM CaClz (21). Similarly, VIIa bound with a Kd of 4.2 nM (Table I), indicating an apparent affinity slightly greater for the active serine protease. This is comparable to findings for T F associated with phospholipid vesicles (21). In contrast, the TF extracellular domain TF1.219 bound VI1 (Table I, Fig. 2 A ) and VIIa (Table I) with an approximately 10-fold higher Kd. This may indicate that the truncated molecule is slightly less well conformed compared to full length T F resulting in loss of some proteinprotein interactions contributing to the tight binding of VI1 to TF. Alternatively, the tighter binding of VI1 to detergentsolubilized T F might indicate a favorable interaction of the VI1 Gla-domain with the detergent associated with the hydrophobic membrane-spanning domain of T F (residues 220-242). To exclude this possibility, we extracted lyophilized T F once with 100% and twice with 80% acetone to remove most of the bound detergent (11). This did not alter the Kd of VI1 binding to T F (Table I). In addition, we attempted to block potential Gla-domain interactions with detergent by the addition of 100 FM prothrombin fragment 1 which is homologous to the Gla-domain of VI1 and blocks phospholipid binding of other Gla-domain-containing proteins due to the occupation of the phospholipid binding sites (22, 23). Since we included the prothrombin fragment 1 at a fixed concentration of 100 PM, a 10,000-fold molar excess was present relative to VI1 at 10 nM. Prothrombin fragment 1 did not alter the Kd of VI1 binding to TF or TF1.219 (Table I). These data suggest contribution of the Gla-domain to specific protein-protein interactions which result in a more stable TF. VI1 complex, rather than to a less specific type of interaction exemplified by the prothrombin Gla-domain. and TF (0) was calculated from the depicted data. Analysis was performed with 20 nM TF or 100 nM TF1.219 in the presence of 5 mM CaC12 and 0.5% BSA at 37 "C. 
TF-VII Interaction in Solution Binding of des-(1-38)-VIIa to TF-In contrast to cell binding assays (9), the binding assay used here allowed determination of binding parameters at a 10-fold higher Kd, as demonstrated for the TFl.21g. VI1 complex. We determined a 30fold higher Kd for the des-(l-38)-VIIa binding to detergentsolubilized T F as compared to the binding of VIIa (Fig. 3A, Table I). However the dissociation constants of the binding of VIIa and des-(l-38)-VIIa to TF1.219 were identical (Fig. 3B, Table I). This indicates that at least one interactive site in des-(l-38)-VIIa is properly conformed, and it further suggests that the truncated T F molecule has a less conformed structural site participating in binding of the VIIa Gla-domain. These data support our hypothesis that the Gla-domain of VI1 might directly interact with TF. However, the possibility exists that proteolytic removal of the amino-terminal Gladomain slightly alters the folding of the protein and/or interdomainal interactions within VIIa. We explored this further using mAb F4-2.1B which binds to the Gla-domain. This mAb should interfere with the postulated interactions of the Gladomain with TF. Hybridoma F4-2.1B was generated by immunization with human VII/VIIa purified from plasma and identified by screening for binding only in the presence of Ca2+. The mAb reacted with the light chain of VIIa. In solution, it bound VI1 and VIIa at an optimal calcium concentration of 1-5 mM, but not in the presence of 1 mM EDTA. F4-2.1B failed to bind des-( 1-38)-VIIa, when analyzed under conditions identical with those where it bound VIIa. Binding of des-(l-38)-VIIa and VIIa was comparable for 19 additional mAbs which were raised against plasma VII. In a binding assay with immobilized mixed phosphatidylserine/phosphatidylcholine vesicles (30/ 70) (6), addition of 1 p~ F4-2.1B decreased the specific binding of 300 nM VI1 to immobilized phospholipid by 30%. These data are consistent with epitope localization in the Gladomain. No competition of F4-2.1B with a panel of 19 VIIspecific mAbs which were calcium-dependent as well as calcium-independent was observed suggesting that binding of this mAb to VI1 does not conformationally perturb VI1 resulting in epitope loss. In addition, the Kd of VI1 for binding was not diminished in the presence of this mAb (Table I). However, in the presence of F4-2.1B, VI1 binding to full length T F occurred with a Kd which was identical with that determined for the interaction with TF1.219 (Table I). Consistent with these results, binding of VI1 to cell surface T F was inhibited in the presence of F4-2.1B (data not shown). These data provide an independent line of evidence that selective interference with Gla-domain-mediated interactions prevent high affinity binding of VI1 to TF. Requirement of Calcium for the Low Affinity Interaction of VII with TF-Previous studies have demonstrated a calcium optimum of 2-5 mM for the interaction of VI1 with cell surface T F (1, 7, 8). In light of the potential interaction of the Gladomain of VIIa with TF, these findings indicate that optimal interaction requires a conformation of the Gla-domain imparted by occupancy of the low affinity calcium binding sites. We analyzed the effect of ca2+ concentration on the Kd for binding of native VI1 to both full length T F and the truncated TF1.219. The binding to TF1.219 was not affected by variation of the calcium concentration from 50 p~ to 5 mM (Table I). 
In contrast, the Kd of VI1 for binding to detergent-solubilized T F approximated the Kd for TF1.219, when the calcium concentration was reduced to 50 p~ ( Table I). The Kd of VI1 binding to TF at 50 p~ calcium was comparable to the Kd of des-(l-38)-VIIa binding to TF. These data demonstrate that low affinity calcium binding sites of the VI1 Gla-domain must be saturated for tight binding to TF, consistent with a calcium-dependent structure which participates in this interaction. The low affinity binding of VI1 to TF as well as TF1.Z19 was abolished in the presence of EDTA indicating that the low affinity interaction of VI1 with T F is dependent on Ca2+ ions at micromolar concentrations. This indicates involvement of a high affinity calcium ion binding site to properly conform one or more interactive sites on T F or VI1 or both. Conformational Effects of Calcium Ions on TF-The effect of Ca2+ on the conformation of T F was analyzed by 'H NMR spectroscopy. Several studies on calcium-binding proteins demonstrated that large spectral changes occurred upon the addition of Ca2+. Shifts in aromatic resonances of as much as 0.5 ppm and in methyl resonances in the order of 0.1 to 0.2 ppm are reported (24)(25)(26). The NMR spectra obtained for TF1.219 in the presence of 50 p~ EDTA and 1 mM and 10 mM CaC12 were similar (Fig. 4A). Both the aromatic (5.0 to 11.0 ppm) (Fig. 4B) and the methyl (-1.0 to 1.0 ppm) (Fig. 4C) regions were monitored during the Ca2+ additions to TF1-219, and no detectable differences were observed. These results exclude a major effect of Ca2+ on the conformation of TF1.219 and suggest that the Ca2+ requirement for binding of VI1 to T F involves one or more calcium binding sites on VII. Functional Impact of the High and Low Ca2+-dependent Interactions-We analyzed the characteristics of peptidyl hydrolysis and cleavage of the natural substrate factor X by VIIa when complexed with T F or TF1.219. Catalytic activity of TF. VIIa and TF1.219. VIIa complexes were not significantly different when peptidyl hydrolysis was examined (Fig. 5A) (6). The apparently lower rate of peptidyl hydrolysis at high T F concentrations is due to inhibition of the assay by the excess of Triton X-100 (>0.25% final concentration) which was used to solubilize the stock solution of TF. These data suggest comparable functional activity of both forms of TF to more effectively enable the VIIa catalytic triad for cleavage of small substrates. The enhancement of peptidyl hydrolysis by VIIa in the presence of T F was abolished in the presence of EDTA (6), consistent with the binding data which demonstrated dissociation of the complex upon removal of Ca2+. Peptidyl hydrolysis mediated by 200 nM des-(l-38)-VIIa was approximately 60% of that mediated by an identical concen- tration of intact VIIa, and the enhancement was supported by T F and TF1.219 to the same extent (Fig. 5B). Based on the Kd for the binding of des-(l-38)-VIIa to TF, the observed catalytic activity of the TF. des-(l-38)-VIIa is consistent with the number of catalytic complexes formed at the given concentration of des-(l-38)-VIIa. TF. des-(l-38)-VIIa therefore appears to cleave the peptidyl substrate with a rate comparable to TF . VIIa. Limited proteolytic cleavage of the protein substrate factor X by VIIa was approximately 5-to 10-fold slower when TF was replaced by TF1.219 (Fig. 6A), consistent with previous analysis (6). 
The rate of factor X cleavage by 200 nM des-(1-38)-VIIa was approximately 100-fold less when in complex with T F as compared to the TF.VIIa complex (Fig. 6B). Similar rates were obtained with TF1.219 when very high concentrations of the cofactor were added to ensure saturation of des-(l-38)-VIIa. These data suggest that function of the catalytic site of des-(l-38)-VIIa for peptidyl substrates is readily imparted by complex formation with T F and equally effective with TF1.219, whereas extended substrate recognition, as for protein substrates, is severely diminished. This also applied to phospholipid-bound factor X in the presence of 1 nM phospholipid-reconstituted TF, the rate of activation of 1 PM factor X by 500 nM des-(l-38)-VIIa was 80% reduced compared to an identical concentration of VIIa. The binding analysis indicated that TF1.219 was deficient in providing a complementary interactive site for at least a Gla-domain- dependent interactive site on VII. Similarly, interaction of VI1 with TF1-219 appeared to result in a less efficient interaction with the protein substrate factor X. However, rates in the presence of TF1-219 were different for VIIa and des-(1-38)-VIIa suggesting that TF1.219 may provide transient interactions for the VII-Gla domain resulting in more efficient substrate interaction with VIIa. This could be consistent with the concept that conformational entropy is modified in TF1-219 providing a less optimal complementary structure for VIIa Gla-domain-dependent interactions. Further evidence for this idea is provided by the finding that cleavage of phospholipid-associated factor X by TF. VIIa and TF,.,,, . VIIa occurred with similar catalytic efficiency (6), further emphasizing that, at least transiently, a good fit does occur between VIIa and TF,.,,, to form a functional TF1.219. VIIa complex. DISCUSSION The assembly of VIIa with its cellular receptor T F is the critical initial reaction leading to efficient limited proteolytic activation of the substrate factors X and IX on cell surfaces. Insertion of TF into a phospholipid bilayer is critical for full functional activity (5,6,27). The preferential cleavage of factor X that is associated with the phospholipid surface may contribute largely to the increase in catalytic activity of the TF.VIIa complex in a phospholipid environment (6). By analogy to homologous proteins, interactions of the VIIa Gladomain with charged phospholipid surfaces could additionally facilitate the assembly of the protease with its receptor cofactor. The Gla-domain of VI1 has been suspected to mediate interactions with the phospholipid surface resulting in conformational changes which facilitate binding of additional sites on VI1 to the TF extracellular domain (9,10). However, inhibitors of protein interactions with phospholipid have not been shown to inhibit VI1 binding to TF, whether cell-surfaceexpressed or reconstituted into phospholipid (28); thus, the importance of VI1 Gla-domain interactions with phospholipid for high affinity binding to TF and for functional activity has not been established. We address here the functional significance of the Gla-domain of VI1 and its interaction with TF by analyzing the formation of the TF . VI1 complex in solution. Binding constants for the TF .VI1 interaction were estimated from an assay which uses a noninhibitory anti-TF mAb to capture T F VI1 complexes which formed in solution. 
Both plasma VI1 and recombinant VIIa bound to detergent-solubilized full length TF with affinities similar to those observed for TF reconstituted into uncharged phospholipid (21). The affinity was not significantly influenced by the absence of detergent or by competition for potential Gla-domain interactions by a high molar excess of prothrombin fragment 1. These data provide evidence that insertion of TF into a phospholipid bilayer is not critical for the TF structure to provide binding function and that a lipid surface is not required for high affinity binding of VI1 to TF. This suggests that the functional importance of the VIIa Gla-domain (9) may be explained by the contribution of the Gla-domain to specific protein-protein interactions with TF. This analysis does not exclude additional complexity as indicated by the observation of positive cooperativity in the presence of negatively charged phospholipid (21) which suggests a functional noncovalent dimerization of T F (7,21). Our data suggest that the Gla-domain of VI1 may participate in interactions with T F which contribute to the high affinity binding. Similar structural requirements have also been described for binding of factor IX to its endothelial cell receptor, a ligand-receptor interaction of comparable affinity to the TF. VI1 assembly (29,30). The factor IX Gla-domain in conjunction with the first epidermal growth factor domain has been proposed as the site for these protein-protein interactions (30), suggesting that Gla-domain-dependent interactions are not restricted to interactions with phospholipid. Consistent with the proposed participation of the Gladomain in the high affinity binding of VI1 to TF, tight binding of VI1 to TF was observed at 1-5 mM CaC1,. Further, VIIa lacking the amino-terminal Gla-domain (des-(l-38)-VIIa) did not bind with high affinity to full length T F in solution, consistent with the lack of high affinity binding to cell surface T F (9). Similarly, interfering with Gla-domain interactions by binding of Fab fragments of a Gla-domain-specific mAb abolished high affinity binding. Low affinity binding of VI1 to full length TF was observed using plasma VI1 in the presence of Gla-domain-specific mAb (& = 140 nM) or des-(1-38)-VIIa (& = 112 nM). A previous study using a cell binding assay failed to demonstrate binding of des-(1-38)-VIIa to T F (9). From the data presented in that study, as well as from our own experience, it is evident that nonspecific binding of VII/VIIa in the cell binding assay prevents use of sufficiently high concentrations of VI1 to observe des-(1-38)-VIIa binding to TF at the Kd determined in our study. The present study demonstrates TF. VI1 interaction which is independent of the Gla-domain. Binding with similar affinity was also observed at low (50 PM) CaCl, or when binding of plasma VI1 or VIIa to the isolated extracellular domain of T F (TF1-Z19) was analyzed. Binding affinity of VI1 and des-(1-38)-VIIa to TF,.,lg was identical, and the affinity of VI1 for TF1.,,, did not change at 50 PM CaC12 or in the presence of the Gla-domain-specific mAb. These data support the hypothesis of multiple interactions between VI1 and TF and suggest that TF and TF1.219 express a binding site which allows low affinity association of VI1 independent of a properly conformed Gla-domain. Low affinity interaction of VIIa with TF or TF1.219 was sufficient to enhance peptide substrate hydrolysis. 
In addition, des-( 1-38)-VIIa hydrolyzed the same peptidyl substrate at an identical rate with either TF or TF1.219 as cofactor. This suggests that the low affinity interaction with TF must influence the catalytic domain of VIIa to lead to increased accessibility or alignment of the catalytic triad. Whether this low affinity interaction involves interactions of TF with the catalytic domain of VIIa or whether the effects are allosteric due to interactions in other domains of VIIa has not been established. However, an interactive region in the VIIa catalytic domain has been suggested to involve residues 195-206 (10). The catalytic triad Hidg3 is adjacent to residues of this peptide. Association of T F with VIIa through certain residues in the sequence 195-206 could therefore be conceived to enhance peptidyl hydrolysis by a rather local conformational alteration of the catalytic triad or, alternatively, by more global conformational rearrangements of the catalytic domain, as demonstrated to occur spontaneously during the activation of trypsinogen or upon binding of an active site inhibitor or trypsinogen (31). However, experimental proof in support of these speculations remains to be established. The cleavage of the substrate factor X in solution by TF. des-(l-38)-VIIa or TFl.219. des-( 1-38)-VIIa was markedly reduced compared to complexes formed with VIIa. Under identical conditions, a less than 2-fold difference in peptidyl hydrolysis was observed, whereas the cleavage of factor X was reduced approximately 100-fold. This is consistent with the observed 97% loss of function, when des-(l-38)-VIIa was analyzed at 10 nM with phospholipid-reconstituted T F (9). Since the use of a 20-fold higher des-(l-38)-VIIa concentration in our study did not result in a substantial increase of factor Xa generation, a decreased number of catalytic TF. des-( 1-38)VIIa complexes due to the lower affinity of the interaction is unlikely to account for the decrease in functional activity. Rather, these findings suggest that the interaction of the Gla-domain of VIIa with TF contributes to more optimal extended recognition of the protein substrate by allosteric effects or direct interaction with factor X. This conclusion is further substantiated by the slightly reduced rate of solution-phase factor X activation by TF1.219.VIIa compared to TF.VIIa. Since TF1.219 presumably does not provide an entirely equivalent fit for the VIIa Gla-domain, the decreased rate of factor X activation would be consistent with a reduction in contacts of the Gla-domain with TF1,,,. The low affinity interaction of VI1 with T F or TF1.219 is dependent on calcium ions and is fully supported by 50 pM Ca2+. This suggests one or more high affinity calcium binding sites which participate in the interaction with T F directly or by conferring stability to the interactive domain. No apparent homology with calcium-binding proteins is found in the T F extracellular domain. Using 'H NMR spectroscopy, we provide here evidence that the conformation of the TF binding structures is not demonstrably influenced by the presence of Ca2+. This indicates that a high affinity binding site in VI1 could be critical for interaction with TF. Such high affinity sites could be located in the first EGF domain (32) or in residues 210-220 of VI1 which are homologous to the calcium binding site in trypsin (33, 34). 
This homologous sequence is adjacent to the proposed interactive region in the catalytic domain (10), and peptides encompassing the sequence have been reported to inhibit the function of the TF·VIIa complex (35), although this has not been confirmed by others (10). Our results are consistent with the participation of at least two sites on VII in the interaction with TF (35): one which is dependent on a proper conformation of the Gla-domain, or which may involve the Gla-domain itself, and another which is functionally independent of the Gla-domain. A distant homology search revealed structural similarity of TF with the interferon receptor and cytokine receptor superfamily (36). A tandem repeat of two immunoglobulin-like β-barrel domains has been proposed as the global structural folding characteristic of this receptor family, and interactive regions for the respective ligands have been hypothesized for both domains. Interactive sites in the amino- and carboxyl-terminal regions of the TF extracellular domain may therefore provide the structural counterpart for the two sites proposed here for VII.
7,882
1991-08-25T00:00:00.000
[ "Biology", "Chemistry", "Medicine" ]
An Experimental Study of Intermittent Heating Frequencies From Wind-Driven Flames

An experimental study was conducted to understand the intermittent heating behavior downstream of a gaseous line burner under forced flow conditions. While previous studies have addressed time-averaged properties, here measurements of the flame location and the intermittent heat flux profile help to give a time-dependent picture of downstream heating from the flame, useful for understanding wind-driven flame spread. Two frequencies are extracted from the experiments: the maximum flame forward pulsation frequency in the direction of the wind, which helps describe the motion of the flame, and the local flame-fuel contact frequency in the flame region, which is useful in calculating the actual heat flux that can be received by the unburnt fuel via direct flame contact. The forward pulsation frequency is obtained through video analysis using a variable-interval time-average (VITA) method. Scaling analysis indicates that the flame forward pulsation frequency varies as a power-law function of the Froude number and the fire heat-release rate. For the local flame-fuel contact frequency, it is found that the non-dimensional flame-fuel contact frequency remains approximately constant before the local Ri_x reaches 1, i.e., for attached flames. When Ri_x > 1, it decreases with the local Ri_x as flames lift up. A piece-wise function was proposed to predict the local flame-fuel contact frequency covering the two Ri_x scenarios. Information from this study helps to shed light on the intermittent behavior of flames under wind, which may be a critical factor in explaining the mechanisms of forward flame spread in wildland and other similar wind-driven fires.

INTRODUCTION

Wind-driven fires have been studied extensively over the past few decades; however, there are still significant gaps in understanding, especially for wind-driven fires resembling a line fire configuration. Most of the current literature on wind-driven fire spread has focused on the steady-state burning characteristics of these fires, preferring a time-averaged view of flame tilt angles, burning rates, and downstream heat fluxes to the more complicated, stochastic movements that flames in reality make (Putnam, 1965; Albini, 1982; Weckman and Sobiesiak, 1988). The fluctuation of the flame front has recently been determined to follow some scaling laws and to play a role in flame spread, in particular for wildland fires (Finney et al., 2015). The movement of flames therefore may have implications in a variety of wind-driven scenarios, wherever flames reside long enough to heat unburnt fuels and thus contribute to forward fire spread. Flame "pulsations," or brief cyclical motions, have been studied for various fire configurations. Under stagnant conditions, an intermittent "puffing" phenomenon has been studied extensively using pool fires, where the puffing frequency of the flame has been found to be well correlated with the diameter of the fire source (Grant and Jones, 1975; Hamins et al., 1992; Hu et al., 2015). These experiments on buoyant plumes suggest that puffing is primarily the result of a buoyant flow instability, which involves the strongly coupled interaction of a toroidal vortex formed a short distance above a fuel or burner surface (Cetegen and Ahmed, 1993). Scaling of this phenomenon has also been represented as a Strouhal-Froude relationship, St ∼ Fr^(-0.5) (Cetegen and Ahmed, 1993).
In wind-driven fires, the pulsation of the flame is not expected to scale with burner size in the same way as fires under stagnant conditions as wind plays a significant role in the fire behavior, too. These fires have already been shown to be strongly influenced by a competition between upward buoyant forces from the flame and forward momentum from the wind (Tang et al., 2017a), suggesting a combination of these forces will also play a role in generating intermittent motions within the flame. A detailed look at the time-dependent nature of wind-blown flames reveals that there are a variety of structures and regions which vary in both time-dependent and averaged characteristics. Figure 1 shows an image of a wind-blown flame from a stationary burner that, at first, appears attached along the downstream surface, but eventually lifts into a tilted flame. Three regions are thus defined to describe the flame behavior. First, an attachment region exists where the flame is visibly attached to the surface, occurring for some distance downstream of the burner since the wind overpowers buoyancy from the flame. As the flame moves forward, buoyancy increases in proportion to the momentum from the wind and the flame enters a transitional, "intermittent" region, where the flame fluctuates as a result of the competition with momentum from the wind and flame-generated buoyancy. After this region, the flame is finally lifted due to the dominant role of buoyancy, growing with distance as distributed heat release reactions continues to occur within the flame. In the process of the flame moving forward, a two-directional fluctuating behavior is anticipated, indicated on Figure 1. One is flame forward pulsation, where the flame intermittently flickers forward onto the downstream surface ahead of the flame front. In a spreading fire scenario, this may potentially reach more unburnt fuels and heat them, albeit for short times. This likely occurs due to a competition between momentumdriven wind and a counter-clockwise recirculation zone at the flame front, a buoyant instability similar to puffing pool fires, or a combination of the two. The other is flame-fuel contact, which appears most rapid in the region between the attached flame length and the lifted region. The up-and-down motion of the flame here is most likely due to a local buoyant instability and may be subject to change along the downstream distance. Independently measuring these two components will help to determine the influence intermittent heating has in each of the mentioned regions. Flame forward pulsation and flame-fuel contact will each play a significant role in the ignition of unburnt fuels within the flame's reach in winddriven fires. In our previous work (Tang et al., 2017a), the local total heat flux distribution on the downstream surface of winddriven line fires was investigated and a local Richardson number [Ri x = Gr x / Re 2 x , describing flame buoyancy over wind momentum (Subbarao and Cantwell, 1992;Johnson and Kostiuk, 2000)] was employed to scale measured non-dimensional heat fluxes. Inertial forces would be expected to dominate the flame behavior when Ri x <0.1, and buoyant forces when Ri x > 10. However, the details and implications of the heat transfer modes in the mixed region (0.1 < Ri x < 10), where the transition from an inertial-dominant to buoyancy-dominant regime, has not been well studied. 
Because most fires with cross-flow reside in this region, e.g., flames that begin attached near the surface and tend to "lift off " into a more plumelike scenario downstream, study of the fire behavior in this region is important to improve understanding of heating during flame spread. In this study, a stationary, non-spreading gas-burner fire configuration was chosen as it allows for a thorough statistical analysis of the flame structure. Long-duration experiments allow for a large sample size and more control over variations in experimental parameters, such as decoupling the heatrelease rate of the fire from flow conditions. High speed video is useful on these fires to reveal and track buoyant instabilities in the fire flow which resemble those appearing in spatially-uniform fuel beds. The same flame movements observed in previous spreading fire experiments were observed with the stationary burner (Finney et al., 2015), but with the ability to collect a larger data set of intermittency. Both forward and vertical movements of the flame were studied. The flame forward pulsation frequency was extracted from videos using the VITA method, similar to previous work (Tang et al., 2017b), while the local flame-fuel contact frequency on the downstream surface was obtained through Fast Fourier Transform (FFT) of heat flux sensor data. Scaling laws were then developed for these two frequencies and equations derived to correlate the frequency with related controlling parameters. EXPERIMENTAL SETUP Experiments were performed on a specially-designed 30 cm cross-section laminar blower built for uniform forced-flow combustion experiments. The blower pressurizes a 0.75 cubicmeter plenum with a centrifugal fan. The flow then travels through a converging section into a 30 × 30 cm rectangular duct, where multiple mesh screens and honeycomb flow straighteners were installed in the converging section to help generate a uniform wind profile. Finally, the flow travels another 1.35 m in the duct resulting in a fully-developed laminar boundary layer before it is exhausted at the outlet. The outlet velocity from the tunnel can be as high as 6 m/s, with a turbulence intensity, u'/u controlled below 3%. The wind velocity ranged from about 0.8 to 2.5 m/s in experiments, confirmed to be uniform at multiple locations across the space with a hotwire anemometer. The experimental platform was placed immediately following the outlet of the exhaust tunnel. A sand-filled gas burner was used with a 10 cm deep sand-filled plenum and a 25 cm (length) ×5 cm (width) surface. The top surface of the burner was mounted flush with a sheet of ceramic insulation board in the center of the blower outlet. The ceramic board had dimensions of 90 (length) × 45 (width) × 2 (height) cm 3 , over which the flame fluctuates, providing a quasi-adiabatic boundary condition. Propane from a gas cylinder was passed through a programmable flow meter to provide a steady flow rate of fuel during experiments. Three different fire heat-release rates, 6.3, 7.9, 9.5 kW, which correspond to 4, 5, and 6 liters per minute of propane gas were used during the experiments. High-speed videography using a Nikon DX was recorded at 250 frames per second at a 1,000 × 720 pixel resolution to capture digital images of the flame in all configurations from the side view. The wind tunnel and test section setup are shown in Figures 2A,B. 
For the frequency of the flame intermittently attaching to the downstream surface, a Gardon-type high frequency Vatell heat flux gauge (model HFM 1000-0) sampled at 1 kHz was used to capture the heat flux signal, and a FFT is applied to the heat flux data to extract the dominant frequency. These gauges were placed at six downstream locations 5.5 cm apart, starting 5.5 cm away from the trailing edge of the burner. Experimental conditions were chosen following our previous work on the total heat flux distribution and flame attachment, representing a wide range of wind momentum (Re number) and flame buoyancy (nondimensional heat release rate) (Tang et al., 2017a). For the flameforward pulsation frequency, a dominant frequency is not as apparent in the video analysis, as it is thought to be more affected by transport of stochastic turbulent structures. A technique used to analyze such turbulent flows, the variable-interval timeaverage (VITA) method, essentially a level-crossing technique, is applied through a MATLAB script which was previously found to show good results for turbulent flows (Tang et al., 2017b). RESULTS AND DISCUSSIONS Flame Forward Pulsation Frequency The flame forward pulsation was measured for stationary burners under wind. The flame location was determined using side-view high speed videos. Each image in the video was cropped to the same region-of-interest, a region defined from the downstream edge of the burner surface to the end of the image in the downstream direction, with a certain height above the surface. This region, in theory, could represent a flame zone depth in a spreading fire (see the dashed rectangle in Figure 3A). Flame images were then converted to greyscale images in MATLAB by averaging all three color channels and a threshold applied to result in a black-and-white image of flame and no-flame regions. As shown in Figure 3A, the flame position and flame shape are constantly changing when there is a perpendicular wind. The flame location is determined in the region of interest by tracking the furthest downstream tip of the flame detected from thresholding. This location fluctuates in time and would "burst" or quickly enter into what would be the unburnt fuel region, resulting in the intermittent heating of unburnt fuels by flame contact. Resultant flame locations as a function of time were analyzed using the VITA method (Blackwelder and Kaplan, 1976;Audouin et al., 1995) for a 1 cm window in the video at different distances downstream of the burner. Other window sizes up to 4 cm wide required more processing but produced similar results. Level-crossing was only considered for the forward direction, thus only when the flame appeared after absence in the previous frame, was it considered a crossing to avoid doublecounting. The resulting flame forward pulsation frequency was then determined at each downstream location by dividing the number of crossings by the total number of frames, multiplied by the frame rate of the video. The mean values of the frequency data obtained from all the thresholds was used for later analysis. To connect results to flame spread in solid fuels, a fuel bed height will need to be defined for the scenario at hand. Figure 3B shows resultant frequencies following application of the VITA technique. As the flame intermittently moves forward and backwards, it enters and leaves the downstream window position producing a parabolic distribution of frequencies. 
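A minimal sketch of the level-crossing step described above, applied to a flame-tip position record that has already been extracted from thresholded video frames: only forward crossings (flame present after absence in the previous frame) are counted, and the count is converted to a frequency using the frame rate. The window locations, frame rate, and synthetic trace are illustrative assumptions rather than the MATLAB implementation used in the study.

```python
import numpy as np

def forward_crossing_frequency(tip_x, window_x, fps):
    """Count frames in which the flame tip newly reaches past a downstream
    window location (present now, absent in the previous frame), then
    convert the count to a frequency in Hz: crossings / frames * fps."""
    present = np.asarray(tip_x) >= window_x        # flame covers the window
    forward = present[1:] & ~present[:-1]          # appearance after absence
    return forward.sum() * fps / len(tip_x)

if __name__ == "__main__":
    fps = 250.0                                     # video frame rate (Hz)
    t = np.arange(0.0, 10.0, 1.0 / fps)
    rng = np.random.default_rng(0)
    # Synthetic flame-tip trace (cm downstream): mean reach plus fluctuation.
    tip = 20 + 5 * np.sin(2 * np.pi * 8 * t) + rng.normal(0, 2, t.size)
    for x in (18, 22, 26):                          # window locations (cm)
        f = forward_crossing_frequency(tip, x, fps)
        print(f"x = {x} cm: f ≈ {f:.1f} Hz")
```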
A maximum frequency therefore occurs at some downstream location between the continuous flame region and the maximum forward extension of the flame. The frequency at this location is used in the analysis as a maximum representative frequency of the flame along the surface. Applying the VITA method to all fire sizes and wind velocities, maximum frequencies are extracted, shown in Figure 4. It shows that the flame pulsation frequency increases relatively linearly with the wind velocity, while it decreases with fire size in all the experiments tested. The frequencies observed range from <10 Hz to about 15 Hz. Flame-Fuel Contact Frequency The flame-fuel contact frequency can help determine how much heat flux is received by unburnt fuels ahead of the flame front through direct flame contact, which has recently been found to be a primary mechanism of ignition of fuels in a wind-driven wildland fire (Finney et al., 2015). For each of the experimental conditions, raw heat flux signals were taken at different locations on the downstream board. The flame-fuel contact frequency was extracted from measured heat flux signals by applying a FFT which results in a frequency spectrum. The same method was applied to three cases with different wind velocities but the same fire size (9.5 kW) at 11 cm downstream, shown in Figure 5. FFT data smoothed by a Savitzky-Golay filter revealed the peak intensity used to choose the peak frequency at that location. In Figure 5, it can be seen that the flame-fuel contact frequency slightly increases with increasing wind velocity, however the intensity of this peak decreases indicating a reduced dominance of this peak frequency and more turbulent structures. The local flame-fuel contact frequency was further plotted for different wind velocity and fire sizes. Figure 6 shows one example where the fire size is 7.9 kW, for a given wind velocity, the local frequency tends to decrease with downstream distance from the burner. It is worth noting that, under high wind speeds, the flame starts to behave differently as the leading edge of the flame becomes highly strained i.e., the flame length approaches a constant value instead of extending further with high wind speed, the leading edge also starts to briefly extinguish and blue flames appear, which might explain some of the decrease shown in Figure 6 for the high wind at 5.5 cm data point. Discussion Unlike previous studies on flame frequencies (Hamins et al., 1992;Cetegen and Ahmed, 1993), which focused on puffing under stagnant conditions, this paper investigated the flame pulsation frequency under a cross-wind flow. The puffing frequency without wind has been found to be a function of burner size. Under wind conditions, however, two movements have been clearly identified. One is the global forward pulsation, where the flame is driven by the wind, and possibly buoyancy as well, and intermittently moves back and forth in the stream-wise direction. The other is an upward pulsation, which is found to be a more local phenomenon, where the flame is touching and directly transferring heat to the local unburnt fuel within the attachment and intermittent regions shown in Figure 1. This section will discuss these two pulsation frequencies and their correlations with relevant parameters. Global Flame Forward Pulsation Frequency The global flame forward pulsation frequency is thought to be the result of a competition between forward momentum from the ambient wind and buoyancy from the flame itself. 
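Before turning to the scaling analysis, a compact sketch of the peak-frequency extraction applied to the heat-flux records discussed above: FFT of a 1 kHz signal, Savitzky-Golay smoothing of the amplitude spectrum, and selection of the dominant peak. The filter window, polynomial order, and synthetic signal are illustrative choices, not the settings used in the experiments.

```python
import numpy as np
from scipy.signal import savgol_filter

def peak_frequency(signal, fs, win=51, poly=3, f_min=0.5):
    """Dominant frequency (Hz) of a heat-flux record: one-sided FFT
    amplitude spectrum, Savitzky-Golay smoothing, argmax above f_min."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amp = np.abs(np.fft.rfft(signal - np.mean(signal)))
    amp_smooth = savgol_filter(amp, window_length=win, polyorder=poly)
    mask = freqs >= f_min                     # ignore the DC/very-low band
    return freqs[mask][np.argmax(amp_smooth[mask])]

if __name__ == "__main__":
    fs = 1000.0                               # gauge sampling rate (Hz)
    t = np.arange(0.0, 30.0, 1.0 / fs)
    rng = np.random.default_rng(1)
    # Synthetic flux: mean level, ~6 Hz flame-contact component, noise.
    q = 15 + 3 * np.sin(2 * np.pi * 6 * t) + rng.normal(0, 1, t.size)
    print(f"peak ≈ {peak_frequency(q, fs):.2f} Hz")
```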
A scaling analysis can be performed, assuming relevant parameters, which reveals two primary groups on which the forward pulsation frequency depends: the Froude number (the ratio of wind momentum to buoyant forces) and Q* (buoyancy). A phenomenological explanation can be arrived at by first assuming that the flame forward-pulsation frequency can be related to both the ambient wind velocity, u, and the flame length, l_f, which characterizes buoyancy from the fire. In a wind-driven fire, the flame length has previously been found to be a function of the wind velocity and the mass burning rate (Thomas, 1963; Moorhouse, 1982), where D is the characteristic diameter or length of the burner, ṁ'' the mass burning rate of the fire, and a, b, and c are constants, previously found to be 62, 0.25, and −0.044, respectively, for gas fires (Thomas, 1963). A non-dimensional velocity can be defined as the ratio of the ambient wind velocity to a characteristic buoyant velocity of the fire (Hu, 2017), where g is the acceleration due to gravity and ρ_a the density of ambient air. Assuming the fuel burns completely, ṁ'' can be related to the heat-release rate of the fire, Q̇, which provides a more functional and universal parameter from which to define the fire. The heat-release rate can be non-dimensionalized as Q* (Quintiere, 1989) for a fire plume and expressed in terms of ṁ''. Combining Equations (1)-(5), we arrive at Equation (6), which relates the flame forward-pulsation frequency to the Froude number and the non-dimensional heat-release rate, where Fr is defined as Fr = u²/(gL), u is the wind velocity, and L is the flame length. The flame forward pulsation frequency is then plotted against this derived parameter in Figure 7, and a power-law relationship is found relating them. An empirical fit can then be obtained from the data, with an R² of 0.95. While f_F could be presented in non-dimensional form, such as through a Strouhal number, a relevant length scale known a priori is difficult to define and, if properly applied, would result in a straight line similar to puffing pool fires (Hamins et al., 1992). Therefore, only f_F is shown in Figure 7.

Local Flame-Fuel Contact Frequency

For the local flame-fuel contact frequency, the local Grashof number, Gr_x = gβ(T_h − T_∞)L³/ν², and Reynolds number, Re_x = UL/ν, arise as critical parameters describing the flow and heat transfer in our setup, where T_h and T_∞ are the hot gas and ambient temperatures, respectively, L is the characteristic length, g the acceleration due to gravity, β the thermal expansion coefficient, and ν the kinematic viscosity of the ambient air. The relative role of buoyant and inertial forces in the flow has been found to be well described by comparing the relative influence of these two parameters, often expressed as Gr_x/Re_x^a, with a varying constant a (Imura et al., 1978; Miller et al., 2017). A non-dimensional flame-fuel contact frequency f_C^+ is proposed based on a characteristic gas fuel flow rate and the downstream distance, with u^+ = (gṁ''D/ρ_a)^(1/3), where f_C is the raw frequency obtained from the heat flux gauge, L^+ is the downstream distance from the measuring point to the leading edge of the burner, chosen as the characteristic length scale, u^+ is the characteristic fuel velocity based on the mass flow rate, ṁ'' is the mass flow rate, D the burner hydraulic diameter, and ρ_a the air density.
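To make the non-dimensionalization concrete, the sketch below evaluates the local Richardson number Ri_x = Gr_x/Re_x² and a non-dimensional contact frequency. The code assumes the Strouhal-like combination f_C^+ = f_C·L^+/u^+ with u^+ = (gṁ''D/ρ_a)^(1/3); this form is an assumption consistent with the quantities defined above rather than a formula quoted from the paper, and the numerical inputs (air properties, temperatures, gauge locations) are illustrative, not measured values.

```python
import numpy as np

G = 9.81            # m/s^2
NU_AIR = 1.6e-5     # m^2/s, kinematic viscosity of ambient air (assumed)
RHO_AIR = 1.2       # kg/m^3 (assumed)
T_AMB = 293.0       # K (assumed ambient temperature)

def richardson_x(u_wind, x, t_hot, beta=1.0 / T_AMB, nu=NU_AIR):
    """Local Ri_x = Gr_x / Re_x^2 at downstream distance x (m)."""
    gr = G * beta * (t_hot - T_AMB) * x**3 / nu**2
    re = u_wind * x / nu
    return gr / re**2

def f_contact_nondim(f_c, x, m_dot_pp, d_hyd, rho=RHO_AIR):
    """Assumed non-dimensional contact frequency f_C^+ = f_C * L^+ / u^+,
    with u^+ = (g * m'' * D / rho_a)^(1/3)."""
    u_plus = (G * m_dot_pp * d_hyd / rho) ** (1.0 / 3.0)
    return f_c * x / u_plus

if __name__ == "__main__":
    # Illustrative numbers only (not measured values from the experiments).
    for x in (0.055, 0.11, 0.22, 0.33):           # gauge locations (m)
        ri = richardson_x(u_wind=1.5, x=x, t_hot=900.0)
        print(f"x = {x:.3f} m: Ri_x ≈ {ri:.2f}")
    print("f_C^+ ≈", round(f_contact_nondim(6.0, 0.11, 0.01, 0.083), 2))
```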
Note that this non-dimensional flame-fuel contact frequency is not a typical form of the Strouhal number, such as those previously defined in pool fire studies. St–Fr correlations have been found to describe the pool fire puffing frequency under stagnant conditions, where only natural convection controls the flame behavior, and the length scale chosen for those studies was the pool diameter. This appears to be more of a global instability of the system driven by buoyancy. In our study, we introduce forced convection (wind), and the length scale is chosen as the distance from the measuring point to the leading edge of the burner, where the thermal boundary layer starts to develop. The length scale chosen in this paper follows our previous work on the effect of forced and natural convection on the heat flux distribution in wind-driven line fires (Tang et al., 2017a). In Figure 8 the local Ri_x is plotted against the non-dimensional frequency. When Ri_x is smaller than 1, f_C^+ varies around 0.7, ranging from 0.55 to 0.8; however, after Ri_x reaches 1, f_C^+ starts to decrease with Ri_x. A piece-wise function was obtained from a correlation with the experimental data to describe the local frequency trend with Ri_x. Equation (10) indicates that, in a wind-driven fire, the flame-fuel contact frequency before the local Ri_x reaches 1 remains unchanged, fluctuating around f_C^+ = 0.7. After Ri_x reaches 1, which means that natural convection and forced convection approximately balance each other, f_C^+ decreases with Ri_x in a power-law trend. The correlations are provided here to aid in understanding the trend of the local flame-fuel contact frequency as it changes in different Ri_x regimes. Within the two Ri_x regimes representing lifted and attached flames, the data are still scattered to some degree. Further investigations are needed to look into each of these regimes and isolate the parameters related to this scatter, which may include the fuel heat-release rate, the geometry of the burner, etc., to obtain a full understanding of this relationship. This investigation revealed multiple patterns of movement within a wind-driven flame resulting from different forces controlling the competition between buoyancy and forward momentum along the length of the flame. Neither puffing pool fires nor jets in cross flow correctly describe this phenomenon. The forward motion is more "messy" than the up-and-down contact of the flame. This likely occurs due to a competition between the momentum-driven wind and a counterclockwise recirculation zone at the flame front, a buoyant instability similar to puffing pool fires, or a combination of the two. It is also thought to be more affected by the transport of stochastic turbulent structures. The up-and-down motion of the flame, described by f_C^+ and Ri_x, is seen in two regions, similar to previous studies investigating the heat flux downstream of the burner. The changes there were attributed to attachment and liftoff of the flame, which appears to be occurring here as well. It is near the end of this attachment region that the highest frequencies are observed, indicating that this is also an inflection point where an unstable transition between attachment and liftoff occurs. The way in which the flame moves, both in the forward region ahead of the attachment region and close within it, may have important implications for fire spread modeling.
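The piece-wise behavior described above can be sketched as a simple regime classification on the local Richardson number, taken here as Ri_x = Gr_x/Re_x^2 (the conventional definition). Only the plateau value f_C^+ ≈ 0.7 comes from the text; the decay exponent of the Ri_x > 1 branch and the flow properties below are placeholders.

```python
# Sketch: local Richardson-number regime classification for the flame-fuel
# contact frequency. Ri_x is taken as Gr_x / Re_x**2 (conventional definition);
# the exponent of the Ri_x > 1 branch is a placeholder, only the plateau value
# f_C+ ~ 0.7 is stated in the text.
import numpy as np

g, beta, nu = 9.81, 1.0 / 293.0, 1.5e-5   # ambient air properties (assumed)
T_h, T_inf = 900.0, 293.0                 # hot gas / ambient temperatures [K] (assumed)

def richardson(U, L):
    """Local Gr_x / Re_x^2 at downstream distance L with wind speed U."""
    Gr = g * beta * (T_h - T_inf) * L**3 / nu**2
    Re = U * L / nu
    return Gr / Re**2

def f_c_plus(Ri, exponent=-0.5):          # exponent is hypothetical
    """Piece-wise non-dimensional contact frequency: plateau, then power-law decay."""
    return 0.7 if Ri < 1.0 else 0.7 * Ri**exponent

for L in (0.055, 0.11, 0.20):             # downstream distances [m]
    Ri = richardson(U=1.5, L=L)
    print(f"L = {L:.3f} m, Ri_x = {Ri:.2f}, f_C+ ~ {f_c_plus(Ri):.2f}")
```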
Current flame spread models assume either a constant heat flux for some distance ahead of the burning region, which heats unignited fuels, or a profile of decaying heat flux that is constant with time. Both approaches neglect the time-dependence of heating, which becomes increasingly important when fine fuels primarily carry the fire, such as in wildland fires. CONCLUSIONS Experiments were conducted on a variety of wind-driven line fires in which the intermittent behavior of the flame was studied. Both the flame forward-pulsation frequency and the flame-fuel contact frequency were independently measured. Trends in these quantities were reviewed and non-dimensional scalings proposed for each. It was found that the flame forward-pulsation frequency, f_F, can be well correlated and predicted by a non-dimensional parameter, Fr·Q*^(1/2), in a power-law trend. The mechanism for this forward pulsation has been found to be related to the competition between wind momentum and flame buoyancy. For the flame-fuel contact frequency, which describes the local heating process of the flame to unburnt fuels along the flame attachment and intermittent regions, a piece-wise function was found with the local Ri_x, indicating that when Ri_x < 1, the non-dimensional flame contact frequency f_C^+ remains approximately constant, and when Ri_x > 1, it decreases with Ri_x. The description of the global flame forward-pulsation frequency and the local flame-fuel contact frequency will help to explain wildland fuel ignition and flame spread in the future. DATA AVAILABILITY All datasets generated for this study are included in the manuscript and/or the supplementary files. AUTHOR CONTRIBUTIONS WT and MG designed and conducted the tests. WT, MF, MG, and SM wrote the paper.
5,849.2
2019-06-20T00:00:00.000
[ "Engineering", "Environmental Science", "Physics" ]
Linear-time computation of minimal absent words using suffix array Background An absent word of a word y of length n is a word that does not occur in y. It is a minimal absent word if all its proper factors occur in y. Minimal absent words have been computed in genomes of organisms from all domains of life; their computation also provides a fast alternative for measuring approximation in sequence comparison. There exists an O(n)-time and O(n)-space algorithm for computing all minimal absent words on a fixed-sized alphabet based on the construction of suffix automata (Crochemore et al., 1998). No implementation of this algorithm is publicly available. There also exists an O(n^2)-time and O(n)-space algorithm for the same problem based on the construction of suffix arrays (Pinho et al., 2009). An implementation of this algorithm was also provided by the authors and is currently the fastest available. Results Our contribution in this article is twofold: first, we bridge this unpleasant gap by presenting an O(n)-time and O(n)-space algorithm for computing all minimal absent words based on the construction of suffix arrays; and second, we provide the respective implementation of this algorithm. Experimental results, using real and synthetic data, show that this implementation outperforms the one by Pinho et al. The open-source code of our implementation is freely available at http://github.com/solonas13/maw. Conclusions Classical notions for sequence comparison are increasingly being replaced by other similarity measures that refer to the composition of sequences in terms of their constituent patterns. One such measure is the minimal absent words. In this article, we present a new linear-time and linear-space algorithm for the computation of minimal absent words based on the suffix array. Background Sequence comparison is an important step in many important tasks in bioinformatics.
It is used in many applications; from phylogenies reconstruction to the reconstruction of genomes. Traditional techniques for measuring approximation in sequence comparison are based on the notions of distance or of similarity between sequences; and these are computed through sequence alignment techniques. An issue with using alignment techniques is that they are computationally expensive: they require quadratic time in the length of the sequences. Moreover, in molecular taxonomy and phylogeny, for instance, whole-genome alignment proves both *Correspondence<EMAIL_ADDRESS>1 Department of Informatics, King's College London, The Strand, WC2R 2LS London, UK Full list of author information is available at the end of the article computationally expensive and hardly significant. These observations have led to increased research into alignment free techniques for sequence comparison. A number of alignment free techniques have been proposed: in [1], a method based on the computation of the shortest unique factors of each sequence is proposed; other approaches estimate the number of mismatches per site based on the length of exact matches between pairs of sequences [2]. Thus standard notions are gradually being complemented (or even supplanted) by other measures that refer, implicitly or explicitly, to the composition of sequences in terms of their constituent patterns. One such measure is the notion of words absent in a sequence. A word is an absent word of some sequence if it does not occur in the sequence. These words represent a type of negative information: information about what does not occur in the sequence. Noting the words which do occur in one sequence but do not occur in another can be used to detect mutations or other biologically significant events. Given a sequence of length n, the number of absent words of length at most n can be exponential in n, meaning that using all the absent words for sequence comparison is more expensive than alignments. However, the number of certain subsets of absent words is only linear in n. An absent word of a sequence is a shortest absent word if all words shorter than it do occur in the sequence. An O(mn)-time algorithm for computing shortest absent words was presented in [3], where m is a user-specified threshold on the length of the shortest absent words. This was later improved by [4], who presented an O(n log log n)-time algorithm for the same problem. This has been further improved and an O(n)time algorithm was presented in [5]. A minimal absent word of a sequence is an absent word whose proper factors all occur in the sequence. Notice that minimal absent words are a superset of shortest absent words [6]. An upper bound on the number of minimal absent words is O(σ n) [7,8], where σ is the size of the alphabet. This suggests that it may be possible to compare sequences in time proportional to their lengths, for a fixed-sized alphabet, instead of proportional to the product of their lengths [1]. Theory and some applications of minimal absent words can be found in [9]. Recently, there has been a number of biological studies on the significance of absent words. The most comprehensive study on the significance of absent words is probably [10]; in this, the authors suggest that the deficit of certain subsets of absent words in vertebrates may be explained by the hypermutability of the genome. It was later found in [11] that the compositional biases observed in [10] for vertebrates are not uniform throughout different sets of minimal absent words. 
Moreover, the analyses in [11] support the hypothesis of the inheritance of minimal absent words through a common ancestor, in addition to lineage-specific inheritance, only in vertebrates. In [12], the minimal absent words in four human genomes were computed, and it was shown that, as expected, intra-species variations in minimal absent words were lower than inter-species variations. Minimal absent words have also been used for phylogenies reconstruction [13]. From an algorithmic perspective, an O(n)-time and O(n)-space algorithm for computing all minimal absent words on a fixed-sized alphabet based on the construction of suffix automata was presented in [7]. An alternative O(n)-time solution for finding minimal absent words of length at most , such that = O(1), based on the construction of tries of bounded-length factors was presented in [13]. A drawback of these approaches, in practical terms, is that the construction of suffix automata (or of tries) may have a large memory footprint. Due to this, an important problem is to be able to compute the minimal absent words of a sequence without the use of data structures such as the suffix automaton. To this end, the computation of minimal absent words based on the construction of suffix arrays was considered in [6]; although fast in practice, the worstcase runtime of this algorithm is O(n 2 ). Alternatively, one could make use of the succinct representations of the bidirectional BWT, recently presented in [14], to compute all minimal absent words in time O(n). However, an implementation of these representations was not made available by the authors; and it is also rather unlikely that such an implementation will outperform an O(n)-time algorithm based on the construction of suffix arrays. Our contribution In this article, we bridge this unpleasant gap by presenting the first O(n)-time and O(n)-space algorithm for computing all minimal absent words of a sequence of length n based on the construction of suffix arrays. In addition, we provide the respective implementation of this algorithm. This implementation is shown to be more efficient than existing tools, both in terms of speed and memory. Definitions and notation To provide an overview of our result and algorithm, we begin with a few definitions. Let y = y[0] y [1] . .y[n − 1] be a word of length n = |y| over a finite ordered alphabet of size σ = | | = O(1). We denote by y[i. .j] = y[i] . .y[j] the factor of y that starts at position i and ends at position j and by ε the empty word, word of length 0. We recall that a prefix of y is a factor that starts at position 0 (y[0. .j]) and a suffix is a factor that ends at position n − 1 (y[i. .n − 1]), and that a factor of y is a proper factor if it is not the empty word or y itself. Let x be a word of length 0 < m ≤ n. We say that there exists an occurrence of x in y, or, more simply, that x occurs in y, when x is a factor of y. Every occurrence of x can be characterised by a starting position in y. Thus we say that In this article, we consider the following problem: MINIMALABSENTWORDS Input: a word y on of length n Output: for every minimal absent word x of y, one tuple Algorithm MAW In this section, we present algorithm MAW, an O(n)-time and O(n)-space algorithm for finding all minimal absent words in a word of length n using arrays SA and LCP. We first give an example and explain how we can characterise the minimal absent words; then we introduce how their computation can be done efficiently by using arrays SA and LCP. 
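Before the detailed steps, the two arrays just mentioned can be illustrated on a small example. The sketch below is a naive O(n^2 log n) construction of SA and LCP for illustration only; the actual implementation builds both arrays in linear time.

```python
# Sketch: naive suffix array (SA) and LCP array construction, only to illustrate
# the two arrays the algorithm scans. This is NOT the linear-time construction
# used by the MAW implementation.
def suffix_array(y):
    return sorted(range(len(y)), key=lambda i: y[i:])

def lcp_array(y, sa):
    def lcp(a, b):
        k = 0
        while k < len(a) and k < len(b) and a[k] == b[k]:
            k += 1
        return k
    # LCP[0] = 0 by convention; LCP[i] = lcp of the suffixes at SA[i-1] and SA[i]
    return [0] + [lcp(y[sa[i - 1]:], y[sa[i]:]) for i in range(1, len(sa))]

y = "AABABABB"
sa = suffix_array(y)
lcp = lcp_array(y, sa)
for i, (s, l) in enumerate(zip(sa, lcp)):
    print(f"i={i}  SA={s}  LCP={l}  suffix={y[s:]}")
```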
Finally, we present in detail the two main steps of the algorithm. Intuitively, the idea is to look at the occurrences of a factor w of y and, in particular, at the letters that precede and follow these occurrences. If we find a couple (a, b), a, b ∈ , such that aw and wb occur in y, but awb does not occur in y, then we can conclude that awb is a minimal absent word of y. For an illustration inspect Figure 1. For example, let us consider the word y = AABABABB: • factor w = AB occurs at: -position 1 preceded by A and followed by A -position 3 preceded by B and followed by A -position 5 preceded by B and followed by B We see that Aw occurs and wB occurs as well but AwB does not occur in y, so AABB is a minimal absent word of y. • factor w = BA occurs at: -position 2 preceded by A and followed by B -position 4 preceded by A and followed by B We cannot infer a minimal absent word. is an absent word whose proper factors all occur in y. Among them, Figure 2); we will focus on these two factors to characterise the minimal absent words. To do so, we will consider each occurrence of x 1 and x 2 , and construct the sets of letters that occur just before: Lemma 1. Let x and y be two words. Then x is a minimal absent word of y if and only if x[0] is an element of Proof. (⇒) Let x 1 be a factor of y, x 2 be the longest proper prefix of x 1 , and B 1 (x 1 ) and B 2 (x 1 ) the sets defined above. Further let p be a letter that is in B 2 (x 1 ) but not in B 1 (x 1 ). Then, there exists a starting position j of an occurrence of x 2 such that y[j − 1] = p, so the word px 2 occurs at position j − 1 in y. p is not in B 1 (x 1 ) so px 1 does not occur in x and is therefore an absent word of y. x 1 and px 2 are factors of y, so all the proper factors of px 1 occur in y, thus px 1 is a minimal absent word of y. . Its longest proper suffix, x 1 occurs as well in y, but x = x[0] x 1 is an absent word of y so it does not occur in y and x[0] is not in B 1 (x 1 ). Lemma 2. Let x be a minimal absent word of length m of word y of length n. Then there exists an integer Proof. Let j be the starting position of an occurrence of x[0. .m − 2] in y and k the starting position of an occurrence of x 1 in y. The suffixes y[j + 1. .n − 1] and y[k. .n − 1] share x 2 = x [1. .m − 2] as a common prefix. As x is an absent word of y, this common prefix cannot be extended so x 2 is the longest common prefix of those suffixes. By using iSA, the inverse suffix array, we For an illustration inspect . We just need to construct the sets B 1 (S 2i ), B 2 (S 2i ) and B 1 (S 2i+1 ), B 2 (S 2i+1 ), where B 1 (S j ) (resp. B 2 (S j )) is the set of letters that immediately precede an occurrence of the factor S j (resp. the longest proper prefix of S j ), for all j in [0 : 2n − 1]. Then, by Lemma 1, the difference between B 2 (S j ) and B 1 (S j ), for all j in [0 : 2n − 1], gives us all the minimal absent words of y. Thus the important computational step is to compute these sets of letters efficiently. To do so, we visit twice arrays SA and LCP using another array denoted by B 1 (resp. B 2 ) to store set B 1 (S j ) (resp. B 2 (S j )), for all j in During the first pass, we visit arrays SA and LCP from top to bottom. For each i ∈ [0 : n − 1], we store in positions 2i and 2i+1 of B 1 (resp. B 2 ) the set of letters that immediately precede occurrences of S 2i and S 2i+1 (resp. their longest proper prefixes) whose starting positions appear before position i in SA. 
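As a small cross-check of the characterization illustrated with y = AABABABB above (and independent of the SA/LCP passes), the following brute-force sketch enumerates minimal absent words directly from the definition; it is quadratic in the word length and meant only as an illustration, not as the linear-time algorithm of this article.

```python
# Sketch: brute-force minimal absent words via the characterization used in the
# example above (aw and wb occur in y, but awb does not). Illustrative only.
def minimal_absent_words(y):
    alphabet = sorted(set(y))
    # All factors of y, including the empty word (needed for length-2 MAWs)
    factors = {""} | {y[i:j] for i in range(len(y)) for j in range(i + 1, len(y) + 1)}
    maws = set()
    for w in factors:
        for a in alphabet:
            for b in alphabet:
                if a + w in factors and w + b in factors and a + w + b not in factors:
                    maws.add(a + w + b)
    return sorted(maws, key=lambda x: (len(x), x))

y = "AABABABB"
print(minimal_absent_words(y))   # contains 'AABB', as in the worked example
```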
During the second pass, we go bottom up to complete the sets, which are already stored, with the letters preceding the occurrences whose starting positions appear after position i in SA. In order to be efficient, we will maintain a stack structure, denoted by LifoLCP, to store the LCP values of the factors that are prefixes of the one we are currently visiting. Top-down pass Each iteration of the top-down pass consists of two steps. In the first step, we visit LifoLCP from the top and for each LCP value read we set to zero the corresponding element of Interval; then we remove this value from the stack. We stop when we reach a value smaller or equal to LCP[i]. We do this as the corresponding factors are not prefixes of y[SA[i] . .n − 1], nor will they be prefixes in the remaining suffixes. We push at most one value onto the stack LifoLCP per iteration, so, in total, there are n times we will set an element of Interval to zero. This step requires time and space O(nσ ). For the second step, we update the elements that correspond to factors in the suffix array with an LCP value less than LCP[i]. To do so, we visit the stack LifoLCP top-down and, for each LCP value read, we add the letter y[SA[i] −1] to the corresponding element of Interval until we reach a value whose element already contains it. This ensures that, for each value read, the corresponding element of Interval has no more than σ letters added. As we consider at most Bottom-up pass Intuitively, the idea behind the bottom-up pass is the same as in the top-down pass except that in this instance, as we start from the bottom, the suffix y[SA[i] . .n − 1] can share more than its prefix of length LCP[i] with the previous suffixes in SA. Therefore we may need the elements of Interval that correspond to factors with an LCP value greater than LCP[i] to correctly compute the arrays B 1 and B 2 . To achieve this, we maintain another stack LifoRem to copy the values from LifoLCP that are greater than LCP[i]. This extra stack allows us to keep in LifoLCP only values that are smaller or equal to LCP[i] without losing the additional information we require to correctly compute B 1 and B 2 . At the end of the iteration, we will set to zero each element corresponding to a value in LifoRem and empty the stack. Thus to set an element of Interval to zero requires two operations more than in the first pass. As we consider at most n values, this step requires time and space O(nσ ). Another difference between the top-down and bottomup passes is that in order to retain the information computed in the first pass, the second step is performed for each letter in B 1 [2i]. As, for each LCP value read, we still add a letter only if is not already contained in the corresponding element of Interval, no more than σ letters are added. Thus this step requires time and space O(nσ ). For an example, see Figure 5. Once we have computed arrays B 1 and B 2 , we need to compare each element. If there is a difference, by Lemma 1, we can construct a minimal absent word. For an example, see Figure 6. To ensure that we can report the minimal absent words in linear time, we must be able to report each one in constant time. To achieve this, we can represent them as a tuple < a, (i, j) >, where for some word x of length m ≥ 2 that is a minimal absent word of y, the following holds: Note that this representation uniquely identifies a minimal absent word and conversion from this encoding to the actual minimal absent word is trivial. Lemma 2 ensures us to be exhaustive. 
Therefore we obtain the following result. Results and discussion The experiments were conducted on a Desktop PC using one core of Intel Xeon E5540 CPU at 2.5 GHz and 32GB of main memory under 64-bit GNU/Linux. All programs were compiled with gcc version 4.6.3 at optimisation level 3 (-O3). Time and memory measurements were taken using the GNU/Linux time command. Implementation We implemented algorithm MAW as a program to compute all minimal absent words of a given sequence. The program was implemented in the C programming language and developed under GNU/Linux operating system. It takes as input arguments a file in (Multi)FASTA format and the minimal and maximal length of minimal absent words to be outputted; and then produces a file with all minimal absent words of length within this range as output. The implementation is distributed under the GNU General Public License (GPL), and it is available at http://github.com/solonas13/maw, which is set up for maintaining the source code and the man-page documentation. Datasets We considered the genomes of thirteen bacteria and four case-study eukaryotes (Table 1), all obtained from the NCBI database (ftp://ftp.ncbi.nih.gov/genomes/). Correctness To test the correctness of our implementation, we compared it against the implementation of Pinho et al. [6], which we denote here by PFG. In particular, we counted the number of minimal absent words, for lengths 11, 14, 17, and 24, in the genomes of the thirteen bacteria listed in Table 1. We considered only the 5 → 3 DNA strand. Species Abbreviation Genome reference Bacillus anthracis strain Ames Ba NC003997 Xanthomonas campestris strain 8004 Xc NC007086 11,14,17, and 24 respectively. Identical number of minimal absent words for these lengths were also reported by PFG, suggesting that our implementation is correct. Efficiency To evaluate the efficiency of our implementation, we compared it against the corresponding performance of PFG, which is currently the fastest available implementation for computing minimal absent words. Notice that this evaluation depends heavily on the suffix array con- MAW also scales linearly and is the fastest in all cases. It accelerates the computations by more than a factor of 2, when the sequence length grows, compared to PFG. Figure 7 corresponds to the measurements in Table 4: it plots chromosome sequence length versus elapsed time for computing all minimal absent words in the genomes of Homo Sapiens and Mus musculus using MAW and PFG. MAW also reduces the memory requirements by a factor of 5 compared to PFG. The maximum allocated memory (per task) was 6GB for MAW and 30GB for PFG. To further evaluate the efficiency of our implementation, we compared it against the corresponding performance of PFG using synthetic data. As basic dataset we used chromosome 1 of Hs. We created five instances S 1 , S 2 , S 3 , S 4 , and S 5 of this sequence by randomly choosing 10%, 20%, 30%, 40%, and 50% of the positions, respectively, and randomly replacing the corresponding letters to one of the four letters of the DNA alphabet. We computed all minimal absent words for each instance. We considered both the 5 → 3 and the 3 → 5 DNA strands. Table 5 depicts elapsed-time comparisons of MAW and PFG. MAW is the fastest in all cases. Conclusions In this article, we presented the first O(n)-time and O(n)space algorithm for computing all minimal absent words based on the construction of suffix arrays. In addition, we provided the respective implementation of this algorithm. 
Experimental results show that this implementation outperforms existing tools, both in terms of speed and memory. In a typical application, one would be interested in computing minimal absent words in the whole genome for a set of species under study [11,12]. Hence, we consider the improvements described in this article to be of great importance. Our immediate target is twofold: first, explore the possibility of implementing the presented algorithm for symmetric multiprocessing systems; and second, devise and implement a fast space-efficient solution for this problem based on the construction of compressed full-text indexes. Availability and requirements • Project name: MAW • Project home page: http://github.com/solonas13/ maw • Operating system: GNU/Linux • Programming language: C • Other requirements: compiler gcc version 4.6.3 or higher • License: GNU GPL • Any restrictions to use by non-academics: licence needed
5,219.2
2014-06-24T00:00:00.000
[ "Computer Science" ]
Analyzing the impact of feature selection methods on machine learning algorithms for heart disease prediction The present study examines the role of feature selection methods in optimizing machine learning algorithms for predicting heart disease. The Cleveland Heart disease dataset with sixteen feature selection techniques in three categories of filter, wrapper, and evolutionary were used. Then seven algorithms Bayes net, Naïve Bayes (BN), multivariate linear model (MLM), Support Vector Machine (SVM), logit boost, j48, and Random Forest were applied to identify the best models for heart disease prediction. Precision, F-measure, Specificity, Accuracy, Sensitivity, ROC area, and PRC were measured to compare feature selection methods' effect on prediction algorithms. The results demonstrate that feature selection resulted in significant improvements in model performance in some methods (e.g., j48), whereas it led to a decrease in model performance in other models (e.g. MLP, RF). SVM-based filtering methods have a best-fit accuracy of 85.5. In fact, in a best-case scenario, filtering methods result in + 2.3 model accuracy. SVM-CFS/information gain/Symmetrical uncertainty methods have the highest improvement in this index. The filter feature selection methods with the highest number of features selected outperformed other methods in terms of models' ACC, Precision, and F-measures. However, wrapper-based and evolutionary algorithms improved models' performance from sensitivity and specificity points of view. www.nature.com/scientificreports/procedure that must be conducted with care and precision.It is typically based on the knowledge and experience of the physician, which, if not done properly, can result in significant financial and life-altering expenses for the patient 6 .However, not all physicians possess the same expertise in subspecialties, and the geographical distribution of qualified specialists is uneven.As a result of these multiple factors used to evaluate the diagnosis of the heart attack, physicians typically make the diagnosis based on the patient's present test results 7 .Additionally, doctors review prior diagnoses made on other patients who have similar test results.These intricate procedures are, however, of little importance 8 . 
To accurately diagnose heart attack patients, a physician must possess expertise and experience.Consequently, the obligation to leverage the knowledge and expertise of various professionals and the clinical screening data collected in databases to facilitate the analysis process is seen as a beneficial framework that integrates clinical selection aids and computer-aided patient records.Furthermore, it can reduce treatment errors, enhance patient safety, eliminate unnecessary conflicts, and enhance patient outcomes.Machine learning has been extensively discussed in the medical field, particularly for the diagnosis and treatment of diseases 7 .Recent research has highlighted the potential of machine learning to improve accuracy and diagnostic time.AI-based tools constructed with machine learning have become increasingly effective diagnostic tools in recent years 9,10 .Machine learning algorithms are highly effective in predicting the outcome of the data in a large amount.Data mining is a process of transforming large amounts of raw data into data that will be highly useful for decision-making and forecasting 11 .By producing more precise and timely diagnoses, machine learning technology has the potential to transform the healthcare system and provide access to quality healthcare to unprivileged communities worldwide.Machine learning has the potential to shorten the time it takes for patients to meet with their physicians, as well as to reduce the need for unnecessary diagnostic tests and enhance the precision of diagnoses.Preventive interventions can significantly reduce the rate of complex diseases 1,2 .As a result, many clinicians have proposed increasing the identification of patients through the use of Machine Learning and predictive models to reduce mortality and enhance clinical decision-making.Machine learning can be used to detect the risk of cardiovascular disease and provide clinicians with useful treatments and advice for their patients 12 . 
In addition to the various cardiovascular disorders, there are pathological alterations that take place in the heart and the blood vessels.Data classification can enable the development of tailored models and interventions that reduce the risk of cardiovascular disease.These analyses assist medical professionals in re-evaluating the underlying risks and, even if a prior vascular disease has occurred, can provide more efficient solutions and treatments to improve the quality of life and extend life expectancy 13 , and reduce mortality.An expert can use supervised learning to answer the following: whether a medical image contains a malignant tumor or a benign tumor.Is a patient with heart disease likely to survive?Is there a risk of disease progression?Is it possible for a person with heart disease to develop heart disease with existing factors?These and other questions can be answered using supervised learning techniques and classification modeling 14,15 .Classification is one of the most common methods used in data mining.It divides data into classes and allows one to organize different kinds of data, from complex data to simple data.Classification is one of the supervised learning methods in data mining.The main goal of classification is to connect the input variables with the target variables and make predictions based on this relationship.The classification techniques used in this study ranged from decision tree to support vector machines (SVM) and random forest (Random Forest) 16 .In a study conducted by Melillo and colleagues, the CART algorithm was found to have the highest accuracy of 93.3% among the other algorithms.This algorithm was used to determine which patients had congestive heart disease, and which patients were at lower risk 17 . Although Machine Learning (ML) is essential for the diagnosis of a wide range of diseases, the production of large-scale data sets and the presence of numerous non-essential and redundant features in these data sets is a significant deficiency in ML algorithms 8 .Furthermore, in many cases, only a small number of features are essential and pertinent to the objective.As the rest of the features are disregarded as trivial and redundant, the performance and accuracy of the classification are adversely affected.Therefore, it is essential to select a compact and appropriate subset of the major features to enhance the classification performance, as well as overcome the "curse of dimensionality".The purpose of feature selection techniques is to assess the significance of features.The aim is to reduce the number of inputs for the requirements that are most pertinent to the model.In addition to reducing the number of inputs, feature selection also significantly reduces the processing time.Even if several feature selection techniques have been employed in decision support systems in medical datasets; there are always improvements to be made 18 . Previous research on predicting heart disease in two broad categories has focused on either optimizing algorithms based on various machine learning techniques or attempting to optimize algorithms by utilizing various feature selection techniques.However, it has been less discussed to compare the impact of various feature selection techniques on model performance.This study aims to compare the performance of three different feature selection techniques (filter, wrapper, and evolutionary) in machine learning models for predicting heart disease. 
This paper contains the following significant points: • The present study examines the contributions of different feature selection techniques, filter, wrapper, and evolutionary methods (16 methods) effect on machine-learning algorithms for heart disease prediction.• In the subsequent phase, all sixteen feature selection techniques were employed with Bayes net, Naïve Bayes (BN), Multivariate Linear Model (MLM), Support Vector Machine (SVM), logit boost, j48, and Random Forest.• The results were then compared according to the assessment criteria of Precision, F-measure, Specificity, Accuracy, Sensitivity, ROC area, and PRC.• The most important and significant result of the present study is a comprehensive comparison of a variety of feature selection techniques on machine algorithms for the prediction of heart diseases.The primary and most significant outcome of the study was that, despite the filter methods selecting more features, they were still able to enhance the accuracy factors and precision, as well as F-measures, when applied to machine learning algorithms.• The most significant improvements in factors are associated with a + 2.3 increase in accuracy after implemen- tation of SVM + CFS/information gain/symmetry uncertainty feature selection methods, as well as an + 2.2 improvement in the F-measure factor derived from SVM + CFS/information gain/symmetry uncertainty.• The results showed that although feature selection in some algorithms leads to improved performance, in others it reduces the performance of the algorithm. This paper is structured as follows: Following the introduction in section "introduction", the related literature is reviewed in section "related literature".Research methods are reviewed in section "methodology".The results of the research are presented in section "results".Subsequently, the results of the study are discussed in section "discussion".Finally, the conclusions of the study are presented in section "conclusion".Lastly, the limitations and future scope are discussed in Section "Limitation and future scope". Related literature The Cleveland UCI dataset contains a number of related studies on the prediction of heart disease.These studies fall into two broad categories: the first, which compares algorithms based on classic or deep learning, and the second, which compares the performance of algorithms based on feature selection. Premsmith et al. presented a model to detect heart disease through Logistic Regression and Neural Network models using data mining techniques in their study.The results demonstrated logistic regression with an accuracy of 91.65%, a precision of 95.45%, a recall of 84%, and F-Measure of 89.36%.This model outperformed the neural network in terms of performance 3 .In a study to enhance heart attack prediction accuracy through ensemble classification techniques, Latha et al. concluded that a maximum of 7% accuracy improvement can be expected from ensemble classification for poor classifiers and those techniques such as bagging and boosting will be effective in increasing the prediction accuracy of poor classifiers 16 .Chaurasia et al. conducted a study to evaluate the accuracy of the detection of heart disease using Naive Bayes (Naive), J48, and bagging.The results indicated that Naive berries provided an accuracy of 82.31%, J48 provided an accuracy of 84.35%, and bagging provided an accuracy of 85.03%.Bagging had a greater predictive power than Naive Bayes 19 . Mienye et al. 
presented a deep learning strategy for predicting heart disease in a study utilizing a Particle Swarm Optimization Stacked Semiconductor Auto encoder (SSAE).This research proposes an approach for predicting heart diseases through the use of a stacked SSAE auto encoder that has a softmax layer.The softmax layer is a layer in which the last hidden layer of a sparse Auto encoder is connected to a softmax classifier, resulting in the formation of a SSAE network.This network is then refined with the implementation of the PSO algorithm, resulting in the development of feature learning and enhanced classification capabilities.The application of these algorithms to the Cleveland test yielded the following results: 0.961 accuracy, 0.930 precision, 0.988 sensitivity, and 0.958 F-measure 2 . In a research project to assess the predictive power of MLP and PSO algorithms for the prediction of cardiac disease, Batainh et al. proposed an algorithm with an accuracy of 0.846 percent, an AUC of 0.848 percent, a precision of 0.808 percent, a recall of 0.883 percent, and an F1 score of 0.844.This algorithm outperforms other algorithms such as Gaussian NB classifiers, Logistic regression classifiers, Decision tree classifiers, Random forest classifiers, Gradient boosting classifiers, K-nearest neighbors classifiers, XGB classifiers, Extra trees classifiers, and Support vector classifiers, and can be used to provide clinicians with improved accuracy and speed in the prediction of heart disease 5 . In order to enhance the predictive accuracy of heart disease, Thiyagaraj employed SVM, PSO, and a rough set algorithm in a study.To reduce the redundancy of data and enhance the integrity of the data, data was normalized using Z-score.The optimal set was then selected using PSO and the rough set.Finally, the radial basis function-transductive support vector machines (RBF) classifier was employed for the prediction.The proposed algorithm was found to have superior performance compared to other algorithms 7 . A battery of papers focused on the use of classification techniques in the field of cardiovascular disease.These studies employed classification methods to prognosis the onset of disease, to classify patients, and to model cardiovascular data.The classification and regression tree algorithm (CART), a supervised algorithm, was employed in the studies conducted by Ozcan and Peker to prognosis the onset of heart disease and classify the determinants of the disease.The tree rules extracted from this study offer cardiologists a valuable resource to make informed decisions without the need for additional expertise in this area.The outcomes of this research will not only enable cardiologists to make faster and more accurate diagnoses but will also assist patients in reducing costs and improving the duration of treatment.In this study, based on data from 1190 cardiac patients, ST slope and Old peak were found to be significant predictors of heart disease 15 . 
Bhatt et al., in their study based on data from Kaggle datasets and using Random Forest, Decision Tree Algorithms, Multilayer Perception, and XGBOOST classifier, predicted heart disease.In conclusion, the MLP algorithm demonstrated the highest level of accuracy (87.28%) among the other algorithms evaluated 14 .In a study conducted by Khan et al., 518 patients enrolled in two care facilities in Pakistan were predicted to develop heart disease using decision tree (DT), random forest (RF), logistic regression (LR), Naive Bayes (NB), and support algorithms.The most accurate algorithm used to classify heart disease was the Random Forest algorithm, which had an accuracy of 85.01% 20 .This was the best out of the other algorithms, according to a study by Kadhim and colleagues.They looked at a dataset of IEEE-data-port data sources and used a bunch of different algorithms to classify it.The Random Forest algorithm was the most accurate, with an accuracy of 95.4% 21 .In addition to these papers, a further set of studies have explored the application of machine learning to image and signal analysis.Medical images are a critical tool in the diagnosis of a variety of medical conditions, including tumors.Due to the high degree of similarity between radiological images, timely diagnosis may be delayed.Consequently, the utilization of machine learning techniques can lead to an increase in the rate and precision of medical image-based diagnosis.Furthermore, with the growing number and volume of medical images available, the search for similar images and patients with similar complications can further enhance the speed and precision of diagnosis.The WSSENET (weakly supervised similarity assessment network) was a method used to evaluate the similarity of pulmonary radiology images, and it was found to be more accurate in retrieving similar images than prior methods 22 .In this paper 23 , a low-dose CT reconstruction method is proposed, based on prior sparse transform images, to resolve image issues.This method involves the learning of texture structure features in CT images from various datasets, and the generation of noise CT image sets to identify noise artifact features in CT images.The low-dose CT images processed with the enhanced algorithm are also used as prior images to develop a novel iterative reconstruction approach.DPRS is a method employed to expedite the retrieval of medical images within telemedicine systems, resulting in an enhanced response time and precision.Classification and selection of features are also employed for medical photo classification.Deep learning was employed to classify medical images in the study 24 .The adaptive guided bilateral filter was employed to filter the images.In this study, Black Widow Optimization was also employed to select the optimal features.The accuracy rate achieved in this study was 98.8% when Red Deer Optimization was applied to a Gated Deep Relevance Learning network for classification.Metaheuristic approaches have gained increased recognition in the scientific community due to their reduced processing time, robustness, and adaptability 25 .In his study presented a methodology based on a multiobjective symbiotic organism search to solve multidimensional problems.The results of a Feasibility Test and Friedman's Rank Test demonstrated that this method is sufficiently effective in solving complex multidimensional problems with multiple axes.A triangular matching algorithm was used in the study 26 .The method of soft tissue surface feature tracking is 
presented in the study.A comparison of the results of the soft tissue feature tracking method with the results of the convolution neural network was conducted.The result showed that the method of soft tissue feature tracking has a higher degree of accuracy.In a study (Dang et al.), a matching method was presented to overcome the issues of conventional feature matching.The method of matching feature points in various endoscopic video frames was presented as a category, and the corresponding feature points in subsequent frames were compared with the network classifier.The experimental data demonstrated that the feature-matching algorithm based on a convolutional network is efficient due to feature-matching, no rotation displacement, and no scaling displacement.For the initial 200 frames of a video, the matching accuracy reached 90% 27 .In a study, Ganesh et al. used a wrapper method based on the K Nearest Neighborhood (KNN) algorithm to select the best features.In this study, the WSA algorithm was compared with seven metaheuristic algorithms.The results showed that this algorithm was able to reduce 99% of the features in very large datasets without reducing the accuracy and performed 18% better than classical algorithms and 9% better than ensemble algorithms 28 .Priyadarshini et al. conducted a study using metaheuristic algorithms inspired by physics investigated feature selection.The performance of these algorithms were compared using factors such as accuracy, processing cost, suitability, average of selected features and convergence capabilities.The results showed that Equilibrium Optimizer (EO) had a better performance than other algorithms and it was suggested to solve problems related to feature selection 29 . The following is a summary of the findings of the studies comparing the feature selection techniques and the algorithms used in the Cleveland dataset to predict heart diseases (Table1). This group of studies included only a few feature selection techniques mostly filter methods as well as accuracy factor, as indicated in Table 1.However, in this study, sixteen feature selection methods in three groups filters, wrapper, and evolutionary were studied and their impact on all factors-including Precision, F-measure, Specificity, Accuracy, Sensitivity, ROC area, and PRC were measured. Methodology The present study was divided into four general phases, as illustrated in Fig. 1. Once the data had been acquired and preprocessed, sixteen feature selection techniques were applied in three categories: filter, wrapper, and evolutionary methods.Subsequently, the best subset was selected, and seven machine-learning techniques applied.Subsequently, algorithm and feature selection performance were evaluated using various evaluation factors.Since a public dataset was used in this study, informed consent was not obtained.In addition, human subjects were not used in present research.Also, all stages of the research were in Table 1.Related studies with a focus on feature selection effect on heart disease prediction. 
Feature selection method Classification algorithm Evaluation factor Year References Chi-squared and analysis of variance (ANOVA) Dataset The dataset used for the heart disease analysis is the Cleveland Heart disease dataset.This dataset was extracted from UCI Machine Learning Repository and consists of 303 records.This dataset includes a total of 165 individuals with cardiovascular disease and 138 individuals with no cardiovascular case history.The dataset was characterized by 13 attributes for predicting heart disease, with one attribute serving as the final endpoint.Table 2 provides a description of this dataset.Data preprocessing is one of the most critical steps after obtaining the data.Due to the uniformity and global nature of the data set, only the missing value analysis was used as a pre-processing technique, and records with www.nature.com/scientificreports/blank fields were eliminated from the data set.At this stage, the dataset has been filtered for missing data and 6 missing records were removed, leaving 297 records to be processed. Feature selection Feature selection is the process of removing unrelated and repetitive features from a dataset based on an evaluation index to make it more accurate.There are three main types of feature selection methods: filter, wrapper, and embedded 31 .Filtering methods use the general properties of the training data to perform the selection as a step-by-step process independent of an induction algorithm.Filtering methods have lower computational complexity and are better at generalizing.Because filter methods only look at the intrinsic properties of the training samples to evaluate a feature or a group of features, they can be used with a wide range of classifiers 32 .In a wrapper-based method, the selection process involves optimization of a predictor.Unlike a filter method, a wrapper method is tailored to a particular classifier and evaluates the quality of a subset of candidates.As a result, a wrapper method achieves better classification performance than a filter method.In a third-party method, feature selection is performed during the training phase.Embedded methods constitute a subset of overlay methods, which are characterized by a more profound relationship between feature selection and the classifier construction 33 .Feature subsets are formed when the embedded methods are used to construct the classifier 32,33 . In the present study, filter methods were employed alongside wrapper and evolutionary methods (Fitness function: precision + SVM), which are briefly outlined below. Filter method Correlation-based feature selection (CFS): This multivariate filter algorithm ranks feature subsets based on a heuristic evaluation function based on a correlation.The bias function evaluates subsets that correlate with the class and are not correlated with other features.Non-relevant features are disregarded as they will not have a high correlation with the class; additional features should be evaluated as they are highly correlated to one or more other features.The acceptance of a feature is dependent on its ability to predict classes in areas of the sample space that have not previously been predicted by other features 32 . Information gain: This univariate filter is a widely used way of evaluating features.It assigns an order of importance to all features and then determines the necessary threshold value.In this example, the threshold value is determined by selecting features that receive positive information gain 32 . 
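As a minimal sketch of the filter approach, the snippet below ranks Cleveland-style features with a mutual-information score (an information-gain-style filter) and keeps the k best features under a chi-squared filter using scikit-learn. The file path, the label column name ('target'), and k are assumptions, and the chi-squared score requires non-negative inputs.

```python
# Sketch: filter-based feature selection on a Cleveland-style dataset using
# scikit-learn. The file path and the 'target' label column are assumptions;
# mutual information serves as an information-gain-style score.
import pandas as pd
from sklearn.feature_selection import SelectKBest, mutual_info_classif, chi2

df = pd.read_csv("cleveland.csv")          # hypothetical path to the 303-record dataset
df = df.dropna()                           # drop records with missing values (303 -> 297)
X, y = df.drop(columns=["target"]), df["target"]

# Rank features by mutual information (information-gain-style filter)
mi = mutual_info_classif(X, y, random_state=0)
ranking = sorted(zip(X.columns, mi), key=lambda p: -p[1])
print("information-gain-style ranking:", ranking)

# Keep the k best features according to a chi-squared filter (non-negative inputs assumed)
selector = SelectKBest(score_func=chi2, k=8).fit(X, y)
print("chi2-selected:", list(X.columns[selector.get_support()]))
```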
Gain ratio: The purpose of the algorithm modified for information gain is to mitigate bias.The algorithm evaluates the number and scope of branches when selecting a feature.By taking into account the internal information of a segment, the algorithm attempts to adjust the information gain 34 . Relief: This method involves selecting a random sample of data and then finding the closest neighbor of that class and its counterpart.The closest neighbor's attribute values are then compared to the sample and the associated scores for each attribute are updated.The logic for each attribute is that it distinguishes between samples from different classes and takes the same value into account for samples that belong to the same class 32 . Symmetrical uncertainty: To determine the relationship between a feature and a class label, symmetric uncertainty is used.The mean normalized mutual benefit of a feature (f), each other feature (n), and the class label reflects the relationship between feature f and other features in a set of features (F) 35 . Wrapper method Forward and backward selection: In a backward elimination model, all features are eliminated and the least important features are removed sequentially.In a forward selection model, no features are eliminated, and the most important features are added sequentially 36 . Naïve Bayes: This algorithm is derived from probability theory to identify the most likely classifications.It utilizes the Bayes formula (Eq. 1) to determine the likelihood of a data record Y having a class label c j 11 . Decision tree: The tree-based technique involves each path beginning at the root of the tree is initiated by a sequence of data separators, and the sequence continues until the result reaches the leaf node.The tree-based technique is, in reality, a hierarchy of knowledge that consists of nodes and connections.Nodes, when used for classification purposes, represent targets 37 . K-Nearest-Neighbor (KNN): It is a classifier and regression model used for classification.As KNNs are typically sample-based (or memory-based) learning schemes, all computational steps in KNNs are postponed until classification.Furthermore, KNNs do not require an explicit training step to construct a classifier 33 . NN: A neural network is a computer model composed of a vast number of interconnected nodes, each of which represents a particular output function, referred to as an activation function.Each node represents a signal, referred to as a weight that passes through the connection between two nodes.The weight corresponds to the memory capacity of the neural network, and the output of the neural network will vary depending on how the nodes are connected, the degree of weight, and the incentive function 38 . 
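A wrapper-style counterpart can be sketched with scikit-learn's sequential selector wrapped around a k-nearest-neighbour classifier, running forward and backward selection with tenfold cross-validation. X and y are assumed to be the preprocessed Cleveland features and labels from the previous sketch, and the number of features to select is arbitrary.

```python
# Sketch: wrapper-based forward and backward feature selection, here wrapped
# around a k-nearest-neighbour classifier with 10-fold cross-validation.
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(n_neighbors=5)

forward = SequentialFeatureSelector(
    knn, n_features_to_select=5, direction="forward", cv=10, scoring="accuracy"
).fit(X, y)
backward = SequentialFeatureSelector(
    knn, n_features_to_select=5, direction="backward", cv=10, scoring="accuracy"
).fit(X, y)

print("forward-selected: ", list(X.columns[forward.get_support()]))
print("backward-selected:", list(X.columns[backward.get_support()]))
```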
SVM: Support vector machines (SVM) are algorithmic extensions of statistical learning theory models that are designed to generate inferences that are consistent with the data.The question of estimating model performance in an unfamiliar data set, taking into account the model's properties and the model's performance in the training set is posed by support vector machines.These machines solve a restricted quadratic optimization problem to find the optimal dividing line between sets.The model generates data, and different kernel functions can be employed to provide varying degrees of linearity and flexibility 39 .www.nature.com/scientificreports/Logistic regression: Logistic regression (or logistic regression analysis) is a statistical technique that involves the prediction of the outcome of a class-dependent variable (or class of variables) from a set of predicted variables.Logistic regression involves the use of a binary dependent variable (or class) with two categories and is primarily used to predict, as well as to calculate, the probability of a given outcome 40 . Evolutionary algorithms They are a type of metaheuristic algorithm based on population that involves the use of a set of solutions in each step of the solution process.This set of solutions is composed of operators that combine/change solutions to incrementally improve/evolve aggregate solutions based on the Proportion uses function.This category includes algorithms such as PSO, ABC, and genetic algorithms 41 . Artificial Bee Colony (ABC): ABC is a hybrid population-based optimization algorithm in which artificial bees act as change operators to refine the solutions to the optimization problem-i-e-of food resources.The objective of the bees is to locate food sources with the primary nectar.In ABC, an artificial bee navigates a multidimensional area and selects nectar resources based on experience and hive companions or based on its location.In addition, some bees fly (explore) and select food sources randomly, without relying on experience.When they locate a source of the primary nectar, they retain their positions.ABC combines local and global search methods to achieve a balance between exploration and utilization of the search space 42 . Genetic algorithm: A genetic algorithm is a type of programming technique that utilizes evolutionary biology techniques, including heredity, mutation, and the principles of Darwin's selection, to find the most appropriate formula to predict or match a pattern.In many cases, genetic algorithms are a suitable substitute for regressionbased prediction methods.Genetic algorithm modeling is a programming approach that utilizes genetic evolution as a tool for problem-solving.Inputs are transformed into solutions through a process model based on genetic evolution, and the solutions are then evaluated as candidates for the fitness function.If the output condition of the problem can be met, the algorithm is terminated.In general; a genetic algorithm is an algorithm that is based on repetition, with most of its parts selected as random processes.It consists of parts of a function of fitting, displaying, selection, and change 43 . 
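A minimal genetic-algorithm sketch for feature-subset selection is given below, with fitness taken as the cross-validated precision of an SVM, loosely following the "fitness function: precision + SVM" setup of this study. Population size, generation count, and mutation rate are illustrative, and X and y are assumed as before.

```python
# Sketch: a minimal genetic algorithm for feature-subset selection. Fitness is
# the cross-validated precision of an SVM on the selected columns; population
# size, generations, and mutation rate are illustrative, not tuned values.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    cols = X.columns[mask.astype(bool)]
    if len(cols) == 0:
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[cols], y,
                           cv=5, scoring="precision").mean()

def ga_select(X, y, pop_size=20, generations=30, p_mut=0.05):
    n = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n))
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        # Binary tournament selection
        winners = [max(rng.choice(pop_size, 2), key=lambda i: scores[i])
                   for _ in range(pop_size)]
        parents = pop[winners]
        # One-point crossover on consecutive pairs
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            cut = rng.integers(1, n)
            children[i, cut:] = parents[i + 1, cut:]
            children[i + 1, cut:] = parents[i, cut:]
        # Bit-flip mutation
        flips = rng.random(children.shape) < p_mut
        children[flips] = 1 - children[flips]
        pop = children
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[scores.argmax()].astype(bool)

best_mask = ga_select(X, y)
print("GA-selected features:", list(X.columns[best_mask]))
```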
Particle swarm optimization (PSO): In particle swarm optimization, each member of the population (each candidate solution) is referred to as a particle. Each particle flies through the search space from its initial position with an initial velocity to locate the most optimal solution. Each particle stores the best position it has reached while searching as its own experience and shares this information with the other particles in its neighborhood, allowing them to identify the positions where they had the greatest success and thus the best position within the neighborhood or the entire search space. The best group experience is taken as the solution [1].

Machine learning algorithms: This study employed a variety of machine learning models, including Bayes net, Naïve Bayes (NB), multilayer perceptron (MLP), Support Vector Machine (SVM), logit boost, J48, and Random Forest. Bayes nets are mathematical models that represent relationships among random variables through conditional probabilities; as classifiers, they evaluate the probability P(c|x) of a discrete class c given the characteristics of an observation x [44]. Random forests are a subset of tree-based models in which each tree predictor is grown independently from a random vector sampled from the same distribution for all trees in the forest. The generalization error of a random forest classifier depends on the correlation between individual trees in the forest and the strength of those trees. J48 is an extension of the classification decision tree algorithm C4.5 that generates binary trees; the constructed tree is then applied to each tuple in the database to classify it [45]. An MLP is a supervised learning approach trained with back-propagation. Because an MLP contains multiple layers of neurons, it can be considered a deep learning approach and is commonly employed to solve supervised learning problems; it has also been used in computational neuroscience and distributed parallel processing research [46]. Logit boost is a boosted classification algorithm based on incremental logistic regression that strives to reduce the logistic loss.
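To make the particle update described at the start of this passage concrete for feature selection, the sketch below implements a binary PSO in which positions are 0/1 feature masks and real-valued velocities are mapped to bit probabilities through a sigmoid. The fitness function, parameter values, and binary encoding are assumptions for illustration, not the study's Matlab implementation.

```python
# Sketch of binary PSO for feature selection: each particle is a 0/1 mask;
# velocities are real-valued and mapped to bit probabilities via a sigmoid.
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def binary_pso(fitness, n_features, n_particles=20, iters=50,
               w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=(n_particles, n_features))   # positions (masks)
    v = rng.normal(size=(n_particles, n_features))           # velocities
    pbest = x.copy()
    pbest_val = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_val.argmax()].copy()                  # best group experience
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = (rng.random(x.shape) < sigmoid(v)).astype(int)     # stochastic bit update
        vals = np.array([fitness(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest.astype(bool)
```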
Evaluation and analysis tools: For data analysis and the identification of significant risk factors, the Waikato Environment for Knowledge Analysis (Weka) version 3.3.4 was used. Evolutionary algorithms were implemented in Matlab 2019b, and machine learning models were implemented in R 3.4.0. The models were validated using tenfold cross-validation and several criteria, including accuracy, sensitivity, specificity, and precision, as well as F-measure, ROC area, and PRC area (Table 3). These indices are computed from the confusion matrix, a two-dimensional matrix that compares predicted class values with actual class values. True positives (TP) are patients with heart disease who are correctly classified, and false positives (FP) are patients without heart disease who are incorrectly classified as having heart disease. False negatives (FN) are patients with heart disease who are not classified correctly by the model, while true negatives (TN) are patients without heart disease who are classified correctly [12]. The F-measure, ROC area, and PRC area are aggregate indices that provide an overall assessment of the model; the mathematical formulas (Eqs. 2-6) for the assessment indices are given in Table 3.

Results

The heart disease dataset consisted of 297 records (after removing 6 records with missing values), of which 160 subjects (53.9%) had no heart disease and 137 subjects (46.1%) had heart disease. To determine the risk factors associated with heart disease diagnosis, sixteen feature selection methods were applied in three categories: filter, wrapper, and evolutionary. All feature selection techniques were applied to the features, and the output of each, together with the features chosen by each technique, is presented in Table 4. Table 4 shows that the forward and backward selection methods selected the fewest features, while the Relief method selected the most (n = 12). In the subsequent step, seven different machine learning methods were employed, and their performance was evaluated using tenfold cross-validation. All models were first trained on the complete data set and then on the features selected by each feature selection method.

After all feature selection methods had been applied and the number of selected features determined, Table 5 lists the methods that selected the fewest features in each category. Accordingly, the wrapper algorithms selected the fewest features while the filter methods selected the most. In addition, the features selected by the filter algorithms were all identical, whereas the evolutionary algorithms, despite selecting the same number of features, chose different feature types.

The results of running the machine learning algorithms before feature selection are presented in Fig. 2. The SVM algorithm achieves good performance with ACC = 83.165%, Spec = 89.4%, and Precision = 86. However, when the combined criteria are taken into account, Bayesian networks achieve better performance with ACC = 81.3%, F = 81.3%, AUC = 90.3%, and PRC = 90. The highest sensitivity, 81%, was achieved by the MLP. The accuracy of all algorithms after feature selection is shown in Fig. 3.
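The tenfold evaluation procedure described in the Evaluation subsection can be sketched as follows. The snippet assumes scikit-learn rather than Weka/R, and X, y are hypothetical arrays standing in for the Cleveland data; it reports accuracy, sensitivity, specificity, precision, and F-measure from the pooled confusion matrix.

```python
# Sketch of tenfold cross-validation with confusion-matrix-based indices.
# scikit-learn is assumed (the study used Weka and R); X, y are numpy placeholders.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

def evaluate_tenfold(model, X, y):
    tp = tn = fp = fn = 0
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    for train, test in skf.split(X, y):
        model.fit(X[train], y[train])
        pred = model.predict(X[test])
        tp += np.sum((pred == 1) & (y[test] == 1))
        tn += np.sum((pred == 0) & (y[test] == 0))
        fp += np.sum((pred == 1) & (y[test] == 0))
        fn += np.sum((pred == 0) & (y[test] == 1))
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)            # recall on the heart-disease class
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    f_measure   = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f_measure

# Usage (hypothetical data): print(evaluate_tenfold(SVC(kernel="rbf"), X, y))
```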
The SVM algorithm implemented with the CFS/Information Gain/Symmetrical Uncertainty feature selection methods displays the highest performance in comparison with the other algorithms. The Bayes net algorithm displays the highest performance after the implementation of feature selection methods.

The F-measure values obtained after running the algorithms with the feature selection methods are presented in Fig. 4. The highest performance was achieved by the SVM + CFS/Information Gain/Symmetrical Uncertainty combination.

Figure 5 displays the AUC values after applying the feature selection methods. The Bayesian network + wrapper (logistic regression) combination performed best among the algorithms. As can be seen in the figure, the AUC improved after feature selection for most algorithms.

The results show that feature selection led to significant improvements in model performance for some methods (e.g., J48), while it decreased performance for others (e.g., MLP, RF). Table 6 compares the best results achieved before and after feature selection.

Table 6 shows that the filter feature selection techniques improved model performance in terms of accuracy, precision, and F-measure, whereas the wrapper-based and evolutionary algorithms enhanced model sensitivity and specificity. SVM combined with filter-based feature selection achieved the best accuracy of 85.5; in the best case, the filter methods improved model accuracy by +2.3. The SVM-based feature selection combinations showed the highest improvement in this index, while the PRC index showed the lowest improvement of +0.2.

Figure 6 shows the model building time before and after feature selection. All models were run on an Intel Core i3 machine with 4 GB of RAM. The comparison shows that the models trained on the original data set reached an average model building time of 0.59 ± 0.34 s, with MLP (1.64 s) and NB (0.01 s) having the longest and shortest times, respectively. After feature selection, the models trained on the features selected by the Relief and gain ratio methods achieved average model building times (ABT) of 0.44 ± 0.19 s and 0.42 ± 0.18 s, respectively. The backward selection method and the wrapper + NB method resulted in ABTs of 0.14 ± 0.06 s and 0.13 ± 0.06 s, respectively.

Table 7 summarizes the findings of this study and of related papers. Based on the data presented in Table 7, the accuracy achieved in this paper (85.5%, using the SVM algorithm with the CFS/Information Gain/Symmetrical Uncertainty feature selection methods) was higher than that of similar papers.
Discussion

This study evaluates the influence of feature selection methods on the performance of various algorithms. First, the algorithms were applied to the dataset without feature selection. The SVM and Bayesian network algorithms demonstrated the most robust performance, with accuracy values of 83.2 and 83.0, respectively. However, when combined criteria are considered (F-measure = 81.3, AUC = 90.3, and PRC area = 90), the Bayesian network performed more efficiently. Subsequently, sixteen feature selection methods were applied in three categories: filter, wrapper, and evolutionary. The wrapper methods selected the fewest features (backward selection = 4, forward selection = 5) and the filter methods selected the most (Relief = 12). The evolutionary methods PSO and ABC each selected 7 features; although the numbers were the same, the selected features differed between the two algorithms. In his analysis of feature selection correlation

• This study examines the role of feature selection methods in optimizing machine learning algorithms for heart disease prediction.
• Based on the findings, the filter feature selection method, which selected the largest number of features, outperformed the other methods in terms of the models' accuracy, precision, and F-measure.
• Wrapper-based and evolutionary algorithms improved model performance in terms of sensitivity and specificity.
• To the best of current knowledge, this study is among the few to compare the performance of different feature selection methods against each other in the field of heart disease prediction.
• Previous research has mainly focused on enhancing the algorithms themselves, whereas studies examining the impact of feature selection on cardiac prediction have considered only a limited number of methods, such as filter or metaheuristic approaches.
• As a result, the findings of this study may be of value to health decision-makers, clinical specialists, and researchers. They will enable clinical professionals to use artificial intelligence more effectively in heart disease prediction, policymakers to plan and allocate resources for the use of AI in health promotion and the prevention of cardiovascular disease, and researchers to build on these findings in further work on feature selection methods across different disease domains.

Limitation and future scope

The limitations of this study include the use of a single dataset and of only seven algorithms; better results might be obtained with multiple datasets and additional algorithms. Another limitation is that socio-economic characteristics and other clinical characteristics related to lifestyle (e.g., smoking, physical activity) were not taken into account. Future studies could provide better results by considering a broader range of clinical and socio-economic characteristics. Other information (e.g., patient medical images, ECG signals) was also not included in this study.
The simultaneous use of structured and unstructured data, signals, and medical images can provide researchers with more comprehensive insights and thus serve as a foundation for future exploration. Furthermore, the limited size of the dataset studied may limit the generalizability of the current findings to the general population, so larger datasets and larger sample sizes are needed to strengthen future research. Based on the findings of this paper, the present research team will focus on using larger datasets with a wider range of features, will examine the impact of different feature selection techniques across different disease domains, and will employ more algorithms, including deep learning techniques.

Figures 3, 4 and 5 compare the machine learning algorithms' performance after feature selection in terms of accuracy, F-measure, and ROC area.

Figure 2. Results before feature selection (based on the original data set).
Figure 3. Accuracy of the algorithms after feature selection.
Table 2. Details of the Cleveland dataset.
Table 4. Feature selection results. Selected features are marked with a star (*). Cp: chest pain; FBS: fasting blood sugar; restECG: resting electrocardiographic results; Exang: exercise-induced angina; Slope: peak exercise slope measure; Ca: number of major vessels colored by fluoroscopy; Thal: heart rate; Trestbps: resting blood pressure (mm Hg) on admission to the hospital; Chol: serum cholesterol; Thalach: maximum heart rate; Old peak: ST depression induced by exercise relative to rest.
Table 5. Minimum number of features chosen by the different methods.
Table 6. Performance comparison before and after feature selection.
9,014.6
2023-12-18T00:00:00.000
[ "Medicine", "Computer Science" ]
Dynamic Optimization Model and Algorithm Design for Emergency Materials Dispatch

Emergency materials dispatch (EMD) is a typical dynamic vehicle routing problem (DVRP) and concentrates on solving a process strategy, which differs from the traditional static vehicle routing problem. Based on the characteristics of emergency materials dispatch, the dynamic problem is converted into a series of static problems along the time axis. A mathematical multiobjective model is established, and a corresponding improved ant colony optimization algorithm is designed to solve the problem. Finally, a numeric example is provided to demonstrate the validity and feasibility of the proposed model and algorithm.

Introduction

In emergency situations such as earthquakes, snowstorms, and wars, ensuring that all kinds of materials are dispatched efficiently is the key problem of emergency logistics. It is also a typical dynamic vehicle routing problem whose aim is precisely to guarantee a smooth logistics workflow. Meanwhile, the emergency material dispatch problem is a much more complex vehicle routing problem because of its high-efficiency requirements and strict time limitations. Therefore, exploring modeling methods and efficient solution algorithms under dynamic conditions is highly significant [1].

The research scope of the dynamic vehicle routing problem (DVRP) includes uncertain demands, uncertain network performance, uncertain service vehicles, and the subjective preferences of decision makers. Solution strategies can be divided into two categories. One is the local optimization strategy, which deals with the dynamic adjustment of the implementation process and local optimization algorithms; the insertion method and the k-opt method are often used to redesign the routes and vehicle assignments [2]. The other is the restart optimization strategy, which is in effect a static solution of the dynamic problem: once determined real-time information has been received, the strategy starts from the very beginning again to find the optimal material dispatch scheme. As one successful application of the restart strategy, Psaraftis used dynamic programming (DP) to solve the dial-a-ride problem with one vehicle [3]. However, this method can only solve small-scale problems with no more than ten demand nodes.

The ant colony algorithm (ACA) has a natural advantage in adapting to dynamic environments thanks to its inherent robustness. Eyckelhof [4] studied a VRP variant in which the number of nodes is fixed but the distances between nodes change, reflecting sudden changes in the degree of traffic congestion. The preliminary results show that the original version and some slightly improved versions of ACA perform quite well on some simple test instances. Wang [5] converted the dynamic vehicle routing problem into a series of static subproblems along the time axis; the path pheromone is reinitialized, and a new problem containing the old unmet demand nodes and the new nodes of the next dispatch round is modeled and solved.
Because emergency logistics focuses on speed rather than quantity, dynamic and effective material management must compensate for the lack of mobility of inventory materials. The key point of emergency material dispatch in this paper is how to ensure the reliability and stability of material supply in a dynamic environment. Based on the characteristics of the emergency material dispatch problem, a multiobjective mathematical model is established, an improved ACA optimization strategy and algorithm are developed, and a numeric example shows that this method is feasible for generating real-time emergency material dispatch schemes corresponding to each point on the time axis.

Mathematical Model of Emergency Material Dispatch in a Dynamic Environment

For the dynamic events of a DVRP, the usual treatments are the event-trigger mechanism and the rolling-horizon principle. When the event-trigger mechanism is adopted, a newly arrived material assignment is immediately inserted into the running dispatch scheme. When the rolling-horizon principle is adopted, the operation time is divided into several small horizons, and the material dispatch scheme is adjusted in the intervals between them. Both treatments have advantages and disadvantages: the former may cause frequent scheme alterations, while the latter may decrease the service quality.

Strategic Analysis. Because sudden incidents in emergency logistics are varied and discrete, the demand for materials changes as the situation develops. According to the actual material assignment operation, a rolling-horizon, multitask decision-making emergency material dispatch model is built on the time axis, and the dispatch scheme is updated in these intervals while distributing both the unfinished dispatch scheme and the add-on dispatch scheme. Within an interval, event occurrences trigger a simple exchange neighborhood optimization and a service-denial strategy, while guaranteeing that every undamaged vehicle finally returns to the logistics center. Based on the instantaneous statistics of each node's demand at that moment, the dynamic model can be converted into submodels. As time goes on, the material demands of customer nodes accumulate because of the frequent scheme alterations and the assignment completion rate, as shown in Figure 1.

The restart strategy based on the rolling-horizon principle has three advantages. First, the dispatch scheme can change with the dynamic network attributes: because the material flow is continuous, the new dispatch round can be adjusted to make full use of the limited transport network resources. Second, it reduces the complexity of the problem and generates simpler and easier schemes, in which every vehicle starts from the logistics center with its full loading capacity. Third, the improved ACA can shorten the response time by reusing the information generated while solving the former dispatch scheme.

Variable and Parameter Descriptions. Vehicles that have already departed must serve as many nodes as they can, and the vehicles remaining at the logistics center are dispatched according to actual need. The mathematical model of emergency material dispatch is built on the following assumptions. (i) All emergency materials are of a generic class and are dispatched as standardized container units measured in twenty-foot equivalent units (TEU). (ii) A route can only start from the logistics center and end at the logistics center.
(iii) There is a single vehicle type, and lay (loading/unloading) time is ignored. The driving speed is constant, and the initial route weight matrix is given by the distances between nodes.

(iv) The emergency materials should be dispatched to all customer nodes safely, cheaply, and quickly.

(v) The parameters of the mathematical model are all integer decision variables.

According to the current research situation, the problem to be studied at time t is described by a vector set consisting of the network topology, the route weight matrix, the accumulated demands, and the transportation parameters. The network topology is G = {v_0, v_1, v_2, ..., v_n}, which contains a logistics center v_0 and n customer nodes (C nodes for short), where n is the maximum number of nodes. Each node has an associated location (x_i, y_i), and the network lies in the Euclidean plane, so the travel distance between any two nodes v_i and v_j is the straight-line distance between them. The distances between all pairs of nodes constitute the route weight matrix D = (d_ij), which is symmetric and satisfies the triangle inequality d_ij ≤ d_ik + d_kj. The travel time is obtained by dividing the distance by the (constant) driving speed. Each customer node denotes a service request, and all requests arrive randomly within a fixed planning horizon. The accumulated material demands, which contain both the unfinished dispatch scheme and the add-on dispatch scheme, are expressed as the vector {q_1, q_2, q_3, ..., q_n}, where q_i is the emergency material demand of node v_i at time t. Every vehicle must depart from and return to the logistics center, and the transportation parameters are the number of service vehicles m at the logistics center and their loading capacity Q (in TEU).

Objective Function. The model seeks the optimal routes of the current snapshot so that the total dispatch service benefit is maximized. Equation (2) is an objective function minimizing the total dispatch route length Z_1, which also minimizes the driving cost. Equation (3) is an objective function minimizing the number of vehicles Z_2 put into use at the logistics center, which greatly reduces the fixed charges. Together, the two targets guarantee the maximal safety of the emergency material dispatch.

In this paper the multiple objectives are combined into a single objective by the weight coefficient method; the weights of the two objectives are (0.354, 0.646), obtained from an importance comparison matrix in AHP. Considering the data in the numeric example, the value of Z_1 ranges from 800 to 1000 and the value of Z_2 ranges from 9 to 22, so the weighted single objective function is

min Z = 0.354 · lg Z_1 + 0.646 · Z_2.  (4)

The model is subject to constraints (5)-(10). Equation (5) ensures that the total number of vehicles departing from the logistics center is below the upper bound in every interval of the dispatch scheme. Equation (6) ensures that the total demand of all nodes on a vehicle's route does not exceed its loading capacity. Equation (7) ensures that all vehicles depart from the logistics center and finally return to the depot. Equations (8) and (9) ensure that every node has exactly one predecessor node and one successor node and is visited only once by one vehicle. Constraint (10), which requires i ≠ j, forbids loops in the travel route of every vehicle, in particular loops at the logistics center.
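For concreteness, the weighted single objective (4) can be evaluated as below. The value ranges follow those quoted above (Z_1 in [800, 1000], Z_2 in [9, 22]); taking "lg" as the base-10 logarithm is an assumption, and the function name is illustrative only.

```python
# Weighted single objective of Eq. (4): Z = 0.354 * lg(Z1) + 0.646 * Z2,
# where Z1 is the total route length and Z2 the number of vehicles in use.
# 'lg' is taken here as the base-10 logarithm (an assumption).
import math

def combined_objective(total_route_length, vehicles_in_use):
    return 0.354 * math.log10(total_route_length) + 0.646 * vehicles_in_use

# Example with values in the ranges quoted in the text:
print(combined_objective(900.0, 12))   # ~ 0.354*2.954 + 0.646*12 ≈ 8.8
```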
Ant Colony Algorithm Design for the DVRP

When dealing with a DVRP, newly arriving information can turn the former optimal dispatch scheme into a suboptimal or even infeasible one. In a static environment it is tolerable to spend more processing time so that commanders obtain a high-quality or optimal dispatch scheme. In an emergency, however, commanders want to obtain the dispatch scheme immediately; the fewer seconds or minutes required, the better.

The main idea of this algorithm design is to improve the performance of the basic ant colony algorithm through state transformation rules that use a load rating function and pheromone update rules based on an ant ranking system [6]. Furthermore, the route pheromone information obtained while solving the former dispatch scheme is retained, so the solution of the subsequent dispatch scheme is sped up and the emergency material dispatch scheme is generated more quickly.

State Transformation Rules of the ACA. Dorigo proposed the adaptive pseudo-random probability transformation rule in the Ant-Q algorithm [6]. In the basic Dorigo ant algorithm, each of the m ants constructs, in every generation, a set of routes touring all the given cities, which together make up a dispatch scheme. Starting at a random city, an ant selects the next city using heuristic information as well as pheromone information, which serves as a form of memory indicating which choices were good in the past. To improve the accuracy and realism of the ants' route choices, this paper proposes an improved state transformation rule that makes full use of a load rating derived from constraint (6), as follows.

The probability with which ant k appends arc (v_i, v_j) to its partial solution depends on τ_ij, the pheromone concentration; on η_ij, the expectation heuristic, which equals the reciprocal of the distance d_ij and favors minimizing the total dispatch route length; and on the load rating heuristic, which favors minimizing the number of vehicles in use. The exponents α, β, and γ are constants that determine the relative influence of the pheromone values, the distance values, and the load rating values on the decision of ant k. A random variable q follows a uniform distribution on [0, 1], and the parameter q_0 (0 ≤ q_0 ≤ 1) determines the state transformation rule, i.e., the relative importance of exploitation versus exploration. The set allowed_k contains the remaining nodes that have not yet been visited by ant k. The load rating function is computed from the residual capacity (in TEU) of ant k at node v_i and the material demand (in TEU) of the candidate successor node v_j.

Pheromone Update Rules. Once the ants of the colony have completed their tours, the better dispatch routes are used to globally update the pheromone trail. In this way a "preferred route" is memorized in the pheromone trail matrix, and future ants use this information to generate new solutions in the neighborhood of this preferred route. For pheromone initialization we set τ_0 = 1/(n − 1) for every arc (v_i, v_j) at the beginning of each iteration. When ant k moves from v_i to v_j, a local update is performed on the pheromone matrix, where a parameter ρ_1 determines the local evaporation rate. The global pheromone update adopts the elite ant strategy, which uses only the optimal and suboptimal dispatch routes; a parameter ρ_2 determines the global evaporation rate, and the contributions of the optimal and suboptimal routes are weighted by constants σ_1 and σ_2 with σ_1 > σ_2 > 0.
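A sketch of the pseudo-random proportional state transition with the extra load-rating term is given below. The exact form of the paper's load rating function is not preserved in the text, so a simple feasibility-based rating (favoring successors whose demand fits and makes good use of the residual capacity) is assumed here; the parameter defaults, including q0 = 0.6, are illustrative.

```python
# Sketch of the improved state transition rule: with probability q0 the ant exploits
# the best arc (argmax of tau^alpha * eta^beta * load^gamma); otherwise it samples
# proportionally. The load-rating form is an assumption (the original is not preserved).
import numpy as np

def load_rating(residual_capacity, demand):
    # Favor successors whose demand fits the residual capacity and uses it well.
    return demand / residual_capacity if 0 < demand <= residual_capacity else 1e-6

def next_node(i, allowed, tau, dist, residual_capacity, demands,
              alpha=1.0, beta=2.0, gamma=1.0, q0=0.6, rng=np.random.default_rng()):
    scores = np.array([
        (tau[i, j] ** alpha) * ((1.0 / dist[i, j]) ** beta)
        * (load_rating(residual_capacity, demands[j]) ** gamma)
        for j in allowed
    ])
    if rng.random() < q0:                       # exploitation
        return allowed[int(scores.argmax())]
    p = scores / scores.sum()                   # biased exploration
    return allowed[int(rng.choice(len(allowed), p=p))]
```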
Dynamic Modification Rules of the Pheromone Matrix. During emergency material dispatch, service requests are received in real time, and the serviceability of a requesting node is verified immediately. The target of this paper is an updated dispatch scheme obtained through rapid reaction to dynamic information about the traffic capacity of the transport network and the demand nodes. The dynamic environment can change in three ways: (i) a change in the route weight matrix, (ii) a change in the material demand of customer nodes, and (iii) a change in the optimization objectives of material dispatch.

The key point is how to modify the pheromone matrix so as to accommodate changes in the route weight matrix as far as possible. Among the three situations, types (i) and (ii), concerning connectivity and travel time reliability, are the more common as the situation develops. The simplest strategy is to reinitialize all pheromones to τ_0 and restart the ant algorithm, but this makes the solving process inefficient; the current solving process should instead make full use of the previous dispatch scheme calculation in order to reduce computation time. The modification rules are as follows.

Rule 1: Dynamic Distance Weight between Nodes. When the distance between two nodes v_i and v_j changes, let d_max be the maximum entry of the former route weight matrix; the pheromone of arc (v_i, v_j) is modified accordingly, with a parameter λ ∈ [0, 1] determining the adjustment range: λ = 0 means no adjustment, while λ = 1 means all pheromones are changed according to this rule.

Rule 2: Dynamic Regulation of Customer Node Demands. When the demand of a node v_c changes, with n the number of customer nodes, the pheromone of each arc (v_i, v_j) is modified by one of two strategies.

(1) η-strategy [2], illustrated in Figure 2. Because η_ij equals the reciprocal of d_ij, the amount of pheromone reset is inversely proportional to the distance between node v_i and the changed node.

(2) τ-strategy [2], illustrated in Figure 3. From the equations it can be deduced that the closer arc (v_i, v_j) lies to the changed node, the more strongly its pheromone is equalized along the route.

To illustrate the operation of the two strategies, we used a 10 × 10 grid of cities and set the adjustment parameter to 1 to visualize how the reset values are distributed. In Figure 2 the distribution for the η-strategy is proportional to the Euclidean distance, while Figure 3 shows that the τ-strategy tends to distribute the reset along the path. Comparing the two, we chose the τ-strategy to handle the dynamic regulation of customer node demands.

Rule 3: Dynamic Regulation of the Number of Nodes. When the number of added nodes equals the number of removed nodes, the nodes are simply replaced and then handled by Rule 1. (1) When the number of added nodes is less than the number of removed nodes, the rows and columns of the surplus removed nodes are deleted directly. (2) When the number of added nodes exceeds the number of removed nodes, the new nodes are inserted, the distance weights are calculated, and their pheromone is initialized to τ_0.
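The distance-proportional reset of Rule 2 might be sketched as follows: arcs close to the changed node are blended back toward the initial value τ_0, while distant arcs are left largely intact. The exact reset formula is not preserved in the text, so this is an assumption-based illustration with an adjustment parameter of the kind described above.

```python
# Sketch of a distance-proportional pheromone reset around a changed node c:
# arcs near c are pushed toward tau0, distant arcs are barely touched.
# The exact formula from the paper is not preserved; this only illustrates the idea.
import numpy as np

def reset_pheromone_near(tau, dist, changed_node, tau0, adjust=0.1):
    n = tau.shape[0]
    d_max = dist.max()
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            closeness = 1.0 - min(dist[i, changed_node], dist[j, changed_node]) / d_max
            mix = adjust * closeness            # in [0, adjust]
            tau[i, j] = (1.0 - mix) * tau[i, j] + mix * tau0
    return tau
```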
Detailed Steps of the Improved ACA

Step 1. Calculate the dispatch scheme at the initial time t_0 using the improved ACA and generate the solution.

Step 2. Write the pheromone matrix to the Excel file "pheromone" once the iteration stops.

Step 3. When calculating a new dispatch scheme, read the data from the Excel file "pheromone" and use it to initialize the pheromone matrix.

Step 4. According to the specific changes in the dynamic environment, update the pheromone matrix with the modification rules of Section 3.2.

Step 5. Calculate the dispatch scheme at the current time using the improved ACA, generate the new solution, and return to Step 2.

As shown in Table 1, the basic ACO algorithm with a simple restart strategy uses one more vehicle and generates a suboptimal total dispatch route, while its solution time is far longer than that of the other algorithms. The improved ACO algorithm with the pheromone-reservation strategy can meet the requirement for quick decisions, saving an average of 50% of the computation time when searching for the optimal emergency material dispatch scheme, and its total computation time is less than 80 seconds in absolute terms on a 100-node problem. Because of the simplified modeling, this improved ACO algorithm is superior to the LNS algorithm in [8]. The improved ACO algorithm designed in this paper is therefore optimal in the number of vehicles and suboptimal in the total route length.

Conclusions

The multiobjective materials dispatch model is built with the linear weighting method. The dynamic information generated during emergency material dispatch, such as changes in the traffic capacity of the transport network or in the demand nodes, is handled in batches by the rolling-horizon strategy. A numeric example with 5 different dynamic degrees demonstrates the validity of the model and algorithm. The modified ant colony optimization algorithm uses the pheromone reserved from the old material dispatch scheme to initialize the pheromone matrix when seeking the next dispatch scheme. This method improves the quality of the solutions and markedly improves the solving efficiency, while generating the material dispatch scheme in real time based on the current emergency material dispatch network. However, some problems remain to be studied, such as split demand, complete-set delivery, and multimodal transport in complex dynamic emergency materials dispatch problems.

Figure 1: Accumulated material demands of customer nodes.
Table 1: Calculation results for the Solomon benchmark problem at various dynamic degrees. The state transformation rule parameter q_0 = 0.6, and the dynamic modification rule parameters are 0.1 and 2.
4,165
2013-12-03T00:00:00.000
[ "Engineering" ]