Genuine Multipartite Entanglement in Time
While spatial quantum correlations have been studied in great detail, much less is known about the genuine quantum correlations that can be exhibited by temporal processes. Employing the quantum comb formalism, processes in time can be mapped onto quantum states, with the crucial difference that temporal correlations have to satisfy causal ordering, while their spatial counterpart is not constrained in the same way. Here, we exploit this equivalence and use the tools of multipartite entanglement theory to provide a comprehensive picture of the structure of correlations that (causally ordered) temporal quantum processes can display. First, focusing on the case of a process that is probed at two points in time -- which can equivalently be described by a tripartite quantum state -- we provide necessary as well as sufficient conditions for the presence of bipartite entanglement in different splittings. Next, we connect these scenarios to the previously studied concepts of quantum memory, entanglement breaking superchannels, and quantum steering, thus providing both a physical interpretation for entanglement in temporal quantum processes, and a determination of the resources required for its creation. Additionally, we construct explicit examples of W-type and GHZ-type genuinely multipartite entangled two-time processes and prove that genuine multipartite entanglement in temporal processes can be an emergent phenomenon. Finally, we show that genuinely entangled processes across multiple times exist for any number of probing times.
Introduction
Correlations form the basis for scientific inferences about the world. They are used, amongst others, to detect (and discern different types of) causal relations [1][2][3], to distinguish theories that abide by local realism from those that do not [4][5][6], and to test if quantum mechanics satisfies the assumptions of non-invasive measurements and realism per se [7][8][9]. The most striking type of genuine quantum correlations is entanglement [10,11], which is a prerequisite for the violation of Bell inequalities [12] and quantum steering [13][14][15], two phenomena that lie outside of what is possible by means of classical correlations. Additionally, entanglement provides an advantage in information processing tasks and is a fundamental resource in many quantum information protocols [16].
There have been many attempts to import various facets of spatial quantum correlations to the temporal domain. Most notably, Leggett-Garg type inequalities [7][8][9] were introduced to capture quantum correlations in time in analogy to a Bell-type setup. That is, when a single quantum system is probed at different points in time, the resulting correlations can also go beyond what is possible in classical theories. However, Bell-type setups in time are fraught with difficulties, as it is easy to construct fully classical, but invasive, setups that can also violate these inequalities maximally [17,18].
The interpretational issues are less problematic for the case of 'entanglement in time', and it is thus possible to classify and attribute operational meaning to genuinely quantum temporal correlations. In particular, the quantum comb formalism [19,20] allows one to express any temporal quantum process in terms of a multipartite quantum state -called a quantum comb -where each time at which the process is probed corresponds to two Hilbert spaces. This is most transparent by means of the so-called Choi-Jamiołkowski isomorphism (CJI), which maps any (multi-time) quantum process to a (many-body) quantum state. Consequently, both spatial and temporal correlations can be analysed in one common framework, enabling the study of temporal correlations on the same mathematical footing as the analysis of spatial ones. In addition, as the respective correlations have a direct interpretation in terms of the properties of the underlying process, this allows one to provide a clear-cut meaning to statements like 'different points in time are entangled with each other' 1 .
The quantum combs framework is naturally suited for describing multi-time quantum stochastic processes [2,23], as well as more exotic processes that lack global causal ordering [24,25]. In each case quantifying quantum resources has significant operational value. For instance, for the former, the requisite entangling resources naturally relate to the quantum complexity of a stochastic process. Alternatively, having access to naturally occurring or engineered entanglement in time within quantum devices could help to enhance their performance [26]. For the latter case, simulating causally indefinite processes requires spatial entanglement, tying together quantum correlations and exotic temporal phenomena [27,28].
While the isomorphism between quantum states and processes enables the systematic study of temporal correlations, it comes with two caveats. On the one hand, quantum states that correspond to quantum processes have to encapsulate the causal ordering of the process they describe. This means that measurements made at a later time cannot influence the statistics at earlier times, a requirement that imposes a hierarchy of trace conditions on quantum combs [19]. Accordingly, known results on the existence of multipartite quantum states that satisfy desired entanglement properties cannot straightforwardly be applied to quantum combs, as the set of states that describe spatial scenarios does not coincide with the set of states that correspond to temporal processes [29]. Given these additional constraints, it is then natural to ask what types of entanglement can exist in combs, and if there are genuinely multipartite entangled combs for any number of times.
On the other hand, the respective interpretations of the observed correlations fundamentally differ in the spatial and the temporal case. While quantum states 'only' describe the correlations between measurements on spatially separated parties, in a quantum comb correlations between different sets of parties have different interpretations. For example, in the simplest case, depending on the involved parties, entanglement can mean that two parties share a quantum state, share a non-entanglement breaking (EB) channel, or possess the ability to transmit quantum memory. Each of these cases has been analysed individually in the literature; phrased in the language of quantum causal modelling, the former two cases amount to a quantum common cause and a quantum direct cause, respectively [2]. In Refs. [30,31] the authors provided an example of a process that constitutes a superposition of common cause and direct cause scenarios, something that is unattainable in classical physics. The idea of a quantum memory and its connection to entanglement properties of the underlying comb was introduced in [32].
Here, we provide a systematic study of the entanglement features -both bipartite and multipartite -a quantum comb can display, and analyse in detail the necessary and sufficient properties of the underlying dynamics for the presence of different types of entanglement in combs. Put differently, we analyse both what it means in a physical sense for a comb to be entangled, and work out the respective resources that would be required in order to create such different types of entanglement in temporal processes. Specifically, after analysing the bipartite entanglement properties of combs on two times (i.e., defined on three Hilbert spaces), we show that there exist genuinely multipartite entangled combs for any number of times, and provide explicit examples of both W- and GHZ-type entangled combs on two times. Additionally, we show that in the temporal case -in analogy to its spatial counterpart [33] -there exist processes with entanglement as an emergent quality, i.e., genuinely multipartite entangled states that do not display entanglement in their marginals even when one allows for conditioning. Along the way, we provide explicit circuits for each of the discussed cases, and relate them to existing phenomena discussed in the literature, like entanglement breaking superchannels [34], channel steering [35], as well as the aforementioned concepts of genuine quantum memory and the superposition of direct and common causes. In this way we provide a comprehensive picture of the entanglement properties of temporal processes.
Preliminaries: Quantum combs
Throughout this article, we envision the following setup: An experimenter has access to a system -considered to be finite dimensional -which they can manipulate (i.e., transform, measure, discard, etc.) at successive points in time t_1 < t_2 < ... < t_{n+1}. In between these points, the system evolves freely, potentially interacting with degrees of freedom (henceforth dubbed environment) that are out of the control of the experimenter. In the most general case, this free evolution is described by a quantum channel, that is, a completely positive trace preserving (CPTP) map (see Fig. 1). Due to the interaction with the environment, the resulting multi-time statistics, or, equivalently, the quantum stochastic process that the experimenter probes, can display complex memory effects that can go beyond what is possible in classical physics [30][31][32].

Figure 1: Temporal quantum process. A system of interest is probed sequentially at times t_1, t_2, .... Initially, the system can be in a state ρ_AR that is correlated with degrees of freedom (labelled by R) that are outside the experimenter's control. Each measurement corresponds to a map M_i. In between measurements, the system and the environment together undergo a free evolution given by CPTP maps L_1, ..., L_n. The resulting process is fully described by a comb T_{n+1} (depicted by the blue outline).
Mathematically, every manipulation the experimenter can perform on the system corresponds to a trace non-increasing completely positive (CP) map. For example, at each time t_j, the system of interest could be measured in the computational basis, yielding outcomes {x_j}. This measurement leaves the original state of the system in the state |x_j⟩⟨x_j| and is described by the CP map P_{x_j}[ρ] = ⟨x_j|ρ|x_j⟩ |x_j⟩⟨x_j|, where the probability to actually obtain outcome x_j is given by tr(P_{x_j}[ρ]). More generally, in order to gain different information about the underlying quantum stochastic process, the experimenter could choose to probe it in different ways. For example, instead of measuring in the computational basis, the experimenter could use a positive operator valued measure (POVM), and, upon observing outcome x_j, feed forward a quantum state η_{x_j}. In this case, the map corresponding to the experimental manipulation would be given by M_{x_j}[ρ] = tr(E_{x_j} ρ) η_{x_j}, where E_{x_j} is the POVM element corresponding to the outcome x_j.
In the most general case, at each time t_j the experimenter could choose a general instrument J_j = {M_{x_j}}, i.e., a collection of CP maps that add up to a CPTP map M_j, such that every outcome x_j they observe corresponds to a transformation of the system given by M_{x_j}. Then, denoting the free system-environment evolution between times t_j and t_{j+1} by L_j, the probability for the experimenter to observe outcomes x_{n+1}, ..., x_1 having used the instruments J_{n+1}, ..., J_1 at times t_{n+1}, ..., t_1 is given by

P(x_{n+1}, ..., x_1 | J_{n+1}, ..., J_1) = tr[ M_{x_{n+1}} ∘ L_n ∘ ⋯ ∘ L_1 ∘ M_{x_1} (ρ_AR) ] =: T_{n+1}[M_{x_1}, ..., M_{x_{n+1}}],   (1)

where the maps {L_n, ..., L_1} describe the free system-environment evolution in between measurements and we have defined the multilinear functional T_{n+1} (see Fig. 1). This functional (or slight variations thereof) appears under varying names in various fields of quantum information theory and beyond. Depending on the context, it is called a quantum comb [19,20,36] (in the study of higher order quantum maps), process tensor [23,37,38] and causal automata/non-anticipatory channels [39,40] (when concerned with general open quantum processes with memory), causal box [41] (when quantum networks with modular elements are investigated), operator tensor [42,43] and superdensity matrix [44] (in the field of quantum information in general relativistic space-time), process matrix [2,3,24,25] (when used for quantum causal modelling), and quantum strategy (in the context of quantum games [45]). Following Refs. [19,20,36], we will call T_{n+1} a quantum comb (or simply comb). Importantly, combs can encapsulate any causally ordered quantum process [46], and are thus the mathematical framework for the field of quantum causal modelling [2,3]. As can be seen from Eq. (1), T_{n+1} depends on the initial system-environment state ρ_AR as well as the intermediate maps {L_j} and contains all statistical information that can be inferred from an underlying process when probing it at times t_1, ..., t_{n+1}. As such, it contains all spatio-temporal correlations -and thus all causal relations -that the quantum stochastic process of interest can exhibit. Additionally, due to the linearity of quantum mechanics, T_{n+1} can be reconstructed with a finite number of measurements [23]. These latter two properties of combs are analogous to those of quantum states, with the difference that the latter only contain all inferrable spatial joint probabilities, while the former contain all spatio-temporal correlations for measurements that can be separated both in space and time.
Employing the Choi-Jamiołkowski isomorphism [47][48][49] we can make this analogy more transparent. Specifically, each of the CP maps M_{x_j} : B(H_{i_j}) → B(H_{o_j}) can be mapped onto its Choi matrix M_{x_j} ∈ B(H_{i_j} ⊗ H_{o_j}). Importantly, the respective output space can also be trivial, i.e., H_{o_j} ≅ C, which is the case when the system of interest is discarded after the measurement (see below). A map M_{x_j} is CP iff its Choi matrix is positive, while it is trace preserving (TP) iff its Choi matrix satisfies tr_{o_j}(M_{x_j}) = 1_{i_j}. Here, tr_{y_x} (1_{y_x}) denotes the partial trace over (identity matrix on) the Hilbert space H_{y_x}. While we will mostly consider cases where the input and output dimensions of the respective maps coincide (except for the last time t_{n+1}, see below), and where the size of the considered system does not vary with time, for better bookkeeping, we always distinguish between the input and output space, and additionally label the respective Hilbert spaces with the time t_j they correspond to. Employing the CJI, Eq. (1) can be rewritten as

P(x_{n+1}, ..., x_1 | J_{n+1}, ..., J_1) = tr[(M_{x_1} ⊗ ⋯ ⊗ M_{x_{n+1}})^T Υ_{n+1}],   (2)

where T denotes the transposition.
Here, Υ_{n+1} ∈ B(H_{i_1} ⊗ H_{o_1} ⊗ ⋯ ⊗ H_{o_n} ⊗ H_{i_{n+1}}) is the Choi matrix of T_{n+1}; this naming deviates from our normal convention for maps and their Choi states to avoid confusion with the transposition and to conform with the notation used in the literature [23,50]. As the evolution after the final time t_{n+1} is not of interest, without loss of generality, the final instrument J_{n+1} is a POVM (i.e., it has a trivial output space), implying that Υ_{n+1} carries no output space H_{o_{n+1}}. In slight abuse of notation, in what follows, whenever there is no risk of confusion, we will call both T_{n+1} and its Choi matrix Υ_{n+1} the comb of the quantum process at hand.
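For readers who want to experiment numerically, the following short Python sketch (ours, not part of the original derivation; function names and the amplitude-damping example are illustrative choices) builds the Choi matrix of a single CP map from a set of Kraus operators and checks the two properties just stated: positivity of the Choi matrix (complete positivity) and the partial-trace condition for trace preservation.

```python
import numpy as np

def choi_from_kraus(kraus, d_in, d_out):
    """Unnormalized Choi matrix M = sum_{ij} |i><j| (x) sum_K K|i><j|K^dag (input space first)."""
    M = np.zeros((d_in * d_out, d_in * d_out), dtype=complex)
    for i in range(d_in):
        for j in range(d_in):
            Eij = np.zeros((d_in, d_in), dtype=complex)
            Eij[i, j] = 1.0
            out = sum(K @ Eij @ K.conj().T for K in kraus)
            M += np.kron(Eij, out)
    return M

def is_tp(choi, d_in, d_out, tol=1e-9):
    """Check the TP condition tr_out(Choi) = 1_in."""
    M = choi.reshape(d_in, d_out, d_in, d_out)
    red = np.trace(M, axis1=1, axis2=3)          # partial trace over the output space
    return np.allclose(red, np.eye(d_in), atol=tol)

# Example: a qubit amplitude-damping channel (an arbitrary choice for illustration)
g = 0.3
K0 = np.array([[1, 0], [0, np.sqrt(1 - g)]])
K1 = np.array([[0, np.sqrt(g)], [0, 0]])
choi = choi_from_kraus([K0, K1], 2, 2)
print(np.all(np.linalg.eigvalsh(choi) > -1e-9))  # CP: Choi matrix is positive
print(is_tp(choi, 2, 2))                          # TP: tr_out(Choi) = identity
```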
It is important to stress the similarity of Eq. (2) to the Born rule. The joint probability distribution for measuring a spatially separated multipartite quantum state ρ with POVM elements E_{x_1}, ..., E_{x_{n+1}} corresponding to the outcomes of spatially separated parties 1, ..., n+1 is given by

P(x_1, ..., x_{n+1}) = tr[(E_{x_1} ⊗ ⋯ ⊗ E_{x_{n+1}}) ρ].   (3)

This is akin to Eq. (2), which thus has been dubbed generalized Born rule for temporal processes [51,52]. Υ_{n+1} can hence be considered a quantum state in time, and it is natural to ask what kinds of correlations it can display, and what kinds of resources are necessary for their creation. Just like a quantum state, Υ_{n+1} is a positive matrix. However, in contrast to quantum states, Υ_{n+1} has to encapsulate the causal ordering of sequential measurements [19,39]. Specifically, the structure of Υ_{n+1} has to be such that the choice of instrument at time t_j cannot influence the statistics observed at any earlier time t_i < t_j. This requirement imposes a hierarchy of trace conditions [19]:

tr_{i_{n+1}}(Υ_{n+1}) = 1_{o_n} ⊗ Υ_n,  tr_{i_n}(Υ_n) = 1_{o_{n-1}} ⊗ Υ_{n-1},  ...,  tr_{i_2}(Υ_2) = 1_{o_1} ⊗ Υ_1,   (4)

where Υ_1 ∈ B(H_{i_1}) is a quantum state 2 . These equations also fix the overall trace of Υ_{n+1} to be tr(Υ_{n+1}) = ∏_{j=1}^{n} d_{o_j}, with d_{o_j} = dim(H_{o_j}). Vice versa, any positive matrix that satisfies the above trace conditions can be considered the Choi matrix of an underlying quantum stochastic process, or, equivalently, of an underlying quantum causal model [19].
To see why the above conditions ensure causal ordering, consider, for example, the case of three times {t_1, t_2, t_3}. Statistics at time t_1 should not depend on the choice of instruments J_2 and J_3 at times t_2 and t_3. Any given choice of these latter two instruments implies that -on average -at times t_2 and t_3, the experimenter performs CPTP maps with Choi matrices M_2 and M_3 respectively. As the output space of J_3 is trivial, it is a POVM, implying M_3 = 1_{i_3}. With this, we see that

P(x_1 | J_3, J_2, J_1) = tr[(M_{x_1} ⊗ M_2 ⊗ 1_{i_3})^T Υ_3] = tr[(M_{x_1} ⊗ M_2)^T (1_{o_2} ⊗ Υ_2)] = tr[(M_{x_1} ⊗ 1_{i_2})^T Υ_2] = tr[M_{x_1}^T (1_{o_1} ⊗ Υ_1)] = tr[tr_{o_1}(M_{x_1})^T Υ_1],

where we have alternatingly used the property tr_{o_j}(M_j) = 1_{i_j} of CPTP maps and the causality conditions of Eqs. (4). As Υ_1 is independent of the choice of J_3 and J_2, so is P(x_1 | J_3, J_2, J_1).
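The trace conditions are easy to verify numerically for simple combs. The sketch below (our own illustration, with the tensor factors written in the A ⊗ B order that np.kron produces) constructs the comb of an uncorrelated initial state on A followed by an identity channel from B to C, Υ_ABC = ρ_A ⊗ Φ^+_BC, and checks the causality condition and the overall trace.

```python
import numpy as np

d = 2
rho_A = np.array([[0.7, 0.2], [0.2, 0.3]])               # some initial system state
phi_plus = np.zeros((d * d, d * d))                       # unnormalized Choi of the identity
for i in range(d):
    for j in range(d):
        phi_plus[i * d + i, j * d + j] = 1.0              # sum_ij |ii><jj|

upsilon = np.kron(rho_A, phi_plus)                        # ordering A (x) B (x) C

# Causality: tracing out the last time must leave rho_A (x) 1_B
ups = upsilon.reshape(d, d, d, d, d, d)                   # indices (A,B,C | A,B,C)
tr_C = np.trace(ups, axis1=2, axis2=5).reshape(d * d, d * d)
print(np.allclose(tr_C, np.kron(rho_A, np.eye(d))))       # tr_C(Upsilon) = rho_A (x) 1_B
print(np.isclose(np.trace(upsilon), d))                   # overall trace equals d_B
```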
In order to investigate the structural properties of combs and to see how they stem from the underlying dynamical 'building blocks' (i.e., the initial state ρ_AR, as well as the intermediate maps {L_j}), it is convenient to introduce the link product ⋆ [19]. For example, Υ_{n+1} can be straightforwardly computed as

Υ_{n+1} = L_n ⋆ ⋯ ⋆ L_1 ⋆ ρ_AR,   (5)

where the link product between two matrices F ∈ B(H_x ⊗ H_y) and G ∈ B(H_y ⊗ H_z) is given by

F ⋆ G = tr_y[(F^{T_y} ⊗ 1_z)(1_x ⊗ G)].   (6)

Put shortly, the link product traces two Choi matrices over the spaces they share, and corresponds to a tensor product on the remaining spaces (importantly, F ⋆ G = F ⊗ G if F and G are defined on disjoint spaces). Intuitively, '⋆' extends the concatenation '∘' of maps to the level of Choi matrices, i.e., the Choi matrix of F ∘ G is given by F ⋆ G. For example, the action of a map L on a state ρ can be written as L[ρ] = L ⋆ ρ. Importantly, the link product satisfies F ⋆ (G ⋆ H) = (F ⋆ G) ⋆ H = F ⋆ G ⋆ H, it is commutative for all cases we consider, and the link product of positive matrices is itself a positive matrix. To keep better track of the involved spaces, we will often additionally label Choi matrices with the spaces they are defined on. While we provide a more detailed discussion of the link product in App. A (see [19] for thorough derivations), the above definition is sufficient for our purposes. We will make use of it frequently in what follows to derive the properties of combs from their underlying building blocks.
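Since the link product is used repeatedly below, a small self-contained implementation may help build intuition. The following Python sketch is our own (the helper names and the convention of which factor carries the partial transpose are assumptions matching the definition above); it implements F ⋆ G for two matrices sharing a single space and verifies that L ⋆ ρ reproduces the action L[ρ] of a channel on a state.

```python
import numpy as np

def partial_transpose_last(F, dx, dy):
    """Transpose the second (y) factor of F acting on x (x) y."""
    return F.reshape(dx, dy, dx, dy).transpose(0, 3, 2, 1).reshape(dx * dy, dx * dy)

def link(F, G, dx, dy, dz):
    """Link product of F on B(H_x (x) H_y) and G on B(H_y (x) H_z) over the shared space y."""
    big = np.kron(partial_transpose_last(F, dx, dy), np.eye(dz)) @ np.kron(np.eye(dx), G)
    big = big.reshape(dx, dy, dz, dx, dy, dz)
    return np.trace(big, axis1=1, axis2=4).reshape(dx * dz, dx * dz)   # trace over y

def choi(kraus, d):
    """Unnormalized Choi matrix (input space first) of a channel with the given Kraus operators."""
    M = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d), dtype=complex); E[i, j] = 1.0
            M += np.kron(E, sum(K @ E @ K.conj().T for K in kraus))
    return M

d = 2
rho = np.array([[0.6, 0.3], [0.3, 0.4]], dtype=complex)   # a state (trivial 'x' space, dx = 1)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)               # Hadamard channel as an example
L = choi([H], d)                                            # Choi matrix on y (x) z
out = link(rho, L, 1, d, d)                                 # L * rho lives on z alone
print(np.allclose(out, H @ rho @ H.conj().T))               # link product reproduces L[rho]
```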
Mapping the somewhat abstract linear functional T_{n+1} onto its Choi matrix Υ_{n+1} has the advantage that the latter is -up to normalization -a quantum state, and all temporal correlations that the given process T_{n+1} can display are now encoded in the spatial correlations of the (unnormalized) quantum state Υ_{n+1}. Consequently, the vast machinery that has been developed for the analysis of bi- and multipartite entanglement in quantum states [10,53,54] can be used to analyse temporal correlations that are genuinely quantum. Such a program has recently led to the definition and investigation of genuine quantum memory in temporal processes [32]. Naturally, as combs are not normalized to unity, in what follows, when we speak of entanglement, we will always understand it up to normalization; then, for example, a comb Υ_ABC is separable in the splitting A : BC, if it can be written in the form

Υ_ABC = Σ_α ρ_A^(α) ⊗ ρ̃_BC^(α),   (7)

where ρ_A^(α) and ρ̃_BC^(α) are positive matrices. Throughout, we will predominantly analyse the three-party case, both investigating the types of genuinely tripartite entanglement in temporal processes that can persist, as well as the necessary and sufficient conditions on the underlying dynamics for their occurrence. This case is simple enough to allow for explicit results, yet already displays genuine quantum effects [32]. To simplify notation, we denote the involved Hilbert spaces by H_A, H_B, and H_C instead of H_{i_1}, H_{o_1}, and H_{i_2}, and the corresponding comb by Υ_ABC. Consequently, the combs we will consider satisfy

tr_C(Υ_ABC) = 1_B ⊗ Υ_A,   (8)

where Υ_A is a quantum state (which, in particular, fixes tr(Υ_ABC) = d_B). Besides notational simplification, this relabelling has the additional advantage of providing an intuitive role that each of the 'parties' play. A corresponds to Alice measuring the system state at t_1, B corresponds to Bob feeding forward a state at t_1 (after Alice's measurement), and C corresponds to Charlie measuring the final state of the system at time t_2 (see Fig. 2). With this, for example, different kinds of bipartite entanglement in Υ_ABC correspond to different kinds of 'control' that each party has over the correlations the other two share. Throughout this article, we will adopt the convention that the Choi states of maps are labeled in the temporal order in which the respective spaces appear.
Preliminaries: Entanglement
While structurally similar to spatial quantum correlations, it is a priori unclear how the additional causality constraints of Eqs. (4) affect the multipartite entanglement phenomena that can persist in the temporal setting. Before studying this in detail, we first set the stage and review some important definitions and results from entanglement theory.
To begin with, states are entangled if they cannot be prepared with local operations (by potentially spatially separated parties) and classical communication among them (LOCC). In the bipartite case this implies that they cannot be written in the form

ρ_AB = Σ_i p_i ρ_A^i ⊗ ρ_B^i,   (9)

where here and in the following {p_i} is a probability distribution and ρ_A^i (ρ_B^i) are valid density matrices of party A (party B) respectively. If states are of this form they are called separable, and we will denote them by ρ^sep_{A:B}. An important example of a bipartite entangled state which is in fact maximally entangled and which plays also an important role in the CJI is |Φ^+⟩ = (|00⟩ + |11⟩)/√2. Whether the Choi matrix is entangled or not provides information about the underlying channel. More precisely, the Choi matrix is separable if and only if the corresponding quantum channel is entanglement-breaking (see [55] and references therein).

Figure 2: Two-step process. A two step process can be considered (up to normalization) a three-partite quantum state, where each of the parties has a different role. Alice (A) can measure the state of the system at time t_1, Bob (B) can feed forward a state at t_1, and Charlie (C) can measure the state of the system at t_2. To emphasize the causal ordering of the process, arrows have been added to the 'legs' of the comb.
In the multipartite scenario the situation is more complex (see, e.g., [53]). The straightforward generalization of Eq. (9) yields fully separable states. However, it may also be that a multipartite state of N parties contains at most entanglement among k parties for k = 2, ..., N. For the case that the state indeed contains N-partite entanglement (in any possible decomposition) it is called genuinely multipartite entangled (GME). As mentioned before, we will mainly be interested in the three-party case. In this case, if a state is entangled one may either observe only bipartite entanglement or genuine tripartite entanglement. It may be, for example, that a state is separable with respect to some specific bipartition, e.g., A : BC. That is, it can be written as ρ^sep_{A:BC} and it does not contain any entanglement between party A and parties B, C, which we denote by E(A : BC) = 0. Note, however, that in this case parties B and C can still share entanglement (in contrast to a fully separable state). Any biseparable state is then some mixture of states that are separable with respect to different bipartitions, i.e., it can be written in the form

ρ_bisep = q_1 ρ^sep_{A:BC} + q_2 ρ^sep_{B:AC} + q_3 ρ^sep_{C:AB},   (10)

where {q_i} are probabilities that sum to unity. Any state that is not of this form is genuinely tripartite entangled. By definition any GME state has to contain bipartite entanglement across all cuts, i.e., E(A : BC) > 0, E(B : AC) > 0 and E(C : AB) > 0. This does not necessarily hold true if one particle gets lost, i.e., the marginals may be separable or, stated differently, there exist GME states for which after performing a partial trace over C the resulting state is separable, i.e., E(A : B) = 0, and analogously for all other parties. For example the GHZ state, |GHZ⟩ = (|000⟩ + |111⟩)/√2, shows this property. Performing a partial trace is a very special case of a POVM measurement. As has also been shown, one can find GME states for which any possible POVM measurement yields a separable state on the remaining parties [33]. Assuming that, for example, party C is performing the measurement, we will denote this conditional form of entanglement by either E(A : B|c) > 0, in case there exists some post-measurement state on A and B which is entangled, or E(A : B|c) = 0 otherwise. Note that in the first case the measurements of party C may be able to influence the resource A and B share. We will consider the implications of such an effect on temporal processes in Sec. 3.3. Note further that the notion of conditional entanglement considered here is different from localizable entanglement [56], which is defined as the maximal entanglement among two parties that can be obtained on average via local measurements on the other parties.
There exist several criteria which may allow one to detect that a state is entangled. In particular, for the two-qubit case and the case of a qubit and a qutrit, there exists a necessary and sufficient condition for entanglement which can be easily evaluated. It is known as the PPT criterion or Peres-Horodecki criterion [57,58] and states the following: A state with total dimension d = d_A d_B ≤ 6 is separable if and only if its partial transposition yields a positive semi-definite operator, i.e., it has a positive partial transpose (PPT). For higher dimensions any separable state is PPT; however, the converse is not true. Hence, the PPT criterion still constitutes a necessary criterion for separability and still allows one to certify entanglement of bipartite states.
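As a quick numerical illustration (ours, not part of the original text), the following numpy sketch applies the PPT criterion to the two-qubit family p|Φ^+⟩⟨Φ^+| + (1-p) 1/4, which is NPT, and hence entangled, exactly for p > 1/3.

```python
import numpy as np

def partial_transpose_B(rho, dA, dB):
    return rho.reshape(dA, dB, dA, dB).transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

phi = np.zeros(4); phi[0] = phi[3] = 1 / np.sqrt(2)       # |Phi+> = (|00> + |11>)/sqrt(2)
proj = np.outer(phi, phi)

for p in (0.2, 0.5, 1.0):
    rho = p * proj + (1 - p) * np.eye(4) / 4
    min_eig = np.linalg.eigvalsh(partial_transpose_B(rho, 2, 2)).min()
    print(p, min_eig < -1e-12)      # True <=> NPT <=> entangled (for two qubits)
```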
Entanglement can also be detected by considering entanglement witnesses (EWs) [58,59]. These are operators which yield a non-negative expectation value for all separable states but which detect at least one entangled state via a negative expectation value. For any entangled state (independent of the number of parties or local dimensions) there exists an EW that heralds the presence of entanglement [58]. It is, however, not clear how to construct it in general.
Matters are significantly simplified if the set of separable states is relaxed to the set of states that are PPT mixtures, which are written as

ρ_PPTmix = q_1 ρ^PPT_{A:BC} + q_2 ρ^PPT_{B:AC} + q_3 ρ^PPT_{C:AB},   (11)

where each term in the above equation corresponds to a state with a PPT in the respective splitting. Any state that cannot be written in the form of Eq. (11) is GME, but there are GME states that are PPT mixtures. Importantly, using the concept of EWs, membership to the set of PPT mixtures can be decided by means of the following semi-definite program (SDP) [60]:

min tr(Wρ)
s.t. W = P_M + Q_M^{T_M}, P_M ≥ 0, Q_M ≥ 0 for every bipartition M,   (12)

together with a suitable normalization constraint on W. This can be easily seen, as for any state ρ of the form in Eq. (10) it holds that

tr(Wρ) = Σ_M q_M tr[(P_M + Q_M^{T_M}) ρ^sep_M] = Σ_M q_M ( tr[P_M ρ^sep_M] + tr[Q_M (ρ^sep_M)^{T_M}] ) ≥ 0.

Here we used that tr(σ P_M^{T_M}) = tr(σ^{T_M} P_M), and the inequality follows from the fact that [ρ^sep_{A:BC}]^{T_A} ≥ 0 (and analogously for the other splittings) and by definition P_M, Q_M ≥ 0. Hence, in case a negative value is observed the state is certified to be GME. Moreover, it has been shown that for any state that is GME there exists a witness of the form given in the SDP (12) which detects it [60].
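The SDP (12) is straightforward to set up with off-the-shelf convex-optimization tools. The sketch below is our own, not taken from Ref. [60]: it assumes cvxpy ≥ 1.2 (for cp.partial_transpose), uses a trace normalization of W as one possible choice (the precise normalization in Ref. [60] may differ), and applies the program to a noisy GHZ state; a negative optimal value certifies that the state is not a PPT mixture and hence GME.

```python
import numpy as np
import cvxpy as cp

dims = [2, 2, 2]
D = 8

ghz = np.zeros(D); ghz[0] = ghz[-1] = 1 / np.sqrt(2)
rho = 0.9 * np.outer(ghz, ghz) + 0.1 * np.eye(D) / D       # noisy GHZ state

W = cp.Variable((D, D), hermitian=True)
constraints = [cp.trace(W) == 1]                            # normalization (our assumption)
for M in range(3):                                          # bipartitions A|BC, B|AC, C|AB
    P = cp.Variable((D, D), hermitian=True)
    Q = cp.Variable((D, D), hermitian=True)
    constraints += [P >> 0, Q >> 0,
                    W == P + cp.partial_transpose(Q, dims, axis=M)]

prob = cp.Problem(cp.Minimize(cp.real(cp.trace(W @ rho))), constraints)
prob.solve()
print(prob.value)   # negative value: rho is not a PPT mixture, hence GME
```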
For certain classes of states the following analytical criterion is particularly useful to prove GME. It has been shown [61] that for a biseparable n-qubit state ρ^(n) it holds that

|ρ^(n)_{0⋯0, 1⋯1}| ≤ (1/2) Σ_{k=1}^{n-1} Σ_{|I|=k} √( ρ^(n)_{I,I} ρ^(n)_{Ī,Ī} ),

where here we use the notation ρ^(n)_{I,J} = ⟨I|ρ^(n)|J⟩ for tuples I, J ∈ {0,1}^n, and Ī is obtained from the tuple I by swapping zeroes and ones. Moreover, we denote by |I| the Hamming weight of the tuple and Σ_{|I|=k} denotes a sum over all tuples which have Hamming weight k. In case the inequality is violated the state is GME.
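A short numpy check of this inequality (our own sketch, using the noisy GHZ state again) is given below; the left-hand side exceeds the right-hand side, and GME is certified, once the GHZ weight is large enough.

```python
import numpy as np
from itertools import product

def gme_gap(rho, n):
    """Return lhs - rhs of the biseparability bound above; a positive value certifies GME."""
    lhs = abs(rho[0, 2**n - 1])
    rhs = 0.0
    for bits in product((0, 1), repeat=n):
        k = sum(bits)
        if 0 < k < n:                                   # Hamming weights 1, ..., n-1
            I = int("".join(map(str, bits)), 2)
            Ibar = 2**n - 1 - I                          # tuple with zeroes and ones swapped
            rhs += 0.5 * np.sqrt(rho[I, I] * rho[Ibar, Ibar])
    return lhs - rhs

n, D = 3, 8
ghz = np.zeros(D); ghz[0] = ghz[-1] = 1 / np.sqrt(2)
for p in (0.3, 0.5, 0.9):
    rho = p * np.outer(ghz, ghz) + (1 - p) * np.eye(D) / D
    print(p, gme_gap(rho, n) > 0)      # violation (True) only for sufficiently large p
```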
So far we considered the question among how many parties entanglement has to persist in order to create a state. However, one may also be interested in organizing states into different classes of entanglement. Such a classification can be achieved by considering stochastic local operations assisted by classical communication (SLOCC). If one can transform with non-vanishing probability the pure state |Ψ⟩ into another pure state |Φ⟩ via LOCC and the reverse transformation from |Φ⟩ to |Ψ⟩ is also possible via SLOCC, both states are within the same SLOCC class [62]. Mathematically, this corresponds to |Φ⟩ ∝ A_1 ⊗ ⋯ ⊗ A_n |Ψ⟩ with A_i being local invertible operators. For three qubits there exist two different SLOCC classes which are genuinely tripartite entangled [62], the W-class and the GHZ-class. Well-known representatives of these classes are the W state, |W⟩ ∝ |001⟩ + |010⟩ + |100⟩, and the GHZ state, |GHZ⟩ ∝ |000⟩ + |111⟩.
The concept of SLOCC classes can be generalized from pure states to mixed states by defining such a class as the convex hull over all pure states within the closure of a SLOCC class [63]. As an example, consider the W-class, whose closure also includes all biseparable and fully separable states. Hence, the W-class (for mixed states) contains all states that are convex combinations of pure states within the W-class, biseparable and fully separable states. As the closure of the GHZ class for three qubits contains all pure three-qubit states, all states are contained in the GHZ class for mixed states. Moreover, all states which are in GHZ∖W have the property that in any decomposition at least one state is contained in the GHZ-class (of pure states). Such states can also be identified by using witness operators [63,64]. An example of such a SLOCC witness is given by [63] W = (3/4)·1 − |GHZ⟩⟨GHZ|, which gives a non-negative expectation value for any state within the W-class but is able to detect, e.g., the GHZ state. Another way to distinguish among different SLOCC classes is by considering SL invariants [62,65]. The tangle [66] (which is also an entanglement measure) is such a quantity: it is non-zero for states within GHZ∖W and zero for all states in the W-class [62]. Below, we will use these criteria to show that processes of all SLOCC classes exist for the case of two times, i.e., three parties.
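For concreteness, a two-line numpy check (ours) evaluates this SLOCC witness on the GHZ and W states: the GHZ state is detected with expectation value -1/4, while the W state yields 3/4 and is not detected.

```python
import numpy as np

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
w = np.zeros(8); w[1] = w[2] = w[4] = 1 / np.sqrt(3)       # (|001> + |010> + |100>)/sqrt(3)

W_op = 0.75 * np.eye(8) - np.outer(ghz, ghz)
for name, psi in (("GHZ", ghz), ("W", w)):
    print(name, psi @ W_op @ psi)     # GHZ: -0.25 (detected), W: 0.75 (not detected)
```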
Preliminaries: States vs. Processes
As mentioned, via the CJI, all quantum combs can be considered as quantum states (up to normalization). Hence, the concepts and tools presented above can also be applied for their characterization. When doing so we will use the normalization for combs, i.e., tr(Υ_ABC) = d_B, and if necessary modify the criteria accordingly. We emphasize that the causality constraints prevent all states from being interpretable as temporal processes. This already holds true for the case of two times t_1, t_2, i.e., the tripartite case with the involved Hilbert spaces H_A, H_B, and H_C. Here, for example, the GHZ state is a genuinely tripartite entangled quantum state, but not (proportional to) a proper comb Υ_ABC. More generally, it is straightforward to see that any process that is described by a pure comb cannot be genuinely multipartite entangled, as it has to be of the form Υ_{n+1} = ρ_{i_1} ⊗ U_{o_1 i_2} ⊗ ⋯ ⊗ U_{o_n i_{n+1}}, where ρ_{i_1} is a pure state of the system and the U_{o_j i_{j+1}} are Choi matrices of unitary maps that act on the system alone.
This structural difference between states and processes also extends to the respective physical implications of entanglement in the spatial and the temporal setting. While for quantum states, entanglement is a statement about correlations between spacelike separated measurements, the correlations in a comb can be given a direct operational meaning in terms of the causal structure of the underlying process. Discerning and determining different causal influences is the objective of the field of (quantum) causal modelling [1,2]. For two parties, correlations between them can exist due to a common cause, i.e., they are correlated, but cannot causally influence each other; or a direct cause between them, i.e., one party's actions can influence the statistics of the other.
Expressed in quantum mechanical terms, a common cause (between, say, parties A and C) corresponds to the two parties sharing a quantum state, but no channel between them to communicate. As the state can be correlated, their measurement outcomes when probing said state can also be correlated, but they cannot influence each other. When the state that is shared is entangled, we will call it a quantum common cause.
On the other hand, a direct cause, say, between B and C, implies that information can be sent from B to C (or vice versa), i.e., they share a communication channel between them. In this case, correlations between the two parties stem from a direct causal influence. We will call a direct cause quantum, if the channel that is shared admits the sending of quantum information, i.e., if it is not entanglement breaking.
For a process with three parties of the form discussed above (see Fig. 2), both of these types of correlations can be read off directly from the comb Υ_ABC. As Fig. 2 suggests, B is the only party that can share direct cause correlations with the other two (Bob is the only party that can feed something into the process), while A and C can only share common cause correlations. More precisely, we will consider the correlations both across different splits in the full comb Υ_ABC and in its (conditional) marginals. Then, depending on the respective comb, the considered Choi state corresponds either to a channel between parties -i.e., a direct cause -or to a state shared by different parties -i.e., a common cause.
The type of correlation then tells us directly whether the cause is quantum or not. For example, if Υ_AC is entangled, then Alice and Charlie share an entangled state, implying that they have a quantum common cause. On the other hand, if the considered Choi state is that of a channel, then entanglement in a cut means that the underlying channel is not entanglement breaking [67]. Consequently, if, say, Υ_BC is entangled, then there is a quantum direct cause between Bob and Charlie. Any study of quantum correlations in combs thus has a direct operational implication for the process under investigation.
Naturally, beyond the two-party paradigm, the landscape of causal structures becomes more complex. For example, as mentioned, it has been shown that, in quantum mechanics, common cause and direct cause scenarios can be superposed [30,31]. More generally, combs can display genuine multipartite entanglement. While somewhat harder to interpret in terms of common and direct causes, following the definition of Choi states, a GME comb nonetheless has an operational interpretation: the corresponding process can be harnessed to create GME from a collection of mutually uncorrelated maximally (bipartite) entangled states by, respectively, acting on only one of their parts.
Here, we will investigate in detail the entanglement structure of processes on two times and beyond; i.e., we investigate what types of entanglement, both bi- and multipartite, can exist. This, in turn, answers the question of what genuinely quantum phenomena processes in time can display. Additionally, we will study the basic building blocks required for the implementation of such different types of processes, providing a comprehensive characterization of the resources required to exploit such genuine quantum features. Both of these results, in turn, provide insights into the multilayered nature of causal relations that persist in quantum processes.
Bipartite entanglement in combs -necessary conditions
As a first step, before considering genuine multipartite entanglement in quantum processes, we consider the simpler case of bipartite entanglement, and answer the questions of what kinds of entanglement are possible, given the causality requirements of Eq. (8), and what underlying resources are necessary for their creation. Put in terms of quantum causal modelling, entanglement of a comb in different splittings amounts to different types of quantum causal relations [30,31]. Their analysis both allows one to make inferences about the necessary resources required to create different types of entanglement in combs, and prepares the discussion of the genuinely tripartite case.
The bipartite case can be considered in two different ways. On the one hand, assuming a 'global' position, entanglement in the splittings A : BC, B : AC, and C : AB is of interest. On the other hand, considering each leg in the comb in Fig. 2 as a party that either wants to communicate with another party, and/or tries to influence the communication of the other two, entanglement of the form E(A : C|b), E(B : C|a), and E(A : B|c) is relevant, where conditioning implies that the respective entanglement can depend on a measurement or preparation that the third party performed. Evidently, entanglement in the latter, conditional case implies entanglement in the former, global case, as no local operation can create entanglement. Here, we first investigate the necessary conditions for entanglement in the conditional case, while the unconditional one will be discussed in detail in the next section.
Quantum common cause -Entanglement E(A : C|b) > 0
Depending on the interplay of the underlying building blocks of the process, Alice and Charlie can -potentially conditioned on the state that Bob inserts into the process -share entanglement. Expressed in the language of causal modelling [1,2], such a scenario would (possibly depending on the state that Bob feeds forward) imply a quantum common cause [3,30] between Alice and Charlie. Evidently, for this type of entanglement to persist, the initial system-environment state needs to be entangled between the system A and the environment R. Additionally, the subsequent system-environment map L : B(H_B ⊗ H_R) → B(H_C) must be able to preserve this entanglement, i.e., it must not act as an entanglement breaking channel from R to C for every possible input of Bob (see Fig. 3 for a graphical depiction of the considered process). These two requirements can be summarized in the following Proposition:

Proposition 1. If a process Υ_ABC satisfies E(A : C|b) > 0, then the initial system-environment state ρ_AR is entangled and there exists a pure state |Φ_B⟩⟨Φ_B| such that |Φ_B⟩⟨Φ_B| ⋆ L_BRC ∈ B(H_R ⊗ H_C) is not the Choi matrix of an entanglement breaking channel.

Proof. The overall process Υ_ABC can be computed directly from its building blocks as

Υ_ABC = ρ_AR ⋆ L_BRC,

where we have omitted the respective identity matrices. Now, if ρ_AR is separable, it can be decomposed as ρ_AR = Σ_α p_α ρ_A^(α) ⊗ ρ_R^(α), which leads to an overall comb of the form

Υ_ABC = Σ_α p_α ρ_A^(α) ⊗ (ρ_R^(α) ⋆ L_BRC),

which is -up to normalization -a quantum state that is separable in the splitting A : C, independent of what Bob does. On the other hand, assuming that |Φ_B⟩⟨Φ_B| ⋆ L_BRC is an entanglement breaking channel for all states |Φ_B⟩ implies that τ_B ⋆ L_BRC is entanglement breaking for any state τ_B that Bob feeds into the process. As such, we have τ_B ⋆ L_BRC = Σ_μ E_μ^R ⊗ ξ_μ^C, where {E_μ^R} are positive matrices that sum to 1_R and {ξ_μ^C} are quantum states. Then, one obtains for the resulting comb shared between Alice and Charlie:

Υ_AC|τ_B = ρ_AR ⋆ τ_B ⋆ L_BRC = Σ_μ tr_R[ρ_AR (1_A ⊗ (E_μ^R)^T)] ⊗ ξ_μ^C,

which is a separable state on B(H_A ⊗ H_C).

A simple example for a process that satisfies E(A : C|b) > 0 is one where the initial system-environment state is a maximally entangled state, and the action of the map L is to swap the system and the environment, and subsequently discard the environment. In this case, the resulting state shared between Alice and Charlie would be the maximally entangled state Φ^+_AC, independent of what state Bob feeds into the process. More generally, it is also possible that Bob can actively 'switch' the entanglement between Alice and Charlie on or off by choosing his input state τ_B appropriately. To see this, consider a variation of the above channel, where, before the swap, depending on the input state of Bob, either an identity channel I or an entanglement breaking channel Λ_EB is implemented (see Fig. 4). Concretely, let the action of this map on a BR state ρ_BR be given by

L[ρ_BR] = I_{R→C}[⟨0_B|ρ_BR|0_B⟩] + Λ_EB^{R→C}[⟨1_B|ρ_BR|1_B⟩].

In this case, if Bob feeds τ_B = |0_B⟩⟨0_B| into the process, then the resulting AC state is equal to the maximally entangled state Φ^+_AC, while, if he feeds forward τ_B = |1_B⟩⟨1_B| into the process, then Alice and Charlie share the separable state (Λ_EB ⊗ I_A)[Φ^+_AC]. The above discussion also provides a direct intuitive interpretation of E(A : C|b) > 0 for a comb Υ_ABC; it implies that (possibly depending on Bob's input), Alice (i.e., an experimenter at time t_1) and Charlie (i.e., an experimenter at time t_2) share an entangled state, or, equivalently, there is a quantum common cause between Alice and Charlie, depending on what Bob does. Importantly -unlike in the two cases that follow -Bob can control the entanglement that is shared between Alice and Charlie deterministically, i.e., no conditioning on measurement outcomes is necessary. Phrased with respect to the above example, Bob can decide beforehand which of the possible states he wants Alice and Charlie to share. While there are infinitely many deterministic preparations (all states τ_B that Bob can feed into the process), there is only one deterministic POVM element [68] (discarding the system).
Consequently, Alice and Charlie can also control the causal correlations of the respective other two parties, but they will only know which particular correlations they have conditioned on after obtaining their measurement outcome.
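To make the 'switching' example above concrete, the following numpy sketch (our own illustration) computes the state shared by Alice and Charlie for Bob's two inputs, taking the completely depolarizing channel as the entanglement breaking channel Λ_EB (an arbitrary choice, since the text leaves Λ_EB unspecified), and tests entanglement via the PPT criterion.

```python
import numpy as np

def partial_transpose_B(rho, dA, dB):
    return rho.reshape(dA, dB, dA, dB).transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def is_entangled_2qubit(rho):          # PPT is necessary and sufficient for two qubits
    return np.linalg.eigvalsh(partial_transpose_B(rho, 2, 2)).min() < -1e-12

phi = np.zeros(4); phi[0] = phi[3] = 1 / np.sqrt(2)
phi_AR = np.outer(phi, phi)            # maximally entangled initial state rho_AR

# Bob feeds |0>: the identity acts on R before the swap, so Alice and Charlie share Phi+_AC.
rho_AC_0 = phi_AR
# Bob feeds |1>: the completely depolarizing (EB) channel acts on R before the swap,
# so Alice and Charlie share rho_A (x) 1_C/2 = 1/4.
rho_AC_1 = np.eye(4) / 4

print(is_entangled_2qubit(rho_AC_0))   # True:  quantum common cause switched on
print(is_entangled_2qubit(rho_AC_1))   # False: only a classical common cause remains
```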
Quantum direct cause -Entanglement E(B : C|a)
While E(A : C|b) concerns the entanglement between two output legs, i.e., two parts of a quantum state, E(B : C|a) concerns the entanglement between an input leg (B) and an output leg (C). Consequently, the property E(B : C|a) > 0 is directly related to the entanglement preserving properties of the map L : B(H_B ⊗ H_R) → B(H_C). The presence (possibly conditioned on Alice's outcome) of this type of entanglement can thus be considered a quantum direct cause between Bob and Charlie. In particular, we have the following Proposition:

Proposition 2. If a process Υ_ABC satisfies E(B : C|a) > 0, then there exists a state σ_R on the environment such that σ_R ⋆ L_BRC ∈ B(H_B ⊗ H_C) is not the Choi matrix of an entanglement breaking channel.

Proof. Let the initial system-environment state be ρ_AR. If Alice performs a measurement with an outcome corresponding to the POVM element E_A, the remaining comb shared by Bob and Charlie is given by E_A^T ⋆ Υ_ABC (where the transpose on E_A is necessary to comply with the definition of the link product) which, written in terms of its building blocks, is equal to

E_A^T ⋆ Υ_ABC = E_A^T ⋆ ρ_AR ⋆ L_BRC = ρ_R^(a) ⋆ L_BRC,   (21)

where ρ_R^(a) = tr_A[(E_A ⊗ 1_R) ρ_AR] is the (subnormalized) environment state conditioned on Alice's outcome. If σ_R ⋆ L_BRC is the Choi matrix of an entanglement breaking channel for every environment state σ_R, then Eq. (21) cannot be proportional to an entangled state.
As Υ_BC|a is (proportional to) the Choi matrix of a quantum channel from Bob to Charlie (where the proportionality constant is equal to the probability for Alice to measure the outcome corresponding to E_A), E(B : C|a) > 0 implies that for some measurement outcome in Alice's laboratory, Bob has an entanglement preserving channel to Charlie at his disposal, i.e., he can send him quantum information. Unlike in the previous case, conditioning on an outcome in Alice's laboratory is in general not deterministic, and the probability for the POVM element E_A to occur is given by tr[(E_A ⊗ 1_R) ρ_AR]. As already mentioned, the only deterministic POVM element that Alice can perform is 1_A, which amounts to discarding her part of the initial state ρ_AR.
It is straightforward to find examples of processes that satisfy E(B : C|a) > 0 for all POVM elements E_A (including E_A = 1_A). One such example is a process without environment, with a pure initial state |Ψ_A⟩ and an identity map between Bob and Charlie, which yields the valid comb Υ_ABC = |Ψ_A⟩⟨Ψ_A| ⊗ Φ^+_BC, where the unnormalized maximally entangled state Φ^+_BC is the Choi matrix of the identity channel I_{B→C} (see App. A). On the other hand, there are processes that yield an entanglement preserving channel between Bob and Charlie for each of the outcomes of Alice (for a given POVM), but yield an entanglement breaking channel if no conditioning takes place.
For example, let all systems (including the environment R) be qubits, the initial system-environment state be a maximally entangled state Φ^+_AR, the map L be a controlled Pauli-Z gate with control on the environment, and let Alice perform a measurement in the computational basis (see Fig. 5). Then, for outcome 0 of Alice's measurement (which occurs with probability 1/2), the corresponding map between B and C is the identity map with corresponding Choi matrix Φ^+_BC, while for the outcome 1 the corresponding map between B and C is given by the Pauli-Z gate (with corresponding Choi matrix Φ^-_BC). Each of these maps is entanglement preserving; however, the average map (with corresponding Choi matrix (Φ^+_BC + Φ^-_BC)/2 = |00⟩⟨00|_BC + |11⟩⟨11|_BC) is the completely dephasing map, which is entanglement breaking. Consequently, for a given process Υ_ABC, Alice might be able to switch on and off Bob's capability to transmit quantum information to Charlie, but she cannot do so deterministically.

Figure 5: Alice controls the entanglement between Bob and Charlie. Alice measures in the computational basis. If she obtains outcome 0, the channel between B and C is the identity channel. Otherwise, it is given by ρ → σ_z ρ σ_z, where σ_z is the Z-Pauli matrix. Both of these channels are unitary -and hence their Choi matrices are proportional to entangled states -but on average, i.e., if Alice simply discards her system, the channel between B and C is the completely dephasing one, which is entanglement breaking.
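This behaviour is quickly reproduced numerically. The short numpy check below (ours) confirms that the two conditional Choi matrices Φ^+ and Φ^- are entangled, while their equal mixture, corresponding to Alice discarding her outcome, is the PPT (hence separable) Choi matrix of the completely dephasing channel.

```python
import numpy as np

def partial_transpose_B(rho, dA, dB):
    return rho.reshape(dA, dB, dA, dB).transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def npt(rho):
    return np.linalg.eigvalsh(partial_transpose_B(rho, 2, 2)).min() < -1e-12

plus = np.zeros(4); plus[0] = plus[3] = 1 / np.sqrt(2)     # |Phi+>, Choi of the identity
minus = plus.copy(); minus[3] *= -1                         # |Phi->, Choi of Pauli-Z
phi_p, phi_m = np.outer(plus, plus), np.outer(minus, minus)

print(npt(phi_p), npt(phi_m))          # True True: both conditional channels are quantum
print(npt(0.5 * (phi_p + phi_m)))      # False: the average (dephasing) channel is EB
```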
Channels from the future to the past -Entanglement E(A : B|c)
So far, the causality constraints on Υ_ABC only played a minor role in the considerations of possible entanglement in different splittings. For example, as we have seen above, entanglement in the splitting B : C can exist both deterministically (for the case that Alice's POVM element is 1_A), as well as probabilistically (for E_A ≠ 1_A). This freedom no longer exists if the entanglement between Alice and Bob is considered. From the causality constraint (8), we have tr_C(Υ_ABC) = 1_B ⊗ Υ_A, implying that if Charlie discards the state he receives, then there are no correlations (quantum or classical) between Bob and Alice. Otherwise, the choice of Bob's input could influence the state that Alice receives, which is forbidden by causality. Consequently, E(A : B|c) > 0 means that, by conditioning on a measurement outcome in Charlie's lab (corresponding to the POVM element E_C), quantum information can be sent from Bob to Alice (i.e., 'from the future to the past' [69]). Such simulation scenarios by means of conditioning have been discussed in different contexts in the literature [27,[69][70][71][72][73]. For this to be possible, the initial system-environment state ρ_AR has to be entangled, and the map L has to be able to entangle R and B. More precisely, we have the following Proposition:

Proposition 3. If a process Υ_ABC satisfies E(A : B|c) > 0, then the initial system-environment state ρ_AR is entangled and there exists a pure state |Φ_C⟩⟨Φ_C| such that L_BRC ⋆ |Φ_C⟩⟨Φ_C| ∈ B(H_B ⊗ H_R) is proportional to an entangled state.
Proof. Analogously to the proof of Prop. 1, it is straightforward to show that a separable initial state leads to a comb Υ_ABC that satisfies E(A : B|c) = 0. Now, focusing on the second part of the proposition, we assume that L_BRC ⋆ |Φ_C⟩⟨Φ_C| is separable for all pure states |Φ_C⟩. As any POVM element E_C can -up to normalization -be written as a convex combination of pure states, this implies

L_BRC ⋆ E_C = Σ_μ η_B^(μ) ⊗ η̃_R^(μ),

where η_B^(μ) and η̃_R^(μ) are positive matrices. Consequently, the conditional comb Υ_AB|c = ρ_AR ⋆ L_BRC ⋆ E_C is separable in the splitting A : B for every POVM element E_C.

Denoting F_BR := L_BRC ⋆ |Φ_C⟩⟨Φ_C|, Prop. 3 clarifies the role the map L plays. If F_BR is entangled, then Charlie can 'entangle' R and B (and, in turn, possibly A and B) by conditioning on one of his outcomes, thus allowing Bob to send quantum information to Alice. In general, F_BR is not proportional to the Choi matrix of a channel (i.e., a CPTP map), but of a trace non-increasing CP map. Consequently, the probability p for Charlie to obtain an outcome corresponding to Φ_C depends on Bob's input state. On the other hand -as has been noted in [69] and, in a slightly different context, in [27,74] -if F_BR = q F̃_BR is proportional to the Choi matrix of a CPTP map F̃_BR, then this probability p does not depend on Bob's input state τ_B; this follows by direct computation, using the fact that F̃_BR is (the Choi matrix of) a CPTP map and that {ρ_AR ⊗ τ_B} are quantum states.

To make this statement more concrete and to provide an explicit example, consider a variation of a teleportation scheme (without classical communication); let the system-environment map L be a measure and prepare channel of the form

L[ρ_BR] = Σ_i tr(F^(i) ρ_BR) |i⟩⟨i|_C,

where {F^(i)} are POVM elements that sum to identity, and the initial system-environment state is the maximally entangled state (see Fig. 6). Choosing F^(0) = |Φ^+_BR⟩⟨Φ^+_BR|, we see that if Charlie measures in the computational basis, then Υ_AB|0 = (1/2) Φ^+_AB and Υ_AB|1 = (1/2)(1_AB − Φ^+_AB). Both of them are proportional to Choi matrices of CPTP maps. In particular, Υ_AB|0 is proportional to the identity channel from Bob to Alice, while Υ_AB|1 is proportional to an entanglement breaking channel. The probability of simulation is independent of Bob's input state. For example, we have

p = tr[Υ_AB|0 (1_A ⊗ τ_B^T)] = (1/2) tr[Φ^+_AB (1_A ⊗ τ_B^T)] = 1/4

for any input state τ_B. This simulation probability of 1/4 is the maximal achievable simulation probability for an identity channel from the future to the past [69]. Having the above resources at hand, Alice, Bob, and Charlie can thus simulate quantum channels from the future to the past (albeit not deterministically).

The necessary conditions for E(A : C|b) > 0, E(B : C|a) > 0, and E(A : B|c) > 0 shed light on the underlying necessary resources as well as the intuitive interpretation of the respective cases. However, the requirements we found are not sufficient to ensure the existence of any of the three kinds of bipartite entanglement we discussed, and simple examples that satisfy the respective necessary conditions but do not display entanglement in the considered splitting can readily be constructed. Before advancing to the genuinely tripartite case, we now investigate bipartite entanglement from a more global perspective and link it to the entanglement breaking properties of certain maps.

In the previous section we derived necessary conditions on a process not to be biseparable in some splitting. Here we will provide a sufficient condition for processes to be entangled in the splittings AB:C and AC:B. In particular, we will show that if a certain map is not entanglement breaking, one observes bipartite entanglement in Υ_BC. As entanglement cannot be created by tracing out one party, this implies that in this case the corresponding comb has to be entangled across the bipartite splittings AB:C and AC:B.
More precisely, we consider the map

Σ[τ_B] = L[τ_B ⊗ ρ_R],   (26)

where ρ_R = tr_A[ρ_AR] and L are given by the process. Then one can show the following Lemma:

Lemma 4. Let Υ_ABC be a process defined by ρ_AR and L, Υ_BC = tr_A(Υ_ABC), and Σ be the CPTP map associated to this process via Eq. (26). Then Υ_BC is separable if and only if the map Σ is entanglement breaking.
Proof. Recall that Υ_BC = tr_A(Υ_ABC) = ρ_R ⋆ L_BRC, which is nothing but the Choi matrix of the map Σ defined in Eq. (26) (see also Fig. 7). Hence, we have that Υ_BC ∝ (I ⊗ Σ)[|Φ^+⟩⟨Φ^+|_BB'], where Σ acts on the primed copy. This yields a separable state if and only if Σ is an entanglement breaking map (see [55] and references therein), which proves the Lemma.
Note that this implies that, in case Σ is not entanglement breaking, Υ_BC is entangled, and so is Υ_ABC across the bipartite splittings AB:C and AC:B. The converse is not necessarily true: by taking the partial trace, entanglement may vanish, and hence Υ_ABC may show bipartite entanglement even though Σ is entanglement breaking. An example of a genuinely multipartite entangled process for which Υ_BC is separable is given in Eq. (50) with n = 3 (see Section 7).
In summary, we have shown that if one disregards the initial state, inserts half of a maximally entangled state at the first time step, and still observes entanglement after the process has taken place, then the quantum comb describing this dynamics has to be entangled in the bipartite splittings AB:C and AC:B.
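As an illustration of Lemma 4, the following sketch (our own toy example, not taken from the paper) builds the Choi matrix of the map Σ of Eq. (26) for a dilation in which L is a CNOT between the system and a qubit environment; whether Σ is entanglement breaking (here decidable by PPT, since Σ is a qubit-to-qubit channel) depends on the environment state ρ_R supplied by the process.

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)   # control: system wire B, target: environment R

def choi_of_sigma(rho_R):
    """(Unnormalized) Choi matrix of Sigma[tau] = tr_R[ CNOT (tau (x) rho_R) CNOT ]."""
    M = np.zeros((4, 4))
    for i in range(2):
        for j in range(2):
            E = np.zeros((2, 2)); E[i, j] = 1.0
            out = CNOT @ np.kron(E, rho_R) @ CNOT.T
            out_C = np.trace(out.reshape(2, 2, 2, 2), axis1=1, axis2=3)  # trace out R
            M += np.kron(E, out_C)
    return M

def ppt(rho):
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(pt).min() > -1e-12

plus = np.array([1, 1]) / np.sqrt(2)
print(ppt(choi_of_sigma(np.diag([1.0, 0.0]))))     # True: Sigma is EB (dephasing), no conclusion
print(ppt(choi_of_sigma(np.outer(plus, plus))))    # False: Upsilon_BC is entangled, so the comb
                                                   # is entangled across AB:C and AC:B
```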
Channel steering and quantum combs
We finish our discussion of sufficient conditions for bipartite entanglement in combs with the case E(A : BC) > 0. While the corresponding conditions on their own are not very illuminating, it is nonetheless insightful to consider this type of bipartite quantum correlations and connect it to the concept of channel steering [35].
To this end, consider a quantum channel K : B(H_B) → B(H_A) ⊗ B(H_C) with corresponding Choi matrix K_BAC (such channels with one input and two output spaces are also called 'broadcasting channels' [75]). As K is assumed to be a channel, it satisfies tr_AC(K_BAC) = 1_B. From Eq. (8), we see that any comb Υ_ABC also satisfies tr_AC(Υ_ABC) = 1_B (with the additional constraint tr_C(Υ_ABC) = 1_B ⊗ Υ_A), and can thus be considered as a quantum broadcast channel (see Fig. 8). Consequently, while originally applied to general broadcast channels [35], all of the following considerations directly apply to combs. Given such a channel K_BAC, Alice could measure the system A, using a POVM {(E_A^{a|x})^T}_a, where x denotes the choice of POVM, a corresponds to the respective outcome, and the transpose is added to simplify the subsequent notation. Conditionally on each of Alice's outcomes, the mapping between Bob and Charlie would be given by (trace non-increasing) CP maps K_BC^{a|x} that add up to a CPTP map K_BC = Σ_a K_BC^{a|x}. In this way, Alice could create a collection {K_BC^{a|x}}_{a,x} of instruments that all add up to the same channel K_BC. Such a collection -in close analogy with the theory of steering of quantum states [13][14][15] -is called a 'channel assemblage' of the channel K_BC. A channel assemblage is said to be unsteerable, if there exists an instrument {K_BC^λ}_λ, probabilities p(λ) and conditional probabilities p(a|x, λ), such that for all {a, x} one has 3

K_BC^{a|x} = Σ_λ p(λ) p(a|x, λ) K_BC^λ.

Otherwise, the assemblage is steerable. If Alice can create a steerable assemblage, then the channel K_BAC is said to be steerable. Evidently, channel steerability in the above sense is equivalent to steerability of the (normalized) state K_BAC by Alice [35]. In our case, channel steerability implies that Alice, by performing measurements, can steer the mapping between Bob and Charlie. As entanglement is a prerequisite for steerability [13], here, entanglement of Υ_ABC in the splitting A : BC is a prerequisite for Alice being able to steer Bob and Charlie. In line with our above considerations, we thus obtain the following Proposition:

Proposition 5. If the comb Υ_ABC is steerable by Alice acting on A, then the initial system-environment state ρ_AR is steerable by Alice acting on A.
Proof. Using different instruments (labelled by x) with outcomes a, Alice can create a state assemblage {ρ_R^{a|x}}_{a,x} of the state ρ_R = tr_A(ρ_AR), i.e., ρ_R^{a|x} = ρ_AR ⋆ E_A^{a|x} and Σ_a ρ_R^{a|x} = ρ_R for all choices of instruments x. Let us assume that the state ρ_AR is unsteerable on Alice's side. This means that, for any state assemblage {ρ_R^{a|x}}_{a,x}, there exists a set {ρ_R^λ} of subnormalized states, probabilities p(λ) and conditional probabilities p(a|x, λ), such that

ρ_R^{a|x} = Σ_λ p(λ) p(a|x, λ) ρ_R^λ.

Using this identity, we see that for any conditional mapping Υ_BC^{a|x} between Bob and Charlie, we have

Υ_BC^{a|x} = ρ_R^{a|x} ⋆ L_BRC = Σ_λ p(λ) p(a|x, λ) ρ_R^λ ⋆ L_BRC =: Σ_λ p(λ) p(a|x, λ) L_BC^λ.

It is easy to check that {L_BC^λ} constitutes an instrument, implying that any channel assemblage {Υ_BC^{a|x}} is unsteerable if ρ_AR is unsteerable on Alice's side. This, in turn, means that if the broadcast channel Υ_ABC is steerable by Alice, then so is ρ_AR.

Figure 9: Process with an entanglement breaking map on at least one of its spaces. If the circuit of a process can be represented with an entanglement breaking (EB) channel on one of its wires, then the resulting comb Υ_ABC is separable in the corresponding cut. For example, an entanglement breaking channel on the environment R implies that Υ_ABC is separable in the splitting A : BC. If there are two entanglement breaking channels (independent of what two wires they act on), then the resulting comb is fully separable. For better tracking of the involved spaces, the input and output spaces of the EB channels are labelled differently.
Having analysed necessary and sufficient conditions on the dynamical building blocks for entanglement in different splittings of combs, as well as the interpretation of these quantum correlations, we finish our discussion of the bipartite case, providing a connection between biseparable combs and entanglement breaking channels.
Separability and entanglement breaking channels
As we have seen in Sec. 4.1, separability of a comb Υ_ABC in the splittings AB : C and AC : B is directly related to the entanglement breaking property of a map (given by Eq. (26)) that arises naturally from the building blocks of the comb. Such a relation between the separability of a comb Υ_ABC in a given splitting and entanglement breaking (EB) maps within the circuit that yields said comb can be made more concrete, complementing the results of Sec. 4.1. Specifically, here we investigate the question of whether the presence of an entanglement breaking map on one of the wires (see Fig. 9 for a graphical representation) implies separability of Υ_ABC in a given splitting, and vice versa. We give an affirmative answer for the splitting C : AB and provide a partial resolution for the cases A : BC and B : AC.
Entanglement breaking channels imply separable combs
First, it is straightforward to show that the presence of an EB channel on any of the wires leads to a comb Υ_ABC that is separable with respect to a particular splitting. Concretely, we will say that a comb Υ_ABC can be represented with an EB channel on one of its wires, if it admits a decomposition with an EB channel. For example, for the wire R, this would mean that Υ_ABC can be written as Υ_ABC = ρ_AR ⋆ N^EB_RR' ⋆ L_BR'C (see Fig. 9), where N^EB_RR' is the Choi state of an entanglement breaking channel from R to R'. With this, we have the following Proposition:

Proposition 6. If a comb Υ_ABC can be represented with an entanglement breaking channel on one of its wires, then it is separable in at least one possible splitting. In particular, an entanglement breaking channel on A or R implies E(A : BC) = 0, on B implies E(B : AC) = 0, and on C implies E(C : AB) = 0.
The proof can be found in App. B. For the case of an EB channel on R, within the study of quantum memory, a similar Proposition has been shown in [32]. There, processes on two times that only display classical memory were defined as those that have an EB channel on R. Additionally, related results can be found in [34], where entanglement breaking superchannels and their representations are analysed. Here, we provide direct simple proofs that will also allow us to show the converse for one of the cases and to pinpoint the difficulties with proving the converse of the other two.
From the above Proposition, it is evident that a circuit with at least two entanglement breaking channels in its representation (as long as they are not on A and R) is separable with respect to two distinct splittings. While this fact in itself does not yet imply that it is fully separable, we have the following Lemma: Lemma 7. If a circuit can be represented with an EB channel on any two of the wires {A/R, B, C}, it is fully separable. The proof can be found in App. C. While an EB channel on one of the wires always leads to a comb that is separable in the corresponding splitting, it is a priori unclear whether every separable comb Υ ABC has a representation that contains an EB channel on the correct wire.
E(C : AB) = 0 implies EB channel on C
First, consider the case E(C : AB) = 0. We have the following Proposition: Proposition 8. If a proper comb Υ ABC satisfies E(C : AB) = 0, then there exists an initial state ρ AR, a CPTP map L BRC′ and an EB channel N^EB_{CC′}, such that Υ ABC = ρ AR L BRC′ N^EB_{CC′}.
Proof. If E(C : AB) = 0, then Υ ABC is of the form where {p(α)} are probabilities that sum up to one, {ξ C } are states on the respective spaces, and d B = dim(H B ). Here, and in the following proofs, for clarity of the exposition, we make the normalization of the comb evident by factoring out the factor d B . If Υ ABC is a proper comb, then so is where α|α C = δ αα . Now, choosing N EB C , we can concatenate Υ ABC and N EB C C to obtain which implies that Υ ABC can be understood as coming from a process given by Υ ABC , with an additional entanglement breaking channel on the final output leg C . As Υ ABC is a proper comb, there is an underlying circuit with initial state ρ AR and CPTP map L BRC that leads to it. This concludes the proof.
While the above Proposition provides a constructive way to obtain a circuit with an entanglement breaking map on C for any comb Υ ABC that is separable in the splitting C : AB, it comes with a caveat. In principle, depending on how many terms make up d_B Σ_α p(α) ρ_C^(α), the dimension of C′ can be much bigger than the dimension of C. It remains an open question whether there is also always a realization with an EB channel that satisfies dim(H_C′) = dim(H_C).
Before advancing, let us emphasize the operational meaning of the absence of entanglement in the C : AB splitting in terms of the action of the corresponding comb on a bipartite state ρ_BB′ that is fed into the process. The resulting tripartite state on AB′C is separable in the splitting C : AB′, but potentially entangled in the A : B′C splitting. Consequently, any comb that satisfies E(C : AB) = 0 breaks the original entanglement of any state ρ_BB′ that is fed into the process (otherwise, entanglement between B′ and C would have to be present in the above state), but potentially entangles it with another system (here, the system A). In this sense, entanglement of an input state can merely be swapped, but not be distributed between all three parties. We will see below that similar operational statements also apply to combs that are separable in different splittings.
E(A : BC) = 0 and EB channels on A
The case of combs that are separable in the splitting A : BC has been discussed in [32] as a potential superset of the set of processes that only display classical memory (i.e., processes where only classical information is fed forward on the environment line R), and the question was left open whether it constitutes a proper superset. Phrased in our nomenclature, this amounts to the question of whether every comb that is separable with respect to the splitting A : BC can be represented as a comb with an EB channel on A (or, equivalently, R). Here, we provide a partial answer to this question. To do so, we need the following Lemma: Lemma 9. Any proper comb of the form Υ ABC = Σ_α ρ_A^(α) ⊗ G_BC^(α), where {ρ_A^(α)} is a set of quantum states and each G_BC^(α) is proportional to the Choi matrix of a CPTP map, can be represented with an EB channel on A. Proof. By assumption, we have tr_C(G_BC^(α)) = μ_α 1_B with G_BC^(α) ≥ 0. From tr_AC(Υ ABC) = 1_B, it follows that 0 < μ_α ≤ 1 and Σ_α μ_α = 1, implying that {μ_α}_α is a probability distribution. Now, choose an initial system-environment state built from the states {ρ_A^(α)} and orthogonal flag states on R with ⟨α|α′⟩ = δ_{αα′}, and a map L BRC = Σ_α |α⟩⟨α|_R ⊗ G_BC^(α) (with the normalization fixed by the μ_α); concatenating these building blocks reproduces Υ ABC. While in the above proof we consider the case of representing the comb Υ ABC by means of an EB channel on A, in closer correspondence with the considerations of [32], we could also have considered a representation by means of an EB channel on R. Concretely, choosing an EB channel N^EB_{RR′} = Σ_α |α⟩⟨α|_R ⊗ |α⟩⟨α|_{R′} on the environment, one sees that, with the appropriate relabeling, ρ_AR′ = N^EB_{RR′} ρ AR, where ρ_AR′ ≅ ρ AR, implying that inserting the EB channel N^EB_{RR′} on R does not change the resulting comb Υ ABC. Using Lem. 9, we can show that a proper subset of combs that are separable in the A : BC cut admit a representation that includes an EB channel on A.
A } is a set of quantum states and G BC between Bob and Charlie is given by Up to a normalization constant, given by the probability is a CPTP map, and we set F BC . This property can now be used to show that every G Consequently, for all α, tr C (G (α) j q j 1 B holds, which implies that all matrices in BC } are proportional to CPTP maps. By Lem. 9 the comb Υ ABC can then be represented with an EB channel on A.
As for the previous Lemma, the above proof also holds for an EB channel on R. Importantly, though, not every Υ ABC that is separable in the A : BC splitting can necessarily be represented by means of linearly independent states {ρ_A^(α)}_α. If this requirement is not fulfilled, then the matrices G_BC^(α) cannot be 'addressed' anymore by means of duals, and the above proof fails to work. In particular, as the causality constraints only require that Σ_α G_BC^(α) is CPTP, there might exist proper combs Υ ABC that can be decomposed as Υ ABC = Σ_α ρ_A^(α) ⊗ G_BC^(α), but cannot be represented as a separable matrix with all terms on BC proportional to CPTP maps.
As at the end of the previous section, we can, again, give an operational meaning to separability of a comb in the A : BC splitting in terms of its action on an input state ρ_BB′. By direct insertion, we see that for any input state ρ_BB′ the resulting tripartite state Υ ABC ρ_BB′ is separable in the A : B′C splitting and can thus at most be entangled between B′ and C, but not between B′ and A. This, in turn, implies that a process described by a comb that satisfies E(A : BC) = 0 allows one to retain entanglement, but cannot entangle input states with other degrees of freedom.
E(B : AC) = 0 and EB channels on B
As for the case of combs that are separable in the splitting A : BC, the causality constraints that Υ ABC has to satisfy only allow for a partial result concerning the connection of EB channels on B and the separability of Υ ABC in the splitting B : AC. We have the following Proposition: Proof. First, using the causality constraints on Υ ABC, we see that the reduced state in Alice's lab is independent of the state that Bob feeds into the process, i.e., the corresponding equality holds for any quantum states {τ_B, τ_B′}, where 1_C is the Choi matrix of the operation that discards the system, the only trace preserving operation Charlie can perform [68]. Additionally, the causality constraint implies an analogous relation for the remaining marginals. With this, we obtain a decomposition where the complex conjugate is required to comply with the definition of the link product. From Eq. (39) it follows that tr_C(ρ_AC^(α)) = ρ_A for all α. Then, it is easy to see that the resulting matrix is also a proper comb, as it is positive and satisfies the causality constraints of Eq. (8). Now, defining the EB channel N^EB_B accordingly concludes the proof.

Figure 10: Different sets of separable combs. All states outside the convex hull of separable combs are GME. As we show in the main text, there are processes that are not bi-separable, but also not GME (the red dot in the figure would correspond to such a process).
Again, as for the other two possible separable combs, there is a direct operational meaning to separability in the B : AC splitting. Specifically, it implies that the resulting tripartite state Υ ABC ρ_BB′ is separable in the B′ : AC splitting for any input state ρ_BB′. In turn, this means that the corresponding process breaks any entanglement of its input states (as B′ is not entangled with the other degrees of freedom of the output state), but can be a source of entanglement between different degrees of freedom (here, A and C).
Having discussed the bipartite case in detail, we now turn to the case of genuine multipartite entanglement, and its structure for the case of quantum processes.
Example of a biseparable process which shows non-zero bipartite entanglement in all splittings
Any genuinely multipartite entangled process has to be in the intersection of E(A : BC) > 0, E(C : AB) > 0 and E(B : AC) > 0, as by definition it cannot be separable with respect to any bipartition. However, the conditions E(A : BC) > 0, E(C : AB) > 0 and E(B : AC) > 0 do not imply genuine multipartite entanglement. This is well known (and straightforward to see) for states and also holds true for processes, as the following example shows. The process is not genuinely multipartite entangled. However, for 1 > p > 1/3 it has the following property: E(A : C) > 0 and E(B : C) > 0 (as the reduced states are not PPT). This implies that also E(A : BC) > 0 and E(B : AC) > 0. Moreover, the process is also not PPT with respect to the splitting C : AB and hence E(C : AB) > 0. This shows that there exist biseparable processes in the intersection of E(A : BC) > 0, E(C : AB) > 0 and E(B : AC) > 0. Analogously, it also implies that E(A : C) > 0 and E(B : C) > 0 do not guarantee GME for processes either (note that E(A : B) = 0 for any process due to the causality constraints). While we have that for this process E(A : C|b) > 0 and E(B : C|a) > 0 (for, e.g., trivial measurements on the respective parties), it is straightforward to see that there exists no measurement such that E(A : B|c) > 0. One may wonder whether the intersection of E(A : B|c) > 0, E(A : C|b) > 0 and E(B : C|a) > 0 also contains biseparable processes, and the answer is yes. An example of such a biseparable process exists for 1/12(√33 − 3) < p < 1/2. That this process is indeed in the intersection can easily be proven by considering the post-measurement state obtained by measuring |0⟩⟨0|_X for X ∈ {A, B, C}, respectively, and showing that its partial transpose has a negative eigenvalue, i.e., it is entangled.
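To make the PPT checks invoked in this example concrete, the following minimal numpy sketch (an illustration, not code from the paper) computes the partial transpose of a two-qubit density matrix and its smallest eigenvalue; a negative value certifies entanglement of the corresponding reduced or post-measurement state.

```python
# Minimal sketch of the two-qubit PPT test used above.
import numpy as np

def min_eig_partial_transpose(rho):
    """Smallest eigenvalue of the partial transpose (on the second qubit)
    of a 4x4 two-qubit density matrix; a negative value means NPT/entangled."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(pt).min()

# Example: a maximally entangled two-qubit state is NPT.
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(min_eig_partial_transpose(np.outer(phi, phi)))  # -> -0.5
```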
Genuinely multipartite entangled processes
So far we investigated conditions on processes to show (conditional) bipartite entanglement. In this section we will consider genuinely multipartite entangled processes, i.e., the set of processes that lie outside the convex hull of biseparable ones. This discussion is similar in spirit to the one conducted in [30,31], where processes that cannot be understood as a mixture of direct and common cause processes were analysed. Our emphasis, however, lies on the genuine multipartite entanglement properties.
In particular, we will focus on the three-qubit case and provide examples of processes within each of the SLOCC classes, more precisely an example inside the W-class and one of a process in GHZ\W. This implies that (at least in the three-qubit case) all types of mixed-state entanglement can occur for processes. Let us finally note for completeness that there also exist biseparable processes that are not separable with respect to any single splitting, but only arise as mixtures of processes that are (see Section 6).
The first example of a genuinely multipartite entangled process is given in Eq. (47). The fact that this process is indeed GME can be straightforwardly checked by means of the SDP in Eq. (12). Additionally, it constitutes a valid process. As the comb of Eq. (47) satisfies the causality constraints of (8), there exists a quantum circuit with a pure initial state and a unitary system-environment dynamics that leads to it. We provide this circuit in App. D and show explicitly that its building blocks satisfy the properties laid out in the previous sections. The process given in Eq. (47) has the following properties: a) It has rank 2, which is the smallest possible rank for a genuinely multipartite entangled process. The latter can be easily seen by noting that there exists no genuinely multipartite entangled pure three-qubit state for which the causality constraint can be fulfilled (as already discussed in Sec. 2.2). b) It is in the W-class, as it is a convex combination of two pure states within the W-class. c) It can easily be shown that this process is in the intersection of E(A : B|c) > 0, E(C : A|b) > 0 and E(B : C|a) > 0. Note that this is not necessarily true for any GME process. In Sec. 8 and App. G we provide an example of a process with vanishing conditional entanglement for all splittings.
The second example is a construction of genuinely multipartite entangled processes for n qubits for which, in the three-qubit case, there exists no decomposition into pure fully separable, biseparable and W-class states, i.e., this process is not in the W-class. As we will show next, for arbitrary n the processes Υ^(n) given in Eq. (50) are genuinely multipartite entangled, where |GHZ_n⟩ = (|0 . . . 0⟩ + |1 . . . 1⟩)/√2 and 1_k is the identity acting on k qubits. It can be straightforwardly checked that they are valid processes, as performing the partial trace over the last qubit results in tr_n(Υ^(n)) being of the required form, and hence all causality constraints are satisfied. Note that also all other marginals are separable. This implies that even though the process is genuinely multipartite entangled, all entanglement vanishes as soon as one of the parties is discarded. In order to show that these processes are genuinely multipartite entangled, we use a necessary condition for biseparability first introduced in [61], which states that any biseparable n-qubit state ρ^(n) fulfills a corresponding inequality (see Sec. 2.2 for further details). As can easily be seen, Υ^(n) violates this bound. Therefore, the necessary condition for biseparability in Eq. (52) is violated and the state is genuinely multipartite entangled.
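The explicit matrix of Eq. (50) is not reproduced in this excerpt, but the statement that discarding a party destroys all entanglement can be illustrated with a small hedged numpy sketch: the helper below traces out the last qubit (it can equally be used to inspect the causality constraint tr_n(Υ^(n)) of a candidate comb), and the GHZ ingredient is used as a stand-in input.

```python
# Sketch: partial trace over the last qubit, e.g. to inspect marginals of a
# GHZ-type comb or to check the causality constraint tr_n(Upsilon^(n)).
import numpy as np

def trace_out_last_qubit(rho, n_qubits):
    d = 2 ** (n_qubits - 1)
    T = rho.reshape(d, 2, d, 2)            # indices (rest, last | rest', last')
    return np.trace(T, axis1=1, axis2=3)   # sum over the last qubit

ghz3 = np.zeros(8); ghz3[0] = ghz3[7] = 1 / np.sqrt(2)
marginal = trace_out_last_qubit(np.outer(ghz3, ghz3), 3)
print(np.round(marginal, 3))   # diagonal, hence a separable two-qubit state
```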
Figure 11: Circuit that leads to a four-partite GME comb. Note that using a Swap gate instead of √S would not yield a GME comb (see App. F).

Moreover, it should be noted that for n = 3 this process has non-zero tangle τ (see App. E) and is therefore not in the W-class. Hence, all different types of mixed three-qubit entanglement can occur for processes. Note that for the process it holds that E(A : C) = 0 and E(B : C) = 0, in spite of it being genuinely multipartite entangled. Note further that it can easily be verified that the state is in the intersection of E(A : B|c) > 0, E(C : A|b) > 0 and E(B : C|a) > 0.
In App. F we provide a further simple example of a genuinely multipartite entangled comb on four parties. Specifically, the corresponding circuit only requires the two-fold application of √S, where S is the Swap gate (see Fig. 11). In the appendix we show that the corresponding comb is indeed genuinely multipartite entangled.
Finally, returning to the operational meaning of entanglement in combs, as alluded to in Sec. 2.3, it is clear that a GME comb is a resource that allows one to transform bipartite entanglement, present in an input state ρ_BB′, into genuine multipartite entanglement. This, naturally, extends to an arbitrary number of times, in the sense that, starting from k − 1 bipartite entangled states, a GME comb on k times enables the creation of a genuinely multipartite entangled quantum state.
GME Processes with no creatable entanglement
We have seen in Sec. 6 that there exist processes that are entangled in any bipartition, yet not genuinely multipartite entangled. While all GME processes are entangled in any bipartite splitting, the same does not hold true for their reduced processes; in particular, we have tr C (Υ ABC ) = 1 B ⊗Υ A . However, as discussed in Sec. 3, there can be conditional entanglement in any of the reduced processes. This begs the question of whether there are GME processes that do not display conditional entanglement in any of their reduced versions, i.e., GME processes where no party can induce entanglement between the remaining two parties by means of a measurement.
This question has been answered for states in [33], where a GME state with separable conditional marginals was provided, and, in addition, it was shown that the GME of the state can be detected from the separable marginals. As such a state is GME but does not even conditionally display any entanglement in its marginals, it has no creatable entanglement in its subsystems. Here, we investigate whether this phenomenon also exists for processes, i.e., whether there are proper combs Υ ABC that are GME but have no creatable entanglement (this situation is depicted in Fig. 12).

Figure 12: GME combs without creatable entanglement. In the main text, we provide a comb that lies outside the convex hull of separable combs, i.e., it is GME, but does not display any conditional entanglement. Exemplarily, this process is depicted by a red dot.
To find such a process -defined on three qubit Hilbert spaces H_A, H_B, and H_C -we follow the procedure detailed in [33]; there, GME states with vanishing conditional entanglement have been found by means of a see-saw SDP. Starting from some initial state ρ_0, first, an optimal witness for GME is found by means of the SDP in Eq. (12). Then, employing a second SDP, an optimal GME state is found for this witness, with the additional requirement that the conditional marginals of said state are separable for a 'sufficient' (∼ 1000 in [33]) number of projective measurements. More precisely, the conditional marginals of said state (with respect to the projective measurements) are required to be strictly positive under partial transpose. Iterating this procedure yields a good guess ρ for a GME state with vanishing conditional entanglement, and it can then be shown numerically that the found state actually possesses the desired properties (see [33] for more details on the employed algorithm). Here, we can use the same algorithm, with the additional constraint that the 'states' we consider satisfy the causality constraint of Eq. (8). With this, we find a candidate process, for which we can verify numerically that it has vanishing conditional entanglement for all measurements. We provide this process in App. G.
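The GME test referred to here (the SDP of Eq. (12)) is not written out in this excerpt. The following hedged cvxpy sketch shows a standard PPT-mixture ("fully decomposable witness") relaxation in the spirit of such a test; the normalization W ≤ 1, the function name and the example state are assumptions of this sketch, and a recent cvxpy (providing the partial_transpose atom) is required. A negative optimal value certifies that the input is not a PPT mixture and hence genuinely multipartite entangled.

```python
# Hedged sketch of a PPT-mixture relaxation for detecting GME (three parties).
import numpy as np
import cvxpy as cp

def ppt_mixture_value(rho, dims=(2, 2, 2)):
    d = int(np.prod(dims))
    W = cp.Variable((d, d), hermitian=True)
    constraints = [W << np.eye(d)]                 # normalization W <= 1 (assumed)
    for axis in range(len(dims)):                  # bipartitions A|BC, B|AC, C|AB
        P = cp.Variable((d, d), hermitian=True)
        Q = cp.Variable((d, d), hermitian=True)
        constraints += [
            P >> 0, Q >> 0,
            W == P + cp.partial_transpose(Q, list(dims), axis),
        ]
    prob = cp.Problem(cp.Minimize(cp.real(cp.trace(W @ rho))), constraints)
    prob.solve()
    return prob.value   # negative => not a PPT mixture => GME

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
print(ppt_mixture_value(np.outer(ghz, ghz)))       # clearly negative for GHZ
```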
Given that an action of any of the parties destroys it, we conclude that the GME such processes possess is somewhat inaccessible; if Bob feeds forward any state, Alice and Charlie do not share an entangled state, while any measurement on Alice's (Charlie's) system will only allow for an entanglement breaking channel between Bob and Charlie (Alice).
Conclusion and outlook
In this work -using the quantum comb framework -we studied the entanglement properties of temporal processes. Despite the vast body of work on the structure and stratification of bi- and multipartite entanglement, due to the constraints imposed by causality it had, a priori, not been clear what types of entanglement and well-known phenomena from entanglement theory can also persist in the temporal case. Here, we provided an overview of the types of entanglement that can occur in quantum combs, and furnished these different types of temporal quantum correlations with clear-cut operational interpretations; both in terms of the implications for the properties of the corresponding temporal process, and in terms of the building blocks of the underlying quantum circuits.
First, we derived both necessary and sufficient conditions for bipartite (conditional or unconditional) entanglement and discussed its relation to entanglement-breaking channels and channel steering (for the case E(BC : A) > 0). Based on these results, a more refined notion of entanglement for temporal processes suggests itself. While here we considered the standard definition of entanglement, i.e., based on convex combinations of product states, for processes an alternative definition in terms of product processes seems equally reasonable. Then, a process would be entangled if it could not be written as a convex combination of processes that factorize in at least one splitting. Investigating the operational and structural implication of such a more fine-grained definition of entanglement for processes is subject to future research.
Next, we demonstrated that there exist genuinely multipartite entangled processes with an arbitrary number of steps, and that all types of three-qubit mixed state entanglement can occur in the case where only two times are considered. Our investigation showed that there exist genuinely multipartite entangled processes with separable marginals. Finally, we provided an example of a GME process that does not display creatable entanglement, mirroring analogous results for the case of spatial entanglement.
Along the way, we related the separability of quantum combs in different splittings to the presence of entanglement breaking channels in their circuit representation. While the presence of an entanglement breaking channel in the underlying dynamics leads to a separable comb, the converse is not necessarily true for all possible splittings. We provided partial results on when the converse actually holds. Besides this interpretation in dynamical terms, we also gave direct operational interpretations of separable combs. Depending on the splitting, separable combs break and/or swap entanglement of bipartite input states, thus extending the corresponding results for the case of quantum channels to the multi-time case.
Our research sheds light on the interplay between the underlying physical process and the entanglement properties of the corresponding comb. There are two follow-up questions that suggest themselves: Can entanglement in time be understood on a deeper structural level, and how can genuine quantum correlations in time be exploited for the enhancement of quantum information processing tasks?
The former question concerns the peculiar structure of how entanglement in time is 'created'. The only system that allows for the interaction between non-adjacent parties is the environment, which, in the end, is discarded. The environment 'collides' with all the parties, and in doing so, transmits correlations between them. Such collision models have, for example, been used to investigate the creation of multipartite (spatial) entanglement [77]. The entanglement structure of such processes inevitably will affect their quantum complexity. A detailed investigation of the implications for the strength and distribution that entanglement in combs can display is subject to future work.
Considering the latter question, we have seen that many of the observed entanglement phenomena also exist in the spatial setting. It is important though, to emphasize that combs describe fundamentally different experimental scenarios than quantum states; quantum states allow for the computation of correlations for the case where spatially separated, non-communicating parties perform independent measurements. Quantum combs on the other hand describe communication scenarios, where spatial and temporal correlation are concurrently of importance. The roles of the different parties are not limited to measurements, but consist of a read-out of information as well as a feed-forward of it. Quantum stochastic processes with nontrivial entanglement structure, in an operational setting, can be considered as a resource [78,79], much in the same way as usual quantum channels [80,81]. Consequently, having developed an understanding of the entanglement structure of spatio-temporal processes, a natural next step will be -based on the results we provided -to investigate how such genuine quantum correlations can be of use in complex quantum information tasks that require both common cause and direct cause causal relations for their successful implementation.
Finally, it is in principle possible to drop the requirement of causal order without a priori creating paradoxical situations. Such scenarios are described by process matrices, mathematical objects akin to quantum combs, but with the fundamental difference that they do not ensure global, but only local causality. While it has been shown that such more general processes can lead to advantages in quantum games [24,82], the explicit origin of this advantage is hard to pin-point. Approaching this question based on the correlations that cannot persist in causally ordered combs but can occur in process matrices, might shed new light on the resourcefulness of these exotic objects.
Note. Upon publication of this paper we became aware of Ref. [83], where the authors proved the existence of separable combs that cannot be represented by means of a circuit containing entanglement breaking channels, thus completing our considerations of Sec. 5.

Figure 13: CJI for combs. At each time t_j, one half of a maximally entangled state is fed/swapped into the process. The resulting multipartite quantum state Υ_{n+1} ∈ B(⊗_i H_i) contains all spatio-temporal correlations of the underlying process. For simpler notation, the CJI is generally phrased in terms of unnormalized maximally entangled states, i.e., the comb Υ_{n+1} carries an overall factor d_{o_1} · d_{o_2} ⋯ d_{o_n} relative to its normalized counterpart.
A Choi-Jamiołkowski isomorphism and the link product
In this Appendix, we review the basic properties of the CJI for maps and combs, as well as the link product. For any map M : B(H_X) → B(H_Y), the CJI consists of letting M act on one half of an (unnormalized) maximally entangled state to map it onto a positive (Choi) matrix M_YX = (M ⊗ I)[Φ+], where Φ+ = Σ_{i,j=1}^{d_X} |ii⟩⟨jj| ∈ B(H_X ⊗ H_X′), and H_X′ ≅ H_X. Frequently encountered Choi matrices are the Choi matrix of the identity channel I_{X→Y}[ρ] = ρ, which is given by Φ+_YX, and the Choi matrix of the partial trace tr_X, which is given by 1_X. It is straightforward to see that the map M is CPTP iff its Choi matrix M_YX satisfies M_YX ≥ 0 and tr_Y(M_YX) = 1_X.
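As a concrete illustration of this construction (a sketch under the conventions just stated, not code from the paper), the following numpy snippet builds the Choi matrix of a map specified by Kraus operators by letting it act on one half of the unnormalized maximally entangled state; for a CPTP map the result is positive with tr_Y(M_YX) = 1_X.

```python
# Sketch: Choi matrix M_YX of M[rho] = sum_k K_k rho K_k^dagger via the CJI.
import numpy as np

def choi_matrix(kraus_ops, d_in):
    phi = np.zeros(d_in * d_in, dtype=complex)
    phi[::d_in + 1] = 1.0                      # |Phi+> = sum_i |ii>, unnormalized
    Phi = np.outer(phi, phi.conj())
    d_out = kraus_ops[0].shape[0]
    M = np.zeros((d_out * d_in, d_out * d_in), dtype=complex)
    for K in kraus_ops:
        A = np.kron(K, np.eye(d_in))           # the map acts on the first factor
        M += A @ Phi @ A.conj().T
    return M

# Example: the Choi matrix of the identity channel is Phi+_YX itself.
print(np.round(choi_matrix([np.eye(2)], 2).real, 1))
```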
In a similar vein the CJI exists for combs, with the difference that at each time t j , one half of a (unnormalized) maximally entangled state is fed into the process (see Fig. 13). It is straightforward to check that the resulting quantum state satisfies the hierarchy of trace conditions of Eq. (4), while it has been shown in [19] that any positive matrix that satisfies these conditions can indeed be considered the Choi matrix of a quantum process. Finally, by simply inserting the respective definitions, one obtains the Born rule for temporal processes. It is convenient, to not only phrase the respective maps in terms of their Choi matrices, but also to translate the concatenation of maps in terms of the respective Choi states. For example, the action of a map M : B(H X ) → B(H Y ) on a state ρ X ∈ B(H X ) can be written in terms of Choi states [36] as which, again, can be seen by direct insertion of the definition of M XY . These considerations are generalised by the link product [19], which translates the concatenation G • F of two maps F : B(H X ) → B(H Y ) and G : B(H Y ⊗ H Z ) to an operation on their respective Choi states F Y X and G ZY . The Choi state of G • F is then obtained via For example, Eq. (55) can be understood as the Choi state of the concatenation M • R, where R : C → B(H X ) is the CPTP preparation map that prepares the state ρ X (i.e., the Choi matrix of R is equal to ρ X ). Put somewhat intuitively, the link product traces Choi matrices over the Hilbert spaces they are both defined on, and corresponds to a tensor product on the remaining spaces. From the definition, it can be directly seen that it satisfies and it is commutative if each Hilbert space the respective matrices are defined on appears at most twice in the product [19] (which is always the case in this article). Additionally, the link product of positive matrices is positive as well. To simplify tracking of the involved spaces, it is helpful to add subscripts to the respective Choi matrices. For example, then, it is easy to see that a product of the form is defined on B(H X ⊗ H W ) (as all the other spaces are traced over), and must be of the form P X ⊗G W , where P X = F XY Z H Y L Z . While every operation in this article could be written in terms of link products, it is sometimes more insightful to mix notation. For example, a term of the form tr(F XY Z H Y ) could equivalently be written as 1 XZ F XY Z H Y (since 1 XZ is the Choi state of tr XZ ).
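A minimal numerical companion to the Eq.-(55)-type identity just discussed (a sketch, assuming the ordering Y ⊗ X for the Choi matrix as in the construction above): the action of a map on a state can be evaluated directly from its Choi matrix as M[ρ] = tr_X[(1_Y ⊗ ρ_X^T) M_YX], the simplest instance of the link product.

```python
# Sketch: evaluating M[rho] from the Choi matrix M_YX (ordering Y (x) X).
import numpy as np

def apply_via_choi(M_YX, rho_X, d_Y, d_X):
    lifted = np.kron(np.eye(d_Y), rho_X.T) @ M_YX
    T = lifted.reshape(d_Y, d_X, d_Y, d_X)
    return np.trace(T, axis1=1, axis2=3)        # partial trace over X

# Example: the identity channel (Choi matrix Phi+_YX) returns the input state.
rho = np.array([[0.7, 0.2], [0.2, 0.3]])
Phi = np.zeros((4, 4)); Phi[0, 0] = Phi[0, 3] = Phi[3, 0] = Phi[3, 3] = 1.0
print(apply_via_choi(Phi, rho, 2, 2))
```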
B Proof of Proposition 6
In the following we provide the proof of Proposition 6: Proposition 6. If a comb Υ ABC can be represented with an entanglement breaking channel on one of its wires, then it is separable in at least one possible splitting. In particular, an entanglement breaking channel on A or R implies E(A : BC) = 0, on B implies E(B : AC) = 0, and on C implies E(C : AB) = 0.
Proof. As the proofs for all cases proceed along the same lines, we explicitly show Proposition 6 for EB channels on C and A, with the other two cases following in a similar vein.
First, for better book-keeping of the involved spaces, we will label the respective input and output spaces of the entanglement breaking maps differently, i.e, we have N EB C →C : B(H C ) → B(H C ) and N EB A →A : B(H R ) → B(H R ). Their Choi matrices are then denoted by N EB C C and N EB R R , respectively. Additionally, in order for the resulting comb Υ ABC to be defined on the spaces ABC, we add some primes to the spaces the building blocks of Υ ABC are defined on (see Fig. 9). These relabellings are a mere notational aid and have no bearing on the results.
As each entanglement breaking channel can be represented as a measurement followed by a repreparation, we have N EB X } α is a collection of quantum states, and X ∈ {C, A}. With this, a comb Υ ABC with an entanglement breaking channel on C (and building blocks ρ AR , L BRC ) is given by which, as ρ AR L BRC E (α) C > 0 is separable in the splitting C : AB. Analogously, a comb Υ ABC resulting from a circuit with an entanglement breaking channel on the wire A is given by which -for the same reason as above -is separable in the splitting A : BC. The remaining cases follow in a similar vein.
Note that Eqs. (59) and (60) provide the most general form of a comb that contains an EB channel on the respective wire.
C Proof of Lem. 7

Here, we provide a proof of Lem. 7 in the main text: Lemma 7. If a circuit can be represented with an EB channel on any two of the wires {A/R, B, C}, it is fully separable.
Proof. We present an explicit proof for the case of entanglement breaking channels on the wires A and B. the other cases follow analogously. In this case, the resulting comb Υ ABC can be written as (see Fig. 9): where N EB A A and M EB BB are entanglement breaking channels. Consequently, each of them can be decomposed as N EB where p(α) = tr(ρ A R E D Circuit for a tripartite GME comb Υ ABC Here, we provide an explicit construction for the quantum circuit that leads to the comb Υ ABC provided in Eq (47), which is of the form with As mentioned in the main text, we have tr C (Υ ABC ) = 1 2 1 A ⊗ 1 B . The circuit that leads Υ ABC can be found directly by means of the Stinespring dilation; to this end, first, consider the purification where D is the auxiliary two-dimensional purification system. By construction, |Υ ABCD is a purification of 1 2 1 A ⊗ 1 B , and so is Φ + AA ⊗Φ + BB . As the latter purification is minimal, there is an isometry V A B →DC := V on the purification spaces, such that Consequently, we have This equation corresponds to a quantum circuit of an open system dynamics, where one half of an unnormalized maximally entangled state is fed in. Concretely, said circuit starts in an initial state Φ + AA , the A part of it swapped out of the process, while the B part of the statẽ Φ + BB is fed in. The A B degrees of freedom undergo the unitary evolution V , and finally, the system D is traced out (see Fig. 14 for a graphical representation). This corresponds to the Choi matrix of a comb with building blocks Φ + AA (as the initial system-environment state) and V (as the dynamics between t 1 and t 2 ). For the discussion of their properties, we relabel Φ + AA → Φ + AR and B → B to abide by the notational convention set up in the main text. First, note that Φ + AR is entangled between the system A and the environment R. The isometry V can be constructed directly from Eqs (65) and (66). It is straightforward to see that where the order of the spaces is BR (CD), e.g., the first equation reads V |0 B 0 R = |1 C 0 D . With this, we have which is not just an isometry, but a unitary matrix. The corresponding Choi matrix of the map V[·] = V (·)V † is proportional to a pure state, i.e., it is of the form |v v|, with where, the order of spaces is CDBR (i.e., for example, we have the term |1 C 0 D 0 B 0 R ). Finally, to be able to compare this map to the ones we discussed in Props. 1 -3 the purification space D needs to be traced over, yielding Here, and in what follows, we have switched the ordering of spaces to BRC (i.e., for example, we have the term |0 B 1 R 1 C 1 B 0 R 1 C |) to be consistent with the subscript of L BRC . As already mentioned, this tripartite entangled Υ ABC can satisfy E(X : Y |z) > 0 for all possible bipartitions. Here, we can directly check that its underlying building blocks indeed satisfy the necessary and sufficient conditions presented in the main text. Firstly, it is easy to check that, e.g., |0 0| B L BRC is proportional to an entangled state on H R ⊗ H C , which, according to Prop. 1 is necessary for entanglement of the form E(A : C|b). Analogously, both |0 0| R L BRC and |0 0| C L BRC are proportional to entangled states, respectively. Consequently, the building blocks {Φ + AR , L BRC } of Υ ABC that we derived satisfy all the necessary conditions of Props. 1 -3. While the Stinespring dilation of a comb -and as such its building blocks -is not unique [19], any pair {ρ AR , L BRC } that satisfies ρ AR L BRC = Υ ABC for the comb of Eq. 
(63) would satisfy the same conditions. Additionally, we can show directly, that Υ ABC satisfies the sufficient conditions for entanglement in the splittings AB : C and AC : B laid out in Sec. 4.1. There, we showed that if the Choi matrix L BRC [tr A (ρ AR )] is entangled, then Υ ABC is entangled with respect to both of the splittings AB : C and AC : B. In our case, L BRC tr A (ρ AR ) reads (with spaces arranged in the order BC) tr D (|v v|) tr A (Φ + AR ) = 1 2 tr DR (|v v|) .
Then, from Eq. (72) we have which is (proportional to) an entangled state. Consequently, the maximally tripartite entangled comb Υ ABC we found satisfies all of the necessary and sufficient conditions for bipartite entanglement we discussed in the main text.
E Non-zero tangle for the process in Eq. (50) with n = 3

In order to see that the process in Eq. (50) indeed has a non-zero tangle, note first that if τ(ρ) = 0, then also τ(AρA†/tr(A†Aρ)) = 0 with A = ⊗_i A_i and A_i ∈ GL(2, C) [62,65].
However, as we will show, there exist local invertible operators that transform Υ^(3) into a state which has non-zero tangle, and therefore also τ(Υ^(3)) ≠ 0. These operators can be chosen to be A_1 = A_2 = Diag(1/√α, √α) and A_3 = Diag(α, 1/α) with 0 < α < 1/√3, where Diag(x, y) denotes a diagonal matrix with entries x and y. The transformed state (for which we use, for convenience, the normalization for states) can be detected, for example for α < 1/√3, to be not in the W-class by using the witness [63] W = (3/4) 1 − |GHZ_3⟩⟨GHZ_3|, and therefore has non-zero tangle. Hence, also Υ^(3) is not in the W-class.
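This filtering argument can be checked numerically. The sketch below (an illustration, not the authors' code; the normalized Υ^(3) itself is not reproduced here, so the GHZ state is used as a stand-in input) applies the local invertible operators A_1, A_2, A_3 and evaluates the witness W = (3/4)·1 − |GHZ_3⟩⟨GHZ_3|; a negative expectation value certifies that the filtered state is not in the W-class.

```python
# Sketch: local filtering with A_i and evaluation of the witness of [63].
import numpy as np

def filtered_state(rho, alpha):
    A1 = A2 = np.diag([1 / np.sqrt(alpha), np.sqrt(alpha)])
    A3 = np.diag([alpha, 1 / alpha])
    A = np.kron(np.kron(A1, A2), A3)
    out = A @ rho @ A.conj().T
    return out / np.trace(out)

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
W = 0.75 * np.eye(8) - np.outer(ghz, ghz)         # W = 3/4 * 1 - |GHZ_3><GHZ_3|
rho_filtered = filtered_state(np.outer(ghz, ghz), alpha=0.4)   # GHZ as stand-in
print(np.trace(W @ rho_filtered).real)            # negative => outside the W-class
```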
F Circuit for a GME comb on three steps
Here, we provide an explicit example of a simple circuit that yields a genuinely multipartite entangled comb on three times, i.e., on four Hilbert spaces. In the case we discuss, the system and the environment are each a qubit, and the resulting four-legged comb has an initial input leg (A) at time t_1, a middle slot (BC) at t_2, where general CP maps can be performed, and a final output leg (D) at time t_3 (see Fig. 15). The resulting comb is given in Eq. (77), where the ordering of spaces is ABCD, i.e., for example, |1_A 0_B 0_C 1_D⟩. It is straightforward to see that the comb Υ ABCD of Eq. (77) is positive and satisfies the causality constraints of Eq. (4). Finally, using the SDP (12), it is easy to check that Υ ABCD is indeed genuinely multipartite entangled. Importantly, choosing the Swap operator S instead of √S would not have led to a GME comb. Rather, the Swap would lead to identity channels between the respective inputs and outputs, resulting in an overall non-GME comb Υ^(S) that contains the term 2Φ+_AD, where 2Φ+_AD is the Choi state of the identity channel between A and D.
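For completeness, a small sketch of the gate appearing in this circuit (the assumption here is simply that scipy's principal matrix square root is used to build √S); applying it twice recovers the Swap gate S.

```python
# Sketch: constructing the sqrt(SWAP) gate used in the circuit numerically.
import numpy as np
from scipy.linalg import sqrtm

S = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=complex)
sqrt_S = sqrtm(S)                                        # principal square root of Swap
assert np.allclose(sqrt_S @ sqrt_S, S)                   # two applications give Swap
assert np.allclose(sqrt_S @ sqrt_S.conj().T, np.eye(4))  # and sqrt(S) is unitary
```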
G GME processes with vanishing conditional entanglement
Following the steps laid out in Sec. 8, we find the GME comb with vanishing conditional entanglement given in Eq. (81), where Υ ABC is represented in the standard product basis, i.e., |0⟩ = |0_A 0_B 0_C⟩, |1⟩ = |0_A 0_B 1_C⟩, |2⟩ = |0_A 1_B 0_C⟩, . . . . Up to numerical precision, the above Υ ABC is a proper comb, as it is positive semidefinite, satisfies tr(Υ ABC) = d_B = 2, and we have tr_C(Υ ABC) = (1/2) 1_AB.
Υ ABC is a good candidate for a GME comb that has vanishing conditional entanglement. As all the involved subsystems are qubits, the fact that this is indeed the case can be shown by applying the PPT criterion to the (normalised) post-measurement states. Any pure qubit state |Ψ⟩ can be parameterized in terms of Pauli matrices as |Ψ⟩⟨Ψ| = (1/2)[1 + cos(ϑ) cos(ϕ) σ_x + cos(ϑ) sin(ϕ) σ_y + sin(ϑ) σ_z].
With this, the respective conditioned combs tr_X(Υ ABC |Ψ⟩⟨Ψ|_X) for arbitrary (pure) projective 'measurements' on X ∈ {A, B, C} can be computed 4 . As the resulting conditioned combs are defined on two qubits, their entanglement can be decided by means of the PPT criterion; while it cannot be straightforwardly shown analytically that the resulting conditioned combs are indeed PPT, we can check the positivity of their partial transpose numerically, for a sufficiently large number of angles. Here, we randomly choose 5 × 10^5 uniformly distributed pairs (ϕ, ϑ) ∈ [0, 2π] × [0, π] and compute the minimal eigenvalue of the (normalised) partial transpose ρ^{T_Y}_{YZ}(ϑ, ϕ) of the respective conditioned states (a numerical sketch of this check is given below). The minimal obtained eigenvalues are λ^{AB}_min = 0.0124, λ^{BC}_min = 0.0187, and λ^{AC}_min = 0.0193.
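The following numpy sketch mirrors this sampling check (the matrix of Eq. (81) is not reproduced here, so `Upsilon` is a placeholder for that 8×8 comb, assumed to be ordered A ⊗ B ⊗ C; the projector uses the Bloch parameterization stated above).

```python
# Sketch of the sampling check: condition the comb on a projective measurement
# and record the minimal eigenvalue of the partial transpose of the result.
import numpy as np

def bloch_projector(theta, phi):
    """|Psi><Psi| with the Bloch parameterization used in the text."""
    sx = np.array([[0, 1], [1, 0]]); sy = np.array([[0, -1j], [1j, 0]])
    sz = np.diag([1.0, -1.0])
    return 0.5 * (np.eye(2) + np.cos(theta) * np.cos(phi) * sx
                  + np.cos(theta) * np.sin(phi) * sy + np.sin(theta) * sz)

def min_ppt_eig_conditioned(Upsilon, party, theta, phi):
    ops = [np.eye(2), np.eye(2), np.eye(2)]
    ops[party] = bloch_projector(theta, phi)
    M = np.kron(np.kron(ops[0], ops[1]), ops[2])
    T = (Upsilon @ M).reshape((2,) * 6)
    cond = np.trace(T, axis1=party, axis2=party + 3)   # trace out measured party
    cond = cond.reshape(4, 4); cond = cond / np.trace(cond)
    pt = cond.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)
    return np.linalg.eigvalsh(pt).min()

# Placeholder demo comb (maximally mixed), only to show the call signature;
# in practice one loops over many (theta, phi) pairs and all three parties.
print(min_ppt_eig_conditioned(np.eye(8) / 4, 0, 0.3, 1.1))
```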
Given that each of these values is well above zero, the resulting conditional states are all separable, and since the angles we sampled cover the relevant parameter space sufficiently finely, we conclude that the conditional states are indeed separable for all projective measurements. Since any POVM element (or -in the case of B -any state) can be written up to normalization as a convex combination of pure states, this then implies that the GME comb of Eq. (81) displays vanishing conditional entanglement for all conceivable measurements (or preparations, in the case of B).
Iodide capped PbS/CdS core-shell quantum dots for efficient long-wavelength near-infrared light-emitting diodes
PbS based quantum dots (QDs) have been studied in great detail for potential applications in electroluminescent devices operating at wavelengths important for telecommunications (1.3–1.6 μm). Despite the recent advances in the field of quantum dot light-emitting diodes (QLEDs), further improvements in near-infrared (NIR) emitting device performance are still necessary for the widespread use and commercialization of NIR emitting QLED technology. Here, we report a high-performance 1.51-μm emitting QLED with an inverted organic–inorganic hybrid device architecture and PbS/CdS core-shell structured quantum dots as emitter. The resultant QLEDs show a record device performance for QLEDs in the 1.5 μm emission window, with a maximum radiance of 6.04 Wsr−1 m−2 and a peak external quantum efficiency (EQE) of 4.12%.
Quantum dot (QD) based light-emitting diodes (QLEDs) have attracted increasing interest due to their narrow emission linewidth, tunable emission spectral window as well as facile solution-processibility [1][2][3][4] . QLEDs have undergone rapid development both in quantum dot synthesis and device structure optimization since their invention in 1994 5 . Nowadays, QLEDs are emerging as an undeniable competitor to organic light emitting diodes (OLEDs) for lighting and display applications. Despite the huge progress of QLEDs emitting in the visible range, the development of near-infrared (NIR) emitting QLEDs used in telecommunications is still very slow. Infrared light with a wavelength around 1.3 μm or 1.5 μm, the best emission wavelength choices for standard silica optical fibres because of the low-loss transmission response of crystalline silicon at this wavelength range, is an indispensable element of any modern telecommunication network. The extension of OLEDs into the NIR spectral range is difficult because organic molecules usually emit light at wavelengths (λ) shorter than 1 μm [6][7][8][9][10] . In contrast, the emission spectrum of quantum dots can be easily adjusted from the visible to the NIR range simply by changing the composition and size of the quantum dots, which matches well with the "transparency window" of silica optical fibre.
Narrow band-gap lead sulfide (PbS) based nanocrystals have been widely studied in the past few years for applications in NIR emitting LEDs operating at wavelengths vital for telecommunications [11][12][13][14] . For example, Supran et al. reported short-wavelength NIR QLEDs with λ > 1 μm using core-shell structured PbS/CdS QDs 15 . However, previous efforts at developing NIR emitting QLEDs have demonstrated only limited success due to the low emission efficiency and insulating surface ligands of QDs as well as the poor carrier mobility of organic charge transport materials. Recently, Sargent et al. 16 made a breakthrough in the NIR emitting LEDs using PbS QDs as emissive layer. They improved the radiative exciton recombination within the QD layer by embedding QDs in a high-mobility hybrid perovskite matrix to prevent transport-assisted trapping losses. The resultant NIR emitting QLEDs exhibit a two-fold increase in external quantum efficiency (EQE) compared with previous devices. Nevertheless, the highest reported electroluminescence (EL) performance to date has fallen short of those possible with mature technologies for QLEDs emitting at visible emission wavelengths. Meanwhile, the quantum yields of PbS based QDs with short-wavelength emission are generally much lower than those of the QDs with long-wavelength emission because of the emission quenching caused by localized trap states with the particle size increase, which causes the poor performance for NIR QLEDs with long-wavelength emission.
The high performance of QLEDs can be attributed to the concomitant excitation of the luminescent QD film via Förster energy transfer and direct charge injection. The efficiency of moving charges to QDs is strongly dependent on the QD environment. The surface of QDs prepared by traditional colloidal techniques is capped with an insulating layer of organic ligands with long hydrocarbon tails. Here, we report a best-performance 1.51 μm-emitting QLED by employing iodide passivated PbS/CdS core-shell nanocrystals as emissive layer. The charge injection into QD emissive layer is dramatically improved with the use of iodide passivated QDs. Such NIR emitting QLEDs demonstrate the maximum radiance and efficiency of 6.04 Wsr −1 m −2 and 4.12%, respectively. These results demonstrate a new route towards high-performance NIR emitting QLEDs for telecommunication applications.
Results
To achieve high-performance QLEDs, it is necessary to start with a quantum dot emitter with a high quantum yield for electroluminescence, where the excited state is generated by electron-hole recombination. In this work, the PbS/CdS core-shell QDs with a ~1481 nm peak emission wavelength we used were prepared by a microwave-assisted cation exchange approach (Supporting Information Fig. S1). The quantum yield of the QDs is ~52%, the highest efficiency reported for PbS-based QDs in this emission range 17 . The surface of the QDs is capped with an insulating layer of mixed oleylamine (OLA) and oleic acid (OA) organic ligands with long hydrocarbon tails, which leads to enhanced emission efficiency owing to the effective passivation of surface defects of the PbS/CdS QDs (Fig. 1a,b). Figure 1c exhibits the low-magnification transmission electron microscopy (TEM) image of the resultant QDs with a uniform size distribution and an average particle size of ~10 nm.
We further explored the potential application of these PbS/CdS QDs as the emissive layer in NIR QLEDs. The resultant NIR electroluminescent devices comprise a multilayer thin film architecture of ITO/ZnO nanoparticles (NPs)/QDs/4, 4′-N, N'-dicarbazole-biphenyl (NPB)/MoO 3 /Al, as shown in Fig. 2a, where MoO 3 was used as the hole injection layer (HIL), NPB as the HTL, PbS/CdS core-shell structured QDs as the emissive layer (EML), a ZnO NP film spin-casted from ZnO NPs in ethanol solution as the ETL and Al as the anode. Inverted QLED architectures have been demonstrated to be very efficient for obtaining high QLED device performance, and the highest brightness for visible QLEDs was reported by Chang et al. 18 using a device architecture similar to the one we used. The schematic energy level diagram of the device depicted in Fig. 2b shows that electrons are injected from the ITO to the QD layer via the conduction band of the ZnO nanoparticle layer. However, hole injection from the Al anode to the organic semiconductor (NPB) results from carrier breakthrough at the MoO 3 /NPB junction (the electrons are extracted from the highest occupied molecular orbital (HOMO) level of NPB through the conduction band of MoO 3 ). Figure 2c presents the EL spectrum of our optimized NIR emitting QLED. The pure QD emission at a peak wavelength of ~1492 nm illustrates the highly efficient recombination of electrons and holes in the QD emissive layer. The red-shifted peak wavelength (~11 nm) from the QD solution's photoluminescence (PL) is attributed to the Förster resonant energy transfer from small QDs to large ones in closely packed QD solids 19 . The radiance and EQE curves as a function of current density for the device are depicted in Fig. 2d. The resultant NIR QLED exhibits an excellent device performance, with a maximum radiance and peak EQE of 2.31 Wsr −1 m −2 and 2.42%, respectively, which are the highest reported values for QLEDs in the 1.5 μm emission window.
For inverted QLEDs, the direct charge injection into the QD emissive layer is dominant in driving the QD EL process compared with Förster energy transfer 20 , and thus facile transport of carriers to the QDs is required. The efficiency of moving charges to the QDs is strongly dependent on the QD environment. The surface of QDs prepared by traditional colloidal techniques is generally capped with an insulating layer of organic ligands with long hydrocarbon tails, resulting in partly inefficient charge injection. It has been previously demonstrated that inorganic ligands are often more efficient than their organic counterparts [21][22][23] . In this work, I ions are attached to the QDs by a ligand exchange method to enhance the surface carrier mobility of the QDs, as depicted in the inset of Fig. 3. X-ray photoelectron spectroscopy (XPS) confirmed the introduction of I ions into the ligand shell of the QDs. As shown by Fig. 3a, strong binding energy peaks of I 3d at 624.75 eV and 636.18 eV appeared following the solution-phase halide treatment. We used PL emission spectra (Fig. 3b) to further study the impact of the solution-phase iodine treatment on the physical properties of the QDs. It can be seen that the emission intensity of the I-connected QD film is 1.56-fold higher than that of the untreated QD film, suggesting that the iodine treatment also plays an important role in passivating the surface defects of PbS based QDs, which agrees well with previous findings 24 . In addition, it can be noted that the PL emission peak of the QDs with I ligands is slightly red-shifted compared with the untreated QDs due to the strengthened dot-to-dot interaction resulting from the shorter length of the I ligands compared with the OA/OLA ligands.
The EL spectrum of the QLED using I passivated QDs with a peak emission of 1510 nm is correspondingly red-shifted ~ 18 nm compared with the above reference device (Fig. 4a). The electrical characterization results reveal that the presence of I ligands in QDs has an obvious effect on the charge transport process that the current injection for the device with I treated QDs is more efficient than that of the reference device (Supporting Information Fig. S2), indicating that the introduction of I ligands facilitates the charge injection into QDs. The significant improvement for the surface properties of QDs contributes to the high performance of our devices. As shown in Fig. 4b, the QLED with iodide passivated PbS/CdS QDs obtained a maximum radiance of 6.04 Wsr −1 m −2 at the current density of 6.1 mA/cm 2 and a peak EQE of 4.12% at the current density of 759 mA/ cm 2 , the record radiance and efficiency values for NIR QLEDs in the 1.5 μm emission window. In addition, the resulting NIR QLEDs fabricated by the efficient design of inverted structure have excellent device reproducibility (Supporting Information Fig. S3). Nevertheless, it still can be expected that the further optimization such as improving the quantum yields of PbS based QD films would realize better device performances.
Discussion
In conclusion, we have demonstrated the best-performance NIR-emitting QLEDs with a 1.51 μm wavelength emission. The superior performance in radiance and efficiency was achieved by the use of high-efficiency PbS/ CdS core-shell QDs, the surface treatment of QDs with high-mobility inorganic I ions and the efficient design of inverted device architecture. Our results illustrate a further step towards the practical application of NIR emitting QLEDs in optoelectronic devices, especially for telecommunication.
Preparation of iodine treated PbS/CdS QDs and ZnO nanoparticles. Untreated PbS/CdS QDs with core-shell structure were prepared according to a previously reported microwave-assisted cation exchange procedure 17 . For solution-phase iodine treatment, untreated PbS/CdS QDs (30 mg mL −1 in toluene, 5 mL) were added to tetrabutylammonium iodide (TBAI) solution (0.6 mL), and then the mixed solution was fully stirred for 15-30 minutes. To precipitate the QDs, ethanol was added to centrifuge and separate the QD solid from the solution. The as-prepared PbS/CdS QDs can finally be dispersed in octane (5 mg mL −1 ). The exchange of iodine for the organic ligands was carried out in a nitrogen glovebox. The TBAI solution 24 and ZnO NPs 25 were obtained following previous literature reports.
Fabrication of NIR emitting QLED devices. The patterned ITO-glass substrates were cleaned by sonication in detergent, de-ionized (DI) water, acetone, and isopropyl alcohol for 15 min. in sequence. Then ZnO NP layer (40 nm) was spin-coated on the cleaned substrates from the ZnO ethanol solution (25 mg/mL) at 1000 rpm for 60 s and baked at 150 °C for 30 min in a nitrogen filled glove box. Next, the 5 mg/mL of PbS/CdS QD toluene solution was spin-coated on the ZnO NP layer at a rate of 2000 rpm for 60 s to form a QD emissive layer (20 nm), and baked at 90 °C for 30 min. Subsequently, the 4, 4′-N, N'-dicarbazole-biphenyl (NPB, 60 nm), MoO 3 (10 nm) and Al (150 nm) layers were sequentially deposited on the QD layer in a vacuum thermal evaporator.
Instrumentation. The device electroluminescence (EL) spectra were collected by a Peltier-cooled InGaAs photodiode equipped with a standard lock-in amplifier technique. The overall device performance was measured by using a technique described by Forrest et al. 26 , in which a photodiode was placed over the device's active pixel. The EL emission of the QLEDs, measured as the photodiode current, and the device current were obtained simultaneously. The device radiance and external quantum efficiency (EQE) were calculated from these two quantities together with the device EL spectra. Fluorescence spectra of the PbS/CdS QDs were recorded with a Fluorolog ® -3 system (Horiba JobinYvon) using a photomultiplier tube (PMT) detector, and the excitation wavelength was set at 670 nm. The absolute PL quantum yield (QY) of the QDs was measured using an integrating optical sphere. X-ray photoelectron spectroscopy (XPS) data were recorded with a Phoibos 100 spectrometer using a Mg X-ray radiation source (SPECS, Germany) at 12.53 kV for the high resolution scans. Transmission electron microscopy (TEM) was carried out with a transmission electron microscope (JEOL, 2100 F) operating at 200 kV.
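To illustrate how radiance and EQE follow from the photodiode current, device current and EL spectrum, a hedged calculation sketch is given below. The responsivity, currents, monochromatic approximation at the peak wavelength and unit collection efficiency are placeholder assumptions for illustration only, not values or the exact procedure from this work.

```python
# Hypothetical sketch of an EQE estimate from photodiode and device currents.
h, c, q = 6.626e-34, 3.0e8, 1.602e-19  # Planck constant, speed of light, electron charge

def eqe_percent(photocurrent_A, device_current_A, responsivity_A_per_W,
                wavelength_m=1.51e-6, collection_efficiency=1.0):
    # optical power from the calibrated photodiode responsivity (assumed known)
    optical_power_W = photocurrent_A / responsivity_A_per_W / collection_efficiency
    photon_flux = optical_power_W * wavelength_m / (h * c)   # photons per second
    electron_flux = device_current_A / q                     # electrons per second
    return 100.0 * photon_flux / electron_flux

# placeholder numbers, only to show the calculation
print(eqe_percent(photocurrent_A=2e-6, device_current_A=1e-3,
                  responsivity_A_per_W=0.95))
```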
Data availability statement. All data generated or analysed during this study are included in this published article (and its Supplementary Information files).
Users Acceptance of E-Government System in Sintok, Malaysia: Applying the UTAUT Model
E-government services have become a vital tool to provide citizens with more accessible, accurate and high-quality services and information. An e-government system provides efficient dissemination of information to people and makes it easier for people to communicate directly with government services. The utilization of ICT through e-government enhances the efficiency and effectiveness of service delivery in the public sector. The system is regarded as one of the vital elements of a developed country. The application of e-government indicates the readiness and ability of the nation to utilize technology within the public administration perspective. Although the Malaysian government introduced e-government many years ago, its acceptance is still not very high. Therefore, this paper studies the key factors of Malaysian citizens' acceptance of e-government services in Sintok, Kedah, a semi-rural area, based on the Unified Theory of Acceptance and Use of Technology (UTAUT model). The survey data was collected from 83% of respondents to measure people's understanding of and awareness toward the e-government system. The results show that there is an excellent understanding among Malaysians of the e-government system.
Introduction
In today's complex world, Information and Communication Technology (ICT) has become an essential element in our daily lives as it has affected and changed our lives to some extent. E-government is one of these ICT initiatives and fosters the knowledge society. Most of the countries around the world have adopted e-government in their government systems because they realized the benefits of e-government in developing their countries. E-government helps to improve the information flows and processes within government, increase accountability and transparency, reduce corruption, and bring greater convenience, increased citizen involvement, greater efficiency, and cost reduction for government and users (Noor, Kasimin, Aman, & Sahari, 2011).
However, the success of an e-government system does not depend solely on government support, but also on the willingness of citizens to accept, use and adopt e-government services (DeLone & Mclean, 2003). Much research has been conducted to study the adoption and success of e-government services around the world, and the results show that many governments still suffer from low-level citizen adoption of e-government services (Bélanger & Carter, 2008; Gupta, Dasgupta, & Gupta, 2008; Kumar, Mukerji, Butt, & Persaud, 2007; Reddick, 2005; Thomas & Streib, 2003). Due to this concern, the government must take proactive actions to address the low usage of e-government services among citizens, as the government has spent much money to develop and implement the e-government system. This research was conducted to investigate and assess the factors that contribute to citizens' acceptance in using e-government services in Sintok, Malaysia.
E-Government
E-government has been defined in many different ways from various studies, for example, (Brown & Brudney, 2001) define e-government as the use of technology to increase access to and expeditiously deliver government information and services. They categorize e-government efforts into three broad types of Government-to-Government (G2G), Government-to-Citizen (G2C) and Government-to-Business (G2B). Muir & Oppenheim (2002) define e-government as one of the government initiatives in transmitting information via the internet or digital methods. Kumar et al. (2007) also define e-government as a medium for better service delivery to citizens, businesses, and community members through a change in the way the government manages the information.
A fully utilized e-government system will bring many benefits to government management and can bridge the interaction gap between government and citizens; for example, it indirectly involves citizens in making decisions or policy (Othman, Yasin, & Samelan, 2012).
E-government aims to simplify bureaucratic procedures, increase efficiency and transparency, improve information, and increase the level of citizen empowerment. Due to the advantages gained through the application of e-government, the UN and the World Bank embraced e-government as a strategic instrument (Cloete, 2007). Simultaneously, it improves information flow and processes within the government, improves the speed and quality of policy development, and improves coordination and enforcement (Suki & Ramayah, 2010). In Malaysia, e-government aims to allow the government, businesses and citizens to work together to support Malaysia and all its people, to deliver services to the people of Malaysia, and to become more responsive to the needs of its citizens. It also helps to enhance the relationship and quality of interaction between the Government of Malaysia and its citizens.
Unified Theory of Acceptance and Use of Technology (UTAUT Model)
Information technology acceptance and adoption research has developed several competing and complementary models, each with a different set of acceptance determinants. These models developed over the years and came as a result of ongoing attempts to validate and expand earlier models. Most notable amongst these models are the Theory of Reasoned Action (TRA) (Ajzen & Fishbein, 1980), the Theory of Planned Behavior (TPB) (Ajzen, 1985; Heider, 1958; Lewin, 1951), the Technology Acceptance Model (TAM) (Davis, 1989), the Extension of the Technology Acceptance Model (TAM2) (Venkatesh & Davis, 2000), the Diffusion of Innovation Model (DOI) (Rogers, 2003), and the Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh, Morris, Davis, & Davis, 2003).
UTAUT is one of the most popular frameworks in the field of general technology acceptance models (Alraja, 2015). Like earlier acceptance and adoption models, it aims to explain user intentions to use an Information System (IS) and the subsequent usage behaviour. UTAUT was synthesized from eight earlier models, including TRA, TAM, TPB, the motivational model (Igbaria, Parasuraman, & Baroudi, 1996), the Model of PC Utilization (MPCU) (Thompson, Higgins, & Howell, 1991), DOI and Social Cognitive Theory (SCT) (Compeau, Higgins, & Huff, 1999). Each model attempts to predict and explain user behaviour using a variety of independent variables, and the UTAUT model was created based on the conceptual and empirical similarities across these eight models. Compared with previous models, UTAUT was able to explain 70% of technology acceptance behaviour, a considerable improvement on earlier models, which routinely explained only around 40% of acceptance. Therefore, UTAUT is considered an enhanced model with parsimonious and robust characteristics that can better explain the factors influencing an individual's intention to use and usage of IT. In detail, UTAUT contains four core determinants, namely performance expectancy, effort expectancy, social influence and facilitating conditions. The variables gender, age, experience and voluntariness of use moderate the key relationships in the model; however, in this research, these four moderating variables are not used or tested. Alawadhi & Morris (2008) investigated the adoption of e-government services using UTAUT; their survey of 880 students revealed that performance expectancy, effort expectancy and peer influence determine students' behavioural intention, while facilitating conditions and behavioural intention determine students' use of e-government services.
Performance Expectancy (PE)
Performance expectancy is the user's belief that using the system will assist him or her to improve job performance.
Another definition is that performance expectancy can be regarded as a belief that the use of a particular technology will be advantageous or performance-enhancing for the individual (Muhammad Abubakar & Hartini, 2013). Wu, Yu, & Weng (2012) define performance expectancy as the degree to which an individual perceives that an information system is helpful for the job. From the definitions given, it can be seen that performance expectancy is concerned with making an individual's task or job more effective and efficient. In the case of this research, it means that citizens feel and believe that using e-government services will help them gain more benefits and achieve reasonable job expectations.
Performance expectancy in the UTAUT model is derived from a combination of five constructs from previous models, consisting of perceived usefulness, extrinsic motivation, job-fit, relative advantage and outcome expectations (Davis, Bagozzi, & Warshaw, 1989; Venkatesh & Davis, 2000). The first construct is perceived usefulness, where a person believes that using the system in his job will allow him to accomplish tasks faster; it also enhances his job performance, increases his productivity, increases efficiency at work, makes it easier to perform the job, and makes the system useful in the job. The second construct is extrinsic motivation (Davis et al., 1989), a perception that users will want to perform an activity because it is perceived to be instrumental in achieving valued outcomes that are distinct from the activity itself, such as improved job performance, pay, or promotions.
The third construct is job-fit (Thompson et al., 1991), which explains how the capabilities of a system enhance an individual's job performance; for example, using the system can decrease the time needed to complete the job and can significantly increase the quality of output on the job. The fourth construct is relative advantage (Moore & Benbasat, 1991), the degree to which the use of an innovation is considered to be better than its predecessor; for instance, the use of the system increases job efficiency, makes the work easier and makes the work more effective. The last construct is outcome expectations (Compeau et al., 1999), where users expect that using the system will lead to better job-related outcomes.
Effort Expectancy (EE)
Effort expectancy can be defined as the degree of ease associated with the use of the system. According to Venkatesh & Davis (2000), effort expectancy is the extent of convenience perceived in using the system. In this research, it means that citizens feel that using e-government services is comfortable and easy to access. Similar constructs in previous models and theories, from a semantic viewpoint, are perceived ease of use, complexity and ease of use (IDT). First, perceived ease of use captures the extent to which users assume that using the system will be free of effort; for example, it was easy to learn to operate the system, users find it easy to get the system to do what they want, and interaction with the system is simple, understandable and easy. The second construct is complexity (Thompson et al., 1991), where the system is perceived as relatively difficult to understand and use; for example, using the system takes too much time, working with the system is too complicated, or it takes too long to learn how to use the system. The last construct in effort expectancy is ease of use (Moore & Benbasat, 1991), the degree to which using an innovation is perceived as being easy.
Overall, the effort expectancy variable is also considered a vital determinant in the UTAUT model because individuals expect the technology introduced to be free of effort. Therefore, when a technology is perceived to require less effort to use, the tendency to accept and use the technology increases.
This is due to the perception that the less effort it takes to use the system or technology, the more useful the technology is (Venkatesh & Davis, 2000). The same effect is expected for e-government, where freedom from effort would improve as well as attract citizens' use of e-government services. Hence, it is proposed that effort expectancy has a positive relationship with citizens' acceptance of the implementation of e-government services.
Social Influence (SI)
Social influence in the UTAUT model is derived from constructs in earlier models. The first is subjective norms (Ajzen, 1991; Davis et al., 1989), where the individual perceives that most people important to them will affect their behaviour to act; for instance, people who influence their actions believe they should use the system, or people who are important to them think they should use the system. The second construct is social factors (Thompson et al., 1991), where people use the system because friends or co-workers also use the system and the organization has supported the use of the system. The last construct in this variable is image (Moore & Benbasat, 1991), the degree to which use of the system is perceived to enhance one's image or status in one's social network. For example, people in the organization or community who use the system have more prestige than those who do not, people who use the system have a high profile, and having the system is a status symbol in the organization or community.
Overall, social influence is the degree to which users perceive that people important to them believe they should use the new system. Scholars found that social influence depicts a low positive relationship in the UTAUT model, and the social influence constructs are not significant. Under a mandatory condition, this element of social influence seems to be substantial only at an early stage and turns non-significant as experience with the system increases.
Facilitating Conditions (FC)
Facilitating conditions is the degree to which an individual believes that an organizational and technical infrastructure exists to support the use of the system. According to Venkatesh & Davis (2000), facilitating conditions refer to the extent to which an individual perceives that the technical and organizational infrastructure required to use the intended system is available. This definition covers the constructs of perceived behavioural control, facilitating conditions and compatibility. The first construct is perceived behavioural control (Ajzen, 1991; Taylor & Todd, 1995), which reflects perceptions of internal and external constraints on behaviour and encompasses self-efficacy, resource facilitating conditions, and technology enabling conditions; for example, users have the necessary resources to use the system and have the required knowledge to use the system.
The second is facilitating conditions (Thompson et al., 1991), where guidance is available to users in selecting the system, specialized instruction concerning the system is available to users, and a specific person or group is available for assistance with system difficulties. The last construct is compatibility (Moore & Benbasat, 1991), the degree to which the system is perceived as being consistent with the existing values, needs, and experience of users; for example, using the system is compatible with all aspects of users' work and fits well with the way users like to work.
Dependent Variable -Individual Acceptance
Individual acceptance refers to how users react to the services provided. It shows the overall satisfaction that users have before and after using the services. One of the most important indicators of people's acceptance is the sustainability of the service provider (Faezipoura & Ferreiraa, 2013). Sustainability is the time frame over which service providers manage to survive in the industry; this time frame is measured because it shows that people are accepting the service provider. Another indicator is the acknowledgement of the future needs of the people (Dey, Hariharan, & Brookes, 2006).
As people's needs continue to evolve, technologies are required that can cope with the various changes in the demand of the people.
Besides, people's acceptance is also gauged from the perspective of the service quality produced by service providers. Service quality has many definitions; Crosby (1979) stated that service quality is conformance to requirements.
It means that a service is said to have quality when all of the dimensions of the service adhere to the specifications set. Services are not something that can be seen; they must be experienced. Because of that, the quality of services is difficult to measure. The intangible nature of services leads to quality being measured through customers' satisfaction as one of the indicators of service quality (Falk, Berkman, Mann, Harrison, & Lieberman, 2010). Customer satisfaction is met when service firms manage to meet both customer needs and expectations. The service is born out of the demand from customers.
When the demand is met and customers' expectations are fulfilled, this leads to customer satisfaction.
Quality service is essential to ensure customer satisfaction. Customer satisfaction indicates acceptance of the services, which can be shown by the loyalty of the customers (Zeithaml, Berry, & Parasuraman, 1996). For example, the implementation of e-services to meet consumer needs is intended to satisfy customers.
Also, e-services ensure better knowledge transfer and a flexible transaction process that can be carried out at any time and anywhere. Collier & Bienstock (2006) stated that e-services are one of the ways to ease the transaction process and affect customers' satisfaction in using the services provided. They further explain that the purpose of e-services development is also to ensure customer loyalty and retention.
The proposed conceptual framework for this study is presented in Figure 2. The variables are mostly derived from the UTAUT model. First, performance expectancy is the degree to which individuals expect that using the system will help them improve their job performance.
Furthermore, the anticipated performance may also be considered as the expectation that the use of a specific technology will be beneficial or enhance individual performance. In this research, performance expectancy will be measured by perceptions of using e-government services in terms of accomplishments, such as saving time, money and effort, facilitating communication with government, improving the quality of government services, and providing citizens with an equal basis on which to carry out their business with the government. Second, effort expectancy is the extent of convenience perceived in using the system (Venkatesh & Davis, 2000); in other words, it is associated with ease of use when using the system. In this research, effort expectancy will be measured by perceptions of easiness and convenience, such as learning and using the e-government services being straightforward, becoming skilful at using the services easily, and being able to get government services quickly. Third, social influence is the degree to which peers influence the behaviour of individuals, whether to use the system or not. According to Curtis & Payne (2008), social influence includes consideration of a person's perception of the opinion of others. Individuals expect that people who are important or close to them think that they should use the e-government services. In this research, this variable will be measured by perceptions of how peers affect citizens' use of e-government services and the encouragement from the government to use the services.
Facilitating conditions is the degree to which an individual believes that an organizational and technical infrastructure exists to assist the use of the system. In other words, this variable is associated with the availability of resources to support individuals in using the system. In this research, facilitating conditions will be measured by perceptions of being able to access required resources, as well as to obtain the knowledge and the necessary support to use e-government services. Citizens' acceptance refers to how citizens react to the services provided.
Conceptual Framework
Source: Adopted from the UTAUT model framework with some modification
Some considerations are taken into account in determining the level of people's acceptance of e-government services. One of them is the regularity with which citizens use the services; if citizens frequently use e-government services, it shows that they are willing to accept the system.
Besides, acceptance is also based on the overall satisfaction that citizens feel with the services they receive. Citizens' desire to obtain services again in the future is another measure of the extent of acceptance. When they accept the e-government services, they voluntarily use them to carry out any government transactions.
Methods
The primary data collection method is adopted in this research, using original data collected through survey questionnaires administered to the people of Sintok. According to Krejcie & Morgan (1970), if the population size is 420, the appropriate sample size is around 201. For this study, systematic sampling was selected as the sampling technique; in systematic sampling, samples are chosen systematically, at regular intervals. The questionnaire is used as the medium for data collection. This study is cross-sectional, using a survey questionnaire to study the relationship between the independent variables and the dependent variable at a given period.
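As a minimal illustration of the systematic sampling step described above, the sketch below draws every k-th unit from a hypothetical sampling frame of 420 citizens; the names and the fixed starting point are placeholders, not the study's actual frame.

```python
# Hypothetical sampling frame of N = 420 citizens; names are placeholders.
population = [f"citizen_{i}" for i in range(420)]
sample_size = 201                              # per Krejcie & Morgan (1970) for N = 420
k = max(len(population) // sample_size, 1)     # sampling interval (here k = 2)
sample = population[0::k][:sample_size]        # pick every k-th unit from a fixed start
print(len(sample), sample[:3])
```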
The items of the questionnaire were adapted from the UTAUT model. Collected data were analysed using SPSS Version 22. Descriptive analysis was conducted to assess the general background of respondents, and Pearson correlation was run to measure the strength of the linear association between two variables. Lastly, a normality test was conducted for all variables and was accepted. The results of the reliability test show that all variables are valid and accepted.
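To illustrate the correlation analysis described above, the following sketch computes Pearson correlations between the four UTAUT constructs and acceptance; the column names and the randomly generated Likert-style responses are placeholders, not the study's actual survey data.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 201  # sample size suggested by the Krejcie & Morgan table for N = 420

# Placeholder 5-point Likert responses for the four UTAUT constructs and acceptance.
df = pd.DataFrame({name: rng.integers(1, 6, size=n)
                   for name in ["PE", "EE", "SI", "FC", "Acceptance"]})

for construct in ["PE", "EE", "SI", "FC"]:
    r, p = pearsonr(df[construct], df["Acceptance"])
    print(f"{construct} vs Acceptance: r = {r:.3f}, p = {p:.3f}")
```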
Descriptive Analysis
A descriptive analysis was carried out to investigate the background of participants and their level of knowledge of internet applications (source: data processed with the SPSS software). In this ICT and globalization era, only 3% and 1.8% of respondents have poor computer knowledge and poor internet knowledge, respectively.
Only a few of them have used the internet for less than one year or use it for less than one hour, around 2.4% and 7.2% respectively. In contrast, almost all respondents stated that they have been using the internet for more than three years (85.6%) and for more than three hours (60.8%). Finally, from the data above, the majority of respondents used e-government for e-Filing (35.5%), MyEG (19.9%) and e-services (17.5%).
Acceptance of E-Government Services among Citizens
The first objective of the study is to examine the factors influencing the acceptance of e-government services, namely performance expectancy, effort expectancy, facilitating conditions and social influence.
Discussion
The objective of this study is to examine factors influencing the acceptance of e-government.
Thus, the researcher will discuss and interpret each hypothesis on performance expectancy, effort expectancy, social influence and facilitating conditions.
The first hypothesis: there is a positive and significant relationship between performance expectancy and citizens' acceptance of e-government services. The finding, a correlation value of .425** (2-tailed, significant), is in line with Alraja (2015), in that people expect that technology will improve their performance in dealing with government agencies. Furthermore, according to Davis et al. (1989), people hope that technology will enable them to accomplish assigned tasks more quickly, improve job performance, increase productivity, enhance effectiveness on the job and make the job more comfortable to perform. In the researcher's opinion, the findings show that the public is attracted to use e-government because the system facilitates them and eases their tasks. Thus, the efficiency and effectiveness of e-government will increase public performance and hence increase public acceptance of e-government. This study also shows that performance expectations are not only mirrored in the efficiency of the system but also have a substantial positive influence on the intention towards technology adoption.
This result also shows that the government has made vigorous attempts to simplify the system and to make it publicly more useful in communicating with government agencies.
The second hypothesis: there is a positive relationship between effort expectancy and citizens' acceptance of the e-government service.
According to Venkatesh & Davis (2000), it is a common human perception that the less effort it takes to use a particular technology or system, the more useful that technology or system is. In the researcher's opinion, it is human nature to seek something that does not require great effort; human beings like to spend less time and effort to complete any task. The Pearson correlation value of .484** (2-tailed, significant) shows that there is a substantial and positive relationship between effort expectancy and individual acceptance of the e-government service. The results illustrate that the less time it takes to complete a task, the more the public is encouraged to use such a service. These findings are in line with Mansour, Samir, Samia, & Billal (2016) and Moore & Benbasat (1991), in that the degree to which an innovation is perceived as easy to use increases the tendency of the public to use it. The significant result also verifies that the public perceives and believes that not much effort is needed to use e-government services (Mensah, 2019) and holds favourable views on the application of e-government by the Malaysian government.
The third hypothesis: there is a positive relationship between social influence and citizens' acceptance of the e-government services. This explains that social pressure, whether from office colleagues, neighbours, friends or relatives, may influence individual behaviour to try such a "thing", be it technology, food and beverages or fashion. According to Curtis & Payne (2008), social influence involves considering the personal experience or views of others, particularly from the subjective culture of the reference community and specific interpersonal agreements with others, as well as the degree to which an innovation is considered to improve one's reputation or status in one's social structure. The Pearson correlation result is consistent with Thompson et al. (1991), who stated that people use a particular system because their friends or co-workers also use the system and the organization supports its use. The fourth hypothesis concerns facilitating conditions: according to Venkatesh & Davis (2000), citizens' attitudes can change to positive if they possess the right skills and are able to access the system without any issues (Voutinioti, 2013). The public expects to be provided with the necessary access and resources, such as internet availability and internet speed, as the first step to browse an e-government website. As a next step, e-government services should provide guidelines or measures such as an "advisor" or "manual books" on how to use e-government. It is proposed that these steps and procedures should not be in jargon or too long, and should include pictures to help the public access the system. For workers, the organization is expected to give full support, such as providing computers and high-speed internet, for their employees to complete tasks relating to e-government services.
Conclusion
The findings of this study show that citizens' acceptance of e-government services is associated with the factors examined, namely performance expectancy, effort expectancy, social influence and facilitating conditions. This study opens a new direction for future research, especially using a qualitative method such as a case study to gain rich insights into the public's experience of using e-government. A case study could perhaps assist in advancing the field of e-government and capture the multidimensional reality of this service.
| 5,937.4 | 2021-01-11T00:00:00.000 | [
"Political Science",
"Computer Science"
] |
Effect of Radiation-Induced Cross-Linking on Thermal Aging Properties of Ethylene-Tetrafluoroethylene for Aircraft Cable Materials
The effects of electron beam irradiation on ethylene-tetrafluoroethylene copolymer (ETFE) were studied. Samples were irradiated in air at room temperature by a universal electron beam accelerator at various doses. The effect of irradiation on the samples and on the cross-linked ETFE after aging was investigated with respect to thermal characteristics, crystallinity, mechanical properties, and volume resistivity using thermo-gravimetric analysis (TGA), differential scanning calorimetry (DSC), a universal mechanical tester, and a high resistance meter. TGA showed that the thermal stability of irradiated ETFE is considerably lower than that of unirradiated ETFE. DSC indicates that crystallinity is altered greatly by cross-linking. The analysis of mechanical properties, fracture surface morphology, visco-elastic properties and volume resistivity confirms that radiation-induced cross-linking is vital to the aging properties.
Introduction
Fluoropolymers, thermoplastic polymers, have been widely applied for many years due to their outstanding mechanical properties, cold resistance, high heat resistance, electrical insulation, and chemical resistance. These materials confer superior temperature resistance and have been used extensively for cable insulation across the aerospace field [1,2]. Although fluoropolymers have been extensively employed as insulation materials in spaceships and aircraft, fluoropolymers used for aerospace applications frequently decompose on account of their rather low radiation resistance, which derives from fluoride precipitation after cosmic-ray irradiation [3,4]. Hence, there is a substantial drop-off in the mechanical properties, chemical resistance, electrical properties, thermal stability, surface properties, and other characteristics of perfluoropolymers when the material is exposed to radiation [5,6], and the extent of degradation depends upon the radiation dose, dose rate, and energy of the incident radiation [7][8][9][10][11][12][13].
Ethylene-tetrafluoroethylene copolymer (ETFE), which occupies a special status among fluoropolymers, is a semi-crystalline polymer and essentially a 1:1 alternating copolymer of ethylene and tetrafluoroethylene. ETFE has a higher radiation stability and exhibits superior mechanical properties, flexural modulus and creep resistance compared with its perfluorinated counterparts, i.e., poly(tetrafluoroethylene) (PTFE), poly(tetrafluoroethylene-co-hexafluoropropylene) (FEP), and poly(tetrafluoroethylene-co-perfluorovinylether) (PFA) [3,[14][15][16][17]]. Oshima et al. [18] reported that polytetrafluoroethylene (PTFE) is extremely sensitive to radiation and undergoes chain scission even at a very small radiation dose. Galante et al. [19] investigated the radiation tolerance of perfluoroalkoxy (PFA) under vacuum and in oxygen and found it to be higher than that of PTFE (the ratio being 10:1). This has made ETFE a particularly interesting candidate for the aerospace industry to replace other fluoropolymers, and electron beam irradiation further enhances the radiation resistance, modulus, and mechanical properties of ETFE.
In the present paper, ETFE was modified by adding TAIC as a cross-linking agent, and a network structure was formed upon electron beam irradiation. With the rapid worldwide growth in the use of ETFE in the aircraft industry, it is imperative to conduct a systematic investigation of the effects of irradiation at various doses on thermal stability and on a series of properties after thermal aging. To the best of our knowledge, such detailed surveys have not been reported in any previous work. The objective of this study is to provide an in-depth understanding of cross-linking in ETFE for its industrial applications.
Sample Preparation
The curing agent (TAIC) and Sb2O3 as flame retardant were first manually mixed with ETFE. The ratio of ETFE, Sb2O3 and TAIC is 100:10:1. The blends were prepared using an RM-200C parallel twin screw extruder (HAPU, Harbin, China) with a screw diameter of 20 mm and an L/D ratio of 30:1. The torque of the twin screw extruder ranges from 0 to 250 N·m and the production capacity is 6 kg/h. The temperatures of the feeding, compression, and metering zones were set at 230, 260 and 280 °C, respectively. The rotor speed was fixed at 40 rpm. The extrudates, in the form of a thin ribbon, were immediately quenched in a water bath and repelletized in a subsequent operation. These were dried before the subsequent processing and characterization. Afterwards, the blends were compression molded into rectangular sheets of 1 mm thickness using a flat-panel curing press (GT-7014-H30C of GOTECH, Dongguan, China) at a temperature of 285 °C and a pressure of 15 tons.
Irradiation
The samples were placed on a pallet on a conveyer and irradiated in air at room temperature on a stepwise basis using an electron beam accelerator (Kinwa High Technology Co., Ltd., Changchun, China) with an acceleration voltage of 1.2 MeV and a dose rate of 2.4 × 10^4 kGy/h. The samples were exposed to continuous multiple irradiation from 60 to 180 kGy by increasing the number of passes. Afterwards, the irradiated specimens were kept at room temperature for 48 h in order to minimize the effects of free radicals in the samples.
Ageing Experiments
Thermal aging was conducted in a box furnace (LY-2150, LIYI Co., Shenzhen, China) for times ranging from 0 to 504 h in an oxygen environment. Because thermal degradation plays a dominant role in the aging reaction as temperature increases, the aging experiments were carried out at a temperature of 230 °C, which is the limiting temperature for maintaining the crystal structure.
Measurements
The thermal stabilities of the specimens irradiated at 60, 120 and 180 kGy and of pure ETFE were measured under a nitrogen atmosphere with a Netzsch TGA 209C thermogravimetric analyzer (Selb, Germany) at a heating rate of 10 °C/min. Heating thermograms of the ETFE samples were recorded using a DSC 204 instrument (Netzsch, Wittelsbacherstr, Germany) from 50 to 300 °C, maintained isothermally at 300 °C for 3 min, and then cooled from 300 to 40 °C. The heat of melting was calculated from the areas under the melting peaks. The degree of crystallinity was calculated using Equation (1), where ∆Hm is the heat of melting of the ETFE films, which is proportional to the area under the melting peak, and ∆H100% is the heat of melting of 100% crystalline ETFE polymer (∆H100% = 288 J/g). The whiteness of the specimens irradiated at 60, 120 and 180 kGy was measured with a whiteness meter (DRK130A, Drick, Jinan, China) according to China State Standard GB/T 15595-2008. The tensile tests of the samples were recorded with a Universal Testing Machine (AI-3000 of GOTECH, DongGuan, China) at a speed of 50 mm/min; the shape and size of the samples follow type I in China State Standard GB/T 1040. The detailed dimensions are shown in Figure 1. At least five specimens of each composition were tested, and the average values were recorded. The volume resistivity of samples with 0.5~1 mm thickness was measured by using a ZC36 type high resistance meter (BELL Analytical Instruments Co., Ltd., DaLian, China) according to GB1410-78. The fracture surface morphology of the samples that fractured in the tensile strength experiments was observed using a scanning electron microscope (SEM, Hitachi S-4700, Tokyo, Japan). All sample surfaces were coated with a thin gold layer by plasma sputtering to avoid any charging effect due to the non-conductivity of the polymer. The visco-elastic properties were measured with dynamic mechanical analysis (NETZSCH DMA 200, Selb, Germany) in a three-point bending configuration at a heating rate of 3 °C/min.
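The expression referenced above as Equation (1) did not survive extraction; based on the definitions of ∆Hm and ∆H100% just given, it is presumably the standard crystallinity ratio:

\[ X_c = \frac{\Delta H_m}{\Delta H_{100\%}} \times 100\% \]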
Investigations into Thermal Stability
With respect to applications, the thermal stability of the ETFE with TAIC and Sb2O3 is crucial. Therefore, the thermal stability of the specimens irradiated at different doses and of the unirradiated specimen was investigated by TGA, and the results obtained are depicted in Figure 2a. The derivative thermogravimetry (DTG) curves are provided in Figure 2b. The results demonstrate that the precursor (non-irradiated) pure ETFE exhibits a single-step degradation pattern, with the transition at about 525 °C stemming from the decomposition of the molecular chains of pure ETFE. However, the degradation profile of irradiated ETFE with TAIC and Sb2O3 exhibits two distinct steps. The first stage peaks at a temperature of approximately 470 °C. The weight loss in the first stage remains unchanged with increasing dose, as plotted in Figure 2b; therefore, this behavior can be attributed to the decomposition of TAIC molecules. The small amount of decomposition for pure ETFE in the temperature range of 250 to 450 °C is due to low-molecular-weight degradation. The most significant weight loss is attributed to the decomposition of the ETFE matrix, as expected. The weight loss in this stage rises with increasing dose, depending on the extent of the structural changes rooted in the reactions that occur when ETFE is exposed to the irradiation process. During the irradiation process, scission reactions of C-F, C-H and C-C bonds occur at the initial moment, resulting in the formation of macroradicals, which undergo the following competitive reactions [20]: (1) peroxidation by reaction with atmospheric oxygen, generating hydroperoxides after hydrogen abstraction from the neighboring ethylene molecules; (2) dehydrofluorination after C-C scission to form unsaturated structures; and (3) dehydrofluorination and the subsequent formation of a cross-linked structure by reaction with an adjacent macromolecular radical. Therefore, -CF3 side groups and branched structures generated during irradiation, which have lower thermal stability than pure ETFE, increase with the absorbed dose. This is a significant potential cause of the decline in thermal stability. On the other hand, the irradiated ETFE reveals a higher carbon residue rate compared with the unirradiated ETFE. This can be explained by the network structures in irradiated ETFE not being decomposed completely at a temperature of 550 °C.
Although whiteness is a crucial index for evaluating the thermostability of ETFE in application, studies relating to this issue are rare in the literature, especially regarding whiteness after aging. As shown in Figure 3, the whiteness of ETFE at various irradiation doses shows a downtrend with increasing aging time, due to chain scission of the cross-linked system.
Kinetic Analysis of DSC
As is well known, ETFE is a semi-crystalline thermoplastic polymer with a high degree of crystallinity. As a consequence, the insulation properties of ETFE cables depend on its crystallization behavior to a certain extent. On the other hand, irradiation processing in most cases alters the crystallinity of such a semi-crystalline polymer. Therefore, it is important to survey the crystallization kinetics of ETFE because of the intimate connection between the properties and the crystallization behavior.
With increasing irradiation dose, increasing transparency is observed in ETFE specimens irradiated under oxygen-free conditions compared to the unirradiated, opaque ETFE. This phenomenon indicates that the crystallinity of radiation-treated ETFE is reduced. In order to illustrate the structural changes induced in ETFE, DSC measurements were conducted and the obtained thermograms were further analyzed to calculate the degree of crystallinity. Temperatures from the first heating run are plotted against absorbed dose in Figure 4, and the differential scanning calorimetry results are shown in Table 1 for ETFE irradiated under an oxygen-free atmosphere at room temperature. The melting temperature of the irradiated polymers, which reflects the crystallite sizes, displays a significant shift towards lower values as the irradiation dose increases. Moreover, the degree of crystallinity of the irradiated ETFE follows a tendency parallel to that of the melting temperatures. The result derived from the non-isothermal DSC scans can be attributed to cross-linking preventing the packing of chains and restricting the mobility of molecules. Crystallization in the amorphous region is impeded as the cross-linking effect increases. This eventually results in a remarkable decrease in the heat of melting and the degree of crystallinity.
On the other hand, non-isothermal DSC, performed at four heating rates (5, 10, 15 and 20 °C/min), was employed to study the non-isothermal oxidation induction temperature of the ETFE samples with different doses and of unirradiated ETFE. The corresponding curves and results are depicted in Figures S1-S8 and Tables S1-S4 of the supporting information. Kissinger's equation is used to estimate the activation energy (∆Ea) of ETFE for a clearer description of the variation of the non-isothermal kinetics of ETFE [21], where β is the heating rate, R is the gas constant, and Tmax is the crystallization peak temperature.
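The equation itself is missing from the extracted text; Kissinger's relation in its standard form, consistent with the ln(β/T²max) versus 1/Tmax plots referenced in the supporting information, reads

\[ \ln\!\left(\frac{\beta}{T_{\max}^{2}}\right) = -\frac{\Delta E_a}{R\,T_{\max}} + \mathrm{const}, \]

so that ∆Ea is obtained from the slope of a linear fit of ln(β/T²max) against 1/Tmax.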
The relationship of Ea versus absorbed dose is shown in Figure 5. As shown in Figure 5, the declining trend in activation energy can be ascribed to the cross-linking effect produced by the irradiation process, which reduces the crystallization rate of ETFE. The aging resistance of ETFE is a significant factor to consider for a potential aircraft cable candidate. In order to evaluate the effect of aging time on the crystallization behavior, aging experiments were conducted on ETFE samples exposed to a dose of 180 kGy. The ETFE samples subjected to various aging times were measured with non-isothermal DSC. This can clearly be observed from the relationship between aging time and the degree of crystallinity shown in Figure 6, and the crucial parameters are summarized in Table 2. It is noteworthy that both ∆Hm and the degree of crystallinity show a steadily increasing trend with aging time. It can be affirmed that an increase in molecular mobility stemming from the main-chain degradation reaction induced some of the broken polymer chains to recrystallize and others in the amorphous region to crystallize during the aging process.
Mechanical Properties
The determination of the mechanical properties of radiation-crosslinked ETFE after aging is one of the most substantial subjects in estimating the performance of cable materials based on ETFE. The tensile strength and elongation at break of ETFE with various irradiation doses as a function of aging time are illustrated in Figure 7a,b, respectively. As can be seen in Figure 7, the sharp reduction in the tensile strength and elongation at break of unirradiated ETFE with increasing aging time is expected, given that chain scission and oxidation occur throughout the reaction. Furthermore, no obvious changes in the tensile strength and elongation at break of ETFE irradiated at different doses are observed with increasing aging time. It can be asserted that the cross-linked structure formed in radiation processing effectively obstructs main-chain degradation during the aging process. Hence, the slight reduction in tensile strength and the trifling increase in elongation at break are both due to the decline in the extent of cross-linking caused by chain scission during aging.
Fracture Morphology
In order to investigate the structure-property relationship more thoroughly, the morphology of the fracture surfaces after the tensile test was investigated by SEM. Figure 8 shows the SEM fractographs of ETFE with various irradiation doses after the aging reaction. It is clear that the fracture surface of the unirradiated ETFE after 504 h of aging treatment is very smooth, indicating that the resistance to crack propagation is very low after the aging reaction, in accordance with its inferior mechanical strength. The micrographs of ETFE irradiated at doses of 60 kGy and 120 kGy after 504 h of aging treatment (Figure 8b,c) show deformed failure surfaces compared to unirradiated ETFE, which can be attributed to the formation of a cross-linking network that acts as a crack-stopper and can change the direction of crack propagation when the specimen is loaded.
Volume Resistivity
The volume resistivity of ETFE, as one of the most significant properties in the field of aviation insulation materials, was investigated via a high resistance meter. As is well known, the value of volume resistivity is influenced by the molecular structure of the polymer: the higher the crystallinity, the higher the volume resistivity. The oriented macromolecules that produce high crystallinity in the matrix are bound firmly together; the compact structure can effectively reduce the mobility of charge carriers (ions) and thus gives rise to an increase in volume resistivity. Figure 9 presents the variation of the volume resistivity of ETFE with various irradiation doses against aging time. As mentioned above, the degree of crystallinity exhibits a steadily increasing trend during the aging process. Therefore, it is clearly seen that ETFE with various irradiation doses presents a progressive enhancement in volume resistivity as the aging time increases. The volume resistivity of ETFE irradiated at a dose of 180 kGy is far lower than that of the specimens irradiated at 60 kGy and 120 kGy. This behavior can be explained by the ETFE irradiated at 180 kGy possessing a higher cross-linking degree, with the cross-linking effect hindering chain scission of the cross-linked system during aging. Therefore, the ETFE irradiated at 180 kGy retains a larger amorphous region, and this structure causes the inferior volume resistivity.
Dynamic Mechanical Thermal Analysis
DMA is an efficient method for providing significant information at the molecular level, helping us understand polymer mechanical behavior; the storage modulus (E') and loss modulus (E") of the sample under flexural load are measured against time, temperature or frequency of the flexural load. As shown in Figure 10, the irradiated ETFE aged for over 336 h shows a higher storage modulus and loss factor than the non-irradiated sample after the same 336 h of aging treatment. Meanwhile, the storage modulus and loss factor of irradiated ETFE increase mildly with increasing dose. It can be reasonably assumed that the cross-linking network of ETFE resulting from the radiation reaction effectively enhances the thermo-oxidative stability of the main chain during aging. Therefore, irradiated ETFE at the various doses still retains, to a certain extent, part of the cross-linked structure. This reduces the free volume, hinders segmental motion, and keeps the macromolecular chains in a frozen state, resulting in the superior storage modulus and Tg.
Conclusions
In this study, the effect of irradiation on ETFE with TAIC as curing agent and the aging properties of irradiation cross-linked ETFE were systematically investigated. The TGA results demonstrate that the thermal stability of irradiation cross-linked ETFE is considerably lower than that of unirradiated ETFE; that is, a downtrend in the thermostability of irradiation cross-linked ETFE was observed. Although the crystallinity of ETFE decreased with increasing dose, an improvement in the crystallinity of ETFE after aging was observed in the nonisothermal DSC measurements. Furthermore, the mechanical properties and volume resistivity of cross-linked ETFE as a function of aging time were reported, and the results indicate that cross-linking is a crucial factor for these properties.
Supplementary Materials: The following are available online at https://www.mdpi.com/1996-1944/14/2/257/s1, Figure S1: DSC curves of the non-isothermal oxidation induction temperature of unirradiated ETFE at different heating rates, Figure S2: Linear relationship of ln(β/T²max) versus 1/Tmax, Figure S3: DSC curves of the non-isothermal oxidation induction temperature of ETFE that absorbed 60 kGy at different heating rates, Figure S4: Linear relationship of ln(β/T²max) versus 1/Tmax, Figure S5: DSC curves of the non-isothermal oxidation induction temperature of ETFE that absorbed 120 kGy at different heating rates, Figure S6: Linear relationship of ln(β/T²max) versus 1/Tmax (the value of Ea, 124.37 kJ/mol, can be calculated according to Kissinger's equation), Figure S7: DSC curves of the non-isothermal oxidation induction temperature of ETFE that absorbed 180 kGy at different heating rates, Figure S8: Linear relationship of ln(β/T²max) versus 1/Tmax, Table S1: Non-isothermal oxidation induction data obtained from the DSC scans at different heating rates, Table S2: Non-isothermal oxidation induction data obtained from the DSC scans at different heating rates, Table S3: Non-isothermal oxidation induction data obtained from the DSC scans at different heating rates, Table S4: Non-isothermal oxidation induction data obtained from the DSC scans at different heating rates. | 7,494.2 | 2021-01-01T00:00:00.000 | [
"Materials Science"
] |
GA Based Heuristic to Minimize Makespan in Single Machine Scheduling Problem with Uniform Parallel Machines
This paper considers the single machine scheduling problem with uniform parallel machines in which the objective is to minimize the makespan. Four different GA based heuristics are designed by taking different combinations of crossover methods, viz. the single point crossover method and the two point crossover method, and of job allocation methods used while generating the initial population, viz. allocation of an equal number of jobs to machines and allocation of a proportionate number of jobs to machines based on machine speeds. A detailed experiment has been conducted by considering three factors, viz. problem size, crossover method and job allocation method, on 135 problem sizes, each with two replications, generated randomly. Finally, it is suggested to use the GA based heuristic with the single point crossover method, in which a proportionate number of jobs is allocated to machines based on machine speeds.
Introduction
The single machine scheduling problem with parallel machines is classified into the following three categories: single machine scheduling with identical parallel machines; single machine scheduling with uniform parallel machines; single machine scheduling with unrelated parallel machines. Let t ij be the processing time of job j on machine i, for i = 1, 2, 3, •••, m and j = 1, 2, 3, •••, n.
Then the types of parallel machines scheduling problem are defined using this processing time.
1) If t ij = t 1j for all i and j, then the problem is called the identical parallel machines scheduling problem.
2) If t ij = t 1j /s i for all i and j, where s i is the speed of machine i and t 1j is the processing time of job j on machine 1, then the problem is termed the uniform (proportional) parallel machines scheduling problem.
3) If t ij is arbitrary for all i and j, then the problem is known as the unrelated parallel machines scheduling problem.
In this paper, the single machine scheduling problem with uniform parallel machines is considered with the objective of minimizing the makespan. When n jobs, each with a single operation, are scheduled on m parallel machines, each parallel machine has a completion time for the last job assigned to it.
The maximum of such completion times on all the parallel machines is known as the makespan of the parallel machines scheduling problem, which is an important measure of performance [1].
The characteristics of the uniform parallel machines scheduling problem are as listed below.
- It has n single-operation jobs.
- It has m parallel machines with different speeds (s 1 < s 2 < s 3 < ••• < s m).
- The m machines are continuously available and they are never kept idle while work is waiting.
- t 1j is the processing time of job j on machine 1 for j = 1, 2, 3, •••, n.
- For each job, its processing times on the uniform parallel machines are inversely proportional to the speeds of those parallel machines (1/s 1 : 1/s 2 : 1/s 3 : ••• : 1/s m), where s 1 is the unit speed.
- t ij = t 1j /s i for j = 1, 2, 3, •••, n and i = 2, 3, •••, m.
In this paper, off-line, non-preemptive single machine scheduling problem with uniform parallel machines is considered.
Literature Review
In this section, the review of off-line, non-preemptive single machine scheduling problem with uniform parallel machines is presented.
Panneerselvam Senthilkumar and Sockalingam Narayanan [2] have presented a comprehensive review of the literature on the single machine scheduling problem with uniform parallel machines, in which 17 classifications were discussed. Prabuddha De and Thomas E. Morton [3] have developed a new heuristic to schedule jobs on uniform parallel processors to minimize the makespan. It was tested on a large number of problems for both uniform and identical processors. They found that the solutions given by the heuristic for uniform parallel machines scheduling are within 5% of the solutions given by the branch and bound algorithm. Bulfin and Parker [4] have considered the problem of scheduling tasks on a system consisting of two parallel processors such that the makespan is minimized. In particular, they treated a variety of modifications to this basic theme, including the cases of identical processors, proportional (uniform) processors and unrelated processors. In addition, they suggested a heuristic scheme for the case when precedence constraints exist.
Friesen and Langston [5] examined the non-preemptive assignment of n independent tasks to a system of m uniform processors with the objective of reducing the makespan. It is known that LPT (longest processing time first) schedules are within twice the length of the optimum makespan [6]. They analyzed a variation of the MULTIFIT algorithm derived from the algorithm for the bin packing problem and proved that its worst-case performance bound on the makespan is within 1.4 times the optimum makespan. Gregory Dobson [7] has given a worst-case analysis of applying the LPT (longest processing time) heuristic to the problem of scheduling independent tasks on uniform processors with minimum makespan. In this research, a bound of 19/12 is derived on the ratio of the heuristic makespan to the optimal makespan. Friesen [8] examined the non-preemptive assignment of independent tasks to a system of uniform processors with the objective of minimizing the makespan. The author showed that the worst case bound for the largest processing time first (LPT) algorithm for this problem is tightened to lie in the interval (1.52, 1.67). Hochbaum and Shmoys [9] devised a polynomial approximation scheme for the makespan minimization problem on uniform parallel processors. The technique employed is the dual approximation approach, where infeasible but super-optimal solutions for a related (dual) problem are converted to the desired feasible but possibly suboptimal solution.
Chen [10] has examined the non-preemptive assignment of independent tasks to a system of m uniform processors with the objective of minimizing the makespan. The author examined the performance of the LPT (largest processing time) schedule with respect to optimal schedules, using the ratio of the fastest speed to the slowest speed of the system as a parameter.
Mireault, Orlin and Vohra [11] have considered the problem of minimizing the makespan when scheduling independent tasks on two uniform parallel machines. Of the two machines, the efficiency of one machine is q times that of the other. They computed the maximum relative error of the LPT (largest processing time first) heuristic as a function of q.
Burkard and He [12] derived the tight worst case bound for scheduling jobs using the MULTIFIT heuristic on two parallel uniform machines with k calls of FFD (first fit decreasing) within MULTIFIT. Burkard, He and Kellerer [13] have developed a linear compound algorithm for scheduling jobs on uniform parallel machines with the objective of minimizing the makespan. This algorithm has three subroutines, which run independently in order to choose the best assignment among them. Panneerselvam and Kanagalingam [14] have presented a mathematical model for the parallel machines scheduling problem with varying speeds in which the objective is to minimize the makespan. They also discussed industrial applications of such scheduling problems. Panneerselvam and Kanagalingam [15] have given a heuristic to minimize the makespan for scheduling n independent jobs on m parallel processors with different speeds. Agarwal, Colak, Jacob and Pirkul [16] have proposed new heuristics along with an augmented-neural-network (AugNN) formulation for solving the makespan minimization task-scheduling problem for the non-identical machine environment. They explored four task-priority and three machine-priority rules, resulting in 12 combinations of single-pass heuristics. They gave the AugNN formulation for each of the 12 heuristics and showed computational results on 100 randomly generated problems of sizes ranging from 20 to 70 tasks and 2 to 5 machines. The results clearly showed that AugNN provides significant improvement over single-pass heuristics.
Panneerselvam Senthilkumar and Sockalingam Narayanan [17] have developed a simulated annealing algorithm to minimize the makespan in the single machine scheduling problem with uniform parallel machines. In the first phase, a seed generation algorithm is presented, followed by three variations of the simulated annealing algorithm. They compared these three simulated annealing algorithms and found that there is no significant difference among them in terms of makespan. So, they suggested using all three simulated annealing algorithms for a given problem and selecting the best solution.
Cristina Mihaila and Alin Mihaila [18] have developed an evolutionary algorithm for the single machine scheduling problem with uniform parallel machines, in which the objective is to minimize the makespan. They also compared their algorithm with other meta-heuristics and reported the results. Alin Mihaila and Cristina Mihaila [19] have developed a genetic algorithm to minimize the makespan of the uniform parallel machines scheduling problem under single machine scheduling, experimented with instance problems, and reported that their algorithm performs better when compared to other algorithms.
From the literature, it is clear that the problem of minimizing the makespan in the single machine scheduling problem with uniform parallel machines is combinatorial in nature. Hence, the development of a heuristic is inevitable for this problem, and in this paper an attempt has been made to design a GA based heuristic to minimize the makespan in the single machine scheduling problem with uniform parallel machines.
Factors Affecting GA Based Heuristic
In this paper, a GA based heuristic is designed to minimize the makespan in single machine scheduling problem with uniform parallel machines.
The genetic algorithm mimics the mechanisms of selection and evolution. It generates successive populations of alternative solutions until a solution is obtained that yields acceptable results. It is based on the fundamental processes that control the evolution of biological organisms, namely natural selection and reproduction.
The skeleton of the genetic algorithm is given below [20].
Step 1: Input the maximum number of successive populations to be generated (Q). Let the generation count (GC) be 1.
Step 2: Generate a random initial population with N chromosomes. Let this population be L.
Step 3: Evaluate the fitness function f(x) of each chromosome x in L.
Step 4: Sort L in ascending/descending order as per the objective and copy a specified percentage of chromosomes (30%) into a subpopulation P.
Step 5: Randomly select two chromosomes and do the following: 5.1 Perform the crossover operation. 5.2 Perform mutation of each offspring with a mutation probability α.
Place the two new offspring in L along with their fitness function values.
Step 6: Repeat Step 5 until all the chromosomes in P are considered.
Step 9: From L, identify the chromosome which has the best fitness function value and print its results.
Step 10: Stop. The performance of the genetic algorithm is suspected to be affected by the crossover method, the mutation, the way in which the initial population is generated, and the problem size.
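A minimal Python sketch of this skeleton is given below; the function names (fitness, make_chromosome, crossover, mutate) and the parameter defaults are placeholders rather than the authors' implementation, and in the present problem the fitness function would be the makespan of a chromosome.

```python
def genetic_algorithm(fitness, make_chromosome, crossover, mutate,
                      N=50, Q=100, sub_frac=0.3):
    """Hedged sketch of Steps 1-10 above.

    fitness         -- maps a chromosome to the value being minimised (e.g. makespan)
    make_chromosome -- returns one random chromosome (Step 2)
    crossover       -- maps two parent chromosomes to two offspring (Step 5.1)
    mutate          -- mutates one chromosome with some mutation probability (Step 5.2)
    """
    L = [make_chromosome() for _ in range(N)]      # Step 2: random initial population
    for _ in range(Q):                             # Step 1: Q successive generations
        L.sort(key=fitness)                        # Steps 3-4: evaluate and sort
        P = L[: max(2, int(sub_frac * N))]         # top 30% copied into subpopulation P
        for i in range(0, len(P) - 1, 2):          # Steps 5-6: pair up chromosomes in P
            c1, c2 = crossover(P[i], P[i + 1])
            L.extend([mutate(c1), mutate(c2)])     # offspring placed back into L
        L.sort(key=fitness)
        L = L[:N]                                  # keep the population size bounded
    best = min(L, key=fitness)                     # Steps 9-10: report the best chromosome
    return best, fitness(best)
```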
In the GA based heuristic, the design factors considered in this paper are as listed below.
- Crossover method (Factor B), for which the levels are "single point crossover method" and "two point crossover method".
- Method of allocation of jobs to machines while generating the initial population (Factor C), for which the levels are "equal number of jobs allocated to machines" and "proportionate number of jobs allocated to machines, based on the speed of the machines".
Methods of Job Allocation to Machines
This section explains the methods of allocating jobs to the different machines while generating the initial population (Factor C), viz. equal number of jobs allocated to machines and proportionate number of jobs allocated to machines.
Equal Number of Jobs Allocation to Machines
In the method which assigns an equal number of jobs to each machine, the construction of a chromosome is explained below.
Let NJ i be the number of jobs assigned to machine i; the makespan of a chromosome is then determined by assuming the processing times as in Table 2.
If the number of jobs is 9 and the number of machines is 3, then a sample chromosome is as presented by Chromosome 1 in Table 1, obtained by randomly assigning each machine number to three jobs. If the number of jobs and the number of machines are 10 and 3, respectively, then a sample chromosome is as presented by Chromosome 2 in Table 1.
Proportionate Number of Jobs Allocation to Machines
The construction of a chromosome in the method of proportionate number of jobs allocation to machines is explained below.
Let the speed ratio of the machines be S 1 : S 2 : S 3 : ••• : S m. In the above representation of chromosomes, each gene represents the machine to which the corresponding job is assigned. The genes are generated randomly, subject to fulfilling the number of jobs assigned to each of the machines.
Let NJ i be the number of jobs assigned to machine i; the determination of the makespan for a chromosome follows from this assignment (a sketch is given below). Let the number of jobs be 10 and the number of machines be 4, with a speed ratio of 1:2:3:4 for machines 1, 2, 3 and 4, respectively. A sample chromosome for this situation is as presented by Chromosome 3 in Table 3, obtained by randomly assigning Machine 1 to one job, Machine 2 to two jobs, Machine 3 to three jobs and Machine 4 to four jobs as per their speed ratio. Assume another situation in which the number of jobs and the number of machines are 10 and 5, respectively. Let the speed ratio of the machines be 1:2:3:4:5 for machines 1, 2, 3, 4 and 5, respectively. A sample chromosome for this situation is as presented by Chromosome 4 in Table 3, obtained by assigning Machine 1 to one job, Machine 2 to one job, Machine 3 to two jobs, Machine 4 to two jobs and Machine 5 to four jobs as per the speed ratio.
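The sketch below illustrates how the two initial-population encodings and the makespan of a chromosome can be computed; the processing times t1 in the example are made-up data, not the values of Table 2.

```python
import random

def equal_allocation_chromosome(n_jobs, m):
    """Assign a (roughly) equal number of jobs to each machine, in random order."""
    genes = [i % m + 1 for i in range(n_jobs)]          # machine numbers 1..m repeated
    random.shuffle(genes)
    return genes

def proportionate_allocation_chromosome(n_jobs, speeds):
    """Assign jobs to machines in proportion to the machine speeds."""
    total = sum(speeds)
    counts = [round(n_jobs * s / total) for s in speeds]
    while sum(counts) > n_jobs:                         # fix rounding so counts sum to n_jobs
        counts[counts.index(max(counts))] -= 1
    while sum(counts) < n_jobs:
        counts[counts.index(min(counts))] += 1
    genes = [i + 1 for i, c in enumerate(counts) for _ in range(c)]
    random.shuffle(genes)
    return genes

def makespan(chromosome, t1, speeds):
    """Makespan = maximum machine completion time, with t_ij = t_1j / s_i."""
    completion = [0.0] * len(speeds)
    for job, machine in enumerate(chromosome):
        completion[machine - 1] += t1[job] / speeds[machine - 1]
    return max(completion)

# Example: 10 jobs, 4 machines with speed ratio 1:2:3:4 (processing times are assumed data).
t1 = [12, 7, 9, 4, 15, 6, 11, 8, 5, 10]
chrom = proportionate_allocation_chromosome(10, [1, 2, 3, 4])
print(chrom, makespan(chrom, t1, [1, 2, 3, 4]))
```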
Crossover Methods
In this paper, the single point crossover method and the two point crossover method are used in the experiment conducted to select the factors affecting the performance of the GA based heuristic to minimize the makespan of the single machine scheduling problem with uniform parallel machines. These are demonstrated using Chromosomes 5 and 6, given below, in which the number of jobs is 10 and the number of machines is 4.
Single Point Crossover Method
The single point crossover method is explained using Chromosomes 5 and 6. Let the random position selected in the range 1 to 10 (positions of the job numbers in the chromosomes) be 4. Then, Chromosome 5 is divided into two parts, P and Q, and Chromosome 6 is divided into two parts, X and Y, as shown below.
Two Point Crossover Method
The two point crossover method is explained using the same pair of Chromosomes 5 and 6. Let the two random positions selected in the range 1 to 10 (positions of the job numbers in the chromosomes) be 3 and 6. Then, Chromosome 5 is divided into three parts, P, Q and R, and Chromosome 6 is divided into three parts, X, Y and Z, as shown below.
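Under the machine-number encoding of the chromosomes, the two crossover operators and a simple mutation can be sketched as follows; note that this plain sketch does not include any repair of the per-machine job counts after crossover, which the equal or proportionate allocation schemes may require.

```python
import random

def single_point_crossover(parent1, parent2):
    """Cut both parents at one random position and exchange the tails (parts P|Q and X|Y)."""
    pos = random.randint(1, len(parent1) - 1)
    return parent1[:pos] + parent2[pos:], parent2[:pos] + parent1[pos:]   # P+Y, X+Q

def two_point_crossover(parent1, parent2):
    """Cut both parents at two random positions and exchange the middle parts (P|Q|R and X|Y|Z)."""
    a, b = sorted(random.sample(range(1, len(parent1)), 2))
    child1 = parent1[:a] + parent2[a:b] + parent1[b:]                     # P + Y + R
    child2 = parent2[:a] + parent1[a:b] + parent2[b:]                     # X + Q + Z
    return child1, child2

def mutate(chromosome, alpha, m):
    """With probability alpha, reassign one randomly chosen job to a random machine."""
    child = list(chromosome)
    if random.random() < alpha:
        child[random.randrange(len(child))] = random.randint(1, m)
    return child
```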
GA Based Heuristic to Minimize Makespan
As stated earlier, the performance of the GA based heuristic to minimize the makespan of the single machine scheduling problem with uniform parallel machines is mainly suspected to be affected by the factors "Crossover Method" and "Job Allocation Method", each having two methods. So, four (2 × 2 = 4) GA based heuristics obtained by combining the levels of these factors to minimize the makespan are presented in this section.
GA Based Heuristic with Single-Point Crossover Method and Equal Number of Job Allocation Method
The steps of the GA based heuristic with single point crossover method and equal number of job allocation method to minimize the makespan of the single machine scheduling problem with uniform parallel machines are presented below.
Step 1: Input the following.
- Number of machines (m)
- Number of jobs (n) [it is assumed that n ≥ m]
- Speed ratio of the machines: S 1 : S 2 : S 3 : ••• : S m
- Mutation probability, α (0.3)
Step 2: Set the genetic algorithm parameters: size of population, N; size of subpopulation, P (30% of N); number of iterations to be carried out, Q.
Step 3: Construct the N chromosomes of the population by allocating an equal number of jobs to the machines in each chromosome, CHROM K,J , K = 1, 2, 3, •••, N and J = 1, 2, 3, •••, n.
Step 5: Set the Iteration Number q to 1.
Step 6: Sort the chromosomes in the ascending order of their makespans.
Step 9: Treat the topmost 30% of the population (0.3N = P) of the sorted population as subpopulation for crossover operation.
Step 10: Set the chromosome number, C = 1.
Step 11: Perform single-point crossover between chromosomes C and C + 1 as listed below and obtain offspring C and C + 1.
Crossover between: CHROM C,J and CHROM C+1,J , J = 1, 2, 3, •••, n.
Step 12: Perform mutation in each of the offspring with a mutation probability of α.
Step 13: Compute the makespan of each of the offspring.[MS C and MS C+1 ].
Step 15: If C ≤ P then go to Step 11.
Step 18: If q ≤ Q, then go to Step 6.
GA Based Heuristic with Two Point Crossover Method and Equal Number of Jobs Allocation Method
The steps are the same as those given in Section 4.1, except Step 11. Step 11 of the two-point crossover is shown below.
Step 11: Perform two point crossover between the chromosomes C and C + 1 as listed below and obtain the offspring C and C + 1.
GA Based Heuristic with Single Point Crossover Method and Proportionate Number of Jobs Allocation Method
The steps are the same as those given in Section 4.1, except Step 3. Step 3 is shown below.
GA Based Heuristic with Two Point Crossover Method and Proportionate Number of Jobs Allocation Method
The steps are the same as those given in Section 4.1, except Step 3 and Step 11, which are shown below.
Step 3: Construct the N chromosomes of the population by allocating a proportionate number of jobs to the machines in each chromosome, CHROM K,J , K = 1, 2, 3, •••, N and J = 1, 2, 3, •••, n.
Step 11: Perform two point crossover between the chromosomes C and C + 1 as listed below and obtain the offspring C and C + 1.
Experimentation
In the GA based heuristic, the design factors are as listed below.
- Problem size (Factor A), in terms of the number of machines and the number of jobs.
- Crossover method (Factor B), for which the levels are "single point crossover method" and "two point crossover method".
- Method of allocation of jobs to machines while generating the initial population (Factor C), for which the levels are "equal number of jobs allocated to machines" and "proportionate number of jobs allocated to machines, based on the speed of the machines".
A comparison is made between the GA based heuristics over these three factors to minimize the makespan of the single machine scheduling problem with uniform parallel machines.
The problems are generated by varying the number of machines (m) from 2 to 10 in increments of 1 and the number of jobs from 11 to 25 in increments of 1. The speed ratio of the machines is assumed to be the ratio of the machine numbers; if a problem has four machines, the speed ratio of machines 1, 2, 3 and 4 is 1:2:3:4, respectively.
For each combination of the factors, two replications have been carried out. So, 270 problems (135 problem sizes with two replications in each problem size) were generated randomly as per the layout shown in Table 4.
The values of the makespan of the problems under each experimental combination are obtained. The percent deviation of the makespan of a problem from the minimum makespan of that problem is then computed for each run (a sketch of this computation is given after the model terms below). The corresponding ANOVA model [21] is

Y ijkl = μ + A i + B j + AB ij + C k + AC ik + BC jk + ABC ijk + e ijkl
where Y ijkl is the percent deviation of the makespan with respect to the l th replication under the i th problem size, j th crossover method and k th job allocation method, and μ is the overall mean of the percent deviation of the makespan values.
A i is the effect of the i th problem size on the percent deviation of the makespan value.
B j is the effect of the j th crossover method on the percent deviation of the makespan value.
AB ij is the interaction effect of the i th problem size and j th crossover method on the percent deviation of the makespan value.
C k is the effect of k th job allocation method on the percent deviation of the makespan value.
AC ik is the interaction effect of the i th problem size and k th job allocation method on the percent deviation of the makespan value.
BC jk is the interaction effect of the j th crossover method and k th job allocation method on the percent deviation of the makespan value. ABC ijk is the interaction effect of the i th problem size, j th crossover method and k th job allocation method on the percent deviation of the makespan value.
e ijkl is the random error associated with the l th replication under i th problem size, j th crossover method and k th job allocation method.
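As an illustration only: the exact percent-deviation formula is not reproduced in this text, so the sketch below assumes the usual relative deviation from the best (minimum) makespan found for a problem instance, and fits the full three-factor ANOVA model with the statsmodels package; the column names problem_size, crossover, allocation and pct_dev are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def percent_deviation(makespan, best_makespan):
    """Assumed response: deviation of a heuristic's makespan from the minimum
    makespan obtained for that problem instance, expressed in percent."""
    return 100.0 * (makespan - best_makespan) / best_makespan

def run_anova(df: pd.DataFrame):
    """df holds one row per run with columns problem_size, crossover, allocation, pct_dev."""
    model = smf.ols(
        "pct_dev ~ C(problem_size) * C(crossover) * C(allocation)", data=df
    ).fit()
    # Returns the ANOVA table with F ratios for A, B, C and all their interactions.
    return sm.stats.anova_lm(model, typ=2)
```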
The different hypotheses of this model are listed below.
Factor: Problem Size (A)
H 0 : There is no significant difference between problem sizes in terms of the percent deviation of makespan value.
H 1 : There is significant difference between problem sizes in terms of the percent deviation of makespan value.
Factor: Crossover Method (B)
H 0 : There is no significant difference between crossover methods in terms of the percent deviation of makespan value.
H 1 : There is significant difference between crossover methods in terms of the percent deviation of makespan value.
Interaction: Problem Size (A) × Crossover Method (B)
H 0 : There is no significant difference between different pairs of interaction terms of problem size and crossover method in terms of the percent deviation of makespan value.
H 1 : There is significant difference between different pairs of interaction terms of problem size and crossover method in terms of the percent deviation of makespan value.
Factor: Job Allocation Method (C)
H 0 : There is no significant difference between job allocation methods in terms of the percent deviation of makespan value.
H 1 : There is significant difference between job allocation methods in terms of the percent deviation of makespan value.
Interaction: Problem Size (A) × Job Allocation Method (C)
H 0 : There is no significant difference between different pairs of interaction terms of problem size and job allocation method in terms of the percent deviation of makespan value.
H 1 : There is significant difference between different pairs of interaction terms of problem size and job allocation method in terms of the percent deviation of makespan value.
Interaction: Crossover Method (B) × Job Allocation Method (C)
H 0 : There is no significant difference between different pairs of interaction terms of crossover method and job allocation method in terms of the percent deviation of makespan value.
H 1 : There is significant difference between different pairs of interaction terms of crossover method and job allocation method in terms of the percent deviation of makespan value.
Interaction: Problem Size (A) × Crossover method (B) × Job Allocation Method (C)
H 0 : There is no significant difference between different combinations of interaction terms of problem size, crossover method and job allocation method in terms of the percent deviation of makespan value.
H 1 : There is significant difference between different combinations of problem size, crossover method and job allocation method in terms of the percent deviation of makespan value.
The results of the corresponding ANOVA model are shown in Table 5.
The hypotheses for which the effects are significant are as listed below.
Factor "Problem Size (A)" In the Table 5, the calculated F ratio for the factor "Problem Size (A)" is 4.65, which is more than the corresponding table F value of 1 for (134, 540) degrees of freedom at a significance level of 0.05.Hence, the alternate hypothesis is accepted.This means that there is sig- nificant difference between the problem sizes in terms of percent deviation of makespan.
Factor "Job Allocation Method" (C) In the Table 5, the calculated F ratio for the factor, "Job Allocation Method (C)" is 546.169, which is more than the table F value of 3.84 for (1,540) degrees of freedom at a significance level of 0.05.Hence, the corresponding alternate hypothesis is accepted.This means that there is significant difference between the job allocation methods in terms of percent deviation of makespan.
Interaction "Problem Size × Job Allocation Method" (A × C) In the Table 5, the calculated F ratio for the interaction "Problem Size × Job Allocation Method" is 2.72, between the single point crossover method and two point crossover method in terms of the percentage deviation of the makespan.So, any of these two crossover methods can be selected for implementation.However, the two point crossover method is selected because of its reduced mean percent deviation of the makespan values.Based on the above discussions, it is recommended to use the GA based heuristic with two point crossover method, in which the initial population is generated by way of proportionate allocation of jobs to the machines based on the machine speeds.
The research reported in this paper is a significant contribution to the single machine scheduling problem with uniform parallel machines in terms of designing a meta-heuristic to minimize the makespan. Future research may focus on uniform parallel machines scheduling with stochastic processing times for the jobs.
| 5,924.8 | 2011-09-20T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Design and Implementation of Brushless DC Motor based Solar Water Pumping System for Agriculture using Arduino UNO
This technical paper focuses on the design and implementation of a brushless dc (BLDC) motor based solar water pumping system for agriculture. The abundant solar energy has been effectively tracked and utilized for pumping water for urban-area agriculture where electricity is not available or would be costly. This paper presents the complete design of a solar powered water pumping system based on the water requirement and other factors for one acre of land. For extracting the maximum power generated by the photovoltaic (PV) array, a Cuk converter is used as the maximum power point tracker with the basic perturb and observe (P&O) algorithm. The dc link voltage obtained from the Cuk converter is fed to a three phase inverter to provide the proper supply to the BLDC motor. The BLDC motor pumping system is selected among other motor pumping systems because it has features such as small size, noiseless operation, long operating life, low maintenance and high output torque. The performance of the system has been validated using an Arduino UNO. The given design can be used for any size of land by simply recalculating the parameters. Keywords—brushless dc motor, photovoltaic array, perturb and observe.
I. INTRODUCTION
Nowadays, agricultural output has drastically diminished due to frequent power cuts and the reduction in underground water levels. Considering this problem, solar water pumping is the best choice to improve the agricultural outcome. Fig. 1 shows the block diagram of the BLDC motor based solar water pumping system. Solar energy is widely available and free of cost, so it is an attractive solution in many fields. The main issue is the high initial cost, but the long life span of about twenty five years gives a good payback period [1]. Solar water pumping systems initially used a DC motor, because it does not need an intermediate voltage conversion, but it requires regular maintenance due to the presence of the commutator and brushes [2]. The induction motor is also a good fit for solar water pumping systems due to its reliability and ruggedness; however, the control of an induction motor is complex because it requires field oriented control, and it also needs an intermediate voltage conversion [3]-[4]. Considering these issues, the BLDC motor is preferred among the other commercially available motors, because it has features such as small size, noiseless operation, long operating life, low maintenance and high output torque [5]. A Cuk converter is used for maximum power point tracking and provides the necessary dc link voltage. The Cuk converter is able to provide a non-inverted buck/boost voltage at its output and its current is continuous, which eliminates the need for an external filter. Hence it provides a ripple free voltage at the input of the BLDC motor, thus avoiding oscillations in the motor torque [6]-[7]. The P&O MPPT algorithm has been used to track the maximum power because of its simplicity and easy implementation [8]-[9]. This paper is organized as follows: Section II deals with the design of the parameters based on the water requirement of one acre of agricultural land; Section III deals with the simulation and its results using the data designed in Section II, and the implementation and validation of the results using an Arduino UNO.
A. Designing of PV water pumping system
The design parameters of the BLDC motor based solar water pumping system depend on the water requirement of the particular agricultural land. In this paper the parameters are chosen by considering the water requirement of one acre of agricultural land with rice as the crop, for which water is required to a depth of eight inches from the land surface [10]. Required water for the agricultural land in litres = 27154 × (area of land in acres) × (water depth in inches) × 3.78 (1). Required water for one acre of agricultural land = 27154 × 1 × 8 × 3.78 = 821136.96 litres ≈ 821137 litres.
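Equation (1) can be evaluated directly, as in the short sketch below (the constant 27154 is gallons per acre-inch and 3.78 converts gallons to litres):

```python
def required_water_litres(area_acres: float, depth_inches: float) -> float:
    """Water requirement following Eq. (1): 27154 gallons per acre-inch, 3.78 L per gallon."""
    return 27154 * area_acres * depth_inches * 3.78

print(required_water_litres(1, 8))   # ~821137 litres for one acre flooded to 8 inches
```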
B. Selection of centrifugal water pump
The required water for the rice crop, at a depth of eight inches from the land surface of one acre, is 821137 litres. The pump has been selected by considering the total water requirement, the discharge rate of the pump and the total system cost. The 2.5 HP Oswal monoblock surface mounted pump is selected; it operates in the water head range of 12-15 metres with a water flow rate of 405-190 LPM, and it costs about 10,000 INR. The selected pump is able to deliver the required water for one acre of land.
C. Selection of BLDC motor:
To meet the torque requirement of the selected pump, the TETRA85TR 3.2 BLDC motor is selected. Its power rating is about 1.8 kW. The motor parameters are listed in appendix section.
D. Selection of PV panel:
The 2.5 kW PV array is designed by considering the load and converter losses in addition to the power requirement of the PV water pumping system. The SOLKAR SPV module is selected for this application. Its technical details are listed in the appendix section. The open circuit voltage and short circuit current of the SOLKAR SPV module are as follows: open circuit voltage of a single PV panel (V oc ) = 21.24 V; short circuit current of a single panel (I scr ) = 2.55 A.
The available voltage and current of a single PV panel at full insolation are 80% of the open circuit voltage and short circuit current [11], so the available voltage and current of a single PV panel are 16.992 V and 1.8 A. The PV water pumping system requires a maximum voltage and current of 310 V and 6.57 A to run the BLDC motor effectively.
The maximum voltage available across the PV array is V MPP = 270 V.
E. Designing of Cuk converter parameters: Fig. 2 shows the Cuk converter circuit. The required dc link voltage is V dc = 300 V, considering the maximum operating voltage of the BLDC motor, and the duty cycle is computed accordingly [6]. The switching frequency is selected as 20 kHz to keep the inductor ripple current low and to reduce the inductor size.
Here f switch is the switching frequency and ΔI L1 is the inductor L 1 ripple current.
The inductor L 2 is calculated similarly: first the inductor current I L2 is calculated, and the inductor L 2 ripple current is taken as 6% of I L2.
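Since the design equations themselves are not reproduced above, the following sketch uses textbook Cuk converter relations (duty cycle D = Vdc/(Vdc + Vpv), inductors sized from the allowed ripple current at the switching frequency); the numerical values of Vpv, Vdc, output power and fsw are taken from the text, while the ripple fractions are assumptions.

```python
def cuk_design(v_pv=270.0, v_dc=300.0, p_out=1800.0, f_sw=20e3,
               ripple_l1=0.06, ripple_l2=0.06, ripple_vdc=0.01):
    """Hedged sketch of standard Cuk converter sizing; the exact equations of [6]
    may differ in detail."""
    D = v_dc / (v_dc + v_pv)            # Vdc/Vpv = D/(1 - D)  ->  duty cycle
    i_l1 = p_out / v_pv                 # average PV-side inductor current
    i_l2 = p_out / v_dc                 # average dc-link-side inductor current
    L1 = D * v_pv / (f_sw * ripple_l1 * i_l1)     # inductor for the chosen L1 ripple
    L2 = D * v_pv / (f_sw * ripple_l2 * i_l2)     # inductor for the chosen (6%) L2 ripple
    C_dc = D * i_l2 / (f_sw * ripple_vdc * v_dc)  # dc-link capacitor for the allowed ripple
    return D, L1, L2, C_dc

print(cuk_design())
```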
The DC link capacitor is calculated in a similar manner. The parameters designed in Section II and the motor parameters from the appendix section have been used for the simulation, which has been carried out using MATLAB/Simulink software. Fig. 3 shows the PV voltage versus current and PV voltage versus power curves under different solar insolation levels; nearly 2500 W of PV power is obtained at full insolation. In the simulation, the system performance has been tested under the different solar insolation levels shown in Fig. 4.a. The solar insolation is held at different levels over the total simulation time: at the start it is at full insolation, i.e., 1000 W/m2, up to 2 seconds, and it is then decremented to 600 W/m2 and 400 W/m2 at 2 and 4 seconds. Fig. 4.b and Fig. 4.c show the responses of the PV voltage and PV current: the PV voltage is maintained in the range 260-270 V over the entire simulation time, while the PV current changes abruptly between 5.2 and 9 A with the change in insolation. The PV power therefore changes with the insolation level, from 2500 W to 1400 W, as shown in Fig. 4.d. Fig. 4.e shows the dc link voltage, i.e., the output voltage of the Cuk converter; since the Cuk converter is fired by the MPPT algorithm, the pulse width is adjusted continuously and the dc link voltage varies from 320 to 220 V. Fig. 4.g, Fig. 4.h and Fig. 4.i show the BLDC motor quantities, namely the motor phase-a back EMF, the motor phase-a current, the motor speed and the electromagnetic torque; for the sake of clarity, zoomed versions of the motor phase-a back EMF and current are shown in Fig. 5.
The motor phase-a back EMF varies with the dc link voltage, i.e., with the solar insolation level, from about 110 V to 80 V. The motor phase-a current is about 6 A. The motor speed varies with the insolation level from 2900 rpm to 2000 rpm. The motor electromagnetic torque is about 3 to 4 Nm. The simulated motor speed and torque are sufficient to drive the selected centrifugal pump.
The perturb and observe (P&O) MPPT algorithm is used to track the maximum power from the PV array under changing insolation levels. The PV voltage versus current and PV voltage versus power curves are shown in Fig. 6. The maximum PV power at the different insolation levels is obtained in the range 2500 to 1400 W. Fig. 7 shows a graphical representation of the PV power under the different solar insolation levels with the corresponding duty cycles from the MPPT controller.
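One iteration of the basic P&O rule used above can be sketched as follows; the sign convention for the duty-cycle perturbation depends on the converter stage and is therefore only an assumption here, as are the step size and limits.

```python
def perturb_and_observe(v, i, v_prev, p_prev, duty, step=0.005):
    """Basic P&O step: keep perturbing the duty cycle in the direction that
    increased the PV power, and reverse the perturbation otherwise."""
    p = v * i
    if p != p_prev:
        if (p > p_prev) == (v > v_prev):
            duty -= step          # power rose together with voltage: move toward higher V
        else:
            duty += step          # otherwise perturb in the opposite direction
    duty = min(max(duty, 0.05), 0.95)   # clamp to a safe duty-cycle range
    return duty, v, p                   # new duty plus stored values for the next call
```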
B. Hardware Realization of PV water pumping system:
An Arduino UNO is used for the implementation. Among the other commercially available microcontrollers and field programmable gate arrays (FPGAs), the Arduino UNO has features such as a large set of included libraries supporting many hardware devices, and built-in analog and digital pins. It supports pulse width modulation at the I/O pins by simply adjusting the duty cycle with a single line of code, and it supports USB connectivity for data communication with a PC. It is available in the market at low cost compared to other controllers [12]. The hardware setup has been developed and tested under two different insolation levels. The schematic diagram of the developed hardware is shown in Fig. 8. It comprises the following circuits: the Cuk converter (dc-dc converter), the three phase voltage source inverter, the power conditioning circuit, the driver circuit unit and the Arduino controller board. The layout diagram itself gives the component ratings and their part names. The Arduino has been programmed to track the maximum power from the solar array and also to provide the pulses for the three phase inverter (120° commutation). The HCPL 3120 has been used for isolation between the pulses from the controller and the gate terminals of the power switches; it provides a high switching speed of up to 500 ns and a wide operating temperature range of -40 °C to 100 °C. The power switches and diodes have high voltage and current ratings in both the inverter and the dc-dc converter: in the Cuk converter the power MOSFET IRFZ44 is used, and in the three phase voltage source inverter the FPGA25N120 is used. The power conditioning circuit contains the op-amp circuit which converts the sensor voltages into voltage levels that can be read by the Arduino UNO.
The results of the PV water pumping system under two different solar insolation conditions are shown below. Fig. 9 shows the PV voltage, the dc-dc converter output voltage, the MPPT pulses and the inductor current of the dc-dc converter under the unshaded condition. Zoomed versions of these results are shown in Fig. 10.
The power generated from the PV source is measured with a Fluke meter, as shown in Fig. 11; under this condition the BLDC motor runs at a speed of 2800 rpm. Fig. 14 shows the motor back EMF of the three phases and Fig. 15 shows the motor phase-a current. Fig. 12 shows the PV voltage, the dc-dc converter output voltage, the MPPT pulses and the inductor current of the dc-dc converter under the shaded condition. The power generated from the PV source under the shaded condition is measured with the Fluke meter, as shown in Fig. 13; under this condition the BLDC motor runs at a speed of 1500 rpm. Fig. 16 shows the motor back EMF of the three phases and Fig. 17 shows the motor current.
IV. CONCLUSION
In this paper, the complete design of a PV water pumping system has been carried out to deliver the required water for one acre of agricultural land. The PV water pumping system was simulated and its results were obtained under different solar insolation levels using MATLAB/Simulink software. The prototype has been implemented, and the maximum power was tracked from the solar PV array using an Arduino UNO with the perturb and observe (P&O) algorithm. The results were taken at different insolation levels and validated against the simulation results. This paper provides an approach for implementing solar PV water pumping for any size of land by simply recalculating the above design.
"Engineering",
"Agricultural and Food Sciences"
] |
Optical lattice clocks towards the redefinition of the second
Nowadays atomic optical lattice clocks can perform frequency measurements with a fractional uncertainty at the 10 −18 level in a few hours of measurement, outperforming the best caesium (Cs) standards operated in the world. Since the definition of the unit of time is based on Cs, a worldwide debate about the need to promote the redefinition of the second on an optical reference is underway. At INRIM (Istituto Nazionale di Ricerca Metrologica) we developed an optical lattice clock based on ytterbium atoms and compared it against a Cs fountain. These results are an important contribution to the debate.
Introduction
The time unit, i.e. the second, is one of the base units in the International System of Units (SI). Its definition is based on the atomic transition between two hyperfine levels of the caesium (Cs) atom ground state [1]. Cs frequency standards [2], or Cs clocks, are in charge of the second realization: they generate a microwave whose frequency matches the second definition requirements, being able to excite Cs atoms on the mentioned atomic transition. This microwave can thus be used to calibrate all other oscillators.
Atomic clocks can also be based on other atomic transitions. Nowadays, among the best performing clocks there are optical lattice clocks, which use optical atomic transitions as frequency reference. World best optical lattice clocks currently provide frequency measurements with ultimate fractional uncertainties at the level of 10 −18 after 3 hours of measurement [3,4]. Best performing Cs standards are instead showing fractional uncertainties of about 2.3 × 10 −16 after 9 days of continuous measurement [5].
The interest in developing optical frequency standards is due to their high quality factor Q with respect to that of Cs standards. The quality factor is defined as Q = f /∆f , where f is the frequency of the reference atomic transition (clock transition) and ∆f is the clock transition linewidth. Since ∆f is similar in both microwave and optical standards, the roughly five orders of magnitude higher frequency of optical radiation with respect to a microwave allows frequency measurements at a correspondingly higher resolution.
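For illustration, assuming the same 1 Hz linewidth for both transitions, the quality factors compare as follows (the Cs hyperfine frequency and the Yb clock frequency of Section 3 are used):

```python
f_cs = 9_192_631_770            # Cs hyperfine clock transition frequency in Hz
f_yb = 518_295_836_590_863.6    # 171Yb clock transition frequency in Hz (Section 3)
delta_f = 1.0                   # assumed identical linewidth in Hz

print(f"Q (Cs, microwave): {f_cs / delta_f:.1e}")   # ~9.2e9
print(f"Q (Yb, optical):   {f_yb / delta_f:.1e}")   # ~5.2e14, about five orders higher
```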
Optical clocks need to be calibrated by measuring their oscillator frequency against a Cs standard. This comparison is currently possible through the use of an optical frequency comb [6], which bridges the frequency gap between the two oscillators. On the one hand this allows optical clocks to be used as a secondary representation of the second, but on the other hand the comparison is limited by the uncertainty budget of the Cs clocks. The large discrepancy between the performance of optical and microwave standards has thus opened the way for a worldwide ongoing discussion about the need to change the SI definition of the second [7].
Beyond the metrological purpose of the optical clocks research field, the availability of accurate optical frequency standards has an important impact on fundamental physics research. For instance, radioastronomy has significantly benefited from the possibility to accurately synchronize radio-antennas for very long baseline interferometry [8]. Relativistic geodesy has also been affected by the availability of accurate frequency standards, which are sensitive detectors of gravity variations [9]. Finally, optical clocks can be used to investigate the variation of fundamental constants, such as the fine structure constant [10].
At the Istituto Nazionale di Ricerca Metrologica (INRIM), in Italy, we developed and fully characterized an 171 Yb optical lattice clock [11]. An absolute frequency measurement has been performed against the INRIM Cs fountain clock, which is the Italian primary frequency standard.
The Ytterbium-171 optical lattice clock
The 171 Yb optical lattice clock uses as atomic frequency reference the transition between the 1 S 0 (ground state) and the 3 P 0 states at 578 nm. The local oscillator is a yellow laser, whose linewidth is required to be 1 Hz for probing the narrow clock transition. This is accomplished thanks to a frequency stabilization scheme based on an ultra-stable optical Fabry-Pérot cavity (see Fig. 2) [12].
The clock transition is a good frequency reference if it is completely unperturbed or if the frequency shifts of the transition due to perturbations are well controlled, i.e., measured accurately. In order to realize such a controlled environment, the atoms need to be carefully prepared for the interrogation process. First, a cloud of Yb atoms inside a vacuum chamber (see Fig. 1) is laser cooled down to microkelvin temperatures [13], exploiting radiation pressure from laser beams interacting with the blue and green Yb transitions (see Fig. 2). The atoms have to be cold in order to be successfully loaded afterwards into the potential wells of a one-dimensional optical lattice, generated as a laser standing wave (see Fig. 2). Thanks to the strong confinement, the Doppler frequency shift of the atomic transition is suppressed. Moreover, the trapping lifetime of the atoms in the lattice, usually longer than 1 s, allows long interrogation pulses. Furthermore, this spatial confinement technique traps thousands of neutral atoms, and the interrogation of this atomic sample leads to a high resolution signal. Finally, the frequency shift caused by the interaction between the lattice laser and the atoms is cancelled at first order by selecting the lattice wavelength appropriately. At this so-called magic wavelength, the two clock states are shifted by the same amount, keeping the clock transition unperturbed [14]. For Yb, the magic wavelength has been experimentally determined to be 759 nm [11,15,16]. Thousands of atoms in the lattice sites are thus ready to be interrogated by the clock laser (see Fig. 2). In this process, a single laser pulse with a duration between 60 ms and 100 ms interacts with the atoms: if the laser is resonant, the atoms are excited along the clock transition; otherwise a computer controlled loop moves the laser frequency closer to resonance, locking it to the atomic reference.
Since residual perturbations of the atoms are present, the systematic frequency shifts generated by these effects have to be evaluated and applied to the clock frequency as correction terms. When the physical parameter leading to a frequency shift is under control, the shift is measured by comparing two Yb clock frequency measurements in which the parameter under test has been changed. This is done by interleaving two frequency measurements in the two different operating regimes. For example, interleaving a clock cycle with a high number of atoms in the lattice with a clock cycle with a low number of atoms in the lattice allows one to evaluate the frequency shift due to atomic collisions, which is proportional to the atomic density of the interrogated sample. The uncertainty of frequency shift measurements performed with this method is statistically limited by the measuring time, so the physical effects concerned are not limiting issues. If the effect perturbing the atoms is not controlled, the frequency shift evaluation can be performed indirectly by measuring correlated quantities. The uncertainty of such evaluations depends on the knowledge of the effect and on the capacity to set up the required specific experiment; as a result, these effects usually contribute to the total uncertainty to a greater extent. The total uncertainty coming from the residual frequency shift measurements represents the clock accuracy limit.
The 171 Yb optical lattice clock developed at INRIM has been recently characterized with a fractional uncertainty limit of 1.6 × 10 −16 [11]. Main contributions to the uncertainty budget are the second order shifts due to the atoms interaction with the lattice laser and the shift generated by the black body radiation. Both effects are under investigation in order to be tackled properly. Concerning the latter for example, at the National Institute of Standards and Technology (NIST, USA) researchers decided to enclose atoms in a black body chamber into vacuum, in order to have a high degree of control of the black body temperature affecting atoms [17], while at RIKEN (Japan) researchers decided to interrogate atoms in a cryogenic environment [3], directly cancelling the black body radiation source.
3. Absolute frequency measurement of the 171 Yb clock transition
The 171 Yb clock transition, among other atomic transitions, has been recommended as a secondary representation of the second by the International Committee for Weights and Measures (CIPM) [18], which is in charge of establishing the international metrological conventions. This recommendation follows the availability of 171 Yb clock frequency measurements performed independently around the world [19-22]. At INRIM a direct absolute frequency measurement of the Yb clock transition against the INRIM Cs fountain clock has been performed [11]: the measured value is 518 295 836 590 863.59(31) Hz, which is in agreement with the recommended value for 171 Yb [18]. This is the first direct measurement of the 171 Yb clock transition at this level of uncertainty. This measurement further confirms previous measurements and contributes to reaffirming the ytterbium transition as an interesting candidate for the redefinition of the SI second.
Conclusions
Optical clocks nowadays outperform Cs clocks by orders of magnitude both in terms of accuracy and stability, raising the question of a redefinition of the SI second based on an optical atomic transition. Many different optical transitions, both in the visible and in the UV region, are being investigated. The 171 Yb transition has already been measured in many independent experiments with high accuracy; at INRIM we demonstrated an 171 Yb optical lattice clock with a fractional uncertainty limit of 1.6 × 10 −16 and we measured its frequency against the Italian Cs primary frequency standard with a fractional uncertainty of 5.9 × 10 −16.
In addition, ytterbium has the potential for further improved control of its perturbations thanks to its atomic structure and properties. Thus, Yb clocks are competitive in the worldwide scenario for fulfilling the requirements leading to the redefinition of the SI second.
"Physics"
] |
Salient Region Detection via Feature Combination and Discriminative Classifier
We introduce a novel approach to detect salient regions of an image via feature combination and a discriminative classifier. Our method, which is based on hierarchical image abstraction, uses the logistic regression approach to map a regional feature vector to a saliency score. Four saliency cues are used in our approach, including color contrast in a global context, center-boundary priors, spatially compact color distribution, and objectness, which serve as the atomic features of a segmented region in the image. By mapping a four-dimensional regional feature to a fifteen-dimensional feature vector, we can linearly separate the salient regions from the cluttered background by finding an optimal linear combination of feature coefficients in the fifteen-dimensional feature space, and we finally fuse the saliency maps across multiple levels. Furthermore, we introduce the weighted salient image center into our saliency analysis task. Extensive experiments on two large benchmark datasets show that the proposed approach achieves the best performance over several state-of-the-art approaches.
Introduction
Humans have the ability to locate the most interesting region in a cluttered visual scene by selective visual attention. The task of computer vision is to simulate this human intelligence, and the related research has been carried out for many years. The study of human visual systems suggests that saliency is related to the rarity, uniqueness, and surprise of a scene. It has recently gained much attention, as it has been brought into various applications, including image classification [24], object recognition [25], and content-aware image editing [26].
Existing salient region detection methods can be roughly classified into two categories: bottom-up, data-driven approaches and top-down, task-driven approaches. Bottom-up methods, which utilize low-level image features such as color, intensity, and texture, determine the contrast of image regions to their surroundings, while top-down methods make use of high-level knowledge about "interesting" objects. Most bottom-up models can be roughly divided into local and global schemes.
Inspired by the early work of Treisman and Gelade [27] and Koch and Ullman [28], Itti et al. [1] proposed a highly influential, biologically plausible saliency analysis method in which image saliency is defined using local center-surround operators across multiscale image features, including intensity, color, and orientation. Harel et al. [4] proposed a method to generate a saliency map by nonlinearly combining local uniqueness maps from different feature channels. Ma and Zhang [29] proposed an approach which directly computes the center-surround color difference in a fixed neighborhood for each pixel and then utilizes a fuzzy growth model to extract the salient region of the image. They classify saliency into three levels: attended view, attended areas, and attended points. Liu et al. [19] proposed a set of novel features, including center-surround histogram, multiscale contrast, and color spatial distribution, which are unified in a CRF learning framework to detect salient regions in images.
Later on, many saliency models were proposed which exploit various types of image features in a global scope for saliency detection. Hou and Zhang [3] proposed a spectral residual method that relies on frequency domain processing.
Zhai and Shah [2] define pixel-level saliency based on a pixel's contrast to all other pixels. To improve computational efficiency, they introduced the color histogram to analyze image saliency. Achanta et al. [5] proposed a frequency tuned method which achieves globally consistent results by defining saliency as the distance between the pixel color and the overall mean image color. Cheng et al. [6] also utilize the color histogram and segmented regions to analyze image saliency, which enables the assignment of comparable saliency values across similar image regions.
High-level priors have been used to analyze image saliency in recent years. Judd et al. [30] train an SVM model using a combination of low-, middle-, and high-level image features, making their approach potentially suitable for specific high-level computer vision tasks. The concept of a center prior was considered in their approach. Shen and Wu [8] unify three higher level priors, including a location prior, a semantic prior, and a color prior, in a low rank matrix recovery framework. A shape prior is proposed by Jiang et al. [7]; concavity context is utilized by [31]. Wei et al. [32] turn to background priors to analyze image saliency, assuming that the image boundary is mostly background. Subsequently, many recent approaches use the boundary prior to guide saliency detection, such as GMR [11], SO [17], PDE [33], AMC [15], and DSR [34], and these methods obtain state-of-the-art performance on several publicly available datasets.
Recent studies indicate that a single saliency cue is far from comprehensive. Some methods such as LC [2], FT [5], and HC [6] only use the contrast cue and the generated saliency maps are disappointing; the contrast cue sometimes produces high saliency values for background regions, especially for regions with complex structures. To alleviate these problems, some approaches such as SF [18], PD [9], GC [12], PISA [21], PR [22], UFO [14], and HI [10] use multiple cues. Perazzi et al. [18] formulate saliency estimation using high-dimensional Gaussian filters by which region color and region position are, respectively, exploited to measure region uniqueness and distribution. Cheng et al. [12] and Tong et al. [22] also consider the color contrast cue and the color distribution cue when computing the saliency map. Margolin et al. [9] combine pattern distinctness, color uniqueness, and organization priors to generate the saliency result. Shi et al. [21] present a generic framework for saliency detection employing three terms: a color-based contrast term, a structure-based contrast term, and spatial priors. Jiang et al. [14] propose an algorithm integrating three saliency cues, namely uniqueness, focusness, and objectness. Yan et al. [10] propose a multilayer approach to analyze image saliency: to determine the single-layer saliency cue they exploit two useful cues, local contrast and a location heuristic, and a hierarchical inference framework is then used to generate the final saliency map. The above-mentioned algorithms compute saliency maps from various cues and heuristically combine them to get the final results.
These methods can generate satisfactory saliency maps for simple images. For images with complex backgrounds, some methods such as [9,12,18] can only highlight part of the salient object. Although methods such as [10,14,22] can highlight the entire object uniformly, the background may be highlighted too. Thus, to differentiate real salient regions from other high-contrast parts, more saliency cues, including low-level features and high-level priors, need to be integrated. To the best of our knowledge, there are few works that model the interaction between different saliency cues. Inspired by the work of [10,23], we propose a feature combination strategy that can capture the interaction between different cues. Our main contributions lie in three aspects. Firstly, we introduce feature combination to model the interaction between different cues, which differs from most existing methods that generate saliency maps heuristically from various cues. Secondly, we formulate saliency estimation as a classification problem and learn a logistic classifier that directly maps a fifteen-dimensional feature vector to a saliency value. Thirdly, the use of smoothing and a weighted salient image center further improves detection performance. The experimental results show that our method can generate reasonable saliency maps even when the image contains a complex background and the salient object has a color similar to the background.
The framework of the approach is presented in Figure 1 and consists of four main parts. The first is hierarchical image abstraction, which segments the image into homogeneous regions across several layers using efficient graph-based image segmentation [35]. Second, four saliency cues, including color contrast in a global context, center-boundary priors, spatially compact color distribution, and objectness, are used as the atomic regional features; we then map the four-dimensional regional feature to a fifteen-dimensional feature vector that can capture the interaction between different features. Third, a logistic regression classifier is trained to map a fifteen-dimensional feature vector to a saliency value. Finally, we combine the saliency maps at different layers to obtain the final saliency map. Figure 2 shows samples of saliency maps generated by state-of-the-art methods and by ours.
The remainder of this paper is organized as follows. The proposed model is introduced in Section 2. Section 3 presents experiments and results. This paper is summarized in Section 4.
The Proposed Approach
Our method can be divided into four main stages: hierarchical image abstraction, regional feature generation, training a logistic regression classifier, and multilayer saliency map integration and reinforcement. In the following, we describe the details of the proposed approach.
Hierarchical Image Abstraction. The segmentation result of the first layer of the image pyramid is denoted as $R^1 = \{r^1_1, r^1_2, \ldots, r^1_{n_1}\}$, and the segmentation results of the other layers are described in the same way. Each superpixel is represented by its mean color (in CIELab) and its spatial position (x-coordinate and y-coordinate), defined as $c_i = \frac{1}{n_i}\sum_{p \in r_i} c(p)$ and $l_i = \frac{1}{n_i}\sum_{p \in r_i} l(p)$, where $p$ stands for a pixel in the region $r_i$, $c(p)$ represents the color vector of pixel $p$, $l(p)$ represents the coordinate vector of pixel $p$, and $n_i$ is the number of pixels in the segmented region $r_i$.
Regional Feature Generation
(1) Color Contrast Cue. Given the segmentation result of layer $m$ of the image pyramid, described as $R^m = \{r^m_1, r^m_2, \ldots, r^m_{n_m}\}$, the color contrast cue $F^{(1)}_i$ of a region $r_i$ is formulated as the sum, over all other regions, of their color distances to $r_i$ weighted by a smooth term, $F^{(1)}_i = \sum_{j \neq i} w(r_i, r_j)\, d_c(r_i, r_j)$, where $w(r_i, r_j)$ is the smooth term that accounts for the spatial distance between the two regions and $d_c(r_i, r_j)$ is the color distance between $r_i$ and $r_j$.
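The exact forms of the smooth term and the color distance are not preserved in the extracted text; the following is a minimal sketch of this cue in Python, assuming a Gaussian spatial falloff, Euclidean CIELab color distance, and region-size weighting (the size weighting is also an assumption).

```python
import numpy as np

def color_contrast(colors, positions, sizes, sigma_s=0.25):
    """Global color contrast cue for each region (a sketch).

    colors:    (n, 3) mean CIELab color of each region
    positions: (n, 2) normalized region centroids
    sizes:     (n,)   pixel counts of each region
    sigma_s:   assumed falloff of the spatial smoothing term
    """
    n = len(colors)
    contrast = np.zeros(n)
    for i in range(n):
        # color distance to every other region
        d_color = np.linalg.norm(colors - colors[i], axis=1)
        # spatial smoothing term: nearby regions contribute more
        d_pos = np.linalg.norm(positions - positions[i], axis=1)
        w_spatial = np.exp(-d_pos ** 2 / (2 * sigma_s ** 2))
        # weight by region size so that large regions dominate the contrast
        contrast[i] = np.sum(sizes * w_spatial * d_color)
    # min-max normalization to [0, 1]
    return (contrast - contrast.min()) / (contrast.max() - contrast.min() + 1e-12)
```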
(2) Color Distribution Cue. Inspired by Liu et al. [19], we use the non-overlapping regions as the computing units for region color distribution. First, all region colors are represented by Gaussian Mixture Models (GMMs) $\{\omega_q, \mu_q, \Sigma_q\}_{q=1}^{5}$, where $\omega_q$, $\mu_q$, and $\Sigma_q$ are the weight, the mean color, and the covariance matrix of the $q$th Gaussian component; the probability of a region belonging to the $q$th component is given by the posterior of that component under the GMM. The number of Gaussians is set to 5 in the subsequent experiments. We use the k-means algorithm to initialize the parameters of the GMM and the EM algorithm to train it. Following [19], the horizontal spatial variance of the $q$th clustered component is computed from the horizontal coordinates of the regions weighted by their component probabilities; the vertical spatial variance is defined in the same way. Different from Liu et al. [19], who use both variances to compute the saliency cue, we only use the horizontal spatial variance. The color distribution cue $F^{(2)}_i$ of a region is then obtained from the spatial variance of the components it belongs to. (3) Center-Boundary Prior Cue. Location is an important factor in saliency detection; the center and the boundary are two priors widely used in previous saliency detection methods. Considering both priors, our center-boundary heuristic $F^{(3)}_i$ combines a center prior term $f_i(\mathrm{cp})$ and a boundary prior term $f_i(\mathrm{bp})$. The center prior term measures the distance between the region and the image center and is defined as $f_i(\mathrm{cp}) = 1/(\sigma + \sqrt{\|l_i - c\|^2}/2)$, where $c$ is the image center, set to (0.5, 0.5), and the parameter $\sigma$ controls the sensitivity of the center prior and is set to 1 in the experiments. The boundary prior term $f_i(\mathrm{bp})$ measures the color distance between the region and the image boundary. Inspired by the approach of Yang et al. [11], we define the background feature of a region from the sum of distances from the region to the regions on the top image boundary, which differs from Yang et al. [11]. (5) Cues Smoothing. We thus obtain four saliency cues, which are normalized to the range [0, 1] using minimum-maximum normalization. Although the four cues can be computed efficiently, at least two problems remain: firstly, some regions with similar properties may receive very different saliency values, and secondly, some adjacent regions may be assigned very different saliency values. To reduce the noisy saliency results caused by these issues, we use two smoothing procedures to refine the saliency value of each region.
K-Means Clustering Based Smoothing. Given the segmentation result of layer $m$ of the image pyramid, described as $R^m = \{r^m_1, r^m_2, \ldots, r^m_{n_m}\}$, we first apply the k-means clustering algorithm to divide the segmented regions into different clusters in each layer. Following [36], we define an objective function, sometimes called a distortion measure, as the sum of squared distances of each region's color to the mean of its assigned cluster, minimized over binary indicator variables $z_{ik}$: if a region $r_i$ is assigned to cluster $k$, then $z_{ik} = 1$, and $z_{ij} = 0$ for $j \neq k$. This is known as the 1-of-K coding scheme. The two phases of reassigning regions to clusters and recomputing the cluster means are repeated in turn until there is no further change in the assignments. We then obtain the number of regions in each cluster, $n_{\mathrm{cl}}(k) = \sum_i z_{ik}$. The saliency value of each region is refined by replacing it with the weighted average of the saliency values of the regions in the same cluster, with weights measured by the L*a*b* color distance; this refinement is applied to each cue (tmp = 1, 2, 3, 4). The parameter $\delta$ controls the importance of the color space smoothing term.
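The exact smoothing formula is not fully preserved above; the following is a minimal sketch of the color-space smoothing, assuming scikit-learn's KMeans for clustering and an exponential color-distance weight controlled by the parameter delta.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_color_smoothing(colors, saliency, n_clusters=8, delta=0.5):
    """Smooth region saliency within color clusters (a sketch).

    colors:   (n, 3) mean CIELab colors of the segmented regions
    saliency: (n,)   raw saliency values of one cue
    delta:    assumed weight controlling the color-space smoothing term
    """
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(colors)
    smoothed = np.zeros(len(saliency))
    for i in range(len(saliency)):
        members = np.where(labels == labels[i])[0]
        # weight members of the same cluster by their CIELab distance to region i
        d = np.linalg.norm(colors[members] - colors[i], axis=1)
        w = np.exp(-delta * d)
        smoothed[i] = np.sum(w * saliency[members]) / np.sum(w)
    return smoothed
```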
In our experiment, this parameter is set to a fixed value. Spatial Smoothing. We also apply a spatial smoothing that refines the saliency between adjacent regions; the procedure is very similar to the color space smoothing: we replace the saliency value of each region by the weighted average of the saliency values of its neighbors.
(6) Regional Feature. After completing the above steps, we obtain four atomic features ($F^{(1)}$, $F^{(2)}$, $F^{(3)}$, and $F^{(4)}$) for each segmented region: color contrast, color distribution, center-boundary prior, and objectness. In order to capture the interaction between the four features, a novel feature is generated by mapping the four-dimensional regional feature to a fifteen-dimensional feature vector. There are four kinds of combinations: single terms, double terms, triple terms, and quadruple terms. The single terms are the four atomic features themselves (elements 1-4 of the vector); the double terms are the six combinations of any two atomic features (elements 5-10); the triple terms are the four combinations of any three atomic features (elements 11-14); and the quadruple term combines all four atomic features (element 15). In this way, we obtain a fifteen-dimensional feature vector; a minimal sketch of this mapping is given below.
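The combination operator for the double, triple, and quadruple terms is not specified in the extracted text; the sketch below assumes element-wise products of the atomic cues, which yields the stated 4 + 6 + 4 + 1 = 15 dimensions.

```python
import numpy as np
from itertools import combinations

def combine_features(f):
    """Map a 4-D atomic regional feature to the 15-D combined vector (a sketch).

    f = [f1, f2, f3, f4]: color contrast, color distribution,
    center-boundary prior, and objectness of one region.
    Products of the atomic cues are assumed as the interaction terms.
    """
    f = np.asarray(f, dtype=float)
    vec = list(f)                                   # 4 single terms
    for k in (2, 3, 4):                             # 6 double, 4 triple, 1 quadruple
        for idx in combinations(range(4), k):
            vec.append(np.prod(f[list(idx)]))
    return np.array(vec)                            # 15 dimensions in total

# e.g. combine_features([0.8, 0.3, 0.6, 0.5]).shape -> (15,)
```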
Learning Framework for Saliency Estimation.
The logistic function is useful because it can take an input with any value from negative to positive infinity, whereas its output always lies between zero and one [37]. We take full advantage of this property, so our saliency estimation can be formulated in a probabilistic framework. We assume that $P(y = 1 \mid x; \theta) = h_\theta(x)$, where $h_\theta(x) = g(\theta^{T} x) = 1/(1 + e^{-\theta^{T} x})$ is our hypothesis and $g(z) = 1/(1 + e^{-z})$ is called the logistic function or sigmoid function. Notice that $g(z)$ tends towards 1 as $z \to +\infty$ and towards 0 as $z \to -\infty$. Hence, our hypothesis is always bounded between 0 and 1, and a higher value indicates that the region is more likely to belong to a salient object.
The parameter $\theta$ is what we want to learn from the data. We use the first layer of the image pyramid for training, given its segmentation result described as $R^1 = \{r^1_1, r^1_2, \ldots, r^1_{n_1}\}$. A segmented region is considered positive if the number of its pixels belonging to the salient object exceeds 90% of the number of pixels in the region, and its saliency value is set to 1. Conversely, a region is considered negative if the number of its pixels belonging to the salient object is under 10% of the number of pixels in the region, and its saliency value is set to 0. As mentioned above, each segmented region is described by a fifteen-dimensional vector x. We learn a logistic regression classifier from the training data X = {x_1, x_2, . . ., x_N} and the saliency labels Y = {y_1, y_2, . . ., y_N}. Once the parameter $\theta$ is obtained, we can quickly perform saliency estimation using (10).
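The following is a minimal sketch of the training and prediction steps; plain gradient ascent is used here for illustration, since the extracted text does not specify the solver.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.1, n_iter=2000):
    """Learn theta by gradient ascent on the log-likelihood (a sketch).

    X: (m, 15) combined regional feature vectors (a bias column is appended)
    y: (m,)    labels, 1 for salient regions and 0 for background regions
    """
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    theta = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        grad = Xb.T @ (y - sigmoid(Xb @ theta)) / len(y)
        theta += lr * grad
    return theta

def predict_saliency(theta, X):
    """h_theta(x) in (0, 1) is used directly as the region saliency value."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    return sigmoid(Xb @ theta)
```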
Multilayer Saliency Map Integration. We combine the image pyramid, which is a multiscale representation of the image, to suppress background regions. Similar to [1], the fused saliency map is obtained by adjusting the saliency maps to the same scale and adding them point by point. The fusion strategy is given by $S_{\mathrm{Fusion}}(I) = \bigoplus_{m=1}^{M} \mathrm{sal}(I_m)$, where $I$ is the input image, $I_m$ is the $m$th layer of the image pyramid, and $\mathrm{sal}(I_m)$ is the saliency detection result of the $m$th layer.
Reinforcement of Salient Region. The salient object is usually concentrated in a local region of the image, while the background has a high degree of dispersion. To exploit this property, we introduce a weighted salient image center into the saliency estimation: the salient center is defined as the saliency-weighted average of the pixel coordinates, where $N$ is the number of pixels in the image and $p_j$ is the $j$th pixel. The final pixel-level saliency is then obtained by re-weighting the saliency of each pixel according to its distance to this center, where $p$ is a pixel in the image, $d$ is the Euclidean distance between the pixel and the weighted salient image center, and the parameter $\sigma_2$ is the smooth term that controls the strength of the spatial weight; we set $\sigma_2$ to 0.4.
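The exact re-weighting function is not preserved above; the sketch below assumes a Gaussian fall-off of the form exp(-d^2/sigma2) around the saliency-weighted center, with sigma2 = 0.4 as stated, and coordinates normalized to [0, 1].

```python
import numpy as np

def reinforce_saliency(sal_map, sigma2=0.4):
    """Re-weight a saliency map by distance to its saliency-weighted center (a sketch)."""
    h, w = sal_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xs, ys = xs / (w - 1), ys / (h - 1)            # normalized pixel coordinates
    total = sal_map.sum() + 1e-12
    cx = (sal_map * xs).sum() / total              # weighted salient image center
    cy = (sal_map * ys).sum() / total
    d2 = (xs - cx) ** 2 + (ys - cy) ** 2           # squared distance to the center
    return sal_map * np.exp(-d2 / sigma2)          # assumed Gaussian spatial weight
```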
Experiments and Results
To validate our proposed approach, we performed experiments on two publicly available datasets. The first one is the MSRA dataset [19], which contains 5000 images with pixel-level ground truth. We used the same training, validation, and test sets as Jiang et al. [23]: the training set contains 2500 images, the validation set contains 500 images, and the test set contains 2000 images.
Evaluation Methods.
Following [5,6,8], we evaluate the performance of our method by measuring its precision and recall rate. Precision measures the percentage of salient pixels correctly assigned, while recall measures the percentage of the salient object detected. To study the performance of saliency detection approaches, we use two kinds of objective comparison measures adopted in previous studies.
Secondly, we follow [5,6,8] and segment a saliency map by adaptive thresholding. The image is first segmented by the mean-shift clustering algorithm, and then the average saliency value of each non-overlapping region is calculated, as well as the overall mean saliency value over the entire saliency map. The mean-shift segments whose saliency value is larger than twice the overall mean saliency value are marked as foreground, so the adaptive threshold is defined as $T_a = \frac{2}{W \times H}\sum_{x=1}^{W}\sum_{y=1}^{H} S(x, y)$, where $W$ and $H$ are the width and height of the saliency map, respectively. In many applications, both high precision and good recall are required; in addition to precision and recall, we therefore estimate $F_\beta$, which is defined as $F_\beta = \frac{(1+\beta^2)\,\mathrm{Precision} \times \mathrm{Recall}}{\beta^2\,\mathrm{Precision} + \mathrm{Recall}}$, where we set $\beta^2 = 0.3$ as suggested in [5,6,18].
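A minimal sketch of the adaptive threshold and the F-measure as defined above.

```python
import numpy as np

def adaptive_threshold(sal_map):
    """T_a = 2 / (W*H) * sum S(x, y): twice the mean saliency of the map."""
    return 2.0 * sal_map.mean()

def f_measure(pred_mask, gt_mask, beta2=0.3):
    """Precision, recall and F_beta of a binary saliency mask against ground truth."""
    tp = np.logical_and(pred_mask, gt_mask).sum()
    precision = tp / (pred_mask.sum() + 1e-12)
    recall = tp / (gt_mask.sum() + 1e-12)
    f = (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-12)
    return precision, recall, f

# e.g. binary = sal_map >= adaptive_threshold(sal_map)
```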
3.2. Performance on MSRA Dataset. We report both quantitative and qualitative comparisons of our method with 18 state-of-the-art saliency detection approaches on the MSRA dataset.
Quantitative Comparison. Figures 3(a) and 3(b) show the precision-recall curves of all the algorithms on the MSRA-5000 dataset. As observed from Figure 3, the curve of our method is consistently higher than the others on this dataset. Besides, we compare the performance of various methods using adaptive thresholding. Each of our precision, recall, and $F_\beta$ values (0.8524, 0.7794, and 0.8343) ranks first among the 18 state-of-the-art methods.
Performance on ECSSD Dataset.
The ECSSD dataset is a more challenging dataset provided by Yan et al. [10]. As shown in Figure 5, our approach achieves the best precision-recall curve. We also evaluate the average precision, recall, and $F_\beta$ using adaptive thresholding; our recall and $F_\beta$ values rank first among all the methods. We also provide a visual comparison of different approaches in Figure 6, from which we see that our approach produces the best detection results on these images and can highlight the entire salient object uniformly. We only consider the most recent thirteen models: FT [5], HC [6], RC [6], LR [8], PD [9], GC [12], HI [10], GMR [11], BMS [13], UFO [14], AMC [15], HDCT [16], and SO [17].
Analysis of the Influencing Factors of Segmentation.
Recently, low-level image segmentation methods have been widely used for saliency analysis. SLIC [38] and the graph-based superpixel method [35] are two efficient algorithms whose source codes are publicly available. Because they use different segmentation criteria, their segmentation results are quite different from each other, as shown in Figure 8; the result of the SLIC method has more local compactness than that of the superpixel method. We also provide a visual comparison of the four saliency cues and the final saliency map produced by the two segmentation algorithms. Figure 9 shows that different segmentation algorithms produce different saliency cues and final saliency maps: the superpixel approach can generate a high-quality saliency map, while the SLIC segmentation algorithm may highlight some non-salient regions. Finally, we provide a quantitative comparison of the SLIC and superpixel segmentation algorithms. To verify their effectiveness, we plot the corresponding precision-recall curves on the ASD dataset. As observed from Figure 10, using the superpixel algorithm yields better precision-recall curves than the SLIC clustering algorithm.
Conclusion
In this paper, a novel salient region detection approach based on feature combination and a discriminative classifier is presented. We use four saliency cues as the atomic features of the segmented regions in the image. To capture the interaction among different features, a novel feature vector is generated by mapping the four-dimensional regional feature to a fifteen-dimensional feature vector. A logistic regression classifier is trained to map a regional feature to a saliency value. We further introduce multilayer saliency map integration and the weighted salient center for improvement. We evaluate the proposed approach on two publicly available datasets, and the experimental results show that our model can generate high-quality saliency maps that uniformly highlight the entire salient object.
Figure 1: An overview of our weighted feature combination framework. We extract four image layers from the input and then train a logistic regression classifier by using the four atomic features. Initial saliency maps of the four layers are obtained by weighted feature combination. Finally, we fuse the saliency maps of the different layers to obtain the final saliency map.
Figure 3: Experimental results on the MSRA dataset. (a) and (b) are the precision and recall curves of all approaches, obtained using a fixed threshold. The histogram (c) (precision, recall, and F-measure) is obtained using adaptive thresholding.
Figure 5: Experimental results on the ECSSD dataset. (a) and (b) are the precision and recall curves of all approaches, obtained using a fixed threshold. The histogram (c) (precision, recall, and F-measure) is obtained using adaptive thresholding.
Figure 8: Visual comparison of SLIC and superpixel segmentation results; from left to right: input image, SLIC segmentation result, and superpixel segmentation result.
Figure 9: Comparison of salient features with different segmentation methods. (a) Input image. (b) Ground truth. (c) Saliency map generated using the SLIC method. (d) Saliency map generated using the superpixel method. (e) Color contrast based salient feature using the SLIC method. (f) Color contrast based salient feature using the superpixel method. (g) Color distribution based salient feature using the SLIC method. (h) Color distribution based salient feature using the superpixel method. (i) Objectness based salient feature using the SLIC method. (j) Objectness based salient feature using the superpixel method. (k) High-level prior based salient feature using the SLIC method. (l) High-level prior based salient feature using the superpixel method.
Figure 10: Precision-recall curves of the SLIC and superpixel segmentation algorithms on the ASD dataset.
(4) Objectness Cue. Recently, a generic objectness measure has been proposed to quantify how likely an image window is to contain an object of any class; the measure is based on low-level image cues. As our goal is to obtain a saliency map for the whole image, we transfer the objectness value from the window level to the pixel level: a simple approach maps the computed bounding boxes to the pixel level first, and a region-level objectness measure is then obtained; for more details, please refer to UFO [14]. For each region, the region-level objectness is $F^{(4)}_i = \sum_{p \in r_i} O(p) / n_i$, where $O(p)$ is the objectness value of pixel $p$ and $n_i$ is the number of pixels in region $r_i$. (In the boundary prior defined earlier, the corresponding count is the number of regions that intersect with the top image boundary.)
"Computer Science"
] |
Identification and analysis of urban functional area in Hangzhou based on OSM and POI data
The accurate identification of urban functional areas is of great significance for optimizing urban spatial structure, rationally allocating spatial elements, and promoting the sustainable development of the city. This paper proposes a method to precisely identify urban functional areas by coupling Open Street Map (OSM) and Point of Interest (POI) data. It takes the central urban area of Hangzhou as a case study to analyze the spatial distribution characteristics of the functional areas. The results show that: (1) The central urban areas of Hangzhou are divided into 21 functional areas (6 single functional areas, 14 mixed functional areas and 1 comprehensive functional area). (2) The single functional areas and the mixed functional areas show the geographical distribution characteristics of the looping stratification, which means “Core-periphery” differentiation is obvious, and the comprehensive functional area is relatively scattered. (3) The mixed degree of regional function with ecological function and production function is low while comprehensive functional areas are usually associated with higher potential and vitality. (4) The identification results are in great agreement with the actual situation of Hangzhou central urban area, and the method is feasible. Therefore, this paper can provide a reference for urban development planning and management.
Introduction
The functional area is the basic unit of urban planning, management and resource allocation. Urban functional areas are important geospatial attributes of urban land, in which people carry out various socio-economic activities; they are usually determined from the two perspectives of land use type and human activities and include residential land, industrial land, and commercial and business facilities land [1,2,7]. As the basic spatial unit of urban development, the concept of the urban functional area originated from the "functionalism" planning idea established by the Athens Charter, which emphasizes a clear structure of the city and pays attention to the division of functional areas and the purification of use [3]. Diversified urban functions bring convenience to the life, work, recreation, and communication of urban residents, which is the foundation and charm of urban sustainable development [4,5]. In recent years, China's urbanization rate has exceeded 60%. With the accelerating process of urbanization, various elements gather and diffuse in different spaces of the city [6] and result in functional differentiation at different regional scales. In addition, unreasonable urban planning leads to problems such as a single functional urban structure, spatial differentiation, and a lack of affection and care, which cause the loss of urban vitality and a series of urban environmental and social problems. Therefore, the accurate identification of the spatial and social structure of the city and the rational division of urban functional areas have become important subjects of current research. They are of great significance for coordinating the relationship between humans and land, optimizing the urban spatial strategy and improving the level of urban planning [7]. At the same time, the reasonable division of urban functional areas has certain guiding value for solving many urban diseases, such as traffic congestion, air pollution, waste of land resources, climate change and so on [8][9][10].
The traditional urban function zoning is mainly based on remote sensing images, land use data, panel data and so on [11]. Zhong et al. proposed a semantic allocation level multi-feature fusion strategy for high spatial resolution image scene classification, which is based on the probabilistic topic model [12]. Zhang et al. proposed a complete hybrid scene decomposition system to decompose high-resolution remote sensing images of Beijing and Zhuhai [13]. However, traditional remote sensing methods can only classify urban functions based on the natural properties of the land. With the enhancement of data acquisition capability, breaking with the traditional idea of functional zoning, fully mining the urban social and cultural information contained in big data, and establishing a methodology for identifying urban functional areas have become an innovation direction of urban geography, especially in the context of informatization [14]. Liu collected 7-day taxi trajectories in Shanghai to study the temporal variation of boarding and alighting and its relationship with different land-use characteristics [15]. Pei et al. collected mobile phone data in Singapore to identify residents' daily travel activities to reflect the social function of land use [16]. Jia et al. combined remote sensing images (RSI) of a large area with mobile phone positioning data (MPPD) and applied this framework to the center of Beijing [17]. Song et al. explored the use of Singapore's parks based on the number and visual content of geolocated photos of the parks (from the Instagram and Flickr platforms) [18]. However, urban functional area identification based on remote sensing or survey data suffers from data acquisition difficulties and strong subjectivity. Travel or picture big data can only roughly divide urban functional areas; they cannot accurately analyze the spatial structure of urban functions, nor provide precise measures to solve land use function problems [19,20]. At the same time, these methods are limited by the number of users and the accuracy of user footprint locations, so the accuracy of the classification results is poor [21].
Compared with the above studies, POI data are point-shaped geospatial big data of real geographic entities; they can finely characterize the dynamic and real-time nature of urban land functions and reflect the diversity and mixing degree of various facilities [22,23]. Zhai et al. improved the functional area identification framework by constructing a Place2vec model to capture POI geographic information [24]. Han et al. used POI data to identify single urban functional areas in Beijing [25]. Ran et al. conducted an in-depth analysis of the spatial pattern of the life service industry in Changsha based on POI data [26]. However, most of the present research on the identification of urban functional areas based on POI data focuses on the identification of single functions and lacks an in-depth division of mixed and comprehensive land-use functions. At the same time, in dividing the basic units of functional area identification, existing studies mainly use grid cells and lack multi-functional identification of actual land use units. Therefore, this paper takes Open Street Map (OSM) road network data and POI data as the main data sources and proposes a method for the accurate identification of urban functional areas based on kernel density estimation, functional area identification, mixed degree calculation and other technical means. Taking the main urban area of Hangzhou as a case, this paper further identifies single functional land, mixed functional land and comprehensive functional land, and analyzes the distribution characteristics of the functional structure in the research area. The improvements of this study are as follows: 1) Taking the irregular grid formed by the road network as the research unit makes the segmentation of urban functional areas more reasonable.
2) The kernel density method reduces the noise effect of POI data, improves the recognition accuracy of mixed functional areas and weakens the discretization of results.
3) The recognition model considers the mixed functional areas and comprehensive functional areas to make the results more in line with the real situation. 4) The mixed degree calculation and accuracy verification of the identification results of urban functional areas are conducted to make the results more convincing. In terms of literature, this study is expected to provide a typical case and research methods for the study of urban spatial structure in China driven by spatial big data. In practice, it provides a reference for urban managers to understand the urban spatial structure, accurately grasp the development status of urban functions, and formulate reasonable urban planning schemes.
Study area: The central urban area of Hangzhou, China
Hangzhou, the capital city of Zhejiang Province, is located in the northern part of Zhejiang Province, in the lower reaches of the Qiantang River and at the southern end of the Beijing-Hangzhou Grand Canal. The central geographical coordinates are E120˚19', N30˚26'. The city governs 10 districts, 2 counties and 1 county-level city, with a total area of 16853.57 km2, and is one of the central cities in the Yangtze River Delta. In 2019, the city's GDP reached 222.85 billion US dollars, with a permanent population of 10.36 million. The central urban areas are located in the northeast of Hangzhou, including Xihu District, Gongshu District, Shangcheng District, Binjiang District, Jianggan District, and Xiacheng District, with a total area of 706.27 km2. The central urban areas are concentrated areas of business, population and services in Hangzhou. The urbanization process of Hangzhou is the epitome of China's rapid development. At the same time, the urban development mode of Hangzhou, which takes into account science and technology, industry, humanities and ecology, represents the development direction of the future city. Moreover, the rapid change of urban functions in such cities brings challenges to urban management and operation. Therefore, this area is selected as the research area. An overview of the research area is shown in Fig 1.
Data source and process
Before collecting the open source data required for this article, we carefully examined all relevant platform service terms to ensure that our research fully complies with the agreements. The administrative division data come from the 1:250,000 basic geographic database provided by the National Geomatics Center of China (http://www.ngcc.cn/ngcc/html/1//391/392/16114.html), the satellite map data come from AutoNavi Map (https://www.amap.com/), and both can be obtained free of charge. The urban road network data of Hangzhou come from the Open Street Map (OSM) geographic data platform (https://www.openstreetmap.org/). OSM aims to provide users with free and easy-to-access digital map resources and is currently the most popular volunteered geographic information platform. We preprocessed the collected OSM road network data: we select highways, railways, township main roads, urban main roads and ordinary roads; delete sidewalks, residential lanes, chaotic road segments and other lines that are less important and do not affect the analysis results; delete duplicate lines; filter out other redundant paths, broken paths and paths shorter than 100 meters; and finally connect the remaining broken paths. POI (Point of Interest) data come from the AutoNavi Maps Open Platform (https://lbs.amap.com/). AutoNavi Maps is China's leading provider of digital map content, navigation, and location services. This research uses the open API interface provided by AutoNavi Maps to obtain 168607 POI records for the six districts of Hangzhou in December 2020; each POI record contains attribute information such as the geographical entity name, address, type, longitude and latitude, administrative region and so on. There are many problems in the original POI data, such as heterogeneous classifications, repeated crossover between different classifications, classification errors, missing information, and inconsistent coordinate systems. Moreover, some types of POI have low public awareness, such as public toilets, newspaper pavilions, and bus stations; these types are not significant for the identification of functional areas and are not convenient for research and discussion. Therefore, it is necessary to unify the coordinate system of the POI data, clean the data, delete wrong POI points, delete repeated abnormal values, screen out points with low public awareness, and reclassify the rest. According to the "Code for classification of urban land use and planning standards of development land" (GB 50137-2011) and combined with the actual conditions of Hangzhou, we divided the POI into six types: residential, administration and public services, commercial and business facilities, industrial, green space, and science and education [27] (Table 1).
Research methods
The research idea of this paper is to use the OSM road network data to divide the city into different research units. Then, based on the POI data, the frequency density of each POI class in each research unit is calculated using kernel density estimation and POI weight assignment. Finally, the units are divided into different functional partitions according to the discriminant rules.
Research unit division based on OSM data. OSM is an open, crowd-sourced street map: ordinary users can collect and track a large amount of location or trajectory data through mobile terminals and publish geographic information to the Internet with the help of the open map platform. Compared with traditional surveying and mapping spatial data, this kind of data has the advantages of real-time availability, fast update speed, and free access [28]. This paper collects road network data covering the research area, including five types: highways, arterial roads, first-class roads, second-class roads, and third-class roads. According to China's "Code for classification of urban land use and planning standards of development land", the highways and arterial roads are taken as the first level, the first-class and second-class roads as the second level, and the third-class roads as the third level, and buffer zones of 40 m, 20 m, and 10 m are generated, respectively. When the irregular grid is generated, suspension points and isolated roads are topologically corrected to form closed units; a minimal sketch of this block division follows.
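A minimal sketch of the block-unit division, using shapely and purely illustrative road geometries; real OSM centerlines and the study-area polygon would replace the hypothetical inputs.

```python
from shapely.geometry import LineString, box
from shapely.ops import unary_union

# Hypothetical study area and road centerlines (projected coordinates, metres)
study_area = box(0, 0, 5000, 5000)
roads = {
    40: [LineString([(0, 2500), (5000, 2500)])],   # level 1: highways / arterial roads
    20: [LineString([(2500, 0), (2500, 5000)])],   # level 2: first- and second-class roads
    10: [LineString([(1200, 0), (1200, 5000)])],   # level 3: third-class roads
}

# Buffer each road level with its width and cut the study area into block units
road_space = unary_union([line.buffer(width)
                          for width, lines in roads.items() for line in lines])
blocks = study_area.difference(road_space)
block_units = list(blocks.geoms) if blocks.geom_type == "MultiPolygon" else [blocks]
print(len(block_units), "block units")
```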
Kernel density estimation. Kernel density estimation is based on the first law of geography: the closer the objects are, the greater the density expansion value will be. The method is used to calculate the density of spatial point and line elements in their surrounding neighborhoods and to simulate a continuous density surface. The kernel density value of each grid cell reflects the layout characteristics of the spatial elements. It is less affected by subjective factors, and the result has the advantages of gradual change and revealing local characteristics [29,30]. In this research, the spreading influence of the POI data is realized by the kernel density method: the density at a spatial point i is a weighted sum of kernel contributions from the surrounding POI, where K_j is the weight of research object j, D_ij is the distance between spatial point i and research object j, R is the bandwidth of the selected regular area (D_ij < R), and n is the number of research objects j within the bandwidth R.
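The kernel function itself is not preserved in the extracted text; the sketch below assumes the quartic kernel used by common GIS implementations such as ArcGIS.

```python
import numpy as np

def kernel_density(grid_xy, poi_xy, poi_w, R):
    """Weighted kernel density of POI points at grid locations (a sketch).

    grid_xy: (m, 2) array of evaluation points
    poi_xy:  (n, 2) array of POI coordinates
    poi_w:   (n,)   array of POI weights K_j
    R:       bandwidth; only POI with D_ij < R contribute
    """
    density = np.zeros(len(grid_xy))
    for i, p in enumerate(grid_xy):
        d = np.linalg.norm(poi_xy - p, axis=1)
        inside = d < R
        # quartic kernel: contributions fall smoothly to zero at distance R
        k = (1 - (d[inside] / R) ** 2) ** 2
        density[i] = np.sum(poi_w[inside] * k) * 3.0 / (np.pi * R ** 2)
    return density
```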
The result shows that the choice of bandwidth R has a critical impact on the result of the kernel density analysis [31,32]. Generally, a larger bandwidth reflects spatial variation at the global scale, and a smaller bandwidth reflects spatial variation at the local scale. In this paper, the bandwidth R is chosen adaptively by the optimal search bandwidth strategy of the ArcGIS 10.5 platform, and the calculation formula is $r = 0.9 \times \min\!\left(SD, \sqrt{\tfrac{1}{\ln 2}} \times D_m\right) \times n^{-0.2}$, with $SD = \sqrt{\tfrac{1}{n}\left(\sum_{i=1}^{n}(x_i - \bar{X})^2 + \sum_{i=1}^{n}(y_i - \bar{Y})^2\right)}$, where r is the bandwidth, SD is the standard distance value, D_m is the median of the distances between the average center of the points and all points, n is the number of points, x_i and y_i are the coordinates of the points, and $\bar{X}$ and $\bar{Y}$ are the coordinates of the average center of the POI points.
The above formula is used in GIS to obtain the search radius of the POI data of each functional type. The search radius is 1095 m for residential POI, 1102 m for administration and public services, 745 m for commercial and business facilities, 970 m for industrial, 1224 m for green space, and 1013 m for science and education, and the overall search radius of all POI data is 837 m. Hinneburg et al. studied the relationship between the bandwidth and the number of density attractors and found that there are bandwidth intervals within which the density attractors remain stable, so it is reasonable to choose the bandwidth within these intervals [33]. Based on this, the bandwidth interval is determined as [700 m, 1300 m]. Considering the consistency of the data analysis, a 1000 m bandwidth is used for the kernel density analysis, which can well describe the aggregation characteristics of the various industries and meet the needs of urban spatial structure analysis [34,35].
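A minimal sketch of the adaptive search radius described above (the ArcGIS default formula), applied to projected POI coordinates.

```python
import numpy as np

def search_radius(points):
    """Adaptive search radius as computed by the ArcGIS kernel density tool (a sketch).

    points: (n, 2) projected POI coordinates in metres.
    """
    n = len(points)
    center = points.mean(axis=0)
    d = np.linalg.norm(points - center, axis=1)
    sd = np.sqrt(np.sum(d ** 2) / n)            # standard distance SD
    dm = np.median(d)                           # median distance D_m to the mean center
    return 0.9 * min(sd, np.sqrt(1.0 / np.log(2.0)) * dm) * n ** (-0.2)
```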
POI weight assignment. Not only the distribution density of POI but also the size of the geographical objects represented by the POI and their degree of public recognition affect the function of a plot. In this paper, the occupied area of POI points is selected as one of the weights (Table 2): the average building area or occupied area of each kind of POI is determined by referring to the current format classification standard GB/T 18106-2010 and the urban public service facility planning standard GB 50442 (draft for comments), and the area values are then graded and scored (Table 3). Following the research of Xue and Zhao et al. [36,37], public awareness is used as the other impact factor (Table 4). The weights of the two influencing factors were tested at 1:9, 3:7, 5:5, 7:3 and 9:1; after sampling verification of each ratio, only the results with the weight 5:5 reached an accuracy of more than 80%. Therefore, in order to ensure accuracy, we set the weight of the two influencing factors to 5:5. After superposing the weights of the different POI data in the different land-use types, the corresponding weights of the different land-use types are obtained. The full score of the two factors is 100, and the total score after weighting is 100. For example, if the general area score of a POI point is 50 and its public awareness is 0.7 (i.e., 70 as a percentage), then the weight assignment result of this POI point is 60; a minimal sketch of this combination is given below. Based on these, the corresponding weights of each land type are: science and education 65, administration and public services 50, residential 30, commercial and business facilities 25, industrial 70, and green space 80.
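A minimal sketch of the 5:5 weight combination; the example scores are illustrative rather than values taken from the cited standards.

```python
def poi_weight(area_score, public_awareness, w_area=0.5, w_awareness=0.5):
    """Combine the two impact factors with equal weight (5:5).

    area_score in [0, 100], public_awareness in [0, 1]; the result lies in [0, 100].
    """
    return w_area * area_score + w_awareness * 100.0 * public_awareness

# e.g. poi_weight(50, 0.7) -> 60.0, matching the worked example above
```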
Urban functional area identification
According to the weight of each POI class, the frequency density of each POI class in a land use unit is calculated, which is used as the basis of functional zoning [3]: the frequency density F_i of class i in a unit is obtained from d_i, the sum of the kernel density of class i POI in the unit, and W_i, the weight of class i POI. Due to the complex and diverse urban functional structure, there are both single-function areas and areas with two or more mixed functions. Therefore, this paper further compares the frequency density proportions of the POI classes in each unit: when the frequency density proportion of one POI class in the unit is greater than or equal to 50%, the unit is a single functional area; when the two highest frequency density proportions in the unit are both between 20% and 50%, the unit is defined as a mixed functional area of those two types; when the frequency density of all kinds of POI in the unit is 0, it is defined as a no data area; and the remaining units are comprehensive functional areas. Mixed degree. According to the mixed index of land use, the mixed degree of the functional areas in the research area is further evaluated [38]. The index is computed from the proportions p_i, where D is the resulting mixed degree, p_i is the proportion of the class i POI kernel density value to the total kernel density value of all POI classes in the functional area unit, and n is the number of POI classes; if the kernel density of some POI class in the unit is 0, it does not participate in the calculation. A minimal sketch of these discriminant rules is given below.
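The sketch below implements the discriminant rules and a mixed-degree index; the Shannon-entropy form of the index is an assumption, since the extracted text only defines the proportions p_i.

```python
import numpy as np

def classify_unit(freq):
    """Assign a block unit to a functional type from its POI frequency densities (a sketch).

    freq: dict mapping POI class name -> frequency density F_i of the unit.
    """
    total = sum(freq.values())
    if total == 0:
        return "no data"
    ratio = {k: v / total for k, v in freq.items()}
    top = sorted(ratio.items(), key=lambda kv: kv[1], reverse=True)
    if top[0][1] >= 0.5:
        return f"single: {top[0][0]}"
    if len(top) >= 2 and all(0.2 <= r < 0.5 for _, r in top[:2]):
        return f"mixed: {top[0][0]} + {top[1][0]}"
    return "comprehensive"

def mixed_degree(freq):
    """Entropy-style land-use mix index over the class proportions (an assumption)."""
    p = np.array([v for v in freq.values() if v > 0], dtype=float)
    if p.size == 0:
        return 0.0
    p /= p.sum()
    return float(-(p * np.log(p)).sum())
```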
Results of urban functional area identification
Through OSM network grading and buffer operations, the central urban area of Hangzhou is divided into 1542 block units. After calculating the weight and frequency density of the various POI classes in each block unit, the identification result of the functional areas in the Hangzhou central district is obtained (Fig 3). The mixed functional areas include, among others, the combination of commercial and business facilities land and residential land (234 blocks, with a total area of 83.733 km2), the combination of commercial and business facilities land and science and education land (136 blocks, 34.917 km2), the combination of commercial and business facilities land and industrial land (39 blocks, 10.678 km2), the combination of commercial and business facilities land and green space land (7 blocks, 5.466 km2), the combination of administration and public services land and residential land (14 blocks, 15.317 km2), the combination of administration and public services land and green space land (7 blocks, 2.714 km2), the combination of administration and public services land and industrial land (6 blocks, 6.084 km2), the combination of administration and public services land and science and education land (4 blocks, 8.618 km2), the combination of industrial and residential land (2 blocks, 0.258 km2), the combination of residential and green space land (1 block, 0.411 km2), the combination of science and education land and residential land (4 blocks, 2.838 km2), the combination of science and education land and industrial land (1 block, 0.961 km2), and the combination of science and education land and green space land (2 blocks, 2.938 km2). There are 103 comprehensive functional areas, covering an area of 39.801 km2. The remaining units are no data areas (79 blocks, with a total area of 0.333 km2); because the counts of the various POI in these blocks are small, their kernel density values are small as well.
Spatial distribution of different functional areas
Single functional area. The single functional areas in the central urban area of Hangzhou are mainly commercial and business facilities land and residential land (Fig 4). Among them, the number of commercial and business facilities plots is the largest, accounting for 76% of the plots in the single functional areas and 27% of all plots in the research area; their total area is also the largest, accounting for 69% of the area of the single functional areas and 26% of the research area. Commercial and business facilities land is widely distributed, but there are large differences between regions, showing the characteristic of more in the periphery and less in the center. The number of residential plots accounts for 13.7% of the single functional areas and 4.9% of the research area, and their area accounts for 24% and 9%, respectively. The spatial distribution of residential land is similar to that of commercial and business facilities land, with more in the periphery and less in the center; residential land is mainly distributed in the north and south of Xihu District, the east of Jianggan District, and the southern edge of Binjiang District. Meanwhile, the number of industrial plots accounts for 6% of the single functional areas and 2% of the research area, with the area accounting for 1% and 0.4%, respectively. Industrial land is mainly distributed along the main traffic arteries. The administration and public services land, science and education land, and green space have small areas and a sparse distribution, because these functions are often mixed with others and their recognition rate as single functional areas is low.
Mixed functional area and comprehensive functional area. Mixed functional areas in the downtown area of Hangzhou are widely distributed (Fig 5), with an area of 319.767 km2, accounting for 55% of the total research area, which indicates that the mixing degree of urban functions in downtown Hangzhou is relatively high. The dominant mixed type is the combination of commercial and business facilities land and administration and public services land, which occupies the largest area, accounting for 25% of the research area. In terms of spatial distribution, this type is more numerous and concentrated in the central part and less numerous and scattered in the periphery; specifically, it is primarily concentrated in Shangcheng District and Xiacheng District, the south of Gongshu District, the east and west of Xihu District, and the west of Jianggan District. Next is the combination of commercial and business facilities land and residential land, accounting for 14% of the research area and interspersed among the various functional areas. The number of other mixed functional areas is 223, accounting for 14% of the blocks in the research area, and their area accounts for 16% of the total research area; their layout is more dispersed. Among them, the combination of residential and green space land and the combination of science and education and industrial land are the least numerous, with 1 block each, and the area of the combination of industrial and residential land is the smallest, accounting for only 0.04% of the research area. The overall layout of the comprehensive functional areas in the central city of Hangzhou is scattered (Fig 6); they account for 6.7% of the total research area, and their area accounts for 6.8% of the total functional area. It is clear that the comprehensive functional areas are distributed in a multi-point pattern around the various single and mixed functional areas. Distribution characteristics of the mixing degree. According to the distribution map of the mixing degree of the functional areas (Fig 7), the mixed-use degree of land in the central urban area of Hangzhou shows obvious regional differentiation. The areas with a high mixing degree are the mixed functional areas and the comprehensive functional areas, and those with a low mixing degree are the single functional areas. The high-value areas are mainly distributed in the south of Xihu District and Binjiang District, the middle of Shangcheng District and Xiacheng District, the west of Gongshu District, and the riverside area of Jianggan District. The low-value areas are mainly distributed in the west of Xihu District, the north of Binjiang District, the south of Shangcheng District and Gongshu District, and the north of Xiacheng District and Jianggan District. In each administrative region, there are both high-value and low-value areas of mixing degree, and the distribution is fairly even.
Validation of results
In order to verify the accuracy of the functional area identification results, this paper draws on the research of Ding and Kang [39,40]. We randomly select 40 functional area units (Fig 8), judge the real attributes of each unit according to the AutoNavi map, and evaluate the accuracy of the identification results by scoring the degree of conformity. The full score is 3 (complete conformity), and 0 means complete non-conformity. If a single functional area is identified as a mixed functional area containing that type, the score is 2, and otherwise 1. If a mixed functional area is identified as a single functional area containing one function of this mixed functional area, or as another mixed functional area containing one of its functions, the score is 2, and otherwise 1. If a comprehensive functional area is identified as a mixed functional area, the score is 2; if it is identified as a single functional area, the score is 1. The overall accuracy is the sum of the actual scores divided by the total full score, $\mathrm{Accuracy} = \frac{\sum_{i=1}^{n} x_i}{X}$, where n is the number of samples, X is the total full score of all samples, and x_i is the actual conformity score of sample i.
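A minimal sketch of the accuracy computation from the conformity scores.

```python
def identification_accuracy(scores, full_score=3):
    """Overall accuracy from the per-sample conformity scores (a sketch).

    scores: list of conformity scores in {0, 1, 2, 3}, one per sampled unit.
    """
    return sum(scores) / (full_score * len(scores))

# e.g. identification_accuracy([3, 3, 2, 1]) -> 0.75
```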
The evaluation and calculation of the accuracy of the functional zoning are shown in Table 5. It can be seen that the overall accuracy of urban functional area identification in the central urban area of Hangzhou reaches 82%, indicating that this research can effectively identify the urban functional areas with a certain accuracy.
Method feasibility analysis
In the past, the basic data for urban functional area identification were mainly census data and government planning documents. These data have limitations (high acquisition cost, difficult management, poor timeliness, etc.), so research in this field was not detailed enough. With the development and popularization of computer technology, remote sensing technology and big data have been applied to urban functional area recognition, but problems remain, such as tedious classification procedures and unguaranteed precision of the results. Based on the kernel density estimation algorithm, this paper proposes an urban functional area identification method that combines OSM and POI data. It has high reference value for the analysis of overall urban patterns and spatial optimization. At the same time, the availability of OSM and POI data means that this method can easily be applied to other cities.
Firstly, POI data provide the spatial location and attribute information of various urban facilities. Real-time and efficient data acquisition reduces the cost and uncertainty of traditional identification methods such as remote sensing, field research and questionnaire surveys. The application of big data overcomes the poor timeliness and small samples of traditional data, improves the accuracy of urban functional area identification, and facilitates larger-scale research. Then, the OSM road network is used to generate the road space, and the research area is divided into independent block units, so that the segmentation of urban functional areas is more reasonable. The kernel density estimation method is used to spread the influence of POI to adjacent locations, weaken the discreteness of the POI points, and identify the functional areas through optimal bandwidth selection; the results are basically consistent with the actual situation. The calculation of the mixed index of the urban functional areas conveniently expresses the mixing degree of land use in the central urban area of Hangzhou. Although urban land use layout planning emphasizes functional zoning, mixed land use planning should also be adhered to in order to promote economic development and facilitate the lives of residents. Finally, the identified functional areas are compared with the AutoNavi map imagery; the verification of the results shows that the method has a high recognition rate for urban functional areas, can make up for the disadvantages of traditional methods, and is feasible. It can support research on the spatial layout of urban functional areas, so that decision-making organs can find the shortcomings of existing urban planning, coordinate the development direction of the planned functional areas, improve the vitality of urban space, and carry more human activities.
However, this method also has some shortcomings. For example, the verification of the results is only carried out through the comparison of typical regions, and there is no true value as a comparison. Therefore, it is hard to determine the accurate identification accuracy of this method. In addition, this method is based on the OSM road network to divide urban units, and the basic road network data have the problems of low density and the lack of data in suburban and rural areas, which leads to the excessive division of functional units in such areas, resulting in poor local recognition results.
Current situation analysis
The urban functional area projects each element of the city in space and forms a closely related organism. The identification of Hangzhou urban functional areas based on big data is not only beneficial to the adjustment of urban spatial layout and the creation of good conditions for the development of urban economy and society but also can improve the efficiency of urban land use [41] through the rationalization of urban functional zoning, so as to ensure the effective implementation of urban sustainable development strategy [42,43]. According to the "Results" part, we have the following judgments: (1) The urban functional pattern of "single-mixed-comprehensive" coexistence. On the whole, the single functional areas are mainly distributed in the periphery of the city. For example, the single residential areas are mainly distributed in the north and south of West Lake District, the east of Jianggan District, and the southern edge of the Binjiang District. The single industrial areas are mainly distributed in the eastern part of the Jianggan District and the southern part of the Xihu District. The single areas for science and education are mainly distributed in the northern part of the main urban area of Hangzhou. In general, the more single functional land, the more it affects the mobility of the city [44]. For example, the time cost of residents leaving the residential area to reach two or more other functional areas is relatively higher. From the perspective of sustainable development, the simplification of functions will not only affect the "mobility" of the city but also directly reduce the attractiveness and vitality of the city [45]. The existence of single-use areas is related to policies, history, and land use cost. For example, the cost of land use in central areas may be more than ten times that in the suburbs. Therefore, large scales of land development are often developed in the suburbs [15]. At the same time, the development area policies with Chinese characteristics will also lead to the emergence of large-scale single functional areas in the suburbs.
The mixed functional areas are distributed across the whole research area and nested with single functional areas, which indicates the rationality of the spatial structure of the main urban area of Hangzhou. For example, residential land forms residential-industrial zones around enterprises, commercial-residential land around businesses, and public service-residential land around public services. From the perspective of different administrative regions, each administrative region has areas with a higher degree of mixing as well as areas with a lower degree of mixing. This is in line with the law of urban growth and development, which revolves around multiple centers of economic activity [46,47]. However, most of the highly mixed areas are not centers of economic activity, and there are no highly mixed urban functions around traffic stations. This is mainly due to the rigid system of land use and the consolidation of management models in China [48,49].
(2) Comprehensive functional areas bring more vitality to the city. A comprehensive functional area means that there are three or more urban functions in the plot, reflecting the organic integration of different urban functions. For example, the southernmost comprehensive functional area of Binjiang District is the location of the China Network Writers' Village and the core for the development of China's network literature industry; the high-quality environment and supporting facilities attract more writers. Similarly, in the easternmost comprehensive functional area of Jianggan District, there are enterprise platforms with scientific research and industrial incubation functions, such as ABO Biopharmaceuticals (Hangzhou) Co., Ltd. and Sanhua Academia Sinica, as well as high-grade residential areas, and the supporting facilities are relatively complete. These regions are very typical innovation areas in Hangzhou, and the mixing of functions is also the most prominent feature of high-quality urban innovation space [50,51].
(3) Low mixing degree in areas with ecological and production functions. In the research area, the mixing degree of some areas is deliberately controlled, mainly the areas that undertake ecological functions, such as the Qiantang River, Xixi Wetland, and the West Lake Scenic Area, and the areas that undertake production functions, such as the Zhejiang Qiaosi Farm in Jianggan District. This is because the strategy of ecological civilization and the protection of cultivated land are both basic state policies of China [52], which require strict restrictions on the development of ecological and cultivation areas. A high degree of mixing often means diversified and high-intensity land development. Therefore, the existence of such low-mixing land use is of great significance for the urban development strategy.
Suggestions
The rapid evolution of cities and the promotion of China's urbanization policy have brought many new challenges to urban planning and management. The spatial division of urban functional areas and zoned urban management can, to a certain extent, provide a new idea for planners and managers. The government could plan different functional areas of the city according to the social, economic and natural geographical conditions, and realize the change from the traditional "unitary and extensive" management mode to the modern "diversified and refined" management mode [53,54].
Based on the above comprehensive judgment of the current situation, we believe that the urban functions of the central city area of Hangzhou are becoming diversified and individualized. However, with the continuous evolution of the urban spatial structure, there are many potential problems [55]. First of all, single functional areas are still numerous, leading to the separation of work and housing, the separation of school and residence, and heavy traffic pressure during the morning and evening rush hours. Secondly, there are relatively few comprehensive functional areas and insufficient space for diversified urban functions, which brings the problem of insufficient vitality and attraction of the city. Finally, the over-strict control of land function restricts the diversified allocation of ecological and productive land, which causes great inconvenience to the surrounding residents.
Therefore, in the future, the urban land-use functions of the central urban area of Hangzhou should be optimized in the following ways: (1) developing mixed-use areas based on actual needs and guiding the optimization of urban mixed areas to improve the efficiency of land use; (2) reducing single-use land types and promoting the transition from single-use land to mixed-use land; (3) adhering to the construction of a multi-center urban spatial structure and promoting the fair distribution of social-spatial elements [56]; (4) while maintaining the strict systems of cultivated land protection and ecological protection, also providing diversified infrastructure to serve the neighboring residents.
Conclusions and deficiencies
This article takes the central urban area of Hangzhou as the research area and proposes a method for identifying urban functional areas based on a kernel density estimation algorithm that integrates OSM road network and POI data. First, the OSM road network divides the urban area into research units that share similar social and economic functions. The influence diffusion of POI facilities is then modeled with the kernel density estimation algorithm, which weakens the discretization of the POI points, and the kernel density bandwidth and POI weights are set to identify urban functions and project them onto the different plots. Finally, the land-use mixing index is calculated. The results show that: (1) There are three types of urban function units in the central urban area of Hangzhou: single, mixed, and comprehensive. The central urban area is divided into 21 functional areas, and commercial land is the dominant function of the research area. (2) The single and mixed functional areas show a ring-like, stratified geographical distribution, that is, an obvious "core-periphery" differentiation. Mixed functional areas are widely distributed, but areas with ecological and production functions have a low degree of mixing. Comprehensive functional areas are also widely distributed and show higher potential and vitality. (3) The identification results are highly consistent with the actual situation of the central urban area of Hangzhou, so the kernel density weighting method, which considers public awareness and the general area of facilities, has high accuracy and feasibility and can provide a reference for urban development planning and management. (4) The timeliness and efficiency of OSM and POI data acquisition, together with the scientific rigor of big-data methods, make this combined OSM-POI identification method practical. It can easily be applied to identify functional areas in other urban regions, helping people understand urban spatial structure more intuitively, and it has high potential for assisting government departments and researchers in the scientific analysis of urban planning and land use.
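To make the workflow above concrete, the following minimal Python sketch reproduces its main steps: weighted Gaussian kernel density of POIs evaluated at block centroids, assignment of a dominant function per block, and an entropy-based mixing index. The Gaussian kernel, the 300 m bandwidth, the category weights, and the Shannon-entropy form of the index are illustrative assumptions, not the exact settings used in this study.

```python
# Minimal sketch of the KDE-based functional-area identification described above.
# Kernel shape, bandwidth, category weights and mixing index are assumptions.
import numpy as np

def gaussian_kde_at(points, centers, bandwidth):
    """Sum of Gaussian kernels centred on `points`, evaluated at `centers` (metres)."""
    d2 = ((centers[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * bandwidth ** 2)).sum(axis=1)

def identify_blocks(block_xy, poi_xy, poi_cat, weights, bandwidth=300.0):
    """Dominant urban function and mixing index for each OSM-derived block.

    block_xy : (n_blocks, 2) block centroids;  poi_xy : (n_pois, 2) POI coordinates
    poi_cat  : (n_pois,) category label per POI;  weights : dict category -> weight
    """
    cats = sorted(weights)
    score = np.zeros((len(block_xy), len(cats)))
    for j, c in enumerate(cats):
        pts = poi_xy[poi_cat == c]
        if len(pts):
            score[:, j] = weights[c] * gaussian_kde_at(pts, block_xy, bandwidth)
    dominant = [cats[j] for j in score.argmax(axis=1)]
    # Shannon-entropy mixing index: 0 = single function, 1 = fully mixed
    p = score / np.clip(score.sum(axis=1, keepdims=True), 1e-12, None)
    mix = -(p * np.log(np.clip(p, 1e-12, None))).sum(axis=1) / np.log(len(cats))
    return dominant, mix
```

In practice, the dominant-function rule and the thresholds on the mixing index would be calibrated against field-survey data, as in the accuracy check reported above.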
This paper has several deficiencies. Because of defects in the POI data, the ability to identify areas with low building density, unused urban land, and farmland is weak. At the same time, the completeness of the POI data, the rationality of the data cleaning, and the accuracy of the reclassification need further study. Given the limitations of the result-verification step in the research methods, a variety of validation methods could be added in future research for a horizontal comparison of results. Finally, owing to the lack of historical POI data, the evolution of urban functional areas cannot be identified; combining mobile signaling data, shared vehicle (bicycle) data, social media big data and other multi-source data to analyze the evolutionary logic and driving mechanisms of urban functional areas will be the direction of future research. | 9,583 | 2021-05-27T00:00:00.000 | [
"Mathematics"
] |
Extending the spatiotemporal resolution of super-resolution microscopies using photomodulatable fluorescent proteins
In the past two decades, various super-resolution (SR) microscopy techniques have been developed to break the diffraction limit using subdiffraction excitation to spatially modulate the fluorescence emission. Photomodulatable fluorescent proteins (FPs) can be activated by light of specific wavelengths to produce either stochastic or patterned subdiffraction excitation, resulting in improved optical resolution. In this review, we focus on the recently developed photomodulatable FPs for commonly used SR microscopies and discuss the concepts and strategies for optimizing and selecting the biochemical and photophysical properties of PMFPs to improve the spatiotemporal resolution of SR techniques, especially time-lapse live-cell SR techniques.
Introduction
Fluorescence microscopy using genetically encoded fluorescent proteins (FPs) plays a key role in elucidating biological processes as well as in vivo dynamics in a minimally invasive manner. However, because of the diffraction limit, it is a challenge to visualize objects with sizes smaller than 200 nm in the lateral direction and 500 nm in the axial direction. To overcome this problem, several super-resolution (SR) techniques have been developed to extend the diffraction-limited spatial resolution by as much as an order of magnitude. [1][2][3][4][5][6][7] Most of these SR techniques use light-controllable FPs whose fluorescence emission is modulated by light irradiation with specific wavelengths. We refer to these light-controllable FPs as photomodulatable FPs (PMFPs). Compared with organic dyes, FPs enable easy and genetically specific labeling of both fixed and living cells. Although some unique characteristics of organic dyes and other fluorophores, such as their brightness and photostability, can be superior to those of PMFPs, the sophisticated approaches required to deliver them into biological cells and to decrease unspecific labeling limit their application mainly to fixed samples and make the imaging of living cells a challenge. There are three classes of PMFPs: photoactivatable FPs (PAFPs), photoconvertible (also called photoswitchable) FPs (PCFPs), and reversibly photoswitchable FPs (RSFPs). 8,9 PAFPs can be activated from a nonfluorescent (dark) state to a fluorescent state, whereas PCFPs undergo a conversion from one color to another. In contrast to PAFPs and PCFPs, RSFPs can be reversibly photoswitched between the active and inactive states. In this review, we focus on the recently developed PMFPs for commonly used SR microscopies and discuss the concepts and strategies for optimizing and choosing the biochemical and photophysical properties of PMFPs to enhance the spatiotemporal resolution of SR techniques, especially time-lapse live-cell SR imaging.
Breaking the Diffraction Limit with Photomodulatable FPs
The image of an infinitely small object under a light microscope consists of a central spot surrounded by a series of higher-order diffraction rings. The size of the central spot (also called the Airy disk) is equal to the diffraction limit, d, which is given by the equation d = 0.61λ/NA, where λ is the emission wavelength and NA is the numerical aperture of the objective. 10 Fluorophores within the diffraction limit remain indiscernible from each other if their fluorescent signals are recorded simultaneously. In the past decades, several SR techniques have been developed to break the diffraction limit. The main principle of these techniques is to prevent the simultaneous excitation or emission of all the fluorophores in the sample by using nonuniform subdiffraction excitation or emission within the diffraction spot, successively changing the excitation or emission location and recording time-sequential images. If digital cameras are used, the image within one diffraction-limited zone occupies several pixels. The subdiffraction excitation or emission locations are sequentially moved across the image field to ensure that all the fluorescent molecules within one diffraction-limited zone are recorded. The time-sequential information on the camera pixels, either the intensity and/or fluctuation of each pixel or the centroid of the single-molecule fluorescence distribution, is used to reconstruct an SR image using different algorithms in different SR techniques. We refer the readers to recent excellent reviews for in-depth descriptions of these techniques. [11][12][13] Generally, there are two strategies of nonuniform subdiffraction excitation or emission: stochastic emission and patterned excitation. The first strategy uses PMFPs whose emissions are stochastically controlled by the illumination light so that one, several, or an ensemble of molecules within one diffraction-limited zone are recorded at a time. (f)PALM/STORM 1-3 and single-molecule-based variations [14][15][16][17] utilize this strategy to improve resolution by localizing the position(s) of a single fluorophore molecule or of overlapping fluorophores, respectively, whereas the super-resolution optical fluctuation imaging (SOFI) technique 18 uses the intensity fluctuation of pixels in sequentially captured images for cross-correlation analysis and SR reconstruction. The second strategy, used in stimulated emission depletion (STED), 7 ground state depletion (GSD) microscopy 19 and structured illumination microscopy (SIM) 6 SR microscopies, relies on patterned subdiffraction excitation (point pattern or line pattern) to reduce the size of the ensemble of excited fluorophores. During the imaging of the subdiffraction pools at each time point, the signal from molecules outside the subdiffraction area but within the diffraction-limited area contributes to the background and signal noise and decreases the resolution. Thus, near-field scanning optical microscopy (NSOM), 20 which also uses this strategy, has a smaller excitation spot size and a lower surrounding fluorescence background within one diffraction-limited zone compared with other SR techniques. Both conventional FPs, such as GFP, and PMFPs can be used for this strategy. However, PMFPs require orders of magnitude lower light intensity by using nonlinear subdiffraction excitation 5,7 and further improve spatial resolution in saturation-based SR techniques (NL-SIM 21 and RESOLFT 4 ).
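As a quick numerical illustration of the diffraction limit d = 0.61λ/NA quoted above, the snippet below evaluates it for assumed, typical values of emission wavelength and numerical aperture (these inputs are not taken from this review).

```python
# Lateral diffraction limit d = 0.61 * wavelength / NA for assumed typical values.
def diffraction_limit_nm(wavelength_nm, numerical_aperture):
    return 0.61 * wavelength_nm / numerical_aperture

print(diffraction_limit_nm(520, 1.4))   # green emission, NA 1.4 objective -> ~227 nm
print(diffraction_limit_nm(650, 1.4))   # red emission -> ~283 nm
```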
Notably, our recently developed RSFP Skylan-NS and patterned activation NL-SIM (PA NL-SIM) make practical, noninvasive time-lapse live-cell imaging with very high spatiotemporal resolution possible. 22 For both strategies and their related SR techniques, three key characteristics, discussed later, are very important for improving the spatiotemporal resolution: (1) the size of the subdiffraction excitation pattern (spot or line), (2) the fluorescence signal of the subfocus location, and (3) the signal-to-background ratio (SBR). We will focus on recently developed PMFPs and discuss their photochemical properties that are related to resolution improvement in commonly used SR techniques, especially live-cell SR techniques.
Photophysical and Biochemical Properties of Photomodulatable FPs
Since the discovery of PMFPs, extensive efforts have been made to engineer FPs with improved properties, with the aim of achieving better image accuracy and precision. The following are some basic properties that determine whether an FP is suitable for use in SR microscopy.
Monomeric properties
This property is very important but often overlooked in SR techniques. In our opinion, it should be the first consideration when choosing an appropriate PMFP for SR imaging, as there is no value in obtaining wrong information from mislocalized artificial structures, however high the resolution. For a fluorescent protein, it is worth noting that its oligomeric state depends on its concentration, which is related to the local environment in living cells. Many "monomeric" FPs tested in solution or by gel electrophoresis in vitro are not as monomeric as expected and form oligomers once they are confined in a crowded in vivo environment such as a membrane. An elevated local concentration of the FPs is the main cause, as sedimentation velocity experiments show that FPs that are not true monomers gradually form dimers and higher-order oligomers with increasing concentration in vitro. 23 For any SR technique, although different target proteins have different expression levels and intrinsic oligomerization tendencies, a monomeric PMFP should be chosen to minimize the risk of disturbing the localization and function of the target protein. Notably, dimeric FPs may cause artificial clusters of the target proteins by enhancing the dimerization tendency of the target protein. 24 Studies by our group and others show that the oligomerization of FPs affects target protein localization and even function. 23,24 Fortunately, many truly monomeric PMFPs with very good properties have been developed for common SR techniques by our lab and others (Table 1).
(Note to Table 1: the brightness was determined as the product of the extinction coefficient and the quantum yield.)
Brightness
Brighter FPs are always in demand because brightness is a key factor that directly affects the imaging resolution. For all SR imaging methods, brighter fluorescence means that more photons can be collected by the detector and a higher SBR can be achieved. Moreover, brighter FPs require a lower light intensity or a shorter acquisition time for the same fluorescence signal, therefore significantly reducing photobleaching and damage to the FPs. Additionally, a shorter acquisition time enables higher temporal resolution. The brightness of a fluorescent protein can be calculated as Brightness = ε_ex × Φ_ex/em, where ε_ex is the molar absorption coefficient at a given excitation wavelength and Φ_ex/em is the quantum yield at a given excitation and emission wavelength.
Thus, the photon absorption capability and energy conversion efficiency of a fluorescent protein determine its bulk brightness. However, for single-molecule-based technology such as PALM, the number of photons emitted by a single molecule in one frame, instead of bulk brightness, is what really matters. The photon number is the product of the photon emission rate and the exposure time, and the photon emission rate is proportional to the bulk brightness and the incident light intensity. From experience, the brightness of a fluorescent protein for single-molecule imaging should be no less than 3 × 10⁴. For blue and green FPs, this threshold value should be higher, as these proteins may have a higher background signal due to autofluorescence.
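The two calculations just described can be illustrated with the short sketch below; the EGFP-like extinction coefficient and quantum yield, as well as the emission rate and exposure time, are assumed example values, not measurements from this review.

```python
# Bulk brightness and photons-per-frame estimates; all input values are assumptions.
def bulk_brightness(epsilon_ex, quantum_yield):
    return epsilon_ex * quantum_yield            # units of M^-1 cm^-1 (relative brightness)

def photons_per_frame(emission_rate_hz, exposure_s):
    return emission_rate_hz * exposure_s

b = bulk_brightness(56_000, 0.60)                # EGFP-like values (assumed)
print(b, b >= 3e4)                               # 33600.0 True -> meets the ~3e4 rule of thumb
print(photons_per_frame(2.0e4, 0.03))            # e.g. 20 kHz emission, 30 ms frame -> 600 photons
```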
Photostability
For SR imaging techniques such as STED and RESOLFT, which use a much higher laser intensity (10⁴-10⁵ times higher than PALM and STORM) than traditional microscopy, the photostability of the FP is one of the indispensable factors that need to be considered. On the other hand, for SR imaging techniques such as PALM and STORM, which need to acquire ten thousand frames to reconstruct one image, high photostability of the FP is also preferable. For more challenging tasks such as live-cell or 3D SR imaging, the photostability of the candidate FP is the first priority. There are several photostable FPs suitable for 2D super-resolution imaging in fixed cells. However, there is still a great shortage of super photostable FPs, especially red and far-red FPs, which are useful in 3D and live-cell imaging.
Photostability can be denoted by τ1/2, which is the time required to photobleach half of the maximum fluorescence. This parameter depends on the illumination conditions, expression systems and fusion proteins; therefore, it is of great importance to measure the photostability in each experiment. Furthermore, note that for RSFPs, which can be turned on/off repeatedly, τ1/2 is the time it takes for the fluorescence maximum of each cycle to decrease to half of its value.
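A small sketch of how τ1/2 could be read off a measured photobleaching trace, by linear interpolation around the half-maximum point, is given below; the exponential trace used here is synthetic and serves only to make the example runnable.

```python
# Estimate the photobleaching half-time tau_1/2 from a fluorescence-vs-time trace.
import numpy as np

def half_time(t, intensity):
    f = intensity / intensity.max()
    i = int(np.argmax(f <= 0.5))          # index of the first sample at or below half-maximum
    t0, t1, f0, f1 = t[i - 1], t[i], f[i - 1], f[i]
    return t0 + (0.5 - f0) * (t1 - t0) / (f1 - f0)

t = np.linspace(0, 100, 501)               # seconds
trace = np.exp(-t / 30.0)                  # synthetic decay; true tau_1/2 = 30*ln(2) ~ 20.8 s
print(round(half_time(t, trace), 1))       # ~20.8
```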
Maturation time
With the help of oxygen, newly translated FP peptides go through several chemical reactions that lead to maturation of the chromophore and the onset of fluorescence. In this process, oxygen acts like a double-edged sword, because increased contact of the FPs with oxygen increases both the maturation speed and the risk of bleaching. In most cases, fast maturation means more detectable fluorescent PMFP-fusion molecules at the time of imaging, a higher signal, a shorter acquisition time and, thus, better time resolution. This is vital for live-cell imaging, which needs to capture movements on the nanosecond to second scale. Moreover, FPs with a short maturation time, such as mEos3.2, 23 will have a higher labeling density, which is a key determinant of the imaging resolution of PALM. 13 Currently, two methods have been introduced to probe the maturation speed of FPs. One method measures the recovery time of denatured FPs in vitro, whereas the other directly records the time interval between FP expression and fluorescence in vivo. It has been reported that FPs with superfolding abilities, such as sfGFP, fluoresce even if their fusion protein is expressed in inclusion bodies. 36 Thus, one can use this method to rapidly determine whether an FP has good folding and maturation properties.
Labeling density
An appropriate labeling density, neither too high nor too low, is the key factor for higher resolution in both stochastic and patterned subdiffraction excitation SR techniques. A high labeling density provides a high resolution; however, the activating laser power (generally 405 nm) must be optimized to activate separated single molecules at a time for PALM/STORM imaging, because more molecules in the off state produce a higher background, and a higher illumination intensity or a longer time is needed to saturate the molecules to the off state for saturation-based SR techniques. Furthermore, a higher labeling density decreases the dynamic fluctuation in SOFI imaging.
Optimal Photomodulatable FPs for Different SR Techniques
In addition to the properties mentioned above, other properties of PMFPs, including photons per switching event, on-off duty cycle, contrast ratio and photon conversion efficiency, largely dictate the quality of SR images. Different SR techniques require the consideration of different photophysical and biochemical properties. In this review, we discuss the properties of PMFPs that affect their performance in the stochastic or patterned subdiffraction excitation-based SR techniques. Mainly based on our recently developed PCFP and RSFP, we provide important and practical suggestions for choosing PMFPs for the most promising live-cell SR techniques, specifically the SOFI and NL-SIM techniques.
PMFPs for PALM/FPALM
In PALM/FPALM imaging, the resolution depends on the localization precision and the molecular density. The former describes how well the center of a molecule can be determined, whereas the latter indicates how many molecules can be determined per unit area. Although molecular density was neglected for a time, it is essential for good PALM imaging. According to the Nyquist criterion, the average distance between two neighboring molecules should be no more than half of the achievable resolution, so a certain labeling density has to be achieved. Moreover, the localization precision formula below 37 shows that the photon number emitted by a single FP and the contrast ratio of the FP are also important: σ_loc² = (s² + a²/12)/N + 8πs⁴B²/(a²N²), where s is the standard deviation of the point spread function, a is the pixel size of the imaging detector, N is the photon number and B is the background noise. All three types of PMFPs can be applied to PALM imaging. We refer the readers to the recent excellent review of PMFPs for PALM/STORM techniques 8 and Xiaowei Zhuang's research paper. 24 Here, we highlight the important properties of recent PMFPs and present our considerations on their selection for PALM/STORM techniques.
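For orientation, the localization-precision expression above can be evaluated numerically; the PSF width, pixel size, photon number and background used below are assumed, order-of-magnitude inputs rather than values from the cited work.

```python
# Thompson-type localization precision with assumed, representative parameters.
import math

def localization_precision_nm(s, a, N, B):
    """s: PSF std dev (nm), a: pixel size (nm), N: photons, B: background (photons/pixel)."""
    var = (s**2 + a**2 / 12.0) / N + 8.0 * math.pi * s**4 * B**2 / (a**2 * N**2)
    return math.sqrt(var)

print(round(localization_precision_nm(s=110, a=100, N=800, B=5), 1))   # high photon budget -> a few nm
print(round(localization_precision_nm(s=110, a=100, N=60,  B=5), 1))   # low photon budget -> tens of nm
```

The second call illustrates why RSFPs that yield fewer than ~60 photons per switching event, as discussed below, give comparatively poor localization precision.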
Many PMFPs have been developed for PALM/STORM imaging 8 (Table 1). As mentioned above, the monomeric property is our first consideration when choosing a PMFP. mEos2 is one of the most widely used PCFPs for PALM/STORM microscopy. However, it was found to form dimers and higher-order oligomers at high concentrations. 31,38 We have shown that mEos2 causes incorrect intracellular aggregates when fused to membrane proteins, including the G protein-coupled receptor GRM4 and glucose transporter 4 (GLUT4) 23 (Fig. 1). Siyuan Wang et al. also reported that mEos2 exhibits artificial visible puncta or clusters in Escherichia coli when fused to the protease ClpP, the nucleoid-associated protein H-NS and the Tar protein, and causes vimentin filaments to cluster into thick bundles in mammalian cells. 24 Other PMFPs that have dimerization tendencies and fusion problems include mKikGR, mGeosM, mMaple, PAmCherry, PSCFP2 and tdEos 24 (Table 1). Notably, the authors stated that for proteins that oligomerize or cluster even without fusion to FPs, even a weak dimerization tendency of the FP may amplify the clustering effect of the target protein and cause artificial clusters. 24 Therefore, because it is not known before the experiment whether the target protein has an intrinsic tendency to aggregate, PMFPs that are less dimeric or not dimeric, including mEos3.2, Dronpa, PAGFP, mMaple3 and PAtagRFP, as well as our recently developed Skylan-S 25 and Skylan-NS, 22 should be considered first (Table 1).
Next, we consider the photon budget and on/off contrast ratio of PMFPs (Table 1). A higher photon budget leads to higher localization precision and, hence, higher image resolution. 39 Among the monomeric PMFPs mentioned above, PAtagRFP and mEos3.2 exhibited the highest photon budgets (800-900) among PAFPs 23,24 (Table 1). The green PAGFP has two-fold fewer photons (200-300) than mEos3.2 and PAtagRFP 24 (Table 1). Slowly switching RSFPs often have a higher number of photons per switching event and are thus better suited for PALM. The green RSFPs rsFastLime, 40 rsEGFP 41 and the updated version rsEGFP2 42 gave relatively low photon budgets (< 60 photons per switching event), which leads to relatively poor localization precision 24 for PALM/STORM imaging. However, it is worth noting that rsEGFP2 is an excellent alternative choice for RESOLFT due to the large number of switching cycles before photobleaching. 42 Skylan-S is our recently developed RSFP specific for SOFI imaging. However, it can also be used for the PALM/STORM technique at a higher illumination intensity because its photon number before bleaching is high enough. 25 The on/off contrast ratio is another key property to consider for PALM/STORM imaging (Table 1). For example, Skylan-NS, 22 an RSFP we recently developed specifically for the NL-SIM and RESOLFT techniques, has a high single-molecule photon number but is suboptimal for PALM/STORM imaging due to its low single-molecule contrast ratio. Its bulk fluorescence signal is much higher than that of other green RSFPs, but it is hard to detect single molecules, possibly because Skylan-NS matures very fast (unpublished) and can be easily activated to the on state by 488 nm illumination. For PALM/STORM imaging, all the PMFPs are illuminated during imaging, but only one or a few single molecules within the diffraction-limited zone are in the on state and excited to emit photons. However, the abundant other molecules within the diffraction-limited spot that are in the off state will produce a high background that affects the detection of the single molecule. The on/off contrast ratio is defined as the ratio of fluorescence between the on state and the off state under illumination by the imaging light only. 43 It is worth noting that the on/off contrast depends strongly on the illumination intensity and exposure time, which should be optimized for each PMFP to obtain optimal single-molecule detection. Because the emission spectrum of activated single molecules is red-shifted (compared with that of the inactivated molecules) and detected in a different color channel, PCFPs have much higher contrast ratios than RSFPs and PAFPs and hence better image quality (resolution). Therefore, PCFPs such as mEos3.2 and mMaple3 are the first consideration for PALM/STORM imaging of only one target protein. mEos3.2 has a very high photon budget and can be used for live-cell PALM microscopy using sCMOS camera-specific single-molecule localization algorithms. 44 Compared with mEos3.2, mMaple3 was reported to have higher signal efficiency, which is defined as the ratio between the number of detectable PMFP-fusion molecules per cell and the expression level of the fusion protein, 24 and thus, in theory, a higher labeling density and resolution; however, the lower photon budget of single molecules and the lower on-off ratio of mMaple3 in the red channel may decrease the spatial resolution in practical use. For RSFPs, the on/off contrast ratio is highly dependent on the labeling density and on the activating and excitation laser intensities.
rsKame is an excellent candidate as a green marker for dual-color PALM/STORM. 45 The ideal red partner still needs to be developed, as PAmCherry mentioned above has the problem of dimerization, whereas PAtagRFP has a very low signal efficiency. 24 The properties discussed above are also suitable for the updated versions of PALM/STORM that image several overlapping single molecules simultaneously.
RSFPs for SOFI
SOFI is a purely calculation-based imaging approach that can produce background-free, contrast-enhanced SR images based on the temporal correlation analysis of fluorescence fluctuation/blinking over hundreds of raw images. 18 In contrast to PALM/STORM imaging, in which only one or several single molecules are stochastically excited, SOFI uses light to stochastically activate small subsets of RSFPs within a densely labeled structure to produce a fluorescence fluctuation. The RSFP properties critical for SOFI imaging include (1) the averaged fluorescence intensity in the fluctuation state, (2) the on/off contrast ratio, (3) the photostability, and (4) the oligomerization tendency. The first three properties determine the fluctuation range of the imaged pixels and the SOFI signal, which are essential to the spatial resolution, and the last may lead to artificial aggregation of the target proteins. As in PALM/STORM imaging, the on/off contrast ratio in SOFI imaging is highly dependent on the labeling density, laser power and exposure time. Only a few RSFPs have been reported for SOFI imaging. Dronpa and rsTagRFP can be used for SOFI imaging; however, both have a low averaged fluorescence intensity in the fluctuation state, low photostability, and low on/off contrast ratios, which produce low SOFI signals. 25 Recently, we developed the novel monomeric green RSFP Skylan-S. Compared with Dronpa, Skylan-S has a much higher on/off contrast ratio and photostability 25 (Fig. 2). Notably, Skylan-S exhibits a 4-fold improvement in the fluctuation range of the imaged pixels and an order-of-magnitude increase in averaged fluorescence intensity in the fluctuation state. The higher fluctuation and averaged fluorescence intensity in the fluctuation state of Skylan-S produce a higher second-order cumulant value than that provided by Dronpa at each time point. With Skylan-S, high-resolution SOFI images of live cells can be obtained.
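A bare-bones sketch of the second-order SOFI computation referred to above is given below: the pixel-wise temporal auto-cumulant of the fluctuation, taken here at a chosen time lag (at zero lag this is simply the variance; practical implementations usually prefer a nonzero lag or cross-correlations to suppress shot noise). The random image stack is a stand-in for real blinking data.

```python
# Pixel-wise second-order auto-cumulant over a stack of raw frames (SOFI-2 sketch).
import numpy as np

def sofi2(stack, lag=0):
    """stack: (frames, H, W) raw images -> second-order SOFI image."""
    delta = stack - stack.mean(axis=0, keepdims=True)       # fluctuation about the temporal mean
    if lag == 0:
        return (delta * delta).mean(axis=0)                  # zero-lag cumulant (variance)
    return (delta[:-lag] * delta[lag:]).mean(axis=0)         # lagged auto-correlation

raw = np.random.poisson(50, size=(500, 64, 64)).astype(float)   # synthetic stand-in data
print(sofi2(raw, lag=1).shape)                                   # (64, 64)
```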
RSFPs for RESOLFT/Nonlinear SIM (NL-SIM)
RESOLFT and NL-SIM belong to the saturated-depletion-based SR techniques, which can be implemented by patterned photoactivation of RSFPs to dramatically lower the illumination intensities and decrease the size of the excitation ensemble (point or line). RSFPs can be switched off from a long-lived state under dramatically reduced laser power (nearly one million times lower than the intensity used in STED), which helped enable the invention of reversible saturable optical fluorescence transitions (RESOLFT) microscopy. Derived from STED, RESOLFT also uses a donut-shaped beam to switch off, but not deplete, fluorescence in the beam-covered region, and only the molecules at the small center of the beam remain fluorescent; by scanning this small center across the sample, RESOLFT can generate an SR image. Similarly, by replacing the donut-shaped beam with other illumination patterns, such as the sinusoidal stripes used in structured illumination microscopy, and by saturating one of the "on" or "off" states, nonlinear structured illumination microscopy (NL-SIM) increases the imaging speed compared with RESOLFT and achieves higher resolution than SIM. These two techniques require fewer raw images to reconstruct a final SR structure, making them suitable for live-cell imaging.
The following characteristics of RSFPs are most critical for these two SR microscopies: first, the integrated fluorescence signal across each switching cycle, which depends on the absorption cross-section, effective quantum yield and characteristic switching time from the fluorescent "on" to "off" state; second, the fluorescence contrast ratio of the "on/off" states; and third, the photostability under excitation and depletion. As mentioned above, there are not very many truly monomeric RSFPs with high photon budgets suitable for RESOLFT/NL-SIM imaging. The Dronpa and rsEGFP families have been exploited for saturated SR imaging. However, Dronpa has a limited number of switching cycles, a relatively low fluorescence signal and a poor contrast ratio under physiological conditions. Using rsEGFP, Grotjohann et al. demonstrated RESOLFT imaging of various living samples at ~40 nm resolution, including bacteria, mammalian cells, and organotypic tissue slices. 41 Additionally, by developing the fast-switching mutant rsEGFP2, they shortened the pixel dwell time 25- to 250-fold and revealed changes in the ER structure occurring in < 1 s. 42 rsEGFP2 exhibits extremely high photostability, showing potential utility in time-lapse live-cell imaging. However, its low photon numbers restrict its ability to reach the desired resolution at a reasonable SBR. Recently, we developed a truly monomeric RSFP, Skylan-NS (short for Sky lantern for Nonlinear Structured illumination), with excellent RSFP characteristics for SR live imaging. Skylan-NS produces ~10-fold more photons per switching cycle than rsEGFP2 and a 3.6- or 1.8-fold higher "on/off" contrast ratio than Dronpa or rsEGFP2, respectively. Notably, it provides ~700 switching cycles before bleaching to 1/e of the initial fluorescence intensity. These properties enable live-cell imaging at ~60 nm resolution at subsecond rates, with only a 100 W/cm² illumination intensity, for tens of time points across 40 × 40 μm² fields-of-view.
Future Directions and Challenges: Brighter, Higher Contrast Ratio, and More Colorful
PCFPs have a higher contrast ratio and spatial resolution than RSFPs. However, they occupy two channels, making dual-color PALM imaging with either another PCFP or an RSFP (PAFP) a challenge. Although it has been reported that mEos2 can be used with PSCFP2 for dual-color PALM microscopy by sequential imaging of mEos2 and PSCFP2, 46 it is difficult to convert all the green mEos2 to red or to bleach all the green mEos2 molecules before acquiring the green signal from the photoconverted PSCFP2. This is also a problem for mEos3. Dendra2 has very high conversion efficiency; however, its single-molecule properties are not suitable for PALM imaging (unpublished data). There is an urgent need for PCFPs with high photoconversion efficiency and excellent single-molecule photon output for dual-color PALM/STORM imaging. Green RSFPs are the best developed among RSFPs for both stochastic and patterned-excitation microscopies. In the future, brighter green RSFPs with enhanced contrast ratios and photostability will be required for live-cell SR techniques with higher spatiotemporal resolution and more time points. Additionally, red RSFPs and far-red RSFPs with optimal photochemical characteristics are highly needed for dual-color SR imaging. | 5,812.8 | 2016-05-30T00:00:00.000 | [
"Biology",
"Chemistry",
"Materials Science",
"Physics"
] |
Phosphorylation of GATA-4 is involved in alpha 1-adrenergic agonist-responsive transcription of the endothelin-1 gene in cardiac myocytes.
The expression of endothelin-1 (ET-1) in cardiac myocytes is markedly induced during the development of heart failure in vivo and by stimulation with the alpha(1)-adrenergic agonist phenylephrine in culture. Although recent studies have suggested a role for cardiac-specific zinc finger GATA factors in the transcriptional pathways that modulate cardiac hypertrophy, it is unknown whether these factors are also involved in cardiac ET-1 transcription and if so, how these factors are modulated during this process. Using transient transfection assays in primary cardiac myocytes from neonatal rats, we show here that the GATA element in the rat ET-1 promoter was required for phenylephrine-stimulated ET-1 transcription. Cardiac GATA-4 bound the ET-1 GATA element and activated the ET-1 promoter in a sequence-specific manner. Stimulation by phenylephrine caused serine phosphorylation of GATA-4 and increased its ability to bind the ET-1 GATA element. Inhibition of the extracellularly responsive kinase cascade with PD098059 blocked the phenylephrine-induced increase in the DNA binding ability and the phosphorylation of GATA-4. These findings demonstrate that serine phosphorylation of GATA-4 is involved in alpha(1)-adrenergic agonist-responsive transcription of the ET-1 gene in cardiac myocytes and that extracellularly responsive kinase 1/2 activation plays a role upstream of GATA-4.
expression in cardiac myocytes is induced by myocardial stretch, angiotensin II, and norepinephrine (6 -8). Left ventricular levels of ET-1 increase markedly in close association with the deterioration of systolic function following myocardial infarction and pressure overload (9,10). Immunohistochemical studies have demonstrated that ET-1 in the failing heart is localized in cardiac myocytes. ET receptor antagonists bosentan and BQ123 prevent the remodeling of the heart and have been shown to improve survival following myocardial infarction and pressure overload (9,10). These findings demonstrate that up-regulated expression of ET-1 in cardiac myocytes plays a critical role in the development of heart failure in vivo. However, the molecular mechanisms leading to this up-regulation in the failing heart are unclear at present.
The mechanisms regulating the transcription of the ET-1 gene have been studied in endothelial cells. The 204-bp sequence proximal to the transcription start site is sufficient to drive high levels of expression in these cells in culture (11). Mutation of a putative GATA element in this sequence diminishes the transcriptional activity of the ET-1 promoter (12)(13)(14). One endothelial factor that binds to the ET-1 GATA element has been shown to be GATA-2 (12,13). Although cardiac myocytes also express the subfamily of zinc finger GATA transcription factors (GATA-4/5/6), it is unknown whether the ET-1 GATA element is functional in this context. We and others have shown that GATA factors are required for transcriptional activation of the genes for β-myosin heavy chain and angiotensin II type 1a receptor during pressure overload-induced hypertrophy in vivo (15,16). For these reasons, it would be of interest to know whether GATA factors also mediate the up-regulation of the expression of cardiac ET-1 during myocardial cell hypertrophy. In addition, if this were the case, it would be important to examine how GATA factors are regulated during this process. The α1-adrenergic agonist phenylephrine (PE) is a potent inducer of hypertrophy and ET-1 expression in cardiac myocytes (17), providing a useful tool to study the mechanisms for the induction of ET-1 expression in the failing heart. Therefore, in the present study, we investigated the role of GATA factors in PE-stimulated transcription of cardiac ET-1.
Plasmid Constructs-The plasmid construct pwtET-CAT was the transcription start site-proximal 204-bp wild-type rat ET-1 promoter fused to the bacterial CAT gene (14). In pmutGATA-ET-CAT, a consensus GATA element located at sequence −136 to −131 was mutated in the context of the 204-bp rat ET-1 promoter (14). These plasmids were gifts of Dr. Thomas Quertermous (Stanford University, Palo Alto, CA). A promoterless CAT plasmid (basic CAT) was purchased from Promega (Madison, WI). pRSVCAT and pRSVluc contain the CAT and luc genes, respectively, driven by Rous sarcoma virus long terminal repeat sequences (15,18). The murine GATA-5 and GATA-6 expression plasmids, pcDNAG5 and pcDNAG6, were generous gifts of Dr. Michael S.
Parmacek (University of Pennsylvania, Philadelphia, PA) and were described elsewhere (20,21). The murine GATA-4 expression plasmid pcDNAG4 was subcloned by digesting pMT2-GATA-4 (22) (a generous gift of Dr. David Wilson, Washington University, St. Louis, MO) with EcoRI to isolate the 1.9-kilobase insert and subcloning the resulting cDNA fragment encoding the murine GATA-4 into the EcoRI site of the eukaryotic expression plasmid pcDNA3 (Invitrogen, Carlsbad, CA). Plasmids were purified by anion exchange chromatography (Qiagen, Hilden, Germany), quantified by measurement of A 260 , and examined on agarose gels stained with ethidium bromide prior to use.
Transfection and Luciferase/CAT Assays-24 h after plating, cells were washed twice with serum-free medium and then co-transfected with 2 μg of the CAT construct of interest and 0.1 μg of pRSVluc using LipofectAMINE Plus (Life Technologies, Inc.) according to the manufacturer's recommendation. After a 2-h incubation with the DNA-LipofectAMINE complex, the cells were washed twice with serum-free medium and further incubated for 48 h in serum-free medium in the presence of 1.0 × 10⁻⁵ M PE or saline as a control. The cells were then washed twice with ice-cold phosphate-buffered saline, lysed with lysis buffer, and subjected to assays for luciferase and CAT activities as described previously (15,18,19).
Electrophoretic Mobility Shift Assays (EMSAs)-Nuclear extracts were prepared from cultures of primary neonatal rat cardiac myocytes as described (19). Double-stranded oligonucleotides containing GATA motifs from the ET-1 promoter were designed. The sequences of the sense strands of these oligonucleotides were as follows: ET-GATA, 5′-CCTCTAGAGCCGGGTCTTATCTCCGGCTGCACGTTGC-3′, and mutET-GATA, 5′-CCTCTAGAGCCGGGTCTGCACTCCGGCTGCACGTTGC-3′. We also used a double-stranded oligonucleotide containing the p53-binding site in the p21 promoter as a control probe (19). Oligonucleotides were synthesized by Greiner, Inc. (Tokyo, Japan) and purified by SDS-polyacrylamide gel electrophoresis.
Analysis of the Phosphorylation State of GATA-4-In these experiments, 50 mM NaF and 1 mM Na3VO4 were added to all buffers. Nuclear extracts from primary cultures of cardiac myocytes were immunoprecipitated using anti-GATA-4 antibody (Santa Cruz Biotech, Santa Cruz, CA) in low-stringency buffer (50 mM Tris, pH 7.4, 0.15 M NaCl, 0.5% Nonidet P-40, 1 mM EDTA, 10 mg/ml aprotinin and leupeptin, and 0.5 mM phenylmethylsulfonyl fluoride; all from Sigma) for 16 h at 4°C and incubated with protein G beads for 1 h at 4°C. The precipitate was washed four times in the same buffer, resuspended in 50 μl of SDS-lysis buffer (20 mM Tris, pH 7.5, 50 mM NaCl, 0.5% SDS, 1 mM dithiothreitol), heated to 95°C for 2 min, electrophoresed on an SDS-polyacrylamide gel (10%), transferred to Immobilon membranes, and reacted with anti-phosphoserine antibody (New England Biolabs, Beverly, MA), which was subsequently detected using horseradish peroxidase-conjugated anti-mouse IgG. Signals were detected using the ECL Western blotting detection system (Amersham Pharmacia Biotech) according to the manufacturer's instructions. To normalize for protein loading after immunoprecipitation, blots were stripped by incubation in 62.5 mM Tris-HCl, pH 6.8, 100 mM β-mercaptoethanol, and 2% SDS for 30 min at 50°C, washed twice with phosphate-buffered saline and 0.05% Tween, and then probed with anti-GATA-4 antibody.
Statistical Analysis-Data are presented as the means ± S.E. Statistical comparisons were performed using unpaired two-tailed Student's t tests or analysis of variance with Scheffe's test where appropriate, with a probability value less than 0.05 taken to indicate significance.
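For readers who wish to reproduce this kind of comparison, a hedged SciPy example of an unpaired two-tailed t test and a one-way ANOVA is given below; the numbers are made-up illustration data, not values from this study, and the Scheffé post-hoc step is not part of SciPy and would require a separate implementation.

```python
# Unpaired two-tailed t test and one-way ANOVA on made-up illustration data.
from scipy import stats

control = [1.0, 1.2, 0.9, 1.1]
pe      = [2.1, 2.4, 1.9, 2.3]
pe_pd   = [1.3, 1.1, 1.4, 1.2]

t_stat, p_val = stats.ttest_ind(control, pe)          # two-tailed by default
print(p_val < 0.05)                                    # significant at the 0.05 level?

f_stat, p_anova = stats.f_oneway(control, pe, pe_pd)   # three-group comparison
print(p_anova)
```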
Phenylephrine-responsive ET-1 Transcription Requires an Intact GATA Element
The α1-adrenergic agonist PE is a potent inducer of hypertrophy and ET-1 expression in cardiac myocytes (17). To determine whether the 204-bp promoter sequence of the ET-1 gene is sufficient to mediate PE-responsive transcription in neonatal rat ventricular cells, these cells were transfected with a CAT reporter construct driven by the 204-bp rat ET-1 upstream sequence (pwtET-CAT). To control for transfection efficiency, the cells were co-transfected with a small quantity of pRSVluc. After 48 h of stimulation with 1.0 × 10⁻⁵ M PE or saline as a control, cardiomyocytes were harvested for luciferase and CAT assays. The 204-bp ET-1 promoter fragment conferred PE-inducible expression on the CAT reporter gene (Fig. 1A). In contrast, PE stimulation did not induce the activity of a promoter derived from the ubiquitously expressed β-actin gene (data not shown). PE stimulation also did not increase the background activity of a promoterless CAT (basic CAT) transfected into cardiomyocytes (Fig. 1A). To rule out the possibility that the increase of ET-1 promoter activity by PE occurs in contaminating cells other than cardiac myocytes (less than 10% in our preparation), which mainly consist of fibroblasts, NIH3T3 cells (mouse fibroblasts) were transfected with pwtET-CAT. In these cells, PE stimulation did not increase the activity of pwtET-CAT (Fig. 1B), indicating the cardiac-specific response of this promoter to PE. These findings demonstrate that the proximal 204-bp ET-1 promoter sequence contains elements that are responsive to PE stimulation in cardiac myocytes.
The 204-bp rat ET-1 promoter sequence contains a putative GATA motif, which was recently shown to mediate changes in the expression of other genes associated with hypertrophy (15,16). To clarify the role of the GATA element in PE-induced hypertrophy in the context of the ET-1 promoter, cardiac myocytes were transfected with a CAT gene driven by the 204-bp ET-1 promoter containing a mutation in the GATA site (pmutGATA-ET-CAT), which abolishes the binding of cardiac GATA factors (see Fig. 3). As shown in Fig. 1A, the basal transcriptional activity of the transfected 204-bp ET-1 promoter was modestly decreased by the mutation of the GATA element (53% decrease versus wild type). In contrast, PE-responsive transcription was severely attenuated (2.2-fold for wild type versus 1.3-fold for the GATA mutant). Thus, an intact GATA element is required for both basal and PE-responsive ET-1 transcription in cardiac myocytes.
GATA-4/5 Are Sequence-specific Activators of the ET-1 Promoter-Among the six members of the GATA transcription factor family, only GATA-4, -5, and -6 are expressed in the heart (19-22). To determine whether expression of GATA-4, -5, and/or -6 can transactivate the 204-bp ET-1 promoter via the GATA sites that mediate PE responsiveness (Fig. 1), we performed co-transfection experiments in NIH3T3 cells, which lack all GATA factors. A CAT reporter driven by the 204-bp ET-1 promoter was transiently transfected with a eukaryotic expression plasmid encoding GATA-4, -5, or -6 or β-galactosidase as a control. Transfection efficiency was monitored by co-transfecting a small quantity of pRSVluc. As shown in Fig. 2, expression of GATA-4 (25-fold), -5 (91-fold), or -6 (13-fold) resulted in significant activation of the 204-bp ET-1 promoter. Moreover, co-transfection experiments employing the ET-1 promoter containing the same GATA site mutation as in Fig. 1 illustrated that only GATA-4 and -5 transactivate the 204-bp ET-1 promoter in a GATA sequence-specific manner (Fig. 2). These data suggest that GATA-4/5 transactivate the ET-1 promoter directly via the GATA site.
GATA-4 in PE-stimulated Cardiac Myocytes Binds to the ET-1 GATA Element-To identify cardiac nuclear factors that bind the ET-1 GATA site, EMSAs were performed with nuclear extracts from PE-stimulated neonatal cardiac myocytes. Nuclear extracts were probed with a radiolabeled ET-1 GATA double-stranded oligonucleotide in the presence or absence of competitor oligonucleotides (Fig. 3A). Competition EMSAs revealed that one retarded band, indicated by an arrow (first lane), represented a GATA sequence-specific complex, as evidenced by the fact that it was competed by an unlabeled ET-1 GATA double-stranded oligonucleotide (second lane) but not by an excess of the ET-1 GATA site containing mutations that abolish PE responsiveness (third lane). This result suggests the presence of factor(s) that specifically bind to the ET-1-GATA sequence in extracts from PE-stimulated rat cardiac myocytes.
Previous studies demonstrated that in vitro translated GATA factors can bind to the ET-1-GATA site (11)(12)(13)(14). In addition, our results described above demonstrate that GATA-4/5 can transactivate the ET-1 promoter in a sequence-specific manner. Therefore, we examined by supershift experiments whether the factor(s) that specifically interact with the ET-1-GATA site in cardiac nuclear extracts are GATA-4 and/or GATA-5 (Fig. 3B). Addition of anti-GATA-4 antibody resulted in nearly complete disappearance of the original complex formed by the interaction of the ET-1 GATA site with cardiac nuclear factors (second lane). In addition, it produced a new complex that migrated much more slowly than the original one.
In contrast, addition of anti-GATA-5 antibody resulted in only slight diminishment of the original band and in the formation of a slower migrating complex which was much fainter than that formed upon the addition of anti-GATA-4 antibody (lane 3). These findings demonstrate that GATA-4 is the major cardiac nuclear factor that binds the ET-1-GATA site and that GATA-5 binds this site to a much lesser degree.
PE Stimulation Causes Serine Phosphorylation of Cardiac GATA-4-To clarify the regulatory mechanisms for GATA-4 activation during PE-stimulated hypertrophy in neonatal rat cardiac myocytes, we examined the expression of GATA-4 following PE stimulation. In neonatal rat ventricular cells incubated with PE for 5 min to 48 h, the total GATA-4 expression levels were not altered by PE (Fig. 4). We next examined the effect of PE on GATA-4 phosphorylation. The cell lysates were immunoprecipitated with anti-GATA-4 antibody, followed by Western blot analysis with anti-phosphoserine antibody. As shown in Fig. 4, PE treatment resulted in a marked increase in the level of the phosphorylated form of GATA-4. The level was maximal at 3 h after PE treatment and decreased thereafter but continued to be high at 48 h after the treatment. To correct for differences in protein loading after immunoprecipitation, the same membrane was reblotted with anti-GATA-4 antibody. As shown in Fig. 4, total GATA-4 immunoreactivity did not change following PE stimulation. Therefore, the ratio of the phosphorylated form of GATA-4 to total GATA-4 in cardiac myocytes was markedly increased by PE. Similar results were obtained with reciprocal experiments, i.e. immunoprecipitation with anti-phosphoserine antibody and Western blotting with anti-GATA-4 antibody. We could not detect a PE-stimulated increase in threonine phosphorylation of cardiac GATA-4. These findings indicate that PE stimulation causes serine phosphorylation of GATA-4, which might be involved in PE-responsive ET-1 transcription in cardiac myocytes.
Phosphorylation of GATA-4 Increases Its DNA Binding Activity-We tested the possibility that phosphorylation of GATA-4 results in functional consequences such as an increase in the DNA binding activity. COS7 cells were transfected with an expression plasmid encoding GATA-4 (pcDNAG4) and incubated in medium containing 10% serum. 48 h later, nuclear extracts were prepared by lysis of the transfected cells in lysis buffer in the presence or absence of phosphatase inhibitors (50 mM NaF and 1 mM Na3VO4). These extracts were immunoprecipitated with anti-GATA-4 antibody, and the immunoprecipitates were subjected to Western blot analysis with anti-phosphoserine antibody. As shown in the upper panel of Fig. 5A, the phosphorylation of GATA-4 was evident in the nuclear extract collected with the phosphatase inhibitors but not in that without the inhibitors. However, the amount of GATA-4 (total of phosphorylated and unphosphorylated forms) was similar in these two extracts (Fig. 5A, lower panel). We then determined using EMSA whether the DNA binding activity of GATA-4 differed in the nuclear extracts used for the experiments shown in Fig. 5A. By competition EMSAs using a radiolabeled ET-1 GATA double-stranded oligonucleotide as a probe (Fig. 5B, second through sixth lanes), one retarded band (indicated by an arrow, second lane) was found to be a GATA sequence-specific complex, as evidenced by the fact that it was competed by an ET-1 GATA oligonucleotide (third lane) but not by an oligonucleotide with GATA site mutations (fourth lane). In addition, this retarded band was clearly supershifted by anti-GATA-4 antibody (sixth lane) but not by IgG (fifth lane), indicating that this band represents a complex of the ET-1 GATA oligonucleotide and GATA-4. Notably, the intensity of this band was much stronger in the nuclear extract prepared with the phosphatase inhibitors (second lane) than in that prepared without the inhibitors (first lane). These results suggest that phosphorylation of GATA-4 increases its DNA binding activity.
ERK1/2 Activation Is Required for PE-induced Increase in Phosphorylation and DNA Binding Activity of Cardiac GATA-4-To determine the upstream factors involved in PE-induced phosphorylation of GATA-4, we examined the effect of PD098059, a MEK-1-specific inhibitor, on GATA-4 phosphorylation, as well as on the activation of ERK1/2, the targets of MEK-1. Neonatal rat ventricular myocytes were preincubated with or without 20 μM PD098059 for 1 h; then PE was added, and the myocytes were further incubated at 37°C. Activation of ERK1/2 was estimated by Western blot analysis using an antibody that specifically recognizes the phosphorylated, active form of these enzymes. ERK1/2 were markedly activated after 15 min of PE stimulation (Fig. 5A, second lane) compared with saline stimulation (Fig. 5A, first lane). 20 μM PD098059 completely blocked this activation (Fig. 5A, third lane). GATA-4 phosphorylation was examined in cardiac myocytes collected 3 h after PE stimulation, when the phosphorylation was maximal. Lysates of these cells were subjected to immunoprecipitation with anti-GATA-4 antibody followed by Western blotting with anti-phosphoserine antibody as above. As shown in Fig. 5B, 20 μM PD098059 almost completely inhibited PE-induced GATA-4 phosphorylation. However, GATA-4 phosphorylation was not blocked by a phosphatidylinositol 3-kinase inhibitor (wortmannin) or a p38 mitogen-activated protein kinase inhibitor (SB 203580) (data not shown). These results suggest that ERK activation is required for PE-induced phosphorylation of cardiac GATA-4.
Last, to determine whether the ERK pathway is also involved in the DNA binding activity of cardiac GATA-4, EMSAs were performed with nuclear extracts from neonatal cardiac myocytes stimulated with saline or PE in the presence or absence of PD098059. These extracts were probed with a radiolabeled double-stranded oligonucleotide containing the ET-1 GATA site or one containing the p53-binding site in the p21 promoter as a control. As shown in Fig. 6 (top two panels), the intensity of the specific band indicating GATA-4 binding was increased in nuclear extracts from PE-stimulated myocytes (second lane) compared with those from saline-stimulated cells (first lane). The PE-stimulated increase in the DNA binding activity of cardiac GATA-4 was almost completely blocked by PD098059 (third lane). In contrast, p53 binding activities were not altered by PE or by PE plus PD098059 (Fig. 6, bottom two panels, first through third lanes). These experiments were repeated three times using independent preparations of cells and found to be reproducible. Taken together with the above results, these findings indicate that ERK activation and subsequent GATA-4 phosphorylation might be involved in the increased DNA binding activity of cardiac GATA-4 (Fig. 7). FIG. 4. Phosphorylation of cardiac GATA-4 following PE stimulation. Nuclear extracts (100 μg of protein) from cardiac myocytes stimulated with PE for various periods were immunoprecipitated with anti-GATA-4 antibody. Immunoprecipitates were separated by SDS-polyacrylamide gel electrophoresis, transferred to Immobilon membranes, and sequentially probed with anti-phosphoserine antibody and with anti-GATA-4 antibody.
DISCUSSION
Although ET-1 was initially identified as an endothelial cell-derived vasoconstrictor, it is now recognized as a growth-promoting peptide produced by a variety of cell types. Expression of ET-1 in cardiac myocytes is markedly increased in failing hearts (9,10). In addition, the administration of ET receptor antagonists prevents remodeling of the heart following myocardial infarction and pressure overload independent of hemodynamic effects (9,10). These findings suggest that the local synthesis of ET-1 is involved in the development of heart failure in vivo. The α1-adrenergic agonist PE is a potent inducer of hypertrophy and ET-1 expression in cardiac myocytes (17). The present results demonstrate that PE-inducible expression of cardiac ET-1 is mediated, at least in part, at the level of transcription and that phosphorylation of GATA-4 plays a role in this process.
GATA factors are important for cardiac-specific transcription of many genes, including those for α-MHC, B-type natriuretic peptide, myosin light chain 1/3, and cardiac troponin C (23)(24)(25)(26). Likewise, we showed here that mutation of the GATA element in the 204-bp ET-1 promoter moderately decreased the basal transcriptional activity in cardiac myocytes. In addition, recent studies, including ours, demonstrated that GATA transcription factors are also required for transcriptional activation of the genes for β-myosin heavy chain and angiotensin II type 1a receptor during hemodynamic overload-induced cardiac hypertrophy in vivo (15,16). The results of the present study provide further evidence that GATA factors are involved in the hypertrophic process, because mutating the GATA element abolished α1-adrenergic-responsive ET-1 transcription. Thus, in the context of the ET-1 promoter, the GATA element plays an important role in both basal and PE-responsive transcription in cardiac myocytes.
To date, six related zinc finger-containing proteins have been described that recognize and bind the GATA motif (19-22). The proteins fall into two subgroups: one consisting of GATA-1, -2, and -3 and one consisting of GATA-4, -5, and -6. The subgroups are defined both by sequence homology and expression pattern, with GATA-1, -2, and -3 predominating in blood and ectodermal derivatives and GATA-4, -5, and -6 predominating in heart and endodermal derivatives. Interestingly, the genes encoding GATA-4 and -6 are expressed in the heart throughout embryonic and postnatal development, whereas the murine GATA-5 gene is normally expressed in a temporally and spatially restricted pattern within the embryonic heart (20,21). The present study demonstrated that both GATA-4 and -5 activated the 204-bp ET-1 promoter in a sequence-specific manner. However, GATA-4 is the major cardiac nuclear factor that binds to the ET-1 GATA element, and GATA-5 binds to a much lesser degree. Despite this fact, our data do not rule out a potential contribution of GATA-5 to PE-responsive ET-1 transcription, because the expression of GATA-5 in cardiac myocytes increases during hypertrophy (19). The relative contributions of GATA-4 and GATA-5 to PE-responsive transcription should be further investigated. FIG. 5. Phosphorylation of GATA-4 increases its DNA binding activity. COS7 cells were transfected with pcDNAG4. Nuclear extracts were prepared from these cells in lysis buffer in the presence or absence of phosphatase inhibitors (50 mM NaF and 1 mM Na3VO4) as indicated. A, GATA-4 phosphorylation was examined as described in the legend for Fig. 4. B, nuclear extracts were probed with a radiolabeled double-stranded oligonucleotide containing the ET-1 GATA site.
A large number of transcription factors have been shown to exist within cells as phosphoproteins. The functional consequences of phosphorylation vary but include regulation of DNA binding and transcriptional activation. It has been shown that other members of the GATA family of transcription factors, namely GATA-1 and -2, exist as phosphoproteins in erythroid or hematopoietic progenitor cells (27,28). These proteins are phosphorylated exclusively on serine residues. Notably, stimulation with growth factors results in their enhanced phosphorylation. Systematic mutations of serine residues in GATA-1 do not appear to alter its transactivation function as judged by reporter gene assays conducted in COS cells, nor do they alter the DNA binding ability of GATA-1 proteins expressed in COS cells. Our data showing that stimulation by PE caused serine phosphorylation of GATA-4 in cardiac myocytes are compatible with these previous findings. In contrast to the previous reports, we observed that PE stimulation increased the DNA binding activity of GATA-4 in cardiac myocytes. The reason for this discrepancy is unclear at present, and further studies on the precise mapping of phosphorylation sites are needed to clarify it.
Many hypertrophic stimuli, including α1-adrenergic stimulation, have been shown to activate the Ras-mitogen-activated protein kinase pathway, also known as the ERK pathway (29,30). ERK1/2 are one element in a series of kinases that serve to connect the nucleus with cytosolic events. The idea that phosphorylation of transcription factors by ERK1/2 might provide the cytoplasmic link between hypertrophic stimuli and changes in gene expression in the nucleus is supported by the observation that activated ERK1/2 can enter the nucleus. The present study demonstrated that inhibition of the ERK cascade with PD098059 blocked phosphorylation of GATA-4. These findings demonstrate that ERK1/2 play a role upstream of GATA-4 in α1-adrenergic signaling of cardiac myocytes. The results also raise a secondary, although interesting, question of whether ERK1/2 bring about GATA-4 phosphorylation in a direct or an indirect fashion. In this regard, it is noteworthy that the GATA-4 protein has multiple ERK1/2 phosphorylation sites (31). Systematic mutagenesis of these positions is currently underway in our laboratory. However, ERK1/2 are activated 10-15 min after PE stimulation, whereas GATA-4 phosphorylation is maximal at 3 h after the stimulation. The precise relationship between ERK1/2 and GATA-4 should be further investigated.
Cardiac myocyte hypertrophy is a central feature of all types of heart muscle failure. Hypertrophic stimuli reach the nucleus via multiple signaling pathways within cardiac myocytes and elicit changes in cardiac gene expression. ET-1 is one of the local factors that play important roles in the development of heart failure. Our present findings add ET-1 to the increasing list of factors whose transcriptional activation during cardiac hypertrophy is mediated by GATA factors. In addition, the present study provides the first evidence that post-translational modification of GATA-4 is involved in this process. Further elucidation of the precise mechanisms by which this central pathway modulates the hypertrophic response may provide novel therapeutic approaches to human heart failure. | 5,763.8 | 2000-05-05T00:00:00.000 | [
"Medicine",
"Biology"
] |
Reducing HIV/AIDS in young people in Sub-Sahara Africa: gaps in research and the role of theory
This paper discusses the role of education in preventing HIV in children and young people in sub-Sahara Africa and presents the results of policy advisory research conducted on behalf of the Belgian Development Cooperation. The research consisted of a literature review and a field study in Rwanda. Relative to the high number of HIV prevention activities in sub-Sahara Africa, there are few scientific data on HIV risk reduction interventions for young people in this region. Longitudinal studies are especially scarce. Preliminary results show that many interventions have only a marginal impact on reducing sexual risk behaviour. Factors influencing programme effectiveness include the consistency and accuracy of messages and information, the provision of life-skills, social support and access to contraceptives, the intensity and duration of the programme, the training of the facilitators and the age of the target population. The HIV/AIDS pandemic has a potentially devastating impact on the education sector. Because few countries have monitoring systems in place that quantify the absenteeism, morbidity and mortality of teachers and students infected with or affected by HIV/AIDS, there is only anecdotal evidence available for illustrating this impact. The final section discusses the current gaps in research and the important role of theory in increasing the impact and improving the evaluations of HIV/AIDS education interventions.
Introduction
About 1.2 billion children and young people are about to enter or have just entered their reproductive age. The large majority (85%) of these young people live in developing countries (United Nations Population Fund, 2003). Those children and young people are disproportionately affected by the HIV/AIDS epidemic. UNFPA (United Nations Population Fund, 2008) and UNAIDS (UNAIDS, 2007) estimate that more than half of new HIV infections occur in young people between the ages of 15 and 24 years and that one fourth to one third of those living with HIV/AIDS are under the age of 25 years. This means that more than ten million young people are currently living with HIV/AIDS. Worldwide, adolescence is a period of curiosity and exploration for young people. The combination of this experimental period with other socio-cultural factors and poverty makes youths particularly vulnerable to HIV infection. In many African countries young people are denied access to information, education, and services. Since the existence of premarital sex is often denied, family planning services, contraception and condom use are only acceptable and provided for married couples. While the acceptability of talking about HIV/AIDS in schools increases, sex education is still forbidden in many countries because of the belief that it incites young people to have sexual intercourse. Focus group discussions in Rwanda with 84 secondary school pupils revealed that young people are often stigmatized when asking for sexual and reproductive health advice in health centers. Extreme poverty also makes youngsters vulnerable to HIV infection. Poverty may compel them to engage in survival sex (sex in exchange for food and other basic needs) or transactional sex (sex in exchange for other goods, like clothes or a mobile phone). The phenomenon of sugar daddies/mommies who have sex with young girls/boys in exchange for financial and material gain is widespread in some regions.
For over two decades now, many organizations have been funding or implementing programmes that promote safer sexual behaviour. Despite all such efforts, the HIV/AIDS pandemic keeps expanding and in most countries there is little prospect of a reduction in the near future. This paper touches upon some of the possible reasons for this apparent lack of effectiveness and highlights gaps in research on HIV prevention through education for young people in sub-Sahara Africa.
Methods
This article is the result of policy advisory research conducted for the Belgian Development Cooperation on "Combating HIV/AIDS in children and young people through education and training". Our main objective was to contribute to making the Belgian policy note "The Belgian contribution to the fight against HIV/AIDS worldwide" operational in the field of HIV/AIDS and education. The policy advisory research consisted of a literature study and field study.
Specific objectives of the literature study were 1) to provide background information on the subject of combating HIV/AIDS in children and youths through education and training in sub-Sahara Africa; 2) to clarify key concepts in the field of HIV/AIDS and education; 3) to study the impact of HIV/AIDS on the education sector; and 4) to study the role that education and training can play in combating HIV/AIDS.
The objective of the field study was to assess the processes of development and implementation of Rwandan policy on HIV prevention through education, with a specific focus on Belgian actors in these processes. The field study was a qualitative study which consisted of 28 in-depth interviews with government officials who work in the field of HIV and education, as well as with representatives of international organizations, civil society organizations and the bilateral cooperation. (The focus group discussions mentioned above were conducted by Kristien Michielsen during field research in Rwanda in April 2007.)
The field study was complemented with visits to five secondary schools in the former province of Gitarama. All five schools participated in the peer education programme of the Rwandan Red Cross. The aim of this part of the field study was (a) to get more insight into activities taking place in the schools; (b) to assess the perceptions of students, teachers and school authorities concerning HIV and sex education; and (c) to detect the main barriers in HIV education. In each school in-depth interviews were conducted with representatives of the school authorities (5) and with teachers (8). Pupils (84) participated in focus group discussions. The field study took place in April 2007 over a three-week period.
The results of the policy advisory research are complemented with those of an additional literature search that identified evaluations of HIV prevention interventions for young people in sub-Sahara Africa. Articles were sought in the online databases PubMed, ISI Web of Science, ELIN and Ebscohost (selected databases: Academic Search Elite, ERIC, MLA International Bibliography, MEDLINE, Biomedical Reference Collection, Comprehensive Nursing & Allied Health Collection, Comprehensive Psychology and Behavioral Sciences Collection, SocIndex and Health Business Fulltext Elite). The search terms used were (effectiveness OR evaluation OR impact OR result) AND (HIV prevention OR AIDS education) AND behaviour AND (adolescent OR youth OR student) AND Africa. The search was complemented with a search on Google Scholar, using the same search terms, and with searches on websites of international organizations renowned for HIV prevention activities, interventions and research (UNAIDS, UNESCO, Population Services International). The bibliographic information of the selected articles was examined for other relevant publications. Reference lists of recent reviews were consulted to identify additional studies.
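As a simple illustration (not part of the original study), the following Python sketch assembles the boolean query described above into a single reusable string; the grouping of the terms into blocks is an assumption based on the wording of this paragraph.

```python
# Hypothetical helper for composing the boolean search string quoted above,
# so the same query can be pasted into each database interface.
def or_block(terms):
    """Join alternative terms with OR and wrap them in parentheses."""
    return "(" + " OR ".join(terms) + ")"

query = " AND ".join([
    or_block(["effectiveness", "evaluation", "impact", "result"]),
    or_block(["HIV prevention", "AIDS education"]),
    "behaviour",
    or_block(["adolescent", "youth", "student"]),
    "Africa",
])

print(query)
# (effectiveness OR evaluation OR impact OR result) AND (HIV prevention OR
# AIDS education) AND behaviour AND (adolescent OR youth OR student) AND Africa
```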
The impact of HIV/AIDS on education systems
When discussing the impact of HIV/AIDS on education systems, many authors (Carr-Hill & Peart, 2003; Coombe, 2004; Fall, 2002; Gachuhi, 1999; Jukes & Desai, 2005; Kelly, 2000a) distinguish between the impact on the demand side and the impact on the supply side. As for the demand side, reduced life expectancy, increased condom use and other social factors result in a reduction of the average number of children per woman (Dorling, Shaw & Smith, 2006; Vandemoortele & Delamonica, 2000). In five countries that currently have adult HIV prevalence rates of over 20 per cent (South Africa, Zimbabwe, Botswana and Swaziland), the under-five mortality rate not only failed to decline between 1990 and 2003, it actually increased during that period (United Nations Department of Economic and Social Affairs, 2005). As a result, there are fewer children to educate than would be the case in a world without HIV/AIDS.
HIV/AIDS puts an important constraint on household economics. Sick parents may no longer be able to work, reducing household resources. Furthermore, in highly affected regions, a large part of the household budget goes to medication and funeral costs and less is left for sending children to school. A study by the International Labour Organization (Cohen, 2002) shows that the epidemic is eroding the saving capacity of households through its direct effects on flows of income and levels of expenditure. School fees and school requisites can become too expensive and children have to leave school in order to work or to take care of family members. Those who stay in school might need substantial psychological support or flexible learning opportunities. Traumas can affect students' learning ability. Even the environment of morbidity in general, and the visibility of the effects of the disease in the surroundings of the child, can create considerable stress (Vandemoortele & Delamonica, 2000). As a consequence, the education system is now faced with new groups of children and youths whose lives have been marked in some way by HIV/AIDS: orphans, children who are the head of household, street children and youths, children and youths who care for sick parents or relatives and children and youths infected with HIV. These positions and experiences demand different approaches within education.
As for the supply side, HIV/AIDS can affect the quality of education services. Even though the authors in our literature review disagree as to whether teachers are more or, on the contrary, less vulnerable to HIV infection than other professional groups, it is clear that they are not immune. Especially in highly affected countries, the impact of teachers' illness and mortality is felt. Due to AIDS-related illness, educators become increasingly unproductive and one has to rely increasingly on young teachers with less experience (Coombe, 2004). HIV/AIDS puts a strain on governments' spending choices. While teacher mortality and absenteeism are costly and education systems need to take up new roles, national budgets for education are often reduced.
Due account must be taken of the fact that articles and reports stating this impact are mostly illustrative. The impact of HIV/AIDS on the education system has not yet been thoroughly calculated, quantified or determined, simply because there is little information available at the country or regional level. Few countries possess monitoring systems that quantify the absenteeism and mortality of teachers infected with or affected by HIV/AIDS. Monitoring absenteeism and drop-out rates of students is even more difficult (Coombe, 2004; Kelly, 2000b). The following results pertain to HIV prevention education in a broader sense, including HIV risk reduction interventions for young people outside schools.
The impact of HIV risk reduction interventions
From the literature review we gather that the impact of HIV risk reduction interventions reported in evaluations is diverse. The most significant changes take place in the field of HIV/AIDS-related knowledge: after HIV risk reduction interventions, knowledge on the routes of transmission of HIV and the ways of protection against HIV infection almost always increases. The second most noted change is an increase in reported positive attitudes towards people living with HIV/AIDS and towards safe sexual behaviours. Interventions that seek to increase communication about HIV/AIDS also mostly succeed. As for self-reported sexual behaviour, interventions produce notably varied results. Some articles report no changes at all, some report small but statistically insignificant changes (e.g. Kinsman, Nakiyingi, Kamali, Carpenter, Quigley, Pool et al., 2001), while others report significant changes on one or more behavioural outcomes in certain subgroups. For example, the intervention evaluated by Palekar et al. (2007) showed a slight increase in condom use in black youths who knew someone who had died of AIDS. Maticka-Tyndale et al. (2007) show an impact of programme exposure on condom self-efficacy among girls who were virgins before the start of the intervention, but not in girls who had already had sexual intercourse before the intervention. It also happens that sexual risk behaviour actually increases after the intervention (Visser, 2005). Several meta-analyses that report on studies in sub-Sahara Africa and beyond identify common characteristics of successful programmes (Gallant & Maticka-Tyndale, 2004; Johnson, Carey, March, Levin & Scott-Sheldon, 2003; Robin, Dittus, Whitaker, Crosby, Ethier, Mezoff et al., 2004; Speizer, Magnani & Colvin, 2003). Successful programmes provide youths with consistent, accurate messages and information, with the life-skills needed to protect their health and well-being, and with social support and access to contraceptives. They tend to be long and intense programmes: when the content of an intervention is not changed, a reduction in the number of sessions may reduce the efficacy of an intervention even though the overall duration is the same. Teachers must be properly trained for and committed to teaching HIV/AIDS programmes in order for the programme to be successful. The content of the programme and thorough training of facilitators may be even more important than their demographic characteristics in producing effective interventions. Finally, programmes targeting younger, primary school children have had greater success in influencing sexual behaviour.
On the other hand, factors that can have a negative impact on the effectiveness of HIV risk reduction interventions include certain socio-cultural norms, values and taboos, the lack of participation of stakeholders, the lack of good educational materials and means, the lack of teacher training and insufficient monitoring of the programmes. Programmes that are not successful in changing sexual risk behaviours often treat HIV/AIDS as an isolated problem. A good example of hindering factors is shown in a study by Boler & Jellema (2005) on HIV/AIDS curricula. Almost all countries in sub-Sahara Africa have introduced HIV prevention in the national curriculum. However, the implementation of this curriculum is weak due to insufficient teacher training, lack of participation in the development of the curriculum, lack of educational materials, taboos and the fact that it is often not an examinable and compulsory part of the curriculum.
3.3. General observations concerning evaluations of HIV risk reduction interventions
This section discusses general findings resulting from the review of the literature, which put the rather limited impact of HIV risk reduction interventions in a broader perspective.
Limited number of scientifically published evaluations
Notwithstanding the high number of HIV risk reduction interventions taking place in sub-Sahara Africa, there is little published research data on the role that prevention education plays in combating HIV/AIDS. For the whole of sub-Sahara Africa, only 39 articles could be identified that report on the outcome evaluation of HIV risk reduction interventions for young people, are published in scientific journals and use rigorous research designs. In-depth interviews in Rwanda revealed that most governmental and non-governmental actors collect information on the impact of their interventions. However, these actors often do not reach the stage of analyzing the data, due to a lack of qualified personnel or time and because it is not their core business. Furthermore, dependence on external funding makes it difficult to make unconvincing results public.
Relying on self-reported behaviour
Evaluation studies of HIV risk reduction interventions mostly rely on self-reported data on sexual behaviour, confronting researchers with several challenges. Firstly, several studies have shown that people tend to give socially desirable answers (e.g. McAuliffe et al.; Lau et al., 2003). This means under-reporting unsafe sexual behaviour and over-reporting abstinence and condom use. Secondly, standardized questionnaires aim to assess the sexual behaviour of respondents in the months and year preceding the research. Recalling specific behaviours over longer periods of time can be difficult and problematic. This is an obstacle in the evaluation of HIV risk reduction interventions. Furthermore, researchers often use general concepts such as sexual intercourse, homosexuality and commercial sex. For cultural or idiosyncratic reasons, individual people may interpret or understand these concepts in very different ways. For example: men who have sex with men do not always identify themselves as homosexual; many people do not consider oral sex as sexual intercourse and might as a consequence under-report their hazardous behaviours. For some people, sex with sugar daddies/mommies in exchange for clothes or food is interpreted as commercial sex, while for others it is not.
Several studies show that the evaluation method has an impact on the sexual behaviour reported. Mensch et al. (2008) compared audio computer-assisted self-interviews with conventional face-to-face interviews. They provide clear evidence that the mode of interviewing affects respondents' reporting of their sexual activity, and not always in accordance with expectations. Another study, by Meekers and Van Rossem (2005), shows that condom sales data are a very poor indicator of the level of condom use. Estimates of both the number of sexual acts and the number of condoms used also vary enormously depending on the estimation method used. Until now, only one study (Ross D.A., Changalucha J., Obasi A.I., Todd J., Plummer M.L., Cleophas-Mazige B. et al., 2007) includes biological markers as an outcome measure of an HIV risk reduction intervention for young people in sub-Sahara Africa. This study showed that the respondents in the intervention group significantly changed their self-reported behaviour compared to the pre-intervention situation and compared to the control group: fewer men reported more than one sexual partner in the past year, a significantly higher proportion of men reported having initiated condom use, more young men reported condom use at last sex and fewer reported genital pus or abnormal genital discharge. Nevertheless, no significant differences occurred in HIV prevalence, STI prevalence and pregnancies.
Difficulties in comparing evaluations
The variety in objectives and desired outcomes of HIV risk reduction interventions makes it difficult to compare different studies. Some articles, for instance, only report on knowledge (Rusakaniko S., Mbizvo M.T., Kasule J., Gupta V., Kinoti S.N., Mpanju-Shumbushu W. et al., 1997), while others report on knowledge and attitudes (Dalrymple L. & du Toit M.K., 1993). Many studies include one or more aspects of sexual behaviour. Other outcomes include interpersonal communication about HIV and beliefs about HIV prevention (Geary C.W., Burke H.M., Castelnau L., Neupane S., Sall Y.B., Wong E. et al., 2007), and treatment-seeking behaviour among youths who experience symptoms of sexually transmitted diseases (Okonofua F.E., Coplan P., Collins S., Oronsaye F., Ogunsakin D., Ogonor J.T. et al., 2003). The variety in outcomes and in measured indicators makes it difficult to compare studies and draw general conclusions.
Even when outcomes are similar, comparison of evaluations is difficult due to the use of different indicators. For example, 'condom use' is measured through indicators such as 'condom use at last sex' (Stanton B., Li X., Kahihuata J., Fitzgerald A.M., Neumbo S., Kanduuombe G. et al., 1998), 'condom use in the past six months' (Okonofua F.E. et al., 2003) or 'condom use with different types of sex partners' (Kajubi P., Kamya M.R., Kamya S., Chan S., McFarland W. & Hearst N., 2005), making it difficult to compare between studies. Since no standardized methods of measuring exist, researchers tend to develop their own. It is equally important to be aware of outcomes that are not being measured. For example, 'correct condom use' is rarely measured. Nevertheless, consistent condom use is pointless if the condoms are not used correctly.
Lack of longitudinal studies
Little is known about the long-term impact of HIV/AIDS education. Many studies limit themselves to post-measures taken directly after the intervention (Fitzgerald, Stanton, Terreri, Shipena, Li, Kahihuata et al., 1999; Stanton B. et al., 1998) or one to six months after it (Karnell A.P., Cupp P.K., Zimmerman R.S., Feist-Price S. & Bennie T., 2006; Magnani, Macintyre, Karim, Brown & Hutchinson, 2005). More exceptionally, studies report on the impact a year after the intervention (Klepp, Ndeki, Seha, Hannan, Lyimo, Msuya et al., 1994), and only two studies could be identified that measure the impact up to 18 months after the intervention (Brieger W
3.4. Alignment and harmonization
In Rwanda the government has identified over 2,000 partners in the fight against HIV/AIDS. The monitoring and evaluation of all their activities is in the first place a responsibility of the intervening organization. However, the large presence of organizations also puts a great burden on the government structures responsible for coordination. During the field study, we visited schools where three HIV prevention programmes were active at the same time. One of the peer educators interviewed was trained by three different organizations. Since not all organizations send out the same messages concerning HIV risk reduction behaviour, young people get contradictory messages. This does not benefit the effectiveness of HIV risk reduction interventions.
3.5. HIV/AIDS and social and cultural heritage
HIV/AIDS prevention efforts are inextricably linked to issues of social and cultural heritage. Social aspects and cultural norms and values have an ambiguous relationship with HIV/AIDS. A number of cultural practices such as widow inheritance, widow cleansing, wife sharing and polygamy are recognized as being directly responsible for the spread of HIV/AIDS. Female genital mutilation can also have adverse effects: it may cause women to bleed during sexual intercourse, increasing the chances of HIV transmission. In many countries social norms dictate that females are inferior to males, especially in sexual relationships. Male youths often learn that it is a sign of manhood to be able to control relationships, while females learn to believe that males are the masters of sexual relationships (IRIN News, 2003).
In highly affected regions, the AIDS pandemic has a disastrous impact on culture. Khanakwa (2003) gives the example of a highly affected region in Uganda where the AIDS epidemic has caused a decline in the importance of rites of passage and of funeral rituals meant to chase away the spirits of the dead and to pass on clan history and social norms. With the AIDS scourge, most people are reluctant to send their families to such functions, which involve staying overnight and are associated with loose sexual behaviour. The Banyankole practiced blood brotherhood to extend social ties beyond biological relations: incisions were made on the stomachs of two friends, who would then rub their blood on coffee berries and eat each other's beans. In present times, it is very unlikely that this ritual is performed.
On the other hand, traditional and cultural practices have proven effective in reducing HIV transmission and can be deployed as a prevention strategy. Several aspects of traditional culture offer opportunities for adapting HIV prevention interventions. Muyinda et al. (2003) trained Sengas (the fathers' sisters who give young girls advice on marriage) in a rural village in Uganda, with positive results, and male circumcision is being deployed as a successful HIV prevention strategy.
Discussion
It cannot be denied that the effectiveness of HIV/AIDS education and training depends to a considerable extent on the quality of the development, implementation and monitoring of the interventions and on the participation of the target population in all these steps. On top of that, our research findings indicate that evaluation methods need to be rethought. Direct observation of sexual behaviour is obviously unethical, and using biological markers is an expensive and time-consuming method. Therefore researchers must rethink evaluation and find other ways to get round the problem of dependence on self-reported behaviour. Since it aims to explain and sometimes even predict sexual behaviour, behavioural theory could potentially play a role in closing the observational gap. As a consequence, theoretical determinants can serve as a control mechanism for self-reported sexual behaviour.
A great number of existing health behaviour theories have been adapted to sexual behaviour, and even new theories have been developed that try to understand and predict sexual behaviour in the era of AIDS. Most theories describe determinants that influence an individual's sexual behaviour. For example, the Health Belief Model developed by Rosenstock (1974) is a psychological model that attempts to explain and predict health behaviours by focusing on the attitudes and beliefs of individuals. The Theory of Planned Behaviour of Ajzen and Fishbein (1980) argues that behavioural, normative and control beliefs influence individual attitudes, subjective norms and perceived behavioural control. These factors shape a person's intention to perform a certain behaviour. According to the Information-Motivation-Behavioural Skills model of Fisher and Fisher (1993), the fundamental determinants of STD/HIV preventive behaviour are information on preventive behaviour, motivation to practice prevention, and behavioural skills for effectively practising prevention. Other theories describe the process of behaviour change, like the Stages of Change theory of Prochaska, DiClemente and Norcross (1992) and the AIDS Risk Reduction Theory of Catania, Kegeles and Coates (1990). The Social Cognitive Theory of Bandura (1997) identifies three main factors that affect the likelihood that a person will change a health behaviour: self-efficacy, goals and outcome expectancies. The combination of these theories provides us with a good understanding of how individuals make decisions concerning sexual behaviour. These theories describe sexual behaviour as a logical consequence of several interacting determinants.
There is a growing awareness that the social and cultural environment, as well as the direct environment of the young people, should be taken into account and should actually be integrated in HIV risk reduction interventions (e.g. by involving religious leaders, community leaders, parents, school teachers and peers). Certain models, like the Ecological Approach of Bronfenbrenner (1979), explicitly draw on the broad social, economic and cultural contexts. Important challenges remain. First of all, our literature review revealed that the theoretical basis for the role and impact of the social and cultural environment on sexual behaviour has not yet been sufficiently developed to support behavioural interventions.
Secondly, the link between theory and practice is hard to make. Despite the wide range of theories available to programme managers, the use of theory in the development of interventions is not self-evident. Programme managers still rely mostly on individual psychological theories developed predominantly in the West. In our literature survey on HIV risk reduction interventions for young people in sub-Sahara Africa, less than two out of three articles (59 per cent) identify one or more theories that form the basis for their programmes. In total, 12 theories are mentioned 28 times in 23 articles. Five articles mention more than one theory. Social Learning/Cognitive Theory forms the basis for more than a quarter (27 per cent) of the interventions, or 36 per cent of the theories mentioned. Other theories that are mentioned more than once are the Health Belief Model (19 per cent of the theories mentioned) and the Theory of Reasoned Action (11 per cent). We found very little information on why and how the chosen theory is used in the intervention or evaluation. These observations strongly relate to other meta-analyses that report on the use of theory in HIV risk reduction interventions (Pedlow C.T. & Carey M.T., 2003; Jemmott J.B. & Jemmott L.S., 2000; Kirby, Laris & Rolleri, 2007), which confirm the dominance of the Social Cognitive Theory, the Theory of Reasoned Action and the Health Belief Model. They also confirm that it is rarely explained how the theory is used.
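As a quick arithmetic check (illustrative only, using just the figures stated in this section and the 39 articles identified earlier), the proportions quoted above can be reproduced as follows.

```python
# Hedged sketch: verifies that 23 theory-citing articles out of the 39
# identified evaluations corresponds to the "59 per cent" quoted above.
articles_total = 39        # evaluations identified in the literature search
articles_with_theory = 23  # articles citing at least one theory
theory_mentions = 28       # total mentions of the 12 theories

print(f"{articles_with_theory / articles_total:.0%}")  # 59% -> fewer than two in three
print(f"{theory_mentions} mentions spread over {articles_with_theory} articles")
```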
Thirdly, HIV/AIDS caused a certain panic: it seemed more important to act quickly than to first gain a complete understanding of sexual behaviour in the region, culture or age category in which one was working. Allen (2006) argues that "much of what has been claimed [about sexual activity] is based on little more than speculation, and is sometimes affected by very misleading assumptions about a homogeneous African sexuality".
Fourthly, and following Delor and Hubert (2000), a final question should be raised: is there not another step between the individual and the environmental aspects of sexual behaviour (the 'relational-situational level') that behavioural theory has not yet incorporated? Sexual behaviour is a social behaviour that takes place between two people and in a certain situation. The same person with the same knowledge, attitudes, perceptions, intentions etc. might perform a completely different behaviour depending on which person he or she is with, on the circumstances in which the sexual intercourse takes place and on the phase of his or her personal development. Therefore we should ask ourselves whether it is perhaps more important to focus HIV risk reduction interventions on situations that make young people more prone to engage in risky sexual behaviour and on the factors that can draw a person into such a situation. If we want to obtain reliable self-reports of sexual behaviour, all levels should be included as outcome measures.
Conclusion
This paper provides an overview of several aspects of HIV/AIDS prevention and education. Based on a literature review and a field study, we can say that in the field of HIV/AIDS and education there is a need for more research in several domains. The long-term impact of HIV risk reduction interventions is rarely measured. Quantitative and qualitative research is needed to help us understand the (lack of) impact on sexual behaviour of many interventions and the impact of HIV/AIDS on education systems. The development of common indicators to measure the impact of interventions would greatly improve comparability. Finally, we saw that theories on individual behaviour are especially well documented and used in practice. Some theories draw on the broader environment, but they are not sufficiently developed, nor do they find their way into the field. As for the relational and situational aspect of sexual behaviour, a major gap exists both in theory and in practice. Complementing and reconciling individual behavioural theories with aspects of the social and cultural environment and with the relational-situational level is essential to truly understanding sexual behaviour. It is only after we understand how sexual identity and behaviour are structured that it is possible to develop mechanisms for promoting safe and responsible sexual behaviour.
"Economics"
] |
Analysis of the Risk Management Process on the Development of the Public Sector Information Technology Master Plan
The Information and Communication Technology Master Plan (ICTMP) is an important tool for the achievement of the strategic business objectives of public and private organizations. In the public sector, these objectives are closely related to the provision of benefits to society. Information and Communication Technology (ICT) actions are present in all organizational processes and involve sizeable budgets. The risks inherent in the planning of ICT actions need to be considered for ICT to add value to the business and to maximize the return on investment to the population. In this context, this work intends to examine the use of risk management processes in the development of ICTMPs in the Brazilian public sector.
Introduction
As Information and Communication Technology has become more sophisticated, a willingness to share information among organizations and stakeholders may become a major factor for those actively seeking information and resources to create value-added products [1].
Information and Communication Technology (ICT) plays an increasingly important role for public and private organizations in achieving their goals and fulfilling their institutional missions. ICT actions need to be aligned with the organization's strategy in order to enhance the value added to the business while minimizing risk. The value that ICT adds to the business and the minimization of ICT-related risks are considered the main objectives of ICT governance [2], which, in turn, is an integral part of corporate governance [3].
ICT risk management is a subject that needs to be well communicated and understood by management. Since the adoption of ICT in business is continually growing, all of the risks inherent in using ICT must be managed well in order to support the decision-making process. ICT is unique in its nature of development and complexity; thus, a strong foundation of management skills is needed [2].
The management of ICT risks should form an integral part of the risk management strategy and policies of the Federal Public Administration (FPA) in Brazil. Risk management involves the identification of risks concerning existing applications and ICT infrastructures, and their continuous management, including an annual or periodic review and update of the risks by management and the monitoring of mitigation strategies.
In recent years, the Federal Court of Accounts-FCA-has been conducting several studies in order to obtain information on the state of ICT governance and ICT risk management in the FPA, so it can act as an inducer of the ICT governance improvement process.
The first ICT governance survey in the FPA was held in 2010 and saw the participation of 255 institutions. The second survey was held in 2015 and evaluated 301 institutions. The latest study was conducted in 2017, included 350 institutions, and aimed to monitor and maintain an updated database with the ICT governance situation in FPA, deepening the panorama outlined in 2015.
According to the latest survey, with regard to institutional and ICT planning, an improvement of these instruments and a trend of continuing evolution of ICT governance were found. However, the results obtained still cause concern, given the number of institutions that have not yet given due importance to the strategic planning process, which tends to compromise their performance. Organizations need to promote a culture of strategically planning their actions rather than just reacting to the demands and changes taking place.
In relation to ICT planning, more specifically to the ICT Master Plan (ICTMP), it was found that almost half of the organizations did not approve or publish the ICTMP internally or externally. Risk management is an important control tool to be considered during the preparation of the planning instruments of public institutions, especially the ICTMP, the main ICT planning tool at the tactical level and the object of study of this work.
Institutions must anticipate and prevent risks to the organization's set of processes that may prevent or hinder the achievement of its objectives [4]. Among the potential effects of not applying risk management are the inefficient use of resources, ignorance of the risks to which the institution's critical processes are exposed, and the absence of solid criteria for planning and prioritizing information security actions [5].
Against this background, this paper aims to analyse the risk management process in the preparation of ICTMPs in the public sector, in order to obtain information about the use of risk management mechanisms in the development of the ICTMP at FPA agencies and entities. Therefore, we have defined the following research question: is risk management important for ICTMP development? To achieve this goal, a survey was conducted and divided into two stages. Because this is a relatively unexplored matter, the first stage aimed to identify and understand the context of the use of risk management in the ICTMPs of FPA agencies and entities; at this stage, an exploratory method was used. The second stage, a descriptive one, evaluated the risk management mechanisms present in the ICTMPs using as a reference the procedures recommended by [6], a standard that covers the principles and guidelines of risk management.
The analysis performed in this work will allow the Federal Public Administration to appreciate the importance of carrying out risk management when elaborating its strategic planning.
The rest of this paper is organized as follows. Section 2 presents an overview of issues related to institutional and ICT planning instruments in the public sector, and likewise explores risk management in national and international public domains. Section 3 describes the research methods used. Section 4 presents the results obtained from the research. The conclusions and future work are presented in Section 5.
Literature Review
Risk Management (RM) refers to the coordinated activities to direct and control an organization with regard to risks. In this context, risk is the effect of uncertainty on objectives, where effects can be understood as a deviation from the expected, whether positive or negative [7].
From the view of [7], risk is the possibility that an event will occur and adversely affect the achievement of objectives. In this work, the broader concept advocated by the standard is adopted, which considers that risk may give rise to positive or negative impacts.
According to [8], there is no real or objective risk. The implication is that risk is not something that is waiting to be measured independently of our minds, cultures, policies or views of the world: it is inherently subjective. To protect oneself against all risks is impossible, because any opportunity invariably entails risks [8].
Risk Management in the Public Sector
In recent years, the use of risk management in the public sector has become important in several countries. Javani and Rwelamila [9] studied the South African case. They found significant statistical support for the conclusion that risk management is being applied in IT projects and that it is understood by project clients, although the status of risk management in the South African public sector remains little known.
In Thailand, Kongmalai et al. [10] found empirical evidence on corporate governance in state-owned enterprises. They developed a multi-attribute corporate governance model and provide detailed information on each corporate governance practice, including risk management.
In Indonesia, Amali et al. [11] discussed a framework for Information Technology governance in the public sector in which IT resources, IT strategic alignment and risk management are lacking. The results showed the local need for such a framework.
In Canada, Leung and Frances [12] studied risk management in the public sector based on the National Research Council (NRC). They map multiple sources of strategic and operational risks, which might arise from political and other stakeholder interests, intellectual property ownership and policy, funding structures, public perceptions of science and technology, occupational health and safety, management of highly qualified personnel, and others.
Risk management in the Brazilian public sector is still a relatively unexplored subject. The FCA started a survey on risk management in the federal indirect administration, in order to understand and evaluate the risk management maturity of these entities. The instrument used for the survey is based on recognized international standards, such as [6].
The only way for public organizations to manage their responsibilities and protect citizens is to implement risk management [13].
According to [14], the government's ability to manage risk depends on the skills of its employees. Therefore, the ability to manage risk is broader than the concern with scientific capacity.
Scientists need to recognize that, in addition to sound scientific analysis, effective risk management in the context of public policy also requires the ability to ask the right questions about scientific issues, risks, public perception and policy options, and about how these factors relate to each other.
According to [15], the main points to be noted in order to implement risk management in the public administration are:
• To ensure that decision-making takes the risks into account, so that risk management becomes a requirement for the decision-making process;
• To ensure that risk management is effectively established and that the tools and methods selected are applied;
• To organize itself for risk management, ensuring that the responsibility for dealing with risks lies with those best prepared for their management and that the flow of information supports the division of such tasks;
• To develop skills to ensure that those responsible for decision-making are prepared to understand and analyze the risks and that they are advised by experienced professionals, if necessary;
• To ensure that the government has a leading and stimulating role in cultural change.
According to [16], the risk management process in the public sector is increasingly better understood; however, the knowledge of these elements and of the associated processes does not guarantee appropriate treatment of risks in an organization. Effective risk management requires a risk assessment culture that supports a holistic approach to risk identification and management throughout the organization.
According to [17], risk assessment is inherently subjective and represents a mixture of scientific observations and individual judgments with significant psychological, social, cultural and political factors. Whoever controls the definition of a risk controls the rational solution to the problem under discussion. When a risk is defined in a particular way, a specific option will appear as the most effective in terms of costs, as the safest or as the best. When it is defined otherwise, perhaps incorporating qualitative characteristics and other contextual factors, the ordering of possible solutions tends to be different. Defining risk, therefore, is an exercise of power.
The limitations of risk science, the importance and difficulty of maintaining trust, and the complex socio-political nature of risks point to the need for a new approach, one focused on greater public participation in risk evaluation and decision-making, so that the decision-making process becomes more democratic, the relevance and quality of technical analysis are improved, and the resulting decisions have more legitimacy and greater public acceptance. Risk management has improved significantly, but project success rates have failed to improve at the same rate. Attained improvements are also seen to deteriorate remarkably quickly, and the field faces real dilemmas. The five specific challenges identified in uncertainty analysis indicate that even professional risk managers and their teams do not have the right competences, adequate planning data or effective procedures to properly identify risks and uncertainties, quantify and analyze them, communicate them to decision makers, or incorporate the consequences into their risk management [18,19].
In [7], the principles and general guidelines on risk management for any type of organization, groups or individuals in general are provided. It can be applied to various activities, including strategies, decisions, operations, processes, projects, services and assets.
According to [7], an organization should, at all levels, meet a set of eleven principles for risk management (RM) to be effective. Among them, risk management should:
• Create and protect value: RM contributes to the demonstrable achievement of objectives and to the improvement of performance in various aspects, such as security, legal compliance, environmental protection, product quality, operational efficiency, governance and reputation;
• Be an integral part of all organizational processes: RM is not an autonomous activity separate from the main activities and organizational processes. It is part of management's responsibilities and is an integral part of all organizational processes, including strategic planning and all project management and change management processes;
• Be part of decision-making: RM helps decision makers make informed choices, prioritize actions and distinguish among alternative courses of action;
• Be systematic, structured and timely: a systematic, timely and structured approach to risk management contributes to efficiency and to consistent, comparable and reliable results;
• Be transparent and inclusive: the appropriate and timely involvement of stakeholders and, in particular, of decision makers at all levels of the organization ensures that risk management remains relevant and up to date.
In addition to the principles and structure, the standard also establishes the risk management process that can be used by any type of organization. It is appropriate that the risk management process be an integral part of management, incorporated into the culture and practices of the organization and tailored to the organization's business processes. Figure 1 shows the risk management process defined in [7].
The process of communication and consultation aims to ensure that communication and consultation with internal and external stakeholders occur during all stages of the risk management process. The establishment of a context provides the scope and risk criteria and sets the external and internal parameters that must be considered in risk management [20].
The process of risk assessment is considered the overall process of risk identification, risk analysis and risk evaluation. Treatment of risks involves the selection of the most suitable alternatives to modify the risks, together with the plans necessary to implement them. In turn, the process of monitoring and critical analysis aims to ensure that controls related to risks are effective and efficient, as well as to obtain additional information to improve the process of risk assessment [20].
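To make the structure of this process easier to follow, the sketch below is a minimal, hypothetical Python illustration of how the assessment and treatment stages could be modelled; the likelihood/impact scales, the threshold and the treatment options are assumptions made for the example and are not prescribed by the standard.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int      # assumed scale: 1 (rare) to 5 (almost certain)
    impact: int          # assumed scale: 1 (negligible) to 5 (severe)
    treatment: str = ""  # option chosen during the risk treatment stage

    @property
    def level(self) -> int:
        # simple likelihood x impact score, used here only for illustration
        return self.likelihood * self.impact

def assess(risks: list[Risk], threshold: int = 9) -> list[Risk]:
    """Risk assessment = identification + analysis + evaluation.
    Evaluation is reduced here to keeping risks whose score meets a threshold."""
    return [r for r in risks if r.level >= threshold]

def treat(risks: list[Risk]) -> None:
    """Risk treatment: select an option for each risk retained by the evaluation."""
    for r in risks:
        r.treatment = "mitigate" if r.level < 20 else "avoid"

# Establishing the context would define the scales and the threshold; monitoring
# and critical analysis would rerun assess()/treat() periodically, with
# communication and consultation taking place throughout the process.
identified = [
    Risk("ICTMP not aligned with the institutional strategic plan", 4, 4),
    Risk("budget cut during ICTMP execution", 3, 5),
]
relevant = assess(identified)
treat(relevant)
for r in relevant:
    print(r.description, r.level, r.treatment)
```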
Institutional Planning in the Public Sector
Huang [21] established an analytical framework for exploring the ICT-oriented urban planning experience of Taipei City and tests the limitations and application of the framework. The study covers technological trends, physical infrastructure and ICT content.
In Jordan, Onizat et al. [22] evaluated the ICT process in the Jordanian e-Government program. They identified the lack of a systematic evaluation process as the main reason for the setbacks of the e-Government program in Jordan.
The institutional planning in the Brazilian public sector is governed by a set of constitutional provisions. In addition to being mandatory, the planning activity enhances the achievement of established goals and assists in the perception of predictability of results.
The Constitution of the Federal Republic of Brazil of 1988 (CF/1988- [23]) states that, as the normative and regulating agent of the economic activity, the State shall, in accordance with the law, exercise supervisory, incentive and planning functions, which are crucial for the public sector and indicative for the private sector.
In addition, it states that the direct and indirect public administration of any of the powers of the Union, the States, the Federal District and the Municipalities shall obey the principles of legality, impersonality, morality, publicity and efficiency (...). The Multi-Year Plan (MYP) is the main and most comprehensive planning tool for agencies and entities of the Federal Public Administration. It should contain the guidelines, objectives and goals of the federal government for capital expenditures and other expenditures arising from them, as well as for those related to programs of continuing duration [24].
Each year's planning (the annual budget) cannot contradict the determinations of the MYP. Thus, it becomes mandatory for the Government to plan its actions in alignment with the budget. The Budget, through the Budget Guidelines and the Annual Budgets, translates the plan into financial terms and goals for a financial year, adjusting the pace of implementation of the flow of funds in order to ensure the timely release of funds [24].
The Institutional Strategic Plan (ISP) is also an essential tool for organizations. Strategic planning is a process of determining the main objectives of an organization, the policies and strategies that will govern it and the use and availability of resources to achieve these objectives, consisting of assumptions, planning itself, implementation and review [25,26].
ICT Planning in the Public Sector
ICT planning in the public sector, similar to the private sector, is used to declare the strategic objectives and initiatives of ICT by aligning information technology solutions to the organization's goals. It also constitutes an important complement to the ISP, including guidelines and cross-cutting actions, i.e., actions that support business goals in all areas of the institution, as well as the structural and regulatory objectives of Federal Public Administration (FPA) agencies [27,28].
Strategic planning in public and private organizations should be complemented by the planning of information systems, knowledge and information. This planning is also known as Strategic Information Technology Planning (SITP). Both must be integrated and aligned [29]. The SITP establishes guidelines and goals that guide the construction of the organization's ICT planning. At the tactical level, the most commonly used instrument to represent ICT planning is the Information and Communication Technology Master Plan (ICTMP) [29]. This instrument shows tactically how an organization, with regard to information technology, can make the transition from a current situation to a future situation, based on the definition of a plan of goals and actions.
The ICTMP is defined as a diagnostic, planning and management tool of resources and information technology process that aims to meet the technological needs and information of an agency or entity for a certain period [29]. In addition, it states that ICT procurement must be preceded by planning, prepared in accordance with the ICTMP, which, in turn, should be aligned with the strategic planning of the body [29].
The ICTMP must set indicators in accordance with the strategic objectives of ICT, and include the planning of necessary investments, the budget proposal, staffing levels and training of personnel, and the identification and treatment of ICT-related risks. It is extremely important that the ICTMP provides the alignment of ICT solutions with the business goals and needs of the organization [29]. In addition, strategic alignment and risk mitigation, value delivery, resource management and performance measurement are considered key areas of ICT governance [30].
ICTMP Development in the Brazilian Public Sector
The Administration of Information and Computer Resources System (AICRS) aims to organize the operation, control, supervision and coordination of the information and informatics resources of the direct administration, autonomous agencies and foundations of the Federal Executive Branch.
The AICRS Preparation Guide for the Information and Communication Technology Master Plan (ICTMP) aims to provide information to assist the development of ICTMPs with minimum content and quality, in order to improve ICT management in FPA agencies [31].
The Guide, despite not having a normative character, is considered an important tool for assisting public agencies and entities in developing ICTMPs adherent to the several regulations that deal with the matter, as well as to good market practices. The literature research conducted in this study found that risk management in the FPA is still underexplored. With respect to planning, the Brazilian public sector has a wide range of legislation governing the matter, either at the institutional or the ICT level. Regarding the application of risk management in public planning instruments, particularly in the ICTMP, no publications were found that deal specifically with the matter.
Research Methodology
Every decision that is made by managers and policy-makers in a public sector organization requires an evaluation and a judgment of the risks involved. This vital requirement has been recognized in the growth of risk management. However, risks can never be fully prevented, which means that public managers also have to be crisis managers [32].
Today's crises develop in unseen ways; they escalate rapidly and transform through the interdependencies of modern society, and their frequency is growing: the global financial crisis, the European volcanic ash cloud, the Japanese tsunami and subsequent Fukushima nuclear plant meltdown, the Christchurch earthquake and the Queensland floods. All highlight the extreme challenges that public sector organizations across the world have had to face in recent years [27].
Risk management in the Federal Public Administration (FPA), especially in the development of institutional planning instruments, is a relatively unexplored matter. In the first stage of the work, a qualitative research method of an exploratory nature was used. A work is exploratory in nature when it involves a literature review and the analysis of examples that encourage the understanding of the problem, in order to provide the researcher with better knowledge of the matter and to enable the formulation of more precise problems and the creation of hypotheses that can be researched by subsequent studies [17].
At this stage, seventeen ICTMPs from major federal agencies and public entities belonging to the three powers of the Union, effective between 2010 and 2017, were analyzed. In conducting the research, the ICTMPs of federal-level agencies and entities that had published their documents on the internet were considered for the selection of the sample. Additionally, the ICTMPs of agencies and entities belonging to the Applied Information Technology to Control community were requested via email. This community brings together representatives of the Legislative, Executive and Judiciary branches, the Public Ministry and the Attorney General's Office. Its purpose is to contribute to increasing efficiency, efficacy and effectiveness in the public administration. Some of these institutions provided their plans through electronic messages.
In the second stage of the work, we used a descriptive method, which aims to describe the characteristics of a given population or phenomenon [33]. In order to frame the analysis of risk management use in ICTMP preparation, the key risk management processes recommended by the Brazilian standard were used as a reference [7].
Key processes of the standard were selected, and the adherence of the Guide [31] to those processes was verified using a qualitative scale to rate the degree of adherence: low (absence of the process), medium (partial presence of the process) and high (full presence of the process). The aim of this step was to determine whether the risk management mechanisms present in the Guide were aligned with the main standard dealing with the matter. The result of this verification facilitated the analysis of the risk management process during the preparation of the ICTMPs that used the Guide [31].
The ICTMPs were grouped to facilitate the analysis and to provide a quantitative view of the presence or absence of risk management in these documents, as well as to establish their compliance with the risk mechanisms suggested by the Guide. For each group, a subset of documents was selected for a more detailed analysis, in order to describe the risk management mechanisms used or to analyze the reasons for their absence.
To expand the focus of the analysis, a grouping of documents was chosen in order to obtain a holistic view of the use of risk management mechanisms in the preparation of the ICTMPs of federal public organizations. The sample was divided into three groups:
1. Has risk management and is adherent to the Guide;
2. Has risk management but is not adherent to the Guide;
3. Has no risk management.
Figure 2 shows the quantity and percentage distribution for each group of the sample. On average, a total of 120 employees of the organization were part of each group of the sample. Two documents, representing twelve percent of the sample, had risk management adherent to the Guide (Group 1-12%). Two others, despite having risk management related mechanisms, were not adherent to the Guide (Group 2-12%). Thirteen documents, representing seventy-six percent of the sample, were classified in Group 3 (76%) due to the lack of evidence of the presence of risk management processes.
Analysis of Results and Discussion
Acknowledging that there are many official standards for IT risk management designed to improve the organization's decision-making and its handling of key uncertainties, such as ISO 31000, BS 3100:2008, COSO:2004 and FERMA:2002, the following analysis focuses on ISO 31000 [7], as it is the basis for the Brazilian standard [6].
Analysis of the Adherence of the Risk Management Processes in the AICRS ICTMP Planning Guide to ISO 31000
In order to verify the adherence of the risk management mechanisms present in the Guide [34] to the Brazilian standards dealing with risk management principles and guidelines, the main processes of the Brazilian standard were used as a reference [6,7].
The Guide addresses risk management explicitly in various sections of the document: in the overview, in the preparation stage and in the ICTMP planning stage. Table 1 shows the adherence analysis of the mechanisms related to risk management present in the Guide [34] to the processes highlighted in the standard [6,7]. For each process (column 2), the most relevant sub-processes were selected (column 3) in order to perform the adherence analysis. A qualitative scale was used to rate the degree of adherence: low (absence of the process), medium (partial presence of the process) and high (full presence of the process). The fourth column shows evidence of the presence of the ISO 31000 process [6,7] in the Guide [34].
During the analysis, no explicit evidence of the application of mechanisms related to the communication and consultation process was identified in the content of the analyzed Guide. Thus, the Guide was considered to have low adherence to the communication and consultation process of ISO 31000 [6]. According to the standard, this process has no highlighted sub-processes.
The context establishment process consists of a set of sub-processes. The adherence analysis of the Guide in relation to this process was restricted to the risk criteria definition sub-process, given its relevance, because much of the information produced by the other sub-processes is used as input for the definition of the risk criteria. The analysis showed the presence of a specific item in the Guide that deals with the definition and updating of prioritization and risk acceptance criteria, even providing a template for the registration of such information. It follows, therefore, that the Guide has a high degree of adherence to the context establishment process of the highlighted Brazilian standard.
According to [6,7], the risk assessment process is composed of the following sub-processes: risk identification, risk analysis and risk evaluation. These sub-processes are essential for the effective execution of the risk assessment process. Because they are relevant and complementary, the adherence analysis was performed for all of them. It was found that the item "Risk management or planning" in the Guide explicitly covers the three sub-processes related to the ISO 31000 risk assessment process [7], also providing a template for recording the information. Thus, it was observed that the Guide has a high degree of adherence to the risk assessment process stipulated in the standard.
The risk treatment process recommended by the Brazilian standard involves the selection of the most suitable alternatives to modify the risks, as well as the preparation of the plans necessary to implement them. These aspects are addressed in the "Risk Management Plan" contained in the Guide; however, some items were found to be missing, such as the definition of those responsible for the execution and approval of the plans, the timeline for implementation and the resource requirements. In this sense, the Guide is considered to have a medium degree of adherence to the ISO 31000 risk treatment process [6,7].
Regarding the monitoring and review process, only a mention of the responsibility for constant risk monitoring in the risk management plan template recommended by the Guide was identified. The Brazilian standard recommends that the organization's monitoring and critical analysis processes should cover all aspects of the risk management process in order to ensure that controls are effective and efficient, to detect changes in the external and internal context, and to identify emerging risks, among others. Thus, it was found that, in relation to these aspects, the Guide has medium adherence to the monitoring and critical analysis process of ISO 31000 [6,7], since the process is only partially present in the Guide.
Despite the absence from the Guide of some processes recommended by the Brazilian standard, it was found that most of the ISO 31000 [6,7] processes are addressed by the AICRS Guide [34]. It is concluded, therefore, that the processes that deal with risk management in the Guide adhere to ISO 31000 [6,7].
Analysis of ICTMPs Included in Group 1-Has Risk Management and Is Adherent to the AICRS ICTMP Development Guide
Twelve percent of the sample, representing two ICTMPs, were classified in Group 1, which comprises the documents that have risk management adherent to the Guide. Both documents belong to agencies from the indirect federal administration. These agencies have a special legal status, being typified as regulatory agencies. An analysis of each document follows. In order to preserve the identity of the agencies and entities, in this work they are identified by a number (corresponding to the group to which they belong) followed by a letter.
The first ICTMP analyzed belongs to the agency here called agency 1A and was effective from 2012 to 2014. As recommended by the Brazilian standard [6,7] (as part of the context establishment process) and by the Guide, the sub-process that deals with the definition and updating of the risk acceptance criteria is essential in order to guide the evaluation of the planned actions and the design of new actions intended to address the existing risks. This aspect was addressed in the analyzed ICTMP through meetings of the agency's ICT committee, as shown in the implementation schedule present in the document.
The risk assessment, risk treatment and risk monitoring and analysis processes were addressed according to the guidelines present in the Guide. Table 2 shows the result of the execution of these processes.
For each action proposed in the plan, the agency identified, analyzed and assessed the risks, and adopted a treatment and response strategy for them. The set of possible strategies was defined in line with the Guide. They are: mitigate (develop actions to minimize the probability of the risk occurring or its impact on the project, in order to make the risk acceptable); avoid (change the project plan, eliminating the condition that exposed the project to the risk); transfer (pass on the consequences of the risk, as well as the responsibility for responding to it, to those better prepared to deal with it); and accept (indicated in situations where the criticality of the risk is medium or low, or when it is not possible, or there is no interest, to implement a specific action).
The risk assessment, risk treatment and monitoring and critical analysis processes present in agency 1B's ICTMP were addressed according to the guidelines present in the Guide. Table 3 presents a small sample of the result of the execution of these processes. It consists of seven columns: action identification, action description, risk description, probability, impact, contingency action and responsibility.
For each action in the plan, the risks were identified and assessed with regard to the probability and impact of their occurrence, applying a scale with five classification levels: very low, low, medium, high and very high. The criteria used to perform the classification at each of these levels were established and communicated. After classification, risk response planning was carried out, establishing contingency actions and the responsibility for their treatment.
The analysis performed on the ICTMPs of agencies 1A and 1B, comprised in Group 1, showed that the risk management mechanisms present in these documents follow the guidelines of the AICRS Guide in relation to risk management. Small variations in risk treatment were found among the examined documents, without prejudice to their adherence to the Guide.
Analysis of the ICTMPs Included in Group 2-Has Risk Management and Is Not Adherent to the AICRS ICTMP Development Guide
Two ICTMPs, representing twelve percent of the sample, were classified in Group 2, comprising the plans that have risk management but are not adherent to the Guide. Both documents belong to important agencies of the Federal Administration. An analysis of each document follows. Within Group 2, the ICTMP of a major federal public foundation, an agency of the indirect administration with high ICT investment, was selected. Within this study, this agency is named 2A.
There was a lack of processes related to context establishment, specifically the definition of risk criteria, as well as of the risk analysis and risk evaluation sub-processes, recommended by the Brazilian standard and suggested by the Guide. Only the identification of risks, the respective preventive and contingency measures, and the responsibilities for them were observed. Table 4 shows, in full, the risk management plan in the ICTMP. The table contains four columns, representing the description of the risk, the preventive measures, the contingency measures and the person responsible for the risk. The responsibility column represents the person responsible for the information in the agency.
There was an absence of analysis of the probability of occurrence of the risks, as well as of the impacts of the identified risks, a key step to support decision-making regarding the acceptance, prioritization, treatment and monitoring of risks. There was also no correlation between the identified risks and the action plan intended to be executed, which hinders traceability between the items.
According to [6,7], the risk evaluation sub-process involves comparing the level of risk found during the analysis process with the risk criteria established when the context was considered. Based on this comparison, the need to treat certain risks is confirmed. Since no risk criteria were established, nor was a risk analysis of likelihood and impact conducted, the risk management mechanisms present in the ICTMP are insufficient for the effective execution of the risk management process. Therefore, despite the existence of risk management mechanisms, the analyzed ICTMP does not follow the guidelines recommended by the Guide with respect to the risk management processes.
Furthermore, in relation to the analysis of the documents included in Group 2, the ICTMP of another important agency of the federal administration was analyzed. Within this study, this agency is called 2B. The risk management mechanisms present in this document differ from the traditional approach used in other ICTMPs, in which, in general, the risks are identified, analyzed, evaluated, treated and monitored for each action in the plan. The analyzed document only presented the impacts of the non-implementation of the ICTMP. Table 5 shows the list of possible impacts resulting from the non-performance of agency 2B's ICTMP. This approach ignored the use of risk management mechanisms associated with each action in the plan. Moreover, it was not clear during the analysis whether the agency considered only the impacts of the total non-performance of the ICTMP or whether the list presented in Table 5 represents the risk impacts of the non-performance of certain actions in the plan.
The conclusion of the analysis of the two ICTMPs included in Group 2 is that the analyzed plans need more robust tools for planning and monitoring the risks inherent in the execution of the Information Technology Master Plan, an essential instrument for carrying out the actions responsible for achieving much of the organization's business objectives. The high ICT budgets present in the ICTMPs demonstrate the need for a formal risk management process integrated with the ICT governance mechanisms of the institutions. The Guide [31,34] is one of the instruments that can contribute to these objectives.
Analysis of ICTMPs Included in Group 3-Does Not Have Risk Management
Most of the analyzed ICTMPs are included in this group. Seventy-six percent of the sample showed no evidence of the use of risk management mechanisms in their ICTMPs. This finding led to a central question that will be discussed during the analysis: was risk management really unnecessary for the preparation of these ICTMPs?
Here, major federal agencies and entities of the three powers of the Union were considered. Thirty-eight percent of the documents included in Group 3 belong to ICT Superior Governing Agencies (OGS), responsible for regulating and supervising the use and management of ICT in their respective areas of the APF [5]. Similarly to the analysis carried out in the other groups, an ICTMP from an agency belonging to Group 3 was selected for a particular analysis. The analyzed document belongs to an important agency. The plan contains dozens of actions, ranging from the development of corporate solutions and investment in hardware and software infrastructure to ICT governance improvement. In the whole set of these actions, no evidence related to the use of risk management mechanisms was found. The other ICTMPs analyzed belong to agencies and entities with similar characteristics. They are prominent, behavior-inducing institutions within the APF and have sizable ICT budgets to achieve the organizational strategies and business objectives of the institution.
According to [35], risk is the situation in which the decision maker has prior knowledge of both the consequences of the different alternatives and their probability of occurrence. In this sense, perceived risk has two main components: uncertainty (the possibility of unfavorable results) and consequences (the relevance of the loss). When this information is not considered in the planning of a given action, the chances of tangible and intangible losses increase to the extent that the risks are not identified, monitored and recorded through a formal risk management process. Similarly, opportunities will hardly be exploited.
Conclusions
The study of risk management applied to planning tools, particularly to the ICTMP, has an important role for organizations in order to achieve their goals and fulfill their institutional mission since ICT actions are present in all organizational processes. The risks inherent in the planning of ICT actions in the public sector need to be considered so that ICT adds value to business and maximizes the return on investment of the population. This work examined the use of risk management processes in the development of ICTMPs in the Brazilian public sector.
Because it is a relatively unexplored matter, a qualitative method of an exploratory nature was initially used to understand it. At this stage, seventeen ICTMPs from major federal agencies and public entities belonging to the three powers of the Union were analyzed. The main criterion for choosing the sample was ease of access to the documents. In a second stage, a descriptive method was used to perform a detailed analysis of each document. To frame the analysis of risk management use in the preparation of the ICTMPs, key risk management processes were used as a reference.
The vast majority of the evaluated agencies and entities did not use risk management processes in the preparation of their ICTMPs. This finding led to an analysis of the real need for risk management processes in each evaluated ICTMP, taking into account the relevance, scope and materiality of the actions contained in the plans. The examined documents contained structural actions essential for achieving the organizational strategies and business goals of the institutions. Therefore, risk management can contribute to the preparation of the analyzed ICTMPs by increasing efficiency in the use of resources, facilitating the understanding of the risks to which the critical processes of the institution are exposed, providing solid criteria for the planning and prioritization of actions, enabling the exploration of opportunities and minimizing adverse effects.
Another worrying fact was the finding that thirty-eight percent of the analyzed documents belong to ICT Superior Governing Agencies (OGS), responsible for regulating and supervising the use and management of ICT in their respective segments of the APF. These agencies, besides having legislative powers, are behavior inducers.
The absence of risk management in the preparation of the analyzed organizations' ICTMPs can contribute negatively to the achievement of the strategies and business goals of these organizations. The possible consequences are exacerbated because they are public organizations, which should uphold the principles of legality, efficiency and economy, providing a quality service to the main interested and affected party: Brazilian society.
"Business",
"Computer Science"
] |
Nonlinear and machine learning analyses on high-density EEG data of math experts and novices
A current trend in neurosciences is to use naturalistic stimuli, such as cinema, classroom biology or video gaming, aiming to understand brain functions during ecologically valid conditions. Naturalistic stimuli recruit complex and overlapping cognitive, emotional and sensory brain processes. Brain oscillations form underlying mechanisms for such processes, and, further, these processes can be modified by expertise. Human cortical functions are often analyzed with linear methods even though the brain, as a biological system, is highly nonlinear. This study applies a relatively robust nonlinear method, the Higuchi fractal dimension (HFD), to classify the cortical functions of math experts and novices as they solve long and complex math demonstrations in an EEG laboratory. Brain imaging data collected over a long time span during naturalistic stimuli enable the application of data-driven analyses. Therefore, we also explore the neural signature of math expertise with machine learning algorithms. There is a need for novel methodologies for analyzing naturalistic data, because formulating theories of brain function in the real world based on reductionist and simplified study designs is both challenging and questionable. Data-driven intelligent approaches may be helpful in developing and testing new theories on complex brain functions. Our results clarify the distinct neural signatures, analyzed by HFD, of math experts and novices during complex math, and suggest machine learning as a promising data-driven approach for understanding the brain processes underlying expertise and mathematical cognition.
Continuous brain imaging data, which is collected over a long time span during naturalistic stimuli, enables the application of data-driven analyses (Cantlon, 2020 [2]; Zhang et al., 2021 [4]). Machine learning (ML) analyses may assist in generating new hypotheses about the underlying task-relevant brain processes, especially in the naturalistic context. In such contexts, several low- and high-level overlapping brain processes occur simultaneously (Nastase et al., 2020 [3]). Due to the overlapping nature of several brain processes, extension of the neuroscientific theories formulated based on reductionist and simplified study designs is both challenging and questionable (Cantlon, 2020 [2]). Novel methodologies in analyzing naturalistic data are required, and data-driven intelligent approaches form a good candidate for developing and testing new theories on brain functions in the real world (Nastase et al., 2020 [3]).
Recent developments in ML are already applied in healthcare and extend to several fields: spike detection in epilepsy, dementia prediction, and mental health and sleep stage classification (Singh et al., 2022 [10]). These data-driven methods aim to transform healthcare delivery and to change the trajectory of brain health by addressing brain care earlier in the lifespan (Singh et al., 2022 [10]). For example, recent advances utilizing ML, specifically techniques with Brain-Computer Interfaces (BCI), help stroke patients either restore neurologic pathways or communicate with an electronic prosthetic (Cervera et al., 2018 [11]; Baniqued et al., 2021 [12]). On the other hand, ML may help in the diagnostics of conditions like stroke or Alzheimer's disease through the detection of disease-specific EEG biomarkers (Karthik et al., 2020 [13]; Meghdadi et al., 2021 [14]).
In addition to the applications for prediction and diagnostics in healthcare, ML for brain imaging has application possibilities in the contexts of learning and education (Bavelier and Green, 2019 [7]; Cantlon, 2020 [2]). For decades, scientists have studied the brain processes during cognitive tasks, like mathematics or language. These studies have brought valuable knowledge on the domain-general brain functions of working memory, attention, and solving strategies (e.g. De Smedt et al., 2009 [15]; Kulasingham et al., 2021 [16]; Wang et al., 2020 [17]) and the domain-specific brain functions of numeric and verbal processing (e.g. Amalric and Dehaene, 2016 [18], 2019 [19]). Some studies have focused on understanding healthy development and expertise (Jeon et al., 2019 [20]; Zhang et al., 2015 [21]), whereas others bring insights on disrupted development and learning deficits (Klados et al., 2017 [22]; Rubinsten, 2015 [23]). Neuroscientific studies made in the learning sciences have not yet utilized ML in the data analysis. However, ML has the potential to be used in data-driven hypothesis formation about the brain functions underlying expertise development or learning deficits, and for real-time adaptive feedback in learning and focused attention (Kefalis et al., 2020 [24]; Hunkin et al., 2021 [25]).
Previous studies show differences in the brain functions of math experts and novices during short and simple math tasks (e.g. Grabner et al., 2007 [26]). Such differences are associated with brain functions modified through expertise, such as rote learning and strategy selection for solving the tasks at hand (Grabner and De Smedt, 2012 [27]; Hinault and Lemaire, 2016 [28]). However, simple math tasks lasting a few seconds, which are traditionally used as stimuli in studies on math expertise, seldom produce enough continuous brain imaging data for ML methods to be applied successfully.
Although some ML algorithms are designed to evaluate raw EEG data (da Silva Lourenco et al., 2021 [29]), several studies which focus on the comparison of brain states have preprocessed the data before ML classification. The brain, like many biological systems, behaves in a nonlinear manner. The nonlinear behavior of biological systems is characterized by a high degree of variability in the time domain (nonstationarity) and randomness that could be attributed to the interaction of internal and external factors influencing the organism (Glass, 2001 [30]; Eke et al., 2002 [31]). Engagement with complex math recruits several cognitive brain processes which overlap with sensory and emotional processes (Suarez-Pellicioni et al., 2016 [32]; Wang et al., 2015 [33]). Therefore, the EEG data collected during such a cognitively challenging task is likely highly complex, and a potentially optimal way to process such data includes an analysis suitable for nonlinear systems.
Cognitively challenging tasks create a brain state which is clearly different from that of relaxed states (Finn, 2021 [34]). Fractal dimension is a highly sensitive measure for the detection of hidden information contained in physiological time series (Klonowski, 2002 [35]; Raghavendra and Dutt, 2010 [36]) and has been shown to vary depending on the brain state. An often-used nonlinear measure for signal analysis is Higuchi's fractal dimension (HFD), which is a measure of signal complexity in the time domain (Higuchi, 1988 [37]; Spasic et al., 2008 [38]). Previous studies utilizing such methods successfully classified different sleep stages and detected the difference in brain state between drowsiness and wakefulness (Inouye et al., 1994 [39]; Šušmáková and Krakovská, 2008 [40]). HFD showed the most robust results and seems to be superior to other FD methods for EEG signals (Solhjoo and Nasrabadi, 2005 [41]; Ahmadlou et al., 2012 [42]).
This study investigated the neural signature of math expertise with a relatively robust nonlinear analysis, HFD, and explored a new paradigm by applying ML to EEG data collected from math experts and novices while they engaged with long and complex math demonstrations. Such math demonstrations, with a duration of up to one minute, form part of the current trend of investigating the brain with naturalistic stimuli. Our aim was to describe the EEG data during advanced mathematical cognition with a nonlinear method and to evaluate whether the neural signatures of math experts and novices differ in a way that is detectable with artificial intelligence. We hypothesized that the experts' and novices' brain functions during long math tasks differ in signal complexity detectable with HFD, which, further, can be classified by a ML model.
Participants
Thirty-four math experts (bachelor and master students in math or math-related disciplines, like physics or engineering) and thirty-five math novices (no university-level math studies) participated in the experiment. However, eleven participants from the group of math experts and twelve participants from the novice group were discarded from the data analysis because their EEG data was too noisy or some of the relevant data was missing due to a malfunctioning EEG amplifier. Therefore, in the group of math experts, there were 22 participants (5 female and 17 male), and in the novice group, 22 participants (7 female and 15 male). The background of the participants was screened with a math questionnaire.
The age of the participants ranged from 19 to 24 years (mean 21.0 years) among math experts and from 19 to 35 years (mean 23.8 years) among novices. All participants in both groups were right-handed. No participants reported hearing loss or a history of neurological illnesses. The experiment protocol was conducted in accordance with the Declaration of Helsinki and approved by the Executive Board of ETH Zurich after a review by the ETH Zurich Ethics Commission. All participants provided written informed consent.
Task design
Participants watched 16 math demonstrations. After each demonstration, they were asked three self-evaluation reflection questions, which they answered by pressing a button on a 4-button response box. Each set of trials consisted of four excerpts of the same presentation style (symbolic or geometric), and these sets were presented in a pseudo-random order via a monitor. The pseudo-randomization defined the presentation order (symbolic first or geometric first). However, each participant saw the same four math demonstrations presented in either symbolic or geometric form before seeing them in the other form.
Each math demonstration consisted of several slides, from 4 up to 12 slides (6.9 slides on average), depending on the complexity of each demonstration. The total duration of the math demonstrations varied from 13 seconds to 68 seconds (33.1 seconds on average). The timing of each slide was the same for all participants. The duration of each slide was defined according to an online screening in which 25 math experts and 25 math novices watched the math demonstration slides and advanced to the following slide with a button press at their own pace. The participants who attended the online screening did not attend the actual EEG experiment. The duration of each slide in the EEG experiment was the average time the participants spent on each slide during the online screening. In the online screening, there was no statistically significant difference between experts and novices in the time spent on each slide.
Data acquisition
The stimuli were presented to the participants with MATLAB via Psychtoolbox. The experimenter launched the playback of the presentation program, after which the participant could navigate to the math demonstrations with a button press once they had read the instruction slides on the screen. The total length of the experiment material was approximately 15 minutes.
The data were recorded using ANT Neuro eego mylab electrode caps with 128 active EEG channels.
Four external electrodes were placed below, above and on the left side of the left eye, and on the right side of the right eye. The offsets of the active electrodes were kept below 30 mV at the beginning of the measurement, and the data were collected with a sampling rate of 2048 Hz. A timestamp (trigger) was marked into the EEG data at the beginning of each slide of the math presentations. The triggers were sent wirelessly via Lab Streaming Layer.
Data pre-processing
The EEG data of all the participants were first preprocessed with EEGLAB (version 2019.1; Delorme & Makeig, 2004 [43]). The reference was set as the average of all the EEG electrodes. The data were high-pass filtered at 0.5 Hz and low-pass filtered at 40 Hz. Finite impulse response (FIR) filtering, based on the firls (least-squares fitting of FIR coefficients) MATLAB function, was used as the filter for all the data. Then, the data were treated with independent component analysis (ICA) decomposition with the runica algorithm of EEGLAB (Delorme and Makeig, 2004 [43]) to detect and remove artefacts related to eye movements and blinks. ICA decomposition gives as many spatial signal source components as there are channels in the EEG data. Typically, one to four ICA components related to the eye artefacts were removed. Noisy EEG data channels of some participants were interpolated.
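As a hedged point of reference, the snippet below sketches an equivalent pipeline in MNE-Python rather than the EEGLAB/MATLAB tooling actually used in this study; the filter settings mirror those described above, while the input file name, the excluded ICA component indices and the bad-channel list are placeholders, not values taken from the study.

```python
import mne

# Placeholder file; the study used ANT Neuro eego recordings preprocessed in EEGLAB.
raw = mne.io.read_raw("subject01_raw.fif", preload=True)

raw.set_eeg_reference("average")                      # average reference over EEG channels
raw.filter(l_freq=0.5, h_freq=40.0, method="fir")     # 0.5-40 Hz FIR band-pass

# ICA to remove eye-movement and blink components (runica in EEGLAB ~ infomax)
ica = mne.preprocessing.ICA(method="infomax", random_state=0)
ica.fit(raw)
ica.exclude = [0, 1]              # placeholder indices of ocular components
ica.apply(raw)

# Interpolate channels marked as noisy (placeholder channel names)
raw.info["bads"] = ["F3", "P7"]
raw.interpolate_bads(reset_bads=True)
```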
Feature extraction
Higuchi Fractal Dimension (HFD)
The EEG time series has a duration of between 10 and 20 minutes, resulting in a large data size per sample. Hence, feature extraction is necessary to capture the relevant information. The extracted features are then used to draw conclusions regarding the relevance of each brain area for mathematical calculations. For this purpose, the fractal dimension (FD) [44] of each sample is calculated and used to measure the complexity of the signal. A simple pattern that repeats continuously can become a very complex series, which is the basis for fractal constructs. A fractal is a shape that retains its structural detail despite scaling, which is why complex objects can be described with the help of the fractal dimension. One variant of FD, Higuchi's fractal dimension [37], has its roots in chaos theory and has been successfully applied as a complexity measure in various domains of signal processing. It has been shown to be a good numerical solution for nonlinear signals [45]. The speed, accuracy and cost of applying the HFD method in research and medical diagnosis make it stand out from the widely used linear methods [46]. Among the different FD algorithms, Higuchi's method [45] has been demonstrated to be a more accurate option for EEG signals, since it is accurate for both stationary and non-stationary signals.
Say X is an EEG signal of length T, and let N be the length of a time window on which we calculate a HFD value. Following [37], we calculate HFD as follows. A new signal x_m^k is constructed from X, with window size N, where m = (1, 2, ..., k) denotes the starting point and k = (1, 2, ..., k_max) the interval size:

$$x_m^k = \left\{ X(m),\ X(m+k),\ X(m+2k),\ \ldots,\ X\!\left(m + \left\lfloor \tfrac{N-m}{k} \right\rfloor k\right) \right\}. \tag{1}$$

L_m(k) describes the length of the curve of x_m^k for every k given m:

$$L_m(k) = \frac{1}{k}\left(\sum_{i=1}^{\lfloor (N-m)/k \rfloor} \bigl| X(m+ik) - X(m+(i-1)k) \bigr|\right) \frac{N-1}{\left\lfloor \tfrac{N-m}{k} \right\rfloor k}, \tag{2}$$

where $\frac{N-1}{\lfloor (N-m)/k \rfloor\, k}$ is the normalization factor. The length L(k) is defined as the average of the k lengths:

$$L(k) = \frac{1}{k} \sum_{m=1}^{k} L_m(k). \tag{3}$$

HFD is the slope of the best-fitted line through the data points of time series X for a given time window N, for k = (1, 2, ..., k_max), between log(1/k) and log L(k):

$$\log L(k) = \mathrm{HFD}\cdot\log(1/k) + \text{const}. \tag{4}$$

It is possible to calculate HFD for the whole signal (T = N). However, this is not recommended if the signal is nonstationary. In such cases the HFD value does not represent the true measure, and division into windows (or segments) is advised. In [47], Accardo and colleagues showed on synthetic fractal signals that Higuchi's algorithm is more efficient, faster, more accurate and able to estimate the fractal dimension of short segments, compared to Maragos and Sun's algorithm proposed in [48].
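For reference, a minimal NumPy sketch of Equations (1)-(4) is given below. The function and variable names are ours, not the authors' code, and k_max is assumed to be much smaller than the window length.

```python
import numpy as np

def higuchi_fd(x, k_max):
    """Higuchi's fractal dimension of a 1-D signal `x` (Higuchi, 1988).

    Curve lengths L(k) are computed for interval sizes k = 1..k_max and
    HFD is the slope of the least-squares fit of log L(k) vs log(1/k)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    lengths = np.empty(k_max)
    for k in range(1, k_max + 1):
        lmk = []
        for m in range(k):                  # m = 1..k in Equation (1), 0-based here
            idx = np.arange(m, n, k)        # indices of the sub-series x_m^k
            if idx.size < 2:
                continue
            # curve length with the normalization factor (N-1)/(num_diffs * k),
            # divided once more by k as in Equation (2)
            curve = np.sum(np.abs(np.diff(x[idx])))
            norm = (n - 1) / ((idx.size - 1) * k)
            lmk.append(curve * norm / k)
        lengths[k - 1] = np.mean(lmk)       # Equation (3)
    ks = np.arange(1, k_max + 1)
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)  # Equation (4)
    return slope
```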
Hyperparameter tuning
An important hyperparameter that requires fine-tuning is k_max. There is no agreed methodology to optimize this parameter [49]. As per equation 3, HFD is summed up to k_max; therefore, increasing k_max will lead to an increase in HFD. A poor choice of k_max will result in an uninformative HFD; thus, it has to be carefully tuned.
We propose the following methodology to identify the best value for k_max:
1. We compute the HFD values as per equation 4 for a wide range of k_max values, i.e., k_max ∈ {2, 5, 20, 100, 150, 200, 400}, over all subjects and presentations.
2. We identify the k_max at which the difference (equation 5) between the HFD values of significant and non-significant channels is maximized. Significance/non-significance is assessed by taking the maximum/minimum HFD value across all electrodes for a subject. Here, the minimum value is understood as the baseline fractal dimension and is therefore subtracted from the maximum value, which is the complexity of the relevant channels. We base this requirement on the assumption that certain EEG regions are more relevant than others for the mathematical tasks. Hence, there will be a difference in HFD values, and we want to select the k_max that maximizes this difference.
3. The k_max value that satisfies the requirements above is chosen to compute the HFD values for the further analyses and for the machine learning classification.
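The selection heuristic in steps 1-2 can be expressed compactly as below; `hfd_per_channel` is an assumed helper returning one HFD value per EEG channel for a given recording and candidate k_max (for instance, built from the `higuchi_fd` sketch above), and `recordings` is an iterable of subject-presentation signals. Both names are placeholders, not the authors' implementation.

```python
import numpy as np

K_CANDIDATES = (2, 5, 20, 100, 150, 200, 400)

def select_kmax(recordings, hfd_per_channel, k_candidates=K_CANDIDATES):
    """Return the k_max that maximizes the mean (max - min) HFD spread
    across channels, averaged over all subject-presentation recordings."""
    spreads = []
    for k in k_candidates:
        per_recording = []
        for rec in recordings:
            hfd = hfd_per_channel(rec, k)              # shape: (n_channels,)
            per_recording.append(hfd.max() - hfd.min())
        spreads.append(np.mean(per_recording))
    return k_candidates[int(np.argmax(spreads))]
```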
HFD features analyses
Estimating the HFD values for each channel of each participant allows us to investigate which brain areas are most active while performing mathematical tasks. Since HFD values have no physical interpretation, a relative comparison between two different groups is performed.
First, a comparison between experts and novices is investigated by taking the average of all HFD values of the expert group and of the novice group and subtracting them from each other:

$$\Delta \mathrm{HFD}_i = \frac{1}{11 \cdot 16}\sum_{j=1}^{11}\sum_{k=1}^{16} \mathrm{HFD}^{\mathrm{expert}}_{j,k,i} \;-\; \frac{1}{11 \cdot 16}\sum_{j=1}^{11}\sum_{k=1}^{16} \mathrm{HFD}^{\mathrm{novice}}_{j,k,i}, \tag{5}$$

where j ∈ {1, ..., 11} is the index of experts and novices, respectively, k ∈ {1, ..., 16} is the index of presentations and i ∈ {1, ..., 129} is the index of EEG channels.
A one-sided t-test is calculated, testing whether there is a significant difference between the two groups. A visual heatmap of the difference between experts and novices based on equation 5 is mapped onto the head for better qualitative interpretation.
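A minimal sketch of this comparison is given below, assuming the per-group HFD values are stored as arrays of shape (n_subjects, n_presentations, n_channels); the array names, the novice-minus-expert direction of the subtraction, and the choice to collapse presentations before the one-sided t-test are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from scipy import stats

def channelwise_difference(hfd_expert, hfd_novice, alpha=0.05):
    """Per-channel difference of group-averaged HFD values plus a one-sided
    t-test (novice > expert) on the subject-level means."""
    # average over subjects and presentations -> one value per channel
    diff = hfd_novice.mean(axis=(0, 1)) - hfd_expert.mean(axis=(0, 1))
    # collapse presentations so each subject contributes one value per channel
    novice_subj = hfd_novice.mean(axis=1)
    expert_subj = hfd_expert.mean(axis=1)
    _, p_vals = stats.ttest_ind(novice_subj, expert_subj,
                                axis=0, alternative="greater")
    return diff, p_vals < alpha
```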
Subsequently, a finer-grained analysis is performed by comparing the difference between experts and novices for the algebraic and geometric presentations separately:

$$\Delta \mathrm{HFD}^{A}_i = \frac{1}{11 \cdot 8}\sum_{j=1}^{11}\sum_{k_A=1}^{8}\left(\mathrm{HFD}^{\mathrm{expert}}_{j,k_A,i} - \mathrm{HFD}^{\mathrm{novice}}_{j,k_A,i}\right), \qquad \Delta \mathrm{HFD}^{G}_i = \frac{1}{11 \cdot 8}\sum_{j=1}^{11}\sum_{k_G=1}^{8}\left(\mathrm{HFD}^{\mathrm{expert}}_{j,k_G,i} - \mathrm{HFD}^{\mathrm{novice}}_{j,k_G,i}\right), \tag{6}$$

where k_A and k_G ∈ {1, ..., 8} are the indices of the algebraic and geometric presentations, respectively.
Machine learning classification
We pose the question of whether a prediction can be made as to whether a new subject is a novice or an expert based on EEG recordings made while performing mathematical tasks. We frame this problem as a two-class classification task. To understand and interpret the outcome of the machine learning classifiers, care needs to be taken while generating the classification dataset and splitting it into training and testing sets.
We first define the classification dataset as a collection of subject-presentation pairs (e.g. Expert1-Presentation1A, etc.). With the 16 presentations, the full dataset includes 704 samples, i.e., subject-presentation pairs. Subsequently, we either calculate a unique HFD value per EEG channel, meaning that each sample consists of 124 HFD features, or divide the EEG signals of total length T into non-overlapping windows of length N and calculate a HFD value for each window, leading to (T/N)*124 HFD features. Note that the channels "VEOGL", "HEOGL", "HEOGR", "VEOGU" and "HEART" are discarded, since they do not record brain signals but eye movements and cardiac activity.
Since this work is the first in the literature to attempt an automatic classification of mathematical cognitive behavior, we propose three different cases of dataset splitting, illustrated in Figure 1:
1. Subject-presentation pairs: We randomly split all 704 samples without considering whether samples come from different subjects. This means that the samples from the same subject can be entirely in the training set, entirely in the validation set, or split between the two.
2. Subject-specific: We split the dataset on the level of subjects, meaning that all subject-presentation pairs of the same subject are either in the training set or in the validation set.
3. Presentation-specific: We treat each presentation as a separate machine learning task. In other words, we divide the full dataset into sub-datasets, each consisting of a single presentation, and perform the training and testing procedure on each sub-dataset.
With case 1, we verify whether the machine learning (ML) classifier is able to discern between the 22 experts and 22 novices present in the dataset based on a single mathematical presentation. With case 2, we validate the ML classifier on new subjects whose data it has never seen before. The former is a relatively easier classification task, but necessary as a first proof of concept, whereas the latter tackles the more challenging problem of inter-subject variability common to all biomedical data. With case 3, we analyze whether a prediction can be made based on samples coming from a single presentation. By training a separate classifier for each presentation, we can compare the classification accuracy among the presentations and draw insights about which mathematical presentation is most suitable for discerning between math novices and experts.
For cases 1 and 2, we calculate a single HFD value per EEG channel over the whole duration of each presentation. This choice is motivated by the fact that all presentations, of different recording lengths, belong to the same dataset on which a machine learning classifier is trained and, in general, classifiers require a fixed number of features. This is no longer an issue for case 3, because each sub-dataset consists of data from a single presentation of fixed length. Hence, we can increase the granularity and use a non-overlapping moving window of length N to calculate the HFD value in equation 4 for each window. More precisely, a HFD value is calculated for every N seconds of the presentation, HFD_{1:N}, ..., HFD_{t:t+N}, with t being the time step. This allows us to analyze the temporal evolution of the presentation and draw conclusions regarding the classification differences. We test several values of N, i.e., 5, 8 and 11 seconds.
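As a sketch of the windowed feature extraction used for case 3, the helper below assumes a (n_channels, n_samples) array for one subject-presentation pair, a sampling rate `fs`, and the `higuchi_fd` function sketched earlier; the default window length and k_max simply mirror the values mentioned in the text.

```python
import numpy as np

def windowed_hfd_features(signals, fs, window_s=8, k_max=100):
    """One HFD value per channel per non-overlapping window of `window_s`
    seconds, flattened into a single feature vector for the classifier."""
    n_channels, n_samples = signals.shape
    win = int(window_s * fs)
    features = []
    for start in range(0, n_samples - win + 1, win):
        segment = signals[:, start:start + win]
        # higuchi_fd: see the sketch given with Equations (1)-(4)
        features.extend(higuchi_fd(segment[ch], k_max) for ch in range(n_channels))
    return np.asarray(features)
```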
Once the datasets are prepared, we proceed with classifier training using the scikit-learn Python package. We investigate several ML algorithms, including Nearest Neighbours, Linear SVM, Decision Tree and AdaBoost. We first optimize the classifiers by tuning the hyperparameters under case 1, i.e., at the subject-presentation level. Once the optimal parameters are found, we keep them for cases 2 and 3. The ML algorithms tested and their corresponding parameter ranges are summarized in Table 1. Once the best-performing ML algorithm has been identified, we further optimize it with a grid-search algorithm. Given the small sample size, 10-fold cross-validation (90 percent training / 10 percent validation set) has been applied with a fixed seed.
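To make the evaluation protocol concrete, the following is a minimal scikit-learn sketch of the classifier comparison under the subject-presentation (case 1) and subject-specific (case 2) splits. The feature matrix `X`, labels `y` and subject IDs `groups` are synthetic placeholders shaped like the dataset described above (704 samples, 124 HFD features, 44 subjects), and the hyperparameter values are illustrative defaults, not the tuned values reported in Table 1.

```python
import numpy as np
from sklearn.model_selection import (GridSearchCV, GroupKFold,
                                     StratifiedKFold, cross_val_score)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier

# Synthetic placeholders standing in for the real HFD features and labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((704, 124))        # 44 subjects x 16 presentations
y = np.repeat([0, 1], 352)                 # 0 = expert, 1 = novice
groups = np.repeat(np.arange(44), 16)      # subject ID of each sample

classifiers = {
    "Nearest Neighbours": KNeighborsClassifier(n_neighbors=5),
    "Linear SVM": SVC(kernel="linear", C=1.0),
    "Decision Tree": DecisionTreeClassifier(max_depth=5),
    "AdaBoost": AdaBoostClassifier(n_estimators=100),
}

# Case 1: subject-presentation split (samples shuffled regardless of subject).
cv_pairs = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=cv_pairs).mean()
    print(f"case 1, {name}: {acc:.2f}")

# Case 2: subject-specific split (no subject in both training and validation).
cv_subjects = GroupKFold(n_splits=10)
acc = cross_val_score(classifiers["Linear SVM"], X, y,
                      groups=groups, cv=cv_subjects).mean()
print(f"case 2, Linear SVM: {acc:.2f}")

# Further tuning of one classifier (here: the linear SVM) via grid search.
grid = GridSearchCV(SVC(kernel="linear"), {"C": [0.01, 0.1, 1.0, 10.0]}, cv=cv_pairs)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```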
Results
As described in the introduction, extracting the neural signature of math experts and novices requires careful feature extraction via the HFD method. To calculate the HFD correctly, the hyperparameter k_max requires fine-tuning. Therefore, section 3.1 presents the optimization results for the hyperparameter k_max.
Based on the extracted HFD features, experts and novices are compared in section 3.2, giving insights into which brain regions are relevant for performing mathematical tasks. Finally, based on the features, the classification results between experts and novices are shown in section 3.3.
Optimal k max
Figure 2 shows the value of HFD for all subjects, averaged over all channels, for different values of k_max. HFD is steadily increasing but starts to plateau at a value of 100. Figure 3 shows the difference between the maximum and minimum HFD values for different k_max, in accordance with equation 5. It can be observed that the difference in HFD value reaches a peak at k_max values of 20 and 100 and progressively declines with increasing k_max. Based on the fact that HFD plateaus at k_max equal to 100 and the largest difference between the maximum and minimum HFD values is also found at the same value, k_max = 100 is used for the further analyses.
HFD feature analyses
Figure 4 shows the difference between the average HFD values of experts and novices for the top 10 channels presenting the highest difference between experts and novices. All top 10 channels are statistically significant at the p = 0.05 level. All channels are depicted in the form of a heatmap in Figure 5. The dark blue shaded areas indicate the highest positive difference between experts and novices.
The subsequent finer-grained analysis, comparing the expert-novice difference for the algebraic and geometric presentations separately as given in equation 6, is shown in Figure 6. Although there are differences between the algebraic and geometric presentations, none of them is statistically significant at the p = 0.05 level.
Figure 6: HFD_max - HFD_min calculated as the average over all algebraic vs. geometric presentations for all channels, as defined in equation 6 (panels: average over all algebraic presentations; average over all geometric presentations).
Expert/Novice classification
Table 2 summarizes the classification results between experts and novices. On a subject-presentation split, the accuracy reaches 97%, demonstrating that it is possible to automatically classify math experts and math novices based on their electroencephalogram (EEG) signals while they watch math demonstrations, because the ML model can successfully learn each subject's brainwave signature. However, when we split the training and test sets on a subject level, meaning that we increase the difficulty of the task by introducing the inter-subject variability that is well known to be challenging in biosignal classification, i.e., the trained model is validated on new subjects whose data it has never seen before, the accuracy falls to 66%.
So far, the results have been shown by considering all presentations for each subject, i.e., the calculated HFD features for all presentations are concatenated for the final classification stage. We suspect that the poor classification accuracy could be partially caused by some of the presentations performing poorly. Hence, we perform presentation-specific classification on the subject level, and the classification accuracy improves up to 79% (presentation 7A).
Figure 7 and Figure 8 show the HFD values when a window size of 8 seconds is applied, for the presentations with the highest (presentation 7A) and the lowest (presentation 4G) classification accuracy. The difference in classification accuracy may be explained by a better separation between experts and novices.
Discussion
Advantages of ML for brain research include the data-driven approach, which enables the generation of hypotheses about underlying brain processes at rest or during active engagement with a cognitive or emotional task. Such underlying processes are sometimes impossible to detect through experts' observations. ML also enables explorations of new paradigms with respect to their neurophysiological signatures (Lemm et al., 2011). One such new paradigm is naturalistic study design, which aims to understand the brain during real-life tasks, like solving complex math.
Our novel approach of applying ML to EEG data recorded from math experts and novices during complex math encourages expanding the use of data-driven brain imaging methods from healthcare to education. Our approach utilizing the nonlinear HFD, which measures signal complexity, was reliable in describing the data by systematically detecting the difference in the neural signature of math experts and novices with a 98% cross-validation accuracy. However, the results gained with the ML discriminative algorithm were mixed and showed 50-80 percent classification accuracy when tested with unseen subjects.
Nonlinear fractal dimension methods seem ideal for tracing fluctuations in biological systems, including the brain, which are nonlinear by nature. HFD is a measure of signal complexity in the time domain (Higuchi, 1988 [37]; Spasic et al., 2008 [38]) and has been successfully applied to brain state analysis of EEG in sleep, drowsiness and wakefulness (Inouye et al., 1994 [39]; Klonowski et al., 2005 [35]; Peiris et al., 2006). Our results gained with HFD show a difference in the neural signature between math experts and novices during long and complex math tasks, with a high classification accuracy. These results encourage the use of the HFD method for detecting subtle differences in brain states, like those of math experts and novices, which go beyond the more drastic differences between brain states at different levels of arousal, such as sleep stages or drowsiness and wakefulness.
Although the classification into experts and novices based on HFD was relatively stable for the entire dataset, the ML model adapted poorly to unseen subjects, and we could not overcome the overfitting and high generalization error caused by inter-subject variability. The most important reason for such poor generalization is that our dataset is far too small to be divided into training and test sets on a subject level. In healthcare, big data platforms are increasingly being formed (Eickhoff et al., 2016; Zbontar et al., 2019), and it is important to take similar steps to create large and clearly labeled open data pools for educational neurosciences.
Our small dataset may function reasonably well for the method development of data-driven approaches, since the differences between math experts and novices are statistically significant especially over several frontal electrodes, showing higher frontal signal complexity in math novices in comparison to experts. Cognitively, these results may indicate novices' stronger recruitment of domain-general processes in comparison to experts, which is in line with previous literature (Amalric and Dehaene, 2016 [18]; Wang et al., 2020 [17]).
Some studies have investigated the connection between nonlinear FD methods and linear oscillation analyses over the delta, theta and alpha bands. These studies show a dependence between the nonlinear and linear methods and suggest that the most reliable results are gained when combining nonlinear and linear methods to classify different brain states (Acharya et al., 2005 [18]; Šušmáková and Krakovská, 2008 [40]). Since the combination of nonlinear and linear methods seems to bring the most robust classification results, we could combine the HFD and oscillation analyses and feed the combined information to a machine learning model. Our novel analysis with machine learning utilized only the fractal dimension; however, we report the brain oscillations for the same dataset in other papers (Formaz et al., unpublished data; Poikonen et al., 2022 [9]).
Another interesting way to deepen the analysis of our dataset would be to break the temporal data stream into segments. With a larger dataset and more statistical power, the time points during which the neural signatures of math experts and novices differ the most could potentially be found. This data-driven approach may have practical implications after detecting whether the cortical functions of experts and novices differ the most at the beginning, at the end, or at some other time point during the math demonstrations. With our dataset, the ML algorithm showed 50-80 percent classification accuracy for unseen subjects when breaking the data into temporal segments. Such high variation may be explained by the small dataset, or by a combination of several features related to the length, content and difficulty level of the math demonstrations.
Understanding which parts of the math demonstrations to emphasize when teaching complex math may be helpful in supporting students' development towards math expertise. Such time-dependent information may be hard to collect with questionnaires or other behavioral measures, and therefore brain-originated data-driven methods may be the only way to access such information in the context of learning. Further, these ML models could be used to create learning contexts in which adaptive feedback is given to adjust to the individual needs of a learner or those of a specific group during collaborative learning, building on previous examples like BCI applications for post-stroke motor rehabilitation, or relatively simple neurofeedback applications for focused attention or working memory (Cervera et al., 2018 [11]; Kefalis et al., 2020; Hunkin et al., 2021 [25]). Simple options for BCI interventions for the math demonstrations used in our study might be to adjust the pace of presenting new information, or to scaffold the learning process via instructions or remarks depending on the EEG signal of the learner.
Limitations
Our novel paradigm combining mathematical cognition, cortical activity and ML is exploratory in nature, and we recognize the following limitations. First, the most drastic limitation is the small dataset in use. The straightforward way around it would be to significantly increase the amount of data, e.g., by at least doubling the number of participants. The more data, the better we can estimate the real data distribution of the general population. The second limitation is related to the classes chosen for the ML classification. We chose to compare two groups of participants during the same cognitive task. Another strategy for a small dataset would be to explore individual differences, for example, by aiming to classify the data excerpts of the resting state and the cognitively active state for each participant. Earlier studies show that differentiation of brain states for an individual participant during simple sensory tasks is rather robust, whereas the generalization of cortical activation patterns across a group of participants, and during complex cognitive tasks, is challenging. However, such individual brain state classification would hardly give us any insights into the expert-novice differences during mathematical cognition. The third limitation to consider is that, when preprocessing, we chose to band-pass filter the data with a bandwidth of 0.5-40 Hz due to the contamination of the data with 50 Hz line noise. HFD is associated with changes in delta, theta and alpha oscillations, all of which were included in our analysis. However, gamma oscillations are also known to be important during cognitive tasks (Herrmann et al., 2004), and they have been connected to HFD. Due to the chosen band-pass filtering, gamma activity is not included in our analysis. Based on previous literature, HFD seems to be the most stable fractal dimension method (Kesic and Spasic, 2016 [45]). However, the fourth limitation of our study is the general criticism of HFD that it has a short margin of scale, which may give the same complexity number to signals with only subtle differences. For detecting the possibly small differences in the cortical activity of math experts and novices, some other method with a more detailed scale may be more suitable. Fifth, for the cross-validation, different models could be compared to find a model of ideal complexity, which balances between the overfitting of an unnecessarily complex model and a simple model's inability to adapt to the details of the complex cognitive data. Ideally for ML algorithms, each sample (e.g. the EEG data collected during each math demonstration) would have the same number of data points (e.g. the same duration). However, this is difficult to realize in practice due to the different durations it takes to solve different naturalistic math tasks. In the future, research on brain processes during abstract cognition might be conducted, for example, within a video game context, in which the duration is easier to match across all the rounds played.
Conclusions
The present study used a unique paradigm to compare the neural correlates of math experts and novices while solving naturalistic math demonstrations. Overcoming the limitations of previous studies with reductionist stimuli and linear EEG analysis methods, the brain functions during abstract cognition were measured with high-density EEG during long and complex math demonstrations and analyzed with a relatively rigorous nonlinear method, HFD. Our results indicated that math novices have a higher signal complexity, measured with HFD, than experts over several frontal electrodes, suggesting a stronger engagement of domain-general brain functions. Further, we explored ML algorithms for classifying math experts and novices based on their neural signature. These results were promising, but we also acknowledge that the dataset we had in use was too small for consistent results. We encourage following the example of the brain imaging databases created in healthcare to create a similar database for educational neuroscience. In the future, application possibilities for such a database and deep learning lie in data-driven theory formation for normal and disrupted learning and development, and in adaptive feedback systems for learning contexts.
Data and code availability statement
The dataset analyzed in the present study, as well as the scripting and plotting code, are available from the corresponding authors via email on request.
Figure 2 :
Figure 2: HFD value, averaged across all channels all subjects and presentations for different values of kmax.
Figure 3: Difference between the maximum and minimum HFD values across channels, averaged over all subjects and presentations, for different values of kmax.
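The dependence of the HFD estimate on the kmax parameter scanned in Figures 2 and 3 can be reproduced with a minimal, textbook implementation of Higuchi's algorithm. The sketch below is generic (the toy signal is a random walk used purely for illustration) and is not the authors' analysis code.

```python
import numpy as np

def higuchi_fd(x, kmax):
    """Estimate the Higuchi fractal dimension of a 1-D signal x."""
    x = np.asarray(x, dtype=float)
    n = x.size
    log_k, log_L = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)                 # subsampled time indices
            if idx.size < 2:
                continue
            diff = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((idx.size - 1) * k)    # Higuchi normalisation factor
            lengths.append(diff * norm / k)
        log_k.append(np.log(1.0 / k))
        log_L.append(np.log(np.mean(lengths)))
    slope, _ = np.polyfit(log_k, log_L, 1)           # HFD is the log-log slope
    return slope

signal = np.cumsum(np.random.randn(2000))            # toy 1/f-like signal
print([round(higuchi_fd(signal, kmax), 3) for kmax in (5, 10, 20, 50)])
```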
Figure 4: Top 10 channels with the highest difference between their HFD values. An asterisk (*) indicates that the average HFD of experts differs statistically from that of novices at the p = 0.05 threshold for that specific channel.
Figure 5: Heatmap of the HFD difference between experts and novices. Darker blue indicates the brain areas where the positive differences between experts and novices are largest.
Table 1: Machine learning algorithms used for classification between experts and novices.
Table 2: Classification results between experts and novices based on different classification algorithms for the subject-presentation-pair, subject-specific, and presentation-specific splits. All results are based on 10-fold cross-validation and averaged over 3 random seeds.
ERF: An Empirical Recommender Framework for Ascertaining Appropriate Learning Materials from Stack Overflow Discussions †
Computer programmers require various instructive information during coding and development. Such information is dispersed across different sources like language documentation, wikis, and forums. As an information exchange platform, programmers broadly utilize Stack Overflow, a Web-based question answering site. In this paper, we propose a recommender system which uses a supervised machine learning approach to investigate Stack Overflow posts and present instructive information to programmers. This might help programmers solve the programming problems they confront in their daily work. We analyzed posts related to the two most popular programming languages, Python and PHP. We performed a few trials and found that the supervised approach could effectively extract a wealth of valuable information from our corpus. We validated the performance of our system against human perception, which showed an accuracy of 71%. We also present an interactive interface that satisfies users' queries with the sentences containing the most instructive information.
Introduction
In the field of software engineering, learning is a life-long pursuit for professionals seeking to maintain competency. It has been well established in educational and training circles that fostering communities of learners can facilitate the achievement of learning goals. Online social networks clearly have a strong role to play in this respect. At present, many websites are available to share information and exchange knowledge with peers. Stack Overflow is one such popular Web repository that helps programmers share and obtain useful information about programming. Launched in 2008 by Jeff Atwood and Joel Spolsky, StackOverflow.com has become a widely used forum for people to interact on topics related to computer programming. Over time, it has gained the reputation of being a reliable forum where users get quick responses to their questions, and with a high level of accuracy. Stack Overflow data have been a pool for research from many perspectives. Such research has helped in understanding the power of programming-specific question and answering (Q&A) forums and how far these forums serve as a learning platform.
In computer programming, learning is a genuine interest for programmers who thrive on increasing their proficiency. Given the rapid pace of technological change, the official documentation of programming languages often lags behind [1]. Bacchelli et al. [2] stated that project documentation is commonly inadequate, manuals tend to be outdated, and books may be hard to retrieve or link to the actual task. Online social platforms are a promising way to fill this gap, because programming tasks often demand knowledge that is distributed among numerous people with different specializations and skills [1].
Due to this, Stack Overflow has gained the reputation of being a reliable forum where users get quick responses to their programming-related questions, with a high level of accuracy [3,4]. Stack Overflow data have been a pool for research from many perspectives. Such studies [1,[5][6][7][8][9][10][11] have helped in understanding the power of programming-specific Q&A forums and how widely these forums serve as learning platforms. It has even been argued that answers on Stack Overflow often become a substitute for official product documentation where the official documentation is inadequate [12]. Parnin et al. [5] assert that Stack Overflow has developed into an enormous repository of user-generated content that could supplement conventional technical documentation. Nonetheless, crowd documentation is created in an informal way, with no comprehensive organization or explicit links to API elements [5,13]. Although the value of crowd documentation can be surmised from the number of contributors and contributions present, little has yet been done to understand the extent of those contributions or to assess their worth and quality. Additionally, previous studies have not considered the complexity of examples needed to make documentation adaptable to different levels of developer experience.
Although there is some research [14,15] on recommending tags for Stack Overflow discussions, to the best of our knowledge there is no efficient technique for recommending learning materials to users from Stack Overflow discussions. Considering this, we develop a technique to recommend appropriate learning materials for two popular programming languages, Python and PHP. The main goal of this research is to build a recommendation system using a supervised machine-learning technique to suggest richer discussion posts to the programmer on demand. The contribution of this research is the development of a system that can automatically identify richer discussion posts and recommend them to users from Stack Overflow repositories. We achieved satisfactory results by validating the machine-extracted information against human perception. Our framework provides users with appropriate guidance complementary to the original software documentation. This work is an extended version of our previous paper [16]. The main contributions of this work can be summarized as follows: • We exploit the knowledge repository available from Stack Overflow discussions to generate learning materials related to PHP and Python.
• We use supervised learning techniques to satisfy users' queries with the most insightful matching Q&A discussions.
• We develop a user-friendly interface to help users interact with the system easily.
• We perform subjective and objective evaluations of the system to check its efficiency.
The rest of the paper is organized as follows: Section 2 provides a short review of the related literature. Section 3 presents the design consideration and techniques used for the proposed framework. Section 4 details the implementation and experimental results. Finally, we conclude the paper in Section 5.
Literature Review
In computer programming, continuous learning through online resources is important. Despite the short history of Stack Overflow, many studies have put great effort into investigating the contents, available metadata, and user behavior of this online forum. The existing works identified in our review are categorized into empirical studies of Stack Overflow data, harnessing Stack Overflow data, and improving programming language usage documentation by utilizing social media channels. We have also reviewed work on automatic quality appraisal in other large archives of text-based information.
Empirical Study of Stack Overflow Data
With the particular objective of revealing the fundamental discussion topics, their elementary structure, and their evolution over time, Barua et al. [17] present an approach for analyzing Stack Overflow data. This is a semi-automatic method, and its analysis gives an estimate of the requirements and needs of current developers. Similar research was done by Wang et al. [4], who performed both text and graph-based analyses to derive topics from questions and assign a question to a few topics with certain probabilities. They also investigated the distribution of participants who ask and answer questions on Stack Overflow and observed their behavior. The main limitation of this study is that only 385 questions were analyzed manually, which is not sufficient for analyzing such a huge system as Stack Overflow.
Barzilay et al. [1] reviewed Stack Overflow from the perspective of its design and social communication features. This study also explored its usage in terms of the model-driven programming paradigm and the role it plays in the documentation side of software development. Rosen et al. [18] performed a study on the issues confronted by mobile application developers, examining 13,232,821 Stack Overflow posts related to mobile application development. The findings of this study help highlight the challenges met by mobile developers that require more consideration from the software engineering research and development communities going forward. Another similar work was done by Linares-Vásquez et al. [19], who performed an exploratory analysis of mobile application development issues utilizing Stack Overflow; however, this study analyzed only 450 Android-related posts manually to determine the issues. Bajaj et al. [20] analyzed fine-grained aspects of Stack Overflow related to current Web application development. They carried out a qualitative investigation of more than 5,000,000 Stack Overflow questions related to Web development and extracted noteworthy insights from the data for developers and research groups.
Harnessing Stack Overflow Data
An Eclipse integrated development environment (IDE) plug-in is proposed by Bacchelli et al. [2] for automatically integrating Stack Overflow content; it can formulate questions from the IDE context. They show a discussion view of results with a ranked index. The tool's disconnection from the IDE workflow and the absence of team-collaboration features confine its full potential for program design, which is a limitation of the system. Ponzanelli et al. [21] proposed another tool that provides IDE-integrated information retrieval from Stack Overflow. They graded the significance of their work by notifying developers about existing help when a particular threshold is crossed. Their system shows high efficiency when it deems itself to have enough recommendations, so that developers are not forced to invoke any additional suggestions.
Improving Programming Language Usage Documentation
Campbell et al. [22] questioned whether developer-written documentation provides enough guidance for programmers, based on the observation of hundreds of thousands of programming-related questions posted on Stack Overflow. They managed to find topics related to the PHP and Python languages in need of tutorial documentation using the Latent Dirichlet Allocation (LDA) technique. However, the manual analysis of the topics guided by their method cannot sufficiently address the gaps in the projects' documentation.
An automated comment generator has been presented by Wong et al. [23] for extracting code segments from Stack Overflow and using this information to automatically generate explanatory comments for similar code segments in open-source projects. The authors applied their method to Java and Android projects and automatically generated 102 comments for 23 projects. In a survey, most participants found the produced comments precise, sufficient, concise, and valuable in helping them comprehend the code. This study worked with a small code-description mapping database by including Stack Overflow answers that do not have the highest vote count. It also cannot replace the code clone detection tool with one that can detect insertion and reordering of lines to increase the number of code matches.
Treude et al. [24] analyzed Java API related posts on Stack Overflow to extract useful information that is unavailable in the documentation. Arguably, this work is the most similar to ours, since it likewise harnesses the natural-language content accessible on Stack Overflow. In terms of utilizing Artificial Intelligence (AI) to find content on Stack Overflow, there are some common themes between our work and that of de Souza et al. [8], who built an improved Web search engine utilizing machine learning techniques to find content on Stack Overflow. This study made a qualitative manual analysis of their system, considering only 35 programming problems in three different areas (Swing, Boost, and LINQ).
Kim et al. [11] presented a framework that recommends API documentation to users. This framework uses a Web mining technique to embed summaries of coding-related posts into the API documentation. However, the programming-related database used for their cases may not be representative when utilizing eXoDus. A hands-on, analytical approach for connecting program code examples to the documentation of related APIs is proposed by Petrosyan et al. [25] and Subramanian et al. [10]. Petrosyan et al. [25] used a classification-based method for discovering helpful tutorials for a provided API, applying supervised text classification based on linguistic features. Subramanian et al. [10] implemented an approach called Baker which retrieves JavaScript and Java API documentation with a high precision of about 97%.
Other Related Work
Le et al. [26] have mined posts from "Brainly", a community question-answering site, to determine what constitutes answer quality. They built their classification model by integrating different groups of features: personal, community-based, textual, and contextual. Their findings indicate that personal and community-based features have more predictive power in assessing answer quality. Their approach also achieved high values on other key metrics such as F1-score and area under the Receiver Operating Characteristic (ROC) curve. Numerous answers were deleted from the site because of low quality or lack of clarity; however, the authors did not clarify how the removal of these questions could influence their system's performance.
Shah et al. [27] conducted a large scale analysis of knowledge sharing within "Yahoo! Answers". They considered the notion of user satisfaction as the indication of answer quality and employed logistic regression to predict which of the answers to a given question the user would pick as the best. They demonstrated the robustness of their custom model by validating the results against human perception. As this study selects answers arbitrarily based on prior information, the classifier may fail to identify the best quality answer if it has not been trained with a sufficient amount of good quality training data.
Maity et al. [28] studied "Quora" to build an efficient mechanism to characterize the answerability of the questions asked. They investigated various linguistic activities to discriminate between open and answered questions. Their research discovered the correspondence of language usage patterns to quality factors in the forum. They compared different models related to their mechanism and showed that their system outperforms them with an accuracy of 76.26%.
We have reviewed several research studies related to our work and, based on them, devised a methodology that crawls data from Stack Overflow, pre-processes the data, and performs recommendation using classification. This methodology is implemented as a framework for finding helpful learning materials from Stack Overflow for the popular programming languages Python and PHP. Figure 1 depicts the architecture of the proposed system. The system consists of an interface module, a search module, a processor module, and an input-output module. In the offline phase, data from Stack Overflow undergo pre-processing and feature building, followed by training set annotation and the application of machine learning to predict the desired information from never-before-seen data. Meanwhile, tags available on Stack Overflow are categorized into three distinct sets: novice, intermediate, and expert. All these data are stored in the database. In the online phase, a user interacts with the system, a short vetting step measures the user's expertise in a particular field, and the predefined list of tags is leveraged to retrieve appropriate data from the database.
Dataset Description
Stack Overflow makes its data publicly available online (https://archive.org/details/stackexchange/) in Extensible Markup Language (XML) format under the Creative Commons (CC) license. We downloaded a data dump containing a total of 33,456,633 posts, spanning from July 2008 to September 2016, and imported the XML files into a MySQL relational database. We used simple SQL queries to identify PHP and Python-related questions and find the matching answers. This query resulted in a dataset containing 930,429 "answer-type" posts. We observed that the dataset contained incomplete records; since such data might mislead the machine learning techniques we intended to apply, we discarded the incomplete records, and the size of our initial dataset was reduced to 323,801 posts. Next, we extracted the questions corresponding to the answers in our dataset, as the "question-type" posts contained extra information concerning the question-answer pair. We then removed records with incomplete information, and the size of the dataset was further reduced to 190,215 posts. Following this reduction, we also excluded the corresponding posts from the "answer-type" corpus. We included the reputation score of each asker and answerer, which was calculated based on their activity on the site. Finally, a total of 190,215 posts were left, which were exported from the database in Comma Separated Value (CSV) format to undergo pre-processing.
Data Preprocessing
In the official data dump of Stack Overflow, standalone code snippets are surrounded by the HTML tags <pre> and </pre>. We removed these code snippets because many code elements in such snippets are defined by the askers or answerers of questions for illustration purposes, and these code elements do not refer to software-specific entities that other developers are concerned with. However, we kept small code elements embedded in the post texts that are surrounded by <code> and </code> tags. These small code elements often refer to APIs, programming operators, and simple user-defined code elements used for explanation purposes; removing them from the texts would impair the completeness and meaning of sentences. Finally, we stripped all other HTML tags from the post texts. All of this was done using "pandas" (a data analysis library that provides high-performance data structures and analysis tools) [29] and "lxml" (a feature-rich library for processing XML and HTML) [30]. Next, the posts were divided into individual sentences. Accordingly, we dived into one of the grand challenges in Natural Language Processing (NLP), called Sentence Boundary Detection (SBD). Surprisingly, the state-of-the-art tools could not perfectly account for the unique characteristics of informal programming-related discussions not found in other texts. We examined several NLP libraries, such as "NLTK" [31], "segtok" (http://fnl.es/segtok-a-segmentation-and-tokenizationlibrary.html) and "spaCy" [32], and finally selected "spaCy" as it was found to perform better on our corpus. Our dataset was divided into 719,614 sentences, on which our research was conducted. Since we divided the threads into individual sentences, we assigned a unique identifier to each sentence. This identifier was later used to append the parent posts' attributes to the child sentences. Algorithm 1 summarizes the data pre-processing steps.
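The following is a minimal sketch of this preprocessing step (drop standalone <pre> snippets, keep inline <code> text, strip remaining HTML with lxml, then split posts into sentences with spaCy). It is an illustration under these assumptions, not the authors' Algorithm 1 verbatim; the column names Id and Body follow the public Stack Exchange dump schema.

```python
import pandas as pd
from lxml import html as lxml_html
import spacy

nlp = spacy.load("en_core_web_sm")   # any English pipeline with sentence segmentation

def clean_post(body_html: str) -> str:
    tree = lxml_html.fromstring(body_html)
    for pre in tree.xpath("//pre"):          # standalone code blocks: remove entirely
        pre.getparent().remove(pre)
    # inline <code> elements stay; text_content() keeps their text but drops all tags
    return tree.text_content()

def to_sentences(posts: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for post_id, body in zip(posts["Id"], posts["Body"]):
        doc = nlp(clean_post(body))
        for i, sent in enumerate(doc.sents):
            rows.append({"post_id": post_id, "sent_idx": i, "text": sent.text.strip()})
    return pd.DataFrame(rows)   # one row per sentence, keyed back to its parent post
```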
Training and Test Set Generation
Supervised learning requires labeled data. We selected a batch of 1000 sentences from our constructed subset and manually annotated each with a yes/no rating to indicate whether it was informative. We defined "informative sentences" as those that are "meaningful on their own and convey specific and useful information". Such a sentence is supposed to bring a thorough understanding or a fresh standpoint to the topic at hand. However, it should be noted that everyone can make their own subjective judgements on the quality of content depending on various criteria. In Table 1, we list a few sentences that were considered informative. During manual labeling, we followed the set of rules proposed by Treude et al. [24]. The handcrafted set of 1000 sentences described previously was appended with the corresponding features to build our training and test sets. These sets were excluded from the dataset obtained by data pre-processing (Subsection 3.2), and the remaining 718,614 sentences constituted the validation set. In the next step, we generated the Attribute-Relation File Format (ARFF) files from the training, test, and validation sets; this is the native format for the machine learning tool we used. Table 1. Examples of informative sentences.
Sentence | Post ID (stackoverflow.com)
When an upload occurs, the target script is executed when the upload is complete, so PHP needs to know the maximum sizes beforehand. | 949415
After a DeadlineExceededError, you are allowed a short amount of grace time to handle the exception, e.g., defer the remainder of the computation. | 2248811
If an element is containing floating element, the wrapping element then needs a overflow:auto or a clear:both; to get the height it needs to add background to it. | 12663826
The standard escape character for .htaccess regular expressions is the slash ("\"). | 34464235
The types of sentences that were not considered informative are:
• the sentence resulted from a parsing error;
• the sentence contains an explicit reference to another sentence;
• the sentence contains an explicit reference to a piece of context that is missing;
• the sentence requires another sentence to be complete;
• the sentence contains a comparison that is incomplete (i.e., one part of the comparison is missing);
• the sentence starts with "but", "and", "or", etc.;
• the sentence contains references to "it", "this", "that", etc. that are not resolved within the sentence;
• the sentence is grammatically incomplete;
• the sentence is a question;
• the sentence contains only a link;
• the sentence itself is a code snippet or is prefacing one (often indicated by a colon);
• the sentence references code elements that come from user examples;
• the sentence references specific community users;
• the sentence contains communication between Stack Overflow users;
• the sentence contains a reference to something that is not an obvious part of the Python or PHP language;
• the sentence is a generic statement that is unrelated to the Python or PHP language.
A few example sentences are listed in Table 2, e.g., "You might be surprised to find out that in your sample code, s1 == s2 returns true!"
Feature Extraction
We defined 18 attributes to characterize our data. The construction of our feature set was based on a careful inspection of our corpus. We also took advantage of some previous studies [6,24,33,34] that worked with Stack Overflow data. It is safe to say that our feature set captures the structural, syntactic, and metadata information of the dataset. Out of these 18 attributes, three are related to the number of occurrences of keyword terms in the "question body" and "answer body" of a post, four are related to the presence or absence of source code in the "question body" and "answer body", and the remaining attributes are either directly obtained or calculated from the post metadata. Our procedure for building the feature set is depicted in Algorithm 2, and the resulting attributes are outlined in Table 3. Algorithm 2 parses the dataset to find information such as the size and age of an answer, the time taken by the question to get answered, whether the answer contains any code, the position of each sentence in the answer post, whether the sentence starts with a lower-case letter, and whether the sentence is only a code snippet.
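A hedged sketch of the kind of per-sentence features just described (answer size and age, time-to-answer, code presence, sentence position, casing, reputation) is given below. The exact 18 attributes of Table 3 are not reproduced, and all column names are assumptions about the intermediate data layout, not the authors' actual schema.

```python
import pandas as pd

def build_features(sent: pd.Series, answer: pd.Series, question: pd.Series) -> dict:
    """Compute illustrative features for one sentence of one answer post."""
    return {
        "answer_length": len(answer["Body"]),
        "answer_age_days": (pd.Timestamp("2016-09-01") - answer["CreationDate"]).days,
        "time_to_answer_hours":
            (answer["CreationDate"] - question["CreationDate"]).total_seconds() / 3600,
        "answer_has_code": int("<code>" in answer["Body"]),
        "question_has_code": int("<code>" in question["Body"]),
        "sentence_position": sent["sent_idx"],          # position inside the answer
        "starts_lowercase": int(sent["text"][:1].islower()),
        "is_code_only": int(sent["text"].startswith("<code>")
                            and sent["text"].endswith("</code>")),
        "asker_reputation": question["OwnerReputation"],
        "answerer_reputation": answer["OwnerReputation"],
        "answer_score": answer["Score"],
    }
```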
Classification Experiment
To conduct supervised learning on our training dataset, we used the WEKA workbench, which is recognized as a landmark system in data mining and machine learning [35]. Initially, WEKA identified that our data was imbalanced towards "not informative" instances. To fix this issue, we applied the Synthetic Minority Oversampling Technique (SMOTE) [36] and increased the number of "informative" instances by oversampling. After SMOTE filtering, cross-validation of the dataset led to choosing k = 5 for the Support Vector Machine classifier and k = 3 in the remaining cases. We tested five different machine learning algorithms, including ensemble approaches, on our training set. These algorithms were recommended by previous researchers [8,37] for text classification in the software engineering domain: J48 (Decision Tree) [38], SMO (Support Vector Machine) [39], PART (Decision List) [40], IBk (k-Nearest Neighbour) [38], and Random Forest [41]. For model training and testing, we used 10-fold cross-validation [42,43] to reduce the risk of over-fitting.
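The experiments above were run in WEKA; the snippet below is only a rough Python analogue of the same workflow (SMOTE oversampling, a Random Forest classifier, and 10-fold cross-validation), shown to make the pipeline concrete. X and y stand for the feature matrix and the informative / not-informative labels of the hand-annotated set and are filled with random values here.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 18))                       # 18 features per sentence
y = (rng.random(1000) < 0.2).astype(int)              # imbalanced toward class 0

clf = Pipeline([
    ("smote", SMOTE(k_neighbors=3, random_state=0)),  # oversample minority class on train folds only
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
])

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="f1")
print(f"10-fold F1: {scores.mean():.3f} +/- {scores.std():.3f}")
```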
Ranking and Categorization of Result
During the classification operation, WEKA measured a level of confidence for each prediction made on the "never-before-seen" instances. We exploited this information to rank the "informative" sentences extracted from our test set. To devise a categorization rule in our framework, we followed the approaches used in a few studies [13,20] on topic modeling. We grouped relevant tags by leveraging the query option of the "Stack Exchange Data Explorer" site (https://data.stackexchange.com/stackoverflow/). We inspected this list of tags and partitioned the most popular tags into three categories: novice, intermediate, and expert. These categories were maintained during the presentation of data through our user interface.
Persistent Data Store
We chose a NoSQL database to work at the back-end of our interface. MySQL is adequate for storing the raw data obtained from Stack Overflow; however, for integrating a corpus built from our information retrieval approach, it does not provide the flexibility that a document-based database such as MongoDB can provide. MongoDB implements a B-Tree data structure which allows the query optimizer to quickly sort through and order documents. Han et al. [44] have documented the access speed in MongoDB to be 10 times that of MySQL.
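A small pymongo illustration of how the ranked "informative" sentences might be stored and queried in such a document store is sketched below; the collection and field names are placeholders, not the system's actual schema (the example sentence is the first entry of Table 1).

```python
from pymongo import MongoClient, DESCENDING

client = MongoClient("mongodb://localhost:27017/")
coll = client["erf"]["informative_sentences"]

coll.insert_one({
    "post_id": 949415,
    "text": "When an upload occurs, the target script is executed when the upload "
            "is complete, so PHP needs to know the maximum sizes beforehand.",
    "language": "php",
    "level": "novice",          # novice / intermediate / expert tag category
    "confidence": 0.93,         # classifier confidence used for ranking
})

# Compound index so ranked retrieval per language/level is fast.
coll.create_index([("language", 1), ("level", 1), ("confidence", DESCENDING)])
top = (coll.find({"language": "php", "level": "novice"})
           .sort("confidence", DESCENDING)
           .limit(10))
```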
User Interface
The user interface was designed for simplicity. It could supply users of the system with appropriate documents on particular topics. Figure 2 provides a sample output.
Experimental Settings and Results
This section notes our experimental setup, quantifies the performance of our approach and provides a qualitative evaluation.
Experimental Setup
The system was developed on a machine with an Intel Core i5 2.5 GHz CPU and 4 GB RAM. We used Python and MySQL for data processing and WEKA for analytics. The front-end was developed using Visual C#, and the MongoDB NoSQL database is used at the back-end.
Performance Evaluation
To evaluate the performance of the classifiers in our experiment, we took both accuracy and f-measure as performance metrics. The rationale for taking f-measure as a performance metric is that it allows us to remove any choice bias, as most of the instances were annotated as "not informative". We found that the Random Forest classifier showed the most promising performance. Table 4 shows the accuracy, precision, recall, and f-measure for the classifiers. From the results of Table 4, we can see that for three of the classifiers the precision and recall values are identical; this is because the numbers of false positives and false negatives generated by these three classifiers were found to be almost the same. The confusion matrix produced from 10-fold cross-validation of the Random Forest classifier on our training data is shown in Table 5. It can be seen that the vast majority of the 1,940 instances were classified correctly, achieving an accuracy of 98.3505%. It is evident from this analysis that our approach is quite successful in identifying both "informative" and "not informative" sentences. The trained model of the Random Forest classifier was re-evaluated on the test set to predict the class labels. It predicted 6,886 sentences to be "informative" out of the "never-before-seen" 718,614 sentences in our test set.
Human Validation
Like any human activity, our classification is prone to human bias. Therefore, the predictions of the classifier were subjected to human scrutiny to uncover threats to validity. For this purpose, we randomly selected 100 sentences from the 6,886 sentences extracted by our supervised approach. We asked two persons with a background in computer science (a master's student and a software developer) to participate in the study. Each of them independently rated each sentence as being one of the following: (a) meaningful and conveys useful information; (b) meaningful but does not provide any useful information; (c) requires more context to understand; (d) makes no sense. The inter-rater agreement matrix illustrated in Table 6 shows that, out of the 100 pairs of ratings, 71 were in perfect agreement. The highest number of disagreements related to sentences that were either type (b) (meaningful but does not provide any useful information) or type (c) (requires more context to understand). On the whole, it is safe to say that the human raters found our experimental results to be correct up to 71%. We also validated several chunks containing the top-K ranked sentences against human perception; the aforementioned participants were called in again for this purpose. The results are listed in Tables 7-9 (Table 7 reports the inter-rater agreement matrix for K = 5). We also calculate the performance metrics for each case, using the following heuristics during computation: from the 4x4 matrices in Tables 7-9, we consider cell (1,1) as True Positive, the sum of cells (1,2) and (1,3) as False Positive, the sum of cells (2,2), (3,3) and (4,4) as True Negative, and the sum of cells (1,4), (2,3), (2,4) and (3,4) as False Negative. The result of the comparative study is summarized in Table 10. It is observed that classification predictions with high confidence match the human ratings more closely. That is why the accuracy and precision/recall of the top (more confidently predicted) posts according to our model were higher when matched against human validation results, compared to less confidently classified posts. This indicates that all the evaluation outcomes were consistent.
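The heuristics above translate directly into code: given a 4x4 inter-rater matrix over the categories (a)-(d), collapse it into TP/FP/TN/FN and compute the metrics. The matrix used below is a made-up example, not the values of Tables 7-9.

```python
import numpy as np

M = np.array([
    [60, 5, 4, 2],
    [ 0, 8, 3, 1],
    [ 0, 0, 7, 2],
    [ 0, 0, 0, 8],
])  # M[i, j]: sentences rated category i+1 by one rater and j+1 by the other

tp = M[0, 0]
fp = M[0, 1] + M[0, 2]
tn = M[1, 1] + M[2, 2] + M[3, 3]
fn = M[0, 3] + M[1, 2] + M[1, 3] + M[2, 3]

precision = tp / (tp + fp)
recall = tp / (tp + fn)
accuracy = (tp + tn) / (tp + fp + tn + fn)
f1 = 2 * precision * recall / (precision + recall)
print(f"precision={precision:.2f} recall={recall:.2f} accuracy={accuracy:.2f} f1={f1:.2f}")
```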
Discussion
In this paper, several models for evaluating the content quality of Q&A sites were constructed based on the literature review, and the Random Forest classifier showed the most promising performance. We also found that for three of the classifiers the precision and recall values are identical; this happened because the numbers of false positives and false negatives generated by these three classifiers in our system were found to be almost the same. We also conducted tests to demonstrate the robustness of the answer-quality prediction via 10-fold cross-validation. The trained models have also been applied to a larger "never-before-seen" dataset. We designed a software-specific tutorial extraction system that presents "informative" sentences from Stack Overflow. We realized that one must first understand the unique characteristics of domain-specific texts, which bring unique design challenges. We considered the metadata available on Stack Overflow along with natural language characteristics to construct our information extraction process. Then, based on an understanding of these design challenges, we used state-of-the-art supervised machine learning and NLP techniques to develop our system. We considered several techniques to check the efficiency of our system. The predictions made by our model were further verified by potential users of the system. An interface was built to present the extracted "informative" sentences in a user-friendly manner.
Conclusions
Stack Overflow is a popular website among programmers for finding important information about different programming-related issues. Although there are several methods for recommending tags to users while posting a question on the Stack Overflow website, to our knowledge there is no suitable work on surfacing quality answers for users. Considering this, in this paper we provided a method to recommend effective learning materials for programmers. In our work, we presented an approach to leverage Q&A crowd knowledge. By building models and experimenting with several classifiers, the most significant features for predicting answer quality were discovered. As future work, we plan to extend this work beyond Python and PHP and to experiment with more features that capture the grammatical structure of sentences in Stack Overflow discussions.
Unbiasing time-dependent Variational Monte Carlo by projected quantum evolution
We analyze the accuracy and sample complexity of variational Monte Carlo approaches to simulate the dynamics of many-body quantum systems classically. By systematically studying the relevant stochastic estimators, we are able to: (i) prove that the most used scheme, the time-dependent Variational Monte Carlo (tVMC), is affected by a systematic statistical bias or exponential sample complexity when the wave function contains some (possibly approximate) zeros, an important case for fermionic systems and quantum information protocols; (ii) show that a different scheme based on the solution of an optimization problem at each time step is free from such problems; (iii) improve the sample complexity of this latter approach by several orders of magnitude with respect to previous proofs of concept. Finally, we apply our advancements to study the high-entanglement phase in a protocol of non-Clifford unitary dynamics with local random measurements in 2D, first benchmarking on small spin lattices and then extending to large systems.
Introduction
The boundaries of current computational paradigms shape the scope of questions that can be investigated in many-body quantum systems. Problems such as the dynamics of high-dimensional interacting systems [1], dissipative phase transitions out of equilibrium [2], the simulation of digital quantum circuits [3], or quantum information protocols [4,5,6,7,8,9,10,11,12,13,14] on states with volume-law entanglement all suffer from a lack of efficient computational methods. A class of powerful numerical techniques to treat such problems are variational methods, which rely on an efficient parametrization of the quantum state and stochastic optimization of its parameters. Compared to more established Tensor Network (TN) [15,16] or Quantum Monte Carlo (QMC) [17,18] algorithms, variational approaches coupled with Monte Carlo sampling can simulate high-dimensional or unstructured systems while not suffering from the sign problem [19], making them ideal candidates to target molecules [20,21] and fermionic [22,23,24] or frustrated matter [25]. However, the optimization problem arising in variational calculations is generally non-convex, making it hard to give general convergence guarantees. While state-of-the-art results for the calculation of ground states [25,26,27] have been obtained with Variational Monte Carlo (VMC) [28], variational simulations of dynamics have yet to improve systematically over existing approaches.
Techniques for variational time evolution rely on so-called variational principles to recast Schrödinger's differential equation for the wave function onto non-linear differential equations for the parameters [29]. These latter equations can be integrated with an explicit scheme, the time-dependent Variational Monte Carlo or tVMC [30,31], or with an implicit method that solves an optimization problem at every time-step [32,33,34,35,36]. The first approach has been applied to many systems [37,38,39], but it has struggled to improve significantly upon benchmark methods due to several poorly-understood challenges in the numerical integration. The second method is conceptually more powerful than tVMC, but it has yet to be applied to realistic systems due to an unexpectedly large computational overhead [35].
In this manuscript, we systematically analyze the accuracy and efficiency of stochastic variational methods to tackle dynamical problems. First, we formalize the origin of the numerical challenges affecting tVMC by proving that they arise from the Monte Carlo sampling, which may hide a bias or an exponential cost when the wave function contains zeros, as is the case for many physically-relevant problems. Then, we prove that the high overhead of the implicit integration arises from poor scaling of the Monte Carlo sampling, and we derive a new scheme that lowers the computational cost by several orders of magnitude. We call this scheme projected tVMC (p-tVMC). Finally, we apply the p-tVMC to simulate the dynamics of a quantum system undergoing unitary evolution interspersed with random measurements, which is a paradigmatic model for entanglement phase transitions [4,5,6,7,8,9,10,11,12,13,14].
Figure 1: Sketch of the failure of the tVMC when the state features zeros (red) and of the dynamics generated by the p-tVMC algorithm (blue). When a state with zeros (or near zeros) in the wave function is encountered during the tVMC evolution, such as |Ψ_θ(t_{i+1})⟩, the variational dynamics starts to detach from the exact solution due to a bias (or a vanishing signal-to-noise ratio). In p-tVMC, the optimization problem of projecting the exactly evolved state U|Ψ_θ(t)⟩ onto the variational manifold M of the ansatz |Ψ_θ̄⟩ is solved at each time-step. This is achieved by minimizing a distance in the Hilbert space, namely the infidelity I (as shown in the right panel).
Established methods can access 1D systems [4,5,6,8,12,14] or higher dimensional Clifford dynamics [7,9,10,13], but questions remain on the nature of this transition in the case of non-Clifford 2D dynamics. As a proof of concept, we investigate this regime, which is intractable for TNs due to the rapid entanglement growth and the higher dimensionality, and also for tVMC due to the projective measurements enforcing a large number of zeros in the wave function.
Numerical challenges in time-dependent Variational Monte Carlo
We consider a quantum many-body system whose Hilbert space is spanned by the basis states {|σ⟩}, where σ is a set of quantum numbers that is treated as discrete. The state of the system |Ψ⟩ can be efficiently approximated by a variational ansatz |Ψ_θ⟩ whose wave function Ψ_θ(σ) ≡ ⟨σ|Ψ_θ⟩ is completely specified by a set of P parameters θ = (θ_1, . . ., θ_P), thus we have
$$|\Psi_\theta\rangle = \sum_{\sigma} \Psi_\theta(\sigma)\,|\sigma\rangle .$$
We consider computationally tractable ansätze, meaning that P is polynomially large in the system size and Ψ_θ(σ) can be sampled and queried efficiently [40].
Within this framework, variational dynamics can be encoded into time-dependent parameters θ(t) such that |Ψ_θ(t)⟩ approximates the physical dynamics. In what follows, we focus on the unitary evolution of a time-independent Hamiltonian H, on complex θ and holomorphic Ψ_θ(σ). However, the discussion is general and also applies to non-Hermitian PT-symmetric Hamiltonians [41], imaginary time evolution [18], or open quantum systems obeying the Lindblad Master Equation [42], and can be extended to the non-holomorphic case.
McLachlan's variational principle [43] recasts the Schrödinger equation $\frac{\mathrm{d}|\Psi_\theta\rangle}{\mathrm{d}t} = -iH|\Psi_\theta\rangle$ at every time onto the optimization problem
$$\theta(t+\delta t) = \underset{\bar\theta}{\arg\min}\;\mathcal{D}\!\left(|\Psi_{\bar\theta}\rangle,\, e^{-iH\delta t}|\Psi_{\theta(t)}\rangle\right), \qquad (2)$$
where $\mathcal{D}$ is the Fubini-Study metric and $\delta t$ is a small time-step. By keeping only the leading terms in $\delta t$ in Eq. (2), it is possible to derive the following set of explicit equations of motion for $\theta(t)$:
$$\sum_{k'} S_{kk'}\,\dot\theta_{k'} = -i\,F_k , \qquad (3)$$
where the quantum geometric tensor $S$ and the variational forces $F$ are estimated by Monte Carlo sampling of the Born distribution $\Pi(\sigma) = |\Psi_\theta(\sigma)|^2/\langle\Psi_\theta|\Psi_\theta\rangle$ through the covariance-form estimators
$$S^{\mathrm{MC}}_{kk'} = \mathbb{E}_\Pi\!\left[O_k^* O_{k'}\right] - \mathbb{E}_\Pi\!\left[O_k^*\right]\mathbb{E}_\Pi\!\left[O_{k'}\right], \qquad F^{\mathrm{MC}}_{k} = \mathbb{E}_\Pi\!\left[O_k^* E_{\mathrm{loc}}\right] - \mathbb{E}_\Pi\!\left[O_k^*\right]\mathbb{E}_\Pi\!\left[E_{\mathrm{loc}}\right],$$
with the local energy $E_{\mathrm{loc}}(\sigma) = \langle\sigma|H|\Psi_\theta\rangle/\langle\sigma|\Psi_\theta\rangle$. In the previous relations, the quantities $O_k(\sigma) = \partial_{\theta_k}\log\Psi_\theta(\sigma)$ are the log-derivatives of the variational state.
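As a concrete illustration of these covariance-form estimators, the numpy sketch below evaluates S^MC and F^MC from a batch of samples. The arrays O and E_loc stand for precomputed log-derivatives and local energies of some ansatz and are filled with random numbers here, so the snippet shows only the estimator algebra, not a full tVMC implementation.

```python
import numpy as np

def tvmc_estimators(O, E_loc):
    """O: (N_s, P) complex log-derivatives; E_loc: (N_s,) complex local energies."""
    O_mean = O.mean(axis=0)                                    # E[O_k]
    dO = O - O_mean                                            # centred log-derivatives
    S_mc = dO.conj().T @ dO / O.shape[0]                       # <O_k* O_k'> - <O_k*><O_k'>
    F_mc = dO.conj().T @ (E_loc - E_loc.mean()) / O.shape[0]   # <O_k* E_loc> - <O_k*><E_loc>
    return S_mc, F_mc

# Toy data: 4096 samples, 3 variational parameters.
rng = np.random.default_rng(0)
O = rng.normal(size=(4096, 3)) + 1j * rng.normal(size=(4096, 3))
E_loc = rng.normal(size=4096) + 1j * rng.normal(size=4096)
S_mc, F_mc = tvmc_estimators(O, E_loc)

# The equations of motion are then solved as S theta_dot = -i F
# (a small diagonal shift regularizes the linear system).
theta_dot = np.linalg.solve(S_mc + 1e-6 * np.eye(3), -1j * F_mc)
```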
However, we remark that if the wave function and its derivatives have non-identical support, namely there exist configurations σ for which ⟨σ|Ψ_θ⟩ = 0 while ⟨σ|∂_{θ_k}Ψ_θ⟩ ≠ 0, then those configurations are never sampled and the Monte Carlo averages acquire finite biases b_F and b_S. The previous relations show that F^MC_k and S^MC_{kk'} are biased whenever ⟨σ|Ψ_θ⟩ vanishes on some configurations while ⟨σ|∂_{θ_k}Ψ_θ⟩ does not, leading to a mismatch between the tVMC and the ideal variational dynamics.
This condition may arise from the variational encoding of several physically relevant states, such as basis states |σ⟩, anti-symmetric wave functions (e.g. Slater, Neural Backflow [48], . . . ) or states generated by digital quantum circuits. Another relevant class of affected states are those that underwent projective measurements, which are commonly found in trajectory unravelings of the Lindblad Master Equation [49,50,51] or in quantum information measurement protocols [4,5,6,7,8,9,10,11,12,13,14]. We remark that for continuous systems (such as a particle in free space), the bias may only emerge from zeros in the bulk, since at infinity the wave function and its derivative must both vanish. A pictorial representation of the breakdown of tVMC is shown in Fig. 1.
In realistic calculations, the variational wave function is often arbitrarily close to zero without encoding nodes exactly. In such cases, the biases b_F and b_S are zero. Still, we find that the variances of F^MC and S^MC grow such that an exponential number of samples is required to resolve those quantities with finite accuracy. This phenomenon is revealed by the signal-to-noise ratios (SNRs) of F^MC and S^MC approaching zero. The SNR of a function f of a random variable σ with distribution Π, estimated with N_s samples, is
$$\mathrm{SNR}[f] = \frac{\left|\mathbb{E}_\Pi[f]\right|}{\sqrt{\mathrm{Var}_\Pi[f]/N_s}} .$$
In practical calculations, to ensure an accurate estimation with a finite number of samples N_s we require SNR[f] ≫ 1. This inequality guarantees that the effective signal (mean value) is larger than the statistical fluctuations and thus can be resolved.
In the following, we first discuss a minimal example where finite biases emerge, and we then consider a more realistic case where there are no biases but for which we show that the SNRs go to zero.
Paradigmatic examples
We analyze a toy model where the biases are non-zero and break the tVMC dynamics. The system is a single spin 1/2 described by the ansatz |Ψ_θ⟩ = α|↓⟩ + β|↑⟩ with parameters θ = (α, β). For a fully polarized state such as α = 1, β = 0, the biases of Eqs. (8) and (9) are finite and the stochastic estimates differ from the exact values (see Appendix B.1 for the full calculation). In Fig. 2(a) we show a simulation of an initial state |+⟩ which is rotated by the Hamiltonian H = σ_y. At t = π/4 the state becomes |Ψ_θ⟩ = |↓⟩ and the tVMC evolution gets stuck, as F^MC and S^MC vanish. Similar considerations hold for more than one particle, and in Appendix B.2 we discuss an example for the GHZ state of N = 2 spins.
We now analyze a more realistic case where the variational state does not exactly encode zeros. In particular, we consider a system of N spins in the state |Ψ_ε⟩, which is peaked on a single configuration |σ_0⟩ and has a constant small amplitude √ε on all the other basis states (see Fig. 2(d)), namely
$$\Psi_\varepsilon(\sigma) = \begin{cases} \sqrt{1-(2^{N}-1)\,\varepsilon} & \sigma = \sigma_0 \\ \sqrt{\varepsilon} & \sigma \neq \sigma_0 \end{cases}$$
where 0 < ε < 1/(2^N − 1). We remark that for small ε this ansatz approximates a Hartree-Fock state, which commonly arises from quantum-chemistry Hamiltonians. For ε ≠ 0 the biases b_{F/S} are zero, but for ε ≈ 0 the leading terms of the SNRs of F^MC and S^MC both scale as O(√ε) (see Appendix C for the full calculation). Intuitively, this suggests that the more peaked the state is, the more samples are needed to accurately estimate those quantities; in particular N_s ∝ ε^{-1}. As normalization of the state imposes ε ∝ 2^{-N}, the number of samples necessary to correctly compute the quantum geometric tensor and the variational forces diverges as N_s ∝ 2^N, eliminating the advantage of stochastic sampling and rendering tVMC computationally ineffective.
We consolidate this argument with a numerical experiment involving a commonly adopted setup for quantum dynamics. We evolve with tVMC the state |Ψ_ε⟩ for t ∈ [0, t_f] according to the Transverse Field Ising (TFI) Hamiltonian
$$H_{\mathrm{TFI}} = -J \sum_{\langle i,j\rangle} \sigma^{z}_{i}\sigma^{z}_{j} - h \sum_{i} \sigma^{x}_{i}, \qquad (15)$$
where ⟨i, j⟩ denotes nearest neighbors on a lattice with periodic boundary conditions. In Fig. 2(b) we show evolutions obtained with an increasing number of Monte Carlo samples N_s at fixed ε, demonstrating that the dynamics is correctly reconstructed only at large values of N_s. The scaling of N_s with the system size is studied in Fig. 2(c), where we report the final infidelity of the state obtained with tVMC with respect to the exact solution for different ε and N_s. We remark that the accuracy of the variational simulation improves when ε or N_s are increased, as the statistical fluctuations in the estimated quantities are suppressed. The inset highlights a power-law relation between the N_s necessary to reconstruct the dynamics accurately and ε, proving that N_s ∼ 2^N.
Overview
In this first section, we have shown that the tVMC method can be either biased or require an exponential number of samples when the wave function is exactly or approximately zero. This highlights the necessity of an efficient alternative to tVMC for variational time evolution. We stress that while our considerations on stochastic estimators arose in the context of tVMC, they also apply to ground-state calculations using both plain gradient descent and stochastic reconfiguration [52], because these rely on the same stochastic estimators. However, we believe that in such calculations the additional errors contributed by the bias or the small SNR are mitigated by the iterative optimization scheme, which may avoid the accumulation of errors that instead affects dynamics. We also remark that Monte Carlo variational methods for open quantum systems [53,54,55,56] are possibly affected by the same issues.
Going forward, in Appendix A we propose a modified estimator for the forces F for which the bias and the SNR problems are absent, so that it can efficiently estimate the forces when the standard estimator F^MC fails (see Appendices B and C). To our knowledge, this alternative estimator has not been discussed in the literature before, and preliminary investigations suggest that it already reduces the computational effort needed to reliably find the ground state of some frustrated or fermionic Hamiltonians.
Unfortunately, we could not find a similarly straightforward modification of the estimator for the quantum geometric tensor S. For that reason, the following section presents a completely different scheme that avoids using the tensor.
Projected time-dependent Variational Monte Carlo
We consider the general problem of finding the parameters θ̄ of a variational state |Ψ_θ̄⟩ such that it approximates the state U|Ψ_θ⟩, where θ are known and U is an arbitrary transformation, in terms of a given distance. Taking the distance to be the infidelity I, this can be expressed as the following optimization problem:
$$\bar\theta^{\,\star} = \underset{\bar\theta}{\arg\min}\;\mathcal{I}\!\left(|\Psi_{\bar\theta}\rangle,\, U|\Psi_{\theta}\rangle\right). \qquad (16)$$
Other distance choices, such as the L2 metric, have also been discussed in the literature [36]. Eq. (16) is similar to Eq. (2), but it can treat arbitrary unitaries and can therefore be used to simulate non-infinitesimal gates in quantum circuits [32,33] or to perform state preparation. The solution of Eq. (16) can be found with iterative gradient-based optimizers such as Stochastic Gradient Descent [57], ADAM [58], Natural Gradient [59,60,44] or similar methods. Since this approach consists in projecting the exactly evolved state U|Ψ_θ⟩ onto the manifold of the variational ansatz |Ψ_θ̄⟩, we name it projected time-dependent Variational Monte Carlo (p-tVMC); it is pictorially represented in Fig. 1.
The infidelity in Eq. (16) can be estimated through Monte Carlo sampling as I(θ̄) = E_χ[I_loc(σ, η)]. Many choices for the sampling distribution χ and the local estimator I_loc are possible, but assuming that U is unitary we can sample from the joint Born distribution χ(σ, η) of the two states |Ψ_θ̄⟩ and U|Ψ_θ⟩ [32,33]. We remark that estimating the infidelity using Eq. (17) is efficient if U is K-local [61], namely it acts non-trivially on at most K degrees of freedom (spins, qubits, particles, . . .), where K is polynomially large in the system size. When this is not the case, U can be factored into several sub-terms U = U_1 . . . U_N where each term is K-local, and Eq. (16) must be solved for every sub-unitary U_i. In particular, the unitary propagator of a general Hamiltonian can be decomposed with the Trotter-Suzuki decomposition [62,63] or with other expansions that are unitary up to leading order, such as the Taylor series [35].
We now analyze the estimator I_loc according to the same approach used in the previous section. We find that, in the limit I → 0, the SNR of the estimator scales as $\mathrm{SNR}[\mathcal{I}_{\mathrm{loc}}] \sim \sqrt{N_s\,\mathcal{I}}$. This means that, as the optimization approaches the optimum at I = 0, the number of samples needed to resolve the infidelity increases as I^{-1} (see Appendix E for the analytical calculation). This is systematically different from what happens when minimizing the energy, where the SNR remains constant close to the solution and a constant number of samples can be used to achieve arbitrarily high precision.
To recover this behaviour in the case of infidelity optimization, we propose a new estimator I^CV_loc based upon the Control Variates (CV) technique [64]: a zero-mean control-variate term, built from the squared modulus of the local fidelity ratio and multiplied by a coefficient c ∈ ℝ, is added to Re[I_loc]. The coefficient c can be chosen such that Var_χ[I^CV_loc] attains a minimal value. This optimal value of c, say c*, depends on the parameters of the ansatz, so it changes during an infidelity optimization. However, it is possible to show (see Appendix E for the analytical proof) that in the limit |Ψ_θ̄⟩ → U|Ψ_θ⟩, c* is exactly −1/2. Therefore, we avoid the high cost of estimating c* at each iteration of an optimization and directly use the asymptotically ideal estimator of Eq. (19) with c = −1/2.
To further support this approach, we show in Fig. 3(a) that the CV estimator I^CV_loc features a variance that is orders of magnitude smaller than that of the bare estimator Re I_loc, in such a way that the number of samples needed for the optimization is reduced by almost three orders of magnitude. Additionally, the scaling of the variance with I changes, such that for I → 0 we have Var_χ[I^CV_loc] = O(I²), meaning that the SNR remains asymptotically constant (see the analytical proof in Appendix E). Indeed, in Fig. 3(b) one can see that the SNR of Re I_loc goes to zero when I decreases, while the SNR of the corrected estimator decreases only at smaller values of I. When c = −1/2, I^CV_loc has a rescaled SNR which is constant and larger than 1 over the whole range of I considered, suggesting that our strategy is ideal.
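The variance reduction can be made concrete with the self-contained numpy sketch below, which compares the bare and the control-variate infidelity estimators on a small random state where the exact infidelity can be checked directly. The sign of the control-variate coefficient is fixed here so that the leading fluctuations cancel, which may differ from the sign convention of Eq. (19); the construction is an illustration under these assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 2**8                                     # 8 spins, exact vectors for checking
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)            # variational state
phi = psi + 0.05 * (rng.normal(size=dim) + 1j * rng.normal(size=dim))  # target (exactly evolved) state
psi /= np.linalg.norm(psi)
phi /= np.linalg.norm(phi)

exact_infidelity = 1.0 - abs(np.vdot(phi, psi))**2

p_psi = np.abs(psi)**2; p_psi /= p_psi.sum()
p_phi = np.abs(phi)**2; p_phi /= p_phi.sum()
n_samples = 4096
sigma = rng.choice(dim, size=n_samples, p=p_psi)   # sigma ~ |psi|^2
eta = rng.choice(dim, size=n_samples, p=p_phi)     # eta   ~ |phi|^2

A = (phi[sigma] / psi[sigma]) * (psi[eta] / phi[eta])   # local fidelity ratio, E[A] = |<psi|phi>|^2

bare = 1.0 - np.real(A)                     # plain local infidelity estimator
cv = bare + 0.5 * (np.abs(A)**2 - 1.0)      # add the zero-mean control variate, E[|A|^2 - 1] = 0

for name, est in [("bare", bare), ("CV  ", cv)]:
    print(f"{name}: mean = {est.mean():.5f} (exact {exact_infidelity:.5f}), "
          f"std err = {est.std() / np.sqrt(n_samples):.2e}")
```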
Moreover, the reduced variance of the CV estimator implies that the infidelity gradient computed with it is more accurate [65]. The lower-variance gradient can improve the accuracy of the solution, since its mean value is affected by smaller statistical fluctuations, and can increase the speed of convergence, as it allows for larger learning rates.
This substantial improvement of the sampling cost obtained with the CV estimator of Eq. (19) makes the p-tVMC an efficiently scalable method for simulating large systems, allowing it to address system sizes that have not been investigated before within this approach. A detailed analysis of the CV infidelity estimator is presented in Appendix E, with further extensions in Appendix F. As shown in Fig. 2(a), in Appendix B.2 for the GHZ state, and in Appendix D for adiabatic evolution, the p-tVMC, since it is not affected by biases or vanishing SNR, can simulate dynamics in cases where tVMC fails or is inefficient.
Unitary dynamics with random measurements
In recent years, considerable interest has been devoted to studying entanglement in many-body quantum systems subject to evolution and random local measurements. This is a paradigmatic model of a quantum system coupled to an external environment acting as a measurement apparatus, and is therefore intimately related to the physics of open quantum systems. The competition between the entangling action of the unitary evolution and the localizing effect of the measurements gives rise to a phase transition between volume-law and area-law entanglement in the steady state of the dynamics, with the measurement rate as the order parameter. This phenomenology was originally investigated in quantum circuits [4,5,7,9,10,11,13,14], and more recently for continuous dynamics [6,8,12].
To the best of our knowledge, numerical investigations have so far focused on systems in one dimension, on integrable systems, or on dynamics evolving via efficiently simulable [66] Clifford gates, because of algorithmic limitations. However, several open questions remain on the nature of such transitions in non-integrable 2D systems or non-Clifford circuits, requiring novel computational paradigms.
In this concluding section, we leverage the p-tVMC to simulate the time evolution generated by the (non-Clifford) 2D TFI model subject to random local measurements. This problem cannot be treated efficiently with Tensor Network methods because of the rapid entanglement growth and the exponential cost of exact contractions in 2D. Moreover, as projective measurements insert exponentially many zeros in the wave function amplitudes, the shortcomings of tVMC discussed in this article emerge, resulting in an exponential cost. We consider a 2D spin-1/2 square lattice with side length L, such that the total number of spins is N = L². We evolve the system to time t_f, discretized into time-steps of duration δt. At each step, two operations are performed on the system:
• unitary evolution with U = e^{−iH_TFI δt}, where H_TFI is the 2D TFI Hamiltonian of Eq. (15);
• a projective measurement of each spin in the σ^z basis, independently and with probability p (the measurement rate).
The overall evolution is stochastic and non-unitary.
As the unitary propagator is not K-local, we decompose it with a second-order Trotter scheme as
$$e^{-iH_{\mathrm{TFI}}\delta t} = e^{-iH_{zz}\frac{\delta t}{2}}\, e^{-iH_{x}\delta t}\, e^{-iH_{zz}\frac{\delta t}{2}} + O(\delta t^{3}), \qquad (21)$$
where $H_{zz} = -J\sum_{\langle i,j\rangle}\sigma^{z}_{i}\sigma^{z}_{j}$ and $H_{x} = -h\sum_{i}\sigma^{x}_{i}$. In the first stages of this work we employed the forward-backward scheme as done in [36,67], but we then moved to the Trotterization, as it allowed the use of larger δt and was more practical for our calculations.
We use the p-tVMC to apply each unitary obtained from the decomposition. We remark, however, that since exp(−iH_zz δt/2) is diagonal in the chosen σ^z basis, its p-tVMC optimization problem can be solved analytically, as shown in Appendix G. The unitary containing H_x is instead applied using the p-tVMC, factorizing the propagator into a product of terms, each of which acts on a small subset of the spins; in this way, the number of connected elements to compute is not exponentially large in N. The unitary dynamics that we simulate is a global quench across the critical point of the 2D H_TFI, which corresponds to a transverse field h_c such that h_c/J ≈ 3.044 [68,69,70,71,72]. In particular, the initial state is the paramagnetic ground state in the limit h → ∞, given by $\bigotimes_{i=1}^{N}|+\rangle_i$, and this is evolved into the ferromagnetic phase where h < h_c. The measurement operators are the single-site projectors onto the σ^z eigenstates; in the ansatz, each spin i carries single-site amplitudes ϕ^↑_i and ϕ^↓_i, so that measuring spin i with outcome ↓ translates into setting ϕ^↑_i = 0 and ϕ^↓_i ≠ 0, and vice versa for the other outcome. The measurement probabilities are computed stochastically using Monte Carlo sampling. The protocol of unitary evolution and random measurements is repeated several times, and the final result is obtained by averaging over all these trajectories, as in the Monte Carlo wave function method [49]. To study the entanglement growth of |Ψ_θ⟩, we monitor the Rényi-2 entanglement entropy $S_2(\rho_A) = -\log_2 \mathrm{Tr}\,\rho_A^2$, where ρ_A is the reduced density matrix of the state on a subsystem A. The Rényi-2 entropy is a lower bound for the von Neumann entanglement entropy, and it can be estimated via Monte Carlo sampling [73] as
$$\mathrm{Tr}\,\rho_A^2 = \mathbb{E}_{(\sigma,\eta),\,(\sigma',\eta')\sim\Pi}\!\left[\frac{\Psi_\theta(\sigma',\eta)\,\Psi_\theta(\sigma,\eta')}{\Psi_\theta(\sigma,\eta)\,\Psi_\theta(\sigma',\eta')}\right], \qquad (23)$$
where σ, σ′ ∈ A and η, η′ ∈ B (the complement of A).
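As a sanity check of this swap-type estimator, the small numpy sketch below compares the Monte Carlo estimate of Tr ρ_A² (and hence S₂) against the exact value computed from the reduced density matrix of a random state. System and partition sizes are arbitrary toy values, and the full wave function is stored explicitly, which is of course not possible in the variational simulations described above.

```python
import numpy as np

n, nA = 8, 3                                     # 8 spins, subsystem A = first 3
dimA, dimB = 2**nA, 2**(n - nA)
rng = np.random.default_rng(1)
psi = rng.normal(size=(dimA, dimB)) + 1j * rng.normal(size=(dimA, dimB))
psi /= np.linalg.norm(psi)

# Exact value from the reduced density matrix.
rhoA = psi @ psi.conj().T
S2_exact = -np.log2(np.real(np.trace(rhoA @ rhoA)))

# Monte Carlo swap estimator: two independent samples, A-parts exchanged in the numerator.
p = (np.abs(psi)**2).ravel(); p /= p.sum()
n_samples = 200_000
idx1 = rng.choice(p.size, size=n_samples, p=p)
idx2 = rng.choice(p.size, size=n_samples, p=p)
a1, b1 = np.unravel_index(idx1, (dimA, dimB))    # (sigma, eta)
a2, b2 = np.unravel_index(idx2, (dimA, dimB))    # (sigma', eta')
swap = psi[a2, b1] * psi[a1, b2] / (psi[a1, b1] * psi[a2, b2])
S2_mc = -np.log2(np.real(swap.mean()))

print(f"exact S2 = {S2_exact:.4f},  MC S2 = {S2_mc:.4f}")
```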
See Appendix H for a derivation of Eq. (23). Fig. 4 shows the evolution of S_2(ρ_A) for subsystems of size |A| ∈ [1, ⌊N/2⌋] in lattices with L = {4, 5, 6} and with measurement rate p = 0.01. To assess the quality of the variational simulations, we compare with ED for L = 4, 5 and select a feature density for the employed RBM ansatz (which determines its expressivity) that gives a satisfying level of precision. Given the chosen hyper-parameters, there is excellent agreement for small subsystem sizes (|A| ≲ ⌊N/4⌋) and good agreement for the largest partitions. The Rényi-2 entanglement entropy is 0 at t = 0, as expected for the initial product state, and it grows linearly over time, plateauing at a value proportional to |A|. The Page-like curves [74] in the insets of Fig. 4 suggest that with p = 0.01 the steady states belong to a high-entanglement phase. However, due to possible finite-size effects, it is not obvious whether the scaling of S_2 with the subsystem size is linear, witnessing a volume law, or logarithmic, as in critical phases. In any case, a low-entanglement regime with area-law scaling is excluded, since in that situation the steady-state S_2 would not change for subsystems with the same boundary length (indicated with equal markers in the insets). Instead, what is observed is that S_2 increases with |A| independently of the boundary length, at least far from ⌊N/2⌋ where finite-size effects might play a role. The proportionality of the entanglement growth rate at early times to the boundaries of the partitions is in accordance with the Lieb-Robinson bound [75], valid for local Hamiltonians.
Conclusions
In this manuscript, we proved that the standard approach to Monte Carlo variational dynamics, the tVMC, can be limited by a finite bias or by an exponentially small signal-to-noise ratio when the wave function contains exact zeros (nodes) or is only approximately zero on some configurations. This implies that the tVMC cannot efficiently simulate the time evolution of physically relevant cases such as completely polarized wave functions, or states arising from digital quantum circuits or measurement processes, including open dynamics with quantum jumps. Subsequently, we formalized an alternative scheme, which consists of solving an optimization problem at each time step using the infidelity distance, and we introduced a novel stochastic estimator that makes this approach viable and scalable to large systems. Finally, we showed that our method addresses the lack of efficient algorithms for investigating the high-entanglement phase in a protocol of non-Clifford unitary dynamics with local random measurements in 2D. This enables future investigation into the physics of several classes of systems, including measurement-induced phase transitions in non-trivial models above 1D and the physics of dissipative systems, all of which are currently limited by the available computational methods. In particular, a direct application of the projected method would be the variational simulation of quantum trajectories arising from the unraveling of the Lindblad master equation.
Data availability
All the simulations have been performed using Netket 3 [76,77] with MPI and MPI4jax [78].The code for the p-tVMC method can be found in [79].
where, as in the main text, E_loc(σ) is the local energy and O_k(σ) are the log-derivatives of the ansatz. The alternative estimator F̃^MC_k does not have a covariance form like F^MC_k, and therefore it has, in general, larger statistical fluctuations when estimated using a finite number of samples. Moreover, since O_k(σ) cannot be used to compute the first term in Eq. (24), its computational cost is generally higher than that of the standard estimator.
B Examples of biases in tVMC
B.1 One-spin system
We consider a system of N = 1 spin 1/2 whose state is represented by the variational ansatz |Ψ_θ⟩ = α|↓⟩ + β|↑⟩ with parameters θ = (α, β). The evolution is generated by the Hamiltonian H = σ_y. For the choice α = 1 and β = 0, the exact forces F_k and the exact matrix S_{kk′} can be computed analytically and do not vanish identically. However, due to the covariance form of the standard tVMC estimates F^MC_k and S^MC_{kk′}, evaluated on samples that include only one value of σ, we have F^MC_k = 0 and S^MC_{kk′} = 0 for all k, k′. We verify that, instead, the stochastic estimate F̃^MC_k obtained with the alternative estimator proposed in the previous section is unbiased.
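The bias described in this example can be checked numerically. The sketch below evaluates the exact variational forces ⟨∂_kΨ|(H − ⟨H⟩)|Ψ⟩ for |Ψ⟩ = |↓⟩ and H = σ_y and compares them with the covariance-form Monte Carlo estimates, which vanish identically because every sample drawn from |Ψ|² is the same configuration. The basis ordering and the use of the unnormalized-state expression (valid here since ⟨Ψ|Ψ⟩ = 1) are conventions chosen for the sketch.

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])          # H = sigma_y; basis order: index 0 = |up>, 1 = |down>

a, b = 1.0, 0.0
psi = np.array([b, a], dtype=complex)        # amplitudes (psi(up), psi(down)) -> state |down>
dpsi = {"a": np.array([0, 1], dtype=complex),   # d psi / d a
        "b": np.array([1, 0], dtype=complex)}   # d psi / d b

E = np.vdot(psi, sy @ psi).real              # <H> = 0 for this state
for k, dk in dpsi.items():
    exact = np.vdot(dk, (sy - E * np.eye(2)) @ psi)     # exact force <d_k Psi|(H - <H>)|Psi>
    print(k, "exact force:", exact)                     # F_b = -1j, nonzero

# Covariance-form MC estimate: every sample from |psi|^2 is sigma = |down> (index 1).
samples = np.full(1000, 1)
Eloc_s = (sy @ psi)[samples] / psi[samples]             # local energy on the samples (all 0)
for k in dpsi:
    O_s = dpsi[k][samples] / psi[samples]               # log-derivatives on the samples
    F_mc = np.mean(np.conj(O_s) * Eloc_s) - np.mean(np.conj(O_s)) * np.mean(Eloc_s)
    print(k, "covariance-form MC estimate:", F_mc)      # identically 0: the estimate is biased
```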
B.2 Two-spin system
We consider a system of N = 2 spins 1/2 whose state is represented by a variational ansatz initialized in the state |GHZ⟩. We consider the dynamics generated by the TFI Hamiltonian H_TFI with coupling J and transverse field h. In these conditions, F and S differ from F^MC and S^MC. One can verify that, as in the one-spin case, the alternative estimator F̃^MC gives an unbiased estimate of the variational forces. As shown in Eq. (28), S^MC has a lower rank than S, while F^MC is identically zero, meaning that this dynamics starting from |GHZ⟩ cannot be evolved with the tVMC. On the contrary, the p-tVMC is unaffected by this problem and can perform the evolution, as shown in Fig. 5.
C Signal to noise ratios in Monte Carlo estimates
We consider a system of N spins 1/2 and normalized variational states |Ψ_ε⟩ whose wave function is parameterized by a single parameter ε and is peaked around a given configuration σ_0. In the following, we prove that the signal-to-noise ratios (SNRs) of F^MC_k and S^MC_{kk′} scale as O(√ε), namely they diminish indefinitely as the wave function becomes more peaked around σ_0. In order to ensure normalization of |Ψ_ε⟩ over different system sizes, ε scales as 1/2^N. Therefore, the two SNRs diminish exponentially as N increases, and so the number of samples needed to resolve F^MC_k and S^MC_{kk′} with finite precision is exponentially large in the system size. Instead, the SNR of the unbiased estimate F̃^MC_k of Eq. (24) is O(1) in ε, enabling it to efficiently estimate the forces in the limit ε → 0, where F^MC_k cannot be used, independently of the system size.
Proof. The variational log-derivative of Ψ_ε(σ) takes a simple closed form. We define (1 − (2^N − 1)ε) ≡ α for brevity and observe that α → 1 when ε → 0. The variance of stochastic estimates of this form follows from the standard expression, and for the local energy the corresponding result holds, where the notation H_{σσ′} ≡ ⟨σ|H|σ′⟩ for any σ, σ′ is used. Therefore, keeping the leading-order terms in 1/ε, one obtains the variances of the stochastic estimates. Since F^MC_k ≈ Σ_{σ≠σ_0} H_{σσ_0} α/(4ε) and S^MC_{kk′} ≈ (2^N − 1)/(4ε) for ε ≪ 1, the stated scaling of the two SNRs follows. The two SNRs go to zero when the state becomes more peaked because, when ε → 0, we have |Ψ_ε(σ)|² → 0 for σ ≠ σ_0, so these configurations are rarely sampled, while their contributions to the estimators increase as ε is reduced, as can be verified explicitly for σ ≠ σ_0. It is possible to verify that, using the unbiased force estimate F̃^MC_k of Eq. (24), the SNR remains constant in the limit ε → 0, making it able to efficiently compute the variational forces where the standard estimate F^MC_k fails. This is because the original sum in the first term of Eq. (24) already runs over the points where the wave function is non-zero, so to obtain the corresponding estimator it is sufficient to divide by ⟨Ψ_θ|σ⟩ without excluding any point. Instead, to obtain the estimator of the first term in the expression of F^MC_k it is necessary to divide by |Ψ_θ(σ)|², implicitly excluding from the Monte Carlo expression the points for which the bias b_F can be non-zero. Considering only the contribution of the first term in F̃^MC_k, since this is the problematic one in the standard estimate and the second term is common to F^MC_k, one obtains that its SNR stays finite as ε → 0, which gives the claimed O(1) behaviour.
D Example of vanishing SNRs in tVMC
We consider a chain of N spins 1/2, initially in the ground state of the Hamiltonian −Σ_i σ^x_i, which evolves according to a time-dependent Hamiltonian in which γ(t) oscillates between 0 and 1 with a triangular profile of period T, namely γ(t) = t/T if 0 < t < T and γ(t) = 1 − (t − T)/T if t > T. It is known that for a sufficiently large T, i.e., for an adiabatic evolution, the state at t = T is ε-close to ⊗_{i=1}^{N} |↓⟩_i, i.e., an instance of the peaked states |Ψ_ε⟩ of the previous section for some ε depending on T.
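A minimal sketch of the triangular ramp γ(t) described above; the explicit time-dependent Hamiltonian is not reproduced in this excerpt, so the interpolation indicated in the comment is only schematic.

```python
import numpy as np

def gamma(t, T):
    """Triangular ramp described in the text: 0 -> 1 on [0, T], then 1 -> 0 on [T, 2T]."""
    t = np.asarray(t, dtype=float)
    return np.where(t < T, t / T, 1.0 - (t - T) / T)

# Schematically, H(t) interpolates between the initial -sum_i sigma^x_i term and a target
# Hamiltonian whose ground state is the all-down product state, e.g.
#   H(t) = (1 - gamma(t, T)) * H_x + gamma(t, T) * H_target   (form assumed for illustration).
print(gamma([0.0, 0.5, 1.0, 1.5, 2.0], T=1.0))
```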
While in this case the biases in the tVMC estimates are zero, in Fig. 6 we show that the dynamics is correctly reconstructed for t > T only when choosing a sufficiently large number of samples, N_s = 10^4 (the Hilbert space in this case has dimension 2^10 ≈ 10^3), hinting at the exponentially small SNRs of F^MC and S^MC. As before, the p-tVMC can efficiently simulate this dynamics with a moderate number of N_s = 10^3 samples, for which the tVMC instead fails.
E Infidelity estimator
where I_loc(σ, η) is the term in brackets and χ(σ, η) is the corresponding sampling distribution. Similarly, the (conjugate) gradient of the infidelity can be computed stochastically, where O_k(σ) = ∂_{θ_k} log Ψ_θ(σ). We remark that the first term in this expression is free from the problem present in F^MC_k and S^MC_{kk′} when |Ψ_θ⟩ → U|Ψ_θ⟩. Indeed, in this limit we have that, for σ where Ψ_θ(σ) = 0, the term ⟨∂_{θ_k}Ψ_θ|σ⟩⟨σ|U|Ψ_θ⟩ vanishes, so the bias as in Eq. (8) disappears. Since we consider continuous-time dynamics generated by a succession of infinitesimal U's, such that |Ψ_θ⟩ ≈ U|Ψ_θ⟩ already at the beginning of the optimizations, we can use the gradient in Eq. (41) without incurring the issues discussed for the tVMC.
The variance of I_loc can be directly computed from that of F_loc = 1 − I_loc. This proves that I_loc has a variance bounded above by 1 (because 0 ≤ I ≤ 1), as already noted in Ref. [80] of the main text, and that it features a zero-variance principle, since Var_χ[I_loc(σ, η)] → 0 when the solution is approached (I → 0), just like the energy in VMC ground-state searches. However, the SNR of I_loc vanishes when I → 0. This entails that, when approaching the solution, the number of samples N_s must diverge as 1/I to resolve the infidelity with finite accuracy. We find that this problem of diverging sampling overhead can be eliminated by applying the Control Variates (CV) technique to I_loc. Since I is real, Re I_loc can be considered in place of I_loc in all the following. CV consists of adding to an estimator an additional random variable that is correlated with it and whose expectation value is known exactly, such that the fluctuations of this variable cancel out those of the original estimator. For the infidelity, we discovered that |F_loc|² satisfies the required properties, since it is correlated with I_loc and its expectation value over χ is known exactly. Therefore, we can define the CV infidelity estimator with a coefficient c ∈ ℝ. I^CV_loc has the same mean as Re I_loc for any c, while its variance depends on c. Therefore, c can be chosen such that Var_χ[I^CV_loc(σ, η)] is minimized. This optimal value, say c*, is expressed in terms of Cov_χ[A(σ), B(σ)], the covariance of functions A and B of a random variable σ with distribution χ, and yields the corresponding minimized variance. For reasons similar to the ones previously discussed for the first term in Eq. (41), the estimation of the CV factor |1 − I_loc|² is not affected by a bias or a vanishing SNR, unlike F^MC_k and S^MC_{kk′}, when |Ψ_θ⟩ → U|Ψ_θ⟩, and it is precisely in this limit that CV acts to keep the SNR constant. Therefore, I^CV_loc is a well-defined infidelity estimator.
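A small self-contained sketch of the CV construction on a toy problem is given below. The definitions of χ, F_loc and I_loc used in the code are the standard ones for the infidelity (assumed here, since the corresponding equations are not reproduced in this excerpt), and the coefficient is computed from the samples with the usual control-variates formula, so its sign convention may differ from the one used in the paper.

```python
import numpy as np

# With a target |phi> and a variational |psi>, samples (s, e) are drawn from
# chi(s, e) ∝ |psi(s)|^2 |phi(e)|^2 and F_loc(s, e) = phi(s) psi(e) / (psi(s) phi(e)),
# so that E[F_loc] = 1 - I and E[|F_loc|^2] = 1 (which is why |F_loc|^2 works as a CV).
rng = np.random.default_rng(1)
dim, Ns = 64, 20_000

phi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi = phi + 0.05 * (rng.normal(size=dim) + 1j * rng.normal(size=dim))   # close to phi
psi /= np.linalg.norm(psi)
phi /= np.linalg.norm(phi)
I_exact = 1 - np.abs(np.vdot(psi, phi)) ** 2

s = rng.choice(dim, size=Ns, p=np.abs(psi) ** 2)
e = rng.choice(dim, size=Ns, p=np.abs(phi) ** 2)
F_loc = phi[s] * psi[e] / (psi[s] * phi[e])

I_bare = np.real(1 - F_loc)
cv = np.abs(F_loc) ** 2 - 1                       # zero-mean control variate
c_opt = -np.cov(I_bare, np.abs(F_loc) ** 2)[0, 1] / np.var(np.abs(F_loc) ** 2)
I_cv = I_bare + c_opt * cv                        # same mean, reduced variance near I -> 0
print(I_exact, I_bare.mean(), I_cv.mean())
print(I_bare.var(), I_cv.var())
```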
F Infidelity estimator with importance sampling
The estimator Re I_loc(σ, η) is unbounded because it contains ratios of wave-function amplitudes. Therefore, it may diverge if ⟨σ|U|Ψ_θ⟩ and ⟨σ|Ψ_θ⟩, or ⟨η|U†|Ψ_θ⟩ and ⟨η|Ψ_θ⟩, differ by orders of magnitude. This is a severe problem when evolving the variational state after local measurements, since it may happen that on some configurations the state |Ψ_θ⟩ is close to zero but U†|Ψ_θ⟩ is not. Therefore, although these configurations are rarely sampled, the estimator I_loc(σ, η) on them can be very large, significantly skewing the statistics. A way to include these outliers in the sampling is to perform importance sampling (see Ref. [64] of the main text), namely by rewriting the expectation value over a given distribution χ′. The latter can be chosen such that it minimizes the variance of the estimator, leading to the expression with weights w(σ, η) = E_χ[|I_loc(σ, η)|]/|I_loc(σ, η)|. Using I^CV_loc in place of I_loc, importance sampling can be combined with control variates, obtaining a combined estimator. The cost of importance sampling is almost n times larger than the cost of standard sampling, where n is the number of spins on which U acts non-trivially.
G Exact application of a diagonal propagator
The exact solution θ of the general optimization problem in the p-tVMC must satisfy Eq. (50) for all configurations σ, with a constant C equal for all σ. For certain U and variational states, Eq. (50) can be solved exactly. In particular, if U is diagonal in the basis {|σ⟩} and if it is possible to write ⟨σ|Ψ_{θ+δθ}⟩ = ⟨σ|Ψ_θ⟩⟨σ|Ψ_δθ⟩ for all σ with some update δθ, Eq. (50) reduces to Eq. (51), where U_σ is the eigenvalue of U for the eigenstate |σ⟩. In the following, we consider spin systems for simplicity. For the RBM, Eq. (51) admits a solution when U = R^z_i(ϕ_z) = e^{−iϕ_z σ^z_i}, consisting of an update of the visible bias only. If U = R^zz_ij(ϕ_zz) = e^{−iϕ_zz σ^z_i σ^z_j}, instead, a simple parameter update is not sufficient: the transformation can be exactly implemented by adding a hidden unit (see Refs. [32,33] of the main text). However, in general, starting with an arbitrary ansatz |Ψ_θ⟩, it is possible to add two variational terms such that both R^z_i and R^zz_ij can be exactly simulated with a parameter change. Indeed, |Ψ_θ⟩ can be modified into a new ansatz |Ψ_{J^(1)_i, J^(2)_ij, θ}⟩ with two additional sets of parameters {J^(1)_i, J^(2)_ij}. The two exponential terms factorize as required by Eq. (51), such that the application of R^z_i(ϕ_z) translates into the updates δJ^(1)_i = ϕ_z, δJ^(2)_ij = 0 and δθ = 0, while for R^zz_ij(ϕ_zz) the changes δJ^(2)_ij = ϕ_zz and δθ = 0 are required. Adding many two-site terms to the ansatz, it is possible to simulate the dynamics of the diagonal part of the TFI Hamiltonian H_TFI exactly.
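A sketch of this idea is given below: an arbitrary ansatz is "dressed" with diagonal one- and two-site terms, and the diagonal gates R^z_i and R^zz_ij are then applied exactly by shifting those parameters. The precise parameterization (factors of i and signs) is an assumption of the sketch and may differ from the definition used in the paper.

```python
import numpy as np

# Assumed dressed log-amplitude (s_i = +/-1):
#   log Psi(s) = log Psi_theta(s) - i * sum_i J1[i] s_i - i * sum_{i<j} J2[i, j] s_i s_j,
# so that applying exp(-i phi s_i s_j) to every amplitude is exactly J2[i, j] += phi.
class DiagonalDressedAnsatz:
    def __init__(self, n, log_psi_theta):
        self.n = n
        self.log_psi_theta = log_psi_theta          # callable: configuration -> complex log-amplitude
        self.J1 = np.zeros(n)
        self.J2 = np.zeros((n, n))

    def log_psi(self, s):
        s = np.asarray(s)
        return (self.log_psi_theta(s)
                - 1j * (self.J1 @ s)
                - 1j * (s @ np.triu(self.J2, 1) @ s))

    def apply_rz(self, i, phi):                     # exact, no optimization needed
        self.J1[i] += phi

    def apply_rzz(self, i, j, phi):                 # exact, no optimization needed
        self.J2[min(i, j), max(i, j)] += phi

# Example: dress a trivial ansatz and apply exp(-i * Hzz * dt/2) bond by bond on a chain.
ansatz = DiagonalDressedAnsatz(4, lambda s: 0.0)
J, dt = 1.0, 0.05
for (i, j) in [(0, 1), (1, 2), (2, 3)]:
    ansatz.apply_rzz(i, j, phi=-J * dt / 2)         # sign follows the convention assumed above
print(ansatz.log_psi(np.array([1, -1, 1, -1])))
```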
Figure 2: (a-b) Dynamics of the z-magnetization ⟨σ z i ⟩ for: (a) the N = 1 spin 1/2 toy-model, where the initial state |+⟩ is rotated with σ y , which is simulated by Exact Diagonalization (ED), tVMC and p-tVMC; (b) a system of N = 4 spins 1/2, initialized in |Ψε⟩ with ε = 1.5 • 10 −4 and evolved via H TFI (J = h = 1), which is simulated by ED and tVMC with increasing number of samples Ns (see colorbar).For the tVMC the time-step δt = 10 −3 has been used, while for the p-tVMC δt = 10 −2 .(c) Infidelity I among states evolved with tVMC and with ED after a time t f starting from |Ψε⟩ with increasing Ns (see colorbar) and different ε.The inset shows how the minimal Ns to reach I = 5•10 −3 (indicated by a dashed line in (c)) scales with ε (markers) and a power-law fit (red line).In (a) for tVMC the ansatz parametrizing the wave function amplitudes is used, while in (b-c) a Restricted Boltzmann Machine (RBM) [47] with α = 1.(d) Illustration of the Born distribution for the peaked states |Ψε⟩.
Figure 3: (a) Learning curves for the infidelity I using the bare estimator Re I loc and the CV estimator I CV loc .In the inset the corresponding variances, generically indicated as Varχ[I], are shown.(b) Rescaled signal to noise ratio of the infidelity SNRχ[I] √ Ns as a function of I for the bare and the CV estimators with different values of c.We remark that the slope of the curves of the SNR with c ̸ = −0.5 in the limit I → 0 is approximately 1, which is better than what is given by Eq. (43).This is because the estimator used in practical calculations is Re I loc instead of simply I loc (further discussion in Appendix E).In both (a) and (b) a system of N = 8 spins 1/2 is considered and the transformation used is U = exp −i δt 2 Hzz , where Hzz = −J ⟨i,j⟩ σ z i σ z j with J = 1 and δt = 5 • 10 −2 .In (b) Ns = 10 4 samples have been employed.
Figure 4: Time evolution of the Rényi-2 entropy S2 simulated with p-tVMC for H TFI interspersed with random local measurements along z in 2D L × L lattices.For L = 4 and L = 5 we also provide ED benchmark results.Subsystems of increasing size |A| up to the maximum ⌊N/2⌋ are considered, and the corresponding markers are indicated with different colors according to the colorbar.The insets show the scaling of S2 as a function of |A| in the steady-state for the three lattices.S2 for subsystems with equal boundary length is indicated using the same marker.The initial state is N i=1 |+⟩ i and the parameters of H TFI are J = 1/2 and h = hc/4.The measurement rate is p = 0.01 and the time interval is δt = 0.1.The results are averaged over 5 trajectories.The ansatz is an RBM with α = 4, endowed with the variational terms to exactly implement the diagonal part of the propagator and the measurements.The number of samples is Ns = 10 4 for L = 4, and Ns = 2 • 10 4 for L = 5, 6.
| 9,735.6 | 2023-05-23T00:00:00.000 | [
"Physics"
] |
Cascade Model Predictive Current Control for Five-phase Permanent Magnet Synchronous Motor
In model predictive control for the five-phase permanent magnet synchronous motor (PMSM), it is difficult to guarantee optimal control performance when the weighting factor is obtained by calculation methods. To solve this problem, a cascade model predictive current control based on the idea of sequential model predictive control is proposed for the five-phase PMSM for the first time. Firstly, the principle by which the proposed method selects the optimal voltage vector is analyzed in detail. Then, combined with the characteristics of the five-phase PMSM, the control priority of the controlled variables is set to design two cost-function schemes without a weighting factor. The maximum torque scheme generates a trapezoidal stator voltage, improving the DC-bus voltage utilization and the loading capacity of the system. The minimum harmonic current scheme reduces the harmonics of the stator current, yielding low system noise and vibration. The experimental results indicate that the proposed method ensures the optimality of the applied voltage vector. Therefore, the five-phase PMSM achieves good performance under different operating conditions, such as small torque ripple, fast dynamic response, and small harmonic current.
I. INTRODUCTION
Permanent magnet synchronous motors (PMSMs) are broadly used in industrial production, such as wind power, electric locomotives, and numerically controlled machine tools [1]. Against the background of vigorous expansion in new-energy application technology, the electric drive system composed of a multi-phase PMSM and a buck-boost power converter caters to the development trend of new energy [2] [3]. Owing to the increased number of stator winding phases, a multi-phase PMSM can continue to operate even if two of the bridge arms are broken. Thus, multi-phase PMSMs offer strong reliability and fault tolerance, together with large output power at low voltage and low torque ripple. Multi-phase PMSMs can meet the needs of high-performance AC drives in terms of power, safety, and reliability, enjoying broad application prospects in new-energy electric vehicles and electric propulsion aircraft [4], [5].
As an advanced control method, model predictive control (MPC) becomes a hot research topic in recent years. A large number of study results indicate that MPC is an emerging high performance control method for AC motor that can be seen as an alternative version of vector control and direct torque control [6] [7]. According to optimization problem solution, MPC is divided into finite control set model predictive control (FCS-MPC) and continuous set model predictive control [8]. Based on the discrete mathematical model of the inverter, FCS-MPC can select an optimal one from a limited number of switching states through enumeration method. Therefore, the modulator used to generate PWM (pulse width modulation) driving signal is removed in FCS-MPC. Meanwhile, the nonlinear problem of the inverter can be well solved [9].
Due to its simple implementation and fast dynamic response, FCS-MPC is very suitable for controlling the multivariable and nonlinear PMSM [10]. Model predictive torque control (MPTC) and model predictive current control (MPCC) are the two basic FCS-MPC methods for PMSM. In both approaches, the key implementation technologies are the prediction model, used to calculate the future state of the system, and the cost function, used to select an optimal voltage vector that balances the performance of flux and torque in MPTC, or of the d-axis and q-axis stator currents in MPCC.
For the PMSM to remain stable, it is necessary to properly design the weighting factor in the cost function because of the distinct dimensions of the controlled variables. Nevertheless, current research indicates that it is still difficult to establish a mathematical relation between the weighting factor and the controlled variables. For this reason, the weighting factor is mainly obtained by the trial-and-error method, which repeats adjustments in simulation or experiment, rather than by an analytical method [11], [12]. The trial-and-error method suffers from the drawbacks of complex operation and poor universality; for example, the weighting factor has to be changed with the system parameters and operating state. Therefore, the adjustment of the weighting factor is an open research problem in FCS-MPC, which limits the application and development of FCS-MPC in the field of motor control [13], [14].
To solve the problem mentioned above, calculation methods and elimination methods have been developed [15]. The calculation method is essentially a mathematical optimization applied to multi-variable systems in which the weighting factor is difficult to eliminate. R. Vargas proposed a dynamic adjustment method [16] that makes the weighting factor change with the error of the controlled variable. The results show that the penalty imposed on the controlled variable changes dynamically, and thus the system error can be kept within a certain range. P. Cortes adopted several interpolation operations to quickly determine the weighting factor, obtaining excellent control performance [17]. It is not difficult to find that these two methods only improve the efficiency of tuning the weighting factor, but still do not eliminate the cumbersome trial-and-error process. In [18] and [19], the combination of artificial intelligence with motor control makes it possible to use an artificial neural network (ANN) for weighting-factor calculation. A simulation model or experimental results are used to train the ANN. The trained ANN is able to calculate the performance index of the weighting factor and find the optimal weighting factor for the expected performance of the system. However, ANN training is very time-consuming and requires manual intervention.
To keep the weighting factor out of the decision-making for system performance optimization, the elimination method indirectly transforms the cost function or unifies the controlled variables. R. S. Dastjerdi improves the PMSM control performance through the idea of model replacement [20]: the stator flux and torque are replaced with dq-axis stator voltages that share the same dimension, eliminating the weighting factor of the stator flux. However, the approximations involved in model replacement reduce the control accuracy. In [21], the candidate voltage vectors are ranked according to the errors of flux and torque, without using a weighting factor to select the optimal voltage vector. However, since it ignores the relative errors of the flux and torque amplitudes, the method proposed in [21] can hardly guarantee optimal control performance of the PMSM. In 2019, Rodriguez, an IEEE Life Fellow and MPC expert, proposed sequential MPC [22]. The stator flux and torque are included in two independent cost functions; two optimal voltage vectors are first selected by the torque cost function and then evaluated by the flux cost function to obtain the final one. Thus, the voltage vector is selected separately by the cost functions of stator flux and torque without using a weighting factor. Since it was proposed, sequential MPC has attracted the attention of scholars and has been successively applied to three-phase PMSMs, asynchronous motors, and power converters [23], [24].
Based on the principle of the sequential MPC, a cascade MPCC is proposed for five-phase PMSM for the first time to solve the problems of cumbersome weighting factor adjustment. Firstly, taking the stator fundamental dq-axis currents and third harmonic dq-axis currents of the fivephase PMSM as the controlled variables, the proposed method selects the optimal voltage vector for the four controlled variables by constructing two independent cost functions. Secondly, the construction scheme of cost function can change with the system performance requirement. The maximum torque scheme can improve the DC voltage utilization and output electromagnetic torque while the harmonic elimination scheme can reduce the third harmonic in stator current. Therefore, the proposed method can not only solve the problem of weighting factor design, but also realize the maximum torque and minimum harmonic controls for five-phase PMSM. In addition, the sector judgment and one-step delay compensation methods are used to reduce the amount of calculation and improve the control accuracy. Finally, the control performance of the conventional MPCC and the proposed method is experimentally compared to verify the advantages of the proposed method in weighting factor tuning and digital operation. The results confirm that the proposed method can effectively eliminate the weighting factor in MPCC for five-phase PMSM, providing new schemes to ensure the stable and efficient operation of five-phase PMSM.
The contributions of the proposed method can be summarized as follows: 1) Based on the principle of the cascade MPCC, the proposed method develops two control schemes for the five-phase PMSM, namely the maximum torque scheme and the harmonic elimination scheme, to improve the torque response capability and reduce the third harmonic of the stator current.
2) For the MPCC of five-phase PMSM, a new way to eliminate weighting factor is presented by the proposed method.
II. MATHMATIC MODEL OF FIVE-PHASE PMSM
The topology of the two-level inverter driving the five-phase PMSM is shown in Figure 1. The DC source voltage of the inverter is U_dc. By controlling the switches of the five bridge arms of the inverter, the AC output side generates five-phase AC currents with a phase difference of 72 degrees, forming a circular rotating magnetic field in space to drive the five-phase PMSM. It is assumed that the stator current contains only the fundamental and third harmonic components, while the influence of other higher harmonics is ignored. According to the coordinate transformation, the five-phase stator currents i_a, i_b, i_c, i_d, i_e can be represented by the currents i_d1, i_q1, i_d3, i_q3 in the rotating coordinate system, which are governed by the stator voltage equations, where i_d1, i_q1, u_d1, and u_q1 are the stator current and voltage components in the dq-axis of the fundamental-wave space, respectively; i_d3, i_q3, u_d3, and u_q3 are the stator current and voltage components in the dq-axis of the harmonic-wave space, respectively; R_s is the stator resistance; L_d and L_q are the stator inductance components in the d-axis and q-axis, respectively; L_ls is the stator leakage inductance; ω_e is the electrical angular velocity of the rotor; and ψ_f is the flux linkage of the permanent magnet.
The torque of the five-phase PMSM is given as follows, where ψ_f1 and ψ_f3 are the fundamental and harmonic components of the permanent-magnet flux linkage, and n_p is the number of pole pairs.
III. CONVENTIONAL MPCC
In this section, the implementation of the conventional MPCC for five-phase PMSM is first introduced. Secondly, the predictive model of stator current is derived from the mathematical model of five-phase PMSM. Thirdly, the principle of selecting the optimal voltage vector by cost function is analyzed.
The structural block diagram of the MPCC for the five-phase PMSM is presented in Figure 2. The system includes five controlled variables, i.e., the speed n, the fundamental currents i_d1 and i_q1, and the third harmonic currents i_d3 and i_q3. In Figure 2, the speed is controlled by a PI regulator, while the optimized control of the fundamental and harmonic currents is achieved by the cost function. The MPCC periodically detects the stator current, predicts the system state, and selects the optimal voltage vector, thereby realizing online rolling optimization. Firstly, the calculated speed error signal is transformed into the reference command i*_q1 through the speed regulator. Secondly, the stator current at the current instant is detected, and the fundamental and third harmonic currents at the next instant are calculated according to the prediction model. Thirdly, the predicted values of the stator currents and their references are sent to the cost function to select the optimal voltage vector that balances the performance of the fundamental and third harmonic currents. Finally, the optimal voltage vector is converted into the control signal driving the five-phase PMSM.
A. Prediction Model
Using the forward Euler method, the continuous model of the five-phase PMSM can be expressed by a discrete-time model, given as follows, where T_s, k, and k−1 are the control period, the current instant, and the previous instant, respectively. The one-step-ahead prediction of the stator currents is obtained by shifting the time forward by one instant, as shown in (4).
where i_d1(k+1), i_q1(k+1), i_d3(k+1), and i_q3(k+1) are the predicted stator current components in the dq-axis of the fundamental and harmonic wave spaces, respectively, and u_d1(k), u_q1(k), u_d3(k), and u_q3(k) are the corresponding stator voltage components. In total, 32 voltage vectors can be generated through the five bridge arms. These 32 basic voltage vectors provide a rich control set for the MPCC. The control set can be composed of large vectors and the zero vector; of large vectors, medium vectors, and the zero vector; or of all vectors. The more voltage vectors in the control set, the better the control performance. However, selecting the optimal voltage vector by enumeration places a great computational burden on the microprocessor. Therefore, the MPCC for the five-phase PMSM usually needs to balance the amount of computation against the control performance.
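As an illustration of the one-step prediction in (3)-(4), the sketch below implements a forward-Euler update of the four current components. The explicit model equations are not reproduced in this excerpt, so the expressions used here follow the standard five-phase PMSM form and the harmonic-subspace coupling and back-EMF terms are written by analogy; they are assumptions to be checked against the source model.

```python
from dataclasses import dataclass

@dataclass
class FivePhasePMSM:
    Rs: float       # stator resistance
    Ld: float       # d-axis inductance (fundamental subspace)
    Lq: float       # q-axis inductance (fundamental subspace)
    Lls: float      # stator leakage inductance (harmonic subspace)
    psi_f1: float   # fundamental PM flux linkage
    psi_f3: float   # third-harmonic PM flux linkage

def predict_currents(m, i, u, w_e, Ts):
    """One Euler step: i = (id1, iq1, id3, iq3) at step k, u = (ud1, uq1, ud3, uq3)."""
    id1, iq1, id3, iq3 = i
    ud1, uq1, ud3, uq3 = u
    id1_n = id1 + Ts / m.Ld * (ud1 - m.Rs * id1 + w_e * m.Lq * iq1)
    iq1_n = iq1 + Ts / m.Lq * (uq1 - m.Rs * iq1 - w_e * (m.Ld * id1 + m.psi_f1))
    id3_n = id3 + Ts / m.Lls * (ud3 - m.Rs * id3 + 3 * w_e * m.Lls * iq3)
    iq3_n = iq3 + Ts / m.Lls * (uq3 - m.Rs * iq3 - 3 * w_e * (m.Lls * id3 + m.psi_f3))
    return id1_n, iq1_n, id3_n, iq3_n
```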
B. Cost Function
After the prediction of each controlled variable, the predicted and reference values of all controlled variables are substituted into the cost function to quantify the system control error at the next instant. To form a cost function g, i_d1(k+1), i_q1(k+1), i_d3(k+1), and i_q3(k+1) in (4) are compared with their references, and the third-harmonic terms are weighted by a factor λ. Here λ is the weighting factor of the third harmonic current, which can be tuned to balance the performance of the fundamental and third harmonic currents, and the starred quantities are the references of the fundamental and third harmonic currents. By using (8), the cost-function value of each voltage vector can be calculated. The smaller the cost-function value, the smaller the sum of control errors and the better the control performance of the system. Therefore, the voltage vector with the smallest cost-function value is the optimal one for the system.
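The enumeration-based selection described above can be summarized by the following sketch, in which each candidate vector is scored with a λ-weighted cost function (an absolute-error form is assumed here for illustration; the exact expression is Eq. (8) of the paper) and the minimizer is applied.

```python
def conventional_mpcc_step(predict, i_ref, candidates, lam):
    """Conventional MPCC: predict maps a candidate voltage vector to (id1, iq1, id3, iq3),
    i_ref = (id1*, iq1*, id3*, iq3*), candidates is the list of basic voltage vectors."""
    def cost(u):
        id1, iq1, id3, iq3 = predict(u)
        return (abs(i_ref[0] - id1) + abs(i_ref[1] - iq1)
                + lam * (abs(i_ref[2] - id3) + abs(i_ref[3] - iq3)))
    return min(candidates, key=cost)
```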
IV. CASCADE MPCC FOR FIVE-PHASE PMSM
The MPCC for the five-phase PMSM has a simple principle and a fast dynamic response, but it faces the problems of a weighting factor that is difficult to adjust and a large amount of calculation. On the one hand, the performance of the fundamental current and the third harmonic current needs to be balanced by setting a proper weighting factor in the cost function. However, owing to the lack of a theoretical basis, the weighting factor is mainly obtained by the tedious trial-and-error method based on the analysis of a large amount of experimental data. Furthermore, an unreasonable weighting factor may result when the motor parameters or the operating conditions change, leading to a wrong selection of the optimal voltage vector. On the other hand, the system state prediction and the selection of the optimal voltage vector from 32 voltage vectors bring a great computational burden to the microprocessor.
To solve the problems above, a cascade MPCC for fivephase PMSM is proposed. The cost function including fundamental current and third harmonic current is divided into two independent cost functions. The two cost functions are combined according to the priority of controlled variables, selecting the optimal voltage vector step by step. In addition, to reduce the computation of the proposed algorithm, the sector judgment method is used to eliminate unreasonable voltage vectors. In this section, the proposed method will be introduced in detail from four aspects, i.e., the control principle, the cost function design, the voltage vector selection, and delay compensation.
A. Principle of Proposed Method
Because the currents i_d1 and i_q1 in the fundamental space and the currents i_d3 and i_q3 in the harmonic space are included in the same cost function, the performance optimization has to take multiple controlled variables into account at the same time. Therefore, a parallel approach is normally adopted for the selection of the voltage vector. Based on the idea of sequential MPC, the parallel approach is transformed into a cascade one, and a cascade MPCC is proposed. In the proposed method, an independent cost function is designed for each controlled variable. The cost functions are prioritized according to the importance of the controlled variables and then combined in series. Therefore, the primary and secondary relations of the controlled variables in the system can be clearly distinguished by the new cost function, which is the basis for selecting the voltage vector. The controlled variable with the highest importance first selects some voltage vectors with good performance through its cost function. Then, the controlled variable with the lower priority selects the voltage vector with the smallest cost-function value from the vectors retained in the first step. After this step-by-step selection, the optimal voltage vector considering the performance of each controlled variable is obtained. The structural block diagram of the proposed method is shown in Figure 3. The speed control is consistent with that in the traditional MPCC, using a PI regulator, while the stator fundamental and third harmonic currents are controlled using a cascade cost function. Firstly, the reference and predicted values of the fundamental and third harmonic current components in the dq-axis are calculated for the one-step delay compensation and the reference voltage vector location. Secondly, the cost function of each controlled variable is built, and the priority of the cost functions of the fundamental and third harmonic currents is established according to the control performance requirements of the system. Finally, the basic voltage vectors are evaluated by the high-priority cost function to select two optimal voltage vectors u_opt1 and u_opt2, and then these two voltage vectors are evaluated by the low-priority cost function to select the optimal voltage vector u_opt.
The advantage of the proposed method is that the selection of the optimal voltage vector is completed in two steps according to the priority of the cost functions, without having to weigh the performance of the fundamental and third harmonic currents simultaneously. Since each step optimizes only one group of controlled variables, the selection of the optimal voltage vector does not depend on a weighting factor.
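The two-step selection can be sketched as follows. The concrete schemes described next differ only in which cost function has priority and in the candidate vector set; the absolute-error cost functions and the number of retained candidates are simplifying assumptions of the sketch.

```python
def cascade_mpcc_step(predict, i_ref, candidates, keep=3):
    """Cascade selection: the high-priority cost keeps a short list, the low-priority cost
    then picks the final vector, so no weighting factor is needed."""
    def g_fund(u):
        id1, iq1, _, _ = predict(u)
        return abs(i_ref[0] - id1) + abs(i_ref[1] - iq1)
    def g_harm(u):
        _, _, id3, iq3 = predict(u)
        return abs(i_ref[2] - id3) + abs(i_ref[3] - iq3)
    # Maximum-torque priority order shown here; swap g_fund and g_harm (and change the
    # candidate set) for the minimum harmonic current scheme.
    shortlist = sorted(candidates, key=g_fund)[:keep]
    return min(shortlist, key=g_harm)
```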
B. Cascade Cost Function Design
The cost functions of the proposed method, obtained from (8), achieve independent control of the stator fundamental current and the third harmonic current: the cost function of the fundamental current involves i_d1, i_q1 and their references, while that of the third harmonic current involves i_d3, i_q3 and their references. Using equations (9) and (10), the optimal voltage vector for effectively controlling the fundamental and third harmonic currents can be selected. In this paper, a cascade combination of the two cost functions balances the system performance from the perspectives of maximum torque and minimum harmonic current.
a) Maximum torque scheme
The goal of the maximum torque scheme is to increase the output torque of the PMSM. Therefore, the cost function of the fundamental current has the highest priority, followed by that of the third harmonic current, as shown in Figure 4. Since high DC-voltage utilization yields large torque, the optimal voltage vector should be selected from the large vectors and the zero vector. For this reason, 10 large vectors and one zero vector among the 32 basic voltage vectors are used as the candidate voltage vectors of the maximum torque scheme. Firstly, using the cost function of the fundamental current, the two large vectors and the zero vector that minimize g_d1 are selected from this set as the candidate vectors for the cost function of the third harmonic current. Secondly, after calculating the values of the cost function of the third harmonic current for these three voltage vectors, the voltage vector that minimizes g_d3 is selected as the optimal voltage vector of the maximum torque scheme.
b) Minimum harmonic current scheme
The objective of the minimum harmonic current scheme is to reduce the torque ripple. Contrary to the maximum torque scheme, the minimum harmonic current scheme gives the cost function of the third harmonic current the highest priority, followed by the cost function of the fundamental current, as shown in Figure 4. Since the third harmonic current results in torque ripple, the optimal voltage vector should be selected from the large vectors, the medium vectors, and the zero vector. For this reason, 10 large vectors, 10 medium vectors, and one zero vector among the 32 basic voltage vectors are used as the candidate voltage vectors of the minimum harmonic current scheme. Firstly, one large vector, one medium vector, and the zero vector that minimize the cost function of the third harmonic current are selected as the input to the cost function of the fundamental current. Secondly, after calculating the values of the cost function of the fundamental current for these three voltage vectors, the voltage vector that minimizes g_d1 is selected as the optimal voltage vector of the minimum harmonic current scheme.
C. Voltage Vector Selection
The proposed maximum torque scheme and minimum harmonic current scheme can reduce the candidate voltage vectors from 32 to 11 and 21, respectively, reducing the amount of calculation of MPCC. However, the number of candidate voltage vectors is still large for the implementation of control method in microprocessor. Therefore, the sector location method is used to quickly select some useful voltage vectors from all the voltage vectors.
To make the stator current accurately track the reference command, the predicted value of the fundamental current is set equal to the reference value. By inverting (4), the desired fundamental output voltage can be expressed accordingly, where θ_e is the electrical angle. According to (12), the phase angle of the reference voltage vector is located using the arctangent function in (13). Based on (13), the voltage vectors adjacent to the sector in which the reference voltage vector lies are applied to improve the control accuracy of the stator current. Therefore, four large vectors and one zero vector close to the reference voltage vector are used as the candidate voltage vectors for the maximum torque scheme, while two large vectors, two medium vectors, and one zero vector close to the reference voltage vector are taken as the candidate voltage vectors for the minimum harmonic current scheme.
It can be seen that the numbers of candidate voltage vectors of the proposed method and of the conventional one are 5 and 32, respectively. The proposed method therefore evaluates 27 fewer candidate voltage vectors than the conventional one, decreasing the amount of calculation by about 84%.
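A sketch of the sector-location step and of the construction of the reduced candidate set is given below; the angle convention, the ten-sector layout, and the way adjacent vectors are picked are illustrative assumptions rather than the exact rules of the paper.

```python
import math

def locate_sector(u_alpha_ref, u_beta_ref, n_sectors=10):
    """Sector index of the reference voltage vector in the stationary alpha-beta plane."""
    theta = math.atan2(u_beta_ref, u_alpha_ref) % (2 * math.pi)
    return int(theta // (2 * math.pi / n_sectors))

def candidate_vectors(sector, large_vectors, zero_vector, n_adjacent=4):
    """Keep the n_adjacent large vectors nearest to the reference sector plus the zero vector."""
    n = len(large_vectors)
    offsets = range(-(n_adjacent // 2), n_adjacent - n_adjacent // 2)
    return [large_vectors[(sector + k) % n] for k in offsets] + [zero_vector]
```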
D. Implementation of Proposed Method
As shown in Figure 5, the implementation of cascade MPCC for five-phase PMSM can be divided into 5 steps.
Step 1: measure the stator current and the speed of fivephase PMSM, calculate the reference command, and predict the system state.
Step 2: use (14) to compensate one-step delay for stator fundamental current and third harmonic current.
Step 3: locate the sector of the reference voltage vector and select the adjacent voltage vector of the sector as the candidate voltage vector.
Step 4: select the control scheme according to the control performance requirements, design the priority of the cost functions, construct the cascade cost function, and select the optimal voltage vector step by step.
Step 5: convert the optimal voltage vector into PWM control signal driving five phase PMSM.
V. EXPERIMENTAL RESULTS
To verify the correctness and the advantages of the proposed method, experiments comparing the performance of the proposed method, the conventional MPCC, and the MPCC with virtual voltage vector in [23] are carried out on the five-phase PMSM setup. As shown in Figure 6, a TMS320F28335 DSP is used as the microprocessor executing the control algorithm. A five-phase PMSM is connected to a magnetic powder brake generating the load torque. The parameters of the five-phase PMSM are listed in TABLE II. The weighting factor λ of the conventional MPCC is tuned using the trial-and-error approach; based on a large number of experimental tests, the value of λ is set to 1.9. The experimental results are shown in Figures 7-11.
A. Steady-state Experimental Results
Steady-state experiments at a speed of 380 r/min are carried out to compare the current harmonics of the maximum torque scheme, the minimum harmonic current scheme, the conventional MPCC, and the MPCC with virtual voltage vector, as shown in Figure 7. It can be clearly seen that smooth currents are obtained with the minimum harmonic current scheme and the MPCC with virtual voltage vector, while harmonic currents can be observed in the maximum torque scheme and the conventional MPCC. From the total harmonic distortion (THD) analysis, the current THD of the minimum harmonic current scheme is 5.1%, smaller than the 5.7% of the maximum torque scheme, the 7.1% of the conventional MPCC, and the 5.3% of the MPCC with virtual voltage vector.
B. Experiment of Step Change in Speed
The experiment of a step change in speed is carried out to compare the dynamic response of the maximum torque scheme, the minimum harmonic current scheme, the conventional MPCC, and the MPCC with virtual voltage vector, as shown in Figure 8 and Figure 9. In Figure 8, the speed changes from 200 r/min to 1200 r/min in about 120 ms for all four methods; their speed responses are therefore the same. In Figure 9, the q-axis current at the start of the speed change is presented. The q-axis currents of the maximum torque scheme, the minimum harmonic current scheme, and the MPCC with virtual voltage vector track their references within 2.9 ms, 3.2 ms, and 3.3 ms, respectively, obtaining a fast current response. However, the conventional MPCC has a relatively slow current response, completing within 4.5 ms.
C. Experiment of Load Disturbance
The experimental results under load disturbance using the maximum torque scheme, the minimum harmonic current scheme, the conventional MPCC, and the MPCC with virtual voltage vector are shown in Figure 10. When the machine operates at 1100 r/min, the magnetic powder brake generates a load torque. It can be observed that the maximum torque scheme has a fast current response and a small speed drop, exhibiting better dynamic performance against the load disturbance than the minimum harmonic current scheme, the conventional MPCC, and the MPCC with virtual voltage vector.
D. Experiment performance evaluation
The stator current THD of the four MPC methods are compared at different speed to verify the advantage of proposed method, as shown in Figure 11. At 200r/min, 400r/min, 600r/min, and 800r/min, the stator current THD of the proposed maximum torque scheme, the proposed minimum harmonic current scheme, the conventional MPCC, and the MPCC with virtual voltage vector changes little. The proposed minimum harmonic current scheme has the same current THD with the MPCC with virtual voltage vector, smaller than the other two methods. Therefore, the proposed minimum harmonic current scheme can effectively lower current THD.
The dynamic performance in part B and the performance against load disturbance in part C are summarized in TABLE III. The current response time of the proposed maximum torque scheme, the proposed minimum harmonic current scheme, the conventional MPCC, and the MPCC with virtual voltage vector are 2.9ms, 3.2ms, 3.3 ms, and 4.5ms, respectively, while their speed drop in loading test are 60r/min, 80r/min, 90r/min, and 120r/min. It can be clearly seen that the proposed maximum torque scheme has the best dynamic performance and the ability against load disturbance among the four methods.
VI. CONCLUSION
This paper proposes a cascade MPCC for the five-phase PMSM to solve the problem of the tedious tuning work of the weighting factor. By setting the control priority of the fundamental and harmonic currents to form the cascade structure of the cost function, the optimal voltage vector that balances the performance of the fundamental current and the harmonic current can be selected without using a weighting factor. The construction scheme of the cost function can be changed with the system performance requirements, leading to a maximum torque scheme for improving the output electromagnetic torque and a harmonic elimination scheme for suppressing the third harmonic in the stator current. The experimental results indicate that the proposed method is better than the conventional MPCC in dynamic performance and current harmonic elimination without a weighting factor.
"Engineering"
] |
Comprehensive evaluation research of hybrid energy systems driven by renewable energy based on fuzzy multi-criteria decision-making
The worsening of climate conditions is closely related to the large amount of carbon dioxide produced by human use of fossil fuels. Under the guidance of the "carbon peaking and carbon neutrality" goals, and with the deepening of the structural reform of the energy supply side, the hybrid energy system coupled with renewable energy has become an important means of solving the energy problem. This paper focuses on the comprehensive evaluation of hybrid energy systems. A complete decision support system is constructed in this study. The system primarily consists of four components: 1) twelve evaluation criteria from economic, environmental, technological, and socio-political perspectives; 2) a decision-information collecting and processing method for uncertain environments combining triangular fuzzy numbers and hesitant fuzzy linguistic term sets; 3) a comprehensive weighting method based on Lagrange optimization theory; 4) solution ranking based on the fuzzy VIKOR method that considers the risk preferences of decision-makers. Through a case study, it was found that the four most important criteria are investment cost, comprehensive energy efficiency, dynamic payback period, and energy supply reliability, with weights of 7.21%, 7.17%, 7.17%, and 7.15%, respectively. A1 is the scheme with the best comprehensive benefit. The selection of solutions may vary depending on the decision-maker's risk preference. Through the aforementioned research, the decision framework enables the evaluation of the overall performance of the system and provides decision-making references for decision-makers in selecting solutions.
Introduction
1.1 Background and motivation
Energy is the cornerstone of human survival and development.Faced with multiple challenges such as resource shortage, environmental damage and climate change, traditional energy production and supply modes cannot meet the needs of social development (Zhang et al., 2023).As the world's largest carbon emitter, China's main source of carbon dioxide emissions is the burning of fossil fuels, accounting for 88% (Zeng et al., 2023).Therefore, it is urgent to carry out clean and efficient reform of China's energy supply system and consumption structure.
New energy sources such as wind and solar power, due to their abundant resources and zero emissions, will play a supporting role in the entire transition process (Niu et al., 2022).However, the mismatch between the output characteristics and the load of renewable energy, resulting in low actual utilization, still hinders its large-scale distribution (Liu et al., 2019;Liu et al., 2022a;Yong et al., 2022).With the continuous development of energy management, energy monitoring, energy storage (ES), and distributed generation technologies, the hybrid energy system (HES) that incorporates renewable energy generation is regarded as a crucial solution to address future energy challenges (Ke et al., 2022).HES achieves an organic coordination and optimization of energy production, transmission, distribution, conversion, storage, and consumption across multiple time scales, enabling an integrated supply of energy production and consumption (Liu et al., 2022b), as shown in Figure 1.
However, the layout and promotion of HES are still in the early stage, with limited demonstration projects.As an energy project, the lack of a comprehensive evaluation system during the investment decision-making stage is a significant obstacle to the development of HES.Decision-makers (DMs) need a comprehensive understanding of the project to be motivated to invest in its construction.Therefore, in order to address this issue and promote the sustainable development of HES, this study establishes a comprehensive evaluation framework for HES.
1.2 Literature review HES, consisting of renewable and fossil energy sources, is an important approach to addressing energy supply issues (Li et al., 2018).As a result, researchers have conducted extensive studies on HES.Devrim and Bilir, (2016) investigated a system that integrates wind turbines, photovoltaic panels, and fuel cells to meet the electricity demand of residential buildings.Zhou et al. (2019) studied the performance of the entire system after incorporating wind and solar power generation into the integrated energy system.Sezer et al. (2019) proposed a multi-output system that stores and converts concentrated solar, wind, and hydrogen energy.Ruiming (2019) optimized a hydrogen-integrated energy system, including wind turbines, photovoltaics, electrolyzers, and fuel cells.Eriksson and Gray, (2017) provided a detailed review of energy systems that couple renewable energy generation, hydrogen storage, and fuel cells, conducting a comprehensive comparative analysis and outlook while maintaining a positive outlook on the industry's development.Building on this foundation, Zhang et al. (2022a) proposed that the capacity configuration optimization of a HES is the basis for system development, with the goal of increasing system economics.Liu et al. (2022a) studied the optimal size of HES considering economic, environmental, and thermal comfort benefits and solved the model using NSGA-II.The aforementioned studies primarily focus on the structural characteristics and capacity configuration optimization of HES, revealing that HES with integrated renewable energy sources has a solid theoretical and practical foundation and provides significant environmental and social benefits.
Conducting a comprehensive evaluation of HES is important both for assessing the overall performance of the system and providing decision-making guidance for selecting appropriate solutions.Current research on the comprehensive evaluation of HES mainly includes the establishment of evaluation indicator systems, determination of indicator weights, and ranking of alternative solutions.Zhou et al. (2020) constructed performance analysis indicators from five aspects: energy utilization, economy, environment, technology, and society, to optimize decision-making for integrated energy systems coupling renewable energy generation.Yang et al. (2018) considered economic, technical, social, and environmental analysis indicators to comprehensively evaluate planning schemes for distributed energy systems.Ke et al. (2022) conducted a comprehensive evaluation of HES using nine indicators in four aspects: economic, energy utilization, environmental impact, and social acceptance.Building on this, Zhang et al. (2021) considered the comprehensive grid loss rate to analyze the overall benefits of HES driven by wind and solar energy, and conducted case studies.It is evident that the comprehensive evaluation of HES needs to consider multiple aspects, constituting a multi-criteria decision-making (MCDM) problem.The determination of indicator weights is an important step in solving MCDM and can be approached through subjective weight methods, objective weight methods, and integrated weight methods (Wu et al., 2016;Wu et al., 2018;Qian et al., 2021;Zhang et al., 2021;Yong et al., 2022).Subjective weight methods reflect the subjective preferences of decision-makers (Wu et al., 2023a), while objective weight methods focus on the intrinsic relationships among data.Integrated weight methods combine the two through certain mathematical methods to achieve a balance between subjectivity and objectivity (Zhang et al., 2022b).Yong et al. (2022) employed a combination of Step-wise Weight Assessment Ratio Analysis (SWARA) and entropy method using Lagrange optimization, achieving effective weight optimization solutions that might provide insights for this paper.As for the ranking of alternative solutions, commonly used methods include Analytic Hierarchy Process (AHP), Analytic Network Process (ANP), Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), among others.However, many of these methods do not adequately account for DMs`bounded rationality.The VIse Kriterijumski Optimizacioni Racun (VIKOR) method is capable of effectively addressing the aforementioned issues.Kamali Saraji et al. (2023) utilized the VIKOR method to rank eight challenges related to the adoption of renewable energy in rural areas.Abdul et al. (2022) employed the VIKOR method to prioritize the selection of solar energy, wind energy, hydropower, and biomass energy in developing countries.These studies demonstrate the mature application of the VIKOR method in the energy sector.Moreover, due to its ability to reflect decision-makers' subjective preferences, this paper intends to use the VIKOR method to rank alternative scenarios for HES.However, the traditional VIKOR method may not fully meet the practical decision-making requirements, prompting further improvements in this study.
Through the summary and analysis of the related literature, the critical findings are as follows: (1) Existing comprehensive evaluation studies mostly focus on HES that provide combined heat, power, and cooling, and there is a lack of research on systems that incorporate renewable energy for hydrogen production.
(2) The existing comprehensive evaluation indicators for HES commonly suffer from deficiencies such as the lack of rational selection of indicators and difficulties in quantifying them.(3) Current comprehensive evaluation research on HES lacks considerations for collecting complete decision-making information and addressing information loss during processing.
Objectives and contributions
The above literatures provide significant inspiration for this study, but it also highlights certain deficiencies in current research.Therefore, the main objectives of this paper are to address the existing gaps in research and construct a rational and comprehensive framework for the comprehensive evaluation of HES, providing DMs with solid theoretical and methodological support.The main contributions of this paper are as follows: (1) This paper addresses the comprehensive evaluation of HES that incorporate renewable energy for hydrogen production, expanding the research in this field.
(2) This paper establishes a complete and operational decision support system for decision-makers.The HES comprehensive evaluation decision support system consists of three parts: evaluation indicators, indicator weight determination, and alternative solution ranking.DMs can directly apply this model to conduct comprehensive evaluations of various HES.Additionally, each part of the decision support system takes into account the subjective preferences of DMs.
(3) This paper thoroughly considers and resolves the issues of fuzziness and randomness in the decision-making environment.Extended fuzzy logic is employed for collecting and processing decision-making information in order to maximize information gathering and minimize losses.
The remainder of this paper is organized as follows: Section 2 establishes a comprehensive evaluation index system of HES; Section 3 utilizes a series of methods to construct the HES comprehensive evaluation model; Section 4 uses a park in Gansu Province to carry out empirical analysis; Section 5 analyzes and discusses the calculation results, including sensitivity analysis and comparative analysis; Section 6 gives the conclusion and outlook.
Evaluation criteria system for HES
The indicator system serves as an important foundation for conducting comprehensive evaluations of HES. A good indicator system should combine comprehensiveness, rationality, and innovation. Therefore, this paper first reviews the indicator systems used in relevant studies to explore the common indicators for HES comprehensive evaluation, as shown in Table 1. Secondly, since the experts are distributed across different regions, the Delphi method is used to collect their decision-making information: the information in Table 1 and the architecture of the HES were sent to a number of experts, who analyzed and selected the comprehensive evaluation indicators. We aggregated the experts' reports to form a preliminary indicator system, which was then redistributed to the experts for further analysis. This process was repeated until a generally recognized indicator system was formed. Finally, on this basis, innovative indicators applicable to HES that include hydrogen production processes are proposed.
Based on the above analysis, this paper constructs a comprehensive benefit evaluation indicator system for HES from four aspects: economy, environment, technology, and socio-policy.
Economic criteria
The economic indicators of the evaluation index system are as follows. Investment cost (C11): Investment cost refers to all expenses incurred in the initial stage of HES construction. To a certain extent, it determines the difficulty of system construction and the economic benefits. Since labor costs are negligible compared with equipment procurement costs, the initial investment can be simplified as the cost of equipment procurement during the construction period. C11 is a cost criterion.
The investment cost can be written as IC = Σ_i c_inv,i × Q_i, where IC refers to the investment cost, c_inv,i is the unit investment cost of device i, and Q_i indicates the capacity of device i. Dynamic payback period (C12): The dynamic payback period refers to the time required for a project's net returns to offset the total investment, taking into account the time value of money. This metric examines the project's ability to recover its investment and is related to investment risk (Li et al., 2022a). C12 is a cost criterion.
where P_t,y^electricity and P_t,y^gas are the prices of electricity and gas, C_i,y is the maintenance cost of device i, and Y is the planned operating cycle of the HES.
Hydrogen yield rate (C14): The ratio between the economic benefits obtained from the hydrogen production process and the input costs. This ratio can be used to evaluate the economic feasibility and profitability of the electrolytic hydrogen production equipment (Liang and Wang, 2023). C14 is a beneficial criterion.
where HYR refers to the hydrogen yield rate, P_H2 and c_H2 indicate the price and the yield of hydrogen, and c_EHP^electricity and c_EHP^oper are the electricity consumption cost and the operation and maintenance cost of the EHP, so that HYR is the ratio of the hydrogen revenue P_H2 × c_H2 to these input costs.
Environmental criteria
The environmental indicators of the evaluation index system are as follows. Carbon dioxide emissions (C21): This indicator refers to the annual total carbon dioxide emissions from the HES (Qin et al., 2021). C21 is a cost criterion.
where Q_CO2 refers to carbon dioxide emissions, δ_grid indicates the grid emission factor, and δ_gas is the amount of carbon dioxide released per cubic meter of natural gas combusted. Air pollutant emissions (C22): This indicator refers to the annual total emissions of SO2, NOx, and particulate matter generated by the HES. C22 is a cost criterion.
where φ_S, φ_N, and φ_P are the SO2, NOx, and particulate matter emissions per cubic meter of natural gas combusted. Land occupation (C23): The construction of an HES requires land, which has a certain impact on natural scenery and urban planning (Wen et al., 2021). C23 is a cost criterion.
Noise (C24): Due to the presence of various energy supply devices in the HES, some noise is generated during operation. The noise can disrupt the normal lives of workers and nearby residents and, in the long run, can have significant health impacts (Qian et al., 2021). C24 is a cost criterion.
Technical criteria
The technical indicators of the evaluation index system are as follows. Comprehensive energy efficiency (C31): The comprehensive energy utilization rate reflects the degree of coupling and complementary utilization of multiple energy flows at different time scales, and can be used to measure the level of comprehensive energy utilization in a system (Zheng and Wang, 2020). C31 is a beneficial criterion. Here, CEE refers to the comprehensive energy efficiency; c_cons,y^electricity and c_cons,y^gas represent the annual electric energy and heat energy consumed in the park, respectively; P_y^RE is the total amount of clean energy fed into the system by new-energy equipment; and a is the lower calorific value of natural gas.
Energy supply reliability (C32): The ES in the park and the connection to the external power grid significantly reduce the impact of the intermittency of renewable energy and of power equipment failures on the system's reliability. Therefore, this study reflects the system's reliability by evaluating the energy supply-demand imbalance within the park when the system operates in island mode (Ke et al., 2022). C32 is a beneficial criterion.
where ρ refers to energy supply reliability; L_E, L_H, and L_C indicate the electricity, heat, and cooling consumption of users, respectively; and ΔL_E, ΔL_H, and ΔL_C are the corresponding deviations of the electricity, heat, and cooling consumption.
Device utilization rate (C33): The equipment utilization rate represents the ratio of the actual output power of the energy generation devices installed in the park to their rated power, reflecting the efficiency of the equipment's production. C33 is a beneficial criterion.
where D_UE refers to the device utilization rate, and P_out,i and P_rated,i indicate the output power and rated power of device i, respectively.
ES equivalent utilization coefficient (C34): The ES equivalent utilization coefficient represents the utilization rate of the ES in the HES, reflecting the significance of the ES and the rationality of its capacity allocation. C34 is a beneficial criterion.
where EAF refers to the ES equivalent utilization coefficient, E_ES,y^C and E_ES,y^D indicate the annual total charging capacity and annual discharging capacity of the ES, respectively, and P_ES is the rated capacity.
Social-political criteria
The social-political indicators of the evaluation index system are as follows. Level of advancement (C41): The level of advancement refers to how advanced the HES is compared with similar projects domestically and internationally. It influences the extent of policy and financial support that the project can receive after construction and implementation. This indicator is related to the technological advancement, innovative mode, and scalability of the project. C41 is a beneficial criterion.
Public satisfaction (C42): Public satisfaction is mainly related to two aspects: firstly, the public's acceptance of the project's construction, which is related to the engineering implementation plan and operational mechanisms; secondly, the users' intuitive experience with the HES. This indicator has a significant impact on the promotion and later operation of the project. C42 is a beneficial criterion.
Job creation (C43): The construction and operation of the HES will stimulate local employment and the development of the service industry. The research and development, as well as the manufacturing, of related equipment will promote the employment of engineering and technical personnel (Qian et al., 2021). C43 is a beneficial criterion.
Compatibility with policies (C44): As a new type of energy utilization model, most HES are still in the planning and initial construction phase. Therefore, adopting system solutions that are more compatible with national policies is more conducive to obtaining financial support from the government. This aspect plays a significant role in determining whether the project can obtain feasibility approvals. C44 is a beneficial criterion.
Methods of collecting decision-making information
The collection and processing of decision information are among the most crucial issues in the field of MCDM. In the investment decision-making process for the HES, there is a significant amount of uncertainty due to its novelty and cutting-edge nature. Uncertainty introduces ambiguity and randomness into the decision environment, making it challenging for DMs to assess the HES. Therefore, it is essential to address how to gather decision information that reflects the most authentic views of the DMs. Additionally, the processing of decision information should minimize information loss as much as possible to ensure the rationality of the evaluation results.
Hesitant fuzzy linguistic term set
In a fuzzy environment, the decision-making process often brings significant hesitation to DMs. Especially when evaluating qualitative indicators, DMs do not necessarily have in-depth knowledge of all aspects of the HES. They may hesitate between adjacent measurement levels and be unable to provide precise, singular decision information. The hesitant fuzzy linguistic term set (HFLTS) can capture expert subjective evaluation information more flexibly, thereby maximizing the integrity of the decision information (Yong et al., 2023).
The definitions related to HFLTS are as follows. Definition 1: A linguistic term set S = {s_i, s_{i+1}, s_{i+2}, ..., s_n} is a finite, ordered collection of linguistic variables with an odd number of terms; the linguistic term set used in this paper consists of seven linguistic variables. Definition 2: HFLTS allows DMs to evaluate the HES by selecting one or multiple linguistic variables s_i and assigning a corresponding degree of belief C(s_i) to each selected variable; the resulting set of ordered linguistic terms is denoted H_S. Definition 3: A conversion relationship maps the evaluation information provided by DMs on the linguistic term set into the HFLTS H_S. Based on these definitions, experts express each evaluation as one or more linguistic terms with associated degrees of belief.
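For illustration only (the specific terms and belief degrees below are hypothetical and not taken from the case study), a hesitant evaluation of one alternative under one qualitative criterion might take the form

H_S = {(s_4, 0.6), (s_5, 0.4)},

i.e., the expert hesitates between two adjacent linguistic levels and assigns them belief degrees of 0.6 and 0.4.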
Triangular fuzzy number
The expert evaluation information can be collected more comprehensively using HFLTS. However, this information is qualitative and cannot be directly analyzed and computed quantitatively. Triangular fuzzy numbers (TFNs) are widely used to transform qualitative information into quantitative information because they preserve fuzzy information and are simple and easy to operate on. The main definitions and formulas involved in TFNs are as follows. Definition 4: An information set x̃ = [x^l, x^m, x^u] satisfying 0 < x^l ≤ x^m ≤ x^u, x ∈ R, is called a TFN. When all the elements take values between 0 and 1, the TFN is referred to as a standard TFN, and its membership function μ_x(x) is defined over [x^l, x^u]. By performing a defuzzification operation on a TFN, its crisp value R(x̃) can be obtained. In this paper, TFNs are used specifically for the quantitative characterization of the linguistic terms in set S; the correspondence between TFNs and the linguistic term set S is shown in Table 2.
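For reference, the conventional membership function of a TFN [x^l, x^m, x^u] takes the following standard form (assumed here; the paper's own formulation may differ in presentation):

$$
\mu_{\tilde{x}}(x)=
\begin{cases}
\dfrac{x-x^{l}}{x^{m}-x^{l}}, & x^{l}\le x\le x^{m},\\
\dfrac{x^{u}-x}{x^{u}-x^{m}}, & x^{m}\le x\le x^{u},\\
0, & \text{otherwise}.
\end{cases}
$$

A commonly used defuzzification is R(x̃) = (x^l + 2x^m + x^u)/4; this is one common choice and may differ from the exact weighting used in the paper's Eq. 14.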
Based on the theoretical analysis above, further processing can be performed on the decision information represented by H_S.
Definition 5: The optimization of the HES involves comparing different evaluation values. Therefore, a distance formula between two TFNs ã = [a^l, a^m, a^u] and b̃ = [b^l, b^m, b^u] is introduced.
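As a concrete illustration of how these TFN operations can be carried out, the following is a minimal Python sketch. The defuzzification and distance formulas used here are common textbook choices (a graded-mean defuzzification and a vertex-type Euclidean distance); they are assumptions and may differ in detail from the paper's Eq. 14 and the Definition 5 distance.

```python
# Minimal sketch of triangular-fuzzy-number (TFN) helpers.
# The defuzzification and distance formulas are common choices and are
# assumptions here; they may differ from the exact equations in the paper.

def defuzzify(tfn):
    """Crisp value of a TFN [l, m, u] (graded-mean style: (l + 2m + u) / 4)."""
    l, m, u = tfn
    return (l + 2 * m + u) / 4.0

def tfn_distance(a, b):
    """Vertex (Euclidean-type) distance between two TFNs."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / 3.0) ** 0.5

# Example: a crisp real value expanded into a degenerate TFN, as done when
# preparing the VIKOR input matrix later in the paper.
crisp = 0.75
as_tfn = (crisp, crisp, crisp)
print(defuzzify(as_tfn))                          # 0.75
print(tfn_distance((0.3, 0.5, 0.7), (0.5, 0.7, 0.9)))
```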
Methods of calculating criteria weights
The evaluation results of the HES are determined by a combination of multiple indicators. However, the contributions of different indicators may vary, which is reflected in the weights assigned to them. Therefore, this section discusses the methods for determining the indicator weights. Additionally, to account for both the subjectivity of the DMs and the objectivity of the indicator values, this paper adopts a comprehensive weighting method that combines subjective and objective aspects to calculate the relative importance of each indicator.
SWARA method-Subjective weights
HES is a novel mode of energy production and utilization; thus, appropriately reflecting the subjectivity of DMs is important to ensure the rationality of the evaluation results. The SWARA method, which effectively reflects the DMs' viewpoints and balances operability with scientific rigor, has been widely used for determining the subjective weights of indicators (Ghenai et al., 2020).
The main steps of SWARA are as follows. Step 1: According to the DMs' preferences, the indicators are ranked in descending order of importance. Additionally, the relative importance between the top-ranked indicator and each of the remaining indicators is evaluated. The evaluation language and the corresponding quantitative values are presented in Table 2.
Step 2 (Akhanova et al., 2020): Starting with the second attribute, calculate the relative importance s_j between criterion j and the previous criterion j-1. This ratio represents the comparative importance value of criterion j.
Step 3: Calculate the coefficient value c_j of each criterion. Step 4: Calculate the corrected weight value s'_j.
Step 5: Compute the subjective weights sw_j.
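To make the SWARA procedure concrete, here is a minimal Python sketch using the standard SWARA recurrences (coefficient c_j = 1 + s_j, recalculated weight q_j = q_{j-1}/c_j, followed by normalization). The comparative-importance values in the example are hypothetical, and the paper's Eqs. 17-19 may differ slightly from this textbook form.

```python
# Minimal SWARA sketch (standard textbook recurrences; assumed, not the
# paper's exact Eqs. 17-19). Criteria must already be sorted from most to
# least important; s[j] is the comparative importance of criterion j relative
# to criterion j-1 (s[0] is unused for the top-ranked criterion).

def swara_weights(s):
    c, q = [], []
    for j, sj in enumerate(s):
        cj = 1.0 if j == 0 else 1.0 + sj          # coefficient value
        c.append(cj)
        q.append(1.0 if j == 0 else q[-1] / cj)   # recalculated (corrected) weight
    total = sum(q)
    return [qj / total for qj in q]               # normalized subjective weights

# Hypothetical comparative-importance values for four criteria:
print(swara_weights([0.0, 0.15, 0.10, 0.30]))
```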
Entropy weights method-Objective weights
The evaluation indicator system for the HES includes a large number of quantitative indicators. When determining the weights of these indicators, ignoring the influence of the numerical values would make the decision-making process less objective. Therefore, this paper adopts the entropy method to determine the objective weights of the indicators. The main steps of the entropy method are as follows (Wu et al., 2023a). Definition 6: To eliminate the influence of the different properties of the indicators on data dimension and scale, a standardization operation must be performed on the TFNs, transforming x̃ = [x^l, x^m, x^u] into ñ = [n^l, n^m, n^u] (Yong et al., 2022), where BC and CC denote beneficial and cost indicators, respectively, and x̃_j^max = max{x̃_ij | i = 1, 2, ..., m} and x̃_j^min = min{x̃_ij | i = 1, 2, ..., m}.
Step 1: Construct the initial decision matrix X = [x̃_ij]_(m×n), where x̃_ij indicates the evaluation value of alternative i under criterion j, with i = 1, 2, ..., m and j = 1, 2, ..., n.
Step 2: Standardize the initial decision matrix through Definition 6.
Then calculate the mean value of criterion j. Step 3: The entropy measure e_j can then be obtained.
TABLE 2 The fuzzy scale.
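A minimal Python sketch of the entropy-weight calculation on crisp (defuzzified) data is given below. The min-max normalization and Shannon-entropy formulas used here are standard stand-ins for the paper's TFN-based Eqs. 20-25, which operate on triangular fuzzy numbers rather than crisp values; the data in the example are hypothetical.

```python
import math

# Minimal entropy-weight sketch on crisp (defuzzified) data. Standard
# min-max normalization and Shannon entropy are assumed here; the paper
# works with TFNs and its Eqs. 20-25 may differ in detail.

def entropy_weights(matrix, is_benefit):
    m, n = len(matrix), len(matrix[0])
    cols = list(zip(*matrix))
    # Min-max normalization, direction depending on benefit/cost criterion.
    norm = [[0.0] * n for _ in range(m)]
    for j in range(n):
        lo, hi = min(cols[j]), max(cols[j])
        for i in range(m):
            x = (matrix[i][j] - lo) / (hi - lo) if hi > lo else 1.0
            norm[i][j] = x if is_benefit[j] else 1.0 - x
    # Entropy of each criterion and the resulting objective weights.
    weights = []
    for j in range(n):
        col_sum = sum(norm[i][j] for i in range(m)) or 1.0
        p = [norm[i][j] / col_sum for i in range(m)]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        weights.append(1.0 - e)                    # degree of divergence
    total = sum(weights) or 1.0
    return [w / total for w in weights]

# Hypothetical data: 3 alternatives x 2 criteria (first benefit, second cost).
print(entropy_weights([[0.6, 120], [0.8, 100], [0.7, 140]], [True, False]))
```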
Lagrange optimization-Comprehensive weights
According to the principle of minimum discrimination information, the comprehensive weight should reflect the subjective and objective characteristic information as fully as possible. Thus, Lagrange optimization is used to obtain the comprehensive weights (Huang et al., 2021).
where cw_j, sw_j, and ow_j denote the comprehensive, subjective, and objective weights, respectively. In addition, the above formulation is subject to the normalization constraint Σ_j cw_j = 1 with cw_j ≥ 0.
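Solving this minimum-discrimination-information problem with a Lagrange multiplier typically yields the closed form cw_j ∝ sqrt(sw_j × ow_j). The sketch below assumes that standard result, which may differ from the paper's exact derivation; the weight vectors in the example are hypothetical.

```python
# Combined weights from the standard closed-form solution of the
# minimum-discrimination-information / Lagrange formulation:
#   cw_j = sqrt(sw_j * ow_j) / sum_k sqrt(sw_k * ow_k)
# (assumed textbook result; the paper's own derivation may differ).

def combined_weights(sw, ow):
    raw = [(s * o) ** 0.5 for s, o in zip(sw, ow)]
    total = sum(raw)
    return [r / total for r in raw]

print(combined_weights([0.30, 0.25, 0.25, 0.20], [0.20, 0.30, 0.25, 0.25]))
```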
Method of sorting the alternatives
After obtaining the weight information of the indicators, integrating it effectively with the experts' evaluation language becomes a crucial step in selecting the optimal solution for the HES. The VIKOR method is a compromise-based MCDM method that ranks alternative solutions by comparing their proximity to the positive and negative ideal solutions (Meniz and Ozkan, 2023). The VIKOR method can fully consider the DMs' subjective preferences for the HES and balance the trade-offs between the benefits and harms of each solution. However, the effectiveness of the VIKOR method can be greatly influenced by uncertain environments. Therefore, this paper improves the VIKOR method using TFNs to enhance its applicability in such environments. The main steps of the fuzzy VIKOR method are as follows.
Step 1: Based on the normalized initial evaluation matrix obtained from the fuzzy entropy method, determine the best value Ĩ*_j and the worst value Ĩ⁻_j among all the standardized evaluation values.
Step 2: Compute the group utility value S_i and the individual regret value G_i.
Step 3: Compute the collective benefit coefficient Q_i.
where β ∈ [0, 1] is the compromise coefficient, which represents the relative weight given to the collective utility and the regret utility in the decision-making process.
Step 4: Sort the solutions in ascending order based on their values of S, G, and Q. A smaller value indicates a better solution.
Step 5: To determine the compromise solution, the alternative A_1 with the lowest Q value is chosen as the optimal solution, provided that it satisfies the following two conditions. Condition 1: Acceptable advantage: Q(A_2) − Q(A_1) ≥ 1/(m−1), where Q(A_1) and Q(A_2) are the benefit coefficient values of the top-ranked and second-ranked solutions, respectively, and m represents the total number of solutions.
Condition 2: Acceptable stability: based on the rankings by S and G, A_1 must also remain in the first position.
If either of the two conditions mentioned above is not satisfied, a set of compromise solutions is obtained: (1) If only Condition 2 is not satisfied, both A_1 and A_2 are compromise solutions. (2) If Condition 1 is not satisfied, the maximum X is obtained from the relationship Q(A_X) − Q(A_1) < 1/(m−1), and A_1, A_2, ..., A_X are all close to the ideal solution and together form the compromise set.
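The following Python sketch implements the core VIKOR ranking on crisp (defuzzified) scores: the group utility S, the individual regret G, the compromise index Q, and the two acceptance checks. It is a simplified stand-in for the paper's TFN-based Eqs. 30-32, and the weights and decision matrix in the example are hypothetical.

```python
# Minimal VIKOR sketch on crisp (defuzzified) scores. The paper's fuzzy
# version operates on TFNs (Eqs. 30-32); this simplified form keeps the
# same S / G / Q structure and the two compromise-solution checks.

def vikor(matrix, weights, is_benefit, beta=0.5):
    m, n = len(matrix), len(matrix[0])
    cols = list(zip(*matrix))
    best = [max(c) if b else min(c) for c, b in zip(cols, is_benefit)]
    worst = [min(c) if b else max(c) for c, b in zip(cols, is_benefit)]
    S, G = [], []
    for row in matrix:
        terms = [weights[j] * abs(best[j] - row[j]) / (abs(best[j] - worst[j]) or 1.0)
                 for j in range(n)]
        S.append(sum(terms))          # group utility value
        G.append(max(terms))          # individual regret value
    Q = [beta * (s - min(S)) / ((max(S) - min(S)) or 1.0)
         + (1 - beta) * (g - min(G)) / ((max(G) - min(G)) or 1.0)
         for s, g in zip(S, G)]
    order = sorted(range(m), key=lambda i: Q[i])
    adv_ok = Q[order[1]] - Q[order[0]] >= 1.0 / (m - 1)       # acceptable advantage
    stab_ok = (order[0] == min(range(m), key=lambda i: S[i])
               or order[0] == min(range(m), key=lambda i: G[i]))  # acceptable stability
    return S, G, Q, order, adv_ok, stab_ok

# Hypothetical 3 alternatives x 3 benefit criteria:
print(vikor([[0.8, 0.6, 0.9], [0.7, 0.9, 0.5], [0.6, 0.7, 0.8]],
            [0.4, 0.3, 0.3], [True, True, True]))
```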
Decision-making framework
The decision framework of this paper is shown in Figure 2.
Case background
Gansu Province is an important base for new energy in China, ranking among the top provinces in wind power and photovoltaic power generation. Therefore, this article selects an industrial park in Lanzhou, Gansu Province, as the service object of the HES for a case study. The industrial park is located in the northwest of Lanzhou City and has abundant wind and solar resources, making it suitable for the development of renewable energy generation. The area of the park available for solar energy is 17,500 square meters, with a solar irradiation intensity of 1,300 kW·h/m². The average wind speed is 5.5 m/s. The electricity load in the park is 3.75 MW, with separate loads for heating (2.1 MW) and cooling (2.8 MW), and a gas load of 3.8 MW. The wind and solar resource data are obtained from NASA, and the load data are provided by the local power company.
Data and decision information collection
Based on the network architecture of the HES shown in Figure 1, this paper formulates six different schemes in Table 3 to meet the energy demands of the industrial park. Among them, A1, A2, and A3 compare the advantages and disadvantages of investing in photovoltaic and wind power in the park. A4 and A5 compare the advantages and disadvantages of electric boilers and gas boilers. A6 primarily utilizes CCHP units as the main heat source, coupled with small-scale gas boilers.
To maximize daily profits under the aforementioned six schemes, typical-day scheduling for the four seasons is conducted. Based on the scheduling results and the calculation methods of the three-level indicators in this paper, the quantitative data for the six alternative schemes in the comprehensive evaluation index system of HES are shown in Table 4.
The qualitative data for the six alternatives are sourced from an expert committee. The committee is composed of four experts who have long been engaged in research on integrated energy systems. The experts used HFLTS to evaluate the qualitative indicators of the alternative schemes. The evaluation results for the six alternative schemes are presented in Table 4.
Subjective weight calculation
In this paper, the TFNs-SWARA method is used to calculate the subjective weights of the indicators. The four experts evaluated the priority order of the various indicators based on their own expertise. The experts' initial evaluation matrix is shown in Table 5.
FIGURE 2 The framework of this study.
Based on Table 5, the defuzzification operation is performed using Eq. 14. Afterwards, the subjective weights of the HES comprehensive evaluation indicators can be obtained through Eqs 17-19.
Objective weights calculation
In this paper, the entropy method based on TFNs is used to calculate the objective weights of the indicators. Firstly, the information from Table 4 is integrated to form an initial decision matrix. Then, the qualitative decision information in the initial decision matrix is quantified using Eq. 15. Finally, the objective weights of the indicators are obtained using Eqs 20-25, as shown in Table 6.
Comprehensive weights calculation
In order to incorporate both the subjective judgments of the experts and the inherent patterns of the objective data, this paper integrates the results of the two types of weights. In this process, it is important to minimize the loss of information; therefore, the Lagrange optimization method is chosen. The integrated weights are given in Table 6.
The calculation results show that the economic and technical indices are the two most important first-level indices. Among the second-level indices, C11 (Investment cost), C12 (Dynamic payback period), C31 (Comprehensive energy efficiency), and C32 (Energy supply reliability) are the most important criteria, which DMs need to prioritize when making decisions.
Alternatives sorting
Once the weight calculation for the indicators is completed, this paper conducts a comprehensive evaluation of the six alternative scenarios for the HES. Firstly, the normalized initial decision matrix obtained during the objective weight calculation is used as the basis for the comprehensive evaluation. Secondly, the real-valued indicators are expanded into TFNs by inversely applying the defuzzification formula; for example, the crisp value 0.75 becomes the TFN (0.75, 0.75, 0.75). Then, the best and worst indicator values are selected among all the standardized values, and the group utility value and individual regret value are calculated using Eqs 30 and 31, as shown in Table 7. Finally, the collective benefit coefficient is calculated using Eq. 32, also shown in Table 7. It is worth noting that the compromise coefficient β = 0.5 is chosen in this paper to simultaneously pursue maximizing group utility and minimizing individual regret in decision-making.
According to the compromise solution determination rules of VIKOR, A1 is the optimal solution in terms of Q_i, S_i, and G_i, and 1/(m−1) = 0.2. Thus, A1 is the optimal option among the six alternatives.
Discussion and analysis
In the previous section, this paper obtained the comprehensive evaluation results of the HES, including the indicator weight results and the scheme ranking results. These results are analyzed in this section. In addition, sensitivity analysis and comparative analysis are employed to examine the model; these two methods respectively verify the robustness and the rationality of the model.
Analysis of criteria weights results
The weighting results of the HES comprehensive evaluation indicators are shown in Table 6. In terms of subjective weights, C11 (Investment cost), C31 (Comprehensive energy efficiency), C12 (Dynamic payback period), and C32 (Energy supply reliability) have high weights of 8.34%, 8.34%, 8.28%, and 7.95%, respectively. Among them, C11 and C12 are economic indicators, which mainly reflect the economic feasibility and investment risk of a scheme; C31 and C32 are technical indicators, which embody system efficiency and supply-assurance capability. In terms of objective weights, the weight of C24 (Noise), at 7.46%, has a clear advantage over the other indicators, whose weights lie between 6% and 6.5%. This indicates that there are some differences in technical scheme and comprehensive performance among the alternatives, but they are not very pronounced. The clear difference in the noise of each scheme is due to the greater noise of wind turbines; as a result, a scheme with more wind turbines has poorer C24 performance. In terms of comprehensive weights, the highest are C11, C31, C12, and C32, at 7.21%, 7.17%, 7.17%, and 7.15%, respectively. This is consistent with the trend of the subjective weights, but the values are reduced, indicating that the comprehensive weighting method reflects both the experts' subjective judgments and the objective data information in the final weights.
Analysis of alternatives sorting
The fuzzy VIKOR method was used to rank the HES alternatives, and the results are shown in Table 7. The final ranking of the schemes is A1>A3>A2>A5>A4>A6. This result shows the relative advantages of each scheme when both group interests and individual regrets are considered. However, when only the group benefit of each scheme is considered, the ranking becomes A1>A3>A5>A4>A2>A6. This is because, in a purely aggregate evaluation, a scheme that performs very poorly in one aspect can be smoothed out by its performance in other aspects, leaving a serious shortcoming in the implementation of the project. The VIKOR method takes this factor into account and reduces the impact of extreme results on the comprehensive evaluation by introducing the individual regret value. Among all scenarios, A1 performs best in terms of both group benefit and individual regret. Therefore, A1 is the optimal solution.
Sensitivity analysis of criteria weights
The calculation of indicator weights is a crucial step in the comprehensive evaluation of HES, as it affects the final ranking results. The sensitivity analysis results of the environmental indicators are shown in Figure 4. A6 consistently remains the worst alternative, and A1 drops to second priority only when the weight of C24 increases to 20%. When the weights of C21, C22, and C23 change, the ranking results of the schemes do not change. The sensitivity analysis of C24 (Noise) shows that, as the weight of this indicator gradually increases, the ranking of A2 rises from third to first. This is due to the significant noise pollution generated by wind power compared with solar power, giving A2 a clear advantage over the other schemes in this indicator.
The sensitivity analysis results of the technical indicators are shown in Figure 5. The ranking results of A1 and A6 do not change with the variation of the technical indicator weights. The sensitivity analysis results of C32 and C34 show that A2, A4, and A5 are sensitive to C32, while A3 is sensitive to C34. This indicates that these schemes have noticeable advantages or disadvantages relative to the other schemes in these two indicators. Furthermore, the ranking results of all alternatives do not change significantly with variations in the weights of the indicators.
The sensitivity analysis results of the social-political indicators are shown in Figure 6. The best and worst schemes among the six alternatives remain A1 and A6, respectively. The sensitivity analysis of all the indicators shows that A4 is sensitive to C41 and C42, and A2 is sensitive to C43. Additionally, only a few schemes experience minor changes in their priority ranking.
The sensitivity analysis of the indicator weights shows that the priority ranking of A1 and A6 remains largely unchanged. Additionally, the ranking results of all alternatives do not change significantly with variations in the weights of individual indicators.
FIGURE 7 Sensitivity analysis results of the compromise coefficient β.
Sensitivity analysis of decision support coefficient
The advantage of the VIKOR method over other MCDM methods lies primarily in its ability to reflect the DMs' subjectivity, allowing them to make more aggressive or more conservative decisions. This advantage is realized in the calculation through the choice of the compromise coefficient. A higher compromise coefficient places greater emphasis on maximizing the overall group utility and gives less consideration to the personal regret of dissenting individuals, reflecting a risk-seeking DM. Conversely, a lower compromise coefficient represents a decision mechanism that aims to minimize individual regret and belongs to the risk-averse category. β = 0.5 represents a trade-off that considers both the majority group's interests and the minority's dissenting opinions, thereby representing risk-neutral DMs. Therefore, in this paper, the robustness of the model is validated by observing changes in the ranking results of the alternatives as the compromise coefficient varies.
From Figure 7, it can be observed that, regardless of how the DMs' strategy changes, A1 and A6 consistently remain the best and worst options, respectively. As the compromise coefficient gradually increases, the priority of A2 decreases, indicating that A2 has a significant advantage in a certain criterion; conversely, the priority of A3 increases, suggesting a more balanced performance across multiple indicators. Furthermore, the ranking of the alternative schemes does not change significantly with variations in the compromise coefficient.
Comparative analysis
To validate the rationality of the comprehensive evaluation model, this paper compares it with several MCDM methods commonly used in the field, as shown in Figure 8. In addition to the VIKOR method, TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution), TODIM, and FCE (Fuzzy Comprehensive Evaluation) have been widely used by many scholars in the field of comprehensive evaluation. Figure 8 shows that A1 and A3 are consistently ranked among the top two options across all methods. Furthermore, A1 is the optimal solution in all methods except FCE, which does not consider the specificity of the solutions or the DMs' preferences. Additionally, the ranking of the alternatives remains relatively stable across all methods. Therefore, the model constructed in this paper demonstrates rationality.
Conclusion and outlook
Sustainable development is a consensus and goal of the entire human society. With the continuous maturation of new energy generation technologies and of storage technologies such as hydrogen energy, HES represents the inevitable trend towards integrating energy sources and loads in future energy systems. However, the lack of a comprehensive evaluation system hinders the development and layout of HES. Therefore, this paper constructs a comprehensive evaluation framework for HES from three aspects: system architecture, evaluation indicators, and evaluation models. Firstly, the energy flows of the HES, including electricity, heating, and cooling, are clearly decomposed and presented. Secondly, 12 indicators related to the comprehensive evaluation of HES are identified from four dimensions: economic, environmental, technological, and social-policy, and specific quantitative methods are provided for the quantitative indicators. Then, a comprehensive evaluation model based on fuzzy theory and MCDM theory is constructed. Finally, the robustness and rationality of the proposed method are verified through sensitivity analysis and comparative analysis. The main conclusions derived from this study are as follows: (1) C11 (Investment cost), C31 (Comprehensive energy efficiency), C12 (Dynamic payback period), and C32 (Energy supply reliability) are the four most important criteria, with weights of 7.21%, 7.17%, 7.17%, and 7.15%, respectively. C11 and C12 reflect the economic characteristics of HES as an energy project, while C31 and C32 represent its energy supply characteristics. (2) A1 is the optimal alternative for the layout of the HES in the industrial park in Gansu. However, A1 has shortcomings in land occupation, noise, and the ES equivalent utilization coefficient; DMs can optimize the scheme in these three aspects to maximize the benefits of the HES. (3) When collecting and processing expert information, the reasonable use of fuzzy theory maximizes the acquisition and retention of original decision information and fully reflects the psychological factors of DMs. (4) In applying the fuzzy VIKOR method, DMs can change the compromise coefficient to reflect changes in decision psychology and influence the final determination of the scheme.
The comprehensive evaluation framework for the HES constructed in this article is universal and can serve as a reference for the layout of HES in places with abundant wind and solar resources. However, there are still some shortcomings in this paper: ① Since this decision support model has not yet been applied to a real HES, the true performance of the optimal scheme selected by the model remains open to question. ② In the VIKOR method, the coefficient combining the group benefit value and the individual regret value is set to 0.5, the value used in most of the literature.
Therefore, how to determine this coefficient more rigorously is an important direction for improving the model. ③ The comprehensive evaluation index system of HES is established under the current development background; when the future socio-economic situation changes or disruptive technologies emerge, the indicators should be adjusted accordingly. Therefore, we will continue to optimize the model and address the above three issues in follow-up research.
FIGURE 4 Sensitivity analysis results of environmental indicators.
FIGURE 5 Sensitivity analysis results of technical indicators.
FIGURE 6 Sensitivity analysis results of social-political indicators.
TABLE 1 Index aggregation in relevant literature.
TABLE 3 Six different HES capacity configuration schemes.
TABLE 4 Quantitative and qualitative data for the six alternatives.
TABLE 5 Subjective weight evaluation matrix of indicators and algorithm steps of SWARA.
TABLE 6 Algorithm steps of objective indicators and comprehensive weights.
TABLE 7 Calculation results of group utility value, individual regret value, and collective benefit coefficient.
The results of comparative analysis. | 9,501.8 | 2023-12-29T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Immunotherapy of Cancer: Reprogramming Tumor-Immune Crosstalk
The advancement of cancer immunotherapy faces barriers which limit its efficacy. These include weak immunogenicity of the tumor, as well as immunosuppressive mechanisms which prevent effective antitumor immune responses. Recent studies suggest that aberrant expression of cancer testis antigens (CTAs) can generate robust antitumor immune responses, which implicates CTAs as potential targets for immunotherapy. However, the heterogeneity of tumor cells in the presence and quantity of CTA expression results in tumor escape from CTA-specific immune responses. Thus, the ability to modulate the tumor cell epigenome to homogenously induce expression of such antigens will likely render the tumor more immunogenic. Additionally, emerging studies suggest that suppression of antitumor immune responses may be overcome by reprogramming innate and adaptive immune cells. Therefore, this paper discusses recent studies which address barriers to successful cancer immunotherapy and proposes a strategy of modulation of tumor-immune cell crosstalk to improve responses in carcinoma patients.
Introduction
Conventional approaches in the therapy of cancer, such as chemotherapy, have shown only modest success in the treatment of advanced carcinoma [1]. Historical comparisons since the late 1970s have shown that the introduction of combination cytotoxic chemotherapy has produced a modest 9-12 month gain in survival compared with untreated breast cancer patients [2]. Despite advances in conventional cytotoxic therapies of early-stage breast cancer [3,4] there remains no therapeutic strategy that can ensure relapse-free survival. Furthermore, studies have shown that 20% of clinically disease-free early-stage breast cancer patients relapse within 10 years after conventional therapies [5]; indeed, most cancer-related deaths within the United States are attributed to relapse [6]. Thus, there is an urgent need to develop more effective therapies to overcome breast cancer relapse and to treat advanced cancer. To this end, immunotherapy emerges as a promising strategy for the prevention of tumor relapse when combined with conventional therapies.
Thus far, advances in the immunotherapy of cancer have also been met with a number of setbacks. Several vaccination strategies used against breast cancer have been successfully employed to induce tumor-specific CD8+ and CD4+ T-cell responses; however, such immunological responses have rarely been potent enough to achieve objective results [7][8][9]. Additionally, it has been demonstrated by several groups that adoptive cellular therapy (ACT) directed against highly immunogenic melanoma-associated antigens results in objective responses in animal models as well as in some melanoma patients [10,11]. ACT has also been tested against breast cancer both in preclinical and clinical studies [12,13]; however, unlike melanoma, ACT has not produced promising results in breast cancer patients and has only displayed effectiveness in animal models in prophylactic settings [14,15], rather than against well-established, vascularized tumors. Such failure has been attributed, in part, to (i) the lack of a robust antitumor immune response as a result of the expression of weakly immunogenic tumor antigens coupled with the presence of low-frequency and low-affinity T cells, and (ii) the suppression of antitumor immune responses through the activity of immunosuppressive mechanisms. Indeed, distant recurrence of breast cancer may occur even in the presence of tumor-specific immune responses. The ability to overcome these barriers will likely improve the efficacy of immunotherapy directed against cancer. To address these issues, the crosstalk between tumor cells and cells of the immune system should be altered in order for reprogrammed tumor cells and immune cells to prevent tumor relapse as well as induce regression of advanced cancer.
Immune Suppression
It is now well established that the mammalian immune response can be suppressed through various mechanisms. The expression of immunoregulatory molecules, such as CTLA-4 and PD-1, as well as the ectoenzyme CD73, inhibits the proliferation and function of conventional T cells [16,17]. Furthermore, immunosuppressive cells such as alternatively activated M2 macrophages, type II NK cells, and regulatory T cells have been demonstrated to antagonize tumor immunosurveillance [18][19][20][21][22].
Results from clinical studies involving breast cancer patients indicate that another critical regulator of tumor immunosurveillance, the myeloid-derived suppressor cell (MDSC), was found to be the most abundant type of suppressor cell [23,24] and thus represents a major hurdle in overcoming antitumor immune suppression. MDSCs represent a phenotypically heterogeneous population of myeloid cells at different stages of maturation. These cells have been found in tumor-bearing mice as well as cancer patients and have been shown to possess multiple mechanisms to suppress the antitumor immune response [25,26]. Such mechanisms include disrupting TCR antigen recognition and T-cell-mediated IFN-γ production [27,28], depletion of essential amino acids within the tumor microenvironment [29], and overproduction of reactive oxygen species (ROS) [30]. Murine MDSCs are defined as coexpressing Gr-1 and CD11b, with two subsets commonly being described: granulocytic (CD11b+ Ly-6G+ Ly-6C^low) and monocytic (CD11b+ Ly-6G− Ly-6C^high) [31]. Human MDSCs, on the other hand, have been difficult to identify, as initial studies revealed that these cells express varied phenotypes and suppressive patterns [25]. It is now regarded, however, that human MDSCs fall into two main subsets: a monocytic population characterized by expression of CD14 and a granulocytic population characterized by CD15 expression; both subtypes have been reported to express the common myeloid markers CD11b and CD33, with minimal expression of myeloid maturation markers such as HLA-DR [32]. The accumulation of these cells in association with cancer development is corroborated by experimental mouse models, indicating that MDSCs develop as a function of tumor progression [33]. For instance, our group has previously reported that FVBN202 mice, which overexpress the rat neu oncogene in their mammary glands, develop atypical ductal hyperplasia (ADH) and ductal carcinoma in situ (DCIS) in mammary epithelial cells prior to the formation of spontaneous mammary tumors [34]. DCIS of the breast is conventionally regarded as a precursor of invasive breast cancer, and ADH is a risk factor for the development of the disease [35,36]. Compromised anti-neu immune responses occur as a result of the emergence of premalignant events, such as ADH and DCIS, which are characterized by an accumulation of MDSCs in the blood, bone marrow, secondary lymphoid tissues, and within tumor lesions due to an increased production of tumor-derived soluble factors [34,[37][38][39][40][41]. Such findings provide evidence that MDSCs function as potent inhibitors of antitumor immunity in breast cancer models. Likewise, human MDSCs have been observed to negatively regulate both adaptive and innate immunity during cancer development and progression, with accumulation having been observed in peripheral blood and lymphoid tissues as well as draining tumor sites of cancer-bearing patients [31]. In addition to breast cancer, the accumulation of MDSCs has been observed in other neoplastic diseases, such as hepatocellular, pancreatic, esophageal, and colorectal cancers [26], and is generally correlated with advanced clinical cancer stage and metastatic tumor burden, with a demonstrated suppression of antitumor immune responses correlating with poor responses following conventional therapies [14,23,42,43]. Thus, MDSC accumulation is paramount in the ability of cancer to evade effective immune responses.
Therefore, suppression of immune responses mediated by MDSCs must be overcome to rescue and facilitate effective tumor-specific immunity. Accordingly, it was reported that activated NKT cells can overcome MDSCs, thereby supporting an effective adaptive immune response against cancer [15,44,45]. Our group has recently developed a novel strategy of reprogramming immune cells ex vivo to overcome MDSC-mediated antitumor immune suppression in a prophylactic model of murine breast carcinoma upon adoptive transfer, which resulted in a demonstrated ability to enhance immune-mediated rejection of tumors [15]. However, this approach failed to protect mice in a therapeutic model against established tumors. Thus, in addition to overcoming MDSC-mediated suppression, improvements in the efficacy of immunotherapy will likely require further addressing the crosstalk between immune and tumor cells; one such strategy is enhancing tumor cell immunogenicity.
In Situ Vaccination: Modulating the Tumor Cell Epigenome
A barrier to successful immunotherapy of breast cancer is the low immunogenicity of tumor cells, for example, the expression of tumor-associated antigens which are recognized as "self" by the immune system. Therefore, improving the immunogenicity of the tumor is essential to improving tumor immunotherapy. To this end, in situ induction of foreign-like antigens, such as cancer testis antigens (CTA), to which T-cell tolerance does not exist, is a promising option. CTAs are highly immunogenic with no natural self-tolerance due to the observation that they are normally only expressed during embryonic development; after birth, expression is generally limited to immunologically privileged germ cells and the placenta [46]. Aberrant CTA expression was first described in melanoma; as such, this expression was found to generate CTA-specific cytotoxic T-cell responses [47]. Recently, it was reported that treatment of metastatic melanoma with autologous CD4+ T cells specific for the CTA NY-ESO-1 elicited long-term complete remission [48]. In addition to melanoma, CTA expression has also been observed in hematological malignancies [49] as well as solid tumors, including breast cancer [50,51]. Further, CTA expression in breast cancer has been shown to elicit a broad range of cellular and humoral immune responses [50,52,53]; both CD8+ T-cell and CD79+ B-cell infiltration have been observed in primary and metastatic NY-ESO-1-expressing breast cancer [54]. Of note, significantly elevated expression of NY-ESO-1 and MAGE-A, another highly immunogenic CTA, was detected in triple-negative breast cancers compared to other types of breast cancer [55]; these antigens therefore represent targets in an otherwise immunologically refractory breast cancer subtype. Importantly, CTA expression is normally silenced by methylation within the promoter region of these genes. Methylation at the C-5 position of cytosine bases within DNA is a covalent chemical modification which characterizes a key, biologically functional, epigenetic modification of the animal genome [56]. This modification primarily occurs at CpG dinucleotides in mammals, where DNA methyltransferases (DNMTs) mediate the transfer of methyl groups to cytosine, thereby generating 5-methylcytosine (5mC), which has been shown to play a critical role in cellular protein expression through the transcriptional silencing of genes [57]. Aberrant CTA expression likely occurs due to epigenetic molecular alterations which arise during tumor progression; cancer cells display drastic changes in DNA methylation status, typically exhibiting global DNA hypomethylation as well as region-specific hypermethylation [58], resulting in irregular expression of CTAs. Our group has observed that a lack of such aberrant CTA expression within breast tumor lesions at the time of diagnosis correlated with eventual relapse after conventional therapies (unpublished data), along with a lack of expression of an immune function gene signature [59]. Conversely, the tumors in patients who remained free of relapse expressed both CTAs and the immune function gene signature. These data suggest that CTA expression in breast cancer patients activates effective immune responses, which results in improved prognosis after conventional treatments.
In order to induce and/or increase expression of CTAs to function as target antigens and improve the prognosis in patients with breast cancer, it is possible to modulate the tumor epigenome to initiate the cellular CTA transcriptional program; such an approach will serve to impart a more immunogenic tumor cell phenotype. Azacitidine (Aza) and Decitabine (Dec) are both hypomethylating agents employed in epigenetic therapy to modify cellular methylation patterns; both of these agents have been approved for clinical use in the treatment of myelodysplastic syndrome. Aza and Dec function as cytosine analogs, which leads to their incorporation into newly synthesized DNA strands during S phase of the cell cycle; these agents have been shown to induce and/or increase the expression of various CTAs in a variety of in vitro and in vivo tumor models [49,[52][53][54]. Both Aza and Dec have demonstrated the ability to induce the expression of CTAs, as well as the tumor suppressor gene p53 [60] and the death receptor Fas [61], on tumor cells. These effects are attributed to their capacity to function as potent DNMT inhibitors through the formation of a covalent complex with a serine residue at the active site of DNMT1, which results in CpG island demethylation during cellular proliferation. This, in turn, results in hypomethylation within the promoters of tumor suppressor genes as well as highly immunogenic CTAs [56,[62][63][64], thereby rendering tumor cells susceptible to CTA-reactive immune responses and to suppression of proliferation via expression of p53, as well as rendering these tumor cells more susceptible to FasL-induced apoptosis by CTA-reactive T cells. Such modulation of CTA expression using Aza has been shown to generate CTA-specific T-cell responses in patients with acute myeloid leukemia, as demonstrated by our group [65]. Others have demonstrated the feasibility of inducing CTA expression in vivo using Dec in the 4T1 model of murine breast carcinoma, resulting in greater tumor cell cytotoxicity upon treatment with CTA-specific T cells [56]. Further, an ongoing clinical trial in breast cancer patients is testing the efficacy of Dec for the induction of the expression of ER/PR in patients with hormone-receptor-negative tumors in order to render them susceptible to hormonal therapy [66].
Decitabine is a particularly attractive option for inducing CTA expression, as it functions as a prodrug which requires activation by deoxycytidine kinase (DCK), an enzyme preferentially expressed in tumor cells and myeloid cells. Thus, the effects of Dec are likely tissue-specific, as DCK is selectively expressed in tumor cells and myeloid cells, thereby protecting T and B cells from the potentially deleterious demethylating effects of this agent. In addition, DCK has been found to be overexpressed in poor-outcome breast cancer [67], suggesting that epigenetic therapy to induce CTA expression may prove to be an efficacious approach in breast cancer patients with poor prognosis.
Our group has recently demonstrated epigenetic modulation using sequential Aza and the immunomodulatory agent lenalidomide for the induction of CTA expression in the tumor and of CTA-specific antitumor immune responses in patients with multiple myeloma [65]. Upon determination of CTA expression in the bone marrow of multiple myeloma patients following treatment with Aza, we found that CTA expression is induced exclusively in CD138+ malignant plasma cells in vivo, which suggests a preferential induction of hypomethylation in CTA promoters within tumor cells. As a result of such a strategy, which we term in situ vaccination or epigenetic induction of an adaptive immune response, we have determined that the observed induction of CTA expression resulted in the generation of robust CTA-specific adaptive immune responses [65]. We believe that this strategy will maintain long-term surveillance against malignant plasma cells in patients with MM and translate into prolonged freedom from progression in this otherwise incurable disease. Furthermore, these data suggest that epigenetic therapeutic agents, such as Dec, when used in a neoadjuvant setting, may induce CTA expression in tumor-bearing patients and may therefore activate early CTA-specific immune responses to prevent recurrence after conventional therapies.
Reprogramming of Tumor-Sensitized Immune Cells
The rationale for ex vivo reprogramming of tumor-sensitized immune cells is based on overcoming the low frequency of endogenous tumor-reactive T cells by driving their expansion and activation toward the most effective antitumor phenotype(s). We have previously shown the ability of ex vivo reprogrammed Her2/neu-sensitized immune cells to protect mice in a prophylactic setting when used for adoptive cellular therapy (ACT) [15]. Cellular reprogramming through the combined use of bryostatin 1, a potent activator of classical and novel protein kinase C (PKC) [68,69], and ionomycin (B/I), a calcium ionophore [70,71], followed by differentiation using gamma-chain (γc) cytokines (IL-2, IL-7, and IL-15), results in the ability to selectively activate tumor-primed T cells, NK cells, and NKT cells, as described by our group [72,73]. In particular, the generation of both CD4+ and CD8+ central memory (CD44+CD62L^high) lymphocytes, which are necessary to mediate protection in ACT recipients upon challenge with antigen-expressing tumor cells, is observed. Furthermore, we observed that reprogrammed NK/NKT cells surprisingly functioned to render T cells resistant to MDSC suppression and induced tumor rejection even in the presence of MDSCs in FVBN202 mice [15]. Therefore, it may prove beneficial to harvest autologous peripheral blood mononuclear cells (PBMC) from breast cancer patients having received neoadjuvant Dec treatment in order to reprogram CTA-sensitized immune cells using B/I and γc cytokines; following conventional therapies, such reprogrammed lymphocytes can then be reinfused back into the host, whereupon they may exert long-lived protection against relapse, even in the presence of classical immunosuppressive cells such as MDSCs.
Rescue of Late Antitumor Immune Responses
We have previously demonstrated that MDSC accumulation occurs as a function of tumor-derived soluble factors, such as GM-CSF, in the FVBN202 model of breast carcinoma [74], while others have identified additional tumor-derived soluble factors and inflammatory cytokines which are responsible for the accumulation of MDSCs [38][39][40][41]. We have also verified the ability of radiation therapy (RT) to reject primary tumors, thereby resulting in the reduction of MDSCs within the tumor-bearing host [15]. Ionizing irradiation is known to cause cellular stress and enhance the synthesis of a variety of immune-stimulatory and -modulating molecules such as heat shock proteins (HSP) [75,76], high mobility group box 1 (HMGB1) [76], and NKG2D ligands [77]. Such danger signals are then sensed by cells of the immune system. For instance, toll-like receptor (TLR)-4 on DCs interacts with its ligands, including HMGB1 [78] and HSPs [79], and enhances the maturation and antigen presentation capacity of DCs. Detection of danger signals in tissues by leukocytes activates an immune response involving cells of the innate (myeloid and NK cells) and adaptive (T and B cell) lineages. RT-induced NKG2D ligands, which engage NKG2D, an activating receptor on NK cells, together with HSP70 render tumor cells more susceptible to NK-cell-mediated cytolysis [80]. Thus, combining RT with an enhanced immunotherapeutic strategy, such as neoadjuvant administration of Dec, is likely to enhance antitumor immune responses, produce objective responses against advanced breast cancer, and result in a decreased risk of disease relapse. The removal of MDSC-mediated suppression via RT may, therefore, facilitate the rescue of CTA-specific antitumor immune responses against residual tumor cells and result in the prevention of future disease recurrence. Accordingly, we propose that CTA-reactive T cells became antigen-experienced during tumorigenesis due to aberrant CTA expression; however, it is likely that such CTA expression occurs late in the progression of the tumor, thus rendering CTA-reactive T cells ineffective due to MDSC accumulation driven by tumor-derived soluble factors. It is expected, nevertheless, that patients who receive neoadjuvant Dec followed by radiation therapy or surgery to remove the primary tumor will experience a reduction in MDSC accumulation; we propose that such activity will result in the rescue of CTA-reactive T cells from suppression to eliminate residual tumor cells and thereby decrease the likelihood of future disease recurrence.
Limitations and Future Considerations
The majority of solid tumors and hematological malignancies undergo a period of dormancy that is characterized by years to decades of minimal residual disease (MRD) in which cancer progression has paused [81,82]. Indeed, disease-free periods in breast cancer patients can last as long as 25 years and are clearly associated with the presence of MRD; subsequent relapse represents the escape of the tumor from dormancy, which can include locoregional recurrence as well as distant metastatic disease [81,83,84]. Tumor dormancy may be the result of hypoxic stress, as well as other as yet unknown cues from the microenvironment of the host [85]. The mechanism of tumor cell dormancy may best be explained by cellular quiescence. Quiescence is defined as growth/proliferation arrest and is thought to be due to G0-G1 cell cycle arrest, during which cells pause cellular activities, which can render them refractory to differentiation and proliferation [86,87]. Thus, given that the DNMT inhibitors Dec and Aza are incorporated into cellular DNA during S phase, the induction of CTA expression requires tumor cells to be actively proliferating. As such, the in situ vaccination strategy outlined above will likely be less effective against any residual tumor cells that have entered G0-G1 arrest. Therefore, further understanding of the process by which residual tumor cells naturally exit dormancy may provide novel approaches to coax such cells to exit cell-cycle arrest. Future studies investigating the ability of Aza or Dec combined with histone deacetylase inhibitors (HDI) to reinitiate the cell cycle would be beneficial in addressing this problem. Such efforts may result in an enhanced ability of the in situ vaccination strategy to target and eliminate MRD, which may therefore lower the incidence of tumor recurrence presently observed in breast cancer patients. | 4,676 | 2012-10-11T00:00:00.000 | [
"Biology",
"Medicine"
] |
Baryon Asymmetry as a Stochastic Result and Implications for the Time of Baryogenesis
We try to explain the known asymmetry between baryons and antibaryons in the observable universe as the result of random fluctuations in the number of baryons and antibaryons within the particle horizon. The establishment of this asymmetry occurs at a poorly constrained time, before the epoch of baryon/antibaryon annihilation, between 10^-32 and 10^-10 s. The low initial gravitational entropy of the universe is due to this initial matter/antimatter asymmetry. The observed ratio of cosmic microwave background photons to baryons (~ ten billion) is a measure of this asymmetry. Different horizons render different probability distributions for the baryon asymmetry, all variants of a Gaussian centered at zero. Using the current level of baryon asymmetry as an estimate of the one-sigma width of the Gaussian, we estimate the time of baryogenesis to be on the order of 10^-10 s, near the time of reheating, within established constraints.
Introduction
Baryon asymmetry has escaped conclusive explanation since the prediction and experimental verification of antimatter's existence. Standard models of particle physics predict the production of equal amounts of matter and antimatter, which contradicts observation [1]. In an inflationary scenario, baryogenesis occurs at a poorly constrained energy, between 100 GeV (the electroweak scale) and 10^12 GeV (sphaleron processes) [2]. These energy scales constrain the time of known baryogenesis models to lie between the end of inflation and the electroweak phase transition, 10^-32 s < t < 10^-10 s. The baryon asymmetry parameter is given by eq. (1). As one averages the baryon asymmetry (eq. (1)) over larger volumes, does the total baryonic charge average to zero [3]?
If it does, what might a probability density plot of eq. (1) for any particular volume look like? This question envisions random fluctuations in eq. (1) as baryon/antibaryon levels vary in a volume that exists within an arbitrarily larger volume over which eq. (2) holds (Figure 2). We expect the distribution to be centred at zero asymmetry, where averaging over larger horizon volumes reveals global symmetry. The conceptual models in Figure 1 disagree about eq. (2). If there is some 'global' baryon symmetry, looking at a number of distinct horizon volumes should reveal a Gaussian distribution of the asymmetry. If there is some fixed global asymmetry, then we might expect delta functions at the observed value, or at a different value for different inflationary bubbles. A distinct but related question concerns the size of matter/antimatter-dominated regions and whether they are compatible with the observed large-scale structure. Omnes attempted to answer these questions while assuming a globally baryon-symmetric universe [4]. His mechanism of baryogenesis is a phase separation between matter and antimatter at high temperature, creating distinct matter- and antimatter-dominated regions that partially annihilate but later coalesce into structures roughly the size of galaxies, with the expected baryonic asymmetry ~10^-10. We use a simpler mechanism, attributing the asymmetry to random fluctuations within a comoving volume. The following analysis makes use of the time-dependent baryonic density of the observable universe and the changing size of the horizon volume. We expect the observed asymmetry to lie within a bound that demarcates a statistical fluctuation; hence we set the observed asymmetry equal to the one-sigma width of the derived (time-dependent) probability density function Pr(A) of the asymmetry A. From this we find the time at which the asymmetry of our current horizon became fixed, if the baryon asymmetry was indeed a stochastic result.
Observational constraints
Observed baryon asymmetry is calculated under the ΛCDM model either from the abundances of light elements in the intergalactic medium (IGM) or from the temperature fluctuation spectrum of the cosmic microwave background (CMB). Each measures the baryon asymmetry at a different point in the history of the universe, and together they serve to test its constancy. The observed value lies in the range (5.80-6.20) × 10^-10 [5]. The AMS (Alpha Magnetic Spectrometer) datasets constrain models that posit large amounts of antimatter in the observable universe, either as a homogeneous matter-antimatter mixture or as distinct co-existing matter- and antimatter-dominated regions of space. AMS-01 found no antihelium nuclei, with an upper limit on the antihelium-to-helium ratio of order 10^-6 [6]. Large-scale antimatter structures are unobserved, and antimatter deposits larger than those synthesized at CERN are unobserved in our Solar System.
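As a rough numerical illustration of the stochastic criterion used in this paper (the observed asymmetry is equated with the one-sigma width 1/√N of the random-walk Gaussian derived below; the variable names, and the identification of the asymmetry parameter with the random-walk fraction, are ours), a minimal sketch:

```python
# Observed baryon asymmetry, central value of the quoted range (5.80-6.20) x 10^-10
eta_obs = 6.0e-10

# If the asymmetry is a one-sigma fluctuation of a fair random walk over N particles,
# then sigma_A = 1 / sqrt(N); setting eta_obs = sigma_A and solving for N gives:
N_required = 1.0 / eta_obs**2
print(f"Particles the horizon must have contained: N ~ {N_required:.2e}")  # ~ 2.8e18
```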
Results and Discussion
Consider a horizon volume existing within a larger volume (Figure 1). Flipping a fair two-sided coin, let heads result in the addition of a baryon to the volume and tails in the addition of an antibaryon. The number of coin flips equals the total number of particles in the volume, since each flip adds one particle: N(t) = n(a(t)) V_H(a(t)), where n(a) is the scale-factor-dependent particle density and V_H(a(t)) is the particle-horizon volume.
As flips are undertaken, the quantity in eq. (4) undergoes a random walk, increasing or decreasing by one with each flip. The stochastic fluctuations described in Figure 1 are represented by the randomness of the coin flips. The statistics of such a random walk are well known [7]. Because the total number of flips is set by the particle content of the horizon, the probability density function takes a specific form [8]; the relevant present-day photon number density is n_γ ≈ 411 cm^-3. In the coin-flipping situation described above, the net baryon excess effectively undergoes a random walk around an initial value of zero (Figure 2). Given that heads and tails are equally likely, we expect the average value of the excess (and hence of the asymmetry) to be zero for large enough particle number. The probability of encountering a particular value of the excess for a given number of flips is given by the binomial distribution; its normalized normal approximation, after changing variables from the excess to the asymmetry, yields a well-defined Gaussian probability density for the asymmetry.
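The equations referred to in this paragraph are not legible in this copy; the following is a reconstruction of the standard random-walk expressions the argument appears to rely on, with ΔN denoting the net baryon excess, N the total number of flips (particles), and A = ΔN/N the asymmetry (these symbols are our labels):

```latex
% Binomial distribution of the net excess after N fair coin flips:
P(\Delta N \mid N) \;=\; \binom{N}{\tfrac{N+\Delta N}{2}}\,2^{-N}

% Gaussian (normal) approximation for large N:
p(\Delta N) \;\simeq\; \frac{1}{\sqrt{2\pi N}}\,
  \exp\!\left(-\frac{\Delta N^{2}}{2N}\right),
\qquad \sigma_{\Delta N} = \sqrt{N}

% Change of variables to the asymmetry A = \Delta N / N:
\Pr(A) \;\simeq\; \sqrt{\frac{N}{2\pi}}\,
  \exp\!\left(-\frac{N A^{2}}{2}\right),
\qquad \sigma_{A} = \frac{1}{\sqrt{N}}
```

Setting the observed asymmetry equal to σ_A = 1/√N is then the criterion used to estimate when the asymmetry of our horizon froze out.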
Figure 1. (a) At the heart of this analysis is the question of whether eq. (2) holds or not. If it does, the result is (i) blue: a statistically expected distribution of the asymmetry for a given horizon size in a baryon-symmetric universe; if not, (ii) green: delta functions at either the observed asymmetry of ~10^-10 or at another asymmetry (green dashed) in a horizon volume distinct from ours in the same inflationary bubble.
Figure 2. The asymmetry parameter described by eq. (1) fluctuates, or 'randomly walks', in the sub-volume around the globally baryon-symmetric value of zero. The sub-volume represents the current size of our particle horizon, whose baryon asymmetry is postulated to be a stochastic result in a globally baryon-symmetric universe. | 1,503.6 | 2015-11-12T00:00:00.000 | [
"Physics"
] |
Torrefaction of Pine Using a Pilot-Scale Rotary Reactor: Experimentation, Kinetics, and Process Simulation Using Aspen Plus™
Biomass is an excellent sustainable, carbon-neutral energy source; however, its use as a coal/petroleum coke substitute in thermal applications poses several challenges. Several inherent properties of biomass, including its higher heating value (HHV), bulk density, and hydrophilic and fibrous nature, contribute to the challenges of using it as a solid fuel. Torrefaction, or mild pyrolysis, is a well-accepted thermal pretreatment technology that solves most of the above-mentioned challenges and results in a product with superior, coal-like properties. Torrefaction involves the heating of biomass to moderate temperatures, typically between 200 °C and 300 °C, in a non-oxidizing atmosphere. This study focused on evaluating the influence of torrefaction operating temperature (204–304 °C) and residence time (10–40 min) on the properties of pine. Tests were performed on a continuous 0.3 ton/day indirectly heated rotary reactor. The influence of torrefaction operating conditions on pine was evaluated in terms of the composition of the torrefied solids, mass yield, energy yield, and HHV using a simulation model developed in Aspen Plus™ software. A kinetic model was established based on the experimental data generated. An increase in torrefaction severity (increasing temperature and residence time) resulted in an increase in carbon content, accompanied by a decrease in oxygen and hydrogen. Results from the simulation model suggest that the solid and energy yields decreased with an increase in temperature and residence time. Solid yield varied from 80% at 204 °C to 68% at 304 °C, and energy yield varied from 99% at 204 °C to 70% at 304 °C. On the other hand, the HHV improved from 22.8 to 25.1 MJ/kg with an increase in temperature at 20 min residence time. Over the range of 10 to 40 min residence time at 260 °C, solid and energy yields varied from 77% to 59% and 79% to 63%, respectively; however, the HHV increased by only 3%. Solid yield, energy yield, and HHV simulated data were within the 5% error margin when compared to the experimental data. Validation of the simulation parameters was achieved by the conformance of the experimental and simulation data obtained under the same testing conditions. These simulated parameters can be utilized to study other operating conditions fundamental for the commercialization of these processes. The desirable torrefaction temperature to achieve the highest solid fuel yield can be determined using the energy yield and mass loss data.
Introduction
Torrefaction is a thermochemical process used to convert biomass into a coal-like substance with improved fuel characteristics compared to the original feedstock. Earlier kinetic studies have reported activation energies and frequency factors for wood decomposition into gas, tar, and char, described by three parallel first-order reactions [9]. Shang et al. (2013) studied the decomposition kinetics and devolatilization of wheat straw in a thermogravimetric analyzer coupled with a mass spectrometer. They applied a two-step reaction kinetic model that accounted for the initial dynamic heating period, which helped them obtain precise data that matched the experimental results at different temperatures. The activation energies and pre-exponential factors established for these two reactions were 71.0 and 76.6 kJ mol^-1, and 3.48 × 10^4 and 4.34 × 10^3 s^-1, respectively. These parameters and this model successfully predicted the residual mass of wheat straw in a batch torrefaction reactor [10].
Research involving kinetic parameters has mostly been successful in studying the effects of temperature, but information is limited when it comes to the effects of residence/reaction time. Wilk et al. (2017) used kinetic parameters to find that conducting torrefaction at a higher temperature helped to decrease the activation energy of the combustion process of miscanthus pine with improved fuel properties [11]. Moreover, results on composition, mass yield, energy yield, and higher heating values were not always reported and/or validated with experimental results, which is a critical part of ensuring the success of a model.
Lately, there has been an increase in the number of studies focusing on torrefaction characteristics for a wide variety of biomass species. The primary focus of this research has been on the fuel properties of torrefied products obtained using different process parameters [6,[12][13][14][15][16][17]. Although several torrefaction studies are available, the majority of them focus on experimental approaches and therefore provide limited information on process scale-up [6]. For instance, lab-scale experiments pose challenges in obtaining information on the process energy requirement, which is critical for the industrialization and commercialization of the process. These reasons might have hindered, to an extent, the implementation of torrefaction technology to produce solid biofuel on an industrial scale [18,19]. In order to minimize the gap between academia and industry, more process-modeling studies are required; there is a scarcity of torrefaction process-modeling work in the literature [20][21][22][23]. In simple torrefaction models, Hardianto et al. [20] and Dudgeon [24] estimated the yield and the heating value of torrefied biomass. Nikolopoulos et al. [21] studied the torrefaction of wheat straw using a combination of an Aspen Plus [25] model and chemical kinetics. However, none of these works provided any information on energy consumption or process energy efficiency.
Previous studies have used known feed compositions obtained from experimental studies and incorporated these results into a stoichiometric reactor (RStoic), a yield reactor (RYield), or both reactor types in tandem [20,26,27]. As stated by Bach et al. (2017) [28], the issue is that these pre-defined reactors cannot model the actual torrefaction reaction successfully. In order to address this issue, in this study a simplified kinetic model using the Arrhenius equation was developed for carbon, oxygen, and hydrogen separately, based on experimental data obtained at various temperatures and residence times. The novelty of this study lies in the fact that this model was simulated in Aspen Plus™ through a CSTR reactor that can account for changes in both the temperature and the residence time, and that the reaction kinetics were developed based on results obtained from pilot-scale torrefaction. A CSTR (continuous stirred-tank reactor) is commonly used in industrial processing operations. The advantages associated with a CSTR are effective mixing and well-maintained, controlled reactor temperatures. Under steady-state conditions, the output composition is identical to the composition of the material inside the reactor, which is a function of the residence time and the reaction rate [29]. These properties of the CSTR reactor were found to be suitable for the simulation of the torrefaction conditions under the assumptions of steady state and uniform mixing of the materials within the reactor.
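For reference, the steady-state balance behind the statement above can be written as follows for a species consumed by a simple first-order reaction (an illustrative assumption on our part, not the kinetic form fitted in this study):

```latex
% CSTR at steady state: inflow - outflow - consumption = 0, with r = k C
q\,C_{\mathrm{in}} \;-\; q\,C_{\mathrm{out}} \;-\; k\,C_{\mathrm{out}}\,V \;=\; 0
\quad\Longrightarrow\quad
C_{\mathrm{out}} \;=\; \frac{C_{\mathrm{in}}}{1 + k\,\tau},
\qquad \tau = \frac{V}{q}
```

Here q is the volumetric flow rate, V the reactor volume, and τ = V/q the residence time, which is why the exit composition depends only on the residence time and the reaction rate.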
In addition to predicting the yields, the proposed simulation model can also provide valuable information and insights into managing energy consumption, scale-up, and optimization of the reactor conditions.
Materials and Experimental Procedure
Pine wood chips were procured from the Winn Timber depot, Winnfield, LA, USA (February 2018) for further processing and experimentation. The pine chips obtained had an initial moisture content of 50 wt.% and were approximately 45 × 25 × 10 mm in size on an as-received basis. The pine chips were processed through a commercial chipper (one pass) to reduce the size to approximately 25 × 13 × 6 mm and smaller, and were stored for further analysis and experimentation. Torrefaction tests were performed on a 15 kg/h natural-gas-fired, indirectly heated rotary reactor; the dimensions and details of the reactor system are provided in [30]. The effects of temperature and residence time on mass yield, energy yield, higher heating value, proximate and ultimate analysis, and the energy yield in the off-gases of the feedstock were all evaluated in this study on a simulated torrefaction unit.
Pilot-Scale Experimental Procedure
A schematic of the torrefaction reactor set-up is presented in Figure 1. Pine chips were loaded into a hopper (1), dropped via a rotary air lock (2) into a mechanically vibrated hopper (3), and continuously fed into the rotary reactor (5) through a screw conveyor (4). As the reactor rotated, the chips were transported along the heated section of the reactor (5), which was located within the heating chamber (10), and were torrefied as they passed through. Torrefied pine was dropped into a sampling drum (7), which was hermetically sealed to the discharge end of the reactor (6) and cooled by continuously purging nitrogen. The volatile gas stream released during the process left the reactor at the process-side gas exit (9), while the combustion flue gases exited the heating chamber at the flue gas exit (8). The torrefaction temperature referred to throughout this document was based on the thermocouple located in the center zone of the reactor and was controlled by adjusting the natural gas flow rate. The residence times were controlled by adjusting the rotary reactor rpm. The inert atmosphere was maintained by purging nitrogen and maintaining a positive pressure inside the reactor system. The detailed experimental procedure can be found in the thesis [30]; a total of 12 experiments were conducted using pine at temperatures and residence times ranging between 232-315 °C and 16-24 min, respectively. The tolerance was ±7 °C for process temperatures and ±10% for residence times.
Post-Experimental Analysis
All the analytical methods used in this study, such as the higher heating value and the ultimate and proximate analyses of the raw and torrefied biomass samples, were performed in strict adherence to ASTM procedures. All samples were analyzed at least in duplicate to ensure accuracy.
The higher heating value (HHV) was measured using a bomb calorimeter according to the ASTM standard D-2015. A PARR 6200 Bomb Calorimeter (Parr calorimeter Model No. A1290DDEB, Parr Instrument Company, Moline, IL, USA) was used for the determination of the HHV.
The moisture content of the samples was determined as per ASTM Method 4442. Two different methods were used to analyze the moisture content independently: (1) a moisture analyzer (HB43-S Halogen, Mettler Toledo), and (2) the oven-dry method. A Vario Micro Cube Elemental Analyzer was used to determine the elemental composition of the pine and torrefied pine samples. The volatile matter and ash content of the samples were determined in accordance with ASTM standards D-3175-11 and E-1755-01, respectively. The analysis was carried out using a high-temperature vertical tube furnace (MTI GSL-1100X-S, MTI, USA) and a box furnace (Lindberg Blue BF51828C-1, Asheville, NC, USA). Fixed carbon was calculated based on the measured moisture content, volatile matter, and ash content of the samples.
Aspen Plus™ Simulation
Aspen Plus™ was used to simulate the torrefaction process. This flow-sheeting software, developed by AspenTech, permits the simulation of industrial processes by incorporating chemical and physical transformations in a comprehensive way [31]. The simulation technique uses the principle of a sequential modular approach to calculate the steady-state heat and mass balances, scale-up, sizing, and cost analysis for a chemical process. There are certain limitations associated with the traditional modular approach because the flow of information is fixed, being pre-determined by the process flow structure; on the other hand, the biggest advantage of the sequential modular approach lies in its robustness, flexibility, and reliability [32]. Aspen Plus™ modeling was conducted using block units that can simulate the inter-connected steps involving material streams and energy flows in unit operations and sub-processes. The software includes an extensive database of chemical compounds and their properties. It also supports the use of non-conventional components, such as coal and biomass, via ultimate, proximate, and sulfate analyses [33]. Aspen Plus™ also provides calculator blocks to incorporate Fortran code and change the design conditions. The software utilizes an iterative solution method to achieve convergence over a sequence of calculated streams and blocks [34].
The simulation of the torrefaction of pine in Aspen Plus™ was divided into two sections in series: drying and torrefaction. The simulation design is shown in Figure 2. Biomass contains a significant amount of moisture and hence was first dried prior to entering the torrefaction reactor. Drying is typically a highly energy-demanding step [35]. Pre-drying of biomass is a crucial step for successful torrefaction application, since combusting the volatile organic compounds released from the biomass can generate more than 80% of the required process heat, as demonstrated at North Carolina State University [36]. However, the amount of energy available from the process volatiles is dictated by the torrefaction processing conditions, such as temperature and residence time. Commercial applications have followed this approach and incorporated pre-drying methods in their designs; the Biomass Technology Group used this principle and utilized the gases released in the reactor as a heat source for drying by burning them in a combustion chamber [36]. The design shown in this study recycles heat from the exhaust gases for use in drying.
Process Flow Sheeting
The following general assumptions were made while constructing the simulation:
• The process is continuous and steady-state, with the mechanical aspects of the equipment disregarded. The model does not consider the movement of the material within the dryer or reactor.
• The simulation was conducted at atmospheric pressure (1 atm).
• The air used for drying and combustion was at 250 °C and 25 °C, respectively.
• The ultimate and proximate analysis data were used to define the non-conventional biomass feedstock. For the enthalpy and density calculations, the solid property models HCOALGEN and DCOALIGT for coal were used.
• Due to the low or near-atmospheric processing pressure (1 atm) and the presence of conventional gaseous compounds (such as H2O, CO, and CO2), the ideal gas law was adopted for calculating thermodynamic properties.
The sequential modular simulation of the torrefaction of the dried biomass using Aspen Plus™ blocks is laid out in the "torrefaction" section as follows:
1. B2 is a first-stage torrefaction step implemented with an RYield block to ensure the final drying of pine to 0 wt.% moisture.
2. The DRYER block separates the moisture from the biomass, which is subsequently released through the exhaust stream along with air.
3. B3 uses the elemental composition of the biomass to convert the incoming non-conventional dry biomass into a conventional form.
4. The reactor is an RCSTR block with a specified temperature and residence time. In this block, the kinetic model developed from the experimental data was used to determine the yield of torrefied solid product. The reaction kinetic parameters are shown in Table 2.
5. RGIBBS2 recomposes the gaseous products in stream TOGAS from their elemental constituents.
7. B4, RGIBBS3, and B6 are used in the combustion section. Exhaust gases after torrefaction are mixed with air to ensure the complete combustion of the torrefied gases. The heat generated from combustion is then recycled into the dryer to minimize the energy requirement for the drying process.
8. The final step involves the cooling of both TOGAS and the torrefied solids. This section is implemented in Aspen Plus™ by means of conventional "heat exchanger" blocks (Figure 2). After combustion, the torrefied gases are mixed with air; this dilutes the exhaust gases and decreases their temperature before they are released into the atmosphere.
Differential Method of Rate Law
Differential rate laws express the rate of reaction as a function of the change in the concentration of one or more reactants over a specified time interval. These rate laws help in determining the process by which the reactants turn into products. In the kinetic modelling, it was assumed that only solids were present at the initial condition of torrefaction and that the kinetic rates are based on the ultimate analysis of the feed. The kinetic reaction rates were represented by the Arrhenius equation, which contains two parameters, the pre-exponential factor (A) and the activation energy (Ea). The kinetic models were implemented as power-law type kinetic expressions, with the reaction rate calculated in Aspen Plus™ by Equation (1), of the form r = A·exp(−Ea/(R·T))·C^α, where r is the rate of reaction, A the pre-exponential factor, T the absolute temperature, Ea the activation energy, R the gas constant, C the concentration of the element considered, and α the reaction order. The values of k (reaction rate constant) and α were predicted using Polymath non-linear regression (L-M) at each temperature. For each temperature, the value of α was fixed and the value of k was predicted. The method was repeated until a constant value of k with an adjusted R² above 0.9 was obtained. The kinetic parameters for the reaction temperatures 232-288 °C (450-550 °F) are shown in Table 2. The Arrhenius plots of these data are shown in Figure 3a-c. The slopes and intercepts were calculated by linear regression using Excel and were used to obtain the activation energy and the pre-exponential factor. Polymath regression analysis data are provided in the supplementary material (Figures S1-S6).
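To illustrate how power-law Arrhenius kinetics of this kind can be evaluated outside Aspen Plus™, the short Python sketch below computes the rate constant and applies a simple first-order loss of an elemental fraction over the residence time. The activation energy and pre-exponential factor used here are hypothetical placeholders, not the fitted parameters of Table 2.

```python
import numpy as np

R = 8.314  # J/(mol*K), universal gas constant

def arrhenius_k(A, Ea, T_celsius):
    """Rate constant k = A * exp(-Ea / (R*T)) at a temperature given in deg C."""
    T = T_celsius + 273.15
    return A * np.exp(-Ea / (R * T))

def remaining_fraction(x0, A, Ea, T_celsius, t_min):
    """Remaining elemental mass fraction after t_min minutes, assuming a
    simple first-order loss x(t) = x0 * exp(-k * t)."""
    k = arrhenius_k(A, Ea, T_celsius)      # 1/s
    return x0 * np.exp(-k * t_min * 60.0)  # convert minutes to seconds

# Hypothetical placeholder parameters -- NOT the fitted values of Table 2.
A_oxy, Ea_oxy = 1.0e2, 7.0e4  # 1/s, J/mol

for T in (232, 260, 288):  # the experimental temperatures, deg C
    x = remaining_fraction(0.43, A_oxy, Ea_oxy, T, t_min=20)
    print(f"{T} C, 20 min: oxygen mass fraction ~ {x:.3f}")
```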
Integral Method of Rate Law
Integrated rate laws express the rate of reaction as a function of the initial concentration and a measured concentration of one or more reactants after a specific amount of time (t) has passed. They are used to determine the rate constant and the reaction order from experimental data (Kissinger, 1957). The concentrations, expressed as mole fractions of each component, were calculated on a basis of 1 g of biomass, and the final values were obtained after multiplying by the solid yield. In this study, an integral method was utilized to determine the reaction order. Experimental results for carbon, oxygen, and hydrogen were studied, and the results were fitted to zero-order, 1st-order, 2nd-order, and 3rd-order reactions. The value of R² for the linear regression was used to determine which reaction order was most applicable to the tested dataset. The equations for each reaction order are shown in Table 3, and the Arrhenius plots for the corresponding reactions are shown in Figure 4.
Table 3. Integral forms of the reaction equations (Equations (8)-(12)) [29]; the table lists each reaction order together with the integral form of its rate equation.
The reaction kinetics for the formation of torrefied solid products, together with experimental correlations (Equation (11)) describing the dependence of the conversion of the elemental constituents carbon, hydrogen, and oxygen, as well as the HHV, on the torrefaction operating conditions, were derived from the experimental data. The correlation parameters for the HHV were predicted using Polymath non-linear regression (L-M), and Equation (12) was developed.
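The integral forms themselves are not reproduced in this copy of Table 3; the standard textbook expressions for the orders tested, written for a concentration C with initial value C0 and rate constant k (our notation, assumed to match the table), are:

```latex
\text{zero order:}  \quad C_0 - C \;=\; k\,t \\
\text{first order:} \quad \ln\!\frac{C_0}{C} \;=\; k\,t \\
\text{second order:}\quad \frac{1}{C} - \frac{1}{C_0} \;=\; k\,t \\
\text{third order:} \quad \frac{1}{2}\!\left(\frac{1}{C^{2}} - \frac{1}{C_0^{2}}\right) \;=\; k\,t
```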
Simulation Runs and Data Validation
The simulation data generated were compared with the experimental results obtained from the torrefaction pilot-scale unit at three temperatures (232, 260, and 288 °C) and three residence times (16, 20, and 24 min). A standard error of 5% was applied to the experimental data to clearly distinguish the deviations between the simulation results and the experimental data. Both the differential and integral methods were used to determine the reaction order for each elemental species. The results presented are based on the predicted activation energies and pre-exponential factors shown in Table 2, obtained with the differential method of rate law; these parameters were found to be the most suitable for this study based on the data fitting and comparison with the experimental results.
Proximate and Ultimate Analysis of Pine
Proximate and ultimate analysis of the pine used in this study are presented in Table 4. This compositional data was used in the Aspen simulation. Proximate and ultimate analysis of torrefied pine obtained at various operational conditions are presented in Tables S2-S4. The torrefaction reaction was successfully simulated with the RCSTR reactor while taking the variation of both the temperature and the residence time into consideration. Figure 5a,b show the composition of the torrefied solid obtained from the simulation at various temperatures for 16 min and 20 min residence time, respectively. Figure 6a,b show the composition of the torrefied solid obtained from the simulation at 260 °C and 288 °C, respectively, with the residence time varying from 10 min to 40 min. Figures 5 and 6 also present experimental data in comparison to the simulated values. The carbon content in the torrefied biomass was found to increase with an increase in temperature, while the oxygen and hydrogen contents decreased. Carbon content increased from 53% at 204 °C to 56% at 304 °C. An increase of 5.6% was observed over a 100 °C temperature change, i.e., a rate of increase of about 0.056% per 1 °C rise in temperature. Oxygen decreased from 43% to 38%, and hydrogen declined from 5.6% to 5.2%; the total decreases in oxygen and hydrogen were about 11.6% and 0.3%, respectively. Similar trends were observed by Strandberg et al. (2015). This loss of oxygen and hydrogen contributes to the mass loss of pine with increasing temperature [6,46]. A similar trend was also observed for the 20 min residence time; however, the increase in carbon content was higher, and the loss of hydrogen and oxygen was also higher, compared to the 16 min run. Also, the corresponding increase in the carbon content and the rapid removal of oxygen from the biomass contributed to the resulting increase in the higher heating value. This decrease in the hydrogen and oxygen content with temperature could be attributed to devolatilization reactions, during which hydrogen and oxygen were lost in the form of water, CO, and CO2 [47]. Similar trends were observed when the residence time was changed while keeping the temperature constant. At 260 °C (500 °F), the residence time was varied from 10-40 min. The carbon content in the torrefied material increased, while the hydrogen and oxygen contents decreased, with increasing residence time. Carbon content increased from 54% to 59% over the range from 10 min to 40 min, which is about a 9.2% total increase; the rate of increase was about 0.23% for every 2.5 min increase in residence time. At 288 °C, the carbon content increased from 55% to 68%. Oxygen content decreased from 40% to 33%, while hydrogen content decreased from 5.6% to 4.5%, over the range from 10 min to 40 min residence time. The total decrease in oxygen content was about 17.5%, and that of hydrogen was 19.6%. The total changes for the elements showed that temperature had a more pronounced effect on carbon than residence time, whereas for hydrogen the influence of residence time was higher than that of temperature. For oxygen, temperature and residence time had similar effects; however, the effect of temperature was more pronounced. Although temperature has a greater influence on torrefaction than residence time, both parameters are critical to obtain a product with the desired specifications.
The elemental composition obtained from the simulation was compared to the experimental results with a 5% standard error. Simulated and experimental results for carbon and oxygen were within the 5% error margin. In the case of hydrogen, the simulated results followed the same trend as the experimental results; however, a 5-8% deviation was observed at 232 °C. The simulated and experimental data for carbon, oxygen, and hydrogen were all within the 5% error margin when the residence time was varied. The deviations between the simulated and the experimental data, although small, could be attributed to the limited data available at higher temperatures.
Van Krevelen Plot
An increase in torrefaction severity, from 204-304 °C over 10 to 40 min, caused the decomposition of biomass in terms of oxygen and hydrogen loss, as can be seen in Figure 7, a van Krevelen diagram that provides insight into the differences in the elemental compositions of biomass, torrefied pine, and coal. Volatiles have high H/C and O/C ratios, which implies a decrease in the H/C and O/C atomic ratios of the remaining solid after torrefaction. The simulated results indicated the same. As the torrefaction severity of pine increased above 302 °C (575 °F), the H/C vs. O/C ratio decreased from 1.2 to 0.6, and the elemental composition of pine fell within the range of peat [48]. This indicated that collecting more experimental data over a narrower range of temperature and residence time can improve the modeling parameters, consequently helping in the improvement of the model. Similar results were reported by Manouchehrinejad and Mani (2019), who simulated the torrefaction of pine wood chips over a range of 230-290 °C with 30 min residence time using reaction kinetics and compared their results with experimental data. The O/C and H/C ratios declined with increased temperature due to the dehydrogenation and deoxygenation reactions occurring during the torrefaction process, which resulted in an increased HHV [49].
Effect of Temperature and Residence Time on the Properties of Bio-Coal in Terms of Mass Yield, Energy Yield, and HHV
The results of solid mass yield, energy yield, and torrefied solid HHV from the simulation model compared with the experimental data from torrefaction of pinewood chips at 232, 260, and 288 • C with 16, 20, and 24 min residence time are exhibited in Figure 8a,b and Figure 9a,b. The thermal decomposition of biomass increased with an increase in torrefaction temperature. This resulted in a gradual decline of solid mass yield, whereas the solid yield decreased with increasing residence time. The mass loss was mainly attributed to the degradation of the hemicellulose content in the biomass. The hemicellulose decomposition temperature ranges from 190 • C to 320 • C, where primarily the weight loss occurs at 230 • C due to the cleavage of glycosidic bonds and decomposition of side chains, and at 290 • C, due to the fragmentation of monosaccharide units, the weight loss is extensive [51].
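The energy yield invoked later via Equation (14) is conventionally defined together with the mass yield as follows (these are the standard dry-basis definitions used in torrefaction studies; the paper's exact equations are not reproduced in this copy):

```latex
Y_{\mathrm{mass}} \;=\; \frac{m_{\mathrm{torrefied}}}{m_{\mathrm{raw}}}\times 100\%,
\qquad
Y_{\mathrm{energy}} \;=\; Y_{\mathrm{mass}}\times
\frac{\mathrm{HHV}_{\mathrm{torrefied}}}{\mathrm{HHV}_{\mathrm{raw}}}
```

With these definitions, a falling mass yield can still leave the energy yield comparatively high when the HHV of the torrefied solid rises, which is the trade-off discussed below.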
An increase in the torrefaction temperature resulted in a decrease in the energy yields, which were calculated using Equation (14). There is a fine balance between the mass loss and the quality gain (HHV) that results in the desired energy yield; this energy yield can be used as an indicator of the optimal torrefaction temperature and solid fuel yield. Simulated solid yields varied from 80% at 204 °C to 63% at 300 °C, while the energy yield varied from 99% at 204 °C to 69% at 300 °C. Over the range of 10 to 40 min residence time, the solid yield and energy yield varied from 75% to 69% and 79% to 65%, respectively. Shang et al. (2013) studied the torrefaction of wood chips in a pilot-scale continuous reactor. They observed an increased mass loss and a higher HHV in the temperature range of 250-300 °C due to the degradation of hemicellulose in the 200-250 °C range, along with cellulose and lignin degradation in the 270-300 °C range. They reported that the preservation of energy in the torrefied material is possible at mild torrefaction conditions involving a low temperature and a short residence time, and that if a high energy density is desired, severe torrefaction conditions should be used [10]. This observation supports the findings made in this paper. The solid yields predicted by the simulation at the various torrefaction temperatures were slightly higher, while the energy yields were slightly lower, than the experimental data. Experimental results for all the parameters were available only within the temperature range of 232-288 °C (450-550 °F), so the kinetics developed for the simulation are most appropriate for this range; this might be the reason for the minor differences observed between the simulated and experimental results, especially for temperatures outside the given range. Approximate control of the reaction temperature and retention time in the lab experiments may also play a role. The simulated higher heating values were close to the experimental data, all being within the 5% standard error. This indicates the precision of the developed correlation used in the simulation for approximating the calorific value of the torrefied pine. The acceptable conformity between the simulation results and the experimental data at the specified residence times validated the simulation parameters, which can be further utilized to evaluate other operating conditions.
Effect of Temperature and Residence Time on Product Distribution
In the simulated results, the effect of temperature on the yield of condensables and volatiles was higher than that of residence time, as shown in Figure 10a,b. Owing to the increased decomposition of biomass, the solid yields decreased with increasing residence time at a given temperature. This weight loss was mainly attributed to the hemicellulose content in the biomass: compared to the other components, hemicellulose is more reactive and causes the decline in solid yield during torrefaction [42]. Similar results were obtained by Chang et al. (2012) for the torrefaction of spruce wood and bagasse; in both cases, the solid yield of torrefied biomass decreased from about 81% to 66% when the temperature was increased from 204 °C to 300 °C, while the yields of condensables and volatiles increased under the same conditions [42]. The same trend was observed by Manouchehrinejad and Mani (2019), with a decrease in solid and energy yield observed with an increase in temperature [49]. The compositions of the gases obtained after torrefaction are shown in Figure 11a,b. The formation of carbon dioxide, carbon monoxide, and methane decreased, while the water content increased, with increasing torrefaction temperature at a specified residence time. The carbon monoxide content was found to be low, ranging between 2-5%. A similar trend was observed when the residence time was increased while keeping the temperature constant. On a mass basis, the carbon dioxide content in the torrefied gas stream varied from 48% to 38%, methane varied from 44% to 30%, and water varied from 1.8% to 22.7% when the temperature was changed from 204 °C to 300 °C at a 15 min residence time. At 260 °C with 16, 20, and 24 min residence times, carbon dioxide decreased from 45% to 41%, methane decreased from 39% to 35%, and water increased from about 11% to 12%. Although experimental results for the gaseous composition were not available, other studies showed comparable results: Chang et al. (2012) reported the same trend of decreasing carbon dioxide formation with an increase in temperature. The formation of carbon dioxide was primarily owing to the decarboxylation of unstable carboxyl groups present in the hemicellulose of pine wood [42].
Figure 12 shows the overall mass and energy balance flow diagram. Mass and energy balances were calculated based on a 1 ton/hour feed rate, and the heat duties of the process were obtained from the simulation. The heats utilized by the system are represented by Q, the enthalpy flow of the system by Qth, the mass enthalpy by Qch, the mass flow by M, and the temperature by T. Most of the total energy required by the dryer was provided by the combusted heat stream; thus, the heat required by the dryer decreased significantly with this recycled heat stream from the combusted torrefied gases. An energy-requirement comparison made with and without the recycled heat stream showed that 100% of the energy required for drying was provided by the recycled heat stream, and the excess heat can be further utilized in the reactor to achieve the target torrefaction temperature. The heat required by the dryer was calculated using Equation (15).
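As a rough, order-of-magnitude check on the drying duty implied by Equation (15) (the equation itself is not reproduced in this copy; the basis of 1 t/h of chips at 50 wt.% moisture comes from the paper, and the property values below are standard constants), a minimal sketch:

```python
# Order-of-magnitude estimate of the drying duty for 1 t/h of pine chips at
# 50 wt.% moisture (sensible heat of the water plus latent heat of evaporation;
# heating of the dry solid is neglected).
feed_rate = 1000.0            # kg/h wet chips (basis used in the paper)
moisture = 0.50               # wet-basis moisture fraction
water = feed_rate * moisture  # kg/h of water to evaporate

cp_water = 4.18e3             # J/(kg*K), liquid water
latent_heat = 2.26e6          # J/kg, evaporation near 100 C
dT = 100.0 - 25.0             # K, ambient to boiling

q_drying = water * (cp_water * dT + latent_heat)   # J/h
print(f"Drying duty ~ {q_drying / 3.6e9:.2f} MW")  # 1 MW = 3.6e9 J/h
```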
The recycled heat stream also helped in minimizing the heat required for torrefaction in the reactor. As most of the heat required in the reactor goes to heating the biomass up to the torrefaction target temperature, it was assumed that heat losses from equipment not incorporated in the energy balance calculations were minimal. The heat released in the separator block (SEP) was due to the enthalpy of mixing of carbon, hydrogen, oxygen, and nitrogen, which were finally separated into solid (TOSOLID), condensable, and volatile (TOGAS) streams. This was verified by finding the values of the enthalpy of mixing in the inlet stream S8 to the SEP block and the two outlet streams TOGAS and TOSOLID using the equation [52]: H_mixture = Σ_i x_i·H_i + ΔH_mix, where H_mixture is the total enthalpy of the system after mixing, ΔH_mix the enthalpy of mixing, x_i the mole fraction of each component, and H_i the enthalpy of the pure component. The enthalpy of the mixture and the enthalpies of the pure components were both obtained from Aspen Plus™ property analysis at 260 °C. The difference in the enthalpy values between the inlet and outlet streams was close to the heat released by the SEP block, which explains some of the discrepancies observed in the energy balance due to the enthalpy of mixing of the various components. The sensible heat of the torrefied biomass was extracted through the cooling sections, and the final torrefied product temperature was reduced to 100 °C using heat exchangers. The exhaust gases after combustion were cooled down to ambient temperature with air before being released into the atmosphere. The torrefaction process was simulated at temperatures ranging from 204 °C to 304 °C over residence times of 10 min to 40 min. The energy for heating the biomass up to the torrefaction target temperature increased slightly with increasing torrefaction temperature; at the same time, the heat released from the reactor due to the reactions also increased. Thus, the overall energy requirements for drying and torrefaction did not change considerably. A similar observation was made by Manouchehrinejad and Mani (2019). At torrefaction temperatures above 260 °C (500 °F), torrefaction was considered exothermic [3], and hence the heat input required for torrefaction can be supplemented by the exothermic reactions without a further increase in the heat supply to the system within the 260-316 °C range.
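A minimal sketch of the mixing-enthalpy check described above, with entirely hypothetical enthalpy values in place of the Aspen Plus™ property-analysis results:

```python
# Check of the enthalpy-of-mixing balance H_mixture = sum(x_i * H_i) + dH_mix,
# rearranged to back out dH_mix from stream enthalpies.
# All numbers below are hypothetical placeholders, not Aspen Plus(TM) outputs.
mole_fractions = {"C": 0.40, "H": 0.30, "O": 0.25, "N": 0.05}      # x_i
pure_enthalpies = {"C": 0.0, "H": -120.0, "O": -80.0, "N": -10.0}  # H_i, kJ/mol
H_mixture = -66.0  # total enthalpy of the mixed stream, kJ/mol

ideal_part = sum(x * pure_enthalpies[s] for s, x in mole_fractions.items())
dH_mix = H_mixture - ideal_part
print(f"Ideal contribution: {ideal_part:.1f} kJ/mol; enthalpy of mixing: {dH_mix:.1f} kJ/mol")
```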
Conclusions
This study focused on evaluating the performance of a pilot-scale reactor to produce bio-coal and developing a simulation model for optimizing the torrefaction process. Based on the experimental work performed, it can be concluded that the fuel properties of pine, including the higher heating value, carbon content, and energy density, were improved from the torrefaction process using an indirectly heated rotary drum reactor. The solid fuel obtained at higher temperatures had properties closer to that of lignite and coal. An indirectly heated pilot-scale rotary drum reactor was successfully used to produce biocoal from pine chips with production rates of up to 9 kg per hour under the conditions tested. Thus, the study has provided conclusive evidence that rotary reactor technology is a promising option and has an excellent potential to be scaled up for the commercial production of bio-coal.
Integral and differential methods of rate analysis were used to fit the experimental results and predict the kinetic parameters of the reaction. The differential method of rate law was used to predict the kinetic parameters for the simulation, as these results were in close agreement with the experimental results. The simulated results were within a 5-7% error margin when compared to the experimental results. Based on the experiments performed on the pilot-scale unit to produce bio-coal, the simulation model was validated and showed that both temperature and residence time play important roles in this process; choosing the right reaction parameters is crucial to the success of the process. The simulation successfully generated accurate results over the 200-300 °C temperature range. Increasing the temperature improves the higher heating value, carbon content, and energy density of the product, and the solid fuel obtained at higher temperatures had properties closer to those of lignite. The energy efficiency of the process also increased at higher temperatures with the use of the recycled heat stream obtained after combusting the torrefied gases.
There are many areas of this project which merit further study. The most significant way in which these results can be expanded upon is the verification of the Aspen Plus™ model results with further experimental testing. Additional validation, refinement, and improvement of the model can then be achieved. | 11,103.4 | 2023-05-17T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
A New Self-Healing Degradable Copolymer Based on Polylactide and Poly(p-dioxanone)
In this paper, the copolymerization of poly (p-dioxanone) (PPDO) and polylactide (PLA) was carried out via a Diels–Alder reaction to obtain a new biodegradable copolymer with self-healing abilities. By altering the molecular weights of PPDO and PLA precursors, a series of copolymers (DA2300, DA3200, DA4700 and DA5500) with various chain segment lengths were created. After verifying the structure and molecular weight by 1H NMR, FT-IR and GPC, the crystallization behavior, self-healing properties and degradation properties of the copolymers were evaluated by DSC, POM, XRD, rheological measurements and enzymatic degradation. The results show that copolymerization based on the DA reaction effectively avoids the phase separation of PPDO and PLA. Among the products, DA4700 showed a better crystallization performance than PLA, and the half-crystallization time was 2.8 min. Compared to PPDO, the heat resistance of the DA copolymers was improved and the Tm increased from 93 °C to 103 °C. Significantly, the rheological data also confirmed that the copolymer was self-healing and showed obvious self-repairing properties after simple tempering. In addition, an enzyme degradation experiment showed that the DA copolymer can be degraded by a certain amount, with the degradation rate lying between those of PPDO and PLA.
Introduction
In recent years, the development of degradable polymer materials has played an increasingly important role in improving people's lives, as well as in reducing environmental pollution and constructing implantable materials [1][2][3]. For instance, polylactide (PLA) is frequently used for surgical sutures and stent implantation because of its excellent biocompatibility, biodegradability and processing properties [4][5][6], while poly(p-dioxanone) (PPDO) is also used for surgical sutures and stent implantation owing to its even stronger biodegradability and biocompatibility [7][8][9][10]. However, different application scenarios have led to higher requirements for polymer properties, making it more challenging for single degradable polymer materials to satisfy human needs [11,12]. For instance, PPDO has good crystallinity because of its flexible molecular chains [13,14], whereas PLA suffers from poor toughness and a sluggish crystallization rate [15][16][17]. At the same time, the two polymers have little mutual compatibility, so direct blending would not produce the desired material. Therefore, a convenient way to improve the overall performance of degradable polymers is the straightforward copolymerization of the two polymers [18][19][20].
The impact of copolymerization on biomedicine is most clearly visible in the following examples [21]. First, the copolymerization of PPDO and PLA allows control of the copolymer's degradation cycle, which has a significant impact on in vivo degradation. Second, the monomer ratio can be adjusted to improve PPDO's overall performance for a variety of application scenarios. Finally, researchers can prepare copolymer materials tailored to various application requirements, because materials produced by copolymerization are used more frequently than homopolymers.
The Diels-Alder (DA) reaction is a [4 + 2] cycloaddition between dienes and dienophiles [22,23]. DA adducts are formed at 50-90 °C, and the retro-Diels-Alder reaction (r-DA reaction) occurs at 100-130 °C. Upon cooling to a low temperature, the broken DA bond undergoes the DA reaction again and re-forms DA adducts [24][25][26][27]. Owing to its thermal reversibility and mild reverse-reaction conditions, the DA reaction is regarded as a practical method for constructing self-healing materials. Fred et al. [28] used the DA reaction to form a transparent polymer material; the material is a tough solid near room temperature and has mechanical properties comparable to those of commercial epoxy resins. At temperatures above 120 °C the crosslinking points are disconnected, and they can be re-connected after cooling. In 2016, Xia et al. [29] also prepared a re-mouldable, recyclable and self-repairing polysiloxane elastomer by cross-linking maleimide-functionalized polysiloxane with furan-end-functionalized polysiloxane. It therefore seems feasible to construct self-healing polymeric materials based on PPDO and PLA with the aid of the efficient DA reaction.
In this study, a series of DA copolymers were synthesized by the DA reaction of a PPDO prepolymer with a PLA prepolymer. The molecular weight ratio of the PPDO prepolymer to the PLA prepolymer was about 1:1. DA copolymers (DA2300, DA3200, DA4700 and DA5500) were obtained by altering the molecular weight of the prepolymers. The structure of the products was characterized by 1H NMR, FT-IR and GPC. In addition, the thermal stability, crystallization ability, self-healing properties and degradation properties of the DA products were studied by DSC, XRD, POM, rheological measurements and enzymatic degradation.
Synthesis and Characterizations
The synthesis routes to the DA copolymers are shown in Scheme 1, and the 1H NMR spectra of all intermediate products are shown in Figure S1. In Figure S1a, PPDO2300 is used as an example: the ratio of the integrated peak areas of a, b and c is 1:1:1, which corresponds exactly to the structure of PPDO. Meanwhile, the ratio of the integrated peak areas of a and d showed that the molecular weights of the four PPDO products were 2300, 3200, 4700 and 5500, respectively, which was also confirmed by the ratio of the integrated peak areas of a, b (furan rings) and e in Figure S1b. On the other hand, the maleimide-group grafting of PLA mainly comprises the synthesis of PLA and AMI and the esterification reaction between them. Figure S1c,d demonstrates the structures of PLA and AMI, and the molecular weights of the PLA products were likewise calculated as 2300, 3200, 4700 and 5500, respectively, from the ratio of the integrated peak areas of b and c in Figure S1c. Subsequently, the presence of peaks a, b and c in Figure S1e proved the successful grafting of maleimide groups and the successful synthesis of PLAER. The 1H NMR spectrum of the DA copolymer is shown in Figure 1: the peaks a-c come from PPDO, and d-f correspond to the peaks of PLA and the furan ring; in addition, the presence of peaks g and h indicates the successful completion of the DA reaction and the linkage of PLA and PPDO by DA bonds.
Each synthesis step could also be confirmed by the FT-IR spectra (Figure S2). In Figure S2a, PPDOER showed a relatively small peak at ~765 cm−1, which is the characteristic peak of the furan groups grafted to PPDO after esterification. The new peak at 1709 cm−1 of PLAER in Figure S2b was attributed to the two carbonyl groups of the maleimide group. More importantly, in Figure S2c, the presence of the DA bond gave rise to a new peak in the spectra of the copolymers at 1748 cm−1, indicating the successful synthesis of the DA copolymers. The GPC spectra of the DA copolymers are shown in Figure S3; the molecular weight of the copolymer increased with the molecular weight of the PPDO and PLA macromolecular monomers. Although the molecular weight distribution was wide, only a single peak can be observed in the spectra, proving that the composition of the copolymers was as expected.
Scheme 1. Synthetic routes towards DA copolymers.
Thermal Stability of DA Copolymer
The thermal stabilities of PPDO, PLA and the DA copolymers were measured by TGA in a nitrogen atmosphere, and the relevant information is summarized in Figure 2 and Table S1. The thermal degradation of PPDO generally proceeded in three stages. In the first stage, within the range of 30-200 °C, the weight loss was mainly due to the evaporation of adsorbed and crystalline water (physical dehydration), with a mass loss of about 7%. The second stage consists of weight loss in the range of 200-400 °C, mainly attributed to thermal degradation and fracture of the molecular chains. In the third stage, carbonization occurred at about 500 °C, and the final residue of all samples was about 6% at 500 °C. Compared with PPDO, the initial decomposition temperature (T5%) of the DA copolymers increased from 150 °C to 260 °C due to the introduction of the PLA segment. Meanwhile, the maximum thermal decomposition temperature (Tmax) increased from about 275 °C to 325 °C, and Tmax tended to move towards higher temperature as the molecular weight increased. In addition, in contrast to the two peaks of the PPDO/PLA blend in the DTG spectrum, the DA copolymers show only one peak, except for DA3200, again demonstrating the successful preparation of the DA copolymers. As for DA3200, the occurrence of an r-DA reaction may be responsible for its two peaks. In general, compared to PPDO, the DA copolymers show higher thermal stabilities, with the thermal stability gradually increasing with increasing molecular weight.
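To make the T5% and Tmax definitions above concrete, the following is a minimal sketch of how these two quantities could be read off a digitised TGA trace: T5% as the temperature at 5% mass loss and Tmax as the peak of the DTG curve. The temperature and mass arrays are synthetic placeholders, not the measured curves in Figure 2.

```python
import numpy as np

# Hypothetical TGA trace: temperature (°C) and residual mass (%) -- placeholders.
T = np.linspace(30, 600, 572)
mass = 100 - 90 / (1 + np.exp(-(T - 300) / 25))   # synthetic sigmoidal weight loss

# T5%: first temperature at which 5% of the initial mass has been lost.
T5 = T[np.argmax(mass <= 95)]

# Tmax: temperature of the maximum decomposition rate (peak of the DTG curve).
dtg = -np.gradient(mass, T)
Tmax = T[np.argmax(dtg)]

print(f"T5% ~ {T5:.0f} °C, Tmax ~ {Tmax:.0f} °C")
```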
Crystallinity of DA Copolymers
Since both PLA and PPDO are crystalline, the crystallinity of the DA copolymer is also of great interest. Firstly, XRD was used to characterize the crystal structure of the polymers. In Figure 3, it can be observed that PPDO has strong diffraction peaks at diffraction angles of 21.9°, 23.8° and 29.1°, with corresponding d-spacings calculated as 4.05 Å (d210), 3.74 Å (d020) and 3.06 Å (d310), respectively, while the PLA samples show only a large, broad peak between 10° and 30°, indicating a polycrystalline structure and relatively poor crystallization ability. Meanwhile, the crystallization ability of PLA seems to be related to its molecular weight, and a strong diffraction peak at 15.7° could be observed when the molecular weight increased to about 5 kDa. As for the DA copolymer, its XRD spectrum (Figure 3c) appears to be a superposition of those of PPDO and PLA. Of the two, PPDO exhibits much stronger crystal diffraction peaks than PLA due to its stronger crystallization ability. In addition, the effect of molecular weight is also obvious, i.e., the diffraction peaks of PPDO in the copolymers were first enhanced and then weakened, which may be attributed to the fact that the addition of PLA disturbs the regular arrangement of the molecular chains, decreasing the overall crystallization ability of the polymer.
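As a quick cross-check of the d-spacings quoted above, the sketch below applies Bragg's law, d = λ/(2 sin θ), to the listed 2θ positions, assuming the Cu Kα wavelength (1.5406 Å) from the XRD method section; it reproduces values close to 4.05, 3.74 and 3.06 Å.

```python
import numpy as np

# Bragg's law: d = lambda / (2 sin(theta)), with theta = 2theta/2.
wavelength = 1.5406                                # Å, Cu K-alpha (assumed value)
two_theta = np.array([21.9, 23.8, 29.1])           # PPDO reflections from Figure 3, degrees
d = wavelength / (2 * np.sin(np.radians(two_theta / 2)))
for tt, dd in zip(two_theta, d):
    print(f"2theta = {tt:.1f} deg  ->  d ~ {dd:.2f} Å")   # ~4.06, 3.74, 3.06 Å
```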
Meanwhile, DSC was also used to characterize the thermal and crystallization properties of the copolymers. The differential scanning calorimetry curves of all samples are shown in Figure 4 and the relevant data are listed in Table 1. As shown in Figure 4a,b, crystallization exothermic peaks appeared for all PPDO samples, and exothermic crystallization peaks appeared at 34.17 °C and 36.46 °C, respectively, for PPDO2300 and PPDO4700 in the subsequent heating curves. This phenomenon may be due to incomplete crystallization during the previous cooling scan and the occurrence of cold crystallization during heating. In general, the melting point of PPDO increases from 93 °C to 100 °C with increasing molecular weight. As for PPDO5500, a higher content of PDO monomer may be the main cause of the increase in lattice defects and the decrease in thermal stability. However, under the same conditions, PLA shows no obvious crystallization peak in the spectra (Figure 4c,d) due to its weak crystallinity. As for the DA copolymers, only DA4700 and DA5500 showed obvious crystallization exothermic peaks, while the other samples did not (Figure 4e,f). This may be because the smaller macromonomers lead to more intimate mixing of PPDO and PLA after polymerization, so that PLA has a greater impact on the crystallization ability of PPDO. DA4700 and DA5500 also failed to crystallize completely during cooling, and a cold-crystallization peak appears in their heating curves. According to Table 1, with increasing molecular weight of the homopolymer, the crystallinity (Xc) of the copolymer increased to about 20% relative to that of the PLA homopolymer. Compared to the PPDO homopolymer, the melting point of the copolymer also improved.
The results show that increasing the molecular weight of PPDO improves the crystallization properties of the DA copolymer and that introducing PLA chain segments improves the thermal stability of PPDO, which is consistent with the XRD and TG results above. Detailed crystallization data, such as the crystallization temperature (Tc), crystallization enthalpy (ΔHc), recrystallization temperature (Tcc), recrystallization enthalpy (ΔHcc), melting point (Tm), enthalpy of melting (ΔHm) and crystallinity (Xc), obtained from all DSC experiments are listed in Table 1. The crystallinity is calculated from Equation (1):

Xc (%) = ΔHf / (ΔHf° × A wt%) × 100%   (1)

where ΔHf and ΔHf° are the measured melting enthalpy of component A in the sample and the melting enthalpy of 100% crystalline A homopolymer (the melting enthalpy of fully crystallized PLA is reported to be 93 J/g and that of PPDO is 141 J/g), respectively, and A (wt%) represents the mass percentage of component A in the sample.
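The crystallinity calculation of Equation (1) can be illustrated with a short sketch. The measured enthalpy used below is a hypothetical placeholder, not a value taken from Table 1; only the reference enthalpies (93 J/g for PLA, 141 J/g for PPDO) and the 1:1 mass ratio come from the text.

```python
# Crystallinity from DSC melting enthalpy, following Equation (1):
# Xc = dHf / (dHf0 * w_A) * 100%, with dHf0 = 93 J/g (PLA) and 141 J/g (PPDO).
def crystallinity(delta_Hf, delta_Hf0, weight_fraction):
    """Return Xc in percent for component A present at mass fraction weight_fraction."""
    return delta_Hf / (delta_Hf0 * weight_fraction) * 100.0

# Example: a copolymer with a hypothetical measured melting enthalpy of 25 J/g
# attributed to PPDO, which makes up 50 wt% of the DA copolymer (PLA:PPDO = 1:1).
print(f"Xc ~ {crystallinity(25.0, 141.0, 0.5):.1f} %")
```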
Here, the mass ratio of PLA to PPDO is 1:1, so the melting enthalpy of a fully crystalline copolymer is taken as 117 J/g on average. Moreover, the isothermal crystallization exothermic curves of each sample are shown in Figure 5a, where it can be seen that only PPDO, DA4700 and DA5500 have crystallization peaks. Among the three, the crystallization rate of PPDO is the fastest, followed by DA4700, with DA5500 the slowest (Figure 5b), which may be because macromolecules with a larger molecular weight need a longer time to arrange themselves into regular chains and form crystals. According to the Avrami equation, the half-crystallization time (t1/2) of PPDO is 0.99 min, while those of DA4700 and DA5500 are 2.85 min and 5.11 min, respectively (Figure 5c,d and Table S2).
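For readers unfamiliar with the Avrami analysis, the half-crystallization time follows directly from the fitted Avrami parameters, since X(t) = 1 − exp(−k·t^n) gives t1/2 = (ln 2 / k)^(1/n). The rate constants and exponents below are hypothetical examples chosen only to illustrate the calculation; they are not the fitted values behind Table S2.

```python
import numpy as np

# Half-crystallisation time from the Avrami equation X(t) = 1 - exp(-k * t**n):
# setting X = 0.5 gives t_1/2 = (ln 2 / k)**(1/n).
def t_half(k, n):
    return (np.log(2.0) / k) ** (1.0 / n)

# Hypothetical parameter sets, for illustration only.
for label, k, n in [("PPDO (example)", 0.71, 2.4), ("DA4700 (example)", 0.055, 2.4)]:
    print(f"{label}: t1/2 ~ {t_half(k, n):.2f} min")
```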
The time evolution of the POM micrographs of the polymers is shown in Figure 6. For PPDO, spherulites with a clear Maltese extinction cross and alternating light and dark concentric rings, consistent with the literature, can be observed after isothermal crystallization at 30 °C (Figure 6a). Meanwhile, PLA did not show any crystal structure after isothermal crystallization at 80 °C for 15 min, which is consistent with the results of XRD and DSC. As for the DA copolymers, only DA4700 could be observed to have a spherulite structure with a weak Maltese cross and light and dark concentric rings (Figure 6e), while only a few crystals can be seen in the others. In particular, DA2300 and DA3200, with small molecular weights, took a longer time to complete the crystallization, which is consistent with the DSC results. In summary, with increasing molecular weight, the crystallinity of the DA copolymer generally showed a trend of enhancement, and DA4700 showed the best crystallinity, which mainly depends on the PPDO segments.
Rheological Properties and Self-Healing Properties
The relationship between apparent viscosity (η) and shear rate was obtained by shear-rate sweeps of the DA copolymers with different macromonomer molecular weights, as shown in Figure S4. The apparent viscosity decreased with increasing shear rate, and clear shear-thinning behavior can be observed, indicating that the copolymer is a non-Newtonian pseudoplastic fluid. Moreover, η clearly decreased with increasing macromonomer molecular weight over shear rates of 0.1-1000 s−1, which may be attributed to the longer flexible PPDO segments weakening the entanglement between molecular chains, making the copolymer molecules more prone to disentangle and slip under shear stress. In general, as the molecular weights of PPDO and PLA increase, the influence of PPDO on the viscosity is dominant.
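One common way to quantify the shear-thinning behavior described above is to fit the Ostwald–de Waele power law, η = K·γ̇^(n−1), to the viscosity curve; a flow index n below 1 indicates pseudoplastic behavior. The data points below are hypothetical placeholders, not values read from Figure S4.

```python
import numpy as np

# Power-law (Ostwald-de Waele) fit: eta = K * gamma_dot**(n - 1).
# Hypothetical viscosity data -- placeholders, not the measurements of Figure S4.
gamma_dot = np.array([0.1, 1, 10, 100, 1000], dtype=float)   # shear rate, 1/s
eta = np.array([5200, 3100, 1700, 900, 480], dtype=float)    # apparent viscosity, Pa*s

slope, intercept = np.polyfit(np.log(gamma_dot), np.log(eta), 1)
n = slope + 1.0          # flow behaviour index (< 1 implies shear thinning)
K = np.exp(intercept)    # consistency coefficient
print(f"n ~ {n:.2f}, K ~ {K:.0f} Pa*s^n")
```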
In addition, the DA copolymer was synthesized by linking the PPDO and PLA chain segments with DA bonds, which are dynamic covalent bonds that break reversibly at 120 °C; the DA copolymer is therefore expected to be self-healing. The four groups of DA copolymers were cyclically scanned three times at different temperatures (80 and 120 °C) to obtain the curves of storage modulus (G′) and loss modulus (G″) as a function of angular frequency (Figures S5 and S6). When G′ is greater than G″, the polymer behaves mainly as a solid, while, vice versa, it behaves as a liquid, which can also be interpreted here as the breaking of the molecular chains under shear. The intersections of the three cycles in the low-frequency region at 120 °C and 80 °C were recorded and plotted against temperature to obtain Figure 7. It can be seen that after completing a cycle the intersection at 80 °C essentially returns to the same level as before, indicating that the DA bonds broken at high temperature can be recovered, and thus the copolymer has a self-healing ability. Since there are more DA bonds in the copolymers with small molecular weights (such as DA2300), the angular frequency of the intersection decreased less after cycling, which also indicates that the self-healing property of the copolymer comes from the DA bonds and can be strengthened by tuning the molecular structure.
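The G′/G″ intersection used above can be located numerically from a frequency sweep, for instance by interpolating the log-modulus difference. The moduli below are toy power-law curves chosen only to demonstrate the procedure; they are not the data of Figures S5 and S6.

```python
import numpy as np

# Locate the G'/G'' crossover of an angular-frequency sweep by interpolating
# the difference of the log-moduli. Synthetic placeholder data.
omega = np.logspace(-2, 2, 9)          # rad/s
G_prime = 1e3 * omega**1.6             # storage modulus (toy curve)
G_double = 5e3 * omega**0.9            # loss modulus (toy curve)

diff = np.log10(G_prime) - np.log10(G_double)
i = np.argmax(diff > 0)                # first point where G' exceeds G''
x0, x1 = np.log10(omega[i - 1]), np.log10(omega[i])
w_cross = 10 ** (x0 - diff[i - 1] * (x1 - x0) / (diff[i] - diff[i - 1]))
print(f"crossover angular frequency ~ {w_cross:.2f} rad/s")
```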
Degradation Properties
The degradation performance of polymer materials is an important index for medical applications. The enzymatic degradation of the PPDO and PLA homopolymers and the DA copolymer by proteinase K was investigated, as shown in Figure 8. During these experiments, a high level of proteinase K activity was maintained by refreshing the enzyme solution and applying higher temperatures. The results showed that the degradation rate of PPDO was the highest and that of PLA was the slowest, while the degradation rate of the DA copolymer lay between those of the two homopolymers, mainly because the better hydrophilicity of PPDO makes it degrade faster. After 7 days of degradation testing, the weight retention of the DA material was 34.3%, lower than that of PLA (37.6%) and higher than that of PPDO (30.1%). The block copolymerization of PPDO and PLA was thus achieved through DA bonds; the new material inherits some of the physicochemical properties of the different blocks, and its biodegradability is well preserved.
Synthesis of PPDO Modified by Furan Groups
The monomer PDO (3.0 g, 29.4 mmol) and the initiator 1,4-benzenedimethanol were placed into a round-bottomed flask in a certain proportion, and stannous caprylate was also added as the catalyst [30]. The experimental ratios and conditions are shown in Table S3. After the reaction finished, trichloromethane was added to dissolve the solid in the flask, and the solution was precipitated in excess methanol. After filtration, the precipitate was washed with ether and dried under vacuum overnight to obtain white powdery products (PPDO2300, PPDO3200, PPDO4700 and PPDO5500). Subsequently, in a dry box under an Ar atmosphere, a suitable amount of ethanol was added to a dry flask containing the required amounts of furoic acid, PPDO, EDC and DMAP [31,32]. The specific experimental ratios and conditions are shown in Table S4. After the reaction, the products were precipitated with ether and dried in a vacuum oven at 40 °C to obtain the corresponding products (PPDO2300ER, PPDO3200ER, PPDO4700ER and PPDO5500ER).
Synthesis of PLA Modified by Maleimide Groups
The monomer LA (5.0 g, 34.7 mmol) and the initiator 1,4-benzenedimethanol were placed into a round-bottomed flask in a certain proportion, and stannous caprylate was also added as the catalyst [33,34]. The specific experimental ratios and conditions are shown in Table S5. After the reaction, the products (PLA2300, PLA3200, PLA4700 and PLA5500) were obtained by precipitation with excess methanol, washing with ether and drying under vacuum overnight. In a 250 mL flask, β-alanine (8.01 g, 90 mmol) and maleic anhydride (10.58 g, 0.11 mol) were mixed in glacial acetic acid (108 mL) and stirred for 7 h. The crude AMA was obtained by evaporation of the solvent, and pure AMA was obtained by recrystallization from methanol. Then, AMA was suspended in toluene (150 mL), and the suspension was stirred at 130 °C for 3 h. After cooling to room temperature, a yellow precipitate was obtained by evaporation. The crude product was acidified with HCl, extracted with ethyl acetate and recrystallized from ethyl acetate to obtain pure maleimide propionic acid (AMI) [35].
Subsequently, under an Ar atmosphere, a suitable amount of dichloromethane was added to a dry flask containing the required amounts of AMI, PLA, EDC and DMAP. The reaction ratios and conditions are shown in Table S6. After 24 h of reaction at room temperature, the solid was precipitated with ether and dried in a vacuum oven at 40 °C to obtain the corresponding products (PLA2300ER, PLA3200ER, PLA4700ER and PLA5500ER).
Synthesis of DA Copolymer
The PPDOER and PLAER products prepared above were mixed in dichloromethane at the same molecular weight. The reaction ratios and conditions are shown in Table S7. By controlling the maleimide/furan molar ratio at about 1.0, the DA reaction was carried out at 60 °C to obtain four groups of DA products (DA2300, DA3200, DA4700 and DA5500) [36][37][38].
Proton Nuclear Magnetic Resonance Spectroscopy (1H NMR)
A Bruker Avance-400 NMR spectrometer (400 MHz) was used to obtain the 1H NMR spectra of the samples. The solvent was deuterated chloroform (CDCl3) or deuterated acetone (C3D6O) as required, and tetramethylsilane was used as the internal standard.
Fourier Transform Infrared Spectroscopy (FT-IR)
A Nicolet 6700 Fourier transform infrared spectrometer was used to measure the infrared spectra of the samples. The test method was as follows: the sample was ground into a powder after drying in a vacuum oven, and the attenuated total reflection (ATR) method was used. The measurement range was 4000-650 cm−1.
Gel Permeation Chromatography (GPC)
The molecular weights and polydispersity of the DA copolymers were determined using a THF GPC setup operating at 35 °C and comprising Styragel HR1, HR2 and HR4 columns, a Waters 2414 refractive index detector and a Waters 1515 pump. The GPC eluent was THF (2 v/v% triethylamine) at a flow rate of 1.0 mL/min.
X-ray Diffraction
The changes in the crystallization properties of PPDO and PLA before and after copolymerization were observed with an Ultima IV series composite X-ray diffractometer (Japan). The samples were dried at 25 °C, and X-ray diffraction patterns were obtained under Cu Kα radiation generated at 40 kV and 40 mA. The scanning speed was 3°/min and the diffraction angle (2θ) range was 3-50°.
Thermogravimetric Analysis (TG)
The thermal stability of the target polymers was determined with a Q5000IR thermal analyzer (TA Instruments, USA). Under a nitrogen flow of 20 mL/min, the samples were heated from room temperature to 820 °C at a heating rate of 20 °C/min, and the data were recorded by a computer.
Differential Scanning Calorimetry (DSC)
An L-700 differential scanning calorimeter (Mettler Toledo, Switzerland) was used to determine the non-isothermal crystallization behavior of each polymer.
Non-isothermal crystallization test conditions: under a nitrogen atmosphere with a gas flow rate of 50 mL/min, the temperature was increased from room temperature to a specific temperature (PPDO series products: 140 °C; PLA and DA series products: 200 °C) at a rate of 10 °C/min, and the thermal history was eliminated by 5 min of heat retention. The temperature was then lowered to −30 °C at a cooling rate of 10 °C/min and subsequently raised to the specific temperature at a heating rate of 10 °C/min, and the non-isothermal DSC curves of the samples were recorded.
Isothermal crystallization test conditions: under a nitrogen atmosphere with a gas flow rate of 50 mL/min, the temperature was raised from room temperature to a specific temperature (PPDO: 140 °C; PLA and DA: 200 °C) at a rate of 10 °C/min and held for 5 min to eliminate the thermal history. The temperature was then rapidly reduced to the crystallization temperature (30 °C for PPDO, 90 °C for PLA and 50 °C for DA) at a cooling rate of 50 °C/min and held for 15 min. The samples were then cooled to room temperature at a cooling rate of 10 °C/min and reheated to the specific temperature at 10 °C/min. The isothermal DSC curves of the samples were recorded.
Polarizing Optical Microscopy (POM)
The spherulite morphology of the isothermal crystals was investigated by POM with a hot stage. A small amount of sample was placed on the hot stage of the polarizing microscope, and the temperature was rapidly raised to 150 °C. The samples were kept at this temperature for 3 min to melt completely. Then, the samples were pressed into films and rapidly cooled to the desired crystallization temperature to observe the crystallization.
Rheological Property
The rheological behavior of the composites was characterized by a stress-controlled rotational rheometer (MCR 302), and shear rate scanning and angular frequency scanning were performed. A parallel plate clamp with a diameter of 25 mm and a spacing of 1 mm was adopted.
Shear rate scanning conditions: the temperature was set at 110 °C and the strain was fixed at 1%. The shear rate was scanned over 0.1-1000 s−1 and the viscosity was measured.
Angular frequency sweep conditions: the strain was fixed at 1% and the angular frequency scanning range was 0.01-628 rad/s. The storage modulus (G′) and loss modulus (G″) were measured at 120 °C and 80 °C, respectively. Three heating-cooling cycles were measured and the data were recorded.
Degradation Property
Enzymatic degradation of each sample was carried out using proteinase K. In each trial, 50 mg of sample was dispersed in 5 mL of 0.1 M Tris/HCl buffer (pH 8.5) containing proteinase K (2 mg/mL) and incubated at 50 °C. After 3.5 days of incubation, the solution was filtered; the residue captured on the filter was washed with plenty of distilled water, lyophilized, weighed, dispersed again in 5 mL of fresh enzyme solution and incubated again at 50 °C for 3.5 days. The remaining solid mass was then weighed [39,40].
Conclusions
In this paper, a series of biodegradable copolymers with self-healing abilities were synthesized by the DA reaction of PPDO and PLA prepolymers, and the chain segment length of the copolymers was regulated by the molecular weight of the prepolymers (2300, 3200, 4700 and 5500). Compared to the PPDO and PLA homopolymers, the DA copolymers exhibited improved thermal stability and heat resistance; for instance, copolymerization increased the Tm from 93 °C (pure PPDO) to 103 °C. Furthermore, due to the introduction of PPDO, the crystallization ability of the copolymer was greatly improved compared to PLA, with the crystal structure depending mainly on the PPDO segments. Among the copolymers of varying molecular weights, DA4700 exhibited the best crystallization ability, with a t1/2 of approximately 2.8 min. Significantly, the rheological analysis revealed the copolymers' self-healing properties. In addition, the enzymatic degradation experiment showed that the DA copolymer degrades partially, at a rate between those of PPDO and PLA. It was found that the segment composition of PPDO and PLA can change the thermal and crystalline properties of the copolymer. Moreover, the crystallization properties of the copolymer can further affect the mechanical properties of medical materials such as hemostatic clips and surgical sutures, which provides a route for expanding the applications of PPDO. In conclusion, PPDO, as a novel degradable polyether ester, will remain a focus of degradable polymer research for a considerable time. In the future, the range of PPDO-based materials will become more and more extensive, and it may be possible to develop new degradable PPDO-based composites to replace many plastics and reduce environmental pollution.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/molecules28104021/s1, Figure S1: 1H NMR spectra of all intermediate products; Figure S2: FT-IR spectra of each product; Figure S3: The GPC spectra of DA copolymers; Figure S4: The relationship between apparent viscosity (η) and shear rate; Figures S5 and S6: The four groups of DA copolymer were cyclically scanned three times at different temperatures (80 and 120 °C) to obtain the curves of storage modulus (G′) and loss modulus (G″) with respect to angular frequency (a and d are DA2300, DA3200, DA4700, and DA5500). Table S1: The TGA data of each product; Table S2: Isothermal crystallization parameters of neat PPDO and DA copolymers; Table S3: The feeding and conditions of PPDO; Table S4: The feeding and conditions of PPDOER; Table S5: The feeding and conditions of PLA; Table S6: The feeding and conditions of PLAER; Table S7: The feeding and conditions of DA copolymer. | 9,207.8 | 2023-05-01T00:00:00.000 | [
"Materials Science"
] |
Geospatial Weather Information System (GWIS): a DREAD based study
Over the years, the focus has been on protecting networks, hosts, databases and standard applications from internal and external threats. The Rapid Application Development (RAD) process makes web application development cycles extremely short and makes it difficult to eliminate vulnerabilities. Here we study a web application risk assessment technique called threat risk modeling to improve the security of the application. We implement the proposed mechanism, application risk assessment, using Microsoft's DREAD threat-risk model to evaluate the application's security risk against vulnerability parameters. The study quantifies different levels of risk for the Geospatial Weather Information System (GWIS) using the DREAD model.
I. INTRODUCTION
The World Wide Web (WWW) has been tremendously successful. Today most applications are developed using web technologies in areas such as banking, e-commerce, education, government, entertainment, webmail and training. Many companies depend on their web sites for publicity and business, and some businesses, such as online shopping, came into existence only through the possibilities of the WWW. Many customers also find it more convenient to use these web application services than conventional or manual methods. Web technology itself has developed enormously, enabling more reliable and cost-effective web applications, and it is now able to cope with issues such as interoperability, multiple platforms and connecting to different database technologies.
Despite the importance of web applications built with improved technologies, hacking techniques have also gained momentum in cashing in on the vulnerabilities of these applications. The Web Application Security Consortium has reported web hacking statistics [1]. These statistics clearly show that the number of incidents is increasing from year to year, even with the added security features of web application development tools.
II. SECURITY CHALLENGES
Web applications are increasingly becoming high-value targets for attackers. 71% of the reported application vulnerabilities have affected web technologies such as web servers, application servers and web browsers [2]. In 2007, a survey on the state of web application security was conducted by Cenzic and the Executive Alliance [3]. Some of the interesting key findings are: there is a lack of confidence in the current state of web application security; around 50% of respondents are not confident about their application security, although most are happy with their application technology; and 83% of CEOs are aware of web security, but most of them and other senior managers are unsure about the financial implications of unsecured web applications.
The above findings evidently show that organizations are still not mature enough to take care of application security issues against ever-growing threats. Therefore, it becomes more imperative than ever to assess web application security concerns. In the past, organizations relied more on gateway defenses, Secure Socket Layer (SSL), and network and host security to keep data secured. Unfortunately, the majority of web attacks are application attacks, and the mentioned technologies are generally unable to cope with the security needs against application attacks [4]. Gateway firewalls and antivirus programs offer protection at the network and host level, but not at the application level [5]. A firewall may not detect malicious input sent to a web application. Indeed, firewalls are great at blocking ports, but they are not a complete solution. Some firewall applications examine communications and can provide very advanced indications, and a typical firewall helps to restrict traffic to HTTP, but HTTP traffic can contain commands that exploit application vulnerabilities. Firewalls are an integral part of security, but they are not a complete solution [6]. The same holds true for SSL, which is good at encrypting traffic over the network; however, it does not validate the application's input or protect against a poorly defined port policy. The Software Unlimited Organization [7] listed the top 10 firewall limitations. Web servers are becoming popular attack targets. Between 1998 and 2000, around 50 new attacks exploiting Microsoft's widely used Internet Information Server (IIS) web server were published in the public domain [8]. Of these attacks, 55% allowed an intruder to read sensitive information such as ASP source files, configuration files and, finally, the data records as well. A growing number of attacks target the databases that reside behind the web server: by exploiting vulnerabilities in the web server it is possible to run SQL commands to gain access to the database server. Hence protecting the web server is becoming a huge concern in the web application security domain.
A. Web application concerns
Today's client/server technology has progressed beyond the traditional two-tier concept to three-tier architectures. Application architectures have three logical tiers: presentation services, process services and data services. As with all such technologies, three tiers give the opportunity to reap these benefits, but a number of challenges to implementing a three-tier architecture exist. This is because of the number of services that need to be managed, and because the tools are still skeletons for the applications. Furthermore, three-tier systems are inherently more complicated because of the multiple technologies involved in the design and development of the application. From a pure security point of view, a lack of security in any one of these technologies will render the total system vulnerable.
Web applications must be secured in depth, because they depend on the hardware, the operating system, the web server, the database, the scripting language and the application code. Web applications therefore have numerous entry points that can put the database at risk. Hackers generally look into the different fundamental areas of an application to break its security. The general types of attacks are IP access, port access and application access. Hackers obtain the IP address of the server and telnet to it to exploit the server, and there are many tools for extracting login passwords. Applications are normally configured to listen on a predefined port for incoming requests; these vulnerable ports are also major sources of attacks on the application. Web applications include a series of web servers, file servers, database servers and so on, and each of these servers is a potential point of entry for breaking the application's security. But there are many other areas where the application is vulnerable to attack. The major challenges associated with web applications are their most critical vulnerabilities, which are often the result of insecure information flow, failure of encryption, database vulnerabilities, etc. [9]. They are inherent in web application code, independent of the technologies in which the applications are deployed [10], and an attacker may exploit these vulnerabilities at any time. Almost every week, the media report on new computer crimes, the latest attack techniques, application vulnerabilities, system break-ins, malicious code attacks and the ever-growing cyber crime threat. The Web Application Security Consortium (WASC) has listed the top 10 web application vulnerabilities for the year 2007 out of 24 reported classes of attacks. Application vulnerabilities, network vulnerabilities, viruses, trojans, etc. are some of the external threats, but there are also many internal threats posed by rogue administrators, bad employees, casual employees and social engineering. The solution to web application security is more than technology: it is about practices, precautions and countermeasures. That is why security is not a path but a destination; security is about risk management and effective countermeasures [11].
B. Security assessment
Traditionally, security assessment has been considered a sub-function of network management, and has been identified as one of the functional areas of the Open Systems Interconnection (OSI) management framework. As defined in the OSI management framework, security assessment is concerned not with the actual provision and use of encryption or authentication techniques themselves but rather with their management, including reports concerning attempts to breach system security. Two important aspects are identified: (i) managing the security environment of a network, including detection of security violations and maintaining security audits, and (ii) performing the network management task in a secure way [12]. Sloman et al., 1994 define security assessment as the support for the specification of authorization policy, translation of this policy into information that can be used by security mechanisms to control access, management of key distribution, and monitoring and logging of security activities [13]. Meier et al., 2004 define security assessment as involving a holistic approach, applying security at three layers: the network layer, the host layer, and the application layer [14]. Additionally, applications must be designed and built using secure design and development guidelines following good security principles. Russ et al., 2007 conclude that security assessment is an organizational-level process that focuses on the non-technical security functions within an organization [15]. The assessment examines the security policies, procedures, architectures, and organizational structure that are in place to support the organization. Although there is no hands-on testing (such as scans) in an assessment, it is a very hands-on process, with the customer working to gain an understanding of critical information, critical systems, and how the organization wants to focus the future of security.
Application security is the use of software, hardware and procedural methods to protect applications from external threats. Security measures built into applications and sound application security procedures minimize the likelihood of attack. Security is becoming an increasingly important concern during development as applications are more frequently accessible over networks; as a result, applications are becoming vulnerable to a wide variety of threats. Application security can be enhanced by rigorously implementing a security framework known as threat modelling: the process of defining enterprise assets, identifying what each application does with respect to these assets, creating a security profile for each application, and identifying and prioritizing potential threats.
III. GENERAL THREAT MODELING PRINCIPLES
A threat is a specific scenario or sequence of actions that exploits a set of vulnerabilities and may cause damage to one or more of the system's assets. Threat modeling is an iterative process that starts in the early phases of analysis, design, coding and testing and continues throughout the application development life cycle. It systematically identifies and rates the threats that are most likely to affect the web application. By identifying and rating the possible threats with a detailed understanding of the application architecture, the appropriate countermeasures can be implemented against all possible threats in a logical order. Fig. 1 shows the threat modeling process, which is iterative. Threat modeling is an essential process for securing a web application: it allows organizations to determine the correct controls and produce effective countermeasures against all vulnerabilities in the application. Fig. 2 shows the interrelation between a threat and the asset, vulnerability and countermeasure entities (Fig. 2: Interrelation between threat, asset, vulnerability and countermeasure [17]). The threat described in the figure may cause damage to any of the application's assets and may even exploit all possible vulnerabilities in the system. A successful attack exploits the vulnerabilities in the application and may take total control of it, usually because of weak design principles, weak coding practices and configuration mistakes. Well-defined countermeasures can be implemented in the application to mitigate attacks, as shown in Fig. 2. The application development team needs to understand the organization's security policy and the overall objectives of the application. An asset is information, a capability, an advantage, a feature, or a financial or technical resource that should be defended from any damage, loss or disruption. Damage to an asset may affect the normal functionality of the system as well as the individuals or organizations involved with the system. Normally, in web application technology the assets are the database, the application and the web servers, and threats should be identified early to make these assets more hack-resilient at design time rather than at the deployment stage. But it is not possible to document all the possible threats a web application faces, as application development is a dynamic process. The option, then, is to conduct a brainstorming session with developers, testers, architecture designers and other professionals to identify as many threats as possible at design time itself, and then to document the threats in a hierarchical form that defines a core set of attributes to capture for each threat. It is important to rate the threats in order to prioritize the most frequently occurring ones and those that can cause the maximum risk to the application. The rating methods depend on different parameters and are generally calculated from the probability of occurrence and the damage potential a threat could cause.
A. Threat risk models
Over the last five years, threat risk modeling has become an important mitigation development in the web application security environment [18]. Different process models exist for identifying, documenting and rating threats, such as the Microsoft framework, the OWASP model, Trike, CVSS, AS 4360 and the OCTAVE model [19]. It is up to the security specialist to choose the model according to the suitability of the risk-assessing method and the technology used in the application. It is always best practice to adopt one of the risk models to reduce the business risk to the application. This study adopts the basic Microsoft threat modeling methodology, implementing threat risk modeling at both the design and implementation stages.
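Since the study rates threats with Microsoft's DREAD model (as stated in the abstract), a minimal sketch of how a DREAD score could be tallied is given below. The 1-3 scoring scale and the High/Medium/Low thresholds are illustrative assumptions, not values prescribed by the GWIS study, and the example threat scoring is hypothetical.

```python
# DREAD rating: each threat is scored on Damage, Reproducibility, Exploitability,
# Affected users and Discoverability, and the totals are binned into risk levels.
# The 1-3 scale and the thresholds below are assumptions made for illustration.
def dread_rating(damage, reproducibility, exploitability, affected_users, discoverability):
    total = damage + reproducibility + exploitability + affected_users + discoverability
    if total >= 12:
        level = "High"
    elif total >= 8:
        level = "Medium"
    else:
        level = "Low"
    return total, level

# Hypothetical example: an SQL-injection style threat scored by the review team.
score, level = dread_rating(damage=3, reproducibility=3, exploitability=2,
                            affected_users=3, discoverability=2)
print(f"DREAD total = {score} -> {level} risk")
```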
IV. GEOSPATIAL WEATHER INFORMATION SYSTEM: A THREAT MODELING APPROACH
The Geospatial Weather Information System (GWIS) is a web-based tool for capturing, storing, retrieving and visualizing weather and climatic data. GWIS contains historical climatic data for hundreds of land stations country-wide, comprising both daily and monthly climatic data. Daily data are available for nearly 150 ground stations country-wide, covering temperature, rainfall and humidity details, while monthly climatic data cover a wide range of around 3000 land stations country-wide. Daily data are captured from different sources and then arranged in the GWIS format for storage in the database. The source of the monthly data is the Global Historical Climatology Network (GHCN), which is used operationally by the National Climatic Data Centre (NCDC) to monitor long-term trends in temperature and precipitation. The mission of GWIS is to integrate weather-related information from the different available sources and organize the data in a structured GWIS format. The application is designed to cater to the research needs of application scientists working on different themes.
Microsoft provides a threat-modeling methodology for .NET technologies. The process runs from identifying assets, creating an architecture overview and decomposing the application, through identifying the threats, to documenting and rating the threats. Emphasis is placed on a detailed architectural design describing the composition and structure of the application, including the subsystems, and addressing the technologies used in the web application. As Microsoft always emphasizes a holistic approach, it again adopts a holistic approach to identifying threats [20].
A. Identifying threats
Threats generally point to the network, host and application layers. Identifying network threats is mainly concerned with understanding the network topology, the flow of data packets and the connecting network devices such as routers, firewalls and switches. The most frequently occurring network threats are IP spoofing, session hijacking, open port policies, open protocols and any weakly authenticated network device. Host threats are mainly concerned with the security settings of the operating system. Possible host vulnerabilities are unpatched servers that can be exploited by viruses, systems with non-essential ports open, weak authentication, social engineering, etc. Application threats are a larger area than any other domain of the web application. Since a web application combines multiple technologies, there is always a chance of a technology gap between any two of them; hence it is always important to evaluate the application vulnerability categories. The major application vulnerability categories are authorization, input validation, cryptography, configuration management and exception handling. The areas mentioned are the normal, known threats in the web application environment, but there may be many more unknown threats in specific areas. There are, however, other approaches to documenting potential threats, using attack trees and attack patterns.
B. Attack trees and Attack pattern
As web applications often combine client/server technology with a dynamic application development process, it is very difficult to document all possible threats. Attack trees and attack patterns are tools that security professionals use for identifying potential threats in an application. They refine information about attacks by placing the compromise of enterprise security or survivability at the root of the tree. Each tree represents an event that could significantly harm an asset, and each path through an attack tree represents a unique attack on that asset. Typically an attack tree conveys more information to the reader in a shorter time but takes longer to construct, while an attack pattern is easier to write but takes longer before the impact of the threats becomes obvious. Attack trees provide a formal way of describing the security of systems based on varying attacks. They represent attacks against a system in a tree structure, with the goal as the root node and different ways of achieving that goal as leaf nodes. Figures 3 and 4 show an attack tree and an attack pattern of GWIS respectively. A node of an attack tree is decomposed either as
a set of attack sub-goals, all of which must be achieved for the attack to succeed, represented as an AND-decomposition, or
a set of attack sub-goals, any one of which must be achieved for the attack to succeed, represented as an OR-decomposition.
Attack patterns are generic representations of commonly occurring attacks that can occur in a variety of contexts. A pattern defines the goal of the attack as well as the conditions that must exist for the attack to occur, the steps required to perform the attack, and the results of the attack [21].
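To make the AND/OR decomposition concrete, the following is a minimal, illustrative Python sketch; it is not part of the GWIS implementation, and node names such as "steal_weather_data" are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AttackNode:
    """A node in an attack tree: a goal plus AND/OR-decomposed sub-goals."""
    goal: str
    mode: str = "OR"                      # "AND": all children needed, "OR": any child suffices
    children: List["AttackNode"] = field(default_factory=list)
    achievable: bool = False              # leaf flag: the attacker can realize this step directly

    def feasible(self) -> bool:
        """Return True if this (sub-)attack can succeed."""
        if not self.children:
            return self.achievable
        results = [child.feasible() for child in self.children]
        return all(results) if self.mode == "AND" else any(results)

# Hypothetical fragment of a GWIS-style attack tree.
root = AttackNode("steal_weather_data", mode="OR", children=[
    AttackNode("sql_injection", mode="AND", children=[
        AttackNode("find unvalidated input field", achievable=True),
        AttackNode("craft query-altering payload", achievable=True),
    ]),
    AttackNode("session_hijacking", achievable=False),
])

print(root.feasible())  # True: the AND branch is fully achievable
```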
Threat #1: The attacker learns the structure of the SQL query and then uses this knowledge to subvert the query by injecting data that changes the query syntax so that it performs differently than intended.
1.1 SQL vulnerabilities
1.1.1 Block SQL injection, blind SQL injection, cross site scripting, HTTP response splitting, etc.
1.1.1.1 By verifying that user input does not contain hazardous characters, it is possible to prevent malicious users from causing the application to execute unintended operations, such as launching arbitrary SQL queries, embedding JavaScript code to be executed on the client side, or running various operating system commands.
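As an illustration of the countermeasure in 1.1.1.1, the following Python sketch contrasts string-built SQL with a parameterized query; the table and column names are hypothetical and are not taken from GWIS.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stations (name TEXT, rainfall REAL)")
conn.execute("INSERT INTO stations VALUES ('DELHI', 12.5)")

user_input = "DELHI' OR '1'='1"   # a typical injection attempt

# Vulnerable: the input becomes part of the SQL syntax itself.
vulnerable_sql = f"SELECT rainfall FROM stations WHERE name = '{user_input}'"
print(conn.execute(vulnerable_sql).fetchall())   # returns rows it should not

# Safer: the driver passes the input as data only, never as SQL syntax.
safe_sql = "SELECT rainfall FROM stations WHERE name = ?"
print(conn.execute(safe_sql, (user_input,)).fetchall())  # returns []
```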
B. Document the threat
Documenting the possible known threats of the GWIS application gives a great edge in dealing with the vulnerabilities. It is often very difficult to document unknown threats, but documenting the known threats with the help of third party vulnerability assessment tools gives the developer/administrator substantial knowledge for reducing the risks. The GWIS application has been scanned thoroughly to find the vulnerabilities in the application. For this type of assessment, a single vulnerability scanner is not sufficient; larger sites may require multiple vulnerability scanners to support the assessment needs, because specific tools are effective in some areas and may be weaker in other functional areas. For this reason, the GWIS application has been scanned with multiple scanners, namely AppScan, CENZIC and Nessus. The consolidated list of vulnerabilities observed is shown in Table 1.
TABLE I VULNERABILITIES BY PATTERNS
The vulnerabilities are documented in the threat list as per the Microsoft threat template. The threat list generates the application threat document with details of the threat target, attack techniques, risk and the possible countermeasures required to address the threat.
C. Rating the Risk
The rating process weighs the probability of a threat against the damage that could result to the application. This process generates a priority list with an overall rating for the threats, which allows the highest-risk threat to be addressed first with proper countermeasures to mitigate the risk. The risk can be calculated from a simple formula [22].
Risk = Probability x Damage potential
where the risk posed by a particular threat equals the probability of the threat occurring multiplied by its damage potential. With this formula, risk can be categorized into high, medium and low by scoring on a scale of 1-10. Many security professionals do not agree with such a simple rating system for calculating application risk because it distributes weight equally across the assets. To resolve this issue, Microsoft came up with a rating model called DREAD, which is used to calculate risk.
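A minimal worked example of this simple formula, with hypothetical probability and damage scores on the 1-10 scale; the banding thresholds of the product are an illustrative assumption, not values prescribed by the methodology.

```python
def simple_risk(probability: int, damage: int) -> str:
    """Risk = Probability x Damage potential, banded into Low/Medium/High.

    Both inputs are scores from 1 to 10; the band thresholds below are an
    illustrative assumption.
    """
    risk = probability * damage          # ranges from 1 to 100
    if risk >= 67:
        return f"HIGH ({risk})"
    if risk >= 34:
        return f"MEDIUM ({risk})"
    return f"LOW ({risk})"

print(simple_risk(5, 8))   # MEDIUM (40)
print(simple_risk(9, 9))   # HIGH (81)
```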
On the basis of the DREAD parameters, values can be calculated for each threat, which can then be categorized as high risk, medium risk or low risk.
D. Rating risk with DREAD approach
The DREAD methodology is used to calculate the risk. For each threat, the risk rating is calculated by assessing the damage potential, the reproducibility of the attack, the exploitability of the vulnerability, the number of affected users, the discoverability of the vulnerability, and finally the total risk points of the application.
D: Damage potential - the loss if the vulnerability is exploited
R: Reproducibility - how easy it is to reproduce the attack
E: Exploitability - how easy it is to attack the assets
A: Affected users - the average number of affected users in the enterprise
D: Discoverability - how easy it is to find the vulnerability
T: Total - the total calculated risk points
A threat is rated with a high value if it poses a significant risk to the application and needs to be addressed immediately. Table 2 shows the risk rating values of the GWIS application using the DREAD approach. The scoring system counts each type of vulnerability only once, even if the application contains several instances of the same type. For example, GWIS contains two instances of blind SQL injection and seven unencrypted view state parameters, but the scoring is given for only one blind SQL injection and one unencrypted view state parameter, as the vulnerability type is the same. When a particular type of vulnerability is addressed, however, all of its instances are taken care of, because each instance provides an equal opportunity to exploit the application. Once the risk rating is obtained, the threat is documented with the full information of the threat target, risk rating, attack technique and necessary countermeasure, as shown in Table 3. This template is quite useful for administrators and application designers in understanding the risks they are dealing with.
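A hedged Python sketch of DREAD-style scoring follows; the 1-10 factor scale and the severity thresholds are illustrative assumptions, and the sample scores are hypothetical rather than the exact values from Table 2.

```python
def dread_score(damage, reproducibility, exploitability, affected_users, discoverability):
    """Sum the five DREAD factors (each scored 1-10) and map the total to a severity band."""
    total = damage + reproducibility + exploitability + affected_users + discoverability
    if total >= 40:
        severity = "HIGH"
    elif total >= 25:
        severity = "MEDIUM"
    else:
        severity = "LOW"
    return total, severity

# Hypothetical scores for two vulnerability patterns.
threats = {
    "Blind SQL injection":    (9, 6, 9, 8, 8),
    "Unencrypted view state": (4, 5, 5, 4, 6),
}
for name, factors in threats.items():
    total, severity = dread_score(*factors)
    print(f"{name}: total={total}, severity={severity}")
```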
V. EXPERIMENTAL RESULTS
The GWIS application has been scanned thoroughly for vulnerabilities across the presentation, business and database layers. Nine vulnerability patterns were found, comprising 20 instances in total.
The DREAD scores are calculated for each vulnerability of the application, and the final scores are derived as per the risk categories. To experiment with the DREAD model, this study chose the GWIS application for the security assessment. During the assessment phase, the application flaws were fully assessed with a variety of tools to find the vulnerabilities of the application, and the found vulnerabilities were scored with the DREAD factors. Fig. 5 shows the DREAD severity gauge of GWIS. The exploitability factor is the maximum for the application, which shows that the vulnerabilities present in the application are easy to exploit. The damage potential is medium when the application is exploited, but the reproducibility of the attack is very low for GWIS, so from the technical point of view the risk is low. These DREAD scores are then combined to obtain the final severity risk rating for GWIS. As shown in Table 3, the probability of a threat occurring in GWIS is MEDIUM and the damage potential is MEDIUM, hence the severity level is medium. SQL injection attracts high severity scores, the unencrypted login request carries medium severity scores, and the rest of the vulnerabilities are of low severity. So from a pure business point of view the risk factor is LOW, but on the whole the probability and damage potential levels of GWIS are both MEDIUM, and therefore the overall severity of the risk is MEDIUM. To minimize the risk level of GWIS, it is crucial to fix the most severe risk-generating vulnerabilities first, such as the blind SQL injection and login page SQL injection vulnerabilities. The other vulnerabilities should likewise be fixed to further reduce the risk of GWIS.
VI. CONCLUSIONS
Web based applications should be addressed by a threat modeling process to identify the potential threats, attacks, vulnerabilities and countermeasures. It is essentially a software engineering approach to verify that the application meets the company's security objectives and to mitigate the risk to the maximum extent. It helps to identify the vulnerabilities in the application context. This paper has discussed Microsoft's DREAD approach to evaluate the risk of GWIS, and the remediation levels for the vulnerabilities. The output of the threat modeling process is a standard document describing the security aspects of the application architecture and the list of rated threats. This document serves as a reference for designers, developers and testers to make secure design choices, write code that mitigates the risks, and write test cases against the vulnerable areas identified in the document. | 5,548.2 | 2010-01-01T00:00:00.000 | [
"Computer Science"
] |
DYNAMICAL PROPERTIES OF A STOCHASTIC PREDATOR-PREY MODEL WITH FUNCTIONAL RESPONSE
A stochastic prey-predator model with functional response is investigated in this paper. A complete threshold analysis of coexistence and extinction is obtained. Moreover, we point out that the stochastic predator-prey model undergoes a stochastic Hopf bifurcation from the viewpoint of numerical simulations. Some numerical simulations are carried out to support our results.
Introduction
Predator-prey dynamics is one of the dominant fields in both theoretical and applied ecology, which has encouraged numerous researchers to develop various mathematical models to better understand it over the last few decades [2,22,25]. In population dynamics, the functional response is one of the nonlinear components in biological systems, which describes the feeding rate of prey consumption by predators, and plays a key role in understanding the dynamical complexity of the systems [16,18].
A predator-prey model with the Crowley-Martin functional response is described by system (1.1), where r denotes the growth rate of the prey x; c represents the growth rate of the predator when it is positive and its death rate when it is negative; f stands for the conversion rate of nutrients into predator production, while a and b measure the competition strength among individuals of the prey and the predator respectively. In recent years, several predator-prey models with this type of functional response have been studied [3,20,24,26,32].
As a matter of fact, environmental noise plays an inevitable role in population dynamics and always contributes random fluctuations to the parameters appearing in ecosystems [9,10,21]. Therefore, we take the influence of a randomly fluctuating environment into account. After incorporating white noise into system (1.1), we consider the stochastic system (1.2), where $B_1(t)$ and $B_2(t)$ are mutually independent Brownian motions, and $\sigma_1^2$ and $\sigma_2^2$ represent the intensities of the white noise.
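The following Python sketch illustrates how a two-dimensional stochastic system of this kind, with multiplicative white noise, can be simulated by the Euler-Maruyama method. The drift terms below use a generic Crowley-Martin-type response and assumed parameter roles; they are an illustrative assumption rather than the paper's exact system (1.2).

```python
import numpy as np

def crowley_martin(x, y, w, a1, a2):
    """Generic Crowley-Martin functional response (illustrative form)."""
    return w * x * y / ((1.0 + a1 * x) * (1.0 + a2 * y))

def euler_maruyama(x0, y0, T=200.0, dt=0.001, r=1.0, a=1.0, b=0.1, c=-0.1,
                   f=0.5, w=10.0, a1=2.1, a2=1.0, s1=0.03, s2=0.03, seed=0):
    """Simulate one sample path of an assumed stochastic predator-prey system:
    dx = [x(r - a x) - CM(x,y)] dt + s1 x dB1,
    dy = [y(c - b y) + f CM(x,y)] dt + s2 y dB2."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x, y = np.empty(n + 1), np.empty(n + 1)
    x[0], y[0] = x0, y0
    for k in range(n):
        cm = crowley_martin(x[k], y[k], w, a1, a2)
        dB1, dB2 = rng.normal(0.0, np.sqrt(dt), size=2)
        x[k + 1] = max(x[k] + (x[k] * (r - a * x[k]) - cm) * dt + s1 * x[k] * dB1, 1e-12)
        y[k + 1] = max(y[k] + (y[k] * (c - b * y[k]) + f * cm) * dt + s2 * y[k] * dB2, 1e-12)
    return x, y

x, y = euler_maruyama(0.5, 0.5)
print(x[-1], y[-1])   # end-of-path values of one sample trajectory
```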
As this kind of stochastic model accommodates interference among predators and prey and fits experimental data better, we believe it deserves further attention. Several studies have used the corresponding stochastic model to describe the dynamical properties [18,19,27,28]. Liu et al. [18] studied stochastic boundedness, stochastic permanence and extinction for a corresponding stochastic system with the Crowley-Martin functional response. Zhang et al. [28] showed the existence, boundedness and uniform continuity of the positive solution for a stochastic population system with this kind of functional response.
A threshold analysis of strong stochastic persistence and extinction has been given for some stochastic population models [29-31]. However, to the best of our knowledge, no work has yet appeared on the threshold analysis of coexistence and extinction or on the stochastic Hopf bifurcation for the stochastic predator-prey system (1.2). The Crowley-Martin functional response is a generalization of the Holling type and Beddington-DeAngelis functional responses. The parameter c may be positive or negative: if c > 0, the species y has a source of food other than x, whereas if c < 0, the species y has no source of food other than x. Both cases are considered in this paper. The aim of this paper is to investigate these issues for system (1.2).
In Section 2, we obtain a complete threshold analysis of coexistence and extinction. Section 3 considers stochastic Hopf bifurcation of the stochastic predator-prey model (1.2) from the viewpoint of numerical simulations. A final discussion concludes the paper in Section 4.
To illustrate the asymptotic behaviors of the sample paths of the solution discussed above clearly, we show them in Table 1.
exists, then the discussions are similar to those appearing in (A2) and (A3) of Case A.
We only need to discuss the sign of $\lambda_1(\mu_y)$.
If $\lambda_1(\mu_y) > 0$, then there exists a unique ergodic stationary distribution $\pi(\cdot)$ in the interior of the first quadrant.
If $\lambda_1(\mu_y) < 0$, then $\Pi_t(\cdot)$ converges to $\mu_y(\cdot)$ almost surely for any initial value $(x_0, y_0) \in \mathbb{R}^2_+$, and $x(t)$ converges to 0 almost surely. Similarly, we summarize the discussions in Table 2 (the blue parts stand for cases that have already been deduced from the previous conditions). For further details, the reader may refer to reference [12]. We will give the subsequent corrections for the revisions of the stochastic predator-prey model with this response function in our next work [J1].
Simulations of persistence and extinction
Three examples are introduced to illustrate Table 1: Figure 1 shows that both x and y go extinct.
Six further examples are listed to demonstrate Table 2. Under the corresponding conditions, both x and y go extinct; see Figure 4. Meanwhile, using mathematical software we compute that $\lambda_2(\mu_x) < 0$, hence $\Pi_t(\cdot) \to \mu_x(\cdot)$; Figure 5 supports this result.
It is obvious that $\lambda_1(\mu_y) < 0$, hence $\Pi_t(\cdot) \to \mu_y(\cdot)$. We can see from Figure 7 that the prey x goes extinct while the predator y is persistent. This supports the point that y has a source of food besides x.
Stochastic Hopf bifurcation
It follows from Section 2 that when t is sufficiently large, the statistical properties of sample paths can be used in place of spatial ones. Therefore, in this section, we use numerical simulations of sample paths to study the stochastic Hopf bifurcation of system (1.2), and illustrate that the stochastic predator-prey model can undergo a stochastic Hopf bifurcation. Let r = 1, a = 1, w = 10, $\alpha_1 = 2.1$, $\alpha_2 = 1$, and consider the two stochastic systems (3.1) and (3.2), each perturbed by noise terms of the form $0.03\,y\,dB_2(t)$. The deterministic counterpart of system (3.1) is system (3.3), which possesses a stable limit cycle according to [26]; Figures 10-12 are given in comparison with the stochastic system (3.1). Likewise, the deterministic counterpart of system (3.2) is considered, and Figures 13-15 are given in comparison with the stochastic system (3.2) [26]. It is observed that the deterministic system exhibits a Hopf bifurcation. Figure 16 is the stationary distribution of system (3.1) in the phase space. Figure 17 shows a stochastic limit cycle for system (3.1) in three-dimensional space with the time axis added. Figure 18 implies that there is a crater-like stationary distribution for the stochastic system (3.1). Figure 19 is the stationary distribution of system (3.2) in the phase space. Figure 20 shows the stochastic solution of system (3.2) in three-dimensional space with the time axis added. Figure 21 implies that there is a peak-like stationary distribution for the stochastic system (3.2). From the viewpoint of numerical simulations, Figures 16-18 show that the stochastic system (1.2) admits a crater-like stationary distribution, while Figures 19-21 show that it admits a peak-like stationary distribution. Overall, the shape of the stationary distribution changes from crater-like to peak-like, and therefore the stochastic model (1.2) undergoes a stochastic Hopf bifurcation [4,7,14,15,17,23,34].
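A hedged Python sketch of how the crater-like versus peak-like shape of a stationary distribution can be judged numerically: a long sample path (for instance one produced by the Euler-Maruyama routine sketched earlier) is binned into a two-dimensional histogram of the phase space; whether the mode sits on a ring around the deterministic limit cycle (crater) or at a single point (peak) can then be read off. The path used below is synthetic, for illustration only.

```python
import numpy as np

def phase_space_histogram(x, y, bins=60, burn_in=0.2):
    """2D occupation histogram of a sample path, discarding an initial transient."""
    start = int(len(x) * burn_in)
    hist, xedges, yedges = np.histogram2d(x[start:], y[start:], bins=bins, density=True)
    return hist, xedges, yedges

def looks_crater_like(hist):
    """Crude shape test: is the centre cell much less occupied than the mode?"""
    cx, cy = np.array(hist.shape) // 2
    return hist[cx, cy] < 0.2 * hist.max()

# Synthetic noisy-limit-cycle path standing in for a long simulation.
t = np.linspace(0, 400 * np.pi, 200_000)
rng = np.random.default_rng(1)
x = 1.0 + 0.5 * np.cos(t) + 0.05 * rng.standard_normal(t.size)
y = 1.0 + 0.5 * np.sin(t) + 0.05 * rng.standard_normal(t.size)

hist, _, _ = phase_space_histogram(x, y)
print("crater-like:", looks_crater_like(hist))   # True for a noisy limit cycle
```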
Concluding remarks
Here, we consider a stochastic predator-prey model with the Crowley-Martin functional response. The main results are as follows:
• We obtain the complete threshold analysis of coexistence and extinction for the stochastic system (1.2). Moreover, numerical simulations are introduced to support each conclusion in Table 1 and Table 2.
• From the perspective of numerical simulations, the stochastic model (1.2) exhibits both a peak-like stationary distribution and a crater-like stationary distribution, that is, it undergoes a stochastic Hopf bifurcation.
Some interesting topics deserve further investigation. It will be interesting to study the stochastic high-order nonlinear systems. We will discuss these issues in the near future. | 1,737.4 | 2020-01-01T00:00:00.000 | [
"Mathematics"
] |
Scalable funding of Bitcoin micropayment channel networks
The Bitcoin network has scalability problems. To increase its transaction rate and speed, micropayment channel networks have been proposed; however, these require funds to be locked into specific channels. Moreover, the available space in the blockchain does not allow scaling to a worldwide payment system. We propose a new layer that sits in between the blockchain and the payment channels. The new layer addresses the scalability problem by enabling trustless off-blockchain channel funding. It consists of shared accounts of groups of nodes that flexibly create one-to-one channels for the payment network. The new system allows rapid changes of the allocation of funds to channels and reduces the cost of opening new channels. Instead of one blockchain transaction per channel, each user only needs one transaction to enter a group of nodes; within the group the user can create arbitrarily many channels. For a group of 20 users with 100 intra-group channels, the cost of the blockchain transactions is reduced by 90% compared to 100 regular micropayment channels opened on the blockchain. This can be increased further to 96% if Bitcoin introduces Schnorr signatures with signature aggregation.
Introduction
The increasing popularity of Bitcoin and other blockchain-based payment systems leads to new challenges, in particular, regarding scalability and transaction speed. During peaks of incoming transactions, the blockchain cannot process them fast enough and a backlog is created. A second major problem is transaction speed, the time from initiating a transaction until one can assume that the transaction has concluded, and is thus irreversible. With inter-block times typically in the range of minutes and multiple blocks needed to reasonably prevent double spending, transactions take minutes to hours until the payment is confirmed. This may be acceptable for long-term Bitcoin investors, but not for everyday shopping or interacting with a vending machine [1].
To solve both scalability and speed, micropayment channel networks have been proposed [2,3]. A micropayment channel provides a way to trustlessly track money transfers between two entities off-blockchain with smart contracts. If both parties are honest, they can commit the total balance of many transfers in a single transaction to the blockchain and ignore the smart contracts. If a node crashes or otherwise stops cooperating, the smart contracts can be included in the blockchain and enforce the last agreed-on state.
If two parties do not have a channel, a network of multiple micropayment channels can be used together with a routing algorithm to send funds between any two parties in the network. Hashed timelocked contracts (HTLCs) provide a scheme to allow atomic transfers over a chain of multiple channels [2][3][4].
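For intuition, the following is a minimal Python sketch of the resolution logic of a single hashed timelocked contract; the function name and the block-height plumbing are illustrative assumptions, not the Bitcoin Script constructions from the cited papers.

```python
import hashlib

def htlc_resolve(preimage: bytes, payment_hash: bytes, current_height: int, timeout_height: int) -> str:
    """Resolve an HTLC: the receiver claims with the hash preimage before the timeout,
    otherwise the sender can reclaim the funds after the timeout."""
    if current_height < timeout_height and hashlib.sha256(preimage).digest() == payment_hash:
        return "receiver claims with preimage"
    if current_height >= timeout_height:
        return "sender refunds after timeout"
    return "unresolved"

secret = b"routing-secret"
h = hashlib.sha256(secret).digest()
print(htlc_resolve(secret, h, current_height=100, timeout_height=150))   # receiver claims
print(htlc_resolve(b"wrong", h, current_height=200, timeout_height=150)) # sender refunds
```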
As micropayment channel networks will keep most transactions off the blockchain, blockchain-based currencies may scale to magnitudes larger user and transaction volumes. Also, micropayment channel networks allow for fast transactions, as a transaction happens as soon as a smart contract is signed; the blockchain latency does not matter.
Challenges
Micropayment channel networks create new problems, which have not been solved in the original publications [2,3]. We identify two main challenges-the blockchain capacity and locked-in funds.
Even with increases in block size, it was estimated that the blockchain capacity could only support about 800 million users with micropayment channels due to the number of on-chain transactions required to open and close channels [5]. A large-scale adoption of micropayment channel networks, where e.g. Internet of Things devices have their own Bitcoin wallet, brings the blockchain to its limit.
Two parties cooperating in a channel must lock funds into a shared account. The locked-in funds should be sufficient to provide enough capacity for peaks of transactions. There is a conflict between the two aims of keeping the amount of funds locked up in a channel low while at the same time remaining flexible enough for these peaks.
We will present a solution that improves on both problems. Payment channels will not appear in the blockchain, except in the case of disputes. Users will be able to enter the system with one blockchain transaction and then open many channels without further blockchain contact. Funds are committed to a group of other users instead of a single partner and can be moved between channels with just a few messages inside this collaborating group, which reduces the risk, as an unprofitable connection can be quickly dissolved to form a better connection with another partner. By hiding the channels from the blockchain, a reduction in blockchain space usage and thus the cost of channels is achieved. For a group of 20 nodes with 100 channels in between them, this can save up to 96% of the blockchain space.
The channels created inside these groups work in the same way as regular micropayment channels; therefore, members of such a group can forward payments over a larger payment network of regular channels, founded either directly on the blockchain or within other groups. This property enables easy deployment in an existing payment network.
Ingredients
For completeness, this section describes the previous work we are building on.
Blockchain transactions
The concept of a blockchain to store transactions in a decentralized payment system was introduced by Satoshi Nakamoto in 2008 [6]. The blockchain is a distributed append-only ordered list of transactions. To append a transaction to the blockchain, it is broadcast into the network of miners. We will use broadcast as a synonym for appending a transaction to the blockchain; we are waiting for enough confirmations to ensure that a blockchain transaction is irreversible with high probability.
Each transaction consists of inputs and outputs. An output is an amount of currency and a spending condition, e.g. specified in the Bitcoin Script language. An input is a reference to an existing, unspent output of another transaction and a proof fulfilling the spending conditions of the referenced output.
A useful option of this design is to create an output containing n public keys, which can be spent with signatures of m of the corresponding private keys, known as an m-of-n OP_CHECKMULTISIG or just multisignature output. This implements a shared account of n entities, which can be spent with the support of m of those entities.
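The following Python sketch only models the m-of-n spending rule abstractly, to make the shared-account idea concrete; it stands in for, and is not, the OP_CHECKMULTISIG script an actual Bitcoin output would carry.

```python
def multisig_spendable(provided_signers: set, authorized_keys: set, m: int) -> bool:
    """An m-of-n shared account: spending needs valid signatures from at least m
    of the n authorized keys (signature verification itself is abstracted away)."""
    return len(provided_signers & authorized_keys) >= m

authorized = {"key_alice", "key_bob", "key_carol"}                        # n = 3
print(multisig_spendable({"key_alice", "key_bob"}, authorized, m=2))      # True
print(multisig_spendable({"key_alice", "key_mallory"}, authorized, m=2))  # False
```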
Micropayment channels
A micropayment channel is a set-up where two parties have created the means to send each other currency without contacting the blockchain. The construction principle is shown in figure 1.
The commitment is signed before the funding transaction to ensure that no funds can be taken hostage by one party, as the other party already holds the means to recover its stake. Both parties can close the channel at any time by broadcasting the prepared commitment. As the opposing party cannot spend from the shared account without both signatures, the funds are safe and the broadcast of the commitment can be delayed to a later point in time. Given a scheme to replace transactions, the channel can now be used to transfer funds by replacing the commitment transaction with new commitment transactions, which change the amount of currency sent to each party, as shown in figure 2. It is important to ensure that the old version of the commitment transaction cannot be used any more. We will look at methods to accomplish this in the next subsections.
The amount of locked funds determines the maximum imbalance between sent and received funds, until all funds are with a single partner only. This is the capacity of the channel. When a channel's capacity is depleted, currency must move in the other direction or the channel needs to be closed and reopened on the blockchain with additional funds.
Transaction replacement using timelocks
Channels which replace transactions using timelocks are known as duplex micropayment channels [2]. Figure 3 shows a simple micropayment channel with timelocks. The first commitment transaction is created with a timelock of 100 days, meaning it cannot be appended to the blockchain until 100 days have passed. The second commitment transaction is created with a timelock of 99 days and spends the same funds, so it will be valid first and if anyone spends it during the first day, the outdated commitment transaction will never have a time where it can be broadcast, as the referenced output will have been spent already. Subsequent commitment transactions use lower timelocks, always having only one transaction which can be broadcast first. A channel constructed this way has to be closed by broadcasting the newest commitment transaction as soon as the first timelock has elapsed, limiting the maximum lifetime of a channel. With relative timelocks [7,8], this problem can be solved elegantly. Figure 4 introduces a kickoff 1 transaction. Timelocks only start ticking as soon as the kickoff transaction is broadcast, resulting in a potentially unlimited lifetime of a channel.
Still, one quickly runs out of time by doing transactions in the channel, each requiring a smaller timelock on the commitment transaction. This was solved with a tree of transactions [2] as shown in figure 5. 2 At any point in time only the path where all transactions have the lowest timelock of their siblings can be broadcast. In this way, many commitment transactions can be created before the timelocks get too low and the channel cannot be updated any more.
Implementations of the transactions according to figure 5 can be found in appendix A.
Transaction replacement using punishments
A variant of micropayment channels, known as lightning channels, uses revocable transactions to replace the commitments [3,9]. Each commitment consists of two transactions, one per user in the channel. A party can give up its personal transaction by revealing a secret, which allows the opponent to punish it in the case that it broadcasts the transaction afterwards.
Channel factories
As our main contribution, we introduce a new layer between the blockchain and the payment network, giving a three-layered system. In the first layer, the blockchain, funds are locked into a shared ownership between a group of nodes. The new second layer consists of multi-party micropayment channels we call channel factories, which can quickly fund regular two-party channels. The resulting network provides the third layer, where regular transfers of currency are executed. Similar to regular micropayment channels, multi-party channels can be implemented with either timelocks or punishments for dishonest parties. Our implementation with timelocks scales much better to larger participant numbers, hence we will focus on it. The regular micropayment channels of the third layer can be punishment based or timelock based independent from the implementation of the multi-party channels of the second layer. Figure 6 shows an example channel factory of three parties that funds pairwise one-to-one channels. We formally define some concepts.
1 This is just a regular transaction without any special script. Its inclusion in the blockchain just 'kicks off' the timers on the subsequent transactions, as they are relative to the previous transaction.
2 The original publication preceded the introduction of relative timelocks and as a result had to use a different tree.
Definition 3.1 (Funding transaction).
A funding transaction is a blockchain transaction with an OP_CHECKMULTISIG output that is used to lock funds into a shared ownership between the p collaborating parties.
Note that there are two types of funding transactions in the new system, funding a multi-party channel and funding the layer three two-party channels.
Definition 3.2 (Hook transaction). The hook transaction is the funding transaction of the multi-party channel. It locks the funds of many parties into a shared ownership.
Definition 3.3 (Allocation). The allocation is one transaction or a number of sequential transactions that take the locked funds from a multi-party channel as an input and fund many multi-party channels with their outputs.
The allocation effectively replaces the funding transactions of a number of two-party channels. Commitments are already known from two-party channels. The channel is constructed by first creating all transactions of the initial state, then signing all except the hook and finally signing and broadcasting the hook. Signing the hook last ensures that the funds can be returned to their owners in case one party stops cooperating. After the hook is included in the blockchain and enough confirming blocks have been received, the channel can be used.
To implement the described set-up, the known constructions of payment channels can be extended. The hook transaction is a simple blockchain transaction which takes inputs from all users and creates one n-of-n OP_CHECKMULTISIG output, which can be spent with the signatures of all parties. The commitments include just two parties, thus the known implementations with timelocks or revocable transactions from §2 can be used directly. However, we need a new scheme for the allocations, as they need to be replaced in a trust-free way as well, but include more than two parties.
Replaceable allocations
Replaceable transactions with many parties can be implemented similarly to two-party channel commitments based on timelocks with an invalidation tree and a kickoff transaction at the root, which starts the timers when broadcast to the blockchain. The leaves of the invalidation tree create the two-party shared accounts. The principle is shown in figure 7.
Note that the order of the replacement of transactions is important. One should always have a state where the path of lowest timelocks does not end in unsigned transactions. When a new path is created in the tree, the first transaction which diverges from the old active path must be signed last, so the rest of the path is already valid and the whole new path replaces the old path atomically.
It is easy to show that there is no risk to the involved parties. Assuming that at least one party tries to broadcast transactions, when the timelocks have elapsed, only one path of the tree will ever be broadcastable, apart from situations where a channel update is in progress. While a new path is being created, there is a brief period where some parties already have the new path fully signed, while the other parties are missing signatures. This is not a problem, as this state is temporary and cannot be abused, as long as the receiver of a transaction does not regard a transfer as complete before he has received all new signatures.
Most of the tree can be pruned, thus the memory footprint is small. While a reallocation is in progress, new commitments can be made to the subchannels. To ensure that they are valid regardless of whether the new allocation succeeds, commitments should be made on both the old and new subchannels. The details of the protocol to update an allocation will be discussed later in §3.6.
Bitcoin Script implementations of the transactions are found in appendix A.
Settlement
When the involved parties cooperatively decide to close a channel factory, they can create and broadcast a settlement transaction, which pays out the current stake of each party directly from the shared account without a timelock, replacing the allocation and removing the locked funds (figure 8). This way only two transactions appear on the blockchain, the hook and the settlement, which saves blockchain space and hides the unnecessary information from the public. The protocol to create a settlement is simple. If one node decides to close the channel factory, it broadcasts this decision to all other nodes. Everyone stops updating the subchannels and broadcasts the sum of their current stake. This is enough information for each node to create and sign the settlement transaction and broadcast the signature. Nodes cannot profit from lying about their total stake: if any node gave a number too high, the total sum would exceed the locked-in funds of the factory and the settlement transaction would be invalid.
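A hedged Python sketch of that settlement check: each member announces its claimed total stake, and a settlement is only constructed if the claims do not exceed the funds locked in the hook. The data structures and names are illustrative and not part of any Bitcoin library.

```python
def build_settlement(locked_funds: int, claimed_stakes: dict) -> dict:
    """Create a settlement output map if the claims are consistent with the hook.

    A member that overstates its stake pushes the total above the locked funds,
    which makes the settlement transaction invalid, so honest nodes refuse to sign.
    """
    total = sum(claimed_stakes.values())
    if total > locked_funds:
        raise ValueError(f"claims ({total}) exceed locked funds ({locked_funds}); refuse to sign")
    return dict(claimed_stakes)   # one output per member, paying out its stake

stakes = {"alice": 40, "bob": 35, "carol": 25}
print(build_settlement(100, stakes))              # OK: pays each member its stake

try:
    build_settlement(100, {**stakes, "bob": 50})  # bob lies about his stake
except ValueError as err:
    print(err)
```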
Moving funds
A channel factory can be used to rebalance channels which have become one sided. A new allocation is set up which replaces every channel with a balanced new one while keeping the total stake of each party the same. As an advantage, funds can also be moved between channels, new channels can be created or old ones removed, changing the network connectivity without contacting the blockchain. Figure 9 shows such a rebalancing where funds are simultaneously moved to a different channel.
Including a cold wallet in a channel
Another use case of a channel factory is to include a cold wallet 3 in the creation process of a channel to later stock up the channel as shown in figure 10. The construction allows the cold wallet to be offline most of the time. When the owner wants to move money from or into the channel, the cold wallet can be brought online to create a new allocation. Updates to the subchannel do not need the cold wallet and are used for normal value transfers.
Leaving a group
In large groups, there may be situations where some node wants to leave; however, the others would like to continue the channel. Instead of closing the channel factory and opening a new one, both actions can be combined into one transaction, as shown in figure 11.
Coordination of allocation updates
When a new allocation is created, the members of a channel factory need to coordinate the creation of a new allocation transaction and all transactions to make the new subchannels of this new allocation.
Owing to the number of involved parties, this might take a considerable amount of time. However, this is not a problem, as normal channel operation can be continued as long as care is taken to make changes to the subchannels of both the old and the new allocation. An allocation update can be executed in the following order (see the protocol sketch below):
(i) A member decides that an update of the allocation is necessary, e.g. because it wants to move funds to another channel, and broadcasts to all nodes of the group that a new allocation should be created.
(ii) As soon as someone receives the allocation update request, he will issue a request to all his subchannel partners to use the current channel state as the base for the new allocation.
(iii) In each subchannel, the two cooperating parties decide on a starting state for the new subchannel and broadcast it to the group. Nodes can apply changes that move funds to other channels in this step.
(iv) Each node creates the new allocation transaction. These should all be identical, as they fund the same two-party shared accounts.
(v) The two cooperating parties of each subchannel create the subchannel commitment transactions and sign them. From this point on, they keep both subchannels based on the old and new allocation updated.
(vi) All nodes sign the new allocation and exchange signatures.
(vii) After receiving all signatures on the new allocation, a node can stop updating the subchannels based on the old allocation, as those cannot be executed any more.
Figure 11. A channel after one node has left. The allocation was replaced with a new hook. After broadcasting the new hook, the leaving node can spend its money on the blockchain and the others can continue with the channel.
3 A cold wallet is a wallet stored on a device that is most of the time disconnected from the Internet and acts as a secure long-term storage for funds.
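A minimal Python sketch of the happy path of steps (i)-(vii), tracking only which subchannel versions a single node must keep updating; message transport, signatures and transaction construction are abstracted away and all names are illustrative.

```python
from enum import Enum, auto

class Phase(Enum):
    OLD_ONLY = auto()        # only the old allocation can come into effect
    BOTH = auto()            # own signature given away, not all signatures received yet
    NEW_ONLY = auto()        # all signatures held: the new allocation is enforceable

class FactoryMember:
    """State of one member during an allocation update (steps (i)-(vii))."""
    def __init__(self, name: str, group_size: int):
        self.name, self.group_size = name, group_size
        self.phase = Phase.OLD_ONLY
        self.signatures = set()

    def sign_new_allocation(self):
        # Step (vi): give away own signature; from now on keep BOTH subchannel
        # versions updated, because either allocation might still be broadcast.
        self.signatures.add(self.name)
        self.phase = Phase.BOTH

    def receive_signature(self, signer: str):
        self.signatures.add(signer)
        if len(self.signatures) == self.group_size:
            # Step (vii): with a full signature set and a lower timelock, the new
            # allocation is enforceable, so the old subchannels can be dropped.
            self.phase = Phase.NEW_ONLY

alice = FactoryMember("alice", group_size=3)
alice.sign_new_allocation()
for other in ("bob", "carol"):
    alice.receive_signature(other)
print(alice.phase)   # Phase.NEW_ONLY
```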
From the view of any node, there are three states during this process. In the first state, the node knows that only the old allocation may come into effect. In the second state, the node has given away its signature on the new allocation, however, not received all signatures from the other nodes, thus it is uncertain which allocation might be executed on the blockchain. After receiving all signatures, the node can enforce the new allocation due to its lower timelock. By starting to apply changes to both the old and new subchannels before giving away its own signature on the new allocation, it is always ensured that the newest subchannel state is enforceable on the blockchain. Note that it is clear that movements of funds are consistent, i.e. no node can create money by telling different partners different information about moving funds between channels, as the total sum must not exceed the locked funds of the group. A net gain for some party must result in a net loss for another party, which will refuse to sign the new allocation. Furthermore, if there are different versions of the new allocation, the signatures of some parties will not be applicable to the transactions of others and the new allocation cannot come into effect, as no one has a complete set of signatures. If this happens by error, the case can be resolved either by retrying with another new allocation or by giving up and eventually resolving the situation on the blockchain with an old allocation.
The described procedure uses broadcasts of subchannel sizes and signatures. This results in a communication overhead of O(p²), where p is the number of members in the channel factory. If this is considered too large, a leader can be chosen, e.g. the node with the smallest input index in the funding transaction of the channel factory. The leader can collect and distribute the information, reducing the number of messages to O(p). The time used by the protocol is constant.
Higher order systems
With larger groups, the coordination work required to sign a new allocation rises, but it is advantageous to create large groups to save blockchain space and have more partners for subchannels. It is possible to extend the system to more layers, each layer having less parties per shared account, as shown in figure 12.
This set-up uses the same number of signatures as a system with two independent groups, one to enter and one to leave per entity. However, with two independent groups, no channels between members of different groups would be possible without additional blockchain transactions. With higher order systems, multiple groups can be combined into one larger group, which can create overlapping subgroups. This allows us to create channels which enable paths between any two members of the larger group.
Risks
With a rising number of parties in a channel factory, the number of parties that can stop cooperating and close the channel rises, as anyone involved in the multi-party channel can broadcast the allocation to the blockchain. Afterwards the subchannels can still be used, as the funds are now locked in the two-party accounts, but the option to move funds between channels is lost. There is no personal advantage in unilaterally closing a channel, as the only difference is that higher mining fees are paid for the increased blockchain space, thus everyone loses. A selfish user should always prefer a settlement solution in comparison to broadcasting the current path of the invalidation tree.
Signature aggregation
It has been proposed to introduce Schnorr signatures [10] in Bitcoin, which would enable signature aggregation. 4 Signature aggregation allows us to combine many public keys into a single public key and many signatures into a single signature. N-of-n multisignature outputs could be created with just one public key and the corresponding input would contain a single signature, saving blockchain space. Furthermore, the transaction format could be modified to use a single signature, which signs the combination of the public keys of all inputs [11]. Channel factories use transactions with many public keys and would thus profit highly from these possibilities.
Fees
Higher order systems enable larger groups, where creating a new allocation in an upper layer might require a significant number of collaborating nodes. Nodes which would like to change the affiliation with subgroups could pay fees to everyone else in the group to incentivize help to update the allocation. As all subchannels are replaced, this is easily accomplished by creating larger channels everywhere the initiating party is not involved and reducing the initiating party's stake in its own channels. Integrated into the new channel state, this payment is atomically executed with the allocation update.
Combining channel factories
As it is unlikely that all users of the payment network will share a single channel factory, there will be some kind of connecting structure between the different groups in the network. As it is not possible to join an existing channel factory without additional cost, it is likely that new users joining the system will group together to create new channel factories. To connect to the rest of the network, they need to include a few nodes already connected in other channel factories. The result might look similar to figure 13. Members of multiple groups have a higher initial cost, as they need to be part of multiple hook transactions on the blockchain; however, they will also experience more traffic and can try to make profits by routing payments. Coordinating the creation of new groups and being such a connecting node might become a business model.
Evaluation
To evaluate the cost reduction, we assume that the largest part of the cost of a money transfer in the payment network results from the space occupied in the blockchain to create the channels. The price of blockchain space is regulated by the fee market and is paid per byte of transaction data, thus more complex transactions are more expensive. We will approximate how many bytes of blockchain space are used to create a single payment channel. As someone closing a channel unilaterally loses money, it can be assumed that few disputes will reach the blockchain. Hence, we only consider cooperatively closed channels. Public keys and signatures comprise a large part of the transaction data. We use this to avoid lengthy byte counting of the Bitcoin transaction structures and define a simplified metric: the blockchain cost (BC) of a channel is the number of bytes spent on signatures and public keys, divided by the number of channels created. We start by evaluating the system with the currently used ECDSA signatures and therefore without signature aggregation. On average, an ECDSA signature constitutes 72 bytes, a public key 33 bytes. Channel factories closed cooperatively only broadcast two transactions, the hook and the settlement. Each of the two transactions contains one signature and one public key per participant. Let p be the number of parties in the channel factory and n be the number of subchannels. The blockchain cost per subchannel is as follows:
BC_factory,ECDSA = 2p(72 + 33)/n = 210p/n.
To set this into context, we also calculate the blockchain cost in a system where all one-to-one payment channels are opened directly on the blockchain. Both the funding and settlement of every channel each require two public keys and two signatures:
BC_simple,ECDSA = 2(2 × 72 + 2 × 33) = 420.
If p = 3 entities form a second layer group to create n = 3 pairwise channels, their blockchain cost is 210, so they already save 50% of the blockchain space. With p = 20 parties and n = 100 subchannels, the blockchain cost of each channel is 42, which is 10% of the original cost.
With Schnorr signatures, only one signature is necessary to sign all inputs of the hook transaction, and one combined public key can be used for the output. The settlement can also use a single signature, but needs to provide the public key for each output. If Schnorr signatures are implemented with the ed25519 curve [12], which provides a similar security level as the current ECDSA implementation, a public key uses 32 bytes and a signature 64 bytes. This results in the following:
BC_factory,Schnorr = (64 + 32 + 64 + 32p)/n = (160 + 32p)/n.
One-to-one channels without a channel factory use one signature and one combined public key on the funding transaction, plus one signature and two public keys on the settlement. This results in the following:
BC_simple,Schnorr = 32 × 3 + 64 × 2 = 224.
With p = 3 parties in a channel factory with n = 3 subchannels, we calculate a blockchain cost of 85.3, an improvement of 62% compared to blockchain-funded channels. With p = 20 parties and n = 100 channels, the cost is 8, an improvement of 96%. It is clear that channel factories increase their usefulness with Schnorr signatures.
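The following Python sketch reproduces these cost figures under the byte sizes stated above; the formulas are transcribed from the text, and the helper names are of course not part of any Bitcoin software.

```python
def bc_factory_ecdsa(p, n, sig=72, pk=33):
    """Hook + settlement, each with one signature and one public key per member."""
    return 2 * p * (sig + pk) / n

def bc_simple_ecdsa(sig=72, pk=33):
    """Funding + settlement of a single on-chain channel, two signatures and two keys each."""
    return 2 * (2 * sig + 2 * pk)

def bc_factory_schnorr(p, n, sig=64, pk=32):
    """Aggregated hook (1 sig, 1 key) plus settlement (1 sig, p keys)."""
    return ((sig + pk) + (sig + p * pk)) / n

def bc_simple_schnorr(sig=64, pk=32):
    return 3 * pk + 2 * sig

for p, n in [(3, 3), (20, 100)]:
    ecdsa = bc_factory_ecdsa(p, n)
    schnorr = bc_factory_schnorr(p, n)
    print(f"p={p}, n={n}: ECDSA {ecdsa:.1f} ({1 - ecdsa / bc_simple_ecdsa():.0%} saved), "
          f"Schnorr {schnorr:.1f} ({1 - schnorr / bc_simple_schnorr():.0%} saved)")
# p=3,  n=3:   ECDSA 210.0 (50% saved), Schnorr 85.3 (62% saved)
# p=20, n=100: ECDSA 42.0  (90% saved), Schnorr 8.0  (96% saved)
```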
Related work
The need for scalability is well understood. Apart from simply changing the parameters [13,14], the efficiency of the original Bitcoin protocol still offers space for improvement [15][16][17][18][19].
Increasing the transaction speed without payment networks has been investigated. It was shown that double spending is easily achievable without doing any mining if the receiver is not waiting for any confirmation blocks after a transaction [20,21]. Some work has been done to introduce sharding for cryptocurrencies [22-24]. If the validation of transactions could be securely distributed and every node only had to process a part of all transactions, the transaction rate could scale linearly with the number of nodes. One especially interesting approach, called Plasma [25], has been published around the time of the conference version of this work. Plasma has the property that the members of a shard are the same people that care about its contents, similar to payment channels. Indeed, one could also interpret payment channels as interest-based shards of a blockchain. Plasma also introduces trees of blockchains, splitting interest groups into smaller subgroups. The same hierarchical structure has been introduced to payment channels with this work!
Payment networks
Solutions to find routes through a payment network in a scalable and decentralized way have been proposed, based on central hubs [26], rotating global beacons [27], personal beacons, where overlaps between sender and receiver provide paths [28], or combinations of multiple schemes [29].
A known way to rebalance channels in a payment network is the cyclic transaction, shown in figure 14. The idea originated in private communication between the developers of the Lightning Network. While cyclic rebalancing allows us to reset channels which have run out of funds, it has limitations. If the amount of funds running through a specific edge has been estimated wrongly at funding time, or changes over time, rebalancing might become necessary frequently. This slows down transactions, which have to wait for the rebalancing to finish. Our solution with channel factories allows moving the locked-in funds to a different channel, solving the problem for a longer time.
Differences to the conference version
This work was presented in an earlier version at the 19th International Symposium on Stabilization, Safety, and Security of Distributed Systems [30]. Apart from small extensions, a section was removed as a flaw was discovered. The section contained an idea to re-merge subchannels into a larger channel in further unpublished transactions similar to figure 15. However, this is insecure, as any owners of one of the merged subchannels can close their subchannel with a different transaction than the new hook at any time. This means the new hook does not provide the secure lock-in feature required to base payment channels of it.
From this, we learn that any transaction that is a descendant of an unconfirmed transaction can only be trusted by people who are also required to sign everything that could replace the unconfirmed transaction. In other words, to trust a transaction one has to make sure that every unconfirmed ancestor transaction requires one's signature to be replaced. This also means channels can only have fewer participants further down the transaction graph.
Conclusion
We introduced a new layer of channel factories, sitting between the blockchain and the network of micropayment channels. Within a group of nodes, channel factories allow for more flexibility, creating many micropayment channels without additional blockchain usage, and easy movement of locked-in funds to other subchannels of the same factory using only off-blockchain collaboration. By creating many of those channel factories with some member overlap, a network of micropayment channels can be created with a lower use of blockchain space compared to existing systems.
The larger a group, the more space is saved, as the additional channels amortize the blockchain transactions. Three-party channel factories save 50% of the blockchain space. In a setting of 20 users with 100 channels between them, 90% reduction is achieved. In a Bitcoin system with signature aggregation those numbers improve even more to 62% and 96%, respectively.
With a larger number of nodes in a channel factory, there is an increased risk of someone closing the channel factory, creating blockchain transaction costs for everyone involved; however, there is no gain for the acting party, meaning that any entity that is trusted to act selfishly will be a good channel factory member. Nevertheless, this risk limits the usefulness of large groups.
Figure 14. Rebalancing a cycle of channels which have become one sided. The channels between A, B and C have been heavily used in one direction, e.g. external transactions being routed counterclockwise. As a result, one direction of each channel cannot be used any more due to insufficient funds. An atomic cyclic transfer, shown by the red arrows, can turn the three channels usable again. The transaction does not change the total stake of any involved party.
Figure 15. A set-up to merge subchannels into a larger channel again. It is insecure because two parties can close their subchannels with a different transaction without agreement from the third party and make the new hook impossible to execute. | 7,666.4 | 2017-11-05T00:00:00.000 | [
"Business",
"Computer Science"
] |
The Accurate Modification of Tunneling Radiation of Fermions with Arbitrary Spin in Kerr-de Sitter Black Hole Space-time
The quantum tunneling radiation of fermions with arbitrary spin at the event horizon of Kerr-de Sitter black hole is accurately modified by using the dispersion relation proposed in the study of string theory and quantum gravitational theory. The derived tunneling rate and temperature at the black hole horizons are analyzed and studied.
Introduction
The Lorentz dispersion relation has been regarded as a basic relation of modern physics, related to general relativity and quantum field theory. However, the development of quantum gravity theory shows that the Lorentz relation must be modified in the high energy regime. The dispersion relation in the high-energy regime has not been established and needs to be further studied. Nevertheless, it is generally believed that the magnitude of such a correction is only of the Planck scale [1-10]. The dynamical equation of the spin-1/2 fermion is the Dirac equation in curved space-time, while that of the spin-3/2 fermion is described by the Rarita-Schwinger equation in curved space-time. In 2007, Kerner and Mann et al. proposed to study the quantum tunneling of spin-1/2 fermions using semiclassical theories and methods [11,12]. Subsequently, Yang and Lin studied the quantum tunneling of fermions and bosons applying the semiclassical Hamilton-Jacobi method, found that the behaviours of fermions and bosons can both be described by one and the same equation, the Hamilton-Jacobi equation, and studied the quantum tunneling radiation of various black holes with the Hamilton-Jacobi theory and method [13,14]. In the process of studying the thermodynamics of black holes, it is worth mentioning that Banerjee and Majhi et al. put forward the Hamilton-Jacobi method beyond the semiclassical approximation to modify the quantum tunnelling of bosons and fermions, and then studied the temperature, entropy, and other physical quantities of black holes [15,16]. Zhao et al. conducted effective research on the Hawking radiation of various black holes [17-19]. It is worth noting that Parikh and Wilczek corrected the particle tunneling rate at the event horizon of the black hole by taking into account the real situation that the background space-time changes before and after particle tunneling [20]. Banerjee and Majhi et al. further developed the mechanism of quantum tunneling through the chirality method and the semiclassical approximation [21-25]. Other scholars have also conducted a series of effective studies on the quantum tunneling rate and entropy of various black holes [26-43]. Science and technology are always developing and progressing with the passage of time, and research on theoretical physics, astrophysics and related hot topics can always promote the continuous improvement of scientific research and bring about knowledge innovation. Recent studies show that the modification of the Lorentz dispersion relation makes it necessary to modify the particle dynamical equation in the space-time of a strong gravitational field, which mainly means correcting the quantum tunneling of fermions or bosons at the event horizon of the black hole; the gravitational wave equation in curved space-time can also be modified as necessary. The purpose of this paper is to analyze physical quantities such as the quantum tunneling rate of arbitrary spin fermions, the black hole temperature, and the black hole entropy in Kerr-de Sitter black hole space-time. Therefore, in Section 2, the Rarita-Schwinger equation, which is the dynamical equation of arbitrary spin fermions, will be modified by applying the modified Lorentz dispersion relation. In Section 3, the quantum tunneling radiation of arbitrary spin fermions at the horizons of the Kerr-de Sitter black hole, which is a rotating, stationary, axisymmetric black hole with a cosmic horizon, is studied.
Section 4 includes some conclusions and discussions.
From the Rarita-Schwinger Equation in Flat Space-Time to the Rarita-Schwinger Equation in Curved Space-Time
A modified dispersion relation on the quantum scale is given in [1-10]. We consider a special case of this modified dispersion relation (the speed of light in vacuum is set to unity), in which the energy and momentum of the particle appear together with a constant on the Planck scale. When α = 2 is considered, the dynamical equation of the spin-1/2 fermion is the Dirac equation, which takes its usual form in flat space-time. In flat space-time, the more general fermion dynamical equation was expressed by Rarita and Schwinger [44], together with the conditions that it satisfies. Taking the correction on the quantum scale as a small quantity leads to the matrix Equation (8). In the Kerr-de Sitter metric, given in Equations (11) and (12), the constant appearing there is the radius of curvature of the cosmic horizon. Obviously, the event horizon and the cosmic horizon of this black hole satisfy their respective horizon equations. From (11), (12), and (13) we can obtain Equation (14), whose ingredients are the energy of the tunneling particle and a component of its generalized angular momentum. It can also be seen from expression (11) that the electromagnetic potential of the particles vanishes, A = 0, in this space-time. To solve the matrix Equation (9), we suppose the form (15). Substituting formulas (14) and (15) into Equation (9), Equation (9) is simplified to (16). From Equation (16) we obtain (17) and (18). From Equations (17), (18), and (6), together with (11) and (12), a precisely modified particle dynamical equation follows. When the correction term is ignored, the equation reverts to the Hamilton-Jacobi equation for particles of mass m in the curved space-time expressed by Equations (11) and (12). Therefore, we can regard Equation (25) as the modified Rarita-Schwinger equation. The process of solving Equation (25) is the process of solving Equation (9). We only need to solve Equation (25) to find the fermion action and consequently study the characteristics of quantum tunneling radiation at the horizons of the black hole.
Tunneling Radiation Characteristics of Arbitrary Spin Fermions in Kerr-de Sitter Black Hole Space-Time
From Equations (11) and (12), the metric determinant and the nonzero components of the contravariant metric tensor of this space-time can be calculated, respectively, as given in Equation (19).
According to the theory of tunneling radiation of black holes, Equation (34) shows the quantum tunneling radiation rate of the black hole, where the modified Hawking temperature at the event horizon is expressed through Equation (36) in terms of the unmodified Hawking temperature at the event horizon of the black hole, once the higher-order terms are ignored. It is worth noting that Equation (36) is an accurate correction based on the Lorentz dispersion relation. This is a modification with a specific theoretical basis, and a small modification on the quantum scale. Since this correction leads to a correction of the black hole entropy, which is related to the information of the black hole, its physical significance is worth studying in depth. The equation satisfied by the cosmic horizon of the black hole is shown in (13). Similarly, by solving the dynamical equation of the arbitrary-spin fermions in this black hole space-time, we can obtain the fermion tunneling rate at the cosmic horizon of the black hole and, likewise, the Hawking temperature at the cosmic horizon, given by formula (37), where the primed quantities are obtained from the corresponding quantities of formula (36) by replacing the event-horizon quantities with the cosmic-horizon ones. The difference between obtaining formula (37) and formula (36) is that when we integrate ∂S/∂r at r → r_c using the residue theorem, we need to integrate inward toward the region where the observer is located; in other words, the integrations of ∂S/∂r at r → r_h and at r → r_c run in opposite directions.
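As a hedged sketch (assuming the usual Boltzmann-factor form for a rotating horizon, with ω the particle energy, j its angular momentum component, and Ω_h the angular velocity of the event horizon), the connection between the tunneling rate and the temperature discussed above can be summarised as
$$ \Gamma_h \simeq \exp\!\left(-\frac{\omega - j\,\Omega_h}{T_h}\right), $$
so that the modified temperature of Equation (36) is read off directly from the exponent of the tunneling rate.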
Another important physical quantity in black hole thermodynamics is the black hole entropy, and the modified Hawking temperature leads to a modification of the black hole entropy. According to the first law of thermodynamics of black holes, the entropy of the black hole can be expressed through the corresponding integral; for the Kerr-de Sitter black hole, therefore, through formula (36), the modified entropy at the event horizon of the black hole follows, together with the quantities defined with it. The positive and negative signs in Equation (32) correspond to the outgoing wave solution and the incoming wave solution, respectively. In order to find the fermion action S, we can treat the solution of the equation Δ_r = 0 as a singularity, integrate at the event horizon of the black hole by applying the residue theorem, and then obtain expression (29).
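A hedged sketch of the first-law relation invoked above, written for a fixed cosmological constant with Ω_h the angular velocity of the event horizon:
$$ \mathrm{d}S_h = \frac{\mathrm{d}M - \Omega_h\, \mathrm{d}J}{T_h}, \qquad S_h = \int \frac{\mathrm{d}M - \Omega_h\, \mathrm{d}J}{T_h}, $$
so a Planck-scale correction to T_h propagates directly into the corrected entropy at the event horizon.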
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest regarding the publication of this paper.
Similarly, we can obtain the modified entropy at the cosmic horizon of the black hole, which is expressed as
Conclusion and Discussion
Through a series of calculations, the obtained formulas (35), (36), and (37) show that the tunneling radiation of arbitrary-spin fermions in Kerr-de Sitter black hole space-time has been accurately modified based on the Lorentz dispersion theory.
The result of this correction shows that when a = 0, the Hawking temperature at the event horizon of the black hole returns to that of a static black hole, and when the cosmological-constant terms are further ignored, the tunneling rate, black hole temperature, and other physical quantities return to those of the Schwarzschild black hole. This further demonstrates the correctness of the conclusions. The conclusions of Equations (34)-(41) show that the Lorentz dispersion relation is a theory worth studying in the high-energy field, and it should also be considered in the study of strong gravitational fields and gravitational waves. Research on these topics can not only promote the study of quantum gravity theory, but also promote innovation in theoretical physics and astrophysics.
Moreover, the above calculations show that the tunneling rate Γ depends on the particle energy ω, the angular momentum component j, and ω_0 = Ω_0 j, but the back-reaction of the tunneling particles has not yet been considered. Let us now study this effect. After the black hole emits a particle with energy ω and angular momentum j, the mass and angular momentum of the black hole become (m − ω) and (J − j), and the tunneling rate is rewritten accordingly. This means that the tunneling rate depends on the entropy of the black hole, so it is a candidate solution of the information loss paradox in black hole physics [45].
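A hedged sketch of the standard Parikh-Wilczek form that the rewritten tunneling rate takes once this back-reaction is included (assuming the usual relation between the emission probability and the change of Bekenstein-Hawking entropy):
$$ \Gamma \propto \exp\!\left(\Delta S_{BH}\right) = \exp\!\left[\, S_{BH}(m-\omega,\, J-j) - S_{BH}(m,\, J)\,\right], $$
which makes explicit that the emission probability is governed by the change in black hole entropy.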
On the other hand, this work is done in the semiclassical approximation, in which ℏ is treated as a small number, so that the terms of order O(ℏ) in the field equations are ignored. If the effects of O(ℏ) are taken into account, the entropy of the black hole will be modified again; the relevant techniques are developed in [15,16]. We will investigate this effect in future work.
"Physics"
] |
Healthcare information exchange using blockchain technology
Received Jul 17, 2019. Revised Aug 10, 2019. Accepted Aug 29, 2019.
The current trend in the healthcare industry is to shift its data to the cloud to increase the availability of Electronic Health Records (EHRs), e.g. a patient's medical history, in real time, which allows EHRs to be shared with ease. However, this conventional cloud-based data sharing environment has data security and privacy issues. This paper proposes a distributed solution based on blockchain technology for trusted Health Information Exchange (HIE). In addition to the exchange of EHRs between patient and doctor, the proposed system is also used in other aspects of healthcare, such as improving insurance claims and making data available to research organizations. Medical data is very sensitive, in both social and legal respects, so a permissioned blockchain such as Hyperledger Fabric is used to retain the privacy required in the proposed system. As this is a highly permissioned network in which the owner of the network, i.e. the patient, holds all access rights, the proposed system also includes a Backup Access System for emergency situations, which allows healthcare professionals to access a partial EHR; this backup access is provided through a wearable IoT device.
INTRODUCTION
a. Motivation
Users today expect a seamless and instantaneous flow of data, and many industries have already adopted, or are beginning to adopt, the technologies needed to provide their users with the required information quickly and securely. Unfortunately, the healthcare industry has lagged behind in meeting its users' expectations. Conventional systems are often vulnerable to attack, slow, and leave very little role for the patient [1]. Health data stored in conventional systems is very difficult to share due to varying standards and data formats; that is, the current healthcare ecosystem is ill-suited to the instantaneous needs of the modern user. The primary objective of an HIE system is to transmit health information beyond geographical and institutional boundaries and to supply an effective and secure delivery mechanism [2]. A few factors need to be considered with respect to this sharing mechanism. a. Maintaining the privacy of user data is very important, and failure to do so has both financial and legal implications. b. A traditional data sharing system requires a centralized source of data, which increases data security risks, and it also requires a single trusted central authority. In such a centralised system, failure of the central storage endangers the medical records of many patients. As data for each patient from various sources such as wearables, physicians, and lab reports keep increasing, HIE systems are under pressure to scale their infrastructure and support a variety of data sources [3].
b. Problem statement
Currently, in the traditional system, Electronic Medical Records (EMRs) are stored in a centralized cloud-based database in which medical records remain largely non-portable. Centralization increases security risks and requires trust in a single authority. Regardless of controlled access and de-identification, centralized databases cannot guarantee integrity and data security. The disadvantages of the current system are significant and cannot be ignored, which makes it necessary to develop a new solution for sharing EMRs that is more secure and reliable and able to handle the data privacy, data redundancy, and other security issues of a Healthcare Information Exchange system.
c. Use of blockchain in HIE system
In a Healthcare Information Exchange system, various healthcare entities such as the doctor, pharmacist, medical lab in-charge, and insurance company may have to submit transactions that change the contents of the Electronic Medical Records (EMRs) [4]. These EMRs contain critical and highly sensitive patient medical information that needs to be secured. This requires that the participants performing transactions be known to and trusted by the other participants [5]. Since centralization does not guarantee integrity and data security, this paper proposes a distributed solution that implements an HIE system to share EMRs between the various healthcare entities using blockchain. A blockchain is a distributed ledger used to log and store every transaction block [2]; that is, whenever a participant performs a transaction on an EMR, it is recorded in the ledger. Blockchain relies on well-known cryptographic methods that allow every node in the network to participate in interactions (e.g., storing, exchanging, and viewing information) without prior trust between the participants of the network.
There is no central authority in a blockchain system; instead, transaction blocks are recorded and distributed to all peers in the network [6]. Thus, all participants in the network know of every interaction with the blockchain and require it to be verified before it is added, which enables trust-less interaction between the peers in the network [7]. A block, once added to the blockchain, can neither be deleted nor updated, i.e. it is immutable.
In a blockchain-based HIE, the patient has ownership of the medical records, in contrast to the traditional architecture, where a central authority controls access and distributes data across the network [8]. In this system, medical record access is permitted only to a limited set of healthcare entities (people or organizations). Data shared across the blockchain network enables near real-time updates across all healthcare entities. The distributed ledger allows secure access to patient data [9]. Data redundancy is reduced as the same copy of the data is available to all peers in the network. Privacy and confidentiality in the blockchain network are maintained by restricting which nodes can access the data.
d. Hyperledger fabric to support private data
The proposed system requires that some information remain private to certain participants and not be visible to others in the network; for example, the research organizations and insurance companies need not know all the transactions in the network [10]. Other permissioned blockchain networks require that all participants have the same view of the distributed ledger, which makes it difficult to support private data for different participants, whereas Hyperledger Fabric provides a way to keep certain data and transactions confidential among a subset of members in the network [11].
e. Objectives of HIE system
A secure, immutable, and decentralized EHR database, with the patient owning her/his own health data
Easy sharing of selected or all EHRs as consented by the patient
Full medical history of a patient at one single point
Easy verification of medical prescriptions
Redacted EHRs for research purposes
Increased transparency
No insurance fraud
TECHNOLOGICAL DETAILS
Basic concept of Hyperledger Fabric
The Fabric network is distributed and permission-based, and requires its users to sign up before using it. Permissioning on the network is controlled using Hyperledger modelling and access-control languages [12]. It is an implementation of Distributed Ledger Technology (DLT) that provides enterprise-ready network scalability, security, performance, and confidentiality in a flexible blockchain architecture [13]. Like other blockchains, Fabric has a ledger and uses smart contracts, called chaincode. Hyperledger
Fabric blockchain has a configurable and modular architecture that enables optimization, innovation, and versatility for different industrial use-cases [14]. Unlike other distributed ledger platforms, Hyperledger Fabric allows smart contracts to be written in general-purpose languages such as Go, Node.js, and Java rather than in Domain-Specific Languages (DSLs). Unlike permission-less networks, Fabric is a fully permissioned platform; all participants in the network are known to each other, rather than anonymous and therefore fully untrusted. Hyperledger Fabric introduces a new architecture for transactions in the network called execute-order-validate [15]. This architecture overcomes various challenges faced by the order-execute model, such as resiliency, flexibility, scalability, performance, and confidentiality, by separating the transaction flow into three steps: 1. Execute a transaction and check its correctness; 2. Order transactions using a consensus protocol; 3. Validate transactions against an application-specific endorsement policy before committing them to the ledger. Hyperledger Fabric provides multi-layer permissions that allow the owner of the data to control which parts of the data are accessed [16].
Functionalities of Hyperledger Fabric
2.1.1. Identity management
Hyperledger Fabric enables a permissioned network by providing membership identity services [17]. This service is responsible for managing user IDs and authenticating all network participants.
Privacy and confidentiality
It allows competing businesses, and any group that needs its transactions to be confidential and private, to coexist on the same network. Members of the network use private channels to maintain the confidentiality and privacy of transactions [18]. A member who does not have access to a private channel cannot see the channel information, the transactions on that channel, or its membership.
Chaincode functionality
Chaincode, also called a smart contract, manages the business logic agreed by all participants in the network [19]. It is a program written in languages such as Go, Java, or Node.js, and it runs in a Docker container that is secured and isolated from other peer processes.
Modular design
Hyperledger Fabric gives network designers functional choices by implementing a modular architecture [20]. It allows particular algorithms for encryption, ordering, and identity to be plugged into any Fabric network. This results in a universal blockchain architecture that can be adopted by any public or industry domain, with the guarantee that the network will be interoperable across regulatory, market, and geographical boundaries [21].
Hyperledger composer
Hyperledger Composer is a set of collaboration tools for building blockchain business networks that makes it simple and fast for business owners and developers to create smart contracts and blockchain applications to solve business problems. It builds on the existing Hyperledger Fabric blockchain runtime and infrastructure, which supports a pluggable consensus protocol that ensures transactions are validated according to the policy designed by the participants of the business network.
SYSTEM DESIGN
3.1. System overview
Figure 1 shows an overview of the personal healthcare information exchange system. It consists of the following entities: patient, emergency contact, hospitals, insurance company, research organization, doctor, pharmacist, and the patient's private blockchain network.
HIE System Entities
3.1.1. Patient/user
Patient/User is the owner of the network and has complete control over his or her personal health data. The patient is responsible for granting, denying, and revoking data access for the other peers in the network, such as doctors, hospitals, insurance companies, and research organizations [21]. While receiving medical treatment, the patient can share his or her Electronic Health Records with the doctor by granting read access rights, and when the treatment is completed the patient can revoke those rights to decline further access to the EHR. Similarly, the patient can grant, deny, or revoke access for other peers connected to the network, such as research organizations and insurance companies. Besides this, the patient can record personal information such as a daily health routine, allergies, or any other related data, which will help in improving medical treatment.
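As an illustration of the grant/deny/revoke logic described above, the following is a toy Python sketch of the access-control state only; it is not Hyperledger Fabric chaincode (which would be written in Go, Node.js, or Java), and all names are hypothetical.

```python
# Illustrative Python sketch of patient-controlled access rights.
# NOT Fabric chaincode; it only models the world-state an access-control
# chaincode could keep on the ledger for one patient's records.
from dataclasses import dataclass, field
from typing import Dict, Set


@dataclass
class EHRAccessLedger:
    """Toy state: which participant may read/append the patient's records."""
    grants: Dict[str, Set[str]] = field(default_factory=dict)  # participant id -> rights

    def grant(self, participant: str, rights: Set[str]) -> None:
        # In the real system, only the patient (owner) could invoke this.
        self.grants.setdefault(participant, set()).update(rights)

    def revoke(self, participant: str) -> None:
        self.grants.pop(participant, None)

    def can(self, participant: str, right: str) -> bool:
        return right in self.grants.get(participant, set())


ledger = EHRAccessLedger()
ledger.grant("doctor_alice", {"read", "append"})   # during treatment
assert ledger.can("doctor_alice", "read")
ledger.revoke("doctor_alice")                       # after treatment is completed
assert not ledger.can("doctor_alice", "read")
```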
Doctor/practitioner
Patient/User can appoint a healthcare professional such as a doctor to give medical treatment or to perform medical tests, and give them access to his or her EHRs. When more than one doctor is involved in providing treatment, the doctor may request the patient to give permission to the other doctor or doctors to access the patient's medical records. The doctors can read previous blocks containing medical records and, after obtaining permission from the patient, can add a new block with the current medical record to the blockchain.
Health insurance company
The patient can allow a health insurance company to access their Electronic Health Records [22]. The insurance company in return can suggest a better insurance policy by analysing the patient's EHRs. In addition, insurers will have a trusted source of healthcare data on which to base better decisions [23].
Research organizations
Research organizations can request access to a patient's medical history for medical research purposes. Patients' medical histories help research organizations improve health services, develop more effective diagnostics and treatments, gain insight into the causes of diseases, and identify public health risks [23]. Table 1 shows the Health Information Exchange system participants and their permissions.
Goals
The main goal of the proposed system is to provide a way to store and share health information in a more secure and effective manner [24]. The patient's EHR must be consistent, available when needed, and only the patient should control the terms of its access. The secondary goal is to share the EHR in such a way that its structure and meaning are easily understandable, improving data utilization and resulting in good patient care.
Backup access system
The proposed system is to be implemented using Hyperledger Fabric, a highly permissioned blockchain in which all access rights to the EMRs rest with the owner of the network, i.e. the patient. However, in an emergency, with the patient unable to grant access to his or her medical records, there must be a way to view certain information in order to provide the best possible care. Therefore, the proposed system provides a backup access system that grants partial access to the patient's Electronic Medical Records. This is done using a wearable IoT device [25]. Medical staff can scan the IoT device in an emergency to gain access to critical healthcare information such as blood group, allergies, age, and critical disease information. This enables doctors to provide the best possible care to a patient in an emergency situation.
CONCLUSION
This work proposes a blockchain-enabled Healthcare Information Exchange system that addresses challenges such as data integrity and interoperability; integrating blockchain into HIE allows access to historic and authentic healthcare data. In addition, the proposed system can be used in other aspects of healthcare, such as improving insurance claims and making data available to research organizations.
"Computer Science",
"Medicine"
] |
Fine-grained urban blue-green-gray landscape dataset for 36 Chinese cities based on deep learning network
Detailed and accurate urban landscape mapping, especially for urban blue-green-gray (UBGG) continuum, is the fundamental first step to understanding human–nature coupled urban systems. Nevertheless, the intricate spatial heterogeneity of urban landscapes within cities and across urban agglomerations presents challenges for large-scale and fine-grained mapping. In this study, we generated a 3 m high-resolution UBGG landscape dataset (UBGG-3m) for 36 Chinese metropolises using a transferable multi-scale high-resolution convolutional neural network and 336 Planet images. To train the network for generalization, we also created a large-volume UBGG landscape sample dataset (UBGGset) covering 2,272 km2 of urban landscape samples at 3 m resolution. The classification results for five cities across diverse geographic regions substantiate the superior accuracy of UBGG-3m in both visual interpretation and quantitative evaluation (with an overall accuracy of 91.2% and FWIoU of 83.9%). Comparative analyses with existing datasets underscore the UBGG-3m’s great capability to depict urban landscape heterogeneity, providing a wealth of new data and valuable insights into the complex and dynamic urban environments in Chinese metropolises.
Background & Summary
Urban Landscapes are complex and dynamic geographic phenomena that are shaped by natural and human forces 1,2 .These landscapes are comprised of various components, including urban green space (UGS), urban blue space (UBS), and urban impervious surfaces (UIS), which together form the basic units of complex urban landscape configurations 3,4 .UGS generally refers to vegetated land in urban areas 5 , such as parks, gardens, trees, and grasses, which play a crucial role in upholding urban ecosystem equilibrium.UBS pertains to water features within urban locales, encompassing rivers, lakes, wetlands, reservoirs, ponds, and artificial water structures 3 .UIS, commonly referred to as "urban gray space", are impervious surface features in cities caused by man-made land use activities, such as building roofs, asphalt or concrete roads 6 .UGS and UBS provide multiple ecological and social benefits, including climate regulation, water and air purification, biodiversity conservation, carbon sequestration, recreational opportunities, and aesthetic enhancement 3,7,8 .However, UIS may engender negative impacts, such as the urban heat island (UHI) effect, air quality degradation, and stormwater runoff increases 6,9 .Therefore, a balanced approach to urban landscape construction necessitates the integration of all three space types, with a focus on increasing the quantity and quality of UGS and UBS, while also minimizing the negative impacts of UIS, which can lead to a more sustainable and livable urban environment 10 .In the context of global urbanization, there is a pressing need for a more precise perception of the urban landscape structure 11 .Nonetheless, urban blue-green-gray (UBGG) landscapes are strikingly heterogeneous in terms of space, structure and function, as they are the outcome of dynamic interactions between biophysical and socio-economic processes occurring at multiple spatial scales 2,12 .Yet, detailed and accurate UBGG landscape mapping is the fundamental first step to understanding human-nature coupled urban systems.Therefore, it is necessary to open the "closed box" of urban landscape structure and quantify the subtle heterogeneity of the built and natural components within the metropolis 1,13 .With the increased demand for higher resolution urban products, urban mapping products have made tremendous progress toward finer scales in the last decades 11,14,15 .It can be attributed to the availability and accessibility of very high resolution (VHR) satellite data and the support of computing platforms with substantial computing power, such as Google Earth Engine (GEE) 11 .Notably, VHR imagery, with resolutions as fine as 1-3 m/pixel, has emerged as an invaluable asset for revealing urban landscapes at an increasingly detailed level of granularity, offering a comprehensive view of the ground.Moreover, Deep Learning (DL) techniques have emerged as a powerful tool for VHR urban landscape mapping, revolutionizing the field of intelligent classification research in the 21st century [16][17][18] .Establishing fine-scale urban datasets for landscape/landcover interpretation and deep learning-based research has become a hot research topic in recent years 15,17,19 .However, present urban landscape datasets typically focused on individual landscapes (e.g., UGS or UBS) 20,21 or limited spatial extents (usually covering several cities/provinces) 22 .Some scholars focused on UGS extraction to accurately digitally twin UGS at a fine scale 5,21,23 , such as Brandt et al. 
23 utilized VHR satellite images covering more than 1.3 million km 2 in West African Sahara and Sahel, detecting more than 1.8 billion individual trees in areas previously regarded as barely covered by trees.Similarly, Shi et al. 5 generated 1-meter UGS maps for 31 major cities in China using Google Earth images.Some scholars focused on UBS extraction to address the obstacles posed by the confusion of water with heavy shadows in VHR images 20,24,25 , like Chen et al. 25 proposed an open water detection method in urban areas using VHR imagery, successfully identifying various types of water bodies.Likewise, Li et al. 20 proposed the water index-driven deep fully convolutional network (WIDFCN), showcasing robustness to different shadows types and achieving high-performance water extraction in 12 test sites worldwide.In addition, in the field of UIS extraction, scholars have also achieved notable outcomes in the application of DL to urban building and road extraction from VHR images [26][27][28] .For example, Guo et al. 28 devised a coarse-to-fine boundary refinement network for building footprints extraction from VHR images.Nevertheless, a limitation persists in the mapping of single landscapes or confining analyses to limited geographical extents, failing to offer a comprehensive understanding of the highly heterogeneous interactions between human and natural elements 1,22 .
Establishing an effective automatic DL model for fine-grained and large-scale UBGG dataset is a challenging frontier in high-resolution urban landscape mapping.However, the pursuit of such datasets comes with its own set of challenges, stemming from VHR image acquisition, manual annotation, and the intrinsic heterogeneity of urban landscapes.First, the paramount significance of VHR imagery in capturing intricate urban landscape details is countered by its inherent costliness and the complexities associated with its acquisition 15,29 .Although Google Images has been used for some large-scale research, its restricted geographic and temporal coverage, limited visible spectrum bands, as well as varying image quality are also inevitable drawbacks 14 .Second, training a UBGG network with large-scale applications and high generalization capability relies on a large-volume sample dataset 21 , which poses a major challenge for nationwide landscape mapping due to the enormous dataset, laborious annotation, and cumbersome process involved.Although some studies have proposed innovative techniques employing biophysical indices or existing coarse-resolution products in conjunction with self-supervised mechanisms to generate training labels automatically 20 , the label noise of resolution mismatch of spatial resolution and the true accuracy of labels require further scrutiny.Reliable training labels are crucial to achieving accurate fine-scale landscape mapping results but still insufficient.Third, the striking heterogeneity characterizing the UBGG landscape at both intra-and inter-city levels and across various spatial scales presents significant impediments to effectively mining multi-scale features 1,2 .The variability of urban landscapes across geographic locations and climatic zones, such as plant type, water quality, building structure, and color, poses significant challenges 30 .Additionally, mining multi-scale features from UBGG landscapes presents substantial obstacles.Fine-scale features, encompassing spectral colors, geometrical sizes, and textural shapes, primarily manifest in the network's shallow layers but are often confused and invalidated at deeper levels 26 .Conversely, coarse-scale features, such as global spatial context, are obtained from the deep layer but struggle to be effectively expressed 31 .
China has undergone rapid development and urbanization in recent decades 32 , becoming the world's second-largest economy. In light of this remarkable growth, a comprehensive mapping survey of large-scale and fine-grained landscapes assumes immense significance, fostering an in-depth comprehension of the urban environment, facilitating effective urban landscape management, and illuminating future development trajectories 33 . Consequently, this study endeavors to develop a transferable multi-scale high-resolution convolutional neural network to generate a 3-meter resolution UBGG landscape dataset, utilizing Planet images of 36 Chinese metropolises. Rigorous validation processes, including visual interpretation and quantitative evaluations, were employed to assess the credibility and efficacy of the UBGG-3m dataset, further augmented by comparisons with existing products. This dataset will enhance our understanding of fine-scale landscape distribution patterns in Chinese metropolises, provide a deeper understanding of integrated human-nature systems from an ecological perspective, and contribute to better urban landscape management as well as sustainable urban development planning 1,12,29 . The 36 study cities fall into four geographic regions: the northern region (including Shijiazhuang, Taiyuan, Lanzhou, Qingdao, Jinan, Zhengzhou, and Xi'an), the southern region (Shanghai, Nanjing, Hangzhou, Hefei, Ningbo, Wuhan, Changsha, Nanchang, Chengdu, Chongqing, Guiyang, Kunming, Nanning, Fuzhou, Xiamen, Guangzhou, Shenzhen, and Haikou), the northwest region (Hohhot, Yinchuan, and Urumqi), and the Qinghai-Tibet region (Xining and Lhasa).
Planet multispectral satellite images with a spatial resolution of 3 meters provide an important data source for capturing the detailed characteristics of urban landscapes (https://www.planet.com/explorer/).Planet, with the largest commercial Earth observation satellite constellation ever built, operates over 200 small satellites in near-Earth orbit 35 .These satellites provide meter and sub-meter spatial resolution images, allowing for an unprecedented global repeat observation frequency of once a week.This frequency and resolution enable the capture and analysis of UBGG landscapes in unprecedented detail, providing insights into the morphology and dynamics of the urban landscape at an unprecedented scale.In addition, the Planet multispectral imagery comprises four bands, with the near-infrared band being particularly adept at capturing vegetation growth information, thereby augmenting the accuracy of UGS type classification.A total of 336 clear and non-cloudy images in summer of 2020 (June to October) were downloaded (Table 1).In cases where cloud cover obscured images of the study area in 2020, cloud-free images from the summer of 2021 served as suitable replacements.The Planet satellite images were preprocessed utilizing geometrical correction, image mosaic, color stretching, band combination, and projection transformation.Finally, we obtained standard false color images covering 36 metropolitan urban areas in China, where trees and grass are shown as dark and bright red, which can be better distinguished from UBS and UIS.
The boundaries of 36 metropolises were defined according to the administrative boundaries, which were obtained from the Resource and Environment Science and Data Center (https://www.resdc.cn).Nonetheless, administrative boundaries cannot distinguish between urban and rural areas, leading to potential misclassification of urban grassland and farmland, due to their similar physical features but distinct economic attributes.To address this challenge and improve classification accuracy, we integrated the 2018 China Urban Boundary (CUB) data 4 into our classification process.The CUB data was meticulously extracted through a human-computer digitalization process from China's Land Use/cover Dataset (CLUD), derived from Landsat images.Notably, the CUB data is known for its high accuracy in urban boundary detection, with an overall accuracy rate exceeding 92.65% from 2000 to 2018 4 .Specifically, we focused on reclassifying areas outside the urban boundaries, ensuring that urban grasslands located outside these boundaries were accurately reclassified as farmland.
To comprehensively assess the reliability and precision of the UBGG-3m dataset, we collected several established and widely utilized land cover datasets, as well as two high-resolution urban green space datasets for comparison and validation.Specifically, the land cover datasets include the 30 m GlobeLand30 in 2020 36 , the 10 m Esri land cover in 2020 37 , the 10 m ESA World Cover in 2020 38 , the 1 m national-scale land-cover map (SinoLC-1m) 14 .To ensure consistency with our classification system, the four land cover products were reclassified into UGS (trees, grassland and farmland), UBS, and UIS.The other two high-resolution urban green space datasets include the 2 m Urban Tree Cover dataset (UTC-2m) 21 , and 1 m Urban Green Space (UGS-1m) 5 .
Technical framework.
The workflow for generating the UBGG-3m dataset mainly includes three phases, as depicted in Fig. 2. Firstly, UBGGset for typical Chinese cities was created for training, validation, and testing of the deep learning model.Secondly, the novel deep learning model was pre-trained on the UBGGset and tested in Beijing to compare its performance against state-of-the-art deep learning network models.Finally, the transfer training was utilized to strengthen the pre-trained model to adapt to diverse landscape characteristics in different geographic regions and generated UBGG-3m of 36 metropolitan areas in China.Thorough visual inspection and quantitative accuracy validation were conducted to ensure the reliability and credibility of the UBGG-3m dataset.
UBGG landscapes sample dataset. Accurate and reliable training labels are critical to the accuracy of fine-scale urban landscape mapping 14,20 . Current UBGG studies are deficient in standard datasets, so we first created a large-volume UBGG landscape sample dataset (UBGGset) applicable to urban areas in China. The classification system includes the UBS, UGS, and UIS landscapes in the city. UBS comprises all water bodies, including rivers, lakes, and seas, as well as reservoirs and ponds, while UGS is further divided into tree, grass, and farmland.
The HRNet-OCR network architecture. The HRNet-OCR network architecture (Fig. 4), constituting the core of the deep learning model, was designed to tackle the challenges posed by multi-scale information extraction and the inadequacy of contextual information in VHR images. Leveraging the High-Resolution Network (HRNet) 39 as the backbone, HRNet-OCR effectively harnesses multi-scale feature learning and exploits four multi-branch parallel convolutions to generate high-to-low-resolution feature maps 40 . Meanwhile, the multi-scale information branches are sufficiently linked to enable a seamless flow of information, enhancing semantic richness and spatial accuracy. Furthermore, this structure avoids the loss caused by recovering high-resolution features from low-resolution features, thereby preserving the image's high-resolution features throughout the process. To overcome the problem of inadequate contextual information, we also integrated the Object-Contextual Representations (OCR) module 31 , which augments each pixel feature with object-level contextual information, thereby improving the capability to recognize objects in complex scenes with multiple objects and occlusions.
The specific training steps are as follows: Firstly, we input the train and validation dataset into the HRNet for multi-scale feature learning and then obtained a coarse segmentation map from the softmax layer.
Secondly, we computed the object region representation of each landscape class from the coarse segmentation map of HRNet by aggregating the representations of all pixels belonging to the N-th landscape object,
$$ f_N = \sum_{i} \tilde{m}_{Ni}\, x_i , $$
where N indexes the landscape categories, f_N represents the N-th landscape object, x_i denotes the representation of pixel i, and \tilde{m}_{Ni} refers to the normalized degree to which pixel i belongs to the N-th landscape object.
Thirdly, we calculated the relationship between each pixel and the corresponding landscape objects as below:
$$ W_{iN} = \frac{\exp\big(\kappa(x_i, f_N)\big)}{\sum_{K} \exp\big(\kappa(x_i, f_K)\big)} , \qquad y_i = \sum_{N} W_{iN}\, \delta(f_N), $$
where W_{iN} means the relationship between x_i and f_N, and the transformation functions \kappa(x, f) and \delta(\cdot) follow the literature 31 . Lastly, the final pixel representation z_i was obtained by combining the original representation x_i and the object contextual representation y_i through the transformation function g(\cdot) 31 :
$$ z_i = g\big([x_i,\ y_i]\big), $$
where z_i is the augmented pixel representation and y_i is the object contextual representation.
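A minimal PyTorch-style sketch of the aggregation and relation steps above is given below. It is a simplified illustration, not the exact published OCR module (which uses additional key, query, and value transforms and normalization layers); all class and variable names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class OCRHead(nn.Module):
    """Simplified object-contextual representation head (illustrative only)."""

    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        self.coarse = nn.Conv2d(channels, num_classes, kernel_size=1)   # coarse segmentation (soft regions)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)    # g(.): combine x_i with y_i
        self.classify = nn.Conv2d(channels, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor):
        b, c, h, w = x.shape
        coarse = self.coarse(x)                                      # (B, K, H, W) coarse logits
        m = F.softmax(coarse.flatten(2), dim=-1)                     # normalised membership m~_{Ni}
        pixels = x.flatten(2)                                        # (B, C, HW) pixel representations x_i
        f = torch.bmm(m, pixels.transpose(1, 2))                     # (B, K, C) object representations f_N
        sim = torch.bmm(pixels.transpose(1, 2), f.transpose(1, 2))   # (B, HW, K) pixel-object similarity
        w_rel = F.softmax(sim / c ** 0.5, dim=-1)                    # W_{iN}: softmax over objects
        y = torch.bmm(w_rel, f).transpose(1, 2).reshape(b, c, h, w)  # y_i: object contextual representation
        z = self.fuse(torch.cat([x, y], dim=1))                      # z_i = g([x_i, y_i])
        return self.classify(z), coarse                              # fine logits plus auxiliary coarse logits
```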
Transfer learning. To enable the model's effective performance across large-scale applications, the study employed a transfer learning technique 15 . Influenced by natural factors (e.g., vegetation type, spectral diversity of water bodies, impervious surface styles) and external factors (e.g., solar altitude angle, image quality) 1 , satellite images collected from different regions can exhibit inconsistent data distributions 15 . Therefore, a model trained on the dataset of one region cannot be applied effectively to images of another region. To overcome this challenge, transfer learning was employed, in which a pre-trained model is used as the starting point for a new task in a different geographic region. Specifically, we first trained a model in the northern geographic region to obtain a pre-trained model, and further fine-tuned it through adversarial training by adding samples from the next region to the pre-trained model for parameter initialization and feature extraction (as shown in Fig. 2). This process was repeated for each geographic region. For large-scale applications, transfer learning can improve computing efficiency and model generalization compared with training from scratch on a small sample dataset.
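A hedged sketch of this region-by-region fine-tuning is shown below. The stand-in one-layer model, the synthetic batch, the commented-out checkpoint name, and the cosine annealing schedule (used here as a stand-in for the simulated-annealing adjustment of the learning rate) are illustrative assumptions; the optimizer, learning rates, and weight decay follow the experimental parameters reported further down.

```python
import torch
import torch.nn as nn

# Stand-in network: 4 Planet bands -> 5 landscape classes (UBS, tree, grass, farmland, UIS).
model = nn.Conv2d(4, 5, kernel_size=1)
# state = torch.load("pretrained_previous_region.pth", map_location="cpu")   # hypothetical checkpoint
# model.load_state_dict(state, strict=False)   # initialise from the previous region's weights

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=120, eta_min=1e-5)
criterion = nn.CrossEntropyLoss()

# One synthetic batch stands in for the new region's 256 x 256 training patches.
images = torch.randn(2, 4, 256, 256)
labels = torch.randint(0, 5, (2, 256, 256))

for epoch in range(3):                       # the paper trains for 120 epochs
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    scheduler.step()
```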
Post-processing.In the post-processing stage of image classification, three essential techniques were implemented to enhance the accuracy of the results.The sliding window prediction method 27 was employed to effectively address the issue of insufficient image edge information, mitigating the impact of mosaic traces.Test enhancement techniques, involving horizontal, vertical, and diagonal flipping, were used to improve classification accuracy and reliability by averaging test image enhancements.Lastly, morphological post-processing, facilitated by the "skimage" package in Python, removed small incorrect patches and filled tiny holes, ensuring precise and accurate classification results.
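A minimal sketch of the morphological clean-up step described above, using the "skimage" package mentioned in the text; the per-class strategy and the area threshold are illustrative assumptions rather than the authors' exact settings.

```python
import numpy as np
from skimage import morphology


def clean_prediction(pred: np.ndarray, min_size: int = 16) -> np.ndarray:
    """Remove tiny misclassified patches and fill small holes, class by class."""
    cleaned = pred.copy()
    for cls in np.unique(pred):
        mask = pred == cls
        mask = morphology.remove_small_objects(mask, min_size=min_size)       # drop small wrong patches
        mask = morphology.remove_small_holes(mask, area_threshold=min_size)   # fill tiny holes
        cleaned[mask] = cls
    return cleaned


# Example on a synthetic 3-class prediction map.
pred = np.random.randint(0, 3, (64, 64))
print(clean_prediction(pred).shape)
```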
Accuracy assessment.To evaluate the accuracy and quality of the proposed UBGG-3m dataset, comprehensive assessments were conducted at the pixel level.The widely used assessment metrics were used to evaluate classification accuracy for each landscape pixel, including Precision, Recall, overall accuracy (OA), F1-score (F1), intersection over union (IoU), and frequency weighted intersection over union (FWIoU).The calculation equations for the metrics are shown in Table 2.
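The metrics named above can all be computed from a class confusion matrix; the following is a minimal sketch (the helper name and the example matrix are illustrative).

```python
import numpy as np


def metrics_from_confusion(cm: np.ndarray):
    """cm[i, j] counts pixels of true class i predicted as class j."""
    n = cm.sum()
    oa = np.trace(cm) / n                                   # overall accuracy
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    precision = tp / np.maximum(tp + fp, 1)
    recall = tp / np.maximum(tp + fn, 1)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    iou = tp / np.maximum(tp + fp + fn, 1)
    freq = cm.sum(axis=1) / n                               # class frequency weights
    fwiou = (freq * iou).sum()                              # frequency weighted IoU
    return oa, precision, recall, f1, iou, fwiou


cm = np.array([[90, 5, 5], [3, 80, 7], [2, 6, 92]])
print(metrics_from_confusion(cm))
```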
Experimental parameters. The whole experimental process was completed on the High-performance Computing Platform of Peking University, employing the PyTorch deep learning framework with GPU acceleration from an NVIDIA Tesla P100. During training, the batch size was 32 and the initial learning rate was 0.0001. The learning rate was adjusted by simulated annealing to avoid the gradient descent algorithm falling into local minima, with a minimum learning rate of 1e-5. The Adam optimizer was selected for loss optimization with a weight decay factor of 0.001. We set the number of iteration epochs to 120 and selected the optimal model parameters corresponding to the rounds with the highest accuracy in training and validation. The loss function is used to calculate the difference between the predicted and true values and to update the network model parameters by error backpropagation. Here, we used the combined loss function of Soft Cross Entropy Loss (CE) and Dice Loss (DL) 5 , which can more effectively handle the category imbalance problem and enhance model generalization. The calculation formula is as follows:
$$ Loss = w_{CE}\, Loss_{CE} + w_{DL}\, Loss_{DL} \qquad (5) $$
Fig. 4 The architecture of the HRNet-OCR. The yellow box shows the structure of HRNet 39 , and the blue box shows the structure of the OCR module 31 .
where y_i is the network's prediction of the urban landscape, p_i is the ground truth of the urban landscape from the label images, and the weights w_CE and w_DL are both set to 0.5.
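A minimal PyTorch sketch of this combined loss of Eq. (5), assuming a standard multi-class Dice formulation; the implementation details (softmax Dice over one-hot targets, the smoothing constant) are common choices rather than the authors' exact code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CEDiceLoss(nn.Module):
    """Loss = w_CE * cross-entropy + w_DL * Dice, with both weights 0.5 as in the paper."""

    def __init__(self, w_ce: float = 0.5, w_dl: float = 0.5, eps: float = 1e-6):
        super().__init__()
        self.w_ce, self.w_dl, self.eps = w_ce, w_dl, eps

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        ce = F.cross_entropy(logits, target)                                  # cross-entropy term
        num_classes = logits.shape[1]
        prob = F.softmax(logits, dim=1)
        one_hot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
        inter = (prob * one_hot).sum(dim=(0, 2, 3))
        union = prob.sum(dim=(0, 2, 3)) + one_hot.sum(dim=(0, 2, 3))
        dice = 1 - ((2 * inter + self.eps) / (union + self.eps)).mean()       # Dice loss term
        return self.w_ce * ce + self.w_dl * dice


loss_fn = CEDiceLoss()
logits = torch.randn(2, 5, 64, 64)          # 5 landscape classes
target = torch.randint(0, 5, (2, 64, 64))
print(loss_fn(logits, target).item())
```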
Data records
The UBGG dataset 41 is easy for researchers and analysts to access and leverage, and is stored in the following Zenodo repository (https://doi.org/10.5281/zenodo.8352777). The UBGG dataset consists of two main components: 1) UBGG-3m: the fine-grained UBGG map of the 36 metropolises in China. The UBGG-3m dataset captures intricate urban landscape features with remarkable precision, providing a detailed representation at 3-meter resolution. The classification maps for all 36 Chinese metropolises are showcased in Fig. 5. Researchers can delve into the nuances of the UBGG continuum, gaining valuable insights into the interplay between the blue, green, and gray elements of the urban environment in each metropolis.
Technical Validation
Visual and accuracy evaluation on UBGG-3m.To evaluate the accuracy of the product, five cities were selected for visual and quantitative assessment, namely Beijing, Shenzhen, Harbin, Urumqi, and Lhasa.A total of six test sample areas were collected in these five cities, covering an area of 43.5 km 2 .The classification accuracy was evaluated by comparing the labeled reference maps with the results.The OA for all samples (about 4.83 million pixels) was found to be 91.23%(Table 3), indicating promising mapping results.For the different types, UBS had the highest F1 of 95.15%, followed by UIS at 93.14%.The F1 for trees and grass were 87.54% and 85.97%, respectively.The quantitative assessment results of different cities were accurate, with OA higher than 91% except for Lhasa (83.21%), demonstrating the usability and accuracy of the UBGG-3m product.
The visual assessment results of the UBGG-3m were presented in Fig. 6.As the capital of China, Beijing is a highly urbanized and economically developed city, with comprehensive blue and green infrastructure construction.The HRNet-OCR model accurately identified UBS ranging from large lakes to small ponds, as well as the moat surrounding the Forbidden City (Fig. 6a).In addition, the model also effectively captured the sizes, shapes, locations, and boundaries of UGS, such as individual tree canopies in residential areas, small arborizations in Peking University, and slender trees on boulevards.Notably, the reconstructed tree geometry was highly consistent with the ground truth data.Moreover, the model was able to successfully distinguish between trees and grass, highlighting delicate shape contours in areas such as playgrounds of a school and artificial grass on golf courses (Fig. 6a).The results demonstrated the model's ability to extract detailed information about the UBGG landscape in urban areas and to distinguish between different types of greenery with a high level of accuracy.Shenzhen, located in the southern region of China, is characterized by a higher coverage ratio of UBS and UGS, which is
Table 2 Assessment metrics and formulas, where TP, TN, FP, and FN represent true positive, true negative, false positive, and false negative; N is the total pixel number of the image; K is the number of categories; x_ii represents the number of pixels of category i that were correctly classified; and x_ij represents the number of pixels of category i that were wrongly assigned to category j.
mostly comprised of large reservoirs and parks.The model's effectiveness in accurately describing the complex boundary shape of Xikeng Reservoir and identifying trees around commercial and residential buildings, as well as greenery along roadsides has been demonstrated, as shown in Fig. 6b.Harbin, as a representative city in northern China, has the largest proportion of farmland within its administrative boundaries.The analysis of detail maps indicated that building shadows had a certain impact on the accurate extraction of UGS, particularly in residential areas with tall buildings, as illustrated in Fig. 6c.The presence of building shadows led to discontinuous UGS extraction, and sometimes the obscured areas were classified as UIS, resulting in relatively poor extraction with F1 of 79.38% and 88.25% for trees and grass, respectively (Table 3).Urumqi, as a representative of inland cities in the northwest region, has the largest UIS area, and the UBGG-3m product exhibits superior performance in providing detailed information on UGS and UBS.It is worth noting that despite being geographically distant from each other, with Harbin located in the far north, Urumqi in the northwest, and Shenzhen in the south of China, the UBGG classification results for all cities are excellent.This suggests that the model framework's performance is unlikely to be affected by geographic location differences, which could be attributed to the transfer learning strategy that helped the model adapt.with lower and sparser vegetation cover were more likely to be misclassified, owing to their similarity in appearance to bare ground.Conversely, UGS with higher and denser vegetation cover were more accurately identified.
Comparison with the state-of-the-art deep learning networks.Several state-of-the-art deep learning networks were selected for performance comparison with HRNet-OCR, including PSPNet 42 , DeepLabV3+ 43 , UNet 44 , HRNet 39 .The model's accuracy and loss value records for each round were plotted and shown in Fig. 7.
As depicted in Fig. 7a, the accuracy of all five models increased rapidly with the increase in epochs and then gradually increased and stabilized after 20 epochs.In terms of training loss, as demonstrated in Fig. 7b, all five models initially showed a rapid decrease in the first 20 epochs, followed by a more stable decrease.Among the state-of-the-art deep learning networks, PSPNet exhibited the slowest improvement in classification accuracy and loss function convergence.Conversely, HRNet outperformed DeepLabv3 + in terms of accuracy improvement and loss function convergence.Overall, HRNet-OCR demonstrated the most significant training advantage, with the accuracy reaching 0.989 and the loss reduced to 0.197 after 120 epochs.Although this advantage was not apparent in the early stage, it showed significant improvement in the later stage compared to HRNet.The performance evaluation of different models was conducted on a test region covering 1225 ha (1167 × 1167 pixels) in Haidian District, Beijing, which included Summer Palace Park, Haidian Park, Wanliu Golf Course, Kunming Lake, and Xiyuan residential area, representing a variety of UBGG landscape features (Fig. 7c).The classification results and accuracy assessment of HRNet-OCR and other state-of-the-art semantic segmentation networks are presented in Fig. 7c and Table 4, respectively.All deep learning methods demonstrated effective UBS extraction, with F1 above 96.9%.However, the classification of UGS and UIS was more challenging.PSPNet struggled to handle detailed information, resulting in smooth edges of impervious and trees that were inconsistent with the actual landscape boundaries.DeepLabv3 + still had difficulty in distinguishing trees and grass, particularly on the golf course lawns, where several solitary tree canopies were ignored.In comparison, HRNet performed better in classifying UBGG landscapes, particularly in accurately recognizing trees and grasses, with the boundaries of UGS more consistent with actual features, owing to the high to low resolution feature learning mechanism.Furthermore, the classification accuracy was significantly improved by introducing the OCR module based on HRNet.The F1 of UBS, UGS_tree, UGS_grass, and UIS classified by HRNet-OCR were improved by 0.56%, 1.11%, 1.03%, and 1.95%, respectively, compared to HRNet.The OA was ranked from large to small: HRNet-OCR (93.16%) >HRNet (91.94%) >UNet (91.05%) >DeepLabv3 + (91.00%) >PSPNet (89.40%), highlighting the effectiveness and great potential of HRNet-OCR for high-resolution landscape classification tasks.
Comparison with and without transfer learning in large-scale UBGG landscape classification.
To develop an ecological understanding of urban systems, the spatial heterogeneity for urban landscapes from various geographic regions must be addressed for large-scale and fine-grained mapping 5,15 .Transfer learning has been demonstrated as a useful tool to address urban landscape heterogeneity and dynamics by a large body of literature 15,45 .Our study found that transfer learning can consider the spectral variance of diverse UBS types, including rivers, lakes, and reservoirs.For example, a large sediment content in the Yellow River causes a high reflectivity that appears as a blue-green color on a standard false-color image (Fig. 8a), while the Jialing River shows bright blue color due to its shallow water level, and the Yangtze River has high turbidity showing lake blue (Fig. 8b).The pre-trained model was unable to fully comprehend this UBS heterogeneity.However, after transfer learning, the misclassification was much reduced by introducing positive/negative UBS samples and fine-tuning the pre-trained model with new water features.In addition, the transfer learning cross-geographic regions method has significant advantages in solving "various UGS in the same spectrum" and "same UGS with different spectrums" 46 .For example, crops and urban trees were highly confused due to the same spectral characteristics during the peak growth period of crops in Harbin (Fig. 8d).Similarly, the classification of aquaculture area in Wuhan also had mixed trees and farmland, manifested by the relatively broken and irregular shape of farmland patches (Fig. 8e).After adversarial training, the misclassification is much improved, and the edges are more finely and accurately delineated.
The findings demonstrate that transfer learning can enhance the generalization by efficiently retraining on the pre-trained model, which is feasible and potentially possible for large-scale, high-resolution UBGG landscape mapping.In practical applications, HRNet-OCR can be automatically applied to other cities and achieve good urban landscape classification by fine-tuning the pre-trained model or even directly using the pre-trained model.Here, we counted the computational efficiency of the prediction phase.The computational times for Table 3. Quantitative results of accuracy evaluation on UBGG-3m (units: %).
The computational times of HRNet-OCR for the 36 cities were recorded on an NVIDIA Tesla P100 GPU with PyTorch. Statistically, it took only about 5 h to generate UBGG-3m covering all 36 metropolitan areas (50,411 km²) using transfer learning, which is effective for the timely monitoring and management of dynamic changes in the urban landscape.
Comparison with existing landcover/landscape datasets.Visualization comparisons of UBGG-3m with existing landcover/landscape datasets are shown in Fig. 9. Additionally, one region of each city was zoomed in for visual inspection of spatial detail reconstruction.Remarkably, our product demonstrated superior performance in terms of visual assessment results, exhibiting excellent landscape classification results.Most of the existing land cover products, exhibited poor accuracy in reconstructing the UBGG landscape, often misclassifying blue-green natural land as construction land (Fig. 9).Among the four large-scale land cover products, ESA World Cover displayed relatively better performance, albeit falling short in accurately depicting the edges of the urban landscape compared to UBGG-3m.This phenomenon can be attributed to two main factors.Firstly, the diameter of tree crowns typically ranges between 0.5 m and 10 m, and the width of urban rivers and ponds generally falls between 20 m to 100 m, which can be smaller than one pixel of Sentinel-2 or Landsat 21 .As a consequence, the resolution limitation leads to a mixed pixel problem, where scattered UGS and striped UBS may merge with the surrounding landscape and thus be removed from the pixel 47 .Secondly, the orientation of these products is designed for global or national land cover rather than specifically for urban areas 37 .For example, the Food and Agriculture Organization (FAO) defines forests as patches greater than 0.5 ha with more than 10% tree canopy cover, leading to an underestimation of UGS in these products.Furthermore, our comparative analysis with UTC-2m and UGS-1m demonstrated the superiority of our UBGG-3m product in accurately capturing urban green space (Figs. 9, 10).The UBGG-3m product based on higher resolution planet images facilitated more accurate detection of urban tree crowns and finer-grained analysis of their distribution patterns.In contrast, UTC-2m, due to its lower resolution of Sentinel-2 images, may fail to identify small or isolated trees and struggle to distinguish between different types of tree canopies.On the other hand, UGS-1m, which utilized high-resolution Google imagery, offers a comparable representation of urban green space when compared to our product.However, the utilization of multispectral information from Planet imagery allows UBGG-3m to achieve a higher level of discrimination between urban trees, grasslands, and farmlands, which is not attainable with the other two high-resolution tree products.These comparisons provide compelling evidence of the superior performance and accuracy of UBGG-3m in capturing the intricate characteristics of urban landscapes.More importantly, the UBGG-3m product mapped a comprehensive urban blue-green-gray landscape in human-nature coupled urban systems.It will enable urban planners, researchers, and policymakers to gain a deeper understanding of the complexities inherent in the urban landscape and facilitate more effective management strategies.
Usage Notes
Urban applications.Urban areas occupy only a very small portion of the terrestrial landscape but play a crucial role in driving environmental change at local, regional, and global scales 6,48,49 .Although the importance of urban landscape ecology is increasingly being recognized 50 , related researches are still limited due to the lack of large-scale and high-resolution urban landscape maps 29,35 .With its high resolution and accuracy, the UBGG-3m product has the potential to provide more precise knowledge of urban landscape and facilitate a deeper understanding of the patterns, processes, and implications of urbanization.Here, we briefly describe some research applications in which our product can be further applied.
(1) Sustainable Urban Planning.UBGG-3m contributes significantly to the development of sustainable urban planning by providing detailed information on the spatial heterogeneity of landscape types and their distribution patterns.With the increase in urbanization, the importance of maintaining and enhancing UGS and UBS has become widely recognized 10,51 .UBGG-3m enables the identification and quantification of green and blue infrastructure, which helps in assessing their contributions to urban ecosystems and environmental services.In particular, UBGG-3m allows researchers to analyze the spatial configuration and pattern of UGS (e.g., tree, grass, and farm), including their connectivity, size, shape, type, and distribution.This information is essential for making informed decisions on urban planning and management, including land use policies, urban greening, and urban infrastructure development.
(2) Urban thermal environment.Our product contributes to the in-depth study of the urban thermal environment, where the current understanding of the contributors to the Urban Heat Island (UHI) effect mainly relies on coarse land cover types due to the lack of high-resolution images 6 .However, the UHI is more like an "archipelago" than an "island" 52 , with local temperature differences as large as those along the urban-rural gradient.A systematic investigation of the interaction between fine-scale urban landscapes and thermal environments is still lacking, and UBGG-3m can provide landscapes spatial variation on a fine-scale.(3) Urban aboveground carbon storage.High-resolution urban landscape products facilitate urban aboveground carbon storage studies.Numerous studies have proven that UGS have significant carbon sink potential and provide ecosystem services and livelihood benefits 53 .However, this service has been largely underestimated in most studies.For example, an analysis conducted in Beijing showed that carbon stocks were underestimated by 39% of satellite data from 6 m to 30 m resolution 7 .Furthermore, according to an analysis in Leicester, UK 54 , shifting from 10 m to 250 m resolution remote sensing data resulted in a 76% underestimation of aboveground carbons stores.Additionally, a survey estimated that more than 1.8 billion isolated trees in West Africa have carbon stocks up to 22 MgC ha -1 , which is far larger than global biomass mapping 23,53 .Thus, our product provides essential information on the estimation of urban aboveground vegetation carbon density with large spatial variability.Apart from the applications discussed above, the UBGG-3m can be combined with big geospatial data and contribute to other scientific research, such as smart city construction, urban digital twin, sustainability assessment, habitat evaluation, and urban health studies 29 .
Limitations and future work. This study represents a significant advancement in the production of VHR urban landscape maps for 36 Chinese metropolises. However, several limitations of the study need to be acknowledged. Firstly, UBGG-3m only covers the 36 cities included in the study, and further work is necessary to extend this coverage to other cities worldwide. Secondly, the availability of high-resolution images is still limited by factors such as temporal resolution and cloud cover occlusion. As a result, UBGG-3m only covers the summer images of 2020-2021. As more high-resolution satellite images become available, future research could be devoted to landscape classification tasks for more cities and long time series globally. This would provide a more comprehensive understanding of urban landscape dynamics and aid in developing effective urban planning and management strategies.
Fig. 1
Fig. 1 Spatial distribution of the 36 study cities within the four geographic regions in China.
others are classified as UIS, including buildings, traffic roads, squares, and other impervious surfaces. In addition, shaded and bare land is also classified as UIS. UBGGset was constructed with co-registered pairs of 3 m Planet images and fine-annotated urban landscape labels drawn on 1 m Google Earth images. The visual interpretation of the UBGGset landscapes was done by the mapping team and further validated by field surveys. Moreover, UBGGset covers 4 major geographic regions and 15 typical cities (Beijing, Harbin, Changchun, Hefei, Wuhan, Changsha, Xi'an, Chengdu, Chongqing, Guizhou, Fuzhou, Shenzhen, Hohhot, Lanzhou, and Lhasa), covering an urban area of about 2,272 km 2 , which enriches the available urban landscape standard datasets and facilitates the large-scale application of deep networks. Examples of UBGGset for six cities are shown in Fig. 3. After that, 50,852 training images and 12,712 validation images (256 × 256 pixels) were obtained by sliding-window clipping and data augmentation (horizontal, vertical, and diagonal flips).
Fig. 2
Fig. 2 Workflow for generating the Urban Blue-Green-Gray landscape product (UBGG-3m) using high-resolution Planet satellite images.
Fig. 6
Fig. 6 Classification results of the UBGG-3m in (a) Beijing, (b) Shenzhen, (c) Harbin, (d) Urumqi, and (e) Lhasa. The small maps at the bottom display detailed classification results of the UBGG landscape in major urban scenes such as residential areas, schools, parks, etc. (The background images are Planet satellite images from © Planet 2020. The classification results are depicted using colored boundaries, with bright blue representing urban blue spaces, green indicating trees, yellow representing grass, and orange denoting farmland.)
(4) Deep learning. This work provides an open high-resolution dataset for urban landscape semantic segmentation studies, which can serve as a huge training pool for high-resolution land cover mapping. Moreover, Planet images cover a global scale and are freely available, allowing us to develop a robust and transferable deep network for urban landscape classification using deep learning and transfer learning. At the same time, our product also promotes the application of more deep learning models to urban environmental remote sensing research, driving technological advances in this field and promoting the development of urban landscape remote sensing interpretation towards intelligence and automation 17 .
Fig. 8 Fig. 9
Fig. 8 Comparison of classification results before and after transfer learning in the urban landscape. (a) Yellow River Basin in Lanzhou; (b) Yangtze River and Jialing River confluence area in Chongqing; (c) Sand quarries in Urumqi; (d) Farmland in Harbin; (e) Aquaculture areas in Wuhan.
Fig. 10
Fig. 10 Visualization comparisons of urban tree extraction between UBGG-3m and high-resolution urban green space datasets in Beijing. (a) Planet satellite images from © Planet 2020; (b) Google images from © Google Earth 2020; (c) Comparison of UBGG-3m and Urban Tree Cover-2m (UTC-2m) 21 ; (d) Comparison of UBGG-3m and Urban Green Space-1m (UGS-1m) 5 . The green region represents the agreement between UBGG-3m and the other products in identifying urban trees. The yellow region represents the urban trees underestimated by other products compared to UBGG-3m, while the blue region represents the area overestimated by other products compared to UBGG-3m.
Table 1 .
Information of the Planet satellite images used in this study.
Table 4 .
Comparison of classification accuracy with state-of-the-art semantic segmentation networks (units: %). | 8,552.6 | 2024-03-04T00:00:00.000 | [
"Environmental Science",
"Geography",
"Computer Science"
] |
Quantum-Walk-Inspired Dynamic Adiabatic Local Search
We investigate the irreconcilability issue that arises when translating the search algorithm from the Continuous-Time Quantum Walk (CTQW) framework to the Adiabatic Quantum Computing (AQC) framework. For the AQC formulation to evolve along the same path as the CTQW, it requires a constant energy gap in the Hamiltonian throughout the AQC schedule. To resolve the constant gap issue, we modify the CTQW-inspired AQC catalyst Hamiltonian from an XZ operator to a Z oracle operator. Through simulation, we demonstrate that the total running time of the proposed AQC approach with the modified catalyst Hamiltonian remains optimal, matching that of the CTQW. Inspired by this solution, we further investigate adaptive scheduling for the catalyst Hamiltonian and its coefficient function in the adiabatic path of Grover-inspired AQC to improve the adiabatic local search.
I. INTRODUCTION
Quantum technologies have advanced dramatically in the past decade, both in theory and experiment. From the view of theoretical computational complexity, Shor's factoring algorithm [1] and Grover's search algorithm [2] are well known for their improvements over the best possible classical algorithms designed for the same purpose. From the perspective of universal computational models, Quantum Walks (QWs) have become a prominent model of quantum computation due to their direct relationship to the physics of the quantum system [3,4]. It has been shown that the QW computational framework is universal for quantum computation [5,6], and many algorithms are now presented directly in the quantum walk formulation rather than through a circuit model or other abstracted method [3,7]. Besides being search algorithms, CTQWs have been applied in fields such as quantum transport [8][9][10][11], state transfer [12,13], link prediction in complex networks [14] and the creation of Bell pairs in a random network [15]. Some other well-known universal models include the quantum circuit model [16][17][18], topological quantum computation [19], adiabatic quantum computation (AQC) [20], resonant-transition-based quantum computation [21] and measurement-based quantum computation [22][23][24][25]. Each model may have its own bottleneck. Investigating the relationships among the frameworks helps identify the violations that arise when mapping one framework to another, as well as potential solutions. By studying the mapping, one can extend the techniques from one framework to another for some potential speedup [26].
In this work we investigate the irreconcilability issue that arises when translating the search algorithm from the Continuous-Time Quantum Walk (CTQW) framework to the Adiabatic Quantum Computing (AQC) framework as first pointed out by Wong and Meyer [27].
This irreconcilability issue can be described as follows. One first notes that the CTQW is the unique continuous-time quantum walk formulation of Grover's discrete search algorithm. While the CTQW search evolves the initial unbiased (equal amplitude) state to the unknown (marked) state in time of order T ∼ O(√N) (where N is the size of the search space), it does not follow the same evolution path (on the Bloch sphere) as that of Grover's algorithm. The uniqueness of the CTQW formulation stems from the fact that the unknown marked state only acquires a (time-dependent) phase from the oracle operation. Most importantly, the marked state does not undergo evolution, and thus the CTQW effectively employs a dichotomous "Yes/No" oracle, for which the discrete Grover's algorithm has been proven to be optimal.
The AQC formulation of the search algorithm with a non-uniform adiabatic evolution schedule [28] also finds the marked state in time T ∼ O(√N) while at the same time following the same path as Grover's algorithm. Thus, if one investigates what adiabatic Hamiltonian gives rise to the same evolution path as the CTQW formulation, one finds [27] that the AQC formulation introduces an extra "catalyst" Hamiltonian, which introduces structure beyond the standard "Yes/No" oracle employed in the CTQW or discrete (Grover's) search algorithm. A scaled version of the AQC Hamiltonian leads to a constant energy gap, which implies that the marked state can be found in time T ∼ O(1). This discrepancy between the formulations of the two versions of a continuous-time search algorithm was termed the "irreconcilability (difference) issue" between CTQW and AQC by Wong and Meyer [27].
In this work we address the CTQW/AQC search algorithm irreconcilability issue by modifying the constant energy gap Hamiltonian of the AQC formulation. Our contribution is twofold. We first adapt the result from the mapping of CTQW to AQC by selecting the regular oracle Z operator as the catalyst Hamiltonian and explore an alternative coefficient function for the catalyst Hamiltonian in order to avoid the
irreconcilability issue. Through simulation, the modified model provides optimal results in terms of the time required for the search. We then apply this modification to adiabatic local search by adding an additional sluggish parameter δ which delineates the width of the adiabatic run-time schedule over which the catalyst Hamiltonian effectively acts (i.e. the "slowdown" region in the vicinity of the system's smallest energy gap ∆). The sluggish parameter tracks the increase of the running time t = t(s) with respect to the schedule parameter 0 ≤ s ≤ 1, where δ = |d²t/ds²|. The catalyst is employed when δ ≥ δ_0 to facilitate the process, where we have found that the threshold value δ_0 = 64 provides good results.
The outline of this work is as follows. The background information regarding CTQW and AQC is given in section II, and the translation of CTQW to AQC is described in section III. The irreconcilability issue that occurs during the translation is explained in section III A and our proposed solution is provided in section III B. The mapping of Grover search to AQC as an adiabatic local search is summarized in section IV. We propose and describe the catalyst Hamiltonian mechanism in section IV A 2 and determine the sluggish interval where it is employed. We further explore three coefficient functions of the catalyst Hamiltonian in section IV A 3. The simulation results for the proposed modifications are discussed in section V. Finally, our conclusions are given in section VI.
II. BACKGROUND A. Continuous-Time Quantum Walk
Given a graph G = (V, E), where V is the set of vertices and E is the set of edges, the CTQW on G is defined as follows. Let A be the adjacency matrix of G, the |V| × |V| matrix defined component-wise as A_ij = 1 if (i, j) ∈ E and A_ij = 0 otherwise, where i, j ∈ V. A CTQW starts with a uniform superposition state |ψ_0⟩ in the space spanned by the nodes in V and evolves according to the Schrödinger equation with Hamiltonian A. After time t, the output state is thus |ψ(t)⟩ = e^{-iAt}|ψ_0⟩. The probability that the walker is in the state |τ⟩ at time t is given by |⟨τ|e^{-iAt}|ψ_0⟩|². To find the marked node |ω⟩ starting from an initial state |ψ_0⟩ via a CTQW, one has to maximize the success probability while minimizing the time t. For instance, initially at time t = 0, the success probability is |⟨ω|ψ_0⟩|² = 1/N. The success probability is extremely small when the search space |V| = N is large and |ψ_0⟩ is a uniform superposition state.
When applied to spatial search, the purpose of a CTQW is to find a marked basis state |ω⟩ [29,30]. For this purpose, the CTQW starts with the initial state |ψ_0⟩ = (1/√N) Σ_{i=1}^{N} |i⟩ and evolves according to the Hamiltonian H = -γA - |ω⟩⟨ω| [31], where γ is the coupling factor between connected nodes. The value of γ has to be determined based on the graph structure such that the quadratic speedup of the CTQW can be preserved. Interested readers can refer to [29,31] for more details.
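As an illustration of the above formulation, the following minimal sketch (ours, not code from the paper) simulates a CTQW search on the complete graph K_N with the Hamiltonian H = -γA - |ω⟩⟨ω| and the standard complete-graph choice γ = 1/N, and evaluates the success probability at t = (π/2)√N, where it is expected to approach 1.

```python
import numpy as np
from scipy.linalg import expm

def ctqw_search_success(N, t, gamma=None, marked=0):
    """Success probability of a CTQW search on the complete graph K_N,
    using H = -gamma * A - |w><w| with the standard choice gamma = 1/N."""
    if gamma is None:
        gamma = 1.0 / N
    A = np.ones((N, N)) - np.eye(N)          # adjacency matrix of K_N
    H = -gamma * A
    H[marked, marked] -= 1.0                 # oracle term -|w><w|
    psi0 = np.full(N, 1.0 / np.sqrt(N))      # uniform superposition |psi_0>
    psi_t = expm(-1j * H * t) @ psi0         # Schroedinger evolution
    return abs(psi_t[marked]) ** 2

N = 64
t_opt = (np.pi / 2) * np.sqrt(N)             # O(sqrt(N)) running time
print(ctqw_search_success(N, t_opt))         # close to 1 on the complete graph
```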
B. Adiabatic Quantum Computing
In the AQC model, H_0 is the initial Hamiltonian and H_f is the final Hamiltonian. The evolution path for the time-dependent Hamiltonian is H(s) = (1 - s)H_0 + sH_f, where 0 ≤ s ≤ 1 is a schedule function of the time t. For convenience, we denote s as s(t) and use them interchangeably. The variable s increases slowly enough such that the initial ground state evolves and remains the instantaneous ground state of the system. More specifically, H(s(t))|λ_{k,t}⟩ = λ_{k,t}|λ_{k,t}⟩, where λ_{k,t} is the eigenvalue corresponding to the eigenstate |λ_{k,t}⟩ at time t and k labels the k-th excited eigenstate. The minimal eigenvalue gap is defined as g_min = min_{0≤t≤T_a} (λ_{1,t} - λ_{0,t}), where T_a is the total evolution time of the AQC. Let |ψ(T_a)⟩ be the state of the system at time T_a evolving under the Hamiltonian H(s(t)) from the ground state |λ_{0,0}⟩ at time t = 0. The adiabatic theorem [32,33] states that the final state |ψ(T_a)⟩ is ε-close to the true ground state |λ_{0,T_a}⟩, i.e. |⟨λ_{0,T_a}|ψ(T_a)⟩| ≥ 1 - ε, provided that the total evolution time T_a is sufficiently large compared to 1/g_min².
There are several variations of AQC that improve its performance. The variations are based on modifying the initial and final Hamiltonians [34,35] or on adding a catalyst Hamiltonian H_e [34], which is turned on/off at the beginning/end of the adiabatic evolution. In this work, we are interested in the catalyst approach. A conventional catalyst-assisted AQC path is expressed as H(s) = (1 - s)H_0 + f(s)H_e + sH_f, where s(t) = sin²(t/√N) and f(s) is the catalyst coefficient function; the explicit form in the {|ω⟩, |r⟩} basis is given in [27].
A. The Irreconcilability Issue: Constant Gap Catalyst Hamiltonian and Small Norm
The main concerns raised by Eqn. (12) are twofold. The first issue is the scaling factor (s(1−s))^{1/4}(2/N)^{1/4} of H(s). The adiabatic theorem [36] states that the system achieves a fidelity of 1 − ε to the target state, provided that max_t |⟨dH/dt⟩_{0,1}| / min_t [E_1(t) − E_0(t)]² ≤ ε. Here ⟨dH/dt⟩_{0,1} are the matrix elements of dH/dt between the two corresponding eigenstates, and E_0(t) and E_1(t) are the ground energy and the first excited energy of the system at time t. Given the H(s) in Eqn. (12), one might conclude that a factor of O((1/N)^{1/4}) significantly reduces the time required to achieve 1 − ε precision. This is misleading, as the g_min of H(s) also carries the same factor. The second issue is that the catalyst H_e provides power greater than a typical Yes/No oracle, as it maps non-solution states to the solution state and the solution state to non-solution states. Provided that we initially start with a superposition state with amplitude √((N−1)/N) on the non-solution component, it takes time O(1) for this catalyst to drive the initial (unbiased, equal amplitude) state to the solution state. In the following we will relax this constraint by using a normal oracle. For the rest of the paper, let us simply treat ε as some small negligible constant.
B. Modified CTQW-Inspired Adiabatic Search
In Eqn. (12), the following parameters were computed during the mapping [27]: • the scaling factor (s(1−s))^{1/4}(2/N)^{1/4}; • the coefficient function of H_e, given by s(1 − s).
In [37] the cost of the adiabatic algorithm was defined to be the dimensionless quantity (using ℏ = 1) cost = t_f · max_s ‖H(s)‖, where t_f is the running time. To prevent the cost from being manipulated to be arbitrarily small by changing the time units, or from distorting the scaling of the algorithm by multiplying the Hamiltonians by some size-dependent factor, as in the irreconcilability concern [27], the norm of H(s) should be fixed to some constant, such as 1.
To address the irreconcilability issue, the scaling factor is dropped and the catalyst Hamiltonian H e is modified.
Since H_e = (2√(N−1)/N) iXZ in the {|ω⟩, |r⟩} basis provides more power than a standard oracle, for our modification we remove the imaginary unit i and the X operator. The operator Z alone behaves as a conventional "Yes/No" oracle in the {|ω⟩, |r⟩} basis. Let M = 2√(N−1)/N and choose the modified adiabatic path H_m(s) as H_m(s) = (1 − s)H_0 + M f_z(s) Z + sH_f, where f_z(s) is our chosen s-dependent coefficient for the catalyst Z. In addition to f_z(s) = s(1 − s) that was used
in [27], functions that reach their maximum at s = 1/2 are good candidates for f_z(s), such as f_z(s) = sin(sπ)/2. The use of the factor 1/2 on the sine function is to offset the magnitude M and to bound the norm of H_e as described in Eqn. (16).
IV. GROVER SEARCH TO ADIABATIC LOCAL SEARCH MAPPING
In this section we consider the mapping of Grover's algorithm to an adiabatic search. The initial driving Hamiltonian H_0 and the final Hamiltonian H_f are, in the {|ω⟩, |r⟩} basis, H_0 = I − |ψ_0⟩⟨ψ_0| and H_f = I − |ω⟩⟨ω|. The adiabatic path [27,28] in the {|ω⟩, |r⟩} basis is then H(s) = (1 − s)H_0 + sH_f. Instead of employing a linear evolution of s(t), Eqn. (20) adapts the evolution ds/dt to the local adiabaticity condition [28], ds/dt ∝ g²(t), where g(t) is the energy gap of the system at time t. The running time t is then a function of the schedule s, t(s) = ∫_0^s ds′/g²(s′). The relation between the schedule s and the running time t is shown in Figure 1. It is clear that the system evolves quickly when the gap is large (s away from 1/2) and slowly when the gap is small (s ≈ 1/2) [28]. In this example, the sluggish period is s ∈ [0.4, 0.6]. For completeness, we provide a formal proof of the closed form of the squared gap function g²(t) (second order in s) with respect to the schedule s in Appendix A.
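To make the local-adiabaticity condition concrete, the short sketch below (ours, not from the paper) numerically integrates dt/ds = 1/g²(s) for the Grover-type path, using the standard complete-graph squared gap g²(s) = 1 − 4((N−1)/N)s(1−s); the total time reproduces the O(√N) scaling and the slow-down around s = 1/2 seen in Figure 1.

```python
import numpy as np

def gap_squared(s, N):
    """Squared gap of the Grover-type adiabatic path on the complete graph."""
    return 1.0 - 4.0 * (N - 1) / N * s * (1.0 - s)

def local_adiabatic_schedule(N, n_pts=10_001):
    """Integrate dt/ds = 1/g^2(s) (trapezoid rule); returns s and t(s)."""
    s = np.linspace(0.0, 1.0, n_pts)
    dtds = 1.0 / gap_squared(s, N)
    t = np.concatenate(([0.0],
                        np.cumsum(0.5 * (dtds[1:] + dtds[:-1]) * np.diff(s))))
    return s, t

for N in (64, 256, 1024):
    s, t = local_adiabatic_schedule(N)
    print(N, round(t[-1], 2), round(t[-1] / np.sqrt(N), 3))  # total time ~ O(sqrt(N))
```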
A. Adaptive Scheduling
For a fixed schedule of an adiabatic path, the schedule s moves fast when the eigen-energy gap is large, and slowly when the gap is small. We wish to employ the catalyst Hamiltonian H_e to amplify the eigen-energy gap during the "slow down" period such that the total time needed to pass through the sluggish period is reduced (s ∈ [0.4, 0.6] in Fig. 1).
FIG. 1.
Schedule s in terms of time t with N = 64 in adiabatic local search, as observed in [28].
Schedule Dependent Gap Function
In this section, we consider employing gap-dependent scheduling functions. Let H_f be an arbitrary 2 × 2 Hermitian Hamiltonian and let the time-dependent Hamiltonian H(s) be the catalyst-assisted path of Eqn. (25). Operators σ_x and σ_z are chosen as catalyst Hamiltonians, and the entries of H_0, H_e and H_f are parameterized by given constants a, b, c, p, q, r. From the resulting matrix form of the time-dependent Hamiltonian, the schedule-dependent gap can be analytically computed (see Appendix B for a derivation). By using Eqn. (22), the total running time from s = s_strt to s = s_stp is thus T = ∫_{s_strt}^{s_stp} ds/g²(s), where 0 ≤ s_strt ≤ s_stp ≤ 1. In brief, the time spent during a certain period of the schedule can be obtained from the gap function. The gap function can be expressed via the entries of H_0, H_e, H_f, the schedule s, and the coefficient functions of the catalyst Hamiltonians.
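The following sketch (ours; the specific (a, b, c, p, q, r) parameterization of the paper is not reproduced) computes the schedule-dependent gap of a generic 2 × 2 path H(s) = (1 − s)H_0 + f(s)H_e + sH_f by direct diagonalization and integrates 1/g²(s) as in Eqn. (28). The Grover-type reduced Hamiltonians and a Z catalyst with coefficient (M/2)sin(πs) are used as an illustrative example; the sign convention of Z is an assumption.

```python
import numpy as np

def gap(H):
    """Energy gap |lambda_+ - lambda_-| of a 2x2 Hermitian matrix."""
    ev = np.linalg.eigvalsh(H)
    return ev[-1] - ev[0]

def running_time(H0, He, Hf, f_cat, s_strt=0.0, s_stp=1.0, n_pts=5_001):
    """T = integral of 1/g^2(s) ds for H(s) = (1-s)H0 + f_cat(s)He + sHf."""
    s = np.linspace(s_strt, s_stp, n_pts)
    g2 = np.array([gap((1 - si) * H0 + f_cat(si) * He + si * Hf) ** 2 for si in s])
    return np.trapz(1.0 / g2, s)

# Grover-type reduced Hamiltonians in the {|w>, |r>} basis (N = 64) with a Z catalyst.
N = 64
w = np.array([1.0, 0.0])
psi0 = np.array([np.sqrt(1.0 / N), np.sqrt((N - 1.0) / N)])   # uniform state in this basis
H0 = np.eye(2) - np.outer(psi0, psi0)                          # I - |psi0><psi0|
Hf = np.eye(2) - np.outer(w, w)                                # I - |w><w|
Z = np.diag([1.0, -1.0])                                       # oracle-like Z (illustrative sign)
M = 2.0 * np.sqrt(N - 1.0) / N
print(running_time(H0, Z, Hf, lambda s: 0.0))                          # baseline path
print(running_time(H0, Z, Hf, lambda s: 0.5 * M * np.sin(np.pi * s)))  # with Z catalyst
```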
Determining the Sluggish Interval for the Catalyst Hamiltonian
By using the condition f(s) = dt/ds = 1/g²(s) (see Appendix A), the region where the gap quickly decreases or increases corresponds to the sluggish period of s. That is the portion of the schedule s where the catalyst should be employed. The region where |d²t/ds²| ≥ δ_0 is the sluggish period. The threshold value δ_0 = 64 was chosen because if we chose a threshold proportional to N, then as N increases exponentially, the quantity d²t/ds² might never reach the N-dependent threshold within the adiabatic evolution schedule 0 ≤ s ≤ 1. By using this threshold, the starting point s_strt^slug and the stopping point s_stp^slug used to mark the sluggish period can be identified.
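A numerical sketch of this criterion (ours, using the complete-graph gap as an example) is given below: it evaluates f(s) = dt/ds = 1/g²(s) on a grid, approximates d²t/ds² by finite differences, and returns the first and last schedule points where the threshold δ_0 = 64 is exceeded.

```python
import numpy as np

def sluggish_interval(g2, delta0=64.0, n_pts=20_001):
    """Return [s_strt, s_stp] where |d^2 t / ds^2| >= delta0, given g^2(s)."""
    s = np.linspace(0.0, 1.0, n_pts)
    f = 1.0 / g2(s)                      # f(s) = dt/ds
    d2t = np.gradient(f, s)              # d^2 t / ds^2 = df/ds (finite differences)
    mask = np.abs(d2t) >= delta0
    if not mask.any():
        return None
    return float(s[mask][0]), float(s[mask][-1])

N = 64
g2 = lambda s: 1.0 - 4.0 * (N - 1) / N * s * (1.0 - s)   # complete-graph example
print(sluggish_interval(g2))   # a window centred on s = 1/2, cf. the sluggish region in Fig. 1
```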
Using the example in [28], we re-plot t as a function of s, t = t(s), and f(s) = dt/ds in Figures 2-3 with N = 64.
Catalyst Coefficient Functions
As discussed in section III B, we are interested in the H_e = Z case in Eqn. (17) and its coefficient function f_z(s). Three coefficient functions of the catalyst Hamiltonian Z are proposed, denoted f_z^sine, f_z^ss and f_z^grid in Eqn. (29), where 0 ≤ a, b ≤ 1 under the constraint a² + b² = 1. In the grid search, a is increased from 0 to 1 in steps of 0.1 in each iteration. From the resulting pairs (a, b), we find the values of a and b that give the shortest sluggish time interval.
V. EXPERIMENT & RESULT
For our simulations we used (Wolfram) Mathematica (version 12.3, run on a Linux Ubuntu 20.04 LTS laptop). The code is available upon request. The running time is based on Eqn. (28). The size N (number of nodes) ranges over 2^5, 2^6, . . ., 2^25. We observe the corresponding running time and sluggish time for each of the proposed models. The result of the original adiabatic local search, which used N = 64 [28], serves as the baseline for comparison. In this work, we generalize the setting to an arbitrary size N.
Given an arbitrary complete graph of size N with coupling factor 1/N, one can compute the entries in the reduced Hamiltonians H_0 and H_f in the {|ω⟩, |r⟩} basis. The values of the variables a, b, c, p, q and r discussed in section IV A 1 can be obtained from Eqn. (14) for the CTQW case, and from Eqn. (19) for the adiabatic local search. It is worth noticing that the ground state energy is −1 in the CTQW case, but 0 in the adiabatic local search case. Based on the adiabatic path of Eqn. (25) and the gap function in Eqn. (27), with a given schedule s and coefficient function f_z(s) for σ_z, we perform the simulation with the running time computed from Eqn. (28).
A. Modified CTQW-Inspired Adiabatic Search Simulation
This experiment aimed to demonstrate that the modified adiabatic paths addressing the irreconcilability issue remain optimal. The three proposed modifications we explored are as follows: • H_org(s) takes Eqn. (12) and drops the scaling factor, as explained in section III B. The adiabatic path is H_org(s) = (1 − s)H_0 + M s(1 − s) iXZ + sH_f. • H_m1(s) replaces the computed catalyst Hamiltonian H_e with an ordinary Z oracle operator and keeps the magnitude M. This was used to address the constant gap H_e irreconcilability issue.
The simulations confirm that the catalyst Hamiltonian H_e = M iXZ in H_org(s) indeed yields a constant gap Hamiltonian. This also illustrates the irreconcilability issue as suggested in [27]. From the simulations we can conclude that both H_m1(s) and H_m2(s) perform optimally with respect to running time, namely T ∼ O(√N), similar to the original adiabatic local search but with a minor constant factor which can be ignored in the Big O notation. As the simulations suggest, both modified CTQW-inspired approaches outperform the original adiabatic local search. When N ≤ 2^21, H_m2(s) outperforms H_m1(s). When the problem size N is larger than 2^21, H_m1(s) is a better choice than H_m2(s).
B. Adaptive Adiabatic Local Search Simulation
With Various Coefficient Functions. In the previous section V A, the proposed modifications are optimal in the sense that T ∼ O(√N) up to a minor constant factor. For further improvement, the adaptive scheduling scheme is applied. The adiabatic path to be explored is therefore H(s) = (1 − s)H_0 + f(s) Z + sH_f, where f(s) ∈ {f_z^sine, f_z^ss, f_z^grid} as given in Eqn. (29). The catalyst Hamiltonian Z operator is only employed during the sluggish period and hence f(s) = 0 when s ∉ [s_strt^slug, s_stp^slug]. H_0 and H_f are based on Eqn. (19). As the catalyst is only employed within the sluggish period, to compare the performance of each proposed modification one only needs to compute the running time within this period. In Figure 5, f_z^ss provides the smallest reduction of the sluggish time, while f_z^sine and f_z^grid provide significant improvements. The difference in the runtimes becomes significant for N ≥ 2^15.
VI. CONCLUSION
In this work, we investigated different Hamiltonians for resolving the irreconcilability issue [27] when mapping the CTQW search algorithm to AQC. We modified the time-dependent Hamiltonian by (1) removing the original CTQW scaling factor (s(1−s))^{1/4}(2/N)^{1/4}, and (2) replacing iXZ → Z in the original catalyst Hamiltonian H_e obtained from mapping CTQW to AQC. These modifications were made in order to resolve the irreconcilability issue. We further optimized the schedule s of the CTQW-inspired adiabatic path by an adaptive scheduling procedure.
The modified CTQW-inspired adiabatic search simulation experiment demonstrates that H_e without any modification indeed leads to a constant total running time, regardless of the search space size N. This result echoes the irreconcilability issue stated in [27]. On the other hand, the modified CTQW-inspired adiabatic path with catalyst Hamiltonian coefficient sin(sπ)/2 behaves similarly to the optimal adiabatic local search. Furthermore, the modifications are optimal and outperform the original adiabatic local search.
Lastly, in the adaptive adiabatic local search simulation with various coefficient functions, we further investigated how to reduce the time wasted in the sluggish period of an adiabatic local search path. As our numerical experiments show, the functions f_z^sine(s) and f_z^grid(s) provide significant improvement and both outperform the original adiabatic local search. Even though the grid search f_z^grid(s) approach could further reduce the length of the sluggish ("slow down") interval, the benefit was offset by the additional implementation cost incurred over that of the other two methods.
FIG. 2 .
FIG. 2. Time t as a function of schedule s for adiabatic local search with N = 64.
• H_m2(s) uses f_z(s) = sin(sπ)/2 as the coefficient function for the catalyst Hamiltonian Z. The adiabatic path is H_m2(s) = (1 − s)H_0 + (M/2) sin(sπ) Z + sH_f. For the above three models, simulations were run on Hamiltonians of size N ∈ [2^5, 2^6, . . ., 2^25]. In the following figures, the abscissa is log_2 N while the ordinate is the required total running time T. The time is computed based on Eqn. (28). As the dimension of the Hamiltonian increases, the differences in running times for the three models considered are magnified. The simulation results are shown in Figure 4. It is clear that H_org is a constant-time scheme, as its running time does not scale as the size N increases.
In Figure 6, both f_z^sine and f_z^grid reduce the sluggish time by over 75% in comparison to the original adiabatic local search when N reaches 2^25. f_z^sine gradually outperforms the original adiabatic local search after N = 2^10 and remains almost as good as f_z^grid up to N = 2^23. When N = 2^25, the sluggish time of f_z^sine is only about twice that of f_z^grid.
FIG. 5 .
FIG. 5. The case N ∈ [2^5, 2^25]: time spent during the sluggish period for adiabatic paths with the (f_z^ss, f_z^sine, f_z^grid) coefficient functions, where the original adiabatic local search serves as the baseline.
σ_z. They should be good candidates for the catalyst perturbation in the AQC path. Similarly, if the Hamiltonian has an imaginary part in the off-diagonal entries, |λ_+ − λ_-| = √((α − β)² + 4(γ² + d²)). (B4) The Hamiltonian H (with no imaginary entries) can be expressed in terms of Pauli matrices. The coupling factor [27] is set to 1/N and |ψ_0⟩ is the uniform superposition of all states in the search space. The state |r⟩ is the uniform superposition of the non-solution states, and |ω⟩ is the solution state. Treating the state evolving in the CTQW system as the time-dependent ground state of H(s), one constructs H(s) in the {|ω⟩, |r⟩} basis as in [27]. | 5,590.2 | 2022-04-21T00:00:00.000 | [
"Physics",
"Computer Science"
] |
Features of the interaction of localized free-stream disturbances with the straight and swept wing boundary layer
Under the conditions of a model experiment, the features of the appearance and spatial development of boundary layer disturbances – wave packets (precursors) and longitudinal localized structures – were studied. Localized disturbances were created artificially through local exposure of an external source to the boundary layer of a straight and a swept wing at a low turbulence level of the incoming flow. Wave packets formed near the fronts of the longitudinal localized structures. The studies were carried out in a subsonic low-turbulence wind tunnel. Flow perturbations were recorded using a constant-temperature hot-wire anemometer.
Introduction
Laminar-turbulent transition in the boundary layer at a moderate or elevated level of free-stream turbulence is associated with disturbances created by external turbulence. The effect of free stream disturbances on the boundary layer leads to the formation of longitudinal localized structures or streaks, which are areas with an excess and deficit of longitudinal velocity [1,2]. Longitudinal structures provide the basic conditions for the development of high-frequency wave disturbances, such as secondary instability and wave packets [3]. Under favorable conditions, wave disturbances turn into turbulent spots. According to recent experimental studies [4][5][6], a localized effect on the boundary layer can lead to the appearance of wave packets along with banded structures.
In this work, under controlled conditions, we study the occurrence and spatial development of perturbations of the boundary layer -wave packets (precursors) and longitudinal banded structures. Wave packets are formed near the fronts of longitudinal localized structures due to a sharp local change in the longitudinal flow velocity inside the boundary layer.
Experimental technique
The studies were carried out in the subsonic low-turbulence wind tunnel MT-324 of ITAM SB RAS. The free-stream velocity was U∞ = 6.8 m/s. The free-stream turbulence level Tu was 0.18% of U∞. Localized disturbances were created artificially by local exposure of the source to the boundary layer at a low turbulence level of the incoming flow. The source of disturbances was located in the incoming flow near the model, see figure 1. The source of disturbances was a tube with an inner diameter of 2.5 mm through which air flowed out in a short pulse of 200 or 300 ms duration. Pulsed air blowing was provided by an electromagnetic valve. Pulses were repeated every 0.5 seconds. The method of introducing controlled perturbations was used, in which the moment of introducing disturbances and the moment of their registration always coincide, which makes possible a detailed study of the characteristics of the generated disturbances.
Straight wing
Initially, the experiment was carried out on a straight wing model with a chord C1 = 290 mm, installed in the closed wind tunnel test section at zero angle of attack. Pulsed air was blown out under a small excess pressure from the tube and interacted with the boundary layer of the model in the region of the leading edge, which has a large curvature radius of about 7 mm. Figure 2 presents pictures of the hot-wire anemometric visualization of this process. The cross sections of the disturbances are shown in the region of their middle in the Y-Z plane for different longitudinal coordinates X. The first cross section at X/C1 = -0.04 shows a disturbance in the incoming flow. The transverse size of the air jet flowing out of the tube remains almost unchanged (about 2.7 mm), while slightly compressing in the vertical direction, apparently due to interaction with the oncoming main flow. Impinging on the leading edge of the model, the jet core generates a disturbance in the boundary layer, which has a much larger transverse dimension. The red region with an excess of velocity alone has a transverse dimension 4.6 times larger than the initial disturbance (12.5 and 2.7 mm, respectively). In this case, regions with a velocity defect (blue color) appear symmetrically on both sides. The disturbance generated on the model (including all regions of velocity defect and excess) has a transverse dimension 10 times larger than the boundary layer thickness δ. Here, δ can be estimated from the height of the disturbance (along the Y coordinate). Moving downstream, the disturbance slightly diffuses, in proportion to the increase of the boundary layer thickness, which was noted in a number of previous studies [2,3] when disturbances were introduced from the model wall [4,5]. In [2], at a similar incoming flow velocity, the transverse size of the disturbance in the boundary layer approximately corresponded to the initial one; however, the radius of the leading edge in that experiment was much smaller, 0.5-0.7 mm. It is likely that the air jet in the present experiment, interacting with a strongly blunted leading edge, behaves like an impinging jet, which is not observed in [2], where the leading edge is quite sharp. Below, the evolution of the disturbance in the Z-t plane is presented. Figure 3 shows typical visualization patterns for various coordinates along the flow direction. At X/C1 = 0.76 and further downstream, the formation and growth of the wave packet near the leading edge of the longitudinal localized disturbance are seen. At a low Reynolds number, in our case ReC1 = 131000, flow separation occurs in the rear part of the wing. As was shown earlier, the region of the unfavorable pressure gradient, and in particular the flow separation, strongly destabilizes the wave packet, which in the present experiment manifests itself in the region near the leading front of the localized disturbance. In the present experiment, the development of the wave packet led to the appearance of a Λ-structure. A similar result was observed upon introducing perturbations from the model surface [4].
Swept wing
A further experiment was carried out on a swept wing model with a sweep angle of 45 degrees and a chord of C2 = 410 mm, also set at zero angle of attack. Figure 4 shows the visualization of the interaction of the disturbance generated by the tube with the boundary layer of the swept wing. Near the model nose at X/C2 = 0.02, the disturbance shifts upward from the axis of symmetry by 10 mm. Moreover, the internal structure of the disturbance differs from the case of the straight wing. There are only two areas – a velocity defect and a velocity excess. The total disturbance width turned out to be smaller than for the straight wing (7.5 versus 12.5 mm). Due to the influence of the secondary flow directed along the leading edge of the wing, the perturbation becomes asymmetric. In the process of its evolution downstream, the disturbance shifts first down and then up. Its width also increases in proportion to the increase of the boundary layer thickness. Figure 5 shows the disturbance cross section along the vertical coordinate Y at X/C2 = 0.61. The velocity excess region is located above the velocity defect region, which means that, unlike for the straight wing (see figure 2), the secondary flow rotates the disturbance in the main flow direction. In the region of an unfavorable pressure gradient, at X/C2 = 0.85, a new region with a velocity excess is formed due to the secondary flow. Here, as in the case of the straight wing, a high-frequency disturbance (wave packet) emerges. The symmetry of the wave packet is also broken by the influence of the secondary flow, as was previously noted in [7].
The general view of the movement of the longitudinal localized disturbance along the wing, constructed on the basis of real-scale visualization patterns (figure 4), shows its S-shaped trajectory, see figure 6, which is consistent with the basic ideas about the flow structure on a swept wing [8]. A study of the dynamics of the downstream development of the generated disturbances on the straight and swept wings showed that the amplitude of the longitudinal localized disturbances decays. Moreover, their main characteristics coincide with those of the boundary layer disturbances generated by increased or high external turbulence [3]. The amplitude growth of the wave packets, which are also excited by the interaction of disturbances from the incoming flow with the boundary layer of the model, occurs in the region of an unfavorable pressure gradient and flow separation.
Conclusions
It was found that longitudinal localized disturbances arose in all considered cases. These disturbances were classified as longitudinal localized structures, identical in their characteristics to disturbances arising in the boundary layer under the influence of an increased level of free-stream turbulence. It was noted that the intensity of the longitudinal localized structures decreases in the downstream direction. High-frequency wave packets also appear due to the interaction of the artificial external flow disturbances with the boundary layer. Wave packets grow rapidly in the presence of an unfavorable pressure gradient and in the flow separation area. During the interaction of free-stream disturbances with the thin boundary layer in the region of the model nose, the disturbance expands in the transverse direction. The presence of a secondary flow on a swept wing strongly influences the structure of both the longitudinal localized structures and the wave packets. The longitudinal structures twist in the direction of the flow, and their motion path takes an S-shape. Wave packets become asymmetrical. | 2,108.4 | 2019-11-01T00:00:00.000 | [
"Engineering"
] |
A Collaborative Framework for Privacy Preserving Fuzzy Co-Clustering of Vertically Distributed Cooccurrence Matrices
In many real world data analysis tasks, it is expected that we can get much more useful knowledge by utilizing multiple databases stored in different organizations, such as cooperation groups, state organs, and allied countries. However, in many such organizations, they often hesitate to publish their databases because of privacy and security issues although they believe the advantages of collaborative analysis. This paper proposes a novel collaborative framework for utilizing vertically partitioned cooccurrence matrices in fuzzy co-cluster structure estimation, in which cooccurrence information among objects and items is separately stored in several sites. In order to utilize such distributed data sets without fear of information leaks, a privacy preserving procedure is introduced to fuzzy clustering for categorical multivariate data (FCCM). Withholding each element of cooccurrence matrices, only object memberships are shared by multiple sites and their (implicit) joint co-cluster structures are revealed through an iterative clustering process. Several experimental results demonstrate that collaborative analysis can contribute to revealing global intrinsic co-cluster structures of separate matrices rather than individual site-wise analysis. The novel framework makes it possible for many private and public organizations to share common data structural knowledge without fear of information leaks.
Introduction
Data mining is a powerful tool for many private and public organizations in supporting efficient decision making, and they have been utilizing various databases, which are independently and securely stored in each organization. However, it is often quite expensive or impossible for each organization to collect enough data on its own, and many analysts believe that much more useful knowledge can be obtained by utilizing multiple databases stored in different organizations. In such collaborative data analysis, a significant problem is the privacy issue. For example, in many corporations, customer segmentation by clustering is a fundamental approach in marketing, while their customer privacy must be securely protected and each data record, such as purchase histories and personal profiles, must not be published to other corporations or organizations. Similar situations are found in many other organizations, such as hospitals with clinical records and governments with military intelligence.
Privacy preserving data mining (PPDM) [1] is a fundamental approach for utilizing multiple databases including personal or sensitive information without fear of information leaks. A possible approach is a priori k-anonymization of databases for secure publication [2,3], but such anonymization can bring information losses. Another approach for utilizing all distributed information is to analyze the information without revealing each element. In k-means clustering, several secure processes for estimating cluster centers were proposed [4,5], in which the mean vector of each cluster is calculated with an encryption operation.
In this paper, a novel collaborative framework for utilizing vertically partitioned cooccurrence matrices in fuzzy co-cluster structure estimation is proposed, where cooccurrence information among objects and items is separately stored in several sites. In vertically distributed databases, it is assumed that all sites share common objects but the objects are characterized by different, independent items in each site. The goal is to reveal the global co-cluster structures buried in the whole set of separate databases without publishing each element of the individual databases to other sites.
The remaining parts of this paper are organized as follows: Section 2 gives a brief review on related works and Section 3 shows their problems and possible solutions.Section 4 provides explanations on the conventional fuzzy co-clustering model and Section 5 proposes a novel collaborative framework for applying fuzzy co-clustering considering privacy issues.In Section 6, several experimental results demonstrate that collaborative analysis can contribute to revealing global intrinsic co-cluster structures of separate matrices rather than individual site-wise analysis.Finally, a summary conclusion is given in Section 7.
Background
Co-clustering is a fundamental technique for summarizing mutual cooccurrence information among objects and items.For example, in document clustering, mutual cooccurrence information of documents and keywords are utilized for revealing intrinsic document clusters with their keywords summaries.In purchase history analysis, mutual connections among customers and their promising products are investigated considering purchase preferences.Co-clustering provides pairwise cluster structures among objects and items and has been widely investigated in both probabilistic [6] and heuristic contexts [7].In this paper, fuzzy clustering approaches are focused on.
Fuzzy clustering has been proved to have many advantages over hard clustering from such viewpoints as noise and initialization sensitivity. Fuzzy variants of co-clustering have also been demonstrated to be useful in such applications as document analysis [8] and collaborative filtering [9,10]. The goal of fuzzy co-clustering is to simultaneously estimate memberships of both objects and items from a cooccurrence information matrix. For example, in document analysis, each document (object) is characterized by several keywords (items) with their appearance frequencies (degrees of cooccurrence), and the goal is to extract document-keyword clusters with their fuzzy memberships for analyzing their contents.
In order to analyze distributed databases in k-means-type clustering, several secure processes for estimating cluster centers were proposed [4,5], in which the mean vector of each cluster is calculated with an encryption operation. However, in fuzzy co-clustering, the clustering criterion of cluster aggregation degrees is defined without cluster centers and the conventional secure framework cannot be adopted. Then, a novel secure mechanism is needed, and the main problems to be solved are summarized in the next section.
Problems and Solution
In the k-means-type secure clustering model for vertically distributed data [4,5], multiple sites share common objects, such as customers and patients, while having their own vector observations only, such as customer profiles of their own stores and clinical records of their own hospitals. In order to reveal the intrinsic object clusters without publishing each observation, each coordinate of the cluster centers is separately calculated in each site and the derived coordinates are shared by all sites.
On the other hand, fuzzy co-clustering does not use cluster centers as cluster prototypes and utilizes two types of fuzzy memberships only. Then, the conventional secure framework for k-means-type clustering cannot be adopted, and a secure process for calculating the fuzzy memberships must be developed.
In the following, this paper proposes a novel framework for calculating fuzzy memberships in fuzzy co-clustering of vertically distributed cooccurrence matrices, following a brief review of the conventional fuzzy co-clustering models. In order to calculate object memberships, the sum of products of item memberships and cooccurrence observations is needed, and vice versa. In the proposed secure process, this sum is securely computed through an encryption operation, in which the sum can be calculated while concealing each individual value.
The novel framework is constructed in the FCCM context only, which is the basic model of fuzzy co-clustering.However, it is easily expected that a similar extension is directly applicable to the other FCCM variants without discussions because all the FCCM variants are based on the FCCM updating process.
Methodology of Fuzzy Co-Clustering
Assume that we have a cooccurrence matrix R = {r_ij} on objects i = 1, . . ., n and items j = 1, . . ., m, in which r_ij represents the degree of cooccurrence of item j with object i. The goal of co-clustering is to simultaneously partition objects and items into C co-clusters by estimating two types of fuzzy memberships. Object partitions are represented by object memberships u_ci, which is the membership degree of object i to cluster c and is forced to be exclusive in the same way as in FCM, such that Σ_{c=1}^{C} u_ci = 1. On the other hand, in order to avoid trivial solutions, item partitions are represented by item memberships w_cj, which are mainly responsible for representing the mutual typicalities in each cluster, such that Σ_{j=1}^{m} w_cj = 1. Oh et al. [11] proposed the FCM-type co-clustering model, which is called FCCM, by modifying the FCM algorithm for handling cooccurrence information, where the cluster aggregation degree of each cluster, Σ_{i=1}^{n} Σ_{j=1}^{m} u_ci w_cj r_ij, is maximized. This aggregation degree measures how strongly objects and items aggregate in cluster c, such that it becomes larger when mutually familiar objects and items having a large r_ij simultaneously have large memberships in the cluster. Here, this aggregation degree is only suited to hard partitions because the term is a linear function with respect to both u_ci and w_cj, so that the optimum always has u_ci ∈ {0, 1} and w_cj ∈ {0, 1}. Then, in order to derive fuzzy memberships u_ci ∈ [0, 1] and w_cj ∈ [0, 1], the aggregation measure must be nonlinearized.
In FCCM, the entropy-based fuzzification method [13,14] was adopted instead of the standard approach of FCM, because the exponential weight of FCM can work only in the minimization framework of positive objective functions. The fuzzification penalties λ_u and λ_w tune the degree of fuzziness of the object and item memberships, where a larger value brings fuzzier partitions while a smaller value brings crisper partitions.
The clustering algorithm is an iterative process of updating u_ci and w_cj using the following rules: u_ci = exp(λ_u^{-1} Σ_j w_cj r_ij) / Σ_{l=1}^{C} exp(λ_u^{-1} Σ_j w_lj r_ij) (2) and w_cj = exp(λ_w^{-1} Σ_i u_ci r_ij) / Σ_{l=1}^{m} exp(λ_w^{-1} Σ_i u_ci r_il) (3). This FCCM process was also reconstructed with other fuzzification mechanisms. For example, Fuzzy CoDoK [8] utilized quadric-term-based regularization [19] for avoiding calculation overflows. Honda et al. [15] adopted K-L information-based regularization [20] for handling unbalanced cluster sizes. As discussed in Section 3, these extended models generally follow the original FCCM procedure and have similar characteristics. So, in this paper, the novel collaborative framework is described in the FCCM context only.
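A compact sketch of this iteration is given below (ours, not from the paper); the names u, w, R, lam_u, lam_w and the exponential update forms correspond to our reading of rules (2) and (3) above.

```python
import numpy as np

def fccm(R, C, lam_u=0.001, lam_w=100.0, n_iter=100, seed=0):
    """Entropy-regularized FCCM-style co-clustering of a cooccurrence matrix.

    R      : (n, m) cooccurrence matrix
    C      : number of co-clusters
    u[c,i] : object memberships with sum_c u[c,i] = 1
    w[c,j] : item memberships with sum_j w[c,j] = 1
    """
    rng = np.random.default_rng(seed)
    n, m = R.shape
    w = rng.random((C, m))
    w /= w.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # rule (2): softmax over clusters of (sum_j w[c,j] R[i,j]) / lam_u
        a = (w @ R.T) / lam_u
        a -= a.max(axis=0, keepdims=True)            # numerical stabilization
        u = np.exp(a)
        u /= u.sum(axis=0, keepdims=True)
        # rule (3): softmax over items of (sum_i u[c,i] R[i,j]) / lam_w
        b = (u @ R) / lam_w
        b -= b.max(axis=1, keepdims=True)
        w = np.exp(b)
        w /= w.sum(axis=1, keepdims=True)
    return u, w
```

With a block-structured binary R, the maximum-membership assignment of u should recover the planted object clusters, mirroring the whole-data baseline used later in Section 6.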
Privacy Consideration in k-Means Clustering. When each object is characterized by a p-dimensional observation x_i = (x_i1, . . ., x_ip)^T, the k-means algorithm tries to minimize the within-cluster errors by iterating cluster center updating and nearest-prototype assignment. Let b_c = (b_c1, . . ., b_cp)^T be the center of cluster c. In the case of distributed databases, we must take care of privacy issues in either of the two phases by adopting such a technique as an encryption operation [5].
For vertically distributed databases, where the elements of x_i = (x_i1, . . ., x_ip)^T are separately stored in several sites, the distances between objects and cluster centers are calculated under the collaboration of all sites. Here, the clustering criterion is the sum of squared errors Σ_{j=1}^{p} |x_ij − b_cj|², and it should be calculated while concealing each site-wise partial sum from the other sites. Once we find the nearest-prototype assignment of each object, we can independently calculate the new b_c = (b_c1, . . ., b_cp)^T in each site by sharing the object membership information.
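The sketch below (ours) illustrates the idea for vertically partitioned k-means: each site contributes only its partial squared distance per cluster, and a toy additive-masking step stands in for the encryption operation of [4,5], so that only the total distance is revealed.

```python
import numpy as np

def secure_sum(partials, rng):
    """Toy stand-in for the encrypted sum: zero-sum masks hide each partial value."""
    masks = rng.normal(size=len(partials))
    masks -= masks.mean()                                  # masks now sum to zero
    return float(sum(p + m for p, m in zip(partials, masks)))

def assign_vertical(x_sites, centers_sites, rng):
    """Nearest-prototype assignment for one object whose coordinates are split
    across sites; each site only contributes its partial squared distance."""
    C = centers_sites[0].shape[0]
    dists = [secure_sum([np.sum((x - B[c]) ** 2)
                         for x, B in zip(x_sites, centers_sites)], rng)
             for c in range(C)]
    return int(np.argmin(dists))

rng = np.random.default_rng(1)
x_sites = [rng.random(3), rng.random(4), rng.random(2)]            # coordinate blocks per site
centers_sites = [rng.random((2, 3)), rng.random((2, 4)), rng.random((2, 2))]
print(assign_vertical(x_sites, centers_sites, rng))                # index of the nearest cluster
```

In the toy version the zero-sum masks are generated centrally; the real protocols replace this with homomorphic encryption so that no trusted dealer is needed.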
Although the above secure framework is also useful in many other k-means-type clustering algorithms such as FCM, it cannot be directly adopted in co-clustering, because co-clustering does not use cluster prototypes but considers two types of memberships.
In this paper, similar ideas are adopted to fuzzy coclustering tasks.
Fuzzy Co-Clustering with Privacy Consideration.
Assume that K sites k = 1, . . ., K share n common objects i = 1, . . ., n and have different cooccurrence information on different items, which is summarized into n × m_k matrices R^k = {r^k_ij}, where m_k is the number of items in site k and Σ_{k=1}^{K} m_k = m. Figure 1 shows a visual image of vertically distributed cooccurrence matrices. For example, we may have a group of corporations (or hospitals, countries, etc.), each of which has its independent customer purchase history R^k = {r^k_ij} (or patients' records, military intelligence, etc.). If we did not care about the privacy issues, the distributed matrices could be gathered into a full n × m matrix to be analyzed in a single process without information losses. Taking privacy preservation into account, however, each matrix should be processed in each site without broadcasting personal information, although the reliability of each site-wise co-cluster structure may not be satisfactory because of information losses. Then, the goal of collaborative fuzzy co-clustering analysis is to estimate object and item memberships as similar to the full-data case as possible by sharing object partition information without broadcasting the cooccurrence information R^k = {r^k_ij}. The object memberships u_ci to be shared by the sites are common and are defined in the same manner as in the conventional FCCM. On the other hand, the item memberships are somewhat different because they follow the within-cluster sum constraint. In this paper, it is assumed that item memberships are independently estimated in each site following the site-wise constraint Σ_{j=1}^{m_k} w^k_cj = 1, where w^k_cj is the item membership of item j in site k. Note that the item memberships should not be opened to other sites for privacy reasons.
In applying FCCM clustering to distributed cooccurrence matrices, (2) implies that each object membership depends on Σ_{j=1}^{m} w_cj r_ij, which is the sum of the site-wise partial sums Σ_j w^k_cj r^k_ij. Assume that we have at least three sites, that is, K > 2, and that two sites, 1 and K, are selected as representative sites. Figure 2 summarizes the process for the secure calculation of Σ_{k=1}^{K} Σ_j w^k_cj r^k_ij as follows.
Once the object memberships are broadcast to all sites, each item membership is calculated by (3) in each site using in-site information only, where the site-wise item memberships follow the site-wise normalization constraint Σ_{j=1}^{m_k} w^k_cj = 1. It should be noted that, in this algorithm, item memberships are independently estimated in each site under the assumption that each site does not have any information on the items that other sites deal with, such as the number of items and the degree of fuzziness of their item memberships. Additionally, the algorithm cannot exactly reconstruct the co-clustering result of the whole-data case, where all cooccurrence information is shared without care for privacy issues, even if we use the same parameter setting in all sites. This is because the piecewise constraint Σ_{j=1}^{m_k} w^k_cj = 1 is independently forced on the item memberships in each site, while we just consider Σ_{j=1}^{m} w_cj = 1 in the whole-data case.
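The sketch below (ours, with hypothetical variable names) combines the pieces into one collaborative update of the shared object memberships: each site computes its partial sums Σ_j w^k_cj r^k_ij locally, only masked totals are exchanged (a simplified stand-in for the encryption step of Figure 2), and the softmax rule (2) is then applied to the totals.

```python
import numpy as np

def collaborative_object_memberships(R_sites, w_sites, lam_u, rng):
    """One secure update of the shared object memberships u[c, i].

    R_sites[k] : (n, m_k) cooccurrence block held by site k (never broadcast)
    w_sites[k] : (C, m_k) item memberships estimated locally at site k
    Only masked totals of sum_k sum_j w_sites[k][c, j] * R_sites[k][i, j] are
    exchanged; the zero-sum masks play the role of the encryption step (Fig. 2).
    """
    partials = [w @ Rk.T for w, Rk in zip(w_sites, R_sites)]   # site-wise (C, n) blocks
    masks = [rng.normal(size=partials[0].shape) for _ in partials]
    masks[-1] = masks[-1] - sum(masks)                          # masks now sum to zero
    totals = sum(p + m for p, m in zip(partials, masks))        # only masked values shared
    a = totals / lam_u
    a -= a.max(axis=0, keepdims=True)                           # numerical stabilization
    u = np.exp(a)
    return u / u.sum(axis=0, keepdims=True)                     # rule (2): sum_c u[c,i] = 1
```

Item memberships w_sites[k] are then refreshed locally in each site by rule (3), so the raw blocks R_sites[k] never leave their sites.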
Numerical Experiments
In this section, three experimental results are shown for demonstrating the characteristics of the proposed algorithm.Section 6.1 demonstrates the basic features of the proposed framework with a simple data set and Section 6.2 discusses the applicability to more realistic situations with a data set having unbalanced cluster structure.Then, an applicational experiment is shown in Section 6.3, where a virtual alliance of military sections is simulated using a real world benchmark data set.
Data Set 1: Homogeneous Cluster
Partition. An artificially generated 100 × 90 cooccurrence matrix R = {r_ij} was used in this experiment, where the 100 objects and 90 items form roughly 4 co-clusters. Figure 3(a) shows the original whole data matrix, where black and white cells depict r_ij = 1 and r_ij = 0, respectively. Vertically distributed cooccurrence submatrices were generated by arranging the 100 × 90 noisy matrix into four sites. Figure 3(b) shows the arranged cooccurrence matrix, where the m = 90 items were divided into (m_1, m_2, m_3, m_4) = (27, 24, 21, 18). Then, the four co-cluster structures are only very weakly implied in each site, and the global co-cluster structure can only be expected to be revealed in collaboration by all sites. This is a virtual situation of a group of four corporations, where they share 100 customers but have independent purchase history data on their own products. Here, the goal of collaborative fuzzy co-clustering is to reveal the intrinsic four customer clusters associated with their familiar products, which can be captured in the whole-data strategy without privacy consideration but cannot be found in site-wise independent analysis.
The co-clustering results of the distributed matrices are compared with those of the whole-data case, where the conventional FCCM algorithm was applied to the original 100 × 90 cooccurrence matrix R = {r_ij} without privacy consideration. Figure 4 shows the item membership vectors obtained in the whole-data case. The goal is to estimate site-wise item memberships w^k_cj that are as similar to the original ones as possible. Then, in this experiment, the similarity between the original and site-wise memberships is measured by their correlation coefficient.
Table 1 compares the correlation coefficients between the site-wise or proposed item memberships and the original result, where the best and the mean values over 50 trials with different initializations are depicted. In the site-wise FCCM, the conventional FCCM was applied to each submatrix (each small chunk) in each site. The fuzzification weights were set as λ_u = 0.001 and λ_w = 100.0, respectively. The table indicates that the proposed framework is useful for estimating reliable item memberships under the collaboration of all sites, while the derived item membership vectors are not necessarily equivalent to those of the whole-data case because of the site-wise independent constraints.
Data Set 2: Heterogeneous Cluster Partition.
Next, the applicability of the proposed framework is investigated in a heterogeneous cluster partition case. The second artificial 100 × 90 cooccurrence matrix R = {r_ij} was vertically distributed into 4 sites as shown in Figure 5(a), where (m_1, m_2, m_3, m_4) = (27, 24, 21, 18). In contrast to the previous experiment, each site has a different number of virtual co-clusters, namely (C_1, C_2, C_3, C_4) = (4, 3, 2, 4). This situation is similar to the case where four corporations in a group have different product characteristics and cannot obtain the real customer features without their collaboration.
The goal of collaborative co-cluster analysis is to reveal the intrinsic global co-cluster structures, which can be found only with the global whole data. Applying the proposed secure framework with various cluster numbers, the FCCM algorithm could derive at most C = 3 co-clusters; that is, when C > 3, the 4th and later clusters consisted of a few noise objects only.
In order to intuitively validate the C = 3 co-clusters derived by the proposed framework, Figure 5(b) provides the arranged whole data matrix, where all 90 items were first re-sorted in descending order of the item fuzzy memberships of the first cluster in order to extract the items of the first cluster, and then the remaining items were re-sorted in descending order of the second cluster. Note that, in real applications, we cannot construct such a whole data matrix. Figure 6 compares the item memberships derived by the proposed secure framework. Although sites 1 and 3 had different numbers of co-clusters from the global co-cluster structure, that is, (C_1, C_3) = (4, 2), their co-cluster structures were also summarized into C = 3 clusters. In site 1, the first 2 co-clusters were merged into a single co-cluster. On the other hand, in site 3, the second co-cluster was shared by two co-clusters because they cannot be distinguished in the global whole co-cluster structure. Finally, the derived item memberships are compared with those of the whole-data case, where we do not care about privacy issues. Table 2 compares the correlation coefficients between the site-wise or proposed item memberships and the whole-data result. In a similar manner to the previous experiment, the table also supports the high performance of the proposed method in collaborative fuzzy co-cluster analysis.
6.3. Data Set 3: Terrorist Attacks. Third, the proposed secure framework is applied to a social network data set. The Terrorist Attacks data set, which is available from the LINQS webpage of the Statistical Relational Learning Group @ UMD (http://linqs.cs.umd.edu/projects//index.shtml), consists of 1293 terrorist attacks, each assigned to one of 6 labels indicating the type of the attack. Each attack is characterized by 106 distinct features in a 0/1-valued attribute vector whose entries indicate the absence/presence of a feature. The goal of this experiment is to extract structural knowledge on the terrorist attacks from the 1293 × 106 cooccurrence matrix.
In this experiment, a virtual situation of four allied states is considered, where the 106 distinct features are separately distributed among the states. First, the item memberships derived from the distributed matrices are compared with the whole-data result. The whole-data result was given by applying the conventional FCCM algorithm with fuzzification weights (0.001, 180.0). The goal is to estimate, from the distributed matrices, fuzzy memberships similar to the whole-data result. The proposed framework and the site-wise FCCM were applied with the weights (0.0035, 100.0) and (0.01, 100.0), respectively.
Table 3 compares the correlation coefficients between the site-wise or proposed item memberships and the whole-data result. In a similar manner to the previous experiments, the collaborative knowledge is much more effective than the site-wise one. This result implies the applicability of the proposed framework to the strategic collaboration of allied states.
Next, the cross tabulations of the labeled classes and the clusters are compared for validating the utility of the object partitions. In Table 4, the three main classes are compared with the maximum-membership cluster assignment. Although the site-wise models derived only quite degraded object partitions, the proposed collaborative model could reconstruct a result almost equivalent to the whole-data case.
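A cross tabulation of this kind compares ground-truth labels with a hard assignment obtained by maximum membership. A minimal sketch with pandas, using hypothetical label and membership arrays rather than the study's data, could look like this.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n_objects, n_clusters = 300, 3

# Hypothetical data: a class label per object and an object-membership matrix.
labels = rng.choice(["bombing", "kidnapping", "weapon-attack"], size=n_objects)
U = rng.random((n_objects, n_clusters))

# Hard assignment by maximum membership, then cross tabulation.
assignment = U.argmax(axis=1)
table = pd.crosstab(pd.Series(labels, name="class"),
                    pd.Series(assignment, name="cluster"))
print(table)
```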
These results show that the proposed model efficiently achieves secure co-clustering from the viewpoints of both object and item partitions and is suitable for co-clustering tasks.
Conclusions
In this paper, a novel framework for collaborative fuzzy co-cluster analysis was proposed, in which vertically distributed cooccurrence matrices can be jointly analyzed while preserving personal privacy. In the joint calculation of the object fuzzy memberships, a secure encryption operation was adopted for calculating cluster-wise typicalities without broadcasting the individual elements of the cooccurrence matrices. Then, the item fuzzy memberships are securely estimated in each site. Several experimental results demonstrated that collaborative analysis can contribute to revealing the global intrinsic co-cluster structures of separate matrices better than individual site-wise analysis.
The proposed framework is expected to enhance the collaborative utilization of many distributed databases, such as strategic marketing in corporation groups, collaborative medical development in hospitals, and strategic military actions in allied countries, because these have the potential of sharing common knowledge while withholding their independent sensitive information.
A possible future work is to evaluate the responsibility (utility) degree of each site. In the present model, each site is equally responsible for the clustering estimation, while some sites may have only unreliable independent information. Because the site-wise sum-to-one condition on the item memberships can bring an undesirable influence from sites with low confidence, the responsibility of each site should be evaluated considering their confidences and should be fairly reflected in the object membership calculation. A noise rejection mechanism [21, 22] would be promising for removing unreliable sites.
Table 3 :
Comparison of partition quality measured by correlation coefficients among item memberships (terrorist attacks).
Table 4 :
Comparison of the cross tabulation tables of the object partition (terrorist attacks). The features are observed separately by four states, which want to obtain collaborative knowledge on the terrorist attacks without publishing their observed features, such as military intelligence. The 106 features were distributed to the four states as (26, 26, 27, 27); that is, each state has only a part of the whole features (1293 × * matrices), but the states want to obtain the knowledge that would be given by the whole-data case. Because three of the six labeled classes have fewer objects (attacks), the characteristics of the three major classes (bombing, kidnapping, and Weapon-Attack) are mainly discussed with 3 clusters. | 5,210.8 | 2015-01-01T00:00:00.000 | [
"Computer Science"
] |
Analysis of Biometric Technology Adaption and Acceptance in Canada
This study analyzes biometric technology adoption and acceptance in Canada. The introduction notes that biometric technology has existed for many decades despite only rising to popularity in the last two. Canada is highly advanced in information technology. The three sectors examined for the adoption and acceptance of biometric technologies are financial services, immigration, and law enforcement. The study uses judgment sampling and questionnaires for data collection. Given the high rate of adoption and acceptance of biometric technologies in Canada, the paper concludes that the adoption of these technologies is at the adaptation stage. Age and experience also influence the rate at which individuals accept biometric technologies, with the most experienced participants showing the highest rate of approval. Keywords—Adaption; biometric technology; organizational
I. INTRODUCTION
In the modern business environment, competition leaves organizations with no choice but to use all the resources at their disposal to gain competitive advantage. One of the fronts where this kind of competition has been evident is technology. In most cases, the organizations with innovative technologies carry the day. One of the most discussed technologies in business is biometric technology. Biometrics is the measurement of unique physical and behavioral traits. The use of biometrics for identification and security has become common practice, especially in developed countries, and biometric authentication is one of the most secure and trusted options for user authentication. Various factors influence the rate at which a technology such as biometric authentication is adopted in a country. This paper analyses biometric technology adoption and acceptance in Canada.
The paper is organized as follows. The next section highlights information technology investment in Canada. Section 3 presents the literature review, explaining the manner of identification and use and how biometric security systems are adopted. Section 4 presents the theoretical framework. Section 5 explains the methodology, covering the sampling approach, the data collection, and the data processing. Section 6 presents the data analysis, with emphasis on the regression analysis. Section 7 discusses the results. The final section concludes the paper.
A. Contribution
In this paper, we analysed biometric technology adoption and acceptance in Canada. The paper concludes that the three sectors in Canada where biometric technologies are adopted and accepted are financial services, immigration, and law enforcement. Furthermore, two main factors, age and experience, influence the rate at which individuals accept biometric technologies, with the most experienced participants showing the highest rate of approval.
II. INFORMATION TECHNOLOGY IN CANADA
Canada may not be a country that comes to mind at the mention of economic superstars, but it is one of the best-managed economies in the world. For many years, the government of Canada has placed importance on the use of information technology for economic sustainability. The Government of Canada Information Technology Strategic Plan 2016-2020 is one piece of evidence of the extent to which the government has gone to make sure that the use of information technology in the country is supported, according to a study by [3]. Information technology has been one of the largest contributors to the growth of the main services sector in Canada. According to a report by the Information and Communications Technology Council (ICTC), aside from job creation and a direct contribution to GDP through the ICT sector, which increased by $2 billion in 2014, the contribution of ICT to the other sectors is beyond doubt. The proximity of the country to the US and their close business relationship have also contributed to the high rate of advancement of ICT in the country.
III. LITERATURE REVIEW
Although biometrics is on the verge of a breakthrough in human identification and security, it was not given as much consideration in the past as it is in modern society. Canadian stakeholders believe that there are benefits that come with the use of biometric technologies [5]. However, along with these advantages there are disadvantages, such as age- and occupation-related factors that may make it difficult to capture physical attributes such as fingerprints. People in occupations such as construction are prone to such a disadvantage. The various biometric technologies can be judged in terms of universality, permanence, uniqueness, collectability, acceptability, performance, and circumvention. However, it is asserted that none of these technologies is perfect. When they are reviewed in terms of the above-mentioned criteria, some shortcomings of the use of biometrics are noted, as shown in Table I.
Table I reveals the imperfection of the biometrics used in Canada. Therefore, a standard set of criteria can be used to evaluate the performance of a biometric system. There are three such criteria: 1) False accept rate (FAR): the proportion of unauthorized users who manage to get access; such an error most likely results in a security breach. 2) False reject rate (FRR): the proportion of authorized users who fail to get access; such an error threatens the rightful use of the system. 3) Crossover error point (CEP): the operating point at which the rate of false acceptances is equal to the rate of false rejections; this point is taken to indicate optimal results for biometrics-based systems.
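These three criteria can be made concrete with matcher scores. The sketch below uses synthetic score distributions, not data from this study, to compute FAR and FRR over a sweep of decision thresholds and to locate the threshold where the two error rates cross, i.e. the crossover error point.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic similarity scores: genuine attempts score higher than impostor attempts.
genuine = rng.normal(0.75, 0.10, 5000)   # authorized users
impostor = rng.normal(0.45, 0.10, 5000)  # unauthorized users

thresholds = np.linspace(0.0, 1.0, 501)
far = np.array([(impostor >= t).mean() for t in thresholds])  # false accept rate
frr = np.array([(genuine < t).mean() for t in thresholds])    # false reject rate

# Crossover error point: threshold where FAR and FRR are closest to equal.
i = np.argmin(np.abs(far - frr))
print(f"threshold={thresholds[i]:.3f}  FAR={far[i]:.3%}  FRR={frr[i]:.3%}")
```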
A. Mannerism of Identification and Use
Of the forms of biometrics used in various parts of the world, fingerprints have been used in Canada for the longest period. This technology has been particularly used in the finance sector for the identification of account holders in order to secure accounts from possible fraud [10]. The author in [9] asserts that there has been a significant decrease in the rate of fraud cases related to false acceptance. For the sake of security, some organizations use fingerprint recognition that requires all ten fingers instead of just one, because it enhances accuracy.
Another biometric commonly used in Canada is facial recognition. This technology was initially manual, with administrators having to look at digital pictures for facial confirmation. However, with the advancement of technology in Canada, facial biometrics has been taken to a whole new level. A perfect example is the application of facial recognition at Canadian airports [13], as part of a traveler screening program introduced by the Canadian Border Services Agency. It involves self-service border clearance kiosks intended to make Canadian border points safer.
The author in [2] asserts that many people do not realize that iris recognition is different from facial recognition. The iris is a muscle that controls the size of the pupil. Its highly detailed texture makes it possible to use the iris for identification and authentication. Government agencies have been at the forefront of using iris identification in Canada. An example is the partnership between IBM and ID Iris to provide iris recognition technology for NEXUS, a program under the Canadian Border Services Agency [8]. The private sector has also used this technology for authentication on some occasions.
Voice recognition technology has also been used by some organizations, particularly in the Canadian financial sector. RBC was the first Canadian company to successfully implement voice recognition technology, in 2015. However, the application of voice recognition technology does not end there; soon, many other organizations were using this technology for recognition. The financial sector has been under pressure to use innovative technology in the recent past because of the changing expectations of its customers.
B. Adoption Biometric Security Systems
Although many people have not been optimistic about the adoption of biometric technologies in Canada, adoption rates have been relatively high as organizations strive to meet the expectations of their customers and international standards.
The government has also been under equal pressure because of the advanced nature of the security risks the country faces and the need to assure members of the public that they are secure. For a technology that was considered new by many members of society three decades ago, the debate on the benefits and the shortcomings of these technologies was at the center of the decisions made by government agencies and organizations on whether adopting biometric technologies was a viable decision or not [4].
In Canada, decisions to use biometrics were guided by economic, operational, managerial, or process-related considerations. According to most of the research on the adoption of biometrics, these decisions resulted mainly from operational variables. This is further backed by the high costs involved in the acquisition of such technologies [14]. Some managers, especially those managing small and medium-sized organizations, know the security threats that their organizations face and the benefits they could get by adopting biometric identification and authentication, but are limited by a lack of financial resources. It was also revealed that limited research informing the formulation of implementation strategies might be a main factor that decreases the ability of some organizations to use biometric technologies. For most of the organizations and government agencies that adopted them, security and better services were at the center of the decisions. The author in [6] argues that the money involved in acquiring this technology in Canada was not much compared to the operational and process-related impact it had.
C. Summary
Evidently, there is no single technology that will give absolute security. Therefore, only the integration of various biometric identification options into a single application can lead to highly layered levels of security, as in the case of some organizations in the financial sector. The Canadian Border Services Agency has also been seen to use more than one biometric identification technology. The use of biometric technology in Canada is more advanced than in most parts of the world, especially the developing countries.
IV. THEORETICAL FRAMEWORK
Managerial, organizational, technological, and environmental imperatives are all promoted by available theories on the use of technology [12]. Notably, the adoption of biometric technologies is not entirely a technological issue because of the high influence of cost. Therefore, this study assumes that adoption has six phases: initiation, adoption, adaptation, acceptance, routinization, and infusion. As far as the available literature on the adoption of biometric technology in Canada is concerned, the country is at the adaptation stage of technology adoption. The diffusion of innovation model (Rogers, 1995), which is commonly used in understanding the diffusion of technologies, is shown in Fig. 1 below.
A. Usage
Biometric technologies are widely used by security agencies and financial organizations in Canada. However, not much has been seen in the other sectors.
B. Variables of Receivers
The diffusion of innovation model uses organizational social characteristics, demographic characteristics, and perceived innovation need as essential variables that could have a controlling impact on the decision to adopt a technology by an organization or government agency. This study mainly focuses on organizational features such as type, size, age, and experience in IT as the main factors that can affect the decision on whether or not to adopt biometric technologies.
C. Perceived Innovation Characteristics
The diffusion of innovations theory asserts that the various dimensions of attitude toward an innovation are measurable using five attributes, namely compatibility, relative advantage, complexity, risk, and trialability. The perceived relative advantage of an innovative technology is directly proportional to the rate at which it is adopted [1]. Research also indicates that an innovative technology with substantial complexity needs more technical skills and greater implementation and operational effort to increase its chances of adoption. Potential adopters of a technology who are allowed to try an innovation will feel more comfortable with it and are more likely to adopt it.
V. METHODOLOGY
A. Sampling
The sampling technique used in this study was judgment sampling as opposed to random selection. The study made use of three sets of samples: financial institutions, immigration agencies, and law enforcement agencies. These sets were chosen because of the high importance of security measures and identification to these sectors. Participants were employees of the organizations in specialized IT departments, especially those directly involved in the implementation of biometric technologies. Although biometrics is not a new trend in Canada, focusing on the IT specialists ensured that the collected data was of the maximum possible accuracy.
B. Data Collection
The data collection method used for this study was questionnaires. This method was selected because of the high number of participants the study aimed to have and their variety in terms of age, the size of the organizations in which they work, gender, and the roles they play in these organizations [11]. The questionnaire used both open and closed questions, with demographic details being of utmost importance. The questionnaire also had a section that focused on the variables concerning biometrics adoption. These variables included technology, ease of use, support of management, technological compatibility, and participant vulnerabilities and privacy needs [7]. We distributed the questionnaire to 10 organisations that have a number of IT specialists and prepared about one hundred questionnaires; one hundred thirty-nine completed questionnaires were used for the analysis in this paper.
C. Processing
Exploratory factor analysis was used to reduce the number of variables into a few factors that can influence the implementation of biometric technology in an organization. The reduced factors were improvement of service excellence, security, and productivity. Statistical Package for the Social Sciences (SPSS) for Windows Version 14.0 was then used for the factor analysis. The factors were derived by principal axis factoring and rotated by applying the Promax with Kaiser normalization method to increase the relationship between some of the factors and the variables. Multiple regression analysis was applied to test the hypotheses associated with the factors that influence the implementation of biometric technologies.
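The original analysis was run in SPSS; a rough Python equivalent could combine principal axis factoring with Promax rotation (here via the third-party factor_analyzer package) and an ordinary least squares regression (via statsmodels). The variable names and the data frame below are placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(3)
cols = ["technology", "ease_of_use", "mgmt_support", "compatibility", "privacy"]
X = pd.DataFrame(rng.random((139, len(cols))), columns=cols)  # placeholder responses
y = X @ rng.random(len(cols)) + 0.1 * rng.standard_normal(139)  # placeholder outcome

# Principal axis factoring with Promax (oblique) rotation, three factors.
fa = FactorAnalyzer(n_factors=3, method="principal", rotation="promax")
fa.fit(X)
print(pd.DataFrame(fa.loadings_, index=cols))

# Multiple regression of the adoption outcome on the stated variables.
model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())
```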
VI. DATA ANALYSIS
The data collected from individuals from a demographic perspective is shown in Table II.
According to the collected data, the rate at which biometrics was adopted differed across the sectors on which the study was based. The findings are presented in Table III. The YY rating implies that there was adequate use of biometric technologies, the Y rating indicates that the use of biometrics was moderate, and the N rating implies that there were no cases of biometric adoption.
A. Regression Analysis
Regression analysis proved to be effective in the testing of the hypotheses. The regression analysis was based on the factors that determined the applicability and adoptability of biometric technologies in Canada. It was derived from the relationship between the stated variables (technology, ease of use, support of management, technological compatibility, and participant vulnerability and privacy concerns) and the state of biometric technology in the country.
VII. DISCUSSION OF FINDINGS
The results of the study reveal that ease of use, size and type of organization, and communication significantly influenced the adoption of biometric technology.
Given that Canada is at the adaptation stage, there were very few participants still doubting the effectiveness of biometric technology in enhancing security. However, the participants were of the opinion that determining the right balance in terms of the technologies to be used and the nature of the organization were the major challenges faced in the adoption of biometric technologies.
A. Expected Attributes of Biometric Technology
1) Compatibility of the technology: Although technological compatibility has significance in the adoption of biometric technology, there is no evidence that it has a high influence. As far as the infrastructure used in adopting this technology is concerned, most of the participants seem to think that the level of advancement of information technology in Canada is sufficient for any organization that intends to adopt biometric technologies.
2) Difficulty of use: The majority of the respondents were of the opinion that difficulty of use is no longer a determining factor in whether an organization adopts biometric technology. It used to be an important determinant in the past. However, with the level of advancement of information technology in Canada, it has become easier for organizations to adopt biometric technology. This is evidenced by the high number of organizations using biometric technologies in Canada. In most developing countries, older organizations tend to avoid innovative technologies because they are satisfied with traditional techniques of identification and authentication. However, the situation in Canada is different because older organizations are taking advantage of their extensive experience and access to capital, which they can use in the adoption of biometric technology. This opinion was held by 72.22% of the participants.
3) Relative advantage: Relative advantage proved to have a high influence on the adoption of biometric technology. Most of the participants were of the impression that their organizations opted for biometric technologies because they provide identification and authentication advantages that none of the available alternatives could. Only 17.26% of the participants thought that potential relative advantage had no impact on the decision of whether to adopt biometric technologies.
B. Variables of Social Systems
Suitable communication between information technology experts, users, and organization managers is considered to be one of the principal factors that have contributed to the high level of adoption of biometric technology in Canada. 86% of the participants were adamant that a lack of understanding of the technologies contributed strongly to the low rate at which these technologies were adopted in the past. Management support was also significant because of the degree to which management influences which strategies are adopted and the extent to which they are adopted. This trend proved to be persistent for participants who worked in government agencies.
C. Receiver Variables
As seen in the results, an organization's demographic nature tends to have minimal influence on its intention to use biometric technology. There is no evidence from the collected data that the age of the organization influenced the decision to opt for biometric technology.
However, there was some difference when it comes to the size of the organization.The larger organizations in Canada appeared to adopt biometric technology at a higher rate as compared to the smaller ones.This difference can be attributed to the difference in access to resources.
VIII. CONCLUSION
Evidently, the use of biometric technologies in Canada has reached its adaptation stage. The level of understanding of the participants and the number of organizations that use these technologies are enough proof that the country is well past the infancy stage. However, it is clear that the nature of the organization and its goals in terms of identification and authentication had a high influence on the adoption of biometric technology. This is the reason why the technologies are widely used in the banking, immigration, and law enforcement sectors. This supports the hypothesis that economic, operational, managerial, or process factors determined the possibility of an organization adopting biometric technologies. | 4,435.2 | 2018-01-01T00:00:00.000 | [
"Computer Science"
] |
Solar radiation modeling and simulation of hyperspectral satellite data
In this research, we are interested in applying the model to simulate the radiative transfer through the atmosphere under realistic conditions in order to assess the significance of the effects of the atmosphere and of the acquisition conditions on satellite images. The main objective of this application is the analysis of satellite measurements, along with their variation with atmospheric parameters. The purpose of modeling is to understand how the different components of the measurement system combine to produce a measurement. The form and content of a model depend on its purpose. The model is constructed to describe and characterize the measurement system, to understand the phenomena it registers, and to predict their behavior under the effect of an external action or as a result of a partial modification of the system itself. The model developed decomposes the ground-atmosphere medium into subsets interacting with the solar spectrum and with the sensor on board the satellite. The proposed radiometric correction method is simple because it is based on pixels of the images whose radiometric behaviour is known. The luminances are simulated using the SDDS software, which allows us to establish relations between digital counts, luminance, and reflectance.
INTRODUCTION
All bodies emit and reflect flows of energy in the form of electromagnetic radiation. The relative variation of the energy reflected or emitted as a function of wavelength is the spectral signature of the object considered in a given state. The spectrum can be used to identify the object and determine its status. For a satellite making measurements in a number of spectral bands, the spectral signature of an object corresponds to the different levels of radiance recorded in each of them.
The principle of remote sensing is the detection of electromagnetic radiation that carries information from the soil-atmosphere system, either by reflection or by transmission, using a radiometer on board the satellite. The signal received by the radiometer is the result of the physical, biological, and geometrical properties of objects on the ground. For a better use of satellite measurements, we must answer the following questions: to what point on the earth's surface does the measurement correspond? What is the value of that measurement?
Answering these questions requires defining: what exactly are the physical quantities measured by the measurement system? What disturbs the measurement system in what it is supposed to measure? Which model can describe the disturbances? How does one characterize the quality of the measurement?
To understand this complex phenomenon, we have developed an analytical model (SDDS) of radiative transfer simulation in water coupled to an atmospheric model in order to simulate the measurement made by the satellite. This direct model makes it possible to follow the solar radiance along its trajectory Sun - Atmosphere - Sea - Sea depth - Sensor. The goal of this simulation is to show, for every observation satellite (Spot, Landsat MSS, Landsat TM), the possibilities it can offer in the domain of oceanography (Bachari, 1997). An interaction model of the solar spectrum with the Earth-atmosphere system is developed to calculate the various components of solar radiation at the ground and at the top of the atmosphere (Bachari, 1999; Houma et al., 2000). In this research, we are interested in applying the model to simulate the radiative transfer through the atmosphere under realistic conditions in order to assess the significance of the effects of the atmosphere and of the acquisition conditions on satellite images. The main objective of this application is the analysis of satellite measurements, along with their variation with atmospheric parameters. The spectral signature of water is used here to simulate the action of the satellites (Gordon, 1974).
MODELLING THE INTERACTION OF THE SOLAR SPECTRUM WITH THE ATMOSPHERE
An analytical model for simulating radiative transfer in water coupled with an atmospheric model can adequately simulate the signal of a body of water at the altitude of the satellite, in order to analyze the effects of atmospheric parameters and shallow water. This model determines the radiation scattered by a body of water in the satellite data simulation software 'SDDS' (Bachari, 1997); its main function is the calculation of the spectral radiance reflected by the sea water at the sensor level.
The purpose of modeling is to understand how the different components of the measurement system combine to produce a measurement. The form and content of a model depend on its purpose. The model is constructed to describe and characterize the measurement system, to understand the phenomena it registers, and to predict their behavior under the effect of an external action or as a result of a partial modification of the system itself. The model developed decomposes the ground-atmosphere medium into subsets interacting with the solar spectrum and with the sensor on board the satellite.
The source irradiates the object, and the latter reflects the radiation in all directions; some of this radiation is captured. The radiation received by a radiometer on board the satellite is composed of two main terms: the radiance caused by the surface in the field of view of the sensor, and a radiance that is not caused by the surface in the field of view. The first term is the useful information; it is due to the direct and indirect solar radiation. The second term, considered as noise, is due to the light scattered by the atmosphere (Gordon et al., 1981; Becker et al., 1990).
SPECTRAL IRRADIANCE ON THE GROUND
The sun is the source of energy in passive remote sensing; solar radiation carries the information of the natural environment through its intrinsic properties (wavelength, polarization, phase shift). Knowledge of the spectral distribution of the radiation that reaches the upper atmosphere is very important for various applications (Chadin, 1988). The solar spectrum has been the subject of several measurements on the ground, in the air, and from satellites.
Assuming that the atmosphere is transparent, the solar spectrum reaching the soil does not undergo any change in its trajectory.
The spectral irradiance in the upper atmosphere depends on the latitude φ of the location, the declination δ of the axis of rotation of the earth, and the time h.
The solar spectrum on the ground is then given by the following equation (Ratto, 1986): I_g(λ) = I_0(λ) f cos(θ_z), where I_0(λ) is the solar spectrum outside the Earth's atmosphere at wavelength λ, normalized to 1 Astronomical Unit (1 UA = 1.496 × 10^8 km), f is the correction factor for the sun-ground distance (this factor depends on the day number), and θ_z is the solar zenith angle.
In a clear-sky atmosphere, the concentration of gases and aerosols varies with the changing weather conditions and the geographical position. Gases and aerosols absorb and scatter solar radiation selectively all along the optical path. Gases, principally ozone, carbon dioxide, and water vapor, are the constituents responsible for the absorption of the solar spectrum. Air molecules and aerosols are responsible for the scattering of solar radiation in all directions (Prieur et al., 1975). The effects of absorption and scattering are represented by the transmittance according to Bouguer's law (Bouguer, 1953): T_λ = I_λ / I_0λ = exp(-τ_λ m), where I_λ is the output spectral radiation, I_0λ is the input spectral radiation, τ_λ is the optical thickness, and m is the optical air mass.
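A minimal numerical illustration of this attenuation law, assuming a single total optical thickness per band and a simple plane-parallel air mass of 1/cos(θ_z), is sketched below; the optical-thickness values are arbitrary placeholders.

```python
import numpy as np

def transmittance(tau, zenith_deg):
    """Bouguer transmittance T = exp(-tau * m), with m approximated as 1/cos(theta_z)."""
    m = 1.0 / np.cos(np.radians(zenith_deg))  # simple plane-parallel air mass
    return np.exp(-np.asarray(tau) * m)

# Placeholder spectral optical thicknesses (larger at short wavelengths,
# where Rayleigh scattering dominates).
tau = np.array([0.45, 0.25, 0.12, 0.05])
for angle in (0, 30, 60):
    print(angle, transmittance(tau, angle).round(3))
```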
Scattering occurs during the interaction between the incident radiation and particles or large gas molecules in the atmosphere (water droplets, dust, smoke, ...). Where the suspended particles are negligible in size compared to the wavelength, the phenomenon that occurs is Rayleigh scattering (Fröhlich et al., 1981).
The scattering by one particle occurs independently of the other particles. The radiation is distributed in all directions; the forward-scattered radiation is equal to the backward-scattered radiation.
Model description of irradiance
This model calculates the direct normal and horizontal diffuse solar spectral irradiance for cloudless sky conditions. The code covers the range 0.3 to 4.0 microns with a step of 10 nm. It takes as input a number of parameters such as the solar zenith angle, the angle of inclination, the atmospheric turbidity, the amount of precipitable water vapor, the amount of ozone, the pressure, and the albedo (Guyot et al., 1992).
Monochromatic distribution of a direct solar beam can be computed as a function of a number of variables, including optical mass and a wide variety of atmospheric parameters-for example, water-vapor content, ozone layer thickness, and turbidity parameters.
In the ultraviolet and visible region, it is essentially ozone absorption, Rayleigh scattering, and aerosols that control attenuation of the direct beam.The transmittance by aerosols is minimum at the short wavelengths and increases slowly as the wavelength increases.(Morel et al., 1993).
Direct spectral irradiance on the Ground
The irradiance arriving directly from the sun at ground level for a wavelength λ can be written as I_dir(λ) = I_0(λ) D T(λ) cos θ, where I_0(λ) represents the irradiance at the top of the atmosphere at the mean Earth-Sun distance for wavelength λ, D is the correction factor for the Earth-Sun distance, T(λ) is the product of the transmittances for Rayleigh scattering, aerosols, water vapor, ozone, and gas absorption, θ is the angle of incidence of the direct beam on the inclined surface, and t is the tilt angle of the inclined surface. The tilt angle is 0° for a horizontal surface and 90° for a vertical surface. The diffuse irradiance on a horizontal surface is composed of three components, a Rayleigh scattering component, an aerosol component, and a component that accounts for the multiple reflections of light between the ground and the atmosphere; the total diffuse illumination is their sum. The global irradiance on a horizontal or inclined surface is then the sum of the direct and diffuse contributions. Figure 4 shows that the solar irradiance is maximum for a solar zenith angle of 0°, which means that when the sun is overhead (at noon) the solar intensity is highest, and that the illumination decreases with increasing solar zenith angle.
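The direct component described above can be sketched numerically as the extraterrestrial spectrum scaled by the Earth-Sun distance factor, the product of the band transmittances, and the cosine of the incidence angle. All numbers below are placeholders, and the multiplicative-transmittance form is an assumption consistent with the transmittance functions listed for this model.

```python
import numpy as np

def direct_irradiance(i0, distance_factor, transmittances, incidence_deg):
    """Direct spectral irradiance on a surface:
    I_dir = I0 * D * (T_rayleigh * T_aerosol * T_water * T_ozone * T_gas) * cos(theta)."""
    t_total = np.prod(transmittances, axis=0)
    return i0 * distance_factor * t_total * np.cos(np.radians(incidence_deg))

i0 = np.array([1.6, 1.8, 1.5, 1.0])          # placeholder extraterrestrial spectrum (W m-2 nm-1)
ts = np.array([[0.80, 0.88, 0.93, 0.97],     # Rayleigh
               [0.90, 0.92, 0.94, 0.95],     # aerosols
               [1.00, 0.99, 0.95, 0.90],     # water vapor
               [0.95, 0.97, 1.00, 1.00],     # ozone
               [1.00, 1.00, 0.99, 0.99]])    # mixed gases
print(direct_irradiance(i0, distance_factor=1.02, transmittances=ts, incidence_deg=30))
```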
MODELING THE RADIATION REFLECTED BY THE GROUND
In this work, we determine the physical quantity measured by the imaging system (sensor), which is the sunlight reflected by the soil-atmosphere system, averaged in some way over the spectral band considered by the sensor. For this, we have described the various factors affecting a satellite measurement.
Thus, after characterizing the data on the spectrum of electromagnetic radiation, the reflection, emission, and atmospheric transmission are determined and developed from the optical properties of the elements of natural surfaces, the radiation reflected by the surface water, and the radiation captured by the satellites (Houma et al., 2004).
The methods used in atmospheric modeling can be divided into direct methods and indirect methods. Generally, direct methods are represented by the development of a model of the interaction of the solar spectrum with the various elements that lie in the path of the solar radiation, from the sun to the ground and from the ground to the sensor.
The radiance L_sat captured by the satellite is the sum of three radiances, the first of which is the radiance of the ground-atmosphere system considering the ground as a black body. A database of spectral signatures and spectral extinction coefficients is used to set the model parameters, and a numerical code tracks the signal along its sun-ground and ground-sensor path.
The energy quantity R_λ(x, y, b) is transformed into a digital count, which includes all the information about the Earth-atmosphere system, the geometric conditions of the acquisition, and the optical properties of the sensor. It is obvious that atmospheric absorption and scattering vary across an image due to three effects: • Change in weather conditions across an image.
• Change of observation relative to the position of the sun.
• Variation in the average radiation in the area surrounding the pixel observed at all times.
It is therefore necessary to analyze the different types of information in order to quantify and qualify them (Becker, 1978). To do this, we should dissect the process of acquiring an image, estimate its multiple components, and determine at what level the various categories of information can be determined following atmospheric correction models (Bukata et al., 1995).
Simulation analysis of the reflectance of sea water
This model is complemented by a detailed study of the factors affecting the optical properties of sea water. To correctly interpret satellite data, we must solve the radiative transfer equation for the soil-atmosphere system. Solving the transfer equation relies on atmospheric models at several levels that require a considerable mass of meteorological data, which is generally not available.
The first attempt to explain the blue sky was made by Lord Rayleigh; the assumptions of his theory are that the particles are small compared to the wavelength and that the scattering particles and the medium do not contain free charges (they are non-conductive), so that the dielectric constant of the particles is almost the same as that of the medium.
In vertical viewing, the reflectance is then lower than when the sun is at its zenith. The set of simulated data depends on the reflectance; the spectral amplitude of the radiation that reaches the ground is maximum at the zenith, so the measurement is affected more by the radiation itself than by the dependence of the reflectance on the zenith angle.
The zenith angle determines the illumination received by the target surface and is involved in all the calculations of the various transmittances and radiations. The received radiation, for all channels, decreases as the solar zenith angle tends towards the horizontal and is maximum when the sun is at its zenith. The zenith angle depends on the latitude, the declination of the sun, and the time.
The spectral information of the radiometers is determined by the wavelengths recorded by the sensor. The width of each spectral band of the radiometer defines the spectral resolution. We consider that the observation is made in a plane perpendicular to the direction of the scan lines.
Solar radiation travels through space as electromagnetic waves. Where a wave propagating in a medium of a given refractive index suddenly meets another medium characterized by a different index of refraction, part of the wave is transmitted into the second medium and the other part is reflected back into the first medium. The amplitude of the reflected wave depends on the nature of the medium, its shape, and the lighting conditions.
Part of the global radiation reaching the ground is reflected towards the sensor, characterized by the reflectance coefficient. The major problem in determining the reflected radiation is the development of a model that accounts for all the soil properties affecting the spectral signature, whether directly (lighting conditions, roughness, soil type, ...) or indirectly (color, salinity, humidity, etc.).
Total radiation reaching the satellite
The luminance at satellite level is the sum of the intrinsic radiance of the atmosphere and the luminance of the target, which in our case is the sea water.
The radiation recorded at the satellite is given by a relation involving the transmittance of the direct radiation toward the sensor, the spherical albedo s of the atmosphere, and the spectral sensitivity function S(λ) of the optical sensor (Sturm, 1980).
The sensor has a spectral response S_λ; the signal recorded at the sensor is the corresponding luminance. The luminance at satellite level is the sum of the intrinsic radiance of the atmosphere and the luminance of the target, which in our case is the sea water (Deschamps et al., 1983).
The radiation reflected from the water surface towards the satellite passes through the atmosphere along a direct path with viewing angle θ_v and undergoes attenuation before being captured by the satellite. The amount of energy that reaches the satellite sensor is the sum of the radiation coming from the ground and the radiation scattered by the atmosphere. The radiation reaching the sensor is therefore composed of the global spectral radiation reflected by the sea water and transmitted through the atmosphere, plus the atmospheric path radiance; the signal of the sea water recorded at the sensor follows from this sum.
SIMULATION OF SATELLITE DATA FOR SDDS
Based on this physical model, we developed a satellite data simulation system to correct the scattered radiation for atmospheric effects.
A library of spectral signatures is introduced; it covers the main ground objects that have a reflectance in the bands of the electromagnetic spectrum. The combination of spectral signatures and the different radiances allows us to calculate the spectral radiance reflected by the surfaces. The simulation results depend on the choice of input parameters. The software makes it possible to show the influence of various parameters and of the geometric characteristics of the acquisition on the signal reaching the sensors on board the Spot, Landsat, and Irs1c satellites.
To highlight the effect of a given parameter on the satellite measurement, fixed values are assigned to all the other variables, for the case of a clear sky and a geometrically well-defined configuration.
The second part uses Spot, Landsat, and Irs1c satellite images. Applying the covariance matrix method (a method that can provide a locally specific correction, in the sense that it relates to the pixels of a given region of the image), one can estimate the atmospheric noise through a program that takes as input the image data from the different channels and outputs the atmospheric noise of these channels.
The physical quantity measured by the imaging system (sensor) is the solar radiation reflected by the soil-atmosphere system, averaged in some way over the spectral band considered by the sensor. It depends on the illumination and viewing angles.
To determine the different radiation received at the satellite, the data input parameters are astronomical, geographical and atmospheric.
Atmospheric correction of remotely sensed data
Atmospheric correction is a major issue in visible or near-infrared remote sensing because the presence of the atmosphere always influences the radiation from the ground to the sensor.
As introduced before, the atmosphere has severe effects on the visible and near-infrared radiance.
First, it modifies the spectral and spatial distribution of the radiation incident on the surface.
Second, the reflected radiance is attenuated. Third, atmospherically scattered radiance, called path radiance, is added to the transmitted radiance.
The atmospheric transmittance is T = exp(-τ / cos θ), where τ is the atmospheric optical thickness and θ can be the solar zenith angle or the satellite viewing angle.
The optical thickness is composed of three contributions, τ_λ = τ_aλ + τ_Rλ + τ_dλ, where τ_aλ is the optical thickness of selective absorption, τ_Rλ that of Rayleigh (molecular) scattering, and τ_dλ that of Mie (aerosol) scattering. For a given spectral interval, the solar irradiance reaching the earth's surface follows from this transmittance. Forward scattering is dominated by aerosols, while back scattering is mainly due to Rayleigh scattering. A number of path-radiance determination algorithms exist; for nadir-viewing sensors such as Landsat MSS, TM, and Spot HRV, single-scattering corrections are usually used. In this section we only introduce some basic concepts of this complex topic, namely a single-scattering correction algorithm for the nadir viewing condition. More sophisticated algorithms that account for multiple scattering do exist; some examples are Lowtran7, 5S (Simulation of the Satellite Signal in the Solar Spectrum), and 6S (Second Simulation of the Satellite Signal in the Solar Spectrum).
There are Fortran codes available for these algorithms. The 5S and 6S codes were proposed by Tanre (Tanre et al., 1990).
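Before turning to such full radiative-transfer codes, the basic transmittance relation above, with the optical thickness split into absorption, Rayleigh, and Mie terms as in this paper, can be sketched numerically; the component values below are placeholders.

```python
import numpy as np

def path_transmittance(tau_abs, tau_rayleigh, tau_mie, zenith_deg):
    """T = exp(-(tau_a + tau_R + tau_d) / cos(theta)); theta is the solar zenith
    angle for the downward path or the viewing angle for the upward path."""
    tau_total = tau_abs + tau_rayleigh + tau_mie
    return np.exp(-tau_total / np.cos(np.radians(zenith_deg)))

# Placeholder component optical thicknesses for one spectral band.
print(path_transmittance(tau_abs=0.02, tau_rayleigh=0.14, tau_mie=0.20, zenith_deg=45))
```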
For small wavelengths the atmospheric contribution is very important. We also wish to point out that the viewing angle is important; this was noticed through a degradation of the signal reaching the sensor, for all the contributors to the signal exciting the radiometer.
The simulation analysis shows the need to correct satellite images for atmospheric effects in order to identify objects on the ground (Fig. 7). The composition of the atmosphere disturbs the path of the electromagnetic radiation between the source and the earth on the one hand, and between the earth and the satellite on the other. The resulting atmospheric effects are absorption and scattering, performed jointly by the two major components: gases and aerosols.
The recorded signal includes the part of the spectral radiation scattered by air molecules and aerosols out of the soil-atmosphere system (Popp, 1994).
The spectral transmittance of the atmosphere along the viewing path is calculated by replacing the solar zenith angle with the viewing angle. To dissect the effects of the various elements contributing to satellite measurements, we developed a satellite data simulation software (SDDS) using Visual Basic 6. The tool is based on the modeling of radiation and atmospheric effects.
The tracking of the solar spectrum along its double path, sun-ground and ground-sensor, is implemented based on the concepts of the 5S and 6S codes, to simulate the different radiances reaching the sensor.
For the operation of the system we used the extinction coefficients of the solar spectrum developed in Lowtran 6 and a bank of spectral signatures extracted from the ENVI 4.3 software (2007).
ANALYSIS OF VARIATION IN LUMINANCE
The physical quantity measured by the imaging system (sensor) is the solar radiation reflected by the soil-atmosphere system, averaged in some way over the spectral band considered by the sensor. It depends on the illumination and viewing angles (Bachari et al., 1997).
Effect of solar zenith angle
The zenith angle is involved in all the calculations of the various transmittances and radiations; it depends on the latitude, the declination of the sun, and the time.
The zenith angle also determines the illumination received by the ground.
The radiation received, for all satellite channels, decreases as the solar zenith angle tends towards a horizontal position, and is maximum when the sun is at its zenith (Fig. 11).
The following figure shows the variation of the radiances with the solar zenith angle. Fig. 11: The effect of the solar zenith angle on the luminance.
Effect of the zenith angle of observation
The zenith angle of observation determines the length of the path travelled by the radiation through the atmosphere. The simulation results show that the collected radiation is small if the observation angle tends towards a horizontal position. The growth of the zenith angle of observation leads to an increase in air mass and a decrease in transmittance.
The following figure shows the contrast of the luminance for the different sensors. In the case of a weak surface spectral signature, such as water, the atmospheric noise becomes important relative to the reflected radiation if the observation angle tends towards a horizontal position. The atmospheric contribution increases, and therefore the luminance at the sensor level increases.
The luminance peaks at a zenith angle of observation in the range [60°, 70°] and then begins to decrease as θ_v moves beyond 70°.
Fig. 12: The effect of the zenith angle of observation on the luminance
Effect of relative humidity
The presence of water vapor in the atmosphere depends on the location and the altitude above the ground. Analysis of the simulated data shows the insensitivity of the channels XS1, XS2, TM1, TM2, TM3, and MSS4 to an increase or decrease of the water vapor in the atmosphere.
The transition from a dry to a humid atmosphere causes a slight decrease in the measurements in channels XS3 and TM4. The designers of the radiometers on board the Spot and Landsat satellites have avoided the windows of absorption of radiation by water vapor.
The following figure shows the change in apparent radiance between the two extreme amounts of water vapor (Fig. 13). The apparent radiance level of the Spot and Landsat systems is practically independent of the humidity of the atmosphere because the spectral bands used do not contain the total absorption windows of radiation by water vapor. Mie scattering has a significant influence on the signal measured at the sensor, described by the diffusion parameter F_c. The atmosphere absorbs in the short-wavelength domain and becomes more transparent at longer wavelengths. The information in the channels is degraded depending on the diffusion parameter F_c (Fig. 14).
The following figure shows that for a low surface reflectance, as in the case of water, the degradation in the channels becomes more comparable, especially for channels XS1 and MSS4. Fig. 14: The effect of the atmospheric turbidity parameter F_c on the radiance.
CORRECTION OF SATELLITE IMAGES
Each pixel of an image is a digital count from 0 to 255, which is translated into a color using an editable, pre-selected distribution in image processing. Generally, the relationship between the digital count and the luminance is linear, L = a_1 CN + a_0, where the factors a_1 and a_0 are calibration coefficients.
For the same image acquisition conditions, the simulation of the observed luminance determines the relationship between the simulated apparent luminance and the corresponding digital counts of the processed images.
For the three channels of the HRV sensor, the radiometric conversion is given by relations of this linear form.
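The linear calibration between digital counts and luminance can be applied (and inverted) channel by channel as in the sketch below; the gain and offset values are placeholders, not the HRV coefficients.

```python
import numpy as np

def counts_to_radiance(cn, a1, a0):
    """L = a1 * CN + a0 (per-channel linear calibration)."""
    return a1 * np.asarray(cn, dtype=float) + a0

def radiance_to_counts(radiance, a1, a0):
    """Inverse relation, clipped to the 8-bit digital count range."""
    cn = (np.asarray(radiance) - a0) / a1
    return np.clip(np.rint(cn), 0, 255).astype(np.uint8)

cn = np.array([[12, 47], [133, 250]])        # placeholder digital counts
L = counts_to_radiance(cn, a1=0.60, a0=1.5)  # placeholder calibration coefficients
print(L)
print(radiance_to_counts(L, a1=0.60, a0=1.5))
```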
Application to images
The same method was applied directly to the digital counts of Landsat 2003 and Spot 2004 images. The images were processed using the satellite image processing software PCSATWIN developed by (Bachari et al., 1997).
Calculation of the reflectance
The global luminance reaching the satellite is expressed by a relation in which ρ_λ is the reflectance at the sea surface and E is the total illumination received by the surface.
The average reflectance is related to the luminance by this relation. The conversion factors obtained by the radiation modeling for the Spot channels (XS1, XS2, XS3) and the Landsat channels (TM1, TM2, TM3, TM4) are given in Table 1 below (Bachari, 2006).
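Converting an at-surface luminance into a reflectance typically divides by the illumination received by the surface; the sketch below uses the common Lambertian form rho = pi * L / E, which is stated here as an assumption since the exact relation used by the authors is not reproduced in the text.

```python
import numpy as np

def reflectance(radiance, irradiance):
    """Lambertian surface reflectance: rho = pi * L / E (dimensionless)."""
    return np.pi * np.asarray(radiance) / np.asarray(irradiance)

L = np.array([18.0, 32.5, 21.0])     # placeholder at-surface radiances (W m-2 sr-1 um-1)
E = np.array([540.0, 610.0, 480.0])  # placeholder band irradiances (W m-2 um-1)
print(reflectance(L, E).round(3))
```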
Quality of radiometric corrections
According to the Rouquet criterion, we can estimate the quality of the correction by comparing the properties of the raw and corrected images (Morel et al., 1977).
For a given image, the atmospheric effect is minimal if the contrast and the ratio of the standard deviation to the mean are maximal. This quantitative criterion is also applied systematically for the selection of good-quality data and constitutes a primary method of atmospheric correction, which tends to minimize atmospheric effects.
Fig. 17: The digital count -reflectance at ground level
There is also a primary method of atmospheric correction, which tends to minimize atmospheric effects. The properties of the raw and corrected images are collected in the following table (Bachari, 1999). From a simple analysis of the results in Table 2, we note that the criterion used is satisfied for the corrected images. The application of the atmospheric corrections is shown in the graph defining the linear relationship between the calibrated reflectance and the digital count for the two Spot channels XS1 and XS2, for the corrected and uncorrected images (Fig. 18). This simple correction method amounts to replacing the linear calibration by another linear relationship applied to both SPOT images; this new calibration relationship yields the corrected measured reflectances. In the histogram of the coefficient of variation, we wish to point out that the corrected first channel has a high coefficient; this can be explained by the fact that the effect of the correction is felt more strongly in this channel than in the other two. In the third channel (NIR), Mie and Rayleigh scattering are felt less strongly.
MODELLING OF SATELLITE MEASURE UNDER SEA WATER
Knowledge of the topography of the seafloor is important for several applications. The principle of bathymetry measurement necessarily relies on a reflectance model relating the intensity of the radiometric signal measured by the satellite to the depth; it can call on the physical method, which requires knowledge of all the parameters governing this model (optical properties of the water, coefficient of reflection of the bottom, transmittance of the atmosphere) (Minghelli-Roman et al., 2007).
The model provides a single-channel image in which each pixel of the maritime domain is represented not by an in-situ radiometry but by a calculated depth. In general, the use of hybrid multiple Spot band regression algorithms is superior to the exclusive use of any single band (Bachari et al., 2008; Houma et al., 2010).
The spectral distribution of the submarine radiance varies in a complex way with the depth, in relation with the selective character of the attenuation.
The total signal received by a sensor operating at high altitude above the water can be decomposed, in a first step, into two terms: an atmospheric component and a water component S_eλ. In a second step, the water component measured near the surface can itself be analyzed into a reflectance of the bottom S_fλ in shallow waters and a component S_dλ due to the diffuse reflection by the water volume. In the case of Spot channel 2, the sensitivity of this channel to bottom effects can reach 10 meters. For Spot channel 1, the bottom effect can reach depths exceeding 30 meters. For TM1, the bottom effect can reach depths of 40 to 50 meters.
The quantities CN2 and CN1 are luminances corrected for atmospheric effects. In this case we removed the point that presents a maximum of SM; this singular point presents an anomaly that indicates a convergence of currents.
Note that the variable Z is only moderately related to the two luminances. We removed the data corresponding to depths greater than 60 m.
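One common way to realize the multi-band regression mentioned above is a Lyzenga-type model, in which the depth is regressed on the logarithms of the atmospherically corrected, bottom-reflected parts of two bands. This is a generic sketch with synthetic values, not the authors' calibration; it assumes the bottom signal decays roughly as exp(-2kz), consistent with the attenuation coefficient k and depth z appearing in the model parameters.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
z = rng.uniform(1.0, 30.0, n)  # known calibration depths (m), placeholders

# Synthetic corrected luminances: deep-water signal plus a bottom term
# attenuated as exp(-2 k z), with a different k for each Spot channel.
cn1 = 8.0 + 40.0 * np.exp(-2 * 0.07 * z) + rng.normal(0, 0.3, n)
cn2 = 5.0 + 35.0 * np.exp(-2 * 0.20 * z) + rng.normal(0, 0.3, n)

# Lyzenga-style predictors: log of the bottom-reflected part of each channel,
# floored to stay positive where the signal approaches the deep-water value.
b1 = np.maximum(cn1 - 8.0, 1e-3)
b2 = np.maximum(cn2 - 5.0, 1e-3)
X = np.column_stack([np.ones(n), np.log(b1), np.log(b2)])
coef, *_ = np.linalg.lstsq(X, z, rcond=None)

z_hat = X @ coef
print("coefficients:", coef.round(3))
print("RMSE (m):", np.sqrt(np.mean((z_hat - z) ** 2)).round(2))
```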
3 . 3 I••
the functions of the transmittance of the atmosphere for a wavelength λ of molecular diffusion (Rayleigh), mitigation of aerosols, the absorption of water vapor, the absorption of ozone and gas absorption The direct irradiation on a horizontal surface is obtained from the equation multiplied Spectral diffuse irradiance a-The diffuse irradiance on a horizontal surfaceThe diffuse irradiance on a horizontal surface is based on three components:• Component of Rayleigh scattering λ r Release component aerosol λ a I Component that takes into account multiple reflections of light between the ground and air.The total λ s I diffuse illumination is given by the sum.diffuse illumination on an inclined surface The global spectral irradiance on an inclined surface is represented by:
Figure 4 shows that the maximum solar irradiance occurs at a solar zenith angle of 0°: when the sun is overhead (at noon) the solar intensity is highest, and the irradiance decreases with increasing solar zenith angle.
Fig. 5: Variation of solar irradiance with optical thickness.
Fig. 6: Variation of irradiance with precipitable water vapor. Figure 6 shows that the variation of water vapor affects the irradiance only weakly; the effect is, however, appreciable when compared with the influence of the day of the year on the irradiance.
Notation:
• gain factor of the system in channel b (sensor sensitivity);
• Tλ(b): atmospheric transmittance from the earth to the satellite in channel b;
• radiance reflected from the surface of the water;
• τλ: total spectral transmittance;
• radiance calculated in sea water for channel λ;
• Δλ: spectral band of the channel;
• δλ: sensitivity of the channel;
• τaλ: optical thickness of selective absorption;
• τRλ: optical thickness of scattering by small (molecular) particles, named Rayleigh scattering;
• τdλ: optical thickness of scattering by medium-sized particles, named Mie scattering.
Fig. 7: Contribution of the atmosphere in the Spot XS spectral channels.
Fig. 13: The effect of relative humidity on the radiance.
Fig. 18: Linear calibration of reflectance before and after atmospheric correction.
Fig. 19: XS1 image of the bay of Oran. Fig. 20: Corrected XS1 image of the bay of Oran.
The statistical properties of the corrected images are presented in the following histograms:
Fig. 22: Coefficient of variation of the three images corrected for atmospheric effects.
Fig. 23: Radiometric correction in the three Spot channels of the bay of Algiers.
The two terms are an atmospheric component and a water component Seλ. In a second step, the water component measured near the surface can itself be decomposed: Sfλ is the reflectance of the bottom in shallow waters, and Sdλ is a component due to diffuse reflection by the water volume. The model parameters are Ra, the reflectance of the bottom; k, the attenuation coefficient of the sea water; z, the depth; ω0, the scattering albedo of the water molecules; θz, the zenith angle; and θv, the viewing angle of the sensor.
Fig. 24: Variation of the Spot XS radiances with depth.
Table 1: Conversion factors between digital counts and simulated reflectance.
Table 2: Statistical data of the Spot HRV radiometric channels | 7,431.2 | 2023-10-23T00:00:00.000 | [
"Environmental Science",
"Physics",
"Engineering"
] |
Unraveling the Intricate Link: Deciphering the Role of the Golgi Apparatus in Breast Cancer Progression
Breast cancer represents a paramount global health challenge, warranting intensified exploration of the molecular underpinnings influencing its progression to facilitate the development of precise diagnostic instruments and customized therapeutic regimens. Historically, the Golgi apparatus has been acknowledged for its primary role in protein sorting and trafficking within cellular contexts. However, recent findings suggest a potential link between modifications in Golgi apparatus function and organization and the pathogenesis of breast cancer. This review delivers an exhaustive analysis of this correlation. Specifically, we examine the consequences of disrupted protein glycosylation, compromised protein transport, and inappropriate oncoprotein processing on breast cancer cell dynamics. Furthermore, we delve into the impacts of Golgi-mediated secretory routes on the release of pro-tumorigenic factors during the course of breast cancer evolution. Elucidating the nuanced interplay between the Golgi apparatus and breast cancer can pave the way for innovative therapeutic interventions and the discovery of biomarkers, potentially enhancing the diagnostic, prognostic, and therapeutic paradigms for afflicted patients. The advancement of such research could substantially expedite the realization of these objectives.
Introduction
Camillo Golgi is credited with the identification of the Golgi apparatus, a fundamental organelle inherent to eukaryotic cells [1].Characterized by its intricate and dynamic nature, the Golgi apparatus is pivotal in an array of cellular activities, predominantly protein modification, segregation, conveyance, and packaging before its designated delivery to specific intracellular locales.Structurally, the organelle comprises overlapping membranous sacs, termed cisternae, which possess a unique architectural design optimized for proficient modification and packaging before routing to defined cellular destinations.
Acting as a central hub, the Golgi apparatus is instrumental in the processing and classification of diverse soluble proteins and lipids, directing them to their intended cellular destinations [2].Given its seminal position in the secretory continuum, any perturbation in its architecture or functionality can gravely impact cellular protein and lipid equilibrium.Notably, a growing body of research has demonstrated that aberrations in the Golgi apparatus are implicated in a spectrum of conditions ranging from neurodegenerative maladies [3][4][5][6][7] to ischemic strokes, cardiovascular ailments, pulmonary arterial hypertension, infectious diseases, and malignancies.
Intrinsically adaptable, the Golgi apparatus possesses the capacity to rapidly recalibrate in response to evolving cellular demands and extracellular cues.This involves undergoing morphological transitions, such as reorganization, fragmentation, and integration with ancillary organelles, to aptly address variances in protein production and cellular requisites.In this context, the Golgi apparatus engages in complex interplays with other cellular structures, notably the endoplasmic reticulum (ER) and endosomes, orchestrating a sophisticated intracellular transport and communication matrix [8].
The indispensability of the Golgi apparatus in cellular operations underscores the ramifications of its dysfunction on human physiology and health.Disruptions or anomalies in its form, functionality, protein shuttling, or associated metabolic pathways have been pinpointed as etiological agents in a diverse array of pathologies, including malignancies, neurodegenerative diseases, and metabolic anomalies.Consequently, deepening our comprehension of its operational dynamics and significance is imperative for unveiling the underpinnings of these disorders and crafting targeted therapeutic modalities for their efficacious management.
Golgi Sorting, Protein Trafficking, and Glycosylation Abnormalities
Preserving the structural coherence of the Golgi apparatus is paramount for its optimal operation, as structural perturbations could usher in an array of pathologies. Operational aberrations of the Golgi apparatus encompass modifications in its pH equilibrium, anomalous glycosylation trajectories, and compromised membrane transport. Notably, fragmentation of the Golgi has been postulated as a precursor event in cellular apoptosis [9,10]. In scenarios of pharmacological or oxidative duress, the Golgi apparatus undergoes transformations, such as cargo saturation, ion concentration disequilibrium, and irregular luminal acidity, which collectively can induce membrane transport defects. We have coined the term "Golgi stress" to encapsulate this specific Golgi apparatus response, and two well-discussed molecular pathways are the structural preservation of the Golgi apparatus by the TFE3 (transcription factor binding to IGHM enhancer 3) pathway and the proteoglycan pathway, which upregulates the expression of glycosylation enzymes [11].
Glycosylation stands as a pervasive posttranslational modification of proteins and plays a pivotal role in protein-mediated signaling.The glycans situated at glycosylation loci can span a spectrum in terms of complexity, from singular sugar chains to polymers boasting over 200 sugar units.Furthermore, glycans can be subjected to auxiliary modifications, encompassing the addition of entities like phosphate, sulfate, acetate, or phosphorylcholine for further diversification.It is noteworthy that a multitude of glycans manifest branch-like structures.An N-glycan entity can house up to six branches, each embedded with several recurrent disaccharide segments.The work by Stanley et al. (2011) offers insights into the traits and operations of Golgi glycosyltransferases (GTs), encompassing their activity spectra from their initiation at the cis-Golgi to their passage through the trans-Golgi network (TGN) [12].
The glycosylation of proteins is executed at two discrete intracellular locales, each defined by unique attributes.Proteins resident in the cytosol and nucleus undergo O-GlcNAcylation, wherein singular sugar entities termed N-acetylglucosamine (GlcNAc) directly bind to serine or threonine amino acids.This mechanism is instrumental in finetuning protein interactions, stability, functionality, and a gamut of cellular undertakings such as transcription, metabolism, apoptosis, and organelle genesis and transport [13,14].In contrast, within the ER and Golgi apparatus lumen, secretory and transmembrane proteins are subjected to glycosylation by affixing specific glycosaccharides, or glycans, to particular amino acid chains.This modus operandi facilitates their functional diversification, allowing them to partake in multifarious cellular events [15].
The Golgi apparatus houses an array of glycosylation enzymes capable of either cleaving monosaccharides (glycosidases) or attaching them (GTs).Intriguingly, these enzymes can form both heteromeric and homomeric assemblies [16].Structurally, GTs are membranebound proteins characterized by a brief N-terminal segment, a singular membrane domain, and a luminal domain.Due to this intricate configuration, they frequently establish enzyme complexes with other active enzymes within specific glycosylation pathways.N-glycosyltransferases within the Golgi can manifest in either homomeric or heteromeric groupings.The cyclical process of these GTs entails transitions influenced by the microenvironment, oscillating between heteromeric and homomeric states.While homomeric enzyme formations are pivotal in facilitating the folding and transportation of GTs to the Golgi apparatus, the more active heteromers are predominantly utilized for streamlined glycosylation [17,18].Noteworthy GTs include GalNAc-T2 (N-acetylgalactosaminyltransferase-2) and GalT (β1,4-galactosyltransferase), which, due to their specificity for the Golgi apparatus, highlight that any depletion of juxtanuclear Golgi staining might be indicative of the organelle's attributes and the associated membrane proteins [19].
A dysfunctional Golgi glycosylation process has been associated with invasive behavior in various cancer types, encompassing prostate and breast malignancies [20,21].The glycosylation process within the Golgi plays a cardinal role in numerous oncogenic molecular and cellular sequences, such as signal transduction, cellular communication, dissociation and invasion of cancer cells, cell-matrix attachment, angiogenesis, immunomodulation, and metastasis [22].Analogous to the function of epithelial cadherin in mediating epithelial cellular cohesion, the Golgi-mediated glycosylation of N-linked glycans on epithelial cadherin might influence the epithelial-to-mesenchymal transition, thereby catalyzing the emergence of metastatic outgrowths.Such a mechanism is postulated to facilitate the migratory capacity of neoplastic cells from their inception point, be it during reparative processes post-injury or other standard physiological events, and becomes instrumental in the metastatic spread and proliferation of cancer [8,23].
The GOLPH3 complex, recognized as Golgi phosphoprotein 3, stands as a pivotal molecular entity in the realm of Golgi-facilitated oncogenesis.Its centrality in cancer can be attributed to a myriad of critical functionalities.GOLPH3 not only orchestrates Golgi glycosylation pivotal for the cancerous phenotype manifestation but also amplifies the DNA (deoxyribonucleic acid) damage response, bolstering survival amidst DNAinjurious scenarios.Additionally, it synergizes with retromer elements to enhance the mTOR (mammalian target of rapamycin) signaling upon growth factor induction and facilitates cell motility by orienting the Golgi apparatus toward the cellular forefront.Beyond GOLPH3, the Golgi spectrum hosts another consequential protein, GM130 (Golgi matrix protein 130).Integral to Golgi glycosylation and membranous protein trafficking, the downregulation of GM130 culminates in autophagy, diminished angiogenesis, and suppressed tumorigenesis [6,[23][24][25].
Dysregulated Golgi glycosylation not only holds implications for carcinogenesis but might also propel cancer progression.Given the intertwined nature of Golgi-related operations and oncology, delving into and therapeutically targeting these processes should be foundational in cancer research endeavors.
Divergences in glycosylation can engender alterations in the conformation and function of numerous membranous proteins, with particular significance to collagen, fibronectin, integrins, and laminin at the extracellular interface.The paramount role of transmembrane integrins lies in fortifying the cytoskeleton via myriad cell-cell and cell-matrix interactions, thereby catalyzing cellular maturation and proliferation [26].Glycosylation aberrancies might culminate in the flawed anchorage of these proteins, engendering a plethora of pathologies encompassing neurodegenerative conditions, malignancies, and cardiovascular afflictions [27][28][29].
The Golgi apparatus, with its cardinal role in modulating core cellular mechanisms, like adhesion and migration, stands as a keystone in the panorama of cancer evolution and metastatic dissemination.A prominent influencer in these oncogenic processes is identified as phosphatidylinositol 4-phosphate (PI4P).Hence, its role in human breast cancer can markedly sway cell-cell adhesion and migratory patterns [24].
Recent investigations underscore the paramount regulatory role of PI4P in the structural and functional intricacies of the Golgi apparatus, notably affecting glycosylation and the trafficking of proteins pivotal to cell-cell adhesion.By modulating the Golgi PI4P concentrations, the localization and activity of cardinal adhesion molecules, such as E-cadherin, are affected, thereby reshaping the intensity and dynamics of cell-cell interactions.Beyond its role in adhesion, PI4P governs activities linked with enzymes crucial for the synthesis or restructuring of glycosphingolipids, which are indispensable for cell surface interactions and signaling modalities.Furthermore, PI4P is integral in governing invasive cellular motility, a critical phenomenon in oncologic metastasis.Its regulatory role in Golgi-centric vesicular trafficking and membranous dynamism facilitates the modulation of invasive cell polarization and protrusive activities, augmenting their migratory and invasive propensities.In this orchestration, PI4P collaborates with a cohort of Golgi-associated proteins and lipid-mediated signaling pathways to modulate cytoskeletal transformations and matrix degradation, thereby facilitating the metastatic voyage of cancerous cells [21,30].
The traversal of cargoes through the Golgi apparatus is a multifaceted event and remains a focal point of discourse in the scientific literature.This review sheds light on five contemporary models postulated for assessing Golgi traffic, weighing their merits and demerits.The inaugural model posits anterograde vesicular transport amidst stable compartments of the Golgi.Conversely, the second hypothesis advocates for cisternal progression/maturation, wherein Golgi cisternae transition through sequential maturation phases.The third paradigm fuses progression/maturation with heterotypic tubular conveyance between cisternae.The penultimate model champions swift protein partitioning within a heterogenous Golgi and the terminal model envisions stable compartments as precursors for ensuing cisternal development.
A meticulous analysis reveals that no singular model can holistically encapsulate all documented phenomena across varied organisms.It might be more tenable to perceive cisternal progression/maturation as a foundational and evolutionarily conserved mechanism governing Golgi traffic.Certain cellular systems might integrate heterotypic tubular transport within Golgi cisternae.A judicious exploration of these models will illuminate the intricate facets of Golgi traffic, bestowing deeper insights into its operational mechanisms and elucidating this quintessential cellular undertaking.Grasping its foundational tenets is indispensable for decoding its influence on cellular equilibrium as well as pathological states linked to protein trafficking or excretion [31].
Golgi Apparatus Involvement in Breast Cancer
Breast cancer, a pressing global health challenge, is a leading cause of female mortality, and its prevalence is anticipated to surge in the forthcoming years. Diagnostic techniques like mammography and clinical breast inspections are pivotal for its early identification. While therapeutic modalities encompass surgical interventions, chemotherapy, and radiation treatments, each comes with a set of concerns. Chemotherapy, despite its efficacy in neutralizing cancerous cells, presents a suite of adverse reactions. Radiotherapy, typically paired with surgery, may inflict enduring harm on critical organs. More promising therapeutic avenues encompass the deployment of anti-ErbB2 antibodies, exemplified by trastuzumab, especially for HER2-positive breast cancer variants. Additionally, antiestrogens and aromatase inhibitors serve to suppress the expression of estrogen-associated genes, proffering treatment avenues with diminished side effects [32,33].
The Rab GTPases, pivotal orchestrators of vesicular transportation, hold profound implications for the malignancy and invasiveness of cancer cells.Delving into estrogen receptor-positive breast cancer cellular frameworks reveals the instrumental role of Rab27B.Its heightened expression correlates with an augmented cellular elongation and an escalated invasiveness when interacting with collagen matrices.Such effects can be counteracted through miRNA-mediated interventions.Moreover, the amplification of Rab27B expression bears a direct relation to the surge in HSP90 alpha expression, a molecular custodian pivotal for upholding the structural integrity of MMP2 [34,35].
Rab40B's influence is palpably seen in maneuvering the trafficking pathways of metalloproteases MMP2 and MMP9 within the MDA-MB-231 breast cancer cellular context, facilitating the degradation of the external cellular matrix.Another metalloprotease, MT1-MMP, falls under the regulatory domain of Rab2A, which further fuels metastatic behaviors via its interaction with the VPS39 protein and is crucial for the amalgamation and clustering of late endosomes/lysosomes.
SiRNA screening has unmasked Rab2A's regulatory influence over the Golgi transport mechanisms of surface E-cadherin in breast cancer cells.Given the pivotal role of E-cadherin loss as an oncogenic transformation indicator, these revelations accentuate the significance of Rab GTPases in dictating vesicular transportation mechanisms.Such processes wield influence over cellular structural dynamics, invasion capacities, and external matrix degradation in these particular breast cancer cells [36,37].Refer to Figure 1 for a visual representation.
As highlighted in the cited study [38], GOLPH3's pronounced overexpression in breast cancer cells and tissues contrasts starkly with its presence in normal breast tissue. Escalated GOLPH3 levels correlate with advanced tumor development, metastatic spread, and a grim prognosis for breast cancer sufferers.
In the breast cancer scenario, GOLPH3 emerges as a pivotal entity, underpinning cancer cell proliferation and longevity by modulating its DNA damage response apparatus. A noteworthy interaction of GOLPH3 is with ATM (ataxia-telangiectasia mutated), a quintessential DNA damage response protein located at the Golgi. This interaction amplifies survival rates, rendering cancer cells more resilient against DNA damage-driven cellular demise. Such a mechanism equips cancer cells with heightened resistance against the genotoxic assaults unleashed by treatments like chemotherapy and radiation [39].
Furthermore, GOLPH3's tentacles extend into cancer progression by exerting regulatory control over a slew of signaling pathways, notably the PI3K-AKT-mTOR axis.GOLPH3 bolsters AKT activation, a kinase pivotal for cell proliferation and survival.This ultimately accelerates tumor growth and endows them with fortified resistance against treatments.From a therapeutic lens, targeting GOLPH3 emerges as a promising stratagem in the battle against breast cancer.A nuanced inhibition of its expression or functional prowess could prime cancer cells for increased susceptibility to DNA-damaging agents, effectively crippling tumor proliferation and metastatic spread [40].
In breast cancer patients, a surge in gene expression linked to ER-Golgi transport processes is evident, exemplified by genes like ARF4, COPB1, and USO1.These genes play an instrumental role in ferrying proteins between the ER and Golgi apparatus.To elucidate further, COPII vesicles shepherd proteins from the ER to the Golgi, whereas ARFs pilot the retrograde journey from the Golgi to the ER, which is facilitated by COPI vesicle formation [41].
Delving into the transportation dynamics of these genes reveals intriguing insights.The overexpression of ARF4, COPB1, and USO1 accelerates protein shuttling from the ER to Golgi.Introducing biotin amplifies this trafficking tempo even more, hinting at the pivotal role these ER-Golgi trafficking genes play in optimizing transportation kinetics [41].
A comprehensive meta-analysis of breast cancer cells also unravels that the expression patterns of ARF4, COPB1, and USO1 are orchestrated by the CREB3-like transcription factors.This harmonious co-expression, when disrupted, wreaks havoc on the cellular adhesive capacities, mobility, invasion potential, and the overarching metastatic traits of cancerous cells [32].
The discoveries highlight the pivotal roles that ARF4, COPB1, and USO1 undertake in breast cancer cell proliferation and invasiveness.Their significance underscores their role as key contributors to the disease's progression.Their paramount importance is further spotlighted through their integral roles in breast cancer evolution, particularly via the ER-Golgi trafficking mechanisms [42].
CREB3, a transcriptional architect, is instrumental in governing the traffic between the ER and Golgi apparatus.Its implications for breast cancer metastasis are a subject of fervent research.The quest to understand gene expression footprints steered by CREB3-mediated ER-Golgi trafficking unveils repercussions on the metastatic journey of breast cancer [43].
Evidence affirms that CREB3 activation spurs the upregulation of genes that participate in ER-Golgi trafficking, prominently featuring constituents of the COPII and COPI vesicle transportation networks.This unique genetic footprint, orchestrated by CREB3, is tethered to enhance metastatic capabilities in breast cancer cells.A slew of experimental methodologies was deployed in this research, which strove to pinpoint the direct nexus between CREB3-driven trafficking and the invasiveness inherent to breast cancer cells [44].
Moreover, the CREB3-directed trafficking signature has been painted as a harbinger of grim clinical outcomes in breast cancer patients.An upsurge in the expression of signature genes coincides with an elevated risk of metastasis and a dip in overall survival rates.This spotlight on the clinical significance accentuates its potential as a harbinger of disease prognosis in breast cancer [21].
In addition, genetic and epigenetic deviations in loci affiliated with the Golgi apparatus, such as CCDC170 (Coiled-Coil Domain Containing 170), have been entwined with susceptibilities to breast cancer.The unraveling of Golgi microtubule organization by proteins, like CCDC170, can culminate in anomalies in cell polarity and motility.These facets are quintessential for the invasiveness and metastatic prowess inherent to cancer cells [45].
Estrogen-Mediated Regulation of Protein Transcriptome: Impact on Vesicular Trafficking and Giant Vesicle Formation in Breast Cancer
Giant vesicles (GVs) are vesicles, located either inside or outside cells, that range in size from 3 to 42 µm and play a pivotal role in tumor proliferation. These vesicles originate mainly from ERα (estrogen receptor alpha)-negative breast cell lines and predominantly reside at the cell's edge [46].
Estrogen, pivotal in steering the transcriptome of proteins involved in vesicle movement and GV formation in breast cancer cells, regulates gene expressions essential to these processes.By modifying this transcriptome, estrogen directs the creation of vital proteins for vesicle movement and GV formation, thereby fueling growth and disease progression.As such, estrogen stands as a chief architect in this molecular realm and is linked with vesicle movement and GV formation.Estrogens, as paramount female sex hormones, are intrinsic to many physiological and pathological functions and hold a significant role in the onset of breast cancer.They operate by docking onto nuclear estrogen receptors, subsequently reshaping gene expressions.Notably, genes dictated by estrogen have been identified to sway various dimensions of cancer cell movement [47][48][49].
Two gene standouts, SYTL5 (Synaptotagmin-like 5) and RAB27B, regulated by 17β-estradiol, are central to vesicle movement and exocytosis. SYTL5 functions as an intermediary molecule, collaborating with the GTPases RAB27A/B. An increased presence of these genes has been documented in estrogen receptor-positive breast cancer cell lines, emphasizing their association with vesicle movement [50].
Wright et al. identified a unique vesicle species in breast cancer cells, known as the GV. In breast cancer cells that robustly express estrogen receptor alpha, such as MCF-7 and T47D, GV genesis relies on estradiol. In contrast, ERα-negative cells such as MDA-MB-231 and MDA-MB-468 were untouched by estradiol with respect to GV formation. In the presence of ERα, however, estradiol instigated the formation of estradiol-dependent GVs, hinting at a possible route by which estradiol might spark this formation through ERα expression [52].
In essence, genes guided by estradiol and their interplay in vesicle movement across diverse frameworks considerably drive breast cancer cell growth and spread.This provides a compelling narrative on the multifaceted relationship between estrogen cues, gene orchestration, and vesicle movement in breast cancer.These discoveries elucidate the sophisticated dance between estrogen signaling, gene oversight, and vesicle movement, shedding light on their role in the progression of the disease [53].
Inhibition of Golgi-Associated Lipid Transfer Proteins (LTPs) as Potential Targets for Disease Intervention
The Golgi complex (GC) is pivotal in lipid biosynthesis and distribution.This incorporates both vesicle transport and non-vesicular pathways via Lipid Transfer Proteins (LTPs) like CERT (ceramide transfer protein), OSBP (oxysterol-binding protein), and FAPP2 (fourphosphate adaptor protein 2).Each of these proteins boasts distinct transport capabilities: CERT shuttles ceramide, OSBP transfers cholesterol, and FAPP2 moves GlcCer.All these proteins carry an N-terminal PH domain, enabling them to bind with PI4P for effective delivery within the GC.Some inhibitors targeting these processes are emerging as potential antiviral and anticancer therapeutics [54].
CERT specializes in carrying ceramide from the ER to the TGN, where it undergoes conversion to sphingomyelin [55].Its inhibition or depletion results in ceramide accumulation, catalyzing ceramide-induced ER stress.This phenomenon primes various cancer cells, including ovarian, colorectal, and HER2-positive breast cancer cells, for enhanced vulnerability to chemotherapy.HPA-12, a CERT inhibitor, works by hampering CERT's recruitment during viral or parasitic invasions.This leads to augmented ceramide levels, rendering cancer cells more susceptible to paclitaxel-induced cell death, especially in contexts where these cells are resistant to traditional paclitaxel treatments or when there are infections or parasitic interferences [56].
OSBP1, a specialist in binding oxysterol, is central to the swap of cholesterol for PI4P between the ER and GC.OSBP, alongside ORP4L (oxysterol binding protein (OSBP)related protein 4L), has been pinpointed as a target for various anticancer agents, including ORPphilin compounds like cephalostatin 1, OSW-1, Ritterazine B, and Schweinfurthin A, due to their profound effects on lipid metabolism.Moreover, Itraconazole, primarily an antifungal, demonstrates anticancer efficacy by targeting OSBP.Summarily, the GC's equilibrium in lipid concentrations hinges on both vesicle-based transport and the action of lipid transfer agents, like CERT and OSBP.Strategies that target these proteins are emerging as promising avenues in the development of novel anticancer and antiviral treatments [57].
Rho-Related BTB Domain Containing 1 (RhoBTB1) Drives Breast Cancer Growth and Metastasis through Methyltransferase-like 7B (METTL7B) Regulation
In breast cancer cells and patient samples, there is a marked downregulation of RhoBTB1 (Rho-Related BTB Domain Containing 1) expression, hinting at its probable tumor suppressor properties.Scientific inquiry reveals that a lack of RhoBTB1 leads to the disintegration of the Golgi structure, compromising its functions.This results in anomalies in protein glycosylation and associated trafficking pathways [58].
Adding another layer, METTL7B (Methyltransferase-like 7B) operates downstream of RhoBTB1.Recognized for its role in protein glycosylation, METTL7B experiences an uptick in its expression in breast cancer cells.Its heightened presence is, unfortunately, an indicator of grim outcomes for cancer patients.Through various functional studies, it is discerned that a surge in METTL7B expression counteracts the invasive tendencies of breast cancer cells and concurrently rectifies the Golgi disintegration brought on by RhoBTB1 insufficiency [59].
This interplay implies that RhoBTB1 acts as a brake on METTL7B, ensuring the structural and functional integrity of the Golgi apparatus and curtailing the invasion of breast cancer cells.Disturbances in this delicate balance seem to facilitate aggressive tendencies in breast cancer cells [60].
The narratives surrounding RhoBTB1 and RhoBTB2, and their associations with breast cancer, are subjects of intrigue, but their precise roles remain shrouded in mystery.RhoBTB3's part in breast cancer deterrence is even less understood.To delve deeper into these ambiguities, this research utilized bioinformatics tools, like Oncomine and cBioportal, and aimed to elucidate the roles and potential prognostic value of RhoBTB3 and Col1a1 in the context of breast cancer.Comprehensive methodologies including qRT-PCR analysis, immunoblotting assays, and a range of functional tests, such as invasion and proliferation assays, and flow cytometry were employed.The endgame was to pinpoint their contribution to the trajectory of breast cancer and evaluate their worth as prognostic indicators [61].
Possible Therapeutic Targets Regarding Breast Cancer
Breast cancer presents itself as a multifaceted ailment that is characterized by diverse subtypes and molecular variations.This heterogeneity necessitates a wide spectrum of treatment approaches targeting different pathways and molecules (Table 1).A breakdown of these targeted pathways and molecules follows.Breast cancers characterized by excessive HER2 expression are labeled "HER2-positive".This overexpression catalyzes rapid cell growth and survival.Targeted therapies, specifically designed to obstruct HER2 signaling, are the countermeasures employed here.Drugs like trastuzumab (Herceptin), pertuzumab (Perjeta), and ado-trastuzumab emtansine (Kadcyla) are paramount in battling these types of breast cancers [63].
When it comes to both estrogen receptor-positive and HER2 breast cancers, there is an interesting dynamic at play involving SCAMP1 (Secretory Carrier-Associated Membrane Protein 1).Acting in post-Golgi recycling pathways, this protein, deemed a tumor suppressor, enhances the trafficking of MTSS1 (metastasis suppressor protein 1) to the cell's surface.Once there, MTSS1 kickstarts Rac1-GTP, promoting cell-cell adhesion, thereby forming a defensive front against cancer progression and invasion [64].
Lastly, for breast cancers carrying mutations in the BRCA1/2 genes, Poly (ADPribose) Polymerase (PARP) inhibitors come into play.Drugs like olaparib, talazoparib, and niraparib capitalize on DNA repair deficiencies in these cancers.By targeting cells that show homologous recombination inadequacy, they demonstrated significant clinical benefits for patients with BRCA mutation-positive breast cancer cases [65].
Breast cancer's complexity necessitates a wide array of targeted therapeutic interventions to tackle its multifarious molecular pathways.Several key molecules and pathways in breast cancer therapy include: Cyclin-Dependent Kinase 4/6 (CDK4/6)-Drugs such as palbociclib, ribociclib, and abemaciclib, have been developed to inhibit CDK4 and CDK6 activities, which are pivotal for the progression of the cell cycle.Their introduction has yielded promising results, especially for patients with hormone receptor-positive and HER2-negative advanced breast cancer, showcasing significant improvements in progression-free survival [66].
The PI3K/AKT/mTOR Pathway-A recurrent aberration observed in breast cancer patients is a malfunction in this particular signaling pathway.As a consequence, inhibitors targeting its constituents, like PI3K inhibitors (e.g., alpelisib) and mTOR inhibitors (e.g., everolimus), have emerged as particularly potent against certain breast cancer variants [67].
Immune Checkpoint Inhibitors-These are groundbreaking agents, such as pembrolizumab and atezolizumab.They are engineered to target immune checkpoints, notably PD-1/PD-L1 (Programmed Cell Death Ligand 1), amplifying the immune system's capacity to identify and destroy cancer cells.Their efficacy has been particularly notable in cases of advanced triple-negative breast cancer [68].
Moreover, it is worth highlighting that the Golgi apparatus has sparked interest as a potential molecular target in breast cancer therapy.A growing number of therapeutic avenues, including antibody-drug conjugates, nanoparticles equipped to deliver antibody drugs, conjugate drug therapies, and advanced immunotherapies, are under scrutiny to optimize cancer patient outcomes [69].The selection of a treatment strategy is contingent on myriad factors, including the patient's tumor subtype, its stage, and substage classification.
Conclusions
Breast cancer's progression and initiation intricately intertwine with the functioning of the Golgi apparatus.Alterations in its structural and operational framework have deep implications, manifesting in the acceleration of tumor growth, invasive behavior, and metastasis.A significant causative factor is the malfunctioning of Golgi-associated proteins, like matrix proteins, lipid transporters, and resident enzymes, leading to anomalies in glycosylation, protein trafficking, and signal relay-fundamental hallmarks of breast cancer.
CCDC170 stands out as a critical genetic component interlinked with the Golgi system with profound effects on breast cancer proliferation.Any perturbations within this structure might trigger cellular transformations, escalating the risk of cell invasion and potential metastasis.
Golgi-associated proteins, extending from matrix proteins to lipid transfer agents and enzymes resident within the Golgi, are complicit in fostering tumor growth, invasiveness, and metastatic spread.They modulate vital cellular functions, including adhesion, migration, and proliferation-fundamental processes underpinning the progression of breast cancer.
The intricate nexus between the Golgi apparatus and breast cancer is a treasure trove of insights.It reveals the underlying molecular intricacies and illuminates potential therapeutic interventions.By zeroing in on specific Golgi-mediated processes, such as protein glycosylation, vesicle transport, and interactions between Golgi and microtubules, we unveil novel therapeutic horizons.These might halt tumor expansion, avert metastatic spread, and enhance the prognosis for breast cancer patients.
However, the enigma of the Golgi apparatus and its relationship with breast cancer demands further exploration.By deciphering its elaborate processes and their role in cancer evolution, we are better poised to develop advanced diagnostic instruments, tailor treatments more precisely, and ensure a brighter prognosis for those battling breast cancer.
Figure 1. This diagram depicts how Rab27B, Rab2A (in a late-stage endosome form), and Rab40B from the Rab family of proteins, formed by the Golgi apparatus, are involved in breast cancer. After exocytosis, those Rab proteins play a key role in ECM (extracellular matrix) degradation, a representative element in the cancer microecosystem, which will further determine cell proliferation, tumoral invasion, and the formation of metastatic masses.
Table 1. Pharmaceutical agents for breast cancer treatment with Golgi apparatus implications. In estrogen receptor-positive breast cancers, the cancer cells thrive on estrogen signals. To curtail this growth, endocrine therapies are deployed. Notable examples include selective estrogen receptor modulators, like Tamoxifen, and aromatase inhibitors, such as Letrozole, which have proven adept at inhibiting cancer cell proliferation [62]. | 6,841.8 | 2023-09-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
Automatic morphometry in Alzheimer's disease and mild cognitive impairment
This paper presents a novel, publicly available repository of anatomically segmented brain images of healthy subjects as well as patients with mild cognitive impairment and Alzheimer's disease. The underlying magnetic resonance images have been obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. T1-weighted screening and baseline images (1.5 T and 3 T) have been processed with the multi-atlas based MAPER procedure, resulting in labels for 83 regions covering the whole brain in 816 subjects. Selected segmentations were subjected to visual assessment. The segmentations are self-consistent, as evidenced by strong agreement between segmentations of paired images acquired at different field strengths (Jaccard coefficient: 0.802 ± 0.0146). Morphometric comparisons between diagnostic groups (normal; stable mild cognitive impairment; mild cognitive impairment with progression to Alzheimer's disease; Alzheimer's disease) showed highly significant group differences for individual regions, the majority of which were located in the temporal lobe. Additionally, significant effects were seen in the parietal lobe. Increased left/right asymmetry was found in posterior cortical regions. An automatically derived white-matter hypointensities index was found to be a suitable means of quantifying white-matter disease. This repository of segmentations is a potentially valuable resource to researchers working with ADNI data.
Introduction
This paper presents results of a project that aims to provide anatomical labels based on automatic segmentation for magnetic resonance (MR) brain imaging data supplied by the Alzheimer's Disease Neuroimaging Initiative (ADNI). The result of this work is made available to the general scientific community via the same channels as the source ADNI data.
Anatomical segmentations of structural images of the human brain can be used for a plethora of purposes. A principal motivation is to understand the impact of neurodegeneration, trauma, epilepsy and other conditions on the brain's macroscopic structure. Such understanding leads to morphometric descriptors with the potential to serve as biomarkers for the diagnosis and monitoring of brain disease (Colliot et al., 2008;Duchesne et al., 2008;Heckemann et al., 2008;Klöppel et al., 2008). Beyond the realm of morphometric analysis, individual anatomical segmentation is frequently used in the analysis of functional imaging data, e.g. to precisely locate areas of hypo-or hypermetabolism within the subject's own anatomical reference frame. Anatomical segmentation also enables studies of regional connectivity based on diffusion tensor imaging [e.g. Traynor et al. (2010)].
ADNI MR imaging data have hitherto been provided with only minimal amounts of segmentation information. For a subset of ADNI images, labels of the left and right hippocampi are available. These labels have been generated using a semiautomatic tool (SNT, Medtronic Surgical Navigation Technologies, Louisville, CO) that relies on manual seed point placement. In work by Hsu et al. (2002), the SNT tool was claimed to yield hippocampal volume measurements equivalent to a manual delineation protocol, but the validation was not entirely convincing: Hsu et al. make reference to previous work by Watson et al. (1992), but the protocol described there finds distinctly larger volumes in normal adult hippocampi (Watson: right 5264 ± 652 mm³, left 4903 ± 684 mm³; Hsu: right 3103 ± 505 mm³, left 2945 ± 503 mm³). Furthermore, the SNT method yields volume measurements that are yet smaller than those of the manual reference (right: 2323 ± 326 mm³, left: 2275 ± 253 mm³). Both the validation and anatomical coverage of available ADNI segmentation data are thus limited.
Beyond the hippocampus, researchers requiring anatomical labels of ADNI data have four choices:

1. Normalize subject images to a reference space and apply one of a choice of anatomical volume or surface atlases available for this space [e.g. Talairach (Talairach and Tournoux, 1988), AAL (Tzourio-Mazoyer, 2002), Maximum Probability Brain Atlas (Hammers et al., 2003), The Whole Brain Atlas, LPBA40 (Shattuck et al., 2008), PALS-B12 (Van Essen, 2005), the Freesurfer atlas (Fischl et al., 2004), or a purpose-made atlas]. This can be a simple solution, in particular if other parts of the analysis already require spatial normalization. Since the segmentation process takes place in the common space, an inverse normalization has to be carried out in order to recover the volume and shape of segmented regions in native space. This approach is typically based on a single-subject atlas or maximum-probability atlas. The latter are generally preferable because they tend to eliminate idiosyncrasies due to anatomical variants in individual subjects. Success depends on the suitability of the chosen atlas, as well as the suitability and robustness of the chosen spatial normalization algorithm.

2. Carry out anatomical segmentation according to an existing or tailored protocol for manual region outlining in individual subject space. A full outlining protocol has been described by Hammers et al. (2003); other examples include the protocol by Shattuck et al. (2008) and another by Filipek et al. (1989). A further protocol for cortical labeling is under development as a collaborative project (brainCOLOR). These methods require training of an operator in the chosen protocol and are expensive in terms of operator time and validation requirements, with costs rising approximately linearly with the number of images to be segmented and the number of regions labeled. The resulting segmentations are subject to intraobserver and interobserver variation.

3. Use one of a choice of semiautomatic approaches that require manual input, such as landmarks or seed points. Examples are SNT as noted above, Cardviews [Center for Morphometric Analysis, Massachusetts General Hospital, Boston, MA, USA (Rademacher et al., 1992)], CARET [cortex only, Washington University School of Medicine, Saint Louis, MO, USA (Van Essen, 2005)] and LDDMM (Beg et al., 2004; Csernansky et al., 2004). Compared to manual outlining, interobserver variation is reduced, since even if the manual input varies within a certain range, the algorithms tend to arrive at the same results. These approaches are less labor-intensive, but the costs are still closely tied to the number of target images and regions.
4. Carry out anatomical segmentation in individual space using a fully automatic procedure. Software packages are available that implement the required functionality, but have limitations. For example, Mindboggle and its extension using multiple atlases are designed for cortical segmentation only, while the FS + LDDMM method achieves limited accuracy (Khan et al., 2008). Such approaches typically place a high demand on the computing infrastructure. An exception in this respect is the work by (Lötjönen et al., 2010), which is designed to reduce the computational demand sufficiently to make multi-atlas segmentation clinically feasible.
The present work is an instance of the fourth option. We have generated anatomical labels for ADNI MR images and provide them for download along with other ADNI data (http://www.loni.ucla.edu/ ADNI). We present segmentations of 816 subjects' screening and baseline images into 83 regions, along with a statistical description of regional volumes.
To obtain automatic segmentations, we used multi-atlas propagation with enhanced registration [MAPER, Heckemann et al. (2010)]. This is a refined version of a previously validated approach (Heckemann et al., 2006). MAPER is the first automatic whole-brain multi-region segmentation method that has been shown to yield robust results in subjects with neurodegenerative disease. It uses training data ("atlases," images with reference segmentations) to segment T1-weighted brain MR images of any provenance into anatomical regions. We showed in previous work (Heckemann et al., 2006) that the accuracy achieved with MAPER is only slightly inferior to that of manual segmentation performed by a trained operator, and that the procedure is robust in the face of anatomical variation in the target subjects, specifically ventricular enlargement as seen in aging and neurodegeneration.
The implementation of MAPER used here relies on software tools sourced from the Image Registration Toolkit [IRTK, Rueckert et al. (1999)] and from Nifty Reg (Modat et al., 2010). In a comparison of tools for intersubject registration of MR brain images, IRTK was recently found to be among the best-performing ones (Klein et al., 2009). Two other tools [SyN (Avants et al., 2008) and ART (Ardekani et al., 2005)] achieved more consistent results than IRTK in the comparison by Klein et al. Nevertheless, when working with heterogeneous data, we found IRTK to be more robust than ART and SyN, in particular when source (atlas) MR images had been acquired on different scanners than the target data for segmentation [e.g. ADNI images, Heckemann et al. (2010)]. ART and SyN have been shown to be suitable for registering pairs of images of identical provenance. MAPER is characterized by its robustness towards ventricular distension in the target subject. To achieve this, it relies on IRTK's ability to register multi-spectral tissue probability maps using cross correlation as the similarity measure, a feature that, to our knowledge, has not been implemented elsewhere. Our choice of IRTK rests on these two factors, robustness towards both intensity differences and typical pathology, although MAPER could in principle be implemented using other toolkits.
We validate the results using the volumes of the segmented regions as well as agreement measures between segmentations of images that have been serially acquired at different field strengths, and document limitations of the automatic procedure and the generated results for the benefit of future users of the data. We found that signal changes caused by white-matter disease can result in misclassification of tissues and lead to distortions in the segmentations. To quantify this influence, we describe and validate an automatically generated index. Finally, we show that statistical analyses of the automatically generated segmentations confirm previous observations of morphometric changes in Alzheimer's disease and mild cognitive impairment.
MR data
Atlas data as required for MAPER consisted of 30 T1-weighted 3D image volumes acquired from healthy young adult volunteers at the National Society for Epilepsy at Chalfont, UK. Details of the acquisition are in Hammers et al. (2003). Hand-drawn segmentations of 83 structures had been previously prepared according to the protocols described in Hammers et al. (2003) and Gousias et al. (2008). Segmentation protocols are also available at http://www. brain-development.org.
MR images of patients with Alzheimer's disease and mild cognitive impairment as well as healthy elderly subjects were obtained from the ADNI database (www.loni.ucla.edu/ADNI). 9 The research presented here aligns with the primary goal of ADNI, which has been to test whether serial magnetic resonance imaging (MRI), positron emission tomography (PET), other biological markers, and clinical and neuropsychological assessment can be combined to measure the progression of mild cognitive impairment (MCI) and early Alzheimer's disease. The full repository of ADNI images was accessed in February 2010. The clinical information was retrieved in August 2010. Each subject was assigned to one of five diagnosis groups: healthy subjects (HS), mild cognitive impairment with no conversion within the observation period (stable MCI, s-MCI), mild cognitive impairment at baseline, with progress to Alzheimer's disease within the observation period (p-MCI), Alzheimer's disease (AD), and Other (O). The latter assignment was used as a "catch-all" for subjects who did not fit the other categories, for example if ADNI noted a reversion from AD to MCI. The observation period was 24 ± 11 months.
Preprocessing
As envisaged in the ADNI study design, images were obtained from the ADNI database in fully preprocessed versions. Depending on the scanner source, preprocessing included all or some of GradWarp geometric distortion correction (Jovicich et al., 2006), B1 nonuniformity correction to compensate for signal inhomogeneity , N3 bias field correction (Sled et al., 1998) and phantom scaling. We chose the originally supplied linearly scaled images, irrespective of problems reported on a subset, 10 as linear scaling issues do not affect the segmentation procedure. Likewise, volume measurements, once normalized by intracranial volume as measured on the same source image, are unaffected by linear scaling.
To match the requirements of the MAPER procedure, we applied further preprocessing for brain extraction and tissue classification, as described in the following. Utilities used for these steps were taken from the Image Registration Toolkit [IRTK, Rueckert et al. (1999)], from the FSL suite (Smith et al., 2004) and from the ANTs toolkit (Avants et al., 2010).
For the brain extraction step, binary masks covering both intracranial white matter and gray matter (WM + GM) were available as the starting point. These had been generated as part of an earlier project using MIDAS, a semi-automatic procedure described elsewhere (Freeborough et al., 1997). Each mask was extended to cover the intracranial region generously by blurring (6 mm Gaussian kernel), thresholding at 27% and hole-filling. FSL FAST was applied to identify cerebrospinal fluid (CSF) within the pre-masked region. The original WM + GM mask was extended by the resulting CSF mask to obtain a complete intracranial mask that excluded meninges, sinuses and extracranial tissue. The original, semi-automatically created WM + GM mask is fully contained within the intracranial mask, reducing the impact of operator-dependent variability on the intracranial volume measurement.
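The mask-construction steps described above can be illustrated with a short sketch. It is a minimal reading of the procedure, assuming scipy/nibabel-style arrays, an isotropic 1 mm voxel size and a Gaussian sigma of 6 mm, and using toy volumes in place of the real images; it is not the exact pipeline used in the study.

```python
import numpy as np
from scipy import ndimage

def intracranial_mask(wm_gm_mask, csf_mask, voxel_mm=1.0):
    """Extend a semi-automatic WM+GM mask into a generous intracranial mask.

    Steps as described in the text: blur with a 6 mm Gaussian kernel,
    threshold at 27%, fill holes, then add the CSF voxels found by the
    tissue classifier within the pre-masked region.
    """
    sigma = 6.0 / voxel_mm  # assumption: the 6 mm kernel is taken as sigma
    blurred = ndimage.gaussian_filter(wm_gm_mask.astype(float), sigma=sigma)
    generous = ndimage.binary_fill_holes(blurred > 0.27)
    return generous | (csf_mask > 0)

# Hypothetical toy volumes (a real call would load NIfTI images with nibabel).
wm_gm = np.zeros((64, 64, 64), dtype=np.uint8)
wm_gm[20:44, 20:44, 20:44] = 1
csf = np.zeros_like(wm_gm)
csf[18:20, 30:34, 30:34] = 1

icv_mask = intracranial_mask(wm_gm, csf)
print("intracranial volume (voxels):", int(icv_mask.sum()))
```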
Individual tissue probability maps for CSF, GM and WM obtained using FSL FAST were combined into a single multi-spectral volume. Fig. 1 shows a sample section from an image of a healthy subject. A binary maximum-probability gray matter mask was extracted from the discrete tissue class image generated by FAST.
T1-weighted screening (1.5 T) and baseline (3 T) images from the ADNI repository were obtained for all subjects for whom MIDAS-prepared brain masks were available. After removing data sets that had been withdrawn by ADNI after the download, a total of 996 images on 816 subjects (1.5 T: 811, 3 T: 185; of which paired: 180) were segmented and quality assessments carried out (cf. Statistical and visual analysis). For statistical analysis, only subjects in the HS, s-MCI, p-MCI, and AD groups (i.e. excluding the "Other" category), and of these only those who passed the outlier analysis (Outlier analysis section) were included (777 subjects, 953 images, 1.5 T: 772; 3 T: 181, of which paired: 176). The age distribution of included subjects is shown in Fig. 2. A breakdown by diagnostic group and gender is given in Table 1.
In the 176 included subjects for whom images had been acquired at both field strengths, the 3 T image was typically acquired within weeks after the 1.5 T image (median 22 days, range 0-112 days).
Segmentation
The MAPER procedure for robust, automatic segmentation of T1-weighted MR images of the human brain has been described and validated previously. Each target is paired with each of the atlases to generate an individual atlas-based segmentation. The steps are summarized in Table 2. In Steps 3 and 4, alignment of details in the image pair was achieved by optimizing a free-form deformation (FFD) represented by displacements on a grid of control points blended using cubic B-splines (Rueckert et al., 1999). These steps are carried out using each of the 30 atlases in turn, resulting in 30 segmentations, which are subsequently combined using vote-rule decision fusion (Rohlfing et al., 2004; Kittler et al., 1998).
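To make the fusion step concrete, the sketch below shows vote-rule (majority) decision fusion of several propagated label volumes. Array shapes and the number of atlases are illustrative, and the function name is hypothetical rather than part of MAPER itself.

```python
import numpy as np

def vote_fusion(label_volumes):
    """Fuse propagated label volumes by per-voxel majority vote.

    label_volumes: list of integer arrays of identical shape, one per atlas,
    where each voxel holds a region label (0 = background).
    """
    stack = np.stack(label_volumes, axis=0)          # (n_atlases, x, y, z)
    n_labels = int(stack.max()) + 1
    # Count votes per label at every voxel, then take the label with most votes.
    votes = np.zeros((n_labels,) + stack.shape[1:], dtype=np.int16)
    for label in range(n_labels):
        votes[label] = (stack == label).sum(axis=0)
    return votes.argmax(axis=0).astype(stack.dtype)

# Toy example with 5 "atlases" and 3 regions on a tiny grid.
rng = np.random.default_rng(0)
atlases = [rng.integers(0, 3, size=(4, 4, 4)) for _ in range(5)]
fused = vote_fusion(atlases)
print(fused.shape, int(fused.max()))
```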
In contrast to the approach discussed in Heckemann et al. (2010), where the entire registration was done in IRTK, we used Nifty Reg Version 1.3 (Modat et al., 2010) to carry out the detail-level registration (Step 4). The transformed and interpolated output from IRTK was used as the starting point. Nifty Reg is a particularly efficient implementation of the same FFD registration. To compare the accuracy of MAPER based on the combination (IRTK and Nifty Reg) with MAPER based on pure IRTK, we carried out a leave-one-out cross-comparison on the 30 atlas sets with both implementations, following the method described in Heckemann et al. (2010). Agreement between the generated and the manual label sets was measured using the mean Jaccard coefficient [JC; intersection divided by union (Jaccard, 1901)] across all 83 regions. The mean JCm across the 30 atlas images was 0.691 for both methods [range
9 ADNI was launched in 2003 by the National Institute on Aging (NIA), the National Institute of Biomedical Imaging and Bioengineering (NIBIB), the Food and Drug Administration (FDA), private pharmaceutical companies and non-profit organizations, as a $60 million, 5-year public-private partnership. The Principal Investigator of this initiative is Michael W. Weiner, M.D., VA Medical Center and University of California San Francisco. ADNI is the result of efforts of many co-investigators from a broad range of academic institutions and private corporations, and subjects have been recruited from over 50 sites across the U.S. and Canada. The initial goal of ADNI was to recruit 800 adults, ages 55 to 90, to participate in the research: approximately 200 cognitively normal older individuals to be followed for 3 years, 400 people with MCI to be followed for 3 years, and 200 people with early AD to be followed for 2 years.
10 http://www.loni.ucla.edu/twiki/bin/view/ADNI/ADNIMRICore.
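Returning to the agreement measure used above, the Jaccard coefficient is straightforward to compute. The following sketch evaluates it per region and averages across regions for a pair of label volumes; the toy arrays and region count are illustrative only (a real comparison would use all 83 regions).

```python
import numpy as np

def mean_jaccard(labels_a, labels_b, region_ids):
    """Mean Jaccard coefficient (intersection over union) across regions."""
    jcs = []
    for r in region_ids:
        a, b = labels_a == r, labels_b == r
        union = np.logical_or(a, b).sum()
        if union > 0:
            jcs.append(np.logical_and(a, b).sum() / union)
    return float(np.mean(jcs))

# Toy volumes with three regions standing in for automatic and manual label sets.
rng = np.random.default_rng(1)
auto = rng.integers(0, 4, size=(8, 8, 8))
manual = auto.copy()
manual[0] = rng.integers(0, 4, size=(8, 8))  # perturb one slice
print(f"mean JC: {mean_jaccard(auto, manual, region_ids=range(1, 4)):.3f}")
```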
Quantifying white-matter disease
White-matter disease (WMD), characterized by diffusely hypointense regions within the white matter, is frequently seen in elderly subjects, and specifically in those with dementia (Black et al., 2009). Such regions can adversely affect the functioning of intensity-based methods. In the case of FSL FAST, they tend to be incorrectly labeled as gray matter, and this can impact subsequent processing; in the case of MAPER, the resulting segmentations can be distorted. In particular, the lateral ventricle, the caudate nucleus, and the insula can be overestimated (cf. Outlier analysis). We developed a procedure to estimate the amount of white matter that is misclassified. The estimate is derived from a set of different label images derived from the T1-weighted image:

• A binary WM segmentation of the target by majority vote fusion of transformed atlas WM segmentations (each estimated from their intensities by FAST; note the atlas subjects are healthy young adults not affected by WMD): M_W
• A binary GM segmentation of the target generated with FAST: F_G
• A semi-automatically generated binary label covering white matter and gray matter of the target, as described in the Preprocessing section: S_B
• A binary segmentation of both lateral ventricles in the target extracted from the fusion of the transformed atlas labels: M_V

An image with suspected WMD voxels, W, is generated by combining these label images via two intermediate images, A and B, where E is a 3 × 3 × 3 structuring element used for eroding the intermediate images (this operation is symbolized by ⊖). In subjects where WMD leads to hypointensities that coincide with white matter, as identified by transforming atlas WM segmentations, such regions will be labeled by the intermediate Image A. In subjects where hypointensities border on the lateral ventricles, Image B will capture the affected regions. The volume of the resulting label W, normalized by the intracranial volume, provides an indication of the subject's WMD load. In the following, we refer to this measure as the white-matter hypointensities index (WMHI).
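The exact combination formula for Images A and B is not reproduced above, so the following is only a plausible sketch of how such a hypointensity label could be assembled from the four binary masks with a 3 × 3 × 3 structuring element: the definitions of image_a (atlas-derived WM that the intensity-based classifier labels as GM) and image_b (GM-labelled voxels adjacent to the ventricle label) are assumptions for illustration, as are the function name and the use of SciPy morphology rather than the original tooling.

```python
import numpy as np
from scipy import ndimage

def wmhi(mw, fg, sb, mv, icv_volume_ml, voxel_volume_ml):
    """Illustrative white-matter hypointensities index.

    mw: atlas-derived binary WM mask (M_W)
    fg: FAST binary GM mask of the target (F_G)
    sb: semi-automatic WM+GM brain label (S_B)
    mv: atlas-derived binary lateral-ventricle mask (M_V)
    All masks are boolean arrays of one shape.
    """
    E = np.ones((3, 3, 3), dtype=bool)                  # structuring element
    # Image A (assumed): atlas WM that the intensity classifier called GM,
    # restricted to the brain label and eroded to suppress boundary voxels.
    image_a = ndimage.binary_erosion(mw & fg & sb, structure=E)
    # Image B (assumed): GM-labelled voxels in a one-step rim around the
    # ventricle label, capturing periventricular hypointensities.
    rim = ndimage.binary_dilation(mv, structure=E) & ~mv
    image_b = rim & fg & sb
    w = image_a | image_b
    # Normalize the suspected-WMD volume by the intracranial volume.
    return w.sum() * voxel_volume_ml / icv_volume_ml
```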
We assessed the validity of the WMHI by comparison with a semiquantitative rating. We adapted the rating scale described by Wahlund et al. (2001), which was designed for X-ray computed tomography. The WMHI distribution was highly skewed, with a small number of high values. To select a subset for visual scoring, we ranked the images according to WMHI, divided the sample into three equal parts, and sampled in a proportion of 42:21:7 from these parts, yielding a total of 70 images for review. 11 An experienced rater (AH) who was blinded to WMHI, age, and diagnosis assigned the score after reviewing the T1-weighted images in three orthogonal planes. Where subjectively appropriate, based on comparisons within the sample, the rater assigned a tendency to the score, which was recorded as an addition or subtraction of 0.3 points to or from the integer score.
Masking based on tissue class
Depending on the application, it may be desirable to use segmented regions that have been multiplied with a binary tissue class label. In particular, since aging and Alzheimer's disease are characterized by cortical neuronal loss, the GM portion within each cortical label is often more relevant than the full label containing both GM and WM. We thus provide both raw segmentation data and masked versions. For the latter, regions with a substantial GM portion have been masked with a GM label (all except ventricles, central structures, cerebellum and brainstem), and the lateral ventricles have been masked with a CSF label. Unless otherwise noted, the analysis results reported in this work are based on the masked label sets.
Statistical and visual analysis
We assembled and analyzed the results of volumetry on all structures in all target subjects using standard statistical methods as provided by the R environment (http://www.r-project.org/). Segmentation failures typically lead to grossly inaccurate estimations of the volume of individual regions. To detect outliers in the data, we grouped the images by diagnosis, gender and field strength, and determined per-group means and standard deviations of the regional volume (normalized by intracranial volume; masked by GM except for ventricles, central structures, brainstem and cerebellum). On this basis, all region volumes were converted to z scores. Regions where the z score was greater than 4 or less than −4 were flagged as outliers. Images containing outlier regions were visually assessed by an experienced reader (RAH). Label outlines were superimposed on the MR image, and the flagged region and its neighborhood were viewed in the transverse, sagittal and coronal planes. The segmentation quality was rated on a visual analog scale from 1 to 5 (1: failed segmentation; 2: poor boundary matching, but correct indication of the relative position of neighboring regions; 3: fair; 4: good segmentation with minor boundary mismatches; 5: excellent segmentation with exact boundary matching). The likely cause of the outlying size of the region, based on the reader's subjective impression, was identified and recorded. The remainder of the image was searched in the transverse plane for obvious label mismatches beyond the flagged region, and a note of the overall impression was recorded.
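The outlier-flagging step reduces to a within-group z-score threshold. The sketch below restates it in pandas rather than R; the column names and the long-format table layout are illustrative assumptions, while the grouping variables and the |z| > 4 criterion follow the text.

```python
import pandas as pd

def flag_outliers(df: pd.DataFrame, threshold: float = 4.0) -> pd.DataFrame:
    """Flag regional volumes whose within-group z score exceeds +/- threshold.

    Expected (illustrative) columns: 'image_id', 'region', 'norm_volume',
    'diagnosis', 'gender', 'field_strength'.
    """
    group_cols = ["diagnosis", "gender", "field_strength", "region"]
    grouped = df.groupby(group_cols)["norm_volume"]
    z = (df["norm_volume"] - grouped.transform("mean")) / grouped.transform("std")
    flagged = df.assign(z_score=z, outlier=z.abs() > threshold)
    # Rows flagged here would then go on to visual review.
    return flagged[flagged["outlier"]]
```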
Statistical analysis was carried out with a view to comparing diagnostic groups and determining potential volumetric criteria characteristic of Alzheimer's disease or of impending progression from mild cognitive impairment. We also used MAPER measurements to determine balanced asymmetry indices (A_r) for paired regions from the right and left regional volumes, V_R and V_L. Unbalanced indices were generated by dividing V_L by V_R. The volumetry and asymmetry studies were carried out using the images acquired at 1.5 T. The findings were compared with published knowledge as a consistency check for the correctness of the segmentation approach.
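The exact expression for the balanced index is not given above, so the sketch below is an assumption: it normalizes the absolute left-right difference by the mean of the two volumes, a common form that is symmetric under swapping sides and yields small positive values of the kind reported in the Results. The unbalanced ratio V_L / V_R follows the text directly.

```python
def asymmetry_indices(v_left: float, v_right: float) -> tuple[float, float]:
    """Return (balanced, unbalanced) asymmetry measures for one region pair.

    Balanced (assumed form): |V_R - V_L| divided by the mean of the two
    volumes, so exchanging left and right leaves the value unchanged.
    Unbalanced: the simple ratio V_L / V_R, as stated in the text.
    """
    balanced = abs(v_right - v_left) / ((v_right + v_left) / 2.0)
    unbalanced = v_left / v_right
    return balanced, unbalanced

# Example with arbitrary hippocampal volumes (same units on both sides):
print(asymmetry_indices(3.1, 3.4))
```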
Comparison across field strengths
Where pairs of images acquired at 1.5 T and 3 T were available for individual subjects, the pair was rigidly registered and the unmasked label sets compared in the space of the 3 T image, using JC. A per-subject summary measure of agreement was obtained by calculating the mean JC across all 83 regions (JC m ). Key results are also provided as Dice similarity coefficients [DSC; intersection divided by average label volume (Dice, 1945)].
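For reference, a minimal sketch of the two agreement measures defined above (JC: intersection over union; DSC: intersection over average volume) applied region by region to a pair of co-registered label images; the function names, the NaN handling for regions absent from both label sets, and the NumPy encoding of the label maps are illustrative assumptions.

```python
import numpy as np

def jaccard(a, b):
    """Intersection over union of two boolean masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else float("nan")

def dice(a, b):
    """Intersection over average volume of two boolean masks."""
    total = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / total if total else float("nan")

def mean_agreement(labels_a, labels_b, region_codes):
    """Per-subject mean JC (JC_m) and mean DSC across regions."""
    jcs = [jaccard(labels_a == r, labels_b == r) for r in region_codes]
    dscs = [dice(labels_a == r, labels_b == r) for r in region_codes]
    return np.nanmean(jcs), np.nanmean(dscs)
```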
Measuring precision by comparing independent segmentations
To derive a quantitative indicator of the precision of the segmentation of each target, we employed the following procedure. For each image in the ADNI set, we bisected the atlas set randomly into two subsets of 15 atlases each. From the pair of subsets, we generated a pair of independent segmentations using vote-rule decision fusion. The overall agreement between the unmasked segmentation pair was measured as the mean Jaccard coefficient across all 83 regions (JC m ).
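A compact sketch of this precision estimate under the same illustrative NumPy conventions as the fusion and agreement sketches above; only the random bisection into two subsets of 15 atlases, the majority-vote fusion of each subset, and the mean JC across regions follow the text, while the seeding and helper names are assumptions.

```python
import numpy as np

def majority_vote(segs):
    """Per-voxel majority vote over a list of integer label arrays."""
    stack = np.stack(segs, axis=0)
    n_labels = int(stack.max()) + 1
    counts = np.zeros((n_labels,) + stack.shape[1:], dtype=np.uint16)
    for label in range(n_labels):
        counts[label] = (stack == label).sum(axis=0)
    return counts.argmax(axis=0)

def mean_jc(a, b, region_codes):
    """Mean Jaccard coefficient over regions present in at least one map."""
    jcs = []
    for r in region_codes:
        ra, rb = (a == r), (b == r)
        union = np.logical_or(ra, rb).sum()
        if union:
            jcs.append(np.logical_and(ra, rb).sum() / union)
    return float(np.mean(jcs))

def precision_jc_m(atlas_segs, region_codes, seed=0):
    """Bisect the atlas set at random and compare the two fused results."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(atlas_segs))
    half = len(atlas_segs) // 2
    seg_a = majority_vote([atlas_segs[i] for i in order[:half]])
    seg_b = majority_vote([atlas_segs[i] for i in order[half:]])
    return mean_jc(seg_a, seg_b, region_codes)
```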
Results and discussion
Segmentation results are available for download in NIfTI format as 3D label maps identifying 83 structures by spatial correspondence with the T1-weighted images as supplied by ADNI. 12

11 Originally, we had sampled 21 evenly from the ranked list. Based on the review of this original set, we decided to increase the sample size. The trisection approach allowed us to add to the existing sample, while maintaining even spacing within the parts and emphasizing the upper WMHI value range, where we expected the findings to be most relevant.

12 ADNI users can download the label maps from the image database (https://ida.loni.ucla.edu) via the "Advanced Search" feature by selecting "Post-processed" and entering "MAPER*" in the field "Series Description".
Quality of intracranial masks
The mean intracranial volume (ICV) obtained by measuring the volume of the intracranial mask (cf. Preprocessing) was 1.41 l (range 1.02-1.86 l, SD 0.143 l) on 1.5 T images. The images with the three largest (I72219, I40356, and I35499) and the three smallest (I63227, I82594, and I52799) ICVs were reviewed with the mask outline superimposed to search for visible under- and overestimations. All six masks were judged to adequately represent the intracranial volume after careful visual inspection.
In subjects for whom images had been acquired at both field strengths (n = 176), the measured ICV on 3 T was highly correlated with that on 1.5 T (Pearson's r = 0.976), giving smaller results on average than 1.5 T, but not significantly so (−1%, range −5% to +10%, SD 2 percentage points, p = 0.32). Similar observations have previously been made when ICV measurements were compared on pairs of brain images of subjects who had been scanned serially at different field strengths. Automatic methods showed a tendency to underestimate ICV on 3 T and overestimate ICV on 1.5 T images. For the most consistent automatic method described by Keihaninejad et al., the difference was 0.7%.
Outlier analysis
Sixty regions in 42 subjects met the outlier criterion and were reviewed visually. Twelve subjects appeared twice in the list, two subjects appeared three times and one appeared four times. The regions that appeared most frequently in the outlier list were the temporal horn of the lateral ventricle (8 right, 6 left), the caudate nucleus (7 right, 5 left), and the subcallosal area (2 right, 4 left).
On visual review, the flagged regions appeared to be affected by white-matter disease in a large number of cases (WMD: 24; other flawed segmentations: 19, correct: 17). In 13 of the 19 problematic segmentations that were not WMD-related, the flaw appeared to be limited to the region in question. No further segmentation problems were detected in these cases, and the extent of over-or undersegmentation was deemed to be mild or moderate (scoring 3 or 4 on the visual analog scale described in Statistical and visual analysis section). In the remaining six regions, more general problems were seen and the relevant four cases (I64189, I38944, I67210, and I91126) were excluded from further analysis (MR acquisition problems leading to lack of GM/WM contrast: four regions in three images; motion artifact: two regions in one image).
WMD is a highly prevalent feature in the subjects of this cohort, frequently leading to overestimation of the caudate nuclei and the insula regions. The gray-matter portion of labels of other cortical regions often included white-matter regions that had been mistaken for gray matter by the tissue classification. Subcortical regions other than the caudate nuclei appeared largely unaffected on visual review. We determined for each image an index (cf. Quantifying white matter disease section) that signals WMD load. This index correlates well with the visual appearance of distortion (cf. Measuring WMD using white-matter hypointensities index). It is provided with the label images as part of the metadata.
The raw MAPER-based label for the lateral ventricle is frequently overestimated, incorrectly including hypointense portions of white matter. We dealt with this issue by masking this region pair with the binary CSF label generated by FAST.
While MAPER is robust in the majority of cases, the limitations of automatic segmentation (and, indeed, manual segmentation) need to be considered in subjects whose anatomical configuration is severely abnormal and in those who show texture abnormalities such as white matter disease.
Measuring WMD using the white-matter hypointensities index
The WMHI ranged from 0 (seen in 135/996 images) to 151, with the distribution strongly skewed towards 0. The distribution is best visualized using a log scale as shown in Fig. 3. The WMHI is intended to alert users to possible WMD-related oversegmentation when susceptible region labels are used for analysis. Such regions include the lateral ventricles before CSF-masking, the caudate nucleus and the gray-matter masked cortical regions, especially the insula. The WMHI has some value as a metadatum indicating the reliability of the segmentation. With a view to the caudate nucleus, however, its value is limited due to the way the index is generated: hypointensities adjacent to the caudate nucleus tend to be included in the generated caudate label, in which case they are not identified as WMD. Thus, it is possible for a caudate nucleus to be oversegmented due to white matter disease, even when the WMHI is zero. Random visual reviews have revealed one image where this appears to be the case (I63489). In future work, we will seek to address the issue of WMD-related oversegmentation in a principled fashion by identifying affected subjects and regions a priori and counteracting the distorting effects at the registration step. We will also search for better criteria to indicate the overall veracity of the generated segmentations.
Volumetric analysis Normalization
To reduce interindividual variation of region volumes, various measures have been proposed for normalization (Free et al., 1995). In particular, normalization of brain volume by intracranial volume was found to substantially reduce variation, and to remove gender-related differences. We found in previous work a correlation between hippocampal volume and ICV (Hammers et al., 2007), and this was confirmed in the present data (Pearson's r = 0.56 for the sum of both hippocampi in 1.5 T images of healthy subjects). Normalization by ICV also eliminates inaccuracies arising from problems with the phantom scaling, which have been reported for a subset of ADNI cases (Clarkson et al., 2009).
Our ICV measurements were stable across the diagnostic groups (cf. Fig. 5). Based on a two one-sided tests (TOST) procedure (Schuirmann, 1987), the null hypothesis of non-equivalence can be rejected for all paired comparisons of diagnosis groups, except (s-MCI, AD), where p = 0.056 (α = 0.05; equivalence margin 0.05 μ). In the following, individual region sizes are expressed as a fraction of ICV, scaled by an arbitrary factor of 10^4.
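A minimal sketch of a Schuirmann TOST applied to two groups' ICV values. Treating the equivalence margin as a fraction of the pooled mean is an assumption (only α = 0.05 is stated unambiguously above), and the equal-variance t formulation, function name, and SciPy usage are illustrative rather than a reproduction of the original R analysis.

```python
import numpy as np
from scipy import stats

def tost_equivalence(x, y, margin_fraction=0.05):
    """Two one-sided tests (Schuirmann) for equivalence of two group means.

    Returns the larger of the two one-sided p values; equivalence is
    concluded at level alpha if this value falls below alpha.
    The margin is +/- margin_fraction of the pooled mean (an assumption).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    delta = margin_fraction * np.mean(np.concatenate([x, y]))
    diff = x.mean() - y.mean()
    # Pooled standard error (equal-variance form) and degrees of freedom.
    sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    se = np.sqrt(sp2 * (1 / nx + 1 / ny))
    dof = nx + ny - 2
    p_lower = 1 - stats.t.cdf((diff + delta) / se, dof)   # H0: diff <= -delta
    p_upper = stats.t.cdf((diff - delta) / se, dof)       # H0: diff >= +delta
    return max(p_lower, p_upper)
```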
The benefit of ICV normalization can be seen in group comparisons by diagnosis: the absolute total gray matter volume differs between groups, but the distinction is comparatively weak (cf. Fig. 6, left panel). The right panel shows total gray matter volume with normalization, which results in larger group differences.
Aggregated regional analysis
For Fig. 7, volume results for individual regions have been aggregated into six superregions. The plots indicate that the temporal lobe is most distinctly different between diagnostic groups. Differences in the medians are also substantial for the ventricle regions, but the variance is greater in all groups, resulting in larger overlaps.
Individual regional analysis
The analysis of individual regional volumes reveals a pattern of increasing atrophy from the HS group via s-MCI and p-MCI to the AD group. Table 3 shows this for the 14 regions where the AD-HS difference is largest. An extended version of the table that includes all regions is provided as supplemental material. Most of the results match our expectations: ventricles are enlarged, especially the temporal horns; hippocampi are smaller, notably also when comparing HS with s-MCI (9% either side). The amygdala, the middle and inferior temporal gyrus and the fusiform gyrus are reduced in size, adding to the evidence that temporal lobe regions beyond the hippocampus are affected by the disease process. The amygdala is functionally connected with and spatially adjacent to the hippocampus, and its involvement in AD is well known from histopathology (Kromer Vogt et al., 1990;Scott et al., 1991) and imaging research (Cuénod et al., 1993;Jack et al., 1997;Lehericy et al., 1994). Other temporal lobe structures, notably the fusiform gyrus, the parahippocampal gyri, and the middle and inferior temporal gyri also have previously been found to be significantly affected (Chan et al., 2001).
In recent imaging studies, thalamic volumes have been found to be reduced in Alzheimer's disease (Cherubini et al., 2010;Zarei et al., 2010), in line with earlier post-mortem observations (Braak and Braak, 1991). de Jong et al. (2008) found reduced sizes of both putamen and thalamus. Our results confirm lower volumes of the thalamus, even when comparing the HS and s-MCI groups (5% either side, highly significant). For the putamen, the same comparison was marginally significant, while the difference between HS and AD was not. This finding may indicate a limitation of accuracy of the putamen segmentation in subjects with more advanced disease.
The heatmap in Fig. 8 indicates for each region and selected pairs of diagnostic categories the extent to which the measured volume can serve to distinguish the diagnosis groups. Red color indicates the "most significant" results in each column. Please note that p-values in this context are not used for the usual purpose of hypothesis testing, but for comparing regions; therefore we did not employ alpha thresholding or attempt correction for multiple comparisons. Regions in the mesial temporal lobe (hippocampus, amygdala, and parahippocampal gyri) are particularly prominent, along with the temporal horn of the lateral ventricle and the posterior temporal lobe. Outside of the temporal lobe, large posterior cortical regions (parietal lobe, occipital lobe) are highlighted. These observations align well with previously described AD patterns, specifically a posterior-to-anterior gradient in atrophy (Likeman et al., 2005).
Asymmetry
Generally, AD atrophy is described as a disseminated process with no lateral predilection. Regional counts of plaques and tangles in pathological specimens showed larger variability within one and the same region than between left and right counterparts (Janota and Mountjoy, 1988; Moossy et al., 1989; Wilcock and Esiri, 1987). Imaging studies comparing AD with other entities found that asymmetry indices may be a useful tool for differential diagnosis, as asymmetry of various regions frequently accompanies clinically similar conditions, specifically frontotemporal lobar degeneration (Barnes et al., 2006; Boccardi et al., 2003; Horínek et al., 2007; Likeman et al., 2005) and argyrophilic grain disease (Adachi et al., 2010). As a differential diagnostic criterion, asymmetry thus speaks against AD according to these studies.
For the hippocampus, a physiological right-larger-than-left asymmetry in healthy adults is well established [e.g. Pedraza et al., (2004)], but studies focussing on hippocampal asymmetry in AD have yielded varying results. Small lateral differences in atrophy rates between AD patients and controls were found by Barnes et al. (2005). Shi et al. (2007), focussing on shape characteristics rather than volume, also found small differences between AD and controls in the atrophy pattern. A metastudy on hippocampal volume found that right hippocampal volume was larger than left in all groups studied (AD, MCI and controls), with AD subjects showing smaller effect sizes due to larger variation (Shi et al., 2009). Similarly, Barber et al. (2001) report a loss of hippocampal asymmetry in AD patients versus controls. An increase in hippocampal asymmetry as a function of cognitive decline was seen in one study (Wolf et al., 2001).
In the present study, results for the hippocampal left/right volume ratio have a wide distribution. We therefore choose to report the median and the median absolute deviation (MD), which are more robust measures of central tendency and dispersion than means and standard deviations. For healthy subjects, we found the previously reported pattern of left < right hippocampal asymmetry (median L/R volume ratio 0.93; MD 0.073). In AD, the median of the volume ratio appears to be somewhat reduced (0.90; MD 0.12), but the difference is not significant. The balanced asymmetry index A_r is higher in AD than in healthy subjects (0.12 versus 0.08), and this difference is significant (p = 1.3 × 10^-5). Few studies have looked at asymmetry in AD beyond the hippocampus. The amygdala has been studied by Whitwell et al. (2005), who found a significant increase of asymmetry in AD patients. Thompson et al. (1998) found asymmetries in the Sylvian fissure in normal subjects, and these were significantly accentuated in AD.

Table 3. Regional volumes and volume differences. Column "HS vol" shows regional volume as a fraction of ICV, averaged across healthy subjects, scaled by 10^4. Columns labeled "d()" show the volume difference compared to HS as a percentage of HS vol, except "d(pMCI, sMCI)", which shows the volume difference between p-MCI and s-MCI as a percentage of the mean volume of the s-MCI group. The sort criterion is the modulus of the difference between AD and HS. Only the 14 regions ranking highest on the sort criterion are shown. "Code" is the numerical identifier for the region used in the label maps.
We note that A_r is particularly large in AD compared to HS when considering large regions (posterior temporal lobe, HS: 0.041, AD: 0.092, p = 1.3 × 10^-15; parietal lobe, HS: 0.068, AD: 0.100, p at the limit of numerical precision; cf. Fig. 9). This is an area for future exploration.
Consistency across field strengths
High levels of agreement were seen when comparing the segmentation obtained on a 1.5 T image with that obtained on the same subject's 3 T image. The aggregate measure across all structures showed little variation between subjects (JC m 0.802 ± 0.0146, range 0.749-0.829; corresponding to a DSC of 0.890). Results were equivalent for all four disease conditions as shown by a TOST procedure (p < 1e-17 for α = 0.05 and an equivalence margin of 0.04). In previous work, we used the JC m measure to assess MAPER with reference to manual segmentation in normal adults, obtaining a mean of 0.691 (DSC 0.817). The fact that the MAPER method produces consistent results across field strengths indicates high precision and corroborates our previous findings showing the accuracy of the method.
Between individual regions, we note large differences in the standard deviation of the Jaccard coefficient (JC σ ). This standard deviation, here expressed as a percentage of the mean, ranges from 1.4% (brainstem) to 17.7% (left pallidum). Labels of regions that are well-defined by gray-scale intensity gradients in the T1 image are particularly consistent across field strengths. JC σ for, e.g., the frontal horn and central part of the lateral ventricle is 3.0% (either side). For the precentral gyrus, which is a cortical region of average size within our set, JC σ is 2.3% (left) and 2.5% (right). Small regions with weakly defined boundaries are naturally difficult to segment, both manually and by automatic procedures based on manual input. In the present results, this is reflected in large standard deviation values for the pallidum (JC σ left: 17.7%, right: 14.3%) and the nucleus accumbens (left: 12.9%, right: 11.0%).
Precision based on atlas subsets
There was strong agreement between independent atlas-subset based segmentations of the target images (JC m was 0.800 ± 0.0092, range 0.771-0.819, corresponding to a DSC of 0.889). Three outliers were seen out of 996, one each in the HS (I64867, JC m 0.765), s-MCI (I69360, JC m 0.718), and p-MCI (I97327, JC m 0.755) groups. On visual review of the corresponding segmentations, no substantial mislabelling was seen.

Fig. 9. Boxplot showing group differences of the asymmetry index for selected regions. The difference between AD and HS is highly significant for each of the regions shown (p < 10^-4); the difference between s-MCI and p-MCI is significant (p < 0.05) for the shown regions except thalamus and parietal lobe. Cf. Fig. 8 for abbreviations.

Fig. 8. Heatmap showing per-region results of unpaired two-tailed t-tests between selected pairings of diagnosis groups. P-values are mapped to colors so that the "most significant" results for each column are highlighted in red. Each column is color-scaled independently. R: right, L: left, ant: anterior, amb: ambiens, temp: temporal, med: medial, lat: lateral, sup: superior, post: posterior, inf: inferior, g: gyrus, gg: gyri, l: lobe, frt: frontal, rem: remainder; superregion abbreviations as in Fig. 7.
Future improvements
The choice of the segmentation approach employed in this work was guided by multiple considerations and represents a compromise that can be improved upon in future work. An important factor was proven robustness, which helped to ensure that all segmentation results are traceable to a single procedure and avoided any individualized modification of the output data. As a consequence, a variety of recently published algorithmic developments have not been considered; they may lead to even more accurate and detailed segmentations once their robustness has been demonstrated. Promising developments are taking place in several areas. Registration algorithms are becoming more accurate and efficient as shown by Lötjönen et al. (2010) (albeit only for image pairs of identical provenance). The dependence on expensive expert input may in the future be reduced thanks to algorithms that uncover latent atlases (Riklin-Raviv et al., 2010). Procedures that select or weight atlas images from the outset (Aljabar et al., 2009) or at the segmentation combining stage (Artaechevarria et al., 2009) may yield more accurate results, especially when heterogeneous repositories are used as atlas data. Algorithms that revisit the image data after segmentation in order to refine the result are showing strong promise (Wolz et al., 2010b; van der Lijn et al., 2008).
A further compromise was made with regard to the choice of the atlas data. We settled on a set that has been segmented in high detail and with strong validation (Hammers et al., 2003), although it is based on young adults and is therefore demographically dissimilar from the ADNI target images. Work by Wolz et al. (2010a) shows for individual regions that the LEAP approach, propagating atlas labels indirectly via intermediate images, can yield improved results on such dissimilar targets. The question of how LEAP or a similar approach can be adapted for multiple regions is another promising research avenue.
Conclusion
In this work we present a repository of label data on healthy elderly subjects and patients with mild cognitive impairment or Alzheimer's disease. We offer segmentations of 996 screening and baseline images in 816 subjects. The data are publicly available as an accompaniment to the MRI data supplied by the ADNI project. We validated the segmentation results and presented results of statistical analysis that are congruent with established knowledge about atrophy progression in AD.
We are committed to maintaining and enhancing the repository with findings from future research. In particular, we envisage developing further indices of accuracy and adding them as metadata. As improvements to the segmentation algorithm are developed and validated, we are planning to add updated segmentations, using versioning to ensure that the original set described here remains available for reference. If members of the community should express an interest in segmentations of follow-up scans, such data will also be added.
For researchers working with ADNI data, the repository will provide reliable information on a large number of anatomical regions. We envisage that the segmentations will be used to search for novel imaging biomarkers of Alzheimer's disease and progressive mild cognitive impairment, using regional volume, shape, and texture information that can be derived. Our data will also enable region-based analysis of the functional imaging data acquired using positron emission tomography, and of the connectivity data acquired using diffusion tensor imaging in the respective subsets.
"Medicine",
"Computer Science"
] |
Tragedy of urban green spaces depletion in selected sub-Sahara African major cities
Urban green spaces (UGS) play an important role in enhancing the socioeconomic and environmental health of cities around the world. For instance, UGS such as playgrounds, parks and residential greenery provide relief from mental and physical stress in densely populated areas. In spite of the significance of UGS in urban life and city development, their depletion rate in sub-Saharan countries seems alarming. Based on a mixed-methods approach including content analysis of relevant publications and spatiotemporal analyses, this paper discusses urban green space depletion in three randomly selected sub-Saharan African cities: Dar es Salaam, Accra, and Luanda. The study reveals a disturbing trajectory of UGS depletion in the selected cities. The causation factors include (a) pressure of rapid urbanization; (b) weak urban planning regulations; (c) socioeconomic challenges; and (d) weak institutions. Policy implications of these findings include the need to prepare and implement public park plans at the regional and local levels, and to build institutional competence and capacity to address the rapid depletion of urban green spaces (UGS) in sub-Saharan African cities.
INTRODUCTION
The desire to create ecologically and economically vibrant cities has made green spaces an essential feature of city development. Green spaces are natural and multifunctional spaces that provide social, environmental, and economic benefits to cities (Smaniotto et al., 2008), as well as enhancing a city's aesthetic image (Hussain et al., 2010; Narh et al., 2020). Green spaces may be set aside for socialization and recreation or ecological conservation purposes (Hussain et al., 2010; Narh et al., 2020; King, 2010; Mensah, 2014a). Broadly defined, Urban Green Spaces (UGS) include parks, community woodlands, street trees, wetlands, playgrounds and residential greenery (Narh et al., 2020). They play an important role in enhancing the economic, health, and social way of life of cities, making them significant in achieving healthy and sustainable urban communities (King, 2010; Mensah, 2014a). In most developed countries, national and city authorities have put in place measures and policies to curb the depletion of urban green spaces (Girma et al., 2019; Schägner et al., 2016).
Despite the important contribution of urban green spaces, urbanization coupled with poor management practices and competing land uses has resulted in the rapid deterioration of green spaces in most sub-Saharan African cities.

The social benefits of green spaces are commonly associated with leisure or recreation, and health promotion. Findings in the UK, Finland, Mexico, and China classified green spaces as a major urban resource for leisure and outdoor recreational activities such as relaxing, playing with kids, walking pets, exploring, and observing nature and wildlife (Haq, 2011; Mensah, 2017; Xi-Zhang, 2009). Cilliers (2017) and Jansson (2014) also found that green spaces serve as meeting centers for middle and low-income earners in both developed and developing countries, where they go to spend time relaxing, engaging in games, and having picnics. The measure of the benefits of green spaces is seen in terms of inherent aesthetic value, the living environment they create (Cilliers et al., 2013), the positive view of residents with regard to the value of the park (Guenat et al., 2021), and the deep sense of community and common interest (Narh et al., 2020). Additionally, Darkhani et al. (2019) indicate that green spaces with natural features play an important role in bringing people in the same community together.
Apart from recreation, studies suggest that green spaces play a crucial role in the physical and mental development and health of individuals. The contemporary urban lifestyle is associated with chronic stress and anthropogenic environmental hazards that could be alleviated by the provision of urban green spaces (WHO Regional Office Europe, 2019). Kolimenakis et al. (2021) noted that walking to and within green spaces not only promotes socialization but also allows for physical exercise that eventually contributes to the good health of users. Thus, physical activities such as walking, jogging, and other sporting activities that mostly characterize the use of green spaces are good ways to prevent obesity and other diseases such as cardiovascular disease, musculoskeletal diseases, stroke, and cancer (Cilliers, 2017; Jennings and Bamkole, 2019). In addition, green spaces are major stress-relief sites, particularly for children and young adults (Heikinheimo et al., 2020; Hussain et al., 2010). The sense of community created at green spaces improves social interaction and reduces fear and aggressiveness, which promotes quality neighbor relationships (Cilliers, 2017; Lategan and Cilliers, 2014; Nero, 2016).
Finally, Nero (2016) argues that green spaces not only enhance the knowledge of young people about nature but also help them develop a sense of stewardship when they are frequently exposed to nature. Green spaces also go beyond helping locals and tourists understand the local flora and fauna (Jennings and Bamkole, 2019) to serve as a useful resource for scientific research (Osei-mainoo, 2012) while sustaining cultural and national heritage (Mensah, 2016). To sum up, green spaces offer a wide range of social benefits, such as leisure and recreation, improved physical and mental wellbeing, improved children's development, social cohesion, and the preservation of cultural and national heritage. All these benefits contribute to the sustainable development of nations.
Environmental significance
Green spaces with natural elements have been observed over the years to act as a local urban climate regulator (Maghrabi et al., 2021). Hard surfaces such as asphalt, pavements, and other concrete surfaces in urban areas absorb solar radiation, which produces heat waves in urban areas. On the contrary, evapotranspiration from urban green spaces and parks keeps urban temperatures cool and consequently alters the local climate (Browning and Rigolon, 2019; King, 2010; Maghrabi et al., 2021). Thus, urban green spaces and parks (UGSP) help to regulate high urban temperatures, lessen the effect of urban heat islands, and further promote the comfort of city dwellers (WHO, 2016). In addition, UGSP enhance the air quality in urban areas: studies conducted in Ottawa and Singapore, where a number of buildings have green vegetation roofs, revealed a significant reduction in sulphur dioxide and nitrous oxide in those areas (Mensah, 2017). Trees serve as a carbon sink and help remove pollutants, improving the quality of air (Cilliers, 2017; Maghrabi et al., 2021). Moreover, natural elements available in green spaces help intercept the mobility of pollutants and consequently minimize air pollution in urban areas. Another significant benefit of UGSP is the protection of biodiversity and the conservation of the natural environment. Studies on the urban environment have revealed that different forms of urban green spaces harbour a significant amount of biodiversity (Cilliers, 2017; Maghrabi et al., 2021; Mensah, 2014b). Green spaces also mitigate some urban environmental problems, such as soil erosion. Studies show that urban trees, forests, golf courses, parks, and gardens help to maintain urban soil and limit the effects of erosion (Adjei Mensah, 2016; Besada et al., 2019).
Finally, from an aesthetic point of view, green spaces and parks help to beautify the cityscape. Scott (2015) points out that gardens, urban trees, and green spaces help enrich urban architecture through their different styles and colors, making the urban setting more diverse and less monotonous. According to Cilliers (2017) and Mensah (2017), when planning towns and cities, green spaces should be at the forefront of our thoughts because they improve city identity and make urban areas more attractive places to live, invest, work, and visit. Given the multiple environmental benefits of green spaces and parks, it is necessary to preserve them in sub-Saharan African cities.
Economic significance
Economically, since public park projects are often labor-intensive and require substantial maintenance work, they are sources of direct and indirect jobs for individuals in urban areas (Adjei Mensah, 2015). These job opportunities include work for landscape architects, contractors, laborers, and foremen, among others. In developing countries, the high rate of unemployment can be addressed with such job opportunities. According to Takyi and Seidel (2017), thousands of people are employed in different capacities to work on green spaces. Thus, urban green spaces and parks create employment opportunities for numerous people (Adjei Mensah, 2015; Cilliers, 2017; Mensah et al., 2020). A study by Heikinheimo et al. (2020) reveals that entertainment events and shows hosted in parks draw crowds and promote the economic viability of cities. Events such as "Ghana Party at the Park," hosted in the UK, draw over ten thousand people who pay to watch artistes perform, buy food and drinks at these events, and sometimes buy souvenirs. According to Yagha (2015), these events provide income to thousands of people, from event organizers to food vendors. Also, neighborhoods with green spaces have higher property values than areas without parks (Adjei Mensah, 2015; Monica, 2019; Verweij et al., 2009). This helps increase government revenue and aids in the implementation of other projects. In Canada, an assessment showed that the existence of parks in some neighborhoods helped the government to gain 8% more in property tax (Jennings and Bamkole, 2019). The overall economic impact of parks, when analyzed in its different forms, shows that having green spaces in cities provides enormous economic benefits that should be promoted by city authorities. To sum up, green spaces are useful city features that contribute immensely and in diverse ways to the sustainable development of cities. Their benefits are widely expressed in social, economic, and environmental dimensions. Yet, in sub-Saharan African countries, the benefits of green spaces have largely not been realized, as city authorities disregard or are unaware of their significance (Abass et al., 2020; Adjetey et al., 2023; Cobbinah et al., 2020). On the other hand, developed countries have experienced these contributions owing to their regard for parks, and their experiences can serve as lessons for sub-Saharan countries (Hoover et al., 2023; Saavedra and Domingos, 2023; Takyi and Seidel, 2017).
MATERIALS AND METHODS
The study was based on a content analysis of secondary data, and maps were generated to validate the key findings. The term secondary data is used in this study to refer to data that are used to address research questions different from the ones the original collector sought to answer (Vartanian, 2011). The search for the secondary data was guided by phrases such as: a) green spaces in sub-Saharan Africa; b) economic, social, and environmental benefits of green spaces; c) factors responsible for the deterioration of green spaces in SSA; and d) effective management of green spaces. A total of eighty articles, reports, and books related to green spaces and parks, their benefits, depletion, effects, and management were used. All the relevant literature from different geographical locations on the indicators was screened to aid the analysis of the study. At the end of the search, three sub-Saharan cities stood out, and land use-land cover data were obtained to further strengthen the case for the study: Dar es Salaam, Accra, and Luanda (Table 1 and Figure 1).
Image acquisition and processing
High-resolution images produced from sun-synchronous passive sensors on the Landsat satellites were used for this study and remotely acquired. The processing was carried out using Erdas Imagine 2015, ENVI 5.3, Google Earth Pro and ArcGIS 10.5. Specifically, multispectral Landsat 5, Landsat 7, Landsat 8, and Landsat 9 images were obtained from Earth Explorer for 2000, 2001, 2002, 2010, 2011, 2012, 2021, and 2022 (Table 2). Factors influencing the choice of images were the nature of the study, its focus, and the availability of data (cloud-free images) useful to the study. For both types of images, multispectral bands were layer-stacked with the layer stacking tool in ENVI 5.3/ArcGIS 10.7, which makes it possible to combine multiple derivative image measures (texture, independent components, and so forth) into a single multiband image to improve the accuracy of the classification. Although some images had data gaps (due to the Scan Line Corrector (SLC) failure affecting all Landsat 7 images from 2003 onwards), they still maintain the same geometric corrections as images taken earlier and are still useful. The scan line gap was corrected using the gap mask tool (fill gap) in ENVI 5.3 and QGIS 3.16.2 (Hannover). Images with clouds were largely avoided; a few portions with clouds were noticed and compared to ground controls and vital information from Google Earth Pro, which aided in the detection of land cover at those areas for accurate classification. Subsequently, autonomous atmospheric corrections, noise reduction on all images, and other radiometric corrections were carried out where necessary in Erdas Imagine 2015. Images were subsetted with green areas as areas of interest. The metadata of the images used are presented in Table 2.
Image classification
For the purposes of the study, two basic methods were used: the Normalized Difference Built-up Index (NDBI) aided in the delineation of the built-up areas, and the Normalized Difference Vegetation Index (NDVI), via a false-color combination highlighting the infra-red band, was used to identify areas with vegetative cover, hence non-built-up. The NDBI remains a good approach and an effective method for using Landsat images to automatically map out urban built-up areas (Chunyang et al., 2010). The NDVI is an indicator used to identify the presence of green vegetation (Mzava et al., 2019). This vegetation index is also used to quantify urban and infrastructure development and the growth of settlements by observing the decline in green vegetation over a period of time. The robustness of the classification for each year was tested with 100 ground control points (GCP) using the accuracy assessment algorithm of Erdas Imagine. The overall classification accuracy was 95%, with the Kappa statistic being 0.886. These statistics indicate the accuracy of the classification and its usefulness for further analyses (Lillesand et al., 2004). Signature separability was calculated for the bands used in the classification so as to rule out bands that were not useful for the result. In the calculation between means of themes with Euclidean spectral distances, the normalized probability was found to be 0.6000. Outputs were color-ramped and finalized in ArcGIS 10.5, and three thematic classes (that is, water, built-up, and green spaces) were identified for this work.
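Both indices reduce to simple band ratios. The sketch below computes them from NumPy arrays of reflectance, assuming the standard band definitions (NDVI from the red and near-infra-red bands, NDBI from the near-infra-red and short-wave-infra-red bands); the thresholds and the externally supplied water mask are illustrative assumptions rather than the values and workflow used in the original ENVI/Erdas processing.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def ndbi(nir, swir):
    """Normalized Difference Built-up Index: (SWIR - NIR) / (SWIR + NIR)."""
    return (swir - nir) / np.clip(swir + nir, 1e-6, None)

def classify(red, nir, swir, water_mask,
             veg_threshold=0.3, builtup_threshold=0.0):
    """Very simple three-class map: 0 = water, 1 = built-up, 2 = green space.

    Thresholds are illustrative; in practice they are tuned per scene and
    checked against ground-control points.
    """
    classes = np.full(red.shape, 2, dtype=np.uint8)        # default: green space
    builtup = (ndbi(nir, swir) >= builtup_threshold) & (ndvi(red, nir) < veg_threshold)
    classes[builtup] = 1
    classes[water_mask] = 0
    return classes
```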
In order to determine the rate of increase or decrease in land cover, a change detection technique was used to determine the land surface changes that occurred from 2000 to 2022 (Lu et al., 2010). To better comprehend the rate of change over the study period, the rate of change as applied in Mensah et al. (2020) was also determined for the various cities.
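A sketch of the change-detection bookkeeping: given the classified maps for two dates, it tabulates per-class area shares and a simple rate of change. The annualised rate shown here (change in percentage points divided by the number of years) is one common formulation and is an assumption; it may differ from the exact rate-of-change formula applied in Mensah et al. (2020).

```python
import numpy as np

CLASS_NAMES = {0: "water", 1: "built-up", 2: "green space"}

def class_shares(classes):
    """Percentage of pixels in each class of a classified map."""
    total = classes.size
    return {name: 100.0 * (classes == code).sum() / total
            for code, name in CLASS_NAMES.items()}

def change_report(classes_t1, classes_t2, years):
    """Per-class change between two dates, plus a simple annual rate."""
    shares_1, shares_2 = class_shares(classes_t1), class_shares(classes_t2)
    report = {}
    for name in shares_1:
        change = shares_2[name] - shares_1[name]
        report[name] = {
            "share_t1_pct": shares_1[name],
            "share_t2_pct": shares_2[name],
            "change_pct_points": change,
            "annual_rate_pct_points": change / years,
        }
    return report
```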
RESULTS AND DISCUSSION
The land cover changes in the study areas were classified into built-up, water, and green spaces. The results of the classified satellite images over the period under consideration showed that there has been an increase in the built areas in the selected cities over the years, which in effect has resulted in a massive decline in green space cover.
Depletion of green spaces in Dar es Salaam, Tanzania
In Dar es Salaam, the change detection statistics indicated that in the year 2000, the green space cover stood at 86.6%, while the built-up area was 13%, with the water body in the city taking 0.4%. In 2010, however, the green space and water body declined from 86.6 and 0.4% in 2000 respectively, to 80.4 and 0.3% while the built-up area increased to 19.3%. As per the land cover data, the levels of encroachment on green spaces increased between 2010 and 2021 as there was a sharp decline in green spaces from 80.4% in 2010 to 40.1% in 2021, and the water bodies also lost 0.1% of coverage while the built-up areas increased from 19.3% in 2010 to 59.7% in 2021, as shown in Figures 2 and 3.
These changes could be linked to the level of urbanization in Dar es Salaam, as reflected in population figures from the Tanzania National Bureau of Statistics. Table 3 shows the crucial statistics of all the land cover changes that occurred between 2000 and 2021.
Depletion of green spaces in Accra, Ghana
The analysis of change detection in Accra revealed that in 2002, green space coverage accounted for 48.79%, while built-up areas constituted 47.95%, with the remainder made up of water. Table 4 provides crucial statistics on the land cover changes in Accra between 2002 and 2021.
Depletion of green spaces in Luanda, Angola
The change detection statistics in Luanda reveal that in the year 2000, the city's green space cover accounted for 21.2%, while the built-up area accounted for 77.8%, and the water body occupied 1%. However, by 2012, the green space and water body declined significantly from 21.2 and 1%, respectively, to 7.93 and 0.07%, while the built-up area increased to 92%. The analysis of land cover data indicates that from 2012 to 2022, the level of encroachment on green spaces intensified, leading to a sharp decline in green space coverage from 7.93% in 2012 to 5.4% in 2022. Additionally, the water resource lost 0.82% of its coverage, while the built-up area increased from 92% in 2012 to 94.6% in 2022, as illustrated in Figures 6 and 7.
In Luanda, the 1990 official census revealed a population of 2,100,000, which significantly increased to 3,400,000 in 2000 (Instituto Nacional de Estatistica, 2000), prompting a substantial surge in urbanization. By 2012, the population had grown to 6,945,386 (Instituto Nacional de Estatistica, 2012), leading to a corresponding rise in the built-up area from 77.8% in 2000 to 92%. Rural-urban migration, driven by job opportunities, access to quality education, and a better standard of living, largely contributed to the increase in population and the consequent land cover changes. In 2022, the population of Luanda further increased to 9,700,000 (Brinkhoff, 2023), representing a significant 39.65% change over 10 years, leading to an increase in the built-up area from 92% in 2012 to 94.6% in 2022. Unfortunately, proper urban planning has not kept pace with population growth, leading to a disregard for the preservation of green spaces in many developing countries. Table 5 provides crucial statistics on all land cover changes between 2000 and 2022.
CAUSATION FACTORS
The foregoing analysis depicts the trend of green space depletion in three sub-Saharan African cities. Content analysis of the literature reveals several factors responsible for the state of green spaces in the region. The factors are discussed under the following themes: (a) pressure of rapid urbanization; (b) weak urban planning system and regulations; (c) socioeconomic challenges; and (d) political issues.
Pressure of rapid urbanization
In Sub-Saharan Africa, one of the predominant causes of the depletion of green spaces is rapid urbanization (Nero, 2016; Tibesigwa et al., 2020). Some of the world's most populous cities, such as Lagos (Nigeria) and Kinshasa (D.R. Congo), are found in the region. The 2018 State of African Cities Report put together by UN Habitat indicated that over 900 million people live in the Sub-Saharan region, and 50% of these people live in urban centers (UN Habitat, 2018). The report presented striking statistics revealing the intensity of the region's urbanization and its effects. For instance, Southern Africa, the zone where nations such as Zambia, Zimbabwe, and the Republic of South Africa are found, has 60% of its population living in urban areas. As of 2010, West Africa had 137.2 million people living in urban areas, and this figure is projected to reach 427.7 million by 2050. The depletion of urban green spaces in African cities is directly associated with the region's rapid urbanization (Besada et al., 2019; Emife, 2020). A manifestation of this phenomenon is the massive emergence of slums across the region's cities. Urban sprawl settlers take up environmentally sensitive zones and lands reserved for urban green spaces. The Sub-Saharan region has over 200 million slum dwellers, the world's highest slum population (UN Habitat, 2020). The rapid rate of urbanization in Nigeria, with its corresponding increase in informal settlements and the depletion of urban green spaces, cannot be overemphasized. Lagos, the financial capital of Nigeria, has witnessed a tremendous increase in its population, from 1.4 million in 1970 to 14.8 million in 2021 (National Population Commission Nigeria, 2021). The drastic increase in population has resulted in the creation of many shanty towns, causing the worrying depletion of several urban green spaces (Dekolo et al., 2015). The phenomenon is worrying in Lagos, as slum dwellers in Makoko have extended their settlement from the land onto the Makoko water (BidCreates, 2020). This illustrates the level of sprawl in the city. In Dar es Salaam, Tanzania, the consequences of rapid urbanization for the depletion of green spaces are alarming. Studies on the proliferation of informal settlements indicate that there is evidence of both planned and informal residential settlements in the city dating back as far as 1992, and this is a major contributor to the sale of environmentally sensitive zones and urban green spaces in the informal land market (Bhanjee and Zhang, 2018; Tibesigwa et al., 2020). Also, the urban trees that were planted to enhance the aesthetics of the city and protect the urban environment have to a large extent been destroyed (Tibesigwa et al., 2020).
Furthermore, high rates of urbanization have caused many cities in South and West Africa, such as Johannesburg, Cape Town, Pretoria (South Africa), Luanda (Angola), Harare (Zimbabwe), Kano, Kaduna, Ibadan (Nigeria), Accra, Tema, Kumasi (Ghana), Freetown (Sierra Leone), and Dakar (Senegal) to lose major portions of their public green spaces to infrastructural development and urban sprawl (Abass et al., 2020; Dekolo et al., 2015;Nero, 2016). In a related development, a study on informal settlement and urban sprawl in Dar es Salaam and its effects on green spaces showed a considerable loss of urban greenspace coverage due to built-up area expansion (Bhanjee and Zhang, 2018). Specifically, the study showed that from 1982 to 2002, informal settlement grew by 120% at the expense of the natural environment (Bhanjee and Zhang, 2018).
Weak urban planning system and regulations
Town planning in Sub-Saharan Africa is governed by regulations issued by the legislative assembly and approved by the executive. Inasmuch as there are several land regulations on green spaces in Sub-Saharan Africa, their implementation has been a major problem. The following issues are identified as the major factors causing the weak urban planning regulations on green spaces: The nature of urban planning systems in the region, the lengthy and bureaucratic nature of acquiring development permits in the region, and less regard for green spaces by city authorities in Sub-Saharan Africa.
The outmoded and archaic nature of urban planning systems in the region fails to address the current challenges associated with development and urbanization in cities. Although urban planning regulations exist in Sub-Saharan Africa, most of the fundamental regulations and plans were made several decades ago by the colonial authorities (Awuah et al., 2010). For instance, the 1946 and 1956 town planning ordinances of Nigeria and Tanzania were prepared by colonial masters and do not have the teeth needed to curb urban sprawl and protect urban greenery; the situation is similar for countries like Ghana and Malawi (Josse, 2020). These regulations, prepared almost 65 years ago, are still in operation. Little and, in some instances, no change has been made to the regulations, which makes it difficult to address the current urban environmental challenges in most Sub-Saharan African cities. Furthermore, Sub-Saharan African cities over-rely on master plans that were prepared by colonial masters to manage development in urban areas. Master plans are maps that display the long-term desired urban form in broad land use terms (JICA, 2020). These master plans are not suited to dealing with detailed land use challenges such as the depletion of urban green spaces, because they are outmoded and proper stakeholder engagement was not conducted during the drafting of the plans in the colonial era (Boamah and Amoako, 2013). Cities like Accra (Ghana) have a master plan that was drawn in 1944, revised in 1957, and is still in operation; the master plan of Abuja (Nigeria) was drawn in 1970, and physical development is still based on this plan. Current development patterns and challenges in the cities of Sub-Saharan Africa make it practically impossible for worn-out regulations and plans to guide the growth and development of these cities, resulting in the massive encroachment of environmentally sensitive zones and green spaces.
The lengthy and bureaucratic nature of acquiring development permits in the region was identified as a contributor to the weak urban planning system. Across the region, obtaining land development permits takes quite a long time, and the process is mostly bureaucratic. In Nigeria, it was noted that for an individual to get the right documentation for land acquisition, they would have to go through thirty-two processes, and it sometimes takes over a year to get it done (Adjei Mensah, 2015; Josse, 2020). In Tanzania, the process of having building permits and plans approved can take up to four years. Moreover, in countries like Ghana and DR Congo, securing development permits from state planning agencies can take up to two years (Adjei Mensah, 2015). The bureaucratic processes lead urban developers and private citizens to evade the procedures required for land development projects and sometimes to resort to bribing their way through the process. As a result, environmentally sensitive areas such as wetlands and urban green spaces are encroached upon by these developers, with no one able to hold them accountable (Adjei Mensah, 2015).
Moreover, city authorities are less concerned about protecting urban green spaces. A study by UN Habitat on the state of planning in Africa revealed that rapid urbanization has shifted the attention of city authorities to the provision of transport infrastructure, schools, public housing, and other social infrastructure, while issues regarding public park management and protection have been neglected (Addo-Fordwuor, 2014; UN-Habitat and Africa Planning Association, 2013). In Tanzania, Olaleye et al. (2013) also observed that city authorities pay little regard to urban green spaces because of the enormous socioeconomic benefits associated with urbanization. Also, in Kenya, city authorities have little regard for the management and development of green spaces, which has resulted in a lack of basic facilities such as washrooms, chairs, and notice boards in public parks (Adjei Mensah, 2015).
Socioeconomic challenges
A major social challenge associated with the loss of urban green spaces is the low enthusiasm of city dwellers for the preservation of green spaces. Due to the limited stakeholder engagement in making decisions about green spaces and the low level of education of city residents on the benefits of green spaces, individuals tend to ignore the depletion of green spaces. For instance, in Ghana, decisions on green spaces are left to city authorities; this approach lacks public consensus and support (Addo-Fordwuor, 2014). This challenge is similar in many countries across the Sub-Saharan Africa region, where city residents are not consulted or involved in the management of green spaces (Narh et al., 2020). The lack of involvement of city residents and their inadequate knowledge of the benefits of green spaces have led to the perception that green spaces are the sole responsibility of city authorities, and to the neglect of green spaces in residents' own neighborhoods. This has resulted in the depletion of green spaces by city dwellers and the conversion of these spaces into dumpsites in cities such as Accra and Kumasi (Ghana), Lagos and Kaduna (Nigeria), Luanda (Angola), and Dar es Salaam (Tanzania) (Mensah, 2014b; Tibesigwa et al., 2020).
The economic challenges associated with the depletion of green spaces are the misappropriation and embezzlement of state funds, and poverty among urban dwellers. Socioeconomic development activities such as green space development, even with their small budgetary allocations, often have their funds misappropriated and embezzled by state officials (Mensah, 2014b; Narh et al., 2020). Funds allocated for the development of green spaces in most Sub-Saharan African countries often find their way into the pockets of corrupt officials (Mensah, 2017). For instance, in Harare (Zimbabwe), funds donated by donor agencies to fund projects that would protect environmentally sensitive sites were found to have been diverted and misused, bringing the projects to a halt (Adjei Mensah, 2015; Tibesigwa et al., 2020). In Accra (Ghana), as of 2016, the Forestry Commission set out to redevelop the Achimota Forest, an urban forest, into an ultra-modern ecotourism park, but to date the project has not commenced (Dasmani, 2016). The funding for the project, initially stated as $1.2bn by the Deputy Chief Executive of the Forestry Commission, Mr. John Allotey, was slashed to $320m (Anstey, 2016). The project has since been abandoned, and conversations about the ecotourism park have been dropped due to the diversion of funds initially allocated for the project. Poverty among urban dwellers was also identified as an economic challenge that poses an existential threat to green spaces. Studies on urban poverty in Sub-Saharan Africa indicate that the region's rate of urban poverty is dire (Emife, 2020). Currently, the international poverty line stands at $1.90 per person per day, and the Sub-Saharan Africa region holds the largest number of poor people in the world after it overtook Asia in 2019 (Emife, 2020). In 2020, the Nigerian National Bureau of Statistics indicated that 40% of Nigerians live below the poverty line (Onyeiwu, 2021). As of 2019, the World Bank reported that 38.5% of the people in Benin also live below the poverty line (Emife, 2020). The rate of poverty in the region is linked to the depletion of green spaces, as poor people rely on these spaces for survival. In African cities, poor people take over environmentally sensitive areas and make such areas their homes, as in Makoko (Lagos) and Old Fadama (Accra). The consequence of these acts is the excessive depletion of green spaces in the cities of Sub-Saharan Africa by the urban poor.
Political issues
Among African leaders, the political will to embark on green space initiatives is weak. In many cities in Sub-Saharan Africa, there is a lack of enthusiasm for initiating measures to promote the development and management of green spaces. A study conducted by Amoako and Adom-Asamoah (2017) indicated that instead of preserving green spaces in Kumasi (Ghana), political actors in some cases change their use. A typical example is the Kumasi Race Course, which has now been converted into a commercial space. Also, the Achimota Eco-tourism Park in Accra (Ghana), which was to commence in 2016, has still not seen the light of day due to a lack of political will (Dasmani, 2016). In the wake of calls for action to tackle climate change, several leaders in Sub-Saharan African countries such as Nigeria, Kenya, and DR Congo pledged to plant trees in their major cities, but the implementation of these projects has been abysmal owing to the lack of political will (Agency Report, 2020; Alfa Shaban, 2019; Kimani and Schreckenberg, 2020). The unwillingness of Sub-Saharan African politicians to take the bull by the horns and work to protect and preserve public parks is a major cause of the disappearance of these parks.
Finally, political instability in the region was identified as another major challenge to the preservation of green spaces. The last three decades have been challenging for many Sub-Saharan African states, with civil wars and coups in Angola, Côte d'Ivoire, Mali, Guinea, DR Congo, and Liberia. The disturbing effects of these wars on sustainable development and the preservation of green spaces cannot be downplayed. For instance, in Liberia and Côte d'Ivoire, long years of war destroyed a substantial part of the urban natural environment in Monrovia and Abidjan (Adjei Mensah, 2015). Civil wars in Angola and Mali also led to the loss of trees and the destruction of parks. Coups d'état in countries like Mali and, recently, Guinea also impede urban development. Usually, when coups occur, activities the government was carrying out are halted; these activities may include the development of urban parks or even large-scale tree planting agendas.
POLICY IMPLICATIONS AND RECOMMENDATIONS
The advancement of sustainable development ensures that all facets of urban development, including urban parks, stand the test of time and that green spaces are not converted to other uses (Elliot, 2017). Conserving urban parks has enormous benefits, such as promoting greenery and protecting the climate. However, there are several problems associated with urban parks in sub-Saharan Africa. One problem in SSA is that the appropriate institutions do not have the power to carry out their activities fully without governments intervening. There have also been issues with encroachment on lands purposely set aside for urban parks, mostly driven by rapid population growth and urban sprawl (Yeshitela, 2019). These problems must be remedied deliberately: a conscious effort should be made to conserve these parks by curtailing, if not eliminating, existing and potential problems. In this regard, the subsequent paragraphs highlight effective and efficient ways of sustaining the management of green spaces in sub-Saharan Africa.
Preparation of public park plans
With the growing population of sub-Saharan Africa, there has been an urgent need for commercial, agricultural, and residential land uses (Kimengsi and Fogwe, 2017). More and more farmers are seeking farmland in order to grow food to feed their families. Others are illegally taking urban lands to build structures, inadvertently destroying urban green spaces (Guenat et al., 2019; Tibesigwa et al., 2020). This has been made worse by rapid and inappropriate urban sprawl and rural-urban migration (Guenat et al., 2021). To resolve this situation, there is a need to prepare and implement public park or green space plans at the regional and local levels, and to build institutional competence and capacity to arrest the tragedy (Yeshitela, 2020; Yeshitela, 2019). With this, all urban parks must be properly planned and strictly protected. These green space plans are to ensure that spaces set aside as urban parks are protected by law and that anyone who flouts it is adequately punished. Recalcitrant inhabitants may still opt to flout the plan and disobey city authorities, and that is where the law is expected to take its course.
Building institutional competence and capacity
Institutions in charge of managing urban parks in sub-Saharan Africa have neither the power to enforce laws made for the protection of these green spaces nor the financial muscle to maintain them, given their high maintenance costs (Lindholst et al., 2015). These institutions are not able to prevent encroachment on lands allocated for such purposes and, for that matter, are not able to sue individuals who go against the law (Guenat et al., 2019). They are also poorly resourced, primarily in terms of finance. Although some of these urban parks charge entry fees, they are still unable to accrue the funds necessary for routine maintenance and sustenance. This has led to many urban parks failing despite being developed with huge sums of money. Institutions in charge of managing these urban parks can make it an institutional priority to sue any individual who trespasses on their land and can be innovative in raising or generating funds internally for routine maintenance. For example, special, attractive offers and programmes can be introduced in the urban parks to attract people and encourage patronage, and the proceeds can be used to cover maintenance costs. Where these are not sufficient, the institutions can collaborate with other companies and corporations to render services that would increase their funds.
Even after putting all these structures in place, the onus lies on these institutions to sensitize the public to the value of urban parks, the penalties associated with trespassing on park lands or polluting the parks, and the various social activities, offers, and collaborations that the public can take advantage of (Guenat et al., 2019, 2021; Yeshitela, 2020). These institutions can ensure a participatory approach in which the views of the people are fully sought and taken into consideration in the management of these parks. This will encourage active participation of residents around the park, appreciable patronage of the parks, and general cooperation towards fundraising and management of the parks (Guenat et al., 2019).
Institutional independence
Frequent political succession does a great disservice to state institutions in sub-Saharan Africa (Yeshitela, 2019). More often than not, when a framework concerning urban parks is being developed by an outgoing government, the newly sworn-in government prefers to start projects designated by its own political party, thereby abandoning projects commenced by its predecessor. There is also political interference: institutions are not able to undertake certain activities if the ruling government does not support them. The interference comes in two forms. In the first, the government is expected to approve a legal framework concerning these urban parks or to provide a fund allocation, but because doing so does not serve the ruling party's interests or priorities, the funds or framework are delayed for years. In the second, the institutions in charge of these urban parks may wish to take action against citizens who act against the parks and their surroundings but are unable to do so. These two instances render the institutions powerless in every sense.
Sustaining these parks would mean that the institutions in charge of them are allowed full autonomy to operate, maintain, and punish culprits who flout the rules. Consequently, institutions are empowered to do more to protect these parks with the full backing of the government. In the event that full autonomy makes way for corruption, then the government, having granted this autonomy, should also put in place a very effective audit system to ensure that accountability is observed. This autonomy is expected to be made effective through the preparation of new legislative or executive instruments entrenched in the highest laws of the various countries in sub-Saharan Africa or through the strengthening of existing legal frameworks (Nelson and Agrawal, 2008). Entrenchment prevents governments from changing the laws to suit their priorities. It will also give credibility to the institutions managing the parks and help them exercise the power they wield over the effective management of urban parks.
CONCLUSION
Creating ecologically and economically vibrant cities has made green spaces an essential feature of city development. Green spaces are natural and multifunctional spaces that provide social, environmental, and economic benefits to cities as well as enhancing a city's aesthetic image. Around the world, green space plays an important role in enhancing the economic, health, and social life of cities, making it significant in achieving healthy and sustainable urban communities. In the discourse of sustainable development, green spaces play a crucial role owing to their contributions to the social, economic, and environmental facets of society. In spite of the numerous benefits they provide, there is evidence of the rapid depletion of green spaces in many sub-Saharan African cities. Based on a comprehensive review of relevant literature, this paper contributes to the discourse on the trajectory of urban green space depletion in three cities within the Sub-Saharan Africa region. The findings show that, in spite of the many benefits green spaces provide, there is a significant decline in green space cover in SSA, with spatial evidence from the selected cities. In Accra, Ghana, evidence shows a significant decline in the city's green infrastructure between 2002 and 2021. Approximately 48.8, 48, and 3.2% of the land area was made up of greenery, built-up area, and water bodies, respectively, in 2002. By 2011, however, the green areas had declined by 16.8 percentage points (from 48.8% in 2002 to 32% in 2011), while the built-up area had increased to 65%. The green areas of the city declined further to 27% in 2021, compared with 71.2% for the built-up area. Secondly, in Dar es Salaam, the financial capital of Tanzania, evidence shows a significant decline in urban green space over a 21-year period. Approximately 86, 13, and 1% of the city's land comprised green space, built-up area, and water bodies, respectively, in 2000. The green area declined to 80% in 2010, a 6 percentage point reduction, with a corresponding increase in the built-up area. There was a further rapid decline in the green area to 40% in 2021, largely due to increased population and the corresponding demand for land for physical development. Similar to the evidence from Accra and Dar es Salaam, Luanda has experienced a decline in its green infrastructure over time. Land cover analysis shows that Luanda's green areas covered 21% of its total land area (1,155.52 sq. km) in 2000, compared with 78% for the built-up area. In 2012 and 2022, however, its greenery declined to 7.9 and 5.4%, respectively, while its built-up area increased from 78% in 2000 to 92% in 2012 and 94% in 2022.
Several factors were found to account for the decline in green space cover in these cities. Despite the growing recognition of the benefits of urban green spaces, the review identifies several factors that hinder their development, performance, and management in SSA. The primary obstacle is the rapid urban population growth in the region, leading to uncontrolled urban development and urban sprawl. High population growth has resulted in informal urbanization, leading to encroachment on urban green spaces and areas designated for greenery. Additionally, weak urban planning regimes and poor enforcement of development control measures exacerbate the decline of urban green spaces, forests, nature reserves, and parks. Based on these findings, this research calls for the preparation of public park plans, the building of institutional competence and capacity, and institutional independence. These measures address the many challenges responsible for the decline of green spaces in the cities. | 9,697.8 | 2023-07-31T00:00:00.000 | [
"Economics"
] |
Atomic Layer Deposition of Buffer Layers for the Growth of Vertically Aligned Carbon Nanotube Arrays
Vertically aligned carbon nanotube arrays (VACNTs) show great potential for various applications, such as thermal interface materials (TIMs). In addition to thermally oxidized SiO2, atomic layer deposition (ALD) was used to synthesize oxide buffer layers, such as Al2O3, TiO2, and ZnO, before the deposition of the catalyst. The growth of VACNTs was found to depend strongly on the oxide buffer layer, which generally prevents the diffusion of the catalyst into the substrate. Among them, the thickest and densest VACNTs were achieved on Al2O3, and the carbon nanotubes were mostly triple-walled. Moreover, the deposition temperature was critical to the growth of VACNTs on Al2O3, and their growth rate decreased markedly above 650 °C, which might be related to Ostwald ripening of the catalyst nanoparticles or subsurface diffusion of the catalyst. Furthermore, a VACNTs/graphene composite film was prepared as a thermal interface material. The VACNTs and graphene proved to be effective vertical and transverse heat transfer pathways in the film, respectively.
Background
Vertically aligned carbon nanotube arrays (VACNTs) have outstanding properties and show great potential for a wide variety of applications. Due to their high axial thermal conductivity, many VACNT-based thermal interface materials (TIMs) have been developed for thermal packaging applications [1][2][3][4][5][6][7]. To synthesize high-quality VACNTs on different substrates, chemical vapor deposition (CVD) has commonly been used, and a buffer layer should be deposited on the substrate before the deposition of the catalyst, such as Fe. Generally, buffer layers are used to prevent the diffusion of the catalyst into the substrate, so it is also very important to achieve high-quality buffer layers on different substrates.
Atomic layer deposition (ALD) exhibits self-limiting growth behavior, which can yield pinhole-free, dense, and conformal films on complex non-planar substrates [8].
Recently, many researchers have used it to deposit buffer layers for the growth of VACNTs [9][10][11]. Amama et al. reported the water-assisted CVD of VACNTs using ALD Al2O3 as the buffer layer [9]. Quinton et al. reported the floating-catalyst CVD of VACNTs using Fe as the catalyst. They found that VACNTs had a faster nucleation rate and more uniform tube diameter on the ALD Al2O3 buffer layer than on SiO2 [10]. Compared with thermal and microwave-plasma SiO2, the VACNTs grown on ALD SiO2 had the fastest nucleation rate [10]. Yang et al. reported that VACNTs could be synthesized on non-planar substrates using ALD Al2O3 as the buffer layer and Fe2O3 as the catalyst [11]. Compared with a planar surface, a non-planar surface can largely increase the specific surface area, which is very beneficial for the preparation and further applications of VACNTs [12][13][14]. Although some ALD oxide buffer layers have been synthesized for the growth of VACNTs, their role in the CVD process is still not very clear.
In this research, we used CVD to prepare VACNTs with different buffer layers, including ALD Al2O3, ALD TiO2, ALD ZnO, and thermally oxidized SiO2. The effects of the different oxide layers and of the deposition temperature on the growth of VACNTs were analyzed. Besides, a VACNTs/graphene composite film was also developed as a thermal interface material, in which the VACNTs were used as additional vertical thermal transfer pathways.
Methods
Al2O3, ZnO, and TiO2 thin films were deposited on Si substrates by ALD, and SiO2 was formed on a Si substrate by thermal oxidization. Trimethylaluminum (TMA), tetrakis(dimethylamino)titanium (TDMAT), and diethylzinc (DEZ) were used as the precursors for ALD of the Al2O3, TiO2, and ZnO films, respectively. For all of them, H2O was used as the oxygen source, and the deposition temperature was set at 200 °C. The thickness of the Al2O3, ZnO, TiO2, and SiO2 films was 20 nm. A 1-nm-thick Fe film, used as the catalyst, was deposited on all of them by electron-beam (EB) evaporation. The VACNTs were synthesized by CVD in a commercial system (AIXTRON Black Magic II). Before the growth of VACNTs, the catalyst was annealed in a hydrogen (H2) atmosphere at 600 °C for 3 min with an H2 flow rate of 700 sccm. After that, acetylene (C2H2) and H2 were introduced into the chamber, and the VACNTs were grown. The flow rates of C2H2 and H2 were 100 and 700 sccm, respectively. The deposition temperature was varied from 550 to 700 °C, and the growth period was fixed at 30 min.
After the growth of VACNTs on Al2O3, a VACNTs/graphene composite film was prepared as the thermal interface material. Epoxy resin, curing agent, and diluents were purchased from Sigma-Aldrich Trading and Tokyo Chemical Industrial Co., Ltd. The multilayer graphene was purchased from Nanjing Xianfeng Nanomaterials Technology Co., Ltd. For the preparation of the composite film, the catalyst was first patterned using a lithography machine (URE-2000S/A). The pattern size was 500 μm, and the spacing between patterns was 150 μm. Secondly, the VACNTs were deposited by CVD at 650 °C for a growth period of 30 min. Thirdly, the VACNTs were densified with acetone vapor for 20 s. Fourthly, graphene, epoxy resin, curing agent, and diluent were mixed as the matrix, with the amount of graphene fixed at 10 wt.%. After that, the VACNTs were immersed into the matrix and cured in a vacuum oven at 120 °C for 1 h and then at 150 °C for 1 h. Finally, the prepared composite film was polished to a thickness of about 300 μm, with the tips of the VACNTs protruding from both of its surfaces, as shown in Fig. 1.
The morphology of the VACNTs and the composite film was analyzed by field emission scanning electron microscopy (FESEM, Merlin Compact) and transmission electron microscopy (TEM, Tecnai G2 F20 S-TWIN).
Raman spectra of the VACNTs were recorded with an inVia Reflex spectrometer using a laser excitation wavelength of 632.8 nm. The thermal diffusivity (α) and specific heat capacity (Cp) of the composite film were measured by a laser flash thermal analyzer (Netzsch LFA 467) and a differential scanning calorimeter (DSC, Mettler Toledo DSC1), respectively. The thermal conductivity was then calculated according to Eq. 1, λ = α·Cp·ρ, where λ and ρ are the thermal conductivity and density of the composite film, respectively. Figure 2 a, b, and d show the VACNTs grown on the Al2O3, TiO2, and SiO2 buffer layers. Among them, the VACNTs were the thickest on Al2O3, which indicated that the lifetime of the catalyst nanoparticles was the longest on it during the growth period. The lifetime of a catalyst nanoparticle is the time until it has essentially lost its catalytic function for growing carbon nanotubes, and it can be deduced from the thickness of the VACNTs [9]. In contrast, relatively thin VACNTs were deposited on SiO2 and TiO2, which might be caused by relatively serious Ostwald ripening of the catalyst nanoparticles or subsurface diffusion of Fe [15,16]. As shown in Fig. 3, Ostwald ripening is a phenomenon whereby larger nanoparticles increase in size while smaller nanoparticles, which have greater strain energy, shrink and eventually disappear via atomic surface diffusion [17]. When a catalyst nanoparticle disappears, or when too much catalyst is lost, the carbon nanotubes growing from it stop [17]. Besides, subsurface diffusion of Fe into the buffer layer or substrate can also cause mass loss from the catalysts that grow the carbon nanotubes, eventually terminating growth [16]. From Fig. 2 a, b, and d, we can also see that the density of the VACNTs was the highest on Al2O3 and the lowest on TiO2. Generally, any marginal alignment seen in CVD samples is due to a crowding effect, with the carbon nanotubes supporting each other by van der Waals attraction [18]. The density of the VACNTs is therefore quite important, and a higher density generally results in better vertical alignment, as confirmed in Fig. 2 a, b, and d. Besides, Fig. 2 c shows that there were almost no VACNTs grown on ZnO, which could be caused by much more serious Ostwald ripening of the catalyst nanoparticles and subsurface diffusion of Fe compared with the other buffer layers [15,16]. Figure 4 a-d show the Raman spectra of the VACNTs grown on Al2O3, TiO2, ZnO, and SiO2. Generally, the D, G, and G' bands were around 1360 cm−1, 1580 cm−1, and 2700 cm−1, respectively [19,20]. For the different oxide buffer layers, the ratio of ID to IG was calculated to be near or above 1, and there were also no radial breathing modes (RBMs) around 200 cm−1, indicating that all the prepared VACNTs were multi-walled. Figure 5 a-d show the morphology of the VACNTs on the different oxide buffer layers, as analyzed by TEM. The VACNTs were multi-walled on all of them, consistent with the Raman analysis. The VACNTs were mostly triple-walled on Al2O3, but had more than four walls on TiO2, ZnO, and SiO2. Figure 6 shows the variation of the VACNT growth rate with deposition temperature on Al2O3 and SiO2. When the temperature increased, the growth rate of the VACNTs first rose and then decreased on both of them, which might be related to serious Ostwald ripening of the catalyst nanoparticles or subsurface diffusion of Fe, both of which largely reduce the lifetime of the catalyst nanoparticles and the growth rate of the VACNTs [15,16].
Above 600 °C, the growth rate of the VACNTs still increased on Al2O3 but decreased on SiO2, indicating that the lifetime of the catalyst nanoparticles on Al2O3 was longer than that on SiO2. When the deposition temperature was below 500 °C, there were obvious VACNTs on Al2O3 but none on SiO2, which means that the nucleation and initial growth of VACNTs are more easily achieved on Al2O3 than on SiO2, i.e., the activation energy for nucleation and initial growth on Al2O3 is much lower than that on SiO2. Commonly, each catalyst nanoparticle can produce at most one carbon nanotube, but not all catalyst nanoparticles yield carbon nanotubes, because an activation energy has to be overcome for nucleation and initial growth [21][22][23]. Therefore, compared with SiO2, the lower activation energy on Al2O3 might result in the higher density of VACNTs, which is confirmed by Fig. 2 a and d. Figure 7 a shows the morphology of the VACNTs grown from the patterned catalyst on Al2O3. Generally, there were still many gaps inside the VACNTs, which were filled with air, as shown in Fig. 2 a. However, the thermal conductivity of air is only 0.023 Wm−1K−1 at room temperature, so the VACNTs needed to be densified to remove it. From Fig. 7 b, we can see that obvious densification of the VACNTs was achieved with the acetone vapor. Figure 7 c shows the cross-sectional image of the VACNTs/graphene composite film, in which the VACNTs and graphene serve as additional vertical and transverse thermal transfer pathways, respectively. Figure 8 a and b show the vertical and transverse thermal conductivities of the composite film, which were measured to be about 1.25 and 2.50 Wm−1K−1, respectively. Compared with the pure epoxy resin, both have been obviously enhanced, confirming that effective vertical and transverse heat transfer pathways are provided by the VACNTs and graphene in the composite film, respectively.
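The thermal-conductivity evaluation described above (Eq. 1) amounts to a single multiplication once the laser-flash diffusivity, DSC heat capacity, and density are known. The short sketch below illustrates this with explicit unit conversions; the numerical values are placeholders chosen only to reproduce the order of magnitude quoted for the composite film, not measured data.

```python
# Minimal sketch of Eq. 1: lambda = alpha * Cp * rho, with unit handling.
def thermal_conductivity(alpha_mm2_s: float, rho_g_cm3: float, cp_J_gK: float) -> float:
    """Thermal conductivity in W m^-1 K^-1.

    alpha_mm2_s : thermal diffusivity in mm^2/s (laser flash analysis)
    rho_g_cm3   : density in g/cm^3
    cp_J_gK     : specific heat capacity in J/(g K) (DSC)
    """
    alpha_m2_s = alpha_mm2_s * 1e-6   # mm^2/s -> m^2/s
    rho_kg_m3 = rho_g_cm3 * 1e3       # g/cm^3 -> kg/m^3
    cp_J_kgK = cp_J_gK * 1e3          # J/(g K) -> J/(kg K)
    return alpha_m2_s * rho_kg_m3 * cp_J_kgK

# Illustrative (not measured) inputs giving a value of the order of 1 W m^-1 K^-1:
print(thermal_conductivity(alpha_mm2_s=0.8, rho_g_cm3=1.3, cp_J_gK=1.2))
```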
Conclusions
The growth of VACNTs has been analyzed on different oxide buffer layers, namely ALD Al2O3, ALD TiO2, ALD ZnO, and thermally oxidized SiO2. Among them, the VACNTs were the thickest and densest on Al2O3, indicating that the lifetime of the catalyst nanoparticles was the longest and the vertical alignment of the VACNTs was the best on it. Besides, the VACNTs on Al2O3 were found to be multi-walled (mostly triple-walled), and the deposition temperature was very critical to the growth of the VACNTs. Compared with SiO2, the nucleation and initial growth of VACNTs were more easily achieved on Al2O3, which resulted in a higher density of VACNTs on it. After the growth of the VACNTs on Al2O3, they were used to prepare the composite film together with graphene and epoxy resin. Compared with the pure epoxy resin, the vertical and transverse thermal conductivities of the composite film have been largely improved. | 3,059 | 2019-04-02T00:00:00.000 | [
"Materials Science",
"Engineering"
] |
Estimation-type results on the k-fractional Simpson-type integral inequalities and applications
ABSTRACT We establish a Simpson-type identity with multiple parameters and certain Simpson-type inequalities via k-fractional integrals. Notably, the inequalities obtained in this article generalize some results presented by Set et al. [Simpson type integral inequalities for convex functions via Riemann-Liouville integrals. Filomat. 2017;31(14):4415–4420] and Sarikaya et al. [On new inequalities of Simpson's type for s-convex functions. Comput Math Appl. 2010;60:2191–2199]. As applications, we also provide several inequalities for f-divergence measures and probability density functions. We expect that this study will lead to new explorations of Simpson-type inequalities via k-fractional integration.
Introduction
The following inequality is known as the Simpson type integral inequality:
$$\left|\frac{1}{6}\left[h(\tau_1)+4\,h\!\left(\frac{\tau_1+\tau_2}{2}\right)+h(\tau_2)\right]-\frac{1}{\tau_2-\tau_1}\int_{\tau_1}^{\tau_2}h(x)\,\mathrm{d}x\right|\le\frac{1}{2880}\,\big\|h^{(4)}\big\|_\infty\,(\tau_2-\tau_1)^4,$$
where h : [τ1, τ2] → R is a four-times differentiable mapping on (τ1, τ2) and ‖h^(4)‖_∞ = sup_{t∈(τ1,τ2)} |h^(4)(t)| < ∞. Many researchers have generalized and extended Simpson type inequalities. For example, Hsu et al. [1], Du et al. [2], Noor et al. [3], İşcan et al. [4], and Tunç et al. [5] obtained Simpson type inequalities for differentiable mappings that are convex, extended (s, m)-convex, geometrically relative convex, p-quasi-convex, and h-convex, respectively. Further results involving the Simpson type inequality with applications to Riemann-Liouville fractional integrals have been worked out by several scholars, including Set et al. [6] and Hwang et al. [7] in the study of Simpson type inequalities using convexity, as well as İşcan [8] in the study of Simpson type inequalities using s-convexity. For more details on the Simpson type inequality and its extensions, we refer to the articles by Hussain and Qaisar [9], Matłoka [10], Qaisar et al. [11], Ujević [12], and Ul-Haq et al. [13].
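As a quick sanity check on the classical inequality recalled above, the following sketch compares both sides numerically for an arbitrarily chosen test function (h(x) = e^x on [0, 1]); the function and interval are illustrative assumptions, not taken from the cited works.

```python
# Numerical check of the classical Simpson-type inequality for h(x) = exp(x) on [0, 1]:
# |(1/6)[h(t1) + 4 h((t1+t2)/2) + h(t2)] - (1/(t2-t1)) * integral| <= ||h^(4)||_inf (t2-t1)^4 / 2880
import math
from scipy.integrate import quad

t1, t2 = 0.0, 1.0
h = math.exp                                   # h^(4) = exp as well, so sup|h^(4)| = e on [0, 1]

simpson_rule = (h(t1) + 4.0 * h(0.5 * (t1 + t2)) + h(t2)) / 6.0
integral, _ = quad(h, t1, t2)
mean_value = integral / (t2 - t1)

lhs = abs(simpson_rule - mean_value)
rhs = h(t2) * (t2 - t1) ** 4 / 2880.0
print(lhs, "<=", rhs, lhs <= rhs)              # ~5.8e-4 <= ~9.4e-4, True
```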
Let us consider an m-invex set A. A set A ⊆ R^n is called m-invex with respect to the mapping η : A × A × (0, 1] → R^n, for some fixed m ∈ (0, 1], if mθ_1 + tη(θ_2, θ_1, m) ∈ A for all θ_1, θ_2 ∈ A and t ∈ [0, 1]. A mapping h : A → R is called generalized (α, m)-preinvex with respect to η if
$$h\big(m\theta_1 + t\,\eta(\theta_2,\theta_1,m)\big) \le m\,(1-t^{\alpha})\,h(\theta_1) + t^{\alpha}\,h(\theta_2)$$
holds for every θ_1, θ_2 ∈ A and t ∈ [0, 1]. Fractional calculus is a very useful tool for implementing differentiation and integration of real or complex orders. This topic has attracted much attention from researchers studying partial differential equations during the last few decades. For recent results related to this subject, we refer to the studies by Sohail et al. [14], Hameed et al. [15], and Khan et al. [16,17]. Among the many fractional integral operators that have been developed, the Riemann-Liouville fractional integral operator has been studied extensively because of its applications in many fields of science, such as differential equations, differential geometry, and physics. An important generalization of the Riemann-Liouville fractional integrals, named the k-fractional integral operators, was considered by Mubeen et al. in [18]; they are defined by
$$I^{\mu}_{k,\tau_1^{+}}h(x)=\frac{1}{k\,\Gamma_k(\mu)}\int_{\tau_1}^{x}(x-t)^{\frac{\mu}{k}-1}h(t)\,\mathrm{d}t,\qquad x>\tau_1,$$
and
$$I^{\mu}_{k,\tau_2^{-}}h(x)=\frac{1}{k\,\Gamma_k(\mu)}\int_{x}^{\tau_2}(t-x)^{\frac{\mu}{k}-1}h(t)\,\mathrm{d}t,\qquad x<\tau_2,$$
respectively, where k > 0 and Γ_k(μ) is the k-gamma function. Some recent results pertaining to k-fractional integrals can be found in [19][20][21].
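A small numerical sketch of the k-gamma function and the left-sided k-Riemann–Liouville integral written above may help fix the notation; the function names, the test function, and the consistency check (for h ≡ 1 one expects (x − τ1)^{μ/k}/Γ_k(μ + k)) are illustrative assumptions of this sketch rather than part of the cited definitions.

```python
# Sketch of the k-gamma function and the left-sided k-Riemann-Liouville integral.
# Uses the standard identity Gamma_k(mu) = k^(mu/k - 1) * Gamma(mu/k).
import math
from scipy.integrate import quad

def gamma_k(mu: float, k: float) -> float:
    return k ** (mu / k - 1.0) * math.gamma(mu / k)

def k_rl_left(h, tau1: float, x: float, mu: float, k: float) -> float:
    """Numerical left-sided k-fractional integral I^{mu}_{k, tau1+} h(x)."""
    integrand = lambda t: (x - t) ** (mu / k - 1.0) * h(t)
    value, _ = quad(integrand, tau1, x)
    return value / (k * gamma_k(mu, k))

# Consistency check with h(t) = 1: expected closed form (x - tau1)^(mu/k) / Gamma_k(mu + k).
tau1, x, mu, k = 0.0, 1.5, 2.5, 2.0
numeric = k_rl_left(lambda t: 1.0, tau1, x, mu, k)
exact = (x - tau1) ** (mu / k) / gamma_k(mu + k, k)
print(numeric, exact)   # the two values should agree to quadrature accuracy
```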
Here, via k-fractional integral operators, we obtain some estimation-type results for Simpson-type inequalities in terms of a multi-parameter identity. We also apply the established inequalities to f-divergence measures and probability density functions.
Main results
Throughout this article, let N* be the set of all positive integers, and let A ⊆ R be an open m-invex subset with respect to η : A × A × (0, 1] → R. We also utilize a shorthand notation, with subscripts h and η and arguments (μ, k; n, m), for the quantity appearing in the results below. To prove the main results, we give the following lemma.
then the following inequality with μ > 0 and k > 0 holds: Proof: Using Lemma 2.1, the Hölder integral inequality and the generalized (α, m)-preinvexity of |h (x)| ρ , we have Using the fact that From (15) and (16), we get the desired result in (14), since Proof: The second inequality is obtained by using the fact that n In the next theorem, we use the following functions.
To obtain further estimation-type results, we next deal with the boundedness and the Lipschitz condition of h′.
Remark 2.4:
As several special cases of Theorems 2.3 and 2.4 above, some sub-results can be deduced by taking different mappings η and special parameter values for μ, k, n and m.
f-divergence measures
Let the set φ and the σ-finite measure μ be given, and let the set of all probability densities on μ be defined as {p | p : φ → R, p(x) > 0, ∫_φ p(x) dμ(x) = 1}. Let f : (0, ∞) → R be a given mapping, and consider the f-divergence D_f(p, q) defined by
$$D_f(p,q):=\int_{\varphi} p(x)\, f\!\left(\frac{q(x)}{p(x)}\right)\mathrm{d}\mu(x).$$ | 1,127 | 2019-09-09T00:00:00.000 | [
"Mathematics"
] |
Canonical Density Matrices from Eigenstates of Mixed Systems
One key issue of the foundation of statistical mechanics is the emergence of equilibrium ensembles in isolated and closed quantum systems. Recently, it was predicted that in the thermodynamic (N→∞) limit of large quantum many-body systems, canonical density matrices emerge for small subsystems from almost all pure states. This notion of canonical typicality is assumed to originate from the entanglement between subsystem and environment and the resulting intrinsic quantum complexity of the many-body state. For individual eigenstates, it has been shown that local observables show thermal properties provided the eigenstate thermalization hypothesis holds, which requires the system to be quantum-chaotic. In the present paper, we study the emergence of thermal states in the regime of a quantum analog of a mixed phase space. Specifically, we study the emergence of the canonical density matrix of an impurity upon reduction from isolated energy eigenstates of a large but finite quantum system the impurity is embedded in. Our system can be tuned by means of a single parameter from quantum integrability to quantum chaos and corresponds in between to a system with mixed quantum phase space. We show that the probability for finding a canonical density matrix when reducing the ensemble of energy eigenstates of the finite many-body system can be quantitatively controlled and tuned by the degree of quantum chaos present. For the transition from quantum integrability to quantum chaos, we find a continuous and universal (i.e., size-independent) relation between the fraction of canonical eigenstates and the degree of chaoticity as measured by the Brody parameter or the Shannon entropy.
Introduction
As first recognized by Ludwig Boltzmann [1,2], "molecular" chaos lies at the core of the foundation of classical statistical mechanics. Only when the phase space of an isolated mechanical system is structureless can the motion be safely assumed to be ergodic and the equal a priori probability for phase space points on the energy hypersurface, the basic tenet of the microcanonical ensemble, be realized. Moreover, chaotic dynamics is "mixing", thereby enforcing the approach to the thermal equilibrium state from "almost all" out-of-equilibrium initial conditions. While any large isolated system is expected to be described by a microcanonical ensemble, any well-defined small subsystem thereof that is only allowed to exchange energy with the remainder of the large system (referred to as bath or environment in the following) is described by the canonical ensemble. The phase-space density of the subsystem is weighted by the Boltzmann factor e^{−βH_s}, where H_s is the Hamilton function of the subsystem, β = 1/(k_B T), T is the temperature imprinted by the environment, and k_B is the Boltzmann constant. However, when the phase space of the system is not chaotic but rather dominated by regular motion on KAM tori [3,4], neither ergodicity nor mixing is a priori assured, and thermalization of an initial non-equilibrium state may be elusive. The implicit assumption of classical equilibrium statistical mechanics is that in the limit of a large number of degrees of freedom, chaos is generic for any interacting many-particle system.
How those concepts translate into quantum physics has remained a topic of great conceptual interest and lively debate [5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20][21]. Renewed interest is stimulated by the experimental accessibility of ultracold quantum gases [22][23][24][25][26][27][28], trapped ions [29], and nanosystems [30,31], where many of the underlying concepts became quantitatively accessible in large but finite quantum systems in unprecedented detail. The foundation of thermalization of quantum systems has been pioneered by von Neumann in terms of the quantum ergodic theorem [32][33][34][35][36][37]. Accordingly, the entropy is an increasing function of time and expectation values of generic macroscopic observables for pure states formed by coherent superposition of states within microscopic energy shells converge to that of the microcanonical ensemble provided that the energy spectrum of the system is strictly non-degenerate. Recently, this description of thermal equilibrium states was extended to the notion of canonical typicality [37][38][39][40]. Accordingly, starting from almost any pure state formed by a coherent superposition of energy eigenstates of a large isolated many-body system with eigenenergies within a given energy shell [E, E + ∆E] of macroscopically small thickness ∆E, the reduction to a small subsystem by tracing out the degrees of freedom of the bath will yield the same reduced density matrix one would obtain from the reduction of the microcanonical density matrix for the entire system. If the bath is sufficiently large and the interactions between the bath and the subsystem sufficiently weak, the reduced density matrix corresponds to the standard canonical density matrix ρ̂_s = e^{−βĤ_s}/Tr[e^{−βĤ_s}], with Ĥ_s the Hamilton operator of the subsystem. The proof of this canonical typicality invokes the intrinsic randomness of the expansion coefficients of the pure state in terms of entangled subsystem-bath states. The latter assumption goes back to the notion of intrinsic quantum complexity of entangled states in large systems put forward already by Schrödinger [41].
An alternative approach to thermalization is tied to the eigenstate thermalization hypothesis (ETH) [5][6][7][8] first put forward by Landau and Lifshitz [42], stating that basic properties of statistical mechanics can emerge not only from ensemble averages but from typical single wavefunctions. However, the condition under which such an equivalence may emerge has remained open. The more recent formulation of the ETH [5][6][7][8] invokes the notion of quantum chaos and Berry's conjecture. Characteristics of quantum chaos were originally identified in few-degrees of freedom systems whose classical limit exhibits chaos [43][44][45][46][47][48][49]. Nowadays, the notion of quantum chaos is invoked more generally for systems that display the same signatures such as energy level distributions predicted by random matrix theory (RMT) [43,48,50,51] or randomness of wavefunction amplitudes [5,52] even when a well-defined classically chaotic counterpart is not known. The ETH conjectures that for chaotic systems, the diagonal matrix elements of any generic local observable taken in the energy eigenstate basis are smooth functions of the total energy while the off-diagonal elements are exponentially decreasing randomly fluctuating variables with zero mean [6][7][8]. If the ETH is valid for a specific system, individual eigenstates show thermal properties upon reduction to a small subsystem. The ETH has been shown to hold for a large variety of systems without a classical analogue [24,[53][54][55][56][57][58][59][60][61][62][63][64]. Deviations from the ETH have been observed for local observables in finite systems of hard-core bosons and spin-less fermions [57,58,65] when the energy level distribution deviates from the Wigner-Dyson level statistics of RMT characteristic for chaotic systems.
In the present paper, we explore the quantitative relationship between quantum chaos and the thermal properties of reduced density matrices (RDMs) emerging from single isolated eigenstates of the entire system. More specifically, we address the following question: Is quantum chaos, for large but finite systems, a conditio sine qua non for the emergence of the Gibbs ensemble, i.e., the canonical ensemble of the subsystem, from eigenstates of the entire system? Or is quantum entanglement and complexity in these systems itself sufficient to render the reduced density matrix of a small subsystem canonical? To this end, we determine the fraction of canonical density matrices emerging upon reduction from the entire set of eigenstates. We explore the existence of a quantitative relationship between the fraction of eigenstates that upon reduction lead to canonical density matrices, also termed the fraction of canonical eigenstates, and the degree of quantum chaos of the entire system. We unravel the connection between this eigenstate canonicity and quantum chaos by exact diagonalization of a large yet finite mesoscopic quantum system. We emphasize that this measure addresses isolated energy eigenstates of the many-body system, in contrast to coherent superpositions of energy eigenstates from a given energy shell of finite width with random expansion coefficients as invoked in the well-established notion of canonical typicality [37][38][39][40].
As a prototypical case in point, we consider an itinerant impurity embedded in a spin-polarized Fermi-Hubbard system. Unlike impurity models for disordered systems [66], our model is fully deterministic. All key ingredients for the realization of the present system, i.e., a discrete lattice, tunable interactions, and an impurity, can be experimentally realized with ultracold fermionic atoms (see, e.g., [67][68][69][70][71][72]). In the present scenario, the impurity serves as a probe or "thermometer" in the isolated many-body quantum system, providing an unambiguous subsystem-bath decomposition with tunable coupling strength between subsystem and bath. Moreover, our system features a tunable transition from quantum chaos to quantum integrability without invoking any extrinsic stochasticity or disorder [66]. The fact that the subsystem consists of a distinguishable particle has a number of distinct advantages: The reduced density matrix of the probe is uniquely defined and its properties are basis-independent. No choice of a specific basis for the probe, such as the independent-particle basis, is involved. Moreover, its canonical RDM approaches a Maxwell-Boltzmann rather than a Fermi-Dirac distribution for indistinguishable fermions. Its thermal state is thus characterized by a single equilibrium parameter, the temperature T, without the need for introducing a chemical potential µ, thereby improving the numerical accuracy of the test of canonicity. We measure the proximity of the reduced density matrix of the impurity to the canonical density matrix and identify a direct and size-independent correlation between the fraction of canonical eigenstates and quantum chaos.
The paper is structured as follows. In Section 2, we introduce our impurity-Fermi-Hubbard model which serves as a prototypical (sub)system-environment model system. Quantitative measures for quantum chaos are introduced in Section 3. The mapping of spectral properties of this isolated many-body system onto thermal states of the impurity within the framework of the microcanonical and canonical ensembles are discussed in Section 4. The distance in Liouville space between the reduced density matrix of the impurity and a generic canonical density matrix is analyzed and the relation between the fraction of canonical eigenstates and quantum chaos is established in Section 5. Concluding remarks are given in Section 6.
The Fermi-Hubbard Model with Impurity
We investigate a variant of the single-band one-dimensional Fermi-Hubbard model which is particularly well suited to study entanglement and quantum correlations between a subsystem and its environment or bath. The bath is represented by spin-polarized fermions enforcing single occupancy of sites by bath particles, while the distinguishable impurity can occupy any site. Accordingly, the Hamiltonian of the total system is given by Ĥ = Ĥ_I + Ĥ_B + Ĥ_IB, where Ĥ_I is the Hamiltonian of the subsystem, i.e., the impurity (I), Ĥ_B is the Hamiltonian of the bath, and Ĥ_IB describes the interaction between the subsystem and the bath. The operators â_j and â†_j (b̂_j and b̂†_j) are the annihilation and creation operators of the impurity (bath particles) on site j, obeying the usual fermionic anticommutation relations. The operators n̂_j = â†_j â_j and N̂_j = b̂†_j b̂_j are the number operators of the impurity and the bath, with eigenvalue relations n̂_j|n_j⟩ = n_j|n_j⟩ and N̂_j|N_j⟩ = N_j|N_j⟩ and occupation numbers n_j and N_j of site j, respectively. J_I (J_B) denotes the hopping matrix elements of the impurity (bath particles). The bath particles interact with each other via a nearest-neighbor interaction of strength W_BB, while the impurity interacts with the bath particles via an on-site interaction of strength W_IB. The Hubbard chain has M_s sites with Dirichlet boundary conditions imposed at the edges. An additional very weak external background potential (V ≪ J_I, J_B) with on-site matrix element V(j) (j = 1, . . . , M_s) is applied, for which we use a linear (n = 1) or quadratic (n = 2) function in order to remove residual geometric symmetries, such that the irreducible state space coincides with the entire state space and symmetry-related degeneracies are lifted. Alternative impurity models were recently suggested for the investigation of the ETH [64]. We solve the system via exact diagonalization to determine all eigenstates and eigenenergies of the entire system. The dimension of the Hilbert space of the system is $d_H = M_s \binom{M_s}{N_B}$, where N_B is the number of bath particles. We consider typical half-filling configurations with N_B ≈ M_s/2. The largest M_s considered is M_s = 15, resulting in a Hilbert space dimension of d_H = 96,525 for N_B = 7. We set J_I = J_B = J, which also defines the unit of energy (J = 1) in the following. The key advantage of the present model is that it allows one to control and tune the properties of the bath separately by varying W_BB while keeping fixed the properties of the subsystem whose reduced density matrix we probe. This clear-cut subsystem-bath decomposition allows for the unambiguous probing of the emergence of canonical density matrices, thereby avoiding any ad hoc separation by "cutting out" the subsystem, which would require the grand-canonical density matrix of an open quantum system since both energy and particles can be exchanged [65]. Moreover, its thermal state is unambiguously characterized by T rather than by T and µ as for indistinguishable fermions, thereby improving the numerical reliability of the performed tests.
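For orientation, the Hilbert-space dimension quoted above follows from simple counting: the impurity can occupy any of the M_s sites, and the N_B spin-polarized bath fermions occupy distinct sites. The short sketch below reproduces the quoted value; it is a counting illustration only, not part of the original numerical implementation.

```python
# Counting check: d_H = M_s * binomial(M_s, N_B) for the impurity + hard-core bath model.
from math import comb

def hilbert_dimension(m_sites: int, n_bath: int) -> int:
    return m_sites * comb(m_sites, n_bath)

print(hilbert_dimension(15, 7))   # -> 96525, the dimension quoted for the largest system
```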
The present system should be realizable for ultracold fermionic atoms trapped in optical lattices [28,[69][70][71][72][73][74]. All key ingredients required for its realization, including tunable interactions and impurity-bath mixtures, are available in the toolbox of ultracold atomic physics. We note that tuning the nearest-neighbor interaction W_BB between the atoms in optical lattices to large values in the regime of strong correlations, W_BB/J_B ≫ 1, still poses an experimental challenge which might be overcome in the near future.
Measures of Quantum Chaos
The present single-band Fermi-Hubbard model does not possess an obvious classical counterpart whose phase space consists of regions of regular and/or chaotic motion. Lacking such direct quantum-classical correspondence, quantum integrability and quantum chaos in the present system is identified by signatures of the quantum system that have been shown to probe chaotic and regular motion in systems where quantum-classical correspondence does prevail. Several measures of quantum chaos have been proposed that are based on either properties of eigenstates or of the spectrum [14,49,58,[75][76][77][78][79]. As will be shown below, by tuning W BB , we can continuously tune the entire system from the limit of quantum integrability to the limit of quantum chaos across the transition region of a mixed quantum system in which integrable and chaotic motion coexist and explore its impact on the fraction of eigenstates which upon reduction lead to canonical density matrices. The influence of the continuous transition from quantum integrability to quantum chaos on the thermal state of the subsystem will be explored with the help of the present prototypical system.
Spectral Measures
The starting point for analyzing and quantifying quantum chaos by means of spectral statistics is the cumulative spectral function, also called the staircase function,
$$N(E)=\sum_{\alpha}\Theta(E-E_{\alpha}),$$
where E_α are the energy eigenvalues of the entire system and Θ is the Heaviside step function. Its spectral derivative is the density of states (DOS)
$$\Omega(E)=\frac{dN(E)}{dE}=\sum_{\alpha}\delta(E-E_{\alpha}).$$
Examples for N(E) and Ω(E) of the present system are shown in Figure 1.
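A compact numerical counterpart of these two quantities is sketched below for an arbitrary list of eigenvalues; the random spectrum used here is a placeholder, not data from the model.

```python
# Sketch: staircase function N(E) and a binned estimate of the density of states Omega(E).
import numpy as np

def staircase(eigenvalues: np.ndarray, e_grid: np.ndarray) -> np.ndarray:
    """N(E) = number of eigenvalues E_alpha <= E, evaluated on a grid of energies."""
    levels = np.sort(eigenvalues)
    return np.searchsorted(levels, e_grid, side="right").astype(float)

rng = np.random.default_rng(1)
levels = np.sort(rng.normal(size=2000))                  # placeholder spectrum
grid = np.linspace(levels[0], levels[-1], 400)
N_of_E = staircase(levels, grid)                         # cumulative spectral function
dos, bin_edges = np.histogram(levels, bins=100)          # counts per bin ~ Omega(E) * dE
```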
The smoothed "average" spectral staircase function N̄(E), fitted by a polynomial of order 10 and also shown in Figure 1, provides the reference for the spectral unfolding required for certain measures of quantum fluctuations about the (classical) mean. Accordingly, the unfolded energy spectrum is given by e_α = N̄(E_α). For systems for which quantum-classical correspondence holds, N̄(E) corresponds to the classical phase space volume in units of Planck's constant h, and Ω(E) to the microcanonical energy shell. We note that the saturation of N(E) observed with increasing E (Figure 1a) or, likewise, the bell-shaped curve for the DOS (Figure 1b) decreasing at large E is, in the present case, a consequence of the single-band approximation of the Fermi-Hubbard model (Equation (1)) and, more generally, appears for systems with a spectrum bounded from above. For realistic macroscopic systems, N(E) and Ω(E) should generically increase monotonically with E. As discussed in more detail below, this non-generic decrease of the density of states observed for the present as well as for other finite and mesoscopic systems has implications for the ensuing thermal properties.
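The unfolding step described here is straightforward to implement; a minimal sketch, assuming a simple global polynomial fit of order 10 as stated in the text, is:

```python
# Sketch of spectral unfolding: fit a smooth polynomial N_bar(E) to the staircase
# and map each eigenvalue to e_alpha = N_bar(E_alpha).
import numpy as np

def unfold(eigenvalues: np.ndarray, order: int = 10) -> np.ndarray:
    E = np.sort(eigenvalues)
    N = np.arange(1, E.size + 1, dtype=float)           # staircase evaluated at the levels
    coeffs = np.polynomial.polynomial.polyfit(E, N, deg=order)
    return np.polynomial.polynomial.polyval(E, coeffs)  # unfolded spectrum e_alpha
```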
The probability density P(s) of the nearest-neighbor level spacings (NNLS), s = e_{α+1} − e_α, features distinctively different shapes for quantum integrable and quantum chaotic systems. While for integrable systems the NNLS have been predicted by Berry and Tabor [50] to feature an exponential (or Poissonian) distribution, P_P(s) = exp(−s), for chaotic systems it closely follows random matrix theory [43]. In our case of a time-reversal symmetric system, the corresponding random-matrix ensemble is the Gaussian orthogonal ensemble (GOE), which has been shown (see, e.g., [46]) to closely follow the Wigner-Dyson distribution (or Wigner surmise) given by
$$P_{\mathrm{WD}}(s)=\frac{\pi}{2}\,s\,\exp\!\left(-\frac{\pi}{4}s^{2}\right).$$
A complementary spectral measure, first proposed by Gurevich and Pevzner [80] and applied to quantum chaos [81,82], has the advantage that it does not require spectral unfolding but can be applied to the raw spectral data, i.e., the restricted gap ratios
$$r_{\alpha}=\frac{\min(s_{\alpha},s_{\alpha+1})}{\max(s_{\alpha},s_{\alpha+1})},$$
where s_α = E_{α+1} − E_α. The distribution of restricted gap ratios has been shown to obey, for 3 × 3 GOE matrices, the analytical prediction
$$W_{\mathrm{GOE}}(r)=\frac{27}{4}\,\frac{r+r^{2}}{(1+r+r^{2})^{5/2}}.\qquad(10)$$
For chaotic systems, this prediction remains very accurate even for large systems [82]. In the limit of quantum integrable systems, the distribution of restricted gap ratios is given by (see [81])
$$W_{\mathrm{P}}(r)=\frac{2}{(1+r)^{2}}.\qquad(11)$$
The search for generic spectral measures for the transition regime between the quantum integrable and quantum chaotic limits has remained an open problem. For systems possessing a classical counterpart with a mixed phase space in which integrable and chaotic motion coexist, several models for the NNLS have been proposed [83][84][85][86][87]. Empirically, one of the best fits to spectral data for mixed systems has been provided by a heuristic ansatz suggested by Brody [88], which allows for a one-parameter smooth interpolation of the NNLS distribution in the transition region between the quantum integrable and quantum chaotic limits,
$$P_{\mathrm{B}}(s)=(\gamma+1)\,b\,s^{\gamma}\exp\!\left(-b\,s^{\gamma+1}\right),\qquad(12)$$
where the Brody parameter γ characterizes the transition from the integrable (γ = 0) to the chaotic limit (γ = 1) and b follows from the normalization as
$$b=\left[\Gamma\!\left(\frac{\gamma+2}{\gamma+1}\right)\right]^{\gamma+1}.\qquad(13)$$
The Brody parameter can be viewed as a measure of the strength of level repulsion between neighboring levels of the quantum system. For mixed few-degrees-of-freedom systems with a classical analogue, γ could be identified as a measure of the chaotic fraction of classical phase space [86,89]. Moreover, γ has also been found to be directly proportional to the degree of phase-space (de)localization of eigenstates as measured by their Husimi distribution [77]. The parameterization of the transition from quantum integrability to quantum chaos in terms of a variable exponent γ has the salient feature that even for very small but finite γ, 0 < γ ≪ 1, P_B(0) = 0, reflecting the fact that any perturbation of quantum integrability immediately causes level repulsion and suppresses the probability density for exact degeneracies. We recall that non-degeneracy is one of the key prerequisites of von Neumann's quantum ergodic theorem [32]. We further note that the Hasegawa distribution [84] sometimes provides an even more accurate fit to the NNLS distribution (see, e.g., [90]), however, at the price of a second adjustable parameter.
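The gap-ratio diagnostic is particularly easy to evaluate because no unfolding is involved. The sketch below computes the restricted gap ratios and their mean from a list of eigenvalues and provides the two reference distributions quoted above for comparison; the placeholder spectrum is illustrative only.

```python
# Sketch: restricted gap ratios r_alpha = min(s_alpha, s_alpha+1)/max(...), their mean,
# and the GOE / Poisson reference curves (Equations (10) and (11)).
import numpy as np

def restricted_gap_ratios(eigenvalues: np.ndarray) -> np.ndarray:
    s = np.diff(np.sort(eigenvalues))                    # raw nearest-neighbor spacings
    return np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])

def w_goe(r: np.ndarray) -> np.ndarray:                  # chaotic (GOE) limit, Eq. (10)
    return (27.0 / 4.0) * (r + r**2) / (1.0 + r + r**2) ** 2.5

def w_poisson(r: np.ndarray) -> np.ndarray:              # integrable limit, Eq. (11)
    return 2.0 / (1.0 + r) ** 2

levels = np.sort(np.random.default_rng(2).normal(size=5000))   # placeholder spectrum
r = restricted_gap_ratios(levels)
print(r.mean())   # compare with <r> ~ 0.3863 (Poisson) and ~ 0.5307 (GOE)
```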
To determine γ, we fit Equation (12) to the data for P(s) (Figure 2). The quality of the fit is evaluated through the χ²-function (Equation (14)), which measures the deviation of the distribution of nearest-neighbor spacings P(s) from the Brody distribution P_B(s) using a bin size of ∆s. As an additional measure for the uncertainty of γ, we use the fact that the Brody parameter can alternatively be determined from a fit to the cumulative distribution ∫₀ˢ ds′ P(s′) rather than to P(s) itself. The small differences found between the two fits can be used as a measure of the numerical error. For W_BB = 0 and for the linear tilt of the external potential V(j) (Equation (5)), we observe an excess of (near-)degenerate states, as compared to the prediction of the exponential (Poisson) distribution, in the first bin at s = 0 with ∆s = 0.01. This hints at the presence of an only weakly broken symmetry, which disappears when a quadratic tilt is used. For reasons of consistency, we employ a linear tilt for all W_BB in the following. Neglecting the first bin in the fitting procedure for W_BB = 0, we obtain γ = 0.005 and, overall, very good agreement with the Poisson distribution (Figure 2a). As the intra-bath interaction is varied from W_BB = 0 to W_BB = 1, we observe a continuous transition from a near-Poissonian to an approximate Wigner-Dyson NNLS distribution (Figure 2a-c). The Brody parameter monotonically increases from γ ≈ 0.005 at W_BB = 0 to γ ≈ 0.9 at W_BB = 1. We note that after reaching a plateau at γ ≈ 0.93 near W_BB = 3, the Brody parameter decreases again for W_BB > 5 and vanishes in the strongly correlated limit W_BB ≫ 1. The decrease of the Brody parameter for large W_BB results from a clustering of the energy spectrum in the strongly interacting regime: the bath fragments into clusters of particles with the interactions between separate clusters suppressed. Thus, a partially ordered system emerges, reducing the degree of quantum chaoticity. We will focus in the following on the parameter range W_BB ≤ 1, within which the transition from a nearly quantum integrable to a nearly fully quantum chaotic system occurs.
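A minimal version of this fitting step, assuming a simple least-squares fit of the Brody form (12)-(13) to a histogram of unfolded spacings, could look as follows; the binning choice and the optimizer are illustrative assumptions, not the exact procedure used in the text.

```python
# Sketch: least-squares extraction of the Brody parameter gamma from unfolded spacings.
import numpy as np
from scipy.special import gamma as Gamma
from scipy.optimize import curve_fit

def brody(s, g):
    b = Gamma((g + 2.0) / (g + 1.0)) ** (g + 1.0)        # normalization, Eq. (13)
    return (g + 1.0) * b * s**g * np.exp(-b * s ** (g + 1.0))

def brody_parameter(unfolded_levels: np.ndarray, bins: int = 50) -> float:
    s = np.diff(np.sort(unfolded_levels))                # unfolded nearest-neighbor spacings
    hist, edges = np.histogram(s, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    popt, _ = curve_fit(brody, centers, hist, p0=[0.5], bounds=(0.0, 1.0))
    return float(popt[0])
```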
For the two limiting cases of quantum integrability (W_BB → 0) and quantum chaos (W_BB → 1) of the present Fermi-Hubbard system, we can also apply the predictions for the restricted gap ratio distribution (Equations (10) and (11)). We find for these two limiting cases very good agreement between the prediction and the data (Figure 3), confirming that the identification of quantum integrability and quantum chaos is independent of the particular choice of the spectral measure. [Figure 3 compares the distributions of restricted gap ratios with the GOE prediction (Equation (10)) and the prediction for integrable spectra (Equation (11)).]
For the first moment of the restricted gap ratio distribution, we find r = 0.5284 for W BB = 1 agreeing to within 0.5% with the GOE expectation value for asymptotically large matrices r GOE = 0.5307 [82]. Conversely, for W BB = 0, we find r = 0.3811 in very good agreement with the prediction for a Poisson distribution r P = 0.3863. As there is presently no interpolation function W(r) available for the transition between the quantum integrable limit (Equation (11)) and the quantum chaotic limit (Equation (10)), we will focus in the following on the Brody distribution for the NNLS as spectral measure for the transition regime.
Measures for Wavefunctions
As an alternative to spectral measures, one can also explore and quantify chaos through the complexity of the eigenstates. According to Berry's conjecture, the eigenstates of a chaotic system feature randomly distributed amplitudes over an appropriate basis; e.g., in quantum billiards, they correspond to randomly distributed plane waves [75]. Following this conjecture, a large number of such measures have been proposed. They include the statistical distribution of eigenvectors [9,12,46,91], the configuration-space probability distribution [92], the configuration-space self-avoiding path correlation function [52], the Wigner-function-based wavefunction autocorrelation function [75], the inverse participation ratio [93], the Shannon entropy [94], and the phase-space localization measured in terms of the information entropy encoded in the Husimi distribution [77]. One limitation for the quantitative significance of most of these measures (with the possible exception of [77]) is their dependence on the chosen basis of representation. For systems that can be continuously tuned from integrable to chaotic, the eigenstates of the integrable limit suggest themselves as a convenient basis to monitor the transition to chaos [9,12,58]. For many-body systems, the eigenstates of the mean-field Hamiltonian often provide the reference basis for measuring quantum chaoticity [14]. In the following, we use the eigenstates |ψ⁰_α′⟩ of the integrable system with W_BB = 0 as the basis for determining the statistical distribution of eigenvectors. From the amplitudes c_α′α = ⟨ψ⁰_α′|ψ_α⟩ and probabilities |c_α′α|², we calculate the Shannon entropy [94] for each eigenstate |ψ_α⟩,
$$S_{\alpha}=-\sum_{\alpha'}|c_{\alpha'\alpha}|^{2}\,\ln|c_{\alpha'\alpha}|^{2}.$$
We observe that for W_BB = 1, the Shannon entropy as a function of E_α forms an inverted-parabola-like curve with remarkably small eigenstate-to-eigenstate fluctuations (Figure 4). At the apex near the center of the spectrum, S_α reaches a maximum S_max close to the GOE limit S_GOE ≈ ln(0.48 d_H) [58], with d_H the dimension of the Hilbert space (Figure 4a). States in the tails of the spectrum show strong deviations from this limit, as the eigenstates in this region are less complex and do not fulfill the ETH [58]. Best agreement with GOE predictions can therefore be expected near the center of the spectrum at α ≈ d_H/2, where the density of states is highest. For smaller W_BB (Figure 4b-d), the Shannon entropy reveals a significantly diminished complexity of the eigenstates, indicated by a reduced S_max and, at the same time, drastically increased state-to-state fluctuations. Probing the generic features of the wavefunctions, we will use the scaled Shannon entropy S̄ = S_max/S_GOE as an alternative, wavefunction-based measure of quantum chaoticity complementing the Brody parameter γ as spectral measure. Numerically, we determine S_max by averaging over small intervals of energy and calculating the maximum of the resulting smooth curve. Empirically, we find that the dependence of the Brody parameter γ, i.e., the degree of quantum chaoticity, on the interaction parameter of the bath particles, γ(W_BB) (Figure 5), can be accurately approximated by a simple functional form with parameters γ_0 = 0.88 and W⁰_BB = 0.15. While a monotonic increase is intuitively expected, the origin of this particularly simple functional form remains to be understood. Remarkably, the evolutions of γ and S̄ as a function of W_BB closely mirror each other, thereby representing two independent measures of the degree of quantum chaoticity during the transition from integrability to chaos.
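A compact sketch of this eigenstate-complexity measure is given below, assuming the overlap matrix between the W_BB = 0 eigenbasis and the eigenstates of interest is available as a dense array; the array itself is a placeholder here.

```python
# Sketch: Shannon entropy S_alpha = -sum_{alpha'} |c|^2 ln |c|^2 of each eigenstate,
# expressed in the eigenbasis of the integrable (W_BB = 0) reference system,
# compared with the GOE expectation S_GOE ~ ln(0.48 * d_H).
import numpy as np

def shannon_entropies(C: np.ndarray) -> np.ndarray:
    """C[a_ref, a] = <psi0_{a_ref} | psi_a>; columns are eigenstates in the reference basis."""
    p = np.abs(C) ** 2
    p = np.clip(p, 1e-300, None)                 # protect against log(0)
    return -(p * np.log(p)).sum(axis=0)

d_H = 96525
S_goe = np.log(0.48 * d_H)                       # ~10.7 for the largest system considered
```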
Overall, the agreement between γ and S̄ is very good. Residual differences can be viewed as a measure for the residual uncertainty in the quantitative determination of the degree of eigenstate quantum chaoticity.

[Figure caption fragment: error bars correspond to the scaled standard deviation around S_max (as in Figure 4); the error of the fit to the Brody distribution, measured by the square root of the χ² function (Equation (14)), is shown as a gray line on the right y-axis.]
The Reduced Density Matrix of the Impurity
The impurity embedded in the Fermi-Hubbard system serves as a "thermometer", i.e., as a sensitive probe of the thermal state of the interacting many-body system. We aim at exploring the emergence of thermal properties of the impurity when the entire system (subsystem and bath) is in a given pure and stationary eigenstate of Ĥ with energy E_α and vanishing state entropy (von Neumann entropy S_vN = 0). Such an isolated large quantum system can be viewed as the limiting case of the quantum microcanonical ensemble where the width of the energy shell ∆E vanishes, i.e., ∆E → 0. Unlike other approaches, it does not invoke any coarse-graining over a macroscopically small but finite width of the energy shell, nor any random interactions.

For such a quantum system without any a priori built-in statistical randomness, we pose the following question: Starting from a given isolated eigenstate of the entire system, under which conditions will the reduced density matrix of the impurity correspond to a canonical density matrix, i.e., when will the thermometer be accurately represented by a Gibbs ensemble or, for short, be in a Gibbs state? If such a thermal state emerges, what will be its temperature T, or its inverse temperature β = 1/(k_B T)? We refer to this process as the emergence of a thermal equilibrium state rather than by the frequently used term "thermalization", as the latter (implicitly) implies a time-dependent approach to an equilibrium state starting from an out-of-equilibrium (statistical or pure) initial state that represents a coherent superposition of different energy eigenstates. We neither invoke any ensemble average over states from the microcanonical energy shell of finite thickness ∆E nor do we invoke wave-packet dynamics of a nonstationary state of the entire system.
For finite isolated systems, in particular systems with a bounded spectrum such as the present Fermi-Hubbard model, the extraction of proper thermodynamic (or thermostatic) variables from the microcanonical ensemble requires special care. As has been recently demonstrated [95,96], the alternative definitions of the entropy used as the fundamental thermodynamic potential for the microcanonical ensemble yield, in general, inequivalent results. The standard definition [97] attributed to Boltzmann, based on the density of states Ω(E) of the entire closed system, implies an inverse temperature β_Boltzmann = ∂ ln Ω(E)/∂E (Equation (19)) that may violate certain thermodynamic relations for mesoscopic systems with a bounded spectrum [95,96]. As shown more than 100 years ago [97,98], the Gibbs entropy, defined in terms of the number of states N(E) below the energy E, results in an inverse temperature β_Gibbs = Ω(E)/N(E) (Equation (21)) that is free of such inconsistencies. From Equations (19) and (21), it follows that the two inverse-temperature definitions are interrelated through the specific heat C [95],

β_Boltzmann = (1 − k_B/C) β_Gibbs,   (22)

with C = (∂T_Gibbs/∂E)⁻¹ and T_Gibbs = 1/(k_B β_Gibbs). Only for systems with a small specific heat, of the order of k_B or smaller, do differences between β_Boltzmann and β_Gibbs become noticeable. This is, in particular, the case for systems with a bounded spectrum. While β_Boltzmann(E) features negative values as soon as the density of states Ω(E) = N'(E) decreases (Equation (19)), β_Gibbs(E) always remains positive semi-definite (Equation (21)).

Figure 6 presents a comparison between β_Gibbs and β_Boltzmann for the present Fermi-Hubbard model with an impurity, where we have applied the microcanonical thermodynamic relations for β_Boltzmann and β_Gibbs (Equations (19) and (21)) to the numerically determined spectral data (Figure 1) of the entire system over a wide range of energies E. The two inverse temperatures closely follow each other in parallel, with β_Gibbs shifted upwards relative to β_Boltzmann, as long as Ω'(E) > 0. For larger E, when β_Boltzmann turns negative, the discrepancies increase as β_Gibbs remains positive for all E.
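As a numerical illustration of these microcanonical definitions (a hedged sketch with k_B = 1 and a user-chosen Gaussian smoothing width; not the authors' code), β_Boltzmann and β_Gibbs can be estimated from a discrete spectrum as follows, with the energy grid chosen inside the spectrum so that Ω and N stay nonzero:

```python
# Sketch: microcanonical inverse temperatures from a discrete spectrum via a
# Gaussian-smoothed density of states Omega(E) and integrated number of states N(E);
# beta_B = d ln Omega / dE, beta_G = Omega(E) / N(E), cf. Equations (19) and (21).
import numpy as np
from scipy.special import ndtr  # standard normal CDF

def inverse_temperatures(E_grid, eigenvalues, sigma):
    diff = (E_grid[:, None] - eigenvalues[None, :]) / sigma
    omega = np.exp(-0.5 * diff**2).sum(axis=1) / (sigma * np.sqrt(2.0 * np.pi))
    N = ndtr(diff).sum(axis=1)                        # smoothed number of states below E
    beta_boltzmann = np.gradient(np.log(omega), E_grid)
    beta_gibbs = omega / N
    return beta_boltzmann, beta_gibbs
```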
Alternatively, the entire system can be assigned an inverse temperature β_c by treating the system as a canonical ensemble. Accordingly, the energy E can be expressed in terms of the canonical expectation value, E = Tr[Ĥ exp(−β_c Ĥ)]/Z_c (Equation (23)), where Z_c = Tr exp(−β_c Ĥ) is the canonical partition function and Ĥ is the Hamiltonian of the entire system (see Equation (1)). For a given E, Equation (23) yields an implicit relation for β_c, also shown in Figure 6. Obviously, for this finite system, β_c is close to β_Boltzmann. In the thermodynamic limit, we would expect β_c = β_Boltzmann. In spite of the fact that the size of our system is still far from the thermodynamic limit (N → ∞), the agreement between the different thermodynamic ensembles is already remarkably close. Deviations appear primarily near the tails of the density of states and are larger in the region of negative β_Boltzmann, where the DOS decreases rather than increases with E.

[Figure 6 caption fragment: ... (Equation (20), blue) as well as the canonical expectation value (Equation (23), dashed black). The energy is restricted to the interval [E_min, E_peak + E_FWHM/2], with E_min the lower bound where the DOS of the entire system is ≥15% of its peak value at E_peak, and E_FWHM the full-width-at-half-maximum of the DOS. Bath-bath interaction strength W_BB = 1 and impurity-bath interaction W_IB = 1 (see Figure 1b).]
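The implicit relation of Equation (23) can be solved numerically, for instance by root bracketing, as in the following sketch (k_B = 1; an illustration rather than the authors' implementation). The target energy must lie strictly between the spectral edges for a solution to exist.

```python
# Sketch: solve E = Tr[H e^{-beta H}] / Z for the canonical inverse temperature beta_c
# on a discrete spectrum by bracketing the root.
import numpy as np
from scipy.optimize import brentq

def canonical_beta(E_target, eigenvalues, bracket=(-50.0, 50.0)):
    def mean_energy(beta):
        x = -beta * eigenvalues
        x -= x.max()                      # avoid overflow for either sign of beta
        w = np.exp(x)
        return float((eigenvalues * w).sum() / w.sum())
    # mean_energy decreases monotonically from E_max to E_min as beta runs from -inf to +inf
    return brentq(lambda b: mean_energy(b) - E_target, *bracket)
```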
The conceptually interesting question now arises as to which of these temperatures, if any, will be imprinted on the impurity upon an exact calculation of its reduced density matrix by tracing out all bath degrees of freedom from a given single exact eigenstate of the isolated many-body system, without invoking any a priori assumption of the microcanonical ensemble.
To address this question, we start from the density operator for any pure energy eigenstate |ψ_α⟩ of the entire system, given by the projector |ψ_α⟩⟨ψ_α|. Consequently, the reduced density matrix (RDM) of the impurity, D^(I)_α = Tr_B |ψ_α⟩⟨ψ_α| (Equation (24)), follows from tracing out all bath degrees of freedom and will, in general, depend on the parent state |ψ_α⟩ it is derived from. We now explore the generic properties of D^(I)_α independent of the particular parent state. Specifically, we investigate whether a given D^(I)_α approaches a canonical density matrix. Diagonalizing D^(I)_α yields natural orbitals |η_m,α⟩ with natural occupation numbers n_m,α [99]. We emphasize that within the present approach, the RDMs D^(I)_α and their eigenvalues, the occupation numbers n_m,α, which characterize the thermal state, are a priori uniquely determined and not influenced by the choice of any (approximate) basis. Compared to previous investigations, this is one distinguishing feature of the present study of the thermal state emerging from an isolated deterministic many-body system. RDMs have been previously employed in studies of disordered fermionic systems [100][101][102].
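A generic illustration of this reduction (assuming, for simplicity, that the pure state is stored as a coefficient array on the product space H_I ⊗ H_B; the actual fixed-particle-number Fock-space bookkeeping of the Fermi-Hubbard calculation is more involved) reads:

```python
# Sketch: impurity RDM from a single pure eigenstate (cf. Equation (24)) and its
# natural orbitals / occupation numbers, for a state stored as psi_{i,b} on H_I (x) H_B.
import numpy as np

def impurity_rdm(psi, dim_I, dim_B):
    psi = np.asarray(psi).reshape(dim_I, dim_B)   # coefficients psi_{i,b}
    rdm = psi @ psi.conj().T                      # D^(I) = Tr_B |psi><psi|
    occ, orbitals = np.linalg.eigh(rdm)           # natural occupations and orbitals
    return rdm, occ[::-1], orbitals[:, ::-1]      # sorted by decreasing occupation
```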
A canonical RDM requires the natural occupation numbers to follow an exponential (Boltzmann) distribution in the natural-orbital energies ε̄^(I)_m,α = ⟨η_m,α|Ĥ_I|η_m,α⟩; the natural orbitals, in turn, should be close to the eigenstates of Ĥ_I. Moreover, the resulting value of β extracted from the fit to the exponential distribution allows the identification of the inverse temperature uniquely characterizing the thermal distribution. For a finite-size system with an impurity and a bath of the order of 10 particles and finite impurity-bath coupling, the residual interaction of the impurity with the bath is not negligible and should therefore be included to improve the numerical accuracy. We account for the residual impurity-bath interaction on the level of the mean-field (MF) or Hartree approximation [14]. Accordingly, the energies ε̄^(I)_m,α of the impurity appearing in the Boltzmann factor include a correction term (Equation (27)), where the MF interaction operator Ŵ^(IB)_MF,α in site representation (Equation (28)) is determined by the reduced one-body density of the residual bath particles at site j when the entire system is in state |ψ_α⟩. In Equation (28), the partial trace over all but one (N_B − 1) bath particles and the impurity (I) is denoted by Tr_(N_B−1,I).

The energy fluctuations Δε̄^(I)_m,α (Equation (29)) provide a measure for the proximity of the natural orbitals of the RDM to the eigenstates of the (perturbed) single-particle Hamilton operator of the subsystem, Ĥ_I,eff = Ĥ_I + Ŵ^(IB)_MF,α. The energy fluctuations (Equation (29)) vanish only when the natural orbitals |η_m,α⟩, with which the matrix elements in Equation (29) are evaluated, coincide with the eigenstates of Ĥ_I,eff. Therefore, the variance Δε̄^(I)_m,α can serve as a distance measure of the natural orbitals from the eigenstates of the impurity Hamilton operator.

The MF correction in Equation (27) follows from the Liouville-von Neumann equation for the reduced system, where the interaction with the bath consists of the MF term and a collision operator. The collision operator describes the correlations between the impurity and the bath particles and contains the so-called two-particle (subsystem-bath) cumulant ∆_12. We numerically monitor the validity of the MF approximation through the magnitude of the two-particle correlation energy determined by ∆_12. Consistently, we find that for all many-particle states |ψ_α⟩ which reduce to a near-canonical RDM for the impurity, the correlation energy is negligible compared to the MF energy, thereby justifying Equation (26). Of course, in the limit of weak impurity-bath coupling, the MF correction (Equation (27)) becomes negligible as well.
A representative example of the spectrum of the impurity RDM, i.e., the occupation number distribution of the natural orbitals, is shown in Figure 7. It emerges from a single energy eigenstate of the entire system with state index α = 4364 (with α sorted by energy) and energy eigenvalue E_α = −2.396, lying in the tail of the DOS with positive β for W_BB = 1.
Indeed, a Boltzmann distribution ∝ e^(−β ε̄^(I)_m,α) characterizing the canonical density matrix is observed. Moreover, the fit to an exponential yields β ≈ 0.58, in close agreement with β_Boltzmann = 0.58 predicted by Equation (19) for the inverse temperature within the microcanonical ensemble (see also Figure 6), and reproduces the distribution of occupation numbers very well. It also agrees with β_c predicted by Equation (23), where the entire system is treated as a canonical ensemble. We note that the Boltzmann-like decay of the diagonal elements would remain qualitatively unchanged when neglecting the MF correction in Equation (26), but the fit to β would deteriorate. Thus, from the reduction of state α = 4364, we have verified that a canonical density matrix emerges.

[Figure 7 caption fragment: ... (Equation (29)). The blue solid line corresponds to the best exponential fit, yielding the exponent β ≈ 0.58 in agreement with β_Boltzmann deduced for this state from Equation (19). The inset shows the same plot on a logarithmic scale.]
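The extraction of β from the occupation numbers can be illustrated by a simple least-squares fit of ln n_m,α against the natural-orbital energies (a sketch assuming a strictly exponential decay; the weighting used in the actual fit may differ):

```python
# Sketch: inverse temperature from a Boltzmann fit n_m ~ exp(-beta * eps_m) of the
# natural occupation numbers against the natural-orbital energies.
import numpy as np

def fit_beta(eps, occ):
    mask = occ > 0                                    # keep strictly positive occupations
    slope, _ = np.polyfit(eps[mask], np.log(occ[mask]), 1)
    return -slope                                     # beta
```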
On a conceptual level, the present results confirm the analysis by Dunkel and Hilbert [95], who showed that the recently observed experimental single-particle population distribution in an isolated finite cold-atom system [22] is governed by β_Boltzmann. Thus, the canonical density matrix of a small system emerging from tracing out bath variables is characterized by the inverse Boltzmann temperature β_Boltzmann rather than by β_Gibbs. Consequently, level inversion in a small system in thermal contact with a bath, in particular in spin systems [103,104], can be properly characterized by a negative β_Boltzmann. The point to be noted is that while β_Boltzmann describes the canonical density matrix, the use of β_Gibbs is required for consistency in thermodynamic relations such as the Carnot efficiency [95,96]. In the following, we present the numerical results for the canonical density matrix of the impurity in terms of β_Boltzmann, which we denote, from now on, for notational simplicity by β. We point out that β can be straightforwardly transformed into β_Gibbs using Equation (22) and that none of the conclusions to be drawn in the following are altered by this transformation.
Eigenstate Canonicity and Degree of Quantum Chaoticity
The demonstration of the emergence of a canonical density matrix from a particular eigenstate |ψ_α⟩ (α = 4364) of the entire system now invites the following questions: Is the reduction to a canonical density matrix generic, i.e., will it emerge for almost all |ψ_α⟩? Is this appearance related to the quantum chaos present in the underlying many-body system? On a more quantitative footing: For how many of the eigenstates will a canonical density matrix emerge, and does this number depend on the degree of quantum chaos of the system?
We explore these questions by determining the fraction of many-body eigenstates reducing to a canonical density matrix of the impurity, referred to in the following as eigenstate canonicity, as a function of the exact total energy E_α for the complete set of eigenstates α of the entire system and for varying bath-bath interaction W_BB. The corresponding degree of quantum chaoticity of the entire system is measured by either the Brody parameter (Equation (12)) or the Shannon entropy (Equation (16)).

Striking differences in the approach to the thermal state with inverse temperature β appear, which are controlled by the Brody parameter γ (or the scaled Shannon entropy S̄): At W_BB = 1, when the system is chaotic as indicated by a Brody parameter γ ≈ 0.9 (or scaled Shannon entropy S̄ = 0.9), a thermal distribution with a well-defined inverse temperature β, consistent with the (micro)canonical ensemble prediction (Equations (19) and (23)), emerges for an overwhelming fraction of states, with the exception of states in the tails of the spectrum where the DOS is strongly suppressed (Figure 8a). The large deviations in the tails are consistent with the corresponding deviations of S̄ in the same spectral region (Figure 4a). With decreasing W_BB and, correspondingly, decreasing γ or S̄, an increasing fraction of states yields values of β that are far from the thermal ensemble prediction. Moreover, the quality of the fit to a canonical density matrix, measured by the variance ∆β of the inverse temperature and indicated by the color coding of Figure 8, drastically deteriorates. In other words, for a significant fraction of states, the emerging RDMs do not conform with the constraints of a canonical density matrix.
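For reference, a simple least-squares sketch of the Brody fit is given below; it assumes the standard Brody form interpolating between Poisson (γ = 0) and GOE/Wigner (γ = 1) statistics for unfolded spacings, whereas the paper's own definition and χ² procedure are those of Equations (12) and (14).

```python
# Sketch: fit the unfolded nearest-neighbor level-spacing histogram to the standard
# Brody distribution P_gamma(s) = (gamma+1) b s^gamma exp(-b s^(gamma+1)),
# with b = Gamma((gamma+2)/(gamma+1))^(gamma+1). Spacings are assumed unfolded (mean 1).
import numpy as np
from scipy.special import gamma as Gamma
from scipy.optimize import curve_fit

def brody(s, g):
    b = Gamma((g + 2.0) / (g + 1.0)) ** (g + 1.0)
    return (g + 1.0) * b * s**g * np.exp(-b * s ** (g + 1.0))

def fit_brody(spacings, bins=50):
    hist, edges = np.histogram(spacings, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    (gamma_fit,), _ = curve_fit(brody, centers, hist, p0=[0.5], bounds=(0.0, 1.0))
    return gamma_fit
```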
In order to quantify the decomposition of the Hilbert space into the subspace of states |ψ_α⟩ whose reduction to the subsystem yields a canonical density matrix and the complement whose reduction fails to yield such a thermal state, we introduce a threshold ∆β_th for the variance of the inverse temperature, above which we consider eigenstate canonicity to fail. We then calculate, for all states |ψ_α⟩, the fraction of emerging canonical density matrices satisfying ∆β ≤ ∆β_th. Of course, the resulting fraction of states will depend on the precise value of ∆β_th chosen. We have determined these fractions for thresholds ranging from ∆β_th = 5 × 10⁻³ to 1.5 × 10⁻². Changes of the fractions due to the variation of ∆β_th are indicated by the vertical error bars in Figure 9. An unambiguous trend of a monotonic increase of the fraction of canonical density matrices with chaoticity emerges, evidently unaffected by the choice of ∆β_th. This fraction representing Gibbs states, denoted in the following by G, monotonically increases with quantum chaoticity as parameterized by either the Brody parameter γ, G(γ), or alternatively by the scaled Shannon entropy, G(S̄) (Figure 9). Since γ and S̄ both increase monotonically with the bath interaction W_BB (see Figure 5), this implies also a monotonic relationship G(W_BB). The conceptually important observation emerging from Figure 9 is that the degree of canonicity of the RDM, G(γ), undergoes a continuous transition from the quantum-integrable (γ → 0) to the quantum-chaotic limit (γ → 1). The strength of level repulsion in the NNLS, parameterized by γ, directly determines the probability of finding the RDM of the impurity represented by a Gibbs ensemble.
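The construction of G and of its threshold-induced uncertainty can be summarized in a few lines (a sketch; the array of per-state uncertainties ∆β is assumed to have been obtained from the canonical fits described above):

```python
# Sketch: fraction G of eigenstates accepted as canonical for a given threshold on the
# inverse-temperature uncertainty, swept over thresholds (vertical error bars of Figure 9).
import numpy as np

def canonical_fraction(delta_beta, thresholds=np.linspace(5e-3, 1.5e-2, 11)):
    delta_beta = np.asarray(delta_beta)
    fractions = np.array([(delta_beta <= th).mean() for th in thresholds])
    return fractions.mean(), fractions.min(), fractions.max()
```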
The approach of the RDM of the impurity to the Gibbs ensemble, D^Gibbs_α ∝ exp[−β_α(Ĥ_I + Ŵ^(IB)_MF,α)], can also be directly observed in the spatial site representation (j_1, j_2) of the RDM of the impurity (Figure 10b).
We illustrate the RDM in the site representation for two energetically nearest-neighbor states (α = 13,637 and α = 13,638) when the system is in the transition regime between integrable and non-integrable (in the present case, W_BB = 0.1). We quantify the approach to D^Gibbs_α through the density matrix site correlation function (Equation (30)), where ⟨j_1|D^(I)_α|j_2⟩ is the RDM of the impurity (Equation (24)) in the site basis. While the state α = 13,638 results in a nearly diagonal RDM in the site basis (Figure 10a) with rapidly decaying site correlations closely following the prediction for a Gibbs ensemble (Equation (30)), the adjacent state α = 13,637 yields an RDM with significant off-diagonal entries, extended site correlations, and strong deviations from Equation (30). Thus, the emergence of a thermal density matrix in the transition regime between quantum integrability and quantum chaos displays strong state-to-state fluctuations and is not a smooth function of the energy E_α.

As a quantitative measure for the distance of a given RDM from the Gibbs ensemble, we use the trace-class norm ∆D_α = ||D^(I)_α − D^Gibbs_α||_1, with ||M||_1 = Tr√(M†M) the largest of the Schatten p-norms (p = 1). For Hermitian positive-semidefinite matrices of unit trace, the Schatten 1-norm of the difference is bounded by 0 ≤ ||M_1 − M_2||_1 ≤ 2. For the RDMs derived from all eigenstates of the entire system (Figure 11), we observe an overall reduction of the distances ∆D_α from a canonical density matrix with increasing W_BB. For W_BB = 1 (Figure 11a), in the (near) quantum chaotic limit, the vast majority of impurity RDMs have a distance of ≲0.15 from an ideal Gibbs ensemble (apart from those reduced from many-body states in the tail regions of the spectrum with low DOS). The distribution of ∆D_α mirrors the distribution of Shannon entropies (Figure 4). We note that for the present finite quantum system, the distance measured by the Schatten 1-norm has a lower bound of ∆D_α ≳ 0.05. As the Schatten 1-norm is sensitive to small deviations in both diagonal and off-diagonal elements, these deviations are due to residual fluctuations (Equation (29), Figure 7) of the natural orbitals of the impurity, which are expected to vanish in the thermodynamic limit N → ∞. Indeed, plotting the value of the smallest distance (∆D_α)_min as a function of the dimension d_H of the Hilbert space for three numerically feasible system sizes indicates that the minimal distance vanishes in the thermodynamic limit as d_H^(−1/3) (inset of Figure 11a). With decreasing W_BB, e.g., W_BB = 0.1 in Figure 11b, the mean distance of the RDMs from a Gibbs state significantly increases and, moreover, the spread becomes much larger, reflecting, again, the behavior of the Shannon entropy (Figure 4c).

The emergence of canonical density matrices, i.e., of Gibbs states, for almost all |ψ_α⟩ in the quantum chaotic limit (γ ≈ 1 or S̄ ≈ 1) can be viewed as a rather specific manifestation and extension of the ETH [5][6][7][8]. The local observable in this case is the RDM of the impurity, D^(I)_α, itself. Its diagonal elements are, indeed, a smooth function of the total energy E_α as predicted by the ETH but now, more specifically, Boltzmann-distributed ∝ e^(−β_α ε̄^(I)_m,α) over impurity states, with the inverse temperature imprinted by E_α. The present analysis covers, in addition, also the transition regime between quantum integrability and quantum chaos (0 < γ < 1), where, in general, the ETH does not apply.
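A minimal sketch of this distance measure (with a hypothetical effective impurity Hamiltonian H_eff standing in for Ĥ_I + Ŵ^(IB)_MF,α) is:

```python
# Sketch: Schatten 1-norm distance between the impurity RDM and a reference Gibbs state
# built from an (assumed) effective impurity Hamiltonian H_eff.
import numpy as np
from scipy.linalg import expm

def gibbs_state(H_eff, beta):
    rho = expm(-beta * H_eff)
    return rho / np.trace(rho)

def trace_distance(rho1, rho2):
    # ||rho1 - rho2||_1 = sum of |eigenvalues| of the Hermitian difference; bounded by 2
    return float(np.abs(np.linalg.eigvalsh(rho1 - rho2)).sum())
```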
A canonical density matrix may still emerge, but now only for a decreasing fraction of eigenstates of the large but finite system. The size of this fraction G is predicted by the degree of quantum chaoticity as measured by the Brody parameter γ or the scaled Shannon entropy S̄ (Figure 9).
The direct relation between the emergence of the canonical density matrix for a small subsystem from eigenstate reduction and the quantum chaoticity of the large system it is embedded in, established here for a finite quantum system, raises the conceptual question as to the extension of this connection to the thermodynamic (N → ∞) limit. Clearly, this question cannot be conclusively addressed by the present method of exact diagonalization. Nevertheless, we can provide evidence to this effect by exploring the scaling with system size still within computational reach. We first establish that the degree of quantum chaoticity, as measured by the Brody parameter γ (or the scaled Shannon entropy S̄), indeed increases with system size at fixed strength of the interaction W_BB that breaks quantum integrability (Figure 12). The observed increase of quantum chaoticity with system size is qualitatively in line with the properties of classical chaos: In a mixed phase space with surviving local regular structures such as tori, their influence on the phase-space dynamics rapidly diminishes with increasing phase-space dimension, a prominent example of which is Arnold diffusion [3,4]. This increase of quantum chaoticity with system size at fixed interaction strength turns out to be key for the emergence of a universal, i.e., (nearly) system-size-independent, interrelation between the fraction of canonical eigenstates and the degree of quantum chaoticity. Both the Brody parameter γ and the fraction of density matrices complying with the Gibbs ensemble increase with system size at fixed bath interaction strength. As a consequence, a near universal, i.e., size-independent, relation G(γ) between the fraction of (approximate) Gibbs states and the degree of quantum chaos as measured by γ emerges (Figure 13).
The data for different combinations of values of W_BB and M_s fall on the same curve. A very similar relation would emerge for G(S̄) as a function of the scaled Shannon entropy. We have thus established the remarkable feature that the fraction of canonical eigenstates, i.e., the likelihood that a subsystem is in a Gibbs state when the large but finite system is in a pure energy eigenstate with zero von Neumann entropy, is controlled and can be tuned by γ (or S̄), i.e., by the degree of level repulsion in the quantum many-body system.
Conclusions and Outlook
In this work, we have explored the emergence of a thermal state (or Gibbs ensemble) of a small (sub)system in contact with a bath when the combined large but finite deterministic quantum system is isolated and in a well-defined energy eigenstate. As a prototypical case, we have considered an impurity embedded in an interacting spin-polarized Fermi-Hubbard many-body bath, which facilitates a clear-cut subsystem-bath decomposition and a tunable transition of the entire system from quantum integrability to quantum chaos. By tracing out the bath degrees of freedom, we have investigated how many of the resulting reduced density matrices of the subsystem represent a canonical density matrix. We have shown that the probability of finding a canonical density matrix monotonically increases with the degree of quantum chaos. The degree of quantum chaos is identified here both by the energy-level statistics and by the randomness of the eigenstates as measured by the Shannon entropy. The likelihood for the emergence of thermal states is thus found to be controlled by the degree of quantum chaoticity as parameterized by the Brody parameter or the Shannon entropy. Even though our simulations are limited to finite-size systems, the present results for varying system sizes suggest that the relation between the fraction of eigenstates of the isolated many-body system whose reduction to a small subsystem yields a canonical density matrix and the degree of quantum chaoticity is universal, i.e., size-independent. Each many-body eigenstate represents the fine-grained version of the energy shell of the microcanonical ensemble of the entire impurity-bath system. This connection between the fraction of canonical eigenstates and quantum chaoticity thus offers a direct quantum analogue to the role of classical chaos which Boltzmann invoked in deducing the classical (micro)canonical ensemble. One can view this as an example of classical-quantum correspondence for this cornerstone of the foundation of statistical mechanics. The statistical ensemble properties can already emerge for isolated energy eigenstates without invoking any randomness, e.g., coarse-graining over a macroscopically thin energy shell or a superposition of many eigenstates of the isolated large system, as frequently employed. The emergence of statistical ensemble properties from the reduction of pure states was anticipated early on by Landau and Lifshitz [42] and was later related to quantum chaos [14]. The present study establishes a direct quantitative relationship between the degree of canonicity and the degree of quantum chaos, covering, in particular, also the transition regime from quantum integrability to quantum chaos.
The present results are also expected to have implications for the topical issue of thermalization in finite quantum systems [24,26,27,30]. In this paper, we intentionally avoided this notion and, instead, focused on thermal equilibrium states, as we deduce the canonical density matrix from stationary energy eigenstates, bypassing any explicit time dependence of the dynamics. Thermalization of an initial non-equilibrium state is, by contrast, a fundamental probe of the time evolution of quantum many-body systems. Up to now, one primary focus has been on quantum quenches, i.e., the relaxation of out-of-equilibrium initial states. Their time evolution has typically shown a transition from an exponential decay for weakly perturbed many-body systems to a Gaussian decay in the strongly coupled limit, however, without an unambiguous correlation to quantum chaos [105,106]. The Shannon entropy was found to increase linearly with time before reaching saturation [107]. For disordered systems, an initial rapid decay followed by a slow power-law relaxation of occupation numbers has been observed [66,102]. The extension of the present study to a non-equilibrium initial state of a deterministic many-body system would yield the time evolution of the entire one-body RDM, and of its eigenvalues and eigenvectors, the time dependence of which remains to be explored. Moreover, the dependence of the relaxation dynamics of the RDM on the choice of the initial state for systems in the transition regime between quantum integrability and quantum chaos (i.e., for intermediate values of the Brody parameter γ) is of particular interest. Most importantly, will quantum chaos play an analogous role for the process of mixing as classical chaos does for classical non-equilibrium dynamics and the relaxation to equilibrium? The origin and properties of such "quantum mixing" remain a widely open question.
Data Availability Statement:
The presented data can be obtained from the corresponding author upon a reasonable request. | 12,262.6 | 2021-03-10T00:00:00.000 | [
"Physics"
] |
Overview of Catalysts with MIRA21 Model in Heterogeneous Catalytic Hydrogenation of 2,4-Dinitrotoluene
Although 2,4-dinitrotoluene (DNT) hydrogenation to 2,4-toluenediamine (TDA) has become less significant in basic and applied research, its industrial importance in polyurethane production is indisputable. The aim of this work is to characterize, rank, and compare the catalysts of 2,4-dinitrotoluene catalytic hydrogenation to 2,4-toluenediamine by applying the Miskolc Ranking 21 (MIRA21) model. This ranking model enables the characterization and comparison of catalysts with a mathematical model that is based on 15 essential parameters, such as catalyst performance, reaction conditions, catalyst conditions, and sustainability parameters. This systematic overview provides a comprehensive picture of the reaction, technological process, and the previous and new research results. In total, 58 catalysts from 15 research articles were selected and studied with the MIRA21 model, which covers the entire scope of DNT hydrogenation catalysts. Eight catalysts achieved the highest ranking (D1), whereas the transition metal oxide-supported platinum or palladium catalysts led the MIRA21 catalyst ranking list.
Introduction
Polyurethanes, referred to as urethanes, PUs, or PUR, are characterized by the urethane linkage -NH-C(=O)-O-, which is established by the reaction of organic isocyanate (NCO) groups and hydroxyl (OH) groups [1]. Due to their versatility and excellent mechanical, chemical, physical, and biological properties, they have a wide range of applications and a variety of uses, such as in appliances, automotive, construction, furniture, clothing, and the wood industries. Although the impact of COVID-19 has been startling, the global polyurethane market size was USD 56.45 billion in 2020 and it is projected to grow [2]. The rising demand for foams in furniture and in the construction industry has been driving the toluene diisocyanate (TDI) market growth.
TDI is one of the main materials of polyurethane production. TDI is produced in three steps: the nitration of toluene, the hydrogenation of dinitrotoluene to toluenediamine (TDA), and the phosgenation of diaminotoluene. The general industrial process of TDA formation is the catalytic hydrogenation of dinitrotoluene in the liquid phase in the presence of a catalyst. Six isomers of TDA can be generated, but the major intermediate of TDI production is 2,4-toluenediamine (2,4-TDA).
In addition to the production volume and the versatility of its application, the industrial importance of TDA production is also shown by its patent history. A search on the Google Patents website using the keywords 'dinitrotoluene', 'hydrogenation', and 'toluenediamine' yields 400 patents that have been published since 1953 [3]. While the first patents in the 1950s described some general reaction conditions and some catalyst components, the newest patents provide much more detailed descriptions of multicomponent catalysts, their composition, and their preparation [4][5][6][7][8][9][10]. Although the latest patents describe high-performance catalysts comprising an activated metal, one or more auxiliary metals, and a special support material such as an oxide [11], the most commonly used catalyst in the industry is the nickel catalyst [12]. Despite the smaller number of published scientific research papers [13], a high conversion and selectivity were achieved with catalysts of many different formulations of nickel, platinum, or palladium on carbon, oxide, or zeolite supports [14][15][16][17][18][19]. However, in addition to the catalytic performance, sustainability parameters, such as reversibility and stability, also play an increasingly important role in the chemical industry [20][21][22]. There are many steps between fundamental research on catalysts and their industrial application. Nevertheless, new scientific findings are essential for the development of applied technological innovations if the new knowledge is to be used effectively [23].
The Miskolc Ranking 21 (MIRA21) model is a new, multi-step, functional mathematical method to extract knowledge from heterogeneous catalyst data through the characterization, comparison, and ranking of a series of catalysts [24]. In our previous work, we discussed the method and its application possibilities through the reaction of nitrobenzene hydrogenation to aniline. The ranking model applies a fifteen-parameter descriptor system to facilitate the comparison of the experimental and scientific publication results of a given reaction to support catalyst development. The parameters of the descriptor system can be divided into four groups: catalyst performance, reaction conditions, catalyst conditions, and sustainability parameters. The model qualifies and ranks the catalysts based on these parameters.
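To make the idea of a descriptor-based ranking concrete, the following purely illustrative sketch aggregates normalized descriptor values with fixed weights; the weights, normalization, and aggregation shown here are hypothetical stand-ins and not the actual MIRA21 formula, which is specified in [24].

```python
# Purely illustrative descriptor-based ranking; min-max normalization and linear
# weighting are assumptions, not the MIRA21 definition.
import numpy as np

def rank_catalysts(descriptors, weights):
    """descriptors: (n_catalysts, 15) array of parameter values; weights: (15,) array."""
    d = np.asarray(descriptors, dtype=float)
    lo, hi = d.min(axis=0), d.max(axis=0)
    scaled = (d - lo) / np.where(hi > lo, hi - lo, 1.0)        # normalize each descriptor to [0, 1]
    scores = scaled @ (np.asarray(weights) / np.sum(weights))   # weighted aggregate score
    order = np.argsort(scores)[::-1]                            # best-ranked catalyst first
    return scores, order
```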
This overview summarizes the advances in the selective hydrogenation of dinitrotoluene to toluenediamine, based on the catalysts used to carry out this process in the last 50 years. As the focal point of this work, we characterized and ranked 58 catalysts from 15 articles according to the MIRA21 model to enable their systematic comparison.
Figure 1 describes the technological process: The production of TDI is carried out in a three-step continuous process (Figure 1). Dinitrotoluene is produced in the first step by the nitration of toluene. The second and key step is the catalytic hydrogenation of dinitrotoluene to toluenediamine. In the last step, toluenediamine is phosgenated to form TDI.
The formation of DNT by the mixed-acid nitration of toluene occurs at atmospheric pressure and between 40 °C and 70 °C. The main product of the process is a mixture of the 2,4- and 2,6-dinitrotoluene isomers (Figure 2) [26]. These are the starting reagents for the hydrogenation. The side products of the reaction are the 2,3- and 3,4-DNT isomers, whereas the 2,5- and 3,5-isomers and other byproducts can also be found in smaller quantities [26].
Figure 2. Raw material of hydrogenation process
The second step of the industrial process is the catalytic hydrogenation of dinitrotoluene to toluenediamine using a solid catalyst at high pressure and high temperature (100-150 °C, 5-8 bar). This step was previously carried out in the presence of iron filings and aqueous hydrochloric acid [27], but today it is performed using a Ra-Ni or Pd/C catalyst. Under these harsh industrial conditions (high pressure and temperature), an extremely high-quality product is produced with a high yield. Furthermore, Figure 3 shows the general reaction equation with the main product of dinitrotoluene hydrogenation.

The process takes place in a continuously stirred tank reactor, where the DNT isomer mixture usually reacts with hydrogen gas in the presence of the supported precious metal catalyst in a TDA/water medium. In order to achieve a high conversion, the correct catalyst composition and reaction conditions (temperature, pressure, etc.) are crucial [28]. The spent catalyst is removed from the system through a catalyst filter and new catalyst is added. It is important that the catalyst can be easily removed and regenerated. Westerwerp et al. made a pilot installation of a 2,4-DNT synthesis plant and studied the reactor design and operation process [29][30][31]. The experiments took place in a continuously stirred, three-phase slurry reactor with an evaporating solvent. They mentioned that, in addition to a good catalyst, it is important to choose the ideal hydrogenation reactor unit and optimal reaction parameters, and to solve the deactivation problem of the catalyst.
Reaction Mechanism and Kinetics
The kinetics and reaction mechanism of the catalytic hydrogenation of 2,4-dinitrotoluene to 2,4-toluenediamine were investigated by several research groups [15,16,[32][33][34]. In the 1990s, Janssen et al. studied the reaction scheme and modelled the reaction rates and catalyst activity to evaluate the performance of a batch slurry reactor at 308-357 K and over the pressure range of 0-4 MPa [35,36]. The reaction rates are described by the Langmuir-Hinshelwood model. They found that 2,4-dinitrotoluene can be converted to 2,4-toluenediamine through two parallel pathways with consecutive reaction steps. They found that 4-hydroxyamino-2-nitrotoluene, 4-amino-2-nitrotoluene, and 2-amino-4-nitrotoluene are the most stable intermediates, but the presence of 2-amino-4-hydroxyaminotoluene and another azoxy compound was also observed. One of the reaction pathways is the direct conversion of the o-nitro group to an amino group. The other is the conversion of the p-nitro group to an amino group in a two-step reaction.
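For orientation, a generic Langmuir-Hinshelwood rate expression of the type used for such nitro-group hydrogenations, with competitive adsorption of DNT and dissociatively adsorbed hydrogen on one type of site, reads as follows; the specific form and the adsorption terms fitted in [35,36] may differ:

```latex
r \;=\; \frac{k\,K_{\mathrm{DNT}}\,C_{\mathrm{DNT}}\,\sqrt{K_{\mathrm{H_2}}\,p_{\mathrm{H_2}}}}
             {\left(1 + K_{\mathrm{DNT}}\,C_{\mathrm{DNT}} + \sqrt{K_{\mathrm{H_2}}\,p_{\mathrm{H_2}}}\right)^{2}}
```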
While Janssen et al. were the first to describe the two reaction pathways, Neri et al. proposed a more complex reaction mechanism [37,38]. Neri et al. investigated this hydrogenation reaction over a supported Pd/C catalyst and found that 4-hydroxyamino-2-nitrotoluene, 2-amino-4-nitrotoluene, and 4-amino-2-nitrotoluene can form directly from 2,4-dinitrotoluene (Figure 4).

Electronic structure computational studies can be a great help in studying the reaction mechanism of three-phase catalytic hydrogenation reactions. In such a study, Barone et al. applied the Monte Carlo algorithm to simulate the batch hydrogenation of 2,4-dinitrotoluene on a carbon-supported palladium catalyst [31,42,43]. They investigated the influence of the molecular adsorption modes, the steric hindrance, and the metal dispersion on the reaction mechanism. They found that the steric hindrance of the different surface species had the largest influence on the mechanism.

Hajdu et al. worked on a new catalyst that contains precious metal on chromium-oxide nanowires for 2,4-toluenediamine synthesis [44]. In our previous work, we examined and described a possible reaction mechanism based on the GC-MS results. Our study confirms the mechanism by Neri et al., as we detected the presence of nitroso and hydroxylamine compounds (Figure 5). According to our results, 2,2-dinitro-4,4-azoxytoluene was found in the system (Figure 6), which could form through the reaction between the nitroso and hydroxylamine functional groups. In addition to the two semi-hydrogenated intermediates (4-amino-2-nitrotoluene, 2-amino-4-nitrotoluene), we detected other side products, which further supports the reaction mechanism of Neri et al. Figure 7 shows the detected and assumed side products obtained during the formation of TDA.
We also demonstrated that E-1-(2,4-dinitrophenyl)-N-(2-methyl-5-nitrophenyl)methanimine and 2-[(E)-[(2-methyl-5-nitrophenyl)imino]methyl]-5-nitrophenol were formed by water loss in the condensation reaction (A and B). Molecule C was formed by the reaction between 4-methylbenzene-1,3-diol and 2-nitroso-4-nitrotoluene. As shown in Figure 7, 2-nitro-4-nitrosotoluene reacted with 2-methoxy-4-methylphenol to yield compound D. Isomers E and F were formed by the reaction between 2-methoxy-1-methyl-4-nitrobenzene and dimethyl-2-nitrobenzene. Compound G was formed by the reaction between 2-methoxy-1,4-dimethylbenzene and 2-methoxy-4-nitrosotoluene.
Results and Discussion of TDA Synthesis Catalysts
The hydrogenation of 2,4-dinitrotoluene to 2,4-toluenediamine is an essential technological step in the polyurethane industry. Although the technological process, the reaction mechanism, and the reaction kinetics have been investigated and have come to be generally accepted, there is still much to learn about the catalysis of this process. That is why mapping the current state of catalyst development also facilitates further scientific research. However, the review of the literature on catalysts used for TDA synthesis does not provide sufficient information to achieve this aim. The comparison of the catalysts examined so far provides a much more comprehensive picture of the latest developments and their effectiveness. Therefore, the MIRA21 model was used to carry out the characterization, comparison, and qualification of the catalysts [24].
Catalyst Library
The results of the literature research are surprising because there are relatively few published scientific results about the dinitrotoluene hydrogenation process, and they were mostly published before 2000. Based on Google Scholar searches for the keywords dinitrotoluene hydrogenation, we obtained 2210 matches; however, when toluenediamine was added, there were only 212 hits. In total, 92 of these contained scientific results obtained after 2010. To demonstrate this, the keyword kinetic was added to the initial search, which then yielded 120 articles. Overall, only a few research groups have studied TDA synthesis and have prepared catalysts for this reaction. On the one hand, a smaller database reduces the reliability of the MIRA21 results. On the other hand, a smaller dataset makes it easier to delineate the possible research pathways on the topic.
After the first selection, 56 articles remained. During the data analysis, we concluded that it was justified to change the publication-year selection criterion (after 2000), and we therefore also worked with earlier articles. The left panel of Figure 8 shows the distribution of the scientific publications according to the publication date. The right panel of the figure presents the studied articles based on their Q-index in 2021 after the primary article selection (relevance, publication year, Q-index). The figure shows that the data used to analyze the catalysts mainly came from Q1 articles. A few publications whose publisher has since ceased to exist were also included in the analysis because they had previously provided space for the publication of high-quality scientific works.
The 58 qualified catalysts selected from 15 articles were mostly supported catalysts (Figure 9, left) [14,16,40,[44][45][46][47][48][49][50][51][52][53][54][55]. Most of the produced catalysts contained one active component on the support (middle of the figure). The catalysts with two active components generally applied palladium-platinum or palladium-iron combinations. The catalyst systems containing three active components were composed of either iridium-manganese-iron, iridium-iron-cobalt, or nickel-lanthanum-boron. The frequency of the active metal components was in the order of Pd > Pt > Ni. In addition to palladium and platinum, nickel was also seen, which is used as a common catalyst in industrial practice (Figure 9, right). Regarding the catalyst carrier, we mainly identified metal oxides (zirconium, chromium, titanium, aluminum, and silicon), ferrites, maghemites, zeolites, and activated carbon, as is typical in the chemical industry. Occasionally, PVP-based catalysts were also investigated [14].
The catalysts were characterized in detail, as 10 or more known parameters (out of 15) could be collected in each case. The tested reaction conditions are in the range of 295-393 K and 1-50 atm, with the exception of two cases (98 and 150 atm). The time required for maximum conversion ranged from a few minutes to one day and therefore shows a large standard deviation. The average reaction time for 100% conversion is 60 min (Figure 10). The reaction times of the best catalysts were under 40 min.
The amount of initial dinitrotoluene was in the range of 0.002 to 0.3 mol. The amount of active metal in the catalyst also showed a large spread, from 5.13 × 10⁻⁷ mol to 0.034 mol. Despite the low amount of the catalyst, as mentioned above, 100% conversion was achieved [54]. The increased amounts of material were typical for the nickel-type catalysts.
Furthermore, Figure 11 shows the catalytic performance results for the selected, studied, characterized, ranked, and classified catalysts. The conversion of the studied catalysts in classes D1-Q1-Q2 is over 99 n/n%; however, the product selectivity is much more differentiated. Based on these results, it can be said that obtaining pure TDA as the product of the hydrogenation is a serious challenge for researchers. The worst-performing catalysts (class Q4) worked below 50 n/n%.
The catalyst composition changed according to the ranking of the MIRA21 model. In addition, Figure 12 shows the active components and support types of the catalyst systems based on their classification. The best-performing catalysts (class D1) consist of palladium or platinum on transition metal oxide supports. Although nickel is more commonly used in the industry, these types of catalysts are in the lower half of the ranking. Iridium as an active component in the catalyst also obtained a relatively good MIRA21 number. Most of the unsupported, carbon black-, Al2O3-, and SiO2-supported catalysts are in the lower half of the ranking.
Figure 11. Conversion and selectivity according to MIRA21 classification (>99%, 99-50%, <50%).
Practically, the catalyst carrier of the systems differed according to the MIRA21 classes. Activated carbon supports are found mainly in the Q2 and Q4 classes, while the catalysts with transition metal oxide supports are at the top of the ranking.
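The D1 and Q1-Q4 labels used above suggest that catalysts are binned along the ranked list, with a small top band (D1) followed by broader bands down to Q4. The exact class boundaries are defined in the MIRA21 methodology [24]; the sketch below is only an assumed illustration of such a binning, with the band sizes as placeholder parameters.

def mira21_classes(ranked_ids, n_d1=8):
    """Assign illustrative classes to catalysts already sorted by descending
    MIRA21 number: the top n_d1 entries are labelled D1 and the remainder is
    split into four equal bands Q1-Q4. Real boundaries come from the model."""
    classes = {cid: "D1" for cid in ranked_ids[:n_d1]}
    rest = ranked_ids[n_d1:]
    for i, cid in enumerate(rest):
        classes[cid] = f"Q{int(4 * i / len(rest)) + 1}"
    return classes

# 58 catalysts, as in this review; the IDs are placeholders.
print(mira21_classes([f"cat{i:02d}" for i in range(1, 59)]))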
The eight best, D1-rated catalysts are listed in Table 1. The columns contain the ID code and designation of the catalysts, the type of catalyst support and active component, the number of known parameters, and the calculated MIRA21 number. The best MIRA21 catalysts consist of only one active component on transition metal oxide supports. Based on the results, the platinum-containing catalysts outperformed their competitors. The synergistic effect of combining active components is difficult to assess because not enough information is available. Class D1 includes the catalysts that were studied with respect to sustainability considerations, such as stability and reactivation capability. These catalysts are at the beginning of the innovation pathway and are not yet suitable for industrial application. Comparing these results with the ranking of catalysts analysed for the nitrobenzene hydrogenation reaction, the best MIRA21-ranked catalyst resembles the Pt/ZrO2 catalyst, one of the most effective catalyst systems in the first class of that study. Zhang et al. prepared a Pt/ZrO2/SBA-15 hybrid nanostructure catalyst that showed excellent catalytic performance at 313 K and 7 atm in 50 min for the hydrogenation of nitrobenzene to aniline [56]. They found that the dispersion of ZrO2 in SBA-15 improved the performance of the catalyst due to its mesoporous structure. It would therefore be worthwhile to test this catalyst for the synthesis of TDA as well.
The work of Hajdu et al. focused on the development of new magnetic catalysts for the hydrogenation of DNT to TDA [44,53,55]. One of these catalysts is Pd/NiFe2O4, which achieved a 99 n/n% TDA yield at 333 K and 20 atm. In this work, nickel ferrite spinel nanoparticles were synthesized to solve the problem of separating the catalyst from the products by magnetization. Another magnetic catalyst with good catalytic performance is Pd/maghemite, made by a combustion method with a sonochemical step. Palladium on a maghemite support gave high catalytic activity for TDA synthesis in about 60 min and under the same reaction conditions as the ferrite-supported hydrogenation. The first and fifth places of the MIRA21 ranking were taken by the chromium oxide-supported platinum and palladium catalysts. These innovative systems, prepared from chromium(IV) oxide nanowires decorated with platinum or palladium nanoparticles, performed excellently and showed high catalytic activity at 333 K and 20 atm. With the Pt/CrO2 catalyst, 304.8 mol of TDA was produced under these conditions per 1 mol of the precious metal used; with palladium as the active component, only 60.14 mol of TDA was produced, which is still a relatively large amount. From an industrial point of view, it is important that this type of catalyst can easily be separated from the reaction mixture due to its magnetic properties. The stability of the catalyst was studied, and it was found that it could be used at least four times without regeneration.
Ren and his colleagues developed half of the D1-class catalysts; these catalytic systems consist of zirconium oxide supports and platinum as the precious metal [54]. Ren et al. prepared the ZrO2-supported platinum catalysts with different Pt concentrations and at different reduction temperatures. They found that the 0.156% Pt-containing zirconium oxide catalyst had the highest catalytic performance at 353 K and 20 atm. According to their results, this catalyst reached an initial hydrogen consumption of 4583 mol H2 mol Pt⁻¹ min⁻¹. In this work, they also investigated the interaction between the precious metal and the oxide support, and found that zirconium oxide had the highest adsorption capacity for platinum ions due to its ability to be protonated and deprotonated.
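The productivity figures quoted in this and the previous paragraph are, in essence, the amount of product or consumed hydrogen normalised to the amount of precious metal (and, for rates, to time). A short sketch of that arithmetic, reusing only the values quoted above, is given below; the function names are illustrative.

def turnover_number(n_product_mol, n_metal_mol):
    """Moles of product formed per mole of active metal."""
    return n_product_mol / n_metal_mol

def metal_normalised_rate(n_h2_mol, n_metal_mol, time_min):
    """Moles of H2 consumed per mole of active metal per minute."""
    return n_h2_mol / (n_metal_mol * time_min)

# Values quoted above for the CrO2-supported catalysts (per 1 mol of metal):
print(turnover_number(304.8, 1.0))  # Pt/CrO2 -> 304.8 mol TDA per mol Pt
print(turnover_number(60.14, 1.0))  # Pd/CrO2 -> 60.14 mol TDA per mol Pd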
MIRA21 Method
In our previous work, we successfully developed the Miskolc Ranking 2021 (MIRA21) system as a multistep process for identifying new and useful patterns in catalyst data sets, providing a standard algorithm for catalyst characterization and allowing catalysts to be compared and ranked with minimal bias [24]. It is a practical and functional mathematical model for exact catalyst qualification with four classes of descriptors: catalyst performance, reaction conditions, catalyst conditions, and catalyst sustainability. The comparison of TDA catalysts can support catalyst design and the monitoring of research and development trends. The model facilitates the determination of the direction of catalyst development by establishing a system for ranking and classifying the catalysts. Furthermore, the standardization of data in scientific publications would also benefit from accurate and coherent reporting. Figure 13 illustrates the process of the MIRA21 method from the literature sources to useful knowledge.
Table 2 shows the descriptor system of the model. The parameters can be divided into four classes with different weighting coefficients. The catalyst performance class includes the conversion, selectivity, yield, and turnover number attributes. The second group contains the reaction conditions of the laboratory or large-scale experiments performed with the prepared catalysts. The third group, catalyst conditions, includes two easily described parameters. The last group addresses the sustainability and industrial applicability of the catalysts.
Table 2 also describes the parameters used to characterize the performance of the catalysts, as well as the main mathematical equations used in the calculation of the MIRA21 number and ranking.
Figure 14 lists the equations used in the MIRA21 model, where n_DNT, n_TDA, and n_catalyst are the corresponding molar amounts of the compounds; A is the value of an attribute; A_t is the transformed attribute value; min A and max A are the corresponding calculated minimum and maximum values of the attribute in the data set; MIN is the minimum scoring point; MAX is the maximum scoring point; i = 1, ..., 15 indexes the attributes; and MAXrank and MINrank are the highest and lowest scores of the MIRA21 ranking, respectively.
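Although Figure 14 itself is not reproduced here, the quantities listed above describe a standard min-max rescaling of each attribute onto a common scoring interval, followed by a weighted sum over the 15 attributes. The Python sketch below illustrates that scheme with placeholder weights, bounds, and scoring limits; the actual values, and the treatment of any reverse-scored attributes (where a smaller value, such as a shorter reaction time, is better), are those defined by the published model [24].

def transform(a, a_min, a_max, score_min=0.0, score_max=10.0, lower_is_better=False):
    """Min-max rescale an attribute value onto [score_min, score_max]."""
    if a_max == a_min:
        return score_max
    t = (a - a_min) / (a_max - a_min)
    if lower_is_better:
        t = 1.0 - t
    return score_min + t * (score_max - score_min)

def mira21_number(values, weights, bounds, reverse=()):
    """Weighted sum of transformed attributes; all inputs here are placeholders."""
    return sum(
        weights[k] * transform(v, *bounds[k], lower_is_better=(k in reverse))
        for k, v in values.items()
    )

# Two-attribute illustration: conversion (n/n%) and reaction time (min).
bounds = {"conversion": (40.0, 100.0), "reaction_time": (10.0, 1440.0)}
weights = {"conversion": 2.0, "reaction_time": 1.0}
values = {"conversion": 99.5, "reaction_time": 45.0}
print(mira21_number(values, weights, bounds, reverse=("reaction_time",)))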
Summary and Conclusions
In summary, Table 3 presents the MIRA21 results and the classification of the selected and studied catalysts for the hydrogenation of DNT. The aim of this work was to provide an overview of the hydrogenation of dinitrotoluene to toluenediamine. The chemical technology, the development of the reaction mechanism, and previous catalyst research were summarized using a quantitative comparison method called MIRA21. In total, 58 catalysts from 15 research articles were selected and studied with the MIRA21 model, covering the complete scientific literature on the catalytic hydrogenation of DNT. According to the ranking and classification, eight catalysts were placed in the highest class (D1).
The number of catalysts developed specifically for TDA synthesis is low, since the scientific research has focused mostly on the reaction mechanism and reaction kinetics. Despite this, many different catalyst systems have been developed.
More than 80% of the 58 catalysts produced and tested showed excellent conversions, but only 45% of them demonstrated a selectivity above 90 n/n%. More than 80% of the produced catalysts consisted of only one active component. Since combinations of active components have scarcely been investigated, multi-component catalysts are one recommended direction of research. Catalyst development represents a new trend that has led to the establishment of many high-performance catalysts. Among the analysed catalysts, those with oxide and/or magnetic supports showed better results under laboratory conditions than those with the traditional carbon-based supports. Carbon-supported nickel catalysts are primarily used in industry, but nickel catalysts did not yield the best results. The advantage of the well-performing magnetic catalysts, owing to the ease with which they can be recovered and reused, is indisputable, but the economic implications of their industrial application must also be considered.
Figure 4 compares the Janssen et al. and Neri et al. reaction schemes. The latter found that the hydrogenation of the hydroxylamine intermediate occurred via a triangular reaction pathway. Their further studies focused on 2-hydroxyamino-4-nitrotoluene as a reaction intermediate, which accumulates in the reaction mixture instead of the other hydroxylamine isomer [37,39,40]. It was shown that the reaction of the nitro group depends on the presence of electron-donating substituents and steric effects [41].
Figure 4. Dinitrotoluene hydrogenation pathways according to Janssen et al. and Neri et al. (blue lines show the new pathways compared with Janssen et al.'s results) [37].
Figure 7. Possible side products of the TDA synthesis according to Hajdu et al.'s research (black lines: detected molecules; blue lines: assumed molecules) [44].
Figure 8. Publication year distribution of 56 articles after the first selection (left) and Q-index distribution of 15 articles after the second selection (right).
Figure 9. Composition of the studied catalysts according to support and active component.
Figure 10. Maximum conversion with required reaction time.
Figure 12. Distribution and active components of catalysts according to MIRA21 ranking and classification (D1: best, Q4: worst qualification, according to MIRA21 coloring).
| 9,167.2 | 2023-02-10T00:00:00.000 | [
"Chemistry"
] |
Meeting the Data Management Compliance Challenge: Funder Expectations and Institutional Reality
In common with many global research funding agencies, in 2011 the UK Engineering and Physical Sciences Research Council (EPSRC) published its Policy Framework on Research Data along with a mandate that institutions be fully compliant with the policy by May 2015. The University of Bath has a strong applied science and engineering research focus and, as such, the EPSRC is a major funder of the university’s research. In this paper, the Jisc-funded Research360 project shares its experience in developing the infrastructure required to enable a research-intensive institution to achieve full compliance with a particular funder’s policy, in such a way as to support the varied data management needs of both the University of Bath and its external stakeholders. A key feature of the Research360 project was to ensure that after the project’s completion in summer 2013 the newly developed data management infrastructure would be maintained up to and beyond the EPSRC’s 2015 deadline. Central to these plans was the ‘University of Bath Roadmap for EPSRC’, which was identified as an exemplar response by the EPSRC. This paper explores how a roadmap designed to meet a single funder’s requirements can be compatible with the strategic goals of an institution. Also discussed is how the project worked with Charles Beagrie Ltd to develop a supporting business case, thus ensuring implementation of these long-term objectives. This paper describes how two new data management roles, the Institutional Data Scientist and Technical Data Coordinator, have contributed to delivery of the Research360 project and the importance of these new types of cross-institutional roles for embedding a new data management infrastructure within an institution. Finally, the experience of developing a new institutional data policy is shared. This policy represents a particular example of the need to reconcile a funder’s expectations with the needs of individual researchers and their collaborators. International Journal of Digital Curation (2013), 2(8), 157–171. http://dx.doi.org/10.2218/ijdc.v2i8.280
Introduction
The University of Bath is a comparatively small research-intensive UK university with an international reputation as a top-ten university. Links between research and commerce were written into the University of Bath's Charter when it was incorporated, resulting in much collaborative research between the university and industrial, commercial and public sector partners. The University of Bath has a strong applied science and engineering research focus and, as such, the Engineering and Physical Sciences Research Council (EPSRC) is a major funder of research at the University of Bath.
In 2011 the EPSRC, one of six research councils and other bodies that fund primary research in the UK, published its new Policy Framework on Research Data. This policy included nine expectations covering all aspects of data management, including the requirement for institutional policies, data and metadata publication, restrictions on access, length of preservation, persistent identifiers, non-digital data, and resourcing. In a change from the approach taken by many other UK funding bodies, responsibility for compliance was placed on the institution rather than on individual researchers. The EPSRC set two deadlines for the institutions that it funds: by May 2012 institutions were to have a roadmap in place, setting out how they planned to comply with the EPSRC's policy. Full compliance with the expectations is then required by May 2015. What made UK universities take particular note of this policy was the EPSRC's assertion that non-compliance would incur sanctions, which could ultimately include ineligibility for future EPSRC funding.
Due to the importance of EPSRC funding to its research effort, the University of Bath took the EPSRC's new policy framework extremely seriously. The university had already established a Research Data Steering Group to advise on data management issues across the institution. In 2011, this group successfully applied for funding to establish a project, Research360, which would initiate and pilot the work required to achieve full compliance. Research360 was an 18-month project funded by Jisc's 2011-2013 Managing Research Data programme, and was structured around meeting each of EPSRC's nine expectations.
In this paper, the Research360 project shares its experience in starting to develop the new data management infrastructure required to enable a research-intensive institution to achieve full compliance with a particular funder's policy, whilst simultaneously supporting the interests of the university and its external collaborators. The paper focuses on four essential components of this infrastructure: the roadmap required by EPSRC and a business case to support its implementation; two new data roles established to deliver the project; and the creation of a new data management policy.
'Roadmap for EPSRC' Development
The first step towards meeting the EPSRC's expectations was development of a roadmap setting out how full compliance with the policy framework would be achieved.The 'University of Bath Roadmap for EPSRC: Compliance with Research Data Management Expectations' (Lyon and Pink, 2012) was developed by the Research360 project on behalf of the university's Research Data Steering Group.The roadmap was originally based on Monash University's influential 'Research Data Management Strategy and Strategic Plan 2012 -2015' (Beitz, Dharmawardena and Searle, 2012).Monash University's strategy and strategic plan clearly demonstrated how the benefits of well managed research data were aligned with the university's long-term strategic aims, together with a series of 13 goals and associated initiatives, designed to ensure that the university continued to enjoy the benefits of improved research data management.
Taking Monash University's strategy as a starting point, the Research360 project team first aligned five new data management themes with the University of Bath's corporate plan and strategic aims.These themes covered areas such as international reputation, innovation, business planning, capacity and capability, and infrastructure.The next step was to convert the goals and initiatives in Monash University's strategic plan into objectives and actions relevant to the thematic areas specific to the University of Bath.This process involved extensive rewriting, recognising that Monash University and the University of Bath differed in terms of external drivers, funding environment and extent of data management infrastructure already in place.For example, Monash University already had a well-developed data management infrastructure and, as such, the strategy focused on leadership, with a goal to 'maintain and grow' international recognition of leadership in research data management.In contrast, the University of Bath focused on its relationship with external collaborators, with objectives including the specification of research data management requirements in new contracts with research partners.Importantly this process meant that, first and foremost, the final 'Roadmap for EPSRC' would meet the needs of the institution, not just the funder.
The second stage of the development process involved mapping the new Bath-focused objectives to the nine expectations of EPSRC's Policy Framework on Research Data.As part of this process, the Digital Curation Centre's (DCC) series of blog posts (Jones, 2012;DCC, 2012) in advance of the EPSRC's May 2012 deadline prompted the inclusion of contextual information setting out the rationale for the roadmap's development, and a statement of the current position for each expectation.The latter, relating to the University of Bath's position relative to each expectation, was established based on a Data Asset Framework survey that the university undertook during 2011 (Jones, 2011), from the outputs of an initial test of the DCC's CARDIO tool6 with subject librarians, and from the experience of members of the Research360 project team.
A combination of the objective-mapping exercise and the current position statements enabled gaps to be identified and additional objectives to be created.For example, in order to meet the first expectation that the organisation promote internal awareness of the objectives, an additional objective was added to focus activities, information and guidance on a single RDM website.In many cases, activities were also expanded to ensure full compliance with every EPSRC expectation, such as Objective 2.1, which was expanded to include development of a template data access statement for inclusion in published papers.Ultimately, the 13 goals in Monash University's 'Research Data Management Strategy and Strategic Plan 2012 -2015' were expanded to 22 objectives in the 'University of Bath Roadmap for EPSRC'.
Using this process to develop the roadmap ensured both that it met a specific funder's policy and aligned with the strategic goals of the institution.For example, Objective 1.1 of the roadmap seeks to 'develop the data management skills and knowledge of Bath researchers' by providing training to researchers, including postgraduate research students in Doctoral Training Centres (DTCs) and graduate schools.This objective would ensure that researchers are aware of the EPSRC's policy, of the external regulatory environment and reasons why data might be withheld, thus ensuring compliance with EPSRC Expectation 1 7 .This objective would also form part of the training and development of postgraduate researchers, thus contributing to the University's Research Strategy 8 .Similarly, Objective 7.1 seeks to 'align digital data storage infrastructure with research and data management requirements'.Delivering this objective would include development of a data repository for long-term retention, archiving and accreditation of research data.This objective would therefore ensure that EPSRC-funded research is securely preserved for ten years from the date of last access, thus meeting EPSRC's seventh expectation 9 , but would also contribute to strategic investment in 'high-quality research infrastructure, facilities [and] research support' as part of the University's Research Strategy.
Throughout the development of the roadmap, the support of the Pro-Vice-Chancellor (Research) proved vital. The Pro-Vice-Chancellor provided guidance and was able to anticipate what questions were likely to be raised by committees during the approval process. Importantly, the Pro-Vice-Chancellor also acted as a champion of the roadmap at the Vice-Chancellor's Group, where it was submitted for consideration and final approval. The approval process proved to be a valuable component in the roadmap's development, as it allowed the Vice-Chancellor's Group members to draw upon their extensive experience to provide valuable feedback. For example, their suggestions helped to position the roadmap at a realistic point between minimal compliance with the EPSRC's policy and an ambitious 'gold standard' data management service, which would have been unfeasible to deliver within the time available.

7 EPSRC Expectation i: "Research organisations will promote internal awareness of these principles and expectations and ensure that their researchers and research students have a general awareness of the regulatory environment and of the available exemptions which may be used, should the need arise, to justify the withholding of research data."
8 University of Bath Research Strategy: http://www.bath.ac.uk/research/about/strategy/index.html
9 EPSRC Expectation vii: "Research organisations will ensure that EPSRC-funded research data is securely preserved for a minimum of 10 years from the date that any researcher 'privileged access' period expires or, if others have accessed the data, from the last date on which access to the data was requested by a third party; all reasonable steps will be taken to ensure that publicly-funded data is not held in any jurisdiction where the available legal safeguards provide lower levels of protection than are available in the UK."
Once approval had been granted, the 'University of Bath Roadmap for EPSRC' was submitted to the EPSRC in time for the May 2012 deadline. Feedback from the EPSRC was positive, with Ben Ryan, EPSRC's Senior Evaluation Manager, commenting that the roadmap was "an excellent example of an appropriate response [that] fully meets our needs for assurance that the University is taking our policy framework on research data seriously."
The Business Case for Data Management
A key aspect of the 'Roadmap for ESPRC' was that responsibility for implementation and management oversight was assigned for each of the 22 objectives.In every case, this responsibility was shared between a number of key stakeholders across the university beyond the initial Research360 project team, ranging from professional service departments and committees to smaller research groups or individuals.This highlighted the extent to which ensuring effective and sustainable management of research data represents a shared responsibility, requiring collaboration between different services within the university.However, in recognition of the demands already placed on these key stakeholders, another outcome of the Research360 project was to develop longer-term strategic plans to ensure that sufficient resource was available to implement the 'Roadmap for EPSRC'.The development of a business case to support investment in research data management was central to these plans.To create this business case, the Research360 project team worked with Charles Beagrie Ltd11 , drawing on its extensive experience in cost/benefit analyses in the areas of data management and digital preservation.
Any business case must demonstrate the benefits that can be derived from investment of additional resource. To articulate the benefits of data management in the context of the collaborative research undertaken at the university, the Research360 project team and Charles Beagrie Ltd built on the Keeping Research Data Safe Benefits Framework to identify benefits both to the university community and to a range of external stakeholders. This was published as 'Benefits from Research Data Management in Universities for Industry and Not-for-Profit Research Partners' (Beagrie and Pink, 2012), which identified how management of research data within the university would benefit the university community (including researchers, students and professional services) and the institution, as well as external partners (including commercial, public and voluntary sector collaborators, government and society). For example, feedback from industry suggested that it would welcome more open access to research data, as this would provide reference datasets against which new approaches could be tested. Similarly, not-for-profit research partners would benefit from mechanisms that provided enhanced data security and access control in relation to personally sensitive data, as this would encourage more of the public to volunteer as participants in new research. Note that neither of these benefits directly related to any of the EPSRC's expectations.
The business case also demonstrated the importance of data management in the context of a number of key drivers.Central to these drivers was the EPSRC's data policy, so the business case reiterated the university's current level of compliance for each expectation, taken from the 'Roadmap for EPSRC'.In addition, the business case also highlighted the importance of data management in the context of the 2014 Research Excellence Framework (REF2014).To support this, the Research360 project and Charles Beagrie Ltd prepared an overview of how research data and data management might contribute to the three elements of REF2014: research outputs, impact, and environment (Beagrie, McKen and Pink, 2012).These guidelines recognised that 'it is the research activity itself and its impact that is the focus of the REF and the universities' submissions'.However, the university's compliance with the EPSRC expectations would benefit REF submissions and similar exercises, since it would focus on improvements in research data management highlighted by the guidelines as 'a support activity enabling excellent research' (Beagrie, McKen and Pink, 2012).
In order to support the transition from project-based activities to an embedded infrastructure, the business case presented the anticipated medium-to long-term costs of data management, based on a series of case studies prepared by Beagrie, Chruszcz and Lavoie (2008), which illustrated data preservation at a number of UK universities.In these case studies, Beagrie, Chruszcz and Lavoie demonstrated that the costs associated with institutional repositories for research data were an order of magnitude greater than costs associated with archiving e-publications alone.Further, they showed that the staff costs tend to be the major component of preservation costs, particularly during repository startup.Accordingly, the business case made a number of recommendations, which included investment in two permanent posts with responsibility for research data management.These posts were originally created as part of the Research360 project and are described below.
Data Management Roles and Responsibilities
In order to meet the ambitious goals of both the Research360 project and extensive funder requirements, a team of staff from key departments across the institution was brought together. This included the Research Publications Librarian, the Research Information Manager, and representatives from Computing Services, the Vice-Chancellor's Office, UKOLN and a cross-faculty academic research centre: the Centre for Sustainable Chemical Technologies (CSCT). The team was coordinated by a new role, that of the Institutional Data Scientist, who was supported by a Technical Data Coordinator.
Institutional Data Scientist
The Institutional Data Scientist was responsible for the coordination and overall delivery of the Research360 project and, as such, the development of the pilot data management infrastructure across the institution. Based in UKOLN, the Institutional Data Scientist had a cross-departmental role that facilitated communication and coordination among the different internal stakeholders. This was particularly important, as data management brought together a number of activities that were traditionally seen as distinct services provided by different professional service departments, such as grant applications (Research Support Office), data storage (Computing Services), and archive, publication and open access (Library).
A large component of the Data Scientist's role during the Research360 activity was to build a case for continuing with data management activities once the project finished.This involved working closely with other members of the project team and external consultants to develop the roadmap and business case previously described.To support them, the Data Scientist was responsible for collating evidence of demand for data management infrastructure, such as requests for support, improved management of research active data and re-use of project outputs by other institutions.The latter also demonstrated how investment in data management not only facilitated compliance with funder policies and benefited researchers, but also enhanced the university's national and international reputation, particularly as members of the project team were invited to present expert talks at national and international events.
While most of the project team had other core responsibilities as part of their normal roles within the university, the Data Scientist's sole focus on, and interest in, data management meant that they were able to act as a champion of data management.This not only ensured that the project progressed but also provided a central point of contact for all data management queries, both within the professional support services and, more importantly, for researchers.Here, the background of the Data Scientist as a researcher, both in academia and in industry, proved to be beneficial.Direct experience of the research process enabled the Data Scientist to engage with researchers and assure them that their research needs were regarded as paramount.The Data Scientist was able to demonstrate that the new tools, guidance and technical resources being developed were intended to support the research process and enhance activities already undertaken by researchers, and that compliance with funder policies would be an inevitable consequence of the university's investment in data management, rather than its sole motivator.
Assistance with writing data management plans (DMPs) represents a good example of this support.Many funders require submission of DMPs as part of a grant application.Help provided by the Data Scientist went beyond directing researchers to templates and guidance already available or mandated by funders, to include detailed review and enhancement of draft DMPs.In most cases, the Data Scientist was able to meet with researchers to discuss not only their DMP but more general data management concepts.This direct engagement often enabled researchers to improve the storage of their current data, to discover new metadata standards relevant to their discipline (such as MIBBI 14 ), or simply to reflect on their proposed methodologies for forthcoming projects by exploring plans for data capture and processing in more detail.Feedback from researchers regarding this level of support has been extremely positive, providing a foundation for the necessary cultural change needed to ensure compliance with funder requirements.
Technical Data Coordinator
Supporting the Data Scientist was a second full-time data management role, that of the Technical Data Coordinator. The primary role of the Technical Data Coordinator was to provide general research data technology expertise on a number of project areas, including data repository development, virtual research environments and electronic lab notebooks. An important aspect of the role was to provide specific coordination and communication with technical services, including Bath University Computing Services (BUCS).
The Technical Data Coordinator was seconded to the Research360 project from the cross-faculty Centre for Sustainable Chemical Technologies (CSCT), where they previously provided support for the centre's Doctoral Training Centre (DTC). As such, the Technical Data Coordinator had close links with academics in the Faculties of Science and Engineering, which are recipients of the majority of the University's EPSRC funding. In addition, the Technical Data Coordinator was able to use the centre's DTC as a test bed for many of the outputs of the project: postgraduate research students from CSCT attended the first pilot data management training workshop, and also trialled a range of data management planning tools. The established relationship between the Technical Data Coordinator and these doctoral students meant that they were willing to provide constructive feedback on draft deliverables, something which contributed substantially to the improvement of these resources for use by other researchers.
One of the primary methods by which compliance with the EPSRC's expectations could be achieved would be to use an institutional data repository for archive and publication of research data.Although a number of possible platforms for such a repository were available, all would have required some customisation both for research data and to support complete EPSRC compliance.As such, no institutional data repository had yet been selected for the University of Bath.Another facet of the Technical Data Coordinator's role was therefore to develop a specification for such a repository.It was essential that the specified repository would meet the needs of the EPSRC and other UK funding bodies, and also be usable by the university's researchers.In addition, it was intended that the specification would allow for future integration with existing research infrastructure, notably the open access publications repository, Opus, 16 and the Current Research Information System (CRIS).
To develop this specification, the Technical Data Coordinator collated information from an institution-wide survey of researchers on data management issues, and conducted a series of one-to-one interviews with representatives from key stakeholder and advisory groups, including Computing Services, the Library, the Research Support Office and UKOLN. The Data Scientist was interviewed to represent the needs of the EPSRC and other external partners, such as publishers. The Technical Data Coordinator assembled a series of data repository user stories (Cope, 2013c). An example of how the specification met the requirements of the EPSRC's policy, researchers in the institution and their commercial collaborators was the requirement for an option to mandate input of the core DataCite metadata fields. This would pave the way for minting of Digital Object Identifiers (DOIs) for datasets in the future. This functionality would not only meet the EPSRC's fifth expectation that digital data is issued with a 'robust digital object identifier' but would also allow researchers to format citations for their data and include persistent identifiers in publications, thus promoting discovery, re-use and attribution of their data and increasing their research's impact. Similarly, the requirement that a basic metadata schema must include licensing and embargo periods would comply with the EPSRC's sixth expectation relating to restricted access to commercially confidential data, and also allow adherence to collaboration agreements with commercial partners.
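To make the repository requirement above concrete, the sketch below shows what a minimal per-dataset record combining the core DataCite fields with licensing and embargo information might look like. The field names and the visibility rule are assumptions for illustration only, not the schema actually specified by the project.

from datetime import date

# Illustrative record: DataCite-style core fields plus licence/embargo entries.
record = {
    "identifier": "10.xxxx/placeholder-doi",   # DOI to be minted via DataCite
    "creators": ["Researcher, A."],
    "title": "Example research dataset",
    "publisher": "University of Bath",
    "publication_year": 2013,
    "rights": "CC BY",                          # licence chosen by the depositor
    "embargo_until": date(2015, 5, 1),          # optional restricted-access period
    "access_conditions": "Commercially confidential until the embargo expires",
}

def files_accessible(rec, today=None):
    """Metadata is always published; the data files themselves only become
    accessible once any embargo date has passed (assumed rule for this sketch)."""
    today = today or date.today()
    embargo = rec.get("embargo_until")
    return embargo is None or today >= embargo

print(files_accessible(record, today=date(2015, 6, 1)))  # -> True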
In addition to focusing on the technical aspects of data management, the Technical Data Coordinator's role was expanded to include provision of data management planning expertise. This involved development of a comprehensive suite of data management planning tools, including templates and guidance, with versions designed specifically for academic staff (Cope, 2013d; Cope, 2013e) and postgraduate research students (Cope, 2013a; Cope, 2013b).
It is anticipated that the Technical Data Coordinator's role will continue to develop once customisation of the institutional data repository commences after the project's completion. While delivery of technical expertise, training and support will continue, the focus will increasingly be on technical development, with provision of data management planning tools becoming the responsibility of the Institutional Data Scientist.
Shared Responsibilities
Many of the activities of the Institutional Data Scientist and Technical Data Coordinator overlapped as they worked together to deliver training, draft technical reports, gather requirements and present at dissemination events. This close collaboration meant that the increasing workload of data management support could be shared. For example, both roles were able to provide support for individual researchers, ensuring that requests for help submitted to <EMAIL_ADDRESS> were always responded to and resolved in a timely manner. It is important to note that half of the requests for support received by the project team originated outside the project's focal Faculties of Science and Engineering and, based on details provided by researchers, the majority of requests for help related to funders other than the EPSRC, predominantly the Economic and Social Research Council (ESRC), the Biotechnology and Biological Sciences Research Council (BBSRC), the Medical Research Council (MRC) and the National Health Service (NHS). This was because support requests were often the result of these funders requiring submission of a DMP as part of a grant application, something that is not currently required by the EPSRC. Although the Research360 project focused on the EPSRC's requirements, the number of requests for help with other funder policies raised the question of whether the project team should also support non-EPSRC-funded researchers.

17 EPrints Services: http://www.eprints.org/services/
18 DataCite: http://www.datacite.org/
19 Digital Object Identifiers: http://www.doi.org/
20 EPSRC Expectation v: "Research organisations will ensure that appropriately structured metadata describing the research data they hold is published (normally within 12 months of the data being generated) and made freely accessible on the internet; in each case the metadata must be sufficient to allow others to understand what research data exists, why, when and how it was generated, and how to access it. Where the research data referred to in the metadata is a digital object it is expected that the metadata will include use of a robust digital object identifier (for example, as available through the DataCite organisation - http://datacite.org)."
21 EPSRC Expectation vi: "Where access to the data is restricted the published metadata should also give the reason and summarise the conditions which must be satisfied for access to be granted. For example 'commercially confidential' data, in which a business organisation has a legitimate interest, might be made available to others subject to a suitable legally enforceable non-disclosure agreement."
Surveys of existing data management practice amongst University of Bath researchers (Jones, 2011;Pink, Cope and Jones, 2013) had highlighted considerable researcher uncertainty about data management and a lack of awareness of existing data infrastructure.Declining support to researchers who had requested help with other funders' policies, particularly when their other projects might be funded by the EPSRC, would have reinforced these misconceptions and the damaged reputation of data management would have spread rapidly amongst the University of Bath's close research community.It was therefore considered essential to support as many researchers as possible, regardless of their funding source, in order to enhance the status of data management and to initiate a general cultural change in how research data are managed.As a result, all researchers would be compliant with their funder's policies, including those whose research is supported by the EPSRC.This demonstrated an important lesson: that meeting the requirements of one particular funder cannot be achieved at the expense of another and that, perhaps surprisingly, these requirements can be met by instead focussing on the needs of the researchers, the institution and other external partners.
Use of the resources provided by both the Data Scientist and Technical Data Coordinator has, to date, been voluntary and dependent on researchers seeking out the assistance they require as and when a need arises. However, in order for the institution to achieve full compliance with the EPSRC's policy, it will be necessary to ensure that all researchers are aware not only of their responsibilities under the policy but also of the support the university offers to help them manage their data. One method of achieving this is advocacy, either directly or via word-of-mouth between researchers, giving rise to the cultural change previously described. However, this can take time and, in order to meet the EPSRC's rapidly approaching 2015 deadline, the adoption of a more prescriptive method was required.
Policy Development
In order to ensure compliance with the EPSRC's third data expectation 22 relating to organisational policies and associated processes, it was necessary to develop a new, high-level policy for research data management.The development of this policy for the University of Bath typifies the challenge of reconciling conflicting internal and external drivers.Like many other UK universities, the University of Bath initially based its draft policy on the University of Edinburgh's influential Policy for Management of Research Data (2011).However, internal guidance on policy development quickly established that, since policies generally comprise requirements that are both measurable and enforceable, the policy ought not to consist of a purely aspirational set of principles.This change in style raised a number of questions: to whom and what would the policy apply, and how could compliance be achieved before a full data management infrastructure was in place?
When considering the scope of the policy, it was clear that at a very minimum it must cover all research funded by the EPSRC and, by extension, all research council-funded research.Immediately, the question of to whom the policy should apply became pertinent.University staff are contractually obliged to comply with relevant university policies.As many of the university's postgraduate research students are funded by the EPSRC and other research councils, it was essential that their research was also covered by the policy.As such, the policy team considered expanding the scope of the policy to include all research owned by the university.However, this caused difficulties in two areas.Firstly, it would have excluded dissertation projects undertaken by final-year undergraduate and taught postgraduate students.However, whilst the project team felt that experience of data management would be a valuable skill for graduates entering postgraduate research or employment, feedback from researchers was that mandating provision of data management training by policy could be considered excessive.
The second, more pressing problem was that the overlap between data ownership and data management was complicated by the collaborative nature of most research undertaken by the university. The nature of these research partnerships is defined by collaboration agreements, which tend to be complex legal documents between a number of different academic and commercial, national and international partners. Existing collaboration agreements generally do not explicitly reference research data as an output, let alone the long-term preservation and publication of such data. Further, in order to protect the commercial interests of industrial partners, it can sometimes be necessary for funding council policies to be flexibly interpreted, something that the funding councils tend to be amenable to in order to encourage these research partnerships. As such, developing a data management policy that simultaneously mandated compliance with funding council policies, whilst containing sufficient caveats to promote future collaborations with industry, proved extremely difficult. Feedback from researchers on early drafts of the policy suggested that inclusion of too many sub-clauses made the policy difficult to read, understand and subsequently comply with.

22 EPSRC Expectation iii: "Each research organisation will have specific policies and associated processes to maintain effective internal awareness of their publicly-funded research data holdings and of requests by third parties to access such data; all of their researchers or research students funded by EPSRC will be required to comply with research organisation policies in this area or, in exceptional circumstances, to provide justification of why this is not possible."
The policy development team therefore decided to separate data management from data ownership.An alternative solution was to apply the policy to all activities classed as 'research', as defined in the internationally accepted OECD Frascati Manual.Research was therefore defined as 'creative work undertaken on a systematic basis in order to increase the stock of knowledge, including knowledge of man, culture and society, and the use of this stock of knowledge to devise new applications' (OECD, 2002).The advantage of this definition was that it would include both research council-funded research and research funded by other funding bodies, such as charities and industry.However, it also included research that the university provides as a service for external researchers, such as the industrial consultancy provided by the Microscopy and Analysis Suite 23 .As the outputs of this research do not belong to the university, and the storage, retention and access to such data are generally defined by the contract, this type of research had to be explicitly excluded from the policy.
Another type of research that had to be excluded from the policy was that undertaken by postgraduate research students studying for an EngD or similar professional doctorate.These students, whilst still members of the university, conduct their research embedded entirely in external organisations and, as such, would use research infrastructure provided by their host organisation.The university would therefore be unable to mandate how research data created by these students were managed.Limitation of the policy's scope to research carried out at and for the university was therefore required.
The second aspect of policy development that proved difficult was allowing compliance before a full data management infrastructure was in place.In order to ensure that the university is fully compliant with the EPSRC's policy by 2015, it is necessary to mandate how data created by current projects are managed.For example, EPSRC-funded projects started since the publication of the nine expectations will have to ensure that research data 'be made as widely and freely available as possible' and that structured metadata be 'sufficient to allow others to understand…why, when and how it was generated.'For researchers publishing their work in 2015 it would be unfeasible, or in some cases impossible, for them to provide this information retrospectively about data created several years previously.The process of capturing descriptive metadata must therefore start as soon as possible in order for researchers to comply with the policy in the future.However, a research data survey carried out by the Research360 project team determined that some researchers (16.4% of 210 respondents) do not currently document their data sufficiently for others to understand them, and some perceived preparation of data for publication to be an additional burden on their time (Pink, Cope and Jones, 2013).Researchers are under a lot of competing pressures and without a policy mandating data documentation and publication, some researchers are unlikely to do this voluntarily.This raises a further question about where researchers publish their data.Some disciplines, such as the biological sciences, structural chemistry and the social sciences, have a long history of open access data archiving, but for many researchers there are no national or internationally maintained data repositories where data can be archived and published.This is particularly so within engineering and therefore of concern with regard to EPSRC-funded research.Use of an institutional data repository would enable researchers to fulfil many of the EPSRC's expectations, particularly relating to publication of structured metadata, use of access restrictions, secure preservation for ten years and use of persistent identifiers.However, by the end of the Research360 project this repository was still in the early stages of development and not anticipated to be fully customised and ready for use until after approval of the data policy.
As previously discussed, compliance with relevant university policies is mandatory for all staff. A policy that could not be complied with because the necessary infrastructure was not yet in place would immediately place all research staff in breach of that policy. To avoid this problem, other UK universities have used a number of approaches. For example, the University of Edinburgh included a statement which 'acknowledged that this is an aspirational policy, and that implementation will take some years.' Alternatively, the University of Bristol's draft policy consisted of a set of guiding principles that sought to encourage researcher practice, rather than mandate activities. Due to the contractual nature of Bath policies, neither of these approaches was deemed suitable. Instead, it was agreed that the policy would be accompanied by an additional set of guidelines that would demonstrate to researchers how, in the interim period while the full data infrastructure is developed, minimal compliance could be achieved using resources already in place. For example, publication of the existence of data, with details of how the data could be accessed (possibly via provision of a contact email address), could be achieved via the existing CRIS. An advantage of this solution was that separation of a high-level policy from more detailed guidance would allow the latter to be updated frequently as and when the data infrastructure is developed, without having to re-submit the policy itself for approval.
A final area of potential conflict between the various funders of academic research, as mentioned above, is the desire of publicly funded research councils to promote availability and access to research data, whereas many researchers and their commercial partners want to be able to restrict access to data. Such restriction allows them to maximise their return on the time and skill they have invested in creating the data by publishing a number of articles based on them, or sometimes by commercialising results. The EPSRC's policy, like that of many other funding bodies, recognises that there may be 'available exemptions which may be used, should the need arise, to justify the withholding of research data' and that these might include '…"commercially confidential" data, in which a business organisation has a legitimate interest.' Translating this into policy should, in theory, include a strong statement advocating publication of research data, whilst including caveats to allow for the withholding of data, where appropriate. In practice however, researchers need clear guidelines about what constitutes 'appropriate' or 'damage' to the research process due to inappropriate release of data. Does, for example, a competitor research group using published data for an identical follow-up study and then publishing the findings before the data's creators constitute damage to the research process? Until data publication is considered equal to article publication in terms of career development, many researchers might argue that it does. Correctly wording the policy to ensure that researchers do not all apply 'appropriate' caveats to publication of their data is likely to prove difficult and it may take time to determine how successfully the policy achieves the balance between funding council requirements and researcher interests.
Conclusion
The Research360 project concluded that for a research-intensive institution to achieve full compliance with a particular funder's policy, it can, perhaps counter-intuitively, be necessary to focus instead on fulfilling the needs of the institution, its external partners and researchers funded by other bodies. Development of the exemplar 'Roadmap for EPSRC' demonstrated the importance of aligning a new data management infrastructure with the existing strategic goals of the institution. Gaining the support and resources required to implement this roadmap meant exploring how a range of stakeholders beyond the institution and focal funder would also benefit from investment in improved data management. Once high-level plans for infrastructure development are in place, it is the researchers themselves who must change how they manage their data to comply with their funders' policies. These researchers require support, not only in terms of technical infrastructure such as a repository for data archiving and publication, but also in the form of guidance and individual assistance. The two data management roles described here have been pivotal in providing this support. Finally, the fact that development of the institutional data policy extended beyond the end of the Research360 project demonstrated how difficult it can be to reconcile the finer details of both funder and institutional needs, particularly before a supporting infrastructure has been fully implemented. Looking to the future, the continued provision of a research data service to support all researchers, regardless of their funding source, will continue the cultural change already started, meaning that best practice in data management will become 'business as normal' and all researchers will comply with their funders' policies, including the subset funded by the EPSRC.
…from which the specification was distilled and prioritised according to the needs of the University and the Research360 project. This specification was used to commission a pilot data repository, developed by EPrints Services 17.
14 Minimal Information for Biological and Biomedical Investigations: http://mibbi.sourceforge.net/portal.shtml
15 More information about the pilot training workshop and the exercise testing data management planning tools is available via two Research360 blog posts: http://blogs.bath.ac.uk/research360/2012/02/rdm101-intro-definitions/ and http://blogs.bath.ac.uk/research360/2012/03/rdm101-data-management-planning/
16 University of Bath Online Publications Store (Opus): http://opus.bath.ac.uk/
23 University of Bath Microscopy and Analysis Suite: http://www.bath.ac.uk/facilities/mas/industry/
"Computer Science"
] |
An expanded evaluation of protein function prediction methods shows an improvement in accuracy
Background: A major bottleneck in our understanding of the molecular underpinnings of life is the assignment of function to proteins. While molecular experiments provide the most reliable annotation of proteins, their relatively low throughput and restricted purview have led to an increasing role for computational function prediction. However, assessing methods for protein function prediction and tracking progress in the field remain challenging. Results: We conducted the second critical assessment of functional annotation (CAFA), a timed challenge to assess computational methods that automatically assign protein function. We evaluated 126 methods from 56 research groups for their ability to predict biological functions using Gene Ontology and gene-disease associations using Human Phenotype Ontology on a set of 3681 proteins from 18 species. CAFA2 featured expanded analysis compared with CAFA1 with regard to data set size, variety, and assessment metrics. To review progress in the field, the analysis compared the best methods from CAFA1 to those of CAFA2. Conclusions: The top-performing methods in CAFA2 outperformed those from CAFA1. This increased accuracy can be attributed to a combination of the growing number of experimental annotations and improved methods for function prediction. The assessment also revealed that the definition of top-performing algorithms is ontology specific, that different performance metrics can be used to probe the nature of accurate predictions, and that predictions are comparatively diverse in the biological process and human phenotype ontologies. While there was methodological improvement between CAFA1 and CAFA2, the interpretation of results and usefulness of individual methods remain context-dependent. Electronic supplementary material: The online version of this article (doi:10.1186/s13059-016-1037-6) contains supplementary material, which is available to authorized users.
Background
Accurate computer-generated functional annotations of biological macromolecules allow biologists to rapidly generate testable hypotheses about the roles that newly identified proteins play in processes or pathways. They also allow them to reason about new species based on the observed functional repertoire associated with their genes. However, protein function prediction is an open research problem and it is not yet clear which tools are best for predicting function. At the same time, critically evaluating these tools and understanding the landscape of the function prediction field is a challenging task that extends beyond the capabilities of a single lab.
Assessments and challenges have a successful history of driving the development of new methods in the life sciences by independently assessing performance and providing discussion forums for the researchers [1]. In 2010-2011, we organized the first critical assessment of functional annotation (CAFA) challenge to evaluate methods for the automated annotation of protein function and to assess the progress in method development in the first decade of the 2000s [2]. The challenge used a time-delayed evaluation of predictions for a large set of target proteins without any experimental functional annotation. A subset of these target proteins accumulated experimental annotations after the predictions were submitted and was used to estimate the performance accuracy. The estimated performance was subsequently used to draw conclusions about the status of the field.
The first CAFA (CAFA1) showed that advanced methods for the prediction of Gene Ontology (GO) terms [3] significantly outperformed a straightforward application of function transfer by local sequence similarity. In addition to validating investment in the development of new methods, CAFA1 also showed that using machine learning to integrate multiple sequence hits and multiple data types tends to perform well. However, CAFA1 also identified challenges for experimentalists, biocurators, and computational biologists. These challenges include the choice of experimental techniques and proteins in functional studies and curation, the structure and status of biomedical ontologies, the lack of comprehensive systems data that are necessary for accurate prediction of complex biological concepts, as well as limitations of evaluation metrics [2,[4][5][6][7]. Overall, by establishing the state-of-the-art in the field and identifying challenges, CAFA1 set the stage for quantifying progress in the field of protein function prediction over time.
In this study, we report on the major outcomes of the second CAFA experiment, CAFA2, that was organized and conducted in 2013-2014, exactly 3 years after the original experiment. We were motivated to evaluate the progress in method development for function prediction as well as to expand the experiment to new ontologies. The CAFA2 experiment also greatly expanded the performance analysis to new types of evaluation and included new performance metrics. By surveying the state of the field, we aim to help all direct and indirect users of computational function prediction software develop intuition for the quality, robustness, and reliability of these predictions.
Experiment overview
The timeline for the second CAFA experiment followed that of the first experiment and is illustrated in Fig. 1. Briefly, CAFA2 was announced in July 2013 and officially started in September 2013, when 100,816 target sequences from 27 species were made available to the community. Teams were required to submit prediction scores within the (0, 1] range for each protein-term pair they chose to predict on. The submission deadline for depositing these predictions was set for January 2014 (time point t_0). We then waited until September 2014 (time point t_1) for new experimental annotations to accumulate on the target proteins and assessed the performance of the prediction methods. We will refer to the set of all experimentally annotated proteins available at t_0 as the training set and to a subset of target proteins that accumulated experimental annotations during (t_0, t_1] and were used for evaluation as the benchmark set. It is important to note that the benchmark proteins and the resulting analysis vary based on the selection of time point t_1. For example, a preliminary analysis of the CAFA2 experiment was provided during the Automated Function Prediction Special Interest Group (AFP-SIG) meeting at the Intelligent Systems for Molecular Biology (ISMB) conference in July 2014.
The participating methods were evaluated according to their ability to predict terms in GO [3] and Human Phenotype Ontology (HPO) [8]. In contrast with CAFA1, where the evaluation was carried out only for the Molecular Function Ontology (MFO) and Biological Process Ontology (BPO), in CAFA2 we also assessed the performance for the prediction of Cellular Component Ontology (CCO) terms in GO. The set of human proteins was further used to evaluate methods according to their ability to associate these proteins with disease terms from HPO, which included all sub-classes of the term HP:0000118, "Phenotypic abnormality".
In total, 56 groups submitting 126 methods participated in CAFA2. From those, 125 methods made valid predictions on a sufficient number of sequences. Further, 121 methods submitted predictions for at least one of the GO benchmarks, while 30 methods participated in the disease gene prediction tasks using HPO.
Evaluation
The CAFA2 experiment expanded the assessment of computational function prediction compared with CAFA1. This includes the increased number of targets, benchmarks, ontologies, and method comparison metrics.
We distinguish between two major types of method evaluation. The first, protein-centric evaluation, assesses performance accuracy of methods that predict all ontological terms associated with a given protein sequence. The second type, term-centric evaluation, assesses performance accuracy of methods that predict if a single ontology term of interest is associated with a given protein sequence [2]. The protein-centric evaluation can be viewed as a multi-label or structured-output learning problem of predicting a set of terms or a directed acyclic graph (a subgraph of the ontology) for a given protein.
Because the ontologies contain many terms, the output space in this setting is extremely large and the evaluation metrics must incorporate similarity functions between groups of mutually interdependent terms (directed acyclic graphs). In contrast, the term-centric evaluation is an example of binary classification, where a given ontology term is assigned (or not) to an input protein sequence. These methods are particularly common in disease gene prioritization [9]. Put otherwise, a protein-centric evaluation considers a ranking of ontology terms for a given protein, whereas the term-centric evaluation considers a ranking of protein sequences for a given ontology term.
Both types of evaluation have merits in assessing performance. This is partly due to the statistical dependency between ontology terms, the statistical dependency among protein sequences, and also the incomplete and biased nature of the experimental annotation of protein function [6]. In CAFA2, we provide both types of evaluation, but we emphasize the protein-centric scenario for easier comparisons with CAFA1. We also draw important conclusions regarding method assessment in these two scenarios.
No-knowledge and limited-knowledge benchmark sets
In CAFA1, a protein was eligible to be in the benchmark set if it had not had any experimentally verified annotations in any of the GO ontologies at time t_0 but accumulated at least one functional term with an experimental evidence code between t_0 and t_1; we refer to such benchmark proteins as no-knowledge benchmarks. In CAFA2 we introduced proteins with limited knowledge, which are those that had been experimentally annotated in one or two GO ontologies (but not in all three) at time t_0. For example, for the performance evaluation in MFO, a protein without any annotation in MFO prior to the submission deadline was allowed to have experimental annotations in BPO and CCO.
During the growth phase, the no-knowledge targets that have acquired experimental annotations in one or more ontologies became benchmarks in those ontologies. The limited-knowledge targets that have acquired additional annotations became benchmarks only for those ontologies for which there were no prior experimental annotations. The reason for using limited-knowledge targets was to identify whether the correlations between experimental annotations across ontologies can be exploited to improve function prediction.
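To make these selection rules concrete, the following is a minimal sketch of how a target could be classified as a no-knowledge or limited-knowledge benchmark for a given ontology. It is not the actual CAFA2 pipeline; the data structures and function name are hypothetical.

```python
# Hypothetical sketch of the benchmark-selection rules described above.
# `annotations_t0` / `annotations_t1` map a protein accession to the set of GO
# ontologies ("MFO", "BPO", "CCO") in which it carries experimental annotations
# at the submission deadline t0 and at the end of the growth phase t1.

GO_ONTOLOGIES = {"MFO", "BPO", "CCO"}

def benchmark_type(protein, ontology, annotations_t0, annotations_t1):
    """Return 'no-knowledge', 'limited-knowledge', or None for one ontology."""
    before = annotations_t0.get(protein, set())
    after = annotations_t1.get(protein, set())
    gained = ontology in after and ontology not in before
    if not gained:
        return None                      # no newly acquired annotation to evaluate against
    if not before:
        return "no-knowledge"            # unannotated in all three ontologies at t0
    if before < GO_ONTOLOGIES:           # annotated in one or two, but not all three, at t0
        return "limited-knowledge"
    return None

# Example: a protein with only BPO annotations at t0 that gains an MFO term by t1
t0 = {"P12345": {"BPO"}}
t1 = {"P12345": {"BPO", "MFO"}}
print(benchmark_type("P12345", "MFO", t0, t1))  # -> "limited-knowledge"
```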
The selection of benchmark proteins for evaluating HPO-term predictors was separated from the GO analyses. We created only a no-knowledge benchmark set in the HPO category.
Partial and full evaluation modes
Many function prediction methods apply only to certain types of proteins, such as proteins for which 3D structure data are available, proteins from certain taxa, or specific subcellular localizations. To accommodate these methods, CAFA2 provided predictors with an option of choosing a subset of the targets to predict on as long as they computationally annotated at least 5,000 targets, of which at least ten accumulated experimental terms. We refer to the assessment mode in which the predictions were evaluated only on those benchmarks for which a model made at least one prediction at any threshold as partial evaluation mode. In contrast, the full evaluation mode corresponds to the same type of assessment performed in CAFA1 where all benchmark proteins were used for the evaluation and methods were penalized for not making predictions.
In most cases, for each benchmark category, we have two types of benchmarks, no-knowledge and limited-knowledge, and two modes of evaluation, full mode and partial mode. Exceptions are all HPO categories, which only have no-knowledge benchmarks. The full mode is appropriate for comparisons of general-purpose methods designed to make predictions on any protein, while the partial mode gives an idea of how well each method performs on a self-selected subset of targets.
Evaluation metrics
Precision-recall curves and remaining uncertainty-misinformation curves were used as the two chief metrics in the protein-centric mode [10]. We also provide a single measure for evaluation of both types of curves as a real-valued scalar to compare methods; however, we note that any choice of a single point on those curves may not match the intended application objectives for a given algorithm. Thus, a careful understanding of the evaluation metrics used in CAFA is necessary to properly interpret the results.
Precision (pr), recall (rc), and the resulting F_max are defined as

$$\mathrm{pr}(\tau) = \frac{1}{m(\tau)} \sum_{i:\, |P_i(\tau)|>0} \frac{|P_i(\tau) \cap T_i|}{|P_i(\tau)|}, \qquad \mathrm{rc}(\tau) = \frac{1}{n_e} \sum_{i=1}^{n_e} \frac{|P_i(\tau) \cap T_i|}{|T_i|}, \qquad F_{\max} = \max_{\tau} \left\{ \frac{2 \cdot \mathrm{pr}(\tau) \cdot \mathrm{rc}(\tau)}{\mathrm{pr}(\tau) + \mathrm{rc}(\tau)} \right\}, \qquad m(\tau) = \sum_{i=1}^{n_e} \mathbb{1}\!\left(|P_i(\tau)|>0\right),$$

where P_i(τ) denotes the set of terms that have predicted scores greater than or equal to τ for a protein sequence i, T_i denotes the corresponding ground-truth set of terms for that sequence, m(τ) is the number of sequences with at least one predicted score greater than or equal to τ, 1(·) is an indicator function, and n_e is the number of targets used in a particular mode of evaluation. In the full evaluation mode n_e = n, the number of benchmark proteins, whereas in the partial evaluation mode n_e = m(0), i.e., the number of proteins that were chosen to be predicted using the particular method. For each method, we refer to m(0)/n as the coverage because it provides the fraction of benchmark proteins on which the method made any predictions.
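The following is a minimal Python sketch of this protein-centric F_max computation. It is not the official assessment code; it assumes that predicted and ground-truth term sets have already been propagated over the ontology, and the data layout is illustrative.

```python
import numpy as np

def f_max(predictions, truth, thresholds=np.arange(0.01, 1.01, 0.01), partial=False):
    """Protein-centric F_max as sketched above.

    predictions: dict protein -> dict term -> score in (0, 1]
    truth:       dict protein -> set of ground-truth terms (assumed propagated)
    """
    best = 0.0
    n = len(truth)
    for tau in thresholds:
        precisions, n_covered, recall_sum = [], 0, 0.0
        for prot, true_terms in truth.items():
            pred = {t for t, s in predictions.get(prot, {}).items() if s >= tau}
            if pred:
                n_covered += 1
                precisions.append(len(pred & true_terms) / len(pred))
            recall_sum += len(pred & true_terms) / len(true_terms) if true_terms else 0.0
        if n_covered == 0:
            continue
        pr = sum(precisions) / n_covered                            # averaged over m(tau)
        n_e = sum(1 for p in truth if predictions.get(p)) if partial else n
        rc = recall_sum / n_e                                       # n (full) or m(0) (partial)
        if pr + rc > 0:
            best = max(best, 2 * pr * rc / (pr + rc))
    return best
```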
The remaining uncertainty (ru), misinformation (mi), and the resulting minimum semantic distance (S_min) are defined as

$$\mathrm{ru}(\tau) = \frac{1}{n_e} \sum_{i=1}^{n_e} \sum_{f \in T_i \setminus P_i(\tau)} \mathrm{ic}(f), \qquad \mathrm{mi}(\tau) = \frac{1}{n_e} \sum_{i=1}^{n_e} \sum_{f \in P_i(\tau) \setminus T_i} \mathrm{ic}(f), \qquad S_{\min} = \min_{\tau} \sqrt{\mathrm{ru}(\tau)^2 + \mathrm{mi}(\tau)^2},$$

where ic(f) is the information content of the ontology term f [10]. It is estimated in a maximum likelihood manner as the negative binary logarithm of the conditional probability that the term f is present in a protein's annotation given that all its parent terms are also present. Note that here, n_e = n in the full evaluation mode and n_e = m(0) in the partial evaluation mode applies to both ru and mi.
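A corresponding sketch for S_min, again assuming propagated annotations and a precomputed information-content table; the function name and data layout are illustrative, not the project's actual code.

```python
import math

def s_min(predictions, truth, ic, thresholds=None, n_e=None):
    """Minimum semantic distance S_min as sketched above.

    ic: dict term -> information content ic(f), precomputed from the training corpus
    """
    thresholds = thresholds or [t / 100 for t in range(1, 101)]
    n_e = n_e or len(truth)                    # n in full mode, m(0) in partial mode
    best = float("inf")
    for tau in thresholds:
        ru = mi = 0.0
        for prot, true_terms in truth.items():
            pred = {t for t, s in predictions.get(prot, {}).items() if s >= tau}
            ru += sum(ic.get(f, 0.0) for f in true_terms - pred)   # missed annotations
            mi += sum(ic.get(f, 0.0) for f in pred - true_terms)   # wrongly predicted terms
        best = min(best, math.hypot(ru / n_e, mi / n_e))           # distance to the origin
    return best
```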
In addition to the main metrics, we used two secondary metrics. Those were the weighted version of the precision-recall curves and the version of the remaining uncertainty-misinformation curves normalized to the [0, 1] interval. These metrics and the corresponding evaluation results are shown in Additional file 1.
For the term-centric evaluation we used the area under the receiver operating characteristic (ROC) curve (AUC). The AUCs were calculated for all terms that have acquired at least ten positively annotated sequences, whereas the remaining benchmarks were used as negatives. The term-centric evaluation was used both for ranking models and to differentiate well and poorly predictable terms. The performance of each model on each term is provided in Additional file 1.
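A hedged sketch of this term-centric evaluation using scikit-learn's roc_auc_score. The data layout mirrors the earlier sketches; the requirement of at least ten positives is taken from the text, and proteins without a prediction for a term are treated as a score of 0, as described later for the term-centric results.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def term_centric_auc(predictions, truth, min_positives=10):
    """Per-term AUCs and their average, as in the term-centric evaluation above."""
    proteins = sorted(truth)
    all_terms = set().union(*truth.values())
    aucs = {}
    for term in all_terms:
        y_true = np.array([term in truth[p] for p in proteins], dtype=int)
        if y_true.sum() < min_positives or y_true.sum() == len(proteins):
            continue                      # need >= 10 positives and at least one negative
        # missing predictions are treated as a score of 0
        y_score = np.array([predictions.get(p, {}).get(term, 0.0) for p in proteins])
        aucs[term] = roc_auc_score(y_true, y_score)
    return float(np.mean(list(aucs.values()))), aucs
```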
As we required all methods to keep two significant figures for prediction scores, the threshold τ in all metrics used in this study was varied from 0.01 to 1.00 with a step size of 0.01.
Data sets
Protein function annotations for the GO assessment were extracted, as a union, from three major protein databases that are available in the public domain: Swiss-Prot [11], UniProt-GOA [12] and the data from the GO consortium web site [3]. We used evidence codes EXP, IDA, IPI, IMP, IGI, IEP, TAS, and IC to build benchmark and ground-truth sets. Annotations for the HPO assessment were downloaded from the HPO database [8]. Figure 2 summarizes the benchmarks we used in this study. Figure 2a shows the benchmark sizes for each of the ontologies and compares these numbers to CAFA1. All species that have at least 15 proteins in any of the benchmark categories are listed in Fig. 2b.
Comparison between CAFA1 and CAFA2 methods
We compared the results from CAFA1 and CAFA2 using a benchmark set that we created from CAFA1 targets and CAFA2 targets. More precisely, we used the stored predictions of the target proteins from CAFA1 and compared them with the new predictions from CAFA2 on the overlapping set of CAFA2 benchmarks and CAFA1 targets (a sequence had to be a no-knowledge target in both experiments to be eligible for this evaluation). For this analysis only, we used an artificial GO version by taking the intersection of the two GO snapshots (versions from January 2011 and June 2013) so as to mitigate the influence of ontology changes. We, thus, collected 357 benchmark proteins for MFO comparisons and 699 for BPO comparisons. The two baseline methods were trained on respective Swiss-Prot annotations for both ontologies so that they serve as controls for database change. In particular, SwissProt2011 (for CAFA1) contained 29,330 and 31,282 proteins for MFO and BPO, while SwissProt2014 (for CAFA2) contained 26,907 and 41,959 proteins for the two ontologies.
To conduct a head-to-head analysis between any two methods, we generated B = 10,000 bootstrap samples and let methods compete on each such benchmark set. The performance improvement δ from CAFA1 to CAFA2 was calculated as

$$\delta = \frac{1}{B} \sum_{b=1}^{B} \left( F_{\max}^{(b)}(m_2) - F_{\max}^{(b)}(m_1) \right),$$

where m_1 and m_2 stand for methods from CAFA1 and CAFA2, respectively, and F_max^{(b)}(·) represents the F_max of a method evaluated on the b-th bootstrapped benchmark set.
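A sketch of this bootstrap comparison; the helper fmax_fn and the exact bookkeeping are assumptions, while the resampling logic and the rule that ties award a win to neither method come from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_improvement(fmax_fn, preds_cafa1, preds_cafa2, benchmark, B=10_000):
    """Head-to-head comparison of two methods on B bootstrap replicates.

    fmax_fn(preds, proteins) is assumed to return F_max restricted to `proteins`.
    Returns the mean improvement delta and the number of wins for each method.
    """
    proteins = list(benchmark)
    deltas, wins1, wins2 = [], 0, 0
    for _ in range(B):
        sample = rng.choice(proteins, size=len(proteins), replace=True)
        f1 = fmax_fn(preds_cafa1, sample)
        f2 = fmax_fn(preds_cafa2, sample)
        deltas.append(f2 - f1)
        wins1 += f1 > f2
        wins2 += f2 > f1          # ties award a win to neither method
    return float(np.mean(deltas)), wins1, wins2
```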
Baseline models
We built two baseline methods, Naïve and BLAST, and compared them with all participating methods. The Naïve method simply predicts each term with the frequency at which the term is annotated in a database [13]. BLAST was based on search results using the Basic Local Alignment Search Tool (BLAST) software against the training database [14]; a term was predicted with a score equal to the highest local alignment sequence identity among all BLAST hits annotated with that term. Both of these methods were trained on the experimentally annotated proteins available in Swiss-Prot at time t_0, except for HPO, where the two baseline models were trained using the annotations from the t_0 release of the HPO.
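Illustrative sketches of the two baselines as described above. Running BLAST itself and parsing the alignment identities is omitted, and all names and data layouts are hypothetical.

```python
from collections import Counter

def train_naive(training_annotations):
    """Naive baseline: score each term by its relative annotation frequency in training."""
    n_proteins = len(training_annotations)
    counts = Counter(t for terms in training_annotations.values() for t in terms)
    return {term: c / n_proteins for term, c in counts.items()}

def predict_naive(model, protein):
    # The Naive baseline assigns the same term scores to every target protein.
    return dict(model)

def predict_blast(hits, training_annotations):
    """BLAST baseline: score a term by the highest identity among hits annotated with it.

    hits: list of (training_protein, sequence_identity in [0, 1]) for one target,
    assumed to come from a BLAST search against the training database.
    """
    scores = {}
    for train_prot, identity in hits:
        for term in training_annotations.get(train_prot, set()):
            scores[term] = max(scores.get(term, 0.0), identity)
    return scores
```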
Top methods have improved since CAFA1
We conducted the second CAFA experiment 3 years after the first one. As our knowledge of protein function has increased since then, it was worthwhile to assess whether computational methods have also been improved and if so, to what extent. Therefore, to monitor the progress over time, we revisit some of the top methods in CAFA1 and compare them with their successors.
For each benchmark set we carried out a bootstrap-based comparison between a pair of top-ranked methods (one from CAFA1 and another from CAFA2), as described in "Methods". The average performance metric as well as the number of wins were recorded (in the case of identical performance, neither method was awarded a win). Figure 3 summarizes the results of this analysis. We use a color code from orange to blue to indicate the performance improvement δ from CAFA1 to CAFA2.
The selection of top methods for this study was based on their performance in each ontology on the entire benchmark sets. Panels B and C in Fig. 3 compare baseline methods trained on different data sets. We see no improvements of these baselines except for BLAST on BPO, where it is slightly better to use the newer version of Swiss-Prot as the reference database for the search. On the other hand, all top methods in CAFA2 outperformed their counterparts in CAFA1. For predicting molecular functions, even though transferring functions from BLAST hits does not give better results, the top models still managed to perform better. It is possible that the annotations newly acquired since CAFA1 enhanced BLAST, which involves direct function transfer, and perhaps led to better performance of those downstream methods that rely on sequence alignments. However, this effect does not completely explain the extent of the performance improvement achieved by those methods. This is promising evidence that top methods from the community have improved since CAFA1 and that improvements were not simply due to updates of curated databases.
Protein-centric evaluation
Protein-centric evaluation measures how accurately methods can assign functional terms to a protein. The protein-centric performance evaluation of the top-ten methods is shown in Figs. 4, 5, and 6. The 95 % confidence intervals were estimated using bootstrapping on the benchmark set with B = 10,000 iterations [15]. The results provide a broad insight into the state of the art.
Predictors performed very differently across the four ontologies. Various reasons contribute to this effect including: (1) the topological properties of the ontology such as the size, depth, and branching factor; (2) term predictability; for example, the BPO terms are considered to be more abstract in nature than the MFO and CCO terms; (3) the annotation status, such as the size of the training set at t 0 , the annotation depth of benchmark proteins, as well as various annotation biases [6].
In general, CAFA2 methods perform better at predicting MFO terms than any other ontology. Top methods achieved F_max scores around 0.6 and considerably surpassed the two baseline models. Maintaining the pattern from CAFA1, the performance accuracies in the BPO category were not as good as in the MFO category. The best-performing method scored slightly below 0.4. For the two newly added ontologies in CAFA2, we observed that the top predictors performed no better than the Naïve method under F_max, whereas they slightly outperformed the Naïve method under S_min in CCO. One reason for the competitive performance of the Naïve method in the CCO category is that a small number of relatively general terms are frequently used, and those relative frequencies do not diffuse quickly enough with the depth of the graph. For instance, the annotation frequency of "organelle" (GO:0043226, level 2), "intracellular part" (GO:0044424, level 3), and "cytoplasm" (GO:0005737, level 4) are all above the best threshold for the Naïve method (τ_optimal = 0.32). Correctly predicting these terms increases the number of true positives and thus boosts the performance of the Naïve method under the F_max evaluation. However, once the less informative terms are down-weighted (using the S_min measure), the Naïve method becomes significantly penalized and degraded. Another reason for the comparatively good performance of Naïve is that the benchmark proteins were annotated with more general terms than the (training) proteins previously deposited in the UniProt database. This effect was most prominent in the CCO (Additional file 1: Figure S2) and has thus artificially boosted the performance of the Naïve method. The weighted F_max and normalized S_min evaluations can be found in Additional file 1.

Fig. 4 Overall evaluation using the maximum F measure, F_max. Evaluation was carried out on no-knowledge benchmark sequences in the full mode. The coverage of each method is shown within its performance bar. A perfect predictor would be characterized with F_max = 1. Confidence intervals (95 %) were determined using bootstrapping with 10,000 iterations on the set of benchmark sequences. For cases in which a principal investigator participated in multiple teams, the results of only the best-scoring method are presented. Details for all methods are provided in Additional file 1.
Interestingly, generally shallower annotations of benchmark proteins do not seem to be the major reason for the observed performance in the HPO category. One possibility for the observed performance is that, unlike for GO terms, the HPO annotations are difficult to transfer from other species. Another possibility is the sparsity of experimental annotations. The current number of experimentally annotated proteins in HPO is 4794, i.e., 0.5 proteins per HPO term, which is at least an order of magnitude less than for other ontologies. Finally, the relatively high frequency of general terms may have also contributed to the good performance of Naïve. We originally hypothesized that a possible additional explanation for this effect might be that the average number of HPO terms associated with a human protein is considerably larger than in GO; i.e., the mean number of annotations per protein in HPO is 84, while for MFO, BPO, and CCO, the mean number of annotations per protein is 10, 39, and 14, respectively. However, we do not observe this effect in other ontologies when the benchmark proteins are split into those with a low or high number of terms. Overall, successfully predicting the HPO terms in the protein-centric mode is a difficult problem and further effort will be required to fully characterize the performance.

Fig. 5 (caption fragment). A perfect predictor would be characterized with F_max = 1, which corresponds to the point (1,1) in the precision-recall plane. For cases in which a principal investigator participated in multiple teams, the results of only the best-scoring method are presented.

Fig. 6 Overall evaluation using the minimum semantic distance, S_min. Evaluation was carried out on no-knowledge benchmark sequences in the full mode. The coverage of each method is shown within its performance bar. A perfect predictor would be characterized with S_min = 0. Confidence intervals (95 %) were determined using bootstrapping with 10,000 iterations on the set of benchmark sequences. For cases in which a principal investigator participated in multiple teams, the results of only the best-scoring method are presented. Details for all methods are provided in Additional file 1.
Term-centric evaluation
The protein-centric view, despite its power in showing the strengths of a predictor, does not gauge a predictor's performance for a specific function. In a term-centric evaluation, we assess the ability of each method to identify new proteins that have a particular function, participate in a process, are localized to a component, or affect a human phenotype. To assess this term-wise accuracy, we calculated AUCs in the prediction of individual terms. Averaging the AUC values over terms provides a metric for ranking predictors, whereas averaging performance over predictors for a given term provides insights into how well that term can be predicted computationally by the community. Figure 7 shows the performance evaluation where the AUCs for each method were averaged over all terms for which at least ten positive sequences were available. Proteins without predictions were counted as predictions with a score of 0. As shown in Figs. 4, 5, and 6, correctly predicting CCO and HPO terms for a protein might not be an easy task according to the protein-centric results. However, the overall poor performance could also result from the dominance of poorly predictable terms. Therefore, a term-centric view can help differentiate prediction quality across terms. As shown in Fig. 8, most of the terms in HPO obtain an AUC greater than that of the Naïve model, with some terms on average achieving reasonably good AUCs around 0.7. Depending on the training data available for participating methods, well-predicted phenotype terms range from mildly specific such as "Lymphadenopathy" and "Thrombophlebitis" to general ones such as "Abnormality of the Skin Physiology".

Fig. 7 Overall evaluation using the averaged AUC over terms with no less than ten positive annotations. The evaluation was carried out on no-knowledge benchmark sequences in the full mode. Error bars indicate the standard error in averaging AUC over terms for each method. For cases in which a principal investigator participated in multiple teams, the results of only the best-scoring method are presented. Details for all methods are provided in Additional file 1. AUC: area under the receiver operating characteristic curve.
Performance on various categories of benchmarks
Easy versus difficult benchmarks
As in CAFA1, the no-knowledge GO benchmarks were divided into easy versus difficult categories based on their maximal global sequence identity with proteins in the training set. Since the distribution of sequence identities roughly forms a bimodal shape (Additional file 1), a cutoff of 60 % was manually chosen to define the two categories. The same cutoff was used in CAFA1. Unsurprisingly, across all three ontologies, the performance of the BLAST model was substantially impacted for the difficult category because of the lack of high sequence identity homologs and, as a result, transferring annotations was relatively unreliable. However, we also observed that most top methods were insensitive to the types of benchmarks, which provides us with encouraging evidence that state-of-the-art protein function predictors can successfully combine multiple potentially unreliable hits, as well as multiple types of data, into a reliable prediction.

Fig. 8 The top-ten accurately predicted terms without overlapping ancestors (except for the root). AUC: area under the receiver operating characteristic curve.
Species-specific categories
The benchmark proteins were split into even smaller categories for each species as long as the resulting category contained at least 15 sequences. However, because of space limitations, in Fig. 9 we show the breakdown results on only eukarya and prokarya benchmarks; the species-specific results are provided in Additional file 1.
It is worth noting that the performance accuracies on the entire benchmark sets were dominated by the targets from eukarya due to their larger proportion in the benchmark set and annotation preferences. The eukarya benchmark rankings therefore coincide with the overall rankings, but the smaller categories typically showed different rankings and may be informative to more specialized research groups. For all three GO ontologies, no-knowledge prokarya benchmark sequences collected over the annotation growth phase mostly (over 80 %) came from two species: Escherichia coli and Pseudomonas aeruginosa (for CCO, 21 out of 22 proteins were from E. coli). Thus, one should keep in mind that the prokarya benchmarks essentially reflect the performance on proteins from these two species. Methods predicting the MFO terms for prokaryotes are slightly worse than those for eukaryotes. In addition, direct function transfer by homology for prokaryotes did not work well using this ontology. However, the performance was better using the other two ontologies, especially CCO. It is not very surprising that the top methods achieved good performance for E. coli as it is a well-studied model organism.
Diversity of predictions
Evaluation of the top methods revealed that performance was often statistically indistinguishable between the best methods. This could result from all top methods making the same predictions, or from different prediction sets resulting in the same summarized performance. To assess this, we analyzed the extent to which methods generated similar predictions within each ontology. Specifically, we calculated the pairwise Pearson correlation between methods on a common set of gene-concept pairs and then visualized these similarities as networks (for BPO, see Fig. 10; for MFO, CCO, and HPO, see Additional file 1).
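A small sketch of how such a similarity network could be assembled with NumPy and NetworkX. The 0.75 cutoff is taken from the Fig. 10 caption, while the data layout (a common vector of scores per method over shared protein-term pairs, with missing pairs scored 0) is an assumption.

```python
import numpy as np
import networkx as nx

def similarity_network(method_scores, cutoff=0.75):
    """Build a method-similarity network from per-method score vectors.

    method_scores: dict method_name -> np.array of scores over a common list of
    (protein, term) pairs; pairs missing from a method's output are scored 0.
    """
    G = nx.Graph()
    G.add_nodes_from(method_scores)
    names = list(method_scores)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r = np.corrcoef(method_scores[a], method_scores[b])[0, 1]  # Pearson correlation
            if r >= cutoff:
                G.add_edge(a, b, weight=r)
    return G

# Connected components then indicate groups of methods making similar predictions:
# sorted(nx.connected_components(G), key=len, reverse=True)
```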
In MFO, where we observed the highest overall performance of prediction methods, eight of the ten top methods were in the largest connected component. In addition, we observed a high connectivity between methods, suggesting that the participating methods are leveraging similar sources of data in similar ways. Predictions for BPO showed a contrasting pattern. In this ontology, the largest connected component contained only two of the top-ten methods. The other top methods were contained in components made up of other methods produced by the same lab. This suggests that the approaches that participating groups have taken generate more diverse predictions for this ontology and that there are many different paths to a top-performing biological process prediction method. Results for HPO were more similar to those for BPO, while results for cellular component were more similar in structure to molecular function.
Taken together, these results suggest that ensemble approaches that aim to include independent sources of high-quality predictions may benefit from leveraging the data and techniques used by different research groups. Such approaches, if they effectively weigh and integrate disparate methods, may demonstrate more substantial improvements over existing methods in the process and phenotype ontologies, where current prediction approaches share less similarity.

Fig. 9 Performance evaluation using the maximum F measure, F_max, on eukaryotic (left) versus prokaryotic (right) benchmark sequences. The evaluation was carried out on no-knowledge benchmark sequences in the full mode. The coverage of each method is shown within its performance bar. Confidence intervals (95 %) were determined using bootstrapping with 10,000 iterations on the set of benchmark sequences. For cases in which a principal investigator participated in multiple teams, the results of only the best-scoring method are presented. Details for all methods are provided in Additional file 1.

Fig. 10 Similarity network of participating methods for BPO. Similarities are computed as Pearson's correlation coefficient between methods, with a 0.75 cutoff for illustration purposes. A unique color is assigned to all methods submitted under the same principal investigator. Not evaluated (organizers') methods are shown as triangles, while benchmark methods (Naïve and BLAST) are shown as squares. The top-ten methods are highlighted with enlarged nodes and circled in red. The edge width indicates the strength of similarity. Nodes are labeled with the name of the methods followed by "-team(model)" if multiple teams/models were submitted.

At the time that authors submitted predictions, we also asked them to select from a list of 30 keywords that best describe their methodology. We examined these author-assigned keywords for methods that ranked in the top ten to determine what approaches were used in currently high-performing methods (Additional file 1). Sequence alignment and machine-learning methods were in the top-three terms for all ontologies. For biological process, the other member of the top three is protein-protein interactions, while for cellular component and molecular function the third member is sequence properties. The broad sets of keywords among top-performing methods further suggest that these methods are diverse in their inputs and approach.
Case study: ADAM-TS12
To illustrate some of the challenges and accomplishments of CAFA, we provide an in-depth examination of the prediction of the functional terms of one protein, human ADAM-TS12 [16]. ADAMs (a disintegrin and metalloproteinase) are a family of secreted metallopeptidases featuring a pro-domain, a metalloproteinase, a disintegrin, a cysteine-rich epidermal growth-factor-like domain, and a transmembrane domain [17]. The ADAM-TS subfamily includes eight thrombospondin type-1 (TS-1) motifs; ADAM-TS12 is believed to play a role in fetal pulmonary development and may have a role as a tumor suppressor, specifically in the negative regulation of the hepatocyte growth factor receptor signaling pathway [18].
We did not observe any experimental annotation by the time submission was closed. Annotations were later deposited to all three GO ontologies during the growth phase of CAFA2. Therefore, ADAM-TS12 was considered a no-knowledge benchmark protein for our assessment in all GO ontologies. The total number of leaf terms to predict for biological process was 12; these nodes induced a directed acyclic annotation graph consisting of 89 nodes. In Fig. 11 we show the performance of the top-five methods in predicting the BPO terms that are experimentally verified to be associated with ADAM-TS12.
As can be seen, most methods correctly discovered non-leaf nodes with a moderate amount of information content. "Glycoprotein Catabolic Process", "Cellular Response to Stimulus", and "Proteolysis" were the best discovered GO terms by the top-five performers. The Paccanaro Lab (P) discovered several additional correct leaf terms. It is interesting to note that only BLAST successfully predicted "Negative regulation of signal transduction" whereas the other methods did not. The reason for this is that we reported a term as discovered only when its confidence score equalled or exceeded the method's F_max threshold. In this particular case, the Paccanaro Lab method did predict the term, but the confidence score was 0.01 below their F_max threshold.
This example illustrates both the success and the difficulty of correctly predicting highly specific terms in BPO, especially with a protein that is involved in four distinct cellular processes: in this case, regulation of cellular growth, proteolysis, cellular response to various cytokines, and cell-matrix adhesion. Additionally, this example shows that the choices that need to be made when assessing method performance may cause some loss of information with respect to the method's actual performance. That is, the way we capture a method's performance in CAFA may not be exactly the same as a user may employ. In this case, a user may choose to include lower confidence scores when running the Paccanaro Lab method, and include the term "Negative regulation of signal transduction" in the list of accepted predictions.
Conclusions
Accurately annotating the function of biological macromolecules is difficult, and requires the concerted effort of experimental scientists, biocurators, and computational biologists. Though challenging, advances are valuable: accurate predictions allow biologists to rapidly generate testable hypotheses about how proteins fit into processes and pathways. We conducted the second CAFA challenge to assess the status of the computational function prediction of proteins and to quantify the progress in the field.
The field has moved forward
Three years ago, in CAFA1, we concluded that the top methods for function prediction outperform straightforward function transfer by homology. In CAFA2, we observe that the methods for function prediction have improved compared to those from CAFA1. As part of the CAFA1 experiment, we stored all predictions from all methods on 48,298 target proteins from 18 species. We compared those stored predictions to the newly deposited predictions from CAFA2 on the overlapping set of benchmark proteins and CAFA1 targets. The head-to-head comparisons among the top-five CAFA1 methods against the top-five CAFA2 methods reveal that the top CAFA2 methods outperformed all top CAFA1 methods.
Our parallel evaluation using an unchanged BLAST algorithm with data from 2011 and data from 2014 showed little difference, strongly suggesting that the improvements observed are due to methodological advances. The lessons from CAFA1 and annual AFP-SIG during the ISMB conference, where new developments are rapidly disseminated, may have contributed to this outcome [19].
Evaluation metrics
A universal performance assessment in protein function prediction is far from straightforward. Although various evaluation metrics have been proposed under the framework of multi-label and structured-output learning, the evaluation in this subfield also needs to be interpretable to a broad community of researchers as well as the public. To address this, we used several metrics in this study as each provides useful insights and complements the others. Understanding the strengths and weaknesses of current metrics and developing better metrics remain important.
One important observation with respect to metrics is that the protein-centric and term-centric views may give different perspectives to the same problem. For example, while in MFO and BPO we generally observe a positive correlation between the two, in CCO and HPO these different metrics may lead to entirely different interpretations of an experiment. Regardless of the underlying cause, as discussed in "Results and discussion", it is clear that some ontological terms are predictable with high accuracy and can be reliably used in practice even in these ontologies. In the meantime, more effort will be needed to understand the problems associated with the statistical and computational aspects of method development.

Fig. 11 Case study on the human ADAM-TS12 gene. Biological process terms associated with the ADAM-TS12 gene in the union of the three databases by September 2014. The entire functional annotation of ADAM-TS12 consists of 89 terms, 28 of which are shown. Twelve terms, marked in green, are leaf terms. This directed acyclic graph was treated as ground truth in the CAFA2 assessment. Solid black lines provide direct "is a" or "part of" relationships between terms, while gray lines mark indirect relationships (that is, some terms were not drawn in this picture). Predicted terms of the top-five methods and two baseline methods were picked at their optimal F_max threshold. Over-predicted terms are not shown.
Well-performing methods
We observe that participating methods usually specialize in one or a few categories of protein function prediction, and have been developed with their own application objectives in mind. Therefore, the performance rankings of methods often change from one benchmark set to another. There are complex factors that influence the final ranking, including the selection of the ontology, types of benchmark sets and evaluation, as well as evaluation metrics, as discussed earlier. Most of our assessment results show that the performances of top-performing methods are generally comparable to each other. It is worth noting that performance is usually better in predicting molecular function than other ontologies.
Beyond simply showing diversity in inputs, our evaluation of prediction similarity revealed that many top-performing methods are reaching this status by generating distinct predictions, suggesting that there is additional room for continued performance improvement. Although a small group of methods could be considered as generally high performing, there is no single method that dominates over all benchmarks. Taken together, these results highlight the potential for ensemble learning approaches in this domain.
We also observed that when provided with a chance to select a reliable set of predictions, the methods generally perform better (partial evaluation mode versus full evaluation mode). This outcome is encouraging; it suggests that method developers can predict where their methods are particularly accurate and target them to that space.
Our keyword analysis showed that machine-learning methods are widely used by successful approaches. Protein interactions were overrepresented in the best-performing methods for biological process prediction. This suggests that predicting membership in pathways and processes requires information on interacting partners in addition to a protein's sequence features.
Final notes
Automated functional annotation remains an exciting and challenging task, central to understanding genomic data, which in turn are central to biomedical research. Three years after CAFA1, the top methods from the community have shown encouraging progress. However, in terms of raw scores, there is still significant room for improvement in all ontologies, particularly in BPO, CCO, and HPO. There is also a need to develop an experiment-driven, as opposed to curation-driven, component of the evaluation to address limitations of term-centric evaluation. In future CAFA experiments, we will continue to monitor performance over time and invite a broad range of computational biologists, computer scientists, statisticians, and others to address these engaging problems of concept annotation for biological macromolecules through CAFA.
CAFA2 significantly expanded the number of protein targets, the number of biomedical ontologies used for annotation, the number of analysis scenarios, as well as the metrics used for evaluation. The results of the CAFA2 experiment detail the state of the art in protein function prediction, can guide the development of new concept annotation methods, and help molecular biologists assess the relative reliability of predictions. Understanding the function of biological macromolecules brings us closer to understanding life at the molecular level and improving human health.
"Biology",
"Computer Science"
] |
Unoriented 3d TFTs
This paper generalizes two facts about oriented 3d TFTs to the unoriented case. On one hand, it is known that oriented 3d TFTs having a topological boundary condition admit a state-sum construction known as the Turaev-Viro construction. This is related to the string-net construction of fermionic phases of matter. We show how the Turaev-Viro construction can be generalized to unoriented 3d TFTs. On the other hand, it is known that the "fermionic" versions of oriented TFTs, known as Spin-TFTs, can be constructed in terms of "shadow" TFTs which are ordinary oriented TFTs with an anomalous $\mathbb{Z}_2$ 1-form symmetry. We generalize this correspondence to Pin$^+$-TFTs by showing that they can be constructed in terms of ordinary unoriented TFTs with anomalous $\mathbb{Z}_2$ 1-form symmetry having a mixed anomaly with time-reversal symmetry. The corresponding Pin$^+$-TFT, however, does not have any anomaly for time-reversal symmetry and hence it can be unambiguously defined on a non-orientable manifold. In case a Pin$^+$-TFT admits a topological boundary condition, one can combine the above two statements to obtain a Turaev-Viro-like construction of Pin$^+$-TFTs. As an application of these ideas, we construct a large class of Pin$^+$-SPT phases.
It is an age-old problem to provide a complete definition of quantum field theories. A part of the problem is to understand on what kinds of manifolds we can put a quantum field theory. For instance, we can ask the following question: given a theory that can be defined on orientable manifolds, what sort of extra data do we need in order to extend the definition of the theory to non-orientable manifolds? First of all, such an extension may not be possible. For instance, if the theory has a framing anomaly, then it will not be well-defined on non-orientable manifolds. This was recently explained in a footnote of [1]. Second, if such an extension is possible, then it need not be unique.
That is, there can be different unoriented theories which reduce to the same oriented theory on orientable manifolds. We will see plenty of examples like this in this paper. On a non-orientable manifold, we can choose a consistent orientation everywhere if we remove a locus homologous to the Poincaré dual of the first Stiefel-Whitney class w_1. The induced local orientation flips as we cross this locus. In order to be able to define an unoriented theory in terms of the data of the oriented theory, we need the existence of orientation reversing codimension one defects which we place along this locus. These orientation reversing defects implement orientation reversing symmetries akin to the orientation preserving codimension one defects which implement a global symmetry transformation [2]. These defects can be placed on top of each other, forming the structure of a group G with a homomorphism ρ : G → Z_2 whose kernel G_0 is the global symmetry group of the theory. The set G_1 = G − G_0 parametrizes the orientation reversing symmetries.
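The grading just described can be packaged as a short exact sequence. This restatement is not spelled out in the text above but follows directly from the homomorphism ρ, assuming at least one orientation reversing defect exists (so that ρ is surjective):

$$1 \longrightarrow G_0 \longrightarrow G \xrightarrow{\ \rho\ } \mathbb{Z}_2 \longrightarrow 1, \qquad G_0 = \ker \rho, \qquad G_1 = G \setminus G_0 = \rho^{-1}(-1).$$

Elements of G_0 label ordinary global symmetry defects, while elements of G_1 label the orientation reversing ones.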
In this paper, we explore the consequences of the existence of such orientation reversing defects in the context of 3d TFTs which admit a topological boundary condition. We restrict ourselves to the case in which the structure group of the TFT can be decomposed as O(3) × G for a finite global symmetry group G. In such cases, the properties of orientation reversing defects allow us to propose a generalization of Turaev-Viro state-sum construction of 3d TFTs [3,4] to the unoriented case. 1 We check that this proposal indeed defines a 3d unoriented TFT. From now on, whenever we say "unoriented TFT", we mean this particular structure group.
For an oriented 3d TFT T with global symmetry G, the input data for the construction is a G-graded spherical fusion category C. We will find that an unoriented 3d TFT extending T is constructed in terms of a G-graded "twisted" spherical fusion category in which C is embedded as a subcategory. In terms of the data of this larger category, we give a prescription to construct the partition function of the unoriented theory on any (possibly non-orientable) 3-manifold.
We also apply these ideas to 3d Pin^+-TFTs (i.e. TFTs with structure group Pin^+(3) × G) which are a generalization of 3d Spin-TFTs. Spin-TFTs are "fermionic" analogs of ordinary oriented TFTs as they are sensitive to the spin structure of the underlying orientable manifold. To define fermions on an unorientable manifold, we need to choose either a Pin^+-structure or a Pin^−-structure on the manifold. Correspondingly, the natural unoriented generalizations of Spin-TFTs are Pin^+-TFTs and Pin^−-TFTs.

1 Historically, the original construction due to Turaev and Viro was based on a modular tensor category treated as a spherical fusion category. This construction produced theories which could be defined on an unoriented manifold. This construction was generalized to arbitrary spherical fusion categories but such theories could only be defined on oriented manifolds. It is this latter construction that we call "oriented Turaev-Viro construction" in this paper. This paper presents a further generalization of this setup which we call "unoriented Turaev-Viro construction". Our construction can be used to construct any unoriented 3d TFT with a topological boundary condition.
In [5], a recipe was given to construct a 3d Spin-TFT from an ordinary 3d TFT with an anomalous Z_2 1-form symmetry. This ordinary TFT T_f was called the shadow of the corresponding Spin-TFT T_s. The idea was to use a kernel TFT T_k to connect the shadow theories with their spin counterparts. T_k is a Spin-TFT with an anomalous Z_2 1-form symmetry. The diagonal Z_2 1-form symmetry in the product theory T_f × T_k is non-anomalous. This non-anomalous 1-form symmetry is then gauged to obtain the spin TFT T_s.
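Schematically, and only as an illustration rather than a formula taken from this paper, gauging the diagonal 1-form symmetry amounts to summing over flat Z_2 2-form backgrounds B coupled to both factors; the notation (s for the spin structure, B for the background) and the omitted normalization are assumptions here:

$$Z_{T_s}(M, s) \;\propto\; \sum_{B \,\in\, H^2(M,\mathbb{Z}_2)} Z_{T_f}(M; B)\, Z_{T_k}(M, s; B).$$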
We extend their recipe by constructing shadows for Pin^+-TFTs. The Pin^+-shadows correspond to theories with anomalous Z_2 1-form symmetry and a certain time-reversal anomaly in the presence of a background 2-connection for the Z_2 1-form symmetry. The Pin^+-kernel TFT T_k^+ has a corresponding time-reversal anomaly which cancels the anomaly of the shadow. Hence, the resulting Pin^+-TFTs are time-reversal invariant and can be put on non-orientable manifolds without any ambiguity [6].
As an application, we construct a large class of Pin^+-SPT phases with global symmetry G. SPT phases are TFTs which are invertible under the product operation on TFTs. In the condensed matter literature, these are referred to as fermionic SPT phases protected by G × Z_2^T with T^2 = (−1)^F. In the case when G is trivial, cobordism hypothesis predicts two Pin^+-SPT phases forming a Z_2 group structure [7]. Our construction reproduces both of these SPT phases along with the Z_2 structure. This paper is organized as follows. In section 2, we propose a Turaev-Viro construction for unoriented 3d TFTs. In section 3, we provide a construction of Pin^+-TFTs in terms of ordinary unoriented TFTs with a Z_2 1-form symmetry which is anomalous and has a mixed anomaly with time-reversal symmetry. In section 4, we construct a large class of Pin^+-SPT phases with global symmetry G and reproduce the Z_2 group of Pin^+-SPT phases in the case of trivial G. In section 5, we present our conclusions and comment on future directions which include a strategy to classify all Pin^+-SPT phases with global symmetry G.
Turaev-Viro construction
For an exhaustive review and physical understanding of the Turaev-Viro state-sum construction of oriented 3d TFTs, the reader is referred to [5]. In this section, we first review relevant aspects of this construction. Then, we propose a generalization of the construction to the unoriented case. We also provide a physical understanding of our proposal in terms of orientation reversing defects. We close the section by discussing invertible unoriented TFTs with global symmetry G, or in other words bosonic SPT phases protected by G × Z_2^T. A reader only interested in the Turaev-Viro construction of unoriented 3d TFTs is referred to subsections 2.3 and 2.4.
Boundary line defects and spherical fusion category
In general, a boundary condition B allows one to define a TFT T on a manifold M with boundary B by placing B on the boundary. The boundary condition is called topological if topological deformations of M (including topological deformations of B) leave the partition function of T on M invariant.
Turaev-Viro procedure constructs an oriented 3d (unitary) TFT T from the knowledge of a topological boundary condition B of T [8]. T can be recovered from any one of its topological boundary conditions. For simplicity, we will assume that T has a one-dimensional Hilbert space on S 2 . The Turaev-Viro construction for such a TFT T is phrased in terms of a (unitary) spherical fusion category C.
The objects of C are line defects living on B. Such line defects are specified by a label L and an orientation along the line corresponding to L. If a line defect with a certain orientation is denoted as an object L in C, the same line defect with opposite choice of orientation is denoted as the dual object L * . See Figure 1.
The morphisms m AB from A to B in C are local operators living between two boundary lines. Thus, the m AB form a vector space. This vector space can also be identified with the Hilbert space of states on the disk with boundary punctures corresponding to A * and B. Similarly, the space of local operators living at the junction of multiple outgoing lines A i is the space of states on the disk with boundary punctures corresponding to A i . Such a state can be generated by placing a hemispherical cap on which the lines A i emanate from a point on the boundary of the cap and go to their respective punctures. See Figure 2.
Figure 2. A morphism m between outgoing lines A 1 , A 2 and A 3 corresponds to a state m in the Hilbert space on a disk with boundary punctures A 1 , A 2 and A 3 . Consider a hemisphere geometry with a boundary on the spherical part and the disk shown in (b) being the cross-section. The state shown in (b) is produced on the cross-section if the boundary has the graph shown in (a) inserted on it such that the A i end on their respective punctures.
The composition of morphisms corresponds to fusion of local operators along the line. There is a tensor product corresponding to fusion of lines as they are brought together. There are also canonical associator, evaluation and coevaluation maps which physically correspond to placing the lines in a certain fashion and fusing them. See Figures 3 and 4. Using these canonical morphisms, we can assign a morphism m from ⊗ i A i to ⊗ j B j to any planar graph Γ of boundary line defects (with local operators at their junctions) such that Γ has incoming lines A i and outgoing lines B j . The canonical morphisms satisfy certain identities which guarantee that a topologically equivalent graph Γ ′ evaluates to the same morphism m.
Consider vacua i of B which can be characterized by the expectation value of a line L i . Such lines are called simple lines. The morphism space from L i to L j is empty for i ≠ j and is one-dimensional for i = j. The space of local operators living on L i can be identified with C because there is a canonical identity operator living on L i . Every line L can be written as a sum of simple lines L = ⊕ n i L i , where n i denotes the multiplicity of the simple line L i in the sum. The identity line 1 can be treated as a special simple line which can be inserted anywhere without changing any answers. The duals of simple lines are simple as well.
The Turaev-Viro construction uses C as an input and produces the partition function of T on any oriented manifold M as the output. We will describe the construction in a very hands-on fashion in the next subsection. We will see that the basic object in the construction is a graph Γ in C drawn on the sphere. Γ can be projected down to a closed graph Γ p drawn on the plane. Γ p represents a morphism from the identity line to itself, which evaluates to a definite number. This number is the partition function Z Γ of T on a 3-ball along with a network of boundary lines (and local operators at their junctions) Γ inserted at the boundary 2-sphere. The word "spherical" in spherical fusion category corresponds to certain axioms which guarantee that different projections to the plane evaluate to the same number.
This construction can be easily generalized to TFTs with a global symmetry group G. The symmetry manifests itself in the existence of codimension one topological defects U g labeled by g ∈ G. Going across the locus of U g implements a symmetry transformation on the system by g. These defects fuse according to the group law and can end on B giving rise to new lines at the junction. Thus the category of boundary lines living on B becomes graded by G, i.e. C = ⊕ g C g .
Oriented Turaev-Viro
Let's look at the decomposition of the tensor product of two simple lines L i ⊗ L j = ⊕ n k ij L k . This means that there is an n k ij -dimensional space of morphisms from L k to L i ⊗ L j . We pick a basis of this space labeled by α. Similarly, we pick a dual basis for the space of morphisms from L i ⊗ L j to L k , which we also label by α. See Figure 5. The completeness of the basis can be written graphically as in Figure 6(b). We can transform to a basis labeled by α ′ ; we denote the unitary matrix corresponding to the transformation as (U ij k ) α ′ α . See Figure 7.
Figure 7. A change of basis via a unitary matrix.
The associator induces an isomorphism between the morphism space from L l to (L i ⊗ L j ) ⊗ L k and the morphism space from L l to L i ⊗ (L j ⊗ L k ). In terms of our chosen basis, this isomorphism can be captured in terms of F -symbols (F ijk l ) (p,α,β)(q,γ,δ) which are defined in Figure 8. Under a change of basis, F -symbols transform according to the gauge transformation (2.1).
We are now ready to describe the Turaev-Viro prescription for the partition function of T on an oriented manifold M. Pick a branched triangulation T of M; a branched triangulation requires an ordering > of the vertices of the triangulation. To an edge e between vertices a and b, a branched triangulation assigns a direction a → b if a > b. The G-connection α 1 on M assigns an element g e of the group G to each directed edge e. We now label each directed edge e by a simple element living in C ge . Pick a face f of T . Rotating and flipping it, f looks as shown in Figure 9(a). Then, we label f by some α corresponding to a morphism as shown in Figure 9(b). Thus we have a labeling of edges and faces of the branched triangulation. Call one such labeling l̃. Pick a tetrahedron t in l̃. To each t we assign a planar graph Γ t in C, and we call such a graph a tetrahedron graph. Notice that t can have two chiralities - positive and negative - as shown in Figure 10. Γ t for a positive chirality t and a negative chirality t are shown in Figure 11. The first one evaluates to (F ijk l ) (p,α,β)(q,γ,δ) and the second one evaluates to (F ijk l ) * (p,α,β)(q,γ,δ) . Let's call this number n t (l̃) and define N(l̃) = ∏ t n t (l̃).
The pentagon identity for the associator guarantees that the two sequences of associator moves relating a fourfold tensor product are equal. In terms of F -symbols, this means that the F -symbols satisfy the pentagon equation (2.5). One can check that (2.5) is invariant under an arbitrary gauge transformation (2.1).
Twisted spherical fusion category and orientation reversing defects
Consider an oriented theory defined by C. We propose that an unoriented parent theory can be constructed in terms of a larger "twisted" spherical fusion category C̃. This larger category is assembled from four pieces C̃ = C̃ 0,0 ⊕ C̃ 0,1 ⊕ C̃ 1,0 ⊕ C̃ 1,1 such that each of the subcategories C̃ ǫ,ǫ ′ is G-graded. C̃ 0,1 is a bimodule on which C̃ 0,0 acts from the left and C̃ 1,1 acts from the right. Similarly, C̃ 1,0 is a (non-empty) bimodule with C̃ 1,1 acting from the left and C̃ 0,0 acting from the right. An object in C̃ 0,1 fuses with an object in C̃ 1,0 to give an object in C̃ 0,0 . Similarly, an object in C̃ 1,0 fuses with an object in C̃ 0,1 to give an object in C̃ 1,1 . C̃ can also be thought of as a 2-category made out of two objects '0' and '1'.
C̃ 0,0 is the same as the spherical fusion category C. C̃ 1,1 has the same objects as C. Similarly, C̃ 1,0 is a copy of C̃ 0,1 at the level of objects. Graphically, we describe simple objects of C̃ ǫ,ǫ ′ as lines with the left plaquette labeled by ǫ and the right plaquette labeled by ǫ ′ . In general, we draw graphs Γ in C̃ in which we label each plaquette by some ǫ. See Figure 12.
In our notation, the labels i, j, k etc. tell us that we have a line ofC ǫ,ǫ ′ with a specific value of ǫ+ ǫ ′ but do not determine the individual ǫ, ǫ ′ . The data of individual ǫ is captured in the labeling of plaquettes by ǫ, ǫ ′ , ǫ ′′ etc. Thus the labeling of plaquettes is slightly redundant. We need only specify the label of a single plaquette and the labels for the other plaquettes can be determined from the labels i, j, k etc. In what follows, we will often just specify the label of the left-most plaquette.
C̃ comes equipped with the data of an anti-linear isomorphism I between various morphism spaces. This map is easy to describe in terms of simple objects. It takes the morphism space (V ij k ) ǫ from L k to L i ⊗ L j with some labeling of plaquettes (such that the left-most plaquette is ǫ) to the morphism space (V ij k ) ǫ+1 from L k to L i ⊗ L j but with the labeling of plaquettes flipped. Thus, we just have to pick a basis α of the morphism spaces (V ij k ) 0 ; the basis of (V ij k ) 1 is then determined by the action of I, and we label the resulting basis by the same labels α. See Figure 13. A change of basis therefore acts on the two sets of bases in a correlated (anti-linear) way.
Figure 13. The action of the anti-linear isomorphism I.
The associators are compatible with I. Let's denote the F -symbol associated to a graph having the left-most plaquette 0 as (F ijk l ) (p,α,β)(q,γ,δ) , as shown in Figure 14. Then, compatibility of the associator and I implies that the F -symbol associated to the same graph but with the labels of all plaquettes flipped is (F ijk l ) * (p,α,β)(q,γ,δ) . Thus, the pentagon equation for the associator becomes the twisted pentagon equation (2.6), where s(i) = * if i labels a simple object of C̃ 0,1 or C̃ 1,0 and s(i) = 1 otherwise. As this equation in terms of F -symbols looks different from (2.5), we refer to it as the twisted pentagon equation even though it still descends from the pentagon equation on the associators. This equation also appeared in [9].
Notice that the gauge transformations on F -symbols now take a twisted form, given in (2.7), and one can check that (2.6) is invariant under this gauge transformation.
Figure 14. Definition of F -symbols for C̃.
The identity line of C = C̃ 0,0 can be inserted anywhere in plaquettes labeled by 0 without changing any answers. Similarly, the identity line of C̃ 1,1 can be inserted anywhere in plaquettes labeled by 1 without changing any answers. For each object in C̃ 0,1 , we define a dual object in C̃ 1,0 and vice versa. These duals are slightly different from the usual duals in a spherical fusion category C. That is, given a line L in C̃ 0,1 , the evaluation maps take L ⊗ L * to the identity in C̃ 0,0 and take L * ⊗ L to the identity in C̃ 1,1 . Similar statements hold true if we flip 0 and 1 or replace evaluation with co-evaluation.
Given a graph Γ inC drawn on the sphere, different projections to planar graphs Γ p must be equivalent. In other words, we demand that there are conditions onC similar to that of a spherical structure on a spherical fusion category C.
We now turn our attention to the physical interpretation of the structure we have just described. An unoriented 3d TFT T̃ has an orientation reversing defect U R implementing a reflection transformation. This defect can fuse with other orientation preserving defects U g to form more orientation reversing defects U Rg . The fusion of these defects forms a group G̃ = G × Z 2 , and there is a canonical homomorphism ρ 1 from G̃ to Z 2 whose kernel is G.
U R can be fused with the boundary B to give a new boundary B ′ . Under such a fusion, the orientation of the boundary flips as well. There is a spherical fusion category C ′ associated to B ′ which is identified as C̃ 1,1 . If there is a line L on B, then fusion of B with U R flips its orientation and we obtain the line L * on B ′ . Consider a morphism from L k to L i ⊗ L j on B. Slapping U R on top of it, we send each line to its dual and B to B ′ . However, since this process flips the orientation of the boundary, we have to take a mirror of the resulting configuration of lines to read it in terms of C ′ . See Figure 15. Thus, fusion with U R provides an isomorphism from the morphism space V ij k on B to the corresponding morphism space in C ′ . This is the origin of the anti-linear isomorphism I in C̃ described above.
Figure 15. The action of U R sends a graph (a) on B to a graph (b) on B ′ but in the "wrong" orientation. This means that the tensor product in graph (b) is taken from right to left. In order to get back to our convention of tensor product from left to right, we take a mirror of graph (b) and obtain graph (c). The resulting graph (c) is read in terms of C ′ .
Figure 16. The symmetry of the theory under a reflection guarantees that the above two graphs evaluate to the same number.
U Rg can end on the boundary giving an interface between B and B ′ and an interface between B ′ and B. The lines living on these interfaces give rise toC 0,1 and C 1,0 respectively. Together they form a "twisted" spherical fusion categoryC described above.
The labels 0 and 1 of plaquettes in a graph in C̃ correspond respectively to the boundaries B and B ′ in the physical setting. A graph Γ in C̃ drawn on a sphere computes the partition function of T̃ on a 3-ball with a network of boundary lines specified by Γ. The bulk of the 3-ball contains orientation reversing defects which end on the boundary at the location of lines living in C̃ 0,1 or C̃ 1,0 .
Given such a 3-ball with Γ on the boundary, we can bubble a U R in the bulk of the 3-ball and bring it to the boundary. This sends Γ to Γ ′ (after taking the mirror) and both of these graphs must evaluate to the same number. This is the origin of the compatibility between associators and I. See Figure 16.
Unoriented Turaev-Viro
In this subsection, we generalize the Turaev-Viro prescription to compute the partition function of an unoriented theoryT on an unoriented 3-manifold M. We will assume that the reader has read subsection 2.2 before reading this subsection and so we will sometimes cut corners in what follows.
Fix an orientation O on R 3 . An unoriented 3-manifold M can be constructed as follows. We pick open sets of R 3 and glue them along codimension one loci using piecewise-linear maps. This gives us a locus L in M which is defined by the property that the transition functions are orientation reversing. The Poincare dual of this locus is a representative of first Stiefel-Whitney class and we call it w 1 . We assign a local orientation O t to any small tetrahedron t in M − L by first using the local chart to map it to a tetrahedron in R 3 where we have already picked an orientation O. O t remains invariant under deformations of t inside M − L.
Pick a branched triangulation T of M. w 1 assigns a number p e valued in {0, 1} to every edge e. And the G-connection α 1 on M assigns an element g e of the group G to each directed edge e.
Let's extract a set of labels S 0,0 such that each label in the set corresponds to a simple object of C̃ 0,0 . Similarly, extract a set of labels S 0,1 such that each label in the set corresponds to a simple object of C̃ 0,1 . Define an involution * on S 0,0 induced by taking the dual of simple objects. Similarly, define an involution * on S 0,1 under which i is sent to j if the * operation of C̃ sends the object L i in C̃ 0,1 to the object L j in C̃ 1,0 . There is also a G-grading on both of these sets descending from the G-grading of simple objects.
Figure 17. The possible graphs that we attach to a tetrahedron.
We now label each directed edge e by a label in (S 0,pe ) ge . Pick a face f of T . We can label f by some label α just as in the oriented case. Thus we obtain a labeling of edges and faces of a branched triangulation. Call one such labeling l̃. Now pick a tetrahedron t in M − L in the labeling l̃. To each such t we will assign a planar graph Γ t in C̃. If the chirality of t matches the local orientation O t , we assign the graph shown in Figure 17(a) with ǫ = ǫ ′ = ǫ ′′ = ǫ ′′′ = 0, and if the chirality doesn't match the local orientation we assign the graph shown in Figure 17(b) with ǫ = ǫ ′ = ǫ ′′ = ǫ ′′′ = 0. To define Γ t for a t intersecting L, we choose a small neighborhood U t of t such that L looks like a wall cutting U t into two parts. On one side of the wall, we assign 0 to every vertex and on the other side we assign 1. We assign a global orientation O Ut to U t given by the local orientation O t ′ of any tetrahedron t ′ lying completely on the side of the wall where vertices are labeled by 0. We now assign the graph shown in Figure 17(a) with ǫ = 0 and arbitrary ǫ ′ , ǫ ′′ , ǫ ′′′ if the chirality of t matches O Ut and the graph shown in Figure 17(b) with ǫ = 0 and arbitrary ǫ ′ , ǫ ′′ , ǫ ′′′ if it does not.
Notice that if we flip the choice of 0 and 1 that we assigned to the sides of the wall and apply the above prescription, then Γ t is flipped to the "reflected" graph Γ ′ t , which is the graph obtained by acting with U R on Γ t . See Figure 16. Γ t and Γ ′ t evaluate to the same number.
Also notice that if we take a tetrahedron t ′′ in U t whose vertices are all labeled by 1 and assign to it a new graph Γ ′ t ′′ by matching its chirality with O Ut instead, then Γ ′ t ′′ will be the "reflected" version of the old graph Γ t ′′ that we assigned at the start of the last paragraph by matching its chirality with the local orientation, and hence Γ ′ t ′′ will evaluate to the same number as Γ t ′′ . Thus, we see that we could have also given the prescription to compute the partition function in various patches U i by using the local orientations and assigning {0, 1} to the two sides produced by an intersection of U i with the w 1 wall. The tetrahedra lying in the intersection of U i and U j would give the same contribution in each patch. Thus, we would just have to make sure that we "glue" the tetrahedra in various intersections properly.
Returning to our original prescription, we just repeat what we already said for the oriented case. Let's call the evaluation of Γ t as n t (l̃) and define N(l̃) = ∏ t n t (l̃).
To each edge e of T , we can associate a number d e (l̃), which is the quantum dimension of the simple line assigned to e in l̃. Define d(l̃) = ∏ e d e (l̃). The partition function Z(M) is then given by summing d(l̃) N(l̃) over all labelings l̃, with one inverse factor of the total quantum dimension D of C̃ for each of the v vertices in T (where d i is the quantum dimension of the simple line L i ). We would like to emphasize that we are picking labels i only in "half" of C̃, i.e. C̃ 0,0 and C̃ 0,1 . Hence, the total quantum dimension only involves squares of the quantum dimensions of half of the simple lines.
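Written out, and assuming the standard oriented Turaev-Viro normalization carries over verbatim (this display is a reconstruction; the original equation is not reproduced here), the prescription reads

Z(M) \;=\; D^{-v} \sum_{\tilde l} d(\tilde l)\, N(\tilde l), \qquad d(\tilde l) = \prod_{e} d_e(\tilde l), \quad N(\tilde l) = \prod_{t} n_t(\tilde l), \quad D \;=\; \sum_{i \in S_{0,0} \cup S_{0,1}} d_i^{2} .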
The invariance of Z(M) under Pachner moves and under a change of representative of w 1 is guaranteed by the twisted pentagon equation (2.6) satisfied by the F -symbols in C̃. In the rest of the paper, by "twisted spherical fusion category" we will mean the data of C̃ 0,0 ⊕ C̃ 0,1 , and we will often repackage this data as a single G̃-graded category C ′ = ⊕ g̃ C ′ g̃ .
Example: Bosonic SPT phases
Bosonic SPT phases protected by G = G 0 × Z T 2 are invertible unoriented TFTs with global symmetry G 0 . Such a phase is constructed by a twisted spherical fusion category C having a single simple object L g in each subcategory C g . The fusion rules are L g ⊗ L g ′ ≃ L gg ′ . The F -matrices define a U(1)-valued function of three group elements, α 3 (g, g ′ , g ′′ ) = F g,g ′ ,g ′′ ;gg ′ g ′′ . The twisted pentagon equation (2.6) translates to a cocycle condition in which α 3 enters with the exponent s(g) = (−1) ρ(g) whenever the first argument is orientation reversing. This means that α 3 is a ρ-twisted group cocycle. On the other hand, the gauge transformations (2.7) become the addition of an exact ρ-twisted cocycle δβ 2 to α 3 . This means that the bosonic SPT phases are classified by the ρ-twisted group cohomology H 3 (BG, U(1) ρ ) [10]. A background connection α 1 for G 0 on M combines with w 1 to give a background connection for G, which is represented as a map from M to BG. An element of H 3 (BG, U(1) ρ ) is then pulled back to a density on M which can be integrated over M to produce the partition function Z(M, α 1 ).
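For orientation, in one common set of conventions (which may differ from the normalization used in (2.6)-(2.7) above), the ρ-twisted cocycle and coboundary conditions for U(1)-valued α 3 and β 2 read

\alpha_3(g_2,g_3,g_4)^{s(g_1)}\, \alpha_3(g_1 g_2,g_3,g_4)^{-1}\, \alpha_3(g_1,g_2 g_3,g_4)\, \alpha_3(g_1,g_2,g_3 g_4)^{-1}\, \alpha_3(g_1,g_2,g_3) \;=\; 1,

\alpha_3 \;\sim\; \alpha_3\,(\delta_\rho \beta_2), \qquad (\delta_\rho \beta_2)(g_1,g_2,g_3) \;=\; \beta_2(g_2,g_3)^{s(g_1)}\, \beta_2(g_1 g_2,g_3)^{-1}\, \beta_2(g_1,g_2 g_3)\, \beta_2(g_1,g_2)^{-1},

where the exponent s(g) = −1 denotes complex conjugation in U(1) when ρ(g) is non-trivial and s(g) = +1 otherwise.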
Pin + -TFTs
We start this section by reviewing the construction of Spin-TFTs from their shadows [5]. We will argue that the Pin + -shadows must have an additional kind of anomaly which was not present in the case of Spin-shadows. Incorporating this additional anomaly will allow us to generalize the shadow construction to Pin + -TFTs. We finish the section by showing how to take a product of Pin + -TFTs at the level of shadows.
Review of Spin case
[5] provided a recipe to construct a 3d Spin-TFT T s from its shadow T f . The shadow is an ordinary TFT with an anomalous Z 2 1-form symmetry. This manifests itself in the existence of a bulk line Π which fuses with itself to the identity and has certain properties. See Figure 18.
We want to couple T f to a background 2-connection β 2 for the 1-form symmetry. We can do so by inserting Π lines inside a triangulated manifold such that an even number of Π lines cross a face having β 2 = 0 and an odd number of Π lines cross a face having β 2 = 1. Since Π has a non-trivial crossing with itself, topologically different ways of gluing Π lines inside the tetrahedron will differ by signs. Hence, we need to pick a convention of how we will glue the Π lines crossing these faces inside each tetrahedron when we say that T f is coupled to a background 2-connection β 2 . Once we have picked this convention, the partition function will not be invariant under gauge transformations of β 2 .
After fixing the convention, the change in the partition function under gauge transformations is independent of the theory, however. To see this, consider the product T = T 1 × T 2 of two shadow theories T 1 and T 2 , and couple it to a background 2-connection for the diagonal Z 2 1-form symmetry. The partition function would then be the product Z(M, β 2 ) = Z 1 (M, β 2 )Z 2 (M, β 2 ) (3.1) and a gauge transformation would leave Z invariant. This is because resolving a crossing of the product line Π 1 Π 2 gives no minus sign, as the signs from the crossing of Π 1 and the crossing of Π 2 cancel each other. The strategy of [5] was to compute this anomaly for a simple shadow theory, namely the shadow of Gu-Wen fermionic SPT phases. The anomaly under β 2 → β 2 + δλ 1 turns out to be Z f (M, β 2 ) → (−1)^{∫_M λ_1∪β_2 + β_2∪λ_1 + λ_1∪δλ_1} Z f (M, β 2 ). This transformation is the same as the transformation of a spin-structure η 1 dependent sign z(M, η 1 , β 2 ). This sign can be written as [11] z(M, η 1 , β 2 ) = (−1)^{∫_M η_1∪β_2 + ∫_N (β_2∪β_2 + w_2∪β_2)}, where N is a 4-manifold whose boundary is M and w 2 is a representative of the second Stiefel-Whitney class. For oriented manifolds, this sign is independent of N because β 2 ∪ β 2 + w 2 ∪ β 2 is exact if β 2 is a cocycle. It is easy to see from this expression that z transforms under β 2 → β 2 + δλ 1 in exactly the same way as the anomaly above. Here we have used a representation of a spin structure as an equivalence class of 1-cochains η 1 satisfying δη 1 = w 2 , under the equivalence relation given by addition of exact 1-cochains to η 1 [11].
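As a short consistency check of this last statement (using the representative formula for z written above, working mod 2 throughout, with δβ_2 = 0, δη_1 = w_2, Stokes in the form ∫_N δx = ∫_M x for ∂N = M, and the mod-2 Leibniz rule δ(x∪y) = δx∪y + x∪δy):

\int_N \delta\lambda_1\cup\beta_2 = \int_M \lambda_1\cup\beta_2, \qquad
\int_N \beta_2\cup\delta\lambda_1 = \int_M \beta_2\cup\lambda_1, \qquad
\int_N \delta\lambda_1\cup\delta\lambda_1 = \int_M \lambda_1\cup\delta\lambda_1, \qquad
\int_N w_2\cup\delta\lambda_1 = \int_M w_2\cup\lambda_1,

\int_M \eta_1\cup\delta\lambda_1 = \int_M \delta\eta_1\cup\lambda_1 = \int_M w_2\cup\lambda_1,

so that the two contributions of \int_M w_2\cup\lambda_1 cancel and

z(M,\eta_1,\beta_2+\delta\lambda_1) \;=\; (-1)^{\int_M \lambda_1\cup\beta_2 + \beta_2\cup\lambda_1 + \lambda_1\cup\delta\lambda_1}\; z(M,\eta_1,\beta_2),

matching the anomalous transformation of Z f (M, β 2 ) quoted above.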
Thus, combining the shadow theory with this sign gives a theory with a non-anomalous Z 2 1-form symmetry. The spin theory T s is obtained from this by gauging this 1-form symmetry, i.e. by summing the product Z f (M, β 2 ) z(M, η 1 , β 2 ) over the background 2-connections β 2 (up to normalization). We would like to have a Turaev-Viro construction for Z f (M, β 2 ). To this end, we should understand how to encode the Π line in terms of the spherical fusion category C. Notice that Π is mapped to a boundary line P by bringing it to the boundary. If we bring Π to the boundary such that it crosses a boundary line X, we obtain a canonical isomorphism β X : X ⊗ P → P ⊗ X. Bringing Π to the boundary in topologically equivalent ways should lead to the same answers. Hence, (P, β) can be moved across other morphisms. See Figure 19.
Mathematically, this means that Π is an element (P, β) of Drinfeld center of C. This element fuses with itself to identity and β P = −1. The Turaev-Viro construction for Z f (M, β 2 ) is achieved by inserting a Π line emanating from every vertex whose dual face has β 2 = 1. See Figure 20.
Fermion in Pin + -theories
Pin + -TFTs are a generalization of Spin-TFTs to the unoriented case. Spinors can be defined on an n-dimensional non-orientable manifold by using transition functions valued in Pin + (n) group or Pin − (n) group, both of which are double covers of O(n). They are distinguished by the value of R 2 acting on spinors where R is a spatial reflection. R 2 = +1 for Pin + -group and R 2 = −1 for Pin − -group. In terms of time reversal symmetry T , the action on spinors is T 2 = −1 for the Pin + case and T 2 = +1 for the Pin − case. A Pin + -structure exists only if the second Stiefel-Whitney class [w 2 ] vanishes. On the other hand, a Pin − -structure exists only if [w 2 + w 2 1 ] vanishes where [w 1 ] is the first Stiefel-Whitney class. Two Pin + or Pin − -structures differ by an element of H 1 (M, Z 2 ).
In the Pin + case, there is a choice in defining the action of reflection in the i-th spatial direction on spinors. We can either multiply the spinor by the gamma matrix γ i or by −γ i . This suggests that in a Pin + -shadow there are two canonical choices m R and n R = −m R of local operators at the junction of a Π line and R-defect. These operators square to 1. The orientation preserving defects always have a single canonical local operator at the junction. Now we will argue that, in general, there must be a locus L embedded inside the locus M of orientation reversing defects which implements the transformation m ↔ n. Moreover, the homology class of L must be the Poincare dual of [w 2 1 ]. Choose a locus L ′ embedded inside M whose homology class is the dual of [w 2 1 ]. Now bubble a fermion line near M and move it such that it intersects M in two junctions. See Figure 21(a). The local operators at the two junctions must be inverses of each other. Take one of these junctions around a cycle C in M. If the cycle intersects L ′ , then fusing the fermion line with itself at the end of this process gives a crossing of the fermion line which provides a factor of −1. See Figure 21(b). Topological invariance demands that C must intersect L as well, so that the fusion of the local operators at the end of the process provides a factor of −1 which cancels the sign from the crossing. Similarly, if C doesn't intersect L, it doesn't intersect L ′ either. Hence, L and L ′ are in the same homology class. Thus, we can choose to identify L with the representative w 2 1 . We will see in the next section that this flip m ↔ n as Π crosses L ′ is responsible for the presence of a mixed anomaly between time reversal symmetry and the Z 2 1-form symmetry in Pin + -shadows.
Shadows of Pin + -TFTs
Just as in the spin case, to define what we mean by a Pin + -shadow T f coupled to a background β 2 , we need to pick a convention for configuring Π lines. In addition to this, we also need to choose whether we will put m or n on the junctions when Π crosses an R-defect. The Pin + -TFT T + is obtained by combining the shadow with a sign z + and gauging the resulting non-anomalous Z 2 1-form symmetry, where the sign which cancels the anomaly for the Z 2 1-form symmetry of T f can be defined as z + (M, η 1 , β 2 ) = (−1)^{∫_M η_1∪β_2 + ∫_N (β_2∪β_2 + (w 2 1 + w_2)∪β_2)}, where ∂N = M and η 1 parametrizes Pin + -structures. The expression is independent of N as β 2 ∪ β 2 + (w 2 1 + w 2 ) ∪ β 2 is exact if β 2 is a cocycle. Flipping the choice of local operator changes the partition function as Z f (M, β 2 ) → (−1)^{∫_M w_1∪β_2} Z f (M, β 2 ). This can be absorbed into a permutation of Pin + -structures η 1 → η 1 + w 1 . Thus, as in the spin case, this choice does not affect the resulting theory. Now, notice that Pin + -shadows have a time reversal anomaly in the presence of a background 2-connection β 2 . As we add δv 0 to w 1 , we add δu 1 to w 2 1 where u 1 = w 1 ∪ v 0 + v 0 ∪ w 1 + v 0 ∪ δv 0 . This corresponds to moving M and L ′ . But during such movements, L ′ will cross some Π lines encoding β 2 and the partition function will change by a sign determined by u 1 and β 2 . Under this transformation, the sign z + also transforms in the same way, and the corresponding Pin + -TFTs have no time-reversal anomaly. The signs z + written above imply a corresponding anomaly of the shadow under β 2 → β 2 + δλ 1 , which now involves w 1 , where w 1 is a representative of the first Stiefel-Whitney class. As the anomaly is universal, we will verify that this is the correct anomaly by computing the anomaly directly for shadows of the Pin + generalization of Gu-Wen fermionic SPT phases in the next section.
Figure 22. The equations defining the twisted Drinfeld center.
To obtain the Turaev-Viro construction for Z f (M, β 2 ), we need to know how to encode the Π line in terms of the data of C. As in the spin case, Π is mapped to some boundary line P with canonical isomorphisms β X : X ⊗ P → P ⊗ X. However, unlike the spin case, Π is not an element of Drinfeld center of C. Rather, we need to insert extra signs whenever we move Π across L ′ . This descends to the statement that (P, β) is an element of a twisted Drinfeld center which is defined in Figure 22.
Product of Pin + -TFTs
In this subsection, we want to figure out the shadow of the product of two Pin + -TFTs. This will lead to the definition of a product on the shadow theories which we will call the shadow product.
First, notice an identity satisfied by the signs z + that will be used below. Now consider two Pin + -TFTs T + and T ′ + with their corresponding shadows T f and T ′ f . Using the above, we can write the partition function of the product theory as a double sum over background 2-connections weighted by the signs z + .
Figure 23. The properties of a bulk line b generating a non-anomalous Z 2 1-form symmetry are very similar to those of Π. The only difference is that crossing b lines doesn't lead to a sign.
which can be massaged as (3.14) being the partition function of the shadow corresponding to the product theory. We denote this shadow theory as the shadow product T f × f T ′ f . Physically, we are constructing the shadow of the product by gauging the diagonal Z 2 1-form symmetry in the product of the shadow theories. Notice that this 1-form symmetry is non-anomalous and hence gauging it makes sense.
To implement the shadow product in the Turaev-Viro description, we first take a graded product of C × G C ′ of C and C ′ . This means that (C × G C ′ ) g = C g × C ′ g . Now we need a notion of gauging the line b = ΠΠ ′ in the Drinfeld center of C × G C ′ . In general, we can consider the following problem. Take a theory T b specified by a twisted spherical fusion category C b having a non-anomalous Z 2 1-form symmetry. This means that there exists a line b in the Drinfeld center of C b which fuses with itself to identity and has the properties shown in Figure 23. We want to construct the twisted spherical fusion category for the theory T Z 2 obtained after gauging the 1-form symmetry generated by b.
b is invisible in the gauge theory. This means that a morphism from A to b ⊗ B in C b has to be regarded as a morphism from A to B in C Z 2 . And the morphisms from A to B in C b are also morphisms from A to B in C Z 2 . The composition and tensor product of new morphisms are defined as shown in Figure 24. Let's try to understand what happens to the simple objects under this operation. If L is a simple object in C b , M = b ⊗ L is simple as well. If M is not isomorphic to L, then the morphism from L to b ⊗ M in C b provides an isomorphism from L to M in C Z 2 , combining them into a single simple object in C Z 2 . If M is isomorphic to L, then the morphism from L to b ⊗ M in C b provides an additional endomorphism ξ L of L in C Z 2 . Since there are two independent morphisms from L to itself in C Z 2 , it must split into two simple objects L + and L − which can be constructed by inserting a projector on L.
Fermionic SPT phases
In this section, we discuss Pin + -SPT phases. We also explicitly compute the partition function on an arbitrary manifold M of a certain Pin + -shadow which gives rise to the Pin + Gu-Wen phases. We can read off the anomaly of Pin + -shadows from the expression for the partition function. The anomaly matches the expectation of the previous section. We finish the section by reproducing the Z 2 group of Pin + -SPT phases without any global symmetry.
Figure 25. We choose our basis for the morphism space L gg ′ ,ǫ+ǫ ′ +n 2 (g,g ′ ) → L g,ǫ ⊗ L g ′ ,ǫ ′ such that the bases for different (ǫ, ǫ ′ ) are related by the crossing of a Π line as shown in the figure. Here a label ǫ adjacent to a double line denotes that the double line is Π if ǫ = 1 and the double line is the identity line if ǫ = 0.
Gu-Wen phases
In this subsection, we will discuss Pin + Gu-Wen SPT (f-SPT) phases with global symmetry G. Gu-Wen fermionic SPT phases were first described in [12] and explored further in [11]. The twisted spherical fusion category for these phases is such that C g has two simple objects L g,0 and L g,1 for any g in G × Z R 2 . The fusion rule is L g,ǫ ⊗ L g ′ ,ǫ ′ ≃ L gg ′ ,ǫ+ǫ ′ +n 2 (g,g ′ ) , where n 2 is a Z 2 -valued group cocycle, i.e. it is an element of H 2 (B(G × Z R 2 ), Z 2 ).
This group is also the group of central extensions of the form 1 → Z 2 → Ĝ → G × Z R 2 → 1. Thus, we can view C as descending from Ĉ, which is a Ĝ-graded category with a single simple object in each grade. One obtains C by forgetting the sub-grading corresponding to the Z 2 subgroup appearing in the above central extension. More physically, Ĉ can be viewed as generalizing the notion of unoriented bosonic SPT phases to bosonic SPT phases with a more complicated structure group. Forgetting the Z 2 grading corresponds to gauging the Z 2 symmetry. The associator of elements in C can be read from the associator in Ĉ, which we denote as α̂ 3 . It is an element of H 3 (BĜ, U(1) ρ ). As a note, we will denote an arbitrary element of G × Z R 2 by g in what follows. We demand the existence of a fermionic line Π in the twisted Drinfeld center of C which fuses with itself to the identity. For the 1-form symmetry generated by this line to be compatible with G, the line must be of the form (L e,0 , β) or (L e,1 , β). The former case cannot lead to a fermionic line. Hence, Π must be of the form (L e,1 , β). The existence of such a line will put some constraints on the form of C which we now explore. First, we choose our basis of morphisms as shown in Figure 25. Consider the basic graph dual to the tetrahedron. Using our basis, it can be written as in Figure 26(a). This, in turn, can be manipulated into the final graph shown in Figure 27, which we define to be ν 3 (g, g ′ , g ′′ ). During this manipulation we obtain a sign from resolving a crossing and another sign from moving a Π line across a vertex. See Figure 26(b). Thus, we see that α̂ 3 and ν 3 differ by a sign built out of n 2 and the Z 2 -valued co-chain ǫ 1 which sends (g, ǫ) to ǫ.
Figure 26. Intermediate computational steps relating α̂ 3 and ν 3 . The sign arises from dragging a Π line over a vertex. We can further resolve the crossing on the right-hand side to make contact with ν 3 , which is defined in Figure 27.
However, there is a redundancy in such a description. We will see in subsection 4.3 that the phase defined by ν 3 = 1 and n 2 = ρ 2 1 is the same as the trivial phase specified by ν 3 = 1 and n 2 = 0.
For the rest of this subsection, we note that we can write H 2 (B(G × Z R 2 ), Z 2 ) in terms of the group cohomology of G. Let's denote an arbitrary element of G × Z R 2 as g 1 , g 2 etc. We also denote an arbitrary element of G as g and R as the generator of Z R 2 . We have the gauge transformations n 2 (g 1 , g 2 ) → n 2 (g 1 , g 2 ) + n 1 (g 1 ) + n 1 (g 1 g 2 ) + n 1 (g 2 ) (4.11). Pick n 1 such that n 1 (g) = 0 for all g and n 1 (R) + n 1 (gR) = n 2 (g, R). Thus we have fixed a gauge such that n 2 (g, R) = 0 for all g. Then, using the cocycle condition n 2 (g 2 , g 3 ) + n 2 (g 1 g 2 , g 3 ) + n 2 (g 1 , g 2 g 3 ) + n 2 (g 1 , g 2 ) = 0 (4.12), we find that we can express n 2 in terms of data pulled back from G, where m 2 parametrizes an element of H 2 (BG, Z 2 ), m 1 parametrizes an element of H 1 (BG, Z 2 ), and m̃ 1,2 denotes the pullback of m 1,2 from G to G × Z R 2 . This analysis establishes the resulting decomposition of H 2 (B(G × Z R 2 ), Z 2 ) in terms of the group cohomology of G.
Anomaly for Pin + -shadows
In this subsection we will compute the partition function Z f (M, β 2 ) for a Pin + Gu-Wen phase. The explicit expression will allow us to compute the anomaly under a gauge transformation β 2 → β 2 + δλ 1 . As the anomaly is universal, this will justify our prescription (3.6) for constructing Pin + -TFTs in terms of their shadows.
In the presence of a background β 2 , the basic tetrahedron graph is as shown in Figure 29(a). This can be gauge fixed as shown in Figure 28. After the gauge fixing, we can move the Π lines to the position shown in Figure 29(b). This implies that the partition function can be written as a sum over labelings of the resulting weights. This expression is non-zero only when the G-connection is such that n 2 = β 2 + δα 1 . By shifting ǫ 1 → ǫ 1 + α 1 , the above expression can be re-written so that the sign inside the sum is exact, and hence we obtain an explicit closed-form expression for Z f (M, β 2 ).
Group structure of Gu-Wen phases
Now we would like to compute the product of two Gu-Wen phases labeled by (ν 3 , n 2 ) and (ν ′ 3 , n ′ 2 ). The G-graded product of the corresponding categories has 4 simple objects L g,ǫ,ǫ ′ in each grade, which fuse according to the cocycle (n 2 , n ′ 2 ) and have associators α̂ 3 α̂ ′ 3 . The non-anomalous Z 2 1-form symmetry is generated by L e,1,1 , which has crossing (−1) ǫ+ǫ ′ .
Gauging the symmetry identifies L g,ǫ,ǫ ′ with L g,ǫ+1,ǫ ′ +1 . We pick representative objects L g,ǫ,0 in each grade and compute the associator of L g,ǫ,0 , L g ′ ,ǫ ′ ,0 and L g ′′ ,ǫ ′′ ,0 via the tetrahedron graph. Multiplying two representative objects L g,ǫ,0 and L g ′ ,ǫ ′ ,0 , we obtain L gg ′ ,ǫ+ǫ ′ +n 2 (g,g ′ ),n ′ 2 (g,g ′ ) , which can be mapped back to the representative object L gg ′ ,ǫ+ǫ ′ +n 2 (g,g ′ )+n ′ 2 (g,g ′ ),0 by inserting n ′ 2 (g, g ′ ) ΠΠ ′ lines emanating from the corresponding vertex. The representative objects thus fuse according to the cocycle n 2 + n ′ 2 . Now, we gauge fix as in the previous subsection. Then, doing the same manipulations as in the previous subsection, we evaluate the tetrahedron graph; up to a gauge redefinition, the result shows that the product is a Gu-Wen phase with ν̃ 3 = ν 3 ν ′ 3 (−1)^{n 2 ∪ 1 n ′ 2 } and ñ 2 = n 2 + n ′ 2 . However, notice that substituting ν 3 = 1, n 2 = 0 in (4.17) and writing w 2 1 = δσ 1 , and substituting ν 3 = 1, n 2 = ρ 2 1 , give the same expressions! Thus, the Gu-Wen phase labeled by (ν 3 = 1, n 2 = ρ 2 1 ) is the trivial phase. The reader might complain that (4.21) does not seem to describe a trivial phase. We would like to stress that this is the partition function of the shadow theory describing the trivial Pin + -TFT. The trivial Pin + -TFT is obtained by combining a non-trivial shadow with a non-trivial sign.
Z R 2 version of Ising
As an application of our formalism, we would like to construct all Pin + -SPT phases with global symmetry group G being the trivial group {id}. There is only one Gu-Wen phase in this class, which is the trivial phase. There is a non-trivial phase in this class which is given by the Z R 2 analogue of a Z 2 graded spherical fusion category I which is known as the Ising fusion category. Below we recall the construction of I and its Z R 2 cousin. It turns out that the analysis for both cases is similar and we treat them together.
We are looking for a Z 2 graded (twisted) spherical fusion category such that C 0 = {I, P } and C 1 = {S} are the simple objects. The fusion rules are P ⊗ P ≃ I, P ⊗ S ≃ S ⊗ P ≃ S, and S ⊗ S ≃ I ⊕ P . The F -symbols can be bootstrapped from these fusion rules by using the (twisted) pentagon equation and taking advantage of the gauge freedom. When the Z 2 grading corresponds to a Z 2 global symmetry, the non-trivial F -symbols are determined up to an overall sign choice. When the Z 2 grading corresponds to the Z R 2 orientation reversing symmetry, the non-trivial F -symbols are determined uniquely; that is, the choice of sign becomes a gauge freedom in the Z R 2 case. The fermion line is given by an element of the (twisted) Drinfeld center of the form (P, β). Solving the Drinfeld center equations for the Z 2 case, we obtain two solutions for β; thus, there are two choices for the fermion line Π. Given the choice in picking the associator and the choice in Π, we can construct four Spin-TFTs with global symmetry Z 2 .
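For reference, in one standard convention (not necessarily the basis and normalization chosen above) the bootstrap in the ordinary Z 2 -graded case yields the familiar Ising-type data, with the sign κ = ±1 being the choice of associator referred to in the text:

\big(F^{SSS}_{\;S}\big)_{ab} \;=\; \frac{\kappa}{\sqrt{2}}\,(-1)^{ab}, \qquad a, b \in \{0, 1\} \text{ labelling the lines } I, P, \qquad\quad F^{PSP}_{\;S} \;=\; F^{SPS}_{\;P} \;=\; -1 .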
On the other hand, solving the twisted Drinfeld center equations for the Z R 2 case, we obtain solutions differing only in the sign of β S . However, as we know from before, flipping the sign of all the β in the orientation reversing sector doesn't change the resulting Pin + -TFT, and we can fix β S = +1. Hence, in the Z R 2 case, there are no choices and we obtain only one Pin + -TFT, which we call I + .
Pin + -SPT phases with no global symmetry
The cobordism hypothesis predicts a Z 2 group of Pin + -SPT phases [7]. We have already found the trivial phase as a Gu-Wen phase. We claim that the non-trivial phase corresponds to the Pin + -TFT I + that we encountered in the last subsection. To justify this, we will show that the square of I + is the trivial Gu-Wen phase. This will prove that I + is indeed an SPT phase and provide an explicit construction of Pin + -SPT phases without global symmetry. The existence of this non-trivial phase was also discussed in [13].
The graded product of I + with itself has simple objects II, P I, IP , P P in the trivial grade and a simple object SS in the non-trivial grade. Gauging the 1-form symmetry generated by ΠΠ, we obtain a category C with C 0 having simple objects II, P I and C 1 having simple objects SS + , SS − . SS + and SS − are constructed by using projectors obtained by using the non-trivial endomorphism of SS. See Figure 30.
SS + ⊗ P I involves the F -symbol F P SP which flips the sign of ξ S and hence SS + ⊗ P I ≃ SS − . On the other hand, P I ⊗ SS + involves β P and hence P I ⊗ SS + ≃ SS − . The computation of SS + ⊗ SS + can be done in a similar but more involved manner which we explain in Figure 31. We find that SS + ⊗SS + ≃ II. All the statements above hold true if we replace SS + with SS − . Thus, C has the fusion rules of the Gu-Wen phase which is trivial.
For a general G, we can consider the pullback of I + along ρ 1 , which we denote as I + (G). I + (G) g has two simple objects I g , P g if ρ 1 (g) = 0 and a single simple object S g if ρ 1 (g) = 1. The fusion rules and associators are just pulled back from I + . Clearly, I + (G) will also square to the trivial phase, as our argument above is independent of the G-grading.
This allows us to construct GW(G) × Z 2 worth of Pin + -SPT phases with global symmetry G. We suspect that this is not the full classification and comment on how to complete the classification in the next section.
Figure 31. Computation of SS + ⊗ SS + is by definition a sum of four terms which involve associators and crossings. II inside SS ⊗ SS is mapped to II by the first and fourth terms and to P P (which is isomorphic to II in the new category) by the second and third terms. Similarly, P I is mapped to the zero object as the four terms cancel in pairs. Hence, SS + ⊗ SS + ≃ II.
Conclusion and future directions
In this paper we discussed the generalization of Turaev-Viro construction of oriented 3d TFTs to unoriented 3d TFTs. We proposed that the input data of this construction in the unoriented case should be a "twisted" spherical fusion category in which the pentagon equation for the F -symbols is modified. As a generalization of the construction of [5], we also proposed a construction for Pin + -TFTs in terms of their shadows. The shadows are ordinary unoriented TFTs with a Z 2 1-form symmetry which is anomalous and has a mixed anomaly with time-reversal symmetry.
Combining the above two ingredients, we were able to give explicit constructions of a large class of invertible Pin + -TFTs with global symmetry G. Such theories are known as Pin + -SPT phases. We also reproduced the Z 2 group of Pin + -SPT phases without any global symmetry.
There are plenty of interesting directions in which this work can be extended in the future and we make some very speculative comments about them in what follows. Perhaps the most immediate future direction is to use the machinery developed in this paper to provide a classification of Pin + -SPT phases for an arbitrary group G which admit a topological boundary condition. The author suggests to look at a spherical fusion category graded by Z 2 × Z R 2 with simple elements I, P in the (0, 0) grade, I 1 , P 1 in the (0, 1) grade, S in the (1, 0) grade and S 1 in the (1, 1) grade. The fusion rules mimic the Ising category. Is it possible to find a consistent set of F -symbols? If yes, then the class of Pin + -SPT phases we presented in this paper is not the full answer. It should then be possible to finish the classification, in a spirit similar to the one in [5], by pulling back this Z 2 × Z R 2 phase and combining it with the class of phases presented in this paper.
It would be very interesting to provide a construction (Turaev-Viro-like or some other construction) for TFTs with more general structure groups. For instance, one could mix O(n) and G or mix Pin + (n) and G in the fermionic case. It seems that a proper treatment of these generalizations should involve a rich interplay of symmetry defects along with higher codimension defects living in the worldvolume of symmetry defects.
Let us comment about the Pin − (n) × G case. It seems natural that the kernel for Pin − -TFTs would be the sign z − (M, η 1 , β 2 ) = (−1)^{∫_M η_1∪β_2 + ∫_N (β_2∪β_2 + (w 2 1 + w_2)∪β_2)} (5.1), which seems to be the same expression as (3.7), but this time we take η 1 to parametrize Pin − -structures. This would suggest that the corresponding shadow theory has no mixed anomaly between time reversal and the Z 2 1-form symmetry. Also, the anomaly for the 1-form symmetry should now be Z f (M, β 2 ) → (−1)^{∫_M λ_1∪β_2 + β_2∪λ_1 + λ_1∪δλ_1} Z f (M, β 2 ) (5.2). However, for the Pin + case, we saw in Figure 21 that moving the fermion Π across the locus dual to w 2 1 should change the operator at the junction of the Π line and the orientation reversing defects. The argument given there was that this sign was needed to cancel the sign coming from the crossing of Π lines. This leads to different anomalies from the ones we want for the Pin − case. So, in the Pin − case, we do not want such a change in the sign of the corresponding local operator. The author suspects that in this case the sign coming from the crossing of Π lines will be canceled by factors coming from patching Π with RΠ, where RΠ is the Π line with a reflected framing. This would make sure that Π is an element of the Drinfeld center rather than a twisted Drinfeld center, which would in turn imply the anomalies given above. It would be interesting to work out the details and provide a Turaev-Viro construction for Pin − shadows.
Of course, this means that one will have to first understand how to compute (in terms of the twisted spherical fusion category) the extra data attached to a bulk line which corresponds to patching the line with itself but with reflected framing. In other words, this corresponds to a generalization of Moore-Seiberg data [14,15] to the unoriented case. A step towards this was recently taken in [16].
A puzzle here is that there should be no non-trivial Pin − -SPT phase according to [7]. So, somehow the Pin − -TFTs produced by the potential Pin − -shadows having Z R 2 version of Ising as their twisted spherical fusion category should be trivial.
Another interesting direction to pursue would be to see if it is possible to find a generalization of Turaev-Viro construction which could construct anomalous 3d TFTs. Such TFTs live at the boundary of a 4d SPT phase. Hence, such TFTs should not admit topological boundaries of their own but they can admit interfaces to other 3d TFTs with the same anomaly. Perhaps it is possible to choose a simple TFT in each anomaly class and build a Turaev-Viro construction using a topological interface between the TFT we want to construct and the simple TFT. See [1,6,17,18] for recent interesting work on anomalous unoriented 3d TFTs.
Finally, it would be interesting to concretely construct a time-reversal invariant commuting projector Hamiltonian using the data of twisted spherical fusion category. This Hamiltonian goes into the string-net construction of fermionic phases of matter. See [19], [5] for more details. | 15,463 | 2016-11-08T00:00:00.000 | [
"Physics"
] |
Ergodicity of a Nonlinear Stochastic SIRS Epidemic Model with Regime-Switching Diffusions
In this paper, taking both white noises and colored noises into consideration, a nonlinear stochastic SIRS epidemic model with regime switching is explored. The threshold parameter R s is found, and we investigate sufficient conditions for the existence of the ergodic stationary distribution of the positive solution. Finally, some numerical simulations are also carried out to demonstrate the analytical results.
Introduction
It is well known that the incidence rate plays a crucial role in studying the dynamics of infectious disease models. In general, bilinear incidence βSI is considered in most infectious disease models [1,2]. For example, Li and Ma [3] conducted the qualitative analyses of the SIS epidemic model with vaccination and varying total population size. Nakata and Kuniya [4] introduced the global dynamics of a class of SEIRS epidemic models in a periodic environment. In order to effectively investigate the rapid spread of the disease, it is rewarding to consider the behavioral changes and crowding effect of the infected individuals, as well as choose appropriate parameters to prevent the unbounded contact rate. Capasso and Serio [5] proposed the saturated incidence (βSI/(1 + αI)), which is more reasonable than the bilinear incidence. For the detailed introduction of the saturated incidence, see [5].
The classical SIRS epidemic model with the saturated incidence rate is of the following standard form [6,7] (a reconstruction is written out after this paragraph), where S(t), I(t), and R(t) represent the numbers of susceptible, infected, and removed individuals at time t, respectively. A denotes an input of new members into the population, β stands for the transmission rate, μ is the natural mortality, d is the death rate relative to the disease, λ is the proportion of the infective class moving to the recovered class, and ω is the per capita rate of loss of immunity. In nature, it is inevitable for a population to be affected by a variety of random factors [8,9]. Consequently, it is crucial to consider the randomness which might exist during the transmission of disease [10,11]. In general, there are two types of random perturbations to be considered in ecosystem modeling: one is white noise, which can be described as Brownian motion [12][13][14], and the other is colored noise (also called telegraph noise), which can be described through a finite-state Markov chain [15][16][17]. In [15], Liu et al. investigated the threshold behavior of a multigroup SIRS epidemic model with standard incidence rates and Markovian switching. Lin and Jin [16] considered a stochastic SIS epidemic model with regime switching; by verifying a Foster-Lyapunov condition, the threshold condition for the ergodicity is presented. Hu et al. [17] studied a stochastic SIS epidemic model with vaccination and nonlinear incidence under regime switching.
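For concreteness, the parameter list above corresponds to the standard saturated-incidence SIRS system; the following display is a reconstruction consistent with that description rather than a verbatim copy of the paper's equation (1):

\begin{aligned}
\frac{dS}{dt} &= A - \frac{\beta S I}{1+\alpha I} - \mu S + \omega R,\\
\frac{dI}{dt} &= \frac{\beta S I}{1+\alpha I} - (\mu + d + \lambda) I,\\
\frac{dR}{dt} &= \lambda I - (\mu + \omega) R .
\end{aligned}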
Motivated by the above literature, we study a nonlinear stochastic SIRS epidemic model with two kinds of random interference; we refer to it as system (2). In system (2), q is the fraction of newborns that are vaccinated. The parameter α in the incidence rate captures the crowding effect of the infected individuals and should not be disturbed by the noises in the environment. B i (t) (i = 1, 2, 3) denotes one-dimensional standard Brownian motion, and σ i (i = 1, 2, 3) is the intensity of white noise. ξ(t) is a right-continuous Markov chain taking values in M = {1, 2, . . . , m}, and the generator matrix of ξ(t) is Γ = (c ij ) 1≤i,j≤m . The details of the Markov chain are presented in [18], which we omit here.
In this paper, the dynamic behaviors of stochastic differential system (2) are discussed. In Section 2, we get the conditions for the extinction and persistence in mean of the infected. In Section 3, we investigate the ergodicity of system (2) by constructing a suitable Lyapunov function. Finally, numerical simulations are given in Section 4.
The Extinction and Persistence of the Disease
In system (2), let N(t) = S(t) + I(t) + R(t). From Lemma 2.1 and Lemma 2.2 of [19], system (2) has the following properties.
Definition 1
(1) If lim t⟶∞ I(t) = 0, then the disease tends to be extinct. (2) If lim inf t⟶∞ (1/t) E ∫ t 0 I(z) dz > 0, then the disease tends to be persistent in mean.
If R s > 0, then the disease I(t) of system (2) is persistent in mean. □ Remark 1. According to Theorem 1, if the intensity of white noise is large enough that the condition R s < 0 holds, then the disease dies out with probability 1. Conversely, if R s > 0, the disease of system (2) is persistent in mean. This means that the presence of environmental noise is conducive to disease control.
Ergodic Stationary Distribution of System (2)
The study of ergodicity and stationary distributions has received wide attention from many scholars [22,23]. In this section, in order to investigate the ergodic property of system (2), we establish a suitable Lyapunov function with Markovian switching. If R s > 0, then the process (S(t), I(t), R(t), ξ(t)) of system (2) is ergodic and has a unique stationary distribution in R 3 + × M.
This completes the proof.
Conclusions and Numerical Simulations
This paper investigated a nonlinear epidemic disease model with two kinds of noise disturbances. The threshold for extinction and persistence in mean is obtained.
(i) If R s < 0, the infected individuals tend to become extinct.
(ii) If R s > 0, the infected individuals are persistent in mean.
(iii) If R s > 0, the stochastic process (S(t), I(t), R(t), ξ(t)) of system (2) is ergodic and has a unique stationary distribution.
To verify the correctness of the theoretical analysis, a numerical simulation is employed in the following example. The Markov chain ξ(t) is taken to be a two-state chain; its unique stationary distribution is π = (π 1 , π 2 ) = (1/4, 3/4). Let α = 0.2, and the other coefficients in system (2) are selected as follows; a simulation sketch along these lines is given after this paragraph.
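The following is a minimal Euler-Maruyama simulation sketch of a regime-switching stochastic SIRS model of the kind described above. The drift terms (including the placement of the vaccination term qA), the generator Q (chosen only so that its stationary distribution equals (1/4, 3/4)), and all numerical values are illustrative assumptions, not the paper's actual system (2) or coefficients.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters only (NOT the paper's values), one set per regime of xi(t).
params = {
    1: dict(A=1.0, q=0.2, beta=0.6, mu=0.1, d=0.05, lam=0.3, omega=0.05, sigma=(0.05, 0.05, 0.05)),
    2: dict(A=1.0, q=0.2, beta=0.4, mu=0.1, d=0.05, lam=0.3, omega=0.05, sigma=(0.10, 0.10, 0.10)),
}
alpha = 0.2                       # crowding-effect parameter (not switched by xi)
Q = np.array([[-3.0, 3.0],        # generator of xi(t); its stationary law is (1/4, 3/4)
              [1.0, -1.0]])

def simulate(T=200.0, dt=1e-3, S=0.8, I=0.1, R=0.1, state=1):
    """Euler-Maruyama path of the regime-switching stochastic SIRS sketch."""
    n = int(T / dt)
    path = np.empty((n, 4))
    for k in range(n):
        p = params[state]
        inc = p["beta"] * S * I / (1.0 + alpha * I)    # saturated incidence
        dW = rng.normal(0.0, np.sqrt(dt), 3)           # three independent white noises
        S += ((1 - p["q"]) * p["A"] - inc - p["mu"] * S + p["omega"] * R) * dt + p["sigma"][0] * S * dW[0]
        I += (inc - (p["mu"] + p["d"] + p["lam"]) * I) * dt + p["sigma"][1] * I * dW[1]
        R += (p["q"] * p["A"] + p["lam"] * I - (p["mu"] + p["omega"]) * R) * dt + p["sigma"][2] * R * dW[2]
        S, I, R = max(S, 0.0), max(I, 0.0), max(R, 0.0)  # crude positivity guard for the sketch
        if rng.random() < -Q[state - 1, state - 1] * dt:  # leave the current regime at the generator's rate
            state = 2 if state == 1 else 1
        path[k] = (S, I, R, state)
    return path

out = simulate()
print("time-averaged I(t):", out[:, 1].mean())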
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
"Mathematics",
"Engineering"
] |
Exponential Synchronization of a Class of N-Coupled Complex Partial Differential Systems with Time-Varying Delay
Copyright © 2017 Wenhua Xia et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. This paper is concerned with the exponential synchronization for a class of N-coupled complex partial differential systems (PDSs) with time-varying delay. The synchronization error dynamic of the PDSs is defined in the q-dimensional spatial domain. To achieve synchronization, we added a linear feedback controller. A sufficient condition is derived to ensure the exponential synchronization of the proposed networks using the Lyapunov–Krasovskii stability approach and matrix inequality technology. The proposed system has broad applications. Two example applications are presented in the final section of this paper to verify the proposed theoretical result.
Introduction
Over the last few years, complex dynamical networks have been used to describe numerous large-scale systems in different fields, such as natural and human societies. A complex dynamical network is a large set of interconnected nodes; each node represents an individual in the system, whereas the edges denote the relations between them. Typical examples include physical systems, biological neural networks, the Internet, electrical power grids, and social networks. Many interesting and important studies have been previously conducted on various complex dynamical networks [1][2][3][4][5].
Synchronization, a common phenomenon in real systems, occurs within widespread settings such as flashing fireflies, brain networks, distributed computing systems, sensor networks, and applause, and ranges from natural to artificial networks. Experiments prove the following: the flicker frequency of a firefly is affected by the flicker frequency of its surrounding luminescence; heart muscle cells can relax and contract the heart valve through synchronous oscillations. Synchronization is vital in our daily life. Thus, we must find conditions to guarantee that the nodes in a network converge on the same desired trajectory; that is, the network achieves synchronization. Therefore, the synchronization problem of complex networks has attracted great attention in the past and is becoming an important topic. Many important results on synchronization have been obtained for various complex dynamical networks [3][4][5][6]. External force controllers usually need to be designed and applied to ensure the synchronization of networks. Several control schemes, such as adaptive, impulsive, and pinning control, have been reported [7][8][9][10]. Scholars have used various methods to study asymptotical synchronization, exponential synchronization, passivity synchronization, and H ∞ synchronization for different complex networks, and have achieved fruitful results [11][12][13][14][15][16][17][18][19][20][21][22][23][24].
Many phenomena exist in practice, such as those in chemical engineering, neurophysiology, and biodynamics, where state variables depend not only on time but also on spatial position. These phenomena are generally modeled by partial differential systems (PDSs). Therefore, increasing attention has been paid to the study of PDSs [25][26][27][28][29][30][31][32][33][34][35][36][37][38][39][40][41][42][43]. A significant part of this research is based on reaction-diffusion neural network models, such as [25,[34][35][36][37][38]]. Multiple intercoupled reaction-diffusion neural networks can generate complex networks, as shown in the following example: the author of [25] discusses the passivity-based synchronization of a complex delayed dynamical network consisting of linearly and diffusively coupled identical reaction-diffusion neural networks. In practice, the nodes of a complex network are not necessarily neurons. For example, in [26], Yang et al. address a class of complex spatiotemporal networks with space-varying coefficients, where node dynamics are described by coupled partial differential equations (PDEs); in [27], Wang et al. study a class of networked linear spatiotemporal networks consisting of identical nodes, where the spatiotemporal behavior of each node is described by parabolic PDEs. Complex network systems containing partial differential terms have therefore been studied more and more.
Time delays are often encountered in practical cases, and their existence is often one of the key factors that cause system oscillation and instability. Hence, the problem of considering time delay in PDSs has aroused the interest of researchers. In [29], sufficient conditions on asymptotical synchronization for a class of coupled time-delay PDSs via boundary control have been obtained; Wu et al. address the robust H∞ synchronization of PDSs with time delay [41,42]; Wang et al. focus on the passivity problem of a class of delayed reaction-diffusion networks [43].
To date, some studies have been conducted on the exponential synchronization of PDSs where the spatial variables are one-dimensional and the networks have no time lag [26,27]. However, no literature on the exponential synchronization of time-delay PDSs with q-dimensional spatial variables has been published.
Motivated by the above analysis, this paper considers the exponential synchronization for a class of time-varying delay N-coupled PDSs in the q-dimensional spatial domain. The innovation points of this paper can be summarized as follows: (1) We propose a method to study exponential synchronization for the time-delay partial differential complex network model with reaction-diffusion term presented in this paper. That is, a sufficient condition for delay-dependent exponential synchronization is obtained by constructing an appropriate Lyapunov-Krasovskii functional and using matrix inequality analysis technology. The sufficient condition is presented in the form of a linear matrix inequality and can reflect the influence of the time delay on the exponential synchronization of the system, and is hence less conservative than existing methods. To the best of our knowledge, the exponential synchronization of time-delay partial differential complex network models with multidimensional spatial variables has not yet been studied.
(2) We reveal the influence of the diffusion coefficient on the exponential synchronization of the proposed system. The sufficient condition obtained reveals an interesting conclusion: if the diffusion coefficient is sufficiently large, then condition (16) of Theorem 8 will always be satisfied. Hence, a given system will always reach exponential synchronization if the diffusion coefficient is large enough.
(3) For the proposed system, we can use the obtained sufficient condition for exponential synchronization and the LMI Control Toolbox in MATLAB to easily calculate estimates of the maximum delay margin and the maximum exponential synchronization decay rate.
The rest of this paper is organized as follows. In Section 2, a class of N-coupled complex PDSs with time-varying delay is presented and some preliminaries are given. In Section 3, we analyze the synchronization of the addressed network based on the Lyapunov-Krasovskii stability theorem and linear matrix inequality (LMI) technology. Two numerical examples are provided to verify the effectiveness of the theoretical results in Section 4. Section 5 presents the concluding remarks.
Network Model and Preliminaries
2.1. Notations. Throughout this paper, R^n and R^{n×m} denote the n-dimensional Euclidean space and the set of all n × m real matrices, respectively. For a symmetric matrix P, the notation P > 0 means that P is a positive-definite matrix. λ_min(·) and λ_max(·) represent the minimum and maximum eigenvalues of a matrix, respectively. C^k(Ω) refers to the space of functions with continuous partial derivatives of order less than or equal to k in Ω. The symbol ⊗ denotes the Kronecker product.
2.2. Model Description.
Consider the N-coupled complex PDSs (1) with time-varying delay, in which each node is an n-dimensional dynamical subsystem. The boundary conditions and initial values are given in (2), where Φ(·,·) is bounded and continuous on Ω and the delayed coupling matrix is defined similarly to the non-delayed one. The network considered in this paper is undirected and weighted.
Definition 6. The complex network (1) with initial condition (2) is said to achieve exponential synchronization if there exist positive constants M and ε such that the error between each node state and the synchronous evolution s(x, t) of network (1) satisfies a bound of the form M e^{-εt}, where ‖·‖_2 stands for the Euclidean vector norm.
Proof. Construct a Lyapunov functional of Lyapunov-Krasovskii type. From Assumption 5, the time derivative of this functional along the trajectory of system (13) can be estimated as follows. By using Green's formula and the boundary condition (2), Lemma 1 gives a bound on the diffusion terms, and a similar bound holds for the remaining spatial terms. One then obtains an estimate in which a real matrix appears whose symmetric part reproduces the coupling term. By using Assumption 4 and Lemma 2, the nonlinear terms can also be bounded. Thus, substituting inequalities (22)-(26) into (21), we obtain a differential inequality for the functional. Adding twice the decay-rate term to both sides of (27) and using Lemma 3 together with condition (16), we derive an exponential estimate. Integrating both sides of inequality (29), and using the positivity of the functional given by (18), we finally obtain a bound of the form required by Definition 6. Therefore, the error dynamical system (14) is globally exponentially stable at the equilibrium set with the stated exponential rate. Consequently, network (1) is globally exponentially synchronized under controllers (12). Thus, the proof is completed.
Remark 9. To date, some studies have been conducted on the exponential synchronization of PDSs where the spatial variables are one-dimensional and the networks have no time lag [26,27]. However, no literature on the exponential synchronization of time-delay PDSs with q-dimensional spatial variables has been published.
Remark 10. We reveal the influence of the diffusion coefficient on the exponential synchronization of the proposed system in Theorem 8. The sufficient condition obtained reveals an interesting conclusion: if the diffusion coefficient is sufficiently large, then condition (16) of Theorem 8 will always be satisfied. Hence, a given system will always reach exponential synchronization if the diffusion coefficient is large enough.
Remark 11. The criterion given in Theorem 8 is dependent on the time delay. It is well known that delay-dependent criteria are less conservative than delay-independent criteria when the delay is small. Remark 12. From Theorem 8, one can determine an upper bound on the delay such that system (1) is exponentially synchronized. This requires solving the following optimization problem: maximize the delay bound subject to LMI (16). System (1) under controllers (12) will then be exponentially synchronized whenever the delay does not exceed the maximal value obtained from this optimization problem.
Remark 13. By iteratively solving the LMIs given in Theorem 8 with respect to the delay bound and h_0, one can find the maximum allowable upper bounds of h(t) and ḣ(t), respectively, guaranteeing the exponential synchronization of system (1).
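Remarks 12 and 13 describe the delay bound as the outcome of an LMI feasibility search. The Python sketch below shows the generic shape of such a search: bisect over the delay parameter, testing feasibility of a semidefinite program at each step with cvxpy. The block matrix used here is only a toy stand-in for condition (16) of Theorem 8 (which is not reproduced in this text), and the system matrices are arbitrary illustrative values.

import numpy as np
import cvxpy as cp

def lmi_feasible(h, A, Ad, eps=1e-6):
    # Toy delay-dependent LMI standing in for condition (16): find P, Q > 0 such that
    # [[A'P + PA + Q, P Ad], [Ad'P, -(1 - h) Q]] is negative definite.
    n = A.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    Q = cp.Variable((n, n), symmetric=True)
    W = cp.Variable((2 * n, 2 * n), symmetric=True)
    block = cp.bmat([[A.T @ P + P @ A + Q, P @ Ad],
                     [Ad.T @ P, -(1.0 - h) * Q]])
    cons = [cp.lambda_min(P) >= eps,
            cp.lambda_min(Q) >= eps,
            W == block,                      # ties the symmetric variable W to the block matrix
            cp.lambda_max(W) <= -eps]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

def max_delay_parameter(A, Ad, lo=0.0, hi=1.0, iters=25):
    # Bisection for the largest h in [lo, hi] at which the LMI stays feasible.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lmi_feasible(mid, A, Ad) else (lo, mid)
    return lo

A = np.array([[-3.0, 0.5], [0.0, -2.0]])
Ad = np.array([[0.2, 0.0], [0.1, 0.2]])
print("estimated maximal delay parameter:", max_delay_parameter(A, Ad))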
Illustrative Example
In this section, two numerical examples are provided to illustrate the effectiveness of the proposed method.
Example 1. Consider the controlled delayed PDSs consisting of four identical coupled nodes, wherein each node is a one-dimensional system described by a scalar reaction-diffusion equation (a Laplacian term plus a nonlinear term), with the stated initial values. Choosing the Lyapunov positive-definite matrices as 1.7601 I_4 and 1.901 I_4 and iteratively solving the LMIs given in Theorem 8 with respect to the delay bound, we can find the controller matrix 4.1156 I_4.
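The following Python sketch mimics the setup of Example 1 at the level of a crude finite-difference simulation: four diffusively coupled one-dimensional reaction-diffusion nodes with a constant delay and a linear feedback controller, using the quoted gain 4.1156 but otherwise hypothetical parameters (coupling matrix, nonlinearity, diffusion coefficient, boundary treatment). It only illustrates the qualitative behaviour, namely the synchronization error shrinking over time, and is not a reproduction of the paper's simulation.

import numpy as np
from collections import deque

N, nx, L = 4, 50, 1.0              # nodes, spatial grid points, domain length
dx, dt, T = L / (nx - 1), 1e-4, 2.0
tau = 0.1                          # constant delay, standing in for h(t)
steps_delay = int(round(tau / dt))
theta = 1.0                        # diffusion coefficient (assumed)
k = 4.1156                         # feedback gain, the value quoted in Example 1
G = np.array([[-2., 1., 0., 1.],   # hypothetical symmetric coupling matrix with zero row sums
              [1., -2., 1., 0.],
              [0., 1., -2., 1.],
              [1., 0., 1., -2.]])
f = np.tanh                        # hypothetical Lipschitz nonlinearity

def laplacian(u):
    lap = np.zeros_like(u)
    lap[..., 1:-1] = (u[..., 2:] - 2.0 * u[..., 1:-1] + u[..., :-2]) / dx ** 2
    return lap                     # endpoints held fixed (crude Dirichlet-type boundary)

rng = np.random.default_rng(1)
y = rng.uniform(-1.0, 1.0, (N, nx))        # node states y_i(x, 0)
s = rng.uniform(-1.0, 1.0, nx)             # synchronous target trajectory s(x, 0)
buf_y = deque([y.copy()] * (steps_delay + 1), maxlen=steps_delay + 1)
buf_s = deque([s.copy()] * (steps_delay + 1), maxlen=steps_delay + 1)

errors = []
for _ in range(int(T / dt)):
    yd, sd = buf_y[0], buf_s[0]            # states delayed by tau
    control = -k * (y - s)                 # linear feedback controller u_i = -k (y_i - s)
    y = y + dt * (theta * laplacian(y) + f(yd) + G @ y + control)
    s = s + dt * (theta * laplacian(s) + f(sd))
    buf_y.append(y.copy())
    buf_s.append(s.copy())
    errors.append(np.sqrt(np.sum((y - s) ** 2) * dx))

print("error at t=0:", errors[0], " error at t=T:", errors[-1])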
Example 2. If we take h_0 = 0.1 and the remaining delay and rate parameters equal to 0.1, with the parameters given in Example 1, then by using the LMI Toolbox in MATLAB we can find the controller matrix and the Lyapunov positive-definite matrices (see also Figure 1).
Conclusion
In this paper, we discuss the exponential synchronization for a class of N-coupled complex PDSs with time-varying delay. By using the Lyapunov-Krasovskii stability approach and matrix inequality technology, sufficient conditions are derived to ensure the exponential synchronization of the proposed networks. Simulations also verify our theoretical results. | 2,469.6 | 2017-01-01T00:00:00.000 | [
"Mathematics"
] |
HOPX regulates bone marrow-derived mesenchymal stromal cell fate determination via suppression of adipogenic gene pathways
Previous studies of global binding patterns identified the epigenetic factor EZH2 as a regulator of homeodomain-only protein homeobox (HOPX) gene expression during bone marrow stromal cell (BMSC) differentiation, suggesting a potential role for HOPX in regulating BMSC lineage specification. In the present study, we confirmed that EZH2 directly binds to the HOPX promoter region during normal growth and osteogenic differentiation, but not under adipogenic inductive conditions. HOPX gene knockdown and overexpression studies demonstrated that HOPX is a promoter of BMSC proliferation and an inhibitor of adipogenesis. However, functional studies failed to observe any effect of HOPX on BMSC osteogenic differentiation. RNA-seq analysis of HOPX-overexpressing BMSC during adipogenesis found that HOPX acts through suppression of adipogenic pathway-associated genes such as ADIPOQ, FABP4, PLIN1 and PLIN4. These findings suggest that HOPX gene target pathways are critical factors in the regulation of fat metabolism.
Retroviral transduction.
Full-length human coding sequence for HOPX (NCBI RefSeq: NM_001145459.1) was subcloned into the pRUF-IRES-GFP vector (kind gift of Paul Moretti, University of South Australia, Australia). Retroviral transduction of HOPX/pRUF-IRES-GFP or empty vector control pRUF-IRES-GFP into human BMSC was performed as previously described 15. Stably transduced BMSC expressing high levels of GFP were selected by FACS using a BD FACSAria Fusion flow cytometer (https://www.bdbiosciences.com). Overexpression of HOPX was confirmed by qPCR analysis. siRNA knock-down transfections. Human BMSC were seeded at 10^4 cells/cm^2 and siRNA knockdown was performed on the following day. Sequence-specific siRNAs against HOPX (ThermoFisher Scientific, https://www.thermofisher.com/) were used at 12 pmol to achieve a > 90% knockdown of transcript levels. The siRNAs used in this study were HOPX s39106 and s39107, and Silencer Select Negative Control #1 siRNA. The procedure was performed in accordance with the manufacturer's instructions, with a 72 h incubation period before performing functional assays. In vitro differentiation assays. Human BMSC (10^4 cells/cm^2) were cultured in either normal growth conditions (αMEM supplemented with 10% FCS, 2 mM l-glutamine, and 100 μM l-ascorbate-2-phosphate); or osteogenic inductive conditions (αMEM supplemented with 5% FCS, 2 mM l-glutamine, 50 U/mL penicillin-streptomycin, 10 mM HEPES buffer, 1 mM sodium pyruvate, 0.1 mM dexamethasone, 100 μM l-ascorbate-2-phosphate and 2.6 mM KH2PO4); or adipogenic inductive conditions (αMEM supplemented with 10% FCS, 2 mM l-glutamine, 50 U/mL penicillin-streptomycin, 10 mM HEPES buffer, 1 mM sodium pyruvate, 120 mM indomethacin and 0.1 mM dexamethasone) for up to 3 weeks as previously described 15,16. Mineralized bone matrix was assessed with Alizarin red (Sigma-Aldrich, Inc.) staining 15. Extracellular calcium was measured as previously described 15. Lipid formation was identified by Nile-red (Sigma-Aldrich, Inc.) staining as previously described 15. Differential gene expression and pathway analysis. Gene expression analyses were carried out in R using mostly the Bioconductor packages EdgeR 22,23 and Limma 24. Gene counts were filtered for low expression by removing genes with less than 1 count per million (cpm) in more than two samples, and then normalised by the method of trimmed mean of M-values 25. Differential gene expression was carried out on log-CPM counts and precision weights available from the voom function in Limma 26, with linear modelling and empirical Bayes moderation. Annotation of results was carried out using Ensembl annotations (https://grch37.ensembl.org) available in BiomaRt 27, and expression results were displayed in heatmaps using the Pheatmap package 28.
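The filtering and normalisation just described were performed in R with edgeR and limma; purely to make the filtering rule concrete, here is a small Python analogue (numpy/pandas) that drops genes with less than 1 cpm in more than two samples and computes log2-cpm values. It omits TMM normalisation, voom precision weights and the linear modelling, so it is an illustration rather than a substitute for the actual pipeline.

import numpy as np
import pandas as pd

def filter_and_log_cpm(counts, min_cpm=1.0, max_low_samples=2):
    # counts: DataFrame of raw counts, genes as rows and samples as columns
    lib_sizes = counts.sum(axis=0)
    cpm = counts.div(lib_sizes, axis=1) * 1e6
    low_per_gene = (cpm < min_cpm).sum(axis=1)
    kept = counts.loc[low_per_gene <= max_low_samples]          # remove genes low in >2 samples
    log_cpm = np.log2(kept.div(lib_sizes, axis=1) * 1e6 + 0.5)  # small offset avoids log(0)
    return kept, log_cpm

# Toy usage on simulated counts (6 samples, e.g. 3 donors x 2 conditions)
toy = pd.DataFrame(np.random.default_rng(0).poisson(20, size=(200, 6)),
                   columns=[f"sample{i}" for i in range(1, 7)])
kept, log_cpm = filter_and_log_cpm(toy)
print(kept.shape, log_cpm.shape)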
Statistics. Generation of graphs and data analysis were performed using GraphPad Prism 7 (GraphPad Software, La Jolla, CA, https://www.graphpad.com/). Statistical significance (*) of p < 0.05 between samples is shown, based on Student's t-test and one-way ANOVA as indicated.
HOPX expression is directly repressed by EZH2. Previous studies of global ChIPseq analyses found
that the H3K27 methyltransferase, EZH2, regulates HOPX expression during BMSC osteogenic differentiation 8 . Enforced expression of EZH2 in cultured human BMSC resulted in a decrease in HOPX gene expression levels ( Fig. 1A,B). Manual ChIP analysis was used to assess the binding of EZH2 to putative DNA binding sites on HOPX, using genomic DNA isolated from cultured human BMSC. The data showed preferential binding of EZH2 to the S3 binding region of the HOPX promoter region in BMSC cultured under normal growth conditions and osteogenic inductive conditions (Fig. 1C,D). However, EZH2 enrichment on all HOPX binding sites (S1, S2 and S3) was greatly diminished when BMSC were cultured under adipogenic inductive conditions (Fig. 1E).
HOPX is a promoter of BMSC proliferation. In order to determine whether HOPX regulates BMSC proliferation, HOPX was overexpressed in BMSC using retroviral transduction (Fig. 2A). Cell proliferation was assessed by BrdU incorporation under normal growth conditions. The data showed a significant increase in the proliferation rates of BMSC following enforced expression of HOPX (Fig. 2B). To further confirm that HOPX regulates BMSC proliferation, HOPX expression was knocked down using two independent siRNA molecules targeting HOPX transcripts (Fig. 2C). Knockdown of HOPX in BMSC resulted in a significant decrease in proliferation rates (Fig. 2D). These data suggest that HOPX is a positive regulator of BMSC proliferation.
HOPX is an inhibitor of BMSC adipogenesis. We next explored the role of HOPX during human BMSC differentiation. Functional studies were carried out using retrovirally transduced HOPX-overexpressing constructs or empty-vector-infected BMSC, cultured in control or adipogenic inductive media (Fig. 3A). Overexpression of HOPX resulted in decreased Nile-red-positive lipid-producing adipocytes compared with empty vector control BMSC. To verify these findings, siRNA-mediated knockdown using two independent siRNAs targeting HOPX transcripts in BMSC was performed (Fig. 4A). The data showed a dramatic increase in Nile-red-positive lipid-producing adipocytes following adipogenic induction, compared with BMSC treated with control scramble siRNA (Fig. 4B-E). Furthermore, siRNA knockdown of HOPX resulted in an increase in C/EBPα (Fig. 4F) and ADIPSIN (Fig. 4G) transcript levels compared with scramble siRNA-treated cells following adipogenic induction. Overall, these data demonstrate that HOPX is a repressor of adipogenesis. To identify the function of HOPX in BMSC osteogenic differentiation, HOPX overexpressing BMSC or empty vector infected BMSC were cultured under control or osteogenic inductive media (Fig. 5A). Assessments of extracellular calcium levels found no difference between HOPX overexpressing BMSC and vector control BMSC (Fig. 5B). Similarly, mineralized deposits were stained with Alizarin Red after 3 weeks under osteogenic growth conditions with no observable differences (Fig. 5C). In accord with these findings, HOPX overexpressing BMSC (Fig. 5D) showed no significant difference in the transcript levels of the osteogenic master regulator, RUNX2 (Fig. 5E), and the mature bone marker, OSTEOPONTIN (OPN) (Fig. 5F), compared to the vector control cells. Confirmatory studies employing siRNA-mediated knockdown of HOPX in BMSC (Fig. 5G) found no significant differences in the levels of Alizarin-positive mineral and extracellular calcium compared with scramble siRNA-treated BMSC (Fig. 5H-K). Overall, these findings demonstrate that HOPX has no direct effect on the osteogenic capacity of BMSC.
HOPX inhibits BMSC adipogenic differentiation via suppression of adipogenic associated genes.
We next explored potential mechanisms of HOPX action during BMSC adipogenic differentiation.
Total RNA was collected from HOPX overexpressing and vector control BMSC cultured for 2 weeks under adipogenic inductive conditions, then processed for RNA-sequencing to identify novel HOPX-regulated genes during BMSC adipogenic commitment [79]. Due to the variable gene expression patterns between different individuals (n = 3 donors), the P-value significance was excluded as a criterion to select differentially expressed (DE) genes. Therefore, the top 50 DE genes (Fig. 6) were selected based on fold change (an absolute log fold change |logFC| ≥ 1). To validate the RNA-sequencing results, confirmatory qPCR was performed on a number of genes that appeared to change expression in HOPX overexpressing BMSC under adipogenic conditions. HOPX transcripts were found to be elevated in HOPX overexpressing BMSC compared to vector control BMSC, and were relatively higher during adipogenesis compared to normal growth conditions for the respective population. From the transcriptional expression heat map (Fig. 6), we observed a number of genes that were upregulated during adipogenesis but suppressed in HOPX overexpressing cells. Table 1 indicates the functional role of these genes following Gene Ontology (GO) enrichment analysis, with 188 genes involved in EMT, 185 genes in adipogenesis and 127 genes in fatty acid metabolism. The differential gene expression levels of representative upregulated genes, HOPX, ADIPOQ, AOC3, FABP4, G0S2, GPD1, PLIN1 and PLIN4, were confirmed by qPCR (Fig. 7A-H).
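The selection of the heat-map genes amounts to ranking by absolute log fold change with a |logFC| ≥ 1 cut-off. A minimal pandas sketch of that step, with a hypothetical results table and column name, is:

import pandas as pd

def top_de_genes(results, lfc_col="logFC", lfc_cut=1.0, n_top=50):
    # results: one row per gene, containing a log2 fold-change column
    hits = results[results[lfc_col].abs() >= lfc_cut]
    order = hits[lfc_col].abs().sort_values(ascending=False).index
    return hits.loc[order[:n_top]]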
Other genes were found to be downregulated during adipogenesis and promoted by HOPX expression such as CNN1 (Fig. 7I). The RNA-sequencing analysis provides insight into putative targets of HOPX during BMSC adipogenesis.
Discussion
Our studies suggest that HOPX mediates postnatal BMSC proliferation and lineage determination. Protein structural studies have demonstrated that HOPX is unable to bind DNA, suggesting that HOPX functions through protein-protein interactions with partner proteins. A number of HOPX partner proteins have been identified, including Hdac1, Hdac2, MTA 1/2/3, MBD3 and Rbbp4/7 29. HOPX has been shown to be a key factor in cardiac development, where it regulates cell proliferation and differentiation at different stages during murine cardiac development 5,6. The present study found that human BMSC express low levels of HOPX during normal growth, yet expression is dramatically increased during osteogenesis and adipogenesis, in agreement with previous observations 8. Studies of homozygous loss-of-function mutations of the Hopx gene in mice showed partially penetrant embryonic lethality due to heart deformation during embryonic development; however, those that survived displayed no gross deformities. Heterozygous Hopx mutant mice are viable and comparable to wild-type mice 5,6. This suggests that Hopx is important for cardiac development. On the other hand, the incomplete penetrance of the Hopx mutation indicates that there are other compensatory mechanisms that rescue part of the phenotype. However, no bone- or fat-associated phenotypes have been reported in Hopx knockout studies. Functional studies using siRNA-mediated knockdown of HOPX did not affect BMSC osteogenesis but did alter the cellular proliferation and adipogenic potential of the cells. Our studies showed that siRNA-mediated HOPX knockdown in human BMSC decreased proliferation and increased the adipogenic potential of these cells, as demonstrated by an increase in lipid formation and increased expression of the early adipogenic marker C/EBPα and the mature marker ADIPSIN, when compared to the siRNA scramble controls. Conversely, these results were confirmed by enforced expression of HOPX in BMSC using retroviral transduction: HOPX overexpressing BMSC demonstrated decreased lipid formation and decreased expression of adipogenesis-associated markers. Our data suggest that HOPX is a novel molecular inhibitor of BMSC adipogenesis, which may have implications for the regulation of fat metabolism. Furthermore, HOPX overexpression or knockdown studies failed to demonstrate any effects on the osteogenic potential of BMSC. We predict that HOPX acts by inhibiting adipogenesis via suppression of C/EBPα, potentially by binding to adipogenic suppressor proteins that act on its promoter region as a complex. Given that EZH2 inhibits BMSC osteogenic differentiation but allows adipogenesis to proceed 11,15, this implicates HOPX as a potential counterbalance regulating BMSC adipogenesis.
HOPX is known to repress transcription by direct interaction with co-repressors such as HDAC2, which consequently inactivates the GATA6/Wnt7 pathway important in development and differentiation 7. However, conflicting data in the literature demonstrate dual functions of HOPX in promoting and inhibiting proliferation and differentiation at different developmental stages, suggesting the importance of HOPX in maintaining the balance between growth and differentiation in various tissues, based on in vitro and in vivo systems 5,6. Our data suggest that in humans, HOPX is likely to play a role in fat metabolism.
Figure 5 caption (in part): vector only (Vector) BMSC were cultured in either control (Ctrl) or osteogenic inductive (Osteo) conditions and HOPX expression levels were determined relative to β-ACTIN using qPCR (n = 6 donors); error bars represent mean ± S.E.M., one-way ANOVA p < 0.05 (*). (B) Extracellular calcium levels were quantitated and normalized to total DNA content per well (n = 3 donors). (C) Vector control and HOPX OE BMSC stained with Alizarin red. Total RNA was harvested at 7-14 days post induction (n = 6 donors) from Vector and HOPX OE BMSC; gene expression levels were measured by qPCR for (D) HOPX, (E) RUNX2, (F) OPN relative to β-ACTIN. (G) siScram, siHOPX1 and siHOPX2 BMSC were incubated in control (Ctrl) or osteogenic inductive (Osteo) conditions for 3 weeks, and HOPX expression levels were determined relative to β-ACTIN using qPCR (n = 8 donors). Extracellular calcium levels were quantitated in siScram, (H) siHOPX1 and (I) siHOPX2 BMSC and normalized to total DNA content per well (n = 4 donors). (J, K) (I, III) siScram, (II) siHOPX1 and (IV) siHOPX2 BMSC were stained with Alizarin red; representative of one donor is shown. Error bars represent mean ± S.E.M., Student's t-test p < 0.05 (*), n.s. represents non-significant. Scale bar 20 μm.
In order to identify novel HOPX target genes during BMSC adipogenesis, RNA-seq analysis was performed on HOPX overexpressing and vector control BMSC cultured under normal growth or adipogenic inductive condition for 2 weeks. Differentially expressed genes were identified between normal growth and adipogenic inductive conditions. Survey of the literature identified 188 genes involved in EMT, 185 genes in adipogenesis and 127 genes in fatty acid metabolism. To identify possible signaling or molecular pathways involved in HOPX signaling, gene ontology (GO) enrichment analysis was performed. A heatmap was constructed according to the fold change of gene expression between HOPX overexpressing and vector control BMSC cultured under either normal growth or adipogenic conditions. Many of the top 50 differentially expressed genes were found to be associated with adipogenesis such as ADIPOQ, FABP4, PLIN1 and PLIN4, which generally showed a negative correlation with HOPX expression.
ADIPOQ is a cytokine secreted in various tissues including BMSC 38. Adiponectin signals through its cell surface receptors adipoR1 (adiponectin receptor 1) and adipoR2 (adiponectin receptor 2) and can act in an endocrine, paracrine or autocrine manner 39,40. Upon ligand binding, distinct signaling pathways are initiated across tissues, including PPARα, mTOR and AMPK [41][42][43]. On the other hand, the downstream signaling of adipoR1 can stimulate oxidative phosphorylation, which subsequently increases cell differentiation via suppression of the Wnt inhibitor, sclerostin 44,45. Therefore, suppression of ADIPOQ by HOPX terminates various pro-adipogenic signaling pathways and results in decreased adipogenic potential of BMSC.
Interestingly, CNN1 gene expression was increased in HOPX overexpressing BMSC compared to vector control BMSC, suggesting that CNN1 is positively regulated by HOPX. CNN1 is an actin-binding protein (ABP) that regulates the dynamics of the actin cytoskeleton by directly or indirectly participating in the assembly/disassembly of actin filaments, which in turn regulates cell contraction and movement 46. CNN1 has been shown to play a role in bone homeostasis, where high expression of CNN1 leads to delayed bone formation and decreased bone mass 47,48. CNN1 is known to interact directly with activated or inactivated Smad1/5/8 proteins and to inhibit Bmp2-Smad1/5/8 signaling 49. Although the function of CNN1 in the regulation of fat metabolism is unknown, it is involved in the Bmp/Smad pathway, which is a critical pathway in the crosstalk between BMSC osteogenesis and adipogenesis. However, more studies are needed to determine the effects of HOPX on the 'stemness' state of BMSC, and whether HOPX is dysregulated during skeletal aging and bone disease in vivo, which are often associated with increased marrow adipogenesis at the expense of bone formation.
Collectively, our findings suggest that HOPX promotes human BMSC proliferation and inhibits adipogenesis; this is the first report of the importance of HOPX in human BMSC self-renewal and cell fate determination, acting as a possible counterbalance to EZH2 function (Supplementary Fig. 2), which normally represses HOPX gene expression in BMSC under normal growth conditions. HOPX appears to act by inhibiting BMSC adipogenesis via suppression of C/EBPα and potentially through co-factors binding to adipogenic suppressor proteins. Our future studies will employ co-immunoprecipitation and ChIPseq analyses to identify putative binding partners/co-factors of HOPX, the genome-wide binding sites of HOPX protein complexes and the role of putative HOPX targets in human BMSC growth and lineage determination. This study lays the foundation for further research into the role of homeobox family members in BMSC biology and fat metabolism. Table 1 caption: Gene ontology annotations of differentially expressed genes from RNA-seq analysis of HOPX overexpressing and Vector only BMSC cultured under normal growth or adipogenic conditions. | 4,061.4 | 2020-07-09T00:00:00.000 | [
"Biology"
] |
Gaugino mass term for D-branes and Generalized Complex Geometry
We compute the four-dimensional gaugino mass for a Dp-brane extended in spacetime and wrapping a cycle on the internal geometry in a warped compactification with fluxes. Motivated by the backreaction of gaugino bilinear VEVs, we use Generalized Complex Geometry to characterize the internal geometry as well as the cycle wrapped by the brane. We find that the RR fluxes and the non-closure of the generalized complex structures combine in the gaugino mass terms in the same form as they do in the bulk superpotential, while for the NSNS fluxes there is a crucial minus sign in the component normal to the brane. Our expression extends the known result for D3 and D7-branes in Calabi-Yau manifolds, where the gaugino masses are induced respectively by the imaginary anti-self dual and imaginary self-dual components of the complex 3-form flux $G_3$.
Introduction and main result
Gaugino bilinear vacuum expectation values (VEVs) play a prominent role in the mechanism of moduli stabilisation, both in heterotic and type II theories. In the latter, where the gaugini are part of the fermionic degrees of freedom on Dp-branes, such VEVs give rise to non-perturbative contributions to the effective potential involving the modulus that corresponds to the size of the cycle wrapped by the D-brane. In compactifications of type IIB theory, where Dp-branes wrap cycles of even dimension, there are no perturbative contributions to the potential for these moduli coming from NSNS or RR fluxes, and the non-perturbative terms become the leading order contribution. This is the mechanism proposed by Kachru, Kallosh, Linde and Trivedi (KKLT) [1] to stabilise the Kähler moduli (corresponding to sizes of four-cycles) in type IIB compactifications on Calabi-Yau manifolds.
The starting point in the KKLT construction is the well-known set-up with a self-dual combination of NSNS and RR 3-form fluxes such that the background geometry is Calabi-Yau, with a metric that is related to the Ricci-flat metric by a conformal factor [2,3]. In the effective four-dimensional theory corresponding to the compactification on a Calabi-Yau manifold with 3-form fluxes, this solution appears as a supersymmetric or supersymmetry-breaking Minkowski vacuum with stabilised complex structure moduli, which measure the sizes of 3-cycles. However, the Kähler moduli corresponding to sizes of 4-cycles are flat directions, which in this construction are lifted by gaugino bilinear VEVs on D7-branes wrapping these cycles. The resulting effective field theory has supersymmetric AdS 4 vacua with all moduli stabilised. Furthermore, these vacua exhibit a clear separation between the KK scale and the AdS curvature, allowing for a four-dimensional truncation of the field theory. Given the problems encountered in reproducing this feature in classically stabilised vacua (see e.g. [4][5][6][7][8][9]), this makes gaugino condensates interesting not only for KKLT but also for any type II compactification.
However, same as fluxes, gaugino bilinear VEVs back-react on the geometry: the internal manifold cannot be Calabi-Yau (or conformally related to it), since it does not support AdS 4 vacua. Moreover, it has been shown [10][11][12][13] that it cannot even be a geometry with SU(3) structure (defined by a single globally defined spinor), but it requires a more general "dynamic SU(2) structure" (with two globally defined spinors that can become parallel at points or subspaces of the internal manifold). This situation is better described in the language of Generalized Complex Geometry (GCG, see [14] for a review), in terms of SU(3)×SU(3) structures on the tangent plus cotangent bundle of the manifold. In order to try and describe backgrounds with gaugino bilinear vevs (i.e. a VEV for the operator corresponding to the gaugino mass terms) on D-branes, one is thus led to consider the generalized (almost) complex geometry of the internal manifold and that of the cycle wrapped by the brane.
In this paper we compute the four-dimensional gaugino mass term for a space-filling Dp-brane, which wraps a cycle Σ on the internal manifold. The brane is stable if the cycle satisfies the so-called calibration conditions, which split into algebraic and differential requirements. We assume that the cycle satisfies the former, but not necessarily the latter. For example, for D7-branes in SU(3)-structure backgrounds characterized by a real (1,1)-form J and a holomorphic (3,0)-form Ω, this means that we assume the four-cycle to be holomorphic with respect to the almost complex structure of the background. On the other hand, we do not require the differential calibration conditions. In the D7-brane example, this implies we do not necessarily assume the complex structure to be integrable (or in other words the geometry need not be complex, that is, dΩ can be anything), nor do we assume any relation between d(J ∧ J) and the 5-form flux.
To compute the gaugino mass term we use the fermionic Dp-brane action at the two fermion level of [15][16][17][18]. This is the action for a single Dp-brane 1 with world-volume flux embedded in a generic bosonic flux background. Doubling the fermionic degrees of freedom, the quadratic fermionic action can be written compactly in a canonical Dirac-like form. The redundant degrees of freedom are removed by choosing a gauge for the fermionic κ-symmetry of the action. For simplicity, we set the world-volume fluxes to zero.
Let us state the main result of this paper. We find that, for a generic Dp-brane in an SU(3)×SU(3) structure background described in terms of the even and odd pure spinors (or polyforms) Ψ_+ and Ψ_−, the gaugino mass is given by the expression quoted in (3.36) below, where the upper (lower) sign is for type IIA (IIB). Here δ^{(0)} is a scalar delta-function with support at the locus of the brane, ⟨ , ⟩ denotes the Mukai pairing defined in (2.7), F is the even (odd) polyform given by the formal sum of internal RR fluxes in type IIA (IIB), and d_H̃ is a twisted exterior derivative in which the two components of the H flux with zero and two world-volume indices appear with a relative sign (the definition is written in (3.35)).
(Footnote 1) The non-Abelian version of the fermionic action is not known. Up to order α′² the non-Abelian generalization of the fermionic DBI action amounts to adding a trace to the Abelian expression [19]. At higher orders, though, one expects the appearance of new terms that are absent in the Abelian case (see [20,21] for recent discussions of possible couplings at the four-fermion level).
The calibration conditions imply that many of the components of the pure spinors vanish on the brane locus, such that only some of the components of the fluxes give a nonzero contribution to the gaugino mass. For instance, for a D3-brane, only the RR 3-form flux F_3 leads to a mass term, irrespective of the (generalized) structure of the background.
The contribution from the RR fluxes and the derivative of the pure spinors to the gaugino mass is of the same form as in the bulk superpotential [22,23], given in (2.15). However, the NSNS flux enters with a slight (but key) difference: there is a relative sign in the component along the directions normal to the brane compared to the twisted exterior derivative that appears in the superpotential, or in the supersymmetry equations, given in (2.13). This relative sign is such that, for instance, for D3-branes, where the derivative term gives no contribution, the gaugino mass is proportional to Ḡ_3 ∧ Ω, in accordance with [24][25][26]. For D7-branes instead, the gaugino mass involves the component of H with two directions along the world-volume, which does not have a relative minus sign, and thus the gaugino mass is proportional to G ∧ Ψ_−, with G the complex polyform defined in (2.16); this reduces in Calabi-Yau manifolds to G_3 ∧ Ω, in agreement with [27,28]. Thus imaginary self-dual fluxes as in the solutions [2,3,29] do not generate gaugino masses on D3-branes, but they do on D7-branes. The latter is also true for D8-branes, since there are no components of H completely normal to the brane. For D4-branes, there is only the contribution from the component normal to the brane, which comes with a minus sign, but since the derivative term is not zero, the NSNS contribution cannot be combined with the RR piece into a Ḡ as for D3-branes. For D5- and D6-branes both components of the H-flux enter, and similarly we cannot generically write the mass term in terms of G and/or Ḡ. The paper is organized as follows. In section 2 we introduce the GCG description of type II compactifications, and also review how space-time-filling D-branes wrapping calibrated internal cycles are included in this framework. In section 3 we use this formalism to compute the gaugino mass terms. We first introduce the quadratic fermionic D-brane action and set our conventions for the dimensional reduction. We then present the computation in some detail and give our final result in Eqs. (3.34)-(3.37). Finally, we show what the gaugino mass looks like for the particular case of an SU(3) structure geometry. Further conventions are presented in Appendix A together with some useful identities.
Compactifications and GCG
We start with type II superstring theory on a warped product of an extended and maximally symmetric four-dimensional manifold (Mink_4, AdS_4 or dS_4) and a compact internal six-dimensional manifold M_6; the metric ansatz is given in (2.1). Here we use the democratic formulation of [30] (mostly following the conventions of [31]) and the polyform notation, such that F = Σ_q F_q with q = 0, 2, 4, 6, 8, 10 (q = 1, 3, 5, 7, 9) for type IIA (IIB). The RR polyform splits into a purely internal piece and a piece proportional to the 4d volume form; both coefficient forms have only internal legs, and the self-duality condition for F can be brought to the more useful form (2.3) relating the two. Note that the operator *_6 ∘ α squares to −1. We require the background to have globally defined spinor(s). In type II string theory, the most generic situation is to consider two globally defined spinors η^1 and η^2, which can become parallel at certain loci of the manifold. By using these spinors, one can build two polyforms or pure spinors which characterise the background geometry and are defined in (2.4), where the subindex on the internal spinors η^i denotes their chirality, and underlined forms are contracted with γ-matrices as defined in (A.10). These bi-spinors define an SU(3)×SU(3) ⊂ Spin(6,6) structure.
In the well-known case where the internal manifold admits only one globally well-defined spinor, η^1 and η^2 are parallel everywhere. This is known as an SU(3) ⊂ O(6) structure compactification. In such configurations the pure spinors reduce to the expressions in (2.5), where J and Ω are the usual real (1,1)-form and holomorphic (3,0)-form defining the SU(3) structure, and θ is the relative phase between the spinors: η^1_+ = i e^{iθ} η^2_+. In our conventions, described in detail in Appendix A, a supersymmetric background compatible with D3/D7-branes has θ = 0. Similarly, for the D5/D9 supersymmetry we take θ = −π/2.
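For orientation, and up to the overall normalisation and phase conventions that (2.5) fixes precisely (not reproduced here), the SU(3)-structure pure spinors are built entirely out of J and Ω and take the schematic form

\Psi_+ \;\propto\; e^{i\theta}\, e^{iJ}\,, \qquad \Psi_- \;\propto\; e^{-i\theta}\, \Omega\,,

so that the even polyform exponentiates the two-form J while the odd one is the holomorphic three-form; this is consistent with the calibration forms Re[e^{iθ} e^{iJ}] and Re(e^{−iθ} Ω) used in the examples of calibrated cycles below.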
It is not hard to show that the pure spinors satisfy the self-duality condition *_6 α(Ψ) = iΨ (2.6). We also introduce for later use the Mukai pairing (2.7), which is the form version of the inner product between Spin(6,6) spinors; here [·]_6 indicates that one should only keep the 6-form in the wedge product of the polyforms. Moreover, the pure spinors Ψ_+ and Ψ_− are "compatible", which means that they satisfy the identities collected in (2.8). Supersymmetric backgrounds: The background preserves N = 1 supersymmetry if the supersymmetry variations of the gravitino ψ and the dilatino λ, given in (2.9) and (2.10), vanish for a given supersymmetry parameter ε. We use the double-spinor notation, and σ_3 as well as P_q = {−σ_1, iσ_2, −iσ_2, σ_1} for q = {0, 1, 2, 3} mod 4 act on the fermion doublets.
Finally, H_M is defined in (A.11).
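For reference, a convention for the Mukai pairing that is standard in this literature (and matches the verbal description of (2.7) above, up to possible overall signs) is

\langle A, B \rangle \;=\; \big[\, A \wedge \lambda(B) \,\big]_6\,, \qquad \lambda(B_k) \;=\; (-1)^{\lfloor k/2 \rfloor}\, B_k\,,

where B_k is the k-form component of the polyform B and [ · ]_6 keeps only the six-form part.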
Since the internal manifold has two globally defined spinors, it is natural to use them in the decomposition of the supersymmetry parameter into an external and an internal spinor, as in (2.11) (see conventions for the fermions in Appendix A.1), where ε^1_+ is a 10d Majorana-Weyl spinor of positive chirality, ε^2_∓ has the opposite (same) chirality in type IIA (IIB), and ζ_+ and ζ_− are 4d Dirac spinors required to satisfy 2∇_ν ζ_+ = ±μ γ_ν ζ_−, with μ related to the 4d cosmological constant by Λ = −3|μ|². Here and in the rest of the paper the upper (lower) sign corresponds to type IIA (IIB).
The supersymmetry conditions are equivalent to the differential equations (2.12a)-(2.12c) on the pure spinors [32,33], where d_H is the twisted exterior derivative defined in (2.13) (see also (2.14)). Importantly, it was shown in [10,22,23,34,35] that the above supersymmetry conditions can be obtained as D- and F-flatness conditions from the superpotential (2.15). Here Ĉ are the internal RR gauge potentials, whose twisted exterior derivative d_H Ĉ gives the internal RR flux polyform, and G is the polyform, defined in (2.16), that generalizes the G_3-flux [31]. It follows from (2.12c) that for supersymmetric GKP compactifications G = G_3 satisfies the usual ISD condition *_6 G_3 = iG_3, while in more general supersymmetric compactifications this generalizes to the condition (2.17). For μ = 0, the ISD requirement (2.17) also describes a class of supersymmetry-breaking solutions with (0,3) three-form flux G_3 [2], or more generally a component of G along Ψ̄_2 [31].
D-branes in GCG
Consider a space-time-filling Dp-brane wrapping a cycle Σ_{p′} (p′ = p − 3) on the internal manifold whose geometry is encoded in the pure spinors. As is well known, for the D-brane to be stable, Σ_{p′} has to be a minimal-volume hypersurface; for BPS objects such cycles are known as calibrated submanifolds. In this situation, the supersymmetry generators satisfy the projection condition (2.18) at the brane locus, where Γ_Dp is defined in (3.5). We review this setup in the context of generalized complex geometry, following [14].
To be precise, in the GCG context we should in fact talk about generalized submanifolds and calibrations [36], whose definitions take into account the background B-field and the gauge field A_1 on the brane. The world-volume flux is characterized by the gauge-invariant combination F ≡ B_2 + (2π)^{−1} dA_1, such that the generalized submanifold is given by the pair (Σ_{p′}, F). The bosonic part of the Dp-brane action is given in (2.19); for a space-time filling brane the integration is over the warped product of X_4 and Σ_{p′}, we set 2π√α′ = 1, so that T_p = 2π is the Dp-brane tension, and C is the sum of the RR potentials, i.e. d_H C = F.
The generalization of the minimal-surface condition then tells us that (Σ_{p′}, F) is calibrated iff the DBI energy density (per unit generalized volume) can be written as in (2.20) and the differential condition (2.21) holds. This can also be obtained by studying the corresponding supersymmetry conditions. In fact, e^{4A−φ} Re Ψ_1 constitutes a so-called generalized calibration form, and (2.20) is the algebraic calibration condition that we require.
In this paper we require the brane to satisfy the algebraic calibration condition (2.20), but not the differential one (2.21). Note that the latter coincides with the bulk supersymmetry equation (2.12c). The remaining supersymmetry equations, (2.12a) and (2.12b), which generically we do not impose either, can also be interpreted as calibration conditions for domain wall-type and string-like D-branes respectively, as seen from the four-dimensional perspective.
Some well-known examples of calibrated submanifolds can easily be described for compactifications with internal SU(3) structure in the F = 0 case. On the one hand, there are the 2l-dimensional complex submanifolds, related to the calibration forms proportional to J^l ∼ [Re Ψ_1]_{2l} = Re[e^{iθ} e^{iJ}]_{2l} in type IIB, where θ depends on the brane in question as explained below (2.5). On the other hand, there are the special Lagrangian submanifolds, for which the calibration form is Re(e^{−iθ} Ω).
It will prove useful to describe the calibration condition (2.20) in more detail. It can be split as follows [37] (see also [38]). First, one has the condition (2.22); this is nothing but the D-flatness condition for the four-dimensional effective theory on the D-brane. Second, one requires that (Σ_{p′}, F) is a generalized complex submanifold with respect to the generalized complex structure associated to the other pure spinor, Ψ_2; this corresponds to the 4d F-flatness condition, and implies the condition (2.23). For instance, in the special Lagrangian example given above, the D- and F-flatness conditions are actually the special and Lagrangian conditions, respectively. These calibration conditions will be crucial for our computation, so we will provide further details about them shortly.
Finally, we define δ_{6−p′}[Σ_{p′}] as the (6 − p′)-form Poincaré dual to the cycle Σ_{p′} wrapped by the Dp-brane, defined by the property that integrating it against any p′-form ω over the internal manifold reproduces the integral of ω over the cycle. We also define δ^{(0)}[Σ_{p′}] as the associated scalar δ-function, Eq. (2.26), where vol_{Σ_{p′}} is the volume form on the cycle and we used the form scalar product defined in (A.13). If the brane satisfies the algebraic calibration condition (2.20), this is equivalent to an expression involving (Im Ψ_1)_{6−p′}, the (6 − p′)-vector dual to the (6 − p′)-form piece of Im Ψ_1.
Fermionic D-brane action
The supersymmetric version of the DBI and Wess-Zumino action at the two-fermion level was computed in [16][17][18]. For any Dp-brane, the bosonic terms were given in (2.19) and the fermionic terms can be compactly written as in (3.1). Here, the world-volume fermion θ is a doublet of Majorana-Weyl (MW) fermions, with Γ^{(11)} the ten-dimensional chirality matrix. The θ_i are not independent: the fermionic action possesses a fermionic gauge symmetry called κ-symmetry. One can use this to gauge away half of the fermionic degrees of freedom. We fix the gauge by requiring the condition (3.3), where Γ_κ is built out of the world-volume chirality operator as in (A.17). This is in contrast with the supersymmetry generators defined in (2.11), which satisfy (2.18) (in terms of Γ_κ, this equation reads ε̄ Γ_κ = ε̄). The sign difference with respect to (3.3) is crucial, since the world-volume spinor should not be proportional to the pullback of the supersymmetry generator: for supersymmetric backgrounds the latter is a redundancy of the background. The matrix Γ_κ squares to the identity, and thus ½(1 − Γ_κ) is a projector onto the subspace of "physical" fermions. The gauge-fixing condition (3.3) relates the two MW fermions according to (3.4). The operator Γ_Dp is a 'chirality' operator on the generalized world-volume of the Dp-brane, Eq. (3.5); here the indices run over world-volume directions and the matrices Γ_{α_i} represent the pullback of the Γ_M. The operator L(F) is given in (A.16) and vanishes for F = 0, which is the situation we will restrict to in this paper. Finally, D_α and ∆ are the operators involved in the gravitino and dilatino supersymmetry variations, defined in (2.9) and (2.10). We are interested in the combination (3.6) that appears in (3.1), where q sums over all even RR field strengths and P_q is defined above (2.11).
Dimensional reduction: general idea
In this section we consider a Dp-brane in the warped compactifications described in section 2. The metric ansatz is given in (2.1), and bulk and brane fields split into external and internal components. The ten-dimensional Γ-matrices are decomposed as in (3.7), with γ_µ and γ_m the 4d and 6d gamma matrices and γ^{(5)} and γ^{(7)} the corresponding chirality operators. Let us now focus on the world-volume spinor θ, which combines the degrees of freedom corresponding to the gaugino and the chiral fermions on the brane. Here we are only interested in the gaugino. Since we are considering D-branes that satisfy the algebraic calibration condition, we can use the bulk spinors η^1 and η^2 to write the 4d gaugino as in (3.8) [31], where λ_+ and λ_− are 4d Dirac spinors of definite chirality and the ellipsis stands for the 4d fermions in the three chiral multiplets. The overall numerical constant and warp factor are introduced in order to get a canonically normalized kinetic term for the 4d gauginos. The κ-fixing condition (3.3) then implies the relation (3.9) between the internal spinors, where the γ-matrices are internal. Note that this condition on the internal spinors, which is required to hold only at the brane locus, is the same as the internal part of the calibration condition (2.18). The relative minus sign between (2.18) and (3.4) has been absorbed when defining the 4d gaugino in (3.8).
Calibration conditions revisited
The relation (3.9) between the internal spinors arises from the calibration condition (2.18) and can be used to better understand the local constraints (2.22)-(2.23) that the pure spinors satisfy on calibrated cycles. For this, we first consider the q-form component of Ψ_2 = Ψ_± with n indices along the p′-cycle wrapped by a calibrated Dp-brane. The calibration condition implies the relation (3.10), where in the first equality we used (3.9) twice and in the second one we took the transpose. This implies that certain components of Ψ_2 are zero due to the calibration condition. An analogous calculation for Ψ_1 = Ψ_∓ leads to (3.11). Therefore (3.9) mixes Ψ_1 and its complex conjugate, so the calibration condition must be imposed separately on the real and imaginary parts of Ψ_1.
Note that in both cases the projection imposed by the calibration condition depends on the combination p + q − 2n, which can only take values within a certain range. Indeed, since n indicates the number of world-volume indices of the q-form component of a polyform, it must satisfy 0 ≤ n ≤ q. Also, because the brane wraps a (p − 3)-cycle, necessarily n ≤ p − 3. Finally, requiring that there are enough transverse directions implies q − n ≤ 9 − p. Combining these inequalities one finds that physically sensible options satisfy 3 ≤ p + q − 2n ≤ 9. This range can be combined with the above constraints to find that, at the brane locus,
Ψ_2|^{(n)}_q = 0 for p + q − 2n = 4 or 8,
Im Ψ_1|^{(n)}_q = 0 for p + q − 2n = 5 or 9, (3.12)
Re Ψ_1|^{(n)}_q = 0 for p + q − 2n = 3 or 7.
Dimensional reduction: details of calculations
Our goal is to compute the dimensional reduction of the fermionic action in order to obtain the gaugino mass-terms. We start with the gauge-fixed action where θ satisfies (3.3) andD, defined in (3.6), involves a derivative part, a term proportional to the gradient of the dilaton, a RR contribution and an NSNS flux piece. The derivative term involves both external and internal directions. The former gives the 4d gaugino kinetic term, while the latter combines with the fluxes and the dilaton giving the gaugino mass. The resulting four-dimensional action is then of the form (3.14) with m λ = m d λ + m φ λ + m F λ + m H λ and f the gauge kinetic function. Here we provide the detailed calculation of the dimensional reduction of the fermionic brane action. The uninterested reader may skip this section to find the final result in the next one. We compute the kinetic function f , as well as each of the contributions to the mass separately.
Derivative term: We need to computē where α runs over over all world-volume directions. Separating external and internal indices we find 7θ with a = 1, . . . , p . In the second line we have used the fact that the terms proportional to dA are zero by virtue of (A.9). Taking the norm of the internal spinors to be η 1 † + η 1 + = η 2 † + η 2 + = |η| 2 = e A , as in supersymmetric compactifications, the first line gives the kinetic term in (3.14) with On the other hand, the derivative along the internal directions is slightly more involved. First, we note that using the κ-gauge fixing condition (3.9) we get (no sum over m) with s = 0 if m is a world-volume index and s = 1 if it is transverse. This means that in the second line of (3.16) we can actually sum over all internal indices. We then have where in the first line we have integrated by parts, and in the second one we used the ciclicity of the trace, the definition of the pure spinors (2.4), and also used the trace to rewrite the result as an inner product (see (A.21)). We have also made use of (A.23) to convert the derivative into an exterior derivative, and the fact that upon acting with η 2 ∓ η 1 † − on the left and taking the trace, only one of the terms in (A.23) survives due to (A.9).
Putting everything together, the mass contribution from the derivative term gives Dilaton gradient term: By performing the dimensional reduction and using (A.9) one finds that Consequently, the dilaton gradient does not contribute to the 4d gaugino effective mass. The same argument holds for terms involving derivatives of the warp factor.
RR flux term: At first sight, the operator in (3.6) involving the RR fluxes splits them according to the amount of world-volume indices, denoted as n. We get The coefficient in front of each component depends on a peculiar combination of the degree q, the number of world-volume indices n and the dimension of the Dp-brane involved. However, we will see that in the final expression this odd-looking separation of the different components will not appear. In order to show this, let us first focus on a particular component of the purely internal fluxesF (n) q . Its mass contribution is proportional to where, starting from the second equality, the contraction is performed using the internal γ-matrices, and in the last one we used (A.21). The last expression provides the key to understand why the apparent separation in components of equation (3.22) does not appear in the final expression: for a calibrated brane many components of Ψ 2 vanish on the brane locus, as we saw in (3.12), and only those satisfying p + q − 2n = 6 give a contribution. A similar story holds for the RR fluxes with external legs. Note that the relevant combination is now (vol 4 ∧e 4AF ) (n) q , so the degree of these forms is at least 4. Also, because we consider space-filling Dp-branes these fluxes have n ≥ 4. Performing the dimensional reduction this time we find (3.24) so the main difference with the previous case comes from the appearance of the 4d chirality matrix. As it happened for internal fluxes,F also appears in a contraction with Ψ 2 and thus is subject to the projections in (3.12). Nevertheless, for fluxes with 4d indices we find that the ones making a contribution are those with p + q − 2n = 2. As a consequence, we see that all fluxes that make a contribution to the mass term have the same coefficient in (3.22) but there is a relative sign between internal fluxes and those with external indices. This means that the combination of fluxes appearing on the effective mass iŝ after using the duality condition (2.3). Moreover, because fluxes appear in the mass term contracted with the pure spinor Ψ 2 , which satisfies the self-duality condition (2.6), we conclude that the contribution from theF andF fluxes is exactly the same, so we will write the final result in terms of internal fluxesF only. Finally, we put everything together to find that the total RR flux contribution to the gaugino mass term is given by where one must take into account that the calibration conditions select only a few components of this flux, that we list in Table 1.
NSNS flux term: Using the same notation as above, we find a similar split of the H-flux according to the number of world-volume indices. In the double-spinor notation, this is combined with the Pauli matrix σ_3. Using once again the calibration condition (3.9), we find that only the components with zero and two world-volume indices, H^{(0)} and H^{(2)}, make a contribution to the effective mass. We can furthermore proceed analogously to the previous cases and rewrite these contributions in terms of the pure spinors, using (A.24) and the fact that only one of its terms survives in the relevant trace.
Combining everything together we obtain the NSNS contribution m^H_λ. An alternative computation of the H-flux term can be done by using (3.9) on the spinor on the left in (3.28), which brings in the Hodge duality operator ⋆_Σ on the cycle Σ_{p′}, whose definition is in (A.20). This implies that the H-flux contribution to the gaugino mass term can also be written in the form (3.33). As in the previous cases, the calibration conditions automatically project out some of the flux contributions.
Dimensional reduction: summary of results and analysis
Putting together our results so far, we see that the four-dimensional action for the gaugino kinetic and mass terms takes the standard form, where the gauge kinetic function is given in (3.17) and the gaugino effective mass collects the contributions computed above. Finally, we use the scalar delta-function δ^(0) defined in (2.26), that is, the scalar version of the (6 − p)-form representing the Poincaré dual of Σ_p, and the Mukai pairing (2.7), to rewrite this effective mass in terms of an integral over the full six-dimensional manifold. In this way, we obtain the main result of this paper. There are several features of this result that are worth pointing out. First, due to (3.12), the (algebraic) calibration condition implies that several terms do not contribute to m_λ. In Table 1 we put together all fluxes that can give a contribution to the effective mass. As for the derivative term, the only non-zero contributions come from the derivative along world-volume directions, as the original formula (3.16) indicates.
Note that the components of the RR fluxes that contribute to the gaugino mass are compatible with the T-duality rules. In fact, the components surviving the projection can be guessed by starting from one of the extreme cases. It turns out that the case at the bottom of Table 1 is known. Indeed, in a compactification with D9-branes one must necessarily introduce O9-planes, so this is a Type I compactification. Therefore, the only flux available is the RR 3-form F_3. This is precisely what we find from the calibration condition: the only possibility is that all indices of F_3 lie along the world-volume of the D9-brane, namely n = 3. A T-duality turns the D9 into a D8-brane, and F_3 with n = 3 into both F_2 with n = 2 and F_4 with n = 3, in agreement with what we find. The rest of the table can be obtained by iterating this argument.
Furthermore, we note that in (3.36) both the RR fluxes and the derivative term contribute to the effective mass of the gaugino in exactly the same way as they appear in the superpotential (2.15) [22,23]. This was conjectured in [39] for the simpler situation of D7-branes in an SU(3) structure background. However, the full mass term is not proportional to the (integrand of the) superpotential: somewhat counterintuitively, the contributions generated by H^(0) and H^(2) enter with opposite sign. More precisely, the component of the H flux with all indices transverse to the D-brane world-volume appears with the opposite sign as compared to (2.15). This relative sign constitutes a crucial consistency check. Indeed, for the simple case of calibrated D3-branes (where the structure at the location of the brane must be SU(3), corresponding to the spinors in (2.5)) it is well known that the gaugino mass is proportional to Ḡ_3 · Ω [24][25][26]. In contrast, and as will be described in some detail below, for D7-branes in a similar configuration we know that the gaugino mass is given by (G_3 · Ω) [27,28]. Consistency with these two particular examples would be hard to achieve without this odd-looking relative sign in the combination H^(2) − H^(0). As a consequence, the combination (G · Ψ_2) appearing in the superpotential gives the gaugino mass only for D7-, D8- and D9-branes, for which H^(0) = 0. This mysterious relative sign is actually absorbed in the alternative expression for the H-flux contribution to the gaugino mass involving the Hodge star on the cycle, Eq. (3.33), which can be written in terms of an integral over the whole manifold and combined with the other contributions. Supersymmetric backgrounds: Finally, let us briefly show that the gaugino mass terms we have computed vanish for calibrated D-branes in supersymmetric backgrounds, as they should. For this it is more convenient to go back to the original brane action in terms of the internal spinors η_{1,2} as in (3.1). Recall that the mass terms arise from the operator (3.6), which is a combination of the supersymmetry variations of the gravitino (2.9) and the dilatino (2.10). In supersymmetric backgrounds these variations vanish, and upon using the warped compactification ansatz (2.11) one finds that the internal spinors must satisfy certain conditions; note that in both expressions we only look at the internal component, and thus all contractions are taken with internal γ-matrices. Analogous conditions hold upon exchanging the spinors and shifting the signs of the corresponding fluxes accordingly.
In order to proceed, recall the minus sign in the definition of θ_2 as compared to the supersymmetry parameter ε_2 (see footnote 8). This relative sign implies that the above SUSY conditions can be used to rewrite the multiple contributions to the gaugino mass in terms of the RR flux contribution, and as seen above it is sufficient to focus on the internal terms, characterized by the internal fluxes. Three types of mass contributions exist: the ones coming from the four-dimensional part of D_µ; twice (after imposing the supersymmetry condition (3.38)) the contribution from the cycle directions D_α; and twice (after imposing (3.39)) the contribution from ∆, with the relative (−1/2) factor from the combination in (3.6). In order to sum all contributions it is convenient to use identities arising from the anticommutation of Γ-matrices, similar to (3.22). We find that the sum of all contributions is proportional to a combination that vanishes due to (3.12). As expected, no gaugino mass terms are induced for calibrated branes when the background is supersymmetric.
D-branes in SU(3) structure
Here we compute the gaugino masses for Dp-branes in an SU(3) structure background. First, note that the calibration condition (3.9) implies that one cannot have D4- or D8-branes in this context. In contrast, D9-branes can only be present for an internal manifold with SU(3) structure, while D3-branes also require an SU(3) structure, but only at the location of the brane (see footnote 9). However, since the computation of the gaugino mass is performed at the location of the brane, the SU(3) structure expression gives the general gaugino mass term for D3-branes. The SU(3) structure pure spinors are given in (2.5) (see footnote 10). We first study the type IIB branes.
Footnote 8: There is also a warp-factor difference between both definitions, but this does not modify our procedure, since warp-factor gradients do not contribute to the gaugino mass, just like the dilaton gradients.
Footnote 9: This means that D3-branes can be calibrated in a so-called dynamic SU(2) structure, as long as the structure reduces to SU(3) (the internal spinors η_1 and η_2 become parallel) at the location of the brane.
Footnote 10: The gaugino masses should actually have an overall extra phase e^{−iθ} from Ψ_2, which we are not writing. This extra phase is important in AdS_4 compactifications, since it should be aligned with the phase in µ [13].
• For a D3-brane (θ = 0), the gaugino mass (3.34) simplifies: the contribution of the NSNS flux comes from H^(0) only, and in the last equality we have used the usual definition G_3 = F_3 + ie^{−φ}H. This is exactly the mass term computed in [24] for the particular case of Calabi-Yau manifolds. Note that the exterior derivative gives no contribution, and this extends to the more general case of SU(3)×SU(3) structure, since the latter has to be an SU(3) structure at the location of the D3-brane.
• For D5-branes (θ = −π/2), the only RR flux that contributes is F_3: the other RR fluxes do not give a contribution, as Ψ_2 has only a 3-form piece. We can see here clearly that the projection onto the component with one world-volume index indicated in Table 1 is redundant, as only the component of F_3 with three anti-holomorphic indices enters, out of which only one can be a world-volume index for a calibrated D5-brane, for which Σ should be a holomorphic cycle. Finally, the NSNS fluxes give no contribution, as H ∧ Re Ψ_1 is a five-form. In this case the integrand is proportional to that of the superpotential.
• For D7-branes (θ = 0), we recover the result known from Calabi-Yau manifolds [27,28]. The derivative of the (real part of the) pure spinor e^{−φ} e^{iJ} gives no contribution, as it has one- and five-form pieces only. The RR part is straightforward. The H contribution comes from the H^(2) piece, and thus has the opposite sign as for D3-branes. Once again, the components of F_3 and H that give a non-zero contribution can only have two world-volume indices.
Finally, for type IIA the only brane one can have in SU(3) structure is a D6-brane.
• For D6-branes we can re-absorb the phase in Ω and thus simply take Ψ_1 = iΩ, obtaining (3.44). Here all NSNS terms (the derivative, H^(2) and H^(0)) contribute, and we cannot combine them with the RR piece to form either a G or a Ḡ flux.
A.2 Other conventions and definitions
Throughout the paper, an underlined differential form denotes the contraction of said form with the appropriate (antisymmetrized) product of gamma matrices; depending on the context these can be 10d or 6d matrices, written generically as Γ_M. The scalar product · contracts the indices of two p-forms, and its generalization to polyforms is simply the sum of the scalar products of the different components. A useful identity involving the scalar product of (poly)forms is
(A_p · B_p) vol_6 = A_p ∧ ⋆_6 B_p,   (A.14)
where vol_6 is the volume form on M_6. Thus, given a polyform Ψ satisfying the self-duality condition (2.6) and another generic one, say Φ, this means that
⟨Ψ, Φ⟩ = ∓ i (Ψ · Φ) vol_6,   (A.15)
after using that α ⋆_6 = ∓ ⋆_6 α. The operators in (3.1) that involve the world-volume flux are expressed in terms of Γ_κ, the relevant operator for κ-gauge fixing in the spinor doublet notation, which satisfies
Γ_Dp^{-1} = Γ_Dp^† = (−1)^{1+p+[p/2]} Γ_Dp.   (A.18)
In order to define the Hodge star operation on the cycle Σ wrapped by the brane, used in Section 3, we first need to impose that the indices in any form are ordered as follows to remove sign ambiguities:
A = (1/(n!(q−n)!)) A_{a_1...a_n m_1...m_{q−n}} dy^{a_1} ∧ ... ∧ dy^{a_n} ∧ dy^{m_1} ∧ ... ∧ dy^{m_{q−n}},   (A.19)
and the Hodge star operator on the cycle is then defined with the convention ε_{1...p} = 1 (A.20).
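For concreteness, the standard conventions behind these definitions read as follows; the explicit formulas are not reproduced in this text, so the expressions below should be read as our assumptions, chosen to be consistent with (A.14):

\underline{A_q} \,=\, \frac{1}{q!}\, A_{M_1 \dots M_q}\, \Gamma^{M_1 \dots M_q},
\qquad
A_q \cdot B_q \,=\, \frac{1}{q!}\, A_{M_1 \dots M_q}\, B^{M_1 \dots M_q},

so that for polyforms the scalar product is simply the sum of these contractions over the components of equal degree.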
A.3 Useful identities
We provide an example of a very useful type of identity used to perform the dimensional reduction:
η^{2†}_± F_q η^1_+ = Tr( F_q (η^1_+ η^{2†}_±) ) = (i|η|^2/8) Tr( F_q Ψ_{q,±} ).
In the second equality we used the definition of pure spinors (2.4), and Ψ_{q,±} denotes the q-form component of Ψ_±. Using the Γ-matrix anticommutator one can easily derive a similar identity. The bispinor operation that corresponds to the wedge of a 1-form (e.g. F_1) with the pure spinor Ψ_1 is given by an analogous trace formula.
"Physics"
] |
Megahertz-rate ultrafast X-ray scattering and holographic imaging at the European XFEL
Results from the first megahertz-repetition-rate X-ray scattering experiments at the Spectroscopy and Coherent Scattering (SCS) instrument of the European XFEL are presented.
Introduction
X-rays have long been used as an advanced characterization tool of matter. They are typically used for diffraction, spectroscopy and imaging experiments with high spatial and energy resolutions. These properties have now been exploited for more than a century to achieve a deep understanding of molecules, solid materials and biological samples, fundamental to the progress of science. The appearance, one decade ago, of X-ray free-electron lasers (XFELs) providing intense X-ray pulses with a high degree of transverse spatial coherence and ultrashort pulses, has opened great opportunities for imaging and time-resolved experiments in atomic physics, condensed matter, chemistry and life sciences beyond what is possible at synchrotron light sources (Ayvazyan et al., 2006; Emma et al., 2010; Ishikawa et al., 2012; Altarelli, 2011; Bostedt et al., 2016; Grünbein et al., 2018; Allaria et al., 2012; Patterson et al., 2010; Kang et al., 2017; Halavanau et al., 2019; Pellegrini, 2016).
XFEL technology constantly advances, particularly in terms of spectral brightness. The European XFEL (EuXFEL) is the first facility able to deliver soft and hard X-ray pulses at megahertz (MHz) repetition rate generated via a self-amplified spontaneous emission (SASE) process (Decking et al., 2020). This greatly improves the statistics of the collected data and in turn the achievable signal-to-noise ratio within a typical experiment time. While in serial femtosecond X-ray crystallography many copies of the samples can be injected into the beam at MHz repetition rates for accumulation of data (Chapman et al., 2011), it remains a challenge to recover or to replenish the sample for condensed matter studies in fields such as magnetism, strongly correlated materials and quantum science.
In this work, we demonstrate non-destructive, stroboscopic soft X-ray scattering and holography experiments at MHz repetition rates at the Spectroscopy and Coherent Scattering (SCS) beamline of the EuXFEL, exploiting the opportunities offered by the newly commissioned, custom-made two-dimensional detector able to match the EuXFEL MHz operation. We illustrate the initial capabilities of the beamline at the time of the presented experiments with representative examples of magnetic scattering and imaging experiments of the type performed at other XFELs (Pfau et al., 2012; Graves et al., 2013; Henighan et al., 2016; Büttner et al., 2017; Reid et al., 2018; Dornes et al., 2019; Malvestuto et al., 2018; Weder et al., 2020; Büttner et al., 2021). We also estimate the heat load on the sample in these experiments, providing a figure-of-merit to find the optimal experimental parameters.
Operation of the MHz-rate beamline and detector
At the EuXFEL, X-rays arrive in 10 Hz trains of multiple pulses. At the time of the experiment, the number of pulses within a train could be arbitrarily chosen between 1 and 150, separated by at least 440 ns, i.e. at a maximum repetition rate of 2.25 MHz within the train, see Fig. 1. The SCS beamline covers an energy range of 0.25 keV to 3 keV, well suited for core-level spectroscopy at the L-edges of 3d transition metals (including the most common ferromagnets), the M-edges of rare earth elements, the K-edges of lighter elements such as carbon, oxygen and sulfur, as well as the L-edges of some 4d metals. The photon energy can be changed via the undulator gap and synchronized with the soft X-ray monochromator. In addition, pulse energy tuning may be required for photon energy steps larger than 100 eV. A soft X-ray monochromator provides an energy resolution of approximately 0.25 eV for the Co and Fe absorption L-edges reported in this work (E/ΔE ≈ 3000), and reduces the pulse energy to tens of microjoules. The pulse duration of the monochromatic X-ray beam is 30 fs on average. See also Table I of the supporting information for key features of the SCS instrument.
As shown in Fig. 1, the incoming intensity I_0 of each pulse is monitored by an X-ray gas monitor (XGM) (Maltezopoulos et al., 2019). The beam size at the sample position can be adjusted using adaptive Kirkpatrick-Baez (KB) mirrors, providing a minimal spot diameter of approximately 1 µm and a maximum spot size of up to 500 µm in both horizontal and vertical directions. Samples are mounted in the forward-scattering fixed target (FFT) chamber, which also includes an electromagnet that can be used to apply magnetic fields of up to 350 mT parallel to the X-ray beam direction. The SCS instrument is equipped with the novel DSSC (DEPFET Sensor with Signal Compression) detector, which can be positioned on a translation stage at given sample-detector distances over a range of 0.35 m to 5.40 m prior to the experiment. This allows users to cover different scattering wavevector ranges. During experiments the distance can be changed by 1.50 m around the working point. A multichannel-plate-based transmission intensity monitor (not shown in Fig. 1) simultaneously collects the direct beam after the DSSC detector and is used to measure the sample absorption. The pump laser beam is inserted in the FFT experiment station with an in-coupling mirror and impinges on the sample nearly collinearly with the X-rays. The laser used here is a YAG-white-light-seeded, non-collinear optical parametric amplifier developed in-house at the EuXFEL, providing 800 nm pump pulses with a duration down to 35 fs, which can match the pulse pattern of the XFEL (Pergament et al., 2016; Palmer et al., 2019). The incoming pulse energy can be adjusted from 0.05 mJ up to 2 mJ per pulse with a spot size from tens to hundreds of micrometres in diameter. Spatial overlap between the optical and X-ray beams is achieved by monitoring the beam position on boron nitride in the plane of the sample. Temporal overlap is achieved by looking at the optical reflectivity change of a Si3N4 sample upon X-ray excitation. The delay between the optical pump and X-ray probe can be changed by up to 1 ns using a mechanical delay line. Larger delays can be selected using a phase shifter or the trigger system. In this work, the sample is always pumped at half the probe repetition frequency in order to obtain pairs of pumped and unpumped measurements that are close in time. This allows users to remove the effect of long-term drift on the measurements. The DSSC is presently the fastest 1 Mpixel camera available worldwide, providing single-photon sensitivity in the soft X-ray regime. The present camera uses for each hexagonal pixel a miniaturized silicon drift detector (MiniSDD) coupled to a linear readout electronics front-end, while a second version will employ non-linear DEPFET active pixel sensors. The DSSC detector is capable of recording data from the full pixel array with a 220 ns frame interval, corresponding to a 4.5 MHz repetition rate. The data are retrieved in the inter-pulse-train gap of the XFEL. The sensitive area of the camera is about 505 cm² in size.
[Figure 1: Schematic of the SCS beamline and of the X-ray pulse structure at the EuXFEL. The X-ray beam propagates from the right to the left side. The X-ray bursts arrive in trains which contain a user-defined number of pulses. The X-ray gas monitor (XGM) measures the pulse intensity I_0 before the focusing KB optics. The pump laser is delivered into the experimental chamber via an auxiliary window and directed to the sample almost parallel with respect to the X-ray beam. The photons scattered by the sample are recorded on the DSSC detector.]
The camera comprises 16 sub-units called 'ladders' (horizontal blocks) arranged into four quadrants. Each ladder has two monolithic sensors and is read out by 16 independent readout application-specific integrated circuits (ASICs). The four quadrants can be moved independently if required by the experiment, while the location of the ladders within one quadrant is fixed.
While the DSSC detector always runs at 4.5 MHz, a 'veto' system allows frames to be discarded according to a user-defined pattern or an additional signal provided by an external veto source. When pulses are delivered at a lower frequency than 4.5 MHz, the user can choose to record frames at the same frequency as the XFEL. Discarding (vetoing) unused frames is crucial to minimize the amount of data collected and to perform efficient analysis. In fact, at full repetition rate the camera produces a data rate of 134 Gbit s⁻¹, which leads to single experiments creating petabytes of data. So-called intra-dark frames, see Figs. 2(a)-2(b), are regularly collected in between data frames to improve the contrast in the final image, as described in the next paragraph. Fig. 2(c) is an example of the raw data collected by the DSSC detector; the uncorrected image has a mean of 73.35 ADU, which is almost entirely an offset signal due to the analog-to-digital converters (Hansen et al., 2013; Porro et al., 2021) and can be removed by appropriate signal subtraction.
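As a rough consistency check of the quoted data rate (a back-of-the-envelope estimate: the assumption of roughly 800 stored frames per train and 16-bit words per pixel is ours, not a number quoted in this text):

10\ \mathrm{trains\,s^{-1}} \times 800\ \mathrm{frames} \times 1024 \times 1024\ \mathrm{pixels} \times 16\ \mathrm{bit} \;\approx\; 1.3 \times 10^{11}\ \mathrm{bit\,s^{-1}} \;\approx\; 134\ \mathrm{Gbit\,s^{-1}}.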
The first dark signal subtraction, pixel-by-pixel, is made using dark frames acquired in a separate run with the same settings of the DSSC camera (gain and veto pattern), but without X-rays hitting the detector. This is labeled as a dark run, and subtraction of such a run from the data results in the plot in Fig. 2(d). The few darker squares in the figure are due to the fact that, for a few random frames, the ASICs did not transfer the acquired data correctly. This is due to a firmware bug that was solved after the experiment. A separate dark run helps to remove the large static electronic offset, but does not correct for other sources of noise, such as the signal-generated backscattered photons or other systematic electronic effects occurring during the measurements. These can, however, be removed using the intra-dark signal, which is closer in time to the signal events. By combining the dark run with the intra-darks, one can achieve the most appropriate background subtraction, as shown in Fig. 2(e), where the image was calculated by subtracting both the dark-run and the intra-dark contributions. Note that the three black squares indicate ASICs that were damaged and cannot be used for data collection. We estimate an experimental root-mean-square (RMS) noise for each pixel of σ/N^{1/2} ≈ 5 × 10⁻³ ADU, where σ ≈ 1.4 ADU is the standard deviation and N ≈ 10⁵ is the number of events in a measurement run. With the four data sets needed for complete offset subtraction, this leads to a total RMS noise of σ_tot/N^{1/2} ≈ 10⁻² ADU, which allows signals in the 0.1-1 ADU range to be readily measured, as shown in Fig. 2(e).
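A minimal sketch of such a combined background subtraction is given below. Since the exact expression used for Fig. 2(e) is not reproduced in this text, the way the four data sets are combined here (and all variable names) should be read as an assumption based on the description above.

import numpy as np

def corrected_image(data, intra_dark, dark_data, dark_intra_dark):
    """Combine a separate dark run with intra-dark frames (all in ADU).

    data            : frames recorded with X-rays on the sample
    intra_dark      : intra-dark frames recorded in the same run
    dark_data       : frames from a separate dark run, data slots
    dark_intra_dark : frames from the separate dark run, intra-dark slots
    Each input is a stack of shape (n_frames, ny, nx).
    """
    # Static electronic offset, removed with the separate dark run
    static_corrected = data.mean(axis=0) - dark_data.mean(axis=0)
    # Measurement-correlated background, removed with the intra-dark frames
    residual_background = intra_dark.mean(axis=0) - dark_intra_dark.mean(axis=0)
    return static_corrected - residual_background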
During the beam time, more than 780 terabytes of data were captured using the EuXFEL's control and acquisition system. Offline data analysis was directed from Python and Jupyter notebooks (Fangohr et al., 2020), making use of the storage, calibration, compute and data analysis infrastructure at EuXFEL (Kuster et al., 2014; Fangohr et al., 2018). Analysis tools that were developed for this work and that can be re-used for similar research have been integrated into the EuXFEL open-source software data analysis stack (Fangohr et al., 2022).
[Figure 2: Schematic of the pulse labeling for the dark subtraction and application example. X-ray pulse labeling for acquisition (a) with X-rays and (b) without X-rays, a so-called dark run. Separate dark runs are usually 1 min long for practical reasons (here 90 000 frames). (c) Raw data collected by the DSSC detector plotted around its mean value. (d) Dark subtraction using only a separate dark run and (e) dark subtraction combining a separate dark run and the intra-dark events.]
Ultrafast small-angle X-ray scattering at MHz repetition rates
Small-angle X-ray scattering (SAXS) in the soft X-ray regime has been shown to be a unique tool to explore not only the temporal but also the spatial dynamics of ultrafast processes on nanometre length scales. In ultrafast magnetism, this capability has proven to be a crucial feature, since many of the fundamental physical processes at play are strongly connected to the nanometre-scale structure of the material (Pfau et al., 2012; Graves et al., 2013; Bergeard et al., 2015; Iacocca et al., 2019; Hennes et al., 2020). We measure thin-film multilayers with a composition of Ta(3 nm) / Cu(5 nm) / [CoFe(0.25 nm)-Ni(0.75 nm)]20 / CoFe(0.25 nm) / Cu(3 nm) / Ta(3 nm) deposited on 200 nm-thick Si membranes with a lateral size of 2 mm. Sample thicknesses were calibrated with X-ray reflectometry. This CoFe/Ni multilayer sample has an out-of-plane magnetization showing ordered stripe domains with a typical domain size in the range 115-125 nm, as revealed by magnetic force microscopy (see the supporting information). The magnetic domains were aligned to stripes after in-plane demagnetization and were characterized via SAXS at the VEKMAG endstation at the BESSY II synchrotron (Noll & Radu, 2017) and at the RESOXS endstation of the SEXTANTS beamline at Synchrotron SOLEIL, as well as by magnetic force microscopy.
Due to the X-ray magnetic circular dichroism (XMCD) effect, the magnetic stripe domains act as an absorption grating for linearly polarized photons in resonance with the Co L3 absorption edge at approximately 778 eV (Hellwig et al., 2003). This gives rise to an anisotropic scattering signal along a preferential axis. The sample also comprises a curved diffraction grating milled into the silicon carrier membrane using a focused Ga+ ion beam (FIB) system, creating a non-resonant reference scattering signal on the detector. The DSSC camera is placed 2 m from the sample and the X-ray beam size is 75 µm. As optical pump, we use 800 nm, 100 fs laser pulses with a spot size of 370 µm. The pump laser is operated at a repetition rate of 282 kHz with 10 pulses per train, while the XFEL runs at 564 kHz with 20 pulses per train, allowing unpumped X-ray scattering frames to be recorded in between pumped ones. Due to thermal damage, the number of optical laser pulses had to be limited to 10 per train, which led to a total of 20 X-ray pulses per train to record pumped and unpumped data. For static measurements, the number of X-ray pulses could be increased to 50 per train.
A typical scattering pattern from the magnetic stripe domains recorded at the SEXTANTS beamline at Synchrotron SOLEIL (Sacchi et al., 2013) is shown in Fig. 3(a), with the corresponding XFEL data in Fig. 3(b). In both images, we observe two broad features arising from the scattering of X-rays from the magnetic domains along the top-left/bottom-right diagonal of the image, as well as the smaller features related to the reference diffraction grating along the opposite diagonal. The synchrotron image is acquired with an average photon rate of 5 × 10¹² photons s⁻¹ and 1 s exposure time, while for the XFEL data a total of 9 × 10¹¹ photons were incident on the sample, with 50 pulses per train and 600 trains in total, with an average of 3 × 10⁷ photons pulse⁻¹. Note that the repetition rate of SOLEIL is 325 MHz, which gives roughly 1.6 × 10⁴ photons pulse⁻¹.
The black symbols in Fig. 3(c) show the laser-induced ultrafast dynamics of the magnetic scattering spot intensities, measured in a pump-probe configuration, with a pump fluence of 5 mJ cm⁻² and with the sample at magnetic remanence. In the same plot, we compare the XFEL data with those recorded on the very same sample using a table-top time-resolved magneto-optical Kerr effect (tr-MOKE) setup with a saturating magnetic field and with a pump fluence of 9 mJ cm⁻². Both curves describe the laser-induced ultrafast demagnetization of the ferromagnetic film (Beaurepaire et al., 1996). The curves were fitted using the formula derived from a three-temperature model (Beaurepaire et al., 1996; Malinowski et al., 2008; Hennes et al., 2020), in which τ_M is the demagnetization time and τ_R is the picosecond recovery time, different from the thermal one, which has a much larger time constant. The constants A, B and C are amplitudes that can be related to the different physical processes. Here we are only interested in the time constants, and we neglect further considerations on these amplitudes. The convolution with a Gaussian function Γ(t) takes into consideration the finite pulse durations, which were different for the tr-SAXS and tr-MOKE measurements, and allows us to extract the true demagnetization constant. From the fit of the XFEL data, we find τ_M = 102 ± 8 fs and τ_R = 2.18 ± 0.07 ps, while from the tr-MOKE we obtain τ_M = 129 ± 10 fs and τ_R = 6.08 ± 0.5 ps. The slightly smaller time constants retrieved for the XFEL measurements are consistent with a smaller quenching of the sample (Koopmans et al., 2010). Note that the good signal-to-noise ratio of the XFEL data indicates that normalization of the DSSC data with the XGM signal is reliable (see the supporting information for details), opening the way to high-quality spectroscopy experiments not achievable at earlier XFELs (Tiedtke et al., 2014).
[Figure 3: MHz-rate time-resolved magnetic X-ray scattering. Resonant Co L3-edge scattering pattern of a CoFe/Ni thin-film multilayer recorded at (a) the SEXTANTS beamline at Synchrotron SOLEIL and (b) the SCS beamline at the EuXFEL. The first-order magnetic scattering is observed along the top-left to bottom-right diagonal. The scattering from the non-magnetic grating is the feature visible along the opposite diagonal. The intensity is in linear scale and normalized to the maximum magnetic scattering amplitude. (c) Time-resolved pump-probe data recorded on the same sample. Black symbols: data from the EuXFEL, computed as the azimuthally integrated intensity of the first-order peak in the frames when the pump laser was impinging on the sample, divided by the nearest previous unpumped frame. Gray symbols: data from a table-top MOKE setup with different pump fluence. The solid lines show the fit to the data. Further details are given in the main text.]
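As an illustration of this fitting procedure, the sketch below uses a common double-exponential parametrization of the demagnetization and recovery, convolved with a Gaussian. Since the exact expression used by the authors is not reproduced in this text, the functional form, parameter names and starting values are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import curve_fit

def demag_model(t, A, B, tau_M, tau_R, sigma, t0):
    """Normalized magnetic signal versus pump-probe delay t (ps).

    A, B  : demagnetization and plateau amplitudes
    tau_M : demagnetization time (ps), tau_R : recovery time (ps)
    sigma : Gaussian width modelling the finite pulse durations (ps)
    t0    : time-zero offset (ps); delays must be uniformly spaced.
    """
    tt = t - t0
    step = np.where(
        tt > 0,
        A * (1.0 - np.exp(-tt / tau_M)) * np.exp(-tt / tau_R)
        + B * (1.0 - np.exp(-tt / tau_R)),
        0.0,
    )
    dt = t[1] - t[0]
    # Convolution with the Gaussian response, applied as a 1D filter
    return 1.0 - gaussian_filter1d(step, sigma / dt)

# Example fit to measured (delay_ps, signal) arrays:
# popt, pcov = curve_fit(demag_model, delay_ps, signal,
#                        p0=[0.3, 0.1, 0.1, 2.0, 0.05, 0.0])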
X-ray holographic imaging at MHz repetition rates
High-resolution X-ray imaging techniques are mostly of two kinds: those based on Fresnel-type optics, and those which are lensless. While the former have found much application at synchrotron light sources, they are difficult to realize in the soft X-ray region at free-electron lasers due to the risk of damage by strong absorption of the intense X-ray pulses. At these facilities, lensless techniques are preferred for full-field imaging, since they can exploit the high degree of transverse coherence of XFEL radiation (Wang et al., 2012; von Korff Schmizing et al., 2014; Willems et al., 2017). X-ray holography is one such lensless imaging technique that relies on the interference between two beams, where one holds information about the sample and the other acts as the phase reference. A Fourier transform of the two-dimensional diffraction pattern reconstructs the real-space image. The samples are magnetic multilayer films of [Ta(5 nm) / Co20Fe60B20(0.9 nm) / MgO(2 nm)]15 with out-of-plane magnetization. They were produced by DC magnetron sputtering deposition on Si3N4 membranes. From magnetic force microscopy we observe approximately 200 nm-wide labyrinth magnetic domains at remanence. The holography aperture is a square with a side of 2.5 µm, rotated by 45° with respect to the sides of the X-ray transparent window where the film is deposited. The reference beam is generated by two orthogonal slits in the holography mask (see the supporting information for details). This allows the image to be reconstructed using the HERALDO technique (Zhu et al., 2010; Duckworth et al., 2011), which mitigates the artifacts due to the detector gaps. The HERALDO holography mask was fabricated by milling reference slits through the 1 µm-thick Au layer using an FIB system. These reference slits (40 nm wide and 4 µm long) are milled through the Au, the Si3N4 membrane and the magnetic thin film, while only the Au is removed over the sample (object hole).
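For illustration, a minimal sketch of a HERALDO-type reconstruction is given below: the measured hologram is multiplied in reciprocal space by the linear filter corresponding to a derivative along the slit direction, and a single inverse Fourier transform then yields the real-space reconstruction. This is only a schematic of the published procedure (Zhu et al., 2010; Duckworth et al., 2011); variable names, sign conventions and the slit orientation are placeholders.

import numpy as np

def heraldo_reconstruct(hologram, slit_angle_rad):
    """Sketch of a HERALDO reconstruction for a slit (linear) reference.

    hologram       : 2D array of measured scattered intensity
    slit_angle_rad : in-plane orientation of the reference slit
    """
    ny, nx = hologram.shape
    qy, qx = np.meshgrid(np.fft.fftfreq(ny), np.fft.fftfreq(nx), indexing="ij")
    # First-order differential filter along the slit direction
    filt = 2j * np.pi * (qx * np.cos(slit_angle_rad) + qy * np.sin(slit_angle_rad))
    holo = np.fft.ifftshift(hologram)   # place q = 0 at the array origin
    recon = np.fft.fftshift(np.fft.ifft2(filt * holo))
    return recon                        # object images appear at the slit ends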
The sample was pre-characterized at the COMET endstation at SOLEIL (Popescu et al., 2019). In Fig. 4(a) we plot the magnetic scattering signal recorded at the synchrotron, calculated as the difference between the signals taken with X-rays of opposite helicities at the Fe L3-edge, i.e. at approximately 707 eV. In Fig. 4(b), we show the corresponding image reconstruction applying the full HERALDO procedure (Zhu et al., 2010; Duckworth et al., 2011). The image reveals the presence of magnetic domains in one of the six smaller squares, which are the cross-correlations between the object and the three corners of the L-shaped reference slit. Each corner yields a pair of conjugated images, where the opposite contrast indicates oppositely oriented magnetic domains. The XFEL measurement on the same sample (with a different magnetic domain pattern due to exposure to a magnetic field between the respective measurements) is shown in Fig. 4(c); in this case the X-ray helical polarization at the required photon energy is achieved with a thin Fe film polarizer inserted into the beam before the sample (Müller et al., 2018), at the expense of photon flux. Helicity reversal is obtained by reversing the magnetic field applied to the thin-film polarizer. The detector is placed 4.6 m away from the sample, in order to record the magnetic information in the lower q-range. The beam spot is 50 µm in diameter, smaller than in the case of the SAXS experiment, but much larger than the holography apertures. Data were recorded at repetition rates up to 2.25 MHz with no sample damage observed. This can be partly explained by the thick gold layer in which the holography mask is patterned, as we discuss in detail in the final part of this work. The hologram is the result of a 15 min acquisition (1000 pulses s⁻¹), corresponding to 4 × 10¹³ photons on the sample area for each helicity. As a comparison, the photon count on the same sample area at the COMET endstation of the SEXTANTS beamline was 1 × 10¹³ photons acquired in 90 s. Fig. 4(d) shows the 2D Fourier transform of the hologram of the XFEL data. As in Fig. 4(b), we observe the autocorrelation of the object aperture in the center of the image, and three pairs of reconstructions.
[Figure 4: Megahertz-rate magnetic X-ray holographic imaging. Magnetic hologram of a CoFeB thin-film multilayer recorded at the Fe L3-edge at (a) the COMET endstation at the SEXTANTS beamline at Synchrotron SOLEIL and (c) the SCS beamline at the EuXFEL at a 2.25 MHz repetition rate. The intensity is in linear scale and normalized to the maximum intensity value. Reconstructions of the magnetic domains using the HERALDO technique on (b) the synchrotron data and (d) the free-electron laser data.]
Discussion
When comparing the SAXS measurements in Fig. 3, we note that the number of pulses per train had to be reduced to 50 in order to keep the sample unchanged by the X-rays; consequently, the average photon flux (photons s⁻¹) is two orders of magnitude smaller compared with that of the synchrotron, mostly limited by the burst-mode operation of the machine. Naturally, the XFEL measurements are performed using femtosecond X-ray pulses, which allows for ultrafast experiments that are not feasible at a synchrotron. We have also confirmed that the time constants extracted with the table-top and XFEL experiments are comparable, demonstrating the reliability of the XFEL measurements of ultrafast dynamics.
Looking at the holographic imaging data we notice that, although the XFEL image is slightly noisier, the magnetic domains are clearly distinguishable. We believe that part of the issue is also a non-ideal illumination of the holographic mask, which can be readily improved with an optimized design. Furthermore, the slight distortion in the reconstruction is due to a simplified hexagonal-to-Cartesian pixel conversion that does not include sub-pixel interpolation. Nevertheless, these data demonstrate that a full magnetic image reconstruction at the EuXFEL is possible within tens of minutes. Hence, 'movies' of the magnetization on ultrafast time scales and with nanometre resolution are now possible at an XFEL within a typical beam-time allocation. The availability of an afterburner generating circularly polarized X-rays will increase the polarization degree from 50% (thin-film polarizers) to 100% and enhance the signal-to-noise ratio of the charge-magnetic interference term, which is responsible for the magnetic contrast in the image reconstructions. Altogether, these improvements potentially shorten the acquisition time by up to one order of magnitude. Finally, we estimate the possible heating effects of X-ray pulses at high repetition rate on the samples. We perform heat diffusion simulations (Appendix A), and we use the dependence of the magnetization on temperature to calculate the loss of signal due to heat. The details are given in Appendix A. These calculations allow us to obtain a figure of merit (FOM) which can then be plotted as a function of XFEL repetition rate (considering the actual pulse structure), and for different pump fluences, as shown in Figs. 5(a)-5(b). The FOM is determined by the competition of two processes: the number of photons reaching the detector, which increases linearly with the average X-ray power, and the amount of meaningful signal (proportional to the magnetization squared), which decreases with average power. Thus, the FOM can be interpreted as the number of information-carrying photons hitting the detector over a given time. We find that the optimal repetition rate is of the order of 100 kHz for pump-probe measurements on typical samples on free-standing membranes, which can be pushed to the MHz rate if a proper heat-sink layer is implemented within the sample, as for the case of the holographic imaging experiments.
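The sketch below illustrates how such a figure of merit can be evaluated. It is a toy model only: the steady-state heating law, its coefficient, the absorbed pulse energy and the mean-field magnetization relation (cf. Appendix A) are placeholder assumptions, not the simulation actually performed for Fig. 5.

import numpy as np

def mean_field_m(T, Tc=750.0, n_iter=200):
    """Reduced magnetization from the S = 1/2 mean-field relation m = tanh(m*Tc/T)."""
    if T >= Tc:
        return 0.0
    m = 1.0
    for _ in range(n_iter):
        m = np.tanh(m * Tc / T)
    return m

def figure_of_merit(rep_rate_hz, absorbed_pulse_energy_j=1e-6, k_heat_K_per_W=200.0):
    """FOM ~ (photons per unit time) x (magnetic signal fraction m**2).

    The sample temperature is modelled as a steady-state rise proportional to the
    average absorbed power; k_heat_K_per_W is a placeholder thermal coefficient."""
    avg_power_w = rep_rate_hz * absorbed_pulse_energy_j
    T = 300.0 + k_heat_K_per_W * avg_power_w
    return rep_rate_hz * mean_field_m(T) ** 2

# Example: compare a few intra-train repetition rates
# for f in (0.1e6, 0.5e6, 1.0e6, 2.25e6):
#     print(f"{f/1e6:.2f} MHz -> FOM = {figure_of_merit(f):.3e}")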
Related literature
The following references, not cited in the main body of the paper, are cited in the supporting information: Avery et al.; Ordal et al. (1985, 1987, 1988).
APPENDIX A Heat diffusion simulation
The fraction of X-ray and optical pulse energy absorbed by the CoFe/Ni multilayered sample was calculated using the optical constants and refractive indexes of the sample materials at the relevant photon energies, available from online databases. The subsequent heat diffusion in the layers was simulated with equation (1). The first and second terms of equation (1) describe the heat diffusion within the layer n, while the third term introduces the heat exchange between the layers n and n + 1. In equation (1), T is the temperature of the layer n, ρ is the mass density, C is the heat capacity and k is the thermal conductivity of the respective layer, and h is the coefficient of heat transfer between layers n and n + 1. The value of h depends on the thermal conductivity and the thickness of the two layers, as well as on the thermal conductance of the interface between them (Lyeo & Cahill, 2006). Equation (1) was solved numerically for each layer of the sample in polar coordinates. We assumed that the system is two-dimensional, since the thickness of the layers is much smaller than their lateral size. In the heat diffusion simulation, the lateral sample size was 2 mm, the spacing of the computation grid was 2 µm, the total time of the simulation 270 ms and the time step 1.35 ps. While varying the X-ray and pump pulse repetition rates, the pump was always kept at half the frequency of the X-ray probe. We used constant-temperature boundary conditions, assuming perfect heat removal from the sample at the perimeter, which is always maintained at room temperature. All parameters of the simulation were taken as constants at room temperature. The magnetization was estimated from the temperature values using the mean-field approximation (Kittel et al., 1996), where T and M are the average temperature and magnetization of the magnetic layers within the X-ray beam spot, T_C = 750 K is the Curie temperature of the CoFe/Ni sample and M_S = 1 is the saturation magnetization.
Acknowledgements: The DSSC consortium wants to thank all engineers, technicians, postdocs and students who have contributed to the design, the development and the assembly of the camera. The authors would like to thank the teams of the SEXTANTS beamline at Synchrotron SOLEIL (proposal ID 20160880 for the characterization of static properties of the FeCo/Ni multilayers, and through in-house beam time for the static holography) and the VEKMAG endstation at the BESSY II synchrotron for the static characterization of the samples. Open access funding enabled and organized by Projekt DEAL.
"Physics"
] |
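The layer-resolved heat equation described above is not reproduced in this text; a form consistent with the description (radial diffusion within layer n plus exchange with layer n + 1, written in polar coordinates for a laterally two-dimensional film) would be, as an assumption,

\rho\, C\, \frac{\partial T_n}{\partial t}
\;=\; k\, \frac{\partial^2 T_n}{\partial r^2}
\;+\; k\, \frac{1}{r}\, \frac{\partial T_n}{\partial r}
\;+\; h\, \left( T_{n+1} - T_n \right),

where the first two terms describe in-plane heat diffusion in layer n and the last term the heat exchange with the neighbouring layer, with ρ, C, k and h as defined above.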
Improvement of magnetic resonance imaging using a wireless radiofrequency resonator array
In recent years, new human magnetic resonance imaging systems operating at static magnetic field strengths of 7 Tesla or higher have become available, providing better signal sensitivity compared with lower field strengths. However, imaging human-sized objects at such high field strengths and the associated precession frequencies is limited by the technical challenges associated with the wavelength effect, which substantially disturbs the transmit field uniformity over the human body when conventional coils are used. Here we report a novel passive inductively-coupled radiofrequency resonator array design with a simple structure that works in conjunction with conventional coils and only requires tuning to the scanner's operating frequency. We show that inductive coupling between the resonator array and the coil improves the transmit efficiency and signal sensitivity in the targeted region. The simple structure, flexibility and cost-efficiency make the proposed array design an attractive approach for altering the transmit field distribution, especially at high-field systems where the wavelength is comparable with the tissue size.
The number of ultra-high-field (UHF) magnetic resonance imaging (MRI) systems with field strengths of 7 Tesla (7T) or higher has increased significantly, driven by growing interest in neuroscientific applications and clinical research [1][2][3][4][5]. In 2017, the "Conformité Européenne" mark was granted to a 7T MRI system, indicating compliance with safety and environmental protection standards 6, and later the same year the United States Food and Drug Administration (FDA) approved the first clinical 7T system 7. UHF magnets have begun to satisfy the growing demand for increased signal-to-noise ratio (SNR), detailed spatial information and functional contrast [8][9][10][11]. The enhanced signal can be used to reduce the scan time while simultaneously improving the spatial resolution necessary to visualize small brain structures in greater detail. To this end, various protocols have exploited the higher SNR to improve visualization of the supratentorial brain and skull base, and have even enhanced visualization of layers within the cerebellum [12][13][14][15][16]. The value of UHF MRI in modern neuroimaging is demonstrated by the ever-increasing number of 7T scanners installed all over the world. Although 7T MRI was not integrated into clinical settings until 2017, there are now more than 87 installations worldwide and this number is expected to increase rapidly in the coming years 17.
The radiofrequency (RF) MRI coils used to transmit RF energy into the body and receive MR signals from it play a critical role in determining image quality. Uniformity of the transmit field (B1+) generated by the RF coils is essential in MRI applications in terms of SNR and image uniformity. The most common commercially available coils, specifically for 7T head MRI, consist of a relatively short single-channel transmit birdcage coil surrounding a 32-channel receive array. These coils are limited in their coverage because of the technical challenges associated with the RF wavelength effect 18,19. As the wavelength in the tissue becomes comparable with the body dimensions, the B1+ homogeneity and efficiency of birdcage coils become insufficient 20,21. Regions of increased and decreased transmit efficiency occur at body-specific locations, causing an inhomogeneous spatial distribution of the flip angle. Consequently, areas of artifactually high and low signal often appear in the posterior fossa (cerebellum, brainstem and other caudal areas of the brain, Supplementary Fig. S1). The difficulty of covering the entire brain, including the posterior fossa, has led most 7T applications to focus on specific regions of the brain 22,23. It is possible to ameliorate the B1+ non-uniformity caused by the RF wavelength effect using active RF shimming techniques. One such technique is parallel transmission (pTx), which uses multiple transmit coils to improve B1+ homogeneity in UHF MRI systems [24][25][26][27][28]. The results indicate that pTx can significantly enhance B1+ uniformity across the entire brain. However, in clinical use pTx systems are limited by hardware complexity and by the difficulty of ensuring compliance with local specific absorption rate (SAR) safety limits compared with a single-transmit configuration 29.
As an alternative to pTx, passive RF shimming methods have been proposed, which have historically been the only clinically feasible way to homogenize the B1+ distribution in a region of interest (ROI). One of these methods uses dielectric pads (DPs; high-permittivity material with εr > 50) 30,31. DPs are widely used in MRI to improve the transmit efficiency and increase SNR [32][33][34]. The resulting benefits can be explained by the modified Ampère's law: displacement currents within the DPs produce a secondary local RF field which augments the applied B1+ field. However, there are limitations of conventional DPs, which are usually comprised of mixtures of liquids and ceramic powders: their material parameters can change over time, and some ingredients may be bioincompatible. Artificial dielectrics were introduced as an alternative to conventional DPs to avoid some of these disadvantages 35. Artificial dielectrics are typically periodic structures made of metal or dielectric elements that support the propagation of slow waves with phase velocities in the operational band similar to those in natural dielectric materials 36. Simulated phantom results at 7T show that artificial dielectrics can provide the same increase in the B1+ distribution as conventional DPs 35. Similar approaches have been reported using resonant metal-dielectric metasurfaces 37,38, with a hybrid structure comprised of a two-dimensional metamaterial surface and a very high permittivity dielectric substrate, which enhance the local performance of MRI coils 39. One such study at 7T demonstrated the utility of the metasurface by acquiring in vivo human brain images and proton MR spectra with enhanced local sensitivity in the visual cortex on a commercial 7T system 40. However, the metasurface has a complex structure and may inadequately regulate the hyperintense MR signal in its close vicinity, possibly impairing anatomical information near the structure of interest.
Another passive shimming method to locally improve the B1+ of the birdcage coil is a non-resonant surface RF coil appropriately coupled to a volume transmit coil 41. Theoretical analysis and numerical simulation results demonstrated the feasibility and effectiveness of this method for human head MRI at 7T.
In our previous study, we showed that using an array of three receive-only coupled elements can improve the SNR in the target region 42. In this study, we aim to improve brain MRI at 7T, focusing on the posterior fossa, using such an RF array placed against the posterior and caudal portion of the head inside a conventional single-channel transmit MRI head coil. We propose a simple and practical passive resonant RF array design which works in conjunction with conventional transmit/receive (Tx/Rx) MRI coils to improve the transmit efficiency, signal sensitivity and anatomical coverage of conventional coils. This design consists of an array of geometrically decoupled resonator elements patterned on a flexible substrate. The addition of an inductively coupled RF array enhances the transmit efficiency during the RF excitation and the signal sensitivity during signal reception near the target region. The tunable and size-adjustable structure of the passive RF resonator array allows it to be easily implemented in MRI applications to improve performance in terms of visualizing regions with reduced transmit efficiency and signal sensitivity, specifically at UHF systems. This concept is generalizable to different field strengths and anatomies of interest.
Results
RF array electromagnetic simulations. The fundamental principle behind the proposed technique is that inductive coupling between the passive RF resonator and the MRI Tx/Rx coils results in enhancement of the transmit (B1+) and receive (B1-) magnetic fields. This principle originates from (i) inductive coupling between the resonator and the RF excitation, which leads to an increase in the transmit efficiency (Fig. 1a); and (ii) inductive coupling between the resonator and the magnetization vector (M) during reception, which leads to receive-signal amplification (Fig. 1b). The transmit magnetic field B_rf of the transmit coil inductively couples to the RF resonator; the reaction of the resonator to the excitation field leads to a circulating current in the resonator, which results in a local magnetic field B_re, as illustrated in Fig. 1c. B_re is an additional field provided by the resonator inductance that adds to the original excitation field, enhancing the RF excitation over the object 43,44.
A single RF resonator was modeled as a circular broadside-coupled split-ring resonator (BCSRR) 45, which is a three-layer structure consisting of a flexible dielectric substrate (εr = 3.4) sandwiched between two split-ring resonators (SRRs), where the SRRs are counter-oriented (rotated by 180° relative to each other; Fig. 2a). The SRR is an enclosed metal loop with a gap (g) along the loop. An equivalent circuit model of a BCSRR is shown in Fig. 2b, encircled with a red dashed line, where the resonator is modeled as a series RLC circuit with a resistance (R), distributed capacitance (C) and inductance (L). The electrical characteristics of the RF resonator [resonance frequency (f0) and Q-factor] rely on C, L and R, which depend on four design parameters: (i) diameter D, (ii) dielectric thickness d, (iii) gap width g and (iv) strip width W.
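For reference, under this series-RLC description the resonance frequency and quality factor follow the textbook relations (standard circuit formulas rather than values from this work):

f_0 = \frac{1}{2\pi \sqrt{LC}}, \qquad Q = \frac{1}{R}\sqrt{\frac{L}{C}} = \frac{2\pi f_0 L}{R},

so tuning the resonators to 312 MHz (about 5% above the proton Larmor frequency at 7T, as described below) fixes the product LC, while the conductor geometry and losses set R and hence the Q-factor.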
A 10-element wireless passive RF resonator array was designed using 10 BCSRRs. The elements were arranged as a 2 × 5 matrix (Fig. 2c). The array is a multilayer laminated structure consisting of two metal layers and a dielectric layer, as shown in the transverse and longitudinal cross-sectional planes (Fig. 2d). To avoid physical contact due to overlapping elements on the same plane, a thin layer of an insulator is implemented (not shown in Fig. 2d). The critical overlap technique was used to decouple adjacent elements 46. As illustrated in Fig. 2c, y is the center-to-center distance between two inline neighbouring elements. The mutual coupling between these elements is minimized when y = 0.76D. Simulated S-parameters show a transmission coefficient (S21) of -22 dB and -18 dB at 312 MHz for the geometrically decoupled inline element pairs and the diagonal elements, respectively. To obtain optimized transmit efficiency in the presence of the resonator, the off-resonance frequency of the resonators was adjusted 5% above the Larmor frequency (further electromagnetic (EM) formulations are provided in Supplementary Information B).
The array was placed at the posterior position, between the head and the coil: 2 cm away from the coil and 0.3 cm away from the model, as illustrated in Fig. 2e.
The simulation results showed improved transmit efficiency (B1+/√SAR) in the regions covered by the array. The transmit efficiency was enhanced significantly in the ROI compared with the case without the array (Fig. 3a,b). The RF array resulted in a 1.8-fold improvement in transmit efficiency in the posterior brain and cerebellum regions (red ellipse).
Maximum 10 g local SAR values were simulated using the human Duke model from the library of CST (Computer Simulation Technology) Microwave Studio. The SAR distribution was computed for the birdcage coil without and in the presence of the array (Fig. 3c,d). The results showed that the SAR increased by 24% at the location where the array was placed (encircled area), while the transmit efficiency was improved 1.8-fold. The peak SAR (indicated with the letter H in Fig. 3c) decreased by 9% in the presence of the array.
RF array prototype.
A single BCSRR was fabricated using the preferred design parameters found in the EM modeling. Parameter optimization was performed to obtain efficient electrical characteristics. The fabrication process included the following steps: (i) the first copper layer of the SRR was patterned on one side of a flexible dielectric substrate (Kapton® polyimide film, DuPont™), and (ii) a second copper layer of the SRR was patterned on the other side of the substrate, rotated by 180° but on the same axis as the first layer. The resulting geometrical parameters used for the BCSRR fabrication were: D = 50 mm, d = 200 µm, W = 3 mm, g = 16 mm.
The built-in distributed capacitance between two layers in a BCSRR was used for fine frequency tuning. Changing the conductor length can affect the capacitance and inductance values, and consequently the operating frequency.
For the RF array fabrication, 10 SRRs (first layer) with a 0.76D center-to-center overlapping distance were patterned on one side of a single piece of dielectric substrate (Kapton), followed by patterning of another layer of 10 SRRs (second layer) on the other side, where the first and second layers were aligned counter-oriented. The total dimension of the array is 9 cm × 20 cm (Fig. 4a). The critical overlapping based on the loop center-to-center distance (0.76D) between neighbouring elements served to reduce the inductive coupling between elements. In order to avoid RF over-flipping and excessive absorption of RF energy during transmission, some of the resonators were decoupled from the RF excitation using antiparallel cross diodes (Macom, Newport Beach, CA, USA), specifically the resonators inside the MRI coil in a strong coupling position with the RF excitation. The resonators with diodes enhanced only the receive signal. The circuit model of a decoupled resonator, encircled with a blue dashed line, is shown in Fig. 2b. The coupling level was evaluated using the B1+ mapping method, which will be explained later. The array extends out of the coil (Fig. 4b), which extends the region of usable B1+ coverage outside of the volume coil.
Bench-top measurements. Although the required design can be achieved through a simulation analysis, it can also be tuned on the bench-top based on transmission and reflection coefficient measurements. Specifically, to obtain the minimum decoupling condition, y should be adjusted such that the frequency with minimum transmission coefficient (S21) is equal to the tuned frequency (312 MHz). An array of two decoupled resonators illustrating the bench-top experiments (Fig. 5a) was used for the S-parameter measurements plotted in Fig. 5b. As expected, the critically overlapped (y = 0.76D) resonators are strongly decoupled from each other, with an S21 of -20 dB. The strong decoupling also makes the reflection coefficient (S11) plot symmetrical around the tuned frequency. A 10-element array consisting of 10 resonators (critically overlapping 2 × 5 matrix) was further built, in which two kinds of significant coupling exist between the elements: (i) coupling between inline neighbouring elements; and (ii) coupling between diagonal elements (Fig. 5c). Decoupling between elements was examined in the presence of the phantom by S21 measurements between pairs of elements while all other neighbouring elements were detuned. Figure 5d shows the coupling levels of a single element (#2) relative to the adjacent inline neighbouring (#1, 3, 5) and diagonal elements (#4, 6). In the array construction, geometrical overlaps were adjusted to achieve an acceptable decoupling level (< -16 dB) between inline elements. Diagonal elements showed a decoupling level of about -14 dB.
[Figure 2 caption, panels (c)-(e): (c) A schematic of a 10-element RF resonator array consisting of 10 resonators in a 2 × 5 matrix. An anti-parallel cross diode is used to detune the strongly coupled elements from RF excitation to avoid RF over-flipping (diode not shown in the 3D model); the corresponding circuit model is encircled with a blue dashed line in (c). (d) Cross-sectional view of the array in both the transverse and longitudinal axes. The array consists of three layers: top and bottom metal layers and a dielectric layer sandwiched in between. A thin layer of an insulator is implemented to avoid physical contact between the overlapping elements (not shown here). (e) Schematic of the MRI head coil, wireless RF array and human model. The array is positioned at the base of the skull such that it extends outside the coil to extend the spatial coverage.]
The loaded and unloaded Q-factors of the resonators were determined, yielding an average loaded Q-factor of 12 and an average unloaded Q-factor of 18. We also tested the effect of bending the 2-element array by 45°; bending the array in the middle did not significantly change the S-parameters.
Heating experiment. Safety testing following a suitable modification of the ASTM standards was performed to ensure that tissue heating remained well below safety limits. After 15 min of RF transmission, maximum temperature increases of 0.55 °C and 0.71 °C were measured at the decoupled (detuned by diode) and coupled (tuned) RF resonators, respectively, while a temperature rise of 0.5 °C was recorded by the control probe. Corresponding SAR gains of 1.11 for the detuned array and 1.45 for the tuned array were calculated relative to the corresponding point in the control (without array) setup. Therefore, for safe scanning, artificially reduced SAR limits of 88% and 70% must be adhered to for the detuned and tuned arrays, respectively. In Supplementary C, Supplementary Tables 1 and 2 summarize the measured temperatures and calculated SARs at various points in the vicinity of the array. The temperatures at each position remained elevated for several minutes after the RF transmission was turned off. This indicates that thermal convection in the gel was low, suggesting that the gel experiment exaggerated heating above what would be seen in vivo, where blood flow enhances convective cooling. Additional safety analyses in the presence of the array were performed at various positions relative to the phantom and the head coil, which are shown in Supplementary C.
In vivo brain MRI. Figure 6a,b show measured in vivo axial transmit field (B1+) maps obtained with and without the RF array in one of the five human subjects. The B1+ maps are calculated using the turbo-FLASH based method with the same input power. An average improvement factor of about 1.8 ± 0.2 is measured over the ROI (dashed ellipse) in five human subjects. Figure 6c,d show the experimental SNR maps without and in the presence of the RF array in the axial plane. The array improves the SNR by an average factor of 2.2 in the ROI, including the cerebellum and brainstem. Supplementary Figure S2 shows B1+ maps and SNR maps acquired in a phantom. In Supplementary Fig. S3, ex vivo MRI experiments at 7T conducted on 3 postmortem musk ox brains (in the context of an unrelated research project) using a Nova 1Tx/32Rx head coil in conjunction with the wireless RF array showed an average enhancement factor of 2.
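As background, in the commonly used preconditioning-pulse turbo-FLASH approach to B1+ mapping, the local flip angle α of a preparation pulse is estimated from the ratio of a magnetization-prepared image S_prep to an unprepared reference image S_ref; whether this exact variant was used here is not stated in the text, so the relation below is illustrative only:

\alpha = \arccos\!\left(\frac{S_\mathrm{prep}}{S_\mathrm{ref}}\right),
\qquad
\frac{B_1^+}{B_{1,\mathrm{nominal}}^+} = \frac{\alpha}{\alpha_\mathrm{nominal}}.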
In vivo MRI feasibility of the RF array was studied in five human subjects using TSE and GRE sequences. The array was placed behind the neck covering the posterior fossa, where the B1+ efficiency and signal sensitivity are intrinsically poor. Figure 7a,b show sagittal T2-weighted TSE images with improved visibility of the cerebellum, brainstem, and cervical vertebrae (red dashed circle) in the presence of the RF array. Also, in Supplementary Fig. S4, the axial T1-weighted GRE images obtained with and without the array, focusing on the posterior fossa, show that the presence of the array improves the image uniformity and the visibility of the cerebellum and the brainstem. Figure 7 also shows axial T2-weighted TSE images obtained at various slice locations in the lower brain without and with the RF array. Slice #1 (Fig. 7c,d) includes images focusing on the posterior fossa, demonstrating significant B1+ efficiency and SNR enhancement in this region. Slice #2 (Fig. 7e,f) and slice #3 (Fig. 7g,h) consist of images obtained approximately at the physical border of the coil and 2 cm beyond the border (extended outward along the MRI z direction), respectively. The presence of the array extends the spatial coverage of the coil, allowing visualization of the spinal cord and vertebral artery, which are barely detectable using the head coil without the array.
Axial TSE images with 0.7 mm in-plane resolution of the cerebellum and brainstem demonstrate exquisite anatomical detail with excellent gray matter/white matter contrast. Average CNR enhancements of 52% and 58% between gray matter and white matter in the cerebellum were calculated in the TSE and GRE images, respectively.
Discussion
We have proposed and validated an effective and simple design of a passive RF array to improve transmit efficiency and signal sensitivity in MRI systems. The array works in conjunction with the MRI coil by placing it over the ROI and only requires tuning to the desired operating frequency. A wireless passive RF resonator array providing a solution to the limited spatial coverage and B1+ inhomogeneity of standard MRI coils was constructed and tested. The flexible and distributed architecture makes the array tunable and size-adjustable for various MRI applications. The array did not use any lumped-element components, which helped maintain flexibility and may also prevent severe SAR hot spots. To prevent B1+ over-flipping, elements inside the MRI coil that were strongly coupled to the RF excitation (Fig. 2e) were decoupled from it using anti-parallel diodes. Elements positioned at the inferior end of the coil were left tuned to enhance the signal in both the transmit and receive phases. In Supplementary B, we showed that to obtain optimized transmit efficiency in the presence of the resonator, the off-resonance frequency of the resonators should be adjusted to 5% above the Larmor frequency. Several methods have been described to improve transmit field efficiency in MRI, such as pTx coils and passive RF shimming techniques 28,30. In this study we focused on using a wireless RF array in conjunction with a standard head coil to improve whole-brain MRI at 7T by improving the receive signal sensitivity and transmit efficiency in the brain, particularly in the posterior fossa. The transmit and receive inductive coupling of the RF array with the RF excitation and the magnetization vector not only improves the transmit efficiency and receive sensitivity but also extends the anatomical coverage to visualize regions inherently outside of the coil. Note that elements with diodes were decoupled from RF excitation and therefore only enhanced the receive signal. Elements without diodes were inductively coupled in both the transmit and receive phases and therefore improved both transmit efficiency and signal sensitivity.
We used a commercial head coil to demonstrate the improvement in coil sensitivity. The coil suffered from limited sensitivity at the posterior fossa, which was improved by placing the RF array near the inferior regions of the head. In addition to the non-uniformity of the RF field at UHF, the increased vulnerability of UHF imaging to susceptibility-induced image distortions near air-filled cavities is also challenging, especially in cerebellum imaging 22,47, where the cerebellum's location in the posterior cranial fossa and its anatomical diversity, combined with its small size, present challenges for UHF MRI. Susceptibility artifacts in particular were not addressed in this study.
The array performance was evaluated using EM simulations, bench tests, and MRI experiments. We demonstrated, in both simulations and experiments, that the SNR and the transmit efficiency of a commercial head coil at 7T can be improved in the skull base and cerebellum using a passive RF array. This enhancement in SNR was used to improve whole-brain imaging in regions where the standard coil is limited by poor transmit and receive sensitivity. The SAR distribution of the standard coil was altered in the presence of the RF array. The 10 g local SAR increased by 24% at the location where the array was present, consistent with the greater transmit efficiency at this location; however, the peak local SAR decreased by 9% when the array was used. Transmit inductive coupling between the RF excitation and some of the elements in the array can be considered the major reason for the SAR amplification, while the reduction in peak local SAR can be explained by a redistribution of the global RF energy over the object in the presence of the array. Temperature tests under a high-SAR MRI sequence also indicated maximum local SAR gains of about 1.45 and 1.11 in the presence of the tuned and detuned elements, respectively.

Figure 7. In vivo brain MRI at 7T. Sagittal T2-weighted TSE images obtained without (a) and with (b) the RF array show significant SNR and CNR improvement in the inferior regions in the presence of the array. In particular, the cerebellum, brainstem, and neck muscle are more clearly visible using the RF array. Axial T2-weighted TSE slices obtained at various locations in the lower brain: images obtained in slice #1 (c,d) show that placing the RF array results in significant improvement in B1+ uniformity and SNR in the cerebellum and brainstem; slice #2 (e,f) and slice #3 (g,h) were obtained at the physical border of the coil and 2 cm outside the border, respectively. These images show that using the RF array extends the anatomical coverage of the head coil, visualizing the vertebral artery and spinal cord, which are hardly visible without the array.
Considering the previously mentioned enhancement in local SAR, the increase in transmit efficiency per square root of maximum SAR, evaluated from the experimental in vivo B1+ maps used to assess RF coil transmit efficiency, was a factor of 1.8 ± 0.2. In practical terms, this increase in transmit efficiency means that the amplitude or duration of the transmitted RF pulse can be reduced by the equivalent factor. An average in vivo SNR enhancement of 2.2-fold was calculated using the RF array, which corresponds to a decrease in total experimental scan time for a constant SNR. Note that the reference voltage was kept constant for the with- and without-array cases; we did not re-calibrate it between the two cases, so the raw SNR comparison combines the effect of receive amplification with that of the increased flip angle (FA). The experimental SNR analysis scaled by the B1+ maps indicated that the 2.2-fold SNR enhancement was mainly due to the improved transmit efficiency (FA amplification) and partially (33%) due to the receive-only coupled sensitivity improvement. In vivo images showed that sensitivity, contrast, and image uniformity were enhanced in the presence of the RF array, resulting in improved visibility of the inferior region of the human brain extending outside of the coil (Fig. 7).
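A rough sketch of such a decomposition is given below. It assumes a low-flip-angle GRE regime in which the transmit contribution to the signal scales approximately linearly with the local B1+, so dividing the SNR-gain map by the B1+-gain map leaves an estimate of the receive-side contribution. The arrays are synthetic placeholders and the linear-scaling assumption is ours; this is not the exact analysis pipeline used in the study.

    import numpy as np

    # Placeholder co-registered ROI maps (replace with measured data).
    snr_without = np.full((64, 64), 10.0)
    snr_with = 2.2 * snr_without          # e.g. a 2.2-fold measured SNR gain
    b1_without = np.full((64, 64), 1.0)
    b1_with = 1.8 * b1_without            # e.g. a 1.8-fold measured B1+ gain

    snr_gain = snr_with / snr_without     # total SNR improvement map
    tx_gain = b1_with / b1_without        # flip-angle (transmit) amplification map
    rx_gain = snr_gain / tx_gain          # residual, attributed to receive-side coupling

    print("mean total SNR gain:", np.nanmean(snr_gain))
    print("mean transmit (FA) contribution:", np.nanmean(tx_gain))
    print("mean receive-only contribution:", np.nanmean(rx_gain))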
This study has demonstrated the promise of a novel flexible and compact RF resonator array structure, which can improve MRI performance in regions where transmit efficiency is reduced due to wavelength effects. This new flexible RF structure is the first broadside-coupled SRR array that can be integrated into MRI coils. The utility of the RF array for acquiring in vivo human brain images on a commercial 7T MRI system was also shown. In this demonstration we targeted the posterior and caudal brain, including the cerebellum and brainstem, regions with limited coverage due to the coil structure. Experimental results showed that the RF array can enhance the local transmit field and improve the receive signal sensitivity in the localized region. The proposed RF resonator array, constructed from a very simple substrate, can be considered a low-cost and time-stable alternative to other passive RF-shimming techniques. Although this work demonstrated a 10-element RF array for brain MRI at 7T, where B1+ inhomogeneity is more severe, the array can be modified for other MRI applications where improvements in transmit efficiency and receive sensitivity are required.
RF array modeling.
When designing a resonator, it is desirable to control the capacitance and inductance in order to tune the resonator 48,49. The built-in distributed capacitance between the metal layers of the BCSRR structure is used to tune the resonator and avoids the need for any lumped-element capacitance. In addition, the distributed capacitance spreads the electric field out along the resonator and prevents the resonator from creating severe SAR hot spots 50. A series of electromagnetic (EM) simulations [Computer Simulation Technology Microwave Studio (CST), Germany] was performed to investigate the effects of the design parameters on the electrical properties of a single BCSRR. The optimized resonator geometry was used for the array modeling; further details are provided in Supplementary A.
Coupling an array of resonators to a volume coil is different from coupling a single resonator to a volume coil. Since all the resonators in the array are inductively coupled through the volume coil, the interaction of the array with the coil must also be considered when evaluating the global B1+ efficiency and signal sensitivity. We conducted EM numerical simulations (CST) to evaluate the EM field distribution of a head coil in the presence of the RF array, modeling a head-sized birdcage coil (22 cm …). To evaluate the effect of the RF array on the SAR distribution, we calculated the 10 gram (10 g) average SAR with and without the array using the same birdcage coil containing the "Duke" head model. Time-averaged SAR values were calculated by finding the time derivative of the incremental energy absorbed by an incremental 10 g mass of tissue. All resulting simulated SAR values were compared with the corresponding limits (10 W/kg for maximum local SAR and 3.2 W/kg for head-average SAR) recommended by the FDA and IEC 51. The SAR distribution was used to estimate the possible hot spots for the heating test.
Electrical bench test. All elements were tuned to f0 = 312 MHz, 5% above the 7T Larmor frequency of 297 MHz, to obtain optimized transmit efficiency in the presence of the resonator (Supplementary B), while loaded with the cylindrical saline phantom (15 cm in diameter and 30 cm in height; relative permittivity: 75; conductivity: 0.60 S/m). Tuning was assessed by measuring the reflection coefficient (S11) using a single sniffer probe connected to a calibrated vector network analyzer (VNA, E5071C, Agilent Technologies, Santa Clara, CA, USA). The detuning performance of the resonators (with anti-parallel diode) was measured as the change in the scattering parameter S21 of a loosely coupled double pick-up probe. The double pick-up probe consists of two overlapped sniffer loops made out of semi-rigid coaxial cable with a gap in the shield placed symmetrically in the middle of each loop 52. The sniffer loops were overlapped to the extent required for inductive decoupling. Sniffer loop #1 transmits RF energy to the resonant element under test and sniffer loop #2 functions as a pick-up coil detecting the currents excited in the resonant element under test. The Q-factor of each element was calculated as f0/Δf, where Δf is the FWHM bandwidth of the S21 measured using the double pick-up probe. Decoupling between neighboring elements was adjusted to null their mutual inductance by moving the tuned elements towards each other while measuring the S21 parameter between the elements. The S21 interaction between pairs of elements was monitored using two independent sniffer loops: one sniffer loop connected to port 1, transmitting RF power and coupled to the first element under test; and the other sniffer loop connected to port 2, receiving the induced RF power and coupled to the second element under test (Fig. 5a). When measuring the decoupling between an adjacent pair, all other unused elements of the array were detuned.
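The Q-factor estimate described above (Q = f0/Δf, with Δf the −3 dB full width of the S21 resonance) can be reproduced numerically as in the sketch below; the Lorentzian trace stands in for a real double pick-up probe measurement and is an assumption for illustration only.

    import numpy as np

    # Synthetic |S21| transmission peak of a resonator (Lorentzian, linear magnitude).
    freq = np.linspace(280e6, 344e6, 4001)
    f0_true, bw_true = 312e6, 26e6                       # toy centre frequency and FWHM (Q ~ 12)
    s21 = 1.0 / np.sqrt(1 + (2 * (freq - f0_true) / bw_true) ** 2)

    peak = np.argmax(s21)
    f0 = freq[peak]
    half = s21[peak] / np.sqrt(2)                        # -3 dB level in linear magnitude

    band = np.where(s21 >= half)[0]                      # indices inside the -3 dB band
    delta_f = freq[band[-1]] - freq[band[0]]             # FWHM bandwidth

    print(f"f0 = {f0 / 1e6:.1f} MHz, Δf = {delta_f / 1e6:.1f} MHz, Q = f0/Δf = {f0 / delta_f:.1f}")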
Experimental heating test.
To evaluate whether the presence of the RF resonator array causes a possible safety concern through heating, temperature measurements were conducted in an MRI scanner following a suitable modification of ASTM standards 53. The RF array was immersed inside an ASTM gel phantom (σ_gel = 0.50 S/m, ε_gel = 77, heat capacity = 4154 J/kg·°C), in the location within the volume coil where the array was expected to be during the human MRI experiments. The array was coated with a thin layer of plastic to avoid direct contact of the array with the gel. The assembly was placed inside the head coil and scanned for 15 min with a high-SAR turbo-spin-echo (TSE) sequence (repetition time (TR) = 500 ms, echo time (TE) = 11 ms, flip angle (FA) = 120°, bandwidth = 277 Hz/pixel, FOV = 16 cm × 23 cm, matrix = 256 × 256, averages = 32, slice thickness = 10 cm). RF excitation was performed using a Nova 1Tx/32Rx head coil (Nova Medical, Wilmington, MA, USA) in a 7 T MRI scanner (Magnetom, Siemens Healthcare, Erlangen, Germany). Temperature was measured using four fiber-optic temperature probes (LumaSense Technologies, Santa Clara, CA) located at the possible hot spots estimated from the SAR simulations in the presence of the array. Baseline temperatures were recorded before RF transmission, and temperature changes were measured during scanning. SAR was calculated as SAR = C_hc (dT/dt), where C_hc is the heat capacity, T is the temperature and t is the time.
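The relation SAR = C_hc·(dT/dt) can be applied to the probe recordings by fitting the initial temperature slope during RF transmission, as in the sketch below; the temperature samples are invented for illustration and only the heat capacity is taken from the phantom recipe above.

    import numpy as np

    c_hc = 4154.0                            # gel heat capacity, J/(kg·°C)

    # Hypothetical probe reading: time (s) and temperature (°C) during the 15-min scan.
    t = np.arange(0, 900, 30, dtype=float)
    temp = 21.0 + 0.71 * (t / 900.0)         # toy linear rise of 0.71 °C over 15 min

    slope, _ = np.polyfit(t, temp, 1)        # dT/dt in °C/s from a linear fit
    sar = c_hc * slope                       # local SAR estimate, W/kg

    print(f"dT/dt = {slope:.2e} °C/s  ->  SAR ≈ {sar:.2f} W/kg")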
We used the exact same probe locations when studying the temperature changes occurring with and without the array. We visually examined the location of the probes relative to the RF array immediately before and after the heating assessment, because slight variability in probe position relative to the array can lead to significant variations in the measured temperature.
In vivo MR imaging. In vivo B1+ and SNR maps were calculated in the human brain with and without the resonator array. The array was placed behind the neck covering the posterior fossa. The flexible and thin structure of the array allowed it to be placed on the curved surface of the back to fully cover the ROI. B1+ maps were measured using the presaturation-prepared turbo-FLASH based method (MGH QA package) with acquisition parameters TR/TE = 2.7/1.2 ms, FA = 10°, FOV = 16 cm × 21 cm, matrix = 256 × 256. The applied input power was the same (200 V) for the with- and without-array B1+ mappings. The signal reception performance of the array was evaluated using SNR map calculations with and without the array. SNR maps were generated using the images obtained from two gradient-recalled echo sequences (GRE, TR/TE = 400/9 ms, FA = 5°, bandwidth = 977 Hz/pixel, FOV = 16 cm × 21 cm, matrix = 256 × 256), one with and the other without RF transmission (the MGH coil QA package was used).
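For orientation, one common implementation of presaturation-prepared turbo-FLASH B1+ mapping estimates the local flip angle from the ratio of the saturation-prepared image to an unprepared reference image, since the prepared signal is scaled by the cosine of the local saturation flip angle. The sketch below follows that generic variant with invented image arrays; it is not a reproduction of the MGH QA package.

    import numpy as np

    # Hypothetical co-registered magnitude images with identical receive weighting.
    reference = np.full((128, 128), 1000.0)       # image without the saturation preparation
    nominal_fa_deg = 90.0                         # nominal preparation flip angle
    true_scale = 0.9                              # toy case: actual FA is 90% of nominal
    prepared = reference * np.cos(np.deg2rad(nominal_fa_deg * true_scale))

    ratio = np.clip(prepared / reference, -1.0, 1.0)
    fa_map_deg = np.degrees(np.arccos(ratio))     # measured local saturation flip angle

    b1_rel = fa_map_deg / nominal_fa_deg          # relative B1+ (measured FA / nominal FA)
    print("mean relative B1+:", b1_rel.mean())    # ~0.9 for this toy case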
For in vivo imaging, five healthy adult volunteers (3 males, 2 females, age 30-45 years) were imaged on the 7 T whole-body MRI scanner. MR images were acquired with and without the resonator array using a Nova 1Tx/32Rx head coil. The imaging sequence was a GRE sequence with parameters: TR = 100 ms, TE = 4 ms, FA = 10°, bandwidth = 977 Hz/pixel, FOV = 9 cm × 14 cm, matrix = 256 × 256. TSE images were also acquired with the parameters TR/TE = 3000/76 ms, FOV = 22 cm × 18 cm, flip angle = 120°, slice thickness = 2 mm, bandwidth = 977 Hz/pixel, matrix = 256 × 256, TSE factor = 15. The human experimental procedures were approved by the institutional review board at the Icahn School of Medicine at Mount Sinai.
Contrast enhancement analysis was also conducted by calculating the contrast-to-noise ratio (CNR), formulated as CNR = |(S_ROI − S_REF)/σ_N|, where S_ROI and S_REF are the mean signal intensities of the ROI and of the reference area, respectively, and σ_N is the standard deviation of the background noise.
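The CNR definition above translates directly into code; the image and the three pixel masks below are hypothetical and only illustrate the formula:

    import numpy as np

    def cnr(image, roi_mask, ref_mask, noise_mask):
        """CNR = |(S_ROI - S_REF) / sigma_N| as defined in the text."""
        s_roi = image[roi_mask].mean()        # mean signal in the ROI (e.g. gray matter)
        s_ref = image[ref_mask].mean()        # mean signal in the reference area (e.g. white matter)
        sigma_n = image[noise_mask].std()     # standard deviation of background noise
        return abs((s_roi - s_ref) / sigma_n)

    rng = np.random.default_rng(0)
    img = rng.normal(5.0, 1.0, (64, 64))      # toy image: noise floor ...
    img[:32, :32] += 20.0                     # ... plus an "ROI" tissue patch
    img[:32, 32:] += 12.0                     # ... plus a "reference" tissue patch

    roi = np.zeros_like(img, dtype=bool); roi[:32, :32] = True
    ref = np.zeros_like(img, dtype=bool); ref[:32, 32:] = True
    noise = np.zeros_like(img, dtype=bool); noise[48:, 48:] = True
    print(f"CNR = {cnr(img, roi, ref, noise):.1f}")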
All methods were carried out in accordance with the Icahn School of Medicine at Mount Sinai guidelines and regulations. All experimental protocols were approved by the Icahn School of Medicine at Mount Sinai Medical Ethics Committee. In vivo images of the brain were acquired from healthy volunteers after informed consent was obtained, in accordance with the guidelines of the Icahn School of Medicine at Mount Sinai Medical Ethics Committee.
Data availability
The data that support the findings of this study are available on request from the corresponding author, A.A. The data are not publicly available due to restrictions imposed by the local ethical committee, as they contain information that could compromise the privacy of the research participants. | 8,332.8 | 2021-11-29T00:00:00.000 | [
"Engineering",
"Medicine",
"Physics"
] |
Fabrication of Nanoyttria by Method of Solution Combustion Synthesis
This work presents research on the properties of an yttria nanopowder obtained by solution combustion synthesis (SCS) in terms of its application in ceramic technology. In order to characterize the SCS reaction, the decomposition of yttrium nitrate, glycine, and their solution was investigated using differential thermal analysis coupled with FT-IR spectrometry of the gases emitted during the measurements. The product obtained in the SCS process was characterized in terms of its microstructure, particle size distribution and BET specific surface area. Although the obtained powders showed nanoscaled structures, nanosized particles were revealed only after calcination at a temperature of 1100 °C. The calcined powder occurred in an agglomerated state (cumulants mean Zave = 1.3 µm). After milling, the particle size was successfully decreased to Zave = 0.28 µm. The deagglomerated powder was isostatically densified and tested for sintering ability. The obtained nanopowder showed very high sintering activity, as the onset of shrinkage was detected already at a temperature of about 1150 °C.
Introduction
A wide variety of applications of yttria is a driving force for the development of methods to fabricate pure powders. Doped or surface modified yttria nanoparticles find application in medicine and electronics [1][2][3][4].
Ceramic nanoparticles have recently gained interest as a reinforcement for lightweight alloys. It was recently shown that the addition of 2.5 wt% ceramic nanoparticles increased the microhardness of Ti6Al4V manufactured by selective laser melting (SLM) by 50% [5]. Due to its high hardness and low reactivity with molten metals, yttria can be an interesting material for metal matrix composites. Additionally, yttria has a relatively high thermal conductivity (8-12 W/m·K), which in the case of the SLM technique can prove beneficial for the forming and densification of produced parts due to better heat transfer in the area of the laser's operation.
Yttria is also widely used in ceramic technology. Dense yttria ceramics find application as refractory ceramics (e.g., coatings and crucibles for molten reactive metals) and optical devices (e.g., infrared missile domes) [6,7]. For ceramic technology a nanopowder is desired due to the possibility of reducing the sintering temperature [8].
Solution combustion synthesis (SCS), which is based on the high-energy reaction between metal nitrates and a reducing agent, is a promising method for the fabrication of nanopowders. Unlike sol-gel …

6 Y(NO3)3 + 10 NH2CH2COOH → 3 Y2O3 + 20 CO2 + 25 H2O + 14 N2    (1)

During the reaction vast amounts of gases are emitted, which causes nanostructuring of the produced grains. Usually, the powders obtained by using this method show a high specific surface area and very complex particle morphology. Such a microstructure is not beneficial for the fabrication of dense ceramics. Densification of nanopowders is a challenge in itself, as the smaller the particles, the more the particle-particle contacts and the torques occurring between particles hinder the packing. Additionally, agglomerated particles need even higher pressures during compaction in order to destroy the inner structure of the particles. In the presented work we show that, with carefully designed technological steps, the yttria powder obtained by using the SCS method has sintering ability and can be used for a ceramic application.
Materials and Methods
For the solution combustion synthesis of yttria, yttrium nitrate hexahydrate (Sigma-Aldrich, St. Louis, MO, USA, purity 99.8%) and glycine (Sigma-Aldrich, purity ≥ 99%) were used. Glycine was used as the reducing agent and yttrium nitrate was both the precursor salt for yttria synthesis and the oxidizer in the redox reaction.
The solution combustion synthesis (SCS) was carried out in a quartz beaker. After the dissolution of the reagents in deionized water, the water was evaporated and a gel was formed. The gel was then heated to the reaction initiation temperature. Once that temperature was reached, the high-temperature, self-propagating redox reaction took place [22]. The substrates were added in stoichiometric amounts; in each batch the aim was to obtain 5 g of yttria.
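For orientation, the stoichiometric amounts implied by reaction (1) for a 5 g yttria batch can be estimated as below. The molar masses are standard values; the calculation is our own illustrative sketch, not a reproduction of the authors' batch sheet.

    # Molar masses in g/mol (standard values).
    M_Y2O3 = 225.81     # Y2O3
    M_YN6W = 383.01     # Y(NO3)3·6H2O
    M_GLY = 75.07       # glycine, NH2CH2COOH

    target_y2o3 = 5.0                     # g of yttria per batch, as stated above
    n_y2o3 = target_y2o3 / M_Y2O3         # mol of Y2O3

    # Reaction (1): 6 Y(NO3)3 + 10 glycine -> 3 Y2O3 + ...
    n_nitrate = n_y2o3 * 6 / 3
    n_glycine = n_y2o3 * 10 / 3

    print(f"yttrium nitrate hexahydrate: {n_nitrate * M_YN6W:.2f} g")   # ~17.0 g
    print(f"glycine:                     {n_glycine * M_GLY:.2f} g")    # ~5.5 g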
Thermogravimetric analysis was carried out in alumina crucibles using the thermal analyzer TG 449 F1 Jupiter (Netzsch Gerätebau GmbH, Selb, Germany). The gaseous products emitted were analyzed by FT-IR spectroscopy using a coupled FT-IR spectrometer (Tensor 27, Bruker, Billerica, MA, USA). The signals were identified based on the NIST database [23] and literature on glycine decomposition [24]. The curves of intensity of the characteristic absorbance wavenumber of a specific substance were subtracted and plotted as a function of temperature to investigate the reaction mechanism.
Obtained powders were deagglomerated in an attritor mill (Netzsch MiniCer, 1000 rpm) using zirconia balls 0.4 mm in diameter. The milled powders, mixed with binder and plasticizer, were cryogranulated. The particle size distribution of the powders was measured in aqueous suspensions by dynamic light scattering (DLS), using the Zetasizer Nano ZS zeta potential analyzer (Malvern Instruments Ltd., Worcestershire, UK). The analysis results are presented in terms of the Z average (Zave) and polydispersity (Pd), supported by the median particle diameter (dV50). The Z average (also called the cumulants mean or harmonic intensity-averaged particle diameter) is a mean value from the intensity distribution, which is the primary result obtained from the measurement and thus the most stable result. The polydispersity derives from the polydispersity index (calculated from the cumulants analysis) and is the width of the estimated Gaussian distribution.
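The Z average and polydispersity index reported by the instrument come from a cumulants fit of the autocorrelation data. The sketch below reconstructs that analysis on a synthetic correlation function under commonly assumed conditions (water at 25 °C, a 633 nm laser and backscatter detection at 173°); it is meant only to illustrate how the quantities quoted here arise, not to reproduce the instrument software.

    import numpy as np

    kB, T = 1.380649e-23, 298.15                       # J/K, K
    eta = 0.89e-3                                      # Pa·s, water at 25 °C
    n_med, lam, theta = 1.33, 633e-9, np.deg2rad(173)  # assumed optical configuration
    q = 4 * np.pi * n_med / lam * np.sin(theta / 2)    # scattering vector, 1/m

    # Synthetic field correlation function g1 for 280 nm diameter particles.
    d_true = 280e-9
    D_true = kB * T / (3 * np.pi * eta * d_true)       # Stokes-Einstein diffusion coefficient
    tau = np.logspace(-6, -2, 200)                     # delay times, s
    g1 = np.exp(-D_true * q ** 2 * tau)

    # Cumulants fit: ln g1 = -Gamma*tau + (mu2/2)*tau^2 (early decay only).
    mask = g1 > 0.1
    a2, a1, _ = np.polyfit(tau[mask], np.log(g1[mask]), 2)
    gamma, mu2 = -a1, 2 * a2

    D = gamma / q ** 2
    z_ave = kB * T / (3 * np.pi * eta * D)             # Z-average (intensity-weighted) diameter
    pdi = mu2 / gamma ** 2
    print(f"Z-average ≈ {z_ave * 1e9:.0f} nm, PDI ≈ {pdi:.3f}")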
The specific surface area was measured using the BET technique (Gemini VII, Micromeritics Instrument Corp., Norcross, GA, USA). Based on the BET results, the equivalent spherical particle diameter (dBET) was calculated.
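The equivalent spherical diameter is commonly derived from the specific surface area as dBET = 6/(ρ·SBET). The short check below applies this relation with the theoretical density of yttria (about 5.01 g/cm3); we assume this is the convention behind the dBET values reported later, since the exact formula is not stated in the text.

    RHO_Y2O3 = 5.01e3                       # theoretical density of Y2O3, kg/m^3

    def d_bet_nm(s_bet_m2_per_g, rho=RHO_Y2O3):
        """Equivalent spherical diameter d = 6 / (rho * S_BET), returned in nm."""
        s = s_bet_m2_per_g * 1e3            # convert m^2/g to m^2/kg
        return 6.0 / (rho * s) * 1e9

    for s_bet in (19.7, 11.5):              # specific surface areas quoted in the text, m^2/g
        print(f"S_BET = {s_bet:4.1f} m^2/g  ->  d_BET ≈ {d_bet_nm(s_bet):.0f} nm")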
The measurements of linear changes of the pressed granulates were conducted using a Netzsch high-temperature dilatometer (model Dil 402E) equipped with a graphite furnace. The measurement was carried out in a temperature range from RT to 1700 °C with a heating rate of 10 °C/min and an isothermal stage at the maximum temperature for a duration of 10 min.
Prior to measurement, calibration was carried out with a graphite standard of known properties and expansion. The calibration measurement was carried out under the same conditions (temperature heating program, atmosphere, gas flow rate) to determine the signals related to the expansion of the dilatometer elements and to correct the results obtained during the proper measurement.
Results
Figure 1 shows the results of the thermal analysis for yttrium nitrate hexahydrate. The top graph of Figure 1 presents the curves corresponding to the mass loss, the mass loss derivative and the thermal effects of the thermal decomposition of yttrium nitrate hexahydrate. The graphs below represent the absorbance intensity trends of selected wavenumbers as a function of temperature. In the gaseous products resulting from yttrium nitrate decomposition, water and nitrogen dioxide are detected. The first endothermic effect detected on the DTA curve (Figure 1) corresponds to the melting of the salt. A minor weight loss (3.44%) related to the evaporation of adsorbed water is then observed, together with a small signal on the DTG curve with a minimum at a temperature of 87 °C. At a temperature of about 108 °C dehydration begins and proceeds in two stages (Figure 1):
• 108-193 °C, with a maximum mass loss rate at a temperature of 166.1 °C and an endothermic peak at 170.8 °C, Δm(108-193 °C) = 9.48%;
• 193-327 °C, with a maximum mass loss rate at a temperature of 267.5 °C and an endothermic peak at 273.9 °C, Δm(193-327 °C) = 18.74%.
In the temperature range of 108-327 °C the total mass loss is 28.12%, which is close to the theoretical value for complete dehydration of the salt (28.20%); this is confirmed by the FT-IR data, since only the signal of water is visible (Figure 1).
Further mass loss occurs in two steps and corresponds to the degradation of the nitrate. The first distinctive mass loss (Δm = 25.16%) occurs in the temperature range of 327-444 °C, with a maximum mass loss rate at T = 397.7 °C and an endothermic peak at T = 398 °C (Figure 1). During the last decomposition stage, a mass loss of 13.32% occurs, with a maximum mass loss rate at a temperature of 521.7 °C and an endothermic peak at 521.8 °C. Above a temperature of 641 °C the mass of the sample is stable. In both stages the signal indicating the presence of NO2 is visible. Yttrium nitrate primarily decomposes to yttria and nitrogen pentoxide, which is unstable and converts to nitrogen dioxide and oxygen. The two observed steps are a consequence of partial decomposition and the formation of cyclic oxynitrates [25].
In the temperature range of 327-641 °C the total mass loss equals 38.48%, which is in reasonable agreement with the theoretical value following from the stoichiometry of the decomposition reaction (42.30%) (2).
Total mass loss observed during the decomposition of yttrium nitrate hexahydrate is 69.68% and corresponds well to the theoretical value of mass loss (70.52%).
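These theoretical values follow directly from the molar masses, assuming complete dehydration of Y(NO3)3·6H2O followed by decomposition of the anhydrous nitrate to Y2O3; the overall decomposition 2 Y(NO3)3 → Y2O3 + 6 NO2 + 3/2 O2 assumed here is our reading of the description above, since Equation (2) itself is not reproduced in this text. A short numerical check:

    # Molar masses in g/mol.
    M_HEXAHYDRATE = 383.01    # Y(NO3)3·6H2O
    M_ANHYDROUS = 274.92      # Y(NO3)3
    M_Y2O3 = 225.81           # Y2O3

    m0 = M_HEXAHYDRATE                    # starting mass per mole of salt
    m_dehydrated = M_ANHYDROUS            # after losing 6 H2O
    m_residue = M_Y2O3 / 2                # 1/2 mol Y2O3 per mol Y(NO3)3

    dehydration_loss = (m0 - m_dehydrated) / m0 * 100           # ≈ 28.2 %
    decomposition_loss = (m_dehydrated - m_residue) / m0 * 100   # ≈ 42.3 %
    total_loss = (m0 - m_residue) / m0 * 100                     # ≈ 70.5 %

    print(f"dehydration:   {dehydration_loss:.2f} %")
    print(f"decomposition: {decomposition_loss:.2f} %")
    print(f"total:         {total_loss:.2f} %")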
In Figure 2 the results of the thermal analysis for glycine are presented. The decomposition of glycine begins at a temperature of 228.6 °C. Up to a temperature of 303.8 °C the mass loss amounts to 46.26%, with two overlapping maximum mass loss rates at T = 235.0 and 269.7 °C and a sharp endothermic peak at 252.3 °C. In the exhaust gases HCNO, HCN, NH3, CO2 and H2O were detected.
The next decomposition stage occurs in the temperature range of 269.7-473.6 °C, with a minimum on the DTG curve at T = 391.9 °C and no distinctive effect on the DTA curve. The mass loss equals 18.46% and is connected with the emission of CO2 and HNCO.
The last stage of decomposition is the residual burnout (exothermic peak at T = 695.2 °C) and it ends at 835.8 °C (Δm = 35.34%).
According to the literature [24], the decomposition begins with the emission of NH3. Simultaneously, glycine can undergo condensation and cyclization reactions through dehydration reactions (Equations (3) and (4)). Subsequently, HCN, HNCO and CO are emitted due to selective cracking of the cyclic amides. In air, HCN and CO oxidize to HNCO and CO2, respectively (5) [24].
In Figure 3 the results of the thermal analysis for the solution containing stoichiometric amounts of yttrium nitrate hexahydrate and glycine are presented. The measurement was conducted in synthetic air flow, to best imitate the conditions of the synthesis, which takes place in an open quartz beaker.
The mass loss visible on the TG curve recorded during the thermogravimetric measurement of the solution containing yttrium nitrate and glycine begins below 100 °C. The first endothermic effect (with a peak at T = 124.3 °C) continues up to a temperature of 182.0 °C and is connected with a mass loss of 71.79%. The mass loss is attributed mainly to the evaporation of water. However, on the FT-IR spectra, signals resulting from the presence of HCN, NH3 and NO2 are also visible (Figure 3). This is surprising, as these compounds result from the degradation of glycine and yttrium nitrate, which according to the results presented in Figures 1 and 2 should be stable in this temperature range.
At a temperature of 238.2 °C the red-ox reaction between yttrium nitrate and glycine begins. It is distinguished by an exothermic peak on the DTA curve (Tpeak = 244.1 °C) and an abrupt weight loss (Δm = 15.93%, Figure 3). At a temperature of 263.0 °C the process ends, and with further temperature increase only a minor mass loss is observed (Δm = 3.96%). On the FT-IR spectra not only CO2 and H2O but also HCN, HCNO, NH3 and NO2 are detected. The yttria powders prepared by solution combustion synthesis using glycine and yttrium nitrate hexahydrate were investigated in terms of microstructure (Figure 4), particle size distribution and specific surface area (Table 1).
The synthesized powders are characterized by a highly porous microstructure (Figure 4a). The cumulants mean measured by dynamic light scattering (DLS) is 2354 nm, with a broad particle size distribution (Pd = 1498 nm).
To burn out the substrates' residues the powders were calcined. After calcination at a temperature of 800 °C the powders' sponge-like microstructure remained intact (Figure 4b). The DLS analysis gives an average particle size expressed as a cumulants mean of 2649 nm, while dV50 = 0.9 µm (Table 1). These strong discrepancies result from the high polydispersity of the particle size (expressed in a high value of the polydispersity width, Pd), also visible in the SEM image (Figure 4b), where a micrometre-sized particle is accompanied by some smaller grains at the image boundary. The BET surface area is 19.7 m2/g, demonstrating a relatively high level of surface development.

Table 1. Particle size measured by dynamic light scattering (DLS) and calculated from the BET specific surface area of yttria powders obtained by using the SCS method (Zave - cumulants mean, Pd - polydispersity width, dV50 - median diameter of the particle size distribution, SBET - specific surface area, dBET - BET equivalent spherical particle diameter).

Powders calcined at a temperature of 1100 °C reveal a finely grained microstructure. The microstructure transformation occurs without mass loss (Figure 3). The fine grains (about 100 nm in diameter) occur in agglomerates, with Zave = 1338 nm and dV50 = 1190 nm. Together with the agglomerate size, the BET surface area decreases as well and equals 11.5 m2/g (Table 1). During calcination at a temperature of 1100 °C the disordered matter in nanostructures of high surface energy undergoes diffusion and reorganization into grains. The "sponge-like" structure undergoes conversion: its thin walls disappear and in their place uniform globular grains are produced. The specific surface decreases in the process of matter diffusion and grain formation, but the globular grains are connected by van der Waals forces or sintering necks. Such a structure is more prone to disintegrate than the initial "sponge-like" aggregates, which is reflected in the DLS analysis. This kind of structure is still not beneficial for ceramic technology, as agglomerates are very difficult to densify using conventional pressing techniques and may cause fluctuations in density in the bulk of the pressed sample.
Calcination
To deagglomerate the obtained powder, high-energy milling was implemented. Figure 5 shows the trend of the cumulants mean vs. milling time, and Figure 6 presents the size distributions of the milled powders.

In order to estimate the optimal milling time, the size distribution of the yttria powder calcined at a temperature of 800 °C was measured after 1, 3, 5, 7, 10, 13 and 15 min of milling (Figure 5). After 10 min, the particle size was about 500-600 nm and remained unchanged up to the maximum milling time of 15 min. Based on these observations and previous experience with the deagglomeration of yttria in water using an attritor mill [26], the milling time was set to 15 min and the yttria powders were milled under these conditions. The results of the particle size measurements of the milled powders are presented in Table 1 and in Figure 5. The size distribution of the yttria powder calcined at a temperature of 800 °C revealed some agglomeration, as agglomerates or aggregates of 5 µm are visible. Agglomerates of the powder calcined at a temperature of 1100 °C were disintegrated more effectively during milling (Zave = 275 nm and dV50 = 352 nm), which is consistent with the microstructural observations of the powder before milling. The agglomerates visible in Figure 4c underwent partial disintegration; the particle size distribution shown in Figure 6 indicates that the milled powder is trimodal, with peaks at 0.275, 0.850 and 5 µm, which suggests that most of the agglomerates were disintegrated into particles with diameters of about 275 nm, with some 3-4 particle agglomerates (d = 850 nm) and some bigger agglomerates left intact. In Figure 7 the SEM images of the milled powders are presented, in which particles of about 100-150 nm with some bigger agglomerates are visible.

This powder was cryo-granulated with the addition of binder and plasticizer, die pressed and densified in a cold isostatic press under a pressure of 150 MPa before the dilatometric measurements were taken (Figure 8). The suspension used for cryo-granulation was very diluted (csolids ≈ 5 vol%), which is the reason why the powder does not appear in proper granules; instead it occurs as separate particles and small agglomerates (Figure 6).

The sintering starts at a temperature of 1149 °C and proceeds in two steps: the first with a maximum sintering rate at a temperature of 1387 °C (dL/L0 = 14.33%) and the second with a maximum sintering rate at T = 1677 °C (dL/L0 = 7.33%).
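The sintering onset and the temperatures of maximum shrinkage rate can be extracted from the dilatometer curve by differentiating dL/L0 with respect to temperature, roughly as sketched below; the two-step synthetic curve and the 0.2% onset threshold are our own illustrative choices, not the evaluation procedure used in the study.

    import numpy as np

    # Synthetic shrinkage curve dL/L0 (%) vs temperature (°C) mimicking two sintering steps.
    T = np.arange(25.0, 1701.0, 1.0)
    rate = -(0.05 * np.exp(-((T - 1387) / 120) ** 2) + 0.03 * np.exp(-((T - 1677) / 60) ** 2))
    shrinkage = np.cumsum(rate)                      # toy dL/L0 curve in %, negative = shrinkage

    d_dT = np.gradient(shrinkage, T)                 # shrinkage rate, %/°C
    onset = T[np.argmax(shrinkage < -0.2)]           # first temperature with > 0.2 % shrinkage
    print(f"estimated sintering onset ≈ {onset:.0f} °C")
    print(f"maximum shrinkage rate at ≈ {T[np.argmin(d_dT)]:.0f} °C")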
Discussion
The general formula for the red-ox reaction of solution combustion synthesis (Equation (1)) suggests that the byproducts consist of the non-toxic gases CO2, H2O and N2. However, the FT-IR analysis of the gases emitted during the reaction also showed species such as HCN, HNCO and NH3 (Figure 3). This indicates that the SCS reaction (1), even when conducted in a very well controlled environment with a homogeneous temperature distribution (a small sample in a crucible inside the TGA chamber), is accompanied by decomposition reactions of the substrates: yttrium nitrate and glycine.
What is more, it was observed that already below the ignition point of the redox reaction (238 °C) the presence of HCN, NH3 and NO2 was detected on the FT-IR spectra (Figure 3). These species appear already at a temperature of about 124 °C.
Such observations have also been made by Biamino et al. [27] in an investigation of SCS with urea as the fuel. In that work [27] it was suggested that the emission of nitrogen oxides derives from the direct reaction of the nitrate with urea, which occurs at a temperature below the reaction ignition point. In the case of the reaction of yttrium nitrate with glycine, signals deriving from NO2 and HCN are visible in the FT-IR spectra already at a temperature of about 124 °C (Figure 3). According to [27], the corresponding reactions of glycine and yttrium nitrate can be described as follows (6, 7): in both cases the reaction proceeds with the formation of carbon oxides, which were not detected during the measurement below the ignition point (238 °C). Reactions (6) and (7) may take place during the exothermic reaction after reaching the ignition point, as HCN and NO2 were detected then, together with carbon dioxide (the measurement was carried out in air, which can cause the oxidation of carbon monoxide and cyanic acid to isocyanic acid). This suggests that the reduction of the nitrate does not cause a complete degradation of the glycine carbon chain. Presumably, the presence of nitrate in the solution can trigger the first step of glycine degradation, i.e., removal of ammonia, which is visible in the FT-IR spectra and can cause a degradation of the cyclic amides (5). Ammonia and intermediate compounds derived from the decomposition of the cyclic peptides may react with the nitrate in accordance with the general formulas (8, 9). This assumption is consistent with macroscopic observations, as precipitation was observed when solutions containing the reagents were left to age for a month at room temperature, which suggests that a reaction between the reagents took place.
Combustion synthesis is a widely used method for the production of nanopowders on both the laboratory and semi-technical scale. The presented results indicate that for scaling-up of the process special precautions must be taken to provide for the neutralization of the hazardous nitrogen-derived compounds.
The reaction of yttrium nitrate with glycine leads to the fabrication of a nanostructured powder. The SEM observation of the powders showed an agglomerated, "sponge-like" microstructure of the particles. The structure remains stable at 800 °C, as the microstructure of the powder remains intact after calcination at this temperature. Such a morphology may be beneficial in some applications [1][2][3][4]. However, for ceramic technology the agglomeration is undesirable, as the densification of the nanopowder is severely hindered. After calcination at a temperature of 1100 °C the microstructure of the powder was modified and the "sponge-like" structure transformed into agglomerates consisting of globular grains with a diameter of about 100 nm. Milling of the powder calcined at the higher temperature was more effective, as the particle diameter measured by DLS was decreased to 275 nm. Despite the milling, some agglomeration was observed in the size distribution curve (Figure 6). Further studies will focus on the optimization of the milling conditions.
Sintering of the calcined powder starts at a temperature of 1149 °C and proceeds in two distinctive stages. Such behavior indicates that the sintering process is divided into two stages: densification of spherical particles (reorganization of particles without particle growth) and grain growth, which was also observed by other researchers [28]. Another explanation of this phenomenon is that the first densification stage corresponds to the sintering of grains within the agglomerates; afterwards, the sintering of the agglomerate domains and of the separate particles takes place [29,30].
Conclusions
Solution combustion synthesis is an effective method for yttria nanopowder fabrication. The high-temperature self-propagating red-ox reaction between yttrium nitrate and glycine lowers the temperature of nitrate decomposition and yttria formation from 642 to 263 °C. The vast amounts of gases emitted during decomposition result in the formation of nanoparticles. The yttria powders obtained by the SCS method have a high sintering ability and can be applied in ceramic technology. | 8,229.2 | 2020-04-27T00:00:00.000 | [
"Materials Science"
] |
Effect of Construction Business Relationship Situation on Design Service Delivery in Ghana
Literature describes the construction business relationship (CBR) situation in developing countries like Ghana as harsh, adversarial and non-collaborative, resulting in several business relationship challenges among Design Service Delivery (DSD) actors. This non-collaborative business relationship situation causes discords, disputes and conflicts (DDC), affecting improvement of DSD activities. This paper seeks to describe the current CBR situation among DSD actors in the Ghanaian construction industry (GCI) and its effect on the development and constitution of supply chains of information flow (SCIfs). Drawing on action-oriented system theory and on system thinking and rethinking approaches, the current CBR situation among DSD actors is isolated for conceptual and empirical understanding. A case study approach was used to achieve the study objectives, and the data were analysed using content analysis and Pareto analysis. Stringent eligibility criteria enabled nine different professional groups of five DSD actors each to be purposively selected for the study. Fourteen percent (14%) of DSD actors described the current CBR situation as "lacking harmonization of professional work and good business relationships" and "hostile, frustrating, with tension and conflicts", 13% described the situation as "lacking interdependencies and sustainability" and 9% described it as "having mixed relationships of affiliates and training mates relationships", among others. The resultant effects of the current CBR situation among DSD actors include "difficulties in sharing and exchanging information", "disturbance of time schedule/control", "reduction in quality of work", and "cost ineffectiveness". The paper makes an important theoretical contribution to knowledge by providing an empirical description of the current CBR situation and its effect on design service delivery in the construction industry, especially in the context of developing countries. It brings to the fore the need for proactive management action to help address the situation in developing countries such as Ghana.
Introduction
Construction business relationship concerns working associations or ties between or among individuals or parties in construction activities (Jiang et al., 2012; Hawkins, 2011). Construction business relationship is a kind of association which develops among individuals or parties (actors) engaged in construction. It encompasses both internal and external relationships of parties in organizations such as a Design Service Delivery (DSD) entity (Hawkins, 2011). The associations or the connections which exist between or among the actors are nurtured on some useful relationship factors such as collaboration, trust, communication, commitment, improvement and continuous improvement, marketing skills, alignment of objectives, joint problem solving, and risk handling/allocation, among others (Meng, 2010).
A number of studies suggest that adversarial relationships and poor communication practices are common in the construction industry (Anim, 2012; Laryea, 2010; Pryke, 2009; Yiu & Cheung, 2006; Anvuur et al., 2006). This situation has adverse effects on supply chains of information flow (SCIfs) and on business relations. These relationship challenges stem from an entrenched culture of professional autonomy, lack of interdependency and mistrusted relationships, which disturbs collaborative practices (Jaffer et al., 2011; Laryea, 2010; Chan et al., 2004; Latham, 1994). The literature has also recommended several collaborative initiatives, such as alliancing (Yeung et al., 2007), partnering (Bresnen, 2007; Alderman and Ivory, 2007; Kadefors et al., 2007; Wong and Cheung, 2004; Naoum, 2003; Bresnen and Marshall, 2002) and integration of teams (Baiden et al., 2006), but these are not frequently applied in the development of SCIfs. The real effects of these collaborative initiatives are not clearly understood, especially in the context of developing countries (Anim, 2012; Pryke, 2009).
The business relationship situation in which most construction works contracts are procured by design service delivery (DSD) actors (DSD practitioners and contractors) is indeed non-collaborative and harsh or adversarial (Laryea, 2010; Anvuur et al., 2006). This trend undermines not only the products of contracting but also DSD activities (Odusami et al., 2003). Despite the considerable evidence of non-collaborative and adversarial relationships documented in the literature, the real characteristics of the construction business relationship (CBR) situation and their actual effect on SCIfs and business relations among DSD actors are not clearly established and documented. Using the Ghanaian situation, this paper makes an important contribution by bringing to the fore the exact characteristics of the CBR situation and the actual effects it has on SCIfs and business relations among DSD actors in the construction industry. Thus, while this paper lends credence to the view that the CBR is non-collaborative and adversarial in nature, further evidence drawn from both qualitative and quantitative approaches describes the characteristics of the current business relationship situation and its actual effects on DSD activities regarding the production of SCIfs and business relationships.
Design Service Delivery (DSD)
DSD activities involve actor groups whose construction business contributions are essential for the effective and efficient development of SCIfs (Edum-Fotwe et al., 2001). Whether the SCIfs benefit from regular contributions from the different actor groups depends largely on the kind of construction business relationship which exists among them (Hawkins, 2011). As noted by the Science Applications International Corporation, SAIC (2002, p. 3), "a business relationship provides a mutual forum in which the goals and influences that affect the achievement of a desired objective interact". In this sense, the objectives the DSD actors desire to achieve are the development and use of SCIfs for construction projects (Hatmoko and Scott, 2010). Further, a successful construction business relationship is about the DSD actors achieving their pressing individual and collective goals and objectives as successfully as their respective responsibilities require (SAIC, 2002; Yeo & Ning, 2002). The business relationships must also be structured to allow the maximum chance of realizing a win-win-win situation with impartial benefits to all clients and the DSD actors (Pryke, 2009; SAIC, 2002).
Design service delivery provides an arena for construction design interaction requiring harmonious, cordial business relationships among DSD actors to produce the dynamics needed to achieve collaborative DSD businesses (Anim, 2012; Hawkins, 2011). Very often, however, the proper and appropriate development of construction business relationships, a useful ingredient for the cordial and successful development of SCIfs, is neglected. The neglect results from an overemphasis on the approaches used in planning, sourcing, making and delivering SCIfs, and from overzealousness to obtain more contracts (Hawkins, 2011; Yeo & Ning, 2002). Because of these weaknesses in DSD activities, the development of the construction business relationship often escapes the DSD practitioners, professionals trained in project management, architecture, structural engineering, quantity surveying, geomatic engineering, services engineering, geotechnical engineering and planning, who produce the primary information flow. This supposedly happens through the notion that the practitioners have the right genes, insights and sufficient education to develop and sustain cordial relationships (Hawkins, 2011). Consequently, in the collective DSD activities of developing and constituting SCIfs, the CBR is brushed aside, with perhaps only the maintenance of relationships between clients and individual professionals (Hawkins, 2011; Seebass, 2008; Gouveia & Ros, 2000).
The contractors who receive and use the information from initiation, planning, executing and controlling to the closing of a project experience the most significant form of harsh or adversarial CBR in the Ghanaian construction industry (Laryea, 2010). DSD practitioners or professionals, whether in-house or external consultants working in firms for clients, have many CBR challenges with contractors (Jaffar et al., 2011; Orgen et al., 2011; Laryea, 2010). These professionals form part of clients' organizations and need to develop the appropriate culture, through a change of mindset, for cordial relationships (Cheung & Rowlinson, 2005). This change of relationship culture should be treated as urgent by the agents of principals (clients) who run agencies for DSD works, producing SCIfs for selected contractors (Hawkins, 2011; Cheung & Rowlinson, 2005). The DSD practitioners provide the SCIfs, which are different from other supply chains such as the flow of materials, labour, plant and equipment, including temporary works (Hatmoko & Scott, 2010).
These supply chains of information flow consist of chains of project documentation such as drawings, bills of quantities, specifications, contract conditions, spot levels, geotechnical reports, explanations and clarifications, which form the basis of all activities in the project (Edum-Fotwe et al., 2001). The DSD work of providing SCIfs supports decision-making, which affects the planning, executing, controlling and closing of projects. DSD practitioners are therefore responsible for the conduct of information sharing between the supply chain members. Failure to manage the CBR for the best development of SCIfs thus has profound consequences for overall construction project delivery (Hawkins, 2011; Titus, 2005), because information sharing among members is key to effective and efficient supply chain management of projects (Hatmoko & Scott, 2010; Titus, 2005). Hence, delay in the supply chain of information flow may slow down the decision-making of all the project teams, a phenomenon identified as the main cause of delay in project delivery (Chan & Kumaraswamy, 1997).
In the current CBR situation, however, DSD actors are observed to experience considerable relationship instability, which causes discords, disputes and conflicts (DDC), leading to a non-collaborative and harsh or adversarial relationship with various effects on the improvement of DSD activities (Orgen et al., 2012a; Laryea, 2010; Anvuur et al., 2006; Proenca & De Castro, 2005). Such CBR challenges, which cause delays in developing and constituting SCIfs, can slow down the decision-making of all the project teams, a situation identified as the main cause of delays in project delivery (Ramus & Birchall, 2006; Sahin & Robinson, 2002; Chan & Kumaraswamy, 1997). Besides, these problems are potential sources of DDC. They further create a relationship-instability cycle which leads to continuous delays and the subsequent destruction of all project objectives or the abandonment of projects (Ramus & Birchall, 2006; Proenca & De Castro, 2005).
As built environment professionals in a developing country, DSD practitioners, including contractors, face further CBR challenges emanating from the uncertainties of weak economies, which do not encourage the building of effective or stable relationships (Hawkins, 2011; Proenca & de Castro, 2005). The challenges are made more pronounced by the highly fragmented character of the construction industry, reiterated in much of the literature, and by the low levels of trust existing among the actors (Jiang et al., 2012; Pryke, 2009; Bresnen, 2007; Baiden et al., 2006; Naoum, 2003; Bresnen & Marshall, 2002; Egan, 1998; Latham, 1994). These relationships and other responsibilities tend to push aside the need for the development of appropriate collaborative DSD construction business relationships (Anim, 2012; Hawkins, 2011).
The scale of the CBR effects in developing SCIfs eludes DSD actors as they seek more contract opportunities (Hawkins, 2011). This elusiveness has increasingly encouraged a loss of sight of the threats that relationship failures pose to developmental efforts, such as seeking mergers, in developing and constituting SCIfs (Hawkins, 2011). It makes the elimination or reduction of adversarial business relationships a mirage, disturbing or holding back the improvement of DSD activities (Jiang et al., 2012; Hawkins, 2011). For instance, as the DSD actors become aggressively desirous to win and gain from contracts, the intention to maintain harmonious, cordial business relationships among DSD actors, which would produce the dynamics of collaborative DSD businesses, is usually constrained (Anim, 2012; Hawkins, 2011). This is illustrated in the statement of Pryke (2009), supported by Skitmore and Smyth (2007), who report that the non-collaborative behavioural culture and adversarial business relationships in some developed economies like the UK are characterized by cutting of tender figures or project costs.
In Ghana, the CBR situation in the most common traditional system of procuring contracts, where design is separated from production, causes divisions among DSD actors, with some clients' requirements and decisions failing to appear in tender documents, leading to variations in the construction phase (Laryea, 2010; Anvuur et al., 2006). Besides, CBR challenges such as poor communication cause the SCIfs to be characterized at times as inconsistent and lacking coherence with law and best practice (Hatmoko and Scott, 2010; Public Procurement Authority, PPA, 2010b; Chan et al., 2004; Latham, 1994). In a similar vein, Odusami et al. (2003) indicated that it is not uncommon to observe uncoordinated supply chains of information flow for DSD activities in the Nigerian construction industry. These problems occur partly as a result of the lack of proper allocation and location of authority of control among the DSD practitioners for the improvement of DSD activities (Orgen et al., 2011, 2012a).
The problems are compounded by the fact that research into the causes of cost overrun indicated that five out of the eight problems identified are design-management related (Odusami et al., 2003). These problems, which are not different from some of the CBR issues in Ghana, include non-compliance of design with planning or statutory requirements, incomplete design at the time of going to tender, lack of co-ordination, ambiguity of risk allocation and inadequacy of management control (Odusami et al., 2003). They keep surfacing because no evidence is found to show that the DSD actors consider CBR a critical collective ethos and persona for the effective and efficient development of SCIfs to improve DSD activities (Hawkins, 2011).
On the contrary, the question of who is best placed to lead the project team or the DSD practitioners is frequently another major source of controversy breeding DDC, which sometimes ends in non-co-operation and adversarial business relationships amongst the DSD practitioners (Orgen et al., 2011, 2012a). These occurrences are common especially where actors who are not project managers claim to be (Ahadzie et al., 2014). In Ghana, the enactment of the Public Procurement Act 2003, Act 663, in which recognition is now given to the title Project Manager (PM), is very striking in the annals of procurement practice in Ghana. Hitherto, the articles of agreement mentioned the architect, especially for building works, and the engineer for civil engineering works. However, a lot of collaborative work still rests on the shoulders of both the DSD actors and clients (largely government) to change the existing CBR, which is responsible for the non-collaborative, harsh and adversarial nature of the traditional approaches and the poor managerial and administrative fragmented practices associated with projects (Anim, 2012; Hawkins, 2011).
To view CBR as either trusting or adversarial in orientation is no doubt too simple, as both strategies co-exist in human attitudes and behaviours. The profound worry, however, is that the latter can completely destroy all project objectives or the improvement of DSD activities (Jiang et al., 2012; Orgen et al., 2012a; Ramus & Birchall, 2006).
Building business relationships is increasingly accepted as a key success factor without which construction, or any other business, cannot thrive (Hawkins, 2011). It is helpful to understand that construction business relationships, like any other business relationship, are multidimensional. The broad view is that a business relationship carries value, but there is a huge potential risk if relationships fail in the activities of an entity, such as DSD activities (Hawkins, 2011). For that matter, a delay in the arrangement or execution of an activity can be viewed as a lack of commitment, one of the business relationship factors, and can cause serious, lasting damage to the DSD relationship. This is because delays can trigger negative effects such as cost overruns, low or lost profits, increased DDC and subsequently heavy payments in lawsuits between clients and DSD actors, sometimes with contract termination (Owolabi et al., 2014).
The literature is replete with evidence of issues that cause non-collaborative working and adversarial business relationship situations among DSD actors (DSD practitioners and contractors). However, the CBR situation has not benefited from extensive and concerted investigation into its effects on DSD activities. This study and the discussion which follows provide further empirical evidence describing the characteristics of the current business relationship situation and its actual effects on DSD activities regarding the production of SCIfs and business relationships. This can assist DSD actors to agree on common procedures that will have a real impact on the situation, supporting their efforts to build collaborative businesses for effective SCIfs and improved DSD activities.
Construction Discords, Disputes and Conflicts in Perspective
Contentious issues in construction, which generate construction Discords, Disputes and Conflicts (DDC) and end in non-collaborative and adversarial relationships, can be related to a definition in the social psychology literature. The Heidelberg Institute for International Conflict Research (2005) defines DDC as the clashing of interests (positional differences) over national, group or individual values, of some duration and magnitude, between at least two parties (organized groups, states, organizations, individuals) that purposefully pursue their interests and seek to win their cases (Axt et al., 2006; Yiu & Cheung, 2006). Humans are by nature social beings, forming groups out of shared interests and needs (Vold, 1958; Misis, 2010), and in the process generate DDC due to differences in interest, which sometimes result in non-collaborative working and adversarial business relationships.
The intention among the various DSD actors is to form groups or teams to achieve shared interests and needs, but egoistic tendencies revealed in attitudes and behaviours take control of the DSD actors' attitudinal behaviour, ending in increased inward looking, dishonesty, little or no communication, non-commitment, increased competition and lack of concern for others (Williams & McShane, 2010; Misis, 2010). Similar egoistic tendencies are described by Williams and McShane (2010) and confirmed in the work of Misis (2010), which shows that the interests and needs of DSD groups interact and produce competition over maintaining and/or expanding one group's position relative to others in the control of valuable resources (money, time, new projects, education, information and the like).
Granted that differences in interests among humans are a potential source of such DDC, it is not surprising that an adversarial business relationship exists among the DSD practitioners and between them and contractors (Yiu & Cheung, 2006). The disturbing and unacceptable aspect is that such attitudinal behaviour holds back construction project performance due to the lack of alignment of the interests and objectives of partners (Lee, 2006).
Theoretical Framework
Construction DDC arises frequently over items or issues which are considered consensually valuable, as with conflicts in other fields of endeavour (Dahrendorf, 1959; Axt et al., 2006). In giving further detail, the sociologist Deutsch (1973) clearly delineates five basic issues over which conflict can arise, namely: control over resources, preferences and nuisances, beliefs, values, and the nature of the relationship. In the construction industry, and in terms of the foregoing distinction of issues of conflict, there is an indication that a limited number of projects and time-bound projects or consequences, such as liquidated and ascertained damages, have the potential to cause DDC (Yiu & Cheung, 2006).
The usual course of DDC passes through a certain intensity scale with a series of phases: a beginning phase, a developmental phase and an end phase (Axt et al., 2006; Yiu & Cheung, 2006). It is this kind of dynamic development that produces DDC in phases and contributes to the persistence of non-collaborative working and adversarial business relationships. Indeed, it is the dynamics, intensity and persistence of the DDC that disturb or disallow collaborative working, business relationship development, and the preservation of business relationship improvement and continuous improvement in DSD activities (Axt et al., 2006). The characteristics of the business relationship situation in which SCIfs are developed and constituted, together with cultural issues, also contribute to DDC, consequently affecting DSD activities. Hofstede (1986), in a well-known study of culture, found that human behaviour is not random but predictable. People carry mental plans and agendas that can be seen indirectly through the attitudes and behaviours they display. The plans and agendas of the DSD actors in developing and constituting SCIfs, like those of all other humans, influence their collaborative or non-collaborative decisions, policies, beliefs and attitudinal behaviour in dealing with all DSD activities. In particular, the first three of the four cultural dimensions that Hofstede (1986) established (power distance, individualism-collectivism, uncertainty avoidance, and masculinity-femininity) clarify the situation (Gouveia & Ros, 2000).
Figure 1 presents the conceptual framework illustrating the construction business relationships existing among the DSD actor groups. As indicated in Figure 1, the theorization holds that the effects of the existing non-collaborative and harsh or adversarial situation lead to weak or failed business relationships. The framework draws on action-oriented system theory, the theory of action, and system theory thinking and rethinking in multi-theory building (Jugdev, 2004; Harriss, 1998; Seymour et al., 1997). In multi-theory building, the system theories discussed are taken as an integral part of the rethinking processes (Pickel, 2007). Further, in the Rethinking System Theory (RST) each system takes all other systems as its environment (global and Ghanaian), an ontological position that allows greater flexibility in the conceptualization of systems than one based on the part-to-whole distinction (Pickel, 2007, 2004). In this regard, a system cannot be defined only by the set of its elements: structure, components and their relations to an environment. The actual process mechanisms (bonds) that make the system a system, which in the complex real world allow self-organisation, must also be included (Orgen et al., 2013a, 2012b; Pickel, 2007). These assisted in drawing out the effects of the CBR on the DSD activities. According to Bunge (2004), "systemism" is like holism; the difference is that it encourages the analysis of wholes into their constituents and is therefore never in harmony with the intuitionist epistemology inherent in holism. The DSD practitioners and contractors should therefore be treated as the producers of any social whole (i.e. DSD activities). Coleman and Ostrom (2011), Seebass (2008) and Tuomela (1991) also indicate that the Theory of Action (TA) is intention-driven. The aspect of TA relevant to this theorization concerns collective action by all DSD actors. It is a kind of collective action based on the following steps: pre-condition, action, and results or consequences. The TA is concerned with the I-intention of an action, which is weaker than the We-intention or We-sense. In other words, the separate action of an individual is not comparable to the joint action of individuals in a group. A joint goal that depends on "We thinking" or the effort of the We-intention, for example to assess or improve DSD activities, is stronger than the I-intention (Coleman & Ostrom, 2011). Applying the "We-intention" or "We thinking" to make SCIfs effective and efficient involves act-relational intentions. An act-relational intention produces a full-blown, stronger "We-sense" of effective and efficient collaborative working to reduce cost and time and to achieve high-quality design service delivery. The achievement of success in DSD activities is caused by the joint effect of the We-intention (Tuomela, 1991). This offers inputs and outputs for the multi-system theorizations, showing the aspect of TA essential for collaborative working and appropriate business relationships (Seebass, 2008).
Another theory used in conjunction with the TA in the theorization process is System Theory (ST). ST is an interdisciplinary theory about every system in nature, in society and in many scientific domains, as well as a framework which can be used to investigate phenomena from a holistic approach (Mele et al., 2010). From a multidisciplinary point of view, a system is defined as an entity which is a coherent whole (Mele et al., 2010; Ng-Maull & Yip, 2009) with a perceived boundary around it that distinguishes internal and external elements, such as clients', sub-contractors' and construction suppliers' activities outside the DSD entities. It also identifies inputs and outputs connected to and emerging from the entity. On this basis, Mele et al. (2010) state that ST is a theoretical perspective that analyses a phenomenon seen as a whole (i.e. the DSD activity) and not simply as the sum of elementary parts, such as the individual professional SCIf works (sub-SCIfs) or the separate works of the individual DSD actors (Orgen et al., 2013a).
Another important aspect of ST useful in strengthening this theorization is system thinking, developed from a shift in attention from the parts to the whole (Orgen et al., 2013a, 2012b; Mele et al., 2010). This shift makes the sub-SCIfs integrate and interact when handling a DSD phenomenon, revealing the properties of the single parts. These parts are the different professions, such as the Project Manager (PM), Architect (Arc), Quantity Surveyor (QS), Services Engineer (Ser Eng) and Structural Engineer (St Eng), acting distinctly as "I"s or in absolute union as system elements (i.e. sub-SCIfs, or the rationally connected work of the DSD actors) (Mele et al., 2010) (see Figure 1). The core problem of system thinking revolves around causation and reductionism (Pickel, 2007), which can be further addressed by the rethinking system theory (RST), "systemism".
Method
An extensive literature review was conducted into the construction business relationship situation among DSD actors in developing and constituting SCIfs. A qualitative strategy was used for the study (Baxter & Jack, 2008; Zainal, 2007). It was useful for describing the current CBR situation and for studying the actual effects of the situation on DSD activities (Baxter & Jack, 2008). Within this qualitative research, a descriptive case study design was employed, which enabled the necessary multi-theory theorizations to be carried out on CBR in the discussions (Zainal, 2007).
Non-probability purposive non-proportional quota sampling was most suitable and was used for the study (Gravetter & Forzano, 2006; Landreneau & Creek, 2003; Kumekpor, 2002; Greemstein, 2001). The purposive approach was necessary because the DSD population is concentrated in two to three urban centres in Ghana (Kumekpor, 2002). Also, to obtain a sample with representative views of the DSD population, five-point eligibility criteria were applied, involving a minimum of ten years' working experience after obtaining professional association membership, the size (scale) of projects undertaken, the number of DSD actors involved in the execution of projects, professional status, and local, national or international awards obtained (Baxter & Jack, 2008; Devers & Frankel, 2000).
One hundred and thirty-two DSD actors were present in the various organizations, out of which 50 DSD actors, comprising 13 from public and 37 from private organizations, satisfied the interview eligibility criteria (Kumekpor, 2002; Devers & Frankel, 2000). Based on the non-probability purposive non-proportional quota sampling, only 45 senior DSD actors (i.e. Executive Officers or Directors of the organizations) were selected out of the 50 for interview (Gravetter & Forzano, 2006; Landreneau & Creek, 2003; Kumekpor, 2002; Greemstein, 2001; Devers & Frankel, 2000). The sampling frame eligibility criteria drew into the research some of the finest DSD experts in Ghana, who have rich experience and are familiar with DSD professional practice (Devers and Frankel, 2000). In-depth interviews were carried out among DSD practitioners, including contractors who used the final designs. The interviews were conducted among the 45 interviewees, 5 from each of the 9 different DSD professions in Ghana, using an interview guide of semi-structured open-ended questions (Yin, 2003).
In the first phase of the face-to-face in-depth interviews, eighteen DSD interviewees were involved, two from each of the 9 different professions, with an average time of three hours per interviewee. This enabled an initial identification of the categories of issues that the measuring instrument should cover as a follow-up in the CBR data collection (Naoum, 2004). The categories of issues were useful in seeking justifications and other follow-up explanations concerning the effects of the CBR on DSD activities in the data collection from the other twenty-seven (27) DSD interviewees, in order to saturate the information obtained (Fellows & Liu, 2003). Data were electronically recorded, detailed summaries were written down for each interviewee, and relevant observations were recorded by the researcher. The data were collected during working hours when the various offices were in session and staff were busy at work (Baxter & Jack, 2008). Data were obtained from nine professional groups of 5 DSD actors each, and the actor groups' views of the case were studied and discussed as obtained.
Examination, coding, grouping of themes and categorisation were carried out to realize the research objectives and to ensure reliability and validity (Baxter & Jack, 2008; Devers & Frankel, 2000). The three kinds of content analysis, the conventional, directed and summative methods, were used (Hsieh & Shannon, 2005).
Details of DSD Organisations Visited
Table 1 presents the details of the DSD organizations involved in the study. The 45 senior DSD actors sampled comprised 9 public and 36 private actors. Table 1 also presents the details of the 5 DSD actors interviewed in both the public and private organizations to make up the sample size of 45 interviewees, including Project Managers, Architects, Quantity Surveyors, Geotechnical Engineers, Geomatic Engineers, Planners and Contractors. Each of these professionals had at least 10 years' working experience and was therefore familiar with DSD professional practice.
Characteristics of the Current Construction Business Relationship Situation in Ghana
Table 2 shows that twelve attributes are used by the 9 professional actor groups to describe the current CBR situation in which SCIfs are developed and constituted. These include "lack of harmonization of professional work and good business relationships" and "hostility, frustration, tension and conflicts", each with a frequency of 14%. They are followed by "lack of interdependencies and sustainability" and "mixed relationships of affiliates and training mates" (colleagues or school mates), with frequencies of 13% and 9% respectively. Four other attributes, each with a frequency of 7%, were used to describe the current CBR situation: "low motivation", "no command structure", "harsh system of falsification of documents and greed" and "misinterpretation of documents by DSD actors". The remaining four attributes, each with a frequency of 5%, are "business-like relationships", "detrimental competition", "no agreed practitioners' cost inputs on works" and "client dissatisfaction". (Key to Table 2: √ = emerging attributes.) Figure 2 shows a Pareto plot of the attributes describing the current CBR situation in which SCIfs are developed and constituted in Ghana. The Pareto plot is useful for ranking the attributes and for selecting the critical ones for remedying. The plot shows eight critical attributes, including "lack of harmonization of professional work and good business relationships" and "hostility, frustration, tension and conflicts", each with a frequency of 14%, together with "lack of interdependencies and sustainability" at 13% and "mixed relationships of affiliates and training mates" at 9%. The other four critical attributes are "low motivation", "no command structure", "harsh system of falsification of documents and greed" and "misinterpretation of documents by DSD actors", each with a frequency of 7%.
Of the eight critical attributes identified using the Pareto plot (Fig. 2) as describing the current CBR situation in Ghana, seven are negative attributes pointing to business relationship challenges: "lack of harmonization of professional work and good business relationships", "hostility, frustration, tension and conflicts", "lack of interdependencies and sustainability", "low motivation", "no command structure", "harsh system of falsification of documents and greed" and "misinterpretation of documents by DSD actors". The one positive attribute identified is "mixed relationships of affiliates and training mates", for example colleagues or school mates. (A1 to H1 denote the critical attributes in Table 2.)
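To make the ranking step concrete, the short Python sketch below reproduces the Pareto analysis schematically: it sorts the reported attribute frequencies and overlays the cumulative percentage. The attribute labels are abbreviated and the percentages are those quoted above, so this is only an illustrative reconstruction of Figure 2, not the original analysis output.

```python
# Schematic Pareto ranking of the CBR attributes (frequencies as reported in Table 2).
import matplotlib.pyplot as plt

freq = {
    "Lack of harmonization of professional work": 14,
    "Hostility, frustration, tension and conflicts": 14,
    "Lack of interdependencies and sustainability": 13,
    "Mixed relationships of affiliates/training mates": 9,
    "Low motivation": 7,
    "No command structure": 7,
    "Falsification of documents and greed": 7,
    "Misinterpretation of documents": 7,
    "Business-like relationships": 5,
    "Detrimental competition": 5,
    "No agreed practitioners' cost inputs": 5,
    "Client dissatisfaction": 5,
}

# Sort by frequency and compute the cumulative percentage.
items = sorted(freq.items(), key=lambda kv: kv[1], reverse=True)
labels = [k for k, _ in items]
values = [v for _, v in items]
total = sum(values)
cumulative, running = [], 0
for v in values:
    running += v
    cumulative.append(100 * running / total)

fig, ax1 = plt.subplots(figsize=(10, 4))
ax1.bar(range(len(values)), values)
ax1.set_xticks(range(len(labels)))
ax1.set_xticklabels(labels, rotation=75, ha="right", fontsize=7)
ax1.set_ylabel("Frequency (%)")

ax2 = ax1.twinx()                       # cumulative curve on a secondary axis
ax2.plot(range(len(values)), cumulative, marker="o", color="black")
ax2.set_ylabel("Cumulative (%)")

plt.tight_layout()
plt.show()
```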
Effect of Construction Business Relationship situation on DSD Activities
Table 3 presents a summary of the DSD actors' descriptions of the effects of the current CBR situation on DSD activities in Ghana. The effects identified from the study included, among others, lack of time control and delays in DSD activities, reduction in the quality of DSD products, cost ineffectiveness, lack of feedback and of information inflow and outflow, shoddy work, confrontational issues, and lack of effectiveness and efficiency.
Project Managers
Cause delays in DSD activities, disturbing improvement in DSD time schedules.
However, there are improvements in quality of DSD activities and value for money of some SCIfs obtained through collaborative master programmes.
Architects
Reduce the quality of DSD design products and also make SCIfs cost ineffective. But the situation is different where competent actors improve the DSD activities.
Quantity Surveyors
Disturb the effectiveness and efficiency of SCIfs, blocking expansion and improvement of the quality of DSD actors/products, and encourage shoddy work.
Services Engineers
Hold back the improvement of DSD, encouraging unhygienic and haphazard infrastructure activities.
Structural Engineers
Reduce the quality of SCIfs and make DSD less cost effective. But in a few situations they improve the DSD activities by reducing errors to achieve lower cost and save time.
Geotechnical Engineers
Disturb information sharing and disallow the effective development of SCIfs through many confrontational issues that affect the improvement of the quality of SCIfs.
However, some situations foster the right frame of mind to exchange project information freely to improve DSD products of SCIfs in legal and cost control terms.
Geomatic Engineers
Prevent a holistic approach in developing and constituting SCIfs. This affects standards and ignores important details, preventing meaningful improvement in the quality, cost and time control of DSD activities over the project life cycle.
Planners
Result in incomplete SCIfs which are ineffective, inefficient and substandard, affecting the improvement of DSD activities through ignored procedures and an unwillingness to learn and to adopt changes. Also cause unstable development from poorly constituted SCIfs, limiting improvement of the DSD activities through knowledge acquired in other designs.
Contractors
Create difficulties in the inflow and outflow of project information required for SCIfs. These disturb the improvement of DSD cost control and time due to non-compliance with regulations, rules and other legal issues, and cause defects in the SCIfs that affect the improvement of the total quality, cost and time of the DSD activities.
Discussion Relating to Characteristics of the Current Construction Business Relationship Situation
Attributes used by the different DSD actor groups to describe the characteristics of the current CBR situation as given in Table 2 assisted and strengthened the interpretations provided for the various responses from the groups (Fellows & Liu, 2003).
Of the eight critical attributes identified using the Pareto plot (Fig. 2), all the 9 DSD actor groups except the Quantity Surveyors, the Geomatic Engineers and the Contractors used "lack of harmonization of professional work and good business relationships" (frequency 14%) to describe the current CBR situation. These critical attributes confirm the existence of non-collaborative and harsh or adversarial relationships (Laryea, 2010; Anvuur et al., 2006). Five DSD actor groups, the Structural Engineers, Geotechnical Engineers, Geomatic Engineers, Planners and Contractors, used "hostility, frustration, tension and conflicts" (frequency 14%) to describe the current CBR situation, and another group of five, the Architects, Quantity Surveyors, Service Engineers, Geotechnical Engineers and Planners, used "lack of interdependencies and sustainability" (frequency 13%). The critical attributes used by the actors suggest that there is evidence of DDC which disturb the business relationship among the DSD actor groups and the improvement of DSD activities (Axt et al., 2006; Yiu & Cheung, 2006). The Project Managers, Architects, Quantity Surveyors and Geomatic Engineers used "mixed relationships of affiliates and training mates" (frequency 9%) to describe the current CBR situation. It can be inferred from this attribute that DSD actors are not engaged or employed wholly on the basis of performance or competence (Orgen et al., 2011; Laryea, 2010; Cheung & Rowlinson, 2005). The remaining four of the eight critical attributes were each used by four DSD actor groups with a frequency of 7%. The frequency of usage of each critical attribute indicates how appropriate the description is to the current CBR situation. Thus, the descriptions "lack of harmonization of professional work and good business relationships" (14%), "hostility, frustration, tension and conflicts" (14%), "lack of interdependencies and sustainability" (13%) and "mixed relationships of affiliates and training mates" (9%) are more appropriate to the current CBR situation than the others. The CBR situation illustrated by these critical attributes suggests a somewhat more severe situation than that described in the work of Laryea (2010) and Cheung and Rowlinson (2005), but is in line with that of Axt et al. (2006) and Yiu and Cheung (2006).
The use of "lack of harmonization of professional work and good business relationships" to describe the current CBR situation shows that majority of the DSD actors see the current CBR as lacking enough cordial or smooth relationship for free open system to share or exchange project information.These can be inferred to be not similar to open system explained in the study of Anim (2012); Mele et al. (2010) and Loo (2003).The use of "hostility, frustration, tension and conflicts", is evidence in support of the true issues underpinning the lack of harmonious professional relationship among DSD actors in Ghana.Hostility, frustration, tension and conflicts generate DDC which can completely destroy any improvement of infrastructural project objectives.This attribute also serves as a potential cause of non-collaborative working and adversarial business relationship (Du Plessis, 2007;Axt et al., 2006;Yiu & Cheung, 2006;Adebayo, 2002).The use of "lack of interdependencies and sustainability" indicates non existence of inter-professional reliance (Yiu & Cheung, 2006).This indicates the existence of a close system of business relationship with individualism or "I-intention" or "I-sense" in which there is professions separatism characterized by non-collaborative adversarial business relationship (Coleman & Ostrom, 2011;Du Plessis, 2007;Yiu & Cheung, 2006;Mullins, 2005;Adebayo, 2002;Hofstede, 1986) Such business relationship situation makes improvement of DSD activities difficult (Axt et al., 2006;Yiu and Cheung, 2006).The use of the attribute "mixed relationships of affiliates and training mates" relationships' is somehow positive, though not the best for continuous improvement.This critical attribute is used by four out of the nine DSD actor groups i.e.Project Managers, Architects, Quantity Surveyors and Geomatic Engineers.These are professionals usually seen as working colleagues in the construction industry.They are usually trained in the same College of the Universities they attended and some take common courses and share common facilities.It is therefore no wonder that they describe the current business relationship situation as having mixed relationships of affiliates and training mates' relationships.The question, however, is whether this attribute is strong enough to result in significant improvement in SCIfs in design service delivery.
The situation of "low motivation" disturbs or distorts improvement of DSD activities, and indicates harsh or adversarial business relationship.This situation shows a close system characterized by unfair play which does not motivate DSD actor groups to be collectively collaborative in processes and procedures (Mele et. al., 2010;Yiu & Cheung, 2006;Mullins, 2005).The use of "no command structure" to describe the current CBR situation shows that the individual professions have autonomous culture of no system thinking and rethinking (Pickel, 2007;Gouveia & Ros, 2000).No coordinating command structure is in place for DSD activities (Mullins, 2005), with each DSD actor group operating independently.This situation indicates the existence of a closed business relationship with individualism or "I-intention" or "I-sense" in which there is professions separatism characterized by non-collaborative adversarial business relationship (Coleman & Ostrom, 2011;Du Plessis, 2007;Yiu & Cheung, 2006;Mullins, 2005;Adebayo, 2002;, 1986).The attribute "harsh system of falsification of documents and greed" can be inferred that this borders on corruption as confirmed in some previous study (Ameyaw et al., 2013;Orgen et al., 2012a;Anvuur et al, 2006).Corruption destroys the achievement of project objectives and does not promote improvement in design service delivery.A closed business relationship with individualism or "I-intention" or "I-sense" in which there is professions separatism promotes corruption.
The Geotechnical Engineers used six of the eight critical attributes to describe the current CBR situation in Ghana.
The Structural Engineers and the Geomatic Engineers each used five critical attributes, whilst the Project Managers, the Architects, the Quantity Surveyors and the Planners each used four. The Service Engineers and the Contractors used three and two critical attributes respectively to describe the current business relationship situation. This trend indicates the variable views of the DSD actor groups on the current CBR situation in Ghana.
Discussion Relating to Effects of Construction Business Relationship Situation on DSD Activities
The Project Managers described the effects of the business relationship as including delays in DSD activities, causing the development of SCIfs to stretch over long periods. Such delays disturb improvement in DSD time schedules, as there are no collective decisions to follow (Seebass, 2008; Chan & Kumaraswamy, 1997). In the view of the Architects, the current CBR situation reduces the quality of the DSD products, reduces cost effectiveness and disallows the exchange of feedback and innovative information for SCIf development (Anim, 2012; Loo, 2003). The Quantity Surveyors' description was that the current CBR situation disturbs the effectiveness and efficiency of the SCIfs, blocking expansion and improvement of the quality of DSD products and encouraging shoddy work.
The Service Engineers stated that the current CBR situation holds back the improvement of DSD activities, encouraging unhygienic and haphazard infrastructural development due to the lack of consultation for feedback and innovative information sharing among the actors (Anim, 2012; Loo, 2003). The Structural Engineers' description pointed to a reduction in the quality of the SCIfs, making DSD activities less cost effective due to the narrow or limited amount of information sharing (Anim, 2012). In the view of the Geotechnical Engineers, the current CBR situation disturbs or disallows the sharing of project feedback and innovative information for developing effective and efficient SCIfs (Anim, 2012; Loo, 2003). There are many confrontational issues among the DSD actors which affect the improvement of the quality of the main DSD products of developing and constituting SCIfs (Jaffar et al., 2011; Yiu & Cheung, 2006). According to the Geomatic Engineers, the current CBR situation prevents a holistic approach to the development and constitution of SCIfs by ignoring important details, which affects standards and meaningful improvement of quality. The situation further disturbs the cost and time control of DSD activities, creating difficulties over project life cycles (Yiu & Cheung, 2006).
The Planners pointed out that the current CBR situation creates incomplete SCIfs which are ineffective and inefficient, resulting in sub-standard products that affect the improvement of DSD activities. These occur as procedures are ignored, with an unwillingness to learn, to adapt to change and to adopt the change of mindset needed for the continuous improvement of DSD activities (Cheung & Rowlinson, 2005). The view of the Contractors was that the current CBR situation creates difficulties in the inflow and outflow of project information required for the development and constitution of SCIfs (Anim, 2012; Loo, 2003). In their view this disturbs the improvement of DSD cost control and time schedules due to non-compliance with regulations, rules and other legal issues. The situation in turn causes defects in the SCIfs and affects the improvement of the total quality, cost and time of DSD activities (Yiu & Cheung, 2006; Odusami et al., 2003).
The views of all the DSD actor groups bring to the fore the issues of time, cost and quality, the three traditional performance indicators in the construction industry (Chan, 2001). Thus, the DSD actors are generally of the opinion that the current CBR situation does not promote, but rather disturbs, the improvement of DSD activities. Jaffar et al. (2011) reported three business relationship challenges which disturb the improvement of DSD activities: contractual, technical and attitudinal-behavioural relationship challenges. However, in the developing and constituting of SCIfs by the DSD actors, only the technical and attitudinal-behavioural relationship challenges are considered, since business relationships among the DSD actors are non-contractual and so contractual challenges may not exist (Jaffar et al., 2011). Delays in DSD time schedules, causing SCIfs to stretch over long periods when no collective decision or format is there to be followed (Table 2), can be a technical challenge to the development and constitution of SCIfs (Seebass, 2008; Chan & Kumaraswamy, 1997). Reduction in the quality of DSD design products and in cost effectiveness, also among the negative effects of the current business relationship situation (Table 3), can be attributed to both technical and attitudinal-behavioural relationship challenges. Lack of experience or competence can result in poor quality design and cost ineffectiveness, disturbing the improvement of SCIfs.
The results also point to the fact that information sharing among members is key to the effective and efficient supply chain management of projects (Hatmoko & Scott, 2010; Titus, 2005). The SCIf consists of documentation such as drawings, bills of quantities, specifications, contract conditions, spot levels, geotechnical reports, explanations and clarifications, which form the basis of all activities in a project (Edum-Fotwe et al., 2001). The DSD activities of providing SCIfs support decision-making, which affects the planning, executing, controlling and closing of projects. Smooth information flow and information sharing can improve the performance of DSD actors in developing and constituting SCIfs. Delay in the supply chain of information flow may slow down the decision-making of all the project teams, and is identified as the main cause of delay in project delivery (Chan & Kumaraswamy, 1997).
Application of the "We-intention" or "We thinking", an act-relational intention which produces full blown stronger "We-sense" of effective and efficient collaborative working, can help reduce cost, time and achieve high quality design service delivery (Tuomela, 1991).A shift in attention from the parts to the whole (Orgen et al., 2013a(Orgen et al., , 2012b;;Mele et. al., 2010) such that the sub-SCIfs are integrated and are in absolute union as system elements of DSD activities can also help reduce the effects of the current CBR situation and improve DSD in Ghana.
Some positive effects of the current business relationship situation on the improvement of DSD activities were reported by some DSD actors (Table 3). Some SCIfs achieve value for money and record improvements in the quality of DSD activities through collaborative master programmes. Such positive effects may be due to the influence of competent DSD actors who work to reduce errors, achieve lower costs and save time through a fair amount of cooperation in an open system, as theorized under system theory (Mele et al., 2010). Some actors also create the right frame of mind through regular consultations to freely exchange project information to improve SCIfs. These are demonstrations of a collaborative, less adversarial business relationship among DSD actors, which results in the improvement of DSD activities.
Conclusion
The study has provided empirical evidence to support the view that the current CBR in Ghana is adversarial and lacks collaborative relationships among DSD actors when developing and constituting SCIfs. In the description of the relationship situation using the critical attributes, 14% of the responses characterize the situation as "lack of harmonization of professional work and good business relationships", 14% as "hostility, frustration, tension and conflicts", 13% as "lack of interdependencies and sustainability", 7% as "low motivation", 7% as "no command structure", 7% as "harsh system of falsification of documents and greed" and 7% as "misinterpretation of documents by DSD actors". The situation is also characterized, for 9% of the responses, as "mixed relationships of affiliates and training mates". In summary, these attributes characterize the existing CBR situation in Ghana.
Some of the adverse effects of the current CBR situation which hold back the improvement of DSD activities are delays in DSD activities, which disturb the improvement of DSD time schedules; reduction in the quality of DSD design products through ineffective and inefficient SCIfs; reduction in cost effectiveness; and restriction of the inflow and outflow of project information. The situation blocks expansion and encourages shoddy work and unhygienic, haphazard infrastructure development. Employing action-oriented system theory, system thinking and rethinking will result in collaborative, less adversarial relationships with the collective action of "We-intentions" towards joint goals, and with openness and transparency in the development of SCIfs by DSD actors. This will also create effective and efficient sub-SCIfs to ensure quality, cost and time effectiveness and to improve DSD activities. The few positive effects of the current CBR situation, such as the achievement of value for money, improvement in the quality of DSD activities through collaborative master programmes, and the creation of the right frame of mind to freely exchange project information to improve DSD products in legal and cost control terms, can be attributed to the influence of competent DSD actors who worked to reduce errors, achieve lower costs and save time through a fair amount of cooperation in a collaborative, less adversarial business relationship.
Figure 1 .
Figure 1. Conceptual framework for improvement of DSD activities
Figure 2 .
Figure 2. Pareto plot showing attributes describing the current construction business relationship situation in Ghana
Table 1 .
Details of DSD organizations visited
Table 2 .
Emerging Attributes describing the current construction business relationship situation in which SCIfs are developed and constituted
Table 3 .
Summary of DSD actors' description of effect of construction business relationship situation on DSD activities | 10,809.4 | 2015-04-27T00:00:00.000 | [
"Business",
"Engineering"
] |
One-dimensional diamondoid polyaniline-like nanothreads from compressed crystal aniline
One-dimensional diamondoid polyaniline-like nanothreads combine the outstanding mechanical properties of carbon nanotubes with the versatility of NH2 groups.
Introduction
Obtaining highly ordered, low-dimensional polyaniline (PANI) is a long-standing challenge of great interest to a large part of the scientific community. PANI is expected to present extraordinary electronic and optoelectronic properties, exhibiting strong potential in several fields, from fundamental chemistry to applied materials science. [1][2][3] Recently, the synthesis of a true 2D PANI framework has been reported; 1 nevertheless, owing to the complexity of the mechanism of aniline polymerization, full control of the PANI structure at the atomic scale has not yet been achieved. 4 In this framework, pressure-induced polymerization of aniline in the solid state could represent an attractive alternative route for obtaining ordered PANI.
Solid-state chemistry induced at high pressure and high temperature has been successfully used in the search for new and fascinating materials such as confined polymers and extended amorphous networks. [5][6][7][8][9] A possible advantage of these reactions is represented by the topochemical constraints posed by the crystal, which can give rise to products closely recalling the symmetry of the molecular crystal from which they are formed. 6,10,11 In some cases the pressure and temperature conditions required for the synthesis can easily be scaled up, representing a 'green' method appealing for industrial chemical synthesis, since the use of additional and polluting compounds such as initiators, catalysts and solvents is avoided. For example, high-density polyethylene was successfully synthesized at high pressure under reaction conditions completely accessible to current industrial technology. 12 Recently, the formation of a 1D, highly ordered, saturated nanomaterial with a diamond-like local structure was reported after the compression of benzene up to 20 GPa. 6 Such diamondoids are "cage-like" structures consisting of fused cyclohexane rings that exhibit complex structures and geometries. 13 With their unique and stable molecular structures, diamondoids and their derivatives are a new generation of novel and durable materials and devices for nanotechnology applications. 13 An aniline-rich and/or amino-rich diamondoid-derived nanomaterial could combine the atomic-level uniformity of diamondoid materials with inherited PANI properties, which can be tailored by changing the dimensionality of the nanostructures, 14 and is expected to be a technologically relevant material. Previous experiments on the high-pressure behaviour of aniline have shown a remarkable stability of the monomer under quite high pressure and temperature conditions, possibly due to strong directional H-bonds. 15 Here, for the first time to our knowledge, we have induced the transformation of aniline into a pale yellow-brownish, recoverable, one-dimensional diamondoid-like polyaniline by compressing aniline to 33 GPa and heating to 550 K. Infrared spectroscopy, transmission electron microscopy (TEM), X-ray diffraction data and density functional theory (DFT) calculations support the formation of this totally new polyaniline nanothread.
Results and discussion
Literature data have shown that aniline crystal phase-II is anomalously chemically stable over a broad P and T range when compared to other aromatic molecules. 6,[20][21][22] This is likely related to the longer C-C contacts between adjacent molecules in aniline, a consequence of the strong and directional H-bonds along the c-axis. 15 In order to estimate the P-T reactivity threshold of aniline at several P and T conditions, we have employed a model accounting for thermal displacements already used for the s-triazine 5 and benzene 10 crystals, adopting the critical distance of 2.5-2.6 Å between the closest intermolecular C-C contacts found in those cases. As reported in ref. 10, the maximum instantaneous linear displacement from the equilibrium position (2a_m) can be estimated as 3σ, accounting for more than 99% of the displacement amplitudes, where σ² is classically given by σ² = kT / [m(2πcν)²] (1), where k is the Boltzmann constant, T the temperature, m the molecule mass, c the speed of light and ν the frequency (cm⁻¹) of the phonon mode. The pressure evolution of the density of states (DoS) of the lattice phonon modes was estimated using the relationship ν_P = ν_P0 (V_0/V_P)^γ, where ν_P0 is a representative phonon frequency at P = 0 GPa (62 cm⁻¹), V_0 and V_P are the volumes at P = 0 GPa and at a given pressure P, 15 and γ is the Grüneisen parameter. A γ value of 1.5 accounts for a regular linear evolution of the instability boundary.
In Table 1 we report, for given pressures, the estimated thermal corrections according to eqn (1) and the corresponding temperature thresholds. Our calculations suggest that aniline is stable over a broad P-T range, see Fig. 1, exceeding the P-T stability of other comparable molecules such as benzene, 10 s-triazine 5 and pyridine. 20 Because of the approximations used in this calculation, the reactivity threshold is not reported as a line (see the gray area in Fig. 1), in order to account for the uncertainty in the P and T values.
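The following Python sketch illustrates how a temperature threshold of the kind reported in Table 1 can be estimated from eqn (1) together with the Grüneisen scaling of the lattice phonon frequency. It assumes that the reaction criterion is the closest intermolecular C-C contact, reduced by the maximum instantaneous displacement 2a_m = 3σ, reaching the critical distance of about 2.55 Å; the contact distance and compression ratio used in the example are hypothetical placeholders, not the values used for Table 1.

```python
# Minimal sketch of the thermal-displacement estimate of the reactivity threshold.
import numpy as np

k_B = 1.380649e-23        # J/K
c = 2.99792458e10         # cm/s, so that nu in cm^-1 gives omega = 2*pi*c*nu in rad/s
amu = 1.66053906660e-27   # kg
m_aniline = 93.13 * amu   # molecular mass of aniline (C6H5NH2)

nu0 = 62.0                # cm^-1, representative lattice phonon frequency at P = 0
gamma = 1.5               # Grueneisen parameter used in the text


def sigma(T, nu):
    """Classical rms displacement of a harmonic oscillator: sigma^2 = kT / (m * (2*pi*c*nu)^2)."""
    omega = 2.0 * np.pi * c * nu                         # rad/s
    return np.sqrt(k_B * T / (m_aniline * omega**2))     # metres


def threshold_temperature(d_cc_at_P, V0_over_VP, d_crit=2.55e-10):
    """Temperature at which 3*sigma closes the gap between the C-C contact and d_crit."""
    nu_P = nu0 * V0_over_VP**gamma                       # pressure-shifted phonon frequency
    gap = d_cc_at_P - d_crit                             # metres
    omega = 2.0 * np.pi * c * nu_P
    # Invert 3*sigma(T) = gap  =>  T = (gap/3)^2 * m * omega^2 / k_B
    return (gap / 3.0) ** 2 * m_aniline * omega**2 / k_B


# Hypothetical example: a 3.1 Angstrom contact and a 30% volume compression.
print(threshold_temperature(d_cc_at_P=3.1e-10, V0_over_VP=1.0 / 0.7))
```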
After having estimated the chemical instability boundary for aniline, an isothermal compression at 550 K was performed (Fig. 1, pathway I); at this temperature the reaction onset should lie between 30 and 35 GPa. This expectation, and consequently the assumptions adopted above, were confirmed by the experiment. Indeed, when the pressure was slightly below 33 GPa, aniline's spectral features began to weaken significantly, suggesting that aniline became unstable under these P and T conditions.
Once the P and T values needed to induce aniline's reactivity had been identified, another fresh sample was prepared; in this case, however, a KBr pellet was used to reduce the optical path, avoiding saturation of the absorption bands and allowing a kinetic study of the reaction. The sample was first compressed up to 33 GPa and then heated to 550 K (Fig. 1, pathway II). After reaching the reaction onset, the kinetics was followed over about 24 h. The observation of the reaction at the same P-T conditions rules out any effect of the P-T path followed and of the salt substrate on the reactivity. The initial four spectra measured during the kinetics study are presented in Fig. 2A and B. The amount of reacted aniline was determined from the absorbance of the band at ≈830 cm⁻¹, relative to the C-H out-of-plane bending mode. 24
Fig. 1 The blue area represents the stability P-T range of aniline explored experimentally (see ref. 15). The full blue circles represent the estimated reactivity thresholds of aniline phase-II, according to Table 1, while the gray area is an estimate of the uncertainty of this determination. The full red diamond represents the induced reactivity of aniline phase-II at 33 GPa and 550 K. The black squares linked by a line represent the reported liquid-solid transition boundary from ref. 23. The P-T pathways followed to trigger the reactivity in phase-II aniline are also indicated.
The absorption pattern was fitted using a Voigt profile, and the total absorbance (the area of the band) was taken as a measure of the amount of residual aniline during the evolution of the reaction. The data were analyzed with the Avrami model. [25][26][27] A fitting equation derived from ref. 29 was employed, in which A_0, A_t, and A_inf are the integrated absorptions of aniline at the beginning of the reaction (t_0), after a time t, and at equilibrium, respectively; n is a parameter that accounts for the growth dimensionality for a given nucleation law and k is the rate constant. The kinetic curve was nicely reproduced using the rate constant k and the n parameter reported in Fig. 2C. The n parameter can provide insight into the microscopic evolution of the reaction because it is related to the growth geometry and nucleation rate. An n value smaller than 1 unambiguously indicates unidimensional growth. In our case, the n parameter obtained was equal to 0.28, clearly indicating a product propagating with 1D growth. Similar n values were already reported for the pressure-induced unidimensional polymerization of acetylene 28 and ethylene. 29 Once no further spectral modifications could be appreciated, the sample was first brought back to ambient temperature and then the pressure was slowly released. According to Fig. 2C, a certain amount of unreacted aniline, about 25%, was still present, suggesting that lattice defects can prevent the 1D propagation of the reaction. Upon releasing the pressure, the intensity of all of the aniline bands decreased, indicating that the reaction proceeds during the downstroke.
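The fitting equation itself is not reproduced above; the sketch below therefore uses a generic Avrami-type form, A_t = A_inf + (A_0 − A_inf)·exp[−(kt)^n], as a stand-in, fitted to synthetic data that mimic the reported behaviour (≈25% residual aniline, n ≈ 0.28). The functional form and all numbers are illustrative assumptions, not the authors' data.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def avrami(t, A0, Ainf, k, n):
    """Avrami-type decay of the aniline absorbance (assumed functional form)."""
    return Ainf + (A0 - Ainf) * np.exp(-(k * t) ** n)

# Synthetic "measured" integrated absorbances over ~24 h (arbitrary units)
t = np.linspace(0.1, 24.0, 40)                                   # hours
data = avrami(t, 1.0, 0.25, 0.8, 0.28) + rng.normal(0.0, 0.01, t.size)

popt, _ = curve_fit(avrami, t, data, p0=[1.0, 0.2, 0.5, 0.5],
                    bounds=([0, 0, 0, 0], [2, 1, 10, 2]))
A0_fit, Ainf_fit, k_fit, n_fit = popt
print(f"fitted n = {n_fit:.2f} (n < 1 suggests 1D growth); residual fraction = {Ainf_fit:.2f}")
```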
The rather homogeneous pale yellow/brownish material synthesized at high P and T and recovered at ambient conditions is shown in Fig. 3a. The infrared absorption spectrum of this material (Fig. 3b) resembles that of the product recovered from the high-pressure reactivity of benzene, thus suggesting a saturated material containing N-H bonds. 21 Recently, a 1D, highly ordered, saturated nanomaterial with a local diamond-like structure was recovered after a controlled compression-decompression cycle of benzene up to 20 GPa. 6,30 The product, characterized by a wealth of techniques such as bright-field transmission electron microscopy and synchrotron X-ray diffraction, differed substantially from the amorphous material reported previously, 10,21 consisting of a tubular or thread-like structure, which was also supported by first-principles calculations. 6,30 In view of the striking similarities with the benzene reactivity and the intriguing results of the Avrami model, suggesting a material characterized by 1D growth, the product was morphologically characterized using multiple techniques. The recovered material was mechanically removed from the gasket with a needle and placed directly onto the surface of a standard transmission electron microscope (TEM) copper grid. The multiple high-resolution TEM images collected at two different magnifications (Fig. 4a) exhibit parallel striations, suggesting the formation of threads or tubes. The line profile measured along the white line in the low-magnification BF-TEM image (Fig. 4a, left), presented in Fig. 4b, shows that the striations are ≈5.5 Å apart. The high-magnification BF-TEM image (Fig. 4a, right) evidenced the presence of long-range 1D parallel striations spaced at 4.0-5.1 Å and extending for tens of nanometers. These values can be compared to the 1D nanothreads obtained from benzene, which were characterized by a distance between packed threads of ≈6.4 Å. 30 The regions where the linear threads bend, visible in the higher-magnification image (Fig. 4a, right), are likely related to crystal boundaries or dislocations of the starting aniline crystals. It should also be remarked that the nanothreads develop along a specific aniline crystal direction, namely the a-axis (see the following discussion), which should therefore lie in the image plane for the threads to be observable. The latter issue can account for the regions where apparently no threads are present.
In order to support the spectroscopic and morphological characterizations, quantum chemical calculations were performed using DFT to model a residue of a 1D polyaniline-like nanothread structure consisting of four fused aniline molecules, in which the sp² carbons of the rings were converted into sp³ carbons by forming covalent bonds between the rings. Views of the optimized geometry of a segment that accounts for the aniline-derived nanothread, with the relative dimensions, are presented in Fig. 4c. According to the DFT-optimized geometry, the expected diameter of an aniline-derived nanothread lies between 5.0 and 5.4 Å, in striking agreement with the parallel striations observed in the TEM images.
According to Thess et al., 31 a 2D triangular lattice is characterized by a group of peaks in the low-Q region: a strong peak around 0.44 Å⁻¹ followed by four weaker peaks up to 1.8 Å⁻¹. The angle-dispersive X-ray diffraction measurement on the recovered material (Fig. 5a) agrees notably well with this characterization, suggesting the formation of a triangular structure with a = 13.3 Å, in the middle of the range of lattice values reported for benzene-derived nanothreads (≈6.4 Å) 6 and single-wall carbon nanotube ropes (17 Å). 31 The lattice constant is two and a half times the nanothread diameter and almost double that of benzene-derived nanothreads, which suggests that the molecular orientation and the hydrogen-bond network in the crystal strongly influence the reaction of aniline and the product characteristics. Crystalline aniline presents a peculiar H-bond arrangement connecting the NH₂ groups of nearest-neighbor molecules and developing along the a-axis. 15,32 The presence of such strong interactions in the crystal prevents the participation of the NH₂ groups in the reaction and favors the remarkably anisotropic compression along the direction most suitable for inter-ring interaction. The natural conclusion is that these constraints selectively drive the inter-ring formation of C-C bonds, with the consequent change of carbon hybridization from sp² to sp³, along the a-axis. Therefore, double nanothreads form along this direction, interacting through H-bonds and having a size of ≈12.8 Å (Fig. 4c), resulting in an expanded 2D triangular lattice when compared to benzene. This is in excellent agreement with the lattice parameter (a = 13.3 Å) derived from the angle-dispersive X-ray diffraction pattern.
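The consistency between the measured low-Q peak and a 2D triangular packing can be checked with the standard indexing relation Q_hk = (4π/(√3·a))·√(h² + hk + k²). This relation is not written out in the text and is assumed here; the lattice constants quoted above are used only to illustrate the expected peak positions.

```python
import numpy as np

def q_peaks_triangular(a, hk_list=((1, 0), (1, 1), (2, 0), (2, 1), (3, 0))):
    """First few diffraction peak positions Q_hk (1/Angstrom) of a 2D triangular
    (hexagonal) lattice with lattice constant a (Angstrom), using the standard
    relation Q_hk = (4*pi / (sqrt(3)*a)) * sqrt(h**2 + h*k + k**2)."""
    q10 = 4.0 * np.pi / (np.sqrt(3.0) * a)
    return {hk: q10 * np.sqrt(hk[0] ** 2 + hk[0] * hk[1] + hk[1] ** 2) for hk in hk_list}

# Nanotube ropes from Thess et al.: a ~ 17 A gives the strong peak near 0.44 1/A
print(round(q_peaks_triangular(17.0)[(1, 0)], 3))
# Aniline-derived double nanothreads: a = 13.3 A shifts the whole peak group to higher Q
print({hk: round(q, 3) for hk, q in q_peaks_triangular(13.3).items()})
```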
Finally, the structure of the one-dimensional polyaniline nanothread with a local diamondoid-like geometry was optimized and its vibrational spectrum was calculated by quantum chemical calculations using density functional theory (DFT). Two views of the optimized nanothread geometry and one view of the double nanothread interacting through the H-bond region are presented in Fig. 4c. The DFT calculations strongly support the formation of one-dimensional aniline-derived nanothreads, with the structural parameters and the calculated infrared spectrum in excellent agreement with the experimental data. Both the calculated and the measured IR spectra are dominated by the C-H stretching modes of sp³-hybridized carbon atoms at 2900 cm⁻¹ and by the N-H bending modes at 1575 cm⁻¹ (Fig. 5b), thus providing solid evidence for the formation of a one-dimensional aniline-derived nanothread.
The properties of differently functionalized diamond nanothreads have recently been computed by DFT calculations. 33 These materials correspond to the possible products of high-pressure transformations of both functionalized benzene and heteroaromatic rings. Interestingly, the functionalized diamond nanothreads maintain the mechanical properties of the pristine material but offer, depending on the functional group and its spatial distribution, the possibility of tuning the band gap. According to these predictions, the NH₂-enriched carbon polyaniline-like nanothread is expected to present a band gap of about 3.5 eV (an essentially insulating material due to the intrinsic sp³ carbon character), an ideal strength of ≈14.8 nN, a Young's modulus of 163 nN and a fracture strain (ε_max) of ≈0.16. 33 Moreover, the synergy between these remarkable mechanical properties and the versatility of the NH₂ groups decorating the exterior of the nanothreads, which represent potential active sites for doping and linkers for molecules of biological interest and inorganic nanostructures, must be taken into account.
Methods
Aniline (C₆H₅NH₂, Merck) was distilled under reduced pressure prior to use and was loaded into a membrane diamond anvil cell (MDAC) equipped with type-IIa diamonds and a rhenium gasket, in which a hole with an initial diameter of 150 μm was drilled and used as the sample chamber. In order to reduce the strong IR absorption of the sample, the optical path was reduced by pressing KBr into the sample chamber, producing a pellet whose surface was subsequently scratched. Afterwards, liquid aniline and a ruby chip were added above the KBr pellet, resulting in a sample thickness ranging from 10 to 20 μm. High-temperature experiments were performed using the resistively heated MDAC. The temperature was measured with an accuracy of ±0.1 K by a K-type thermocouple placed close to the diamonds. FT-IR absorption measurements were recorded with an instrumental resolution of 1 cm⁻¹ using a Bruker IFS 120 HR spectrometer modified for high-pressure measurements. 16 The ruby fluorescence was excited using a few milliwatts of the 532 nm line from a Nd:YAG laser source.
Angle-dispersive X-ray diffraction (ADXRD) experiments were performed at the ESRF high-pressure beamline ID27 using monochromatic X-ray radiation of wavelength λ = 0.3738 Å and a MARCCD 165 detector positioned 146 mm from the sample, as calibrated with a CeO₂ standard. The focal spot (FWHM) of the beam was ≈3 μm. The 2D diffraction patterns were integrated using DIOPTAS; manual background subtraction was done in Fityk.
Bright-field imaging (BF-TEM) was performed using an FEI Titan Themis 60-300 transmission electron microscope operated at an 80 kV accelerating voltage. The sample was removed from the gasket with a needle and placed directly onto the surface of a standard TEM copper grid.
Quantum chemistry calculations were performed using the Gaussian03 package 17 to obtain optimized structures and vibrational frequencies for the one-dimensional diamondoid polyaniline-like model. Density functional theory (DFT) with Becke's three-parameter hybrid exchange functional and the Lee-Yang-Parr correlation functional (B3LYP) 18,19 and the 6-311++G(d,p) basis set was used. No imaginary vibrational frequencies were obtained, indicating that the vacuum geometries correspond to minima of the potential energy surface.
Conclusions
The reactivity of aniline was induced by compressing crystal phase-II above 30 GPa at temperatures in excess of 500 K. The onset of the reaction agrees nicely with the estimate based on a simple model accounting for thermal displacements. The reaction kinetics clearly indicates the formation of a 1D product. The latter was recovered at ambient conditions and characterized by transmission electron microscopy, X-ray diffraction and FTIR spectroscopy, which provided evidence for the formation of a totally new diamondoid polyaniline-like nanothread. Inter-ring bonds give rise to a fully sp³-hybridized structure forming double nanothreads with diameters of approximately 12.8 Å, arranged in a 2D triangular lattice with an a parameter almost double that of benzene-derived nanothreads. The reaction is strongly influenced by the NH₂ groups which, although not participating in the reaction, favor, through the H-bonding arrangement, anisotropic compression along the direction most suitable for inter-ring interaction. In addition, the NH₂ groups decorate the exterior of these nanothreads, representing potential active sites for doping and linkers for molecules of biological interest and inorganic nanostructures. The combination of these properties with a high Young's modulus emphasizes the strong potential of this material for application in a broad range of areas, from chemistry to materials engineering.
Conflicts of interest
There are no conflicts to declare. | 4,221 | 2017-10-18T00:00:00.000 | [
"Chemistry",
"Materials Science"
] |
Quantized p-form Gauge Field in D-dimensional de Sitter Spacetime
I. INTRODUCTION
The Quantum Field Theory in flat (Minkowski) spacetimes is one of the most successful theories in physics. It serves as the foundation upon which the Standard Model of particles is constructed and provides a quantum description of the strong, weak, and electromagnetic forces. On the other hand, gravity is described by Einstein's General Theory of Relativity, a classical theory, which has also proven to be very successful [1]. However, it is well understood that General Relativity remains an incomplete theory. Attempts to incorporate gravity into the Standard Model have proven to be non-renormalizable, as an infinite number of parameters would be required to do so.
Quantization of gravity is one of the most difficult and arduous challenges in modern physics and mathematics. Various approaches to quantizing gravity have been developed, the most well-known being String Theory, which achieves quantization along with unification with the other three forces. However, experimental validation of the theory poses a significant obstacle due to the extremely weak quantum effects of gravity. This fact allows for creativity in the search for observable characteristics of the theory [2]. Despite the success of quantum field theory in flat spaces, and some theoretical achievements of string theory, there are still several problems related to the quantum behavior of fields in curved spaces at cosmological scales. For example, interesting results have been achieved in backgrounds of time-dependent fields, such as Hawking radiation [3], which predicts the evaporation of black holes. Closely related are the Unruh effect [4], which suggests that an accelerated particle would perceive a thermal bath, and the dynamical Casimir effect [5], which predicts particle creation by an accelerated mirror. Another notable outcome is particle creation from the vacuum, which has been extensively researched [6][7][8][9][10][11][12], with the confirmation of the Schwinger effect expected soon [13].
On the other hand, the investigation of de Sitter spacetime gains significance when contemplating a ΛCDM model of the cosmos. In the current epoch, the universe can be roughly characterized by de Sitter spacetime, wherein matter dilutes with volume while the cosmological constant remains constant. For t ≫ H⁻¹, the universe tends to be effectively described by the de Sitter model. In this connection, the study of quantum effects of a massive scalar field in de Sitter spacetime was examined in Ref. [14], where the authors utilized exact linear invariants and the Lewis and Riesenfeld method [15] to derive the corresponding Schrödinger states based on solutions of a second-order ordinary differential equation. Additionally, they formulated Gaussian wave-packet states and computed the quantum dispersion and correlations for each mode of the quantized scalar field.
The quantization of certain types of fields has been explored in the literature. For the scalar field [16], it has been shown that the conformability of the system is tied to the choice of the curvature parameter. Similarly, the electromagnetic field [17] proves to be conformal in D = 4, as expected. Additionally, it has been observed that the Kalb-Ramond field is conformal in D = 6 [18].
On the other hand, various branches of theoretical physics have provided indications of the possible existence of extra dimensions. Examples include string theory, higher-dimensional black holes, and brane-world models. It is worth mentioning that higher-dimensional FRW scenarios, including the de Sitter scenario, and the associated particle creation have also garnered attention in recent years [19,20]. In this context, as reference [21] indicates, p-form gauge fields play a significant role in theories involving extra dimensions. For example, within string theories in 26 dimensions, a low-energy normal mode of the string is represented by a two-form gauge field A_μν. However, in four-dimensional spacetimes, p-forms do not introduce new possibilities.
In this study, we will employ the method developed by Lewis and Riesenfeld [15] to find a solution for the equations of motion governing a p-form gauge field in a D-dimensional de Sitter spacetime. Subsequently, we will proceed with its quantization by determining a solution to the 'auxiliary' equation [22,23]. Additionally, we will discuss the potential physical implications of our analysis for various extra-dimensional scenarios of interest in physics.
II. EQUATIONS OF MOTION AND THEIR DECOMPOSITION
We will employ the Friedmann-Lemaître-Robertson-Walker (FLRW, for short) metric, with the field strength of the p-form gauge field defined as the antisymmetrized derivative of the gauge potential. Although there are various paths to perform the decomposition of the field into its normal modes, as seen in [16] for example, there is no drawback to performing it directly from the equations of motion.
To simplify this equation we can fix the gauge by means of the transverse condition, and the gauge invariance allows us to fix A_{0 i₂⋯i_p} = 0. Inserting these two conditions into (2) leads us to equation (3). We then take the standard approach of tackling this equation with the usual normal-mode decomposition (4). Substituting (4) in (3) we finally arrive at our desired equation (5). Here, we have omitted all indices attached to r, as equation (5) remains the same for all of them. Now, in order to find a solution to this equation, we examine the Hamiltonian (6) of a harmonic oscillator with time-dependent mass and frequency, m = m(t) and ω = ω(t), respectively, where p and q are now considered operators obeying the relation [q, p] = iℏ. The equations of motion are trivially obtained and are given by

q̈ + (ṁ/m) q̇ + ω² q = 0,  (7)

which bears a striking resemblance to (5). In view of this similarity, we can consider our system as a time-dependent harmonic oscillator with mass m = a^(D−2p−1) and frequency ω = k/a.
III. QUANTIZATION OF THE p-FORM GAUGE FIELD IN THE DE SITTER SPACE-TIME
In this section we will discuss the tool to obtain a solution to the time-dependent harmonic oscillator.
We search for an invariant I(t) of the Hamiltonian of equation (6) [24] with real eigenvalues, which makes it Hermitian. It turns out that the solution |ψ_n⟩ of the Schrödinger equation obtained by means of this invariant (8) cannot be completely determined solely by the requirement that the invariant be Hermitian. We need to specify the phase θ of the invariant's eigenstates |n, t⟩, assumed to form a complete orthonormal basis for I. The exact solution is then given by (9), where the phase θ_n(t) needs to satisfy equation (10) [15]. Since the invariant is of our choice (provided that the conditions set upon it are respected), we consider the choice of [24], equation (11), where q(t) satisfies equation (7) and ρ = ρ(t) satisfies the generalised Milne-Ermakov-Pinney (MP) equation (12) [22,23]; the choice (11) comes from assuming the invariant to be of quadratic form in q and p, so that it closely resembles (6).
To perform the quantisation we take the standard route of considering the (time-dependent) creation b†(t) and annihilation b(t) operators, constructed so that [b, b†] = 1, with the usual properties, and we assume the eigenvalues of I to be discrete. This allows us to write the eigenvalue equation associated with the invariant (11), and we see that the eigenvalues of I are given in terms of the number operator n = b†b. These assumptions follow the same path taken to quantise the Hamiltonian, so it is safe to assume that the eigenstates of the invariant I(t) are indeed related to the Hamiltonian's by means of (9).
The Schrödinger equation is given by iℏ ∂ψ(q, t)/∂t = H(t)ψ(q, t). Lewis and Riesenfeld showed that its general solution is a superposition of the states (9), where the coefficients c_n are time-independent, |ψ_n⟩ satisfies equation (9) and the phase θ_n(t) satisfies equation (10).
By using a unitary transformation and following the steps outlined in reference [24], the normalised solution for the time-dependent harmonic oscillator is then written as in equation (19), where H_n are the Hermite polynomials of order n, and the phase θ_n(t) from (10) takes an explicit form in terms of ρ. Hence, quantizing the time-dependent harmonic oscillator hinges on identifying a solution of the corresponding MP equation (12), which is then incorporated into equation (19).
It is worth mentioning that a solution to this nonlinear equation consists of a nonlinear combination of solutions to the linear case [25].Notice that the linear form of the MP equation mirrors our classical equation of motion (7).Hence, discovering solutions to (7) enables us to uncover the sought-after solution to the problem.
Let us now apply this procedure to quantize the p-form gauge field, with m = a^(D−2p−1) and ω = k/a. As mentioned earlier, in order to obtain the solution of equation (21), we first seek solutions of the classical equation (5). By means of a change to the conformal time η, setting dt = a dη and r = Ω r̃, we obtain from (5) an equation in which the prime (′) and the dot (·) denote differentiation with respect to the conformal time η and to t, respectively. If we make the choice Ω = a^(−(D−2p−1)/2), the equation simplifies. We would like to emphasize that when D = 2p + 1 certain terms cancel out, leading to a simplified equation contingent upon the choice of the scale factor a. Let us consider the de Sitter spacetime, where a = e^(Ht); the expressions for a, ȧ and ä then reduce accordingly, and we finally obtain Bessel's equation with index ν = (D − 2p − 1)/2, which has two linearly independent solutions, J_ν(k|η|) and Y_ν(k|η|), the Bessel functions of the first and second kind, respectively. Now, employing our earlier redefinition r = Ω r̃, with Ω = a^(−(D−2p−1)/2), we obtain the two linearly independent solutions for r. Following references [26,27], a particular solution of equation (21) is built from these, where A and B are real constants. The determination of these constants is intricately tied to our vacuum selection. This arises from the non-uniqueness in constructing particle states and selecting the vacuum in curved spaces such as the one employed in this scenario. This is significant because particle generation can only be inferred once we have selected a vacuum against which to compare our physical solution.
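A quick numerical sanity check of this solution is sketched below: the mode equation with m = a^(D−2p−1) and ω = k/a is integrated directly and compared with the Bessel-function form r = a^(−s/2) J_ν(k|η|), with s = D − 2p − 1 and ν = s/2. The specific values of D, p, H and k are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import jv

# Mode equation r'' + s*H*r' + (k/a)^2 r = 0 (eq. (7) with m = a^s, omega = k/a),
# de Sitter scale factor a = exp(H t), conformal time |eta| = exp(-H t)/H.
D, p, H, k = 5, 1, 1.0, 2.0          # illustrative parameter choices
s = D - 2 * p - 1
nu = s / 2.0

def r_analytic(t):
    a = np.exp(H * t)
    eta_abs = np.exp(-H * t) / H
    return a ** (-s / 2.0) * jv(nu, k * eta_abs)

def rhs(t, y):
    r, dr = y
    a = np.exp(H * t)
    return [dr, -s * H * dr - (k / a) ** 2 * r]

t0, t1, eps = 0.0, 3.0, 1e-6
y0 = [r_analytic(t0), (r_analytic(t0 + eps) - r_analytic(t0 - eps)) / (2 * eps)]
sol = solve_ivp(rhs, (t0, t1), y0, dense_output=True, rtol=1e-10, atol=1e-12)
ts = np.linspace(t0, t1, 7)
print(np.max(np.abs(sol.sol(ts)[0] - r_analytic(ts))))   # should be small (~1e-6 or below)
```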
In our scenario, a suitable choice corresponds to the Bunch-Davies vacuum, which coincides with the adiabatic vacuum at very early times (t → −∞), or equivalently the adiabatic vacuum for wavelengths much smaller than the de Sitter horizon H⁻¹. With these assumptions, the values of the constants are A = B = π/2H [26], and ρ follows accordingly. Now that we have found the general solution of the Milne-Ermakov-Pinney equation (21) in a de Sitter scenario, we can substitute it into the expression (19) for the solution of the harmonic oscillator with time-dependent mass and frequency. This concludes the quantisation of the p-form gauge field in a D-dimensional de Sitter background.
IV. CONCLUDING REMARKS
In this work, we have presented a generalization, through the use of p-form gauge fields, of the quantization procedure previously applied to the scalar, electromagnetic, and Kalb-Ramond fields [16][17][18]. In this connection, we have obtained a solution to the Schrödinger equation using the method developed by Lewis and Riesenfeld [15], applied to the quantization of the p-form field in a D-dimensional de Sitter spacetime. A general solution of equation (22) is found to depend on the scale factor of the FLRW spacetime and was obtained in the particular case of de Sitter spacetime, which is significant because our Universe today can be approximated as such and, in the far future, would fully become one.
We can check that the solution (28) is constant for D = 2(p + 1). This is in agreement with previous works for D = 4 and p = 1 [17,27]. Thus, for D = 2(p + 1), as for a (massless) photon in four dimensions, the initial adiabatic vacuum persists indefinitely, resulting in zero photon production in de Sitter spacetime, while the field energy undergoes the redshift characteristic of radiation.
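Since the explicit expression (28) is not reproduced here, the check below assumes the standard Pinney-type construction ρ² = A r₁² + B r₂², built from the two Bessel solutions quoted above with A = B = π/2H as stated in the text; under that assumption, ρ is numerically constant precisely when D = 2(p + 1).

```python
import numpy as np
from scipy.special import jv, yv

# Assumed construction: rho^2 = A*r1^2 + B*r2^2 with r_i = a^{-s/2} {J_nu, Y_nu}(k|eta|),
# s = D - 2p - 1, nu = s/2, A = B = pi/(2H).  This reproduces the behaviour stated in the text.
H, k = 1.0, 3.0

def rho(t, D, p):
    s = D - 2 * p - 1
    nu = s / 2.0
    a = np.exp(H * t)
    x = k * np.exp(-H * t) / H          # k*|eta|
    A = B = np.pi / (2.0 * H)
    return np.sqrt(a ** (-s) * (A * jv(nu, x) ** 2 + B * yv(nu, x) ** 2))

t = np.linspace(0.0, 4.0, 9)
print(rho(t, D=4, p=1))   # D = 2(p+1): constant, equal to 1/sqrt(k)
print(rho(t, D=6, p=1))   # D != 2(p+1): time-dependent
```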
An interesting case with D = 2(p + 1) arises for p = 4 in a 10-dimensional spacetime, where A_{μ₁⋯μ_p} corresponds to a 4-form gauge field and the field strength to a 5-form. In this scenario, there will be no production of particles, and a co-moving accelerated observer will not experience a thermal bath. This specific dimension value indicates that the 4-form exhibits conformal invariance, allowing for the straightforward solution of the time-dependent harmonic oscillator (19) in a de Sitter spacetime. Since the field strength is a 5-form, it is dual to itself. The case D = 10 has garnered attention as it represents the critical dimension of superstring theory, and the 5-form field strength naturally appears as a first-order approximation in the gravitational coupling constant of chiral N = 2, D = 10 supergravity [28].
For D ≠ 2(p + 1), it becomes evident that we can no longer have a constant solution of (28), regardless of the chosen cosmological model. In future work, a more general solution could be computed for the case of de Sitter spacetime, which undoubtedly has further interesting consequences. Specifically, we can observe particle production for D ≠ 2(p + 1) because the solutions are time-dependent. Additionally, observers in an accelerated co-moving reference frame will also experience a thermal bath.
As mentioned in the introduction, gauge p-form fields in certain extra-dimensional scenarios exhibit physical properties that are not visible in four dimensions. Thus, an intriguing implication arises in the realm of extra-dimensional physics, which has garnered significant attention in scenarios involving de Sitter spacetimes, for instance in higher-dimensional FRW scenarios and their associated particle creation [19,20], or in de Sitter braneworld models [29,30]. Braneworld models in which FRW branes possess a temperature have been investigated in reference [31]. In braneworld models, our universe is conceptualized as a brane existing within a five-dimensional space. Consequently, in such a setup, a de Sitter spacetime would lead to particle production and the presence of a thermal bath for observers moving within this expanded space. This could potentially contribute to an effective temperature within the membrane, suggesting the intriguing possibility that precise measurements of the Cosmic Microwave Background could reveal the presence of extra dimensions. | 3,309.6 | 2024-04-04T00:00:00.000 | [
"Physics"
] |
The influence of pump coherence on the generation of position-momentum entanglement in down-conversion
Strong correlations in two conjugate variables are the signature of quantum entanglement and have played a key role in the development of modern physics. Entangled photons have become a standard tool in quantum information and foundations. An impressive example is position-momentum entanglement of photon pairs, explained heuristically through the correlations implied by a common birth zone and momentum conservation. However, these arguments entirely neglect the importance of the `quantumness', i.e. coherence, of the driving force behind the generation mechanism. We study theoretically and experimentally how the correlations depend on the coherence of the pump of nonlinear down-conversion. In the extreme case - a truly incoherent pump - only position correlations exist. By increasing the pump's coherence, correlations in momenta emerge until their strength is sufficient to produce entanglement. Our results shed light on entanglement generation and can be applied to adjust the entanglement for quantum information applications.
Strong correlations in two conjugate variables are the signature of quantum entanglement and have played a key role in the development of modern physics [1,2]. Entangled photons have become a standard tool in quantum information [3] and foundations [4,5]. An impressive example is position-momentum entanglement of photon pairs [6], explained heuristically through the correlations implied by a common birth zone and momentum conservation. However, these arguments entirely neglect the importance of the 'quantumness', i.e. coherence, of the driving force behind the generation mechanism. We study theoretically and experimentally how the correlations depend on the coherence of the pump of nonlinear down-conversion. In the extreme case - a truly incoherent pump - only position correlations exist. By increasing the pump's coherence, correlations in momenta emerge until their strength is sufficient to produce entanglement. Our results shed light on entanglement generation and can be applied to adjust the entanglement for quantum information applications.
Entanglement of photons has been explored in different degrees of freedom, such as polarization [4,5,7], time and frequency [8,9], position and momentum [6], as well as angular position and orbital angular momentum [10,11]. Entanglement of two-dimensional systems, in analogy to classical bits, is the primary resource for quantum communication and processing [3]. In addition, multi-level quantum systems can show high-dimensional entanglement of high complexity [12][13][14] and can be exploited for various quantum information tasks [15]. Position-momentum entanglement as a continuous degree of freedom is the ultimate limit of high-dimensional entanglement and its deeper understanding is essential for the development of novel quantum technologies.
Position-momentum-entangled photon pairs can be rather straightforwardly generated in spontaneous parametric down-conversion (SPDC) [6,16], the workhorse of many quantum optics labs. In this process, a strong pump beam spontaneously generates a pair of signal and idler photons through a nonlinear interaction. Formation of position-momentum entanglement is often explained by simple heuristic arguments: a pump photon is converted at one particular transverse position into signal and idler. Due to this common birth place, they are correlated in position. In addition, transverse momentum conservation requires the generated photons to travel in opposite directions, i.e. they are anti-correlated in momentum. Hence, in an idealized situation the generated pairs can be perfectly correlated in both position and momentum, which is the key signature of quantum entanglement. However, these arguments do not take the coherence properties of the pump beam, i.e. the quantum aspect of the driving force behind the pair generation, into account. In this letter, we study how the generation of position-momentum entangled photon pairs relies on the coherence properties of the pump. For that, we pump a nonlinear crystal with a coherent light source (a laser) and with a truly incoherent source (an LED), and examine the transition between these extreme cases by pumping with pseudo-thermal light of variable partial coherence. We find that the strength of the momentum anti-correlation depends strongly on the coherence of the pump, so that the degree of entanglement can be adjusted. Fundamentally, our analysis demonstrates that the lack of momentum correlation does not imply a violation of momentum conservation; it shows that the coherence of the pump, i.e. its 'quantumness', is crucial for the generation of entangled photons.
A first theoretical analysis [17,18] of the pair-generation process shows that the angular profile of the pump and its coherence are transferred to the down-converted light. Thus, the pump determines the uncertainty of the anti-correlation and also affects the generation of entanglement. Along similar lines, the influence of different coherent pump profiles on entanglement and on the propagation of the generated pairs has already been explored [19][20][21][22][23]. The impact of the temporal coherence of the pump has been investigated in [24][25][26].
Our experimental setup (see Fig. 1) is designed in a flexible manner so that switching between the laser and the LED (red and blue shaded regions in Fig. 1) can be easily accomplished with a flip mirror.

FIG. 1. Schematic of the experimental setup. Photon pairs are generated by pumping a type-II nonlinear crystal (ppKTP) with either a laser beam with adjustable transverse coherence (red shaded beam path) or a beam derived from an LED that is spatially incoherent (blue shaded beam path). The coherence of the laser is tuned by modulating the transverse phase profile with a spatial light modulator (SLM). A polarizing beam splitter (PBS) splits the pairs and their joint spatial distributions are measured by independently movable slits in each arm. They are followed by bucket detector systems consisting of microscope objectives, multimode fibers, and single-photon detectors. Position correlations are registered by a coincidence measurement in the imaging plane (f1 and f2), while momentum correlations are observed in the focal plane of a lens f3 (Fourier transform plane, or momentum space).

We can further change between detecting position and momentum correlations simply by using a different set of lenses (see more details in Methods). To investigate entanglement, we measure the probability distributions of the distance x₋ ≡ (x_s − x_i)/√2 between signal (s) and idler (i) photons, as well as their average momentum p₊ ≡ (p_s + p_i)/√2, and compare the results obtained for both sources. A high correlation in the positions and momenta reflects itself in small uncertainties Δx²₋ and Δp²₊. In fact, they are often used to verify entanglement of continuous variables, since it is possible for the product of the uncertainties to violate the inequality [2,27]

Δx²₋ Δp²₊ ≥ ℏ²/4.  (1)

The distributions of x₋ for both sources are shown in Fig. 2(a) and (c). The positions of signal and idler photons are highly correlated and the shapes of the two distributions coincide, underlining the argument of a common birth zone. For the momenta, the distributions of p₊ obtained with a laser and with an LED differ significantly, see Fig. 2(b) and (d). The momenta of the photons generated by the laser are anti-correlated, in agreement with the argument of momentum conservation. We further verify entanglement, since the measured uncertainty product violates inequality (1). Here, as well as in all following discussions, we obtain the uncertainties by a Gaussian fit to the experimental data. In contrast, the momenta obtained from an LED-pumped source are uncorrelated, and the broad distribution leads to an uncertainty product consistent with inequality (1), implying that entanglement is not verified and seemingly in contrast with the argument of momentum conservation. For a more detailed analysis, we measure the entire joint probability distributions for position space P(x_s, x_i) and momentum space P(p_s, p_i) for both the laser and the LED. The results are illustrated in Fig. 3. The joint momentum distribution consists of two contributions: the angular profile of the pump along the diagonal and the phase-matching function along the anti-diagonal of (p_s, p_i)-space, given by p_± = (p_s ± p_i)/√2, respectively (see Methods for more details).
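To make the criterion concrete, the toy sketch below estimates Δx²₋ and Δp²₊ from samples of a joint distribution and compares the product with ℏ²/4 (in units where ℏ = 1). The correlated and uncorrelated sample sets are synthetic stand-ins chosen only to illustrate the estimator, not simulations of a physical SPDC state.

```python
import numpy as np

hbar = 1.0  # work in units where hbar = 1

def uncertainty_product(xs, xi, ps, pi):
    """Estimate Var[(x_s - x_i)/sqrt(2)] * Var[(p_s + p_i)/sqrt(2)] from samples."""
    x_minus = (xs - xi) / np.sqrt(2.0)
    p_plus = (ps + pi) / np.sqrt(2.0)
    return np.var(x_minus) * np.var(p_plus)

rng = np.random.default_rng(1)
N = 200_000

# Toy 'coherent pump' case: common birth position and anti-correlated momenta
x_c = rng.normal(0.0, 1.0, N)                 # common transverse birth position
xs = x_c + rng.normal(0.0, 0.3, N)
xi = x_c + rng.normal(0.0, 0.3, N)
p_a = rng.normal(0.0, 1.0, N)
ps = p_a + rng.normal(0.0, 0.3, N)
pi = -p_a + rng.normal(0.0, 0.3, N)
print("laser-like:", uncertainty_product(xs, xi, ps, pi), "< hbar^2/4 =", hbar**2 / 4)

# Toy 'incoherent pump' case: positions still correlated, momenta uncorrelated and broad
ps = rng.normal(0.0, 2.0, N)
pi = rng.normal(0.0, 2.0, N)
print("LED-like  :", uncertainty_product(xs, xi, ps, pi), "> hbar^2/4")
```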
In position space, the distribution has the same structure and can be written as the product of two contributions that can be associated with the spatial profile of the pump and the Fourier transform of the phase-matching function along the diagonal and anti-diagonal of (x_s, x_i)-space, given by x_± = (x_s ± x_i)/√2. The distributions for a laser pump are shown in Fig. 3(a,b). We observe narrow ellipses along the diagonal in position space (Δx₋/Δx₊ = 0.153 ± 0.003) and along the anti-diagonal in momentum space (Δp₊/Δp₋ = 0.083 ± 0.004), which underlines the high degree of position correlation and momentum anti-correlation. The combination of the two is a signature of entanglement, and these measurements support our heuristic arguments of a common birth zone and momentum conservation.
The joint position distribution for the LED pump is shown in Fig. 3(c). Since we designed the experiment such that the width of the intensity distribution of the LED light in the crystal is comparable to that of the laser, the two distributions are very similar. We observe a narrow ellipse along the diagonal in position space, i.e. the photon pairs are strongly correlated in position (Δx₋/Δx₊ = 0.174 ± 0.003). In contrast, the joint momentum distribution for the LED shown in Fig. 3(d) demonstrates that the two momenta are uncorrelated (Δp₊/Δp₋ = 1.0 ± 0.1). Because entanglement requires a strong degree of correlation in both positions and momenta, we observe no position-momentum entanglement of photon pairs generated by the LED. The anti-correlations vanish not because transverse momentum conservation becomes invalid, but because the angular profile of a transversely incoherent beam is dramatically different from that of a coherent beam.
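A minimal numerical sketch of the factorized joint momentum distribution P(p_s, p_i) = P_E(p_s + p_i)·P_χ(p_s − p_i), as described above and detailed in Methods, is given below for a coherent Gaussian pump and a bulk crystal. The paraxial mismatch Δκ ≈ (p_s − p_i)²/(2k_p) for degenerate collinear phase matching and all numerical values are illustrative assumptions rather than the experimental parameters.

```python
import numpy as np

lam_p = 405e-9                         # pump wavelength (m)
k_p = 2.0 * np.pi / lam_p
w = 0.11e-3                            # pump waist in the crystal (m)
L = 5e-3                               # crystal length (m)

p = np.linspace(-4e5, 4e5, 801)        # transverse momenta (1/m)
ps, pi = np.meshgrid(p, p, indexing="ij")

P_E = np.exp(-0.5 * w**2 * (ps + pi) ** 2)               # angular profile of a Gaussian pump
dkappa = (ps - pi) ** 2 / (2.0 * k_p)                    # assumed paraxial phase mismatch
P_chi = np.sinc(dkappa * L / (2.0 * np.pi)) ** 2         # np.sinc(x) = sin(pi x)/(pi x)
P = P_E * P_chi
P /= P.sum()

# Widths along the rotated coordinates p_+ and p_-
p_plus = (ps + pi) / np.sqrt(2.0)
p_minus = (ps - pi) / np.sqrt(2.0)
dp_plus = np.sqrt(np.sum(P * p_plus**2) - np.sum(P * p_plus) ** 2)
dp_minus = np.sqrt(np.sum(P * p_minus**2) - np.sum(P * p_minus) ** 2)
print("dp_plus / dp_minus =", dp_plus / dp_minus)        # << 1: strong momentum anti-correlation
```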
We complete our study by experimentally investigating the effect of the coherence length l_c of a partially coherent beam on the entanglement. We spatially modulate the laser to generate a pseudo-thermal field that can be described by a Gaussian Schell-model beam [28]. Such a pump beam with a beam waist w, a radius of curvature R, and a wave number k_p leads to the variance of the angular profile given by equation (4) [18]. The coherence length l_c causes a spread similar to the one caused by a finite radius of curvature R. We tune the coherence length [29] through the modulation strength of different random phases imprinted on the pump laser, averaged over 300 patterns (see Methods for more details). The measured uncertainties Δx²₋ and Δp²₊ are shown in Fig. 4(a). The position correlation remains unchanged and is independent of the coherence length [18]. In contrast, the uncertainty Δp²₊ scales quadratically with the parameter w/l_c, following equation (4). The product Δx²₋ Δp²₊ shown in Fig. 4(b) highlights the impact of l_c on entanglement.
FIG. 4. Momentum anti-correlation, position correlation, and the entanglement criterion for pseudo-thermal pump beams with different coherence lengths. Part (a) shows that Δx²₋ (green) is independent of the coherence length, whereas Δp²₊ (purple) follows equation (4), as highlighted by the fit. The product Δx²₋ Δp²₊ in part (b) increases for decreasing coherence and therefore makes a transition from entangled to classically correlated photon pairs. The red star represents this product for the (coherent) laser and the blue star represents this product for the (incoherent) LED, whose coherence length has been extrapolated from a fit.
For sufficiently large coherence (small w/l_c), the product is below the bound of ℏ²/4. For a decreasing coherence length (increasing w/l_c), we exceed this bound and cannot verify entanglement. The laser result from equation (2) is consistent with the limit of a fully coherent beam. The result for the LED from equation (3) is far beyond what we observed for pseudo-thermal light. Although an extrapolation from our data would lead to a rough estimate of 12 µm for the coherence length of the LED, we emphasize that the Gaussian Schell model does not describe such a source very well. We believe that the uncertainty Δp₊ of the LED is not determined solely by the inverse of l_c, but is in addition limited by the finite aperture of the microscope lens, the low pump efficiency and the non-paraxiality of the incoherent light. An indication of similar effects might be the small difference of Δp₋ between the laser and LED measurements, which could be caused by the strong focusing of the LED inside the crystal and its small longitudinal coherence [21,30].
In summary, we have studied the importance of the spatial coherence of the pump for generating position-momentum entangled photons and demonstrated the ability to control the degree of entanglement by tuning the coherence of the pump. Since partially coherent beams have been shown to be less susceptible to atmospheric turbulence [31], our configuration might be useful for future long-distance quantum experiments and could offer a testbed for entanglement purification and distillation protocols [32]. We have demonstrated that only in idealized situations, i.e. for a perfectly coherent pump, do the heuristic arguments used to explain position-momentum entanglement remain valid, and we have shed light on important subtleties of the underlying phenomenon of entanglement. Our results underline the relevance of the coherence of the driving force for the generation of entanglement, not only in quantum optics but also in other physical systems such as matter waves or Bose-Einstein condensates.
METHODS
Experimental Setup: In our experiment, the coherent pump source is a laser diode module (Roithner LaserTechnik, RLDE405M-20-5), which can be turned into a pseudothermal light source by modulating the transverse phase profile with a spatial light modulator (SLM, Hamamatsu X10468-05). The SLM is either used as a simple mirror or to generate a pseudo-thermal light source with varying transverse coherence [29] and a beam waist of w = 0.11 mm in the crystal. The incoherent source is a blue LED [33,34] with a center wavelength of 405 nm and an output power of up to 980 mW (Thorlabs M405L3). To ensure a Gaussian-like beam profile while maintaining transverse incoherence, we couple the light into a 400-µm-core multimode fiber. The out-coupled LED beam is then demagnified by a 4f -system before it enters the crystal. To ensure the same polarization for both sources, we introduce polarizers in both beam paths. We additionally add a 3-nm-bandpass filter at 405 nm in front of the crystal to reduce the broad spectrum of the LED. After this filtering, we measure a pump power of 20 µW for the laser and 130 µW for the LED at the crystal.
In all pump scenarios, the photon pairs are generated by a 1 mm×2 mm×5 mm periodically poled potassium titanyl phosphate crystal (ppKTP), which is phase-matched for type-II collinear emission. A long-pass filter and a 3-nm-spectral filter at 810 nm after the crystal block the pump beam and ensure that only frequency-degenerate photons are detected. We split the photon pairs into two separate paths by means of a polarizing beam splitter. In each path we place a narrow vertical slit of about 100 µm width, which can be translated in the horizontal direction and detects either position or momentum depending on the optical system (see below). Photons passing through the vertical slits are collected by microscope objectives, coupled into multimode fibers, and detected by avalanche photodiode single-photon counting modules. The photon coincidence count rate is recorded with a coincidence window of 1 ns and as a function of the two distances ds and di of the slits from the optical axis. To measure the joint position distribution, we image the exit face of the crystal onto the planes of the slits with a 4f -system consisting of two lenses with focal lengths f1 = 50 mm and f2 = 150 mm (placed prior to the beam splitter). We magnify the down-converted beam to reduce errors that arise from the finite precision of the slit widths. By replacing the 4f -system with a single lens f3 and placing the two slits in the Fourier planes of the lens, we measure correlations of the transverse momenta of the photons. We use a focal length of f3 = 100 mm for the laser and a shorter focal length of f3 = 50 mm for the LED to account for the broader momentum distribution of the LED beam. Again, we record the coincidence count rate as a function of the position of each slit, and we transform the distance ds,i to momentum through the relation ps,i ∼ = ds,iks,i/f3. Here, ks,i denotes the wave number of the signal or idler field.
To generate a Gaussian Schell-model beam, we imprint different random phase patterns on the pump laser with the SLM. The statistics of these random patterns is Gaussian with a transverse width in the crystal of δ_φ = 0.11 mm. To tune the coherence length, we vary the strength of the modulation φ₀ and obtain the coherence length from l_c = δ_φ/φ₀ [29]. For each modulation strength, we display around 300 different patterns, average over the observed counts per measurement setting, and evaluate the obtained uncertainties Δx²₋ and Δp²₊. SPDC theory: In a spontaneous parametric down-conversion process, the joint momentum distribution P(p_s, p_i) = P_E P_χ consists of two parts: (i) the angular profile of the pump P_E ∝ |E(p_s + p_i)|², where E is the angular field amplitude, and (ii) the phase-matching function P_χ, which depends on the mismatch Δκ ≡ κ_p − κ_s − κ_i. Here, κ_{p,s,i} are the longitudinal components of the wave vectors of the pump, signal, and idler fields. For a bulk crystal of length L, the phase-matching function takes the familiar form P_χ ∝ sinc²(ΔκL/2), but for other configurations it depends on the crystal poling and other properties that arise from the propagation of the light through the medium. If we assume a crystal of infinite transverse size, we obtain exact transverse momentum conservation, as is apparent from the argument p_s + p_i of E. In the paraxial approximation, Δκ scales as the square of the difference in the transverse momenta p_s − p_i, as can be seen from a Taylor expansion of κ_j = (k_j² − p_j²)^(1/2) for p_j ≪ k_j, where k_j is the modulus of the wave vector of the respective field [18]. With the help of a rotated coordinate system p_± ≡ (p_s ± p_i)/√2, we can rewrite the angular intensity profile as P_E = P_E(p₊) and the phase-matching function as P_χ = P_χ(p₋), such that they are functions of p₊ and p₋ only. After transforming to position space by a Fourier transformation, and after an analogous rotation of the coordinate system x_± ≡ (x_s ± x_i)/√2, we find a similar structure P(x_s, x_i) = P_E(x₊)P_χ(x₋). Here, the function P_E(x₊) along the diagonal of (x_s, x_i)-space corresponds to the intensity profile of the laser and the function P_χ(x₋) along the anti-diagonal is connected to the phase-matching function through a Fourier transformation. | 4,435.2 | 2018-12-22T00:00:00.000 | [
"Physics"
] |
Automatic Classification Method of Music Genres Based on Deep Belief Network and Sparse Representation
Aiming at the problems of poor classification effect, low accuracy, and long time in the current automatic classification methods of music genres, an automatic classification method of music genres based on deep belief network and sparse representation is proposed. The music signal is preprocessed by framing, pre-emphasis, and windowing, and the characteristic parameters of the music signal are extracted by Mel frequency cepstrum coefficient analysis. The restricted Boltzmann machine is trained layer by layer to obtain the connection weights between layers of the depth belief network model. According to the output classification, the connection weights in the model are fine-tuned by using the error back-propagation algorithm. Based on the deep belief network model after fine-tuning training, the structure of the music genre classification network model is designed. Combined with the classification algorithm of sparse representation, for the training samples of sparse representation music genre, the sparse solution is obtained by using the minimum norm, the sparse representation of test vector is calculated, the category of training samples is judged, and the automatic classification of music genre is realized. The experimental results show that the music genre automatic classification effect of the proposed method is better, the classification accuracy rate is higher, and the classification time can be effectively shortened.
Introduction
Music is an art that can effectively express human emotions. At the same time, music consists of notes composed according to certain rules into a specific rhythm, melody, or instrumental arrangement [1][2][3]. Rock music, jazz, classical music, and other music genres are examples of diverse style tracks comprised of the unique beats, timbres, and other aspects exhibited in musical works. With the fast development of network and multimedia technologies, people's primary way of listening to music has shifted to digital music, which has to some extent fueled people's need for music appreciation [4][5][6]. Most online music websites' major classification and retrieval elements are now based on the music genre. Simultaneously, the music genre has evolved into one of the classification features used in the administration and storage of digital music databases. The pace of database updating is sluggish when dealing with a large volume of music data.
The effectiveness of manual labelling in the early days of music information retrieval could not satisfy the real demands of contemporary management. Therefore, it is of great significance to study the automatic classification of music genres. At present, scholars in related fields have studied the classification of music genres and achieved some theoretical results. Reference [7] proposed a music genre classification method for Brazilian lyrics using a BLSTM network. With the help of genre labels, songs, albums, and artists are organized into groups with common similarities. Support vector machines, random forests, and bidirectional long short-term memory networks are used to classify music genres, combined with different word-embedding techniques.
This method is effective. Reference [8] proposed a music genre classification method based on deep learning. Machine learning technology is used to classify music genres. The residual learning process, combined with max and average pooling, provides more statistical information for the higher-level layers of the neural network. This method shows significant classification performance. However, the above methods still suffer from low classification accuracy, long running time, and poor overall effect.
An automated music genre categorization technique based on deep belief networks and sparse representation is suggested to address the aforementioned issues. Framing, pre-emphasis, and windowing are used to preprocess the music signal, and Mel frequency cepstrum coefficient analysis is used to extract the signal's distinctive properties. A music genre classification network model is built based on the deep belief network and integrated with the sparse representation classification technique to achieve autonomous music genre categorization. This method has a good effect and high accuracy in music genre classification and can effectively shorten the classification time.
Deep Belief Network and Sparse Representation
The restricted Boltzmann machine (RBM) is a randomly generated neural network that learns the probability distribution of the input data set [9][10][11]. The visible layer q = {q_1, q_2, ..., q_n} and the hidden layer w = {w_1, w_2, ..., w_m} together constitute an RBM, in which the neurons within each layer are not connected to one another. The values of the binary random units are denoted q_i ∈ {0, 1} and w_j ∈ {0, 1}. The data features are mainly described by the neurons of the visible layer, while the hidden-layer neurons are used for feature extraction. The RBM network structure is shown in Figure 1. The RBM energy function is defined in formula (1), where α = {a, b, w} is the set of real parameters, q_i and w_j describe the states of the i-th and j-th neurons in the two RBM layers, z_j is the bias of q_i, b_i is the bias of w_j, c_ij is the weight between the q_i and w_j neurons, and n, m are the corresponding numbers of nodes. According to formula (1), the joint probability distribution P(q, w; α) of (q, w) can be obtained as in formula (2), where V(α) is a normalization function. When both q and w are known, formulas (3) and (4) express the activation probabilities of the neurons in the two RBM layers, where sigmoid(x) = 1/(1 + e^(−x)) is the activation function. When the training data set K is given, maximizing the likelihood function yields the RBM objective of formula (5), where δ is the number of training samples. The essence of the RBM is to map the original data to a different feature space so as to retain the key feature information of the data and obtain a better low-dimensional representation. Following this idea, one way of optimizing RBM training in this paper is to replace the RBM objective function (5) by an equivalent one. If the output of each RBM is converted according to formula (6), the output of its hidden layer can be inversely transformed and then compared with the original data; the error between the two can serve as the criterion for judging the learning effect of the current RBM network, so that the key features of the data are learned faster.
In formula (6), G_y is the original data set, J is the weight matrix of the RBM, J^T is the transpose of J, κ is the bias of the visible layer, and θ is the bias of the hidden layer. The difference between the new data obtained by formula (6) and the original data is calculated, the mean square error (MSE) is used as the objective function of the RBM, and an optimization algorithm is then used for training.
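A minimal numpy sketch of a Bernoulli RBM trained by one-step contrastive divergence, tracking the reconstruction mean square error used as the training criterion above, is given below. The layer sizes, learning rate, and binary toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal Bernoulli RBM trained with one-step contrastive divergence (CD-1)."""
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.a = np.zeros(n_visible)   # visible-layer bias
        self.b = np.zeros(n_hidden)    # hidden-layer bias
        self.lr = lr

    def sample_h(self, v):
        p = sigmoid(v @ self.W + self.b)          # hidden activation probability
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        p = sigmoid(h @ self.W.T + self.a)        # visible activation probability
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1_step(self, v0):
        ph0, h0 = self.sample_h(v0)
        pv1, _ = self.sample_v(h0)
        ph1, _ = self.sample_h(pv1)
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
        self.a += self.lr * (v0 - pv1).mean(axis=0)
        self.b += self.lr * (ph0 - ph1).mean(axis=0)
        return np.mean((v0 - pv1) ** 2)           # reconstruction MSE

X = (rng.random((256, 64)) < 0.3).astype(float)   # toy binary feature vectors
rbm = RBM(n_visible=64, n_hidden=32)
for epoch in range(20):
    mse = rbm.cd1_step(X)
print("final reconstruction MSE:", mse)
```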
Deep Belief Network.
A deep belief network (DBN) is an unsupervised learning model [12][13][14]. It is composed of stacked RBMs, so there are no connections within the same layer. The relationship between the layers is represented by the joint probability distribution

P(q, w^1, w^2, ..., w^l) = P(q|w^1) P(w^1|w^2) ⋯ P(w^(l−2)|w^(l−1)) P(w^(l−1), w^l).  (7)
In formula (7), l is the number of hidden layers of the DBN. The DBN is a hybrid model composed of two parts. The structure of the DBN model is shown in Figure 2.
As shown in Figure 2, the undirected graph model of the top two layers forms an associative memory, while the other layers are directed graph models; in practice, they are stacked restricted Boltzmann machines, i.e. Boltzmann machine layers stacked two adjacent layers at a time. However, the training of the DBN model has a direction. The DBN training method can be summarized in two parts: first, the RBMs are trained layer by layer to obtain good initial parameter values, and then the network is fine-tuned. The specific steps are as follows. The original input is denoted s^(i) and d^(i) denotes the reconstructed input; batch gradient descent tuning is used for the n samples of a given training set (s^(1), d^(1)), ..., (s^(i), d^(i)) [15,16]. The sample loss function is expressed in formula (8), where M^(l)_ij describes the weight coefficient between node i in layer l and node j in layer l+1, v^(l)_i describes the offset of node i in layer l, and h_{R,v}(s^(i)) describes the result of reconstructing s^(i). The difference between the original input and the reconstructed input gives the mean-square-error term. In order to avoid overfitting, the weight coefficients are penalized, giving the regularization term. The two terms are balanced by λ. Since C(R, v) is a convex function, the gradient descent method is used to obtain the global optimal solution [17,18]. In order to optimize the loss function and minimize the reconstruction mean square error, the partial derivatives of C(R, v) with respect to the weights and biases are computed. The DBN has good flexibility; that is, it is easy to extend it to other networks or combine it with other models. A typical example of a DBN extension is the convolutional deep belief network.
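The greedy layer-by-layer pretraining described above can be sketched with off-the-shelf components as below: two stacked BernoulliRBM layers followed by a logistic-regression head, which serves as a simplified stand-in for the supervised fine-tuning stage (a full DBN would also back-propagate through the RBM weights). The feature matrix and genre labels are random toy data, not real music features.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)

# Toy stand-in for per-track feature vectors scaled to [0, 1] (e.g. normalized MFCC statistics)
X = rng.random((600, 40))
y = rng.integers(0, 4, 600)          # four toy genre labels

# Greedy layer-wise pretraining: two stacked RBMs, then a supervised classifier on top
dbn_like = Pipeline([
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
dbn_like.fit(X, y)
print("training accuracy:", dbn_like.score(X, y))
```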
Sparse Representation Method.
If there are G classes of training samples, with a sufficient number of samples in every class, the training data of the i-th class are denoted by B_i = [b_{i,1}, b_{i,2}, ..., b_{i,n_i}] ∈ R^{m×n_i}, where m is the feature dimension and n_i is the number of samples in that class. Then the subspace of class i is spanned by these n_i column vectors, and a sample of the class can be expressed as the linear combination in formula (11), where χ_{i,n_i} ∈ R are the linear coefficients to be solved. Therefore, a complete dictionary matrix U is defined, composed of the training samples of all G classes, and a test sample of any class can be written over this matrix as in formula (12). At this point, for a test sample y from the i-th class, its representation in the space formed by the training matrix U can be rewritten as formula (13).
Seeking Sparse Solution.
When m > n, the reconstructed training matrix equation has a unique solution. Under normal circumstances, however, m ≤ n and the system has infinitely many solutions. The number of nonzero entries in the coefficient vector obtained from the training matrix is therefore minimised, which can be written as formula (14), where ‖·‖_0 denotes the l_0 norm. However, formula (14) is an NP-hard problem and difficult to solve directly. It is therefore relaxed to the convex minimisation problem [19,20] in formula (15), where ‖·‖_1 denotes the l_1 norm and p̂_1 is the approximate (sparse) solution for p.
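One common way to solve the l_1-relaxed problem of formula (15) is iterative shrinkage-thresholding; the minimal ISTA sketch below is offered as an illustration only, since the paper does not specify which solver it uses, and the regularisation weight lam is an arbitrary example value.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_l1(U, y, lam=0.01, n_iter=500):
    """Solve min_p 0.5*||y - U p||_2^2 + lam*||p||_1 by iterative shrinkage."""
    L = np.linalg.norm(U, 2) ** 2          # Lipschitz constant of the gradient
    p = np.zeros(U.shape[1])
    for _ in range(n_iter):
        grad = U.T @ (U @ p - y)           # gradient of the quadratic term
        p = soft_threshold(p - grad / L, lam / L)
    return p                               # approximate sparse solution p̂_1
```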
Automatic Classification Method of Music Genre
Music genre is a traditional means of categorising the attribution of musical works, and genres are commonly separated into categories based on historical context, geography, origin, religion, musical instruments, emotional topics, performance styles, and so on. Western music dominates the music genres and encompasses a wide range of styles; classical, blues, rock, pop, metal, jazz, country, hip-hop, and other genres are widespread [21][22][23][24][25]. This research proposes a deep belief network and sparse representation-based automated music genre categorisation approach. A music genre classification network model is created by preprocessing the music signals, extracting the characteristic parameters of the music signal, and pretraining and fine-tuning the DBN model. The sparse representation of the test vector is calculated, the category among the training samples is assessed, and the automatic classification of music genres is achieved on this basis, in combination with the sparse representation classification method. The automatic classification process of music genres based on a deep belief network and sparse representation is shown in Figure 3.
Preprocessing Music Signal.
Usually, before classifying music genres, the music signal needs to be preprocessed, which is mainly divided into three steps: framing, pre-emphasis, and windowing. The music signal preprocessing process is shown in Figure 4.
(1) Framing: For signal processing, framing is generally performed. The purpose of framing is to facilitate feature extraction, and framing also reduces the dimensionality of the feature matrix. When framing, an appropriate frame length and frame shift need to be selected. The relationship among the sampling period T = 1/f, the window width L, and the frequency resolution F can be expressed as in formula (16). It can be seen from formula (16) that when T is constant, the frequency resolution F is determined by the window width L, to which it is inversely proportional. Enlarging the window therefore improves the frequency resolution but reduces the time resolution, so there is a trade-off between frequency resolution and time resolution when choosing the window width. For this reason, an appropriate window length should be selected according to the needs of the application. When selecting the length, suitability for computer operation should also be considered; since computers operate in binary, the selected length should preferably be an integer power of 2.
(2) Pre-emphasis: When classifying music genres, because glottal-type excitation directly affects the average power spectrum of the music signal, the high-frequency part of the spectrum is difficult to obtain. Therefore, pre-emphasis processing of the music signal is required. In this paper, a first-order digital filter is used to pre-emphasise the music signal, as given in formula (17), where a is the pre-emphasis factor, generally taken as a value close to 1.
Assuming that the sample value of the music genre signal at time n is x(n), the result after pre-emphasis is given by formula (18). (3) Windowing: Windowing serves the framing step; framing itself amounts to applying a window function. However, because of the truncation effect introduced when cutting the signal into frames, a good window function must be selected: the slopes at both ends of the window should decrease as slowly as possible to avoid drastic changes. Framing is realised by weighting with a movable finite-length window; that is, the windowed music signal is expressed as in formula (19). Digital processing of the music signal with a rectangular window and a Hamming window is expressed in formulas (20) and (21), respectively, where M is the frame length. A comparison of the relevant indexes of the rectangular window and the Hamming window function is shown in Table 1.
As can be seen from Table 1, the main lobe of the rectangular window is narrower than that of the Hamming window, but the out-of-band attenuation of the rectangular window is also lower. Although the rectangular window has good smoothing performance, its high-frequency components suffer a certain loss and detail components are lost. According to the above analysis, the Hamming window function has the better overall performance.
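A compact numpy sketch of the three preprocessing steps described above (pre-emphasis with factor a, framing, Hamming windowing); the frame length, hop size and pre-emphasis factor are illustrative values, not taken from the paper.

```python
import numpy as np

def preprocess(signal, a=0.97, frame_len=512, hop=256):
    # Pre-emphasis: y(n) = x(n) - a*x(n-1), with a close to 1
    emphasized = np.append(signal[0], signal[1:] - a * signal[:-1])
    # Framing with 50% overlap (assumes the signal is longer than one frame)
    n_frames = 1 + (len(emphasized) - frame_len) // hop
    frames = np.stack([emphasized[i * hop : i * hop + frame_len]
                       for i in range(n_frames)])
    # Hamming window: w(n) = 0.54 - 0.46*cos(2*pi*n/(M-1))
    return frames * np.hamming(frame_len)
```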
Extracting Characteristic Parameters of Music Signal.
The process of precisely describing a music signal using a set of parameters is known as music signal feature parameter extraction. To some degree, the performance of music genre categorisation is determined by the selection of music characteristics.
The accuracy and speed of music genre categorisation may be improved by using good music signal features.
Through the examination of the results of hearing experiments, Mel frequency cepstral coefficient (MFCC) analysis is considered to have excellent voice-description qualities [26,27]. Taking the characteristics of human hearing into consideration, the linear spectrum is first mapped to the Mel nonlinear spectrum based on auditory perception and then transformed into a cepstrum. According to the work of Stevens and Volkman, there is the conversion relationship in formula (22), where f_mel describes the perceived frequency in Mel and f describes the actual frequency in Hz. The music signal is preprocessed by a first-order FIR high-pass filter before MFCC extraction; the goal is to compensate the spectrum. Next, the preprocessed signal is divided into multiple overlapping frames, and each frame is multiplied by the Hamming window to reduce the ringing effect. The FFT is performed on each frame to obtain the frequency spectrum corresponding to that Hamming-windowed frame. After the discrete cosine transform (DCT) processes the logarithm of Y(b), the MFCC parameters are obtained as in formula (23), where Z is the total number of filters and c is the length of the MFCC feature vector. The function of the offset of 1 in the MFCC is to keep the energy positive for any value. Finally, the MFCC feature vector is expressed as in formula (24). The network structure must be given in advance. For the visible layer and the hidden layer, c is used to describe the connection matrix, and κ and θ are used to describe the bias vectors. The implementation steps of the fast contrastive divergence learning method are as follows: (1) Initialisation: q_1 = x_0 is used to describe the initial state of the visible layer, and c, κ, and θ are initialised with small random values. (2) Cycle over all q^(t), t = 1, 2, 3, ..., T: find the conditional probability distribution P(w | q; α) and sample w ∈ {0, 1} from it; find the conditional probability distribution P(q' | w; α) and sample q' ∈ {0, 1} from it; find the conditional probability distribution P(w' | q'; α) and sample w' ∈ {0, 1} from it.
(3) Parameter update: the parameters are then updated from the statistics of the sampled states. This article implements the DBN model training with the Theano library written in Python. The training of the DBN model includes two stages. The first stage is pretraining: the RBMs are trained layer by layer from the DBN input layer towards the output layer to obtain the layer-wise DBN parameters. The connection weights between the neural units of adjacent layers are obtained through Gibbs sampling, with the hidden units of a layer conditionally independent of each other. In the second stage, the fine-tuning stage, the DBN uses the error back-propagation algorithm to fine-tune the connection weights in the model according to the output classification, and sets the objective function to the maximum likelihood function to optimise the whole model. Following the RBM network structure [28][29][30], this paper designs the music genre classification network model structure shown in Figure 5.
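A minimal numpy sketch of one contrastive-divergence (CD-1) parameter update consistent with the sampling steps listed above; the learning rate and the exact update rule are standard textbook choices rather than values taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(q, c, kappa, theta, lr=0.01, rng=np.random.default_rng(0)):
    """One CD-1 step: q is a batch of visible vectors, c the weight matrix,
    kappa/theta the visible/hidden biases (names follow the text above)."""
    # Positive phase: hidden probabilities and samples given the data
    p_w = sigmoid(q @ c + theta)
    w = (rng.random(p_w.shape) < p_w).astype(float)
    # Negative phase: reconstruct the visible layer, then the hidden layer again
    p_q1 = sigmoid(w @ c.T + kappa)
    q1 = (rng.random(p_q1.shape) < p_q1).astype(float)
    p_w1 = sigmoid(q1 @ c + theta)
    # Update from the difference of data-driven and model-driven statistics
    c += lr * (q.T @ p_w - q1.T @ p_w1) / len(q)
    kappa += lr * (q - q1).mean(axis=0)
    theta += lr * (p_w - p_w1).mean(axis=0)
    return c, kappa, theta
```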
Classification Algorithm Based on Sparse Representation.
Under the music genre classification network model structure, for a test sample y the sparse representation p̂_1 of the test vector can be calculated through formulas (13) and (15). The nonzero coefficients in this estimate should be associated with the atoms belonging to a particular class i in U, and based on these nonzero coefficients we can quickly judge which class the test sample belongs to [31,32]. However, due to factors such as noise and model errors, a small number of the projection coefficients of p̂_1 outside the correct class will not be exactly zero. In order to distinguish the category to which y belongs, a new vector δ_i(p̂_1) is formed that keeps only the coefficients associated with class i; if the corresponding reconstruction has a small distance from y, then y belongs to this category with a higher probability. The corresponding calculation formula gives the residual μ_i(y), and the method of judging which category y belongs to is identity(y) = arg min_i μ_i(y). (28) Through the above steps, the automatic classification of music genres is realised.
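A minimal sketch of the residual-based decision rule in formula (28), where δ_i keeps only the coefficients of p̂_1 that belong to class i; the per-class index bookkeeping used here is an assumption about how the columns of U are organised.

```python
import numpy as np

def src_classify(y, U, p_hat, class_index):
    """Assign y to the class whose training atoms reconstruct it best.

    y           : test feature vector, shape (m,)
    U           : concatenated training matrix, shape (m, n)
    p_hat       : sparse coefficient vector from the l1 solution, shape (n,)
    class_index : array of length n giving the class label of each column of U
    """
    residuals = {}
    for i in np.unique(class_index):
        delta_i = np.where(class_index == i, p_hat, 0.0)  # keep class-i coefficients only
        residuals[i] = np.linalg.norm(y - U @ delta_i)    # mu_i(y)
    return min(residuals, key=residuals.get)              # identity(y) = argmin_i mu_i(y)
```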
Experimental Environment and Data Set.
The MATLAB 2016a programming software is utilised as the experimental platform, and a deep belief network based on the Theano library of the Python language is developed to validate the efficiency of the automated categorisation technique of music genres based on deep belief networks and sparse representation. Too much sample data would take up a lot of processing time when updating each level of the deep belief network. The sample database is therefore separated into small batches of data packets in advance to boost computing performance, and the batch learning approach is then utilised. In this study the fine-tuning learning rate is set to 0.1 and the pretraining learning rate is set to 0.01, and tests and verifications are carried out using the GTZAN data set, which contains 1000 audio files. There are ten different music genres included in these 1000 files, each with 100 samples. MFCC is utilised to extract the distinctive characteristics of a music signal in this experiment [33,34]. The sampling frequency is 48000 Hz, the sample depth is 16 bits, the frame length is 512, and the number of frames is 2133 [33,34]. In the stage of extracting the Mel frequency cepstrum coefficients, a 12-dimensional Mel filter bank is used, and its frequency index is shown in Table 2.
The classification algorithm is based on the combination of a deep belief network and sparse representation. The methods of reference [7] and the methods of reference [8] are compared with the proposed method to verify its effectiveness.
Evaluation Indicators for Automatic Classification of Music Genres.
The automatic classification evaluation indexes of music genres used in this paper are classification accuracy, recall, F1 value, confusion matrix, and classification time.
These classification evaluation indexes are used to evaluate the performance of the proposed method.
The classification accuracy is expressed as the ratio of the number of correctly classified samples to the total number of classified music genre samples, calculated as in formula (29), where F_y is the number of correctly classified samples and F_s is the number of classified samples. The classification recall rate is expressed as the ratio of the number of correctly classified samples of a genre to the total number of music genre samples of that genre; the higher the classification recall rate, the better the classification performance of the method. It is calculated as in formula (30), where F_z is the total number of samples of the genre. The F1 value represents the harmonic mean of the precision rate and the recall rate; the closer the F1 value is to 1, the higher the classification accuracy of the method. It is calculated as in formula (31).
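The formula images for (29)–(31) are missing above; the standard definitions they describe can be sketched as follows (the interpretation of recall as a per-genre ratio follows the text above).

```python
def accuracy(F_y, F_s):
    return F_y / F_s                                  # formula (29): correct / classified

def recall(correct_in_genre, F_z):
    return correct_in_genre / F_z                     # formula (30): per-genre recall

def f1(precision, rec):
    return 2 * precision * rec / (precision + rec)    # formula (31): harmonic mean
```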
Effect of Automatic Classification of Music Genres.
In order to verify the effect of automatic music genre classification, the confusion matrix is used to represent the classification results. Samples of the rock, metal, country, classical, and blues genres are selected, the proposed method is used to evaluate the classification performance of the trained music genre classification network model, and the automatic classification results of the proposed method are obtained as shown in Figure 6. Figure 6 shows that rock, blues, and classical music all achieve high classification performance, with diagonal confusion-matrix values of 0.98, 0.96, and 0.95, respectively, whereas metal and country music show more misclassification, with values of 0.88 and 0.85, respectively. Because certain country music may be used to accompany country dancing, and some related metal music is incorrectly labelled as country music, country music can easily be misclassified as metal music.
There is also some confusion between metal and rock music, possibly because both emphasise rhythm and share commonalities. Nevertheless, according to the above analysis the suggested technique can successfully accomplish the automatic classification of the five music genres, and its automatic classification of music genres performs well.
Accuracy of Automatic Classification of Music Genres.
In order to verify the classification accuracy of the proposed method, 1000 music genre samples are selected, and the methods of reference [7], the methods of reference [8], and the proposed method are used for automatic classification of music genres, respectively. According to formula (29), the accuracy of automatic classification of music genres by the different methods is calculated, and the comparison results are shown in Figure 7. It can be seen from Figure 7 that, on the 1000 music genre samples, the average classification accuracy of the method of reference [7] is 88%, that of the method of reference [8] is 82%, and that of the proposed method is as high as 95%. Compared with the methods of references [7] and [8], the proposed method therefore has higher accuracy in automatic classification of music genres and can effectively improve it.
On this basis, the recall of automatic classification of music genres by the different methods is calculated according to formula (30); the comparison results are shown in Figure 8.
As can be seen from Figure 8, on the 1000 music genre samples, the average recall rate of automatic music genre classification of the method of reference [7] is 85%, that of the method of reference [8] is 78%, and that of the proposed method is as high as 97%.
Therefore, compared with the methods of references [7] and [8], the proposed method has a higher recall rate of automatic music genre classification, indicating that its classification performance is higher.
On this basis, the F1 values of automatic music genre classification of the different methods are calculated according to formula (31); the comparison results are shown in Figure 9. As shown in Figure 9, the average F1 value of the method of reference [7] is 0.74, that of the method of reference [8] is 0.6, and that of the proposed method is 0.98. As a result, when compared with the approaches of references [7,8], the proposed method's F1 value is closer to 1, suggesting that its accuracy is greater.
In summary, the proposed technique achieves a high accuracy and recall rate for automatic music genre classification, and its F1 value is close to 1, demonstrating that the proposed method can significantly increase the accuracy of automatic music genre classification.
Automatic Classification Time of Music Genres.
On this basis, the automatic classification time of the proposed method is further verified. The methods of reference [7], the methods of reference [34], and the proposed method are used for the automatic classification of music genres, respectively. The comparison results for the automatic classification time of music genres of the different methods are shown in Table 3.
According to the data in Table 3, the automatic classification time of music genres for all approaches grows as the number of music genre samples increases. When the number of music genre samples is 1000, the automatic classification time of the method of reference [27] is 22.6 s, that of the method of reference [8] is 25.8 s, and that of the proposed method is only 15.8 s. It can be seen that the proposed method's automatic classification time for music genres is shorter than that of the compared approaches [27,28].
Conclusion
The automatic music genre classification method based on a deep belief network and sparse representation proposed in this paper gives full play to the advantages of the deep belief network and, combined with the sparse representation method, effectively realises the automatic classification of music genres. It has a good classification effect and high accuracy and can effectively shorten the time of automatic classification of music genres. However, in the process of automatic classification of music genres, this paper ignores the fuzziness of music genres. Therefore, in future research we may consider analysing the music-theoretical components of music genres more thoroughly and propose a direct end-to-end audio spectrum classification method to further improve the accuracy of music genre classification.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 6,736.8 | 2022-03-07T00:00:00.000 | [
"Computer Science"
] |
Microfacies Analysis and Depositional Environments of Lower Sa’adi Formation, Southern Iraq
Abstract
Introduction
The Sa'adi Formation is an important hydrocarbon-bearing stratigraphic formation. It has oil accumulations in the southern oilfields of Iraq (Al-khayanee, 2015), but the production rate is not commercial in the study area due to the low permeability. The studied oilfield is located in the southern part of Iraq (Fig. 1). The area is covered by alluvial deposits and recent clay material (Dawd and Hussien, 1992). The Sa'adi Formation is the highest, youngest, thickest, and most widespread compared with the other Late Cretaceous formations in Iraq (Khasib and Tanuma). The formation was first described by Rabanit (1952). Owen and Nasr (1958) introduced the description of a type section of the formation based on the Zubair-3 well data in the Mesopotamian zone. The definition and age determination of the formation underwent several changes later, mainly due to the revisions made by Chatton and Hart (1961) and the Iraqi-Soviet Team (1972). Alshawosh (2002) and Aqrawi et al. (2010) have suggested a middle shelf as the primary depositional environment for the whole sequence, with an apparent influence of the outer shelf or open marine. The petrophysics, microfacies evaluation, palaeoenvironments and sequence stratigraphy of the Turonian-Lower Campanian (Khasib, Tanuma and Sa'adi) formations have been discussed by several researchers (Al-Edani, 2017 and Alkhaykanee, 2015). The current research aims to identify the depositional environment through microfacies analysis, and the effects of diagenetic processes on the reservoir properties of the Sa'adi Formation, from the data of the drilled wells distributed in the studied oilfield (Fig. 2). The core samples from the upper part of the Sa'adi Formation are missing; therefore, this study focuses on the lower part of the formation.
Geological Setting and Stratigraphy
The Sa'adi Formation is part of the Santonian-Campanian cycle that belongs to the megasequence AP9 (Sharland et al., 2001). In the study area, the formation is subdivided into two parts; the upper part consists of chalky limestone, sometimes alternating with marly limestone, and the lower part is limestone alternating with marly limestone (Buday, 1980). The lower part can be further divided into three units (A, B and C) according to well logs and petrophysical properties (Fig. 3). The contact of the Sa'adi Formation with the underlying Tanuma Formation is conformable, at the top of black calcareous shale and the base of white chalky limestone. The contact with the overlying Hartha Formation is an erosional unconformity (Buday, 1980), as shown in Fig. 4.
Materials and Methods
The current study used fieldwork to collect thirty-five core samples, which were then characterised to identify the lithology of the lower Sa'adi Formation. Thirty-five thin sections were prepared in the laboratory of the Geology Department, University of Baghdad, and the analysis and description of the thin sections were carried out in the laboratory of the Department of Geology, University of Basrah, in which the microfacies analysis, depositional model, and diagenetic processes were investigated. The facies were classified based on Dunham's classification (1962) of carbonate rocks, and for the determination of the depositional environment the standard depositional environment models and facies distributions of Wilson (1975) and Flugel (2010) were compared with the microfacies. Petrel software (Version 2018) was used to export the structural map of the top of the Formation and the correlation between the wells through the area of interest. An Excel sheet was used to relate the porosity and permeability values obtained from the core analysis data and to compare the results with depth, which led to the determination of the most favourable facies environment in the lower Sa'adi Formation.
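The porosity–permeability relation mentioned above was built in Excel; a minimal matplotlib sketch of an equivalent cross plot is given here for illustration. The column names and file name are placeholders, not taken from the study.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical core-analysis table with one row per plug sample
core = pd.read_csv("core_analysis.csv")  # columns: unit, depth_m, porosity_pct, permeability_md

fig, ax = plt.subplots()
for unit, grp in core.groupby("unit"):   # e.g. Sa'adi A, B, C
    ax.scatter(grp["porosity_pct"], grp["permeability_md"], label=unit, s=20)
ax.set_yscale("log")                     # permeability is conventionally plotted on a log axis
ax.set_xlabel("Porosity (%)")
ax.set_ylabel("Permeability (mD)")
ax.legend()
plt.show()
```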
Facies Characteristics
The lower Sa'adi Formation in the studied wells consists of the following three facies belts, from the coast towards the open sea: shoal, open marine, and middle ramp.
Inner ramp (Shoal facies group)
This facies comprises skeletal packstone to grainstone, bioclast packstone to grainstone, and rudist debris grainstone. The major allochems in these microfacies are skeletal grains with little matrix, and the matrix is sparite. They contain more than 90% fossiliferous grains, millimetre- to centimetre-sized bioclasts, abundant rudist fragments and algae. These sub-microfacies reflect a high-energy environment in upper unit A (Fig. 4). This sub-microfacies corresponds to the RMF-26 microfacies of Wilson (1975). It is found in well-D at a depth of 2211.8 m and in well-E at a depth of 2170 m (Plate 1e, 1f).
Inner ramp (Open marine facies group)
• Algal wackestone sub-microfacies: The bioclast wackestone consists mainly of skeletal fragments; the dominant allochems are algae with rudist debris, and micrite and sparite fill the algal shell fragments. This microfacies is distinguished by a grain content of about 45%. It represents a low- to moderate-energy environment and is located in well-E at a depth of 2172 m. This sub-microfacies corresponds to the RMF-15 microfacies of Wilson (1975), deposited in the open marine zone FZ5 (Plate 1a).
• Bivalves packstone sub microfacies
The main components of this microfacies are bivalves, with benthonic foraminiferal fragments and debris such as Rotalia, algae, and a few echinoids. These facies are typically grain-dominated in texture with less than 20% micrite. The bioclastic packstone is representative of a shallow open marine environment. The sub-microfacies is located within FZ-5 and corresponds to the RMF-14 microfacies of Wilson (1975); well-E, depth 2174 m (Plate 1b).
• Fossiliferous packstone sub microfacies
This microfacies is characterised by abundant benthonic foraminifera and a few planktonic foraminifera with other shell fragments. The fossiliferous components are highly cemented by fine to coarse crystalline calcite cement. These microfacies reflect a high-energy environment. This sub-microfacies corresponds to the RMF-13 microfacies of Wilson (1975); typical facies occur in well-E at a depth of 2172.8 m (Plate 1c).
• Middle ramp facies group
• Pelagic lime mudstone sub-microfacies
Based on Dunham's classification, these facies were distinguished. They consist of 5% fossils and fragments and contain foraminifera in micrite. These sub-microfacies are observed in the lower Sa'adi Formation in well-E at a depth of 2195 m (Plate 2a). They reflect low energy, which is characteristic of the mid-ramp environment facies zone (FZ-3); these microfacies are similar to RMF-5, and their location in unit C was established by comparing the microfacies with the standard Wilson model microfacies (Wilson, 1975) (Fig. 4).
• Fossiliferous wackestone submicrofacies
The microfacies are distinguished by a high proportion of fossilised material (20%) that consists primarily of echinoderms and benthonic foraminifera, with a rare presence of planktonic foraminifera and algae. These sub-microfacies are observed in well-A, well-B and well-E at 2186.1 m (Plate 2b). These facies show similarity to RMF-3 of Wilson (1975) and are located within unit B (Fig. 4).
• Benthonic foraminifera wackestone sub microfacies
This facies consists of about 20-40% benthonic foraminifera relative to the total components of the texture. In thin section, bioclasts increase relative to echinoderms and bryozoans. These facies show similarity to RMF-13 of Wilson (1975). These sub-microfacies are observed in the lower part of unit B of the lower Sa'adi Formation (Fig. 4), most clearly in well-E at 2184.4 m (Plate 2c).
• Echinoderms wackestone with diverse fossils Sub microfacies
This microfacies matrix is dark brown micrite (microcrystalline calcite); the most significant allochems are echinoderms and benthic foraminifera with pellet grains, fragments of benthic foraminifera as Textularids and Ammonia.It is similar to RMF-3 compared to typical Wilson model microfacies (Wilson, 1975), as clearly in well-E, well-B and well-A at 2259.60 m (Plate 2d).
• Bryozoan wackestone sub microfacies
The micrite of this microfacies contains many bryozoan fragments and some echinoderm fragments. This sub-microfacies corresponds to the RMF-9 microfacies of Wilson (1975). These facies are located in well-A, well-B, well-D, and well-E of the study area, with typical facies of the lower Sa'adi Formation in well-E at a depth of 2174.3 m (Plate 2e).
• Argillaceous wackestone sub-microfacies: These microfacies mainly consist of benthic foraminifera at the base of the formation. They contain about 40% skeletal grains, although this may reach 70%, becoming packstone. The groundmass is micrite with shell fragments of foraminifera. These sub-microfacies were observed in well-E at 2178.2 m (Plate 2f). This sub-microfacies corresponds to the RMF-2 microfacies of Wilson (1975).
Diagenetic Processes
All diagenetic processes have affected the lower Sa'adi Formation, including cementation, micritization, compaction, dissolution, dolomitization, recrystallization, stylolitization, and pyritization (Table 1). Dissolution and cementation are the principal diagenetic processes affecting the sediments, whereas micritization and compaction are second-order diagenetic processes affecting the sediments of the lower Sa'adi Formation.
• Micritization: Micritized grains are prevalent in the bioclastic wackestones/packstones of the lower Sa'adi Formation and mainly occur in the middle ramp. It is an early diagenetic process, and skeletal grains were micritized shortly after deposition (Plate 1b, 1c).
• Cementation: This process has led to cement precipitation in both primary and secondary porosity of the Sa'adi Formation carbonates. There are several types of cement, such as granular cement, blocky cement, and drusy cement. Early cementation mainly took place in the inner-ramp shoals and open marine setting, and late cementation primarily took place in fossil-rich sediments deposited in the middle ramp (Plate 3e).
• Compaction: Mud-supported fabrics dominate the Sa'adi Formation; compaction increases with overburden and with increasing fine-grain content, resulting in a general reduction of rock volume and porosity and in mechanical failure of grains (Plate 4f). Stylolites and fractures are the main expressions of mechanical and chemical compaction. The boundaries usually accumulate oxides and organic matter along irregular surfaces formed by differential vertical movements under burial conditions (Friedman, 1975). Compaction also resulted in microfractures in bioclastic packstone in the inner ramp and pressure seams in lime mudstone/wackestone in the middle ramp. Stylolitization indicates a late-diagenetic origin (Plate 3a, 3f).
• Dissolution: An essential diagenetic process, especially in the packstones/grainstones developed in the inner ramp. It is generally fabric-selective, represented by vuggy pores and moulds of skeletal grains (Moore, 2013) (Plate 3b, 3e).
• Dolomitization: Most Fe-dolomite crystals occur within wackestone in the middle ramp, indicating a late diagenetic origin (Bathurst, 1975) (Plate 3b).
• Recrystallization: The micrite matrix is often recrystallized, resulting in the inversion of micrite to microspar. It is an early-stage diagenetic process in the grain-supported microfacies of the inner ramp (Plate 3d). Through recrystallization and the processes acting during burial diagenesis and thermal history, the original depositional microfacies and diagenetic textures of the limestone are often altered or destroyed.
• Pyritization: This diagenetic feature marks reducing environments with abundant anaerobic bacteria and is an indicator of a low oxygen content (Flugel, 2010). Pyrite is authigenic where H2S increases together with an abundance of iron. Pyritization is rare in the lower Sa'adi Formation; it was found in the middle ramp (Plate 3c).
Porosity and permeability
Porosity and permeability were studied for every unit. The reservoir properties (porosity and permeability) of each unit are summarised in Table 2. Sa'adi A represents a shallower environment than Sa'adi B and Sa'adi C. The porosity and permeability in Sa'adi A are higher than in Sa'adi B and Sa'adi C, which indicates that as the seawater deepens, the reservoir quality in the middle ramp becomes poorer in the studied oilfield. The porosity of Sa'adi C is lower than that of Sa'adi B. This is also confirmed by the cross plot of porosity and permeability (Fig. 5). Sa'adi A is dominated by bioclast packstone/grainstone, indicating that shoal facies together with open marine facies are the most favourable environments for reservoir quality in the lower Sa'adi Formation.
Reservoir Characteristics
Reservoir evaluation has been conducted using core analysis and thin sections, and the pore types have been studied. It revealed that the reservoir in the Sa'adi Formation is of medium porosity and moderate to low permeability. Permeability and porosity in the inner-ramp sediments are higher than in the middle-ramp sediments.
Pore Types
Six types of pores are distinguished in the lower Sa'adi Formation using the thirty-five thin sections: • Intergranular porosity was distinguished in intraclast skeletal packstones and grainstones located in the inner-ramp facies. This porosity is rarely observed in thin sections (Plate 4b).
Depositional Environments
The primary depositional setting of the lower unit was a shallow marine carbonate environment. The types of organisms provide the evidence for a marine carbonate environment with energy from waves and currents, and the source of the carbonate material is predominantly biogenic. Comparison with recent and ancient sedimentary environments indicates that the Sa'adi Formation in the study area was deposited on a carbonate ramp platform. Most of the lower Sa'adi Formation microfacies were deposited in the mid-ramp zone and the inner-ramp zone (open marine and shoal). The common mid-ramp deposits (FZ-4 and FZ-3) contain skeletal lime mudstone-wackestone, fossiliferous wackestone, and argillaceous wackestone-packstone. Skeletal fragments are more common in the proximal part of the mid-ramp association, whereas in the distal portion of the mid-ramp the skeletal fragments are less abundant. These fragments are debris of green algae, bryozoans, echinoderms, bivalves, and benthic foraminifera with few planktonics. The dominant textures in these associations are wackestone and wacke-packstone, which means the energy level ranges from low to moderate. Bioclast wackestone-packstone reflects shallow open marine conditions (FZ-5) in the inner ramp, and rudist debris packstone-grainstone reflects shoal deposition (FZ6) in a high-energy environment. This environment also contains bioclastic debris and rudists represented by large algae, echinoids, molluscan shells and benthic foraminifera (Schlager, 2005). According to the facies zones (FZ), the study area covers FZ-3, FZ-4, FZ5 and FZ6 (Fig. 6).
Conclusions
• The diagenetic processes in the lower part of the Sa'adi Formation have both positive and negative effects on reservoir quality. Cementation and micritization have a detrimental impact by reducing the porosity and permeability, thereby decreasing the reservoir quality, whereas dissolution and recrystallization create and enhance the petrophysical properties, resulting in improved reservoir quality. Other processes also exist but to a lesser extent, such as pyritization and dolomitization, which have no significant impact on reservoir quality. • Most of the porosity within the lower part of the Sa'adi Formation was formed by diagenetic processes. The inner-ramp facies of the lower Sa'adi Formation have the best reservoir quality in terms of porosity and permeability distribution. • The Sa'adi Formation is divided into three units (A, B and C). The lower Sa'adi Formation comprises 12 carbonate sub-microfacies deposited in three facies associations: six sub-microfacies were identified and interpreted as deposited in the middle ramp in unit C and lower unit B; four sub-microfacies were deposited in the inner-ramp open marine setting in upper unit B and lower unit A; and two sub-microfacies were deposited in the inner-ramp shoal in upper unit A. These facies thus reflect a shallowing-upward system, and this palaeoenvironmental succession is evidence of regression during deposition of the lower Sa'adi Formation. The depositional environment is interpreted as part of a homoclinal ramp.
• The fossils represented in the lower part of the formation are benthonic foraminifera with rare planktonics, echinoids, bryozoans, and algae. Skeletal grains dominate the facies of the inner-ramp environment more than those of the middle-ramp environment. With the stronger influence of the diagenetic processes represented by dissolution (secondary porosity), unit A acted as an incubator of organic matter represented by hydrocarbons. Still, it is considered a tight reservoir due to the moderate to low permeability resulting from disconnected pores.
Fig. 1. Location map of the study area showing the location of the southern fields of Iraq (Chafeet, 2016).
Fig. 3. Three units of lateral variation (A, B, and C) for the lower Sa'adi Formation in the studied oilfield.
Fig. 4. Lithostratigraphic column of the upper Cretaceous age in the West Qurna oilfield, modified from (Almohsen, 2019), and located in an open marine zone, as seen very clearly in well-D at a depth of 2214.8 m (Plate 1d).
Plate 1. Photomicrographs of microfacies in the carbonates of the lower Sa'adi Formation. (a) Algal-bioclast wackestone in well-E at a depth of 2172 m. (b) Bivalves-bioclast packstone with dissolution (moldic and intraparticle porosity) in well-E at a depth of 2174 m. (c) Fossiliferous packstone with dissolution (vuggy porosity) and cementation (drusy cement) in well-E at a depth of 2172.8 m. (d) Fragmental Mollusca packstone sub-microfacies in well-D at a depth of 2214.8 m. (e) Echinoderms packstone-grainstone with rudist and bioclast, also showing channel porosity and cementation, in well-E at 2171.8 m. (f) Bioclast grainstone sub-microfacies in well-D at a depth of 2211.8 m.
Plate 2. Photomicrographs of microfacies in the carbonates of the lower Sa'adi Formation. (a) Pelagic lime mudstone with the planktonic foraminifer Globotruncanita (well-E at a depth of 2195 m). (b) Fossiliferous wackestone, benthonic foraminifera Rotalia and Ammonia with rare planktonics; vuggy porosity and micritization appear; in well-E at a depth of 2186.1 m. (c) Fossiliferous-benthonic foraminifera wackestone in well-E at a depth of 2184.4 m. (d) Echinoderms wackestone with diverse fossils, in well-E at a depth of 2182 m. (e) Bryozoan wackestone containing fragments of benthic foraminifera; the intraparticle porosity of the wackestone is visible; at a depth of 2179 m in well-E. (f) Argillaceous wackestone with bioclasts; cementation (granular cement) appears clearly; in well-E at a depth of 2178.2 m.
Table 1. Diagenetic paragenesis of the lower Sa'adi Formation, from the petrographic study in the studied wells.
Plate 3. Diagenetic processes. (a) Chemical compaction (pressure solution, parallel stylolite) in well-B at a depth of 2236.5 m. (b) Dolomitization around dissolution vuggy porosity, in well-C at a depth of 2366.3 m. (c) Pyritization, in well-A at a depth of 2258.62 m. (d) Recrystallization with the presence of mechanical compaction (low-amplitude stylolite), well-D at a depth of 2212 m. (e) Drusy mosaic cement and blocky cement with dissolution, well-B at a depth of 2237.5 m. (f) Chemical compaction (peak, high-amplitude stylolite) in well-B at a depth of 2236.5 m.
• Intragranular dissolved pores are common in skeletal grains and intraclasts; in foraminifera, bryozoans and brachiopods they were formed in the freshwater environment (Plate 4d).
• Moldic porosity was primarily found in skeletal packstones within the inner-ramp facies and in wackestones within the middle-ramp facies, mostly in the form of shell fragments, pelecypods, and benthonic foraminifers (Plate 4a).
• Channel pores are abundant in lime mudstone in the middle-ramp facies and in packstone in the inner-ramp (shoal) facies (Plate 4c).
• Fracture pores occur particularly in foraminiferal wackestones in the middle ramp and in rudist packstones in the inner ramp (open marine) (Plate 4f).
• Vuggy and cavern pores occur mainly in the middle and inner ramps (Plate 4e).
Plate 4. Porosity types in the lower Sa'adi Formation. (a) Moldic and intraparticle porosity in well-E at a depth of 2169.7 m. (b) Interparticle porosity and vuggy porosity in well-E at a depth of 2171 m. (c) Channel porosity in well-E at a depth of 2194 m. (d) Intraparticle porosity and vuggy porosity in well-E at a depth of 2170 m. (e) Cavern and vuggy porosity in well-C at a depth of 2365.8 m. (f) Fracture in well-A at a depth of 2259.60 m.
Fig. 5. The cross plot of porosity and permeability in the lower Sa'adi Formation shows that the porosity and permeability are good in the inner-ramp facies of the lower Sa'adi Formation. | 4,184.6 | 2022-09-30T00:00:00.000 | [
"Geography",
"Environmental Science",
"Geology"
] |
NEW METHODS IN ACQUISITION , UPDATE AND DISSEMINATION OF NATURE CONSERVATION GEODATA – IMPLEMENTATION OF AN INTEGRATED FRAMEWORK
Within the framework of this project, methods are being tested and implemented a) to introduce remote sensing based approaches into the existing process of biotope mapping and b) to develop a framework serving the multiple requirements arising from different users' backgrounds and thus the need for comprehensive data interoperability. Therefore, state-wide high resolution land cover vector data have been generated in an automated object oriented workflow based on aerial imagery and a normalised digital surface model. These data have been enriched by an extensive characterisation of the individual objects by e.g. site specific, contextual or spectral parameters utilising multitemporal satellite images, DEM derivatives and multiple relevant geo-data. Parameters are tested for relevance with regard to the classification process using different data mining approaches and have been used to formalise categories of the European nature information system (EUNIS) in a semantic framework. The classification will be realised by ontology-based reasoning. Dissemination and storage of data is developed fully INSPIRE-compatible and facilitated via a web portal. The main objectives of the project are a) maximum exploitation of existing "standard" data provided by state authorities, b) combination of these data with satellite imagery (Copernicus), c) creation of land cover objects and achievement of data interoperability through a low number of classes but comprehensive characterisation and d) implementation of algorithms and methods suitable for automated processing on large scales.
Situation
A key task of the authorities in the German federal state of Rhineland-Palatinate is the provision and regular update of state-wide geo-data on ecologically valuable areas. These data serve multiple purposes ranging from the local to the EU level, e.g. local and regional administration, biotope management, biodiversity monitoring or EU reporting obligations (NATURA 2000 (Council Directive 92/43/EEC, 1992), CAP Cross Compliance (EU, 2013)). Regular state-wide field recordings are expensive and time consuming. Remote sensing and innovative technologies in data management and analysis offer new opportunities for increased efficiency (Corbane et al., 2015). Developments in other countries show (Banko et al., 2012) that such technologies can help to increase the usability of the data produced and introduce geo-data to new fields of administration technology. The overall guideline is to step by step introduce automated processes into traditional ways of geo-data acquisition, e.g. the field-mapping-based workflows of habitat mapping.
Project Overview
Since mid 2014 the project NATFLO (Landscape Objects from remotely sensed data for Nature Conservancy) is setting up a system of data production for nature conservation purposes based on remote sensing methods and the extended use of automated workflows.The project makes use of the experiences gained in previous studies and the results of expert meetings held regularly from 2010 until present.The purpose of the meetings was to develop the basic methodological background for the production of multifunctional geo-data with special regard to biotope mapping.Some major conclusions were drawn guiding the following project work.
High resolution vector data
Since the data were to be used in administrational contexts an object based mapping approach (vector geometries) was preferred allowing the attachment of additional information in attributes and data bases.A very high spatial resolution was regarded to be adequate being able to capture the landscape in detail (e.g.individual trees or small scale gradual differences in vegetation cover) and to facilitate the combination with cadastral data.
Multifunctionality
Although biotope mapping was to be the main purpose of the envisaged process, further fields of application like land use mapping or landscape planning were identified. On top of this, both the local habitat classification scheme OSIRIS and EU schemes (e.g. EUNIS habitat types) had to be addressed. Therefore a high degree of interoperability, i.e. the usability of one vector object in different semantic contexts, was identified as a major aim. To achieve semantic interoperability, ontology based reasoning using object properties was recognised as the appropriate means rather than the direct implementation of classification schemes and a subsequent translation to other nomenclatures. This approach adopts the ideas published in the concepts of the Eionet Group on Land Monitoring in Europe (EAGLE) (Arnold et al., 2013).
State wide approach vs. test regions
As mentioned before the project can take advantage of the experiences made in previous studies.It is due to these experiences that it was decided to run a comprehensive state wide approach instead of developing a method in test regions.For example automated workflows for state wide object based image analysis did already exist and could be adjusted to a new subject.In an iterative way, the project is addressing one question after the other.The advantage of such an approach is that although a system is not yet fulfilling the full range of tasks useful (geo-) information is dripping from it and can already be transferred to official processes because the data exist for the entire country.
Requirements to be met by data and framework
In short, the data and their generation process described below will have to fulfil different requirements and serve for quite a number of tasks. However, due to its diverse topography there are great differences in the climatic conditions on the local as well as regional scale, e.g. leading to great regional shifts in the onset of the vegetation period. 42 % of the area is covered by forest (beech, oak, spruce, pine), 42 % is under agricultural use. Agriculture in the higher altitudes is dominated by meadowland with arable land in some places, whereas arable land dominates in the mid to lower altitudes down to the River Rhine. Viticulture with an area of about 63,000 ha and hot-spots along the rivers Mosel and Rhine is a strong economic and cultural factor. Due to its geographical diversity, the significant areas of forest and meadowland and the dense network of permanent water bodies, Rhineland-Palatinate is rich in ecologically valuable areas.
Major Data
To keep costs low one of the preconditions of the project was to derive as much information as possible from standard geo-data provided by the RLP authorities.
Aerial Images
A full coverage of multispectral orthophotos (B, G, R, NIR) have been provided by the RLP Ordnance Survey (Landesamt für Vermessung und Geobasisinformation RLP) with a ground resolution of 0.2 m in tiles of 2 by 2 km.Aerial images in RLP are updated every two years.
Stereo Matching Based DSM
DSM as ASCII point data with a resolution of 0.5 m from automated stereo matching has been provided by the RLP Ordnance Survey. Orthophotos and DSM are derived from the same aerial imagery. The point data were rasterised via IDW algorithms in automated workflows.
LiDAR DTM
The LiDAR DEM has been provided as ASCII point clouds (first and last pulse) by the RLP Ordnance Survey. The data were acquired during flight campaigns between 2003 and 2009. Last pulse data with 4 points per square metre (average) were rasterised by IDW algorithms in automated workflows. These data served for the calculation of the normalised DSM at high ground resolution as well as for the derivation of DTMs of lower resolution, e.g. for terrain analyses.
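A minimal sketch of the normalisation step (nDSM = DSM − DTM) on already-rasterised and co-registered grids, using rasterio for I/O; the file names are placeholders and alignment of the two rasters is assumed.

```python
import numpy as np
import rasterio

with rasterio.open("dsm_0_5m.tif") as dsm_src, rasterio.open("dtm_0_5m.tif") as dtm_src:
    dsm = dsm_src.read(1).astype("float32")
    dtm = dtm_src.read(1).astype("float32")
    profile = dsm_src.profile

ndsm = np.clip(dsm - dtm, 0, None)   # height above ground; negative values clipped to 0

profile.update(dtype="float32")
with rasterio.open("ndsm_0_5m.tif", "w", **profile) as dst:
    dst.write(ndsm, 1)
```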
From Raw Data to Landcover to Habitats
In order to accomplish the challenge of developing a multifunctional data infrastructure and corresponding data model the project uses existing concepts of categorising and describing data like the EAGLE data model (Arnold 2013).Main advantages of this approach are a comprehensive interoperability to other nomenclatures (CORINE Land Cover, INSPIRE LCUS, LCCS etc.) and the ability to derive land cover and nature conservation products in different stages of the workflow.In the first stage a pure land cover product will be generated using the descriptive attributes of the EAGLE data model whereas in the second stage these objects can be enriched by indicators that are necessary to derive the actual biotope map.
Combined use of imagery from different sources
The workflow developed is based on the combined analysis of different data sources including aerial images (biennial update), VHR-DEM (LiDAR, stereo-matching) and satellite imagery.Auxiliary thematic geo-data (DEM derivatives, soil-data) is used to support characterisation and classification purposes.
The set of image data aims at benefitting from both high spatial resolution of aerial images and high spectral and temporal resolution of satellite data (Copernicus: TSX, RapidEye, in future: Sentinels).
Object generation and validation of geometries
Vector geometries (polygons) are generated in an object based image analysis developed in eCognition Developer and run tile-wise in batch mode on eCognition Server. Besides the generation of vector objects, the ruleset manages a pre-classification of the objects and attaches attribute information to each of them, mainly statistical parameters on their spectral and height characteristics.
2.3.1.1 Segmentation
The basic approach of the segmentation process is rather pragmatic. Basically it aims at delivering geometries usable for a field mapper in land cover and respective biotope mapping. This means that every boundary possibly relevant in the landscape is supposed to be mapped. Since even gradual changes in the vegetation cover, e.g. in its texture, may depict important habitat boundaries, this may lead to some over-segmentation, which has some implications for the further processing of the data, e.g. concerning object aggregation.
Image segmentation is run as an iterative process making use of threshold-based and multiresolution approaches (Baatz & Schäpe 2000).All steps currently carried out during segmentation are based on information derived from aerial images, i.e. spectral information and height (including a LiDAR based DTM for the normalisation of the DSM).Tests are currently carried out involving satellite imagery (single scenes and time series, multispectral and SAR) in order to possibly include multitemporal information into the process of object generation.Also DTM derivatives are tested concerning their usability for segmentation in order to better represent geomorphological conditions by object shapes and pattern.
Multi-threshold segmentations based on spectral (combination of NDVI and bare area index) and DSM-derived (height above ground) parameters are carried out to geometrically separate the main landscape components "Abiotic"/"Water" and "Biotic" and to further split up these components themselves. Fixed threshold segmentation (instead of values individually adjusted to each image) is used to mitigate negative effects at the tile boundaries and to apply repeatable rules, especially when dividing the landscape by height above ground, a comparably stable parameter. The threshold based segmentation already delivers a clear picture of the landscape. Nevertheless, the objects produced are not yet suitable for detailed mapping.
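A simplified numpy sketch of the fixed-threshold split into abiotic, water and biotic components from NDVI and height above ground; the threshold values and class labels are illustrative assumptions, not the project's actual rule set.

```python
import numpy as np

def preclassify(red, nir, ndsm, water_mask, ndvi_veg=0.3, tall_height=3.0):
    """Very coarse pixel pre-classification applied before object segmentation."""
    ndvi = (nir - red) / (nir + red + 1e-6)
    classes = np.full(red.shape, "abiotic", dtype=object)
    classes[water_mask] = "water"
    vegetated = (ndvi >= ndvi_veg) & ~water_mask
    classes[vegetated & (ndsm >= tall_height)] = "woody_vegetation"   # trees / shrubs
    classes[vegetated & (ndsm < tall_height)] = "low_vegetation"      # grass / crops
    return classes
```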
Object characterisation
The classification approach described below (see 2.5.1) makes use of indicators. They describe every single habitat class of the classification scheme. The use of such indicators requires comprehensive knowledge of the environmental conditions and characteristics of the objects. Value based parameters stored as object attributes in data bases fulfil this requirement. These parameters are supposed to be as diverse as possible and describe an object concerning the site (soil, macro- and topoclimate, geomorphology and terrain etc.), the characteristics of the cover type (e.g. vegetation height and intensity) and temporal dynamics within the object like phenological development or management measures. A data base with a collection of relevant geo-data for the calculation of zonal statistics has been built up. It consists of already available data from official authorities as well as of analyses run especially for the project purposes. For example, a large number of parameters on terrain dependent site conditions have been calculated on a state wide LiDAR-based DTM with 5 m resolution (Esri ArcGIS, SAGA GIS). Examples here are simple parameters like slope and aspect. More specific information is offered by derivatives like topographic wetness indices (potential soil moisture), incident solar radiation (topoclimate) or topographic position (terrain/morphology).
In an automated workflow (Python), all object geometries produced in the object based image analysis have been enriched with attributes on numerous value based parameters. A set of standard statistical parameters has been calculated for each parameter and attached to each object.
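A minimal sketch of this attribute-enrichment step: for each object, identified here by a rasterised object-ID grid, standard statistics of one parameter raster are collected into a table. The sketch is numpy-only, so rasterisation of the vector objects is assumed to have happened beforehand.

```python
import numpy as np

def zonal_statistics(object_ids, parameter, nodata=-9999):
    """Return {object_id: {mean, std, min, max}} for one value-based parameter."""
    stats = {}
    valid = parameter != nodata
    for oid in np.unique(object_ids):
        values = parameter[(object_ids == oid) & valid]
        if values.size == 0:
            continue
        stats[int(oid)] = {"mean": float(values.mean()), "std": float(values.std()),
                           "min": float(values.min()), "max": float(values.max())}
    return stats
```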
Analysis of multitemporal satellite data
The first step of data production, the derivation of object geometries from the multispectral aerial images and the nDSM, took advantage of the high spatial resolution of these data. Satellite imagery offers high spectral and temporal resolution, so further remote sensing technology can be applied to the data base through methods making use of this higher spectral and temporal resolution. Methods are being tested that analyse optical (RapidEye) and SAR (TerraSAR-X) satellite imagery concerning temporal patterns in time series. Currently the detection of permanent grassland and its characterisation concerning use intensity and pattern is the focus of interest. Grassland is very important in nature conservancy due to its high biodiversity in semi-natural or low intensity areas, while at the same time it is highly threatened by being ploughed up for the production of fuel for renewable energies. Within this project, the analysis of intra-annual multispectral RapidEye time series with six acquisition dates, carried out in a test region of 200 km² in western RLP, separated grassland from cropland and detected the number of mowing events as well as areas managed by grazing through the use of support vector machine classification. TerraSAR-X backscatter time series are analysed implementing methods developed and successfully applied by Schuster et al. (2011), aiming at the detection of cutting events in semi-natural grassland. On top of this, methods are being developed for the detection and quantification of significant changes by indices combining backscatter and coherence in SAR time series. Both optical and SAR-based approaches are supposed to be run in operational workflows for the derivation of comprehensive information for RLP. Data on management patterns and use intensities are crucial for the characterisation of ecologically valuable areas. Temporal metrics, once produced comprehensively for RLP, are expected to significantly enhance the characterisation of objects and therefore the classification results in the current process.
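A schematic scikit-learn sketch of the support vector machine step described above (separating grassland from cropland from per-object temporal metrics); the input files, feature layout and train/test split are illustrative assumptions, not the project's actual setup.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# X: one row per object, columns = NDVI values of the six acquisition dates
# y: reference labels, e.g. 0 = cropland, 1 = grassland (placeholder files)
X, y = np.load("ndvi_timeseries.npy"), np.load("labels.npy")

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X_train, y_train)
print("overall accuracy:", clf.score(X_test, y_test))
```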
Formalisation of habitat classes
The basis of the classification approach is an ontological system that formalises habitat types (e.g. those included in the EUNIS nomenclature) in an OWL2/XML ontology (Nieland, Kleinschmit, Förster, & Kleinschmitt, 2015). The basic concepts in this ontology are stored in a shared vocabulary and are used to describe important indicators of habitat classes, which have been adopted from the classification schemes. In order to generate comprehensive interoperability, this shared vocabulary is built on concepts developed by EAGLE (Arnold et al., 2014). Since the EAGLE concepts are, until now, mainly focused on land cover and land use, additional concepts had to be included in the system. The development of descriptive indicators is a crucial part of this work and is done in cooperation with surveying experts with great care in an iterative procedure. Generating a meaningful and comprehensive set of indicators for habitat classification is the basis of the classification process and therefore the methodological backbone of this work. In the next step the developed indicators can be used to describe habitat classes according to the subsequent nomenclatures using Description Logics (DL).
Feature selection and data mining
In order to derive descriptive indicators (see 3.4.1) from value-based parameters (see 3.3), automated methods are needed. This includes the selection of the features that are most appropriate for the derivation of the indicators, in order to reduce the calculation effort, as well as the classification process itself. Since we want to benefit from the computational power and accuracy of supervised classification algorithms on the one hand, and from the transferability and reproducibility of knowledge-based approaches on the other, the Separability and Threshold (SeATH) approach (Nussbaum 2006) has been taken into account. This algorithm statistically identifies characteristic features and their thresholds on the basis of the available training data (see 3.4.4) and can therefore be used to generate reproducible and transferable rulesets in a simple statistical approach (see Figure 5).
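A minimal sketch of such a separability-and-threshold computation is given below; it assumes Gaussian class statistics for a single feature (as in the SeATH literature) and uses a simple variance-weighted midpoint as a surrogate threshold, so it illustrates the idea rather than reproducing the exact SeATH algorithm.

```python
# Hedged sketch of a SeATH-style feature ranking: the Jeffries-Matusita distance
# between two classes is computed from Gaussian class statistics for one feature,
# and a surrogate decision threshold is placed between the class means.
import numpy as np

def jeffries_matusita(m1, s1, m2, s2):
    """JM distance between two 1-D Gaussians (0 = identical, 2 = fully separable)."""
    b = 0.125 * (m1 - m2) ** 2 * 2.0 / (s1**2 + s2**2) \
        + 0.5 * np.log((s1**2 + s2**2) / (2.0 * s1 * s2))
    return 2.0 * (1.0 - np.exp(-b))

def midpoint_threshold(m1, s1, m2, s2):
    """Surrogate threshold: variance-weighted midpoint between the class means."""
    return (m1 * s2 + m2 * s1) / (s1 + s2)

# toy training samples for one feature and two classes
a = np.random.normal(0.30, 0.05, 200)   # e.g. grassland NDVI (illustrative)
b = np.random.normal(0.55, 0.08, 200)   # e.g. forest NDVI (illustrative)
jm = jeffries_matusita(a.mean(), a.std(), b.mean(), b.std())
thr = midpoint_threshold(a.mean(), a.std(), b.mean(), b.std())
print(f"JM = {jm:.2f}, threshold = {thr:.3f}")
```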
Ontology-based classification
The derived rulesets (see 3.4.2) can now be imported into the OWL ontology to produce computer-readable, reproducible and transferable classification rules. In the next step, the segmented objects (see 3.2.1) and the associated value-based parameters (see 3.3) can be imported as OWL individuals. The classification is achieved by the FaCT++ reasoner, which is able to perform efficient A-Box reasoning over large ontologies. That means that, in the first step, the reasoner assigns OWL individuals (segmented objects) to indicator concepts based on the rules derived by SeATH (see 3.4.2) and, in the second step, allocates the individuals to the formalised habitat classes, taking into account the class descriptions and expert knowledge (see 3.4.1) (see Figure 5).
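The sketch below illustrates the general idea of loading segmented objects as OWL individuals and invoking a DL reasoner. The project uses the FaCT++ reasoner; here owlready2 (whose bundled reasoners are HermiT/Pellet and which needs a Java runtime) is used only as a stand-in, and the ontology IRI, class and individual names are hypothetical.

```python
# Hedged sketch: segmented objects become OWL individuals and a DL reasoner
# re-classifies them into habitat classes.  All names below are hypothetical;
# the real class restrictions would come from the SeATH-derived rules.
from owlready2 import get_ontology, sync_reasoner, Thing

onto = get_ontology("http://example.org/habitat.owl")   # hypothetical ontology IRI

with onto:
    class LandscapeObject(Thing): pass
    class Grassland(LandscapeObject):
        # In the real system, OWL restrictions (e.g. NDVI and height thresholds)
        # derived from SeATH would be attached to this class.
        pass

obj1 = LandscapeObject("object_0001")   # one segmented object as an individual

with onto:
    sync_reasoner()                     # A-Box reasoning (requires a Java runtime)

print(list(onto.individuals()))
```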
Dissemination of Data and Methodology
In Rhineland-Palatinate there is an increasing demand for comprehensive geo-data on biotopes and on the ecological value of areas. Governmental authorities need such data for planning matters. Many administrative tasks have to take ecological issues into account, e.g. local development plans or the granting of building permits. Decision making on the political level needs to be informed by reliable, standardised data, and, last but not least, the requirements arising from EU law have to be fulfilled. This requires common standards, guaranteed by European directives and institutions (NATURA2000, INSPIRE, EC, EEA) and activities (EAGLE).
Besides the conceptual background offered by NATURA2000/EUNIS and EAGLE, it is the INSPIRE directive that sets common technical standards for data exchange. The technical implementation of INSPIRE is still in progress and new developments appear regularly. One project aim is to stay up-to-date with current developments in this context. New improvements are continuously discussed in expert meetings and implemented in the data models. This especially affects the data handling and the possibilities concerning the formalised exchange of workflows and methodology.
Data access via internet
A project website has been implemented for the dissemination of the geoinformation. A web-GIS enables the user to explore the latest state of data production. A download area offers information on project status and developments. INSPIRE-compatible metadata is stored for all data in the system. OGC services offer access to web maps (WMS) as well as to the physical data. The latter are currently available via Atom feeds but will in future be implemented as a Web Feature Service (WFS).
2.6.2 Exchange of data and methodology through "linked data"
One big advantage of the presented approach is the possibility to produce computer-readable and fully reproducible and transferable classification workflows. That means, on the one hand, that they correspond to INSPIRE codelists and can be used to fulfil its technical requirements; on the other hand, the whole classification logic will be made available via the web to give other projects and authorities the possibility to re-use and enhance the developed methodology. This offers further opportunities for a continuously increasing degree of standardisation of geo-data in Europe.
STATE OF PROJECT, CONCLUSIONS AND PERSPECTIVES
In its first year the project has proven that the combined use of workflows developed in previous activities enables the project partners to provide the authorities with full coverage of high-resolution land cover objects derived from official standard databases.
Figure 2: Threshold-based segmentation of aerial image/nDSM. High vegetation already subdivided by multiresolution segmentation, Mannebach, Saar-Mosel region (NATFLO).
Multiresolution segmentation is then applied to further subdivide the components into smaller objects capturing meaningful subunits, e.g. in meadowland or forest. Forested areas and the open landscape are treated with different parameterisations due to different structure and requirements.
Figure 3: Fine-grained subdivision of the image after threshold-based and multiresolution segmentation (NATFLO).
Figure 5: Overview of ontology-based classification. Adapted from Arvor et al. 2013.
2.5.4 Reference Data: Training and Validation
Correct and meaningful state-wide reference data is needed to provide a basis for the data mining approaches and to validate the classification results. Developing a concept for the generation of a nation-wide set of reference areas for a great number of different habitat classes and their associated indicators is an enormous challenge. Therefore the reference data will be generated in two steps. Reference data will be generated making use of a subset of the object geometries produced in this process. The subset is a selection based on auxiliary data taken from the recent biotope field mapping campaign, land survey and other reliable sources. The selection is supposed to cover the full range of habitats and indicator combinations of RLP. The selected objects are to be verified by performing field checks and taking into account recent orthoimagery.
These data are iteratively further developed with respect to the quality of the geometric representation of the landscape and the characterisation of each object. The second version of a full data set was finished at the beginning of 2015, providing highly suitable objects in all areas with high vegetation. Segmentation algorithms delivering object geometries for areas of the open landscape are currently being developed. Skills and methodology in the analysis of time series of image data are being extended, and tests are delivering promising results. The project partners are looking forward to implementing operational workflows for state-wide analyses in the context of Copernicus and the Sentinel missions. Besides data production, the infrastructure and conceptual background for an ontology-based classification environment has been developed. Tests are delivering promising results. The crucial point in this part of the work is the setup of a valid network of training data representing the full range of habitat types and the respective indicators. | 4,804.8 | 2015-04-29T00:00:00.000 | [
"Environmental Science",
"Geography",
"Computer Science"
] |
Structuring Interdigitated Back Contact Solar Cells Using the Enhanced Oxidation Characteristics Under Laser‐Doped Back Surface Field Regions
Interdigitated back contact (IBC) architecture can yield among the highest silicon wafer‐based solar cell conversion efficiencies. Since both polarities are realized on the rear side, there is a definite need for a patterning step. Some of the common patterning techniques involve photolithography, inkjet patterning, and laser ablation. This work introduces a novel patterning technique for structuring the rear side of IBC solar cells using the enhanced oxidation characteristics under the locally laser‐doped n++ back surface field (BSF) regions with high‐phosphorous surface concentrations. Phosphosilicate glass layers deposited via POCl3 diffusion serve as a precursor layer for the formation of local heavily laser‐doped n++ BSF regions. The laser‐doped n++ BSF regions exhibit a 2.6‐fold increase in oxide thickness compared to the nonlaser‐doped n+ BSF regions after undergoing high‐temperature wet thermal oxidation. The utilization of oxide thickness selectivity under laser‐doped and nonlaser‐doped regions serves two purposes in the context of the IBC solar cell: first, patterning the rear side and, second, acting as a masking layer for the subsequent boron diffusion. Proof‐of‐concept solar cells are fabricated using this novel patterning technique with a mean conversion efficiency of 20.41%.
Introduction
In the early 1970s, Schwartz and Lammert developed the first interdigitated back contact (IBC) solar cells. [1] In the nascent stages, the IBC cell design was optimized for concentrator application to cope with the high intensities of incoming energy fluxes and the related high current densities. [2] Due to its inherent advantages, this cell architecture was later adapted for one-sun application. [3] In the IBC cell architecture, contacts for both types of polarities are placed on the nonilluminated side of the solar cell. The most obvious advantage of IBC cells over conventional both-side contact solar cells is the elimination of any optical shading losses caused by the metal fingers and busbars on the front side, allowing the solar cells to boast a higher short-circuit current density J sc. A more comprehensive range of front surface texturing and light trapping schemes can be adopted on the front surface of the IBC structure, as there is no need for heavily doped regions as in the case of both-side contact solar cells. [4] Another considerable advantage is the reduced complexity of cell interconnection inside the module. [5] The design architecture is well suited for mechanically stacked tandem cells with higher-bandgap technologies, like perovskites, in a three-terminal configuration. [6] Research groups and companies worldwide use the IBC architecture to make high-efficiency solar cells because of the abovementioned benefits. In research and development, Kaneka Corporation reported a conversion efficiency of 26.7% for its heterojunction IBC, [7] and ISFH reported a conversion efficiency of 26.1% on its POLO IBC solar cells. [8] At the industrial scale, SunPower reported an efficiency of 25% on the SunPower X-Series technology, [9] and SPIC reported an efficiency above 23.5% on its low-cost bifacial IBC ZEBRA technology. [10] Since both polarities are located on the rear side of the IBC solar cell concept, there is a definite need for a patterning step. Some of the first IBC solar cells developed in the early 1980s and 1990s used photolithography techniques for patterning on the rear side. [11] Photolithography is a well-established technique that uses light to produce minutely patterned thin films of suitable material over a substrate to protect selected regions (mask) during the subsequent etching, deposition, or implantation process. Franklin et al. [12] reported an efficiency of 24.4% on their IBC solar cells, which were patterned using photolithography. The fabrication of this cell had as many as four photolithography steps. Besides photolithography, alternative patterning techniques include inkjet patterning of a resist layer followed by wet chemical etching and aerosol jet printing.
[13,14] All the methods mentioned above require multiple wet bench processing steps and sophisticated equipment, making them not viable for application in the photovoltaic (PV) industry. In recent years, Dullweber et al. [15] showed an elegant way of locally depositing one polarity of the polysilicon (poly-Si) layer using plasma-enhanced chemical vapor deposition (PECVD) through a glass-based shadow mask.
Laser ablation has been one of the most successful approaches for patterning the rear side of IBC solar cells. [10,16] The use of the laser ablation technique for the fabrication of IBC solar cells has been reported by Engelhart et al. [16] and O'Sullivan et al. [17] using a picosecond laser for the ablation of dielectric layers like SiO 2. In addition, Kronz et al. [18] demonstrated SiN x ablation using a nanosecond laser. However, laser ablation can cause unintentional laser-induced damage such as surface melting, heat-affected zones, microcracks, and point defects in the underlying silicon layer, negatively affecting solar cell performance. [19] Hence an additional wet bench step is introduced for laser-induced damage removal before the subsequent high-temperature step.
Dahlinger et al. [20] presented an innovative technique using laser doping to enable local doping of IBC cells without masking layers. [21] The laser doping was used to form the local n++ back surface field (BSF) and a local p++ emitter with high spatial resolution without the necessity of any masking. Alternatively, Franklin et al. [12] and Zieliński et al. [22] used laser doping to form one polarity and laser ablation for the patterning and aligned contact formation using a nanosecond (ns) laser.
In this work, we use the enhanced oxidation rates under locally laser-doped n++ BSF regions for patterning the rear side of IBC solar cells. We perform laser doping on POCl 3 diffused samples using the PSG glass as a precursor layer. Due to the higher oxidation rates under the heavily doped n++ BSF regions, [23,24] upon wet thermal oxidation at 850 °C for 30 min, we were able to grow a thick SiO 2 layer of 125 nm under the laser-doped regions, as compared to a thin SiO 2 layer of 48 nm on the nonlaser-doped regions. The selectivity in the thickness of the SiO 2 layer under lasered and nonlasered regions was used for patterning the rear side of the IBC solar cells. It was also observed that the SiO 2 layer remaining under the laser-doped regions after patterning was sufficient to act as a dopant barrier for the subsequent boron diffusion process, making the processing of the IBC solar cells robust and streamlined.
Deal-Grove Model Simulation Study
Silicon oxide has a profound place in the fabrication of PVs and in the integrated circuit industry. SiO 2 is primarily a passivation layer used to passivate the surface dangling bonds on c-Si. It is also used as an insulator between semiconductor devices and as a masking layer acting as a dopant barrier. [25] Due to its high etch selectivity in alkaline solution compared to c-Si, [26] it is also used in the semiconductor industry for patterning purposes.
SiO 2 is grown in a furnace by supplying oxygen (O 2) to the c-Si surface, where it reacts at high temperatures. Bruce Deal and Andy Grove (of Fairchild Semiconductor) developed the first straightforward kinetic model for SiO 2 growth in the early 1960s. [24] Ho et al. [27] studied the enhanced oxidation effects in heavily doped n++ c-Si substrates. The observed phenomenon is attributed to the shifting of the Fermi level toward the conduction band in the heavily doped n++ c-Si substrate, causing an increase in the equilibrium concentration of point defects or vacancies in the silicon. These point defects act as reaction sites for the chemical reaction converting Si to SiO 2, enhancing the linear rate constant, which is interface-reaction dependent. [28] Hence the oxidation rates are much higher for a heavily doped n++ c-Si substrate than for a lightly doped c-Si substrate.
In this study, simulations were performed to understand the influence of the phosphorous surface concentration N s on the growth dynamics of the SiO 2 layer using the Integrated Circuits and Electronics group Computerized Remedial Education and Mastering (ICECREM) software, which uses the Deal-Grove model. [29] Simulations of wet and dry oxidation were done by varying the substrate doping from 1 × 10 15 to 7 × 10 19 cm −3 for 30 min at 850 °C, as represented in Figure 1.
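For orientation, the Deal-Grove relation evaluated by such simulators can be written out directly; the sketch below implements the analytical solution for the oxide thickness with illustrative placeholder rate constants (not the calibrated values for 850 °C wet oxidation) and mimics the doping enhancement by scaling the linear rate constant.

```python
# Hedged sketch of the Deal-Grove thermal-oxidation model:
#   x_ox^2 + A*x_ox = B*(t + tau)  =>  x_ox(t) = A/2 * (sqrt(1 + (t+tau)/(A^2/(4B))) - 1)
# Rate constants below are illustrative placeholders; doping enhancement is
# mimicked by scaling the linear rate constant B/A with surface concentration.
import numpy as np

def deal_grove_thickness(t_min, A_um, B_um2_min, tau_min=0.0):
    """Deal-Grove oxide thickness (um) after t_min minutes of oxidation."""
    t = t_min + tau_min
    return 0.5 * A_um * (np.sqrt(1.0 + t / (A_um ** 2 / (4.0 * B_um2_min))) - 1.0)

B = 0.002              # parabolic rate constant [um^2/min], placeholder
BA_lowdoped = 0.004    # linear rate constant B/A [um/min] for lightly doped Si, placeholder
for enhancement, label in [(1.0, "n+ (non-lasered)"), (6.0, "n++ (laser-doped)")]:
    A = B / (BA_lowdoped * enhancement)          # enhanced linear rate constant
    d_nm = deal_grove_thickness(30.0, A, B) * 1e3
    print(f"{label:>20}: {d_nm:5.1f} nm after 30 min (illustrative)")
```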
During dry oxidation, the silicon wafer reacts in a pure oxygen gas atmosphere (O 2) at elevated temperatures. Based on the data presented in Figure 1, it is evident that increasing N s results in a sufficient selectivity in SiO 2 thickness during dry oxidation. However, it is important to note that the oxide growth rate is significantly low, rendering it unsuitable for a selective etch back in an acidic solution. Additionally, the oxide retained after patterning is insufficient to serve as a masking layer during the subsequent BBr 3 diffusion. During wet oxidation, the silicon wafer reacts in a water vapor atmosphere (H 2 O) at elevated temperatures. Compared to dry thermal oxidation, much thicker oxides can be formed using wet thermal oxidation. The observed phenomenon is due to the higher solubility of H 2 O in SiO 2 compared to that of the O 2 molecule. [24] Hence the thicker oxides obtained using wet thermal oxidation can be used for our patterning application. In theory, we observe a high selectivity (1:2.8) of SiO 2 thickness between lightly (1 × 10 15 cm −3) and heavily (7 × 10 19 cm −3) doped silicon substrates. Increasing the substrate phosphorus level strongly affects the interface reaction rate. [27]

Influence of Laser Doping Settings on the Sheet Resistance and Oxidation Rates of n++ c-Si

Figure 2a illustrates the impact of the laser pulse energy density H p on the R sh of laser-doped n++ c-Si (left) and on the SiO 2 thickness d ox under the laser-doped regions after wet thermal oxidation (right). The resulting curves can be interpreted as three separate zones, as observed by Hassan et al. [30] for laser doping of crystalline silicon from boron precursor layers.
In zone I (0 J cm −2 < H p < 1 J cm −2), the laser energy is below the melting threshold of Si; thus, doping is not possible. Therefore, we detect no change in the R sh and d ox values in zone I. In zone II (1 J cm −2 < H p < 2 J cm −2), a linear decrease in R sh with increasing H p is seen, which indicates that the number of phosphorous atoms in the doped layer also increases linearly. Interestingly, also in zone II, we observe a linear increase in d ox from 48 nm at 1 J cm −2 to almost 123 nm at 2 J cm −2. This increase in d ox after wet thermal oxidation is attributed to the increased N s. From Figure 2b, it can be observed that N s increases from 4 × 10 18 cm −3 at 1 J cm −2 to 6 × 10 19 cm −3 (more than one order of magnitude) at 2 J cm −2. Hence we can conclude that the decrease in R sh in zone II is mainly attributed to the alteration of the dopant concentration at the surface and only marginally to a deeper junction.
Progressing to zone III (H p > 2 J cm −2), R sh decreases marginally and then saturates at 75 Ω sq −1 with increasing laser H p. In this zone, we observe that the decrease in R sh is dominantly due to the deeper junction, whereas N s gradually starts to decrease with increasing laser H p. d ox after wet thermal oxidation starts to decrease in zone III, and this can be explained by the decreased N s. Indeed, as seen in Figure 2b, we observe a decrease in N s from 6 × 10 19 cm −3 at 2 J cm −2 to 1.5 × 10 19 cm −3 at 2.5 J cm −2. In this zone, we start evaporating the precursor layers and partially evaporating parts of the doped silicon surface, lowering N s at the silicon interface. A similar observation was made by Hassan et al. [30] The experimentally measured d ox after laser doping and wet thermal oxidation were compared to the simulated wet thermal oxidation d ox and are depicted in Figure 1 with open red boxes. We observe that all the experimentally measured d ox are lower than the simulated d ox. One explanation for the observed discrepancy is that the phosphorous active concentration doping profile differs between experiment and simulation. All the simulated d ox are calculated assuming a uniform rectangular doping profile, while the experimental laser doping has a complementary error function doping profile. During oxidation, silicon is partially consumed to form SiO 2. According to the literature, 54% of the SiO 2 grows on the c-Si substrate, and the remaining 46% grows below the original c-Si surface, thereby partially consuming Si during oxidation. [24] Hence, for a very shallow laser-doped profile with a steep decrease in the dopant concentration (like 1 J cm −2), we observe a much higher deviation between simulated and measured d ox. In comparison, for the deeper laser-doped profile with a more uniform doping profile (like 2.5 J cm −2), we observe good agreement with the simulated values.
The Process Sequence of IBC Solar Cells Using the Novel Patterning Technique
The previous section established that we can increase the phosphorous concentration N s in localized regions by fine-tuning the laser doping parameters. The enhanced oxidation rates under these selectively laser-doped n++ BSF regions can be used for patterning IBC solar cells. To investigate the feasibility of this patterning approach, a process sequence was formulated and is depicted in Figure 3b.
The phosphorous-doped Cz-Si wafers used for solar cell fabrication were 180 ± 10 μm thick with a base resistivity (ρ b) = 4 ± 0.5 Ω cm and M2 size. The process sequence starts with saw damage removal and Piranha chemical cleaning. Conventional tube diffusion at atmospheric pressure using a POCl 3 liquid precursor was used to deposit a phosphosilicate glass (PSG) layer on the c-Si substrate, which is later used as a precursor for laser doping. The best laser doping settings obtained from the previous section were used to selectively laser dope 33% of the area on the back side of the wafer, forming the 75 Ω sq −1 n++ back surface field (BSF) region. The precursor PSG layer is removed in an acidic HF bath before wet thermal oxidation at 850 °C for 30 min. After the high-temperature step, we grow a thick SiO 2 under the BSF regions and a thin SiO 2 under the nonlaser-doped regions. The SiO 2 is controllably etched back in the HF solution in the subsequent etching step. The process is interrupted when the thin SiO 2 layer under the nonlaser-doped regions is completely removed. The samples then undergo a texturing step during which the nonlasered regions on the rear side and the complete front side get textured, whereas under the laser-doped BSF regions the thick SiO 2 acts as an etch stop. A standard tube diffusion system at atmospheric pressure using a BBr 3 liquid precursor was used to form the 150 Ω sq −1 p+ emitter on the rear and the 150 Ω sq −1 p+ front floating emitter (FFE) on the front. The thick SiO 2 under the laser-doped regions prevents the boron from reaching the BSF interface and acts as a self-masking layer. The BBr 3 drive-in phase is done in a partial oxygen environment, thereby growing a homogeneous in situ thermal SiO 2 at the Si interface. In the paper by Mihailetchi et al., [31] we demonstrate that this in situ-grown thermal SiO 2 can be used as a buffer layer to etch back the BSG layer entirely in an HF wet bench step due to the selectivity in the etching rates of the dopant glass layer and the in situ-grown thermal glass layer. The remaining in situ thermal SiO 2 was capped with a PECVD SiN x to passivate both the p+ and n++ polarities. [32] The metallization of these solar cells was achieved using our best-known method of screen printing and a firing-through process from the ZEBRA technology. [5] The schematic representation of the IBC solar cells fabricated using the novel patterning sequence is depicted in Figure 3a.
Optimization of the Process Sequence and Solar Cell Results
Successful implementation of this process sequence to fabricate solar cells critically depends on the two key steps after wet thermal oxidation, during which the SiO 2 under the nonlasered regions has to be completely removed. At the same time, under the laser-doped regions, one has to retain a sufficient SiO 2 thickness to withstand the texturing step and the subsequent BBr 3 diffusion. Hence these two processing steps are studied in detail.
HF Etching of SiO 2 and KOH Texturing
The goal during the etching step in the acidic HF solution is to completely etch the 48 nm-thick SiO 2 under the nonlaser-doped regions and still retain a thick-enough SiO 2 under the laser-doped regions to act as a masking layer for the subsequent texturing and BBr 3 diffusion processes. Since we have a difference in the oxidation rates under lasered and nonlasered regions for the same wet thermal oxidation duration, the etching rates of the SiO 2 in both regions were tested. To investigate the etch rate of the different SiO 2 layers, the wafers were immersed in a 2-vol% HF acid solution for a defined period, followed by a thickness measurement. The slope of the thickness versus the etching time was used to determine the etching rate. The etching rate of the laser-doped regions was determined to be 0.17 nm s −1, whereas that of the nonlaser-doped regions was 0.16 nm s −1, which is quite comparable. The obtained SiO 2 etching rates were comparable with Spierings et al. [25] at an etchant concentration of 2 vol%. Using the knowledge of the etching rates, the samples underwent wet bench processing in an HF bath for 420 s, during which the SiO 2 under the nonlasered part was completely etched while 75 nm-thick SiO 2 was retained under the laser-doped n++ BSF regions.
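A minimal sketch of the etch-rate determination from the thickness-versus-time slope is shown below; the thickness values are illustrative, not the measured data.

```python
# Minimal sketch: determining the SiO2 etch rate in 2-vol% HF from a series of
# thickness measurements (illustrative numbers standing in for the measured data).
import numpy as np

t_s = np.array([0, 60, 120, 180, 240])                  # etching time [s]
d_nm = np.array([125.0, 114.8, 104.7, 94.9, 84.6])       # remaining oxide [nm]

slope, intercept = np.polyfit(t_s, d_nm, 1)              # linear fit of thickness vs time
print(f"etch rate ~ {abs(slope):.2f} nm/s")               # ~0.17 nm/s in this toy example
```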
The subsequent texturing step takes place in a batch-type RENA wet bench. The etching rate of thermally grown SiO 2 is 200-400 times slower than that of c-Si in an alkaline texturing bath at 80 °C. [25] Hence, the nonlasered regions on the rear and the complete front side are textured, leaving the laser-doped n++ BSF region protected from texturing by the SiO 2 etch stop layer. The samples are quickly cleaned in the HF solution during the standard batch-type RENA wet bench texturing. The texturing step removed ≈3.5 μm of c-Si under the nonlaser-doped regions. In contrast, only 20 nm of SiO 2 under the laser-doped n++ BSF regions were etched. Hence, during the texturing process, using the self-masked SiO 2 under the laser-doped BSF regions, we pattern the rear side and texture the front side of the IBC solar cells.
BBr 3 Diffusion and BSG Thinning
The BBr 3 diffusion step was used to form the p+ emitter on the rear as well as on the front side (FFE) of the solar cells. During BBr 3 diffusion, we wanted to investigate whether the 55 nm-thick SiO 2 retained under the laser-doped n++ BSF regions after texturing was sufficient to act as a masking layer. To study the SiO 2 thickness needed to function as a masking layer for our BBr 3 tube diffusion recipe, we used the SiO 2 thickness control samples specified in the Experimental Section.
Figure 4a shows the results of R sh as a function of d ox, and Figure 4b shows the boron-active doping concentration profile measured using ECV on samples with different d ox masking layer thicknesses that underwent BBr 3 diffusion. An eddy current sensor was used to measure the layer stack's sheet resistance (i.e., the parallel summation of the emitter and base R sh). The as-cut bare wafer used in this study was a p-type wafer with a base resistivity = 5 ± 0.5 Ω cm, corresponding to a doping concentration = 2.5 × 10 15 cm −3 and R sh = 350 ± 10 Ω sq −1.
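Because the sensor measures the parallel combination of the emitter and base sheet resistances, the emitter contribution can be recovered from the measured stack value; a minimal sketch with illustrative numbers follows.

```python
# Minimal sketch: extracting the emitter sheet resistance from the measured
# parallel stack (eddy-current) value and the known base sheet resistance.
def emitter_rsh(r_total, r_base):
    """Emitter R_sh from 1/R_total = 1/R_emitter + 1/R_base."""
    return 1.0 / (1.0 / r_total - 1.0 / r_base)

r_base = 350.0                      # undiffused wafer [ohm/sq]
for r_total in (60.0, 150.0, 250.0):
    print(f"stack {r_total:5.1f} -> emitter {emitter_rsh(r_total, r_base):6.1f} ohm/sq")
```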
For samples without SiO 2, there is no blocking layer; hence, a heavily doped p+ region with a comparatively high boron N s = 2 × 10 19 cm −3 and a deep junction depth D p = 0.6 μm was formed. Due to the heavy doping of the emitter layers, the R sh of the layer stack decreases to 60 Ω sq −1, as shown in Figure 4a.
For 1 nm < d ox < 10 nm, the total R sh of the layers increases linearly from 60 to 250 Ω sq −1 with increasing d ox. In this regime, the dopant penetrates through the SiO 2, forming a heavily doped emitter region. The increase in the R sh values with increasing d ox indicates that with a thicker oxide we tend to block boron from reaching the c-Si interface, though not completely. For this thickness range, as shown in Figure 4b, N s decreased from 2 × 10 19 cm −3 at 0 nm of SiO 2 to 9 × 10 18 cm −3 at 10 nm of SiO 2, with no significant change in the junction depths.
For d ox in the range between 10 and 20 nm, the total R sh increases from 250 to 350 Ω sq −1. After a further increase in the thickness of the SiO 2, the total R sh saturates at 350 Ω sq −1, which is then limited by the R sh of the undiffused wafer. Another important observation from Figure 4b concerns the measurement with a SiO 2 thickness = 15 nm: although the dopant penetrates through the SiO 2 layer, the oxide decreases the overall dopant N s by three orders of magnitude to almost 2 × 10 16 cm −3 with a very shallow doping profile. However, there is no detectable boron doping profile for the samples with a SiO 2 thickness of more than 20 nm, indicating that this SiO 2 thickness is sufficient to completely block the BBr 3 diffusion, as also confirmed by the R sh measurements shown in Figure 4a. Hence, we can conclude that the 55 nm-thick SiO 2 under the laser-doped n++ BSF region is sufficiently thick to block the boron atoms from reaching the BSF interface.
Solar Cell Results
The proof-of-concept solar cells were fabricated using the process sequence depicted in Figure 3b. The current-voltage (I-V) parameters were measured using a HALM flash tester and are listed in Table 1. A median cell efficiency of 20.09%, with a maximum efficiency of 20.41%, was obtained. The results obtained from the injection-dependent lifetime measurements conducted using quasi-steady-state photoconductance on symmetrical lifetime samples revealed a lower bulk lifetime (τ bulk) under the phosphorous laser-doped BSF, as compared to the boron emitter regions with comparable emitter saturation current density (J 0e) values. The observed decrease in bulk lifetime in the phosphorous laser-doped BSF sections is speculated to be due to two potential factors: laser-induced bulk damage during the laser doping step and the formation of oxidation-induced stacking faults in the subsequent wet thermal oxidation step. [19,33] Although the achieved cell efficiency is not particularly high, as the fabrication process was not optimized, it demonstrates the functionality of the novel patterning method for fabricating IBC solar cells. The utilization of the increased oxidation characteristics of n++ Si layers in the novel patterning technique is not limited solely to diffused junctions, as demonstrated in this study. It may also be applied to state-of-the-art passivating contact IBC solar cells.
There are several fundamental distinctions that must be considered when implementing this method on poly-Si layers. 1) Grain boundaries play a crucial role in facilitating the diffusion of phosphorous atoms, hence enabling a high dopant diffusivity in poly-Si compared to crystalline silicon. [34] Hence, the phosphorous diffusion should be optimized such that a low phosphorous concentration (<5 × 10 18 cm −3) is incorporated in the poly-Si layer during POCl 3 diffusion; then, using laser doping, we can selectively increase the phosphorous concentration (>7 × 10 19 cm −3) so that sufficient selectivity in the oxide thickness is obtained after wet thermal oxidation. 2) The anticipated threshold fluence required for laser doping is expected to be lower for poly-Si layers in comparison to the c-Si substrate, owing to the higher absorption coefficient exhibited by the former. [35] Furthermore, it is essential to carefully adjust the laser doping settings in order to avoid any potential damage to the interfacial oxide layer and hence maintain the desired passivation quality. [36] 3) During oxidation, due to the stress induced by the grain boundaries, the oxidation rates of poly-Si are initially faster than those of crystalline silicon. [37] Also, during oxidation, silicon is partially consumed to form SiO 2. As a result, it is necessary to start with thicker poly-Si layers in order to end up with poly-Si layer thicknesses that can be contacted without the metal paste spiking through. [38]
Conclusion
This article presents a novel patterning technique for fabricating IBC solar cells. We have demonstrated that, using the enhanced oxidation rates under the local laser-doped n++ BSF regions, we can pattern the rear side of the IBC solar cells and mask the laser-doped n++ BSF regions from the subsequent BBr 3 diffusion.
We have investigated the influence of the laser pulse energy density on the sheet resistance of the laser-doped n++ c-Si and on the SiO 2 thickness d ox under the laser-doped regions after wet thermal oxidation. It was concluded that, upon fine-tuning H p, we could locally enhance the phosphorous N s by more than one order of magnitude. Wet thermal oxidation then yields thicker and thinner SiO 2 under lasered and nonlasered regions, respectively, with a 2.6-fold selectivity under the laser-doped regions. This property can be used to advantage if the SiO 2 is then controllably etched back in the HF solution. After removing the thin SiO 2 layer beneath the nonlaser-doped regions, the etching process is interrupted. The remaining SiO 2 layer under the laser-doped regions can be used as an etch stop layer protecting the BSF regions during the texturing step, during which the rear side is patterned, and subsequently acts as a dopant barrier for the BBr 3 diffusion step, during which the emitter is formed on the front and rear side. The proof-of-concept solar cells were fabricated with the novel patterning technique and have demonstrated a conversion efficiency of 20.41%.
Experimental Section
All the process equipment used was industrial or industrial like, such as Centrotherm tube diffusion furnaces for BBr 3 diffusion, POCl 3 diffusion, and thermal oxidation.Hydrofluoric acid etching was performed in a 2% HF solution, and alkaline etching was carried out on a batch-type RENA wet bench.Local laser treatment was performed in industrially feasible high-throughput tools.The SiN x layers were deposited using the Centrotherm PECVD system and fired in a Centrotherm fast-firing furnace.
Sheet Resistance (R sh) Samples: The R sh samples were used to investigate the influence of different laser parameters on the electrical properties and on the subsequent oxide growth rates under the laser-doped areas. We used the M2 wafer format, p-type Czochralski wafers with a thickness = 180 ± 10 μm and a base resistivity (ρ) = 1 ± 0.5 Ω cm. The samples underwent saw damage etching and Piranha cleaning before the high-temperature POCl 3 tube diffusion. For this study, the POCl 3 recipe was optimized to have only a deposition phase without any high-temperature drive-in step. Due to the lack of the drive-in phase in the POCl 3 diffusion, most of the phosphorous was present in the PSG layer. All samples were then laser treated by a frequency-doubled Nd:YVO 4 nanosecond (ns) pulse laser with a wavelength = 532 nm. A pulse duration (τ p) = 60-160 ns was used to create the laser-doped regions with various laser settings. The laser had a flat-top profile with a rectangular spot size = 300 × 600 μm 2. The samples were cleaned in a 2-vol% HF solution to remove the precursor glass layer before the subsequent high-temperature wet thermal oxidation process for 30 min at 850 °C. The thickness of the SiO 2 layer for the different laser doping settings was measured with an SE800-PV ellipsometer. The SiO 2 was etched in a 2-vol% HF acid solution at room temperature to evaluate the electrical characteristics of the laser-doped regions, namely R sh and the doping profile. GP Solar's 4Tests PRO was used to measure R sh using a four-point probe method, and a wafer profiler electrochemical capacitance-voltage (ECV) tool was used to measure the active dopant profile.
Silicon Oxide (SiO 2) Thickness Control Samples: The SiO 2 thickness control samples consisted of flat c-Si wafers with different SiO 2 thicknesses ranging from 5 to 65 nm. These samples were used to investigate the SiO 2 thickness required to function as a diffusion barrier against boron diffusion in the BBr 3 tube diffusion step. For these samples, we used p-type Cz-Si wafers with a base resistivity = 5 ± 0.5 Ω cm and a nominal thickness = 180 μm. The samples underwent saw damage etching and Piranha cleaning before the high-temperature wet thermal oxidation step. The wet thermal oxidation was done for 45 min at 950 °C to grow 185 ± 2 nm of SiO 2 on the wafers. To investigate the etch rate of the thermally grown SiO 2, the wafers were immersed in a 2-vol% HF acid solution for a defined period, followed by a thickness measurement. The etching rate was determined from the slope of the thickness versus the etching time. Using the knowledge of the etching rate, each wafer was then separately thinned down to a specific thickness ranging from 5 nm to 65 nm. These samples with different SiO 2 thicknesses were processed in the BBr 3 tube diffusion. As for the sheet resistance control samples, the glass layers on the c-Si substrate were completely removed in a 2-vol% HF solution to monitor the electrical properties.
Figure 1. Influence of the phosphorous surface dopant concentration (N s) on the oxide thickness (d ox) for wet and dry thermal oxidation simulated using the Deal-Grove model (solid squares), and the actual measured d ox after wet thermal oxidation at different laser pulse energy densities H p as a function of the corresponding N s (open squares).
Figure 2. Electrical characteristics of the n++ laser-doped regions. a) Influence of the laser pulse energy density H p on the sheet resistance R sh and on the thickness d ox of the SiO 2 layer after wet thermal oxidation, and b) active dopant concentration profiles of the n++ regions, determined from ECV measurements, for different H p.
Figure 3. a) Schematic cross-sectional view of the resulting IBC cell and b) fabrication steps of IBC solar cells using the novel patterning technique.
Figure 4. a) Influence of the masking SiO 2 thickness d ox on the sheet resistance R sh of the wafers after BBr 3 diffusion. b) Influence of the masking SiO 2 thickness d ox on the active boron carrier concentration profiles.
Table 1. Summary of the current-voltage (I-V) characteristics, namely open-circuit voltage (V oc), short-circuit current (J sc), fill factor (FF), and conversion efficiency (η) of the IBC solar cells. | 7,471.8 | 2024-01-04T00:00:00.000 | [
"Engineering",
"Physics",
"Materials Science"
] |
General Trends in Core–Shell Preferences for Bimetallic Nanoparticles
Surface segregation phenomena dictate core–shell preference of bimetallic nanoparticles and thus play a crucial role in the nanoparticle synthesis and applications. Although it is generally agreed that surface segregation depends on the constituent materials’ physical properties, a comprehensive picture of the phenomena on the nanoscale is not yet complete. Here we use a combination of molecular dynamics (MD) and Monte Carlo (MC) simulations on 45 bimetallic combinations to determine the general trend on the core–shell preference and the effects of size and composition. From the extensive studies over sizes and compositions, we find that the surface segregation and degree of the core–shell tendency of the bimetallic combinations depend on the sufficiency or scarcity of the surface-preferring material. Principal component analysis (PCA) and linear discriminant analysis (LDA) on the molecular dynamics simulations results reveal that cohesive energy and Wigner–Seitz radius are the two primary factors that have an “additive” effect on the segregation level and core–shell preference in the bimetallic nanoparticles studied. When the element with the higher cohesive energy also has the larger Wigner–Seitz radius, its core preference decreases, and thus this combination forms less segregated structures than what one would expect from the cohesive energy difference alone. Highly segregated structures (highly segregated core–shell or Janus-like) are expected to form when both the relative cohesive energy difference is greater than ∼20%, and the relative Wigner–Seitz radius difference is greater than ∼4%. Practical guides for predicting core–shell preference and degree of segregation level are presented.
0.2(A): 0.8(B) composition
The simulation for the Ag(0.2)-Au(0.8) nanoparticle of 6 nm diameter, which is in the low core-shell category, showed that the patchiness is decreased. That is, the Ag surface occupancy ratio improved from 38 % to 47 %. Similarly, in the Au(0.2):Pt(0.8) nanoparticle of 6 nm diameter, which is categorized as Janus-like, the Au surface occupancy ratio improved from 59 % to 97 %. In AuPt, the surface patchiness has disappeared completely. This confirms that the patchy surface seen in the 3 nm bimetallic nanoparticles with 0.2(A):0.8(B) composition is related to the number of atoms, not to the particular composition.

Nanoparticle size effect on core-shell preference

In all types of structures, patchiness of the surface is observed. Due to the high surface-to-volume ratio for the nanoparticle size simulated here, there are relatively few surface-preferring atoms available for forming the surface. Thus, the core purity with the core-preferring material is higher at this particle size.
(B) composition
For the larger NPs with diameters of 6 nm and 10 nm, eight bimetallic combinations were simulated due to the computational cost. Pt-Au and Ni-Ag (Janus-like), Co-Au and Pd-Ag (high CS), Pd-Au, Cu-Au, and Co-Pd (low CS), and Fe-Au (mixed) were chosen as representatives of each structure group (SI Figure 4). Due to the relatively lower surface-to-volume ratio of the larger particles, the surface occupancy ratios by the surface-preferring materials in these bigger particles are improved compared to the small particles (1 nm) (SI Figure 5).
Figure S5. Surface occupancy ratio of the surface-preferring material in bimetallic nanoparticles as a function of the particle size. The surface occupancy ratios by the surface-preferring materials in these bigger particles are increased compared to the small particles (1 nm).
Principal component analysis (PCA)
After running the PCA algorithm on our data, we found from the cumulative variance of the eight-component analysis that the first two principal components and the first three components explained 74 % and 93 % of the total variation, respectively. For all eight components, we obtain PC1: 40.8 %, PC2: 33.4 %, PC3: 18.4 %, PC4: 4.2 %, PC5: 2.7 %, PC6: 0.35 %, PC7: 0.17 %, and PC8: 0.03 %. Therefore, we consider the first three principal components in our further analysis.
In PCA, the optimal number of principal components (PCs) is found by selecting the minimal set of PCs having significantly larger eigenvalues than the remaining PCs. The scree plot shown in SI Figure 6 suggests that the first three principal components adequately explain the variability of the data. Therefore, we used the first three principal components, whose eigenvalues are greater than approximately 1.
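A small illustration of this selection criterion (explained variance and the eigenvalue-greater-than-one rule) is sketched below with random stand-in data; it does not reproduce the actual descriptor matrix of the study.

```python
# Hedged sketch: choosing the number of principal components from the explained
# variance and the Kaiser (eigenvalue > 1) rule.  The 45 x 8 matrix is a random
# stand-in for the per-combination descriptors used in the study.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(45, 8))                 # 45 bimetallic combinations x 8 features
X = (X - X.mean(axis=0)) / X.std(axis=0)     # standardise before PCA

pca = PCA(n_components=8).fit(X)
ratios = pca.explained_variance_ratio_
print("explained variance per PC:", np.round(ratios, 3))
print("cumulative:", np.round(np.cumsum(ratios), 3))
kept = int(np.sum(pca.explained_variance_ > 1.0))   # eigenvalue > 1 criterion
print("components kept by eigenvalue > 1:", kept)
```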
Relative Wigner-Seitz radius difference vs cohesive energy difference

Figure S9. MD/MC results plotted in the two-dimensional space spanned only by the relative Wigner-Seitz radius difference and the relative cohesive energy difference.

A simulation using a BCC Fe interatomic potential

Figure S10. The preferred structure for Fe-Cu obtained using MD simulation with an interatomic potential for BCC Fe. (a) Full view (left) and cross-sectional view (right) of the bimetallic NP with strongly core-preferring Fe (red) and surface-preferring Cu (orange). The structure is highly segregated and Janus-like. (b) The crystalline structure of the Janus-like Fe-Cu bimetallic NP.

Surface atom identification using alpha-shape

Using the alpha-shape method, the surface atoms of each system are identified. SI Figure 11 shows an example of the results. Here the CuAu system is analysed using an alpha value of 2.45 (i.e., 0.6 times the lattice constant of Au), which enables the surface atoms to be distinguished from the core atoms.

Figure S11. Identified surface atoms of the CuAu system. Atoms are color coded: surface Cu atoms, white; surface Au atoms, black; Cu core atoms, orange; Au core atoms, yellow.
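For illustration only, the snippet below flags low-coordination atoms as surface atoms using a neighbour count within a cutoff; this is a simpler stand-in for, not an implementation of, the alpha-shape construction used in the study, and the cutoff and coordination threshold are assumed values.

```python
# Hedged sketch: a coordination-number proxy for surface-atom identification.
# Atoms with fewer neighbours than a bulk-like coordination are flagged as surface.
import numpy as np
from scipy.spatial import cKDTree

def surface_mask(positions, cutoff=3.2, max_coordination=10):
    """Boolean mask of atoms whose coordination number falls below the bulk value."""
    tree = cKDTree(positions)
    coordination = np.array(
        [len(tree.query_ball_point(p, cutoff)) - 1 for p in positions]  # minus self
    )
    return coordination < max_coordination

if __name__ == "__main__":
    pts = np.random.rand(500, 3) * 30.0          # toy atomic positions [Angstrom]
    surf = surface_mask(pts)
    print(f"{surf.sum()} of {len(pts)} atoms flagged as surface")
```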
Effects of cooling rate on the surface segregation

The effects of the cooling rate on the degree of core-shell tendency were investigated by performing MD simulations at different cooling rates (0.013 K/ps and 1.3 K/ps). The simulations were carried out for AgPd (high core-shell) and CoNi (mixed). The results indicate that the segregation level is consistent across the simulations performed at the different cooling rates.

Figure S12. Effects of cooling rate on the degree of core-shell tendency for AgPd and CoNi. In AgPd, Ag is the surface-preferring material with a surface occupancy ratio of approximately 95 %. In CoNi, the surface composition is mixed, as the occupancy ratio of neither element is above 70 %.
Effects of final temperature
Effects of the final temperature on the degree of core-shell tendency were investigated by performing the simulations with different final temperatures: 150 K, 300 K, and 450 K for AgPd and CoNi. Note that the cooling rate was set to 0.13 K/ps for all simulations. The segregation level is found to be consistent with negligible fluctuations in the surface and core occupancy ratios. | 1,479.8 | 2021-04-23T00:00:00.000 | [
"Materials Science",
"Chemistry"
] |
Kinetics of fatigue cracks in the rotor blades of the Mi-171 helicopter
The rotor blades of the helicopter are operated on the principle of ensuring operability within the assigned resource, which is 2000 flight hours, after which they are subject to decommissioning. However, by the end of this period, as a rule, fatigue damage does not have time to propagate in the blade, so the blade could continue to be operated. Thus, the study of the process of nucleation and propagation of fatigue cracks in the blades will allow determining the period during which the presence of cracks does not threaten the performance of the construction. This will make it possible to suggest an increase in the assigned resource of the blade, which, in turn, will lead to cost savings. The problem addressed in this work is to study the propagation of fatigue cracks in the rotor blades of a helicopter in order to make recommendations on a possible increase in their assigned resource. Research objectives: development of a methodology for full-scale testing of blades; determination of their endurance limit; development of a methodology for processing the results of full-scale tests (videos of the growth of fatigue cracks); assessment of the possibility of extending the assigned resource of the blades. Experimental methods of fracture mechanics and statistical methods for processing data obtained during experiments were used as research methods. As a result, it was found that the appearance and propagation of surface cracks in the blades with the test base of N = 1.6·10^7 cycles begin after the stresses exceed the level of 76.94 MPa. A fatigue crack in the blades propagates to failure within 150…170 hours, while the subcritical propagation of the crack lasts 130…150 hours. The period of stable, slow propagation of cracks can be proposed for inclusion in the assigned resource of the blade.
Introduction
Most parts of machines are exposed during operation to time-varying stresses. If the level of these stresses exceeds a certain limit, then microdamage begins to form and accumulate in the material, leading to the occurrence of submicroscopic cracks. These cracks gradually grow and merge, forming a macroscopic crack with a length of 0.1 to 0.5 mm. Stresses are concentrated at the crack front, which facilitates its further propagation. The propagation of a crack gradually weakens the cross section of a part, which leads to its sudden destruction and can be associated with accidents and severe consequences. The study of the kinetic behavior of cracks served as the basis for the work of such scientists as A. A. Shanyavskiy [1][2][3], V. S. Bondar [4], Yu. G. Matvienko [5][6], V. V. Moskvichyov, N. A. Makhutov [7][8][9][10][11], V. N. Shlyannikov [12] and others; a study of crack propagation in the rotor blades of helicopters was carried out by A. A. Shanyavskiy [3] on the basis of sample testing. To date, there is not enough scientific work devoted to the study of the kinetics of cracks based on full-scale testing of machine parts. This paper presents such a study using the example of the rotor blade of the Mi-171 helicopter, carried out on the basis of full-scale tests of the spars of these blades.
Formulation of the problem
There are three basic principles for ensuring the reliability of constructions operating under cyclic loads:
• the principle of ensuring operability within the assigned resource;
• the principle of ensuring operational survivability;
• the principle of operation according to technical condition.
The rotor blades of a helicopter are operated according to the first principle. Their assigned resource is 2000 flight hours, after which they are subject to decommissioning. However, by the end of this period, as a rule, fatigue damage does not have time to propagate in the blade, so the blade could continue to be operated. Thus, the study of the process of nucleation and propagation of fatigue cracks in the blades will allow determining the period during which the presence of cracks does not threaten the performance of the construction. This will make it possible to suggest an increase in the assigned resource of the blade, which, in turn, will lead to cost savings. Thus, the problem addressed in this work is to study the propagation of fatigue cracks in the rotor blades of a helicopter in order to make recommendations on a possible increase in their assigned resource.
Experimental results
The object of study is the rotor blade of the Mi-171 helicopter (8AT-2710-00) (figures 1, 2). The main power element of the blade - the spar - is made of aluminum alloy AVT-1, the mechanical characteristics of which are shown in table 1 [13]. Table 1. Mechanical characteristics of aluminum alloy AVT-1. The blade tests were carried out on a specially designed bench (figure 3). The scheme of the control and measuring system used in the tests is shown in figure 4. Cracks in the blade most often occur in the range of its relative radii R = 0.5...0.7 (compartments 11...14) (figure 5(a)). As a rule, they arise at the lower radius of the rear wall and at the lower inner and upper inner surfaces of the spar (positions 1, 2 and 3 in figure 5(b)). To determine the average endurance limit, the interval between its extreme values (68...80 MPa) was divided into 6 equal sections of 2 MPa, and the average endurance limit σ-1 and its standard deviation Sσ-1 were calculated. Next, the resulting record was processed on a computer using specially created software. Each frame of the video was binarized, after which a binary matrix was created on its basis and subjected to element-wise processing. Based on the results of processing all the recording frames, a kinetic graph of the crack propagation is plotted (figure 11). Figure 11. Kinetic graph of the crack propagation (horizontal axis: crack growth time (h); vertical axis: crack growth rate (mm h-1)).
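A hedged sketch of the frame-by-frame processing described above is given below; the video file name, the Otsu binarisation and the pixel-to-millimetre scale are assumptions standing in for the specially created software, and the crack length is estimated crudely from the horizontal extent of the binarised pixels.

```python
# Hedged sketch: binarise each frame of the crack-growth recording and estimate
# a crack length per frame.  File name and calibration are hypothetical.
import cv2
import numpy as np

MM_PER_PX = 0.05                             # pixel-to-millimetre scale, assumed
cap = cv2.VideoCapture("blade_crack.avi")    # hypothetical recording
lengths = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    ys, xs = np.nonzero(binary)              # pixels of the binary matrix marked as crack
    lengths.append(0.0 if xs.size == 0 else (xs.max() - xs.min()) * MM_PER_PX)
cap.release()
if lengths:
    print(f"{len(lengths)} frames processed, final crack span ~ {lengths[-1]:.1f} mm")
else:
    print("no frames read")
```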
After tracing the graphs in the Mathcad package, the obtained data are interpolated by a cubic polynomial (1) describing the dependence of the crack growth rate (mm h-1) on its growth time (h) (figure 12). Figure 12 shows a graph of the dependence of the crack growth rate on the time of its propagation, from which it can be seen that rapid propagation begins after 140 hours of crack growth.
The critical crack length is determined by integrating polynomial (1) over time with an upper limit of 140 hours; we obtain a value of 47.5 mm. During the experiments, it was noted that rapid growth of a crack begins when its length reaches 45...50 mm.
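The post-processing can be illustrated as follows: a cubic polynomial is fitted to the crack growth rate and integrated up to 140 h to estimate the crack length at the onset of rapid growth; the rate data in the sketch are illustrative, not the measured bench-test values.

```python
# Minimal sketch: fit a cubic polynomial v(t) [mm/h] to crack growth-rate data and
# integrate it up to 140 h.  The data below are toy values for illustration only.
import numpy as np

t_h = np.linspace(0, 150, 31)                 # crack growth time [h]
v_mm_h = 0.05 + 1.2e-7 * t_h**3               # toy growth-rate data [mm/h]

coeffs = np.polyfit(t_h, v_mm_h, 3)           # cubic polynomial, cf. eq. (1)
length_poly = np.polyint(coeffs)              # antiderivative -> crack length a(t)
a_140 = np.polyval(length_poly, 140.0) - np.polyval(length_poly, 0.0)
print(f"crack length after 140 h ~ {a_140:.1f} mm (illustrative)")
```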
Discussion of the results
Scientific results obtained during the study:
• The endurance limit of the spars of the rotor blades of the Mi-171 helicopter is 56 MPa for the test base N = 5.1·10^7 cycles (the method of plotting the Wöhler curve) and 76.94 MPa for the test base N = 1.6·10^7 cycles (the method of plotting the endurance limit distribution curve). | 1,631 | 2020-01-01T00:00:00.000 | [
"Materials Science"
] |
Pannexin 3 regulates proliferation and differentiation of odontoblasts via its hemichannel activities
Highly coordinated regulation of cell proliferation and differentiation contributes to the formation of functionally shaped and sized teeth; however, the mechanism underlying the switch from cell cycle exit to cell differentiation during odontogenesis is poorly understood. Recently, we identified pannexin 3 (Panx3) as a member of the pannexin gap junction protein family from tooth germs. The expression of Panx3 was predominately localized in preodontoblasts that arise from dental papilla cells and can differentiate into dentin-secreting odontoblasts. Panx3 also co-localized with p21, a cyclin-dependent kinase inhibitor protein, in preodontoblasts. Panx3 was expressed in primary dental mesenchymal cells and in the mDP dental mesenchymal cell line. Both Panx3 and p21 were induced during the differentiation of mDP cells. Overexpression of Panx3 in mDP cells reduced cell proliferation via up-regulation of p21, but not of p27, and promoted the Bone morphogenetic protein 2 (BMP2)-induced phosphorylation of Smad1/5/8 and the expression of dentin sialophosphoprotein (Dspp), a marker of differentiated odontoblasts. Furthermore, Panx3 released intracellular ATP into the extracellular space through its hemichannel and induced the phosphorylation of AMP-activated protein kinase (AMPK). 5-Aminoimidazole-4-carboxamide-ribonucleoside (AICAR), an activator of AMPK, reduced mDP cell proliferation and induced p21 expression. Conversely, knockdown of endogenous Panx3 by siRNA inhibited AMPK phosphorylation, p21 expression, and the phosphorylation of Smad1/5/8 even in the presence of BMP2. Taken together, our results suggest that Panx3 modulates intracellular ATP levels, resulting in the inhibition of odontoblast proliferation through the AMPK/p21 signaling pathway and promotion of cell differentiation by the BMP/Smad signaling pathway.
Introduction
observed in differentiating odontoblasts [22], and the expression of Cx43 was increased in differentiated odontoblasts [22,23]. These observations suggest that gap junctions between odontoblasts coordinate cellular activity. However, the expression and physiological function of Panx3 in tooth development have not been clearly elucidated.
In this study, we demonstrated that Panx3 is expressed in preodontoblasts and regulates preodontoblast proliferation and differentiation. The Panx3 ATP hemichannel regulates the AMP-activated protein kinase (AMPK) signaling pathway and induces cell cycle exit through the upregulation of p21 expression. Thus, Panx3 plays important roles in the regulation of the transition stage from proliferation to differentiation in odontoblasts.
In situ hybridization
Digoxigenin-11-UTP-labeled single-stranded antisense RNA probes for Panx3 and Dspp were prepared using the DIG RNA labeling kit (Roche Applied Science) according to the manufacturer's instructions. In situ hybridization of the tissue sections was performed according to the protocol provided with the Link-Label ISH Core Kit II (BioGenex). Frozen tissue sections were obtained from postnatal day 1 (P1) mouse embryo heads containing molars and incisors and placed on RNase-free glass slides. The frozen sections were dried for 10 min at room temperature and incubated at 37˚C for 30 min then treated with 10 μg/mL proteinase K at 37˚C for 30 min. Hybridization was performed at 37˚C for 16 h. Washes were carried out with 2 × SSC at 50˚C for 15 min and 2 × SSC containing 50% formamide at 37˚C for 15 min. The sections were then treated with 10 μg/mL RNase A in 10 mM Tris-HCl (pH 7.6), 500 mM NaCl, and 1 mM EDTA at 37˚C for 15 min and then washed. The sections were treated with 2.4 mg/mL Levamisole (Sigma) to inactivate endogenous alkaline phosphatase.
Immunohistochemistry
Frozen tissue sections were fixed with acetone at -20˚C for 2 min or 4% paraformaldehyde at room temperature for 5 min and then washed three times in PBS for 5 min. Immunohistochemistry was performed on sections incubated with Universal Blocking Reagent (BioGenex) in 1 × PBS/0.3% Triton X-100 for 7 min at room temperature prior to incubation with the primary antibody. Antibodies against p21 (C-19; Santa Cruz), Lamb1 (Abcam), and Cx43 (US Biological) were used. A rabbit polyclonal antibody against the Panx3 peptide (residues 90-107) was raised and purified using a peptide affinity column. Primary antibodies were detected with Cy-3- or Cy-5-conjugated secondary antibodies (Jackson ImmunoResearch Laboratories). Nuclear staining was performed with DAPI (Invitrogen). Fluorescence microscopes (Axiovert 200, Carl Zeiss MicroImaging, Inc.; Biozero-8000, Keyence, Japan) were used for immunofluorescence image analysis. Images were prepared using AxioVision and Photoshop (Adobe Systems, Inc.).
Electron microscopy
One-day-old ICR mice were perfused with 4% paraformaldehyde in a 0.1 M phosphate buffer (pH 7.4) under deep anesthesia by an intraperitoneal injection of chloral hydrate (500 mg/kg). Following decalcification with a 4.13% EDTA-2Na solution, frozen sections were cut sagittally at 80 μm. For immunohistochemistry at the electron-microscopic level, the sections were processed for the Envision+/horseradish peroxidase system (Dako Japan) using anti-Panx3 antibody diluted to 1:100. For final visualization of the sections, 0.05 M Tris-HCl buffer (pH 7.6) containing 0.04% 3,3′-diaminobenzidine tetrahydrochloride and 0.0002% H2O2 was used. For transmission electron microscopy, the immunostained tissues were postfixed in 1% OsO4 reduced with 1.5% potassium ferrocyanide, dehydrated through an ethanol series, and embedded in Epon 812 (Taab, Berkshire, UK). Semithin sections were cut at 1 μm and stained with 0.03% methylene blue. Ultrathin sections (70 nm in thickness) were double-stained with uranyl acetate and lead citrate and examined with an H-7650 transmission electron microscope (Hitachi High-Technologies Corp., Tokyo, Japan).
Cell culturing
For primary cell cultures, first molars were dissected from P3 mice. To separate the dental epithelium from mesenchymal tissues, the samples were incubated for 10 min in 0.1% collagenase, 0.05% trypsin, and 0.5 mM EDTA and then transferred to keratinocyte-SFM supplemented with EGF and pituitary gland extract (PGE), according to the manufacturer's instructions. The tissues were then microsurgically separated and treated with 0.1% collagenase, 0.05% trypsin, and 0.5 mM EDTA for 15 min at 37˚C. Dental epithelium specimens were plated in keratinocyte-SFM with PGE and either 2% FBS or EGF with the addition of 100 units/mL of penicillin and streptomycin and incubated at 37˚C in a humidified atmosphere containing 5% CO2. Dental mesenchymal cells were plated on DMEM with 10% FBS as previously described [25]. A dental mesenchymal cell line (mDP) and a single-cell clone derived from human deciduous tooth pulp cells (SDP11) were cultured in DMEM/F-12 with 10% FBS. Cells were maintained at 30-60% confluency and the medium was replaced every other day. For the differentiation of the cells, 200 ng/mL recombinant human Bone morphogenetic protein 2 (rhBMP2, Wako) was added to the culture medium. Panx3 peptides (HHTQDKAGQYKVKSLWPH from mouse Panx3 and HHKQDGPGPGQDKMKSLWPH from human Panx3) were used for inhibition experiments. Peptides with scrambled sequences (WHTKYQVGLDPQHKASHK for mouse and GMHWPHDPGDKLQKQKSH for human) of the Panx3 peptides were used as controls [14]. All animal experiments were approved by the ethics committee of Kyushu University Animal Experiment Center (protocol no. A19-039-0).
Cell proliferation and bromodeoxyuridine (BrdU) incorporation
Cells were plated at 1 × 10 4 cells/mL/well in 12-well plates and grown for 72 h. Cell numbers were determined using a trypan blue dye exclusion method. For the BrdU incorporation assay, cells were incubated at the same cell density described above for 48 h. BrdU (Sigma) was added to the plates (10 μM) for 30 min; then, the cells were fixed with cold methanol for 10 min, rehydrated in PBS, and incubated for 30 min in 1.5 M HCl. After three washes with PBS, the plates were incubated with a 1:50 dilution of fluorescein isothiocyanate-conjugated anti-BrdU antibody (Roche Applied Science) for 30 min at room temperature. Finally, the cells were washed three times with PBS and incubated with 10 μg/mL propidium iodide (Sigma) in PBS for 30 min at room temperature. BrdU-positive cells were examined under a microscope (Biozero-8000; Keyence, Japan).
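For illustration only, a BrdU incorporation assay of this kind is usually summarized as a labeling index (the percentage of BrdU-positive cells among all counted nuclei) and compared between groups. The sketch below does this with hypothetical per-field counts and group names; none of the numbers come from this study.

```python
# Minimal sketch (hypothetical counts): BrdU labeling index per field and a two-group comparison.
import numpy as np
from scipy import stats

# Hypothetical counts of BrdU-positive cells and total (PI-stained) nuclei per microscopic field.
control = {"positive": np.array([52, 48, 55]), "total": np.array([120, 115, 130])}
panx3 = {"positive": np.array([21, 25, 19]), "total": np.array([118, 122, 110])}

def labeling_index(group):
    """Percentage of BrdU-positive cells in each counted field."""
    return 100.0 * group["positive"] / group["total"]

li_control, li_panx3 = labeling_index(control), labeling_index(panx3)
t_stat, p_value = stats.ttest_ind(li_control, li_panx3)
print(f"control {li_control.mean():.1f}% vs Panx3 {li_panx3.mean():.1f}%, p = {p_value:.3g}")
```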
Western blotting
Cells were washed three times with PBS containing 1 mM sodium vanadate (Na 3 VO 4 ), then solubilized in 100 μL lysis buffer (10 mM Tris-HCl [pH 7.4], 150 mM NaCl, 10 mM MgCl 2 , 0.5% Nonidet P-40, 1 mM PMSF, and 20 units/mL aprotinin). Lysed cells were centrifuged at 12,000 rpm for 5 min and the protein concentration of each sample was measured using the Micro-BCA Assay Reagent (Pierce Chemical Co.). The samples were denatured in SDS sample buffers and loaded on a 12% SDS-polyacrylamide gel. Ten micrograms of protein lysate were loaded in each lane. Subsequent to SDS-PAGE, the proteins were transferred onto a PVDF membrane and immunoblotted with antibodies to Panx3, p21 and p27 (Santa Cruz), β-actin (Abcam), and P-Smad1/5/8, Smad5, phospho-AMPKα, AMPKα, and retinoblastoma (Rb) (Cell Signaling) and then visualized using an ECL kit (Amersham Pharmacia Biotech). For the Smad and MAPK experiments, the cells were pretreated with siRNA as described below.
ATP flux
ATP flux was determined by luminometry. mDP cells were depolarized by incubation in KGlu solution (140 mM potassium gluconate, 10 mM KCl, and 5.0 mM TES, pH 7.5) for 10 min to open the pannexin channels. The supernatant was collected and assayed with luciferase/luciferin (Promega). For inhibition experiments, cells were treated with the Panx3-peptide or scramble peptide for 30 min prior to incubation in KGlu solution.
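Luminometric readings of this kind are commonly converted to ATP concentrations with a luciferase standard curve. The sketch below assumes a linear standard curve; the standard concentrations, luminescence values, and sample readings are hypothetical and are not data from this study.

```python
# Minimal sketch (hypothetical readings): convert luminescence to ATP via a linear standard curve.
import numpy as np

std_atp = np.array([0.0, 10.0, 50.0, 100.0])       # ATP standards (nM), hypothetical
std_rlu = np.array([120.0, 1050.0, 5200.0, 10300.0])  # luminescence of the standards (RLU), hypothetical

slope, intercept = np.polyfit(std_atp, std_rlu, 1)  # assume a linear luciferase response
sample_rlu = np.array([3400.0, 7800.0])             # hypothetical supernatant readings
sample_atp = (sample_rlu - intercept) / slope
print(np.round(sample_atp, 1))                      # estimated extracellular ATP (nM)
```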
Panx3 expression in mouse tissues
We originally identified pannexin 3 (Panx3) from E19.5 mouse molar cDNA microarrays as preferentially expressed in teeth [24,26]. To analyze the expression of Panx3 mRNA in mouse tissues, we first performed RT-PCR and Northern blot analyses using RNA isolated from tissues of newborn mice. RT-PCR showed that Panx3 was strongly expressed in molars and incisors (Fig 1A). Panx3 expression was also detected in long bones but not in other tissues such as the brain, lung, heart, liver, skin, and kidney. Ameloblastin, an enamel matrix marker, was expressed only in molars and incisors. Northern blot analysis showed that Panx3 was expressed as a single 2.6 kb mRNA band in molars, incisors, and long bones but not in other tissues (Fig 1A). Thus, Panx3 mRNA expression is strong in hard tissues.
Fig 1 legend. (A) RT-PCR analysis (upper three panels) and northern blotting analysis (lower three panels) using RNA from postnatal day 1 (P1) mouse tissues (molar, incisor, brain, lung, heart, liver, skin, kidney, and bone). (B) RT-PCR analysis using the dental epithelium (DE) and mesenchyme (DM) dissected from P1 mouse tooth germ. (C) Immunostaining with anti-Panx3 antibody (red) and DAPI nuclear staining (blue). (D) Light microscopy images of semi-thin sections stained with methylene blue for immunoelectron microscopy; Panx3 is present in the preodontoblasts (pre-od) but not in the odontoblasts (od). (E) Immunoelectron microscopy images of the Panx3 protein in preodontoblasts from P1 incisors, showing labeling in the preodontoblasts (upper panels) and at cell-cell contact sites (lower panels). dp, dental papilla; iee, inner enamel epithelium; si, stratum intermedium; BM, basement membrane. https://doi.org/10.1371/journal.pone.0177557.g001
To analyze the cellular localization of Panx3 in teeth, Panx3 mRNA expression was first examined by RT-PCR using RNA from dental epithelium and mesenchyme cells prepared from postnatal day 1 (P1) molars. Panx3 was expressed in the dental mesenchyme but not in the dental epithelium (Fig 1B). Next, we performed immunostaining (Fig 1C and 1D) and immunohistochemistry at the electron-microscopic level (Fig 1E) using a Panx3-specific antibody. Immunostaining revealed that Panx3 was strongly expressed in preodontoblasts but not in ameloblasts (Fig 1C and 1D). Panx3 was localized to cell-cell contact regions and to cell processes facing the basement membrane (Fig 1C-1E). These expression patterns indicate that Panx3 may function as both a gap junction and a hemichannel.
Panx3 expression in differentiating odontoblasts
In order to further identify Panx3-expressing cells, we compared the mRNA expression patterns of Panx3 and dentin sialophosphoprotein (Dspp), a marker of terminally differentiated odontoblasts and ameloblasts, in incisors and molars (Fig 2A). The expression pattern of Panx3 mRNA was different from that of Dspp mRNA, indicating that Panx3 mRNA is expressed in preodontoblasts but not in differentiated mature odontoblasts (Fig 2A).
Preodontoblasts proliferate to expand the cell population and eventually stop proliferating in order to differentiate. Since p21 is considered to be important for cell growth arrest at the G1/S checkpoint in odontoblasts and ameloblasts [27], we compared the expression of Panx3 and p21 by immunostaining in incisor sections of P1 mice, in which the gradual differentiation stages of odontoblasts can be observed. p21 expression was observed in a restricted area of odontoblasts and ameloblasts (Fig 2B). Panx3, but not p21, was expressed at the early presecretory stage of preodontoblasts, and both Panx3 and p21 expression was observed at the late presecretory stage of preodontoblasts (Fig 2B). These observations indicate that Panx3 expression slightly precedes p21 expression in preodontoblasts, suggesting that Panx3 may regulate p21 expression.
p21 expression in differentiating odontoblasts
To assess the role of Panx3 in odontoblast development, we used the mDP dental pulp cell line [28]. mDP cells can differentiate into Dspp-expressing cells in the presence of Bone morphogenetic protein 2 (BMP2). RT-PCR analysis showed that Panx3 mRNA was weakly expressed in undifferentiated mDP cells and that the expression of Panx3 and Dspp mRNA was induced in differentiating mDP cells (Fig 3A). Western blot analysis also demonstrated that the Panx3 protein was strongly induced during mDP differentiation (S1 Fig). The Panx3 protein migrated at approximately 45 kDa on SDS-PAGE, corresponding to its predicted molecular weight. The Panx3 protein was not detected in undifferentiated mDP cells. These results indicate that Panx3 is induced during the differentiation of mDP cells.
p21 was co-expressed with Panx3 in preodontoblasts along the basement membrane. We next examined whether p21 expression was also induced in mDP cells and mouse primary dental papilla cells undergoing BMP2-induced differentiation. Western blot analysis demonstrated that p21, but not p27, was induced by BMP2 treatment in mDP cells (Fig 3B) and mouse primary papilla cells (Fig 3C). These results demonstrate that p21 expression correlates with odontoblast differentiation.
Inhibition of cell proliferation by Panx3
To analyze the function of Panx3 in odontoblast proliferation and differentiation, we established mDP cells stably expressing Panx3 (pEF1/Panx3). We first examined whether Panx3 affects cell proliferation. Cell numbers of Panx3-transfected mDP cells were dramatically reduced in a three-day culture compared with control cells (Fig 4A). In addition, we analyzed mDP cell proliferation using BrdU incorporation. The number of BrdU-positive cells decreased in Panx3-transfected mDP cells compared to control vector-transfected cells (Fig 4B). These results indicate that overexpression of Panx3 inhibits mDP cell proliferation, suggesting that Panx3 inhibits dental mesenchymal cell proliferation in vivo. Since p21 is co-expressed in Panx3-expressing preodontoblasts, we next tested whether Panx3 affects p21 expression. Western blot analysis demonstrated that p21, but not p27, was significantly induced in Panx3-overexpressing mDP cells (Fig 4C). To analyze the function of endogenous Panx3, we knocked down Panx3 expression using Panx3 siRNA transfection into mDP cells. BMP2-induced Panx3 expression was substantially reduced at the protein level compared to control siRNA-transfected cells (S2 Fig). The expression of p21, but not of p27, was significantly reduced by the knockdown of endogenous Panx3 (Fig 4D). Further, we used SDP11 cells, a single-cell clone derived from deciduous tooth pulp cells [29]. SDP11 cells were cultured with BMP2 and the Panx3 inhibitory peptide. The expression of p21, but not p27, was also significantly reduced by the Panx3 inhibitory peptide (Fig 4E). These results indicate that the upregulation of endogenous Panx3 expression is required for p21 expression in BMP2-treated mDP cells and SDP11 cells, suggesting that p21 expression correlates with the anti-proliferative activity of Panx3.
Figure legend (fragment): ... and mouse primary dental papilla cells (C). Both mDP cells and mouse primary dental papilla cells were cultured with or without 200 ng/mL BMP2 for 72 h, and cell extracts were analyzed by western blotting using anti-p21 and anti-p27 antibodies. β-actin was used as a control. Data were pooled from three independent experiments with error bars designating standard deviation of the mean.
Panx3 is necessary for Dspp expression
We next examined whether overexpression of Panx3 affects mDP cell differentiation. Panx3-transfected mDP cells and control vector-transfected cells were cultured with BMP2. Dspp expression was strongly enhanced in Panx3-overexpressing mDP cells following 12 and 24 h of culturing. In contrast, the expression of Dspp in control mDP cells increased gradually up to 48 h of culturing (Fig 5A). Next, we analyzed whether knockdown of endogenous Panx3 affects mDP cell differentiation. Suppression of endogenous Panx3 by Panx3 siRNA inhibited BMP2-induced Dspp expression by approximately 70% (Fig 5B). These results indicate that Panx3 promotes mDP cell differentiation and is necessary for odontoblast differentiation.
Panx3 affects the phosphorylation of Smad1/5/8
Since Panx3 is involved in Dspp expression, and the phosphorylation of Smad1/5/8 constitutes a critical downstream target of BMP2 signaling [30,31], we examined whether Panx3 is associated with BMP2-Smad signaling. To investigate the effect of endogenous Panx3 knockdown on the BMP signaling pathway, we examined phosphorylation levels of Smad1/5/8 in BMP2-treated mDP cells that had been pretreated with Panx3 siRNA or control siRNA. BMP2-induced phosphorylation of Smad1/5/8 was inhibited in Panx3 siRNA-treated cells (Fig 6). This inhibition was also observed in cells treated with the Panx3-specific hemichannel inhibitory peptide (Fig 8C). These results indicate that Panx3 is involved in BMP-Smad signaling. Furthermore, we examined the effect of Panx3 on the activation of the MAPK superfamily, including JNK, p38, and ERK. However, the phosphorylation of JNK, p38, and ERK was not affected by the knockdown of endogenous Panx3 (data not shown).
Panx3 promotes intracellular ATP release and AMPK activation
Since we previously reported that the Panx3 hemichannel releases intracellular ATP into the extracellular space in chondrocytes and osteoblasts [14,15], we examined the efflux of ATP in mDP cells by luminometry. Panx3-transfected cells exhibited elevated ATP release compared to control vector-transfected cells. This Panx3 activity was inhibited by a Panx3-specific hemichannel blocking peptide but not by a control scrambled peptide (Fig 7A). These results suggest that Panx3 functions as an ATP release hemichannel in odontoblasts.
AMP-activated protein kinase (AMPK) is an energy-sensing kinase that plays a role in cellular energy homeostasis and is activated when intracellular ATP levels decrease [32]. We tested whether the function of Panx3 as an ATP release hemichannel affects the activation of AMPK. Western blot analysis showed that BMP2 induced the phosphorylation of AMPK (Fig 7B). Furthermore, BMP2-mediated AMPK phosphorylation was further enhanced by Panx3 overexpression (Fig 7B). Moreover, both Panx3 siRNA (Fig 7C) and the Panx3-specific blocking peptide (Fig 7D) reduced BMP2-induced AMPK phosphorylation. These results indicate that Panx3 regulates AMPK activation.
Relationship between BMP-Smad and Panx3-AMPK signaling in odontoblasts
We next addressed whether BMP-Smad signaling is associated with the Panx3-AMPK pathway by examining the phosphorylation of Smad1/5/8 in the presence of the AMPK inhibitor Ara-a or the AMPK activator 5-Aminoimidazole-4-carboxamide-ribonucleoside (AICAR). BMP2 induced the phosphorylation of Smad1/5/8 and AMPK (Fig 8A). Ara-a treatment inhibited BMP2-induced AMPK phosphorylation but not Smad1/5/8 phosphorylation (Fig 8A). While AICAR induced AMPK phosphorylation, Smad1/5/8 phosphorylation was not induced (Fig 8A and 8B). These results indicate that BMP signaling influences AMPK signaling, but AMPK signaling does not affect BMP signaling. Furthermore, when Panx3-overexpressing cells were treated with the Panx3 hemichannel blocking peptide, the phosphorylation of Smad1/5/8, but not of AMPK, was inhibited (Fig 8C). Panx3 also functions as an endoplasmic reticulum (ER) Ca2+ channel [14,15]. Akt activates the Panx3 ER Ca2+ channel, which induces Ca2+ release from the ER into the cytosol and subsequently activates calmodulin (CaM) signaling pathways, including calmodulin-dependent protein kinase II (CaMKII)-Smad [15]. We therefore investigated the effects of the CaM inhibitor W-7 (S3 Fig) on BMP2-induced Smad1/5/8 and AMPK phosphorylation in Panx3-transfected mDP cells. W-7 inhibited BMP2-induced Smad1/5/8 phosphorylation (Fig 8C). However, AMPK phosphorylation was not inhibited by W-7 (Fig 8C). These results indicate that Panx3 regulates Smad signaling through the CaM signaling pathway in dental mesenchymal cells.
Fig 7 legend (fragment): ... cells were plated at ~50% confluency and ATP levels in the media were measured by luminometry. Statistical analysis was performed using analysis of variance (*P < 0.01). (B) Time course of AMPK phosphorylation in control pEF1 or pEF1/Panx3-transfected mDP cells following treatment with or without BMP2, analyzed by western blotting using the anti-phospho-AMPK antibody. (C) Western blotting with the anti-phospho-AMPK antibody for mDP cells transfected with either control siRNA or Panx3 siRNA. (D) mDP cells were incubated with the Panx3 peptide or control peptide for 30 min, and BMP2-induced AMPK phosphorylation was then analyzed by western blotting. Data were pooled from three independent experiments with error bars designating standard deviation of the mean. Statistical analysis was performed using analysis of variance (*P < 0.01). https://doi.org/10.1371/journal.pone.0177557.g007
Fig 8 legend (fragment): mDP cells were treated with 1 mM Ara-a for 30 min before treatment with BMP2; 1 mM AICAR was also tested; Smad1/5/8 and AMPK phosphorylation were analyzed by western blotting. Data were pooled from three independent experiments with error bars designating standard deviation of the mean. Statistical analysis was performed using analysis of variance (*P < 0.05). (B) Time course of Smad1/5/8 phosphorylation in mDP cells following AICAR treatment, analyzed by western blotting using the anti-phospho-Smad1/5/8 antibody. (C) Panx3-transfected mDP cells were treated with either the Panx3 peptide or the calmodulin inhibitor W-7, and the phosphorylation of both Smad1/5/8 and AMPK was analyzed by western blotting. Data were pooled from three independent experiments with error bars designating standard deviation of the mean. Statistical analysis was performed using analysis of variance (*P < 0.05).
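Throughout these figures, band intensities pooled from three independent experiments are compared by analysis of variance. A minimal sketch of such a comparison is given below using hypothetical, β-actin-normalized densitometry values; the numbers and group labels are illustrative only and are not taken from this study.

```python
# Minimal sketch (hypothetical densitometry values): one-way ANOVA across treatment groups.
from scipy import stats

# Band intensities normalized to beta-actin, three independent experiments per group (illustrative).
control = [1.00, 0.95, 1.05]
bmp2 = [2.10, 1.90, 2.30]
bmp2_panx3_sirna = [1.20, 1.10, 1.25]

f_stat, p_value = stats.f_oneway(control, bmp2, bmp2_panx3_sirna)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.01 would be flagged as significant here
```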
Inhibition of cell proliferation by the AMPK activator AICAR
Finally, we examined whether AMPK signaling directly affects cell proliferation. The number of mDP cells decreased three days following the addition of AICAR in a dose dependent manner ( Fig 9A). Moreover, we examined the effect of AICAR on the expression of p21 and p27. The expression of p21, but not p27, was induced by AICAR treatment (Fig 9B). These findings suggest that the AMPK pathway may be involved in the suppression of dental mesenchymal cell proliferation through p21 expression.
Discussion
Two dynamic and distinct cellular processes, growth arrest and differentiation, occur during odontogenesis. The gradual differentiation of odontoblasts commences from the peripheral cell layer of the dental papilla [33]. Proliferating dental papilla cells eventually exit the cell cycle to differentiate into odontoblasts. Preodontoblasts are considered to be a regulator of growth arrest of proliferative dental papilla cells and of differentiation of odontoblasts. However, the precise function of preodontoblasts in tooth development is not clearly understood.
In this study, we found that pannexin 3 (Panx3) is specifically expressed in preodontoblasts. Differentiating odontoblasts secrete dentin matrix proteins including type I collagen and noncollagenous proteins such as Dspp. Dspp is an important molecule for dentin formation and is abundantly synthesized by differentiating odontoblasts. Dspp-null mice show defective and reduced dentin mineralization and pulp exposure similar to that of human dentinogenesis imperfecta III [34]. Dspp mutations demonstrate a direct correlation with human dentinogenesis imperfecta II and with dentin dysplasia II syndromes [35]. Thus, Dspp has been characterized as a unique marker of odontoblast differentiation. Here, we compared the expression of the Panx3 mRNA and Dspp mRNA during tooth development. In situ hybridization revealed that Panx3 mRNA expression completely differed from Dspp mRNA expression; Panx3 mRNA-expressing cells did not express Dspp mRNA. Also we could not see any Panx3-positive signals in proliferating dental papilla cells. Thus, Panx3 mRNA is specifically expressed in preodontoblasts.
Previously, we have demonstrated that Panx3 inhibits PTH-induced chondrogenic cell proliferation by regulating cAMP/PKA signaling [14]. We also demonstrated that Panx3 inhibits osteoprogenitor proliferation by inhibiting Wnt/β-catenin and PKA/CREB signaling and promotes cell cycle exit by increasing p21 activity [17]. In this study, overexpression of Panx3 significantly reduced mDP cell proliferation and induced p21 expression, whereas knockdown of endogenous Panx3 reduced the expression of p21. Consistent with these findings, co-expression of p21 and Panx3 in preodontoblasts was observed by immunostaining. These results suggest that Panx3 may regulate cell proliferation via p21 expression in tooth development. Thus, Panx3 plays a central role in the regulation of proliferation of progenitor cells in cartilage, bones, and teeth.
Panx3 functions as a hemichannel that releases intracellular ATP into the extracellular space of chondrocytes and osteoblasts [14][15][16]. We found that Panx3 also releases intracellular ATP as a hemichannel in mDP cells. The Panx3 hemichannel blocking peptide inhibited ATP release, indicating that the Panx3 hemichannel is involved in ATP release in preodontoblasts. Balancing intracellular ATP consumption and generation is one of the fundamental requirements of all cells. AMP-activated protein kinase (AMPK) serves as a highly conserved fuel sensor in all eukaryotic cells and is activated when intracellular ATP decreases. AMPK signaling plays an important role in many aspects of cellular function, including cell proliferation [36][37][38][39][40]. In this study, we found that phosphorylation of AMPK was promoted by Panx3. In addition, we demonstrated that 5-Aminoimidazole-4-carboxamide-ribonucleoside (AICAR), an AMPK activator, significantly reduced mDP cell proliferation and induced p21 expression. These results suggest that Panx3 mediates intracellular ATP release, which in turn leads to the activation of AMPK signaling, resulting in inhibition of cell proliferation via p21 expression in preodontoblasts. However, the direct mechanism regulating p21 expression and cell cycle arrest via AMPK has not been clearly defined, and further experiments are needed to test this hypothesis.
Panxs and connexins share similar protein structures that consist of four hydrophobic transmembrane domains spaced by two extracellular loops, an intracellular loop, and intracellular amino (NH2) and carboxyl (COOH) termini. This structure allows gap junction proteins to form a hexameric membrane pore complex and a hemichannel [41]. However, the organ-, cell-, or time-specific expression of gap junction proteins suggests that each gap junction protein has specific roles and functions. In fact, connexin 43 (Cx43) plays important roles in osteoblast function and differentiation, but Panx3-deficient mice show a more severe skeletal phenotype than Cx43-deficient mice [19]. This may be due to differences in channel properties between Panxs and connexins. Unlike connexins, only Panxs function as an endoplasmic reticulum (ER) Ca2+ channel [13,15,17]. The ER plays a predominant role in Ca2+ storage and regulates intracellular Ca2+ levels in many cellular processes, including osteoblast differentiation. In osteoblasts, Panx3 functions as the ER Ca2+ channel that activates CaMKII-Smad signaling and Osx expression [15,19]. Tooth development is mediated by various growth factors. Bone morphogenetic protein 2 (BMP2), one of the critical growth factors, is expressed in the primary enamel knot, an epithelial signaling center; BMP2 plays important roles in odontoblast differentiation involving Dspp expression [42]. We found that overexpression of Panx3 promoted Dspp expression. In contrast, knockdown of Panx3 reduced BMP2-induced Smad1/5/8 phosphorylation and Dspp expression. These results indicate that Panx3 is essential for odontoblast differentiation. Since the calmodulin inhibitor W-7 inhibited BMP2-induced Smad1/5/8 phosphorylation in this study, Panx3 may also function as an ER Ca2+ channel that increases intracellular Ca2+ levels for odontoblast differentiation.
In conclusion, this is the first study to demonstrate that Panx3 is specifically expressed in preodontoblasts and regulates odontoblast proliferation and differentiation. The Panx3 hemichannel in preodontoblasts promotes ATP release into the extracellular space, which results in a reduction of intracellular ATP levels. This may induce subsequent activation of the AMPK/p21 signaling cascade to inhibit cell proliferation. The Panx3 hemichannel is also involved in calmodulin-Smad signaling for the promotion of cell differentiation (Fig 10). Our results suggest that the restricted expression of Panx3 in preodontoblasts during odontoblast differentiation functions as a critical regulator of cell proliferation and differentiation. Further analysis of the regulatory role of Panx3 in preodontoblasts during odontogenesis may help elucidate the mechanisms underlying tooth development and provide innovative approaches to dentin regeneration. | 6,305.6 | 2017-05-11T00:00:00.000 | [
"Biology",
"Medicine"
] |
Synthesis and Properties of Novel Polyurethanes Containing Long-Segment Fluorinated Chain Extenders
In this study, novel biodegradable long-segment fluorine-containing polyurethane (PU) was synthesized using 4,4′-diphenylmethane diisocyanate (MDI) and 1H,1H,10H,10H-perfluoro-1,10-decanediol (PFD) as the hard segment, and polycaprolactone diol (PCL) as a biodegradable soft segment. Nuclear magnetic resonance (NMR) was used to perform 1H NMR, 19F NMR, 19F–19F COSY, 1H–19F COSY, and HMBC analyses on the PFD/PU structures. The results, together with those from Fourier transform infrared spectroscopy (FTIR), verified that the PFD/PUs had been successfully synthesized. Additionally, the soft segment and PFD were changed, after which FTIR and XPS peak-differentiation-imitating analyses were employed to examine the relationship of the hydrogen bonding between the PFD chain extender and PU. Subsequently, atomic force microscopy was used to investigate the changes in the microphase structure between the PFD chain extender and PU, after which the effects on the thermal properties were investigated through thermogravimetric analysis, differential scanning calorimetry, and dynamic mechanical analysis. Finally, the effects of the PFD chain extender on the mechanical properties of the PU were investigated through a tensile strength test.
Introduction
Thermoplastic polyurethane (TPU) is a type of block copolymer that is usually synthesized with a soft segment diol, a hard segment diisocyanate, and a chain extender. The incompatibility between soft and hard segments results in microphase separation, the structure of which dominates the mechanical properties and phase morphology of TPU, which has high strength, toughness, and wear resistance, as well as the properties of both plastics and elastomers [1][2][3][4]. Through different proportions of soft and hard segments, polyurethane (PU) can be used to synthesize materials with different properties, which can be applied in various fields [5,6]. Therefore, TPU is of great industrial importance [7]. However, compared with other thermoplastic elastomer materials, the poor thermal stability of TPU [8,9] results in limitations in back-end applications.
3 drops of dibutyltin dilaurate were added, the solution was mixed using a mechanical stirrer at 200 rpm, and PU prepolymers were formed after 2 h of reaction. In the second step, the PFD chain extenders were dissolved in DMAc and slowly dripped into the reaction flask, and the reaction was continued for 2 h (Scheme 1). In the polymerization, the di-n-butylamine method [25] was used to calculate the NCO content in all steps to monitor the reaction process. The obtained PFD/PU solution was subjected to vacuum defoaming for 2 h, after which it was poured into a serum bottle and stored in a refrigerator for 1 day. Finally, the PFD/PU solution was poured into a Teflon plate and dried in a temperature-programmable circulating oven for 8 h. The recipe, symbols, and theoretical contents of the hard and soft segments for the PFD/PU films are shown in Table 1. This study was only concerned with the relative contents of hard (or soft) segments of PFD/PUs with different PFD contents, so the theoretical hard (or soft) segment content is sufficient. Therefore, theoretical hard and soft segment contents were calculated by using Equations (1) and (2), respectively, as used in the general literature [26].
Theoretical hard segment content (wt%) = W_MDI + W_PFD    (1)

Theoretical soft segment content (wt%) = 100% − Theoretical hard segment content (wt%)    (2)
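As a worked illustration of Equations (1) and (2), the sketch below computes theoretical hard- and soft-segment contents from feed masses. The masses are hypothetical, and it is assumed here (not stated explicitly above) that W_MDI and W_PFD are taken relative to the total mass of MDI, PFD, and PCL charged.

```python
# Minimal sketch (hypothetical feed masses, in grams); assumes wt% is relative to the total of MDI + PFD + PCL.
def segment_contents(w_mdi, w_pfd, w_pcl):
    total = w_mdi + w_pfd + w_pcl
    hard = 100.0 * (w_mdi + w_pfd) / total   # Equation (1), with the assumed normalization
    soft = 100.0 - hard                      # Equation (2)
    return hard, soft

hard, soft = segment_contents(w_mdi=25.0, w_pfd=4.6, w_pcl=70.4)
print(f"hard segment: {hard:.1f} wt%, soft segment: {soft:.1f} wt%")
```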
Advanced Polymer Chromatography System (APC)
The molecular weights of PFD/PUs were characterized using an Acquity APC core system (Waters Corp., Milford, MA, USA) with tetrahydrofuran (THF) as the eluent at a flow rate of 0.8 mL/min. The measurement was carried out at 45 °C.
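For context, the number- and weight-average molecular weights and the dispersity (Mw/Mn) that such chromatography reports can be computed from slice data as sketched below; the molecular weights and abundances used here are hypothetical.

```python
# Minimal sketch (hypothetical slice data): Mn, Mw and dispersity from (molecular weight, abundance) pairs.
import numpy as np

M = np.array([2.0e4, 4.0e4, 8.0e4, 1.6e5])  # slice molecular weights (g/mol), hypothetical
N = np.array([1.0, 3.0, 2.0, 0.5])          # relative number of chains in each slice, hypothetical

Mn = np.sum(N * M) / np.sum(N)              # number-average molecular weight
Mw = np.sum(N * M**2) / np.sum(N * M)       # weight-average molecular weight
print(f"Mn = {Mn:.0f}, Mw = {Mw:.0f}, dispersity Mw/Mn = {Mw / Mn:.2f}")
```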
Fourier Transform Infrared Spectroscopy (FT-IR)
Fourier transform infrared spectroscopy measurements were performed on a PerkinElmer Spectrum One spectrometer (Waltham, MA, USA). The spectra of the samples were obtained by averaging 16 scans in a range of 4000 to 650 cm −1 with a resolution of 2 cm −1 .
19 F NMR Spectrometer
19F nuclear magnetic resonance (NMR) spectra of the polymers were recorded on a Bruker AVIII HD 400 MHz spectrometer (Bruker, Billerica, MA, USA) using DMSO-d6 as a solvent and tetramethylsilane as an internal standard.
X-Ray Photoelectron Spectroscopy (XPS)
X-ray photoelectron spectroscopy (XPS) measurements were carried out using a Thermo Fisher Scientific (VGS) spectrometer (Waltham, MA, USA). An Al Kα anode was used as the x-ray source (1486.6 eV), and a binding energy range of 0 to 1400 eV was selected for the analysis. The binding energies were calibrated to the C1s internal standard with a peak at 284.8 eV. The high-resolution C1s spectra were decomposed by fitting a Gaussian function to an experimental curve using a nonlinear regression.
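The peak decomposition described here is, in essence, a nonlinear least-squares fit of the measured envelope with Gaussian components. The sketch below fits a single Gaussian to synthetic data with scipy; it is a generic illustration, not the instrument software actually used in this work.

```python
# Minimal sketch: fit a single Gaussian to a synthetic XPS-like peak by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, sigma):
    return amp * np.exp(-0.5 * ((x - center) / sigma) ** 2)

# Synthetic "binding energy" axis (eV) and a noisy peak centred near 285 eV (illustrative only).
x = np.linspace(280, 295, 300)
rng = np.random.default_rng(0)
y = gaussian(x, amp=1.0, center=285.0, sigma=0.8) + 0.02 * rng.standard_normal(x.size)

popt, _ = curve_fit(gaussian, x, y, p0=[1.0, 284.8, 1.0])
print(f"fitted centre = {popt[1]:.2f} eV, FWHM = {2.3548 * popt[2]:.2f} eV")
```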
Surface Roughness Analysis
Scanning was performed using a CSPM5500 atomic force microscope from Being Nano-Instruments (Beijing, China), which is generally operated in 2 imaging modes: tapping and contact. The tapping mode was used in this study, and the tip of the oscillation probe cantilever made only intermittent contact with the sample. Regarding the phase of the sine wave that drives the cantilever, the phase of the tip oscillation is extremely sensitive to various sample surface characteristics; therefore, the topography and phase images of a sample's surface can be detected.
Thermogravimetric Analysis (TGA)
Thermogravimetric analysis was performed on a PerkinElmer Pyris 1 TGA (Perkin Elmer, Waltham, MA, USA). The samples (5-8 mg) were heated from room temperature to 700 °C under nitrogen at a rate of 10 °C/min.
Differential Scanning Calorimetry (DSC)
Differential scanning calorimetry was performed on a PerkinElmer Jade differential scanning calorimeter (Perkin Elmer, Waltham, MA, USA). The samples were sealed in aluminum pans with a perforated lid. The scans (−50 to 50 °C) were performed at a heating rate of 10 °C/min under nitrogen purging. The glass transition temperatures (Tg) were located as the midpoints of the sharp descent regions in the recorded curves. The melting points were recorded as the peak maximum of the endothermic transition in the second scan. Approximately 5-8 mg of sample was used in all of the tests.
Dynamic Mechanical Analysis (DMA)
Dynamic mechanical analysis was performed on a Seiko dynamic mechanical spectrometer (model DMS6100) at 1 Hz with a 5 µm amplitude over a temperature range of −50 to 50 °C at a heating rate of 3 °C/min. DMA was conducted in tension mode with specimen dimensions of 20 mm × 5 mm × 0.2 mm (L × W × H). The Tg was taken as the peak temperature of the glass transition region in the tan δ curve.
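Because the dynamic Tg is taken as the peak temperature of the tan δ curve, its extraction reduces to locating the maximum of that curve, as sketched below with synthetic data; the curve shape and values are illustrative only.

```python
# Minimal sketch (synthetic data): take Tg as the temperature at the maximum of tan(delta).
import numpy as np

temperature = np.linspace(-50, 50, 501)                        # deg C
tan_delta = 0.6 * np.exp(-((temperature - 8.0) / 12.0) ** 2)   # synthetic glass-transition peak
tg_dynamic = temperature[np.argmax(tan_delta)]
print(f"dynamic Tg ~ {tg_dynamic:.1f} deg C")
```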
Stress-Strain Testing
Tensile strength and elongation at break were measured using a universal testing machine (MTS QTEST5, model QC505B1, MTS Sys. Corp., Cary, NC, USA). Testing was conducted according to ASTM D638. The dimensions of the film specimens were 45 mm × 8 mm × 0.2 mm. Each sample was tested 3 times and the average value was obtained.
Figure 1 shows the gel permeation chromatography curves for PFD/PUs synthesized under different PFD proportions. As shown, the weight distributions of the PFD/PUs were unimodal, revealing that the synthesis was complete and without material residues. The molecular weight data are presented in Table 2. The results show that a high PFD content increased the effluent time, and the molecular weight distribution (Mw/Mn; dispersity index) of the PFD/PUs fell within 1.6-1.8. A decline in the viscosity of the PFD/PU polymer solution with increasing PFD content was also observed during the synthesis. These results reveal that increasing the PFD chain extender content reduced the molecular weight of the FTPU. This was because the carbon chain number and molecular weight of PFD are higher than those of other chain extenders, such as the 1,4-butanediol and ethylene glycol used in general PU. Therefore, the reduced molecular weight of the PFD/PUs following an increase in PFD was probably due to effects on reactivity. The trend was the same as that in the study by Yang et al. [22], in which an increase in fluorine content reduced the molecular weight.
FTIR
Figure 2a shows the FTIR spectra of the PFD/PUs over the wavenumber range of 4000-650 cm−1. The spectra revealed that the polymers had the following major common peaks: an -NH stretching vibration peak (3333 cm−1), CH2 stretching vibration peaks (2925 and 2862 cm−1), a C=O peak (amide I band, near 1727 cm−1), an -NH peak (amide II band, 1534.68 cm−1), a stretching vibration peak of the C-F group (1221-1205 cm−1), and a C-O stretching vibration peak (near 1098-1071 cm−1). Moreover, no free NCO group was observed at 2240-2275 cm−1. Therefore, MDI had fully reacted with the PCL or PFD chain extender during the synthesis processes, and the yields of PFD/PUs were all 100%.
Figure 2b shows the absorption peaks within the wavenumber range of 1900-1000 cm−1. FTIR analysis conducted by Yang et al. [27] showed that the C=O functional groups obtained from a PU system using the curve-fitting technique included C=O (free), C=O (HB disordered), and C=O (HB ordered), appearing at approximately 1724, 1701, and 1660 cm−1, respectively. Wang et al. [28] subjected FPU to an FTIR test, and the data indicated that while the C=O and -NH functional groups each appeared as three peaks (free, disordered, and ordered) in the curve fitting, the C-O and C-F functional groups produced two peaks (free and HB).
The HB percentage of FPU was calculated at 1530 cm−1 because the stretching vibration peak of the benzene ring at this wavenumber did not overlap with the other peaks. Therefore, the existence of HBs in the PU was proved using the following formula:
where I_H is the intensity of the hydrogen-bonded band, I_free is the intensity of the free (non-hydrogen-bonded) band, and I_ref represents the absorption intensity at 1534 cm−1. Additionally, according to the experimental data in this study, the characteristic peaks of the N-H, C=O, C-F, and C-O groups affected by the HBs also appeared in the PFD/PUs, with H-bonded C=O, H-bonded C-F, and H-bonded C-O located at 1646, 1205, and 1098 cm−1, respectively. This confirmed that a weak HB existed between N-H and C-F. Figure 3 shows the absorption peaks within the wavenumber range 1240-1190 cm−1. As shown, the three peaks that appeared at 1240-1190 cm−1 were amide III, C-F (free), and C-F (HB). The HB percentage of N-H···F-C was calculated using Equation (3), with PFD/PU-01, PFD/PU-02, and PFD/PU-03 having HB percentages of 23.63%, 27.41%, and 31.18%, respectively. Accordingly, an increase in PFD enhanced the HB interaction in the FTPU film.
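Equation (3) itself is not reproduced in this excerpt. A common way to express such a hydrogen-bonded percentage from curve-fitted peak intensities is the hydrogen-bonded intensity divided by the total (hydrogen-bonded plus free) intensity; any common normalization to the 1534 cm−1 reference band cancels in that ratio. The sketch below implements this assumed form with hypothetical intensities and should not be read as the authors' exact formula.

```python
# Minimal sketch (assumed formula, hypothetical intensities): hydrogen-bonded fraction from fitted peaks.
def hb_percentage(i_hb, i_free):
    """Assumed form: hydrogen-bonded intensity over total (bonded + free) intensity, as a percentage."""
    return 100.0 * i_hb / (i_hb + i_free)

print(f"HB ~ {hb_percentage(i_hb=0.31, i_free=0.68):.1f}%")  # illustrative intensities only
```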
Fluorine-19 NMR
Figure 4 shows the molecular structure of the fluorine parts of the PFD/PUs and the 19F NMR analytical chart for PFD/PU-01. The figure shows three absorption peaks, labeled 1-3. 19F-19F COSY of PFD/PU-01 was performed to accurately analyze F1-F3. Figure 5 shows two signals of F2 (labeled F2 and F2′) and three strong correlations (F2-F4, F2-F2′, and F1-F2′). Fluorine spectrum studies have found that 4J(F,F) is stronger than 3J(F,F) [29], indicating that a strong coupling exists with the next-nearest signals (i.e., 4J(F,F)). Figure 5 also shows that F-A and F-A' were the most affected by the other elements and were thus labeled F1. In a similar fashion, the peaks at −119.78 ppm (F1), −121.59 ppm (F2 and F2′), and −123.83 ppm (F3) corresponded to F-A and F-A'; F-C, F-C', F-D, and F-D'; and F-B and F-B', respectively. The corresponding positions of fluorine were confirmed, as shown in Figure 6. Such coupling was insufficient to verify that the fluorinated chain extender was attached to the PU, and additional identification through 2D NMR spectroscopy (1H-19F COSY, 1H-13C HMBC) was required. Figure 7 shows the 1H-19F COSY diagram for the PFD/PUs. The spectrum revealed that the H atoms of the CH2 group in the fluorinated chain extender were located at 4.89 ppm and had a relevant coupling (3J(H,F)) with F1. Additionally, Figure 7 illustrates that the H atoms at 4.89 ppm shared a weak coupling (4J(H,F)) with F3, which again verified that the analysis was correct. Figure 8 shows the 1H-13C heteronuclear multiple bond correlation (HMBC) spectrum of PFD/PU-01. According to the literature, the PU ester O-C=O is located at approximately 153 ppm [30], which revealed that the C=O location corresponded to the location of the H atoms of CH2 (4.89 ppm). The aforementioned analysis showed that the PFD chain extender had successfully reacted with MDI to form urethane groups.
Figure 9 shows XPS spectra for PFD/PUs, with each spectrum containing four main peaks: C1s, O1s, N1s, and F1s. The element composition and peak-related properties are listed in Table 3. These findings revealed that the binding energies of C1s, O1s, N1s, and F1s decreased following an increase in PFD content, and the F content increased from 2.31% to 9.47%. Compared with PFD/PU-02 and PFD/PU-03, the F1s binding energy of the C-F bond in PFD/PU-01 exhibited a clear offset from 690 to 688 eV. Accordingly, the molecular interaction in the PFD/PU film changed when the PFD content increased. The O1s spectra of PFD/PU-01 and PFD/PU-03 were subjected to an XPS peak-differentiation-imitating analysis (Figure 10). As shown, the O-C=O* binding energies of PFD/PU-01 and PFD/PU-03 were 532.08 and 531.98 eV, respectively. Berger et al. [19] concluded that organofluorine interactions reveal a dipole-dipole interaction between C-F···C=O. These results reveal that the shifts of the C1s, O1s, N1s, and F1s binding energies to lower values confirmed the interaction between the -C=O group and C-F in the PFD/PUs [31].
Figure 11 shows the XPS peak-differentiation-imitating analysis of C1s plotted for the PFD/PUs with different proportions of the PFD chain extender. The C-C binding energy in the C1s curve of the PFD/PUs was approximately 285.0 eV, with approximately 286 eV for C-O, 287 eV for C-O-C, and 292 eV for C-F2. The corresponding peak produced by O-C=O was at approximately 288 eV [32] and was assigned to the carbonyl group in the urethane group. Table 3 illustrates that the nitrogen content increased following an increase in PFD and that the amount of nitrogen represents the amount of hard segments. Substantial HB interactions between C-F and N-H may also have existed within the PFD/PUs; thus, the fluorine chains were believed to facilitate the pulling of the hard segments to the surface of the PU [33]. According to the figure, the increased PFD content led to a shift in the C-F position from 292 to 293.0 eV, a change in the peak intensity, and an increase in the C-N binding energy from 285.88 to 285.95 eV. The reason was that increasing the PFD increased the number of C-F···H-N HBs, which in turn increased the C-F binding energy, a result that was consistent with the FTIR analysis. Moreover, the binding energies of C-O, C-O-C, and O-C=O decreased following an increase in PFD content. This may have been caused by introducing long-chain fluorinated segments into the PU, which disrupted the original HB interactions of the PU due to steric hindrance, thereby reducing the binding energy. However, C-F···H-N had a greater HB interaction than did C=O···H-N because of the high electronegativity of fluorine, and a stronger HB interaction was produced in the PU film.
Surface Roughness Analysis
The left and right images in Figure 12a-c show the topography and phase data images for PFD/PU-01, PFD/PU-02, and PFD/PU-03, respectively. The PFD/PUs exhibited some continuous protrusions in the topography. The average surface roughness of PFD/PU-01, PFD/PU-02, and PFD/PU-03 was 2.17, 2.72, and 4.45 nm, respectively. The results revealed that the surface roughness increased when the PFD content increased, which produced a rougher FTPU. This phenomenon was attributed to the increase in the hard segments of the FTPU and the interaction between CF2 and C=O in the hard segments following the increase in PFD. In other words, increasing the PFD chain extender increased the HB interaction in the FTPU film, which in turn caused aggregations or protrusions on the film's surface [24]. Additionally, numerous continuous irregular granular and stripe phases were observed in the phase diagram of the PFD/PUs, which increased as the PFD content increased. These irregular phases revealed that the hard segments were rich in the PFD chain extender [34,35], a phenomenon that was consistent with the findings in the XPS spectrum.
Figure 13 illustrates the thermogravimetric analysis (TGA) curves of the PFD/PUs synthesized with different amounts of the PFD chain extender. The initial decomposition temperature of the PFD/PUs was defined as Tonset, which relates to pyrolysis of the FTPU. The data revealed that the Tonset of PFD/PU-01, PFD/PU-02, and PFD/PU-03 was 299.2, 305.1, and 308.6 °C, respectively; the thermogravimetric data of the PFD/PUs are presented in Table 4. The results show that Tonset increased when the PFD chain extender content increased in the PFD/PUs. This could be attributed to the strong bonding energy of -CF2 (540 kJ/mol), which requires relatively high energy to break the bond. Furthermore, the covalent radius of the fluorine atom is equivalent to half the C-C bond length; thus, the fluorine atoms shielded the main C-C chain and ensured its stability. Moreover, the interaction between the C=O and -CF2 groups was verified through FTIR, and the polar bonding of -CF2 contributed to the phase separation between the hard segments and the soft segment of the PU film [36]. The additional PFD increased the thermal stability of the PU film. The residual weight at 700 °C increased when the PFD increased, which consequently reduced the amount of PCL that was required. This resulted in more hard segments, which facilitated carbon formation.
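Tonset is described above only as the initial decomposition temperature; a common practical proxy, used here purely as an assumption for illustration, is the temperature at which a small fixed mass loss (for example 5%) is first reached. The sketch below applies that assumed criterion to a synthetic TGA curve.

```python
# Minimal sketch (synthetic data, assumed 5%-mass-loss criterion): estimate an onset-like temperature from a TGA curve.
import numpy as np

temperature = np.linspace(25, 700, 1000)                       # deg C
weight = 100.0 / (1.0 + np.exp((temperature - 360.0) / 25.0))  # synthetic sigmoidal weight-loss curve (%)
weight = weight / weight[0] * 100.0                            # normalize so the curve starts at 100%

t_onset = temperature[np.argmax(weight <= 95.0)]               # first temperature with >= 5% mass loss
print(f"onset-like temperature ~ {t_onset:.0f} deg C")
```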
Figure 14 shows the differential scanning calorimetry thermograms of the PFD/PUs with different PFD contents, and the relevant data are displayed in Table 4. The results reveal that the glass transition temperature (Tg) of PFD/PU-01, PFD/PU-02, and PFD/PU-03 was 3.7, 5.6, and 10.3 °C, respectively. Tg was related to the soft segment, which consisted of repeat linkages of alternating reacted MDI and PCL units. Previous FTIR spectra indicated the presence of a strong interaction between the C=O groups in the soft segments and the -CF2 groups in the hard segments of the PFD/PUs. When the PFD content was higher, the larger number of -CF2 groups caused a stronger interaction that inhibited segmental chain motion in the PFD/PUs, consequently increasing the Tg of the PFD/PUs. In other words, PFD/PUs with more hard segments or chain extenders have a higher Tg, as previously reported [37].
Dynamic Mechanical Analysis
Figure 15 shows the tan δ and loss modulus (E″) curves of the PFD/PUs with different amounts of the PFD chain extender. The dynamic Tg was defined as Tgd. As shown, the Tgd of PFD/PU-01, PFD/PU-02, and PFD/PU-03 from the tan δ curves was 7.9, 10.8, and 13.3 °C, respectively, whereas the Tgd from the E″ curves was 0.1, 3.6, and 5.4 °C, respectively. The Tg values obtained using the different testing methods are listed in Table 5. The results show that the Tgd of the PFD/PUs increased as the PFD chain extender content increased. This may have been caused by the inhibition of segmental motion of the PFD/PUs following the increase in hard segments and C-F···H-N HB interactions, which increased the Tgd of the PFD/PUs, a finding that was similar to the results from the thermal property analysis described previously. The tan δ curves of the PFD/PUs indicate a decrease in tan δmax with an increase in PFD content. This was because of the increased hard segments and the influence of the HB interactions following the increased PFD, resulting in more elastic PFD/PUs, because the value of tan δ is obtained by dividing E″ by E′. Therefore, PFD/PU-03, which had the highest fluorine content, showed the lowest peak value. In other words, the hard segments containing PFD units were harder than the soft segments containing PCL units. Additionally, the dipole-dipole interaction between C-F···C=O contributed to blocking the segmental activity of the PFD/PUs. In summary, the PFD/PUs with relatively high PFD content were more elastic, which suggests that increasing the PFD content improves the rigidity of the PFD/PUs.
Figure 16 shows the stress-strain curves of the PFD/PUs synthesized with different amounts of the PFD chain extender, and the data on their mechanical properties are listed in Table 6. According to the results, PFD/PU-01, PFD/PU-02, and PFD/PU-03 had, respectively, a maximum tensile strength of 8.23, 16.34, and 21.55 MPa; an extension at break of 1630%, 1434%, and 1156%; and a Young's modulus of 0.5, 1.4, and 2.1 MPa. These results reveal that increasing the PFD content increased the tensile strength and Young's modulus. This is due to the following: first, the FTPU film was more rigid when the FTPU contained more PFD or hard segments; and second, the HBs produced between -NH and CF2 in the PFD/PUs inhibited the segmental motion of the PFD/PUs. The result was an increase in the tensile strength and Young's modulus of the FTPU film. The results of the mechanical property curves are consistent with those of the dynamic mechanical analysis.
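To make the reported quantities concrete, the sketch below extracts a maximum tensile strength, an elongation at break, and an initial-slope Young's modulus from a stress-strain curve; the synthetic data and the choice of the first 5% strain as the modulus window are illustrative assumptions, not the ASTM procedure used in the study.

```python
# Minimal sketch (synthetic data): tensile strength, elongation at break, and initial-slope modulus.
import numpy as np

strain = np.linspace(0.0, 12.0, 600)                          # strain (dimensionless; 12.0 = 1200%)
stress = 2.0 * (1.0 - np.exp(-strain / 1.5)) + 1.2 * strain   # synthetic stress curve (MPa)

tensile_strength = stress.max()                               # maximum stress reached (MPa)
elongation_at_break = 100.0 * strain[np.argmax(stress)]       # strain at maximum stress, in %
initial = strain <= 0.05                                      # assumed initial-slope window (first 5% strain)
youngs_modulus = np.polyfit(strain[initial], stress[initial], 1)[0]
print(f"{tensile_strength:.1f} MPa, {elongation_at_break:.0f}%, E ~ {youngs_modulus:.2f} MPa")
```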
Conclusions
In this study, PFD was introduced into PU to produce FTPU, and 1H NMR, 19F NMR, 19F-19F COSY, 1H-19F COSY, and HMBC confirmed the successful synthesis of PFD/PUs. The results from FTIR and XPS indicate that introducing the CF2 group into the PFD produced an HB interaction with the -NH group; a high PFD content resulted in more HB interactions in the PFD/PUs. AFM showed that PFD contributed to the microphase separation of the PFD/PUs because of the HB interactions. Thermal property analysis showed that because of the strong binding energy of -CF2 in the PFD (540 kJ/mol) and because the covalent radius of fluorine atoms is equivalent to half the C-C bond length, increasing the PFD content shielded the main C-C chain and enhanced the thermal stability of the PFD/PUs. Dynamic mechanical analysis and tensile strength tests also confirmed that the PFD extender enhanced the rigidity of the PU film.
| 7,576.8 | 2018-11-01T00:00:00.000 | [
"Materials Science"
] |
Inferring circRNA-drug sensitivity associations via dual hierarchical attention networks and multiple kernel fusion
Increasing evidence has shown that the expression of circular RNAs (circRNAs) can affect the drug sensitivity of cells and significantly influence drug efficacy. Therefore, research into the relationships between circRNAs and drugs can be of great significance in increasing the comprehension of circRNA function, as well as contributing to the discovery of new drugs and the repurposing of existing drugs. However, it is time-consuming and costly to validate the function of circRNAs with traditional medical research methods. Therefore, the development of efficient and accurate computational models that can assist in discovering the potential interactions between circRNAs and drugs is urgently needed. In this study, a novel method is proposed, called DHANMKF, that aims to predict potential circRNA-drug sensitivity interactions for further biomedical screening and validation. Firstly, multimodal networks were constructed by DHANMKF using multiple sources of information on circRNAs and drugs. Secondly, comprehensive intra-type and inter-type node representations were learned from bi-typed multi-relational heterogeneous graphs using attention-based encoders in a hierarchical process. Thirdly, the multi-kernel fusion method was used to fuse intra-type embedding and inter-type embedding. Finally, the Dual Laplacian Regularized Least Squares method (DLapRLS) was used to predict the potential circRNA-drug sensitivity associations using the combined kernel in circRNA and drug spaces. Compared with the other methods, DHANMKF obtained the highest AUC value on two datasets. Code is available at https://github.com/cuntjx/DHANMKF. Supplementary Information The online version contains supplementary material available at 10.1186/s12864-023-09899-w.
Introduction
Circular RNA (circRNA) is a unique type of RNA that differs from other RNAs in that it forms a covalently closed loop and is typically considered non-coding. With the advancement of high-throughput genomics technology, circRNA has become a hot topic in RNA biology research [1]. Since the discovery of the first circRNA in RNA viruses in the 1970s [2], the advancement of biomedical technology has resulted in the discovery of an increasing number of circRNAs. However, research into circRNA function progressed very slowly over several decades, until 2013, when Memczak et al. and Hansen et al. proved that the circular RNA of human cerebellar degeneration-related protein has an important function in neural development [3,4]. This discovery led to a great increase in the study of circRNA function. The most notable function of circRNAs is that they act as miRNA sponges, which regulate target gene expression by inhibiting miRNA activity. One circRNA can regulate one or multiple miRNAs through multiple miRNA binding sites in a circular sequence [5]. Previous studies have found that circRNA can regulate alternative splicing or transcription [6,7], as well as parental gene expression [8,9]. The results of these studies have also indicated that circRNA plays an important role in physiological and pathological processes, and that the dysregulation of circRNA is closely related to many human diseases [10]. Over the past two decades, several verified biological function experiments have shown that circRNA has potential as a new clinical diagnostic marker.
Over the years, an increasing number of studies have demonstrated that circRNA can significantly affect the drug sensitivity of cells. For example, Gao et al. [11] screened 18 circRNAs from 3093 circRNAs and then verified them by real-time quantitative reverse transcription PCR. Finally, hsa_circ_0006528 was found to play an important role in chemotherapy resistance in breast cancer patients. Peng et al. [12] first used next-generation sequencing (NGS) technology to identify the comprehensive circRNA expression profile of multidrug-resistant osteosarcoma (OS) cell lines and found that hsa_circ_0004674 was significantly elevated in OS-resistant cells and tissues, and was associated with poor prognosis. This was then verified by quantitative real-time PCR (qRT-PCR). A study by Wu et al. [13] found that hsa_circ_0001546 is decreased in gastric cancer, which is associated with poor prognosis and also inhibits drug resistance via the ATM/Chk2/p53-dependent pathway. Ruan et al. [14] used four identification algorithms to describe the expression profile of circRNA in approximately 1000 human cancer cell lines and observed a strong correlation between circRNA expression and drug response. That study systematically demonstrated the effect of circRNAs on drug sensitivity. However, research into the relationship between circRNA and drug sensitivity is a newly emerging field that has developed rapidly over the past decade, so our understanding of this relationship is still in its early stages.
The process of validating the relationships between circRNA and drug sensitivity using traditional biomedical methods is time-consuming and costly. Therefore, some researchers have developed computational models that can help to reveal the potential relationships between circRNAs and drugs. For example, Deng et al. [15] proposed a computational model called GATECDA for predicting the association between circRNA and drug sensitivity. GATECDA is based on the Graph Attention Auto-encoder (GATE) [16]. First, sequence information for circRNAs, structural data for drugs, and circRNA-drug sensitivity association data were collected. Then the similarities between circRNAs and between drugs were each calculated, and these data as well as the circRNA-drug sensitivity association data were input into the GATE in order to generate low-dimensional vector representations of circRNA and drug nodes. Finally, the low-dimensional vector representations generated by the GATE were input into a fully connected neural network for circRNA-drug sensitivity association prediction. Later, Yang et al. [17] proposed a model called MNGACDA. The model constructs a multimodal network based on multiple information sources on circRNAs and drugs. Then, a node-level attention Graph Auto-Encoder was used to obtain low-dimensional embeddings of circRNAs and drugs from the multimodal network. Finally, the low-dimensional embeddings of circRNAs and drugs were input into an inner product decoder to score the associations between circRNAs and drug sensitivity. To our knowledge, these were the first models to apply computational methods to predict the potential associations between circRNAs and drug sensitivity. Thus far, no other new models have been applied in this field, and considerable advancement in the creation of new and improved models for this field of research is much needed.
Since Multiple Kernel Learning (MKL) [18] was proposed, it has been widely applied to bipartite biological networks to improve model performance. Specifically, the information contained in the samples is used by MKL to compute multiple kernel matrices, and the optimal kernel matrix is then obtained by fusing these kernel matrices. For example, MKGCN, which is based on MKL and GCN [19], was proposed by Yang et al. to infer novel microbe-drug associations. Yan et al. [20] proposed a computational method called MKLC-BiRW, based on MKL and the Bi-random walk algorithm, to predict potential drug-target interactions by integrating diverse drug-related and target-related heterogeneous information.
In this study, we propose a novel method, called DHANMKF, that aims to predict potential circRNA-drug sensitivity associations for further biomedical screening and validation. Firstly, multimodal networks were constructed by DHANMKF using multiple sources of information on circRNAs and drugs. Secondly, comprehensive intra-type and inter-type node representations were learned from multi-relational heterogeneous graphs using attention-based encoders in a hierarchical process. Thirdly, a multi-kernel fusion method was used to fuse intra-type embedding and inter-type embedding. Fourthly, the Dual Laplacian Regularized Least Squares (DLapRLS) method was used to predict the potential circRNA-drug sensitivity associations using the combined kernel in circRNA and drug spaces. In order to evaluate the effectiveness of DHANMKF, it was compared with six state-of-the-art methods on a benchmark data set under 5-fold cross-validation (5-CV). Compared with the other methods, DHANMKF obtained the highest AUC. Furthermore, an ablation study was performed to compare the experimental results from different perspectives. Finally, case studies were conducted to demonstrate that the DHANMKF model can be a useful tool for helping with the study of circRNA-drug sensitivity associations in real situations. To the best of our knowledge, DHANMKF is the first algorithm to use dual hierarchical attention networks for the prediction of circRNA-drug sensitivity associations. Our main contributions, differing from previous approaches, are summarized as follows: (1) We classify nodes into two types, i.e., head nodes and tail nodes, based on the degree of the nodes, and then define the types of edges based on the associations between the different kinds of nodes. (2) Based on the differences in types of edges, we use dual hierarchical attention networks to extract the information on circRNAs and drugs, use the multi-kernel fusion method to fuse this information, and then use the dual graph regularized least squares method to predict potential circRNA-drug associations. (3) We tested DHANMKF on two datasets, and the results show that multi-relational dual hierarchical attention networks perform better than the other methods in predicting potential circRNA-drug associations. These results can provide new insights for further research on circRNA-drug associations.
Datasets
Two datasets, data271 and data251, were used in this study. Data271 is from Deng et al. [15], and data251 is from Deng et al. [15] and Peng et al. [21]. CircRNA-drug sensitivity associations were collected from the circRic database [14] by Deng et al. [15], where drug sensitivity data were obtained from the GDSC database [22]. After Wilcoxon tests with a false discovery rate < 0.05, the significant circRNA-drug sensitivity associations were extracted as the data271 dataset, which contains N_c = 271 circRNAs, N_d = 218 drugs, and 4134 circRNA-drug sensitivity associations. Integrating with the dataset of Peng et al. [21], we removed circRNAs with host-gene interaction scores ≤ 0.5 and nodes with a degree of 0. This resulted in the data251 dataset, containing N_c = 251 circRNAs, N_d = 217 drugs, and 3635 circRNA-drug sensitivity associations. Additional information on these two datasets can be found in the Supplementary file. In our experiment, circRNAs and drugs were represented as two different types of nodes in the network. The node set of N_c circRNAs was defined as C = {c_1, ..., c_{N_c}}. Similarly, the node set of N_d drugs was described as D = {d_1, ..., d_{N_d}}. An adjacency matrix Y ∈ R^{N_c × N_d} was created for the storage of circRNA-drug associations. In this matrix, the N_c rows represent the circRNAs and the N_d columns represent the drugs. If circRNA c_i is associated with drug d_j, then Y_ij = 1; otherwise Y_ij = 0. During the training phase, all the entries with Y_ij = 1 are treated as positive samples and the others are treated as negative samples. We randomly masked some positive samples from Y to get Y_train. In order to calculate the similarity of circRNAs and drugs, the host gene sequences of circRNAs were downloaded from the National Center for Biotechnology Information (NCBI) gene database [23] and the drug structure data were downloaded from NCBI's PubChem database [24].
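To make the data setup above concrete, a minimal sketch of building Y and masking positives to obtain Y_train is given below; the mask fraction, the index convention, and the toy association list are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def build_association_matrix(pairs, n_circ, n_drug):
    """Binary circRNA-drug adjacency matrix Y from (circRNA_idx, drug_idx) pairs."""
    Y = np.zeros((n_circ, n_drug), dtype=np.float32)
    for i, j in pairs:
        Y[i, j] = 1.0
    return Y

def mask_positives(Y, frac, rng):
    """Randomly mask a fraction of the known associations to obtain Y_train;
    the masked entries can then serve as held-out test positives."""
    Y_train = Y.copy()
    pos = np.argwhere(Y == 1)
    held_out = pos[rng.choice(len(pos), size=int(frac * len(pos)), replace=False)]
    Y_train[held_out[:, 0], held_out[:, 1]] = 0.0
    return Y_train, held_out

rng = np.random.default_rng(0)
Y = build_association_matrix([(0, 1), (2, 3), (4, 0), (5, 7)], n_circ=271, n_drug=218)
Y_train, held_out = mask_positives(Y, frac=0.2, rng=rng)
```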
Sequence similarity of host genes of circRNAs
Applying methods similar to those of Deng et al. [15] and Yang et al. [17], we treated the sequence similarity between host genes of circRNAs as the similarity between circRNAs. In this way, the similarity calculation between circRNAs became the sequence similarity calculation between host genes of circRNAs. The sequence similarity between host genes of circRNAs was calculated based on the sequence Levenshtein distance, which was obtained using the ratio function of Python's Levenshtein package. A similarity matrix CSS ∈ R^{N_c × N_c} was created for storing the circRNA sequence similarity.
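A minimal sketch of this computation, assuming the ratio function of the python-Levenshtein package as described above; the toy sequences are placeholders.

```python
import numpy as np
import Levenshtein  # pip install python-Levenshtein

def circ_sequence_similarity(host_gene_seqs):
    """CSS[i, j]: Levenshtein ratio between the host-gene sequences of circRNAs i and j."""
    n = len(host_gene_seqs)
    CSS = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            CSS[i, j] = CSS[j, i] = Levenshtein.ratio(host_gene_seqs[i],
                                                      host_gene_seqs[j])
    return CSS

CSS = circ_sequence_similarity(["ATGCGTA", "ATGAGTA", "TTTTTTT"])  # toy sequences
```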
Structural similarity of drugs
The structure of drugs has a great impact on their function. Therefore, it has become common practice to measure the similarity of drugs based on their structure. As in previous studies [25,26], the RDKit [27] toolkit and the Tanimoto method were used to calculate the structural similarities between drugs. The specific process was as follows: first, the structural data on the drugs were obtained from the PubChem database. Then, RDKit was used to calculate the topological fingerprint of each drug. After that, the structural similarity between drugs was calculated using the Tanimoto method. Finally, the drug structural similarity matrix DSS ∈ R^{N_d × N_d} was derived.
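A hedged sketch of the drug similarity pipeline described above, using RDKit topological fingerprints and Tanimoto similarity; the SMILES strings are placeholders, and the fingerprint settings are left at RDKit defaults, which may differ from the paper's exact configuration.

```python
import numpy as np
from rdkit import Chem, DataStructs

def drug_structure_similarity(smiles_list):
    """DSS[i, j]: Tanimoto similarity of RDKit topological fingerprints."""
    mols = [Chem.MolFromSmiles(s) for s in smiles_list]
    fps = [Chem.RDKFingerprint(m) for m in mols]  # default topological fingerprint
    n = len(fps)
    DSS = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            DSS[i, j] = DSS[j, i] = DataStructs.TanimotoSimilarity(fps[i], fps[j])
    return DSS

DSS = drug_structure_similarity(["CCO", "CCN", "c1ccccc1O"])  # toy SMILES
```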
Gaussian interaction profile kernel similarity for circRNAs and drugs
The Gaussian Interaction Profile (GIP) kernel similarity [28] algorithm is a collaborative filtering algorithm that has been widely used in previous studies for similarity calculation [29,30], and it helps to obtain topological information on circRNAs and drugs in relational graphs. Therefore, we calculated the GIP kernel similarity for circRNAs and drugs using the circRNA-drug association network. Firstly, based on the assumption that similar circRNAs are more likely to be associated with similar drugs, we utilized a binary vector BI(c_i), the i-th row of the Y_train matrix, representing the associations between circRNA c_i and all drugs in the training matrix. Then, the GIP kernel similarity CGS(c_i, c_j) between circRNAs c_i and c_j was calculated as CGS(c_i, c_j) = exp(−γ_c ‖BI(c_i) − BI(c_j)‖²), where the bandwidth γ_c is obtained by normalizing a parameter α_c by the average squared norm of the circRNA interaction profiles; α_c was set to 1 following [28]. Similarly, we calculated the GIP kernel similarity DGS(d_i, d_j) between drugs d_i and d_j, where the binary vector BI(d_i) is the i-th column of the Y_train matrix, representing the associations between drug d_i and all circRNAs in the training matrix; α_d was likewise set to 1 following [28].
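The GIP kernel described above can be sketched as below, reusing Y_train from the earlier sketch; the bandwidth normalization (dividing α by the mean squared norm of the interaction profiles) follows the standard definition of [28] and is an assumption about the exact implementation.

```python
import numpy as np

def gip_kernel(Y_train, axis=0, alpha=1.0):
    """Gaussian interaction profile (GIP) kernel.
    axis=0: profiles are rows (circRNAs); axis=1: profiles are columns (drugs)."""
    profiles = Y_train if axis == 0 else Y_train.T
    # bandwidth: alpha divided by the mean squared norm of the interaction profiles
    gamma = alpha / max(np.mean(np.sum(profiles ** 2, axis=1)), 1e-12)
    sq = np.sum(profiles ** 2, axis=1)
    sq_dists = sq[:, None] + sq[None, :] - 2.0 * profiles @ profiles.T
    return np.exp(-gamma * np.clip(sq_dists, 0.0, None))

CGS = gip_kernel(Y_train, axis=0)  # circRNA GIP similarity
DGS = gip_kernel(Y_train, axis=1)  # drug GIP similarity
```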
Integrated similarity for circRNAs and drugs
Inspired by the study of Wang et al. [31], we used a nonlinear fusion method to integrate the circRNA similarity and the drug similarity. Taking the circRNA similarity as an example, we first normalized the sequence similarity of host genes of circRNAs (Eq. (5)). Then, the K Nearest Neighbors (KNN) algorithm was used to measure the local affinity of CSS (Eq. (6)), where the KNN set of c_i in CSS includes c_i itself. This operation is based on the assumption that the higher the local similarity, the more reliable it is; therefore, the near-end (neighbor) similarity is kept while the far-end similarity is set to 0. Similarly, we repeated the process for CGS and obtained CGS′ and CKNN2. After that, we updated the similarity matrix for each kind of data iteratively. After each iteration, CSS′(t+1) is normalized by Eq. (5), and CGS′(t+1) undergoes the same normalization. The iteration does not stop until the convergence condition is met, namely when the relative change in CSS′ is less than 10⁻⁶. Assuming that the process involves t iterations, the overall comprehensive similarity matrix of circRNA can be obtained by Eq. (9) when the iteration ends.
Based on these rules, the fused similarity matrix is asymmetric. Therefore, we took its symmetrized form as the circRNA comprehensive similarity matrix S_c. For drugs, we applied the same rules to DSS and DGS and obtained the comprehensive drug similarity matrix S_d.
DHANMKF
Dual Hierarchical Attention Networks (DHAN) were proposed by Zhao et al. [32] in 2022. In DHAN, comprehensive node representations are learned with intra-type and inter-type attention-based encoders through a hierarchical process based on bi-typed multi-relational heterogeneous graphs. Specifically, DHAN uses two encoders, one to aggregate information from nodes of the same type and the other to aggregate node representations from neighbors of different types. The complex structure of the bi-typed multi-relational heterogeneous graph is then captured by the model through a hierarchical process and a dual-level attention operation. It is worth noting that the circRNA-drug association matrix Y is a bi-typed single-relation heterogeneous graph. Therefore, in order to fully utilize the ability of DHAN to extract node embeddings, it is necessary to classify the relationships between nodes.
It is well known that the adjacency matrices describing different objects in the biomedical field are sparse. This means that there are many nodes with small degrees. Histograms of the degree distributions of circRNAs and drugs can be found in the Supplementary file. It can be seen that most of the nodes have small degrees regardless of whether they are circRNA nodes or drug nodes. Intuitively, the biomedical significance of a drug being associated with only a few circRNAs differs from that of a drug being associated with many circRNAs. Inspired by Liu et al. [33], we categorized the nodes into head nodes and tail nodes according to their degrees. That is, for every node v ∈ V, where V is the set of nodes in a graph, N_v denotes the set of neighboring nodes of v, and the number of elements in N_v is the degree of v. We let V_h and V_t denote the sets of head and tail nodes, respectively. For some threshold K, we define tail nodes as nodes with a degree not exceeding K, i.e., V_t = {v ∈ V : |N_v| ≤ K}, and head nodes as the remaining nodes; K is treated as a hyperparameter in our study. In this way, the association of circRNAs with drugs in the data271 dataset changes from being of one type to being of the following four types.
1. Association between the head node of circRNA and the head node of the drug.
2. Association between the head node of circRNA and the tail node of the drug.
3. Association between the tail node of circRNA and the head node of the drug.
4. Association between the tail node of circRNA and the tail node of the drug.
Because the node types in the circRNA similarity network and the drug similarity network are the same, that is, they are either all circRNAs or all drugs, the following three types of associations will be present in these two similarity networks.
1. Associations between head nodes.
2. Associations between a head node and a tail node.
3. Associations between tail nodes.
In summary, we represent the intra-type relationships and inter-type relationships as R_intra = {1, 2, 3} and R_inter = {1, 2, 3, 4}, respectively. In the data251 dataset, by contrast, circRNAs were split into two types depending on whether the host gene of the circRNA was associated with a disease or not, which is analogous to splitting circRNAs into head nodes and tail nodes. Thus the same number of edge types can also be obtained, and the definition is more biologically meaningful in this way.
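A small sketch of the head/tail split and the resulting inter-type edge labels; the thresholds 27 and 39 are the values reported later in the implementation details, and the numeric relation codes are arbitrary labels used here for illustration.

```python
import numpy as np

def split_head_tail(Y_train, K_c=27, K_d=39):
    """Head/tail split of circRNA and drug nodes by degree threshold.
    Returns boolean masks with True marking head nodes (degree > threshold)."""
    circ_deg = Y_train.sum(axis=1)
    drug_deg = Y_train.sum(axis=0)
    return circ_deg > K_c, drug_deg > K_d

def inter_edge_relation(circ_is_head, drug_is_head, i, j):
    """Map edge (circRNA i, drug j) to one of four inter-type relations:
    1 head-head, 2 head-tail, 3 tail-head, 4 tail-tail."""
    return 1 + 2 * (0 if circ_is_head[i] else 1) + (0 if drug_is_head[j] else 1)

circ_is_head, drug_is_head = split_head_tail(Y_train)
```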
Intra-type attention-based encoder
After the computational process above, the associated network of circRNAs and drugs becomes a bi-typed multi-relational heterogeneous network. Consider a node pair (n_i, n_j) ∈ C connected via an intra-type relationship k ∈ R^(c)_intra = {1, 2, 3}. Firstly, we initialized the representation matrix of the circRNAs to H_c^0, whose i-th row is the feature vector of node n_i. Secondly, self-attention was performed on the circRNA nodes to formulate the importance e^k_ij of a relation-specific node pair (n_i, n_j), where || denotes the concatenation operation and a_k ∈ R^{2d′×1} denotes the shared node-level attention weight vector under relation k; LeakyReLU is the nonlinear activation function, which is widely used in attention-based neural networks. In the third step, e^k_ij is normalized using Eq. (11) to facilitate the comparison of importance between different nodes.
Where N^k_intra(n_i) denotes the relation-specific neighbors of n_i, the embedding h^k_i of node n_i under a given relation k is obtained by aggregating these neighbors, where Norm_k denotes the relation-specific layer normalization operation; h^k_i is semantic-specific. Therefore, by using Eq. (12) to fuse the aggregated information of nodes under different specific relations, more comprehensive node embeddings can be obtained, where q ∈ R^{2d′×1} is a trainable parameter. Similar to Eq. (11), we normalize g^k_i using the softmax function. Here β^k_ij measures the local importance of intra-type relation k, while β^k_G denotes how important intra-type relation k is for all circRNA nodes and can be regarded as a global importance parameter; the global and local importance of the intra-type relation are smoothed by a parameter t, and both β^k_G and t can be learned during training. The aggregated information for node n_i under intra-type relation k is represented by h^k_i. Finally, the intra-type attention-based representation of circRNA node n_i is obtained by combining these relation-specific representations. The representation matrix of the drugs is initialized in the same way, and using the same process as above we obtain the intra-type attention-based representation of each drug node. Z_c^1 and Z_d^1 respectively denote the first-layer outputs of the intra-type attention-based encoder, that is, the node embedding matrices of circRNAs and drugs. Assuming that the intra-type attention-based encoder has t layers, the output of the previous layer is taken as the input of the next layer; repeating this process yields t node embedding matrices for circRNAs and drugs: Z_c^1, ..., Z_c^t and Z_d^1, ..., Z_d^t.
Inter-type attention-based encoder
The purpose of the intra-type attention-based encoder is to learn node embeddings by aggregating the information of same-type neighbors, while the purpose of the inter-type attention-based encoder is to handle interactions between different types of nodes. Let n_i ∈ C and n_j ∈ D, and let z_c_i and z_d_j be the representations of circRNA node n_i and drug node n_j learned by the intra-type attention networks. The node-level importance c^m_ij can be calculated by Eq. (16) and normalized by Eq. (17), where N^m_inter(n_i) denotes the neighbors of node n_i under a specific inter-type relation m, W_c^inter and W_d^inter ∈ R^{d′×d′} are two type-specific matrices that map the features z_c_i and z_d_j into a common space, and a_m ∈ R^{2d′} is a learnable weight vector. The relation embedding of circRNA node n_i can then be aggregated from the embeddings of its neighbors of the other type (that is, the drug nodes) with the corresponding coefficients, where Norm_m denotes the layer normalization operation related to the inter-type relation. Then, the importance of the relation embedding z^m_i for node n_i is obtained by fusing all relational representations by Eq. (19), and it is normalized by Eq. (20) to make relation importance comparable across inter-type relations. Finally, the representation u_i of circRNA node n_i is obtained by fusing these relation-specific representations. Similarly, we can obtain the inter-type attention-based representation of each drug node n_j. U_c^1 and U_d^1 respectively denote the first-layer outputs of the inter-type attention-based encoder, that is, the node embedding matrices of circRNAs and drugs. Assuming the inter-type attention-based encoder has M layers, the output of the previous layer is taken as the input of the next layer; repeating this process yields M node embedding matrices for circRNAs and drugs: U_c^1, ..., U_c^M and U_d^1, ..., U_d^M.
Multi-kernel fusion
We can extract multiple embeddings from the intra-type attention-based encoder and the inter-type attention-based encoder that represent the information on circRNA nodes and drug nodes under different types and different relationships. For all the embeddings of circRNAs and drugs, we used the GIP kernel similarity function to calculate circRNA and drug kernel matrices for each layer (Eq. (18)). We then integrated all the kernels above with multiple kernel fusion in order to fully utilize the information and improve the performance of predicting circRNA-drug associations; the final kernel matrices of circRNA and drug were obtained as weighted combinations of these layer-wise kernels, where the combination coefficients are the corresponding weights of the circRNA kernels and drug kernels, respectively.
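A minimal sketch of the layer-wise kernel construction and their weighted fusion; the Gaussian-kernel form, the equal default weights, and γ = 1/75 are assumptions consistent with the surrounding description rather than the exact implementation.

```python
import numpy as np

def kernel_from_embedding(H, gamma=1.0 / 75.0):
    """Gaussian (GIP-style) kernel computed from an embedding matrix H (rows = nodes)."""
    sq = np.sum(H ** 2, axis=1)
    sq_dists = sq[:, None] + sq[None, :] - 2.0 * H @ H.T
    return np.exp(-gamma * np.clip(sq_dists, 0.0, None))

def fuse_kernels(embedding_list, weights=None, gamma=1.0 / 75.0):
    """Weighted multi-kernel fusion of the layer-wise kernels."""
    kernels = [kernel_from_embedding(H, gamma) for H in embedding_list]
    if weights is None:
        weights = np.ones(len(kernels))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * K for w, K in zip(weights, kernels))

# H_c collects H_c^0 and the outputs of both encoders; IC is the fused circRNA kernel.
# IC = fuse_kernels(H_c); ID = fuse_kernels(H_d)
```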
Dual Laplacian regularized least squares model
Inspired by previous studies [34] and [35], we adopted the Dual Laplacian Regularized Least Squares (DLapRLS) method to predict circRNA-drug associations. Overfitting is avoided in DLapRLS by adding graph regularization. The loss function is defined in Eq. (26), where ‖·‖_F is the Frobenius norm, α_c and α_d^T ∈ R^{N_c × N_d} are learnable matrices, and φ_c and φ_d are regularization parameters. L_c ∈ R^{N_c × N_c} and L_d ∈ R^{N_d × N_d} are the normalized Laplacian matrices of the fused kernels IC and ID, computed using the corresponding diagonal degree matrices. Finally, the prediction F for the circRNA-drug associations is obtained from IC and ID together with α_c and α_d.
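A sketch of the normalized Laplacian used in the graph regularization terms, assuming the standard symmetric normalization L = I − D^{-1/2} S D^{-1/2}; the exact normalization used in the paper may differ.

```python
import numpy as np

def normalized_laplacian(S):
    """Symmetric normalized Laplacian L = I - D^{-1/2} S D^{-1/2},
    with D the diagonal degree matrix of the kernel/similarity matrix S."""
    d = S.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    return np.eye(S.shape[0]) - (d_inv_sqrt[:, None] * S) * d_inv_sqrt[None, :]

# L_c = normalized_laplacian(IC);  L_d = normalized_laplacian(ID)
```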
Training
Except for the parameters α_c and α_d, the parameters of our model are updated by Adam [36]. The parameters α_c and α_d are updated by calculating the partial derivatives of the DLapRLS loss. The specific calculation process is as follows: we first assume that α_d is a constant matrix while α_c is optimized. Thus, the partial derivative of the loss function Eq. (26) with respect to α_c can be calculated; letting ∂J/∂α_c = 0 then gives α_c in closed form. Similarly, the partial derivative of the loss function Eq. (26) with respect to α_d can be calculated, and letting ∂J/∂α_d = 0 gives α_d. α_c and α_d were randomly initialized at the beginning of model training, and then they were computed directly by Eqs. (31) and (33) in each iteration, while the other parameters were optimized by Adam. The flowchart of our model is shown in Fig. 1. All experimentally verified circRNA-drug associations were treated as positive samples, and the unknown circRNA-drug associations were treated as negative samples, similar to the work of Deng et al. [15] and Yang et al. [17]. Then, the same number of negative samples was randomly selected from all the unknown circRNA-drug associations. Finally, the same numbers of positive and negative samples were selected for training.
Implementation details and performance evaluation
The model used in this study was implemented based on PyTorch and PyG, and we evaluated the predictive performance of our model using 5-fold cross-validation (5CV). The training epochs were set to 40, the learning rate to 0.05, and the weight decay to 0.01. The number of layers for both the intra-type attention-based encoder and the inter-type attention-based encoder was set to 1, and the output dimensions were both set to 16. The thresholds for distinguishing the head and tail nodes of circRNAs and drugs were set at 27 and 39, respectively. The number of attention heads was set to 5, and the remaining hyperparameters were set as follows: γ = 1/75 and φ_c = φ_d = 1/120. During evaluation, we randomly divided all the samples into 5 folds. Four of these folds were used as a training set while the remaining fold was treated as a test set. Seven metrics are used to compare model performance: AUC, AUPR, Accuracy, Precision, Recall, F1-Score, and Specificity. Improved model performance is reflected by higher AUC and AUPR values. The F1-Score is the harmonic mean of precision and recall, while specificity measures the ability of the classifier to correctly identify negative cases.
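A sketch of the 5-fold cross-validation loop and the seven reported metrics, built on scikit-learn; fit_and_score stands in for the full DHANMKF pipeline, and the 0.5 decision threshold is an assumption.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import (roc_auc_score, average_precision_score, accuracy_score,
                             precision_score, recall_score, f1_score)

def evaluate_fold(y_true, y_score, threshold=0.5):
    """Seven metrics reported for a single cross-validation fold."""
    y_pred = (y_score >= threshold).astype(int)
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    return {
        "AUC": roc_auc_score(y_true, y_score),
        "AUPR": average_precision_score(y_true, y_score),
        "Accuracy": accuracy_score(y_true, y_pred),
        "Precision": precision_score(y_true, y_pred),
        "Recall": recall_score(y_true, y_pred),
        "F1-Score": f1_score(y_true, y_pred),
        "Specificity": tn / max(tn + fp, 1),
    }

def five_fold_cv(samples, labels, fit_and_score, seed=0):
    """fit_and_score(train_idx, test_idx) should return scores for the test samples."""
    kf = KFold(n_splits=5, shuffle=True, random_state=seed)
    return [evaluate_fold(labels[test_idx], fit_and_score(train_idx, test_idx))
            for train_idx, test_idx in kf.split(samples)]
```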
Performance comparison with other methods under 5-CV
The current computational methods for predicting circRNA-drug sensitivity associations are limited. We found that GATECDA [15] and MNGACDA [17] are specifically designed for predicting circRNA-drug sensitivity associations. Thus, like Ref. [15] and Ref. [17], we compared our model with seven state-of-the-art models from different domains, namely MNGACDA [17], GATECDA [15], MKGCN [35], MINIMDA [37], LAGCN [38], MMGCN [39], and GANLDA [40]. Brief descriptions of these models are provided below:
• MNGACDA [17]: a computational framework for predicting circRNA-drug sensitivity associations. This model uses multimodal networks to learn the embedded representations of circRNAs and drugs, then captures the internal information between nodes in the networks with a node-level attention Graph Auto-Encoder.
• GATECDA [15]: a computational model based on the Graph Attention Auto-encoder (GATE) for predicting circRNA-drug sensitivity associations.
• MKGCN [35]: a computational model based on GCN and MKL for predicting microbe-drug associations.
• MINIMDA [37]: a method for predicting miRNA-disease associations by constructing integrated similarity networks and using multimodal networks to obtain embedding representations of miRNAs and diseases. These representations are then fed into a multilayer perceptron for prediction.
• LAGCN [38]: LAGCN integrates various associations into a heterogeneous network, learns embeddings of drugs and diseases by graph convolution operations, and then combines multiple layers by using an attention function.
• MMGCN [39]: MMGCN differs from simple multi-source integration in that it uses a GCN encoder to obtain miRNA and disease features in different similarity views and enhances the learned representations for association prediction by using multichannel attention that adaptively learns the importance of different features.
• GANLDA [40]: this method combines heterogeneous data on lncRNAs and diseases as original features and reduces noise by using Principal Component Analysis (PCA). Then a Graph Attention Network is used to extract information from the features. Finally, a multi-layer perceptron is used to predict lncRNA-disease associations.
The prediction performance of each method was evaluated by a 5CV experiment using the same settings and the optimal parameters recommended in their respective studies. From Table 1, it can be seen that DHANMKF achieved the highest AUC and AUPR values. This indicates that DHANMKF performed better overall compared to the other models.
Evaluation of parameters
The prediction performance of DHANMKF is affected by various parameter values. The parameters of DHANMKF can be divided into four parts: the parameters in the inter-type attention-based encoder and the intra-type attention-based encoder, the bandwidth parameter γ in MKF, the regularization parameters (φ_c and φ_d) in DLapRLS, and the degree threshold parameters (K_c and K_d) for distinguishing circRNA and drug nodes as head or tail nodes.
Here, the process of parameter evaluation is demonstrated using data271 as the baseline dataset. The parameter settings of DHANMKF on the data251 dataset have been put into the Supplementary file.
Optimizable parameters in the intra-type attention-based encoder and the inter-type attention-based encoder
• Learning rate and its weight decay. The learning rate and its weight decay are the same in the intra-type attention-based encoder and the inter-type attention-based encoder. Based on the research conducted by Zhao et al. [32], we set them to 0.05 and 0.01, respectively.
• Dropout and number of model training epochs. We selected the dropout from {0.02, 0.021, ..., 0.03}. When the value of the dropout is 0.026, the model performance reaches its optimum, and with a further increase in the value of the dropout, the model performance gradually declines. The loss of DHANMKF started converging at 40 training epochs, so the number of epochs for our model was set to 40.
• The number of attention heads. To have a more powerful representation learning capacity, a multi-head attention mechanism was incorporated into the model. This parameter was tuned using 5CV. As shown in Fig. 2A, when the number of attention heads is equal to 5, the model performance reaches its optimum.
• The output dimensions. We analyzed the output dimensions of the intra-type attention-based encoder and the inter-type attention-based encoder, as shown in Fig. 2B. When the output dimension was 16, the AUC performance was best.
• The number of layers of the intra-type attention-based encoder and the inter-type attention-based encoder. As shown in Fig. 3A, when the numbers of layers of the intra-type attention-based encoder and the inter-type attention-based encoder are both 1, the AUC of DHANMKF reaches its optimal value.
Optimizable parameters in MKF and DLapRL
• The bandwidth parameter γ in MKF is actually the 1/(2σ²) of the Gaussian kernel function, that is, γ = 1/(2σ²). The parameter σ determines the smoothness of the Gaussian filter: the larger σ is, the smoother it is. Therefore, by adjusting γ, a compromise can be reached between over-smoothing and under-smoothing. As shown in Fig. 3B, when γ = 1/75, the AUC of DHANMKF reaches its optimal value.
• The parameters φ_c and φ_d play a regulating role in DLapRLS, and they can be adjusted to balance underfitting and overfitting. From Fig. 4A, we can see that the AUC of the model reaches its maximum when φ_c and φ_d are both 1/120.
Optimization of head and tail node thresholds
The threshold of the head and tail nodes can adjust the number of head and tail nodes and the number of associated types, thus affecting the embedding of corresponding nodes.As shown in Fig. 4B, the maximum AUC value of the model is achieved when the thresholds of the circRNA node and drug node are 27 and 39, respectively.
Ablation tests
Ablation experiments were conducted from two perspectives: 1. analyzing the importance of the intra-type attention-based encoder and the inter-type attention-based encoder; 2. analyzing the effects of multiple relationships. Therefore, we constructed three ablation experiments. The first one is called DHANMKF-intra, which means that DHANMKF removes the embedding produced by the intra-type attention-based encoder when doing multi-kernel fusion. The second one is called DHANMKF-inter, which means that DHANMKF removes the embedding produced by the inter-type attention-based encoder when doing multi-kernel fusion.
Fig. 2 DHANMKF's attention heads and output dimensions
The third one is called DHANMKF-multi, which means that the model no longer divides the relationships between nodes into multiple categories.
Table 2 shows the comparison results of the 5CV. From Table 2 we can see that DHANMKF performs better than all the ablated variants. In summary, there are two main reasons why DHANMKF can outperform other models. The first reason is that our model can fully capture the complex structures of the bi-typed multi-relational heterogeneous graphs. The second reason is that biological networks in reality are sparse, so it is reasonable to divide the nodes in biological networks into head nodes and tail nodes for analysis.
Case studies
To further evaluate the predictive performance of our model, we selected two drugs, PAC-1 and Belinostat, for case studies. Similar to Deng et al. [15] and Yang et al. [17], we used the circRNA-drug associations in the GDSC database as the training set and those in the CTRP database as the testing set. For each drug, we chose the top 20 circRNAs with the highest predicted scores from our model's circRNA-drug association prediction outputs for validation. PAC-1 is the first known small-molecule drug that directly activates procaspase-3 to caspase-3 [41]. It not only enhances procaspase-3 activity but also induces cancer cell apoptosis. In vitro experiments have shown that PAC-1 exhibits cytotoxicity against lymphoma, multiple myeloma, and many other cancer cells [42]. Currently, PAC-1 has been used in clinical trials for the treatment of various tumors, including but not limited to lymphoma, melanoma, solid tumors, breast cancer, and lung cancer [43]. As shown in Table 3, among the top 20 circRNAs predicted by our method to be associated with PAC-1, 16 have been identified in CTRP.
Belinostat is a small-molecule hydroxamate-type inhibitor that can inhibit the activity of class I, II and IV histone deacetylase enzymes. It has been used to treat relapsed or refractory peripheral T-cell lymphoma [44]. Table 4 shows that 17 of the top 20 circRNAs predicted by our method have been confirmed in circRic.
In order to demonstrate the performance of DHANMKF in predicting the potential associations between new drugs and circRNAs, we chose two drugs for ab initio testing, both of which had only one known circRNA-drug association. During the training phase, we removed the unique association between each of these two drugs and its circRNA. At this point, these two drugs were not associated with any circRNAs and were treated as new drugs during training. These two drugs were Bortezomib and MS-275 (Entinostat). Bortezomib is a novel proteasome inhibitor with potent chemo/radio-sensitizing effects that can overcome the traditional resistance of tumors when used in combination with chemotherapy [45]. In addition, existing clinical applications have shown that Bortezomib can improve clinical outcomes in the treatment of hematologic malignancies [46].
MS-275, also known as Entinostat, is effective in human leukemia cells and lymphoma cells. It can reduce the level of Bcl-XL in cells, induce p21 protein expression, cause cell cycle arrest (G1 phase), and induce cell apoptosis [47]. In addition, when used in combination with other drugs, Entinostat can enhance the activity of some anticancer drugs, including Rituximab, Gemcitabine, Doxorubicin, Sorafenib, and Bortezomib. Currently, Entinostat is undergoing phase III clinical trials, and its clinical data show that it has great potential for treating breast cancer [48].
As shown in Table 5, 6 of the top 10 predicted circRNAs associated with Bortezomib have been confirmed in circRic, and 7 of the top 10 circRNAs related to MS-275 have been confirmed in circRic.
Conclusions
Research over the past twenty years has shown that circRNA plays an important role in drug sensitivity. Therefore, predicting the potential associations between circRNA and drug sensitivity can be helpful in drug development and utilization, thus benefiting patients. In this study, we proposed a method based on intra-type attention and inter-type attention, called DHANMKF, for discovering potential circRNA-drug sensitivity associations. To verify the effectiveness of the model, DHANMKF was compared with six state-of-the-art methods based on 5CV on benchmark datasets. The results showed that DHANMKF achieved the best performance. In addition, to further evaluate the ability of the model to discover new drugs, a case study was conducted and the model's prediction results were validated using an independent database. The validation results clearly demonstrate that DHANMKF is an effective tool for predicting new circRNA-drug sensitivity associations.
The results show that our model outperforms the baseline models. We believe the main reasons are the following: (1) We classify the nodes into head and tail nodes, which in turn defines the types of edges connecting these two types of nodes. This allows our model to extract node embeddings from the circRNA-drug heterogeneous graph based on the different types of edges. (2) The MKL method fuses the multi-relational heterogeneous graph information captured by the two encoders in order to improve the overall performance of the model. In future studies, we plan to integrate more biomedical data in order to generate more comprehensive circRNA and drug kernels and further improve model performance. Currently, there are few studies that use computational methods to predict potential associations between circRNA and drug sensitivity, so further investigation in this field is merited.
In Eq. (18), the layer-wise kernels are computed from H^l_c and H^l_d, the l-th elements of the circRNA embedding matrix set H_c = {H_c^0, Z_c^1, ..., Z_c^t, U_c^1, ..., U_c^M} and the drug embedding matrix set H_d = {H_d^0, Z_d^1, ..., Z_d^t, U_d^1, ..., U_d^M}, respectively. γ_l denotes the corresponding bandwidth; we set γ_l = γ for l = 1, ..., K + 1, where γ is a hyperparameter and K + 1 is the number of elements in the circRNA embedding matrix set H_c and the drug embedding matrix set H_d.
(1) Compared with DHANMKF-intra and DHANMKF-inter, DHANMKF performs better, which means that the embeddings produced by the intra-type attention-based encoder and the inter-type attention-based encoder both improve the performance of the model. (2) Compared with DHANMKF-multi, DHANMKF can generate node embeddings corresponding to the different relationships between nodes.
Fig. 3 DHANMKF's layers and γ
Fig. 4 DHANMKF's φ
The intra-type attention-based encoder can efficiently aggregate information from nodes of the same type, and the inter-type attention-based encoder adequately extracts node representations from different types of nodes.
Table 1
Performance comparison based on five-fold cross-validation
Table 2
Ablation experiment
Table 3
Top 20 circRNAs related to PAC-1 predicted by DHANMKF
Table 4
Top 20 circRNAs related to Belinostat predicted by DHANMKF
Table 5
The Top 10 predicted circRNAs associated with the two new drugs Bortezomib and MS-275 | 9,021.6 | 2023-12-21T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
Universality of Pattern Formation
We develop a theory of pattern formation in non-Hermitian scalar field theories. Patterned configurations show enhanced Fourier modes, reflecting a tachyonic instability. Multicomponent $\mathcal{PT}$-symmetric field theories with such instabilities represent a new universal class of pattern-forming models. The presence of slow modes and long-lived metastable behavior suggests a connection between the computational complexity of the sign problem and the physical characterization of equilibrium phases. Our results suggest that patterning may occur near the critical endpoint of finite density QCD.
INTRODUCTION
Pattern formation is a ubiquitous phenomenon throughout physics [1][2][3]. In pattern-forming systems, equilibrium phases exhibit complicated behaviors characterized by persistent inhomogeneous patterns such as stripes and dots. Conventional scalar field theories typically satisfy reflection positivity and thus have positive, self-adjoint transfer matrices [4]; such theories cannot display the modulated behavior associated with patterning. We show that multicomponent PT-symmetric scalar field theories with complex actions are a large, natural class of local field theories exhibiting patterning.
PT-symmetric field theories are invariant under the combined action of a discrete linear transformation P and complex conjugation T [5,6]. This symmetry implies that each eigenvalue of the transfer matrix is either real or part of a complex conjugate pair. It is this latter possibility that is responsible for modulated behavior and pattern formation. The prototypical example of a PT-symmetric field theory is the iφ³ model, which is the field theory for the Lee-Yang transition [7]. In this case P takes φ → −φ. Many PT-symmetric models, including the iφ³ model, have complex actions and therefore suffer from the sign problem [8]. QCD at nonzero chemical potential is of great interest in nuclear and particle physics but has a sign problem that has severely hampered its study [9,10].
In this paper we consider a PT-symmetric field theory that can be studied both analytically and with lattice simulations. Extensive simulations of this model indicate a smooth transition between pattern morphologies, in contrast to microphase behavior [2,11], which assumes clear distinctions between different morphologies. Instead we find a unifying principle: tachyonic instabilities drive pattern formation. We analytically derive a criterion describing when homogeneous phases are unstable to patterning. The pattern-forming region of parameter space exhibits increased computational complexity characterized by slow modes and long autocorrelation times in simulations. This suggests a connection between the computational difficulty associated with the sign problem and the physical characterization of equilibrium phases in scalar field theories.
A recent development in lattice simulations allows us to simulate the complex action of Eq. (1), which represents a Hermitian scalar field φ(x) coupled to a PT-symmetric scalar field χ(x) with imaginary coupling strength ig. The dual action of Eq. (1) involves the dual potential Ṽ given by Ṽ(φ, ∂·π) = (∂·π − gφ)²/(2m_χ²) + λ(φ² − v²)² + hφ. This model was recently studied for the case h = 0 in two and three dimensions [12]. Here we extend the study of this model to the full g–h plane. We present extensive simulations in d = 2, in which we vary the parameters g and h on a 64² lattice with m_χ² = 0.5, λ = 0.1 and v = 3. We have also observed similar phenomena in d = 3.
In Figure 1 we show nine configuration snapshots of φ, each taken after 20,000 lattice updates. These snapshots are taken from a large dataset spanning a range of g and h values starting from g = 0.0. Rather than sharp boundaries between distinct morphologies, we see a smooth transition from long line segments to shorter ones as h increases, with a distribution of shapes in each configuration. The average action S varies smoothly as h and g are varied. In most of the two-dimensional simulations, we see a fairly complete ring in momentum space, consistent with pattern formation without preferred directions. In some cases, however, a smaller number of modes on the ring are excited, and the absence of isotropy is evident in the configurations. This may be related to finite size effects or to locking into an atypical but long-lived pattern. For many systems, it is known that the energy is minimized by regular patterns, typically stripes [13,14].
Our model is also amenable to analytical treatment. Because χ enters quadratically in the action S, it can easily be integrated out, yielding a nonlocal effective action S_eff for φ. This model has been extensively studied in the case m_χ = 0. We determine the value of the order parameter φ_0 at tree level by minimizing the potential, or equivalently by minimizing the effective potential associated with S_eff. The effect of χ on φ_0 is to restore the symmetric phase φ_0 = 0 for h = 0 at sufficiently large values of g. Given our simulation results, the details of the phase structure follow from the inverse φ propagator G⁻¹(q) obtained at tree level. Three different behaviors are possible, depending on the zeros of G⁻¹ as a function of q². If both zeros occur at q² < 0, then the propagator decays exponentially. If the zeros are complex, they must form a complex conjugate pair and the propagator decays exponentially with sinusoidal modulation. The boundary between these two behaviors is by definition a disorder line [16].
If one or both zeros are real and positive, then the linearized theory predicts that these modes will grow exponentially, indicating that the homogeneous phase is unstable. This is the region where pattern formation occurs. This region is determined by noting that G⁻¹(q) has a minimum at q² > 0 provided g > m_χ². The propagator has tachyonic modes if the minimum lies below zero, corresponding to 2g − m_χ² − 4λv² + 12λφ_0² < 0. The region predicted to have tachyonic modes is in reasonable agreement with the boundaries of the pattern-forming region observed in simulation, subject to the limitations imposed by lattice size and spacing.
Our simulations and complementary analytical studies point to a common origin for patterning. The observed ring in Fourier space appears independently of the particular morphology of the configuration in real space. Combined with the gradual transition between different morphologies, this suggests that all pattern-forming behavior is associated with tachyonic modes. We see no indication of a first- or second-order thermodynamic phase transition between supposed microphases. It is of course possible that some currently unknown operator might serve as an order parameter for what are referred to as geometric transitions associated with percolative behavior. It is known in the case of the d = 1 Ising model that there is an infinite class of nonlocal string operators, each with its own disorder line. This is associated with the behavior of the model in an imaginary magnetic field, which is the prototypical PT-symmetric problem [17], so it is plausible that such behavior may exist in other PT-symmetric models.
As a first step towards approximating the behavior of the equilibrium patterning state, we consider a simple model that provides additional insight: a configuration built as a superposition of plane waves, where the momenta k_j are constant in magnitude but uniformly distributed in direction; the phases δ_j are also uniformly distributed. The patterning we observe in simulations is strikingly similar to that in phase transition dynamics, although it is associated with equilibrium behavior. The dynamics of φ can be modeled by a relaxational Langevin equation driven by S_eff, where as usual Γ is a decay constant and η(x) is a white noise term. The difference between this model and a standard φ⁴ field theory is the nonlocal term in S_eff induced by χ, which stabilizes φ in what would otherwise be an unstable region of the phase diagram. It is easy to show that the dynamics of pattern formation have the same enhanced modes in momentum space as the equilibrated configurations do. The dichotomy between a tachyonic origin of patterned phases and the microphase model is reminiscent of the distinction between spinodal decomposition and nucleation and growth in phase transition dynamics. Spinodal decomposition is the mechanism by which unstable states equilibrate, while nucleation and growth is associated with the decay of metastable states. We now know [18][19][20] that there is typically no sharp boundary between these two mechanisms in phase transition dynamics.
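A minimal sketch of the plane-wave superposition described above; the lattice size, the number of waves, and the value of |k| are illustrative choices, and the power spectrum simply visualizes the resulting ring of enhanced Fourier modes.

```python
import numpy as np

def random_wave_pattern(L=64, k_mag=1.0, n_waves=32, seed=0):
    """Superpose plane waves with fixed |k| but random directions and phases."""
    rng = np.random.default_rng(seed)
    x, y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    phi = np.zeros((L, L))
    for _ in range(n_waves):
        theta = rng.uniform(0.0, 2.0 * np.pi)   # random direction of k_j
        delta = rng.uniform(0.0, 2.0 * np.pi)   # random phase delta_j
        phi += np.cos(k_mag * (np.cos(theta) * x + np.sin(theta) * y) + delta)
    return phi / np.sqrt(n_waves)

phi = random_wave_pattern()
# The power spectrum shows a ring of enhanced modes at |q| ~ k_mag.
power = np.abs(np.fft.fftshift(np.fft.fft2(phi))) ** 2
```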
Because of the close connection between dynamics and statics in this model, we propose that the relation of our tachyonic picture to the microphase picture is essentially the same as that of spinodal decomposition to nucleation and growth.
We now demonstrate that multicomponent PT-symmetric scalar field theories with complex actions form a natural class of models associated with pattern formation. Consider a general field theory of this class in d dimensions where both φ and χ may have more than one component. The action has the form given by Eq. (1), but with V(φ, χ) an arbitrary potential satisfying the PT symmetry condition. As before, we find homogeneous equilibrium phases by minimizing V, with (φ_0, χ_0) the global minimum. We assume that PT symmetry is maintained, which implies that φ_0 is real, χ_0 is imaginary and V(φ_0, χ_0) is real.
The one-loop effective potential V_eff(φ, χ) is given by the tree-level potential plus a one-loop term proportional to the momentum integral of ln det(q² + M), where the mass matrix M in block form is the matrix of second derivatives of V with respect to the fields. This mass matrix evaluated at (φ_0, χ_0) is not necessarily Hermitian but is PT-symmetric.
In the two-component case, we have M* = Σ M Σ with Σ = diag(1, −1). This generalizes to the multicomponent case as M* = Σ M Σ, where Σ is a diagonal matrix with entries ±1. The characteristic equations for M and M* are the same, so they have the same eigenvalues. As a consequence, the eigenvalues of M are either real or come in complex conjugate pairs.
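A quick numerical check of this statement for the two-component case; the specific parameter values are illustrative, and the parametrization simply enforces M* = Σ M Σ with Σ = diag(1, −1), i.e., real diagonal entries and purely imaginary off-diagonal entries.

```python
import numpy as np

def pt_symmetric_mass_matrix(a, d, beta, gamma):
    """2x2 matrix obeying M* = Sigma M Sigma with Sigma = diag(1, -1)."""
    return np.array([[a, 1j * beta], [1j * gamma, d]])

Sigma = np.diag([1.0, -1.0])
for params in [(1.0, 2.0, 0.1, 0.1),   # real eigenvalues
               (1.0, 1.2, 2.0, 2.0)]:  # complex-conjugate pair
    M = pt_symmetric_mass_matrix(*params)
    assert np.allclose(np.conj(M), Sigma @ M @ Sigma)   # PT-symmetry condition
    eig = np.linalg.eigvals(M)
    same_spectrum = np.allclose(np.sort_complex(eig), np.sort_complex(np.conj(eig)))
    print(eig, same_spectrum)   # spectrum is closed under complex conjugation
```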
We calculate the one-loop contribution to V_eff at the tree-level minimum (φ_0, χ_0). We see easily that det(q² + M) can be negative if and only if one or more of the eigenvalues of M are real and negative. If one or more eigenvalues are negative, then V_eff will have an imaginary part, indicating instability of the homogeneous phase, and the equilibrium phase is inhomogeneous. The decay rate of the homogeneous phase is given by an integral over the region R of q space where det(q² + M) < 0 [21]. Note that this decay rate is perturbative, representing a fast decay, in contrast to the slow modes associated with changing pattern morphology. This indicates that relaxation from a given initial condition takes little simulation time relative to the autocorrelation time; adequately sampling the equilibrium state may require a great deal of time depending on the target observables. Note that in the general case, there may be more than one homogeneous solution which is unstable to pattern formation. We cannot necessarily predict which inhomogeneous phase has the lowest free energy. In the case of a theory with three or more components, pattern formation may be quite complicated [13,14].
The observation of pattern formation in field theories with complex actions raises interesting issues about computational complexity in bosonic models with sign problems. Some of the characteristics observed in our simulations, such as large numbers of metastable configurations and very slow quasizero modes, are reminiscent of glassy behavior. It is known that the problem of finding the ground state of an Ising model with general couplings is NP-hard [22]. Certain fermionic models with sign problems have been mapped to the Ising spin glass, a known NP-hard problem [23]. In the pattern-forming region of the model studied here, the computational complexity, as measured by our ability to adequately sample equilibrium behavior, increases dramatically; this is similar to the behavior of a spin glass, but without the random character of spin glass interactions. In PT-symmetric scalar field theories, imaginary couplings can change the fundamental behavior of interactions, making attractive couplings repulsive. This in turn can set up a conflict between attractive and repulsive forces, a well-known cause of pattern formation. For example, nuclear pasta, believed to occur in neutron star crusts, arises from the attractive nuclear force and the repulsive Coulomb force [11]. Thus the connection between the sign problem and pattern formation is in hindsight natural.
Our original interest in field theories with sign problems was motivated by QCD at finite density, a multi-component field theory with a generalized PT symmetry. The widely conjectured phase structure of finite density QCD is characterized by a first-order line with a critical end point in the Z(2) universality class, similar to the model studied here. This raises the interesting possibility that finite-density QCD might exhibit pattern formation around its critical end point, composed of regions of confined and deconfined phase. As discussed above, patterns may also form out of equilibrium, an interesting feature from an experimental point of view. | 3,003 | 2019-06-17T00:00:00.000 | [
"Physics"
] |
Theories on the Relationship between Price Process and Stochastic Volatility Matrix with Compensated Poisson Jump Using Fourier Transforms Perpetual
Investors find it difficult to determine the movement of prices of stock due to volatility. Empirical evidence has shown that volatility is stochastic which contradicts the Black-Scholes framework of assuming it to be constant. In this paper, stochastic volatility is estimated theoretically in a model-free way without assuming its functional form. We show proof of an identity establishing an exact expression for the volatility in terms of the price process. This theoretical presentation for estimating stochastic volatility with the presence of a compensated Poisson jump is achieved by using Fourier Transform with Bohr’s convolution and quadratic variation. Our method establishes the addition of a compensated Poisson jump to a stochastic differential equation using Fourier Transforms around a small time window from the observation of a single market evolution.
Introduction
Volatility measures the uncertainty of returns, which plays a major role in cash flows from selling assets at a precise future date. It is essential in financial markets due to price fluctuations and for the prediction of stock prices, option pricing, portfolio management, and hedging. Decision and policy makers depend on volatility to determine the bullish or bearish nature of the market and to avoid losses. The varying nature of volatility makes it difficult to predict stock prices. The Black-Scholes framework assumes constant volatility, but empirical evidence has proved otherwise, leading researchers to explore the modeling of asset volatility further. It is important to note that the original Black-Scholes framework did not include jumps in the price process. It is therefore of interest to explore to what extent the inclusion of jumps affects the dynamics of stock price volatility.
Volatility can be estimated through parametric and nonparametric methods.
When using parametric methods, volatility is modeled through a functional form of variables observed in the market. These include discrete-time volatility models such as Autoregressive Conditional Heteroscedasticity (ARCH) models, in which volatility relies only on past returns and other directly observed variables [1]. Implied volatilities are also based on parametric models. Computation of historical volatility without assuming a functional form is done by nonparametric methods. The ex-post variation of equity prices in the frequency domain was analyzed by Wang [2]. He proposed a realized periodogram-based estimator, built on the Fourier transform, which consistently estimated the quadratic variation using the log-equilibrium price process. For prices contaminated by micro-structure noise in market data, the estimator filtered out the high-frequency periodograms, converting the high-frequency data to low-frequency periodograms. It was applied to general electricity price transactions through simulation, and the proposed estimator was reported to be insensitive to the choice of sampling frequency.
Malliavin and Mancino [3] presented the computation of time series volatility using a Fourier series analysis method from observations of semimartingale data. The method is nonparametric and model-free; the Fourier coefficients of the volatility are estimated by integrating the time series.

They stated that the method is well suited to financial markets, and specifically to the analysis of high-frequency time series data and the computation of cross-volatilities. Kanatani [4] derived the linear interpolation bias of realized volatility and used the Fourier series estimator proposed by Malliavin and Mancino [3] to avoid this bias. He examined the theoretical relationship between realized volatility and the Fourier estimator and showed that the Fourier estimator is more efficient than realized volatility. He also proposed that linear interpolation should not be used in the preparation of realized volatility calculations. Hoshikawa et al. [5] compared alternative estimators theoretically and empirically, namely the classical quadratic variation method of Hayashi and Yoshida [6] and the Fourier series estimator of Malliavin and Mancino [3], in the presence of high-frequency data. They found that the Hayashi and Yoshida [6] estimator performed best among the alternatives in terms of bias and mean square error for integrated multivariate volatility, since its bias was mostly due to the bias of the drift.
The other estimators were shown to have possibly heavy bias, mostly toward the origin. They also applied these estimators to Japanese Government Bond futures and obtained results consistent with their simulations. Mancino and Sanfelici [7] studied the Fourier estimator of integrated volatility in the presence of micro-structure noise. They presented the finite-sample properties of the Fourier estimator and derived analytic expressions for the bias and mean square error of the contaminated estimator. They also showed that these expressions can be used to design an optimal, mean-square-error-based estimator that is efficient and robust to noise. Their conclusion was that the Fourier estimator is relatively unbiased and that the finite-sample bias can be made small by appropriately cutting off the highest frequencies. Mattiussi and Iori [8] analyzed a Fourier-based method to estimate volatility and correlation when prices are observed at high frequency. Their method does not require data manipulation and leads to more robust estimates than the traditional methods that have been proposed. They evaluated the performance of the Fourier algorithm in reconstructing the time-varying volatility of simulated univariate and bivariate models, and used the Fourier method to investigate the volatility and correlation dynamics of futures markets over the Asian crisis period, detecting possible interdependencies and volatility transmission. A nonparametric estimation method based on Fourier analysis applied to continuous semimartingales was proposed by Malliavin and Mancino [9]. It was mainly constructed for measuring instantaneous univariate and multivariate volatility and co-volatility from high-frequency observations. The Fourier transform of log-returns accommodates asynchronous observations and unevenly spaced data. The consistency and asymptotic normality of the Fourier estimator were analyzed. They defined the Fourier estimator expressly for co-volatility computation without manipulating the data, based on the integration embedded in the Fourier transform. Moreover, with high-frequency data, the co-volatility can be reconstructed effectively as a stochastic function of time, which allows the volatility to be handled as an observable variable in financial applications.

Cuchiero and Teichmann [10] presented a new nonparametric method to compute the trajectory of the instantaneous covariance using the Fourier transform.

The observations are discrete and come from a multidimensional price process in the presence of jumps. They extended the work of Malliavin and Mancino [3,9] by incorporating a classical jump-robust estimator of integrated realized covariance to estimate the Fourier coefficients. The path of the instantaneous covariance is reconstructed using Fourier-Fejer inversion. They proved consistency and a central limit theorem for the estimator, and analyzed its asymptotic variance, which is smaller by a factor of 2/3 compared with classical local estimators. They investigated its robustness and showed how to estimate, theoretically and empirically, the integrated realized covariance of the instantaneous stochastic covariance process. Barucci et al. [11] studied the forecasting performance of the Fourier volatility estimator in the presence of micro-structure noise. Their analytical comparison showed that the Fourier estimator performs significantly better than realized-volatility-type estimators, especially for high-frequency data and in the absence of a noise component.
They showed that the Fourier estimator outperformed the other methods designed for handling the market micro-structure contamination.
All of this research on volatility estimation using Fourier analysis has been carried out without considering the addition of a compensated Poisson jump. In this paper, we propose a nonparametric estimation method based on Fourier analysis combined with Bohr's convolution and quadratic variation, applied to a continuous semimartingale process with an added compensated Poisson jump, as an extension of the results of Malliavin and Mancino [9]. We investigate whether the addition of the compensated Poisson jump affects the results obtained. The method is suitable for measuring instantaneous multivariate volatility. It incorporates discontinuities in stock returns and helps fit market data better, reflecting the reality of the stock market [12]. Estimating volatility with a compensated Poisson jump gives a fair idea of how stock prices will move, so that investors can position themselves well for investing or hedging.

The volatility is reconstructed as a function of time, and the volatility matrix is obtained under the hypothesis that the volatility process is square integrable. We then derive an instantaneous volatility estimator from the identity, based on discrete, unevenly spaced asset prices. This is important when the derivation of stochastic volatility is performed along the time evolution in terms of contingent-claim pricing and hedging [13]. The method can be generalized to measure cross-correlations or covariances in the multivariate case. The paper is organized as follows: Section 2 deals with theoretical concepts, Section 3 discusses the estimation of stochastic volatility using Fourier transforms, and Section 5 presents the conclusion.
Theoretical Concepts
We present some important mathematical preliminaries that are significant to this paper. These preliminaries outline the definitions and theorems that will be used.
Definition 1. A stochastic process $N$ is a counting process if there exists an increasing sequence of random variables $0 = T_0 \le T_1 \le T_2 \le \cdots$, with each $T_j$ finite, such that $N(t) = \sum_{j \ge 1} \mathbf{1}_{\{T_j \le t\}}$.

Definition 2. Suppose $X$ is a metric space of the real line with metric $\theta(x, y)$. A continuous real-valued function $x(t)$ on $X$ is called almost periodic if, for every $\varepsilon > 0$, there exists a length $\ell(\varepsilon) > 0$ such that every interval of length $\ell(\varepsilon)$ contains at least one number $\tau$ for which $\theta\big(x(t+\tau), x(t)\big) < \varepsilon$ for all $t$.

Definition 3. Let $\Phi, \Psi$ be functions on the integers; their Bohr convolution is
$$(\Phi *_{B} \Psi)(q) := \lim_{N \to \infty} \frac{1}{2N+1} \sum_{s=-N}^{N} \Phi(s)\, \Psi(q-s)\, [16].$$
Definition 5. Let $M$ be an $n$-dimensional Riemannian manifold, $T(M)$ its tangent space, and $T^{*}(M)$ the dual space of $T(M)$. A differential form $\omega$ of degree one along $p$ is a map from $[0, 2\pi]$ to $T^{*}(M)$, and a vector field $Z$ along $p$ takes values in $T(M)$. If $Z$ is an adapted vector field and $\omega$ an adapted differential one-form along $p$, then $e_p(\tau)\,\omega(\tau)$ is an adapted vector on $\mathbb{R}^{n}$ [16].

Definition 6. If $\omega$ is an adapted differential one-form along $p$, then $e_p(\tau)\,\omega(\tau)$ is a linear form [16].

Theorem 1 (Itô energy identity). Let $\omega$ be an adapted differential form and $Z$ an adapted vector field along $p$; then the Itô energy (isometry) identity holds for the corresponding stochastic integrals. Proof: see [16].

Theorem 2 (Itô formula for the complex case). Let $V^i$ and $V^j$ be martingales; applying the complex Itô formula to their product gives
$$d\big(V^i V^j\big) = V^i\, dV^j + V^j\, dV^i + d\big[V^i, V^j\big].$$
Proof: see [17].

If $p = 2$, we obtain a special case of the Burkholder-Davis-Gundy inequality below (Theorem 3), which holds with explicit constants $k_2$ and $K_2$; see [18] for the proof.
Remark 1. The compensated Poisson process $\tilde N(t) = N(t) - \lambda t$ is a martingale with respect to its own filtration $\mathcal{F}_t$ [14].

Definition 7. Let $V_t$ be a real-valued stochastic process defined on the probability space $(\Omega, \mathcal{F}, P)$, with time $t$ ranging over the non-negative real numbers. The $p$th variation along a partition $\Pi = \{0 = t_0 < t_1 < \cdots < t_n = t\}$ is defined as
$$V_t^{(p)} = \lim_{\|\Pi\| \to 0} \sum_{k=1}^{n} \big| V_{t_k} - V_{t_{k-1}} \big|^{p},$$
where $\|\Pi\|$ is the norm (mesh) of the partition. For $p = 1$ this defines the first variation, or total variation, process; the process is of bounded variation if this quantity is finite. For $p = 2$, the $p$th variation equals the quadratic variation if the sum converges. For a generalized Itô process $dX_t = \mu_t\, dt + \sigma_t\, dB_t$, where $B$ is a standard Brownian motion, the quadratic variation is $[X]_t = \int_0^t \sigma_s^2\, ds$. The quadratic variation of a compensated Poisson process is $[\tilde N]_t = N_t$.
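As a quick numerical illustration of the last statement (not part of the paper), one can simulate a compensated Poisson path on a fine grid and check that the sum of squared increments approaches $N(t)$; the intensity and grid below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Check numerically that the quadratic variation of a compensated Poisson
# process equals N(t).  Intensity, horizon, and grid are hypothetical.
lam, T, n = 3.0, 2.0, 200_000
dt = T / n
dN = rng.poisson(lam * dt, size=n)                    # Poisson increments on a fine grid
N = np.cumsum(dN)                                     # Poisson path N(t)
Ntilde = N - lam * dt * np.arange(1, n + 1)           # compensated process N(t) - lambda*t

qv = np.sum(np.diff(np.concatenate(([0.0], Ntilde))) ** 2)   # sum of squared increments
print(f"quadratic variation ~ {qv:.2f},  N(T) = {N[-1]}")     # the two should be close
```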
Stochastic Volatility Estimation with Fourier Transforms
The Fourier transform is a nonparametric method: it is a frequency-domain representation, together with the mathematical operation that links this representation to a function of time. Time-varying data can be transformed from one domain into another (the frequency domain), and this is the main idea behind Fourier methods [22]. To represent a function by its Fourier transform, the function must be non-periodic and the integral of its absolute value must converge [23]. A function can be recovered from its Fourier transform by the inverse Fourier transform. When volatility changes with time, its nonparametric computation centers on small time windows, which can be daily, weekly, or monthly in high-frequency data.
The Fourier transform constructs the volatility as a function of time, allowing its iteration and the computation of the cross-correlation between price and volatility. It takes all observations into account and, being based on integration, avoids inconsistencies in the data. Let $p(t)$ be the log-price of the assets, a continuous semimartingale on a fixed time window. We model the price process with a compensated Poisson jump as
$$dp^{j}(t) = \alpha^{j}(t)\, dt + \sum_{i} \sigma_{i}^{j}(t)\, dW^{i}(t) + d\tilde N^{j}(t),$$
where $\sigma$ is adapted to a filtration while $\alpha$ need not be adapted but is bounded by a constant $c$; the $W^{i}$ are independent Brownian motions on a probability space, and $\sigma_{i}^{j}$ and $\alpha^{j}$ are random processes satisfying the conditions (QA) below.

Definition 8. Suppose we have two assets whose prices are $p^{1}(t)$ and $p^{2}(t)$; their respective volatilities and co-volatilities are the entries of the volatility matrix
$$\Sigma^{ij}(t) = \sum_{k} \sigma_{k}^{i}(t)\, \sigma_{k}^{j}(t), \qquad i, j = 1, 2.$$
Evaluated at a fixed time $t$, the volatility matrix gives the instantaneous volatilities.
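The following is a minimal simulation sketch of such a jump-diffusion log-price on the window $[0, 2\pi]$; all parameter values (drift, volatility profile, jump intensity and size) are hypothetical and chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate dp = alpha dt + sigma dW + d(compensated Poisson) on [0, 2*pi].
n = 5000
t = np.linspace(0.0, 2.0 * np.pi, n + 1)
dt = t[1] - t[0]

alpha = 0.05                                    # drift (hypothetical)
sigma = 0.2 * (1.0 + 0.5 * np.sin(t[:-1]))      # time-varying volatility (hypothetical)
lam, jump_size = 1.5, 0.03                      # jump intensity and size (hypothetical)

dW = rng.normal(0.0, np.sqrt(dt), n)
dN = rng.poisson(lam * dt, n)
dp = alpha * dt + sigma * dW + jump_size * (dN - lam * dt)   # compensated jump increment
p = np.concatenate(([0.0], np.cumsum(dp)))                   # log-price path p(t)
```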
The Identity Relation for a Complex Martingale Case
We present the following propositions with their proofs below.
Proposition 5. The price process and the volatility matrix with the compensated Poisson jump are related by the identity established below.

Proof. We establish an identity that relates the Fourier transform of the price process $p(t)$ to the Fourier transform of the volatility matrix. The drift $\alpha$ does not contribute to the quadratic variation [24], so without loss of generality we set $\alpha = 0$; then $p$ is a semimartingale.

Suppose the price process has a volatility matrix and a compensated Poisson jump component. Since $\alpha = 0$, the Bohr convolutions involving $\alpha$ vanish, and the only surviving convolution involving $M$ is that of $M$ with itself. From Definition 3 and Equation (10), the Bohr convolution of the volatility and the compensated Poisson jump then yields the identity relating the volatility matrix and the compensated Poisson jump. The volatility matrix and the compensated Poisson jump are related to the price process by this identity, with the volatility independent of the stock's Brownian motion. Proof.
Using the Itô formula of Theorem 2 to expand the complex martingale, and then the definition of the Fourier transform, we obtain, for any integers $n \ge 1$ and $q$ with $|q| \le n$, an expression for the products of Fourier coefficients in terms of Bohr convolutions; by symmetry the resulting sums reduce to a quantity $Q_n$. Setting $\Gamma(0) = 0$ (the Fourier transform of the initial price), the definition of $Q_n$ gives the integral expression evaluated next.

From Equation (4), with $\alpha = 0$ and two assets, the Itô energy identity of Equation (1) controls the error terms. Expressing Equation (17) with the Cauchy-Schwarz inequality, evaluating the remaining terms with the Burkholder-Gundy inequality, using that $\cos(qt)$ is bounded by 1, and applying a change of variables in time, these terms vanish as $n \to \infty$. Suppose $p$ is the price process satisfying Equation (3); then the instantaneous volatility function with a compensated Poisson jump can be written in terms of $\mathrm{Vol}(p)$, the volatility of the price process $p(t)$ at time $t$, and integers $q, n$.
Proof. From Equation (7), and then from Propositions 2 and 3, the identity relating the Fourier transform of the price process with a compensated Poisson jump to the volatility simplifies, and the Fourier-Fejer summation reconstructs the volatility as a function of time. From Definition 8, the instantaneous volatility function is obtained by extracting the diagonal entries of the matrix and summing them. In comparison with the jump-free case, the identity relating the price process and the volatility matrix changes when the compensated Poisson jump is added, which means that the addition of the compensated Poisson jump has an effect on the volatility of the price process.
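To make the mechanics concrete, below is a rough sketch of a Malliavin-Mancino-style Fourier estimator (Fourier coefficients of log-returns, Bohr convolution, Fourier-Fejer inversion). It does not reproduce the paper's jump-adjusted identity, and the function name, band sizes, and normalizations are illustrative choices, not the paper's.

```python
import numpy as np

def fourier_volatility(t, p, n_freq=200, n_fejer=30):
    """Sketch of a Malliavin-Mancino-type estimator of instantaneous volatility
    from log-prices p observed at times t in [0, 2*pi].  Illustrative only."""
    dp = np.diff(p)                                     # log-returns
    s = np.arange(-n_freq, n_freq + 1)
    # Fourier coefficients of dp: c(s) = (1/2pi) * sum_j exp(-i s t_j) dp_j
    c = (np.exp(-1j * np.outer(s, t[:-1])) @ dp) / (2.0 * np.pi)

    # Bohr convolution: Fourier coefficients of the (co)volatility
    q_grid = np.arange(-n_fejer, n_fejer + 1)
    c_vol = np.zeros(q_grid.size, dtype=complex)
    for k, q in enumerate(q_grid):
        mask = np.abs(q - s) <= n_freq                  # keep q - s inside the frequency band
        c_vol[k] = (2.0 * np.pi / (2 * n_freq + 1)) * np.sum(c[mask] * c[(q - s)[mask] + n_freq])

    # Fourier-Fejer inversion back to the time domain
    fejer = 1.0 - np.abs(q_grid) / (n_fejer + 1)
    return np.real(np.exp(1j * np.outer(t, q_grid)) @ (fejer * c_vol))
```

Applied to the simulated path above, e.g. `sigma2_hat = fourier_volatility(t, p)`, the reconstructed curve roughly tracks $\sigma^2(t)$ plus the jump contribution to the quadratic variation.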
Stochastic Volatility Estimation for a Specific Case
Suppose the price process follows a specific jump-diffusion with drift $\alpha$, diffusion coefficient $\sigma$, and jump coefficient $\beta$. Then, from the theory of convolution, the volatility $\mathrm{Vol}(p)$ on the time window $[0, t]$ is computed by changing the origin of time, rescaling to reduce the window to $[0, 2\pi]$, and applying the Fourier transform with Bohr's convolution and quadratic variation. We prove a general identity relating the Fourier transform of the price process $p(t)$ with a compensated Poisson jump to that of the instantaneous multivariate volatility.
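The rescaling step can be illustrated in a few lines; the timestamps below are hypothetical tick times in seconds.

```python
import numpy as np

# Shift the origin and rescale observation times on [t0, t0 + T] to [0, 2*pi],
# the window used by the Fourier method.
def rescale_times(obs_times):
    obs_times = np.asarray(obs_times, dtype=float)
    return 2.0 * np.pi * (obs_times - obs_times[0]) / (obs_times[-1] - obs_times[0])

tau = rescale_times([34200.0, 34201.5, 34203.1, 34260.0])   # hypothetical timestamps
print(tau)    # first element is 0, last element is 2*pi
```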
Theorem 3 (Burkholder-Davis-Gundy inequality). For every $1 \le p < \infty$ there exist positive constants $k_p, K_p$ such that, for all local martingales $X$ with $X_0 = 0$ and all stopping times $\tau$,
$$k_p\, \mathbb{E}\Big[ [X]_{\tau}^{p/2} \Big] \;\le\; \mathbb{E}\Big[ \big( \sup_{t \le \tau} |X_t| \big)^{p} \Big] \;\le\; K_p\, \mathbb{E}\Big[ [X]_{\tau}^{p/2} \Big].$$
Given the Brownian motion $B(t)$, Equation (9) yields the Fourier estimator of the instantaneous volatility with a compensated Poisson jump, and from Equation (10) the Bohr convolution of the jump-diffusion process gives the identity relating the price process to the volatility matrix and the jump-diffusion process. | 4,268.6 | 2017-06-19T00:00:00.000 | ["Mathematics"] |
Institutional Investor Portfolio Stability and Post-IPO Firm Survival: A Contingency Approach

This study examines the influence of institutional investor portfolio stability on the survival of 379 IPO firms that went public in 1997. I find a negative relationship between the amount of stable institutional investment in newly public firms and post-IPO firm failure. Consistent with multiple agency theory, I also find that outside director board control weakens the influence of stable institutional investment on post-IPO firm failure. These results provide support for multiple agency theory and highlight the importance of differences among and between principals and agents in the post-IPO setting.
Introduction
A large body of research examines the role of institutional ownership in shaping organizational outcomes (Dalton, Hitt, Certo, & Dalton, 2007; Shleifer & Vishny, 1997; Short, 1994). A subset of these studies examines the influence of institutional ownership on the performance of established corporations (Daily, 1996; Dalton, Daily, Certo, & Roengpitya, 2003). These studies are commonly grounded in agency theory (Dalton et al., 2007), which posits that high levels of institutional investment discipline managers to act in the interests of shareholders rather than in their own (Jensen & Meckling, 1976). While a substantial body of research has focused on the ability of institutional investors to discipline managers, the results of these studies have been mixed. Indeed, two meta-analyses (Dalton et al., 2003; Sundaramurthy, Rhoades, & Rechner, 2005) found no statistically significant relationship between institutional ownership and firm performance.

I suggest that extant research has paid scant attention to the effects of differences in institutional investor interests when examining linkages between institutional ownership and firm outcomes. Indeed, a growing body of research suggests that institutional owners differ in their investment objectives and time horizons (Koh, 2007; Porter, 1992). Whereas traditional agency theory assumes that principals such as institutional investors are homogeneous in their interests, research drawing upon multiple agency theory logic suggests that principals vary in their interests and that these differences shape organizational actions (Connelly, Tihanyi, Certo, & Hitt, 2010; Tihanyi, Johnson, Hoskisson, & Hitt, 2003).

Multiple agency theory (Hoskisson, Hitt, Johnson, & Grossman, 2002) examines the agency problem of conflicts of interest between multiple principals and multiple agents. As such, multiple agency theory draws upon the principal-agent relationship of traditional agency theory (Jensen & Meckling, 1976) but accounts for the increasing complexity of principal-agent relationships in the corporate world (Child & Rodrigues, 2003).
Multiple agency theory recognizes the potential for conflicting interests between and among different principals and agents as a result of differing investment time horizons (Arthurs, Hoskisson, Busenitz, & Johnson, 2008;Hoskisson et al., 2002).
This study addresses gaps in extant research by drawing on multiple agency theory to consider the influence of differences in institutional investor investment time horizon preferences on the survival of newly public firms.Specifically, this study develops theory and hypotheses which address the influence of stable institutional investment, defined as institutional investment which exhibits a long-term investment horizon manifest through low levels of portfolio churning, on the failure of newly public firms.In doing so, I propose that stable institutional investors may possess greater motivation and ability to monitor management in the post-IPO context than short-term institutional investors.Moreover, I suggest that stable institutional investors may allow newly public firm executives to make the investments necessary to ensure post-IPO firm survival.
This study attempts to answer the question, "Does the amount of IPO firm equity held by institutional investors with stable investment portfolios influence post-IPO firm survival?" As such, it considers the role that post-IPO institutional investment time horizons play in influencing IPO firm adaptation to the rigors of public trading. In doing so, this study contributes to multiple agency theory by demonstrating that some principals, in this case institutional investors with long-term investment horizons, are better equipped than those with shorter-term horizons to monitor newly public firms and thereby help ensure IPO firm survival. Moreover, I contribute to multiple agency theory by considering the manner in which institutional investment portfolio stability interacts with other managerial monitoring mechanisms that may produce similar agency and time horizon benefits, in order to test the theory developed in this study.
In the section that follows I develop theory and hypotheses which address the influence of this unique institutional investment characteristic in the post-IPO context.Next I discuss our sample and analytic procedures.I then proceed to discuss the results of our hypotheses tests and, discuss our findings and contributions.I conclude by discussing limitations of this study and opportunities for future research.
Theoretical Framework and Hypotheses
Newly public firms exhibit a higher rate of failure than their more seasoned counterparts. For example, research by Fama and French (2004) shows that the probability of survival for newly public firms is lower than that of firms with greater experience on public equity exchanges. High rates of IPO firm failure negatively influence the wealth of investors and entrepreneurs. Accordingly, developing an understanding of the factors that affect the failure rates of newly public firms is of interest to entrepreneurs, stockholders, and society in general. Fischer and Pollock (2004) posit that the high failure rates typical of the IPO transition stem from the fact that newly public firms face a variety of challenges as they adapt to a new institutional environment. For instance, moving from the private to the public arena may require a change of organizational goals and performance objectives. Indeed, public investors may be less tolerant of performance volatility and possess shorter time horizons than private investors (Price Waterhouse, 1995). This suggests that managers of newly public firms need to adapt to the objectives and challenges presented by public shareholders (Fischer & Pollock, 2004). Agency theory suggests that newly public firms face greater potential for agency problems than pre-IPO firms because the separation between ownership and control results from the issuance of additional equity shares (Jensen & Meckling, 1976). As such, newly public firms must learn to cope with increased formal governance procedures and the reporting requirements of the Securities and Exchange Commission (Husick & Arrington, 1998; Price Waterhouse, 1995).
A variety of factors have been linked to the high failure rates of newly public firms.For example, research suggests that firm size (Bhabra & Pettway, 2003;Hensler, Rutherford, & Springer, 1997), age (Hensler et al., 1997) and financial performance (Bhabra & Pettway, 2003;Platt, 1995) are negatively related to post-IPO firm failure.More recently research has shown that average management team tenure and IPO deal network embeddedness are also negatively related to post-IPO firm failure (Fischer & Pollock, 2004).The nature of equity ownership in IPO firms represents another stream in this line of research.Extant research in this area has generally focused on venture capital ownership.Studies in the area suggest that venture capital ownership enhances post-IPO firm survival chances by providing newly public firms with resources and monitoring (Jain & Kini, 2000).Moreover, research suggests that CEO and management ownership reduce the likelihood of post IPO firm failure by aligning the interests of management with the long term survival of the firm (Fischer & Pollock, 2004;Hensler et al., 1997).
Despite the wealth of research on post-IPO firm survival, little research examines the influence of institutional ownership on post-IPO firm survival. This is surprising, considering that multiple studies have suggested that the characteristics of IPO investors are likely to play a key role in determining the actions of newly public firms (Fischer & Pollock, 2004; Higgins & Gulati, 2006). The addition of institutional investment to a firm's ownership group is likely to influence the objectives and operations of newly public firms. The transition from private to public markets, a.k.a. the initial public offering (IPO), represents a significant developmental stage in the life of a firm (Daily, Certo, Dalton, & Roengpitya, 2003). Indeed, the IPO represents a point of transition from one institutional environment to another (Fischer & Pollock, 2004). This transformational event effectively resets the organizational clock (Fischer & Pollock, 2004), thereby creating a context in which the actions of principals, such as institutional investors, and agents, such as IPO firm executives, are highly likely to affect long-term organizational outcomes.
A primary challenge presented by the IPO transition for newly public firms is to learn how to keep institutional investors satisfied.Failure to keep such professional money managers satisfied is likely to result in some form of intervention on their part (Sanders & Carpenter, 2003).Indeed, failure to keep institutional investors satisfied can result in multiple actions being taken by institutional investors to discipline firm executives such as executive replacement (Boeker, 1992;Puffer & Weintrop, 1991) and reducing executive compensation (Rajagopalan & Finkelstein, 1992;Wiseman & Gomez-Mejia, 1998).Another action institutional investors can take to demonstrate their dissatisfaction is to 'vote with their feet' (Parrino, Sias, & Starks, 2003).In other words, institutional investors can sell their holdings, resulting in a decreased stock price.As stock price decreases from this selloff, the board of directors is increasingly likely to act to remove the executives (Fredrickson, Hambrick, & Baumrin, 1988) and firm executives are less likely to receive bonuses and be able to cash in stock options.As a consequence, IPO firm executives are likely to be keenly aware of the potential impacts institutional investor selloff of their firm's stock may have on their individual earnings and career prospects.
Prior studies suggest that this type of short-term investment behavior is particularly likely in the IPO context (Higgins & Gulati, 2006;Rock, 1986).However, while institutional investor stock sell off represents a possible action for institutional investors the tendency for individual institutional investors to engage in this behavior is likely to vary from one institutional investor to another (Bushee & Noe, 2000;Tihanyi et al., 2003).Indeed, prior research demonstrates that institutional investors vary substantially in the degree to which they churn their portfolio of investments (Bushee, 1998).In the section that follows I develop hypotheses regarding the survival benefits of having institutional investors that exhibit stability in their investment portfolio for newly public firms.For the purposes of this study, institutional investor stability represents the extent to which an institutional investor does not turn over, or churn its portfolio of investments.
Institutional Investor Stability
As noted previously, one source of the challenges facing newly public firms stems from the increased potential for agency problems to arise (Certo, Daily, Cannella, & Dalton, 2003). The notion that the potential for agency problems increases as firms transition from being privately held to publicly traded is reflected in the myriad structural and reporting requirements IPO firms must comply with before making their equity offerings. For example, IPOs require firms to issue additional shares of equity, which may result in increased ownership dispersion. This is due in large part to the fact that entrepreneurs often undertake IPOs in order to liquidate their shares (Brau & Fawcett, 2006), which increases the separation between ownership and control. Agency theory suggests that as the separation between ownership and management becomes more extensive, the interests of managers and stockholders are more likely to diverge (Eisenhardt, 1989; Fama & Jensen, 1983b). Indeed, the SEC reporting and disclosure requirements necessary for public listing are intended to reduce the occurrence and severity of agency problems.

A traditional agency theory perspective suggests that the presence of institutional investors should serve to alleviate this problem (David, Hitt, & Gimeno, 2001). Indeed, research on institutional ownership within larger corporations often subscribes to the notion that institutional investment will discipline managers to focus on the long-term strategic objectives of the organization (Useem, 1996). Yet recent research drawing upon multiple agency theory logic suggests that not all institutional investors exhibit the same investment objectives, particularly with regard to their investment time horizons (Connelly et al., 2010; Hoskisson et al., 2002). Multiple agency theory suggests that the principals and agents of traditional agency theory vary in their interests and that these differences shape the decisions they make (Hoskisson et al., 2002). Consistent with this view, research demonstrates that differences in institutional investor, managerial, and director time horizons influence firm strategic decisions (Arthurs et al., 2008; Connelly et al., 2010; Hoskisson et al., 2002).

Drawing upon multiple agency theory logic, I suggest that the amount of stable institutional investment in a newly public firm enhances its survival prospects through the monitoring of firm management. Stable institutional investors are likely to possess greater ability and motivation to monitor management given the long-term nature of their investment time horizon. This is because they may capture the value created by long-term firm investment decisions (David et al., 2001). Moreover, institutional investor stability may reduce the pressure IPO firm executives feel from the threat of institutional investor stock selloff. As a result, IPO firm executives may be able to dedicate more cognitive resources to strategic and operational decision making, as well as to facilitating the organizational changes required by the IPO transition.
As a result, executives of firms with stable institutional investment may be more comprehensive in the decisions they make.Conversely, the lower the stability of institutional investors investing in a newly public firm, the higher the pressure felt by top management to focus on short-term earnings management.The greater the pressure felt by top managers, the more likely they will be to take mental shortcuts and engage in limited search to arrive at their strategic and operational choices, thereby reducing the effectiveness of their decisions.Accordingly, we argue that stable institutional investors' tendency to own a firm's stock for a long period of time will encourage and enable managerial resources to focus on strategic and operational issues instead of pressuring them to focus on managing investor and analyst concerns regarding short-term performance fluctuations (Bushee, 1998;Fischer & Pollock, 2004;Higgins & Gulati, 2006).
The monitoring benefits of stable institutional investment are likely to be particularly valuable in the context of IPO firms, where institutional investors often depend on short-term arbitrage opportunities for their portfolio gains (Aggarwal, 2003). Pressure from such typical IPO institutional investors is likely to push newly public firm executives to emphasize short-term financial gains. In contrast, stable institutional investment may serve to buffer newly public firms from the short-term pressures typical of the IPO context, as a result of stable institutional investors' willingness to tolerate short-term performance disappointments. As a result, newly public firm executives with stable institutional investment will likely be better able to focus on the long-term operations and strategy of the firm that are necessary to ensure firm survival. Thus, I hypothesize the following: Hypothesis 1: The amount of IPO firm equity held by stable institutional owners is negatively related to the likelihood of newly public firm failure.
Institutional Investment Portfolio Stability and Outside Director Control: A Contingency Approach
Drawing upon multiple agency theory I suggest that newly public firms with high amounts of stable institutional investment experience a lower likelihood of post-IPO failure.I have highlighted the agency and time horizon benefits as the mechanism through which stable institutional investment creates the hypothesized survival effect.However, alternative mechanisms might create similar hypothesized survival effects.For example, it is possible that stable institutional investors may focus their portfolios on specific types of investments that meet their investment objective.As such, it is possible that institutional investment portfolio stability may represent an outgrowth of such objectives and thus reflect the type of firms that stable institutional investors are investing in, rather than the monitoring mechanisms I discussed in developing Hypothesis 1.
My view is that the agency and time horizon benefits that I have articulated, as well as the investment objectives of institutional investors, likely operate jointly to influence the likelihood of post-IPO firm failure. Accordingly, my intent is not to challenge the idea that institutional investor portfolio stability may reflect the chosen investment strategies and risk tolerances of institutional investors. Rather, I wish to demonstrate that this institutional investment characteristic also influences the likelihood of post-IPO firm failure through its role in monitoring managers to focus on the long-term objectives of the organization. In order to determine whether stable institutional investment in fact affects post-IPO firm failure by encouraging managers to focus on long-term objectives rather than on managing short-term earnings, I investigate how the amount of stable institutional investment interacts with other governance mechanisms that may also discipline newly public firm executives toward long-term time horizons.
Traditional formulations of agency theory suggest that outside director control of the board of directors remedies the agency problem (Fama & Jensen, 1983a).Research on corporate board structure utilizing traditional agency theory suggests that board independence from management facilitates board monitoring (Daily, Dalton, & Cannella, 2003).Implicit in such studies is the assumption that outside directors possess a more long-term orientation than inside directors.Research utilizing multiple agency theory challenges this assumption in several ways.For example, some research suggests that outside directors tend to over rely upon short-term financial information because they often lack knowledge of the firm and industry (Lorsch & MacIver, 1989).Such reliance focuses outside directors on short-term investments.
In contrast, inside directors often possess a greater understanding of their firm and industry, allowing them to make higher-quality long-term decisions (Zahra, 1996). Moreover, inside directors, as employees of their newly public firms, will be affected by the survival or failure of their firm, whereas outside directors will not (Arthurs et al., 2008). Consistent with this view, research has found that inside directors emphasize long-term strategic decisions in the IPO context (Hoskisson et al., 2002).
Drawing upon this logic, I posit that outside director control of the board will encourage a short-term focus among newly public firms. As such, outside director interests are likely to conflict with those of stable institutional investors. This view is consistent with recent research suggesting that, given their ties to their respective firms, inside directors act to reduce underpricing in IPO firms, whereas a greater proportion of outsiders increases it (Arthurs et al., 2008). Accordingly, I suggest that outside director control of the board will reduce the effect of stable institutional investment on IPO firm survival by encouraging a focus on short-term financial results rather than long-term strategic and operational issues. I therefore hypothesize the following: Hypothesis 2: Outside director control of the board reduces the negative relationship between the amount of IPO firm equity held by stable institutional owners and the likelihood of newly public firm failure.
Data and Sample
The theory and hypotheses in this study were tested on a sample of firms that went public during the calendar year of 1997.This sample was selected largely because the theoretical arguments developed in this study focus on the challenges faced by newly public firms.Selecting IPO firms from 1997 allowed each IPO firm to be tracked for several years following its IPO to develop measures of post-IPO firm survival and also allow for the control of temporal IPO market fluctuations.
The sample for this study was drawn from the Securities Data Corporation (SDC) Global New Issues database, which provides information on new equity issues. Based upon this initial sample, IPO prospectuses were identified from EDGAR, resulting in 379 firms. Each of these 379 firms issued stock to public markets (i.e., NASDAQ, NYSE, AMEX) for the first time. Additionally, each firm was headquartered in the United States at the time of its IPO; meeting this criterion controls for potential cultural differences that are beyond the scope of this study. In line with prior research on IPOs (Ritter, 1991), each firm must not have been classified as a corporate spin-off, unit issue, mutual-to-stock conversion, real estate investment trust, or leveraged buyout. The sample data analyzed in this study were collected over the calendar period 1998-2001.
Dependent Variable
Data on firm failure were gathered from the Center for Research in Security Prices (CRSP) database. CRSP records a delisting code for firms that delist from a stock exchange. Because firms may delist from a stock exchange for a variety of reasons (merger, acquisition, etc.) that do not correspond to firm failure, prior research has used delisting codes ranging from 500 to 585, which indicate a firm's inability to meet the requirements for listing on an exchange, as a measure of firm failure (Fischer & Pollock, 2004). Based upon this range of CRSP delisting codes, I coded firm survival (0) or firm failure (1) for each of the years in the study period. As appropriate for the statistical technique used, and consistent with prior studies examining IPO firm failure (Fischer & Pollock, 2004), a firm was dropped from the sample after delisting, and the remaining firms were right-censored.
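A minimal sketch of this coding step is shown below; the example rows are hypothetical, and the delisting-code column is referred to by the conventional CRSP field name `dlstcd`.

```python
import pandas as pd

# Code post-IPO failure from CRSP delisting codes, treating 500-585 as failure.
crsp = pd.DataFrame({
    "permno": [1001, 1002, 1003],
    "dlstcd": [100, 520, 233],       # 100 = still listed, 200s = merger/exchange, 500s = dropped
})
crsp["failed"] = crsp["dlstcd"].between(500, 585).astype(int)
print(crsp)
```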
Independent Variables
The data used to create this measure were drawn from the CDA/Spectrum Institutional Ownership Database (CDA) from Thomson Financial Publishing accessible through Wharton Research Data Systems.CDA collects ownership information on all institutions required to file an SEC form 13-f.As Higgins and Gulati (2006:9) note, "The Spectrum database 'reverse'-compiles this information so that information may be obtained for companies invested in, rather than the company doing the investing".
In order to capture the stability of institutional investment in each sample IPO firm, I first identified each individual institutional investor that owned equity in at least one of the sample firms. Next, I created a measure of portfolio stability, adapted from Bushee (1998), to capture the variability of each identified institutional investor's portfolio holdings. In this measure, W_k is the two-year total of the quarterly portfolio weights (shares held times stock price at quarter's end) in firm k reported at the end of each quarter, and ΔW_k is the two-year total of the absolute value of quarterly portfolio weight changes in firm k reported at the end of each quarter. Portfolio stability (PS_i) thus represents the percentage of an institutional investor's equity portfolio that does not change during the prior two years. I then used the portfolio stability (PS_i) measure of individual institutional investors to capture the degree of stability of each IPO firm's institutional investment. First, for each sample firm I created a weighted average of institutional investor portfolio stabilities (APS_k), with weights based upon the percentage of the IPO firm's institutional investment owned by each individual institutional investor: PS_ik represents the portfolio stability of firm k's i-th institutional investor, I_ik the number of shares in firm k owned by that institutional investor at year's end, and I_k the total shares of firm k's stock owned by institutional investors at the calendar year's end.

To create the final measure of the amount of stable institutional investment in each IPO firm used in testing the hypotheses (IIPS_k), I weighted the average level of institutional investor portfolio stability (APS_k) by the total percentage of IPO firm equity owned by institutional investors, where I_k represents the total shares of firm k's stock owned by institutional investors at year's end and S_k the number of shares of firm k's common stock outstanding at year's end. This final measure of institutional investment portfolio stability (IIPS_k) was added to one, logged (natural logarithm), and updated annually. Additionally, this variable was lagged one year in accordance with the temporal ordering of the hypotheses.
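The sketch below illustrates how the three measures chain together. The extracted text omits Bushee's (1998) exact formula, so the portfolio-stability function uses one plausible reading (one minus total absolute weight changes over total weights); all holdings figures are hypothetical.

```python
import numpy as np

def portfolio_stability(weights):
    """PS_i: share of an institution's portfolio value that did not turn over during
    the prior two years.  Plausible reading of Bushee (1998), not the exact formula."""
    w = np.asarray(weights, dtype=float)        # rows = quarters, columns = holdings
    return 1.0 - np.abs(np.diff(w, axis=0)).sum() / w.sum()

# Hypothetical quarterly portfolio weights for three institutions holding IPO firm k.
ps = np.array([
    portfolio_stability([[0.20, 0.80], [0.20, 0.80], [0.25, 0.75]]),   # mostly stable
    portfolio_stability([[0.50, 0.50], [0.10, 0.90], [0.70, 0.30]]),   # high churn
    portfolio_stability([[0.40, 0.60], [0.40, 0.60], [0.40, 0.60]]),   # fully stable
])
shares_i = np.array([1_000, 4_000, 2_000])      # I_ik: firm-k shares held by each institution
I_k = shares_i.sum()                            # total institutional holdings of firm k
S_k = 20_000                                    # firm-k shares outstanding

aps_k = np.sum(ps * shares_i / I_k)             # APS_k: holdings-weighted average stability
iips_k = np.log(1.0 + aps_k * I_k / S_k)        # IIPS_k: weighted by institutional ownership, +1, logged
print(f"PS = {np.round(ps, 3)}, APS_k = {aps_k:.3f}, IIPS_k = {iips_k:.3f}")
```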
Outside Director Control
Drawing upon the corporate governance literature, I created two commonly utilized proxies for managerial monitoring (Dalton, Daily, Ellstrand, & Johnson, 1998): the percentage of outside directors on the board of directors, and the separation of the titles of CEO and chairperson of the board. Separation of the CEO and board chairperson titles and outside director representation are two common board structural elements that foster board independence from management (Johnson, Dailey, & Ellstrand, 1996). Recent research suggests that such structural elements of the board of directors play a key role in shaping managerial and board attention (Tuggle, Sirmon, Reutzel, & Bierman, 2010). Descriptions of these two variables follow. I utilized a common measure of outside director independence from management: the percentage of members of the board of directors who are not employed by the focal firm, i.e., the percentage of outside directors. The separation of the titles of CEO and chairperson of the board was measured by creating a dichotomous variable, CEO-chairperson separation, coded (1) if the CEO and chairperson titles were not held by the same individual and (0) if the same individual held both titles.
Control Variables
Prior research suggests that the influence of organizational change is time-dependent (Amburgey, Kelly, & Barnett, 1993; Fischer & Pollock, 2004). Accordingly, I control for the number of years that have passed since each sample firm's IPO. Multiple studies suggest that venture capital may influence IPO-related outcomes (Brau, Brown, & Osteryoung, 2004; Daily, Certo, & Dalton, 2005; Higgins & Gulati, 2003; Jain & Kini, 2000). As such, I also control for venture capital (VC) backing with a dichotomous variable indicating whether (1) or not (0) a firm was venture backed at the time of its IPO. I also controlled for aspects of IPO performance. First, I controlled for IPO proceeds, which represent the financial resources garnered as a result of the IPO; prior research suggests that firms with greater IPO proceeds may be more capable of funding firm growth and expansion (Fischer & Pollock, 2004; Jain & Kini, 2000). I calculated this variable by taking the natural logarithm of the product of the total number of shares offered and the share price at the end of the first day of trading. I also controlled for IPO underpricing, which represents both money left on the table for the IPO firm and a means to achieve organizational legitimacy (Daily, Certo et al., 2003; Pollock, Gulati, & Sadler, 2002). I measure IPO underpricing by taking the natural log of one plus the percentage change in stock price between the initial price set for the stock and the closing price on the first day of trading (Pollock et al., 2002).

Newly public firms face a liability of market newness (Certo, 2003). Given their entrepreneurial nature, IPO firms often also experience the liability of newness described by institutional theorists (Freeman, Carroll, & Hannan, 1983; Singh, Tucker, & House, 1986). As such, I control for firm age, measured as the natural logarithm of one plus the firm's age at the time of its IPO. Institutional theory also suggests that small firms may suffer from a liability of smallness (Baum & Oliver, 1991; Freeman et al., 1983). Accordingly, I controlled for firm size by calculating the natural log of one plus firm revenues. This variable was lagged one year, logged to correct for skewness, and updated annually.
Underwriters may play a key role in certifying IPO firms to public markets (Carter, Dark, & Singh, 1998;Carter & Manaster, 1990).In order to control for underwriter prestige I utilized the often relied upon Carter-Manaster measure of underwriter prestige (Ritter & Welch, 2002).To correct for skewness, I took the natural logarithm of this variable.
Firms in high technology industries may experience greater growth as well as higher failure rates than those in low technology industries.Accordingly, I controlled for firm industry with a dummy variable indicating whether a firm was classified as participating in a high technology industry or not (Certo, Covin, Daily, & Dalton, 2001;Fischer & Pollock, 2004).
I also control for firm profitability.This measure was created by calculating 100 times firm return on assets (ROA).I drew the data on firm income and assets necessary to create this variable from Compustat.This variable was updated annually, logged(Note 1), and lagged one year.Finally, I control for average TMT tenure by taking the natural log of 1 plus the average of executive tenures with an IPO firm as reported in the IPO prospectus.Firms possessing TMTs with substantial experience working together may be better able to coordinate and implement firm growth initiatives (Eisenhardt & Schoonhoven, 1990;Kor & Mahoney, 2000;Penrose, 1959).
Method of Analysis
To test the hypotheses regarding post-IPO firm failure, I utilized Cox proportional hazard analysis (Allison, 1984; Yamaguchi, 1991). Event history analysis, or hazard analysis, is concerned with the patterns and correlates of event occurrence (Yamaguchi, 1991). Hazard analysis is particularly well suited to analyzing longitudinal data in which the outcome of interest is a discrete event and the timing of that event's occurrence is of central interest (Allison, 1984; Yamaguchi, 1991). A Cox proportional hazard model was selected over other forms of hazard analysis for multiple reasons. First, the interval between the IPO date and the end of the first fiscal year is not equal across sample firms; unlike other forms of hazard analysis, the Cox model relaxes the assumption of equal intervals. Second, proportional hazard models do not require researchers to specify how time influences the outcome of interest. Third, Yamaguchi (1991) notes that the proportional hazards model represents a popular approach for analyzing the timing of event occurrence.
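As a sketch of the modeling mechanics only (the firm-level data below are randomly generated stand-ins for the real CRSP/CDA/Compustat variables, and the column names are mine), a Cox model with the interaction term of Hypothesis 2 could be fit with the `lifelines` package as follows.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 379                                           # sample size used in the study

df = pd.DataFrame({
    "years_survived": rng.integers(1, 5, n),      # time to failure or censoring
    "failed":         rng.integers(0, 2, n),      # 1 = delisted for failure (CRSP 500-585)
    "iips":           rng.uniform(0.0, 0.4, n),   # stable institutional investment measure
    "pct_outside":    rng.uniform(0.3, 0.9, n),   # proportion of outside directors
    "underwriter":    rng.uniform(0.0, 2.2, n),   # Carter-Manaster prestige (logged)
    "roa":            rng.normal(0.0, 1.0, n),    # profitability control
})
df["iips_x_outside"] = df["iips"] * df["pct_outside"]   # interaction term for Hypothesis 2

cph = CoxPHFitter()
cph.fit(df, duration_col="years_survived", event_col="failed")
cph.print_summary()   # H1: negative coefficient on iips; H2: positive coefficient on the interaction
```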
Results
Table 1 displays descriptive statistics and a correlation matrix for the variables used in this study. The final sample consisted of 379 firms, 71 of which were counted as failures during the study window. Consistent with prior studies of post-IPO firm failure, firms delisted because of merger or acquisition were included in the sample up to the point at which they were acquired and censored thereafter (Fischer & Pollock, 2004). Model 3 presents the results of the test for interactions between institutional investor portfolio stability and the proxies for outside director control. To test Hypothesis 2, I utilized the approach for testing interactions outlined by Cleves, Gould, and Gutierrez (2004), which relies upon interpreting the sign of the Cox regression coefficients. The coefficient of the interaction term between institutional investor portfolio stability and CEO-chair separation in Model 3 is not statistically significant. However, consistent with a weakening effect of the percentage of outside directors on the relationship between stable institutional investment and post-IPO firm failure, the interaction coefficient of institutional investment stability and percentage of outside directors is positive and statistically significant (p < .01). This suggests that as the percentage of outside directors increases, the effect of institutional investment stability on firm failure is reduced.
Discussion
This study extends extant research in the areas of institutional investment and IPOs by examining the influence of the portfolio stability of institutional investors in newly public firms on their post-IPO survival.I suggest that extant research examining the influence of institutional investment on firm performance has overlooked two theoretically important issues.First, I argue that the effects of institutional investment on firm performance may be masked in larger established firms given their inertia and bureaucracy.I am unaware of any studies that consider the influence of institutional investment on post-IPO firm survival.Rather, extant studies on the influence of institutional investment on organizational performance have largely examined the influence of institutional investment in large, well established organizations and fail to find a consistent relationship.In order to address this empirical gap we examined the effects of institutional ownership in the post-IPO context.Second, I posit that differences in the temporal interests of institutional investors shape their effectiveness in monitoring management.In order to examine these issues this study focused on examining the role that the institutional investment portfolio stability plays in facilitating IPO firm adaptation to the rigors of public trading.
Theoretical Implications
The findings of this study have important theoretical implications for understanding the performance of newly public firms and the effects of institutional ownership.The results of this study generally support our thesis regarding the benefits of stable institutional investor monitoring with regard to the survival prospects of newly public firms.Specifically, I explore how differences in institutional investment portfolio stability enhance the ability of newly public firms to adapt to the rigors of public trading.I argued and found support for the claim that the amount of stable institutional investment in a firm contributes to managerial monitoring, thereby reducing the likelihood of post-IPO firm failure.
This study also contributes to the growing body of research on multiple agency theory. The results provide support for multiple agency theory propositions regarding the importance of taking into account differences in principals' interests, particularly investment time horizons, when considering the influence of principals and agents on organizational outcomes. The partial support for my proposition regarding outside director board control further supports multiple agency theory by challenging the agency theory assumption that outside directors possess a more long-term orientation than inside directors. Combined, these findings contribute to a growing body of research highlighting the importance of accounting for the heterogeneity of institutional owner interests when examining organizational outcomes (Bushee, 1998; Connelly et al., 2010) and extend this research by demonstrating how such differences influence a key firm performance outcome, firm survival.
Managerial Implications
Gaining the support of entities that ensure a firm's survival represents one of top management's most important jobs (Pfeffer & Salancik, 1978).The findings of this study possess significant implications for the managers of IPO firms regarding this important role.Specifically, the results of this study demonstrate that the support of institutional investors with stable investment portfolios enhances the survival of newly public firms.As a result, entrepreneurs, underwriters, and venture capitalists may want to carefully consider the nature of the institutional investors they target with their offerings.
Moreover, this study provides implications for board staffing during the IPO transition. Consistent with prior studies (Arthurs et al., 2008), the findings suggest that the benefits of board independence may come at the cost of reduced newly public firm performance. Specifically, the results suggest that the survival benefits provided by stable institutional investment may be offset by poor choices regarding the proportion of outside to inside directors on post-IPO boards.
Limitations
The examination of IPO firms provides several advantages for researchers.Perhaps the most important advantage for organizational scholars is the ability to track these firms over their entire lives as publicly-traded entities.However, as IPO firms are generally considered to be successful entrepreneurial organizations, we suggest caution in generalizing these findings to other types of entrepreneurial firms.Moreover, researchers should also be cautious of extrapolating these results to more seasoned publicly-traded firms, or to firms that underwent their IPOs under different market conditions.
The design and analytic tools used in this study do not preclude the possibility that alternative mechanisms are responsible for the relationships observed. Specifically, it is possible that the pattern of relationships between institutional investment stability and firm failure is an outgrowth of the investment strategies and objectives implemented by institutional investors and, as such, a by-product of institutional investors' risk/reward profiles. In an attempt to address this issue, where possible I collected longitudinal data and lagged the variables one year. I also controlled for a variety of factors which extant research suggests may influence the outcomes of interest in this study. Moreover, I utilized contingency logic to demonstrate that managerial monitoring was at least one mechanism influencing post-IPO failure, by developing and testing hypotheses regarding the interaction between institutional investor portfolio stability and other sources of monitoring in predicting post-IPO failure. While I cannot completely rule out alternative explanations, the pattern of contingency effects found in this study lends partial support to my theoretical arguments.
Future Research
A potential extension to this work could involve determining the influence of institutional investor portfolio behaviors on other types of firms and on other forms of firm performance. Examining these issues in more detail might allow us to better understand the relationship between institutional investment portfolio stability and firm performance. Finally, future research may consider the impact of institutional investor portfolio behaviors on the effectiveness of strategic actions such as mergers, acquisitions, and strategic alliances. For instance, future research might examine how institutional investment portfolio stability influences the benefits of mergers, acquisitions, geographic diversification, or product diversification. Research in this area would increase our understanding of how institutional ownership shapes the success of organizational strategic actions.
Table 1 .
Means, standard deviations, and correlations
Table 2 .
Cox regression coefficient estimates of IPO firm failure. Table 2 presents the results of the Cox regression analyses. Model 1 displays the results for the control variables. The effect of years since the IPO (p<.001) is positively related to firm failure. Contrastingly, underwriter reputation (p<.001), profitability (p<.05), and average TMT tenure (p<.001) are negatively related to post-IPO firm failure. Model 2 displays the results corresponding to the test of Hypothesis 1 (H1). In support of H1, I found a negative relationship between institutional investment portfolio stability and firm failure (p < .01). | 8,369.2 | 2012-07-01T00:00:00.000 | [
"Business",
"Economics"
] |
A Modified Expectation Maximization Approach for Process Data Rectification
Process measurements are contaminated by random and/or gross measuring errors, which degrades the performance of data-based strategies for enhancing process performance, such as online optimization and advanced control. Many approaches have been proposed to reduce the influence of measuring errors, among which expectation maximization (EM) is a novel, parameter-free approach proposed recently. In this study, we examined the EM approach in detail and argue that the original EM approach is not able to rectify measurements contaminated by persistent biases, which is a pitfall of the original approach. We therefore propose a modified EM approach that circumvents this pitfall by fixing the standard deviation of the random error mode. The modified EM approach was evaluated on several benchmark cases of process data rectification from the literature. The results show the advantages of the proposed approach over the original EM in both solution efficiency and data rectification performance.
Introduction
With the advancement of smart manufacturing, process measurements play an increasingly important role in modern chemical manufacturing plants [1][2][3]. The measurements are unavoidably contaminated by random errors, and often by large gross errors as well, which degrades the performance of measurement-based process monitoring, control, and optimization strategies [1]. To recover the true values of process variables from contaminated measurements, many approaches to data rectification, i.e., reducing random and gross errors simultaneously, have been proposed since the 1960s [2].
The first way identifies gross errors with a statistical test by assuming that random errors follow a normal distribution [10]. A data reconciliation procedure, i.e., solving a constrained least squares problem whose objective is to minimize the difference between the measured values and reconciled values satisfying the process models, is then carried out to estimate the true values of the measurements not contaminated by gross errors, while the true values of the measurements contaminated by gross errors are treated as unknown parameters to be estimated. Although the algorithmic parameters, such as the critical values of a statistical test, can be chosen with clear statistical meanings, only one gross error can be identified at a time because of the smearing effect of a large gross error; approaches based on statistical tests must therefore identify gross errors one by one, and elaborate frameworks must be designed to guarantee the performance of data rectification [11].
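To make the reconciliation step concrete, the sketch below solves the linearly constrained weighted least-squares problem in closed form for a toy mass balance; the balance matrix and measurements are hypothetical placeholders, not data from the cited studies.

```python
import numpy as np

def reconcile(y, sigma, A):
    """Closed-form data reconciliation for linear balances A @ x = 0.

    Minimizes (y - x)^T Sigma^{-1} (y - x) subject to A @ x = 0, where
    Sigma = diag(sigma**2). Returns the reconciled estimates x_hat.
    """
    Sigma = np.diag(sigma ** 2)
    # Lagrangian solution: x_hat = y - Sigma A^T (A Sigma A^T)^{-1} A y
    correction = Sigma @ A.T @ np.linalg.solve(A @ Sigma @ A.T, A @ y)
    return y - correction

# Hypothetical example: two streams merging into one (x1 + x2 - x3 = 0)
A = np.array([[1.0, 1.0, -1.0]])
y = np.array([1.02, 2.05, 2.95])        # noisy measurements
sigma = np.array([0.02, 0.03, 0.04])    # standard deviations of random errors
print(reconcile(y, sigma, A))
```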
The second way is based on robust estimators [6], which can simultaneously reduce the influences of random and gross errors by solving a constrained nonlinear least squares problem once. Different from the approaches of the statistical test described above, it is assumed that measurements contaminated by random and/or gross errors can be described by a heavy tail statistical distribution, such as contaminated normal [12], Cauchy [6], redescending [13], quasi weighted least squares (QWLS) [14] and correntropy [15] etc., which can effectively reduce the smearing effect of gross errors. The advantages of robust estimators for data rectification are: (1) gross errors can be identified with data reconciliation simultaneously; (2) the parameters of the robust estimators can be determined via Monte Carlo methods with clear statistical meanings [16] or online line search methods based on the Akaike information criterion (AIC) [13]. Currently, the robust estimators may be the most popular approach for process data rectification.
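As a hedged illustration of the robust-estimator route, the sketch below rectifies the same kind of linear balance with a Cauchy loss; the tuning constant c and the toy data are illustrative assumptions rather than values taken from the cited references.

```python
import numpy as np
from scipy.optimize import minimize

def cauchy_rectify(y, sigma, A, c=2.385):
    """Robust data rectification with a Cauchy M-estimator.

    Minimizes sum_i log(1 + (r_i / c)^2) with r_i = (y_i - x_i) / sigma_i,
    subject to the linear balances A @ x = 0.
    """
    def loss(x):
        r = (y - x) / sigma
        return np.sum(np.log(1.0 + (r / c) ** 2))

    cons = {"type": "eq", "fun": lambda x: A @ x}
    res = minimize(loss, x0=y.copy(), constraints=cons, method="SLSQP")
    return res.x

A = np.array([[1.0, 1.0, -1.0]])
y = np.array([1.02, 2.35, 2.95])        # second measurement carries a gross error
sigma = np.array([0.02, 0.03, 0.04])
print(cauchy_rectify(y, sigma, A))
```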
The third way is based on mixed integer programming (MIP) techniques [8][9][10], whose objective is to minimize the number of identified gross errors and the difference between the measured and rectified values, where the trade-off can be realized by the AIC [10,13] or a predetermined weighting factor of the objective function [9,17]. The MIP techniques show competitive or comparable performances to robust estimators for process data rectification, and the MIP technique based on the AIC is free from setting algorithmic parameters to balance the fitness and complexity of the model; although a critical value of identifying gross errors still needs to be determined, this value can be easily obtained from daily operation experiences of instrumentation engineers [10,17].
Recently, a novel family of process data rectification methods based on statistical inference was proposed, such as the approaches based on Bayesian inference [18]. Being a widely used method of statistical inference, expectation maximization (EM) [19][20][21][22][23][24][25] has also been applied to process data rectification [26,27]. The statistical inference approaches are based on the Bayes rule [18], which infers the unknown parameters by combining the information from the collected data (measurements) with the prior probability distribution of the inferred parameters. Although current works assume a prior distribution before process data rectification, some reasonable prior information on the random and gross errors of measurements, such as the standard deviation of random errors and the occurrence of gross errors for a specified sensor, can be collected and modeled from the experience of plant operators and historical process data [18,27,28]. The authors therefore believe that the statistical inference approach to process data rectification deserves further study.
The established EM approach [26] is an interesting statistical inference approach because it has no algorithmic parameters to be determined before data rectification and only assumes that the measurement errors follow a finite Gaussian mixture distribution. However, the large number of parameters to be estimated with the EM algorithm [29] leads to a low-efficiency solving procedure, and, in the authors' experience, the original EM approach cannot be applied to rectify process measurements contaminated by persistent biases, because the estimated standard deviation of the random error mode becomes close to that of the gross error mode, which makes bias identification difficult.
In this work, we argue that, for the original EM approach, the estimated standard deviation of the random error mode is unavoidably enlarged by a persistent bias, which makes bias detection difficult. To circumvent this problem, we present a modified EM approach in which the standard deviation of the random error mode is estimated before the EM iterations with a robust method [30]; the standard deviation of the random error mode is then not enlarged by a persistent bias, and it becomes possible to detect a bias from the EM calculation result. Compared to the original EM approach, the modified one also reduces the number of parameters to be estimated, so the time consumed by the EM iterations can be reduced significantly.
The remainder of this paper is organized as follows. Section 2 introduces the principles of the established EM approach for data rectification. The proposed modified EM approach and its detailed calculation steps are presented in Section 3. Section 4 describes the performance analysis procedure used herein. The performance of the modified EM approach is evaluated and compared to that of the original approach in Section 5. Finally, Section 6 concludes the paper.
Data Rectification Problem
In addition to random errors following a normal distribution, three types of gross errors, namely drift, outliers, and persistent bias, also commonly contaminate measurements, as shown in Figure 1. Figure 1 illustrates how the different types of gross errors contaminate a process measurement whose true value is 1 at steady state. Clearly, any of these systematic errors significantly reduces the reliability of a process measurement, which is the basis of online decision-making for enhancing process performance, and a systematic error cannot be eliminated with data reconciliation methods, because a zero-mean random error is assumed in data reconciliation. Essentially, outliers within a measurement horizon are also random errors, with a larger variance than random noise, and the original EM algorithm can identify and estimate outliers well. A persistent bias, as shown in Figure 1, is not random and will enlarge the variance estimated by the original EM algorithm, as shown in Section 3.1 below. A drift error of a sensor is also non-random, with an error size that increases over time, and it will likewise lead to an enlarged estimated variance, as a persistent error does, supposing the average over a measurement horizon is taken as representative of the horizon. In the following, we show how a data rectification problem is set up as a statistical inference problem.
Suppose a measurement horizon {y_j,h}_{h=t-H+1}^{t} is collected at time t, which involves H data points measured at different time points h for the jth process variable, and a data matrix Y ∈ R^{J×H} whose rows are the measurement horizons of all J measured process variables, where y_j,h is the element in the jth row and hth column of Y. A steady-state process data rectification can then be formulated as the maximum likelihood estimation problem described by the following Equation (1).
In Equation (1), the objective function is the log-likelihood of the sampled measurements y_j,h under the condition that the true value of the jth measurement is x_j, with x_j being the jth element of the vector x; f(x) represents the process model and g(x) denotes the inequality constraints on the process variables, reflecting operational specifications and experience-based bounds.
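Based on this description, Equation (1) can plausibly be written as the following maximum likelihood program; this is a reconstruction from the surrounding text rather than the original typesetting.

```latex
\max_{x}\;\sum_{j=1}^{J}\;\sum_{h=t-H+1}^{t}\ln p\!\left(y_{j,h}\mid x_j\right)
\qquad\text{s.t.}\quad f(x)=0,\;\; g(x)\le 0 \tag{1}
```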
Assuming different distributions of the measurement errors, the formulation of the objective function of Equation (1) varies [6]. For the data reconciliation problem considering random errors only, it is assumed that random errors follow a normal distribution, and the logarithm of the objective function is a quadratic one. If a heavy tail distribution is assumed for measurement errors, as in the situation of data rectification using robust estimators, the logarithm of the objective function shows a more complex formulation, sometimes the function is nonconvex or even discontinuous [6].
In both situations described above, the distribution parameters of the measurement errors are fixed in advance. For data rectification using the EM approach, the parameters of the measurement error distribution are instead inferred with the Bayes rule, as described in the following section.
Expectation Maximization Approach
For the EM approach [26], the difference between the hth measurement and the true value of the jth process variable, namely, ε j,h = y j,h − x j , is described with the finite Gaussian mixture model shown with Equation (2) [26].
In Equation (2), w_j,1 represents the probability of the random error mode, with zero mean and standard deviation σ_j,1, and w_j,2 represents the probability of the gross error mode, with zero mean and a standard deviation σ_j,2 that is larger than σ_j,1 when a gross error occurs. Writing θ_j = {σ_j,k, w_j,k}_{k=1,2}, the likelihood of ε_j,h for the kth error mode can be described by the following Equation (3) [26].
In Equation (3), z j,h is a latent variable to be estimated and z j,h = k represents that the error mode of y j,h is the kth one, where k = 1 represents a random error mode, and k = 2 means a gross error mode. Considering both error modes, the whole likelihood of ε j,h is represented as following Equation (4).
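Consistent with these definitions, Equations (2)-(4) can be reconstructed as follows (a hedged reconstruction from the text, not the original typesetting):

```latex
p\!\left(\varepsilon_{j,h}\mid\theta_j\right)=\sum_{k=1}^{2} w_{j,k}\,
\mathcal{N}\!\left(\varepsilon_{j,h};\,0,\,\sigma_{j,k}^{2}\right),
\qquad \varepsilon_{j,h}=y_{j,h}-x_j \tag{2}

p\!\left(y_{j,h}\mid z_{j,h}=k,\,x_j,\theta_j\right)=
\frac{1}{\sqrt{2\pi}\,\sigma_{j,k}}
\exp\!\left(-\frac{\left(y_{j,h}-x_j\right)^{2}}{2\sigma_{j,k}^{2}}\right) \tag{3}

p\!\left(y_{j,h}\mid x_j,\theta_j\right)=\sum_{k=1}^{2} w_{j,k}\,
p\!\left(y_{j,h}\mid z_{j,h}=k,\,x_j,\theta_j\right) \tag{4}
```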
Based on the above descriptions, the previously mentioned Equation (1) can be written as the following Equation (5).
It is difficult to solve Equation (5) directly, because w j,k cannot be obtained explicitly. Hence, an EM approach was applied to solve Equation (5), by replacing Equation (5) with the following Equation (6) [26].
In Equation (6), Θ = {x_j, θ_j}_{j=1,...,J} and Θ^(t) denotes the estimate of Θ at the tth iteration; the probability of the error mode for a measurement, namely p(z_j,h = k | y_j,h, Θ^(t)), is estimated from y_j,h and Θ^(t) using the Bayes rule, as described by Equation (7).
In Equation (7), p_j,h,k is calculated with Equation (3), and P(z_j,h = k | Θ^(t)) = w_j,k because Θ = {x_j, θ_j}_{j=1,...,J} and θ_j = {σ_j,k, w_j,k}_{k=1,2}. The calculation of the probability P(z_j,h = k | y_j,h, Θ^(t)) is referred to as the expectation step (E-step).
After the E-step, we estimate Θ using the maximization step (M-step), namely solving Equation (6) with P(z_j,h = k | y_j,h, Θ^(t)) fixed at the values calculated in the E-step; the result is a new estimate of Θ, i.e., Θ^(t+1). It must be noted that ln p(z_j,h = k, y_j,h | Θ) is calculated with Equation (3) and the Bayes rule, as described by Equation (8).
To solve Equation (6), a coordinate search method is applied to estimate w j,k and σ j,k separately, as the following Equations (9) and (10) show [26].
With the new estimate of Θ, i.e., Θ^(t+1), obtained, we return to the E-step and check the difference between Q(Θ, Θ^(t)) and Q(Θ, Θ^(t+1)) in Equation (6); if the difference is negligible we stop the iteration, otherwise we continue [26].
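For a single measured variable, the E-step and M-step described above can be sketched as follows; the posterior and the coordinate updates mirror Equations (7), (9) and (10) as described in the text, while the initial guesses, the tolerance, and the unconstrained update of x (which in the full method comes from solving Equation (6) with the process constraints) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def em_two_mode(y, x0, sigma0=(0.05, 0.5), w0=(0.9, 0.1),
                tol=1e-6, max_iter=200):
    """Two-mode Gaussian-mixture EM for one measured variable (unconstrained sketch).

    In the full approach x is obtained from the constrained program of
    Equation (6); here it is updated by a precision-weighted mean instead.
    """
    x, sigma, w = x0, np.array(sigma0, float), np.array(w0, float)
    for _ in range(max_iter):
        # E-step (Eq. 7): posterior probability of each error mode
        lik = w * norm.pdf(y[:, None], loc=x, scale=sigma)   # shape (H, 2)
        gamma = lik / lik.sum(axis=1, keepdims=True)
        # M-step (Eqs. 9-10): coordinate updates of sigma and w
        resid2 = (y[:, None] - x) ** 2
        sigma_new = np.sqrt((gamma * resid2).sum(axis=0) / gamma.sum(axis=0))
        w_new = gamma.mean(axis=0)
        # Update x by a precision-weighted mean (stand-in for solving Eq. 6)
        prec = gamma / sigma_new ** 2
        x_new = (prec * y[:, None]).sum() / prec.sum()
        converged = abs(x_new - x) < tol and np.allclose(sigma_new, sigma, atol=tol)
        x, sigma, w = x_new, sigma_new, w_new
        if converged:
            break
    return x, sigma, w
```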
Standard Deviation of the Original EM under Persistent Bias
Although the EM approach has been successfully applied in several situations, such as non-persistent gross errors and concurrent errors of different types [26,27], there is still room for improvement in the situation of measurements contaminated by persistent biases, where (y_j,h − x_j^(t))^2 in Equation (9) is relatively large and unavoidably leads to a large σ_j,k^(t+1) even for the random error mode, whose standard deviation should be relatively small, as argued below.
Denoting the E-step posterior probability by α_j,h,k, Equation (9) can be rewritten as the following Equation (11). With h' = argmin_h (y_j,h − x_j^(t))^2 and α_j,h',k = 1, it is easy to infer that Equation (11) attains its minimum, which fluctuates around the magnitude of the bias contaminating the jth measurement and therefore leads to a large standard deviation for the random error mode of Equation (2), namely σ_j,1. Under this situation, it is impossible to set ±3σ_j,1 as the critical value for bias detection, and the original intention of Equation (2) is violated as well. To verify the above argument, the simple linear data rectification case of Ripps [31] is used here to show the influence of a persistent bias on the standard deviation of the random mode. The Ripps case involves four streams with measured flowrates, and the three linear mass balance equality constraints are shown as the following Equation (12).
The true values of the flowrates are x_1 = 0.1739, x_2 = 5.0435, x_3 = 1.2175 and x_4 = 4, with corresponding standard deviations σ_1 = 2.89 × 10^−4, σ_2 = 2.5 × 10^−3, σ_3 = 5.76 × 10^−4, and σ_4 = 4 × 10^−2 for the random noise. Here we assume that x_3 is contaminated by a bias with size 5σ_3, 10σ_3, or 15σ_3; 50 Monte Carlo simulations are then carried out for each bias size, with random noise of zero mean and the corresponding standard deviation added to every variable. For each Monte Carlo simulation, the sign of the bias is assigned randomly with equal probability. The minimum ratio of the estimated σ_3,1 to σ_3, namely (σ_3,1/σ_3)_min, is shown in Figure 2. As Figure 2 shows, in line with the above argument, the minimum estimated standard deviation of the random noise mode is several times the true value, so it is impossible to detect a bias with the traditional 3σ rule, as the original EM approach does [26].
Modification to the Original EM Approach
To apply the EM approach to measurements contaminated by persistent biases, a simple modification of the original EM approach is presented herein: we estimate the variance of the random error mode in Equation (2), i.e., σ_j,1^2, directly rather than via the EM iterations. It has been shown that σ_j,1^2 can be estimated efficiently and robustly from process measurements even when the measurements are contaminated with gross errors [30]. All other parameters of Θ in Equation (6) are still estimated using the original EM procedure described above. The influence of a bias on the estimated standard deviation of the random error is thus avoided.
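One simple robust way to pre-estimate the random-error standard deviation from the measurement horizon, in the spirit of though not necessarily identical to the method of [30], is a median-absolute-deviation estimate on successive differences, which is largely insensitive to a constant bias:

```python
import numpy as np

def robust_sigma(y):
    """Robust estimate of the random-error standard deviation.

    Successive differences cancel a constant (persistent) bias, and the
    MAD scaled by 1.4826 is a consistent estimator of sigma for Gaussian
    noise; the extra 1/sqrt(2) accounts for differencing two noisy points.
    """
    d = np.diff(np.asarray(y, float))
    return 1.4826 * np.median(np.abs(d - np.median(d))) / np.sqrt(2.0)
```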
After Θ in Equation (6) has been estimated, a criterion must be set up to detect biases in the measurements. There are two established ways of detecting a bias. The first is given by Equation (13): a measurement is declared to be contaminated by a bias if the probability of the gross error mode is larger than that of the random error mode [12]. The second uses the deviation of the reconciled value from the corresponding measured value [26]: if the following Equation (14) holds, where y̅_j is the average of the measurement horizon of the jth process variable, then a bias is detected for the jth measured variable.
Obviously, the original EM approach can only use the first way of bias detection, because the estimated value of σ_j,1 is enlarged by a persistent bias. Three criteria can be applied to the modified EM approach: a bias is detected when Equation (13) holds, which is denoted the probability criterion (PC); when Equation (14) holds, which is denoted the deviation criterion (DC); or when Equations (13) and (14) hold simultaneously, which is denoted the probability and deviation criterion (PDC).
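In code, the three criteria might look like the sketch below; the names w, sigma1, x_hat, and y_bar stand for the estimated mode probabilities, the pre-estimated random-error standard deviation, the reconciled value, and the horizon mean of one variable, and the 3σ threshold is an illustrative assumption consistent with the text.

```python
def detect_bias(w, sigma1, x_hat, y_bar, k=3.0):
    """Return bias flags under the probability (PC), deviation (DC),
    and combined (PDC) criteria for one measured variable."""
    pc = w[1] > w[0]                       # Eq. (13): gross-error mode more probable
    dc = abs(y_bar - x_hat) > k * sigma1   # Eq. (14): deviation exceeds k*sigma threshold
    return {"PC": pc, "DC": dc, "PDC": pc and dc}
```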
Based on the above description, the proposed modified EM approach is summarized in Table 1: the parameters are initialized (with x_j set to the horizon average of y_j,h and t = 1), the E-step probabilities are computed with Equation (7), σ_j,k^(t+1) and w_j,k^(t+1) are updated using Equations (9) and (10), respectively, x^(t+1) is calculated, and the procedure iterates until convergence. The modified EM with PC, DC, or PDC for bias detection is denoted MEM-PC, MEM-DC, and MEM-PDC, respectively.
The advantages of the modified EM algorithm over the original EM algorithm are: (1) the standard deviation of random error is not affected by a persistent bias, because a direct and robust variance estimation method [30] is used; (2) fewer variables need to be estimated by the modified EM algorithm, which means that the modified EM algorithm converges faster than the original EM algorithm.
Performance Analysis
To evaluate the performance of the proposed modified EM algorithm, three performance metrics are used here [9]: the overall performance (OP), the average number of Type-I errors (AVTI), and the relative error reduction (RER), defined by Equations (15)-(19), with OP = (number of correctly identified biases)/(number of gross errors simulated) and AVTI = (number of wrongly identified biases)/(number of simulation trials). The following Monte Carlo simulation procedure [6] is carried out to evaluate the performance of data rectification.
(1) For all measured variables, add random noise with zero mean and the corresponding standard deviation. (2) Add a bias to each measurement with a predefined probability p_b; the bias size is randomly distributed between 5 and 25 times the standard deviation of the random noise, and the sign of the bias, '+' or '−', is assigned randomly with equal probability. (3) Calculate the data rectification performance with Equations (15)-(19) for each evaluated method.
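A minimal sketch of this Monte Carlo procedure and of the OP and AVTI metrics is given below; the bias probability, the random seed, and the helper names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(x_true, sigma, p_b=0.2):
    """One Monte Carlo trial: add random noise and, with probability p_b,
    a bias of 5-25 sigma with random sign, following the procedure above."""
    y = x_true + rng.normal(0.0, sigma)
    biased = rng.random(x_true.size) < p_b
    size = rng.uniform(5, 25, x_true.size) * sigma
    sign = rng.choice([-1.0, 1.0], x_true.size)
    y += biased * sign * size
    return y, biased

def op_avti(detected, truth_list):
    """Overall performance (OP) and average Type-I error (AVTI) over trials."""
    correct = sum(np.sum(d & t) for d, t in zip(detected, truth_list))
    wrong = sum(np.sum(d & ~t) for d, t in zip(detected, truth_list))
    simulated = sum(np.sum(t) for t in truth_list)
    return correct / simulated, wrong / len(truth_list)
```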
Four well-known test cases of process data rectification were used here to evaluate and compare the performance of the proposed MEM approach to that of the original EM approach; they are described as follows.
The first case is the well-known steam metering network (SMN) [32], which involves 11 units interconnected by 28 streams with measured flowrates; its flowsheet is shown in Figure 3, with the true flowrates of all streams given in parentheses. For each measured variable, the standard deviation of the added random noise is set to 2.5% of its true value. The second case is a bilinear metallurgical grinding (MG) process [33], which involves four units interconnected by nine streams with measured mass flowrates and 15 measured mass fractions. The flowsheet of the metallurgical grinding process is shown in Figure 4 with the true values of all measured variables, where the true flowrates are given in parentheses and the compositions are shown to the right of the parentheses. For all measured variables, the corresponding σ of the random noise is set to 2.5% of the true value. The third case, Pai-Fisher (PF) [34], is a typical nonlinear data rectification instance, whose model is given by Equation (20). The fourth case, the Swartz case (Sw) [35], is a heat exchanger network in which streams Ai (i = 1, 2, . . . , 8) are heated by streams Bi (i = 1, 2, 3), Ci (i = 1, 2) and Di (i = 1, 2) via different heat exchangers, as Figure 5 shows. The true values of the flowrate and temperature of each stream [12] are also shown in Figure 5. The standard deviation of the random noise is set to 2.5% of the true flowrate for each flowrate and to 0.75 for the temperature of each stream. For the Swartz case, both the linear material balance equalities and the nonlinear energy balance equalities for each heat exchanger/junction are used as constraints of the data rectification. The enthalpy per unit mass of each stream is correlated with its temperature using a quadratic polynomial, as Equation (21) shows, whose coefficients are listed in Table 2 [1]. For all tested cases, the Monte Carlo simulations were carried out in a MATLAB 2018 (MathWorks, Boston, MA, USA) environment on a personal computer with an Intel Core i3-3120M CPU @ 2.50 GHz and 8 GB RAM (Intel, Santa Clara, CA, USA); random measurement noise was generated with the "normrnd" command, the "rand" command was used to assign the size and sign of a bias, and the nonlinear programs of the EM were solved with the "fmincon" command.
Results and Discussion
To evaluate the performance of the different criteria of the modified EM approach, the OP and AVTI of DC, PC, and PDC for all four tested cases are compared in Figure 6. As Figure 6 shows, PC had a higher OP but a clearly higher AVTI than DC and PDC, which indicates that the probabilities of the random and gross error modes are not a reliable basis for detecting a bias, because some variables not contaminated by a bias also have a high probability of the gross error mode. DC and PDC had the same OP and AVTI except for the bilinear MG case, where PDC detected slightly fewer biases than DC; whether this is a special case needs to be investigated in the future, since this work focuses on modifying the original EM approach for rectifying measurements contaminated by persistent biases.
Because MEM-DC and MEM-PDC had almost the same data rectification performance, MEM-DC was selected for comparison with the original EM approach, as shown in Table 3. As stated in Section 3.2, PC was used to detect biases for the original EM, because DC does not work when a persistent bias contaminates a measurement: the standard deviation of the random mode, i.e., σ_j,1, is enlarged by the bias contaminating the jth measurement, as shown in Figure 2, and in the authors' experience DC based on the 3σ rule then cannot detect any bias. As Table 3 shows, the original EM had a much lower OP and much higher AVTI than MEM-DC, which shows that a persistent bias influences not only the standard deviation of the random error mode but also the probabilities of the random and gross error modes. Interestingly, the original EM approach had only a slightly worse RER than MEM-DC, which shows that the rectified values of the two approaches are close to each other. Finally, MEM-DC consumed much less time than the original EM, because fewer parameters need to be estimated.
Conclusions
In this work, we analyzed the influence of a persistent bias on the estimated standard deviation of the random error mode for the EM approach and argued that the 3σ rule cannot be used to detect a bias when a persistent bias occurs. A modified EM approach was devised by estimating the standard deviation of the random error mode from the process measurements before the EM iterations. The performance of the modified and original EM approaches was evaluated and compared on four widely used linear and nonlinear data rectification examples, and the results show that the original EM approach cannot detect persistent biases while the modified EM can, and that the modified EM consumes much less time than the original EM owing to the reduced number of estimated parameters.
The convergence of the modified EM algorithm has not been proved; we will study this in the future to deepen our understanding of the EM approach and to increase the reliability of the proposed method.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results. | 5,805.8 | 2021-01-30T00:00:00.000 | [
"Computer Science"
] |
The Economy of Tourism and Its Impact to Other Sectors in Lampung Province Faurani
The tourism sector has been identified as a contributor to the economy of Lampung Province. This study aims to assess the direct and indirect linkages of the tourism sector with other economic sectors, analyze the sensitivity of dispersion of the tourism sector, assess the multiplier effect and power of dispersion of the tourism sector, and calculate the impact of the tourism sector's final demand on the economy of Lampung Province. The results reveal that the forward linkage of the tourism sector in Lampung is relatively small compared to its backward linkage. Both directly and indirectly, the tourism sector is a 'downstream' sector of the Lampung economy, whose output is directly consumed by final consumers. Therefore, if the tourism sector is developed, it can pull along the output of upstream sectors.
INTRODUCTION
Tourism is an important sector for many countries, and the rapid growth of tourism has been viewed as one of the challenges for new business. The United Nations World Tourism Organization (UNWTO) recorded that total international tourist arrivals grew by 5 percent in 2013, to 1.8 billion tourist arrivals, faster than in 2012 [1]. In Indonesia, tourism is also growing significantly: recent data show that the number of foreign tourist arrivals to Indonesia reached 8.6 million in 2012, up 13.6% compared to 2011 [2].
The increasing number of tourist visits to Indonesia enables the tourism sector to play a role in the economy through revenues derived from the large tourism consumption during visits to tourism destinations [3]. This can be seen from tourist expenditures during the period, which amounted to 1,133.81 million US dollars, with total tourist expenditure/spending contributing approximately 9.1 billion US dollars, an increase of 5.8% compared to 2011 [4]. Meanwhile, based on data from the Indonesian Central Bureau of Statistics in 2012, the contribution of tourism to the national economy amounted to 13.9% of total Gross Domestic Product, and it is expected to continue growing in the coming years [5].
The linkage of national tourism to other economic activities creates an interest in calculating the contribution of tourism to the economy and its dependence on social activities, especially in the places visited. This is because the tourism sector is a combination of various industries such as transport, accommodation, and food and beverages. It is therefore necessary to develop a range of methods to measure the direct economic contribution of tourism consumption to the national economy [5].
National tourism development is also associated with the development of tourism in Lampung Province, which has continued over the past few years. Lampung Province has a wide variety of attractions: nature, culture, and man-made tourism are distributed across the province, with a distinctive local character that strengthens the competitiveness of its tourism [6]. Its nature and culture tourism distinguish Lampung from other Indonesian provinces that have their own specific travel themes, for example Yogyakarta with cultural tourism, or West Java with its diverse natural attractions that strengthen the competitiveness of its tourism products [7].
The tourism sector is able to provide a multiplier effect for other business sectors [8]. Tourism activities generate demand for and consumption of goods, services, attractions, and more; this generates revenue for a tourism destination, while the provision of tour-related goods and services, infrastructure, and facilities represents expenditure for tourism areas, much of which results from activities carried out by sectors outside the tourism sector [9]. As with other sectors, tourism can contribute to the growth and development of a region or country; in other words, the size of this sector's contribution helps determine the economic growth of an area, region, or country.
The average contribution of tourism to the GDP of Lampung Province was only 1.3% during 2010-2014, compared to 32% for the agricultural sector and 17% for the processing industry [7]. However, this certainly does not mean that the sector does not need to be considered by the local government. On the contrary, an increasing contribution of the tourism sector in Lampung will, directly or indirectly, push growth and development in other sectors, considering that the sector has a significant multiplier effect on the economy [9,10]. The development of tourism in Lampung Province is illustrated by the growth in the number of tourist visits (Table 1). In 2010 the number of tourists, measured by the occupancy of hotels and accommodations, amounted to 395,961 arrivals, and it increased by 39.27% in 2011 to 551,476. In 2012 it rose to 577,893 visits, an increase of 4.79% over the previous year. A significant increase then occurred in 2013, when the number of visits amounted to 971,400, an increase of 68.1% over 2012, despite a decline of 5.6% in 2014 [7]. The development of tourism in Lampung shows that the tourism sector has linkages with other economic sectors, including service sectors such as entertainment, hotels, restaurants, transport, agriculture, trade, and processing. The increasing number of tourists who travel and spend their money in Lampung is an indicator of the development of the tourism sector as well as its supporting sectors.
Input-Output analysis was developed by Leontief in the 1930s. The I-O table has since grown to become one of the most widely accepted methods, not only for describing the industrial structure of an economy but also for predicting changes in that structure. The Input-Output model is based on a general equilibrium model [11].
According to BPS RI [12], the Input-Output (I-O) table presents information about the flows of goods and services among economic sectors in the form of a matrix. The entries along a column of the I-O table show the structure of the inputs used by each sector in the production process, in the form of both intermediate inputs and primary inputs. The entries along a row of the I-O table show the allocation of the output generated by a sector to intermediate demand and final demand. In addition, the value-added entries show the composition of sectoral value-added creation. The table thus provides an overview of: 1) the economic structure of a region, including the output and value added of each sector; 2) the structure of intermediate inputs, i.e., transactions of goods and services between production sectors; 3) the structure of the supply of goods and services, whether produced domestically or imported from outside the region; and 4) the structure of demand for goods and services, both demand by production sectors and final demand for consumption, investment, and exports.
The I-O analysis has several uses [11]: 1) to estimate the impact of final demand on output, value added, imports, tax revenues, and labor absorption in the various production sectors; 2) to view the composition of the supply and use of goods and services, especially in analyses of import needs and possibilities for import substitution; 3) to analyze price changes by looking at the direct and indirect effects of input price changes on output prices; 4) to determine the sectors with the most dominant influence on economic growth and the sectors most sensitive to economic growth; and 5) to describe the economy of a region and identify the structural characteristics of the regional economy.
This model uses an Input-Output (I-O) matrix that presents information about transactions of goods and services as well as the interconnections between units of economic activity in a region over a particular period. The basic framework of the I-O table describes the transactions of goods and services from two perspectives. The first (columns) shows the input structure of the economic sectors, the composition of the value added produced, and the structure of final demand for goods and services. The second (rows) shows the distribution (allocation) of the output of goods and services to the production process, final demand, and imports. Final demand here includes household consumption, government consumption, investment, and exports of goods and services.
In the analysis of the role of the tourism sector in the economic performance of Lampung Province, final demand is the exogenous factor that drives the creation of the production value of goods and services. In relation to the contribution of tourism, this exogenous variable takes the form of tourist consumption of goods and services.
Tourism is a field of activity that supplies most of its production to final consumption [13]. Tourism also functions as an intermediary supplier of goods and services to other sectors, a role that should not be overlooked: on the supply side it mainly pulls along the output of the hotel and restaurant sectors and the tourism sub-sector itself, as well as the machinery industry; drugs, detergents, cosmetics and other chemical products; transportation services for companies; and electrical and electronic products. The tourism sector thus supplies not only travel services but also intermediate goods and services, although the volume it supplies to other branches is relatively low and is not a determining factor for triggering economic growth. In its function as a supplier of goods and services, especially for the final consumption of other sectors, the tourism sector mostly deploys multiple direct and indirect effects throughout the economy, in both real and nominal terms.
A study of the factors that influence the number of tourist arrivals in Indonesia indicates that the amount of accommodation and the number of travel agencies positively affect the number of tourist arrivals [14]. A non-conducive security situation, on the other hand, reduces the number of tourists, while a conducive security situation in Indonesia increases the number of tourists visiting the country.
A similar study shows that the tourism sector plays a significant role in the economy of Bandung, as seen from its contribution, which ranks in the top three among all sectors [15]. The tourism sector in the city of Bandung provides a positive multiplier effect on the city's economy.
Input-Output analysis is a quantitative method that systematically measures the interrelationships among the sectors of a complex economic system. The model is also considered an important development of general equilibrium theory. The Input-Output model is a quantitative model that can provide a comprehensive picture of [16]: (1) the structure of the national or regional economy, including the structure of output and value added in each sector; (2) the structure of intermediate inputs, i.e., the use of various goods and services by the production sectors; (3) the structure of the supply of goods and services, both domestically produced and imported; and (4) the structure of demand for goods and services, both intermediate demand by production sectors and final demand for consumption, investment, and exports.
This study uses Input-Output analysis, which rests on several assumptions. The transactions used in the preparation of an I-O table are based on the following assumptions [17]: (1) each industry produces only one homogeneous commodity; (2) each industry uses a fixed input ratio in producing its output; (3) production in each industry exhibits constant returns to scale (a proportional change in inputs leads to the same proportional change in output).
Based on the explanations above, the objectives of this research are to: 1) calculate the direct and indirect linkages, the dispersion coefficients, the power of dispersion, and the labor multiplier of the tourism sector; and 2) analyze the impact of changes in the final demand of the tourism sector on the output of the tourism sector and other sectors in Lampung Province.
MATERIAL AND METHODS
Data Collection
This study is explanatory research using a quantitative approach. The quantitative data are obtained in the form of measurable figures, such as the Input-Output table of Lampung Province 2015 [6], the number of tourist visits to Lampung, and tourist spending while in Lampung. These data were obtained from the Central Statistics Agency of Lampung Province and the Provincial Tourism Office of Lampung. The Lampung Tourism Input-Output data 2016 [6] are used to address the linkages between sectors and the multiplier impacts on income and output, while the tourist expenditure data are used in the simulation of the impact of tourism on the development of Lampung's economic output.
This research was conducted in Lampung Province on the consideration that Lampung is one of the most visited tourism destinations among Indonesian provinces. The development of tourism in Lampung Province is therefore expected to pull along the development of many sectors beyond tourism. This research shows how strongly the tourism sector pulls along other sectors, and which sectors are closely aligned with, or have little relevance to, the tourism sector; the results can thus be used as a reference for local government policy.
By source, the data used in this research are secondary data (Table 3). The data were obtained from several references relevant to the problem under study, such as the Input-Output table of Lampung Province 2015 [6] prepared by the Central Bureau of Statistics of Lampung. More specifically, the data form a 23 × 23 sector Input-Output table taken from the Indonesian business sector book [12], while other supporting data, such as tourist expenditure and the number of tourists, were taken from the Tourism Office of Lampung Province. These data are used to determine the role of the tourism sector in the Lampung economy as both a provider and a consumer of inputs. The impact of this sector can be analyzed based on multiplier analysis (output, income, and employment) and the linkages between sectors [18]. For the analysis of inter-sectoral linkages and multipliers, Microsoft Excel was used. Linkage analysis (forward and backward linkages) is used to determine the degree of relatedness of a sector or sub-sector to the other sectors in an economy [19,20].
Direct Forward Linkage
Direct forward linkage is the additional increase in the production output of a sector caused by an increase in the final demand of the sector itself.
Direct Backward Linkage
Direct backward linkage is the increase in the production inputs used by a sector caused by an increase in the final demand of the sector itself.
Indirect Forward Linkage
The indirect forward linkage shows the effect on the sectors that indirectly use the output of a given sector, per unit increase in final demand.
Indirect Backward Linkages
The indirect backward linkage shows the linkage to upstream sectors that indirectly provide inputs to a given sector, per unit increase in final demand.
Dispersion Index
The analysis of development impacts is based on indices of direct and indirect linkages [11], so that the indicators can be compared across sectors. Pull power is a measure of dispersion used to examine the backward and forward linkages of the economic sectors in a region. The analysis of dispersion is divided into two types, namely the power of dispersion index and the sensitivity of dispersion index.
Power of Dispersion Index (Pull Power)
The power of dispersion index, or pull-power coefficient, is used to determine how the benefits of developing one sector are distributed to the development of other sectors through the input market mechanism. In other words, it measures the ability of a sector to increase the production growth of its upstream sectors. A sector is said to have a high power of dispersion if the coefficient is greater than one, and a low power of dispersion if the coefficient is smaller than one.
Sensitivity of Dispersion Index (Push Power)
The sensitivity of dispersion, or push power, indicates the ability of a sector to encourage growth in the production of other sectors that use the output of this sector as an input. In other words, the sensitivity of dispersion captures the responsiveness of a sector to other sectors through the output market mechanism, i.e., the ability of the sector to boost the production growth of downstream sectors. A sector is said to have a high forward linkage when the sensitivity of dispersion is greater than one, and a low sensitivity of dispersion when the value is smaller than one.
Labor Multiplier
The employment multiplier shows the change in employment caused by an initial change in final demand. Unlike the output and income multipliers, the labor multiplier cannot be obtained directly from the elements of the I-O table, because the table does not contain elements related to labor.
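The linkage and dispersion measures described above can all be computed from the technical-coefficient matrix A and its Leontief inverse L = (I − A)^{-1}. The sketch below uses a small hypothetical three-sector matrix, since the 23 × 23 Lampung table is not reproduced here; the normalization of the dispersion indices follows the standard Rasmussen form.

```python
import numpy as np

A = np.array([[0.10, 0.20, 0.05],    # hypothetical technical coefficients a_ij
              [0.15, 0.05, 0.25],
              [0.05, 0.10, 0.10]])
n = A.shape[0]
L = np.linalg.inv(np.eye(n) - A)     # Leontief inverse (total requirements)

direct_backward = A.sum(axis=0)      # column sums of A
direct_forward = A.sum(axis=1)       # row sums of A
total_backward = L.sum(axis=0)       # column sums of L (direct + indirect)
total_forward = L.sum(axis=1)        # row sums of L

# Power of dispersion (backward) and sensitivity of dispersion (forward),
# normalized so that a value > 1 means an above-average linkage.
power_of_dispersion = n * total_backward / L.sum()
sensitivity_of_dispersion = n * total_forward / L.sum()
print(power_of_dispersion, sensitivity_of_dispersion)
```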
RESULTS
Forward Linkage
The forward linkage is divided into two components, namely the direct and indirect forward linkages [13]. The value of the direct forward linkage shows that if final demand increases, the output produced by a sector increases directly. For the direct forward linkage, the impact of a change in final demand falls directly on the sector concerned, since the output generated in the production process is obtained from the inputs of the sector itself. The value of the direct forward linkage is obtained from the coefficient matrix, which gives the number of output units of a sector required to produce one unit of output of other sectors [11]. Based on the results of the analysis, the direct forward linkage of the tourism sector in Lampung consists of the sub-sector leisure, recreation and culture as the main tourism sub-sector, followed by restaurants, transportation, and hotels (Table 4). Of the four tourism sub-sectors, leisure, recreation and culture has the highest direct forward linkage at 0.1347. This means that if the final demand of this sub-sector increases by Rp 1,000,000, the output produced by the sub-sector from its own inputs will increase directly by Rp 134,700.
Meanwhile, the indirect forward linkages of the tourism sub-sectors are presented in Table 5. The sub-sector with the highest indirect forward linkage is the hotel sub-sector, with a value of 1.1422. This means that if final demand increases by Rp 1,000,000, the output of the hotel sub-sector allocated indirectly to other (downstream) sectors will increase by about Rp 1,142,000.
Backward Linkages
Backward linkage is divided into two types, namely the direct backward linkage and the indirect backward linkage [11,20]. The value of the direct backward linkage shows that if final demand increases, the inputs needed by a sector increase directly. For the direct backward linkage, the impact of a change in final demand falls directly on the sector concerned, since the inputs needed in the production process are obtained from the output of the sector itself. The direct backward linkage values are obtained from the coefficient matrix [11].
Table 6 shows that the restaurant sub-sector has the highest direct backward linkage among the tourism sub-sectors, at 0.4558. This can be interpreted as follows: if the final demand of this sub-sector increases by Rp 1,000,000, the sub-sector increases its demand for inputs from its own output by Rp 455,800.
Meanwhile, Table 7 shows that the restaurant sub-sector also has the highest indirect backward linkage among the tourism sub-sectors, at 1.4872. This implies that if the final demand of this sub-sector increases by Rp 1,000,000, the sub-sector will indirectly increase its input demand from other sectors by Rp 1,487,200.
Impact of Change in Final Demand to the Direct and Indirect Total Output
The results show that when final demand increases, additional output must be produced to meet it [20]. If final demand in the tourism sector (restaurants; leisure, recreation and culture; hotels; and transportation) increases by 10%, the direct output of these sectors increases by about Rp 74,328 billion, or 1.66% of the previous total output, because these sub-sectors are the main sectors of tourism (Table 8).
Table 8 also shows that a 10% increase in the final demand of the leisure, recreation, and culture sub-sector increases the total output of that sector by about Rp 2.87 billion, or 6.51% of its previous output. With a 10% increase in final demand, the restaurant, hotel, and transportation sub-sectors also continue to grow. The table indicates that the four sub-sectors together maintain growth of around 8.27% (Rp 14.90 billion) relative to the previous total output.
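The output impacts reported in Table 8 follow from the standard I-O relation ΔX = (I − A)^{-1} ΔF. The snippet below illustrates the calculation for a hypothetical 10% increase in the final demand of one sector; the matrices and final demand vector are placeholders, not the Lampung figures.

```python
import numpy as np

# Reusing the hypothetical A from the previous sketch
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.25],
              [0.05, 0.10, 0.10]])
L = np.linalg.inv(np.eye(3) - A)

F = np.array([100.0, 80.0, 50.0])   # hypothetical final demand by sector
dF = np.zeros(3)
dF[1] = 0.10 * F[1]                 # 10% increase in one sector's final demand
dX = L @ dF                         # direct + indirect change in sectoral output
print(dX, dX.sum())
```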
DISCUSSION
Based on the analysis of the data, the Lampung tourism sector has a larger backward linkage than forward linkage. The analysis of the dispersion coefficients found that three tourism sub-sectors (hotels; restaurants; and leisure, recreation and culture) have a power of dispersion coefficient > 1, meaning that these sub-sectors are able to increase the growth of upstream sectors.
In contrast, the sensitivity of dispersion index of the tourism sector (the restaurant; hotel; leisure, recreation and culture; and transport sub-sectors) is < 1; in other words, the tourism sector is less able to encourage the growth of downstream sectors.
We conclude that the Lampung tourism sector is in a downstream position: it generates output for direct consumption by final consumers (a tertiary sector). If the government develops this sector well and optimally, it can act as a magnet for the output of upstream sectors. To develop the tourism sector in Lampung, however, several problems common to attractions across Indonesia still need to be addressed, e.g., the lack of expansion and promotion of natural and man-made attractions within and outside the country, as well as insufficient safety, poor maintenance, and inadequate infrastructure, especially at tourist locations.
The industry concept of competitive strategy can be applied to Lampung tourism: the actors, namely tourism businesses and the government, must have a competitive strategy to be able to compete with domestic destinations such as Bali, Yogyakarta, and West Java, which have their own uniqueness and characteristics for long-term, sustainable tourism development, as well as with other countries such as Singapore, Malaysia, and Thailand [21]. With its abundant and diverse natural and cultural potential, Lampung's tourism businesses and government are expected to implement the following competitive strategies.
Promote the Unique Characteristic of Lampung tourism
This is useful for minimizing the substitution of Lampung's tourism products, so that travelers can find Lampung's uniqueness only in Lampung itself. With this competitive strategy, Lampung tourism is expected to become more advanced and to survive over the long term.
Protection and Security
For cultural and natural attractions, protection is essential in order to maintain the sustainability and authenticity of the attraction. The high cost of maintaining attractions is an obstacle for the government. In addition, safety issues should also be addressed so that the comfort of both local and foreign tourists remains guaranteed [22].
Vertical Integration
The vertical integration relevant to tourism is cooperation between tourism businesses. It raises security of supply and reduces transaction and other uneconomical costs.
Infrastructure
To support the growth of tourism in Lampung, and in particular the number of tourists, the government should start planning the provision of adequate infrastructure, particularly infrastructure related to access to tourism areas. As for the infrastructure that already exists within tourism areas (especially natural ones), we recommend not spoiling or altering their original character and hallmarks, because these are the signature and the selling point of Lampung tourism.
IMPLICATIONS
Considering the contribution of the tourism sector in pulling along the input and output of upstream sectors in Lampung, in terms of both linkages and employment, the development of the tourism sector should be a priority in the future. It is necessary to be creative with the available resources in creating attractions and in marketing Lampung tourism. One way is to use creative media such as films and music published on the internet and regularly updated, travel fairs (national and international), festivals, and the like.
Nature is the most popular tourist attraction in Lampung. Aside from natural attractions, Indonesia has diverse tourism attractions, such as man-made and cultural attractions. The government is expected to formulate policies to develop Indonesian tourism objects in a manner appropriate to each kind of attraction, without changing the uniqueness and characteristics of the tourism area.
The government should also start considering policies related to protection and safety, especially with regard to tourism areas, to ensure the safety and comfort of tourists visiting tourism sites in Lampung, given the relatively high crime rates.
CONCLUSION
The forward linkage of the tourism sector in Lampung is relatively smaller than its backward linkage, both directly and indirectly. This confirms that the Lampung tourism sector is located in the downstream (tertiary) part of the economy, with its output consumed directly by final consumers. The results also show that, being in a downstream position in the Lampung economy, the tourism sector, if developed, can pull along the output of upstream sectors.
Meanwhile, the results of the dispersion analysis show that the Lampung tourism sector is able to increase the growth of upstream sectors, while the tourism sub-sectors (restaurants; leisure, recreation and culture; transportation; and hotels) are less able to drive the growth of downstream sectors. The analysis of the tourism sub-sectors also found a multiplier effect on employment of 32.21% in 2015 relative to the previous year.
Table 1 .
Number of Tourist Arrivals based on Occupancy Rate Hotels and Accommodation in Lampung Province
Table 3 .
Code Sector of Business
Table 4 .
Direct Forward Linkage to the Economy Sector in Lampung, 2015
Table 5 .
Indirect Forward Linkage of Economy Sector in Lampung, 2015
Table 6 .
Direct Backward Linkages of Lampung Economy Sectors, 2016
Table 8 .
Impact of Tourism Final Demand Changes to Direct and Indirect Total Output Sector | 6,020 | 2016-11-02T00:00:00.000 | [
"Economics",
"Business"
] |
Simultaneous electro-generation/polymerization of Cu nanocluster embedded conductive poly(2,2′:5′,2′′-terthiophene) films at micro and macro liquid/liquid interfaces
Cu nanoparticles (NPs) have been shown to be excellent electrocatalysts, particularly for CO2 reduction – a critical reaction for sequestering anthropogenic, atmospheric carbon. Herein, the micro interface between two immiscible electrolyte solutions (ITIES) is exploited for the simultaneous electropolymerization of 2,2′:5′,2′′-terthiophene (TT) and reduction of Cu2+ to Cu nanoparticles (NPs) generating a flexible electrocatalytic composite electrode material. TT acts as an electron donor in 1,2-dichloroethane (DCE) through heterogeneous electron transfer across the water|DCE (w|DCE) interface to CuSO4 dissolved in water. The nanocomposite formation process was probed using cyclic voltammetry as well as electrochemical impedance spectroscopy (EIS). CV and EIS data show that the film forms quickly; however, the interfacial reaction is not spontaneous and does not proceed without an applied potential. At high [TT] the heterogeneous electron transfer wave was recorded voltammetrically but not at low [TT]. However, probing the edge of the polarizable potential window was found to be sufficient to initiate electrogeneration/electropolymerization. SEM and TEM were used to image and analyze the final Cu NP/poly-TT composites and it was discovered that there is a concomitant decrease in NP size with increasing [TT]. Preliminary electrocatalysis results at a nanocomposite modified large glassy carbon electrode saw a > 2 × increase in CO2 reduction currents versus an unmodified electrode. These data suggest that this strategy is a promising means of generating electrocatalytic materials for carbon capture. However, films electrosynthesized at a micro and ~ 1 mm ITIES demonstrated poor reusability.
and oil|ionic liquid (o|IL) 20 ones. In a simple 2-electrode configuration with one electrode immersed in either phase, the Galvani potential difference can be controlled externally via a potentiostat with the potential drop spanning 1-4 nm across the ITIES, ϕ_w − ϕ_o = Δ_o^w φ 8,37 . Johans et al. 38 were the first to describe an analytical solution for nucleation of metal NPs at the liquid|liquid interface. In their work, they emphasized the absence of defect sites that are common at a solid/solution interface; thus, there is a large thermodynamic barrier to particle formation at ITIES. Nevertheless, they 38 and others 8,[15][16][17][18]20,23,24,27,28,39,40 were able to experimentally demonstrate electrochemically controlled metal NP nucleation at w|o interfaces. Interestingly, Nishi's group has suggested that the molecular structure of the liquid|liquid interface is transcribed onto the NP framework 22 . They recently demonstrated that the w|IL interface played an important role mechanistically in the formation of nanostructures. Their IL was modified with a ferrocene (Fc) functional group making it redox active, and was exploited in the formation of Pd nanofiber arrays 22 .
Meanwhile, electropolymerization at liquid|liquid interfaces was initially investigated by Cunnane's group 36 , and more recently by Scanlon's group 4 and ourselves 28 . In these later reports, large, free-standing polymer films were formed. In the case of Lehane et al. 4 who generated PEDOT (poly (3,4-ethylenedioxythiophene)) using Ce 4+ in aqueous and EDOT in α,α,α-trifluorotoluene (TFT), the films were shown to be highly stable and biocompatible. Our work showcased the simultaneous electropolymerization of 2,2′:5′,2′′-terthiophene (TT) and electrogeneration of Au NPs at a micro-ITIES (25 µm in diameter) 26 , building on Cunnane's study at a large ITIES 29,30,32 . We demonstrated that miniaturization of the ITIES could be used to provide another layer of mechanistic and thermodynamic control towards smaller, low dispersity NPs.
Herein, this is expanded to the simultaneous electrogeneration and electropolymerization of Cu nanocluster incorporated poly-TT films. Electrochemical impedance spectroscopy was used to monitor film growth, while SEM and TEM imaging were used to compare film/NP morphology at different TT concentrations between the large and micro-ITIES. Two large ITIES platforms were investigated, including a 1.16 and 10 mm diameter interface. Initial testing of glassy carbon (GC) electrodes modified with the nanocomposite demonstrated excellent electrocatalytic activity towards CO 2 reduction; however, films electrosynthesized at the 25 µm and 1.16 mm interfaces demonstrated poor stability and surface coverage.
Methods
Chemicals. Copper sulphate (CuSO 4 , > 98%), lithium sulphate (Li 2 SO 4 , > 98%), 1,2-dichloroethane (DCE, ≥ 99.0%), 1-bromooctane (99%), trioctylphosphine (97%), and 2,2′:5′,2′′-terthiophene (TT, 99%) were acquired from Sigma-Aldrich. All reagents were used without additional purification. Ultrapure water from a MilliQ filtration system (> 18.2 MΩ cm) was used throughout to generate aqueous solutions. The tetraoctylphosphonium tetrakis(pentafluorophenyl)borate (P 8888 TB) ionic liquid used as an oil phase supporting electrolyte was prepared as detailed previously 41 .
Electrochemistry. Liquid|liquid electrochemical experiments were performed using a PG-618-USB potentiostat (HEKA Electroniks) in four-, three-, and two-electrode configurations at the large and micro-ITIES. In the 4-electrode mode a specialized 10 mm inner diameter cell with two Pt wires annealed into the side of the glass cell were connected to the working (WE) and counter electrode (CE) leads of the potentiostat as shown in Fig. 1B. The WE was positioned in the aqueous phase, while the CE was in the organic phase. Two reference electrodes (RE) were also employed, one in either phase and inserted into Luggin capillaries built into the specialized cell with their tips facing each other ~ 5 mm apart with the liquid|liquid interface positioned in between them, see Fig. 1B. When using the micro-ITIES in two-electrode mode, one Ag wire (Goodfellow Inc.) was integrated into the specialized pipette holder containing the aqueous phase and connected to the WE port of the head-stage, and another Ag wire immersed in the organic phase was connected to the CE/RE port, see Fig. 1A. The body of the holder was fabricated from poly(ether ether ketone) (PEEK). The pipette was backfilled with the aqueous phase through a syringe attached to a 3-way valve incorporated into the side of the specialized holder, then the tip of the pipette was immersed in the organic phase. The 25 µm diameter ITIES was maintained at the pipette tip using the syringe and monitored using an 18-megapixel CCD camera (AmScope) equipped with a 12 × magnifying lens assembly (Navitar). Micropipette fabrication has been described elsewhere 42 . Scheme 1 details the electrolytic cells employed. The experimental potential scale was referenced to the Galvani scale by simple SO 4 2− transfer, whose formal ion transfer potential was taken to be -0.540 V 16 . A second large ITIES electrolytic cell (Cell 1, general configuration) was created by using the modified holder shown in Fig. 1A; however, an unmodified borosilicate capillary (2.0/1.16 mm outer/inner diameter, Sutter Instruments) was used in place of a micropipette. Additionally, a Pt wire counter and an Ag wire reference electrode were used, coupled to the integrated Ag wire as WE in the aqueous phase, in a 3-electrode configuration.
Electrochemical impedance spectroscopy (EIS) measurements were performed with a frequency range between 10 and 20 kHz, as well as a 20 mV peak-to-peak perturbation. EIS was only measured using Cell 1 at a 25 µm ITIES in a 2-electrode configuration.
Before or in-between experiments, electrolytic cells/capillaries were cleaned using the procedure outlined in the Supplementary Information (SI).
Electrocatalysis studies were performed using a 3-electrode cell connected to a CH Instruments potentiostat (model# CHI602E) with a glassy carbon (GC, Pine Research) WE (~ 4 mm in diameter) coupled with Ag/AgCl reference (Dek Research) and Pt wire counter electrodes.
Transmission electron microscopy. All transmission electron microscopy (TEM) images were taken using the Tecnai Spirit transmission electron microscope with samples prepared on 200 Mesh Cu ultrathin/lacey carbon or 2 µm holey Au grids (Electron Microscopy Sciences).
Scanning electron microscopy (SEM). SEM imaging was performed using a JEOL JSM 7100 F equipped with an energy dispersive x-ray (EDX) detector; EDX spectra were analyzed via the DTSA II software provided by the National Institute of Standards and Technology (NIST) in the US, see https://www.nist.gov/services-resources/software/nist-dtsaii.
Results and discussion
The black, dashed curves shown in Fig. 2A-C illustrate cyclic voltammograms (CVs) recorded using Cells 1 and 2 at the micro (25 µm diameter), 1.16 mm, and ~ 10 mm diameter ITIES, respectively, without TT added, but with 5 mM of CuSO 4 in the aqueous phase. In each case, the polarizable potential window (PPW) is limited by the transfer of the supporting electrolyte ions. The large positive current increase at positive potentials is owing to the transfer of Li + /Cu 2+ from water to oil (w → o) or TB − from o → w, while the sharp negative increase in current towards negative potentials is owing to the transfer of SO 4 2− from w → o and P 8888 + from o → w 8,43 . The red, solid trace shows the initial i-V cycle at the micro-ITIES (Fig. 2A), after addition of 20 mM of TT to DCE. A positive peak-shaped wave was observed during the forward scan, towards positive potentials, with a peak potential (E p ) at roughly 0.52 V. During the reverse scan, towards negative potentials, a sigmoidal wave was observed with a half-wave potential Δ w o φ 1/2 at ~ 0.515 V. Since the interface was maintained at the tip of a pulled micropipette, the diffusion regime inside and outside the pipette is asymmetric. The former behaves under linear diffusion owing to geometric confinement within the micropipette, while the latter is hemispherical with responses similar to an inlaid disc ultramicroelectrode (UME) 44 . Thus, this signal is consistent with the transfer of negative charge from o → w during the forward scan, such that the process is diffusion limited by a species in the aqueous phase. Therefore, it was hypothesized that this signal is owing to electron transfer from TT in DCE across the ITIES to Cu 2+ in water; whereby, Cu 2+ is reduced to Cu 0 and forms nanoparticles, while TT is oxidized and electropolymerized.
To investigate this, the system was cycled a total of 25 times with these i-V curves overlaid in Fig. 2A; the dashed, purple arrow indicates how the peak current (i p ) signal evolves with each consecutive scan. There is only a small change in i p that may be owing to a localized consumption at the interface, or the formation of a nanoparticle incorporated polymer film at the ITIES. The latter would fundamentally alter the effective surface area of the interface as well as its charge transfer characteristics; both would impact the magnitude of i p . Cell 1 was also tested using [CuSO 4 ] = 0 mM and [TT] = 20 mM; however, no peak-shaped signal was recorded (data not shown). Thus, it is likely that Li + does not interact with TT and is a good electro-inactive supporting electrolyte. However, when [TT] was decreased to 10 and 5 mM in DCE, no electron transfer wave was recorded voltammetrically (data not shown).
In all cases at the micro-ITIES, an aqueous droplet was ejected onto a TEM grid for imaging using the syringe equipped on the back of the modified holder. Even at 5 and 10 mM TT, a film-like deposit was visible on the substrate under an optical microscope. Therefore, despite no observable electron transfer wave, cycling at the PPW edge-of-scan can induce electrogeneration of the nanocomposite film. These results agree well with our recent work at the w|DCE micro-interface using KAuCl 4 (aq) and TT(org) 28 ; whereby, a thin conductive polymer film, with embedded Au NPs was formed. Lehane et al. 4 also recently employed a large w|DCE interface in the electropolymerization of PEDOT with Ce 4+ as oxidant/electron acceptor in the aqueous phase. Moreover, electrodeposition of Cu, Au, Pd, and Pt NPs at large 16,30,38,40,45 and micro 27,39 interfaces have been demonstrated previously using metallocenes as electron donors dissolved in oil.
An ITIES with a diameter of 1.16 mm using a 3-electrode configuration was also tested voltammetrically, see Fig. 2B. The overall CV profile is similar to the one at a ~ 10 mm ITIES (see below) with a signal at ~ 0.7 V which is likely the electron transfer wave; however, two negative current peaks were recorded at roughly 0.4 and 0.15 V during the scan from positive to negative potentials. These may be the re-oxidation of Cu 0 or anion adsorption waves. Future work will focus on in situ spectroscopic methods to evaluate these two curve features.
Next, film generation was investigated at a 10 mm diameter ITIES using Cell 2 (see Fig. 2C). When scanning from negative to positive potentials, two peak-shaped waves were observed at -0.22 and 0.66 V, while on the reverse scan towards negative potentials, two peak signals were recorded at 0.09 and -0.26 V. The two peaks towards the negative end of the PPW form a reversible signal with a Δ w o φ 1/2 of -0.240 V which may be the result of the adsorption of anions at the surface of the growing composite film. Indeed, this agrees well with the results of Lehane et al. 4 The two signals at the positive end of the PPW are irreversible electron transfer waves similar to the result observed at the micro-ITIES in this potential region. With repeated sweeps of the potential, the irreversible electron transfer wave shifts to more negative potentials indicating a reduction in the required overpotential. As mentioned above, the liquid|liquid interface is initially free of nucleation sites, which increases the amount of applied driving force necessary to achieve NP nucleation/polymerization 38 versus at a solid/electrolyte one. However, once the film begins forming, the population of viable sites increases, which is reflected in the concomitant decrease in peak potential of the electron transfer wave. Simultaneously, the signal decreases in current intensity, which is likely owing to the consumption of material in the vicinity of the ITIES. The nanocomposite film formed in the large ITIES cell was thick, brittle, and difficult to extract from the cell. However, pieces were deposited onto glass substrates for SEM imaging and onto a GC electrode for electrocatalytic testing (see below).
The voltammetric negative peak during the reverse scan in Fig. 2C with a Δ w o φ 1/2 at -0.240 V may be owing to the reorganization of the electric double layer (EDL) and adsorption of supporting electrolyte anions on either side of the forming liquid|solid|liquid interface 4 . As shown by Scanlon's group 4 , supporting electrolyte anions from both phases stabilize film growth through adsorption; however, due to the size and shielded negative charge on B(C 6 F 5 ) 4 − , it is likely a minor contributor and most anion adsorption comes from SO 4 2− on the aqueous side. Moreover, since SO 4 2− is molecularly smaller with exposed dense negative charge on the oxygens, doping the film with sulphate is more efficient in stabilizing and neutralizing the film as it forms. However, during the later stages of film growth, diffusion of anions through the film is inhibited, making doping/de-doping a slow process and causing voltammetric peak broadening 46 . Regardless, the film will likely be p-doped.
Both micro and large ITIES experiments were also performed at open circuit potential (OCP), i.e., without an applied external potential. In both cases, no film or NPs were observed. Thus, an applied potential is required to induce nanocomposite film formation.
Using the syringe affixed to the back of the pipette holder, a droplet of the aqueous phase was ejected from the tip of the micropipette after 25 consecutive CV scans using Cell 1 with 5 mM of CuSO 4 paired with 5 or 20 mM of TT in DCE and deposited on a holey-Au TEM grid. Next, the TEM grids were imaged using both TEM and SEM (see Fig. 3). TEM micrographs in Fig. 3A and B show the low dispersity spherical Cu NPs electrogenerated and embedded within the poly-TT film with average sizes of 5.3 and 1.7 nm at [TT] equal to 5 and 20 mM, respectively. NP sizes were measured using ImageJ software and collated into histograms plotted inset in Fig. 3; average NP sizes were determined by curve fitting the histograms with a Gaussian distribution. Fig. S1 of the SI shows the histogram of Cu NP sizes with the poly-TT film formed using Cell 1 with 10 mM of TT in DCE. With increasing [TT] the NP size decreased such that at TT = 10 or 20 mM, Cu particles are in the range of nanoclusters, i.e., 1.3 or 1.7 nm in diameter, respectively 47,48 . In this case, with higher [TT] the thermodynamics and kinetics of the heterogeneous electron transfer reaction are enhanced facilitating faster electropolymerization, which in turn likely limits the size of the Cu nanoclusters.
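For readers wishing to reproduce this type of size analysis, the short Python sketch below illustrates how a mean NP diameter can be extracted by fitting a Gaussian to a histogram of ImageJ measurements, as described above; the file name and data layout are assumptions for illustration, not the authors' actual files.

```python
# Illustrative sketch: fit a Gaussian to a histogram of Cu NP diameters (in nm)
# measured in ImageJ, and report the peak position as the mean particle size.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

diameters_nm = np.loadtxt("imagej_diameters_20mM_TT.txt")   # hypothetical file, one value per line
counts, edges = np.histogram(diameters_nm, bins=20)
centers = 0.5 * (edges[:-1] + edges[1:])

p0 = [counts.max(), diameters_nm.mean(), diameters_nm.std()]  # initial guesses
(a, mu, sigma), _ = curve_fit(gaussian, centers, counts, p0=p0)
print(f"mean NP diameter ~ {mu:.1f} nm, distribution width ~ {abs(sigma):.1f} nm")
```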
TEM micrographs were also obtained for films electrogenerated at the 1.16 and 10 mm interfaces and deposited onto 200 mesh Cu lacey carbon TEM grids. Fig. S3 of the SI shows the TEM micrographs along with histograms for the analysis of the Cu NP sizes performed using ImageJ software. Cu NPs at the 1.16 and 10 mm ITIES demonstrate a high dispersity despite Gaussian curve fitting showing peaks at 2.2 and 4.1 nm. Errors shown inset in Fig. S3 are for the Gaussian peak position/fitting. Fig. S4 shows a photograph of the aqueous droplet suspended from the unmodified capillary (1.16 mm in diameter) post Cu NP/poly-TT electrogeneration and after removal from the oil phase. A thin film can be observed spread across the droplet surface. Figure 3C depicts the SEM micrograph of the nanocomposite Cu NP/poly-TT film deposited on a holey-Au TEM grid. The film was dense, compact, and smooth; however, it was also quite fragile and broke apart easily. Relatively large sections can be seen covering the TEM grid and occluding several of the 2 µm holes. These images agree well with the reported morphology for electropolymerized terthiophene at low current densities 49 , and the composition of the film was confirmed by energy dispersive x-ray (EDX) spectroscopy performed during SEM imaging (data not shown). 24-h shake-flask experiments performed in a 2 mL vial (large ITIES) using the same electrolyte compositions and TT concentrations as Cell 2 revealed no observable thin film or Cu NPs. Therefore, while not strictly observable voltammetrically at [TT] = 5 mM, by probing the positive edge of the PPW one can initiate Cu NP/poly-TT electrodeposition at relatively low [TT]. Moreover, this can be achieved without the use of extreme overpotentials that risk over-oxidizing the film 31,51 . Figure 3D shows the film generated at the large ITIES with [CuSO 4 ] = 5 mM and [TT] = 20 mM after 1000 CV cycles. The film is smooth; however, the Cu NPs cannot be resolved with SEM. Fig. S2A in the SI shows the CVs recorded during Cu NP/poly-TT electrosynthesis with [CuSO 4 ] = 1 mM and [TT] = 5 mM. The first and every subsequent fifth scan was plotted. The Cu 2+ /TT reduction/oxidation and Cu re-oxidation waves are visible at roughly 0.85 and 0.55 V, respectively. The Cu NP/poly-TT film was extracted from the cell and deposited on a glass slide, then imaged in the SEM. Fig. S2B shows the SEM micrograph, while Fig. S2C and D contain plots of the EDX spectra obtained at the two points indicated in Fig. S2B. The EDX spectra show that point C is rich in Cu and likely an agglomeration of Cu NPs, while point D is the polymer film itself containing sulphur and carbon.
The stepwise nucleation, oligomerization, and elongation of complex polymer/composite materials at liquid|liquid interfaces has been described by Vignali et al. 31 , Robayo-Molina et al. 52 , and recently by us 28 . The early stages of TT electropolymerization via heterogeneous electron transfer and electrodeposition can be described generally by Eqs. (1-4), where H-TT is the terthiophene molecule emphasizing the proton at the α- or β-carbon position on one of the terminal thiophene units, TT +• is the radical cation, and TT 2 is the dimer. This initial stage is likely thermodynamically prohibitive at the liquid|liquid interface since it lacks nucleation sites 38 . However, once seeded with Cu 0 nuclei and the positively doped TT oligomers as capping agents, the thermodynamics likely greatly improve, as mentioned above. It should be emphasized that the glass walls of the micropipette likely behave as nucleation sites; this would also apply to the walls of large glass ITIES electrolytic cells. Using Eqs. (1-4) as a basis, an overall reaction can be composed, Eq. (5). The overall electron transfer potential Δ w o φ ET for Eq. (5) can be written in terms of the standard redox potentials 27,28,38 , where E o ′ ,DCE TT +• /TT and E o ′ ,H 2 O Cu(II)/Cu are the standard redox potentials for TT +• /TT and Cu 2+ /Cu 0 and were taken to be 1.20 28 and 0.342 V 53 , respectively. In this way, Δ w o φ ET was calculated to be 1.57, 1.36, and 1.18 V for pH's 2, 5.5-6, and 8.5, respectively; since ΔG = nFΔ w o φ ET 54 , this leads to ΔG >> 0 that nevertheless decreases with increasing pH. These values are much higher than the experimentally determined values at pH ~ 5.5-6; thus, the difference is likely the thermodynamic contribution of the glass walls. However, silanization of the inside of the micropipette resulted in no observable change in the film produced (data not shown).
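As a simple numerical illustration of the statement ΔG = nFΔ w o φ ET >> 0, the following sketch evaluates the free-energy change for the three Δ w o φ ET values quoted above; n = 2 is assumed here for the two-electron Cu 2+ /Cu 0 couple and is not stated explicitly in the text.

```python
# Rough magnitude check of Delta G = n*F*Delta_phi_ET for the values quoted above.
F = 96485.33  # C/mol, Faraday's constant
n = 2         # assumed number of electrons for Cu2+ -> Cu0
for pH, dphi in [("2", 1.57), ("5.5-6", 1.36), ("8.5", 1.18)]:
    dG_kJ_per_mol = n * F * dphi / 1000.0
    print(f"pH {pH}: Delta_phi_ET = {dphi} V -> Delta G ~ {dG_kJ_per_mol:.0f} kJ/mol (>> 0)")
```

The positive values simply restate that the interfacial reaction is not spontaneous without an applied potential, consistent with the open circuit potential experiments described above.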
Next, electrochemical impedance spectroscopy (EIS) was employed during electrosynthesis at the micro-ITIES to elucidate the underlying physical and electrochemical dynamics of film formation. The sinusoidal applied potential waveform (V AC ) can be described by V AC = V DC + V 0 sin(ωt) 55 , where V DC and V 0 are the applied DC voltage and AC voltage amplitude (0.020 V peak-to-peak), respectively, while ω (= 2πf) is the angular frequency and t is time. In each case, a V DC of roughly 0.7 V was applied, and a CV was performed between each impedance measurement at a scan rate of 0.020 V s -1 . In this way, the Nyquist diagrams in Fig. 4A-C were recorded using Cell 1 at a 25 µm diameter interface with [TT] equal to 0, 5, and 20 mM in DCE, respectively. The semi-circle at high frequency and partial semi-circle at low frequencies describe the two typical branches of impedance spectra that are associated with electrical and electrochemical dynamics (i.e., mass transport), respectively 55 . In the case of the liquid|liquid interface, the high-frequency region is often associated with the capacitance of the back-to-back EDLs found in either phase, while the low-frequency region is influenced by ion diffusion and electron transfer reactions [55][56][57] . These two features give rise to two "time constants" (τ = RC) 55 , so called since they are often modelled in equivalent circuits using a resistor (R) and capacitor (C) in parallel. Equivalent electric circuits (EECs) used to model the impedance data have been drawn in Fig. 4D, which include constant phase elements (CPEs) in place of simple capacitors, in parallel with resistors added to model charge transfer reactions (R CT ) and kinetic resistance (R C ). As is common, a resistor was added in series to account for the total solution resistance (R s ). EEC1, a Randles-like equivalent circuit, and EEC2 feature the two terminals at either end for the WE and CE/RE employed experimentally. The 2-electrode configuration limits parasitic impedance artifacts from cabling and/or the CE and RE that are often observed at 3- and 4-electrode cells 57,58 . Operating at the micro-ITIES has added benefits and is able to resolve the charge transfer resistance over the solution electrolyte resistance; moreover, by repeatedly using the same micropipette, one can greatly enhance reproducibility. EIS offers avenues to valuable physical insight into charge transfer processes at the liquid|liquid interface. The geometric capacitance (C geo ), recently modeled by von Hauff and Klotz 55 in the context of perovskite solar cells, may be considered analogous to the structural parasitic coupling elements proposed and modelled by Trojánek et al. 57 for a 4-electrode cell, which they modelled using a 4-terminal EEC. In either case, C geo is normally modelled in parallel with all the other circuit elements. If it exceeded the individual capacitive circuit elements, i.e., CPE1 and CPE2 in Fig. 4D, then the high-frequency semi-circle would become enlarged and dominate the impedance spectrum. However, by operating in the 2-electrode mode, and ensuring that the overall impedance of the micropipette was < 1 MΩ during single phase experiments, individual circuit elements can be resolved 59 . Thus, C geo /parasitic coupling elements can be ignored, greatly simplifying EEC modelling 57,60,61 .
The faradaic impedance (Z f ) is described 57 in terms of the apparent rate of charge transfer k f , the surface area of the interface A, Faraday's constant F (96,485.33 C mol -1 ), the gas constant R (8.314 J mol -1 K -1 ), the absolute temperature T (298.15 K), the number of electrons transferred z, and a Warburg impedance Z W , within which j 2 = -1 and σ is defined 57 in terms of a constant Q (F s 1-n ); when n = 1 the element is a perfect capacitor, while n = 0.5 is in line with a typical Warburg element for semi-infinite diffusion.
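To make the equivalent-circuit description more concrete, the following Python sketch computes the complex impedance of an EEC2-like circuit (R s in series with two parallel R-CPE branches) over roughly the experimental frequency range; all element values are invented for illustration and are not the fitted parameters reported in this work.

```python
# Minimal sketch of the impedance of a series Rs + (RCT || CPE1) + (RC || CPE2) circuit,
# illustrating how two "time constants" produce two semicircles in a Nyquist plot.
import numpy as np

def z_cpe(Q, n, omega):
    # Constant phase element: Z = 1 / (Q * (j*omega)^n)
    return 1.0 / (Q * (1j * omega) ** n)

def parallel(z1, z2):
    return z1 * z2 / (z1 + z2)

f = np.logspace(1, 4.3, 200)          # ~10 Hz to 20 kHz, as in the EIS measurements
w = 2.0 * np.pi * f
Rs, Rct, Rc = 5.0e4, 5.0e5, 2.0e6     # ohms, illustrative values only
Z = (Rs
     + parallel(Rct, z_cpe(Q=12e-12, n=0.95, omega=w))   # high-frequency branch
     + parallel(Rc, z_cpe(Q=1e-9, n=0.80, omega=w)))     # low-frequency branch
# Plot -Z.imag versus Z.real to obtain a Nyquist diagram analogous to Fig. 4.
print(Z[:3])
```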
At a freshly cleaned micro-ITIES, a CV was performed followed by impedance measurement using a V DC = 0.7 V; whereby, the CV-EIS pulse sequence was performed a total of 11 times. Figure 4A shows the impedance spectra using Cell 1 with [TT] = 0 and [CuSO 4 ] = 5 mM, i.e., a blank spectrum. The pronounced low-frequency tail is likely owing to the relatively high V DC close to the edge of the PPW where supporting electrolyte ions will undergo transfer, e.g., Li + w → o. This spectrum agrees well with one shown recently by Mareček's group 59 which was associated with simple tetraethylammonium ion transfer at a micro-ITIES.
At low [TT] (Fig. 4B) the impedance profile does not undergo significant change; however, the tail in the low-frequency region is greatly suppressed relative to the blank spectra performed in the absence of TT (Fig. 4A). This may indicate that the film has formed after the first CV-EIS sequence and is blocking, or at least inhibiting, simple ion transfer of the supporting electrolyte. (Figure 4 caption: spectra were obtained with a direct applied potential (V DC ) of ~ 0.7 V after performing one CV cycle using the potential range shown in Fig. 2 with v = 0.020 V s -1 ; similarly, in-between each spectrum in B and C a CV pulse was applied. (D) equivalent electric circuits (EEC) for a simple ion transfer (EEC1) or coupled electron and ion transfer during electro-generation (EEC2) of the thin film at an ITIES, such that R s , R CT , and R C are the solution, charge transfer, and kinetic resistance, while CPE1 and CPE2 are constant phase elements.) Figure 4C shows the response at high [TT] in which the high frequency region of the EIS increases by 50% between the first and second CV-EIS pulse sequence. There is then a small decrease in the high frequency branch, which stabilizes across the 3rd to 11th iteration. Meanwhile, the low-frequency branch at high [TT] becomes more pronounced with each iteration. Based on the CV results (Fig. 2A), the film forms immediately generating a liquid|solid|liquid interface and, thus, the low-frequency branch is then associated with mediated electron transfer between Cu 2+ (aq) and TT(org) rather than simple ion transfer. Thus, as the film grows the electron transfer properties change (see below).
Nevertheless, these data indicate a change in the nature of the interface brought about by the development of a liquid|solid|liquid system that occurs immediately upon application of the first CV-EIS. It has been shown that forming a barrier at the interface mainly affects the low-frequency region 28 . However, as the film develops, and likely due to the build-up of local micro-convections in this domain 63 , it was found that by pushing the impedance measurement to lower and lower frequencies the interface becomes unstable. This either led to a higher noise level or the interface itself physically broke down, erupting into the organic phase as an electrophoretically induced droplet. Thus, it was not possible to carry out EIS measurements below 10 Hz. EEC2 (Fig. 4D) was employed during thin film growth in the presence of CuSO 4 (aq) and TT(org), while EEC1 was used in the absence of TT, see the blue, solid curves in Fig. 5. Figure 5 shows the changes in the six EEC parameters after each CV-EIS iteration. With 5 mM of TT in the system, orange filled circle curves, CPE1 shows little deviation from the initial value of ~ 12.0 pF with only a slight decrease over the 11 CV-EIS pulse sequences before finally stabilizing at ~ 11.7 pF. As mentioned above, this circuit element is typically associated with the liquid|liquid back-to-back EDLs and these small changes in capacitance may be owing to ion rearrangement on either side of the interface during thin film electrogeneration or changes in the surface morphology at either side of the liquid|solid|liquid junction. At [TT] = 20 mM, yellow filled square traces in Fig. 5, there is a sudden change in the fitted parameters after the first CV-EIS iteration. These values agree with those recently reported by us for Au NP/poly-TT nanocomposites electrogenerated at a micro-ITIES 28 after multiple CV-EIS pulses were performed. Herein, the interface was monitored in situ using a CCD camera equipped with a 12 × zoom lens assembly with a 10-12 cm working distance; however, unlike the Au NP/poly-TT film grown previously 28 , the Cu NP/poly-TT nanocomposite film was clear/colourless and, therefore, no change was observed optically.
Preliminary electrocatalysis results were obtained by modifying the surface of a glassy carbon (GC) electrode with a layer of Cu NP/poly-TT film and using 0.1 M NaHCO 3 (aq) as supporting electrolyte. Figure 6 shows CVs recorded at a bare and modified electrode; whereby, the solution was purged with either N 2 or CO 2 gas for ~ 15 min prior to polarization.
At a bare GC electrode, and in the N 2 saturated case, the cathodic peak at roughly -0.45 V (vs. Ag/AgCl) is likely H + reduction. However, this cathodic signal experiences a shift in the onset potential to -0.57 V when purged with CO 2 but maintains the same current intensity. The GC electrode modified with the Cu NP/poly-TT film electrosynthesized at a 1.16 mm ITIES shows a greater than 2 × CO 2 reduction current at 0.75 V (vs. Ag/AgCl). To modify the GC electrode with a Cu NP/poly-TT composite at a 10 mm ITIES, the GC plug was dipped into the electrolytic cell after the 25 CV cycles were performed. Afterwards, the i-V response shown in Fig. 6, yellow trace, demonstrated a further shift in the overpotential towards more negative potentials with no increase in the peak current; thus, electrocatalysis is likely suppressed in this instance. Modification of the interface with a single deposit from the 25 µm diameter interface resulted in no significant change relative to the bare GC electrode (data not shown). Figure 7 depicts SEM images of the GC electrode surface modified with films generated at the 25 µm (A, D), 1.16 mm (B, E), and 10 mm (C, F) diameter interfaces before (left-hand side) and after (right-hand side) one CV cycle; additionally, using ImageJ software the GC surface coverage was estimated to be 0.3, 9.9, and 62.5%, respectively. The film electrosynthesized at the 25 µm ITIES is smooth and shows evidence of folding with Cu NPs distributed along creases in the film. It is hypothesized that the film quickly occludes the ITIES and new polymeric growth pushes the film into the aqueous side of the interface generating these folds. The Cu NP are likely concentrated towards the bottom of these folds, adjacent to the ITIES. Advanced, high-resolution optical methods will be needed to monitor film growth at the micro-ITIES in situ; however, this will be the focus of future work.
Films developed at the two large ITIES were smooth with a relatively even distribution of Cu NPs and no evidence of folding. Therefore, this phenomenon is likely owing to geometric confinement of the growing polymer film within the micropipette tip. After electrocatalysis, the Cu NPs in the films generated at the 25 µm and 1.16 mm interfaces showed large changes in morphology, having grown by orders of magnitude; therefore, the thin polymer network in these cases is insufficient to protect them against aggregation/agglomeration. The film created at the 10 mm ITIES showed little change; however, further experimentation is required.
While preliminary, these results are promising. Future work will focus on controlling Cu NP and polymer film morphology while tracking any changes the nanocomposite experiences during electrocatalysis, as well as detailed product analysis.
Conclusions
The successful application of a micro-ITIES towards electrodeless synthesis of Cu NP/poly-TT has been demonstrated and compared to films generated at a large (mm scale) ITIES. At [TT] = 20 mM a well resolved electron transfer wave was observed at the micro-ITIES. However, the large interfaces showed a more complex CV profile with an irreversible electron transfer wave at high, positive potentials and a reversible signal towards the negative end. The latter is likely anion adsorption/exchange at the liquid|solid|liquid interfaces and agrees well with recent results by Scanlon's group 4 . Impedance data confirm that the nanocomposite film forms early through large changes in R CT and R C ; moreover, increasing [TT] improves film formation while decreasing the median Cu NP size to < 2 nm. Interestingly, while no electron transfer signal was observed at the micro-ITIES at low [TT], a film was electrogenerated and imaged using SEM. These data show that simply by probing the edge of the PPW one can facilitate electrodeless synthesis of the nanocomposite and avoid overoxidation of the polymer network.
Preliminary voltammetric results at a GC electrode modified with the Cu NP/poly-TT film electrogenerated at a 1.16 mm diameter interface elicited a > 2 × enhancement in the electrocatalytic CO 2 reduction current versus an unmodified electrode. However, this film underwent large changes in NP morphology. While a tentative first step, these results are indicators that these films are promising alternative electrode materials for carbon capture; however, more optimization of nanocomposite electrosynthesis is necessary.
Data availability
All data is available upon request to T.J.S., tstockmann@mun.ca. | 7,998.2 | 2023-01-21T00:00:00.000 | [
"Materials Science"
] |
Damping of a Simple Pendulum Due to Drag on Its String
A basic classical example of simple harmonic motion is the simple pendulum, consisting of a small bob and a massless string. In a vacuum with zero air resistance, such a pendulum will continue to oscillate indefinitely with a constant amplitude. However, the amplitude of a simple pendulum oscillating in air continuously decreases as its mechanical energy is gradually lost due to air resistance. To this end, it is generally perceived that the main role in the dissipation of mechanical energy is played by the bob of the pendulum, and that the string's contribution is negligible. The purpose of this research is to experimentally investigate the merit of this assumption. Thus, we experimentally investigate the damping of a simple pendulum as a function of its string diameter and compare that to the contribution from its bob. We find that although in some cases the effect of the string might be small or even negligible, in general the string can play a significant role in the damping of the pendulum, and in some cases an even greater role than its bob.
Introduction
Perhaps the simplest oscillating system is a small object attached to a string of negligible mass, known as a simple pendulum. If the amplitude of oscillations is small, the pendulum oscillates with a period T which is independent of the amplitude and is given by T = 2π√(L/g), where L is the length of the pendulum and g is the acceleration due to gravity (9.80 m/s 2 ). In the absence of frictional losses, this hypothetical pendulum would oscillate indefinitely. However, the amplitude of oscillations of any real undriven mechanical pendulum, including the simple pendulum, continuously decreases as a result of frictional losses, mainly due to air resistance, while its period and frequency remain constant. To this end, the general assumption is that the drag force due to the air resistance on the bob of the pendulum is the cause of its damping, and normally the air resistance on the string of the pendulum is assumed to be negligibly small. In order to be able to measure the gravitational acceleration very accurately, Nelson and Olsson [1] theoretically investigated the effect of air resistance on the bob as well as the string of a simple pendulum. However, they did not discuss the relative importance of these effects. Later on, Dunn [2] experimentally studied the damping effect of the string on a pendulum by varying the length of the string and the diameter of the bob, but did not mention what the diameter of the string was. Dunn concluded that string drag comprised 5% ± 4% of the damping in the experiment, a small effect but not negligible, and suggested that further investigation was warranted.
Motivated by the work of Nelson and Olsson and by Dunn, we decided to further investigate the effect of the string on the damping of a simple pendulum. To do so, we experimentally studied the contribution to the damping of a simple pendulum from strings of various diameters.
General Remarks on Drag Force and Air Resistance
When a solid object moves in a fluid, the magnitude of the drag force is in general a function of the speed of the object, F(v). Expanding this function in a Taylor series about v = 0, we can write F(v) = c 0 + c 1 v + c 2 v 2 + ... However, the constant term is zero because F = 0 when v = 0, so that F(v) = c 1 v + c 2 v 2 + ..., where c 1 , c 2 , ... are constants. For small speeds, we can approximate the drag force by the first-order term and neglect the second- and higher-order terms. Experimentally it is found that for a relatively small object moving in air with speeds less than about 24 m/s, the force of air resistance is proportional to the first power of speed. For higher speeds, but below the speed of sound, the force is proportional to the square of the speed [3] [4].
The common practice, however, is to write the magnitude of the drag force on an object moving in a fluid as F D = (1/2) C D ρ A v 2 , where ρ is the density of the fluid, v is the speed of the object, and A is the frontal cross-sectional area of the object, i.e., the cross-sectional area of the object perpendicular to its direction of motion. The unitless parameter C D is the drag coefficient. The drag coefficient depends on the shape of the object and the Reynolds number, Re = ρvL/µ, where L is a characteristic diameter or linear dimension of the object, such as the diameter of a sphere, and µ is the absolute or dynamic viscosity of the fluid.
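As a rough numerical check of the Reynolds-number expression above (a sketch, not part of the original analysis), the following Python snippet evaluates Re for the bob used later in this work, assuming standard room-temperature air properties and a representative bob speed of a few tenths of a metre per second; all three of these input values are assumptions.

```python
# Illustrative check of Re = rho*v*L/mu for the pendulum bob.
rho_air = 1.2        # kg/m^3, assumed density of air
mu_air = 1.81e-5     # Pa s, assumed dynamic viscosity of air
v = 0.34             # m/s, representative bob speed (assumption)
D_bob = 0.0509       # m, bob diameter quoted in the Experiment section
Re = rho_air * v * D_bob / mu_air
print(f"Re = {Re:.0f}")  # ~1.1e3, in the regime where C_D ~ 1/Re, i.e. linear drag
```

The result is of order 10 3 , consistent with the Reynolds number quoted in the Discussion and with the use of a linear drag model.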
For Re of the order of about 1200 or less, the drag coefficient C D is asymptotically proportional to Re -1 , which means that the drag force is proportional to the first power of velocity [5] [6]. At higher Reynolds numbers, and before the onset of turbulent flow, the drag coefficient is fairly constant, which means that the drag force is quadratic in velocity. Therefore, for a simple pendulum moving with small speeds (long pendulum), the force of air resistance on its bob, F b , is proportional to its velocity, F b = cv, where c is a constant, independent of velocity, but which depends on the shape and frontal cross-sectional area of the bob. We now calculate the force of air resistance on the string of the pendulum.
Drag Torque on the String of a Simple Pendulum
Consider an element of the string of a pendulum of length dr located at a distance r from the support point and moving with velocity v, as shown in Figure 1(a). The drag force on this element is dF = kv dr, where k is a constant. But since v = r dθ/dt, where dθ/dt is the time derivative of θ, we obtain dF = kr (dθ/dt) dr. Since this drag force is perpendicular to the string, its torque about the support point is dτ = r dF = kr 2 (dθ/dt) dr. Integration of this equation over the length of the string, L, gives the total torque on the string, τ s = (1/3) kL 3 (dθ/dt).
Equation of Motion
Derivation of the equation of motion of the simple pendulum with a linear drag force is trivial; however, we present it here for completeness of the discussion. Taking torques about the support point, with the drag torque on the string found above and the drag force on the bob acting at a distance L, we have m L 2 d 2 θ/dt 2 = -mgL sin θ - (1/3) kL 3 dθ/dt - cL 2 dθ/dt (Equation (12)). Therefore, after some simplifications, Equation (12) becomes d 2 θ/dt 2 + κ dθ/dt + (g/L) sin θ = 0 (Equation (14)), where the damping constant κ is defined by κ = kL/(3m) + c/m, which has the units of s -1 in the SI system. The first term on the right-hand side of this expression for κ is the contribution to the damping of the pendulum due to its string and the second term is that due to its bob. Finally, if the amplitude of oscillations is small, we have sin θ ≈ θ, and Equation (14) reduces to d 2 θ/dt 2 + κ dθ/dt + (g/L) θ = 0 (Equation (16)). Equation (16) is a homogeneous second-order linear differential equation with constant coefficients, whose solution is straightforward. We first construct the auxiliary equation, λ 2 + κλ + g/L = 0, which has the solutions λ = -κ/2 ± √(κ 2 /4 - g/L) (Equation (18)). Since air resistance is small, we have κ 2 /4 << g/L, and Equation (18) becomes λ ≈ -κ/2 ± iω. Then, the general solution of the differential equation of motion (16) is θ(t) = e -κt/2 (A cos ωt + B sin ωt), where A and B are constants to be determined by the initial conditions, and the angular frequency ω is given by ω = √(g/L - κ 2 /4). Therefore, the amplitude θ of the oscillations decreases exponentially with time according to θ = θ 0 e -κt/2 , where θ 0 is the initial amplitude.
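The exponential decay of the amplitude can also be verified numerically. The short Python sketch below is an illustration only; the value of κ and the other parameters are assumed, not fitted to the experiment. It integrates the damped pendulum equation with the sin θ term retained and compares the amplitude after a long time with the predicted envelope e -κt/2 .

```python
# Sketch: integrate the damped pendulum equation and check that the amplitude
# decays as exp(-kappa*t/2). All parameter values here are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

g, L, kappa = 9.80, 2.614, 1.0e-3    # m/s^2, m, 1/s (kappa is an assumed value)
theta0 = 0.115                       # rad, roughly a 30 cm amplitude on this string

def rhs(t, y):
    theta, omega = y
    return [omega, -kappa * omega - (g / L) * np.sin(theta)]

t_end = 2000.0
sol = solve_ivp(rhs, (0.0, t_end), [theta0, 0.0], max_step=0.05)

# Amplitude over roughly the last period (T ~ 3.3 s) vs. the predicted envelope.
last_period = sol.t > t_end - 3.3
print("numerical decay factor:", np.abs(sol.y[0][last_period]).max() / theta0)
print("predicted exp(-kappa*t/2):", np.exp(-kappa * t_end / 2))
```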
Experiment and Results
Throughout our experiment we used a steel ball of mass 485 g and diameter 50.9 mm as the bob of our simple pendulum. For the string we used monofilament nylon fishing lines of various diameters. During the entire experiment, the total length of the pendulum was 261.4 cm, 258.9 cm of which was the length of the string. Although the thickness of each string was consistent throughout its length, we measured the diameter 8 times along its length while the string was under load. Table 1 shows the mean diameters of the strings.
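As a quick consistency check (a sketch using the numbers above, not part of the original analysis), the small-amplitude period formula T = 2π√(L/g) predicts a period close to the value measured in this work:

```python
# Predicted small-amplitude period for the pendulum length used in the experiment.
import math
L, g = 2.614, 9.80            # m, m/s^2
T = 2 * math.pi * math.sqrt(L / g)
print(f"T = {T:.2f} s")       # ~3.25 s, close to the measured period of ~3.27 s
```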
With each string, we started with an amplitude of 30 cm for the oscillations of the bob and measured the time interval for every 1 cm decrease in the amplitude until the amplitude dropped to 10 cm. We note here that since the length of the string is about 259 cm, the linear amplitude of 30 cm corresponds to an angular amplitude of about 6.65°. Then the error in using the approximation sin θ ≈ θ in Equation (14) is only about 0.2%. We also note that the mass of the heaviest string used in our experiments (the 50-lb test line) was only 1.5 g, which is quite negligible compared to the mass of the bob. Equation (24) may be written as ln(θ/θ 0 ) = -(κ/2)t. Therefore, a graph of ln(θ/θ 0 ) versus t should be a straight line with a slope of -κ/2. Figure 2 shows the results of our experiment for three of the pendulums. We have not plotted all of them to avoid cluttering of the figure. From the slopes of these lines we have calculated the value of κ for each string, as shown in Table 2.
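The slope extraction described above can be illustrated with a few lines of Python; the amplitude values used here are synthetic, generated for illustration only, and are not the measured data.

```python
# Fit ln(theta/theta0) vs t with a straight line and read kappa from the slope,
# since slope = -kappa/2. The amplitudes below are synthetic, for illustration only.
import numpy as np

t = np.linspace(0.0, 1500.0, 40)                 # s
theta = 0.30 * np.exp(-0.5 * 1.2e-3 * t)         # synthetic amplitudes, kappa = 1.2e-3 1/s
slope, intercept = np.polyfit(t, np.log(theta / theta[0]), 1)
kappa = -2.0 * slope
print(f"recovered kappa = {kappa:.2e} 1/s")
```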
Because we used the same bob and the same string length in all experiments, according to Equation (15) the graph of the damping constant κ as a function of string diameter D should be a straight line. This is shown in Figure 3. As can be seen, the plot of κ vs D is, to a good approximation, a straight line. A least-squares analysis gives the following equation for the best line, κ = (0.777 ± 0.067) D + (7.24 ± 0.32) × 10 -4 , in which D is in meters and κ is in s -1 . This line is also plotted in the figure.
The first term in Equation (26) is the contribution of the string of the pendulum to its damping (κ S ), and the second term is that of its bob (κ B ). Using the diameters of our strings and Equation (26), we can calculate these contributions in our experiments. The results are shown in Table 3.
Discussion
In all cases studied, the period of the pendulum was about 3.27 s. Since the initial amplitude of the motion of the bob was 30 cm, this gives an average bob speed of roughly 0.3 m/s. Using this speed, the density and viscosity of air, and the diameter of the bob of 0.0509 m, Equation (5) gives a Reynolds number of 1134. Therefore, the linear model used for air resistance in this work is justified, which is further supported by the results in Figure 2 and Figure 3.
The results of Table 3 show that for all pendulums tested, the string plays a more significant role in damping than the bob, and this effect increases with the string diameter. This, however, is not surprising because even though the string of a simple pendulum may be very thin, its total frontal cross-sectional area can be comparable to that of the bob of the pendulum. For example, our string with diameter 0.718 mm and length 259 cm has a frontal cross-sectional area of 18.6 cm 2 compared to 20.3 cm 2 for the spherical bob. In addition, for a Reynolds number of 1000, the drag coefficient of a sphere is 0.47 whereas that of a wire (a circular cylinder with L/D = ∞) perpendicular to the flow is 1.2 [9] [10]. A combination of these factors results in a significant damping effect by the string of the pendulum.
Conclusion
In conclusion, the results of this investigation show that the string of a simple pendulum plays a significant role, and in some cases a more important role, in damping the pendulum than its bob.To the best of our knowledge, this effect has not been taken into account in the discussions of damping of pendulums in the literature.
Figure 1 .
Figure 1. Diagrams of a simple pendulum showing (a) the drag force on an infinitesimal element of the string and (b) the net drag force on the string and on the bob.
Figure 1(b) shows a simple pendulum with a bob of mass m and a total length L. The total drag force on the string and that on the bob of the pendulum are shown by F s and F b , respectively. The equation of motion of the pendulum is then obtained from the torques of these forces and of gravity about the support point, as given in the Equation of Motion section.
Figure 2
Figure 2 reveals two things. First, the fact that plots of ln(θ/θ 0 ) versus t are straight lines confirms that the amplitude decays exponentially, i.e., that the linear drag model is appropriate. Second, the magnitude of the slope, and hence the damping constant, increases with the string diameter.
Figure 2 .
Figure 2. Plots of amplitude versus time for the pendulums with different string diameters.
Figure 3 .
Figure 3. Damping constant as a function of string diameter.
Table 1 .
Fishing test lines used as strings and their diameters.
Table 2 .
Damping constant for various strings tested.
Table 3 .
Contributions to the damping constant of the pendulum from the string (κ S ) and the bob (κ B ). String Diameter (mm) κ S (10 −4 s −1 ) | 2,861.6 | 2017-01-25T00:00:00.000 | [
"Physics",
"Education"
] |
Reducing Corruption in African Developing Countries: The Relevance of E-Governance
This paper presents a review of reducing corruption in African developing countries by lessening the discretion of officials and increasing transparency. While it is true that ICT eliminates many opportunities for corruption for those who do not understand the new technology fully, it opens up new corruption vistas for those who understand the new systems well enough to manipulate them. Therefore proper safeguards are needed. Putting in place systemic hurdles may prevent people from abusing their power for private gain. While complete eradication of corruption is difficult to achieve, much can be done to reduce its prevalence. ICT can support actors wishing to improve governance capacity and fight corruption, but the surrounding political, social and infrastructural environment will decide if the technology is to be used to its fullest potential. Automating existing bureaucratic processes that are defective will not yield good results. In this paper, we propose a methodology to combat corruption using information and communication technologies (ICT) that entails process restructuring. Most developing countries are not fully ready to embrace a comprehensive program of e-government, thus transparency is not holistic in all sectors. Rather than wait for total readiness, an approach of learning by trial and consolidating small gains is recommended. While e-governance holds great promise in many developing countries, substantial challenges have to be tackled. Many ICT projects fail because of insufficient planning capacity and political instability.
INTRODUCTION
In every walk and stage of life there are opportunities to be involved in one type of corrupt practice or another.
Corruption is an Adamic nature in man, founded on the satanic spirit of telling lies. This can only be dealt with by the fear of God and the love for humanity. There are numerous types of corruption, just as we have numerous ways of telling lies. We have different types of corruption ranging from political, social, academic, economic, and cultural to spiritual, just to mention a few. In other words, in whichever position one finds oneself, there is an opportunity to get involved in corrupt practices. In his lecture, Jibril (2010) said "corruption and public morality seem to be at the nadir in Nigeria today. We seem to have reached the stage when, if we lived in ancient times God used to destroy nations that were beyond redemption in their moral transgressions, we would have been more than ripe for a total destruction". Poverty, corruption and inequality are the biggest hindrances to development. Poor people only get the poorest services, or the least, from the government. Corrupt officials acquire most, if not all, of the public budget for their own benefit. Rich people receive not only good but rather the best things with the aid of their resources. Corruption is also linked with increased inequality in the quality of education between the rich and the poor. When resources allocated for public education are inadequate or do not reach the schools, it is the poor who bear the brunt. Unlike the rich, who can afford private tuition for their children, the poor have to depend on the government.
Education is the cornerstone of a vibrant and involved populace; therefore corruption within the education sector should not be allowed to flourish.
What is Corruption?
Corruption is the misuse of public power, office or authority for private benefit. This misuse manifests in many ways: bribery, extortion, influence peddling, nepotism, fraud, or speed money. Petty corruption is frequently found where public servants who may be grossly underpaid depend upon small kickbacks from the public to pad their pockets and feed their families. Grand corruption involves high officials who make decisions on large public contracts for their personal benefit, or to the benefit of organized group interests. Corruption is a multifaceted phenomenon supported by differing historical and socio-economic conditions in each country. It exists at all levels of society. Although in the past it could have been considered a largely domestic issue, corruption now often transcends national boundaries. Its consequences are global; its hidden costs immense. The private sector has responded by implementing ethics and compliance standards and regulations, while the public sector benefits from the ratification of recent laws and international conventions. Oversight bodies and mechanisms have been created to ensure the smooth running of efforts in both sectors. Nevertheless, corruption remains rampant in many countries, continuing to siphon off valuable resources and economic gains. Corruption undermines everything the law enforcement community works towards. It impoverishes whole communities, and threatens the safety and security of the many for the benefit of a very few.
A person is corrupt when he is dishonest in his intentions and actions. So corruption means dishonesty in thinking and action. Bribery, using one's official position improperly, and using money or material of others without their permission are some very common forms of corruption. Political corruption means cheating in elections, buying the loyalty of members of assemblies, offering ministries to them to win their support, and so on. Bribery is the most common form of corruption in different countries and societies. We find many people offering and giving money and things to government servants to get their work done. We see students giving money to the staff in examination halls to copy, which can be called educational corruption. It brings down the standards of education. Selling goods on the black market is a serious form of corruption. Many businessmen sell things that are in short supply at very high prices. Today we find several other forms of corruption all around. Corruption is the result of the wide gap between rich and poor in our society. Poor government servants like policemen and clerks sometimes have to take bribes to meet the expenses of their families. Government should try to improve their economic condition by increasing their salaries. Political corruption during elections, bribery, and the use of recommendations and connections to get jobs should be checked through strict rules.
In many parts of the world, a major part of the problem in dealing with public sector or government bodies is corruption. No doubt, corruption has been around since time immemorial and, indeed, may well be an engrained trait of human nature; nevertheless, most governments and technologists are interested in figuring out what means may be created to combat it. Public corruption can be largely attributed to government intervention in the economy. Therefore, policies aimed at liberalization, stabilization, deregulation, and privatization can sharply reduce the opportunities for corruption (Ndou, 2004; Dong and Tae, 2007). High levels of corruption are present where institutional mechanisms to combat corruption are weak or not used, and where a system of simple internal checks and balances does not exist. In such cases, an entrenched political elite dominates and exploits economic opportunities, manipulating them in return for personal gains (Fangzheng, 2010).
Why does Corruption threaten good governance?
Corruption is a manifestation of institutional weakness, poor ethical standards, skewed incentives and insufficient enforcement. When corrupt officials slowly drain the resources of a country, its potential to develop socially and to attract foreign investment is diminished, making it incapable of providing basic services to or enforcing the rights of its citizens. Furthermore, corruption fuels transnational crime. Terrorists and organized criminals could not carry out their illegal activities without the complicity of corrupt public officials. It threatens security and damages trust in systems which affect people's daily lives. It is a particular concern for the world's police and judicial systems, as corruption in one country can compromise an entire international investigation. Corruption itself does not produce poverty, but it does have a direct and immediate impact on economic growth and good governance, which in turn raises poverty levels. It remains a major obstacle to the achievement of the UN's eight Millennium Development Goals, whose primary aim is to reduce poverty. The most recent analyses indicate that corruption continues to thrive globally. But as the awareness of corruption increases, so too does the understanding of its negative effects on political, economic and social reforms. Transparency International's 2006 report shows that corruption is rampant despite improved legislation and counter efforts. More than US$1 trillion is paid in bribes alone each year, according to a World Bank Institute report, compared to the estimated size of the world economy at that time of just over US$30 trillion.
Corruption indicators
According to a World Bank report, corruption indicators are inexhaustible and the ingenuity of those involved in corruption knows no bounds! You should beware of: pressure exerted for payments to be made urgently or ahead of schedule; and the payment of, or making funds available for, high-value expenses or school fees, etc., on behalf of others.
The Role of ICT in Fight against Corruption
ICT has had a tremendous impact on our lives. We can do almost nothing without ICT today. The ICT industry over the past decades has led to tremendous changes and progress in economic and social development around the world and has opened up greater opportunities for even faster growth and change than has already occurred. ICT has helped improve efficiency in many sectors and increased the volume and quality of outputs in almost all sectors, from industry to manufacturing, construction to banking, finance, health, education, development assistance and government services. With technological advances, the internet and cryptography, the risk associated with leaking documents conveying important information can be lowered substantially. The use of ICT for faster communications, exchange of information, and improved recording and monitoring of information can greatly improve the operational efficiency of government agencies, for example in financial transactions and accounts, personnel information, land ownership records, tax payments, birth registration, etc. The increased automation of processes reduces the need for person-to-person contact in the delivery of government services to the people, and the less contact there is, the less opportunity there is for rent-seeking behavior. Increased automation improves the quality of services delivered to the public and also reduces the cost of doing business. As the cost of doing business declines and becomes "predictable", business development and investment is stimulated, further expanding economic growth. The growth of e-governance services is still at a very early stage in Nigeria and many developing countries in Africa. ICT is increasingly being used to strengthen good governance. The application of ICT to improve the functions and service delivery of the government is known as "e-Governance". This is where ICT can have an impact on improving governance and on preventing and combating corruption. If implemented strategically, e-governance can improve efficiency, accountability and transparency and contribute to establishing governments which are small in size but more efficient and effective in service delivery. ICT facilitates citizens' right to information, augments citizen-government dialogue and encourages people's participation in the administrative process to ensure justice, transparency, accountability and confidence in governance. ICT is also an effective tool to realize people's right to information by making disclosure and its monitoring quicker and easier than ever before.
E-Government and Corruption
Corrupt actions are so diverse and the concept of corruption so generic that any precise definition of institutional corruption is difficult to frame. Corruption can be broadly defined as the abuse of public power for the benefit of private individuals (Rose-Ackerman, 1999). Corruption includes both monetary and non-monetary benefits. Common forms of corruption are bribery, extortion, influence peddling, nepotism, fraud, and opportunism. Garcia-Murillo and Vinod (2005) identify the main drivers of corruption to be economic, political, and cultural factors, which vary from country to country. ICT can, through e-governance systems, support the fight against corruption by raising accountability through digital footprints, raising transparency by publicizing regulations and fees, and reducing face-to-face interaction where most requests for bribes take place. ICT such as mobile phones effectively empower citizens by allowing people to collaboratively gather and share evidence of corrupt practices. In other words, ICT can assist citizens willing to challenge the systems that condone corruption. Kaufmann, Kraay and Mastruzzi (2003, 2005) and Lambsdorff (2001) have identified the drivers of corruption as: (i) monopoly of power; (ii) discretion; and (iii) lack of accountability and transparency. It is useful to distinguish between types of corruption and to identify those which e-Governance can most readily fight. The first group of corrupt practices is petty bureaucratic corruption (i.e., low-level administrative corruption). The second group of corrupt activities consists of strategies aimed at self-serving asset stripping by state officials. The third group of corrupt activities consists of large political corruption (grand corruption) (Shah & Schacter, 2004). The Internet minimizes the opportunities for public officials to monopolize access to relevant information and to extract bribes from their clients.
In addition, the use of ICTs in government can foster the anti-corruption struggle against 'self-serving asset stripping' by state officials (Cisar, 2003; Dorotinsky, 2003, 2005; Talero, 2005; Yum, 2003, 2005), and ICTs may potentially play an important role in preventing some types of grand political corruption (Prahalad, 2005). E-Governance represents a significant opportunity to move forward with qualitative, cost-effective government services and a better relationship between citizens and government (Fang, 2002). The potential benefits of using ICT in government include, but go beyond, efficiency and effectiveness. By making available interactive access to and use of information by people who use government services, e-Governance initiatives hope to empower citizens (Gage, 2002) and to improve relationships between governments and citizens by helping build new spaces for citizens to participate in their overall development (Gasco, 2003). However, if e-Governance initiatives are to curb corruption, the design of such systems needs an appropriate conceptual framework and needs to be understood by policy makers and public managers (Cisar, 2003; Mahmood, 2004; Tangkitvanich, 2003).
E-government is believed to reduce corruption by promoting good governance and strengthening reform-oriented actors. Specifically, e-government can reduce corrupt behavior externally, by enhancing relationships with citizens, and internally, by effectively controlling and monitoring employees' behavior (Ndou, 2004; Dong & Tae, 2007). In addition, there are several compelling case studies of e-government's impact on combating corruption. In the area of internal control, successful cases include the OPEN (Open Procedures Enhancement for Civil Application) system of the Seoul Municipality (Kim & Lee, 2009), which reduced human intervention as a means of corruption prevention, and the e-procurement system in Brazil (Von Halddenwang, 2004).
E-governance in Combating Corruption
One of the ways to combat corruption is by automating the G2C interactions that lie at the heart of the e-Society. A critical component of the e-society is the digital content that users can access. User interactions with digital or electronic means have been grouped in a number of ways (Homburg and Bekkers, 2002; Karlins, 2005). The introduction of ICT can reduce corruption by improving the enforcement of rules, lessening the discretion of officials, and increasing transparency. Yet, while ICT eliminates many opportunities for corruption for those who do not fully understand the new technology, it opens up new corruption vistas for those who understand the new systems well enough to manipulate them; proper safeguards are therefore needed. E-government is the use of information and communication technology (ICT) to promote more efficient and cost-effective government, more convenient government services, greater public access to information, and more government accountability to citizens. ICT-enabled reforms can yield many benefits, including lower administrative costs and faster and more accurate responses to requests and queries.
Another reason for relatively slow ICT adoption is that, to ensure financial accountability, government agencies need to go through lengthy and secure approval processes. To prevent undue influence by any one official, many decisions along the way are made by committees, which can lead to an unclear focus as compromises are made. In the Philippines, the Department of Budget and Management (DBM) has established an online e-procurement system (http://www.procurementservice.org/) that allows public bidding for suppliers to meet government needs. The system has reportedly led to increased transparency in transactions and is favorably regarded by suppliers (http://www1.worldbank.org/publicsector/egov/philippines_eproc.htm).
In Thailand, the Asian Development Bank (ADB) is working with the Asia Foundation to strengthen the National Counter Corruption Commission (NCCC) and senate procedures for impeachment, to develop a strategic plan to help the Office of the NCCC better carry out the expanded mandate of the office under the new constitution, and to strengthen civil society's capacity for advocacy and for monitoring accountability mechanisms (ADB, 2001). ADB also supports a pilot project to computerize the task of identifying corruption abuses; at present, the NCCC must process paper forms and financial reports for thousands of government officials. In Seoul, Republic of Korea, the Online Procedures Enhancement for Civil Applications (OPEN) system allows citizens to monitor applications for permits or approvals where corruption is most likely to occur, and to raise questions in the event of any irregularities. Examples of civil applications are: building permits and inspections, approval and sanction of entertainment establishments and song bars, and decisions on and changes of urban development plans (http://english.metro.seoul.kr/government/policies/anti/civilapplications/index.cfm).
Transparency and Corruption
The primary factors that contribute to the growth of corruption are a low probability of discovery and a perceived immunity against prosecution. Secrecy in government, restrictions on access to information by citizens and the media, and ill-defined, complex and excessive rules, procedures and regulations can all lead to a low chance of discovery. A lack of transparency in the functioning of government agencies can make it easy for perpetrators to cover their tracks, and unearthing corruption becomes very difficult. The weak character of the institutions that are supposed to investigate charges of corruption and prosecute the guilty, as well as an inefficient or corrupt judiciary, further exacerbate the problem of corruption and facilitate immunity against prosecution. E-governance is the use of ICT by government, civil society and political institutions to engage citizens through dialog and feedback and to promote greater participation of citizens in the process of governance of these institutions. For example, e-governance covers the use of the Internet by politicians and political parties to elicit views from their constituencies in an efficient manner, or the ability of civil society to publicize views which are in conflict with the ruling powers. Many governments have chosen to go online in departments such as customs, income tax, sales tax, and property tax, which have a large interface with citizens or businesses and are perceived to be more corrupt. For instance, the online registration of the Joint Admission Matriculation Board (JAMB) has brought some transparency to admission into Nigerian public universities. Government procurement is also seen to be an area where corruption thrives. The very process of building an online delivery system requires that rules and procedures are standardized across regions and made explicit (amenable to computer coding). Web publishing of government information builds accountability by providing documentation that citizens can use to substantiate their complaints against corrupt practices. Increasing access to information, presenting the information in a manner that leads to transparency of rules and their application in specific decisions, and increasing accountability by building the ability to trace decisions and actions to individual civil servants represent the successive stages in this hierarchy.
Impacting Corruption through E-Government: the Way Forward
Although there is some evidence that the use of ICT in government can also enhance opportunities for corruption (Richard, 1998), e-government reduces corruption in several ways. It increases the chances of exposure by maintaining detailed data on transactions, making it possible to track and link the corrupt with their wrongful acts. E-government can be used to combat corruption in two ways. First, e-government can become one of the key components of a broader anti-corruption strategy, as demonstrated by the OPEN system installed in the Seoul municipality in Korea. By reducing administrative corruption in service delivery, e-government can reduce the tolerance for corruption among citizens, who would no longer be required to compromise their honesty by paying bribes to public officials. E-government can lead to transparency provided that the legal framework supports free access to information. Most developing countries are not fully ready to embrace a comprehensive program of e-government.
According to Wei Zhang, Yu Zhang and Bo Wang (2011), China's efforts to promote e-anticorruption can be classified into four categories based on information communication theory, which describes the whole process of information flow as an orderly sequence of information generation, delivery, sharing, analysis and feedback. The categories are: making government information public, monitoring real-time governmental behavior and collecting online public opinion, analyzing high-risk corruption points and making predictions where possible, and sharing corruption-related information with interrelated bodies. The following strategies are used:
• Information publicizing: the basic component of transparent government. An anti-corruption system compels public servants to behave properly by increasing interaction with the public, so that the exercise of relevant government powers is more authoritative and fair.
• Instant monitoring: to be proactive in analyzing the corruption situation, an anti-corruption system needs strong data support. As input to the system, data should be collected first-hand and be trustworthy. The data mainly come from electronic supervision, statistics on business activities and general information pertaining to online public opinion, e.g. online reporting.
• Prevention and control: preventing corrupt activities is the most valuable part of the whole anti-corruption process and a core function of the system. By analyzing the data automatically, the system raises alerts when abnormal conditions appear (a minimal illustrative sketch of such an alert rule is given after this list). As a main part of the output, the preventing/warning function reacts at earlier stages and lessens the adverse effects of corruption.
• Data sharing: each system has a database to store the information involved in ordinary operation, but because of technical and communication limitations in collecting information, a single database cannot by itself supply completely dependable and sufficient information for scientific analysis. Data sharing between different systems is therefore necessary.
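The alert logic described in the prevention-and-control strategy above can be illustrated with a deliberately simple sketch. Flagging transactions whose amounts deviate strongly from the historical mean is only one hypothetical rule; the field names, the 3-sigma threshold, and the example data below are assumptions and are not taken from any system cited in this paper.

```python
# Minimal illustrative anomaly-alert rule (assumed fields and threshold, not
# drawn from any cited e-anticorruption system).
from statistics import mean, stdev

def flag_abnormal(transactions, threshold=3.0):
    """Flag transactions whose amount deviates strongly from the historical mean."""
    amounts = [t["amount"] for t in transactions]
    mu, sigma = mean(amounts), stdev(amounts)
    alerts = []
    for t in transactions:
        if sigma > 0 and abs(t["amount"] - mu) / sigma > threshold:
            alerts.append(t)          # route to auditors for manual review
    return alerts

# Example: a single payment far above the usual range is flagged.
history = [{"id": i, "amount": 100 + i} for i in range(30)]
history.append({"id": 99, "amount": 5000})
print([t["id"] for t in flag_abnormal(history)])   # -> [99]
```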
CONCLUSION
The rapid development and diffusion of ICT have led to the notion of e-government, which has quickly been embraced by many governments. Using ICT, especially the Internet, governments not only can link databases in different departments to streamline the back end of public administration processes, but can also improve the interface through which governments interact with their citizens (Mahmood, 2004). Having largely evolved from the e-business framework, e-government services have now evolved from websites offering basic government information to more value-added, transactional-level services that offer convenience, efficiency, and transparency (Dwivedi et al., 2009). By reducing administrative corruption in service delivery, e-government can reduce the tolerance for corruption among citizens, who would no longer be required to compromise their honesty by paying bribes to public officials. In addition, a massive societal education effort is required to reinforce fundamental values like honesty. E-government can lead to transparency provided that the legal framework supports free access to information. Until a few years ago most countries still had strict national secrecy laws, and secrecy laws are still in effect in much of the developing world. Most developing countries are not fully ready to embrace a comprehensive program of e-government; rather than wait for total readiness, an approach of learning by trial and consolidating small gains is recommended. Corruption is rooted in the cultural, political, and economic circumstances of those involved. The introduction of ICT can reduce corruption by improving the enforcement of rules, lessening the discretion of officials, and increasing transparency. Yet, while ICT eliminates many opportunities for corruption for those who do not fully understand the new technology, it opens up new corruption vistas for those who understand the new systems well enough to manipulate them; proper safeguards are therefore needed. Improving the enforcement of rules is clearly the best way to combat corruption, and the introduction of e-Government can play a major role in this context because it automates several processes. Automating existing bureaucratic processes that are defective, however, will not yield good results. In this paper, we propose a methodology to combat corruption using information and communication technologies (ICT) that entails process restructuring. While e-Governance holds great promise in many developing countries, substantial challenges remain to be tackled; many ICT projects fail because of insufficient planning capacity and political instability.
ACKNOWLEDGEMENT
The authors would like to acknowledge all the authors of articles cited in this paper.In addition, the authors gratefully acknowledge UTM, Research Universiti Teknologi Malaysia for their support and encouragement.
• Payments being made through a third-party country, e.g. goods or services supplied to country 'A' but payment being made, usually to a shell company, in country 'B'
• Abnormally high commission percentage being paid to a particular agency; this may be split into two accounts for the same agent, often in different jurisdictions
• Private meetings with public contractors or companies hoping to tender for contracts
• Lavish gifts being received
• Individual never takes time off even if ill, or holidays, or insists on dealing with specific contractors him/herself
• Making unexpected or illogical decisions when accepting projects or contracts
• Unusually smooth processing of cases where the individual does not have the expected level of knowledge or expertise
• Abusing the decision process or delegated powers in specific cases
• Agreeing contracts not favorable to the organization, either in terms or in time period
• Unexplained preference for certain contractors during the tendering period
• Avoidance of independent checks on tendering or contracting processes
• Raising barriers around specific roles or departments which are key in the tendering/contracting process
• Bypassing normal tendering/contracting procedures
• Invoices being agreed in excess of contract without reasonable cause
• Missing documents or records regarding meetings or decisions
• Company procedures or guidelines not being followed
 | 5,618.2 | 2013-01-20T00:00:00.000 | [
"Political Science",
"Computer Science"
] |
Solutions of Indefinite Equations
Indefinite equations are an unsolved problem in number theory. Through exploration, the author has been able to use a simple elementary algebraic method to find the solutions of all three-variable indefinite equations. In this paper, we will introduce and prove the solutions of the Pythagorean equation, Fermat's theorem, the Beal equation and so on.
Definition of Indefinite Equation
An indefinite equation (also called a Diophantine equation) is an equation in which all unknowns and known numbers are positive integers [1] [2]. The indefinite equation, especially the higher-order indefinite equation, is a difficult problem which has not been solved thoroughly in number theory. In this paper, we will introduce their new solutions one by one. Theorem 1. Consider the three-variable indefinite equation A + B = C. (1)
Some Theorems on the First-Order Indefinite Equation
If one of the terms is an arbitrary positive integer, Equation (1) must have a solution.
Proof. Suppose C is any positive integer and A and B are two positive integers; then the sum of A and B must be a positive integer. We can use the number axis to verify that C is also a positive integer. □ The Pythagorean theorem has been proved by many kinds of proofs [3]; namely, the Pythagorean theorem is a^2 + b^2 = c^2. However, a simple way of finding its solutions has never been seen. Next we will find and prove the solutions.
Theorem 2. If positive integers a, b, c are a series of positive integer solutions of the Pythagorean equation a^2 + b^2 = c^2.
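The explicit solution formulas referenced by Theorem 2 are not preserved in this extracted text, so the sketch below uses the classical Euclid parametrization (m^2 - n^2, 2mn, m^2 + n^2) instead of the author's own construction; it is only an illustration of how all primitive solutions of the Pythagorean equation can be generated.

```python
# Generate primitive Pythagorean triples via Euclid's parametrization
# (not the formulas of Theorem 2, which are not recoverable from this text).
from math import gcd

def primitive_triples(limit):
    """Yield primitive Pythagorean triples (a, b, c) with c <= limit."""
    m = 2
    while m * m + 1 <= limit:
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:   # conditions for primitivity
                a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
                if c <= limit:
                    assert a * a + b * b == c * c
                    yield (a, b, c)
        m += 1

print(list(primitive_triples(50)))   # [(3, 4, 5), (5, 12, 13), (15, 8, 17), ...]
```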
The definition of a higher-degree indefinite equation is an indefinite equation in which the exponents of all powers are greater than or equal to 3, as in A^n + B^n = C^n where n ≥ 3, or A^x + B^y = C^z where x ≥ 3, y ≥ 3, z ≥ 3.
Fermat's Last Theorem
We know that Fermat's Last Theorem was proved by the British mathematician Andrew Wiles using the properties of elliptic curves (1993) [5]. Now we present a proof using elementary algebra, as follows.
Theorem 3 (Fermat's Last Theorem). If A, B, C are three positive integers and n ≥ 3, then the equation A^n + B^n = C^n (7) has no integer solutions. Proof. Suppose Equation (7) holds. By Equations (7) and (10), and comparing Equations (11) and (12), we have: for Equation (12) to be true we also need q = C, and we obtain that if Equation (12) is true, the following equations must also be true. Obviously, these equations are contradictory for n ≥ 3, so this is impossible. Hence the hypothesis is not valid, Equation (11) is not true, and Equation (7) has no integer solutions. □
Beal's Conjecture
Beal's conjecture: if the indefinite equation A^x + B^y = C^z is true, where x ≠ y, y ≠ z, x ≠ z and x ≥ 3, y ≥ 3, z ≥ 3, then A, B, C must have a common factor. We have proved that Beal's conjecture is true in the paper "Proof of Beal Conjecture" [6].
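As a small numerical illustration of the statement above (and not of the paper's proof or of the L-Algorithm introduced in the next section), the brute-force search below enumerates solutions of A^x + B^y = C^z with small bases and exponents at least 3 and checks that each one found shares a common factor; the search ranges are arbitrary choices.

```python
# Brute-force check of Beal-type solutions in a small range (illustrative only).
from math import gcd

def beal_solutions(max_base=20, max_exp=5):
    sols = []
    powers = {}                            # value -> list of (C, z) with C^z = value
    for c in range(1, max_base + 1):
        for z in range(3, max_exp + 1):
            powers.setdefault(c ** z, []).append((c, z))
    for a in range(1, max_base + 1):
        for b in range(1, max_base + 1):
            for x in range(3, max_exp + 1):
                for y in range(3, max_exp + 1):
                    for (c, z) in powers.get(a ** x + b ** y, []):
                        sols.append((a, x, b, y, c, z, gcd(gcd(a, b), c)))
    return sols

for a, x, b, y, c, z, g in beal_solutions():
    print(f"{a}^{x} + {b}^{y} = {c}^{z}, gcd = {g}")
    assert g > 1   # every solution found in this range has a common factor
```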
Solutions by L-Algorithm
Equations of the form of Equation (14) can be treated in this way; this is the solution for the Beal equations. Therefore, we can use the above method to solve other indefinite equations. Example 1. Solve the following indefinite equation A + B = C. (16) Firstly, a, b and d are selected to satisfy a^x + b^y = c.
Conclusion
Through the above introduction, we understand the new solution of the indefinite equation, which shows that indefinite equations can be solved by the methods of elementary algebra. Applying this method flexibly, we can solve more higher-degree indefinite equations. It adds a new way of solving higher-order indefinite equations to number theory.
Conflicts of Interest
The author declares no conflicts of interest regarding the publication of this paper. | 845.6 | 2020-09-08T00:00:00.000 | [
"Mathematics"
] |
Enantiocontrol by assembled attractive interactions in copper-catalyzed asymmetric direct alkynylation of α-ketoesters with terminal alkynes: OH⋯O/sp³-CH⋯O two-point hydrogen bonding
Authors: Schwarzer, Martin C.; Fujioka, Akane; Ishii, Takaoki; Ohmiya, Hirohisa; Mori, Seiji; Sawamura, Masaya. Citation: Chemical Science, 9(14): 3484-3493 (2018).
Introduction
Steric strain, also called steric repulsion, between catalysts and substrates plays an important role in enantioselective catalysis, while catalyst design utilizing catalyst-substrate secondary attractive interactions such as electrostatic interactions, hydrogen bonding, π/π stacking and C-H/π interactions may produce advanced concepts. 1 In this regard, enantiocontrol without using catalyst-substrate steric strain has rarely been elucidated, but it should be explored more generally. 2 Our previous study on the copper-catalyzed asymmetric direct alkynylation of aldehydes introduced a series of chiral prolinol-phosphine ligands (Scheme 1). 3,4 Density functional theory (DFT) calculations indicated the occurrence of two-point hydrogen bonding comprising OH⋯O and non-classical sp³-CH⋯O hydrogen bonds, [5][6][7] which orient the carbonyl group of the prochiral aldehyde. We deduced the enantioselectivity to be due to a steric repulsion between the substituents of the aldehyde (R1) and the alkyne (R2) in the reaction pathway leading to the minor enantiomer.
Scheme 1: Copper-catalyzed enantioselective alkynylation of carbonyl compounds with chiral prolinol-phosphine ligands. Comparison between the reaction of aldehydes (ref. 3a) and α-ketoesters (this work). Non-classical hydrogen bonding with nonpolar sp³-C-H bonds, steric repulsion, and dispersive attractions are highlighted.
Herein, we report that the copper-catalyzed asymmetric direct alkynylation of α-ketoesters with chiral prolinol-phosphine ligands occurred with a high level of enantioselectivity through discrimination of the two substituents on the ketonic carbonyl, R1 and CO2R2, by the chiral catalyst. DFT calculations including Grimme's empirical dispersion correction 8 indicated that steric repulsions between the catalyst and the substrates do not play a major role; rather, the enantioselectivity is determined by assembled attractive catalyst-substrate interactions. Namely, in addition to a two-point hydrogen bonding involving non-classical sp³-CH⋯O hydrogen bonding, dispersive attractions 9 occur between the chiral ligand and the substrates to allow steric-strain-free enantioselection.
From the viewpoint of organic synthesis, catalytic enantioselective direct alkynylation of carbonyl compounds with terminal alkynes is a straightforward and atom-economical strategy for accessing enantioenriched propargylic alcohols, which are versatile building blocks for the asymmetric synthesis of more complex organic molecules. 10 Substantial progress has been made in the alkynylation of aldehydes, affording chiral secondary propargylic alcohols through the invention of various efficient chiral catalyst systems with different metals such as Zn, 11 In, 12 Cu, 3 and Ru. 13 However, the synthesis of chiral tertiary propargylic alcohols through the corresponding reaction of ketones is still challenging. Even for the reaction of activated ketones, there are only limited examples that report reasonable catalytic activities and high enantioselectivities. [14][15][16][17] For instance, Jiang and co-workers achieved high enantioselectivities in the reaction of α-ketoesters through a modification of Carreira's Zn-β-aminoalcohol catalyst system. 14 However, high enantioselectivities were achieved only with a stoichiometric amount of the chiral catalyst or under catalytic (5.5-20 mol%) conditions utilizing excess alkynes as solvents, with a limited substrate scope. Oshima, Mashima, and co-workers introduced new Rh-Phebox catalysts to achieve high enantioselectivities for the reaction of trifluoropyruvates, and Song, Gong, and co-workers later introduced a similar Rh catalyst system. 15 Shibasaki, Kanai, and co-workers reported moderate enantioselectivities in the Cu-catalyzed reaction between trifluoroacetophenone and phenylacetylene. 16 Recently, Meggers and co-workers reported high enantioselectivities with a broader scope of trifluoromethyl aryl ketones in studies on ruthenium complexes with metal-centered chirality. 17 Thus, a chiral catalyst system allowing high enantioselectivity with a broad substrate scope is awaited, while excellent catalyst systems have been developed specifically for trifluoromethyl ketones. 15,17
Optimization
Initial experiments to find suitable reaction conditions were conducted for the reaction between methyl 2-phenylglyoxylate (1a, 0.2 mmol) and phenylacetylene (2a, 1.2 eq.) in the presence of CuCl (10 mol%), K2CO3 (30 mol%), and different phosphine ligands (L1-8) at 25 °C over 48 h (Table 1). The reaction with the prototype prolinol-phosphine chiral ligand L1, which consists of triphenylphosphine and a simple prolinol linked to each other by a methylene group, occurred cleanly to give the corresponding tertiary propargylic alcohol (3aa) in high yield (92% yield after isolation) with moderate enantioselectivity (56% ee) in favor of the R configuration (entry 1). The secondary (L2) or tertiary (L3) alcohol-type ligands, which have one or two methyl groups at the position α to the hydroxyl group, gave slightly better enantioselectivities (58% and 67% ee), but the reaction occurred more slowly and the yield dropped to a moderate level (entries 2 and 3). The neopentyl-substituted ligand L4, which is the optimal ligand for the reaction of aliphatic aldehydes, 3a was only comparable with the tertiary alcohol ligand (L3) concerning both product yield and enantioselectivity (71% yield and 67% ee) (entry 4). Thus, ligand modification at the alcohol moiety was not fruitful. In contrast, modification of the P-substituents had a significant impact. The introduction of electron-donating MeO groups (L5) at the para-position of the two P-phenyl groups caused a dramatic increase in the product yield (96%) with a slight improvement of the enantioselectivity (70% ee) as compared with the results with the parent Ph2P-type ligand (L4), while substitution with electron-withdrawing F atoms (L6) at the para-positions was unfavorable (entries 5 and 6). Finally, our ligand screening led to identification of the Cy2P-type ligand (L7) with a neopentyl substituent at the alcohol moiety as the most suitable. With L7, the reaction occurred quantitatively (97% yield) with enantioselectivity as high as 88% ee (entry 7). The corresponding experiment without using a glove box gave an essentially identical result concerning both the product yield and the enantioselectivity (entry 8).
The nature of the solvent had a strong impact on the yield and the enantioselection (Table 1, entries 9-11). The use of aprotic solvents such as THF, dioxane or CH3CN in place of the protic solvent t-BuOH for the reaction with L7 caused significant decreases in the product yields (26%, 31% and 35%) and the enantioselectivities (74%, 77% and 78% ee). Protection of the hydroxy group in L4 as a methyl ether (L8) inhibited the reaction completely (entry 12). Thus, the favorable effect of the protic nature of the solvent and the critical role of the alcoholic site in the prolinol-phosphine ligand were confirmed, as in our previous study on the asymmetric alkynylation of aldehydes. 3a
Scope of ketoesters
Various α-ketoester derivatives were subjected to the reaction with phenylacetylene (2a) with the Cu-L7 catalyst system in t-BuOH or i-PrOH (Table 2). The isopropyl and tert-butyl 2-phenylglyoxylates (1b and 1c) also served as substrates, and comparison of the results with those of the methyl ester (1a) showed an increase in the enantioselectivity with increasing steric demand of the ester moiety (entries 1 and 2). The 2-hydroxyethyl ester (1d) was also a suitable substrate (entry 3).
Overall, the protocol with the Cu-L7 catalyst system in t-BuOH or i-PrOH is applicable to a range of α-ketoesters including 2-(hetero)arylglyoxylates and 2-alkylglyoxylates. However, tert-butyl pyruvate did not react with phenylacetylene (2a) but gave a mixture of self-condensation products. The reaction between tert-butyl trifluoropyruvate and 2a resulted in decomposition of the ketoester without forming the desired alkynylation product.
Scope of alkynes
Various enantioenriched tertiary propargylic alcohols with different substituents at the alkyne terminus were obtained (Table 3). The aromatic alkyne (2b) with an electron-donating methoxy substituent reacted with high product yield and enantioselectivity (entry 1). On the other hand, substitution of the aromatic ring with the electron-withdrawing trifluoromethyl or methoxycarbonyl groups resulted in decreases in the yield and the enantioselectivity (entries 2 and 3). Sulfur- or nitrogen-containing heteroaromatic groups were acceptable as substituents of the alkyne substrate (entries 4 and 5). The cyclic and acyclic 1,3-enyne derivatives (2g and 2h) afforded the corresponding conjugated propargylic alcohols (3cg and 3ch) (entries 6 and 7). Alkylacetylenes were also suitable substrates (entries 8-11). The reaction of the linear alkylacetylene 2i with 1f proceeded with reasonably high enantioselectivity (entry 8). The α-branched aliphatic alkyne 2j reacted with high yield and enantioselectivity (entry 9). Propargyl ether 2k and propargylamine 2l also participated in the reaction, albeit with moderate yields and enantioselectivities (entries 10 and 11). The tert-butyldimethylsilylacetylene 2m underwent a clean reaction with moderate enantioselectivity (entry 12). No reaction occurred with tert-butylacetylene and triisopropylsilylacetylene, even at higher temperatures.
Overall, various terminal alkynes, such as phenylacetylene derivatives, conjugated enynes, linear or α-branched alkylacetylenes, protected propargyl alcohol or amine derivatives, and tert-butyldimethylsilylacetylene, were acceptable substrates. However, the substituent of the alkyne had no small effect on reactivity and enantioselectivity. Furthermore, it should be noted that the reactivity and selectivity profile depending on the alkyne is significantly different between the alkynylation of aldehydes and that of ketoesters: the reaction of ketoesters is more sensitive to the steric and electronic effects in the alkyne. In particular, the unreactiveness of tert-butylacetylene and triisopropylsilylacetylene is in sharp contrast to the results of the alkynylation of aldehydes. 3a In the latter, the bulky triisopropylsilylacetylene was the most favorable for enantiocontrol, with a broad scope of aldehydes.
Quantum-chemical studies
The direct alkynylation of ketoesters with terminal alkynes exhibits behavior towards ligands and solvents similar to that of the previously studied reaction of aldehydes under comparable conditions. 3a The similarity of the substrates and of the reaction conditions suggests that the mechanisms are analogous. The hydroxyl group of the chiral prolinol-phosphine ligand is a key element in the catalytic activity and in the enantioselectivity, by forming a highly directional hydrogen bond with the carbonyl oxygen. Additionally supporting this coordination is a non-classical sp³-CH⋯O hydrogen bond originating from the pyrrolidine moiety of the ligand, which has enough flexibility to bend inwards to the reactive center to allow this interaction. A proposed catalytic reaction pathway is shown in Scheme 2. The reaction starts with the formation of the (η¹-alkynyl)copper(I) complex (R), which is also the resting state, through association of the ligand, the metal center, and the deprotonated alkyne (2-H+). The ketoester (1) coordinates via the carbonyl oxygen to the hydroxy group of the ligand, bringing the reacting carbon atoms into proximity. This association complex (AC) is the precursor for the stereoselective carbon-carbon bond formation, which leads to the product complex (PC) in which the tertiary propargylic alcohol (3) is bound via its π-bonds to the copper center. Exchange with the substrate alkyne (2) regenerates the resting state (R) and therefore completes the catalytic cycle.
To further elucidate the origin of the enantioselectivity of the reaction, quantum-chemical calculations based on the transition states of the aldehyde reaction have been performed. Full geometry optimizations using the BP86 density functional 18 including Grimme's empirical dispersion correction (DFT-D3 with Becke-Johnson damping) 8 in conjunction with the def2-SVP basis set 19 have been carried out with the Gaussian 09 program suite. 20 Density fitting has been employed to speed up the calculations. 19c,21 This level of theory is denoted as DF-BP86-D3(BJ)/SVP. Normal coordinate analysis has been performed to confirm convergence towards stationary points and to estimate thermal corrections at 298.15 K and 1 atm. Calculations following the intrinsic reaction coordinates (IRCs) from first-order saddle points (transition states) to local minima (reactants and intermediates) have been used to describe the reaction pathways (see ESI† for details). 22 To gain a better understanding of the energetics of this system, single-point calculations have been carried out on the converged geometries using the larger basis set def2-TZVPP, 19 together with an estimate for the solvent (polarizable continuum model, PCM, with tert-butanol, ε = 12.47). 23 These energy values are discussed throughout the paper, and this level is denoted as DF-BP86-D3(BJ)-PCM(tBuOH)/TZVPP//DF-BP86-D3(BJ)/SVP. Relative Gibbs energies (electronic energies in the ESI†) in kcal mol⁻¹ for the calculated reaction pathways of the model system (Table 1, entry 2) are summarized in Table 4.
For the initial computations, the system of 1a, 2a and L2 was chosen, since phenyl moieties have a small conformational space. To explain the selectivity of the reaction, it is sufficient to calculate the transition state of the C-C bond formation (L2-TS) and the connected intermediates (L2-AC and L2-PC). The α-ketoester has two different conformations (s-cis and s-trans) caused by rotation about the single bond connecting the two carbonyl groups (Scheme 2). The s-cis conformer [1a(c)] is about 2.3 kcal mol⁻¹ higher in energy than the s-trans conformer [1a(t)] (Table 4, R + 1a). Since the activation energies of all reaction pathways are larger than this, the rotation is effectively unhindered and both rotamers have to be considered in the evaluation of the reaction mechanism. This is also reflected in the relative energies of the transition states leading to the R product (L2-TS-R), which are lower in energy than the corresponding S pathways (L2-TS-S). The R stereochemistry of the product 3aa is well reproduced by the calculations, and based on the four reaction pathways the overall enantiomeric excess is estimated to be 73.1%. As in the aldehyde system, 3a a non-classical hydrogen bond between the sp³-C-H bond in the pyrrolidine ring and the carbonyl oxygen of the ketoester (1a) (sp³-CH⋯O) is preserved in all optimized transition states, in addition to a normal hydrogen bond donated by the copper-bound hydroxyl group, resulting in directional two-point hydrogen bonding, which orients the ketoester (1a) in a well-defined manner (Fig. 1a). The quantum theory of atoms in molecules (QTAIM) 24,25 allows one to qualitatively estimate the strengths of the classical OH⋯O bond as well as of the non-classical sp³-CH⋯O interaction (Fig. 2). Since the strength of hydrogen bonds is not accessible experimentally, caution should be applied regarding the calculated absolute values (see ESI† for more information). The value of the potential energy density at the bond critical point is proportional to the strength of the hydrogen bond. 25c This value indicates a rather strong classical OH⋯O bond motif, while the non-classical sp³-CH⋯O interaction is less than a tenth of that. This is in line with the previous analysis based on the distances of the respective interactions. 3a As visualized in the space-filling models in Fig. 1b, the stereo-discrimination by the catalyst (Cu-L2) is due to the dispersive attractions between the phenyl moiety of the ketoester (red-coloured) and the phenyl groups of the phosphine moiety (grey) through partial π-stacking in L2-TS-R, 8 as opposed to L2-TS-S, where these moieties are oriented away from each other. Additionally, in the R path the phenyl moiety of the alkyne (blue-coloured) can partially stack with one of the P-phenyl groups of L2 (grey). In the S path, this π-stacking is not possible; instead, the phenyl moiety of the alkyne is in contact with the phenyl group of the ketoester. These non-covalent interactions can be further studied and visualised by analysing the electron density and its derivatives (see ESI† for details). 26 These analyses also reveal the importance of dispersive effects for the sp³-CH⋯O interactions, which are similar to classical hydrogen bonds in most respects, but generally weaker. One difference is that the donating CH group is only weakly polarized, which makes the isotropic effects more relevant, while the magnitude of the electrostatic component loses some significance. 5
The classical OH⋯O bond motifs, on the other hand, are already too strong to register in these analyses within the chosen cut-off parameters. Overall, the dispersive attractions are stronger in L2-TS-R than in L2-TS-S.
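For readers unfamiliar with QTAIM-based estimates, the statement that the potential energy density at the bond critical point tracks hydrogen-bond strength is often turned into a rough number with an Espinosa-type relation, E ≈ 0.5·V(r_BCP). The sketch below only illustrates that conversion; the V values are invented placeholders, not the ones computed in this work.

```python
# Rough hydrogen-bond energies from V(r_BCP) via an Espinosa-type relation
# (illustration only; placeholder V values, not the paper's data).
HARTREE_TO_KCAL = 627.509

def hb_energy_kcal(v_bcp_au):
    """Approximate H-bond energy (kcal/mol) from V at the bond critical point (a.u.)."""
    return 0.5 * v_bcp_au * HARTREE_TO_KCAL

for label, v in [("OH...O (placeholder)", -0.040), ("sp3-CH...O (placeholder)", -0.004)]:
    print(f"{label}: ~{hb_energy_kcal(v):.1f} kcal/mol")
```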
Calculations for a more extended system with the L7 chiral ligand yield similar conclusions (Table 5 and Fig. 3). The bulkier P-cyclohexyl moieties, as well as the inclusion of the neopentyl moiety in L7, lead to a higher stereoselectivity, and thus the estimated enantiomeric excess is 99.6% (Table 5). The attractive dispersive interactions between the phenyl group of 1a (red) and the P-cyclohexyl substituents (grey) in the R transition states are stronger than the analogous interactions of the methyl group of 1a (red) in the S path (Fig. 3b). Even the partial π-stacking between the phenyl moieties of the alkyne (blue) and the ketoester (red) in L7-TS-S cannot counteract this trend. 26 Non-covalent interactions (cyclohexyl/cyclohexyl) also play an important role for aliphatic substrates like 1q (see ESI† for details). The neopentyl moiety is too far from the reaction center to induce a change in the conformation of the transition state. Thus, these computations do not explain the role of this substituent in improving the enantioselectivity, and we postulate that it may influence the selectivity by blocking coordination of the alcohol solvent to the ligand hydroxyl group.
Scheme 2: Proposed catalytic reaction pathway for the reaction between 1a and 2a catalyzed by the Cu-L system.
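The way an enantiomeric excess is estimated from the relative Gibbs energies of the competing transition states (as quoted above for L2 and L7) can be reproduced with a simple Boltzmann weighting over the R and S pathways. The snippet below is a minimal sketch of that arithmetic; the energy values are placeholders chosen only to give a plausible number, not the Table 4 or Table 5 data.

```python
# Estimate ee from Boltzmann-weighted transition-state Gibbs energies
# (placeholder energies; not the values reported in Tables 4/5).
from math import exp

R_GAS = 1.98720e-3    # kcal mol^-1 K^-1
T = 298.15            # K

def estimated_ee(dg_R, dg_S):
    """dg_R, dg_S: relative TS Gibbs energies (kcal/mol) for the R and S pathways."""
    wR = sum(exp(-g / (R_GAS * T)) for g in dg_R)
    wS = sum(exp(-g / (R_GAS * T)) for g in dg_S)
    return (wR - wS) / (wR + wS) * 100.0

# Two rotamer pathways (s-cis, s-trans) per enantiomer, placeholder energies.
print(f"{estimated_ee([0.0, 0.4], [1.1, 1.6]):.1f}% ee (R)")
```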
The nearly co-planar arrangement of the PhCO moiety of 1a in L2-TS-R and L7-TS-R (Fig. 1 and 2) towards the phosphine substituents implies that the above-mentioned inertness of the 2-(o-tolyl)glyoxylate may be due to Ar-CO twisting, which reduces the ligand-substrate dispersive attractions.This twist instead may also cause steric repulsions towards the acetylide moiety, as well as internal strain.Similarly, the relatively low enantioselectivity in the reactions of 2-(2-furyl)glyoxylate (1l) (Table 2, entry 11) may be because the furyl ring is too small to have sufficient contact with the ligand P-cyclohexyl groups.
For comparison with the previously reported aldehyde system, 3a a model reaction between cyclohexanecarbaldehyde and trimethylsilylacetylene using ligand L2 has been optimized to match the level of theory (see ESI† for details). Similar conclusions can be drawn from these calculations: the stereoselectivity is again due to the attractive dispersion interactions of the cyclohexyl moiety of the aldehyde with the P-phenyl groups of L2, which are present in the R paths but absent in the corresponding S paths. In this regard the systems behave almost identically. However, when the alkyne bears a substituent of extreme steric demand, as in the case of triisopropylsilylacetylene, which was the most preferable substrate in the reactions with aldehydes, the transition states leading to the minor enantiomer will also be destabilized by steric repulsions between the substituent of the aldehyde and the bulky substituent of the alkyne.
Conclusions
Copper-catalyzed asymmetric direct alkynylation of α-ketoesters with terminal alkynes to produce enantioenriched chiral tertiary propargylic alcohols has been developed by employing a chiral prolinol-phosphine ligand. Various α-ketoesters and terminal alkynes participated in the enantioselective reaction, but extreme steric demands in the alkyne, as in tert-butyl- or triisopropylsilylacetylenes, inhibited the reaction. Quantum-chemical calculations show the occurrence of OH⋯O/sp³-CH⋯O two-point hydrogen bonding between the chiral ligand and the carbonyl group of the ketoester at the stereo-determining transition states. Combined with the hydrogen-bonding interactions orienting the ketoester substrate, dispersive attractions between the chiral ligand and the ketoester in the favored transition states, rather than steric repulsions in the disfavored transition states, explain the enantioselectivity of the asymmetric copper catalysis.
Fig. 1
Fig. 1 Comparison of the transition state structures leading to the respective R [left, L2-TS-R(c)] or S [right, L2-TS-S(c)] product complexes with L2 and 1a in the s-cis conformation (Table 4). (a) Stick models showing the developing C-C bond (blue dotted line). Atomic distances (in angstrom) of the OH⋯O/CH⋯O two-point hydrogen bonds are shown as yellow dotted lines. (b) Space-filling models highlighting dispersive substrate-ligand interactions (yellow dotted circles). Red: 1a; blue: acetylide moiety.
Fig. 2
Fig. 2 Laplacian of the electron density of the transition state (left) leading to the R product complex with 1a in the s-cis conformation [L2-TS-R(c)] in the sp³-C-H⋯O/H-O plane, which is indicated on the right. Bond critical points (BCPs) are indicated with blue dots (left) and pink spheres (right), with the corresponding bond paths in light brown. Dotted lines mark areas of charge accumulation and solid lines represent areas of charge depletion. Solid blue lines correspond to zero-flux surfaces, and the orange dot represents a ring critical point. The four terminal phenyl groups are shown as light-purple spheres for clarity (right).
Fig. 3
Fig. 3 Comparison of the transition state structures leading to the respective R [left, L7-TS-R(t)] or S [right, L7-TS-S(t)] product complexes with L7 and 1a in the s-trans conformation (Table 5). (a) Stick models showing the developing C-C bond (blue dotted line). Atomic distances (in angstrom) of the OH⋯O/CH⋯O two-point hydrogen bonds are shown by yellow dotted lines. (b) Space-filling models highlighting dispersive substrate-ligand interactions (yellow dotted circles). Red: 1a; blue: acetylide moiety.
Table 1
Copper-catalyzed enantioselective alkynylation of 1a with 2a under various conditions. a Yield of the isolated product (silica gel chromatography). b Determined by HPLC analysis. c Experiment conducted without using a glove box.
Table 2 (Contd.)
a Yield of the isolated product (silica gel chromatography). b Determined by HPLC analysis. c The reaction was carried out in t-BuOH. d The reaction was carried out for 72 h. e The reaction was carried out at −20 °C.
Table 4
Relative Gibbs energies (electronic energies in the ESI) in kcal mol⁻¹ for the calculated reaction pathways of the model system (Table 1, entry 2).
Table 5
Relative Gibbs energies (electronic energies in the ESI) in kcal mol⁻¹ for the calculated reaction pathways of the model system using L7 as the ligand (Table 1, entry 7).
"Chemistry"
] |
Regulation of calretinin in malignant mesothelioma is mediated by septin 7 binding to the CALB2 promoter
The calcium-binding protein calretinin (gene name: CALB2) is currently considered as the most sensitive and specific marker for the diagnosis of malignant mesothelioma (MM). MM is a very aggressive tumor strongly linked to asbestos exposure and with no existing cure so far. The mechanisms of calretinin regulation, as well as its distinct function in MM are still poorly understood. We searched for transcription factors binding to the CALB2 promoter and modulating calretinin expression. For this, DNA-binding assays followed by peptide shotgun-mass spectroscopy analyses were used. CALB2 promoter activity was assessed by dual-luciferase reporter assays. Furthermore, we analyzed the effects of CALB2 promoter-binding proteins by lentiviral-mediated overexpression or down-regulation of identified proteins in MM cells. The modulation of expression of such proteins by butyrate was determined by subsequent Western blot analysis. Immunohistochemical analysis of embryonic mouse lung tissue served to verify the simultaneous co-expression of calretinin and proteins interacting with the CALB2 promoter during early development. Finally, direct interactions of calretinin with target proteins were evidenced by co-immunoprecipitation experiments. Septin 7 was identified as a butyrate-dependent transcription factor binding to a CALB2 promoter region containing butyrate-responsive elements (BRE) resulting in decreased calretinin expression. Accordingly, septin 7 overexpression decreased calretinin expression levels in MM cells. The regulation was found to operate bi-directionally, i.e. calretinin overexpression also decreased septin 7 levels. During murine embryonic development calretinin and septin 7 were found to be co-expressed in embryonic mesenchyme and undifferentiated mesothelial cells. In MM cells, calretinin and septin 7 colocalized during cytokinesis in distinct regions of the cleavage furrow and in the midbody region of mitotic cells. Co-immunoprecipitation experiments revealed this co-localization to be the result of a direct interaction between calretinin and septin 7. Our results demonstrate septin 7 not only serving as a “cytoskeletal” protein, but also as a transcription factor repressing calretinin expression. The negative regulation of calretinin by septin 7 and vice versa sheds new light on mechanisms possibly implicated in MM formation and identifies these proteins as transcriptional regulators and putative targets for MM therapy.
Background
The Ca2+-binding protein calretinin (CR) serves as an undisputed marker for the diagnosis of human malignant mesothelioma (MM), in particular of the epithelioid type and the epithelioid parts of the mixed type [1,2]; weak CR expression is also found in sarcomatoid MM, possibly indicating a less important and/or different role of CR in this MM cell type [3]. CR is also expressed in human reactive mesothelial cells [4,5], considered as the first step in the transition from healthy flat mesothelial cells covering the pleural cavities to one of the most aggressive and currently therapy-resistant tumor types. Downregulation of CR by CALB2 shRNA in human MM cell lines profoundly decreases cell growth and viability in vitro: lentivirus-mediated delivery of shCALB2 causes MM cells, in particular the ones with an epithelioid morphology, to enter apoptosis within 72 h post-infection [3]. Under these conditions, the intrinsic caspase 9-dependent pathway is activated. Although the immortalized mesothelial cells LP9/TERT1 show strong CR expression [3], shRNA-mediated CR down-regulation affects these non-transformed cells differently: it inhibits cell proliferation as the result of a G1 block. Neither is the viability impaired nor any type of cell death pathway activated.
CR is a fast Ca2+ buffer protein [6,7] modifying the shape of intracellular Ca2+ transients [8]; overexpression of CR reduces the mitochondrial Ca2+ uptake in primary mesothelial cells [9]. Very little is known about the regulation of CR expression in the various tissues, even in the subpopulation of neurons, where CR is expressed under physiological conditions. It is assumed that CR expression is regulated in a rather similar way in humans and in mice, mostly based on the strong conservation of the proximal promoter regions of the human CALB2 and mouse Calb2 genes [10]. An AP2-like element in proximity of the TATA box confers neuron-specific expression of a luciferase reporter gene (luc+) in cultured neurons [11] (for additional details, see Fig. 1 in [12]), necessitating the binding of a nuclear protein present in cerebellar granule cells. Of note, this "AP2-like" element in the promoter region has no effect on the transcriptional activity in either MM cells or CR-expressing human colon cancer cells. Down-regulation of β-catenin by its negative regulator Axin2 significantly reduces CR expression in cultured rat thalamic neurons, indicating that β-catenin is a positive regulator of the Calb2 gene [13]. A more detailed CALB2 promoter analysis revealed the sequence embracing the −161/+80 bp region to sustain transcriptional activity in MM cells. Cis-regulatory elements within this promoter region, including binding sites for NRF-1 and E2F2, are important for CR expression; e.g. siRNA-mediated downregulation of NRF-1 causes a decrease in CR expression levels, indicating that NRF-1 acts as a positive regulator of CR expression [14]. Moreover, the strong correlation between CALB2 mRNA and CR protein expression levels in MM cells is indicative of a control at the transcriptional level [14]. In colon cancer cells, two butyrate-responsive elements (BRE) embracing the TATA box of the CALB2 gene function as butyrate-sensitive repressors of CR expression, while the same sequence has no effect in cells of mesothelial origin, e.g. Met-5A cells [15]. Butyrate (Bt) is the product resulting from intestinal fermentation of dietary fibers by bacteria, and Bt concentrations in the range of 5-30 mM are present in the chyme/feces of the gut [16]. Bt acts as a modulator of histone acetylation that results in the inhibition of the cell cycle (G1 arrest) and leads to enterocyte (and derived cancer cell) differentiation [17]. Bt exposure of CR-expressing WiDr colon cancer cells results in CR down-regulation [18]. Moreover, gut microbiota might have an influence on respiratory infections [19], also via short chain fatty acids (SCFA) including Bt. Bt is not only produced by the gut microbiota, but also by anaerobic bacteria in the hypoxic environment of cystic fibrosis (CF) airways. Bt concentrations were found to be elevated in the sputum samples of CF patients, reaching values of approximately 2 mM [20], and it is conceivable that in the diseased lung of MM patients, Bt may also be increased in the pleural cavity.
In the present study, short-term exposure of MMderived cell lines to low millimolar Bt concentrations revealed a significant increase in CR expression levels. Thus, we set out to investigate in more detail the promoter region of the CALB2 gene that acts as a Btresponsive enhancer in MM cells. To address the question on mechanisms implicated in this regulation, we searched for proteins binding to this particular promoter region that might be functionally implicated in CR regulation in MM cells.
Generation of reporter plasmids including deletion variants of the human CALB2 promoter containing BRE7-13
Genomic DNA was isolated from MSTO-211H and ZL55 MM cells using standard methods, and purified DNA (0.5 μg) was used as template to amplify a 1.3 kb stretch of the human CALB2 promoter embracing the putative Bt-responsive elements (BRE) 7-13; the sequence and details of the primers are shown in Fig. 2 and Table 1. The amplicon of 1292 bp, containing a SacI and a BglII site at the 5′- and 3′-end, respectively, was cloned into the plasmid pGL3-Promoter (pGL3-P; Promega; # U47298; Wallisellen, Switzerland) linearized with the same restriction enzymes. pGL3-P contains a multiple cloning site, followed by a minimal SV40 promoter; the luciferase cassette (luc) in the plasmid is followed by an SV40 late poly(A) signal. This plasmid was then used to generate shorter PCR fragments containing various numbers of BREs (see Fig. 2 and Table 1); the fragments were cloned into pGEM-T Easy, sequenced (Microsynth AG, Balgach, Switzerland) and excised with SacI and BglII. The fragments were then inserted into pGL3-P for transfection experiments; DNA (300-500 μg) was isolated using the Wizard Plus Midi- or Maxiprep kit (Promega, Dübendorf, Switzerland).
Luciferase assay
In order to investigate CALB2 promoter activity the dual luciferase assay was used. The constructs were cotransfected with Renilla-Luciferase in 12-well plates (50,000 cells/well) with Mirus lipofection reagent (Mirus, WI, USA). After 48 h (± Bt treatment; 1 mM) the cells were lysed and promoter activity was measured with the dual-luciferase reporter assay (Promega, Dübendorf, Switzerland) on a Turner Designs TD-20/20 Luminometer (Sunnyvale, CA, USA) according to the manufacturer's protocol. The details on the normalization of signals and the way to calculate fold changes compared to untreated controls have been described before [15].
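The normalization and fold-change calculation referenced above (described in detail in ref. [15]) follows the usual dual-luciferase logic: the promoter-driven firefly signal is divided by the Renilla co-transfection control, and treated wells are expressed relative to untreated controls. The snippet below is only a hedged sketch of that arithmetic; the variable names and example readings are illustrative and not taken from the cited protocol.

```python
# Sketch of firefly/Renilla normalization and fold-change calculation
# (illustrative values; not the procedure of ref. [15] verbatim).
def normalized_activity(firefly, renilla):
    """Normalize the CALB2-promoter firefly signal to the Renilla co-transfection control."""
    return firefly / renilla

def fold_change(treated, control):
    return normalized_activity(*treated) / normalized_activity(*control)

control_well = (52000.0, 21000.0)      # (firefly counts, Renilla counts), untreated
butyrate_well = (118000.0, 23000.0)    # same construct, 1 mM butyrate, 48 h

print(f"fold change vs. untreated: {fold_change(butyrate_well, control_well):.2f}")
```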
DNA binding assay
The fragment containing BRE9-13 (706 bp) serving as capture DNA was synthesized by PCR using 5′-biotinylated primers. The μMACS™ FactorFinder kit (Miltenyi Biotec, Bergisch Gladbach, Germany) was used to isolate proteins interacting with this stretch of the CALB2 promoter. The cleared cell lysate (15,000 × g, 5 min) from 10⁷ MSTO-211H cells (100 μl) was incubated with 1.5 μg of biotinylated BRE9-13 DNA. Putative transcription factor complexes were isolated under non-denaturing (native) conditions according to the manufacturer's protocol.
Western blot analysis and silver staining
Cell pellets were collected and washed 3 times with CMF-PBS. Cytosolic fractions were isolated as described before [22] and the concentration of proteins was determined using the Bradford method (Bio-Rad, Hercules, CA). Samples were loaded and separated by SDS-PAGE (10% PAA) and subsequently transferred onto nitrocellulose membranes (Bio-Rad) by a semidry system (Witec, Litau, Switzerland). Equal protein loading was controlled by transient Ponceau S staining (Sigma) of the membranes. The primary antibodies against calretinin (CR7699/4; Swant, Marly, Switzerland) and septin 7 (rabbit polyclonal anti-septin 7; Bethyl Laboratories Inc., Montgomery, TX, USA or Millipore Corp. #ABT354, Temecula, CA, USA) were used at a dilution of 1:5000 overnight at 4°C; anti-GAPDH was from Sigma (ref. G9545) and used at a working dilution of 1:10,000. Rabbit secondary antibody directly linked to horseradish peroxidase (Sigma-Aldrich) was diluted 1:10,000 and membranes were incubated for 2 h at room temperature. The detection was performed with the chemiluminescent reagent Luminata Classico or Forte (EMD Millipore Corporation, Billerica, MA, USA) and data were collected on an imaging system from Cell Biosciences (Santa Clara, CA, USA).
For normalization and quantification of Western blots, densitometric analysis of the Ponceau S-stained membranes was performed using the GeneTools software (Syngene, Cambridge, UK). The integral of the signals of all transferred proteins was previously shown to represent a more reliable way of normalization, since this reduces the bias towards a specific protein (e.g. GAPDH, α-actin) often used for normalization. Levels of reference (housekeeping) proteins were shown to vary between tissues or different cell lines and might also be affected by the experimental manipulations [23,24]. GeneTools software was also used for the quantification of the specific Western blots signals for CR, septin 7 and GAPDH. In the results section, most often only selected representative regions of the Ponceau S-stained membranes marked as loading control (L.C.) are shown, in particular when Western blot signals were analyzed qualitatively.
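The total-protein normalization described above (dividing a specific band by the integral of the Ponceau S signal of the whole lane) can be written out in a few lines. The sketch below is purely illustrative: the densitometry numbers are made-up placeholders, not values from the GeneTools analysis.

```python
# Sketch of normalizing a specific band to the integrated Ponceau S lane signal
# (placeholder densitometry values).
def normalize_to_total_protein(band_intensity, ponceau_lane_integral):
    return band_intensity / ponceau_lane_integral

lanes = {
    "control":       {"CR": 1.00e5, "ponceau": 3.1e6},
    "1 mM butyrate": {"CR": 2.40e5, "ponceau": 3.3e6},
}
norm = {k: normalize_to_total_protein(v["CR"], v["ponceau"]) for k, v in lanes.items()}
rel = norm["1 mM butyrate"] / norm["control"]
print(f"relative CR level (butyrate / control): {rel:.2f}")
```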
Septin 7 cDNA cloning into pLVTHM backbone
A lentiviral system was used to overexpress septin 7. Briefly, the GFP cassette in pLVTHM (Addgene plasmid #12247) was replaced with the human septin 7 (SEPT7) cDNA using the SpeI and PmeI restriction sites. SEPT7 cDNA was obtained from ZL55 cell total RNA. RNA was extracted using the Qiagen RNeasy kit following the manufacturer's instructions; cDNA was synthesized from 500 ng of total RNA using the QuantiTect Reverse Transcription kit (Qiagen). Septin 7 was amplified with the following primers: FW_hSEPT7 5′-AGT CGT TTA AAC ATG TCG GTC AGT GCG AGA TCC-3′ and RV_hSEPT7 5′-AGT CAC TAG TTT AAA AGA TCT TCC CTT TCT T-3′, containing the SpeI and PmeI restriction sites, respectively, and the amplicon was inserted into the vector pLVTHM. The correct sequence of the plasmid was verified by colony PCR and subsequent sequencing of the insert, and the novel plasmid was called pLV-hSEPT7.
Immunofluorescence
Cells were seeded on 12-mm glass coverslips and fixed for 15 min with 4% paraformaldehyde. Non-specific binding sites were blocked by incubation with TBS containing donkey serum (10%) for 1 h and coverslips were then incubated overnight at 4°C with the following antibodies diluted in TBS 1X: goat polyclonal anti-CR (1: 500; cat# CG1, Swant, Marly, Switzerland) and rabbit polyclonal anti-septin 7 (1:500; Bethyl Laboratories, USA). After washing, coverslips were incubated with secondary antibodies for 3 h at room temperature with the following secondary antibodies: Alexa Fluor 488conjugated donkey anti-rabbit IgG (1:100, Jackson Immunoresearch Laboratories, West Grove, PA, USA) and Cy5-conjugated donkey anti-goat IgG (1:100; Jackson). Nuclear DNA was stained using DAPI (5 μg/ml; Molecular Probes, Eugene, OR) and coverslips were mounted with Hydromount solution (National Diagnostics, Atlanta, GA). Images were acquired using a Leica fluorescent microscope DM6000B (Wetzlar, Germany) equipped with a Hamamatsu camera C4742-95 (Bridgewater, NJ). For the cells treated with Bt, cells were seeded onto 12-mm glass coverslips pre-coated with Matrigel (Corning, NY, USA) and treated with 1 mM Bt for 48 h.
MTT assay
To assess the putative toxic effects of Bt in MM cells, the mitochondrial activity of living cells was determined by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) quantitative colorimetric assay. MM cells (1500-3000 depending on the cell line) were seeded in 96-well plates. From a stock solution (1 M NaBt in PBS), different amounts were added to yield Bt concentrations from 0.33 to 5 mM. After 3 days of incubation with Bt, MM cells were subjected to the MTT assay by adding fresh medium containing 0.5 mg/ml MTT and incubating the plates for 90 min at 37°C. The supernatant was discarded and DMSO (200 μl) was added to dissolve the formazan crystals. The absorbance at 570 nm was measured using a microplate reader (Infinite Pro200; Tecan Austria GmbH, Groedig, Austria). For each experiment the absorbance value of untreated (control) cells was defined as 1.0 (100%).
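The readout described above (absorbance at 570 nm, expressed relative to untreated controls defined as 1.0) reduces to a simple normalization across the butyrate concentration series. The following sketch illustrates that step only; the absorbance values are placeholders and do not correspond to the data of Additional file 1: Figure S1.

```python
# Sketch of normalizing MTT absorbance to the untreated control (= 1.0)
# across a butyrate concentration series (placeholder readings).
concentrations_mM = [0.0, 0.33, 1.0, 2.0, 5.0]
absorbance_570 = [0.82, 0.80, 0.65, 0.47, 0.30]   # mean of replicate wells

control = absorbance_570[0]
viability = [a / control for a in absorbance_570]

for c, v in zip(concentrations_mM, viability):
    print(f"{c:>4} mM butyrate: {v:.2f} of control")
```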
Statistical analysis
The data are presented as mean ± standard deviation of multiple experiments (n ≥ 3). Statistical analyses were performed using StatPlus (AnalystSoft), applying a one-way ANOVA followed by a post-hoc Tukey HSD test. Differences were considered statistically significant if the p-value was < 0.05.
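For readers who want to reproduce this kind of analysis, a one-way ANOVA followed by Tukey's HSD test can be run in a few lines of Python with SciPy and statsmodels. The groups and measurements below are hypothetical placeholders, not data from this study (the authors used StatPlus):

```python
# Sketch of the statistical workflow: one-way ANOVA followed by a post-hoc Tukey HSD test.
# The measurements are hypothetical placeholders (e.g. normalized CR signals, n = 4 per group).

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "control": [1.00, 0.95, 1.05, 0.98],
    "Bt_1mM":  [1.60, 1.75, 1.55, 1.68],
    "Bt_2mM":  [2.30, 2.10, 2.45, 2.25],
}

# Global test: does at least one group mean differ?
f_stat, p_value = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc pairwise comparisons at alpha = 0.05.
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```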
Butyrate induces CR expression in MM cells and also in immortalized mesothelial cells
We analyzed the effect of Bt on CR expression levels in cells derived from all MM histotypes. Bt concentrations ranging from 0.5 to 2 mM applied for short periods (48 and 72 h) were tolerated to various degrees; higher Bt concentrations (generally ≥2 mM) and prolonged exposure (> 6 days) resulted in massive death of most of the investigated MM cell lines (data not shown). Immortalized mesothelial cells (Met-5A) and ZL34 cells derived from a sarcomatoid MM showed massive cell death already at 1-2 mM when treated for 6 days. In the few surviving Met-5A cells, CR expression levels were not significantly affected, as reported before [15]. Here we investigated the effect of Bt on CR expression levels at 72 h, a time point only mildly affecting cell viability. A quantitative Bt concentration vs. cell proliferation/viability curve was determined for MSTO-211H, ZL55 and ZL5 cells, the lines used in most of the further experiments (Additional file 1: Figure S1). The weakest effect of Bt on the MTT signal intensity (cell proliferation/viability) was observed in ZL5 cells, an intermediate effect in MSTO-211H cells, and the strongest Bt-dependent effect was seen in ZL55 cells. The shape of the curves is not reminiscent of a typical sigmoidal toxicity curve, but rather of a growth-inhibiting effect of Bt, as had been observed before in Bt-treated colon cancer cells and immortalized LP9/TERT1 mesothelial cells (G1 arrest) [3,26]. In support, only a few floating (dead) cells were observed at the investigated concentrations and the selected time point (72 h). Bt treatment of MSTO-211H (biphasic) cells increased CR protein levels in a Bt concentration-dependent way (Fig. 1a). An increase in CR was also seen in Met-5A cells, as well as in the epithelioid MM cell lines ZL5 and ZL55, the biphasic cell line SPC212 and the sarcomatoid cell line ZL34 (Fig. 1b). Generally, cells derived from epithelioid and biphasic MM are characterized by higher basal CR expression levels than cells derived from sarcomatoid MM, in particular ZL34 and SPC111 cells, the latter with very low basal CR expression levels [3,14]. Bt treatment of all cell lines led to a concentration-dependent increase in CR levels; the semi-quantitative evaluation of the CR signals of the cell lines shown in Fig. 1b is depicted in Fig. 1c. Of note, CR expression levels were determined in cells that remained attached to the cell culture dishes, i.e. viable cells, and not in the floating and/or dead cells in the culture medium. Since MM cell lines of the epithelioid and biphasic type were previously shown to be strongly affected by CR down-regulation, much more than MM cells derived from sarcomatoid tumors [3], further experiments were carried out with ZL5 and ZL55 (epithelioid) and MSTO-211H (biphasic) cells.
A region in the CALB2 gene promoter containing 7 putative BRE elements acts as a butyrate-responsive enhancer in MSTO-211H and ZL55 MM cells
Analysis of the human CALB2 promoter region revealed additional putative BRE elements, named BRE7-13 (Fig. 2b), upstream of the previously identified region that contains 2 functional BREs (BRE5-6), which act as transcriptional repressors in colon cancer cells [15]. The newly identified region has a length of 1278 bp and lies approximately 3.3 kb upstream of the CALB2 translation start site (ATG) (Fig. 2a). This 1.3 kb region was amplified by PCR using genomic DNA from ZL55 and Met-5A cells and primers containing specific restriction sites (forward primer: SacI site; reverse primer: BglII site) compatible with the reporter luciferase plasmid pGl3-P. Sequencing revealed that the inserts amplified from the 2 cell lines contained alterations in the nucleotide sequence when compared to the sequence annotated in PubMed (CALB2; gene ID: 794). Three changes (1 point mutation (A/T), 2 deletions of either 1 or 2 nucleotides) were present in the sequences of all clones derived from both cell lines, ZL55 and Met-5A, excluding the possibility that these were PCR artifacts (Additional file 1: Figure S2). The other changes were specific for each cell line, consisting of point mutations, 2 in the insert derived from ZL55 cells and 3 in the insert derived from Met-5A cells (Additional file 1: Figure S2). None of these changes directly involved the sequences of the putative BREs. Besides the luc+ reporter plasmid containing all 7 putative BREs (BRE7-13), truncated versions were generated by PCR; these included plasmids containing BRE9-13, 10-13, 12-13, 9, 9-10 and 9-11 (Fig. 2c).
In MSTO-211H cells, luciferase activity was increased by 41 ± 16% (mean ± SEM) by the full-length fragment after Bt treatment for 48 h; however, the strongest activation (200 ± 24% of control) was observed with the fragment containing BRE9-13, while elimination of BRE9 (fragment BRE10-13) resulted in a luciferase activity close to basal levels (109 ± 9%). With a shorter promoter fragment that included BRE9 (BRE9-11), activity was partially increased (159 ± 14%), but not to values as high as seen with the BRE9-13 plasmid. Thus, the essential enhancer/activator part appeared to be contained in the region embracing BRE9-11, but the region containing BRE12 and 13 further enhanced the luciferase activity, indicating that the various BREs contributed to the effects in a rather complex manner. Similar results were obtained in ZL55 cells, although the maximal effect in the presence of the BRE9-13 promoter fragment was of smaller magnitude (123 ± 18%). Thus, the effects were qualitatively nearly identical (Fig. 2c, d), yet the effect was approximately 4-times smaller in ZL55 cells. It is noteworthy that the effect on luciferase activity correlated rather well with the fold increase in CR expression levels after Bt treatment (Fig. 1). The increase in CR expression was + 290% in MSTO-211H cells and + 60% in ZL55 cells, i.e. an approximately 5-fold difference. It appears that in ZL55 cells, where CR levels are rather high under control conditions, the relative Bt-mediated increase in CR expression is smaller than in MSTO-211H cells, which are characterized by a lower "basal" CR expression level [14]. We also tested the BRE7-13 and BRE9-13 promoter fragments containing the luc+ reporter in the colon cancer cell line Caco-2 in order to see whether the enhancer function also persisted in colonocytes. A small, yet insignificant increase of + 19 ± 16% was observed in the presence of the BRE9-13 fragment, and similar results were also observed with HT-29 colon cancer cells (data not shown). This suggests that the BRE9-13-containing promoter region in the CALB2 gene acts as a Bt-activated enhancer with a strong preference for MM cells.
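The reporter readout described here boils down to two normalizations: each luciferase signal is first expressed relative to the empty pGl3-P control plasmid, and the Bt effect is then reported as the percent change of the Bt-treated over the untreated sample for the same construct. A minimal sketch of that arithmetic, using hypothetical raw luminescence values rather than the study's measurements:

```python
# Sketch of the luc+ reporter normalization: activity relative to the pGl3-P control plasmid,
# then percent change (+/- Bt) per promoter construct. Raw luminescence values are hypothetical.

raw = {  # (construct, treatment) -> raw luminescence (arbitrary units)
    ("pGl3-P",   "ctrl"): 1000, ("pGl3-P",   "Bt"): 1050,
    ("BRE9-13",  "ctrl"): 1800, ("BRE9-13",  "Bt"): 5400,
    ("BRE10-13", "ctrl"): 1700, ("BRE10-13", "Bt"): 1850,
}

def normalized(construct, treatment):
    """Activity of a construct relative to the pGl3-P control under the same treatment."""
    return raw[(construct, treatment)] / raw[("pGl3-P", treatment)]

for construct in ("BRE9-13", "BRE10-13"):
    change = 100 * (normalized(construct, "Bt") / normalized(construct, "ctrl") - 1)
    print(f"{construct}: {change:+.0f}% luciferase activity after Bt")
```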
Identification of proteins binding to CALB2 promoter fragments BRE5-6 and BRE9-13 in a Bt-dependent manner
In a previous study we had demonstrated that both BRE5 and BRE6 contribute to the CR-repressing effect in colon cancer cells [15]. Based on the rather complex pattern by which BRE-containing fragments enhanced luciferase activity (Fig. 2c), involving up to 5 BREs (BRE9-13), we addressed the question of which proteins might bind to the BRE9-13-embracing region in a Bt-dependent manner. For this, cell lysates from Bt-treated and control MSTO-211H cells were incubated with the biotinylated BRE9-13 DNA fragment. Several proteins bound to the DNA and were released in the presence of elution buffer (Fig. 3). A side-by-side comparison of silver-stained polyacrylamide gels is shown in Fig. 3a, b. Proteins differentially present in the eluates (± Bt) included annexin A1 (ANXA1; gene ID: 301; Mr 38.7 kDa; − 19%). Even lower amounts of septin 7 (SEPT7; gene ID: 989; Mr 50.7 kDa; − 34%), a member of the family of GTP-binding proteins, and serpin H1 (SERPH; − 54%), also known as heat shock protein 47 (HSP47) and reported to function as a chaperone for collagen [27], were present in the extracts of Bt-treated cells. Other proteins including annexin A2 (ANXA2; gene ID: 302; Mr 38.6 kDa) and plasminogen activator inhibitor 2 (SERPINB2; gene ID: 5055; Mr 47 kDa) were not differentially bound (Fig. 3b). Of interest, the mesenchymal marker vimentin was also present in the complex bound to BRE9-13 DNA, and its binding was increased in the Bt-treated MSTO-211H samples (+ 44%).
Since the decrease in BRE9-13 DNA-bound proteins including annexin A1 and septin 7 could be the result of either decreased binding or a Bt-induced decrease in protein expression, levels of annexin A1 and septin 7 were determined by Western blot analysis of either cytosolic proteins or proteins eluted from the BRE9-13 DNA fragment in control and Bt-treated MSTO-211H cells. Expression levels of annexin A1 were increased in Bt-treated cells, while the amount of annexin A1 bound to BRE9-13 DNA was nearly unaffected by Bt. On the other hand, cytosolic septin 7 levels were slightly decreased by the Bt treatment in these conditions, while the amount of septin 7 bound to BRE9-13 DNA was clearly lower after Bt treatment (Fig. 3c). To further demonstrate the specificity of the effect, similar experiments were carried out with the previously described DNA fragment containing BRE5-6, shown to act as a Bt-responsive repressor element in colon cancer cells [15]. In parallel, we compared binding of annexin A1 to both BRE-containing fragments, BRE5-6 and BRE9-13, using cytosolic extracts from control or Bt-treated MSTO-211H cells (Fig. 3d). Elution of bound proteins was carried out in 2 conditions: weakly bound proteins eluted in fraction E1 and strongly bound proteins in fraction E2. Since no striking qualitative differences were observed with respect to proteins eluted in fractions E1 and E2, the results from elution conditions E1 and E2 were combined. Similar to the results shown in Fig. 3c, the signal for septin 7 eluted from the immobilized fragment BRE9-13 was clearly weaker when using extracts from Bt-treated cells, yet no significant differences were observed when using DNA fragment BRE5-6 as bait. An inverse situation was observed for annexin A1: increased binding to fragment BRE5-6 using extracts of Bt-treated cells and no striking changes in the binding to fragment BRE9-13 (Fig. 3d). In summary, annexin A1 preferentially bound to the DNA fragment BRE5-6 in a Bt-dependent manner, while septin 7 showed a Bt-sensitive decrease in binding to BRE9-13.
Fig. 2 (legend) Sequence of the human CALB2 promoter region containing putative Bt-responsive elements BRE7-13. a The region of 1278 bp contains 7 putative BREs (cyan). Primer sequences to amplify the 1278-bp fragment are marked in yellow, including restriction sites for SacI (Table 1) and BglII (red). The start site of translation is marked in red (bold). Primers used to amplify truncated versions are boxed in gray (details are shown in Table 1). b Sequence alignment of BRE5-BRE13 with the BRE consensus sequence (15) shown at the bottom. Nucleotide sequences fully complying with the consensus (100% identity) are marked in blue, medium-conserved sequences are marked in green and nucleotides not conforming to the consensus sequence are marked in black; percentages of identity with the consensus sequence are given in the right lane. c, d Luc+ reporter assay in MSTO-211H (upper) and ZL55 MM (lower) cells with CALB2 promoter fragments containing variable numbers of BREs. Luc+ activities were normalized to the signal obtained with the control plasmid pGl3-P. As reference for the statistical analysis, BRE5-6 [15] was used. The number of independent experiments ranged from n = 4 (e.g. BRE10-13) to n = 11 (e.g. BRE7-13). Values represent mean ± S.D. *p < 0.05 vs. BRE5-6
Since BRE9-13 acted as a Bt-sensitive activator of CALB2 promoter activity and the amount of septin 7 bound to BRE9-13 was found to be decreased in extracts from Bt-treated cells, we reasoned that septin 7 might be a negative transcriptional regulator of CR expression via its binding within a protein complex that binds to the CALB2 promoter in MSTO-211H cells. Lentivirus-mediated overexpression of septin 7, i.e. increased septin 7 levels, led to a down-regulation of CR expression (Fig. 4d), consistent with the previous results showing that lower amounts of septin 7 bound to BRE9-13 after Bt treatment are correlated with increased CR expression (Fig. 3c). As a negative control, a lentivirus expressing GFP had no effect on CR expression levels. The opposite manipulation, i.e. down-regulation of septin 7 by SEPT7 shRNA, strongly inhibited cell proliferation, decreased viability, and produced many dying non-adherent cells; the surviving cells mostly had a spindle-shaped morphology typical of the low CR-expressing subpopulation of MSTO-211H cells with sarcomatoid morphology (not shown). While the Bt-induced increase in CR was also seen in shSEPT7-treated cells, septin 7 down-regulation did not noticeably affect CR levels in the surviving MSTO-211H cell population when compared to levels in cells infected with GFP shRNA (Fig. 4e). Since the Bt-induced increase in CR was associated with a decrease in septin 7, as evidenced in control and GFP shRNA-treated MSTO-211H cells (Fig. 4e), indicative of a negative feedback regulation, we investigated the direct effects of CR overexpression on septin 7 levels. Lentivirus-mediated upregulation of CR using previously developed tools [25] caused a clear decrease in septin 7, irrespective of whether MSTO-211H cells were treated with Bt or not (Fig. 4e). This indicates that CR is a Bt-independent, negative regulator of septin 7. The inverse experiment with shCALB2 was not conclusive, since shCALB2-mediated down-regulation of CR resulted in massive cell death as reported before [3], thus not allowing investigation of septin 7 levels.
The effect of CR overexpression on septin 7 levels was investigated in SPC212 cells, as well as in the CR-expressing colon carcinoma cell line HT-29, previously shown to negatively modulate CR expression in a Bt-dependent way [15]. In both cell lines, CR overexpression led to a clear decrease in septin 7 levels (Fig. 4f). This supports the hypothesis that CR functions as a negative regulator of septin 7 in two different cell types (mesothelial cells, colonocytes). In summary, treatment with Bt increases CR levels in all histotypes of MM cells and decreases septin 7 levels (data shown for MSTO-211H and ZL5 cells). Finally, overexpression of either one decreases expression levels of the other. These results are indicative of a finely tuned balance between the expression levels of the 2 proteins.
Calretinin and septin 7 are co-expressed during mouse embryonic development and septin 7 levels are higher in primary mesothelial cells from mice without a functional Calb2 gene
Based on previous findings that CR is expressed in specific regions within the mesenchyme of murine embryos and in precursor mesothelial cells in the developing lung at embryonic days 14.5 and 16.5 [25], we investigated the expression of CR and septin 7 on serial sections derived from mouse embryos at E10.5. Expression patterns for CR and septin 7 showed an almost complete overlap in the mesenchyme of E10.5 mice (Fig. 5a, b); yet the intensity of staining for one or the other protein varied noticeably. Since transient CR expression is observed in the mesenchyme and developing mesothelial cells [25], we wondered whether CR's absence during this period in CR−/− mice would affect the expression of septin 7 in the terminally differentiated mesothelium. For this, primary mesothelial cells (prMC) were isolated from either WT or CR−/− mice and kept in cell culture in vitro for 10-15 days. Western blots revealed clearly higher levels of septin 7 in mesothelial cells from CR−/− mice compared to WT animals (Fig. 5c). Thus, the transient expression of CR in developing mesothelial cells had a long-lasting effect on septin 7 expression; it appeared that CR acted as a long-lasting repressor of septin 7 synthesis. A similar analysis was carried out in primary mesothelial cells immortalized with SV40. The expression of SV40 Tag/tag was previously shown to substantially increase the proliferation rate of primary mesothelial cells, both from WT and CR−/− mice [25]. Interestingly, in SV40-immortalized cells, septin 7 expression levels were strongly increased independent of the genotype (WT vs. CR−/−; Fig. 5c, right panel).
Binding of septin 7 to calretinin and co-localization of the two proteins in distinct regions of the cleavage furrow and in the midbody region during cytokinesis
Based on the co-expression of CR and septin 7 in cells from embryonic mesenchymal tissue, we investigated in more detail the intracellular localization of the two proteins in human MM cells. Since CR immunostaining was rather weak in most untreated MM cell lines, immunofluorescence staining against CR was mostly carried out in cells overexpressing CR or in Bt-treated cells characterized by elevated CR expression levels. Qualitatively, CR intracellular localization was similar in all three conditions: control, CR-overexpressing and Bt-treated cells (data not shown). Septin 7 was confined either to the cell cortex, a zone implicated in the dynamic interplay between plasma membrane proteins and the cytoskeleton, or to filamentous structures, likely the so-called "septin cytoskeleton" consisting of septin heteropolymers (Fig. 6a1). In the same cell (e.g. Bt-treated MSTO-211H), the intracellular distribution of CR was rather homogenous (Fig. 6a2), yet with a stronger staining of perinuclear intermediate filaments and sometimes microtubules, as reported before in WiDr colon cancer cells [28]. In WiDr cells, a direct association of CR with these cytoskeletal structures had been reported previously, also based on co-immunoprecipitation experiments. The merged images including DAPI staining (Fig. 6a4) showed some colocalization (yellow color) that was further investigated in the different MM cell lines. During telophase, septin 7 was strongly localized to the zone of the cleavage furrow (Fig. 6b1). CR staining was also stronger in the region of cell cleavage, sometimes resulting in a yellow zone indicating that the proteins were concentrated in this region (Fig. 6b2). During the early phase of cytokinesis, the center of the midbody showed strong "septin 7 only" staining (red) (Fig. 6b3-6), followed first by a yellow zone (septin 7 and CR) and finally by a green one (CR only). In some rare cases the center of the midbody was stained for CR only (Fig. 6b8). Based on the co-localization of these proteins in some regions, we determined whether the two proteins interact directly. Co-immunoprecipitation revealed that in lysates from MSTO-211H-CR and ZL55-CR cells incubated with an anti-CR antibody, septin 7 was pulled down (Fig. 6c). These results suggest that CR and septin 7 have the predisposition to interact directly, but that other putative binding partners of either CR or septin 7, or different conformations of either protein, prevent this interaction from taking place throughout the cell. Their close spatial apposition (at times even colocalization indicative of direct interaction) and the precise time points during cytokinesis indicate that the 2 proteins are likely implicated in the same biological process, namely cytokinesis, yet on occasion have distinct localizations and thus probably non-identical, but conceivably complementary, functions (see discussion).
Discussion
Although CR is currently used as a positive marker for the unequivocal identification of human MM of the epithelioid and mixed type [1,2], still little is known about (I) whether CR is implicated in the etiology of MM, (II) how CR is regulated in cells of mesothelial origin, and (III) the function(s) of CR in MM cells. Details on the regulation [14] and possible function [3] of CR in MM are only slowly emerging. We had previously reported that Bt acts as a repressor of CR expression in CR-positive colon cancer cells via BREs present in the CALB2 promoter [15]. The same elements, BRE5-6, were found to be essentially non-functional in the immortalized mesothelial cell line Met-5A.
We had hypothesized that injury of the intestinal wall or damage of the lung tissue by e.g. asbestos fibers might lead to the colonization of the tunica serosa of the peritoneal or pleural cavity by facultative anaerobic bacteria (e.g. Streptococcus pneumoniae, Staphylococcus aureus, Chlamydia pneumoniae in airways or gut microbiota) producing Bt, resulting in concentrations in the millimolar range (Additional file 1: Figure S3) [19]. Thus, we tested the effect of Bt on CR expression levels in immortalized mesothelial cells and several human MM cell lines of all histotypes, the former serving as a model for "reactive" mesothelial cells. The observed strong, rapid and Bt concentration-dependent increase in CR expression depended on the presence of a positive regulatory element in the CALB2 promoter containing additional BREs. Unexpectedly, septin 7, generally considered a structural protein, was found to be part of this regulatory transcriptional complex. Septin 7 belongs to a family of RAS-like GTP-binding and membrane-interacting proteins; currently 13 septin genes subdivided into four distinct groups are known in mammals, and septin family members are characterized by a highly conserved domain structure [29]. The different septins are involved in various cellular processes mostly implicated in cell morphology dynamics, including cytoskeleton organization, cytokinesis and membrane dynamics (e.g. curvature); for a review see [29]. Besides septins' well-characterized preference for protein-protein interactions among their own family members, forming higher-order structures such as filaments, bundles, scaffolding structures or rings, many other septin-interacting proteins have been identified, forming the septin interactome (see Table 1 in [30]). In many cases these proteins are associated with the actin and/or microtubule cytoskeleton or with phospholipid membranes. Interestingly, the cdk1-dependent phosphorylation of septin 9 regulates the association with the proline isomerase (Pin1) implicated in the disjunction of daughter cells. The observed strong staining of septin 7 at the site of daughter cell separation indicates that septin 7 might have a similar function in MM cell cytokinesis. In support, septin 7-deficient fibroblasts show defects in the machinery implicated in cytokinesis, including stabilization of microtubules and stalled midbody abscission [31]. The latter defect was shown to also lead to multinucleation.
Fig. 6c (legend) Co-IP with lysates from MSTO-211H-CR and ZL55-CR cells. Proteins binding to CR were co-immunoprecipitated with a CR antibody. Membranes with PAGE-separated proteins were probed for CR and septin 7 (lane IP). Input samples containing all proteins (prior to immunoprecipitation) were probed for the same proteins. Paxillin was used as negative control.
Knowledge on the role of septins in gene regulation is sparse. Septin 9 is capable of interacting with the hypoxia-inducible factor 1 alpha (HIF-1α), acting as a positive regulator in the hypoxic pathway and leading to increased proliferation, soft agar clonal survival and tumor growth [32]. In our study we observed that binding of septin 7 to the transcriptional complex driving CALB2 expression was decreased after Bt treatment. Since the Bt-dependent decrease in septin 7 binding to the protein complex bound to CALB2 BRE7-13, which leads to increased CR levels, might be the result of different processes, we selectively manipulated septin 7 expression in MM cell lines. Of note, Bt treatment had no effect on septin 7 expression levels, and moreover, down-regulation of septin 7 levels by shSEPT7 had no effect on CR expression. On the other hand, overexpression of septin 7 led to a strong decrease in CR expression levels, indicating that septin 7 acts as a negative regulator (repressor) of CR expression. The mechanism operates bi-directionally, i.e. CR overexpression resulted in a decrease in septin 7 levels. In line with this, a comparison of primary mesothelial cells (prMC) from WT and CR−/− mice revealed higher septin 7 levels in CR−/− prMC, despite the fact that in these cells, isolated from young adult mice, CR expression levels are below the detection limit of Western blot analysis [25].
In summary, these results indicate that CR and septin 7 both act as negative transcriptional regulators of each other. CR expression levels in tumor specimens are used to predict clinical outcome, i.e. lower CR levels are a poor prognostic factor [33]; likewise, deregulated septin expression has been linked to tumor development/growth [34]. Increased levels of septins 2, 8, 9 and 11 and decreased levels of septins 4 and 10 have been consistently reported in various tumor types and are assumed to act as oncogenes and tumor suppressor genes, respectively [34]. Evidently, in order to exert an antagonistic regulatory function, CR and septin 7 need to be co-expressed in situ, i.e. within the same cells. This was clearly evident during normal mouse embryonic development, where the two proteins colocalize in mesenchymal cells during lung development. While our experiments clearly support an inverse regulation, the colocalization and/or close apposition of the two proteins might hint towards an implication in the same process, i.e. in cytokinesis. Immuno-EM images of WiDr colon cancer cells stained with an anti-CR antiserum had previously revealed CR staining during early telophase at the midbody, while at later stages the midbody zone was completely negative (see Fig. 2f-i in [35]). Essentially identical findings on CR localization during cytokinesis were observed in various MM cell lines. Septin 7 staining was also particularly strong, first at the cleavage furrow and then in distinct parts of the midbody. Of note, strong co-localization was not observed during the entire separation of the daughter cells; at distinct time points and precise midbody localizations, one or the other protein was more prevalent, resulting in a clear separation of the green and red fluorescence. Yet occasionally, a strong yellow color indicated that the 2 proteins were colocalized in the same regions, and co-IP experiments confirmed a direct interaction between CR and septin 7.
Septins have also been shown to associate with the mitotic spindle and to be required for cytokinesis [31]; the same had been reported for CR-expressing WiDr cells [35,36].
Conclusions
In this report we have identified septin 7, an essential cellular component implicated in the final steps of cell division, as a strong Bt-dependent gene regulatory protein binding to the promoter of CALB2. Moreover, septin 7 was negatively regulated by CR forming a feedback loop. Our study adds knowledge on the molecular mechanism involved in up-regulation of CR caused by Bt, a step that might also be implicated in mesotheliomagenesis. Since CR has been proposed to serve as a putative target for MM therapy due to its essential role in MM cell lines [3], indirectly targeting CR through septin 7 and/or directly targeting septin 7 might represent yet another strategy for the development of a therapy to treat the currently incurable MM.
Additional file
Additional file 1: Figure S1. MTT assay of different MM cell lines exposed to butyrate (Bt). Relative MTT signals for A) MSTO-211H, B) ZL55 and C) ZL5 MM cells exposed to various Bt concentrations ranging from 0.33 to 5 mM. Results are from 3 independent experiments (each sample in triplicate). The value of untreated cells in each experiment was defined as 100%. Results represent mean ± SEM. Figure S2. Point mutations in the PubMed database sequence (CALB2; gene ID: 794) of the CALB2 promoter region containing BRE7-13 in comparison to human Met-5A and ZL55 cells are boxed in green (Met-5A) or yellow (ZL55). Insertions or deletions found in the sequence of all analyzed cell lines are boxed in cyan. None of the mutations concern the 7 BREs listed in Fig. 2.
"Biology"
] |
An updated review on animal models to study attention-deficit hyperactivity disorder
Attention-deficit hyperactivity disorder (ADHD) is a neuropsychiatric disorder affecting both children and adolescents. Individuals with ADHD experience heterogeneous problems, such as difficulty in attention, behavioral hyperactivity, and impulsivity. Recent studies have shown that complex genetic factors play a role in attention-deficit hyperactivity disorder. Animal models with clear hereditary traits are crucial for studying the molecular, biological, and brain circuit mechanisms underlying ADHD. Owing to their well-defined genetic backgrounds and the relative ease with which the function of neuronal circuits can be established, mouse models can help elucidate the mechanisms involved in ADHD. Therefore, in this review, we highlight the important genetic animal models that can be used to study ADHD.
INTRODUCTION
Attention-deficit hyperactivity disorder (ADHD) is the most common neurobehavioral disorder in childhood and is characterized by inattention, impulsivity, and hyperactivity. Although ADHD is often thought of as a crippling and frequent illness that only arises in childhood, recent studies indicate that it persists into adulthood in 30-70% of patients [1,2]. Furthermore, poor cognitive impulsiveness, forgetfulness, planning deficits, poor time management, and impulsive conduct are prevalent in children with ADHD. Adults are diagnosed with ADHD by examining clinical presentation and assigning one of the subtypes: predominantly hyperactive-impulsive (ADHD-HI), predominantly inattentive (ADHD-PI), or combined (ADHD-C).
ADHD is one of the most frequent juvenile disorders, with a prevalence of 3-5%. About half of the children affected by ADHD continue to experience symptoms as adults [3]. Numerous ADHD symptoms, such as hyperactivity/impulsivity and attention impairment, must appear before the age of 12 years.
ADHD has been demonstrated to be comorbid with a number of other mental disorders in addition to this primary symptomatology. Mood, anxiety, oppositional defiance, and conduct disorders are the most frequent among children [4], whereas in adults different comorbidities occur, such as major depressive disorder, social phobia, and substance abuse [5]. Emotional lability or dysregulation plays an underlying role in the development of ADHD symptoms [6]. In the literature, various risk factors contribute to elevating the proportion of ADHD patients, including genetic [7,8], environmental [9][10][11] and other related factors [12][13][14]; these have been briefly discussed and highlighted by various researchers in different studies (Fig. 1).
Seo et al. [15] used national representative data collected between 2008 and 2018 to investigate the prevalence and comorbidities of ADHD among adults, children, and adolescents in Korea.They reported that ADHD prevalence rates for children/ adolescents had increased steeply over that decade, from 127.1/ 100,000 in 2008 to 192.9/100,000 in 2018, increasing 1.47 and 10.1 times in children/adolescents (≤18 years) and adults (>18 years), respectively.According to the study, a significant proportion of ADHD patients in Korea are either misdiagnosed or undertreated.
The cause of ADHD remains unknown, but mounting evidence points to a hereditary component. With the help of recently developed genetic models, we may be able to understand animal behavior manifesting as attention deficit, hyperactivity, impulsivity, or all three traits in a single animal. Several animal models of ADHD have been proposed; however, genetically modified animals are the most promising models for displaying ADHD symptoms. ADHD models differ in terms of pathophysiological abnormalities and the capacity to imitate behavioral symptoms and predict pharmaceutical responses. Their varied nature could be attributed to the lack of sufficient knowledge of ADHD biology from clinical data based on human studies, which is why researchers are unable to determine which model best mimics ADHD or its subtypes. As per recent research on ADHD, the models used should be classified as animal models of ADHD-like symptoms rather than exact models of ADHD [16].
In this review, we discuss the most notable animal models that could be valuable for studying ADHD, with a particular focus on genetic models. These models include dopamine transporter (DAT) knockout mice, spontaneously hypertensive rats (SHR), steroid sulfatase-deficient mice, coloboma mice, and alpha-synuclein-lacking mice.
Animal model and criteria for good animal models
McKinney (1988) defined animal models as "experimental preparations developed in one species for the purpose of studying phenomena occurring in another species" [17,18]. This definition is still valid for clinical researchers. Certain criteria of animal models, such as etiology, genetic resemblance, physiological processes, and treatment response, can be used to study human psychological disorders. Three forms of validity were chosen: predictive, face, and construct. The predictive validity of a model is determined by whether it properly selects a pharmacological treatment with equivalent clinical potency without omission or commission errors. Face validity is determined by how closely a model mimics the illness in different ways. Construct validity is evaluated by whether both the model's behavior and the features of the disorder traits can be unambiguously interpreted and are homologous, and whether the features being modeled have a well-established empirical and theoretical relationship with the disorder [19].
Dopamine transporter knockout mice
DAT is expressed in all dopamine (DA) neurons but is known to reuptake extracellular DA in the synaptic cleft of the dopamine system [20][21][22].Therefore, dopamine transporter knockout (DAT-KO) using various methods causes an increase in DA by reducing extracellular DA clearance [22]; hence, the level of extracellular DA can be increased by nearly five times [23].
Figure 2 shows dopamine homeostasis in normal and DAT-KO mice. The right panel depicts DAT-KO mice, in which the synthesis of DA is doubled compared to that in normal mice, the neuronal DA concentration is drastically lower, and extracellular DA is increased five-fold. DAT-KO mice lack autoreceptor function.
DA is thought to play an important role in ADHD; however, other neurotransmitters are also involved.Transgenic mice have become indispensable tools for analyzing the role of genetic factors in the pathogenesis of human diseases.Although rodent models cannot fully recapitulate complex human psychiatric disorders such as ADHD, transgenic mice offer an opportunity to directly investigate the specific roles of novel candidate genes identified in patients with ADHD in vivo.Targeting genes implicated in DA transmission, such as the gene encoding the dopamine transporter (DAT1), has led to the development of several knockout and transgenic mouse models proposed as ADHD models.These mutant animals provide researchers with the opportunity to assess the role of dopamine-related processes in brain diseases, analyze the molecular and neuronal mechanisms at play, and test new ADHD treatments.
Due to the defects in DAT, DAT-KO mice exhibit spontaneous hyperlocomotion [22,24]. DAT-KO mice showed elevated hyperactivity and velocity, along with less time spent immobile and a failure to habituate over time in the open field. DAT-KO mice also buried fewer marbles than the respective DAT wild-type (DAT-WT) and heterozygous (HET) controls in an appraisal of obsessive or compulsive-like behaviors, likely because of severe hyperactivity and related attention deficit [25], representing ADHD-related phenotypes.
Multiple studies have reported a relationship between DAT variants and ADHD [26][27][28].It was previously recognized that in patients with ADHD, psychostimulants might interact with the DAT: for example, amphetamine and methylphenidate drugs improve behavioral deficits.However, no clear indication of decreased DAT was identified in patients with ADHD when different researchers compared models to patients, but an increment in the DAT level was seen in the striatum of adults and children [29,30].
DAT is one of the dopamine-related genes known to be a candidate for ADHD risk [31] and belongs to the Na+/Cl−-dependent transporter family that takes up dopamine into neurons [20]. DA reuptake by the DAT occurs mainly from the synaptic cleft into the presynaptic terminal and plays an important role in the functioning of the dopamine system [32].
DAT is predominantly expressed in nigrostriatal and mesolimbic dopaminergic neurons of the central nervous system, with the highest levels in the striatum and nucleus accumbens (NAc) [33].In addition, subcellular ultrastructural studies have confirmed that most DAT in striatal dopamine axons are disseminated at the synaptic periphery and nonsynaptic membrane regions [34].
DAT deficiency in DAT-KO mice results in changes in the DA system. Compared to WT mice, DAT-KO mice had a five-fold higher extracellular DA concentration [23], which is consistent with the 300-fold slower DA clearance in DAT-KO mice [35]. It was also confirmed that electrically stimulated dopamine release in DAT-KO mice was reduced by approximately 75% compared with that in WT controls [36]. Giros et al. [22] reported a quantitative in situ hybridization study confirming a reduction in the messenger ribonucleic acid (mRNA) levels of the postsynaptic DA receptors D1 and D2, which were downregulated by almost 50% in the striatum [22]. These studies have shown that the release of dopamine and its receptors is tightly controlled in the brain. Regarding physiological functions, DAT-KO mice had a significantly slower breathing rate with extended inspiration time. DAT-KO mice show a decreased response to hypoxia compared to WT mice; however, CO2 production is unaffected in the mutants [37]. The body temperature of DAT-KO mice does not follow a circadian variation; circadian analysis revealed a decrease in body temperature during the daytime in DAT-KO mice. The deletion of DAT in DAT-KO mice resulted in delayed weight gain compared to HET and WT mice. Females without DAT exhibited poor lactation and a diminished ability to care for their young. Deletion of DAT causes anterior pituitary hypoplasia and a number of changes in the characteristics of the hypothalamo-pituitary axis, emphasizing the function of hypothalamic DA reuptake in developmental events [35].
Fig. 1 (legend) The common risk factors of ADHD. SNP, single-nucleotide polymorphism; SLC6A3, solute carrier family 6 member 3; FOXP2, Forkhead box P2; LPHN3, Latrophilin-3; SORCS3, Sortilin-related VPS10 domain containing receptor 3; SCZ, schizophrenia; ASD, autism spectrum disorder; ODD, oppositional defiant disorder; BP, bipolar disorder; MDD, major depressive disorder; BDNF, brain-derived neurotrophic factor; GDNF, glial cell line-derived neurotrophic factor; NGF, nerve growth factor.
DAT-KO mice exhibit ADHD-related behavioral changes in various psychological experiments [38,39]. In the open field test, the movement speed and hyperactivity of DAT-KO mice increased, whereas the immobility time decreased [40]. Fewer marbles were buried during the marble-burying test, which was attributed to hyperactivity and inattention [41,42]. In the cliff avoidance reaction test, it was confirmed that they showed slightly more impulsive behavior, unlike WT mice, which tried to avoid falling [43]. In addition, behavioral experiments such as the Y-maze and pre-pulse inhibition have confirmed poor attention, learning, and memory [44][45][46]. Moreover, very poor learning and memory abilities were confirmed through an eight-arm maze, a novel object recognition task, and social transmission of food preference tests [35,47]. Contradictory results were also observed in these animals, such as the alleviation of hyperactivity by amphetamine, methylphenidate, and cocaine, which act on DAT [23,48,49], further suggesting that the effects of these compounds in ADHD do not target the DA system alone. In addition, the methylphenidate-induced increase in DA concentration in the synaptic cleft was not observed in DAT-KO mice [35], suggesting that the reduction in hyperactivity in DAT-KO mice could be due to the targeting of noradrenergic systems other than the dopaminergic system.
As per the results of various studies, DAT impairment could lead to ADHD-related behaviors; multiple genetic studies have revealed an association between DAT gene mutations and ADHD [18], and brain imaging studies have also shown a reduction in DAT levels in ADHD patients [50]. However, other studies have reported the opposite, such as an increase in DAT levels in the striatum of patients [51][52][53]. Therefore, the specific role of DAT in ADHD pathogenesis remains unclear. Nevertheless, the DAT-KO mouse model is by far the most well-known and reliable ADHD model and has provided several clues about the function of this gene, which may be related to psychological disturbances.
Spontaneously hypertensive rat (SHR) model
Spontaneously hypertensive (SH) rats are one of the most studied animal models of ADHD. This characterization is based on several findings suggesting that the SHR could be one of the most promising hyperactive models for studying ADHD [54,55]. The SHR strain was developed by Okamoto and Aoki (1963) [56]. They obtained the F1 generation by crossing male Wistar rats with spontaneous hypertension and females with moderately high blood pressure, and selected and mated the hypertensive rats again, eventually reaching F3, where almost 100% of the rats had spontaneous hypertension [56]. SHRs and Wistar-Kyoto (WKY) controls differed in their home-cage circadian activities, with SHRs being more active than WKYs at numerous time points. Interindividual variance in impulsivity was virtually absent in the WKY strain during the test, whereas SHRs showed significant interindividual variability [57].
Initially, the SHR model was developed to study hypertension and related comorbidities in an animal setting [56]. Nevertheless, Sagvolden et al. reported hyperactivity and increased spontaneous motor activity during experiments, suggesting that this animal could be used as a model for ADHD [58].
According to several studies, the SHR validated the key symptoms of ADHD, such as attention deficit, hyperactivity, and impulsiveness [59][60][61][62][63][64][65].SHR have been demonstrated to be similar to children with ADHD [66] in that they are more sensitive to delays in reinforcement, which is consistent with a steeper gradient of delays in reinforcement observed in SHR compared to controls [64,67].Aase et al. (2006) and Aase and Sagvolden (2005) found higher intraindividual variability in SHR behavior than in controls [68,69].This is similar to that observed in children with ADHD [61,63,[68][69][70].
Experiential alterations or deficits have been found to be directly associated with frontostriatal system dysfunction. Previous studies [54,71] showed impaired release of DA in SHR in specific areas clearly affected in ADHD, namely the prefrontal cortex, caudate-putamen and NAc [71]. In young male SHRs, D5 and D1 receptor density is typically increased in the neostriatum and NAc according to a previous study [72], which also demonstrated that the prefrontal cortex (PFC) of SHR has decreased expression of the D4 receptor gene. Furthermore, alterations in noradrenergic release have been observed in the PFC and the locus coeruleus (LC) [73]. In other words, the noradrenergic system is overactive in the prefrontal cortex of SHR. The production of noradrenaline (NA) in the prefrontal cortex induced by glutamatergic stimulation is elevated in SHRs compared to their respective control WKY rats [74]. Collectively, based on the above studies, we consider SHR a favorable model for studying ADHD. However, the features of this model related to hypertension may also act as confounding variables; although this model is valuable, the impact of hypertension on it must be considered.
As described above, the SHR strain was generated by Okamoto and Aoki (1963) through selective breeding of Wistar rats for spontaneous hypertension [56]. Although SHR was created for the study of hypertension, it shows ADHD symptoms such as impulsivity, learning and memory deficits, hyperactivity, and deficient sustained attention [63,75]. For example, SHR were confirmed to be hyperactive compared to control WKY rats in an open field test, and this increased activity was observed in both male and female rats [74]. Moreover, similar to children with ADHD, SHRs are less sensitive to delayed reinforcement and more sensitive to immediate behavioral reinforcement than non-hypertensive WKY control rats [63]. The behavioral responsiveness of SHR was altered by psychomotor stimulants such as methylphenidate hydrochloride (Ritalin) or d-amphetamine, which are used to treat childhood ADHD with major symptoms of attention problems and hyperkinesis. This is consistent with the clinical findings in children with ADHD [76]. In addition, behavioral deficits, such as hyperactivity and impulsiveness, can be alleviated by monoaminergic agents [77,78].
An in vitro superfusion technique revealed that depolarization-induced (25 mM K+) release of DA from NAc slices of SHR was significantly lower than that in WKY controls [79]. Compared to WKY rats, electrically stimulated release of [3H]DA was lower in PFC and caudate-putamen slices of SHR [80]. Miller et al. reported that SHR/NCrl exhibited decreased KCl-evoked DA release in the dorsal striatum (Str) versus the WKY/NCrl model of the inattentive subtype (ADHD-PI). The SHR/NCrl model showed quicker DA uptake in the ventral Str and NAc compared to both control strains, whereas the WKY/NCrl model of ADHD-PI had faster DA uptake in the NAc compared to the SD control. These findings suggest that higher surface expression of DA transporters could explain the faster DA uptake in the Str and NAc of these ADHD animal models [81].
SHR also exhibit changes in several brain systems, one of which is the dopamine system. Depolarization-induced (25 mM K+), electrically stimulated and KCl-evoked release of dopamine is significantly lower in the NAc, cortex, caudate-putamen, and striatum than in WKY controls [80][81][82]. In addition, in SHR, the D5 and D1 receptor subtypes show high levels in the NAc and caudate-putamen [83], whereas the expression of the D4 receptor gene in the PFC, and the synthesis of the corresponding protein, are significantly low [72]. Moreover, in a study of dopamine-related genes (Drd2, Drd4, and Dat1) in WKY and SHR, Mill found several mutations in the DAT1 gene, which could explain some of the behavioral differences between the two strains through DAT1 sequence changes [84].
In addition to changes in the dopamine system, changes in the norepinephrine (NE) system have been observed in SHR. Basal NE concentrations were significantly higher in the frontal cortex, LC, A2 nucleus, and substantia nigra of SHR than of WKY [85]. During early development of SHR, higher NE uptake was found in the frontal cortex, cerebellum, and hypothalamus, together with reduced [3H]DHA binding, indicating downregulation of beta-adrenergic receptors in these regions [86]. Glutamate-induced NE release was greater in SHR than in WKY controls [87], whereas UK 14,304 (an alpha-2 adrenergic agonist) and neuropeptides produced less inhibition of NE release [88]. The inhibition of alpha2-adrenoceptor-mediated NE release was also reduced, suggesting that autoreceptor function in the PFC is disrupted [82]. PNMT, NAT, and 1A-R mRNA expression levels were higher in SHR, with PNMT mRNA three-fold higher in SHR than in WKY rats; in contrast, 2A-R mRNA expression was three-fold lower in the spinal cord [89]. Enhanced DAT was observed in SHR before the onset of hypertension, whereas enhanced DAT and D1 receptors were observed in post-hypertensive SH rats. These findings imply that the dopamine system is involved in the pathophysiology and development of hypertension [90].
In conclusion, SHR can be used as a good ADHD research model based on its ADHD-related behavioral characteristics and changes in the brain. However, hypertension-related changes in this model can act as a cause or confounding variable. Therefore, the effects of high blood pressure on behavioral changes and brain damage should be considered. We have included Table 1 to briefly describe the different mutant mouse and rat models that can be used to study the different symptoms of ADHD, the genes involved, and the behavioral changes.
Steroid sulfatase
Steroid sulfatase (STS) is an enzyme encoded by the X-linked gene STS in humans and the pseudoautosomal gene STS in mice [91].STS functions in the desulfation of neurosteroids by hydrolyzing dehydroepiandrosterone sulfate (DHEA-S) to DHEA [92].DHEA-S and DHEA act as negative regulators of GABA A receptors and positive regulators of NMDA-receptors [93,94].STS expression has been confirmed in brain regions important for attention and impulsivity, which are thought to be problematic in ADHD, such as the PFC, thalamus, and basal nucleus [95].
The X and Y chromosomes are joined end-to-end by pseudoautosomal regions in a single large sex chromosome of 39XY*O mice.All other X and Y genes are present in their normal complement despite the deletion of the STS gene [96].STS is expressed in key areas of the developing brain that are vital for attention and impulsivity as well as in the frontal cortex, thalamus, and basal ganglia.However, the aforementioned regions are likely to be dysregulated in ADHD [95].
Several studies have shown that 39XY*O mice exhibit ADHD-related behaviors, such as hyperactivity, inattention, and occasional aggression [97][98][99], which have been linked to an increase in serotonin (5-hydroxytryptamine, 5-HT) levels in the striatum and hippocampus as a consequence of decreased DHEA [98]. Trent et al. showed that 39XY*O mice achieved higher ratios in a progressive ratio (PR) task thought to index motivation, compared to WT mice. However, no variation was observed between the two groups in the behavioral tasks thought to index compulsivity [99]. A neurobiological explanation for the behavioral differences between 40,XY and 39,X(Y)*O mice is the regionally specific perturbation of the 5-HT system, which is associated with significant correlations between hippocampal 5-HT levels and PR performance, as well as between striatal 5-HT levels and locomotor activity. These findings imply that functional variations and inactivating mutations within STS may affect ADHD vulnerability and disease endophenotypes by altering the serotonergic system. Therefore, although 39XY*O mice have some validity as ADHD models based on ADHD-related behavioral phenotypes and an altered serotonergic system, more evidence is needed to establish them as ADHD models.
Table 1 Mutant mouse and rat models relevant to ADHD (model: genetic modification; ADHD-related behaviors; references)
- GC-C KO mouse: deletion of the guanylyl cyclase-C gene; hyperactivity, impulsivity, inattention [166]
- Per1 KO mouse: targeted mutation (inactivation) of the Per1 gene; hyperactivity, impulsivity [167]
- PI3Kγ KO mouse: absence of class IB phosphoinositide 3-kinase (PI3Kγ); hyperactivity, inattention [168]
- CK1δ OE mouse: overexpression of the casein kinase 1 δ (CK1δ) subunit in the forebrain; hyperactivity [169]
- GAT1 KO mouse: absence of the gamma-aminobutyric acid transporter 1 (GAT1) gene; hyperactivity, impulsivity, inattention [170,171]
- nAChR β2 KO mouse: removal of the gene encoding the β2-subunit of the nicotinic acetylcholine receptor; hyperactivity, impulsivity, inattention [160,172,173]
- ADF/n-cofilin KO mouse: absence of both actin depolymerizing factor (ADF) and n-cofilin; hyperactivity, impulsivity [174]
- GIT1 KO mouse: loss of the G-protein coupled receptor kinase interacting protein 1 (GIT1) gene; hyperactivity [175]
- DGKβ KO mouse: loss of the DGKβ (Dgkb) gene; hyperactivity, inattention [176,177]
- Gβ5 KO mouse: loss of the type 5 G protein beta subunit (Gβ5) gene; hyperactivity [178]
- Fmr1-KO mouse: loss of the fragile X mental retardation 1 (Fmr1) gene; hyperactivity, impulsivity, inattention [179,180]
- Ptchd1-KO mouse: inactivation of the Ptchd1 gene; hyperactivity, inattention [181,182]
- NOS1-KO mouse: neuronal nitric oxide synthase (Nos1) gene ablation; hyperactivity, impulsivity [183]
- mAChR M1-KO mouse: loss of the gene encoding the M1 subtype of the muscarinic acetylcholine receptor; hyperactivity [184,185]
- Brinp1-KO mouse: absence of the bone morphogenetic protein (BMP)/retinoic acid (RA)-inducible neural-specific protein 1 (BRINP1); hyperactivity [186,187]
- Cdh13-KO mouse: genetic ablation of the cadherin-13 (Cdh13) gene; hyperactivity [188,189]
- DAT-CI mouse: triple point mutation in the cocaine-binding site of DAT; hyperactivity [190]
- BAC DAT-tg mouse: overexpression of the dopamine transporter; - [191]
- Naples high-excitability (NHE) rat: lower expression of DA D1 receptor transcripts and 26 mRNAs highly expressed in the PFC of NHE rats; hyperactivity, inattention [192,193]
- Acallosal mouse strain: inbred acallosal mouse strain I/LnJ; hyperactivity, impulsivity [194]
(Table 1 is continued in the alpha-synuclein section below.)
There is a male bias in ADHD [100]. According to previous reports by Szatmari et al. and Gomez et al., the male-to-female ratios were 3:1 and 5:1, respectively [101,102]. Given this male bias, an association between X-linked genes and ADHD may also be expected, as indicated by several studies reporting that patients with Xp deletions exhibit ADHD-like cognitive-behavioral characteristics [103,104].
Moreover, females with Turner syndrome (45,XO), who are haploinsufficient for genes that escape X-inactivation, exhibit various cognitive deficits, including social, visuospatial, memory, and attentional deficits [105,106]. Kent et al. (2008) reported the neurobehavioral characteristics of 25 boys with X-linked ichthyosis, a genetic skin disorder caused by deletion or point mutation of the STS gene; ADHD was confirmed according to Diagnostic and Statistical Manual of Mental Disorders IV criteria with no comorbidity, and 32% (8 cases) of the patients were diagnosed with the inattentive subtype [107]. ADHD has been found in boys with both STS deletions and putative point mutations, indicating that STS insufficiency may be the cause of the elevated risk of inattentive symptoms in these populations [100].
Coloboma mutant mice
SNAP25 is a component of the SNARE (soluble N-ethylmaleimide-sensitive factor attachment protein receptor) complex, which facilitates the docking and fusion of synaptic vesicles to enable the release of neurotransmitters [117,118]. It was previously shown that variations in the SNAP-25 gene could lead to symptoms of ADHD by altering the levels of dopamine and other neurotransmitters at the synapse [119].
SNAP25 dysfunction causes changes in the dopaminergic system. Coloboma mutant mice exhibit a marked decrease in dopamine release from the dorsal striatum compared to their respective controls [120]. In addition, mRNA expression of the dopamine D2 receptor is increased in the ventral tegmental area and substantia nigra, which is consistent with the inhibition of dopamine neurons [121,122]. In addition to the dopamine system, SNAP25 anomalies result in altered NE levels in the noradrenergic system [121]. Reducing NE in these mice with N-(2-chloroethyl)-N-ethyl-2-bromobenzylamine hydrochloride reduced hyperactivity but did not improve impulsivity, demonstrating a link between the noradrenergic system and hyperactivity in this model [123,124]. In a coloboma mouse study, Bruno et al. (2006) discovered that the alpha-2C adrenergic receptor (ADRA2C) was involved in hyperactivity [125].
Hence, these synaptic differences in coloboma mutant mice provide a basis for validating this model, which displays behavioral anomalies such as hyperactivity, inattention, and impulsivity. This mouse model displayed spontaneous locomotor hyperactivity in an open-field experiment [126] and less patience than the control group in a delayed reinforcement task, demonstrating inattention and impulsivity [123].
Studies have suggested that coloboma mutant mice show a reduction in hyperactivity with d-amphetamine but not with methylphenidate, i.e. the model responds only partially to conventional ADHD treatments [109,110,127]. Taking all these considerations into account, we propose that coloboma mutant mice (Snap25-mutant mice) could be used as a promising model for ADHD.
Alpha-synuclein-lacking mice
Alpha-, beta-, and gamma-synucleins belong to the synuclein family of small, soluble proteins that are found only in vertebrates and are expressed in nerve tissues and some tumors [128]. Among these, mutations in alpha-synuclein are associated with rare familial cases of Parkinson's disease as well as with the accumulation of this protein in AD and several other neurodegenerative diseases [129,130]. Alpha-synuclein is predominantly expressed in the brain tissues of the neocortex, hippocampus, striatum, thalamus, and cerebellum, and is found at presynaptic terminals [131]. The human and rodent sequences are 95.3% identical, differing at six amino acids [132]. The residue at position 53 is typically alanine in humans and threonine in rodents. Surprisingly, the same change, Ala-53-Thr, has been found in some familial cases of Parkinson's disease (PD) [129].
Phospholipase D2 (PLD2) has been reported to function in cytoskeletal regulation and/or endocytosis [133]. It has been reported that alpha- and beta-synuclein can selectively inhibit PLD2 through direct interaction on the membrane surface, suggesting that synuclein may play an important role in regulating the vesicle
transport process [134]. PLD2 overexpression in the rat substantia nigra pars compacta (SNc) results in severe neurodegeneration of DA neurons, loss of striatal DA, and ipsilateral amphetamine-induced rotational asymmetry [135]. Other studies have found that alpha-synuclein expression in the rat brain, especially in the cerebral cortex, hippocampus, and dentate gyrus, is related to the localization of molecules associated with the phosphoinositol (PI) secondary messenger pathway, such as phospholipase C1 (PLC1) and the muscarinic cholinergic receptor types m1 and m3 [136]. This finding suggested that alpha-synuclein could be involved in synaptic vesicle release and/or recycling in response to PI stimulation, a notion supported by the discovery that alpha-synuclein can bind to small, unilamellar phospholipid vesicles [137]. Therefore, alpha-synuclein may also play a role in the dopamine system, given its association with Parkinson's disease and its vesicle-related functions.
In fact, the depletion of alpha-synuclein in primary hippocampal neurons treated with antisense oligonucleotides reduces the pool of presynaptic vesicles [138]. Several other studies have suggested that alpha-synuclein is involved in the regulation of DA homeostasis [139,140]. Tissue culture studies have shown that alpha-synuclein inhibits DA synthesis by regulating the activities of tyrosine hydroxylase, protein phosphatase 2A, and aromatic amino acid decarboxylase [141-143]. In contrast, alpha-synuclein KO mice showed increased dopamine release at nigrostriatal terminals in response to paired electrical stimuli, indicating that alpha-synuclein acts as a negative regulator of DA neurotransmission [144]. Alpha-synuclein KO mice also showed a reduced effect of D-amphetamine compared with WT mice, further supporting the view that alpha-synuclein is a negative regulator of DA neurotransmission [144].
Despite these changes in the dopamine system, behavioral phenotypes related to ADHD are not easily observed in alpha-synuclein-related mutant mice. When comparing alpha-synuclein-lacking mice and WT mice, no significant difference in amphetamine-induced activity was observed, and rearing was the same [145]. In a study on the association between alpha-synuclein and anxiety, no significant difference was observed between alpha-synuclein knockout and WT mice in emotionality tests such as the open field, elevated plus maze, and light-dark box, indicating that alpha-synuclein is not involved in anxiety in mice [146]. In Senior's study, significant results were found: alpha-synuclein and gamma-synuclein double-null mice were hyperactive in a novel environment and alternated at a lower rate in the T-maze spontaneous alternation task. In addition, the concentration of extracellular DA in the striatum was doubled in double-null mice after discrete electrical stimulation [147]. However, these findings cannot be attributed to alpha-synuclein alone. Although the behavioral characteristic of hyperactivity has been identified, studies on other ADHD-related behavioral phenotypes, such as inattention and anxiety, are lacking. Therefore, further studies are needed before alpha-synuclein-deficient mice can be used as ADHD-related models. As related behavioral phenotypes and changes in the dopamine system were confirmed in double-null mice, expanding the work to the synuclein family rather than limiting it to alpha-synuclein may provide another clue to its relationship with ADHD.
CONCLUSION AND FUTURE PERSPECTIVES
Animal models are vital research tools that may help us better understand the complex mechanisms involved in the development of a disease and enable us to screen and report new effective medications for therapy that can be translated to humans. An ideal animal model of ADHD would mimic all disease-inducing features at both the behavioral and physiological levels. Abundant evidence regarding the genetics of ADHD exists; however, these findings appear to be inconsistent.
Studies have shown that ADHD may develop from the interaction of many genes, consistent with a polygenic architecture. Mooney et al. combined the results of numerous analysis methods to determine the pathways most likely to be connected with ADHD and to evaluate different types of pathway methodologies and the benefits of this approach [148].
That study identified seven pathways, including RhoA signaling, glycosaminoglycan biosynthesis, fibroblast growth factor receptor activity, and pathways containing potassium channel genes, reported as nominally significant by multiple analysis methods using two GWAS databases. It confirmed earlier views on how the control of neurotransmitters, neurite outgrowth, and axon regulation contributes to the ADHD phenotype, and stressed the importance of cross-method convergence when evaluating pathway analysis results. The excess of minor SNP effects found in each of these pathways was consistent with a polygenic model of illness risk. These pathway correlations offer additional support for earlier hypotheses concerning the etiology of ADHD, particularly those associated with the regulation of neurotransmitter release and neurodevelopmental processes; however, further studies are required to confirm this hypothesis.
To study the mechanisms and understand the etiopathology of ADHD, studies have highlighted the importance of using induced pluripotent stem cells (iPSCs) for disease modeling. This method allows individual-specific neuronal cell lines to be analyzed in vitro in order to study cellular malfunction and identify the underlying genetic variables [149-151]. To circumvent the invasive aspect of sample collection in research on early neurodevelopmental diseases such as ADHD, a methodology has been developed for generating iPSCs from patient hair-derived keratinocytes as the starting somatic cells [152].
Another line of evidence discussed by Ohki et al. regarding the cause and pathophysiology of ADHD supports the involvement of the Wnt and mTOR signaling pathways [153]. Cellular proliferation, polarity, and differentiation are controlled by the Wnt signaling system, whereas synaptic plasticity and several other important neurodevelopmental processes are controlled by the mTOR pathway. Therefore, dysregulation of these time-dependent pathways may result in neurodevelopmental delays and ADHD phenotypes.
Additional genetic variations are present in other models (dopamine transporter gene knockout mice, coloboma mice, Naples high-excitability rats, steroid sulfatase-deficient mice, alpha-synuclein-lacking mice, and neonatal lesioning of dopaminergic neurons with 6-hydroxydopamine). However, none of them is fully comparable to clinical ADHD. The pathophysiology involved varies, including both deficient and excessive dopaminergic functioning, with probable involvement of other monoamine neurotransmitters such as serotonin and noradrenaline. Therefore, improved models, as well as further testing of their ability to predict treatment responses, are required. Some aspects of ADHD behavior may result from an imbalance between increased noradrenergic activity and decreased dopaminergic regulation of neural circuits involving the prefrontal cortex. In addition to providing unique insights into the neurobiology of ADHD, animal models are also used to test new drugs that can alleviate ADHD symptoms.
The evidence addressed in this study suggests that currently available animal models may be useful for studying human behavioral disorders. However, our current knowledge of ADHD neurobiology is insufficient, making it challenging to identify an optimal model for investigating ADHD.
Fig. 2 Differences in dopamine transmission in normal and dopamine transporter knockout (DAT-KO) mice. The figure was modified and redrawn from Efimova et al. [37].
Table 1. Reliability of various proposed Attention-Deficit Hyperactivity Disorder (ADHD) transgenic animal models; the table was modified and adapted from Pena et al. [154]. | 7,639.8 | 2024-04-11T00:00:00.000 | [ "Medicine", "Biology", "Psychology" ] |
Use of multistate Bennett acceptance ratio method for free-energy calculations from enhanced sampling and free-energy perturbation
Multistate Bennett acceptance ratio (MBAR) works as a method to analyze molecular dynamics (MD) simulation data after the simulations have been finished. It is widely used to estimate free-energy changes between different states and averaged properties at the states of interest. MBAR allows us to treat a wide range of states from those at different temperature/pressure to those with different model parameters. Due to the broad applicability, the MBAR equations are rather difficult to apply for free-energy calculations using different types of MD simulations including enhanced conformational sampling methods and free-energy perturbation. In this review, we first summarize the basic theory of the MBAR equations and categorize the representative usages into the following four: (i) perturbation, (ii) scaling, (iii) accumulation, and (iv) full potential energy. For each, we explain how to prepare input data using MD simulation trajectories for solving the MBAR equations. MBAR is also useful to estimate reliable free-energy differences using MD trajectories based on a semi-empirical quantum mechanics/molecular mechanics (QM/MM) model and ab initio QM/MM energy calculations on the MD snapshots. We also explain how to use the MBAR software in the GENESIS package, which we call mbar_analysis, for the four representative cases. The proposed estimations of free-energy changes and thermodynamic averages are effective and useful for various biomolecular systems.
Introduction
The multistate Bennett acceptance ratio (MBAR) method (Shirts and Chodera 2008) is widely used to estimate free-energy changes between different states and thermodynamic averages at the states of interest using simulation trajectory data after the simulations have been finished. MBAR is an extension of Bennett's acceptance ratio method (Bennett 1976), which was formulated for only two states, to multiple states. Theoretically, the estimation using MBAR is superior to other estimators in that it has the lowest variance and is asymptotically unbiased (Shirts and Chodera 2008). MBAR can be interpreted as the limit of infinitesimal bin size in the weighted histogram analysis method (WHAM) (Kumar et al. 1992; Souaille and Roux 2001; Tan et al. 2012), and it can thus avoid any biases introduced by the binning of trajectory data.
MBAR covers a broad range of applications where thermodynamic ensembles are required. These include estimations of free-energy differences, such as absolute/relative binding free-energy calculations (Chodera et al. 2011), and of ensemble averages, such as potential of mean force (PMF) calculations from biased molecular dynamics (MD) trajectories, for example umbrella sampling (Torrie and Valleau 1977) or temperature replica-exchange MD (T-REMD) (Sugita and Okamoto 1999). Furthermore, MBAR has recently been applied to the rapid parameterization of all-atom force-field parameters (Messerly et al. 2018) and coarse-grained model parameters (Shinobu et al. 2019).
One of the practical difficulties in MBAR is to solve the nonlinear simultaneous equations numerically. This requires a sufficient memory capacity on the order of the square of the number of states multiplied by the number of samples. The computational complexity in solving the equations drastically increases with the number of states, which is a bottleneck in large-scale applications with MBAR. To solve this problem, Zhang et al. (2015) and Tan et al. (2016) proposed stochastic solvers, where locally weighted histogram analysis is used to approximate the global solution of the MBAR equations. Recently, Ding et al. developed a rapid solver inspired by the fact that the MBAR equations can be derived as a Rao-Blackwell estimator (Ding et al. 2019).
Another difficulty in using the MBAR equations is how to prepare the input data using different types of MD simulation trajectories. The MBAR equations require the potential energies and thermodynamic parameters of the system as input data. However, in many practical applications, it is often not necessary to use all the potential energy terms in the MBAR equations. For example, in the case of umbrella sampling (US), the potential energies of the system excluding the restraint energies are canceled out in the MBAR equations. The restraint energies alone are sufficient as input data. Another example is the case of MD simulations at constant volume, where pV terms can be canceled out; thus, it is not necessary for solving the MBAR equations. Because it is difficult for non-experts to recognize which terms can be canceled out, users may attempt to prepare whole system's potential energies by themselves. This leads to tedious efforts in preparing unnecessary input data and limits the uses of the MBAR by non-expert users.
In this review, we summarize the basic theory of the MBAR equations and describe representative usages for different types of MD simulation trajectories, such as enhanced conformational sampling methods like umbrella sampling (US) (Torrie and Valleau 1977), temperature replica-exchange MD (T-REMD) (Sugita and Okamoto 1999), and replica exchange with solute tempering (REST/REST2/gREST) (Liu et al. 2005; Wang et al. 2011; Terakawa et al. 2011; Kamiya and Sugita 2018), as well as free-energy perturbation (FEP) (Zwanzig 1954; Tembre and McCammon 1984; Mey et al. 2020). MBAR is also useful for estimating more reliable free-energy changes using MD simulations based on semi-empirical quantum mechanics/molecular mechanics (QM/MM) models and ab initio QM/MM calculations (Warshel and Levitt 1976) on the MD snapshots (Yagi et al. 2021). As examples, we present several computational results using the MBAR equations as a free-energy estimator for simple biological systems. Finally, we discuss our perspectives on free-energy estimation methods.
The classifications of MBAR applications
The MBAR equations
First, we define x ∈ Γ as the configurations of the target system to be simulated. Here, Γ is the configuration space. We also define n(x) as the numbers of molecules of each of the M components of the system, and μ as the vector of chemical potentials of the corresponding components. In the MBAR formulation, state i is specified by a combination of the potential energy function U_i(x), the inverse temperature β_i, the pressure p_i, and/or the chemical potential(s) μ_i, or their conjugate variables, depending on the ensemble. Here, β_i, p_i, and/or μ_i are given as external parameters in the input of the MD simulation; for example, the inverse temperature β_i corresponds to that of the thermostat. Suppose that we obtain N_i uncorrelated samples in state i out of K states, and that for each state i, U_i(x), β_i, p_i, and μ_i are defined. The reduced potential energy u_i(x) of state i is then defined by Shirts and Chodera (2008) as

u_i(x) = β_i [ U_i(x) + p_i V(x) − μ_i^T n(x) ],

where V(x) is the volume and n(x) are the numbers of molecules corresponding to the chemical potentials.
The dimensionless free energy of state i can be defined using the integral of the Boltzmann weight over the configuration space:

f_i = − ln ∫_Γ dx exp[ −u_i(x) ].

Note that the above equation can use arbitrary weights other than the Boltzmann factor. The free-energy difference between two states, Δf_ij = f_i − f_j, is estimated using the MBAR equations as discussed below.
The MBAR equations give us the best estimator of free-energy differences between different states from MD simulation data. Let {x_in} (n = 1, …, N_i) be the configurations sampled in state i; the equations are then given by

f_i = − ln Σ_{j=1}^{K} Σ_{n=1}^{N_j} { exp[ −u_i(x_jn) ] / Σ_{k=1}^{K} N_k exp[ f_k − u_k(x_jn) ] }.

Here, the f_i are the solutions to the MBAR equations. The f_i are determined only up to an additive constant, so only their differences Δf_ij = f_i − f_j are meaningful. Since f_i appears on the right-hand side of this equation as well as on the left-hand side, the equation is solved by self-consistent iteration or the Newton-Raphson method until the f_j converge. On the other hand, when the samples from different states do not overlap in configuration space, the f_i fail to converge, or they may have large uncertainties even after convergence. The uncertainties can be estimated by the analytical expression given in Shirts and Chodera (2008), by the bootstrap method, or by block averaging. When hidden energy barriers exist in the full-dimensional space, the uncertainty in f_i cannot decrease rapidly even with extended MD simulation trajectories. To overcome this, better conformational sampling schemes applicable to higher-dimensional spaces are necessary.
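To make the self-consistent iteration concrete, the following is a minimal sketch (not the GENESIS mbar_analysis or PyMBAR implementation) of an MBAR solver. The array layout, with u_kn[k, n] holding the reduced potential of pooled sample n evaluated in state k and N_k[k] the number of samples drawn from state k, and all function names are assumptions made for illustration.

```python
import numpy as np
from scipy.special import logsumexp

def mbar_self_consistent(u_kn, N_k, tol=1e-8, max_iter=10000):
    """Solve f_i = -ln sum_n { exp(-u_i(x_n)) / sum_k N_k exp(f_k - u_k(x_n)) } by iteration."""
    K, N = u_kn.shape
    f_k = np.zeros(K)                 # dimensionless free energies, defined up to a constant
    log_N_k = np.log(N_k)
    for _ in range(max_iter):
        # log of the denominator sum_k N_k exp(f_k - u_k(x_n)) for every pooled sample n
        log_denom_n = logsumexp(f_k[:, None] + log_N_k[:, None] - u_kn, axis=0)
        f_new = -logsumexp(-u_kn - log_denom_n[None, :], axis=1)
        f_new -= f_new[0]             # fix the additive constant by setting f_1 = 0
        if np.max(np.abs(f_new - f_k)) < tol:
            return f_new
        f_k = f_new
    return f_k                        # may not be converged; check the overlap between states
```

Production codes accelerate convergence with Newton-Raphson updates, as described in the Implementation section below.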
Once the f_i are obtained, thermodynamic averages in an unsampled state can be calculated by "extrapolating" or "interpolating" the MBAR equations from the samples obtained in simulations of the K states. Let u_target(x) be the reduced potential energy of the target state in which the average ⟨A⟩_target of a physical quantity A(x) is to be estimated. ⟨A⟩_target can be written as

⟨A⟩_target = Σ_{j=1}^{K} Σ_{n=1}^{N_j} W_target(x_jn) A(x_jn) = exp( f_target − f_target,A ),

where the MBAR weight of each pooled sample is

W_target(x_jn) = exp[ f_target − u_target(x_jn) ] / Σ_{k=1}^{K} N_k exp[ f_k − u_k(x_jn) ].

Here, f_target,A uses a non-Boltzmann weight because of the factor A(x). The estimates of the free energies f_target,A and f_target can be obtained by solving the above MBAR equations while regarding the corresponding ensembles as the (K + 1)-th and (K + 2)-th states with N_{K+1} = 0 and N_{K+2} = 0, respectively. When the delta function (or indicator function) of order parameter(s) (or collective variable(s)) is used as the physical quantity A(x), the PMF along those coordinate(s) is obtained.
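The reweighting step can be sketched in the same spirit. The weights W_target and the PMF via an indicator function follow directly from the expressions above; the function and variable names (u_target_n, cv_n, bin_edges) are illustrative assumptions, and the free energies f_k are taken from the solver sketch above.

```python
import numpy as np
from scipy.special import logsumexp

def mbar_weights(u_kn, N_k, f_k, u_target_n):
    """Normalized MBAR weights of all pooled samples in a (possibly unsampled) target state."""
    log_denom_n = logsumexp(f_k[:, None] + np.log(N_k)[:, None] - u_kn, axis=0)
    log_w_n = -u_target_n - log_denom_n
    return np.exp(log_w_n - logsumexp(log_w_n))

def mbar_expectation(u_kn, N_k, f_k, u_target_n, A_n):
    """Thermodynamic average of A(x) in the target state."""
    return np.sum(mbar_weights(u_kn, N_k, f_k, u_target_n) * A_n)

def mbar_pmf(u_kn, N_k, f_k, u_target_n, cv_n, bin_edges):
    """PMF along a collective variable: A(x) is the indicator function of each bin."""
    w_n = mbar_weights(u_kn, N_k, f_k, u_target_n)
    hist, _ = np.histogram(cv_n, bins=bin_edges, weights=w_n)
    pmf = -np.log(hist)               # in units of kT; empty bins give +inf
    return pmf - np.min(pmf[np.isfinite(pmf)])
```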
In principle, the MBAR equations can utilize the potential energy U_i(x), the volume V(x), and other thermodynamic data as input. However, evaluating the whole system's potential energies for all combinations of states is a time-consuming process, and some applications can obviously simplify or skip this step. For this purpose, we categorize four major usages of the MBAR equations: (i) perturbation, (ii) scaling, (iii) accumulation, and (iv) full potential energy, as shown in Fig. 1. In the following, we describe the reduced MBAR equations and the input data for each case.

Fig. 1 Classifications of four major applications of the MBAR equations and the schemes from input data to output results.
Perturbation: umbrella sampling
Here, "perturbation" means adding an additional term to the original potential energy used in the MD simulations. Umbrella sampling (Torrie and Valleau 1977) imposes a restraint force(s) along collective variable(s) z(x) and is a major example of "perturbation." The potential energy used in umbrella sampling can be decomposed as U_i(x) = U_system(x) + U^(i)_restraint(z(x)). Here, U_system(x) is the potential energy from the system's force field, shared by all umbrella windows, and U^(i)_restraint(z(x)) is the restraint energy of the i-th umbrella window, typically taken to be the quadratic form U^(i)_restraint(z(x)) = (k_i/2)(z(x) − z_i^ref)^2. In the context of umbrella sampling, the states i in the MBAR analysis are specified by the restraint energies U^(i)_restraint, because the shared system potential cancels out of the MBAR equations. To simplify further, only k_i, z_i^ref, and z(x_jn) are needed as input data. The resulting MBAR equations for the (dimensionless) free energies of the umbrella-window systems are

f_i = − ln Σ_{j=1}^{K} Σ_{n=1}^{N_j} { exp[ −β U^(i)_restraint(z(x_jn)) ] / Σ_{k=1}^{K} N_k exp[ f_k − β U^(k)_restraint(z(x_jn)) ] },

where β is the inverse temperature common to all windows. Typically, umbrella sampling aims to obtain the PMF along z under the ensemble of the target state, i.e., the restraint-free state. In this case, u_target(x) = 0 is used as the reduced energy, which can be calculated without further input data.
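As a sketch of the input preparation for this case (the variable names, units, and the assumption of a non-periodic collective variable are illustrative, not the actual mbar_analysis input format), only the window centers, force constants, and CV trajectories are needed:

```python
import numpy as np

def build_restraint_u_kn(cv_trajs, k_windows, z_windows, kBT):
    """Reduced restraint energies u_kn[i, n] = k_i/2 * (z_n - z_i_ref)^2 / kBT for pooled samples.

    cv_trajs : list of 1D arrays, the CV trajectory z(x_jn) of each window
    k_windows, z_windows : force constants and window centers (units consistent with kBT and z)
    """
    z_n = np.concatenate(cv_trajs)                       # pool the samples from all windows
    N_k = np.array([len(z) for z in cv_trajs])
    u_kn = np.array([0.5 * k / kBT * (z_n - z0) ** 2     # wrap (z_n - z0) first for a periodic CV
                     for k, z0 in zip(k_windows, z_windows)])
    return u_kn, N_k

# Window free energies then follow from mbar_self_consistent(u_kn, N_k), and the restraint-free
# PMF from mbar_pmf(...) with u_target_n = np.zeros(len(z_n)), as stated in the text.
```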
Scaling: temperature replica exchange MD
In "scaling," the original potential energy is multiplied by a factor in the MD simulations. We consider T-REMD as an example of "scaling." In the context of T-REMD, the states i in the MBAR analysis are specified by the inverse temperatures β_i. The reduced potential energy used in T-REMD is u_i(x) = β_i U_system(x), i.e., just a scaling of the original potential energy U_system(x). In this case, the resulting MBAR equation is

f_i = − ln Σ_{j=1}^{K} Σ_{n=1}^{N_j} { exp[ −β_i U_system(x_jn) ] / Σ_{k=1}^{K} N_k exp[ f_k − β_k U_system(x_jn) ] }.

Here, β_i U_system(x_jn) can be calculated by multiplying U_system(x_jn) by β_i. Therefore, the only inputs required for the MBAR equations are the set of U_system(x_jn) and the β_i. For the calculation of thermodynamic averages or the PMF, the target ensemble is usually taken as u_target(x) = β_1 U_system(x), i.e., the ensemble at the lowest temperature β_1. Note that making use of different temperatures in this way with the WHAM instead of the MBAR requires an advanced reformulation of the WHAM theory, such as the parallel tempering WHAM (Chodera et al. 2007). Here, we assume the NVT ensemble; in the NPT ensemble, the pV terms must also be included in the inputs (Paschek and García 2004; Peter et al. 2016).
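A corresponding sketch for the "scaling" case is shown below; the only assumed inputs are the temperature-sorted potential energies (here taken to be in kcal/mol) and the list of replica temperatures, with names chosen for illustration.

```python
import numpy as np

KB = 0.0019872041  # Boltzmann constant in kcal/(mol K)

def build_tremd_u_kn(U_sorted, temperatures):
    """u_kn[i, n] = beta_i * U_system(x_n) for all pooled, temperature-sorted samples."""
    U_n = np.concatenate(U_sorted)                  # pool the sorted trajectories
    N_k = np.array([len(U) for U in U_sorted])
    beta_k = 1.0 / (KB * np.asarray(temperatures))
    u_kn = beta_k[:, None] * U_n[None, :]
    return u_kn, N_k, U_n

# For averages or a PMF at the lowest temperature, use u_target_n = U_n / (KB * temperatures[0]).
```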
Accumulation: free-energy perturbation
"Accumulation" refers to calculations that accumulate, or integrate, the free-energy differences between states. As a typical example of "accumulation," we consider free-energy perturbation (FEP) (Zwanzig 1954; Tembre and McCammon 1984; Mey et al. 2020), which is typically used to calculate the free-energy difference of an alchemical transformation. In FEP, multiple intermediate states are stratified with a one-dimensional control parameter λ (0 ≤ λ ≤ 1). By stratifying between the start (λ = 0) and end (λ = 1) states, the distributions of the reduced energies of neighboring states along λ overlap with each other, resulting in more accurate estimates of the free-energy differences between neighboring states, Δf_{i,i+1}. The free-energy difference between the start and end states is then obtained by accumulating the neighboring differences, Δf_{start,end} = Σ_i Δf_{i,i+1}. In FEP, the potential energy is described by U(x; λ_i) = (1 − λ_i) U_start(x) + λ_i U_end(x), and the states i in the MBAR analysis are specified by U(x; λ_i). In the MBAR analysis of FEP data, it is ideal to prepare the reduced energies u_i(x_jn) for all combinations of the stratified states and to solve the full MBAR equations. For practical use, however, it is often sufficient for the accuracy of the free-energy difference to consider the reduced energies only between neighboring states, and it is thus not worth the cost of calculating the reduced energies for all combinations of states (Paliwal and Shirts 2011). Moreover, the difference of the reduced energies between non-neighboring states often becomes too large, leading to instability in the energy calculations. In the case of scaling the Lennard-Jones (LJ) interactions of a solute in water, the solute can overlap with water molecules in the end state (λ = 0, i.e., the fully decoupled state of the solute). The reduced energy of the start state (λ = 1, the fully coupled state of the solute) estimated using configurations of the end state then becomes infinite, even when the soft-core modification (Beutler et al. 1994; Zacharias et al. 1994) is introduced into the LJ interactions. In our framework, only the reduced energies of the neighboring states, i.e., u_i(x_in) and u_{i+1}(x_in), are used as input data. The free-energy differences between neighboring states are accumulated to obtain the difference between the start and end states. In this sense, MBAR for "accumulation" is essentially the same framework as the original BAR.
Full potential energy: REST, parameter tuning, and others
The last case, "full potential energy," needs the whole system's potential energies for solving the MBAR equations. In this context, state i in the MBAR analysis is specified by the whole system's potential energy. In replica exchange with solute tempering (REST) (Liu et al. 2005), REST2 (Wang et al. 2011; Terakawa et al. 2011), and gREST (Kamiya and Sugita 2018), solute molecules such as proteins or ligands (in REST/REST2), or parts of solute molecules with full or partial potential energy terms (in gREST), are scaled as the solute region. In REST/REST2/gREST simulations, the solute region in each replica has a different temperature, while the temperature of the solvent region is the same in all replicas, for instance room temperature. Although only the scaled terms could in principle be given to the MBAR equations, ignoring the canceling terms, most MD software does not write out specific terms of the potential energy by default but instead evaluates potential energies from the updated parameters of the solute region. Thus, in terms of computational cost, there is no difference between preparing the potential energy of the entire system and preparing the specific scaled terms. Another example is the tuning of force-field or model parameters (Messerly et al. 2018; Shinobu et al. 2019). Since the parameters affect the total energy in a complicated way, the potential energies of the whole system must be prepared for all the replicas. In the case of parameter tuning, parameter sets that have not yet been sampled are used as the target state of the MBAR to predict the behavior of new parameter sets.
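For the "full potential energy" case, the input preparation reduces to assembling the full K x N matrix of whole-system energies; the sketch below assumes, for illustration only, that the MD engine has already written, for every sorted sample, the system energy evaluated with each condition's (scaled) parameters.

```python
import numpy as np

KB = 0.0019872041  # kcal/(mol K)

def build_full_energy_u_kn(U_kn_kcal, solvent_temperature=300.0):
    """Reduced potentials for K conditions (e.g., gREST solute-scaling factors).

    U_kn_kcal[k, n] : whole-system potential energy of sorted sample n evaluated with the
    parameters (scaling) of condition k, in kcal/mol.
    """
    beta0 = 1.0 / (KB * solvent_temperature)   # common (solvent) temperature of all replicas
    return beta0 * np.asarray(U_kn_kcal)

# Averages at the unscaled condition use u_target_n = beta0 * U_kn_kcal[0], with the weights
# computed exactly as in the reweighting sketch above.
```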
A further complicated yet important case is the combination of umbrella sampling (the perturbation case) with different potential energy functions (the full-potential-energy case) in hybrid quantum mechanics/molecular mechanics (QM/MM) calculations (Warshel and Levitt 1976; Yagi et al. 2021). In the MBAR analysis of QM/MM data, it is possible to reweight samples obtained with a low-level theory (LL, e.g., a classical force field or semi-empirical QM) with the energies of a high-level theory (HL, e.g., ab initio QM) and obtain a more accurate PMF. Because the computational cost of MD simulations with HL is far greater than that with LL, various methods have been proposed to correct LL data with HL calculations as post-processing (Yagi et al. 2021). The LL calculation is often conducted with umbrella sampling along a reaction coordinate z(x). The total potential energy of the LL system with the umbrella window is then U_i(x) = U^LL_system(x) + U^(i)_restraint(z(x)). The target potential energy is the potential energy of the HL system without the restraint potential, u_target(x) = β U^HL_system(x). The required input data are the potential energies of the LL system, U^LL_system(x_n), and of the HL system, U^HL_system(x_n), together with the restraint energies U^(i)_restraint(z(x_n)). The resulting MBAR equations for the window free energies are

f_i = − ln Σ_{n=1}^{N} { exp[ −β U^(i)_restraint(z(x_n)) ] / Σ_{k=1}^{K} N_k exp[ f_k − β U^(k)_restraint(z(x_n)) ] },

because the common LL system energy cancels among the sampled windows, while the MBAR weight used to reweight each sample to the restraint-free HL target is

W_target(x_n) ∝ exp[ −β ( U^HL_system(x_n) − U^LL_system(x_n) ) ] / Σ_{k=1}^{K} N_k exp[ f_k − β U^(k)_restraint(z(x_n)) ].
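The LL-to-HL reweighting can be sketched as follows; here f_k are the window free energies from the restraint-only MBAR solution, and the array names (U_LL_n, U_HL_n, u_restraint_kn) are illustrative assumptions rather than the actual GENESIS input format.

```python
import numpy as np
from scipy.special import logsumexp

def hl_target_weights(f_k, N_k, u_restraint_kn, U_LL_n, U_HL_n, beta):
    """Normalized MBAR weights of LL umbrella-sampling samples in the restraint-free HL target.

    u_restraint_kn[k, n] = beta * U_restraint^(k)(z(x_n)); U_LL_n and U_HL_n are single-point
    LL and HL energies of the pooled samples. The common LL energy cancels inside the
    denominator over the sampled (LL + restraint) states, so only the HL - LL difference remains.
    """
    log_denom_n = logsumexp(f_k[:, None] + np.log(N_k)[:, None] - u_restraint_kn, axis=0)
    log_w_n = -beta * (np.asarray(U_HL_n) - np.asarray(U_LL_n)) - log_denom_n
    return np.exp(log_w_n - logsumexp(log_w_n))

# A weighted histogram of z(x_n) with these weights gives the HL-corrected PMF.
```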
Implementation
Based on the above classifications, we implemented an MBAR code, which we call mbar_analysis, in the GENESIS software package (Jung et al. 2015; Kobayashi et al. 2017). The implemented code preprocesses the input data for each of the above classifications, so the input formats are simplified as much as possible. The code then calls a common MBAR equation solver to estimate free-energy differences. The solver combines a simple self-consistent iteration with the Newton-Raphson method. The calculation of the denominator of the MBAR equation is parallelized with multiple threads, resulting in faster execution than the reference MBAR implementation, PyMBAR (Shirts and Chodera 2008). After obtaining the estimates, the code outputs the free-energy differences, weights, and PMF according to the input parameters. The code was implemented in FORTRAN.
Test simulation systems
We first demonstrate the MBAR analysis in the case of "perturbation" using alanine tripeptide in vacuum as the target molecule. Using the CHARMM36m force field (Huang et al. 2017), the umbrella sampling simulation was performed with GENESIS atdyn (Jung et al. 2015; Kobayashi et al. 2017). The centers of neighboring windows were separated by 3 degrees, resulting in 61 windows covering angle values from 0 to 180 degrees. A spring constant of 200 kcal/mol/rad^2 was applied. The temperature was controlled at 300 K by the stochastic velocity scaling method (Bussi et al. 2007). Long-range electrostatic interactions were treated without cutoff. The instantaneous angles corresponding to z(x_jn) in the MBAR equations were extracted from the trajectory data with the trj_convert tool in GENESIS. The extracted angle values, the spring constants, and the window centers were used as the input to the MBAR. For reference, the same calculation was performed with the WHAM implemented as the wham_analysis tool in GENESIS.
To demonstrate the MBAR analysis of T-REMD data, we performed a T-REMD simulation of alanine tripeptide in solution. The initial structures and parameters are the same as those used in the GENESIS tutorials (https://www.rccs.riken.jp/labs/cbrt/tutorials2022/). We calculated the potential of mean force (PMF) in the φ and ψ dihedral-angle space. Using the CHARMM36 force field (Huang and MacKerell 2013) and TIP3P water molecules (Jorgensen et al. 1983), the simulation was performed with GENESIS spdyn. In the MD, electrostatic interactions were treated by the smooth particle mesh Ewald method (Essmann et al. 1995), and covalent bonds containing hydrogen atoms were constrained by the SHAKE (Ryckaert et al. 1977) or SETTLE (Miyamoto and Kollman 1992) algorithms. The temperature was controlled by the stochastic velocity scaling method (Bussi et al. 2007). These settings were kept in the other applications. The system's potential energy was extracted from the MD log file and sorted from replica-ID data to temperature-ID data using the remd_convert tool in GENESIS. These energies were then used, together with the temperatures of the replicas, as the input to the MBAR. The dihedral angles φ and ψ were also extracted (with the trj_convert tool) from the trajectory data and used as the input to calculate the PMF at 300 K. We compared the PMF obtained using only the trajectory at 300 K with that obtained from all temperature trajectories reweighted using MBAR.
For a demonstration of the "accumulation" case, we calculated the mutation from the amino-acid side-chain analogue of Val to that of Trp in water. Twenty strata dividing the interval from λ = 0 to λ = 1 into equal steps were used, and FEP was performed. Using the CHARMM36 force field and the TIP3P water model, the simulation was performed with the FEP implementation (Oshima et al. 2020) in GENESIS spdyn. To remove the instability of MD simulations near the end states, the soft-core treatment was introduced into the LJ and electrostatic potentials (Zacharias et al. 1994; Steinbrecher et al. 2011). The reduced energies of the neighboring states, i.e., u_{i−1}(x_in), u_i(x_in), and u_{i+1}(x_in), were generated with GENESIS spdyn and used as the input for the MBAR analysis.
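The accumulation of these neighboring-state energies can be sketched as follows; the sketch reuses the two-state MBAR solver from the earlier code example, and the naming convention for the neighboring-state reduced energies is an assumption for illustration (u_ab means the state-a reduced energy evaluated on samples drawn from state b).

```python
import numpy as np

def accumulate_fep(pair_data):
    """Sum Delta f_{i,i+1} over neighboring lambda pairs.

    pair_data : list of tuples (u_ii, u_ji, u_jj, u_ij) for each pair (i, j = i+1), where
    u_ab is the reduced energy of state a evaluated on the samples drawn from state b.
    """
    total = 0.0
    for u_ii, u_ji, u_jj, u_ij in pair_data:
        # two-state MBAR (equivalent to BAR): pool the samples of the two neighbors
        u_kn = np.vstack([np.concatenate([u_ii, u_ij]),
                          np.concatenate([u_ji, u_jj])])
        N_k = np.array([len(u_ii), len(u_jj)])
        f_k = mbar_self_consistent(u_kn, N_k)   # solver from the sketch above
        total += f_k[1] - f_k[0]                # Delta f_{i,i+1}
    return total
```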
As a demonstration of the "full potential energy" case, a gREST simulation of alanine tripeptide in water (taken from the GENESIS tutorials) was performed using four replicas, with alanine tripeptide as the solute and temperature control applied by scaling all of its potential energy terms. Using the CHARMM36 force field and the TIP3P water model, the simulation was performed with GENESIS spdyn. The potential energies required by MBAR were generated using an optional function of GENESIS spdyn. The potential energy data of the replicas were sorted from replica-ID data to temperature-ID data using the remd_convert tool of GENESIS. We calculated the PMF along the distance between the terminal residues (an oxygen of ALA1 and a hydrogen of ALA3). We compared the PMF obtained using only the trajectory at 300 K with that of all replicas' trajectories reweighted by the MBAR.
Finally, as a demonstration of QM/MM umbrella-sampling MD (QM/MM-US-MD), simulations of malonaldehyde (MA) and p-nitrophenyl phosphate (pNPP^2−) were performed at the third-order extension of the self-consistent-charge density-functional tight-binding level (Gaus et al. 2011). MA and pNPP^2− were encapsulated in a TIP3P water sphere with a radius of 20 Å. The reaction coordinate for MA is a linear combination of the two distances between the atoms involved in the proton-transfer reaction. The reaction coordinate for pNPP^2− is the P-O distance involved in hydrolysis (Fig. 3a and b).
For MA, MD calculations were performed for 200 ps at 300 K using a spring constant of 40 kcal/mol/rad^2 in 21 umbrella windows whose centers increased along the reaction coordinate from −1.0 to 1.0 at 0.1 Å intervals. For pNPP^2−, MD calculations were performed for 500 ps at 300 K using a spring constant of 300 kcal/mol/rad^2 in 18 umbrella windows whose centers increased along the reaction coordinate from −1.4 to 3.1 at 0.1 Å intervals. The 3ob (ophyd) parameter set (Gaus et al. 2013; Gaus et al. 2014) was used for the Slater-Koster parameters, and CHARMM36m (Huang et al. 2017) and CGenFF (Vanommeslaeghe et al. 2009; Yu et al. 2012) were used as the classical force fields. For reweighting with MBAR, potential energies were evaluated by single-point energy calculations for 2000 and 5000 samples for MA and pNPP^2−, respectively, at the density functional theory (DFT) level using B3LYP/cc-pVDZ (Lee et al. 1988; Becke 1993; Grimme et al. 2010; Dunning 1989). The DFTB and DFT calculations were performed using QSimulate (https://qsimulate.com), a fast quantum computation program package, in combination with GENESIS atdyn (Yagi et al. 2021).
Figure 2a shows the inputs required for the MBAR analysis and the results of the umbrella sampling of alanine tripeptide in vacuum. As explained above, the only inputs required for umbrella sampling are the collective-variable trajectories of the umbrellas, z(x_jn), the window centers, and the spring constants. The restraint energies U^(i)_restraint(x_jn) required in the MBAR equations are calculated internally and passed to the solver. In the PMF calculation with WHAM, the density of states is calculated after making histograms of the trajectories, whereas in MBAR the density of states at each value of z is calculated from the weight of each sample point. Therefore, MBAR can in principle avoid any biases due to the binning of the data.
Figure 2b shows the inputs required for the subsequent MBAR analysis and the results of the T-REMD of alanine tripeptide in water. As explained above, the only inputs required for T-REMD are the sorted potential energies of the replicas and the temperatures. The reduced energies β_i U_system(x_jn) required in the MBAR equations are calculated internally and passed to the solver. Comparing the PMF obtained from only the lowest-temperature (300 K) trajectory with that obtained from all replica trajectories weighted with MBAR, the left-handed helix state is well captured by MBAR. This is because the left-handed helix sampled at high temperatures is not discarded but treated as weighted samples by the MBAR.
Figure 2c shows the inputs required in the subsequent MBAR analysis and the results of the alchemical FEP from Val to Trp in water. As explained above, the only inputs required for FEP are the potential energies of the neighboring states. Our MBAR tool integrates the free-energy differences between the neighboring states to obtain the total difference between the start and end states; inside the program, the MBAR equation for two states is called repeatedly. The free-energy difference between Val and Trp shown in Fig. 2c is comparable with exponential averaging and matches the result obtained with NAMD (Liu et al. 2012), indicating that the potential energy difference from the distant states does not contribute. This is also consistent with the exhaustive FEP benchmarks by Paliwal and Shirts (2011), in which MBAR and BAR showed comparable accuracies for many FEP data analyses.
Figure 2d shows the results of gREST for alanine tripeptide in water and the inputs required in the MBAR analysis. The reduced energies required for MBAR were generated with GENESIS spdyn during the simulations. As shown in the figure, the potential energies at all scaling values are required as input for the MBAR analysis. Figure 2d compares the PMF from the lowest-temperature (i.e., scaling-factor) trajectory with that obtained using MBAR. The PMF using MBAR captures the stable conformations, indicated by lower PMF values, better than the PMF from a single trajectory at 300 K.
Figure 3c and d show the PMF obtained with DFTB, with DFT, and reweighted with MBAR. A total of 10 ps (10,000 MD steps) of equilibration followed by 20 ps (20,000 MD steps) of US-MD calculations were performed for MA and pNPP^2− to obtain the PMF at the DFT level. In both systems, the PMF obtained from the DFT-level calculation and the PMF obtained from the reweighting are in good agreement. The computational cost of the reweighting was 6.7% for MA (16.7% for pNPP^2−) of the 30,000 MD steps of the brute-force DFT-US-MD calculation (only 2000 samples were reweighted for MA, and 5000 samples for pNPP^2−). Furthermore, because only the potential energy is required for reweighting, the gradient calculation can be omitted, further reducing the actual computational cost.
Fig. 2 Results of the MBAR analysis for the four cases of our classification. a Potential of mean force (PMF) along the dihedral angle of alanine tripeptide in vacuum. The results of MBAR and WHAM are indicated by the red solid line and blue dashed line, respectively. b PMF in the φ and ψ dihedral angles of alanine tripeptide in water. The PMF calculated only from the trajectory at 300 K and from all trajectories reweighted by the MBAR are shown. The region of the left-handed helix is indicated by the red arrow. c Free-energy differences upon the mutation from the amino-acid side-chain analogue of Val to that of Trp as a function of λ, calculated by the free-energy perturbation method. The shaded region indicates the uncertainties estimated by block averaging. EXP denotes exponential averaging. d PMF along the distance between the terminal residues of alanine tripeptide in water, calculated by the gREST simulation. The PMF calculated only from the trajectory at 300 K and from all trajectories reweighted by the MBAR are shown.
Discussion
MBAR is a very flexible method that can handle various states, ranging from different temperature/pressure conditions to different force-field parameters. On the other hand, this flexibility sometimes requires complicated inputs, even for straightforward analyses such as umbrella sampling. In this review, we classified typical applications of MBAR and simplified the input data for practical usage. Our classification should help non-expert users apply the MBAR analysis to obtain unbiased thermodynamic data from biased MD simulation trajectories.
One important future application of MBAR would be feedback to simulation models, as shown by Messerly et al. (2018) and Shinobu et al. (2019). Since MBAR extrapolates and estimates statistics for parameter sets that have not yet been sampled, it helps us to optimize the parameters of a simulation model. When the number of parameters to be optimized is large, grid search becomes difficult even with the help of the MBAR equations. In this case, as recently shown by Wieder et al. (2021), making the outputs of the MBAR equations differentiable with respect to the potential energies and their force-field parameters enables an efficient gradient search, much like the training of neural-network model parameters. On the other hand, the limitation of MBAR in terms of feedback to the simulation model is that MBAR is just an estimator; it can only estimate free-energy differences or averaged data. Recent deep-learning technologies are also flexible, and they can directly model the conformational density of molecules as well as statistical averages. For example, Wang et al. recently succeeded in modeling the conformational density from the various temperature trajectories obtained by T-REMD using a diffusion model in which temperature is incorporated as one of the random variables (Wang et al. 2022). If it becomes possible to model the density and the uncertainty of parameters and conformations with these technologies, more efficient optimization will be possible in combination with MBAR.
Fig. 3 a Malonaldehyde located at the center of the water droplet. b pNPP^2− located at the center of the water droplet. The collective variables used in the umbrella sampling are indicated in the figure. c Potential of mean force (PMF) of malonaldehyde obtained by DFTB (red line), DFT (blue line), and reweighted with the MBAR (black dotted line). d PMF of pNPP^2−. | 7,226.8 | 2022-12-01T00:00:00.000 | [ "Physics" ] |
Sp1 and c-Myc modulate drug resistance of leukemia stem cells by regulating survivin expression through the ERK-MSK MAPK signaling pathway
Background Acute myeloid leukemia (AML) is initiated and maintained by a subset of self-renewing leukemia stem cells (LSCs), which contribute to the progression, recurrence and therapeutic resistance of leukemia. However, the mechanisms underlying the maintenance of LSCs drug resistance have not been fully defined. In this study, we attempted to elucidate the mechanisms of LSCs drug resistance. Methods We performed reverse phase protein arrays to analyze the expression of anti-apoptotic proteins in the LSC-enriched leukemia cell line KG-1a. Immuno-blotting, cell viability and clinical AML samples were evaluated to verify the micro-assay results. The characteristics and transcriptional regulation of survivin were analyzed with the relative luciferase reporter assay, mutant constructs, chromatin immuno-precipitation (ChIP), quantitative real-time reverse transcription polymerase chain reaction (RT-qPCR), and western blotting. The levels of Sp1, c-Myc, phospho-extracellular signal-regulated kinase (p-ERK), phospho-mitogen and stress-activated protein kinase (p-MSK) were investigated in paired CD34+ and CD34- AML patient samples. Results Survivin was highly over-expressed in CD34 + CD38- KG-1a cells and paired CD34+ AML patients compared with their differentiated counterparts. Functionally, survivin contributes to the drug resistance of LSCs, and Sp1 and c-Myc concurrently regulate levels of survivin transcription. Clinically, Sp1 and c-Myc were significantly up-regulated and positively correlated with survivin in CD34+ AML patients. Moreover, Sp1 and c-Myc were further activated by the ERK/MSK mitogen-activated protein kinase (MAPK) signaling pathway, modulating survivin levels. Conclusion Our findings demonstrated that ERK/MSK/Sp1/c-Myc axis functioned as a critical regulator of survivin expression in LSCs, offering a potential new therapeutic strategy for LSCs therapy. Electronic supplementary material The online version of this article (doi:10.1186/s12943-015-0326-0) contains supplementary material, which is available to authorized users.
Background
Acute myeloid leukemia (AML) is an aggressive malignancy with a 5-year overall survival rate of approximately 30%-45% in adults and nearly 70% in children [1]. Although the survival rate in children is quite high, recurrence and acquired drug resistance remain the leading causes of death, occurring in over 60% of cases within 1 year after complete remission [2]. AML is typically a stem cell-driven disease, and the existence of leukemia stem cells (LSCs) was first demonstrated (as CD34+CD38− cells) by JE Dick in 1994; these cells strongly promote AML progression, chemoresistance and recurrence [3,4]. Therefore, identifying molecules or pathways that are preferentially expressed in LSCs and determining the principles controlling AML pathogenesis are critical and may facilitate the optimization of clinical treatments for AML patients.
In recent years, many researchers have focused on the development of novel therapies targeting cancer stem cells (CSCs), including specific signaling pathways, the tumor microenvironment, and the induction of differentiation and apoptosis [5]. From these studies, survivin has emerged as an essential factor for tumor progression and development [6,7]. Survivin, a member of the inhibitor of apoptosis (IAP) protein family, is one of the most frequently elevated targets in numerous malignancies and is largely undetectable or expressed at very low levels in normal adult tissues. Moreover, survivin has been linked with poor prognosis, resistance to therapy, and low overall survival [8]. Additionally, survivin is localized in both the cytoplasm and the nucleus and has an active nuclear transporter in its linker region [9]. Cytoplasmic survivin cooperatively interacts with XIAP/Smac/DIABLO to regulate the caspase cascade by directly preventing its release and activation from mitochondria for the induction of apoptosis [10]. Nuclear survivin predominantly associates with the Aurora B kinase/INCENP/Borealin/Dasra B complex to form the chromosomal passenger complex, which regulates progression of the cell cycle from the centromere to the mitotic spindle during cell division [11,12]. Moreover, survivin has been confirmed to function as an essential factor in the tumor microenvironment and in multiple signaling pathways, including PI3K, Akt, p53, NF-kB, STAT3, Wnt/β-catenin and MAPK, which are highly involved in controlling tumor maintenance and growth [13-15].
Survivin has also been shown to be associated with drug resistance in various cancers and tissues. Park et al. showed that survivin is up-regulated in primary acute lymphoblastic leukemia (ALL) and plays a critical role in drug resistance, and that inhibition of survivin expression combined with chemotherapy leads to the eradication of ALL [16]. Fukuda reported that silencing survivin could significantly reduce the proliferation of leukemia cells and induce apoptosis in FLT3-mutant mice [17]. Intriguingly, survivin expression is correlated with Oct-4 expression, and the two concurrently regulate murine embryonic stem cell survival under stress [18], demonstrating a novel function in controlling the stem cell state. These results suggest that survivin is a potentially worthy target for personalized molecular therapy. However, the effects of survivin on the regulation of CSC biological properties remain to be defined.
In the current study, we aimed to investigate the role of survivin in LSCs drug resistance and to elucidate the mechanisms regulating survivin expression in LSCs. Our results demonstrated that the ERK-MSK-specificity protein (Sp) 1/c-Myc axis was responsible for survivin expression, providing valuable targets for further development of molecular therapies to treat leukemia.
Survivin is highly expressed in LSCs compared with their differentiated counterparts
Several recent studies have suggested that as many as 25% of cancer cell lines and certain tumor samples have the properties of CSCs [19]. Therefore, we first isolated LSCs using magnetic sorting with the gold-standard surface markers CD34+CD38− from six leukemia cell lines (Additional file 1: Figure S1A). KG-1a and MOLM13 cells were enriched in LSCs (9.6% and 4.3%, respectively; Additional file 1: Figure S1B). Subsequently, we characterized the isolated LSCs by examining typical stem cell gene expression, the cell surface marker ABCB1 (P-gp), which is preferentially expressed in LSCs [20,21], self-renewal ability, and drug resistance (using three first-line AML drugs: Ara-C, dexamethasone and L-aspartic acid). This analysis suggested that these cells were indeed LSCs and could be used for further studies (Additional file 1: Figure S1 and Additional file 2: Figure S2A).
Next, to investigate the specific molecular mechanisms underlying the chemoresistance of LSCs, we used the ABC transporter inhibitor verapamil [22] to test whether LSC drug resistance depends on the expression of the "drug pump." As expected, the addition of verapamil increased Ara-C- and dexamethasone-induced apoptosis (Additional file 2: Figure S2B). However, the LSC fractions still showed reduced apoptosis and higher resistance to chemotherapeutic agents than the non-LSC subpopulations of KG-1a cells (Additional file 2: Figure S2C). Therefore, we speculated that other mechanisms might mediate drug resistance in LSCs.
To investigate whether anti-apoptosis-related proteins were involved in LSC drug resistance, we performed reverse-phase protein microarray (RPPA) analysis of apoptosis-related proteins in the four populations (CD34+CD38+, CD34+CD38−, CD34−CD38+, CD34−CD38−) isolated from KG-1a cells (Figure 1A and Additional file 3: Figure S3A). Intriguingly, we found that four apoptosis-related proteins, i.e., Hsp70, survivin, XIAP, and insulin-like growth factor (IGF)-II, were highly expressed in CD34+CD38− cells compared with the other subsets (Figure 1B). Immunoblotting analysis of KG-1a cells yielded results similar to the protein microarray data (Figure 1C). To further investigate which specific proteins mediated the drug resistance of LSCs, we used specific short-interfering RNAs (siRNAs). As shown in Figure 1D, only inhibition of survivin expression decreased the viability of LSCs to a level consistent with the effects observed in non-LSCs following the addition of Ara-C, L-Asp, or dexamethasone, indicating that survivin could block chemotherapy-induced apoptosis. Additionally, survivin was also found to be highly expressed in the LSC-enriched fraction of MOLM13 cells (Figure 1E).
To determine whether the up-regulation of survivin expression in CD34+CD38− cell fractions was specific to LSCs, we collected blood samples from 56 patients with AML to directly compare survivin levels in paired CD34+ and CD34− cells. Using RT-qPCR, survivin was found to be significantly up-regulated in primary CD34+ cells compared with their paired CD34− counterparts (Figure 1F).
Figure 1 Survivin was highly expressed in leukemia stem cells compared to their differentiated counterparts. (A) KG-1a cells were separated into four subpopulations, and apoptosis-related protein microarray assays were performed using R statistical programming. (B) The relative expression levels of four apoptosis-related proteins (Hsp70, survivin, XIAP, and IGF-II) were determined (*P < 0.05, **P < 0.01). (C) The protein array results were confirmed by western blotting. (D) LSC and non-LSC viability was analyzed following inhibition of survivin, Hsp70, XIAP, or IGF-II expression with siRNA in the presence of chemotherapeutic agents (Ara-C, L-Asp, and dexamethasone [Dex]). (E) Analysis of survivin expression in MOLM13 cells following fractionation of the CD34+CD38− population. (F) Survivin mRNA expression was analyzed in CD34+/CD34− cells from AML patients (n = 56, *P < 0.05) by real-time qPCR. (G) Survivin protein expression was analyzed in paired CD34+/CD34− samples from 20 AML patients by western blot. (H) Line graph showing a direct comparison of survivin protein levels in CD34+/CD34− cells from paired AML samples (n = 20, *P < 0.05).
Survivin contributed to the anti-apoptotic and chemoresistant characteristics of LSCs
Next, we performed loss-of-function experiments to investigate the contribution of survivin to the apoptosis-related characteristics of LSCs. Specific siRNAs were used to knock down survivin, and a non-targeting siRNA was used as a control. CCK-8 assays showed significant decreases (P < 0.05) in LSC growth at 24 and 48 h after transfection (Figure 2A). Similarly, marked survivin knockdown was observed, and cleaved PARP and caspase-3 were clearly increased, 48 h after transfection of survivin siRNA in both LSC-enriched cell lines (Figure 2B). Meanwhile, annexin-V/PI FACS analysis confirmed the obvious increase in apoptosis at 48 h after siRNA transfection in LSCs (Figure 2C), suggesting that knockdown of survivin induced apoptosis in LSCs. Moreover, soft-agar colony assays revealed that survivin knockdown significantly decreased colony numbers compared with the non-targeting control in KG-1a LSCs (43.3 ± 4.8 vs. 145.7 ± 3.2, respectively) and in MOLM13 LSCs (38.6 ± 1.7 vs. 150.6 ± 2.3, respectively; P < 0.05, Figure 2D).
Subsequently, to investigate whether survivin expression promoted drug resistance, we analyzed the drug-resistant AML cell line TS3 (isolated from CD34+ cells), which was established from a patient who relapsed despite chemotherapy and was used for further experiments. Transduction of primary AML cells (TS3) with a pcDNA-survivin construct clearly attenuated the cytotoxic effects of Ara-C, dexamethasone and L-Asp, alone or in combination, compared with controls (P < 0.05), indicating that over-expression of survivin renders primary AML cells more resistant to chemotherapeutic agents (Figure 2E, left panel). Conversely, to determine whether AML cells can be sensitized to chemotherapeutic drugs, we specifically silenced survivin with siRNA in TS3 cells. The non-targeting control yielded an approximately 2.5-fold higher IC50 (4.5 μM) than the survivin-siRNA group (IC50 = 1.8 μM; P < 0.05, Figure 2D, right panel), demonstrating that down-regulation of survivin can overcome drug resistance in primary AML cells. Taken together, these results suggested that survivin contributes to the anti-apoptotic and drug-resistant characteristics of LSCs, supporting further investigation of the potential mechanism of survivin over-expression in these phenotypes.
The survivin promoter region participates in the regulation of transcription levels
Survivin mRNA is extremely stable in numerous cancers, and the regulation of this oncogene in tumors can be linked to translational control [23,24]. Therefore, to determine whether survivin mRNA (and protein) was also stable in LSCs, we treated LSCs with the transcriptional blocker actinomycin D (Act D) or the translational inhibitor cycloheximide (CHX). Survivin mRNA and intracellular protein levels were largely dependent on constitutive transcription and translation, as both were dramatically reduced after 24 h of treatment in both LSC cell lines (Figure 3).
To further understand the transcriptional regulation of survivin in LSCs, we examined the activity of the promoter region ~2000 bp upstream of the transcription start site (TSS). Deletion constructs showed that promoter activity increased stepwise (V2 and V3) or decreased slightly (V4). Subsequent analysis of the luciferase activity of 3′-deletion constructs revealed similar results for V6 and V7. Interestingly, the V5 construct (−218 to +170; the shortest construct) exhibited 17.7- and 7.8-fold transcriptional activity in KG-1a LSCs and non-LSCs, respectively, compared with the empty pGL4.7 vector control (Figure 4A). The robust promoter activity observed for the V5 sequence suggested a role for this core region in regulating survivin transcription. We next analyzed this region for transcription factors that could regulate survivin expression and identified several putative transcription factor-binding sites (Figure 4B). To verify which motifs mediated the transcriptional activity of the survivin promoter, substitution mutations of the individual sites (TCF4, KLF5, Sp1 and c-Myc) were prepared (primers for mutagenesis are listed in Additional file 4: Table S2). Abrogation of individual sites repressed transcriptional activity by 30% and 50% at the MutE and MutH sites, respectively, and the combined MutE/MutH mutation decreased transcriptional activity by 85% compared with the wild-type V5 promoter sequence in KG-1a LSCs. None of the other mutated sites showed a significant decrease in transcriptional activity compared with the control (Figure 4C). Subsequently, we found that MutE and MutH correspond to the canonical E-box site (c-Myc site) and tandem GC boxes (Sp1 sites) in the V5 promoter region (Figure 4D), indicating their vital function in transcriptional regulation.
Figure 4 The core promoter region of survivin was identified (−218 to +170), and several transcription factor-binding sites were predicted using MatInspector and Jaspar (underlined regions). The arrow denotes the start site of survivin transcription, and the initiation codon ATG is shown in bold and italicized font. (C) Mutation of each transcription factor-binding site was performed to identify potential trans-activation motifs. Luciferase assays showed changes in promoter activity for the mutants versus the wild-type construct (*P < 0.05; **P < 0.01). (D) Mutant E and mutant H are the c-Myc and Sp1 binding sites, respectively; the sequences of the two motifs are shown. All graphs represent at least three independent experiments.
Transcription factors Sp1 and c-Myc activated survivin transcription in LSCs
To verify whether Sp1 and c-Myc could truly interact with the survivin promoter in vivo, we conducted chromatin immunoprecipitation (ChIP) assays by incubating KG-1a-LSC nuclear extracts with anti-Sp1 and anti-c-Myc antibodies or IgG (as a negative control). ChIP primers for Sp1 (-117/+89) and c-Myc (-218/-15) were designed to amplify promoter regions containing the putative binding sites; distal-region primers were used as a negative control. As shown in Figure 5A, ChIP with antibodies against Sp1 and c-Myc, cross-linked to their respective binding-site clusters, showed an enrichment of the survivin promoter compared with the IgG control in KG-1a-LSCs. ChIP-qPCR with the same primers gave similar results ( Figure 5B).
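For readers who want to reproduce this kind of quantification, the sketch below illustrates one standard way of computing ChIP-qPCR fold enrichment (the percent-input method). The paper does not state how enrichment was calculated, and all Ct values, the input fraction, and the function name here are hypothetical, for illustration only.

```python
import math

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    """Percent of input chromatin recovered by an immunoprecipitation.

    The input Ct is first adjusted for the fraction of chromatin saved as
    input (a 1% input corresponds to a log2(100) ~ 6.64-cycle correction).
    """
    ct_input_adj = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adj - ct_ip)

# Illustrative Ct values (not taken from the paper)
ct_sp1_ip, ct_igg_ip, ct_input = 27.0, 31.5, 24.0
fold_enrichment = percent_input(ct_sp1_ip, ct_input) / percent_input(ct_igg_ip, ct_input)
print(f"Sp1 fold enrichment over IgG: {fold_enrichment:.1f}x")
```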
To further investigate the roles of Sp1 and c-Myc in regulating survivin promoter activity, we ectopically over-expressed pcDNA-Sp1 and pcDNA-Myc, co-transfected with the promoter construct V5, in KG-1a-LSCs and MOLM13-LSCs; this resulted in a distinct elevation in luciferase activity compared to cells transfected with empty vector ( Figure 5C and Additional file 3: Figure S3C). Conversely, we used mithramycin A (MITA), an inhibitor of Sp1 and c-Myc that blocks transcriptional activation by preventing Sp1 from binding to GC-rich regions and by suppressing c-Myc expression [25], to probe these two motifs at various concentrations for 48 h. Promoter activity was significantly attenuated in MITA-treated cells of both cell lines ( Figure 5C, lower panel). Specific siRNA against Sp1/c-Myc also markedly decreased the luciferase activity of the V5 construct in both LSC cell lines; non-targeting siRNA was used as a control ( Figure 5D). Consistent with this, survivin protein expression was suppressed after treatment with MITA in a concentration-dependent manner ( Figure 5E). Moreover, ChIP assays demonstrated that MITA repressed Sp1 and c-Myc binding at the promoter ( Figure 5F). Taken together, these results indicated that Sp1 and c-Myc could directly bind to the predicted sites in the survivin core promoter region V5 and were crucial for the transcriptional activation of survivin expression.
c-Myc required functional Sp1 binding sites for transcriptional regulation of the survivin promoter
To determine whether these two factors contributed equally or displayed different degrees of trans-activation of the survivin promoter, point mutations were created in both the Sp1 and c-Myc sites, and the mutant constructs were transfected alone or in combination into KG-1a and MOLM13-LSCs ( Figure 6A). As shown in Figure 6B, we observed ~50% and ~40% decreases in promoter activity for the elements carrying a mutant GC-box region (Sp1) or E-box region (c-Myc), and the combined mutant resulted in a nearly 80% decrease in luciferase activity, as compared with the wild-type promoter V5 -218/+170 in both cell lines. Next, we further investigated the interplay between Sp1 and c-Myc at the survivin promoter. When c-Myc was over-expressed in the presence of a mutated Sp1 binding site, evaluation of luciferase activity revealed that the V5 promoter was not significantly activated as compared with the empty pcDNA vector control ( Figure 6C). These data indicated that mutation of the Sp1 binding site impaired transcriptional activation by c-Myc. In contrast, Sp1 over-expression still significantly increased activity ~2-fold in the presence of mutated c-Myc sites (P < 0.05), an activation similar to that obtained with the wild-type promoter relative to the empty pcDNA vector control ( Figure 6D). Thus, mutation of the c-Myc binding site did not affect transcriptional activation by Sp1. In general, these results revealed that Sp1 played the predominant role in transcriptional regulation of the survivin promoter, and that the downstream function of c-Myc mainly depended on the integrity of the Sp1 binding sites.
Inhibiting Sp1 and c-Myc-dependent survivin expression caused chemo-sensitization
To determine whether inhibition of Sp1 or c-Myc could sensitize LSCs to anticancer drug-induced cytotoxicity, we incubated KG-1a and MOLM13-LSCs with MITA in the presence of Ara-C (5 μM) for 3 days and measured the percentage of apoptotic cells by flow cytometry. As shown in Additional file 5: Figure S4A and S4B, MITA substantially increased Ara-C-induced apoptosis from 49.7% to 76.3% in KG-1a-LSCs and from 46.4% to 63.6% in MOLM13-LSCs. Intriguingly, MITA in the presence of Ara-C also substantially reduced the tumor-sphere-forming ability of LSCs compared with the MITA-only and Ara-C-only groups (P < 0.01, Additional file 5: Figure S4C). In other words, MITA combined with chemotherapeutic drugs could effectively reduce the self-renewal of LSCs, which underlies unlimited tumor proliferation and recurrence. Consistent with this, MITA distinctly repressed luciferase activity in the presence of Ara-C compared with DMSO in both cell lines (P < 0.05, Additional file 5: Figure S4D). Collectively, these results indicated that inhibiting Sp1- and c-Myc-dependent survivin expression could cause chemo-sensitization in LSCs, suggesting that survivin could be an effective target for further clinical applications in AML treatment.
High expression of Sp1 and c-Myc in CD34+ AML samples correlated with survivin expression
To understand the clinical significance of Sp1 and c-Myc, paired CD34+ and CD34- cells from 56 AML patients were analyzed, and relative mRNA levels were determined as shown in Figure 7A. Both Sp1 and c-Myc were substantially up-regulated in CD34+ AML samples compared with their differentiated counterparts. Approximately 80% (45/56) and 64% (36/56) of the CD34+ samples displayed more than 2.2-fold higher expression of Sp1 and c-Myc, respectively, than the CD34- cells. On average, Sp1 and c-Myc levels were elevated by 3.5- and 2.3-fold, respectively, in CD34+ AML samples ( Figure 7B).
On the basis of the above observations, we next investigated whether Sp1 and c-Myc were expressed consistently with survivin in CD34+ AML patient samples. Linear regression analysis demonstrated a positive correlation between the mRNA levels of survivin and both Sp1 (P < 0.05, r = 0.826) and c-Myc (P < 0.035, r = 0.731) in CD34+ AML samples ( Figure 7C and D). We then determined the protein levels of Sp1, c-Myc and survivin in the 56 AML patients by immunoblotting (quantification by Quantity One software; high expression was defined as more than 2-fold, and low expression as less than 0.5-fold, relative to the CD34- counterparts; data not shown). As shown in Figure 7E and F, 33 and 28 specimens showed high expression of both Sp1/c-Myc and survivin, and 7 and 9 specimens showed low expression of both Sp1/c-Myc and survivin. Evaluation of these data with Fisher's exact test revealed significant correlations between survivin protein levels and Sp1/c-Myc protein levels in CD34+ AML specimens (P = 0.024 and P = 0.039, respectively). Collectively, these results suggested that Sp1 and c-Myc over-expression in CD34+ AML samples regulated survivin transcription, and that both the mRNA and protein levels of Sp1 and c-Myc were positively correlated with survivin expression.
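The Fisher's exact test mentioned above operates on a 2 × 2 contingency table of protein-status calls. The paper reports only the concordant counts (33 double-high and 7 double-low specimens out of 56 for Sp1 versus survivin), so the split of the remaining discordant cases below is assumed purely for illustration; the snippet is a minimal sketch of the test, not a reproduction of the published P value.

```python
from scipy.stats import fisher_exact

# Rows: survivin high / low; columns: Sp1 high / low (discordant counts assumed)
table = [[33, 6],   # survivin high: 33 with Sp1 high, 6 with Sp1 low (assumed)
         [10, 7]]   # survivin low: 10 with Sp1 high (assumed), 7 with Sp1 low
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.3f}")
```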
The ERK/MSK MAPK pathway was constitutively active in primary CD34+ AML samples and was required for Sp1 expression

Because survivin transcription was mainly regulated by Sp1, we next examined pathways that regulate Sp1 in LSCs. Various signaling pathways, including the MAPKs, have been linked to the expression of Sp1 in multiple cancers. Therefore, inhibitors of the ERK (U0126), JNK (SP600125), and p38 (SB203580) pathways were used to characterize the potential role of MAPKs in the constitutive expression of Sp1. As shown in Figure 8A, only the ERK inhibitor (U0126) markedly reduced Sp1 protein expression, whereas the JNK and p38 inhibitors did not significantly affect Sp1 protein levels. To determine which molecules were involved in signaling downstream of the ERK MAPK pathway to modulate Sp1 gene expression, the phosphorylation of MSK, which can be activated by MAPK/ERK and p38, was analyzed in six paired CD34+ and CD34- AML samples. As expected, phosphorylated ERK, phosphorylated MSK, and Sp1 were constitutively elevated in all CD34+ fractions ( Figure 8B) and were up-regulated by 4.8-, 2.0-, and 1.7-fold on average, respectively ( Figure 8C). To investigate whether ERK activated phospho-MSK, we used tumor necrosis factor (TNF)-α, a specific activator of ERK, and U0126, a specific inhibitor of ERK. MSK phosphorylation was elevated after TNF-α stimulation in a concentration-dependent manner ( Figure 8D upper panel). In contrast, pretreatment with 20 μM U0126 markedly inhibited TNF-α-induced MSK phosphorylation ( Figure 8D lower panel), indicating that MSK was regulated by ERK.
Furthermore, because MSK mediates cytokine-induced Sp1 phosphorylation at Thr453 [26], we next examined whether TNF-α could induce the phosphorylation of endogenous Sp1 via the ERK/MSK pathway using a phospho-specific Sp1 Thr453 antibody. Substantial phosphorylation of Sp1 was observed upon stimulation with TNF-α ( Figure 8E upper panel). Subsequently, we constructed a vector encoding mutant MSK (NT-KD), in which the N-terminal kinase domain was inactivated by a point mutation, and a vector encoding wild-type (WT) MSK. Pretreatment with U0126 or NT-KD MSK markedly inhibited the phosphorylation of Sp1 as compared with the WT MSK vector; histone H3 was used as a control ( Figure 8E lower panel). Additionally, the survivin mRNA level was significantly up-regulated by TNF-α and by the WT MSK vector when compared with NT-KD (P < 0.01, Figure 8F upper panel). Conversely, transfection of the WT MSK vector could partly alleviate the decrease in mRNA level caused by U0126 in both cell lines 48 h after treatment (P < 0.05, Figure 8F lower panel). Taken together, our data support that constitutive activation of the ERK/MSK/Sp1 axis (as shown by phosphorylation of the pathway components) was required for over-expression of survivin in CD34+ AML patients, ultimately permitting the evolution to the "seed cell" in acute myeloid leukemia ( Figure 9).
Discussion
AML is typically a stem cell-related disease, and LSCs were among the first tumor-initiating cells to be identified. Strategies to target such cells, which have been shown to be resistant to conventional treatments, are expected to greatly improve clinical outcomes and overall survival rates [27]. Our data presented herein demonstrated that survivin was highly expressed specifically in LSC-enriched fractions and in approximately 85% of CD34+ AML patients compared to their differentiated counterparts, thereby contributing to drug resistance. However, the over-expression of survivin was not associated with clinical characteristics, such as age, sex, FAB subtype, or laboratory parameters (Table 1), indicating that survivin could be an independent factor for the diagnosis or prognosis of AML. Moreover, the expression of survivin was transcriptionally up-regulated by Sp1 and c-Myc via the ERK/MSK pathway, and constitutively active (phosphorylated) ERK/MSK in CD34+ AML patients was required for the phosphorylation (i.e., activation) of Sp1, thereby leading to over-expression of survivin. Thus, we showed for the first time that Sp1/c-Myc enhanced survivin expression in LSCs via an ERK/MSK/Sp1/c-Myc-dependent mechanism, maintaining and promoting chemo-resistance.
Numerous studies, including clinical trials in solid tumors, acute lymphoblastic leukemia (ALL), and large B-cell lymphoma, have shown that survivin is an important target in various cancers. Nakahara T et al. [28] proposed that targeting survivin with the novel small-molecule inhibitor YM155 could markedly suppress the growth of primary prostate tumor xenografts in vivo. Additionally, Kwee J. K. et al. [29] reported that survivin induces mitochondrial fragmentation and reduces mitochondrial respiration, thereby preventing the accumulation of reactive oxygen species, inhibiting apoptosis, and promoting drug resistance. Consistent with these reports, our results also indicated that inhibition of survivin could dramatically decrease LSC growth, induce apoptosis, suppress self-renewal, and sensitize cells to chemotherapy. However, the regulation of survivin expression in CSCs is poorly understood. Thus, a better understanding of the regulation of survivin in LSCs is required for the development of strategies to target these cells. Our results found that steady-state expression of survivin required constitutive transcription and translation in LSCs; therefore, we focused on elucidating the mechanism of transcriptional activation in LSCs.

Figure 9 legend: The proposed mechanism for survivin over-expression in LSCs. (A) TNF-α stimulation leads to trans-activation of TNFR, which activates the ERK/MSK/Sp1 pathway. This activation, together with c-Myc, concurrently regulates survivin expression in LSCs, facilitating LSC growth, allowing LSCs to escape from apoptosis (B), and finally permitting the evolution to the "seed cell" in acute myeloid leukemia (C). Straight arrows represent activation, dotted arrows represent no effect, and TBP and RNA Pol II represent the transcription initiation complex.
In this study, the cis-acting promoter elements and the respective binding factors required for constitutive activity of the survivin promoter in LSCs were identified and characterized. Deletion analysis of the survivin promoter in LSCs revealed a core region from nucleotide -218 to nucleotide +170, within which the Sp1 and c-Myc sites acted as potent cis-acting elements regulating survivin transcription during carcinogenesis. Moreover, Sp1 and c-Myc directly interacted with the core promoter region (-218 to +170) in vivo, and over-expression or knockdown of either Sp1 or c-Myc significantly enhanced or attenuated, respectively, the activity of the survivin core promoter (V5) and endogenous survivin expression, indicating that the high expression of survivin in LSCs was dependent on trans-activation by Sp1 and c-Myc through positioning of these transcription factors at specific sequence sites.
Sp1, a C2H2-type zinc finger transcription factor that binds GC-rich sequences, was one of the first eukaryotic transcription factors to be identified and has been shown to play an important role in the transcriptional regulation of various genes. Its importance is highlighted by the observation that Sp1-/- knockout mice exhibit embryonic lethality [30]. Sp1 is ubiquitously expressed in normal tissues and elevated in tumors, correlating with a wide range of cellular events and playing different, cell-type-dependent roles in transcription [31,32]. Here we found that constitutive activation of ERK and MSK was required for Sp1 expression, and that phosphorylation of Sp1 was regulated by TNF-α and ERK/MSK signaling, which ultimately activated survivin expression together with c-Myc in LSC cell lines and clinical specimens. ERK pathway activation has been shown to play a role in the stability of transcription factors [33][34][35], which may also be the basis for the effects of ERK inhibition on Sp1 expression seen in this study. Additionally, our results also showed that Sp1 expression was significantly up-regulated in 92% of paired CD34+ AML samples and correlated with the expression of survivin mRNA and protein, indicating that Sp1 was a specific and critical factor mediating survivin expression.
c-Myc is another pivotal trans-activating factor for the survivin promoter, as identified in our study. The Myc protein belongs to the bHLH-Zip family, the members of which bind to E-boxes and regulate transcription of target genes as obligate heterodimers with a partner protein, Max [36]. Myc functions as a proto-oncogene and is frequently up-regulated in a wide range of tumor types, driving proliferation and tumor progression. Myc is also known to mediate the reprogramming of somatic cells [37] and participates in regulation of the cell cycle and stabilization of cell division [38]. In the context of gene regulation, Myc acts as a weak transcription factor, facilitating the formation of transcription complexes [39]. In the present study, we demonstrated that Myc over-expression promoted activation of the survivin promoter in LSCs; however, this effect was only marginal when the Sp1 binding sites were mutated, suggesting that Myc requires functional Sp1 sites to regulate survivin expression. Similarly, c-Myc expression was significantly up-regulated in 91% of paired CD34+ AML patients and correlated with the expression of survivin mRNA and protein. Taken together, our data suggested that while Sp1 functioned independently to regulate the survivin promoter, c-Myc required Sp1 function in order to participate in the regulation of survivin. Further studies are required to determine whether this cooperation between Sp1 and Myc also functions to regulate the expression of other oncogenes in cancer.
The MAPK signaling pathways (ERK, JNK, p38) are involved in regulating cell growth, differentiation, environmental adaptation to stress, and inflammatory reactions, and participate in cancer progression [40]. In this study, we showed that only the ERK MAPK pathway played a role in the regulation of Sp1 levels in LSCs; inhibition of JNK or p38 did not affect Sp1 expression. Importantly, ERK and its downstream signaling mediator MSK were constitutively activated (i.e., phosphorylated) in six paired CD34+ AML samples, suggesting that activation of ERK and MSK (as observed by the phosphorylation levels of these targets) was intrinsic and spontaneous in such cells and was required for the expression of survivin in LSCs ( Figure 10). MSK is known to be regulated by the MAPK/ERK and SAPK2a/p38 pathways and is currently the best candidate for the regulation of cytokine-induced phosphorylation of transcription factors [41][42][43]. Interestingly, we found that MSK activated Sp1 by phosphorylation at Thr453. Collectively, our results showed that Sp1 activation was involved in the signaling pathway downstream of ERK and MSK, mediating TNF-α-induced expression of survivin.

Figure 10 legend: Phosphorylated ERK was specifically activated in LSCs. ERK phosphorylation was intrinsic and spontaneously activated in LSCs, leading to resistance to clinical chemotherapeutic agents. In contrast, the bulk leukemia cells were more sensitive to chemotherapeutic drugs and eventually underwent apoptosis.
Given our results showing correlations between Sp1 and c-Myc levels and survivin expression in CD34+ AML patients, inhibitors of Sp1 and c-Myc may provide novel therapeutic strategies for the treatment of leukemia. MITA, an anticancer antibiotic that selectively binds to GC-rich DNA domains in the presence of Mg2+ or Zn2+, inhibits Sp1 transcriptional activity by preventing its binding to the DNA major groove. We found that MITA treatment effectively suppressed survivin promoter activity and reduced survivin mRNA and protein levels in LSCs. Moreover, MITA could sensitize LSCs to chemotherapy (such as Ara-C) and decreased the self-renewal ability of LSCs, suggesting that MITA may have potential applications in the treatment of leukemia (Additional file 5: Figure S4). However, there are also some limitations in our study. First, the sample size should be expanded to further elucidate the relationship between survivin, Sp1 and c-Myc. Second, we only collected blood samples from AML patients for these experiments; bone marrow samples should also be included in subsequent investigations. Additionally, we observed significantly decreased survivin expression and reduced Sp1-DNA binding activity after treatment with TNF receptor signaling inhibitors in our preliminary studies (data not shown). Therefore, the TNFR/ERK/MSK/Sp1/c-Myc pathway is likely involved in survivin over-expression in LSCs. Future studies will focus on the upstream signaling cascades that may alternatively contribute to survivin over-expression.
Conclusion
Our findings provide new insights into the transcriptional regulation of survivin and describe the chemo-resistant properties of LSCs, which may facilitate the development of novel therapeutic strategies or the identification of selective biomarkers for diagnosis. Further investigations are warranted to definitively confirm survivin as a marker for AML.
Materials and Methods
Cell isolation, culture, and transfection

The human leukemia cell lines K562, KG-1a, HL-60, MOLM13, ML-1, and U937 were cultured in RPMI-1640 medium supplemented with 10% fetal bovine serum and 1% penicillin/streptomycin. KG-1a and MOLM13 cells were separated and enriched for CD34+CD38- cells using magnetic microbeads (Miltenyi Biotec, Auburn, CA, USA) and labeled with CD34-PE, CD38-FITC, or isotype control antibodies. Cells were analyzed and sorted on a magnetic sorter unit, and the purity and viability of isolated cells were routinely greater than 95%. The siRNA sequences are listed in Additional file 3: Figure S3B and were custom synthesized by Shanghai Sangon (Shanghai, China). After transfection, the knockdown efficiency was examined by western blotting. Transfections were performed with Lipofectamine 2000 according to the manufacturer's protocol (Invitrogen, Carlsbad, CA, USA). All signaling pathway inhibitors and activators were purchased from Selleck Chemicals.
Patient collection and sample preparation
Peripheral blood (PB) samples were collected from 56 newly diagnosed AML patients after obtaining informed consent, according to procedures approved by the Human Experimentation Committee at the Overseas Chinese Hospital of Jinan University and Sun Yat-Sen University Cancer Hospital (Guangzhou, China). The characteristics of these patients are summarized in Table 1. Samples were enriched for leukemia cells by Ficoll density gradient centrifugation to obtain the mononuclear fraction, which was then cryopreserved in freezing medium consisting of CryoStor CS10 (BioLife Solutions). Subsequently, CD34+/CD34- cells were separated from the leukemia-enriched fraction by magnetic-activated cell sorting (MACS; Miltenyi Biotec) after incubation with anti-CD34 antibodies and IgG controls (BD Biosciences). The CD34+/CD34- cells were cultured in DMEM/F12 medium supplemented with B27 (1:50; Life Technologies, Carlsbad, CA, USA), 10 ng/mL basic fibroblast growth factor (bFGF) and 20 ng/mL epidermal growth factor (EGF). All cells were incubated at 37°C in a humidified chamber with 5% CO₂.
Reverse phase protein array

KG-1a cell subsets (CD34+CD38+, CD34+CD38-, CD34-CD38+, and CD34-CD38-) were separated and normalized to a concentration of 1 × 10⁴ cells/μL, and whole-cell lysates were prepared. The protein array (RayBio Human Apoptosis Antibody Array G Series), including antibody incubation and detection steps, was performed according to the manufacturer's protocol. The slides were incubated for 2 h with validated primary antibodies at concentrations ranging from 1:250 to 1:1000. Subsequently, secondary antibodies were applied for another 2 h to amplify the signal, followed by precipitation of the stable dye. Positive and negative controls were positioned across the slide at the edge of each sample, as described below. Finally, the protein expression intensity of each spot was measured using MicroVigene software (VigeneTech, North Billerica, MA, USA). All signal values were used for data processing and calculation after standardization and topographic normalization, and analyses were performed using the R Statistical Programming Environment, version 2.4.2.
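The array analysis relies on per-spot standardization before comparing subsets. The paper does not give the exact normalization procedure, so the snippet below is only a generic sketch (background subtraction followed by median scaling); all antibody names, intensities, and the background value are hypothetical.

```python
# Hypothetical antibody-array spot intensities for two sorted subsets
raw = {
    "CD34+CD38-": {"survivin": 5200, "bcl-2": 3100, "bax": 900},
    "CD34-CD38+": {"survivin": 1800, "bcl-2": 2900, "bax": 1100},
}
BACKGROUND = 250  # assumed local background signal

def normalize(spots):
    """Background-subtract, then scale each array by its median intensity."""
    corrected = {name: max(value - BACKGROUND, 0) for name, value in spots.items()}
    median = sorted(corrected.values())[len(corrected) // 2]
    return {name: value / median for name, value in corrected.items()}

for subset, spots in raw.items():
    print(subset, normalize(spots))
```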
Quantitative RT-PCR (qRT-PCR) analysis

Total RNA was extracted using TRIzol reagent (Invitrogen) and reverse transcribed into cDNA. mRNA levels were evaluated by RT-qPCR with SsoFast EvaGreen Supermix (Bio-Rad, Hercules, CA, USA) and analyzed on a C1000 Thermal Cycler (CFX96 Real-Time System, Bio-Rad). Relative expression was normalized to GAPDH as an internal control. The following PCR conditions were used on the LightCycler: 95°C for 5 s and 58°C for 5 s, followed by 40 cycles of 95°C for 15 s and 60°C for 1 min, in a 10-μL reaction volume. The primer sequences for RT-qPCR are listed in Additional file 4: Table S3.
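Relative quantification normalized to GAPDH is typically reported as a 2^-ΔΔCt fold change; the paper does not spell out the formula, so the method and all Ct values below are assumptions used only to illustrate the calculation.

```python
def relative_expression(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    """2^-ddCt fold change of a target gene, normalized to GAPDH."""
    d_ct_sample = ct_target - ct_gapdh             # normalize sample to GAPDH
    d_ct_reference = ct_target_ref - ct_gapdh_ref  # normalize reference sample
    return 2.0 ** -(d_ct_sample - d_ct_reference)

# e.g. survivin in CD34+ cells relative to the paired CD34- fraction (made-up Ct values)
fold_change = relative_expression(22.1, 18.0, 24.6, 18.2)
print(f"survivin fold change (CD34+ vs CD34-): {fold_change:.2f}")
```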
Western blot analysis
Cells were harvested, rinsed twice in ice-cold PBS, and kept on ice for 30 min in cell lysis buffer containing 1 mM PMSF with constant agitation. Insoluble cell debris was discarded following centrifugation for 10 min at 12,000 rpm at 4°C. The protein samples were separated by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) on 10% gels and subsequently transferred to polyvinylidene difluoride (PVDF) membranes (Millipore). Immunoblotting was performed for survivin, Sp1, c-Myc, c-Jun N-terminal kinase (JNK), ERK, MSK, and p38, with GAPDH expression as an internal control.
CCK-8 assay
Approximately 5.0 × 10³ cells were seeded into each well of a 96-well plate. The cells were then exposed to various concentrations of chemical agents or siRNA for 24 or 48 h. After incubation, 10 μL CCK-8 was added to each well and the cells were further incubated for 4 h. Cell viability was evaluated based on the absorbance at 490 nm in a microplate absorbance reader (Bio-Rad).
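Viability in this assay is usually expressed as background-corrected absorbance of treated wells relative to untreated controls; the formula and the absorbance values below are illustrative assumptions, since the paper only names the readout wavelength and instrument.

```python
# Hypothetical CCK-8 readings (A490)
treated = [0.82, 0.79, 0.85]   # drug-treated wells
control = [1.41, 1.38, 1.45]   # untreated wells
BLANK = 0.10                   # medium plus CCK-8, no cells

def mean(values):
    return sum(values) / len(values)

viability = 100.0 * (mean(treated) - BLANK) / (mean(control) - BLANK)
print(f"cell viability: {viability:.1f}%")
```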
Flow cytometry and Annexin V-FITC/PI staining

CD34 and CD38 expression were assayed by flow cytometry (FACSCalibur, BD Biosciences) using anti-CD34-PE and anti-CD38-FITC antibodies (BD Biosciences). For apoptosis assays, cells were harvested 48 h after drug treatment or transfection and centrifuged at 1500 × g for 5 min. After addition of 10 μL Binding reagent and 2.5 μL Annexin V-FITC (KeyGEN BioTECH), samples were suspended in 0.5 mL cold 1× Binding Buffer and stained with 10 μL PI. The percentages of apoptotic cells were calculated with FlowJo software.
Plasmid constructs, transient transfection, and luciferase assay
The whole promoter region of the human survivin gene was cloned into the pGL4 plasmid (Promega, Madison, WI, USA). To generate the various 5′- and 3′-deletion constructs of the survivin promoter, sequences were amplified from genomic DNA isolated from LSCs (primers for the deletion constructs are listed in Additional file 4: Table S1). A site-directed mutagenesis kit (Stratagene, Santa Clara, CA, USA) was used to generate transcription factor binding site mutant constructs within region -218 to +170 of the survivin promoter. LSCs were co-transfected with 20 μg of the reporter plasmid and 1 μg of the Renilla luciferase control vector (Promega) as an internal control for normalization of transfection efficiency. Cells were harvested 24 h after transfection, and luciferase activities were measured using the Dual-Luciferase reporter assay system (Promega) according to the manufacturer's instructions. The full-length coding sequences of Sp1, c-Myc, WT MSK, and NT-KD MSK were cloned into the pcDNA3.1 vector. Data are presented as the mean ratio of triplicate experiments.
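Promoter activities such as the 17.7-fold value quoted for the V5 construct come from firefly/Renilla ratios expressed relative to the empty-vector control. The sketch below shows that normalization; the luminescence numbers are invented for illustration and do not come from the paper.

```python
def relative_luciferase(firefly, renilla):
    """Firefly signal normalized to the Renilla transfection control."""
    return firefly / renilla

v5_ratio = relative_luciferase(52_000, 2_100)     # V5 (-218/+170) construct (hypothetical)
vector_ratio = relative_luciferase(3_000, 2_200)  # empty pGL4 control (hypothetical)
print(f"V5 promoter activity: {v5_ratio / vector_ratio:.1f}-fold over empty vector")
```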
Sphere formation assay
Single LSCs were isolated and cultured for 2 weeks in methyl-cellulose medium (Stem Cell Technologies) supplemented with 20 ng/mL EGF (Sigma-Aldrich, St. Louis, MO, USA), 10 ng/mL basic fibroblast growth factor (bFGF; Invitrogen), 4 μg/mL insulin (Sigma-Aldrich), and B27 (1:50, Invitrogen). After treatment with chemical drugs, the tumor-sphere numbers were counted, and further statistical analyses were performed. All cell culture was carried out at 37°C in a humidified incubator with 5% CO₂.
Chromatin immunoprecipitation (ChIP) assay

Confluent LSCs were cross-linked with 1% formaldehyde for 15 min at room temperature. The cross-linking reaction was terminated by the addition of glycine at a final concentration of 0.125 M. Subsequently, the lysed cells were isolated and sonicated on ice to shear the DNA into fragments of 200 bp to 1 kb. Chromatin complexes were collected using the EZ-ChIP Chromatin Immunoprecipitation kit (Millipore) according to the manufacturer's instructions. The chromatin was immunoprecipitated for 16 h at 4°C using anti-Sp1 and anti-c-Myc antibodies or normal IgG (Millipore), as indicated. Input DNA was isolated from the sonicated lysates before immunoprecipitation as a positive control. The immune complexes were collected with Protein A/G Plus agarose (Pierce, Rockford, IL, USA), and the cross-links were then reversed by heating the samples at 65°C for 4 h. Purified DNA was then treated with proteinase K at 42°C for 1 h, and ChIP-PCR was performed as described.
Marketing potential of the Sino-Russian bilateral agricultural export market
China and Russia are important agricultural countries in the world. Expanding exports and increasing sales of agricultural products play an important role in the economic development of both countries. To understand the current situation of agricultural exports of the two countries and formulate strategies to expand the marketing of agricultural products, this paper uses UN Comtrade Database data for 2009-2018 on Chinese and Russian bilateral agricultural export sales and other trade data to calculate the expansion margin ($EM_{jm}$), the price margin ($P_{jm}$), and the quantity margin ($Q_{jm}$) of agricultural exports, in order to analyze the types, prices, and quantities of exported agricultural products. The results show that China exports to Russia mainly labor-intensive types of agricultural products, such as processed agricultural and horticultural products, accounting for 87.46% of total agricultural exports on average; the increase in exports is mainly due to the continuous increase in the prices of exported agricultural products. Russia exports to China mainly land-intensive types of agricultural products, such as animal products, grains, oilseeds and fat products, which accounted for an average of 79.07% of total agricultural exports; the increase in exports was mainly due to the continuous increase in the types and quantities of exported agricultural products. To develop the export potential of agricultural products and expand sales, China should expand the types and quantities of agricultural products exported, and Russia should increase the added value of agricultural products and raise their export prices.
INTRODUCTION
China and Russia are important agricultural countries in the world. Both countries are each other's important agricultural product trading partners. Total agricultural product trade has been increasing year by year, reaching $5.228 billion in 2018, an increase of 1.11 times over 2009. Expanding the export of agricultural products and increasing sales income from agricultural products is the main goal of the two countries in developing agricultural product trade. However, China's agricultural exports to Russia are smaller than its imports, the trade deficit is obvious, and it shows a growing trend; in 2018, the deficit reached $1.192 billion. Moreover, China's agricultural exports to Russia accounted for only 1.87% of China's total agricultural exports, and China's agricultural imports from Russia accounted for 3.23% of China's total imports, both of which are small shares of China's total trade with Russia. The export sales of agricultural products are not in line with the rapid development of economic cooperation between China and Russia, and the export sales are not commensurate with the huge agricultural product sales markets of the two countries.
LITERATURE REVIEW
In recent years, bilateral agricultural product export trade between China and Russia has grown rapidly, and the agricultural products markets of the two countries are closely connected. The export status of agricultural products of the two countries has therefore become a focus of scholarly research. However, existing studies concentrate mainly on the current situation and prospects of the import and export of agricultural products between the two countries, the structural characteristics of agricultural product imports and exports, and the impact of political factors on the agricultural products market; little attention has been paid to the specific factors affecting bilateral agricultural exports and to the market sales potential. Yin (2021) studied the status and prospects of the import and export of agricultural products between China and Russia, arguing that with the advancement of the construction of the "China-Russia-Mongolia Economic Corridor", China and Russia will enhance trade facilitation, for example by strengthening transportation and trade port construction, and some results have already been achieved. However, the agricultural product markets of the two countries are not large, the types of products traded are relatively concentrated, trade barriers are relatively prominent, and there is a lack of in-depth cooperation in the agricultural industry. In the future, the two countries should strengthen governmental and private agricultural cooperation, build international agricultural industrial parks to promote industry-wide agricultural cooperation, increase policy and financial support for agricultural production and trading enterprises, and enhance the facilitation of agricultural product transactions, thereby promoting the continuous and rapid development of the Chinese and Russian agricultural product sales markets. Li and Zhang (2020) researched the structure, total volume, and development trend of bilateral agricultural imports and exports between China and Russia. They argued that the eastern regions of Russia border China's northwest and northeast regions and offer great potential for agricultural land use, so the two sides have inherent advantages in developing agricultural product markets. Expanding agricultural product markets and agricultural cooperation is one of the key areas of trade cooperation between China and Russia in the new era. For this reason, bilateral cooperation in expanding and developing the agricultural product market should be based on cooperation in the modern logistics industry, building a public logistics information platform and using modern information technology to construct a modern logistics network, while continuing to upgrade technology during the development process and constantly strengthening bilateral sales of and cooperation in agricultural products. Zhao (2019) analyzed the agricultural products trade between China and Russia at the industry level, stating that the agricultural products market is an important part of the trade exchanges between the two countries. The agricultural products of the two countries are complementary in terms of production capacity, trade types, and market demand, and the two countries have great potential in tapping the agricultural product market. However, the import and export structure of agricultural products between the two countries is mainly vertical industrial trade at a low level.
In other words, there are relatively few types of agricultural products in the bilateral trade exchanges, and the development of agricultural trade is not sufficient; China and Russia should strengthen intra-industry transactions of agricultural products to increase the depth and breadth of agricultural production and sales cooperation. Wang and Zhang (2020) analyzed the structural characteristics of the agricultural product sales markets, using data on 2003-2017 bilateral agricultural product export sales between China and Russia to calculate the dual margins of agricultural exports of the two countries. The changing trends of the expansion and intensive margins were analyzed, and the factors that affect the export market of agricultural products were studied. The results showed that the expansion margin has a greater impact on the sales of agricultural products in the two countries overall: for China's agricultural sales to Russia, the contribution of the intensive margin is greater than that of the expansion margin, and economic scale and production efficiency promote growth in the intensive margin, whereas for Russia's agricultural sales to China, the contribution of the expansion margin is greater than that of the intensive margin, and there is a trend of continuous expansion. Therefore, China should expand the types of agricultural products sold to Russia and encourage agricultural production enterprises to innovate, create, and promote agricultural sales. Jiang and Huang (2019) used 2001-2016 data on world agricultural products and agricultural product trade between China and Russia to analyze the changes in agricultural exports and product structure of the two countries, calculating the revealed comparative advantage (RCA) index and the trade complementarity index (TCI) to analyze the agricultural product trade between the two countries. The results showed that the Chinese and Russian agricultural product markets are complementary in terms of resource endowment and market demand. Competitiveness and complementarity both exist in specific agricultural trade exchanges, but complementarity is greater than competition, which is conducive to the cooperation and development of the agricultural product sales markets of the two countries. China and Russia are recommended to strengthen the construction of transportation roads and storage infrastructure, formulate policies to promote trade facilitation, and expand cooperation in agricultural product sales. Sun and Tong (2019) compiled statistics on agricultural product trade between China and Russia from 2010 to 2017, analyzing the scale and development trend of the agricultural product market of the two countries as well as the proportion and structure of agricultural product import and export trade, and used a VAR model to study the impact of Russian green trade barriers on the volume of China's agricultural exports to Russia. The results showed that green barriers have a depressing effect on China's exports of agricultural products to Russia, especially in the short term. China should establish a sound agricultural product production management system and legal system, strengthen coordination and communication between the two parties, and sign a mutual recognition agreement on agricultural product exports to reduce the impact of green barriers on agricultural product exports.
Concerning the influence of political factors on the export and sales of agricultural products of the two countries, Zhang (2020) argues that the Ukrainian crisis led the United States and the European Union to impose economic sanctions on Russia, putting the development of the Russian economy, especially its technology-intensive and capital-intensive industries, into a difficult situation. However, Russia's agriculture has developed rapidly, gradually shedding its dependence on imported food and making agricultural products an important export second only to energy and military equipment. Now that trade frictions between China and the United States have occurred, China needs to diversify its agricultural imports to ensure food supply security, and strengthening cooperation with Russia in the agricultural field has become an important choice. For a long time, China and Russia have had a good foundation for cooperation in agricultural product trade and agricultural industry investment; as a next step, the two countries need to strengthen cooperation to achieve complementary advantages and common development. Liu et al. (2018) stated that after the Western countries imposed economic sanctions on Russia in 2014, the Russian economy declined significantly due to the fall in international oil prices and the depreciation of the ruble. Russia then restricted the import of agricultural products from Western countries and promoted the export of domestic agricultural products, stimulating domestic agricultural production, but this did not immediately provide Russia with sufficient food security: the prices of domestic agricultural products in Russia rose significantly, and food consumption even accounted for 40% of total household expenditures. Agricultural cooperation between Russia and China has a long history, but the scale of agricultural product sales between the two sides has always grown slowly; until 2017, agricultural product trade between the two sides accounted for less than 5% of total trade, mainly because of high tariffs and other trade barriers between the two countries, the lack of a stable policy environment in Russia for foreign enterprises investing in agricultural production, the limited range of agricultural products traded, and the low level of agricultural science and technology cooperation. Therefore, the two countries should strengthen cooperation in agricultural production and sales and increase the proportion of such sales in total trade. These studies examined the Chinese and Russian agricultural product markets from different angles and obtained valuable results.
Russian scholars have also conducted research on the advantages of Russian agricultural production and export, from aspects such as the export structure of advantageous agricultural products and the endowment of resources such as land. Uzun et al. (2019) believe that Russia's agriculture has grown steadily since 1999 due to the government's import substitution policy; the food trade balance has steadily improved, and the share of imported food in the retail market is declining. Russia has become one of the world's major exporters of wheat and vegetable oil, and the growth of agricultural production has turned export-oriented. To further develop agriculture, the Russian government should pay attention to the reclamation of unused land and adopt new technologies to increase the relatively low yields of crops and livestock. The land allocation and agricultural support system, which is now strongly biased towards large farms and agricultural production bases, should be changed to give more support to the development of small farms, encourage them to participate in the food value chain, and broaden the scope of agricultural development. Deppermann et al. (2018) conducted a comprehensive assessment of the crop production potential of Russia and Ukraine, arguing that they are countries with large untapped agricultural potential in terms of abandoned agricultural land and agricultural production. Through estimation and analysis of the total amount of available abandoned land and the potential production output, compared with the current production situation, it is estimated that by 2030 cereal production may increase by 64% and oilseed production by 84%, accounting for about 4% and 3.6% of the global output of these crops, respectively. In addition, intensive production is more efficient: the crop production potential of Russia and Ukraine could save about 2,100 hectares of farmland on a global scale and reduce the global average crop price by more than 3%. Benesova et al. (2017) analyzed Russia's exports on a global scale and the degree of trade liberalization, concluding that Russian food exports have advantages in Asia and the CIS countries. Moreover, with economic transformation and trade liberalization, the classification structure of Russian food exports is constantly changing: fewer and fewer product types are exported, and the common ones are concentrated in a few categories. From the perspective of comparative advantage, grain, fish, and vegetable oil are important parts of Russia's exports.
However, the specific factors behind the growth of Sino-Russian agricultural exports have not yet been studied. The questions are: which of the three factors affecting sales (the types, quantities, and prices of agricultural exports) plays the leading role and contributes the most? Which factors affect bilateral agricultural product trade in different periods? What are the differences across types of agricultural products? Therefore, this study aims to analyze the bilateral agricultural product export structure between China and Russia and to study the growth pattern of agricultural product exports from the perspective of changes in types, prices, and quantities and their contributions to the exports of the two countries.
METHODOLOGY
This study uses the ternary marginal analysis framework proposed by Hummels and Klenow (2005) to study the export structure of agricultural products between the two countries from the perspective of the expansion margin and the intensive margin (which can be decomposed into the price margin and the quantity margin), and to analyze the contribution of the ternary margins to the agricultural exports of the two countries.
The expansion (extensive) margin and the intensive margin are calculated following Hummels and Klenow (2005). Here, $P_{jm}$ denotes the price margin of country $j$'s agricultural exports to country $m$, and $Q_{jm}$ denotes the corresponding quantity margin. Writing $g_m$ for the growth rate of country $j$'s agricultural export market share in country $m$, the growth rate decomposes as $g_m = g_{em} + g_p + g_q$, where $g_{em}$, $g_p$, and $g_q$ are the growth rates of the expansion margin, the price margin, and the quantity margin, respectively. The contribution rate of each margin to agricultural export growth is then $(g_{em}/g_m) \times 100\%$, $(g_p/g_m) \times 100\%$, and $(g_q/g_m) \times 100\%$, respectively.
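As a rough illustration of how these margins can be computed from trade data, the sketch below uses toy numbers and the standard Hummels-Klenow definitions: the expansion margin is the share of world exports to the importer that falls in product lines the exporter actually ships, the intensive margin is the exporter's value within those lines relative to world exports in the same lines, and the intensive margin splits into price and quantity components. The variable names and data are assumptions, and equal weights are used in the price index purely for brevity (Hummels and Klenow use logarithmic-mean expenditure-share weights).

```python
import math

# Toy data: product line -> export value to importer m; unit values for shared lines
x_world = {"cereals": 120.0, "meat": 80.0, "vegetables": 60.0, "textiles": 40.0}
x_j     = {"cereals": 30.0, "vegetables": 15.0}   # exporter j's shipments to m
p_world = {"cereals": 1.00, "vegetables": 1.20}   # world unit values
p_j     = {"cereals": 1.10, "vegetables": 1.05}   # exporter j's unit values

covered = sum(v for line, v in x_world.items() if line in x_j)
em = covered / sum(x_world.values())   # expansion (extensive) margin
im = sum(x_j.values()) / covered       # intensive margin
# Price margin: geometric mean of relative unit values (equal weights for brevity)
pm = math.exp(sum(math.log(p_j[i] / p_world[i]) for i in p_j) / len(p_j))
qm = im / pm                           # quantity margin, so that im = pm * qm
print(f"EM = {em:.3f}, IM = {im:.3f}, P = {pm:.3f}, Q = {qm:.3f}")
```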
General overview of agricultural products market
The total trade volume of bilateral agricultural products markets between China and Russia in- Corresponding to the absolute advantage of labor-intensive agricultural products in China's agricultural exports, land-intensive agricultural products occupy only a small share. Land-intensive agricultural products mainly include cereals, oil and fat products, and textile raw products. These types of agricultural products accounted for 2.25% of China's agricultural exports to Russia, and only 0.41% in 2018, showing a clear downward trend. The demand for agricultural products has shifted from a large demand for primary agricultural products, such as animal products, to a rapidly increasing demand for processed agricultural products, cereals, oilseeds and fat products, and other processed agricultural products. On the other hand, the agricultural products exported by Russia show a trend of diversification, and their competitiveness continues to increase. Horticultural products and textile raw materials always account for only a small share of Russian agricultural exports to China: in 2009, these two types of agricultural products accounted for 2.63% of Russian agricultural exports to China, and in 2018 they accounted for 1.21%, showing a declining trend. However, the expansion margin of China's agricultural exports to Russia has always been greater than that of Russia's agricultural exports to China. This shows that the range of agricultural product types exported from China to Russia is greater than that exported from Russia to China, and the variety of agricultural products exported from China is more diverse. In general, the maximum expansion margin of Chinese-Russian bilateral agricultural exports remains below 0.6, which is not high. With the rapid development of economic and trade exchanges between the two countries in the future, the types of agricultural exports from China and Russia will continue to expand significantly.
Agricultural product export structure
The statistics in Table 1 show that, from 2009, the increasing demand for agricultural products has made China an increasingly important market for Russian agricultural exports. However, an overall comparison of the intensive margins of agricultural exports between China and Russia shows that the intensive margin of agricultural exports from China to Russia is greater than that of agricultural exports from Russia to China. This indicates that the export volume of agricultural products from China to Russia in 2009-2018 was greater than the volume of agricultural products exported from Russia to China. In recent years, the trade volume of agricultural products between the two countries has shown an ever-increasing trend.
As an indicator of trade volume, the intensive margin can also be decomposed into the quantity margin and the price margin. Table 1 shows that the overall quantity margins of agricultural exports from China and Russia demonstrate a fluctuating upward trend. The difference is that the changing trend of the quantity margin does not always coincide with that of the intensive margin: the quantity margin first declined and then increased, a pattern that does not fully track the intensive margin over the statistical period. This shows that Russian agricultural exports to China are not closely related to the export price of agricultural products; the increase in the export value of Russian agricultural products to China is mainly due to the increase in the export volume of agricultural products.
Analysis of the type structure of exported agricultural products
Analyzing the bilateral agricultural product export structure between China and Russia, Table 2 shows that the reasons for the growth of Chinese and Russian agricultural exports can be the same or different for different types of agricultural products. The increase in Sino-Russian bilateral exports of cereals, oilseeds, and fat products is mainly due to the contribution of the expansion margin for such agricultural products in both countries; that is, exports of cereals, oilseeds, and fat products grew mainly through an increase in the types of products exported. The main driving force for the growth of China's exports of horticultural products to Russia is the price margin, that is, increases in the prices of such products promote the increase in trade volume. The main driving force for the growth of Russia's horticultural exports to China is the expansion margin; Russia thus relies on increasing the types of agricultural products exported to expand the sales of such products.
The driving forces for the growth of China's exports of animal products to Russia are the expansion margin and the quantity margin; exports increase by expanding both the types and the quantities of agricultural products exported.
The main driving force of the growth of Russia's exports of animal products to China is the expansion margin alone; export value increases through an increase in the types of agricultural products exported. The main reason for the increase in bilateral exports of processed agricultural products between China and Russia is growth in both the expansion margin and the quantity margin in the two countries. The changes in China's exports of textile raw materials to Russia are mainly due to growth in the quantity margin: the increase in the number of exported agricultural products increases the export sales of such products. The main driving force of the changes in Russia's exports of textile raw materials to China is the expansion margin, relying mainly on expanding the types of agricultural products exported to increase the export value of such products. In summary, the increase in China's exports of cereals, oilseeds and fat products, animal products, and processed agricultural products to Russia is mainly due to the increase in the expansion margin; the increase in sales of China's exports of horticultural products to Russia is mainly due to the increase in the price margin; and the growth of exports of textile raw materials is mainly due to the increase in the quantity margin. At the same time, the increase in the trade volume of agricultural exports from Russia to China is mainly due to growth in the expansion margin.
The analysis of the quality of agricultural products exported by China and Russia shows that quality differs between the two countries. The prices of the cereals, oilseeds, and horticultural products that China exports to Russia have increased, so the export quality of these agricultural products can be considered to have improved. In contrast, animal products, processed agricultural products, and textile raw materials have shown declining prices and rising quantities, indicating that in the export trade of these three kinds of agricultural products China is expanding export quantity by lowering export prices, and product quality has not been fundamentally improved. Among the agricultural products exported by Russia to China, the export prices and quantities of cereals, oilseeds and fat products, and processed agricultural products have both increased, indicating that the quality of these types of agricultural products has improved. Exports of animal products and textile raw materials have seen price declines and quantity increases, indicating that the increase in the trade volume of such agricultural products is achieved by lowering product prices. The decline in the prices of horticultural products, with export volumes remaining unchanged, indicates that the export competitiveness of such agricultural products in the Chinese market has declined; merely taking marketing measures to reduce prices did not increase the export volume of these products.
CONCLUSION
The marketing of exported agricultural products is affected by many factors, so different indicators are chosen to reflect the different purposes of actual studies. Based on the factors that affect the marketing of agricultural products, this study creatively selects three indicators to reflect the current situation of bilateral agricultural export marketing between China and Russia: the expansion margin ($EM_{jm}$), the price margin ($P_{jm}$), and the quantity margin ($Q_{jm}$), together with their specific contribution rates. The analysis results show that different agricultural products in bilateral agricultural export sales between China and Russia, including technology-intensive agricultural products and land-intensive agricultural products, are affected to different degrees at different times by the types, prices, and quantities of agricultural products exported. This paper also puts forward suggestions on how to increase the export volume of agricultural products of the two countries and fully develop the potential of the bilateral agricultural export market. In the context of the comprehensive strengthening of economic cooperation between China and Russia, this study provides an important basis for the Chinese government to formulate an effective agricultural product export policy.
Nature Has No Elementary Particles and Makes No Measurements or Predictions: Quantum Measurement and Quantum Theory, from Bohr to Bell and from Bell to Bohr
This article reconsiders the concept of physical reality in quantum theory and the concept of quantum measurement, following Bohr, whose analysis of quantum measurement led him to his concept of a (quantum) “phenomenon,” referring to “the observations obtained under the specified circumstances” in the interaction between quantum objects and measuring instruments. This situation makes the terms “observation” and “measurement,” as conventionally understood, inapplicable. These terms are remnants of classical physics, or of the still earlier history from which classical physics inherited them. As defined here, a quantum measurement does not measure any preexisting property of the ultimate constitution of the reality responsible for quantum phenomena. An act of measurement establishes a quantum phenomenon by an interaction between the instrument and the quantum object, or, in the present view, the ultimate constitution of the reality responsible for quantum phenomena and, at the time of measurement, also quantum objects. In the view advanced in this article, in contrast to that of Bohr, quantum objects, such as electrons or photons, are assumed to exist only at the time of measurement and not independently, a view that redefines the concept of quantum object as well. This redefinition becomes especially important in high-energy quantum regimes and quantum field theory, and it allows this article to define a new concept of quantum field. The article also considers, now following Bohr, quantum measurement as the entanglement between quantum objects and measuring instruments. The argument of the article is grounded in the concept of “reality without realism” (RWR), as underlying quantum measurement thus understood, and in the view, the RWR view, of quantum theory defined by this concept. The RWR view places the stratum of physical reality thus designated, here the reality ultimately responsible for quantum phenomena, beyond representation or knowledge, or even conception, and defines the corresponding set of interpretations of quantum mechanics or quantum field theory, such as the one assumed in this article, in which, again, not only quantum phenomena but also quantum objects are idealizations defined by measurement. As such, the article also offers a broadly conceived response to J. Bell’s argument “against ‘measurement’”.
Introduction
This article reconsiders the concept of physical reality in quantum theory and the concept of quantum measurement, following Bohr, whose analysis of quantum measurement led him to his concept of "phenomena." RWR-type interpretations, as applicable in quantum mechanics (QM), may be different within each set, weak or strong, and Bohr's interpretation, in its ultimate version (developed by him in the late 1930s as a revision of earlier versions), and the one adopted here are different, although both are of the strong RWR type and the present one follows that of Bohr in several key aspects. In particular, Bohr was, again, the first to ground his interpretation, in all of its versions, in the irreducible role of measuring instruments in the constitution of quantum phenomena and, in its ultimate version, in the RWR-type concept of reality as applied to the ultimate constitution of the reality responsible for quantum phenomena, although he did not speak in terms of reality without realism. This constitution is, in Bohr's interpretation, associated with quantum objects, as is common. The present interpretation adopts a more stratified view, which redefines the concept of quantum objects, a view proposed by this author earlier in the context of QFT [9]. This view is as follows.
While what is considered as a quantum object in a given experiment is made possible by the ultimate constitution of the reality responsible for quantum phenomena in its interaction with the measuring instruments used, and while these two strata of reality are equally beyond conception, they are not the same forms of idealization. The ultimate constitution of the reality responsible for quantum phenomena is an idealization assumed to exist independently of our interactions with it and thus independently of observation. On the other hand, the reality idealized as quantum objects is, while still of the RWR-type, assumed to exist only at the time of observation. It follows that, in this interpretation, there are no quantum objects, such as electrons, photons, or quarks, existing independently in nature apart from our interaction with it by means of our observational technology. Hence, one cannot speak of the behavior of quantum objects as independent of observation either. One can only consider this behavior as part of observations or measurements, again, understanding by measurement the construction of quantum phenomena by means of measuring instruments capable of interacting with the ultimate, RWR-type, constitution of the reality responsible for quantum phenomena. This view, on this point following Bohr, makes it impossible to separate, or extract, quantum objects from quantum phenomena observed in measuring instruments. Bohr spoke in this connection of the indivisibility or wholeness of quantum phenomena, which makes it impossible to consider the behavior of quantum objects independently, even if one assumes that they exist independently (e.g., [2] (v. 2, pp. 61, 72)). Quantum phenomena themselves, as defined by effects observed in measuring instruments, allow for a representational and thus realist treatment, as do, in the first place, measuring instruments, or more accurately, their observable parts. They also have quantum strata through which they interact with quantum objects, as considered below. In Bohr's interpretation and, following it, the present interpretation, both the observable parts of measuring instruments and, hence, quantum phenomena are treated by means of classical physics. The quantum character of quantum phenomena is defined by the particular configurations of their classically observed features. Eventually, Bohr adopted the term "phenomenon," when used in quantum physics, as referring to what is observed or, more precisely, what has already been observed, in specified setups, in measuring instruments, as effects of their interaction with quantum objects [2] (v. 2, p. 64). It is, as will be seen, crucial that Bohr's concept refers to what has already been observed rather than only predicted.
The remainder of this article proceeds as follows. The next section outlines its key concepts. Section 3 discusses the concept of quantum measurement. Section 4 discusses the quantum measurement as an entanglement. Section 5 considers measurement, quantum objects, and elementary particles in QFT.
An Outline of Concepts
This section outlines the key concepts considered in this article, both those specifically grounding my argument, such as that of reality without realism or quantum object, and more commonly used concepts, such as reality, causality, or determinism, in their definitions adopted here, because they can be defined otherwise. I begin with the concepts of reality, realism, and reality without realism. As indicated in the Introduction, the concept of reality without realism, RWR, is grounded in more general concepts of reality and existence, assumed here to be primitive concepts and not given analytical definitions. These concepts are, however, in accord with most, even if not all (which would be impossible), available concepts of reality and existence in realism and nonrealism alike. By reality I refer to that which is assumed to exist, without making any claims concerning the character of this existence. The absence of such claims, which define realism, allows one to place this character beyond representation or even conception, as, in the case of the ultimate nature of the reality responsible for quantum phenomena, in RWR-type interpretations. I understand existence as a capacity to have effects on the world with which we interact. The very assumption that something is real, including of the RWR-type, is made on the basis of such effects. Following L. Wittgenstein, I understand "the world" as "everything that is the case," in particular (but not exclusively) the world of events [10] (p. 1). This also includes entities and events, including mental (for example, mathematical), of the human world. Quantum events are observed as phenomena defined by measuring instruments or their equivalents in nature. On the other hand, while unobservable, the ultimate constitution of the reality responsible for quantum phenomena in the RWR view is assumed to be "the case" as well and is, thus, part of the world. It never appears as such and hence is not an event, but it manifests itself in events, from which its existence is inferred.
A given theory or interpretation might assume different levels and different types of idealizations of reality, some allowing for a representation or conception and others not. (By "idealization" I refer to a workable conception of something or, in the strong RWR view, a lack of such a conception, thus, possibly, something different from what it idealizes, rather than, as some do, to any form of approximation of something.) As stated in the Introduction, the present interpretation of quantum phenomena and QM or QFT assumes three idealizations within its overall idealization scheme. The behavior of the observable parts of measuring instruments, defining quantum phenomena, is idealized as representable. By contrast, the RWR-type reality ultimately responsible for these phenomena is idealized as that which cannot be represented or even conceived of. The third idealization is that of quantum objects. The reason for assuming the latter idealization is as follows. On the one hand, in contrast to classical physics or relativity, in quantum physics, in each experimental arrangement one must, as Bohr argued, always discriminate "between those parts of the physical system considered which are to be treated as measuring instruments and those which constitute the objects under investigation" [11] (p. 701). On the other hand, the difference between these two parts is, in general, not uniquely defined, which is sometimes expressed as the arbitrariness of the "cut," discussed in the next section. Accordingly, it is how we set up and interpret an experiment that defines what is the quantum object in this experiment and thus brings this object into existence, still as an RWR-type entity, in the present interpretation. Its quantum nature, including as an RWR-type entity, is defined by the ultimate, RWR-type, reality, which, by contrast, exists independently of any experiment.
Realist thinking is manifested in the corresponding realist theories (terms like "ontic" and "ontological" are sometimes used as well), which are commonly representational in character. Such theories aim to represent the reality they consider, usually by mathematized models based on suitably idealizing this reality. It is possible to aim, including in quantum theory, for a strictly mathematical representation of this reality apart from physical concepts, at least as they are customarily understood, as in classical physics or relativity. It is also possible only to assume an independent architecture, or structure, of the reality considered, while allowing that it is either (A) not possible to adequately represent this architecture by means of a physical theory, or (B) not possible even to form a rigorously specified concept of this architecture, either at a given moment in history or even ever. Under (A), a theory that is merely predictive could be accepted for lack of a realist alternative, but usually with the hope that a representational theory will eventually be developed. This was Einstein's attitude toward QM, or QFT, which he expected to be eventually replaced by a realist theory. What, then, grounds realism most fundamentally is the assumption that the ultimate constitution of reality possesses properties and the relationships between them, or, as in (ontic) structural realism [12], just a structure, in particular, a mathematical structure. This constitution may either be ideally represented and, hence, known or be unrepresented or unknown, or even unrepresentable or unknowable, but still conceivable, usually with a hope that it will be eventually represented and known. The second assumption brings realism closer to the weak RWR view. The latter, however, does not imply such a hope and, more significantly, does not assume the existence of such properties or the relationships between them, or just a structure, along the lines of ontic structural realism.
Thus, classical mechanics (used in dealing with individual objects and small systems, apart from chaotic ones), classical statistical mechanics (used in dealing, statistically, with large classical systems), or chaos theory (used in dealing with classical systems that exhibit a highly nonlinear behavior) are realist theories. While classical statistical mechanics does not represent the behavior of the systems considered because their great mechanical complexity prevents such a representation, it assumes that the individual constituents of these systems are represented by classical mechanics. In chaos theory, which, too, deals with systems consisting of large numbers of atoms, one assumes a mathematical representation of the behavior of these systems. Our phenomenal experience can only serve us partially in relativity. This is because, while we can give the relativistic behavior of photons a concept and represent it mathematically, which makes relativity a realist and causal and, in fact, deterministic theory, we have no means of phenomenally visualizing this behavior or the behavior represented by Einstein's velocity-addition formula for collinear motion, s = (v + u)/(1 + vu/c^2). Thus, when the velocity is close to c (or is c), the relativistic concept of motion is no longer a mathematical refinement of our phenomenal sense and the corresponding ordinary concept of motion in the way the classical concept of motion is. Relativity was the first physical theory that defeated our ability to form a phenomenal conception of individual physical behavior, and as such, it was a radical change in the history of physics. Photons, which only exist in motion with a velocity equal to c in a vacuum, represent the limit case. Ultimately, they are quantum entities, which need to be treated by quantum electrodynamics (QED). In any event, relativity still offers a conceptual, as well as mathematical, representation of the behavior of individual systems. This behavior could, moreover, be treated causally and indeed deterministically, although, because all physical influences are limited by c, relativity imposes new limits on causal relationships between events, by restricting causes to those occurring in the backward (past) light cone of the event that is seen as an effect of this cause, while no event can be a cause of any event outside the forward (future) light cone of that event.
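To make the point about velocities at or near c concrete, a minimal worked case may be added here (my illustration; the article itself does not carry out this calculation). Setting u = c in the velocity-addition formula gives

\[
s \;=\; \frac{v + c}{1 + vc/c^{2}} \;=\; \frac{c\,(v + c)}{c + v} \;=\; c ,
\]

so composing any subluminal velocity v with c returns c itself, a result for which, as stated above, our phenomenal, effectively classical intuition of adding speeds provides no analogue.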
All theories just mentioned are based on the idea that we can observe the phenomena considered without disturbing them sufficiently to affect them [2] (v. 1, p. 53). As a result, one can, for all practical purposes, identify these phenomena with the corresponding objects in nature in their independent behavior and (ideally) represent and predict their behavior by using the representations provided by these theories, keeping in mind the qualifications just made for classical statistical physics or chaos theory. This identification, thus, helps realism, but does not guarantee it, even in the case of classical mechanics, where representational idealizations are more in accord with our phenomenal experience, as I. Kant already realized [13]. In this case, or in relativity, these predictions are still ideally exact, as opposed to the probabilistic or statistical predictions of quantum theory, even in dealing with the most elementary individual quantum phenomena. This is a fundamental difference, arising, one is compelled to argue, because of the impossibility of controlling the physical interference of measuring instruments with the object under investigation, in any interpretation of quantum phenomena and QM, or QFT.
The representation of individual physical (quantum) objects and behavior became partial in Bohr's 1913 atomic theory. The theory only provided representations, in terms of orbits, for the stationary states of electrons in atoms (in which electrons had constant energy levels), but not for the discrete transitions, "quantum jumps," between stationary states. This was an unprecedented and at the time nearly unimaginable step, because this concept was incompatible with classical mechanics and electrodynamics alike. It was expected that Bohr's theory was a temporary expedient that would no longer be necessary when a proper theory of quantum phenomena was developed. It was, however, this concept that became central for Heisenberg, who built on it by abandoning an orbital representation of stationary states as well. This led him to his discovery of QM, the first physical theory that allowed for an RWR-type interpretation, at least of the weak type, of it as a whole, as opposed to only partially conforming to the RWR view as Bohr's 1913 theory was. According to Bohr's 1925 assessment: In contrast to ordinary mechanics, the new quantum mechanics does not deal with a space-time description of the motion of atomic particles. It operates with manifolds of quantities [matrices] which replace the harmonic oscillating components of the motion and symbolize the possibilities of transitions between stationary states . . . . These quantities satisfy certain relations which take the place of the mechanical equations of motion and the quantization rules. [2] (v. 1, p. 48) Following Heisenberg's own thinking at the time, this assessment was thus based on the (weak) RWR view and the corresponding interpretation of QM, implicit here rather than developed by Bohr. By contrast, the first worked-out version of Bohr's interpretation, in his 1927 Como lecture [2] (v. 1, 52-91), restores, ambivalently, realism to QM, by assuming that the independent behavior of quantum objects was represented by the formalism of QM. The Como version of his interpretation was, however, quickly abandoned by Bohr, following his discussion with Einstein in October of 1927 at the Solvay conference in Brussels. This discussion initiated his path toward his ultimate, (strong) RWR-type, interpretation.
This interpretation was first sketched in his 1937 article, "Complementarity and Causality" [14]. Bohr did not use the language of "reality without realism," but his view, as defined by the irreducible role of measuring instruments in the constitution of quantum phenomena, amounted to the RWR view. He referred to "our not being any longer in a position to speak of the autonomous behavior of a physical object, due to the unavoidable interaction between the object and the measuring instrument," which, by the same token, entails a "renunciation of the ideal of causality in atomic physics" [14] (p. 87). This and related statements by Bohr could be read as making a stronger claim implying the incapacity of any theory to restore this position to us. I shall, however, view all such statements as belonging to Bohr's interpretation, from 1937 on, as a strong RWR-type interpretation, placing quantum objects and behavior beyond conception. Bohr clearly does so here. For, if one is no longer in a position to speak of the autonomous behavior of a physical object, this behavior must also be beyond conception, because, if one had such a conception, one would be able to say something about it.
There is still the question of whether our inability to do so only (A): characterizes the situation as things stand now, while allowing that quantum phenomena, or whatever may replace them, will no longer make this assumption, and thus RWR-type interpretations, viable, thus reverting to a realist view, or (B): reflects the possibility that this reality will never become available to thought. Logically, once (A) is the case, then (B) is possible too, but is not certain. There does not appear to be any experimental data compelling one to prefer either. (A) and (B) are, however, different in defining how far our mind can, in principle, reach in understanding nature. This is the main reason to distinguish these views, although my argument here applies equally to both. Bohr at least assumed (A), and some of his statements, especially those that make stronger than interpretive claims concerning our lack of access to the ultimate nature of reality responsible for quantum phenomena, suggest that he might have entertained (B). The qualification "as things stand now" applies, however, to (B) as well, even though it might appear otherwise given that this view precludes any conception of the ultimate constitution of the reality responsible for quantum phenomena not only now but also ever. It applies because a return to realism in quantum theory is possible, either on experimental or theoretical grounds, if quantum theory, as currently constituted, is replaced by an alternative theory that requires a realist interpretation. This might make the strong (or weak) RWR view obsolete, even for those who hold it, and lead to its replacement by a more realist view, with quantum theory still in place in its present form.
One of the reasons for entertaining (B) is that our neurological constitution and, thus, our thinking and language, enabled by this constitution, have evolutionarily developed in our interaction with objects in the world consisting of billions of atoms and thus became essentially classical. Our thinking works in the ways which classical physics mathematically refines, a point often made by Bohr and Heisenberg. Accordingly, there is no special reason to assume that our thought and language should be able to conceive of and describe how nature ultimately works at its very small (or very large) scales.
In either (A) or (B) form, the RWR view requires a reconsideration of causality and possibly a "renunciation of the ideal of causality," as Bohr said, although it is difficult to assume, contrary to Bohr's claim elsewhere, that this renunciation is "final" [11] (p. 697). As is clear from Bohr's argument in this article and beyond, this ideal is grounded in the following concept of causality. This concept is defined by the claim that the state, X, of a physical system is definitively (rather than with any probability other than one) determined, in accordance with a law, at all future moments of time once it is determined at a given moment of time, state A, and A is determined definitively in accordance with the same law by any of the system's previous states. This assumption, thus, implies a concept of reality, which defines this law, thus making this concept of causality ontological. By making such a concept inapplicable to the ultimate constitution of the reality responsible for quantum phenomena, the RWR view precludes the application of causality to the relationships between quantum phenomena or events.
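Schematically, and offered here only as an illustrative shorthand rather than as the article's own notation, the concept of causality just defined can be written as

\[
X(t') \;=\; F_{t',\,t}\big(X(t)\big) \quad \text{with probability } 1, \qquad \text{for all } t' > t ,
\]

with the map F fixed by the law in question, and with every earlier state likewise determining X(t) in accordance with the same law. Causality in this sense fails whenever only probabilities other than one can be assigned to X(t'), which is the situation quantum phenomena present, at least in RWR-type interpretations.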
Certain qualifications of this definition of causality are in order, however. In particular, this definition, which does not use the concept of cause but only that of the exact determination according to a given law, need not imply that A is a cause of X, in accord, say, with Kant's understanding of causality. This understanding has been commonly used in considering causality since Kant, a key figure in the modern history of the question of causality, although what he defines as the principle of causality had been commonly used earlier, beginning with Plato and Aristotle, or even the pre-Socratics. Kant defined the principle of causality as follows: if an event takes place, it has, at least in principle, a cause of which this event is an effect defined, inevitably and strictly, by a given rule or law [13] (pp. 305, 308). It is commonly (although there are exceptions) assumed that the cause must be prior to, or at least simultaneous with, the effect, an assumption also known as the antecedence postulate. Now, the fact that the physical state of a body at time t1 determines, by Newton's law of gravity, the state of this body at any other time t2 does not mean that the state at t1 is the cause of the state at t2. One might argue that the real physical cause for any determination, including that of the initial state, A, that defines any particular case considered, is (in our language) the gravitational field defined by the Sun and other physical objects in the Solar system, as encoded in Newton's law of gravity. In this view, a given state, A, of any single object can only be seen as a physical cause of its future states, insofar as the whole configuration of bodies and forces involved, which determines the law of motion and hence causality, is viewed as embodied in this state. The history of the system thus considered, or of any classical system, whatever physical law or set of laws defines its causal behavior, only goes so far in a given representation, which suspends more remote causes, let alone the ultimate cause of this history. (Assuming the ultimate cause of anything is a major philosophical problem, put aside here.) Thus, Newton bracketed the physical nature of and the causes of gravity and was (wisely) content to merely define a law of gravity in considering, as causal, the behavior of any given object under this law and the other laws of Newton's mechanics. In considering planetary motion, all history, such as that of the emergence of the Solar system, was bracketed as well. This type of bracketing is workable on very large spatial and temporal scales, even that of the Universe itself, using Newton's theory of gravity or general relativity, but only up to a point. The situation changes once one gets closer to the Big Bang or to considering the Big Bang itself (assumed to be a complex process in which a great many things happen, even if in a very short time by our measure), because quantum aspects, and possibly still other ones, have to be considered. This leads to complexities that have thus far defeated all our efforts to resolve them.
For the moment, a well-defined situation in physics (classical, relativistic, or quantum) usually entails a cut-off defined by the initial state and the actual or possible final state of the system considered, with either one or both determined by measurements we do or can perform. When predictions are at stake, this demarcation is defined by the initial measurement, performed by us, possibly using nature as part of our observational technology, and a possible future measurement that can verify the prediction made. Such predictions can only be made by us, even when they concern the behavior of objects in accordance with causality and, in the case of some causal systems, are exact or deterministic. Nature does not make predictions. In the present view, it does not make measurements and has no quantum objects, like elementary particles, either. Perhaps, as discussed in Section 4, it has quantum fields, that is, such a concept may better reflect the ultimate constitution of nature or the part of this constitution responsible for quantum phenomena.
The view of classical physics or relativity, or even quantum physics, as conforming to the concept of causality as defined above (which need not require the idea of cause, but only the law strictly connecting events and enabling exact predictions), has been nearly universally accepted. Quantum theory, and especially Bohr's 1913 atomic theory, introduced in the same year, radically challenged this view. As Bohr said later, "The unrestricted applicability of the causal mode of description to physical phenomena has hardly been seriously questioned until Planck's discovery of the quantum of action" [15] (p. 94). Quantum phenomena, at least in the RWR view, expressly violate the principle of causality because no determinable event (or phenomenon) could be established as the cause of a given event (or phenomenon), and only statistical correlations between events could be ascertained. But then, there is, in the RWR view, no relation of the type corresponding to causality as defined here, although claims concerning the existence of such relations are found in considering QM and QFT. In the RWR view, the equations of QM or QFT, such as Schrödinger's or Dirac's equation, only provide, with the help of Born's rule, the probabilities of the outcomes of possible future quantum experiments on the basis of previously performed ones. Born's rule or an analogous rule (such as von Neumann's projection postulate or Lüders's postulate) establishes the relation between the so-called "quantum amplitudes," associated with complex Hilbert-space vectors as complex entities, and probabilities as real numbers, by using square moduli or, equivalently, the multiplication of these quantities and their complex conjugates (technically, these amplitudes are first linked to probability densities). Although Born's or analogous rules are connected naturally to the formalism of QM, they are added to this formalism rather than derived from it. We do not know why these postulates work, which makes it tempting to argue that why they work is the greatest mystery of QM, but then, we do not know why the formalism works either. There is neither one without the other. Some, beginning with P. S. Laplace, have used "determinism" for causality in the present sense. His and related views may be seen as forms of ontological determinism. Laplace (one of the founders of modern probability theory) and others who have adopted such views were of course aware of the necessity of using probability in physics but only assumed it to be necessary for practical, epistemological reasons, due to our lack of knowledge concerning causal or deterministic relationships ultimately defining all events considered. In part for this reason, I prefer to define "determinism" as an epistemological category referring to the possibility of predicting the outcomes of causal processes ideally exactly in accordance with laws that define them as causal. In classical mechanics, when dealing with individual objects or small systems (apart from chaotic ones), both concepts in effect coincide or, rather, because they are different concepts, are correlatively applicable. On the other hand, classical statistical mechanics or chaos theory are causal but not deterministic in view of the complexity of the systems considered, which limits us to probabilistic or statistical predictions concerning their behavior.
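For reference, a standard statement of Born's rule in the Hilbert-space formalism may be added (a textbook form, not a formula written out in this article): for a system assigned the normalized state vector |ψ⟩ and a measurement whose possible outcomes a are associated with orthonormal eigenvectors |a⟩,

\[
p(a) \;=\; \left|\langle a \,|\, \psi \rangle\right|^{2} \;=\; \langle \psi \,|\, a \rangle \,\langle a \,|\, \psi \rangle ,
\]

which converts complex amplitudes into real probabilities via the square modulus, that is, via the multiplication of the amplitude by its complex conjugate, as described above.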
In the case of quantum phenomena, deterministic predictions are not possible on experimental grounds even in considering the most elementary quantum phenomena, such as those associated with elementary particles. This is because the repetition of identically prepared quantum experiments in general leads to different outcomes, and unlike in classical physics, this difference cannot be diminished beyond the limit defined by Planck's constant, h, by improving the capacity of our measuring instruments. This impossibility is manifested in the uncertainty relations, which would remain valid even if we had perfect instruments and which pertain to the data observed, rather than to any particular theory. Hence, the probabilistic or statistical character of quantum predictions must also be maintained by interpretations of QM or alternative theories of quantum phenomena that are causal. Such interpretations and theories are also, and in the first place, realist because causality implies a law governing it and thus a representation of the reality considered (in these cases, defined by the behavior of quantum objects) in terms of this law. By contrast, RWR-type interpretations are not causal because of the absence of realism in considering the ultimate nature of reality responsible for quantum phenomena.
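The uncertainty relations invoked here have the standard Heisenberg form, quoted for reference (the article itself does not write them out): for the position q and momentum p associated with the same arrangement,

\[
\Delta q \,\Delta p \;\geq\; \frac{\hbar}{2} \;=\; \frac{h}{4\pi} ,
\]

and, as stressed above, they constrain the data obtainable in any experiments, independently of the particular theory used to predict those data.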
The meanings of the terms causality and determinism fluctuate in physical and philosophical literature. Thus, Schrödinger's or Dirac's equation is sometimes seen as "deterministic" or "causal" under the assumption that it describes, even in a causal way, the independent behavior of quantum objects, with the recourse to probability only arising because of the interference of measurements into this behavior. This assumption is shared by both von Neumann's and Dirac's classic books [16][17][18] and ambivalently by Bohr in his Como lecture [18] (pp. 191-218). It poses difficulties, beginning with the fact that the variables involved are complex quantities in Hilbert spaces over C. These difficulties were confronted by E. Schrödinger because his wave equation was dealing with waves in the configuration space rather than physical space. I shall put these difficulties aside here because they do not affect my argument. In RWR-type interpretations, either equation only determines the mathematical state, the "quantum state," of the corresponding wavefunction as a Hilbert-space vector at any future point once it is determined at a given point, mathematically. It would be more accurate to say that each equation determines it for any given value of the parameter t in the equation, which values could be related to time measurements at different points in time. The relationships between these measurements themselves are probabilistic, which also suggests a possibility of conceiving of quantum states themselves in probabilistic terms (e.g., [19]) or redefining causality itself as a probabilistic concept (e.g., [7], pp. 1844-1846; [20]), a subject that, however, requires a separate consideration. Accordingly, physically, each equation only determines, in Schrödinger's terms, an "expectation-catalog" concerning the outcomes of possible future experiments, to be observed in measuring instruments, without representing either how they come about or these outcomes themselves, which are represented by classical physics [21] (p. 154). Hence, contrary to another common claim, neither equation is, in these interpretations, seen as time-reversible either.
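In standard notation (again quoted for reference rather than taken from the article, and assuming a time-independent Hamiltonian H), the equation in question and the determination it provides read

\[
i\hbar \,\frac{\partial}{\partial t}\,|\psi(t)\rangle \;=\; H\,|\psi(t)\rangle ,
\qquad
|\psi(t)\rangle \;=\; e^{-iHt/\hbar}\,|\psi(0)\rangle ,
\]

so that, as stated above, once the state vector is fixed for one value of the parameter t it is fixed, mathematically, for every other value. In RWR-type interpretations this determination concerns only the predictive "expectation-catalog" obtained via Born's rule, and not a physical process in space and time.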
I comment next on indeterminacy, randomness, and probability from the RWR perspective. I reiterate first that the strong RWR view makes the absence of causality automatic because assuming the ultimate character of the reality responsible for quantum phenomena to be causal would imply at least a partial conception or even representation of this reality as concerns the law that governs it and causally connects events. This does not mean that interpretations of QM or alternative theories of quantum phenomena that are realist and causal are impossible. Even in this case, however, and assuming that quantum objects exist independently (rather than, as in the present view, only at the time of measurement), one cannot track them individually in the way one can individual classical objects by separating their behavior from their interaction with measuring instruments, which implies that the difference between objects and phenomena remains irreducible in this case as well. It follows that, although causality, or in the first place, realism, is possible in considering quantum phenomena, determinism is not. Accordingly, while in classical physics or relativity, where all systems considered are causal, some of them are handled deterministically and others probabilistically or statistically, all quantum systems can only be handled probabilistically or statistically, even if one assumes causality.
I shall now define the concepts of indeterminacy, randomness, chance, and probability, as they are understood here, because their meanings fluctuate as well. In the present definition, indeterminacy is a more general category, while randomness or chance will refer to a most radical form of indeterminacy, when a probability cannot be assigned to a possible event, which may also occur unexpectedly. Randomness and chance may also be understood as different from each other. These differences are, however, not germane in the present context, and I shall only speak of randomness. Both indeterminacy and randomness only refer to possible future events and define our expectations concerning them. Once an event has occurred, it is determined. An indeterminate nature of events may either allow for assuming an underlying causal architecture of the physical reality responsible for this nature, whether this process is accessible to us or not, or disallow making such an assumption. The first case defines indeterminacy in classical physics, in particular classical statistical physics or chaos theory, and the second in QM, in RWR interpretations. It is impossible to ascertain that an apparently random sequence of events, events that occurred apparently randomly, was in fact random, rather than connected by some rule, such as that defined by causality, and there is no mathematical proof that any sequence is. The sequences of indeterminate events that allow for probabilistic predictions concerning them are a different matter, although there is still no guarantee that such sequences are not ultimately underlain by causal connections in the case of quantum phenomena. Experimentally, as explained, quantum phenomena only preclude determinism, because identically prepared quantum experiments, as concerns the state of measuring instruments, in general lead to different outcomes. Only the statistics of multiple (identically prepared) experiments are repeatable, which is fortunate, because otherwise it would be impossible to have a scientific theory, such as QM or QFT, of these data.
A Bayesian view of quantum theory, based on dealing with probabilities of individual events, would qualify the situation, by making QM or QFT probabilistic, rather than statistical, while, however, retaining the fundamental difference in question between quantum and classical physics and the possibility of using the statistical data of either theory. First, "probabilistic" commonly refers to our estimates of the probabilities of either individual or collective events, such as that of a coin toss or of finding a quantum object in a given region of space. "Statistical" refers to our estimates concerning the outcomes of identical or similar experiments, such as that of multiple coin-tosses or repeated identically prepared experiments with quantum objects, or to the average behavior of certain objects or systems. There are different versions of the Bayesian view (e.g., [22,23]). Most generally, however, it defines probability as a degree of belief concerning a possible occurrence of an individual event on the basis of the relevant information we possess. This makes probabilistic estimates, generally, subjective, although there may be agreement (possibly among a large number of individuals) concerning such estimates. The frequentist understanding, also revealingly referred to as "frequentist statistics," defines probability in terms of sample data by emphasis on the frequency or proportion of these data, which is considered more objective. In quantum physics, exact predictions are, again, impossible even in dealing with elemental individual processes and events. This fact could, however, be interpreted either on Bayesian lines, under the assumption that a probability could be assigned to individual quantum events, or on frequentist lines, but under the assumption that each individual effect is strictly random and hence cannot be assigned a probability at all. (The standard use of the term "quantum statistics" refers to the behavior of large multiplicities of identical quantum objects, such as electrons and photons, which behave differently, in accordance with the Fermi-Dirac and the Bose-Einstein statistics, for identical particles with, respectively, half-integer and integer spin).
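For concreteness, the "quantum statistics" referred to parenthetically above can be given their standard textbook expressions (added here for reference, not as part of the article's argument): the mean occupation numbers of a single-particle state of energy ε at temperature T and chemical potential μ are

\[
\bar{n}_{\mathrm{FD}}(\varepsilon) \;=\; \frac{1}{e^{(\varepsilon-\mu)/k_{B}T}+1} ,
\qquad
\bar{n}_{\mathrm{BE}}(\varepsilon) \;=\; \frac{1}{e^{(\varepsilon-\mu)/k_{B}T}-1} ,
\]

for identical particles of half-integer spin (Fermi-Dirac) and integer spin (Bose-Einstein), respectively.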
An example of a Bayesian approach, which is of an RWR-type in the present definition, is Quantum Bayesianism, or QBism [24,25]. I qualify because QBists themselves sometimes speak of realism by virtue of assuming the existence of an exterior physical reality [24]. Although my argument in this article would equally apply if one assumes a Bayesian, RWR-type, view, I adopt the frequentist, RWR-type, view, considered in detail in [4] (pp. 173-186), [8]. Bohr appears to have been inclined to a statistical view as well [2] (v. 2, p. 18), [4] (pp. 180-184). I might add that, while I can understand why QBism sees our assignments of probabilities as subjective, I would prefer to say that these assignments are human, which makes all probabilistic relationships between a given theory and observed phenomena human as well, in classical (or relativistic) and quantum physics alike. Nature does not assign probabilities. In classical physics (or relativity), however, these relationships may be, ideally, assumed to be physically grounded in the behavior of the system considered, because classical physics may be assumed to represent this behavior, which is no longer possible to assume in quantum theory, at least in RWR-type interpretations. There have been statistical interpretations of QM, commonly on realist lines. Two instructive examples are those of A. Khrennikov [8,26] and A. E. Allahverdyan, R. Balian, and T. Nieuwenhuizen [27]. While Khrennikov's interpretation is expressly realist, that of Allahverdyan, Balian, and Nieuwenhuizen may be seen as allowing for RWR-type interpretations. This is because they argue that one should only interpret outcomes of pointer indications and leave the richer quantum structure, which has many ways of expressing the same identities, without interpretation. In RWR-type interpretations, this structure would be seen as enabling statistical predictions, without representing the ultimate constitution of the reality responsible for the outcomes of experiments and thus pointer indications.
Probability introduces an element of order into situations defined by the role of indeterminacy in them and enables us to handle such situations better. Probability or statistics is about the interplay of indeterminacy and order. This interplay takes on a unique significance in quantum physics, because of the existence of quantum correlations, such as the EPR (Einstein-Podolsky-Rosen) or, in the case of discrete variables, EPR-Bell correlations. These correlations are properly predicted by QM, which is, thus, as much about order as about indeterminacy, and about their unique combination in quantum physics. The correlations themselves are collective, statistical, and thus do not depend on either the Bayesian or frequentist view of the individual events involved.
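As a standard illustration of such correlations (not worked out in the article itself): for two spin-1/2 particles prepared in the singlet state, QM predicts, for spin measurements along directions a and b,

\[
E(\mathbf{a},\mathbf{b}) \;=\; \big\langle (\boldsymbol{\sigma}_{1}\!\cdot\!\mathbf{a})\,(\boldsymbol{\sigma}_{2}\!\cdot\!\mathbf{b}) \big\rangle_{\text{singlet}} \;=\; -\,\mathbf{a}\cdot\mathbf{b} \;=\; -\cos\theta_{ab} ,
\]

a strictly statistical statement about correlations between outcomes, each of which is individually unpredictable. The violation of Bell-type inequalities by these correlations is what distinguishes them from correlations achievable by classical means, while leaving the combination of order and indeterminacy described above intact.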
The circumstances just outlined imply a different reason for the recourse to probability in quantum physics, in RWR-type interpretations. According to Bohr, the idea of indeterminacy apart from "the causal mode" of understanding reality has "hardly been seriously questioned until Planck's discovery of the quantum of action" [17] (p. 94). As he said on a later occasion (in 1949): "[E]ven in the great epoch of critical [i.e., post-Kantian] philosophy in the former century, there was only a question to what extent a priori arguments could be given for the adequacy of space-time coordination and causal connection of experience, but never a question of rational generalizations or inherent limitations of such categories of human thinking" [2] (v. 2, p. 65). Even more radical philosophical questionings of causality, such as those by D. Hume, are those of our epistemological capacity to grasp the underlying causal order presupposed at the ultimate level of reality. According to Bohr: [I]t is most important to realize that the recourse to probability laws under such circumstances is essentially different in aim from the familiar application of statistical considerations as practical means of accounting for the properties of mechanical systems of great structural complexity [in classical physics]. In fact, in quantum physics we are presented not with intricacies of this kind, but with the inability of the classical frame of concepts to comprise the peculiar feature of indivisibility, or "individuality," characterizing the elementary processes. [2] (v. 2, p. 34) Rather than representing a definitive state of affairs in nature, even if Bohr thought so, this statement should be seen as expressing the strong RWR-type interpretation adopted by Bohr at this point, in 1949. For one thing, as noted, some interpretations of QM, such as those by von Neumann [16] and Dirac [17], or alternative theories, such as Bohmian mechanics, assume causal views of the behavior of quantum objects, with probability or statistics brought in by measurement. Individuality and indivisibility reflect the features of Bohr's concept of phenomena. Referring to "the elementary processes" is due to the fact, in place from Planck's discovery of quantum theory on, that exact predictions are no longer possible, even ideally, regardless of how elementary quantum objects may be.
"The classical frame of concepts" may appear to refer to the concepts of classical physics, and it does include these concepts. By this time (in 1949), however, Bohr adopts the strong RWR view, which places the ultimate nature of reality responsible for quantum phenomena and possible all physical phenomena beyond conception. This gives the phrase "the classical frame of concepts" a broader meaning: all representational concepts that we can form are classical or proto-classical. They may be seen as proto-classical insofar as the physical concepts of classical physics and, as noted, already with significant limitations, relativity may be considered as refinements of our phenomenal intuition, a product of our evolutionary neurological machinery, intuitions embodied in ideas like bodies and motion. This refinement is no longer available for representing the ultimate nature of reality responsible for quantum phenomena or possibly the ultimate constitution of nature, at least as things stand now. Classical physical concepts are, however, still used in quantum physics in RWR-type interpretations, in particular that of Bohr or the present one, in dealing with the behavior of the observable parts of measuring instruments and, thus, data or information found in these parts. But these concepts, again, do not, in these interpretations, apply to the ultimate character of physical reality responsible for quantum phenomena. That still need not mean that a realist interpretation of quantum phenomena or QM or an alternative theory of quantum phenomena, using "the classical frame of concepts," is impossible. As stated above, the RWR view may become obsolete in quantum theory in its present form or whatever may replace it even for those who hold it and replaced by a more realist view, based "on the classical frame of concepts." In other words, even if the ultimate constitution of nature is still assumed to be beyond representation or even conception, because our own evolutionary neurological constitution limits us to classical or proto-classical frame of concepts, there will be no physics that needs to take this limitation into account as concerns any physical phenomena.
For the moment, in RWR-type interpretations, only two types of concepts are available for dealing with the ultimate nature of reality responsible for quantum phenomena, while the classical frame of concepts is used for describing the observable parts of measuring instruments and quantum phenomena, or data or information registered in them. The first type are purely mathematical concepts. Their role eventually led Heisenberg to a form of mathematical realism, with a Platonist flavor, while assuming that QM or QFT does not represent quantum objects and behavior by physical concepts, at least as we conventionally understand them, for example, in classical physics or relativity [28] (pp. 145, 167-186). By contrast, in his ultimate (strong RWR-type) interpretation, Bohr rejected the possibility of a mathematical representation, along with a physical one, of the ultimate nature of reality responsible for quantum phenomena, at least as things stand now. The present interpretation is in accord with Bohr on this point.
The second type of concepts are physical concepts, defined here as RWR-concepts, such as Bohr's concept of phenomena and complementarity, when complementarity is that of phenomena. These concepts have both representational (possibly classical) components and RWR-components and thus reflect that the ultimate nature of reality responsible for quantum phenomena is beyond representation or conception, which defines their RWR-components. This structure is in accord with and is physically defined by the two-component structure of measuring instruments, consisting in their classically describable observable parts and their quantum strata through which they interact with the ultimate, RWR-type, physical reality responsible for quantum phenomena. According to Bohr: I advocated the application of the word phenomenon exclusively to refer to the observations obtained under specified circumstances, including an account of the whole experimental arrangement. In such terminology, the observational problem is free of any special intricacy since, in actual experiments, all observations are expressed by unambiguous statements referring, for instance, to the registration of the point at which an electron arrives at a photographic plate. Moreover, speaking in such a way is just suited to emphasize that the appropriate physical interpretation of the symbolic quantum-mechanical formalism amounts only to predictions, of determinate or statistical character, pertaining to individual phenomena appearing under conditions defined by classical physical concepts [describing the observable parts of measuring instruments]. [2] (v. 2, p. 64) Referring to "observations" is precise, because only the classically observed properties of measuring instruments affected by these observations could be measured. (By a "quantum measurement" I, again, refer to this whole process.) As defined by "the observations [already] obtained under specified circumstances," phenomena refer to events that have already occurred, and not to future events that one can predict on the basis of previous events defined by already established phenomena. This is a crucial point, discussed further in Section 3. Referring, phenomenologically, to observations also explains Bohr's choice of the term "phenomenon." This idealization is the same as that of classical physics, which allows one to identify phenomena with the physical objects (here measuring instruments), because an observation does not interfere with their behavior, in contrast to the way an observation by means of a measuring instrument interferes with the ultimate constitution of the reality responsible for a phenomenon thus observed in quantum physics. On the other hand, given that a quantum object is, in the present view, an idealization applicable only at the time of measurement, it is a product of this interference.
Complementarity adds a new dimension to this situation. Complementarity is defined by:
(a) a mutual exclusivity of certain phenomena, entities, or conceptions, such as, and in particular, those of the position and the momentum measurements, which can never be performed simultaneously in view of the uncertainty relations;
(b) the possibility of considering each one of them separately at any given point; and
(c) the necessity of considering all of them at different moments of time for a comprehensive account of the totality of phenomena that one must consider in quantum physics.
In Bohr's ultimate, strong RWR-type, version of his interpretation, complementarity applies to quantum phenomena observed in measuring instruments. Each of the two complementary phenomena involved in a given complementarity, say, that of the exact position or the exact momentum measurement, associated with a quantum object, may be established alternatively at any given point in time. They cannot, however, be both established (exactly) simultaneously. Neither concept, phenomenon or complementarity, represents the ultimate nature of reality responsible for quantum phenomena. They reflect the impossibility of representing it.
Measurement, Idealization, and Quantum Indefinitiveness
This section explains in detail and derives implications from the tripartite structure of the idealization of physical reality assumed by the present view of quantum measurement. As stated from the outset, this structure is not inherent in and is not necessarily adopted by RWR-type interpretations of quantum phenomena and QM or QFT, including that of Bohr. Even if, given some of his statements, it might be seen as a consequence of Bohr's view, Bohr never expressly stated this consequence. Bohr says, for example, that "the concept of stationary states may indeed be said to possess, within its field of application, just as much, or, if one prefers, as little 'reality' as the elementary particles themselves. In each case, we are concerned with expedients which enable us to express in a consistent manner essential aspects of the phenomena" [2] (v. 1, p. 12). That may be close to the present view, and it is clearly a form of the RWR view. The statement still appears, however, more likely to imply that either concept is an idealization but not that the concept of the elementary particle or quantum object, as an idealization, applies only at the time of measurement. According to Fine, commenting on Bohr's reply to EPR [11]: "But should we say that an electron is nowhere at all until we are set up to measure its position, or would it be inappropriate (meaningless?) even to ask?" [29]. In the present view, an electron is not assumed to have existed, or the corresponding idealization to apply, before the interaction between the ultimate constitution of the reality responsible for quantum phenomena and the measuring instrument. Nevertheless, this argument still builds on Bohr's argument concerning the irreducible role of measuring instruments in the constitution of quantum phenomena and thus the irreducible difference between them and quantum objects, and subtler aspects of this situation, as suggested by the following elaboration. Bohr says: This necessity of discriminating in each experimental arrangement between those parts of the physical system considered which are to be treated as measuring instruments and those which constitute the objects under investigation may indeed be said to form a principal distinction between classical and quantum-mechanical description of physical phenomena. It is true that the place within each measuring procedure where this discrimination is made is in both cases largely a matter of convenience. While, however, in classical physics the distinction between object and measuring agencies does not entail any difference in the character of the description of the phenomena concerned, its fundamental importance in quantum theory . . . has its root in the indispensable use of classical concepts in the interpretation of all proper measurements, even though the classical theories do not suffice in accounting for the new types of regularities with which we are concerned in atomic physics. In accordance with this situation there can be no question of any unambiguous interpretation of the symbols of quantum mechanics other than that embodied in the well-known rules which allow us to predict the results to be obtained by a given experimental arrangement described in a totally classical way. [11] (p. 701) Before I discuss this elaboration as such, I would like to address two common misunderstandings to which this and related statements by Bohr have often led.
First, Bohr's statement may suggest that, while observable parts of measuring instruments are described by means of classical physics, the independent behavior of quantum objects is described or represented by means of the quantum-mechanical formalism. This type of view has been adopted by some, for example, as noted earlier, von Neumann [16] and Dirac [17], and, in part under the impact of these books, it is sometimes referred to as "the Copenhagen interpretation." It was not, however, Bohr's view, at least after he revised his Como argument, which entertained this type of view and which had influenced others, including Dirac and von Neumann, in this regard. Bohr does say here that the observable parts of measuring instruments are described by means of classical physics and that classical theories cannot suffice to account for quantum phenomena, but he does not say that the independent behavior of quantum objects is described by the quantum-mechanical formalism. His statement only implies that quantum objects cannot be treated classically, for if they could be, classical theories would suffice in accounting for the new types of regularities in question. The "symbols" of the quantum-mechanical formalism are assumed here, as they always are by Bohr, only to have a probabilistically or statistically predictive role.
Bohr's insistence on the indispensability of classical physical concepts in considering the measuring instruments is often misunderstood as well, in particular by disregarding that measuring instruments contain both classical and quantum strata. Even though what is observed as phenomena in quantum experiments is beyond the capacity of classical physics to account for, the classical description can and, in order for us to be able to give an account of what happens in quantum experiments, must apply to the observable parts of measuring instruments. The instruments, however, also have a quantum stratum, through which they interact with quantum objects or, in the present view, the ultimate constitution of the reality responsible for quantum phenomena, which interaction would not be possible without this quantum stratum. The interaction is quantum and thus cannot be observed or, in RWR-type interpretations, represented. It is "irreversibly amplified" to the classical level of observable effects, say, a spot left on a silver screen [2] (v. 2, p. 73). The nature of this "amplification" is a separate matter and is part of the problem of the transition from the quantum to the classical, commonly, including by this author, seen as unsolved (although there are claims to the contrary, for example, along the lines of the consistent-histories approach), which, along with related subjects such as "decoherence," is beyond my scope.
It might be added that one could attempt to formalize this situation, as, for example, in [30,31]. One considers a compound quantum system, QO + QI, consisting of the quantum object under investigation, QO, and the quantum part, QI, of the instrument I; QO + QI is isolated during the (short) time interval when the quantum interaction in question takes place. The rest of the instrument, I, performs the measurement, a pointer measurement, on QI, after the interaction has taken place. In realist schemes, such as that of M. Ozawa [31], the evolution of QO + QI is described by the unitary evolution operator, U = exp(-iHt/ħ), where H = H_QO + H_QI + H_QOQI is the Hamiltonian, with H_QO and H_QI representing the internal behavior of the subsystems involved and H_QOQI the interaction between them. In the RWR view, no element of the formalism represents the ultimate nature of reality responsible for quantum phenomena, including its stratum involved in the interaction between QO and QI, responsible for the effects observed. Any such element only serves as part of the mathematics of QM that, with the help of Born's rule, predicts such effects.
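A minimal sketch of how such a scheme works may be useful here: the standard von Neumann pre-measurement model, offered only for illustration and not as the article's or Ozawa's specific formulation. If the interaction correlates the "pointer" states of QI with a basis {|k⟩} of QO, then

\[
U \Big( \sum_{k} c_{k}\, |k\rangle_{\mathrm{QO}} \otimes |0\rangle_{\mathrm{QI}} \Big)
\;=\; \sum_{k} c_{k}\, |k\rangle_{\mathrm{QO}} \otimes |k\rangle_{\mathrm{QI}} ,
\]

an entangled state of object and instrument. In RWR-type interpretations this expression is read purely predictively, via Born's rule applied to the subsequent pointer measurement on QI, rather than as representing the interaction itself; it also anticipates the treatment of quantum measurement as an entanglement in Section 4.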
The situation under discussion is sometimes referred to as the arbitrariness of the "cut" or, because the term cut [Schnitt] was favored by Heisenberg and von Neumann, the "Heisenberg-von-Neumann cut." As Bohr noted, however, while "it is true that the place within each measuring procedure where this discrimination [between the object and the measuring instrument] is made is . . . largely a matter of convenience," it is true only largely, but not completely. This is because "in each experimental arrangement and measuring procedure we have only a free choice of this place within a region where the quantum-mechanical description of the process concerned is effectively equivalent with the classical description" [11] (p. 701). In other words, the ultimate constitution of the physical reality responsible for quantum phenomena, including quantum objects observed in measuring instruments is always on the other side, never the measurement side, of the cut. Neither are quantum strata of the instruments through which the latter interact with this reality. By contrast, the measurement side of the cut is constituted by the effects observed in measuring instruments as the outcomes of quantum experiments, as quantum phenomena, effects that can be represented, moreover, by means of classical physical concepts.
In the present view, while a measuring instrument, which is, in its observable part, a classical object, or, at the other pole, the ultimate constitution of the reality considered, is assumed to exist independently, a quantum object can only be rigorously ascribed existence and be defined by a measurement and its setup, including the cut. Accordingly, in this view, there is no independent behavior of quantum objects either: there is only the interaction between the ultimate (RWR-type) nature of reality and measuring instruments, which interaction allows one to define quantum objects. As discussed in the next section, this interaction actually takes place before the measurement itself, which pertains to the state of the quantum stratum of the instrument after this interaction, and no longer to the quantum object, which thus no longer exists either in the present view [2] (v. 2, p. 57). It is this state, rather than the state of the quantum object or of the independent reality with which the instrument had interacted but no longer does, that is then "irreversibly amplified" to the macroscopic, classical level of observable effects, such as a spot left on a silver screen. If one assumes the independent existence of quantum objects, then one can say that "[the quantum object] is already on its way from one instrument to another" [2] (v. 2, p. 57). What is a quantum object in a given experiment can be different in each case, including possibly something that, if considered by itself, could be viewed as classical, as in the case of Carbon 60 fullerene molecules, which were observed as both classical and quantum objects [32]. The quantum nature of any quantum object is still defined by its microscopic constitution.
The following question might, then, be asked. If a quantum object is only (an idealization) defined by an experiment or measurement, rather than as something that exists independently, could one still speak of the same quantum object, say, the same electron, in two successive measurements? For, if the idealization of quantum objects is only applicable at the time of measurement, then a prediction based on a given measurement and the new measurement based on this prediction could only concern a new quantum object, arising in the interaction between the ultimate constitution of the reality responsible for quantum phenomena and measuring instruments and not an object measured earlier in making a prediction. Accordingly, in the present view, rigorously, one deals with two different quantum objects, two different electrons, for example. To consider them as the same electron is, however, a permissible idealization in low-energy (QM) regime, an idealization ultimately statistical in nature, because a collision with the screen, after the electron passes the slit, is not guaranteed, although the probability that it will not occur is low. Nevertheless, one could still, within these limits, speak of the transition between two (physical) states of the same quantum object, with each state defined by the effects observed in measuring instruments. On the other hand, as discussed in Section 5, speaking of the same electron in any two successive measurements in high-energy (QFT) regimes is meaningless, which further justifies the present concept of a quantum object and the tripartite idealization scheme, adopted here.
The epistemological cost of the RWR view is not easily absorbed by most physicists and philosophers, and to some, beginning, famously, with Einstein, it is unacceptable. Both Schrödinger and Bell are among the prominent figures and symbols of this resistance, as, of course, is, even more so, Einstein. This attitude is not surprising because the features of quantum phenomena that are manifested in many famous experiments and that led to RWR views defy many assumptions concerning nature commonly considered as basic. These assumptions, arising from the neurological constitution of our brains, have served us for as long as human life has existed and are, within certain limits, unavoidable; although fully respected by classical physics, their scope was already challenged by relativity. QM has made this challenge much greater. As noted earlier, however, the same neurological constitution may also prevent us from conceiving of the ultimate (RWR-type) nature of physical reality responsible for quantum phenomena. Thus, it is humanly natural to assume that something happens between observations. The sense that something happened is one of the most essential elements of human thought. However, in the RWR view, the expression "something happened" is ultimately inapplicable to the ultimate constitution of the reality responsible for quantum phenomena. According to Heisenberg: "There is no description of what happens to the system between the initial observation and the next measurement. . . . The demand to 'describe what happens' in the quantum-theoretical process between two successive observations is a contradiction in adjecto, since the word 'describe' refers to the use of classical concepts, while these concepts cannot be applied in the space between the observations; they can only be applied at the points of observation" [28] (pp. 57, 145). The same would apply to the word "happen" or "system," or any word we use, whatever concept it may designate, including reality, although when "reality" refers to that of the RWR type, it is a word without a concept attached to it. As Heisenberg says: "But the problems of language are really serious. We wish to speak in some way about the structure of the atoms and not only about 'facts'-the latter being, for instance, the black spots on a photographic plate or the water droplets in a cloud chamber. However, we cannot speak about the atoms in ordinary language" [28] (pp. 178-179). Nor is it possible in terms of ordinary concepts, from which ordinary language is indissociable, or, in the RWR view, even in terms of physical concepts, assuming the latter can be entirely dissociated from ordinary concepts. This is a formidable problem even if one adopts the strong RWR view. The term "reality" in the phrase "reality without realism" does not pose a difficulty here, because this term has no concept associated with it, making it akin to a mathematical symbol. A greater difficulty is posed by expressions like "quantum objects interact with each other," used, for example, in considering the EPR experiment and entanglement, or "the interaction between the independent RWR-type reality and a measuring instrument," which refer to something between or before observations. One can handle this difficulty as follows in the RWR view. Although one can provisionally speak of a "relation" between two or more quantum objects, there is no term or concept, such as "interaction" or "relation," or "taking place," applicable to what "takes place."
Any rigorous statement can only concern observable events, with which, moreover, and only with which, the concept of a quantum object is associated. Accordingly, one cannot rigorously speak of an interaction between quantum objects between experiments, with the concept of a quantum object itself only applicable at the time of measurement in the present interpretation. One can only speak of two quantum objects associated with two measurements performed initially and then two quantum objects associated with two measurements performed subsequently. These measurements may be related in one way or another, for example, in terms of entanglement, and predicted accordingly, in the case of entanglement, by using the concept of an entangled state in the formalism. Mathematical concepts are, in Heisenberg's view, a possible exception, which I shall consider presently.
Before I do so, I would like to formulate the quantum indefinitiveness postulate, which is a consequence of the RWR view and reflects the situation just considered. It dictates the impossibility of making definitive statements of any kind, including mathematical ones, concerning the relationship between any two individual quantum phenomena or events, or indeed of definitively ascertaining the existence of any such relationship. It does allow for making definitive statements concerning individual phenomena or events, defined by measurements, and statements (statistical in nature) concerning the relationships between multiple events. It only concerns events that have already happened, rather than possible future events, concerning which one can make probabilistic statements, on Bayesian lines.
Precluding the possibility of any mathematical connections between individual events makes the postulate stronger than Heisenberg's claim, cited above. While prohibiting a common-language and, in effect, physical description of what "happens" between quantum experiments, this claim in principle allows for a mathematical representation of the ultimate constitution of physical reality, a possibility Heisenberg entertained in his later works. The words "happens" or even "physical" need, accordingly, no longer apply to this representation. As Heisenberg said on an earlier occasion, mathematics is "fortunately" free from the limitations of ordinary language and concepts: It is very difficult to modify our language so that it will be able to describe these atomic processes, for words can only describe things of which we can form mental pictures, and this ability, too, is a result of daily experience. Fortunately, mathematics is not subject to this limitation, and it has been possible to invent a mathematical scheme-the quantum theory [e.g., QM]-which seems entirely adequate for the treatment of atomic processes.
In physics, however, mathematics enables us to relate to things in nature which are beyond the reach of our thinking, including mathematical thinking. In quantum theory, it does so by enabling us to estimate the probabilities or statistics of quantum events, to which Heisenberg refers here by speaking of this scheme, QM, as "entirely adequate for the treatment of atomic processes." At the time, Heisenberg, adopting the RWR view, used this freedom to construct QM as a theory only designed to predict the probabilities or statistics of events observed in measuring instruments. It is in fact equally fortunate that nature allows us, in our interaction with it, to have such a scheme at all, for its freedom from the limitations of common language and concepts, or, given its abstract nature, even of physical concepts, does not guarantee that it will work in physics. By contrast, in his later writings, in part in view of QFT, Heisenberg assumed the possibility of a mathematical representation of the ultimate constitution of reality, while excluding physical concepts (at least in their customary sense found in classical physics or relativity) as applicable to this constitution [28] (pp. 145, 167-186). Heisenberg speaks of this representation in terms of symmetry groups and defines elementary particles accordingly, without considering them as particles in a physical sense. The concept of an elementary particle can be given a mathematical sense insofar as the corresponding representation of the group is irreducible [34,35]. Heisenberg even suggests that Kant's thing-in-itself is "finally, a mathematical structure": "The 'thing-in-itself' is for the atomic physicist, if he uses this concept at all, finally a mathematical structure; but this structure is-contrary to Kant-indirectly deduced from experience [rather than given to our thought a priori, as in Kant]" [28] (p. 83).
Bohr, by contrast, rejected the possibility of a mathematical representation of quantum objects and behavior, or the reality they idealize, along with a physical one, at least in his ultimate, strong RWR-type, interpretation. It is true that Bohr often speaks of this reality as being beyond our phenomenal intuition, also involving visualization, sometimes used, including by Bohr, to translate the German word for intuition, Anschaulichkeit (e.g., [2] (v. 1 p. 51, 98-100, 108; v. 2, p. 59)). It is clear, however, that, apart from the Como lecture, Bohr saw the ultimate nature of this reality as being beyond any representation or even conception, including a mathematical one, at least as things stand now.
Indeed, notwithstanding its dominant role in modern physics, amplified and even made unique in quantum theory, it is not clear why mathematics, which is the product of the same human thinking as ordinary language or physical concepts, should be able to represent how nature ultimately works at its very small (or very large) scales. It is not clear either that, in contrast to its capacity to do so at the scales handled by QFT, mathematics will enable us to predict phenomena shaped by the workings of nature at far smaller scales, such as the Planck scale, although the current consensus is that it should be able to do so. However, a consensus is not always a guarantee. Bohr, in speaking of "the special role . . . played by mathematics in development of [all] logical thinking" and the "invaluable help [offered] by its well-defined abstractions" in quantum theory, nevertheless added: "Still, . . . we should not consider pure mathematics as a separate branch of knowledge, but rather as a refinement of general language, supplementing it with appropriate tools to represent relations for which ordinary verbal expression is imprecise and cumbersome" [2] (v. 2, p. 68). This refinement can take us very far from ordinary language and concepts, a distance that is, again, manifested in the mathematics of QM and QFT, as Bohr was well aware. Nevertheless, mathematics is still human. As such, it may not ultimately be suited, because nothing human can be, to deal with the ultimate constitution of nature, either in terms of representing or conceiving of it or even only in predicting probabilistically, as QM or QFT does, the outcomes of the events considered. I am not saying that we cannot go further with our fundamental physical theories and their mathematics; quite the contrary, especially given the history of quantum theory, which is an affirmation of mathematical thinking in physics. This thinking is all the more remarkable because it connects us, by means of mathematics, to that which may be beyond the reach of thought.
Quantum Measurement as Entanglement
As Bohr came to realize in the wake of his exchange with EPR [11,36], a quantum measurement has a subtler nature, which is parallel to that of EPR-type measurements even in the standard case of quantum measurement. In any quantum experiment, the object under investigation and the measuring instrument become entangled as a result of their interaction with each other. Technically, entanglement is a feature of the formalism of QM which reflects the feature of quantum phenomena defined by this interaction. For simplicity, however, I shall refer by entanglement to this situation as a whole and speak of the entanglement between the object and the instrument.
Further qualifications are necessary given the concept of quantum objects in the interpretation adopted here, as an idealization only applicable at the time of measurement. When one makes a prediction based on a given measurement, it can only concern a new possible quantum object, defined through the interaction between the ultimate constitution of the reality responsible for quantum phenomena and the new measuring instrument used, and not an object that we measure in order to make the prediction. As explained in the preceding section, however, in QM or low-energy QFT, although not in high-energy QFT, assuming that our predictions concern the same quantum object as registered by the initial measurement is a permissible idealization. In any event, any rigorous statement concerning an entanglement can only refer to observable events, with which the idealization of a quantum object is, again, associated, while one cannot ultimately speak of the interaction between quantum objects between experiments. One can only speak of two quantum objects associated with the measurements performed initially and then two quantum objects associated with two measurements performed subsequently. Quantum entanglement can be defined in terms of such measurements, the outcomes of which QM properly predicts. The EPR experiment as such is beyond my scope. It suffices to say here that the entanglement between two quantum objects, S1 and S2, forming an EPR pair (S1, S2), allows one, by means of a measurement performed on S1, to make predictions, with probability one, concerning S2. In the present view, S2 is only defined once the corresponding measurement is performed, but not when the prediction concerning it is made, which makes it even more difficult, and rigorously impossible, to speak of any independent properties of S2, however predicted: first, there is no S2 in the first place until it is measured; there is only the independent, RWR-type, reality ultimately responsible for the existence of S2 when it is measured. Second, even then S2 is still an RWR-type entity, which means that no physical properties can be attributed to it as such. These properties could only be attributed to the instrument used. As predictions at a distance, these predictions may be called "quantum-nonlocal" [7]. They do not, however, entail any instantaneous transmission of physical influences between such events, "a spooky action at a distance" [spukhafte Fernwirkung], famously invoked by Einstein [37] (p. 155). As such, they may be called Einstein-local. Einstein-locality would prohibit such an action, as would relativity, although the concept of Einstein-locality, or the locality principle, which implies that physical systems can only be physically influenced by their immediate environment, is independent of relativity.
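The probability-one predictions invoked here can be illustrated by the standard Bohm version of the EPR pair (a textbook illustration, not a reconstruction of EPR's original argument, which used continuous variables). If the pair (S1, S2) is associated, on the basis of the initial measurements, with the spin-singlet expectation-catalog

|Ψ⟩ = (1/√2)(|↑⟩_1|↓⟩_2 − |↓⟩_1|↑⟩_2),

then a measurement of the spin of S1 along any chosen axis, yielding, say, "up," allows one to predict with probability one that a measurement of the spin of S2 along the same axis would yield "down," while a complementary choice of axis would lead to an alternative, incompatible prediction. In the present view, this state is only an expectation-catalog for possible future measurements, and S2 itself is only defined by the measurement that may or may not be performed on it.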
The interaction between the object and the measuring instrument leading to their entanglement is not a measurement in the sense of giving rise to an observable quantity: this interaction occurs before the measurement takes place or rather before the outcome of this interaction is registered as a quantum phenomenon. This interaction is part of a quantum measurement, as defined here, which establishes a quantum phenomenon manifested in a measuring instrument or the data found in it, to which a measurement in the sense of measuring a physical property can then apply. Once performed, the measurement, say, that of the momentum (manifested only as a property of the instrument), disentangles the object and the instrument, with the observed outcome "irreversibly amplified" to the level of the classically observed stratum of the apparatus [2] (v. 2, p. 73). This outcome is defined by the quantum stratum of the apparatus after this interaction, rather than by the object. As Schrödinger explains in his cat-paradox paper, it is this disentangling that enables one to predict the probability that the momentum measurement at a given future moment in time will be within a certain range [21] (pp. 162-163). Alternatively, if the initial measurement was that of the position, one could predict the probability that the position measurement at a given future moment in time will locate the trace of the interaction between the object and the instrument within a certain area.
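The form of such predictions may be recalled here (as a standard schematic, not tied to any particular reference cited in this article): given the ψ-function assigned on the basis of the measurement already performed, Born's rule gives the probability that a future position measurement will locate the trace of the interaction within an area Δ as

P(Δ) = ∫_Δ |ψ(x, t)|² dx,

and analogously for the momentum, with ψ replaced by its Fourier transform. In the present interpretation, this rule is an addition to the formalism, relating its symbols to the classically described outcomes observed in measuring instruments, rather than a description of any process connecting them.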
Quantum phenomena are never entangled. In the present view, again, not even quantum objects are entangled because they are idealizations only applicable at the time of measurement and thus always irreducibly associated with quantum phenomena. One could only say that two initial measurements associated with S 1 and S 2 lead to the situation in which possible future measurements can be handled by the mathematics of entangled states in the formalism of QM and expectation catalogs they enable. Accordingly, in the present view, only quantum states, ψ-functions, can be entangled, but there is something in the ultimate nature of reality responsible for quantum phenomena that requires this entanglement. If one assumes an independent existence of quantum objects between measurements, which is possible even in RWR-type interpretations, then one could say that they become entangled, although, if one adopts an RWR-type interpretation, the nature of the reality defining this entanglement is beyond representation or knowledge or even conception. ψ-functions never represent either the ultimate reality responsible for quantum phenomena or quantum phenomena and thus the outcomes of measurements. They do not represent these outcomes even if one adopts a realist view of ψ-functions as representing what happens between measurements because one needs Born's rule added to the formalism to predict the probabilities of these outcomes, described by classical physics. Now, according to Bohr's remarkable observation, in effect describing the quantum measurement as an entanglement: After a preliminary measurement of the momentum of the diaphragm, we are in principle offered the choice, when an electron or photon has passed through the slit, either to repeat the momentum measurement or to control the position of the diaphragm and, thus, to make predictions pertaining to alternative subsequent observations. It may also be added that it obviously makes no difference, as regards observable effects obtainable by a definitive experimental arrangement, whether our plans of considering or handling the instruments are fixed beforehand or whether we prefer to postpone the completion of our planning until a later moment when the particle is already on its way from one instrument to another. [2] (v. 2, p. 57).
If, then, a measurement is always made after the object has left the location of the measurement, what does this measurement measure? How does it create the corresponding phenomenon? It "measures" the quantum state of the quantum stratum of the instrument, which interacted with the object in the past (however recently, but always in the past!), by amplifying this state to the classical level of the observation and, thus, by registering the corresponding state of the measuring instrument. This amplification leads to the phenomena in which the outcome of a measurement is registered. What will be registered is either the change in the momentum of certain observed parts of the apparatus or the position of one or another trace of this interaction, say, a spot on a silver screen, given that both can never be registered in the same arrangement, as reflected by the uncertainty relations. Such concepts as momentum or position can only rigorously apply at this classical level in the RWR view, adopted by Bohr by the time of this statement (1949). This point appears to have been missed either in commentaries on Bohr or by treatments of quantum measurement elsewhere. Subtle as it is, Schrödinger's analysis of quantum measurement in his cat-paradox paper does not consider this point [21] (pp. 158-159). Neither does Ozawa in his analysis in [30], discussed earlier, an analysis that is expressly realist and implies that the measured quantity is attributed to the object at the time of measurement. Von Neumann's analysis comes close, but, while it is conceivable that von Neumann realized this point, he did not comment on it, and some of his statements appear to attribute the measured quantity to the object at the time of measurement [16] (pp. 355-356). On the other hand, this aspect of quantum measurement supports the point, made by von Neumann and others, that an instantly repeated measurement will give the same result as the initial measurement [16] (pp. 214-215), [21] (pp. 158-159).
The situation is consistent with the present interpretation, according to which a quantum object is an idealization only applicable at the time of measurement, insofar as one refers by measurement, as Bohr clearly does, to the overall process in question, leading to the emergence of an observed phenomenon (in Bohr's sense). This process consists in the interaction between the quantum object and the quantum stratum of the instrument and in the amplification of the resulting quantum state of this stratum (after this interaction and hence no longer in the presence of the quantum object) to the classical level of the observed stratum of the instrument, in which stratum the outcome is registered. Even though the quantum object is no longer there, or even no longer assumed to exist, or be a viable idealization, when the corresponding phenomenon is established, one might still see this quantum object as, by its interaction with the quantum stratum of this instrument, responsible for the effect observed. It follows that, in the present view, a quantum object, as an idealization applicable at the time of a measurement, refers to, idealizes, something that physically existed in the past of an observed quantum event. An observed effect could also be that of the measurement of the charge, mass, or spin of an electron. These quantities will be the same for all electrons and will define them as electrons, in the present view, at the time of measurement. Such quantities as the position, velocity, momentum, or energy registered in a measurement will be different. One might assume that, say, because of the exchange of momenta between the object and the instrument, the momentum of the object will correspond to the difference between two momentum measurements of the instrument before and after the interaction with the object. Physically, however, one never measures that momentum, given that the object has already left the location of the instrument and that one could have performed instead the position measurement after it did. In any event, one can ascertain, regardless of an interpretation: (a) that one can perform either of the two complementary measurements concerning the state of the quantum stratum of the instrument, with the outcome amplified to the classical level of the observable part of this instrument, and, correlatively, (b) the quantum-nonlocal nature of quantum predictions, because by changing one's decision as to which measurement to perform, one can make two alternative predictions concerning distant future events, to which one is not physically connected at the time of either measurement.
Thus, using the measurement of the state of the apparatus, one can predict, at a distance, by means of a ψ-function (cum Born's rule), a possible outcome of a future measurement of either variable, without "in any way disturbing the system," just as in any EPR-type experiment [36] (p. 138). It is true that there was an interaction between the object and the instrument before that measurement. But this is also the case for the two objects of the EPR pair, which have been in an interaction, entangling them. In a standard measurement, the probability of such a prediction will not be equal to one, as it would be in the case of the EPR-type experiments, which possibility, however, requires qualifications, discussed below. Besides, as Bohr realized, with some simple additional arrangements one can, at least in principle, reproduce the EPR case in considering the standard quantum measurement [2] (v. 2, p. 60), [15] (pp. 101-103).
It might seem that, in either the standard or the EPR case, because either of the two complementary quantities could be predicted at a distance for one quantum object by an alternative measurement on another quantum object, the first object at a distance can be assigned both quantities, as, in EPR's language, "elements of reality," "without in any way disturbing" it [36] (p. 138). This was, essentially, EPR's argument, although the possibility of predicting either quantity with probability one in the EPR experiment strengthened their case. EPR also argued that the only alternative, in the case of the EPR experiments, would be that QM is Einstein-nonlocal, because a measurement on one object would alternatively define the state of reality at a distance [36] (p. 141). As, however, should be apparent from the comments just given and as discussed in detail by the present author elsewhere [7] (pp. 1847-1855), this is not necessarily the case, because, in any actual experiment, only one of these quantities could be predicted. There is no experiment that would allow one to physically realize the prediction of both quantities for the same object. At the same time, there is no need to assume that our predictions are Einstein-nonlocal by virtue of determining the quantity in question at a distance, because, even if one can predict this quantity with probability one, one could still measure the complementary quantity and thus establish, in EPR's language, a different element of reality from the one predicted [36] (p. 138). A measurement performed on one quantum object cannot be claimed to define an element of reality pertaining to another, spatially separated, quantum object by means of a prediction, even with probability one, the situation discussed in detail under the heading of quantum causality in [7]. Only a measurement on this second object can do so. Accordingly, there is no rigorous basis for assuming that the latter prediction established the reality it predicts. In classical physics, these limitations do not apply, because we can always measure and predict, and simultaneously verify our predictions for, conjugate variables, such as those of position and momentum. The conjugate variables of classical physics are not the same as the complementary variables of quantum physics.
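The difference may be recalled in its standard quantitative form (a textbook statement, added here only for reference): while classical conjugate variables q and p can, in principle, be measured, predicted, and verified together with unlimited precision, complementary quantum variables are subject to the uncertainty relations,

Δq Δp ≥ ħ/2,

understood here, in accordance with the present interpretation, as constraints on what can be defined by, and predicted on the basis of, any given experimental arrangement, rather than as a disturbance of preexisting properties of quantum objects.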
The outline of the EPR experiment just given is only a sketch that requires a proper argument, given by this author in [7] (pp. 1847-1855). The main point at the moment is that, even in the standard quantum measurement, one must always consider not only the object under investigation, as in classical physics (where one can disregard the role of measuring instruments), but a composite entangled quantum system, consisting of the object and the quantum stratum of the instrument [21] (p. 167), [20,38]. In each EPR-type experiment, at its last stage, when one of the two possible EPR predictions is made, one deals with two such composite systems, each consisting of an object and an instrument, the first associated with an actual (already performed) measurement and the other with a possible future measurement, concerning which one makes a prediction, and thus with four systems in total.
"Perhaps the Biggest Change of All the Big Changes in Physics": Quantum Measurement, Quantum Objects, and Elementary Particles in QFT
This section extends the preceding analysis to high-energy quantum regimes and QFT, which also enables this article to address the problems of the quantum field and of elementary particles. Both have been and remain problems to which nothing more than fragments of possible solutions could be offered, as testified to by the persistent title, "What is an elementary particle?", used by, among others, Heisenberg [39] (pp. 71-88) and S. Weinberg [40]. Because I shall primarily deal with elementary particles, by particles I shall, unless qualified, refer to elementary particles. Of course, a "particle" is a problem, too, a problem that underlies that of an elementary particle and has been around for much longer.
It is not my aim here to do more than consider this problem as a problem. In contrast, however, to most approaches to this problem, which are realist in nature, this article offers a nonrealist one, based on the strong RWR view. As a result, my engagement with the literature on the subject will be more limited than one might prefer. However, the sources cited here, such as [41][42][43][44], contain extensive bibliographies of both physical and philosophical literature. Among the standard technical textbooks are [45][46][47]. Most works on QFT in physics and the philosophy of physics, too, adopt and primarily address realist views. An extensive philosophical treatment of the concept of a particle is offered by [48]. The question of virtual particles is beyond my scope here. It was considered from the present perspective in a previous article by this author [9]. A compelling realist perspective on the subject and an effective critique of arguments against the existence of virtual particles were offered in [41,42].
The strong RWR view and the corresponding interpretations of QFT imply that the question "What is an elementary particle?" has ultimately no answer, at least as things stand now, insofar as this view allows for no specifiable concept of elementary particles or quantum objects in general, other than in terms of their effects on measuring instruments, effects that are rigorously specifiable by means of classical physics. It need not follow that the ultimate nature of the (RWR-type) reality responsible for quantum phenomena is constant or uniform. While each time unknowable or, in the strong RWR view, unthinkable, this reality is assumed to be each time different in its ultimate character as well as in its manifested effects, making each quantum phenomenon or event, as such an effect, unique in turn, as well as discrete in relation to any other quantum phenomena. Indeed, in contrast to low-energy quantum regimes (those of QM or low-energy QFT), in high-energy quantum regimes an investigation of a particular type of elementary particle unavoidably involves not only other particles of the same type, say, electrons, but also other types of particles, such as, in QED, positrons, photons, or electron-positron pairs, that is, the corresponding effects, even in the same experiment. By the same token, it becomes meaningless to speak of the same electron detected in any two successive measurements. While, in the present view, assuming the identity of two successively detected quantum objects is only a statistically permissible idealization even in low-energy quantum regimes, this assumption is no longer possible in high-energy quantum regimes. This situation further justifies and gives additional significance to the tripartite view, or idealization, of physical reality adopted in this article: (1) the ultimate constitution of physical reality, as RWR-type reality, responsible for quantum phenomena; (2) quantum objects, including elementary particles, defined by measurement as RWR-type entities; and (3) quantum phenomena, also defined by measurement but represented classically. The identity of particles of the same type is strictly maintained, as it is in QM or low-energy QFT. While applicable and helpful even in QM or in low-energy QFT, the concept of a quantum field introduced here is designed to handle the new types of effects observed in high-energy quantum regimes. Rather than as a quantum object, as is more common, a quantum field is defined here as a form of the independent RWR-type reality responsible for quantum phenomena and for quantum objects, such as elementary particles, at the time of measurement.
Low-energy quantum regimes permit and most interpretations of quantum phenomena and QM (or QFT in low-energy regimes) adopt a conception of elementary particles, as quantum objects. The same conception is also applicable in, although not sufficient for, high-energy regimes, beginning with the circumstance that elementary particles of the same type, such as electrons or photons cannot be distinguished from each other, while these types themselves are rigorously distinguishable. Two electrons could be distinguished by changeable properties associated with them, such as their positions in space or time, momentums, energy, or the directions of spins but, in the RWR view, only as properties manifested in measuring instruments and only at the time of measurement. Such properties are subject to the uncertainty relations and complementarity. It is possible to locate, and in the present view, establish, by measurement two different electrons, as quantum objects, in separate regions in space. It is not possible to distinguish them from each other on the basis of their mass, charge, or spin. These quantities are not subject to the uncertainty relations or complementarity. As H. Weyl observed long ago, "the possibility that one of the identical twins Mike and Ike is in the quantum state E1 and the other in the quantum state E2 does not include two differentiable cases which are permuted on permuting Mike and Ike; it is impossible for either of these individuals to retain his identity so that one of them will always be able to say 'I'm Mike' and the other 'I'm Ike.' Even in principle one cannot demand an alibi of an electron!" [49] (p. 241). In RWR-type interpretations, properties defining electrons or other elementary particles within each type could only be associated with them (even if one assumes their independent existence between measurements, as opposed to only assuming them to exist at the time of measurement) by means of the corresponding effects observed in measuring instruments, rather than properties attributable to these objects themselves. A rare discussion of the "properties" of the electron along similar lines, even if without adopting the RWR view, is offered in [50]. It is possible, however, to maintain both the indistinguishability of particles of the same type and the strict distinguishability of the types themselves in RWR-type interpretations because both features can be consistently defined by the corresponding sets of effects manifested in measuring instruments.
This view is, thus, in accord with the assumption, defining RWR-type interpretations, that the character of elementary particles and their behavior, or of the reality thus idealized, is beyond representation or even conception, just as is the ultimate, RWR-type, character of the reality itself responsible for quantum phenomena. In the present interpretation, this reality is, again, assumed to exist independently, while elementary particles, as quantum objects, only idealize something that is assumed to exist in quantum measurements. An elementary particle of a given type, say, an electron, is specified by a discrete set of possible effects (the same for all electrons), observable in measuring instruments in the experiments associated with particles of this type. An elementary particle can thus only be an idealization that is part of a composite system, consisting of this particle and a measuring instrument, a system which has a registered effect upon the observable, classically describable, part of this instrument. The elementary character of a particle is defined by the fact that there is no experiment that allows one to associate the corresponding effects on measuring instruments with more elementary individual quantum objects. Once such an experiment becomes conceivable or is performed, the status of a quantum object as an elementary particle could be challenged or disproven, as happened when hadrons, such as protons, neutrons, and mesons, were discovered to be composed of quarks and gluons. This composite nature will then manifest itself in a new set of effects observed in the corresponding experiments.
The present concept of an elementary particle, defined, as an idealization, in terms of such effects, does not imply that "elementary particles" are fundamental elementary constituents, "building blocks," of nature. This assumption is impossible in RWR-type interpretations, as is any assumption concerning this constitution. Nor is applying the concepts of "elementary" or "constituents" ultimately possible either. Nature has no elementary particles, which are human idealizations, albeit made possible by our interaction with nature, assumed, in its ultimate constitution, to be a form of RWR; nor, by the same token, is it possible to apply to elementary particles any concept of a particle, any more than any other concept, such as wave or field, although, as will be explained presently, the concept of a quantum field could be defined otherwise, as a mode of RWR-type independent reality (rather than a quantum object, such as an elementary particle) beyond the reach of all specifiable concepts.
While most QFT conceptions of an elementary particle are transferred from QM to high-energy quantum regimes, they are insufficient in these regimes and need to be adjusted or supplemented by additional concepts, most commonly that of a quantum field. The present approach follows this pattern by defining the concept of a quantum field in RWR terms. First, however, I shall explain why the concept of an elementary particle operative in QM is insufficient in high-energy regimes. This insufficiency arises in view of the following situation, not found in QM (or low-energy QFT, even though the latter introduces new features into quantum phenomena), to which the mathematical architecture of QFT responds. (Low-energy QFT is essential for explaining some quantum phenomena, such as the non-zero energy of the vacuum, not explicable by QM.) In fact, with Dirac's equation, it was this architecture, discovered first, that led to the discovery of this situation.
Speaking for the moment in classical-like terms, suppose that one arranges for an emission of an electron, at a given high energy, from a source and then performs a measurement at a certain distance from that source, say, by placing a photographic plate there. The probability or, if we repeat the experiment with the same initial conditions (defined by the state of the emitting device), the statistics of the outcomes would be properly predicted by QED, but what will be the outcome? The answer is not what our classical or even quantum-mechanical intuition would expect. This answer was a revolutionary discovery made by Dirac through his equation.
Let us consider first what happens if one deals with a classical object analogous to an electron and then if one considers a nonrelativistic QM electron in the same type of arrangement. I speak of a classical object analogous to an electron because the "game of small marbles" for electrons was finished well before QM. An electron, say, a Lorentz electron, of a small finite radius, would be torn apart by the force of its negative electricity. This led to treating the electron mathematically as a dimensionless point, without giving it any physical structure, while still assigning it measurable physical quantities, permanent (such as mass, charge, or spin) or variables (such as position, time, momentum, or energy). However, a point electron in quantum theory is, as an idealization, different from a point-like idealization in classical mechanics, where the body thus idealized could still be assumed to have spatial dimensions. Thus, one can take as an example of the classical situation a small ball that hits a metal plate. The place of the collision could be predicted (ideally) exactly by classical mechanics, and we can repeat the experiment with the same outcome on an identical or even the same object. Regardless of where we place the plate, we always find the same object. (It is assumed that the situation is shielded from outside interferences, which could, for example, deflect the ball before it reaches the plate).
If one considers an electron in the QM regime, it is, first of all, impossible, because of the uncertainty relations, to predict the place of collision exactly or with the degree (ideally unlimited) of approximation possible in classical physics. An (ideally) exact prediction of the position (or other variables) of a quantum object is possible, specifically in EPR-type experiments by means of a measurement performed on the other particle of the (EPR) pair considered. As discussed earlier, however, even a prediction with probability one is still a prediction and not a guarantee of the reality of what is predicted, because one can always perform a complementary measurement, thus disabling any possibility of verifying such a prediction and hence assigning the corresponding quantity. In addition, a single emitted electron could, in principle, be found anywhere in a given area or not found at all. Nor can an emission of an electron be guaranteed. There is a small but nonzero probability that such a collision will not be observed or that the observed trace is not that of the emitted electron. Finally, assuming that one observes the same electron in two successive measurements is still an idealization in the present interpretation, given that it defines any quantum object as an idealization applicable only at the time of measurement. This idealization is, however, permissible in low-energy quantum regimes.
Once one moves to high-energy quantum regimes, beginning with those governed by QED, the situation is, again, different, even radically different. In a subsequent measurement, one can find in the corresponding region not only an electron or nothing, as in low-energy (QM) regimes, but also other particles: a positron, a photon, or an electron-positron pair. That is, in RWR-type interpretations, one can register the events or phenomena (observed in measuring instruments) that we associate with such entities. QED predicts which among such events can occur and with what probability or statistics, and, just as QM, QED, in RWR-type interpretations, does so without representing or, in the strong RWR view, allowing one to conceive of how these events come about. The corresponding Hilbert-space formalism becomes more complex, in the case of Dirac's equation making the wave function a four-component Hilbert-space vector, as opposed to a one-component or, if one considers spin, two-component Hilbert-space vector, as in quantum mechanics, keeping in mind that each component is infinite-dimensional. These four components represent the fact that Dirac's equation is an equation for both the (free) electron and the (free) positron, including their spins, and they can transform into each other or other particles, such as photons, in the corresponding high-energy processes, transformations that, in the RWR view, are only manifested in measuring instruments. By the same token, one can no longer speak of the same electron, positron, and so forth as detected in two successive measurements, as one can in low-energy quantum regimes. In the currently standard versions of QFT, the wave functions are commonly replaced by operators (the procedure sometimes known, for historical reasons, as "second quantization"): to every point x a Hilbert-space operator acting on this space is associated, rather than a state-vector as in QM. As M. Kuhlmann notes: "both in QM and QFT states and observables [operators] are equally important. However, to some extent their roles are switched. While states in QM can have a concrete spatio-temporal meaning in terms of probabilities for position measurements, in QFT states are abstract entities and it is the quantum field operators that seem to allow for a spatio-temporal interpretation" [43]. As he (rightly) qualifies, however, "since 'quantum fields' are operator valued, it is not clear in which sense QFT should be describing physical fields, i.e., as ascribing physical properties to points in space. In order to get determinate physical properties, or even just probabilities, one needs a quantum state. However, since quantum states as such are not spatio-temporally defined, it is questionable whether field values calculated with their help can still be viewed as local properties" [43]. While the present concept of a quantum field is physical, all calculated values are only probabilities and all local properties are only those of the observable parts of measuring instruments, just as they are in QM.
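For reference, Dirac's equation for the free electron may be written in its standard modern form (in natural units, ħ = c = 1; a textbook formulation added here only for illustration):

(iγ^μ ∂_μ − m)ψ(x) = 0,

where ψ(x) is the four-component spinor wave function mentioned above and the γ^μ are the 4 × 4 Dirac matrices. In RWR-type interpretations, neither ψ nor the γ^μ represents anything in the ultimate constitution of the reality considered; the equation serves, with Born's rule, to predict the probabilities or statistics of the events, associated with electrons, positrons, or photons, observed in measuring instruments.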
Once one moves to still higher energies governed by QFT, the panoply of possible outcomes becomes much greater. Correspondingly, the Hilbert spaces and operator algebras involved have still more complex structures, linked to the appropriate Lie groups and their representations, defining (when these representations are irreducible) different elementary particles. In the case of QED, we only have electrons, positrons, and photons, single or paired; in QFT, depending on how high the energy is, one can literally find any known and possibly as yet unknown elementary particle or combination. It is as if, instead of identifiable moving objects and motions of the type studied in classical physics, we encounter a continuous emergence and disappearance, creation and annihilation, of particles, further complicated by the role of virtual particles, or, again, something in nature which compels some (not everyone sees it as necessary) to introduce the latter concept. This is still a classical-like and thus metaphoric picture, which is ultimately inapplicable. But we have no other pictures: if one wants to convey anything that "happens" between experiments by means of a picture or phenomenal concepts, rather than only use mathematics to predict the outcomes of experiments, classical-like pictures are our only recourse. Although, like anything quantum, these transformations can only be handled probabilistically or statistically, they also have a complex ordering to them. In particular, in addition to various correlational patterns akin to those found in low-energy (QM) regimes, they obey various symmetry principles, especially local symmetries. The latter have been central to QFT, not least in leading to the discoveries of new particles, such as quarks and gluons inside the nucleus, and then various types of them, eventually establishing the standard model of particle physics. Thus, QED is an abelian gauge theory with the symmetry group U(1) and has one gauge field, with the photon being the gauge boson. The standard model is a non-abelian gauge theory governed by the product of three symmetry groups, U(1)⊗SU(2)⊗SU(3), with broken symmetries, and it has twelve gauge bosons: the photon, three weak bosons, and eight gluons.
The concepts (there have been quite a few) of a relativistic quantum field respond to the situation here outlined. (Most concepts of nonrelativistic quantum fields can be seen as limit cases of relativistic ones and will be put aside here, and, unless qualified, quantum fields will henceforth refer to relativistic quantum fields.) These concepts were initially developed as forms of quantization of the electromagnetic field, again, necessary even in low-energy quantum regimes. The character and even the very possibility of such concepts, especially as physical concepts, is a subject of seemingly interminable debates, just as, and often correlatively, is the concept of an elementary particle. While there is a strong general sense concerning the mathematics involved (although the range of specific mathematical tools offers one several choices) and while there is a large consensus that a viable physical concept of a quantum field is necessary, most of the proposals concerning such concepts are realist. This assessment is confirmed by Kuhlmann's representative review [43]. By contrast, I suggest a nonrealist physical concept of a quantum field defined by the strong RWR view, which is consistent with the mathematics of QFT and most currently available mathematical concepts of a quantum field, such as those based on the Lagrangian formulation and canonical commutation or anticommutation relations for bosonic and fermionic fields, respectively, analogous to those of QM; say, for a bosonic field φ with conjugate field momentum π:

[φ(x, t), π(y, t)] = iδ³(x − y), [φ(x, t), φ(y, t)] = [π(x, t), π(y, t)] = 0.

(Some of these mathematical concepts of quantum fields are associated with physical ones.)
As understood here, a quantum field is not a quantum object but a particular mode of the RWR-type reality, which, as any such mode in the present view, is assumed to exist independently and is manifested only by its effects on measuring instruments, via quantum objects, such as elementary particles. A quantum field is independent of measurement, while quantum objects are always defined by measurements. These effects are more multiple than those observed, via the corresponding quantum objects, in low-energy regimes. This multiplicity is defined by the fact that these effects correspond to elementary particles, to which a quantum field gives rise and which can be of various types even in a single experiment, consisting of one or more successive measurements, with the first one performed on a given particle. The initial quantum object could also be a set of elementary particles of the same or different types, with a different such set, possibly consisting of entirely different types of particles, appearing in each new measurement. As a mode of the RWR-type reality assumed to exist independently, a quantum field is responsible for the transforming effects associated with the elementary particles created in the process at the time of measurement. These effects may be either invariant (as concerns a given particle type), such as those associated with mass, charge, or spin, or variable, such as those associated with position, momentum, or energy. As concerns this association, always via elementary particles, as quantum objects, there is no difference from low-energy regimes; the difference is in what kind of effects are observed. These effects have a kind of multiplicity in high-energy regimes not found in the case of effects observed in low-energy regimes. The multiplicities of types of elementary particles become progressively greater in higher-energy regimes. Hence, the concept of a quantum field just defined brings together the irreducibly unthinkable, discovered by QM, and the irreducibly multiple, discovered by QFT. It may be useful to comment, by way of a contrast, on the concept of a classical field.
A classical field is represented, in a realist way, by a continuous (technically, differentiable) manifold with a set of scalar (a scalar field), vector (a vector field), or tensor (a tensor field) variables associated with each point and with rules for transforming these variables, by means of differentiable functions, from point to point of this manifold. One can also define it as a fiber bundle over a manifold with a connection. The concept of a fiber bundle is used in QFT, where it is associated with local gauge symmetry, in the RWR view, without representing, any more than any other part of the mathematical formalism of QFT, any quantum physical process, but only being part of the probabilistically or statistically predictive machinery of QFT. In classical physics or relativity, the variables in question map measurable quantities associated with the field, thus providing a field ontology, which also allows for (ideally) exact predictions concerning future events associated with this field via measurable field quantities. In quantum physics, in RWR-type interpretations, this type of ontology is impossible. One deals with a discrete manifold of phenomena and sets of quantities associated with each phenomenon, and hence with a discrete manifold of quantities, without assuming any continuous process that would connect them. As does QM, QFT (in any regime) relates, in terms of probabilistic or statistical predictions, the continuous, technically differential, mathematics to the discontinuous configurations of the observed phenomena and data.
As discussed in Section 2, in considering two successive measurements, which register different outcomes, it is humanly natural to assume that something "happened" or that there was a "change" in the physical reality responsible for these events between them. However, in any interpretation, one cannot give this happening a determined location in space and time, and in RWR-type interpretations, there is nothing we can say or even think about the character of this change, including as a "happening" or "change," apart from its effects, the structures of which are more multiple in high-energy regimes than in low-energy regimes. Nor can one assume (again, in any interpretation), as one can, at least ideally, in low-energy regimes, that we observe the same quantum objects in two successive measurements. For example, it is no longer possible to think of a single electron (or for that matter a single proton) in the hydrogen atom as the same electron detected (and, in RWR-type interpretations, defined) by different measurements. Each measurement is assumed to detect a different electron; if one makes a measurement between two measurements each of which detects an electron, this measurement can detect a positron or a photon, or an electron-positron pair. One could also speak of quantum fields in the sense defined here in QM (or in low-energy QFT), but in a reduced form that preserves particle identities as representatives of a given particle type-each photon always remains the "same" photon (or disappears), each electron the "same" electron (or disappears), and so forth-in the present view, again, strictly as a statistically permissible idealization, because each is only defined by a measurement. In high-energy regimes, particles transform into one another, within and beyond a given particle type-also within, because an electron could reappear in this process, that is, be detected by a measurement, after a positron appeared in it following the previous appearance of an electron.
In this understanding, speaking, as is common, of the quantum field of a particle, say, an electron, entails new complexities. Mathematically, the formalism of, say, QED allows one to make predictions concerning the electron, which invites one to speak of the electron as a quantum field. Physically, however, this only means that the RWR-type reality defining the quantum field considered in a given experiment has strata that enable the corresponding measurements detecting electrons. It is not possible to separate these strata from those similarly associated with the possibility of detecting a positron or a photon in the same experiment (in the sense of being defined by the same initial measurement) because neither of these strata as such is detected in measurement. It is possible, however, to specify quantum fields as associated with the fundamental forces considered in QFT and the corresponding types of particles: field bosons of the electromagnetic (photons), weak (W+, W−, and Z), or strong (gluons) interactions.
These considerations reflect the fact that the present concept of a quantum field is a physical concept, which defines a quantum field as part of the independent reality ultimately responsible for quantum phenomena and not as a quantum object, which is always dependent on an experiment. This concept can, however, be associated with most currently standard versions of the mathematical concept of a quantum field, defined in terms of a predictive Hilbert space formalism with a particular vector and operator structure, enabling the proper probabilistic predictions of the QFT phenomena concerned. The operators enabling one to predict the probabilities for the "annihilation" of some particles and "creation" of others, that is, for the corresponding measurable quantities observed in measuring instruments, are called annihilation and creation operators, or also lowering and raising operators, commonly designated as â and â†, each lowering or raising the number of particles in a given state by one. In RWR-type interpretations, these operators do not represent any physical reality: they only enable one to calculate the probabilities or statistics of the outcomes of experiments, just as the wave functions do in quantum mechanics. Both, to return to Schrödinger's language, provide expectation-catalogs for the outcomes of possible experiments. Those provided by QFT give probabilities or statistics of the appearance of quantities associated with other types of particles even in experiments initially defined by a particle of a given type. In QFT regimes, it is, again, meaningless to ever speak of a single electron, even in the hydrogen atom.
The description just outlined provides an RWR-type interpretation of the view, held even in realist interpretations, that the application of creation operators to quantum states (in the mathematical sense of vectors in a Hilbert space) does not represent a physical process of particle creation out of nothing, which would entail a violation of conservation laws. There are various ways of handling this situation in realist interpretations, for example, in terms of collisions (e.g., [51] (p. 73)). In the present view, this procedure and thus creation and annihilation operators are just part of the mathematical machinery that allows us to predict the transition probability between two quantum events, which would be associated with different particles, in the present interpretation, defined strictly by the measurements defining these events. Any measurement performed before the second measurement could reveal a different particle or set of particles than that found either in the first or in the second measurement.
The concept of a quantum field defined here does not introduce any new mathematics. It is not designed to do so. This concept is part of the (strong RWR-type) interpretation of quantum phenomena and quantum theory, and thus of how mathematics works in quantum theory, especially in the case of QFT as distinct from QM, given the concept of quantum measurement and the tripartite scheme of physical reality in this interpretation, now redefined by configuring the ultimate constitution of physical reality responsible for quantum phenomena as that of quantum fields. As such, this interpretation confirms the fundamentally mathematical nature of quantum theory, more radically so than that of classical physics or relativity, both of which are equally mathematical-experimental sciences, with mathematics coming first in this conjunction in all three types of theories. This is because in quantum theory the mathematics functions as an abstract formalism divorced from any physical representation and yet enabling us to relate to reality, including in its unrepresentable or inconceivable aspects, in terms of probabilities of the outcomes of quantum experiments. Indeed, as noted, while especially important to this understanding in high-energy quantum regimes and QFT, the concept of a quantum field proposed here is applicable in all quantum regimes, to the point of compelling one to conclude that, in quantum theory, the reality without realism is the reality of quantum fields.
Beginning with Dirac's equation, QED and then QFT became a far-reaching extension of the revolution ushered in by Bohr's 1913 atomic theory and Heisenberg's discovery of QM, theories that also initiated the RWR view of quantum phenomena. This was acutely realized by Bohr and Heisenberg in assessing Dirac's discovery of his equation and antimatter. Bohr spoke of "Dirac's ingenious quantum theory of the electron" as "a most striking illustration of the power and fertility of the general quantum-mechanical way of description," which was, at this point, defined by Bohr's ultimate, RWR-type, interpretation [2] (v. 2, p. 63). This "most striking illustration," however, never convinced Einstein (whose debate with Bohr occasioned this statement) of the power and fertility of this method, nor many others who resisted it, such as Schrödinger or Bell. Nevertheless, QED has not only amply demonstrated this power and fertility but is also the best confirmed physical theory ever. Heisenberg, who made major contributions to QFT, including that of nuclear forces, saw Dirac's discovery as even more significant, even "as perhaps the biggest change of all the big changes in physics of our [twentieth] century. It was a discovery of utmost importance because it changed our whole picture of matter. . . . It was one of the most spectacular consequences of Dirac's discovery that the old concept of the elementary particle collapsed completely" [39] (pp. 31-33). However, it also brought with it new ways of thinking about elementary particles and quantum fields, thus, at least in the present view, further confirming "the power and fertility of the general quantum-mechanical way of description," a way that, as I have argued here, following Bohr, does not describe the ultimate constitution of nature responsible for quantum phenomena. It only describes how we interact with nature by means of our experimental technology and the mathematics of quantum theory, QM and QFT, enabling us to predict probabilistically the outcomes of quantum experiments. The question, then, becomes whether nature will allow us to do more or, as there is no other way for us to do physics, whether we will be able to do more by means of our interactions with nature, which has no particles and makes no predictions. Einstein's, or Bell's, view was that we should. Bohr's, or at least the present (perhaps more cautious), view is that we might not, which, however, is not the same as claiming that we never will. | 27,403.6 | 2021-09-01T00:00:00.000 | [
"Physics",
"Philosophy"
] |
Human height: a model common complex trait
Abstract Context Like other complex phenotypes, human height reflects a combination of environmental and genetic factors, but is notable for being exceptionally easy to measure. Height has therefore been commonly used to make observations later generalised to other phenotypes, though the appropriateness of such generalisations is not always considered. Objectives We aimed to assess height's suitability as a model for other complex phenotypes and review recent advances in height genetics with regard to their implications for complex phenotypes more broadly. Methods We conducted a comprehensive literature search in PubMed and Google Scholar for articles relevant to the genetics of height and its comparability to other phenotypes. Results Height is broadly similar to other phenotypes apart from its high heritability and ease of measurement. Recent genome-wide association studies (GWAS) have identified over 12,000 independent signals associated with height and saturated the common single nucleotide polymorphism based heritability of height within a subset of the genome in individuals similar to European reference populations. Conclusions Given the similarity of height to other complex traits, the saturation of GWAS's ability to discover additional height-associated variants signals potential limitations of the omnigenic model of complex-phenotype inheritance, indicates the likely future power of polygenic scores and risk scores, and highlights the increasing need for large-scale variant-to-gene mapping efforts.
Introduction
Ever since antiquity, scholars have studied human height and the factors that influence it (Tanner 1981). The subject has received extensive attention for several main reasons. Firstly, there have been efforts to predict and/or influence adult height. For example, many of the first growth studies were started in the eighteenth century in order to ensure a suitable supply of tall soldiers (Tanner 1981). In modern times, there is keen interest in early identification and subsequent hormone intervention for those trending towards acutely short stature (Dauber et al. 2020; Lu T et al. 2021). A second motivation is the serious medical conditions associated with height, most notably cancer and cardiovascular disease. The positive relationship with cancer incidence has been well established and is believed to be a function of the greater number of cell divisions in taller individuals (Nunney 2018; Choi et al. 2019). In contrast, the inverse relationship with cardiovascular disease is less well understood, and multiple hypotheses have been proposed in this context (Paajanen et al. 2010; Nüesch et al. 2016; Pes et al. 2018; Moon and Hwang 2019; Marouli et al. 2019; Yano 2022). As such, a better understanding of the factors influencing height, in particular genetic contributors, should elucidate the biology of these associated diseases and provide new avenues for therapeutic intervention.
A final key reason for the abundance of research into height is that the trait is relatively easy and inexpensive to observe in comparison to others. It can therefore serve as a model to test new methods and gain insights relevant to other phenotypes with lower incidence rates and higher investigational costs. Taking this approach, our review will focus on summarising what is known about the genetic contributions to height, with particular attention to the ways in which our current understanding of height may reflect and inform the biology of complex phenotypes overall.
Establishing height as a model common complex phenotype
The use of human height as a model common trait has a long history in genetics, dating back to the nineteenth century. The first major work in this vein was Galton's famous demonstration that average parental height predicts that of offspring (Galton 1886). A few decades later, in another seminal work, R.A. Fisher proposed a model that used height as its illustrative phenotype to show how polygenic inheritance could explain the observed variation in a continuous trait (Fisher 1919). However, when both these papers were written, neither author knew whether extrapolating results from height to other traits was valid. Before we consider this possibility of generalising results from height, we will first consider how height's characteristics compare to those of other common complex phenotypes.
Apart from the ease of measurement, height is also a very useful model common complex trait due to its above-average heritability relative to other complex phenotypes (Polderman et al. 2015; Watanabe et al. 2019). Twin and family-based analyses estimate that between 30% and 90% of human height variation is determined by genetic factors, with most estimates towards the upper end of that range (Preece 1996; Silventoinen et al. 2000; Silventoinen et al. 2001; Macgregor et al. 2006; Perola et al. 2007). This proportion is lower at birth and rises during childhood, reaching a peak post-puberty (Burk et al. 2006; Mook-Kanamori et al. 2012; van Soelen et al. 2013; Jelenkovic et al. 2016; Silventoinen et al. 2019).
Since it has been well documented that environmental factors, such as childhood net nutrition and socioeconomic status (Tanner 1981; Silventoinen 2003; Deaton 2007; Perkins et al. 2016), have a strong effect on adult height, high heritability likely means that the effect sizes and/or the number of genetic variants that impact the phenotype must be higher for height than for the vast majority of other traits. Both possibilities would make it relatively easy to identify genetic factors influencing the trait.
Evaluating the likelihood of each of these possibilities is a question of polygenicity, that is, the degree to which height's genetic modulators are scattered throughout the genome in many variants of low effect or concentrated in a smaller number of high-effect loci. Several approaches have been used to quantify the polygenicity of height relative to other complex phenotypes, and the results have been somewhat mixed. For a given sample size, the number of loci and variants associated with height is comparable to those of other complex traits (Chakravarti and Turner 2016). Similarly, an analysis of 588 complex phenotypes estimated that the proportion of independent and causal single nucleotide polymorphisms (SNPs) for height is near the 40th percentile (Watanabe et al. 2019). In contrast, a more targeted analysis of 5 of the 588 complex traits found that height had nearly double the next highest proportion of putative causal SNPs, even though the other four traits had been scored as more polygenic in the prior analysis (Johnson et al. 2021). From the available evidence, it is difficult to precisely quantify the polygenicity of height, but it is generally presumed that it is broadly similar to most other complex traits.
A final element to consider when evaluating the suitability of height as a model for other complex phenotypes is the number of tissues and cell types involved in its aetiology. Studies have consistently shown that height-associated genetic variants are enriched in expressed genes, functional genomic elements and pathways relevant to musculoskeletal and connective tissues (Wood et al. 2014; Finucane et al. 2015; Finucane et al. 2018; Luo Y et al. 2021). Recent analyses have even found enrichments in cardiovascular tissues and the uterus (Finucane et al. 2015; Finucane et al. 2018; Luo Y et al. 2021). Although the tissues relevant to a given phenotype are distinct, the studies that compared height to other complex traits and diseases found it had a comparable number of relevant cell types and tissues (Finucane et al. 2015; Finucane et al. 2018; Luo Y et al. 2021). In summary, height appears similar to other complex phenotypes for the key metrics outlined above, apart from its relative ease of study and high heritability, both of which give it an advantage as a model common complex trait.
Identifying genetic factors for height
Before the advent of genome-wide appraisals, candidate gene studies and family-based linkage analyses had been widely employed to study complex phenotypes, though with much lower success. In the case of height, most of the "discovered" regions failed to replicate (Perola et al. 2007; Weedon and Frayling 2008), and the handful of true genetic factors identified were principally driven by mutations in genes yielding rare, monogenic forms of extreme stature (Godfrey and Hollister 1988; Vissing et al. 1989; Shiang et al. 1994; Rao et al. 1997; Maheshwari et al. 1998; Hasegawa et al. 2000; Tartaglia et al. 2001; Durand and Rappold 2013). The key development of genome-wide association studies (GWAS) in the mid-2000s drastically expanded insight into the genetics of complex traits, including height. Initial GWAS uncovered tens of common polymorphic variants contributing to height (Weedon et al. 2007; Sanna et al. 2008; Gudbjartsson et al. 2008; Lettre et al. 2008; Weedon et al. 2008; Soranzo et al. 2009) and subsequently led to a flood of many thousands more loci being reported.
The first GWAS for height was published in 2007 and involved approximately 5,000 subjects of European ancestry (Weedon et al. 2007). The main observation in that initial study was a signal at HMGA2, which encodes the high-mobility group A2 oncogene. This result was confirmed in a follow-up effort in approximately 10,000 additional subjects (Yang T-L et al. 2010). The next major GWAS of height revealed strong evidence for a second signal, namely at the GDF5-UQCC locus (Sanna et al. 2008). These first reports were followed by a set of meta-analyses, in which datasets from various investigative groups were combined into a larger collective sample size to increase statistical power. Together these efforts uncovered in excess of 40 more loci for height (Gudbjartsson et al. 2008; Lettre et al. 2008; Weedon et al. 2008; Soranzo et al. 2009). However, differences were observed in the outcomes of these meta-analyses, likely due in part to differences in statistical power (Weedon and Frayling 2008), but perhaps also to underlying co-morbidities among study participants that were not fully accounted for in the analyses (Yaghootkar et al. 2017).
Over the next decade, GWAS samples continued to increase in both size and diversity, leading to an ever-growing number of loci associated with height. To illustrate, a 2010 study in 130,000 European-ancestry individuals more than quadrupled the number of associated loci to over 180 (Lango Allen et al. 2010). Four years later, the next major GWAS of over 250,000 European-ancestry individuals identified 423 loci (Wood et al. 2014), a number that would be dramatically surpassed just another four years later by a GWAS in ~450,000 British individuals that identified over 700 loci (Yengo et al. 2018). Though not all the loci identified in each of these GWAS would replicate in later studies, each study showed an increasing proportion of phenotypic variation captured by the significantly associated variants as well as increasing numbers of secondary signals at associated loci. Apart from simply increasing European-ancestry sample sizes, several GWAS over this period sought to identify associated loci in samples drawn in part or exclusively from non-European individuals (N'Diaye et al. 2011; He et al. 2015; Akiyama et al. 2019; Graff et al. 2021). In addition to replicating many of the loci identified in the European-ancestry GWAS, these analyses also identified several ancestry-specific loci and variants and leveraged differing patterns of linkage disequilibrium across ancestries to fine-map the location of putative causal variants.
In comparison to other complex phenotypes, height has not always been at the leading edge of the GWAS field, as the initial focus was on disease outcomes directly. Indeed, it was not the first trait to be studied in a GWAS (Ozaki et al. 2002; Klein et al. 2005; Ikegawa 2012; Loos 2020), nor was it the first to pass major milestones such as having a sample size over 1 million individuals (Loos 2020). Height also suffers from many of the same problems that affect other phenotypes, most obviously an under-representation of non-European-ancestry individuals that limits statistical power and the subsequent portability of results across populations (Mills and Rahal 2020; Loos 2020). In contrast to this history of height as a typical but non-leading GWAS trait, the latest GWAS of height has presented ground-breaking results that could have important implications for all common complex phenotypes.
The first saturated GWAS
This latest GWAS of height (Yengo et al. 2022) is the first of what is referred to as a "saturation" GWAS. Such a GWAS is so highly powered that further increases in the sample size without increases in participant diversity or variant inclusion are unlikely to reveal any additional genetic insights. In this landmark study conducted in 5.4 million individuals, the largest sample ever assembled for a GWAS, Yengo et al. identified an unprecedented 12,111 independent signals associated with height, clustered into 7,209 non-overlapping loci that are enriched near genes with known Mendelian effects on skeletal growth. Together these loci comprise 21% of the genome, a result consistent with previous estimates (Shi et al. 2016). The results are notable not only because of their potential use in the discovery of novel height-modifying genes but also because they showed for the first time near exhaustion of GWAS's ability to discover further associations, particularly in European-ancestry populations. Such an outcome had been previously hypothesised to be possible but never actually demonstrated (Visscher et al. 2017; Kim et al. 2017; Wray et al. 2018).
The authors primarily showed saturation of their GWAS by examining the proportion of estimated single nucleotide polymorphism (SNP) based heritability captured by the associated loci. SNP-based heritability is distinct from the total heritability estimated in twin studies and represents the portion of total heritability attributable to SNPs under an additive GWAS model (Yang J et al. 2017). In their analysis, Yengo et al. found that in individuals of European ancestry, associated regions accounted for approximately 100% of the estimated common SNP-based heritability, and across the other four tested ancestries the proportion was over 90%, even in the more genetically diverse African-ancestry group. Like most other published GWAS, this study was heavily dominated by individuals of European ancestry (75.8% of the sample), so the higher saturation of European-ancestry results was expected. The results appeared robust, as the enrichment of heritability within these loci did not transfer to another tested trait, namely body mass index (BMI), and random SNPs matched on allele frequency and linkage disequilibrium score did not show similar enrichment. Based on these results, the authors concluded that height-associated loci are largely shared across ancestries and that the generated list of loci is exhaustive, i.e. saturated, for European ancestry.
While estimated SNP-based heritability enrichment strongly supported the comprehensiveness of the catalogued height loci for individuals of European ancestry, it was unknown whether the entire sample had been necessary to achieve such completeness. To better understand when saturation occurs, Yengo et al. reanalysed prior height GWAS (Lango Allen et al. 2010; He et al. 2015; Graff et al. 2021) and down-sampled their own study to produce five samples of individuals of European ancestry ranging from ~130,000 to ~4 million people. Using these samples, and their trans-ancestry meta-analysis, the authors found different points of saturation across various metrics of interest. For example, they found that the number of genes prioritised by summary-data-based Mendelian randomisation (Zhu et al. 2016) of GTEx (Aguet et al. 2020) expression quantitative trait loci only plateaued when all 4 million individuals of European ancestry were analysed. Moreover, the number of loci discovered and the proportion of the genome covered by GWAS loci could not be saturated without including non-European-ancestry individuals.
While the latest study by Yengo et al. is the first GWAS to show saturation of associated loci for European ancestry, it should be mentioned that a complementary effort at building a prediction model of height using an L1-penalised regression (LASSO) captured a proportion of the total heritability comparable to that of Yengo et al. (Lello et al. 2018). The proportion captured by the LASSO model was slightly lower, but it is not known whether this difference is significant. However, even putting this difference aside, Yengo et al.'s contributions are still novel and significant in four main ways. Firstly, Yengo et al. were able to demonstrate near saturation in non-European ancestries, whereas the LASSO model only considered European individuals. Secondly, Yengo et al. analysed other metrics of saturation beyond heritability. Thirdly, Yengo et al. only required 21% of the genome to capture the total estimated SNP-based heritability, whereas the retained variants in the LASSO model spanned the entire genome roughly uniformly. And lastly, and most importantly, the effect size estimates from a LASSO model depend upon the specific variants retained in the model, whereas GWAS does not have that issue. This distinction makes the results of GWAS useful for downstream analyses such as fine mapping, for which LASSO results would be unsuitable.
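To make the contrast with GWAS concrete, the sketch below fits an L1-penalised (LASSO) height predictor to simulated genotype dosages with scikit-learn. All shapes, effect sizes and penalty values are hypothetical illustrations of the approach, not a reproduction of Lello et al. (2018); it merely shows how the penalty retains only a subset of variants, and why the retained set depends on the penalty and on the other variants in the model.

```python
# Illustrative sketch only: an L1-penalised (LASSO) height predictor fitted to
# synthetic genotype dosages. Data shapes, penalty strength and variant effects
# are invented for demonstration purposes.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_people, n_snps = 2000, 5000
X = rng.binomial(2, 0.3, size=(n_people, n_snps)).astype(float)  # dosages 0/1/2
true_beta = np.zeros(n_snps)
causal = rng.choice(n_snps, size=200, replace=False)              # sparse signal
true_beta[causal] = rng.normal(0.0, 0.05, size=200)
height = 170 + X @ true_beta + rng.normal(0.0, 5.0, size=n_people)

model = Lasso(alpha=0.01, max_iter=10000)   # L1 penalty shrinks most coefficients to zero
model.fit(X, height)
print("variants retained:", np.sum(model.coef_ != 0))
```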
An issue always worth considering when discussing any GWAS results is the effect of assortative mating. A shortcoming of Yengo et al.'s work is that it did not consider this effect. There is an extensive literature documenting human assortative mating in general (Burgess and Wallin 1943; Price and Vandenberg 1980; Mascie-Taylor 1989; McLeod 1995; Allison et al. 1996; Du Fort et al. 1998; Maes et al. 1998; Hippisley-Cox et al. 2002; Stimpson and Peek 2005; Jurj et al. 2006; Meyler et al. 2007; Di Castelnuovo et al. 2009; Alford et al. 2011; Ask et al. 2012; Peyrot et al. 2016; Luo S 2017; Jeong and Cho 2018; Horwitz et al. 2023) and, specifically, on the basis of height (Stulp et al. 2017; Torvik et al. 2022). Assortative mating can inflate SNP-based heritability estimates. For example, an analysis of ~335,000 individuals of British ancestry found that assortative mating inflated the heritability of height by 14-23% (Border et al. 2022). However, that same study also demonstrated that sufficient sample sizes should cause the heritability estimates derived from the restricted maximum likelihood approach employed by Yengo et al. (Yang J et al. 2011, 2012) to converge to the true SNP-based heritability. It is plausible that the large sample size of this most recent GWAS is sufficient to ensure such convergence.
Though Yengo et al. did not set out to study phenotypes apart from height, given the demonstrated similarity of height to other complex traits, the potential implications of this study for the broader field of complex phenotypes are worth reflection. The remainder of this review considers these possible impacts, especially as regards the "omnigenic model", polygenic risk scores, and causal gene identification.
Relevance to the omnigenic model
Perhaps the most important implication of the Yengo et al. saturation results is for the validity of the widely cited omnigenic model (Boyle et al. 2017). This model was originally proposed to explain the challenge of "missing heritability" (Manolio et al. 2009), that is, the proportion of total heritability not captured by significant GWAS loci and variants uncovered at a specific point in time. Under the model, this heritability was attributed to a large set of variants dispersed widely across the genome, each with individually small effects. The model was predicated on the hypothesis that all genes expressed in phenotype-relevant cells affect, in some capacity, the function of genes central to the phenotype and therefore the phenotype itself. A more extreme version of the model also emerged, which hypothesised that with sufficient power, GWAS would effectively implicate every gene and genomic region (Flint and Ideker 2019).
The results presented by Yengo et al. seemingly contradict the more extreme version of the omnigenic model. The authors showed that only a subset, albeit a large subset, of genomic regions appear to be implicated in the trait in individuals of European ancestry. Though expanding non-European-ancestry representation or incorporating new variants from databases of whole genome sequences would likely increase the proportion of the genome associated with the phenotype in future height GWAS, it seems highly improbable that the whole remaining ~79% of the genome will also be implicated.
Rare and structural variant associations could also implicate additional regions of the genome, but it seems similarly doubtful that the proportion would grow considerably. Regarding rare variants, Yengo et al. found suggestive evidence that variants with minor allele frequency between 0.1% and 1% concentrate in the same 21% of the genome as the common variants. This result is consistent with those obtained by recent rare variant analyses for height: approximately 70% of the 83 low-frequency coding variants identified in a 2017 study (Marouli et al. 2017) lie within loci identified by Yengo et al., and a later analysis of 492 traits showed strong colocalisation of rare and common variants (Backman et al. 2021). Comprehensive results for structural variation are more limited than for rare variants, but we similarly note that 80% of the height-related copy number variants in a recent cataloguing effort (Hujoel et al. 2022) overlapped the loci identified by Yengo et al.
As for the original conception of the model, it could be the case that the associated 21% of the genome captures the complete set of genes expressed in height-relevant cell types, but this too seems unlikely. As described, several tissues and a large number of cell types are believed to play a role in height aetiology. It seems improbable that 21% of the genome captures all the genes they express, but more research would be needed to confirm this hypothesis. Should the patterns observed with height be repeated in the saturation of other complex phenotypes, a highly polygenic model seems a more valid explanation of complex trait inheritance than a truly omnigenic one.
Implications for polygenic scores
Aside from exploring how well their GWAS saturated the discovery of height genetics, Yengo et al. also investigated the ability of their results to accurately predict height across ancestries using polygenic scores. Polygenic scores and risk scores predict individuals' values and risks of complex traits and diseases, respectively. Generally, these terms refer to statistical models that make genetics-based predictions using GWAS-estimated variant effect sizes (Sugrue and Desikan 2019; Wang et al. 2022), though under some definitions they may denote any genetics-based statistical or machine-learning model designed to predict phenotypes (Wand et al. 2021). Prior to the latest height GWAS, polygenic scores and risk scores had shown relatively modest success at predicting traits and disease risk for individuals drawn from the population used to create the scoring metrics (Schrodi et al. 2014; Hu et al. 2017). Though generally used on unrelated individuals, height polygenic scores can, at least, differentiate between siblings (Lello et al. 2020). It has similarly been shown that, in order for these scores to obtain their maximum accuracy, they should include both coding and non-coding associated variants (Yong et al. 2020).
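As a rough illustration of how such scores are typically constructed, the sketch below computes a polygenic score as a weighted sum of effect-allele dosages using GWAS-estimated effect sizes. The effect sizes and dosages shown are hypothetical and serve only to show the arithmetic.

```python
# Minimal sketch of a polygenic score: a weighted sum of allele dosages using
# GWAS-estimated effect sizes. All numbers below are hypothetical.
import numpy as np

effect_sizes = np.array([0.021, -0.014, 0.008, 0.032])   # beta per effect allele (cm)
dosages = np.array([2, 1, 0, 1], dtype=float)             # counts of the effect allele

polygenic_score = float(np.dot(effect_sizes, dosages))
print(f"polygenic score contribution: {polygenic_score:+.3f} cm")
```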
Measuring the accuracy of their polygenic score based on the 12,111 genome-wide significant signals against measured height, Yengo et al. obtained an accuracy of around 40% in individuals of European ancestry. As could also be expected based on its equal heritability saturation, the previously mentioned LASSO model also obtained an accuracy of around 40% (Lello et al. 2018). These accuracies were not significantly different from that obtained by taking the average of parental heights and were comparable to that obtained by using all the SNPs input into the GWAS (42% accuracy). That the gap between the accuracy of the polygenic score based on all input SNPs and that of the polygenic score based on only the 12,111 significant SNPs was so small was expected, given the saturation of common-SNP heritability observed in individuals of European ancestry. Building on these results, Yengo et al. also demonstrated that a weighted combination of parental average height and the 12,111-SNP polygenic score achieved an accuracy above 54%. This result showed how polygenic scores can be used jointly with family history to appreciably improve predictions and gives an insight into what gains could be expected when polygenic score and risk score accuracy for other phenotypes achieve eventual saturation.
Lamentably, Yengo et al.'s polygenic score showed reduced performance in the non-European ancestries studied. Given the large skew of Yengo et al.'s sample towards European ancestry and the well-known poor portability of polygenic scores across ancestries (Duncan et al. 2019; Lewis and Vassos 2020; Adeyemo et al. 2021), this result was not surprising. In fact, the sensitivity to ancestry differences is so great for polygenic scores that the level of deterioration can be seen to correlate with any admixture (Bitarello and Mathieson 2020) and can be observed even in subtly stratified groups within a single continental ancestry (Berg et al. 2019; Isshiki et al. 2021).
Adding individuals of non-European ancestry to capture the missing 5-10% of common SNP-based heritability in these populations should improve the accuracy of polygenic scores in non-European populations by allowing for discovery of disease-associated variants that are rare in Europeans but common in other groups (Wojcik et al. 2019; Bentley et al. 2020). Increasing diversity is a necessary step for ensuring that polygenic prediction techniques work well for all people and should be a goal for future research in height as in other complex traits. In addition to benefiting polygenic scores and variant discovery, building cohorts of greater diversity should also enable future studies to more precisely fine-map putative causal variants by leveraging differing patterns of linkage disequilibrium across ancestries; cohorts of African ancestry, with their smaller haplotype blocks, are especially useful in this regard (Hutchinson et al. 2020; Lu Z et al. 2022; Yuan et al. 2023). There are several large-scale ongoing efforts, including All of Us (The "All of Us" Research Program 2019), the Million Veteran Program (Gaziano et al. 2016), and H3Africa (Owolabi et al. 2019), that aim to recruit participants from these underrepresented ancestry groups. These efforts should clearly continue and even expand.
The ongoing challenge of gene discovery
Despite the fact that many of the loci identified by Yengo et al. reside near known genes relevant to cartilage and bone biology, such as HHIP, BMPs, GDF5 and ACAN, it is now clear that a systematic approach is required to more accurately characterise the causal effector genes at each of the 12,111 height loci. Indeed, the genetics community is now grappling broadly with the issue of whether the nearest gene to a GWAS-implicated signal is actually always the main culprit "effector" gene at a particular locus (Broekema et al. 2020; Li and Ritchie 2021). Some key lessons can be learned from the FTO obesity locus.
Obesity is principally defined by the relationship between height and weight, as captured by BMI. GWAS for BMI and obesity have consistently shown the signal at the FTO locus to be the strongest association with the trait (Frayling et al. 2007; Rosen and Ingelfinger 2015; Yengo et al. 2018; Huang et al. 2022). This signal lies within an intron of the FTO gene (Frayling et al. 2007) and has been widely replicated across multiple populations (Hassanein et al. 2010; Okada et al. 2012; Wen et al. 2012). Despite the FTO gene product receiving wide attention from the research community, two key papers presented compelling evidence that IRX3 and IRX5 were actually the main effector genes at this particular genomic location (Smemo et al. 2014; Claussnitzer et al. 2015). As such, the findings suggested that the strongly associated obesity variant was embedded in one gene but was driving the expression of other, neighbouring genes. These seminal findings have moved the interpretation of GWAS findings forward: it is no longer presumed that the nearest gene(s) to a given GWAS signal is the causal effector gene(s). Indeed, this should not come as a surprise, given that gene expression can be driven by long-range regulatory elements, such as enhancers and silencers, which can exert their effect from as far as hundreds of kilobases away.
Combined with the results from this latest GWAS of height, these findings indicate the magnitude of the challenge that now faces the genetics community. If the associated proportion of the genome is approximately consistent across complex traits, then each complex phenotype has, on average, thousands of genes that modulate pathogenesis. Identifying the causal gene(s) at each locus and validating the observations is, and will continue to be, a challenging task requiring multiple approaches, particularly algorithmic and high-throughput experimental techniques.
In summary, height is broadly similar to other complex phenotypes apart from its ease of measurement and high heritability. These factors have made it a widely employed model trait for studying the topic of complex phenotype inheritance. However, throughout the GWAS era, height has not always been at the leading edge of variant and gene discovery, that is, until its most recent GWAS by Yengo et al. In having at last closed the gap of missing common SNP-based heritability for a common trait, Yengo et al. may have signalled the beginning of the end of the GWAS era. Their work demonstrates the limits of endlessly increasing GWAS sample sizes and highlights the need for greater diversity in study populations. Moreover, their results directly contradict the most extreme form of the omnigenic model and imply that highly polygenic inheritance is likely a more appropriate model for complex traits. The analysed polygenic score results also suggest that when sample sizes across complex phenotype GWAS efforts increase to the point of heritability saturation across all ancestries, polygenic risk scores will become powerful tools for the prediction of disease risk. However, the implications of this study for the identification of individual effector disease genes are less optimistic. Should the GWAS era be drawing to a close, the era of gene identification that follows will surely be one of both great challenges and opportunities.
Disclosure statement
No potential conflict of interest was reported by the author(s). | 6,543.6 | 2023-01-02T00:00:00.000 | [
"Medicine",
"Biology"
] |
Estimating post‐Depositional Detrital Remanent Magnetization (pDRM) Effects: A Flexible Lock‐In Function Approach
The primary data sources for reconstructing the geomagnetic field of the past millennia are archeomagnetic and sedimentary paleomagnetic data. Sediment records, in particular, are crucial in extending the temporal and spatial coverage of global geomagnetic field models, especially when archeomagnetic data are sparse. The exact process by which sediments acquire magnetization, including post-depositional detrital remanent magnetization, is still poorly understood. However, it is widely accepted that these effects lead to a smoothing of the magnetic signal and offsets with respect to the sediment age. They impede the direct inclusion of sediment records in global geomagnetic field modeling. As a first step, we model these effects for a single sediment core using a new class of flexible parameterized lock-in functions. The parameters of the lock-in function are estimated by the maximum likelihood method using archeomagnetic data as a reference. The effectiveness of the proposed method is evaluated through synthetic tests. Our results demonstrate that the proposed method is capable of estimating the parameters associated with the distortion caused by the lock-in process.
While the lock-in process in the TRM occurs on short time scales (hours to weeks), the lock-in time of magnetic moments in sediment records can be much longer (years to centuries). The magnetization in sediments is called detrital remanent magnetization (DRM), which was first measured by McNish and Johnson (1938). During the sedimentation process, magnetic particles are deposited in such a way that their magnetic moments tend to point in the direction of the geomagnetic field, while interaction with other particles and the ongoing solidification increasingly impede the particles from fully aligning. Additional sediment particles lead to a consolidation of the underlying layers and thus to a mechanical lock-in of the magnetic particles. The magnetization in sediments is affected by the interaction of the magnetic particles with the substrate at the sediment-water interface and by dewatering of the sediment (Irving, 1957). The terminology and classification of these effects are not completely consistent in the literature. In the following, we will be using the terminology recommended in the review paper by Verosub (1977). According to Verosub (1977), the term DRM refers to the remanent magnetization found in sediments. By depositional DRM (dDRM) we describe the magnetization acquired by the interaction of the particles with the substrate at the sediment/water interface. The term post-depositional DRM (pDRM) refers to the longer timescale and describes any magnetization that is acquired after the particle settles on the sediment/water interface.
There are various effects that are summarized in the term dDRM. One example is the inclination error, which occurs when non-spherical particles settle flat on the sediment/water interface. This leads to a distortion of the inclination to smaller values (R. King, 1955). Another distortion of the inclination can occur when aligned particles roll into the nearest depression of the sediment/water interface (Griffiths et al., 1960).
In this paper, we will focus on the investigation of the post-depositional DRM. The traditional pDRM model, based on decades of research (e.g., Hamano, 1980; Irving, 1957; Irving & Major, 1964; Kent, 1973; Otofuji & Sasajima, 1981), can be described as follows. In general, only coarse-grained fractions are mechanically fixed more or less immediately after deposition. Smaller particles which are embedded in water-filled voids or pore spaces of the sediment can move freely for a longer period of time. With progressive consolidation and dewatering of the sediment, these particles also become locked in. However, alternative theories have challenged the classical pDRM acquisition concept, suggesting that sediment flocculation restricts significant post-depositional grain movement within pore spaces (Katari et al., 2000). Also, the effects and potential roles of bioturbation have been investigated and resulted in alternative sediment mixing models (e.g., Egli & Zhao, 2015).
To sum up, how exactly the pDRM process works and which effects are more or less important is still not fully understood. Nonetheless, decades of investigation, several experiments, and modeling approaches have led to the conclusion that the pDRM process results in a delayed and smoothed signal. In other words, the magnetic moment of an entire layer is given by a weighted sum of the geomagnetic field over the lock-in period. The weights are given by so-called lock-in functions (Roberts & Winklhofer, 2004; Suganuma et al., 2011).
Figure 1 provides a practical illustration of this effect. We show declination and inclination data obtained from a real sediment record from Sweden, together with measurement errors. Additionally, we show the predictions from the ArchKalmag14k.r model (Schanner et al., 2022). Note that ArchKalmag14k.r exclusively relies on archeomagnetic data. The visual representation clearly demonstrates a noticeable delay and mild smoothing in the sediment data compared to the predictions from the ArchKalmag14k.r model. Over the last decades, many lock-in functions have been suggested: exponential (e.g., Kent & Schneider, 1995; Løvlie, 1976), constant (e.g., Bleil & Von Dobeneck, 1999), linear (e.g., Meynadier & Valet, 1996), cubic (e.g., Roberts & Winklhofer, 2004), Gaussian (e.g., Suganuma et al., 2011), and parameterized lock-in functions that can cover multiple classes (e.g., Nilsson et al., 2018).
In this study, we present a new class of parameterized lock-in functions designed for the comprehensive modeling of the offset and smoothing effects caused by the pDRM process. These lock-in functions are characterized by four parameters, giving them the flexibility to approximate a wide range of previously suggested lock-in functions and therefore a wide range of possible lock-in behaviors.
The cornerstone of our methodology lies in the estimation of the lock-in function parameters for a given core sample. The aim is to identify the most suitable lock-in function for a given core sample, one that accomplishes the tasks of shifting the data adequately and removing the right degree of smoothing. The determination of this optimal lock-in function necessitates leveraging external geomagnetic field information that is not affected by distortions associated with the lock-in process. This is where archeomagnetic data play a pivotal role. Archeomagnetic data acquire magnetization through TRM and are therefore independent of any lock-in distortion.
Our model uses maximum likelihood methods and global archeomagnetic data to estimate the lock-in function parameters. To illustrate this concept, Figure 1 provides a visual representation. It is essential to emphasize that our model does not rely on the pre-existing ArchKalmag14k.r model or any similar models. Instead, it exclusively utilizes the original data used in the creation of the ArchKalmag14k.r model (Schanner et al., 2022).
In Section 2, we first briefly outline the geomagnetic field modeling method and then introduce the new class of lock-in functions. Subsequently, we delve into two technical subsections (Sections 2.3 and 2.4). In Section 2.3, we describe the relationship between the sediment observations and the geomagnetic field, resulting in a rigorously defined data model for the sediment observations. Within Section 2.4, we describe the adoption and adjustments of the Kalman filter-based method employed in Schanner et al. (2022) to suit our context. While these two subsections are not essential for a basic grasp of the results, they are crucial for a deeper understanding of the methodology and for making it easier to replicate the research. The newly developed method undergoes extensive testing on multiple synthetic data sets in Section 3. We discuss the findings and give an outlook on future work, including the prospective application of this methodology to real sediment records, in Section 4, before ending with a summary and conclusions.
Geomagnetic Field Model
We will model the geomagnetic field using a Bayesian approach based on Gaussian Processes. Every Gaussian Process is uniquely defined by a mean and a covariance function (Rasmussen, 2004).
We describe the geomagnetic field as the realization of a Gaussian Process defined over space and time, where the space variable lives on the standard two-sphere S². Therefore, the knowledge about the geomagnetic field and its uncertainty is a distribution over functions B: S² × ℝ → ℝ³. In the following, we will model the lock-in process for a single sediment core sample and treat the space variable as a constant, that is, we will consider B as a Gaussian Process of time only. We follow the a priori assumptions of Schanner et al. (2022) and use the estimated hyperparameters given in Table 2 of Schanner et al. (2022). Hence, we assume that all Gauss coefficients are a priori uncorrelated at a reference radius of 2,800 km, with zero mean except for the axial dipole. For the axial dipole coefficient g₁⁰, we assume a constant mean value of −38 μT (at the Earth's surface). Further, we assume an a priori variance of 39 μT for the dipole and an a priori variance of 118.22 μT for all higher (non-dipole) degrees (at the reference radius). The temporal correlation of the Gauss coefficients is likewise taken from Schanner et al. (2022).
These parameters reflect the statistics of the archeomagnetic data, and a physical interpretation is not obvious. For example, the value of −38 μT for g₁⁰ is the optimal value when fitting an axial dipole to the data directly. The correlation times may relate to physical processes, but are derived from the variability that is resolved in the data. Using other prior parameters is straightforward, and an investigation of the influence of those parameters will be done in a future study.
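As a rough illustration of the Gaussian Process machinery (not the actual prior of Schanner et al. (2022), whose kernel form and correlation times are not reproduced here), the sketch below draws prior samples of a single Gauss coefficient over time, using the −38 μT axial-dipole mean together with an assumed squared-exponential temporal kernel and an arbitrary 200-year correlation time.

```python
# Illustrative sketch: drawing temporal samples of a single Gauss coefficient from
# a Gaussian Process prior. The squared-exponential kernel and the 200-yr
# correlation time are assumptions made for illustration only.
import numpy as np

def sq_exp_kernel(t1, t2, sigma, tau):
    """Squared-exponential covariance between two time grids."""
    d = t1[:, None] - t2[None, :]
    return sigma**2 * np.exp(-0.5 * (d / tau) ** 2)

times = np.linspace(-6000.0, 2000.0, 200)        # years
mean = np.full_like(times, -38.0)                # axial-dipole prior mean, in uT
cov = sq_exp_kernel(times, times, sigma=39.0, tau=200.0)

rng = np.random.default_rng(1)
samples = rng.multivariate_normal(mean, cov + 1e-8 * np.eye(len(times)), size=3)
```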
Lock-In Process
The pDRM or lock-in process is affected by several effects which are still not completely understood. However, it is widely accepted that the pDRM process leads to a delayed and smoothed signal in the magnetic moment of a layer. Roberts and Winklhofer (2004) describe this behavior by a convolution, or weighted average, of the form m(z) = ∫₀^{z_L} λ(ζ) B(z − ζ) dζ, where m(z) describes the magnetic moment of the layer at depth z, B describes the geomagnetic field (see Section 2.1; note that we use the depth series here, derived from an age-depth model), and z_L gives the lock-in depth, that is, the depth where all particles are completely consolidated. The weights are given by the lock-in function λ.
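A minimal numerical sketch of this weighted average is given below. The field series and the Gaussian-shaped lock-in weights are synthetic, and the convention that a layer records field values from shallower (later-deposited) depths follows the delayed-acquisition picture described above; the sketch is meant only to illustrate how the offset and smoothing arise.

```python
# Minimal sketch of the pDRM smoothing: the signal recorded at a given depth is a
# normalized weighted average of the field over the lock-in interval. Field series
# and lock-in weights are synthetic illustrations.
import numpy as np

depth = np.arange(0.0, 500.0, 1.0)                       # cm
field = np.sin(2 * np.pi * depth / 150.0)                # synthetic declination-like signal

lockin_depths = np.arange(0.0, 40.0, 1.0)                # burial depth zeta, cm
weights = np.exp(-0.5 * ((lockin_depths - 20.0) / 8.0) ** 2)
weights /= weights.sum()                                  # lock-in function integrates to one

recorded = np.full_like(field, np.nan)
for i in range(len(depth)):
    j = i - lockin_depths.astype(int)                     # shallower depths (younger ages)
    if (j >= 0).all():
        recorded[i] = np.sum(weights * field[j])          # offset and smoothed signal
```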
The lock-in function does not depend on the depth of the layer. The underlying assumption here is that the sedimentation material does not change significantly over the absolute depth of the sediment record. Alternatively, we could formulate our model in time instead of depth and assume a constant sedimentation rate, but this is a much stricter assumption.
For a given core sample, we are interested in the weights, given by the shape of the lock-in function λ. We propose a new class of piecewise linear parameterized lock-in functions. Depending on its four parameters b₁, b₂, b₃, b₄ ∈ ℝ≥0 (with b₁ ≤ b₂ ≤ b₃ ≤ b₄), such a function can model the offset as well as the smoothing related to the lock-in process. The offset is given by the half lock-in depth and the smoothing by the width and height of the lock-in function (see Section 3 for a detailed discussion).
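The sketch below shows one plausible way to code such a four-parameter piecewise linear lock-in function: a trapezoid-like shape that is zero up to b₁, rises linearly to b₂, stays constant to b₃, falls back to zero at b₄, and is normalized to unit area. The exact parameterization used in this work may differ; this is an illustrative assumption.

```python
# Sketch of a four-parameter piecewise linear lock-in function (trapezoid-like,
# normalized to unit area). The shape is an illustrative assumption, not the
# paper's exact parameterization.
import numpy as np

def lockin_function(zeta, b1, b2, b3, b4):
    """Piecewise linear lock-in weights over burial depth zeta (same units as b1..b4)."""
    zeta = np.asarray(zeta, dtype=float)
    w = np.zeros_like(zeta)
    rise = (zeta >= b1) & (zeta < b2)
    flat = (zeta >= b2) & (zeta <= b3)
    fall = (zeta > b3) & (zeta <= b4)
    if b2 > b1:
        w[rise] = (zeta[rise] - b1) / (b2 - b1)
    w[flat] = 1.0
    if b4 > b3:
        w[fall] = (b4 - zeta[fall]) / (b4 - b3)
    area = np.sum(w) * (zeta[1] - zeta[0])     # normalize so the weights integrate to one
    return w / area if area > 0 else w

zeta = np.linspace(0.0, 60.0, 601)
weights = lockin_function(zeta, b1=5.0, b2=15.0, b3=25.0, b4=45.0)
```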
For a given core sample, we estimate the parameters b₁-b₄ using a Bayesian approach based on Gaussian Processes and involving archeomagnetic data as a reference. While leveraging all archeomagnetic data components (declination, inclination, and intensities), we exclusively utilize the directional components from sediment data. This choice comes from the recognition that the lock-in function governing directional data may substantially differ from that governing intensities. A discussion of this point can be found in Section 4. Importantly, our method has the flexibility to be applied equally to intensities or even to all three components simultaneously. Also, largely unknown intensities in paleomagnetic records are challenging and motivated us to focus on directional components (e.g., Roberts et al., 2013).
Data Model
In this section, we will derive the data model which describes the relation between the measured signal in the sedimentary records and the geomagnetic field variations. While our primary focus here is on the sedimentary records, we need the information from archeomagnetic records at a later stage. The model of archeomagnetic data is outlined in Schanner et al. (2022).
The first functional used to describe the data model is associated with the offset and smoothing caused by the lock-in process; it maps the field depth series to its weighted average over the lock-in interval, as given by the convolution in Section 2.2. Later, we will apply this functional to the geomagnetic field, which we model as such a function. The linearity of the functional follows directly from the linearity of the integral.
Besides the natural smoothing caused by the lock-in process, there is a smoothing effect caused by the way the magnetization in a sediment core sample is measured. When investigating sediment core samples, cubes or u-channels of different sizes are taken from the core. Afterward, the magnetization in the extracted cube (or points in the u-channel) is measured. The resulting measurement is then an average of the actual magnetization across the width of the cube (or determined by the response function of the magnetometer in the case of u-channels).
We assume that the size of the extracted cubes (or the response function) does not change within a core sample. Therefore, a single cube width can be defined for each core sample. This results in a second smoothing, which can be described by a measurement smoothing functional that averages the signal over the cube width; the linearity of this functional again follows from the linearity of the integral. The quantities that are measured in laboratory experiments are not provided in Cartesian field vector components (North [N], East [E], Down [Z]) but as two angles, declination (D) and inclination (I), and intensity (F). The nonlinear relationships between these components can be described by three observation functionals, D = arctan(E/N), I = arctan(Z/√(N² + E²)), and F = √(N² + E² + Z²). In the following, we will apply these functionals to the Gaussian Process associated with the geomagnetic field. Note that the lock-in function is defined in depth. Therefore, we cannot directly use the time-dependent Gaussian Process given in Equation 1. By switching from time to depth we end up with a new Gaussian Process whose mean function coincides with the mean function of the Gaussian Process given in Equation 1; this is because the mean function is assumed to be constant. The kernel function follows directly by applying the age-depth model (a function that maps time to depth) to the kernel function of the Gaussian Process given in Equation 1.
By applying the lock-in functional to the Gaussian Process B, we obtain, for every depth, the first part of our data model; the result is again a Gaussian Process, as is the process obtained after applying the measurement smoothing functional. Since both functionals lead to a smoothing, we absorb the smoothing associated with the measurement functional into the lock-in function. The nonlinearity of the observation functionals results in a data model that is not Gaussian anymore. However, as described in Schanner et al. (2022), these functionals can be linearized by a first-order Taylor expansion. Following Hellio et al. (2014), the linearization yields three linearized observation functionals, and by using them we approximate the data model by a Gaussian Process. In conclusion, the components of our final data model are given by the linearized observation functionals applied to the smoothed field process, plus a vector of measurement errors (ε_D, ε_I, ε_F)⊤ ∈ ℝ³.
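For reference, the standard relations between the Cartesian components and the observed declination, inclination, and intensity can be coded directly; the numerical values in the example call are arbitrary.

```python
# Standard conversion from Cartesian field components (N, E, Z) to declination,
# inclination (degrees) and intensity, matching the nonlinear observation
# functionals described above.
import numpy as np

def nez_to_dif(n, e, z):
    """Convert North/East/Down components to declination, inclination and intensity."""
    h = np.hypot(n, e)                          # horizontal intensity
    declination = np.degrees(np.arctan2(e, n))
    inclination = np.degrees(np.arctan2(z, h))
    intensity = np.sqrt(n**2 + e**2 + z**2)
    return declination, inclination, intensity

print(nez_to_dif(15000.0, 1200.0, 48000.0))     # nT, illustrative values
```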
Parameter Estimation
In order to estimate the four lock-in function parameters b₁-b₄, we perform a maximum likelihood estimation (type-II MLE, Rasmussen, 2004). For a Gaussian Process, the marginal likelihood is available in closed form (see, e.g., Chapter 2.2, Equation 2.30 in Rasmussen, 2004). However, due to the amount of archeomagnetic data used as a reference in our approach, calculating the marginal likelihood is numerically costly (a single function call may take minutes) and hampers numerical optimization. The number of required function calls for global function optimization with four parameters is on the order of thousands. This makes estimation via the closed form unfeasible. Instead, we perform a sequentialization of the marginal likelihood evaluation, similar to Baerenzung et al. (2020) and Schanner et al. (2022).
The idea is to replace the closed-form Gaussian Process regression by a Kalman filter (Kalman, 1960). The closed-form marginal likelihood is approximated by a sum over the marginal likelihood values calculated for the individual Kalman filter steps (see Equation 24 in Baerenzung et al., 2020). The resulting expression provides a measure of how well a set of lock-in function parameters describes the pDRM process in a single sediment core, given the global set of archeomagnetic data and the respective sediment data. In other words, we use the archeomagnetic data to estimate the shape of the lock-in function for a single core. Due to the temporal distribution of the archeomagnetic data, we limit the estimation to the last eight thousand years.
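The following sketch illustrates the sequentialization idea on a deliberately simplified problem: a scalar random-walk Kalman filter in which the log-marginal likelihood is accumulated as a sum of per-step predictive log densities. It is not the augmented-state geomagnetic field filter used here; the noise levels and data are invented.

```python
# Generic illustration of sequentializing the marginal likelihood with a Kalman
# filter: the total log likelihood is the sum of per-step predictive log densities.
# Scalar random-walk model; process/observation noises and data are invented.
import numpy as np

def kalman_log_marginal_likelihood(obs, q=0.1, r=0.5, m0=0.0, p0=1.0):
    m, p, loglik = m0, p0, 0.0
    for y in obs:
        p_pred = p + q                                   # prediction (random walk)
        s = p_pred + r                                   # predictive variance of y
        loglik += -0.5 * (np.log(2 * np.pi * s) + (y - m) ** 2 / s)
        k = p_pred / s                                   # Kalman gain
        m = m + k * (y - m)                              # update step
        p = (1 - k) * p_pred
    return loglik

rng = np.random.default_rng(2)
data = np.cumsum(rng.normal(0, 0.3, size=100)) + rng.normal(0, 0.7, size=100)
print(kalman_log_marginal_likelihood(data))
```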
A crucial difference from the existing implementations in Baerenzung et al. (2020) and Schanner et al. (2022) is the modified observation function discussed in Section 2.3. The convolution integral can be interpreted as a delay (and smoothing) in the measurements, resulting in cross-correlations between the Kalman filter steps. To respect these cross-correlations, the Kalman filter state vector is augmented to contain several recent time steps. The number of time steps has to be large enough that the corresponding time period covers the assumed maximal lock-in depth (Choi et al., 2009). The augmentation of the state vector leads, at each step, to a modified forward operator built from the forward operator defined in Baerenzung et al. (2020), which depends on the cutoff degree and the step size, and to correspondingly modified recursive equations for the update step and for the backward recursion, which starts from the last time step. A detailed derivation of these formulas can be found in Appendix A.
Expanding the Kalman filter state vector by previous time steps leads to an increased memory demand and higher computational cost. Choosing a spherical harmonics cutoff degree of 8 and a Kalman filter step size of Δ = 40 yrs gives a good trade-off between estimation accuracy and computation time. Several tests showed that smaller step sizes and higher cutoff degrees have no significant influence on the estimation, while increasing the computation time dramatically.
Log-Marginal Likelihood Optimization
The optimization of the log-marginal likelihood was conducted using dlib's LIPO-TR function optimization algorithm (D. E. King, 2009; Malherbe & Vayatis, 2017). From a mathematical point of view, this algorithm does not guarantee convergence. To prevent it from running indefinitely, two approaches have been employed. The first method limits the maximum number of function calls and uses the best result found. We started with a maximum of 3,500 function calls. However, after finding an optimum several times, the optimization algorithm switches to a random search, which continues until the maximum number of function calls is exhausted. In the cases we investigated, the optima were found after five hundred to one thousand steps, and the subsequent random search did not yield any new optima. Therefore, we decided to use the second method, in which the algorithm is considered converged when the obtained optimum remains unchanged for several iterations.
The second method significantly reduces the computation time, enabling us to perform 50 optimizations for each of the synthetic test cases. Two optima o1 and o2 are considered the same when they agree to within a relative tolerance of 10⁻⁷ and an absolute tolerance of 10⁻¹⁴. We set the upper bounds for the parameter estimation to 100 cm.
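The convergence criterion can be sketched as follows; the combined relative/absolute tolerance check is an assumption (the exact formula used in the paper is not reproduced above), and the stream-of-candidates interface and the patience value are purely illustrative.

```python
import math

RTOL, ATOL = 1e-7, 1e-14   # relative and absolute tolerances from the text
PATIENCE = 5               # illustrative: iterations the optimum must stay unchanged

def same_optimum(o1, o2):
    # assumed combined criterion; the paper's exact formula may differ in detail
    return math.isclose(o1, o2, rel_tol=RTOL, abs_tol=ATOL)

def best_with_patience(candidates, patience=PATIENCE):
    """Consume a stream of log-ml values produced by the optimizer and stop once
    the running best value has not improved (beyond the tolerances above) for
    `patience` consecutive candidate evaluations."""
    best, stale = None, 0
    for value in candidates:
        if best is None or (value > best and not same_optimum(value, best)):
            best, stale = value, 0     # genuine improvement
        else:
            stale += 1                 # unchanged within tolerance (or worse)
        if stale >= patience:
            break
    return best
```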
Note that the optimization of such a problem is not straightforward. Other optimization algorithms might lead to better results or better performance. In the future, we will optimize the procedure by including derivatives.
Synthetic Data
All synthetic data points are based on the same reference geomagnetic field time series drawn from the prior described in Section 2.1. Three synthetic data sets were generated from this reference time series. The first data set represents the archeomagnetic data, with input locations and times identical to those of the archeomagnetic data used in Schanner et al. (2022). In addition, two synthetic sediment data sets were generated: one corresponds to a core sample located in Sweden (60°9′3.6″N, 13°3′18″E) and the other to one located on Rapa Iti (27°36′57.6″S, 144°16′58.8″W). Both have the same temporal distribution, shown in the lower panel on the left side of Figure 2. The age-depth model used for both synthetic sediment data sets was derived from the age-depth model of the lake sediment core KLK used in Nilsson et al. (2022).
We then applied six different lock-in functions (orange functions in the first column of Figures 4 and 5) to each of the two sediment data sets. The synthetic data from Sweden distorted with the orange lock-in function in row (A) of Figure 4 are shown in Figure 3. The reference process is shown in green, and declination and inclination of the synthetic data with measurement uncertainties are shown as blue dots with error bars. Note that noise was added to the data after the distortion associated with the lock-in process. An obvious offset as well as a smoothing of the sediment data compared to the reference process can be seen. Visualizations of all other synthetic sediment data used in this study, as well as notebooks to generate additional synthetic data, can be found on our website at https://sec23.git-pages.gfz-potsdam.de/korte/pdrm/. In total, we ended up with six synthetic tests in Sweden and six on Rapa Iti. We chose these two locations because Sweden is an area with many paleomagnetic sediment records and decent coverage of archeomagnetic data, while the coverage of archeomagnetic data around Rapa Iti is sparse.
Results
In this section, we will assess the proposed method by conducting synthetic tests. The data utilized in this section, along with the method's implementation, can be found on our website under https://sec23.git-pages.gfz-potsdam.de/korte/pdrm/ and in the corresponding GitLab repository (Bohsung & Schanner, 2023). Moreover, we provide scripts for generating synthetic data, enabling further testing.
Initially, we compared the estimated parameters with their counterparts in the true lock-in function. The results are visualized in Figure 4 for the synthetic tests located in Sweden and in Figure 5 for those located on Rapa Iti. The rows (A-F) correspond to the six synthetic test cases. The lock-in function used for the distortion (orange) and the lock-in functions of the 50 optimization runs are plotted in the first column. The colors of the 50 estimated lock-in functions correspond to the associated log-marginal-likelihood (log-ml) value, ranging from blue for lock-in functions with a high log-ml value (better estimations) to red for lock-in functions with a low log-ml value (worse estimations). The distributions of the log-ml values are shown in the last column.
The visualization of the estimated lock-in functions reveals that the four parameters exhibit some variability in their determination. This variability could potentially be attributed to the optimizer used. Consequently, we proceeded to explore further characteristics of the estimated lock-in functions. First, we focused on the half lock-in depth, defined as the depth at which half of the lock-in process is completed. The second column of Figures 4 and 5 illustrates the distributions of this parameter across the six cases. Subsequently, the remaining two columns present the distributions of the lock-in function heights and widths. Note that the parameters are weighted with the associated log-ml values. The results from Sweden (see Figure 4) and Rapa Iti (see Figure 5) show similar variance in the estimated parameters, implying that our method is robust enough for solid parameter estimation even for locations where archeomagnetic data are sparse.
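For concreteness, the derived quantities can be computed as in the sketch below. The four-parameter lock-in function of Section 2.2 is not reproduced here; the code assumes a piecewise-linear (trapezoidal) lock-in function defined by four hypothetical depth parameters b1-b4, and uses the full width at half maximum as one possible definition of the width.

```python
import numpy as np

def lockin_pdf(z, b1, b2, b3, b4):
    """Assumed trapezoidal lock-in function: zero above depth b1, ramping up
    between b1 and b1+b2, constant between b1+b2 and b1+b2+b3, ramping down to
    zero at b1+b2+b3+b4, normalized to unit area.  Illustrative parametrization,
    not necessarily the one used in the paper."""
    e1, e2, e3, e4 = b1, b1 + b2, b1 + b2 + b3, b1 + b2 + b3 + b4
    y = np.piecewise(
        z,
        [z < e1, (z >= e1) & (z < e2), (z >= e2) & (z < e3), (z >= e3) & (z < e4), z >= e4],
        [0.0,
         lambda z: (z - e1) / max(b2, 1e-12),
         1.0,
         lambda z: (e4 - z) / max(b4, 1e-12),
         0.0])
    return y / np.trapz(y, z)

def half_lockin_depth(b1, b2, b3, b4, zmax=200.0, n=20001):
    """Depth at which the cumulative lock-in reaches one half."""
    z = np.linspace(0.0, zmax, n)
    cdf = np.cumsum(lockin_pdf(z, b1, b2, b3, b4))
    cdf /= cdf[-1]
    return np.interp(0.5, cdf, z)

# illustrative height and width of the normalized lock-in function
z = np.linspace(0.0, 200.0, 20001)
pdf = lockin_pdf(z, b1=5.0, b2=3.0, b3=10.0, b4=4.0)
height = pdf.max()
width = np.ptp(z[pdf > 0.5 * height])   # FWHM as one possible width definition
print(half_lockin_depth(5.0, 3.0, 10.0, 4.0), height, width)
```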
As one can see, the half lock-in depth in particular is well determined. Its significance lies in its interpretation as the horizontal offset within the observed data. Furthermore, the height and width parameters correspond to the smoothing in the observed sediment records. While not as precisely determined as the half lock-in depth, these parameters remain reliable indicators. At first glance, the variance in the estimated lock-in functions might seem problematic (see the first row of Figures 4 and 5). However, in Figure 6 we compare the data points generated using the real lock-in function (orange), the estimated lock-in function with the best log-ml value (blue), and the estimated lock-in function with the worst log-ml value (red). There is no large difference between them; note that the differences between the data points are also influenced by the artificially added noise. We compared the distorted sediment observations because the deconvolution of the distorted observations is not straightforward; we will address this challenge in a future study.
Discussion
The presented class of parameterized lock-in functions is capable of modeling the offset as well as the smoothing effects associated with the pDRM process. The parametrization with four parameters delivers high flexibility to approximate a wide variety of lock-in behaviors. Functions of higher degree or with more interpolation points could possibly yield better approximations, but would also increase the number of hyperparameters. Decreasing the number of hyperparameters, for example to one parameter handling the offset and one for the smoothing, would lead to less flexibility. We consider the presented class of lock-in functions with four parameters a good compromise.
In our study, we focus on the directional components (declination and inclination) of sediment records.As mentioned above, this choice comes from the recognition that the lock-in function for the directional components may substantially differ from that for the intensities.
To illustrate this point, consider a scenario where the intensity of the geomagnetic field is constant while the direction varies. After completion of the lock-in process, the direction recorded in the sediment layer emerges as a weighted average of geomagnetic field directions. Conversely, the sediment layer's intensity appears decreased in comparison to the beginning of the lock-in process. This is because in the beginning all particles align with the field, but during the lock-in an increasing proportion of particles deviates from this direction, so that their magnetic moments can partly cancel out. The resulting intensity of the whole layer is the sum of the particle magnetic moments and consequently might be decreased. In such cases, the lock-in function for directional components must be distinguished from that for intensity. In fact, the intensity signal captured in the sediment depends not only on the lock-in process, but also on the directional dynamics of the geomagnetic field. This becomes even more complicated in a realistic scenario, where both the field's direction and intensity change over time. Consequently, considering relative paleointensities from sediments is much more complicated than incorporating directions only.
In addition to the described effect, the assumption of one lock-in function with identical parameters throughout the sedimentary record remains reasonable for directional data (at least for the Holocene), but might be too strict for intensities. These challenges will be investigated in future research.
The synthetic tests presented in Section 3 emphasize the importance of correctly interpreting the estimated lock-in functions. The four estimated parameters can show strong variations. Our investigations reveal certain features of the estimated lock-in function that are reliably determined. Particularly significant among these is the half lock-in depth, which directly corresponds to the offset induced by the pDRM process. The smoothing effects caused by the pDRM process are associated with the height and width of the estimated lock-in function. Although these two parameters are less precisely determined than the half lock-in depth, they still yield valuable insights. In particular, the comparison of data points distorted with the best, worst, and real lock-in functions (see Figure 6) shows the accuracy of our estimation even when the individual parameters show high variance. The comparison of outcomes for the geographical regions of Sweden and Rapa Iti demonstrates the robustness of our methodology, even in locations with limited coverage by archeomagnetic data.
Conclusion
We present a new class of parameterized lock-in functions. These functions are capable of modeling the offset and smoothing effects associated with the pDRM process. The four parameters deliver high flexibility to approximate a wide variety of lock-in behaviors. Extensive testing on synthetic data sets has demonstrated that our method is highly effective in estimating the parameters associated with the lock-in process, even in areas where archeomagnetic data is sparse.
Transitioning from theoretical developments and synthetic testing to the practical world of real sediment data presents a crucial next step. There are many challenges to this endeavor, including not only the complex pDRM process, but also ancillary effects such as inclination shallowing. Furthermore, real data often provides declination values in relative terms, necessitating the estimation of the declination offset parameter.
Currently, our methodology focuses on the estimation of lock-in function parameters. However, the goal is the deconvolution of sedimentary records using the estimated lock-in functions. This endeavor is poised to yield a dedicated sediment preprocessing software, enhancing the reliability of sediment data for geomagnetic field modeling.
Another approach we will work on is expanding our modeling technique to accommodate multiple sediment cores and potentially incorporating field parameters, such as the correlation length, as additional hyperparameters. The advantage of this method is that we can potentially learn from the cross-correlations between different sediment cores.
A comparative analysis of these two methods, simultaneous estimation of the parameters versus parameter estimation as part of the preprocessing procedure, will be the final step.
Appendix A
In this section, we present the derivations of the formulas used in Section 2.4.
The backward recursion equations are derived from the filtering quantities defined above. The derivation requires the inverse of the predicted covariance matrix of the following step, which is obtained with the help of two auxiliary identities, denoted (⋆) and (⋆⋆) in the full derivation.
Figure 1 .
Figure 1. Declination and inclination of a sediment record from Sweden together with measurement errors (blue dots with error bars) are compared to predictions of ArchKalmag14k.r (relying exclusively on archeomagnetic data). An offset and smoothing in the signal of the sediment data compared to the signal derived from ArchKalmag14k.r can be observed.
Figure 2 .
Figure 2. Spatial and temporal distribution of synthetic data. The upper panel on the left side shows the temporal distribution of the synthetic archeomagnetic data, while the temporal distribution of the synthetic sediment data is shown in the lower panel. Spatial distributions of the synthetic archeomagnetic data (blue dots) and the two synthetic sediment locations (red stars) are shown on the right side.
Figure 3 .
Figure 3. The figure shows the declination and inclination of the noisy synthetic sediment data (orange dots) distorted using the orange lock-in function in row (A) of Figure 4. Measurement errors are shown as orange error bars. The reference process for these data is shown in green.
Figure 4 .
Figure 4. The figure shows the results of the 50 parameter estimations for the six synthetic tests located in Sweden. Rows (A-F) correspond to the six cases. The first column shows the true lock-in function (orange) and the 50 estimations. The color of the estimated lock-in functions depends on the associated log-ml values, ranging from red (low log-ml) to blue (high log-ml). Columns two to four show the distributions of the half lock-in depth, lock-in function height, and lock-in function width. The values of the true parameters are visualized in orange. All parameters are weighted with the log-ml values. The distributions of the log-ml values are visualized in the last column.
Figure 5 .
Figure 5. The figure shows the results of the 50 parameter estimations for the six synthetic tests located on Rapa Iti. See Figure 4 for details.
Figure 6 .
Figure 6. The figure shows the comparison of synthetic data generated using three different lock-in functions. The orange data points were generated using the real lock-in function in row (A) of Figure 4, the blue data points using the estimated lock-in function with the best log-ml value, and the red data points using the estimated lock-in function with the worst log-ml value.
function defined in Section 2.2, and the lock-in depth is the relative depth at which the last particle of the layer is fully locked in. Note that the lock-in functional maps functions in C(ℝ, ℝ³) (the set of all continuous functions from ℝ to ℝ³) to a vector in ℝ³; in the notation above, the field is the argument of the functional. In conclusion, our lock-in function models not only the offset and smoothing associated with the pDRM process but also the measurement-associated smoothing.
As a heuristic explanation for this approximation, consider two extreme cases. If the lock-in depth is significantly larger than the size of the sample cube, the measurement smoothing is negligible. If the lock-in depth is close to zero, we can approximate the measurement functional by the lock-in functional by choosing a lock-in function with a plateau of the width of the sample cube.
max is the spherical harmonics cutoff degree, and Δ is the step size. Due to the constant step size Δ, the forward operator does not depend on the Kalman filter step. The Bayesian filtering equations are recursively defined by a linear state transition with additive Gaussian process noise and a linear measurement equation with additive Gaussian measurement noise, where the measurement operator projects the model onto the data. With the modified forward operator, the recursive equations of the prediction step follow accordingly. | 7,596.4 | 2023-12-01T00:00:00.000 | [
"Geology",
"Physics"
] |
In Vivo Pattern Classification of Ingestive Behavior in Ruminants Using FBG Sensors and Machine Learning
Pattern classification of ingestive behavior in grazing animals is of great importance in studies related to animal nutrition, growth and health. In this paper, a system to classify chewing patterns of ruminants in in vivo experiments is developed. The proposal is based on data collected by optical fiber Bragg grating (FBG) sensors that are processed by machine learning techniques. The FBG sensors measure the biomechanical strain during jaw movements, and a decision tree is responsible for the classification of the associated chewing pattern. In this study, patterns associated with the intake of dietary supplement, hay and ryegrass were considered. Additionally, two other important events for ingestive behavior were monitored: rumination and idleness. Experimental results show that the proposed approach for pattern classification is capable of differentiating the five patterns involved in the chewing process with an overall accuracy of 94%.
Introduction and Contextualization
The understanding of the processes associated with the forage grazing system is related to the assessment of the food intake and ingestive behavior of animals. This kind of study aims to classify food intake and ingestive processes to support the selection of forage that results in increased weight gain and to assist the processes of growth, production and reproduction [1,2]. It also helps in determining the productivity of pastures, is an important measure of animal impact on pastoral ecosystems and may influence agriculture and precision livestock farming [3]. In addition, the monitoring of food consumption activities provides information on the health and well-being of the animal [2].
An important activity related to animal health and food utilization is the rumination process [4]. This process is related to the gastrointestinal capacity to transform compounds of plant cells, such as cellulose and hemicellulose, into energy. These characteristics of the ruminant digestive system allow the better usage of energy from fibrous plants. The rumination process is strongly related to the type of forage used during the feeding process [4]. This shows that, besides the determination of the type of ingested food, the identification of the rumination process is also important for studies aimed at improving animal handling in order to increase productivity [4,5].
Different techniques have been used to assess the ingestive behavior of animals in grazing environments. Among these techniques, some are based on the evaluation of the animals' digestive behavior [6], while other techniques use mechanical sensors [7,8] or are based on acoustic methods [9,10]. Since it is not invasive and has a low cost, the acoustic method is the primary technique employed [2,9]. This technique uses audio sensors to obtain data on mandibular movements of animals during the grazing period. However, the audio signal can be affected by interference from external sources that are not related to the chewing process, and, as with the other techniques, the analysis and classification of the acquired data are performed manually [9,10]. Computational tools for this purpose are scarce and have a classification success rate of approximately 84%, including at most four classes [11].
The acquisition and analysis of chewing movements in ruminants allow the characterization of chewing patterns with high immunity to noise and the development of a computational system that provides greater automation and a higher success rate in the stage of classification, which are important for further advances in equipment for the evaluation of animals' feeding behavior. Thus, this paper proposes a new paradigm for the monitoring and classification of chewing patterns. In order to provide interference immunity to the signal acquisition stage, fiber Bragg grating sensors (FBG) are employed as the transducer element [12]. For the automation of the classification process, a classifier based on decision trees is proposed and developed [13].
The optical biocompatible sensors [14] used in this work were developed from a new technique of packaging with preliminary in vitro tests [15], where the viability of the technique has been demonstrated. Furthermore, with the objective of developing a package for the optical sensor, tests were performed ex vivo [16]. The work resulted in a totally biocompatible sensor without body rejection or tissue reaction due to the fact that its material (silica) is not toxic. Besides that, the sensor is chemically stable, immune to electromagnetic interference, has reduced dimensions with a diameter and a length in the order of micrometers and millimeters, respectively, and provides excellent sensitivity for the acquisition of low intensity signals. Another feature is that it is suitable for monitoring geometrically-irregular regions, such as cheekbones, where it is difficult to apply conventional electric extensometry [17]. These characteristics make FBGs suitable for in vivo experiments [18,19].
The improvement in the data classification process is obtained with the use of machine learning techniques. One possibility is the use of inductive learning techniques, such as decision trees, neural networks, support vector machines, and genetic algorithms [20]. One of these techniques, the decision tree algorithm C4.5, is known for its capacity to generalize from data susceptible to noise and for working with both continuous and categorical attributes, and it also handles missing attribute values [13]. In addition, classifiers constructed with decision trees offer a good combination of classification speed and error rate [21]. These characteristics make decision trees a natural choice for the problem in question, and they have been employed successfully in different areas requiring pattern classification, such as medicine [22,23], safety [24], chemistry [25], and geology [26]. Despite the success of such approaches, there are few studies on machine learning techniques applied to the classification of ingestive behavior patterns in ruminants.
Studies for the assessment of ruminant feeding behavior using FBG arose from in vitro [15] and ex vivo [16] experiments. Therefore, these sensors allow the evaluation of strain applied by the animal along time for each chewing movement. In order to improve the classification process of the acquired data, the signals were segmented into each chewing movement, and the frequency spectrum of the signals of each chewing movement was also analyzed to include new features in the dataset.
The description of the proposed instrumentation and classification of chewing patterns of ruminants is organized as follows: Section 2 describes the proposed instrumentation and algorithms used to classify the acquired data. Section 3 presents the pre-processing of the acquired data and the creation of the training set. In Section 4, the results are presented and discussed. Finally, Section 5 concludes the paper. This work has the approval of the Ethics Commission on the use of Animals of the Federal University of Technology-Paraná (CEUA 2013-009 Protocol).
Instrumentation and Data Acquisition
In this work, the mandibular stress is measured by an FBG sensor with λ B = 1541 nm; reflectivity = 50%; FWHM (full width at half maximum) = 0.3 nm. The sensor was set on a surgical titanium plate with cyanoacrylate glue. The plate was attached to the animal's jaw by surgical screws. The animal used was a calf of 2 months of age, weighing approximately 160 kg. All of the materials used in the surgical process are biocompatible, and there is no rejection by the animal after the deployment of the sensor. A detailed description about the employed sensor can be found in [16].
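For reference, the conversion from the measured Bragg wavelength shift to strain can be sketched with the standard FBG strain relation; the photo-elastic coefficient used below is a typical value for silica fiber, not one quoted in this work, and temperature cross-sensitivity is neglected.

```python
LAMBDA_B = 1541.0   # nominal Bragg wavelength in nm, from the sensor above
P_E = 0.22          # effective photo-elastic coefficient, typical for silica (assumed)

def wavelength_to_strain(lambda_measured_nm, lambda_ref_nm=LAMBDA_B, p_e=P_E):
    """Convert a measured Bragg wavelength into mechanical strain (in µm/m),
    using delta_lambda / lambda_B = (1 - p_e) * strain and neglecting the
    temperature cross-sensitivity of the grating."""
    strain = (lambda_measured_nm - lambda_ref_nm) / (lambda_ref_nm * (1.0 - p_e))
    return strain * 1e6   # dimensionless strain -> microstrain (µm/m)

# example: a 0.05 nm shift corresponds to roughly 41.6 µm/m
print(wavelength_to_strain(1541.05))
```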
Access to the mandible was gained through a surgical procedure, via an incision of the skin and the masseter muscle, in order to reach the masseteric tuberosity, over which the sensor was placed. The location of the instrumentation was based on and described in [15,16], which defined the position of the sensor to best capture the strain generated by masticatory processes of different materials.
For data acquisition, the interrogator DI410 was used, manufactured by HBM ® , along with CatmanEasy ® software. The sampling rate used in the tests was 1 kS/s. Figure 1 illustrates the positioning of the sensor and the in vivo data acquisition process proposed in this work. Figure 1. Positioning of the sensor and data acquisition.
For the acquisition of data during chewing, different types of food were provided to the animal. The first food provided was a dietary supplement in the form of pelletized concentrate, with an intake time of 13 min. The second material provided to the animal was Tifton (Cynodon) hay, with an ingestion time of approximately 10 min. The third food provided was ryegrass (Lolium multiflorum), which was consumed for about 5 min. During the period of food intake, there were intervals without masticatory movements. These intervals were used for collecting the idleness class samples. Samples of this class were also obtained between the end of feeding and the beginning of rumination. Approximately 45 min after the end of food intake, the process of rumination began, which was monitored for about 15 min. The acquired data were used as a training set for the machine learning algorithm described in the next section.
Decision Tree C4.5 Algorithm
The choice of decision trees over artificial neural networks was made because decision tree algorithms belong to the symbolic learning paradigm, which allows a clearer view of the generated set of rules. It also makes explicit the key attributes used in the decision tree structure. Decision tree algorithms also perform well when working with a large number of forecaster attributes.
A decision tree has a structure composed of nodes, branches from these nodes, and terminal nodes (leaf nodes). The nodes represent the attributes of an instance. The branches receive the possible values of the attribute in question, and the leaf nodes represent the different classes present in the dataset. The classification consists of following the path determined by successive nodes arranged along the tree until a leaf node is reached, which contains the class to be assigned to the respective instance [13]. In this work, the C4.5 algorithm was used [13], which provides enhancements to the ID3 algorithm [27], where the ability to work with categorical and quantitative attributes was added.
For the construction of the decision tree, a dataset D with m instances is considered, D = {D_1, D_2, ..., D_m}, where each instance D_i, i = 1, ..., m, is a set of n attributes a_ij, j = 1, 2, ..., n, i.e., D_i = {a_i1, a_i2, ..., a_in}. The set of possible values for attribute a_ij is represented by dom(a_ij) = {v_1, v_2, ..., v_k}, where v_l, l = 1, ..., k, are the possible values for attribute a_ij. Each instance of the dataset D is classified according to a set of classes C = {c_1, c_2, ..., c_w}, where w is the number of classes.
Algorithm C4.5 uses the attributes that generate a higher information gain ratio to create the node of the decision tree. The evaluation of the information gain ratio depends on the information gain [27]. The information gain uses as a basis a measure known as entropy [13,27]. The entropy measures the amount of information needed to identify the class of a node [28].
Based on the computation of information gain and the information gain ratio of each attribute, the attribute with the highest information gain ratio is selected as the node attribute [13]. For each possible value of the node, branches are created. If all instances belong to the same class, the branch points to a leaf node, which characterizes the classification. Otherwise, the calculations are recursively performed until all instances can be classified.
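A minimal sketch of the splitting criterion is given below: Shannon entropy, information gain, and the C4.5 gain ratio for a candidate partition of the instances. It illustrates the standard definitions and is not the authors' implementation.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of the class distribution of a set of labels."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(labels, partitions):
    """Gain of splitting `labels` into `partitions` (a list of label subsets)."""
    total = len(labels)
    remainder = sum(len(p) / total * entropy(p) for p in partitions)
    return entropy(labels) - remainder

def gain_ratio(labels, partitions):
    """C4.5 gain ratio: information gain normalized by the split information."""
    total = len(labels)
    split_info = -sum(
        (len(p) / total) * math.log2(len(p) / total) for p in partitions if p)
    return information_gain(labels, partitions) / split_info if split_info > 0 else 0.0

# toy example: splitting ten chewing movements by some binary attribute
labels = ["hay"] * 5 + ["ryegrass"] * 5
partitions = [["hay"] * 4 + ["ryegrass"], ["hay"] + ["ryegrass"] * 4]
print(entropy(labels), information_gain(labels, partitions), gain_ratio(labels, partitions))
```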
Training Set Creation
Prior to the creation of the training set, some pre-processing of the acquired data is necessary to extract as much information as possible from the signal. After the pre-processing, the training set can be created. This section covers these topics.
Data Pre-Processing
The acquired data, corresponding to the classes of interest, are labeled according to the type of food consumed by the animal or according to the ingestive event during the signal acquisition. The foods provided to the animal along with the idleness and rumination events form five classes that compose the set of target attributes C, that is, C = {dietary supplement, hay, ryegrass, rumination, idleness}. Figure 2 presents 6 s of an already labeled set of data. It is observed that each of the items shows a specific waveform, as well as distinct strain values. This is due to the texture of the forage used, the dietary supplement and the bolus present in rumination. The peaks in Figure 2 express the maximum aperture of the mouth, while the minimum values represent the closure of the animal's jaw.
In order to classify the chewing movements, the signal should be segmented to obtain a particular sample for each movement. To segment the signal, firstly, the average value of the signal is evaluated every second. Then, the average is subtracted from the original signal and zero-crossing detection is used to detect the beginning and the end of each movement. After the segmentation, the signal that describes the jaw movement is again taken into account with the average value added.
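The segmentation step can be sketched as follows, assuming the 1 kS/s sampling rate mentioned above; the per-second mean removal and the use of rising zero crossings as movement boundaries follow the description in the text, while the remaining details are illustrative.

```python
import numpy as np

FS = 1000  # sampling rate in samples per second (1 kS/s)

def segment_chewing(signal):
    """Split a strain signal into individual chewing movements.
    The per-second mean is removed, zero crossings of the centered signal mark
    movement boundaries, and each segment is returned with its mean retained."""
    signal = np.asarray(signal, dtype=float)
    n_sec = len(signal) // FS
    centered = signal[:n_sec * FS].copy()
    means = np.empty_like(centered)
    for s in range(n_sec):
        sl = slice(s * FS, (s + 1) * FS)
        means[sl] = centered[sl].mean()
    centered -= means
    # indices where the centered signal crosses zero from negative to positive
    crossings = np.where((centered[:-1] < 0) & (centered[1:] >= 0))[0] + 1
    segments = []
    for start, stop in zip(crossings[:-1], crossings[1:]):
        segments.append(signal[start:stop])   # original values, mean included
    return segments
```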
It was defined that the data collected for each jaw movement during 1 s form an instance of the training set. For the classification system, each sample of the signal is an attribute. Considering the sampling rate of 1 kS/s, each chewing movement thus contributes 1000 attributes. Since, after the segmentation, an instance can have fewer than 1000 attributes, the average value of the signal is reinserted in that case to provide 1000 elements. The choice of the average value of the signal for completing the attributes is justified because it is information that varies between classes, which contributes to a more effective classification. Finally, each chewing movement, with its average value retained, is labeled according to the class associated with the respective event. Figure 3 illustrates the segmented waveforms of each movement considering its average value. Figure 4 shows the histogram of the strain values of the entire training set. The distribution of strain values between classes is quite distinct. For example, rumination and idleness have strain values concentrated in a small range: for rumination, the strain value was 42 µm/m, and the maximum strain in the bone tissue for the idleness class was on the order of 12 µm/m. The waveforms of these classes are also quite distinct. These differences in the signal of each class can improve the classification procedure. Therefore, in order to extract additional information from the signals, the fast Fourier transform (FFT) is used to obtain relevant information from the frequency spectrum [29]; this provides indirect information about the shape of the acquired signal in a form suitable for decision trees. The frequency spectrum is obtained through the discrete Fourier transform [30], X(k) = Σ_{n=0}^{N−1} x(n) e^{−j2πkn/N}, where N is the number of samples contained in the signal under analysis. In this case, it corresponds to the number of samples in one second, that is, N = 1000.
Creation of the Training Set
For creating the training set, 200 instances were used for each class, totaling 1000 instances. Each instance D_i has 1030 attributes, where the first 1000 attributes come from the segmentation of the data of a chewing movement described in Section 3.1. The 30 additional attributes are formed by the first 30 components of the frequency spectrum of the signal obtained by calculating the FFT. In this way, the training set is given by D = {D_1, D_2, ..., D_m}, where m = 1000. Each element D_i ∈ D has an associated label that defines the class to which the instance belongs, D_i = (x⃗_i, y_z), in which x⃗_i is a vector of 1030 elements whose values represent the attributes of the instance D_i, and y_z ∈ C is the class associated with this instance. Figure 5 shows the frequency spectrum of a chewing movement of each class. The spectra show that different frequency components are present for each pattern. Frequency components above the 29th harmonic almost vanish, and their use in the decision algorithm did not produce any further performance improvement.
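A sketch of the assembly of one 1030-attribute instance is given below. Padding with the mean value and appending the first 30 FFT components follow the text; the use of bin magnitudes (rather than, e.g., complex components) is an assumption.

```python
import numpy as np

N_TIME, N_FFT = 1000, 30

def make_instance(segment):
    """Assemble the 1030-attribute feature vector for one chewing movement.
    The segment (<= 1000 samples at 1 kS/s) is padded with its mean value up to
    1000 attributes; the first 30 FFT components (DC plus 29 harmonics) are
    appended.  Using bin magnitudes here is an illustrative choice."""
    segment = np.asarray(segment, dtype=float)
    mean = segment.mean()
    padded = np.full(N_TIME, mean)
    padded[:min(len(segment), N_TIME)] = segment[:N_TIME]
    spectrum = np.abs(np.fft.rfft(padded))[:N_FFT]
    return np.concatenate([padded, spectrum])

# usage: build the design matrix from segmented movements and their labels
# X = np.vstack([make_instance(s) for s in segments]); y = labels
```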
In Figure 5a-e, the frequency bin spacings are 2.49 Hz, 1.66 Hz, 2 Hz, 1.66 Hz and 1.54 Hz, respectively. Different bin amplitudes reflect in distinct waveforms, while different bin spacing relates to different periods of the signal. Hence, the use of the frequency spectrum can be valuable in the identification of various signal waveforms related to distinct chewing movements.
Comparing the frequency spectrum of the signals presented in Figure 5 with the signals in the time-domain depicted in Figure 2, it is evident that the frequency bin spacing represents the period of the analyzed signals.
The first bin, that is, the DC component of the signal, is used as the 1001st attribute, while the first up to the 29th harmonic components are used as the remaining attributes. The frequency of each harmonic component depends on the chewing pattern, as mentioned above. In total, 30 extra attributes are extracted from the signal and used in conjunction with the 1000 attributes from the original signal to form the 1030 attributes used for training.
After the creation of the training set, it was submitted to the decision tree algorithm C4.5, described in Section 2.2, for the generation of the decision tree.
Results and Discussion
During the generation of the decision tree, k-fold cross-validation was used [31]. k-fold cross-validation evaluates the generalization ability of the set of rules generated for the decision tree.
In k-fold cross-validation, the dataset is divided randomly into k subsets. Of these, k − 1 subsets are used for training, and one subset is used for testing. This process is repeated k times, with each of the k subsets used exactly once as the test set. In this way, different classifiers are obtained, and the accuracy of the training and testing sets can be evaluated. After the tests, the classifier that provided the best accuracy was selected. The tests were carried out using 10-fold cross-validation.
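The evaluation scheme can be sketched with scikit-learn, using its CART-based DecisionTreeClassifier as a stand-in for C4.5 (which scikit-learn does not provide); the fold with the best held-out accuracy supplies the selected classifier, as described above.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.tree import DecisionTreeClassifier

def ten_fold_selection(X, y, n_splits=10, seed=0):
    """Train one tree per fold and keep the one with the best held-out accuracy.
    DecisionTreeClassifier implements CART, used here as a stand-in for C4.5."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    best_model, best_acc, accs = None, -1.0, []
    for train_idx, test_idx in skf.split(X, y):
        model = DecisionTreeClassifier(random_state=seed)
        model.fit(X[train_idx], y[train_idx])
        acc = model.score(X[test_idx], y[test_idx])
        accs.append(acc)
        if acc > best_acc:
            best_model, best_acc = model, acc
    return best_model, np.mean(accs), best_acc
```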
To reduce the risk of overfitting of the decision tree, the algorithm uses the post-pruning method [13,32]. The post-pruning aims to generalize the decision tree that was generated in the training phase by generating a subtree that avoids overfitting of the training data. The post-pruning consists of removing nodes of the tree that do not contribute to its generalization ability. Pruning reduces the complexity of the induced set of rules, improving the accuracy and reducing the overfitting [33,34].
The knowledge obtained in the application of the machine learning technique is represented by a set of ordered rules, which are shown in Figure 6. Rule R1 is read as follows: "IF (the a_851 attribute value is less than or equal to 1541.107862 nm) THEN (ryegrass class)". Figure 6. Example of the set of rules generated by the classifier. Figure 7 represents the decision tree generated from the training set D, presented in Section 3.2. The classifier is composed of 83 nodes, 42 leaf nodes and 41 decision nodes. It should be highlighted that the first to the 1000th attributes are related to the acquired signals, while the 1001st to 1016th attributes come from the frequency spectrum obtained through the FFT calculation. The 1002nd attribute divided the tree into two subtrees. The first one classified all instances of idleness; this attribute refers to information obtained with the introduction of the frequency spectrum. The second subtree has decision nodes composed of original signal-related attributes as well as informational attributes from the FFT. Among the subdivisions of the decision tree, the subtree formed by attribute 1001, which contains the average value of the signal, can also be highlighted. This node sorted most of the instances of the dietary supplement class.
The frequency components of the signals were used to compose the structure of the decision tree presented in Figure 7. Observing the decision tree, the first node of the tree is the fundamental frequency component a_1002. This node is responsible for separating the idleness class from the other classes. This class has the lowest fundamental frequency, that is, 1.54 Hz. Hence, all of the idleness events are classified near the starting node.
Another important node is related to the attribute a_1001. This node corresponds to the DC level of the signals and separates the rumination class from the others. The rumination class has the lowest DC value among the classes dietary supplement, hay and ryegrass, as can be seen in Figure 2. In fact, the DC level of the idleness class is lower than that of rumination, but the fundamental frequency, node a_1002, was already responsible for its classification.
The second harmonic component, node a_1003, separates the tree into two subtrees. One subtree is responsible for classifying the classes hay and ryegrass. Inside this subtree, the decision nodes are associated with the information that came from the frequency spectrum. The second subtree that starts at node a_1003 classifies the classes dietary supplement and ryegrass. In the subtrees derived from this point, a great part of the decision nodes is associated with attributes that came from the sampled signal.
The data presented in Figures 2-4 show the strain of the bone tissue during chewing movements. However, the dataset used during the training of the decision tree classifier was formed by wavelength values collected by the optical interrogator, since one of the objectives of the study was the classification of chewing movements in real time. Consequently, the rules shown in Figure 7 are expressed in wavelength values. Table 1 displays the confusion matrix with the classification results of the training set D. The generated classifier provided a higher accuracy for the rumination and idleness classes. These classes have wavelength values and waveforms clearly distinguished from the others. The classes dietary supplement and ryegrass provided the lowest accuracy. Changes in strain values for these foods are similar in some chewing samples: for dietary supplement, the strain is approximately 50 µm/m, while ryegrass had instances with strain on the order of 60 µm/m. Some samples of these classes also have similar waveforms, which results in the incorrect classification of some instances. The average success rate was 94%, i.e., of the 1000 training set instances, 940 were correctly classified during the process of generating and testing the decision tree. This success rate surpasses those of other methods available in the literature, which are around 84% and consider at most four classes [11].
Other similar tests were also conducted with smaller numbers of forecaster attributes. In a second test, 500 attributes of the original signal were used along with the same frequency components described previously. In this test, the accuracy was 92%. Another test using 250 attributes of the original signal with the same frequency components resulted in an accuracy of 89.5%. Even with a reduced number of forecaster attributes, the accuracies remain around 90%, which can be considered satisfactory when compared to other techniques in the literature that provide 84% accuracy.
Previous works had already discussed the use of optical sensors in other biomechanical applications where bone strain must be measured [16,35,36]. However, no in vivo studies were performed. The experiments presented in this study were conducted on one animal due to the restricted availability of animals for study. Nevertheless, the technique can be applied to other animals, providing new data for processing. With additional data, it is possible to include different information during the training and validation of the classification system, aiming at a decision tree with better generalization characteristics.
Conclusions
This work presented the development of a system for the classification of chewing patterns in ruminants. This system can bring significant improvements to the studies related to animal nutrition and precision livestock farming.
The proposed approach was based on machine learning using the decision tree algorithm C4.5. The data provided to the classifier were obtained through in vivo optical extensometry using fiber Bragg grating sensors. The sensor used has high sensitivity, and at the same time, it is immune to electromagnetic interference and offers excellent features of biocompatibility.
It was shown that the application of FBG sensors to collect ruminant nutrition-related data is promising. The sensor response is capable of showing the dependence of the chewing force (through the bone strain) and frequency on the hardness and fibrous contents of the given food.
The obtained classifier provided a high success rate due to the pre-processed data. The pre-processing includes information on the frequency spectrum of the signals.
The achieved results showed that the decision tree technique can be used to generate a classifier trained with data from different types of chewing patterns. The classifier was also able to identify other events related to animal nutrition, such as idleness and rumination. The latter one is an important source of data for animal health and welfare monitoring.
The induced set of decision rules was capable of generalizing most patterns in an appropriate manner. The overall success rate average was 94%. Success rates significantly exceed those observed in the literature, where the classifiers work with a maximum of four different classes, and the percentage of success is approximately 84%. Moreover, the technique presented in this paper, after the proper training, allows for greater automation of classification when compared to the usual methods. | 6,041 | 2015-11-01T00:00:00.000 | [
"Agricultural and Food Sciences",
"Computer Science"
] |
A Neurospora crassa mutant which overaccumulates carotenoid pigments
This work is licensed under a Creative Commons Attribution-Share Alike 4.0 License. This regular paper is available in Fungal Genetics Reports: http://newprairiepress.org/fgr/vol31/iss1/6 Results: Using this system the incorporation of [35S]methionine into acid-insoluble material increased linearly for 40-60 min in the endogenous system. The N. crassa lysate subjected to micrococcal nuclease treatment did not support any translation; no increase in incorporation of the label into the protein fraction was observed as a function of incubation time. On priming this lysate with Neurospora RNA, translation proceeded for about 60 min. At optimal levels of exogenously added RNA, the incorporation of the label approached the level of endogenous translation observed in untreated lysates. Translation of both poly(A)-containing and poly(A-) RNA fractions of Neurospora was supported by this system. Analysis of in vitro translation products of endogenous messengers and exogenously supplied Neurospora mRNA on SDS-polyacrylamide gels revealed a protein profile identical to that demonstrated in extracts of cells pulse-labelled in vivo with [35S]methionine. Polypeptides of up to 200,000 daltons were synthesized in vitro. Pyruvate kinase was detected in the translation products by immunoprecipitation using monospecific antibodies as well as by immunoadsorption on Affigel columns coupled to anti-PK antibody. It is necessary to establish the optimum K- and Mg-acetate concentration for each type of mRNA, as the efficiency of translation and its dependence on these salts may vary from message to message. Whereas the optimal Mg-acetate requirement for both endogenous and exogenous translation was 0.35 to 0.40 mM, endogenous protein synthesis exhibited a sharp K-acetate optimum at 80 mM, but the exogenous PK-RNA-enriched fraction was translated more efficiently with 20-50 mM. Another factor that influenced translation was the GTP concentration. With 0.125 mM GTP both endogenous and exogenously primed protein synthesis proceeded efficiently for only ~10 min. On the other hand, with 0.25 mM GTP it was observed to proceed linearly for at least 40 min. This could be due to the stabilization of initiation factors by higher levels of GTP. Heterologous RNA, such as globin mRNA (BRL), was translated efficiently in this system, the optimum K- and Mg-acetate concentrations being 250 mM and 4 mM, respectively. The translation of exogenous RNA was comparable to that supported by the commercial rabbit reticulocyte lysate (BRL), as witnessed by a similar incorporation of [35S]methionine. University of Calgary, Calgary, Alberta, Canada, T2N 1N4, Department of Biology. Harding, R.W., D.Q. Philip*, B.Z. Drozdowicz**, and N.P. Williams. A Neurospora crassa mutant which overaccumulates carotenoid
Absorption spectra of neutral and acidic carotenoid extracts (in hexane) obtained from dark- or light-treated wild-type and ovc (S20-16) strains.
Treatments were given at 6 vs. 25° C as described in the text. The Em5297a wild-type strain is designated as Em in the figure. The volume of each extract was adjusted to 10 ml hexane/g dry weight of mycelia extracted before absorption spectra were determined.
Previously it was demonstrated that photoinduced carotenoid biosynthesis in Neurospora crassa mycelia shows an unusual temperature dependence (R.W. Harding. 1974 Plant Physiol. 54: 142-147). The primary light reaction is independent of temperature as expected, but the amount of pigment which subsequently accumulates in the dark is temperature dependent, and surprisingly the optimum temperature is 6°C. We have isolated mutants which produce more pigment than the wild-type strain at temperatures above 6° C but about the same amount at 6° C. We have designated this type of mutant as ovc (overaccumulator of carotenoids).
The ovc locus was then put into a wild-type background by a series of four backcrosses with Em5297a (FGSC #352). Corresponding dark controls were carried out for each of these treatments. After a light treatment and dark incubation at 25° C, the ovc mutant had accumulated more neutral and acidic carotenoid pigment than the wild-type, but both strains produced less pigment at 25° C than at 6° C. The ovc strain also produced significantly more pigment than the wild-type in the dark at both 6° and 25° C. This dark production of pigment can be readily observed visually. However, the mutant is not fully induced in the dark, since light still produces a dramatic increase in carotenoid production.
To determine which neutral pigment the ovc mutant overaccumulates at 25° C, alumina chromatography (6% water deactivated alumina) of neutral carotenoid extracts was carried out.
In Table I, it is shown that the light treated ovc mutant produces higher levels of every neutral carotenoid at 25° C except phytoene.
In addition, more neurosporaxanthin (the major acidic pigment) is produced by the ovc strain. At 6° C, the accumulation of each pigment following irradiation (with the exception of α-carotene and 3,4-dehydrolycopene) is comparable in the two strains. The ovc mutant did not produce any carotenoids not previously identified in wild-type Neurospora.
Mapping of the ovc locus was carried out.
From backcrosses of ovc col-4 double mutants with wild-type, the ovc and col-4 loci were found to be linked. In a subsequent cross of ovc col-4+ × ovc+ col-4, a recombination frequency of 14% (133 progeny scored) was determined.
Subsequently, ovc was shown to be linked to met-5 (9666, FGSC #141), as expected, since the met-5 locus is about 23 map units to the right of col-4 on linkage group IVR (A. Radford, 1972 Neurospora Newsletter 19: 25-26).
The order of these loci was determined by the cross shown in Table II.
These results show that the gene order is col-4, ovc , met-5 with a recombination frequency between col-4 and ovc of approximately 10% and between ovc and met-5 of about 14%. | 1,310.4 | 1984-01-01T00:00:00.000 | [
"Biology"
] |
Experimental quantum tossing of a single coin
The cryptographic protocol of coin tossing consists of two parties, Alice and Bob, that do not trust each other, but want to generate a random bit. If the parties use a classical communication channel and have unlimited computational resources, one of them can always cheat perfectly. Here we analyze in detail how the performance of a quantum coin tossing experiment should be compared to classical protocols, taking into account the inevitable experimental imperfections. We then report an all-optical fiber experiment in which a single coin is tossed whose randomness is higher than achievable by any classical protocol and present some easily realisable cheating strategies by Alice and Bob.
Introduction
The cryptographic protocol of coin tossing introduced by Blum [1] consists of two parties, Alice and Bob, that do not trust each other, but want to generate a random bit. If the parties use a classical communication channel and have unlimited computational resources, one of them can always cheat perfectly. But what if they use a quantum communication channel? Because of its conceptual importance and potential applications, quantum coin tossing was already envisaged by Bennett and Brassard in their seminal paper on quantum cryptography [2]. Later works showed that perfect quantum coin tossing is impossible [4,5,6], but that imperfect protocols exist [7,5,8,9,10,11] that perform better than any classical protocol.
Work on quantum coin tossing distinguishes between "weak coin tossing" and "strong coin tossing". In weak coin tossing Alice and Bob have antagonistic goals: Alice wants the coin to be heads, say, whereas Bob wants the coin to come out tails. Good quantum protocols for weak coin tossing exist, although they seem very difficult to implement [11]. In strong coin tossing Alice and Bob both want the coin to be perfectly random. Quantum protocols that perform better at strong coin tossing than any classical protocol exist [9,10] and come close to the known upper bound (for the original unpublished proof of the upper bound, see [6]; published proofs can be found in [12,13]).
Quantum coin tossing itself is just one example of several interesting tasks that two parties which do not trust each other can achieve if they share a quantum communication channel, but cannot achieve if they use a classical communication channel. Other examples include multiparty coin tossing [12] and weak forms of string committment [14,15]. The no go theorems mentioned above [4,5,6] rule out most other applications, except if one adds additional assumptions such as bounding the size of quantum memories [16].
Recently two works [17,18] have experimentally studied optical implementations of quantum coin tossing. However the experiment of ref. [17] suffered from important photon loss which made it difficult to assess how the experiment worked when tossing a single coin. This was circumvented, as in [18], by addressing string flipping, i.e. the problem where the parties try to toss a string of coins rather than a single one. These works were however carried out without realizing that good classical protocols exist for string flipping, see e.g. [19] for a presentation of such protocols.
In the present work we go back to the conceptually simpler problem of tossing a single coin, and report an experiment in which a single coin is tossed whose randomness is higher than achievable by any classical protocol. We begin by discussing in detail how the results of such a coin tossing experiment should be compared with classical protocols in view of the inevitable imperfections that will occur in any experimental realisation. Coin tossing in the presence of noise was already studied in [20], but with the emphasis on applications to string flipping, whereas here we are concerned with tossing a single coin. We then present the experimental implementation, which follows closely the earlier work of [18], and present some easily realisable cheating strategies by Alice and Bob.
Formulation of the problem
A protocol for coin tossing consists of a series of rounds of (classical or quantum) communication at the end of which the parties decide on an outcome. The outcome can be either a decision that the coin has the value c = 0 or c = 1, or it can be that the protocol aborts, in which case we say that c = ⊥. Note that because the rounds of (quantum or classical) communication are sequential, it is logically possible for Alice to choose one output x and for Bob to choose another output y. For the sake of generality it is convenient to take this into account and to denote by p_xy the probability that, in an honest execution of the protocol, Alice outputs x and Bob outputs y, where x, y ∈ {0, 1, ⊥}.
We will say that a protocol is correct if, when both parties are honest, they agree on the outcome at the end of the protocol, and the results c = 0 and c = 1 occur with equal probability: p_00 = p_11 = (1 − p_⊥⊥)/2. This formulation takes into account that, because of experimental imperfections, the outcome c = ⊥ may occur even when both parties are honest.
The aim of a cheater is to force the outcome of the coin tossing protocol. We denote by p*_y the probability that a dishonest Alice can force an honest Bob to output y, and by p_x* the probability that a dishonest Bob can force an honest Alice to output x. An alternative notation often used in the literature is the bias ε, such that a cheating probability of 1/2 + ε corresponds to a bias of ε. The bound due to Kitaev [6,12,13] states that either ε_A or ε_B is greater than or equal to 1/√2 − 1/2. The best known protocol for strong coin tossing, due to Ambainis, has ε_A = ε_B = 1/4. In our experimental implementation, as we will see later, we will be concerned with a protocol which in the terminology of [10] has "ρ_0 and ρ_1 both pure". For such protocols it is proven in [10] that ε_A² + ε_B² ≥ 1/4. In the appendix we prove the following (which generalises a result of [6] valid when p_⊥⊥ = 0). Lemma 1: For any correct classical coin tossing protocol with three outcomes 0, 1, ⊥ the cheating probabilities satisfy inequalities (2) and (3). Note that if p_⊥⊥ = 0 these inequalities imply that either p_0* = 1 or p*_1 = 1, and that either p_1* = 1 or p*_0 = 1, thereby showing that classical coin tossing is impossible. When p_⊥⊥ ≠ 0 a cheater can no longer necessarily force the outcome he wants. In the supplementary material we show that there exist classical protocols that saturate either one of equations (2) or (3), and that there exist classical protocols that come close to saturating both equations (2) and (3).
In view of Lemma 1, it is natural to quantify the quality of quantum coin tossing experiments by a merit function M built from the terms (1 − p*_1)(1 − p_0*) and (1 − p*_0)(1 − p_1*). The interpretation of the merit function is most obvious in the weak coin tossing scheme, wherein Alice wins if Bob outputs 1 while Bob wins if Alice outputs 0, because then the term (1 − p*_1)(1 − p_0*) is the product of how often a dishonest Alice cannot force a win times how often a dishonest Bob cannot force a win (and similarly for the term (1 − p*_0)(1 − p_1*)). The better the protocol, the larger these terms.
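A small sketch of such a merit function is given below. The exact combination used in the paper is not reproduced above; the code assumes the symmetric sum of the two products, which has the expected behaviour in the two limiting cases shown.

```python
def merit(p_star_0, p_star_1, p_0_star, p_1_star):
    """Assumed symmetric form of the merit function: the sum of the two products
    discussed in the text.  p_star_c is Alice's probability of forcing outcome c
    on an honest Bob; p_c_star is Bob's probability of forcing outcome c on an
    honest Alice."""
    return (1 - p_star_1) * (1 - p_0_star) + (1 - p_star_0) * (1 - p_1_star)

# an ideal (unachievable) coin toss: nobody can cheat beyond the honest 1/2
print(merit(0.5, 0.5, 0.5, 0.5))   # 0.5

# a classical protocol with p_perp_perp = 0: one party can always force each outcome
print(merit(1.0, 1.0, 0.5, 0.5))   # 0.0
```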
The Protocol
Our implementation of quantum coin tossing uses the following protocol: (i) Alice chooses a ∈ {0, 1} at random. She prepares the state ψ_a, where the two possible states are non-orthogonal: |⟨ψ_1|ψ_0⟩| = cos θ > 0. She sends ψ_a to Bob. The states ψ_0 and ψ_1 will be taken to be coherent states of light |±α⟩ of amplitude α and opposite phase, which implies that cos θ = e^{−2α²}. In the notation of [10] we thus have ρ_0 = |ψ_0⟩⟨ψ_0| and ρ_1 = |ψ_1⟩⟨ψ_1|, both pure. Note also that the purity of ρ_0 and ρ_1 rules out cheating strategies based on entanglement [3,4].
(ii) Bob chooses b ∈ {0, 1} at random. He tells the value of b to Alice.
(iii) Alice tells Bob the value of a.
(iv) Bob carries out a measurement which projects onto ψ a or onto the orthogonal space. If he finds that the state is not equal to ψ a he aborts, and the outcome of the protocol is ⊥. If he finds that the state is equal to ψ a then the outcome of the protocol is c = a ⊕ b.
Bob's measurement is carried out as follows: using a local oscillator (LO), he displaces the quantum state by +α if a = 1 or by −α if a = 0. If Alice is honest this results in the state becoming the vacuum state. To check this Bob then sends the resulting state onto a single photon detector. If the detector clicks then Bob assumes that Alice was cheating and he aborts: the outcome of the protocol is ⊥.
If the detector does not click, then Bob assumes that Alice is honest. (Note that Bob's measurement is similar in spirit to the method proposed in [21] for quantum state tomography, but Bob's task is simpler since he only needs to detect if Alice is cheating, and not carry out the full state tomography).
Analysis in the absence of imperfections
We now study how the merit function M depends on the details of the experiment. For the sake of comparison we first look at the situation in the absence of imperfections. First of all, in this case p_⊥⊥ = 0. Second, if Alice is dishonest she will send a fixed state |φ⟩ at step 1, and at step 3 she will choose the value of a that makes her win the protocol, hoping that Bob will not abort. The probability that Bob aborts is determined by the overlap of |φ⟩ with |ψ_0⟩ and |ψ_1⟩. One easily finds (see [20]) that Alice's optimal choice is |φ⟩ = N(|ψ_0⟩ + |ψ_1⟩), where N is a normalization constant, yielding the optimal values: Third, if Bob is dishonest, he will measure the state sent by Alice at step 2 so as to try to find out whether it is ψ_0 or ψ_1, and he will then choose the value of b according to the result of his measurement. For the optimal measurement the probability that Bob wins is: The maximal value of the merit function is M_max. Note that this is the maximum value for protocols which, in the terminology of Spekkens and Rudolph [10], fall into the category "ρ_0 and ρ_1 both pure".
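To make the ideal-case analysis concrete, here is a minimal Python sketch of the quantities just described. It assumes that Bob's optimal measurement is the standard minimum-error (Helstrom) discrimination of the two pure states, and that Alice's optimal attack is the superposition state given above; the value α² = 0.27 (the mean photon number used later in the experiment) is used purely for illustration.

import math

def overlap(alpha2):
    # |<psi_1|psi_0>| for two coherent states of intensity alpha2 and opposite phase
    return math.exp(-2.0 * alpha2)

def p_alice_ideal(alpha2):
    # Probability that a dishonest Alice is not caught when she sends the
    # optimal superposition N(|psi_0> + |psi_1>): |<psi_a|phi>|^2 = (1 + cos(theta))/2
    return 0.5 * (1.0 + overlap(alpha2))

def p_bob_ideal(alpha2):
    # Optimal probability for a dishonest Bob to guess a from a single copy of |psi_a>
    # (standard Helstrom bound, assumed here to correspond to the omitted equation)
    c = overlap(alpha2)
    return 0.5 * (1.0 + math.sqrt(1.0 - c * c))

if __name__ == "__main__":
    alpha2 = 0.27  # mean photon number used later in the experiment
    print(f"overlap cos(theta)             ~ {overlap(alpha2):.4f}")
    print(f"ideal Alice cheating probability ~ {p_alice_ideal(alpha2):.4f}")
    print(f"ideal Bob cheating probability   ~ {p_bob_ideal(alpha2):.4f}")

For α² = 0.27 this gives a scalar product of about 0.58, an ideal cheating probability of about 0.79 for Alice and about 0.91 for Bob; the merit function itself is not evaluated here because its exact definition is given by the equation omitted above.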
Analysis in the presence of imperfections
Obtaining estimates of p_*c, p_c* and p_⊥⊥ in the presence of imperfections, and hence estimating M, requires assumptions on how the experiment is carried out. The parameter p_⊥⊥, which we also call the Quantum Bit Error Rate (QBER), can easily be measured experimentally by tossing a large number of coins with Alice and Bob both following their honest strategies.
Bob is dishonest
When Bob is dishonest his cheating strategy is, as before, to estimate before step 2 the state |ψ_a⟩ prepared by Alice so as to correctly guess the value of a. How do experimental imperfections, and in particular the limited visibility V of the interference, affect Bob's success probability p_c*? To analyse this, note that the state Alice sends to Bob is a short laser pulse of known intensity which is then strongly attenuated. Under strong attenuation all quantum states tend towards mixtures of coherent states (see e.g. [18]). Thus we can assume that the states prepared by Alice are coherent states of known intensity α². These coherent states are not precisely known to Alice. However, it is not difficult to show that if two coherent states have intensity α², their scalar product is lower bounded by |⟨ψ_1|ψ_0⟩| ≥ e^{−2α²}. Bob's cheating probability can then be bounded, as in equation (8), by the scalar product of the two states prepared by Alice:
Alice is dishonest
When Alice is dishonest we suppose that she can prepare an arbitrary state just in front of Bob's laboratory, and then send it to Bob. How do the imperfections in Bob's laboratory affect p_*c? To quantify this, Bob could carry out a complete tomography of his measurement apparatus and, based on the results, compute Alice's best cheating strategy. Here we make a simpler estimate based on easily accessible parameters. First of all, let us consider the effects of the attenuation A_T during transmission between Alice's and Bob's laboratories, of the attenuation A_B in Bob's apparatus, and of the efficiency η of his detector. We take these parameters into account by analysing a fictitious system in which Bob's apparatus is replaced by a lossless one and all the attenuation is under Alice's control, i.e. η_fict = 100%, A_B^fict = 1, with all the losses lumped into the fictitious transmission A_T^fict. This replacement can only help a cheating Alice. In the fictitious system the state sent by an honest Alice is |±α_B^fict⟩. Second, we analyse the effect of finite visibility on the performance of the fictitious system just described. Because of the finite visibility, Bob will not be making a projection onto the states |±α_B^fict⟩, but onto slightly different states. We make the assumption that Bob's apparatus acts as a passive linear optical system. This implies that the true states onto which Bob projects are slightly modified coherent states |±α_B^fict + δ_±⟩. The deviations δ_± give rise to the optical contribution to the QBER, where q, the QBER per photon, can be related to the visibility V of the interference by q ≃ (1 − V)/2. (Note that in addition to QBER_opt there is another contribution to the QBER due to the dark counts of the detectors. The total QBER is the sum of these two contributions: QBER = QBER_opt + QBER_dk.) The distance between the two states onto which Bob projects is given by: Inserting this into equation (7) shows that the effect of the imperfections is to replace α² by an effective attenuated intensity.
Experimental results
Our experimental setup, depicted in Fig. 1, is based on the plug and play system developed for long distance quantum key distribution [22] and is very similar to the one described in [18]. It consists of an all-fiber (standard SMF-28) passively balanced interferometer, and is therefore well suited to long distance quantum communication. The protocol begins with Bob producing a short (300 ps) intense laser pulse at λ = 1.55 µm (id300 from idQuantique). The pulse is split in two by the coupler C1, with equal 50% reflection and transmission coefficients. The two pulses are delayed with respect to each other by 134 ns. The pulses are then recombined on a polarizing beam splitter (PBS) and sent to Alice. The pulse that propagated along the long arm of the interferometer is strongly attenuated and will play the role of the signal. The pulse that propagated along the short arm will play the role of the local oscillator (LO). Upon receiving the pulses, Alice splits off part of them using the coupler C2 and sends this to a photodiode that triggers her electronics. At Alice's site the pulses are further attenuated by the different optical elements. They are reflected by the Faraday mirror, and Alice randomly chooses which phase Φ_A ∈ {0, π} to apply to the signal pulse using her phase modulator. The signal Alice sends back to Bob is thus the coherent state |±α⟩ with average photon number |α|² = 0.27. When the pulses come back to Bob's site, they are sent along the short and long arms of the interferometer by the PBS and interfere at coupler C1. In front of the PBS is a delay line belonging to Bob, which ensures that after the pulses enter Bob's laboratory he has the time to send Alice the value of the bit b and then receive from her the value of a. In our experiment the fiber pigtails of the PBS are sufficient to realize the delay. Upon receiving the value of a, Bob puts the corresponding phase Φ_B = aπ on the LO. This ensures that there should be destructive interference at the output port that goes to the circulator and then to detector D1 (id200 from idQuantique). If detector D1 registers a click, Bob aborts. If it does not click, the outcome of the coin toss is c = a ⊕ b. The other output of coupler C1 is monitored by detector D2, although this is not directly used in the experiment.
There are in fact two security loopholes in this experiment. The first arises because Alice does not know the intensity of the signal pulse she attenuates before sending it back to Bob. Thus in principle Bob could send her a more intense state than expected, which would mean that the scalar product of the states prepared by Alice would be smaller than expected. The second security loophole arises because Bob does not know the intensity of the pulse he uses as LO. Thus in principle Alice could send Bob the vacuum state, both in the signal and LO, and cheat perfectly. Both loopholes could be closed by having Alice (Bob) monitor the intensity of the signal (LO) before she (he) attenuates it. This was not realised in the present setup because the laser pulses used were not intense enough, but would be possible using more intense or longer laser pulses as in [18], or by using an isolator combined with an amplitude modulator as in [24].
3.4.1. Both parties are honest
As mentioned above, we performed the experiment with |α|² = 0.27. In a typical series 10 000 coins were tossed; we obtained 5066 occurrences of c = 1 and 2 occurrences of c = ⊥, the other outcomes being c = 0 (which is consistent with the statistical uncertainty, which should be of order √5000 ≈ 70). We insist, however, that the protocol can be used to toss a single coin.
We estimate the merit function as follows. The abort probability is estimated by tossing a large number (1.5 × 10⁵) of coins with Alice and Bob both honest, where the error comes from statistical uncertainty. The transmission losses are assumed to be negligible, A_T = 1, as the two parties are separated by only a few meters of optical fiber. Bob's detector D1 has a quantum efficiency of η = 10%. It is gated using a 2.5 ns gate, leading to a dark count probability of 4.7 × 10⁻⁵. The attenuation of the signal in the optical elements of Bob's laboratory has been measured to be A_B ≃ −6 dB (which includes the 3 dB losses at coupler C1 where the signal and the LO interfere). Visibilities, as measured using an intense signal, were at least 99.0% (corresponding to q = 5 × 10⁻³). By inserting these parameters into equations (9) and (12) we obtain upper bounds for p_*c and p_c*: p_*c ≤ 0.9971 and p_c* ≤ 0.906 (14), leading to the lower bound (15) for the merit function. This bound may seem very small. Its value is roughly explained by noting that the maximal value in the absence of imperfections is M_max = 0.021. The main sources of imperfection are the efficiency of the detectors (10 dB) and the losses in Bob's apparatus (6 dB). Thus we should reduce the attainable value of M by a factor of about 40, yielding approximately equation (15). This argument shows that the simplest way to improve the experiment would be to use a more efficient detector. It also shows that the value of M is rather robust against small variations of the experimental parameters. We have computed that we could keep M positive while increasing the losses between Alice and Bob to A_T ≃ 4.4 dB (more than 20 km of SMF-28 fiber), all other parameters being kept constant.
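As a rough numerical cross-check of these bounds, the sketch below plugs the quoted parameters into the simplified expressions of the ideal-case sketch, lumping all the attenuation into an effective intensity α²_eff = α² · A_T · A_B · η as suggested by the fictitious-system argument. This is only an assumption about the form of equations (9), (11) and (12), which are not reproduced here; in particular the visibility correction is omitted, so the estimate for Alice comes out slightly below the quoted 0.9971, while the estimate for Bob reproduces the quoted 0.906.

import math

def db_to_linear(db):
    # Convert an attenuation given in dB to a linear transmission factor
    return 10.0 ** (-abs(db) / 10.0)

# Experimental parameters quoted in the text
alpha2 = 0.27          # mean photon number of the signal
A_T = 1.0              # transmission between Alice and Bob (assumed lossless)
A_B = db_to_linear(6)  # ~ -6 dB losses in Bob's apparatus
eta = 0.10             # detector quantum efficiency

# Dishonest Bob: bounded by the optimal discrimination of the two coherent states,
# using the overlap lower bound |<psi_1|psi_0>| >= exp(-2*alpha2) (Helstrom bound assumed)
c = math.exp(-2.0 * alpha2)
p_bob = 0.5 * (1.0 + math.sqrt(1.0 - c * c))
print(f"bound on Bob's cheating probability    ~ {p_bob:.3f}")    # ~0.906, as quoted

# Dishonest Alice: rough estimate with all losses lumped into an effective intensity.
# The visibility correction of eqs. (11)-(12) is omitted, so this slightly
# underestimates the quoted bound of 0.9971.
alpha2_eff = alpha2 * A_T * A_B * eta
p_alice = 0.5 * (1.0 + math.exp(-2.0 * alpha2_eff))
print(f"rough bound on Alice's cheating probability ~ {p_alice:.3f}")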
Bob is dishonest
In order to cheat, Bob must estimate the state |ψ_a⟩ prepared by Alice so as to correctly guess the value of a before sending the value of b. We implemented a simple cheating strategy in which Bob always applies Φ_B = 0 on the LO. If detector D1 clicks, Bob assumes that Alice chose a = 1, whereas if D1 does not click he assumes a = 0. Implementing this strategy yielded the value p_1* = 0.505. This very low value is due to the small values of η and A_B. Note that a much better cheating strategy, which was however impossible to implement in our laboratory, would be for Bob to carry out a homodyne measurement and measure the quadrature that gives him the best estimate of a.
Alice is dishonest
As discussed above, when Alice is dishonest her best strategy is to send a fixed state |φ⟩ = N(|+α⟩ + |−α⟩) to Bob. After receiving b she then sends the value of a that makes her win the coin toss and hopes that Bob will not abort. In practice we implemented a strategy in which Alice always sends |+α⟩. Even though this strategy is very basic, it leads to p_*c = 0.9956, which is very close to the theoretical maximum of equation (14).
Conclusion
In conclusion, we have studied in detail how the performance of quantum coin tossing protocols in the presence of imperfections should be compared to classical protocols. We then reported on a fiber-optics experimental realisation of a quantum coin tossing protocol. Our analysis shows that in this realisation the maximum cheating probabilities for Alice and Bob are respectively 0.9971 and 0.906 when experimental imperfections are taken into account, which is still better than what is achievable by any classical protocol. We implemented this protocol using an all-optical-fiber scheme and tossed a coin whose randomness is higher than achievable by any classical protocol. Finally, we implemented simple, realisable cheating strategies for both Alice and Bob.
After the present work was completed, we learned of a recent proposal specially designed for carrying out quantum coin tossing in the presence of losses [23]. Obviously, taking into account losses, in particular those that occur in Bob's apparatus, was an important consideration when choosing and analysing the protocol reported here. The protocol reported in [23] seems more tolerant to loss than ours. Once the effect of other imperfections (such as the finite visibility of interference fringes) is taken into account, it could be compared to ours using the merit function M introduced above.
Denote by w(u_j) the probability of reaching state u_j at round j in an honest execution of the protocol.
Denote by w(u_{j+1}|u_j) the probability that, in an honest execution, the protocol will be in state u_{j+1} at round j + 1 if it is in state u_j at round j.
Denote by p_*y(u_j) the maximum probability that, if Alice is dishonest and Bob is honest, Alice can force Bob to output y at the end of the protocol if the state at round j is u_j.
Denote by p_x*(u_j) the maximum probability that, if Bob is dishonest and Alice is honest, Bob can force Alice to output x at the end of the protocol if the state at round j is u_j.
Note also that at round K, when the protocol has ended, T_K is equal to the sum, over the final states of the protocol in an honest execution, of the product of the probabilities that the output of Alice is not x and that the output of Bob is not y. Thus if we take x = 0 and y = 1, then T_K(0, 1) = p_⊥⊥, i.e. the right hand side of eq. (A.1).
To complete the proof we show that T is a non-decreasing function of j, i.e. T_{j+1} ≥ T_j. To this end, suppose that at round j Bob sends some communication to Alice.
Then Alice cannot influence what will happen at round j, hence we have p_*y(u_j) = Σ_{u_{j+1}} w(u_{j+1}|u_j) p_*y(u_{j+1}). Furthermore we have the trivial identity w(u_{j+1}) = Σ_{u_j} w(u_{j+1}|u_j) w(u_j). Finally we note that since it is Bob's turn to talk at round j, we have 1 − p_x*(u_j) ≤ 1 − p_x*(u_{j+1}), where u_{j+1} is any state at round j + 1 that can be obtained from state u_j at round j in an honest execution.
Inserting these identities into the definition of T_j, we obtain the desired inequality T_{j+1} ≥ T_j.
The proof of eq. (A.2) is similar. End of proof of Lemma 1.
We have also obtained a partial converse of Lemma 1, stated as Lemma 2. Proof of Lemma 2: Let us consider the following protocol. Round 1: Alice excludes one of the outcomes. That is, she chooses that the outcome of the protocol will be either in {0, 1} (she has excluded ⊥), in {0, ⊥} (she has excluded 1), or in {1, ⊥} (she has excluded 0). She tells her choice to Bob. If she is honest she chooses randomly among these three possibilities with a priori probabilities q_01, q_0⊥, q_1⊥. Round 2: Bob chooses which of the remaining two outcomes is the result of the protocol and tells Alice his choice. Thus, for instance, if Alice told him that the outcome was in {0, 1}, Bob can choose that the outcome is either 0 or 1, but not ⊥. If he is honest he chooses randomly among the two remaining possibilities with probabilities q_{0|01}, q_{1|01}; q_{0|0⊥}, q_{⊥|0⊥}; q_{1|1⊥}, q_{⊥|1⊥}.
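A minimal numerical sketch of this two-round protocol may help fix ideas. It only encodes what is stated in the description above: in an honest execution both parties output Bob's choice; a dishonest Bob simply names his preferred outcome whenever Alice's announced pair allows it; and a dishonest Alice announces the pair that maximises honest Bob's chance of picking the outcome she wants. The probability values used in the example call are placeholders, not values from the paper.

def protocol_stats(q_set, q_cond):
    # q_set: honest probabilities for Alice's announced pair of outcomes
    # q_cond: honest conditional probabilities for Bob's final choice within each pair
    # Honest execution: both parties output Bob's choice, so they always agree.
    honest = {c: sum(q_set[S] * q_cond[S].get(c, 0.0) for S in q_set)
              for c in ("0", "1", "abort")}
    # Dishonest Bob forcing Alice to output c: pick c whenever Alice's pair allows it.
    p_bob_forces = {c: sum(q_set[S] for S in q_set if c in S) for c in ("0", "1", "abort")}
    # Dishonest Alice forcing Bob to output c: announce the pair maximising Bob's
    # honest probability of picking c.
    p_alice_forces = {c: max(q_cond[S].get(c, 0.0) for S in q_set if c in S)
                      for c in ("0", "1", "abort")}
    return honest, p_bob_forces, p_alice_forces

# Placeholder honest distributions (illustrative only)
q_set = {("0", "1"): 0.5, ("0", "abort"): 0.25, ("1", "abort"): 0.25}
q_cond = {
    ("0", "1"): {"0": 0.5, "1": 0.5},
    ("0", "abort"): {"0": 0.5, "abort": 0.5},
    ("1", "abort"): {"1": 0.5, "abort": 0.5},
}
print(protocol_stats(q_set, q_cond))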
Similarly one can saturate inequality (A.2). End of proof of Lemma 2.
| 6,281.8 | 2008-04-28T00:00:00.000 | [ "Physics" ] |
Nitric Acid Dissolution of Tennantite, Chalcopyrite and Sphalerite in the Presence of Fe (III) Ions and FeS2
This paper describes the nitric acid dissolution process of natural minerals such as tennantite, chalcopyrite and sphalerite, with the addition of Fe (III) ions and FeS2. These minerals are typical of the ores of the Ural deposits. The effects of temperature, nitric acid concentration, time, and additions of Fe (III) ions and FeS2 were studied. The highest dissolution degree of the sulfide minerals (more than 90%) was observed at a nitric acid concentration of 6 mol/dm3, an experiment time of 60 min, a temperature of 80 °C, a concentration of Fe (III) ions of 16.5 g/dm3, and an addition of FeS2 at a 1.2:1 ratio to the total mass of minerals. The most significant factors in the break-down of the minerals were the nitric acid concentration, the concentration of Fe (III) ions and the amount of FeS2. Simultaneous addition of Fe (III) ions and FeS2 had the greatest effect on the leaching process. It was also established that FeS2 can act as an alternative catalytic surface for copper sulfide minerals during nitric acid leaching. This helps to reduce the influence of the passivation layer of elemental sulfur due to the galvanic coupling formed between the minerals, which was confirmed by SEM-EDX.
Introduction
Copper is a non-ferrous metal that is in high demand on the market. It is produced from monometallic sulfide ores using traditional technologies. The reserves of such raw materials are limited; therefore, polymetallic ores are receiving a lot of attention. The ores of Russian deposits are characterized by a variety of copper mineral forms and by a close mutual intergrowth of non-ferrous metal sulfides and iron sulfides. This complicates the production of concentrates with the required extraction of the main metals.
The development of an effective method for processing copper polymetallic raw materials with a significant content of fahlore minerals is urgent. Nitric acid leaching [30][31][32] makes it possible to achieve the most complete break-down of the sulfides and the transfer of valuable components into solution, and it makes it possible to obtain stable and safe commercial products containing arsenic [33][34][35][36] and antimony [37] as components.
Minerals of fahlore ores are refractory. The complete dissolution of these minerals and of their dissociation products can require a much larger amount of oxidizer compared to the more widespread copper minerals. This is due to the stepwise dissociation of tennantite and tetrahedrite [38,39]. In this regard, it is advisable to consider the addition of special chemical agents, for example, FeS2 and Fe (III) cations. They reduce the nitric acid consumption and accelerate the process. Previously published studies describe the interaction of chalcopyrite, enargite, arsenopyrite and other sulfides with pyrite [40][41][42][43] and iron (III) [44][45][46][47][48][49] in various oxidizing environments. They demonstrate the positive effect of these additives.
The present paper discusses the optimal conditions of nitric acid leaching of natural tennantite, chalcopyrite, and sphalerite in the presence of Fe (III) ions and FeS2.
Materials
A mixture of natural sulfide minerals, tennantite (Uchalinsky deposit, Sverdlovsk region, Russia), chalcopyrite (Vorontsovsky deposit, North Ural region, Russia) and sphalerite (Karabashsky deposit, South Ural region, Russia), was used as the main raw material. Chalcopyrite, tennantite and sphalerite were mixed in a 1:0.36:0.17 weight ratio. This ratio of minerals is typical for the industrial Cu-As concentrate of the Uchalinsky deposit. The FeS2 additive was a natural pyrite mineral (Berezovsky deposit, Sverdlovsk region, Russia). Its X-ray diffraction pattern is shown in Figure 1d. The chemical composition of the natural minerals is shown in Table 1. All minerals were crushed and sieved. A fraction including 80% of particles with a 20-40 μm size was taken. The granulometric composition of the fraction is shown in Figure 2. Other reagents were of analytical grade. Figure 3 shows scanning electron microscopy (SEM) images of the initial mineral mixture. The chemical composition at selected points, obtained using EDX analysis (Figure 3b), is presented in Table 2. Points 1, 3 and 4 stoichiometrically correspond to chalcopyrite; point 2 corresponds to tennantite.
Concentrate from the Uchalinsky deposit was used as the industrial copper-arsenic raw material. In this material, 95% of the particles were in the −0.074 mm fraction. The chemical composition is shown in Table 3. The X-ray diffraction pattern of the copper-arsenic concentrate is shown in Figure 4a. Based on the results of the X-ray analysis, the main species of the industrial copper-arsenic raw material were determined: FeS2-40%, CuFeS2-30%, Cu12As4S13-14%, ZnS-7%, PbS-2%, SiO2-2%. In the studies with the industrial concentrate, pyrite from the Berezovsky deposit was used. The chemical composition of the pyrite concentrate is presented in Table 4. The X-ray diffraction pattern of the pyrite concentrate is shown in Figure 4b.
Apparatuses
Laboratory experiments on nitric acid leaching were carried out on a setup consisting of a jacketed borosilicate glass reactor (Lenz Minni-60, volume 0.5 dm3; Lenz Laborglas GmbH & Co., Wertheim, Germany). The reactor was thermostated using a Huber CC-205B circulating thermostat (Huber Kältemaschinenbau AG, Offenburg, Germany). Stirring was carried out using a CatR-100C overhead stirrer (Ingenieurbüro CAT, Ballrechten-Dottingen, Germany) at a speed of 350 rpm for efficient mixing of the components.
Experiments
The equilibrium composition and the values of the change in the Gibbs energy were calculated using HSC Chemistry Software v. 9.9 (Metso Outotec Finland Oy, Tampere, Finland).
To obtain the optimal parameters of nitric acid leaching, mathematical planning of the experiment was used. StatGraphics 16 was used to construct a second-order orthogonal matrix with five variable parameters: temperature of 65-95 °C, nitric acid concentration of 3-8 mol/dm3, concentration of Fe (III) ions of 5-20 g/dm3, mass ratio of FeS2 to the mixture of sulfides of (0.5-2):1, and time of 15-60 min.
Before each experiment, the solution was heated to the required temperature. Then, portions of the minerals were added. During the experiment, samples were taken at certain points in time with an automatic Sartorius Proline dispenser (Minebea Intec Aachen GmbH & Co. KG, Aachen, Germany). The final leaching pulp was filtered on a Buchner funnel. The solution was sent for ICP-MS analysis. The leaching cake was washed with distilled water and dried at 80 °C until constant weight. The dried cake was ground on a Pulverisette 6 classic line planetary mill (Fritsch GmbH & Co. KG, Welden, Germany), pressed onto a substrate using a hydraulic Vaneox 40t Automatic press (Fluxana GmbH & Co. KG, Bedburg-Hau, Germany), and sent for X-ray fluorescence analysis.
Analysis
Chemical analysis of the initial minerals and the solid products was performed using the ARL Advant'X 4200 wave dispersive spectrometer (Thermo Fisher Scientific Inc., Waltham, MA, USA). Phase analysis was carried out on the XRD 7000 Maxima diffractometer (Shimadzu Corp., Tokyo, Japan).
The chemical composition of the solutions was determined by inductively coupled plasma mass spectrometry (ICP-MS) using the Elan 9000 instrument (Perkin Elmer Inc., Waltham, MA, USA).
Scanning electron microscopy (SEM) was performed using the JSM-6390LV microscope (JEOL Ltd., Tokyo, Japan) equipped with a module for energy-dispersive X-ray spectroscopy analysis (EDX).
Calculation Method
The dissolution degree of the sulfide minerals was calculated using the following procedure. The mass of dissolved tennantite (m_Cu12As4S13) was calculated using Formula (1). The mass of copper (m_Cu1) in tennantite that passed into the leaching solution was calculated using Formula (2).
where m_Cu12As4S13 is the mass of dissolved tennantite [g]; M_Cu12As4S13 is the molar mass of tennantite [g/mol]; M_Cu1 is the molar mass of copper in tennantite [g/mol]. Based on the mass of copper that went into the leaching solution from tennantite, the total mass of dissolved chalcopyrite (m_CuFeS2) was calculated using Formula (3).
where m_Cu(total) is the total mass of copper that went into the leaching solution from chalcopyrite and tennantite [g]; m_Cu1 is the mass of copper that went into the leaching solution from tennantite [g]; M_CuFeS2 is the molar mass of chalcopyrite [g/mol]; M_Cu is the molar mass of copper in chalcopyrite [g/mol]. The mass of dissolved sphalerite (m_ZnS) was calculated using Formula (4).
where C_Zn is the concentration of zinc in the leaching solution, determined using ICP-MS [g/dm3]; V is the volume of the leaching solution [dm3]; M_ZnS is the molar mass of sphalerite [g/mol]; M_Zn is the molar mass of zinc in sphalerite [g/mol]. The dissolution degree of tennantite, chalcopyrite, and sphalerite was calculated using Formula (5).
where m_MeS is the mass of the dissolved mineral [g]; m_MeS(initial) is the initial mass of the mineral in the mixture [g].
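The verbal description of Formulas (1)-(5) translates into a short calculation script. The sketch below assumes that Formula (1) back-calculates the dissolved tennantite from the arsenic concentration measured in solution (the formula itself is not reproduced above, so this is an assumption); Formulas (2)-(5) follow directly from the definitions given, and the molar masses are approximate literature values.

# Approximate molar masses in g/mol
M_TEN = 1479.0   # Cu12As4S13 (tennantite)
M_CPY = 183.5    # CuFeS2 (chalcopyrite)
M_SPH = 97.4     # ZnS (sphalerite)
M_CU, M_ZN, M_AS = 63.5, 65.4, 74.9

def dissolution_degrees(c_as, c_cu_total, c_zn, volume, m_initial):
    # c_*: concentrations in the leach solution (g/dm3, from ICP-MS)
    # volume: solution volume (dm3); m_initial: dict of initial mineral masses (g)
    # Formula (1) (assumed form): tennantite dissolved, back-calculated from arsenic
    m_ten = c_as * volume * M_TEN / (4 * M_AS)
    # Formula (2): copper brought into solution by the dissolved tennantite
    m_cu_from_ten = m_ten * 12 * M_CU / M_TEN
    # Formula (3): chalcopyrite dissolved, from the remaining copper in solution
    m_cpy = (c_cu_total * volume - m_cu_from_ten) * M_CPY / M_CU
    # Formula (4): sphalerite dissolved, from the zinc concentration
    m_sph = c_zn * volume * M_SPH / M_ZN
    # Formula (5): dissolution degree = dissolved mass / initial mass
    dissolved = {"tennantite": m_ten, "chalcopyrite": m_cpy, "sphalerite": m_sph}
    return {k: dissolved[k] / m_initial[k] for k in dissolved}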
Thermodynamics of Nitric Acid Dissolution of a Mixture of Minerals
To establish the possibility of interaction of the sulfide minerals with a nitric acid solution in the presence of Fe (III) ions, the values of the change in the Gibbs energy (∆G, kJ/mol) were calculated for Equations (6)-(17). The calculation was carried out at an average temperature of 80 °C. Based on the results, it can be concluded that the thermodynamic favourability of Equations (6)-(17) is quite high.
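The Gibbs-energy screening performed in HSC Chemistry essentially amounts to evaluating ΔG(T) = ΔH° − TΔS° for each reaction (neglecting the temperature dependence of ΔH and ΔS, which the software handles more rigorously). The helper below only illustrates this calculation; the enthalpy and entropy of reaction are left as inputs, and the values in the example call are placeholders rather than data from the paper.

def gibbs_energy(delta_h_kj, delta_s_j_per_k, temperature_c):
    # Delta G (kJ/mol) from the reaction enthalpy (kJ/mol) and entropy (J/(mol*K))
    # at a temperature given in degrees Celsius; values < 0 indicate a favourable reaction
    t_kelvin = temperature_c + 273.15
    return delta_h_kj - t_kelvin * delta_s_j_per_k / 1000.0

# Example call with placeholder reaction data (not values from the paper):
print(gibbs_energy(delta_h_kj=-250.0, delta_s_j_per_k=120.0, temperature_c=80))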
For the most accurate prediction of the behavior of the sulfide mineral mixture in the process under study, equilibrium distribution graphs were plotted for dissolution in nitric acid (Figure 5a) and in a Fe2(SO4)3 solution (Figure 5b). Pyrite and sphalerite begin to dissolve first, when the amount of nitric acid reaches 1 mol (Figure 5a). Chalcopyrite begins to dissolve when the amount of nitric acid reaches 7 mol. For tennantite, this process starts at 9.5 mol of nitric acid. The sequence of sulfide mineral dissolution in a Fe2(SO4)3 solution is similar (Figure 5b). Therefore, tennantite and chalcopyrite are the most thermodynamically resistant under these conditions.
Determination of Optimal Parameters of Nitric Acid Leaching
It was established that, for the dissolution of the sulfide minerals by more than 90%, a nitric acid concentration of 12 mol/dm3 is necessary (Figure 6). High concentrations of nitric acid significantly increase its consumption and the cost of the process. Therefore, it is advisable to use additional oxidants and catalysts, such as Fe2(SO4)3 and FeS2. This reduces the required concentration and consumption of nitric acid, while maintaining a high degree of sulfide dissolution (not less than 90%).
To determine the influence of FeS2 and Fe (III) ions, dedicated experiments were carried out. The parameters of these experiments were as follows: a nitric acid concentration of 6 mol/dm3, a time of 60 min, a temperature of 80 °C, and a Fe (III) ion concentration of 5 g/dm3. FeS2 was added in a mass ratio of 1:1 (to the total mass of sulfide minerals). According to the thermodynamic studies, tennantite is the most resistant mineral of the mixture. Therefore, it was chosen to illustrate the experimental results (Figure 7). In the experiment with the simultaneous addition of FeS2 and Fe (III) ions, the tennantite dissolution degree increased by 24.5% in 60 min, compared with the experiment without additives. This indicates a positive effect of FeS2 and Fe (III) ions on the process. The combined use of these additives had the greatest positive effect on the process, compared with their separate use. This effect is possibly explained by the simultaneous catalytic action of FeS2 and the oxidative action of Fe (III) ions on the passivating layer of elemental sulfur, which forms during the dissolution of the minerals.
To obtain the optimal parameters of nitric acid leaching, mathematical planning of the experiment was used [50,51]. StatGraphics software was used to construct a second-order orthogonal matrix with five variable parameters: temperature of 65-95 °C, acid concentration of 3-8 mol/dm3, concentration of Fe (III) ions of 5-20 g/dm3, mass ratio of FeS2 to the mass of the sulfide mixture of (0.5-2):1, and time of 15-60 min, at a liquid-to-solid ratio (L:S) of 6:1. The results of the variance analysis are presented in Table 5. Figure 8 shows the Pareto diagrams describing the effect of the studied parameters on the dissolution of tennantite, chalcopyrite, and sphalerite.
Considering the data presented in Table 5, it can be concluded that all variable parameters are highly statistically significant for the nitric acid leaching of tennantite and chalcopyrite. For sphalerite, the statistically significant parameters are the amount of FeS2, the amount of Fe (III) ions, and the concentration of nitric acid. The results presented in Figure 8 confirm these data. FeS2 has the greatest influence on the process.
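A second-order orthogonal design of this kind is normally evaluated by fitting a quadratic response surface to the coded factors and ranking the effects, which is what the Pareto charts in Figure 8 visualise. The sketch below shows one possible way to do this with scikit-learn; the design matrix X and response y are random placeholders standing in for the experimental table, which is not reproduced here, and ranking the absolute coefficients of the coded factors only mimics the ordering of a true Pareto chart of standardized effects.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

def fit_response_surface(X, y):
    # Fit a full quadratic (second-order) model to coded factors X and response y,
    # returning R^2 and the model coefficients keyed by term name.
    poly = PolynomialFeatures(degree=2, include_bias=False)
    X2 = poly.fit_transform(X)
    model = LinearRegression().fit(X2, y)
    names = poly.get_feature_names_out(["T", "HNO3", "Fe3", "FeS2", "time"])
    coeffs = dict(zip(names, model.coef_))
    return model.score(X2, y), coeffs

# Placeholder design matrix and response (replace with the actual experimental table):
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(32, 5))   # five coded factors
y = rng.uniform(60, 95, size=32)       # dissolution degree, %
r2, coeffs = fit_response_surface(X, y)
print(f"R^2 = {r2:.2f}")
# Sorting by |coefficient| mimics the ordering shown in a Pareto chart of effects.
for name, c in sorted(coeffs.items(), key=lambda kv: -abs(kv[1]))[:5]:
    print(name, round(c, 2))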
Diagrams of the dependence of the dissolution of tennantite, chalcopyrite, and sphalerite on the amount of FeS2 and Fe (III) ions, at constant values of the nitric acid concentration (6 mol/dm3), time (60 min) and temperature (80 °C), are shown in Figure 9.
Therefore, in order to achieve the maximum dissolution degree of tennantite, chalcopyrite, and sphalerite (90% and more), it is necessary to adhere to the following values: a Fe (III) concentration of 16.5 g/dm3, a weight ratio of FeS2 to the mixture of minerals of 1.2:1, a nitric acid concentration of 6 mol/dm3, a leaching time of 60 min, and a temperature of 80 °C.
Comparison of Nitric Acid Leaching of a Natural Mineral Mixture and Industrial Cu-As Concentrate
The optimal parameters of nitric acid leaching of the natural mineral mixture established above were applied to the industrial concentrate of the Uchalinsky deposit. A comparison of the results is shown in Figure 10. In the period from 0 to 30 min, the sulfide minerals in the mixture dissolve more intensively (FeS2-99.9%; ZnS-94%; CuFeS2-84.8%; Cu12As4S13-75%) than the minerals in the industrial concentrate (FeS2-99.9%; ZnS-89.2%; CuFeS2-75%; Cu12As4S13-66%). This is because the minerals in the mixture are not associated with each other, whereas the minerals in the industrial Cu-As concentrate are disseminated within each other, and the access of nitric acid to them might be partially limited. Despite this, the dissolution degrees of the sulfide minerals in the mixture (FeS2-99.9%; ZnS-96.7%; CuFeS2-94.1%; Cu12As4S13-92.8%) and in the industrial concentrate (FeS2-99.9%; ZnS-94.5%; CuFeS2-93.2%; Cu12As4S13-91.7%) are almost identical after 60 min.
Characteristics of the Obtained Cakes
SEM images and EDX mapping of the cake obtained at a nitric acid concentration of 6 mol/dm3, a time of 60 min, a temperature of 80 °C, and a Fe (III) ion concentration of 16.5 g/dm3 are shown in Figure 11. The particles of copper minerals in the nitric acid leaching cake have a nonhomogeneous structure. The green zones in Figure 11f correspond to the distribution of elemental sulfur. The mixture of red and blue zones, corresponding to iron and copper, refers to chalcopyrite. The surface of the unreacted chalcopyrite particles is abundantly covered with elemental sulfur. This can reduce the access of reagents to the reaction surface. The content of sulfur in the cake was 79%. The oxidation of sulfide sulfur in the mineral mixture to the elemental state reached 56%. The degree of break-down of pyrite was 88%, of tennantite 59%, of chalcopyrite 60%, and of sphalerite 84%.
SEM images and EDX mapping of the cake obtained at a nitric acid concentration of 6 mol/dm3, a time of 60 min, a temperature of 80 °C, a Fe (III) ion concentration of 16.5 g/dm3, and a mass ratio of FeS2 to the mixture of sulfide minerals of 1.2:1 are shown in Figure 12.
According to Figure 12a,b, particles of pyrite and chalcopyrite after nitric acid leaching form conglomerates. They both have smooth and loose, nonhomogeneous surfaces. The green zones in Figure 12f correspond to the elemental sulfur distribution. The red zones are pyrite. The mixture of red and blue zones is chalcopyrite. As in Figure 11, it is noticeable that elemental sulfur covers the surface of chalcopyrite to a greater extent. Its content is minimal on the surface of pyrite.
In the experiment under these conditions, the oxidation degree of sulfide sulfur to elemental sulfur reached 23%, while the sulfur content of the cake was 14%. The degree of break-down of pyrite was 98%, of tennantite 93%, of chalcopyrite 94%, and of sphalerite 99%.
According to the X-ray diffraction patterns (Figure 13), the leaching cakes contain elemental sulfur, chalcopyrite, and tennantite.
Figure 13. X-ray diffraction patterns of cakes after nitric acid leaching of the mineral mixture with the addition of Fe (III) ions (16 g/dm3) (a), and with the addition of Fe (III) ions (16 g/dm3) and a weight ratio of FeS2 to the mineral mixture of 1.2:1 (b).
Based on the SEM images and EDX mappings presented in Figures 11 and 12, as well as the preliminary experiments presented in Figure 7, it can be concluded that during nitric acid leaching the surfaces of the copper minerals are passivated by a film of elemental sulfur. This limits the access of nitric acid to the reaction zone. The positive effect of FeS2 might be associated with the formation of an electrochemical couple with chalcopyrite and tennantite. In this case, FeS2 acts as an alternative surface, as shown in Figure 14. This effect was observed in studies of the joint nitric acid leaching of arsenopyrite and pyrite in Galvanox processes [52]. Electrochemical nitric acid leaching of chalcopyrite and pyrite in galvanic coupling might be described by the following reaction: CuFeS2 + FeS2 + NO3^− = Cu^2+ + 2Fe^2+ + SO4^2− + 4NO2 + 3S^0 + 8ē (21)
Conclusions
In this paper, the nitric acid leaching of a sulfide mineral mixture of pyrite, tennantite, chalcopyrite, and sphalerite, typical of the ores of the Ural deposits, was studied.
1. The optimal conditions for nitric acid leaching with the addition of Fe (III) ions and FeS2 are: nitric acid concentration of 6 mol/dm3; time of 60 min; temperature of 80 °C; concentration of Fe (III) ions of 16.5 g/dm3; mass ratio of FeS2 to the mixture of minerals of 1.2:1. The dissolution degree of the sulfide minerals achieved was more than 90%. Thereby, it was possible to reduce the required concentration of nitric acid from 12 mol/dm3 to 6 mol/dm3.
2. The most significant factors of the process are the nitric acid concentration, the concentration of Fe (III) ions, and the amount of FeS2. The combined addition of Fe (III) ions and FeS2 to the process has the greatest effect. The multiple correlation coefficients R2 for the obtained regression equations were 0.93 for tennantite, 0.93 for chalcopyrite, and 0.95 for sphalerite, which indicates the adequacy of the chosen model.
3. During nitric acid leaching of a mixture of sulfide minerals, pyrite can act as an alternative catalytic surface for the copper sulfide minerals. Due to the galvanic couple formed between the minerals, it is possible to reduce the influence of the passivating layer.
| 6,916 | 2022-02-01T00:00:00.000 | [ "Materials Science" ] |
A HERMENEUTICS OF THE KINGDOM OF GOD IN THE CONTEXT OF ARMED CONFLICT IN MINDANAO
This study attempts to come up with a contextual theological understanding of the biblical concepts of the Kingdom of God in the light of the armed conflict in Mindanao. It seeks to understand the roots and causes of the conflict in Mindanao (the hermeneutical situation), into which the meaning of the biblical message of a peaceable kingdom is interpreted and understood in a meaningful manner. Taking into account the current socio-political, economic and cultural realities that contribute to the ongoing armed conflict in Mindanao, this study raises the issue of how the kingdom of God, which embodies God's love, peace, liberation and justice, should be understood and concretized in a way that it could inform and influence the different religious groups and organizations involved in the Mindanao peace process. This attempt at contextualization is based on the principle that theological formulation in the context of the conflict in Mindanao can only be meaningful and intelligible if it reflects critically on the lived experiences shared by the different Muslim and Christian communities in Mindanao, especially the poor and marginalized masses, in their search for well-being and self-determination.
Introduction
The armed conflict in the Southern Philippines has continued for more than four centuries and is considered one of the world's "longest" and "bloodiest" running armed conflicts.1 It is also known as the "largest and most persistent armed conflict in Southeast Asia."2 The conflict has affected not only the people in Mindanao but also the entire Philippine society. It resulted in the destruction of properties and livelihood, the displacement of thousands of families, the deaths of thousands of combatants from both sides, and innocent civilians including women and children killed in the crossfire.3 It also contributes significantly to the political and economic instability of the country.4 The conflict has a long historical root that goes back to the Spanish and American colonial rules and the Muslims' continuing struggle for autonomy and self-determination in Mindanao. The struggle for self-determination of the Moro people has its origin in their aspiration for freedom and independence from Spanish and American rules and is fed by the perceived failure of the state to address their continuing experiences of impoverishment, social and cultural discrimination, and political injustice. The widespread and persistent government military offensives in Mindanao, supported by American troops in the name of the "war on terror," give rise to more violence and armed conflict.
Since the outbreak of war between the government troops and the Moro National Liberation Front (MNLF) in the early 1970s, the Philippine government has maintained its strong militaristic and integrationist approach to resolving the conflict. The peace process has always been derailed by charges and countercharges of ceasefire violations that result in the usual collapse of peace agreements between the Philippine government and Muslim liberationist groups, and in the change of the government's policy from negotiation to total war against the Moro National Liberation Front (MNLF), the Moro Islamic Liberation Front (MILF), and the Abu Sayyaf Group (ASG).5 Muslim combatants and paramilitary groups such as the MNLF, MILF and the Abu Sayyaf Group (ASG) also continue with their militant activities against the government. Capitalizing on the frustrations of Muslims brought about by their continuing marginalization, militant Muslims have adopted a more aggressive and radical stand against government policies and actions. The government is also viewed by Muslim militants as a threat to the Muslims' struggles and aspirations for independence and self-determination. They believe that their rights and existence are being denied by the government, that they have no control over their destiny, and that they can be destroyed at any time. With that, they are likely to escalate radicalism as they struggle to protect themselves and to pursue their rights in aggressive ways.6
Biblical Concepts of the Kingdom of God (Old and New Testament Concepts)
The theme, "kingdom of God", is central to the Biblical message.Biblical scholars are in agreement that the term is pregnant with meanings and there are varied ways of interpreting the concept based on some very specific contexts. 7For the purposes of this study, three aspects of the interpretation of the biblical concept of the Kingdom of God are being emphasized as follows:
Theo-political
First, the kingdom of God is conceived in the Bible as a theo-political reality. The kingdom describes the very nature of God as King (melek) and Ruler over His people. While the specific term 'kingdom of God' is "virtually absent from the Old Testament,"8 this does not negate the fact that the notion of the kingship and reign of God is present all throughout the Old Testament, as expressed repeatedly in the phrase "the Lord reigns."9 In fact, God's kingship is a dominant and recurring theme in the Hebrew Scriptures.10 Thus, the basic concepts behind the metaphor "kingdom of God" are undoubtedly present in the Old Testament.
In New Testament usage, "kingdom" is the usual translation of the Greek basileia, signifying the king's being, nature and state. Like melek in the Old Testament, one can find the close affinity of king and kingship in the meaning associated with kingdom. A separation between these two interrelated terms, if not impossible, would destroy the essential meaning of each word. The kingdom is the expression of the King's dignity and power in the territory he rules.11 In modern Greek, "kingship," "royal dominion," or "reign" are present in basileia. What the Old Testament canons have (the Hebrew and Aramaic originals and the LXX, including the Rabbinic writings), like the king's dignity and power predominating in melek, is also true in the New Testament.12 Israel's nomadic times as tribes and how God accompanied them in their struggles, later with the Exodus from Egypt and the Mt. Sinai event, molded her concepts about God as Ruler and King.13 This shows the close affinity between the concept of God's kingship and the experience of God in Israel's history, which is very critical to the Hebrew understanding of God's character. Israel's series of experiences of Yahweh's intervention, from times past up to the revelation of God's name through Moses, contributed a lot to their particular comprehension of Yahweh as King later on.14 Yahweh's revelation of His name as the Great "I AM" was understood in His Being and His working, as a name that denotes action that brings about goodness and blessings to His people.15 In the entire history and experience of Israel as a nation, God as the ruling King is affirmed as the ruling Lord16 and an All-embracing One.17 Yahweh's sovereignty acted with power, at which the Old Testament writers were amazed, leading them to portray Him as the hope and comfort of the weak and marginalized. God is described as the One who has a passion for justice and for the liberation of those in bondage, and this image is reflected in the Exodus accounts and the rest of the Pentateuchal and prophetic witnesses to the event.18 In many instances, God as King is portrayed as one who cares for the humbled poor and the oppressed (anawim19). The anawim are set against the wicked (reshaim), the oppressors who possess wealth and power and all those who take advantage of the vulnerability of the poor.20 The appellation "King" was applied to Yahweh on the basis of the saving events that Israel experienced and attributed to Yahweh. The notion came to see Yahweh as the one who had dominion or lordship over Israel and its history.21 This concept of Yahweh's dominion over Israel was later expanded to cover all peoples and nations.22
The Kingdom as God's Salvific and Liberative Act in Human History
The Exodus event is one striking and central historical event in the life of Israel which shows God's liberating activity in the world and His special concern for the poor and the oppressed.The sufferings and oppression that the Israelites had suffered in the land of Egypt is described in the early chapters of the book of Exodus: repression 23 ; humiliations 24 slavery 25 ; forced labor and alienated work. 26Thus, exodus had to be remembered and re-enacted in the cult and tradition of Israel as the central theme of liberation and a powerful testimony of God's liberating character. 27The exodus event is also regarded as "the heart" of the Old Testament story and is pivotal for the rest of the Old Testament history and the faith that it witnesses to. 28od's gratuitous liberating act in the life and experience of Israel was to be honored and remembered faithfully by commitment and acceptance of the requirements of the covenant initiated by God and accepted by Israel.This commitment concerns not only fidelity to the one true God, but also a commitment to social obligations that must be observed among the people of the covenant.These social obligations are regulated in particular by what has been called the right of the poor.29 The gift of freedom from bondage in Egypt and the Promised Land, and the gift of covenant in Sinai and the Ten Commandments 30 are therefore "intimately linked to the practices which must regulate, in justice and solidarity, the development of Israelite society."31 In view of the fact that Yaweh is a Liberator God, the Israelites were commanded to become guardians of justice and defenders of the weak and the oppressed.
To the Israelites, the exodus was always an event that reminded them of Yahweh's gracious liberating act in human history and gave them the assurance that the God who delivered them out of bondage in Egypt will always be a liberator and savior God who will save His people from all forms of oppression and enslavement.32 This living reminder is enshrined in the basic premise of the preamble of the Ten Commandments, "I am the Lord your God who brought you out of the land of Egypt, out of the house of bondage,"33 and in the Hebrew tradition of celebrating the Sabbatical and Jubilee Years,34 which refer to favors done to the peasants, the poor, the slaves and the oppressed.
Scholars generally agree that the central theme in the life, ministry, and teaching of Jesus is the kingdom of God.35 His parables are frequently introduced, "explicitly or implicitly," as examples of the kingdom.36 The beatitudes include numerous references to the ethical requirements of the kingdom. The Lord's Prayer welcomes the advent of the kingdom, and Jesus' answers to human questions are often couched in kingdom language.37 Debates on what exactly Jesus meant by the kingdom have continued down through the centuries until the present. However, it appears clearly that Jesus' vision of the kingdom echoes the vision of Isaiah. Centuries before Jesus, Isaiah was projecting his dream of a salvation to come. Quoting the prophet Isaiah in the gospel of Luke, Jesus summarizes his identity and mission in these words: "The Spirit of the Lord is upon me, because he has anointed me to preach good news to the poor. He has sent me to proclaim release to the captives, and recovering of sight to the blind, to set at liberty those who are oppressed, to proclaim the acceptable year of the Lord."38 The mission of Jesus is to proclaim the Kingdom of God, the coming of final and definitive salvation. Like Isaiah, Jesus proclaims that the arrival of the kingdom is salvation and that the kingdom has the decisive connotation of liberation. This liberation was demonstrated in the words and deeds of Jesus: the blind recover their sight, captives are released, the lame walk, the hungry are fed, and the dead are brought back to life. Jesus' mission statement, i.e., proclaiming liberty and announcing the favorable year of the Lord, re-echoes the language of the Old Testament Jubilee year.
The kingdom of God is the transformation of an evil and oppressive situation. Jesus proclaimed and demonstrated the kingdom of God in the midst of those who were despised by society and segregated from its life.39 He spoke against economic structures that created and perpetuated hungry masses. He fought against an elite aristocracy - the chief priests of the temple hierarchy, wealthy landowners, merchants, tax collectors, and teachers of the law - who out of their extravagance had reduced the masses to poverty and indignity.40 Many of the parables of Jesus were directed against abusive landowners who took advantage of the poor farmers who were gradually losing their small pieces of land because of their debts. Tax collectors and estate owners took possession of the land of peasants who continued to accumulate outstanding debt. Often the peasant family would end up trapped in the plot, working as day laborers for the wealthy and absentee landholders.41
The Kingdom of God is Universal and Inclusive
The eschatology of Israel is the result of her awareness of God moving in history.42 This dynamic understanding of God's active participation in human history involves not only the history of Israel but that of the whole of humankind. It involves not only the liberation and restoration of Israel but of all those who are afflicted and oppressed. Amos prophesied the imminent return and restoration of the exiles to their homeland. But the prophet added a new twist and challenge to the old exclusivistic claims of the Exodus traditions. He brought in a much broader consciousness and spoke in radically inclusivistic tones of the experience of freedom and restoration as an event that is experienced not only by the people of Israel but by all those whom God has favored with the blessing of a new land and freedom from deprivation.43 God is a God who is not to be exclusively claimed by Israel for themselves alone. Yahweh is a God of the nations and of other peoples who were oppressed and exploited. God's presence is boundless and universal: "Heaven is my throne, and earth my footstool. Where will you build a house for me? Where shall my resting place be? All these are my own making and these are mine."44 Thus, there was a significant move from an exclusivistic, narrowly nationalistic perspective to a much broader faith perspective and attitude that includes consideration of other peoples, of their own struggles, their own histories, and their own traditions.
The Exodus event in this respect becomes a thematic key towards a more inclusive, more accepting faith perspective that became very important in Israel's attempt to reconstruct her faith relationship with Yahweh. The Exodus in that sense was just one among other exoduses God has conducted with other peoples. This universalistic view of God's liberative act is best expressed in the following prophetic declaration of Amos: "He who builds his lofty palace in the heavens and sets its foundation on the earth, who calls for the waters of the sea and pours them out over the face of the land - the LORD is his name. Are not you Israelites the same to me as the Cushites? declares the LORD. Did I not bring Israel up from Egypt, the Philistines from Caphtor and the Arameans from Kir?"45 The universal and inclusive character of the kingdom of God goes against the absolutism and exclusivism that set one religion above another and thereby promote antagonism, hatred and division among different peoples. Here, the Biblical concept of the universal and all-embracing nature of the kingdom of God provides a foundational basis for an inclusive, accepting, and redeeming attitude that should characterize the relationship between Christians and Muslims in Mindanao.
Traditionalist theology maintains the narrow concept of the reign of God as synonymous or identical with the church or Christianity. A careful study and analysis of the meaning of the Kingdom of God, however, casts serious theological questions on "whether God's reign should be limited to the hope of Israel, and in its historical realization in the world, to Christianity and to the church,"46 or whether it should be understood in a much wider sense to include "others," in view of the biblical witness that the reign of God is a universal reality "which extends well beyond the confines of Christianity and the church."47 "If God's Kingdom is inclusive and universal, how do Christianity and other faith traditions relate respectively to live out the values of this universal kingdom? Do Christians and the others belong equally to the fulfilled reign of God?"48 Karl Rahner has expressed the same conviction that God's kingdom is not confined within the limits of Christianity and the church: that the different religious traditions contain "supernatural, grace-filled elements," and that other faith traditions and communities are also "members of the Kingdom of God already present as a historic reality."49 In spite of their different religious peculiarities, "people of faith already belong together to the Reign of God and are already in communion in the reality of the mystery of salvation even if there remains between them a distinction at the level of the 'sacrament', that is, the order of mediation of the mystery."50 Dupuis believes that the words "communion" and "sharing" characterize God's Kingdom, that the reality of the reign of God is "already shared together" by different faith traditions in mutual exchange, and that Christians and others "build together the Reign of God each time they commit themselves of common accord in the cause of justice, each time they work together for the integral liberation of each and every human person, structures and systems, and especially for the liberation of the poor and the oppressed."51
Problem of Armed Conflict in Mindanao
Here, we see that a contextual reading of the biblical text provides an operative framework within which Christians could make sense of the meaning of the kingdom of God in the current socio-political and economic situation in Mindanao. If the kingdom of God means God's rule, characterized by justice, freedom, equality and peace as chronicled in God's continuous liberative activity to free His people from dehumanizing powers, then what does it mean to proclaim and participate in the kingdom of God in the midst of socio-economic and political inequalities in Mindanao?
How does the biblical concept of the theo-political character of the kingdom of God, which describes God's compassionate rule and righteous governance, relate to the struggles of the marginalized Muslim communities in Mindanao? Is God's rule present in the struggles of the Bangsamoro people for freedom and self-determination in Mindanao? If God is on the side of the weak, the poor, and the oppressed, as revealed in the way He manifests Himself in the lived experiences of His people, then what is God doing in the midst of social, political, cultural and economic injustice in Mindanao? If the kingdom of God refers to His liberative acts throughout human history with different peoples at different times and places, then what does it mean to proclaim the peaceable kingdom of God in a historical milieu where an unjust political and economic order exists, such as in Mindanao? These are crucial questions that we should seriously consider if we are to make a positive impact on the Mindanao peace process. If God's Kingdom is inclusive and universal, how do Christians and other faith communities relate respectively to live out the values of this universal kingdom? Do Christians and the others belong equally to the fulfilled reign of God?
Based on the biblical message, I believe that building the kingdom of God in the context of Mindanao means proclaiming and living out God's compassionate rule and righteous governance, and working for the establishment of an equitable socio-economic order. It means working for the emancipation and liberation of the poor and transforming evil in all its forms. To affirm the universality of God and His kingdom is to affirm that He is present in every human condition and that God is concerned about the whole human family regardless of race, culture and creed. This principle promotes the idea of acceptance, openness and complementarity, which means that Christians and Muslims in Mindanao are supposed to acknowledge their unique differences with a sense of acceptance and respect,52 and never use them as a ground for discord but as an opportunity to complement and cooperate with one another for their common good.
One vital question is where the church locates itself in the current socio-political and economic crises in Mindanao. Is the church on the side of the poor, or has it become (as it was in the past) a legitimizer of the status quo and an oppressive social order?
Certainly, the contributions of the different religious organizations to the Mindanao peace process should not be overlooked or undermined. Small-scale livelihood projects, financial assistance to displaced families in times of war, "peace zones," "peace sanctuaries," peace-building programs, interfaith dialogues, position papers and calls for a negotiated peace agreement between disputing parties are important and have served their purpose. However, how far these programs have addressed the vital issues of equitable distribution of land and other resources, wider participation of the marginalized masses in the political processes, and the establishment of a just social order in Mindanao remains uncertain. As it appears, there is still much work to be done in finding concrete steps and solutions towards the improvement of economic and political conditions in Mindanao.
Ministry Recommendations
If Christians believe that their duty is to proclaim and help build the kingdom of God in which there is love, justice, peace, and compassion for the weak and the powerless, how are they supposed to translate this conviction into concrete plans of action that will contribute towards peace and development in Mindanao?
Given the current socio-economic, cultural and political injustice reigning in Mindanao, this study recommends the following political agenda (based on the above interpretation of the kingdom of God) for genuine and lasting peace in Mindanao:
Economic Transformation
The conflict in Mindanao has its roots in the socio-economic marginalization of the Moro people. Their economic displacement is largely a historical outgrowth and the cumulative effect of a long process of discriminatory laws, policies, and programs, including development programs. The most visible sign of the displacement of the Moro people, including other indigenous and minority groups in Mindanao, has to do with their rights to land. The historic discriminatory land policies and legal statutes favoring Christians and large-scale multinational agriculture and mining corporations during American colonial rule, and the Philippine Government's policies of resettlement of Christians to Mindanao, resulted in a slow but sure abrogation of traditional Moro property rights and their eventual marginalization from mainstream economic growth and development.53 Statistics show that, in spite of the government's comprehensive land reform program, millions of people, especially the marginalized and poor Muslims in Mindanao, remain landless. In many parts of Mindanao, vast tracts of land are owned by multinationals and the super-rich who dominate the economy, making the poor poorer.54 The state of land distribution in the Philippines shows that land ownership is concentrated in the hands of very few people. Statistics show that 45 per cent of the country's agricultural land is owned by only five per cent of the total landowning families. Another document pointing to the glaring inequality of land ownership notes that roughly 80 per cent of the total cultivated land is controlled by only 20 per cent of the landowning families. Not only do a few landowners own large tracts of land, they also possess the most fertile lowlands. Multinational corporations such as DOLE, Del Monte, and United Fruits utilize more than 80 per cent of the country's most fertile lowlands for export crops.55 In this particular context, peace in Mindanao would mean the inclusion of the key issues of reparations, economic redistribution, and land reform. The economic displacement of the Moro people must be at the center, and not the periphery, of the peace and development challenge in Mindanao. Peace-building programs in Mindanao should, first and foremost, address the land problem. Current development approaches of assisting minority Muslims with micro projects such as livelihood programs, community assistance, rehabilitation projects for victims of war and other dole-out economic approaches are mere palliatives, since they do not address the vital issues and the real roots and causes of poverty in Mindanao. Concrete steps must be taken to break the chains of oppressive economic structures through the implementation of a genuine land reform program.
To address the issue of landlessness, which significantly contributes to poverty among the Muslim masses, the Philippine government needs to legislate laws to regulate and limit the size of family landholdings and, in the process, implement a land redistribution program to cater to the needs of the landless masses in Mindanao. No peace can occur in a situation where big and powerful landlords continue to dominate the economic and political sphere at the expense of the weak and the poor. Addressing the problem of economic marginalization in Mindanao also requires that the government create laws and implement inclusive and far-reaching economic programs that address the economic well-being and dignity of the poor and the marginalized. Laws and policies need to be established to prevent and penalize abusive and exploitative economic practices, ensure the protection of the poor and the oppressed, provide equal economic and political access, and establish mechanisms for consultative and participatory leadership through which the marginalized could take part in the decision-making process to determine their future and destiny.
Socio-Cultural Transformation
Another important issue that must be addressed in relation to the search for peace in Mindanao is the continuing socio-cultural marginalization of the Moro people. Stereotypical negative conceptions of Muslims as "savage," "uncivilized," and people of an "inferior race," which have been institutionalized since the colonial era and reinforced by the subsequent Filipinization program of the Philippine government, have not ceased to disturb and significantly affect Christian-Muslim relations in Mindanao. Despite the Muslims' resistance, the central government insists on its integrationist policy, which seeks to mainstream minority Islamic and other indigenous cultures into the majority Filipino culture. Muslims find themselves at odds with what constitutes the "national identity" of the majority lowland Christian population, who in their view had been assimilated into the cultures and ways of the two major colonial regimes.
The inculcation and imposition of the majority Filipino culture is interpreted by many Muslims as an attempt to eradicate Moro culture and identity. The Moro people have been longing not only for the recovery of their lost causes but also for the restoration of their dignity and worth as a people. Equitable sharing of wealth and political and social justice are the recurring themes that Muslims in Mindanao have been clamoring for up until now. Conflict resolution or transformation in Mindanao is the process of addressing these causes and working with those concerned to redefine relationships and bring about a change in the conflict context.
To address the problem of conflict in Mindanao, a culture of peace and mutual recognition of both Islamic and Christian values and cultures has to sink deep into the social fabric, so that cultural openness, social unity and the pursuit of peaceful means to resolve conflict are appreciated and practiced by all. Social and cultural reforms are one of the key ingredients of lasting peace and development in Mindanao. Without them, the issues that underlie the breakdown of peace and social order will continue to exist. Peace and development in Mindanao need a sustained effort at social justice, good governance, and corporate social responsibility. To achieve mutual respect and appreciation between and among Muslims and Christians in Mindanao, relationship building across sectoral, social, cultural and religious divides is of primary importance.
The solution to the Mindanao problem is anchored in the creation of a national consciousness sensitive to cultural diversity. This means that the government and the majority Filipino populace must recognize the value and distinctiveness of Moro cultures and identities. Consequently, it also means that the government should adopt culturally sensitive policies that seek to honor and preserve the Islamic cultural heritage. The government must, through its Department of Education (DepEd) and Commission on Higher Education (CHED), review and effect changes in the history curriculum insofar as the history of Islam in the Philippines is concerned, to correct negative images of Muslims and emphasize the positive and unique cultures and values that they share towards peace and development.
The government also needs to formulate laws and policies that promote cultural understanding and ethnic awareness. Giving Muslim Mindanao autonomy and addressing their socio-economic problems are not enough.
Their cultural identities must be recognized and accommodated by the state. The Moro people must be free to express these identities without being discriminated against in other aspects of their lives. In a nutshell, cultural liberty is a human right that must be enjoyed by the marginalized Moro masses, and is thus worthy of state action and attention.
Political and Structural Transformation
Political domination and marginalization, graft and corruption, clan and patronage politics, and fraudulent electoral systems which perpetuate traditional political elites in power remain among the major causes of conflict and violent confrontations in Mindanao. The government has failed to take concrete political action to address the aspirations of the poor and marginalized majority Muslim masses. Instead, it caters to the whims and caprices of powerful Christian and Muslim elites who take advantage of their positions at the expense of the weak, thereby privileging only the dominant segment of society. The dominance of the powerful and the marginalization of the poor and powerless have been the pattern of relationships that characterizes Philippine society.
The pacification and demobilization approaches employed by the government, which seek to address the conflict by co-opting leaders and followers through offers of positions, livelihood, or integration, have left the deeper roots of the conflict unaddressed. Obviously, power and resources are concentrated in the hands of a few political elites while the masses (mostly Muslims) are pushed to the periphery of human existence. Philippine politics has been reflective of extensive patron-client networks wherein access to political power is greatly dependent on one's loyalty to those who already wield it. Once in office, politicians are often able to perpetuate themselves in power, and as soon as their term limits end, they easily move on to occupy some other positions. This results in only a few political dynasties competing for political power, leaving the weaker segments of society powerless.56 The ties of the traditional Muslim elite leadership with the central government have kept the marginalized Muslims' struggles unaddressed and deprived them of their right to self-determination. It has been noted by a number of analysts that the same traditional local elites amass contemporary political power in the form of elected positions by entering into a political-economic bargain with the national political elites to barter Internal Revenue Allocations (IRA) from the central state treasury in exchange for delivering votes and security for the competing national and local political actors.57 The exercise of absolute authority by traditional political elites is made possible not only by political patronage from the national government, but also by "laws and regulations permitting the arming and private funding of civilian auxiliaries to the army and police; lack of oversight over or audits of central government allocations to local government budgets; the ease with which weapons can be imported, purchased and circulated; and a thoroughly dysfunctional legal system."58 The question is how the government can prevent the emergence of overly dominant political clans and warlords who set their own rules and use their power to exploit and oppress the weak and the poor. Precisely, the Mindanao problem is a political and structural problem. Thus, it requires a political and structural solution as a key dimension. No significant change in addressing the problem of conflict in Mindanao can take place unless policies change; and for these changes to happen, the country's politics must change toward more participation, involving especially the marginalized sectors in making decisions that affect them. Any social, economic, and political strategy that attempts to effectively address the problems of conflict in Mindanao will have to be comprehensive, inter-sectoral, communal, and participatory.
It has been observed that, despite numerous development projects and financing programs that have been channeled through different government agencies since the early 1970s to solve the problem of poverty in Mindanao, the economic and living conditions of the Moro people have not significantly changed. This was mainly because of defective bureaucratic structures that were known for graft and corruption.59 Obviously, social intervention and economic development devoid of an appropriate and viable political structure are insufficient.
Concrete steps should be taken to minimize (if not totally eliminate) the rampant graft and corruption practices in both the higher and lower echelons of government. This requires the stricter and fuller implementation of anti-graft laws and their corresponding punishments, as well as the creation of preemptive structures such as a "graft watch" composed of highly credible representatives coming from the government, civic, business, political and religious sectors. The establishment of anti-graft measures is important not only to prevent corrupt and anomalous practices in the government, but also to ensure the protection of the economic interests of the poor and the marginalized and to pave the way for economic progress.
The government also needs to develop massive and sustainable grassroots-based programs of peace and development by establishing mechanisms that would enhance and ensure people's participation, by initiating continuous and regular public consultations involving the poorest of the poor, the indigenous people, the women and the youth, and by making concessions not with the political elites but with the Moro masses who are the actual victims of oppression and marginalization in Mindanao.
God's will is peace, love, hope and justice. The situation of unpeace in Mindanao, brought about by the continuing oppression and marginalization of the weak and the poor, is radically opposed to and incompatible with the biblical vision of a just, humane and peaceful community where persons live with peace and dignity. This biblical vision must come in contact with the socio-cultural, economic, and political realities reigning in Mindanao.
In a nutshell, the peaceable kingdom of God, as understood in the context of Mindanao, provides a political and theological basis for asserting a notion of peace and justice whose content is defined in concrete socio-political, cultural and economic terms. There is a direct link between the theological concept of God as Liberator and Defender of the poor and the political, economic, and social injustice in Mindanao. This calls for the moral and political responsibility to act on behalf of justice and freedom and to work towards the establishment of a just political, social and economic structure which is in harmony with the divine vision of a peaceable kingdom.60 | 7,918.4 | 2014-08-01T00:00:00.000 | [
"Philosophy",
"History",
"Political Science"
] |
Seafloor observations indicate spatial separation of coseismic and postseismic slips in the 2011 Tohoku earthquake
Large interplate earthquakes are often followed by postseismic slip that is considered to occur in areas surrounding the coseismic ruptures. Such spatial separation is expected from the differences in frictional and material properties in and around the faults. However, even though the 2011 Tohoku Earthquake ruptured a vast area of the plate interface, high-resolution slip estimation is usually difficult because of the lack of seafloor geodetic data. Here, using seafloor and terrestrial geodetic data, we investigated the postseismic slip to examine whether it was spatially separated from the coseismic slip, applying a comprehensive finite-element method model to subtract the viscoelastic components from the observed postseismic displacements. The high-resolution co- and postseismic slip distributions clarify the spatial separation, which also agrees with the activities of interplate and repeating earthquakes. These findings suggest that the conventional frictional property model is valid for the source region of gigantic earthquakes.
Both seismic (unstable) and aseismic (stable) slip can occur on the plate interfaces at convergent 1,2 and transform plate boundaries [3][4][5], and it has been broadly accepted that these two different types of slip show complementary spatial distributions [2][3][4][5][6][7]. The differences in slip behaviours have been explained as the consequence of variations in the frictional parameters of the empirical rate- and state-dependent friction law 8,9, which are expected to be strongly variable along a fault. Under ordinary circumstances, those parts of a fault hosting a seismic rupture are unable to exhibit much stable sliding in response to the external stress associated with the secular plate motion. Thus, quasi-static postseismic slip occurs mainly in those areas shallower and/or deeper than the coseismic rupture zone. Sometimes postseismic slip propagates along the fault strike direction and promotes large earthquakes at neighbouring seismic patches [10][11][12][13]. This conceptual model has been supported by a number of studies that have revealed postseismic slip distributions based on geophysical observations. Previous studies on the rupture process [14][15][16][17][18][19][20][21] and coseismic slip distribution [22][23][24][25] of the M9.0 2011 Tohoku Earthquake, which occurred on 11 March 2011, have consistently revealed that a substantial coseismic slip of ~10 m occurred on the deep (depth: 40-50 km) portion of the plate interface, as well as an extremely large (>30 m) coseismic slip on the shallow (depth: <30 km) fault, although the models show significant diversities in the detailed slip pattern reflecting the differences in the data, analysis procedures and assumed structures [14][15][16][17][18][19][20][21][22][23][24][25]. The deeper portion of the mainshock rupture included the source region of the Miyagi-Oki earthquakes, a sequence of repeating ~M7.5 interplate earthquakes with a recurrence interval of ~40 years. The most recent event in the sequence happened on 12 June 1978 (Mw7.5) (refs 25,26).
The postseismic slip associated with the 2011 Tohoku Earthquake has been estimated using continuous global positioning system (GPS) observations and the activities of smaller repeating earthquakes [27][28][29][30][31][32]. Some studies reported that the co- and postseismic slips overlap [27][28][29][30][31][32]; a possible breakdown of the complementary slip distributions was suggested, and doubt was raised regarding the applicability of the empirical rate- and state-dependent friction law to natural earthquake faults 28. The spatial resolution of estimates of co-, post- and/or interseismic slip distributions is limited without sufficient near-field observations, yet these analyses did not include seafloor geodetic data. Such data can provide a strong constraint for the estimation of the component of postseismic deformation due to the viscoelastic relaxation process in the asthenosphere, which occurs simultaneously with the postseismic slip on the plate interface. Recently, landward motion of the seafloor above the main rupture area of the Tohoku Earthquake has been reported, which constitutes strong evidence for the larger contribution of the viscoelastic relaxation 33,34.
In the present study, we investigated the spatial distribution of the postseismic slip on the plate interface. This was based on both terrestrial GPS data and seafloor geodetic observations composed of GPS/Acoustic (GPSA) survey results and ocean-bottom pressure (OBP) gauge records. Among these observations, the continuous time series of vertical seafloor displacement provided by the OBP recordings, newly reported in this study, was expected to improve considerably the spatial resolution of the postseismic slip on the plate interface. By taking into account the viscoelastic relaxation, the estimated postseismic slip distribution is consistent with the activities of interplate and repeating earthquakes. The broad area of coseismic slip during the M9 mainshock had exhibited different behaviours during the previous smaller earthquakes or episodic slow slip events, but it does not overlap with the zone of aseismic slip after the 2011 mainshock. The result suggests that the conventional frictional property model based on the rate- and state-dependent friction law is valid also for the source region of this gigantic earthquake.
Results
Postseismic slip based on geodetic data. Figure 1 shows the cumulative postseismic slip distribution, computed for the analysis period of ~8 months, together with its estimation error. Comparisons between the observed displacements and those calculated (predicted) from the estimated postseismic slip model, and their residuals, are shown in Figs 2 and 3, respectively. The results reveal that the area in which large postseismic slip occurred on the plate interface can generally be divided into three subareas (subareas 1-3). These three subareas of substantial postseismic slip are located beyond the distribution of coseismic slip during the 2011 Tohoku Earthquake, which was estimated based on geodetic data obtained by almost the same seafloor and terrestrial observation networks 25 (Fig. 1).
The residual horizontal displacements show evident along-strike motion. These may have arisen from the strike-slip components of the postseismic slip, the effect of the subducting Philippine Sea slab on the viscoelastic relaxation, the error due to the transformation from spheroidal to Cartesian coordinates, and/or the uncertainty of the plate motion model used to transform the displacements into the Okhotsk-plate-fixed reference frame.
Postseismic slip based on seismic data. The spatial extent of the postseismic slip can be constrained by analysis of the small repeating earthquakes that occur on the plate interface 29 . Figure 4 shows the estimated postseismic slip distribution, based on the activities of small repeating earthquakes (see ref. 29 for the analysis procedure), for the same period as the geodetic inversion analysis in the present study (Fig. 1). The estimated slip distribution clarifies that no significant postseismic slip occurred in the area of the mainshock rupture.
Analysis of the small repeating earthquakes enables us to estimate the rate of aseismic slip around small isolated seismic patches (asperities) on the plate interface that cause the small repeating earthquakes 35. However, we cannot evaluate the slip rate in the region near the Japan Trench, to the north and south of the large coseismic rupture area, because no asperities generating repeating earthquakes have been identified in those areas. Instead, we focused on the activity of interplate earthquakes other than the repeating earthquakes, assuming that their activity would increase in association with the acceleration of aseismic slip, as is usually assumed in studies of repeating earthquakes. Although it is difficult to estimate the slip rate quantitatively based on the non-repeating earthquakes, their activation can be interpreted as the acceleration of aseismic slip in the area. Figure 5 shows the ratio of the seismicity rate during the period in which postseismic slip was estimated to that during the preseismic period (from the beginning of 2008 to just before the mainshock), based on a catalogue of thrust-faulting earthquakes 36. It clearly shows prominent activation of interplate earthquakes in the areas surrounding the mainshock rupture, indicating the acceleration of aseismic slip, that is, the occurrence of postseismic slip.
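A minimal sketch of how such a seismicity-rate ratio map can be computed from two earthquake catalogues; the grid size, catalogue format and time windows below are illustrative assumptions, not the authors' actual processing.

```python
import numpy as np

def seismicity_rate_ratio(pre_events, post_events, pre_days, post_days,
                          lon_edges, lat_edges):
    """Ratio of post- to preseismic seismicity rate on a lon/lat grid.

    pre_events, post_events: arrays of shape (N, 2) holding (lon, lat) of
    interplate earthquakes in the pre- and postseismic catalogues.
    """
    pre_counts, _, _ = np.histogram2d(pre_events[:, 0], pre_events[:, 1],
                                      bins=[lon_edges, lat_edges])
    post_counts, _, _ = np.histogram2d(post_events[:, 0], post_events[:, 1],
                                       bins=[lon_edges, lat_edges])
    pre_rate = pre_counts / pre_days      # events per day per cell
    post_rate = post_counts / post_days
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(pre_rate > 0, post_rate / pre_rate, np.nan)
    return ratio

# Hypothetical grid: 0.3-degree cells covering the area off the Japan Trench.
lon_edges = np.arange(140.0, 145.3, 0.3)
lat_edges = np.arange(35.0, 41.8, 0.3)
```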
Sensitivity of the geodetic observations to the slip near the trench. The repeating earthquake analysis showed postseismic slip of up to 50 cm on the trenchward side of subarea 1 (Fig. 4), where the resolution of the geodetic inversion was poor. We calculated the displacements at the geodetic stations due to slip in this area (Supplementary Fig. 2a,b). The amount of the given slip was 40 cm, as large as the amount estimated by the repeating earthquake analysis. The calculated displacements were significantly smaller (less than half) than the residuals of the inversion analysis (Fig. 3), indicating that it is not possible to identify postseismic slip in this region using the existing geodetic data.
Postseismic slip amounting to ~50 cm was estimated from the analysis of the repeating earthquakes on the landward side of subarea 3 identified by the geodetic inversion (Figs 1a and 4). It would be difficult to constrain the location of the postseismic slip off Fukushima and Ibaraki prefectures based on the existing geodetic data, because there is only one offshore geodetic site. The displacement pattern expected from slip in the landward area (Supplementary Fig. 2c,d) and that from slip near the trench (Supplementary Fig. 2e,f) are similar to each other. On the other hand, we have to bear in mind that the epicentres of far-offshore earthquakes determined from onshore seismic network data are difficult to constrain in the dip direction. The uncertainties of the repeating earthquake epicentres result in uncertainty of the slip location. Therefore, we regard the discrepancy between the slip distributions from the geodetic and repeating earthquake analyses as insignificant, although a substantial amount of postseismic slip is required along the southern side of the large coseismic slip zone.
Comparison with the previous models. Co- and postseismic slip distributions based on investigations that used only terrestrial geodetic data [27][28][29]31 differ significantly from our results. The most prominent difference between our results and those of previous studies is the presence or absence of overlap of the co- and postseismic slip in the source region of the interplate ~M7.5 Miyagi-Oki earthquakes between subareas 1 and 2. Our result shows clear spatial separation between the co- and postseismic slip. We note that the difference in the postseismic slip pattern is mainly due to the estimated amount of viscoelastic displacement. As the onshore displacements due to postseismic slip and viscoelastic relaxation occur in almost the same direction, it is difficult to distinguish the contribution of the postseismic slip from that of the viscoelastic relaxation based only on terrestrial observations. However, the landward motion, opposite to that caused by the postseismic slip, which was observed on the seafloor and used in the present analysis, provides a strong constraint with which to separate the two factors.
The introduction of the viscoelastic effect is essential for explaining the postseismic deformation data 33,34; however, it is difficult to find an appropriate viscoelastic structure of the crust and upper mantle with which to model the deformation quantitatively. Although a simple layered-structure model could explain both the on- and offshore horizontal displacement fields 37, such a model is clearly inconsistent with the actual tectonics of the subduction zone and it cannot explain well the observed magnitude of seafloor subsidence. The heterogeneous structure including the subducting Pacific slab and the cold mantle wedge 33, assumed in the present study, explains both the horizontal and the vertical deformation better than the simple layered model, and it would be much more appropriate for evaluating the contribution of the viscoelastic deformation to the observed postseismic deformation. The rheological structure and parameters estimated by ref. 33 may not be uniquely determined, but they are well constrained. The viscosities controlling both the spatial distribution and temporal variation of the deformation were constrained by the time constants of decay of the observed displacement time series. The elastic thicknesses of the oceanic slab and the continental lithosphere are sensitive to the horizontal displacement pattern. A thicker oceanic slab reduces the landward motion on the seafloor, whereas it increases the seaward displacement on land. Increasing the thickness of the continental lithosphere enlarges the seaward motion on land. Too large a viscoelastic displacement results in over-correction of the observed postseismic displacement field, so that large normal-fault-type postseismic slip would be required; however, substantial normal-faulting postseismic slip is physically unrealistic. Such trial-and-error to constrain the viscoelastic deformation model based on terrestrial and seafloor geodetic observation data was thoroughly performed by refs 33,38; therefore, we conclude that their model is the most reliable model at present.
The simple model predicts smaller viscoelastic displacements near the Pacific coast than the novel realistic model. As an underestimation of the viscoelastic contribution results in an overestimation of the magnitude of postseismic slip, previously estimated postseismic slip distributions could be biased even if the viscoelastic effect was considered, as could those based on the purely elastic assumption 33. For example, the amount of postseismic slip beneath the northern coastal region (subarea 1) was estimated at ~1.2 m/yr based on the analysis of the small repeating earthquakes (Fig. 4), which is much smaller than previous estimates (>5 m/yr) (refs 27,28,31). Although the slip estimation based on repeating earthquakes could be biased due to the uncertainty in the empirical scaling relationship used for the slip estimation, its consistency with that derived from the geodetic data after the careful treatment of the viscoelastic relaxation indicates that the present result is a more realistic representation of the postseismic deformation following the 2011 Tohoku Earthquake. The arguments against the spatial partitioning of seismic and aseismic behaviour on faults, based on previous geodetic data analyses, could thus have been drawn from a less reliable slip pattern, and we therefore argue for the conventional concept on the basis of an improved postseismic slip distribution.
In Supplementary Fig. 1, the estimated postseismic slip distribution is compared with the coseismic slip distributions of various studies 14,15,[17][18][19]23. Spatial separation between co- and postseismic slips is well recognized in subarea 1, as in the comparisons with the slip model based on the on- and offshore geodetic data 25 (Figs 1a and 4). Although some coseismic slip distributions seem to overlap with the postseismic slip distribution at subarea 2, we regard the lower spatial resolutions of the slip models based on terrestrial geodetic observations and/or far-field seismic waveforms as accounting for the apparent overlaps.
Spatial separation between the co- and postseismic slip. As an important conclusion of the geodetic and seismological data analyses performed in this study, we claim that no significant postseismic slip occurred from 23 April 2011 to 10 December 2011 in the major coseismic slip zone of the mainshock area. This means that the conventional asperity model of slip behaviour 8,9 is also valid for the rupture area of the 2011 Tohoku Earthquake, the subduction zone to the northeast of Japan, and it shows that the anomalously large earthquake was not exceptional in terms of slip behaviour.
Postseismic slip is estimated in the coseismic rupture area off Fukushima and Ibaraki prefectures from the repeating earthquake analysis (Fig. 4). One may argue that this disproves the spatial separation of co- and postseismic slip, but we regard this overlap as an apparent one caused by the lower spatial resolution of the postseismic slip estimation, as we explained earlier. We also have to note the uncertainty of the coseismic slip in this area estimated from the geodetic data. No substantial coseismic slip is estimated if the GPSA data at the site located in the area are excluded from the inversion (see Supplementary Fig. S10 of ref. 25), meaning that the coseismic slip distribution in the region is strongly dependent on the data of this single GPSA station. Early postseismic deformation at the site could be responsible for the mislocation of the coseismic slip, because the coseismic deformation data from the campaign-style GPSA surveys inevitably contain early postseismic deformation.
Although we have to refrain from making a detailed interpretation of the slip distributions in areas with low spatial resolution, it must be noted that the co- and postseismic displacements on the seafloor off Miyagi Prefecture were well constrained by the continuous OBP records. Supplementary Fig. 4b demonstrates how the OBP records improve the spatial resolution of the slip distribution off Miyagi Prefecture. In this area, the coseismic slip distribution is constrained well even if the GPSA data, possibly contaminated by early postseismic deformation, are excluded 25. Therefore, the spatial partitioning of the slip off Miyagi Prefecture is strongly supported by the present observations and analysis. Seafloor observation networks of GPSA and OBP gauges 39,40, which have been installed since the 2011 M9.0 Tohoku Earthquake and are still being developed, will contribute to future studies on the postseismic slip off the Pacific coast of Northeast Japan.
Discussion
Spatial partitioning between the co- and postseismic slip distributions on the plate interface of the subduction zone to the northeast of Japan suggests a high seismic probability at the rupture area of the 1968 Tokachi-oki Earthquake, located northeast of subarea 1. No substantial postseismic slip was identified in the rupture area of the 1968 earthquake, but the area is surrounded by extensive postseismic slip, as indicated by the geodetic data and interplate seismicity (Figs 1a, 4 and 5). These results suggest that the rupture area of the 1968 Tokachi-oki Earthquake is still coupled strongly, especially in the northern portion (40.5-41.2°N, 141.9-142.6°E), because of its stick-slip frictional property, but that the loading rate would be increased by the surrounding postseismic slip. Based on the obtained slip distribution, we argue that the occurrence time of the next earthquake there could be advanced. The averaged recurrence interval of the earthquakes at the rupture area of the 1968 Tokachi-oki Earthquake has been estimated to be ~97 years based on the past events that occurred in 1677, 1763, 1856 and 1968 (ref. 42). The maximum slip of the earthquake in 1968 has been estimated to be ~9 m (ref. 43), while the plate convergence rate, which is equal to the steady slip rate at a portion with zero coupling between the subducting and continental plates, is ~8 cm per year around the Tohoku district 44,45. During the period analysed in this study, the cumulative postseismic slip on the plate interface reached ~40 cm near the source region of the 1968 Tokachi-oki Earthquake within 1 year after the occurrence of the M9 mainshock, which is equivalent to 5 years of steady slip at the rate evident before the Tohoku-oki Earthquake. Therefore, the next earthquake at the rupture area of the 1968 Tokachi-oki Earthquake could occur at least 4 years earlier than the ordinary recurrence interval, if the interplate coupling ratio does not change.
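The advance of the next Tokachi-oki event follows from simple arithmetic; the short sketch below merely reproduces the rounded numbers quoted above (cumulative postseismic slip, plate rate, recurrence interval) and is not part of the authors' analysis.

```python
# Worked example of the occurrence-time advance quoted in the text.
postseismic_slip_m = 0.40    # ~40 cm of cumulative slip near the 1968 source region
plate_rate_m_per_yr = 0.08   # ~8 cm/yr plate convergence (steady slip at zero coupling)
elapsed_yr = 1.0             # time within which that postseismic slip accumulated

equivalent_steady_yr = postseismic_slip_m / plate_rate_m_per_yr  # = 5 years of loading
advance_yr = equivalent_steady_yr - elapsed_yr                   # = 4 years

recurrence_yr = 97           # averaged recurrence interval of Tokachi-oki events
print(f"Loading equivalent to {equivalent_steady_yr:.0f} yr delivered in {elapsed_yr:.0f} yr")
print(f"Next event advanced by at least {advance_yr:.0f} yr "
      f"(~{recurrence_yr - advance_yr:.0f} yr instead of ~{recurrence_yr} yr)")
```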
Recurrence intervals of such large interplate earthquakes are known to have intrinsic irregularities, which could be related to various factors, for example, stress perturbations due to nearby seismic and/or aseismic slip events along the plate boundary, large intraplate earthquakes, or tidal loading effects. Ref. 46 has recently revealed that the interplate coupling in the northeast Japan subduction zone shows periodic variations with a period of 2-6 years, and large earthquakes may be triggered by the acceleration of aseismic slip on the plate interface. It is possible that the postseismic slip associated with the Tohoku Earthquake advances the next Tokachi-oki earthquake by more than 4 years, as a result of the interplay with the newly found slow-slip acceleration cycle. Moreover, the postseismic slip in this region had not terminated by the end of the analysis period, and possibly still continues. Thus, the occurrence time advance must be more than the above estimated value, but further investigation using the seafloor geodetic observation data 39 will be necessary to determine the time of advance more accurately.
[Figure caption: Postseismic slip distributions estimated from small repeating earthquakes and terrestrial and seafloor geodetic data. The colour map shows the average cumulative slip estimated from three or more repeating earthquake groups in 0.3° × 0.3° windows, where the grey zones indicate no repeater activity during the period of the window. Black contours represent the postseismic slip distribution based on geodetic data (identical to the black contours in Fig. 2a). The blue dashed contours represent the coseismic slip distribution 25. The black dashed line denotes the down-dip limit of the interplate earthquakes 56. The broken red lines show the depth of the subducting plate interface 50. A green dotted line surrounds the area in which the average residuals between the given and estimated slips among the computational tests shown in Supplementary Fig. 3 are <0.3 m (Supplementary Fig. 4). Grey contours show the rupture areas of past large interplate earthquakes 43,57-59.]
Methods
The data during the period immediately after the mainshock occurrence until 23 April are excluded in order to minimize the effect of poroelastic relaxation and of coseismic deformations associated with several major intraplate earthquakes (both in the shallow crust and in the oceanic slab) that happened in this period. In the present study, we limited the analysis period to the early stage of the postseismic period, so that we can neglect the effect of recoupling between the subducting and continental plates, which gives additional complexity in interpreting the deformation data.
GPS data observed at stations managed by the Geospatial Information Authority of Japan (GSI), Tohoku University (TU), the Japan Nuclear Energy Safety Organization (JNES) and the National Astronomical Observatory (NAO) were analysed to obtain daily station coordinates, with uncertainties of ~2 and 5 mm for the horizontal and vertical components, by applying Precise Point Positioning 47 using the GIPSY-OASIS II software. Overall, 383 GPS sites were used in the inversion analysis to estimate the postseismic slip distribution on the plate interface. It is reasonable to expect an improvement in the spatial resolution of the inversion analysis by including GPS stations of organizations other than the GSI (namely, 38 sites added to the 345 GSI sites), because the dense GPS observation array allows us to detect short-wavelength spatial variations in the surface displacement rate field. A detailed examination of the improvement in spatial resolution is presented in ref. 25.
Seven GPSA stations in and around the rupture area of the 2011 Tohoku Earthquake have been in operation under the management of the Japan Coast Guard (JCG) and TU since the occurrence of the earthquake. In the inversion analysis, we used six GPSA stations where the surveys were performed more than twice after the Tohoku Earthquake 33,34.
The 1 Hz sampled OBP records of six sites were used in this study. Raw OBP data are available before 23 September 2011, that is, for about the first two-thirds of the entire analysis period from 23 April 2011 to 10 December 2011. We estimated changes in the seafloor levels using the OBP gauges 48. As these pressure gauges were installed before the 2011 Tohoku Earthquake, the secular change in the OBP due to sensor drift can be excluded by linear and exponential curve fitting to the records before the mainshock occurrence. Thus, we were able to obtain vertical displacement time series due to the postseismic deformation associated with the 2011 Tohoku Earthquake from these pressure records. These OBP records were shorter than the entire analysis period; therefore, we extrapolated the time series by fitting a logarithmic function to the available postseismic displacement time series (Fig. 6g).
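A sketch of the drift correction and extrapolation described above. The functional forms (a linear-plus-exponential drift fitted to the pre-mainshock record, and a logarithmic postseismic decay used for extrapolation) follow the text, but the parameter names, initial guesses and time bookkeeping are illustrative assumptions rather than the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit

def drift(t, a, b, c, tau):
    """Sensor drift model: linear plus exponential term, fitted pre-mainshock."""
    return a + b * t + c * np.exp(-t / tau)

def postseismic(t, d0, amp, tau):
    """Logarithmic postseismic displacement used for extrapolation."""
    return d0 + amp * np.log1p(t / tau)

# t_pre, p_pre   : times (days since installation) and pressure-equivalent heights
#                  recorded before the mainshock
# t_post, p_post : times (days after the mainshock) and raw pressure-equivalent
#                  heights recorded after the mainshock
def correct_and_extrapolate(t_pre, p_pre, t_post, p_post, t_target):
    # 1) fit the instrumental drift to the pre-mainshock record
    popt_drift, _ = curve_fit(drift, t_pre, p_pre,
                              p0=[0.0, 0.0, 0.01, 100.0], maxfev=10000)
    # 2) remove the extrapolated drift from the postseismic record
    u_post = p_post - drift(t_post + t_pre[-1], *popt_drift)
    # 3) fit a logarithmic curve and extrapolate to the end of the analysis period
    popt_log, _ = curve_fit(postseismic, t_post, u_post,
                            p0=[0.0, 0.1, 10.0], maxfev=10000)
    return postseismic(t_target, *popt_log)
```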
All site coordinates and daily displacement time series are available as Supplementary Data sets 1 and 2.
Preparation of the inversion analysis. We estimated the station displacement that can be regarded as having been caused by the postseismic slip by subtracting the displacements due to steady plate motion, large aftershocks and viscoelastic relaxation from the observed postseismic displacement time series. Figure 6 shows the displacements due to viscoelastic relaxation and the displacements regarded as being due to postseismic slip after the corrections, as well as the observed postseismic displacements.
The effect of the plate motion is corrected by transforming the observed data into the Okhotsk-plate-fixed reference frame by applying the plate motion model of ITRF2005 (ref. 49). We also estimated and removed the coseismic displacements associated with intraplate earthquakes. The displacements at the observation stations were calculated based on the centroid-moment-tensor (CMT) solutions of the earthquakes during the analysis period, provided by the Japan Meteorological Agency.
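The plate-motion correction amounts to removing the rigid rotation of the Okhotsk plate predicted by an Euler vector, v = ω × r. The sketch below uses this standard formula; the pole coordinates and rotation rate shown are placeholders, not the published ITRF2005 Okhotsk-plate values used by the authors.

```python
import numpy as np

R_EARTH = 6371e3  # mean Earth radius, m

def plate_velocity(lon_deg, lat_deg, pole_lon_deg, pole_lat_deg, omega_deg_per_myr):
    """Horizontal (east, north) velocity in m/yr of a rigid plate at a site,
    computed from an Euler pole as v = omega x r on a spherical Earth."""
    lon, lat = np.radians([lon_deg, lat_deg])
    plon, plat = np.radians([pole_lon_deg, pole_lat_deg])
    omega = np.radians(omega_deg_per_myr) / 1e6  # rad/yr

    # unit position vector of the site and the Euler rotation vector
    r = np.array([np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)])
    w = omega * np.array([np.cos(plat) * np.cos(plon),
                          np.cos(plat) * np.sin(plon),
                          np.sin(plat)])
    v = np.cross(w, r) * R_EARTH  # velocity in the Earth-centred frame, m/yr

    # project onto local east and north unit vectors
    e_hat = np.array([-np.sin(lon), np.cos(lon), 0.0])
    n_hat = np.array([-np.sin(lat) * np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat)])
    return np.dot(v, e_hat), np.dot(v, n_hat)

# Placeholder Euler vector (NOT the published pole); the predicted motion over the
# analysis period would be subtracted from the observed displacements.
ve, vn = plate_velocity(141.0, 38.0, pole_lon_deg=-90.0, pole_lat_deg=55.0,
                        omega_deg_per_myr=0.2)
```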
The viscoelastic deformation was predicted using a novel finite-element method 33. The viscoelastic model is based on the realistic structure of the Northeast Japan arc-trench system, such as the subducting oceanic slab. Burgers rheology was assumed within the viscoelastic layer. The model successfully reproduces not only seafloor but also terrestrial displacements, whereas no other model has so far succeeded in explaining all the onshore and offshore crustal deformation. The constructed finite-element model was tuned such that no landward residual displacement remained when the calculated viscoelastic displacement was subtracted from the observed postseismic displacement. This tuning was performed to ensure that no normal-faulting motions are required to explain the residual displacements, which are considered to reflect the postseismic slip on the plate boundary. On the other hand, the postseismic reverse-faulting slip would be overestimated if the displacements due to viscoelastic relaxation were underestimated 33.
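For reference, a Burgers body combines a Maxwell element (shear modulus mu_M, viscosity eta_M) and a Kelvin element (mu_K, eta_K) in series, so that the strain under a constant stress sigma_0 applied at t = 0 follows the standard creep law below; the specific moduli and viscosities adopted in ref. 33 are not reproduced here.

```latex
\varepsilon(t) \;=\; \sigma_0 \left[ \frac{1}{\mu_M} \;+\; \frac{t}{\eta_M}
  \;+\; \frac{1}{\mu_K}\left(1 - e^{-\mu_K t/\eta_K}\right) \right]
```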
Estimation of the postseismic slip. A time-dependent inversion method 6 was used to estimate the postseismic slip distributions associated with the 2011 Tohoku Earthquake from the corrected postseismic crustal deformation data. The postseismic slip distribution was expressed by the dip-slip on a triangulated tessellation of the plate interface geometry 50 in a homogeneous elastic half-space 51. Strike-slip components were ignored to reduce the computation time. The weights of the constraint conditions with respect to the spatial and temporal smoothness of the slip distribution and the Dirichlet-type boundary condition were optimized by minimizing Akaike's Bayesian Information Criterion 52,53.
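A minimal sketch of the kind of regularized least-squares problem that underlies such an inversion: slip m on triangular fault patches is estimated from displacement data d through Green's functions G, with roughness and temporal-continuity penalties whose weights would, in the actual analysis, be chosen by minimizing ABIC. The Green's function matrix and the Laplacian operator are assumed to be precomputed; the ABIC search and boundary conditions are not reproduced here.

```python
import numpy as np

def invert_slip(G, d, L, alpha):
    """Damped least-squares slip inversion for a single epoch.

    G     : (n_data, n_patches) dip-slip Green's functions (elastic half-space)
    d     : (n_data,) corrected postseismic displacements
    L     : (n_smooth, n_patches) discrete Laplacian enforcing spatial smoothness
    alpha : smoothing weight (selected via ABIC in the actual analysis)
    """
    A = np.vstack([G, alpha * L])
    b = np.concatenate([d, np.zeros(L.shape[0])])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

def invert_slip_series(G, D, L, alpha_s, alpha_t):
    """Schematic time-dependent version: each epoch is inverted with spatial
    smoothing plus a penalty tying it to the previous epoch (temporal smoothness).
    D has shape (n_epochs, n_data)."""
    n_patches = G.shape[1]
    slips, m_prev = [], np.zeros(n_patches)
    for d in D:
        A = np.vstack([G, alpha_s * L, alpha_t * np.eye(n_patches)])
        b = np.concatenate([d, np.zeros(L.shape[0]), alpha_t * m_prev])
        m_prev, *_ = np.linalg.lstsq(A, b, rcond=None)
        slips.append(m_prev)
    return np.array(slips)
```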
We undertook a set of computational tests to examine the spatial resolution of the inversion analysis. In these tests, displacements at the geodetic sites were calculated using slip distributions with a checkerboard pattern on the plate interface and were then inverted for the slip distribution (Supplementary Fig. 3). The difference between the given slip amount and the estimated one at each triangular element was evaluated. The test was performed using 100 different checkerboard patterns obtained by changing the pattern alignment. The differences between the given and estimated slips were averaged to assess how well the slip amounts were recovered by the inversion. Here, we regard the slip amounts at triangular elements with an averaged difference of <0.3 m as well constrained. Note that the formal estimation errors are shown in Fig. 1b.
Selection of the interplate earthquakes. Interplate seismicity can provide independent information regarding the aseismic slip along the plate interface if we assume that small interplate earthquakes are triggered by the aseismic slip around their fault patches. Here we inspect the change in the seismicity rate of the interplate earthquakes identified according to their focal mechanisms. In addition to the thrust-faulting earthquakes in the F-net focal mechanism catalogue 54 and the small repeating earthquakes 29, we included small offshore earthquakes whose mechanism solutions were estimated by ref. 36 based on the similarity of their seismic waveforms to those of earthquakes with known focal mechanisms.
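A sketch of a checkerboard resolution test of the sort described above: an alternating synthetic slip pattern is forward-modelled, inverted with the same regularized scheme, and the recovery error per patch is averaged over many shifted patterns. The patch geometry, noise level, slip amplitude and threshold are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def checkerboard_test(G, L, alpha, patch_xy, cell_size, n_patterns=100,
                      slip_amp=1.0, noise=0.005, seed=0):
    """Average absolute recovery error per fault patch over shifted checkerboards.

    patch_xy : (n_patches, 2) along-strike / down-dip coordinates of patch centres
    cell_size: checkerboard cell dimension in the same units as patch_xy
    """
    rng = np.random.default_rng(seed)
    err_sum = np.zeros(G.shape[1])

    for _ in range(n_patterns):
        # shift the checkerboard origin to change the pattern alignment
        shift = rng.uniform(0.0, 2.0 * cell_size, size=2)
        ix = np.floor((patch_xy + shift) / cell_size).astype(int)
        m_true = slip_amp * (((ix[:, 0] + ix[:, 1]) % 2) * 2.0 - 1.0)

        # forward model with noise, then invert with the damped least squares
        d_syn = G @ m_true + noise * rng.standard_normal(G.shape[0])
        A = np.vstack([G, alpha * L])
        b = np.concatenate([d_syn, np.zeros(L.shape[0])])
        m_est, *_ = np.linalg.lstsq(A, b, rcond=None)
        err_sum += np.abs(m_est - m_true)

    # patches whose averaged error is below some threshold (e.g. 0.3 m in the
    # text) would be regarded as well constrained
    return err_sum / n_patterns
```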
Data availability. All geodetic data used in this study are included in Supplementary Data sets 1 and 2. | 6,245.4 | 2016-11-17T00:00:00.000 | [
"Geology"
] |
MULTI-SOURCED, REMOTE SENSING DATA IN LEVEES MONITORING: CASE STUDY OF SAFEDAM PROJECT
In the following study, the authors present the development of a levee monitoring system as a supplement to the existing programs of flood protection that provide flood hazard and risk maps in Poland. The system integrates multi-source information about levees, acquiring and analysing various types of remote sensing data, such as photogrammetric and LiDAR data obtained from Unmanned Aerial Vehicles, and optical and radar satellite data. These datasets are used to assess the levee failure risk resulting from their condition, starting from a general inspection using satellite data and concluding with the use of UAV data in a detailed semi-automatic inventory. Finally, the weakest parts of a levee can be identified to create reliable flood hazard maps in case of levee failure, thus facilitating the constant monitoring of the water level between water gauges. The presented system is an example of multi-source data integration which, through the complementary nature of its components, provides a powerful tool for levee monitoring and evaluation. In this paper, the authors present the scope of the preventative configuration of the SAFEDAM system and the possible products of remote sensing data processing resulting from a hierarchical methodology of remote sensing data usage, leading to a multi-criteria analysis defining the danger associated with the risk of levee failure.
INTRODUCTION
Floods are the most frequently occurring cataclysm in Europe and worldwide. For decades, human efforts towards flood prevention have been focused on levee construction and monitoring. Flood risk and hazard maps constitute particularly valuable products, which contribute to increasing the awareness of possible negative flood consequences and, at the same time, allow for more accurate crisis and planning management. The current development of remote sensing technology allows for fast and precise environment monitoring by analysing ongoing changes. For years, airborne optical sensors have provided a vast range of photogrammetric data for photo interpretation and remote measurements. Other modern technologies widely used nowadays for the purpose of environment monitoring are Airborne Laser Scanning (ALS) and satellite imaging systems, the usage of which expands every year thanks to the continuous improvement of the systems' resolution. Also noteworthy is the development in the field of satellite radar systems, which, in addition to the aforementioned technologies, provide accurate data regardless of weather conditions, both during the day and at night. Regularly acquired satellite images are willingly used to monitor phenomena that concern wide areas, e.g. earthquakes or wide-area forest fires (Kussul et al., 2011; Tralli et al., 2005).
Considering all the aspects mentioned above, the authors have prepared a levee monitoring system that integrates these technologies with the existing databases of embankment parameters and hydrological data in Poland. By analysing multi-source data, the system evaluates the risk of levee failure and provides valuable information supporting crisis management.
SAFEDAM PROJECT
The main goal of the project 'Advanced technologies in the prevention of flood hazard', named SAFEDAM, is monitoring the levees in order to prevent hazards. In Poland, there are over 4000 km of levees, the technical condition of which has to be evaluated within a five-year cycle. The constant monitoring of levees' susceptibility to failure is a fundamental process for disaster prevention, especially in light of the increasing frequency of floods. The aim of the SAFEDAM project is the creation of a levee monitoring system using Unmanned Aerial Vehicles (UAV), equipped with a light LiDAR unit (laser scanner) and multispectral cameras (blue, green, red and near-infrared spectrum), together with optical and radar satellite imagery and archival aerial imagery. Multi-sourced and multi-temporal data make it possible to evaluate the levees' condition, supplementing direct surveying measurements. A comprehensive IT system facilitates the collection, automatic analysis and visualisation of the data for hydrological services and crisis management professionals. Its implementation ensures the effective management of flood risk. SAFEDAM complements the already implemented flood-protection projects in Poland, such as the IT System of the Country's Protection - ISOK (Kurczyński and Bakuła, 2013) and the System of Hydrological Structures Control Records - SEKOP. It also goes beyond the recommendations of the Floods Directive (Directive 2007/60/EC). The presented SAFEDAM system consists of two configurations: interventional and preventative. This article is focused on the preventative, early-warning part of the system, the module responsible for preparing data for the evaluation of the levee condition. In the system, multi-sourced levee information is used by implementing and analysing various types of remote sensing data, providing an assessment of the levee failure risk resulting from their condition.
The preventative module of the SAFEDAM system enables the collection, processing, management and distribution of the processed data. Its functionalities are predominantly based on the created aerial and satellite orthophotomaps, as well as on digital terrain models obtained from various platforms. The implemented algorithms are based on expert classification using radiometric vegetation indices. Furthermore, the system allows the user the possibility of supervision and manual correction by providing tools for creating layers supporting photointerpretation.
STUDY AREA
In order to verify the system's reliability and the proper application of its functions, monitoring tests have been conducted on site. For the selection of the sites of interest, the following criteria have been applied: (1) levees with a high frequency of natural failure occurrence; (2) sites with the highest amount of potential levee failure threats; (3) hydrological structures with a well-documented current and historical geotechnical state; (4) sites with limited access. Based on the levee data analyses obtained from the Dam Monitoring Center (OTKZ), Institute of Meteorology and Water Management - National Research Institute (IMGW-PIB), five study areas have been selected (Figure 1). All levees are located near the Vistula, the largest river in Poland, where breaches occurred during the extended floods in Europe in 2010, causing deaths and damage to properties.
Study area No. 1: located near the town of Płock, central Poland. After the embankment failure, the modernisation consisted of its reconstruction, including a sheet gabion as an additional protection on the river slope. Study area No. 2: located in east Poland. Length: 30 km. The breach took the form of a landslide, as a result of a loss in levee stability; the modernisation consisted of ground material exchange, soil compaction and additional slope protection by iron nets. Study area No. 3: located near the town of Annopol, south-east Poland. Length: 6.2 km. After the failure, the levee was rebuilt (using soil compaction); a sheet pile was applied along a length of 105 m, and the embankment was additionally secured with a bentonite blanket and iron net. Study area No. 4: located near the town of Sandomierz (Winiary village), south-east Poland. Length: 3.7 km. After the embankment failure, it was sealed using the iron sheet pile GU-7-600. Study area No. 5: located near the town of Tarnobrzeg, south-east Poland. Length: 7.5 km. The modernisation consisted of the embankment being sealed by a bentonite blanket and a sheet pile installation in the levee base.
PREVENTION SYSTEM DESCRIPTION
In the system, a few kinds of remote sensing data are used. Their use cannot be simultaneous. The system adopted the principle of using data from the general to the specific range. In the hierarchical dependence (Figure 2), satellite data, as the data with the lowest resolution, are used to monitor the water range every few days and, together with data from the geodetic repositories updated every 2-3 years, allow areas to be indicated for more accurate measurements. The data items gathered from the UAV platform and from direct surveying measurements constitute the most accurate sets of data that determine the state of the embankments. In the following subchapters, the division into these two source groups is discussed in more detail.
Satellite data - overall monitoring
In order to identify ongoing erosion processes and small-scale crest height changes of the embankments, the most precise data available should be analysed. However, due to the preparation of a country-wide system, the collection and analysis of a large amount of data with a high spatial resolution reaching 10 cm is time-consuming and economically inefficient.
To avoid unnecessary data processing and ensure immediate reaction, the system provides the possibility of pre-selecting areas for a further and more detailed analysis based on satellite data and data from the national geodetic and cartographic repository enriched by aerial images with a 25 cm spatial resolution for rural areas and 10 cm for urban areas. Additionally, the geodetic and cartographic repository in Poland provides a Digital Terrain Model (DTM) with a 15 cm accuracy and it is treated as a reference DTM for the further multitemporal analysis.
The methodology of satellite data use in relation to the optical data in the SAFEDAM system has been described in Weintrit et al. (2017). The investigation into the possibilities of radar data use in the project has been described in the studies of Pluto-Kossakowska et al. (2017). Both studies investigated the possibilities of current water level identification (Figure 3), considering the satellite revisit time, which is essential particularly in areas located between water gauges or in the case of gauge failure.
The temporal resolution of satellite optical systems for areas at medium latitudes, such as Poland, is daily (e.g. the Pléiades constellation) or up to several days (e.g. LANDSAT, Sentinel-2). It depends on the construction of the satellite system and the possibilities of its programming. For preventative monitoring, this is a sufficient revisit time, but the possibilities of using satellite optical systems are limited by weather and lighting conditions, such as cloud coverage. Optical imagery can be used only in periods of cloudless weather or with very little local cloudiness. The identification of levee damage can therefore be conducted only as quasi-continuous monitoring.
Due to these restrictions, radar satellite data are more widely used. A special tool for the automatic gathering and processing of Sentinel-1 radar data was developed; it provides a fully automatic analysis of water ranges. The Sentinel database is regularly queried through the shared SciHub API. With the regular inspection of the database by the application, all products are downloaded on a regular basis. The process is a Windows service that runs once or twice a day. The subsequent process steps are recorded in logs: the start of the process, the area of interest (AOI) number, the date of the last downloaded product (equal to the latest archive in the database for the checked AOI), the number of archives to download, whether the file has already been downloaded for another AOI, and information about adding to the database or assigning an existing archive to the AOI. In order to correctly save metadata from different sensors, including Sentinel data, metadata mappings from the source files were converted into tables in the SAFEDAM database.
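A minimal sketch of the kind of periodic query performed by such a service, assuming the legacy Copernicus SciHub OpenSearch endpoint; the credentials, AOI polygon and one-day look-back window are placeholders, and the scheduling, download and database-logging steps of the SAFEDAM tool are not reproduced here.

import requests

# Hypothetical credentials and AOI polygon (WKT); replace with real values.
USER, PASSWORD = "user", "password"
AOI_WKT = "POLYGON((19.0 52.4, 19.2 52.4, 19.2 52.5, 19.0 52.5, 19.0 52.4))"

query = (
    'platformname:Sentinel-1 AND producttype:GRD '
    f'AND footprint:"Intersects({AOI_WKT})" '
    'AND ingestiondate:[NOW-1DAYS TO NOW]'
)
resp = requests.get(
    "https://scihub.copernicus.eu/dhus/search",
    params={"q": query, "rows": 25, "format": "json"},
    auth=(USER, PASSWORD),
    timeout=60,
)
resp.raise_for_status()
entries = resp.json().get("feed", {}).get("entry", [])
if isinstance(entries, dict):     # a single result is returned as a dict, not a list
    entries = [entries]
for e in entries:
    print(e["title"], e["id"])    # product name and UUID to be downloaded

The returned product identifiers would then be downloaded and registered against the corresponding AOI, as described above.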
In this configuration, very high resolution satellite (VHRS) scenes ordered on demand, and mainly aerial images from the repositories and elevation data, are also used to preselect the areas which require more detailed measurements. For this purpose, some supporting tools were prepared: vegetation indicators (the Normalised Difference Vegetation Index - NDVI and the Green-Red Vegetation Index - GRVI), raster algebra for analysing the DTM, and an algorithm enabling automatic water detection. The indicated levees can then be examined using more detailed remote sensing technology at a higher resolution or by direct surveying measurements.
UAV monitoring
Beside the satellite data, the levees are monitored using UAV platforms. These platforms can be equipped not only with cameras (RGB, near-infrared, thermal infrared), but also with light laser scanners collecting 3D point clouds. Equipping UAV platforms with light laser scanners is still a novelty in the field of remote sensing; however, there are already some articles about their application and accuracy (Petrie, 2013; Pilarska et al., 2016; Bakuła et al., 2016, 2017). Mounting more than one sensor on the UAV platform makes it possible to acquire various data simultaneously, which enriches the analysis.
In the SAFEDAM project, the UAV application is the second phase of the levee monitoring, which can be conducted systematically in order to acquire up-to-date data, and which becomes obligatory when danger occurs. The main advantage of the low-altitude UAV platforms over airborne data in levee monitoring is the possibility of fast data acquisition, the high resolution of the final products and the precise investigation of the area of interest. Additionally, the UAVs have a strong potential in monitoring linear objects, such as levees (Bakuła et al., 2016).
Within the SAFEDAM project, two types of platforms are considered: interventional and preventative. The aim of the interventional platform is to monitor the levees' condition when a hazard occurs. This platform needs to be light and easily operable, and therefore it is equipped only with cameras. The second, preventative UAV platform, which is developed within the SAFEDAM project, is the fixed-wing NEO3 (Figure 4). This platform is equipped with a Riegl miniVUX laser scanner and two Sony a6000 cameras. One camera acquires the RGB images, while the second one registers in the near-infrared wavelength. This platform can provide LiDAR data with a mean point density of up to 8 points per square metre and can register the data from a height of between 100 and 150 m above the ground.
The LiDAR data, which are provided by the UAV platforms, are used for the generation of the DTMs and the analysis of height differences. The RGB and NIR images are used for the detection of newly emerging bare ground. The classification of the bare ground is conducted based on the selected indices: NDVI or GRVI. Both the differences in height and in bare ground range which occur on the levee are important in the levee monitoring because they might constitute a hazard sign.
Figure 4. The fixed-wing NEO3 equipped with marked sensors: RGB and NIR cameras, and a light laser scanner.
Description of the preventative mode of the SAFEDAM system
The SAFEDAM system is divided into two different configuration modes and is operated by two different types of end-users. The prevention mode is dedicated to the hydrological services, while the interventional mode is assigned to the crisis management services (Weintrit et al., 2018). Exceeding the appropriate value of the hazard condition assessment gives the user a statement about the threat. The IT system must have clearly defined functionality in each of the previously defined modes of operation. Some of the functions will be repeated; these are the so-called basic functionalities of the system, such as displaying spatial data layers, searching for data, or measurements. The main tools developed under the preventative mode are: a tool for the import and comparative analysis of satellite data and data from an unmanned measuring platform, a threat monitoring tool, and a module for the determination of flood risk and flood risk analysis. The system's functions facilitate the monitoring of flood embankments for the purpose of flood risk and threat assessment, as presented in Table 1.
Table 1. Functions of the SAFEDAM system in the preventative mode.
Access: login, appropriate permissions.
Available data: access to the full SAFEDAM database (including ISOK, SEKOP, BDOT, PSHM) with continuous updates.
Basic functions: zooming, centering, displaying the coordinates of the mouse cursor, dynamic scale of the map, searching for data in the database, the ability to turn off the visibility of layers, setting the transparency of layers, measurements of area, length and height, the ability to draw on the map, screen shots from 2D and 3D views to graphic form, export of shapefiles with markings from the action, export of shapefiles with the evaluation of levees, sharing links to WMS layers visible in the system.
Dedicated functions: the ability to create dynamic cross-sections, displaying messages from the geoparticipation application in real time, analysis and updating of the assessment of the condition of flood embankments, presentation and updating of the water level from water gauges from the PSHM base, display of warnings from the PSHM base.
Result: updated flood risk and threat assessment.
State of risk - definition
In order to identify the appropriate moment to switch between the prevention and intervention monitoring modes, the state of danger needs to be defined. For that reason, a multi-criteria analysis of the parameters indicating the potential threat directly and indirectly has been applied. The multi-criteria analysis is based on three main sets of data: (1) water level data transmitted directly from the gauge monitoring network; (2) levee safety evaluation based on multi-source data; (3) alerts sent by civilians through the geoparticipation portal (Table 2).
Table 2. Multi-criteria analysis scheme of the state-of-risk definition.

The levee safety evaluation is a result of the analysis of the levee geotechnical, geometric and spatial parameters. The elaborated multi-sourced levee safety evaluation methodology resulted from the weighted values of three data sets: (1) levee technical state evaluation based on the geotechnical and geometrical parameters; (2) interpretation of remote sensing data supported by multitemporal DTM changes and bare earth raster analysis; (3) levee overflow risk (Table 2). Levee technical state data originate from SEKOP and are based on the evaluation studies conducted by the OTKZ; however, the geometrical levee parameters, such as crown height, slope and crown width, will be further updated and detailed based on direct monitoring with the use of the prevention platform. Data updates will differentiate the original, previously generalised levee technical state evaluation. The multi-temporal analysis of the geometry changes, as well as the bare earth raster analysis, will be conducted after every system update with new data: LiDAR scanning (for topography changes) and photogrammetric data (for bare ground range changes).
Another parameter affecting the levee safety is the information about the locations of embankment overflow simulated as part of the ISOK project, calculated for the probabilities of a one-hundred-year and a five-hundred-year flood. According to the initial assumptions of the project, the users' alerts sent through the geoparticipation portal have been included as a component of the levee failure risk analysis. It was assumed that the activity on the geoparticipation portal will increase in line with the rising threat. However, due to the difficulties associated with the authentication of information credibility, alerts should be verified on a regular basis by the local system administrator. Water level data are an integral part of the SAFEDAM system. The gauge-provided data are transmitted directly from the Central Hydrology Database System (PSHM) administered by IMGW-PIB. The data are updated automatically every 24 hours in the prevention mode, but in the case of the intervention mode, the data update rate increases to up to one hour. The PSHM provides information about the current water level and the characteristic levels, the exceeding of which results in a hydrological warning or alarm condition.
In order to automate the process of determining the spatial distribution of the threat, a weighting system involving the multi-criteria analysis parameters has been applied. Numerical values of all the parameters have been assigned according to their influence on the final result. As a result of the simulations carried out, it was found that the decisive factor influencing the determination of the threat state is the water level. The second factor according to its significance is the state of the levees. The value of the civilians' alerts has been difficult to validate due to the poor distribution of the geoparticipation portal and the lack of critical events during the project's second phase.
As a result of the conducted analysis, a new raster including the graphic representation of the embankment safety state is generated. The created algorithm allows an evaluation into four states, namely lack of threat, state of warning, state of alarm and critical state, to be conducted. In the final raster, the safety states are distinguished by a graphic representation varied in colour (Table 3). Referring to the recommendations for the monitoring configuration depending on the state of hazard, for each of the safety states recommendations for the monitoring mode have been proposed. When no threat is identified, a standard monitoring procedure is implemented (Table 3). This prevention mode assumes continuous data collection with the use of the fixed-wing NEO3 platform. The warning state indicates an increased level of potential threat and should be considered as an indicator of areas for further evaluation. For that reason, field supervision by a specialist and further data collection by means of the prevention platform are highly recommended in order to verify the intensity of a potential threat. When at least one of the key factors related to the safety evaluation shows a continuous rise, the system switches to the first phase of the interventional mode, namely the state of alarm. From then on, monitoring in a 12 h cycle is recommended with the use of the prevention or intervention platforms, depending on the area of interest and threat intensity.
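A minimal sketch of a weighted multi-criteria scoring of the kind described above, written in Python; the weights, the normalised score scales and the state thresholds are illustrative assumptions and not the values used in the SAFEDAM system.

def threat_state(water_level_score, levee_state_score, alerts_score,
                 weights=(0.6, 0.3, 0.1)):
    """Combine normalised scores (0..1) into one of the four safety states.

    The weights reflect the text's ordering (water level decisive, then levee
    state, then civilian alerts), but their values are assumptions.
    """
    score = (weights[0] * water_level_score
             + weights[1] * levee_state_score
             + weights[2] * alerts_score)
    if score < 0.25:
        return "lack of threat"
    elif score < 0.5:
        return "state of warning"
    elif score < 0.75:
        return "state of alarm"
    return "critical state"

# Example: high water level, moderately degraded levee, few alerts.
print(threat_state(0.9, 0.5, 0.2))   # combined score 0.71 -> "state of alarm"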
The last phase indicates the critical level of levee failure risk. In this case, constant, 24 h monitoring is essential in order for the crisis management centres to plan an optimal rescue route. Independently from the aforementioned classifications, the system switches immediately to the interventional mode in the case of: (1) people evacuation; (2) defensive action on the embankment (strengthening, sealing, etc.).
Example results
The data which are obtained with the UAV platform and the satellite imagery are uploaded into the external database, which is linked to the system. As a result, many valuable spatial analyses can be performed. The radar satellite data and the high-resolution optical satellite scenes make it possible to generate water masks. The main advantage of the radar data over the optical dataset is the ability to penetrate the clouds. The water masks support the monitoring of the water range in rivers.
The comparison of the water masks generated from the satellite data obtained on different dates indicates the differences in the water range. If the water range increases, this is a sign to the members of staff that a hazard may occur. For the optical VHRS satellite data, the water ranges are determined based on the NDVI, and for the radar satellite data, the water mask is calculated. An example of using a vegetation index to map the range of water is shown in Figure 3.
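A minimal sketch of the water-range comparison described above, assuming two co-registered NIR and red band rasters from different acquisition dates; the NDVI threshold of 0.0 and the random arrays are placeholders standing in for real VHRS scenes.

import numpy as np

def water_mask(nir, red, ndvi_threshold=0.0):
    # NDVI = (NIR - Red) / (NIR + Red); open water typically has low or negative NDVI
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
    return ndvi < ndvi_threshold

# Hypothetical band rasters (rows x cols) for two acquisition dates.
rng = np.random.default_rng(1)
nir_t1, red_t1 = rng.random((100, 100)), rng.random((100, 100))
nir_t2, red_t2 = rng.random((100, 100)), rng.random((100, 100))

mask_t1 = water_mask(nir_t1, red_t1)
mask_t2 = water_mask(nir_t2, red_t2)
new_water = mask_t2 & ~mask_t1      # pixels that became water between the two dates
print("water range increased on", int(new_water.sum()), "pixels")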
The UAV data (images and LiDAR point clouds) are used for a more detailed analysis of the levees' condition. The Digital Terrain Models, which are generated from the LiDAR data, are used for the detection of height differences in the levees (Figure 5). Such a layer serves as support for the specialist conducting the analysis of the multi-sourced remote sensing data. These differences may result from damage caused by animals, or may indicate a landslide. They tell the user which places to pay special attention to.
As can be observed in Figure 5, many areas with height differences higher than 0.25 m can be identified. This can be due to a few reasons, e.g. the interpolation process, the resolution of the DTMs, or errors in the acquired data. Therefore, in order to interpret the results of the analysis properly, a specialist in the field of hydrology is needed, who will properly indicate the areas which seem to constitute damage in the levee and those that should be ignored.
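A minimal sketch of the DTM differencing that supports this analysis, assuming two co-registered elevation grids (a reference DTM and a UAV LiDAR DTM); the grids are synthetic, and the 0.25 m value from the text is used only as an illustrative flagging threshold.

import numpy as np

def flag_height_changes(dtm_reference, dtm_uav, threshold_m=0.25):
    """Return a boolean grid of cells whose elevation change exceeds the threshold."""
    diff = dtm_uav - dtm_reference
    return np.abs(diff) > threshold_m, diff

# Hypothetical elevation grids of a levee section.
rng = np.random.default_rng(2)
dtm_ref = rng.normal(loc=80.0, scale=0.05, size=(200, 50))
dtm_new = dtm_ref + rng.normal(scale=0.05, size=dtm_ref.shape)
dtm_new[100:110, 20:30] -= 0.4      # a synthetic local depression (e.g. an animal burrow)

flags, diff = flag_height_changes(dtm_ref, dtm_new)
print("cells flagged for expert review:", int(flags.sum()))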
The new bare ground detection analysis was conducted based on the aerial images. It is the second layer that should help the end-user in the evaluation of the levee condition. In order to distinguish the bare ground from the vegetation, the NDVI and GRVI were calculated. The results showed that the NDVI gives better results than the GRVI (Figure 6).
Additionally, the studies showed that the threshold for distinguishing the bare ground from the vegetation is not constant for either the NDVI or the GRVI. Therefore, the system gives the user the possibility to change the threshold values and choose the best value to detect the bare ground, and to calculate the differential bare ground raster in order to deliver the new bare ground layer.
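A minimal sketch of the user-adjustable thresholding described above; the band arrays, the default threshold of 0.3 and the choice of index are placeholders, and the differential raster simply marks pixels that changed from vegetation to bare ground between two epochs.

import numpy as np

def bare_ground_mask(red, green, nir, index="NDVI", threshold=0.3):
    # NDVI = (NIR - R)/(NIR + R); GRVI = (G - R)/(G + R); bare ground gives low index values
    if index == "NDVI":
        idx = (nir - red) / np.clip(nir + red, 1e-6, None)
    else:  # "GRVI"
        idx = (green - red) / np.clip(green + red, 1e-6, None)
    return idx < threshold

# Hypothetical orthomosaic bands for two epochs; the threshold can be tuned by the user.
rng = np.random.default_rng(3)
bands_t1 = {b: rng.random((50, 50)) for b in ("red", "green", "nir")}
bands_t2 = {b: rng.random((50, 50)) for b in ("red", "green", "nir")}

bare_t1 = bare_ground_mask(**bands_t1, index="NDVI", threshold=0.3)
bare_t2 = bare_ground_mask(**bands_t2, index="NDVI", threshold=0.3)
new_bare_ground = bare_t2 & ~bare_t1   # differential bare ground raster
print("newly exposed bare ground pixels:", int(new_bare_ground.sum()))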
The presented products of remote sensing data and their processing in the preventative configuration provide the possibility of conducting a multi-criteria analysis of the levee failure risk. The result of the analysis window of the preventive configuration is presented in Figure 7.

Figure 5. Example of the height differences between the DTMs generated from the airborne and UAV LiDAR data.

The assessment of the levee state is conducted in the system considering the remote sensing data, flood wave simulation results from flood risk and hazard maps, the technical levee condition verified by overflow simulations, as well as the water level from the water gauge network and the geoparticipation portal. The developed system integrates a large number of various data sources, thus ensuring the efficient management of various levees. The results of the preventative mode simulations provide data for the system in the intervention mode, where it is possible to work with flood hazard and risk maps integrated into one system. Such a system simplifies rescue action planning and documentation and provides a full-scale problem overview for better crisis management.
Figure 1. Study areas in the SAFEDAM Project.
Figure 2. Diagram of the actions in the preventative configuration of the SAFEDAM project.
Figure 3. Automatic water body detection based on the Pléiades VHRS optical imagery (RGB composition, NIR channel and the resulting water body overlaid on the orthophotomap).
Figure 6. Example of the new bare ground detection based on the RGB and NIR images.
Table 3. Monitoring recommendations depending on the state of risk. | 7,274.4 | 2018-03-06T00:00:00.000 | [
"Environmental Science",
"Engineering",
"Geography"
] |
Multifrequency Excitation Method for Rapid and Accurate Dynamic Test of Micromachined Gyroscope Chips
A novel multifrequency excitation (MFE) method is proposed to realize rapid and accurate dynamic testing of micromachined gyroscope chips. Compared with the traditional sweep-frequency excitation (SFE) method, the computational time for testing one chip under four modes at a 1-Hz frequency resolution and 600-Hz bandwidth was dramatically reduced from 10 min to 6 s. A multifrequency signal with an equal amplitude and initial linear-phase-difference distribution was generated to ensure test repeatability and accuracy. The current test system based on LabVIEW using the SFE method was modified to use the MFE method without any hardware changes. The experimental results verified that the MFE method can be an ideal solution for large-scale dynamic testing of gyroscope chips and gyroscopes.
Introduction
Gyroscope chips are key components in micromachined gyroscopes, a type of angular rate sensor widely used in military, automotive, consumer electronics, and other fields [1][2][3]. They are usually manufactured by a silicon-etching technique. Because of their small size and the imperfect etching technique, precise control of the dynamic chip parameters, such as the resonant frequency and quality factor, is difficult. Thus, we have to perform a dynamic test before gyroscope chips can be packaged into gyroscopes. In large-scale manufacturing of gyroscopes, a high efficiency for the dynamic test is urgently required, either for chips or gyroscopes.
The sweep-frequency excitation (SFE) method is the most commonly used dynamic test method for gyroscopes or gyroscope chips [4][5][6]. A given sweep range (bandwidth) around the resonant frequency is tested by the stepped frequency point by point to plot the frequency response curve, and the dynamic parameters are then extracted. For each frequency point, a stable sine excitation signal is applied, and a stable response signal is then recorded to achieve a high test accuracy. However, this method is extremely time-consuming and cannot yield satisfactory test efficiency, especially in large-scale tests. Recently, search-and-track strategies for the SFE method have been reported in [7], which shorten the test times for the resonant frequency to 1 s and 5 s. The limitations of these methods are that they can only measure the resonant frequency. On the other hand, the use of transient signal excitation such as a step or impulse signal can shorten the test time to 15-20 s, as reported in [8,9]. However, these methods extract the dynamic parameters from the free-decay responses, which are not as stable as the sine response in the SFE method and thus cannot assure test repeatability and accuracy [8,9].
In the current work, a novel multifrequency excitation (MFE) method is proposed to achieve both high efficiency and accuracy. The multifrequency signal was designed to yield a high output signal-to-noise ratio, and a gyroscope-chip dynamic test system was implemented. Finally, the performance of the MFE method was experimentally verified by comparison with the traditional SFE method.
Principles of the Micromachined Gyroscope Chip and Its Dynamic Test

Figure 1a and b show a scanning electron microscope (SEM) photograph and a schematic of a linear vibratory micromachined gyroscope chip. The chip is composed of an orthogonal mass-spring system with a shared sensitive mass and two pairs of electrodes on the two vibratory axes, including the sense axis (x axis) and the drive axis (y axis) [10,11]. First, a sine signal voltage is applied to the drive electrode so that the sensitive mass vibrates along the drive axis. If the gyroscope turns around its sensitive axis (z axis) at this instant, the sense electrode will be driven by a Coriolis force and will subsequently vibrate with an amplitude proportional to the angular rate Ω about the sensitive axis. Finally, the vibration is transformed into a capacitance change in the sense electrode pairs and yields a voltage output signal proportional to Ω.
Theoretically, the linear vibratory micromachined gyroscope chip is a second-order vibratory system along the drive and sense axes, with transfer functions of the standard second-order form

Y(s)/Fd(s) = 1/[m(s^2 + (ωd/Qd)s + ωd^2)],  X(s)/Fp(s) = 1/[m(s^2 + (ωp/Qp)s + ωp^2)],

where the subscripts d and p represent the drive and sense axes, respectively; X and Y are the displacements of the sensitive mass m along the x and y axes, respectively; Fp and Fd are the forces applied to the mass; ωd and ωp are the resonant frequencies; and Qd and Qp are the quality factors. In this study, a dynamic test of the gyroscope chips was conducted on the basis of the principle of electrostatic actuation and capacitance detection, which was described in detail in the authors' previous work [12]. The schematic of the dynamic test of the drive axis input-sense axis output mode is shown in Figure 1c.
In brief, a DC offset Vdc was first added to the excitation signal Vin and its inverted signal -Vin. In the SFE method, Vin = Asin(ω0t). Then, the two outputs were applied to a pair of inverted drive electrodes, resulting in inverted electrostatic forces. The summation of the inverted electrostatic forces, finally applied to the sensitive mass, has the same frequency ω0 as Vin. Then, the sensitive mass was driven by the electrostatic force. The relationship between the displacement of the sensitive mass and the electrostatic force can also be expressed as Equation (1). The change in the capacitance of the sense electrode pairs, which has an amplitude proportional to the movement of the sensitive mass, was converted to a voltage and then read out as Vout. Finally, the frequency response can be calculated as

H(ω) = Vout(ω)/Vin(ω),

where Vin(ω) and Vout(ω) are the Fourier transforms of the excitation signal vin(t) and the chip output signal vout(t).
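A minimal sketch of this frequency-response calculation, assuming full-period sampling so that the excitation frequency falls exactly on an FFT bin; the sampling rate and the synthetic "chip" response (a scaled, phase-shifted copy of the input) are placeholders rather than the actual hardware chain.

import numpy as np

fs = 51200.0                       # assumed sampling rate, Hz
t = np.arange(int(fs * 0.2)) / fs  # 0.2 s record -> 5 Hz bin spacing
f0 = 3000.0                        # single excitation frequency (SFE-style example)
vin = 2.0 * np.sin(2 * np.pi * f0 * t)

# Stand-in "chip": scale and phase-shift the input, as a second-order system would near resonance.
vout = 0.8 * 2.0 * np.sin(2 * np.pi * f0 * t - 0.5)

Vin = np.fft.rfft(vin)
Vout = np.fft.rfft(vout)
freqs = np.fft.rfftfreq(len(t), d=1 / fs)
k = int(round(f0 * len(t) / fs))   # FFT bin of the excitation frequency
H = Vout[k] / Vin[k]               # frequency response at f0
print(f"|H({freqs[k]:.0f} Hz)| = {abs(H):.3f}, phase = {np.angle(H):.3f} rad")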
The dynamic parameters such as the resonant frequency, resonant amplitude, and resonant phase were extracted from the frequency response. A single chip needs to be tested under four modes depending on the different input and output electrode (or axis) combinations: drive axis input-sense axis output, sense axis input-drive axis output, sense axis input-sense axis output, and drive axis input-drive axis output.
Principle of the MFE Method
In the current dynamic test procedure of gyroscope manufacturing, the designed resonant frequency of the chips is approximately 3000 Hz. The quality factor is approximately 200 and the half-power bandwidth is approximately 15 Hz. At the same time, the performance of the second-order vibrational mode's spectrum peak, which is normally at approximately 3200 Hz, is tested, therefore the scanned bandwidth is set as 600 Hz (frequency range: 2700-3300 Hz). A 1-Hz frequency resolution is required. To achieve a high test accuracy, full-period sampling is applied to prevent the picket-fence effect of the fast Fourier transform (FFT)-based spectrum analysis [13]. Furthermore, a response settling time is also considered to exclude the transient process. By adding the 0.2-s full-period sampling and 0.05-s response settling times, the sampling time for one frequency point test is 0.25 s. If the traditional SFE method is used, sine signals with 601 frequencies in the scanned bandwidth are applied individually. Therefore, a 2.5-min sampling time is required to complete a one-mode test, and at least a 10-min sampling time is required to finish a four-mode test of one chip. The test efficiency is obviously very low for a large-scale test.
According to the superposition principle of linear systems [13], if the signal vout,k(t) is the output corresponding to the input signal vin,k(t), k = 1, 2, ..., N, then the summed input signal vin(t) = Σ vin,k(t) produces the summed output vout(t) = Σ vout,k(t). In the MFE method, the 601 sine signals used in the SFE method are added together to generate a multifrequency signal before they are transmitted to the chip. The chip output will be the sum of all the single responses to each sine frequency input. Thus, through input- and output-signal spectrum analysis, the frequency response curve can be plotted using only one excitation and one sampling. To meet the 1-Hz frequency resolution requirement, the shortest full-period sampling time should be 1 s, which is the reciprocal of the frequency resolution. By adding a response settling time of 0.5 s, completing a one-mode test takes 1.5 s, and only 6 s is required to complete a one-chip test. Clearly, the test efficiency is dramatically improved through the MFE method presented in this paper.
Design of the Multifrequency Signal
The major challenge for the MFE method is generating an appropriate multifrequency signal. We desire that the input excitation signal possesses a high enough amplitude to yield a high output signal-to-noise ratio for each frequency. However, adding 601 high-amplitude sine signals together would make the total amplitude exceed the limitations of the data acquisition card output or the chip input, which will cause the actual excitation amplitude to be lower than the designed value or result in direct damage of the chip due to a high-voltage input.
The total amplitude of a multifrequency signal is determined by the amplitudes and initial phases of the total sum of the 601 sine signals. Therefore, the amplitude and initial phase distribution should be carefully designed. To simplify the signal control and analysis, the amplitudes of all frequencies were set to be equal. We studied three initial phase distributions, namely, the zero, random, and linear difference phases, as previously described in [14].
(1) If we adopt the zero phase, the initial phase at the kth frequency is φk = 0, k = 1, 2, ..., N, where N is the number of frequencies. The corresponding multifrequency signal can be expressed as v(t) = Σk A sin(2πfk t + φk), where A is the amplitude of each frequency, and fk is the frequency of the kth signal.
(2) If we adopt the random phase, the initial phase at the kth frequency is φk = 2π·rand, where rand is a uniform random number between zero and one; the corresponding multifrequency signal has the same summed-sine form as above. (3) If we adopt the linear difference phase, the initial phases are chosen so that the phase difference between adjacent frequencies varies linearly with k, as described in [14]; the corresponding multifrequency signal again has the summed-sine form. We simulated these three conditions of the initial phase distribution using Matlab. The amplitude of each frequency was 1 V. The first frequency was 2000 Hz, and the frequency interval was 5 Hz. The number of frequencies was varied from 1 to 1000. The signal duration time was 0.2 s. Figure 2 shows the maximum amplitude of the multifrequency signal as it varies with the number of frequencies for the three initial phase distributions. The multifrequency signal with the initial linear-difference-phase distribution clearly has the smallest total amplitude. Therefore, in this work, we adopted the initial linear-difference-phase distribution, and the corresponding total amplitude is approximately 30 V for N = 601. Typically, the amplitude of the sweep sine signal for a gyroscope chip test is 2 V; thus, the amplitude of each frequency in the multifrequency signal was set to 1/30 of 2 V, or approximately 0.065 V.
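A minimal sketch reproducing this kind of comparison in Python; the frequency grid (2000 Hz start, 5 Hz interval, 0.2 s duration) follows the simulation settings in the text, while the Schroeder-type quadratic phase below is only a stand-in for the paper's linear-difference-phase expression, which is not reproduced here.

import numpy as np

A, f0, df, duration, fs = 1.0, 2000.0, 5.0, 0.2, 51200.0
t = np.arange(int(fs * duration)) / fs

def multisine(n_freqs, phase_scheme, rng=np.random.default_rng(0)):
    k = np.arange(n_freqs)
    freqs = f0 + k * df
    if phase_scheme == "zero":
        phases = np.zeros(n_freqs)
    elif phase_scheme == "random":
        phases = 2 * np.pi * rng.random(n_freqs)
    else:  # Schroeder-type quadratic phase, a stand-in for the linear-difference phase
        phases = -np.pi * k * (k + 1) / n_freqs
    return sum(A * np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))

for scheme in ("zero", "random", "schroeder"):
    sig = multisine(601, scheme)
    print(f"{scheme:9s} max amplitude: {np.max(np.abs(sig)):6.1f} V")

For N = 601 the quadratic-phase signal peaks at roughly 30 V with 1-V components, in line with the value quoted above, while the zero-phase signal peaks roughly an order of magnitude higher.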
Test System for Realizing the MFE Method
Virtual instrument technology based on LabVIEW (Instrumentation software released by National Instruments, Austin, TX, USA) helps us realize a test system that is more efficient than traditional instruments and has been adopted for the dynamic testing of gyroscopes in several reports [2,5,8].
In the current study, the multifrequency signal was numerically programmed in LabVIEW and then transmitted to the chip through the NI 4461 data acquisition card, which has two output channels with a 24-bit output resolution and 204.8-kS/s maximum sampling rate. The excitation and response signals were synchronously input into the computer through the NI 4461 card whose input channels have a 24-bit input resolution and 118-dB dynamic range. This test system was previously used to carry out a dynamic test of the chips in the traditional SFE method, and its hardware was described in a previous article [12] by the corresponding author of this paper. The test system was modified very conveniently to use the new MFE method by changing only the system software without any hardware modification.
Results and Discussion
Performance verification of the MFE method and test system was carried out in a clean room where the gyroscope chips were assembled. Figure 3a and b show a computer-recorded multifrequency signal with a 600-Hz bandwidth (frequency range: 2700-3300 Hz) and its partial magnified image. Its corresponding amplitude and phase distributions, which are computed using the FFT-based spectrum analysis [13] without windowing and averaging, are shown in Figure 3c and d. The amplitude in Figure 3c is the peak voltage value of each frequency point. The maximum voltage of the signal is approximately 2 V, as designed.
Figure 4 presents the test results of a gyroscope chip with a 5-Hz frequency interval and a wide bandwidth from 2000 Hz to 5000 Hz. The number of frequencies is 601. The excitation signal was applied through its drive axis, and the output signal was read out from its sense axis (the drive axis input-sense axis output mode). When using the MFE method, the input signal is very similar to the waveform shown in Figure 3a. Figure 4a and b show the output signal of the gyroscope chip and its partial magnified image, respectively. In this case, the shortest full-period sampling time was set to 0.2 s. By adding a response settling time of 0.5 s, completing a one-mode test takes 0.7 s, and only 2.8 s is required to complete a one-chip test. The frequency responses of the gyroscope chip tested using the SFE and MFE methods are almost the same, as shown in Figure 4c and d. The only difference exists in the part of the bandwidth far away from the resonant frequency, where the values are unimportant for the accuracy test. At the frequency points far away from the resonant frequency, the amplitudes of the output signal are as low as approximately 15 μV. The signal-to-noise ratio there is very low, resulting in the error seen in Figure 4c.
The MFE method was applied 10 times to a randomly selected chip and compared with the traditional SFE method. The testing results of the sense axis input-sense axis output and drive axis input-drive axis output modes are listed in Table 1. It is observed that the test repeatability and accuracy for the resonant frequency and the corresponding amplitude and phase can meet the test requirements. The sense axis input-drive axis output and drive axis input-sense axis output modes were tested using the MFE method, and their results are similar to those of the other two modes. Excitation methods have long been known to be essential for the dynamic tests of mechanical, electronic, and electromechanical systems [15]. The principle of the MFE method is similar to the swept-sine-wave tests using SFE. The swept-sine-wave test continually varies the frequency of a sine wave and applies it to the tested system. In the MFE method, all of these sine waves with continually varying frequencies are first added together and then applied to the tested system at once. This allows for an efficiency improvement. At the same time, every sine wave component corresponding to those in the swept-sine-wave test is accurately sampled and processed, as a result of the well-designed test parameters. This is why the test accuracy can be maintained in such a rapid test. Therefore, the MFE method proposed in this paper can be used for rapid and accurate dynamic testing of micromachined gyroscope chips and in other similar electromechanical systems.
Conclusions
A novel MFE method has been proposed in this paper to replace the traditional SFE method for rapid and accurate dynamic testing of micromachined gyroscope chips. Using the proposed method, the processing time was reduced from 10 min to 6 s when testing one chip under four modes at a 1-Hz frequency resolution and 600-Hz bandwidth. The experimental results demonstrated that the MFE method can achieve the same repeatability and accuracy as the traditional SFE method. Thus, the proposed MFE method can potentially meet the urgent need for gyroscope chip filtering and pairing or packaged gyroscope calibration in large-scale gyroscope manufacturing. The MFE method can also be applied to the dynamic testing of other electronic and electromechanical systems with voltage-signal excitation. | 3,677.8 | 2014-10-01T00:00:00.000 | [
"Engineering"
] |
Geometric back-reaction in pre-inflation from relativistic quantum geometry
The pre-inflationary evolution of the universe describes the beginning of the expansion from a static initial state, such that the Hubble parameter is initially zero, but increases to an asymptotic constant value, in which it could achieve a de Sitter (inflationary) expansion. The expansion is driven by a background phantom field. The back-reaction effects at this moment should describe vacuum geometrical excitations, which are studied with detail in this work using Relativistic Quantum Geometry.
I. INTRODUCTION
The inflationary model is a well-tested description of how the universe can provide a physical mechanism to generate primordial energy density fluctuations on cosmological scales [1], below Planckian scales. During this stage, the primordial scalar perturbations seeded the large-scale structure, which then gradually formed today's galaxies. This is being tested in current observations of the cosmic microwave background (CMB) [2]. These fluctuations are today larger than a thousand times the size of a typical galaxy, but during inflation they were very much larger than the size of the causal horizon [3]. According to this scenario, the almost constant potential of a scalar field φ minimally coupled to gravity, called the inflaton, caused the accelerated expansion of the very early universe. During this epoch, the potential energy density was dominant, so that the kinetic energy can be neglected. This is known as the slow-roll condition for the inflaton field dynamics. In this framework the problem of nonlinear (scalar) perturbative corrections to the metric has been studied in [4].
Geometrodynamics [5] is a picture of general relativity that studies the evolution of the space-time geometry. The significant advantages of geometrodynamics usually come at the expense of manifest local Lorentz symmetry [6]. During the 1970s and 1980s, a method of quantization was developed in order to deal with some unresolved problems of quantum field theory in curved spacetimes [7]. In this context, we have recently introduced a new, non-perturbative method to study the scalar perturbations of the metric [8], called Relativistic Quantum Geometry (RQG). This formalism serves to describe the dynamics of the geometric departure from a background Riemannian spacetime with the help of a quantum geometrical scalar field [9][10][11]. The dynamics of the geometrical scalar field is defined on a Weyl-integrable manifold that preserves the gauge-invariance under the transformations of the Einstein equations that involve the cosmological constant. Our approach is different from quantum gravity. The natural way to construct quantum gravity models is to apply quantum field theory methods to the theories of classical gravitational fields interacting with matter; in spite of numerous efforts, the general problems of quantum gravity still remain unsolved. Our approach is different because our subject of study is the dynamics of the geometrical quantum fields. This dynamics is obtained from the Einstein-Hilbert action, and not by using the standard effective action employed in various models of quantum gravity [12]. There are no non-linearities or higher-derivative problems in the dynamical description, so our formalism is much easier to apply to different physical systems, like inflation [8] or pre-inflation. This primordial epoch is of particular interest in cosmology and deserves a detailed study. Presently, we cannot completely understand the first epoch of the universal evolution: how did the universe begin to expand, and what was the first stage of this evolution? The theory that describes this epoch is called pre-inflation [13].
The existence of a pre-inflationary epoch with fast-roll of the inflaton field would introduce an infrared depression in the primordial power spectrum. This depression might have left an imprint in the CMB anisotropy [14]. It is supposed that during pre-inflation the universe began to expand from some Planckian-size initial volume and thereafter passed to an inflationary epoch. In this framework RQG should be very useful for studying the evolution of the geometrical back-reaction effects, given that we are dealing with Planckian energy scales and back-reaction effects should be very intense at these scales.
In this work we shall describe a model in which the universe begins to expand from nothing (the initial energy density is null). The Hubble parameter is initially zero and thereafter increases to an asymptotic constant value, at which it could achieve a de Sitter (inflationary) expansion. In our model the background expansion is driven by a phantom scalar field φ, for which the equation of state of the universe during pre-inflation is P_pi/ρ_pi < -1 [15]. The back-reaction effects at this moment should describe vacuum geometrical excitations, which are the main subject of study in this work.
II. RELATIVISTIC QUANTUM GEOMETRY: THE STRUCTURE OF SPACE-TIME IN AN EXPANDING UNIVERSE
We shall consider a metric tensor in the Riemannian manifold with null covariant derivative (we denote with a semicolon the Riemannian covariant derivative): Δḡ_αβ = ḡ_αβ;γ dx^γ = 0, such that the Weylian [16] covariant derivative, ḡ_αβ|γ = θ_γ ḡ_αβ, described with respect to the Weylian connections 1, is nonzero. In the case of an expanding universe, the Riemannian manifold will be described by the background geometry characterized by a FRW metric. Of course, all the variations with respect to the expanding background are in the Weylian geometrical representation. As was demonstrated in [11], the Einstein tensor can be written in terms of the geometrical field, and one obtains the semi-Riemannian invariant (the cosmological constant) Λ. Notice that Λ(θ, θ_α) is a Riemannian invariant, but not a Weylian invariant. Hence, one can define a geometrical Weylian quantum action W = ∫ d^4x √-ḡ Λ(θ, θ_α), such that the dynamics of the geometrical field, after imposing δW = 0, is described by the Euler-Lagrange equations. The canonical momentum components are Π^α ≡ -(3/4) θ^α, and the relativistic quantum algebra is given in [11] in terms of the Riemannian components of the velocities Ū^α.

1 To simplify the notation we shall denote θ_α ≡ θ_,α, where the comma denotes the partial derivative.
Furthermore, we shall denote with a bar the quantities represented on the Riemannian background manifold.
III. PRE-INFLATION AND BACK-REACTION
One of the most important paradigms in cosmology consists in providing an explanation of the initial moment of the expansion of the universe. This implies providing a model of how the universe began its expansion before the inflationary accelerated expansion, which has a Hubble parameter very close to a constant and P_i/ρ_i ≈ -1. A possible scenario is pre-inflation, in which the Hubble parameter is initially null and thereafter increases to an asymptotic constant value. During the beginning of the expansion the universe has an equation of state with P_pi/ρ_pi < -1, which implies that the expansion is driven by a phantom scalar field φ minimally coupled to gravity. During pre-inflation the action is that of gravity plus the scalar field, where κ = 8πG, G is the gravitational constant, √|ḡ| = a^3(t) is the volume of the manifold, and the components of the metric tensor are diagonal.
With the aim of describing pre-inflation, we shall use λ = -1, which describes the dynamics of a fast-rolling phantom field. However, this epoch would be followed by an inflationary expansion driven by the slow-rolling inflaton field, for which the dynamics is obtained when λ = 1. Here, φ(t) is the background solution that describes the dynamics on an isotropic and homogeneous background metric that characterizes a semi-Riemannian manifold.
where V (φ) is the potential and the prime denotes the derivative with respect to φ. The semi-Riemannian (background) Einstein equations arē where the components of the background stress tensor areT αβ = δL δḡ αβ −ḡ αβL . For a background FRW metric the Einstein equations result − 3H 2 + 2Ḣ = κ P pi = κ λφ From the two Einstein equations we obtain thatφ = − 2H κλ , and the time dependent potential can be written as a function of the Hubble parameter and its time derivative: This expression can be re-written taking into account the φ-dependence A. The pre-inflationary model with a phantom field We consider a model in which the Hubble parameter which is initially zero, and tends where the cosmological constant is related to H 0 : Λ = 3H 2 0 . The scale factor of the universe during this stage is with a 0 = H −1 0 . Notice that this solution describes an universe in whichḢ > 0. In other words, the model describes an universe which begin to expanding since an initial scale factor a(t = 0) ≡ H −1 0 . Furthermore, the Hubble parameter increases super-exponentially from a null value to an asymptotically constant value. The scalar potential can be written as a function of t so that for sufficiently large times, we obtain that V (t)| t→∞ → 3H 2 0 κ . From the Einstein equations (10) and (11), we obtain the time dependence ofφ Using the fact that V =Vφ in the equation of motion (8), we obtain the time dependence of the background scalar field where 0 ≤ φ ≤ π 2 √ κ . Notice that the phantom field increases during pre-inflation. Therefore, if we use this expression in the equations (14) and (16), we obtain the φ-dependence of the Hubble parameter and the scalar potential κ . Notice that ρ pi (t = 0) = 0, so that in this model the universe is created from nothing.
B. Back-reaction effects in pre-inflation
The geometrical scalar field θ can be expressed as a Fourier expansion where A † k and A k are the creation and annihilation operators. From the point of view of the metric tensor, an example in power-law inflation can be illustrated by where the scale background scale factor a(t) is given by (15). The quantum volume of the manifold described by (22) The dynamics for θ is governed by the equationθ and the momentum components are Π α ≡ − 3 4 θ α , so that the relativistic quantum algebra is given by the expressions (6) with co-moving relativistic velocities U 0 = 1, U i = 0.
Furthermore, as was calculated in a previous work [8], the variation of the energy density fluctuations is given by the expression such thatθ ≡ B|θ 2 |B 1/2 . To understand what is the line element S in a quantum context, we can define the operatoř where b † k and b k are the creation and annihilation operators of spacetime, such that βγδ are the Levi-Civita symbols. In this framework, we must understand that the exact differential related to a is the eigenvalue that results when we apply the operator δx α (x β ) on the background quantum state |B , defined as a Fock space on the Riemannian manifold. The Weylian line element is given by Hence, the differential Weylian line element dS provides the displacement of the quantum trajectories with respect to the "classical" (Riemannian) ones: dS 2 =ḡ αβ dx α dx β .
C. Quantization of modes
The equation of motion for the modes ξ k (t) is The annihilation and creation operators B k and B † k satisfy the usual commutation algebra Using the commutation relation (29) and the Fourier expansions (21), we obtain the normalization condition for the modes. For convenience we shall re-define the dimensionless where the asterisk denotes the complex conjugated. The general solution for the modes where Hn[a, q; α, β, γ, δ; z] = ∞ j=0 c j z j is the Heun function. Since the Heun functions are written as infinite series, we can make a series expansion in both sides of (30), in order to obtain the restrictions for the coefficients C 1 and C 2 , and the wavenumber values k. The , can be written as a series expansion where f N (k (N ) n ) = 0, for each N . To simplify the notation we are denoting the τ -derivative with a prime. There are 2N modes for each N -th order of the expansion, which cames from the roots of each equation. These roots provide us with the discrete quantum modes becoming from the quantization of θ. From the zeroth order of the expansion (in τ ), we obtain that C 2 = −i C 1 /2. Hence, we shall choose C 1 = 1 results that C 2 = −i/2 in the general solution (31). The first eight terms of the series, are From each k-dependent polynomial we obtain the roots, which provide us the permitted modes that guarantee the quantization of θ. There are infinite discrete permitted modes.
The expectation value forθ 2 on the quantum state |B calculated on the background semi-Riemannian hypersurface, is such that (ξ kn (τ )) is τ -derivative of ξ kn evaluated at k = k n : (ξ k (τ )) k=kn and k n 2 , are the complex roots of the polynomials f N (k) = 0, in (33). The expression (34) can be alternatively written for each mode k n , as which takes into account the contribution of each k n -mode in B|θ 2 |B .
We see that the first modes have roots in k 1,2 = ± √ 2 i. The modes for these roots have the same contribution in the expression for B|θ 2 |B In the figure (1) we were drawn the contributions of the modes k 1 (red), k 3 (blue), k 5 (black) and k 7 (green), to B|θ 2 |B kn , for a 0 = G 1/2 . Notice that all the contributions tends asymptotically to zero for few Planckian times (t p 10 −43 sec). In other words, the excitations of the background (i.e., the Riemannian vacuum), are significative at the moment of the big bang, but decrease to zero whenḢ/H 2 → 0. This corresponds just to the approximation to the de Sitter (inflationary) regime.
IV. FINAL COMMENTS
We have studied back-reaction effects in a pre-inflationary universe using RQG. This formalism makes possible the nonperturbative treatment of the vacuum fluctuations of the spacetime, by making a displacement from a semi-Riemannian to a Weylian one. In this framework the Einstein equations are exactly valid on the Riemannian manifold, but the quantum effects are described on the Weylian one by the field θ. In the Weylian manifold the cosmological constant is not an invariant, but a Lagrangian density Λ(θ, θ α ) with which we define the quantum action W. The dynamics of the geometrical field θ is that of a free scalar field and describes the dynamics of the geometrical quantum fluctuations with respect to the Riemannian (classical) background. When we apply this formalism to the pre-inflationary scenario, which describes the beginning of the universal expansion from nothing, we obtain that the modes of the geometrical field θ are discrete, but infinite in number. The contribution of some of these to the variation of energy density, were drawn in the figure (1). Notice that all the modes's contributions become asymptoticaly null for τ 1. Other distinctive characteristic of these modes is that they are not unstable, as in the case of the modes of θ during inflation [8]. A subsequent study on how to describe the transition from pre-inflation to inflation remains pendent. This issue will be the subject of study in a future work. | 3,606.4 | 2016-03-11T00:00:00.000 | [
"Physics"
] |
Black hole Skyrmion in a generalized Skyrme model
We study a Skyrme-like model with the Skyrme term and a sixth-order derivative term as higher-order terms, coupled to gravity and we construct Schwarzschild black hole Skyrme hair. We find, surprisingly, that the sixth-order derivative term alone cannot stabilize the black hole hair solutions; the Skyrme term with a large enough coefficient is a necessity.
Introduction
Black holes are generally believed to be characterized by two properties at asymptotically far distances, namely their masses and their global charges. This is known is the weak nohair conjecture. The first stable counter example to this no-hair conjecture was made in the framework of the Skyrme model, i.e. a black hole with Skyrme hair [1][2][3][4][5][6][7] (for a review, see ref. [8]). The Skyrme model is a scalar field theory based on the chiral Lagrangian with the addition of a quartic derivative term, which is like a curvature term on the internal (target) space of the model [9,10]. In flat space the Skyrmion is a map from space (R 3 ) to an SU (2) target space, which is characterized by π 3 (S 3 ), giving rise to topological solitons; i.e. the Skyrmions. As implied by the fact that they are topological, their charges -called baryon charges -are integers in flat space. The Skyrmion with a black hole is interpreted as a black hole with scalar hair and the asymptotic behavior of the Skyrmion is very similar to that in flat space. Near the black hole the Skyrmion is deformed, nevertheless. Now when the black hole is formed with the Skyrmion surrounding it, the Skyrmion loses a fraction of its charge; this happens due to the fact that the profile function of the Skyrmion, which normally "winds" π to complete the 3-cycle, only "winds" π − . When the black hole horizon -and hence its mass -becomes larger than a certain critical value then the Skyrmion ceases to exist and the Skyrme hair becomes unstable.
Another twist to the Skyrmion solution in flat space is that when it is the hair of a black hole, two branches of solutions (fixed points of the action) open up [1]; one of these two branches of solutions contains, however, unstable Skyrmion solutions. The two branches bifurcate at the above mentioned critical mass or horizon radius, beyond which no stable solution exists. If we pick a point on the stable branch and take the limit of the black hole mass going to zero, then the solutions converge smoothly to that of the flat space. If we now pick a point on the unstable branch, the answer depends on whether the gravitational coupling is turned on or not; if it is turned on -which is tantamount to the gravitational backreaction being taken into account -then the conclusion remains the same; the solution converges to that of flat space. If the gravitational coupling is turned off, however, then the solution becomes discontinuous and ceases to exist -the limit is hence not well defined.
JHEP09(2016)055
Apart from the seminal result of Luckock et al. (ref. [1]), and the papers that followed; other variants of the Skyrme black hole hair system have been studied in the literature. The most natural generalization is to turn on a nonvanishing cosmological constant; in refs. [11,12] and [13] the black hole Skyrme hair was ported to anti-de Sitter and de Sitter spacetimes, respectively. The late-time evolution of the radiation emitted from the black hole with Skyrme hair was studied in refs. [14,15]. Gravitating sphalerons in the Einstein-Skyrme model have been constructed in ref. [16]. Quantization of collective coordinates in the Skyrmion black hole was carried out [17]. Another natural generalization of the black hole Skyrmion system, is to consider the Skyrmion surrounding the black hole to have a higher charge (winding); a particular class of axially symmetric solutions has been found in refs. [18,19] and quantization of collective coordinates was considered in such systems as well [20]. Recently, it has been contemplated that black holes do not necessarily violate the baryon number when the possibility of black hole Skyrme hair is taken into consideration [21].
An interesting question is: what is the foundation of the stabilizing mechanism of the black hole hair? If we turn off the Skyrme term, the scalar hair is not stable. In light of recent developments in the Skyrme model, which was motivated by a completely different effect -namely the large binding energy of the multi-Skyrmion, being too large for the Skyrmions to be interpreted as nuclei -a sixth-order derivative term has been introduced [22,23] and this model has been dubbed the BPS-Skyrme model (neutron stars have been studied in the framework of the BPS-Skyrme model [24,25] and we have recently found gravitating analytic and numerical Skyrmion solutions in the BPS-Skyrme model [26]). For the story of the binding energy, the sextic term has the interesting property that a saturable BPS bound exists in the subset of the model containing only said sextic term as well as a potential term. Using Derrick's theorem [27], any higher-order derivative term can stabilize the Skyrmion solution in flat space, by balancing the pressure with respect to that of the kinetic (Dirichlet) term and/or the potential. As for the hair of the black hole, however, it is far less trivial which kind of terms can stabilize the black hole hair. One may naively think that we can substitute the Skyrme term with the sixth-order derivative term and retain a similar black hole hair solution. Our findings, however, suggest otherwise. Although we can add the sextic term to the model and have stable black hole hair; the Skyrme term with a positive coefficient is a necessity. This is the main result of our paper.
Another result found in this paper concerns the unstable branches mentioned above. When the gravitational coupling is turned on in the Skyrme model without the sextic term -corresponding to taking gravitational backreaction into account -then the unstable branches of solutions smoothly converge back to the flat space Skyrmion solution in the limit of vanishing black hole size. Once we turn on the sextic term (sixth-order derivative term), there is a small, but finite, critical value for the coefficient of said term, for which the unstable branches end at a finite horizon radius: hence the limit is not smooth.
The paper is organized as follows. Section 2 introduces the model and the governing equations for the black hole in a Schwarzschild metric with Skyrme(-like) hair. Section 3 presents the numerical results and finally section 4 concludes with a discussion.
JHEP09(2016)055 2 The model
The model is a nonlinear sigma model of Skyrme-type with higher-derivative terms up to sixth order, coupled to gravity and the action reads where the n-th order Lagrangians are given by 3)
4)
where L µ ≡ U † ∂ µ U is the left-invariant current, U = σ1 2 + iπ a τ a , a = 1, 2, 3 is the Skyrme field with the constraint det U = 1, g is the determinant of the metric and we are using the mostly-negative signature of the metric. L 2 is the standard kinetic term, L 4 is the Skyrme term and L 6 is the baryon current density squared, which is inspired by the BPS Skyrme model [22]. In the remainder of the paper, we will use the terminology 2 + 4 model : 2 + 4 + 6 model : The symmetry of L S for V = 0 isG = SU(2) L ×SU(2) R acting on U as U → U = g L U g † R and thus L µ is manifestly covariant. Finite energy configurations require that U asymptotically takes on a constant value, e.g. U = 1 2 . Hence in the vacuumG is spontaneously broken down toH SU(2) L+R , which in turn acts on U as U → U = gU g † . The target space is thereforeG/H SU(2) L−R .
For concreteness we will use the potential which is sometimes called the modified pion mass term [28][29][30][31], see also refs. [32][33][34][35][36][37]. This potential breaksG to SU(2) L+R explicitly. The sixth-order term is written in a way where it is manifest that it is the baryon current squared. It is however slightly easier to work with the term after rewriting it in the following form [38] In this paper we will consider the Schwarzschild metric which is appropriate for studying a single Skyrmion -which is also spherically symmetric -for which we choose the so-called hedgehog Ansatz wherex is a spatial unit vector and a = 1, 2, 3. Using the hedgehog Ansatz, we can write the static mass of the Skyrmion as where f r ≡ ∂ r f and r h is the horizon radius. The mass of the Skyrmion is the energy density integrated from the horizon to infinity. We will now change variables to the dimensionless coordinate ρ ≡ c 0 c 2 r and rescale the coefficients c 4 → have rescaled the coordinates by the coefficient of the mass term, we insert a dimensionless mass, δ which can take the value 0 or 1. If δ = 0 then c 0 is still the unit of the would-be mass (c 0 can never vanish). In the case of δ = 0, c 0 can be adjusted such that c 4 = 1. We can now write the mass as follows where the dimensionless horizon radius is ρ h = c 0 c 2 r h and we define c 4 , c 6 and δ = 0, 1 are now dimensionless parameters. The baryon current is and integrating the time component of this gives the baryon charge and thus the total baryon charge B is less than unity for any f (ρ h ) < π (we have used the asymptotic boundary condition f (∞) = 0).
JHEP09(2016)055
The equation of motion for the Skyrme field profile f is given by The energy-momentum tensor can readily be calculated as Writing out the nonzero components, we have We are now ready to obtain the Einstein equations and defining α ≡ 8πGc 0 , we can write down the resulting equations by taking suitable linear combinations We can eliminate the field, N , by inserting eq. (2.24) into eq. (2.18) and simplify the coefficient of f ρ by using eq. (2.25). The resulting system of equations is then given by In order to find numerical solutions to the system of equations (2.26) and (2.25), it will be convenient to use a shooting method for ordinary differential equations (ODEs). For that we need boundary conditions at the horizon (at ρ h ) with a shooting parameter as well as boundary conditions at infinity. The boundary condition at the horizon is and hence µ(ρ h ) = ρ h /2. f (ρ h ) = f h is the shooting parameter and by taking the limit ρ → ρ h of eq. (2.26), we get (2.28) Now we need to evaluate C ρ at the horizon which follows straightforwardly from eq. (2.25). Summarizing, we have the boundary conditions at the horizon while at infinity they are f (∞) = 0, µ ρ (∞) = 0, (2.33) where the second condition follows from the first and corresponds to i.e., the metric is asymptotically Schwarzschild. In total there are exactly three boundary conditions on our system (which is what is necessary) and one shooting parameter. Note that the first derivative of the Skyrmion profile function, f ρ , is negative at the horizon (as it should be), only when is positive, because sin 2f h is negative for f h ∈ ( 1 2 π, π), which is the relevant range of the shooting parameter. When Ξ vanishes, the first derivative is not defined on the horizon and solutions cease to exist.
JHEP09(2016)055 3 Numerical solutions
We will now pursue finding numerical solutions to the black hole Skyrmion system. Eq. (2.28) implies that the coefficient of f ρρ vanishes at the horizon, which is problematic for a shooting algorithm, as we would like to make a dynamic system of equations as where M is some matrix (functional); the second row of the right-hand side is defined by eq. (2.26) and the last row by eq. (2.25). However the right-hand side of the second row is not well defined at the horizon (the coefficient of f ρρ in eq. (2.26) vanishes at the horizon). Therefore we start the shooting from a very small radius ρ and calculate the values of the fields at ρ h + ρ as where µ ρ (ρ h ) is given by eq. (2.29). This is a good approximation if ρ is extremely small. In the numerical calculations we have found that ρ 10 −5 is small enough for allowing for the linear approximation and large enough to start the shooting algorithm.
From ρ h + ρ we employ a standard fourth-order Runge-Kutta method to integrate the equations (2.25)-(2.26) up to an appropriately chosen cutoff.
We are now ready to present the numerical results. We start by reproducing the well-known results in the 2+4 model, which is simply the standard Skyrme model with the addition of the modified pion mass. After rescaling we have two free parameters: the gravitational coupling α and c 4 . Actually, if we were to consider the model without the potential term, then the rescaling would eliminate c 4 instead of the potential parameter (pion mass). 1 In figure 1 we show the stable and unstable branches in black solid lines and dashed red lines, respectively, and for all combinations of vanishing/nonvanishing gravitational coupling and vanishing/nonvanishing potential parameter δ (recall that δ after rescaling, can only take the values 0 or 1). The first thing we note is, as explained in the introduction, that with the gravitational coupling turned on, the unstable branch smoothly converges back towards the flat space Skyrmion solution as the horizon radius ρ h is sent to zero; whereas in the case of vanishing gravitational coupling it does not. The unstable branch continues down in the f h direction and in the limit of vanishing ρ h the solution is discontinuous and ceases to exist [1].
We now turn on a positive value for the coefficient of the sextic term, c 6 > 0; this corresponds to the 2+4+6 model. A priori one would not expect substantial differences with respect to the 2+4 model discussed above, because in flat space the soliton solutions (in this JHEP09 (2016) parameter range; that is, when c 4 is of the same order of magnitude as c 6 or larger) are quite similar [36,37,39]. However, by inspection of figure 2 tational backreaction for decreasing horizon radius, ρ h . However, long before reaching the limit of vanishing black hole size, the unstable branches come close to the Ξ = 0 line in the diagram, where Ξ is defined in eq. (2.35). This phenomenon happens both with and without the potential term. The Ξ = 0 line is defined by the position of the pole in the first radial derivative of the Skyrmion profile function f ρ (ρ h ) at the horizon radius, and corresponds to a vanishing Hawking temperature. It is intuitively clear that when the Skyrmion profile blows up at the black hole horizon, no continuous stable black hole hair solution exists. It is also clear that if the Hawking temperature vanishes for a finite black hole mass, then the entropy would have to blow up; this should not happen in a physical system and hence it signals an instability of the black hole hair. We observe from figure 2 that the unstable branch of solutions ceases to exist slightly before Ξ = 0, but quite close to this line in the diagram.
As we know that in the 2+4 model, the unstable branch moves upwards in the (ρ h , f h ) phase diagram for decreasing horizon radius ρ h , there should be some critical value of c 6 for which the unstable branches start to end at a finite horizon radius ρ h > 0. We therefore consider taking the limit of c 6 → 0 and see when the unstable branches start to exist in the limit of ρ h → 0. Figure 3 shows the phase diagram with only the unstable branches for various values of the coefficient of the sextic term, c 6 , and we can see from the figure that for c 4 = 1, δ = 0, the critical value of c 6 is between 0.025 and 0.03.
In figure 4 we show the same physics, but in terms of c 6 and f h ; the different curves depict various horizon radii, ρ h . This figure clearly shows that if we turn off the sextic term, c 6 = 0, then all (the shown) horizon radii have solutions. Curiously, the lines open up in the limit of ρ h → 0 and allow for bigger c 6 than for example ρ h = 0.03, which is the most restricting radius in the diagram. From figure 3 we estimated that the critical value of c 6 for which the unstable branch ends at a finite horizon radius, ρ h > 0, is about 0.025-0.03; while from figure 4, we can confirm that it is slightly less than 0.03, so in accord with the previous estimate.
Our final numerical investigation considers turning off the Skyrme term for fixed coefficient of the sextic term c 6 = 1. figure 5b where it is clear that the black hole Skyrme hair will cease to exist when the Skyrme term is turned off. The biggest difference between the two panels lies in the unstable branches, because in the 2+4 model the unstable branches return to the flat-space Skyrmion when the black hole horizon radius is sent to zero (ρ h → 0). Fixing c 4 we can read off the f h branch as function of the horizon radii by looking at a vertical line in the figure. Following a fixed horizon radius, ρ h , we can see at which value of the Skyrme term coefficient, c 4 , the solutions cease to exist. Interestingly -and this is one the main findings of this paper -all solutions cease to exists in the limit of c 4 → 0, even though we have the sextic term turned on. We can also see that the unstable branches cease to exist JHEP09(2016)055 quite before the stable branches (in the case of c 6 = 1, see figure 5a). We can physically understand that the bifurcation point -which is the maximal size of black hole that can support the Skyrme hair -simply goes to zero in the limit of c 4 → 0 for fixed c 6 = 1; we can perhaps say that the black hole eats the Skyrme hair if there is no Skyrme term turned on.
Discussion and conclusion
In this paper we have considered Schwarzschild black holes with Skyrme hair in a Skyrmelike model with the addition of a sixth-order derivative term as well as a potential term. We first reproduce the expected branches of solutions in the (ρ h , f h ) phase diagram for the Skyrme model with the sextic term turned off. Then turning on the sextic term, we find that the unstable branches are modified and end at finite horizon radii -beyond which no unstable solution exists, for all but very small values of the coefficient of the sextic term. This is due to the unstable branches coming too close to a line in the phase diagram (Ξ = 0) where the Hawking temperature vanishes. Furthermore, we find the quite surprising result that there is no stable or unstable Skyrme hair for any black holes -with or without the potential -in the limit of vanishing coefficient of the Skyrme term, c 4 → 0. This unexpected result implies that, although higher-derivative terms can stabilize Skyrmions in flat space, the sixth-order derivative term cannot stabilize the Schwarzschild black hole hair. In ref. [26] we observed that there are no black holes in the BPS-Skyrme submodel; i.e. in the case without a kinetic term and without the Skyrme term. This observation was made in the case of a particular potential and so it was not clear whether Skyrme hair solutions in the 2+6 model would be stable or not. Now we have the answer in the negatory. This interesting result begs for the question: under what circumstances does black hole hair exist? It is quite unlikely that any potential of any type will alter this conclusion as potentials tend to collapse the Skyrmions and the black hole already does that without the Skyrme term present -even with the sixth-order derivative term. Let us note that for small values of the Skyrme term coefficient (small c 4 ), the unstable branches come too close to the line in the phase diagram where the Hawking temperature vanishes. Furthermore, we observe that the critical point moves to smaller and smaller black hole horizons for decreasing values of c 4 and eventually leads to a vanishing black hole size even for the stable branches. This in turn means that the stable branch is approaching the line where the Hawking temperature is vanishing. Let us also note that the sixth-order derivative term itself -without the presence of the second-order kinetic term -leads to a theory described by a perfect fluid [40]. Combining these two facts, we can intuitively understand that the black hole Skyrme hair with a sixth-order derivative term -possessing the properties of a perfect fluid -cannot withstand the gravitational attraction: the black hole Skyrme hair collapses.
It is not clear at this point if all higher-order derivative terms higher than fourth order will be unable to stabilize black hole Skyrme hair. Higher-order terms may not yield a theory with the properties of a perfect fluid. We will leave this question for future studies. We conjecture that the sixth-order derivative term, due to its properties of a perfect fluid, is the only higher-order derivative term leading to a second-order radial equation of motion which cannot stabilize the black hole Skyrme hair. The proof thereof awaits to be found. | 5,213.6 | 2016-09-01T00:00:00.000 | [
"Physics"
] |
The Synergistic Biologic Activity of Oleanolic and Ursolic Acids in Complex with Hydroxypropyl-γ-Cyclodextrin
Oleanolic and ursolic acids are natural triterpenic compounds with pentacyclic cholesterol-like structures which gives them very low water solubility, a significant disadvantage in terms of bioavailability. We previously reported the synthesis of inclusion complexes between these acids and cyclodextrins, as well as their in vivo evaluation on chemically induced skin cancer experimental models. In this study the synergistic activity of the acid mixture included inside hydroxypropyl-gamma-cyclodextrin (HPGCD) was monitored using in vitro tests and in vivo skin cancer models. The coefficient of drug interaction (CDI) was used to characterize the interactions as synergism, additivity or antagonism. Our results revealed an increased antitumor activity for the mixture of the two triterpenic acids, both single and in complex with cyclodextrin, thus proving their complementary biologic activities.
An interesting theory was formulated by Takada et al. in 2010, regarding their different capacities to block TNF-α-induced E-selectin expression; according to their study, these differences are due to conformational differences (the stable conformation of rings in UA is a twist-chair-twist-chair form, and that of OA is chair-chair), caused by the position of one methyl group (C-29) at C-19 in UA and C-20 in OA [6].
OA and UA were found as effective anti-hepatoma agents with marked anti-cancer activity [7]; they also show protective effects against H 2 O 2 -induced DNA damage in leukemic L1210, K562 and HL-60 cells as well as significant antioxidant effects [8].
More recently, a synergistic antimicrobial activity was reported for the two acids, accompanied by an immunostimulatory effect [9]. Another case of synergism was reported in 2010 in the case of oleanolic acid and insulin in STZ-induced diabetic rats [10]. Another research group reported the protective effect of a mixture of OA and UA against colon cancer [11].
Chemical carcinogenesis is a multi-stage process that begins with exposure, usually to complex mixtures of chemicals that are found in the human environment [12,13], that can be divided conceptually into four steps: tumor initiation, tumor promotion, malignant conversion and tumor progression [14]. 7,12-Dimethylbenz(a)anthracene (DMBA) is an immune-suppressor and a powerful organ-specific carcinogen used as a tumor initiator while tumor promotion can be induced by applying 12-O-tetradecanoyl-phorbol-13-acetate (TPA) in some models of two-stage carcinogenesis [15].
The pentacyclic triterpenes present a bulky non-polar structure and, consequently, very low water solubility. One of the most studied pathways in solving the solubility problem is the synthesis of cyclodextrin (CD) inclusion complexes [16].
The aim of our research was the study of the synergistic antitumor effect of the two triterpenic acids; cyclodextrin complexes of OA, UA and OA/UA mixture, respectively, were used for in vivo studies, in order to achieve the necessary water solubility. Based on previous studies [17] 2-hydroxypropyl-γ-cyclodextrin (HPGCD) was chosen as host molecule for the triterpenic acids and their mixture.
Results and Discussion
Viability and proliferation assay with Alamar Blue (AB) is based on the evaluation of mitochondrial activity of living cells which reduce resazurin, a dark blue compound with an intrinsic fluorescence, to resorufin, a pink and highly fluorescent compound (579 extinction/584 emission). Maximum absorbencies appear at 605 nm and 573 nm for resazurin and resorufin, respectively (according to the test manufacturer's protocol). Figure 2 shows cells' viability in A375 and A2058 cell lines after 48 h exposure to different concentrations of ursolic acid. Ursolic acid exhibited an antiproliferative effect in a dose-dependent manner. The IC 50 of UA in A375 and A2058 human melanoma cell lines was 75 µM and 60 µM, respectively. At low concentrations (40 µM, 50 µM) UA did not show any cytotoxic activity neither in A375 nor in A2058 cell line while at higher concentrations (85 µM and 100 µM) a stronger antiproliferative response to UA exposure was found for the A375 cell line; in case of a medium concentration (60 and 75 µM) the strongest antiproliferative effect was found on A2058 line. Incorporation of this pentacyclic triterpene in HPGCD seems to keep up the dose-range effect of the pure active compound on both cell lines, showing the same behavior, with a slightly increased activity for certain concentrations (Figure 2b), and, except for one case, without statistical significance. The A375 human melanoma cell line stands as the exceptional case, where a significant increased activity can be noticed for the complex concentration of 85 µM (p = 0.046). This behavior is also valid in case of A2058 human metastaic cell line (p = 0.048).
As shown in Figure 3, after 48h exposure at oleanolic acid, cells viability was less than 30% of the control, for both cell lines (27% in A375 and 22% in A2058), decreasing with the concentration. Based on our previous studies on A375 human melanoma cell line, which showed a lower IC50 (between 50 and 75 µM) for the ursolic acid, we chose to use higher concentrations of oleanolic acid than the ones used in case of ursolic acid [18]. The IC 50 of OA in A375 and A2058 human melanoma cell lines was 75 µM and 60 µM, respectively. After the incorporation of the oleanolic acid in HPGCD the same observations depicted above for the ursolic acid were valid, the cyclodextrin complexation leading to a slightly increased antiproliferative activity. Significant results were found in case of A375 cell line, starting from the concentration of 100 µM as follows: p = 0.047 for 100 µM; p = 0.046 for 150 µM; p = 0.043 for 200 µM. Significant results were found when the A2058 cell line and concentrations of 100 µM (p = 0.046) and 150 µM (p = 0.048) were used. The most significant results in terms of inhibiting cells viability in A375 and A2058 cell lines after 48 h were obtained in the case of exposure to different concentrations of 1:1 UA:OA mixture ( Figure 4). It was a dose-dependent antiproliferative effect, where the IC 50 of the mixture on A375 and A2058 human melanoma cell lines appear at a concentration of 60 µM. Cytotoxic activities also appeared at low concentrations (40 µM, 50 µM), but the strongest antiproliferative effect on both cells lines was achieved at 85µM and 100 µM. The use of the equimolar mixture of triterpenic acids incorporated in HPGCD led to the same behavior previously described for the pure compounds. A slightly increased activity can be seen for the A375 cell line, with no significant relevance, except for the concentration of 75 µM (p = 0.039). For the A2058 cell line the exception occured at the concentration of 100 µM (p = 0.037). Similar results were obtained by an in vitro research on A2058 and A2780 cell lines, but the study only compared the individual cytotoxic activity of these triterpenic acids [19].
Analyzing CDI values one can notice that in both cases, with or without cyclodextrin complexation, a synergistic behavior of the two triterpenic acids was recorded (CDI < 1) (Figures 5 and 6); moreover, for some concentrations (e.g., 85-100 μM) applied on A375 cell line, the CDI value is very close to 0,7 which reveals a significant synergistic effect. Cyclodextrin complexation preserves this behavior and improves water solubility, leading to a higher bioavailability.
Inclusion of different active substances into different CDs and their effect on a wide range of cell lines have been discussed in the literature. Some groups stain that this physicochemical procedure increases the antiproliferative activity of a potent compound due to the increased cellular uptake after incorporation [28][29][30]. This kind of results were reported for betulin, albendazole, pyrazolo [3,4-d]pyrimidines, ferrocenyl-tamoxifen adducts [31][32][33][34]. On the other hand, other research groups reported that cyclodextrin incorporation had no influence on the antiproliferative effect of the active compound [30,35]. During our research, HPGCD encapsulation seems to have a significant effect only when used in higher concentrations (starting from 75 μM), but the observation is not widely available as described above. Given the high incidence and fast metastatic proliferation of melanoma, the use of antiproliferative compounds is mandatory; the synergistic behavior of triterpenic acid ensures smaller doses of each compound and therefore weaker side effects. The skin parameters were monitored for six weeks and data were collected in the first day of every week. The measurements were done in triplicate and are presented in Figures 7-12 as differences between treated skin area and a blank area. The measurements of melanin and erythema serve as quantitative results regarding tumour evolution.
The TEWL measurements indicated important increases of transepidermal waterloss during the six weeks of the experiment (Figure 7). TEWL values below 10 g/h/m 2 characterize a good skin condition while over 25 g/h/m 2 values correspond to a poor skin condition [10]. In the current research, the most important change was recorded in the case of control mice (ΔTWL ~25 units/six weeks), while practically no modification was noticed in the case of 1:1:1 OA:UA:HPGCD treated mice group. The skin-pH was approximately the same for the control mice group (maximum difference recorded from one week to another was 0.2 units). Slight increases of skin-pH values were noticed for the mice treated with cyclodextrin complexes. The most important change was seen in the case of OA/UA:HPGCD treated mice (Figure 8). Similar results were revealed in a previous study of our team [36]. Erythema is the most important skin parameter involved in the evaluation of drugs or chemicals irritative potential, as well as the evaluation of antimelanoma agents. An important change was recorded for the control mice group, the difference between the treated skin area and a blank area reaching more than 230 units after six weeks of experiment). By contrast, a very small difference was noticed for the mice treated with the cyclodextrin complex of the 1:1 OA:UA mixture (below 50 units after six weeks of treatment) ( Figure 11). Figure 11. Erythema progress.
The water loss from the stratum corneum appears in Figure 12 as increased differences between the exposed and unexposed skin areas. The decrease of stratum corneum' moisture in the control mice group (Figures 12 and 13a) reached the highest level; OA:HPGCD and UA:HPGCD treated mice groups lost more or less water from stratum corneum during the experiment, while the smallest difference (around two units) was noticed for the OA:UA:HPGCD treated mice group. However, the values for UA:HPGCD and OA:UA:HPGCD groups are very similar. The MPA5 from Courage-Khazaka is a powerful tool for dermatologists, but also, for the study of any skin changes during the development of skin cancer models. TEWL and erythema increase significantly in the first week of chemically (DMBA/TPA treatment) and/or UVB induced skin cancers [36]. A recent paper dealing with the toxicity of nitrofuran-type compounds on melanoma revealed that melanin protects melanoma cells from nitrofuran-induced DNA damage [37]; however, during the current experiment, the melanin level did not fluctuate significantly. Swalwell et al. evaluated the role of melanin in skin cancers using human melanoma cells; they found that skin pigment prevents mitochondrial superoxide production and mitochondrial DNA damage, but does not appear to prevent cytosolic oxidative stress [38]. In a 15 weeks-experiment, Cerga et al. reported that the level of TEWL increased two times less for skin cancer C57BL/6j mice models treated with OA or UA-cyclodextrin complexes than the level recorded in the control group [15].
The evolution of skin surface pH in melanoma, non-melanoma skin cancers and other skin diseases was rarely assessed. J. Liu et al. noticed no modification of this parameter in volunteers with vitiligo [39]. Differences of skin surface pH depending on Fitzpatrick types were reported by Gunathilake et al.; subjects with type IV-V skin, with increased epidermal lipid content and lamellar body secretion, have more acidic stratum corneum surface pH [40]. Elevated pH values interfere with both permeability barrier homeostasis and stratum corneum integrity leading to an increased activity of serine proteases, responsible of normal desquamation [41].
In previous studies, the moisture of stratum corneum was used to evaluate the skin photoaging [42] or the hydration potential of UV protection creams [43]. In a 2010 US patent the hydration increase of the skin treated with a cosmetic product is attributed to ursolic acid [44]. Pentacyclic triterpenoids improve epidermal barrier function and induce collagen production thus modifying the parameters of skin [45].
The cell lines (ECACC; Sigma Aldrich origin Japan stored UK) was seeded onto a 96-well microplate ( where ε OX = molar extinction coefficient of alamar Blue oxidized form (BLUE); A = absorbance of test wells; A° = absorbance of positive growth control well (cells without tested compounds); λ 1 = 570 nm and λ 2 = 600 nm The coefficient of drug interaction (CDI) was used to analyze the interactions between the pure compounds while used as mixture, with or without cyclodextrin complexation; according to CDI values, the interactions were categorized synergism, additivity or antagonism, respectively. CDI was calculated as follows: CDI = AB/(A × B) where: AB = absorbancy value for the mixture of the two active agents/absorbancy value for the control A and B = absorbancy value for the single active agent / absorbancy value for the control.
A CDI value <1, =1 or >1 indicates that the drugs are synergistic, additive or antagonistic, respectively. A CDI value less than 0.7 indicates that the drugs are significantly synergistic [46,47].
Preparation of Inclusion Complexes
The preparation of inclusion complexes was already described in detail in our previous papers [15,17]. Briefly, OA and UA, respectively, and HPGCD were kneaded with a 50% ethanol solution in quantities corresponding to a molar ratio of 1:1 triterpene: CD (M UA = M OA = 456.7; M HPGCD = 1761.76). The mixture of UA and OA was prepared as 1:1 molar ratio; its inclusion complex with HPGCD was prepared using the same kneading procedure, in final molar ratio of 0.5:0.5:1 (UA:OA:HPGCD).
In Vivo Experimental Cancer Procedure
SKH1 females, 8 weeks old mice were obtained from Charles River Germany and divided in four groups (six mice/group): group 1 (used as control)-mice were exposed to UVB and 7,12-dimethylbenz(a)anthracene (DMBA) (390 nmol/0.1 mL acetone) was topically applied on the back skin (a single application in the first week of experiment) before irradiation; groups 2, 3 and 4 were treated with 200 μL of 2% aqueous solutions of OA:HPGCD, UA:HPGCD and OA/UA:HPGCD, respectively, 1/2 h before application of carcinogens [36,48]. For UVB exposure, cages were placed in an automatically time-switched irradiation setup. In the experiment, VL-6.M/6W (312 nm wavelength and 680 μW/cm2 intensity at 15 cm) tubes (VilberLourmat, Torcy, France) were used. Under the lamps the minimal erythema dose (MED) of hairless SKH-1 mice, was ≈300 J/m2 [17]. The exposure protocol was the following: irradiation 5 min / day, 2 times/week for 6 weeks, total dose being around 200 J/m2 UVB radiation. During exposure the mice were maintained in a plastic cage and the distance between the lamp and the back of the mice was 15 cm [36].
Non-Invasive Skin Measurements
The following skin parameters were evaluated using a Courage-Khazaka multiprobe adapter, MPA-5 (Cologne, Germany): transepidermal water loss (TWL) using Tewameter ® TM 300 probe, skin-pH using the Skin-pH-Meter ® PH 905 probe, sebum using the Sebumeter ® SM 815 probe, melanin and erythema using a Mexameter ® MX 18 probe and stratum corneum (SC) moisture content using a Corneometer ® CM 825 probe. Melanin and erythema values were spectrophotometrically determined at 2 wavelengths, respectively: 660 and 880 nm for melanin and 560 and 660 nm for erytema [17,49]. The measurements were conducted every three days after radiation exposure, on a 5 mm diameter back area of the mouse.
Statistical Analysis
All data were analyzed using paired Student's t tests or One-way Anova followed by Bonferroni's post-tests in order to establish the statistical difference between experimental and control groups; *, ** and *** indicate p < 0.05, p < 0.01 and p < 0.001. A 0.05 level of probability was taken as level of significance.
Compliance with Ethics Requirements
Authors declare that they have no conflict of interest and all procedures involving animal subjects complied with the specific regulations and standards. The experiment was first evaluated and approved by the Ethical Committee of the "Victor Babes" University of Medicine and Pharmacy Timisoara, Romania. The work protocol followed the rules of National Institute of Animal Health: throughout the experiment animals were maintained under standard conditions: 12 h light-dark cycle, food and water ad libitum, temperature 24 ± 1 °C, and humidity above 55%. At the end of the experiment, animals were sacrificed by cervical dislocation.
Conclusions
The synergistic in vitro activity of oleanolic and ursolic acids was evaluated on human melanoma cell lines revealing the capacity of the two active agents to potentiate each other's antiproliferative activity. Hydroxypropyl-γ-cyclodextrin was chosen as a water soluble carrier for these triterpenic acids as well as their mixture in order to be used in chemically (DMBA/TPA) and UV induced murine skin cancers. The measurements of transpidermal water loss, erythema, and skin hydration are readily available and of clinical importance; objective, fast and reproducible results were obtained in terms of detecting skin cancers status. The synergistic activity of oleanolic and ursolic acids was also confirmed by the in vivo study. | 3,974.6 | 2014-04-01T00:00:00.000 | [
"Chemistry",
"Medicine"
] |
anti-Similarities and Differences Between the Light and Heavy Chain Ig Variable Region Gene Repertoires in Chronic Lymphocytic Leukemia
Analyses of Ig V H DJ H rearrangements expressed by B-CLL cells have provided insights into the antigen receptor repertoire of B-CLL cells and the maturation stages of B-lymphocytes that give rise to this disease. However, less information is available about the L chain V gene segments utilized by B-CLL cells and to what extent their characteristics resemble those of the H chain. We analyzed the V L and J L gene segments of 206 B-CLL patients, paying particular attention to frequency of use and association, mutation status, and LCDR3 characteristics. Approximately 40% of B-CLL cases express V L genes that differ significantly from their germline counterparts. Certain genes were virtually always mutated and others virtually never. In addition, preferential pairing of specific V L and J L segments was found. These findings are reminiscent of the expressed VH repertoire in B-CLL. However unlike the V H repertoire, V L gene use was not significantly different than that of normal B-lymphocytes. In addition, V κ genes that lie more upstream on the germline locus were less frequently mutated than those at the 3 ′ end of the locus; this was not the case for V λ genes and is not for V H genes. These similarities and differences between the IgH and IgL V gene repertoires expressed in B-CLL suggest some novel features while also reinforcing concepts derived from studies of the IgH repertoire.
INTRODUCTION
Immunoglobulin (Ig) variable (V) domains are the components of the B-cell antigen receptor (BCR) that interact with antigen. Understanding the gene segments that encode these domains can provide indirect information about the structure of the BCR. In addition, somatic changes that occur in these genes can suggest clues regarding the maturational history of a B lymphocyte.
These principles have been of special value in understanding the biology of the B lymphocytes that become leukemic in B-cell chronic lymphocytic leukemia (B-CLL). For example, analyses of V H DJ H rearrangements expressed by these clones (1)(2)(3) has led to the recognition that B-CLL cases segregate into subgroups based on the presence or absence of mutations in V H genes, i.e., mutated and unmutated B-CLL, respectively (4,5). This division has considerable prognostic significance (6,7). Patients in the Ig V H mutated subgroup have a relatively benign clinical course. These individuals can live for many years after diagnosis (10-25 years), usually do not require therapy, and often die with the disease, not because of it. In contrast, patients in the unmutated subgroup follow a much more aggressive clinical course; these people have a median survival of less than 8 years, despite extensive therapeutic efforts which may quell but do not cure the disease. B-CLL remains incurable.
Furthermore, these studies suggest that the leukemic cells from both the mutated and unmutated subgroups are anti-
Similarities and Differences Between the Light and Heavy Chain Ig Variable Region Gene Repertoires in Chronic Lymphocytic Leukemia
Analyses of Ig V H DJ H rearrangements expressed by B-CLL cells have provided insights into the antigen receptor repertoire of B-CLL cells and the maturation stages of B-lymphocytes that give rise to this disease. However, less information is available about the L chain V gene segments utilized by B-CLL cells and to what extent their characteristics resemble those of the H chain. We analyzed the V L and J L gene segments of 206 B-CLL patients, paying particular attention to frequency of use and association, mutation status, and LCDR3 characteristics. Approximately 40% of B-CLL cases express V L genes that differ significantly from their germline counterparts. Certain genes were virtually always mutated and others virtually never. In addition, preferential pairing of specific V L and J L segments was found. These findings are reminiscent of the expressed VH repertoire in B-CLL. However unlike the V H repertoire, V L gene use was not significantly different than that of normal B-lymphocytes. In addition, Vκ genes that lie more upstream on the germline locus were less frequently mutated than those at the 3′ end of the locus; this was not the case for Vλ genes and is not for V H genes. These similarities and differences between the IgH and IgL V gene repertoires expressed in B-CLL suggest some novel features while also reinforcing concepts derived from studies of the IgH repertoire. Online address: http://www.molmed.org doi: 10.2119/2006-00080. Ghiotto gen-experienced B-lymphocytes. Observations on the surface phenotypes (8) and the gene expression profiles (9,10) of the leukemic cells corroborate this notion. Finally, biased use of V H , D, and J H gene segments (5,7,(11)(12)(13) and selective combinatorial associations (14)(15)(16)(17)(18)(19) suggest that the antigenic epitopes responsible for this activated and antigen-experienced/memory state are limited in nature. An alternative, not mutually exclusive explanation, is that the normal B-cells from which B-CLL cells derive are markedly restricted in their antigenbinding repertoire, either genetically or due to prior antigen selection (1,3,16,17).
Thus, considerable basic as well as clinical information has been gleaned from studying the rearranged V H DJ H segments that code for the V domains of the Ig H chain in these leukemic cells. In contrast, less information is currently available about the gene segments that make up the V domains of the Ig L chains of B-CLL cells and the extent to which the characteristics of these segments resemble those of the H chain (20). Therefore, it is not resolved if conclusions about the immunobiology of B-CLL cells that were derived from H chain data are recapitulated by the rearranged V L J L segments. In this study, we analyzed the V L and J L gene segments expressed by a large cohort of B-CLL patients, focusing on the frequency of use and association, mutation status, and characteristics of the 3rd complementarity determining region (CDR3) that is critical for antigen-binding (21,22). These analyses reveal similarities as well as some differences in these features between the V region H and L chain repertoires expressed in B-CLL.
B-CLL Samples
We analyzed the DNA sequences of V L J L rearrangements of 206 patients with B-CLL; 179 of these patients had expansions of IgM + /CD5 + /CD19 + B cells and 27 patients displayed expansions of CD5 + / CD19 + B cells expressing smIgG + or smIgA + . DNA sequences for some of these cases have been described (5,16,23). PBMC, obtained from heparinized venous blood by density gradient centrifugation (Ficoll-Paque; Pharmacia LKB Biotechnology, Piscataway, NJ), were used immediately or cryopreserved with a programmable cell-freezing machine (CryoMed, Mt. Clemens, MI) prior to being thawed and analyzed. cDNA prepared from samples were screened for expression of a dominant V L family (representing that of the B-CLL clone) by standard PCR. In this study we included only B-CLL cells that exhibited allelic exclusion.
Preparation of RNA and cDNA Synthesis
Total RNA was isolated from fresh or cryopreserved PBMC using Ultraspec RNA (Biotecx Laboratories, Houston, TX) according to the manufacturer's instructions. One µg of RNA was reverse transcribed using 200U M-MLV reverse transcriptase (GIBCO BRL, Life Technologies, Grand Island, NY), 1U of RNase inhibitor (Eppendorf, Hamburg, Germany), as previously described (5).
Ig V L J L Gene Sequencing and Analysis
V L J L sequences were determined as reported previously (16). Sequences were compared with the V BASE sequence directory (24) using MacVector software, version 7.0 (Accelrys, San Diego, CA), to GenBank, and to the international Im-MunoGeneTics information system® http://imgt.cines.fr (Initiator and coordinator: Marie-Paule Lefranc, Montpellier, France; ref. (25)). In those instances where > 1% mutation was found in an expressed V L gene, the algorithm of Chang and Casali (26) was employed to determine the extent to which "antigenselection" of the replacement (R) mutations had occurred, taking into account the inherent susceptibility of CDR to R mutations. The expected number of R mutations in CDR and FR was calculated using the formula R = n × CDR Rf (or FR Rf) × CDRrel (or FRrel) where n is the total number of observed mutations, Rf is the R frequency inherent to the CDR or FR, and CDRrel and FRrel are the relative sizes of these segments. A binomial probability model was used to evaluate whether the excess of R mutations in CDR or the scarcity in FR was due to chance (26).
Analyses of LCDR3 Rearrangements
LCDR3 length was determined by counting the number of amino acids (aa) immediately following the conserved cystine (C) at position 88 at the end of FR3 to the aa preceding position 98 at the beginning of FR4 (a conserved phenylalanine (F) in all JL segments). LCDR3 charge, as defined by an estimated pI, was determined using the MacVector software program (version 7.0).
Statistical Analyses
Analyses focused primarily on descriptive statistics (summaries using means, medians, standard deviations, proportions). Additional analyses examined associations between study groups (B-CLL vs. normal subjects) and V L isotypes (κ vs. λ), C H isotypes (IgM + vs. non-IgM + ), V H mutation status, and other categorical variables using the Fisher's Exact Test. The standard goodness-of-fit test was used to determine whether specific Vκ-Jκ and Vλ-Jλ pairings were more frequently encountered than others. The Mann-Whitney test was used to compare LCDR3 length and charge between specific group comparisons.
κ and λ L Chain Use in B-CLL Cells
The distribution of κ and λ chain use in normal, polyclonal B-cell populations is 2:1 (27). Therefore, we analyzed the V L gene sequences expressed by 206 B-CLL clones (179 IgM + and 25 IgG + and 2 IgA + ) to determine if this was the case in the B cells transformed in this disease (Supplemental Tables S1, S2). In 67.9% (140/206) of the cases, the leukemic cells expressed a Vκ gene and in 32.0% (66/206) a Vλ gene. This distribution was similar for IgM + (66.5% κ + and 33.5% λ + ) and non-IgM + (77.8% κ and 22.2% λ) cases. Thus, the ratio of κ and λ chains in B-CLL resembles that of normal B lymphocytes.
V L Gene Family Use
Among the 140 κ-expressing samples, Vκ genes were derived from families I, II, III, IV, and VI (Table 1) in the following order of frequency: VκI (52.1%) > VκIII (25.7%) > VκII (16.4%) > VκIV (5.0%) > VκVI (0.7%). The order of Vκ family distribution and their relative frequencies were unchanged when the cases were divided into subgroups based on C H isotype. In addition, there was no significant difference in the distribution and frequency of Vκ family use in IgM + B-CLL cases compared with the reported repertoires of normal IgM + CD5 + and IgM + CD5 -B cells (Table 1) (28,29). A comparison between non-IgM + B-CLL cases and normal non-IgM + CD5 + and CD5 -B cells could not be performed because data on the latter 2 control populations are lacking in the literature.
Use of Specific Vκ and Vλ Genes
IgV use among normal B lymphocytes is not stochastic; rather it is skewed by genetic and environmental pressures (33)(34)(35)(36). The distribution of individual Vκ genes of IgM + B-CLL cases resembled that reported in normal CD5 + and CD5 -IgM + B cells (data not shown) (28,29). A similar comparison of specific Vλ gene expression was not possible because of the lack of data reported for normal control subsets.
Number and Location of VL Gene Mutations
When mature B cells encounter antigen, they can undergo the somatic hypermutation process that alters the structure of Ig H and L V region genes and possibly the protein structure of the BCR [reviewed in (37)]. This is especially frequent if the antigen bound elicits T-cell help (38). Thus the presence of IgV mutations indicates antigenic experience and points to the maturational pathway of the cell. We analyzed the number, type, and location of somatic changes in expressed V L J L of our B-CLL cohort. A difference of 2% or more from the most similar germline gene was taken as the cut-off point to define a sequence as mutated (4). Approximately 42% (87/206) of B-CLL V L genes differed from the most similar germline gene by 2% or more (Table 3). C H isotype-switched cases were more often mutated (63%) than IgM + cases (39.1%; P = 0.02). When Vκ- and Vλ-expressing cases were analyzed separately, 43.6% of the former and 39.4% of the latter differed by ≥ 2% from the germline counterpart. Many more isotype-switched Vλ-expressing cases were mutated (83.3%) than IgM + Vλ-expressing cases (35.0%; Table 3). The Vκ families differed in mutation frequency with a distribution of VκIV > VκI > VκIII > VκII (Tables S1,S2). In addition, certain Vκ genes displayed more mutations than others (Table 4). IGKV1-5 and 3-20 exhibited significant levels of mutation (80%, 8/10 cases and 63.6%, 12/19 cases, respectively). In contrast, other genes were rarely mutated. For example, in every instance (8/8) IGKV2-28 was > 98% similar to the germline sequence; all of these cases expressed IgM. Likewise, IGKV1-33, which was also found only among IgM + cases, and IGKV1-39 were very similar to their germline counterparts (9/10, 90%, and 21/25, 84%, respectively), even though 42.5% (31/73) of the VκI-expressing cases were mutated. Of Vλ-expressing cases, 85.7% (12/14) of those using IGLV3-21 were minimally divergent from the germline sequence (Tables 4,S1,S2); all these cases were IgM +.
Finally, most of the Vκ genes that remained unmutated were positioned considerably upstream on the Vκ locus (Figure S1). Specifically, 46.5% (27/58) of the Vκ genes located at the 3′ end of the locus (within the interval from IGKV4-1 to 3-20) were unmutated, whereas 71.7% (43/60) of the Vκ genes 5′ of IGKV3-20 (starting at 6-21 and considering also the duplicated portion of the locus) were unmutated. In contrast, unmutated and mutated Vλ genes were distributed uniformly along the Vλ locus.
Types of VL Mutations
BCRs that have been selected by antigen often display a higher frequency of R mutations in CDRs and/or a lower frequency of such mutations in FRs (39,40). Based on these considerations, ~50% of both κ-expressing and λ-expressing cases demonstrated evidence of antigen selection (Tables S1,S2).
Among the IgMκ + mutated cases (Table S1), 40.1% (20/49) exhibited either a significantly increased frequency of R mutations in CDR (n = 1) or, more often, a significantly decreased frequency of R mutations in FR (n = 12), and 9 cases displayed mutation patterns in both the CDR and FR that were consistent with antigen selection. Of the IgMλ + mutated cases (Table S1), 47.6% (10/21) demonstrated evidence for antigen selection. In 6 cases, there were fewer R mutations in FR and in 2 instances more R mutations in CDR than predicted. In 2 cases, both criteria for selection were found.
Approximately 83% of the κ-expressing and 50% of λ-expressing isotype-switched cases demonstrated similar evidence for antigen selection (Table S2). Although none of the cases showed a significantly increased frequency of R mutations in CDR, 8 exhibited a significant decrease in R mutations in the FR and 3 displayed significant changes in CDR and FR.
Allelic Polymorphisms
To ensure that the differences observed were primarily the effect of somatic changes and did not reflect known polymorphic variants, V L genes encountered in our analyses were compared with the list of polymorphisms available in the IMGT and GenBank databases. In every instance, the identified differences were consistent with somatic mutations. Furthermore, the alleles most commonly used in B-CLL were the same as those identified in our normal subject V L database. Thus, allele IGKV1-39*01 was used in 100% of the B-CLL cases and in 91.7% of normal controls. Similarly, the frequencies of use of alleles IGKV1-5*03, 3-20*01, and 3-11*01 were identical in all B-CLL and normal B cells.
J L Gene Use
J L family use among the entire B-CLL cohort, the IgM + cases, and the non-IgM + cases is listed in Table 5. The frequency of Jκ family use was the same in each group: Jκ1 > Jκ2 > Jκ4 > Jκ3 > Jκ5. For Jλ, the order of frequency in the entire B-CLL cohort and the IgM-expressing cases was Jλ3 > Jλ2/3 > Jλ1. Of note, all the cases expressing IgG or IgA used the Jλ2/3 or Jλ3 gene segment.
V L -J L Joining
An analysis of the frequency of joining of V L gene families with J L genes indicated that there was no preferential association, either for κ (P = 0.28) or λ (P = 0.8). However, when we compared the distribution of V L -J L pairing in IgM + B-CLL with that observed in normal subjects (28-32), we did notice that Vλ2 genes paired more frequently with the Jλ3 segment in B-CLL compared with normal individuals (11.7% (7/60) versus 1.5% (4/135), P = 0.04).
LCDR3 Characteristics
The antigen-binding pocket of the BCR is a composite of both the H and L rearrangements (41). Although HCDR 1 and 2 and LCDR 1 and 2 are important contributors, the H and L CDR3s have the greatest impact on the structure of the binding site for most antigens (21,22). In B-CLL, the HCDR3 of the BCR often displays unique features that differ between the Ig V H mutated and unmutated subgroups. Therefore, we carefully examined the LCDR3 of the κ- and λ-expressing cases with regard to length, amino acid composition, and charge.
The average charge of LCDR3, as determined by the estimated pI value, for the κ-expressing samples was 6.5 ± 1.9 (Tables S1,S2), a value similar to those for IgM + (6.4 ± 1.9) and non-IgM + (6.9 ± 1.9) cases and cases expressing different Vκ gene families. It is also similar to the average LCDR3 pI (6.4 ± 1.9) of healthy individuals (calculated from refs. 28,29).
Pairing of the Most Commonly Encountered Vκ and Vλ Genes with Specific IgV H Genes
Because, as mentioned above, both V H DJ H and V L J L rearrangements contribute to antigen binding, we searched for selective associations of certain V L with V H genes. In fact, we found 2 examples of such associations. IGKV1-39 paired mainly with V H 1 (10/20) and V H 3 (6/20) family members in the IgM + B-CLL cases; in IgG + cases, IGKV1-39 paired almost exclusively with the IGHV4-39 gene (4/5). In addition, a preferential pairing of the IGLV3-21 gene with V H 3 gene family members (11/14) was identified.
DISCUSSION
In this study, we analyzed the expression of V L and J L segments in a cohort of 206 B-CLL patients, paying particular attention to (1) frequencies of utilization and association of these segments, (2) frequency, level, and location of somatic mutations, and (3) characteristics of LCDR3. We found similarities and some differences between the L chain and H chain V region repertoires of B-CLL cells (Table 6).
Gene Use
The most striking difference between the IgL and IgH repertoires is the lack of bias in V L and J L gene use. The IgH repertoire in B-CLL is characterized by use of V H , D, and J H genes (5,7,12,42) and alleles (11,13) that differs from B cells of normal individuals. In contrast, our data indicate that expression of IgL κ and λ families and genes and J L segments mirrors that reported for the normal adult human B cell repertoire. However, Stamatopoulos recently reported skewed representation of individual IgL κ and λ genes in B-CLL (20). The reason for this discrepancy is unclear but may relate to the normal control populations used for comparison. Only 1 clear difference between the IgL repertoire in B-CLL and normal B cells appears to exist, i.e., exclusive use of Jλ2 among λ + C H isotype-switched cases.
As might be expected from the similarity in V L and J L gene use, the LCDR3s of both the κ and λ rearrangements were similar to their counterparts in the normal B-cell repertoire. Again, this is different from the IgH repertoire, which differs in length, amino acid composition, and charge (Tables S1,S2) (5,11,13) from that of normal circulating B cells. Of note, however, certain Vλ families (Vλ2 > Vλ3 > Vλ1) do differ significantly in LCDR3 pI values. In this latter regard, the IgL and IgH B-CLL repertoires are similar because V H family-related differences exist for HCDR3 charge (V H 3 > V H 4 > V H 1) and length (for example, longer in V H 1 genes, in particular 1-69 and shorter in V H 3, in particular 3-07) (5).
However, 1 similarity in gene use does exist between the IgL and IgH repertoires in B-CLL, i.e., use of certain genes solely among IgM-expressing cases. For example, 3 of the most commonly encountered V L genes (IGKV1-33 and 2-28 of the κ repertoire and IGLV3-21 of the λ repertoire) were not found among CH isotype-switched cases. This phenomenon is reminiscent of that seen for the IGHV1-69 gene, which is rarely encountered in IgG + or IgA + B-CLL cases (16).
Interestingly, the B-CLL IgL repertoire does differ from that of normal B cells in gene family pairing. We found that Vλ2 and Jλ3 genes paired more frequently in B-CLL than in normal subjects (Tables S1,S2). At the specific gene level, in IgM + cases, IGKV1-39 associated preferentially with Jκ2 and IGLV3-21 paired often with Jλ3. However, among C H isotype-switched cases, IGKV1-39 associated most frequently with Jκ1. Coordinate association of V and J genes also occurs in the IgH repertoire, where V H 1-69-expressing B-CLL cells often use J H 6 and 3-07-expressing cells often use J H 4 (5,11). Our data differ somewhat from those reported recently (20), mainly by a lower percentage of IGKV4-1 and IGLV2-8 genes in our cohort.
Somatic Mutation
Another feature shared by the V genes of the H and L chain repertoires is the presence of somatic mutations, which in some instances is limited to specific genes. In approximately 42% of our B-CLL cases, the expressed V L genes differed by 2.0% or more from the most similar germline counterpart (Table 3); the level and extent of gene difference was the same for Vκ- and Vλ-expressing cases. These percentages agree with those recently reported by Stamatopoulos and co-workers (20). In addition, the frequency of mutated V L sequences was significantly higher among C H isotype-switched cases (63%) than IgM + cases (36%; P = 0.0165); this finding resembles that of the V H repertoire in B-CLL (1-3). Of note, virtually all Vλ + cases that expressed a switched C H isotype were mutated (83.3%, P = 0.01; Table 4).
The IgL and IgH gene repertoires also were similar in regards to the type and location of nucleotide differences (Tables S1,S2). Of the 87 sequences with at least 2% difference from the germline, 49.4% (43/87) exhibited evidence for antigen selection. Of these 43 cases, selection was suggested most often by a preservation of FR structure (65.2%, 28/43), followed by a preservation of FR with a change in CDR structure (27.9%, 12/43), and then solely a change in CDR structure (7.0%, 3/43). This is consistent with the pattern of R mutations in mutated V H genes in B-CLL (5,7,12,42). Finally, the definition of a somatically mutated sequence employed here, and in prior studies of the V H repertoire in B-CLL, used a ≥ 2% difference from the most similar germline gene as an arbitrary cutoff to define "mutation." This cutoff was originally selected to account for unknown polymorphisms in the human IgV H locus (4). However, if one uses a ≥ 1% difference to assign this designation, the numbers of "mutated" sequences change minimally and insignificantly (≥ 1% = 47.4% mutated sequences vs. ≥ 2% = 40.7% mutated sequences). This is consistent with our finding that known polymorphisms of specific V L genes do not account appreciably for the differences reported here or for the differences in B-CLL V H gene sequences reported by others (43).
Pairing of V H and V L Segments in the BCRs of B-CLL Cells
Pairing of specific IgV H genes with V L genes was studied for the most frequently encountered Vκ (IGKV1-39) and Vλ (IGLV3-21) genes. In both instances, pairing appeared to be non-random. IGKV1-39 paired preferentially with V H 1 and V H 3 family members in IgM + cases, although no preferential coupling with a specific V H gene in these families was observed. Conversely, in IgG + cases the IGKV1-39 gene almost always paired with the IGHV4-39 gene. These cases represent a B-CLL subgroup with almost identical BCR structures that involve the entire V H DJ H and V L J L rearrangements, including H and L CDR3s with unique amino acids at the V-(D)-J junctions (16). Similarly, the IGLV3-21 gene (represented only in IgM + cases) was paired mainly with V H 3 gene family members (11/14 cases) and often with IGHV3-21 (4/10). Indeed, these B-CLL cases represent another subgroup of B-CLL with remarkably similar BCR structures (15). A comprehensive analysis of IgV H and V L pairing in a large number of B-CLL cases is being prepared (Ghiotto et al., manuscript in preparation).
Concluding Remarks
What do these studies of the IgV L repertoire add to the knowledge already gleaned from studies of the IgH repertoire? The presence of significant levels of somatic mutations in the V L genes of 40% of patients confirms the conclusion drawn from V H analyses that many B-CLL cases derive from mature B-lymphocytes that have experienced antigenic stimulation at some point in their development. However, the V L data do not shed additional light on the manner in which the B-CLL cell precursors accumulated these mutations. Certainly, reactivity with a foreign antigen that elicited a classical T-cell-dependent, germinal center-mediated somatic hypermutation process may have occurred. Alternatively, a T-cell-independent process initiated by non-protein antigens, such as those expressed on the surface of certain microbes, could be responsible for the observed V L gene changes.
The V L mutation data also support the contention that B-CLL cells derive from cells selected by specific antigens. A clear bias for selection against R mutations in FR exists, because less than 50% of the "antigen-selected" V L sequences involved an amino acid replacement in LCDR1 or LCDR2 (Tables S1,S2). This type of structural conservation is consistent with the need for B cells to retain an intact BCR, a principle that has been illustrated clearly in animal systems (44). Furthermore, the greater tendency to preserve FR structure, rather than to alter CDR amino acid composition, can be viewed as favoring binding of antigen outside of the classical binding pocket, implying that superantigen drives some B-CLL cells and their precursor B-lymphocytes. Nevertheless, the association of certain specific V, (D), and J segments (5,7,(11)(12)(13)) and their selective combination in assembling H and L chain rearrangements (14)(15)(16)(17)(18)(19) supports binding of specific antigens in those cases.
Finally, because more cases using unmutated Vκ genes lie upstream on the Vκ locus ( Figure S1), receptor editing may have taken place in these cells (45,46). Because unmutated B-CLL cases are enriched in poly/autoreactivity (47), receptor reconfiguration in these cases may not have been effective in eliminating low-affinity autoreactivity (48), due to the use of certain germline V L (49) and V H (50,51) genes and rearranged HCDR3 segments (52). This phenomenon has been observed in transgenic murine models (53,54). Additional studies that compare IgV mutation status with antigen binding will be necessary to confirm this possibility. | 6,374.2 | 2006-11-01T00:00:00.000 | [
"Biology"
] |
Diagnostic accuracy of point-of-care ultrasound with artificial intelligence-assisted assessment of left ventricular ejection fraction
Focused cardiac ultrasound (FoCUS) is becoming standard practice in a wide spectrum of clinical settings. There are limited data evaluating the real-world use of FoCUS with artificial intelligence (AI). Our objective was to determine the accuracy of FoCUS AI-assisted left ventricular ejection fraction (LVEF) assessment and compare its accuracy between novice and experienced users. In this prospective, multicentre study, participants requiring a transthoracic echocardiogram (TTE) were recruited to have a FoCUS done by a novice or experienced user. The AI-assisted device calculated LVEF at the bedside, which was subsequently compared to TTE. 449 participants were enrolled with 424 studies included in the final analysis. The overall intraclass correlation coefficient was 0.904, and 0.921 in the novice (n = 208) and 0.845 in the experienced (n = 216) cohorts. There was a significant bias of 0.73% towards TTE (p = 0.005) with a level of agreement of 11.2%. Categorical grading of LVEF severity had excellent agreement to TTE (weighted kappa = 0.83). The area under the curve (AUC) was 0.98 for identifying an abnormal LVEF (<50%) with a sensitivity of 92.8%, specificity of 92.3%, negative predictive value (NPV) of 0.97 and a positive predictive value (PPV) of 0.83. In identifying severe dysfunction (<30%) the AUC was 0.99 with a sensitivity of 78.1%, specificity of 98.0%, NPV of 0.98 and PPV of 0.76. Here we report that FoCUS AI-assisted LVEF assessments provide highly reproducible LVEF estimations in comparison to formal TTE. This finding was consistent among senior and novice echocardiographers, suggesting applicability in a variety of clinical settings.
INTRODUCTION
Cardiovascular disease is the leading cause of mortality worldwide, with disease prevalence nearly doubling since 1990 1 . The rising prevalence of cardiac disease has dramatically increased the financial burden on healthcare systems and has further constrained access to limited resources. Transthoracic echocardiography (TTE), the most frequently utilized cardiovascular test, accounted for approximately $1.2 billion in Medicare spending in 2010, which accounted for 11% of its spending on imaging services 2,3 .
Recent technological advances in ultrasound components along with declining costs have led to the development of pocket-sized devices that are increasingly being utilized outside the confines of a formal echocardiography laboratory 4 . As a result, point-of-care ultrasound (PoCUS) is being used by clinicians from diverse clinical backgrounds as part of their cardiovascular assessment 5 , where it has been shown to outperform physical examination and improve diagnostic accuracy [6][7][8] .
Assessment of left ventricular ejection fraction (LVEF) is a fundamental component of the focused cardiac ultrasound (FoCUS) examination. While TTE is the standard of care to determine LVEF in clinical practice, it is often not readily available for immediate bedside evaluation and remains a scarce resource in particular communities 9 . Accordingly, the prevalence of FoCUS in clinical practice has increased to facilitate rapid clinical assessment. The reliability of FoCUS to screen for left ventricular (LV) dysfunction has been previously demonstrated 6,7 ; however, in the majority of prior studies FoCUS assessment of LVEF has been limited to trained sonographers and clinicians with formal echocardiography training 6,7 . In contrast, in real-world clinical settings, FoCUS is routinely used in primary care, anesthesia and emergency departments and is commonly performed by providers with limited PoCUS training 10 . Consequently, uncertainty regarding the accuracy of LVEF assessments and the potential impact of erroneous results on patient care remain a concern 11,12 . Due to the potential impact that FoCUS and bedside LVEF could have on patient management, it is imperative that LVEF is accurately measured by clinicians.
Artificial intelligence (AI) technologies present a potential solution to these concerns, yet insufficient validation of these novel algorithms in real-world settings limits their widespread implementation [13][14][15] . To date, the majority of AI-assisted LVEF assessment has been performed by echocardiographers and trained sonographers using formal TTE machines 16,17 . While early studies with FoCUS have been promising, it remains unclear whether the integration of AI into FoCUS devices improves diagnostic accuracy in real-world clinical settings where a diverse range of user background and experience exists.
Herein, we present a pragmatic prospective cohort study assessing the accuracy of AI-assisted LVEF evaluation. Our objectives were (1) to determine the accuracy of bedside AI-assisted LVEF compared to formal TTE, (2) to assess the real-world efficacy of AI-assisted LVEF assessment through comparative analysis between early and experienced scanners, and (3) to determine if AI-assisted LVEF assessments can accurately classify the severity of LV dysfunction. The hypothesis of our study is that AI-assisted LVEF assessments are a reliable index test for determining the presence and severity of cardiac dysfunction in comparison to the reference formal echocardiogram.
Baseline characteristics
A total of 449 participants were enrolled in the study, including 227 in the novice scanner (NS) subgroup and 222 in the experienced scanner (ES) subgroup. After the exclusion of studies due to non-diagnostic image quality (n = 25) the final cohort included 208 and 216 studies (p = 0.008) in the NS and ES subgroups, respectively (Supplementary Fig. 1). A formal TTE was able to calculate an LVEF in all of the excluded studies.
The median age for the overall cohort was 65 (IQR 20) years, including 144 (34%) female participants (Table 1). The median BMI was 27.1 (IQR 6.8) kg/m 2 . Previously documented LV dysfunction on prior TTE was present in 101 (23.8%) participants, while 125 (29.5%) participants had LV dysfunction on their present TTE. The NS performed 92 of 125 (73.4%) abnormal studies and 29 of 33 (87.8%) with severely reduced LVEF. NS recruited from inpatient settings (emergency department, wards and intensive care units) at the University of Ottawa Heart Institute, while ES recruited from outpatient settings at Tufts Medical Centre.
LVEF assessment
The ICC for the entire cohort showed excellent reliability with a value of 0.904. The NS and ES subgroups had excellent and good reliability, with ICC values of 0.921 and 0.845, respectively. The Bland-Altman plot shows a significant bias of 0.73% towards TTE (p = 0.005) with a level of agreement of 11.2% (Fig. 1). Simple linear regression demonstrated a correlation between bedside AI-assisted LVEF and TTE LVEF (R 2 of 0.82, root mean squared error [RMSE] 5.33, mean absolute error [MAE] 4.25, p < 0.0001; Supplementary Figure 2A). The correlation between bedside AI-assisted LVEF and TTE LVEF assessment was also strong in both the NS (R 2 = 0.85, RMSE 5.31, MAE 3.86, p < 0.0001; Supplementary Figure 2B) and ES (R 2 = 0.72, RMSE 5.23, MAE 4.48, p < 0.0001; Supplementary Figure 2C) subgroups.
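For readers who want to reproduce this style of agreement analysis, a minimal sketch is given below; the paired values are purely illustrative, and whether the published RMSE and MAE were computed on the raw AI-TTE pairs or on regression residuals is not stated here, so the sketch simply uses the raw paired values.

```python
import numpy as np

def agreement_metrics(ai_lvef, tte_lvef):
    """Bland-Altman bias / limits of agreement and simple error metrics
    between AI-assisted and TTE LVEF (both in %)."""
    ai, tte = np.asarray(ai_lvef, float), np.asarray(tte_lvef, float)
    diff = ai - tte
    bias = diff.mean()                       # negative values indicate bias towards TTE
    loa = 1.96 * diff.std(ddof=1)            # half-width of the 95% limits of agreement
    # Simple linear regression of AI on TTE
    slope, intercept = np.polyfit(tte, ai, 1)
    pred = slope * tte + intercept
    r2 = 1 - ((ai - pred) ** 2).sum() / ((ai - ai.mean()) ** 2).sum()
    rmse = np.sqrt((diff ** 2).mean())
    mae = np.abs(diff).mean()
    return bias, loa, r2, rmse, mae

# Illustrative paired measurements (not study data)
ai  = [55, 42, 60, 33, 48, 25, 62, 51]
tte = [57, 40, 61, 30, 50, 27, 60, 53]
print(agreement_metrics(ai, tte))
```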
The ability to correctly classify LVEF severity by bedside AI-assisted FoCUS was compared to TTE LVEF (Table 2). In cases where there was disagreement between the AI-assisted LVEF and TTE LVEF classification, only two cases (0.5%) disagreed by more than one severity category. This corresponded to a Cohen's weighted kappa of 0.83 (confidence interval [CI] 0.76-0.91).
DISCUSSION
In this prospective, multicentre, observational cohort study of 424 participants we found that bedside FoCUS with AI-assisted LVEF assessment has high diagnostic performance when compared to comprehensive TTE read by a board-certified echocardiographer. Independent of the level of experience of the FoCUS scanner, users can accurately identify the presence and severity of LV dysfunction with high degrees of certainty. These findings suggest that in both inpatient and outpatient settings, AI-assisted FoCUS assessments may serve as a surrogate for a formal TTE in order to assess LVEF. The assessment of LVEF by FoCUS has mostly been studied without AI, with image acquisition and interpretation completed independently by the bedside user. In a meta-analysis on the diagnostic accuracy of FoCUS, an abnormal LVEF has been reported to be identified with a sensitivity and specificity of 84% and 89%, respectively 6 . Comparatively, the majority of these studies had an experienced sonographer or echocardiographer as the bedside user 11,[18][19][20] . In the remaining two studies that used novice FoCUS operators, the sensitivity and specificity for an abnormal LVEF (defined as less than 50%) was 85-86% and 82-89%, respectively 21,22 . Furthermore, there are only two studies that classified the severity of LV dysfunction, with both studies using experienced operators 18,19 . While pivotal to the growth of FoCUS, there are methodologic concerns that limit the applicability of these studies to clinical practice. The clinical FoCUS user population is diverse in terms of training and experience and is unlike the highly subspecialized user population in these studies. As a result, the documented diagnostic accuracy of non-AI LVEF assessment may not be reflective of real-world application. Despite its importance, there is a paucity of evidence outlining whether severity can be accurately identified at the bedside outside the hands of trained echocardiographers. The integration of AI into FoCUS remains a potential method for minimally trained users to accurately and efficiently identify abnormal LVEF, while simultaneously providing a reliable evaluation of severity.
Integration of AI has been mostly studied within general echocardiography, but emerging data have shown promise in FoCUS [23][24][25] . A recent validation study of 100 participants demonstrated that FoCUS AI-assisted LVEF assessments could be accurately done with a correlation coefficient of 0.87 and a sensitivity and specificity of 90% and 87% for LVEF less than 50%, respectively 26 . Admittedly, these images were acquired by an experienced echocardiographer and in the optimal setting of a formal echo laboratory, but nevertheless these results demonstrate the potential of AI-assisted FoCUS assessment in real-world practice. Open-source AI-LVEF software is now becoming available; an emergency department study of EchoNet-POCUS achieved an AUROC of 0.81 (0.78-0.85) for identifying reduced LV function. A notable difference in model training is the use of physician visual assessment rather than quantitative methods like the biplane method of disks. While the current iteration does not comment on the severity of dysfunction, this is a promising noncommercial alternative 27 . Beyond these studies, our study provides robust evidence by providing multicentre data across a spectrum of users and clinical contexts.
The current study, which represents the largest cohort of patients evaluated using AI-assisted FoCUS, builds upon the previous data by validating the utility of AI-assisted LVEF using a design that is reflective of real-world clinical practice. First, we have shown that AI can reliably and accurately identify LV dysfunction. The current standard for FoCUS is a visual assessment of LVEF, which may be more susceptible to misclassification 11,12 . As a result of our findings, AI-assisted LVEF assessments may provide a safety measure to confirm the bedside clinician's impression, while also providing the degree of severity. Given the potential changes to management depending on the degree of LV severity, AI-FoCUS may provide a rapid, accurate assessment of LVEF and therefore allow for prompt diagnosis and management. Second, we have shown that LVEF can be accurately calculated by a novice user using the AI-assisted FoCUS device. FoCUS examinations may be deferred by clinicians due to a lack of training, comfort, or clinical expertise, and as a result bedside ultrasound is only used in two to five percent of emergency department assessments 28,29 . The novice users in this study, who did not have formal FoCUS or echocardiography training, demonstrate that inexperienced users can use AI-FoCUS to accurately determine LVEF. This makes LVEF assessment available to a broader and more comprehensive spectrum of users than traditional FoCUS, including in regions with limited cardiac and sonography infrastructure 30,31 .
With the concern surrounding the implementation of AI into real-world clinical care, our study has specific methodological strengths that highlight its applicability to current practice [13][14][15] . Our study has a pragmatic design that is based on real patient encounters. Participants were not excluded based on their clinical presentation or setting, body habitus or cardiac rhythm. Image acquisition was often done with participants in less-than-ideal positioning, including supine or sitting upright related to their presenting disease. Despite these challenges, 94.4% and 91.6% of all and NS studies were of diagnostic quality. As a result, the limitations that commonly impact FoCUS in real-world practice were incorporated in the study design, meaning that our results are reflective of current clinical practice. Additionally, this is a large multicentre, international study that integrates the differences in image acquisition between centres and countries.
Interestingly, the diagnostic accuracy of AI-assisted LVEF assessments was better with NS in comparison to ES. Considerations for this discrepancy include heterogeneity due to differences in the primary recruitment settings between cohorts and nonrandom distribution of cases. The ES subgroup had fewer studies excluded from the data analyses, which likely reflects their ability to obtain studies in patients with challenging imaging windows, but this could be confounded by the difference between the primary recruitment settings of the groups. Potentially, such images are less likely to be of diagnostic quality, are often challenging to interpret, and have likely impacted the findings in the ES cohort 32 . Our study also finds a lower error between AI and echocardiographers compared to independent cardiology assessments 33 . We hypothesize that this is likely due to several factors, including reduced variability related to expertise and reader fatigue, as well as minimization of the cognitive biases present in human diagnosis 34 .
Nonetheless, there are important limitations to our study. Due to participant enrollment being completed as a convenience series, there could be selection bias to exclude critically unwell patients that needed a timely assessment by an experienced sonographer. Additionally, the findings of our study are specific to the AI technology of the EchoNous KOSMOS device and performance might not be duplicated with similar but different platforms. Furthermore, it is important to highlight that this AI technology only provides assistance in interpretation and not in image acquisition; this is reflected in the differences in diagnostic-quality images between the ES and NS cohorts. With new and upcoming AI software providing assistance in image acquisition, our findings are limited to image interpretation 35 . Another limitation of our study is the heterogeneity between the NS and ES groups. The NS group recruited mostly inpatients from one centre while the ES group mostly recruited outpatients from the other centre. This may be a confounder in the difference in the number of diagnostic-quality studies between cohorts. Finally, while TTE read by trained echocardiographers was utilized as the gold standard, inter-observer variability can introduce error in the comparative gold standard, and better modalities, such as MRI, can be used for precise LVEF assessment.
In summary, artificial intelligence-assisted FoCUS performed by both novice and experienced scanners can accurately determine LVEF compared to a comprehensive TTE.
Study design
This was a prospective, multicentre, observational cohort study conducted at The Ottawa Hospital (Ottawa, Canada), University of Ottawa Heart Institute (Ottawa, Canada) and Tufts Medical Centre (Boston, United States). All adult (≥18 years of age) patients undergoing TTE as part of their routine clinical care between December 2020 and March 2022 were eligible for study inclusion. Participants were enrolled in a convenience series from inpatient (critical care, ward, and emergency department) and outpatient settings. FoCUS assessment of LVEF was performed within 48 h of TTE image acquisition (Fig. 3). There were no exclusion criteria. All participants, or their substitute decision maker, provided informed verbal consent prior to enrollment. Ethics approval was obtained from the Ottawa Health Science Network Research Ethics Board and Tufts Institutional Review Board (Supplementary Document).
Participants had their FoCUS evaluation completed by either a novice or experienced scanner. The novice scanners were fellows meeting competency for FoCUS that had completed less than 100 scans. The experienced scanners included trained cardiac sonographers. We included the experienced scanners as a control group to help understand whether differences between AI LVEF and cardiologist-quantified LVEF, when novices performed scans in real-life clinical settings, were related to image acquisition experience or to the technology itself. The EchoNous KOSMOS point-of-care ultrasound machine device (Redmond, United States) was used for all FoCUS examinations by both novice and experienced scanners. Image acquisition was completed with participants lying in the left lateral decubitus position, and if this was not possible, modified images with the patient supine or sitting upright were obtained. During bedside AI-assisted assessment, the study frames and LV tracings completed by the AI were left unmodified. Studies were excluded from the data analysis if the images were non-diagnostic and the device was unable to calculate an LVEF.
AI device description, development and assisted assessments
The EchoNous KOSMOS point-of-care ultrasound machine device (Redmond, United States) was used for the AI-assisted LVEF assessments. KOSMOS is a hand-held, 64-channel diagnostic medical ultrasound system that uses a machine-learning workflow to facilitate cardiac image acquisition and estimate LVEF. The model used in this study included a bridge tablet, where images could be reviewed and edited, and a torso probe which was used for image acquisition. The AI algorithm calculates LVEF using the modified Simpson's biplane method of disks. The device was internally validated with a study of over 1200 patients from inpatient wards, coronary care units and emergency departments as well as during cardiology consultation. A subgroup of 100 patients was used to validate the AI-assisted LVEF assessments, which found a correlation of 0.87 (p < 0.0001) with cart-based machines (https://echonous.com/clinical-benchmarking-kosmosplatform/). The device has received 510(k) U.S. Food and Drug Administration (FDA) clearance. For the purposes of this study, the AI was used only for image interpretation and not for image acquisition.
The AI workflow begins with the prospective acquisition of a five-second recording of the apical four-chamber view, then the apical two-chamber view. The device automatically identifies the end-diastolic and end-systolic frames, automatically traces the left ventricular (LV) endocardial border and calculates the LVEF using the biplane method of disks (Fig. 4, Supplementary Video). If image acquisition was not possible with the participant lying in the left lateral decubitus position, modified images with the patient supine or sitting upright were obtained. Of note, the end-diastolic and end-systolic frames and the LV tracing can be modified by the user, though, as previously noted, they were left unmodified for the purposes of this study.
Transthoracic echocardiography assessment of left ventricular ejection fraction
All participants underwent a comprehensive TTE following the recommendations of the American Society of Echocardiography 18 . Image acquisition for all TTE studies was completed by a trained cardiac sonographer using a cart-based system and interpretation was performed by an experienced level-three echocardiographer. The LVEF was determined using the biplane method of disks by manually tracing the end-diastolic and end-systolic LV endocardial borders in the A4C and A2C views.
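Both the device and the echocardiographers rely on the biplane method of disks, so a brief sketch of that computation may help; the disk diameters, long-axis lengths, and use of 20 disks below are hypothetical illustrative values, not data or code from the device or the echo laboratory.

```python
import numpy as np

def biplane_simpson_volume(a4c_diams_cm, a2c_diams_cm, length_cm):
    """LV volume (mL) by the biplane method of disks: the ventricle is sliced into
    n elliptical disks whose orthogonal diameters come from the A4C and A2C views."""
    a = np.asarray(a4c_diams_cm, float)
    b = np.asarray(a2c_diams_cm, float)
    n = len(a)
    return float(np.pi / 4 * np.sum(a * b) * (length_cm / n))

# Hypothetical traced diameters for 20 disks in end-diastole and end-systole
rng = np.random.default_rng(0)
edv = biplane_simpson_volume(rng.uniform(3.5, 4.5, 20), rng.uniform(3.4, 4.4, 20), 8.5)
esv = biplane_simpson_volume(rng.uniform(2.3, 3.0, 20), rng.uniform(2.2, 2.9, 20), 7.0)
lvef = 100 * (edv - esv) / edv
print(f"EDV {edv:.0f} mL, ESV {esv:.0f} mL, LVEF {lvef:.0f}%")
```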
Statistical analysis
Continuous variables were summarized using means and standard deviations (SD) if normally distributed or with medians and interquartile ranges (IQR) if non-normally distributed. Categorical variables were summarized using frequencies and percentages. All non-diagnostic studies were excluded from our data analyses.
Agreement between the bedside AI-assisted and TTE LVEF was analyzed using simple linear regression and intraclass correlation (ICC) for the combined cohort and within the novice and experienced scanner subgroups. A Bland-Altman plot was constructed, and the level of agreement and bias between AI-assisted and TTE LVEF were calculated. LVEF was categorized as normal (≥50%), mild (40-49.9%), moderate (30-39.9%) or severe dysfunction (<30%). The LVEF from the comprehensive TTE study was used as the standard reference. Categorical agreement was calculated with a Cohen's weighted kappa coefficient. The performance of bedside AI-assisted LVEF assessment for identification of abnormal (<50%) and severely reduced LVEF was assessed using receiver operating characteristic (ROC) analysis. The sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) for abnormal and severely reduced LVEF were determined. All reported p values are two-sided, and a value less than 0.05 was considered to indicate statistical significance. Analyses were performed using SAS software, version 9.4 (SAS Institute, North Carolina, USA). Our findings were presented as per the STARD reporting guidelines. Devices used in this study for obtaining bedside AI-assisted LVEF were provided by EchoNous.
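A minimal sketch of the categorical and ROC analyses described above, written in Python rather than the SAS software actually used; the paired LVEF values are illustrative, and the kappa weighting scheme (linear here) is an assumption, since the study reports only a "weighted kappa".

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, roc_auc_score, confusion_matrix

def grade(lvef):
    """Categorical LVEF grading used in the study."""
    if lvef >= 50: return "normal"
    if lvef >= 40: return "mild"
    if lvef >= 30: return "moderate"
    return "severe"

# Illustrative paired LVEF values (not study data)
ai  = np.array([58, 44, 62, 31, 47, 24, 61, 52, 36, 28])
tte = np.array([60, 41, 63, 29, 49, 27, 59, 54, 38, 26])

order = ["normal", "mild", "moderate", "severe"]
ai_cat  = [order.index(grade(x)) for x in ai]
tte_cat = [order.index(grade(x)) for x in tte]
kappa = cohen_kappa_score(tte_cat, ai_cat, weights="linear")

# Operating characteristics for identifying an abnormal LVEF (<50%) against TTE
truth = (tte < 50).astype(int)
auc = roc_auc_score(truth, -ai)          # lower AI-LVEF means more likely abnormal
pred = (ai < 50).astype(int)
tn, fp, fn, tp = confusion_matrix(truth, pred).ravel()
sens, spec = tp / (tp + fn), tn / (tn + fp)
ppv, npv = tp / (tp + fp), tn / (tn + fn)
print(f"kappa={kappa:.2f} AUC={auc:.2f} sens={sens:.2f} spec={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```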
Fig. 1
Fig. 1 Bland-Altman plot for LVEF assessment by bedside AI-assisted FoCUS and TTE.
Fig. 4
Fig. 4 The AI-assisted LVEF workflow using FoCUS. A Step 1: acquire five-second clip of A4C view, B Step 2: acquire five-second clip of A2C view, C Step 3: AI identifies end-diastolic and end-systolic frames, traces endocardial border and calculates LVEF.
Fig. 3
Fig. 3 AI-assisted LVEF FoCUS and TTE image acquisition. LV cavity tracings by automated AI and manually by an echocardiographer of the A4C end-diastolic (A) and end-systolic (B) frames and A2C end-diastolic (C) and end-systolic (D) frames to calculate LVEF.
Table 2 .
Comparison in the classification of LVEF severity between bedside AI-assisted FoCUS and TTE. | 4,604.6 | 2023-10-28T00:00:00.000 | [
"Medicine",
"Engineering",
"Computer Science"
] |
On Characterizations of Weighted Harmonic Bloch Mappings and Its Carleson Measure Criteria
For α > 0, several characterizations of the α-Bloch spaces of harmonic mappings are given. We also obtain several similar characterizations for the closed separable subspace. As an application, we give relations between B^α_H and Carleson's measure.
Introduction
Let Ω denote a simply connected region in the complex plane ℂ; a harmonic mapping is a complex-valued function h defined on Ω that satisfies the Laplace equation h_{w w̄} = 0, where h_{w w̄} is the mixed complex second partial derivative of the harmonic mapping h.
It is known that in the literature, a harmonic mapping h can be written in the form f + ḡ, where f and g are analytic functions. This representation is unique if we fix w₀ such that g(w₀) = 0.
Let D = {w ∈ ℂ : |w| < 1} be the well-known open unit disk in ℂ and let Hol(D) and Har(D) denote the class of analytic functions and harmonic mappings on D, respectively.
In the last few decades, the Banach spaces of analytic functions on D have been gaining a great deal of attention, but the literature on harmonic extensions of analytic spaces is still limited. Besides [1] by F. Colonna, there are papers such as [2] for the study of the operator theory on some spaces of harmonic mappings, [3] for characterizations of Bloch-type spaces of harmonic mappings, [4] for composition operators on some Banach spaces of harmonic mappings, [5] for the study of harmonic Bloch and Besov spaces, [6] for the study of harmonic Zygmund spaces, [7] for the study of harmonic ν-Bloch mappings and [8] for the study of harmonic Lipschitz-type spaces. For α > 0, the α-Bloch space of harmonic mappings B^α_H is the set of all h ∈ Har(D) for which B^α_h := sup_{w ∈ D} (1 − |w|²)^α (|h_w(w)| + |h_w̄(w)|) < ∞. The mapping h ↦ ‖h‖_{B^α_H} := |h(0)| + B^α_h defines a norm which yields a Banach space structure on B^α_H. This space is an extension to harmonic mappings of the classical α-Bloch space B^α introduced by Zhu in [9], see also [10]. We recall that f ∈ Hol(D) belongs to B^α if and only if B^α_f := sup_{w ∈ D} (1 − |w|²)^α |f′(w)| < ∞, with norm ‖f‖_{B^α} = |f(0)| + B^α_f. Thus, representing h ∈ Har(D) as f + ḡ with f, g ∈ Hol(D) and g(0) = 0, we see that h_w = f′ and h_w̄ is the conjugate of g′, so that |h_w(w)| + |h_w̄(w)| = |f′(w)| + |g′(w)|. Therefore, (1 − |w|²)^α (|h_w(w)| + |h_w̄(w)|) = (1 − |w|²)^α (|f′(w)| + |g′(w)|). Consequently, h ∈ B^α_H if and only if the functions f, g ∈ Hol(D) such that h = f + ḡ with g(0) = 0 are in the classical α-Bloch space. When α = 1, the space B^α is the (analytic) Bloch space B and the corresponding harmonic extension is denoted by B_H. The elements of B_H were first introduced in [1].
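As a purely illustrative numeric sketch (not part of the paper), the quantity defining B^α_h can be approximated on a grid for a concrete harmonic mapping; the example mapping h = f + ḡ with f(w) = log(1/(1 − w)) and g(w) = w²/2, as well as the grid resolution, are arbitrary choices made here.

```python
import numpy as np

alpha = 1.0
# Example harmonic mapping h = f + conj(g) with f(w) = log(1/(1-w)) and g(w) = w**2/2,
# so that h_w = f'(w) = 1/(1-w) and |h_wbar| = |g'(w)| = |w|.
fprime = lambda w: 1.0 / (1.0 - w)
gprime = lambda w: w

# Sample the unit disk on a polar grid and approximate the supremum defining B^alpha_h
r = np.linspace(0.0, 0.999, 500)
t = np.linspace(0.0, 2 * np.pi, 500)
R, T = np.meshgrid(r, t)
W = R * np.exp(1j * T)
quantity = (1 - np.abs(W) ** 2) ** alpha * (np.abs(fprime(W)) + np.abs(gprime(W)))
print("approximate value of B^alpha_h:", round(float(quantity.max()), 3))
```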
The little harmonic α-Bloch space B^α_{H,0} is defined as the set of all h ∈ B^α_H such that lim_{|w| → 1⁻} (1 − |w|²)^α (|h_w(w)| + |h_w̄(w)|) = 0 [2]; for more information about B^α_H and B^α_{H,0}, see [2, 3, 11] and [1]. For b ∈ D, the conformal automorphism φ_b of D is given by φ_b(w) = (b − w)/(1 − b̄w), and Green's function with logarithmic singularity at the fixed point b is defined by G(w, b) = log(1/|φ_b(w)|). For b ∈ D and 0 < δ < 1, the pseudohyperbolic disk D(b, δ) with pseudohyperbolic center b and pseudohyperbolic radius δ is given by D(b, δ) = {w ∈ D : |φ_b(w)| < δ}. The pseudohyperbolic disk D(b, δ) is a Euclidean disk with Euclidean center (1 − δ²)b/(1 − δ²|b|²) and Euclidean radius (1 − |b|²)δ/(1 − δ²|b|²) (see [12]). Now, we let λ denote the normalized Lebesgue area measure on D; since D(b, δ) ⊂ D is a Lebesgue measurable set, the normalized Euclidean area of D(b, δ) is λ(D(b, δ)) = (1 − |b|²)²δ²/(1 − δ²|b|²)². Thus, by direct computation, we have the following fact: Fact 1. Let δ ∈ (0, 1); then, for all w ∈ D(b, δ), the quantities 1 − |w|², 1 − |b|², and |1 − b̄w| are comparable, with comparability constants depending only on δ. For any w, b ∈ D, the hyperbolic distance between w and b is given by β(w, b) = (1/2) log((1 + |φ_b(w)|)/(1 − |φ_b(w)|)). Meanwhile, for R ∈ (0, ∞), the hyperbolic disk with center b and radius R is given by D_h(b, R) = {w ∈ D : β(w, b) < R}. Throughout this paper, we say that two quantities Q₁(h) and Q₂(h), depending on the harmonic mapping h, are equivalent, denoted by Q₁(h) ≈ Q₂(h), if there exists a constant C > 0 such that C⁻¹ Q₁(h) ≤ Q₂(h) ≤ C Q₁(h) for all h. In this work, we expand the study carried out in [13, 14] and [15] for harmonic mappings.
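The Euclidean description of D(b, δ) and the comparability in Fact 1 can be sanity-checked numerically; the sketch below (with an arbitrary choice of b and δ) is added here only as an illustration and assumes the standard bounds (1 − δ)/(1 + δ) ≤ (1 − |w|²)/(1 − |b|²) ≤ (1 + δ)/(1 − δ) on D(b, δ).

```python
import numpy as np

rng = np.random.default_rng(1)
b, delta = 0.4 + 0.3j, 0.6

phi_b = lambda w: (b - w) / (1 - np.conj(b) * w)               # conformal automorphism of D
center = (1 - delta**2) * b / (1 - delta**2 * abs(b)**2)        # Euclidean center of D(b, delta)
radius = (1 - abs(b)**2) * delta / (1 - delta**2 * abs(b)**2)   # Euclidean radius of D(b, delta)

# Random points of D: membership in D(b, delta) should match the Euclidean description
w = rng.uniform(-1, 1, 20000) + 1j * rng.uniform(-1, 1, 20000)
w = w[np.abs(w) < 1]
in_pseudo = np.abs(phi_b(w)) < delta
in_euclid = np.abs(w - center) < radius
print("points where the two descriptions disagree:", int((in_pseudo != in_euclid).sum()))

# Comparability of 1-|w|^2 and 1-|b|^2 on D(b, delta), as in Fact 1
ratio = (1 - np.abs(w[in_pseudo]) ** 2) / (1 - abs(b) ** 2)
print("observed ratio range:", round(float(ratio.min()), 3), round(float(ratio.max()), 3))
print("theoretical bounds  :", round((1 - delta) / (1 + delta), 3), round((1 + delta) / (1 - delta), 3))
```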
Some Integral Criteria for Harmonic α-Bloch Mappings
The following lemma needed in the prove of the main theorem of this section (see Lemma 3 in [15]).
The following theorem is the main theorem of this section.
Remark 3. The above characterizations of harmonic α-Bloch functions are an extension of Theorem 1 proved by Zhao in [15]. We extend the known characterizations of the α-Bloch space of analytic functions to the harmonic setting, using the conditions on the analytic functions f and g and their subharmonicity on the unit disk. Therefore, we use the same proof technique as in the proof of Theorem 1 in [15].
Remark 5.
When h is an analytic function, B^α_{H,0} reduces to the little α-Bloch space B^α_0, and Theorem 4 was proved by Zhao in [15].
Carleson's Measures and Harmonic α-Bloch Mappings
Let μ be a positive measure on D. For a subarc J ⊆ ∂D, we let S(J) be the Carleson box based on J; that is, S(J) = {w ∈ D : 1 − |J| ≤ |w| < 1 and w/|w| ∈ J}, where |J| denotes the normalized arc length of J. For J = ∂D, we let S(J) = D. Let s > 0; then, a positive Borel measure μ on D is called an s-Carleson measure if sup_{J ⊆ ∂D} μ(S(J))/|J|^s < ∞. Note that s = 1 gives the classical Carleson measure. We say that μ is a vanishing s-Carleson measure if lim_{|J| → 0} μ(S(J))/|J|^s = 0. As is well known, the Berezin transform of a positive Borel measure μ on D is bounded if and only if μ is a Carleson measure (see, for example, [13, 14]). Then, for any δ ∈ (0, 1), we say dμ is a Carleson measure if sup_{b ∈ D} μ(D(b, δ))/λ(D(b, δ)) < ∞, where λ is the normalized Lebesgue area measure on D. Moreover, we say dμ is a compact Carleson measure if lim_{|b| → 1⁻} μ(D(b, δ))/λ(D(b, δ)) = 0. For all α ∈ (0, ∞), a positive measure μ on D is a bounded α-Carleson measure if and only if sup_{b ∈ D} ∫_D ((1 − |b|²)/|1 − b̄w|²)^α dμ(w) < ∞. For all α ∈ (−1, ∞) and p ∈ [0, ∞), we denote by A^{p,α}_H(D) the weighted harmonic Bergman space, that is, the set of all h ∈ Har(D) for which ∫_D |h(w)|^p dλ_α(w) < ∞, where dλ_α(w) = (1 − |w|²)^α dλ(w).
Then, F_b(0) = 0 and, by [2], we obtain: Now, setting w = φ_b(ζ), we have: which means that: That is, (b) holds. Secondly, (b) ⇒ (c). Suppose that: Then for any r ≥ 0: Finally, (c) ⇒ (a). For some C > 0, let: Then: For any h ∈ Har(D), since f, g ∈ Hol(D) are such that h = f + ḡ with g(0) = 0, h has the Taylor series: Hence, by a simple calculation for the Taylor series of F_b(w), which converges uniformly on D_δ = {w : |w| < δ}, δ ∈ (0, 1), we have: Now, we let δ → 1; then: Similarly, for δ → 1, we see that: which means that: Since log(1/|w|) ≲ 1 − |w|² when |w| > 1/4: At the same time: So, we see that (b) holds.
The other direction comes easily from the inequality. Secondly, (i) ⇔ (iii). For any f ∈ Hol(D), since B^α_0 is the closure in B^α of the polynomials, there exists a polynomial p such that (see [17]): Also, since B^α_{H,0} is the closure in B^α_H of the polynomials, there exists a polynomial P = p₁ + p₂, where p₁, p₂ ∈ B^α, such that: Hence: Furthermore: Now, set δ = 1 − ε, where ε ∈ (0, 1), in (77); we get δ₁ ∈ (0, 1) so that for D \ D(0, δ₁): So, we see that (iii) holds.
After that, (iii) ⇒ (i). Assuming that dμ₂ is a compact p-Carleson measure, as in the proof of Theorem 7, we have: So, we have h ∈ B^α_{H,0}.
"Mathematics"
] |
Advanced technique of myocardial no-reflow quantification using indocyanine green
The post-ischemic no-reflow phenomenon after primary percutaneous coronary intervention (PCI) is observed in more than half of subjects and is defined as the absence or marked slowing of distal coronary blood flow despite removal of the arterial occlusion. To visualize no-reflow in experimental studies, the fluorescent dye thioflavin S (ThS) is often used, which allows for the estimation of the size of microvascular obstruction by staining the endothelial lining of vessels. Based on the ability of indocyanine green (ICG) to be retained in tissues with increased vascular permeability, we proposed the possibility of using it to assess not only the severity of microvascular obstruction but also the degree of vascular permeability in the zone of myocardial infarction. The aim of our study was to investigate the possibility of using ICG to visualize no-reflow zones after ischemia-reperfusion injury of rat myocardium. Using dual ICG and ThS staining and the FLUM multispectral fluorescence organoscope, we recorded ICG and ThS fluorescence within the zone of myocardial necrosis, identifying ICG-negative zones whose size correlated with the size of the no-reflow zones detected by ThS. It is also shown that the contrast change between the no-reflow zone and nonischemic myocardium reflects the severity of blood stasis, indicating that ICG-negative zones are no-reflow zones. The described method can be an addition or alternative to the traditional method of measuring the size of no-reflow zones in the experiment.
Introduction
The post-ischemic no-reflow phenomenon is the absence of complete tissue perfusion at the microcirculatory level and occurs after removal of the cause of artery occlusion [1][2][3][4][5]. The incidence of angiographic no-reflow (TIMI 0-1 blood flow on angiography) ranges from 10 to 40% in subjects undergoing primary percutaneous coronary intervention [6]. Cardiac magnetic resonance imaging with gadolinium-based contrast detects no-reflow in 54.9% of subjects, where it is defined by a contrast-enhanced infarct core [4]. The occurrence of no-reflow reduces the infarct-limiting efficacy of early revascularization, worsens prognosis, and increases the incidence of early myocardial infarction complications and postinfarction scar size [4,5,7,8]. Two main mechanisms for the development of no-reflow are commonly recognized: ischemia-reperfusion injury (IRI) and distal embolization by thrombus and atherosclerotic plaque fragments [9]. Markers such as carbon black particles, microspheres, and thioflavin S (ThS) are mainly used to visualize no-reflow in experimental studies with IRI [4]. The vital fluorescent dye ThS has been used since 1974 and remains the main fluorescent dye for estimating the size of the no-reflow zone [1]. It stains the entire endothelium and allows visualization of no-reflow zones, or ThS-negative zones, in which circulation is absent or severely reduced due to microvascular obstruction (MVO). ThS distribution in the organ is observed on its sections under ultraviolet excitation, where hypofluorescent or nonfluorescent dark areas can be found on the background of the fluorescent surface of the section [1,4,10].
The inconvenience of using ThS is that this dye is not stable in the light or at room temperature; therefore, dyed tissue samples should be stored in the dark at 4°C. In addition, ThS is an irritant, so it is necessary to wear appropriate personal protective equipment when working with it. In this context, the search for other fluorophores for no-reflow visualization is a relevant issue. There are conditions that allow us to consider the use of indocyanine green (ICG) for this purpose. Visualization in the near-infrared range with ICG has several advantages over fluorescence in the visible range, namely: greater depth of penetration, weak dependence on ambient light, and higher contrast due to the absence of tissue autofluorescence [11]. In addition, ICG is FDA approved for clinical use and this method is not harmful to either the animal or the operator.
The method of no-reflow visualization in the experiment with ICG is based on the ability of the fluorophore to accumulate in the area of tissue injury and necrosis [11][12][13]. We have previously shown that ICG injected in the first minutes of reperfusion ("early ICG") after 30 min of ischemia in rat myocardium remains in the infarct area and its fluorescence can be observed in vivo, on the surface of the heart, for two hours, and ex vivo in heart sections at the end of 2 h of reperfusion [13]. Early ICG fluorescence in rat heart sections was visible over the entire surface of the necrosis zone, but the no-reflow zone was not visualized under these conditions. The absence of no-reflow zones (immediate no-reflow) in the first minutes of reperfusion after brief ischemia (30 minutes) has also been demonstrated with ThS in dogs [2] and rabbits [10]. This may be explained by the fact that no-reflow zones in rat myocardium after 30 min of ischemia do not appear until tens of minutes after the onset of reperfusion and expand with time (delayed no-reflow). We hypothesized that delayed administration of ICG ("late ICG") injected after 90 min of reperfusion would allow simultaneous visualization of no-reflow zones and areas of increased vascular permeability throughout the myocardium. It is expected that the intensity of ICG fluorescence will depend on the degree of vascular permeability, i.e. the area with the brightest ICG fluorescence will indicate the presence of the highest vascular permeability. The contrast between areas of normal and increased vascular permeability and ICG accumulation can be seen 20-30 min from the moment of intravenous fluorophore injection, the time necessary for clearance of ICG from the microcirculatory bed by the liver and its retention in the interstitial space of the injured tissue.
To test the feasibility of using ICG for visualization of no-reflow, the current study was designed to answer the following questions: 1) do the no-reflow zones detected by ICG correspond to those visualized by ThS, which is the standard fluorophore for this purpose; 2) how does the zone of ICG fluorescence and its intensity correlate with the zone of necrosis; 3) is it possible to detect ICG fluorescence by delaying intravenous ICG administration for 24 hours? Therefore, the aim of the present study was to investigate the imaging characteristics of myocardial infarction using ICG for estimation of the severity of the no-reflow phenomenon in comparison with ThS.
Materials and methods
The experiments were performed on male Wistar rats weighing 250-300 g. The animals were fed standard laboratory rodent chow and given water ad libitum. All experiments were conducted in accordance with the Guide for the Care and Use of Laboratory Animals and approved by the local ethics committee.
An in vivo rat model of regional myocardial IRI was used in all experiments. Figure 1 shows the experimental study protocol.
1. I-30'/R-2 h, ICG-90' (n = 6): 30 min of ischemia and 120 min of reperfusion were produced; a bolus of ICG solution was infused intravenously for 1 min at the 90th min of reperfusion. A bolus of ThS was infused intravenously 15 sec before excision of the heart.
Preparation of ICG solutions
ICG (Pulsion Medical Systems AG, Germany) was dissolved in distilled water at a final ICG concentration of 0.25 mg/ml. NaCl was added to the ICG solution to obtain a final NaCl concentration of 0.9%.
In vivo rat model of myocardial IRI
Male Wistar rats were anesthetized with isoflurane 2-3%. Body temperature was maintained at 37.0 ± 0.5 °C (ATC1000-220, World Precision Instruments, Inc., USA). The rats were tracheostomized and mechanically ventilated (CWE-SAR-830/AP, World Precision Instruments, Inc., USA) with 60% oxygen (respiratory rate 60/min, tidal volume of 3 mL/100 g body weight). Ventilation was adjusted by repeated arterial blood gas analyses throughout the experiment (ABL80FLEX, Radiometer, Denmark). Femoral vein catheterization was performed for infusions of the dyes. The right common carotid artery was cannulated with a polyethylene tube (PE-50, Intramedic, USA) for blood sample withdrawal and measurement of arterial blood pressure (BP) and heart rate (HR). The arterial cannula was connected to a pressure transducer (Baxter, USA).
During the experiments, animals had continuous monitoring of hemodynamic parameters using PhysExp software (LLC "Kardioprotekt", Russia). For induction of regional IRI the chest was opened by incision at the fourth intercostal space and the ribs were spread to expose the heart. The pericardium was opened, and a 6-0 polypropylene non-traumatic suture was passed around the major branch of the left coronary artery, about 2 mm from its origin [14]. The ends of the suture were passed through an occluder (a small polyethylene tube, ∼6-7 cm, PE-90, Intramedic, USA) and exteriorized. After the end of the surgical procedures and a 30 min stabilization period, myocardial ischemia was initiated. Reversible myocardial ischemia was produced by shifting the occluder down along the ligatures and placing a surgical clamp on the occluder to prevent its shifting back. Registration of hemodynamic parameters was performed just before the 30 min occlusion, at 5 and 15 min after the onset of occlusion, at the beginning of reperfusion (at the 5th min) and then every 30 min until the end of the experiment.
Ex vivo visualization of the IRI
At the end of the experiment (at 120 min or 24 h of reperfusion) the left coronary artery was reoccluded, followed by administration of 2.5 mL of 2.5% Evans blue (Merck, USA) via the femoral vein for identification of the area at risk (AAR). The hearts were excised for photographing of the outer surface, followed by registration of the fluorescence area and intensity of ICG and ThS in areas of IRI and the surrounding undamaged myocardium. The hearts were then cut into five 2 mm thick sections parallel to the atrioventricular groove. The basal surface of each section was photographed using a digital camera. The sections were immersed in a 1% solution of 2,3,5-triphenyltetrazolium chloride (TTC; ICN Pharmaceuticals, USA) at 37 °C (pH 7.4) for 15 min and photographed again for identification of the infarct area and registration of the fluorescence area and intensity of ICG and ThS. The images were analyzed using ImageJ software (Bethesda, MD, USA). The AAR (Evans-negative sites) was expressed as a percentage of the whole section, and the infarct area (TTC-negative sites) was expressed as a percentage of the AAR. Values of AAR and infarct area for each heart were obtained by summarizing data for the sections and calculating mean values.
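A schematic of the planimetric calculation on a single section, assuming binary pixel masks have already been segmented from the photographs; the toy masks below are illustrative, and the whole image is treated as the section for simplicity.

```python
import numpy as np

def area_percentages(evans_negative_mask, ttc_negative_mask):
    """Planimetric quantification on one heart section.

    evans_negative_mask : boolean pixel mask of the area at risk (Evans-blue-negative)
    ttc_negative_mask   : boolean pixel mask of the infarct (TTC-negative)
    Returns AAR as % of the whole section and infarct as % of the AAR.
    """
    section_px = evans_negative_mask.size
    aar_px = int(evans_negative_mask.sum())
    infarct_px = int((ttc_negative_mask & evans_negative_mask).sum())
    aar_pct = 100.0 * aar_px / section_px
    infarct_pct = 100.0 * infarct_px / aar_px if aar_px else 0.0
    return aar_pct, infarct_pct

# Toy 100x100 section: AAR in the lower-right quadrant, infarct in its core
mask_aar = np.zeros((100, 100), bool); mask_aar[50:, 50:] = True
mask_inf = np.zeros((100, 100), bool); mask_inf[65:90, 65:90] = True
print(area_percentages(mask_aar, mask_inf))
```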
Methods of fluorescence registration
Semiconductor laser "Diolan" (NPP VOLO, Russia) with a wavelength of 808 nm, power 0.5-5 W and quartz fiber optic light guide was used to excite ICG fluorescence [15].A lamp illuminator was used for ThS, in which an HXP 120VIS short-arc mercury lamp Osram, 120 W was used as a light source, radiation selection is carried out by switchable light filters (in this case, a bandpass filter with a central wavelength of 390 nm, width 40 nm, FF01-390/ 40-25) and a liquid lightguide for radiation delivery [16].To improve the uniformity of illumination, the light guides are equipped with collector lens (illumination unevenness ±5%).The power density of the exciting radiation was about 15 mW/cm 2 (390 nm) and 50 mW/cm 2 (808 nm).At this excitation intensity, no damage to the samples was observed.Image registration was carried out using a multispectral imaging system is based on a high-sensitivity RGB-television array (ICX285AQ single-array progressive-scanning CCD detector of 2/3-inch format (SONY), maximum frame frequency is 14 Hz with 1280 × 1024 resolution elements, intrinsic noise equals 10 e-).In front of the camera, which is equipped with a Computar M1614-MP2 megapixel lens (Japan) (f = 16 mm, F/1.4), detector filters BLP01-442R-25 and NF03-808E-25 (Semrock, USA) were installed to block exciting radiation 390 nm and 808 nm, respectively.The dimensions of the field of view in these studies were 25 × 25 mm, which made it possible to simultaneously record all 5 sections of the myocardium, which were in a Petri dish filled with a buffer solution.To stabilize the position of the sections and eliminate glare, the sections were covered with a microscope slide.The exposure time did not exceed 70 ms, the s/n ratio was approximately 100.
Comparison of ICG and ThS fluorescence intensity in the area of myocardial IRI
The obtained images with ICG and ThS fluorescence were evaluated using Image-Pro Plus software (Rockville, MD, USA). To characterize the distribution of ICG and ThS fluorescence intensity, a grid was drawn on the images of the AAR of the second and third sections from the apex of the heart, with an equal number of sectors and grid cells, limited by the boundaries of the AAR. The grid formed in this way was then transferred to images of the same sections stained with 2,3,5-triphenyltetrazolium chloride (TTC) and to the ICG and ThS fluorescence images.
The grid allows comparison of ICG and ThS fluorescence intensities in the same section, as well as localization of the borders of increased vascular permeability and their relation to the borders of the necrosis and no-reflow zones. The average fluorescence intensity within a selected region of interest (ROI) was then calculated and expressed in arbitrary units (a.u.). The average fluorescence intensity of the ROI in any grid cell within the AAR was compared with the average fluorescence intensity in the corresponding cell of the reference sector in the interventricular septum, equidistant from the AAR borders.
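A minimal sketch of this grid-based averaging, assuming a NumPy array as a stand-in for the exported image data (the actual analysis was performed in Image-Pro Plus; the grid shape and background value here are hypothetical):

```python
import numpy as np

# Illustrative sketch: divide a rectangular ROI of a fluorescence image into a grid
# and compute the background-subtracted mean intensity per cell.
rng = np.random.default_rng(0)
roi = rng.uniform(50, 200, size=(120, 120))      # stand-in for the AAR region of one section
background = 40.0                                 # mean of points sampled around the section

rows, cols = 3, 4                                 # e.g. 3 myocardial layers x 4 sectors
cell_h, cell_w = roi.shape[0] // rows, roi.shape[1] // cols
cell_means = np.empty((rows, cols))
for r in range(rows):
    for c in range(cols):
        cell = roi[r * cell_h:(r + 1) * cell_h, c * cell_w:(c + 1) * cell_w]
        cell_means[r, c] = cell.mean() - background
print(cell_means.round(1))
```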
Quantitative comparison was performed using a contrast parameter computed from ROI_aar, the mean fluorescence intensity in a grid cell in the AAR minus the mean background fluorescence intensity, and ROI_ref, the mean fluorescence intensity in a grid cell in the reference zone minus the mean background fluorescence intensity. The mean background fluorescence intensity was measured at five randomly selected points around the measured section. Additionally, to compare the fluorescence intensity of ICG and ThS in different layers of the left ventricular myocardium, a graphical tool was used: the line intensity scan function in Image-Pro Plus software (Media Cybernetics, USA). Graphs of ICG and ThS fluorescence intensity along the scan line and the reference line were plotted using the line profile tool.
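The displayed formula itself appears to have been lost during text extraction. One definition consistent with the sign behaviour reported in the Results (negative contrast where the AAR cell is dimmer than the reference cell, positive where it is brighter) is the Michelson-type contrast below; this is an assumption, not necessarily the authors' exact formula:

```latex
K = \frac{\mathrm{ROI}_{\mathrm{aar}} - \mathrm{ROI}_{\mathrm{ref}}}{\mathrm{ROI}_{\mathrm{aar}} + \mathrm{ROI}_{\mathrm{ref}}}
```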
Comparison of no-reflow area sizes from ICG and ThS fluorescence images
To calculate the AAR, necrosis, and no-reflow zones, planimetric analysis of the cardiac section images was performed in the ImageJ software. In the ICG fluorescence images, the boundary of the no-reflow zone was taken to be the dark edge of the stripe of peak ICG fluorescence, along which the no-reflow area was delineated. Likewise, the borders of the nonfluorescent areas in the ThS fluorescence images were traced to calculate the no-reflow zone.
Myocardial histology
Heart samples were excised and fixed in buffered 10% formaldehyde, and histology slides were prepared by Mallory's trichrome-stain method. Morphometry of 5-µm sections was used to quantify the areas of stasis (the areas occupied by erythrocytes). The areas of stasis (the target area) in the intramural layer of the area at risk (total area 3 mm²) were detected using the ImageJ threshold tool. The percentage of the target area was calculated using the formula: (target area per section / total area) × 100.
Statistical analysis
Statistical analysis was performed using the nonparametric Mann-Whitney U test, with P values less than 0.05 considered to be statistically significant.
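For reference, a minimal example of this test on hypothetical measurements, using SciPy's implementation:

```python
from scipy.stats import mannwhitneyu

# Illustrative sketch with made-up values: compare no-reflow areas (% of AAR)
# obtained with two fluorophores using the two-sided Mann-Whitney U test.
icg = [22.1, 25.4, 19.8, 27.3, 24.0, 21.5, 23.8]
ths = [26.9, 28.2, 24.5, 30.1, 27.7, 25.0, 28.8]
stat, p = mannwhitneyu(icg, ths, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")  # p < 0.05 would be considered significant
```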
Results
No differences in hemodynamic parameters were observed between the three groups, and no significant hemodynamic changes were observed (data not shown).
Effect of myocardial TTC staining on ThS fluorescence intensity
To examine the relation of the nonfluorescent (no-reflow) areas to the borders of the risk zone and the necrosis zone in hearts stained with ICG and ThS, additional double staining with Evans blue (EB) and TTC was performed. Figure 2(c) shows that after this additional double staining (EB and TTC), ThS fluorescence intensity decreased compared with TTC(-) sections. However, this decrease allowed us to see the stripe of peak ThS fluorescence along the border of the no-reflow zone (Fig. 2(c), 3(d), 4(a)). This finding indicates the presence of increased vascular permeability at the border of the no-reflow zone. The same stripe of peak ICG fluorescence was seen along the border of the no-reflow zone in near-infrared light. These observations suggest that a portion of the ThS accumulates at the border of the no-reflow zone by a mechanism similar to that of ICG.
Imaging of the early stage of no-reflow expansion by double ICG and ThS staining
Figure 3 shows that injection of ICG, which has a short plasma half-life, in the first minutes of reperfusion allowed us to capture the initial stage of no-reflow development, when vascular permeability is increased in the center of the AAR (Fig. 3(a)-3(c)) and there is no microvascular obstruction (MVO) (Fig. 3(c)). At this moment, ICG accumulation is most intense in the subepicardial and intramural layers of the inner sectors (S2 and S3), where the drastic increase in vascular permeability occurs. From Fig. 3(e) it is evident that the boundaries of the ICG retention zone are wider than the borders of the necrosis zone by 0.5-1.0 mm; however, where the ICG retention zone overlaps the necrosis zone, the intensity of ICG fluorescence increases greatly (Fig. 3). The ICG and ThS fluorescence images (Fig. 3(c), 3(d)) show that the sites of increased vascular permeability at the beginning of reperfusion became the MVO zones after 2 hours (Fig. 3(d)), as indicated by the presence of the ThS(-) zone in the corresponding grid cells (sectors S2 and S3). Figure 3(e) shows that the maximum ICG fluorescence intensity overlaps with the no-reflow zone. Thus, double ICG and ThS staining with early ICG injection can be useful, as it allows evaluation not only of the initial stage of no-reflow expansion but also visualization of MVO, including tracking the correlation between MVO severity at the end of reperfusion and the severity of increased vascular permeability at its beginning.
Comparison of the size of no-reflow zones obtained with two fluorophores: ICG and ThS after 2 h of reperfusion
The main objective of the study was to evaluate the possibility of using ICG to visualize no-reflow zones and to compare the sizes of the no-reflow zones obtained with the two fluorophores, ICG and ThS, in the same animal. For this purpose, rats were injected intravenously with ICG solution at the 90th min of reperfusion and with ThS solution 30 min later (120 min after the onset of reperfusion). The ICG signal was detectable macroscopically within the zones of myocardial necrosis (Fig. 4(a), 4(b)), within which ICG-negative zones were identified. The sizes of the ICG-negative zones were significantly smaller, by 8.7% [3.16-12.98], than the sizes of the no-reflow zones detected at 120 min with ThS (Fig. 4(b)). The boundaries of the ICG-negative zones lie within the boundaries of the ThS-negative zones, and the sizes of the no-reflow zones obtained with ICG correlate with the sizes of the no-reflow zones detected with ThS (Fig. 4(c)). These observations suggest that the ICG-negative zones represent the no-reflow zone that became larger in the 30 min after ICG injection (90'ICG) and was then detected with ThS (120'ThS).
Comparison of no-reflow zone sizes obtained with ICG and ThS after 24 h of reperfusion
To assess the ability to visualize no-reflow after 24 h of reperfusion with ICG, we performed 30 min of regional ischemia and 24 h of reperfusion (I-30'/P-24 h). Animals were reanesthetized 24 h after IRI to visualize no-reflow and measure infarct size. Figure 5(a)-5(c) shows images of double fluorescence (ICG and ThS) and a TTC-stained section one day after the ischemic episode.
Figures 5(a) and 5(b) show ICG and ThS fluorescence within the zone of myocardial necrosis, within which ICG-negative zones are identified; their size is not significantly different from the size of the no-reflow zones identified with ThS, indicating that the ICG-negative zones are no-reflow zones (Fig. 5(d)).
Comparison of ICG and ThS fluorescence intensity in the area of myocardial infarction
ICG and ThS fluorescence intensities were compared in the intramural layer of the myocardium of the apical and middle cross-sections of the heart (Fig. 6(a), 6(b)), in whose central sectors (S2 and S3) the MVO zone was often observed. To compare the fluorescence intensities of the two fluorophores, we used the contrast parameter, which reflects the ratio between the fluorescence in the examined (injured) myocardial zone and the reference (nonischemic) zone. This comparison showed a significant difference between the fluorophores, with the ICG fluorescence contrast being higher than the ThS fluorescence contrast in the risk zone (Fig. 6). Negative contrast values were observed in the zone of negative ThS fluorescence corresponding to the MVO zone. This was observed at both 2 h (Fig. 6(a), 6(b)) and 24 h of reperfusion (Fig. 6(c), 6(d)). Contrary to the central sectors, the contrast was positive in the border sectors. For ThS fluorescence in the two sections after 2 h of reperfusion, significant differences were obtained between the contrasts of the border and central sectors, with the negative ThS fluorescence contrast being more pronounced in the middle section than in the apical section. This pattern of ThS fluorescence persisted after 24 h. Contrary to ThS fluorescence, the intensity of ICG fluorescence in the risk zone was higher than in the reference zone in all sectors, including the central sectors; therefore, the contrast in the central sectors of the intramural layer was mostly positive at 2 h of reperfusion and at 24 h. The distribution of zones with high and low ICG fluorescence intensity corresponded to the character of ThS fluorescence, i.e., ICG fluorescence intensity was high in the border sectors and relatively low in the central sectors. This character of the ICG fluorescence intensity distribution allows us to delineate the no-reflow zone and measure its area.
Comparison of ICG fluorescence intensity with severity of blood stasis in the area of myocardial necrosis
Marked red blood cell stasis was found in ICG- and ThS-negative areas. The magnitude of the negative ICG fluorescence contrast depended on the severity of blood stasis in the infarct zone after 30 min of myocardial ischemia and 2 h and 24 h of reperfusion (Fig. 7). We quantified the severity of red blood cell stasis in the intramural layers of rat hearts after 2 h of reperfusion by measuring the area occupied by erythrocytes in Mallory-stained myocardial sections and found an inverse relationship between the value of the contrast and the severity of blood stasis. The median percentage of area occupied by erythrocytes in the intramural sectors (S1-S4) of the apical and middle sections was 3.4 [2.3-8.7]. Figure 7(a) shows the relation between blood stasis and delayed flow impairment: the higher the percentage of area occupied by erythrocytes per unit section area, the lower the ICG fluorescence contrast value in that myocardial zone. This observation was also evident in the 24 h reperfusion experiments (Fig. 7(b)).
Discussion
We report for the first time the benefits of using ICG for estimating the size of the no-reflow zone in myocardial ischemia/reperfusion injury in a rat model. We compared ICG with the frequently used ThS by dual-fluorescence staining of the same heart, with direct macroscopic examination using a fluorescence camera. Additionally, the risk zone and infarct size were determined with blue dye and triphenyltetrazolium chloride (four stains in one heart). Comparative analysis of macroscopic images (magnification ×1) of ICG and ThS fluorescence intensity, using a grid dividing the IRI zone into cells (squares), showed that ICG and ThS have a similar character of fluorescence intensity distribution, indicating similar mechanisms of fluorophore accumulation in the IRI zone. This is supported by the correlation in the sizes of the no-reflow areas. In contrast to ThS, ICG-stained myocardial sections can be stored at room temperature, and ICG fluorescence intensity does not decrease when sections are incubated in TTC solution. This property of ICG allows us to use it instead of ThS to measure the no-reflow zone and to compare areas of different fluorescence intensity with areas of necrosis, blood stasis, and other histological images of the same heart.
The ability of fluorophores to bind to plasma proteins, including albumin, in the blood flow allows their use as markers of increased vascular permeability, since intact vascular endothelium is "impermeable" to albumin [17,18]. Extravasation of proteins in inflammation-induced tissue edema occurs by paracellular transport [17] and appears to be the primary mechanism of tissue retention of any dye that has been bound to albumin in the blood flow or preconjugated prior to intravenous administration (the enhanced permeability and retention effect). For example, Evans blue used in the Miles test [18,19] is carried by albumin in the blood flow. FITC covalently bound to albumin (FITC-albumin) is used for the same purpose [20]. ICG has been used as a marker of increased vascular permeability in tissue injury and inflammation in several experimental [13,21-23] and clinical studies [24-26]. ICG is transported in plasma with high- and low-density lipoproteins and albumin, allowing it to be used for fluorescence-based localization and quantification of areas of increased vascular permeability [24-26]. Apparently, ThS is able to penetrate the damaged myocardium by the same mechanism as ICG. This assumption can be made based on the coincidence of the ThS fluorescence peaks with the ICG fluorescence peaks at the edges of the no-reflow zone. Another thioflavin, thioflavin T, is known to bind to albumin [27]. The structure of the ThS molecule contains an acidic SO₃⁻ group [28], the presence of which suggests the ability of ThS to form ionic bonds with plasma proteins.
The ability of ICG to persist in tissues with increased vascular permeability allows us to simultaneously assess both the borders of no-reflow and the degree of increased vascular permeability in the IRI zone in a single image. For ThS, this property becomes evident when its fluorescence intensity is "quenched" after incubation in TTC solution. It should be noted that the interval from the time of intravenous injection to the time of heart excision was different for the two fluorophores. For ICG, this interval was 30 min, to obtain an almost maximal reduction of its background fluorescence in the intact myocardium; for ThS, it was 15 s, before the onset of the microcirculatory phase and staining of the whole heart. The degree of accumulation of a substance in tissues depends directly on the duration of its circulation in the blood flow [29]. The longer circulation time in blood and the absence of ICG retention in the reference zone can explain the higher ICG fluorescence contrast throughout the zone of IRI.
With the use of ICG, injured areas of myocardium that are perfused by blood appear bright, and areas that are not perfused appear dark (ICG-negative), especially in the central sectors and the intramural layer; this is a diagnostic sign of no-reflow and is explained by the presence of MVO. The complex mechanism of MVO has been well studied [3-5] and includes the gradual appearance of blood stasis in the no-reflow zone, which is visible in native myocardial sections and in myocardium stained with Mallory trichrome or hematoxylin-eosin [30]. In the present study, we used for the first time a technique that allows us to observe the dynamics of no-reflow development, owing to early ICG and late ThS injection with simultaneous registration of the fluorescence of the two fluorophores. Using this technique, we confirmed the literature data on the absence of visible MVO in the first minutes of reperfusion after 30 min of ischemia [2,10] and showed the possibilities of dual fluorescence staining of the heart with ICG and ThS for comparing the early manifestations of increased vascular permeability with the late manifestations of no-reflow. Figure 3 shows that with early ICG administration (at the beginning of reperfusion), bright ICG fluorescence was observed in the central sectors, indicating increased vascular permeability and an absence of blood stasis. After 2 and 24 hours, a decrease in ICG and ThS fluorescence intensity was observed in the intramural layer. Our data suggest that blood stasis is the main cause of negative ICG fluorescence contrast after 30 min of ischemia in rats. It is known that as ischemia in rats is prolonged, its consequences appear earlier, and in the first minutes of reperfusion there are zones where blood flow is practically not restored, so stasis cannot develop in such zones owing to the absence of blood flow during the entire period of reperfusion [10]. A similar observation has been made in dogs after 90 min of ischemia, in which "true" or "immediate" no-reflow developed in the central zones of myocardial ischemia that had no collateral blood flow during coronary artery occlusion [1,2]. Consequently, the presence of blood stasis in the IRI zone is a sign of temporary restoration of blood flow after relatively brief ischemia, with gradual development of MVO.
Previously, using a 1 h myocardial ischemia mouse model with double histochemical fluorescence staining with Evans blue and ThS, it was shown that increased vascular permeability in the no-reflow zone persists for up to 3 days [31]. The authors made the interesting observation that 1 day after 1 h of ischemia, intravenously injected Evans blue accumulated in the center of the no-reflow zone (intramural layer). Of note, Evans blue was administered intravenously 3 h before the mice were euthanized, allowing the dye to accumulate in significant amounts, indicating that little blood or plasma flow remains in the MVO zone. Our data on ICG accumulation in the no-reflow zone after 30 min of ischemia complement the authors' findings by showing the presence of increased vascular permeability at the border of the no-reflow zone and in the no-reflow zone itself at 24 h, as detected by ThS. The penetration of ICG into the no-reflow zone is evident when analyzing the contrast of ICG and ThS fluorescence, which shows positive ICG contrast in sectors where ThS has negative contrast. However, the zones of maximum blood stasis correspond to zones of negative ICG fluorescence contrast, indicating extremely low residual blood or plasma flow in the no-reflow zone.
In contrast to the acute experiments with 2 h of reperfusion, the intensity of ICG accumulation decreases after 24 h of reperfusion, which can be explained by a gradual decrease in the degree of increased vascular permeability. These data do not agree with those obtained using Evans blue in mice [31], where the authors showed an increase in vascular permeability at 24 hours of reperfusion after 60 min of ischemia. These differences may be explained by differences in the dyes used, the duration of ischemia, methodological approaches, and species differences. Data from the present study and from Gao et al. (2017) confirm the presence of increased vascular permeability at 24 h [31], which allowed us to visualize no-reflow at 24 h using ICG. Sufficient contrast was observed between the boundaries of the no-reflow zone, the MVO zone, and the reference zone, allowing us to clearly distinguish the boundaries of the no-reflow zone on the cardiac section as well as the IRI zone within healthy tissue on the epicardial surface of the heart. Previously, in the same in vivo model, in acute experiments on rats, we demonstrated the possibility of intraoperative IRI visualization from the epicardial surface of the heart [13]. The ICG fluorescence in the IRI area detected in the present study after 24 h of reperfusion can be exploited for intraoperative visualization of ischemia-reperfusion myocardial damage that occurred one day earlier.
Conclusion
Thus, ICG can be used for simultaneous visualization of no-reflow and assessment of the severity of the processes involved in its formation: increased vascular permeability and MVO. At 30 min after a single intravenous injection of ICG at a dose of 1 mg/kg, the no-reflow zones become visible in NIR light due to the absence of ICG retention and can be visualized microscopically on myocardial sections. In the surrounding myocardium, where blood flow is preserved, ICG accumulation occurs due to increased vascular permeability, creating a stripe of bright ICG fluorescence (ICG-positive zones) around the no-reflow zone. In contrast to ThS, ICG-stained myocardial sections can be stored at room temperature, and ICG fluorescence intensity is not reduced by incubation of the sections in TTC solution. This property of ICG allows us to use it instead of ThS to measure the no-reflow zone and to compare areas of ICG fluorescence with areas of necrosis, blood stasis, and different histological images of the same heart. Double fluorescence staining with early ICG injection and late ThS injection makes it possible to observe the dynamics of the transition from the processes of increased vascular permeability (with ICG) to the processes of MVO formation (with ThS).
Fig. 2 .
Fig. 2. Effect of myocardial EB and TTC staining on ThS fluorescence intensity. (a) Selection of rat heart sections for measurement of ICG and ThS fluorescence intensity in the no-reflow zone. Numbers 1 to 5 indicate the transverse sections from the apex to the base of the rat heart. Arrows indicate the 2nd (apical) and 3rd (middle) sections. (b) Fluorescence of ThS and ICG, respectively. Numbers in circles indicate the second and third sections used to measure ICG and ThS fluorescence intensities. (c) Comparison of ThS fluorescence intensity in TTC-stained (TTC+) and non-TTC-stained (TTC-) representative sections of rat hearts. (d) Comparison of ThS fluorescence intensity in representative sections of the interventricular septum (reference zone, red scan lines) and the left ventricular free wall (orange scan lines), where no-reflow zones are visible. Vertical blue arrows show the change in ThS fluorescence intensity in the reference zone of the section compared to the background (blue lines). Scale bars: 1 mm.
Fig. 3 .
Fig. 3. Evaluation of the initial stage of no-reflow expansion via double ICG and ThS staining. Imaging of ICG and ThS fluorescence of the same section was performed after double histochemical staining (EB and TTC). (a) The area between the dashed lines delineating the anatomical AAR is divided into 6 equal sectors: two border sectors (BZ-1 and BZ-2) and four inner sectors (S1-S4). Each sector is divided into three grid cells: 1) subepicardial (Subep.), 2) intramural (Intr.), and 3) subendocardial (Suben.). Reference: red reference line in the reference sector, plotted equidistant from the AAR. (b) Image of the same section stained with TTC. (c) Section in near-infrared light. * marks the site of maximum ICG fluorescence intensity. Scan line 3 and the reference line are highlighted in red to show the ICG and ThS fluorescence intensity in plot (e). (d) Section in UV light. * marks the point corresponding to the maximum ICG fluorescence intensity. (e) Graph of ICG and ThS fluorescence intensity along scan line 3 and the reference line (red). The gray "White" line was drawn from the image of the TTC-stained section (plot (b)). Scale bars: 1 mm.
Fig. 4 .
Fig. 4. Comparison of the sizes of anatomic no-reflow detected with both ICG and ThS in the same heart. There was an increase in the size of the no-reflow area between 90'ICG and 120'ThS (n = 7). (a) Representative ICG (ICG'90) and ThS (ThS'120) fluorescence images of a single section. Dotted lines are the borders of the no-reflow zone. (b) Comparison of the no-reflow area obtained by planimetric analysis of fluorescence images in cross-sections of rat hearts. (c) Positive correlation of no-reflow area sizes obtained by ICG and ThS fluorescence imaging (p = 0.0341). Scale bars: 1 mm.
Fig. 5 .
Fig. 5. Comparison of the size of no-reflow zones after 30 min of ischemia and 24 h of reperfusion using both ICG and ThS staining in the same heart. (a) and (b) are ICG and ThS fluorescence images. (c) The same section in white light. The free wall of the left ventricle shows a TTC-negative zone, which is a zone of necrosis. (d) Sizes of no-reflow zones detected by ICG and ThS (n = 5). Scale bars: 1 mm.
Fig. 6 .
Fig. 6. Comparison of contrast values between sectors of the intramural layer of the left ventricular wall in the risk zone and the remote zone (interventricular septum) after 2 h and after 24 h of reperfusion. BZ-1 and BZ-2 are border sectors of the risk zone, and S1, S2, S3, and S4 are inner sectors. *, ** indicate a statistically significant difference between the contrast of the inner sectors and the next border sector of the intramural layer.
Fig. 7 .
Fig. 7. Dependence of negative contrast values on the severity of blood stasis in the area of myocardial infarction. (a) Two representative images of cross-sections of rat hearts from group I-30'/R-2 h, which differ in the value of negative contrast, showing the relation between ICG fluorescence contrast and the severity of blood stasis (BS). (b) A representative cross-section of a rat heart from group I-30'/R-24 h, in which marked negative ICG fluorescence contrast corresponds to severe blood stasis. Scale bars are 1 mm for all images of rat whole-heart cross-sections and 50 µm for ×20 images. | 8,234.8 | 2024-01-02T00:00:00.000 | [
"Medicine",
"Engineering"
] |
Skyrmions and clustering in light nuclei
One of the outstanding problems in modern nuclear physics is to determine the properties of nuclei from the fundamental theory of the strong force, quantum chromodynamics (QCD). Skyrmions offer a novel approach to this problem by considering nuclei as solitons of a low energy effective field theory obtained from QCD. Unfortunately, the standard theory of Skyrmions has been plagued by two significant problems, in that it yields nuclear binding energies that are an order of magnitude larger than experimental nuclear data, and it predicts intrinsic shapes for nuclei that fail to match the clustering structure of light nuclei. Here we show that extending the standard theory of Skyrmions, by including the next lightest subatomic meson particles traditionally neglected, dramatically improves both these aspects. We find Skyrmion clustering that now agrees with the expected structure of light nuclei, with binding energies that are much closer to nuclear data.
QCD is the fundamental theory of the strong nuclear force and describes how quarks are confined to form protons and neutrons, together with the binding of these nucleons to form atomic nuclei. However, the complexity of the non-perturbative regime means that extracting the properties of nuclei directly from QCD is not within reach of current computational capabilities. Traditional methods of nuclear physics have confirmed that protons and neutrons are excellent effective degrees of freedom at the nuclear energy scale, but establishing a link to the more fundamental theory will not only provide a more complete understanding of nuclear physics but will also allow predictions for experimentally unknown nuclei and for matter under extreme conditions, for example in the interior of neutron stars.
Skyrmions are named after the British physicist Tony Skyrme, who introduced the standard version of the model almost sixty years ago [1] as a nonlinear field theory of the lightest subatomic meson particles, called pions. This theory has topological soliton [2] solutions, that is, twisted localized particle-like excitations of the pion fields, that are now known as Skyrmions. The number of twists in the pion fields corresponds to the number of Skyrmions, and Skyrme proposed that this be identified with baryon number, which is equal to the mass number A that counts the number of nucleons in a nucleus. This proposal was verified twenty years later [3] by demonstrating that the model may be regarded as a low energy effective field theory of QCD in the limit of a large number of quark colours. Skyrmions therefore provide an intermediate approach to nuclei, between the currently intractable fundamental theory of quarks, and the accuracy of more conventional nuclear physics methods based directly on protons and neutrons.
Skyrmions in the standard version of the model are displayed in Fig.1a, for nucleon numbers A = 1 to A = 8, by plotting baryon density isosurfaces that reveal the intrinsic shapes of the nuclei predicted by these Skyrmions [4]. Although Skyrmions have had some success in modelling nuclei over the last few decades [5] there are two major problems with Skyrme's original version of the theory. The first is that it produces binding energies for nuclei that are an order of magnitude larger than nuclear data obtained from experiments, and the second is that it does not reproduce the clustering structure of light nuclei suggested by both experimental data and more conventional nuclear theories [6].
The novel aspect of Skyrmions is that nuclei composed of baryons miraculously appear as solitons in a field theory of meson particles. In the standard version of the Skyrme model only the lightest meson particles, pions, are included within the theory. Heavier mesons are expected to provide corrections to this leading order theory but they are neglected in the standard Skyrme model, simply to make the computation of Skyrmions tractable. By performing extensive parallel computations on a high performance computing cluster, we have been able to obtain the first results for Skyrmions in the theory containing both massive pions and rho mesons, the next lightest of the meson particles. In this letter we show that including the previously neglected rho mesons dramatically improves the two major failings of the standard Skyrme model highlighted above. Namely, Skyrmions now produce the required cluster structure of light nuclei, with binding energies that are much closer to nuclear data.

In the standard version of the Skyrme model [1] the triplet of pion fields (π₁, π₂, π₃) is encoded in the SU(2)-valued Skyrme field U, where the auxiliary sigma field imposes the constraint σ² + π₁² + π₂² + π₃² = 1. The three su(2)-valued currents are defined to be R_i = ∂_i U U⁻¹, and in dimensionless units the static energy E_π that defines the Skyrme model is built from these currents (a schematic form is sketched below). Without loss of generality, the positive constants c₁ and c₂ can be set to unity by rescaling the dimensionless energy and length units, and this choice of scaling is known as using Skyrme units. However, in this study it will be convenient to work with a different scaling, so we set c₁ = 0.141 and c₂ = 0.198, to match the normalization of the extended version of the model to be introduced later. The constant m is the pion mass in dimensionless Skyrme units and is fixed by the experimental value. The only parameters of the Skyrme model are therefore the two conversion factors that convert dimensionless energy and length units into physical units. Two physical quantities are required as input to determine these two factors, and the common practice is to use the conversion values calculated by fitting to the properties of the proton and its excited state, the delta baryon [7]. In these units the physical pion mass corresponds to the value m = 0.526, which we take from now on.
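The displayed definitions of the Skyrme field and the static energy appear to have been dropped during extraction. A common convention, given here only as a schematic sketch (the exact numerical factors multiplying c₁, c₂ and the pion-mass term follow the cited references, not this note), is:

```latex
U = \sigma\,\mathbb{1} + i\,\boldsymbol{\pi}\cdot\boldsymbol{\tau}, \qquad
\sigma^2 + \pi_1^2 + \pi_2^2 + \pi_3^2 = 1, \qquad
R_i = \partial_i U\, U^{-1},

E_\pi \sim \int \Big(-c_1\,\mathrm{Tr}(R_i R_i)
\;-\; c_2\,\mathrm{Tr}\big([R_i,R_j][R_i,R_j]\big)
\;+\; c_1 m^2\,\mathrm{Tr}(\mathbb{1}-U)\Big)\, d^3x .
```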
Baryon number is identified with the integer-valued topological charge (a standard expression is sketched below), with the integrand being the baryon density B.
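A standard expression for this charge, quoted here as a sketch (the overall sign depends on orientation conventions), is:

```latex
B = -\frac{1}{24\pi^2}\int \epsilon_{ijk}\,\mathrm{Tr}\big(R_i R_j R_k\big)\, d^3x ,
```

so that the integrand plays the role of the baryon density appearing in the text.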
Skyrmions that model nuclei with mass number A are the stable energy minima of the energy E_π with B = A. Skyrmions are computed, as described in detail in previous work [8], by evolving dynamical second-order in time field equations derived from a Lagrangian with a static contribution equal to −E_π, where fourth-order accurate finite difference approximations are used to evaluate spatial derivatives on a cubic lattice with boundary condition U = 1. Flow to minimal energy states is achieved by instantaneously freezing the motion, via setting all time derivatives to zero, whenever E_π is increasing. The simulations presented in Fig.1 were performed on a cubic lattice containing 128³ lattice points with a lattice spacing ∆x = 0.08 and the time evolution implemented via a fourth-order Runge-Kutta method with a timestep ∆t = 0.02.
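To illustrate the relaxation scheme described above (arrested second-order dynamics with Runge-Kutta time stepping), here is a toy sketch applied to a simple test energy rather than the Skyrme energy on a 128³ lattice; the energy function, step sizes, and iteration count are all hypothetical:

```python
import numpy as np

def energy(x):
    # Toy double-well potential per component, standing in for E_pi.
    return np.sum((x**2 - 1.0)**2)

def grad(x):
    return 4.0 * x * (x**2 - 1.0)

def rk4_step(x, v, dt):
    # One RK4 step for the second-order system x'' = -dE/dx.
    def acc(y):
        return -grad(y)
    k1x, k1v = v, acc(x)
    k2x, k2v = v + 0.5*dt*k1v, acc(x + 0.5*dt*k1x)
    k3x, k3v = v + 0.5*dt*k2v, acc(x + 0.5*dt*k2x)
    k4x, k4v = v + dt*k3v, acc(x + dt*k3x)
    x_new = x + dt/6.0 * (k1x + 2*k2x + 2*k3x + k4x)
    v_new = v + dt/6.0 * (k1v + 2*k2v + 2*k3v + k4v)
    return x_new, v_new

x = np.array([1.7, -0.2, 0.9])     # arbitrary starting configuration
v = np.zeros_like(x)
e_prev = energy(x)
for _ in range(2000):
    x, v = rk4_step(x, v, dt=0.02)
    e = energy(x)
    if e > e_prev:                  # "freeze": zero all time derivatives when E increases
        v[:] = 0.0
    e_prev = e
print("final configuration:", x.round(4), "energy:", round(e_prev, 6))
```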
The results for Skyrmions in the standard Skyrme model are presented in Fig.1a by plotting baryon density isosurfaces B = 0.02. These surfaces are coloured to indicate which of the three constituent pion fields has the largest magnitude and its sign, according to the colouring scheme shown in the figure. This information is relevant for understanding the forces between Skyrmions, because the pion fields interact as a triplet of orthogonal dipoles. The data presented in Fig.2 highlights the problem with Skyrmion binding energies in the standard Skyrme model. The blue squares show the experimental nuclear data on the mass per nucleon, in units of the proton mass, for nuclei with nucleon numbers A = 1, ..., 8. This demonstrates that the binding energies of nuclei are no greater than 1% of the mass of the nucleus. The red circles denote the mass per nucleon of Skyrmions E_π/A, normalized by the single Skyrmion mass. In contrast to the experimental data, this plot confirms that the binding energy of a Skyrmion can be greater than 10% of its mass: an order of magnitude larger than experimental values.
Despite this quantitative failing, the intrinsic shapes of several of the Skyrmions shown in Fig.1a are known to have some promising features. In particular, to interpret the classical Skyrmion solutions as nuclei requires the introduction of spin and isospin, which is generally performed via a semiclassical quantization that treats the Skyrmion as a rigid body that is free to rotate in both space and isospace. The symmetries of the classical Skyrmion solutions determine the allowed spin and isospin states and these have been calculated for many Skyrmions [9], including all those shown in Fig.1a. The predicted ground state spins and isospins for nucleon numbers A = 1, 2, 3, 4 match with experiment as a result of the symmetries of these Skyrmions. Unfortunately, the above match between Skyrmion states and nuclear data begins to break down at A = 5 and is particularly poor for odd values of A. This signals a problem with the intrinsic shapes of Skyrmions for A > 4 that is not unexpected and can already be anticipated from the images in Fig.1a. As discussed in more detail below, the basic problem with Skyrmions for A > 4 is that they fail to show a cluster structure, and instead allow all constituents to merge and form configurations that are too symmetric.

The extended Skyrme model requires the inclusion of the three su(2)-valued rho meson fields ρ_i. These are introduced using the dimensional deconstruction formulation [10], which has the advantage of introducing no additional free parameters, and yields the energy E_π,ρ of the extended model [11], with the values of the constants c₃ = 0.153, c₄ = 0.050, c₅ = 0.038, c₆ = 0.078, c₇ = 0.049. Skyrmion solutions of the extended model are displayed in Fig.1b for nucleon numbers A = 1, ..., 8, by plotting baryon density isosurfaces B = 0.02. These Skyrmions were computed by applying the simulation scheme described above to minimize the extended energy E_π,ρ, with the condition ρ_i = 0 imposed at the boundary of the simulation lattice. These plots reveal that the intrinsic shapes of Skyrmions have completely changed for A > 4 and now display the cluster structure expected of light nuclei, as discussed below. Furthermore, the black diamonds in Fig.2 show the mass per nucleon E_π,ρ/A of Skyrmions in the extended model, again normalized by the single Skyrmion mass. This data reveals that binding energies are reduced from over 10% to no more than 3%, significantly improving the comparison with experimental data.
Both experimental evidence and a variety of theoretical approaches support the existence of clustering in light nuclei [6], namely the emergence of molecular-like sub-units. Clustering begins at A = 5, where the nucleus of ⁵He is regarded as an α-particle (⁴He) core with an orbiting neutron [12]. This is not reflected in the A = 5 Skyrmion in the standard model, where all five Skyrmions are democratically merged to form a single structure. However, clustering is clearly evident in the A = 5 Skyrmion in the extended model including rho mesons, where a single A = 1 Skyrmion is isolated from a core that has the shape of a slightly deformed cube corresponding to the α-particle of the A = 4 Skyrmion.
Similar comments apply to the A = 6 and A = 7 Skyrmions, where neither the α-particle plus deuteron (²H) cluster [13] of ⁶Li nor the α-particle plus triton (³H) cluster [13] of ⁷Li are reflected in the intrinsic shapes of the standard Skyrmions, but are clear in the extended model with rho mesons. In fact the A = 7 Skyrmion in the standard model is embarrassingly symmetric, having dodecahedral symmetry that predicts a ground state with a spin far greater than that seen in the experimental data for the ground state of ⁷Li. Recently, an approach to address this failure has been proposed [14] by including vibrations that encourage the too symmetric A = 7 Skyrmion to split into the required A = 4 plus A = 3 cluster structure, but here we find that including rho mesons already yields this clustering without the need for vibrations.
Most attention on clustering has been directed towards the study of α-particle sub-units in α-conjugate nuclei, composed of an equal and even number of protons and neutrons [6]. The first example is the 2α system of ⁸Be, but again the A = 8 Skyrmion in Fig.1a shows no sign of reflecting this property. However, it has been found that using an artificially large pion mass, of around twice the physical value, does change the intrinsic shape of the A = 8 Skyrmion to generate a 2α cluster, and also yields N α clusters for A = 4N Skyrmions [15]. Despite this encouraging development in the standard version of the model with a large pion mass, this does not change the shapes of any of the Skyrmions with A < 8 or reduce the large binding energies. Here we find that the 2α cluster of the A = 8 Skyrmion with rho mesons, displayed in Fig.1b, appears without the need to artificially increase the pion mass, and indeed the cluster structure of a pair of α-particles is now more obvious. A success [16] of the standard Skyrme model, albeit with a large value of the pion mass, is the next α-conjugate nucleus of ¹²C, where there are two different A = 12 Skyrmions, one with a triangular 3α structure and the other with a linear 3α arrangement, that have properties suggesting identifications with the ground state and the famous Hoyle state of ¹²C, respectively. Similar A = 12 Skyrmions exist for both configurations in the extended model, see Fig.3, so this success of Skyrmions is maintained by the inclusion of rho mesons. In agreement with the situation in the standard Skyrme model with a large pion mass, we find that the linear chain cluster of Fig.3a has a slightly lower energy than the triangular cluster of Fig.3b.
In computing the Skyrmions displayed in Fig.1b a wide variety of initial conditions were applied for each value of A > 1, to avoid trapping in local minima. This included a product ansatz to generate A initially separate single Skyrmions and the application of the rational map approximation [17], that provides a good description of Skyrmions in the standard Skyrme model. In each case there was a clear gap between the Skyrmions presented in Fig.1b and any other local energy minima, except for the case A = 6, where another Skyrmion solution with an energy equal to the one shown (to within our expected numerical accuracy) was also obtained. This alternative solution also has a cluster structure, but rather than the A = 4 Skyrmion plus the A = 2 Skyrmion cluster shown in Fig.1b it is a cluster of two A = 3 Skyrmions, arranged face-to-face to preserve a triangular symmetry.
In recent work [18] Skyrmions of the extended model were computed in the simplifying limit in which the pions are assumed to be massless, that is, by minimizing the energy E_π,ρ with m = 0. It must be stressed that this apparently innocuous simplification has dramatic consequences. In particular, there is no clustering behaviour in this limit and the Skyrmions retain the shapes of those in the standard Skyrme model.
In summary, we have shown that extending the standard version of Skyrmions, by including not only massive pions but also massive rho mesons, significantly improves the features of Skyrmions in exactly the areas where discrepancies with experimental results were most problematic, whilst retaining the successful aspects of Skyrmions. Of course, we do not expect the addition of rho mesons alone to now provide a perfect match between Skyrmions and nuclear data, because we have clearly shown that neglecting heavier mesons can have significant consequences. However, the results presented here provide considerable evidence that, as each of the heavier mesons is included within the theory, there are grounds for optimism about the convergence of Skyrmions to nuclei. | 3,949.4 | 2018-11-05T00:00:00.000 | [
"Physics"
] |
Need Analysis of Mobile Learning Based on Problem-Based Learning for Elementary School Student
Abstract
INTRODUCTION
The rapid development of technology and information has a positive influence on improving the quality of education in the learning process. The learning process has begun to shift from conventional learning methods to student-centered learning methods (Indriani et al., 2021). All learning must undergo this transition. One way to achieve student-centered learning is by developing learning media in the form of mobile learning using an effective model. Mobile learning is a learning medium that packages materials in the form of applications that utilize communication technology on mobile phones, anywhere and anytime (Purosad et al., 2020). The use of mobile learning can help students understand the concepts presented in the material, giving positive feedback on the learning experience and further promoting engagement with learning through mobile learning (Meng et al., 2023; Samoekan, 2021). Mobile learning plays a role in guiding students to learn independently, making it easier for students to understand teaching material, and making the classroom atmosphere more interesting, so that students are expected to be able to reason critically (Agus & Endang, 2021). Critical thinking is the ability to use rational reasoning to solve problems faced by each individual in an integrated and structured manner, formulate innovative solutions, and reflect (Kurniawati & Ekayanti, 2020; Nurul & Mukhayyarotin, 2021). Critical thinking skills must be taught and practiced from an early age so that students can overcome the problems they will face. Critical thinking skills can be taught through social studies learning in elementary school. The purpose of elementary social studies learning is to foster a sense of responsibility for oneself and one's obligations in society, nation and state, and it is also expected to train intellectual skills in identifying and finding solutions to the problems faced (Jumriani et al., 2021).
Given the importance of critical thinking skills, these skills must be trained from an early age. Within elementary school, critical thinking skills can be trained through several existing lessons, one of which is social studies. Elementary social studies learning outcomes also refer to the ability to understand concepts and the ability to apply social studies understanding, such as the ability to think critically and creatively, the ability to solve problems, and the ability to make decisions (Sarah, 2021). To achieve these learning objectives in elementary school, social studies needs to be taught as interestingly and creatively as possible. The needs analysis of teachers and students at SDN Pondok Bambu 04, conducted through interviews, revealed problems such as the following: in the independent curriculum, teachers still separate science and social studies, which should not be separated if students are to understand IPAS learning; the material is delivered without optimal media support; and learning focuses only on thematic books and videos via YouTube.
The use of videos from YouTube is considered suboptimal for learning because it does not match the developmental characteristics of the grade IV students at SDN Pondok Bambu 04. Learning methods have also been applied in class activities, but their implementation with students still does not follow the intended syntax. Furthermore, in practical activities, critical thinking skills in elementary schools still need to be improved, especially those of grade 4 students at Pondok Bambu 04 Elementary School. Researchers found that students are still not able to analyze problems in depth in elementary social studies learning. Students' analytical skills can be seen from their answers in evaluations in the form of written descriptions based on the Higher Order Thinking Skills (HOTS) model. When given HOTS questions, students still have difficulty understanding the intent and purpose of the question, so they need further direction and guidance from the teacher to understand its meaning.
In view of this problem, the researchers aim to develop mobile learning as an alternative solution. The mobile learning developed is oriented towards problem-based learning (PBL). PBL is a learning model, or strategy, in which students learn through practical problems related to real life (Flavia, 2023; Junaidi, 2020). A series of problems then becomes the subject matter of systematic student learning activities. To work on these problems, students must search for, analyze, synthesize, and apply information to solve the problems given by the teacher at the beginning of class (Eric & Douglas, 2023; Jairina et al., 2020). Students can examine the problems discussed critically and can draw conclusions about them. Problem solving is an important element in supporting student understanding and academic success; if students do not have problem-solving skills, it will be difficult for them to solve the problems faced throughout the learning process. Razilu (2021) shows that mobile learning developed using Articulate Storyline 3 is categorized as very feasible for use in learning. Research conducted by Purbasari et al. (2019) found that social studies learning media based on mobile learning applications are very feasible to use. Research by Maulida (2020) found that the PBL model improves thinking skills to a very high degree. Research by Suhardi et al. (2022), producing scientific-approach-based mobile learning media, found it very feasible to use. Research by Salma et al. (2022) found that Android-based interactive multimedia for improving critical thinking skills is categorized as moderate.
The researchers developed the product in digital form as an application. The handouts in this application were developed using Unity software, so the mobile learning product takes the form of an application. The development of mobile learning must also pay attention to appearance, content, and ease of access for the students who will use it (Elvani et al., 2022). The use of mobile learning in instruction can increase student independence (Frans et al., 2023). In addition, the use of Unity software has the advantage that the resulting application is easier to operate on various devices and platforms, such as cell phones, laptops, and computers. The researchers did this as one form of support for students to keep up with technological advances. The utilization of technology and information in 21st-century learning aims to produce human resources who have knowledge, skills, critical thinking, digital literacy, and command of technology and information (Mardhiyah et al., 2021).
METHODS
The development research conducted here uses the ADDIE model, which has 5 stages in developing a product: analysis, design, development, implementation, and evaluation (Robert, 2009). The purpose of this study was to analyze the need to develop PBL-based mobile learning products in primary school social studies learning. This research was conducted with 30 grade IV students at SDN Pondok Bambu 04 Elementary School.
In line with the research objectives, the needs analysis was conducted with teachers and students to obtain detailed data about problems in the learning process, especially in social studies lessons in grade 4. The data collection techniques used in the study are observation and interviews, with descriptive quantitative analysis. The following is the interview instrument for the teacher and student needs analysis.
RESULTS & DISCUSSION
The results of this research follow the ADDIE research steps, which include 5 stages: analysis, design, development, implementation, and evaluation. However, given the limited time available, the researchers only carried out the needs analysis within the analysis stage. The needs analysis focused on aspects of the IPAS learning process in grade IV of elementary school. The results of the student needs analysis questionnaire are illustrated in Figure 2.
Figure 2. Needs Analysis Diagram
Based on the results of the student analysis questionnaire, it is known that in learning IPAS the teacher has used learning media in the classroom.Media is one of the important elements that make learning more fun and meaningful for students (Pratiwi & Dzakiyah, 2024).The learning media used by teachers are learning videos from YouTube.The use of learning videos serves to make it easier for students to support learning activities, help students understand the subject matter, facilitate the teacher during teaching and learning activities in the classroom, direct the attention of learners to concentrate on the content of the lesson, understand and hear information or messages contained in the image, provide context for understanding the text, help those who are weak in reading to organize information in the text and remember it (Fitri & Ardipal, 2021;Suarni et al., 2019).This finding is in line with research which states that effectiveness in the learning process is influenced by the type of media chosen and the learning methods applied to significantly improve results (Khoir et al., 2020;Supiadi et al., 2023).The advantages of learning videos on YouTube include being able to help students understand the material better, as an independent learning media, and providing learning video features that can be accessed when users are online (Yeni et al., 2022).Learning videos are alternative learning media that can facilitate students' learning styles audibly and visually.Visual learning style is a learning style that dominantly uses the eyes so that it has a very important role in learning to more easily capture information by observing (Supit et al., 2023).To develop the potential of students with visual learning styles, teachers should focus more on moving demonstrations and give them objects related to the lesson (Mulabbiyah et al., 2019).Meanwhile, audio learning style is an auditory learning style that mostly uses the sense of hearing for individuals to learn more easily and capture stimulus through the sense organ of hearing (ear) (Maheni, 2019).To develop the abilities of students with audio learning styles, teachers should provide opportunities for discussion and participation in the classroom and family, and require oral expression of ideas so that information is easily understood.For example, students are encouraged to use their own knowledge and experience to solve problems rather than always relying on the opinions of others (Minasari & Susanti, 2023).
The use of learning videos from YouTube can already help the learning process but there are still many things that need to be improved because they are not in accordance with the characteristics of students in a school.Sometimes the learning videos used still do not think about the psychology of student development.There are still visualizations of images that are not suitable for student growth and development.The selection of learning media must be adjusted to the development of students because the teaching and learning process must enable the process of interaction and mutual communication between teachers and students in accordance with the functions and learning objectives that have been set, can communicate learning materials effectively to students to describe learning materials, can prevent communication barriers in the teaching and learning process, such as verbalization, misunderstanding, lack of concentration, and lack of student understanding (Inayah, 2023).Seeing this, it is necessary to develop mobile learning as an alternative problem solving.By developing mobile learning media, it can increase motivation to follow the learning process in class (Herlina et al., 2023).Then the development of learning media has advantages such as; (1) having more flexible access when learning anywhere and anytime without limiting space and time (Talakua & Sesca Elly, 2020); (2) Media is interactive so that it is expected to attract students' interest in learning (Wulandari et al., 2019); (3) effective and efficient, so that in the process of learning activities users can learn independently (Herlina & Anwar, 2020).
In addition to learning media, the selection of teaching methods is still dominated by classical learning methods where learning activities are still dominated by the teacher.Classical learning is still less effective when done in the classroom because students feel bored and less focused on the teaching material delivered by the teacher (Nuraeni et al., 2022).In addition, the condition of students tends to be passive, even some students are seen doing busy work at their respective desks such as making fans from paper so that they show their boredom in learning.Then the students at the back were even busy chatting with their classmates without paying attention to the explanation of the material from the teacher.In general, students show a lack of student engagement in learning with teachers using classical methods in the classroom.Classical learning affects the learning carried out by the teacher so that it is boring for students, it is still difficult to understand the material, learning outcomes are not optimal, causing students' opportunities to make efforts to obtain information independently are still very limited and students feel less interested in the learning carried out by the teacher so that students' critical thinking skills are still not well developed by the teacher (Khoiruman & Banyuwangi, 2021;Utomo & Burhan, 2021).So that different learning is needed to overcome these solutions such as PBL learning.By using the PBL learning model, it can improve students' creative thinking skills in developing knowledge that can be applied in everyday life.So that in learning students do not only rely on memory to memorize, but students work together, communicate well in groups, take responsibility for solving a problem and can improve problem solving skills that stimulate active and creative student participation in dealing with contextual problems that commonly occur in everyday life (Handayani & Koeswanti, 2021).The improvement of critical thinking skills through mobile learning media is evidenced by the results of research conducted (Muhammad & Henny, 2021;Nofita et al., 2023;Noni et al., 2022).In developing critical thinking skills, students can use concrete knowledge that occurs in students' daily lives (Trimahesri & Hardini, 2019).
Mobile learning is developed using PBL steps, in which students learn by finding more than one source of information to solve a problem. Mobile learning itself makes the media easy for students to access because it can be used anywhere and anytime (Sartika et al., 2024). PBL-based learning media supports students' current learning styles and can serve as supplementary material used by teachers, supported by the role of technology in education. Activities in PBL make it easier for students to gather a variety of information from the teaching materials. Through problem-based learning, students learn how to frame a problem, look at it more closely, collect and analyze data, and prepare arguments related to solving it. In this way, students learn to use their own potential to see problems, solve them, and develop a habit of thinking that does not accept claims uncritically but keeps looking for mistakes or errors (Siti et al., 2023).
CONCLUSION
Based on the results of the study, the learning media developed by teachers still have some shortcomings. This is the information the researchers obtained from interviews and needs-analysis questionnaires. Schools therefore genuinely need creative learning media, especially PBL-based mobile learning in accordance with the independent curriculum. The implication of this research is that future researchers can develop learning media in the form of PBL-based mobile learning based on the analysis of student needs.
Table 1. Student Needs Analysis Instrument | 3,449.6 | 2024-08-31T00:00:00.000 | [
"Education",
"Computer Science"
] |
On the Challenges of Charging Electric Vehicles in Domestic Environments
This poster abstract presents a case study of charging Electric Vehicles (EVs) at home, taking into consideration the household power consumption and the vehicle driving routines of the residents. It reveals some challenges of charging EVs in the household and highlights the importance of proper charging scheduling in order to avoid potential tripping of the household circuit breaker.
INTRODUCTION
Domestic households typically have a limit on the maximum instantaneous power that can be drawn, which is determined by the contract between the household and the electricity supplier. Thus, as Electric Vehicles (EVs) become more prominent, it is necessary to ensure that EV charging at home does not cause power outages due to excess power being drawn from the electric installation.
Still, while several studies (e.g., [1, 4, 11]) have been carried out for planning the schedule of EV charging at the micro- or macro-grid level, we have not yet found such studies conducted at the individual household level. In this poster abstract, we present a case study in which we simulate the charging of two EVs in a household environment, based on an analysis of the household power consumption and the driving routines of the dwellers. Our objectives are two-fold: i) understand how the driving routines affect the charging needs, and ii) understand how the charging of EVs may affect the stability of the domestic electric circuit.
DATA COLLECTION AND SIMULATION
Household Consumption Data
For this simulation, we consider a household in the city of Funchal, Portugal. The household has an electricity power contract of 6.9 kVA (30 A at 230 V), and it is constituted by three adults. Two members commute to work from Monday to Friday, while the third one is a house worker. The electric energy usage of the selected household was monitored between the 28th of October 2016 and the 9th of January 2017, at a frequency of 1/60 Hz. Each measurement consists of a timestamp, current, voltage, active and reactive power. Figure 1 shows the box-and-whiskers plot for each hour of the day, using the current (Amps) as the baseline metric.
Electric Vehicle Models
Currently, there are no EVs in the household. Instead, we assume that the EVs are two Renault Zoe (model R90) with a nominal energy capacity of 22 kWh, since this is the best-selling EV in the region [10]. This model can be charged in the household using 10 A or 16 A sockets. The former takes about 13 h 30 min to fully charge the battery, whereas the latter takes about 7 h 55 min [7].
EV Commuting and Charging Simulation
Using the driving routines and household consumption data, we ran a simulation to get the necessary data for this case study.
Assumptions.
The following assumptions were made: i) A fully charged battery is enough for 202 km (both drivers' speed limit is 50 km/h [6]); ii) Charging is done at 16 A and its duration is linear in the required charge; iii) EVs start the simulation with their batteries fully charged and cannot charge simultaneously; iv) From Monday to Thursday, charging can only happen between 7 PM and 7 AM; v) From Friday to Sunday, EVs can be charged anytime between Friday 7 PM and Monday 7 AM; and vi) If both EVs have to charge on the same day, we estimate the time available for charging and divide it equally between the EVs.
Simulation.
Three simulations were conducted by generating the daily commutes for each EV in km and varying the percentage of battery energy that must be available at the beginning of the next day. Regarding the energy consumed from the battery, thresholds of 25%, 50%, and 75% were selected to determine when an EV needs to recharge. For each of these values, five simulations were run over the monitoring period. Concerning the charging process, we tested every possible start time in the allowed interval and recorded the number of times the drawn current exceeded the 30 A threshold. Whenever there was not enough time to fully charge an EV, this event was flagged. As a practical example, if an EV needs 4 hours of charge (240 minutes) and this can be done any time between 7 PM and 7 AM (12 hours, i.e., 720 minutes), we tested the 720 − 240 + 1 = 481 possible time slots.
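To make this exhaustive slot search concrete, the following Python sketch counts, for every admissible start minute, whether the combined draw stays below the contracted limit. This is not the authors' simulation code; the minute-resolution household current series is synthetic, and the function and variable names are invented for illustration (the 16 A charging current and 30 A limit are taken from the text).

```python
import numpy as np

def count_viable_slots(household_amps, charge_minutes, ev_amps=16.0, limit_amps=30.0):
    """Slide a fixed-length charging window over a minute-resolution household
    current series and count the start times that never exceed the limit."""
    n_slots = len(household_amps) - charge_minutes + 1   # e.g. 720 - 240 + 1 = 481
    viable = 0
    for start in range(n_slots):
        total = household_amps[start:start + charge_minutes] + ev_amps
        if np.all(total <= limit_amps):
            viable += 1
    return n_slots, viable

# Example: a 12-hour allowed window (7 PM-7 AM) sampled every minute,
# and an EV needing 4 hours (240 minutes) of charge.
rng = np.random.default_rng(0)
night_load = rng.uniform(5.0, 20.0, size=720)   # hypothetical household current, in amps
slots, ok = count_viable_slots(night_load, 240)
print(f"{ok} of {slots} candidate start times stay below the 30 A limit")
```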
RESULTS AND DISCUSSION
For each simulation we analyzed: i) the number of days on which the EVs needed a recharge, ii) the number of days on which issues occurred while charging, and iii) the number of weekend days on which EV charging was requested. The results are summarized in Table 2 (Appendix B).
As expected, most problems happened when it was necessary to recharge after only 25% of the battery capacity had been used. These issues occurred mostly on weekends, since no restrictions were placed on the allowed time slots. Further analysis revealed that the periods with the highest chance of exceeding the contracted power occurred on weekdays between 7 PM and 8 PM. To illustrate this effect, Figure 2 compares the household electricity demand with and without EV charging for the period of December 24-26, 2016 (Saturday, Sunday, and Monday).
Ultimately, our study indicates that, once EV charging is introduced into a household, it is necessary to plan and schedule the use of electrical appliances in order to prevent potential tripping of the electrical circuit.
LIMITATIONS AND FUTURE WORK
There are a few limitations that should be addressed in future iterations of this work.
First, linear charging times are assumed to simplify the simulation. Since this is not the case in the real world, future iterations should employ sub-metering to gather more realistic charging times. Second, although an average speed of 50 km/h is a fair assumption in this study, it may not hold in other scenarios. One possible improvement would therefore be to monitor the actual driving routines using smartphone applications (e.g., [3, 5, 8]).
Regarding the simulation itself, future work should also consider different combinations of charging thresholds, as these are highly dependent on driver preferences. Furthermore, individual appliance consumption information could be used to improve the scheduling, as many of the consumption peaks happen only when certain appliances are switched on. Thus, by inferring when such appliances will be used (e.g., [2, 9]), it should be possible to briefly pause EV charging.
ACKNOWLEDGMENTS This project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement no. 731249.
[Table 2 fragment] Both: 0 ± 0 | 0 ± 0 | 0 ± 0* (*Simultaneous charging was only required 4 times; on 2 of the 4 occasions it was not possible to fully charge both EVs.)
C RESOURCES
The consumption data, simulation source code, and simulation results can be found at the following web page: https://www.alspereira.info/pubs/e-energy-2018/. | 1,733.2 | 2018-06-12T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
The Electrical Properties of Al/Methylene-Blue/n-Si/Au Schottky Diodes
We studied the electrical characteristics of Al/methylene-blue/n-Si/Au Schottky diodes, such as current-voltage, conductance-capacitance-voltage, and conductance-capacitance-frequency. We plotted the rectification ratio vs. voltage (RR-V) of the diode. From the I-V plots of the diodes, the saturation current (I0) and ideality factor (n) were calculated. The barrier height (eΦB) and series resistance (Rs) were calculated with Norde functions. The results show that in the Al/methylene-blue/n-Si/Au diode, the methylene-blue layer has a significant impact on electrical properties such as series resistance, barrier height, ideality factor, conductance, rectification ratio, and capacitance.
Introduction
It is well known that Schottky diodes with organic components have many advantages over inorganic semiconductors in electronic devices, such as easy fabrication, low cost, and applicability to rigid and flexible substrates [1]-[4]. Many organic interfacial layers between metal and semiconductor can be prepared with simple techniques such as electrostatic spraying, spin coating, dip coating, the sol-gel technique, etc. [5]. The electrical parameters of Schottky diodes are mainly affected by the interfacial organic layer [6].
Previous studies have examined Schottky barrier diodes with different organic components. Kılıçoğlu et al. studied Al/methyl-red/p-Si Schottky diodes; the structure demonstrated rectifying behavior [1] [2]. Ocak et al. studied Sn/methylene-blue/p-Si Schottky diodes and found that the presence of this organic layer converted metal-semiconductor (MS) devices into metal-insulator-semiconductor (MIS) devices. Many studies with organic interfacial layers have addressed controlling the barrier height and minimizing interface states [3]. Soylu et al. studied silicon-metal junctions with mixed methylene-blue and reported that the mixture of organic components significantly affected the electrical parameters, and that further study of the mixture percentage was needed to fully understand its effect on device performance [4]. Zeyrek et al. studied Al/perylene/p-Si diodes and noted that MPS-type (metal-polymer-semiconductor) Schottky barrier diodes (SBDs) could serve as a good electronic material combination for possible applications; the SBD was modified with a perylene organic polymer interlayer [5].
Previous studies examined methyl-red and methylene-blue with p-Si Schottky barrier diodes [1]-[4]. This study's objective was to understand the electrical properties of n-Si Schottky diodes with a methylene-blue organic component, i.e., Al/methylene-blue/n-Si/Au. We examined the ideality factor, series resistance, barrier height, rectification ratio, capacitance, and conductance.
The ohmic contact on the back surface of the n-type silicon (n-Si) wafer piece was fabricated by evaporating previously cleaned Au [11] [12], followed by a heat treatment at 570˚C for 30 minutes in a nitrogen (N2) atmosphere [13]. On the polished surface of the sample, a methylene-blue layer was formed by dip coating from a 10^-2 mol L^-1 methylene-blue solution in methanol [1]-[4]. Next, the sample was placed in the evaporation chamber, where previously cleaned Al (99.99%) was evaporated at 10^-6 torr onto the front side to form a rectifying contact. The result was an Al/methylene-blue/n-Si/Au (Al/MB/n-Si) Schottky diode.
G/w-C-V measurements were performed at room temperature (300 K) at a frequency of 0.5 MHz, and G/w-C-f measurements at room temperature (300 K) at a 250 mV bias. We plotted the G/w-C-V, C⁻²-V, and G/w-C-f measurements of the diode (Figure 1).
Results and Discussion
When there is an insulating layer at the interface of a metal-semiconductor device, the current-voltage (I-V) relation, according to thermionic emission theory for the non-ideal case, can be given as follows [1]:

I = I0 [exp(e(V − I Rs)/(n k T)) − 1],   (1)

where n is the ideality factor, V is the forward bias voltage, e is the electron charge, Rs is the series resistance, k is the Boltzmann constant, T is the temperature in Kelvin, and I0 is the saturation current derived from the straight-line intercept of the ln I axis at V = 0, given by

I0 = A A* T² exp(−eΦB/(k T)).   (2)

In Equation (2), A* is the Richardson constant (112 A/cm²·K² for n-type Si) [12], A is the effective diode area (0.4 mm radius), and eΦB is the effective barrier height.
The ideality factor n is determined from the slope of the linear region of the forward-bias ln I-V plot via [1]-[3]

n = (e/(k T)) · dV/d(ln I).   (3)

Figure 2 shows the forward-bias current-voltage characteristics of the diode. For an ideal diode the ideality factor (n) equals 1; in experiments, however, it can be greater than unity [7] [9]. The ideality factor obtained from the I-V graph was 2.8 for this sample, and the barrier height (eΦB) of the Schottky diode was obtained as 0.83 eV. The high values of the ideality factor and barrier height imply that the methylene-blue layer, a native oxide layer, and contamination are present at the metal-semiconductor interface [1]-[3].
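A minimal Python sketch of this extraction is shown below; it is not code from the paper, and the synthetic I-V arrays and function name are assumptions used purely to illustrate Equations (2) and (3). Only the Richardson constant and contact radius are taken from the text.

```python
import numpy as np

k_B = 8.617e-5            # Boltzmann constant, eV/K
T = 300.0                 # temperature, K
A_star = 112.0            # Richardson constant for n-type Si, A cm^-2 K^-2
area = np.pi * 0.04**2    # diode area for a 0.4 mm radius contact, cm^2

def fit_thermionic(V, I):
    """Extract the ideality factor n and barrier height (eV) from the linear
    region of a forward-bias ln(I)-V characteristic (Eqs. (2) and (3))."""
    slope, intercept = np.polyfit(V, np.log(I), 1)
    n = 1.0 / (slope * k_B * T)                       # n = (e/kT) dV/d(ln I)
    I0 = np.exp(intercept)                            # saturation current at V = 0
    phi_B = k_B * T * np.log(area * A_star * T**2 / I0)
    return n, phi_B

# Synthetic linear region with an ideality factor of 2.8 (illustrative data only)
V = np.linspace(0.15, 0.45, 30)
I = 1e-8 * np.exp(V / (2.8 * k_B * T))
n, phi_B = fit_thermionic(V, I)
print(f"n = {n:.2f}, barrier height = {phi_B:.2f} eV")
```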
The Norde function F(V) is defined as

F(V) = V/γ − (kT/e) ln(I(V)/(A A* T²)),   (4)

where I(V) is the current obtained from the I-V measurements and γ is an arbitrary integer greater than the ideality factor (γ > n). The barrier height (eΦB) is determined from the minimum of the F(V)-V plot by means of the relationship

eΦB = F(V0) + V0/γ − kT/e,   (5)

where F(V0) is the minimum value of F(V) and V0 is the corresponding voltage. The F(V)-V curves of the samples are shown in Figure 3. The series resistance (Rs) of the diode can be expressed as

Rs = kT(γ − n)/(e Imin),   (6)

where Imin is the current at V0 [9] [12]. The series resistance (Rs) and barrier height (eΦB) were obtained using Equations (5) and (6); from Norde's functions they were found to be 437 Ω and 0.89 eV, respectively. The barrier height and ideality factor of the diode were calculated as 0.83 eV and 2.8 using Equations (2) and (3), from the y-axis intercept and the linear slope of the ln I-V plot, respectively. The ideality factor is greater than unity and shows a deviation from an ideal diode. The deviation can be attributed to several effects, such as the organic layer, the native oxide layer, contamination, and a non-homogeneous interfacial (methylene-blue) layer [8]. The barrier height is greater than that of an MS diode and likewise deviates from the ideal; the higher barrier height shows that the organic interfacial layer acts as a physical barrier between the metal and the semiconductor. The charge transport mechanisms through the MB layer can be obtained from the analysis of the double-logarithmic (log I-log V) plot (Figure 2). The plot has three distinct regions, with slopes (m) of 2.3, 7.4, and 5.4 in regions I, II, and III, respectively. The first and third regions correspond to low bias and low slopes and can be characterized as the ohmic regime of the MB layer. The second region, with its high slope, can be characterized by a power law; this behavior is associated with two effects, low series resistance and high carrier injection [8].
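Along the same lines, the Norde analysis of Equations (4)-(6) could be sketched as follows; again, the function name, the choice γ = 3, and the input arrays are assumptions for illustration, not the authors' code.

```python
import numpy as np

k_B = 8.617e-5            # Boltzmann constant, eV/K
T = 300.0                 # temperature, K
A_star = 112.0            # Richardson constant for n-type Si, A cm^-2 K^-2
area = np.pi * 0.04**2    # diode area for a 0.4 mm radius contact, cm^2
gamma = 3                 # arbitrary integer chosen larger than the ideality factor

def norde_analysis(V, I, n):
    """Estimate barrier height (eV) and series resistance (ohm) from the
    minimum of the Norde function F(V) = V/gamma - (kT/e) ln(I/(A A* T^2))."""
    F = V / gamma - k_B * T * np.log(I / (area * A_star * T**2))
    idx = np.argmin(F)
    V0, F0, I_min = V[idx], F[idx], I[idx]
    phi_B = F0 + V0 / gamma - k_B * T          # Eq. (5)
    R_s = k_B * T * (gamma - n) / I_min        # Eq. (6); kT in eV gives ohms directly
    return phi_B, R_s

# Usage with measured forward-bias arrays (hypothetical placeholders):
# phi_B, R_s = norde_analysis(V_measured, I_measured, n=2.8)
```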
We measured the rectification ratio (RR) of the Al/MB/n-Si Schottky diode. The RR is calculated as the ratio of the forward to the reverse current at a given bias, RR = IF/IR. The rectification ratio of the Al/MB/n-Si diode is plotted as RR-V in Figure 4. The RR was about 10⁵ at +2 V, which is lower than that of MS diodes and can be attributed to the methylene-blue layer [12].
G/w-V and C-V measurements of the diode were carried out at 0.5 MHz, at room temperature, and in darkness (Figure 5). Analysis of the G/w-V and C-V characteristics shows that the conductance and capacitance remain small at negative voltage, while peaks in conductance and capacitance appear at positive voltage [3] [5]. The C⁻²-V characteristic was plotted at 0.5 MHz under reverse bias (Figure 5). The diffusion potential (V0) was obtained from the intercept of the C⁻²-V line with the horizontal axis. The ideality factor (n = 3.1) and barrier height (eΦB = 0.83 eV) can be seen from the C⁻²-V graph [9].
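As a small illustrative sketch (with invented names and synthetic values, not the measured curve), the diffusion potential can be read off as the voltage-axis intercept of a straight-line fit to the reverse-bias C⁻²-V data:

```python
import numpy as np

def diffusion_potential(V, C):
    """Return the diffusion potential V0 (V) as the voltage-axis intercept of a
    straight-line fit to the reverse-bias C^-2 vs V data (Mott-Schottky plot)."""
    slope, intercept = np.polyfit(V, 1.0 / C**2, 1)
    return -intercept / slope   # voltage where the fitted C^-2 line crosses zero

# Synthetic reverse-bias data with an intercept near 0.6 V (illustration only)
V = np.linspace(-2.0, 0.0, 20)
C = 1.0 / np.sqrt(4e19 * (0.6 - V))   # hypothetical capacitance values, in farads
print(f"V0 = {diffusion_potential(V, C):.2f} V")
```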
G/w-V, G/w-f, and C-f measurements of the diode were carried out at 250 mV, at room temperature, and in darkness (Figure 6). The C-f plot shows excess capacitance at low frequencies; the capacitance decreases at higher frequencies and at some point becomes equal to the depletion-region capacitance [9]. According to the literature, excess capacitance at low frequencies is explained either by charges at the interface that cannot follow the alternating-current signal or by the quality of the back ohmic contact on the bulk semiconductor substrate [9] [14].
Conclusion
The purpose of this paper was to explain the electrical properties of an Al/methylene-blue/n-Si/Au diode. The results show that the methylene-blue layer has a significant effect on electrical properties such as the series resistance, barrier height, ideality factor, interface state density, and capacitance of the diode. The values of the barrier height (0.83 eV), ideality factor (2.8), and series resistance (437 Ω) were found to be higher than those of a typical MS diode. These high values imply that a methylene-blue layer, a native oxide layer, and contamination are present at the metal-semiconductor interface.
Analysis of the G/w-V and C-V characteristics of the diode shows that the conductance and capacitance remain small at negative voltage, while peaks in conductance and capacitance occur at positive voltage.
The rectification ratio of the Al/methylene-blue/n-Si/Au diode was plotted as RR-V. The diode's RR was about 10⁵ at +2 V, which is lower than that of other MS diodes and can be attributed to the methylene-blue layer. The capacitance-frequency plot shows excess capacitance at low frequencies, from which we infer that interface states can follow the signal at low frequencies.
Figure 2. The forward and reverse bias current-voltage (I-V) plots and the forward log I vs. log V (log I-log V) plot of the Al/MB/n-Si diode. | 2,102.8 | 2016-01-06T00:00:00.000 | [
"Materials Science",
"Physics",
"Engineering"
] |