Parasitic-free gate: A protected switch between idle and entangled states

We propose a gate to switch superconducting qubit pairs in and out of a two-body interaction. This gate uses cross-resonance driving on a tunable circuit with adjusted parameters and accumulates no residual ZZ interaction for either idle or interacting qubits. It is imperative that this gate does not spread errors through the quantum register. Our detailed theoretical results show that these error-free modes do not necessarily require strongly tunable circuits, such as magnetically modulated qubits or couplers. We obtain an operational gate on weakly tunable circuits as well and show that switching between the two modes is remarkably fast.

I. INTRODUCTION

Over the past few decades quantum computing has evolved from a concept [1] to experiments on noisy intermediate-scale quantum processors [2-4]. In these processors, entangled qubits jointly search for the answer state of a computational problem, outperforming classical computational time [5-7]. In gate-based quantum processors, quantum maps are decomposed into a sequence of one- and two-qubit gates. Engineering of these gates has so far targeted fast and less erroneous quantum state changes. However, further improvements of gate fidelity are needed, both while gates are active and, as soon as gates are deactivated, for the idle qubits, to bring processors within the threshold of fault-tolerant computation [8-10].

In superconducting qubits, one of the main sources of two-qubit gate phase errors is the parasitic ZZ interaction, which enforces repulsion between computational energy levels and non-computational ones. In state-of-the-art circuits the residual qubit-qubit ZZ repulsion is typically 50-100 times weaker than the coupling strength; however, this is sufficient for a 1% gate-fidelity reduction over a typical gate length of ∼0.1 µs [11-13]. Remarkable advances toward perfecting gates have been made by exactly zeroing the static ZZ interaction between a flux qubit and a transmon [14-16], and between a pair of transmons [17,18]. Qubits built on a chip repel one another even in the absence of gates, which leads to the so-called static ZZ interaction. Zeroing this interaction can take place at either a genuine point, where the qubits are decoupled, or an affine point, where the repulsions from the two sides of the computational levels cancel each other; see Fig. 1. The distinction between the genuine and affine labels, however, is basis dependent; an affine pair of qubits with frequency detuning stronger than the coupling strength is effectively decoupled (with slightly shifted frequencies) in the eigenmode basis [19]. External driving adds a new component to the total ZZ strength, namely the dynamic part, which depends on circuit parameters and driving amplitudes. Further progress has been made recently on perfecting the cross-resonance (CR) gate by eliminating all residual ZZ interactions [20,21]. A large class of circuits under CR driving can cancel out the static ZZ strength, yielding a net-zero ZZ gate whose fidelity is limited only by qubit coherence times.

Perfecting qubits and gates seems to be a necessary step toward error-free quantum computation; it is, however, insufficient, as such computation also needs perfect idle qubits. Deactivating a ZZ-free gate without readjusting circuit parameters can harm the quantum register, because doing so eliminates the dynamic part of the ZZ interaction and leaves the qubits with the finite static part.
During the entire time that gates are not active, the seemingly idle qubits collect phase errors that not only grow over time but can also easily spread throughout the entire multiqubit state. Here we introduce a new gate, called the parasitic-free (PF) gate, by combining a tunable circuit with CR driving. Compared to the recently implemented CR gate on a tunable-coupling superconducting circuit [22], this gate can switch between idle (I) and entangled (E) modes without accommodating any residual ZZ interaction in either mode. Figure 1 schematically shows the residual ZZ interaction in the presence and absence of the CR amplitude Ω over a large domain of coupling strength. Switching mainly occurs by enabling tunability in the coupling strength J between the qubits. In the E mode, qubits are parked at a point of non-zero static ZZ, say at J_on, where activating the CR pulse sets the total parasitic ZZ to zero, so that the qubits purely ZX-interact. Deactivating the external driving returns the total parasitic interaction to a non-zero static ZZ strength; changing the coupling to J_off therefore sends the qubits to a static ZZ-free point, either a genuine or an affine one. Our theoretical results show that the two modes can be nearby, so that tuning does not necessarily require supplying a large difference between J_on and J_off; see the tuning in the vicinity of the affine point in Fig. 1. We find a large class of circuits in which weakly tunable circuits can accommodate both modes. An interesting realization of such circuits is the recent proposal of magnetic-free couplers between two qubits that are weakly tuned by mutual inductive coupling to an external inductance [23].

II. PRINCIPLES

The ZZ parasitic-free (PF) gate is a reversible operation that requires modulating a circuit parameter to switch between two modes: the undriven idle (I) mode and the driven entangled (E) mode. Qubits in the I mode are free from residual ZZ interaction. To bring the qubits into the desired interaction in the E mode, they are first coupled with finite static ZZ; a microwave pulse then drives them so that they interact with the desired ZX-type coupling while simultaneously being liberated from the net parasitic ZZ interaction. This gate combines two important features: 1) by filtering out parasitic interactions in the presence and absence of external driving, high state fidelity can be achieved in both modes, and 2) it safely provides a faster as well as higher-fidelity two-qubit gate by combining circuit tunability with external driving.

Figure 2(a) shows the schematic of the PF gate that switches qubits Q1 and Q2 into either of the following two modes. PF/I mode, represented by a green box: the qubits are made ZZ-free by tuning a circuit parameter, such as the mutual coupling strength, which hibernates their initial state |Q_1 Q_2⟩ for as long as they remain in the idle (I) mode. PF/E mode, represented by a pink box: the qubits are coupled by changing the circuit parameter and then applying microwave driving. This makes them interact only via ZX, in the absence of unwanted ZZ interactions, for as long as the microwave is on. The switching operation can be accomplished with frequency-tunable qubits, a tunable coupler between the qubits, or a combination of both. The gate principles are universal for all types of qubits and for both harmonic and anharmonic couplers.
In practice, however, a tunable coupler is preferred over tunable qubits, since the latter have proven to suffer from somewhat lower coherence times that may degrade gate performance [24].

Let us consider a circuit with qubits Q1 and Q2 coupled via the coupler C and denote their quantum states as |Q_1, C, Q_2⟩. Schematic circuits can be seen in Fig. 2(b), where Q1 and Q2 interact directly with strength g_12 and indirectly through their individual couplings to C, with coupling strengths g_1c and g_2c. In principle, the qubit and coupler Hamiltonians are similar, since the coupler can be considered a third qubit, with â_i (â_i†) the annihilation (creation) operator, ω_i the frequency, and δ_i the anharmonicity, for i = 1, 2, c. We can write the circuit Hamiltonian as

H_0 = Σ_{i=1,2,c} [ω_i â_i†â_i + (δ_i/2) â_i†â_i(â_i†â_i − 1)] + Σ_{i<j} g_ij (â_i + â_i†)(â_j + â_j†),

with g_ij being the coupling strengths. In the situation where the qubits are far detuned from the coupler, |ω_{1,2} − ω_c| ≫ |g|, namely the dispersive regime, the total Hamiltonian can be perturbatively diagonalized in increasing orders of g/|ω_{1,2} − ω_c|. However, it is important to emphasize that quantum processors can operate beyond the dispersive regime, see Ref. [25]. By summing over coupler states and transforming the Hamiltonian into a block-diagonal frame [26], one can simplify it to an effective Hamiltonian in the computational Hilbert space of the two qubits [27]. This simplification reveals that the two qubits interact only via a ZZ interaction, which is usually considered unwanted and is always on as long as the energy levels are not shifted. The computational part of the effective Hamiltonian in its eigenbasis, namely the 'dressed basis', is

H_eff = −(ω̃_1/2) ẐI − (ω̃_2/2) IẐ + (ζ_s/4) ẐẐ, (1)

with ω̃_i being the qubit frequencies in the tilde (dressed) basis and ζ_s = Ẽ_11 − Ẽ_01 − Ẽ_10 + Ẽ_00 being the static level-repulsion coefficient in the absence of external driving. As long as the qubits are not externally driven, Eq. (1) describes the circuit quantum electrodynamics to acceptable accuracy.

Externally driving Q1, namely the control qubit, at the frequency of Q2, namely the target qubit, introduces the driving Hamiltonian H_dr = Ω cos(ω_2 t)(â_1† + â_1). In the state representation this operator can be written as Σ_{n_c,n_2} (|0, n_c, n_2⟩⟨1, n_c, n_2| + |1, n_c, n_2⟩⟨2, n_c, n_2| + ··· + H.c.), indicating that the external driving triggers transitions on the target qubit. In the frame co-rotating with the driving pulse, ignoring highly excited levels simplifies the Hamiltonian to leading order in Ω, yielding the driven Hamiltonian (2). In Appendix B we briefly explain how one can derive Hamiltonian (2) and evaluate all λ's to leading order in g². In Hamiltonian (2) we dropped driven unwanted interactions that are experimentally removable, such as |000⟩⟨001| + |100⟩⟨101| and |001⟩⟨002| + |101⟩⟨102|, which act as single-qubit gates on Q2. In practice, a secondary simultaneous external pulse with suitable characteristics should be applied to the target qubit to eliminate these unwanted interactions, such as IX, ZY, and IY. Moreover, by either echoing the pulses or applying a virtual Z gate one can eliminate the ZI component. The transitions listed in Eq. (2) are schematically shown in Fig. 3 for typical circuit frequencies used in Fig. 2(b).

Evidently, in the dispersive regime, mapping Hamiltonian (2) onto the computational subspace yields a microwave-assisted part of the ZZ interaction between the qubits [27,28], denoted here by ζ_d. The total ZZ interaction in the presence of the driving pulse is thus ζ = ζ_s + ζ_d. Let us now supply further details about the static part.
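To make the static part concrete, the following is a minimal numerical sketch, not the authors' code, of how ζ_s = Ẽ_11 − Ẽ_01 − Ẽ_10 + Ẽ_00 can be extracted by exact diagonalization of a qubit-coupler-qubit Hamiltonian of the form above; all parameter values are illustrative placeholders, not the device parameters of Table I. Dressed levels are identified by their maximum overlap with the bare states.

```python
import numpy as np

d = 4  # energy levels kept per mode (Q1, coupler C, Q2)

a = np.diag(np.sqrt(np.arange(1, d)), k=1)   # annihilation operator
ad = a.conj().T
n = ad @ a
I = np.eye(d)

def kron3(A, B, C):
    return np.kron(np.kron(A, B), C)

def hamiltonian(w1, w2, wc, d1, d2, dc, g1c, g2c, g12):
    """Two Duffing qubits coupled directly and via a coupler, ordering |Q1, C, Q2>."""
    duff = lambda w, dd: w * n + 0.5 * dd * n @ (n - I)
    H = kron3(duff(w1, d1), I, I) + kron3(I, duff(wc, dc), I) + kron3(I, I, duff(w2, d2))
    x1, xc, x2 = kron3(a + ad, I, I), kron3(I, a + ad, I), kron3(I, I, a + ad)
    return H + g1c * x1 @ xc + g2c * xc @ x2 + g12 * x1 @ x2

def zeta_s(H):
    """zeta_s = E~11 - E~01 - E~10 + E~00, dressed levels found by max overlap."""
    evals, evecs = np.linalg.eigh(H)
    E = lambda n1, nc, n2: evals[np.argmax(np.abs(
        evecs[np.ravel_multi_index((n1, nc, n2), (d, d, d))]))]
    return E(1, 0, 1) - E(0, 0, 1) - E(1, 0, 0) + E(0, 0, 0)

# Illustrative parameters in GHz (2*pi factors dropped consistently)
H = hamiltonian(w1=4.2, w2=4.0, wc=4.8, d1=-0.25, d2=-0.25, dc=-0.3,
                g1c=0.095, g2c=0.095, g12=0.012)
print(f"zeta_s = {zeta_s(H) * 1e6:.1f} kHz")
```

Scanning ω_c in such a sketch reproduces the qualitative structure discussed below for Fig. 5, including the genuine and affine zero crossings of the static ZZ.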
The coupler between the two qubits can be a harmonic oscillator, such as a resonator, or another qubit with finite anharmonicity. Perturbative analysis of a harmonic coupler shows that it supplies an effective ZZ coupling ζ_s^(1) between the two qubits, which depends on the circuit parameters as shown in Eq. (3). A finite coupler anharmonicity δ_c adds the correction ζ_s^(2); Eq. (3) shows both parts in O(g⁴). The effective coupling between the two qubits is given by Eq. (4), with Σ_q = ω_q + ω_c.

In the entangled mode of the PF gate, the qubits are first coupled by changing circuit parameters, and therefore a non-zero static ZZ interaction is expected to show up between them. A cross-resonance pulse is then applied so that a ZX interaction is supplied between the qubits. Let us denote the strength of this coupling by α_ZX. From Eq. (2) one can determine it in the leading perturbative order, α_ZX ∼ λ_1 Ω, which agrees with experiment in the weak-Ω regime [29]. Any further nonlinearity can be studied in higher orders. The ZX interaction transforms quantum states by the operator U = exp(2πi α_ZX τ ẐX/2) during the time τ that the external driving is active. In order to perform a typical π/2 conditional rotation on the second qubit, i.e. ZX_90, the two-qubit state must transform by exp(i(π/2)ẐX/2). This indicates that the external driving should be switched on for τ = 1/(4α_ZX). Therefore, the stronger α_ZX is, the shorter the gate takes. Needless to say, during the whole time the driving is active this interaction is accompanied by the driving-assisted parasitic interaction ζ_d. However, there is a good chance that one can find a large class of parameters at which the total ZZ interaction vanishes. Interestingly, we show in the next section that α_ZX can be strengthened by modulating both the coupling strength between the qubits and the external driving amplitude. This improves the gate performance not only by zeroing parasitic interactions but also by making the gate much faster. In the following sections, we discuss a detailed analysis of several circuit examples and show the performance of the PF gate on them.

III. EXAMPLES OF THE PF GATE

Switching between the I and E modes requires a change of a circuit parameter before the external driving is activated. In a circuit with two qubits and a coupler there are different possibilities for selecting which element is tuned by circuit-parameter modulation and which one is driven externally. Figure 4 shows three possible examples based on superconducting qubits. In circuit (a), two fixed-frequency qubits Q1 and Q2 are coupled by a tunable coupler, which can be another qubit with a flux-tunable frequency, so that, as one can see in Eq. (4), changing the flux modulates the effective coupling strength between the qubits. In circuit (b), the two qubits are coupled to a weakly tunable qubit (WTQ), whose frequency can be tuned in a small range by manipulating an inductive coupling [23]. Circuit (c) is different, as it consists of a flux-tunable Q1 coupled to a fixed-frequency Q2 via a fixed-frequency coupler, but it suffers from a rather limited qubit coherence time. In all three circuits, the E mode can be assisted by externally driving Q1. It is worth mentioning that there are other microwave-activated approaches to implement quantum gates; for instance, imposing additional pulses in circuit (a) has recently proved useful for executing multiqubit gate experiments [30,31].
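Before turning to concrete circuits, the gate-time relation above admits a quick numerical sanity check; this is a sketch, not part of the original paper. With τ = 1/(4α_ZX), the propagator exp(2πi α_ZX τ ẐX/2) reduces to the ZX_90 rotation exp(i(π/2)ẐX/2):

```python
import numpy as np
from scipy.linalg import expm

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
ZX = np.kron(Z, X)

alpha_zx = 5e6            # ZX rate in Hz (illustrative, cf. ~5 MHz in the text)
tau = 1 / (4 * alpha_zx)  # flat-top length -> 50 ns

U = expm(2j * np.pi * alpha_zx * tau * ZX / 2)   # evolution during the drive
U_target = expm(1j * (np.pi / 2) * ZX / 2)       # ideal ZX_90

print(f"tau = {tau * 1e9:.0f} ns")
print("max |U - U_target| =", np.max(np.abs(U - U_target)))  # ~0 up to rounding
```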
Here we study circuit (a), with the tunability in the coupler; this does not mean the coupler must be an asymmetric transmon. In this circuit the PF/I mode is obtained by tuning the coupler to the frequency ω_c^I, and in the PF/E mode it is tuned to ω_c^E, accompanied by a CR drive, as shown in Fig. 2.

To test the performance of the PF gate we numerically study seven sample devices, all parametrized on the circuit of Fig. 4(a). These devices are listed in Table I. The direct capacitive couplings g_12 are grouped into three values: the weakest for device 1, intermediate for devices 2, 3, 4, and 7, and the strongest for devices 5 and 6. Among the devices in the intermediate-g_12 group, device 3 has a stronger coupler anharmonicity, while devices 2 and 4 have similar coupler anharmonicity, yet in device 4 the qubit anharmonicity is stronger. Specifically, device 7 stays outside the straddling regime, with |Δ| > |δ|, and its tunable coupler has positive anharmonicity. In the group of devices 5 and 6 with the strongest g_12, the qubit-qubit detuning is stronger than in all other devices, with the difference that device 5 is a hybrid circuit combining a transmon and a capacitively shunted flux qubit (CSFQ), while device 6 is a transmon-transmon circuit. We consider a universal qubit-coupler coupling strength g_1c/2π = g_2c/2π = 95 MHz, defined at ω_c = 4.8 GHz, for all devices.

We evaluate the static ZZ interaction using the parameters listed in Table I. We take three different approaches for our evaluations. In one approach we numerically diagonalize the Hamiltonian (A1) in a large Hilbert space. We tested these results against another numerical formalism proposed recently in Ref. [32], namely the Non-Perturbative Analytical Diagonalization (NPAD) method. The two methods give the same result, as plotted in Fig. 11 in Appendix A. In the same plot we also present the second-order Schrieffer-Wolff perturbative results (SWT), which turn out to be consistent with the numerical results only when the coupler frequency is tuned far away from the qubits. Figure 5 shows numerical values of the static ZZ interaction at different coupler frequencies ω_c for the seven devices listed in Table I. One can see that all devices possess at least one zero-ZZ point. This will be further discussed below.

A. The idle mode

In the circuit of Fig. 4(a), if the coupler frequency is far detuned from the qubits, there may exist a certain coupler frequency at which the effective interaction between the qubits vanishes, i.e. g_eff = 0. Using Eq. (4) one can easily find this frequency. However, if the coupler frequency is closer to the qubits, the static ZZ can also vanish while the qubits remain effectively coupled; this is marked in Fig. 5 as an affine ZZ-free point. There is also a third type of ZZ zero which lies between the genuine and affine points; however, since it always shows up accompanied by at least one genuine or affine ZZ-free point, we treat it as a trivial solution and do not discuss it further.

Genuine idle (GI) mode: Let us first study the genuine ZZ-free point with effective coupling g_eff = 0. As discussed in Ref. [33], the couplings between interacting elements in a circuit are frequency dependent. The qubit-coupler interaction strengths g_1c and g_2c, denoted in Fig. 2(b), can be rewritten in terms of the capacitances shown in the analogue circuit of Fig. 4(a).
The relation between the two parameter sets can be approximated as g_ic ≈ α_i √(ω_i ω_c) and g_12 ≈ α_12 √(ω_1 ω_2) for the qubit labels i = 1, 2, with α_i = C_ic/(2√(C_i C_c)) and α_12 = (C_12 + C_1c C_2c/C_c)/(2√(C_1 C_2)); a more accurate derivation can be found in Refs. [15,34]. By substituting these relations into Eq. (4) one finds the so-called genuine idle coupler frequency ω_c^GI, Eq. (5), at which the qubits are effectively decoupled. Substituting ω_c^GI into Eq. (3), the static ZZ interaction turns out to retain a small offset. In the dispersive regime this offset is usually only a few kilohertz, owing to the inaccuracy of the second-order perturbation theory used to derive Eq. (3). For example, for the circuit used in Ref. [17], with α_{1,2} ∼ 10 α_12 ∼ 0.02 and ω_{1,2} ∼ 4 GHz, the offset is found to be approximately −2 kHz. This is the main reason why, in the rest of the paper, we do not limit our analysis to perturbation theory and instead take the more accurate approach of numerical Hamiltonian diagonalization. A further comparison can be found in Appendix A. The numerical results show that ω_c^GI is slightly shifted from Eq. (5), by a few MHz. This difference is given for the seven circuits in Table II.

Affine idle (AI) mode: When the coupler frequency is closer to the qubits, the effective coupling g_eff is strengthened such that g_eff ≫ g_12. In this case, solving ζ_s = 0 yields the perturbative idle coupler frequency of Eq. (6).

B. The entangled mode

The PF gate is switched to the entangled mode in two steps. In the first step the coupler frequency is brought away from ω_c^I, so that the static ZZ becomes non-zero. In the second step, the qubits are driven externally by a microwave pulse with amplitude Ω. One two-qubit circuit example is to use the microwave pulse to drive the 'control' qubit, i.e. the qubit whose second excited level is higher, at the frequency of the other qubit, namely the 'target' qubit. As discussed above, this driving is called cross-resonance driving and is supposed to supply only the conditional ZX-type interaction between the qubits; however, the desired external force is accompanied by three types of unwanted operators:

1. Classical crosstalk, triggered by the classical propagation of the driving electromagnetic wave to the position of the target qubit;

2. Control Z rotation, due to driving the control qubit at the qubit-qubit detuning frequency;

3. Microwave-assisted ZZ interaction, triggered by Ω-transitions between computational and non-computational levels, as listed in Eq. (2).

The first two can be eliminated as described in Ref. [35]: by driving the target qubit with a second pulse to eliminate the classical crosstalk, and by either echoing the pulses or software-counterrotating the control qubit to eliminate its detuning Z rotation. However, these methods do not eliminate the third one. To eliminate it, we proposed a method called "dynamic freedom", which sets the total ZZ to zero by fine-tuning the microwave parameters so that the dynamic part cancels out the static parasitic interaction [20]. The PF gate takes advantage of dynamic freedom in the PF/E mode by combining microwave driving with a tunable coupler.

Let us recall that, after eliminating the classical crosstalk and the control Z rotation, the external driving activates Hamiltonian (2), with transitions within and outside the computational levels shown in Fig. 3. By block-diagonalizing this Hamiltonian onto the computational subspace one finds the simplified version of Eq. (7). Perturbation theory helps determine ζ_d and α_ZX in terms of the driving amplitude Ω.
The results show that α_ZX(Ω) depends linearly on Ω at leading order, while ζ_d(Ω) depends on Ω² (for details see Eq. (15) and Figs. 5 and 6 in [20]). One might expect that higher-order corrections could be worked out by adding terms with the next natural-number exponents; however, comparison with experiment has shown in the past that perturbation theory is not accurate beyond the leading order [27]. We therefore use a nonperturbative approach instead, the so-called Least Action (LA) method [27,36]. Our numerical analysis evaluates the total parasitic interaction ζ by adding the driving part to the static part.

We plot the total ZZ interaction in Fig. 6 over a large range of qubit-qubit detunings Δ_12 and coupler frequencies ω_c for two sets of circuit parameters. The left (right) column shows simulations for a set of parameters similar to device 2 (6), except that here we keep Δ_12 variable. On the dashed lines labelled 2 and 6 the detuning frequencies are fixed to the values given in Table I. Each row corresponds to one driving amplitude: Fig. 6(a,b) shows the undriven case Ω = 0, to study the static level repulsions; Fig. 6(c,d) shows the total ZZ interaction under driving with amplitude Ω = 20 MHz; and Fig. 6(e,f) doubles the amplitude to Ω = 40 MHz. In these plots the total parasitic interaction can be either positive (red) or negative (blue). Zero-ZZ devices lie on the black boundaries between the two regions. By closely examining the variation of ζ with ω_c in Fig. 6, one or more zero points can be found for a device with fixed Δ_12. More details, with stronger driving amplitudes, can be found in Appendix C.

In general, there are two types of ZZ-free boundaries. Type I is found in regions where ζ becomes shallow by being gradually suppressed, changing sign smoothly in the white areas; examples are the zeros on the closed loop in (a,c,e) and on the boundary in the middle of (b,d,f). In type II, ζ changes sign abruptly between dark blue and red areas within a narrow parameter domain; examples are the far-left boundaries in (a,c,e). These correspond to two types of PF gate: the Genuine PF gate, going from the genuine idle mode to the E mode with type-I freedom, and the Affine PF gate, going from the affine idle mode to the E mode with type-II freedom.

External driving in Fig. 6(c-f) leaves a large class of devices with zero total ZZ interaction; however, comparing the total ZZ with the static one shows that external driving distorts the freedom boundaries. In subplots (a,c,e), which describe the same devices, increasing Ω shrinks the closed loop surrounding the blue island on the right, while a new closed loop appears in the middle surrounding a red island. These boundaries are additionally distorted for devices with resonant qubits, Δ_12 = 0, and for devices at the symmetric point, Δ_12 = −δ_1/2. Perturbation theory predicts that ζ diverges at these two points; we mark them with darker green dashed lines. Our nonperturbative numerical results based on the LA method show that ζ stays finite; however, with increasing driving power the microwave-assisted component ζ_d is strongly magnified near these detuning values and heavily dominates the total ZZ, so the zero boundaries are strongly distorted. Further discussion of the derivation of the microwave-assisted component ζ_d can be found in Appendix B.
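Since ζ_d grows as η_2 Ω² at leading order, a cancellation amplitude exists wherever η_2 and ζ_s carry opposite signs. The sketch below (not the authors' code; η_2, η_a, and ζ_s are illustrative placeholders, not values from Table I) solves ζ_s + ζ_d(Ω) = 0 both in closed form at leading order and numerically when a higher-order term is included:

```python
import numpy as np
from scipy.optimize import brentq

zeta_s = 60e3    # static ZZ in Hz (illustrative)
eta2 = -2.4e-9   # leading-order drive coefficient in 1/Hz (illustrative)

# Leading order: zeta_s + eta2 * Omega**2 = 0
omega_star = np.sqrt(-zeta_s / eta2)
print(f"leading-order freedom amplitude: {omega_star / 1e6:.1f} MHz")

# With an illustrative higher-order term eta_a * Omega**a (a ~ 4, cf. Fig. 8)
eta_a, a = 5e-26, 4.0
total_zz = lambda om: zeta_s + eta2 * om**2 + eta_a * om**a
print(f"corrected freedom amplitude: {brentq(total_zz, 1e6, 1e8) / 1e6:.1f} MHz")
```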
Let us now study the devices listed in Table I: we drive each externally with amplitude Ω and evaluate the coupler frequencies at which dynamic freedom occurs. An amplitude associated with a ZZ-free coupler frequency is called a freedom amplitude, denoted Ω*. Figure 7(a) shows that driving device 1 with amplitudes below 60 MHz sets the total parasitic interaction to zero. Devices 2-4 show three such frequencies within a rather weaker domain of freedom amplitudes, with an interesting feature at the rightmost one, near ∼6.6 GHz: increasing the driving amplitude does not change this largest ω_c^E, so at this frequency not only is the static level repulsion ζ_s zero, but the driving-assisted component ζ_d also vanishes, since g_eff = 0. Devices 5 and 6 show a decoupling frequency below a certain driving amplitude, above which further ZZ-free coupler frequencies are added. In device 6, however, there is a frequency domain between M_1 and M_2 in which parasitic freedom is not expected to take place; we indicate it by the shaded region and come back to it later. For device 7, which stays outside the straddling regime, static ZZ freedom is only realized at a higher coupler frequency.

A two-qubit gate must not only have high fidelity, it must also be fast, so that many operations can be performed within the qubit coherence times. As discussed above, it is important that the external driving supplies a strong ZX interaction, mainly because the strength α_ZX scales inversely with the time consumed to perform the gate, the so-called gate length τ ∼ 1/α_ZX. Therefore, the entangled mode of the PF gate must be tuned to a coupler frequency that is not only located on a ZZ-free boundary but also presents a short gate length. Figure 7(b) plots the ZX strength at all freedom amplitudes and shows that the ZX rate is stronger at lower ω_c; a ZZ-free coupler frequency with strong ZX strength can thus be used as ω_c^{PF/E}. One exception is device 7, which stays outside the straddling regime and has a weaker ZX rate, so a CR-like gate is not feasible on it and it is not discussed further.

One of the advantages of our numerical analysis is that it can predict the nonlinear corrections in both ζ and α_ZX appearing in Eq. (7). Perturbation theory determines the leading orders of α_ZX and ζ_d to be linear and quadratic in Ω, respectively [20]. Perturbative treatments consider higher-order terms with the next natural-number exponents above the leading terms; however, in comparison with experiment, those results are not accurate beyond the leading order [27]. Since our approach is different, we consider higher-order corrections with real-number exponents, ζ_d = η_2 Ω² + η_a Ω^a and α_ZX = μ_1 Ω + μ_b Ω^b. Our numerical results for ζ_d in device 6 estimate the exponents a and b at different coupler frequencies ω_c. The results are summarized in Fig. 8, in which the far-left points are similar to the perturbative results, i.e. a = 4 in ζ and b ∼ 3 in α_ZX. However, there is a frequency domain in which a increases and nearly reaches 5. Moreover, within the same domain the ZX rate vanishes, which makes the exponent b meaningless in that region. In device 6 the higher-order term of ζ_d has the opposite sign to η_2 Ω² + ζ_s and is the dominant term in the coupler frequency domain 5.1-5.5 GHz. Therefore, in this domain the total ZZ cannot vanish. This explains why there is a shaded area for device 6 in Fig. 7(a) where the PF/E mode cannot be found. More details can be found in Appendix D.

Figure 8. Beyond-second-order (beyond-first-order) exponent a (b) of Ω in ζ_d (α_ZX) at different coupler frequencies ω_c for device 6. The shaded area denotes the region where the effective coupling g_eff is small and starts to change its sign.
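Exponents such as those in Fig. 8 can be extracted by a log-log fit of the numerically obtained higher-order residuals. A minimal sketch follows, with synthetic data standing in for the LA results (all coefficient values are invented for illustration):

```python
import numpy as np

# Synthetic stand-in for numerically computed zeta_d(Omega) (Hz): a leading
# eta2*Omega^2 term plus a higher-order contribution with exponent a = 4.3
omega = np.linspace(10e6, 60e6, 12)
eta2, eta_a, a_true = -2.4e-9, 3e-25, 4.3
zeta_d = eta2 * omega**2 + eta_a * omega**a_true

# Subtract the leading order (obtainable from a weak-drive fit of eta2),
# then fit the residual on log-log axes; the slope is the exponent a:
residual = zeta_d - eta2 * omega**2
slope, intercept = np.polyfit(np.log(omega), np.log(np.abs(residual)), 1)
print(f"fitted exponent a = {slope:.2f} (true {a_true})")
```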
IV. ERROR MITIGATION OF THE PF GATE

Switching from the idle mode to the entangled mode takes place in two steps: first the coupler frequency is changed, then the microwave pulse is activated. Switching back takes place in the reversed order: first the microwave pulse is switched off, then the coupler frequency is changed. In each step there is a possibility that the quantum states accumulate error. Although the PF gate can effectively eliminate the ZZ interaction throughout, it still suffers from unwanted transitions during the coupler frequency change. Moreover, the limited qubit coherence times are another source of fidelity loss. Here we quantify the performance of both the Genuine PF gate and the Affine PF gate by calculating gate fidelity.

A. Error during coupler frequency variation

For the Genuine PF gate, the coupler frequency is far detuned from the qubit frequencies, and switching to the entangled mode requires bringing the coupler frequency much closer to the qubits. For the Affine PF gate, the coupler frequency is near the qubits, and switching to the entangled mode only needs a slight change of coupler frequency, e.g. with a WTQ. Figure 9(a) sketches the two types of PF gate implemented on device 2. In either case, to avoid reinitialization of the qubit states after a frequency change, we can perform the change such that no leakage takes place from the computational subspace to other energy levels. This mandates performing the coupler frequency change adiabatically [37]. The leakage rate out of the computational levels depends on the ramping speed of the coupler frequency, dω_c/dt. In particular, if the coupler frequency is tuned by an external magnetic flux f = Φ_ext/Φ_0, with Φ_0 the flux quantum, the rate of coupler frequency change can be written in terms of df/dt [38]. Here we compare two protocols for the pulse envelope to quantify the leakage due to the ω_c modulation.

Genuine PF gate: Figure 9(b) shows the two pulses we prepared for device 2: a hyperbolic-tangent envelope (solid line) and a flat-top Gaussian envelope (dashed line). The qubits are decoupled at ω_c^GI = 6.577 GHz. Figure 7(b) shows that α_ZX is rather strong, nearly ∼5 MHz, at the frequency 4.8 GHz, which we take as ω_c^E. Note that a much stronger α_ZX of nearly 10 MHz is also possible on this device, corresponding to a 0.2 GHz lower coupler frequency.

Affine PF gate: Figure 9(c) shows the same two types of pulse envelope. The difference is that for the Affine PF gate the idle coupler frequency ω_c^AI is lower than ω_c^E. On device 2 the qubits are ZZ-free at ω_c^AI = 4.509 GHz. To make a comparison, we tune the coupler frequency so that α_ZX is also around 5 MHz but much closer to ω_c^AI, at the frequency 4.530 GHz, which we take as ω_c^E.
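The two ramp shapes can be written down compactly. The sketch below is an illustration only: the paper does not give the exact functional forms, so a plain tanh sweep and an erf-shaped (Gaussian-smoothed) step are assumed as stand-ins, with the frequency endpoints taken from the Genuine PF gate values quoted above.

```python
import numpy as np
from scipy.special import erf

def tanh_ramp(t, w_start, w_stop, tau0, steep=6.0):
    """Hyperbolic-tangent sweep of the coupler frequency over a rise time tau0."""
    s = np.tanh(steep * (2 * t / tau0 - 1))   # runs from ~-1 to ~+1 on [0, tau0]
    return w_start + 0.5 * (w_stop - w_start) * (s + 1)

def flattop_gaussian_ramp(t, w_start, w_stop, tau0, sigma_frac=0.25):
    """Gaussian-smoothed (erf-shaped) step of length tau0."""
    sigma = sigma_frac * tau0
    s = 0.5 * (1 + erf((t - 0.5 * tau0) / (np.sqrt(2) * sigma)))
    return w_start + (w_stop - w_start) * s

t = np.linspace(0, 35e-9, 200)               # tau0 = 35 ns, cf. the text below
wc = tanh_ramp(t, 6.577e9, 4.8e9, 35e-9)     # GI point -> E-mode frequency
print(f"start {wc[0] / 1e9:.3f} GHz, end {wc[-1] / 1e9:.3f} GHz")
```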
Let us denote the total time for the PF gate to start at ω_c^I and return to it by t = 2τ_0 + t_g, in which t_g is the microwave activation time between the two coupler frequency changes, and each coupler frequency change takes a time τ_0. By varying τ_0 and solving the differential equations of the open quantum system, we can evaluate the fidelity loss of the computational states and then determine the optimized pulse for the frequency change. Here we calculate the fidelity loss of the computational states in the absence of driving pulses, to evaluate the errors from switching between the idle and entangled modes. Both pulses show that, overall, increasing τ_0 increases the computational state fidelity. However, there is a difference between the two pulse performances. The hyperbolic-tangent envelope, which changes the coupler frequency rapidly between the I and E modes, can further reduce the error, raising all computational state fidelities above 99.9% at τ_0 = 35 ns. For the Affine PF gate, the individual state fidelity loss is less than 0.1%, as shown in Fig. 9(e); in particular, the shortest rise/fall time achieving 99.9% total state fidelity is τ_0 = 15 ns for the hyperbolic-tangent envelope.

B. Gate error during external driving

For the two pulses discussed above, once the coupler frequency is changed to its lower value, the circuit is ready to experience a ZZ-free ZX interaction. This takes place by turning on the microwave for a time t_g. The length t_g of the ZX gate is governed by the microwave driving. During the time t_g the qubit pair enjoys absolute freedom from the total ZZ interaction; the gate fidelity, however, is limited by the qubit coherence times. Let us note that we do not consider the option of echoing the microwave driving, as this doubles the gate length. Instead, we follow the recent practice at IBM, where a single cross-resonance drive is applied to the qubits followed by a virtual Z rotation [35]. We also take the example of a ZX_90 pulse for this analysis. For this pulse, the relation between α_ZX and the length τ of the flat top of the microwave pulse was discussed at the end of Section II, i.e. τ = 1/(4α_ZX). Here we consider rounded-square microwave pulses with ∼20 ns rise and ∼20 ns fall times, as shown in the inset of Fig. 10. Thus the total ZX-gate length is t_g = 40 ns + 1/(4α_ZX).

The coherence times of Q1 and Q2 are ideally assumed to be identical. The resulting error rates show a minimum at a certain gate length t_g, where the error rate is as low as that expected from the coherence-time limitation alone, the device there experiencing total ZZ freedom. Among the five devices 2-6 plotted, device 5 (a hybrid CSFQ-transmon) has the shortest gate length, followed by device 6 (a pair of transmons with 200 MHz frequency detuning). The minimum error rate, although it cannot be eliminated without perfecting the individual qubits, indicates the possibility of a ZX interaction gate with fidelity as high as 99.9%. Note that there is a limitation on the gate length, which appears as a cutoff in Fig. 10; see Ref. [20] for more details.

For the Affine PF gate, we tune the coupler frequency such that the ZX rate in devices 2-4 is the same as that of the Genuine PF gate at the corresponding freedom amplitude, and plot the error rate in Fig. 10(b). The difference compared to the Genuine PF gate is that the coupler is tuned only within a narrow domain, e.g. 50 MHz using a WTQ, which can effectively suppress decoherence from flux noise. Moreover, the driving amplitude required for the same gate duration is weaker, so the total ZZ is smaller and less detrimental to the gate fidelity. Here we also study device 1, with the same qubit detuning as device 2. Figure 10(b) shows that device 1 enables an Affine PF gate with a stronger ZX rate, and therefore a shorter gate length and a lower error rate.

Summing the error rates from both the rise/fall times and decoherence, for device 2 one can estimate a minimum total length of 105 ns for the Affine PF gate, which takes Q1 and Q2 from a ZZ-free affine idle mode to the entangled mode and back to the original affine idle mode, only by weakly tuning the coupler. The idle-to-idle error rate during the Affine PF gate time t_g + 2τ_0 is about 0.1%.
For the Genuine PF gate, in contrast, the minimum gate length is 135 ns, with 99.7% gate fidelity. The PF gate could become even shorter if the microwave rise and fall times were merged with the two coupler-frequency switching intervals; for the example discussed above this could save up to 40 ns of gate length. Theoretically, such a time saving requires careful analysis within optimal control theory, which goes beyond the scope of this paper; experimentally, however, it can be investigated.

V. SUMMARY

To summarize, we propose a new two-qubit gate that combines idling and entangling modes. This gate can safely switch qubit states between idle and ZX-entangled modes, and in both modes the quantum states do not accumulate conditional ZZ phase error over time. By zeroing ZZ in both modes, the qubits are safely phase-locked to their states before, during, and after entanglement. This gate can be realized in superconducting circuits by combining tunable circuit parameters with external driving in two ways: 1) In the genuine idle mode, tuning a circuit parameter decouples the qubits and therefore the static ZZ interaction vanishes; in the entangled mode, the static ZZ interaction is cancelled by a microwave-assisted ZZ component, so that the qubits are left only with the ZX interaction. 2) In the affine idle mode, the qubits are strongly coupled, but the level repulsions from the two sides of the computational space cancel each other; in the entangled mode, the qubits ZX-interact with zero ZZ. We evaluate a typical time length for the PF gate when its fidelity is limited only by the qubit coherence times. In a complete operational cycle from idle mode to idle mode, passing once through an entangled mode, the Affine PF gate is as short as 105 ns with an overall error budget of about 0.1%. We show that this gate is universally applicable to all types of superconducting qubits, such as all-transmon or hybrid circuits, and is certainly not limited to frequencies in the dispersive regime. We believe the PF gate will pave a new way toward high-quality quantum computation in large-scale quantum processors.

ACKNOWLEDGEMENT

The authors thank Britton Plourde, Thomas Ohki, Guilhem Ribeill, Jaseung Ku, and Luke Govia for insightful discussions. We gratefully acknowledge funding by the German Federal Ministry of Education and Research within the funding program "Photonic Research Germany" under contract number 13N14891, and within the funding program "Quantum Technologies - From Basic Research to the Market" (project GeQCoS), contract number 13N15685.

Appendix A: Comparing numerical and perturbation methods

The circuit Hamiltonian in the lab frame is written in the form of multilevel systems as

H_0 = Σ_{i=1,2,c} Σ_n ω_i(n_i) |n_i + 1⟩⟨n_i + 1| + Σ_{i<j} Σ_n √((n_i + 1)(n_j + 1)) g_ij (|n_i, n_j⟩⟨n_i + 1, n_j + 1| + H.c.), (A1)

with ω_i(n) being the bare energy of level n of subsystem i (i = 1, 2, c). In particular, the frequency and anharmonicity simplify to ω_i(0) = ω_i and δ_i(0) = δ_i. We evaluate the static ZZ for the seven devices listed in Table I by fully diagonalizing their corresponding circuit Hamiltonians; the results are plotted in Fig. 5. Moreover, we compare the static ZZ interaction in device 2 between the following three methods: numerical simulation (Numeric), NPAD [32], and the Schrieffer-Wolff transformation (SWT). Figure 11 shows the static ZZ versus the coupler frequency (lower x axis) and versus the effective coupling strength g_eff (upper x axis).
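For concreteness, the genuine idle frequency ω_c^GI can also be located numerically. The sketch below assumes the standard dispersive-regime expression g_eff = g_12 + (g_1c g_2c/2)(1/Δ_1 + 1/Δ_2 − 1/Σ_1 − 1/Σ_2), with Δ_q = ω_q − ω_c and Σ_q = ω_q + ω_c, consistent with the Σ_q quoted for Eq. (4), together with the frequency-dependent couplings g_ic ≈ α_i√(ω_i ω_c) and g_12 ≈ α_12√(ω_1 ω_2) from Sec. III A; the α values are illustrative, of the order quoted for the circuit of Ref. [17].

```python
import numpy as np
from scipy.optimize import brentq

w1, w2 = 4.2e9, 4.0e9    # qubit frequencies in Hz (illustrative)
a1 = a2 = 0.02           # alpha_i, cf. ~0.02 for the circuit of Ref. [17]
a12 = 0.002              # alpha_12 ~ alpha_i / 10

def g_eff(wc):
    g1c, g2c = a1 * np.sqrt(w1 * wc), a2 * np.sqrt(w2 * wc)
    g12 = a12 * np.sqrt(w1 * w2)
    d1, d2 = w1 - wc, w2 - wc    # detunings Delta_q
    s1, s2 = w1 + wc, w2 + wc    # sums Sigma_q
    return g12 + 0.5 * g1c * g2c * (1/d1 + 1/d2 - 1/s1 - 1/s2)

# Genuine idle point: coupler frequency above the qubits where g_eff crosses zero
w_gi = brentq(g_eff, 4.5e9, 9e9)
print(f"omega_c^GI ~ {w_gi / 1e9:.3f} GHz")
```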
Appendix B: Driven Hamiltonian

When the microwave drive is on, the CR driving Hamiltonian H_d = Ω cos(ω_2 t) Σ_{n_1} (|n_1⟩⟨n_1 + 1| + |n_1 + 1⟩⟨n_1|) needs to be transformed into the same frame as the qubit Hamiltonian. In the rotating frame defined by W = Σ_{i=1,2,c} Σ_n exp(−iω_d t n_i)|n_i⟩⟨n_i|, the total Hamiltonian becomes H̃ = W†(H̃_0 + H̃_d)W + i(∂_t W†)W, where H̃_0 = U†H_0U, with U being the unitary operator that fully diagonalizes H_0, and H̃_d = U†H_dU. For simplicity we assume g_1c = g_2c = g, g_12 = 0, and δ_1 = δ_2 = δ; the transition rates in Eq. (2) are then derived from perturbation theory and listed in Table IV. In the entangled mode only the qubits are encoded, so we can further simplify the total Hamiltonian by decoupling the tunable coupler, block-diagonalizing the Hamiltonian, and then rewriting it in terms of Pauli matrices, as discussed in Sec. III B.

Appendix C: Dynamic ZZ freedom

Figure 12 shows how the driving amplitude Ω impacts the total ZZ interaction. In device 2, the static ZZ interaction exhibits three zero-ZZ points as a function of the coupler frequency. Increasing the driving amplitude makes the total ZZ interaction smaller and finally annihilates two of the zero-ZZ points, leaving only ω_c^GI. The behaviour of device 6 is opposite: the external drive makes it possible to realize ZZ freedom beyond the single ω_c^GI point. To show how the computational states accumulate conditional phase error, we plot exp(iζτ_p) for devices 2 and 6 during idling periods of duration τ_p in Fig. 13. Usually such fringes can be measured by performing a Ramsey-like experiment on the target qubit to validate the ZZ cancellation [18]. One can see in Fig. 13(a) that device 2 does not accumulate conditional phase at two additional coupler frequencies beyond the g_eff = 0 point. Figure 13(c) shows that these two ZZ-free coupler frequencies merge into one at the critical amplitude Ω = 47 MHz. Above this amplitude, e.g. Ω = 60 MHz in Fig. 13(e), the circuit can be free from parasitic ZZ interaction only at ω_c^GI, indicating that the idle mode is robust against external driving. Device 6, however, shows the opposite phenomenon. In the idle mode, device 6 is ZZ-free only at ω_c^GI, as shown in Fig. 13(b). When the driving amplitude is increased, ZZ freedom can be found at additional coupler frequencies, as shown in Figs. 13(d) and 13(f).

Appendix D: Impact of higher-order corrections

Figures 14(a) and 14(b) show the total ZZ interaction and the ZX rate in device 6 at different coupler frequencies. The dashed lines indicate the trend of the Pauli coefficients without the higher-order corrections (i.e. without the Ω^a and Ω^b terms). In reality, however, corrections from higher levels contribute, such that the ZZ curves become flatter and finally purely negative with increasing coupler frequency, while the ZX rate passes continuously from positive to negative owing to the sign change of g_eff. The normalized higher-order terms are evaluated and plotted in Figs. 14(c) and 14(d). On a logarithmic scale these higher-order terms are almost linear at lower driving amplitudes and become flatter with increasing driving amplitude Ω; the slopes also increase with the coupler frequency. Moreover, when the coupler frequency is tuned to be around ω_c^GI, the effective coupling g_eff is quite weak and changes its sign. Since η_2 ∝ g_eff² and μ_1 ∝ g_eff are extremely small in the vicinity of the idle coupler frequency ω_c^I, the higher-order terms contribute dominantly already at weak driving amplitude.
SeaDronesSee: A Maritime Benchmark for Detecting Humans in Open Water

Unmanned Aerial Vehicles (UAVs) are of crucial importance in search and rescue missions in maritime environments due to their flexible and fast operation capabilities. Modern computer vision algorithms are of great interest in aiding such missions. However, they are dependent on large amounts of real-case training data from UAVs, which is only available for traffic scenarios on land. Moreover, current object detection and tracking data sets provide only limited environmental information or none at all, neglecting a valuable source of information. Therefore, this paper introduces a large-scale visual object detection and tracking benchmark (SeaDronesSee) aiming to bridge the gap from land-based vision systems to sea-based ones. We collect and annotate over 54,000 frames with 400,000 instances captured from various altitudes and viewing angles, ranging from 5 to 260 meters and 0 to 90 degrees, while providing the respective meta information for altitude, viewing angle and other meta data. We evaluate multiple state-of-the-art computer vision algorithms on this newly established benchmark, serving as baselines. We provide an evaluation server where researchers can upload their predictions and compare their results on a central leaderboard.¹

Introduction

Unmanned Aerial Vehicles (UAVs) equipped with cameras have grown into an important asset in a wide range of fields, such as agriculture, delivery, surveillance, and search and rescue (SAR) missions [5,48,21]. In particular, UAVs are capable of assisting in SAR missions due to their fast and versatile applicability while providing an overview of the scene [38,26,6]. Especially in maritime scenarios, where wide areas need to be quickly overseen and searched, the efficient use of autonomous UAVs is crucial [54].

* These authors contributed equally to this work. The order of names is determined by coin flipping.
¹ The leaderboard, the data set and the code to reproduce our results are available at https://seadronessee.cs.uni-tuebingen.de.

Among the most challenging issues in this application scenario is the detection, localization, and tracking of people in open water [20,41]. The small size of people relative to search radii and the variability in viewing angles and altitudes require robust vision-based systems. Currently, these systems are implemented via data-driven methods such as deep neural networks. These methods depend on large-scale data sets portraying real-case scenarios to obtain realistic imagery statistics. However, there is a great lack of large-scale data sets in maritime environments. Most data sets captured from UAVs are land-based, often focusing on traffic environments, such as VisDrone [58] and UAVDT [16]. Many of the few data sets that are captured in maritime environments fall into the category of remote sensing, often leveraging satellite-based synthetic aperture radar [12]. All of these are only valuable for ship detection [11], as they do not provide the resolution needed for SAR missions. Furthermore, satellite-based imagery is susceptible to clouds and only provides top-down views. Finally, many current approaches in the maritime setting rely on classical machine learning methods, incapable of dealing with the large number of influencing variables and calling for more elaborate models [44]. This work aims to close the gap between large-scale land-based data sets captured from UAVs and maritime-based ones.
We introduce a large-scale data set of people in open water, called SeaDronesSee. We captured videos and images of swimming test subjects in open water with various UAVs and cameras. As it is especially critical in SAR missions to detect and track objects from a large distance, we captured the RGB footage at resolutions of 3840×2160 px to 5456×3632 px. We carefully annotated ground-truth bounding box labels for objects of interest, including swimmer, floater (swimmer with life jacket), life jacket, swimmer† (person on boat not wearing a life jacket), floater† (person on boat wearing a life jacket), and boat.

Moreover, we note that current data sets captured from UAVs provide only very coarse meta information or none at all. We argue that this is a major impediment to the development of multi-modal systems, which take this additional information into account to improve accuracy or speed. Recently, methods that rely on such meta data were proposed; however, they note the lack of large-scale publicly available data sets in that regime (see e.g. [27,51,36]). Therefore, we provide precise meta information for every frame and image, including altitude, camera angle, speed, time, and others.

In maritime settings, the use of multi-spectral cameras with Near Infrared channels to detect humans can be advantageous [20]. For that reason, we also captured multi-spectral images using a MicaSense RedEdge. This enables the development of detectors taking into account the non-visible light spectra Near Infrared (842 nm) and Red Edge (717 nm).

Finally, we provide detailed statistics of the data set and conduct extensive experiments using state-of-the-art models, hereby establishing baselines. These serve as a starting point for our SeaDronesSee benchmark. We release the training and validation sets with complete bounding box ground truth, but only the test set's videos/images. The ground truth of the test set is used by the benchmark server to calculate the generalization power of the models. We set up an evaluation web page where researchers can upload their predictions and opt to publish their results on a central leaderboard, such that transparent comparisons are possible. The benchmark focuses on three tasks: (i) object detection, (ii) single-object tracking and (iii) multi-object tracking, which will be explained in more detail in the subsequent sections.

Our main contributions are as follows:

• To the best of our knowledge, SeaDronesSee is the first large annotated UAV-based data set of swimmers in open water. It can be used to further develop detectors and trackers for SAR missions.

• We provide full environmental meta information for every frame, making SeaDronesSee the first UAV-based data set of that nature.

• We provide an evaluation server to prevent researchers from overfitting and allow for fair comparisons.

• We perform extensive experiments with state-of-the-art object detectors and trackers on our data set.

Related Work

In this section, we review major labeled data sets in the field of computer vision from UAVs and in maritime scenarios which are usable for supervised learning models.

Labeled Data Sets Captured from UAVs

Over the last few years, quite a few data sets captured from UAVs have been published. The most prominent are those that depict traffic situations, such as VisDrone [58] and UAVDT [16]. Both data sets focus on object detection and object tracking in unconstrained environments.
Pei et al. [43] collect videos (Stanford Drone Dataset) showing traffic participants on campuses (mostly people) for human trajectory prediction, also usable for object detection. UAV123 [39] is a single-object tracking data set consisting of 123 video sequences with corresponding labels. The clips mainly show traffic scenarios and common objects. Both Hsieh et al. [24] and Mundhenk et al. provide aerial data sets of cars.

Labeled Data Sets in Maritime Environments

Many data sets in maritime environments are captured from satellite-based synthetic aperture radar and therefore fall into the remote sensing category. In this category, the Airbus ship data set [2] is prominent, featuring 40k images from synthetic aperture radars with instance segmentation labels. Li et al. [30] provide a data set of ships with images mainly taken from Google Earth, but also a few UAV-based images. In [52], the authors provide satellite-based images of natural scenes, mainly land-based but also harbors. The most similar to our work is [34]; they also consider the problem of human detection in open water. However, their data mostly contain images close to shores and of swimming pools. Furthermore, the data set is not publicly available.

Multi-Modal Data Sets Captured from UAVs

UAVDT [16] provides coarse meta data for its object detection and tracking data: every frame is labeled with altitude information (low, medium, high), angle of view (front-view, side-view, bird-view) and light conditions (day, night, foggy). Wu et al. [51] manually label VisDrone after its release with the same annotation information for the object detection track. Mid-Air [19] is a synthetic multi-modal data set with images in nature containing precise altitude, GPS, time, and velocity data but without annotated objects. Blackbird [7] is a real-data indoor data set for agile perception also featuring this meta information. In [35], street-view images with the same meta data are captured to benchmark appearance-based localization. Bozcan et al. [10] release a low-altitude (< 30 m) object detection data set containing images showing a traffic circle and provide meta data such as altitude, GPS, and velocity, but exclude the important camera angle information. Tracking data sets often provide meta data (or attribute information) for the clips. However, in many cases this does not refer to the environmental state in which the footage was captured; instead, it abstractly describes the way in which a clip was captured: UAV123 [39] labels its clips with information such as aspect ratio change, background clutter, and fast motion, but does not provide frame-by-frame meta data. The same observation can be made for the tracking track of VisDrone [18]. See Table 1 for an overview of annotated aerial data sets.

Data Set Generation

We gathered the footage on several days to obtain variance in light conditions. Taking into account safety and environmental regulations, we asked over 20 test subjects to be recorded in open water. Boats transported the subjects to the area of interest, where quadcopters were launched at a safe distance from the swimmers. At the same time, the fixed-wing UAV Trinity F90+ was launched from the shore. We used waypoints to ensure a strict flight schedule, maximizing data collection efficiency. Care was taken to maintain a strict vertical separation at all times. Subjects were free to wear life jackets, of which we provided several differently colored pieces (see also Figure 2).
To diminish the effect of camera biases within the data set, we used multiple cameras, as listed in Table 2, mounted on the following drones: DJI Matrice 100, DJI Matrice 210, DJI Mavic 2 Pro, and a Quantum Systems Trinity F90+. With the video cameras we captured videos at 30 fps. For the object detection task, we extract at most three frames per second from these videos to avoid redundant occurrences of frames. See Section 4 for information on the distribution of images with respect to the different cameras. Lastly, we captured top-down-looking multi-spectral imagery at 1 fps. We used a MicaSense RedEdge-MX, which records five wavelengths (475 nm, 560 nm, 668 nm, 717 nm, 842 nm). Therefore, in addition to the RGB channels, the recordings also contain a Red Edge and a Near Infrared channel. The camera was calibrated with a white reference before each flight. As the RedEdge-MX captures every band individually, we merge the bands using the development kit provided by MicaSense.

Meta Data Collection

Every frame is accompanied by a meta stamp, logged at 10 Hz. To align the video data (30 fps) with the time stamps, a nearest-neighbor method is used. The data in Table 3 is logged and provided for every image/frame, read from the onboard clock, barometer, IMU, GPS sensor, and gimbal, respectively. Note that α = 90° corresponds to a top-down view, and α = 0° to a horizontally facing camera. The date format is the extended form of ISO 8601. Furthermore, note that the UAV roll/pitch/yaw angles are of minor importance for meta-data-aware vision-based methods, as the onboard gimbal filters out movements of the drone such that the camera pitch angle is roughly constant if it is not intentionally changed [25]. The gimbal yaw angle is not included, as we fix it to coincide with the UAV's yaw angle. We need to emphasize that the meta values lie within the error thresholds introduced by the different sensors; an extended analysis is beyond the scope of this paper (see e.g. [61,1,29] for an overview).

Annotation Method

Using the non-commercial labeling tool DarkLabel [3], we manually and carefully annotated all provided images and frames with the categories swimmer (person in water without life jacket), floater (person in water with life jacket), life jacket, swimmer† (person on boat without life jacket), floater† (person on boat with life jacket), and boat. We note that it is not sufficient to infer the class floater from the locations of swimmer and life jacket, as this can be highly ambiguous. Subsequently, all annotations were checked by experts in aerial vision. We chose these classes as they are the hardest and most critical to detect in SAR missions. Furthermore, we annotated regions with other objects, such as boats on land, as ignored regions. Moreover, the data set also covers unlabeled objects which may not be of interest, like driftwood, birds, or the coast, so that detectors can learn to be robust against such distractors. Our guidelines for the annotation are described in the appendix. See Figure 2 for examples of objects.
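The nearest-neighbor alignment between the 30 fps video frames and the 10 Hz meta stamps described above can be sketched as follows; this is a minimal illustration, not the authors' code, and the timestamps are synthetic:

```python
import numpy as np

def nearest_meta(frame_ts, meta_ts):
    """For each frame timestamp return the index of the closest meta stamp."""
    order = np.argsort(meta_ts)
    meta_sorted = meta_ts[order]
    pos = np.searchsorted(meta_sorted, frame_ts)          # insertion points
    pos = np.clip(pos, 1, len(meta_sorted) - 1)
    left, right = meta_sorted[pos - 1], meta_sorted[pos]
    choose_right = (right - frame_ts) < (frame_ts - left)
    return order[pos - 1 + choose_right.astype(int)]

frame_ts = np.arange(0, 10, 1 / 30)   # 30 fps video timestamps (seconds)
meta_ts = np.arange(0, 10, 1 / 10)    # 10 Hz meta stamps (seconds)
idx = nearest_meta(frame_ts, meta_ts)
# Each frame now inherits altitude, gimbal angle, GPS, etc. from meta[idx]
print(idx[:7])   # e.g. [0 0 1 1 1 2 2]
```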
Data Set Split

Object Detection: To ensure that the training, validation, and testing sets have similar statistics, we roughly balance them such that the respective subsets have similar distributions with respect to altitude and angle of view, two of the most important factors of appearance change. Of the individual images, we randomly select 4/7 for the training set, 1/7 for the validation set, and the remaining 2/7 for the testing set. In addition to the individual images, we randomly cut every video into three parts of length 4/7, 1/7, and 2/7 of the original length and add every 10th frame of the respective parts to the training, validation, and testing sets. This is done to avoid having subsequent frames in the training and testing sets, such that a realistic evaluation is possible. We release the training and validation sets with all annotations, and the testing set's images but not its annotations. Evaluation is available via an evaluation server, to which the predictions on the test set can be uploaded.

Figure 2. Examples of objects. Note that these examples are crops from high-resolution images; as the objects are small and the images are taken from high altitudes, they appear blurry.

Object Tracking: Similarly, we take 4/7 of our recorded clips as training clips, 1/7 as validation clips, and 2/7 as testing clips. As for the object detection task, we withhold the annotations of the testing set and provide an evaluation server.

Data Set Tasks

There are many works on UAV-based maritime SAR missions focusing on unified frameworks describing the process of how to search for and rescue people [38,20,33,34,45,47,22]. These works answer questions concerning path planning, autonomous navigation, and efficient signal transmission. Most of them rely on RGB sensors and on detection and tracking algorithms to actually find the people of interest. This commonality motivates us to extract the specific tasks of object detection and tracking, which pose some of the most challenging issues in this application scenario. Maritime environments from a UAV's perspective are difficult for a variety of reasons: reflective regions and shadows resulting from different cardinal directions (such as in Fig. 1) can lead to false positives or negatives; people may be hardly visible or occluded by waves or sea foam (see Supplementary Material); and typically large areas are overseen, such that objects are particularly small [38]. We note that these factors come on top of general UAV-related detection difficulties. We now proceed to describe the specific tasks.

Object Detection

There are 5,630 images (training: 2,975; validation: 859; testing: 1,796). See Figure 3 for the distribution of images/frames with respect to cameras and for the class distribution. We recorded most of the images with the L1D-20c and UMC-R10C, which have the highest resolutions. With the RedEdge-MX, which has the lowest resolution, we recorded only 432 images. Note that for the Object Detection Task only the RGB channels of the multi-spectral images are used, to support a uniform data structure. Furthermore, the class distribution is slightly skewed towards the class 'boat', since safety precautions require boats to be nearby. We emphasize that this bias can easily be diminished by blackening the respective regions, as is common for areas which are not of interest or undesired (such as boats here; see e.g. [16]). After that, swimmers with life jackets are the most common objects. We argue that this scenario is encountered very often in SAR missions. This class is often easier to detect than plain swimmers, as life jackets mostly have contrasting colors, such as red or orange (see Fig. 2 and Table 4). However, as searching for swimmers without life jackets is also a likely scenario, we included a considerable number of them.
There are also several different manifestations/visual appearances of the swimmer class, which is why we recorded and annotated swimmers both with and without adequate swimwear (such as wet suits). To be able to discriminate between humans in water and humans on boats, we also annotated humans on boats (with and without life jackets). Lastly, we annotated a small number of life jackets on their own. However, we note that the discrimination between life jackets and humans in life jackets can become visually ambiguous, especially at higher altitudes. See also Fig. 2.

Figure 4 shows the distribution of images with respect to the altitude and viewing angle they were captured at. Roughly 50% of the images were recorded below 50 m, because lower altitudes allow for the whole range of available viewing angles (0-90°). That is, to cover all viewing angles, more images at these altitudes had to be taken. On the other hand, there are many images facing downwards (90°), because images taken at greater altitudes tend to face downwards, since acute angles yield image areas with very low pixel density, which is unsuitable for object detection. Nevertheless, every altitude and angle interval is sufficiently represented.

Single-Object Tracking

We provide 208 short clips (>4 seconds) with a total of 393,295 frames (counting duplicates), with all available objects labeled. We randomly split the sequences into 58 training, 70 validation, and 80 testing sequences. We do not support long-term tracking. The altitude and angle distributions are similar to those in the object detection section, since the images originate from the same recordings as in the object detection task.

Multi-Object Tracking

We provide 22 clips with a total of 54,105 frames and 403,192 annotated instances; on average, a clip consists of 2,460 frames. We differentiate between two use-cases. In the first task, only the persons in water (floaters and swimmers) are tracked; this task is called MOT-Swimmer. In the second task, all objects in water are tracked (also the boats, but not people on boats); this task is called MOT-All-Objects-In-Water. In both tasks, all objects are grouped into one class. The data set split is performed as described in section 3.3.

Multi-Spectral Footage

Along with the data for the three tasks, we provide multi-spectral images. We supply annotations for all channels of these recordings, but only the RGB channels are currently part of the Object Detection Task. There are 432 images with 1,901 instances. See Figure 1 for an example of the individual bands.

Evaluations

We evaluate current state-of-the-art object detectors and object trackers on SeaDronesSee. All experiments can be reproduced by using our provided code, available on the evaluation server. Furthermore, we refer the reader to the Supplementary Material for the exact submission format and uploading requirements.

Object Detection

The detectors used can be split into two groups. The first group consists of two-stage detectors, which are mainly built on Faster R-CNN [23] and its improvements. Built for optimal accuracy, these models often lack the inference speed needed for real-time deployment, especially on embedded hardware, which can be a vital use-case in UAV-based SAR missions. For that reason, we also evaluate one-stage detectors. In particular, we perform experiments with the best-performing single model (no ensemble) from the workshop report [60]: a Faster R-CNN with a ResNeXt-101 64-4d [53] backbone with P6 removed. For large one-stage detectors, we take the recent CenterNet [57].
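For readers who want a quick starting point for the two-stage group just mentioned, the sketch below builds a Faster R-CNN baseline with torchvision. This is a minimal illustration under our own assumptions (a ResNet50 backbone instead of the ResNeXt-101 64-4d used above, and an assumed class count); the exact configurations are given in the appendix.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 1 + 6  # background + object classes (count assumed here)

def build_detector(num_classes=NUM_CLASSES):
    # off-the-shelf two-stage detector; replace the box predictor head
    # so that it outputs the desired number of classes
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

model = build_detector()
model.eval()
with torch.no_grad():
    out = model([torch.rand(3, 800, 800)])  # one dummy RGB image
print(out[0]["boxes"].shape, out[0]["scores"].shape)
```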
To further test an object detector in real-time scenarios, we choose the current best model family on the COCO test-dev according to [4], i.e., EfficientDet [49], and take the smallest model, D0, which can run in real-time on embedded hardware, such as the Nvidia Xavier [27]. We refer the reader to the appendix for the exact parameter and training configurations of the individual models.

Similar to the VisDrone benchmark [58], we evaluate detectors according to the COCO json format [32], i.e., average precision at certain intersection-over-union (IoU) thresholds. More specifically, we use AP (averaged over the IoU thresholds 0.5:0.05:0.95), AP50 (IoU = 0.5), and AP75 (IoU = 0.75). Furthermore, we evaluate the maximum recalls for at most 1 and 10 given detections, denoted AR1 and AR10, respectively. All these metrics are averaged over all categories (except for "ignored region"). We furthermore provide the class-wise average precisions. Moreover, similar to [27], we report AP50 for the individual meta-data domains. Ultimately, the goal is to have detectors that are uniformly robust across all domains, which is better measured by the latter metrics.

Table 4 shows the results for all object detection models. As expected, the large Faster R-CNN with ResNeXt-101 64-4d backbone performs best, closely followed by CenterNet-Hourglass104. Medium-sized networks, such as the ResNet-50-FPN, and fast networks, such as CenterNet-ResNet18 and EfficientDet-D0, expectedly perform worse. However, the latter can run in real-time on an Nvidia Xavier [27]. Swimmers are detected significantly worse than floaters by most detectors. Notably, life jackets are very hard to detect, since from a far distance they are easily confused with swimmers (see Fig. 2). Since there is a heavy class imbalance with many fewer life jackets, detectors are biased towards floaters.

Single-Object Tracking

Like VisDrone [59], we provide the success and precision curves for single-object tracking and compare models based on a single number, the success score. As comparison trackers, we choose the DiMP family (DiMP50, DiMP18, PrDiMP50, PrDiMP18) [9,14] and Atom [13], because they were the foundation of many of the trackers submitted to the last VisDrone workshop [18]. Figure 5 shows that the PrDiMP and DiMP families expectedly outperform the older Atom tracker in both success and precision. Surprisingly, PrDiMP50 slightly trails the accuracy of its predecessor DiMP50. Furthermore, all trackers' performances on SeaDronesSee are similar to or worse than on UAV123 (e.g., Atom with 65.0 success) [9,14,13], for which they were heavily optimized. We argue that on SeaDronesSee there is still room for improvement, especially considering that the clips feature precise meta information that may be helpful for tracking. Furthermore, in our experiments, the faster trackers DiMP18 and Atom run at approximately 27 frames per second. However, we note that they are not capable of running in real-time on embedded hardware, a use-case especially important for UAV-based SAR missions.

Multi-Object Tracking

We use a similar evaluation protocol as the MOT benchmark [37]. That is, we report results for Multiple Object Tracking Accuracy (MOTA), Identification F1 Score (IDF1), Multiple Object Tracking Precision (MOTP), the number of false positives (FP), the number of false negatives (FN), recall (R), precision (P), ID switches (ID sw.), and fragmentation occurrences (Frag). We refer the reader to [46] or the appendix for a thorough description of the metrics; a minimal scripted version of this protocol is sketched below.
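The protocol above can be computed with the open-source py-motmetrics package; the snippet below is a minimal sketch with hypothetical boxes and IDs, not our evaluation-server code.

```python
import motmetrics as mm

acc = mm.MOTAccumulator(auto_id=True)

# one frame of hypothetical data; boxes are [x, y, width, height]
gt_ids, gt_boxes = [1, 2], [[10, 10, 20, 20], [60, 60, 20, 20]]
hyp_ids, hyp_boxes = [1, 2], [[12, 11, 20, 20], [100, 100, 20, 20]]

# IoU-based distance; pairs with IoU below 0.5 count as no match
dist = mm.distances.iou_matrix(gt_boxes, hyp_boxes, max_iou=0.5)
acc.update(gt_ids, hyp_ids, dist)

mh = mm.metrics.create()
summary = mh.compute(
    acc,
    metrics=["mota", "motp", "idf1", "num_false_positives", "num_misses",
             "recall", "precision", "num_switches", "num_fragmentations"],
    name="demo-sequence",
)
print(summary)
```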
We train and evaluate FairMOT [56], a popular tracker, which is the base of many trackers submitted to the challenge [17]. FairMOT-D34 employs a DLA34 [55] as its backbone, while FairMOT-R34 makes use of a ResNet34. Another state-of-the-art tracker is Tracktor++ [8], which we also use for our experiments. It performed well in the MOT20 challenge [15] and is conceptually simple. Surprisingly, Tracktor++ was better than FairMOT in both tasks. One reason for this may be the detector used. Tracktor++ utilizes a Faster R-CNN with a ResNet50 backbone. In contrast, FairMOT uses a CenterNet with a DLA34 or a ResNet34 backbone, respectively.

Meta-Data-Aware Object Detector

Developing meta-data-aware object detectors is difficult, since there are no large-scale data sets on which to evaluate their performance. However, some works provide promising preliminary results using such meta data [51,36,27]. We provide an initial baseline from [27] incorporating the meta data. We evaluate the performance of 5×Altitude@3 and 5×Angle@3 experts, which are constructed on top of a Faster R-CNN with ResNet-50-FPN, respectively. Essentially, these experts make use of meta data by allowing the features to adapt to their specific environmental domains. As Table 9 shows, meta data can enhance the accuracy of an object detector considerably. For example, 5×Angle@3 outperforms its ResNet-50-FPN baseline by 3.1 domain-averaged AP50 while running at the same inference speed. The improvements are especially significant for underrepresented domains, with +9.2 and +6.4 AP50 for the acute-angle (A) and medium-angle (M) domains, respectively, whose underrepresentation can be seen in Fig. 4.

Conclusions

This work serves as an introductory benchmark for UAV-based computer vision problems in maritime scenarios. We build the first large-scale data set for detecting and tracking humans in open water. Furthermore, it is the first large-scale benchmark providing full environmental information for every frame, offering great opportunities in the so-far restricted area of multi-modal object detection and tracking. We offer three challenges: object detection, single-object tracking, and multi-object tracking, each supported by an evaluation server. We hope that the development of meta-data-aware object detectors and trackers can be accelerated by means of this benchmark. Moreover, we provide multi-spectral imagery for detecting humans in open water. Such images are very promising in maritime scenarios, as they capture wavelengths that set objects apart from the water background.
2021-05-06T01:15:49.265Z
2021-05-05T00:00:00.000
{ "year": 2021, "sha1": "11cfcf8f15fbbe14c63803ab0ae9a58e3037ce95", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "11cfcf8f15fbbe14c63803ab0ae9a58e3037ce95", "s2fieldsofstudy": [ "Environmental Science", "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
261383207
pes2o/s2orc
v3-fos-license
circROBO1 promotes prostate cancer growth and enzalutamide resistance via accelerating glycolysis

Background and aim: As non-coding RNAs, circular RNAs (circRNAs) contribute to the progression of malignancies by regulating various biological processes. In prostate cancer, however, there is still a lack of understanding regarding the potential molecular pathways and roles of circRNAs. Methods: Loss-of-function experiments were performed to investigate the potential biological function of circRNAs in the progression of prostate cancer. Western blot, qRT-PCR, and IHC assays were used to examine the expression levels of different genes or circRNAs. Further molecular biology experiments were conducted to uncover the molecular mechanism of this circRNA in prostate cancer using dual luciferase reporter and RNA immunoprecipitation (RIP) assays. Results: A novel circRNA (hsa_circ_0124696, named circROBO1) was identified as a significantly upregulated circRNA in both prostate cancer cells and tissues. Suppression of circROBO1 significantly attenuated the proliferation of prostate cancer cells. In addition, we found that knockdown of circROBO1 remarkably increased the sensitivity of prostate cancer to enzalutamide treatment. A deceleration in the glycolysis rate was observed after inhibition of circROBO1, which could suppress prostate cancer growth and overcome resistance to enzalutamide. Our results revealed that circROBO1 promotes prostate cancer growth and enzalutamide resistance via accelerating glycolysis. Conclusion: Our study identified the biological role of the circROBO1-miR-556-5p-PGK1 axis in the growth and enzalutamide resistance of prostate cancer, which is a potential therapeutic target in prostate cancer.

Introduction

Prostate cancer is one of the most common malignancies in men worldwide, with a high incidence and mortality rate [1]. Prostate cancer is classified into different types based on the histological characteristics of the tumor, including adenocarcinoma, small cell carcinoma, and neuroendocrine carcinoma [2]. The prognosis of prostate cancer varies depending on the stage and grade of the tumor, with early-stage and low-grade tumors having a better prognosis than advanced-stage and high-grade tumors [3,4]. The treatment of prostate cancer includes surgery, radiation therapy, chemotherapy, and hormone therapy. Hormone therapy, also known as androgen deprivation therapy (ADT), is a common treatment option for advanced or metastatic prostate cancer [5,6]. ADT works by reducing the levels of androgens, such as testosterone, in the body, which can slow down the growth of prostate cancer cells [7].
Enzalutamide, a second-generation antiandrogen, has been approved for the treatment of metastatic castration-resistant prostate cancer (mCRPC) and has shown remarkable efficacy in clinical trials. Enzalutamide exerts its therapeutic effect by inhibiting the androgen receptor (AR) signaling pathway, which plays a crucial role in the development and progression of prostate cancer. Enzalutamide has been shown to prolong overall survival and delay disease progression in patients with mCRPC in several clinical trials, including the AFFIRM trial and the PREVAIL trial [8,9]. However, the emergence of enzalutamide resistance has become a major challenge in the management of mCRPC. Clinical statistics show that approximately 20-40% of patients with mCRPC do not respond to enzalutamide, and the majority of responders eventually develop resistance after a median time of 9-15 months [10,11]. Therefore, it is crucial to investigate the molecular pathogenesis of prostate cancer and to develop novel therapeutic strategies to overcome enzalutamide resistance.

Circular RNAs (circRNAs) are a class of non-coding RNAs (ncRNAs) that have recently gained attention due to their potential roles in various biological processes [12]. Unlike linear RNAs, circular RNAs have a covalently closed loop structure, which increases their stability and resistance to degradation [13]. Additionally, circRNAs are highly conserved and tissue-specific, and their expression levels are regulated in a cell- and developmental stage-specific manner [14]. These unique features of circRNAs make them attractive candidates as biomarkers and therapeutic targets in various diseases. Many studies have shown that circRNAs are involved in the regulation of various diseases, including diabetes, neurological disorders, autoimmune disorders, heart failure, and cancers [15,16]. Circular RNA sponge for miR-7 (ciRS-7) is a circular RNA molecule that has recently been identified as a key regulator of cancer progression. It acts as a competitive endogenous RNA (ceRNA) by binding to miR-7 and preventing its interaction with target mRNAs. This results in the upregulation of miR-7 target genes, which are involved in various cellular processes such as proliferation, apoptosis, and migration [17-21].
Studies have shown that circHIPK3 acts as a sponge for microRNAs, thereby regulating the expression of downstream target genes. For example, circHIPK3 has been found to sponge miR-124, leading to increased expression of its target genes, which are involved in cell proliferation and apoptosis [22]. In addition, circEZH2 has been found to play a crucial role in regulating tumor progression. The expression level of circEZH2 has been shown to be upregulated in various types of cancers, including glioblastoma, breast cancer, and some other tumors. It has been revealed that EZH2-92aa, encoded by circEZH2, inhibits surface NKG2D ligand binding in glioblastoma cells [23]. By enhancing the epithelial-mesenchymal transition, the FUS/circEZH2/KLF5 feedback loop promotes CXCR4-induced liver metastasis of breast cancer [24]. In addition, researchers have investigated the potential role of circRNAs in glioma tumorigenesis and demonstrated the inhibitory effects of isoliquiritigenin on circ0030018 via the miR-1236/HER2 signaling pathway [25]. In our recent study, we investigated the role of circROBO1 in regulating the liver metastasis of breast cancer through a feedback loop involving KLF5 and FUS, which inhibits the selective autophagy of afadin [26]. Nevertheless, little is known about how circRNAs might contribute to prostate cancer, or about their underlying mechanisms.

In this study, we identified a novel circRNA (hsa_circ_0124696, named circROBO1) as a frequently upregulated circRNA in both prostate cancer cells and tissues. To investigate how circROBO1 may contribute to prostate cancer progression, loss-of-function experiments were performed. The inhibition of circROBO1 significantly decreased the proliferation of prostate cancer cells. In addition, we found that knockdown of circROBO1 remarkably increased the sensitivity of prostate cancer to enzalutamide treatment. A deceleration in the glycolysis rate was observed after inhibition of circROBO1, which could suppress prostate cancer growth and overcome resistance to enzalutamide. We performed further molecular biology experiments using dual luciferase reporter and RNA immunoprecipitation assays to obtain further insight into the molecular mechanism of circROBO1 in prostate cancer. Our results revealed that circROBO1 promotes prostate cancer growth and enzalutamide resistance via accelerating glycolysis. Overall, our study identified the biological role of the circROBO1-miR-556-5p-PGK1 axis in the growth and enzalutamide resistance of prostate cancer, which could be an effective therapeutic target.

Clinical tissue collection

We obtained fresh adjacent normal prostate tissues and primary prostate cancer samples from the First Affiliated Hospital of Jinan University. Only patients with early-stage prostate cancer were included in the study. Patients with incomplete clinicopathological information or pathologically unconfirmed prostate cancer were excluded. The detailed clinical parameters of the patients enrolled in this study are displayed in Supplementary Table 1.

This study was approved by the Ethics Committee of the First Affiliated Hospital of Jinan University. Written informed consent was collected from all prostate cancer patients before participation in this study.
Cell line preservation and culture

The cell lines used in this study (WPMY1, C4-2B, LNCaP, 22RV1, VCaP, DU145, PC-3, and HEK293T) were all purchased from the ATCC. The cell lines were cultured according to the instructions provided by the supplier. WPMY1, 22RV1, VCaP, and HEK293T cell lines were cultured in DMEM + 10% FBS + 1% P/S. C4-2B, LNCaP, and DU145 cell lines were cultured in RPMI1640 + 10% FBS + 1% P/S. Cells were cultured at 37 °C in an incubator containing 5% CO2. The authenticity of the cells was verified periodically through DNA fingerprinting.

Nuclear and cytoplasmic fractionation

Nuclear and cytoplasmic RNA from prostate cancer cells were separated with the PARIS Kit (Invitrogen, CA, USA) according to the protocols.

RT-qPCR analysis

Total cellular RNA was extracted using the TRIzol reagent (Invitrogen, USA). In this study, we used a SYBR Premix Ex Taq Kit from Takara (Japan) for qRT-PCR. The sequences of the primers used in this study are listed in Supplementary Table 2.

Western blot analysis

The cells were lysed using RIPA lysis buffer with PMSF to isolate the total protein. Proteins were separated on SDS-PAGE gels. The proteins were then transferred to a PVDF membrane at 300 mA for two hours. As a next step, we blocked the membranes with 5% skim milk powder. Each primary antibody was incubated on the membrane overnight at 4 °C, followed by one hour at room temperature with the specific secondary antibody. The protein bands were eventually detected by chemiluminescence. Anti-PGK1 (1:1000, CST, 63536S, USA) and anti-beta-actin (1:1000, CST, 4970S, USA) antibodies were used to detect the corresponding proteins.

Actinomycin D assay

Actinomycin D was used to block transcription in C4-2B prostate cancer cells for 0, 8, 16, and 24 hours, allowing the decay of the transcripts to be followed. By qPCR analysis, the linear ROBO1 mRNA and the circular circROBO1 transcript were quantified in C4-2B cells at the indicated times (a sketch of the corresponding half-life fit is given below).

RNase R digestion assay

After 5 µg of total RNA extracted from C4-2B prostate cancer cells was treated with RNase R (5 U/µg) or a blank control solution for 30 min at 37 °C, RT-qPCR analysis was performed on the resulting RNA solution for quantification.

Lactate production and glucose consumption measurements

A glucose/glucose oxidase assay kit (Invitrogen, USA) was used to measure glucose consumption and lactate production. In order to normalize the data, the total amount of cellular protein was taken into account.

CCK-8 assay

After digestion, C4-2B and 22RV1 prostate cancer cells were resuspended in suspension medium. Afterwards, 4,000 cells per well were seeded into 96-well plates. Cells were incubated at 37 °C for each time period. After adding the CCK-8 solution (10 μl), the plates were incubated for one hour before measuring the results.

Colony formation assay

A total of 5×10³ cells were resuspended and seeded in each well of a six-well plate. A 7-day incubation at 37 °C was followed by methanol fixation and staining with 2.5% crystal violet, followed by image analysis using ImageJ software.
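To make the half-life comparison behind the actinomycin D assay concrete, the sketch below fits a first-order decay to qPCR time points; the numbers are hypothetical and only illustrate the fitting procedure, not our measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, k):
    # remaining RNA fraction after transcription is blocked
    return np.exp(-k * t)

t = np.array([0.0, 8.0, 16.0, 24.0])  # hours
series = {
    "linear ROBO1": np.array([1.00, 0.45, 0.20, 0.09]),  # hypothetical
    "circROBO1":    np.array([1.00, 0.90, 0.82, 0.74]),  # hypothetical
}

for name, y in series.items():
    (k,), _ = curve_fit(decay, t, y, p0=[0.1])
    print(f"{name}: half-life = {np.log(2) / k:.1f} h")
```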
Dual luciferase reporter assay

Sequences of circROBO1 or the PGK1 3'-UTR containing the wild-type (WT) or mutant-type (MT) miR-556-5p binding sites were synthesized. We then cloned the wild-type or mutant sequences of circROBO1 or the PGK1 3'-UTR into pmirGLO (Promega). A co-transfection of mimics-NC or mimics-miR-556-5p with the corresponding plasmids was then carried out in HEK293T cells. Following 48 hours of incubation, all co-transfections above were examined for Renilla and Firefly luciferase activity using the Dual Luciferase Reporter Assay Kit from Promega (Wisconsin, USA). Renilla activity was used as the internal reference.

RNA immunoprecipitation (RIP)

Assays were conducted using Magna RIP kits (Millipore, MA, USA) according to the protocols provided. In RIP assays for the Ago2 protein, anti-Ago2 antibodies were used. After RNA purification, circROBO1, PGK1, and miR-556-5p expression levels were assessed. C4-2B and 22RV1 prostate cancer cells were transfected with the ms2-circROBO1 plasmid, the ms2-circROBO1-mutant plasmid, or the ms2bs-NC plasmid. RIP assays were performed following 72 hours of incubation. After purification of the RNA complexes, the relative abundance of miR-556-5p was determined.

Statistical analysis

The statistical analysis was performed using SPSS 22.0 (SPSS, USA). Data are reported as mean ± standard deviation (SD). Student's t test was used to compare two groups. A paired t test was used to compare the expression difference between normal and prostate cancer tissues. P < 0.05 was considered statistically significant.

Results

circROBO1 is upregulated in prostate cancer cells and tissues

circROBO1 (hsa_circ_0124696) originates from exon 5, exon 6, exon 7, and exon 8 of the ROBO1 mRNA, with a back-spliced junction between exon 8 and exon 5. First, we detected the expression level of circROBO1 in prostate cancer cell lines using RT-qPCR analysis. circROBO1 was upregulated in prostate cancer cell lines compared with the normal cell line WPMY1, especially in the C4-2B and 22RV1 prostate cancer cell lines (Figure 1A). We then used RT-qPCR to verify the expression level of circROBO1 in ten pairs of prostate cancer tissues and adjacent normal tissues (Figure 1B). The absolute expression levels of circROBO1 in normal versus tumor prostate tissues are displayed in Supplementary Table 3. We found that circROBO1 was significantly upregulated in tumor tissue compared with the paired normal tissue (Figure 1B). A further investigation of circROBO1's circular structure and stability was conducted using actinomycin D and RNase R assays. In our study, we found that circROBO1 was resistant to RNase R (an RNA exonuclease) (Figure 1C-D). Actinomycin D assays were conducted to determine the half-lives of the circROBO1 and linear ROBO1 transcripts and thereby evaluate the stability of circROBO1. We found that the circular circROBO1 transcript has a longer half-life in actinomycin D assays than the linear ROBO1 mRNA (Figure 1E-F). These results verified that circROBO1 has the characteristics of a circular RNA.
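The paired tumor-versus-normal comparisons above (e.g., Figure 1B) correspond to the paired t test described in the Methods; the following is a minimal scripted version with hypothetical relative-expression values, not our measured data.

```python
import numpy as np
from scipy import stats

# hypothetical relative-expression values for ten patient pairs
normal = np.array([1.0, 0.8, 1.2, 0.9, 1.1, 0.7, 1.0, 0.9, 1.3, 0.8])
tumor  = np.array([2.1, 1.6, 2.8, 1.9, 2.4, 1.5, 2.2, 1.7, 3.0, 1.8])

# paired t test for matched tumor/normal tissue pairs
t_paired, p_paired = stats.ttest_rel(tumor, normal)

# unpaired Student's t test for two independent groups (e.g., cell lines)
t_ind, p_ind = stats.ttest_ind(tumor, normal)

print(f"paired:   t = {t_paired:.2f}, p = {p_paired:.2e}")
print(f"unpaired: t = {t_ind:.2f}, p = {p_ind:.2e}")
```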
circROBO1 promotes the glycolysis and enzalutamide resistance of prostate cancer cells

Short hairpin RNAs (shRNAs) targeting the back-splice sequence of the circRNA were designed to investigate the function of circROBO1 in prostate cancer (Figure 2A). The efficacy of the shRNAs was determined in the C4-2B and 22RV1 prostate cancer cell lines (Figure 2B). Inhibition of circROBO1 significantly suppressed the proliferation rate of the C4-2B and 22RV1 prostate cancer cell lines in vitro, as revealed by CCK-8 assays (Figure 2C). Similarly, a colony-formation assay yielded results consistent with the proliferation assay (Figure 2D). Knockdown of circROBO1 inhibited the colony-formation ability of the C4-2B and 22RV1 prostate cancer cells (Figure 2E). Targeting circROBO1 significantly increased the sensitivity of C4-2B and 22RV1 prostate cancer cells to enzalutamide treatment (Figure 2F-G). Glucose uptake and lactate production are two important rate-limiting steps in the biological process of tumor cell glycolysis. circROBO1 was found to enhance glucose uptake and lactate production, thereby accelerating the glycolysis of C4-2B and 22RV1 prostate cancer cells (Figure 2H-I).

circROBO1 acts as a sponge of miR-556-5p in prostate cancer

We then evaluated the potential interactions between circROBO1 and multiple miRNAs using the Circular RNA Interactome database. According to the predictions, miR-556-5p binds the circROBO1 sequence at possible interacting sites (Figure 3A). Next, we used RT-qPCR to verify the expression level of miR-556-5p in ten pairs of prostate cancer tissues and adjacent normal tissues (Figure 3B). We found that miR-556-5p was significantly downregulated in tumor tissue compared with the paired normal tissue (Figure 3B). We also detected the expression level of miR-556-5p in prostate cancer cell lines using RT-qPCR analysis. miR-556-5p was markedly downregulated in prostate cancer cell lines compared with the normal cell line WPMY1, especially in the C4-2B and 22RV1 prostate cancer cell lines (Figure 3C). To identify the subcellular localization of circROBO1, we performed RT-qPCR on the cytoplasmic and nuclear fractions of C4-2B and 22RV1 prostate cancer cells (Figure 3D). According to our results, circROBO1 is primarily located in the cytoplasm rather than in the nucleus (Figure 3E). To further confirm the interaction sites between circROBO1 and miR-556-5p, we constructed wild-type and mutant dual-luciferase reporter plasmids containing the predicted interacting sites between circROBO1 and miR-556-5p. Compared with the mutant group, miR-556-5p mimics significantly decreased the luciferase activity of the wild-type group. In the mutant group, in contrast, there was no detectable difference between the miR-556-5p mimics and the control mimics (Figure 3F-G). To confirm the direct interaction between circROBO1 and miR-556-5p, we next performed RIP assays to pull down circROBO1 (Figure 3H-I). Our results showed that intracellular miR-556-5p predominantly accumulated in the ms2-circROBO1 plasmid group (Figure 3H-I). These results indicate that circROBO1 binds directly to miR-556-5p and acts as a sponge of miR-556-5p in prostate cancer.
Glycolysis-regulating enzyme PGK1 is the downstream target of miR-556-5p in prostate cancer

To investigate the potential mechanism of miR-556-5p activity, TargetScan (http://www.targetscan.org) was used to analyze the bioinformatics data and discover potential downstream targets of miR-556-5p. According to the intersection of the TargetScan data and the predicted binding sites, a putative miR-556-5p binding site is conserved in the PGK1 mRNA 3'-UTR (Figure 4A). Phosphoglycerate kinase 1 (PGK1) is a glycolytic enzyme that catalyzes the conversion of 1,3-bisphosphoglycerate to 3-phosphoglycerate in the glycolytic pathway [27]. PGK1 is not only involved in energy production but also plays a crucial role in regulating tumor progression [28-30]. Next, we used RT-qPCR to verify the expression level of PGK1 in ten pairs of prostate cancer tissues and adjacent normal tissues (Figure 4B). We found that PGK1 was significantly upregulated in tumor tissue compared with the paired normal tissue (Figure 4B). We then also detected the expression level of PGK1 in prostate cancer cell lines using RT-qPCR analysis (Figure 4C). PGK1 was markedly upregulated in prostate cancer cell lines compared with the normal cell line WPMY1, especially in the C4-2B and 22RV1 prostate cancer cell lines (Figure 4C). RIP assays were further conducted to verify the molecular mechanism (Figure 4D-E). We found that circROBO1, miR-556-5p, and PGK1 mRNA were all enriched in the RNA-induced silencing complex in both C4-2B and 22RV1 prostate cancer cells, as assessed by the anti-Ago2 RIP assays (Figure 4D-E). To further confirm the interaction sites between the PGK1 mRNA 3'-UTR and miR-556-5p, we constructed wild-type and mutant dual-luciferase reporter plasmids containing the predicted interacting sites between the PGK1 mRNA 3'-UTR and miR-556-5p. Compared with the mutant group, miR-556-5p mimics significantly decreased the luciferase activity of the wild-type group. In the mutant group, in contrast, there was no detectable difference between the miR-556-5p mimics and the control mimics (Figure 4F-G). Furthermore, both C4-2B and 22RV1 prostate cancer cell lines revealed a decrease in PGK1 mRNA recruitment to RISC complexes after targeting circROBO1 (Figure 4H-I).

circROBO1 promotes prostate cancer enzalutamide resistance and glycolysis through the circROBO1-miR-556-5p-PGK1 axis

To further validate the mechanism of the circROBO1-miR-556-5p-PGK1 axis in regulating prostate cancer enzalutamide resistance and glycolysis, we conducted several rescue assays. The proliferation ability of C4-2B and 22RV1 prostate cancer cells was decreased after knockdown of circROBO1, and this was reversed after transfection of miR-556-5p mimics (Figure 5A-B). The enzalutamide resistance was also reversed by the transfection of miR-556-5p mimics when circROBO1 was inhibited in C4-2B and 22RV1 prostate cancer cells (Figure 5C-D). The glycolytic activity of C4-2B and 22RV1 prostate cancer cells was decreased after silencing of circROBO1, which was also reversed after supplementation with miR-556-5p mimics (Figure 5E-F). PGK1 protein variation in the cell lines was determined using Western blot assays. According to our results, PGK1 was markedly decreased after the inhibition of circROBO1 in C4-2B and 22RV1 prostate cancer cells (Figure 5G). This effect could be reversed after supplementation with the miR-556-5p mimics, which further confirmed the circROBO1-miR-556-5p-PGK1 axis in prostate cancer (Figure 5H).
Discussion

In recent years, circRNAs have become a hot topic in cancer research due to their unique characteristics and potential roles in cancer development and progression [31]. circRNAs are highly stable and resistant to RNase degradation compared with linear RNAs, making them more suitable as biomarkers for cancer diagnosis and prognosis [32]. circRNAs have been found to regulate gene expression by acting as miRNA sponges, interacting with RNA-binding proteins, and even encoding small peptides [33]. These diverse functions of circRNAs provide a new perspective on the complexity of gene regulation in cancer [34]. The unique characteristics and diverse functions of circRNAs, as well as their dysregulated expression in cancer, have thus placed them at the center of much current cancer research [35]. Although circRNAs have been implicated in prostate cancer, their molecular mechanisms and biological roles have not been well studied. In the current study, we identified circROBO1 as a significantly upregulated circRNA in both prostate cancer cells and tissues. We then conducted circROBO1 silencing experiments to investigate the function of circROBO1 in prostate cancer. The inhibition of circROBO1 significantly decreased the proliferation of prostate cancer cells. In addition, we found that the knockdown of circROBO1 remarkably increased the sensitivity of prostate cancer to enzalutamide treatment. A deceleration in the glycolysis rate was observed after inhibition of circROBO1, which could suppress prostate cancer growth and overcome resistance to enzalutamide. Glycolysis is a metabolic pathway that converts glucose into pyruvate, generating ATP and NADH [36]. It is a fundamental process in metabolism and is essential for cell survival and proliferation [37-39]. However, in cancer cells, glycolysis is often upregulated even in the presence of oxygen, a phenomenon known as the Warburg effect [40]. This metabolic reprogramming is thought to provide cancer cells with the necessary energy and building blocks for rapid proliferation and survival under drug treatment [41]. Phosphoglycerate kinase 1 (encoded by PGK1) is a glycolytic enzyme that catalyzes the conversion of 1,3-bisphosphoglycerate to 3-phosphoglycerate in the glycolytic pathway [42]. PGK1 is not only involved in energy production but also plays a crucial role in regulating tumor progression [43-46]. Our results also showed that knockdown of circROBO1 decreased PGK1 expression, which in turn significantly inhibited the proliferation of cancer cells and decreased glucose consumption and lactate production in cancer cells.
According to the theory of competitive endogenous RNA, different types of RNA molecules, including circRNAs, messenger RNAs (mRNAs), long non-coding RNAs (lncRNAs), and pseudogenes, can interact with each other by competing for shared microRNAs (miRNAs). This interaction can lead to changes in gene expression and cellular processes [47]. In this research, miR-556-5p was found to be a miRNA that can interact with circROBO1 in prostate cancer. miR-556-5p is a recently discovered miRNA that is differentially expressed in various types of cancer and has been shown to play an important role in their progression. In colorectal cancer, miR-556-5p inhibits cell proliferation and migration by targeting CTBP2, which is involved in the EMT signaling pathway [48]. In breast cancer, miR-556-5p inhibits cell proliferation and invasion by targeting YAP1, a transcription factor that regulates the expression of genes involved in cell cycle progression and apoptosis [49]. In cholangiocarcinoma, miR-556-5p inhibits cell proliferation and migration by targeting YY1, which might be used as a promising therapeutic target for cholangiocarcinoma [50]. Understanding the molecular mechanisms of miR-556-5p in regulating prostate cancer progression may provide new insights into the development of novel therapeutic strategies for prostate cancer treatment. Additionally, the findings of our previous study showed that zoledronic acid enhances the cytotoxic T cell response and antitumor immunity in prostate cancer patients [51]. Therefore, future studies should focus on the function of circROBO1 in the immune evasion of prostate cancer.

In conclusion, our study identified the biological role of circROBO1 in the growth and enzalutamide resistance of prostate cancer through the miR-556-5p-PGK1-glycolysis axis. These findings are important for the development of new treatment strategies and have potential prognostic implications for prostate cancer.

Figure 1. circROBO1 is upregulated in prostate cancer cells and tissues. (A) Comparison of the expression level of circROBO1 in normal WPMY1 cells and prostate cancer cells. (B) The expression level of circROBO1 in ten pairs of prostate cancer tissues and adjacent normal tissues, detected by qPCR analysis. (C) RNase R assays were used to examine circROBO1's circular nature in the C4-2B and 22RV1 prostate cancer cell lines. (D) Actinomycin D assays showed that circular circROBO1 transcripts were more stable than linear ROBO1 transcripts in the C4-2B and 22RV1 prostate cancer cell lines.

Figure 2. circROBO1 promotes the glycolysis and enzalutamide resistance of prostate cancer cells. (A) Short hairpin RNAs (shRNAs) targeting the back-splice sequence of the circRNA were designed to investigate the function of circROBO1 in prostate cancer. (B) The efficacy of the shRNAs was determined in the C4-2B and 22RV1 prostate cancer cell lines. (C) Cell proliferation was evaluated using the CCK-8 assay in the C4-2B and 22RV1 prostate cancer cell lines. (D) The colony-formation ability of cells was evaluated using the colony-formation assay in the C4-2B and 22RV1 prostate cancer cell lines. (E) A graph showing the statistical data for the colony-formation test. (F-G) Knockdown of circROBO1 significantly increased the sensitivity of C4-2B and 22RV1 prostate cancer cells to enzalutamide treatment. (H-I) circROBO1-induced glucose uptake and lactate production were found to accelerate glycolysis in C4-2B and 22RV1 prostate cancer cells.
Figure 3. circROBO1 acts as a sponge of miR-556-5p in prostate cancer. (A) The predicted binding sites for miR-556-5p within circROBO1. (B) Comparison of the expression level of miR-556-5p in normal WPMY1 cells and prostate cancer cells. (C) The expression level of miR-556-5p in ten pairs of prostate cancer tissues and adjacent normal tissues, detected by qPCR analysis. (D-E) RT-qPCR analysis of the expression of 18S, ACTB, circROBO1, and ROBO1 in the nuclear and cytoplasmic fractions. (F-G) Dual luciferase reporter assay of C4-2B and 22RV1 prostate cancer cell lines transfected with miR-556-5p mimics and circROBO1 wild-type or mutant-type luciferase vectors. (H-I) The ms2-related RIP assay was performed by transfecting the ms2-circROBO1 plasmid, the ms2-circROBO1-mutant plasmid, or the Rluc-NC plasmid.

Figure 4. The glycolysis-regulating enzyme PGK1 is the downstream target of miR-556-5p in prostate cancer. (A) The downstream target of miR-556-5p, PGK1 mRNA, was predicted by TargetScan online. (B) Comparison of the expression level of PGK1 in normal WPMY1 cells and prostate cancer cells. (C) The expression level of PGK1 in ten pairs of prostate cancer tissues and adjacent normal tissues, detected by qPCR analysis. (D-E) Enrichment of circROBO1, PGK1, and miR-556-5p on AGO2 in C4-2B and 22RV1 prostate cancer cell lines, assessed by RIP assay. (F-G) Dual luciferase reporter assay of C4-2B and 22RV1 prostate cancer cell lines transfected with miR-556-5p mimics and PGK1 mRNA 3'-UTR wild-type or mutant-type luciferase vectors. (H-I) Enrichment of PGK1 mRNA on AGO2 was significantly increased after the suppression of circROBO1 in C4-2B and 22RV1 prostate cancer cell lines.

Figure 5. circROBO1 promotes prostate cancer enzalutamide resistance and glycolysis through the circROBO1-miR-556-5p-PGK1 axis. (A-B) The cell proliferation rate of the C4-2B and 22RV1 prostate cancer cell lines was decreased after the inhibition of circROBO1, which was reversed after the transfection of miR-556-5p mimics. (C-D) The enzalutamide resistance was also reversed by the transfection of miR-556-5p mimics when circROBO1 was inhibited in C4-2B and 22RV1 prostate cancer cells. (E-F) The glycolytic activity of C4-2B and 22RV1 prostate cancer cells was decreased after silencing of circROBO1, which was also reversed after supplementation with miR-556-5p mimics. (G) Western blot assay revealed that PGK1 expression was decreased after the suppression of circROBO1 in the C4-2B and 22RV1 prostate cancer cell lines. (H) Western blot assay revealed that the decrease of PGK1 was rescued via supplementation with miR-556-5p mimics in C4-2B prostate cancer cells, which further confirmed the circROBO1-miR-556-5p-PGK1 axis in prostate cancer.
2023-08-31T15:08:20.904Z
2023-08-21T00:00:00.000
{ "year": 2023, "sha1": "25e0234859676e2be5875f83b15339f5f1631255", "oa_license": "CCBY", "oa_url": "https://www.jcancer.org/v14p2574.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "8dedb8250c55aebaa3c50e813ce86e9ef616b195", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
12931270
pes2o/s2orc
v3-fos-license
Local electron and ionic heating effects on the conductance of nanostructures

Heat production and dissipation induced by current flow in nanostructures is of primary importance for understanding the stability of these systems. These effects have contributions from both electron-phonon and electron-electron interactions. Here, we consider the effect of local electron and ionic heating on the conductance of nanoscale systems. Specifically, we show that the non-linear dependence of the conductance on the external bias may be used to infer information about the local heating of both electrons and ions. We compare our results with available experimental data on transport in $\mathrm{D}_2$ and $\mathrm{H}_2$ molecules. The comparison between experiment and theory is reasonably good close to the lowest phonon mode of the molecule, especially for the $\mathrm{D}_2$ molecule. At higher biases we cannot rule out the presence of other effects, e.g., current-induced forces, that make the scenario more complex.

INTRODUCTION

The idea of building electronic devices from nanostructures has gathered a lot of attention due to the high expectations in terms of size reduction and power dissipation [1]. Encouraging progress has been made in experimental techniques and theoretical modeling towards this aim [2]. However, a fundamental and technologically important issue, namely local heat production and dissipation in these systems, has attracted much less attention [3,4,5,6,7,8,9,10]. It has been argued that, since the electron inelastic mean free path is large compared to the dimensions of a nanostructure, no energy dissipation occurs in the nanostructure region. However, nanoscale systems carry very large current densities compared to bulk electrodes. This implies an increased number of scattering events per unit time and unit volume, so that interactions among electrons or between electrons and phonons are particularly important. In addition, the reduced size means a small heat capacity: any small energy transfer from the current-carrying electrons to local ionic vibrations or to other electrons in the system may induce a substantial heating of the nanostructure [7]. So far, direct measurements of the amount of energy locally dissipated in a nanoscale system have been beyond our reach. However, new experiments have considered the indirect effects of local heating on accessible quantities [8,9,11,12]. For example, in [8,9] an effective ionic temperature is determined via the force needed to break the chemical bonds between molecules and the adjacent leads. These experiments indirectly probe the local ionic temperature, the contributions due to electron-electron interactions, and the corresponding local electron heating [7,9]. Here, we discuss another possible indirect method to probe both the local ionic and electron temperatures, via the non-linearities in the DC conductance of nanostructures. We will compare our results with the experimental conductance of simple molecules such as $\mathrm{D}_2$ and $\mathrm{H}_2$ sandwiched between two Pt leads, as studied in Ref. [13] (and references therein). In order to address the above issues we need a theory that takes into account both energy production and dissipation on an equal footing. A full quantum-mechanical description in terms of many-body states for the present nonequilibrium problem seems hopeless. Instead, we have previously shown that a much more "economical" hydrodynamic theory in terms of the single-particle density and current density may be derived for nanostructures [14].
In this paper, we first review such a theory, and then use it to study the effect of heating on the conductance.

CLASSICAL HYDRODYNAMICS

In the following, we will refer to some concepts of classical hydrodynamics. For completeness, we repeat here some of those concepts, while a more comprehensive description of the dynamics of classical fluids can be found in many textbooks [15,16]. The dynamics of a classical viscous fluid is usually described by the so-called Navier-Stokes equations for the single-particle density, $n(r,t)$, and the velocity field, $v(r,t)$ (the ratio between the current density and the density),

$D_t n(r,t) = -n(r,t)\,\nabla\cdot v(r,t)$,
$m\,n(r,t)\,D_t v_i(r,t) = -\nabla_i P(r,t) + \nabla_j \pi_{i,j}(r,t) - n(r,t)\,\nabla_i V_{\mathrm{ext}}(r,t)$,   (1)

where $P(r,t)$ is the pressure, $\pi_{i,j}(r,t)$ is the Navier-Stokes stress tensor, and $V_{\mathrm{ext}}(r,t)$ is the external potential. [Throughout the paper, $\nabla_i$ denotes the derivative with respect to the i-th spatial component ($i = x, y, z$), and summation over repeated indices is understood.] In these equations, the operator $D_t = \partial_t + v(r,t)\cdot\nabla$ is the so-called "convective" derivative, while the viscosity coefficients entering the stress tensor, $\eta$ and $\zeta$, are the shear and bulk viscosity of the liquid, respectively. The viscosity coefficients have their origin in the approximate nature of the Navier-Stokes equations and in the particle-particle interaction [17]. The first equation in (1) is the continuity equation and states mass conservation when sources or sinks are not present. The second equation in (1) is the force equation: the left-hand side is the acceleration of a small volume of liquid subject to the internal forces (due to pressure and particle-particle interactions) and the external forces ($F_{\mathrm{ext}} = -\nabla V_{\mathrm{ext}}$). It is important to realize the approximate nature of these equations: in classical physics the very basic concept of a particle density n has a meaning only in a coarse-grained sense, i.e., with respect to volumes of the liquid small compared to the other relevant scales of the problem, but large enough to contain "enough" particles that a continuum mechanics can be developed. In the opposite regime, one has to revert to the solution of Newton's equations of motion for each particle. Due to the continuous spatial nature of wave-functions, the above limitations do not pertain to Quantum Mechanics, for which a hydrodynamic description can be formulated exactly.

HYDRODYNAMICAL FORMULATION OF QUANTUM MECHANICS

Ever since the formulation of the Schrödinger equation of motion for complex wave-functions, there have been several attempts to formulate Quantum Mechanics in terms of classical quantities. The degree to which these attempts have been successful is still undecided, since the use of words like "particle", "trajectories", and "directions of propagation" is widespread in the modern scientific literature. One such attempt was made at the dawn of Quantum Mechanics, in 1926, by Madelung [17,18], who showed that the Schrödinger equation for a single particle is exactly equivalent to a set of equations of motion for the particle density and "velocity". For this single-particle problem, the velocity is defined as the variation of the phase of the wave-function with position, and thus seems a mere mathematical tool [17]. An equivalent, but more transparent, definition of this velocity field is $v(r,t) = j(r,t)/n(r,t)$, where $j(r,t)$ is the current density. This definition is valid at the points r for which $n(r,t) \neq 0$.
It is remarkable that the equation of motion for this velocity is governed by the external forces, plus a "quantum mechanical" contribution, known as the "Bohm stress tensor", that has no classical counterpart [17]. Indeed, if we start from the Schrödinger equation for the wave-function, $\Psi$, of a particle in the presence of the external potential $V_{\mathrm{ext}}$ ($\hbar = e = 1$ throughout this paper, where e is the electron charge), we can rewrite $\Psi(r,t)$ in terms of two real functions of time and position, R and S, as $\Psi(r,t) = R(r,t)\,e^{iS(r,t)}$. It is a simple exercise to show that, if one defines the density $n(r,t) = R^2(r,t)$ and the velocity $v(r,t) = \nabla S(r,t)/m$, then the equations of motion (8) and (9) hold. Eqs. (8) and (9) have a clear physical interpretation: the quantum mechanical system is equivalent to a fluid whose dynamics is governed by the Euler equation (9), subjected to the force exerted by the external potential [15] and to an internal force whose origin is purely quantum mechanical. Moreover, the dynamics conserves the mass, i.e., the total probability, and hence the continuity equation (8) holds [19]. The solution of the equations of motion (8) and (9) is equivalent to the solution of the Schrödinger equation. It is interesting to point out that the quantum mechanical force can be expressed as the divergence of the Bohm stress tensor, which involves the factor $n(r,t)/2m$ and spatial derivatives of the density. If one introduces the convective derivative $D_t = \partial_t + v(r,t)\cdot\nabla$, the equations of motion (8) and (9) assume the well-known form of the Navier-Stokes equations of motion, Eqs. (12) and (13). Eqs. (12) and (13) are formally identical to the Navier-Stokes equations (1) for a classical fluid. However, unlike the Navier-Stokes equations, which describe an approximate dynamics of the many-body classical fluid, Eqs. (12) and (13) are exactly equivalent to the Schrödinger equation: no approximation has been made in their derivation. While this approach to Quantum Mechanics may appear as a simple attempt to recover a classical description of quantum phenomena, over the years it has proven to be a very useful tool to describe the dynamics of quantum systems in several contexts, ranging from condensed matter physics to nuclear physics (see, e.g., [20], and references therein). More recently, we have shown that a hydrodynamic description of the electron flow in nanoscale systems leads to the prediction of novel phenomena, like the existence of a dynamical (viscous) resistance [21], turbulence [14,22,23,24], and local electron heating and its effect on ionic heating [7,9]. Here, we describe our hydrodynamical approach to transport in nanostructures. As a first step, we need to generalize the derivation of the equations of motion (12)-(13) to the case of a many-body interacting system. We follow closely the formalism presented in Refs. [14,25]. (See also Ref. [26] for a general formulation of the dynamics of a many-particle electron system.) We describe the dynamics of the system via a field creation (annihilation) operator $\psi^\dagger(r,t)$ ($\psi(r,t)$) which evolves in time following the Heisenberg equation of motion, in which the potential $w(|r-r'|)$ describes the particle-particle interaction.
We define the single-particle density operator via the usual definition, $\hat n(r,t) = \psi^\dagger(r,t)\psi(r,t)$, and the current density operator $\hat j(r,t) = [\psi^\dagger(r,t)\nabla\psi(r,t) - (\nabla\psi^\dagger(r,t))\psi(r,t)]/2mi$. It is lengthy but straightforward to show that these two operators follow the dynamics induced by a pair of coupled equations of motion in which the kinetic stress tensor operator appears as a defined quantity. From the equations of motion for the operators, we immediately get the equations of motion for their expectation values. Another rather lengthy and involved calculation allows us to write the force density due to the particle-particle interaction as the divergence of a second-rank tensor, provided the interactions are negligibly small at the boundary of the integration volume in equation (20). The result is given in [14,25], so that we arrive at the dynamical equation (22), in which the total stress tensor $P_{i,j}(r,t)$ defined in (23) appears. From here, by using the definition of the convective derivative and re-scaling the particle momentum so that the stress tensor collects both kinetic and interaction contributions, one obtains the equations of motion for the particle and current densities in a form identical to the single-particle equations of motion (12)-(13). Like Eqs. (12) and (13), which, for a given initial condition, constitute a closed set, i.e., their solution is equivalent to the solution of the single-particle time-dependent Schrödinger equation, their many-body counterparts, Eqs. (19) and (22), are equivalent to the solution of the many-body time-dependent Schrödinger equation. This equivalence is a direct consequence of the theorems of time-dependent density-functional theory [27,28]. These theorems state that, given an initial condition, there exists a one-to-one correspondence between the time evolution of the particle density $n(r,t)$ and the scalar potential $V_{\mathrm{ext}}(r,t)$ applied to the quantum mechanical system. A similar correspondence holds between the current density $j(r,t)$ and an external vector potential $A(r,t)$ [27,29,30,31], while the mapping does not generally exist between the current density and the external scalar potential [32]. The physical relevance of these theorems to our case is that the stress tensor $P_{i,j}(r,t)$ in (23) is a functional of either the density or the current density, i.e., $P_{i,j} = P_{i,j}[n]$ or $P_{i,j} = P_{i,j}[j]$. This implies that, once the exact many-body stress tensor $P_{i,j}$ is known, one can, in principle, recover from the solution of Eqs. (19) and (22) full information on the many-body wave-function. Needless to say, the exact stress tensor is unknown. However, starting from Eqs. (19) and (22) one can develop perturbation schemes to approximate the exact stress tensor [30,33,34], at least for the problem at hand, thus simplifying enormously the solution of the many-body problem. In the following, we will describe one such approximation scheme for the present case of current flow in a nanojunction. We will derive an equation of motion for the stress tensor $P_{i,j}$ and show that it depends on the so-called three-particle stress tensor $P^{(3)}_{i,j,k}$, which in turn describes the way three particles interact. The derivation of an equation of motion for $P^{(3)}_{i,j,k}$ would bring us into the maze of a hierarchical set of equations for stress tensors that describe electron-electron interactions to all orders. We will show, however, that for the case at hand we can truncate this hierarchy and obtain a closed equation for the stress tensor $P_{i,j}$.

VISCO-ELASTICITY OF THE ELECTRON LIQUID

In parallel with the hydrodynamic description of Quantum Mechanics, a visco-elastic formulation of the dynamics of the electron liquid has been derived within linear-response theory.
It has been realized that a certain class of low-energy, long-wavelength excitations of the electron liquid may be mapped onto the dynamics of a visco-elastic medium [35]. The dynamics of this visco-elastic medium is described by an equation of motion for the current density (in linear response and in d dimensions, d > 1), in which $\tilde K$ and $\tilde\mu$ appear as two complex constants that depend on the electron density n. These complex constants are expressed in terms of the more familiar viscosities, $\zeta$ (bulk viscosity) and $\eta$ (shear viscosity), and elastic constants, K (bulk modulus) and $\mu$ (shear modulus), via relations of the form $\tilde K = K - i\omega\zeta$ and $\tilde\mu = \mu - i\omega\eta$, where $\omega$ is the frequency of the external perturbation used to excite the electron liquid. The next step is then to express the visco-elastic coefficients of the liquid in terms of its microscopic properties, i.e., to relate these quantities to the response functions. Here we only report the results that are relevant to the present work and refer the reader to Ref. [35] for an explicit derivation. We are only concerned with the DC (zero-frequency) limit of the above quantities. By using an interpolation of the numerical results of mode-mode coupling theory [36], one finds the density dependence of the zero-frequency shear viscosity (the bulk viscosity is identically zero in the same limit) [35], given by Eq. (28) in 3D and by Eq. (29) in 2D, where $r_s$ is the density parameter of the electron liquid with uniform density n (in 3D, $r_s = (3/4\pi n)^{1/3}/a_B$), with $a_B$ the Bohr radius. It is interesting to point out that specific confining potentials (e.g., an electron liquid in a quantum well) may make the approximations used to derive Eqs. (28) and (29) ill-founded, leading to a peculiar behavior of the viscosity coefficients [37].

HYDRODYNAMIC APPROACH TO TRANSPORT IN NANOSCALE SYSTEMS

In this section we show that, in the case of nanoscale systems, the stress tensor can be approximated by a form similar to the classical Navier-Stokes one. This is due to the geometric constriction experienced by electrons flowing in the nanostructure, which gives rise to a very short "collisional" time [38,39]. The system we have in mind is a nanoscopic junction sandwiched between two mesoscopic or macroscopic leads (see Figure 1), and current is induced in the system by, e.g., polarizing the leads with a finite bias. In this regime, we show that one can truncate the infinite hierarchy of equations of motion for the electron stress tensor given in (23) to second order and thus derive quantum hydrodynamic equations. To realize how the simple presence of the junction has such a strong impact on the equation of motion of the current, one has to keep in mind that the former acts as a single impurity potential that cannot be avoided by the electron flow. This is different from the corresponding effect in bulk materials, for which a certain density of impurities is necessary to produce a finite resistance. Let us then employ the quantum Boltzmann equation for the single-particle distribution function $f(r,p,t)$ (which can be derived from the time-dependent Schrödinger equation with standard techniques [40]) and show how the short collisional time induced by the nanostructure allows us to close the equations for the stress tensor. The quantum Boltzmann equation for the distribution function in a co-moving (Lagrangian) reference frame moving with velocity $v(r,t)$ is given in [14,41]; in it, I is the usual collision integral [40] and $\varphi$ is the sum of the external potential and the Hartree part of the interaction potential.
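Before proceeding with the collision-integral analysis, we give a feeling for the density parameter $r_s$ introduced in the previous section. The short script below (ours, using standard textbook values, not part of Refs. [35,36]) evaluates $r_s$ for a typical metallic density.

```python
import math

A_BOHR = 5.29177e-11  # Bohr radius in meters

def r_s_3d(n):
    """Density parameter of a 3D electron liquid of density n (in m^-3):
    the radius, in Bohr radii, of a sphere containing one electron."""
    return (3.0 / (4.0 * math.pi * n)) ** (1.0 / 3.0) / A_BOHR

# conduction-electron density of gold, ~5.9e28 m^-3 (textbook value)
print(f"r_s(Au) ~ {r_s_3d(5.9e28):.2f}")  # ~3.0, in the metallic range
```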
The collision integral contains two terms, one elastic and the other inelastic. In what follows, it is important to realize that both terms can drive the system toward a local equilibrium configuration. From the quantum Boltzmann equation, we can derive the equation of motion for the moments of the distribution function. The general expression for the mth moment is the mth-rank tensor P_i1,...,im(r, t) = ∫ dp p_i1 · · · p_im f(p, r, t). The zeroth moment is the single-particle density, the first moment is the velocity field, and the second moment is the stress tensor we want to approximate. The equation of motion for the stress tensor contains a term proportional to the third moment P^(3). We note that P^(3) enters in (33) only through its spatial derivative. If the latter is small then the hierarchy can be truncated [14,41]. From (33) we easily see that this derivative is small compared to the other terms whenever γ = u/(L max(ω, ν_c)) ≪ 1. Here u is the average electron velocity, L is the length of inhomogeneities of the liquid that give rise to scattering among three particles, ω the system proper frequency and ν_c the collision rate. The parameter 1/L enters through the spatial derivative of P^(3), ω through the frequency dependence of the interactions (in the DC limit of interest here ω → 0), and ν_c through the collision integral. This derivative is indeed small for transport in nanostructures: when electrons move into a nano-junction they adapt to the given junction geometry at a fast rate, and approach to local equilibrium occurs at this fast rate even in the absence of electron interactions [38,39]. This "relaxation" mechanism occurs roughly at a rate ν_c = (Δt)^−1 ∼ (ħ/ΔE)^−1, where ΔE is the typical energy spacing of lateral modes in the junction. For a nano-junction of width ℓ we have ΔE ∼ π²ħ²/(mℓ²) and Δt ∼ mℓ²/(π²ħ). If ℓ = 1 nm, ν_c is of the order of 10^15 Hz, i.e., orders of magnitude faster than typical electron-electron or electron-phonon scattering rates. The condition γ = u/(L max(ω, ν_c)) ≪ 1 thus requires the length of inhomogeneities L ≫ 1 nm, which is easily satisfied in nanostructures. Note instead that in mesoscopic structures this condition is not necessarily valid. In that case, the dominant relaxation rate ν_c is given by inelastic effects, i.e. it is of the order of THz, so that for typical lengths of mesoscopic systems, γ ≈ 1 in the DC limit. Nonetheless, the above condition could still be valid for high-frequency excitations, like plasmons, and/or very low densities, so that moments of the distribution of order higher than two are negligible. By neglecting ∇_k P^(3)_i,j,k in (33) we can thus derive a form for P_i,j. Let us write quite generally the stress tensor P_i,j as P_i,j = δ_i,j P − π_i,j, where the diagonal part gives the pressure of the liquid, and π_i,j is a traceless tensor that describes the shear effect on the liquid. From (33) we thus find that the tensor π_i,j can be written as (in d dimensions, d > 1) π_i,j = η(∂_i v_j + ∂_j v_i − (2/d) δ_i,j ∇·v), where η is a real coefficient (the viscosity) that is a functional of the density [41]. We point out that (34) is in fact a particular case of a general stress tensor with memory effects taken into account [30,35,42]. In our derivation this is the first non-trivial term of an expansion of the stress tensor in terms of the density and velocity field. Consequently the Navier-Stokes stress tensor in (34) can be seen as the first-order (non-trivial) contribution to the exact stress tensor of the electron liquid (see also [25,30,42]).
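(As a numerical check of the collision-rate estimate above; the physical constants are standard, while the electron velocity u and inhomogeneity length L in the example are assumed, Fermi-scale, figures:)

```python
import math

hbar = 1.0545718e-34  # J*s
m_e = 9.1093837e-31   # kg

def collision_rate(width_m):
    """nu_c ~ Delta_E / hbar, with Delta_E ~ pi^2 hbar^2 / (m l^2)
    the lateral-mode energy spacing of a junction of width l."""
    delta_E = math.pi**2 * hbar**2 / (m_e * width_m**2)
    return delta_E / hbar

nu_c = collision_rate(1e-9)   # ~1.1e15 Hz for a 1 nm junction, as quoted
u, L = 1e6, 1e-8              # assumed Fermi-scale velocity, 10 nm inhomogeneity
gamma = u / (L * nu_c)        # ~0.09 << 1: the hierarchy truncation holds
print(f"nu_c = {nu_c:.2e} Hz, gamma = {gamma:.2e}")
```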
Using this stress tensor we finally get from (22) the generalized Navier-Stokes equations for the electron liquid in nanoscale systems Equations (36) are formally equivalent to their classical counterpart [15] [see Eq. (1)] and thus describe also nonlinear solutions, i.e., the possibility to develop turbulence in the electron liquid in its normal state. In the following, we will consider only the case in which the liquid is in the laminar regime and incompressible so that the viscoelastic coefficients are spatially uniform. This latter approximation is practically satisfied in metallic quantum point contacts (QPCs) but needs to be relaxed in the case of QPCs with organic/metallic interfaces (see, e.g., [21]). In addition, for this case the Hartree potential is constant and its spatial derivative is thus zero. Therefore, (36) reduce to the Navier-Stokes equations for the density and velocity of a viscous but incompressible electron liquid D t n(r, t) = 0, HEAT EQUATIONS FROM HYDRODYNAMICS The above results allow us to treat heat generation and transport using a simplified hydrodynamic approach. In fact we know that the flow of a viscous fluid, as described by our formalism, generates internal friction and consequently an effective temperature distribution inside the system. Therefore, when a steady state has been reached, we can supplement the Navier-Stokes equations with an equation for the energy balance. In the process of heat production, we need to identify a heat source, a mechanism for the dissipation of this heat and, since the system is in a steady state, equate these two terms with the local entropy production. In a recent paper [7] we have developed this model obtaining the equation for the energy balance where T e is the electronic temperature, k(r) is the diffusion constant and c V is the specific heat at fixed volume of the electron gas. 3 Eq. (37) can be either justified on physical grounds, or derived formally as high-order expansion of the many-particle stress tensor [25]. We also stress once more that in deriving this equation we have assumed that the flow of the electron liquid is laminar, i.e., we are far from the onset of a turbulent regime [14,15]. Obviously, in writing Eq. (37) we have assumed that some thermodynamic quantities like temperature and entropy for an electron liquid flowing in a nanostructure can be defined. This is a much debated point, and obviously we do not have a general solution for it. However, here we argue that the electron temperature may be defined as the one ideally measured by a probe weakly coupled to the system and in local equilibrium with the latter [2]. While this operational definition may not be simple to realize in practice, we know from experiments that local heat generation due to current has a large effect on the stability of nanostructures [9]. From the form of Eq. (37) we can deduce a general relation between the applied bias and the electron temperature. To do this, we realize [7] that the electron fluid velocity, v, (which is generally smaller than the Fermi velocity [22]) responsible for the transport of current and heat is, in linear response, proportional to the bias V . 
4 This simple proportionality, and the usual result that k ∝ c_V, bring us to the general result, Eq. (38), where γ_ee is a constant whose expression in terms of microscopic parameters of the electron liquid has been recently derived for a quasi-adiabatic connection between the leads and the nanojunction, Eq. (39) [7], where G is the conductance of the system in the limit of zero bias, A_c its cross section, and d is the dimensionality (d > 1). Moreover, an explicit expression is available in 3D, and γ = π k_F k_B² λ_e/6 in 2D [7]; k_F is the Fermi momentum, k_B the Boltzmann constant, and λ_e is the inelastic mean free path. Interestingly, Eq. (38) can be obtained from general thermodynamic arguments, by comparing the energy dissipated in the transport process in the nanostructure (proportional to V² from Ohm's law), and the energy carried away by electrons (proportional to T_e² for small temperatures) [7]. LOCAL ELECTRON HEATING In the case of a finite background temperature and in the absence of ionic heating, from our hydrodynamic theory, the local temperature of the electrons in the nanostructure is given by [7] T_e(V) = [T_0² + γ_ee² V²]^(1/2) (41) where V is the external bias, and T_0 is the electron temperature deep into the electrodes. If we now let the ions heat up, their effective local temperature is given by an analogous root expression [7] (for values of the parameters such that the argument in the root is non-negative), where γ_ep can be expressed in terms of the physical parameters of the nanostructure [3,43], and we have assumed that both the ions and the electrons are at the same background temperature T_0 deep into the electrodes. At zero background temperature and for negligible electron-electron interactions, from the above equation we obtain the known result for the local ionic temperature [3,7,43], T ≃ γ_ep √V. Effect on conductance -We can now calculate the effect of local electron heating on the conductance of a nanostructure. We focus on the quasi-ballistic regime and we generalize Eq. (13) of Ref. [43] for the inelastic current in the presence of a finite electron temperature. 5 We also consider one mode frequency ω. We will generalize later to more modes. To take into account the effect of an effective local electron temperature on the inelastic current, one faces the calculation of terms with factors of the type f_E^α (1 − f_E'^β), with α, β = {R, L} corresponding to electrons moving from either left or right, and f_E^α = (exp((E − µ_α)/k_B T) + 1)^−1 is the Fermi distribution with the difference between the electrochemical potentials equal to the bias, µ_L − µ_R = V. (Refer to [43] for additional details on the notation.) We could provide a numerical calculation of the inelastic current. However, we are interested in an analytical expression and thus proceed as follows. We evaluate the above integrals in the Sommerfeld approximation and keep only the terms of zeroth order in the electron temperature (this is reasonable because the local electron temperature is generally a small quantity). This approximation brings us to the expression, Eq. (44), for the current flowing in the system, where γ_I is the amplitude of the conductance drop at V = V_c ≡ ħω/e for zero electron and phonon temperature, G_el is the elastic conductance at zero bias, and β(V) = 1/k_B T_e(V), where k_B is the Boltzmann constant. By differentiating Eq. (44) with respect to bias, and again keeping only the terms of zeroth order with respect to the electron temperature, we arrive at Eq. (45). To obtain this result, one also has to bear in mind that the approximations we make pertain to the energy region around the inelastic threshold V ≈ V_c.
An expression for the conductance similar to Eq. (45) can be derived for the case of zero electron temperature [43], i.e., β → ∞. Notice, however, that for consistency, one has to take this limit in the expression for the current (44) before taking the derivative with respect to the bias. An example of the effect of local ionic and electron heating on conductance is given in Fig. 2 (see also discussion below). In the absence of both effects (and at zero nominal background temperature) the conductance shows a simple step-like drop at the bias corresponding to the energy of the phonon mode. The ionic heating introduces a shoulder at biases larger than the mode energy, while the electron heating broadens the conductance curve with an effective temperature larger than the nominal background temperature. Comparison with experiments -To compare our results with available experimental data we consider a D 2 molecule sandwiched between two Pt leads [13]. We focus on the predictions of our hydrodynamic theory on the local electron heating effect. Therefore, we do not attempt to do a full first-principles calculation of ionic heating, and take the relevant parameters from experiment. For the D 2 molecule we consider a cross section of π × 1Å 2 , i.e., a circle with radius ≃ 1Å. The nominal electron temperature deep inside the electrodes is taken to be T 0 = 5 K. From the experimental results we have the frequency of the phonon mode V c = 0.05 eV, the drop of the conductance and the conductance at zero bias, G el = 0.984 G 0 . We use as fit parameters γ ep and evaluate γ ee from Eq. (39). In obtaining γ ee we have assumed an inelastic mean free path λ e of 1 µm, a value in line with the expectations for this system [44]. We have also assumed that the electron density that enters the local heating is the one of the chemical bonds between the D and Pt atoms. This density is estimated to be close to the Pt bulk density, n = 6.6 × 10 28 m −3 which gives the electron constant r s ≃ 3. From these values, the electron viscosity η and the constant γ ee are easily obtained from Eqs. (28) and (40), respectively: The electronic heating constant is predicted from Eq. (39) to be γ ee = 180 K/V. This implies an effective electron temperature of about 10 K at the D 2 junction at a bias of 50 mV. This temperature is higher that the nominal bulk temperature. The ionic heating constant is found to be γ ep = 405 K/ √ V . 6 This value can be compared with the corresponding γ ep for a Au point contact at small biases which is about γ Au ep = 170 K/ √ V [43]. This means that the Pt-D 2 -Pt system heats up more than the Au QPC. For instance, at 0.1 V the ions of the Pt-D 2 -Pt junction have an average temperature of about 130 K while at the same bias the gold atoms heat up locally to about 54 K. This larger temperature is reasonable since, while the conductance is similar for a Au point contact and Pt-D 2 -Pt, the D 2 molecule is lighter than Au with a consequent increase of the electron-phonon coupling. In addition, the modes of the D 2 molecule have lower probability to elastically scatter into the bulk modes of Pt -thus reducing lattice heat dissipation into the bulk electrodes -than the modes of a single Au atom into the bulk modes of Au. Both effects lead to a higher local ionic temperature. 
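(The figures just quoted follow from Eq. (41), from the zero-background ionic estimate T ≃ γ_ep √V with the fitted constants, and from the standard 3D definition of r_s, whose displayed form earlier in the text was lost in extraction; a short check:)

```python
import math

a_B = 5.29177e-11  # Bohr radius, m

def r_s_3d(n):
    """Standard 3D Wigner-Seitz parameter: (4/3)*pi*(r_s*a_B)^3 = 1/n."""
    return (3.0 / (4.0 * math.pi * n)) ** (1.0 / 3.0) / a_B

def T_electron(V, T0=5.0, gamma_ee=180.0):
    """Local electron temperature, Eq. (41)."""
    return math.sqrt(T0**2 + (gamma_ee * V) ** 2)

def T_ion(V, gamma_ep):
    """Zero-background ionic estimate, T ~= gamma_ep * sqrt(V)."""
    return gamma_ep * math.sqrt(V)

print(r_s_3d(6.6e28))      # ~2.9, i.e. r_s ~= 3 for the Pt-bond density
print(T_electron(0.050))   # ~10.3 K at 50 mV bias, as quoted
print(T_ion(0.1, 405.0))   # ~128 K: Pt-D2-Pt ions at 0.1 V (~130 K quoted)
print(T_ion(0.1, 170.0))   # ~54 K: Au point contact at the same bias
```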
We thus expect the Pt-D 2 -Pt junction to be more unstable under the same bias conditions than a Au point contact, i.e., we expect that the chain Pt-D 2 -Pt breaks, on average, at much smaller biases than Au point contacts due to heating effects. The theoretical conductance containing both the local electron and ionic heating effects is reported in Fig. 3, together with the experimental data. The qualitative agreement between theory and experiment is very good. It is interesting to note that the tail of the experimental data goes approximately as V 0.8 , while the theory predicts 1 − G/G 0 ≃ V 1/2 [3,5]. It is important to realize, however, that at large biases, other effects such as current-induced forces and other structural instabilities may also contribute to the actual value of the conductance [45]. We have also performed a second fit, not shown here, using γ ee and γ ep as free parameters. The values for these parameters obtained from this second fit are close to those obtained from the theory and the one-parameter fit by less than 10 % (we find the best fit for γ ee = 200 K/V). Inelastic conductance width -Let us now discuss how the width of the inelastic conductance around the vibrational mode increases with bias (see Fig. 2). This quantity can be directly measured and provides additional information on local electron heating. If the background temperature is zero, the local electron temperature increases linearly with bias as in Eq. (41). Let us define the quantity ∆ as shown in Fig. 2: It is the energy distance between the middle drop of the conductance and the value at which the conductance assumes its purely elastic value within a ratio α = [G el − G(V c )]/G el as indicated in Fig. 2. This quantity is plotted in Fig. 4 for different values of the vibrational mode energy and for a few values of α, assuming that the vibrational energy is the only quantity allowed to vary. We conclude that the width ∆ increases linearly with the vibrational energy to reflect the linear bias dependence of the local electronic temperature. A systematic experimental study of this quantity would thus provide more information on the electron heating phenomenon. DISCUSSION Our analysis in conjunction with the experimental data suggests that electrons heat up locally at the Pt − D 2 − Pt junction. Our Eq. (41) also predicts that electrons cool down when lowering the bias. On the other hand, a constant electron temperature -above the background temperature -for all biases is difficult to understand on physical grounds, unless one assumes the existence of an external source of energy that keeps the electron hot even at zero bias. Experimental data showing an electron temperature equal to the background temperature, i.e., negligible electron heating, may be consistent with the fact that the effective cross section "seen" by the electron liquid is the one of a Pt atom and not of a D 2 molecule. 7 If that were the case, the effective cross section would be 7 times larger than that of the D 2 molecule, and since the electron temperature scales inversely proportional to the cross section (see Eq. (39) and Ref. [14]) the electron heating temperature would be lower than the background temperature. The conductance on the other hand is unlikely to be so sensitive to this cross section due to the extended nature of the Pt d-orbitals. Further generalization of Eq. (45) to the case where many vibrational modes are present is possible. 
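(Before generalizing to several modes, the linear growth of the width Δ noted above can be illustrated with a toy thermally broadened step. The functional form below is an illustration of the mechanism only, not the paper's Eq. (45), and the drop amplitude γ_I = 0.05 is an assumed value:)

```python
import math

K_B = 8.617e-5  # eV/K

def T_e(V, gamma_ee=180.0, T0=0.0):
    # Eq. (41) with zero background temperature, as in the width discussion
    return max(math.sqrt(T0**2 + (gamma_ee * V) ** 2), 1e-12)

def G(V, Vc, G_el=0.984, gamma_I=0.05):
    # Illustrative thermally smeared inelastic step (toy model, NOT Eq. (45)):
    # the drop gamma_I switches on around Vc over a window ~ k_B * T_e(V).
    return G_el - gamma_I / (1.0 + math.exp(-(V - Vc) / (K_B * T_e(V))))

def width_delta(Vc, alpha=0.01, G_el=0.984):
    # Bias distance below Vc at which G recovers to within a ratio alpha
    # of its elastic value, mimicking the definition of Delta in Fig. 2.
    V = Vc
    while G(V, Vc, G_el=G_el) < G_el * (1.0 - alpha):
        V -= 1e-5
    return Vc - V

for Vc in (0.02, 0.04, 0.06, 0.08):
    print(f"Vc = {Vc:.2f} V -> Delta = {width_delta(Vc) * 1e3:.2f} mV")
# Delta grows linearly with Vc, tracking T_e = gamma_ee * V.
```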
For example, it has been reported that a H 2 molecule sandwiched between two Pt leads shows two fundamental vibrational frequencies at 48 meV and 62 meV [13]. If one assumes that scattering by these two modes is uncorrelated, a straightforward where ω 1 and ω 2 are the two vibrational frequencies and we have taken into account the possibility that the two coupling constants γ ep,1 and γ ep,2 , and the two amplitudes of the conductance drops γ I1 and γ I2 be different. A plot of G H2 is reported in Fig. 5 as a function of the external bias along with the experimental data. Since the cross section for D 2 and H 2 is essentially the same, and the electron heating does not depend on the mass of the ions, our estimate of γ ee holds for H 2 as well. Our results are again in qualitative agreement with the available experimental data [13], although our theory might be not sufficient to quantitatively describe all the experimental findings. Indeed, our fit in this case has failed in producing any sensible result for the constants γ ee , γ ep,1 and γ ep,2 : the large fluctuations of the experimental data, especially in the region of small bias and close to the phonon modes energies do not allow for a systematic fit of the data with the theory. Finally, it is interesting to note that a value of γ ep similar to the one we have obtained for the D 2 molecule gives a reasonably good agreement between theory and experiment also for the H 2 molecule. This seems to suggest that the longitudinal modes of the bonds between the H and Pt atoms are mainly responsible for the local ionic heating of the Pt-H 2 -Pt junction, and similarly the longitudinal modes of the bonds between the D and Pt for the Pt-D 2 -Pt junction. We expect that such modes are slightly affected by the change of mass of the smaller atom in the bond. Clearly, more theoretical and experimental work in this direction is necessary. CONCLUSIONS We have discussed a novel hydrodynamic approach to transport that allows the description of charge and heat flow in terms of the single-particle density and velocity field of the electron liquid [14]. The theory allows us to make predictions about the electron flow past a nanostructure and its dependence on the external bias (or the current). One such prediction is the heating of electrons locally at the nano-junction [7]. Here we have considered the measurable consequences of this effect on the inelastic conductance which shows a broadening at the inelastic step larger than the one expected from the background nominal temperature. We have compared our theory with available experimental results [13] and found a reasonable quantitative agreement for the case of a D 2 molecule between two Pt leads. For the case of a H 2 molecule between the same leads our theory is only in qualitative agreement with the experimental findings. We also predict that the width of the inelastic conductance step should increase linearly with bias, a fact that can be tested experimentally.
2008-02-25T19:01:58.000Z
2008-02-25T00:00:00.000
{ "year": 2008, "sha1": "8e120d83fedbbb11576254ea9543c3d23c7bb2b9", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0802.3676", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "8e120d83fedbbb11576254ea9543c3d23c7bb2b9", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine", "Chemistry" ] }
237478681
pes2o/s2orc
v3-fos-license
Screening plans for SARS-CoV-2 based on sampling and rotation: An example in a European school setting Screening plans for prevention and containment of SARS-CoV-2 infection should take into account the epidemic context, the fact that undetected infected individuals may transmit the disease and that the infection spreads through outbreaks, creating clusters in the population. In this paper, we compare through simulations the performance of six screening plans based on poorly sensitive individual tests, in detecting infection outbreaks at the level of single classes in a typical European school context. The performance evaluation is done by simulating different epidemic dynamics within the class during the four weeks following the day of the initial infection. The plans have different costs in terms of number of individual tests required for the screening and are based on recurrent evaluations on all students or subgroups of students in rotation. Especially in scenarios where the rate of contagion is high, at an equal cost, testing half of the class in rotation every week appears to be better in terms of sensitivity than testing all students every two weeks. Similarly, testing one-fourth of the students every week is comparable with testing all students every two weeks, despite the first one is a much cheaper strategy. In conclusion, we show that in the presence of natural clusters in the population, testing subgroups of individuals belonging to the same cluster in rotation may have a better performance than testing all the individuals less frequently. The proposed simulations approach can be extended to evaluate more complex screening plans than those presented in the paper. Introduction Since the beginning of the COVID-19 emergency in the early 2020, the importance of implementing extensive screening procedures to prevent or slow down the spread of the SARS-CoV-2 infection has been emphasized [1,2] and, in light of the threat of new variants of the virus that could be more widespread and of the critical issues related to the rapid implementation of effective vaccination plans [3,4], it still seems early to consider extensive surveillance strategies on the population (or specific subgroups of it) no longer necessary. Pharmaceutical industries have produced tests of various nature and cost, which have been proposed and used in screening plans aimed at early detection of asymptomatic or paucisymptomatic individuals in specific populations. The ability of these tests to correctly classify the single patient as infected or not has been widely discussed and debated, sometimes overshadowing the necessity that screening plans account for strengths and limitations of the used tests and are tailored to the specific context in which they are applied [5,6]. When dealing with a screening plan for an infectious disease, two points should not be overlooked: • undetected infected subjects can transmit the disease; • the infection usually spreads in small outbreaks, creating clusters in the population (families, classes, work colleagues). For these two reasons, screening plans similar to those implemented in the case of noncommunicable diseases could lead to suboptimal results in terms of cost-benefit ratio. Furthermore, it is crucial that the screening procedures are assessed accounting for the actual epidemic context, including the strength of contagion [7]. 
From the point of view of epidemic containment, school environment is critical both in terms of number of people involved (students, teachers as well as other school staff) and in terms of time spent by individuals in shared enclosed spaces. If, on the one hand, studies claimed that schools are not less safe than other environments if measures such as use of masks, desk distance, and ventilation are observed properly [8][9][10][11], on the other hand evidence arose that school closures are associated with decrease in COVID-19 incidence and mortality [12], suggesting a role of schools in spreading epidemic in the community. Additionally, it should be also considered that appropriate social distancing and individual protection measures may be difficult to implement [13]. Being the debate about school safety open, implementation of screening plans in schools appears to be a priority, also considering that the need of guaranteeing open schools is widely recognized in order to provide a safe environment, learning opportunities, but also to allow many people (parents included) to work [13][14][15]. Moreover, schools are an ideal place to carry out screening procedures, due to the natural clustering of individuals into separate cohorts (classes), that allows effective contact tracing and rapid quarantining. In this paper, we compare through simulations alternative strategies designed for screening in schools, based on repeated tests to be performed at regular time intervals on all students, or on tests to be performed in rotation on subgroups. The idea of performing rotation on subgroups of students belonging to the same class is the main novelty of our proposal and it strongly relies on the idea that testing a sample of students provides information about the presence of contagion in the class. The comparison that we propose refers to a single class of N = 24 students-a class size which is consistent with the European average [16] -, where class is here defined as a group of students who attend the same course each day at school or university for several weeks. The comparison takes into account the epidemic context and is performed under simple alternative scenarios of epidemic spread. Although the simulation analysis refers to school settings, we would like to point out that the methods adopted and the results obtained may be considered as valid in any context in which there are natural clusters within which contagion could spread starting from a single initial infection. Methods Let us suppose that our objective is to early identify infections in school settings, in order to quarantine the classes where there is at least one infected student. Let us suppose that individual tests are performed on all students in the class or on a subset of them, depending on the screening plan. The class is considered positive if at least one of the individual tests is positive; it is considered negative if no individual tests are positive. Therefore, at the class level, the sensitivity of the test is defined as the probability that the class is positive given that there is at least one infected student present. Specificity is the probability that the class is negative given that there are no infected students in it. In our analysis, we assume that the individual tests used in the screening procedure have maximum specificity and sensitivity p. 
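(A minimal sketch of the class-level sensitivity logic just defined: with maximum specificity, a screening of a class is negative only if every tested infected student escapes detection. The hypergeometric averaging over which infected students land in the tested subgroup anticipates the random group formation described in the Simulations subsection below.)

```python
from math import comb

def detection_prob(N, k, n, p):
    """Probability that a class of N students with k infected is flagged
    when a random subgroup of n students is tested, with individual-test
    sensitivity p and maximum specificity (as assumed in the text).
    Averages 1 - (1-p)^x over the hypergeometric count x of infected
    students that happen to fall in the tested subgroup."""
    return sum(
        comb(k, x) * comb(N - k, n - x) / comb(N, n) * (1.0 - (1.0 - p) ** x)
        for x in range(0, min(k, n) + 1)
    )

# One infected student in a class of 24, individual sensitivity 0.7:
print(detection_prob(24, 1, 24, 0.7))  # test all      -> 0.700
print(detection_prob(24, 1, 12, 0.7))  # test half     -> 0.350
print(detection_prob(24, 1, 6, 0.7))   # test a fourth -> 0.175
```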
The assumption of maximum specificity of the individual test implies maximum specificity at the class level and rules out those situations where false positives may lead to quarantining classes when not necessary. This assumption allows us to simplify subsequent calculations without compromising evaluations regarding the ability of the proposed plans to detect outbreaks. We also assume that the results of the tests are available within one day from the collection of the biological samples, and that quarantine is imposed immediately after the first notification of one or more positive students in the class. Finally, our simulations refer to a context in which infections are asymptomatic, i.e. we exclude that tests outside the screening plan can be performed on the class as a consequence of symptoms onset among students. Screening plans We considered six screening plans (Fig 1), which differ from each other for the time interval between consecutive evaluations on the class and number of students involved in each of them: • Plan A1. Individual tests on all students of the class every week; • Plan A2. Individual tests on all students of the class every 2 weeks; • Plan B1. Individual tests every week on 1/2 of the students of the class, in rotation; • Plan B2. Individual tests every 2 weeks on 1/2 of the students of the class, in rotation; • Plan C. Individual tests every 10 days on 1/3 of the students of the class, in rotation; • Plan D. Individual tests every week on 1/4 of the students of the class, in rotation. Plan A1 guarantees the best performance in terms of surveillance but requires more resources compared to the others. Plans B1, B2, C, and D have an additional element of risk compared to plans A1 and A2 because they test each time sub-samples of the students of the class. Plans B1, C and D allow more frequent monitoring of the class compared to plan A2 and B2. We assume p = 0.7, which is close to the average sensitivity reported by Organisation for Economic Cooperation and Development (OECD) for antigenic tests [6]. Then, in order to perform a sensitivity analysis, we assume p = 0.9 as well. Epidemic scenarios Let us suppose that one of the N students is infected on day 1. The epidemic dynamic within the class that originates from this first infection can be simulated by using a compartmental model [17], where the contagion strength depends on the average time of infectivity T and on the basic reproduction number R 0 , which is the number of secondary infections generated from the first infected student in the class. In particular, we assume that, on average, each infected student may spread the contagion in the class for T days from the onset of infection, still remaining detectable as infected for 4 weeks [18]. We also assume that there is not latency time. We consider different epidemic contexts, characterized by different combinations of R 0 and T (R 0 = 1.1, 1.5, 3, 5 and T = 7, 14, 21, for a total of 12 scenarios) [19], and we apply on each of them the six screening plans. The ratio β = R 0 /T is proportional to the contagion rate; combinations of R 0 and T which result in the same β generate the same epidemic dynamic, net of stochastic variability. This is the case for R 0 = 2, T = 14 and R 0 = 3, T = 21. Simulations Separate simulations, for a minimum of 7000, are performed for each combination of R 0 and T and each screening plan. 
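(For reference, the 12 scenarios and their contagion-rate parameter β = R0/T can be enumerated directly; the extreme β values quoted in the Results, 0.05 and 0.71, correspond to the corners of this grid.)

```python
# The 12 simulated scenarios and their contagion-rate parameter beta = R0/T.
R0_values = [1.1, 1.5, 3, 5]
T_values = [7, 14, 21]

for R0 in R0_values:
    for T in T_values:
        print(f"R0 = {R0:>3}, T = {T:>2}  ->  beta = {R0 / T:.2f}")

# The extremes beta = 0.05 (R0=1.1, T=21) and beta = 0.71 (R0=5, T=7) match
# the values quoted in the Results. Equal-beta pairs give the same dynamic
# up to noise; e.g. R0=2 with T=14 (outside this grid) matches R0=3, T=21.
```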
We assume that the number of new infections and the number of infected individuals that cease to be infectious at time t in the class, I_new(t) and R_new(t), follow Binomial distributions whose parameters depend on S(t − 1) and I(t − 1), the number of susceptible students and the number of infectious students at time t − 1, respectively. We further assumed that the groups, when required by the screening plan, are randomly generated, that the probability of becoming infected for a susceptible subject does not depend on the group to which he/she belongs, and that the new infections at each time are randomly distributed among the groups. Let us suppose that g = 2 groups of students G_1 and G_2 have been created and the susceptible individuals in the two groups at time t are S_1(t), S_2(t). The number of new infections in G_1 is sampled from a Hypergeometric distribution with parameters (M, K, n), where M is the population size, K is the number of successes in the population, and n is the number of draws. Then the number of new infections in G_2 is obtained as the difference. In general, if g > 2 the number of new infections in the groups is obtained by sampling from a Multivariate Hypergeometric distribution. The results of the individual tests on different individuals are assumed to be independent and follow a Bernoulli distribution with parameter equal to the sensitivity of the individual test p. According to the assumption of maximum specificity, the result of the test on the non-infected students is always negative. In the simulations, we allow the epidemic to originate at any time between two consecutive assessments of the class and we focus on a time window of 4 weeks from the first infection (Fig 1) to evaluate the performance of the screening plan in terms of: • probability of detecting the outbreak at 7, 14, 21 and 28 days since its onset; • total number of infection-days which are left undetected by the screening plan in the 4-week time window. An infection-day is here defined as a day spent by a subject in the infectious status, thus a day in which he/she can spread the contagion. The resulting cumulative probability curve describes the overall performance of the screening plans in detecting the presence of infections in the class. As expected, plan A1 guarantees the best performance in terms of epidemic detection. The cumulative probabilities for plans A2 and B1 are very similar in scenarios where the infection spreads slowly within the class (upper left quadrant), while plan B1 seems to have better relative performance compared to A2 in high-epidemic-spread contexts (bottom right quadrant). Screening plans B2, C and D have the worst performance if the infection spread is low, but they reach good results in high-risk scenarios. Plan D seems to detect infections within 2 weeks from the beginning of the epidemic with a probability between 70% and 80% in scenarios where β ≥ 0.36. Results The probabilities that the screening plans do not detect the infection within 4 weeks from the beginning of the epidemic are reported in Table 1. The risk of not detecting the outbreak within 4 weeks decreases as the rate of infection within the class increases. For plan A1, the probability of a false negative is always negligible (lower than 0.4%). For plans A2 and B1 probabilities are very similar (from 1.0% when β = 0.71 to 5.5% when β = 0.05), as is the case for plans B2, C and D (from 3.4% when β = 0.71 to 24.4% when β = 0.05).
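(The full stochastic model is sketched below for a single run. The per-day infection probability 1 − (1 − β/N)^I and the block, rather than fully random, group rotation are simplifying assumptions, since the paper's display equations and group-sampling details were lost in extraction; with groups = 2 and period = 7 the sketch corresponds to plan B1.)

```python
import random

def simulate_class(N=24, R0=3.0, T=14, days=28, groups=2, period=7, p=0.7):
    """One stochastic run: returns the day of detection, or None if the
    outbreak escapes the screening within the 4-week window."""
    beta = R0 / T
    status = ["S"] * N      # S: susceptible, I: infectious, R: no longer
    status[0] = "I"         # infectious but still detectable by the test
    rotation = 0
    for t in range(1, days + 1):
        n_inf = status.count("I")
        p_inf = 1.0 - (1.0 - beta / N) ** n_inf   # assumed contagion model
        for i in range(N):
            if status[i] == "S":
                if random.random() < p_inf:
                    status[i] = "I"
            elif status[i] == "I" and random.random() < 1.0 / T:
                status[i] = "R"                   # mean infectious period T
        if t % period == 0:                       # screening day: one subgroup
            size = N // groups
            tested = range(rotation * size, (rotation + 1) * size)
            rotation = (rotation + 1) % groups
            if any(status[i] != "S" and random.random() < p for i in tested):
                return t
    return None

runs = [simulate_class() for _ in range(5000)]
print(sum(r is not None for r in runs) / len(runs))  # detection within 4 weeks
```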
Comparing the plans in terms of lost infection-days, of the three cheaper plans, plan D seems the one that guarantees the lowest number of lost infection-days. Interestingly, plan D is equivalent to or better than A2 in scenarios where β ≥ 0.36. Plan B1 leaves fewer undetected infection-days than plan A2. In Fig 4, focusing on plans A1, B1 and D, which are the best ones within their cost range, we compare the cumulative probabilities of a positive result when assuming p = 0.7 and 0.9, under the scenarios characterized by the largest and the lowest rates of contagion. Increasing the sensitivity of the individual test to 0.9, the performance of the screening plans increases, but their relative accuracy remains similar. This result arises also from the comparison of S1 and S2 Tables, which report averages and 90th percentiles of the number of lost infection-days for the six screening plans and the 12 epidemic scenarios, for p = 0.7 and 0.9, respectively. Discussion In this paper, we performed simulations to compare the performance of six screening plans based on individual tests with low sensitivity and maximum specificity in detecting infection outbreaks at the class level in schools or, more in general, at the cluster level in a population. We accounted for uncertainties around the epidemic dynamic in the class through a stochastic compartmental model, and around the screening plan implementation through random generation of groups, when required by the plan, and random allocation of the new infections across them. Our work, which can be considered a proof-of-concept study, is not far from others which used mathematical models or simulations to investigate relevant issues during the COVID-19 emergency, such as the definition of optimal quarantine strategies, optimal pool size in pooled testing, and optimal time interval between repeated screenings [20][21][22][23][24][25]. In particular, with respect to similar works [24,25], the novelty of our proposal is related to the introduction of rotating schemes on random subgroups of the students, which allow a strong reduction of costs. Additionally, it is important to stress that rotation plans require less frequent collection of individual biological samples, and this could encourage larger participation and compliance of the students with the screening program. The compared plans have different costs (we simplistically assumed that costs are directly proportional to the number of performed tests) and are based on recurrent evaluations on all students or on subgroups of students on rotation. (Table 1. Probability of not detecting the outbreak in the class within 28 days from the beginning of the epidemic, by screening plan (A1, A2, B1, B2, C, D), under different epidemic scenarios (R_0 and T), assuming that individual tests have sensitivity 0.7 and maximum specificity.) Among all possible plans that perform assessments on the students at time intervals greater than one week, the best option obviously consists in testing all students at weekly intervals. This option assures the maximum sensitivity and the lowest number of infection-days left undetected. However, it is very expensive, hence the need to explore cheaper strategies. Without claiming to be exhaustive, we considered and compared through simulations five alternatives to the best option.
At an equal cost, testing half of the class on rotation every week proves to be better than testing all students every two weeks, because it allows earlier detection of the presence of infections, especially in scenarios of a high rate of contagion in the class. If resource constraints are even tighter, less expensive plans can be considered: testing half of the class once every two weeks, testing one-third of the class every 10 days, testing one-fourth of the class every week. These plans, which have the same cost, perform similarly in the case of a low rate of contagion. However, in the case of a high rate, testing one-fourth of the class every week seems to be the best option. This again suggests that reducing times between successive evaluations may largely balance the risk related to a lower class coverage at each assessment. Moreover, considering that strength and speed of virus transmission within each single class are not known in advance, it seems reasonable to focus on high-transmission scenarios, potentially more dangerous in terms of infection spread within and outside the class [26]. Interestingly, the plan which tests one-fourth of the students every week turns out to be comparable to the plan that tests all students every two weeks in terms of sensitivity, and better in terms of lost infection-days. This suggests that reducing costs by simply increasing the interval between successive assessments should not be considered a priori a good option, because much less expensive strategies based on testing subgroups on rotation could end up performing similarly. The absolute performance of the screening plans increases with the sensitivity of the individual tests used, but their relative performance remains unchanged. It should be noticed that the sensitivity value of 0.9 roughly corresponds to the one that would be obtained by repeating a test with a sensitivity of 0.7 twice on the same individual, in a procedure of double testing [6,27]. In our simulations, the time between collection of the biological sample and availability of the test result is assumed to be less than a day. If it is longer, then it is important to take this into account in terms of additional infection-days left undetected [24]. Additionally, we assumed that there is no latency time, but we expect that results would be quite similar assuming that there are a few days between getting infected and first becoming contagious. Our results are valid in school contexts similar to those observed in most European countries, where classes are usually fixed groups of people attending the same courses every day. The size of the class used in this work has been chosen to be consistent with the average European class sizes (possibly including teachers) [16] or with a small university class. For larger classes, for a fixed number of subgroups, the advantage of rotation remains, even if the basic assumption that all students are homogeneously mixed (each one can have contact with each other) could be less appropriate. Many factors could influence the epidemic dynamic within the class, such as number of hours spent in the classroom and size of the classrooms, kind of school (with/without laboratories), age of the students, ventilation, external temperature, individual protection measures adopted and compliance of the students with them. We model all these factors through a single parameter R_0 that we use to simulate epidemic scenarios.
This is a very simplistic approach, but the idea underlying our simulations could be extended to account for the contribution of distinct factors to the contagion rate, if reliable information about the ways of transmission were available. More complex models, like agent-based models, could be used to reproduce the epidemic dynamics scenarios in entire schools and in more complicated situations where students share other places besides the classroom (campus, houses, dormitories) [28]. Our simulations can be extended also to screening plans which combine rotation and sample pooling strategies, which could be less costly than those based on individual tests [29]. Each group of six students in plans C or D could be tested in a pool with only one molecular test on mixed material from individual swabs. Similarly, the screening on the entire class could be done by performing only one pooled test on the whole class or by defining k > 1 subgroups of students, then performing k pooled tests. However, the sensitivity of pooled testing for various pool sizes and its relationship with the viral load distribution in the population should be considered essential inputs to assess and compare the performance of plans which involve sample pooling [22,30]. A final remark concerns our assumption of maximum specificity. This assumption leads to conservative estimates of the sensitivity at the class level: if a student is falsely declared positive in a class where there is at least one infection, the probability that the class is declared positive is higher than in our analysis, where we rule out the presence of false positives. On the other hand, a suboptimal, even if high, specificity could induce a non-negligible number of inappropriate quarantines, with the resulting social costs. We do not address this issue in the paper, but it is worth noting that the risk of inappropriate quarantine is lower if tests are performed on smaller groups of students. For example, if the individual test has a specificity of 0.99, the specificity on the class at each assessment is 0.99^6 ≈ 0.94 under plan C and 0.99^24 ≈ 0.79 under plan A1. In conclusion, our study, which for its simplicity has the nature of a proof of concept, emphasizes the importance of taking into account the epidemic context when comparing alternative screening strategies aimed at contagion prevention. Combining calculation of test accuracy with models of contagion dynamics is an easy way to do this, even when the plans to be evaluated are more complex than those considered in this paper. We show that, in the presence of clusters in the population, less costly strategies that exploit the correlation between subjects may have good performance, in particular in detecting outbreaks at high rates of contagion, and that the time interval between successive evaluations on the same cluster is a very relevant input. The closer the assessments, the lower the number of infection-days left undetected, with a reduction of the risk that infectious subjects spread the contagion within the cluster and outside it. In real applications, economic and social costs as well as positive and negative predictive values of the proposed plans should be evaluated making assumptions about the prevalence of infections in the population.
2021-09-12T05:21:37.363Z
2021-09-10T00:00:00.000
{ "year": 2021, "sha1": "3f03ab2d69f85d6c92cfc5af44e979b11c0557da", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0257099&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3f03ab2d69f85d6c92cfc5af44e979b11c0557da", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
39143693
pes2o/s2orc
v3-fos-license
Assorted weak matrix elements involving the bottom quark As part of a larger project to estimate the f_B decay constant, we are recalculating f_B^static using a variational smearing method in an effort to improve accuracy. Preliminary results for the static B_B parameter and HQET two-point functions are also presented. (* Presented by C. McNeile.) INTRODUCTION The extraction of CKM matrix elements from experimental data requires the calculation of QCD matrix elements of operators involving the bottom quark [1]. For a number of years the MILC collaboration has been doing a systematic study of the f_B decay constant [2]. The difficulty of simulating the bottom quark, with its mass larger than currently feasible inverse lattice spacings, was overcome by interpolating between the results of simulations using Wilson quarks with masses around the charm mass and those from static simulations (infinite mass). Unfortunately, for some of the simulations the f_B^static results were not usable. These results came as a by-product of the hopping parameter expansion of the heavy quark, and for technical reasons had significant contamination by higher-momentum intermediate states when the physical volume was large [2]. In addition, it is well known that the poor signal-to-noise ratio of the static simulations makes it important to use an efficient smearing method. To provide f_B^static results on all lattices, and to reduce the errors even in the cases where the results were previously available, we have started a set of static-light simulations using a variational smearing method. STATIC f_B The computational cost of our current implementation of the FFT on parallel machines prohibited the use of a more sophisticated smearing technique such as MOST [3], so we use a basis of smearing functions parameterized by constants A, B, C and D. The parameters A, B, C and D were obtained from uncorrelated fits to the Kentucky group's measured wave functions of static-light mesons (at β = 6.0) [3]. The A, B, C and D parameters were scaled to the appropriate value for a given β, using the estimates of the lattice spacing. Because of uncertainties in the lattice spacing, and to have the flexibility to choose different smearing functions for each kappa value, two additional sets of the parameters were chosen. Thus a variational smearing matrix of order ten (including the local operator) is used in the simulations. The static quark is smeared relative to the light quark using standard FFT methods. The completed static-light production runs are shown in Table 1. The β = 5.6 configurations were generated by the HEMCGC collaboration [4]. Our analysis of the data is very preliminary; in particular, we have not yet fully optimized our smearing functions. All the results presented here use a single exponential source. We will focus on comparing the new static results with the numbers from other simulations. Ultimately, all the static data will be combined with that from propagating quark simulations [2]. We do a simultaneous fit to the smeared-local and smeared-smeared correlators. To compare our raw lattice numbers with other simulations, we quote our numbers in terms of Z_L. Z_L is related to the decay constant; no perturbative factors are included in its definition, and we assume the light quark propagator has been multiplied by 2κ. E_sim is the energy of the static-light meson; it is equal to the sum of the difference between the mass of the B meson and the bottom quark mass, and an unphysical 1/a renormalization factor.
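(A minimal sketch of the simultaneous fit described here, assuming the conventional single-exponential forms C_SL(t) = Z_S Z_L e^(−Et) and C_SS(t) = Z_S² e^(−Et); the paper's precise normalization of Z_L was lost with the display equation, so the normalization below is an assumption:)

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t_pair, E, Z_S, Z_L):
    """Stacked model: smeared-local (which=0) and smeared-smeared (which=1)
    correlators sharing a single energy E. Conventional normalizations."""
    t, which = t_pair
    amp = np.where(which == 0, Z_S * Z_L, Z_S**2)
    return amp * np.exp(-E * t)

# Toy data: fit times for the two correlators, stacked into one array.
t = np.arange(2, 10, dtype=float)
which = np.concatenate([np.zeros_like(t), np.ones_like(t)])
tt = np.concatenate([t, t])
E_true, ZS_true, ZL_true = 0.53, 1.8, 0.11  # illustrative values
data = model((tt, which), E_true, ZS_true, ZL_true)
data *= 1.0 + 0.01 * np.random.default_rng(0).standard_normal(data.size)

popt, pcov = curve_fit(model, (tt, which), data, p0=[0.5, 1.0, 0.1])
print(popt)  # recovers E (cf. aE_sim in Table 2) and Z_L simultaneously
```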
Correlations were included for the fits in time, but no kappa correlations were included in the chiral extrapolations. Some preliminary static fit results for the β = 5.6 simulation are contained in Table 2. Ali Khan et al. [5] have also calculated Z_L on gauge configurations from the same β = 5.6 simulation as those used here, but not on exactly the same sample of configurations. At κ = 0.1585, they quote aE_sim = 0.528(5). We have done some fits to the β = 5.445 static data. The masses obtained were consistent with the older MILC calculation. This is a large-volume case where the older method does not produce usable static Z_L factors, so no check is available there. However, the new static correlators seem reasonably consistent with the propagating quark results. STATIC B_B PARAMETER The B_B parameter is required in the extraction of the V_td CKM matrix element from the experimental data on B–B̄ mixing. For the β = 5.5 and 5.6 simulations we calculated the static B_B parameter (so far we have only analyzed the β = 5.6 data). Ours is the first calculation of the static B_B parameter that includes dynamical fermions. The method used is described in references [6] and [7]. The same set of smearing functions used in the f_B^static calculations was used to smear the quarks in the external mesons. Fig. 1 shows the static B_L operator as a function of time. In Table 3 we show some preliminary results for the static B_B parameter. (Table 3: Static B_B parameter results for β = 5.6, am_sea = 0.01, using the source exp(−0.4|r|), fixing timeslice 29, and fitting times 2 to 6.) The χ²/dof for all the fits in Table 3 was close to 0.5. The errors are statistical only; the systematic errors due to the choice of fit range are larger than the statistical errors. The errors should be reduced when we include the additional smearing functions in our analysis. We used the Kentucky group's [6] organization of perturbation theory to find the required linear combination of operators to calculate B_B(m_B). However, we omitted some next-to-leading-order log(µ/m) terms. The missing terms have only recently been calculated [8] and are a small effect. Chirally extrapolating the B_B(m_B) results to κ_c = 0.16103 and converting the results to the one-loop RG-invariant B̂_B parameter, we obtain the preliminary value B̂_B = 1.31(+3/−4) (statistical error only). This result is consistent with the values obtained from quenched simulations using Wilson fermions [1][6]. LATTICE HQET To calculate the form factors of the semileptonic decays of the B meson, we want to follow a strategy similar to the one used in the f_B simulations, except that we will combine the results of heavy quark effective field theory (HQET) with the analogous propagating quark calculations, to allow the final results to be interpolated to the B meson mass. As a "warm-up exercise" we studied the two-point function of an HQET-light meson. This exercise allows us to investigate the smearing of the quarks in the B meson and to study the nonperturbative renormalization of the velocity (a peculiarity of lattice HQET), both of which are important prerequisites to the calculation of form factors. We have implemented the HQET propagator equation introduced by Mandula and Ogilvie [9]. The two-point function for an HQET-light meson at finite residual momentum p and bare velocity v was computed. When there is no excited-state contamination, the correlator takes a single-exponential form in which Z_f^v and Z_L^v are the smeared and local matrix elements.
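(Identifying single-exponential behavior, as in the effective-mass plots discussed below, rests on the standard effective-mass construction; a small sketch with synthetic correlator values:)

```python
import numpy as np

def effective_mass(C):
    """Standard lattice effective mass: m_eff(t) = ln(C(t)/C(t+1)).
    A plateau in m_eff signals single-state (no excited-state) dominance."""
    C = np.asarray(C, dtype=float)
    return np.log(C[:-1] / C[1:])

# Synthetic correlator: ground state E = 0.53 plus an excited-state piece.
t = np.arange(0, 12)
C = 1.0 * np.exp(-0.53 * t) + 0.4 * np.exp(-1.1 * t)
print(np.round(effective_mass(C), 3))  # drifts down toward 0.53 at large t
```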
The dispersion relation for an HQET-light meson is [10] where E sim is equal to the energy obtained in the Here v R is the renormalized velocity. As a pilot study we generated 20 correlators at β = 5.445 with a light quark kappa value of 0.160. So far, a single exponential source, as in the static simulations, was used. Fig. 2 shows the effective mass plots for a HQET-light meson at zero residual momentum with bare velocities only in the x direction of 0.1, 0.5, and 0.8; the static effective mass plot is also included (equivalent to zero velocity). The single exponential produces almost usable plateaus for all three velocities -the plateaus should improve when we use variational smearing. The signal to noise ratio does not decrease rapidly with increasing velocity. The renormalization of the velocity is caused by the breaking of the Lorentz symmetry by the lattice. This renormalization is required in the calculation of form factors such as the Isgur-Wise function [11]. Our preferred way to extract the renormalization is to use the dispersion relation in Eq (4) with the energy taken from simulations. This approach seems to us [10] [12] to be simpler than the one used by Mandula and Ogilvie [13]. An estimate of the velocity renormalization can be obtained from the data in Fig. 2, using naive fits to the effective masses and using Eq (4) with zero residual momentum. For the bare velocity of 0.5 (0.8) in the x direction, the renormalized velocity was approximately eighty (sixty five) percent of the bare velocity. With higher statistics and better smearing a more sophisticated analysis will be done. The perturbative calculation of Mandula and Ogilvie [13] also gave a renormalized velocity smaller than the bare velocity. This work is supported in part by the U.S. Department of Energy and the NSF We would like to thank Terry Draper for discussions on HQET and B B , and Joe Christensen for providing us with the coefficients for the B B parameter calculation. The runs are being done on the Cornell Theory Center's SP2, and on the 512 node Paragon at the Oak Ridge Center for Computational Science.
2017-09-14T13:17:05.680Z
1996-08-15T00:00:00.000
{ "year": 1996, "sha1": "6eee6025210f3aa3125d169b448d1d93b319adbe", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-lat/9608088", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "6eee6025210f3aa3125d169b448d1d93b319adbe", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
237706436
pes2o/s2orc
v3-fos-license
Protein–protein interaction based substrate control in the E. coli octanoic acid transferase, LipB Lipoic acid is an essential cofactor produced in all organisms by diverting octanoic acid derived as an intermediate of type II fatty acid biosynthesis. In bacteria, octanoic acid is transferred from the acyl carrier protein (ACP) to the lipoylated target protein by the octanoyltransferase LipB. LipB has a well-documented substrate selectivity, indicating a mechanism of octanoic acid recognition. The present study reveals the precise protein–protein interactions (PPIs) responsible for this selectivity in Escherichia coli through a combination of solution-state protein NMR titration with high-resolution docking of the experimentally examined substrates. We examine the structural changes of substrate-bound ACP and determine the precise geometry of the LipB interface. Thermodynamic effects from varying substrates were observed by NMR, and steric occlusion of docked models indicates how LipB interprets proper substrate identity via allosteric binding. This study provides a model for elucidating how substrate identity is transferred through the ACP structure to regulate activity in octanoyl transferases. Introduction De novo lipoic acid biosynthesis occurs in all organisms as a branch point from type II fatty acid biosynthesis (FAB), and lipoic acid is the only known essential product of human mitochondrial FAB. 1-5 Octanoic acid is transferred from within the FAB onto a lipoylated target protein, whereupon thiol moieties are subsequently added through the activity of iron-sulfur cluster enzymes. 1,6-8 Control of octanoate transfer from the FAB must be maintained, as the role and structure of lipoic acid is reliant on the proper chain length and oxidative state of the fatty acid from which it is derived. In E. coli, the fidelity to select a single fatty acid from within the 30-35 potential acyl substrates attached to ACP implies a selectivity mechanism 9-11 that remains elusive. 12,13 FAB is an iterative, multi-enzyme pathway in which each reaction step is catalyzed upon a fatty acid precursor that is covalently attached to the acyl carrier protein (AcpP in E. coli). 14 AcpP is a small, 77-amino-acid protein with a four-α-helical bundle structure. 15,16 The acyl substrates are carried on a 4′-phosphopantetheine cofactor attached to serine 36 of AcpP; each fatty acyl intermediate is bound to this cofactor as a thioester. In solution, AcpP sequesters acyl cargo within a hydrophobic pocket between its α-helices, only presenting the hydrolyzable thioester once it favorably interacts with a partner protein through protein-protein interactions (PPIs). 17 In the case of LipB, the sourcing of octanoyl-ACP must occur after enoyl reduction but before the substrate can re-enter the elongation cycle for another iteration (Fig. 1A). Here we have pursued an understanding of how LipB accomplishes this highly selective interaction, where LipB rapidly intercepts C8-AcpP with high fidelity. LipB transfers octanoyl groups scavenged from AcpP to an active-site cysteine 169′ (residues of LipB will hereafter be labeled with a prime), with LipB's hydrophobic pocket sheltering the lipid tail before transferring it to E2 or other lipoyl domains. This creates an octanoyl-modified enzyme, freeing LipB to scavenge more octanoic acid substrates.
LipB is required to source and attach octanoic acid from AcpP, as neither free octanoic acid nor octanoyl-CoA are substrates, requiring LipB to carefully select substrates attached to the AcpP or risk inactivating downstream enzymes. 19 Recent evidence has suggested that the AcpP·LipB interaction can exert allosteric control over the substrates it interacts with prior to catalysis through control of the "chain flipping" event. 20 Chain flipping is the term applied to the exit of the acyl chain from the carrier protein pocket. 21 This creates a "control" step which must occur prior to any catalysis. Prior studies have reported decanoic acid crystallized within the LipB active site, suggesting that it is possible for the C10-acyl chain to fit into the LipB pocket. 22 Further, we recently demonstrated that the acyl chain of dodecanoyl-AcpP does not chain flip when attached to the E. coli AcpP, but mutation of the LipB interface residue R145′ can induce loss of chain length selectivity for chain flipping, 20 indicating both a substrate selectivity by wild-type LipB and the importance of the proper protein-protein interface for this selectivity.

To resolve this PPI-based control mechanism at atomic detail, we chose to perform NMR studies of AcpP with C6, C8, and C10 acyl chains titrated with LipB to observe interaction changes based on chain length. These chain lengths represent the known LipB substrate and the two most similar chain lengths seen in the cell. These data were then used to guide high-resolution in silico docking to identify the surface features responsible for the experimentally identified binding differences. We have shown that the implementation of NMR titration experiments to guide docking algorithms can accurately and reproducibly deduce PPI poses in ACP-dependent pathways. 23 It has been suggested that unique features imparted by the identity of the acyl chain can likely serve as a source for binding discrimination by enzymes. [24][25][26] Furthermore, control of reactivity based on substrate has been seen by crosslinking 27 and NMR, 20 but developing a structural model requires atomic level detail. Here we identify the structural features of LipB that allow interaction with C8-AcpP while inducing structural hindrance to C6- and C10-AcpP binding, further elucidating the mechanism and role of PPIs in carrier protein-dependent enzymes.

NMR titration to examine residue-by-residue interaction of acyl-AcpP with LipB

It has been established that solution-state 15N-1H HSQC NMR spectroscopy is well suited to probing the transient, dynamic interactions between AcpP and partner proteins. 28,29 Given the known specificity of LipB, we sought to elucidate how substrate specificity is conferred by PPI with AcpP carrying different acyl cargo. In order to detect small functional differences in the interactions of LipB, AcpPs of different chain lengths were prepared as C6-, C8-, and C10-linked 4′-pantetheinamide probes 30 (Fig. S1-S3 and Tables S1-S3, ESI†). These AcpP species were titrated with increasing concentrations of LipB, from zero to beyond saturation: 2 molar equivalents in the case of C6-AcpP, 1.5 equivalents in the case of C8-AcpP, and 2 molar equivalents in the case of C10-AcpP. The effect of increasing LipB concentration on each AcpP species was examined. The first observation made when comparing perturbations against one another was the difference in chemical shift perturbation (CSP) magnitude between the chain lengths (Fig. 2 and Fig. S4, ESI†).
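For readers reproducing this kind of analysis, per-residue CSP magnitudes are typically computed as a weighted combination of the 1H and 15N peak movements. A minimal sketch, assuming the commonly used 0.14 nitrogen weighting (the paper does not state which convention was applied):

```python
import numpy as np

def csp(delta_h, delta_n, alpha=0.14):
    """Combined 1H/15N chemical shift perturbation in ppm.

    delta_h, delta_n: peak movements in the 1H and 15N dimensions.
    alpha down-weights 15N for its wider ppm range; 0.14 is a common
    convention, assumed here rather than taken from the paper.
    """
    return np.sqrt(delta_h**2 + (alpha * delta_n)**2)

# A residue whose amide peak moves 0.02 ppm in 1H and 0.15 ppm in 15N
print(round(csp(0.02, 0.15), 4))  # ~0.029 ppm combined perturbation
```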
It was immediately clear that the degree of perturbation was greatest in C8-AcpP, despite all experiments being titrated to saturation. Further, the CSPs of C8-AcpP revealed unique interactions. Whereas most titrations have been noted to have little effect on helix I of the AcpP, LipB induced strong CSPs across the early residues, lasting through residue 18. After this, there was a drop-off in CSPs through the end of loop 1 to residue 30. Small perturbations rose above background at residues 34 and 35, and CSPs continued consistently through helix 2. Next, there were consistent perturbations through loop 2, helix 3, and helix 4. The most distinctive region of CSP was the set of very strong migrations occurring in helix I (Fig. 2A). These perturbations appeared consistent in these regions among C6-, C8- and C10-AcpP, with varying magnitudes.

Binding thermodynamics and kinetics demonstrate specificity experimentally

Though CSPs are not a quantitative measure of binding, C8-AcpP binding LipB displayed considerably stronger perturbations than C6- or C10-AcpP. In order to deduce quantitative binding parameters of AcpP tethering the three chain lengths, we examined the NMR titration data by applying TITAN line shape analysis. 31 Here, C8-AcpP exhibited a Kd of 47.2 ± 5.1 μM and a low off rate of 633 ± 98 s−1. The C6-AcpP bound less strongly, with a Kd of 189.9 ± 15.21 μM and an off rate of 4237 ± 2544 s−1. C10-AcpP bound slightly better than the C6-AcpP, with a Kd of 134.8 ± 34.0 μM and an off rate of 1521 ± 225 s−1 (Fig. S5-S7, ESI†). This difference in binding constant was supported by the comparative magnitude of the respective CSP data. TITAN analysis quantitatively supported the control that LipB maintains over interactions with AcpPs tethering different acyl chains. The difference in binding strength suggested differences in binding surfaces, which we next explored by high-resolution docking.

Docking analysis to identify chain-length-specific interactions

LipB was first modeled by homology to the 2QHS Thermus thermophilus lipoyltransferase with ICM Homology 32,33 (Fig. S8, ESI†). The Thermus thermophilus structure was chosen because it shared the highest sequence identity of available crystal structures. AcpP structures were derived from molecular dynamics (MD) simulations and represented the highest population state seen in the simulation 20 (Fig. S9, ESI†). To perform in silico docking experiments, the ICM Fast Fourier Transform protocol was used to generate high quality docking poses and scores for the AcpP·LipB interface and to sample AcpP conformations across the entire LipB protein surface. The resulting docking poses were organized based on RMSD from the previously published model of the C8-AcpP·LipB docked complex 20 (Fig. S10, ESI†). The methodology used was the same as we previously developed to model six ACP-partner protein interfaces from E. coli FAB and benchmark them against established crystal crosslinked structures. 23 C8-AcpP adopted a low energy structure at 4.03 Å RMSD, with an energy of −50.5 kcal mol−1 (Fig. 3A) based on the ICM energy function. There was a second low energy docked conformation at 15.8 Å RMSD, which was ruled out as inactive, with serine 36 positioned 17 Å from the LipB active site and the AcpP rotated away from the LipB pocket (Fig. 3B and C). The C6-AcpP had significantly higher energy poses (Fig. 3A) at low RMSD, with the only similarly stable state at 20 Å RMSD.
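The reported dissociation constants can be turned into expected saturation levels with the exact 1:1 binding isotherm. A minimal sketch, assuming a hypothetical 100 μM AcpP concentration (not stated above) and treating the TITAN Kd values as micromolar, as rendered here:

```python
import numpy as np

def fraction_bound(acp_tot, lipb_tot, kd):
    """Exact 1:1 binding: fraction of AcpP in complex (all values in uM)."""
    s = acp_tot + lipb_tot + kd
    return (s - np.sqrt(s**2 - 4.0 * acp_tot * lipb_tot)) / (2.0 * acp_tot)

# Hypothetical 100 uM AcpP titrated with 1.5 equivalents of LipB,
# using the reported Kd values; C8-AcpP comes out the most saturated.
for chain, kd in [("C8", 47.2), ("C6", 189.9), ("C10", 134.8)]:
    print(chain, round(fraction_bound(100.0, 150.0, kd), 2))
```

As a consistency check under the same micromolar reading, dividing the reported off rates by the Kd values gives association rates on the order of 10^7 M−1 s−1 for all three species, a plausible near-diffusion-limited regime for a transient PPI.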
Like C6-AcpP, C10-AcpP had its most stable poses at 18.4 Å and 24.5 Å RMSD, with no similarly stable state near the C8-AcpP poses. The high RMSD states of the C8-AcpP·LipB interaction are clearly not oriented for substrate delivery (Fig. S9, ESI†), with the 4′-phosphopantetheine positioned away from the active site. Therefore, lower RMSD states were examined to explore how the difference in AcpP structure translated to different energetics for binding.

C8-AcpP bound tightly onto the LipB surface, with six arginine or lysine residues available for salt bridges with discrete AcpP residues. The most stable state of C8-AcpP had each of these residues coordinating closely, within 5 Å. The C8-AcpP·LipB interactions began at helix II, with E41 near R99′, E47 with R142′, and E49 with R93′. D51 and R145′ remained 4.5 Å apart, but side chain rotation could bring the residues within range of a salt bridge. E53 with R144′ and E60 with K54′ completed the total salt bridges. The C6-AcpP model displayed a significantly poorer binding surface with LipB (Fig. 3A and F), matching the results of the thermodynamic and CSP data. Specifically, E41, E48, and D51 lost the proper orientation for interaction (Fig. S3, ESI†). This was due to the structural effects of chain length shortening upon AcpP, with the structure most perturbed on helix II, loop II, and the orientation of helix III. There were two residues within range for a salt bridge in the C6-AcpP·LipB complex: E47 with the pair R142′ and R144′, and E57 with K54′. It is interesting to note how distinctly the binding surface of LipB could be affected by the small structural changes between C6- and C8-AcpP. The C10-AcpP model similarly bound more poorly to the LipB surface, displaying an ability to bind only four residues at the LipB surface. E41 and E47 appeared to be out of any vicinity to interact with R99′ or R142′, but E48 and D51 were within interacting distance of R93′. E53 lies within 3.6 Å of R142′, and E60 is 3.8 Å from K54′ (Fig. S4, ESI†).

The disparities in possible interactions were further highlighted by aligning the C6- and C10-AcpP structures onto the C8 docked pose to reveal their structural differences. The most immediate difference between them was the orientation of helix III. In the C8-AcpP, the helix was oriented outwards, creating space around the bottom of AcpP helix III for residues 50′-55′ of LipB (Fig. 3F). For example, residue D56 lay near Q51′ on LipB in the C8-AcpP, but the more closed structure of C6- and C10-AcpP placed D56 in direct steric clash with LipB. This reflected a structural filtering mechanism, where the surface appeared to be arranged to interact with the structural features of C8-AcpP. Other orientations of helix III disallowed proper binding for chain flipping from helix II. These regions of selectivity overlaid with the differences in structure seen in the NMR data. Though the C6- and C10-AcpPs could likely relax their structure to better bind the LipB surface, this initial instability and necessary relaxation could slow the association, explaining the poor binding of non-substrate AcpPs. The effects of this selectivity were evident in the respective CSPs, thermodynamics, and conformational landscape of the docking calculations.

Comparing CSPs to the docked model

To combine both data sets, the NMR titration of C8-AcpP was compared to the docked model (Fig. S11, ESI†).
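The salt-bridge census above amounts to a distance scan between acidic AcpP side chains and basic LipB side chains in the docked pose. A minimal sketch with hypothetical charge-center coordinates standing in for the model (real work would read coordinates from the structure, e.g. with Biopython; the pairings and 5 Å cutoff follow the text):

```python
import numpy as np

# Hypothetical coordinates (angstrom) mimicking the C8-AcpP.LipB pose;
# chosen so the paired residues land at roughly the distances named above.
acidic_acpp = {"E41": (0.0, 0.0, 0.0), "E47": (20.0, 0.0, 0.0), "D51": (40.0, 0.0, 0.0)}
basic_lipb = {"R99'": (3.0, 0.0, 0.0), "R142'": (23.0, 1.0, 0.0), "R145'": (44.0, 1.5, 1.0)}

CUTOFF = 5.0  # distance treated as salt-bridge range in the text

for acp_res, pa in acidic_acpp.items():
    for lipb_res, pb in basic_lipb.items():
        d = float(np.linalg.norm(np.subtract(pa, pb)))
        if d <= CUTOFF:
            print(f"{acp_res} -- {lipb_res}: {d:.1f} A")
# Prints E41--R99' (3.0 A), E47--R142' (3.2 A) and D51--R145' (~4.4 A).
```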
Beginning at helix I, there were perturbations through most of the helix, though they were most prominent at its end. These likely correspond to the interactions across the "right" side of the channel which binds helix II. There were several residues at the base of the helix which would likely experience movement while the ACP adopts a bound state, but the residues at the end were close enough to be in direct contact. This direct contact would lead to a higher degree of perturbation. The CSPs almost fully dropped off until helix II, matching the pose, where there is no surface of the LipB in proximity. Helix II showed large perturbations throughout until its end; this agreed well with the model, where most of helix II sits deep in the LipB channel. Especially significant were the uncharged residues such as T39, V43, and A45, which usually see small CSPs but were buried against hydrophobic regions of LipB. The perturbations ended at the bottom of the helix, to be seen again at E53, which bound R144′. Further down the ACP, on helix III, D56 showed little perturbation, though it usually participates in interactions. However, this matches the model, where the acidic side chain is either binding a backbone or associating with Q68. At the end of helix III, strong perturbations continued with a salt bridge at E60. Helix IV saw perturbations which were likely linked to the movement of the helix upon chain flipping. Seen especially in V65, A64, Q66, and Y71, the perturbed residues at the top of helix IV may also have been due to movement of the dynamic helix III upon binding, especially when all the residues on the loop before and after helix III saw some small perturbation.

The C6- and C10-AcpP perturbations were very different in their distribution. There were a limited number of electrostatic CSPs at D35, E49, D56, and E57 in the C10-AcpP titration. There were more perturbations of hydrophobic residues at I10, V17, S27, P28, T39, L42, V43, T63, T64, and V65. These C10-AcpP perturbations were consistent with C6-AcpP, though C6-AcpP's CSPs had significantly smaller magnitudes. The largest effects were seen on helices II and III, seeming to indicate interactions at the interface. The only region which saw interactions greater than C8 was the loop preceding helix II and the very top of helix II. The difference in hydrogen bonding matched the docked model for C10, where the helix III angle better aligned D56 and E57 to form interactions. This contrasts with C8-AcpP, where D56 was not aligned for a strong interaction. This lower number of salt bridges and poorer interaction suggested a model in which there are sets of interactions which must be sufficiently strong to induce chain flipping. Chain flipping is a large dynamic event, and we propose that the deeply buried interface and multiple salt bridges are essential to drive this, with sufficient interactions shifting residues such as Y71 and I54 to close the acyl pocket and initiate flipping. The stronger surface interactions also explain the higher degree of measured CSPs. The weak interactions by C6- and C10-AcpP lowered the degree of structural perturbation upon binding and decreased the time spent in the "bound" state in solution, resulting in smaller observed shifts. Surprisingly, the C8-pantetheinamide used in this study saw larger chemical shifts than the C8-pantetheine probe used in a previous study. 20
We hypothesize that this was due to a disruption at the interface of the mutant LipB C169A utilized in that study. With the intricate binding mechanism identified, a mutation near the interaction site could significantly affect the interface.

Discussion

A significant kinetic advantage for any protein interacting with ACP, of which there are currently 27 known enzymes and regulatory proteins in E. coli, [9][10][11] is the ability to discern acyl identity without the requirement of chain flipping. For LipB, the ability to differentiate acyl chains based on the initial PPIs significantly increases the efficiency of this selection process and provides thermodynamic control to maintain the fidelity of lipoic acid biosynthesis. We have recently demonstrated how discrete salt bridge interactions at the protein interface can allow differentiation between C8- and C12-AcpP for chain flipping with LipB. 20 We have now determined the comparative binding constants and CSPs of LipB with acyl-AcpPs of both shorter and longer chain lengths, compared with those of the natural C8 substrate, indicating a clear ability of LipB to select for interaction with C8-AcpP. This is accomplished by possessing a surface that can complement the specific shape of octanoyl-sequestered (C8-)AcpP, while simultaneously deterring interactions with C6- and C10-AcpP. This selectivity is primarily reliant on helix III perturbations in response to the sequestered acyl chain lengths, previously identified in numerous experimental and theoretical studies. Structurally, this model leverages unique conformational features of AcpP induced by different chain lengths, a remarkably useful evolutionary feature when selectivity for a single chain length is required.

Understanding that the ACP·LipB reaction requires chain flipping controlled by PPIs unlocks the potential to modify these essential and sensitive interactions through inhibition or engineering. The high sequence identity of LipB shared between bacteria implies that the observations made here in E. coli will likely extend to other species. Furthermore, targeting the protein interface of AcpP·LipB in a pathogen could avoid potential side effects from activity against the human mitochondrial LipB (Fig. S8, ESI†). These data also suggest an important factor to consider when engineering FAB and related acetate pathway proteins. Where poor interface complementarity could lead to a loss of activity, understanding and optimizing these interactions may prove essential. While these transient PPIs can be challenging to observe, we have developed an approach that merges new data with prior observations and formed a model that explains both the specificity and efficiency of lipoic acid biosynthesis.

Conflicts of interest

The authors declare no conflict of interest.
Normothermic machine perfusion versus static cold storage in donation after circulatory death kidney transplantation: a randomized controlled trial

Kidney transplantation is the optimal treatment for end-stage renal disease, but it is still severely limited by a lack of suitable organ donors. Kidneys from donation after circulatory death (DCD) donors have been used to increase transplant rates, but these organs are susceptible to cold ischemic injury in the storage period before transplantation, the clinical consequence of which is high rates of delayed graft function (DGF). Normothermic machine perfusion (NMP) is an emerging technique that circulates a warmed, oxygenated red-cell-based perfusate through the kidney to maintain near-physiological conditions. We conducted a randomized controlled trial to compare the outcome of DCD kidney transplants after conventional static cold storage (SCS) alone or SCS plus 1-h NMP. A total of 338 kidneys were randomly allocated to SCS (n = 168) or NMP (n = 170), and 277 kidneys were included in the final intention-to-treat analysis. The primary endpoint was DGF, defined as the requirement for dialysis in the first 7 d after transplant. The rate of DGF was 82 of 135 (60.7%) in NMP kidneys versus 83 of 142 (58.5%) in SCS kidneys (adjusted odds ratio (95% confidence interval) 1.13 (0.69–1.84); P = 0.624). NMP was not associated with any increase in transplant thrombosis, infectious complications or any other adverse events. A 1-h period of NMP at the end of SCS did not reduce the rate of DGF in DCD kidneys. NMP was demonstrated to be feasible, safe and suitable for clinical application. Trial registration number: ISRCTN15821205.

The standard method of kidney preservation is SCS. This involves flushing the kidney to remove donor blood, cooling with a preservation solution at 4 °C and storage on ice while arrangements are made for transplantation. SCS works by reducing the metabolic rate to around 5% of normal, but because this occurs in an anoxic environment, anaerobic metabolism ensues 5 . This leads to the depletion of cellular energy in the form of adenosine triphosphate (ATP) and the accumulation of succinate, which drives the production of the reactive oxygen species that underlie ischemia-reperfusion injury 6 . SCS is simple, effective and inexpensive and is still the standard of care in the United Kingdom (UK). However, prolonged cold ischemic injury can lead to acute tubular necrosis, the clinical consequence of which is DGF, requiring a period of post-transplant dialysis support. DCD takes place after cessation of cardiac activity in the donor, leading to an inevitable period of warm ischemia, which can cause acute kidney injury. In addition, DCD kidneys are more susceptible to cold storage injury 7 . These factors lead to higher rates of DCD grafts never functioning (primary non-function (PNF)) and higher rates of DGF. There is evidence that DGF increases the risk of acute rejection, prolongs hospital stay and could adversely affect long-term allograft survival rates 8,9 .

NMP is an emerging technique that uses cardiopulmonary bypass technology with extracorporeal membrane oxygenation to perfuse kidneys with a warmed and oxygenated red-cell-based plasma-free solution 10 . This maintains an organ in a near-physiological state, restoring function ex vivo and, therefore, allowing functional testing. Experimental studies using animal models suggest that kidney NMP has a conditioning effect with maintenance of more stable acid-base homeostasis and a reduction in renal tubular injury, when compared to SCS 11 . NMP also enables a degree of metabolic resuscitation by replenishing ATP levels that have been depleted because of a combination of warm and cold ischemia 12 . The first randomized controlled trial of NMP in liver transplantation demonstrated that, compared to conventional SCS, NMP reduced early allograft dysfunction. There was no effect on bile duct complications, graft survival and patient survival 13 . As a result, NMP is finding increasing clinical application in liver transplantation. The technology is rapidly developing as a method of assessment and treatment to increase organ utilization and has proved to be more cost-effective than SCS 14,15 . There is currently a paucity of evidence for NMP in kidney transplantation. Nonetheless, with high rates of kidney discard 16 , there is clear potential for the introduction of NMP technology. Our non-randomized pilot study in kidney transplantation suggested that a 1-h period of NMP, delivered at the end of the cold storage period and immediately before transplantation, reduced the rate of DGF in extended criteria donor kidneys 17 . Here we report the first multicenter, randomized controlled trial comparing NMP with conventional SCS in DCD kidney transplantation. The primary endpoint was DGF, defined as the requirement for dialysis in the first 7 d after transplant.

Patient population

The characteristics of the donors and recipients are detailed in Table 1. All kidneys were from Maastricht category III or IV donors. From 13 February 2016 to 4 September 2020, there were 635 eligible DCD donor kidneys across the four UK centers (Fig. 1). In total, 338 patients were consented and randomized into the trial: 168 kidneys were allocated to SCS and 170 to NMP. In the SCS arm, one kidney received NMP due to the surgeon's preference. In the NMP arm, 25 kidneys did not receive NMP; in 14 cases, this was due to the inability to secure a cannula into or around the renal artery, and in eight cases, this was due to logistics with access to theater or time constraints. Recruitment in all centers ceased on 23 March 2020 due to the Coronavirus Disease 2019 (COVID-19) pandemic. After reviewing the data, the Data Management Committee (DMC) made the recommendation that completing recruitment would not alter the primary outcome of the trial. In conjunction with the Trial Steering Committee (TSC), it was decided to officially end the trial on 4 September 2020. No additional patients were recruited between 23 March 2020 and 4 September 2020.

(Table 1 notes: Two eligibility periods are presented due to the COVID-19 pandemic: 4 September 2020 is the date the trial officially closed to recruitment, and 23 March 2020 was the start of the UK national lockdown. No participants were recruited between 23 March 2020 and 4 September 2020. Two participants did not have a cold ischemic time (CIT) reported, and two participants received dual transplants and, hence, do not have a left/right kidney variable populated; these four cases were excluded from all risk-adjusted modeling. **Two additional exclusion criteria (donors who underwent normothermic regional perfusion or one of a pair already randomized as a single kidney in the trial) were introduced on 13 October 2017, and, retrospectively, two participants fulfilled these exclusion criteria (both were randomized to NMP).)
These patients were included in the MITT analysis and excluded from the per-protocol analysis. There were no withdrawals from the trial. Two participants were randomized in error, as the single kidneys were transplanted as dual kidney transplants. Fifty-five participants experienced a protocol deviation: not receiving the randomized treatment was the most common reason (previously described); 21 kidneys that were one of a pair in the trial were transplanted in the incorrect order to which they were randomized (due to logistics); and 13 kidneys that were randomized to NMP did not receive 60 min of NMP. All of the participants who experienced a protocol deviation were included in the modified intention-to-treat (MITT) analysis but were excluded from the per-protocol analysis. Four cases had to be excluded from all risk-adjusted modeling, as one of the risk adjustment factors was missing (two participants did not have a cold ischemic time reported, and two participants who received dual transplants did not have a left/right kidney variable populated) (Fig. 1). Details for missing data and protocol deviations are included in Supplementary Tables 1 and 2. Two interim analyses were carried out, and the DMC recommended continuation after both (Supplementary Information).

Primary outcome measure

In the MITT analysis, excluding PNF (SCS n = 3 and NMP n = 6), DGF occurred in 83 of 142 (58.5%) patients in the SCS arm and 82 of 135 (60.7%) patients in the NMP arm. No difference was observed between the arms (adjusted odds ratio (OR) (95% confidence interval (CI)) 1.13 (0.69-1.84), P = 0.62; Table 2). Similar rates were found in the per-protocol analysis (Table 2).

(Table 2 notes: Functional DGF was defined as a less than 10% reduction in serum creatinine levels for three consecutive days in the first week after transplantation. Creatinine reduction ratio day 2 and day 5 analyses excluded patients with PNF and DGF. Nine patients were excluded from the MITT cohort and six from the per-protocol cohort due to PNF. Mean ± s.d. or median (IQR) are unadjusted Kaplan-Meier estimates. HRs were calculated using a Cox proportional hazards regression model. The OR (95% CI) for PNF was calculated using a logistic regression model. ORs were calculated using a logistic regression model adjusted for cold ischemic time, donor age, left/right kidney and transplant center. Length of hospital stay was analyzed using a Cox proportional hazards regression model. A negative binomial model was used to calculate the rate ratio (95% CI) of biopsy-proven acute rejection rates. P values were determined from the likelihood ratio test comparing models with and without the treatment term (two-sided).)

Secondary outcome measures

Nine participants had PNF (three in the SCS arm and six in the NMP arm) and were excluded from subsequent analyses. In the SCS arm, two patients had a vascular thrombosis, and one had cortical necrosis. In the NMP arm, one patient had a vascular thrombosis; two patients had acute rejection; and, in three cases, the reason was unknown. No significant difference was observed in the number of patients who experienced PNF between the treatment arms for both the MITT analysis (adjusted OR (95% CI) 2.34 (0.56-9.86); Table 2) and the per-protocol analysis (adjusted OR (95% CI) 1.95 (0.36-10.56); Table 2).
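To make the risk adjustment concrete: the adjusted ORs quoted above come from a logistic regression of the outcome on treatment plus the pre-specified covariates. A minimal sketch on simulated stand-in data (the trial itself used SAS; the variable names and the simulation are illustrative, not trial data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 277  # roughly the size of the analyzed MITT cohort

# Simulated stand-in for the per-patient data; the adjustment set
# (cold ischemic time, donor age, kidney side, center) matches the trial's.
df = pd.DataFrame({
    "nmp": rng.integers(0, 2, n),
    "cit_h": rng.normal(13, 3, n),          # cold ischemic time, hours
    "donor_age": rng.normal(55, 10, n),
    "left_kidney": rng.integers(0, 2, n),
    "center": rng.choice(list("ABCD"), n),
})
# Simulate DGF at ~60% with no treatment effect, as the trial observed
lp = 0.4 + 0.08 * (df["cit_h"] - 13) + 0.03 * (df["donor_age"] - 55)
df["dgf"] = (rng.random(n) < 1 / (1 + np.exp(-lp))).astype(int)

fit = smf.logit("dgf ~ nmp + cit_h + donor_age + left_kidney + C(center)",
                data=df).fit(disp=0)
or_nmp = float(np.exp(fit.params["nmp"]))
lo, up = np.exp(fit.conf_int().loc["nmp"])
print(f"adjusted OR, NMP vs SCS: {or_nmp:.2f} (95% CI {lo:.2f}-{up:.2f})")
```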
Patients who experienced DGF were excluded from the outcomes and analyses for functional DGF, creatinine reduction ratio on post-transplant day 2 (CRR2) and creatinine reduction ratio on post-transplant day 5 (CRR5). The median duration of DGF was 6 (2-9) days in the SCS arm and 4 (1-9) days in the NMP arm. No significant difference was observed between the groups in the MITT (adjusted hazard ratio (HR) (95% CI) 0.97 (0.70-1.34); Table 2) or the per-protocol analyses (adjusted HR (95% CI) 0.98 (0.68-1.42); Table 2). The CRR on day 2 was significantly higher in the NMP arm in both the MITT and per-protocol analyses (MITT: SCS 1.58 ± 16.5% versus NMP 7.97 ± 16.5%, adjusted mean difference (95% CI) 6.59% (0.5-12.7%); per protocol: SCS 1.0 ± 16.3% versus NMP 8.9 ± 13.8%, adjusted mean difference (95% CI) 8.15% (1.6-14.7%); Table 2). By day 5, no significant difference was observed in the CRR or in levels of functional DGF (Table 2). The length of hospital stay was also similar between groups in the MITT and per-protocol analyses (adjusted HR (95% CI) 0.91 (0.71-1.15) and 0.90 (0.69-1.18), respectively; Table 2). The total number of patients who had biopsy-proven acute rejection was 19 in the SCS group and 24 in the NMP group. The unadjusted mean number of biopsy-proven acute rejection episodes per patient was numerically higher in the NMP arm compared to the SCS arm (0.3 per participant versus 0.2, respectively) but was not statistically different (adjusted rate ratio (95% CI) 1.57 (0.83-2.95) for the MITT population; Table 2 and Supplementary Table 3). Serum creatinine and estimated glomerular filtration rate (eGFR) did not differ significantly between groups at 1, 3, 6 or 12 months after transplant in the MITT or per-protocol analyses (P values from the time by treatment interaction term: serum creatinine P = 0.19 and P = 0.096, respectively, and eGFR P = 0.42 and P = 0.15, respectively; Supplementary Table 4 and Fig. 2a,b). No differences were observed in tacrolimus trough blood levels between the SCS and NMP groups at 1, 3, 6 and 12 months after transplant (Supplementary Table 5).

Safety outcomes

For safety outcomes, the total number of incidences of biopsy-proven acute rejection, renal artery or venous thrombosis, complications of the renal transplant biopsy and the number of hospital admissions for any recognized complication of the renal transplant, including renal graft dysfunction, infection, surgery related or due to immunosuppression, were the same in both arms of the study (unadjusted mean SCS 0.7 ± 1.2 versus NMP 0.7 ± 1.2 (rate ratio (95% CI) 1.06 (0.73-1.52)) for the MITT analysis population; Table 3).

Exploratory assessment of kidney quality during NMP

The median renal blood flow was 180 ml/min/100 g (interquartile range (IQR) 120-230), and the median arterial pressure was 76 mmHg (IQR 74-80). The median amount of urine produced was 95 ml (IQR 50-180). The quality assessment score was applied to each of the analyzed kidneys. Forty-six percent had an assessment score of 2 or more. When adjusting for cold ischemic time, donor age, left/right kidney and transplant center, no significant difference in DGF was observed between those kidneys that scored 1 versus 2 or more on the assessment score (adjusted OR (95% CI) 1.02 (0.47-2.24); Supplementary Table 6).
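The creatinine-based outcomes above follow simple formulas, spelled out later in Methods: CRR is the percentage fall in serum creatinine from day 1, and eGFR used the 4-variable MDRD equation. A minimal sketch, assuming the standard IDMS-traceable MDRD constants and US creatinine units (the paper does not list these details):

```python
def crr(cr_day1, cr_dayn):
    """Creatinine reduction ratio (%) from day 1 to day n:
    (day1 - dayN) / day1 x 100, per the Methods definition."""
    return (cr_day1 - cr_dayn) / cr_day1 * 100.0

def egfr_mdrd4(scr_mg_dl, age, female=False, black=False):
    """4-variable MDRD eGFR in ml/min/1.73 m^2.

    Assumes the IDMS-traceable constant 175 and serum creatinine in
    mg/dl; the trial reports only that the MDRD 4-variable equation
    was used."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

print(crr(600.0, 540.0))           # 10.0 -> a 10% fall from day 1 to day 2
print(round(egfr_mdrd4(2.0, 55)))  # ~35 ml/min/1.73 m^2
```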
Post hoc subgroup analysis of DGF

In a sub-analysis including the nine PNF cases in the MITT cohort and six in the per-protocol cohort, no significant difference was observed in rates of DGF between the study arms (P = 0.510 and P = 0.722, respectively; Table 2 and Supplementary Table 7). In both subgroups (patients on dialysis and pre-dialysis patients), a smaller number of DGF events occurred within the NMP arm; however, this did not reach statistical significance, and no interaction was found between each group and dialysis (P = 0.90 for the interaction term; Supplementary Table 8). In the MITT analysis, if patients receiving a single dialysis session in the first week were excluded, the rates of DGF in both treatment arms fell below 50%, with no significant difference between the groups (Supplementary Table 9).

Post hoc analysis of CRR2

In the MITT and per-protocol analyses, excluding patients who were not on dialysis before transplant, the CRR2 was significantly higher in the NMP arm (P = 0.0303 and P = 0.0067, respectively; Supplementary Table 10).

Post hoc analysis of missing eGFR values

A sub-analysis imputing all missing eGFR values with the value 8.5 ml/min/1.73 m² did not result in any significant differences between the groups in the MITT analysis (Supplementary Table 11).

Post hoc analysis of the effect of a second period of cold ischemia on DGF rates after NMP

In the MITT analysis, no statistically significant difference was observed in the duration of the second cold ischemic time in NMP kidneys with initial graft function or DGF (median (range) 113.5 (1-514) minutes in NMP kidneys with initial graft function versus 134.6 (8-696) minutes in NMP kidneys with DGF; P = 0.4451; Supplementary Table 12). Second cold ischemic time had no effect on the rate of DGF (Supplementary Table 12).

Discussion

We compared a 1-h period of NMP with conventional SCS for kidney transplantation from DCD donors. The NMP protocol had no effect on the primary endpoint, which was the incidence of DGF, defined as the requirement for dialysis in the first 7 d after transplant. No statistically significant differences were observed in the rates of acute rejection, renal function at 12 months, patient survival or graft survival. Our study also demonstrated that NMP is a safe procedure, as no significant differences were observed in complication rates when compared to SCS, and no adverse events were directly attributable to NMP.

After the introduction of ex vivo NMP for donor kidneys into clinical practice 10 , the first non-randomized pilot study suggested that a 1-h period of NMP could substantially reduce the rate of DGF 17 . Other non-randomized studies of NMP have recently demonstrated DGF rates in the region of 30%, which was lower than expected for kidneys from this donor source [18][19][20] . In our trial, the rates of DGF were high, at around 60%, in both the NMP and SCS groups. These rates are not unusual and are similar to data previously reported in UK and European randomized trials of hypothermic machine perfusion (HMP) in DCD kidney transplantation [21][22][23] . Also, two recent large studies of controlled DCD kidney transplants in the Netherlands (n = 406 patients) 24 and the UK (n = 225 patients) 25 reported DGF rates of 67% and 65%, respectively, providing up-to-date contemporary data that are consistent with our trial outcomes. Although lower rates of DGF in DCD kidney transplants have been reported in the UK, these were derived from registry data, which are incomplete and of variable quality compared to data collected prospectively in the setting of a clinical trial 2 .
There are several possible causes of the high rates of DGF described here and in some of the previous literature. Physiological parameters during the agonal period, defined as the time between withdrawal of treatment and circulatory arrest, are likely to be critical. In the UK, withdrawal of treatment comprises disconnection from the ventilator and stopping inotropes but, in many centers, does not include removal of the endotracheal tube. Continuing support of the airway in this way can prolong the agonal period. Even more importantly, severe and prolonged hypotension during the agonal period and the consequent inadequate organ perfusion causes additional warm ischemic injury and has been shown to be associated with higher DGF rates in DCD kidneys 24 . We were not able to study the effects of hypotension after the withdrawal of life-sustaining treatment in our trial because the blood pressure during the agonal phase is not routinely recorded in the UK. The high rates of DGF reported here are also related to defining DGF as the requirement for dialysis in the first 7 d after transplant. Although this is the simplest and most widely used definition of DGF 26 , it does not consider severity. In particular, this definition includes patients who undergo a single early postoperative dialysis for hyperkalemia or fluid overload, irrespective of their initial graft function. This overestimates the rate of clinically relevant DGF because there is evidence that a single postoperative dialysis has no effect on long-term transplant outcomes 27 . If patients receiving a single postoperative dialysis were excluded from the analysis, the rates of DGF in both treatment arms were less than 50%, but there were still no significant differences between the SCS and NMP groups. The use of normothermic regional perfusion (NRP), which restores oxygenated blood flow to the abdominal organs in situ after cardiac arrest, has yielded even lower DGF rates of 23-30% 28,29 . Although these results are notable, the effects of NRP have yet to be investigated in a randomized controlled trial, and the series reported may have an element of selection and observation bias. In our trial, NRP was a specific exclusion criterion to remove it as a potentially confounding variable. Conventional SCS was used as the control arm in this study because it is the standard of care in the UK. Although HMP has been shown to reduce the rate of DGF in DCD kidneys when compared to SCS 21,30,31 , it has not been widely adopted in many countries, including the UK. Comparison of NMP and HMP is an important future direction, and there is currently a trial ongoing to address this (ClinicalTrials.gov identifier: NCT04882254) 32 . There are several possible explanations for the lack of effect of NMP on DGF rates. A 1-h period of NMP may not be long enough to reverse the effects of cold ischemia on renal tubular cells. The rationale for our trial design was based on pilot clinical data suggesting that 1 h of NMP could significantly reduce the rate of DGF in DCD kidneys 17 . Further justification for a short period of NMP was based on experimental work using porcine kidneys, which demonstrated that brief periods of NMP restored depleted cellular ATP levels 12 . This was reinforced by a study of human kidneys declined for transplantation that showed an increase in the expression of glycolytic pathway and oxidative phosphorylation genes, suggesting that NMP has the potential to increase cellular capacity to generate ATP and restore homeostasis 33 . 
There is limited evidence to define the optimal duration of NMP. One experimental study suggested that 1 h of NMP is not as beneficial as more prolonged periods of perfusion 34 . The feasibility of a 24-h period of NMP has been demonstrated using a urine recirculation protocol in discarded human kidneys 35 , but there are currently no published clinical studies of prolonged NMP. The original intention in our protocol was to deliver a short period of NMP immediately before the kidney was transplanted, but this was not always achieved because of the logistics of preparing patients for transplant surgery. After 1 h of NMP, the kidney was re-flushed with cold preservation solution and placed back on ice for a variable second period of cold storage until removal from ice for implantation. The mean duration of this second cold time was just over 2 h, and, although this may not influence outcome, longer additional cold storage would eventually counteract any beneficial effects of NMP. Post hoc analysis demonstrated that there was no difference in the duration of the second cold ischemic period in NMP kidneys with or without DGF. In liver transplantation, donor livers have been maintained by NMP for the full duration of the preservation period 13 . However, this presents an appreciable logistical challenge, and, currently, the most common practice is to transport livers in cold storage and then perform NMP for a few hours before transplantation 36 . Future studies in kidney transplantation will need to address the place of more prolonged periods of NMP using protocols similar to the current practice in liver transplantation. The induction of inflammation during NMP is another potential reason for the lack of its effect on the rate of DGF. The red-cell-based perfusate used in the NMP system was designed to create an anti-inflammatory environment without platelets, white cells or complement 37 . Nonetheless, global transcriptomic studies have clearly demonstrated the opposite effect, with upregulation of several immune and inflammatory pathway genes in human kidneys. More detailed analysis of the transcriptomics data demonstrated that kidneys with a lower expression of oxidative phosphorylation pathways and enhanced upregulation of inflammatory cytokines and chemokines were more likely to have longer periods of DGF compared to those that had immediate graft function or just 1 d of DGF 33 . We have also shown that the addition of a cytokine filter to the NMP circuit attenuates the expression of inflammatory genes in human kidneys, and this has potential for clinical application in the future 33 . Our trial has several limitations. It was designed as an open-label study because of the logistics of the NMP technique. Blinding of the surgical team was not possible because NMP was performed in the operating room while the transplant recipient was being prepared for surgery. However, the perfusion teams were not involved in data analysis. We were unable to perform NMP in 14 kidneys (8.2%) randomized to this treatment arm because of concerns over the technical aspects of cannulating the renal arterial system. The alternatives for arterial perfusion are the use of an aortic patch clamp or direct cannulation of the renal artery. The former is more favorable as it allows perfusion of multiple renal arteries without loss of the aortic patch for subsequent anastomosis in the recipient 38 . 
Nonetheless, complex renal arterial anatomy or heavily diseased aortic patches sometimes make cannulation impossible or increase the risk of causing damage such as an arterial dissection. In this early experience, we took a cautious approach and chose not to perform NMP in higher-risk circumstances. More complex arterial perfusion techniques, such as anastomosing the renal arterial system to a tissue-banked deceased donor artery to create a discardable conduit for perfusion, are possibilities in the future that could increase the rate of implementing NMP. The intention-to-treat analysis takes account of the non-performance of NMP on the outcome of the trial.

As NMP requires cannulation of the renal blood vessels, there is potential to cause endothelial damage. There is also a risk of transmitting infection during the period in which the kidney is perfused in an ex vivo organ chamber. In our trial, NMP was not associated with any increase in transplant thrombosis, infectious complications or any other adverse events. No significant differences were observed between the groups in terms of recipient safety outcomes. The incidence of renal arterial or venous thrombosis was very low, and there were few complications related to the renal transplant biopsies. Complications associated with kidney transplantation that require hospitalization are common and were categorized into graft dysfunction, infection, related to surgery or immunosuppression based.

In conclusion, a 1-h period of NMP after SCS does not reduce the risk of DGF in DCD kidney transplants. Nonetheless, we have demonstrated that this new technology for kidney preservation is feasible, safe and suitable for clinical application. This trial delivers the first, essential step in exploring the broader potential of NMP in kidney transplantation.

Online content

Any methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information, details of author contributions and competing interests, and statements of data and code availability are available at https://doi.org/10.1038/s41591-023-02376-7.

Trial design

This investigator-led, randomized controlled, open-label trial was approved by the UK National Research Ethics Service and local institutional review boards (REC 15/EE/0356), with trial registration number ISRCTN15821205. An independent DMC monitored the progress of the trial. The trial design and methods were published previously 39 .
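Allocation, described under "Trial patients" below, used 1:1 randomization with randomly permuted blocks of fixed size 2 and 4, stratified by center. A minimal sketch of such a scheme (illustrative only; the trial's list was generated in SAS, and this toy version omits stratification and the paired-kidney handling):

```python
import random

def permuted_block_list(n, arms=("NMP", "SCS"), block_sizes=(2, 4), seed=42):
    """Build a 1:1 allocation list from randomly permuted blocks.

    Each block contains an equal number of each arm, so the running
    allocation never drifts far from balance; truncation of the final
    block is a simplification of this sketch.
    """
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n:
        size = rng.choice(block_sizes)
        block = list(arms) * (size // 2)  # balanced within the block
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n]

print(permuted_block_list(8))
```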
Trial patients

Eligible patients enrolled on the transplant waiting list and allocated a suitably matched kidney were enrolled at four UK transplant centers. Inclusion criteria included recipients 18 years of age or older with end-stage renal failure requiring their first or second kidney transplant who received a kidney from Maastricht category III or IV DCD donors 18 years of age or older. Exclusion criteria included recipients receiving a third or subsequent kidney transplant, multi-organ transplants, dual kidney transplants, pediatric en bloc kidney transplants and kidneys preserved by HMP. Two additional exclusion criteria were introduced on 13 October 2017: donors who underwent normothermic regional perfusion or one of a pair already randomized as a single kidney in the trial. This was approved by the Research Ethics Service, local institutional review boards and the TSC. All patients provided written informed consent. Patients were randomly assigned in a 1:1 ratio to 1-h NMP or SCS. The randomization list was created by the trial statistician in SAS Enterprise Guide (version 5.1) with SAS 9.4, stratified by transplant center and using randomly permuted blocks of fixed size 2 and 4 for single and pairs of kidneys, respectively. In cases where paired kidneys from the same donor were transplanted in the same transplant center, the randomization was stratified by kidney (right or left) so that one kidney was randomly allocated to each treatment, determining in which order they should be transplanted. The randomization process was facilitated using an Interactive Web Response System. After the assignment of treatment arms, no one in the trial was blinded to the treatment allocation.

SCS

Kidneys were retrieved by National Health Service Blood and Transplant (NHSBT) National Organ Retrieval Service teams, and, after flushing with cold preservation solution, they were stored on ice until transplanted.

NMP

NMP was performed at the transplanting center for 1 h using a customized pediatric cardiopulmonary bypass system. Kidneys were perfused with an oxygenated red-cell-based solution supplemented with a crystalloid solution and amino acids. Details are documented in the Supplementary Information. After the 1-h period of NMP, kidneys were flushed with hyperosmolar citrate solution at 4 °C to remove the red-cell-based perfusate and to re-cool the organ before transplantation. NMP kidneys were stored in ice and transplanted as soon as possible.

Transplantation

Kidneys were transplanted into either iliac fossa with anastomosis of the artery to the common, external or internal iliac arteries. The vein was anastomosed to either the common or the external iliac vein. The ureteric anastomosis was performed as an extravesical onlay over a double J stent.

Immunosuppression

A standard immunosuppressive protocol was used in all four trial centers. All patients received induction therapy with basiliximab 20 mg IV given on the day of transplantation and on the fourth postoperative day. All patients received methylprednisolone 500 mg IV at induction of anaesthesia. Maintenance immunosuppression was given as triple therapy with tacrolimus, mycophenolic acid and prednisolone. Tacrolimus was administered at a dose of 0.1 mg/kg/day orally, either in two divided doses (Adoport) or as a single daily dose (Advagraf). Tacrolimus trough blood levels were measured at least twice weekly, and the therapeutic target range in the first 3 months after transplant was 5-10 ng ml−1.
Levels were analysed at 1, 3, 6 and 12 months after transplant. Patients received mycophenolate mofetil (CellCept) 500 mg twice daily orally and prednisolone 20 mg once daily orally. The dose of prednisolone was tapered to 5 mg once daily by 2-6 weeks after transplantation.

Clinical outcomes

The primary outcome measure was DGF, defined as the requirement for dialysis in the first week after transplantation. Secondary outcome measures included incidence of PNF; duration of DGF; functional DGF, defined as a less than 10% fall in serum creatinine for three consecutive days in the first week after transplantation; CRR2 ((creatinine day 1 − creatinine day 2) / creatinine day 1 × 100) and CRR5 ((creatinine day 1 − creatinine day 5) / creatinine day 1 × 100); duration of hospital stay; serum creatinine and eGFR at 1, 3, 6 and 12 months using the Modification of Diet in Renal Disease (MDRD) 4-variable equation; and patient and allograft survival up to 12 months after transplant. For safety outcomes, the total number of incidences of biopsy-proven acute rejection, renal artery or venous thrombosis, complications of the renal transplant biopsy and the number of hospital admissions for any recognized complication of the renal transplant, including renal graft dysfunction, infection, surgery related or due to the immunosuppression, were recorded. Data were collected by each of the participating transplant centers using an online secure database hosted by the NHSBT Clinical Trials Unit.

Statistical analysis

Study design. The NHSBT Clinical Trials Unit supported the design, data management and analysis of the trial. Historical data spanning a 5-year period for three participating centers showed that the overall rate of DGF in DCD kidney transplants was 50%. This was used as the baseline rate. In a pilot series of kidney transplants from extended criteria donors (ECDs), 18 kidneys undergoing SCS followed by 1 h of NMP were compared to a historical control group of 47 ECD transplants after SCS alone. The DGF rates were 1/18 (6%) in the NMP group compared to 17/47 (36%) in the SCS group. Using a fixed sample size study, with interim analyses after 124 and 248 participants had been enrolled and reached 7 d after transplant, a total of 370 patients receiving a DCD kidney were required to detect a 30% relative reduction in DGF (from 50% to 35%) with a power of 80%, a statistical significance of α = 0.05 and 1:1 allocation. To allow for a study withdrawal rate of 7.5%, a maximum of 400 patients were needed for recruitment. There would be no sample size re-estimation during the trial.

Interim analysis

A group sequential design, with O'Brien-Fleming stopping rules (which preserved the 5% significance level in the final analysis), was used to allow the DMC to review the primary outcome for evidence of harm, benefit or futility. Two unadjusted interim analyses were performed: the first after 124 patients were randomized and reached 7 d after randomization and the second after 248 patients were randomized and reached 7 d after randomization. The stopping rules were used as a guideline, alongside the other safety data available to the DMC, as an overall assessment of the trial. The interim analyses were performed by the trial statistician, who was unblinded to the treatment arm, and these results were presented to the DMC only. The DMC reported its recommendations, without disclosing any trial results, to the TSC, which made the final decision regarding continuation of the trial.
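As a rough cross-check of the sample size quoted above, a standard two-proportion power calculation for DGF falling from 50% to 35% reproduces the right order of magnitude; the trial's figure of 370 additionally reflects the group sequential design and anticipated withdrawals. A sketch using statsmodels' arcsine (Cohen's h) effect-size convention, which is an assumption here rather than the trial's stated method:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Two-sided, 80% power, alpha = 0.05, 1:1 allocation
es = proportion_effectsize(0.50, 0.35)  # Cohen's h, ~0.30
n_per_arm = NormalIndPower().solve_power(effect_size=es, alpha=0.05,
                                         power=0.80, ratio=1.0)
print(round(n_per_arm))  # ~170 per arm, i.e. ~340 participants in total
```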
Study population

The population used for efficacy analyses was a MITT population including all randomized patients who received a transplant. This was a change from the original protocol because it was deemed illogical to include those participants who did not receive a transplant, as no outcome data were available. Primary and secondary outcomes were also analyzed per protocol, which excluded any participant who did not receive a transplant, was randomized in error, experienced a protocol deviation or was withdrawn from the trial (details are provided in the statistical analysis plan). For both analysis populations, results were presented by randomized treatment, and all ratios and mean differences were presented as NMP versus SCS. All analyses were adjusted for cold ischemic time, donor age, left/right kidney and transplant center (all as fixed effects). All tests were two-sided, and P values less than 0.05 were considered statistically significant. SAS Enterprise Guide (version 7.15) with SAS 9.4 was used to conduct all analyses. It was pre-specified in the statistical analysis plan that multiple comparisons would be performed, potentially increasing the probability of observing a statistically significant result by chance, but that no adjustments would be made to account for multiple testing.

Primary and secondary outcome measures

The primary outcome was analyzed using an adjusted logistic regression model and excluded participants who experienced PNF. The data for this outcome were complete, and, therefore, it was not necessary to undertake any of the methods proposed in the statistical analysis plan for assessing the impact of missing data. Secondary and safety outcome measures were analyzed using a logistic regression model (PNF and functional DGF), Cox proportional hazards model (duration of DGF, length of hospital stay, allograft and patient survival), normal linear regression model (CRR at day 2 and day 5, serum creatinine and eGFR at 1, 3, 6 and 12 months) and negative binomial model (biopsy-proven acute rejection and safety outcomes). Missing secondary outcome measures were not imputed and were excluded from the relevant analyses, except for eGFR. Full details can be found in the statistical analysis plan. To ensure model assumptions were met, residual plots were examined.

NMP assessment score

After NMP, kidneys were allocated a score of 1-5 based on the macroscopic appearance, mean renal blood flow and urine production. A lower value indicated a better score (details in the Supplementary Information). To assess for associations between DGF and the NMP assessment score, a logistic regression model was fitted, adjusting for cold ischemic time, donor age, left/right kidney and transplant center. Participants who experienced PNF were excluded from the analysis. The NMP assessment score was fitted as a binary variable.

Post hoc subgroup analyses

i. PNF was included in the DGF groups to determine the impact of the pre-transplant preservation interventions on rates of non-function.
ii. To determine the effect of pre-transplant recipient dialysis status (receiving dialysis versus pre-dialysis) on DGF in the MITT analysis, we used the same model as that used for the primary outcome but with the inclusion of the pre-transplant dialysis term, and we also assessed the interaction between treatment group and pre-dialysis status.
iii. Some patients received a single post-transplant dialysis as a safety measure in response to hyperkalemia or fluid overload, irrespective of renal function. To take account of this, we analyzed the effect of excluding patients who received a single post-transplant dialysis on DGF rates.
iv. We also analysed the effect of excluding pre-dialysis patients from CRR2 calculations in both the MITT and per-protocol analyses.
v. The duration of the second cold ischemic period after NMP was variable, and this might have influenced the rate of DGF. We, therefore, compared the duration of second cold ischemic time in kidneys with initial function and DGF after NMP.
vi. To take account of missing eGFR data in the MITT analysis, we imputed a value of 8.5 ml/min/1.73 m² at the 1-, 3-, 6- and 12-month timepoints for patients with PNF, ongoing DGF, graft loss or death 40 .

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Data availability

Data from the trial are stored in an online secure database hosted by the NHSBT Clinical Trials Unit. The protocol, consent form, statistical analysis plan, definition and derivation of clinical characteristics and outcomes, training materials, regulatory documents and other relevant study materials are available online and were published elsewhere. The datasets generated during analysis will be available upon reasonable request from the NHSBT Clinical Trials Unit after de-identification (text, tables, figures and appendices), beginning 9 months after publication and ending 5 years after article publication. Data will be shared with investigators whose use of the data has been assessed and approved by an NHSBT review committee as a methodologically sound proposal. The NHSBT Clinical Trials Unit can be contacted at CTU@nhsbt.nhs.uk. The Clinical Trials Unit will be able to provide a copy of our data-sharing policy and arrange a data use agreement, which will need to be signed. All data use agreements will be in line with the consent given by participants upon agreeing to take part in the trial.
Phytochemicals for the treatment of COVID-19

The coronavirus disease 2019 (COVID-19) pandemic has underscored the lack of approved drugs against acute viral diseases. Plants are considered inexhaustible sources of drugs for several diseases and clinical conditions, but plant-derived compounds have seen little success in the field of antivirals. Here, we present the case for the use of compounds from vascular plants, including alkaloids, flavonoids, polyphenols, and tannins, as antivirals, particularly for the treatment of COVID-19. We review current evidence for the use of these phytochemicals against SARS-CoV-2 infection and present their potential targets in the SARS-CoV-2 replication cycle.

Introduction

The coronavirus disease 2019 (COVID-19), caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has infected over 218 million people and killed over 5.2 million (WHO, 2020). The COVID-19 pandemic has underscored the limitations of the current pool of approved antivirals and has emphasized the need for further discovery and development of therapeutic and prophylactic agents for acute viral diseases. Plants have been utilized throughout human history for a variety of ailments and are considered inexhaustible sources of novel pharmacologically active compounds. Phytochemicals and their derivatives have already been approved for non-viral disease states. Among these compounds are paclitaxel from the Pacific yew tree (Taxus brevifolia), and vincristine and vinblastine from periwinkle (Catharanthus roseus) for cancer treatment; morphine from opium poppy (Papaver somniferum) and aspirin derived from salicylic acid from the bark of willow trees (Salix spp.) for pain management; and quinine from quina trees (Cinchona spp.) and artemisinin from sweet wormwood (Artemisia annua) for controlling parasitic infections. However, the majority of approved antivirals are synthetic small molecules, and a plant-derived antiviral is yet to be approved (Tompa et al., 2021). Unlike designed synthetic small molecules, plant secondary metabolites have evolved to exhibit biological activity, thereby increasing the likelihood of their interactions with other biological molecules, which is evident in the breadth of pharmacological activity exhibited by phytochemicals (Atanasov et al., 2021). Plant secondary metabolites, especially those that are used as food or as traditional medicine, may also be safer than synthetic molecules. Plants therefore remain rich but underutilized sources of antivirals. Here, we present some of the characterized secondary metabolites from vascular plants that have been observed to inhibit either SARS-CoV-2 replication or SARS-CoV-2 functional components at least in vitro.

Druggable Targets in the SARS-CoV-2 Replication Cycle

SARS-CoV-2 is a Betacoronavirus under the Coronaviridae family of relatively large positive-sense single-stranded RNA viruses. SARS-CoV-2 particles are spherical, with lipid envelopes surrounding an icosahedral capsid that protects the genomic RNA (around 29.9 kb). The 5′ end of the genome encodes 16 nonstructural proteins (NSP1-16), while the 3′ end encodes four structural proteins: spike (S), envelope (E), membrane (M), and nucleocapsid (N). The homotrimeric glycosylated S protein is the predominant structure on the CoV envelope and is central to the CoV entry process.
Each S protein consists of two noncovalently bound subunits: the surface-exposed S1, which contains the receptor-binding domain (RBD), and the transmembrane S2, which facilitates fusion with the host membrane. The infection cycle of SARS-CoV-2 is initiated by the interaction of the RBD with angiotensin-converting enzyme 2 (ACE2), the known SARS-CoV-2 receptor in humans (Fig. 1) (Yan et al., 2020). Proteolytic cleavage at the S1/S2 interface of the S protein triggers a series of conformational changes that lead to fusion of the viral envelope with the host membrane. SARS-CoV-2 S priming has been found to be dependent on transmembrane serine protease 2 (TMPRSS2) on the host cell surface (Hoffmann et al., 2020); however, in the absence of cell surface proteases, endosomal proteases, such as cathepsins B and L, may also facilitate SARS-CoV-2 S priming, suggesting that SARS-CoV-2 enters the host through both direct fusion with the plasma membrane and the endocytic pathway (Ou et al., 2020). Following fusion, the genomic RNA is uncoated in the cytoplasm, making it accessible for primary translation into two polyproteins, pp1a (NSP1-11) and pp1ab (NSP1-10, 12-16), which are co-translationally and post-translationally cleaved by the papain-like protease (PLpro) within NSP3 and the main protease (Mpro; also known as the 3C-like protease, 3CLpro) within NSP5 (V'kovski et al., 2021). After proteolytic processing, NSP2-16 assemble at different subcellular localizations and perform their subsequent roles in the SARS-CoV-2 replication cycle. NSP2-11 provide cofactors to the viral polymerase and perform supporting functions for viral replication and translation, including the formation of double-membrane vesicles where replication and translation take place. RNA-related enzymatic functions are attributed to NSP12-16, such as the viral RNA-dependent RNA polymerase (RdRp) activity of NSP12 and the 3′-5′ exonuclease activity of NSP14. Although NSP15 has a uridine-specific endoribonuclease activity, its activity may be more important for host immune evasion than for genomic replication (Pillon et al., 2021). Assembled virus particles bud out of endoplasmic reticulum-to-Golgi intermediate compartments and have been suggested to exit the host cell via the lysosomal trafficking pathway (Ghosh et al., 2020). Nucleoside analogs that inhibit the RdRp or terminate the nascent viral RNA chains are among the primary candidates for SARS-CoV-2 inhibition. Considering the effectiveness of protease inhibition in controlling human immunodeficiency virus and hepatitis C virus infections, proteases involved in the replication cycle of SARS-CoV-2 are also deemed important targets for controlling SARS-CoV-2 infection. These proteases include not only virus-encoded proteases (PLpro and Mpro) but also host proteases, such as TMPRSS2 and cathepsins L and B. Other targets for SARS-CoV-2 inhibition include the S protein and its interaction with ACE2; the endocytic pathway; lipid regulatory pathways; and the lysosomal trafficking pathway. Modulators of the immune response are also considered candidates for COVID-19 treatment, given the relative success of steroids in the treatment of the inflammatory stage of COVID-19 (The RECOVERY Collaborative Group, 2021). With these considerations, several groups have identified phytochemicals that inhibited SARS-CoV-2 production in vitro and have elucidated their targets in the SARS-CoV-2 replication cycle.
Other groups have taken the reverse route and have identified inhibitors of specific proteins involved in the SARS-CoV-2 replication cycle and determined whether these activities would translate to inhibiting virus production in cell culture models. Together, these efforts have led to the identification of vascular plant secondary metabolites that have the potential to treat COVID-19.

Alkaloids

Alkaloids are naturally occurring organic compounds that carry at least one nitrogen atom, typically within a heterocyclic ring. The nitrogen atoms in alkaloids are in the negative oxidation state, lending alkaloids their basic properties (Kurek, 2019). Alkaloids are structurally diverse and comprise one of the largest groups of plant secondary metabolites. A number of alkaloids from plants (Table 1) have been observed to inhibit SARS-CoV-2 components or infection in vitro. In two separate studies using Vero E6 cells, berbamine inhibited SARS-CoV-2 S pseudovirus entry, reduced genome replication, and reduced infectious virus production (Xia et al., 2021). In one of these studies, berbamine was found to inhibit the ion channel activity of the SARS-CoV-2 E protein at a high concentration. In the same study, a berbamine derivative (BE-33) showed even more potent activity against SARS-CoV-2 infection (EC50 of 0.94 μM for viral titers) and higher binding affinity to the E protein ion channel than did berbamine (Xia et al., 2021). This derivative also reduced viral loads in the lungs and reduced inflammatory cytokines in human ACE2 (hACE2)-transgenic mice, suggesting anti-inflammatory properties that may be beneficial to severe COVID-19 cases. Berberine, another alkaloid, was observed to reduce infectious virus production but not viral RNA levels in Vero E6 cells, indicating that berberine affects stages later than genome replication (Pizzorno et al., 2020; Varghese et al., 2021). An immunofluorescence-based screening for FDA-approved drugs that inhibit viral genome replication, as indicated by dsRNA production, revealed bromhexine and reserpine, and two antiarrhythmic drugs with antimalarial activity, hydroquinidine and quinidine, as potential anti-SARS-CoV-2 agents (Ku et al., 2020). Other antimalarial alkaloids (quinacrine and quinine) have also been reported to inhibit SARS-CoV-2 infection in vitro (Große et al., 2021; Salas Rojas et al., 2021). The iminosugar castanospermine and its prodrug, celgosivir, protected Vero E6 cells from SARS-CoV-2-induced cytopathic effects (CPE) and reduced viral genome replication (Clarke et al., 2021). Celgosivir treatment led to lower S protein levels, probably owing to its ability to inhibit α-glucosidases, leading to improper S protein folding. A screening of natural products revealed the ability of cepharanthine, hernandezine, and neferine to inhibit SARS-CoV-2 S pseudovirus entry in ACE2-expressing HEK293T (HEK293/ACE2) cells (He et al., 2021). Other studies have also shown that cepharanthine can protect Vero E6 and Calu-3 cells from authentic SARS-CoV-2 infection; one of these studies has proposed that cepharanthine disrupts the S-ACE2 interaction (Jan et al., 2021; Ohashi et al., 2021). The anti-SARS-CoV-2 potential of neferine and its analogs (isoliensinine and liensinine) has also been corroborated in a different study, where neferine inhibited Ca2+-dependent fusion of the viral envelope with the host membrane (Yang et al., 2021b).
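As a brief aside on how potency values such as the EC50 quoted above are obtained, the following is a minimal sketch of fitting a standard four-parameter logistic (Hill) dose-response curve. All concentrations and responses are invented for illustration; no data from the cited studies are reproduced.

```python
# Hedged sketch: estimating an EC50 by fitting a four-parameter logistic
# (Hill) curve to dose-response data. The data points below are invented
# placeholders; only the curve form is standard.
import numpy as np
from scipy.optimize import curve_fit

def hill(c, bottom, top, ec50, n):
    # Response rises from `bottom` to `top` with Hill coefficient `n`.
    return bottom + (top - bottom) / (1.0 + (ec50 / c) ** n)

conc = np.array([0.01, 0.1, 0.3, 1.0, 3.0, 10.0])    # μM, hypothetical
resp = np.array([2.0, 9.0, 24.0, 55.0, 86.0, 97.0])  # % inhibition, hypothetical

popt, _ = curve_fit(hill, conc, resp, p0=[0.0, 100.0, 1.0, 1.0])
print(f"estimated EC50 = {popt[2]:.2f} uM")
```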
Emetine, which has displayed inhibitory effects on Zika virus and Ebola virus (Yang et al., 2018), has been reported to reduce SARS-CoV-2 replication in a number of in vitro studies (Choy et al., 2020; Wang et al., 2020b; Jan et al., 2021; Kumar et al., 2021). A small observational study has also shown that, while emetine did not accelerate viral clearance, emetine appeared to improve blood oxygen concentrations and breathing difficulties among symptomatic COVID-19 patients (Fan et al., 2021). Meanwhile, homoharringtonine has been shown to protect Vero E6 cells from SARS-CoV-2 CPE and to reduce viral titers and RNA replication (Choy et al., 2020). Lycorine and oxysophoridine have also been shown to inhibit SARS-CoV-2 replication in Vero E6 cells. Tetrandrine was demonstrated to inhibit the entry of SARS-CoV-2 S pseudoviruses in HEK293/ACE2 cells in a dose-dependent manner, likely owing to its ability to inhibit the two-pore calcium channel 2 (TPC2), which may be involved in the endocytic pathway of SARS-CoV-2 entry (Ou et al., 2020). Derivatives of alkaloids have also exhibited anti-SARS-CoV-2 effects in a number of studies, indicating that alkaloid structures can be optimized for the development of anti-SARS-CoV-2 agents. Isatin derivatives were observed to inhibit SARS-CoV-2 Mpro activity. Treatment with topotecan, an analog of the plant-derived alkaloid camptothecin, reduced morbidity and mortality rates in mice infected with SARS-CoV-2, with indications of reduced inflammatory responses (Ho et al., 2021). Derivatives of tylophorine, a naturally occurring alkaloid, have also demonstrated the ability to inhibit SARS-CoV-2 infection in vitro.

Flavonoids

Flavonoids comprise another large and structurally diverse group of secondary metabolites produced by plants. The basic flavonoid skeleton is characterized by 15 carbon atoms, wherein two primary aromatic rings (A and B) are connected by three carbon atoms (C6-C3-C6), which may be linked to a third ring (C) (Santos et al., 2017). Flavonoids often occur as aglycones (non-sugar forms), as well as glycosylated and methylated derivatives. Several flavonoids have displayed activity against SARS-CoV-2 infection (Table 2). Both baicalin and its aglycone baicalein have been observed to inhibit SARS-CoV-2 infection in vitro in several studies, some of which have shown that these flavonoids inhibit Mpro activity (Jo et al., 2020; Su et al., 2020; Liu et al., 2021). Furthermore, baicalein reduced lung damage and lung inflammation in hACE2-transgenic mice infected with SARS-CoV-2, and baicalein treatment protected the mice from infection-induced body weight loss. Remarkably, a study has also shown that baicalin may inhibit the endoribonuclease activity of SARS-CoV-2 NSP15, which suggests an additional or alternative mode of action (Hong et al., 2021). Naturally occurring baicalein analogs (dihydromyricetin, myricetin, scutellarein, and quercetagetin) have also been demonstrated to inhibit Mpro.

[Table: Other phytochemicals with potential as anti-SARS-CoV-2 agents]

Brazilin has been shown to inhibit SARS-CoV-2 S pseudovirus entry into A549/ACE2 cells, likely owing to its ability to inhibit the RBD-ACE2 interaction or to inhibit ACE2 or TMPRSS2 activity (Goc et al., 2021). Isorhamnetin also inhibited SARS-CoV-2 S pseudovirus entry in HEK293/ACE2 cells, suggesting an ability to disrupt the S-ACE2 interaction (Zhan et al., 2021).
Meanwhile, the addition of panduratin A to SARS-CoV-2-infected Vero E6 and Calu-3 cells reduced viral production in these cell culture models (Kanjanasirirat et al., 2020). Notably, pre-incubation of panduratin A with SARS-CoV-2 particles reduced the infectivity of the virus in Vero E6 cells, suggesting that it affects pre-entry stages of infection. Chrysanthemin, the 3-glucoside of cyanidin, has been reported to inhibit Mpro and PLpro in separate studies (Pitsillou et al., 2020a, 2020b). Other flavonoids (herbacetin, kaempferol, pectolinarin, and quercetin) with the capacity to inhibit SARS-CoV-2 Mpro have also been reported in different studies (Abian et al., 2020; Jo et al., 2020; Khan et al., 2021). Meanwhile, rutin, a rutinose-bound form of quercetin, has been found to inhibit the deubiquitinase activity of PLpro (Pitsillou et al., 2020b). Naringenin was observed to protect Vero E6 cells from SARS-CoV-2-induced CPE and has been suggested to inhibit Mpro and TPC2 (Abdallah et al., 2021; Clementi et al., 2021). Flavonoids from the leaves of Camellia sinensis have shown inhibitory activity against a variety of viruses and have likewise shown inhibitory effects on SARS-CoV-2 infection in vitro (Xu et al., 2017). Epigallocatechin-3-gallate (EGCG) and theaflavin-3,3′-di-O-gallate (TF3) have displayed virucidal effects on SARS-CoV-2 particles (Nishimura et al., 2021; Ohgitani et al., 2021). Both have also been shown to inhibit SARS-CoV-2 binding, entry, and Mpro activity (Jang et al., 2020; Goc et al., 2021; Henss et al., 2021). EGCG also inhibited the endoribonuclease activity of NSP15 (Hong et al., 2021), whereas TF3 appeared to downregulate cathepsin L levels, which may contribute to its inhibitory effects on SARS-CoV-2 entry (Goc et al., 2021). Whether EGCG or TF3 inhibits all or some of these targets remains to be determined. However, the findings listed here emphasize the breadth of pharmacological activities of some of these phytochemicals.

Terpenes and terpenoids

Several other groups of phytochemicals have also displayed anti-SARS-CoV-2 activity in vitro (Table 3). Among these are terpenes and terpenoids, which form one of the largest families of plant secondary metabolites. All terpenes are composed of isoprene units (C5H8) and are further classified depending on the number of isoprene units (e.g., diterpenes have four isoprene units, while triterpenes have six) (Perveen, 2021). Terpenes that have additional functional groups or oxidized methyl groups are called terpenoids. The diterpenoid andrographolide and its fluorescent derivative have been reported to inhibit Mpro. Andrographolide also reduced SARS-CoV-2 production in Vero E6 and Calu-3 cells and has been proposed to inhibit late stages of infection (Sa-Ngiamsuntorn et al., 2021). The diterpenes cryptotanshinone, tanshinone I, tanshinone II, and dihydrotanshinone I have all demonstrated the capacity to inhibit PLpro in separate studies (Lim et al., 2021; Zhao et al., 2021). Furthermore, cryptotanshinone and tanshinone I reduced SARS-CoV-2 production in Vero E6 cells (Zhao et al., 2021). Triterpenoids from olive leaves (betulin, betulinic acid, maslinic acid, and ursolic acid) were found to inhibit Mpro in an enzyme assay (Alhadrami et al., 2021). Triterpene glycosides, also known as saponins, such as glycyrrhizin and platycodin D, have also been observed to have anti-SARS-CoV-2 activity.
In particular, glycyrrhizin inhibited the production of infectious virus particles in Vero E6 cells and inhibited Mpro in a dose-dependent manner (van de Sand et al., 2021). Meanwhile, platycodin D reduced SARS-CoV-2 S pseudovirus entry into H1299/ACE2 and H1299/ACE2-TMPRSS2 cells.

Curcumin

Curcumin, which has demonstrated a broad spectrum of pharmacological activities, including anti-inflammatory and antiviral effects (Amalraj et al., 2017), has also been reported to inhibit SARS-CoV-2 S pseudovirus entry into A549/hACE2 cells (Goc et al., 2021). Curcumin is insoluble in water and, consequently, has poor bioavailability (Anand et al., 2007); several formulations of curcumin have been designed to improve its bioavailability for clinical application. Two small randomized controlled clinical trials have reported the potential benefits of different curcumin formulations for COVID-19 patients (Table 3). In one of these studies, an oral nanomicellar formulation of curcumin was given for 14 days to newly diagnosed COVID-19 patients who also received standard of care (interferon beta-1b, bromhexine, and atorvastatin) (Valizadeh et al., 2020). Curcumin treatment among COVID-19 patients resulted in decreased interleukin (IL)-1β and IL-6 mRNA and serum levels relative to baseline, whereas the mRNA and serum levels of these pro-inflammatory cytokines did not significantly decrease in the placebo group. Mortality due to COVID-19 was also lower in the curcumin treatment group (20%) than in the placebo group (40%). In another study, an oral formulation containing curcumin and piperine (added to improve curcumin bioavailability) was observed to accelerate symptom recovery, shorten hospital stay, and reduce mortality rates among hospitalized COVID-19 patients who received COVID-19 standard of care, indicating that the formulation is a suitable adjuvant therapy for symptomatic patients (Pawar et al., 2021).

Other phytochemicals

The cardiac glycosides digoxin and ouabain (Table 3) have been reported to inhibit viral genome replication in Vero E6 cells (Cho et al., 2020). Sennoside B and acteoside, both glycosides, have been reported to inhibit Mpro (Abdallah et al., 2021). Phytochemicals belonging to other groups, such as maclurin (a benzophenone) and chlorogenic acid (a quinic acid ester), have also been reported to inhibit Mpro activity (Su et al., 2020; Abdallah et al., 2021). Tannins, which are high-molecular-weight polyphenols from plants, such as chebulagic acid, punicalagin, and tannic acid, have all displayed the ability to inhibit Mpro activity (Wang et al., 2020a; Du et al., 2021; Tito et al., 2021). Punicalagin and ellagic acid were also observed to interfere with the S protein or RBD interaction with ACE2 (David et al., 2021; Tito et al., 2021). The lignan diphyllin reduced SARS-CoV-2 titers in Vero cells (Stefanik et al., 2021). Nordihydroguaiaretic acid, another lignan, has been reported to inhibit NSP3 and one of its domains, PLpro, in a matrix-assisted laser desorption ionization time-of-flight-based deubiquitylase assay (Armstrong et al., 2021). The stilbenoid resveratrol has been observed to inhibit viral genome replication and virus production in Vero and Vero E6 cells, and has been found to inhibit SARS-CoV-2 infection in human primary bronchial epithelial cells (Pasquereau et al., 2021; ter Ellen et al., 2021; Yang et al., 2021a).
Pterostilbene, another stilbenoid, has likewise been reported to reduce virus production in Vero E6 cells and to inhibit SARS-CoV-2 infection in human primary bronchial epithelial cells (ter Ellen et al., 2021). Gingerol, a polyphenol from ginger, has been reported to reduce viral titers in Vero E6 cells. Meanwhile, hypericin, an anthraquinone, has been revealed to inhibit Mpro and PLpro in separate screening assays (Pitsillou et al., 2020a, 2020b).

Perspectives and Conclusion

The diversity of plant secondary metabolites lends them a huge breadth of pharmacological activities and has made them a rich source of drugs for several disease states. However, the structural complexity of phytochemicals makes them difficult to produce and synthesize at industrial scale, which is part of the reason why most pharmaceutical companies favor synthetic small molecules for antiviral screening and development. Phytochemicals may also possess suboptimal activity (i.e., require high concentrations that are not achievable in human plasma) and may have poor bioavailability. However, steady progress in synthetic chemistry and in biotechnology has recently allowed the synthesis or semisynthesis of large, complex natural products (Beutler, 2009; Atanasov et al., 2021). Furthermore, even for synthetic molecules, the identification of pharmacophores is imperative to allow derivatization of compounds for improved efficacy and bioavailability and even for structural simplicity, so the requirement for optimization is not unique to phytochemicals. Advancements in the field of drug delivery, as exemplified by the orally administered nano-curcumin that we have cited here, have also improved the systemic bioavailability and pharmacological profiles of phytochemicals in pre-clinical models (Rahman et al., 2020). The technologies for compound optimization and biosynthesis continue to improve and may eventually address some of the limitations to the clinical application of plant-derived antiviral compounds. Thus, we must continue to build our repository of plant-based antivirals, so that semi-synthetic and biosynthetic techniques can be applied to these plant products as soon as the technology becomes available. Furthermore, studies identifying phytochemicals with broad-spectrum antiviral potential should be performed, as they may allow us to focus our efforts on optimizing semi- and biosynthetic techniques for pathways relevant to these compounds. As we have presented, plants provide a vast array of candidates for the treatment of COVID-19. Although we have not named them here, several of these phytochemicals have exhibited activity against other coronaviruses (Islam et al., 2020), indicating that they may be used against future coronavirus outbreaks. Some of these compounds (e.g., catechin and emetine) have also demonstrated the ability to inhibit infection with viruses belonging to other families, which suggests their potential as broad-spectrum antivirals (Ali et al., 2021). Based on the relative successes of plant secondary metabolites in other disease states, such as cancer and parasitic infections, the development of plant bioactive compounds from bench to bedside is not impossible. Likely, the lack of plant antivirals stems from the parallel lack of impetus to develop treatment agents for acute viral diseases, which is one of the reasons for the slow global response to the COVID-19 pandemic.
Concerted efforts are needed to maximize resources, including phytochemicals, for the development of treatment agents for COVID-19 and other viral diseases to ease the blow of large viral outbreaks in the future.
A Dedicated Binding Mechanism for the Visual Control of Movement

Summary

The human motor system is remarkably proficient in the online control of visually guided movements, adjusting to changes in the visual scene within 100 ms [1-3]. This is achieved through a set of highly automatic processes [4] translating visual information into representations suitable for motor control [5, 6]. For this to be accomplished, visual information pertaining to target and hand needs to be identified and linked to the appropriate internal representations during the movement. Meanwhile, other visual information must be filtered out, which is especially demanding in visually cluttered natural environments. If selection of relevant sensory information for online control were achieved by visual attention, its limited capacity [7] would substantially constrain the efficiency of visuomotor feedback control. Here we demonstrate that both exogenously and endogenously cued attention facilitate the processing of visual target information [8], but not of visual hand information. Moreover, distracting visual information is more efficiently filtered out during the extraction of hand information compared to target information. Our results therefore suggest the existence of a dedicated visuomotor binding mechanism that links the hand representation in visual and motor systems.

Perturbations

We applied two types of perturbation during the reaching movements. Target displacements consisted of a 2cm displacement of the visual target in the lateral direction (left or right in the x-direction), ramped over a 50ms interval. Cursor displacements consisted of a 2cm displacement of the visual cursor in the lateral direction. The necessary correction to both perturbation types was equal in size, but opposite in direction, i.e. a target displacement to the right caused a corrective response to the right, while a cursor displacement to the right caused a corrective response to the left. Both perturbations could be easily detected, and participants were informed about their occurrence before the experiment started. However, responses to such perturbations are highly automatic and immune to voluntary processes [1, 4, S2].

Eye fixation control

The position(s) of the fixation cross(es) in eye coordinates were obtained from each participant with a calibration procedure before the start of the experiment. To avoid frequent re-calibration or problems due to drift or head movements, we used a combined manual/automatic procedure to ensure eye fixation throughout a trial. During the training, the experimenter checked the mean eye position of trials reported as valid (obtained from the automatic procedure, see below) for discrepancies between expected and actual mean eye position. Participants fixating elsewhere were reminded to keep their eyes on the fixation cross. During the experiments, we relied on the automatic procedure, but sample inspection ensured that the participants did not change their strategy and fixate somewhere else. The automatic procedure required that 80% of the recorded eye tracking data within the movement phase of a trial was technically valid (i.e. at most 20% missing values received from the eye tracker). Of the validly recorded data, 68.2% was required to be within 1mm (in screen coordinates) of the mean eye position of the trial. Trials not fulfilling these criteria were automatically rejected as invalid eye fixation trials, and repeated.
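The automatic fixation check lends itself to a compact implementation. Below is a minimal sketch under stated assumptions: eye samples arrive as an N x 2 array of screen positions in millimeters, with NaN marking samples the tracker reported as missing. The thresholds follow the text; the data layout is a hypothetical choice, not taken from the original analysis code.

```python
# Hedged sketch of the automatic eye-fixation validity check described in
# the text: >= 80% technically valid samples during the movement phase,
# and >= 68.2% of valid samples within 1 mm of the trial's mean eye
# position. Array layout (N x 2, screen mm, NaN = missing) is assumed.
import numpy as np

def fixation_valid(eye_xy: np.ndarray) -> bool:
    n_total = len(eye_xy)
    valid = eye_xy[~np.isnan(eye_xy).any(axis=1)]
    if len(valid) < 0.8 * n_total:          # at most 20% missing samples
        return False
    mean_pos = valid.mean(axis=0)
    dist = np.linalg.norm(valid - mean_pos, axis=1)
    return np.mean(dist <= 1.0) >= 0.682    # 68.2% within 1 mm of the mean

# Example: a trial with tight fixation passes the check.
rng = np.random.default_rng(0)
stable = rng.normal(0.0, 0.3, size=(500, 2))
print(fixation_valid(stable))               # True
```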
Because eye fixation needs a visual anchor to be stable [S3], we can exclude the possibility that the eyes rested on the blank screen adjacent to the fixation cross or moved around if a trial was accepted by the automatic procedure.

General procedure

Participants initiated a trial by moving the cursor(s) into the start box(es) while maintaining eye fixation. After 350ms, the target(s) appeared at 20cm distance from the start positions. Participants were instructed to initiate fast and accurate reaching movements toward the target(s) when the fixation cross changed shape (two small circles were added centrally over the fixation cross), which happened 2.3s after the target(s) appeared. The trial ended when hand velocity dropped below 3.5cm/s for at least 40ms. A trial was considered valid when eye fixation was maintained, movement duration was shorter than 800ms, and maximum velocity ranged between 50 and 80cm/s. Valid trials with endpoint accuracy of at least 7mm were rewarded with one point per hit target, an animated "explosion", and a pleasant tone. A running score was displayed at the top of the screen. Feedback about trial performance (accuracy/velocity/eye fixation) was given via a color scheme at the end of each trial. Participants were encouraged to use this visual feedback to adjust their performance. Invalid trials constituted 15%/10%/20% of all executed trials (experiments 1/2/3) and were repeated by randomly intermixing them into the remaining trials of the current experimental block. In half of the trials, a "force channel" restricted movements, guiding the hands on a straight path to the targets. The force channel was implemented with a spring-like force of 7000N/m applied in the lateral direction. The force with which participants pressed into the channel provided a more sensitive assay of the feedback-triggered responses than position data from unconstrained trials [1, 15]. The sensitivity is similar to that of velocity data, but as force is measured directly, and not differentiated, no additional noise is introduced. On channel trials, the target or cursor was displaced back after 350ms to enable participants to reach the target. On non-channel trials, the target and cursor displacements remained, requiring participants to correct for the perturbations. We refrained from using force channel trials during training blocks in order to avoid a possible attenuation of the feedback response [1].

Experiment 1: Exogenous cueing of attention

After one training block, participants performed 8 experimental blocks of 80 trials. Each block consisted of the randomized permutation of all experimental conditions: perturbation type (target/cursor) x movement (up-/downwards, alternating) x channel (y/n) x flash side (left/right hemi-field) x 5 displacement conditions. Four different displacements (left/right hemi-field x leftward/rightward) and one condition without displacement were tested. The cursors, start and target boxes were dark grey throughout the trials. When the mean tangential velocity of both hands exceeded 3.5cm/s ("mvmt onset" in Fig. 1a), the corresponding object (cursor or target) alternated its color between white and gray twice within 50ms ("flash" in Fig. 1a). This exogenous cue was presented on the left or right cursor on blocks with cursor displacements, and on the left or right target on blocks with target displacements. The displacement occurred 100ms after triggering the flashes, which corresponded to roughly 5cm into the movement.
Experiment 2: Endogenous cueing of attention

Nineteen participants (13 female, 24.0±4.7 years) completed a pre-test and two experimental sessions (~2 hours each). During the pre-test, either one cursor or one target changed its brightness for 350ms during the reaching movement. Participants' task was to decide after the movement whether luminance had increased or decreased during the reach (two-alternative forced choice task). By using different levels of brightness change, we determined a contrast level that yielded a perceptual performance of d' = 0.3 (separately for cursors and targets) for each individual participant, which was then used throughout the experiment. One additional participant was pre-tested but excluded from the experiment because of chance performance up to the highest contrast level. The experiment consisted of 16 blocks of 48 trials. The site of color change and displacement (cursor vs. target condition) alternated between blocks. Thus, participants could concentrate on either the cursors or the targets for the perceptual task. Each block contained 50% non-channel trials used for assessing perceptual performance. These consisted of a randomized permutation of all experimental perception conditions: movement (up-/downwards, alternating) x cueing (left/right) x brightness change (1/3 incongruent, 2/3 congruent with cue) x change direction (brighter/darker). The remaining 50% were channel trials, used for assessing fast feedback responses to displacements, with the randomized permutation of all experimental reaching conditions: movement (up-/downwards, alternating) x cueing (left/right; brightness change was always congruent with the cue) x side of displacement (left/right) x direction of displacement (left/right/none). The cursors, start and target boxes were medium grey throughout the trials. An arrow pointing left- or rightwards adjacent to the fixation cross served as the central cue. In perturbation trials, the displacement occurred when both hands had moved an average of 5cm in the forward direction. The brightness change occurred 100ms after the displacement (or the point in time when a displacement would have occurred for unperturbed trials) such that it could not interfere with the early response to the displacement. After each successful reach, participants made the perceptual judgment.

Experiment 3

After four training blocks (up-/downwards without distractors, up-/downwards with distractors), participants carried out 20 experimental blocks of 76 trials. Two additional participants were trained but excluded from the second day of the experiment because they performed poorly (best block fewer points than the average block score). The target movement ended in the middle of the screen at about the height of the fixation cross. Participants had the goal to terminate their cursor movement as close to the target endpoint as possible (Fig. 3a). In order to maintain the length of the reaching movement at 20cm while still using the same visual field as in the first two experiments, we compressed the visual scene by a factor of 2 in the y-direction. Therefore, both cursor and target moved about 10cm visually while the hand moved 20cm physically. At the height of the start box, the cursor was located vertically above the hand. Thus, we alternated between up- and downward blocks to keep the visuo-motor mapping constant within a block. The order of up- and downward blocks was counterbalanced across participants.
Even though adaptation to a constant visuo-motor gain mapping happens virtually instantly and generalizes across directions [S4], we started each block with four unperturbed and undistracted movements to allow for adaptation. The remaining 72 trials of each block consisted of the randomized permutation of all experimental conditions: perturbation type (target/cursor) x channel (y/n) x displacement (18 conditions; Fig. 3b). The motion of the target and distractors mimicked the cursor motion, but their velocity profiles in the y-direction correlated only partially with that of the target or cursor (r=0.73±0.01). The actual cursor and target had a red border from target appearance until the cue to initiate the movement. In perturbation trials, the displacement occurred when the corresponding hand had moved 2cm in the forward direction.

Statistical analysis

As invalid trials were repeated within each block, we averaged over 8/8/10 repetitions (experiments 1/2/3) for each condition and participant. All position and force traces were aligned temporally to the onset of the visual perturbations, or, for unperturbed trials, the point in time when the perturbation would have occurred. The measured lag of 50ms between the commanded visual change and the real visual change on the screen, due to processing time in the graphical output and the screen refresh rate, was taken into account for the analysis. To assess corrective reaching responses, we measured the lateral forces exerted into the channels (perpendicular to the reaching direction, cf. Fig. 1b,c, 3c,d). Response onsets (Fig. 1b,c,e, 2c, 3c,d) for each subject and condition were determined by performing t-tests between the force traces of all leftward and rightward corrections until at least 4 consecutive tests revealed significant differences (p < .05). The time stamp of the first of those 4 consecutive tests was taken as the response onset [S6, S7]. For all further analyses, we mirrored the force traces for which we expected a negative force response and then averaged over perturbation directions. This automatically removed any constant force profiles caused by the biomechanical properties of the arm and robot. Furthermore, we pooled the data for the conditions of no interest, namely up- and downward movements, and the right and left hand for the first two experiments. To obtain a time-averaged single measure of the early response strength, we averaged the forces around the mean onset time across no-distractor conditions and participants (from 30ms before until 70ms after response onset, Fig. 1b,c, 3c,d). For statistical assessment, we used repeated-measures ANOVA (within each experiment) and two-tailed t-tests between conditions (paired where applicable). Corrections for multiple comparisons were performed using Bonferroni corrections where necessary.
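A minimal sketch of the onset criterion described above, assuming the force traces for leftward and rightward corrections are available as trial-by-sample arrays aligned to perturbation onset. The array layout and the exact t-test variant are assumptions, not taken from the original analysis code.

```python
# Hedged sketch of the response-onset criterion: run t-tests between the
# leftward- and rightward-correction force traces sample by sample until
# at least 4 consecutive tests are significant at p < .05, then take the
# first of those 4 samples as the onset.
from typing import Optional
import numpy as np
from scipy.stats import ttest_ind

def response_onset(left: np.ndarray, right: np.ndarray) -> Optional[int]:
    """left, right: (n_trials, n_samples) lateral-force traces aligned
    to perturbation onset. Returns the onset sample index, or None."""
    consecutive = 0
    for t in range(left.shape[1]):
        _, p = ttest_ind(left[:, t], right[:, t])
        consecutive = consecutive + 1 if p < 0.05 else 0
        if consecutive == 4:
            return t - 3  # time stamp of the first of the 4 tests
    return None
```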
Supplemental Control Experiment

To exclude the possibility that the lack of an attention effect on the processing of hand information resulted from a ceiling effect, we conducted a control experiment replicating Experiment 1. Within this experiment, we tested an additional control condition, in which we suppressed the overall responsiveness to the visual displacements by introducing a distractor for each target and each cursor. The movement of the cursor distractors was implemented in the same manner as those in Experiment 3, and the target distractors were stationary at the same spatial distance as the cursor distractors. The overall response strength was indeed decreased by introducing distractors (Fig. S1). In fact, the responses to target and cursor displacements were suppressed to a similar extent, as expected for very complex scenes comparable to the 4-distractor condition in Experiment 3. The response pattern regarding the attention manipulation, however, replicated our previous results from Experiments 1 and 2: only the force response to target displacements was modulated by the locus of attention. Displacements preceded by the exogenous cue elicited significantly stronger initial responses (for statistical details see Fig. S1) than displacements that were uncued. In contrast, exogenous cueing did not modulate the responses to cursor displacements for any distractor condition. This is corroborated by the significant interaction (F(1,8)=8.554; p=.019) for displacement type (target/cursor) x attention, whereas the three-way interaction also including distractors (without/with) was not significant (F(1,8)=0.261; p=.624). These results clearly show that processing of visual hand information is not modulated by visual attention, even when there is the possibility that attention could increase the response strength in the presence of distractors, as shown in the target condition.

Figure S1. Responses to target and cursor perturbations in the control experiment. Related to "Results and Discussion". Attention had a significant effect on the response strength to target perturbations (without distractors: t(8)=4.744, p<.001; with distractors: t(8)=2.448, p=.020), but no effect on the response strength to cursor perturbations (both p>.1). Error bars denote 1 SEM between participants. * p<.05, ** p<.01.
Large Language Models for Business Process Management: Opportunities and Challenges

Large language models are deep learning models with a large number of parameters. These models have made noticeable progress on a large number of tasks, which allows them to serve as valuable and versatile tools for a diverse range of applications. Their capabilities also offer opportunities for business process management; however, these opportunities have not yet been systematically investigated. In this paper, we address this research problem by foregrounding various management tasks of the BPM lifecycle. We investigate six research directions highlighting problems that need to be addressed when using large language models, including usage guidelines for practitioners.

1 Introduction

This question needs to be approached with a clearly defined task in mind. Starting with a task focus will move the discussion away from funny or disturbing errors and biases [31] towards how the collaboration between human experts and LLM applications can be organized. Furthermore, this bears the chance to learn about specific categories of failures, which eventually will help to refine the technology in a systematic way. In this paper, we address the research challenge of how LLM applications can be integrated at different stages of business process management. To this end, we refer to the BPM lifecycle [8] and its various management tasks [13]. Our research approach is exploratory in the sense that we developed strategies for how LLM applications can be integrated in specific BPM tasks. We observe various promising usage scenarios and identify challenges for future research.

The paper is structured as follows. Section 2 discusses the essential concepts of Deep Learning (DL) and LLM in relation to Business Process Management (BPM) practices. In Section 3 we identify and discuss LLM applications within BPM and along the different BPM lifecycle phases. Based on these applications, Section 4 describes six core research directions, ranging from how LLM change the dynamics and execution of BPM projects, to data sets and benchmarks specific to BPM. Section 5 identifies challenges when using LLM. Furthermore, we provide an outlook on how LLM might evolve in the future.

2 Background

The advent of LLM applications paves the way towards a plethora of new BPM-related applications. So far, BPM has adopted natural language processing [1], artificial intelligence [7], and knowledge graphs [14] to support various application scenarios. In this section, we discuss the foundations of DL (Section 2.1) and LLMs (Section 2.2). In this way, we aim to clarify their specific capabilities.

2.1 Deep learning

Recent LLM applications build on machine learning and deep learning models, such as recurrent neural networks (RNNs) and transformer networks. Machine Learning (ML) studies algorithms that are "capable of learning to improve their performance of a task on the basis of their own previous experience" [15]. In essence, ML techniques use supervised learning, unsupervised learning, or reinforcement learning as a paradigm. Several of them are relevant for LLM. In supervised learning, the ML algorithm receives as an input a collection of pairs, where one pair consists of features representing a concept, along with a label. Importantly, this label is task-specific and encodes what the algorithm should learn about the concepts. Such labels can be, for instance, spam and no spam for a spam classifier, or bounding boxes with annotations for an image.
There are two cases of supervised learning that are relevant for LLM: few-shot and zero-shot learning. Few-shot learning is when an ML algorithm adapts to a new situation with a small amount of labelled data, and zero-shot learning is when the algorithm can do this with no labelled data at all. For example, a language model can be provided with a few input-output pairs, and the model can infer the mapping function without any parameter changes. In unsupervised learning, the algorithm only receives a feature tensor of a concept as an input, and the desired output is unknown. The algorithm then finds structural properties of the concepts present in the feature tensor. A typical application is dimensionality reduction, for instance using auto-encoders. In reinforcement learning, the algorithm receives a feature tensor of a concept as an input for which an output is produced, which is then evaluated through rewards. The algorithm then uses this feedback to improve its parameters. ChatGPT uses a form of reinforcement learning known as deep reinforcement learning to improve its language generation capabilities, in particular following "Learning to summarize from human feedback" [29]. ChatGPT is fine-tuned using a reward signal that assesses the quality of its generated responses, with the goal of maximizing the reward signal over time. The model's ability to learn from the reward signal allows it to generate increasingly relevant and coherent responses.

Deep learning (DL) is an ML method based on Neural Networks (NN). In general, DL models are NNs with many layers stacked on top of each other, which enables them to learn multiple layers of representations [10]. Importantly, these representations can be learned without supervision. Networks with only one hidden layer are called shallow. Deep networks are able to handle more complex problems compared to shallow networks. Combined with the availability of large amounts of data, improvements in how to speed up the optimization, and powerful computing resources, this enables them to be trained effectively. In the context of natural language processing, deep learning has been particularly effective in tasks such as machine translation, sentiment analysis, and named entity recognition. The ability of deep learning to learn multiple layers of representations from input data has proven to be particularly powerful for these tasks. This is because natural language processing involves dealing with sequences of words and characters, and the relationships between these sequences are often complex and multi-layered. The use of large amounts of labeled training data and powerful computational resources has enabled deep learning models to achieve state-of-the-art results in many Natural Language Processing (NLP) tasks. For example, the transformer architecture, introduced in the paper "Attention is All You Need" by Vaswani et al. [33], has become the standard architecture for many NLP tasks, including language translation and language modeling. In recent years, BPM research has integrated the capabilities of deep learning to a large extent for process prediction. For an overview, see [16]. There are also recent applications for automatic process discovery [28], for generating process models from hand-written sketches [27], and for anomaly detection [17].

2.2 Large language models

LLM are DL models trained on vast amounts of text data to perform various natural language processing tasks.
These models, which typically range from hundreds of millions to billions of parameters, are designed to capture the complexities and nuances of human language. The largest models, such as GPT-3, are capable of generating human-like text, answering questions, translating between languages, and writing computer code. The training process of these models involves processing massive amounts of text data, which is used to learn patterns and relationships between words and phrases. These models then use this information to predict the likelihood of a given token, or sequence of tokens, in a specific context. This allows them to generate coherent and contextually relevant text or perform other language-related tasks. The rise of large language models has resulted in significant advancements in the field of NLP, and they are widely used in various applications, including chatbots, virtual assistants, and text generation systems. One of their strengths is their ability to perform few-shot and zero-shot learning with prompt-based learning [11].

In 2018, Radford et al. introduced GPT-1 (also sometimes called simply GPT) in their paper on "Improving language understanding by generative pre-training" [23]. Generative Pre-trained Transformer (GPT)-1 refers to the largest model the authors have trained (110 million parameters). In the paper, the authors studied the ability of transformer networks trained in two phases for language understanding. In the first phase, they trained a transformer network to predict the next token given a set of tokens that appeared before (also called unsupervised pre-training, generative pre-training, or, in statistics, auto-regressive modeling). In the second phase, the transformer network was fine-tuned on tasks with supervised learning (also called discriminative fine-tuning). In summary, their major finding is that combining task-agnostic unsupervised learning in the first phase with supervised fine-tuning on tasks in the second phase can lead to performance gains, from 1.5% on textual entailment to 8.9% on commonsense reasoning.

In 2019, Radford et al. introduced GPT-2 in their paper "Language Models are Unsupervised Multitask Learners" [24]. Again, GPT-2 refers to the largest model they have trained. GPT-2 is hence a scaled-up version of GPT-1 in model size (1.5 billion parameters) and also in training data size. In particular, GPT-2 has roughly ten times as many parameters as GPT-1 and is trained on roughly ten times the amount of training data. They report two major findings. First, the unsupervised GPT-2 can outperform language models that are trained on task-specific data sets, without these data sets being part of the training data of GPT-2. Second, GPT-2 seems to learn tasks (for example, question answering) from unlabeled text data. In both cases, however, the performance did not reach the state of the art. In summary, their major finding is that LLMs can learn tasks without the need to train them on these tasks, given that they have sufficient unlabeled training data.

In 2020, Brown et al. introduced GPT-3 with the paper "Language Models are Few-Shot Learners" [5]. Unlike the above two cases, GPT-3 refers to all the models the authors have trained, i.e. it refers to a family of models. The largest model the authors have trained is GPT-3 175B, a model with 175 billion parameters.
In their paper, the authors showed that language models like GPT-3 can learn tasks with only a few examples, hence the title includes "few-shot learners". The authors demonstrated this ability by prompting GPT-3 with only a small number of examples on various tasks, including question answering and language translation, without any parameter updates.

In 2023, OpenAI introduced GPT-4 [21]. In contrast to previous versions of GPT, this version is a multimodal model, as it can process text and images as an input to produce text. This model is a major step forward as it improves on numerous benchmarks; however, it suffers from reliability issues, a limited context window, and, like previous GPT models, an inability to learn from experience. This release also diverges from previous GPT models in that OpenAI is secretive about "the architecture (including model size), hardware, training compute, dataset construction, training method, or similar". We only know that the model is a transformer-style model, pre-trained to predict the next token on publicly available data and undisclosed licensed data, and then fine-tuned with Reinforcement Learning from Human Feedback (RLHF). Notwithstanding this departure, the authors include in their report findings on predicting model scalability. In particular, they report on predicting the loss as a function of compute, and the mean log pass rate (a measure of how many code samples pass a unit test) as a function of compute, given a training methodology. In both cases, they find that they could predict the respective measure with high accuracy based on data generated with significantly less compute (1,000 to 10,000 times less). They also report an inverse-scaling effect for one task, meaning that the performance on the task first decreases as a function of model size and then increases after a particular model size.

In 2022, OpenAI introduced a conversational LLM called ChatGPT [19]. As a model, the first version of ChatGPT was based on GPT-3.5 and is an InstructGPT sibling. GPT-3.5 is a GPT-3.0 model trained on a training data set that contains text and software code up to the fourth quarter of 2021 [18]. InstructGPT was introduced in "Training language models to follow instructions with human feedback" [22], and is a GPT-3 model fine-tuned with supervised learning in the first step, and in the second step with reinforcement learning from human feedback [29]. ChatGPT is hence a GPT-3.5 model fine-tuned for conversational interaction with the user. In other words, the user interacts with the model via a sequence of text (the conversation) to accomplish a task. For example, we can copy and paste a text into ChatGPT's input field and ask it to summarize it. We can even be more specific: we can say that the summary should be 10 sentences long and be written in a preferred style. Importantly, if we are unhappy with the result, we can ask ChatGPT to refine its own summary without copying and pasting the text it should summarize. At the moment of this writing, ChatGPT can be used with GPT-4 as the backend LLM.

There are also other large language models. In 2022, Zhang et al. introduced Open Pre-trained Transformer (OPT) with the paper "OPT: Open Pre-trained Transformer Language Models" [34]. The main contribution of that paper is that it makes all artifacts, including the nine models, available for interested researchers. These models are GPT-3-class models in parameter size and performance. Another open LLM is the BigScience Large Open-science Open-access Multilingual Language Model (BLOOM) (176 billion parameters), which was developed in the BigScience Workshop [26].
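To make this prompt-based, few-shot interaction concrete, the following is a minimal sketch of a few-shot prompt. The `complete` function is a hypothetical stand-in for whichever LLM completion API is used; it is not part of any specific vendor's library, and the task and examples are invented for illustration.

```python
# Hedged sketch of few-shot, prompt-based learning: the task is specified
# entirely in the prompt through input-output examples, with no parameter
# updates to the model. `complete` is a hypothetical stand-in for an LLM
# completion API.
def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM API of choice here")

FEW_SHOT_PROMPT = """Classify each support ticket as 'billing' or 'technical'.

Ticket: I was charged twice this month.
Label: billing

Ticket: The app crashes when I upload a file.
Label: technical

Ticket: {ticket}
Label:"""

def classify_ticket(ticket: str) -> str:
    # The model infers the mapping from the two examples in the prompt.
    return complete(FEW_SHOT_PROMPT.format(ticket=ticket)).strip()
```

The point of the sketch is that the task specification lives entirely in the prompt: swapping the two examples changes the learned mapping without any gradient update.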
Uptake of large language models

Above, we briefly discussed LLMs, focusing particularly on the GPT model family, as these are, we hypothesize, the most popular LLMs. It is important to recognize the transition from GPT-3 to GPT-4, as it brought a massive increase on a variety of benchmarks, particularly on academic and professional exams [21]. These performance increases in NLP tasks are a result of natural language understanding and have, as we argue, massive implications for what can be automated: the automation frontier. This frontier is arguably shifted further when natural language understanding is combined with plug-in software components. In fact, at the time of writing, the company behind GPT is experimenting with ChatGPT plugins. Among the currently offered plugins are Klarna, Wolfram, the integration with vector databases for information retrieval, and an embedded code interpreter for Python [20]. This has an impact on Robotic Process Automation (RPA), more broadly on business process automation including Business Process Management Systems (BPMSs), and more generally on how work is carried out.

3 Large language models and the BPM lifecycle

In this section we identify applications of LLM within BPM. We systematically explore these applications along the phases of the BPM lifecycle, namely identification, discovery, analysis, redesign, implementation, and monitoring [8]. In this way, we complement recent efforts to build an overarching inventory of LLM applications, such as in other fields like data mining.

Identification

The BPM lifecycle starts with identification. Normally, at this stage there is not much structured process knowledge available in the company, and relevant information has to be extracted from heterogeneous internal documentation. This is exactly where LLM shine, as they can quickly scan and summarize large volumes of text, highlighting important documents or directly outputting the required information.

Identifying processes from documentation

The idea is to give the LLM all relevant documentation existing in the organization as input. This can include legal documents, job descriptions, advertisements, internal knowledge bases and handbooks. The LLM is then tasked to identify which processes are taking place in the organization. It can be further instructed to classify the input documents according to the processes they describe. Multimodal LLMs can improve the results even further, as charts, presentations and photos can also directly be used as information sources.

Process selection

LLM can further be asked to assess the strategic importance of processes based on, e.g., the number and types of documents that refer to them, as well as to extract this information from process descriptions. If given access to information systems supporting the process or other KPIs, LLM can also assess process health. Finally, assessing feasibility is also theoretically achievable as long as the necessary information, e.g. recent technology reports, is given as input as well. Based on these criteria, LLM can prioritize the processes for further improvement.

Discovery

The second stage of the BPM lifecycle is process discovery. At this stage, one or a combination of process discovery methods is selected to produce process models.
When one speaks of automated process discovery, one usually means process mining - a technique for extracting process models and other relevant data from the event logs left by information systems supporting the execution of a process. However, with LLMs, other discovery techniques can also benefit from (at least partial) automation. Process discovery from documentation Apart from process mining, documentation analysis is an established process discovery method. In this method, the process analyst uses the information found in heterogeneous sources such as internal documentation, job advertisements, handbooks, etc. Searching these documents might require a lot of time and effort. LLMs are extremely suitable for this task, as they can summarize high volumes of text in a concise and structured way. More precisely, they can output process descriptions in a desired format (plain text, numbered lists, etc.). One can also specify the level of detail, e.g., whether the output should include only the activities and events, or also resources and additional information. Finally, as some LLMs are also capable of working with structured document formats such as XML, even BPMN models can be produced automatically (see the sketch below). Process discovery from communication logs Another information source that can be used in evidence-based discovery is communication logs, i.e., e-mails and chats between process participants: internal employees but also external partners and customers. LLMs can extract patterns from these communication logs, which can be seen as various steps in a process. Then, they can similarly produce process descriptions or models. Interview chat bot Possible applications of LLMs in process discovery can also go beyond evidence-based discovery. Another common discovery method is interviews with domain experts. In these interviews, the process analyst asks questions about the process and produces a process model based on several interviews. Typically, several separate interviews with different domain experts are required to produce the first version of the process model. Afterwards, additional rounds of interviews are conducted in order to get and incorporate feedback and to perform validation. In the worst case, domain experts might have conflicting perceptions of the process; resolving such conflicts then becomes a very difficult and time-consuming task for both the process analyst and the domain experts. LLMs can solve parts of this problem by providing a chat bot interface for domain experts. In this way, the domain experts answer questions in the chat. This brings several advantages. First, the domain experts do not have to allocate lengthy time slots for interviews but can instead talk with the chat bot at their desired pace. Second, the feedback loop gets shorter, as the LLM can produce process models directly after or even during the conversation with the domain expert and also update the model, so validation can happen simultaneously with model creation. Finally, the benefits will only grow if multiple domain experts interact with the chat bot simultaneously (and independently) and the chat bot can use all of this input in the conversations. The latter option is, however, more difficult to implement. Combined process discovery All process discovery methods have their advantages and drawbacks. Often, a combination of these methods is used to achieve the best results. However, this combination is limited by the resources that are allocated for the process discovery task.
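The following sketch illustrates the automatic BPMN production mentioned above: prompt an LLM for a BPMN 2.0 XML serialization of a described process and check that the reply is at least well-formed XML. query_llm is again a hypothetical stand-in, and real BPMN schema validation would require more than this well-formedness check.

import xml.etree.ElementTree as ET

def discover_bpmn(process_description: str, query_llm):
    prompt = (
        "Produce a BPMN 2.0 process model, serialized as XML, for the "
        "following process. Include activities, events, and gateways only.\n\n"
        + process_description
    )
    reply = query_llm(prompt)  # hypothetical LLM call
    try:
        return ET.fromstring(reply)  # well-formedness check, not BPMN validation
    except ET.ParseError:
        return None  # LLM output was not valid XML; the caller may retry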
Discovery methods presented above give valuable output yet require far fewer resources. Thus, it is possible to apply more of them simultaneously for an even better result. The combination of these methods can be used in addition to traditional process mining or "manual" process discovery, which will provide the richest insights. The results of different methods may have some inconsistencies that have to be fixed, but fixing them can also be done in a (semi-)automated manner. Process model querying As LLMs seem to "understand" process models serialized as XML, they can be used to answer questions about a model. This can be very useful for quality assurance. First of all, it can be used for checking syntactic quality. While there are tools out there that can already do this, and with much less overhead, it is still convenient to have this feature in LLMs because LLMs, in contrast to other methods, may be able to check other quality aspects as well. For instance, they can also check semantic quality. Indeed, the process analyst can give the LLM both an interview transcript and a process model as input, and the LLM can check both validity and completeness based on this interview. It must be noted, of course, that this will only work under the assumption that the interview transcript itself is valid and complete. Another way of checking the semantic quality of the model would be via process simulation, e.g., to explicitly ask the LLM whether the given process model could have produced a given execution sequence, or to ask the LLM to give possible execution sequences that can be generated by the model (a classical baseline for this check is sketched after this subsection). LLMs are known to be able to simulate a Linux shell, for instance, so they might also be able to simulate a BPMS execution engine as long as enough input is provided. Finally, LLMs can also (at least to some extent) check the pragmatic quality of the models, as long as some definition of guidelines, e.g., 7PMG, is provided as input as well. It must also be noted that LLMs can not only spot these quality issues but also suggest fixes. Analysis The next stage is process analysis. At this stage, the discovered processes are analyzed to find problems and bottlenecks. While this is a cognitively demanding task, LLMs can be used to help human analysts in some regards. Issue discovery If an issue exists in a process, chances are high that somebody has already complained about it. Depending on the company, product, and process, this can be the customer, a partner, or an employee, and it can happen on different platforms, including social media, support services, or internal communication tools. LLMs are good at summarizing large volumes of unstructured text as well as finding patterns, and this capability can be used for this task. It is as easy as scraping the text from these platforms and giving it as input to the LLM with a simple prompt like "find all things customers have complained about". Issue spotting After an issue in the process is found, the next step is to spot the part of the process that creates this issue. In some cases, this can be a difficult task, especially in a complex process. The idea here is to give the LLM all process models (or only the models of the relevant process, in case it is known exactly which process causes the issue) and the spotted problems. The task of the LLM is then, by analyzing task names and descriptions, to suggest which tasks may be responsible for the issue. In advanced cases, the LLM might even be capable of suggesting some fixes.
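As a classical baseline for the execution-sequence check discussed above, the sketch below replays a trace against a tiny directly-follows model; an LLM's answer to the same question could be compared against it. The model and trace are invented for illustration.

def can_produce(trace: list[str], follows: dict[str, set[str]],
                start: str, end: str) -> bool:
    """Return True if `trace` is a walk from `start` to `end` in the model."""
    if not trace or trace[0] != start or trace[-1] != end:
        return False
    return all(b in follows.get(a, set()) for a, b in zip(trace, trace[1:]))

# Illustrative toy model of an order process.
model = {"receive order": {"check stock"},
         "check stock": {"ship order", "reject order"}}
print(can_produce(["receive order", "check stock", "ship order"],
                  model, "receive order", "ship order"))  # True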
It might be something as simple as suggesting to automate some manual task that takes too long, but it might also be a more complex process redesign suggestion, as long as the LLM is given redesign methods as additional input or is trained on redesign methods as well. Redesign The fourth phase of the BPM lifecycle is process redesign. In this stage, process improvement suggestions are developed based on the discovered issues and general process improvement methods. These suggestions are evaluated, and a to-be process model is developed at the end of this stage. Business process improvement An obvious yet very promising use case is to just ask the LLM to redesign the process. As already mentioned, simple issues arising from just one activity can be fixed by the LLM. However, it does not stop there, and is theoretically limited only by the quality of the input given to the LLM. Indeed, if it is given exhaustive information about the process (a detailed process model as well as a description of the process or its tasks) and a detailed description of some redesign method (or it is trained on some redesign methods), redesigning the business process is as simple as telling the LLM to apply the method to the process. This can, however, be improved even further. First, the description of the issues discovered in the previous phase can be given as additional input to guide the process redesign toward fixing those first. Second, the LLM can be instructed to apply different redesign methods and to give separate lists of suggestions produced by each of them, so the analyst can then select the best options (a sketch of this prompting pattern follows this subsection). Moreover, the LLM itself can be asked to choose the best suggestions and motivate its choice. It must be noted, of course, that this will only work if sufficient input is given. For instance, for inward-looking redesign methods, the methods themselves as well as detailed process information are required. For outward-looking methods, in addition to that, there should be enough outside information and/or a way for the LLM to properly communicate with the outside world. Implementation The next phase of the BPM lifecycle is process implementation. It covers the organizational and technical changes required to change the way of working of process participants as well as the IT support for the to-be process. BPMN model explanations with plain text As mentioned, LLMs can work with BPMN models serialized in XML. We have already discussed how LLMs can manipulate process models in order to increase quality as well as suggest or incorporate redesign ideas. To close the circle, LLMs can produce textual explanations of BPMN models. More interestingly, one can control the level of detail as well. So, depending on the target audience, the LLM can produce a textual overview but also detailed descriptions of the models. It can transform a model into requirements for software developers if enough details are contained in the BPMN model itself. BPMN model chatbot Building on top of the previous use case, model descriptions can also be tailored to each specific user. This way, given a model or, better, a model repository with additional documentation, the LLM can prepare specific descriptions for, e.g., the process owner, but also for individual participants, for whom all the tasks they are responsible for are described and explained in detail. Furthermore, in this use case one can add interaction between the user and the LLM. This way, the user may ask for clarification of parts they did not understand, or generally ask for more details whenever guidance is required.
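Returning to the business process improvement use case above, a minimal sketch of the redesign prompting pattern is shown below, assuming the LLM is given the model, the discovered issues, and one heuristic at a time. query_llm remains a hypothetical stand-in, and the heuristic names are illustrative.

HEURISTICS = ["task automation", "task elimination", "parallelism"]  # illustrative

def suggest_redesigns(bpmn_xml: str, issues: list[str], query_llm) -> dict[str, str]:
    suggestions = {}
    for heuristic in HEURISTICS:
        prompt = (
            f"Apply the redesign heuristic '{heuristic}' to the BPMN model "
            f"below. Prioritize fixes for these issues: {'; '.join(issues)}.\n\n"
            f"{bpmn_xml}"
        )
        suggestions[heuristic] = query_llm(prompt)  # hypothetical LLM call
    return suggestions  # one list of suggestions per heuristic, as in the text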
Process orchestrator LLMs can be accessed via APIs and can at the same time access APIs themselves, which opens a huge variety of opportunities. While the former means an LLM can be used for automated tasks and be called by the orchestrator, the latter means that it could theoretically be an orchestrator itself: given an executable process model and additional constraints as context, as well as the required instance data as input, it can theoretically execute a process by calling other APIs and assigning tasks in a more flexible way than a traditional orchestrator (a sketch of such a loop follows this section). Monitoring The last phase of the BPM lifecycle is process monitoring. At this stage, the already implemented processes are executed, and their performance is monitored. The observations collected in this phase are used for operational management and serve as input for further iterations of the lifecycle. Process dashboard chatbot Dashboards are a powerful tool that provides an overview of the most important KPIs of a process on a single screen. However, their ultimate goal is to tell the viewer whether the status of the process is good or not, and the numbers and colors are mostly an intermediary medium. LLMs can take away this intermediate step and allow the user to directly learn the status of the processes. Research directions In this section we propose research directions. We categorize the research directions into three groups. The first group studies the use of LLMs, and their applications, in practice. This includes the use within BPM projects in companies or as part of an Information System (IS) (Section 4.1), the development of usage guidelines for practitioners and researchers (Section 4.2), and also the derivation of BPM tasks (Section 4.3) and their corresponding data sets (Section 4.4). The second group studies how LLMs can be combined with existing BPM tools, and more generally BPM technologies, to improve the user experience (Section 4.5). Crucially, this group draws from findings in the first group. The third and final group develops large language models specifically for business process management, so that these models can understand the context and language of business processes and support various tasks, such as process discovery, monitoring, analysis, and optimization (Section 4.6). Again, this group builds upon the findings of the first group. The use of large language models in BPM practice The first research direction studies the use of LLMs in practice. One major question to answer is for which tasks LLMs can be used. In Section 3, we present a list of tasks for which LLMs can be used. However, this list might not be complete; in addition, some of the tasks might turn out to be of little use. Tied to this is the question of which tasks will bring value, and ultimately which will bring the most value, for an organization. The next big question is the relation between a task and the model properties needed to achieve a pre-defined value. One question here is which tasks can be achieved with already existing models. Another question to study is whether we always need the largest, and hence most accurate, model for each task. We hypothesize that this might not be the case. Finally, and most importantly, the next big question to answer is how LLMs will change how work is carried out within BPM projects, and within processes that are actively managed. We, for example, hypothesize that conversational LLMs might take the spot of the duck in the famous rubber duck approach 5 .
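Returning to the process orchestrator idea above, such a loop could look roughly like the sketch below, where the LLM picks the next task and the surrounding code dispatches it. query_llm and dispatch_task are hypothetical stand-ins, and the sketch assumes the LLM reliably returns well-formed JSON, which would need guarding in practice.

import json

def orchestrate(process_model: str, instance_data: dict, query_llm, dispatch_task):
    history: list[dict] = []
    while True:
        prompt = (
            "You orchestrate the process below. Given the instance data and "
            "the completed tasks, reply with JSON {\"task\": ..., "
            "\"assignee\": ...} or {\"task\": \"DONE\"}.\n\n"
            f"Model:\n{process_model}\n\nData: {json.dumps(instance_data)}\n"
            f"Completed: {json.dumps(history)}"
        )
        decision = json.loads(query_llm(prompt))  # assumes a well-formed JSON reply
        if decision["task"] == "DONE":
            return history
        result = dispatch_task(decision["task"], decision.get("assignee"))
        history.append({"task": decision["task"], "result": result})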
This question is a socio-technical systems question, and we hence strongly believe that the BPM community, and the information systems community more broadly, is especially well equipped to contribute to it. Usage guidelines for researchers and practitioners The second research direction builds usage guidelines for BPM researchers and practitioners. One question such guidelines have to answer is which LLM to suggest, given an organizational context, the lifecycle phase, and the process context of a task, in order to achieve an expected value. In addition, such guidelines systematically collect best practices for creating prompts. For example, for the BPM lifecycle phases process implementation and monitoring and controlling, a company might consider using an LLM within a managed process. Let us assume this company is a bank and wants to automate the task of replying to customer inquiries with an LLM. Then the guideline proposes, for process implementation, a specific LLM, including the number of parameters it has, and gives examples of how to create a prompt template, how to fill the template with customer background information, and finally how to integrate the customer inquiry into the prompt template. For process monitoring and controlling, the guidelines might propose a different model for analyzing different inquiry clusters, as the lifecycle phase context is different. As an example, consider here that the LLM first categorizes each inquiry into a positive or negative sentiment, and then lists the top five inquiry reasons for both (both prompt patterns are sketched at the end of this subsection). This research direction builds upon the first research direction, as the first direction, among other things, determines the tasks for which LLMs can be used in principle. 4.3 Creation, release, and maintenance of task variants specific to BPM This research direction builds and maintains two different task lists. The first list maps general NLP tasks to tasks within BPM. As an example, consider the general NLP task of text summarization. Within BPM, text summarization can relate to summarizing a set of process descriptions or task descriptions. We can think of this list as a one-to-many mapping between NLP tasks on the one hand and BPM tasks on the other. The second list enumerates tasks that are unique to BPM. This research direction uses the findings from the directions presented in Section 4.1 and Section 4.2. Creation, release, and maintenance of data sets and benchmarks Public data sets and benchmarks are crucial for the progress of LLMs in research, as they allow researchers to measure progress. In addition, they are also important for practitioners, as they define data set properties (such as meta-information) that practitioners are likely to need themselves when they fine-tune a model. As a result, data sets and benchmarks need to be properly aligned with the automation needs of BPM. Blagec et al. argue similarly, but for the clinical profession [3]. In their study, they analyzed 450 NLP data sets and found that "AI benchmarks of direct clinical relevance are scarce and fail to cover most work activities that clinicians want to see addressed". A research direction for the BPM community is hence to do the same for BPM. One question worth studying is whether existing NLP data sets and benchmarks are of relevance to BPM, for example, whether they cover the activities of BPM researchers and practitioners. This research direction builds upon the research direction in Section 4.3. LLM and BPM artifacts This research direction studies the interplay of LLMs, BPM artifacts, and BPM tasks.
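Returning to the bank example in the usage-guidelines direction above, the sketch below shows one possible prompt template for the implementation phase and a second prompt for the monitoring phase; all field names and wording are invented for illustration.

REPLY_TEMPLATE = (
    "You are a bank assistant. Customer profile: {profile}.\n"
    "Write a polite, factually careful reply to this inquiry:\n{inquiry}"
)

def build_reply_prompt(profile: str, inquiry: str) -> str:
    # Implementation phase: fill the template with background and the inquiry.
    return REPLY_TEMPLATE.format(profile=profile, inquiry=inquiry)

def build_monitoring_prompt(inquiries: list[str]) -> str:
    # Monitoring phase: tag sentiment, then summarize the top inquiry reasons.
    joined = "\n- ".join(inquiries)
    return (
        "Classify each inquiry below as positive or negative sentiment, then "
        "list the top five inquiry reasons per class:\n- " + joined
    )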
The goal is to understand which artifacts are necessary for LLMs, and their multimodal successors, to create useful outputs. It can hence be understood as a special case of prompt engineering, which we might call multimodal prompt engineering for BPM. This is an important research direction, as the output quality of an LLM depends heavily on the quality and quantity of the context it is given. In other words, the more context, and the higher the quality of each piece of context, the higher the output quality of the LLM. For this reason, we believe that it should be considered its own research direction. As an example, consider again the customer inquiry process from above. In this case, we can imagine that the context of the LLM depends on the inquiry. In one case, the customer might include an image in the inquiry. Or think of the redesign phase of the inquiry process. During this phase, artifacts are created, for example drawings of processes on a board, comments on these processes in a word processor, and remarks on data availability and access in an audio file. This information might be useful when we ask a - possibly different - LLM why a customer inquiry on current special offers cannot yet be answered. The reason here might be that a central system which stores special offers does not yet exist. This research direction builds upon the directions presented in Section 4.1 and Section 4.2. Development and release of Large Language Models for Business Process Management This research direction studies how LLMs are built for BPM tasks; all previously discussed research directions are the foundation for this direction. The goal of this research direction is to build LLMs that are attuned to the specific challenges and requirements of BPM, compared to general-purpose language models. This includes specialized models in the sense of being built exclusively for BPM, as well as general-purpose language models that are fine-tuned on the BPM domain. An important aspect of this direction is to open source the created LLMs, as was done for OPT [34]. This is important for research, as researchers can use these models in their studies, and for practice, as companies can use these models free of charge for their use cases. Discussion In this section we discuss the challenges of LLMs, the power of combination and inflated expectations, and end with an outlook and future work. Challenges The use of LLMs entails opportunities and challenges. For example, they can help to understand difficult research, but they also carry over deficiencies (including factual errors) in the training data set to the texts they generate [32]. In a systematic study of these errors, Borji analyzes errors of ChatGPT and categorizes them - the author further outlines and discusses the risks, limitations, and societal implications of such models 6 [4]. The failure categories identified by the author include reasoning, factual, math, and coding errors. A similar study of deficiencies was done in [2], but those authors focus on LLMs in general. A news feature in Nature discusses these issues and the risks of using LLMs [9]. One consequence for education might be that essays as an assignment should be reconsidered [30]. The power of combination and managing expectations The major innovation of ChatGPT was not the introduction of a new technology, but the combination of already existing ones and an easy-to-use user interface [12]. This effect of combination extends beyond LLM, NLP, or ML innovations.
For example, OpenAI is currently experimenting with integrating ChatGPT with software plugins, which might even in the short run lead to a software marketplace for their platform 7 . For this reason, we suggest and advocate in our research directions above to study and build these combinations with existing BPM technologies, instead of solely focusing on developing new ones. In this paper, we have so far made the case for the opportunities LLMs create, briefly discussed their shortcomings, and pointed out how important it is to combine technologies within a field and across field boundaries. However, we also stress here how important it is to manage the - maybe even overshooting - expectations driven by these very recent developments. For example, speculation about the possible capabilities of the successor of GPT-3 was driven up by the hype to a point where "people are begging to be disappointed" [12]. Outlook and future work LLMs are used, and will be used, in commercial products with huge numbers of users. We speculate that this will have an effect on research, as funding agencies might increase the number of grants for this research field. An ever-increasing user base that interacts with LLMs (directly or indirectly) is therefore, in our view, inevitable. For future work, we plan to develop research directions that are beyond the scope of this paper. We expect that LLMs will have an effect on how work is carried out (see Section 2.3 and Section 4.1). But this may have far greater impacts than what we cover here, for example on the BPM capabilities, which are strategy, governance, information technology, people, and culture [25]. Conclusion In this paper we present six research directions for studying and building LLMs for BPM. We use the BPM lifecycle to propose applications of LLMs and to showcase the impact of these models.
2023-04-11T01:27:03.231Z
2023-04-09T00:00:00.000
{ "year": 2023, "sha1": "fa302c87b1a886f18a62070d744b42f1a6df00ed", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "fa302c87b1a886f18a62070d744b42f1a6df00ed", "s2fieldsofstudy": [ "Computer Science", "Business" ], "extfieldsofstudy": [ "Computer Science" ] }
13174163
pes2o/s2orc
v3-fos-license
Background The bacterial endotoxin, lipopolysaccharide (LPS), is a well-characterized inflammatory factor found in the cell wall of Gram-negative bacteria. In this investigation, we studied the cytotoxic interaction between 2-chloroethyl ethyl sulfide (CEES or ClCH2CH2SCH2CH3) and LPS using murine RAW264.7 macrophages. CEES is a sulfur vesicating agent and an analog of 2,2'-dichlorodiethyl sulfide (sulfur mustard). LPS is a ubiquitous natural agent found in the environment. The ability of LPS and other inflammatory agents (such as TNF-alpha and IL-1beta) to modulate the toxicity of CEES is likely to be an important factor in the design of effective treatments. Results RAW 264.7 macrophages stimulated with LPS were found to be more susceptible to the cytotoxic effect of CEES than unstimulated macrophages. Very low levels of LPS (20 ng/ml) dramatically enhanced the toxicity of CEES at concentrations greater than 400 μM. The cytotoxic interaction between LPS and CEES reached a maximum 12 hours after exposure. In addition, we found that tumor necrosis factor-alpha (TNF-alpha) and interleukin-1-beta (IL-1-beta) as well as phorbol myristate acetate (PMA) also enhanced the cytotoxic effects of CEES, but to a lesser extent than LPS. Conclusion Our in vitro results suggest the possibility that LPS and inflammatory cytokines could enhance the toxicity of sulfur mustard. Since LPS is a ubiquitous agent in the natural environment, its presence is likely to be an important variable influencing the cytotoxicity of sulfur mustard. We have initiated further experiments to determine the molecular mechanism whereby the inflammatory process influences sulfur mustard cytotoxicity. Background In this investigation, we explored the potential cytotoxic interaction between LPS and CEES using a murine macrophage cell line (RAW264.7). CEES is a monofunctional analog of sulfur mustard (bis-2-(chloroethyl)sulfide), which is a bifunctional vesicant and a chemical warfare agent. Both bis-2-(chloroethyl)sulfide and CEES are known to provoke acute inflammatory responses in skin [1][2][3]. The resulting skin blistering is thought to involve the stimulation of specific protease(s) [4]. Apoptosis is now considered a possible molecular mechanism whereby CEES induces cytotoxicity [5,6]. LPS is a major component of the cell wall of Gram-negative bacteria and is known to trigger a variety of inflammatory reactions in macrophages and other cells having CD14 receptors [7,8]. In particular, LPS is known to stimulate macrophage secretion of nitric oxide [9] and inflammatory cytokines such as tumor necrosis factor-alpha (TNF-alpha) and interleukin-1-beta (IL-1-beta) [10]. For this reason, we also determined if TNF-alpha or IL-1-beta were capable of enhancing the cytotoxic effects of CEES. LPS stimulation of macrophages is known to involve the activation of protein phosphorylation by kinases as well as the activation of nuclear transcription factors such as NF-kappaB [10][11][12][13]. The activation of protein kinase C (PKC) by diacylglycerol is also a key event in LPS macrophage activation [8]. In vitro experiments have shown that the secretion of TNF-alpha and IL-1-beta by LPS-stimulated monocytes is dependent upon PKC activation [13,14]. In this study, we also determined if phorbol myristate acetate (PMA) activation of PKC also enhanced CEES toxicity. Evidence suggests that LPS [15,16] as well as TNF-alpha [17] stimulate the production of free radicals by macrophages.
Our long-term goal is to understand the molecular mechanism of sulfur mustard toxicity and to determine if free radical production plays an important role in this toxicity. In our experiments, cytotoxicity was measured by a decrease in optical density in the MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay or by an increase in the fluorescence of propidium iodide (PI). The MTT assay is based on the reduction of MTT by actively growing cells to produce a blue formazan product with absorbance at 575 nm. A low MTT absorbance indicates cell death. The PI assay differentiates between live and dead cells. Cells that have lost membrane integrity cannot exclude PI, which emits a red fluorescence after binding to cellular DNA or double-stranded RNA. A high PI% indicates cell death. Cytotoxic interaction between LPS and CEES as measured by the MTT assay RAW 264.7 macrophages were incubated with LPS (100 ng/ml), CEES (500 µM), or both agents for 24 hours, and cell viability was then measured by the MTT assay. As shown in Figure 1, we found that LPS-stimulated RAW264.7 macrophages were markedly more susceptible (p < 0.05) to CEES toxicity (24 hr) than resting macrophages, as indicated by the dramatic drop in dehydrogenase activity. In the absence of LPS, CEES at a level of 500 µM did not significantly affect cell viability as measured by the MTT assay. The characteristics of CEES toxicity to LPS-stimulated RAW 264.7 macrophages were further characterized by varying the concentrations of both LPS and CEES. The MTT data in Figure 2 show that very low levels (25 ng/ml) of LPS dramatically enhanced the toxicity of CEES (measured after 24 hr) to macrophages at concentrations > 300 µM. In general, LPS levels beyond 25 ng/ml did not further enhance the toxicity of CEES. However, LPS at a level of 100 ng/ml significantly increased the cytotoxicity of 300 µM CEES compared to LPS-activated RAW264.7 cells not treated with CEES. Statistical analyses indicate that cells treated with 100 µM CEES were not significantly different from control cells (not treated with CEES) in the presence of 0, 25, or 50 ng/ml of LPS. However, at an LPS level of 100 ng/ml, cells treated with 100 µM CEES showed a greater MTT optical density than control cells. This might be due to a slight mitogenic response to low levels of CEES. Time course for the cytotoxic interaction between LPS and CEES as measured by the propidium iodide (PI) assay In order to ensure that our results could be reproduced with an alternative assay for cell viability, we also utilized the propidium iodide (PI) assay. In this assay, the percent fluorescence increase over the control cells, i.e., 100 × (PI_treatment - PI_control)/PI_control, indicates cytotoxicity. For each experiment, the value for the control cells is zero and is not, therefore, shown. Figure 3 indicates that the PI assay provides the same qualitative results as the MTT assay, i.e., a synergistic toxic interaction between CEES (500 µM) and LPS (20 ng/ml). Moreover, the cytotoxic interaction between LPS and CEES was noticeable after 6 hours (p < 0.05) and reached a maximum 12 hours after exposure. After 24 hours, CEES (500 µM) was significantly toxic to RAW 264.7 macrophages even in the absence of LPS. LPS alone was never significantly different from cells with no treatment, which always have a zero percent PI% increase. [Figure 1 legend: Means not sharing a common letter are significantly different (p < 0.05). Cytotoxicity was measured, after 24 hours, by the MTT assay described in the Methods.]
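As a worked instance of the PI(%) formula above, the following Python sketch computes the percent increase; the fluorescence values are invented, chosen so the result reproduces the 120.8% figure reported later in the text.

def pi_percent_increase(pi_treatment: float, pi_control: float) -> float:
    # PI(%) = 100 * (PI_treatment - PI_control) / PI_control
    return 100.0 * (pi_treatment - pi_control) / pi_control

print(pi_percent_increase(pi_treatment=2650.0, pi_control=1200.0))  # ~120.8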
TNF-alpha and IL-1beta also enhance the cytotoxic effect of CEES LPS from Gram-negative bacteria binds to CD14 and initiates a complex signal transduction pathway involving the Toll receptor family, which eventually results in the synthesis of pro-inflammatory cytokines such as TNF-alpha and IL-1beta. We, therefore, investigated the potential roles of TNF-alpha and IL-1beta in enhancing CEES cytotoxicity. Table 1 shows that IL-1beta (50 ng/ml) and CEES (500 µM) are significantly more cytotoxic (after 24 hours) when administered together than when administered alone. In this experiment, LPS (25 ng/ml) and CEES (500 µM) combined gave PI(%) and MTT values of 120.8 ± 2.3 and 0.25 ± 0.07, respectively. These MTT data suggest that LPS, at 25 ng/ml, is able to enhance the toxicity of CEES (an MTT value of 0.25 ± 0.07) to a greater extent (p < 0.05) than IL-1beta at 50 ng/ml (an MTT value of 0.74 ± 0.07). The ternary mixture of LPS (25 ng/ml), IL-1beta (50 ng/ml), and CEES (500 µM) was not found to be more cytotoxic than LPS (25 ng/ml) and CEES (500 µM). Analogous experiments with TNF-alpha (50 ng/ml) are summarized in Table 2. This indicates that TNF-alpha at 50 ng/ml is not as potent as LPS (at 25 ng/ml) in enhancing the cytotoxicity of CEES at 500 µM. TNF-alpha combined with LPS (in the absence of CEES) was found not to exert any cytotoxic effect on the RAW264.7 macrophages. Protein kinase C is a non-receptor serine/threonine kinase that is maximally active in the presence of diacylglycerol and calcium ions. In vitro experiments by Coffey et al. [14] have shown that TNF-alpha and IL-1beta secretion and mRNA accumulation by monocytes following LPS treatment are dependent on PKC activity. Phorbol 12-myristate 13-acetate (PMA) is a specific activator of group A and group B protein kinase C. We wanted to determine, therefore, if protein kinase C activation by PMA would enhance the toxicity of CEES. As shown in Table 3, the combination of PMA (50 ng/ml) and CEES (500 µM) was more cytotoxic (after 24 hours) toward the RAW264.7 macrophages than PMA or CEES alone. However, as was the case with IL-1beta and TNF-alpha, PMA was not as effective as LPS in enhancing CEES toxicity. Discussion We have found that inflammatory agents such as LPS, TNF-alpha, IL-1beta, and PMA amplify the cytotoxicity of CEES to RAW264.7 macrophages. There is evidence suggesting that CEES can modulate levels of inflammatory cytokines, but this information is controversial and inconsistent. [Table 1 and 2 footnotes: Values are means ± SD; means in a given row not sharing the same superscript letter are significantly different (p < 0.05). PI(%) is the percent of propidium iodide fluorescence over that of the control cells. MTT levels are given as 575 nm absorbance units. LPS was used at 25 ng/ml, TNF-α at 50 ng/ml, and CEES at 500 µM.] Cultured monocytes exposed to CEES show a transient increase in TNF-alpha [18]. Similarly, cultured normal human keratinocytes exposed to CEES show a transient increase in both TNF-alpha and IL-1beta [19,20]. In contrast to the results of Arroyo et al. [18][19][20], Ricketts et al. [21] found that IL-1beta or TNF-alpha protein did not increase in sulfur mustard-exposed mouse skin. Blaha et al.
[1], using an in vitro human skin model, found that CEES treatment resulted in a decreased level of IL-1alpha. Sabourin et al. [3] also addressed this issue and studied the in vivo temporal sequence of inflammatory cytokine gene expression in sulfur mustard-exposed mouse skin. These investigators found an increase in IL-1beta mRNA levels after 3 hours that dramatically increased between 6 and 24 hours post exposure [3]. Moreover, immunohistochemical studies showed an increase in tissue levels of IL-1beta [3]. The in vitro results reported here support a role for inflammatory cytokines in the mechanism and kinetics of CEES toxicity. Our unique findings with regard to LPS are significant because they demonstrate that this bacterial endotoxin enhances CEES toxicity even when present at extremely low levels, i.e., nanograms of LPS per ml. LPS is ubiquitous and is present in serum, tap water, and dust. Military and civilian personnel would, indeed, always have some degree of exposure to environmental LPS, which could increase the toxicity of sulfur mustard. In addition, there is always the possibility of purposeful LPS exposure. Our primary future goals are to understand the general mechanism for the enhanced toxicity of CEES in the presence of inflammatory agents and to use this information to develop effective countermeasures. Our experiments with PMA (Table 3) suggest that the activation of protein kinase C may play a key role in the molecular mechanism whereby LPS enhances the cytotoxicity of CEES. In experiments that will be reported elsewhere, we have found that antioxidants such as RRR-alpha-tocopherol and N-acetylcysteine are effective in reducing the cytotoxic effects of CEES on LPS-stimulated macrophages. We have also initiated experiments to distinguish the roles of apoptosis and necrosis in the observations reported here. Conclusions Our results suggest that LPS dramatically enhances the toxicity of sulfur mustard. Since LPS is a ubiquitous agent in the natural environment, its presence is likely to be an important variable influencing the cytotoxicity of sulfur mustard. LPS is known to stimulate the production of inflammatory cytokines such as TNF-alpha and IL-1beta in human monocytes [10]. We, therefore, also determined if these cytokines influenced the cytotoxicity of CEES. The data in Tables 1 and 2 demonstrate that both these cytokines enhance the cytotoxicity of CEES as measured by either the MTT assay or the PI assay. Inhibition of protein kinase C is known to block the secretion of TNF-alpha and IL-1beta [10], whereas stimulation of protein kinase C promotes TNF-alpha and IL-1beta production [22]. The data reported in Table 3 show that stimulation of protein kinase C activity by PMA also enhances the cytotoxicity of CEES. Collectively, the data reported here provide the basis for future experiments attempting to determine the signal transduction mechanisms whereby inflammatory agents enhance sulfur mustard cytotoxicity. MTT assay for determination of cell viability This assay for cell viability is based on the reduction of 3-(4,5-dimethylthiazolyl-2)-2,5-diphenyltetrazolium bromide (MTT) by mitochondrial dehydrogenase in viable cells to produce a purple formazan product. Schweitzer et al. [23] have shown that this assay provides linearity between optical density and cell number. This assay was performed with a slight modification of the method described by Wasserman et al. [24,25].
Briefly, at the end of each experiment, cells cultured in 96-well plates (with 100 µl of medium per well) were incubated with MTT (20 µl of 5 µg/ml per well) at 37°C for 4 hours. The formazan product was solubilized by addition of 100 µl of dimethyl sulfoxide (DMSO) and 100 µl of 10% SDS (in 0.01 M HCl) for 16 hours at 37°C. The dehydrogenase activity was expressed as the absorbance (read with a Molecular Devices microplate reader) of the formazan product at 575 nm. Dye exclusion assay (PI assay) This assay uses the propidium iodide (PI) dye to differentiate live and dead cells [26][27][28]. Cells that have lost membrane integrity cannot exclude PI, which emits a red fluorescence after binding to cellular DNA or double-stranded RNA. After each experiment, cells were washed with PBS and incubated with 200 µl of RPMI medium with 5 µg/ml PI for five minutes at room temperature. PI fluorescence was measured using a Fluostar Galaxy microplate reader with an excitation wavelength of 485 nm and an emission wavelength of 650 nm. We expressed the results as the percent increase in the fluorescence of treated cells over the fluorescence of control cells (no treatments), i.e., 100 × (PI_treatment - PI_control)/PI_control. Statistical analyses Means among treatments were compared by one-way ANOVA followed by the Scheffe test with a significance level of 0.05. In both the Tables and Figures, means not sharing a common letter are significantly different (p < 0.05).
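A minimal Python sketch of the statistical comparison described above, using SciPy's one-way ANOVA (the paper additionally applies a Scheffe post-hoc test, which is not shown here); the absorbance readings are invented for illustration.

from scipy.stats import f_oneway

# Hypothetical MTT absorbance readings (575 nm) for three treatment groups.
control = [1.65, 1.70, 1.68]
cees = [1.55, 1.60, 1.58]
lps_plus_cees = [0.25, 0.30, 0.28]

f_stat, p_value = f_oneway(control, cees, lps_plus_cees)
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")  # significant if p < 0.05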
2014-10-01T00:00:00.000Z
2002-01-01T00:00:00.000
{ "year": 2002, "sha1": "69dd77f3ba88dede819e9cc1b0c24affa7186f92", "oa_license": "CCBY", "oa_url": "https://bmccellbiol.biomedcentral.com/track/pdf/10.1186/1471-2121-4-1", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "69dd77f3ba88dede819e9cc1b0c24affa7186f92", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
17802216
pes2o/s2orc
v3-fos-license
Down regulation of miR-124 in both Werner syndrome DNA helicase mutant mice and mutant Caenorhabditis elegans wrn-1 reveals the importance of this microRNA in accelerated aging Small non-coding microRNAs are believed to be involved in the mechanism of aging, but nothing is known about the impact of microRNAs in the progeroid disorder Werner syndrome (WS). WS is a premature aging disorder caused by mutations in a RecQ-like DNA helicase. Mice lacking the helicase domain of the WRN ortholog exhibit many phenotypic features of WS, including a pro-oxidant status and a shorter mean life span. Caenorhabditis elegans (C. elegans) with a nonfunctional wrn-1 DNA helicase also exhibit a shorter life span. Thus, both models are relevant to study the expression of microRNAs involved in WS. In this study, we show that miR-124 expression is lost in the liver of Wrn helicase mutant mice. Interestingly, the expression of this conserved miR-124 in whole wrn-1 mutant worms is also significantly reduced. The loss of mir-124 in C. elegans increases reactive oxygen species formation and accumulation of the aging marker lipofuscin, reduces whole body ATP levels, and results in a reduction in life span. Finally, supplementation of vitamin C normalizes the median life span of wrn-1 and mir-124 mutant worms. These results suggest that biological pathways involving WRN and miR-124 are conserved in the aging process across different species. INTRODUCTION The exonuclease and the DNA helicase domains homologous to the human WRN protein are encoded by two different genes in C. elegans [17]. The C. elegans wrn-1 gene codes for an ATP-dependent 3'-5' DNA helicase capable of unwinding a variety of DNA structures [18]. Notably, it has been shown that RNAi knockdown of the C. elegans wrn-1 gene shortens the life span, increases sensitivity to DNA damage, and accelerates aging phenotypes [17]. Recent discoveries in the fields of development, cancer, and aging have indicated that small non-coding RNAs play a major role in alterations associated with these biological processes. An important class of non-coding RNAs that has been studied in the context of C. elegans aging is the microRNAs (miRNAs) [19][20][21][22][23]. miRNAs are short RNAs (~22 nt) that regulate post-transcriptional gene expression via base pairing to partially complementary sites mainly found in the 3' UTRs of messenger RNAs (mRNAs). miRNAs down-regulate protein expression by inhibiting mRNA translation and/or mRNA stability [20]. Individual miRNAs can modulate multiple mRNA targets, and individual mRNAs can be regulated by multiple, distinct miRNAs [20]. Very few studies using rodent tissues have been performed to elucidate the role of miRNAs in aging [24][25][26][27], often with contradictory results [28,29]. In this study, we report the differential expression of several miRNAs in the livers of young (three months old) Wrn Δhel/Δhel mice compared to age-matched wild type animals. Among them, one miRNA conserved in animals (miR-124) was down-regulated in both the liver of Wrn Δhel/Δhel mice and in whole wrn-1 C. elegans mutants. Deletion of mir-124 in C. elegans resulted in a decrease in life span, an increase in reactive oxygen species (ROS) production, a decrease in ATP levels, and an increase in the aging marker lipofuscin. All these phenotypes could be reversed in mir-124 mutant strains by vitamin C treatment. These results implicate a role for the conserved miR-124 in aging in C. elegans.
RESULTS The liver of Wrn Δhel/Δhel mice shows differential expression of miR-375 and miR-124 We have previously shown that in Wrn Δhel/Δhel mice, the liver is the first tissue to show morphological changes compared to age-matched wild type animals [16,30]. Interestingly, the liver undergoes substantial modifications in structure and function in old age, including alterations in liver mass, blood flow, and sinusoidal cell morphology [31]. These changes are associated with significant impairment of many hepatic metabolic and detoxification activities, with implications for systemic aging and age-related disease. We therefore focused our study on the hepatic tissue, as the liver plays a pivotal role in whole body homeostasis through the maintenance of nutrient, drug, hormone, and metabolic processes. Total RNA from the liver of two Wrn Δhel/Δhel and two wild type mice at three months of age was extracted to analyze the expression of 755 different miRNAs using the TaqMan-based array. Although no gross hepatic morphological difference could be observed between Wrn Δhel/Δhel mice and wild type mice at three months of age, the liver of Wrn Δhel/Δhel mice exhibited changes in the expression of a number of miRNAs compared to wild type mice. Supplementary Tables S1 and S2 provide the raw data on all miRNAs. Table 1 summarizes the list of differentially expressed miRNAs in the liver of Wrn Δhel/Δhel mice compared to wild type mice. We next validated the differential expression of the seven miRNAs listed in Table 1 using the liver tissues of four different Wrn Δhel/Δhel mutant and four wild type mice (three months of age). Of the seven miRNAs tested, only miR-375 and miR-124 showed significant differential expression in Wrn Δhel/Δhel mutants compared to wild type animals (Figure 1A and Supplementary Figure S1). miR-375 was up-regulated more than threefold and miR-124 was down-regulated tenfold in the liver of Wrn Δhel/Δhel mutant mice compared to the liver of wild type animals (Figure 1A). To determine whether miR-375 and miR-124 were also differentially expressed during aging, quantitative RT-PCR was performed on the liver tissues of four young (three months) and four old (21 months) wild type mice. [Figure 1 legend: Expression levels of miRNAs in the liver of Wrn Δhel/Δhel mice compared to wild type mice and in the whole body of wild type and wrn-1(gk99) worms. (A) Total RNA from four mice (at three months of age) of the indicated genotype was used for the quantitative RT-PCR analyses; the levels of the indicated miRNAs in the Wrn Δhel/Δhel mice are relative to the wild type (WT) animals. (B) Expression levels of miR-375 and miR-124 in the liver of four young (three months old) and four old (21 months old) wild type animals; the levels in the old wild type mice are relative to the young wild type animals. (C) Expression level of mir-124 in wild type and wrn-1(gk99) strains; twenty-five 7-day old adult worms (post-larval L4 stage) of each strain were sorted and collected for total RNA extraction, and mir-124 was quantified by quantitative RT-PCR (TaqMan assay) and compared with the levels found in wild type animals. (D) mir-124 expression levels in young and older adult wild type worms. All data were normalized by the quantification of the small nucleolar RNA (sn2841). The error bars represent the 95% confidence interval of three independent experiments. The P-values (unpaired Student's t-test) are indicated above each graph.]
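A minimal sketch of the relative quantification behind such TaqMan qRT-PCR comparisons, assuming the standard 2^-ΔΔCt method with an endogenous control (U6 for the mouse arrays, sn2841 for the worms); the Ct values below are invented, chosen merely to reproduce a ten-fold down-regulation.

def fold_change(ct_target_case: float, ct_ref_case: float,
                ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    d_ct_case = ct_target_case - ct_ref_case   # normalize sample to control gene
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl   # normalize reference sample
    dd_ct = d_ct_case - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# e.g., a target miRNA in mutant vs. wild type liver (illustrative Ct values)
print(fold_change(27.3, 20.0, 24.0, 20.0))  # ~0.10, i.e. roughly ten-fold down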
miR-124 was significantly decreased (by five-fold) in the livers of old wild type mice compared to young wild type mice (Figure 1B). In contrast, there was a non-significant increase in the miR-375 level in the liver of old wild type animals. These results indicate that the expression of miR-124 correlates inversely with age in the liver of mice (Figure 1B). The impact of the Wrn helicase on miR-124 expression is conserved in C. elegans We next determined whether the observed alteration of the miRNAs in mice could be a global phenomenon during aging by studying these miRNAs in the nematode C. elegans. A search in the miRNA database miRBase (www.mirbase.org) revealed that miR-124 is conserved in the short-lived C. elegans, but miR-375 is not. We first determined if the modulation of miR-124 is also conserved in C. elegans animals carrying a loss-of-function deletion of the wrn-1 gene (wrn-1(gk99) allele), which encodes the human WRN helicase ortholog [32]. It has been reported that depletion of the C. elegans wrn-1 gene product by RNAi reduces the life span of this animal [17]. Consistent with these findings, we found that the wrn-1(gk99) mutant animals had a reduced life span when compared to the wild type (N2) animals (Figure 2A). The median life span of the wrn-1(gk99) animals was 6.8 days compared to 9.0 days for the wild type strain (a 32% decrease; log-rank test: P-value = 1.4 × 10^-11). Interestingly, we observed that the expression of the conserved miR-124 is significantly reduced by 20% in the wrn-1(gk99) animals (unpaired Student's t-test: P = 0.048) compared to the wild type strain (Figure 1C). Furthermore, we found that mir-124 expression is also reduced in older wild type worms (seven days after the L4 stage) compared to young worms (at the L4 larval developmental stage) (Figure 1D). These results indicate that miR-124 expression is decreased in both Mus musculus and C. elegans during aging and in animals with a mutation in the WRN helicase ortholog. The loss of mir-124 causes a reduction of life span in C. elegans To assess the impact of the loss of miR-124 on aging, we measured the life span of worms carrying a deletion of the mir-124 gene (mir-124(n4255)) [33]. As shown in Figure 2B, the median life span of mir-124(n4255) worms was significantly decreased by 15% (7.7 days versus 9.0 days) compared to the wild type animals (log-rank test: P = 5.4 × 10^-9). Notably, animals carrying deletions of both the mir-124 and wrn-1 genes (wrn-1;mir-124 animals) displayed a more severe decrease in their life span (a 48% decrease compared to wild type; log-rank test: P = 5.6 × 10^-13) than the single loss of either gene (Figure 2C). These results indicate that both genes are important for the life span of C. elegans. The loss of wrn-1 and mir-124 leads to an increase in reactive oxygen species (ROS) generation and a reduction in ATP levels We have reported that Wrn Δhel/Δhel mice exhibit increased ROS and decreased ATP levels in different tissues compared to age-matched wild type animals [16,34]. To determine whether the loss of wrn-1 also affects ROS levels in C. elegans, we measured ROS levels in whole wrn-1(gk99) worms with dichlorofluorescein (DCFA) staining as described previously [34]. Although not significant, the wrn-1(gk99) mutant worms exhibited an 8% increase in overall ROS levels compared to the wild type strain (Figure 3A). The loss of mir-124, in turn, led to a significant increase in overall ROS levels (a 16% increase; P = 0.0442).
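The life span comparisons above rely on log-rank tests; a minimal sketch with the lifelines package is shown below. The survival times are invented for illustration, and the call assumes all deaths were observed (no censoring).

from lifelines.statistics import logrank_test

# Hypothetical per-worm survival times (days) for two strains.
wild_type_days = [8, 9, 9, 10, 9, 11, 8, 10]
mutant_days = [6, 7, 7, 8, 6, 7, 8, 7]

result = logrank_test(wild_type_days, mutant_days)  # all events observed
print(f"log-rank p = {result.p_value:.4f}")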
Interestingly, the loss of both wrn-1 and mir-124 resulted in a 40% increase in whole body ROS levels compared to wild type worms (P = 0.0008) (Figure 3A). We next measured the impact of the loss of wrn-1 and/or mir-124 on ATP levels. The wrn-1(gk99) animals exhibited a 35% decrease in overall ATP levels compared to the wild type strain (Figure 3B), while the mir-124(n4255) animals exhibited a 52% decrease in overall ATP levels compared to the wild type strain (P = 0.0409). Finally, the double mutant wrn-1;mir-124 worms showed a 63% decrease in whole body ATP levels compared to wild type animals (P = 0.0137). These results indicate that the loss of both wrn-1 and mir-124 functions significantly affects ATP levels in C. elegans. The loss of wrn-1 and mir-124 leads to an increase of the aging marker lipofuscin To determine whether the reduced life span observed in mir-124(n4255) worms was due to a progeroid phenotype, accumulation of the aging marker lipofuscin was examined. The intensity of the fluorescence observed in the wrn-1(gk99) animals was 21-fold stronger than in wild type worms at the third day into adulthood (Figure 4). Similarly, the lipofuscin fluorescence observed in mir-124(n4255) mutant worms was also stronger than in the wild type animals. Finally, there was a synergistic effect on the accumulation of lipofuscin in the double mutant wrn-1;mir-124 worms (Figure 4). These results indicate that animals lacking either wrn-1 or mir-124 exhibit a progeroid phenotype that is exacerbated by the loss of both genes. Vitamin C restores the normal life span of the wrn-1(gk99) and mir-124(n4255) mutant strains Previously, we reported that vitamin C restored the normal life span of Wrn Δhel/Δhel mice [16]. We thus decided to test the impact of 10 mM ascorbate [35] on the life span of each C. elegans mutant strain. Vitamin C significantly increased the median life span of wrn-1(gk99) animals when they were grown on a diet containing vitamin C (log-rank test: P = 1.4 × 10^-7; Figure 5A). Furthermore, this life span extension was comparable to wild type animals grown on normal media. The median life span of mir-124(n4255) mutant worms was also significantly increased to a level similar to wild type animals upon vitamin C supplementation (Figure 5B; log-rank test: P = 3.0 × 10^-9). These results indicate that vitamin C significantly increased the life span of animals lacking the wrn-1 or mir-124 genes. Finally, we determined the life span of double mutant wrn-1;mir-124 worms treated with vitamin C. While vitamin C did extend the life span of these double mutant worms from 6.6 days to 8.4 days (Figure 5C, P = 3.1 × 10^-6), the effect was not as dramatic as that observed for the single mutant worms. Furthermore, vitamin C treatment did not increase the life span of the double mutants to that of the untreated wild type worms (Figure 5C; log-rank test: P = 0.0163). Vitamin C decreases ROS levels in all mutant strains We next examined the effect of vitamin C on ROS levels in whole worms of each strain. Twenty-five 7-day old adult worms (timed from the post-larval L4 stage) of each genotype were treated with 10 mM vitamin C and then ROS levels were measured. There was no significant difference between untreated and vitamin C-treated wild type worms. In the wrn-1(gk99) and mir-124(n4255) worms, vitamin C treatment significantly lowered ROS levels compared to untreated worms (Figure 6A; P < 0.00005).
Finally, vitamin C also significantly decreased ROS levels in wrn-1;mir-124 worms compared to the untreated wrn-1;mir-124 animals (P = 0.00009) (Figure 6A). Overall, these results indicate that vitamin C significantly decreased ROS levels in all mutant strains tested. [Figure 4 legend (fragment): ... mir-124(n4255), and wrn-1;mir-124 double mutant worms at three days into adulthood. Panels on the right represent the lipofuscin autofluorescence alone. All pictures were taken at the same exposure time. Magnification is 10 X. The histogram at the bottom represents the average intensity of lipofuscin autofluorescence in the different C. elegans strains. Ten to fifteen three-day old (three days into adulthood) worms of each strain were photographed and the fluorescence intensity was quantified using Adobe Photoshop. The fold increase in fluorescence intensity compared to wild type animals is indicated. Unpaired Student's t-test; *P = 0.00002 for wrn-1(gk99) vs. wild type; **P = 0.00222 for mir-124(n4255) vs. wild type; and ***P = 0.00078 for wrn-1;mir-124 vs. wild type.] Vitamin C increases ATP levels only in the wrn-1(gk99) mutant strain We also measured ATP levels in vitamin C-treated mutant worms. ATP levels were decreased in treated wild type worms compared to untreated wild type worms, but this decrease was not statistically significant. ATP levels in vitamin C-treated wrn-1(gk99) worms were similar to the ATP levels of untreated wild type worms (Figure 6B). Indeed, vitamin C significantly increased ATP levels in wrn-1(gk99) worms compared to the untreated wrn-1(gk99) animals, by 1.9-fold (P = 0.00972; Figure 6B). ATP levels were not significantly increased in vitamin C-treated mir-124(n4255) animals compared to the untreated mir-124(n4255) worms (Figure 6B) and were still lower than in untreated wild type animals (P = 0.0155). Thus, vitamin C did not normalize the amount of ATP in mir-124(n4255) worms to wild type levels. There was a 1.9-fold increase in ATP levels in vitamin C-treated wrn-1;mir-124 double mutant worms compared to untreated wrn-1;mir-124 animals (P = 0.0002; Figure 6B). However, the amount of ATP in vitamin C-treated wrn-1;mir-124 double mutant worms was still lower than in untreated wild type animals (P = 0.0440). Overall, these results indicate that vitamin C significantly increased ATP levels only in worms bearing the wrn-1(gk99) mutation. Vitamin C decreases lipofuscin levels in all mutant strains to the level of untreated wild type animals The intensity of autofluorescence from lipofuscin accumulation was examined in all the mutant strains treated with vitamin C. As indicated in Figure 6C, vitamin C decreased the intensity of autofluorescence in all the mutant strains to untreated wild type levels. These results indicate that vitamin C normalized lipofuscin accumulation in wrn-1(gk99), mir-124(n4255), and wrn-1;mir-124 worms. Important parallels between mouse and C. elegans with a mutation in the WRN helicase In this study, we have demonstrated that C. elegans animals carrying a deletion of the wrn-1 helicase have a reduced life span, and importantly, this phenotype is similar to mice lacking the DNA helicase activity of the human WRN ortholog [16,30,34]. Thus, both models can be used to identify and assess the impact of specific genes that, with the WRN orthologs, affect health or life span. [Figure 5 legend (fragment): vitamin C-treated wrn-1;mir-124 vs. untreated wild type worms: P = 0.0163.]
[Figure 5 legend, continued: All experiments were performed three to four times with 20 to 30 worms per genotype. P-values were obtained using the log-rank test method.] The short life span of C. elegans allows a rapid evaluation of the impact of a gene on aging, which can then be translated to a more complex organism like the mouse. In this study, we identified miR-124 as a conserved miRNA in both mouse and worm animal models. miR-124 has a role in premature aging through the loss of a functional WRN ortholog helicase activity, although the mechanism by which the loss of WRN affects miR-124 expression remains unknown. Nevertheless, we demonstrate that a deletion of the mir-124 gene ortholog in C. elegans results in a reduced life span, increased whole body ROS levels, and reduced ATP levels. Because total inactivation of both the wrn-1 and mir-124 genes had a greater negative impact on ROS and ATP levels than inactivating wrn-1 alone, these results suggest that the decrease of the miR-124 miRNA can contribute to several key biological processes affected in Wrn Δhel/Δhel mice [15,16,34]. In addition, the deletion of mir-124 accelerated the accumulation of the aging marker lipofuscin in C. elegans and thus highlights the importance of this miRNA in the progeroid phenotype. The expression of miR-124 was not only reduced in the livers of young Wrn Δhel/Δhel mice compared to age-matched wild type mice, but it was also reduced in the livers of old wild type mice compared to young wild type mice. These results indicate that the miR-124 expression signature in the liver of young Wrn Δhel/Δhel mice corresponds to the miR-124 signature in old wild type animals. To our knowledge, this is the first study showing a significantly altered expression of miR-124 in the liver of aging mice. Previous studies have not shown an alteration of miR-124 during normal hepatic aging in mice or rats, or in the long-lived Ames dwarf mice [24,27,36]. This difference may be due to the different techniques used for the initial miRNA detection. Previous studies utilized hybridization of labeled molecules on nitrocellulose-based microarrays [24,27,36], which may be less sensitive than the direct quantitative RT-PCR of individual miRNAs used in this study [37]. Interestingly, the level of miR-124 has also been reported to be down-regulated in the skeletal muscle of old mice compared to young mice [25]. These results, together with our data, indicate that a decrease of miR-124 can be considered a common signature in the liver and muscle of aging mice. Our observation of a significant decrease in miR-124 levels in aging C. elegans further supports the role of this conserved miRNA in the molecular signature of aging in different animal species. miR-124 has been shown to be involved in neurogenesis not only in mouse but also in C. elegans [38,39]. More precisely, the expression of miR-124 in the mouse brain is associated with the differentiation status of neuronal cells [38]. However, miR-124 is expressed in cell types other than neurons [40,41]. [Figure 6 legend (fragment): Twenty-five worms of each genotype were collected for the ROS or ATP measurements; experiments were performed in triplicate. (C) Histogram representing the average intensity of lipofuscin autofluorescence in the different C. elegans strains treated with vitamin C compared to untreated wild type worms; ten to fifteen three-day old (three days into adulthood) worms of each strain were photographed and the fluorescence intensity was quantified using Adobe Photoshop.]
Of relevance to our study, miR-124 is also expressed in the normal human liver [42]. As miR-124 is a regulator of several proteins involved in insulin exocytosis and intracellular signaling in pancreatic beta cell lines [40,41], it is possible that miR-124 may alter insulin action in vivo, directly impacting organismal homeostasis and aging. Importantly, the insulin/insulin-like growth factor-1 signaling pathway is a strong regulator of longevity in C. elegans [23,43,44]. Noticeably, insulin-like peptides are primarily released from neurons in C. elegans [23]. Thus, the mutant C. elegans strains described in this study give us relevant models to thoroughly decipher the molecular mechanisms involved in WS and aging in general. As miR-124 affects protein expression by destabilizing RNA levels of target genes or by inhibiting translation of target mRNAs, the next step is to perform large scale proteomic analyses to identify proteins in our Mus musculus and C. elegans animal models involved in the insulin signaling pathway, redox balance, energy homeostasis, and healthy aging. Vitamin C normalizes the life span of mutant wrn-1 and mir-124 strains We recently found that vitamin C supplementation rescued the shorter mean life span of Wrn Δhel/Δhel mice and reversed several age-related abnormalities in adipose, cardiac, and liver tissues [16]. In this study, we show that vitamin C also rescued the shorter life span of both wrn-1(gk99) and mir-124(n4255) mutant animals. Furthermore, vitamin C reversed the increased ROS levels, the decreased ATP levels, and the accelerated accumulation of the progeroid marker lipofuscin in both mutant strains. Lipofuscin is believed to be a mix of oxidized and cross-linked macromolecules, including proteins, lipids, and carbohydrates [45]. Such results point to metabolic abnormalities in worms lacking the helicase function of the human WRN ortholog, like Wrn Δhel/Δhel mice [16,30,34]. Importantly, we found that vitamin C reversed the metabolic abnormalities in both of these models. To conclude, our data indicate that miR-124 is a conserved miRNA that is involved in the aging phenotype across mouse and worm species. Furthermore, the loss of miR-124 expression is associated with the lack of WRN helicase function in both species. Finally, the progeroid phenotypes associated with either WRN or miR-124 mutations can be reversed by vitamin C treatment. Taken together, our results with both mouse [16] and worm models of WS suggest that vitamin C supplementation could have beneficial effects for patients with WS. METHODS MicroRNA expression profiling. Care of mice was in accordance with the guidelines of the Centre de Recherche des Centres Hospitaliers Universitaires de Québec. The TaqMan® Array Rodent MicroRNA Card Set v3.0 is a two card set containing a total of 384 TaqMan® MicroRNA Assays per card. The set enables accurate quantification of 755 unique microRNAs for mouse. Included on each array are three TaqMan® MicroRNA Assay endogenous controls to aid in data normalization and one TaqMan® MicroRNA Assay not related to rodents as a negative control. Use of the Megaplex™ RT Primers, Rodent Pool Set v3.0 was required to run the array sets. An additional preamplification step was carried out with Megaplex™ PreAmp Primers. Reactions were performed on four animals, two of each genotype, according to the manufacturer's protocol (Applied Biosystems, Carlsbad, CA). Raw CTs were then successively normalized using the endogenous U6 and quantile normalization.
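The normalization just described (CT values referenced to the endogenous U6 control, then quantile-normalized across arrays) can be sketched in R, the language the authors report using for their analyses. normalizeQuantiles() is a real limma function; the matrix dimensions and CT values below are invented placeholders, not the study's data:

    library(limma)  # provides normalizeQuantiles()

    set.seed(1)
    # Hypothetical raw CT matrix: 384 miRNA assays (rows) x 4 animals (columns)
    ct <- matrix(rnorm(384 * 4, mean = 28, sd = 3), nrow = 384,
                 dimnames = list(paste0("miR_", 1:384), c("wt1", "wt2", "mut1", "mut2")))
    u6 <- c(wt1 = 24.1, wt2 = 24.5, mut1 = 23.8, mut2 = 24.3)  # hypothetical U6 CTs

    dct    <- sweep(ct, 2, u6, "-")    # step 1: subtract each animal's U6 CT (delta-CT)
    dct_qn <- normalizeQuantiles(dct)  # step 2: quantile normalization across the four arrays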
An empirical Bayesian method within the package limma in BioConductor (http://www.bioconductor.org) was used to identify the significantly modulated miRNAs. A miRNA was judged significantly modulated if the Benjamini-Hochberg adjusted P-value was lower than 0.1. All miRNA analyses were performed using R version 2.14.0. Validation of miRNA expression. The quantitative measure of selected miRNA expressions was performed with TaqMan MicroRNA assays on total RNA extracted from four different Wrn Δhel/Δhel mutant and four wild type mice, or from four young (three months) and four old (21 months) wild type mice, following the manufacturer's protocol (Life Technologies, USA). Caenorhabditis elegans strains. All C. elegans strains were maintained as described [46]. Both wrn-1(gk99) and mir-124(n4255) strains, obtained from the C. elegans Genetics Center (University of Minnesota, St Paul, MN), were out-crossed four times with the wild type N2 strain to remove possible unrelated mutations. The wrn-1(gk99) allele contains a 196 bp deletion that inhibits the expression of the protein [32]. The primers used to genotype this strain are 5'-CTGGCTGTAACTGCACCTGA-3' and 5'-AAATGGGAGGGAAAGAGCAT-3'. The mir-124(n4255) strain contains a 212 bp deletion that spans the entire mir-124 sequence. The mir-124 sequence is localized in an intron of the trpa-1 gene. It has been shown that the n4255 deletion does not abrogate the expression of the trpa-1 gene in C. elegans [39]. The primers used to genotype the mir-124(n4255) strain are 5'-TTGCTTCTTCTTCGAGCACA-3' and 5'-AAATGGGAGGGAAAGAGCAT-3'. Expression of mir-124 in C. elegans. Three hundred 7-day-old adult worms (post-larval L4 stage) were sorted by size to exclude remaining larvae using a COPAS BIOSORT instrument (Union Biometrica, Inc., Somerville, MA, USA). Sorted worms were spun down in an Eppendorf tube and lysed in TRIZOL (Invitrogen, Carlsbad, CA) to extract total RNA. To measure mir-124 expression, TaqMan Small RNA assays (Applied Biosystems) were performed as described before. Stem-loop qRT-PCR for mature miRNAs was performed on a real-time PCR system (AB 7900; Applied Biosystems). The small nuclear RNA sn2841 was measured and used as an endogenous control. Measurement of life span and aging markers. Worms were transferred to fresh plates and were grown at 25°C. Death was scored by absence of any movement after several light pokes with a platinum wire. Lipofuscin was detected as autofluorescence in adult worms, and images were captured using a Zeiss motorized Axioplan 2 microscope (with 525 nm filter) equipped with an AxioCam MRm camera and the AxioVision acquisition software (Carl Zeiss Microscopy GmbH, Jena, Germany). ATP quantification in C. elegans. ATP levels were quantified with the ApoSensor ATP assay kit according to the manufacturer's instructions (BioVision, Mountain View, CA). Luminescence was measured with a Luminoskan Ascent luminometer (Thermo Electron Inc., Milford, MA). Twenty-five 7-day-old adult worms (post-larval L4 stage) were collected, spun down in an Eppendorf tube, and resuspended in 250 µL of assay kit buffer. Worms were crushed in a Dounce homogenizer (25 strokes), and the homogenate was spun 5 min at 13,000 rpm in a benchtop centrifuge at room temperature. The ATP level was measured from the homogenate. Protein concentrations were measured using the Bradford assay. Results were expressed as amount of ATP/mg of protein. All experiments were performed in three independent pools of animals.
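A hedged sketch of the limma screen described at the start of this Methods passage, again in R. lmFit, eBayes, and topTable are real limma functions; the input matrix, the two-versus-two design, and the simulated values are placeholders standing in for the normalized CT data:

    library(limma)

    set.seed(2)
    # Placeholder for the normalized delta-CT matrix (384 miRNAs x 4 animals)
    expr <- matrix(rnorm(384 * 4), nrow = 384,
                   dimnames = list(paste0("miR_", 1:384), c("wt1", "wt2", "mut1", "mut2")))
    genotype <- factor(c("wt", "wt", "mut", "mut"), levels = c("wt", "mut"))
    design   <- model.matrix(~ genotype)

    fit <- lmFit(expr, design)  # per-miRNA linear model
    fit <- eBayes(fit)          # empirical Bayes moderation of the variances
    tab <- topTable(fit, coef = "genotypemut", number = Inf, adjust.method = "BH")
    hits <- tab[tab$adj.P.Val < 0.1, ]  # the study's significance cut-off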
Statistical analysis. Data on graphs are presented as means ± SD. The unpaired Student's t-test and the log-rank test were performed using an alpha level of 0.05 and a two-sided hypothesis. Life span curves were built, and differences between strains were considered significant at a P-value lower than 0.05 in all statistical analyses. All statistical analyses were performed using R version 2.14.0 (www.r-project.org).
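The two tests named above are available in base R and the survival package; a minimal sketch with invented life-span and assay values (the per-worm raw data are not reproduced in the text):

    library(survival)

    # Hypothetical days-of-death for two strains (all deaths observed, no censoring)
    days   <- c(18, 20, 21, 19, 22, 12, 13, 14, 12, 15)
    strain <- rep(c("N2", "wrn-1"), each = 5)

    km <- survfit(Surv(days) ~ strain)   # Kaplan-Meier life span curves
    plot(km, lty = 1:2, xlab = "Days of adulthood", ylab = "Fraction alive")
    survdiff(Surv(days) ~ strain)        # log-rank test, as used for the curves

    # Unpaired two-sided Student's t-test, e.g., on ROS or ATP measurements
    t.test(c(98, 105, 101), c(152, 160, 148), var.equal = TRUE)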
2017-06-16T22:03:03.900Z
2012-09-01T00:00:00.000
{ "year": 2012, "sha1": "64e2703118814e79b546c922fe7a65cee2884e94", "oa_license": "CCBY", "oa_url": "https://doi.org/10.18632/aging.100489", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "64e2703118814e79b546c922fe7a65cee2884e94", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
10862843
pes2o/s2orc
v3-fos-license
Rhabdomyolysis Associated with Parainfluenza Virus Influenza virus is the most frequently reported viral cause of rhabdomyolysis. A 7-year-old child is presented with rhabdomyolysis associated with parainfluenza type 2 virus. Nine cases of rhabdomyolysis associated with parainfluenza virus have been reported. Complications may include electrolyte disturbances, acute renal failure, and compartment syndrome. Introduction Rhabdomyolysis is characterized by the breakdown of skeletal muscle fibers. It has a diverse etiology including infections, trauma, strenuous exercise, drug reactions, metabolic disorders, and status epilepticus [1]. In contrast with adults, Mannix et al., in the largest series of pediatric rhabdomyolysis, reported that viral myositis was the most frequent cause, accounting for 38% (73/191) of cases, in particular during the first decade of life [2]. Viruses associated with rhabdomyolysis include influenza types A and B, HIV, enteroviruses, Epstein-Barr, cytomegalovirus, adenovirus, herpes simplex, and varicella virus. Influenza has been considered the most frequent [1]. Case Description A 7-year-old boy was referred by his pediatrician in the autumn, with fever up to 39.2 °C for 4 days and lower extremity pain, right knee stiffness, and difficulty in ambulating over the previous 2 days. The child had fatigue, nasal congestion, sore throat, and occasional cough. His sister had rhinorrhea and hoarseness. He had been diagnosed with intermittent asthma and received montelukast daily. No family history of metabolic or neuromuscular diseases was noted. No trauma, increased exercise, insect bites, or urine discoloration was reported. He had no pets and had not traveled recently. His immunizations were up to date except for influenza. On physical examination he had a temperature of 38.6 °C, heart rate of 107 beats/min, respiratory rate of 18/min, blood pressure of 109/70 mm Hg, and oxygen saturation of 100% on room air. The child was unable to walk or stand. Upon palpation, he had severe tenderness over the calves and thighs without any swelling and milder pain in the arms. His abdomen was tender to percussion but soft. He had scattered wheezing. No signs of arthritis or rashes were noted. The remainder of his examination was unremarkable. Increased hydration without potassium was administered, but no alkalinization of the urine or forced diuresis was required. CPK reached up to 21,425 U/L and AST and ALT up to 843 U/L and 245 U/L, respectively. Potassium peaked at 5.5 mEq/L (normal 3.5-5.0) and phosphorus at 5.1 mEq/L (normal 2.4-4.3). CPK decreased to 6,756 U/L, and electrolytes normalized on discharge two days later. The child's clinical condition had much improved. Discussion There is not a clear distinction in the literature between acute benign myositis and rhabdomyolysis. Some use the term acute benign myositis for uncomplicated cases of muscle inflammation with elevated CPK and arbitrarily reserve the term rhabdomyolysis when, in addition, myoglobinuria is present [12]. Others refer to rhabdomyolysis only when serum CPK is >1000 U/L [2]. Since rhabdomyolysis means disintegration of striated muscle with resultant leakage of muscle cell constituents, we choose to include all reports of clinical myositis with elevated serum CPK. The first case of rhabdomyolysis associated with parainfluenza was reported in 1976 by McKinlay and Mitchell, in an 8-year-old child who had a benign self-limiting course.
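The competing labeling conventions discussed above can be made explicit; a toy R helper for illustration only (the thresholds come from the text, the function itself is not part of the report, and real diagnosis is of course not algorithmic):

    # Label a case under the two conventions discussed above:
    # [12] reserves "rhabdomyolysis" for myoglobinuria; [2] uses CPK > 1000 U/L.
    classify_myositis <- function(cpk_u_l, myoglobinuria = FALSE) {
      if (myoglobinuria)  return("rhabdomyolysis (myoglobinuria criterion [12])")
      if (cpk_u_l > 1000) return("rhabdomyolysis (CPK > 1000 U/L criterion [2])")
      "acute benign myositis (elevated CPK only)"
    }

    classify_myositis(21425)  # the present case's peak CPK meets the CPK criterion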
Since then, 10 cases have been reported in the PubMed database in the English literature (Table 1). Seven cases, including the current one, occurred in children, and the subsequent discussion refers to them. The median age was 7 years (range 4-10, mean 6.8), and two were girls. All children were healthy except a child with cerebral palsy [3]. All children presented with rhabdomyolysis within 1-5 days after the onset of fever or upper respiratory tract symptoms. Fever was reported in 86% (6/7) [3,4,6-8]. Myalgias involved predominantly the calves and thighs in all children but one, for whom no specific description of muscle involvement was available [3]. Muscle weakness or difficulty in walking was reported in all the remaining cases and muscle swelling in 33% (2/6). None of the children with parainfluenza had dark-colored urine, except for one who presented with the "classic triad" of myalgias, muscle weakness, and dark urine [5]. Two had positive urinalysis for blood [5, present]. These findings are in agreement with a review in which only 3.6% of 191 children with rhabdomyolysis presented with dark urine, and only 1 had the "classic triad." In addition, more than half of childhood rhabdomyolysis cases may have negative heme dipstick results [2]. The median CPK level was 7,563 U/L (range 1,566-50,000, mean 15,508), compared with a median of 4,100 U/L (range 230-1,000,000) in 36 children with influenza [12]. In a series of 18 children with rhabdomyolysis, CPK levels correlated with the development of acute renal failure (ARF) [13]. In contrast, the need for renal replacement therapy did not correlate with the initial or peak level of CPK in another series of 28 children [14]. In children with parainfluenza the median AST was 204 U/L (range 71-1,040, mean 420). Thrombocytopenia (platelet count 52,000/µL) was reported in one child [8]. The median time to clinical recovery was 12 days (range 4-52, mean 18.8). In some children complications occurred, but the outcome was usually favorable. Three out of seven children developed ARF, and two of them received renal replacement therapy [3,4]. One child died because of cardiorespiratory arrest and brain edema [7]. Among patients with rhabdomyolysis associated with influenza, ARF has been reported by Singh et al. in 44% (11/25) of patients and by Agyeman et al. in only 3% (8/311) of children with myositis/rhabdomyolysis [1,12]. Six out of eight children with influenza ARF required renal replacement therapy [12]. In contrast, it has been reported that none of 73 children with rhabdomyolysis associated with viral myositis developed acute renal failure [2]. In addition, renal replacement therapy was not more frequent in children with rhabdomyolysis precipitated by infection in comparison with other causes [14]. In two cases of parainfluenza precipitating rhabdomyolysis, a dual infection was detected in postmortem specimens. In one child, viral particles consistent with a picornavirus were detected by electron microscopy in the muscles [7]. Chlamydia pneumoniae was detected by PCR in an adult with hemorrhagic pneumonia [11]. Whether coinfection carries a worse prognosis in the context of rhabdomyolysis remains to be clarified. The pathogenesis of viral-induced rhabdomyolysis is unclear. Direct viral invasion and toxicity or immunologic mechanisms such as deposition of immune complexes or cross-reactivity have been proposed, among others [12].
Recently, it has been suggested that, in genetically susceptible hosts, infection with parainfluenza leads to increased production of interferon-1, a known cause of rhabdomyolysis [4]. Similar histopathologic findings have been reported in both influenza and parainfluenza cases. Diffuse or patchy muscle degeneration and necrosis with little inflammatory infiltration were commonly detected [7,10,12]. Antiviral treatment, such as ribavirin, has not been reported in rhabdomyolysis caused by parainfluenza virus. The efficacy of antivirals such as neuraminidase inhibitors in rhabdomyolysis caused by influenza is unknown. In addition, recurrent rhabdomyolysis and compartment syndrome, first precipitated by parainfluenza and on a second occasion by influenza, have been described in a child unvaccinated for influenza [15]. Recurrent rhabdomyolysis associated with influenza has been reported in 10 children [12]. Our patient's risk of developing recurrent disease upon subsequent viral infection is unknown. His mother agreed to have the child receive the influenza immunization. Influenza vaccination triggering rhabdomyolysis is an extremely rare event and has occurred in adults on statin therapy [16]. Conclusion In conclusion, although mild myalgias are commonly reported and are self-limited in infections caused by parainfluenza virus, worsening pain, difficulty in ambulating, exquisite tenderness to palpation, or muscle swelling should prompt further investigations to detect possible rhabdomyolysis, even in the absence of dark urine. Potential complications include acute renal failure, electrolyte disturbances, and compartment syndrome. Although influenza is the most frequently reported viral cause of rhabdomyolysis, the implementation of laboratory methods such as multiplex polymerase chain reaction assays may lead to enhanced recognition and characterization of rhabdomyolysis associated with parainfluenza or other pathogens.
2016-05-12T22:15:10.714Z
2013-06-13T00:00:00.000
{ "year": 2013, "sha1": "7a175865a5e9481688533c0b1c530ff21ccc200f", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/criid/2013/650965.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "914d646494f09cbb34bbe5da9dfea4c93ad35bb9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
67486703
pes2o/s2orc
v3-fos-license
High-frequency contrastive grammar features of the Uralic languages The present article is dedicated to the detection of the ancestral homeland of the Uralic languages and their relevant features. Linguistic geophylogeny used within the framework of the research allowed proving the "Eastern" hypothesis on their ancestral homeland. The method of contrastive queries based on the LangFam program and the database "Languages of the World" defined a set of relevant features of the Uralic languages. Their dominating word order "subject-verb-object" was proved to be nongenealogic. The research found a high correlation between the word order and the number of cases in the Uralic languages. The possible ways of the Uralic people's migration and the linguistic contacts appearing during their settlement were studied. Introduction Detection of relevant features for a language family is a classic linguistic task (Danilova et al., 2016; Silnitsky, 2006; Tomlin, 1986). After the appearance of such linguistic resources as the database "Languages of the World" of the Institute of Linguistics of the Russian Academy of Sciences (IL RAS) (2013), which contains a large number of grammar features, researchers are given a free hand to use large arrays of data in search for relevant features. The present paper is aimed at studying the Uralic languages from the point of view of their relevant features. The Uralic languages are a big family of languages spoken in Asia and Europe with over 25 million speakers. Some languages from the family have the status of official languages in Europe (such as Hungarian, Estonian, and Finnish). There are many works dedicated to the Uralic family or separate languages, but there are questions on the evolution of the Uralic languages that remain unanswered. For example, recently the dominant word order in different world languages has been studied very actively (Comrie, 1989; Dryer, 1992; Gell-Mann, Ruhlen, 2011; Rijkhoff, 2004). The following works also studied the question of word order in the world languages: (Greenberg, 1963; Haugan, 2001; Hoffman, 1996; Song, 2012; Tomlin, 1986). Numerous linguistic studies are dedicated to the appearance and typology of the Uralic languages (Collinder, 1965; Hajdu, 1985; Janhunen, 2009; Marcantonio, 2002; Napolskikh, 1991, 1997). However, there has still been no comprehensive work dedicated to the reasons and the time when the word order changed in the Uralic languages. The present work regards the facts that the dominant word order in some Uralic languages changed: in some languages, from "subject-object-verb" (SOV) to "subject-verb-object" (SVO) under the influence of the neighboring Indo-European languages (Germanic and Slavic) before the XVI c. AD (Ehala, 2006); in other languages, from SOV to free word order. Moreover, the SOV→SVO change was accompanied by an enrichment of the case system, which is confirmed by statistical data. The present research is based on the combination of typological, geographical, genealogical and geophylogenetic data on the Uralic language family from the database "Languages of the World" of IL RAS, as well as from the World Atlas of Language Structures (WALS) and the Automated Similarity Judgment Program (ASJP). A detailed description of the "Languages of the World" IL RAS database (Spring 2013), the LangFam program and the method of contrastive queries used during this research is given in (Danilova et al., 2016).
For the first time, the location of the ancestral homeland of the Uralic languages is defined using the method of linguistic geophylogeny. Section 2 of the paper presents an overview of the works on the Uralic languages and existing hypotheses of their areal contacts, ancestral homeland and ways of migration. Further, in Section 3, the authors suggest a description of the object of the present study and the main goals of the research. Section 4 contains information on the methods and resources used during the research. Sections 5 and 6 provide an analysis of the results and their relevance for modern scholarship. In Section 7 we give a summary of the research and its results, discuss the limitations of the work and outline the direction of further studies. Nevertheless, in general, an ecologic area defined by the method of linguistic paleontology and an ancestral homeland are not the same. The borders of the latter should also be defined based on the archaeological and anthropological data. Considering this, V. Napolskikh concludes that the Proto-Uralic ecological area coincides with the regions where Proto-Uralic speakers lived before the disintegration of the community. The western border lies further to the west of the border of the Uralic ecological area, near the interfluve of the Vyatka River and the Vetluga River, i.e., in the area of the archaeological border between the Uralic and the West Siberian and Volga-Oksk Neolithic (Napolskikh, 1991). Ways of migration and areal contacts Let us look at the possible ways of migration of the Uralic people and the language contacts arising along with their movement. It is important to remember that ethnogenesis is a very complicated process, which can involve the merging of peoples and the blending or even replacement of languages. That is why the problem of external contacts of the Uralic languages is also debatable and remains unsolved so far. For example, the reasons for the likeness of the Uralic and Yukaghir languages are unclear: some scholars classify the Yukaghir languages with the Uralic family, while others consider the likeness to be the result of contacts. Some researchers find some common features between the Uralic languages and the Chukotko-Kamchatkan and Eskimo languages. However, if such contacts did take place, they should be very ancient, as these common features are characteristic of both the Samoyedic and Finno-Ugric languages; thus, the supposed contacts could not bring about the difference between the Uralic languages in the basic word order and means of expressing subject-object meanings. The fact of old contacts between separate branches of the Altaic family and the Uralic languages and between branches of the Indo-European languages and the Uralic languages is universally recognized, but there is no general hypothesis about the time of the first contacts. As for contacts between the Uralic and Altaic languages, the linguistic data (Hajdu, 1985) testify to continuous connections between the Uralic languages of the Eastern area and the Tungusic languages since the existence of the Uralic unity. Besides, Samoyedic and Ugric show traces of Turkic influence. Moreover, while contacts with the Samoyedic language are believed to have taken place before its disintegration, the presence of Turkic borrowings in Proto-Ugric remains debatable. According to V. Napolskikh (1997), if this contact took place, it was not long and could not have occurred earlier than the end of the common Ugric era (III c. BC). Let us now proceed to the Uralic and Indo-European contacts.
If the ancestral homeland is localized correctly, after the separation of the Samoyedic and Finno-Ugric peoples the general migration direction of the latter is from the East to the West, probably with some shift to the South. According to the data of linguistic paleontology, in the III c. BC the Proto-Finno-Ugric ecological area should be located in the Middle Ural, in the Middle and South Trans-Urals, and in the south-western sector of West Siberia, with the possible inclusion of regions to the west of the Ural Mountains (basins of the Kama River, the Upper Vychegda River and the upper reaches of the Pechora River). Based on the analysis of anthropological data, N. Mokshin (2009) also claims that the Cisurals, West and South Siberia are the most probable territory of formation of the ancestors of the modern Finno-Ugric people. This point of view conforms to the modern level of development of other types of historical sources. Studies (Abaev, 1972; Harmatta, 1977; Kalima, 1936) show that borrowings in the Finno-Ugric languages include Iranian and common Aryan, as well as Indo-Aryan, words. V. Napolskikh agrees with the traditional point of view that the first contacts between the Uralic and Indo-European languages took place after the disintegration of the corresponding proto-languages, namely between the Finno-Ugric proto-language and Proto-Indo-Aryan dialects (till late III c. BC). This hypothesis does not contradict the archaeological data. E. Kuz'mina (2007), who suggests an Indo-Aryan ethnic identity for the Andronovo population, holds the view that contacts between the Indo-Aryan and Finno-Ugric peoples took place at the border of the forest and steppe regions of Eurasia. According to V. Napolskikh, this contact was continuous until the early Middle Ages, and Hungarian continued contacting the Iranian languages in the XI c., after the disintegration of the Ugric community. Proto-Tocharians were another group of Indo-Europeans who contacted the Uralic peoples at a rather early period (Napolskikh, 1991). Later the western Finno-Ugric peoples kept on contacting the Indo-European peoples, but now with their European groups. The most popular theory is that the order of their meetings was the following: the Proto-Baltic people and the Baltic Finns (probably, Baltic-Finnish Saami); the Germans and the Baltic Finns, the Germans and the Saami (since early I c. AD); later, the Slavic people and the Mordvins. The anthropological data testify to the genetic closeness of the Samoyedic people, the Ugric people of West Siberia and the Yeniseian people (now represented only by the Ket). Basic word order change The reconstruction of the Proto-Uralic language (Hajdu, 1985; Honti, 2013) signifies that the proto-language had SOV as the dominant word order. Meanwhile, most modern Uralic languages are characterized by SVO, which means that this change took place in the evolution of the Uralic family. Let us look at certain languages and some reconstructions to understand when the change of the word order took place. The prevailing point of view concerning the change of word order SOV→SVO The modern Permic languages have both word orders in question: SVO is dominant in Komi-Zyrian and Komi-Permyak, SOV in Udmurt (Winkler, 2001). V. Ponaryadov (2001) conducted a comparative analysis of the order of the sentence constituents in the Permic languages and reconstructed the word order in the Permic proto-language.
The author concluded that Proto-Permic had SOV word order, and all three languages (Komi-Zyrian, Komi-Permyak, and Udmurt) went through the SOV → SVO evolution, though the Udmurt system was much less modernized than those of the Komi languages. V. Ponaryadov explains this by the fact that Russian and the neighboring Turkic languages (Tatar and Chuvash) did not exert such influence on Udmurt. The author dates these changes back to the XVI-XVII centuries, as the study of diachronic data shows the absence of any significant changes since the XVIII c. There are also some data on Estonian since the XVI c. M. Ehala (2006) presents data from the corpus of the University of Tartu, which contains texts in literary Estonian of the XVI c. He analyzed 76 clauses from the viewpoint of the word order and position of the verb and found that the verb was located in the final position in 52.4% of all clauses. Nevertheless, this corpus does not contain a large number of texts. Moreover, some of them were written by non-native speakers, which could influence the reliability of the data. Believing that the old word order (SOV) could be preserved in folk sayings and idioms, M. Ehala also analyzed Estonian folklore. He revealed cases when the verb occurred in the second position in the main clause. The SOV order is common for subordinate clauses. The research by M. Ehala (2006) also refers to the results of another study (Remmel, 1963). The author of the latter work randomly chose 28 pages from a collection of folk fairy tales "Eesti rahvanaljandid" (Estonian fairy tales) and calculated the number of subordinate clauses with a verb in the final position. The result turned out to be 152 clauses out of 171. Similar calculations were made for "Valimik eesti vanasõnu" [Collection of Estonian sayings]. The verb occurred in the final position in 322 of 381 clauses. Since modern Estonian has SVO as the dominant word order, the data of the research in question can testify to the evolution SOV → SVO in the period when these literary monuments were created. Concerning modern Finnish, despite several studies claiming the absence of a dominant word order (Braticco, 2016), most researchers agree that the SVO word order is dominant there as well. Nevertheless, E. Tsypanov (2008) presents data showing that the runes of the national epos "Kalevala" (first published in 1835) still have SOV as the dominant order. Nevertheless, we can also speak about an "SOV → free word order" change, as there are at least two languages that do not have a dominant word order: Votic and Saami. Case system of the Uralic languages The modern Uralic languages differ from each other in the number of cases. Noteworthy, the data on their number for the same language often vary. This can be attributed to the criteria scholars use to differentiate between case forms and adverbial formations or postpositional constructions. Case systems of the Samoyedic languages have 7-8 elements (Hajdu, 1985), except Selkup, which has 13 cases (Kuznetsova et al., 2002). The Finno-Ugric languages show a significant variation in the number of cases. The richest case system belongs to Hungarian, which has up to 23 cases (Collinder, 1965). Komi-Zyrian and Komi-Permyak are traditionally considered to have 16 cases each (Bubrih, 1949).
Udmurt has 15 cases (Perevoshchikov et al., 1962), Finnish 15 (Hakulinen, 1961), Estonian 14 (Harms, 1962), Moksha 13 (Serebrennikov, 1967), Erzya 11 main cases and 6 additional (Evseviev, 1931), Veps 22 (Zaitseva, 1981), Mari 7 (Pengitov et al., 1961), and Saami 8-12 (Hajdu, 1985); the Eastern dialects of Khanty have 10 cases, the Southern dialects 5, the Northern 3 (Hajdu, 1985), Mansi has 6 (Riese, 2001), and Votic 14 (Adler, 1966). The existing reconstruction of the case system of Proto-Uralic (Hajdu, 1985) singles out 8 cases for it (the suffix of each case is given in brackets), e.g., Lative-Prolative II (-k). As shown above, there are case systems in the Finno-Ugric languages that were considerably enriched compared to the proto-language, as well as case systems that now contain a small number of cases. The latter separate the Nominative from the local cases, but Genitive and Accusative relations are not marked by any special suffixes. Besides case affixes, the Uralic languages have other means of expressing subject-object and locative relations. For example, postpositional constructions are quite popular (they are very frequent in the Northern dialects of Khanty due to the small number of cases). Prepositions are used only in the Finnic languages. Moreover, the Samoyedic, Mordvinic and Ugric languages have subjective-objective verb conjugation. Some researchers believe that the opposition of subjective and objective conjugation developed in the groups mentioned above independently (Klemm, 1928-1942). Others claim that verbs in Proto-Uralic had two types of conjugation, objective and objectless (Hajdu, 1985). The selection of these two grammar characteristics (word order and case system) will be explained in Section 4 of the present paper. "Languages of the World" database "Languages of the World" database is an electronic encyclopedic resource embracing the grammatical features of languages presented in the encyclopedia "Languages of the World." The current version of the database (Spring 2013) includes the description of 315 languages, mostly Eurasian, organized within a user-friendly interface. The format of data presentation is binary, i.e., each of the 3800 features has two possible states: "present" or "absent." All features are presented in a hierarchical way, in the form of a tree. Some features can include a paradigm: the field "feature name" contains all possible values, e.g., for the vowel height, "close/mid/open." "LangFam" as a tool for detection of relevant features Relevant features of a language family represent a set of features present in most languages of the family under study, but very rare in languages from other families (Danilova et al., 2016). Thus, relevant features of a family occur as a set only in this family. The detection of relevant features can be very important for typological studies, as it can give information on the possible borrowings and losses of features that were not present in the parent language, or, vice versa, were present only in the proto-language. A set of relevant features can be defined by contrastive queries using the LangFam program. It is a table containing the frequency of occurrence of each feature from the "Languages of the World" database for all families and genera from the database. The frequency of a feature's occurrence can have values in the range from 0, meaning that none of the languages from the family possesses it, to 1, meaning that the feature is characteristic of all languages from the family under study.
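How such a frequency table is computed is not spelled out in the paper; a minimal R sketch of the obvious construction, with a toy three-language, three-feature matrix standing in for the real 315 x 3800 data:

    # Toy binary feature matrix: rows = languages, columns = features (1 = present, 0 = absent)
    feat <- rbind(Finnish  = c(1, 1, 0),
                  Estonian = c(1, 1, 0),
                  Turkish  = c(0, 1, 1))
    colnames(feat) <- c("f1", "f2", "f3")
    family <- c("Uralic", "Uralic", "Altaic")

    # Per-family frequency of each feature, i.e., the LangFam-style table:
    freq <- apply(feat, 2, function(col) tapply(col, family, mean))
    freq  # rows = families, entries in [0, 1]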
A contrastive query means that a set of relevant features for a language family (or, less often, for a genus) is detected in contrast to another language family. Noteworthy, the absence of genealogic kinship between the two language families must be proven and widely acknowledged. In the present study, we searched for the relevant features of the Uralic languages using contrastive queries to the Altaic languages (Stachowski, 2015), though there are still arguments on the existence of the Altaic family. For example, the project ethnologue.com acknowledges the Uralic family, but not the Altaic one. The project wals.info acknowledges both the Uralic and Altaic families. We hold to the hypothesis on the existence of the Altaic macro-family. The search for relevant features comes down to detecting all features that have a frequency of occurrence over 0.5 (or 50% of the languages) in the family under study and below 0.05 (or 5% of the languages) in the contrastive family. The resulting list is the set of relevant features of a language family. The list of the languages that have a complete set of relevant features can be received from the "Languages of the World" database using the query master integrated into the database. The query master gives an opportunity to obtain a list of languages based on the presence or absence of the following characteristics: features, attribution to a family or genus, and area of distribution. It is noteworthy that the number of selected criteria is unlimited, giving the user wide opportunities in work with the data from the database. To get languages with all relevant features, it is necessary to mark the features as "present" in the query master. The result of the search will provide a list of languages possessing all the selected features. Further work with the set of relevant features includes the application of the so-called variation method, i.e., successive exclusion of one feature from the query. It allows finding languages that have an incomplete set of relevant features (e.g., all but one feature), which can provide information on the evolution of languages. Linguistic geophylogeny The present study is based on the method of phylogeography, first described in (Hickerson et al., 2010) and applied to languages by Walker and L.A. Ribeiro (2011). Given that, we still believe that the term "geophylogeny" is more suitable. Thus, it will be used in the present paper. Recently, phylogenies and the methods of phylogenetic analysis have been becoming a more and more popular tool for suggesting or proving hypotheses on the motherland of various language families (Dunn, 2014; Verkerk, 2017; Chang et al., 2015). In particular, special attention was gained by Bayesian phylogenetic approaches, which, combined with classical methods of comparative linguistics, can answer long-pending questions (Bouckaert et al., 2012). For the present research the phylogenetic tree was taken from ASJP (Wichmann et al., 2016) and manually transformed to minimise the number of intersecting lines from the tree to the map, while still preserving the succession of language separation and the historical depth of their existence received by the phylogenetic means. Ancestral homeland detection by linguistic geophylogeny The combination of the geographic area of the Uralic languages and the phylogenetic tree is presented in Figure 2 (here and further, in Figures 3, 4 and 5, the language numbers in the maps coincide with those from Figure 1).
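Given two per-family frequency vectors like the one sketched earlier, the contrastive query and the variation method reduce to a couple of lines of R; the thresholds 0.5 and 0.05 are the ones stated in the text, while the feature names and values are invented:

    # Hypothetical per-feature occurrence frequencies in the two families
    freq_uralic <- c(f1 = 0.90, f2 = 0.60, f3 = 0.40, f4 = 0.80)
    freq_altaic <- c(f1 = 0.02, f2 = 0.00, f3 = 0.01, f4 = 0.30)

    # Contrastive query: > 0.5 in the target family, < 0.05 in the contrastive one
    relevant <- names(freq_uralic)[freq_uralic > 0.5 & freq_altaic < 0.05]
    relevant  # "f1" "f2" -- the candidate relevant features

    # Variation method: drop one relevant feature at a time and re-run the language query
    for (f in relevant)
      message("query without ", f, ": ", paste(setdiff(relevant, f), collapse = ", "))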
Some of the languages from the map (Komi-Zyrian, Mator, Moksha, Enets, Nenets) are not presented in the tree, and some of the languages from the tree are not presented in the map (Lule Saami, Skolt Saami, Inari Saami, Estonian Vord, Csango). Figure 2: Geographic area of the Uralic languages shown with a phylogenetic tree. Let us look at the figure more closely. The figure confirms the "Eastern" hypothesis. The group of Samoyedic languages was the first to separate from the Proto-Uralic language. The Nenets people went to the West, and the Selkup settled in the Eastern Ural. Later the Ugrian group (Hungarian, Khanty, Udmurt) separated from the remaining languages. After that, the Hungarians went over the Carpathian Mountains (in 895-896 AD) and occupied the lands in the basin of the Middle Danube River (the territory of modern Hungary) (Molnar, 2001). The Khanty and Udmurt peoples settled in the territory of the Middle and Northern Ural. The remaining part of the Uralic speakers (the Finnish people) settled in the territory of North-Eastern Europe (Southern Saami, Northern Saami, Kildin Saami), Karelia (Karelian, Veps), the Baltic in the territory of modern Estonia (Livs, Finnish, Estonian, Votic, Izhor), the Northern (Nenets) and Western (Komi, Udmurt) Ural, and the basins of the Kama and Volga Rivers (Moksha, Erzya, Mari (Hill and Meadow)). According to the method of linguistic geophylogeny, the geographic area of the ancestral homeland of the Uralic languages was proven to be located in the habitat of the Khanty and Mansi. Thus, the present research will be based on one hypothesis, the "Eastern" one (Napolskikh, 1991), as the authors of the paper consider it more substantiated. The method of linguistic geophylogeny showed that the data of the phylogenetic tree built by S. Wichmann et al. (2016) contradict the computation of the geographic coordinates of the ancestral homeland of the Uralic languages from (Wichmann et al., 2010). As, in general, the phylogenetic trees received by ASJP coincide with the trees built manually within the frames of comparative linguistics, we tend to trust the final tree more than the automatic definition of the language ancestral homeland. Evidently, the method of ancestral homeland computation requires further improvement. Relevant features of the Uralic languages The execution of the contrastive query allowed obtaining a set of relevant features of the Uralic languages containing four features. They are given in Table 1. To increase the reliability of the data received by the LangFam program and the master of queries of the database "Languages of the World" IL RAS, the researchers double-checked the results manually. The first element given for each feature is the name of its section (e.g., "2.3.4. Case meanings"); the other elements show to which branch the feature belongs. The numbers in brackets denote the inventory numbers of the features in the database. Figure 3 shows the location of the languages that have a complete set of relevant features. Six of the languages (Estonian, Finnish, Izhor, Karelian, Veps, and Liv) are closely related and belong to the Finnic group. They all are located in the West of the geographic area of the Uralic languages. The next step in the study of the languages and their relevant features is the successive exclusion of one feature from the query. The successive exclusion of features I-III (there are no reliable data on the accentual difference of categorematic and syncategorematic parts of speech in the extinct Kamass and Mator;
thus, it is unreasonable to speak about the absence or presence of this feature in these two languages) did not change the initial set of the languages. After the exclusion of the fourth feature, "feature (1765) 2.5.3.simple sentence| (1778) .linear order of parts of the sentence|(1781) ..main|(1782) ...SVO is present", the number of languages increases to 14 (Figure 3: Map of the Uralic languages that have a complete set of relevant features). The result of the query is presented in Figure 4. The map shows the localization of languages that have the complete set of relevant features (circles) and languages with a basic word order different from SVO (drops). Besides the initial languages, the list also includes Hungarian, Votic, Mari Hill, and Mari Meadow. Hungarian has two dominant word orders: SVO and SOV (Kenesei et al., 1998). Votic does not have a prevailing word order (Ariste, 1968). The basic word order in Mari Hill and Mari Meadow is SOV (Pengitov et al., 1961). The results of queries to the database "Languages of the World" IL RAS aimed at searching for languages that have the Illative and Inessive cases, languages that have none of them, and those having only the Illative case are shown in Figure 5. The Illative and Inessive cases are absent in all Samoyedic languages (drop marker in Figure 5), except Selkup, which has the Illative (de Groot, 2017). As for the Finno-Ugric languages, all of them, except Saami and Khanty, have both the Illative and Inessive cases. An interesting question that arose during the analysis of the contrastive query results is whether there is an interrelation between the number of cases in the Uralic languages and the basic word order of the sentence. The information about the number of cases and the word order in the Uralic languages is given in Table 2. Figure 6 contains a diagram with all Uralic languages studied in the present research except Hungarian, Votic, Saami and Erzya, i.e., all languages with one basic word order (SOV or SVO) for which the data on the number of cases coincide across sources. Figure 6 shows that languages with SVO word order have from 12 (Izhor) to 22 (Veps) cases, and languages with SOV word order have from 6 (Mansi) to 13 (Selkup) cases. It means that the transition SOV→SVO in Uralic was accompanied by an enrichment of the case system. The coefficient of correlation between the word order (SOV/SVO) and the number of cases equals 0.83, which is very high. The geographic localization of languages with SVO word order and a rich case system and languages with SOV word order and a smaller number of cases (Figure 7) shows that the first group tends to the West, while the second to the East. Evidently, it is inappropriate to speak about a universal dependency of one feature on the other. The map (Figure 7) shows a distinct geographic and, consequently, genealogic (cf. Figure 2) division of the two groups of languages. Nevertheless, the geographical distribution of the two groups of languages allows us to suggest a hypothesis that the process of the case system enrichment and the process of dominant word order change were taking place simultaneously, under similar conditions. However, while the change of the basic word order of the "western" group can be explained by active contacts with the Indo-European languages, the impetus for the enrichment of the case system remains unclear and requires further study. Indo-European languages as a source of word order borrowing Taking into consideration P.
Hajdu's hypothesis (Hajdu, 1985) on the prevalence of the SOV word order in Proto-Uralic, and the data received during the present research, we believe that the basic word order in some Uralic languages changed to SVO at a rather late period, as this evolution took place mainly in the western Finno-Ugric languages. We also think that areal contacts of the Uralic languages with each other, as well as with the languages of other families, played an important role in this process (Marcantonio, 2014). Thus, throughout their long history the Uralic languages were in contact with the Turkic, Tungusic, Indo-European and, probably, Yeniseian languages. Using the LangFam program, we received data on the percentage of languages with SOV and SVO word orders for the families mentioned above (the number of languages represented in the database "Languages of the World" IL RAS is given in brackets). As shown in Table 3, only the Indo-European languages have a significant share of languages with the basic SVO word order. However, at the oldest period of their existence, they were also characterized by the sequence subject-object-verb (Lehmann, 1974). Now let us proceed to the basic word order in different groups of the Indo-European language family (Table 4). Noteworthy, the dominance of the SVO word order is characteristic of the modern western Indo-European languages, and it is very rare for the Indo-Aryan branch. Thus, the Uralic languages could acquire the order subject-verb-object as a result of contacts with the Indo-European languages of the western area. Notes: 1 - The dominant word order for Gothic is not reliably defined. As shown in (Antonsen, 1975; Hopper, 1975; Lehmann, 2005-2007), the verb in the preliterate Germanic languages was placed at the end, which is a relic of the Proto-Indo-European syntax. This brings us to the conclusion that the modern system of the word order is a result of the evolution of the last centuries. Thus, the SVO word order in the Uralic languages of the western and northern area can be the result of contacts, first of all, with the Germanic and Slavic branches. Probably, contacts between the different Uralic languages also played an important role in this process. However, the genetic closeness of most languages that underwent the evolution in the word order (the Finnic languages) does not seem to be the key factor. The word order in other Uralic languages remained subject-object-verb, in some respect due to the "conserving" influence of the neighboring languages. The SOV word order in Proto-Indo-European and Proto-Germanic can explain why the process of transition from one basic word order to another did not begin earlier, despite the long contacts between the Indo-European and Uralic languages, and it can also help define the lower borderline of the beginning of the process to be not earlier than the early second millennium AD. Discussion As shown in previous studies (Danilova et al., 2016; Makarova & Polyakov, 2015), relevant features of a language family are not always genealogical. In particular, Proto-Uralic did not have the Inessive and Illative cases. Evidently, these two cases developed in Proto-Finno-Ugric after the disintegration of the proto-language into two branches, Proto-Finno-Ugric and Proto-Samoyedic, as the Samoyedic languages do not have the cases in question. Besides, the basic SVO word order also was not characteristic of Proto-Uralic. It was borrowed from the Indo-European languages as a result of close areal contacts.
Though the SVO word order is not a relevant feature of the Indo-European languages (41.13% of all Indo-European languages have it), it is the most widespread. Noteworthy, the SVO word order is not genealogic for the Indo-European languages either, as both Proto-Indo-European and Proto-Uralic had SOV (Mallory & Adams, 1997). Noteworthy, the process of basic word order change was accompanied by the enrichment of the case system of the languages in question. Presumably, new cases (compared to the proto-language) developed as a result of the grammaticalization of postpositional constructions, which are very frequent in the Uralic languages. Nevertheless, the question of what provided an impetus for this process has not yet been answered in studies of modern Uralic. Based on (Greenberg, 1963) and succeeding works (Comrie, 1989; Hoffman, 1996) dedicated to the topic, it can be noted that one of the means of expressing locative relations in the Finnic languages and Saami is constructions with prepositions, which do not occur in other Uralic languages. Speaking about the areal contacts, we can say that this question remains unanswered. So far it is only possible to speak about some "point" phenomena. An example of such phenomena is the broadened semantics of the second Prolative in the southern dialect of Veps under the influence of Russian, or the change of morphosyntactic characteristics of the Liv translative-comitative under Lithuanian influence (Grünthal, 2003). Another interesting fact is that Selkup is the only Samoyedic language that has the Illative case. This case is absent in the languages of the peoples whose area partially intersects with Selkup: the Khanty, Evenki, and Ket. Probably, it was borrowed from Komi-Zyrian after the XVI c., when a mass migration of people from the European part of Russia to Siberia began (Kuznetsova et al., 2002). However, this question remains open and requires further investigation. A high level of correlation between a certain word order (SOV/SVO) and the number of cases was found. Though it is wrong to suggest a universal interdependency of the two grammar features, it is noteworthy that those Uralic languages that underwent the transition from SOV to SVO word order also have an enriched case system compared to the proto-language (from 12 to 22 cases as opposed to 8 cases in Proto-Uralic). It can be explained by the fact that the change of the basic word order and the enrichment of the case system were occurring under similar circumstances. Nevertheless, while the SOV→SVO change is acknowledged to have happened under the influence of the Indo-European languages, the impetus for the development of the big number of additional cases compared to the proto-language, which is likely to be the result of the grammaticalization of postpositional constructions, requires further study. The authors applied the method of linguistic geophylogeny, in which a phylogenetic tree is combined with a map of the distribution of languages. Thus, the geographic localization of the ancestral homeland becomes more obvious. This method can be applied for localization of the ancestral homeland of other language families. The method of contrastive queries, which had already been tested by us, was used for the Uralic languages. It helped once again demonstrate its possibilities in the sphere of typological and areal research. The method of contrastive queries and the localization of the ancestral homeland give a gradient (vector of change) of grammar features in a family in the dynamics of its development.
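The reported correlation of 0.83 between word order and case count is a point-biserial correlation and is easy to reproduce in form. The R sketch below uses only the language/case pairs explicitly stated in the text, so it approximates rather than replicates the paper's full Table 2 sample:

    # Case counts and word orders as stated in the text (1 = SVO, 0 = SOV)
    cases <- c(Izhor = 12, Veps = 22, Finnish = 15, Estonian = 14,
               KomiZyrian = 16, KomiPermyak = 16,      # SVO group
               Mansi = 6, MariMeadow = 7, Selkup = 13) # SOV group
    svo   <- c(1, 1, 1, 1, 1, 1, 0, 0, 0)

    cor(svo, cases)  # point-biserial correlation; the paper reports 0.83 on its full sample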
Conclusion The Uralic languages have not been studied enough, as testified by the existence of different hypotheses on the localization of their ancestral homeland. The present research used new methods and new linguistic resources to specify these data. Particularly, using the method of linguistic geophylogeny, the authors confirmed the "Eastern" hypothesis of the origin of the Uralic languages. Besides, contrastive queries to the "Languages of the World" IL RAS database helped define the relevant features of the Uralic languages. Once again it was proven that the SVO word order is relevant, but not genealogic, for Uralic. Based on the word order studies in Estonian and Finnish up to the 19th century, the authors claim that the change of the dominating word order began as late as the 16th century. A correlation between a certain word order (SOV/SVO) and the number of cases was found. The paper presented a map showing the change of the case system and the basic word order of a simple clause as a result of areal contacts. The absence of some Uralic languages in the "Languages of the World" database can be mentioned as a limitation of the present work. Though their inclusion would not influence the results of the contrastive query to any considerable extent, this fact still made it impossible to analyze all representatives of the family regarding their relevant features. A possible direction of future work includes a more thorough study of the time when the basic word order changed from SOV to SVO in some Uralic languages. Another question that needs attention is the reason for the significant enrichment of the case system of the "Western" Uralic languages, as none of the language families that were in close contact with the Uralic languages is characterized by such a high number of cases.
2019-02-17T14:19:40.551Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "c17241eedc3c77a978126d4b6d0de864353baefe", "oa_license": null, "oa_url": "http://xlinguae.eu/files/XLinguae1_2018_15.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "b5e6e56b84355b5d6719955947b9c91b4694ff5d", "s2fieldsofstudy": [ "Linguistics" ], "extfieldsofstudy": [ "Computer Science" ] }
270317994
pes2o/s2orc
v3-fos-license
Periparturient Changes in Voluntary Intake, Digestibility, and Performance of Grazing Zebu Beef Cows with or without Protein Supplementation Simple Summary Several studies with Bos taurus cows report a decrease in voluntary intake close to parturition. However, there are few studies evaluating these parameters in grazing Nellore cows receiving protein supplementation, which could mitigate the decrease in forage intake and improve animal performance. Hence, this study sought to understand how the feed intake and performance of Nellore cows on pasture change during the peripartum period. Our study found a significant reduction in cows' voluntary intake as they approach parturition, which provides a rational approach to supplementing pregnant cows at the end of gestation, improving production rates in calf–cow systems in the tropics. Abstract We aimed to understand the changes in nutritional parameters and performance of beef cows during the peripartum, whether receiving or not receiving protein supplements. Forty cows were used, divided into two treatments: CON—mineral mix and SUP—protein supplementation. A digestibility trial was performed at 45, 30, and 15 days (d) before parturition and at 20 and 40 d of lactation. The ADG of cows pre- and postpartum was recorded, along with the BCS, in gestational (GT) and maternal (MT) tissues in the prepartum. There was an effect of treatment and period (p ≤ 0.044) for intakes of DM and CP. The forage intake was similar (p > 0.90) but varied with the effect of days related to parturition (p < 0.001). There was a 14.37% decrease in DM intake from d −30 to d −15 of prepartum. In the postpartum, at 20 d of lactation, there was an increase of 72.7% in relation to d −15 of prepartum. No differences were observed in postpartum ADG or BCS at parturition and postpartum (p ≥ 0.12). However, higher total and MT ADG (p ≤ 0.02) were observed in animals receiving supplementation, while ADG in GT remained similar (p > 0.14). In conclusion, there is a decrease in intake for pregnant cows close to parturition and greater performance of animals supplemented in the prepartum. Introduction In tropical conditions, pregnant beef cows are commonly raised on pastures, where they usually experience mid-to-late gestation during the dry season. This environmental scenario often results in nutrition restrictions for the cows [1]. Some studies have demonstrated that protein supplementation during the prepartum (post-weaning) period has a more pronounced effect on animal performance compared to supplementation during other physiological phases of beef cows [2]. However, there is a lack of studies assessing how protein supplementation impacts the dynamics of body tissues in beef cows on pasture during the peripartum period. Understanding changes in voluntary intake during both late gestation and early lactation is crucial, as these are the periods with the highest nutritional requirements for beef cows [3]. Typically, voluntary intake in Bos taurus females declines nearing parturition [4], possibly due to limited ruminal space caused by the expansion of the gravid uterus. After parturition, there is an increase in feed intake [5]. In contrast, there is evidence that physical constraints on feed intake during late gestation may be compensated by an increased passage rate [5].
Furthermore, the extent of the decline in feed intake towards the end of gestation in grazing Zebu beef cows remains to be characterized. Clarifying it could provide a strategic approach for supplementing grazing pregnant beef cows in cow-calf systems in the tropics. Overall, studies investigating the effects of the gestation period on feed intake in beef cows have primarily focused on Bos taurus, which exhibits some physiological differences compared to Zebu cattle [6].

In the tropics, studies have revealed that protein supplementation may improve forage intake by improving the adequacy of substrates (i.e., energy and protein) in both metabolism and the rumen [7]. Therefore, we reasoned that cows receiving protein supplementation during late gestation might not experience severe restrictions in forage intake, thus potentially improving animal performance.

Our hypothesis is that grazing Nellore beef cows exhibit a decrease in voluntary intake close to parturition, followed by an increase after parturition. However, we anticipated that this decrease would be less pronounced in cows receiving protein supplementation, improving animal performance and reducing maternal tissue loss. Hence, we aimed to investigate the pattern of voluntary intake and digestibility in grazing Zebu beef cows throughout the peripartum period. Simultaneously, we aimed to understand whether protein supplementation during this period alters the pattern of feed intake and animal performance.

Materials and Methods

The experiment was conducted at the Beef Cattle Facility of the Animal Science Department at the Universidade Federal de Viçosa, Minas Gerais, Brazil (20°45′ S and 42°52′ W). All animal care and handling procedures were approved by the Animal Care and Use Committee of the Universidade Federal de Viçosa (Protocol 045/2021).

Animal Management, Experimental Design, and Treatments

Forty multiparous Nellore cows carrying male fetuses (F1 Nellore × Red Angus), with initial body weight (BW) of 525 ± 46 kg and initial body condition score (BCS) of 5.25 ± 0.85, were used. Cows were submitted to a fixed-time artificial insemination protocol using semen from the same sire. Cows (the experimental unit) were randomly allocated into eight paddocks of seven hectares each, evenly covered with Urochloa decumbens grass, with free access to water and feeders.

The experiment was performed according to a completely randomized design. Two treatments were evaluated: control cows, which received only a mineral mixture throughout the entire experiment, and supplemented cows, which received a daily protein supplement at 1.0 kg per cow per day. The supplement was provided daily at 11:00 h to minimize any interference with grazing behavior. Treatment application started 60 days before parturition (220 days of gestation) and continued until 40 days after parturition. All animals were rotated among the paddocks every 7 days to control possible paddock effects on treatments.

The supplement was formulated to contain 28% crude protein (CP) and was supplied to meet approximately 24% of the CP maintenance requirements of a pregnant cow averaging 525 kg, at 235 days of gestation, with an expected calf birth weight of 32 kg, according to the Nutrient Requirements of Zebu and Crossbred Cattle-BR-CORTE [8]. The chemical composition of the supplement and pasture can be found in Table 1 (table footnotes: 2 = days relative to parturition; 3 = neutral detergent fiber corrected for contaminant ash and protein; 4 = indigestible neutral detergent fiber; 5 = neutral detergent insoluble nitrogen).
Sample Collection and Measurements

The BCS and BW of the cows were recorded 45 and 7 days prior to the estimated parturition date, on the day of parturition, and 30 days after parturition. The BCS was assessed by three trained observers using a scale from 1 to 9 [9]. Cows were weighed at 08:00 h, except on the day of parturition. Calves' weights were recorded at birth and 30 days after birth. The pregnancy rate of cows at the end of the breeding season was also recorded. During the breeding season, the cows were synchronized and FTAI was performed. Pregnancy diagnosis was conducted by transrectal ultrasonography. The synchronization protocol was performed as follows: an intravaginal progesterone-release device (Primer, Tecnopec, São Paulo, Brazil) was introduced, and cows received an injection of 2.0 mg of oestradiol benzoate (Tecnopec, São Paulo, Brazil) on day 0. On day 7, the intravaginal device was removed, and a 2 mL injection of cloprostenol sodium (Ciosin, MSD Saúde Animal, São Paulo, Brazil) was administered. On day 8, cows received 0.5 mL of oestradiol cypionate via injection (E.C.P., Zoetis-Pfizer, Campinas, Brazil).

Intake and apparent digestibility trials were performed 45, 30, and 15 days before the expected parturition date and 20 and 40 days after parturition, using markers. Each trial lasted nine days, with five days to stabilize the markers' fecal excretion [10], followed by four days of sample collection. Fecal output was estimated using chromic oxide as an external marker. Chromic oxide was infused via the esophagus at a dose of 15 g per animal from the first to the eighth day of each trial. Individual supplement intake was estimated using titanium dioxide, mixed daily with the supplement at 15 g per animal [11]. Forage intake was estimated using indigestible neutral detergent fiber (iNDF) as an internal marker [12]. Fecal samples were taken immediately after defecation or directly from the rectum of the animals. Fecal collections were scheduled at 18:00, 14:00, 10:00, and 06:00 h on days 6, 7, 8, and 9 of each trial, respectively [13]. Samples were pooled per animal and period (i.e., 45, 30, and 15 days before the expected parturition date and 20 and 40 days after parturition).

On the last day of each trial, forage samples were taken from each paddock by hand-plucked sampling in order to assess the chemical composition of the consumed forage. Concurrently, forage availability was assessed using a metal square (0.5 × 0.5 m) at four randomly chosen points within each paddock. A second forage sample was pooled per paddock.

Laboratory Analysis and Calculations

Samples of forage, supplement, and feces were oven-dried at 55 °C and subsequently processed to pass through 1 mm and 2 mm sieves. The contents of dry matter (DM; dried for 16 h at 105 °C; INCT-CA method G-003/1), ash (complete combustion at 550 °C; method M-001/2), and N (Kjeldahl method; INCT-CA method N-001/2) were evaluated according to the standard analytical procedures of the Brazilian National Institute of Science and Technology in Animal Science (INCT-CA) [14] using the samples processed at 1 mm. The content of neutral detergent fiber (NDF) was evaluated according to Mertens et al.
[15], using heat-stable α-amylase and omitting sodium sulfite. The NDF content was expressed with correction for contaminant ash and protein (NDFap). The content of indigestible neutral detergent fiber (iNDF) was estimated through a 288 h in situ incubation procedure using samples processed at 2 mm [16].

The chromium concentration in the fecal samples was assessed by atomic absorption spectrophotometry (GBC Avanta Σ, Scientific Equipment, Braeside, Victoria, Australia) after digestion with nitric and perchloric acids at a ratio of 3:1 (v/v), in a one-step digestion with sodium molybdate as catalyst [17]. The concentration of titanium dioxide in the fecal samples was evaluated by spectrophotometry (INCT-CA method M-007/2).

The average daily gain (ADG) in maternal and gestational tissues was calculated. For this, non-pregnant shrunk body weight (SBWnp), pregnant shrunk body weight (SBWp), and the pregnancy component (PREG) were calculated based on the models described by Gionbelli et al. [18], where PREG = GUdp + UDdp; GUdp = increase in the gravid uterus during pregnancy (difference between the weight of the pregnant uterus and the weight of the uterus in the non-pregnant condition); and UDdp = increase in udder weight during pregnancy (difference between the udder weight of the pregnant cow and the estimated udder weight in the non-pregnant condition). The shrunk body weight (SBW) was calculated in the pregnant condition as SBWp = 0.8084 × BWp^1.0303, where BWp = body weight of the pregnant cow. The SBW of a non-pregnant cow corresponds to the difference between SBWp and PREG (SBWnp = SBWp − PREG).

Potentially digestible DM (pdDM) was estimated according to Paulino et al. [18] as:

pdDM = 0.98 × (100 − NDF) + (NDF − iNDF),

where NDF and iNDF are expressed as percentages of DM. Fecal output was estimated by the ratio between the amount of chromium supplied and its concentration in the feces. Individual supplement intake (SI; kg/day) was estimated from the ratio of titanium dioxide in the feces to the concentration of the marker in the supplement, as follows:

SI = (FO × CMf / IS) × SOG,

where FO is the fecal output (kg/day); CMf is the concentration of the marker in the feces (kg/kg); IS is the marker present in the supplement offered to each group (kg/day); and SOG is the supplement amount offered to the group of animals or treatment (kg/day). Forage intake (FI) was calculated from the following equation:

FI = [(FO × iNDFf) − (SI × iNDFs)] / iNDFfor,

where FO is the fecal output (kg/day); iNDFf is the concentration of iNDF in the feces (kg/kg); SI is the supplement DM intake (kg/day); iNDFs is the concentration of iNDF in the supplement (kg/kg); and iNDFfor is the concentration of iNDF in the forage (kg/kg).
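For illustration, the marker arithmetic above can be scripted directly. The following is a minimal sketch, not the authors' code; all numeric inputs are invented placeholders chosen only to give plausible magnitudes.

```python
# Minimal sketch of the marker-based intake calculations described above.
# All numeric values are hypothetical placeholders, not study data.

def fecal_output(cr_supplied_kg_day: float, cr_in_feces_kg_kg: float) -> float:
    """Fecal output (kg DM/day) = chromium supplied / chromium concentration in feces."""
    return cr_supplied_kg_day / cr_in_feces_kg_kg

def supplement_intake(fo: float, ti_in_feces: float, ti_offered_group: float,
                      supplement_offered_group: float) -> float:
    """SI = (FO x CMf / IS) x SOG; marker amounts in kg/day, concentrations in kg/kg."""
    return (fo * ti_in_feces / ti_offered_group) * supplement_offered_group

def forage_intake(fo: float, indf_feces: float, si: float,
                  indf_supplement: float, indf_forage: float) -> float:
    """FI = [(FO x iNDFf) - (SI x iNDFs)] / iNDFfor."""
    return (fo * indf_feces - si * indf_supplement) / indf_forage

# Example with placeholder values:
fo = fecal_output(cr_supplied_kg_day=0.015, cr_in_feces_kg_kg=0.0045)  # ~3.3 kg/day
si = supplement_intake(fo, ti_in_feces=0.0040, ti_offered_group=0.015,
                       supplement_offered_group=1.0)
fi = forage_intake(fo, indf_feces=0.30, si=si, indf_supplement=0.05, indf_forage=0.12)
print(f"FO = {fo:.2f} kg/day, SI = {si:.2f} kg/day, FI = {fi:.2f} kg/day")
```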
Statistical Analysis

The experiment was analyzed according to the following model:

Y_ij = μ + T_i + e_(i)j,

where Y_ij is the observation taken in experimental unit j submitted to treatment i; μ is the general constant; T_i is the fixed effect of treatment (control or protein supplement); and e_(i)j is the random error, assumed to be NIID(0, σ²ε). The cow was considered the experimental unit. Intake and digestibility characteristics were analyzed by considering the day relative to parturition as a repeated measure. The choice of the best structure of the (co)variance matrix was based on the lowest Akaike's information criterion value. The degrees of freedom were estimated by the Kenward-Roger method. Effects of maternal treatments, days relative to parturition, and their interaction were analyzed.

Data related to animal performance were analyzed separately for the pre- and postpartum phases. When pertinent, the initial body weight of the cows was used as a covariate in the model.

Significance was declared at p < 0.05, and trends were considered at 0.10 ≥ p ≥ 0.05. All statistical analyses were carried out using the PROC MIXED procedure of SAS 9.4 (SAS Inst. Inc., Cary, NC, USA). Binary data (i.e., pregnancy rate) were analyzed using the GLIMMIX procedure of SAS.
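As a rough illustration of the repeated-measures analysis described above: the original used SAS PROC MIXED, and the Python/statsmodels sketch below is only an approximation with a hypothetical data layout (file name and column names are assumptions, and a random intercept per cow stands in for the selectable covariance structures of PROC MIXED).

```python
# Sketch of a repeated-measures mixed-model analysis of intake, analogous in
# spirit to SAS PROC MIXED with cow as the experimental unit. Hypothetical data.
import pandas as pd
import statsmodels.formula.api as smf

# Expected layout: one row per cow x day-relative-to-parturition, with columns
# cow, treatment (CON/SUP), day (-45, -30, -15, 20, 40), dm_intake.
df = pd.read_csv("intake_long.csv")  # hypothetical file

# A random intercept per cow approximates the within-cow correlation of the
# repeated measures; SAS would let you compare (co)variance structures by AIC.
model = smf.mixedlm("dm_intake ~ C(treatment) * C(day)", data=df, groups=df["cow"])
result = model.fit(reml=True)
print(result.summary())
```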
Intake and Apparent Digestibility

On average, the availability (kg/ha) and herbage allowance (kg pdDM/100 kg of body weight) of the Urochloa decumbens grass were 3267 and 6.31, respectively.

Overall, protein supplementation increased the intake (p ≤ 0.044; Table 2) of DM, OM, CP, and digested organic matter (DOM) but did not affect the intake of NDFap and iNDF (p ≥ 0.67). On average, supplemented cows exhibited a 9.13% higher total DM intake than cows receiving only the mineral mixture. No interaction effect (p ≥ 0.16) between treatments and days relative to parturition was observed on voluntary intake. On the other hand, there was an effect (p ≤ 0.001) of days relative to parturition for all voluntary intake characteristics (Table 2). The intake of total DM and forage decreased (p < 0.01) by 14.37 and 14.23%, respectively, from the 250th (30 days before parturition) to the 265th day of gestation (15 days before parturition; Figure 1). Likewise, from the end of gestation (15 days before parturition) to early lactation (20 days after parturition), there was an increase (p < 0.01) of 72.7 and 77.3% in total DM and forage intake, respectively (Figure 1). Crude protein intake remained the same (p > 0.05) during the prepartum period, showing an increase 20 days after parturition but an intermediate value 40 days after parturition (p < 0.05). There was a 19% decrease (p < 0.05) in NDFap intake from the 250th (30 days before parturition) to the 265th day of gestation (15 days before parturition; Figure 2), with a higher intake at 20 days after parturition (p < 0.05).

There was an interaction effect (p ≤ 0.066) between treatments and days relative to parturition for all apparent digestibility characteristics (Table 3). Slicing this interaction showed that protein supplementation increased (p ≤ 0.05) OM digestibility in the last 30 days of gestation and at 40 days after parturition, but no difference (p ≥ 0.05) was detected on the other days relative to parturition (Figure 2A). On the other hand, we observed a higher NDFap digestibility in supplemented cows only at 40 days after parturition (p < 0.05; Figure 2B). In contrast, protein supplementation improved (p ≤ 0.001) CP digestibility during the prepartum phase, whereas no effect (p ≥ 0.15) was observed during the postpartum period (Figure 2C). (Table 3 notes: 1 NDFap = neutral detergent fiber corrected for contaminant ash and protein; 2 SEM = standard error of the mean; 3 Sup. = effect of protein supplementation; S × P = interaction between protein supplementation and days relative to parturition (P).)

Performance

Protein supplementation increased (p ≤ 0.012) both total and maternal ADG in the prepartum period (Figure 3A,B). Conversely, there was no effect of treatments (p > 0.13) on gestational ADG (Figure 3C). Similarly, protein supplementation did not affect (p ≥ 0.12) BCS at parturition and lactation, postpartum ADG, pregnancy rate, or calves' weights at birth and at the age of 30 days (Table 4).

Discussion

The pdDM constitutes an integrative measure of both quantitative and qualitative characteristics of the forage, as it simultaneously defines the available forage that is potentially convertible into animal products. In tropical regions, some authors have suggested that a minimum of 4 to 5 kg of pdDM/100 kg of BW should be ensured to allow selective grazing by animals and, therefore, not affect voluntary intake and performance [19]. It is noteworthy that throughout the whole experiment, the herbage allowance (kg pdDM/100 kg BW) remained stable and within the recommended range. Thus, any effects of inadequate pdDM availability on animal performance would be unlikely.

In this study, cows showed body weight loss in maternal tissues during late gestation. Indeed, several authors have reported a pattern of transition from the anabolic state to the catabolic state in pregnant beef cows, on average, from 240 days of gestation [20]. During late gestation, there is an increase in the cow's protein requirements [3]. In our study, forage had a low CP content during the dry season (on average, 57 g CP/kg DM). Thus, it is reasonable to state that cows increased body mobilization in an attempt to meet the demands for fetal-placental growth and development, as well as the N demands for microbial growth in the rumen via recycling [21,22].

Even though all cows lost maternal tissue, protein supplementation reduced the mobilization of the cows' body reserves, leading to higher total weight gain. This can be explained, at least partially, by the additional supply of nutrients, especially protein, via supplementation. In agreement, protein supplementation has been reported to increase the mRNA expression of skeletal protein synthesis markers in supplemented cows [23]. Furthermore, additional N supply has been reported to enhance N balance, reducing reliance on skeletal muscle as a source of amino acids in pregnant heifers. This has been accompanied by a decreased abundance of proteins related to muscle degradation [24].
Otherwise, protein supplementation did not affect gestational tissue gain, suggesting that the non-supplemented cows adjusted their metabolism to avoid nutrient deficiencies for the fetus. During gestation, females of all species undergo homeorhesis. Thus, mammalian females tend to prioritize fetal growth, exhibiting coordinated changes in their tissue metabolism to regulate the nutrient partitioning needed to support the fetus [25,26]. Additionally, it has been demonstrated that under a metabolizable protein deficiency, placental blood flow is increased, indicating an adaptation of the placental vasculature [27]. Along these lines, the placenta may enhance the abundance of glucose transporter 3 (GLUT-3) in an attempt to increase its capacity for placental glucose transfer [28].

Despite some studies reporting a higher calf birth weight in cows supplemented during late gestation [29], the pattern of metabolic adjustment (i.e., homeorhesis) is consistent with the similar calf birth weights between treatments. This result aligns with some studies in tropical conditions, which demonstrate a lack of effect of maternal supplementation of beef cows during late gestation on calf birth weight [30,31]. The variation in responses among studies can be attributed to the level of restriction experienced by non-supplemented cows (i.e., pasture quality), as well as the amount of supplement offered to supplemented cows.

Early lactation is a critical period, as the peak of lactation occurs from 3 to 5 weeks after parturition in Nellore cows [32]. This period is typically accompanied by numerous metabolic, physiological, and hormonal changes, all occurring in an integrated manner to support the nutrient demands of milk synthesis [25]. In our study, peripartum protein supplementation was unable to prevent postpartum weight loss. It has been reported that postpartum protein supplementation improves milk yield [33] but does not affect productive performance [31]. Our data showed that maternal protein supplementation did not affect offspring performance at the age of 30 days. This result may suggest that milk yield was not affected by treatments.

In cow-calf operations, feed supplementation should be applied at the specific times when the efficiency of supplement utilization by beef cows is optimized [34]. In the tropics, late gestation in beef cows usually aligns with the dry season. In this regard, substantial evidence suggests that feed supplementation of beef cows during late gestation is more effective than during early lactation.

Assessing BCS at late gestation and at parturition is essential, as it accurately estimates body energy reserves in beef cows, allowing predictions of reproductive success and an understanding of the impacts of nutritional strategies on animal performance [35]. It is noteworthy that cows in both treatments presented adequate body condition (averaging 5.0) at parturition [36]. This can be partially attributed to the adequate herbage allowance. Under these conditions, differences in pregnancy rate would not be expected.
Providing an adequate N supply in the rumen is crucial for optimizing the digestion of fibrous compounds and increasing forage intake [37]. Furthermore, maximizing forage intake is related to the metabolic adequacy of absorbed nutrients [38]. Our hypothesis posited that protein supplementation could avoid the decrease in the cows' feed intake close to parturition, as it potentially enhances fiber degradation, leading to an increased passage rate and thereby promoting increased forage intake. However, we observed that the decrease in feed intake as pregnancy progressed was similar in both treatments.

Findings from other studies suggest that forage utilization is optimized when dietary CP is raised to 100 g/kg DM by protein supplementation [39]. In our study, supplementation increased dietary CP to close to 70 g/kg DM during the dry season. This could explain the lack of effect of protein supplementation on forage intake during the dry season. In contrast, during the rainy season, the forage itself already had a CP content close to 100 g/kg DM, which is considered adequate for optimized pasture utilization. Thus, we would not expect any effect of protein supplementation on forage intake during the rainy season.

Our primary concern was to understand changes in feed intake throughout the peripartum period. We expected a decrease in voluntary intake close to parturition, which was confirmed in both treatments. Exponential fetal growth reduces gastrointestinal capacity, leading to decreased voluntary intake [20,40,41]. Moreover, it is important to note that in ruminants, approximately 75% of fetal growth takes place during late gestation, which further constrains rumen capacity [42]. Therefore, cows may be unable to meet their nutrient requirements, leading to maternal weight loss, as observed in our study. Indeed, we observed an average decrease of 14% in forage intake from the 250th to the 265th day of gestation. This finding aligns with studies conducted with Bos taurus pregnant beef heifers, wherein a 9.1% decrease in voluntary intake was reported in the last week of gestation [5].

In the tropics, several authors have reported that forage qualitative characteristics such as CP and iNDF contents are closely associated with forage intake in grazing cattle [43]. It is worth mentioning that even with an improvement in pasture quality during late gestation (i.e., a decrease in indigestible fiber and an increase in CP), cows drastically reduced their feed intake. This observation confirms that the compression of the rumen by the gravid uterus prevails over any other effect on intake.

After parturition, there was a 77% increase in forage intake compared to the last evaluation period during pregnancy. Lactating ruminants have a higher intake than non-lactating ruminants. In fact, differences of up to 100% in voluntary intake have been observed between pregnant and lactating sheep and cattle. However, some authors suggest that constraints on cows' intake capacity persist during the first week of lactation. This constraint arises because the rumen is still returning to its normal volume, with intake increasing from this stage onwards [44]. In fact, some studies have observed a decrease in rumen weight at the end of pregnancy, followed by a re-establishment of the cows' intake capacity by 20 days after parturition [45,46].
As expected, the intakes of DM, DOM, and NDFap followed the variations observed in forage intake throughout the periods. Conversely, variations in CP intake did not align with the forage intake pattern. This discrepancy can be attributed to the highest CP content of the prepartum forage occurring during the period of lowest intake (i.e., 15 days before parturition), which explains the similarity in CP intake across the prepartum period. The increase in OM digestibility for animals supplemented at 40 days of lactation is associated with the greater NDFap digestibility in this period. Additionally, the enhanced CP digestibility for supplemented cows is explained by the greater CP intake, which increases its participation in the total diet, reducing the relative participation of the metabolic fecal fraction [47]. The higher total DM intake in supplemented cows is explained exclusively by the supplement intake, as there was no effect of treatments on forage intake. The similar NDFap intake between treatments reflects the lack of effect of treatments on forage intake, as forage represents most of the dietary fiber.

This study brings a rational, practical approach to supplementation strategies for grazing Bos indicus beef cows, as it shows a marked decrease in voluntary intake at the end of gestation. While protein supplementation may not improve the nutritional characteristics and birth weight of calves, it effectively mitigates the loss of maternal tissue.

Conclusions

Regardless of protein supplementation, grazing Zebu beef cows exhibit a decline in voluntary intake as parturition approaches, followed by a subsequent increase postpartum. While protein supplementation may not improve the nutritional characteristics, it effectively mitigates the loss of maternal tissue. This aspect is crucial for achieving favorable results in cow-calf operations.

... project administration, and supervision. S.d.C.V.F.: conceptualization, writing (review and editing), project administration, supervision, and funding acquisition. M.F.P.: conceptualization, writing (review and editing), project administration, supervision, and funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding: The first author was supported by a scholarship from the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES; grant 88887.482607/2020-00).

Institutional Review Board Statement: This study was conducted in accordance with the Ethical Principles in Animal Experimentation adopted by the National Council for Animal Experimentation Control (CONCEA) and was approved by the Committee on Ethics in the Use of Farm Animals of the Universidade Federal de Viçosa (CEUAP-UFV) (Protocol number 045/2021).

Informed Consent Statement: Not applicable.

Figure 1. Intake of forage (solid line) and total dry matter (dotted line) in peripartum grazing Nellore cows. Means followed by different lowercase letters (p < 0.001) differ between forage intake periods. Means followed by different capital letters (p < 0.001) differ between periods of total dry matter intake.

Figure 2. Apparent digestibility of organic matter (A), neutral detergent fiber corrected for ash and protein (B), and protein (C) throughout the peripartum of grazing Nellore cows. Treatment means (CON and SUP) within each period accompanied by * differ from each other (p < 0.05).

Table 1. Chemical composition of the supplement and pasture, and forage mass.
Table 2. Effects of protein supplementation on voluntary intake of grazing beef cows during the peripartum period.

Table 3. Effects of protein supplementation on apparent digestibility (g/kg) of grazing beef cows during the peripartum period.

Table 4. Effects of protein supplementation on performance of grazing beef cows during the peripartum period.
Antioxidant activity of selected plant extracts for palm oil stability via accelerated and deep frying study

Antioxidants are organic compounds that help to prevent lipid oxidation and improve the shelf-life of edible oils and fats. Currently, synthetic antioxidants are used as oil-stabilizing agents. However, synthetic antioxidants have been associated with various health risks. As a result, natural antioxidants, such as those from most parts of the olive plant, green tea, sesame, and medicinal plants, play an important role in retarding lipid oxidation. Palm oil was fried continuously at 180 °C for 6 days using Lepidium sativum (0.2% w/v) and Aframomum corrorima (0.3% w/v) seed extracts as antioxidants. The oil in the herbal extract additive groups significantly maintained its quality during frying compared to the normal control and the group containing the food sample, with the L. sativum extract conferring greater oil stability than the A. corrorima extract. In contrast, the frying oil without herbal extract showed significant increases in physicochemical indicators of degradation, such as iodine value, acid value, free fatty acid, total polar compounds, density, moisture content, and pH, during repetitive frying. The antioxidant activity of the plant extracts was outstanding, with IC50 values in the range of 75-149.9 μg/mL, compared to the standard butylated hydroxyanisole (BHA), whose IC50 values were in the range of 74.9 ± 0.06-96.7 ± 0.75 μg/mL. The total phenolic and flavonoid contents were 128.6 ± 0.00 mg GAE/g and 130.16 ± 0.001 mg QE/g for L. sativum, and 127.0 ± 0.00 mg GAE/g and 105.76 ± 0.02 mg QE/g for A. corrorima, respectively. The significant effect of the plant extracts on oil degradation and the formation of free fatty acids was confirmed by Fourier transform infrared spectroscopy. The results of this study revealed that the ethanolic crude extracts of L. sativum and A. corrorima are potential natural antioxidants to prevent the degradation of palm oil.

Introduction

Lipid oxidation is a major cause of degradation during the storage and processing of edible fats, oils, and fat-containing products. It alters critical quality control parameters of fats and oils [1]. It also causes a variety of physical and chemical changes, resulting in significant decomposition [2,3]. Oxidation is a principal cause of quality deterioration and promotes rancidity and food degradation [4]. Oxidation reactions generate free radicals that set off chain reactions [3,5]. Furthermore, oxidative stress causes several fatal diseases in humans, such as cancer. Frying is a popular and long-standing culinary technique used to prepare meals all around the world [2,5]. The frying process degrades oils, causes them to lose their nutritional content, and produces toxic chemicals that are harmful to one's health.

Ethanol was selected for further extraction. The powdered plant materials (100 g each) were soaked in ethanol (1 L) for 24 h at room temperature using the maceration technique. The solvent extracts were filtered and concentrated using a rotary evaporator at 60 °C and 90 rpm to a solid consistency, then dried at room temperature. Finally, the crude extracts were packed in air-tight glass bottles with proper labels and kept in a refrigerator at 4 °C until used for the next experiment [23].
The qualitative phytochemical screening of the plant crude extracts for saponins, alkaloids, steroids, tannins, flavonoids, phenolics, terpenoids, glycosides, and quinones was performed using standard methods reported in a previous similar study [14].

Antioxidant activity of Lepidium sativum and Aframomum corrorima seed extracts

DPPH assay

The DPPH radical-scavenging activity of the plant extracts was determined by adding various concentrations of test extracts to 2.9 mL of a 0.004% (w/v) ethanol solution of DPPH. After 30 min of incubation at room temperature, the absorbance was measured at 517 nm against a blank [24]. The IC50 values (the concentration of sample required to scavenge 50% of free radicals) were calculated from the regression equation. Butylated hydroxyanisole (BHA) was used as a positive control, and all tests were performed in triplicate. The DPPH free radical inhibition (I%) was calculated using equation (1):

I% = ((Acontrol − Asample) / Acontrol) × 100, (1)

where A is absorbance.

Ferric ion (Fe3+) reducing antioxidant power assay

The reducing power assay was performed using the method described in a previous similar study with minor modifications [25]. Aliquots of 0.2 mL of various concentrations of the extracts (25-125 μg/mL) were mixed separately with 0.5 mL of phosphate buffer (0.2 M, pH 6.6) and 0.5 mL of 1% potassium ferricyanide. The mixture was incubated in a water bath at 50 °C for 20 min. After cooling to room temperature, 0.5 mL of 10% trichloroacetic acid was added, followed by centrifugation (769.23 × g) for 10 min. The supernatant (0.5 mL) was collected and mixed with 0.5 mL of distilled water. Ferric chloride (0.1 mL of 0.1%) was added, and the mixture was left at room temperature for 10 min. The absorbance was measured at 700 nm, and BHA was used as a positive control. The ability of the extracts to reduce Fe3+ to Fe2+ was calculated using the following equation (2):

Reducing power (%) = ((Acontrol − Asample) / Asample) × 100, (2)

where A is absorbance.

Hydrogen peroxide scavenging assay

The extracts' ability to scavenge hydrogen peroxide (H2O2) was determined using a slightly modified method from a previous report [26]. Aliquots of 0.1 mL of extracts (25-125 μg/mL) were transferred into Eppendorf tubes, and their volume was made up to 0.4 mL with 50 mM phosphate buffer (pH 7.4), followed by the addition of 0.6 mL of H2O2 solution (2 mM). The reaction mixture was vortexed, and after 10 min of reaction time, its absorbance was measured at 230 nm. BHA was used as the positive control, and the ability of the extracts to scavenge H2O2 was calculated using the following equation (3):

H2O2 scavenging activity (%) = ((Acontrol − Asample) / Asample) × 100, (3)

where A is absorbance.

Phosphomolybdenum assay

The phosphomolybdenum assay was conducted by treating aliquots of 0.1 mL of sample solution at different concentrations (25-125 μg/mL) with 1 mL of reagent solution (0.6 M sulfuric acid, 28 mM sodium phosphate, and 4 mM ammonium molybdate) [27]. The tubes were incubated at 95 °C in a water bath for 90 min. The samples were cooled to room temperature and their absorbance was recorded at 765 nm. BHA was used as the positive control, and the scavenging ability of the extracts was calculated using the following equation (4):

Scavenging ability (%) = ((Acontrol − Asample) / Asample) × 100, (4)

where A is absorbance.
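Since the IC50 values in these assays are obtained from the regression of percentage inhibition on concentration, the calculation can be sketched as follows. This is an illustrative sketch only; the absorbance readings are invented placeholders, and a simple linear fit over the tested range is assumed, as implied by the text.

```python
# Sketch: percentage inhibition, equation (1), and IC50 from a linear
# regression of inhibition (%) versus extract concentration. Placeholder data.
import numpy as np

conc = np.array([25, 50, 75, 100, 125])                  # ug/mL, as in the assays above
a_control = 0.80                                         # hypothetical blank absorbance
a_sample = np.array([0.62, 0.52, 0.44, 0.35, 0.27])      # hypothetical sample absorbances

inhibition = (a_control - a_sample) / a_control * 100    # equation (1)

slope, intercept = np.polyfit(conc, inhibition, 1)       # linear regression
ic50 = (50 - intercept) / slope                          # concentration at 50% inhibition
print(f"I% at 125 ug/mL = {inhibition[-1]:.1f}, IC50 = {ic50:.1f} ug/mL")
```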
Assay for total phenolics

The total phenolic content (TPC) was determined using the Folin-Ciocalteu reagent [28]. Briefly, 0.01 g of the crude extract was dissolved in 10 mL of ethanol and vortexed until the mixture became a homogeneous stock solution. From the stock solution, 0.2 mL of the supernatant was mixed with 0.8 mL of distilled water. Then, 0.1 mL of Folin-Ciocalteu reagent was added and left for 3 min at room temperature. Next, 0.8 mL of 20% (w/v) Na2CO3 was added to the mixture, which was incubated for 2 h in the dark. The absorbance was measured using a UV-Vis spectrophotometer at 765 nm. Gallic acid was used as a standard, and the absorbance, y, obtained for each plant sample was used in the equation y = 0.0098x − 0.2228, R² = 0.9991, where x is the standard concentration (Fig. 1A in the attached supplementary material). The value obtained, x, was then substituted for C1 in the equation C = C1 × V/m, where C is the total phenolic content in mg GAE/g, C1 is the concentration of gallic acid established from the standard curve, V is the volume, and m is the mass of the extract used.

Assay for total flavonoids

The total flavonoid content (TFC) was assessed by the aluminum chloride colorimetric method [29]. Briefly, 0.01 g of the crude extract was dissolved in 10 mL of ethanol and vortexed until it became a homogeneous stock solution. Then, 0.2 mL of the extract supernatant was mixed with 0.15 mL of 5% NaNO2 and incubated in the dark for 6 min at room temperature. Next, 0.15 mL of 10% (w/v) AlCl3 was added to the mixture, which was kept in the dark for 6 min at room temperature. After that, 0.8 mL of 10% (w/v) NaOH was added and the mixture was incubated in the dark for 15 min at room temperature. The absorbance was measured using a UV-Vis spectrophotometer at 510 nm. Quercetin (in 80% (v/v) ethanol) was used as a standard, and the absorbance, y, obtained for the plant sample was used in the equation y = 0.066x − 0.0142, R² = 0.9991, obtained from the standard curve (Fig. 1A in the attached supplementary material).
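The back-calculation from the standard curve to mg GAE/g (or mg QE/g for flavonoids) follows directly from the equations above. A minimal sketch, using the curve coefficients quoted in the text, an invented absorbance reading, and the assumption that the curve concentration x is in μg/mL:

```python
# Sketch: total phenolic content from the gallic acid standard curve
# y = 0.0098*x - 0.2228, followed by C = C1 * V / m as described above.
# The absorbance below is a hypothetical reading, not study data.

def conc_from_curve(absorbance: float, slope: float, intercept: float) -> float:
    """Invert y = slope*x + intercept to recover the standard-equivalent concentration x."""
    return (absorbance - intercept) / slope

a_765 = 1.04                                                   # hypothetical absorbance at 765 nm
c1 = conc_from_curve(a_765, slope=0.0098, intercept=-0.2228)   # gallic acid equivalents, ug/mL (assumed)
tpc = c1 * 10.0 / 0.01 / 1000.0    # C = C1*V/m with V = 10 mL, m = 0.01 g; ug/g converted to mg/g
print(f"TPC = {tpc:.1f} mg GAE/g")  # the quercetin curve gives TFC by the same arithmetic
```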
Deep frying protocol

The oil was fried for 6 h per day for a total of 6 days. Deep frying was carried out in a stainless steel electrical open fryer (10 L oil capacity) [31]. The treatments were conducted simultaneously in Group I (oil without any additives), Group II (oil with 0.02% BHA), Group III (oil with 0.2% w/v L. sativum extract and food), Group IV (oil with 0.3% w/v A. corrorima extract and food), and Group V (oil with food). A sample taken before frying represented day 0. The remaining oil was heated to 180 ± 2 °C and allowed to equilibrate at this temperature for 30 min. About 14 batches of 80 g of food were fried for 2.5 min each per day, at 30 min intervals, over 6 h. Approximately 100 mL of oil was collected from each fryer into amber bottles at the end of each day. All oil samples were flushed with slow bubbles of nitrogen from the bottom of the bottles and stored at −20 °C prior to physical and chemical analysis. Rapid measurements were taken when there was no moisture (bubbling) in the frying oil after each cycle. The effect of repetitive frying on the physicochemical parameters of the oil samples was evaluated [32].

Physicochemical and quality assessment of deep frying oils

The physicochemical characteristics of the deep-frying oil in each cycle were investigated. The physicochemical properties, such as acid value, refractive index, iodine value, saponification number, moisture content, and pH, were analyzed by the standard protocol of oil analysis, the AOAC official method (969.17). Several methods for determining the quality of deep-frying oils have been developed based on physical and chemical parameters. Oxidation parameters such as free fatty acids (FFAs), peroxide value (PV), iodine value (IV), conjugated dienes (CD), and conjugated trienes (CT) were the major parameters used to assess frying oil deterioration [33]. The density and anisidine value of the oil were determined by methods reported in the previous literature [34,35]. Conventional analytical methods, including titrimetric and spectrophotometric techniques, were adopted for the physicochemical analyses, following the guidelines of the official methods of the American Oil Chemists' Society (1998) [33,36].

Statistical analysis

A commercial statistical package (SPSS, version 25) was used; the plant extract antioxidant activities and the physicochemical parameters of the oil were measured in triplicate. Statistical differences and homogeneity among the groups were verified using one-way ANOVA. The normality of the data was verified using the kurtosis statistic. The analysis of variance for individual parameters was followed by a Tukey post hoc test to identify, on the basis of mean values, how each group differed from the others in multiple comparisons at a confidence level of 95% (p < 0.05).

Total phenolic and flavonoid content

The TPC of the ethanolic seed extracts of L. sativum and A. corrorima was 128.6 ± 0.00 and 127.0 ± 0.00 mg GAE/g gallic acid equivalents, respectively (Table 2). There was a significant amount of total flavonoid content (TFC) in both L. sativum and A. corrorima, estimated at 130.16 ± 0.01 and 105.76 ± 0.02 mg QE/g, respectively.

Antioxidant activity

The antioxidant activity of the plant extracts was evaluated using the DPPH free radical scavenging assay, hydrogen peroxide inhibition assay, phosphomolybdenum assay, and ferric reducing power assay (Table 2). The inhibitory concentrations (IC50) for the different assays were calculated from the regression equations. A lower IC50 value indicates a greater potential to scavenge free radicals. The IC50 value of BHA (the positive control) was lower than that of the corresponding plant extracts in most assays. However, the IC50 values of the L. sativum (75.9 ± 0.31 μg/mL) and A. corrorima extracts (77.3 ± 0.58 μg/mL) were lower than that of BHA (83.4 ± 0.26 μg/mL) in the hydrogen peroxide scavenging assay, indicating that the plant extracts had stronger antioxidant activity than BHA in this assay.

Free radical scavenging activity

The percentage free radical scavenging activity of the ethanolic extracts of L. sativum and A. corrorima increased with increasing concentration (25-125 μg/mL). For the scavenging activity, the hydrogen-donating ability of the extracts toward the DPPH free radical was evaluated. Both plant extracts increased the free radical scavenging activity (%) with increasing concentration (Fig. 1a). The highest scavenging activity of the ethanolic extracts was recorded at 125 μg/mL for L. sativum (66.03 ± 0.774%), which is comparable to the positive control BHA (71.38 ± 0.834%). Moreover, the DPPH quenching ability of A. corrorima increased significantly with increasing concentration (Fig. 1a).
Reducing power assay

The reducing power of the extracts was measured for concentrations up to 125 μg/mL and showed a significant increase as the concentration increased (Fig. 1b). Of the two tested plants, the L. sativum seed extract possessed the higher free radical reducing activity (57.89 ± 0.254%) compared to A. corrorima (49.68 ± 0.763%) at 125 μg/mL. However, when compared to BHA (58.68 ± 0.39%), both plant extracts had a lower ability to reduce free radicals (Fig. 1b).

Phosphomolybdenum assay

The free radical inhibition of the plant extracts increased with concentration (Fig. 1c). The antioxidant activity of the L. sativum extract (60.30 ± 0.151%) was significantly higher than that of the A. corrorima extract (44.72 ± 0.362%) at 125 μg/mL. In addition, the molybdenum ion reduction by the ethanolic extract of L. sativum showed radical scavenging activity comparable to the positive control BHA (79.39 ± 0.69%).

Hydrogen peroxide scavenging assay

The hydrogen peroxide scavenging activity of the L. sativum and A. corrorima seed extracts was investigated over the range of concentrations (25-125 μg/mL), as shown in Fig. 1d. The ethanolic extracts of the L. sativum and A. corrorima seeds (125 μg/mL) displayed strong percentage H2O2 scavenging activities (70.60 ± 0.72% and 71.86 ± 0.63%, respectively), whereas it was 82.03 ± 0.69% for the positive control BHA.

Accelerated oxidative study

To evaluate the effect of frying during an accelerated oxidative study, different concentrations of plant extract (0.1-0.4%) were investigated and compared with the positive control BHA over 0-72 h. For the optimization of the effect of the plant extracts, the physicochemical properties of the frying oil, such as acid value, saponification value, iodine value, and peroxide value, are depicted in supplementary Fig. 3A. Furthermore, the total polar compound, conjugated diene, and conjugated triene levels of the frying oil during the accelerated oxidative study are depicted in supplementary Fig. 4A. Based on the optimization, L. sativum (0.2%) and A. corrorima (0.3%) showed a protective effect comparable to that of the positive control BHA. Therefore, these two concentrations of plant extract were selected as oil stabilizers for the deep-frying protocol.

Deep frying study

The deep-frying protocol was followed for six consecutive days, with the concentrations of plant extract optimized beforehand in the accelerated oxidative study based on the physicochemical analysis of the frying oil at various concentrations. From the optimization process, the L. sativum (0.2% w/v) and A. corrorima (0.3% w/v) extracts were chosen for the deep-frying study.

Saponification value, acid value and free fatty acid content

The SV, AV, and FFA content of the oils during repetitive frying were evaluated and are depicted in Table 3. The saponification values of the deep-frying oil increased with the frying cycle. At the start of frying, there was no significant difference between the normal control (Group I) and all the other groups. Significant variations in SV between the groups were then recorded on each day of frying: there was a significant difference between the normal control and the plant extract-containing groups on day 1 (P < 0.05), but no significant difference between the positive and normal control groups.
The SV in Group I was significantly different compared to Groups III, IV, and V (P < 0.05), but not compared to the positive control (Group II). At day 2, the SV of the normal control group was significantly different from Groups IV and V (P < 0.05), while the positive control group was significantly different from Groups III, IV, and V. The SV of Group V was significantly different from all other groups (P < 0.05). Furthermore, the SV of Group I at days 3, 4, 5, and 6 was significantly different from all other groups.

The acid value (AV) of the repetitively fried oil was studied for six continuous days. There was a significant increase in the AV of the palm oil over the frying period in all groups. Initially, the AV of Group V (1.30 ± 0.01) was higher than those of the normal control, the positive control, and the plant extract additive groups (P < 0.05). However, after the first day of frying, the AV of the normal control group was significantly higher than those of the plant additive groups and non-significantly lower than those of the positive control and the food sample-containing group (Group V). After day 1, the AV of each frying oil was significantly different. Furthermore, significant increases in Groups I and V were observed compared to the positive control and the plant antioxidant-containing groups.

The FFA (%) content of the oil in each group increased with the frying period. After day 1, the FFA (%) content of the frying oil with the food sample was significantly higher than those of the positive control and the plant antioxidant additive groups, although there was no significant difference between the normal control and Group V. From day 1 through day 6, the FFA (%) of the normal control group was significantly different from all other groups. (Table 3 note: AC = A. corrorima, LS = L. sativum, SV = saponification value, AV = acid value, FFA = free fatty acid. Data are means of triplicates; significance declared at P < 0.05.)

Peroxide value, p-anisidine value and total oxidation

In this study, the PV, p-AV, and TOTOX values of the frying oil in each group increased throughout the study period (Table 4). There was a significant difference between the normal control group and the other groups during the initial period (P < 0.05). The p-AV of the oil increased irregularly with frying time in all groups, and the highest p-AV of the oil was observed in Groups I and V throughout the frying periods. The PV of the frying oil in the positive control was significantly lower than in the other groups. After day 1, the PV of the frying oil increased dramatically with frying time. There was a significant difference between the normal control group and the other groups in the initial period (p < 0.05). However, Group V had the highest p-AV of the frying oil from the first (6.06 ± 0.02) to the final (36.22 ± 0.31) day of frying. In this study, the higher peroxide value after 6 days in the frying oil that contained a food sample (Sambussa) (37.00 ± 0.95 meq O2/kg) was an indication of a higher degree of oxidation. The TOTOX results are also compiled in Table 4. During the initial period of frying, the TOTOX of the normal control group was significantly different from Groups II, III, and V (P < 0.05). (Table 4 note: AC = A. corrorima, LS = L. sativum, PV = peroxide value, p-AV = p-anisidine value, TOTOX = total oxidation. Data are means of triplicates; significance declared at P < 0.05.)
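The TOTOX index is conventionally computed from the peroxide and p-anisidine values as TOTOX = 2PV + p-AV. A minimal sketch, pairing the day-6 Group V figures quoted above (on the assumption that they refer to the same sample):

```python
# Sketch: total oxidation (TOTOX) from peroxide value (PV) and p-anisidine
# value (p-AV) using the conventional relation TOTOX = 2*PV + p-AV.

def totox(pv_meq_o2_per_kg: float, p_av: float) -> float:
    """TOTOX index combining primary (PV) and secondary (p-AV) oxidation products."""
    return 2.0 * pv_meq_o2_per_kg + p_av

# Day-6 Group V figures from the text: PV = 37.00 meq O2/kg, p-AV = 36.22.
print(f"TOTOX = {totox(37.00, 36.22):.2f}")  # 110.22
```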
Total polar compounds (TPc) and iodine value (IV) of frying oil

The TPc of the oil in the food sample-containing group (Group V) was higher than that of all the other groups (Table 5). A significant difference was observed between the normal control and the rest of the groups (P < 0.05) in the initial and day-1 periods of frying. However, there was no significant difference between the TPc of the plant extract additive groups and the normal control group (P > 0.05) at day 2, and no significant variation was observed between the positive control and the plant extract additive groups at day 2. The 0.2% w/v additive significantly decreased the TPc of the frying oil compared to the 0.3% w/v additive group. Moreover, the plant extract additive groups significantly decreased the TPc of the oil compared to the normal control and the frying oil with food (Group V).

The IV of the frying oil decreased throughout the study period. There was no significant difference between the normal control, positive control, and plant extract additive groups on the initial day of frying (P > 0.05). However, there was a significant difference between the normal control and the frying oil containing food. Similarly, up to day 3 (Table 5), the IVs of the frying oil in the positive control and in the 0.2% w/v L. sativum and 0.3% w/v A. corrorima extract additive groups were comparable. However, after day 3, there was a significant difference between the 0.3% w/v A. corrorima additive and the positive control group.

Conjugated dienes and conjugated trienes

The conjugated diene and conjugated triene levels of the frying oil were investigated using UV-visible spectroscopy via the extinction coefficient (K). All samples exhibited a steady increase in absorbance at 232 nm and 270 nm, indicating an increase in the formation of both conjugated dienes and trienes during repeated frying (Fig. 2); a calculation sketch is given below.
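Diene and triene levels are usually reported as specific extinctions, K = A/(c·d), measured at 232 nm (dienes) and 270 nm (trienes). The following is a minimal sketch under that assumption; the absorbance readings and oil concentration are invented placeholders, not study data.

```python
# Sketch: specific extinction K = A / (c * d) at 232 nm (conjugated dienes)
# and 270 nm (conjugated trienes), per the usual UV method for oils.
# Absorbance readings below are hypothetical placeholders.

def specific_extinction(absorbance: float, conc_g_per_100ml: float, path_cm: float = 1.0) -> float:
    """Specific extinction K (1%, 1 cm) of an oil solution."""
    return absorbance / (conc_g_per_100ml * path_cm)

c = 1.0                                # oil dissolved at 1 g per 100 mL of solvent (assumed)
k232 = specific_extinction(0.45, c)    # hypothetical absorbance at 232 nm -> dienes
k270 = specific_extinction(0.12, c)    # hypothetical absorbance at 270 nm -> trienes
print(f"K232 = {k232:.2f} (dienes), K270 = {k270:.2f} (trienes)")
```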
Effect of pH, density and moisture content

The pH, density, moisture content, and refractive index of the oil were evaluated throughout deep frying and are depicted in Fig. 3. The pH of a cooking-quality vegetable oil is normally close to neutral, usually ranging from 6.7 to 6.9 (Fig. 3a). The density of the frying oil was also evaluated during the deep-frying study and is shown in Fig. 3b. Increasing the frying cycle increased the density of all samples. The density of the food sample-containing group was greater than those of all the other groups, at 0.89 ± 0.35. However, in the final period of frying, the density was lower in the positive control (0.82 ± 0.34) and the A. corrorima extract (0.3%) additive group (0.88 ± 0.97) (Fig. 3b). The percentage moisture content of the deep-frying oil showed a great deal of variation between the groups throughout the study periods, as depicted in Fig. 3c. The moisture content of the oil containing food samples (Group V) and of the normal control (Group I) was higher than that of the plant extract additive groups, while the moisture content of the positive control group was lower than those of the food and plant extract additive groups. The greatest moisture content, in Group V at day 6 of frying, was 43.09 ± 0.99. The RI of the oil was observed to be in the range of 1.432-1.462 throughout the deep-frying study (Fig. 3d).

Fourier transform infrared spectroscopy of oil after the 6th day of frying

After heating and frying, the level of oxidation was assessed using FTIR spectroscopy. In this study, five samples (one per group) were investigated. The FTIR spectra of the fried palm oil showed significant band differences relative to the oil at room temperature (Fig. 4).

Discussion

In this study, the effect of plant extracts on the stability of palm oil was evaluated, with the palm oil fried continuously at 180 °C for 6 days. The antioxidant activity, TPC, and TFC of the L. sativum and A. corrorima seed extracts are discussed here. The antioxidant and oil-stabilizing potential of the plants might be due to the secondary metabolites found in the extracts. Secondary metabolites are natural products primarily produced by bacteria, fungi, and plants. They are low-molecular-weight molecules with a range of biological activities, including antioxidant and antimicrobial activity [37,38]. The phytochemical screening of the plant extracts revealed the presence of various secondary metabolites that help to stabilize oil during frying. Ethanol proved a better solvent for extracting the various secondary metabolites of L. sativum than the other solvents tested. This study confirmed previous findings [37] that L. sativum is rich in alkaloids, glycosides, phenols, terpenoids, flavonoids, and other secondary metabolites. Similarly, the ethanolic crude extract of A. corrorima showed positive results for phenols, flavonoids, tannins, and glycosides. Secondary metabolites, particularly phenols and flavonoids, have been shown to have significant radical scavenging activity [38].

The antioxidant activity of the plant extracts might be due to the bioactivity of these secondary metabolites [14]. Flavonoids, which are phenolic compounds present in medicinal plants, exhibit antioxidant activity [39]. Phytochemicals such as alkaloids, flavonoids, and terpenes are essential in antioxidant, analgesic, neuroprotective, antimicrobial, and antimalarial actions [14,38]. They also serve as anticancer and antidiabetic agents [24]. In general, the phytochemical screening results suggest that the plant seed extracts might have promising medicinal applications, since tannins, terpenoids, saponins, phenols, and flavonoids are among their major phytochemicals [14]. The successful screening of phenolic and flavonoid compounds can be influenced by a number of factors, including sample size, storage conditions, weather, extraction method, the presence of interfering substances, and the solvent [38,39]. However, no single solvent or mixture of solvents has been shown to effectively extract all phenolic compounds from these two species. Phenolic hydroxyl groups have a remarkable ability to scavenge free radicals [28]. Flavonoids, in turn, are biologically important compounds with a broad spectrum of activities, including antioxidant, anticancer, anti-inflammatory, anti-allergic, and anti-angiogenic effects.

Previously reported TPC values for the methanolic and ethanolic extracts of L. sativum were 94.48 ± 1.82 mg GAE/g and 86.48 ± 0.22 mg QE/g, respectively [14]. The variation from the present ethanolic extract might be due to the maturation period, geographical location, and method of extraction. The total phenol content of a methanolic extract from L. sativum seed has been reported at 46 mg GAE/g [40]; the variation might be due to the solvent, the method of extraction, and the geographical location of the study plant [41]. Furthermore, the total phenolic content of the A. corrorima seed extract was comparable to that of the L. sativum extract.
This indicated that the extracts were responsible for the free radical scavenging activity associated with oxidative stability [42,43]. The primary mechanism underlying the antioxidant activity of phenolic compounds is their redox properties, which can be helpful in absorbing and neutralizing free radicals, quenching singlet and triplet oxygen, or decomposing peroxides [14]. The TPC of the A. corrorima seed extract was significantly higher than previously reported in the literature: the TPC of an A. corrorima hydrodistillation extract was 3.98 ± 0.27 mg GAE/g for the seed and 1.32 ± 0.07 mg GAE/g for the pod [17]. The disagreement might be due to the method of extraction, the solvent used, the plant seed harvesting period, the different geographical distribution of the plant, and other environmental factors [23]. The principal antioxidants or free radical scavengers in plants are correlated with phenolic compounds [44]. The study results revealed that the two plant extracts might have strong radical scavenging activity due to their high phenolic content. The bioactivity of phenolic compounds might be associated with their ability to chelate metals, inhibit lipoxygenase, and scavenge free radicals. Moreover, a previously reported TFC of L. sativum was 37.63 ± 2.14 mg QE/g [14]; the disagreement might be due to the geographical distribution of the plant, the method of extraction, and the solvent used [39]. Numerous studies have found that the flavonoids found in herbs contribute significantly to their antioxidant effects [45]. Flavonoids are extremely powerful scavengers of most oxidizing compounds, including singlet oxygen and various free radicals [14,46]. The TFC of a hydro-methanolic extract of A. corrorima, 19 ± 0.4 mg QE/g, was lower than the current study result [46].

Phenolic compounds are major secondary metabolites comprising a large group of biologically active compounds. Due to their redox properties, phenolics act as antioxidants and reducing agents. The antioxidant activity of the plant seed extracts was evaluated using the DPPH, ferric reducing power, phosphomolybdenum, and hydrogen peroxide scavenging assays. Increasing the concentration of plant extract increased the percentage of free radical inhibition. The antioxidant activity of L. sativum was strongly correlated with that of the positive control. This might be due to the presence of various phytochemicals such as flavonoids and phenolic compounds [22]. The antioxidant activity of different extracts has been found to correlate significantly with their total phenolic content, and L. sativum seeds could be used in food supplement preparations or as a food additive, for caloric gain or to protect against oxidation in nutritional products [14]. The percentage inhibition of the A. corrorima ethanolic extract was lower than those of L. sativum and the positive control. This might be due to a lower concentration of flavonoids and phenolic compounds, since those compounds have been reported to scavenge free radicals, superoxide, and hydroxyl radicals by transfer [47]. Plant-derived flavonoids possess antidiarrheal, antimicrobial, antioxidant, and anti-inflammatory properties [38]. Polyphenolic compounds and flavonoids form complexes with bacterial cell walls and exert biological functions [48]. Moreover, the antioxidant capacity can be attributed to the extract's chemical composition and polyphenol content [49].
The radical scavenging activity of the plant extracts might be due to the secondary metabolites (phenols, tannins, flavonoids, alkaloids, etc.) responsible for the percentage reduction of the molybdenum ion [12,49]. The naturally occurring amounts of H₂O₂ in air, water, the human body, plants, microorganisms, and food are at low concentration levels. Although it decomposes quickly into oxygen and water, it can also give rise to hydroxyl radicals that initiate lipid peroxidation [50]. In general, increasing the concentration of plant extract increased the percentage of free radical inhibition. The ethanolic extracts of L. sativum and A. corrorima seed (125 μg/mL) displayed strong percentage H₂O₂ scavenging activity. This indicated that the L. sativum and A. corrorima ethanolic extracts exhibited better H₂O₂ scavenging activity, which might be attributed to the presence of secondary metabolites with phenolic groups that can donate electrons to hydrogen peroxide, thereby neutralizing it into H₂O [40,51]. The saponification value represents the number of saponifiable units (acyl groups) per unit weight of oil [52,53]. A high SV indicates that the oil contains a higher proportion of low-molecular-weight fatty acids, and vice versa [52]. The SV, expressed in milligrams of potassium hydroxide per gram of oil (mg KOH g⁻¹), is used to calculate the average molecular weight of the oil [53]. The saponification values of the deep-frying oil increased with the number of frying cycles, and significant variation was observed between the groups on each day of frying. These findings agree closely with a previous report stating that at an elevated cooking temperature of 350 °C the SV increased to 250 mg KOH per 100 g of oil and more FFA was produced during frying [54]. Furthermore, the results are supported by a similar earlier report stating that a high SV corresponds to a high level of short-chain fatty acids and a higher glycerol content [55]. The acid value of the frying oil rose as the frying cycles lengthened. Compared with the positive control and the plant antioxidant additive groups, groups I and V showed a significant increase. The highest mean AV, 4.82 mg KOH g⁻¹, was recorded on the fifth day of frying [56]. The difference in the AV of the oil might be due to the type of oil used for frying, and the AV of oil generally rises with increased frying time. On the sixth day of frying, the food-containing group had a higher AV than the other groups. The increase in AV could be attributed to the moisture content of the fried product, which accelerates the hydrolysis of the oil; it is known that water can promote the hydrolysis of triacylglycerols to form FFA [57]. The FFA (%) content of the oil increased with frying time, and FFA levels rose as the number of frying cycles increased, both for heating and for frying. In addition, the plant extract-containing groups and the positive control group showed a significantly lower FFA (%) content than group V. The rise in FFA might be due to the transfer of water from the food sample into the oil, which accelerates the hydrolysis of triglycerides.
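For orientation, the titration-based working equations conventionally used for these indices (not reproduced from this paper's methods) are:

\[ \mathrm{AV}\;(\mathrm{mg\,KOH\,g^{-1}}) = \frac{56.1 \times V \times N}{m}, \qquad \mathrm{FFA}\,(\%,\ \text{as oleic acid}) \approx \frac{\mathrm{AV}}{1.99} \]

where \(V\) is the titrant volume (mL), \(N\) its normality, and \(m\) the oil mass (g). Likewise, for a pure triglyceride the SV relates inversely to the mean molecular weight, \(\mathrm{MW} \approx 3 \times 56{,}100/\mathrm{SV}\), which is why a high SV implies shorter-chain (lower-molecular-weight) fatty acids.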
The increase in AV and FFA was caused by the cleavage and oxidation of double bonds to form carbonyl compounds, which were then oxidized to low-molecular-weight fatty acids during frying [58]. The study also found that the plant extract additive groups and the positive control significantly inhibited FFA build-up in the frying oil. Peroxide values represent the primary reaction products of lipid oxidation, which can be measured by their ability to liberate iodine from potassium iodide [57]. PV is the most widely used test for determining the state of oxidation in fats and oils. It indicates the rancidity or degree of oxidation of the fat or oil, but not its stability [57,59]. Carbonyl compounds such as aldehydes are generated during secondary lipid oxidation and react with the p-anisidine reagent (0.25% in glacial acetic acid) to form a yellow-colored solution. Significant decreases in the PV of the oil were observed after the addition of the plant extracts compared with the normal control and the frying oil with food (group V) throughout the study period. This indicated that the plant extracts prevented the oxidation of the oil upon frying, most likely through their bioactive secondary metabolites. Moreover, the herbal extract significantly limited the degradation of the oil during excessive frying with food samples (6 days). The PV has been reported to increase during the first 20 frying cycles at 160 °C and then decrease [60]; the difference might be due to the frying cycle and oil type. Based on the amendments made to the Malaysian Food Act 1983 by the Food (Amendment) (No. 3) Regulations 2014, the maximum PV of cooking oil is 10 meq O₂/kg of oil (Food Act, 1983). The higher the peroxide value, the more oxidized the oil. Additionally, the process of oil breakdown is significantly influenced by the water content or humidity. Moreover, high frying temperatures make peroxides unstable; they quickly break down into dimers and volatile chemicals [61]. Therefore, the greater PV of the oil used to fry food was due to the deterioration and degradation of the oil. Anisidine analysis is the appropriate method for evaluating secondary lipid oxidation. The p-AV of frying oil is an indication of organic peroxides that have decomposed into secondary products, including alcohols, carboxylic acids, aldehydes, and ketones. The oil quality can be determined by evaluating the absorbance of the solution at 350 nm [62]. Aldehydes formed during oxidative degradation are secondary decomposition products, and the non-volatile portion of the carbonyls remains in the frying oil [63]. A high p-AV of the frying oil was observed throughout the study, indicating the formation of primary and secondary oxidation products. The plant extract additive groups significantly decreased the p-AV of the frying oil compared with the normal control and the food sample group (group V); a lower p-AV indicates that less rancid oil is produced [61]. The plant extracts and the positive control group significantly retarded the oxidation of the frying oil compared with the food sample and the normal groups throughout the study period. Similarly, a Pandanus amaryllifolius leaf extract has been reported to significantly decrease the p-AV throughout a frying study, an effect attributed to the secondary metabolites in the extract [30].
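The usual AOCS-type working equations for these two indices, given here for reference rather than quoted from this study, are:

\[ \mathrm{PV}\;(\mathrm{meq\,O_2/kg}) = \frac{(S - B) \times N \times 1000}{m}, \qquad p\text{-}\mathrm{AV} = \frac{25 \times (1.2\,A_s - A_b)}{m} \]

where \(S\) and \(B\) are the thiosulfate titration volumes (mL) for the sample and blank, \(N\) is the titrant normality, \(A_s\) and \(A_b\) are the absorbances at 350 nm of the fat solution with and without the p-anisidine reagent, and \(m\) is the oil mass (g).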
The results of these studies revealed that thermal degradation of the aldehydes formed at higher temperatures leads to their lower accumulation in the oil at the higher frying temperature [64]. The present results do not agree well with the previous report of Felix A. et al. (1998), which stated that the maximum p-AV was reached on the second day of frying at both frying temperatures and then decreased consistently until the end of the frying time. The disagreement might be due to oil replenishment in the previous study, the different oil types used in the frying cycle, and the temperature. The TOTOX value of the frying oil was evaluated to capture both the primary and secondary oxidation of the oil. The TOTOX index is a good indicator of the total deterioration of fats and oils: the lower the TOTOX value, the better the frying oil quality [57,63]. Oxidation proceeds very slowly in the initial stage, taking time to reach a rapidly increasing oxidation rate. The TOTOX value is a common approach to determining the resistance of edible oils to oxidative rancidity. After the initial days of frying, the TOTOX value showed a significant difference between the plant extract-containing groups, the normal control, and group V (frying with food) throughout the study period (P < 0.05). The TOTOX values of all the oils greatly exceeded the proposed limit, which is an indication of oil oxidation; the variation might be due to the oil type, the frying conditions, and the frying cycle. A similar study conducted with Moringa oleifera showed lower p-AV and TOTOX values than either soybean oil or palm olein heated at 185 °C for 30 h. Since TOTOX is correlated with PV and p-AV, the oil containing plant extract significantly reduced the TOTOX value compared with the normal control and the frying oil containing food. The TOTOX value gives a more accurate description of the oxidative condition of cooking oil after repeated frying, and good-quality vegetable oils have TOTOX values of ≤ 4 [59].
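Since TOTOX is computed from the two indices just discussed, it is worth recalling its conventional definition, which weights the primary oxidation products twice:

\[ \mathrm{TOTOX} = 2 \times \mathrm{PV} + p\text{-}\mathrm{AV} \]

so that, for example, an oil with PV = 3 meq O₂/kg and p-AV = 4 would have TOTOX = 10.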
The measurement of total polar compounds is useful in estimating heat abuse in frying oils [5]. Evaluating total polar compounds has been characterized as one of the best indicators of the overall quality of oils, providing critical information about the total amount of newly formed compounds having a higher polarity than triacylglycerols [64]. The formation of total polar compounds, which indicates oil deterioration, is strongly related to the primary and secondary oxidation taking place during frying [65]. The results of this study revealed that the total polar compounds of groups I and V exceeded the standard limit after five days of frying, whereas the plant extract additive and positive control groups remained below the standard limit throughout the study period. This indicates that the untreated oil should be discarded after five days of frying, while the plant extracts showed a positive effect on the reuse of the oil even after six days of frying. This might be due to the plant antioxidants that protect the oil from thermal degradation [59]. When the amount of total polar compounds reaches 25%, the oil is considered thermally degraded and should be replaced with fresh oil [64,66]. The IV is a direct measure of the unsaturation level of the oil; iodine is used to halogenate the double bonds of the unsaturated fatty acids. Commonly, frying leads to a reduction in unsaturation, indicating a decrease in double bonds. The IV of the frying oil in the positive control, the 0.2% w/v L. sativum, and the 0.3% w/v plant extract additive groups were comparable. However, after day 3 there was a significant difference between the 0.3% w/v A. corrorima additive and the positive control group. The decrease in the IV over the cycles is consistent with the loss of double bonds as the oil becomes oxidized. Oils such as olive, soybean, and sunflower have lower iodine values [67]. However, the addition of plant extract did not appear to reduce the oxidation as the cycles progressed compared with the normal control and the food sample-containing group. As a result, the reduction in the iodine value of the oil up to the sixth frying day was caused by complex physicochemical changes in the oil, which left it unstable and susceptible to oxidative rancidity. These results are in line with a previous report stating that the decrease in the IV of oils after frying indicates relatively higher oxidation [2]. Furthermore, the current study is consistent with previous research by Pineda et al. [68], who observed a decrease in the IV of olive oil, high-oleic sunflower oil, and sunflower oil during frying; this might be caused by a decline in the oil samples' unsaturation. The increase in oxidation rate can also be observed in the change of specific absorptivity at 232 and 270 nm, which measures the contents of conjugated dienes (CDs) and conjugated trienes (CTs). K₂₃₂ is associated with the generation of primary oxidation products (conjugated dienes), and K₂₇₀ is used to determine fat oxidation, with parameter values varying depending on the oxidation conditions (conjugated trienes). Conjugated dienes and trienes are a good measure of the primary oxidation of the oil [57]. Double bonds in lipids change from non-conjugated to conjugated upon oxidation [69]. The CD and CT levels of the frying oil increased with the number of frying cycles throughout the study period. Similarly, the CD of three oil samples has been reported to increase with a longer frying cycle at 160 °C [68], and a similar report stated that the CD value of sesame oil increased throughout the frying period [63]. The CD and CT levels of groups I and V were greater than those of the positive control and the plant extract additive groups. However, the A. corrorima seed extract additive group showed lower CD and CT levels than the L. sativum extract additive group. In general, the lower CD and CT of the plant extract additive groups indicate the potential antioxidant activity of the plant extracts, which helps stabilize the oil during repetitive frying. The formation of hydroperoxides from polyunsaturated fatty acids leads to conjugation of the pentadiene structure, which causes the absorption of UV radiation at 230-234 nm for conjugated dienes. When hydrogen abstraction occurs at the two active methylenes on C-11 and C-14, two pentadienyl radicals are produced, resulting in a mixture of conjugated dienes and trienes [57,63]. High extinction coefficients (K₂₃₂ and K₂₇₀) are an indication of advanced oil deterioration [59]. This leads to an increase in UV absorption at 270 nm attributable to conjugated trienes, in addition to that at 232 nm for CD.
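The specific extinction coefficients K₂₃₂ and K₂₇₀ referred to above are conventionally defined as (standard definition, not quoted from this study):

\[ K_{\lambda} = \frac{A_{\lambda}}{c \times l} \]

where \(A_{\lambda}\) is the absorbance at wavelength \(\lambda\) (232 or 270 nm), \(c\) is the oil concentration in the solvent (g/100 mL), and \(l\) is the cuvette path length (cm).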
The greatest reductions in pH values were observed in groups I and V, which might be due to the degradation and hydrolysis of the oil to form FFAs. The formation of FFAs during thermal treatment is an important dynamic of vegetable oils that may be related to the decrease in pH [70]. The plant extract additive groups significantly limited the reduction in pH compared with the normal control and food control groups, indicating that the oil was stabilized by the plant antioxidants. The pH value of the frying oil decreased as the frying cycle (days) increased across all groups. The greater density recorded in group V might be due to the formation of high-molecular-weight polymeric compounds upon frying [55], and to the transfer of material from the food samples into the oil, which increased with each frying cycle. The moisture content of the oil increased with the length of the frying cycle. This might be due to the mass transfer that occurs during the frying process, which includes water loss, oil absorption, and heat transfer [35,71]. The water content in food and oil accelerates the hydrolysis of the oil, while also providing some protection against oxidation during frying. The increase in the moisture content of the oil could be caused by the accumulation of water from the food samples in the oil and the exposure of the oil to food moisture and air humidity from the surroundings, which could facilitate rancidity and oxidative stress [35,71]. Thus, the longer the frying cycle, the longer the oil is exposed to the humidity of the environment. In a nutshell, both plant additive groups significantly inhibited the elevation of moisture content compared with the normal and food sample-containing groups. The RI is a parameter related to molecular weight, fatty acid chain length, degree of unsaturation, and conjugation. The refractive index can be used as a quality control technique to detect adulteration in edible oils [72]. The RI is affected by the content of saturated and unsaturated fatty acids. Increasing the frying cycle decreased the refractive index value, indicating that the unsaturated portion of the oil was removed and more saturation was formed during frying. This result does not correlate with a previous report stating that an RI increase is believed to be related to the high saturated fatty acid content and the non-hydrogenation of palm oil, making it less resistant to heat [55]; the deviation might be due to the reaction conditions and the frying cycle. However, the study was consistent with a similar report illustrating that RI values decrease as the temperature is increased [73]. This might be due to the formation of trans fatty acids upon oxidation, which affects the RI value. Furthermore, the trans acids formed during hydrogenation affect refractive index values but not iodine values. There was no statistical variation in the RI value between the groups.
However, the plant additive groups and the positive control did not significantly prevent the change of the RI value compared with the normal control and the oil with food throughout the study period. Nevertheless, the RI value of the plant extract additive groups was nearly constant compared with the other groups except the positive control, which suggests that the antioxidant potential of the plant extracts inhibits the physicochemical changes of deep frying. The plant extracts and the positive control affected the position of the bands, which shifted when the proportion of fatty acids changed. The FTIR spectra of the normal control and the food sample-containing group showed strong OH bands at 3300-3500 cm⁻¹ (Fig. 4a and e), indicating that the triglyceride molecules were degraded and FFA was formed during frying. However, there was no significant variation between the positive control (Fig. 4b) and the plant-treated groups. The percentage transmittance of the food sample additive and normal control groups was higher, indicating decreasing absorbance; this could be attributed to oil hydrolysis and degradation into FFA. Continuous frying of corn and mustard oil samples has likewise been reported to increase the transmittance and hydrolyze the triglyceride molecules [74]. The FTIR spectra also showed intense bands in the region of 2950-3000 cm⁻¹ assigned to sp³ C-H stretching of the terminal methyl group of the fatty acid chain. The medium peak at 2800 cm⁻¹ was assigned to aliphatic CH₂ stretching. Carbonyl stretching of the ester functional group (C=O) is assigned to the medium peak at 1600-1700 cm⁻¹; the peak shift to higher wavenumbers observed in this region in groups I and V might be due to the effect of frying. The strong, intense peaks at 980-1100 cm⁻¹ indicate C-O (CH₃-O) stretching of the ester, and the medium peaks at 1480 cm⁻¹ indicate C-H bending of sp³ carbon (alkane) [75,76].

Conclusions

The quality of the palm oil used in this study was significantly affected by accelerated and deep frying, as revealed by assessing the physicochemical parameters. The plant extract additive groups showed a significant improvement in oil quality throughout the study period. The FTIR spectra of the frying oil revealed the formation of free fatty acids in the normal control and food sample-containing groups, whereas the positive control and plant extract-treated groups significantly retarded oil degradation and maintained oil quality. Therefore, the optimum concentrations of L. sativum (0.2% w/v) and A. corrorima (0.3% w/v) extracts are recommended to restaurants and street food vendors as alternative antioxidants. Moreover, the organic compounds that retard the degradation and oxidation of the oil need to be identified, and various other medicinal plants should be investigated to enhance oil stability and to explore potential substitutes for synthetic antioxidants.
Bulk phase charge transfer in focus – And in sequential along with surface steps

In recent decades, catalysis has witnessed increasing interest in many catalytic reactions that feature bulk phase and interface charge transfer steps as a distinguishing characteristic. Here, the charge can be cations, anions, electrons or holes. Research into both bulk phase and interface charge transfer has changed our understanding and in-focus design of catalysts and reactors, owing to the clear difference in kinetics from classical catalytic reactions, where only surface steps are concerned. This perspective selects several types of representative reactions and discusses the challenges and opportunities for innovation in catalytic technologies from the viewpoint of recognizing and accelerating the key charge transfer step at the interface or in the catalyst bulk phase, as well as incorporating the surface steps into the overall kinetics.

Introduction

The understanding and development of catalytic processes have been of crucial importance for the advancement of modern industry. A catalyst allows chemical reactions to follow a more energetically favorable path without itself appearing in the overall stoichiometry of the reaction, thus significantly reducing the overall energy consumption [1]. There are two categories of catalytic reactions, homogeneous and heterogeneous, of which heterogeneous catalysis has been the more favorable because of the ease of product separation and higher catalyst stability [1]. Fig. 1a illustrates a simplified scheme of the surface steps of a heterogeneous catalytic reaction, which normally occur in a real reactor with reactant fluid in the sequential order of reactant external/internal diffusion, adsorption, surface reaction, product desorption and internal/external diffusion [2]. From the perspective of charge transfer, heterogeneous catalytic reactions typically involve electron transfer in the surface reaction steps. Furthermore, a large number of heterogeneous reactions also involve the transfer of ions in the surface steps. The most well-known examples are the classical heterogeneous acid- and base-catalyzed reactions, which are among the most important reactions for modern hydrocarbon processing. For instance, the cracking of large molecules catalyzed by acidic zeolites has been recognized as the largest process in modern industry [3,4]. Other acid-catalyzed reactions include isomerization, alkylation/dealkylation, hydration/dehydration, esterification, halogenation and sulphonation [5]. For these reactions, surface proton transfer is a key step (Fig. 1b). Taking the catalytic benzene ethylation reaction as an example, a proton residing on the surface of the acid catalyst is first transferred to ethanol to form an ethyl carbonium ion, which then reacts with benzene to form an aromatic carbonium ion. The carbonium ion finally turns into ethylbenzene while releasing a proton, which returns to the catalyst surface, so that the proton does not appear in the overall stoichiometry of the reaction. Similarly, a carbanion is formed via the surface transfer of a hydroxide ion in a base-catalyzed reaction [3]. Surface ion exchange steps are also involved in many other reactions (Fig. 1c), such as hydro-desulfurization and hydro-denitrogenation [6].
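Returning to the benzene ethylation example, the proton-transfer cycle can be summarized schematically (a simplified scheme consistent with the description above; the surface site is omitted for brevity):

\[ \mathrm{C_2H_5OH + H^+ \longrightarrow C_2H_5^+ + H_2O} \]
\[ \mathrm{C_2H_5^+ + C_6H_6 \longrightarrow [C_6H_6{\cdot}C_2H_5]^+} \]
\[ \mathrm{[C_6H_6{\cdot}C_2H_5]^+ \longrightarrow C_6H_5C_2H_5 + H^+} \]

The proton is consumed in the first step and regenerated in the last, so it cancels from the overall stoichiometry, exactly as stated above.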
Clearly, the above-mentioned reactions feature the participation of surface charge transfer steps. Meanwhile, the development of catalysis has gradually generated increasing interest in catalytic reactions where not only surface charge transfer but also considerable bulk phase charge transfer is involved (Fig. 1d). Catalytic processes such as selective catalytic oxidation, photocatalysis, and electrocatalytic reactions include essential charge transfer steps both within the bulk phase and across interfaces, differing largely from those depicted in Fig. 1a-c. Charge transfer is very often sluggish both in the bulk phase and across an interface, which greatly influences the kinetics of the overall reaction. Thus, understanding, modelling and promoting the bulk phase and interface charge transfer steps in these reactions become important. This article discusses several representative catalytic reactions with the characteristics of bulk phase and interface charge transfer in a general form, aiming to offer some insights into related fields. Summaries and perspectives on mechanism elucidation in these systems using advanced characterization methods can be found in other feature papers [7][8][9][10][11][12][13][14].

Heterogeneous catalytic oxidation

The Mars-van Krevelen (MvK) mechanism proposed in 1954 is by far the most widely acknowledged principle for establishing kinetic models of catalytic oxidation reactions [15]. Fig. 2 depicts the MvK mechanism of an oxidation reaction on the surface of an oxide catalyst. The reductive reactant (RH) is molecularly or dissociatively adsorbed on the oxide surface and oxidized by the lattice O²⁻ to form either a selective oxidation product (RO) or complete oxidation products, such as H₂O and CO₂, which are subsequently desorbed from the surface and diffuse into the bulk fluid phase. The oxygen vacancy sites generated accordingly are simultaneously replenished by the surface adsorption and dissociation of gaseous O₂ molecules followed by lattice O²⁻ diffusion.
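Assuming steady state between surface reduction by the reactant and reoxidation by gas-phase O₂, the MvK mechanism leads to a rate expression of the following common form (one textbook variant; the original treatment [15] and its successors use various pressure exponents):

\[ r = \frac{k_{\mathrm{red}}\,k_{\mathrm{ox}}\,p_{\mathrm{R}}\,p_{\mathrm{O_2}}}{k_{\mathrm{ox}}\,p_{\mathrm{O_2}} + \beta\,k_{\mathrm{red}}\,p_{\mathrm{R}}} \]

where \(k_{\mathrm{red}}\) and \(k_{\mathrm{ox}}\) are the rate constants of catalyst reduction and reoxidation, \(p_{\mathrm{R}}\) and \(p_{\mathrm{O_2}}\) the partial pressures of the reductant and oxygen, and \(\beta\) the stoichiometric number of O₂ consumed per reactant molecule. The expression makes explicit that either the surface steps or the oxygen supply (and hence O²⁻ transfer) can become rate-limiting.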
The MvK mechanism has been intensively verified via oxygen partial pressure monitoring and isotopic tracing experiments [16,17]. The participating O²⁻ is present within a few surface layers of the catalyst, where bulk phase diffusion of lattice O²⁻ from inner layers to the surface occurs. The O²⁻ transfer proceeds via an O²⁻ vacancy hopping mechanism (sometimes interstitial hopping) and is directly governed by the difference in O²⁻ chemical potential between the inner layers and the surface layer. Therefore, it is correlated with the lattice defect concentration [18,19], as well as the metal-oxygen bonding strength [20]. The regeneration of the catalyst requires an inward diffusion, which is further correlated with the chemical potential of oxygen in the atmosphere. This suggests that all approaches that enhance the O²⁻ transfer would be useful for accelerating the catalytic process, even though the initial activation of the first C-H bond is often believed to be the rate-determining step (RDS) for hydrocarbon oxidation [21]. Depending on the comparative rates of the sequential steps, the O²⁻ transfer step may become the RDS as the surface steps are accelerated. The MvK mechanism has been the basis for many practical selective and non-selective catalytic oxidation processes [22,23], and more recently, chemical looping [24][25][26] and oxygen-rich combustion [27,28] have emerged in industrial practice. For instance, DuPont previously attempted to utilize circulating fluidized-bed reactors for the mass production of maleic anhydride with a vanadium phosphorus oxide catalyst, where the selective oxidation of n-butane and the re-oxidation of the catalyst occur in two separate reactors [29][30][31]. Even though this approach was eventually not adopted in commercial practice, the underlying concept remains intriguing. Beyond lattice O²⁻, the MvK concept has also been applied in understanding and developing other catalytic systems that involve lattice components such as S, Cl, H and N [32].

Heterogeneous photocatalysis

Heterogeneous photocatalysis is another sort of catalytic reaction in which bulk phase and interface transfer of charges (electrons and holes) are involved. The term "photocatalysis" has been used in a broad sense, covering both thermodynamically favourable downhill reactions, such as organic pollutant oxidation, and thermodynamically unfavourable uphill reactions, such as the hydrogen evolution reaction from water [34,35]. When the photon energy is higher than the band gap of the photocatalyst, electrons in the valence band (VB) are excited into the conduction band (CB) of the semiconductor, leaving holes in the VB. The resulting photoinduced electrons and holes transfer to the surface and take part in reduction and oxidation, respectively (Fig. 3). However, the recombination of electron-hole pairs, either in the bulk or at the surface/interface, which lowers the quantum yield, always accompanies the bulk phase and interface charge transfer [36].
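Two simple relations frame this discussion. Excitation requires the photon energy to exceed the band gap, \(h\nu \ge E_g\), and the efficiency loss from recombination is commonly quantified by the apparent quantum yield (a standard definition, not specific to any reference cited here):

\[ \mathrm{AQY}\,(\%) = \frac{\text{number of reacted electrons}}{\text{number of incident photons}} \times 100 \]

Recombination in the bulk or at interfaces removes carriers before they react, directly lowering the AQY.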
The transfer of electrons and holes within a photocatalyst is intrinsically a physical process that occurs within microseconds, and it can be enhanced by speeding up the surface steps, as well as by shortening the transfer length of the charge carriers [37,38]. Forming an interfacial potential gradient accelerates the transfer of charges to the external surface and has proved effective in enhancing the overall performance. Such a potential gradient has been realized via the formation of heterojunctions or surface states at semiconductor-electrolyte [39], metal-semiconductor [40] and semiconductor-semiconductor [41] interfaces. More than one junction has even been applied in some designs [42,43]. Also, doping and tailoring the nanostructure of the semiconductor bulk, interface and surface show positive effects, due to the rate enhancement of both the physical and chemical steps, as well as the suppression of electron/hole pair recombination. In addition, sacrificial agents (electron donors or acceptors) are often used to consume one unwanted charge (hole or electron) so that the favored reaction can proceed more efficiently. Detailed reviews of these approaches can be found in the literature [33,41,44]. Since the surface steps are normally sluggish compared with the transfer of electrons and holes, cocatalysts and/or photo-electrocatalysts are also often incorporated on the surface of powders or photoelectrodes to enhance the reactions [38,45]. Such an effect has often been explained by the reduction of the concentration overpotential. In fact, all these means, when applied to the RDS, would greatly improve the overall kinetics and energy efficiency.

Heterogeneous electrocatalysis

Similar to heterogeneous photocatalysis, heterogeneous electrocatalysis has also been applied in a broad scope, referring to catalysis of both downhill reactions in a Galvanic cell and uphill reactions in an electrolytic cell [46]. While ion transfer through both bulk and interfaces serves as a key step in conventional electrochemical devices, electrocatalytic reaction-based systems are further characterized by the involvement of at least one gaseous species, such as oxygen [47,48], hydrogen [49], carbon dioxide [50] or nitrogen [51]. Among them, the catalysis of oxygen molecules is essential in many energy conversion and storage devices. Herein, we focus on two types of heterogeneous electrocatalysis systems, in which bulk phase and interface ion transfer are discussed together with the surface reaction steps.

Solid oxide fuel cells (SOFCs)

Charge transfer processes are key steps in fuel cell reactions, and sometimes one of them may become the RDS. Many efforts have been devoted to enhancing both the charge transfer and surface reaction steps in fuel cells that operate in a temperature range from ambient temperature to 300 °C. These systems include alkaline fuel cells [52,53], proton exchange membrane fuel cells (PEMFCs) [54,55] and solid acid fuel cells [56,57], and great success has been demonstrated at commercial scale with PEMFCs. Comparatively, SOFCs normally operate at high temperatures (600-1000 °C), and are thus better suited to achieving highly efficient cogeneration and flexible fuel choice, from hydrogen to hydrocarbons, and even carbon [58][59][60].
Fig. 4a shows a general configuration of a typical SOFC device, in which two separate electrode compartments sandwich a dense O²⁻-conducting electrolyte layer while the electrodes are connected through an external circuit. During operation, the anode catalyzes the oxidation of the fuel and releases electrons to the external circuit, while the cathode undergoes a catalytic oxygen reduction reaction (ORR) with the electrons from the external circuit. The cathode catalytic ORR process is directly correlated with the oxygen incorporation and subsequent bulk O²⁻ transfer steps, where the former can be sufficiently enhanced by increasing the three-phase boundary area and/or by raising the surface oxygen vacancy concentration [61][62][63][64][65]. The bulk O²⁻ transfer, on the other hand, is generally based on the O²⁻ vacancy hopping mechanism that operates in the above-mentioned heterogeneous catalytic oxidations, and thus also varies with the chemical potential of O²⁻. In addition, achieving a sufficiently high bulk O²⁻ transfer rate within the electrolyte becomes more crucial as the operating temperature decreases. It has been intensively demonstrated that, at high temperatures (800-1000 °C), the yttria-stabilized zirconia (YSZ) electrolyte, with a bulk O²⁻ conductivity of ~0.1 S cm⁻¹, is capable of delivering a practically acceptable power output [66,67]. However, decreasing the operating temperature to an intermediate range (600-800 °C) causes a significant drop in the bulk O²⁻ conductivity, which can be compensated either by thinning the YSZ electrolyte layer or by utilizing more highly O²⁻-conductive electrolytes (such as doped ceria materials) so as to ensure a comparable power output [67]. Further decreasing the operating temperature to a low range (400-600 °C) has been confronted with major challenges from the drastically increased resistance of bulk O²⁻ transfer through the electrolyte, as well as of the oxygen incorporation step at the cathode side [68,69]. Even though improvements have been achieved by utilizing either oxide-molten carbonate composite electrolytes or proton-conducting electrolytes, a performance comparable to that at high operating temperature has not yet been realized [70][71][72][73][74]. Currently, the bulk phase O²⁻ transfer is still the major focus in further improving SOFC technology.
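To make the thinning argument concrete, the ohmic contribution of the electrolyte can be expressed as an area-specific resistance (an illustrative calculation with assumed numbers, not data from the cited works):

\[ \mathrm{ASR} = \frac{L}{\sigma} \]

where \(L\) is the electrolyte thickness and \(\sigma\) its O²⁻ conductivity. For example, if \(\sigma\) drops from 0.1 to 0.01 S cm⁻¹ on cooling, reducing \(L\) from 100 μm (10⁻² cm) to 10 μm (10⁻³ cm) keeps the ASR at the same 0.1 Ω cm², which is why thin-film electrolytes can compensate for lower operating temperatures.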
Lithium-oxygen batteries (LOBs)

Rechargeable batteries have long followed an advancing path in which charge (electron and ion) transfer steps, both in the bulk phase and across interfaces, form the fundamental basis [75,76]. This led to the commercial success of lithium ion batteries (LIBs) in 1991, with continued system optimization ever since [77]. Metal-oxygen batteries couple charge transfer steps similar to those of conventional rechargeable batteries with additional catalytic steps at the cathode, where a bi-functional catalyst is commonly included to catalyze both the ORR and the oxygen evolution reaction (OER). Compared with other metal (Na, Al, Mg, Fe, Zn, etc.)-oxygen batteries, lithium-oxygen batteries (LOBs) have been receiving more attention, owing to their high specific energy and rechargeability [48]. The most studied subtype of LOBs [70], non-aqueous LOBs, is used here as an example to illustrate the bulk phase charge transfer, as shown in Fig. 4b. Similar to SOFCs, Li⁺ transfer in LOBs also includes interface and bulk phase steps. The ion transfer occurs between the lithium metal anode and the solid electrolyte interphase (SEI), between the SEI and the electrolyte, and between the electrolyte and the cathode. The transfer of Li⁺ from the SEI into the electrolyte typically involves a solvation process, in which an anion (from the electrolyte) shell encompasses the Li⁺, followed by diffusion and migration to the cathode. At the cathode, the solvated Li⁺ further experiences a de-solvation process before the catalytic cathode reactions occur. In non-aqueous LOBs, the best-established electrolytes for Li₂O₂- and Li₂O-based chemistries are 1 M lithium bis(trifluoromethanesulfonyl)imide in tetraethylene glycol dimethyl ether and the eutectic molten nitrate (LiNO₃-KNO₃), which have Li⁺ conductivities of ~0.01 and 0.1 S cm⁻¹, respectively [78][79][80]. The Li⁺ conduction in these systems is based on free ion motion driven by the chemical potential gradient of Li⁺, and is fast enough to support a fast electrochemical process, whereas the intrinsically slow cathode catalysis in these systems is mainly responsible for the overall low power output, viz. an areal current density of several mA cm⁻² or even less [81]. This contrasts well with LIBs, where the electrode intercalation process (bulk phase Li⁺ transfer within the electrode materials) is often the RDS [82]. Even though bulk ion transfer is not the central concern for current LOBs, future derivatives such as high-temperature LOBs, operated at temperatures comparable to those of SOFCs, would likely confront a situation where the bulk phase or interface ion transfer becomes the RDS.

Summary and outlook

Bulk phase charge transfer has turned out to be a fundamentally important step in a series of catalytic systems, including heterogeneously catalyzed selective oxidation, heterogeneous photocatalysis and heterogeneous electrocatalysis (such as SOFCs and LOBs). This article scratches the very surface of such a theme by briefly presenting the selected systems, with a focus on the need to recognize and accelerate the RDS in the sequential charge transfer and surface reaction steps. The involvement of bulk phase and interface charge transfer implies the necessity of establishing models that incorporate the charge transfer and surface steps when designing the respective catalytic reactions and reactors. Even though in some cases the charge transfer step may not be the RDS, it might become so under certain conditions. Thus, the RDS principle applies, and the research orientation is clarified. This perspective has intended to present such a concept.

Credit author statement

Zhengze Pan initiated the writing under the instruction of Yongdan Li. Yicheng Zhao, Cuijuan Zhang and Hong Chen discussed and polished the manuscript. Yongdan Li proposed the topic.

Fig. 1. General schemes of different catalytic reactions: (a) without a bulk charge transfer step; (b) with surface ion transfer between the catalyst and reactants; (c) with surface ion transfer between the catalyst and different reactants; (d) with surface ion transfer between the catalyst and different reactants accompanied by considerable ion transfer and/or electron/hole transfer steps in the bulk phase.

Fig. 3. Scheme of the mechanism of a heterogeneous photocatalysis reaction. Reprinted from reference [33] with permission.
A Real-Time Joint Estimator for Model Parameters and State of Charge of Lithium-Ion Batteries in Electric Vehicles

Accurate state of charge (SoC) estimation of batteries plays an important role in promoting the commercialization of electric vehicles. The main work required to determine battery SoC accurately can be summarized in three parts. (1) In view of the model-based SoC estimation flow diagram, the n-order resistance-capacitance (RC) battery model is proposed and is expected to accurately simulate the battery's major time-variable, nonlinear characteristics. The mathematical equations for model parameter identification and SoC estimation of this model are then constructed. (2) The Akaike information criterion is used to determine an optimal tradeoff between battery model complexity and prediction precision for the n-order RC battery model. Results from a comparative analysis show that the first-order RC battery model is the best choice based on the Akaike information criterion (AIC) values. (3) The real-time joint estimator for the model parameters and SoC is constructed, and application to two battery types indicates that the proposed SoC estimator is a closed-loop identification system in which the model parameter identification and SoC estimation are corrected mutually, adaptively and simultaneously according to the observed values. The maximum SoC estimation error is less than 1% for both battery types, even against an inaccurate initial SoC.

Introduction

The battery is a bottleneck technology for electric vehicles (EVs). It is valuable, both in theory and in practical application, to carry out research on the state estimation of batteries, which is crucial for optimizing the energy management, extending the cycling life, reducing the cost and safeguarding the application of batteries in EVs. However, batteries, with their major time-variable, nonlinear characteristics, are further influenced in EV applications by such random factors as driving loads and the operating environment. Real-time, accurate estimation of their state of charge (SoC) is therefore challenging [1][2][3]. An assortment of techniques has previously been reported to measure or estimate the SoC of cells or battery packs, each having its relative merits, as reviewed by Xiong et al. [4]. Generally, the model-based SoC estimation method is able to combine different kinds of SoC estimation methods so as to avoid the shortcomings of each one [5][6][7][8][9][10].
Figure 1 is the model-based SoC estimation flow diagram: the battery models and the online data are stored in memory, and the real-time data on current, voltage and temperature are collected by the sensors; the main key technologies are then the three aspects marked ①, ②, ③ in Figure 1. For the first aspect, model selection/parameter identification, reference [7] summarized the battery models built into the National Renewable Energy Laboratory's advanced vehicle simulator, which include an internal resistance model, a resistance-capacitance model, the PNGV (Partnership for New Generation of Vehicles) model, a lead acid neural network model, a fundamental lead acid model, and Saber's lead acid electrical RC (resistance-capacitance) model; the application limitations of these models are also researched in depth and listed based on comparisons. Reference [11] proposes a generalized impedance-based model, which takes into account non-homogeneous battery dynamics with nonlinear lumped elements. This model is verified by experimental results to simulate the dynamic and transient response of lithium-ion polymer batteries accurately. Reference [12] uses the recursive least squares (RLS) method to estimate the LiFePO4 cell voltage and SoC online, and the results show that one or two RC networks connected to the Rint model in series are reasonable for the dynamic simulation of the LiFePO4 battery module. Reference [13] uses the multi-swarm particle swarm optimization (MPSO) method to select the optimal model out of twelve equivalent circuit models for the LiNMC cell and LiFePO4 cell, and the results indicate that the first-order RC model is preferred for LiNMC cells, while the first-order RC model with one-state hysteresis seems to be the best choice for LiFePO4 cells. Reference [14] takes six battery models into consideration and applies the least squares method and the extended Kalman filter (EKF) to identify the LiPB cell model parameters and estimate its voltage; the results indicate that the battery model accuracy can improve greatly with additional hysteresis and filter states at some cost in complexity, while it makes no sense to increase the filter state order beyond 4. In summary, it is difficult to build one unique battery model for all kinds of batteries with acceptable accuracy. In addition, factors such as the dataset used, the model parameter identification method, the battery SoC estimation method and the model evaluation rule also influence the model selection process. However, we find that the n-order RC battery model is expected to simulate the characteristics of various kinds of batteries more accurately [11,12,15]. Usually, model parameters can be identified during or after the model selection process.
For the second aspect, real-time model parameter identification, identification is in principle optional. In fact, however, the battery parameters change with battery use and aging, so a one-time online or offline identification can hardly capture the battery characteristics accurately; hence, it is necessary to identify the battery parameters in real time. Based on the Thevenin battery model, reference [16] applies the moving window least squares (LS) method to realize real-time model parameter identification, and a state observer is built to estimate the battery SoC simultaneously. However, in real applications it is difficult for the moving window LS to choose a reasonable parameter updating frequency that achieves stability, because of the open-loop characteristic of that joint estimator. Reference [17] uses the adaptive joint EKF to identify model parameters in real time, which yields a high-order state-space equation by taking the model parameters as part of the state set. Besides, it is difficult to obtain an accurate SoC merely by interpolation of the OCV (open circuit voltage)-SoC relationship based on the estimated OCV, especially for batteries with a flat OCV-SoC relationship. To tackle the problem caused by flat OCV vs. SoC segments when the OCV-based SoC estimation method is adopted, reference [18] proposes a method combining coulomb counting and the OCV-based method, in which two different real-time model-based SoC estimation methods for lithium-ion batteries are presented: one based on model parameter identification using the weighted RLS method and another based on state estimation using the EKF method. For the third aspect, real-time SoC estimation, reference [19] implements the particle filter to estimate battery SoC, which is more accurate than the EKF. However, the generation and operation of the numerous particles creates a heavy calculation burden in real applications. To reduce the calculation requirement, reference [20] proposes a new SoC estimation method based on a square root unscented Kalman filter using a spherical transform (Sqrt-UKFST) with a unit hypersphere. However, it still suffers from certain numerical instabilities. In contrast, the central difference Kalman filter (CDKF) is a stable algorithm and is able to generate the required number of points for state estimation intelligently. References [21,22] indicate that the CDKF, as one of the sigma-point Kalman filter (SPKF) methods, is able to avoid the linearization error of the battery model and improve the model's precision for SoC estimation, and thus has the potential to solve nonlinear estimation problems [23][24][25][26][27].

Contribution of the Paper

In order to select the optimal battery model based on the measured current and voltage data, the Akaike information criterion (AIC) is applied to make a tradeoff between battery prediction precision and complexity. Based on the battery model selected at different life stages, the input-output equation is built and the RLS algorithm is used to identify the model parameters in real time so as to track the time-varying characteristics of the battery. The relationship between OCV and SoC is employed to describe the nonlinear characteristic of the battery, and, to avoid the linearization error of the state-space equation for SoC estimation, the CDKF algorithm is applied to realize real-time nonlinear SoC estimation. By combining the RLS and CDKF algorithms, the real-time closed-loop joint estimator for the battery parameters and SoC is constructed.
Organization of the Paper

In order to estimate the parameters and SoC of a battery in real time, this paper is organized as follows: the battery modeling and real-time parameter identification process is described in Section 2; the CDKF-based SoC estimator is constructed in Section 3; Section 4 presents the lithium-ion cell datasets used for verification; verification and discussion follow in Section 5; finally, conclusions are drawn in Section 6.

Battery Modeling and Real-Time Parameter Identification

The n-order RC battery model is employed here to simulate the battery characteristics, and its schematic diagram is shown in Figure 2, where iL is the load current (positive on discharge, negative on charge); Ut is the terminal voltage; UOC is the open circuit voltage (OCV); R0 is the equivalent ohmic resistance; Ci is the ith equivalent polarization capacitance and Ri is the ith equivalent polarization resistance, simulating the transient response during a charge or discharge process; Ui is the voltage across Ci, i = 1, 2, 3, …, n.

According to reference [10], the discrete input-output form of the n-order RC battery model can be written as

U_{t,k} − U_{oc,k} = Σ_{i=1}^{n} a_i (U_{t,k−i} − U_{oc,k−i}) + Σ_{i=0}^{n} b_i i_{L,k−i}   (1)

where a_i and b_i (i = 1, 2, 3, …, n) are the fitting coefficients and can be expressed as functions of the battery model parameters.

The RLS method is applied to identify the model parameters in real time, with the recursion

θ̂_k = θ̂_{k−1} + K_{LS,k} (y_k − φ_kᵀ θ̂_{k−1}),
K_{LS,k} = P_{LS,k−1} φ_k / (λ + φ_kᵀ P_{LS,k−1} φ_k),
P_{LS,k} = (I − K_{LS,k} φ_kᵀ) P_{LS,k−1} / λ   (2)

where θ̂_k is the estimate of the parameter vector θ_k, φ_k is the regressor, K_{LS,k} is the algorithm gain matrix and P_{LS,k} is the covariance matrix; the constant λ is the forgetting factor, which is very important for obtaining a well-estimated parameter set with small error and is typically chosen close to 1. The reader is referred to reference [10] for the detailed mathematical derivation and the RLS-based model parameter identification process.

State of Charge Definition

In this study, the battery SoC is defined by the following equation [28]:

z_k = z_{k−1} − η i_{L,k} Δt / C_a   (3)

where Δt represents the sampling time, C_a represents the available capacity of the battery, and η denotes the current efficiency of the battery.

State-Space Modeling

Taking the SoC and the polarization voltages as the state vector x_k = [z_k, U_{1,k}, …, U_{n,k}]ᵀ, the state-space equation of Figure 2 for SoC estimation is expressed as [28]

x_k = f(x_{k−1}, i_{L,k−1}) + w_{k−1},
U_{t,k} = U_{oc}(z_k) − Σ_{i=1}^{n} U_{i,k} − R_0 i_{L,k} + v_k   (4)

where U_{oc}(z) is the OCV value obtained by interpolation based on the OCV-SoC relationship, which represents the battery's nonlinear characteristic. Both w_k and v_k are assumed to be uncorrelated white Gaussian random processes with zero mean and known covariance matrices.

SoC Estimation Using the Central Difference Kalman Filter Algorithm

To deal with the battery's nonlinear characteristic, the CDKF algorithm is used to build the battery SoC estimator. The CDKF algorithm estimates the mean and covariance of the output of a nonlinear function using a small, fixed number of function evaluations. A set of points (sigma points) is obtained from the function so that the (possibly weighted) mean and covariance of the points exactly match the mean and covariance of the a priori random variable being modeled. These points are then passed through the nonlinear function, resulting in a transformed set of points. The sought a posteriori mean and covariance are then approximated by the mean and covariance of these points. It is noted that the CDKF algorithm thereby avoids the linearization error of the state-space equation. The reader is referred to reference [21] for the detailed calculation process of the CDKF algorithm.
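To make the coupling between Equations (1)-(4) concrete, the following minimal Python sketch runs an RLS identification of a first-order RC model inside a simple SoC loop. It is illustrative only: the OCV curve, the plant parameters, and the scalar feedback gain that stands in for the CDKF correction are all assumptions, not values or code from the paper.

import numpy as np

rng = np.random.default_rng(0)
dt = 1.0                       # sampling time [s]
Ca = 2.0 * 3600.0              # available capacity [A s] (assumed 2 Ah cell)

def ocv(z):
    # Hypothetical smooth OCV-SoC curve, for illustration only.
    return 3.2 + 0.8 * z

def rls_update(theta, P, phi, y, lam=0.99):
    # Recursive least squares with forgetting factor lam (cf. Equation (2)).
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam
    return theta, P

# "True" first-order RC plant used only to synthesise measurements.
R0, R1, C1 = 0.05, 0.03, 2000.0
a1 = np.exp(-dt / (R1 * C1))

theta = np.zeros(3)            # [a1, b0, b1] of Equation (1) with n = 1
P = np.eye(3) * 1e3
z_true, z_hat = 0.9, 0.8       # deliberately wrong initial SoC estimate
U1 = e_prev = i_prev = 0.0

for k in range(3600):
    iL = 1.0 + 0.5 * np.sin(2 * np.pi * k / 300.0)   # load current [A]
    z_true -= iL * dt / Ca                           # Equation (3), eta = 1
    U1 = a1 * U1 + R1 * (1.0 - a1) * iL              # polarisation voltage
    Ut = ocv(z_true) - U1 - R0 * iL + rng.normal(0.0, 1e-3)

    # RLS sees the OCV implied by the current SoC estimate -- this is the
    # point where the two estimators are mutually coupled.
    e = Ut - ocv(z_hat)
    phi = np.array([e_prev, iL, i_prev])
    theta, P = rls_update(theta, P, phi, e)

    # SoC propagation by Ah counting plus a crude voltage-error feedback;
    # the paper uses a CDKF here, so this fixed gain is only a stand-in.
    z_hat -= iL * dt / Ca
    innov = Ut - (ocv(z_hat) + phi @ theta)
    z_hat += 0.05 * innov

    e_prev, i_prev = e, iL

print(f"final SoC error: {abs(z_true - z_hat):.4f}")

In a full implementation the fixed feedback gain would be replaced by the CDKF time and measurement updates, with the identified parameter vector handed to the filter at every step, giving the closed-loop structure described next.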
Figure 3 shows the general diagram of the real-time joint SoC estimator, and the operating steps are as follows:

• Data measurement. The sensors collect the real-time data on current, voltage and temperature at each sampling time, and the collected data are then applied to identify the model parameters and estimate the SoC in real time.

• Model parameter identification. The RLS method is used to realize real-time model parameter identification based on the collected current and voltage data. The identified model parameters are then transferred to the CDKF-based SoC estimator, and the estimated OCV value is transferred back in turn. Herein, a stable and accurate RLS-based model parameter identification process both underpins and relies on the good stability and high accuracy of the CDKF estimator.

• CDKF-based SoC estimator. The CDKF algorithm is used to estimate the SoC based on the identified model parameters. In this process, if the model parameters are not identified correctly, the CDKF estimator will not work normally, leading to a wrong returned OCV value. However, the RLS and CDKF automatically correct the wrong estimates simultaneously, based on the large observer errors and gain matrices, and both of their estimates then converge to the true values quickly, which realizes the closed-loop SoC estimation process. Herein, the proposed estimator is able to estimate the SoC accurately against different operating environment disturbances.

(Figure 3, not reproduced here, also shows the internal blocks of the loop: the error covariance time update, the covariance matrix computation, the real-time collection of current, voltage and temperature, the real-time model parameter identification based on Equation (2), and the n-order RC battery model.)

Experiment Setup

As a verification case, a LiFePO4 cell and a LiMn2O4 cell are selected to evaluate the proposed method. The test bench setup is shown in Figure 4. It is made up of an Arbin BT2000 battery cycler (Arbin, College Station, TX, USA), a thermal chamber used to control the operating temperature, a computer used for programming and storing experimental data, and one LiFePO4/LiMn2O4 cell. The battery cycler channel is used to load the current or power profiles onto the test cells, with a voltage range of 0-60 V and a current range of ±300 A, and its recorded data include current, voltage, temperature, charge-discharge amp-hours (Ah), watt-hours (Wh), etc. The measurement error of the current and voltage sensors inside the Arbin BT2000 cycler is less than ±0.05%. The measured data are passed to the host computer through TCP/IP ports. The test cell is connected to the Arbin BT2000 cycler and placed inside the thermal chamber to maintain the desired temperature during the tests.

Battery Test

The characteristics of the two types of batteries used are shown in Table 1, from which we can see that the LiFePO4 cell capacity has decreased by about 4.35% and the LiMn2O4 cell capacity by about 9.11%. The experimental data used in this paper are shown in Figures 5-9. It is important to note that in this research we only consider operation at 25 °C. Figures 5-7 show the experimental data of the LiFePO4 cell; Figure 5 is the relationship between battery SoC and OCV.
Figure 6 shows the details of the hybrid pulse power characteristic (HPPC) test [29]: Figure 6a is the current profile and Figure 6b is the voltage profile. Figures 8 and 9 show the experimental data of the LiMn2O4 cell. Figure 8 is the relationship between battery SoC and OCV. The profiles of the Beijing Driving Cycles (BJDC) are shown in Figure 9: Figure 9a describes the experimental current, Figure 9b the terminal voltage, and Figure 9c the calculated SoC. The sampling frequency is set to 1 Hz.

Verification and Discussion

Considering practical applications, only the portion of the test data within 10%-90% SoC in these datasets is used in SoC estimation.

Model Selection

Referring to the n-order RC battery model, reference [12] points out that the number of model parameters increases rapidly with the number of RC networks, so the calculation burden becomes heavier and a larger memory is required to store the large amount of sample data. It is therefore meaningful to select the minimum RC network with acceptable accuracy.

In this paper the AIC is employed to establish a tradeoff between model complexity and prediction precision. The information-theoretic or entropic AIC criterion aims to identify an optimal and parsimonious model in data analysis from a class of competing models, taking model complexity into account [30]. The AIC used here takes the standard least-squares form

AIC = ln(ŝ_k²) + 2n/N   (5)

where n denotes the RC order of the combined model and ŝ_k² denotes the sum of the residual error squares of the RLS fit (Equation (2)),

ŝ_k² = Σ_{i=1}^{N} (U_{t,i} − Û_{t,i})²   (6)

where N denotes the data length.

The datasets used here are the HPPC profiles of the LiFePO4 cell. Figure 10 plots the model voltage prediction errors with n = 0-5, and the statistical results are shown in Table 2. From Table 2 we can see that the minimum AIC value is −4.80 when n equals 1. When n equals 0, the battery model is simple but the model precision is poor: the maximum voltage prediction error is up to 67.02 mV, so the precision term dominates the AIC, which is then large. From Figure 10 we can see that the voltage prediction precision improves greatly as n increases from 0 to 1, and Table 2 shows that the maximum voltage error decreases by about 60 mV, leading to a decrease in the AIC. However, Figure 10 and Table 2 also show that the model precision improves only a little as n continues to increase; for example, the standard deviation decreases by only 0.05 mV as n increases from 1 to 5, and the maximum voltage error even worsens in this process, so the complexity term dominates the AIC, which therefore keeps increasing as n grows beyond 1. Hence the model with n = 1 is the optimal tradeoff between model complexity and precision, and is thus selected to implement the SoC estimation below. The duration of each case is also shown in Table 2. Note that all the procedures in this paper are run in Matlab/Simulink R2012b on an HP Z620 workstation (Hewlett-Packard Development Company, Palo Alto, CA, USA) equipped with an Intel Xeon E5-2620 v2 @ 2.10 GHz CPU and 32 GB of RAM.
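For completeness, the model-selection step above reduces to a few lines of code. The normalisation of the complexity term below follows the reconstructed Equation (5) and may differ from the authors' original implementation, so treat it as a sketch:

import numpy as np

def aic_score(residuals, n):
    # AIC as in Equation (5): log of the residual sum of squares plus a
    # complexity penalty that grows with the RC order n.
    N = len(residuals)
    rss = float(np.sum(np.square(residuals)))
    return np.log(rss) + 2.0 * n / N

# residuals_by_order would hold the RLS voltage-prediction residuals for
# each candidate order, e.g. {0: res0, 1: res1, ...} (hypothetical data):
# best_n = min(residuals_by_order,
#              key=lambda n: aic_score(residuals_by_order[n], n))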
Figure 11 shows the model parameter identification results with different initial SoCs of the LiFePO4 cell: Figure 11a is the internal resistance R0, Figure 11b the polarization resistance R1 and Figure 11c the polarization capacitance C1. From Figure 11 we find that the estimation results of the 80% initial SoC case are the same as those of the 90% initial SoC case, which is in accordance with real applications. Besides, the model parameters converge to the true values quickly after the initial fluctuation. We also find that, as the SoC decreases, R0 increases gently while R1 and C1 both change strongly. This is because the polarization resistance and capacitance describe the transient characteristics of the battery, which vary greatly when the battery is charging or discharging, whereas the internal resistance does not.

Figure 12 shows the voltage prediction results with different initial SoCs of the LiFePO4 cell based on the identified model parameters: Figure 12a describes the reference and predicted terminal voltages, and Figure 12b the voltage prediction error. Statistical results of the voltage prediction error are shown in Table 3. From Figure 12 and Table 3 we find that the predicted voltage agrees very well with the reference voltage, and the maximum predicted voltage error is less than 5 mV for both initial-SoC cases (90% and 80%). It is noted that, in all cases, the SoC reference profiles are calculated from the Arbin data-logger using ampere-hour counting on the measured data. In order to obtain the reference SoC profiles, the cells are first fully charged and finally fully discharged after several loading-profile tests at nominal current. In this way, accurate initial and terminal SoC values are obtained. The SoC reference trajectory is then derived from the Arbin data-logger together with a correction from the current efficiency map; in most cases, the battery's current efficiency is close to 100%. The SoC estimation results with the accurate initial SoC of the LiFePO4 cell are shown in Figure 13.
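For reference, the following is a minimal sketch of a recursive least squares update with a forgetting factor, the standard recursion behind real-time identification results such as those in Figure 11. The regressor construction, forgetting-factor value and initialization are assumptions for illustration; the paper's exact formulation is its Equation (2).

```python
# A minimal recursive-least-squares update with a forgetting factor. The
# mapping from theta back to R0, R1, C1 depends on the chosen discretization
# of the RC network and is only hinted at here.

import numpy as np

class RLS:
    def __init__(self, n_params: int, lam: float = 0.995):
        self.theta = np.zeros(n_params)      # parameter estimates
        self.P = 1e4 * np.eye(n_params)      # covariance (large: uncertain)
        self.lam = lam                       # forgetting factor (assumed)

    def update(self, phi: np.ndarray, y: float) -> np.ndarray:
        """One step for the scalar regression y = phi @ theta + e."""
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)                 # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, Pphi)) / self.lam   # covariance update
        return self.theta

# At each sample, phi would be built from past voltage/current samples and
# the OCV returned by the CDKF; a single made-up step is shown here.
rls = RLS(n_params=3)
print(rls.update(phi=np.array([1.0, 0.5, -0.2]), y=0.1))
```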
Figure 13a describes the reference and estimated terminal voltages, and Figure 13b shows the voltage estimation error. The reference and estimated SoC are shown in Figure 13c, and Figure 13d describes the SoC estimation error. It is noted that accurate CDKF-based SoC estimation depends on accurate RLS-based model parameter identification, and the return of an accurate OCV further ensures the high estimation accuracy of the RLS algorithm. From Figure 13a,b we find that the voltage estimation result is good, and Table 4 shows that the voltage mean error and standard deviation are -14.30 and 11.32 mV, respectively. If a more accurate estimated voltage is needed, the RLS-based voltage estimate can always be applied in real applications. From Figure 13c,d we find that the SoC estimation is very accurate, and Table 4 shows that the maximum SoC estimation error is only 0.04%. Considering that the model parameters are identified in real time by the RLS algorithm, the CDKF-based SoC estimator is expected to realize accurate voltage and SoC estimation under different operating conditions. Table 4 also shows the joint estimation duration in this case, 48.585 s. Compared with the durations in Table 2, the calculation complexity would be much heavier if a dual-CDKF joint estimator were constructed as in reference [18]. However, in practical applications the accurate initial SoC value cannot be obtained, so the robustness of the SoC estimator against inaccurate initial values should be studied systematically. Figure 14 and Table 5 show the SoC estimation results with an inaccurate initial SoC, where the initial SoC is incorrectly set to 80%.

Figure 14 describes the SoC estimation results with an inaccurate initial SoC of the LiFePO4 cell: Figure 14a describes the reference and estimated voltages, Figure 14b the voltage estimation error, Figure 14c the reference and estimated SoC, and Figure 14d the SoC estimation error. The statistical results of the SoC estimation are listed in Table 5. From Figure 14 we find that the estimated terminal voltage and SoC converge to the reference trajectories quickly, taking only about one second to correct the erroneous initial state of the SoC estimator. This is because the proposed estimator precisely estimates the voltage and adjusts the Kalman gain in a timely manner according to the terminal-voltage error between the true and estimated values. The erroneous SoC estimate produces larger terminal-voltage errors, which in turn produce a large Kalman gain matrix that compensates the SoC estimate through efficient closed-loop feedback. Accurate SoC estimates can thus be achieved even with a significant initial SoC offset. From Table 5 we find that the maximum SoC estimation error is only 0.05% after the inaccurate initial SoC converges to the reference value. The joint estimation duration is 49.115 s when the inaccurate initial SoC is set to 80%. The joint estimator is thus able to describe the LiFePO4 cell characteristics accurately and to realize precise real-time battery parameter and SoC estimation even though the cell capacity has decreased by about 4.35%. In order to further verify the proposed SoC estimator against a different battery type, the BJDC datasets of the LiMn2O4 cell are used; it is noted that the exact initial SoC is 90%.
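The convergence behavior just described can be illustrated with a deliberately simplified, scalar Kalman-style correction. The sketch below is not the paper's CDKF: the linear OCV curve, cell capacity, resistance and all noise levels are invented, and it only shows how a wrong initial SoC inflates the voltage innovation and hence the gain, pulling the estimate back within a few samples.

```python
# A toy scalar Kalman-style SoC correction illustrating the closed-loop
# feedback described above; all model values are illustrative assumptions.

import numpy as np

def ocv(soc):                        # toy linear OCV(SoC), volts
    return 3.0 + 0.8 * soc

dt, capacity_As = 1.0, 2.0 * 3600.0  # 1 s sampling, assumed 2 Ah cell
r0 = 0.05                            # assumed ohmic resistance (ohm)
soc_true, soc_est, P = 0.90, 0.80, 1e-2   # estimator starts 10% wrong
Q, R_meas = 1e-10, 1e-6              # process / measurement variances
rng = np.random.default_rng(1)

for _ in range(60):
    i_k = 1.0                                        # 1 A discharge
    soc_true -= i_k * dt / capacity_As
    v_meas = ocv(soc_true) - r0 * i_k + rng.normal(0.0, 1e-3)
    soc_est -= i_k * dt / capacity_As                # time update
    P += Q
    H = 0.8                                          # d(OCV)/d(SoC)
    K = P * H / (H * P * H + R_meas)                 # gain grows with error
    soc_est += K * (v_meas - (ocv(soc_est) - r0 * i_k))
    P *= (1.0 - K * H)

print(round(soc_true, 4), round(soc_est, 4))  # estimate tracks the truth
```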
Figure 15 shows the model parameter identification results with different initial SoCs of the LiMn2O4 cell: Figure 15a is the internal resistance R0, Figure 15b the polarization resistance R1 and Figure 15c the polarization capacitance C1. As with the results in Figure 11, the estimation results of the 80% initial SoC case are the same as those of the 90% initial SoC case. The model parameters again converge to the true values quickly and, compared with the internal resistance, the polarization resistance and capacitance fluctuate more strongly. Figure 16 describes the voltage prediction results with different initial SoCs of the LiMn2O4 cell: Figure 16a shows the reference and predicted voltages, and Figure 16b the voltage prediction error. The statistical results of the voltage prediction error are presented in Table 6. From Figure 16 and Table 6 we again find that the RLS-based predicted voltage is very accurate, with the maximum estimation error within 7 mV for both cases. Figure 17 shows the SoC estimation results with different initial SoCs of the LiMn2O4 cell, where the initial SoCs are set to 80% and 90%, respectively: Figure 17a describes the reference and estimated voltages, Figure 17b the voltage estimation error, Figure 17c the reference and estimated SoC, and Figure 17d the SoC estimation error. The statistical results of the SoC estimation are listed in Table 7. From Figure 17 we find that, for the case of the initial SoC being 80%, the estimated terminal voltage and SoC converge to the reference trajectories quickly, again taking only about one second to correct the erroneous initial state of the SoC estimator; afterwards, the estimator traces the reference SoC trajectory stably and accurately, just as in the exact initial SoC case. Table 7 shows that the maximum SoC estimation error is less than 1% for both cases. The voltage estimation result may not be good enough on its own, but in real applications the RLS-based estimated voltage can be applied. The simulation durations of the two cases are also shown in Table 7. The joint estimator is thus able to describe the LiMn2O4 cell characteristics accurately and to realize precise real-time battery parameter and SoC estimation even though the cell capacity has decreased by about 9.11%.

Conclusions

In view of the model-based SoC estimation schematic diagram, the n-order RC battery model is proposed to simulate the major time-variable, nonlinear characteristics of batteries. To select a reasonable n, the AIC criterion is applied to establish the optimal tradeoff between model complexity and prediction precision, and the results show that the first-order RC model is the best. Then, a real-time joint estimator based on the RLS and CDKF algorithms is built to realize real-time battery model parameter identification and SoC estimation. The results based on the LiFePO4 cell and the LiMn2O4 cell indicate that the proposed SoC estimator is a closed-loop identification system in which the model parameter identification and SoC estimation are corrected mutually, adaptively and simultaneously according to the observer values. The maximum voltage prediction error is within 10 mV and the maximum SoC estimation error is less than 1%, even against an erroneous initial SoC. It is noted that the LiFePO4 cell and the LiMn2O4 cell are in different life stages; the proposed SoC estimator is therefore expected to estimate the SoC accurately even under different application and battery aging conditions.

Figure 1. Model-based state of charge (SoC) estimation flow diagram.
Figure 3. The general diagram of the real-time joint SoC estimator.
Figure 5. The relationship between battery SoC and OCV of the LiFePO4 cell.
Figure 13. SoC estimation results with the accurate initial SoC of the LiFePO4 cell: (a) Voltage; (b) Voltage error; (c) SoC; (d) SoC error.
Figure 16. The voltage prediction results of the LiMn2O4 cell: (a) Voltage; (b) Voltage error.
Table 1. Main specifications of the two types of batteries.
Table 2. Statistical results of the voltage prediction error.
Table 4. Statistical results of the SoC estimation error.
Table 5. Statistical results of the SoC estimation error.
Table 6. Statistical results of the voltage prediction error.
Table 7. Statistical results of the SoC estimation error.
SmartFire: Intelligent Platform for Monitoring Fire Extinguishers and Their Building Environment

Due to fire protection regulations, a minimum number of fire extinguishers must be available depending on the surface area of each building, industrial establishment or workplace. There is also a set of rules that establish where the fire extinguisher should be placed: always close to the points that are most likely to be affected by a fire, and where they are visible and accessible for use. Fire extinguishers are pressure devices, which means that they require maintenance operations to ensure they will function properly in the case of a fire. The purpose of manual and periodic fire extinguisher checks is to verify that their labeling, installation and condition comply with the standards. Security seals, inscriptions, hoses and other components are thoroughly checked. The state of charge (weight and pressure) of the extinguisher, the bottle of propellant gas (if available), and the state of all mechanical parts (nozzle, valves, hose, etc.) are also checked. To ensure greater safety and reduce the economic costs associated with maintaining fire extinguishers, it is necessary to develop a system that allows monitoring of their status. One of the advantages of monitoring fire extinguishers is that it becomes possible to understand which external factors affect them (for example, temperature or humidity) and how they do so. For this reason, this article presents a system of soft agents that monitors the state of the extinguishers, collects a history of the state of the extinguisher and environmental factors, and sends notifications if any parameter is not within the range of normal values. The results rendered by the SmartFire prototype indicate that its accuracy in calculating pressure changes is equivalent to that of a dedicated data acquisition system (DAS). The comparative study of the two curves (SmartFire and DAS) shows that the average error between the two curves is small: 8% in low-pressure measurements (up to 3 bar) and 0.3% at high pressure (above 3 bar).

Introduction

One of the most crucial safety aspects of any building is ensuring that all the necessary fire safety measures are in place. The building must have a clearly marked emergency exit route that will safely guide people out of the building [1]. However, within the possibilities of the building, other measures must also be in place to try, as far as possible, to extinguish or contain fire. Fire extinguishers are one such measure; they are used to extinguish or control small fires in emergency situations. However, they are not designed for use in uncontrolled fires that endanger the user (i.e., no exit route, smoke, explosion hazard, etc.) or that require the expertise of a fire department. Typically, a fire extinguisher consists of a cylindrical pressure vessel containing an agent that can be discharged to extinguish fire. There are also fire extinguishers made from non-cylindrical pressure vessels, but they are less common [2,3]. There are two main types of fire extinguishers: stored-pressure and cartridge-operated. In stored-pressure units, the expellant is stored in the same chamber as the firefighting agent itself. Different extinguishers are used depending on the fuel that has caused the fire; these are classified by letters, which refer to the type of fire the extinguisher is designed for. The types of fire are:

• Class A: Fires that involve solid fuels such as wood, cardboard, plastic, etc.
• Class B: Fires that involve liquid fuels such as oil, gasoline or paint.
• Class C: Fires that involve gas fuels such as butane, propane or city gas.
• Class D: Fires of this type are the rarest; the fuel is a metal, and the burning metals are magnesium, sodium or aluminium powder.

Taking this categorization into account, we can better understand the types of available fire extinguishers and the fire extinguishing agents they contain. The most common extinguishing agents include:

• Water: Suitable for Type A fires, always in places where there is no electricity. Water is not suitable for fires involving liquid fuels such as gasoline or oil because it is denser than those liquids; as a result, the fuel would remain on top of the water and it would not be possible to extinguish the fire.
• Water spray: Ideal for extinguishing Type A fires and suitable for Type B fires. It should never be used in the presence of electric current, as water could cause electrocution. This type of fire extinguisher is good outside homes where there is no electrical risk, such as gardens, barbecues, etc.
• Foam: Ideal for Type A and B fires; we have all seen firefighters spray foam at emergency drills. As with the previous one, it is dangerous in the presence of electricity.
• Powder: The most common type, used in any building. It is suitable for fires of Types A, B and C. Being a dry powder, it can be used in the presence of electricity. It is the most recommended extinguisher for houses, offices or any other type of building.
• CO2 extinguishers: CO2 is a gas and thus cannot conduct electricity. This type of fire extinguisher is suitable for fires of Types A, B and C. It is usually used in the presence of delicate elements, where other types of extinguishers would damage the objects. If we use a standard fire extinguisher in a laboratory, for example, the foam or powder could damage expensive machines and equipment; thus, CO2 extinguishers are ideal for this type of environment. Although they are the most versatile of all available extinguishers, they are also the most sensitive: changes in temperature can affect them considerably. Given that they do not have a manometer, it is impossible to know their pressure in real time, and thus manual inspection is necessary.

Since fire extinguishers are pressure appliances, they require maintenance operations to ensure they function properly in the case of a fire. For this reason, the condition of fire extinguishers is checked periodically, including their security seals, inscriptions, hoses, etc. This mainly includes checking the state of charge of the extinguisher (weight and pressure), the bottle of propellant gas (if available), and the state of all mechanical parts (nozzle, valves, hose, etc.). All this is done to make sure they are safe and that they will be effective in extinguishing a fire. Moreover, it is necessary to ensure that they are located in a visible place that is easy to access [4]. Currently, the inspection of fire extinguishers must be performed by a specialized operator. These checks are carried out manually every three months, one year or five years, depending on the type of fire extinguisher, and no fire extinguisher can exceed twenty years of useful life. Thus, the investigation of automated inspection methods would be of great benefit, as it would allow one to monitor the condition of the fire extinguishers in real time.
As a result, the company responsible for inspecting the fire extinguishers would be notified immediately after an abnormal condition (in terms of weight or pressure) is detected by the automated system. Hence, an automated system provides confidence that all the fire extinguishers are in optimal condition in the case of a fire. Moreover, this would contribute to economic savings, since it would no longer be necessary for operators to carry out periodic inspections: the automated system would review the state of all the fire extinguishers simultaneously. At the time of inspection, both the mechanical components and the pressure are checked. It is unusual to detect faults in the state of the mechanical components, although it is possible to detect faults in the pressure of the extinguisher. The reason for inspecting both aspects of the fire extinguishers at the same time is more economic than functional. However, reviews are also carried out in the event of the detection of a mechanical defect (lack of any component, damaged elements, etc.) or a variation in pressure (small and inaccurate manometers, or the lack thereof, e.g., in CO2 extinguishers). To be able to monitor the state of the extinguishers in real time and perform autonomous control, it is necessary to place sensors on them, including pressure, temperature, humidity and smoke sensors [5,6]. Devices with high levels of pressure, such as CO2 units, require sensors that are more expensive than the extinguisher itself, and thus the proposed system provides a much more economical solution. In this way, it is possible to analyze the impact of factors such as humidity or temperature on the condition of the fire extinguisher. However, there are even greater possibilities if a position sensor is deployed. If the set of sensors includes a position sensor, it would be possible to find out (when the temperature began to rise in the building) when each fire extinguisher was used, how the fire spread through the building, and the order in which the fire extinguishers were used (when each was removed from its support). The platform will therefore provide mechanisms that allow one to understand how people behave in the event of a fire. This, in turn, would allow one to assess the security measures deployed in the building and adapt them to the behavior of people in emergency situations [7,8]. Human behavior during the initial phase of a fire is an important factor in terms of survival [9,10], and this information would enable the development or optimization of exit routes within buildings or dwellings. The goal of the work presented in this article was to provide a novel system of soft agents for the detection of pressure changes in fire extinguishers, as well as to record changes in the parameters that affect them, such as temperature, humidity, etc. If the information indicates any incident, or if any value is outside its normal range, an alert is sent [11]. There is a system similar to the one presented in this article, called en-Gauge Fire Extinguisher (http://www.engaugeinc.net/fire-extinguisher-monitoring), but it is a commercial product adapted to a specific fire extinguisher model: it warns when the pressure level has fallen below an operable level, and it also requires the installation of a plate on the wall.
Our prototype can be used on any fire extinguisher and does not require work on the wall; thanks to the web platform, the pressure can be visualized at any time, which will in the future allow machine learning techniques to be included to perform predictive maintenance. The main contributions of this paper include the use of sensors to acquire information on the state of fire extinguishers and the detection of anomalies in the pressure of the extinguisher without the need for human intervention. The platform could even provide knowledge on the behavior of people during a fire and on their use of the fire extinguishers, because one will know the area where the fire originally occurred by knowing which extinguisher was used first. To complement the data acquired from the fire extinguisher, the developed platform obtains data from the building environment [12]. The developed system uses a combination of information from the fire extinguishers and the building environment, providing knowledge of the variables that influence the state of fire extinguishers and making real-time inspection possible without the need for operators. The rest of the article is structured as follows: Section 2 reviews related state-of-the-art projects and the most commonly used technologies. Section 3 describes the SmartFire platform. Section 4 outlines the case studies that were performed to evaluate the proposed platform. Section 5 outlines the results. Finally, Section 6 draws conclusions from this proposal and discusses future lines of research.

Related Work

This section presents a thorough review of state-of-the-art literature in the field of fire extinguishers. We analyzed the variables that affect the pressure under which the internal composition of fire extinguishers is stored and also performed an in-depth study of how to correctly measure the pressure of a fire extinguisher from its outside. Moreover, we looked into the different technologies used in the literature, examining their advantages and disadvantages for the development of our platform. We must point out, however, that the current literature does not present any type of platform capable of analyzing the parameters that affect the pressure in fire extinguishers. This makes evident the contribution of our platform, which collects data autonomously and checks whether the parameters of all fire extinguishers are similar, detecting and notifying possible incidents. The measurement of pressure is related to the measurement of deformation, and in the case of small deformations the technology most used for this purpose is the strain gauge. Strain gauges are used fundamentally for the determination of stresses, and in our case we seek to relate these to the pressure [13,14]. Another of the conditioning factors/limitations of this work consisted in finding a low-cost monitoring platform that would not make the final price of the extinguisher more expensive, capable of taking measurements (temperature, humidity, etc.) inside buildings, with wireless communication and low consumption [15].

Proposals with SmartFire Sub-Objectives

There are no systems that realize in a single prototype the objectives proposed in this article. Below are some works that have proposed solutions for partial objectives covered by the SmartFire prototype. Park et al. proposed measuring the pressure gauge using color segmentation for the safety management of fire extinguishers [16].
The main idea is that the pressure gauge includes a green zone indicating normal pressure. In the work done by Jia-ming Jin, the release characteristics of the gas extinguishing agent from a fire extinguisher vessel at different filling conditions were studied [17,18]. The results show that the outlet pressures of the fire extinguisher vessel basically follow the same trend at different initial temperatures: all of them decline rapidly with the jetting time and then level off. Park et al. [18] also proposed a fire extinguisher maintenance system using smart NFC communication as well as real-time pressure measurement. The proposed system consists of three steps in the flow of information. The first step is to identify the fire extinguisher through NFC tagging in the fire extinguisher module using the smart device. The fire extinguisher appearance check and the real-time pressure measurement are performed in the second step, and the last step sends the check status information to the management server. In particular, the actual pressure value is calculated based on the angle between the green area and the indicating needle. However, the use of NFC severely limits these systems in large multi-story office buildings. Other work is based on measuring pressure drops, such as the work presented by [19], who developed an application of portable fire extinguishing equipment for fire prevention on power transmission lines and showed that massive wildfires can be extinguished quickly. By using the equipment, the electric power company's ability to resist wildfires has improved remarkably.

Algorithm for Calculating the Pressure of a Fire Extinguisher

In mechanical design, to be able to calculate the stress of an element, it is necessary to know what material it is made of, its geometry and the conditions it is subjected to. The behavioral theories used here (and the development presented in this paper) are based on two principles: (i) equilibrium, i.e., the external forces (in this case the pressure) and the internal forces (the reaction of the material) counteract each other; and (ii) compatibility of deformations, i.e., the deformations in different elements are contiguous with each other. With regard to pressurized vessels, there are two theories [20], depending on the relationship between the radius of the element and its thickness. If the thickness (t) is at least one order of magnitude smaller than the inner radius, the vessel is considered thin-walled; a commonly used criterion in the literature is r_i/t > 20, where r_i is the inner radius, which means that, when r_i/t < 20, the thickness is too large to neglect relative to the radius [21]. Thus, a different formulation must be used in each case. A pressure vessel can be spherical or cylindrical with hemispheric or ellipsoidal endings. We took measurements on the cylindrical part, since it is a well-defined area in which the following stresses appear (Figure 1):

• σ_θ, the tangential stress, acts in the direction tangent to the circumference. It is the largest of all and increases the perimeter of the cylinder; at large enough deformations, the wall of the cylinder would separate along its generator.
• σ_r, the radial stress, acts in the direction of the radius, making the radius increase in length. It is the smallest of them all.
• σ_z, the stress in the z direction, makes the cylinder increase in length; at the deformation level it is as if the cylinder were pulled at the ends and stretched.

In mechanical engineering, depending on the case, i.e., thin or thick wall, different formulations are available, the latter being the more complex.

1. Thin wall, r_i/t > 20. In this case the effect of the thickness is not considered, and hence the radial stress (σ_r) is not taken into account, since it is assumed that the tangential stress (σ_θ) is much larger than the radial stress, which is therefore ignored. The expressions for these stresses [20] are:

Tangential stress: σ_θ = P r / t (1)

Axial stress: σ_z = P r / (2 t) (2)

Radial stress: σ_r ≈ 0 (3)

These expressions relate the internal pressure (P), the radius (r) and the thickness (t). For the radius, we assume (since the thickness is not considered) that the internal and external radii are equal (r_i = r_o); some authors use the mean radius (r_m), which is more accurate. In our case, what we can measure are deformations and not stresses, but through Hooke's law [22], which relates stresses and strains within the linear elastic regime (Equations (4) and (5)), it is possible to relate the deformation measured by the gauge to the internal pressure, because our extinguisher works in this regime. In its uniaxial form,

σ_{r,θ,z} = E ε_{r,θ,z} (4)

where the linear modulus of elasticity (E) is a known characteristic of the material and ε_ij is the deformation measured with the gauge, the subscripts indicating the direction in which the deformation occurs. In radial systems, the spatial directions x, y, z are replaced by the radial (r) and tangential (θ) directions, while z is maintained. For the biaxial stress state of the cylinder wall, the generalized form is

ε_θ = (σ_θ − ν σ_z) / E,  ε_z = (σ_z − ν σ_θ) / E (5)

The objective of this work is to relate the deformation (ε) to the pressure (P) that produces it. We can relate P with ε because we know the stresses as functions of the pressure (Equations (1) and (2)), and the value of the deformation (ε) is obtained directly from the gauge. Which component is measured depends on how the gauge is positioned on the container: if the gauge is positioned in the axial direction, ε_z is measured, and if it is placed tangentially, ε_θ is obtained. Introducing the value of the tangential stress (Equation (1)) into Equation (5) and rearranging, we obtain the internal pressure of the vessel (Equation (6)), which gives the relation between P and ε_θ in terms of known geometrical parameters and material properties, in this case Young's modulus (E) and Poisson's ratio (ν), a material parameter provided by the manufacturer or obtained through mechanical testing. We operate in the same way to obtain the relationship between P and ε_z (Equation (7)), which is the expression to be used when the gauge is mounted axially on metal containers; in thin-wall cases, the radial deformation is ignored and the gauge cannot be placed in that direction. A large-diameter fire extinguisher with a common pressurization (13 bar) is of the thin-walled type (wheeled extinguishers and extinguishing tanks).
Measurement of the deformation in the tangential direction:

P = E t ε_θ / [r (1 − ν/2)] (6)

Measurement of the deformation in the axial direction:

P = 2 E t ε_z / [r (1 − 2ν)] (7)

Most portable devices are pressurized to 13 bar and their diameters are much larger than their thickness (especially wheeled extinguishers and extinguishing tanks), with the aim of optimizing the volume; thus the thin wall is the most common situation, with the exception of CO2 devices, due to their high pressure.

2. Thick wall, r_i/t < 20. In this case the idea is the same, but the formulation is more complex because it is not possible to apply the simplification of negligible thickness, as the thickness does have an influence here. In Figure 1 we see that the internal pressure generates a series of stresses that deform the material, as shown in Figure 2, which depicts an undeformed and a deformed element. The value of the deformation can be calculated by applying finite differences, so that the strains are obtained from the radial displacement δ_r:

ε_r = ∂δ_r / ∂r (8)

ε_θ = δ_r / r (9)

Equations (8) and (9) relate deformations to displacements and are fundamental equations of the deformation theory exposed in Fundamentals of Machine Elements [23]. These equations are used in the following steps for the calculation of the pressure. Here, again, our goal is to calculate the pressure as a function of the deformation: we know that the pressure generates stresses that in turn generate deformations, and we now reverse this logic and obtain the stresses from the deformations. This is achieved by applying Hooke's law (Equation (4)), where the expressions for the radial (Equation (10)) and tangential (Equation (11)) strains are functions of both the radial and tangential stresses:

ε_r = (σ_r − ν σ_θ) / E (10)

ε_θ = (σ_θ − ν σ_r) / E (11)

By equating Equations (8) and (10) we obtain Equation (12), and by equating Equations (9) and (11) we obtain Equation (13):

∂δ_r / ∂r = (σ_r − ν σ_θ) / E (12)

δ_r / r = (σ_θ − ν σ_r) / E (13)

Equations (12) and (13) form a system of two equations and three unknowns. Applying the equilibrium criterion to the system of forces that generate the stresses (Figure 1), we obtain Equation (14), which complements the previous system so that it can be solved as a compatible, determined system:

σ_θ = σ_r + r (dσ_r / dr) (14)

By substituting Equation (14) into Equation (13), solving for δ_r and differentiating with respect to r, we get Equation (15):

∂δ_r / ∂r = (1/E) [ (1 − ν) σ_r + (3 − ν) r (dσ_r / dr) + r² (d²σ_r / dr²) ] (15)

If we now substitute Equation (14) into Equation (12) and equate it to Equation (15), by substituting and operating we obtain the relationship in Equation (16), which can also be expressed as Equation (17):

r (d²σ_r / dr²) + 3 (dσ_r / dr) = 0 (16)

d/dr [ r³ (dσ_r / dr) ] = 0 (17)

Integrating once:

r³ (dσ_r / dr) = C (18)

Rearranging and simplifying:

dσ_r / dr = C / r³ (19)

Integrating once again:

σ_r = C₁ + C₂ / r² (20)

Applying the boundary conditions to Equation (20), for a general pressurization case we would have:

σ_r(r = r_i) = −P_i (21)

σ_r(r = r_o) = −P_o (22)

By operating, we obtain the values of the constants:

C₁ = (P_i r_i² − P_o r_o²) / (r_o² − r_i²) (23)

C₂ = −(P_i − P_o) r_i² r_o² / (r_o² − r_i²) (24)

such that, by substituting the values of the constants in Equation (20) and simplifying, we obtain the radial stress:

σ_r = (P_i r_i² − P_o r_o²) / (r_o² − r_i²) − (P_i − P_o) r_i² r_o² / [ (r_o² − r_i²) r² ] (25)

The tangential stress is provided by Equation (14), by substituting the value of the radial stress (Equation (25)) and the value of its derivative with respect to r (Equation (26)):

dσ_r / dr = 2 (P_i − P_o) r_i² r_o² / [ (r_o² − r_i²) r³ ] (26)

σ_θ = (P_i r_i² − P_o r_o²) / (r_o² − r_i²) + (P_i − P_o) r_i² r_o² / [ (r_o² − r_i²) r² ] (27)

For this particular case, where P_o « P_i, it is assumed that P_o = 0; substituting in Equations (25) and (27) gives the particular stress values

σ_r = [ P_i r_i² / (r_o² − r_i²) ] (1 − r_o²/r²), σ_θ = [ P_i r_i² / (r_o² − r_i²) ] (1 + r_o²/r²)

Finally, returning to Equation (11), we can relate the deformation to the pressure:

ε_θ = (σ_θ − ν σ_r) / E = [ P_i r_i² / (E (r_o² − r_i²)) ] [ (1 + r_o²/r²) − ν (1 − r_o²/r²) ] (28)

Since the strain gauge is placed on the outside radius (r = r_o, where σ_r = 0), the expression is reduced to:

ε_θ = 2 P_i r_i² / [ E (r_o² − r_i²) ], i.e., P_i = E ε_θ (r_o² − r_i²) / (2 r_i²) (29)

Since both the material and the geometry are known, by placing a gauge on the outside of the fire extinguisher in the tangential (circumferential) direction, we can determine the value of the internal pressure in the vessel (fire extinguisher).
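The two working formulas, Equation (6) for the thin-walled case and Equation (29) for a gauge on the outer surface of a thick-walled vessel, are simple enough to evaluate on any host. The sketch below is a minimal Python rendering; the material values (E = 200 GPa, ν = 0.3 for steel) and geometry in the example are assumptions for illustration.

```python
# Pressure-from-strain conversions derived above. The numerical material
# properties and geometry in the example call are illustrative assumptions.

def pressure_thin_wall(eps_theta: float, E: float, nu: float,
                       r: float, t: float) -> float:
    """Internal pressure (Pa) from tangential strain, Equation (6):
    P = E*t*eps_theta / (r*(1 - nu/2)); valid when r/t > 20."""
    return E * t * eps_theta / (r * (1.0 - nu / 2.0))

def pressure_thick_wall(eps_theta_outer: float, E: float,
                        r_i: float, r_o: float) -> float:
    """Internal pressure (Pa) from tangential strain measured at the outer
    radius, Equation (29): P = E*eps*(r_o^2 - r_i^2) / (2*r_i^2)."""
    return E * eps_theta_outer * (r_o ** 2 - r_i ** 2) / (2.0 * r_i ** 2)

# Example: assumed steel shell (E = 200 GPa, nu = 0.3), thin-wall geometry;
# a tangential strain of 2.4e-5 maps to about 1 bar of internal pressure.
p = pressure_thin_wall(2.4e-5, E=200e9, nu=0.3, r=0.085, t=0.0015)
print(round(p / 1e5, 2), "bar")
```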
Agent Based Monitoring Platform

Nowadays, the evolution of electronics has made it possible to greatly reduce the size of sensors, so we are able to collect values for very different parameters through the placement of small devices. The field of communications has also evolved enormously through the development of communication protocols that allow data to be sent over long distances with little power consumption. For this reason, the developed device allows us to measure the pressure of the extinguishers autonomously and send the measurements to an external platform that manages those measurements and detects anomalies. There is no existing device for measuring pressure changes in fire extinguishers with the size and accuracy characteristics required for this task, nor is there a platform to monitor a set of fire extinguishers and send notifications to the personnel responsible for their maintenance. There are some similar works, but their focus is not the same as that of this work. One of these is the work proposed by Chow, who presented a fire safety classification system (EB-FSRS) to evaluate the fire safety provisions in existing high-rise non-residential buildings in Hong Kong; the aim is to investigate to what extent the fire safety provisions of existing buildings deviate from the expectations of the new codes [24]. Other work has focused on the development of computer vision techniques to be used by drones to detect fires in open spaces, such as those developed by Chamoso et al. [25] and Verstock et al. [26]. Rashid et al. developed a multi-sensor-based fire-extinguishing robot and demonstrated its implementation with a brief discussion of its construction and operation [27]. As can be seen from these works, the great majority are focused on extinguishing or detecting fires, not on ensuring that the security measures are in optimum condition. In this respect, the system of soft agents allows one agent to be implemented in each SmartFire prototype in a building, in a way that allows them to communicate, coordinate and cooperate when monitoring the set of extinguishers in the building. This agent methodology has been widely applied to monitoring work in various areas [28,29]. Thanks to the monitoring of the environmental conditions of an extinguisher, the use of this methodology will in the future make it possible to analyze the origin of a fire, the conditions in which it occurred and the behavior of people in this situation. With the environmental values of each extinguisher, new soft agents can be deployed for the application of big data techniques aimed at preventing possible sources of fires.

SmartFire Platform

This section details the SmartFire architecture, the description of the prototype and the software architecture that allows us to apply the algorithm for calculating the pressure of a fire extinguisher.

Prototype Overview

At the Mechanical Engineering Department laboratory, University of Salamanca, Zamora, we have HBM equipment, a Quantum MX840A (Figure 3c), which is used specifically for data acquisition and is particularly accurate when measuring with strain gauges. The Quantum system is used exclusively to calibrate the SmartFire prototype. The proposed architecture (Figure 4) requires both a signal capture system (input) and wireless communications (output), all managed by a control system.
The management system uses a microcontroller of the Arduino series, specifically the Wemos WiFi & Bluetooth Battery board, which allows for wireless communications. This is of vital importance because the system should be coupled with the extinguisher and be an isolated component. In other words, the system does not need to be connected to a power supply network or to a fixed data network, as it has an autonomous power supply system (it can even be connected to a solar panel). The prototype includes a DHT22 sensor on the outside of the housing to also obtain temperature and humidity data. The features are: a WiFi module (ESP-WROOM-02) with digital I/O and 10-bit analog inputs, hence the use of the HX711 Load Cell Amplifier Module, a 24-bit load-cell ADC that provides the finer resolution required. The HX711 is the same amplifier used in weighing systems, which ultimately rely on a strain gauge to relate the deformation of the weighing element to the applied load; the gauge is connected to a Wheatstone bridge for measurement. A Programmable Logic Controller (PLC) of the Omron CP1H-XA brand is used (the A indicates that it has an analog I/O card; Figure 3d). This element is used to control all the test equipment (the general prototype). The PLC manages the entire power system of the installation and controls the valves of the circuit that provide the pressure to the test specimen (pressure vessel). In addition, the pressure sensor is connected to the analog board of the PLC, which works in trigger mode, sending the order to both the prototype and the Quantum to record the pressure values obtained in steps of 0.5 bar (50 kPa). Pneumatic elements: the Mechanical Engineering laboratory of the EPSZ has diverse pneumatic material with which the tests were performed, from the compressor, ducts and electrovalves to instruments for measuring pressure, both basic (manometers) and high-precision (a Panasonic rotary-head pressure sensor, 0-10 MPa), as well as pressure regulators. The manometers are used as redundant instruments (a total of two are used) to verify that the measurement offered by the PLC from the pressure sensor is within the proper range (the accuracy of the pressure gauge is ±0.1 bar and that of the sensor is ±0.01 bar); this is done as a control method, to avoid configuration failures in the PLC.

Soft Agent Platform

The autonomy of soft agent systems allows them to interact with each other without human intervention. Their ability to perceive and react to changes in the environment makes this methodology an ideal approach for obtaining environmental data and responding to those changes with appropriate actions. Characteristics such as extensibility and flexibility make it possible to add new functionalities or include other algorithms and sensors. These advantages have led to numerous monitoring proposals that employ agent systems [30,31]. Agent systems are often applied in the field of process automation due to their ability to deal with more complex systems; various agent systems have been developed for the management of energy optimization processes [29]. However, there is no record of any state-of-the-art development that uses a soft agent platform to monitor the state of fire extinguishers in a building. This soft agent platform allows us to communicate with the SmartFire prototype for the reception of pressure values in real time for each of the monitored fire extinguishers.
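To make the measurement chain concrete, the sketch below strings together the steps described above: raw amplifier counts, bridge voltage, strain, pressure, and a status that the platform agents would turn into a notification. Every constant here (volts per count, gauge factor, excitation, thresholds, vessel data) is an assumed calibration value for illustration; the real prototype obtains its calibration against the Quantum DAS.

```python
# A hedged sketch of the on-device pipeline; all constants are assumptions.

GAUGE_FACTOR = 2.0           # typical foil strain-gauge factor (assumed)
VOLTS_PER_COUNT = 2.98e-8    # assumed ADC scaling after amplification
V_EXC = 3.3                  # assumed Wheatstone-bridge excitation (V)
E, NU, R, T = 200e9, 0.3, 0.085, 0.0015   # assumed vessel properties (SI)
P_MIN_BAR, P_MAX_BAR = 11.0, 15.0         # assumed acceptable band (bar)

def counts_to_strain(counts: int) -> float:
    """Quarter-bridge small-strain approximation: eps = 4*(dV/Vexc)/GF."""
    dv = counts * VOLTS_PER_COUNT
    return 4.0 * (dv / V_EXC) / GAUGE_FACTOR

def strain_to_pressure_bar(eps_theta: float) -> float:
    """Thin-wall conversion, Equation (6), expressed in bar."""
    return E * T * eps_theta / (R * (1.0 - NU / 2.0)) / 1e5

def check_extinguisher(counts: int) -> str:
    """Classify a raw reading; a non-'ok' status would be forwarded by the
    platform agents as a maintenance notification."""
    p = strain_to_pressure_bar(counts_to_strain(counts))
    if p < P_MIN_BAR:
        return "alert: under-pressure ({:.1f} bar)".format(p)
    if p > P_MAX_BAR:
        return "alert: over-pressure ({:.1f} bar)".format(p)
    return "ok ({:.1f} bar)".format(p)
```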
The platform was developed using the JADE framework, which facilitates the development of communication processes between agents by making use of the FIPA-ACL communication standard (Figure 5). The platform makes it possible to visualize the data through a website, as shown in Figure 6, and to send notifications to the people responsible for the maintenance of the extinguishers whenever a fire extinguisher is outside the threshold of normal pressure values. Figure 7 shows the functional concept diagram of the architecture, illustrating how the agent-based system receives the measurements of the deployed prototypes and uses them for notifications, statistics and data visualization, among other tasks.

Case Study: Laboratory Validation

This section details all the components involved in the case study (Figure 8) and how it was conducted. The main idea was to take redundant measurements and make sure that all of their values agreed, so that each method validated the others.

Step 1: Make a thin-wall pressure vessel. Since the theory of cylinders has been sufficiently tested and its use for metallic materials has been proven, a calibrated specimen made of metal was constructed. In our case, a controlled pattern pressure vessel (Figures 3 and 8) was manufactured with the following dimensions: inner diameter d_i = 170 mm; length 450 mm; thickness t = 1.5 mm. The vessel was made of steel of controlled properties (Table 1). The dimensions of this test specimen (pressure vessel) were mainly determined by the pressure limitations of the test laboratory; although it was possible to reach higher pressure levels, this was not necessary since the results extrapolate perfectly. A series of strain gauges (Figure 9) were placed on the specimen, arranged in both the tangential and axial directions, in order to capture measurements in different directions, since redundant measurements increase precision. These gauges were connected to both HBM's commercial data acquisition system (DAS), the Quantum (Figure 3c, blue and green cables), and the microcontroller (Figure 3, orange cable). The data acquired by the Quantum were displayed on the notebook (Figure 8) through the commercial Catman Easy software, version 3.1.3.22.

Step 2: Calibrate the pressure sensor. By means of the pressure regulator (Figure 3a), which includes a manometer, fixed pressure values were applied to the container. The values were verified with the different manometers inserted in the circuit and compared with the reading provided by the pressure sensor (Figure 3a), which was placed in series with the rest of the manometers in the circuit. The pressure sensor reading was performed by the PLC, to which the pressure sensor was initially connected, and was displayed on the computer through its control software (CX-One), which provided measurements in real time. The readings of all the pressure gauges coincided with that of the pressure sensor, so the calibration of the PLC was considered good. The function of the PLC was to detect the pressure level in order to record the pressure values: once a pressure set point was reached, 10 s were allowed to elapse so that the value stabilized, and it was then recorded.
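Before the gauge check of Step 3 (described below), it is useful to see the order of magnitude of strain the specimen should exhibit. The sketch below evaluates the thin-wall expressions for the Step 1 geometry; E and ν are assumed generic steel values, since the controlled properties of Table 1 are not reproduced here.

```python
# Theoretical strains for the Step 1 specimen (d_i = 170 mm, t = 1.5 mm),
# the kind of values compared against the Quantum readings in Step 3.

E, NU = 200e9, 0.3        # assumed steel modulus (Pa) and Poisson's ratio
R, T = 0.085, 0.0015      # inner radius and wall thickness (m); r/t > 20

for p_bar in (1, 2, 4, 8):
    P = p_bar * 1e5                                  # pressure in Pa
    eps_theta = P * R * (1 - NU / 2) / (E * T)       # from Equation (6)
    eps_z = P * R * (1 - 2 * NU) / (2 * E * T)       # from Equation (7)
    print(f"{p_bar} bar -> {eps_theta * 1e6:.1f} microstrain tangential, "
          f"{eps_z * 1e6:.1f} microstrain axial")
```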
Step 3: Check the gauge measurement. Once the validity of the pressure measurement system was checked, the strain gauge measurement was verified and the formulation of Section 2.2 was applied. The specimen parameters (material and geometry) and the pressure were entered into the computer, and the theoretical deformation of the vessel was obtained in both the tangential and axial directions. The theoretical results were then compared with the measurements provided by the Quantum; the obtained results confirmed the measurements.

Step 4: Check the SmartFire system measurement (Figure 10). Pressure was applied progressively and the PLC, at prescribed pressure steps, ordered both the Quantum and the SmartFire prototype to register the deformation values, so that pressure-deformation measurements were obtained from both systems.

Results

Once the case study was validated, it was necessary to compare the results obtained by the PLC system and by the SmartFire prototype to determine whether the prototype was as precise in calculating the changes in pressure as the professional machine. To find out whether the small prototype had the same precision, the curves obtained in Step 4 of the case study were compared (Figure 11). The comparison showed that, in low-pressure measurements (up to 3 bar), the average error between the two curves was 8%; at high pressure (from 3 bar up to 8 bar, the maximum value used in the test), the average error was 0.3%, a negligible value. These values verify the measurement capabilities of SmartFire and open the door to anticipating that an extinguisher will fail before the failure occurs.

Experimental validation in the laboratory has allowed us to certify that the SmartFire prototype rigorously complies with the precision requirements of safety monitoring devices. In addition, the SmartFire prototype makes it possible, very economically, to review the extinguishers of a building compared with the traditional method, as it is not necessary to send anybody to check a device physically: one knows instantly when an extinguisher ceases to be in optimal condition. As future lines of work, we intend to design a case study in an office building for real-time monitoring of fire extinguishers. This will allow us to find out when a fire extinguisher is not suitable for use immediately after an anomaly occurs. Moreover, it will be possible to calculate the average time that an extinguisher would remain in an anomalous state if only traditional inspections existed. This case study would show the average time that elapses from the moment an anomaly occurs in a fire extinguisher to the moment of its periodic check, which can be up to three months, a period during which many extinguishers may have failed. Carrying out this experiment will further validate this proposal, eliminating high-risk situations in which a fire breaks out in a building whose fire extinguishers are not operational. Furthermore, the case of CO2 extinguishers will be studied in more detail, since this study has made clear that it is a different and more complex case requiring a more in-depth study.
Good Governance and State Mechanisms: Defending Human Rights and Anti-Corruption

This essay engages with the international literature on multi-pronged approaches to combatting corruption. It aims to share an important strategy in combatting corruption: a contextual approach. It also suggests that good governance should include issues of equality rather than just providing the decent minimum for human life. This paper argues that, in setting priorities for action, priority should be given to putting in place measures which alleviate poverty.

Keywords—Good Governance; State Mechanism; combatting corruption; poverty

I. INTRODUCTION

Corruption kills. The money stolen through corruption every year is enough to feed the world's hungry 80 times over. Nearly 870 million people go to bed hungry every night, many of them children; corruption denies them their right to food and, in some cases, their right to life. [1] In the period 2000-2009, developing countries were estimated to have lost US$8.44 trillion to illegal conduct, 10 times more than foreign aid payments. [2]

The topic of this presentation is State mechanisms for good governance which both protect human rights and attack the problem of corruption. A great deal has been written about anti-corruption and the good governance measures that states can take to reduce corruption. In this short presentation I would like to make just a few key points. The first is that States, in implementing good governance, should give priority to addressing poverty. The second is that good governance can no longer ignore issues of equality and distributive justice. The third point refers to the current debate about what works in reducing corruption and whether uniform approaches are the answer. In conclusion I offer some comments in relation to anti-corruption agencies.

A. Giving priority to reducing poverty

My first point is that, in developing countries, priority in tackling corruption should be given to measures alleviating poverty. This is referred to as the bottom-up approach. It starts by identifying the most urgent needs within the particular country and then determines which measures will best respond to those needs. [3]

B. Setting strategic priorities

The key concepts of good governance (responsibility, transparency, accountability and participation) are well understood. What is lacking are strategic approaches to dealing with poverty and underdevelopment. [4] The problem is that no priorities are set, nor are strategies sequenced to take into account what might be easily achieved in the short or longer term. [5] As a leading Harvard researcher indicates, the best strategy may be to aim for 'good enough governance', and to ask not what is missing but what is improving and what is working [6]. Recent research questions whether good governance is a necessary prerequisite for economic development and the alleviation of poverty. [7] In some countries small changes can lead to significant gains for poverty reduction even when institutional governance may be quite weak. There is compelling evidence that the particular context in each country is crucial. There is no one size fits all; what will work depends upon the state of development, so interventions must be 'appropriate to time, place, historical experience, and local capacity' [8]. It may be that informal institutions and processes will work just as well as institutional approaches [9].
The questions we need to ask are: what types of corruption disproportionately affect the poor [10]; whether good governance pays off in reducing poverty; and what is gained from measures to reduce corruption. Measures which enhance democratic, open, transparent government do not necessarily lead to a more equal society or reduce poverty, so the current view is that, whilst democracy is a desirable goal, it is not necessarily linked to development [11]. In developing countries with democratically elected governments, the constituent majority are poor. The natural expectation is that the influence of the majority would have a redistributive effect and translate into greater services for the poor. But this has not proven to be the case, with politically powerful groups wielding the most influence over government policies [12]. It is clear that in developed democratic countries there continues to be significant inequality and poverty. For example, in Australia, on the most recently available statistics, 13.3% of Australians lived in relative poverty. [13] If 'good governance' goals set the reduction of poverty as a priority, this should involve careful weighing of the benefits of any particular strategy. For example, would protection from police and the ability to access public services without having to pay a bribe be given priority? Where poverty is the focus, it may be preferable to provide health services to women and children in rural areas rather than engage in costly public service reforms [14].

C. Strategic responses targeting the poor

Corruption most directly affects the most vulnerable citizens: indirectly, through the diversion of resources from government revenue, and directly, as a consequence of having to pay bribes to access basic public services. [15] The 2017 Transparency International Report on the Asia Pacific reported that 25% of those surveyed had paid a bribe or favour for government services, equivalent to over 900 million people across the Asia Pacific. (In the accompanying chart, utilities was not asked about in Mongolia and China; in Malaysia the results are based on the total population due to differences in how the questions were implemented during fieldwork.) The chart shows that in Indonesia access to public services is compromised by corruption. This has a very serious impact on the very poor. The KPK has recently made an arrest in relation to corruption involving the ID card, which may help in addressing this issue [18].

In relation to the bribing of police, it is often argued that increasing police pay (so there is no implicit assumption by the state that they will be paid through bribes) might, initially, be a cheaper alternative than setting up expensive institutional structures. In Georgia (formerly part of the USSR) extreme measures were taken: about half the force were fired, salaries were increased, the traffic police were disbanded and additional training was introduced. [19] That does not mean that this would always work; there is no 'one size fits all'. It depends greatly upon the political climate and committed political leadership. [20] I recall a story I was told by a law academic in one of the ASEAN countries. Final-year law students were asked whether they had ever bribed a policeman when stopped for a traffic offence. Almost the whole class had. In contrast, if this same question were asked in Australian law schools, one would not expect any student to say they had.
Law students would be frightened that they would be prosecuted and, even worse, that they would never be admitted to practice if convicted of an offence. Consequently, there may be very strong biases in particular populations which would need to be taken into account in any attempt to counter bribery or corruption.

III. GOOD GOVERNANCE AND DISTRIBUTIVE JUSTICE

The second issue I would like to raise is the question of distributive justice. I want to suggest that 'good governance' requires States not only to be concerned with providing a 'decent minimum' for life but also with the fair distribution of economic wealth. The wealthy have become wealthier through the influence of powerful groups and the beneficial effects of government policies. A measure of existing inequality is the Gini index, which measures the differences between the incomes of the poorest and richest. An index of zero shows perfect equality and an index of 100 perfect inequality (see the short computational sketch below). Indonesia and Australia are similar, with an index of about 35%, and are ranked 93 and 100 respectively on the inequality index. [21] Oxfam reported on 16 January 2017 that 9 billionaires owned the same wealth as the poorest half of the world, and that since 2015 the richest 1% has owned more wealth than the rest of the planet. [22] In 2017, it was reported that Indonesia's four richest men are worth as much as the poorest 100 million. [23] In Australia the disparity is worse, with Oxfam reporting in 2014 that Australia's richest 1% own as much as the bottom 60%. [24] The greatest inequality is in Africa and the greatest equality is in the Scandinavian countries (26%).

IV. GOOD GOVERNANCE AND REDUCING CORRUPTION

A. Diverting resources

Much has been written and said on how corruption negatively affects human rights. At its simplest, corrupt activities divert resources, which reduces governments' ability to deliver basic social goods [25] such as education, health and welfare, which are the key to economic, social and cultural rights. [26] Corruption also threatens civil and political rights by weakening trust in government and public institutions, and through decisions by public officials which do not promote the public interest [27]. In the legal arena, equality before the law, the right to a fair trial and access to justice are all compromised by corruption affecting the police, prosecutors, lawyers and judges [28]. Corruption in the rule-of-law system weakens the very accountability structures which are responsible for protecting human rights and contributes to a culture of impunity, since illegal actions are not punished and laws are not consistently upheld [29].

B. Tackling corruption: what works?

What is to be done? What will make the difference? What works? There is general agreement that there is no magic solution. What may work in one country may not necessarily work in a different political, economic and cultural environment. There is surprisingly little empirical evidence about what actually works in reducing corruption. The countries that rank very low on the corruption index (New Zealand, Denmark, Sweden and Finland) [30] are characterised by open, accountable and transparent government, and a strong commitment to human rights [31]. It may be a factor that these countries have quite small populations [32].
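Returning to the Gini index described in Section III, the following minimal Python sketch (an illustration added here, using hypothetical income figures rather than data from the cited reports) computes the index from a list of incomes via the mean-absolute-difference formula, on the 0 (perfect equality) to 100 (perfect inequality) scale used in the text.

```python
def gini(incomes):
    """Gini index via mean absolute difference: 0 = perfect equality, 100 = perfect inequality."""
    n = len(incomes)
    mean = sum(incomes) / n
    # Sum of absolute income differences over all ordered pairs
    total_diff = sum(abs(x - y) for x in incomes for y in incomes)
    return 100 * total_diff / (2 * n * n * mean)

# Hypothetical incomes: an equal distribution vs. a highly skewed one
print(gini([10, 10, 10, 10]))   # 0.0  -> perfect equality
print(gini([1, 2, 4, 100]))     # ~70  -> high inequality
```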
The available evidence suggests that public finance management reforms, the strengthening of horizontal accountability mechanisms, and transparency tools such as freedom of information, transparent budgeting and asset declarations can have an impact on controlling corruption [33]. Lessons drawn from successful approaches indicate that there is no silver bullet against corruption, and that contextual factors linked to the local political economy, as well as the legal and institutional framework, are key to the success of anti-corruption interventions [34]. The effectiveness of anti-corruption approaches is usually maximised by a combination of complementary (top-down and bottom-up) approaches, with success driven by the interaction of a number of reforms introduced simultaneously [35].

Some attempts to deal with petty corruption have utilised cashless payment systems. In India payments are made directly into bank accounts accessible by the recipients through mobile phones, preventing skimming by government agents. The Indonesian city of Surakarta now allows online booking for services such as waste collection, and electronic payment using mobile phones. [36] But Nigerian research suggests that this is likely to be of fairly limited value, with its effectiveness largely limited to petty corruption [37]. Yet for the poor this may be more important than establishing new institutions to combat corruption. In relation to mobile payments, this may substitute one set of risks for another; there are also significant issues with access to mobile phones and financial risks arising out of the use of mobile technology. More important for Indonesia is the proposed reform of judicial appointment processes and judicial training to ensure the integrity of the judicial system [38]. These involve measures to ensure merit-based selection of candidates and judicial training provided by the KPK (Corruption Eradication Commission).

A further benchmark is the rule of law index. This index is a practical measure of open government, accountability, lack of corruption, limitations on government powers, fundamental rights, order and security, regulatory enforcement, and civil and criminal justice. Notably the Scandinavian countries are ranked 1, 2 and 4 (Denmark, Norway, Sweden), so the countries with the greatest equality also demonstrate the strongest commitment to the rule of law. [40]

Besides a strong commitment to anti-corruption by political leaders, Finland, Sweden, Denmark and, to a certain extent, New Zealand all share a common set of characteristics that are typically correlated with lower levels of corruption. These include freedom of the press, high GDP per capita, low inequality, literacy, priority given to human rights, and freedom of information. Finland, Denmark, Sweden and New Zealand all have high GDP per capita, low inequality rates, literacy rates close to 100%, and prioritise human rights issues (e.g. gender equality, freedom of information) [41]. One important factor in Australia has been investigative journalism, which has uncovered serious corruption by politicians. The evidence relating to institutional structures such as anti-corruption commissions is that such commissions have not generally been very effective; the KPK may be the exception.

C. Anti-Corruption Commissions

Anti-corruption agencies raise questions of independence, discussed below. Agencies are, however, expensive to administer, involve political risks, and target the higher levels of administration and government. But in relation to Indonesia, despite the risks, the KPK has been fearless in combatting corruption.
Its recent high-profile arrests for corruption [47] send a critical message to government and citizens that it will prosecute those guilty of corruption at the highest levels. The NSW ICAC has not had significant outcomes in terms of successful prosecutions, and it continues to be the subject of costly civil suits testing the limits of its jurisdiction [48]. It has shifted its priorities from investigation to education and prevention. For example, ICAC assisted my university to develop a gift policy. Under that policy all gifts to staff over $AUD25.00 (the price of several boxes of quality chocolates) must be reported to and recorded by the University. In contrast to the Indonesian KPK, [49] ICAC has no power to prosecute cases of corruption. It can only recommend prosecution to the Director of Public Prosecutions (DPP). Indeed, if success is judged by the number of successful prosecutions, ICAC in its early years would not be considered particularly successful. Prosecutions have been few and slow. A key reason for this is that ICAC has been able through its investigative processes to gather large volumes of evidence, much of which may not be admissible in a prosecution; the criminal standard of proof, 'beyond reasonable doubt', is difficult to satisfy. A particular problem relates to the public hearings which ICAC conducts, which are regarded as 'naming and shaming' and lead to serious injury to reputations, with no redress available when no illegal conduct has been proven before the courts.

V. CONCLUSION

The international literature suggests a multi-pronged approach to combatting corruption. The measures must be adapted to the particular context, so what may be effective in one country may not be effective in another. I have argued that in setting priorities for action, priority should be given to putting in place measures which alleviate poverty. These may involve informal mechanisms rather than costly institutional structures, and the emphasis should be on 'good enough governance'. I have also suggested that 'good governance' should include issues of equality rather than just providing the decent minimum for human life.
Case Report: Recurrent Autoimmune Hypoglycemia Induced by Non-Hypoglycemic Medications

We present a case of recurrent autoimmune hypoglycemia induced by non-hypoglycemic agents. We review reported cases of autoimmune hypoglycemia related to non-hypoglycemic agents, and discuss how different detection methods for insulin autoantibodies affect the results obtained. We aim to provide information for clinicians and a warning regarding medication usage. Considering the increasing number of clopidogrel-induced AIH cases and the hypoglycemia-induced increase in the risk of cardiovascular events, we recommend that cardiovascular disease patients being treated with clopidogrel be informed of this rare side effect and that clinicians be vigilant for the possibility of autoimmune hypoglycemia in this patient population.

INTRODUCTION

Autoimmune hypoglycemia (AIH), or insulin autoimmune syndrome (IAS), is a rare condition characterized by recurrent hypoglycemia, hyperinsulinemia, and positive insulin autoantibodies (IAAs). AIH was first reported by Hirata et al. in 1970 and is also called Hirata's disease (1). AIH-associated hypoglycemia has a spontaneous and irregular onset, and varies in severity, duration, and remission rates (2). The underlying etiology is IAA formation triggered by autoimmune diseases, sulfhydryl drugs, or insulin use (2). We report a case of recurrent AIH caused by non-hypoglycemic agents.

CASE DESCRIPTION

A 76-year-old woman presented with a 3-year history of recurrent palpitations, hand tremors, and sweating, with worsening of these symptoms over the past month. The symptoms usually occurred with hunger. During severe episodes, she had abnormal behavior and confusion. Her venous blood glucose levels during the episodes were 1.4-2.8 mmol/L. The symptoms were relieved by eating or intravenous glucose. The patient had been examined at a regional hospital 2 years earlier. A 75-g oral glucose tolerance test and an insulin-C-peptide release test showed an extremely high serum insulin level along with a low blood glucose level, which indicated endogenous hyperinsulinemia (Table 1). However, a qualitative IAA test (immunoblot assay; Blot Biotech, Shenzhen, China) was negative. Tests for the antinuclear antibody profile, immunoglobulins (IgG, IgM, and IgA), and complements (C3 and C4) were negative. The hemoglobin A1c level was 5.7%. The levels of growth hormone, insulin-like growth factor-1, thyroid hormones, reproductive hormones, and cortisol were within their reference ranges. Blood and urine ketones were negative. Enhanced abdominal magnetic resonance imaging and positron-emission tomography-computed tomography showed no significant findings. She had a history of hypertension and coronary heart disease, without a history of thyroid disease, malignant tumor, or diabetes. She had never been exposed to hypoglycemic agents or exogenous insulin, nor had her cohabitants. Since the cause of the hypoglycemia was unclear, she was transferred to another hospital.

At the second hospital, her insulin level was found to be significantly elevated (245 mIU/mL; chemiluminescence method) during hypoglycemia. However, the free insulin concentration detected after polyethylene glycol precipitation was much lower (Table 1). A qualitative IAA test was negative (same kit as above). A diagnosis of AIH with an unclear cause was considered. She was administered 4 mg methylprednisolone tablets three times a day for 1 week, but the hypoglycemia still recurred.
She was then treated with 80 mg methylprednisolone injections daily for 6 days, which provided some relief and considerably decreased the insulin and C-peptide levels. The injections were replaced with 12 mg methylprednisolone tablets, which were gradually tapered and discontinued within approximately 1 month. Follow-up tests revealed a fasting insulin level of 56.72 mIU/mL and a C-peptide level of 3.38 ng/mL. The hypoglycemia stopped after this treatment.

Six months before the current admission, the hypoglycemia recurred, and the insulin and C-peptide levels again increased. In the following months, the patient frequently experienced symptomatic hypoglycemia, with sporadic palpitations, hand tremors, sweating, and unbearable hunger, which were most noticeable at night and before meals and were relieved by eating food. Her peripheral blood glucose levels during hypoglycemia were 2.1-3.4 mmol/L. A detailed medication history revealed that 3 years earlier she had started taking clopidogrel for coronary heart disease and atrial fibrillation, 1 week before the first hypoglycemic symptoms; these tablets were discontinued 1 month later. Nine months ago, three months before the hypoglycemia recurrence, she was treated with meropenem for 1 week due to an infectious fever (timeline shown in Figure 1).

After admission to our hospital, laboratory examinations revealed the following: random blood glucose, 2.4 mmol/L; plasma insulin, >300 mIU/mL (reference 1.9-23 mIU/mL); and C-peptide, 11.6 ng/mL (reference 1.1-4.4 ng/mL). Interestingly, a qualitative immunoblot assay (same kit as above) was still negative for IAAs, but a quantitative chemiluminescence assay (Shenzhen YHLO Biotech, Shenzhen, China) showed a very high IAA titer of 61.8 cutoff index (COI); its results were interpreted as follows: COI < 0.9, non-reactive; COI ≥ 0.9 to < 1.1, indeterminate; and COI ≥ 1.1, reactive. We therefore conducted a further IAA subtype analysis, which identified IgG1 and IgG3 as the main subtypes. In addition, electrochemiluminescence was used to verify this finding, and the result was strongly positive. Localizing studies showed no evidence of insulinoma. Other related examinations showed no obvious abnormalities. Considering the medical history, we diagnosed AIH and treated the patient with oral methylprednisolone starting at 4 mg twice a day; this treatment reduced the frequency of hypoglycemia and gradually lowered the insulin, C-peptide, and IAA levels even as the dosage was reduced. After 4 months of treatment, the insulin and C-peptide levels were normal, and the IAA titer was 3 COI. The patient had no further hypoglycemic episodes.

DISCUSSION

In this report, a non-diabetic patient presented with recurrent autonomic symptoms (palpitations, hand tremors, and sweating), hunger, and neuroglycopenic symptoms (abnormal behavior and confusion). During symptomatic phases, her blood glucose level was <2.8 mmol/L. The symptoms were relieved by eating or glucose infusion, which is consistent with the Whipple triad, confirming hypoglycemia. The plasma insulin and C-peptide levels were significantly increased during hypoglycemia. The insulin/C-peptide molar ratio was >1, indicating endogenous hyperinsulinemia (3). The insulin recovery rate decreased significantly after polyethylene glycol precipitation, suggesting the presence of insulin-antibody complexes (3,4). Although a qualitative IAA test was negative, AIH was still considered.
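The diagnostic arithmetic described above can be made explicit. The sketch below is an illustration, not part of the original report: it converts insulin and C-peptide into molar units to form the insulin/C-peptide molar ratio, computes a post-PEG recovery rate, and applies the YHLO COI cutoffs quoted in the text. The unit-conversion factors are commonly cited laboratory values (exact factors depend on the assay standard), and the numeric inputs are hypothetical illustrative values, not the patient's paired measurements.

```python
INSULIN_PMOL_PER_uIU_ML = 6.0    # commonly cited WHO-standard factor; some labs use ~6.945
CPEPTIDE_PMOL_PER_NG_ML = 331.0  # based on a C-peptide molecular weight of ~3020 Da

def insulin_cpeptide_molar_ratio(insulin_uIU_mL, cpeptide_ng_mL):
    """A ratio > 1 suggests endogenous hyperinsulinemia with antibody-bound insulin."""
    insulin_pmol_L = insulin_uIU_mL * INSULIN_PMOL_PER_uIU_ML
    cpeptide_pmol_L = cpeptide_ng_mL * CPEPTIDE_PMOL_PER_NG_ML
    return insulin_pmol_L / cpeptide_pmol_L

def peg_recovery_percent(free_after_peg, total_before_peg):
    """Low recovery after PEG precipitation suggests insulin-antibody complexes."""
    return 100 * free_after_peg / total_before_peg

def interpret_coi(coi):
    """YHLO chemiluminescence IAA cutoffs as quoted in the text."""
    if coi < 0.9:
        return "non-reactive"
    if coi < 1.1:
        return "indeterminate"
    return "reactive"

# Hypothetical illustrative values
print(insulin_cpeptide_molar_ratio(245, 3.4))  # ~1.31 -> ratio > 1
print(peg_recovery_percent(20, 245))           # ~8%  -> complexes likely
print(interpret_coi(61.8))                     # "reactive"
```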
Glucocorticoid treatment relieved the hypoglycemia and sharply decreased the insulin and C-peptide values, but did not restore normal insulin levels. During the current hospitalization, the patient's insulin level was significantly increased, and a quantitative test showed an elevated IAA titer; the diagnosis of AIH was clear. A detailed review of the patient's history revealed that just 1 week before the first hypoglycemic symptom, she had taken clopidogrel tablets. Although clopidogrel does not contain sulfhydryl groups, its metabolites do (5). Therefore, it was considered to have induced AIH. The cause of the second recurrence was more difficult to determine. The patient had received meropenem for 1 week, 3 months before the recurrence. Although no case of meropenem-induced AIH has been reported to date, its chemical structure is similar to that of imipenem (6), which is known to cause AIH. Moreover, both drugs contain sulfhydryl groups, which can induce hypoglycemia. In addition, after the first course of methylprednisolone, the insulin level decreased but remained high, suggesting that the clopidogrel-induced IAAs were not completely eliminated. Thus, the recurrence might be ascribed to residual IAAs or a meropenem-triggered amplified immune response.

The onset time of drug-induced AIH varies greatly, from days to months and even years after drug exposure (2, 7-10). On average, the onset time is 4-6 weeks (11,12). Many cases of AIH caused by non-hypoglycemic drugs are self-limiting. On average, spontaneous remission occurs within 3-6 months (13). Persistent recurrent hypoglycemia can last for 2.1-21.9 years (14). The key to AIH treatment is cessation of the responsible agent. Another important component is dietary management: small, frequent, low-carbohydrate meals are helpful (15). Medical treatments include acarbose, somatostatin analogues, diazoxide, glucocorticoids, azathioprine, and rituximab (2). Prednisone is the most common choice; the initial dose is 30-60 mg/d, and this is gradually reduced until the IAA test is negative, which can take 2 weeks to 1 year (16). Methylprednisolone and dexamethasone can also be used (17). In cases of refractory hypoglycemia, plasmapheresis may rapidly reduce the IAA titer (18), and refractory AIH has also been successfully treated with rituximab to deplete B lymphocytes (19).

IAA testing is necessary for AIH diagnosis. The qualitative immunoblot assay and the quantitative chemiluminescence assay are commonly used detection methods in clinical practice. In some cases, the results of the two methods may be inconsistent. For our patient, the IAA titer during hypoglycemia was significantly increased according to the chemiluminescence assay but negative according to immunoblotting. This may be explained as follows: (1) The two kits use insulin antigens derived from different sources; the YHLO kit uses recombinant human insulin as the antigen, while the immunoblot assay uses native insulin from a bovine source, which differs somewhat from human insulin (20,21). (2) Both assays detect only the IgG type of IAAs, but the YHLO chemiluminescence kit can detect four IgG subtypes. Thus, patients with certain IgG subtypes may have false-negative results with the immunoblot assay. (3) The different methods use different antigen coatings and reaction systems. In the chemiluminescence method, a recombinant antigen is biotinylated with small-molecule biotin and combined with streptavidin-coated paramagnetic particles.
This method has its own antigen, secondary antibody, substrate, and reaction environment and conditions. The antigen-antibody reaction is more efficient, thereby achieving a better reaction effect and leading to antibody detection. The immunoblot method uses a mixed protein antigen extracted from bovine pancreatic cells, which includes insulin, glutamic acid decarboxylase, and islet cell antigen. In addition, the antigen is directly coated onto a nitrocellulose membrane, so it may be difficult to achieve a sufficient reaction for each detection item (20-22).

AIH is a hyperinsulinemic hypoglycemia associated with insulin antibodies. Its pathogenesis is believed to involve autoimmune deficiency or specific drug-induced IAAs against a susceptible genetic background. The relationship between some non-hypoglycemic drugs and AIH has been confirmed. Most of these drugs or their active metabolites contain sulfhydryl groups. These groups interact with the disulfide bonds of insulin molecules, causing structural changes in insulin, which trigger an immune response and lead to IAA production. Insulin autoantibodies are characterized by low affinity and high binding capacity. As blood glucose levels rise after meals, normally secreted insulin binds to IAAs, impairing insulin utilization (2). When the blood glucose concentration decreases, the insulin-antibody complexes dissociate spontaneously, releasing a large amount of active free insulin and causing hypoglycemia (23).

We searched the PubMed and CNKI databases for related studies published between January 1970 and July 2021, using the key words "insulin autoimmune syndrome, Hirata's disease, and autoimmune hypoglycemia". Cases of AIH associated with non-hypoglycemic agents are listed in Table 2. The top three drugs were methimazole, alpha-lipoic acid, and tiopronin (24). Clopidogrel was the fifth most common drug when our case is added (5,17,18,24-26). Moreover, 11 of the 134 AIH cases with unknown causes were considered to have been caused by clopidogrel tablets, as each of these patients had a history of exposure to clopidogrel or a history of coronary heart disease or coronary or carotid artery stent implantation (19,27-31). Therefore, the incidence of clopidogrel-caused AIH might be much higher than is currently believed. This may be explained by the low awareness of non-hypoglycemic agent-induced AIH.

Clopidogrel plays an important role in the treatment of cardio-cerebrovascular diseases caused by high platelet aggregation. It is the first-line choice for patients with atherosclerotic cardiovascular diseases. Cardiovascular disease patients with hypoglycemia face higher risks of cardiovascular events. Therefore, healthcare providers caring for patients using clopidogrel should be aware of the rare but serious side effect of autoimmune hypoglycemia. If such symptoms occur, timely blood glucose testing and sugar intake are necessary. We suggest that AIH be included as a rare but serious side effect of clopidogrel in clinical medication guides.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by Sir Run Run Shaw Hospital, Zhejiang University School of Medicine. The patients/participants provided their written informed consent to participate in this study.
Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.

AUTHOR CONTRIBUTIONS

QZ: treating the patient and writing the article. HZ: treating the patient and writing the article. WQ: clinical data collection and analysis. FW: clinical data analysis and article review. CQ: testing and analysis of insulin autoantibody subtype. YY: testing and analysis of insulin autoantibody subtype. YK: testing and analysis of insulin autoantibody subtype. FZ: treating the patient and reviewing the article. JZ: treating the patient, writing and reviewing the article. All authors contributed to the article and approved the submitted version.
Exosomes in Angiogenesis and Anti-angiogenic Therapy in Cancers

Angiogenesis is the process through which new blood vessels are formed from pre-existing ones. Exosomes are involved in angiogenesis in cancer progression by transporting numerous pro-angiogenic biomolecules such as vascular endothelial growth factor (VEGF), matrix metalloproteinases (MMPs), and microRNAs. Exosomes promote angiogenesis by suppressing expression of factor-inhibiting hypoxia-inducible factor 1 (HIF-1). Uptake of tumor-derived exosomes (TEX) by normal endothelial cells activates angiogenic signaling pathways in endothelial cells and stimulates new vessel formation. TEX-driven cross-talk of mesenchymal stem cells (MSCs) with immune cells blocks their anti-tumor activity. Effective inhibition of tumor angiogenesis may arrest tumor progression. Bevacizumab, a VEGF-specific antibody, was the first anti-angiogenic agent to enter the clinic. The most important clinical problem associated with cancer therapy using VEGF- or VEGFR-targeting agents is drug resistance. Combined strategies based on angiogenesis inhibitors and immunotherapy effectively enhance therapies in various cancers, but effective treatment requires further research.

Introduction

Angiogenesis and inflammation are processes that play important roles in the development of cancer, from the initiation of carcinogenesis and the carcinoma in situ stage to the advanced stages of cancer [1]. Excessive abnormal angiogenesis has a central role in tumor progression. It is induced by an imbalance between pro- and anti-angiogenic factors, dominated by tissue hypoxia-triggered overproduction of vascular endothelial growth factor (VEGF) [2]. Proliferation and metastatic spread of cancer cells depend on an adequate supply of oxygen and nutrients and the removal of waste products [3]. Tumor cells utilize different strategies to communicate with neighboring tissues to facilitate tumor progression; one of these strategies is the release of exosomes [4,5]. Tumor cells, as well as immune cells, secrete exosomes that affect the activation of immune cells and immune surveillance [6]. Exosomes, harboring various cargoes that can accelerate angiogenesis, play an important role in cancer invasiveness [7]. According to numerous studies, by releasing exosomes, tumor cells are able to promote tumor epithelial-mesenchymal transition, angiogenesis, and immune escape. The first reports of exosomes classified them as structures that enable the removal of unnecessary metabolites from cells. Currently, they are considered important elements of intercellular communication.

Mechanism of Angiogenesis

Cancer cells have a special mechanism for blood vessel development, which involves either incorporation of host blood vessels into the tumor or a unique ability to express the endothelial cell (EC) phenotype and form structures similar to vessels. Pathologic angiogenesis, according to Folkman's postulate [32], is a necessary step for a cancerous tumor to grow beyond a volume of about 1 mm³. At the initial stage of progression, tumor nodules are clusters of about 1 million cells; beyond this size, further growth requires the development of new vasculature (a back-of-envelope check of this figure appears below). The literature suggests that sprouting angiogenesis is caused by hypoxia, whereas intussusceptive angiogenesis relies on hemodynamic factors [44]. The sprouting process is regulated by the balance between proangiogenic agents, including VEGF, and quiescence-inducing factors, such as pericyte contact and VEGF inhibitors [45].
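The figure of roughly 1 million cells per cubic millimeter quoted above follows from simple geometry. The sketch below is a back-of-envelope estimate added here for illustration; it assumes a typical tumor-cell diameter of about 10 µm, which is not stated in the source.

```python
import math

CELL_DIAMETER_UM = 10.0  # assumed typical tumor-cell diameter (µm)

# Volume of one spherical cell: (4/3) * pi * r^3  ->  ~524 µm³
cell_volume_um3 = (4 / 3) * math.pi * (CELL_DIAMETER_UM / 2) ** 3

tumor_volume_um3 = 1e9   # 1 mm³ = 10⁹ µm³

# Ideal packing gives ~2 million cells; imperfect packing and stromal
# content bring the practical figure to the order of 10⁶ cells.
print(tumor_volume_um3 / cell_volume_um3)  # ~1.9e6
```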
At the onset, angiogenic expansion of primary capillary plexuses occurs, which leads to capillary vessel development and then to the formation of a capillary vessel system, together with physiological expansion of the surrounding tissues [46]. This series of molecular and physiologic changes leads to variability in the epithelium. Next, further maturation of vessel structures takes place, meaning their diameter and wall thickness increase in a process called arteriogenesis. Primarily, wall cells multiply and acquire more specialized features such as contractility [47].

The mechanism of angiogenesis is a multistage process in which many growth factors, substances and cell types take part (Figure 2). When cells in a dormant vessel sense angiogenic signals, the surrounding pericytes detach from the vessel's wall and endothelial cells loosen the connections between each other; some transform into "tip cells", localized at the growing end of the vessel, which bear long filopodia [48]. Through these filopodia, the tip cell senses the concentration gradient of proangiogenic agents released by other cells. Proliferation of the stalk cells behind the growing end of the vessel is controlled, inter alia, by NOTCH, WNT, placenta growth factor (PlGF), and fibroblast growth factor (FGF) [49,50]. Tip cells also secrete inhibitory signals into their surroundings and in this way halt uncontrolled migration towards angiogenic signals. Behind the tip cells are endothelial stalk cells with fewer filopodia, which stretch, proliferate, and create the vessel lumen through tight and adherens junctions with neighboring endothelial cells to aid the process of sprouting [51]. To achieve a functional and mature blood vessel, migration as well as proliferation of endothelial cells needs to be stopped. Finally, the recruitment of pericytes and smooth muscle cells must take place. Pericytes build capillary vessel walls and stabilize newly developed vessels [52]. Various signaling pathways are involved in the endothelium/pericyte cross-talk, which promotes endothelial quiescence and new vessel stabilization. The best known are angiopoietin-1 (ANG-1)/Tie2, transforming growth factor (TGF-β)/TGF-R, and ephrinB2/EphB4 [53-55].

However, intussusceptive angiogenesis seems to be characterized by low proliferative potential and poor extracellular matrix degradation. Hemodynamic factors appear to contribute to intussusceptive angiogenesis, and VEGF-A additionally seems to play an important role in shear stress-based splitting of capillaries [56]. This process includes the splitting of vessels through the insertion of tissue pillars. Afterwards, these pillars are surrounded by supporting cells, such as fibroblasts and pericytes, which create extracellular matrix. This event splits part of the vessel, creating two new ones [57].

Moreover, vessel formation in tumors can occur by recruitment of circulating endothelial progenitor cells (EPCs) or bone marrow-derived hematopoietic cells, which may differentiate and accumulate into clusters called blood islands [58]. The angioblasts located at the periphery of the blood islands are precursors of endothelial cells, while those at the center differentiate into hematopoietic cells [59]. Vasculogenesis is initiated by interaction between VEGF and the tumor microenvironment, which may mobilize VEGFR-2+ EPCs in the bone marrow.
To mobilize EPCs and promote neovascularization, tumors also secrete adiponectin, the chemokines C-C motif ligand (CCL)2 and CCL5, and the hypoxia-responsive chemokine SDF-1 [60-62]. Through the vascular mimicry process, aggressively growing tumor cells can form structures similar to vessels without the participation of endothelial cells [63]. To form and stabilize vessels, the endothelial-like tumor cells can secrete heparan sulfate, collagens IV and VI, proteoglycans, tissue transglutaminase antigen 2 and laminin [64]. Increased vascular mimicry has been observed following anti-angiogenic therapy and is considered a marker of poor prognosis in cancer progression [65,66]. It has been observed that tumor endothelial cells can contain somatic mutations similar to those of malignant tumor cells [67]. This process involves trans-differentiation of cancer stem cells into endothelial cells and vascular smooth muscle-like cells [68]. It was found that human glioma stem cells developed vessels with endothelial cells expressing proteins such as VEGFR-2, CD34 and CD144 [69].

Endogenous Regulators of Angiogenesis

Regulators of angiogenesis form a very large and heterogeneous group, including polypeptides, metabolites and hormones that collaborate in order to form new blood vessels [70]. A major factor of angiogenesis under normal conditions and in disease states is vascular endothelial growth factor A (VEGF-A) [71]. It belongs to a gene family that includes VEGF-B, VEGF-C, VEGF-D, VEGF-E and placenta growth factor (PlGF). These factors show different affinities for the tyrosine kinase receptors (VEGFR)-1, -2 and -3 [72].
VEGF-A may bind to VEGFR-2 (found mainly on blood vessel ECs), contributing to the process of angiogenesis, whereas VEGF-C and -D preferentially bind to VEGFR-3, which is predominantly found on lymphatic ECs, resulting in proliferation of lymphatic vessels [71]. VEGF promotes the functionality of cancer stem cells and may initiate tumorigenesis by activation of epithelial-mesenchymal transition (EMT) [73]. This leads to a loss of cell polarity and cytoskeletal changes, which cause an increase in cell motility and a decrease in cell adhesion through loss of E-cadherin and ZO-1 [74]. To allow endothelial cell migration and the formation of capillary sprouts, basal membrane degradation must occur. The matrix metalloproteinases MMP-2 and MMP-9 and urokinase plasminogen activator (uPA) are responsible for this process [75]. Expression of membrane-type matrix metalloproteinases (MT-MMPs) promotes VEGF-mediated cell invasion. VEGF also induces vascular permeability, which may facilitate the escape of tumor cells into the bloodstream and promote distant metastases [76]. Interestingly, the role of PlGF in modulating the angiogenesis process is less clear-cut. PlGF may initiate cross-talk between VEGFR-1 and VEGFR-2 [77], while other studies have reported anti-angiogenic properties [78] (Figure 2).

Proteolysis mediated by plasmin (PLA) is an essential feature of angiogenesis and cell invasion [79]. Whereas antithrombin is a key inhibitor of the coagulation cascade, it may also have an anti-angiogenic function [80]. Other angiogenesis inductors are platelet-derived growth factors (PDGF)-B and -C and fibroblast growth factors (FGF)-1 and -2, which may bind to their respective receptors on blood vessel ECs and induce their proliferation and migration. The PDGF family includes four heparin-binding polypeptide growth factors (A, B, C, and D). PDGF binds and transduces signals through two cell-surface tyrosine kinase receptors, PDGFRα and PDGFRβ [81]. This can lead to the promotion of vessel maturation, recruitment of pericytes and VEGF upregulation. Among all members of the PDGF family, the most noteworthy for its potent angiogenic activity in vivo is the PDGF-B/PDGFRβ axis [82,83]. The fibroblast growth factor (FGF) family has 22 members, and most of them show high affinity for the tyrosine kinase receptors FGFR-1, FGFR-2, FGFR-3 and FGFR-4. FGF expression in tumors is responsible for resistance to anti-angiogenic therapy [84,85]. Both FGF-2, also known as basic FGF (bFGF), and VEGF can initiate angiogenesis by increasing the secretion of MMPs, plasminogen activator and collagenase, responsible for the degradation and rebuilding of the extracellular matrix [86]. A recent study reported that FGF modulates endothelial metabolism through MYC-dependent glycolysis signaling, important for blood and lymphatic vascular development [87].

Another important group of angiogenic factors is the angiopoietins. The angiopoietin family comprises the ligands ANGPT-1, ANGPT-2, and ANGPT-4. Their signaling is mediated by the endothelial receptor tyrosine kinases TIE-1 and the better-known TIE-2. Interestingly, both ANGPT-1 and ANGPT-2 bind to TIE-2, but they have distinct effects [88,89]. ANGPT-1 initiates vessel maturation and stabilizes newly formed vessels via the Akt/survivin pathway [90]. In contrast, ANGPT-2 may induce vessel destabilization, pericyte separation, vessel sprouting and angiogenesis [88].
Increased ANGPT-2 expression has been reported in the tumor-associated vessels of several human cancers in response to hypoxia and VEGF action [91]. The angiopoietin (ANGPT)-TIE system is crucial for the angiogenic switch in tumors and, together with VEGF-A, promotes the initiation of angiogenesis and the maturation of new vessels [92]. On the other hand, there is a wide range of anti-angiogenic factors, such as thrombospondin-1 (TSP1), a large glycoprotein present in the ECM, or endostatin, a proteolytic product of collagen XVIII [43,44] (Figure 3). Other angiogenesis inhibitors are interferon-alpha and -beta and angiostatin, a cleavage product of plasmin [45,93].

The balance between angiogenesis promoters and inhibitors is regulated by different pathways. Hypoxia, cellular nutrient deficiency, hypoglycemia, and metabolic acidosis, among others, are cell-environmental factors contributing to a proangiogenic imbalance, which frequently takes place at the gene level due to oncogene activation or tumor suppressor gene inactivation [7,94]. Tumor cells trigger this imbalance, while inflammatory cells infiltrate the surrounding tissues. This can lead to an angiogenic switch and progression from hyperplasia to a hypervascularised tumor. A study of Rip1Tag2 mice showed the phases of carcinogenesis, from normal cells through hyperplasia and adenoma to highly advanced carcinoma. VEGF-A was shown to be the main regulator of EC proliferation, migration, and vessel formation [95]. Angiogenesis occurred preferentially in mice that overexpressed human VEGF-A165 in pancreatic β-cells, even at an early stage of carcinogenesis [96]. In contrast, inhibiting VEGF-A caused suppression of the angiogenic switch and tumor growth [97].

The creation of new blood vessels, which begins at an early stage of cancer development, is associated with the number of exosomes produced by the cancer. Proangiogenic effects were observed with exosomes originating from cancer cell lines, as well as exosomes isolated from cancer patients' blood samples and from urine collected from urinary bladder cancer patients [98]. It is interesting that gliomas, for example, have richer vasculature compared to other solid tumors.
Both in vitro and in vivo studies have shown that EVs excreted by gliomas contain angiogenic proteins [99,100]. Moreover, other solid tumors, such as pancreatic cancer and breast cancer, create exosomes that induce neovascularization [7]. Aside from solid tumors, it was also shown that exosomes are produced by chronic myelogenous leukemia cells and that they have an impact on blood vessel creation through a direct interaction with ECs [101].

Exosome Uptake by ECs

Exosomes can interact with target cells such as ECs, but also with immune cells, to initiate and facilitate angiogenesis. Uptake of tumor-derived exosomes by normal endothelial cells activates angiogenic signaling pathways in endothelial cells and stimulates new vessel formation [102]. Exosomes can affect T cells through direct receptor-ligand interactions, but in ECs, exosomes usually use the internalization pathway [103]. ECs internalize exosomes produced by cancer cells within 2-4 h. This was confirmed, among others, by studies in which ECs readily captured PKH26-labeled exosomes during the first 4 h [104]. Immediately after internalization, exosomes are directed to the perinuclear zone. When tubules are formed in vitro, exosomes move to the periphery of the cell and enter advancing pseudopods. After complete remodeling, adjacent ECs probably transport exosomes to other ECs and to other cells in the tumor microenvironment (TME) through nanoparticle structures [105].

History of Anti-Angiogenic Therapy

Since pre-clinical studies showed that tumors induce the sprouting of new vessels from the surrounding vasculature, there was great optimism that inhibition of pro-angiogenic growth factors would represent an effective anti-angiogenic therapy for most tumor types [106]. Further studies in animal models established that the vascular supply can be suppressed by inhibition of VEGF [107]. Based on successful randomized trials, anti-VEGF therapeutics have entered clinical practice for the therapy of cancer. Bevacizumab, a VEGF-specific antibody, was the first anti-angiogenic agent to enter the clinic, and is currently approved for use in colorectal and lung cancer treatment [108]. So far, several VEGF blockers have been approved for clinical use in cancer [109]. In addition, several multi-targeted tyrosine kinase inhibitors (TKIs), which block the signaling of pathways such as VEGF, have been approved, including sorafenib, sunitinib, pazopanib, and vandetanib [110]. Among the currently available anti-angiogenic drugs, bevacizumab, sunitinib, pazopanib, endostar, regorafenib, axitinib, sorafenib, ranibizumab, and aflibercept are the most used in the treatment of various cancer types [38].

Classification of Angiogenesis Inhibitors

Angiogenesis inhibitors are classified into direct and indirect agents. Direct endogenous inhibitors (endostatin, arresten, and tumstatin) target vascular ECs but, unfortunately, phase II and III clinical trials did not show significant effects in patients [109,110]. Indirect angiogenesis inhibitors (AIs) target tumor cells or tumor-associated stromal cells and prevent the expression of pro-angiogenic factors or block their activity [109]. To develop anti-angiogenic agents, four main strategies are applied: the inhibition of endogenous factors promoting blood vessel formation, the identification and application of natural angiogenesis inhibitors, the inhibition of molecules promoting the invasion of surrounding tissue through tumor blood vessels, and the incapacitation of actively proliferating endothelial cells [38].
As a result, the last decade has given rise to many anti-angiogenic agents developed for cancer treatment, with at least eighty drugs being investigated in preclinical studies and phase I-III clinical trials. Despite promising preclinical results, anti-angiogenic monotherapies offer only modest clinical benefits. The most important clinical problem associated with cancer therapy using VEGF- or VEGFR-targeting agents is drug resistance, arising from clonal expansion or subclonal evolution of tumors with the upregulation of other angiogenic factors [111]. VEGF-dependent alterations, non-VEGF pathways and stromal cell interactions are mechanisms of resistance [112]. Because the recent literature highlights the variability of patient and tumor responses to anti-angiogenic drugs, predictive in vitro models that can recapitulate the drug response have been used for a personalized medicine approach [113]. In advanced cancers, the tumor develops escape strategies and quickly overcomes the inhibition of angiogenic pathways. Because of these limitations, it is crucial to identify biomarkers that are able to predict responses and prognoses related to anti-angiogenic treatment. Over recent years, extracellular vesicle involvement in tumor progression and resistance has been thoroughly considered [114]. In the next section, we focus on the interaction between exosomes and anti-angiogenic resistance in cancer cells.

Prostate Cancer

Angiogenesis in prostate cancer was initially associated with promising findings in early studies, but phase III clinical trials, mainly conducted after 2010, have offered disappointing results thus far [115]. Most anti-angiogenic clinical studies in prostate cancer have targeted VEGF-A, because it was found to be overexpressed in prostate cancer and associated with poor prognosis and metastasis [115]. Some randomized phase II trials of bevacizumab, which involved patients with hormone-sensitive prostate cancer, showed improved relapse-free survival when bevacizumab was used alongside hormone-deprivation therapy [116]. In phase III, when bevacizumab was used together with docetaxel chemotherapy and prednisone hormonal therapy, some improvement in progression-free survival was observed, yet it caused no significant changes in the overall survival of metastatic, castration-resistant prostate cancer patients [117]. Furthermore, bevacizumab is associated with increased toxicity and a greater incidence of treatment-related deaths [117]. This suggests that in hormone-resistant refractory tumors, in which conventional treatment options are particularly prone to failure, adding bevacizumab treatment does not offer any clinical benefit. Even so, bevacizumab has some positive effects, especially in hormone-sensitive recurrent prostate cancer [117]. To summarize, these findings suggest that anti-angiogenic therapy has no clinical benefit when added to chemotherapy or hormonal therapy in refractory, castration-resistant prostate cancer [115]. Further possible treatment options, including direct targeting of VEGFR-2 expression, indirect inhibition of angiogenesis, and targeting the interplay between tumor or stromal cells and angiogenesis, have been evaluated. Lu et al. suggested that anti-VEGFR-2-AF is a promising therapeutic antibody for prostate cancer treatment, inhibiting both angiogenesis, through vascular endothelial cells, and tumorigenesis, through VEGFR-2-expressing tumor cells [118].
The therapeutic efficacy of anti-VEGFR-2-AF is currently under study in preclinical trials using solid and liquid xenograft mouse models [118].

Hepatocellular Carcinoma

Advanced hepatocellular carcinoma (HCC) has limited treatment options; overall survival can be improved, but no cure has been found [119]. Given the highly vascular nature of HCC, anti-angiogenic therapy is currently the recommended therapy for advanced-stage disease [120]. Sorafenib and lenvatinib are used as first-line treatments for advanced unresectable HCC [119]; however, both have numerous side effects [119]. Regorafenib and cabozantinib, both tyrosine kinase inhibitors, are the only anti-angiogenic drugs shown to be advantageous as second-line therapy in patients progressing on sorafenib. Regorafenib and cabozantinib show statistically significant improvements compared to placebo in the overall survival and progression-free survival of patients [121,122].

Melanoma

Melanoma is among the cancer types where anti-angiogenic therapy has been disappointing, showing no overall survival benefit in phase III trials [123]. Nevertheless, pre-clinical and clinical trials are being conducted to examine the effects of various anti-angiogenic experimental therapies [124]. Although most studies focus on VEGF signaling inhibition, others are aimed at determining the effect of multikinase inhibitors or the inhibition of angiogenic integrin activity [125].

Ovarian Cancer

The anti-angiogenic agent bevacizumab, given concomitantly with combination chemotherapy followed by maintenance therapy, is considered the standard of care in patients with advanced ovarian cancer. It is given as first-line therapy and in those with platinum-sensitive recurrent ovarian cancer [126].

Colorectal Cancer

The choice of second-line regimen in metastatic colorectal cancer depends greatly on the systemic therapies given as first-line treatment. Anti-angiogenic agents (e.g., bevacizumab, ramucirumab and aflibercept) are indicated for most patients. Epidermal growth factor receptor (EGFR) inhibitors do not improve survival in a second-line setting [127]. Recently, a number of new orally available multiple receptor tyrosine kinase inhibitors have been tested in late-stage clinical trials, with modest efficacy [128].

Breast Cancer

Although the scientific rationale for anti-angiogenics appears to be well supported, studies so far have not demonstrated clinically significant benefits of adding these therapeutic agents in breast cancer [129]. Studies conducted with anti-angiogenic agents have not yet displayed clinically significant benefits as monotherapy, or in combination with chemotherapy, endocrine treatment, or maintenance therapy, whether in the metastatic or early setting [129]. Although small improvements in complete pathologic response and progression-free survival have been shown with bevacizumab, this did not translate into improved long-term outcomes such as disease-free survival and overall survival [129].

Lung Cancer

The treatment of advanced non-small cell lung cancer includes chemoimmunotherapy or targeted therapy with TKIs. When the median overall survival was thought to be less than one year, the addition of anti-angiogenics to chemotherapy resulted in modest increases in survival. More recently, the use of anti-angiogenics has fallen out of favor with the advent of checkpoint inhibitors, which have shown durable long-term responses never previously observed [130].
Pancreatic Cancer

For pancreatic cancer, multiple clinical trials of anti-angiogenic agents have been carried out, yet the results are overwhelmingly disappointing [131]. Pre-clinical studies suggested that VEGF is a therapeutic target in pancreatic cancer and offered promising results. However, phase III trials of gemcitabine plus anti-angiogenic therapy with bevacizumab or axitinib (a VEGFR inhibitor) failed to reach the primary endpoint of overall survival [132]. Although improved progression-free survival was observed in a few clinical trials, to date none have shown significant prolongation of overall survival for pancreatic cancer patients [131].

Glioblastoma

Since its approval, bevacizumab has been used as a second-line therapy in glioblastoma multiforme [133]. Bevacizumab may be beneficial in prolonging progression-free survival, but the routine addition of bevacizumab to standard therapy for newly diagnosed glioblastoma is not recommended in clinical practice [134].

Tumor-Derived Exosomes in Angiogenesis

One of the factors determining the development of cancer and its progression is a sufficient supply of oxygen and nutrients [151]. At the initial stage of tumor development, the blood vessels in the tumor microenvironment are quite poorly organized. Disturbance of vascular homeostasis under conditions of dominant proangiogenic signaling activates angiogenically inactive clusters of cancer cells. This mechanism is referred to as the "angiogenic switch". In the TME, this state is achieved by transferring angiogenic factors from cancer cells to endothelial cells [152]. As a result, the tumor is vascularized through: (1) the formation of new blood vessels in the tumor structure using circulating progenitor cells, bone marrow-derived hematopoietic cells (vasculogenesis) and cancer cells (vasculogenic mimicry and transdifferentiation of cancer cells); or (2) co-optation of existing blood vessels (sprouting angiogenesis and intussusceptive angiogenesis) [42]. In turn, intensification of angiogenesis stimulates tumor growth and metastasis [70,153].

Exosomes, as transporters of numerous biomolecules mediating communication between different types of cells, play an important role in the process of angiogenesis. According to ExoCarta data, 1116 types of lipids, 9796 subsets of proteins and 6246 types of mRNA have been identified in exosomes so far. By delivering numerous pro- and anti-angiogenic factors such as mRNA, miRNA and proteins, exosomes reprogram recipient cells by introducing changes in their functional profile [154]. One of the first reports on the role of TEX in angiogenesis concerned research on glioblastoma and colon cancer. Exosomes were shown to be intensively absorbed by vessel cells, thereby promoting angiogenesis [154,155]. The question is: which factors present in exosomes determine tumor angiogenesis? Numerous proteins transported by TEX are involved in remodeling the tumor microenvironment and inducing angiogenesis (Figure 4). The molecular and genetic cargo of TEX is responsible for phenotypic and functional reprogramming of endothelial cells and other normal cells residing in the TME [98,156]. It was shown that TEX-driven cross-talk of MSCs with immune cells blocks their anti-tumor activity and/or converts them into suppressor cells [157].
In exosomes derived from glioblastoma cells, seven proangiogenic proteins were found: angiogenin, VEGF, fibroblast growth factor (FGF), interleukin-6, interleukin-8, and tissue inhibitors of metalloproteinases 1 and 2 (TIMP-1 and TIMP-2). These proteins are involved in angiogenesis and the increased malignancy of this tumor [154]. The presence of pro-angiogenic VEGF and IL-6 has also been found in melanoma-derived exosomes [158]. In turn, CD147-positive exosomes derived from epithelial ovarian cancer cells can promote the angiogenic phenotype in endothelial cells in vitro [159]. Lang et al. showed that gliomas can induce angiogenesis by secreting linc-POU3F3-rich exosomes [160]. Angiogenesis is also promoted by TEX enriched in matrix metalloproteinases, especially MMP-2, MMP-9 and MMP-13. These proteins have been found in glioblastoma-, melanoma-, myeloma- and nasopharyngeal carcinoma-derived exosomes [154,158,161-164]. Exosomes isolated from the peritoneal fluid of patients with colorectal cancer and from pancreatic adenocarcinoma cell cultures contain significant amounts of tetraspanin 8 (Tspan8), which promotes cancer metastasis and angiogenesis [165,166]. Another proangiogenic protein is annexin II (Anx II), whose presence has been demonstrated in exosomes derived from breast cancer cells. This protein, acting as a co-receptor for tissue plasminogen activator (tPA), plays an important role in tumor neoangiogenesis [167]. TEX are also RNA carriers. Lang et al. showed that the intergenic non-coding RNA POU3F3 (linc-POU3F3) present in glioma cell-derived exosomes is involved in the process of neoangiogenesis [168].

The angiogenic potential of TEX depends on the conditions under which they are secreted. Studies on the effects of esophageal squamous cell carcinoma (ESCC) exosomes on the ability of human umbilical vein endothelial cells (HUVECs) to form vessels showed that HUVECs cultured with exosomes secreted under low-oxygen conditions had a better ability to form vessels compared to those cultured with normal exosomes [169]. Exosomes secreted by cancer cells under hypoxic conditions are enriched in microRNAs such as miR-135b, miR-210, miR-21, miR-30b, miR-30c, and miR-424. These exosomes promote angiogenesis by suppressing expression of factor-inhibiting HIF-1, promoting hypoxic signaling, or upregulating the expression of proangiogenic factors [170]. Exosomal miR-23a derived from hypoxic lung cancer cells also stimulates angiogenesis by targeting prolyl hydroxylase and the tight junction protein ZO-1 [171]. The involvement of miR-23a in angiogenesis has also been demonstrated in hypoxic HCC [172]. It has been shown that exosomes containing miR-155 secreted by gastric cancer (GC) cells significantly increase the rate of tumor angiogenesis by enhancing the expression of VEGF [173]. Skin cancer-derived exosomes and melanoma-derived exosomes can promote angiogenesis by delivering miR-9 to endothelial cells and activating the JAK-STAT pathway [174,175]. Research by Yang et al. showed that exosomes secreted by miR-130-rich gastric cancer cells inhibited c-myb gene expression in vascular endothelial cells, promoting angiogenesis and tumor growth [176]. In turn, research by Zhang et al.
showed that exosomes transporting hepatocyte growth factor siRNA (HGF siRNA) exerted an inhibitory effect on angiogenesis and tumor growth in gastric cancer [177].

The Role of Exosomes in Resistance to Anti-Angiogenic Therapies

Despite the fact that anti-angiogenesis therapies may prolong progression-free survival (PFS), they have a limited impact on overall survival (OS) and do not constitute a permanent cure in renal cell carcinoma (RCC), colorectal cancer (CRC), or breast cancer (BC) [178][179][180][181]. This lack of clinical benefit can be attributed to preexisting resistance or rapid adaptation to anti-angiogenic agents. Multiple resistance mechanisms against AIs exist, including the upregulation of alternative angiogenic factors by tumor cells, the involvement of stromal cells, and vessel co-option/mimicry. Sunitinib, a multi-targeted receptor tyrosine kinase inhibitor, is one of the first-line agents for patients with advanced RCC. However, intrinsic or acquired resistance to sunitinib has become a major issue for treatment [182]. In renal cell carcinoma, Qu et al. identified lncARSR as a mediator of sunitinib resistance: by acting as a competing endogenous RNA for miR-34 and miR-449, it increases the expression of its targets AXL and c-MET. They also found that lncARSR could be secreted from resistant cells via exosomes, transforming sunitinib-sensitive cells into resistant cells and thereby disseminating drug resistance [183]. RAB27B is a leading protein involved in exosome secretion [184], and its oncogenic effects have been reported in several cancers [185][186][187]. RNA sequencing and pathway analysis have suggested that the oncogenic effects of RAB27B could be associated with the mitogen-activated protein kinase (MAPK) and VEGF signaling pathways [188]. These results indicate that RAB27B is a prognostic marker and a novel therapeutic target in sunitinib-sensitive and -resistant RCCs [188]. Placental growth factor (PlGF) is a member of the VEGF subfamily that binds to VEGFR-1 and its co-receptors, NRP-1 and NRP-2. PlGF/VEGFR-1 signaling activates the downstream PI3K/Akt and p38 MAPK pathways independently of VEGF-A signaling [189,190]. Anaplastic lymphoma kinase (ALK), carried in exosomes secreted by BRAF inhibitor-resistant melanoma cells, has recently been demonstrated to transfer drug resistance by activating the MAPK signaling pathway in recipient cells [191]. Furthermore, in pancreatic cancer, EVs released upon RAB27B upregulation have been shown to activate p38 MAPK [192]. Alterations in EVs produced by glioblastoma cells following bevacizumab treatment were described by Simon and colleagues [193]. Interestingly, bevacizumab, which is able to neutralize glioblastoma cell-derived VEGF-A, was found to be directly captured by glioblastoma cells and sorted at the surface of the respective EVs. Treatment with bevacizumab induced changes in the proteomic content of EVs that are associated with tumor progression and therapeutic resistance. Accordingly, inhibiting EV production in glioblastoma cells improved the anti-tumor effect of bevacizumab. Together, these data point to a potential new mechanism of glioblastoma escape from bevacizumab activity [193].
Moreover, cetuximab, an EGF-R monoclonal IgG1 antibody, has been observed to be associated with EVs derived from treated cancer cells, suggesting that such processes could be implicated in the limited response of tumors to therapy [194].

Exosomes as Drug Carriers of Anti-Cancer Therapy

Exosomes have recently emerged as possible natural carriers of therapeutic agents for cancer therapy [195]. Exosomes are formed by a lipid bilayer delimiting an aqueous core, which allows the delivery of both hydrophobic and hydrophilic drugs and thus increases their versatility [196]. Although many body cells produce exosomes, mesenchymal stem cells (MSCs) are among the most prolific producers and are therefore well suited to mass production of exosomes for drug delivery [197]. Clinical trials of MSC-derived exosomes that are currently underway focus on gene delivery, regenerative medicine, and immunomodulation [198]. MSC-derived exosomes are believed to have intrinsic homing capabilities similar to those of MSCs, allowing them to penetrate the tumor site in cancer treatment [199]. Similarly, MSC-derived exosomes could potentially target hypoxic regions, as hypoxia is a potent mediator that directs exosome migration [200]. Hypoxia studies have shown that hypoxic cancer cells avidly take up exosomes produced under hypoxic conditions [201]. It has been shown that miRNA-enclosed exosomes derived from cancer cells can interact with endothelial cells and thereby stimulate endothelial cell proliferation, migration and tube formation [202]. Such miRNAs may therefore be valuable as novel targets for the treatment of carcinoma. A Chinese group found that exosomal miR-9 derived from nasopharyngeal carcinoma cells inhibits angiogenesis by targeting Midkine (a heparin-binding growth factor) and regulating the PDK/AKT pathway, suggesting that miR-9 and Midkine may be valuable as novel targets for the treatment of human nasopharyngeal carcinoma [203]. Circulating microRNAs (miRNAs) in exosomes are used as functional biomarkers for diagnostics and prognostics, while synthetic miRNAs in exosomes could be applicable for therapeutics. As increased PD-L1 expression was observed after anti-angiogenic treatment, Allen et al. treated refractory pancreatic, breast and brain tumor mouse models with combined therapy using PD-1/PD-L1 pathway blockers and anti-angiogenic agents to increase the efficacy of anti-angiogenic therapy based on VEGF and VEGFR-2 inhibition. They showed that anti-PD-1 therapy sensitized tumors to the anti-angiogenic therapy and prolonged its efficacy in pancreatic and breast cancer models. Moreover, the anti-angiogenic therapy improved anti-PD-L1 treatment, especially through increased cytotoxic T cell infiltration [204]. It has been shown that chemotherapy stimulates the secretion of exosomes and alters exosome composition. Exosomes secreted during therapy can be transferred to both tumor and host cells, altering their behavior and enhancing tumor survival and progression [205]. On the other hand, cancer cell-derived EVs may be used as effective carriers of drugs such as paclitaxel, increasing their cytotoxicity [206]. Nanomedicine based on the enhanced permeability and retention (EPR) effect, which depends on tumor blood flow, is a promising strategy for successful anticancer therapy [207,208]. However, tumor blood flow is frequently obstructed as tumor size increases, so advanced large tumors show heterogeneity in the EPR effect.
Accordingly, it would be very important to apply enhancers of the EPR effect in the clinical setting to make the EPR effect more uniform [209,210].

Conclusions and Outlook

Angiogenesis is controlled by various angiogenic and anti-angiogenic factors, which are carried by exosomes. An imbalance between these factors leads to dysregulation of angiogenesis during tumor development. Effective inhibition of tumor angiogenesis might arrest tumor progression. Combined strategies based on angiogenesis inhibitors and immunotherapy effectively enhance the benefits of therapy in cancers [211]. Angiogenesis-targeted therapy of cancer is considered a promising strategy for the management of cancer progression, but effective treatment requires further research. An increased understanding of the cross-talk between tumor cells, endothelial cells, and immune cells during immune checkpoint blockade therapy may lead to new combinatorial treatment regimens [212]. Control of exosome composition may serve as an effective strategy to augment the long-term efficacy of anti-angiogenic therapies for tumors. Given the molecular complexity of angiogenesis, a better understanding of how exosomes participate in this process represents an important challenge, which can open new paths for the development of novel and effective anti-angiogenic drugs. To clarify the molecular mechanisms by which exosomal miRNAs, mRNAs, and proteins inhibit angiogenesis in endothelial cells and transmit drug resistance, in silico analyses should be performed to predict possible miRNA, mRNA, and protein targets using database resources. Most importantly, the clinical relevance of exosomal molecules in cancer patients awaits further validation in larger sample sizes.
COVID-19 outbreak on the Diamond Princess cruise ship: estimating the epidemic potential and effectiveness of public health countermeasures

Abstract

Background: Cruise ships carry a large number of people in confined spaces with relatively homogeneous mixing. On 3 February 2020, an outbreak of COVID-19 on the cruise ship Diamond Princess was reported with 10 initial cases, following an index case on board around 21-25 January. By 4 February, public health measures such as removal and isolation of ill passengers and quarantine of non-ill passengers were implemented. By 20 February, 619 of 3700 passengers and crew (17%) had tested positive.

Methods: We estimated the basic reproduction number from the initial period of the outbreak using SEIR models. We calibrated the models with transient functions of countermeasures to incidence data. We additionally estimated a counterfactual scenario in the absence of countermeasures, and established a model stratified by crew and guests to study the impact of differential contact rates among the groups. We also compared scenarios of an earlier versus later evacuation of the ship.

Results: The basic reproduction rate was initially 4 times higher on board compared to the R0 in the epicentre in Wuhan, but the countermeasures lowered it substantially. Based on the modeled initial R0 of 14.8, we estimated that without any interventions within the time period of 21 January to 19 February, 2920 out of the 3700 (79%) would have been infected. Isolation and quarantine therefore prevented 2307 cases, and lowered the effective reproduction number to 1.78. We showed that an early evacuation of all passengers on 3 February would have been associated with 76 infected persons in their incubation time.

Conclusions: The cruise ship conditions clearly amplified an already highly transmissible disease. The public health measures prevented more than 2000 additional cases compared to no interventions. However, evacuating all passengers and crew early on in the outbreak would have prevented many more passengers and crew from infection.

Introduction

Cruise ships carry a large number of people in confined spaces with relatively homogeneous mixing over a period of time that is longer than for any other mode of transportation. 1 Thus, cruise ships present a unique environment for the transmission of human-to-human transmitted infections. The incidence of acute respiratory infections (ARI) in passengers is significantly associated with season, destination and duration of travel. 2 In February 2012, an outbreak of respiratory illness occurred on a cruise ship off Brazil, resulting in 16 hospitalizations due to severe ARI and one death.
3 In May 2009, a dual outbreak of pandemic (H1N1) 2009 and influenza A (H3N2) occurred on a cruise ship: of 1970 passengers and 734 crew members, 82 (3.0%) were infected with pandemic (H1N1) 2009 virus, and 98 (3.6%) with influenza A (H3N2) virus. 4 Four subsequent cases were epidemiologically linked to passengers, but no evidence of sustained transmission to the community or to passengers on the next cruise was reported. 4 In September 2000, an outbreak of influenza-like illness was reported on a cruise ship sailing off the Australian coast with over 1100 passengers and 400 crew on board, coinciding with the peak influenza period in Sydney. 5 Morbidity on the cruise was high, with 40 passengers hospitalized, two of whom died. A total of 310 passengers (37%) reported suffering from an influenza-like illness. In December 2019, a novel coronavirus, SARS-CoV-2, emerged in Wuhan, China, and rapidly spread within China and then to various global cities with high interconnectivity with China. 6,7 The resulting ARI due to this coronavirus, a disease now coined COVID-19, is thought to be mainly transmitted by respiratory droplets from infected people. The mean serial interval of COVID-19 is 7.5 days (95% CI, 5.3 to 19), and the initial estimate for the basic reproductive number R0 was 2.2 (95% CI, 1.4 to 3.9), 8 although higher R0 values have since been reported, with a mean of more than 3. 9 On 18 February 2020, China's CDC published data on the first 72 314 cases, including 44 672 confirmed cases. 10 About 80% of the confirmed cases were reported to be mild disease or less severe forms of pneumonia, 13.8% severe and 4.7% critically ill. Risk factors for severe disease outcomes are older age and co-morbidities. The progression to acute respiratory distress syndrome occurs approximately 8-12 days after onset of first symptoms, with lung abnormalities on chest CT showing greatest severity approximately 10 days after initial onset of symptoms. 11-13,14 Evidence is mounting that mildly symptomatic or even asymptomatic cases can also transmit the disease. 15,16 On 3 February 2020, an outbreak of COVID-19 was reported on the cruise ship Diamond Princess off the Japanese coast, with initially 10 persons confirmed to be infected with the virus. The number has since ballooned into the largest coronavirus outbreak outside of mainland China. By 19 February, 619 of 3700 passengers and crew (17%) had tested positive. By the end of February, six persons had died. The outbreak was traced to a Hong Kong passenger who embarked on 21 January and disembarked on 25 January. After docking near New Taipei City, the ship arrived in Yokohama, Japan, on 31 January. By the following day, the Japanese health ministry ordered a 14-day quarantine for everyone on board, and Japan rushed to close its ports to all other cruise ships. According to news reports and the media, the public health measures taken were the removal of all PCR-positive passengers and crew from the ship and their isolation in Japanese hospitals. The remaining test-negative passengers and crew remained on board. Passengers were quarantined in their cruise ship cabins and only allowed out of the cabin for one hour per day. By 20 February, the decision to evacuate was made and more than 3000 passengers left the ship. Most were air-evacuated by their respective countries.
10 The cruise ship, with a COVID-19 index case on board between 21 and 25 January, serves as a good model to study the potential of the disease to spread in a population that is more homogeneously mixed, compared to the more spatially variable situation in Wuhan. We set out to study the empirical data of confirmed COVID-19 infections on the cruise ship Diamond Princess, to estimate the basic reproduction number (R0) under cruise ship conditions and the effectiveness of the quarantine and removal interventions, and to compare scenarios of an earlier and later evacuation of the ship.

Methods

We used data on confirmed cases on the cruise ship, as published on a daily basis by public sources, 17,18 to calibrate a model and estimate the basic reproduction number R0 from the time sequence and amplitude of the observed case rates. COVID-19 is thought to have been introduced by an index case from Hong Kong visiting the ship between 21 and 25 January 2020. We thus used the date of 21 January 2020 as the first time point, t = 0, assuming the index case was infectious from the first day on the ship. The estimates of R0 and the associated COVID-19 incidence on the cruise ship were derived using a compartmental model estimating the dynamics of the number of susceptible (S), exposed (E), infected (I), and recovered (R) individuals, adapted and modified from a published COVID-19 study. 19 We analyzed two instances of the model assuming, respectively: (1) a homogeneous population (3700 individuals), and (2) a stratified population of crew (1000 individuals) and guests (2700 individuals). The model used a relationship between the daily reproductive number, β, and R0 to infer the transmissibility and contact rate across the whole cruise ship population:

R0 = β · i,

where the infectious period, i, equals one over the recovery rate (γ), i.e., i = 1/γ. In the homogeneous model, the infectious period, i, of COVID-19 was set to 10 days based on previous findings, 8 corresponding to the situation of no removal (i.e., no ill persons taken off the ship to be isolated in a Japanese hospital). The incubation period (or latent period), l, was estimated to be approximately 5 days (ranging from 2 to 14 days). 20 In order to model the removal/isolation and quarantine interventions, we implemented time-dependent removal and contact rates as described in Table 1. We performed an additional sensitivity analysis reducing R0 to 3.7, an estimate of the average value across mainland China studies of COVID-19. 9 We further estimated a counterfactual scenario of the infection dynamics assuming no interventions were implemented, in particular no removal and subsequent isolation of ill persons. We assumed an infectious period of 10 days, with a contact rate remaining the same as in the initial phase of the outbreak. Additionally, in the stratified model of crew and guests, the contact rates were assumed to differ, on the assumption that crew could not be easily quarantined, as they had to continue their services on board for all the passengers and possibly had more homogeneous mixing with all the passengers, whereas passengers may mix more within their preferred circles and areas. We kept the transient change in the contact rate and the removal of all PCR-confirmed patients starting from 2 February and 5 February, respectively, as in the first model. Parameters are described in Table 1.
The model describing a homogeneous population onboard can be described by

dS/dt = −β(t) S I / N,
dE/dt = β(t) S I / N − E / l,
dI/dt = E / l − γ(t) I,
dR/dt = γ(t) I,

where S denotes all susceptible people on the cruise ship, E all exposed, I all infected and R all recovered or removed, N = S + E + I + R denotes the whole population, and β(t) and γ(t) are the time-dependent contact and removal rates of Table 1. The model describing a stratified population onboard follows the same structure with group-specific compartments and transmission rates,

dS_g/dt = −(β_gg I_g + β_cg I_c) S_g / N,
dE_g/dt = (β_gg I_g + β_cg I_c) S_g / N − E_g / l,
dI_g/dt = E_g / l − γ(t) I_g,
dR_g/dt = γ(t) I_g,

and analogously for the crew compartments, where S denotes susceptible, E exposed, I infected and R recovered or removed individuals, N = S + E + I + R, the subscripts g and c indicate guest and crew respectively, and β_xy is the rate at which infectious individuals of group x expose susceptibles of group y. Overall, we assume mortality is negligible. Models with interventions were calibrated to reports of total infection occurrence, while models simulating the counterfactual scenarios were left with the naïve parameter settings (no countermeasures). The net effects of the countermeasures were estimated as the difference between the counterfactual scenario and the model with the interventions. Model parameters are described in Table 1. The effectiveness of the countermeasures was estimated by calibration of the model to data. We also present estimations of the plausible consequences of a hypothetical third intervention strategy, whereby all individuals onboard would have been evacuated on either 3 February or 19 February. We estimated the number of latent cases for an evacuation on 3 February and on 19 February 2020.

Results

Using the SEIR model assuming relatively homogeneous mixing of all people onboard, we calibrated the predicted cumulative number of infections from the model to the observed cumulative number of infections among all people onboard and estimated the initial R0 to be 14.8. This translates into an estimate of β (the daily reproduction rate) of 1.48. To derive this estimate, we calibrated functions describing transient changes in β as a result of changes in the contact rate and the removal of symptomatic infections. The parameter values of contact rate, quarantine interventions and removal presented in Table 1 are the results of the calibration to the observed cumulative incidence data. The contact rate between persons on the cruise ship was calibrated to give the best fit to data, with a reduction of 70% by the quarantine countermeasure with onset on 3 February 2020. The transient function of removal and isolation of infected cases, with onset on 5 February 2020, reduced the infectious period from 10 to 4 days and substantially reduced the transmission and subsequent infections on the ship. In Figure 1 we present the change in R0 based on the relationship between R0 and β and how it is affected by the transient countermeasures of quarantine and removal of ill patients in the model. Here R0 should be interpreted as the basic reproductive rate in a totally naïve population on the Diamond Princess (i.e., with the same contact rate), and not the actual basic reproductive number over time on the cruise ship. R0 was 14.8 initially, and Rt then declined to a stable 1.78 after the quarantine and removal interventions were initiated (Figure 1). The predicted cumulative number of cases over time from this model described the observed cases well, but overestimated the cumulative case incidence rate initially (Figure 2). This allowed us to compensate for reporting bias in the initial phase, given that testing of passengers was patchy early on, while at the end of the study (19 February 2020) testing had a higher coverage and was more complete.
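For illustration, the homogeneous model and its countermeasures can be integrated numerically in a few lines of Python. This is a minimal sketch that assumes abrupt step-function interventions at the calibrated onset dates; the paper's smooth transient functions (Table 1) are not reproduced here, so the printed numbers approximate rather than reproduce the published fits. The evacuation readouts simply report the exposed compartment E(t) on the corresponding day.

# Minimal Euler-integration sketch of the homogeneous SEIR model above.
# Parameter values follow the calibration reported in the text; the abrupt
# step form of the intervention functions is a simplifying assumption.

N = 3700                     # passengers and crew on board
BETA0 = 1.48                 # initial daily reproduction rate (R0 = 14.8)
LATENT = 5.0                 # incubation (latent) period, days
I_FULL, I_CUT = 10.0, 4.0    # infectious period without / with removal of cases
T_QUARANTINE, T_REMOVAL = 13.0, 15.0   # 3 Feb and 5 Feb (t = 0 is 21 Jan)

def simulate(days=29.0, dt=0.05, countermeasures=True):
    S, E, I, R = N - 1.0, 0.0, 1.0, 0.0    # one infectious index case at t = 0
    latent_at_evacuation = {}
    for k in range(int(days / dt) + 1):
        t = k * dt
        for day in (13.0, 29.0):           # 3 Feb / 19 Feb evacuation readouts
            if abs(t - day) < dt / 2:
                latent_at_evacuation[day] = E
        beta = BETA0 * (0.3 if countermeasures and t >= T_QUARANTINE else 1.0)
        gamma = 1.0 / (I_CUT if countermeasures and t >= T_REMOVAL else I_FULL)
        new_exposed = beta * S * I / N
        S, E, I, R = (S - new_exposed * dt,
                      E + (new_exposed - E / LATENT) * dt,
                      I + (E / LATENT - gamma * I) * dt,
                      R + gamma * I * dt)
    return N - S, latent_at_evacuation     # cumulative infections, latent cases

print("Rt after both countermeasures:", round(BETA0 * 0.3 * I_CUT, 2))  # 1.78
cum, latent = simulate(countermeasures=True)
print(f"cumulative infections by 19 Feb (with interventions): {cum:.0f}")
print(f"latent cases if evacuated on 3 Feb: {latent[13.0]:.0f}")
cum_no, _ = simulate(countermeasures=False)
print(f"counterfactual cumulative infections by 19 Feb: {cum_no:.0f}")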
The modelled cumulative number of cases on 19 February 2020 is 613 out of the 3700 people at risk, while the observed reported number of cases is 619. The counterfactual scenario assuming homogeneous rates among crew and guests without any interventions (no removal off the ship or isolation of ill persons, nor any quarantine measures for the remaining passengers on board) estimated the number of cumulative cases to be 2920 out of the 3700 after 30 days, that is, by 19 February (Figure 2). The net effect of the combined interventions was estimated to prevent a total of 2307 cases by 19 February 2020 (Figure 2). In a sensitivity analysis we modified R0 to 3.7 (and consequently β to 0.37), as this has been reported as the average basic reproduction number from studies of COVID-19 in China. 9 However, from our simulation, even in the absence of any intervention, such a low R0 cannot explain the rapid growth of incident cases on the cruise ship (Figure 3). This sensitivity scenario excluded countermeasures from the model, making it unrealistic that such a low R0 value could be the true value in the cruise ship situation with confined spaces and highly homogeneous mixing of the same persons; considering the strong interventions actually put into place makes the lower R0 value even more unrealistic. We additionally modeled a scenario stratified by crew and guests, whereby we assumed the transmission risk to be lower for crew to guest than for guest to crew (Table 1). The predicted cumulative number of infected crew and guests by 19 February from this model was 168 out of 1000 (16.8%) and 464 out of 2700 (17.2%), respectively (Figure 4). The total number of cumulative cases by 19 February predicted from this model was 632, close to the observed number of 619 cases. The predicted cumulative incidence rates were overestimated for crew and underestimated for guests based on the test results available at the time of writing (Figure 4). These data still need to be validated against the empirical test results of all crew and passengers, which should soon become available. Instead of keeping all passengers on board, another option would have been to evacuate all individuals onboard the cruise ship earlier and allow them to go home for a potential quarantine in their respective home countries. We modeled that an evacuation by 3 February 2020 would have resulted in 76 latent cases (cases during the incubation time), while an evacuation by 19 February would have resulted in 246 latent cases.

Discussion

Modelling the COVID-19 on-board outbreak reveals important insights into the epidemic risk and the effectiveness of public health measures. We found that the reproductive number of COVID-19 in the cruise ship situation of 3700 persons confined to a limited space was around 4 times higher than in the epicenter in Wuhan, where R0 was estimated to have a mean of 3.7. 9 Interestingly, a rough estimate of the area of this 18-deck ship, 286 by 62 meters per deck, is 0.32 km2 in total. Assuming that only 50% of decks are being used, approximately 24 400 persons are confined per km2 on the ship, compared to approximately 6000 persons per km2 (9 000 000/1528) in urban Wuhan. This means that the population density was about 4 times higher on the cruise ship. Thus, both R0 and the contact rate are dependent on population density, as also suggested by previous research.
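The density arithmetic in the preceding paragraph can be checked directly from the quoted figures; the 50% usable-deck share is the paper's own assumption. This rough calculation yields about 23,000 persons per km2 rather than the quoted 24,400 (presumably a rounding difference), but the multiplier of roughly 4 is unchanged.

decks, length_m, width_m = 18, 286, 62
ship_area_km2 = decks * length_m * width_m / 1e6   # ~0.32 km^2 across all decks
usable_km2 = 0.5 * ship_area_km2                   # assume half the decks in use
ship_density = 3700 / usable_km2                   # ~23,000 persons per km^2
wuhan_density = 9_000_000 / 1528                   # ~5,900 persons per km^2
print(round(ship_density / wuhan_density, 1))      # ~3.9, i.e. roughly 4x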
21 In population-based models on observational data, the population per square km is often substantially different, implicitly affecting R0 and the β coefficient through changes in the contact rate. The local estimate of R0 can be divided into a localized contact rate and a multiplier that is necessary for moving from one population to another:

contact rate = contact rate_localized × pd,

where pd is the population density multiplier; in our case it was approximated to 4 (a short numerical check follows at the end of this section). Here the contact rate refers to the contact rate in a defined population in a certain area, and the population density multiplier modifies the contact rate when moving across different local populations and geographical areas, representing heterogeneity in population density. In the case of the cruise ship, the relationship of R0 to population density thus appears to be mainly attributable to the contact rate and mixing effects. This information is also important for other settings characterized by high population densities. With such a high R0, we estimated that without any interventions within the time period of 21 January to 19 February, 2920 out of the 3700 (79%) would have been infected, assuming relatively homogeneous mixing between all people on board. The quarantine and removal interventions launched when the outbreak was confirmed (3 and 5 February) substantially lowered the contact rate and reduced the cumulative case burden by an estimated 2307 cases by 19 February. We note, however, that a longer simulation time span beyond 19 February, assuming people stayed on the boat, would reduce the net effect of the intervention substantially. We further note that an earlier evacuation would have corresponded to disembarking a substantially lower number of latent, undetectable infections (76 vs. 246), likely giving rise to some further transmission outside the ship. We also found that the guest-to-guest and guest-to-crew contact rates appeared higher than the crew-to-guest contact rate, perhaps driven by high transmission rates within cabins. However, testing of crew was delayed, and there was a testing bias towards testing more passengers than crew. Hence our analysis needs to be revisited when all data are available. The limitations of our study include our lack of data on the lag time between onset of symptoms, the timing of testing and potential delays in the availability of test results. Due to the large number of people, not everyone was tested, and we suspect that the timing of the test results does not fully tally with the real-time onset of cases. We had no access to data on incident cases in crew versus passengers, nor any data on whether there was clustering of cases around certain nationalities or crew members. Furthermore, although the Hong Kong passenger was assumed to be the index case, it could well have been possible that there was more than one index case on board contributing to transmission, and this would have lowered our estimated R0. Lastly, our models are based on human-to-human transmission and do not take into account the possibility that fomites, or water systems with infected feces, contributed to the outbreak.
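As the short check below shows, scaling the mean Chinese R0 by the density multiplier quoted earlier is consistent with the calibrated on-board value; the exact equality is only approximate, since both inputs are rounded.

R0_wuhan = 3.7        # mean basic reproduction number reported for mainland China
pd = 4.0              # population density multiplier estimated for the ship
print(R0_wuhan * pd)  # 14.8 -- consistent with the calibrated on-board R0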
The interventions, which included the removal of all persons with confirmed COVID-19 disease combined with the quarantine of all passengers, substantially reduced the anticipated number of new COVID-19 cases compared to a scenario without any interventions (17% attack rate with intervention versus 79% without), and thus prevented a total of 2307 additional cases by 19 February. However, the main conclusion from our modelling is that evacuating all passengers and crew early on in the outbreak would have prevented many more passengers and crew members from getting infected. A scenario of early evacuation at the time of first detection of the outbreak (3 February) would have resulted in only 76 latent infected persons during the incubation time (with potentially still negative tests). A late evacuation by 19 February would have resulted in about 246 infected persons during their incubation time. These figures need to be confirmed by empirical testing of all evacuated persons after 19 February, and may be an overestimate, as we assumed a stable R0 after quarantine was instituted. However, R0 probably declined over time, as quarantine measures were incrementally implemented, leading to better quarantine standards towards the end of the quarantine period. In conclusion, the cruise ship conditions clearly amplified an already highly transmissible disease. R0 is related to population density and is particularly driven by contact rate and mixing effects, and this explains the high R0 in the first weeks before the countermeasures took effect.
Hydrogen-rich water protects against inflammatory bowel disease in mice by inhibiting endoplasmic reticulum stress and promoting heme oxygenase-1 expression

AIM: To investigate the therapeutic effect of hydrogen-rich water (HRW) on inflammatory bowel disease (IBD) and to explore the potential mechanisms involved.

METHODS: Male mice were randomly divided into the following four groups: control group, in which the mice received equivalent volumes of normal saline (NS) intraperitoneally (ip); dextran sulfate sodium (DSS) group, in which the mice received NS ip (5 mL/kg body weight, twice per day at 8 am and 5 pm) for 7 consecutive days after IBD modeling; DSS + HRW group, in which the mice received HRW (in the same volume as the NS treatment) for 7 consecutive days after IBD modeling; and DSS + HRW + ZnPP group, in which the mice received HRW (in the same volume as the NS treatment) and ZnPP [a heme oxygenase-1 (HO-1) inhibitor, 25 mg/kg] for 7 consecutive days after IBD modeling. IBD was induced by feeding DSS to the mice, and blood and colon tissues were collected on the 7th day after IBD modeling to assess clinical symptoms, colonic inflammation and the potential mechanisms involved.

RESULTS: The DSS + HRW group exhibited significantly attenuated weight loss and a lower disease activity index compared with the DSS group on the 7th day (P < 0.05). HRW exerted protective effects against colon shortening and colonic wall thickening compared with the DSS group (P < 0.05). The histological study demonstrated milder inflammation in the DSS + HRW group, close to normal levels, and the macroscopic and microscopic damage scores were lower in this group than in the DSS group (P < 0.05). The oxidative stress parameters in the colon, including MDA and MPO, were significantly decreased in the DSS + HRW group compared with the DSS group (P < 0.05). Simultaneously, the protective indicators, superoxide dismutase and glutathione, were markedly increased with the use of HRW. Inflammatory factors were assessed, and the results showed that the DSS + HRW group exhibited significantly reduced levels of TNF-α, IL-6 and IL-1β compared with the DSS group (P < 0.05). In addition, the pivotal proteins involved in endoplasmic reticulum (ER) stress, including p-eIF2α, ATF4, XBP1s and CHOP, were dramatically reduced after HRW treatment compared with the DSS group (P < 0.05). Furthermore, HRW treatment markedly up-regulated HO-1 expression, and the use of ZnPP clearly reversed the protective role of HRW. In the DSS + HRW + ZnPP group, colon shortening and colonic wall thickening were significantly aggravated, and the macroscopic damage scores were higher than those of the DSS + HRW group (P < 0.05). The histological study also showed more serious colonic damage, similar to the DSS group.

CONCLUSION: HRW has significant therapeutic potential in IBD by inhibiting inflammatory factors, oxidative stress and ER stress and by up-regulating HO-1 expression.
INTRODUCTION

Inflammatory bowel disease (IBD), including ulcerative colitis (UC) and Crohn's disease (CD), is a chronic and relapsing disease primarily caused by the production of pro-inflammatory cytokines and leukocyte infiltration, resulting in structural and functional damage to the bowel. It is associated with environmental factors, genetics, microbial factors, and others [1][2][3]. The major symptoms of IBD include inflammation of the colon, abdominal pain, altered visceral sensation, diarrhea, rectal bleeding, weakness and weight loss [4]. CD is often located in the terminal ileum and/or the colon and is characterized by the formation of non-caseating granulomas and by transmural and discontinuous inflammation of the mucosa. In contrast, UC is a colonic disorder in which inflammation is restricted to the mucosal and submucosal areas, initially affecting the rectum, but it may extend continuously and diffusely throughout the colon [5]. The major therapeutic goals in IBD patients are the alleviation of inflammation and the attenuation of IBD symptoms, mainly abdominal pain and altered bowel movements. The current range of treatments for IBD covers both conventional and biological therapies. Conventional therapy includes the use of anti-inflammatory drugs, immunosuppressive agents, antibiotics, and probiotics; biological therapy mainly includes the use of different anti-TNF-α agents and a plethora of other novel biological agents [6]. Dextran sulfate sodium (DSS)-induced IBD in mice is a classical mouse IBD model that is accepted worldwide. DSS-induced colitis arises mainly from direct toxicity to colonic epithelial cells, which increases the permeability of the intestinal mucosa and allows the transport of luminal bacterial products from the bowel lumen to the submucosal tissue [7,8]. Molecular hydrogen, which has been explored as a new medical gas over the last ten years, is a potent anti-oxidative, anti-apoptotic, and anti-inflammatory agent and a promising therapy for many diseases [9]. The benefit of hydrogen as a novel anti-oxidant is that it can penetrate cell membranes, diffuse into the cytosol, easily reach target organelles, and selectively reduce hydroxyl radicals and peroxynitrite without affecting the physiological reactive oxygen species (ROS) involved in normal cell signaling [10]. Moreover, hydrogen therapy has been proven to be safe and effective in many clinical trials [11,12]. With respect to intestinal diseases, previous studies have shown that hydrogen may alleviate intestinal ischemia-reperfusion injury, UC, and colonic inflammation [13][14][15]. However, the detailed mechanism responsible for this effect is not yet well understood. Hydrogen-rich water (HRW) is an effective, convenient way to deliver molecular hydrogen; it has the same effectiveness as inhaled hydrogen gas and is more suitable for clinical applications.
Therefore, the main aim of our study was to assess the protective effect of HRW on IBD in mice and explore the detailed mechanisms involved.

Experimental animals and preparation of HRW

This study was conducted using male C57BL/6J mice (4-5 wk old, 21-26 g) (Animal Feeding Center of Xi'an Jiaotong University Medical School). The animals were acclimatized to laboratory conditions (23 ℃, 12 h/12 h light/dark cycle, 50% humidity, and ad libitum access to food and water) for one week prior to experimentation. All mice were housed (5 per cage) in clear, pathogen-free polycarbonate cages in the animal care facility, and they were fed a standard animal diet (No. 120161128007, Jiangsu Xietong Pharmaceutical Bio-technology Co., Ltd.) and water ad libitum under controlled temperature conditions with 12-h light-dark cycles. They were cared for in accordance with the Ethical Committee, Xi'an Jiaotong University Health Science Center. The study was reviewed and approved by the Xi'an Jiaotong University Health Science Center Institutional Review Board. All procedures involving animals were reviewed and approved by the Institutional Animal Care and Use Committee of the Xi'an Jiaotong University Health Science Center. The animal protocol was designed to minimize pain and discomfort to the animals. All animals were euthanized with isoflurane gas for tissue collection. HRW was produced by Naturally Plus Japan International Co., Ltd. and was stored under atmospheric pressure at 4 ℃ in an aluminum bag with no dead volume, as in our previous studies [16][17][18].

Induction of IBD

IBD was induced by DSS feeding. Male C57BL/6J mice were provided with drinking water containing 5% (wt/vol) DSS (35-50 kDa, Sigma-Aldrich, Steinheim, Germany) ad libitum from day 0 to day 5. On days 6 to 7, the animals received tap water (without DSS). Control animals received tap water throughout the entire experiment.

Euthanasia

Mice were sacrificed after being anesthetized with isoflurane gas on the 7th day after IBD modeling, and blood samples were collected from the periorbital plexus. The serum was separated by centrifugation at 3000 g for 15 min at 4 ℃. The colon without the cecum was removed immediately from each mouse and stored at -80 ℃ until further analysis.

Statistical analysis

Measurement data are expressed as the mean ± standard error of the mean (SEM). Differences between the experimental and control groups were assessed by either analysis of variance or the t test, as applicable, using SPSS 18.0 (SPSS Inc.). A P value less than 0.05 was considered statistically significant.

Treatment with HRW significantly alleviates the symptoms of DSS-induced IBD in mice

To investigate the effect of HRW treatment on IBD, the weight change and disease activity index were assessed on the 7th day after IBD modeling (Figure 1). The results showed that the weight of the mice trended downward on day 7 after IBD induction. However, the DSS + HRW group had significantly less weight loss than the DSS group on the 7th day (P < 0.05). Considering the change in weight, blood in stool, and stool consistency, the disease activity index was calculated. The disease activity index markedly increased after DSS treatment, and the DSS + HRW group exhibited a lower extent of disease than the DSS group (P < 0.05).

Treatment with HRW markedly ameliorates colonic damage in DSS-induced IBD

Mice were sacrificed, and the colons were assessed on the 7th day after IBD modeling.
We found that the average length of the colons was significantly reduced after DSS administration. More importantly, HRW exerted a protective effect against the shortening of the colon in the DSS + HRW group, in which the colon was markedly longer than that in the DSS group (P < 0.05). In addition, colonic wall thickening was alleviated in the DSS + HRW group in comparison to the DSS group (P < 0.05). The macroscopic damage score, assessed by diarrhea, colon damage and colon length, showed that the DSS + HRW group also received a lower score than the DSS group (P < 0.05) (Figure 2). For further study of the alterations of the colon, sections were obtained and stained with hematoxylin and eosin to evaluate the morphology. Two researchers examined the results in a blinded fashion. The microscopic total damage score was assessed using the following parameters: the depletion of goblet cells (0: absence; 1: presence), crypt abscesses (0: absence; 1: presence), the destruction of mucosal architecture (1: normal; 2: moderate; 3: extensive), the extent of muscle thickening (1: normal; 2: moderate; 3: extensive), and the presence and degree of cellular infiltration (1: normal; 2: moderate; 3: transmural) (Supplementary Table 3) [21].

Measurements of cytokines in murine serum

The levels of serum TNF-α, IL-6 and IL-1β were measured with commercial ELISA kits according to the instructions from the manufacturer (Dakewe, Shenzhen, China).

Measurement of colonic oxidative stress

The concentrations of malonaldehyde (MDA), superoxide dismutase (SOD) and glutathione (GSH) in colon tissue were measured as markers of oxidative stress. Colon tissues were homogenized on ice in 10 volumes (w/v) of NS. The homogenates were centrifuged at 4000 rpm at 4 ℃ for 15 min for MDA, SOD and GSH detection using assay kits purchased from Nanjing Jiancheng Corp., China. MDA levels in the supernatants were determined by measurement of thiobarbituric acid (TBA)-reactive substance levels using an MDA assay kit according to the manufacturer's instructions.

Western blot analysis

Proteins were extracted from the colon according to the manufacturer's instructions. A BCA protein assay kit was used to detect the concentration of the extracted proteins.

At the microscopic level, the histological study revealed that the mice in the DSS group developed severe colonic inflammation, including mucosal hyperemia, inflammatory cell infiltration, formation of crypt abscesses, destruction of the mucosal architecture, and depletion of goblet cells. Conversely, the DSS + HRW group exhibited mild inflammation that was much closer to normal. The microscopic scores, based on goblet cell depletion, crypt abscesses, destruction of the mucosal architecture, muscle thickening, and cellular infiltration, were also lower in the DSS + HRW group than in the DSS group (P < 0.05) (Figure 3). This evidence indicated that HRW could ameliorate colonic damage in DSS-induced IBD.

HRW inhibits oxidative stress and inflammatory factors in DSS-induced IBD

Oxidative stress and inflammation play an initial and crucial role in the process of IBD. The oxidative stress parameters in the colon, including MDA and MPO, were significantly decreased in the DSS + HRW group compared with the DSS group (P < 0.05). The protective indicator, SOD, was markedly increased with the use of HRW. Additionally, HRW also reversed the depletion of GSH caused by DSS administration (Figure 4).
These facts demonstrated that HRW could indeed inhibit oxidative stress. To explore the anti-inflammatory mechanism of HRW, inflammatory factors were assessed on the 7th day after IBD modeling by determining the plasma levels of inflammatory cytokines.

HRW inhibits endoplasmic reticulum stress in DSS-induced IBD

Endoplasmic reticulum (ER) stress, a cellular process triggered by a variety of conditions that disturb the folding of proteins in the ER, also aggravates the progression of IBD. The pivotal proteins involved in ER stress, including p-eIF2α, ATF4, XBP1s and CHOP, were detected to assess the effect of HRW on ER stress in DSS-induced IBD (Figure 6). The results demonstrated that the expression of these proteins was significantly increased after DSS administration. Moreover, the p-eIF2α, ATF4, XBP1s and CHOP proteins were dramatically reduced after HRW treatment compared with the DSS group. These findings indicated that HRW may ameliorate the manifestations of IBD by inhibiting the process of ER stress.

HRW up-regulates HO-1 expression to alleviate IBD

HO-1 has anti-inflammatory and anti-oxidative effects that protect against many diseases. On the 7th day after IBD modeling, the mice were sacrificed, and the colon tissues were obtained to detect HO-1 expression (Figure 7). The results revealed that HRW treatment markedly increased HO-1 expression compared with the DSS group. For further study of the role of HO-1, ZnPP, an HO-1 inhibitor, was administered. We found that colon length was visibly reduced in the DSS + HRW + ZnPP group compared with the DSS + HRW group (P < 0.05). The colonic wall in the DSS + HRW + ZnPP group was also thicker than that in the DSS + HRW group (P < 0.05). In addition, the DSS + HRW + ZnPP group received a higher macroscopic damage score than the DSS + HRW group (P < 0.05). The histological study showed that the DSS + HRW + ZnPP group exhibited more serious colonic damage, similar to that observed in the DSS group (Figure 8). Based on these findings, we confirmed that HO-1 plays a key role in the mechanisms by which HRW alleviates IBD.

DISCUSSION

Hydrogen has anti-oxidant, anti-inflammatory, anti-apoptotic and other protective effects, and great progress has been achieved in research on hydrogen therapy for diseases such as metabolic disorders, tissue ischemia-reperfusion injury, myocardial injury, and hepatic injury [23][24][25][26]. In this study, a model of IBD was established in mice by DSS feeding, and the therapeutic role of HRW was assessed. We demonstrated that treatment with HRW significantly alleviated the symptoms and colonic damage in DSS-induced IBD. The mechanisms by which HRW alleviates DSS-induced IBD may include the following: (1) inhibiting the secretion of inflammatory factors, such as TNF-α, IL-6 and IL-1β, to ameliorate the inflammatory response; (2) inhibiting oxidative stress, such as reducing MPO and ROS as well as increasing SOD and GSH; (3) inhibiting ER stress, such as decreasing the expression of p-eIF2α, ATF4, XBP1s and CHOP; and (4) up-regulating HO-1 expression to ease oxidative stress and decrease inflammation. All of this evidence indicates that HRW is a potential new method for the treatment of IBD. IBD is an enteric disorder characterized by acute and chronic intestinal inflammation. The etiology and precise pathogenesis of IBD are still unclear. However, several possible causes, including genetic, infectious and immunological factors, and dysfunction of the adaptive and innate immune systems in response to the fecal microbiome, have been recognized [27][28][29][30].
DSS is a chemical agent with an intrinsic capacity to disrupt the epithelial barrier and activate macrophages, causing inflammation and tissue damage [8]. The related histological changes include ulceration and inflammation of the intestinal mucosa with leukocyte infiltration. The clinical presentation includes weight loss, blood in stool, and diarrhea. In the present study, we chose to evaluate changes in weight, the disease activity index, colon shortening, colonic wall thickening, histological findings, and macroscopic and microscopic scores to assess the severity of DSS-induced IBD. Oxidative stress is an imbalance between the oxidation and anti-oxidation systems of the body and may be caused by excessive detrimental ROS, depletion of GSH, etc. In IBD, the production of ROS and MPO exceeds anti-oxidant defenses and leads to a state of oxidative stress that fuels inflammation and causes direct mitochondrial damage [31,32]. Hydrogen selectively quenches detrimental ROS, such as hydroxyl radicals and peroxynitrite, but does not damage physiological ROS, such as superoxide anion radicals, hydrogen peroxide, and nitric oxide [33]. In this study, we found that HRW significantly reduced the levels of MDA and MPO and increased the protective indicators SOD and GSH. Furthermore, we measured the levels of inflammatory factors, and the results revealed that HRW markedly inhibited the release of TNF-α, IL-6 and IL-1β. TNF-α is one of the most important pro-inflammatory cytokines; it stimulates the production of downstream cytokines such as IL-6 and IL-8 and plays a significant role in activating the cytokine cascade [34,35]. Researchers have reported that anti-TNF-α monoclonal antibodies and other drugs have dramatically improved the treatment of IBD [36,37]. In summary, our study revealed that HRW can quench detrimental oxidative stress and exert an anti-TNF-α role to alleviate IBD. ER stress is a cellular process triggered by a variety of conditions that disturb the folding of proteins in the ER. ER stress further triggers the unfolded protein response (UPR) by activating PKR and PERK signals and phosphorylating eIF2α, which is required for the initiation phase of polypeptide chain synthesis [38,39]. ATF4 is a UPR-dependent transcription factor, and its sustained expression may up-regulate CHOP expression and induce apoptosis [40]. XBP1s is also a UPR-dependent transcription factor, induced by ATF6 [41]. ER stress plays important roles in many diseases, such as ischemia/reperfusion injury in the liver, diabetes, and cardiac myocyte injury [42][43][44]. A previous study also showed that epithelial ER stress participates in CD and UC [45]. Moreover, studies have found that hydrogen has anti-apoptotic and anti-inflammatory functions [46,47]. In this study, we discovered that HRW dramatically reduced the expression of the p-eIF2α, ATF4, XBP1s and CHOP proteins, and we conclude that HRW protects against IBD by inhibiting ER stress. To determine the deeper mechanism of the protective effect of HRW against IBD, we focused on the effect of hydrogen on HO-1 expression. Heme oxygenases (HOs) catalyze the rate-limiting step in heme degradation, which produces bilirubin, iron, and carbon monoxide (CO). HO-1, induced by stimuli that cause cellular stress, reduces the secretion of inflammatory cytokines in many settings, such as sepsis and LPS-stimulated macrophages [48,49].
Additionally, HO-1 confers its cytoprotective effects by increasing anti-oxidative capacity and inhibiting oxidative stress [50,51]. In addition, recent studies have shown that HO-1 is involved in the downstream effects of Treg cells [52]. Based on these facts, we speculated that hydrogen may confer its cytoprotective role by up-regulating HO-1. We measured the level of HO-1 and used ZnPP, an HO-1 inhibitor, for further study. Not surprisingly, HRW treatment markedly up-regulated HO-1 expression, and the use of ZnPP clearly reversed the protective role of HRW. We verified that HO-1 indeed plays a key role in the mechanisms by which HRW alleviates IBD. The detailed mechanism may be that HO-1 inhibits the secretion of inflammatory cytokines and oxidative stress to alleviate IBD. In this study, we have shown that HRW has significant therapeutic potential in the treatment of IBD by inhibiting inflammatory factors and oxidative stress. More importantly, we discovered that HRW could inhibit ER stress to prevent apoptosis and up-regulate HO-1 expression. The high level of HO-1 further exerted anti-oxidative and anti-inflammatory functions in the process of IBD. Additionally, due to its advantageous distribution characteristics, hydrogen can penetrate biomembranes and diffuse into the cytosol, mitochondria, and nucleus, successfully targeting these organelles. All of these effects make HRW a potential new treatment against DSS-induced IBD. However, our study is based on animal experiments, and prospective clinical studies are needed to evaluate whether HRW is suitable for the clinical treatment of IBD. In conclusion, the results of the present study demonstrate that HRW can alleviate the symptoms and colonic damage in DSS-induced IBD, most likely due to its unique cytoprotective properties, such as its anti-oxidant and anti-inflammatory activities. More importantly, HRW can inhibit ER stress and up-regulate HO-1 expression. All of these findings indicate that HRW can be a potential therapy for DSS-induced IBD.

Background

Inflammatory bowel disease (IBD) is a chronic and relapsing disease, and therapeutic goals include controlling inflammation and ameliorating clinical symptoms. Hydrogen-rich water (HRW) is a potent anti-oxidative, anti-apoptotic, and anti-inflammatory agent and a candidate therapy for many diseases.

Research frontiers

Effective therapeutic schemes for IBD are lacking. Research on mechanisms and new therapeutic approaches for IBD has received increasing attention from scientists and clinicians. Hydrogen therapy is a new medical approach that has recently gained much appreciation. HRW exerts considerable anti-oxidative, anti-apoptotic, and anti-inflammatory effects. More importantly, drinking HRW is very convenient in the course of daily life. Exploring the effect of HRW on different types of diseases and promoting its clinical use are currently important goals in hydrogen medicine.

Innovations and breakthroughs

The present study concluded that HRW can significantly protect against IBD in mice by inhibiting inflammatory factors, oxidative stress, and endoplasmic reticulum stress and by up-regulating HO-1 expression. Moreover, based on pharmacological inhibition of HO-1, we can conclude that HO-1 may be a key effector protein in HRW function.

Applications

Hydrogen therapy may be a safe and effective treatment for IBD. Moreover, drinking HRW is very convenient and acceptable in daily use.
Terminology

Hydrogen is the lightest gas in nature and has powerful anti-oxidant and anti-inflammatory effects. It has therapeutic effects in many diseases, as shown by numerous basic research and clinical studies. HRW is produced by forcing hydrogen gas into water with a specific device under high pressure.

Peer-review

Congratulations. It is a very well-designed work with very interesting results. Some suggestions: Was there any examination or histological study of the puncture site? It would have been interesting to know if there was any reaction at that site.
The Role of the Paraventricular-Coerulear Network on the Programming of Hypertension by Prenatal Undernutrition

A crucial etiological component in fetal programming is early nutrition. Indeed, early undernutrition may cause a chronic increase in blood pressure and cardiovascular diseases, including stroke and heart failure. In this regard, current evidence supports several pathological mechanisms involving changes in central and peripheral targets. In the present review, we summarize the neuroendocrine and neuroplastic modifications that underlie the maladaptive mechanisms related to chronic hypertension programming after early undernutrition. First, we analyze the role of glucocorticoids in the mechanism of long-term programming of hypertension. Second, we discuss the pathological plastic changes in the paraventricular nucleus of the hypothalamus that contribute to the development of chronic hypertension in animal models of prenatal undernutrition, dissecting the neural network that reciprocally connects this nucleus with the locus coeruleus. Finally, we propose an integrated and updated view of the main neuroendocrine and central circuit alterations that support the occurrence of chronic increases in blood pressure in prenatally undernourished animals.

Introduction

Since the early studies of David Barker and colleagues, much evidence has accumulated demonstrating that environmental conditions during early life are critical for the proper development of organisms, supporting the concepts of fetal programming [1] and the developmental origins of health and disease (DOHaD) [2]. In this context, nutrition is one of the most relevant conditions impacting early development and generating deleterious long-term effects, thereby constituting a crucial etiological component of chronic non-communicable diseases [3]. Caloric and protein malnutrition, defined respectively as deficiencies in total food or protein intake, can occur in specific or mixed forms in the newborn and in later childhood [4], whereas malnutrition in pregnant mothers likely involves inadequate nutrition of the fetus in both calories and proteins. Such chronic fetal macronutrient deficiency manifests as full-term babies with low birth weight, i.e., small for gestational age (SGA). Although fetal undernutrition is not completely dependent on maternal nutrition (there are also other well-known determinants, such as poor maternal metabolic and endocrine status, abnormal feto-placental perfusion, and smoking during pregnancy), the main factor determining SGA newborns in developing countries, and the second most important in developed countries, is low energy intake [5]. Undernutrition during fetal life has been shown to produce long-term adaptations in humans and animals, such as lower body weight and brain alterations, including loss of neurons and glia, impaired neuronal differentiation, and deficits in neuroplasticity [6,7]. In addition, several reports have shown that prenatal undernutrition produces metabolic disturbances and cardiovascular illnesses, such as hypertension [3,8,9]. The Food and Agriculture Organization of the United Nations reports that millions of persons worldwide are born with low birth weight due to prenatal undernutrition [10,11], and many epidemiological studies have reported that prenatal undernutrition is associated with an increased risk of developing long-term diseases, like hypertension [12,13].
For instance, mothers exposed to the Dutch famine during gestation gave rise to offspring predisposed to develop hypertension in middle age [14] and, more recently, a higher incidence of chronic hypertension was reported in the Chinese population that suffered famine between 1959 and 1961 [15]. However, few studies have quantified the effect of low birth weight on blood pressure in humans. Painter et al. [16] found that for every kilogram reduction in birth weight due to maternal exposure to famine there was a 2.7 mmHg increase in systolic pressure. More recent studies have established a significant increase in the prevalence of hypertension due to prenatal famine, the prevalence being 10.2% in the group without exposure to famine and 16.1% for those with fetal exposure to famine; these data were calculated from Chinese populations exposed to famine during 1959-1961 [17]. A fairly similar result was described in an Ethiopian population cohort composed of individuals exposed and not exposed to prenatal famine (1983-1985), where 64/350 (18.3%) of exposed individuals and 31/350 (8.9%) of unexposed individuals presented with hypertension. In that study, systolic blood pressure increased by 1.05 mmHg (95% CI 0.29, 4.42) and diastolic blood pressure by 2.47 mmHg (95% CI 1.01, 3.95), and multivariate logistic regression analysis indicated a positive association between famine exposure and risk of hypertension in adults [18] (the crude figures implied by these counts are recomputed in the sketch at the end of this passage). Low-birth-weight human patients also showed activation of the hypothalamic-pituitary-adrenal (HPA) axis along with high fasting cortisol levels when challenged with exogenous adrenocorticotropic hormone (ACTH), which are thought to be involved in the hypertension exhibited by these patients [19]. Similarly, persistent autonomic dysfunction is programmed in low-birth-weight human patients, as revealed by increased cardiac sympathetic activity at rest and reduced cardiac reflexes in response to head tilt [20]. In animal models, rats with prenatal and perinatal malnutrition showed chronic increases in systolic pressure and heart rate during adulthood [21][22][23][24][25], and increased tone of the peripheral sympathetic system [26]. Therefore, protein and calorie food restrictions in pregnant dams result in hypertensive adult offspring [27,28] through mechanisms that include altered hypothalamic programming [19,26,29], supporting the significant role of nutritional programming in the neural and endocrine control of the cardiovascular system. Indeed, an increase in the activity and excitability of neurons in the hypothalamic paraventricular nucleus (PVN) has been described in some animal models of hypertension, and this is closely related to an increase in sympathetic drive. For instance, electrophysiological studies in spontaneously hypertensive rats have shown higher neuronal activity of presympathetic PVN neurons that innervate the nucleus tractus solitarius [11,13,30,31]. Although GABAergic input normally inhibits presympathetic PVN neurons in normotensive rats [32,33], this effect is impaired in spontaneously hypertensive rats due to a post-translational modification (glycosylation) of the Na+-K+-2Cl− cotransporter 1, resulting in an increased sympathetic tone [34]. Impairment of GABAergic inhibitory input in the dorsomedial and paraventricular hypothalamic nuclei has also been described in a model of neurogenic hypertension, the Schlager mouse [35,36].
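To make the Ethiopian cohort figures above easier to check, the following minimal Python sketch (ours, not part of the cited study [18]) reproduces the crude prevalences and an unadjusted odds ratio from the raw counts; note that the published association was estimated by multivariate logistic regression, so covariate-adjusted values will differ.

    # Crude figures for the Ethiopian famine cohort quoted above:
    # 64/350 exposed and 31/350 unexposed individuals with hypertension.
    exposed_cases, exposed_total = 64, 350
    unexposed_cases, unexposed_total = 31, 350

    prevalence_exposed = exposed_cases / exposed_total        # 0.183 -> 18.3%
    prevalence_unexposed = unexposed_cases / unexposed_total  # 0.089 -> 8.9%

    # Unadjusted odds ratio from the 2x2 table; the study itself used
    # multivariate logistic regression, which adjusts for covariates.
    odds_ratio = (exposed_cases * (unexposed_total - unexposed_cases)) / (
        unexposed_cases * (exposed_total - exposed_cases)
    )

    print(f"prevalence, exposed:   {prevalence_exposed:.1%}")    # 18.3%
    print(f"prevalence, unexposed: {prevalence_unexposed:.1%}")  # 8.9%
    print(f"crude odds ratio:      {odds_ratio:.2f}")            # ~2.30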
In addition, epigenetic modulation of hypothalamic angiotensin signaling in the PVN has been reported to contribute to salt-sensitive hypertension induced by prenatal glucocorticoid excess in offspring of mothers that were undernourished during pregnancy [37]. The present review summarizes the principal neuroendocrine and neuroplastic mechanisms that may explain the chronic increase in blood pressure in prenatally malnourished offspring. Additionally, we discuss how the PVN and other brain nuclei are critical neuroadaptive components of the neural networks that may cause and maintain hypertension in these animals. Peripheral changes (e.g., renal, vascular) that may partly account for fetal undernutrition-induced hypertension in later life have been reviewed elsewhere and are not the aim of the present review (for review of renal changes, see [38,39]; for review of vascular adaptations, see [40][41][42]).

Hypothalamic CRF Expression and Subsequent Development of Hypertension in Subjects Who Suffered from Prenatal Undernutrition: The Role of Central Noradrenaline

Hypophysiotropic neurons synthesize corticotropin-releasing factor (CRF) in the medial parvocellular division of the hypothalamic PVN; CRF is then released from the median eminence, activating anterior pituitary corticotroph cells to release ACTH into the systemic circulation [43]. In turn, ACTH induces the synthesis and release of glucocorticoids (GC) in the cortex of the adrenal glands [44], which at basal levels maintain cardiovascular homeostasis by acting on the vasculature and the heart [45]. In addition to the hypophysiotropic cells that give rise to the HPA axis, there are other CRF-synthesizing neurons in the PVN, the preautonomic neurons, that project to autonomic centers in the brainstem and spinal cord, providing essential control over the responses of the cardiovascular system in parallel with the HPA axis [46].

Dysregulation of Hypothalamic CRF Levels in Prenatally Undernourished Subjects: The Role of the Glucocorticoid Feedback and Placental Barrier

The HPA axis is highly sensitive to nutritional insults during prenatal life. Indeed, animals exposed to maternal undernutrition have higher CRF mRNA and protein levels in the hypothalamus [47] and higher ACTH plasma levels in adulthood [47][48][49]. Additionally, it has been shown that young children with low birth weight (an index of intrauterine growth restriction) have a higher risk of developing hypertension, together with higher serum levels of cortisol, ACTH, and CRF [50]. In addition, post-mortem studies in hypertensive patients have shown higher levels of hypothalamic CRF mRNA and protein [51]. Interestingly, an increase in spontaneous neuronal activity was observed in the PVN of adult rats previously exposed to prenatal undernutrition [52], suggesting that parvocellular neurons of the PVN in these animals not only express but also release more CRF because of the higher rate of neuronal firing. Thus, CRF-mediated hyperactivity of the HPA axis could lead to increased plasma corticosterone and higher blood pressure [53], whereas hypothalamic CRF could also trigger the activation of the sympathetic-adrenal-medullary axis [54], greatly contributing to the increase in blood pressure. Therefore, both the hypophyseal-adrenal axis and the sympathetic-adrenal-medullary system constitute output mechanisms whereby CRFergic neurons of the PVN would trigger hypertension in previously undernourished adult subjects.
Through these mechanisms, PVN neurons can control the peripheral cardiovascular viscera via corticosterone secretion and noradrenaline/adrenaline release, respectively. It is therefore apparent that increased levels of hypothalamic CRF are at the origin of prenatal undernutrition-induced hypertension. Thus, what is the main factor underlying the increased hypothalamic CRF levels in adult hypertensive animals previously undernourished during intrauterine life? Published reports indicate that rodents exposed to prenatal undernutrition have a reduction in the expression of glucocorticoid receptors (GR) in the hippocampus [55], hypothalamus [56] and pituitary [57] at birth, adolescence, and adult life. These persistent decreases in GR expression found in previously undernourished animals could lead to reduced feedback control at the hypothalamus, thereby resulting in higher CRF levels together with hyperactivity of both the HPA axis and sympathetic output in adulthood. It is noteworthy that this deleterious effect of early undernutrition on hypothalamic glucocorticoid feedback was confirmed with a glucocorticoid challenge functional test, involving injections of subcutaneous (100 µg/kg) and intra-PVN (50 pmol) dexamethasone in adult rats that had suffered from malnutrition during prenatal life. In that study, these treatments gave rise to reduced serum corticosterone levels in eutrophic but not in prenatally malnourished animals [58]. Furthermore, bilateral intra-PVN microinjection of GR agonists and antagonists showed significantly smaller effects on systolic pressure, heart rate, and plasma glucocorticoids when administered to prenatally malnourished rats, compared to eutrophic controls [28]. Therefore, a persistent reduction in GR expression in these brain areas could result in decreased negative feedback control, leading to increased activity of both the HPA axis and sympathetic output in adult life. This claim is endorsed by human and animal studies where higher expression levels of CRH, ACTH, and corticosterone/cortisol were observed in offspring of prenatally malnourished individuals [28,47,50]. One of the supporting mechanisms explaining reduced GR expression after prenatal undernutrition involves a malfunction at the placental barrier. Normally, maternal glucocorticoids do not cross the placental barrier. However, the placental activity of 11β-hydroxysteroid dehydrogenase type 2 (the enzyme that catalyzes the rapid metabolism of cortisol and corticosterone to inert steroids) is significantly decreased in fetuses of rats exposed to maternal malnutrition [19,55,56,59], thereby resulting in overexposure of the fetuses to maternal glucocorticoids. This overexposure phenomenon has also been reported in babies with reduced birth weight [60]. Finally, increased glucocorticoid signaling to the PVN would lead to down-regulation of GR mRNA and protein levels [61][62][63], which may result in the aforementioned reduced glucocorticoid feedback, together with an increased activity of both the HPA and sympathetic-adrenal-medullary systems, as observed in prenatally undernourished subjects [57,64]. It is noteworthy that, in other studies, GR mRNA expression in the PVN [57] and whole hypothalamus [65] of ewes, as well as GR mRNA and protein in the rat hypothalamus [66], were not affected by previous exposure to prenatal undernutrition. Moreover, Stevens et al. [67] have shown an increase in GR mRNA expression in the hypothalamus of perinatally undernourished ewes, while Li et al.
[68] observed that PVN GR immunoreactivity was not affected in pre-term baboons (90% gestation) from mothers fed 70% of the control diet, although peptide expression and serum levels of ACTH and cortisol were found to be increased in the same animals. This apparently conflicting evidence could be the result of comparing GR levels between different animal species that may have different periods of vulnerability during intrauterine life, since ungulates (sheep) and primates (baboons) are born with more advanced neurobiological development (neurogenesis, synaptogenesis, gliogenesis, oligodendrocyte maturation, and age-dependent behaviors) than rats [67,68]. Furthermore, comparing data involving the PVN [57,68] vs. the entire hypothalamus [55,64-66], or changes in GR protein [56,68] vs. those in GR mRNA [57,65,67], may also lead to conflicting results. Taken together, the aforementioned results suggest that the increased CRF levels and subsequent hypertension caused by activation of both the HPA axis and the sympathetic-adrenal-medullary network may also occur in animals exposed to prenatal malnutrition due to other independent intervening factors, unrelated to changes in GR expression in the PVN.

Increased Expression of CRF in the PVN and Subsequent Hypertension Shown by Prenatally Malnourished Animals Are Likely due to Central Noradrenergic Hyperactivity

One of the most likely factors involved in the increased expression of CRF in previously undernourished adult animals is the central noradrenergic hyperactivity that develops in these animals shortly after birth. Indeed, a body of literature has shown that the brains of prenatally malnourished rats exhibit increased synthesis, release, and turnover of noradrenaline [69][70][71][72][73][74][75][76][77][78][79]; for review see [7]. Extracellular noradrenaline levels were also recently reported to be significantly higher in both hemispheres, at all time points, in adult animals that were malnourished during prenatal life compared to well-fed controls [80]. This finding confirmed older data [81] showing that extracellular noradrenaline levels in the prefrontal cortex were higher in perinatally protein-deprived rats than in controls, as measured by microdialysis. The central noradrenergic hyperactivity found in the brain of malnourished animals has functional implications. Indeed, noradrenaline-dependent interhemispheric electrophysiological dominance is suppressed in these animals, an effect that may be prevented by chronic treatment with clonidine (a presynaptic adrenoceptor agonist that could reduce noradrenaline release) [73] or with α-methyl-p-tyrosine (an inhibitor of tyrosine hydroxylase and thus of noradrenaline synthesis) [78]. Furthermore, the defective long-term potentiation (LTP) found in the prefrontal cortex of prenatally malnourished rats was restored to normal levels by antagonism or knockdown of α2C-adrenoceptors, which suggests that the observed LTP neuroplastic deficit is generated by an excess of noradrenaline in the brain [82]. There are good reasons to consider central noradrenergic hyperactivity as a likely causal factor for both the increased levels of hypothalamic CRF and the downstream hypertensive effect developed by animals suffering from fetal malnutrition:
(i) Noradrenergic neurons of the A1, A2, A5, A6 (locus coeruleus) and A7 brainstem areas project densely to the magnocellular and parvocellular regions of the PVN [83][84][85]; for in-depth review see [86].
(ii) In the PVN, noradrenaline stimulates transcription of the CRF gene very rapidly [87] by activating α1-adrenoceptors [88], with a subsequent increase in cytosolic calcium in PVN neurons [89] and CRF release [90].
(iii) Microinjection of the α1-adrenoceptor agonist phenylephrine into the PVN excites parvocellular neurons [91] and increases blood pressure in eutrophic normotensive rats [92,93], while microinjection of the α1-adrenoceptor antagonist prazosin counteracts the hyperactivity of PVN neurons [52] and the hypertensive state observed in prenatally undernourished rats [24,52,93].
(iv) Electrical stimulation of the brainstem-PVN noradrenergic connection excites the majority of PVN neurons, the effects being counteracted by the α1-adrenoceptor antagonists ergotamine and prazosin and mimicked by the α1-adrenoceptor agonist phenylephrine [91,94-97].
In addition, elevated levels of basal neuronal activity were simultaneously found at the locus coeruleus (LC) [52], suggesting that the hyperactivity of PVN neurons is secondary to hyperactive noradrenergic inputs from the LC to the PVN. Of note, both the firing rate and the number of spontaneously active cells in the LC had previously been described as significantly higher in perinatally undernourished rats than in controls [98]. Despite this last series of observations, it seems inappropriate to consider neuronal hyperactivity in the LC of animals exposed to prenatal malnutrition as the sole factor influencing CRF activity in the PVN (and thereby the downstream activation of both the HPA axis and the sympathetic-adrenal-medullary network), because neuronal activity in the LC is in turn influenced by many reciprocal neural inputs. For instance, in rats exposed to prenatal stress [99,100] or prenatal malnutrition [52] it has been reported that LC and PVN neurons interact reciprocally, as part of an excitatory LC-PVN closed loop in which the tonic neuronal activities of these nuclei mutually influence each other. Thus, the above data strongly suggest that increased brainstem noradrenergic input to the PVN is at the origin of the CRF overexpression observed in early malnourished animals, which in turn leads to increased blood pressure through intensified neural and endocrine signaling. However, it is worth noting that while there are experimental data showing increased potassium-induced norepinephrine release in the cerebral cortex of early malnourished animals [72,73,76,79] (cortical norepinephrine is released only from axons originating in the LC [101]), similar determinations in the hypothalamus of malnourished animals are lacking. Indeed, while norepinephrine turnover is increased in the hypothalamus of early malnourished rats [75], hypothalamic norepinephrine release itself has not been determined.

Maladaptive Programming of the Paraventricular-Coerulear Network and Development of Hypertension in Prenatally Malnourished Adult Animals

For LC and PVN neurons to interact reciprocally as part of an excitatory closed loop between the LC and PVN in stressed and prenatally malnourished rats, in which the tonic neuronal activities of the two nuclei influence each other [52,93,99,100,102,103], there must be, in addition to the noradrenergic excitatory connection to the PVN, reciprocal excitatory pathways from the PVN to the LC. Available information in this regard indicates that such excitatory connections from the PVN to the LC are primarily provided by CRFergic innervation.
Anatomical studies have shown CRF-immunoreactive nerve endings in apparent contact with neurons exhibiting tyrosine hydroxylase immunoreactivity, which are therefore presumed to be noradrenergic [104,105]. Subsequently, unequivocal evidence was presented indicating that some peroxidase-labeled PVN neurons have monosynaptic associations with gold-silver labeled catecholaminergic dendrites in the LC [106], and that CRF receptor immunoreactivity is present on perikarya and dendrites of tyrosine hydroxylase-positive neurons of the LC, as revealed by double labeling under epifluorescence [107] and electron microscopy [108]. Furthermore, many LC neurons can be excited by CRF microinjection into the nucleus [109,110] through activation of CRF1 receptors [111,112], sometimes coexpressed with CRF2 receptors [113]. Of note, CRF microinjection into the LC of healthy normotensive rats, in addition to increasing the rate of neuronal activation in the LC, also increased the neuronal activity in the PVN, an effect that was prevented by prior microinjection of the α1-adrenoceptor antagonist prazosin into the PVN. This evidence supports the existence of functional excitatory noradrenergic connections from the LC to the PVN triggered by CRF microinjection into the LC [52]. Thus, a morphofunctional neural network (i.e., PVN-to-LC and LC-to-PVN excitatory connections) can be distinguished, in agreement with the proposed closed positive feedback loop that reciprocally interconnects the PVN and LC through CRFergic and noradrenergic projections.

The Paraventricular-Coerulear Network: Differences between Eutrophic and Prenatally Undernourished States

It is clear that in healthy normotensive animals (i) administration of norepinephrine or α1-adrenergic receptor agonists activates PVN neurons [91,96,114], which results in hypertension and tachycardia [92,93], and (ii) CRF administration to the LC excites LC neurons [109,110] through activation of CRF1 receptors [111,112], which also leads to increased blood pressure and heart rate [24,52,93,115]. These observations are consistent with the aforementioned anatomical and functional studies showing that some PVN and LC neurons are reciprocally and monosynaptically interconnected via CRFergic and noradrenergic axons. Interestingly, unlike the effects of agonists in the PVN and LC of normotensive eutrophic rats, the administration of the same agonists at the same brain sites in hypertensive undernourished animals did not produce any cardiovascular effect [24,93]. In contrast, administration of the α1-adrenoceptor antagonist prazosin into the PVN or the CRF receptor antagonist α-helical CRF into the LC lowered blood pressure and heart rate in hypertensive undernourished animals only, failing to modify these cardiovascular parameters in normotensive eutrophic controls [24,93]. In summary, the antagonists of these receptors were active only in prenatally malnourished rats while the agonists were effective only in eutrophic animals, when microinjected into the aforementioned nuclei. Moreover, both the CRF antagonist α-helical CRF [116] and the selective CRF1 antagonist antalarmin [117] were found to be antihypertensive when administered i.c.v. in acute rat models of hypertension, whereas α-helical CRF administered i.c.v. or injected near the LC did not alter the spontaneous neuronal firing rate of the LC in normotensive rats [118]. A similar picture emerges in animals under stress, since i.c.v.
injection of the antagonist α-helical CRF significantly attenuated hypertension and tachycardia in stressed rats but not in unstressed subjects; in contrast, i.c.v. injection of the agonist, CRF, increased blood pressure and heart rate in unstressed rats [119]. Regarding α1-adrenoceptor ligands, intra-PVN microinjection of prazosin [120] or perfusion of the PVN with dialysate containing prazosin [121] showed that the α1-adrenoceptor antagonist did not alter control levels of mean arterial pressure or heart rate in normotensive rats. In contrast, intra-PVN microinjection of prazosin decreased systolic pressure and heart rate in previously undernourished adult hypertensive rats [24,93], together with a lowering of the spontaneous firing rate of PVN neurons [52]. Consistently, both noradrenaline and the α1-adrenoceptor agonist phenylephrine increased the frequency of spontaneous excitatory postsynaptic currents in PVN slices from normotensive control animals, but not in PVN slices from animals made hypertensive by chronic intermittent hypoxia [122]. Why are agonists of α1-adrenoceptors and CRF receptors inactive in undernourished hypertensive animals? As stated elsewhere [93], the hypothesis that agonist inefficacy in undernourished animals is caused by desensitization of α1-adrenergic and CRF receptors, owing to the hyperactivity exhibited by the central noradrenergic and CRFergic systems, is not tenable, because the respective antagonists are in fact active at these receptors in the same animal models. Instead, because in undernourished rats the basal neuronal activity of the LC [52,98] and PVN [52] is about twice that of normal rats, it was argued that in these animals the PVN and LC neurons are already fully active and therefore insensitive to further excitation by application of exogenous agonists [52]. In fact, the spontaneous discharge rate of LC neurons does not increase more than two-fold even after high doses of intra-LC [109] or i.c.v. CRF [109,123], which appears to act as a frequency limit for the rate of neuronal firing, at least in these two nuclei. A similar result was found in rats chronically stressed by separation from the mother during lactation, where LC neurons had spontaneous firing rates two-fold higher than those of controls [110]. In those stressed rats, CRF application did not further activate LC neurons, although it did increase the firing rate of LC neurons in control rats [110]. Regarding the increased spontaneous neuronal rhythm found in the LC of previously malnourished animals, it has been proposed that this might be the consequence of reduced negative feedback mediated by somatodendritic α2 autoreceptors, which are decreased in the LC of perinatally undernourished rats [98]. However, later studies showed that α2-adrenoceptors are increased, at least in the cerebral cortex of adult previously undernourished rats, as detected by [3H]-rauwolscine binding [124]. Therefore, the fact that LC and PVN neurons in undernourished animals have a higher rate of spontaneous firing remains as yet unexplained (but see below). Another intriguing issue is why α1-adrenoceptor and CRF receptor antagonists, unlike agonists, do not modify blood pressure and heart rate in normotensive eutrophic animals.
Prazosin is a selective inverse agonist at α1-adrenoceptors [125] that binds the α1A, α1B, and α1D adrenoceptor subtypes almost equally [126], while α-helical CRF is a non-selective competitive antagonist of CRF1 and the splice-variant CRF2(a) and CRF2(b) receptors in the rat [127]. As competitive ligands, their ability to induce receptor-mediated effects requires prior receptor activation by tonically released noradrenaline in the PVN and CRF in the LC. Thus, the fact that microinjection of these ligands into the PVN and LC did not produce hypotension suggests that, in healthy normotensive rats, there is no tonic release of norepinephrine and CRF in the PVN and LC, respectively, or that such tonic neurotransmitter release is not significantly involved in maintaining normal basal values of blood pressure. Therefore, it seems apparent that an excitatory noradrenergic/CRFergic feed-forward loop reciprocally interconnecting the PVN with the LC, which necessarily requires tonic activity in at least one of these two nuclei, is enabled in undernourished animals by some type of neuroplastic adaptive mechanism. Interruption of one of the two arms of the reciprocal communication between the PVN and LC, in both malnourished hypertensive and eutrophic normotensive animals, could shed some light on this mechanism.

Disruption of the PVN-LC Reciprocal Communication: Effects on Neuronal Activity and Cardiovascular Parameters in Undernourished and Eutrophic Animals

It is noteworthy that the transient cardiovascular effects of agonists observed in healthy normotensive animals, i.e., intra-PVN phenylephrine or intra-LC CRF, were not prevented by disruption of the reciprocal communication between the PVN and the LC using the appropriate antagonists, i.e., intra-LC α-helical CRF [93] or intra-PVN prazosin [24,93]. This means that the hypertension and increased heart rate produced in healthy normotensive rats by agonist-induced excitation of either PVN or LC neurons did not involve paraventricular-coerulear excitatory interactions. In other words, the excitation of PVN or LC neurons by the respective agonist can independently activate the sympathetic-adrenal-medullary system and thus generate the described cardiovascular effects, without the need for coactivation of the complementary nucleus. In contrast, in malnourished rats, disruption of the CRFergic connection from the PVN to the LC with intra-LC α-helical CRF, or of the reciprocal noradrenergic connection from the LC to the PVN with intra-PVN prazosin, was in each case independently capable of suppressing the hypertension and tachycardia in these animals [93]. This implies that a feed-forward closed loop of mutual excitation between both nuclei is required to produce the cardiovascular effects. Indeed, intra-PVN microinjection of prazosin in prenatally malnourished rats has been found to depress the increased neuronal firing not only in the PVN of these animals but also in the LC, probably owing to removal of the CRFergic arm of the PVN-LC bidirectional communication pathway [52]. This would interrupt the feed-forward loop of mutual excitation between both nuclei, thus producing a drop in blood pressure and heart rate in the hypertensive undernourished animals by decreasing the output of the sympathetic-adrenal-medullary system.
Although disruption of the reciprocal α1-adrenergic arm of the closed loop also reduced blood pressure and heart rate in malnourished rats [93], the effect of CRF receptor blockade on LC neurons of those animals has yet to be tested electrophysiologically. Finally, administration of an antagonist into one of the nuclei (i.e., prazosin into the PVN) in undernourished rats effectively allowed the complementary nucleus (i.e., the LC) to recover full responsiveness to the agonist administered (in this case, CRF), which then induced hypertension and tachycardia [93]. This means that reduction of the tonic activity in the LC by suppressing the PVN-LC CRFergic communication is enough to rescue the ability of LC neurons to respond to the agonist in these animals [93]. Similar cardiovascular responses are produced when the antagonist (α-helical CRF) is administered to the LC and the agonist (phenylephrine) to the PVN [93]. This strongly supports the contention that agonists are inactive in undernourished animals because LC and PVN neurons are already fully active and therefore insensitive to further excitation by application of exogenous agonists. Taken together, the above data indicate that a reciprocal excitatory feedforward loop between CRFergic neurons of the PVN and noradrenergic neurons of the LC is enabled in animals malnourished during fetal life, which would lead to increased tonic activity of these neurons, while this loop is functionally absent, or at least not working properly, in the eutrophic control counterpart (see Figure 1 for a summary of the supporting experimental protocols). This permissive mechanism could well underlie the high spontaneous firing rate of LC and PVN neurons in undernourished animals, which escalates beyond the values found in eutrophic animals, with consequent hypertension and tachycardia. Importantly, the reciprocal excitatory loop between PVN and LC neurons in prenatally malnourished adult rats is certainly more complex when the local regulatory neuronal circuits in these nuclei, which include some well-characterized modulatory interneurons, are considered. In fact, in the PVN, together with the direct α1-adrenoceptor-mediated excitatory noradrenergic input to CRF-expressing parvocellular neurons [128][129][130][131][132], there are GABA-synthesizing interneurons that inhibit PVN neuronal activity [133]. The noradrenergic input to the PVN has been shown to inhibit PVN-surrounding GABAergic interneurons via α2-adrenoceptor activation, resulting in disinhibition of the parvocellular PVN neurons [91,134], together with increased sympathetic activity triggered from the PVN [135], thus reinforcing the direct α1-adrenoceptor-mediated activating effect on parvocellular neurons of the PVN. In addition, noradrenergic input also activates local intra-PVN glutamatergic interneurons via α1-adrenoceptors [136], leading to indirect parallel heterosynaptic activation of parvocellular PVN neurons, once again enhancing the hypertensive effects and tachycardia. With respect to the modulatory influences on the LC, a rather similar picture emerges because, in addition to the excitatory CRF input to LC norepinephrine-synthesizing neurons [137][138][139], these neurons also receive synaptic contacts from GABAergic peri-LC interneurons that provide inhibitory regulation mediated by GABAA receptors [140,141].
Anatomical, electrophysiological and optogenetic data have shown that noradrenergic LC-core neurons and GABAergic peri-LC neurons receive axon afferents from different brain regions concerned with the processing and evaluation of sensory stimuli and stressors. However, the input coming from the PVN arrives rather directly at noradrenaline-expressing LC-core neurons [141], thus supporting a direct PVN-LC connection as part of the feed-forward excitatory loop that could be enabled in prenatally undernourished adult rats (see Figure 2).

[Figure 1 caption: curved arrows indicate enabled communication between nuclei, and segmented black curved arrows indicate disrupted communication; red micropipettes inject the agonists and grey micropipettes inject the antagonists. In normotensive healthy animals, the agonists increase the neuronal firing rate, leading to hypertension and tachycardia, while antagonists in the complementary nucleus prevent both the excitation there and the cardiovascular effects. In hypertensive undernourished animals, either of the two antagonists turns off the increased neuronal firing rate shown by these animals in both the PVN and the LC, thus normalizing blood pressure and heart rate.]

What are the signaling mechanisms behind the reciprocal excitatory PVN-LC feedforward loop in animals undernourished early in life?
The underlying neural modifications that may underpin the recruitment of such a feedforward closed loop are likely based on epigenetic adaptive molecular changes involved in controlling the excitability and neuroplasticity of specific subsets of noradrenergic and CRFergic long-axon neurons, together with the GABAergic and glutamatergic interneurons existing in the PVN and the LC. In this regard, while no reports exist to date on epigenetic modifications induced by early undernutrition that affect the noradrenergic and/or CRFergic systems and may be related to hypertension, it is known that human undernutrition is normally accompanied by other nutritional deficiencies (e.g., of folate and polyunsaturated fatty acids), which can lead to epigenetic changes, for instance in the GABA systems of the brain. Indeed, folate deficiency during development results in epigenetically increased homocysteine levels, a molecule that competes with GABA for GABAergic receptors, thus reducing the effects of GABA in the brain [142], while deficits of polyunsaturated fatty acids may lead to hypermethylation of some gene promoters, resulting in downregulated expression of GABA-related genes [143]. Thus, dietary deficits of both folate and polyunsaturated fatty acids could theoretically program epigenetic changes in brain GABA-related circuits that may lead to neuronal disinhibition. However, it must be taken into account that the protein-deficient purified diets used to study the effects of malnutrition in experimental animals are usually compensated with an excess of folate, and sometimes with polyunsaturated fatty acids, which clearly precludes comparison with human protein/energy malnutrition.

PVN as the Output System That Mediates Hypertension and Increased Heart Rate during Tonic Activation of the Paraventricular-Coerulear Network in Prenatally Malnourished Animals

The natural question that arises from the aforementioned studies is: are the PVN and/or the LC the output point of the excitatory feedforward PVN-LC closed loop installed in undernourished animals? It is well established that the PVN provides a dominant source of excitatory drive to the cardiovascular system via sympathetic outflow [54,144,145]. Indeed, a critical CRFergic pathway originating in parvocellular cells of the PVN projects to the rostral ventrolateral medulla [106,146]. These CRFergic neurons, in turn, have a direct and highly potent synaptic relationship with spinal preganglionic sympathetic neurons that control the sympathetic output to different target organs involved in the regulation of blood pressure, including the heart and blood vessels, the kidney, and the adrenal medulla. As already mentioned, PVN neurons are usually activated by α1-adrenoceptor agonists such as phenylephrine [83,91,128], and α1-adrenoceptor stimulation with phenylephrine in the PVN has been found to induce cardiac chronotropic and inotropic responses [92]. In contrast, the LC is involved in the integration and distribution of stress-related afferent signals to forebrain structures such as the cerebral cortex, hippocampus, cerebellum, most thalamic nuclei and, in part, the hypothalamus. In addition, LC neurons, which are activated by CRF [109] via CRF1 receptors [111,112], could also modulate blood pressure and heart rate, as the LC projects to sympathetic preganglionic spinal neurons, which are in turn excited by the released noradrenaline through activation of α1-adrenoceptors [101].
However, LC stimulation is associated with moderate and variable increases in heart rate and blood pressure [147], because the LC also projects to the rostral ventrolateral medulla [148], where noradrenaline exerts an inhibitory effect via stimulation of α2-adrenoceptors [149], thus dampening the sympathoexcitation evoked in preganglionic spinal neurons [101]. It seems very likely that the cardiovascular effects generated by intra-LC microinjected CRF are exerted primarily by transferring neuronal excitation from LC to PVN neurons via the α1-adrenoceptor-mediated excitatory pathways mentioned above, with the PVN being the output system responsible for the increases in systolic pressure and heart rate. This view is strongly supported by data showing that intra-LC CRF-induced hypertension and tachycardia in normotensive healthy rats are both suppressed by blocking α1-adrenoceptors in the PVN with prazosin [24,52,93]. Similar reasoning could in fact also apply to the hypertensive state associated with prenatal undernutrition and maintained by the PVN-LC excitatory feedforward closed loop described above, where the PVN output would be translated into sympathoexcitation. Indeed, it seems clear today that the development of hypertension in rats exposed to protein restriction during pregnancy and lactation is associated with sympathetic hyperactivity [150]. It should be noted, however, that PVN neurons do not appear to play an essential role in regulating the sympathetic response to short-term cardiovascular changes; rather, they seem to be involved in long-term challenges such as sustained water deprivation, chronic hypoxia, pregnancy, stress, and other forms of enduring hypertension [144]. The establishment of the PVN as a main output center that translates increased neuronal activity into elevated blood pressure and tachycardia requires at least two main conditions:
1. The PVN should not have a negative feedback mechanism associated with the down-regulation of PVN neurons (i.e., some inhibitory effect exerted backward on α1-adrenoceptor expression in PVN neurons). The absence of inhibition is required to sustain the continuous overactivity of the PVN in a pathological condition, such as prenatal undernutrition. The evidence shows that prenatal undernutrition did not change the amount of α1-adrenoceptor binding sites in the hypothalamus determined with [3H]-prazosin, but lower expression of α1A-adrenoceptor mRNA was measured by in situ hybridization [93]. In that study, [3H]-prazosin binding identified all three α1-adrenoceptor subtypes [126], while the deoxynucleotide probe used was specific for the α1A-adrenoceptor mRNA subtype [93]. It is also noteworthy that the binding assay was performed in the entire hypothalamus, while in situ hybridization allowed specific recognition of mRNA in delimited regions of the PVN [93]. In this context, further experiments are needed to establish the net effect on the whole α1-adrenoceptor spectrum. This issue is of paramount importance because all three α1-adrenoceptor subtypes are expressed in the PVN [151][152][153], and they undergo different regulatory processes depending on the prevailing condition. For example, the α1D subtype is up-regulated while the α1A and α1B subtypes are down-regulated in a concentration-dependent manner during an agonist challenge (i.e., by endogenous noradrenaline), with down-regulation of the latter two accompanied by reductions in mRNA (for review see [154]).
Interestingly, higher levels of α1A-adrenoceptor mRNA have been reported in the PVN of rats suffering from chronic hypertension [155], but unchanged α1A, α1B, or α1D-adrenoceptor mRNA in the whole hypothalamus [156] and the PVN [157] has also been reported. Moreover, chronic stress (which is often accompanied by hypertension) sensitizes the HPA axis to further acute stress (as measured by a transient plasma ACTH increase) in rats, enhancing the response to α1-adrenergic receptor activation in the PVN [158]. Thus, despite the implicit importance of the above results, the specific mechanisms regulating α1-adrenoceptor subtypes after maladaptive prenatal nutritional programming remain unknown.
2. Permanent sensitization of the PVN would be required to maintain the integrity of the excitatory feed-forward loop. In this regard, it has been reported that noradrenaline induces an α1-adrenoceptor-mediated increase and an α2-adrenoceptor-mediated decrease in GABA-dependent spontaneous inhibitory postsynaptic currents in a subset of parvocellular neurons of the PVN [159], which possibly represents a metaplastic regulation of GABAergic transmission in these neurons. Hippocampal long-term potentiation [160] and cerebral cortical long-term depression [161,162] have also been reported to be promoted by α1-adrenoceptors, but studies on the neuroplasticity processes involved in the long-lasting sensitization of PVN neurons, which could promote enduring sympathetic activation and thus favor chronic hypertension, are still lacking.

Conclusions

Taken together, the evidence reviewed may be summarized as follows:
1. Both the hypertension and the tachycardia induced in healthy normotensive rats by either α1-adrenoceptor-mediated excitation of PVN neurons or CRF receptor-mediated excitation of LC neurons do not imply serial or reciprocal excitatory interactions between the two nuclei, as revealed by the fact that the cardiovascular effects observed were not prevented by disruption of the communication between the nuclei.
2. Simultaneous tonic neuronal activity in the PVN and the LC is required to maintain elevated arterial blood pressure and heart rate in prenatally malnourished animals. In addition, reciprocal noradrenergic and CRFergic excitatory connections between the PVN and the LC give rise to a feedforward paraventricular-coerulear closed loop of neuronal activity, which is an essential etiological component of the hypertension and tachycardia generated in animals subjected to prenatal undernutrition.
3. The PVN may act as the exit point of the paraventricular-coerulear loop that activates the sympathetic system downstream, producing hypertension and tachycardia in malnourished animals. As such, it is essential that α1-adrenoceptor desensitization does not occur in the PVN of malnourished rats, allowing the PVN to function as the output locus of the paraventricular-coerulear network. More research is required to support this point.
4. Whether noradrenergic hyperactivity in prenatally and perinatally undernourished animals is the primary factor triggering neuronal hyperactivity in the PVN-LC communication or, on the contrary, a consequence of activity in such an interconnected set of neurons, is not entirely clear at present. Additionally, whether epigenetic mechanisms may underlie some of the remarkable characteristics that such an interactive neural system acquires under conditions of malnutrition remains unknown.
Indeed, both early-life stress and early-life undernutrition similarly lead to life-long alterations in the neuroendocrine stress system, partially by modifying the epigenetic regulation of gene expression [163]. Increased CRF production via epigenetic mechanisms cannot be ruled out, since prenatal restraint stress is associated with demethylation of the CRF promoter, thereby enhancing CRF transcriptional responses to stress in adolescent rats [164]. However, no epigenetic modifications underlying altered CRF expression in prenatally undernourished animals have been reported so far.
5. Other central nervous system programming factors that may underlie hypertension due to prenatal undernutrition, such as enhanced sympathetic-respiratory coupling in early life, inappropriate activation of the renin-angiotensin system, and glucocorticoid-mediated neuronal remodeling, should not be neglected. Carefully designed experimental protocols should be arranged in order to study the specific contributions of these neural/endocrine components, as well as the possibility of a relationship with the neuronal hyperactivity in the paraventricular-coerulear network.
As a final consideration, it is worth highlighting that further investigation is required to generate new data describing the mechanisms involved in each of these relevant aspects, which may shed light on the functional link between malnutrition and the pathological programming of hypertension in humans.
2022-10-13T15:42:06.752Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "db5fdc9c1ba497c8a68d6a9cb5b269ff73cbb201", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/23/19/11965/pdf?version=1665293514", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f2db7c662255cad5aa54f698f60f4dcb2f15cc7e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
267956612
pes2o/s2orc
v3-fos-license
Public health genomics research in Italy: an overview of ongoing projects

Public health genomics (PHG) aims to integrate advances in genomic sciences into healthcare for the benefit of the general population. As in many countries, there are various research initiatives in this field in Italy, but a clear picture of the national research portfolio has never been sketched. Thus, we aimed to provide an overview of current PHG research projects at the national or international level by consultation with Italian institutional and academic experts. We included 68 PHG projects: the majority were international projects in which Italian researchers participated (N = 43), mainly funded by the European Commission, while the remainder were national initiatives (N = 25), mainly funded by central government. Funding varied considerably, from € 50,000 to € 80,803,177. Three main research themes were identified: governance (N = 20); precision medicine (PM; N = 46); and precision public health (N = 2). We found that research activities are preferentially aimed at the clinical application of PM, while other efforts deal with the governance of the complex translation of genomic innovation into clinical and public health practice. To align such activities with national and international priorities, the development of an updated research agenda for PHG is needed.

Introduction

Public health genomics (PHG) is an attempt to responsibly and effectively translate the rapid increases in genome-based knowledge and technologies into population health benefits (1). Recent decreases in sequencing costs and the marked progress in data science present many opportunities for the incorporation of genomic information into new strategies for clinical and public health practice. Precision medicine (PM) focuses on identifying the most effective medical intervention for patients based on their genetic and biochemical profile, as well as specific environmental and lifestyle factors (2). Impressive advances have been made in cancer, where next-generation sequencing has enabled the prediction of cancer susceptibility, sensitivity to therapy, prognosis and residual disease (3). Moreover, the use of population-level data on genomics and other health determinants is expected to improve health outcomes at the population level, paving the way for a field called precision public health (PPH) (4). Some emerging applications of PPH are the development of more intensive screening programs for people at greater risk of cancer, and the detection and investigation of infectious disease outbreaks (5). To realize the full potential of these promising approaches, many countries are fostering research initiatives aimed at driving the development of public policies, guidelines and health programs for the implementation of evidence-based genomic applications (6)(7)(8). Italy is no exception, although research activities seem to have begun in a disorganized way. The task of assuring coordination of national efforts in PHG has recently been entrusted to the Inter-institutional Committee for PHG (IC), a national board established in 2022, which includes representatives from the Italian Ministry of Health (MoH), the Italian Institute of Health (ISS), the National Agency for Regional Healthcare Services (AGENAS), the Italian Medicine Agency (AIFA), and the Italian Regions, in addition to academic experts in the field (9).
In particular, the IC is responsible for supervising PHG activities throughout the country and evaluating their alignment with European priorities. For its first assignment, the IC aimed to investigate the current state of development of PHG research in Italy. Since no comprehensive repository for national PHG projects is available, a mapping exercise was performed. The aim of this paper is to describe the methods and the results of an overview of the current PHG research portfolio in Italy and to discuss its main policy implications.

Subject of investigation and inclusion criteria

This overview included any research project involving PHG that started or was ongoing in Italy during 2022 (January 1st-December 31st). Both national projects and international projects in which Italian researchers participated were eligible for inclusion, whereas projects sponsored only at a regional level (i.e., involving one or more Italian Regions without a national commitment) were excluded. For the purposes of this study, PHG was defined as "a multidisciplinary field concerned with the responsible and effective translation of genomic science and technologies into clinical and public health practice" (10).

Data collection

We retrieved candidate projects by email consultation with the Italian IC, which comprised 39 expert representatives from the following institutions: I. Italian MoH (N = 26, from seven different Directorates-General); II. ISS (N = 2); III. AGENAS (N = 2); IV. AIFA (N = 2); V. Italian Regions (N = 4); VI. Italian universities (N = 3, leaders in the field of PHG from three different universities). The experts were provided with the details of the overview during an online meeting and were asked to participate in the design of the project tracker tool (PTT). A draft of the PTT was e-mailed to the experts and their feedback was used to improve it. The final PTT consisted of an Excel data sheet collecting the following information for each project: title; start/end date; objectives; funder; lead institution; number of implementing partners; number of countries involved; funding; referring website; involvement of the IC (Supplementary Table S1). In May 2022, we officially started data collection by emailing the experts with the PTT and the instructions for completion. To ensure the completeness of the results, we asked the experts to continuously update the PTT with newly funded research projects as they became available over the year. Data collection was closed at the end of December 2022. Reminders were issued during IC meetings and were also sent twice by e-mail during the year.

Project selection and data synthesis

For each project selected, unclear or missing information was clarified or retrieved by exploring official websites (Supplementary Table S2). Two researchers removed duplicates and screened the projects according to the inclusion criteria. Projects that clearly did not meet the eligibility criteria were excluded.
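As a purely illustrative aside, the de-duplication and eligibility screen just described could be scripted along the following lines with pandas; the file name and column names below ("ptt_2022.xlsx", "title", "start_date", "end_date") are hypothetical placeholders and do not reflect the actual PTT layout.

    # Hypothetical sketch of the de-duplication and 2022-eligibility screen.
    import pandas as pd

    # The real PTT is an Excel sheet; names here are invented for illustration.
    ptt = pd.read_excel("ptt_2022.xlsx", parse_dates=["start_date", "end_date"])

    # Drop duplicate entries of the same project reported by several experts.
    ptt = ptt.drop_duplicates(subset="title")

    # Keep projects that started or were ongoing during 2022.
    in_2022 = ptt[(ptt["start_date"] <= "2022-12-31") &
                  (ptt["end_date"] >= "2022-01-01")]
    print(f"{len(in_2022)} candidate projects retained for full screening")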
A descriptive analysis of the projects included was performed, using frequencies, percentages and ranges. Moreover, according to their primary aim, projects were mapped to three thematic categories and six sub-categories, as follows: (I) governance, further divided into (Ia) networking and coordination for innovation, (Ib) data and infrastructure, and (Ic) adoption of health technology; (II) PM, further divided into (IIa) cancer and (IIb) non-oncological diseases; and (III) PPH, including only one sub-category, namely (IIIa) surveillance of infectious diseases. Topic attribution was cross-checked by three reviewers.

Project selection

After removal of duplicates, a total of 147 projects resulted from the consultation with IC members (Figure 1). Screening by the inclusion criteria selected 68 projects (11-70) (Supplementary Table S2). The reasons for exclusion were: projects not in progress during the year 2022 (N = 33; concluded before 2022 or yet to start); projects off-topic (N = 20; mainly basic research) or whose topic was unclear due to lack of information (N = 7); and projects sponsored only at the regional level (N = 19).

General features

The characteristics of the 68 projects included in the study are extremely variable. With regard to timing, they were launched between the years 2017 and 2022, and are expected to conclude between 2022 and 2029 (Supplementary Table S2), with a duration of 1 to 10 years (median: 3; interquartile range: 3-4.5). The number of countries involved ranges from 1 (Italy alone) to 20 (median: 4; interquartile range: 1-6), while the number of partners involved ranges from 1 (a single Italian participant) to 84 (large multi-country projects; median: 6; interquartile range: 4-15). The total amount of funding ranged from € 50,000 to € 80,803,177 (median: € 871,820; interquartile range: € 471,000-€ 4,000,000); these summary statistics can be reproduced as shown in the short sketch below.

Funders and coordinators

There are markedly more international (N = 43) than national (N = 25) projects (Table 1; Supplementary Table S2), with almost all the international ones funded by the European Commission (EC; N = 40), while the main funder of national projects is central government (N = 14). A small number of national projects are funded by bank groups (N = 2), while the remaining national and international projects are financed by non-profit organizations (N = 12). Overall, the retrieved projects are mostly coordinated by healthcare facilities (N = 26) and universities (N = 23; Table 1; Supplementary Table S2), with the remainder being managed by research organizations, either governmental (N = 11) or non-governmental (N = 8), most of the latter being non-profit (N = 7). Of the international projects, one quarter are coordinated by an Italian institution (N = 17). Finally, almost all of the retrieved projects count among their funders or executors at least one of the institutions represented in the IC (N = 63).

(I) Governance

The 20 projects in the governance category aim to ensure a coordinated approach to the long-term implementation of genomics for the personalization of healthcare (Supplementary Table S2). They can be further classified into three sub-categories (Table 1).
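Before turning to the individual sub-categories, the medians and interquartile ranges quoted under "General features" can be recomputed with Python's statistics module, as in the minimal sketch below; the duration values used here are invented placeholders, since the real per-project data live in Supplementary Table S2.

    # Illustrative recomputation of a median and interquartile range.
    import statistics

    durations_years = [1, 2, 3, 3, 3, 4, 5, 6, 10]  # hypothetical durations

    median = statistics.median(durations_years)
    q1, _, q3 = statistics.quantiles(durations_years, n=4)  # quartile cut points
    print(f"median: {median}, interquartile range: {q1}-{q3}")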
(Ia) Networking and coordination for innovation

The first sub-category, "Networking and coordination for innovation," includes nine EU-funded projects aimed at fostering collaboration across European countries and beyond, with special regard to the development of training programs for researchers and healthcare professionals, the identification of recommendations for the harmonization of research and implementation initiatives, and the engagement of relevant stakeholders (including citizens, patients, healthcare professionals, policy makers and private companies; Supplementary Table S2). In Italy, an important leading role is played by the Catholic University of the Sacred Heart of Rome, which coordinates three of these projects funded through EU Horizon programs. The first is the ExACT (European network staff eXchange for integrAting precision health in the health Care SysTems) project, which involves eight member states (MS) plus the United Kingdom (UK), Canada and the United States, working together since 2019 to train a new generation of precision health professionals through a five-year secondment plan (16). The second is PROPHET (A PeRsOnalized Prevention roadmap for the future HEalThcare), a four-year project involving 12 EU MS plus the UK, launched in 2022 to develop a strategic roadmap for the implementation of innovative, sustainable and effective personalized programs to prevent common chronic diseases (27). Finally, the four-year IC2PerMed (Integrating China in the International Consortium for Personalized Medicine) project was launched in 2020 with the specific aim of fostering EU-China cooperation in the field of personalized medicine. This project is managed by the International Consortium for Personalized Medicine, an MS-driven initiative of over 40 international ministries and funding agencies (20).
(Ib) Data and infrastructure
The second sub-category, "Data and infrastructure," includes nine projects aimed at developing infrastructure, tools and regulatory frameworks for the collection and use of genomic data (Supplementary Table S2). The key driver of this sub-category is the 2018 One Million Genomes Initiative (1 + MG), which is committed to creating a European data infrastructure for genomic and clinical data to support research, personalized healthcare and health policy formation (71). In fact, Italy is involved in both EU-funded projects launched to realize 1 + MG initiatives, coordinated by ELIXIR, a European intergovernmental organization. The first is the three-year B1MG (Beyond 1 Million Genomes), launched in 2020 to provide coordination and support to the 1 + MG initiative by defining the infrastructure, data standards and legal guidance for cross-border access to genomic data (11). The second is the 2022 GDI (Genomic Data Infrastructure) project, which will implement the recommendations of B1MG to create and deploy the technical capacity for accessing genomic data by 2027 (17). Since the end of 2021, Italy has further supported the fulfillment of these European goals through a dedicated two-year national project financed by the MoH-National Centre for Disease Prevention and Control and coordinated by Sapienza University of Rome, entitled "Italian Genomic Strategy" (30). Moreover, the ten-year project "Health Big Data," dedicated to the deployment of an IT platform for sharing genomic and clinical data between the national Scientific Institutes for Research, Hospitalization and Healthcare, has been funded by the Italian Ministry of Economy and Finance since 2019 and is coordinated by a national oncology network named Alliance Against Cancer (19).

(Ic) Health technology adoption
Finally, the third sub-category, "Health technology adoption," embraces two projects, one Italian and one European, that focus on guiding genomic-technology acquisition and use through health technology assessment (HTA) and procurement, respectively (Supplementary Table S2). The Italian project was financed at the end of 2019 by the MoH for a period of two years to design a comprehensive national path for the HTA of genetic and genomic tests, including the three phases of priority setting, assessment and appraisal (14). The European OncNGS (NGS diagnostics in 21st century oncology: the best, for all, at all times) project, launched in 2020 with the participation of eight buyers from five EU MS and coordinated by researchers in Belgium, will prepare a pre-commercial procurement procedure to provide the best next-generation sequencing diagnostic technologies for all solid-tumor and lymphoma patients by 2026 (25).

(II) Precision medicine
PM is the largest category and comprises 46 projects aimed at assessing the use of genomic information to provide a more precise approach to diagnosis, prognosis and treatment of disease (Supplementary Table S2). In particular, the majority of these projects aim to identify molecular biomarkers that predict the course of a disease and the response to treatment. While customizing care using a combination of clinical and genomic factors is not unusual among the projects in this study, very few also consider lifestyle and environmental factors. PM projects can be further classified according to the disease of interest (Table 1).
(IIa) Cancer
Over half of the PM projects focus on cancer (N = 29), especially hematological cancers and gastrointestinal cancers (Figure 2A; Supplementary Table S2). Among the international projects coordinated by Italy, one example is the three-year project IMAGene (Epigenomic and machine learning models to predict pancreatic cancer), supported by ERA PerMed, a funding scheme for personalized medicine research projects cofunded by the EC (49). IMAGene, which involves participants from six MS and is coordinated by the European Institute of Oncology in Milan, commenced in 2022 to develop, implement and test a comprehensive Cancer Risk Prediction Algorithm for the early detection of pancreatic cancer in high-risk asymptomatic subjects, based on germline mutations, DNA methylation profiling and magnetic resonance imaging. Among the national projects, one that has attracted many participants is the 2019 GerSom (Germline Somatic Panel) project, which is funded by the MoH for a period of three years under the supervision of Alliance Against Cancer (47). It aims to improve the management of patients with ovarian, breast and colorectal cancer through the validation of a genomic panel for somatic and germline variants, which allows the diagnosis of both genetic risk and sensitivity to new drugs.

(IIb) Non-oncological diseases
Of the projects on non-oncological diseases (N = 17), the largest group addresses neurological diseases (N = 5), especially multiple sclerosis (Figure 2B; Supplementary Table S2). Among the international projects coordinated by Italy, there is another three-year ERA PerMed project, named FindingMS (An integrated approach to predict disease activity in the early phases of multiple sclerosis), launched in 2019 with the participation of three MS and coordinated by the San Raffaele Research Hospital of Milan (45). It aims to design a predictive algorithm of disease activity for multiple sclerosis based on genetic and environmental factors, which will enable personalized treatment from the early phases of the disease. At the national level, a notable example is the 2022 NEUDIG (Unveiling the hidden side of NEUrodevelopmental DIsorder Genetics) project, funded by the Ministry of University and Research for a period of three years and coordinated by the University of Genova (38). Its purpose is to strengthen the molecular diagnosis of neurodevelopmental disorders by non-conventional genomic, transcriptomic, and functional analyses.

(III) Precision public health
Only two projects mapped to the PPH category, i.e., the use of genomic technologies to improve public health policy and practice (Supplementary Table S2). They are two national projects, funded by the Italian MoH-National Centre for Disease Prevention and Control in 2020 and 2022 for a period of two years each. Both focus on the same thematic sub-category, i.e., the molecular surveillance of infectious viral diseases (Table 1). The first project, "Molecular characterization of SARS-CoV-2 in Italy," coordinated by the ISS, is aimed at monitoring the SARS-CoV-2 virus in the country, by both time and geographical area, through genomic analysis (70). The other, SURVEID (SURVeillance of Emerging Infectious Diseases), coordinated by the Experimental Zooprophylactic Institute of Lombardia and Emilia-Romagna, plans to test a metagenomics next-generation sequencing diagnostic platform for the surveillance of emerging viral disease threats (30).
Discussion
This paper highlights the main features of current PHG research in Italy through the collection and analysis of national and international research projects on the topic. Overall, PHG research in Italy seems to be mainly funded by the EC and is managed by healthcare facilities and universities. Notably, three main research themes for the integration of genomics into healthcare were identified: governance, PM and PPH. The predominant theme appears to be PM, mainly in its narrowest sense, as most PM projects aim to explore the use of molecular biomarkers to guide clinical decision making, with a particular focus on therapeutic choices for cancer patients (72). By contrast, only a few PM projects embrace a broader approach, combining the assessment of genomic and environmental or lifestyle determinants to guide the management of patients, more often in the field of non-oncological diseases (73). While PM, in its various facets, dominates the current research portfolio, PPH is clearly the least active research area. According to our results, in Italy the integration of genomics into public health strategies has probably been boosted by the Covid-19 pandemic, as the two PPH projects retrieved relate to the genomic surveillance of viral infectious diseases, including Covid-19. In the meantime, considerable efforts seem to be underway to guarantee the sustainability of the long-term implementation of PM through governance, the third key research theme emerging from the overview. In particular, major investments in governance are being made to foster partnership and collaboration across European countries to tackle two main implementation challenges: (i) ensuring the alignment of research activities and workforce education; and (ii) enabling the sharing and use of large-scale genomic data.

To the best of our knowledge, this overview is the first attempt to portray the Italian PHG research portfolio over a specific period of time, with the ultimate aim of informing relevant national decision makers. Nevertheless, it has to be considered that, although most of the projects identified will take months and even years to conclude, the growing number of PHG research projects funded each year may rapidly change the scenario we outline here. Thus, it would be useful to regularly update the database to monitor any potential change in research investment. To this end, designing a national or international repository for PHG research projects would surely be appropriate, as this would reduce duplicated effort, improve transparency, highlight opportunities for funding and monitor progress in this quickly evolving area.
Moreover, it would be interesting to understand whether the currently funded research activities align with national and international reference standards in the field. Unfortunately, we are not aware of any updated benchmark recommendations for PHG research, either nationally or at the European level. Although the "National plan for the innovation of the Health System based on omics sciences," published in 2017, identified seven research opportunities for the integration of omic sciences into the National Health Service, these are likely to be outdated. The opportunities identified in 2017 were: (I) Big data and computational medicine; (II) Health literacy of citizens and healthcare professionals; (III) Drug repositioning and pharmacogenomics; (IV) Primary prevention of chronic disease; (V) Secondary prevention of breast cancer; (VI) Early detection of cancer; and (VII) Undiagnosed rare diseases (74). More recently, given the need to update the plan, the National Health Council issued its own recommendations for new priorities to be addressed (75). With regard to genomics research, the main indication is to further investigate the complex interactions between genetic and non-genetic factors in the pathogenesis of disease and to include non-genetic factors in risk-assessment algorithms. This resolution is consistent with our results, as the role of environmental and lifestyle factors appears to be neglected in ongoing Italian research.

At the European level, we found no official documents on shared research priorities for PHG. Some very general directions are provided by a policy briefing recently published in the context of the 1 + MG project (76). The first aim of the document is to set out policy recommendations for the implementation of genomics in healthcare, some of which also concern the field of research. It is recommended that close cooperation between clinical, research and industrial partners be established to ensure that the latest advances in science and technology are captured as they arise, and that research and clinical outcomes are coordinated. A further recommendation is to implement a data management plan to facilitate the sharing of genomic and health information for clinical and research purposes at regional, national and international levels. According to our results, these two proposals are already being pursued in the research projects currently ongoing in Italy.
The main limitation of our work is in the comprehensiveness of the overview, which is affected by two conditions. First, we included only projects reported by members of the IC. Indeed, nearly all projects benefit from the involvement of at least one of the consulted institutions, as a funder or executor. However, since the IC includes the main national governmental and academic institutions involved in PHG, it is expected to account for most of the ongoing activities. Nevertheless, it is worth mentioning that a government funding organization not included in the IC emerged from the overview, namely the Ministry of Research (MoR). Indeed, the commitment of the MoR to PM is being strengthened by its recent decision to assign part of the research funds provided by the National Recovery and Resilience Plan to a nationwide research partnership, called HEAL ITALIA, which aims to create a Health Extended ALliance for Innovative Therapies, Advanced Lab-research, and Integrated Approaches of Precision Medicine (77). Thus, although it is not purely a healthcare institution, the future involvement of the MoR in the IC could be considered. Second, as stated in the Methods, we included only projects financed on either an international or a national scale. In fact, the Italian national healthcare system is decentralized to 21 regional healthcare systems, with different degrees of autonomy, and just four Regions participate in the IC on behalf of all the others, making the search for regional projects flawed by a potential selection bias. For this reason, we tried to avoid the tangle of regionally funded projects by focusing on centrally funded efforts in PHG research.

As for the accuracy of the evidence provided, we confirmed all data by an internet search of official websites. Nevertheless, we cannot exclude the possibility that in some cases a lack of clear information on the projects' aims, together with their cross-cutting nature, may have affected their inclusion or assignment to particular thematic categories, even though both the project selection and data collection were performed by at least two authors. Lastly, it should be noted that the present overview made no attempt to assess the quality of the projects retrieved or whether they met their objectives, as this was not our goal.

In conclusion, we have provided an overview of the national research portfolio in PHG in Italy, with the primary aim of informing policy makers and fostering the coordination of national efforts for the implementation of evidence-based genomic applications. We found that research investments, mainly supported by the EC, are preferentially aimed at the clinical application of PM, but significant endeavors are also underway on the governance of the complex translation of genomic innovation into clinical and public health practice. Nevertheless, this is only the first step on a challenging path toward a coordinated and sustainable research agenda. First of all, this overview should be regularly updated to keep up with the systematic launch of new research projects. Then, updated research plans should be developed to align national activities with national and international priorities and avoid unaddressed needs and waste of resources.

FIGURE 2. Number of precision medicine projects by disease. (A) Cancer. (B) Non-oncological diseases.

TABLE 1. Description of the projects retrieved (N = 68).
Thyroid Hormone Promotes Postnatal Rat Pancreatic β-Cell Development and Glucose-Responsive Insulin Secretion Through MAFA

Neonatal β cells do not secrete glucose-responsive insulin and are considered immature. We previously showed the transcription factor MAFA is key for the functional maturation of β cells, but the physiological regulators of this process are unknown. Here we show that postnatal rat β cells express thyroid hormone (TH) receptor isoforms and deiodinases in an age-dependent pattern as glucose responsiveness develops. In vivo neonatal triiodothyronine supplementation and TH inhibition, respectively, accelerated and delayed metabolic development. In vitro exposure of immature islets to triiodothyronine enhanced the expression of Mafa, the secretion of glucose-responsive insulin, and the proportion of responsive cells, all of which are effects that were abolished in the presence of dominant-negative Mafa. Using chromatin immunoprecipitation and electrophoretic mobility shift assay, we show that TH has a direct receptor-ligand interaction with the Mafa promoter and, using a luciferase reporter, that this interaction was functional. Thus, TH can be considered a physiological regulator of functional maturation of β cells via its induction of Mafa. Diabetes 62:1569-1580, 2013

β-Cell replacement therapy is a major goal of diabetes research. Insulin-positive cells have been successfully derived from stem cells (1)(2)(3)(4), but these cells have not been responsive to glucose in vitro and must be considered functionally immature. Fetal and neonatal rodent β cells also lack glucose responsiveness (5) and therefore provide a good model to study the acquisition of glucose responsiveness and β-cell maturation. Neonatal β cells are characterized by low expression of many key genes of the specialized β-cell phenotype (6). We previously showed that in neonatal islets, Mafa is expressed at ~10% of the adult level and that its adenoviral-mediated reconstitution to adult levels in P2 islets induced secretion of glucose-responsive insulin (7). But what regulates Mafa in vivo? To identify physiological regulators, we considered the changes that normally occur in the physiological milieu during the neonatal period (8). During the second postnatal week, serum thyroxine (T4) (9) and corticosterone (10) surge, and prolactin levels rise by postnatal day 20 (11).
We hypothesized that thyroid hormone (TH) could physiologically regulate Mafa, enhancing its expression and driving the maturation of the insulin secretory response to glucose in neonatal β cells. TH serum levels start increasing in parallel with the rise in Mafa expression, and TH is important in the postnatal development of the central nervous system and the digestive tract (8). Inhibition of TH synthesis prevents or delays maturation of these systems, and TH administration results in precocious development. Triiodothyronine (T3) has been shown to enhance the differentiation of a human pancreatic duct cell line toward a β-cell phenotype (12). Moreover, thyrotoxicosis leads to hyperinsulinemia with increased hepatic glucose production and insulin resistance (13); hypothyroidism reduces hepatic glucose production and insulin resistance (14).

The effects of TH are mediated by TH receptors (THRs), which are members of the nuclear receptor superfamily. Three major isoforms (THRA1, THRB1, and THRB2) exhibit similar ligand-dependent regulation of gene activity, whereas a fourth isoform, THRA2, lacks the ligand-binding and transactivation domains (15). Different isoforms have been identified in whole pancreas during development (16), and THRA1 has been identified in adult islet cells (17); however, little is known about their expression in β cells during the postnatal period. The active TH bound to receptors is 3,5,3′-T3, and available T3 is derived from the thyroid gland or from conversion from thyroxine (T4) by type 1 or type 2 iodothyronine deiodinases (D1 and D2). A third deiodinase, type 3 (or D3), inactivates T3. D3-null animals, which had higher levels of TH during development, were glucose intolerant with impaired glucose-stimulated insulin secretion, suggesting that early exposure to high amounts of T3 might be deleterious to developing β cells (18). However, the role of TH in β-cell development under physiological conditions, as well as the mechanisms involved, is still unclear.

Herein we show that postnatal rat β cells express THR isoforms and deiodinases in an age-dependent pattern and therefore have the ability to respond to the rapidly rising T4 concentration that peaks at about postnatal day 15. In vivo neonatal T3 supplementation and TH inhibition, respectively, accelerated and delayed metabolic development. In vitro exposure of immature islets to T3 enhanced Mafa expression and increased glucose responsiveness, effects that were abolished in the presence of dominant-negative (DN) Mafa. Using chromatin immunoprecipitation (ChIP) and electrophoretic mobility shift assay (EMSA), we show that TH has a direct receptor-ligand interaction with the Mafa promoter; using a luciferase reporter, we then showed that this interaction is functional. Thus, TH is a physiological stimulus for the postnatal maturation of functional β cells.

RESEARCH DESIGN AND METHODS

Animals. Female Sprague-Dawley rats with litters of various ages (P0 is day of birth) were purchased from Taconic Farms (Germantown, NY) and kept under conventional conditions with free access to water and food. Animals were killed under anesthesia at postnatal days 2-28 or as adults; blood was collected by cardiac puncture, and the pancreas was excised for histology or islet isolation. Islets were isolated (19), cultured overnight in RPMI-1640 medium, and handpicked to ensure purity.
For each sample from postnatal days 2 or 7, islets from 10 pups were pooled; for samples from postnatal days 9 to 28, islets from 2 or 3 pups were pooled; and for the adult sample, islets from one animal were used. Three to six samples per age group were included. For immunostaining, excised pancreas was either fixed for 2 h in 4% paraformaldehyde for paraffin embedding or embedded in optimal cutting temperature medium (Tissue Tek) and frozen in chilled isopentane. The Joslin Institutional Animal Care and Use Committee approved all animal procedures.

Plasma insulin and T4 levels. Plasma insulin and T4 levels were measured using enzyme-linked immunoassay (ALPCO, Windham, NH) and the COAT-A-COUNT total T4 kit (DPC, Los Angeles, CA), as previously described (20).

Neonatal models of T3 supplementation and inhibition of TH synthesis. Timed-pregnant Sprague-Dawley rats were randomized into one of three groups: 1) control pups received subcutaneous injections of 0.9% NaCl daily for 7 days, starting at postnatal day 1; 2) for inhibition of TH synthesis, rats were given tap water ad libitum with 20 mg methimazole (MMI)/100 mL water (21) from birth to postnatal day 15 or 21, when the pups were killed; and 3) for T3 supplementation, pups received subcutaneous injections of T3 (0.05 µg/g body weight) daily for 7 days starting on postnatal day 1 (22). Body weight and fed glucose levels were measured weekly. For intraperitoneal glucose tolerance tests, glucose solution was injected intraperitoneally (2 g/kg) after fasting for 4 h, and blood glucose levels were determined at 0, 30, and 120 min. Islets were isolated from T3-treated and control rats at postnatal day 7 and reported as in vivo. The effectiveness of treatments was evaluated by growth parameters, T4 levels, and deiodinase activity when the animal was killed (23) (Table 1). Hepatic D1 activity was measured in 5- to 20-mg liver homogenate with 10 mmol/L dithiothreitol and 1 mmol/L 125I-rT3 for 60 min. Background levels were determined by the addition of 1 mmol/L propylthiouracil (20).

Islet culture: hormone treatment and adenoviral infection. At postnatal day 7 or 9, islets were cultured for 4 days in RPMI-1640 medium (20 mmol/L glucose and 10% charcoal-stripped [CS] FBS) with T3 (150 pmol/L, equivalent to 7.5 pmol/L free T3 in 10% CS-FBS [24]) for RNA or insulin secretion assays. As an alternative, islets at postnatal day 9 were cultured for 2 days in RPMI-1640 medium in 10% FBS with dexamethasone (50 pmol/L) for RNA. To test the specificity of MAFA as the mechanism of the T3 effect, islets at postnatal day 7 were totally dispersed and cultured in RPMI-1640 medium (20 mmol/L glucose, 10% CS-FBS) with or without T3 (7.5 pmol/L free T3) and with or without adeno-CMV-DN Mafa-Ires-Gfp (DN-Mafa) (25,26) (MOI 2) or control adeno-CMV-Ires-Gfp (Ad-Gfp). Reaggregated islets were cultured for 4 days for secretion or RNA.

Insulin secretion in vitro. Insulin secretion was measured by static sequential incubation in 2.6 mmol/L and 16.8 mmol/L glucose in Krebs-Ringer bicarbonate buffer (16 mmol/L HEPES and 0.1% BSA; pH 7.4), as previously described (27). Supernatants and cells were frozen until assayed.
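A common way to summarize the sequential static incubations just described is a fold-stimulation index (secretion at 16.8 mmol/L glucose relative to 2.6 mmol/L). The sketch below uses hypothetical numbers and is not taken from the paper's analysis.

# Fold stimulation from sequential static incubations; values are hypothetical.
basal = 1.2        # insulin secreted at 2.6 mmol/L glucose (arbitrary units)
stimulated = 6.5   # insulin secreted at 16.8 mmol/L glucose (same units)

print(f"stimulation index: {stimulated / basal:.1f}-fold")  # 5.4-fold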
As an alternative, we measured insulin secretion of single β cells using the reverse hemolytic plaque assay (7,28), in which secreted insulin is revealed by the presence of hemolytic plaques around secreting cells. The percentage of insulin-secreting cells forming plaques and the area of the plaques were measured and multiplied to calculate the secretion index, a measure of the overall secretory activity of β cells.

Quantitative real-time PCR. Total RNA was isolated with a PicoRNA extraction kit (Arcturus) and reverse transcribed (SuperScript reverse transcriptase; Invitrogen, Grand Island, NY). Real-time quantitative PCR used SYBR green detection and specific primers (Supplementary Table 1). Samples were normalized to a control gene (S25), and the comparative threshold cycle method was used to calculate levels of gene expression.
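As an illustration of the comparative threshold cycle (2^-ΔΔCt) calculation just described, a minimal sketch follows; the Ct values are hypothetical, with S25 as the normalizing control gene.

# Minimal sketch of the comparative threshold cycle (2^-ΔΔCt) method; the Ct
# values below are hypothetical, with S25 as the normalizing control gene.
def fold_change(ct_target_sample, ct_s25_sample, ct_target_ref, ct_s25_ref):
    """Relative expression of a target gene versus a reference (e.g., adult) sample."""
    d_ct_sample = ct_target_sample - ct_s25_sample  # normalize to S25 in the test sample
    d_ct_ref = ct_target_ref - ct_s25_ref           # normalize to S25 in the reference
    dd_ct = d_ct_sample - d_ct_ref
    return 2 ** (-dd_ct)

# Example: a target gene in a P7 islet pool versus adult islets (made-up Ct values)
print(fold_change(ct_target_sample=28.5, ct_s25_sample=18.0,
                  ct_target_ref=25.0, ct_s25_ref=18.2))  # ~0.08-fold of adult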
Immunostaining. Paraffin sections were incubated at 4°C overnight with the primary antibodies listed in the Supplementary Material. For each antigen, all images were taken with the same settings in confocal mode using a Zeiss LSM 410 or LSM 710 microscope and were handled similarly in Adobe Photoshop; at least two animals per age were examined.

Morphometric evaluation. Images covering entire whole pancreatic sections were collected using the LSM 710 tile-scan system. For MAFA nuclear staining, insulin-positive cells were scored as having high, low, or undetected nuclear MAFA. Data from four or five individual animals were averaged; between 36 and 58 islets were sampled for each age. For proliferation and apoptosis, insulin-positive and Ki-67-positive or TUNEL-positive cells were counted in 160-327 islets and 2,764-6,189 cells per group; data from individual animals (n = 3-5) were averaged. β-Cell mass was calculated by multiplying the relative area of β cells by the pancreatic weight (29). The densitometric mean of the intensity of MAFA staining was calculated using the AxioVision Measure feature and was considered a reflection of protein levels (n = 6-7 animals per group). Cross-sectional β-cell area was used as an indicator of cell size and was determined by dividing a given insulin-positive area by the number of nuclei it contained; 7,000-8,794 nuclei were counted among four animals in each group.

Cell line culture. INS-1 cells maintained in RPMI-1640 medium containing 11 mmol/L glucose + 10% FCS, 10 mmol/L HEPES, 2 mmol/L L-glutamine, penicillin/streptomycin, 1 mmol/L sodium pyruvate, and 20 mmol/L β-mercaptoethanol were switched to RPMI-1640 medium containing 1.6 mmol/L glucose + 10% CS-FBS 24 h before treatment and then were kept for 14 h in T3 (150 pmol/L) and actinomycin D (5 µg/mL). MIN-6 cells, maintained in high-glucose Dulbecco's modified Eagle's medium (DMEM) supplemented with 15% FBS, were switched to high-glucose DMEM with 15% CS-FBS and T3 (10 nmol/L) for 24 h; they then were harvested for ChIP or to assess the effect of MMI (1.5 mmol/L, 0.15 mmol/L, and 0.015 mmol/L) on gene transcription. For the luciferase reporter construct, a 10-kb NheI-FseI fragment from the BAC mouse genomic library clone 128L24, spanning the mouse Mafa promoter to downstream of the transcription start site, was cloned into the HindIII-BglII-cut -238 wild-type luciferase plasmid (30). Renilla luciferase in a SacI backbone was used as a transfection control. For transient transfections, MIN-6 cells were transfected using Lipofectamine and 7 µg of final DNA and grown in high-glucose DMEM with or without 100 nmol/L T3 for 24 h.

ChIP assay. Rabbit anti-THR (TRa/b, Santa Cruz FL-4083) was used with the Imprint ChIP kit (Sigma-Aldrich, St. Louis, MO; CHP1) following the manufacturer's instructions. DNA from 250,000 cells was used for each condition in four independent experiments. Samples were analyzed by quantitative PCR using specific primers for three putative thyroid response elements (TREs; S1, S2, and S3) (Supplementary Table 1) and were run on gels.

Gel-mobility shift assay (EMSA). Nuclear extracts were obtained from HEK1 cells transfected with Thrb, or from untransfected cells as controls, using the NucBuster Protein Extraction Kit (Novagen, EMD Biosciences, San Diego, CA) following the manufacturer's instructions. Twenty micrograms of nuclear extract were incubated with 80 pmol/L of double-stranded oligonucleotides, reproducing the potential TRE S3 in the rat Mafa gene or the reported TRE of D1 (31), at room temperature for 30 min. The binding reaction buffer was 75 mmol/L Tris (pH 7.8), 264 mmol/L potassium chloride, 1.5 mmol/L EDTA, 30 mmol/L β-mercaptoethanol, 30% glycerol, and 1.2 mg/mL BSA. In some of the binding assays, anti-THRB antiserum (Affinity Bioreagents, Golden, CO) or a random antibody was added 1 h before addition of the DNA probes. After the binding reactions, samples were analyzed by separation on a 10% polyacrylamide gel in 1× Tris-acetate-EDTA buffer followed by 40 min of staining with SYBR green EMSA nucleic acid gel stain (Molecular Probes).

Data analysis. Data are shown as mean ± SEM. For statistical analysis, unpaired Student t tests were used to compare two groups, and one-way ANOVA followed by a Bonferroni post hoc test was used to compare more than two groups. A P value <0.05 was considered statistically significant.

RESULTS

Postnatal glucose and insulin levels change concurrently with expression of TH, deiodinases, and THR. Blood glucose levels at postnatal day 2 were significantly lower than levels in adults (Fig. 1A), but they increased, peaking at postnatal day 21 at a level 20% higher than that of adults. Perhaps as a result of rising plasma glucose levels and as part of the functional maturation process of β cells, plasma insulin levels peaked at postnatal day 11, with levels threefold higher than in newborns or adults (Fig. 1B). During the first postnatal month (Fig. 1C), the rate of glucose disposal, evaluated by the area under the curve (AUC) of the intraperitoneal glucose tolerance test, progressively improved; disposal at postnatal days 7 and 11 was slower (higher AUC values) than at later ages. Adult values were not yet achieved by postnatal day 21.

For TH to physiologically regulate the functional development of β cells, its serum level and the tissue regulation of biologically active T3 must be sufficient, and its receptors must be expressed by β cells. Serum T4, the principal circulating TH, steadily increased over the first 2 weeks, peaking at about twice adult levels at postnatal day 15 before falling to adult levels by postnatal day 21 (Fig. 1D). Because tissue concentrations of T3 are determined by deiodinases as well as by serum T3, we measured the islet expression of deiodinase Dio1, Dio2, and Dio3 transcripts during postnatal development (Fig. 1E). Dio1 mRNA in islets increased throughout the postnatal period; its positive regulation by TH provides an increased paracrine supply of T3 as the islet develops. In contrast, expression of both Dio2 and Dio3 fell during the postnatal period. D1 and D3 proteins were examined by immunostaining (Supplementary Fig. 1) in pancreas from animals at postnatal day 7 and adult animals. The small number of islets obtained from rats at postnatal day 7 precludes protein quantification using Western blots.
However, careful titration of antibody, parallel staining, and confocal imaging allowed assessment of protein levels by differences in intensity. Islet D1 protein was much lower at postnatal day 7 than in adults (Supplementary Fig. 1A), although its mRNA level did not differ. This discrepancy might reflect developmental differences in the synthesis of selenoproteins that occur downstream of transcription (32). In contrast, D3 (Supplementary Fig. 1B) had very strong cytoplasmic localization in islets at postnatal day 7 and became almost undetectable in adult islets.

As serum T4 increased, islet expression of thyroid receptors increased at both mRNA and protein levels. Thra mRNA increased sharply between postnatal days 7 and 9; levels were sixfold higher in islets at postnatal day 9 than in adults (Fig. 1F) and decreased to adult levels by postnatal day 15. By immunostaining (Fig. 2A), nuclear THRA protein (both the T3-binding THRA1 and the non-T3-binding THRA2 isoforms are recognized by the antibody) increased from postnatal day 7 to day 10 but became much lower by adulthood (Fig. 2C). Similarly, Thrb mRNA increased between postnatal days 7 and 9 and decreased to adult levels by postnatal day 15 (Fig. 1F). THRB protein also increased by postnatal day 7 but was mainly cytoplasmic; nuclear THRB localization was seen only by postnatal day 15 and increased in the adult (Fig. 2B and C). Comparison of the mRNA expression of Thra and Thrb in islets (Supplementary Fig. 2) shows a change in the predominant receptor isoform through development: Thra predominates at early ages, Thra and Thrb are equal from postnatal day 9 to 15, after which Thrb becomes the predominant isoform in islets.

T3 supplementation until postnatal day 7 accelerated metabolic development. To analyze the direct effects of TH, newborn rats were injected with T3 from postnatal days 1 to 7. As expected, this supplementation increased body and pancreatic weights, reduced T4 levels (because of suppression of thyrotropin by T3), and increased D1 activity in the liver (Table 1). The pancreas of animals treated with T3 had greater density of both acinar and islet cells (Fig. 3A), greater β-cell proliferation (20% vs. 8% Ki-67+ insulin+ cells in untreated animals) (Fig. 3B and C), and no change in the frequency of apoptotic β cells (Fig. 3D). Their β-cell mass was unchanged (Fig. 3E), but their β cells were smaller (Fig. 3F), with no significant change in the number of cells (Fig. 3G). Fasting glucose levels were lower (Fig. 3H) and fed plasma insulin levels elevated (Fig. 3I), but responses to intraperitoneal glucose tolerance tests were not different (data not shown). These results show that T3 supplementation enhanced the functional development of β cells.

Inhibition of TH synthesis delayed pancreatic development. Animals treated with MMI from birth until postnatal day 15 or 21 were confirmed as having hypothyroidism because of delayed growth with lower body and pancreatic weights, decreased circulating T4 (due to inhibition of TH synthesis by the thyroid gland), and decreased activity of D1 in the liver (Table 1). At postnatal day 15 there was no change in pancreatic cell density (Supplementary Fig. 3A), β-cell proliferation (Supplementary Fig. 3C), or β-cell mass (Supplementary Fig. 3B). However, animals treated with MMI at postnatal days 15 and 21 had lower fasting blood glucose levels than their untreated age-matched controls (Supplementary Figs. 3D and 4A), similar to those of younger animals (Fig. 1A).
Compared with untreated controls, plasma insulin levels of animals treated with MMI were lower at postnatal day 15 (Supplementary Fig. 3E) but were fivefold higher at postnatal day 21 (Supplementary Fig. 4B); values were similar to those at postnatal day 11, when plasma insulin levels peak (Fig. 1B). The gradual postnatal development of glucose clearance was delayed by inhibiting T4 synthesis (Supplementary Fig. 3G and Fig. 1C). Animals treated with MMI at postnatal day 21 had glucose clearance similar to untreated animals at postnatal day 7 (Fig. 1C). Animals treated with MMI at postnatal day 15 were not as affected, possibly because of shorter treatment and differences in T3 sensitivity of other glucose-disposing tissues. Overall, the lack of TH during the postnatal period delayed the development of efficient glucose disposal.

FIG. 1. Postnatal metabolic development (values shown as means ± SEM). A: fed blood glucose (n = 5-27 animals per age; *P < 0.0001). B: plasma insulin (n = 4 animals per age; *P < 0.04). C and D: AUC for intraperitoneal glucose tolerance test and serum T4 at different ages (n = 5-27 animals per age; *P < 0.0001 with respect to the previous age). E and F: deiodinase (Dio1, type 1; Dio2, type 2; Dio3, type 3) and thyroid receptor isoform mRNA over the same time course by quantitative PCR (same samples for both), expressed as fold change with respect to adult levels (10 weeks old) using S25 as the internal control gene (n = 4-6 samples per age, each pooled from 3-10 animals; *P ≤ 0.0001).

Local regulation of TH action in postnatal rat islets by T3 or MMI treatment. Systemic changes in TH status can be modulated locally by deiodinase-mediated changes in T3 concentrations in target tissues. Changing levels and isoforms of thyroid receptors also potentially regulate local TH effects. Islets from animals treated with T3 at postnatal day 7 had increased nuclear THRA (Fig. 4A) and cytoplasmic THRB protein (Fig. 4C), suggesting more functional receptors. In animals treated with MMI at postnatal day 21 (Fig. 4B and D), both THRA and THRB had the nonfunctional cytoplasmic localization typical of younger ages. Deiodinase changes also reflected the hormonal status, with increased Dio1 and Dio3 mRNAs in islets after T3 treatment (Fig. 4E) and reduced Dio1 and Dio3 and elevated Dio2 mRNA levels (Supplementary Fig. 3F) after MMI treatment until postnatal day 21. This confirmation that normal neonatal changes in deiodinases and thyroid receptors were replicated by external manipulation of TH status suggests that the endogenous events are causally related to the TH status.

In vivo T3 modulation affected Mafa gene expression and its protein nuclear translocation. The islet transcriptional profile changed with in vivo T3 treatment (Fig. 4E and F): Mafa, Mafb, Pgc1a (peroxisome proliferator-activated receptor γ coactivator-1α), and Thrb mRNA increased; Pdx1 and Neurod1 mRNA were unchanged; and Rest (RE1-silencing transcription factor) mRNA decreased. In contrast, Mafa decreased at postnatal day 15 with MMI treatment (Supplementary Fig. 5D and Supplementary Table 3); partial T3 supplementation reversed this decrease. Mafb, preproinsulin, Ins2, Glp1r, and Thra mRNAs increased and Rest mRNA decreased in P15 islets (Supplementary Table 3). Rest, Thra, and Thrb mRNAs (Supplementary Table 3). Insulin staining was more intense in the T3-treated group and less intense in MMI-treated animals (Fig. 5A). T3 supplementation increased MAFA protein both in nuclear location (Fig. 5A) and amount (Fig. 5B).
In pups treated with T3 at postnatal day 7, 93% of β cells had nuclear MAFA compared with only 62% in untreated pups (Fig. 5A and C); 51% of T3-treated pups had high nuclear MAFA staining compared with 10% of untreated animals. The observed effect of T3 on MAFA translocation could be an indirect effect mediated by altering the redox state of the cell (33) through enhancement of mitochondrial function following increased expression of Pgc1a (Fig. 4C). It is surprising that even with the increased nuclear expression of MAFA, glucose-stimulated insulin secretion was not increased in islets from T3-treated animals (Fig. 5D). In contrast, hypothyroidism at postnatal day 15 decreased Mafa mRNA levels (Supplementary Fig. 5D) and protein (Supplementary Fig. 5A and B) without changes in location (Supplementary Fig. 5C). MMI treatment also decreased β-cell function as estimated by the homeostasis model assessment-B index (Supplementary Fig. 5E) (18). A direct effect of MMI on transcription was ruled out because no significant changes were found in the above genes after culturing MIN-6 cells in the presence of MMI (Supplementary Fig. 5F).

In vitro T3 specifically increased Mafa mRNA and improved glucose-stimulated insulin secretion. Because the in vivo data may be confounded by the effects on insulin sensitivity of thyrotoxicosis or hypothyroidism (14), we developed an in vitro system to directly test whether increasing Mafa was the mechanism mediating the T3 maturation effect. DN MAFA lacks an N-terminal transactivation domain and can form heterodimers with endogenous MAFA, impairing its ability to activate target genes (25). In T3-treated, Ad-Gfp-transduced islets, Mafa mRNA increased significantly (Fig. 6A), confirming the in vivo T3 results (Fig. 4F); glucokinase mRNA also increased. It is important to note that these increases were absent in T3-treated islets infected with Ad-DN-Mafa, indicating a specific role of MAFA in T3-mediated effects. Interestingly, the T3-induced Mafa increase was inhibited in the presence of DN-Mafa, suggesting a positive feedback mechanism of MAFA upon its own expression.

In vitro T3 treatment induced glucose-stimulated insulin secretion. In static incubations at postnatal day 7, islets, whether untreated or Ad-Gfp infected, had little response to 16.8 mmol/L glucose (Fig. 6B), whereas T3-treated islets, both uninfected and Ad-Gfp infected, had insulin secretion increased 5.5-fold (Fig. 6B). This effect also was abolished in the presence of Ad-DN-Mafa. Using the reverse hemolytic plaque assay to evaluate insulin secretion from individual β cells (Fig. 6C), the effect of T3 on glucose responsiveness was shown to result from an increased proportion of insulin-secreting cells (Fig. 6D).

FIG. 4. T3 treatment results in differential changes of thyroid receptor proteins and islet gene expression. A-D: islets from T3-treated and untreated animals at postnatal day (P) 7 immunostained for THRA (A) and THRB (C), or from MMI-treated rats at P21 stained for THRA (B) and THRB (D); representative confocal images taken at the same settings for each protein, so differences in intensity reflect differences in protein (at least three animals per group). E and F: changes in Dio1, Dio2, and Dio3 (n = 3) and key islet gene (n = 18) mRNA in islets isolated from T3-treated and untreated control pups by quantitative PCR at P7. Values shown as mean ± SEM; *P < 0.05 with respect to control animals at P7.

FIG. 5. In vivo T3 treatment increased Mafa expression and enhanced its nuclear localization but did not increase glucose-stimulated insulin secretion. A: insulin (green) and MAFA (red) immunostaining at P7. B: MAFA staining intensity quantified as the densitometric mean (at least three animals per group). C: nuclear localization of MAFA in islets of T3-treated versus untreated animals at P7 (600-2,200 insulin+ cells from four or five animals per group). D: glucose-stimulated insulin secretion in static incubations of islets freshly isolated from in vivo T3-treated or control animals at P7 (n = 5-6 experiments). Values shown as mean ± SEM; *P < 0.02 with respect to controls.

Glucocorticoids inhibit the effects of T3 on glucose-stimulated insulin secretion. Paradoxically, glucose-stimulated insulin secretion was not increased in islets from animals treated with T3 in vivo (Fig. 5D), as was expected from our in vitro studies (7) and those described earlier. However, T3 affects the maturation of other tissues, including the adrenal gland (34), so the blunted in vivo glucose responsiveness may have resulted from increased glucocorticoids. In untreated animals, circulating corticosterone was quite low after birth, increased only modestly by postnatal day 15, and then surged to adult levels (Supplementary Fig. 6A). Glucocorticoid receptor mRNA in islets remained unchanged from embryonic day 20.5 to postnatal day 28 (Supplementary Fig. 6B). However, with T3 supplementation, corticosterone levels at postnatal day 7 were twice the normal values and were comparable with those at postnatal day 11 (Supplementary Fig. 6C). In contrast, after MMI treatment, corticosterone levels were reduced at postnatal day 21 (Supplementary Fig. 6D). These data indicate a rapid T3-dependent change in adrenal function that could affect in vivo islet function over the neonatal period.

To assess whether increased glucocorticoids explained the difference in functional maturation between in vitro and in vivo T3 treatments, islets at postnatal day 9 were cultured for 48 h with T3, dexamethasone, or both. In islets cultured with T3 alone, Mafa mRNA was increased significantly (Supplementary Fig. 6E), consistent with islets from animals treated with T3 at postnatal day 7 (Figs. 4F and 6A). However, in islets cultured in the presence of dexamethasone, with or without T3, Mafa levels remained unchanged and Pdx1 mRNA was significantly suppressed compared with untreated controls (Supplementary Fig. 6E). Insulin secretion was blunted in similarly cultured islets at postnatal day 9 (Supplementary Fig. 6F). These results suggest that islets from animals treated with T3 in vivo lacked enhanced glucose-stimulated insulin secretion due to the counter effects of increased glucocorticoids.

T3 directly enhanced Mafa transcription. To analyze the mechanism by which T3 regulates Mafa expression, we used β-cell lines (INS-1 and MIN-6), both of which express THRA and THRB (data not shown). T3 increased Mafa mRNA transcriptionally rather than by enhancing its stability, as shown by a 90% inhibition of Mafa mRNA in the presence of the transcription inhibitor actinomycin D (Fig. 7A). For T3 to have a direct effect on Mafa transcription, THR must bind to TREs.
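Before the motif search described next, a simplified sketch of what a TRE scan can look like is given below. It is not the authors' method: AliBaba 2.1, the tool used in the next paragraph, relies on TRANSFAC binding-site matrices, whereas this stand-in only matches AGGTCA-like half-sites arranged as a direct repeat with a 4-nucleotide spacer (DR4), one common TRE geometry. The input sequence is a hypothetical placeholder rather than the Mafa gene.

# Hedged sketch of a thyroid response element (TRE) scan: searches for two
# AGGTCA-like half-sites in a direct-repeat-4 (DR4) arrangement. The promoter
# sequence below is a placeholder, not the actual Mafa promoter.
import re

HALF_SITE = "[AG]GGT[CG]A"                     # relaxed AGGTCA consensus half-site
DR4 = re.compile(f"({HALF_SITE})[ACGT]{{4}}({HALF_SITE})")

promoter = "TTAGGTCATTGAAGGTCACCGA"            # hypothetical sequence
for m in DR4.finditer(promoter):
    print(f"putative DR4 TRE at {m.start()}-{m.end()}: {m.group(0)}")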
Using AliBaba 2.1 software, we found potential TRE motifs in the proximal promoter and coding sequence of mouse Mafa and designed quantitative PCR primers for them (Supplementary Table 1). In ChIP assays with MIN-6 cells (Fig. 7B-D), S2 had 2-fold and S3 had 10-fold enrichment with respect to IgG, indicating both were TREs. Further evidence of binding of THR to S3 is provided by EMSA, in which a band observed in the presence of nuclear extract from HEK1 cells transfected with a Thr expression plasmid was inhibited upon incubation with an antibody against THR (Fig. 7E), suggesting a specific disruption of the THR:S3 interaction. In addition, in a luciferase assay using a reporter construct containing the Mafa gene 5′-promoter region (S3 was not included) (Fig. 7F), T3 induced a significant increase of luciferase activity, indicating that S2 is active in the presence of T3 and increases Mafa transcription. The modest size of the increase in luciferase activity is primarily the result of the lack of the S3 site in the construct and the high basal Mafa/luciferase expression in the MIN-6 cells, limiting the impact of T3 on the transcription of the Mafa/luc gene. Overall, these results indicate that T3 regulates Mafa transcription through direct receptor-ligand interaction.

FIG. 6. In vitro T3 selectively increased Mafa mRNA and enhanced glucose-stimulated insulin secretion. A: key islet mRNA levels after 4 days of culture with T3 (7.5 pmol/L free T3) in islets isolated at postnatal day (P) 7 and infected with either control Ad-Gfp or DN-Mafa, normalized to Ad-Gfp without T3 (*P < 0.04 with respect to Ad-Gfp; +P < 0.05 with respect to Ad-Gfp+T3; n = 4-7). B: insulin secretion in sequential static incubations at 2.6 and 16.8 mmol/L glucose (seven experiments for Ad-Gfp+T3 and four for Ad-Gfp+T3+DN-Mafa; *P < 0.04 with respect to Ad-Gfp). C and D: reverse hemolytic plaque assay of individual cell secretion, showing an increased percentage of insulin-secreting cells (three experiments; *P ≤ 0.006 with respect to 2.6 mmol/L glucose). Values shown as mean ± SEM.

FIG. 7. B: schema of the murine Mafa promoter and coding sequence; the top arrow indicates the transcription start site, and black ovals indicate the TRE sites tested by ChIP: S1 (−2,342/−2,354), S2 (−1,927/−1,946), and S3 (+647/+659). S1 is not conserved in humans or rats, whereas S2 is conserved in rats but not in humans. C: ChIP of MIN-6 cells with an antibody against THR shows direct binding at S2 and S3 but not at S1 (gel representative of three or four experiments). D: quantitative PCR products from ChIP-amplified DNA with corresponding IgG, RPII, and input (mean ± SEM; *P < 0.03; n = 3-4). E: EMSA showing a band in the presence of nuclear extract from HEK1 cells transfected with a Thr expression plasmid that was inhibited by an antibody against THR (S3, potential TRE site 3 in the Mafa coding sequence; NE, nontransfected nuclear extract; D1 TRE, known TRE in the Dio1 promoter; IgG, unspecific antibody; a-THR, antibody against THR). F: dual luciferase reporter assay using a firefly luciferase construct with the 5′ Mafa promoter region in MIN-6 cells grown in high-glucose DMEM ± 100 nmol/L T3 for 24 h, with Renilla luciferase in a SacI backbone as transfection control (mean ± SEM; *P < 0.01; n = 3).

DISCUSSION

Previously we showed that overexpression of Mafa in neonatal islets to approximately adult Mafa mRNA levels induced glucose-responsive insulin secretion and thus the functional maturation of β cells (7). The data presented here support TH as a physiological regulator of Mafa expression and postnatal functional maturation of β cells. Increased expression of THR and increased serum T4 levels accompanied the increased Mafa mRNA between postnatal days 7 and 9. Islets treated with T3 in vitro at postnatal day 7 showed increased glucose responsiveness with a greater proportion of responsive β cells. Importantly, expression of DN-MAFA blocked secretion of glucose-responsive insulin and the increased Mafa mRNA. These data, along with our previous work (7), indicate that T3 induction of glucose responsiveness was dependent on MAFA. Moreover, the thyroid receptor directly interacts with two putative TREs in the Mafa gene. In vivo manipulation of TH (treatment with T3 or MMI) resulted in the expected change in Mafa expression, demonstrating the importance of this mechanism during normal physiological development.

TH is a known regulator of development in different tissues, yet its developmental effects on β-cell function have not been described. Previous studies showed that the T3-THRA complex enhanced proliferation of RINm5f cells (35) and that THRA-knockout mice had greater whole-body insulin sensitivity (36). We now show that Mafa is a direct target of TH in β cells. The physiological importance of T3 regulation of islet development is underscored by our in vivo models of T3 modulation.

Although a postnatal switch of thyroid receptor isoforms has been described in the heart, dorsal root ganglia, and sciatic nerve, with Thra characteristic of immature tissues and Thrb characteristic of functionally mature tissues (37,38), such an isoform switch had not been previously described for β cells. Comparison of the relative mRNA expression of the isoforms shows that Thra predominates at early ages, that both Thra and Thrb are equal from postnatal days 9 to 15, and that Thrb becomes the predominant isoform in adult islets. At the protein level, β cells expressed both THRA and THRB at postnatal day 7, but the "immature" isoform THRA had nuclear localization between postnatal days 7 and 15, whereas the THRB isoform was mainly cytoplasmic, suggesting that THRA mediates TH effects on β cells during the early postnatal period. THRB showed nuclear translocation only at postnatal day 15, which was when T3 plasma levels peaked. Nucleocytoplasmic shuttling of both THRA and THRB has been described in other cell types (39)(40)(41) and can be mediated by T3 in an energy-dependent process (39); their nuclear export is considered passive (40), but it is unclear why there is differential shuttling of the two isoforms. We also showed that the effects T3 has on other tissues could influence the observed phenotype.
The surprising absence of glucose-stimulated insulin secretion in islets from pups treated with T3 until postnatal day 7 was likely due to the accelerated maturation of the adrenal gland induced by T3. At birth the adrenal lacks full function, and circulating corticosterone increases only gradually after postnatal day 7 until about postnatal day 15, when a surge occurs (10). The size of the adrenal gland and circulating corticosterone levels have previously been directly related to circulating T3 levels (42). In our studies, corticosterone levels doubled in animals that received T3 supplementation and significantly decreased in those in which T4 synthesis was inhibited. Even low concentrations of dexamethasone decreased Pdx1 mRNA and blocked the stimulatory effects of T3 on Mafa mRNA and insulin secretion.

In conclusion, we have shown that TH is a physiological regulator of β-cell maturation, a process mediated through direct interaction of THR and the Mafa promoter. In vitro, the active hormone T3 increased glucose-stimulated insulin secretion and potentially could have a similar maturation role for in vitro stem cell-derived β cells. Identification of additional physiological regulators that drive β-cell maturation and glucose responsiveness should lead to effective strategies for developing fully mature in vitro-derived β cells for replacement therapy for diabetes.
Evaluation of Clinical Outcomes of Generic versus Reference Ivabradine in Heart Failure Patients

Economic benefits associated with the usage of generic drugs have been suggested to increase patients' adherence to their medications and to improve patients' health outcomes. However, the therapeutic equivalence of certain generic products to their branded counterparts has been questioned. Our study aims to compare the efficacy and safety of generic and branded ivabradine in adult patients with chronic heart failure with reduced ejection fraction (≤40%) (HFrEF). This was a randomized, open-label, crossover, two-period comparative study. A total of 32 patients with HFrEF were randomized into two groups. Group A received branded ivabradine for 12 weeks followed by generic ivabradine for the next 12 weeks. Group B received generic ivabradine for 12 weeks followed by branded ivabradine for the next 12 weeks, with no washout period. The efficacy outcomes included resting heart rate (HR), New York Heart Association Functional Classification (NYHA FC), quality of life (QoL) using the Minnesota Living with Heart Failure (MLWHF) questionnaire, and ejection fraction (EF). After taking the drugs for the first 12 weeks, no statistically significant difference was detected in any efficacy outcome between Group A and Group B. After crossover and taking the drugs for a further 12 weeks, similar results were obtained. Only minor side effects, mainly phosphenes, were observed with both products. No mortality occurred in either group. This study showed no statistically significant difference between generic and branded ivabradine in terms of efficacy and safety. The results suggest that generic ivabradine can be a safe substitute for branded ivabradine for economic reasons.

INTRODUCTION

Generic drugs often cost significantly less than their branded counterparts, which can enhance patient adherence and decrease health care expenditures [1][2][3][4]. This is important for patients with insufficient income and in the case of restricted budgets of medical insurance programs [4]. However, generic drugs are considered therapeutically equivalent only on the basis of simple bioequivalence studies, whereas branded drugs have to demonstrate their clinical efficacy and safety [5,6]. So, whether generic drug products are truly therapeutically identical and interchangeable with their branded counterparts is still controversial, and substitution can thus compromise the response and/or safety of patients [7]. Accordingly, and given the worldwide dynamic expansion of the pharmaceutical market, it is essential to prove the therapeutic equivalence of generic drugs, which are chemical equivalents of their branded counterparts in terms of active ingredients [8,9].

Ivabradine is a selective inhibitor of the cardiac pacemaker (If) current, which modifies pacemaker activity in the sino-atrial node. It produces a pure negative chronotropic action without influencing atrioventricular or intraventricular conduction or contractility, and it has no impact on blood pressure [10,11]. Ivabradine was approved by the European Medicines Agency in 2005 and by the United States Food and Drug Administration in 2015 [12]. It is marketed by Servier under the name Procoralan (worldwide) and by Amgen (which acquired United States commercial rights to the drug from Servier) under the name Corlanor. Currently, it is incorporated in the 2017 American College of Cardiology/American Heart Association task force and the 2016 ESC guidelines for the management of heart failure.
It is licensed as an add-on drug, or as an alternative to beta-blockers (if these are not tolerated), when the resting heart rate (HR) remains ≥70 bpm in patients with chronic heart failure with reduced ejection fraction (≤40%) (HFrEF) [13][14][15]. This reduction in HR has been associated with improved QoL and better prognosis in patients with HF [16,17]. However, the efficacy of ivabradine in HF patients with diastolic dysfunction still requires extensive evaluation [18]. Ivabradine generics have been introduced into the Egyptian market, with the cheapest licensed under the trade name Bradipect® by October Pharma. According to the first national large-scale registry of heart failure (HF) patients in Egypt, the prescription rate for ivabradine in ambulatory patients with HF was 20.4% [19]. Although ivabradine generics are presumed to have similar efficacy and tolerability, a head-to-head evaluation of generic and reference ivabradine in terms of efficacy and tolerability has never been performed. This study aims to compare the therapeutic equivalence of generic versus brand-name ivabradine in adult Egyptian patients with HFrEF.

METHODS

A randomized, open-label, 2-sequence, 2-period crossover study was conducted on 32 Egyptian patients (16 patients in each group) over a period of 24 weeks, with no washout period for ethical reasons [20]. Patients were recruited from the outpatient clinic of the Critical Care Medicine Department, Cairo University Hospitals, and the Cardiology outpatient clinic, Ain Shams University Hospitals, during the period from October 2015 to December 2017. All HF patients aged ≥18 years, with New York Heart Association Functional Classification (NYHA FC) II, III or IV, sinus rhythm, regular resting heart rate (HR) ≥70 beats/min, and ejection fraction (EF) ≤40% were considered for inclusion in the study. Patients with HF with preserved EF (HFpEF), atrial fibrillation or flutter, thyrotoxic heart disease, severe renal impairment defined as serum creatinine >3 mg/dl, or severe hepatic impairment with signs of liver cell failure were excluded. In addition, patients on non-dihydropyridine calcium-channel blockers, class I anti-arrhythmics, and/or strong inhibitors of cytochrome P450 3A4 were excluded.

Randomization

Patients were randomized to Group A and Group B (two phases in each group) using previously prepared sealed envelopes. Patients in Group A (16 patients) received branded ivabradine (Procoralan®) tablets for 12 weeks followed by generic ivabradine (Bradipect®) tablets for another 12 weeks, while patients in Group B (16 patients) received generic ivabradine for 12 weeks followed by branded ivabradine for another 12 weeks.

Data Collection

Demographic and clinical characteristics were assessed at baseline and monthly thereafter (Table 1). Quality of life was assessed using the Minnesota Living with HF (MLWHF) questionnaire [21]. Self-reported side effects and patient adherence to medications were also recorded. Echocardiography was performed throughout the study by the same operator, who was blinded to treatment allocation and to previous echo findings, and EF was calculated by the 2D modified Simpson's technique. Renal and liver function tests, complete blood count (CBC), NYHA FC, and EF were assessed at baseline and at the end of each phase. Medication adherence was evaluated by pill count; patients in both groups were considered adherent to their medications provided they had taken at least 80% of the prescribed pills [20].
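The pill-count adherence rule above is simple enough to express directly. A minimal sketch, with hypothetical function names and an illustrative dispensing schedule that is not taken from the study (ivabradine is conventionally dosed twice daily, so 12 weeks corresponds to 168 pills):

```python
# Sketch of the pill-count adherence rule: a patient is "adherent" when at
# least 80% of prescribed pills were taken. Names and numbers are illustrative.

def adherence_percent(pills_dispensed: int, pills_returned: int) -> float:
    """Percent of prescribed pills actually taken, estimated by pill count."""
    taken = pills_dispensed - pills_returned
    return 100.0 * taken / pills_dispensed

def is_adherent(pills_dispensed: int, pills_returned: int,
                threshold: float = 80.0) -> bool:
    return adherence_percent(pills_dispensed, pills_returned) >= threshold

# Example: 12 weeks of twice-daily dosing = 168 pills dispensed, 20 returned.
print(adherence_percent(168, 20))  # ~88.1%
print(is_adherent(168, 20))        # True
```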
Ethical Approval

Approval was granted by the ethics committee of the Faculty of Pharmacy, Ain Shams University (approval number: 238), and by Future University in Egypt (approval number: REC-FPSPI-4/28). All recruited patients signed informed consent before participation in the study.

Primary and Secondary Outcomes

Primary outcome measures were resting HR, EF, NYHA FC, and QoL at the 12th and 24th weeks. Mortality from cardiovascular disease, adverse events, and the number of hospital admissions for worsening HF were assessed as secondary outcomes.

Statistical Analysis

Statistical analysis was performed using SPSS software (version 22.0). The Chi-square test or Fisher's exact test was used for categorical variables. The independent-samples t-test was used for continuous variables, and the Mann-Whitney U-test was used if numerical data were not normally distributed. Two-way ANOVA was used to compare the mean difference of change between groups [22], followed by Mauchly's post-hoc analysis for pairwise comparisons (a minimal code sketch of such an analysis follows the results below). The significance level was set at P<0.05. Using PASS (11th release), the minimal sample size for a crossover design to detect a statistically significant difference between the 2 groups was 14 participants in each group, assuming power = 0.80, α = 0.05, and effect size = 0.5 [23-26].

Baseline Assessment

A total of 32 patients were randomized to Group A or Group B (two phases in each group). Ischemic heart disease was the most common etiology of HF (78.1%). Regarding comorbidities, 53.1% were hypertensive, 43.8% were diabetic, and 25% had dyslipidemia. There was no significant difference between the groups in laboratory parameters, demographic data, cardiac parameters, or NYHA FC. However, the mean EF of group A was significantly lower than that of group B, p-value = 0.02 (Table 1). Guideline-directed medical therapy (ACEIs/ARBs, β-blockers, spironolactone, diuretics), the proportion of patients at ≥50% of the target β-blocker dose, digoxin, statins, antiplatelets, and anticoagulants were comparable in both groups. Neither the brand nor the dose of beta-blocker or digoxin was changed after randomization.

HR

All patients took at least 80% of their drugs during the study period. At the end of phase 1 (12th week), a comparable reduction in HR occurred in the two groups, P-value = 0.64. At the end of phase 2 (24th week), HR in the two groups did not deviate significantly from the values observed at the end of phase 1, P-value = 0.69. The interaction of time*treatment was not significant (P-value = 0.28) (Fig. 1).

EF

At the start of phase 1, the mean baseline EF was significantly lower in group A than in group B. At the end of phase 1, the mean EF increased from 27%. Fig. 2 shows the percentage of patients with LVEF ≤35% in the two groups; the improvement in EF was comparable during phase 1 and phase 2 within groups, P-value = 0.29 and 1.0, respectively (Fig. 2). Fig. 3 shows the NYHA FC classes at baseline, week 12, and week 24. The improvement in NYHA FC was similar in both groups at week 12 (87.5% in group A versus 93.8% in group B), with no further improvement at week 24 (Fig. 3).

QoL

At the end of phase 1, the mean QoL value had improved, with no further significant change at the end of phase 2. The actual QoL improvement was -12±15.23 with a mean % change of 36.71±30.22 in group A versus -12.81±9.19 with a mean % change of 37.66±23.48 in group B, P-value = 0.29 and 0.51, respectively.
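The time*treatment interactions quoted above come from the repeated-measures analysis described under Statistical Analysis. A minimal sketch of that kind of within-subject (crossover) comparison, reduced to the treatment factor alone; the long-format layout, column names, and HR values are illustrative assumptions (the study itself used SPSS):

```python
# Sketch of a within-subject treatment comparison with repeated-measures
# ANOVA; column names and toy HR values are illustrative, not study data.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long format: one row per patient per phase (brand vs. generic).
df = pd.DataFrame({
    "patient":   [1, 1, 2, 2, 3, 3, 4, 4],
    "treatment": ["brand", "generic"] * 4,
    "hr":        [68, 69, 71, 70, 66, 68, 72, 71],  # resting HR, bpm
})

# F-test for the within-subject treatment effect.
print(AnovaRM(df, depvar="hr", subject="patient", within=["treatment"]).fit())
```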
At the end of phase 2, there was no further difference between the two groups in QoL improvement or percent change (P-value = 0.55). The interaction of time*treatment was not significant (P-value = 0.85).

Adverse Events and Cost Saving

At the end of both phases, two patients had been hospitalized for worsening HF in group A, and none in group B. Bradycardia (i.e., HR <50 bpm) occurred in two patients in group A. In addition, visual side effects (phosphenes) occurred in one patient in each group. There was no mortality during the whole study period. The total cost of the brand product was 2331.80 USD, whereas the total cost of the generic product was 1469.27 USD. If only the generic product had been used, the cost saving would have been 862.53 USD (26.95 USD/patient), reflecting an almost 40% saving (Table 2; the arithmetic is cross-checked in the sketch below).

DISCUSSION

To our knowledge, no previous head-to-head study was conducted which focused on the effect of generic versus branded ivabradine on HR, EF, NYHA FC, and quality of life. Accordingly, in the present study, the primary efficacy outcomes of branded ivabradine were compared to those of its generic counterpart in terms of resting HR, EF, NYHA FC, and QoL monthly, at the 12th week, and up to 24 weeks of treatment. The study showed that generic and branded ivabradine were therapeutically equivalent in patients with HFrEF with respect to HR, EF, NYHA FC, and QoL. In addition, both groups showed a similar toxicity profile. In group A, the resting HR was reduced by 21.1±15.2 bpm at the end of 3 months (from 90.13±7.11 bpm to 68.25±9.54 bpm after 1 month and to 69±11.41 bpm after 3 months), versus 27.12±20.6 bpm in group B (from 94.25±12.71 bpm to 70.63±10.16 bpm after 1 month and 67.31±8.68 bpm after 3 months). This is in line with a study conducted in Egypt to investigate the efficacy of ivabradine in idiopathic dilated cardiomyopathy (ICM) patients with chronic HF [39], in which the baseline HR was reduced from 96±15 to 72, with a mean reduction of 24±13, at 3 months. However, the magnitude of this reduction was slightly higher than that observed in the INTENSIFY study (85±11.8 bpm at baseline to 72±9.9 bpm after 1 month and 67±8.9 bpm after 4 months, with a mean reduction of 18±12.3 bpm). In addition, the latter study reported that the HR reduction was greater in patients with higher baseline HR. This observation might explain the relatively larger reduction in HR in our study and in the latter Egyptian study, where the mean baseline HR in both studies was ≥90 bpm. Similarly, in the present study, the reduction in HR was slightly higher compared with the SHIFT [37] (79.7 bpm to 64 bpm after 1 month) and BEAUTIFUL [36] (79.1 bpm to 65 bpm after 1 month) studies. Likewise, HR was reduced by 8.3±9.7 bpm (71.5±10 to 63.2±9.9 bpm after 3 months) in the ivabradine group of the BEAUTIFUL Echo sub-study, which aimed to assess the effect of HR reduction by ivabradine on left ventricular size [40]. A randomized, open, blinded-endpoint study assessing the effect of HR reduction with carvedilol, ivabradine, and their combination on exercise capacity in HF patients receiving a maximal dose of ACEIs [41] reported similar results (76.3±12.8 bpm to 58.1±5.4 bpm). All of the latter studies, which recorded smaller HR reductions than the present study, also recorded lower baseline resting HR. Accordingly, this adds evidence to the observations of the INTENSIFY study, which reported that patients with higher baseline HR experience greater reductions in HR.
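Returning briefly to the cost figures reported above, the quoted saving can be reproduced from the two totals alone; the only additional input is the study's 32-patient sample size:

```python
# Cross-checking the cost-saving arithmetic reported in the text (32 patients).
brand_total   = 2331.80   # USD, brand product for the whole cohort
generic_total = 1469.27   # USD, generic product for the whole cohort

saving = brand_total - generic_total
print(round(saving, 2))                   # 862.53 USD in total
print(round(saving / 32, 2))              # 26.95 USD per patient
print(round(100 * saving / brand_total))  # ~37%, i.e. "almost 40%" saving
```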
In the present study, the percentage of patients who improved in NYHA FC after 3 months of treatment was high in both groups (87.5% in group A versus 93.8% in group B), with no significant difference. The percentage of patients with NYHA FC I and II increased from 37.5% to 93.8% in group A and from 6.3% to 81.3% in group B. It is worth mentioning that none of our patients were classified as NYHA FC I at baseline; however, by the end of the study, 28% were NYHA FC I. This improvement in NYHA FC was higher than that observed in the SHIFT study (28%). In the INTENSIFY study, NYHA I and II increased from 9.6% and 51.1% to 24.0% and 60.5%, respectively, after 4 months of treatment. This difference from our results may be attributed to the higher percentage of NYHA FC III and IV and the lower percentage of NYHA FC II at baseline in our cohort compared with both the SHIFT and INTENSIFY studies. Also, the Egyptian study conducted in ICM patients with HF recorded an improvement in NYHA FC of only 12% after 3 months of ivabradine [39]. By contrast, most of our patients had HF due to IHD (78%) and only 22% had DCM. The percentage of patients with LVEF ≤35% in our study declined from 93.8% to 62.5% in group A and from 62.5% to 43.8% in group B after 3 months. In the INTENSIFY study, the proportion with LVEF ≤35% declined from 26.6% at baseline to 17.4% after 4 months. Our larger improvement may be due to the higher percentage of patients with LVEF ≤35% at baseline. This is further supported by our results, which showed an improvement in the mean change value of LVEF for groups A and B of 5.94%±3.07 and 7.31%±5.82 after 3 months, in line with the Egyptian study, which recorded an improvement in LVEF in the ivabradine group with a mean change of 6. A meta-analysis on generic versus brand-name drugs used in cardiovascular diseases was published in 2016. The latter study showed that substituting generics for brand-name cardiovascular drugs does not entail a loss in either efficacy or safety [31]. However, there were major limitations to that meta-analysis. First, 50% of the studies evaluated were bioequivalence trials, had short follow-up periods and low statistical power due to small sample sizes, and most of the study populations were healthy volunteers. Second, in most studies, either the generic manufacturer sponsored the study or the source of funding was not reported; thus, the results might be subject to sponsorship bias. Similar limitations apply to a meta-analysis conducted in 2008 to evaluate the therapeutic equivalence of generic and brand-name drugs used in cardiovascular disease [27]. The latter study concluded that, although there is no proven evidence to support the superiority of brand-name drugs over their generic counterparts, a significant number of articles counsel against interchanging generic and branded drugs [27]. Briefly, studies comparing the therapeutic equivalence of branded drugs versus their generics have limitations and show conflicting results. The present study took several measures to overcome some of those limitations. First, the study was conducted over a sufficient period of 6 months. Second, HF patients, not healthy volunteers, comprised the study population. Third, a crossover design and an adequate sample size with suitable power (80%) were used. Also, there was no sponsorship bias or any type of conflict of interest. A limitation of this study was its open-label design.
Additionally, EF was significantly lower at baseline in the group of patients who started with the brand drug than in the group who started with the generic drug. However, actual improvement and percent change were used to evaluate the EF outcome to overcome this limitation. Moreover, there was no washout period, for ethical reasons. Further studies with a larger sample size are required to confirm the study results.

Conclusion

This study showed no statistically significant difference between generic and brand-name ivabradine in terms of efficacy and safety. Based on our results, we propose that generic ivabradine can be a safe substitute for branded ivabradine for economic reasons. Further studies with a larger sample size are required to confirm these results.
Metformin in Pregnancy: Mechanisms and Clinical Applications

Metformin use in pregnancy is increasing worldwide as randomised controlled trial (RCT) evidence emerges demonstrating its safety and efficacy. The Metformin in Gestational Diabetes (MiG) RCT changed practice in many countries, demonstrating that metformin produced pregnancy outcomes similar to insulin therapy with less maternal weight gain and a high degree of patient acceptability. A multicentre RCT is currently assessing the addition of metformin to insulin in pregnant women with type 2 diabetes. RCT evidence is also available for the use of metformin in pregnancy for women with Polycystic Ovarian Syndrome and for nondiabetic women with obesity. No evidence of an increase in congenital malformations or miscarriages has been observed, even when metformin is started before pregnancy and continued to term. Body composition and metabolic outcomes at two, seven, and nine years have now been reported for the offspring of mothers treated in the MiG study. In this review, we will briefly discuss the action of metformin and then consider the evidence from the key clinical trials.

Introduction

In recent years, metformin has gained acceptance as a safe, effective, and rational option for reducing insulin resistance in pregnant women with type 2 diabetes, gestational diabetes (GDM), or polycystic ovarian syndrome (PCOS). It may also provide benefit in obese non-diabetic women during pregnancy. In the UK, the National Institute for Health and Care Excellence (NICE) recommends that women with gestational diabetes should be offered metformin if blood glucose targets are not met with diet and exercise within 1-2 weeks [1]. The Scottish Intercollegiate Guidelines Network (SIGN) guidelines recommend that metformin or glibenclamide may be considered as initial pharmacological glucose-lowering treatment in GDM [2]. Diabetes Canada states that metformin may be used as an alternative to insulin, adding that women should be informed that metformin crosses the placenta, that longer-term studies are not yet available, and that the addition of insulin is necessary in approximately 40% of cases to achieve adequate glycaemic control [3]. By contrast, the American Diabetes Association (ADA) states that insulin is the first line of treatment for GDM [4]. However, it classifies metformin as Category B and refers to evidence of safety and efficacy from randomised trials, whilst noting that long-term safety data for offspring are lacking [4]. We will briefly consider the action of metformin and then review the major clinical trials of metformin use in pregnancy, including its use in obese non-diabetic women and as an addition to insulin in type 2 diabetes. We conclude by looking at the potential impact of in utero exposure on the offspring of mothers receiving metformin in pregnancy.

Metformin in Obese Non-Diabetic Pregnant Women

Maternal obesity is associated with adverse pregnancy outcomes, including an increased risk of gestational diabetes, preeclampsia, and macrosomia [17][18][19][20][21]. Sebire et al. [21], in a study of 287,213 pregnancies in London, reported odds ratios (OR) for obese women compared with those of normal Body Mass Index (BMI): GDM (OR: 3.6), pre-eclampsia (PET) (OR: 2.1), and large-for-gestational-age birth (LGA) (OR: 2.36). Whilst the mechanism for these complications is not well defined, maternal insulin resistance has been implicated. Metformin, by reducing insulin resistance, might be expected to improve maternal and foetal outcomes in this condition.
Chiswick and colleagues (2015) reported a double-blind, placebo-controlled trial (EMPOWaR) in non-diabetic (normal glucose tolerance test), obese (BMI of 30 kg/m2 or more), predominantly Caucasian (96%) women receiving metformin 500 mg (up to a maximum of 2500 mg/day) or placebo from 12-16 weeks' gestation until term [22]. Three women in each group developed preeclampsia. Fasting plasma glucose and HOMA-IR, measures of maternal insulin resistance, were lower at 28 weeks in women who received metformin, but these differences were not maintained at 36 weeks, possibly due to poor study-drug compliance late in pregnancy. The EMPOWaR investigators also noted significantly lower inflammatory markers, including C-reactive protein and Interleukin-6 (IL-6), in women who received metformin. Disappointingly, these changes had no significant effect on the subsequent development of gestational diabetes. The primary outcome, a z score corresponding to the gestational age-, parity-, and sex-standardised birthweight percentile of live-born babies, was no different in the offspring of metformin-treated women compared with controls [22]. We performed a similar double-blind, placebo-controlled trial in 450 non-diabetic, severely obese (BMI > 35 kg/m2), mixed-ethnicity (70% Caucasian, 25% African or Afro-Caribbean, 5% South Asian or mixed) women receiving up to 3000 mg metformin daily [23]. Adherence to the study regimen was good (≥50% of tablets taken in nearly 80% of the women) and did not differ significantly between the two groups. Metformin was associated with less maternal gestational weight gain (kg) (median: 4.6 vs. 6.3; p < 0.01) and less preeclampsia (6/202 vs. 22/195; p: 0.001) compared with placebo. However, metformin did not reduce the median neonatal birthweight z score (primary outcome) or the incidence of gestational diabetes. There was no significant difference between the groups in the incidence of other pregnancy complications or of adverse foetal or neonatal outcomes [23]. There are several possible explanations for the failure of these studies to show an impact on birth weight or gestational diabetes. Firstly, metformin was initiated late in the first trimester, and it is possible that a beneficial effect would have been observed if it had been started around the time of conception. Evidence from women with polycystic ovaries who receive metformin pre- or peri-conception indicates a reduction in the risk of GDM [7]. Secondly, it is possible that a beneficial effect on birthweight in these women requires a high dose of metformin (2500-3000 mg/day), and too few women in these trials took this dose for long enough. Women with obesity in pregnancy may represent a heterogeneous group with varying degrees of resistance to the insulin-sensitising effects of metformin. Genetic polymorphisms in drug uptake transporter genes have been implicated as a possible mechanism accounting for variation in metformin response [24]. Another possibility is that the impact of metformin will be seen in early childhood rather than at birth. Beneficial changes in the body composition of children who were exposed to metformin in utero in the MiG study have been reported at 2 years of follow-up (MiG TOFU) [25]. Similar results might be seen in the children of women who participated in the EMPOWaR or Metformin in Obese non-diabetic Pregnant women (MOP) trials, and such studies are in progress.
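The primary outcome in both EMPOWaR and MOP is a standardised birthweight z score. The mapping from a standardised percentile to a z score is simply the inverse of the standard normal CDF; a minimal sketch, assuming the percentile has already been adjusted for gestational age, parity, and sex as described above:

```python
# Converting a standardised birthweight percentile (0-100) to a z score.
from scipy.stats import norm

def percentile_to_z(percentile: float) -> float:
    """Inverse-normal transform of a birthweight percentile."""
    return norm.ppf(percentile / 100.0)

print(percentile_to_z(50))  # 0.0  (median birthweight)
print(percentile_to_z(90))  # ~1.28 (the >90th-centile large-for-gestational-age cut-off)
```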
We wanted to test the hypothesis that metformin reduces the incidence of GDM and associated features of the metabolic syndrome in women with the highest insulin resistance at baseline. We therefore performed a sub-analysis on a subset of 118 patients in whom data on maternal fasting insulin, HOMA-IR, visceral fat mass (VFM), and inflammatory markers were collected. Their baseline characteristics are shown in Figure 1. These 118 women were randomised to either metformin or placebo (59 in each group) [26]. Body composition was assessed at entry to the study, at 28 weeks gestation, at term, and postnatally by the InBody TM 720 using the Direct Segmental Multi-frequency Bioelectrical Impedance Analysis (DSM-BIA) method, which has been validated and correlates well with intra-abdominal fat area assessed by CT scan [27] and DEXA [28]. We found that metformin attenuated the physiological increase in fasting insulin and HOMA-IR at 28 weeks gestation in comparison with placebo (Figure 2). Maternal gestational weight gain (kg) was significantly reduced in the metformin group (3.9 ± 4.6 vs. 7 ± 4.5, p: 0.003). Changes in visceral fat mass in the two groups are shown in Figure 3. Whilst the rise in VFM was reduced in women receiving metformin, this did not achieve statistical significance. Postnatally, there was a greater decrease in visceral fat mass in the metformin group (−8.8 ± 15.5 vs. −0.6 ± 15.8; p: 0.01).
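The stratification that follows relies on HOMA-IR. A minimal sketch, assuming the conventional HOMA-IR definition (fasting insulin in µU/mL times fasting glucose in mmol/L, divided by 22.5) and wholly illustrative values — neither the exact formula used in this trial nor the data are reproduced here:

```python
# Conventional HOMA-IR and a 75th-percentile "high insulin resistance" split.
import numpy as np

def homa_ir(fasting_insulin_uU_ml: float, fasting_glucose_mmol_l: float) -> float:
    """Homeostatic Model Assessment of Insulin Resistance (conventional form)."""
    return fasting_insulin_uU_ml * fasting_glucose_mmol_l / 22.5

# Toy cohort of (insulin, glucose) pairs.
cohort = np.array([homa_ir(i, g) for i, g in
                   [(12, 4.8), (25, 5.2), (9, 4.5), (30, 5.6), (18, 5.0)]])
cutoff = np.percentile(cohort, 75)
high_ir = cohort > cutoff   # flags the "high insulin resistance" stratum
print(cutoff, high_ir)
```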
When stratified as high (HOMA-IR > 75th percentile) or normal insulin resistance, only 1 (6.6%) of the 15 women with high maternal insulin resistance allocated to metformin developed GDM, compared with 4 (44.4%) of the nine women with high maternal insulin resistance allocated to placebo (p: 0.04). Women with the most severe insulin resistance at entry to the study allocated to placebo had a greatly increased risk of GDM (OR: 5.7 (1.2-27.5)). Nevertheless, the overall incidence of GDM was not significantly reduced by metformin treatment when stratified by baseline maternal insulin resistance. It remains possible that the lifestyle advice offered to both groups diluted a real effect on GDM frequency and that larger numbers may be needed to show a significant difference.

The reduced rate of preeclampsia observed in the MOP trial contrasts with the lack of effect in EMPOWaR. One explanation for this difference is that gestational weight gain was reduced in MOP but not in EMPOWaR [22,23]. Lower maternal weight gain was associated with a reduced rate of preeclampsia in a recent meta-analysis of metformin and risk of preeclampsia [29].

Adverse Events

There was no significant difference in the incidence of serious adverse events between the groups, but the incidence of side effects such as nausea, vomiting, and diarrhoea was higher in the metformin group than in the placebo group. Eleven patients receiving metformin and four receiving placebo complained of nausea and vomiting.
Similarly, more patients in the metformin group complained of diarrhoea (nine patients) than in the placebo group (two patients). In three patients, one in the metformin group and two in the placebo group, foetal scans showed foetal growth restriction with estimated foetal weight < 5th percentile and abnormal foetal Doppler studies; the trial medications were stopped in these patients as per protocol guidelines. The trial medications were started between 12 and 18 weeks of gestation. Overall, 88.1% of women took >2500 mg of metformin per day.

Metformin in Gestational Diabetes

Diabetes in pregnancy may be pre-gestational, which is when a woman with established diabetes becomes pregnant, or gestational, which is traditionally defined as "carbohydrate intolerance of varying severity with onset or first recognition during pregnancy".
The International Association of Diabetes and Pregnancy Study Groups (IADPSG), the ADA, and others have recently attempted to distinguish women with probable pre-existing DM that is first recognised during pregnancy (overt diabetes) from transient manifestations of pregnancy-related insulin resistance (gestational diabetes) [30]. The prevalence of gestational diabetes (GDM) is increasing worldwide as the pregnant population becomes older and as the prevalence of obesity increases. Comparisons of prevalence between countries are difficult because different diagnostic criteria are currently adopted [31]. Evidence for the use of metformin in gestational diabetes comes from randomised controlled trials as well as case-control observational studies. The landmark Metformin in Gestational Diabetes (MiG) trial had a major impact on the management of GDM in many countries, including the UK [32]. In this study, women were randomised to either metformin or usual treatment, i.e., insulin. A high proportion of women assigned to metformin required supplementary insulin (46%), but at considerably lower doses than women receiving insulin alone. The primary outcome was a composite of neonatal hypoglycaemia (<2.6 mmol/L), respiratory distress, need for phototherapy, 5-min Apgar score < 7, or premature birth (before 37 weeks), and was no different between the two treatment groups [32]. Maternal weight gain from enrolment to term, a secondary outcome, was significantly less in women taking metformin vs. those on insulin (0.4 ± 2.9 kg in the metformin group vs. 2.0 ± 3.3 kg in the insulin group; p < 0.001). Other secondary outcomes, including birthweight, neonatal anthropometrics, and rates of large for gestational age (>90th percentile), were also similar in the metformin and insulin groups. However, the rates of severe hypoglycaemia (<1.6 mmol/L) were reduced in the metformin group vs. insulin therapy [32]. The MiG trial also found that patient acceptability was much higher for metformin than for insulin; when asked if they would choose it again for subsequent pregnancies, 77% of women on metformin said they would, versus only 27% of those on insulin. Gastrointestinal side effects of metformin required 32 women (8.8%) to reduce their dose, but only seven (1.9%) had to stop treatment [32]. In the light of the MiG findings, we conducted a case-control observational study comparing pregnancy outcomes in 100 women with GDM exclusively treated with metformin vs. 100 with GDM exclusively treated with insulin, matched for age, weight, and ethnicity [33]. Both groups had similar baseline maternal risk factors. The incidences of gestational hypertension, pre-eclampsia, induction of labour, and Caesarean section were similar but, as in the MiG trial, mean maternal weight gain from enrolment to term was significantly lower in the metformin group. Women treated with metformin alone had a lower incidence of prematurity, neonatal jaundice, and admission to the neonatal unit, with an overall improvement in neonatal morbidity compared with women treated with insulin alone. There was no significant difference in the incidence of foetal macrosomia between the two groups [33]. In a further case-control study, we compared outcomes in 324 metformin-treated GDM women with 175 GDM women managed with dietary measures alone, matched for age and ethnicity [34].
Despite greater glucose intolerance and hence increased maternal risk in the metformin group, the proportions of macrosomic babies (birth weight [BW] > 90th centile) and small-for-gestational-age (SGA) babies (BW < 10th centile) in this group were significantly lower than in women treated with diet alone (12.7% versus 20%, p < 0.05 [macrosomia]; 7.7% versus 14.3%, p < 0.05 [SGA]) [34]. When comparing metformin with other treatments, post-prandial glycaemic levels may be important, and it is notable that in a meta-analysis of three randomised controlled studies of GDM patients, lower post-prandial glucose was observed in metformin- vs. insulin-treated patients, although these differences did not achieve statistical significance [35]. In a recent systematic review, metformin did not increase the rate of preterm delivery or Caesarean section, or the risk of small-for-gestational-age babies [36]. However, metformin was associated with lower risks of large-for-gestational-age babies, neonatal hypoglycaemia, and admission to neonatal intensive care units, as well as reduced rates of pregnancy-induced hypertension [36]. In its latest guidance (2015), NICE states: "offer metformin to women with gestational diabetes if blood glucose targets are not met using changes in diet and exercise within 1-2 weeks. Offer insulin instead of metformin to women with gestational diabetes if metformin is contraindicated or unacceptable to the woman." [1].

Metformin for Women with Type 2 Diabetes

Whilst randomised controlled trial evidence assessing the use of metformin in pregnant women with type 2 diabetes is not currently available, clinicians will be very familiar with the clinical scenario of a woman whose diabetes is well controlled on metformin presenting in early pregnancy. In this situation, stopping the metformin risks exposing the foetus to the hazards of hyperglycaemia. Reassuringly, no increase in congenital malformations or neonatal mortality has been seen in two meta-analyses of observational studies [37,38]. By contrast, Hellmuth and colleagues (2000) reported increased perinatal mortality and preeclampsia in a retrospective study of 50 women with type 2 diabetes taking metformin compared with those treated with sulphonylureas or insulin [39]. However, women taking metformin were more obese than women in the other treatment groups, which may have confounded the results. More recently, Hughes and Rowan (2006) reported no difference in maternal and foetal outcomes in women taking metformin compared with those on insulin, despite the metformin group being at higher baseline risk of adverse outcomes [40]. In another observational study, Ekpebegh and colleagues analysed 379 women with type 2 diabetes using oral agents between 1991 and 2000 in South Africa [41]. The authors found a high perinatal mortality (125 events per 1000 births) in women treated with oral agents (sulphonylureas or sulphonylureas plus metformin) but not with metformin alone. Insulin, by contrast, either after oral agents or after diet, was associated with a low perinatal mortality [41]. Currently, the Canadian multi-centre MiTy trial is assessing the possible benefit of adding metformin to insulin for type 2 diabetes in the first or second trimesters [42]. The primary outcome is a composite neonatal outcome of pregnancy loss, preterm birth, birth injury, moderate/severe respiratory distress, neonatal hypoglycaemia, or neonatal intensive care unit admission longer than 24 h. The trial aims to enrol 500 participants [42].
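Both MiG and MiTy define their primary endpoint as a composite that is positive when any component criterion is met. A minimal sketch of the MiG composite, using exactly the criteria listed earlier; the field names are illustrative and not taken from either trial's data dictionary:

```python
# The MiG composite primary outcome: positive if ANY component is met.
def mig_composite(neonatal_glucose_mmol_l: float,
                  respiratory_distress: bool,
                  needed_phototherapy: bool,
                  apgar_5min: int,
                  gestation_weeks: float) -> bool:
    return (neonatal_glucose_mmol_l < 2.6   # neonatal hypoglycaemia
            or respiratory_distress
            or needed_phototherapy
            or apgar_5min < 7               # 5-min Apgar score below 7
            or gestation_weeks < 37)        # premature birth

# Example: normoglycaemic term baby with Apgar 9 -> composite negative.
print(mig_composite(3.1, False, False, 9, 39))  # False
```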
NICE (2015) recommends that women with pre-existing type 2 diabetes can receive metformin as an adjunct or alternative to insulin in the preconception period and during pregnancy, when the likely benefits from improved glucose control outweigh the potential for harm [1]. As the evidence base for treatment expands, this balance will be more clearly defined.

Metformin for Women with PCOS

There is increasing evidence that hyperinsulinaemia and insulin resistance play a central role in the pathogenesis of polycystic ovarian syndrome (PCOS) [43]. Women with PCOS are more insulin resistant than weight-matched women with normal ovaries [44]. Obesity in combination with PCOS reduces the chances of conception and the response to fertility treatment, and increases the risk of miscarriage and GDM [45]. Metformin has an established place in the management of PCOS, being used to induce ovulation, reduce miscarriage rates, prevent foetal growth restriction, and improve metabolic associations such as glucose intolerance [46]. Since metformin may be used pre-conception, it is reassuring that there is no evidence of teratogenicity. A meta-analysis of first-trimester exposure to metformin in 351 women found no increase in birth defects [16]. The evidence base for the use of metformin in PCOS is mainly derived from case-controlled studies with occasionally conflicting results. An exception is a randomised placebo-controlled trial in which 257 women with PCOS received metformin (500 mg twice daily increasing to 1000 mg twice daily) or placebo from the first trimester to delivery [47]. The investigators found no difference in the primary outcome, a composite of preeclampsia, GDM, and preterm delivery [47]. Metformin may need to be started pre-pregnancy and given at higher doses to reduce pregnancy-related complications. Thus, De Leo and colleagues (2011), in a prospective study of 98 women with PCOS in whom metformin (1700-3000 mg/day) was started before conception and continued until 37 weeks of pregnancy, reported a significant reduction in pregnancy complications such as gestational diabetes and gestational hypertension compared with 110 pregnant controls [48]. The decrease in preeclampsia was not statistically significant, and mean neonatal Apgar scores, weight, and length were similar between the two groups [48]. Reductions in pregnancy-related complications were also demonstrated by Khattab et al. (2011), who compared 200 nondiabetic PCOS women who had conceived on metformin and continued it (1000-2000 mg/day) throughout pregnancy with 160 women who had similarly conceived on metformin but discontinued it on finding themselves pregnant [49]. The investigators found a statistically significant reduction in preeclampsia (OR: 0.35, 95% CI: 0.13-0.94) and GDM (OR: 0.17, 95% CI: 0.07-0.37) in those who continued metformin [49]. In another study, continuation of metformin throughout pregnancy resulted in reduced rates of foetal growth restriction, GDM, and pre-term labour, and increased live births [50]. No congenital anomalies, intrauterine deaths, or stillbirths were reported. Metformin may also reduce early pregnancy losses. A meta-analysis of randomised controlled trials comparing ovulation-induction treatments concluded that clomiphene plus metformin was more effective than clomiphene alone in terms of ovulation and pregnancy [51].
Potential Impact for Children after in Utero Exposure to Metformin

Could the use of metformin in pregnancy exert long-term beneficial, neutral, or deleterious effects on the offspring? At present, a definitive answer cannot be given. Evidence suggests that intrauterine exposure to the hyperglycaemia of diabetes poses an increased risk of childhood obesity and diabetes in later life, over and above any risk attributable to genetic factors [52,53]. These infants are insulin resistant [54]. It could therefore be postulated that insulin resistance resulting from epigenetic changes and foetal programming could be reversed by metformin. A hint that this might indeed be the case comes from follow-up of infants of women in the MiG study examined at 2 years of age [25]. The metformin-exposed infants had increased subcutaneous fat, as assessed by subscapular and biceps skinfolds, in comparison with non-exposed infants, whilst total body fat was similar. The MiG investigators hypothesised that this represents a healthier fat distribution [25]. If these children were indeed shown to have less visceral fat, they would be expected to be more insulin sensitive. Longer-term studies are clearly needed. Eight-year follow-up of 12 children of PCOS mothers exposed to metformin during pregnancy showed increased fasting glucose and systolic blood pressure, and lower low-density lipoprotein (LDL), compared with placebo [55]. The significance of these findings is unclear given the small numbers. Recently, the MiG trialists reported body composition and metabolic outcomes at 7 years (109/181 children in Adelaide) and 9 years (99/396 children in Auckland) of age [56]. At 7 years, there were no differences in offspring measures. At 9 years, metformin-exposed children had increased weight, arm and waist circumferences, waist:height ratio (p < 0.05), body mass index, triceps skinfold (p: 0.05), and DEXA fat mass and lean mass (p: 0.07) [56]. Body fat percentage was similar by DEXA and bioimpedance. Visceral adipose tissue and liver fat were similar by Magnetic Resonance Imaging (MRI). Metabolic markers, including HbA1c, fasting glucose, fasting lipids, adiponectin, and leptin, were all similar. The significance of these findings in terms of long-term cardiovascular risk is uncertain. It is important to note that the effects of diabetes in pregnancy on childhood obesity may not become manifest until after 6-9 years of age [57]. Planned follow-up of the offspring in the EMPOWaR and MOP trials may shed more light, as these studies compared metformin with placebo rather than insulin. The MiTy Kids trial will assess the children of mothers with type 2 diabetes receiving metformin in addition to insulin.

Summary

Metformin in pregnancy does not increase congenital abnormalities and is generally well tolerated. Serious side effects are very rare. In GDM, because it reduces maternal weight gain compared with insulin, metformin is now the preferred option if glucose targets are not met with dietary measures. This is especially the case for the increasing number of women who are obese. It can be safely added to insulin for women inadequately controlled on insulin alone, allowing lower doses of insulin to be used. For women with type 2 diabetes in pregnancy, the risks are significantly higher than in GDM. The evidence base for decision-making is more limited, but in the absence of evidence favouring insulin, metformin should be continued for women already established on it. Insulin may need to be added if glucose targets are not achieved.
We await with interest the results of the ongoing MiTy trial. Metformin started before pregnancy and continued until term in women with PCOS has benefits both for the mother (reducing GDM, gestational hypertension, and preterm labour) and for the developing foetus (reducing early pregnancy loss and foetal growth retardation). At present, we do not recommend metformin for nondiabetic pregnant women with obesity. We do not know whether preconception use of metformin in larger doses would have been effective in reducing birthweight centiles. The reduction in preeclampsia seen in the MOP trial is intriguing and needs further study. The largely unanswered question is the long-term impact of intrauterine metformin exposure on childhood development. The MiG TOFU results at 9 years could be interpreted as showing a neutral effect, as body fat, visceral adipose tissue, and liver fat were similar in the metformin and insulin groups. Conversely, the unexpected finding of increased body mass index in the metformin offspring might indicate an increased risk of childhood obesity. The low follow-up rate, however, makes the results difficult to interpret. Ongoing long-term follow-up studies, including of the offspring of mothers in the obesity trials, will help resolve this uncertainty.
Experience of cold-water immersion on recovery efficiency after a soccer match

Background: Immersion in cold water is one of the most common recovery and rehabilitation techniques among athletes. However, several factors, such as the shock induced by cold water, can affect the effectiveness of this technique. Aim: The aim of this study was to investigate the effect of 4 weeks of cold-water habituation on the effectiveness of the CWI recovery technique on muscle damage and function indices of young soccer players. Methods: Twenty young men with no previous experience of CWI participated in this study. The output power and RSADec of the subjects were measured. The subjects then performed a simulated soccer test and, after blood samples were collected, were immediately immersed in 15 °C water for 15 minutes. Twenty-four hours later, blood sampling and functional tests were repeated. Subjects were then divided randomly into two groups: exercise with CWI recovery and exercise with passive recovery. After four weeks, the blood sampling and performance tests were repeated as in the pre-test. Results: CWI had no significant effect on serum levels of AST and LDH before or after 4 weeks of CWI (P > 0.05). There was also no significant difference in power output and RSADec after CWI before versus after cold-water habituation (P > 0.05). Conclusions: It seems that prior experience of recovering by immersion in cold water has no effect on the effectiveness of this method. Therefore, soccer coaches and athletes should weigh the value of this recovery method more carefully.

INTRODUCTION

Nowadays, immersion in cold water is one of the most popular recovery methods among athletes, especially team-sport athletes. CWI is performed mainly to reduce muscle damage caused by sports activities and to accelerate the athlete's recovery. In this method, all or part of the body is placed in cold water. Although there is no single standard protocol for immersion in cold water, a water temperature of 15 °C for at least 10 minutes has been suggested (1). In this regard, immersion in 10 °C water for 10 minutes after a soccer match significantly decreased muscle soreness compared with a control group of young male soccer players. Also, immersion in 15 °C water for 14 minutes decreased muscle soreness indices compared with a control group after a session of strength training (2,3). Benefits of CWI in reducing post-exercise fatigue perception have also been reported (4). However, not all studies have observed benefits of CWI recovery after exercise (2,5). Some studies have even argued that CWI could be harmful to performance and to some physiological markers, such as endothelial injury indices, with the control groups in these studies faring better than the CWI groups (6,7). In another study, the efficacy of CWI (14 °C for 20 minutes) versus warm-water immersion (39 °C for 20 minutes) was investigated with respect to training load after 5 days of training in the heat; the result showed that CWI can have a negative effect on training load (8). Although several factors, such as water temperature and the duration and frequency of immersion, can affect the effectiveness of CWI, individuals' response to the cold shock of immersion may also be considered a negative factor (9). In other words, previous experience of CWI may decrease the negative effects of immersion shock and consequently increase the effectiveness of CWI as a recovery technique.
In support of this idea, it has been reported that regular immersion of one leg at 10 °C for 15 minutes over 4 weeks led to microvascular adaptations compared with the other leg, which served as control (10). Since there is little information about the effect of cold-water habituation on the effectiveness of CWI recovery, the aim of this study was to evaluate the muscle damage and performance indices of young soccer players after four weeks of cold-water habituation.

Participants

This quasi-experimental study was conducted with two groups, immersion in cold water (CWI) and passive recovery (C), in a pre-post test design. Twenty young men aged 18 ± 1 years with no previous experience of CWI participated in this study. Anthropometric characteristics are shown in Table 1. Before the study, written informed consent was obtained from the subjects; the study was approved by the Ethical Committee on Human Research of the Ferdowsi University of Mashhad, Iran, and was conducted according to the Declaration of Helsinki and its later amendments. The inclusion criteria were at least 2 years of professional soccer training, no consumption of sports or pharmaceutical supplements during the research, and no participation in any activity other than the prescribed training program. The exclusion criteria were irregular participation in the exercises (more than 3 consecutive sessions, or a total of 5 sessions, missed during the research) and any injury or physical problem that made it difficult to perform the exercises.

Instrumentation

To measure the physical performance of participants, a stopwatch (Q&Q HS43 Sport Stopwatch, Japan) and a measuring tape (seca 201, Germany) were used. Biochemical analysis of LDH and AST serum levels (PARS AZMON kits, Iran) was performed by the photometric method.

CWI and passive recovery protocols: The CWI group was immersed to chest height in 15 °C water for 15 minutes immediately after training, wearing only sports shorts (11). The passive recovery (C) group performed stretching exercises for the quadriceps, calf, and hamstring muscles for 10 minutes immediately after training.

The exhausting exercise protocol: The exhausting exercise in this study was a simulated soccer match involving agility, walking, jogging, and running at different intensities. Subjects first warmed up for 15 minutes; the warm-up included running, stretching, and jumping. Subjects then performed the Loughborough Intermittent Shuttle Test (LIST). Briefly, the test consisted of five 15-minute sets, with a 3-minute rest separating the first three sets from the last two. Each 15-minute set consisted of repeated cycles of 3 × 20 m jogging, one 20 m run, a 4-second rest, 3 × 20 m jogging at 55% VO2max, and 3 × 20 m sprinting at 95% VO2max (each subject's running speed was set from his shuttle run test). Subjects then ran to fatigue to ensure a sufficient exercise load: they covered the distance between two lines 20 m apart, alternating between 55% and 95% VO2max, until they twice failed to reach the line in the allotted time. The exhausting exercise lasted approximately 90 minutes (12) (Figure 1). The shuttle run test was used to estimate VO2max (13).

RSA Dec determination: The RSA test was used for this purpose. The test consisted of 15 runs over a 40-meter course, with subjects running at maximum speed.
Subjects rested 30 seconds after each 40-m run. At the end of the test, the percentage decrement in performance was calculated from the sprint times, where s is the time of each sprint and sbest is the best record among the 15 repetitions (14); see the code sketch below.

Output power: The Sargent vertical jump test was used to estimate power. The subject stood next to a wall and reached up to touch the highest point he could, which was recorded as the touch point. The subject then bent his knees to 90 degrees and jumped to touch the highest point he could reach; the distance between the second and first points was recorded as his jump height. Each subject performed the test three times, with a 2-minute rest between attempts, and the best record was kept. Output power in watts was then calculated from body mass and jump height (15); see the code sketch below.

Measurement of blood biomarkers: Blood samples were collected by a laboratory science expert from the left arm (in a sitting position) immediately after the exhaustive exercise session and 24 hours after the recovery protocol, both before and after the 4 weeks. Samples were centrifuged at 3000 rpm for 20 minutes and then stored at −80 °C. The blood serum was used to determine LDH and AST.

Procedures

In the first session, subjects were familiarized with the procedure and signed the informed consent form, and their initial information, including anthropometric characteristics and a 3-day food recall, was collected. In the second session, VO2max was estimated. In the third session (24 h after the second), the functional tests for estimating RSA Dec and power output were carried out. In the fourth session (48 hours after the third), participants performed the simulated soccer game as the exhausting exercise, followed by blood sampling and the CWI recovery protocol. In the fifth session (24 hours after the fourth), blood samples were collected and the functional tests were repeated. Subjects were then randomly divided into two groups: cold-water immersion recovery (CWI group, N = 10) and passive recovery (C group, N = 10). After four weeks, blood sampling and functional tests before and after the simulated soccer game were repeated as in the pre-test (Figure 2). All subjects participated in a soccer training program for 4 weeks, 5 sessions per week. The program comprised 15 minutes of warm-up, 15 minutes of agility and speed work, 50 minutes of specialized soccer (tactical and technical) training, and 10 minutes of cool-down for both groups. The C group performed 10 minutes of stretching exercise after each session, whereas the CWI group performed the cold-water immersion protocol in 2 sessions per week, with the remaining three sessions identical to those of the C group.

Statistical Analysis

The independent variable of this study was 4 weeks of CWI applied during the conditioning phase of young soccer players. After collecting and entering the data in SPSS software version 22, the raw data were analyzed. Descriptive statistics were used to calculate the central tendency and dispersion indices of the variables.
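The two performance formulas referenced above did not survive text extraction. The sketch below assumes the standard repeated-sprint percentage-decrement formula, which matches the variables named in the text (s and sbest), and, for the Sargent jump, the classical Lewis formula; whether reference (15) actually used the Lewis formula is an assumption.

```python
# Minimal sketch of the two performance measures; input values are toy data.
import math

def rsa_decrement(sprint_times_s: list[float]) -> float:
    """%Dec = 100 * (total time / (n * best time) - 1)."""
    total = sum(sprint_times_s)
    ideal = len(sprint_times_s) * min(sprint_times_s)
    return 100.0 * (total / ideal - 1.0)

def lewis_power_watts(body_mass_kg: float, jump_height_m: float) -> float:
    """Lewis formula: P (W) = sqrt(4.9) * mass * 9.81 * sqrt(jump height)."""
    return math.sqrt(4.9) * body_mass_kg * 9.81 * math.sqrt(jump_height_m)

print(rsa_decrement([5.2, 5.3, 5.5, 5.6, 5.8]))  # ~5.4% decrement over 5 sprints
print(lewis_power_watts(70.0, 0.45))             # ~1020 W for a 70 kg, 45 cm jump
```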
After confirming the normality of the data distribution with the Shapiro-Wilk test and the homogeneity of variance with Levene's test, repeated-measures ANOVA and the Bonferroni post hoc test were used to determine intra- and intergroup changes in serum levels of LDH and AST (immediately and 24 hours after the exhaustive task) and in the performance tests (pre- and post-task), before and after the 4 weeks of the study. A significance level of p ≤ 0.05 was considered.

RESULTS
The results showed that, before and after four weeks of cold-water habituation, there was no significant difference in the LDH and AST serum levels of young soccer players after CWI recovery (P > 0.05). Likewise, there was no significant difference before and after four weeks of cold-water habituation in the output power and RSA Dec of young soccer players after CWI recovery (P > 0.05).

DISCUSSION
The purpose of this study was to investigate the effect of four weeks of cold-water habituation on the effectiveness of CWI on muscle damage and performance indices of young soccer players. The results of our study demonstrated that previous experience of CWI before a competition did not improve the efficacy of the CWI recovery strategy after a match. Several factors can affect recovery efficiency by CWI, including the characteristics of the recovery protocol (water temperature, duration of immersion, frequency of immersion), the intensity and physiological nature of the activity or competition performed, and the characteristics of the athletes (age, gender, fitness); however, few studies have examined the effect of adaptation and previous experience of cold-water immersion on the efficiency of CWI recovery. Past studies have reported improvements in the immune system, decreased stress and inflammatory responses, increased antioxidant concentrations, increased heat shock proteins, and improved response to exercise in hypoxic conditions in cold-water-adapted versus non-adapted subjects (16)(17)(18)(19)(20). The results of the present study showed no significant difference in LDH serum levels between groups at any of the four measurement times (immediately and 24 h after the exhaustive test, before and after four weeks) (Table 2). In a study of martial-arts athletes (jujitsu), LDH levels were lower after 24 hours in the CWI group than in the control group; in that study, 8 trained men were immersed in 6°C water in repeated 4-minute bouts immediately after training, resting one minute out of the water between bouts (21). In contrast, another study examined the effect of CWI after rugby competition on muscle damage indices: 20 trained rugby players were divided into CWI and inactive-recovery groups, with the CWI group immersed in 15°C water for 10 minutes. The results showed no significant difference between the control and CWI groups in LDH, CK, and AST levels after 24 h (22). Since peak serum LDH activity appears to occur around 8 h after exercise, future studies should consider serial blood sampling from immediately to 48 h after CWI recovery (23). In general, the proposed mechanism for the effect of cold-water immersion on reducing muscle damage is as follows: 24 hours after intense training, the damaged muscles become painful and swollen. In addition, increased blood flow to the muscle causes swelling of the muscle tissue. The nerves receive these unusual messages and then send the pain message to the brain.
The end result of these complex processes is the release of muscle damage markers into the bloodstream (24). Immersion in cold water, by reducing muscle blood flow, can prevent or reduce muscle swelling after training and the transmission of pain signals to the brain. In addition, cold-water immersion reduces the permeability of the vascular wall and limits cell swelling; together, these mechanisms decrease the transmission of pain messages and, consequently, the diffusion of muscle damage markers into the bloodstream (4,25). However, the rate of release of these enzymes in trained individuals appears to differ from that in untrained individuals (26). According to the results of the present study, there was no significant difference in AST serum level between groups at either stage, before or after 4 weeks, immediately and 24 hours after CWI (Table 2). Similarly, Takadi et al., who investigated the effect of CWI following a simulated rugby game, reported no effect of cold-water immersion on serum levels of AST (22). Increased circulating AST levels during intense, high-impact activities appear to be unaffected by recovery techniques such as CWI, which are generally performed after the end of training or competition. The results of the present study also showed no significant difference between the groups in RSA Dec at either stage, before or after 4 weeks (Table 3). The ability to repeat sprints is one of the key factors in soccer (27). The results of several studies that examined the effect of CWI recovery on repeated sprint ability are consistent with ours (28-31). One possible explanation for the lack of effect of CWI on repeated sprint ability is that cold-water immersion probably has no effect on muscle lactate concentration; lowered blood and muscle pH impairs the enzymes of the anaerobic glycolysis pathway, resulting in slower energy production and reduced performance (32). Antonio et al. showed in their study that the effects of cold-water immersion were more pronounced 24 and 48 hours after recovery (2). Although habituation to cold-water immersion did not improve the effectiveness of the CWI recovery technique in the present study, future studies should evaluate the effects of CWI recovery over different time periods.

Limitations
This research has limitations. The nutritional status, body water status, and sleep quality and quantity of the participants were not evaluated during the study; all of these factors can affect the rate of muscle damage and performance. The participants were asked to avoid intense activity for 24 hours prior to the testing sessions; however, we cannot confirm whether they complied. The subjects were not blinded during data collection as to whether they were receiving CWI or passive recovery. According to the findings of the present study, it seems that cold-water habituation does not improve the potential benefits of CWI recovery in young soccer players following maximal activity. In fact, prior experience of CWI does not have a significant effect on the effectiveness of CWI recovery on muscle damage and function after a simulated match in young soccer players.
2021-04-27T06:16:33.331Z
2021-02-01T00:00:00.000
{ "year": 2021, "sha1": "39fa2b41279d813f9d57a9a05efb32be44f601fa", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "379958a3d1672b0564c9498426798be0803c943c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
268388673
pes2o/s2orc
v3-fos-license
Interactions between halotolerant nitrogen-fixing bacteria and arbuscular mycorrhizal fungi under saline stress

Background and aims
Soil salinity negatively affects crop development. Halotolerant nitrogen-fixing bacteria (HNFB) and arbuscular mycorrhizal fungi (AMF) are essential microorganisms that enhance crop nutrient availability and salt tolerance in saline soils. Studying the impact of HNFB on AMF communities and using HNFB in biofertilizers can help in selecting the optimal HNFB-AMF combinations to improve crop productivity in saline soils.

Methods
We established three experimental groups comprising apple plants treated with low-nitrogen (0 mg N/kg, N0), normal-nitrogen (200 mg N/kg, N1), and high-nitrogen (300 mg N/kg, N2) fertilizer under salt stress, without bacteria (CK, with the addition of 1,500 mL sterile water + 2 g sterile diatomite) or with bacteria [BIO, with the addition of 1,500 mL sterile water + 2 g mixed bacterial preparation (including Bacillus subtilis HG-15 and Bacillus velezensis JC-K3)].

Results
HNFB inoculation significantly increased microbial biomass and the relative abundance of beta-glucosidase-related genes in the rhizosphere soil at identical nitrogen application levels (p < 0.05). High-nitrogen treatment significantly reduced AMF diversity and the relative abundance of beta-glucosidase-, acid phosphatase-, and urease-related genes. A two-way analysis of variance showed that combined nitrogen application and HNFB treatment significantly affected soil physicochemical properties and rhizosphere AMF abundance (p < 0.05). Specifically, HNFB application resulted in a significantly higher relative abundance of Glomus-MO-G17-VTX00114 compared with the CK group at equal nitrogen levels.

Conclusion
The impact of HNFB on the AMF community in apple rhizospheres is influenced by soil nitrogen levels. The study reveals how varying nitrogen levels mediate the relationship between exogenous HNFB, soil properties, and rhizosphere microbes.

Introduction
Approximately 20% of the world's arable land is currently at risk of salinity, and this percentage is steadily increasing by 10% each year, posing a considerable challenge to agricultural production and contributing to land degradation (Farooq et al., 2021). Apple trees are highly resistant to salinity and alkali stress, making them the preferred fruit-tree species for the efficient development and utilization of saline-alkali land in the Yellow River Basin. However, nitrogen deficiency is a major cause of restricted apple growth in saline soils. Typically, crop yield is improved with the application of nitrogen. However, excessive nitrogen application does not always result in a continuous yield increase; it reduces nitrogen-use efficiency and leads to environmental problems (Grassini et al., 2013; Faostat, 2016). Thus, it is important to consider that using inorganic nitrogen fertilizers can increase nutrient amounts and soil salinity, which may damage plants instead of promoting growth (Jha et al., 2012). In apple cultivation on saline-alkali lands, one potential solution to address this issue is to develop efficient, sustainable, and environmentally friendly biological nitrogen fertilizers that can partially replace chemical nitrogen fertilizers (Chen et al., 2023a).
Many studies have confirmed the importance of microbial inoculants in achieving higher crop yields, improving crop quality and soil fertility, and deepening our understanding of the mechanisms of interactions between certain bacterial and plant strains in specific ecosystems (Primieri et al., 2021; Liu et al., 2022). However, there are still some problems with the application of nitrogen-fixing bacteria under natural conditions: (1) several nitrogen-fixing bacterial strains struggle to colonize saline soils for extended periods, resulting in unstable nitrogen fixation effects. Additionally, there has been limited research on the screening methods, application, and mechanism of action of salt-tolerant nitrogen-fixing bacteria, also known as halotolerant nitrogen-fixing bacteria (HNFB). (2) While several studies have focused on the growth-promoting effects of exogenously inoculated plant growth-promoting rhizobacteria (PGPR) on host plants (Thirkell et al., 2020), only a few have examined the synergistic effects of PGPR with other rhizosphere microorganisms on saline soils and crops, especially beneficial flora with unique abilities. In recent years, there has been a trend toward developing compound bacterial fertilizers comprising two or more bacterial strains instead of single-strain fertilizers, and a shift from fuzzy decision support systems to clear decision support systems. For example, in maize stems, the xylem selectively recruited conserved microorganisms dominated by γ-proteobacteria, and the combination of Klebsiella variicola MNAZ1050 and Citrobacter sp. MNAZ1397 increased nitrogen accumulation in maize by 11.8% (Zhang et al., 2022). Halotolerant PGPRs with 1-aminocyclopropane-1-carboxylic acid (ACC) deaminase activity can reduce plant ethylene accumulation, prevent oxidative stress, promote plant growth, and regulate the rhizosphere microbial community structure. Consequently, halotolerant PGPRs have become a valuable microbial resource for promoting crop growth in saline-alkali soils (Ji et al., 2022a; Li et al., 2023). Among the beneficial microorganisms in saline soils, arbuscular mycorrhizal fungi (AMF) can form a symbiotic relationship with most crops, improving soil nutrient acquisition, promoting plant growth and water absorption, and initiating defense responses in host plants (Bennett and Groten, 2022). Although AMF may not prove to be a "sustainable savior" in agroecosystems (Thirkell et al., 2017), they have the potential to help crops assimilate nutrients (Thirkell et al., 2020). Currently, there is a lack of research on HNFB-mediated AMF community responses. Changes in the core strains in the rhizosphere of crops can alter the types and quantities of metabolites and affect the interaction network of the entire community (Coyte et al., 2015). For example, Niu et al.
(2017) constructed a synthetic community composed of seven bacteria and found that in the absence of Enterobacter cloacae, the abundance of Brevibacterium parvum increased, while other species disappeared from the community, demonstrating the importance of key species in the microbiome. Therefore, analyzing the composition of core microorganisms is necessary for accurately regulating the rhizosphere microbial community, improving microbial community function, and elucidating the mechanisms of microbial community-plant interactions (Hogle et al., 2023). However, because chemical nitrogen fertilizers and nitrogen-fixing bacteria coexist in the agricultural sector, they may interact with each other, with uncertain consequences for the soil-microbe system (Tian et al., 2017); whether the coexistence of chemical nitrogen fertilizers and nitrogen-fixing bacteria affects the core strains remains unknown.

In this study, we inoculated the rhizosphere of apple trees in saline land with a compound flora composed of two salt-tolerant strains that can stably colonize saline land, exhibit ACC deaminase activity, and demonstrate high efficiency in synergistic nitrogen fixation. Additionally, we examined the effects of compound inoculation on the apple plants and the rhizosphere AMF communities under three nitrogen application levels. We proposed and tested the following two hypotheses: (1) the AMF community in the rhizosphere of apple trees in saline-alkali soil is unique, and excessive nitrogen application has a negative effect on the structure and function of the AMF community; (2) exogenous HNFB positively affects the structure and function of the AMF community by influencing one or more core rhizosphere AMF species.

HNFB strains and culture media
The microbial inoculant consisted of a mixture of Bacillus subtilis HG-15 and Bacillus velezensis JC-K3 strains. The 16S rDNA sequences of these two strains were deposited in the NCBI database under accession numbers MN689681 and MT605169. Both strains have been shown to possess efficient nitrogen fixation ability, antagonistic activity, and other growth-promoting characteristics in our previous studies (Ji et al., 2021, 2022b). The nitrogen fixation activity of the HG-15 strain is 24.30 ± 0.75 mg N/g glucose, while that of the JC-K3 strain is 30.25 ± 0.42 mg N/g glucose. The ACC deaminase activity of the HG-15 strain is 14.816 ± 0.965 μmol/(mg·h), while that of the JC-K3 strain is 18.10 ± 0.97 μmol/(mg·h). Luria-Bertani liquid medium was used as the seed and fermentation medium. When the spore formation rate in the fermentation liquid exceeded 95%, diatomite sterilized at 121°C for 20 min was added at a concentration of 10% to the fermented liquid, and the bacteria were allowed to adsorb onto the diatomite. The suspension was then centrifuged at 3,100 × g for 20 min. The supernatant was discarded, and the sediment was stored at −40°C for 48 h before being placed in a lyophilizer (Labconco FreeZone® Plus 4.5 L; Kansas City, MO, USA) and treated at −48°C and 9 Pa for 48 h (Ji et al., 2020). The densities of the HG-15 and JC-K3 strains in the resulting solid microbial agents were 451 × 10^8 CFU/g and 498 × 10^8 CFU/g, respectively. The two bacterial preparations were diluted to a concentration of 20 × 10^8 CFU/g with sterile diatomite and then mixed in a 1:1 ratio for later use.
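As an illustration of the dilution step just described (our own arithmetic, not the authors' code), the mass of sterile diatomite required per gram of lyophilized agent follows from conservation of colony-forming units:

```python
# Sketch of the CFU dilution arithmetic described above. The densities
# (451 and 498 x 10^8 CFU/g, target 20 x 10^8 CFU/g) come from the text;
# the helper itself is illustrative.

def diluent_mass_per_gram(c_initial, c_target):
    """Grams of diluent per gram of agent so that
    c_initial * 1 g == c_target * (1 g + m_diluent)."""
    return c_initial / c_target - 1.0

for name, c0 in [("HG-15", 451e8), ("JC-K3", 498e8)]:
    m = diluent_mass_per_gram(c0, 20e8)
    print(f"{name}: add {m:.2f} g sterile diatomite per gram of agent")
# Output: ~21.55 g for HG-15 and ~23.90 g for JC-K3; the two diluted
# preparations are then mixed 1:1 by mass for application.
```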
Experimental design
The experiment was conducted in the Weifang Economic Development Zone (119°3′30″E, 36°48′14″N), Shandong Province, China, in 2022. Four-year-old Fuji apple (Red Fuji) plants grafted on the dwarfing rootstock M9T337 were selected as the test material [soil electrical conductivity (EC) = 671 μS/cm, pH = 7.6, 14.51 g/kg soil organic matter, 59.25 mg/kg soil available nitrogen, 21.46 mg/kg soil available phosphorus (by Olsen P test), and 122.17 mg/kg soil exchangeable potassium]. Before planting, all seedlings were fertilized with urea (46% N) at three different concentrations: 0 mg N/kg (N0, low nitrogen level), 200 mg N/kg (N1, normal nitrogen level), and 300 mg N/kg (N2, high nitrogen level), which corresponded to 0, 240, and 360 kg N/ha of field fertilizer (urea), respectively. Simultaneously, 10 g of potassium sulfate (containing 50% K2O) and 17 g of calcium superphosphate (containing 14% P2O5) were applied per plant. Apple plants with strong and uniform growth were selected and planted in pots in mid-March, with one plant per pot. The pots and soil used in this experiment were not sterilized. After 14 days of plant establishment, two treatments were applied at each of the three nitrogen application levels: without bacteria (CK, with the addition of 1,500 mL sterile water + 2 g sterile diatomite) or with bacteria (BIO, with the addition of 1,500 mL sterile water + 2 g mixed bacterial preparation). This resulted in a total of six groups, each containing 12 pots. The experimental plants were arranged randomly, and all other management followed standard field procedures. Samples were taken at the flower bud morphological differentiation stage in July 2022. This experiment was repeated thrice.

Rhizosphere soil sampling and analysis
Soil pH and EC values were analyzed using digital pH (FE20) and EC (FE930) meters (Mettler Toledo, Switzerland), respectively; the soil-water ratios used were 1:2.5 and 1:5. The organic matter content of the soil was determined using the method described by Walkley and Black (1934). The total nitrogen content in the soil (soil N) was determined using the Bremner (2009) method.
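For concreteness, the urea doses implied by the nitrogen levels above can be back-calculated from urea's 46% N content; this is a sketch of the arithmetic only, with per-pot soil mass left out:

```python
# Converting the target soil N rates described above into urea doses.
# Urea is 46% N by mass, so mg urea = mg N / 0.46. Figures are per kg of
# pot soil; the field equivalents quoted in the text are listed alongside.

UREA_N_FRACTION = 0.46

levels = {"N0": 0.0, "N1": 200.0, "N2": 300.0}    # mg N per kg soil
field  = {"N0": 0, "N1": 240, "N2": 360}           # kg N per ha (from the text)

for name, mg_n in levels.items():
    mg_urea = mg_n / UREA_N_FRACTION
    print(f"{name}: {mg_n:.0f} mg N/kg -> {mg_urea:.0f} mg urea/kg soil "
          f"(field equivalent {field[name]} kg N/ha)")
# N1 works out to ~435 mg urea/kg soil and N2 to ~652 mg urea/kg soil.
```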
DNA extraction and polymerase chain reaction (PCR)-based amplification
Soil genomic DNA was extracted from 0.5 g of soil using a FastDNA SPIN Kit for Soil (MP Biomedicals, Irvine, CA, USA) according to the manufacturer's instructions. The DNA quality was examined using 1.0% agarose gel electrophoresis, and the DNA concentration was quantified using a NanoDrop 2000 UV-Vis spectrophotometer (Wilmington, USA) (Zeng et al., 2021). Nested PCR was conducted to amplify specific fragments of the AMF 18S rRNA gene. The first PCR reaction consisted of a 20-μL mixture containing 1 μL of genomic DNA (approximately 10 ng), 2 μL of 2.5 mM dNTPs, 0.4 μL of FastPfu DNA Polymerase (5 U/μL), 0.4 μL of each primer [10 μM; AML1 (5′-ATCAACTTTCGATGGTAGGATAGA-3′)/AML2 (5′-GAACCCAAACACTTTGGTTTCC-3′) primer pair], 4 μL of 5-fold FastPfu DNA Buffer (Takara, Dalian, China), and molecular-grade water. The products from the first PCR (approximately 10 ng used as the template) were then amplified in a second PCR reaction using the primers AMV4.5NF (5′-AAGCTCGTAGTTGAATTTCG-3′) and AMDGR (5′-CCCAACTATCCCTATTAATCAT-3′), following the same protocol as the first PCR step. The thermal cycling conditions for both PCR steps were as follows: initial denaturation at 95°C for 3 min; 27 cycles of denaturation at 95°C for 30 s, annealing at 55°C for 30 s, and elongation at 72°C for 45 s; and final elongation at 72°C for 10 min. The PCR products were extracted from 2% agarose gels, purified with an AxyPrep DNA Gel Extraction Kit (Axygen, Union City, CA, USA) according to the manufacturer's protocol, and quantified with a QuantiFluor ST instrument (Promega, Madison, WI, USA).

Illumina MiSeq and bioinformatics analyses
Quantified and purified PCR products were sent to Majorbio BioPharm Technology Co. Ltd. (Shanghai, China) for sequencing using the Illumina MiSeq PE300 platform (San Diego, CA, USA). The raw sequences were deposited in the NCBI Sequence Read Archive (SRA) database (Accession ID PRJNA999929). The forward and reverse raw reads were merged using FLASH (Magoč and Salzberg, 2011) by overlapping paired-end reads with a required overlap length of >10 base pairs (bp) and were quality controlled using Trimmomatic software (Bolger et al., 2014). Low-quality sequences (average quality score < 20), sequences containing ambiguous bases, sequences without valid primer or barcode sequences, and sequences with a read length < 50 bp were excluded. The maximum permitted error ratio of the overlapping regions was 0.2, which was used as the basis for screening overlapping sequences. The sequences were then dereplicated, and singleton sequences were removed using Usearch 7.0 (Edgar, 2013). The sequences were subsequently clustered into operational taxonomic units (OTUs) at a 97% similarity cut-off using the QIIME software (Caporaso et al., 2010). After the sequences were clustered, the taxonomy of each OTU was classified from the domain level to the OTU level using the RDP Classifier algorithm against the MaarjAM database (Maarjam 081) (Öpik et al., 2010), with a default confidence threshold of 0.7.
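A minimal sketch of the read-level quality screen described above (average quality < 20, length < 50 bp, ambiguous bases), assuming plain FASTQ with Phred+33 qualities; merging with FLASH and OTU clustering happen outside this snippet:

```python
# Illustrative implementation of the stated QC thresholds, not the authors'
# pipeline: discard reads with mean Phred quality < 20, length < 50 bp,
# or ambiguous bases (N).

def passes_qc(seq, qual, min_mean_q=20, min_len=50):
    if len(seq) < min_len or "N" in seq.upper():
        return False
    mean_q = sum(ord(c) - 33 for c in qual) / len(qual)   # Phred+33 decoding
    return mean_q >= min_mean_q

def filter_fastq(path_in, path_out):
    with open(path_in) as fin, open(path_out, "w") as fout:
        while True:
            record = [fin.readline() for _ in range(4)]   # header, seq, '+', qual
            if not record[0]:
                break
            seq, qual = record[1].strip(), record[3].strip()
            if passes_qc(seq, qual):
                fout.writelines(record)

# Hypothetical usage: filter_fastq("merged_reads.fastq", "clean_reads.fastq")
```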
Real-time quantitative reverse transcription PCR (qRT-PCR) analysis
The PCR products were purified and ligated into the pMD18 vector using a pMD™ 18-T Vector Cloning Kit (TaKaRa Bio Inc.). Plasmid extraction and purification were performed using a MiniBEST Plasmid Purification Kit Ver. 4.0 (TaKaRa Bio Inc.). The concentration and purity of the plasmid were determined using a NanoDrop 2000 microspectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). After determining the plasmid copy number, the preparation was serially diluted to 10^1 to 10^8 copies. A standard curve (R² = 0.99), plotting the logarithm of the initial amount of template DNA as the abscissa and the Ct value of each diluted sample as the ordinate, was used to establish an amplification efficiency of 90-100%. The primers used for quantitative PCR (qPCR) amplification of the nitrogen-fixing bacteria were PolF (TGCGAYCCSAARGCBGACTC) and PolR (ATSGCCATCATYTCRCCGGA), while AM fungi were amplified using the AMV4.5NF and AMDGR primers. qPCR amplifications were performed on an ABI 7900 (USA) fluorescence quantitative PCR thermocycler using 15-μL reaction systems containing 7.5 μL of 2× SYBR Premix Ex Taq II, 0.3 μL of 50× ROX Reference Dye, 0.6 μL of 10 μmol/L forward primer, 0.6 μL of 10 μmol/L reverse primer, 2 μL of DNA template, and 4.0 μL of double-distilled water (ddH2O). The amplification program comprised an initial pre-denaturation at 95°C for 30 s, followed by 40 cycles of denaturation at 95°C for 5 s, annealing at 62°C for 30 s, extension at 72°C for 60 s, and signal acquisition at 83°C for 10 s. All samples were analyzed in triplicate.

Statistical analyses
Data analysis was performed using IBM SPSS 19.0 (IBM, Armonk, NY, USA). The plant and soil parameters followed a normal distribution, and Student's t-test and one-way analysis of variance (ANOVA) were used to compare differences among plant parameters (p < 0.05). The interaction between nitrogen application and bacterial addition was detected using a two-way ANOVA. Redundancy analysis (RDA) was conducted to examine the relationships between the relative abundance of fungal and AMF taxa and the chemical properties of the soil samples, using Canoco 4.5.1 (Microcomputer Power, Ithaca, NY, USA). The non-parametric factorial Kruskal-Wallis sum-rank test of the LEfSe tool was used to identify groups exhibiting significant differences in abundance. LEfSe utilizes linear discriminant analysis (LDA) to estimate the effect of the abundance of each component (species) on the differences.

Effects of nitrogen fertilizer and exogenous HNFB on soil chemical properties
Among the different nitrogen application treatments, the MBP and MBC were significantly higher in the BIO + N1 treatment group than in the BIO + N0 group (by 28.08 and 6.25%, respectively) and the BIO + N2 group (by 36.85 and 82.68%, respectively) (p < 0.05). The MBN content in the BIO + N1 treatment group was significantly higher than in the BIO + N2 treatment group (by 51.23%) (p < 0.05). The EC and pH values of the BIO + N0 and BIO + N2 treatment groups were significantly higher than those of the BIO + N1 group (p < 0.05). For the BIO treatment, as the concentration of applied nitrogen increased, MBN and MBP first increased and then decreased. Moreover, the MBC/MBN ratio decreased gradually, with values of 8.24, 7.27, and 6.02 at the three nitrogen application levels, respectively (Supplementary Figure S1).
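Returning to the qPCR calibration described in the qRT-PCR section above: amplification efficiency is conventionally derived from the standard-curve slope as E = 10^(−1/slope) − 1. The sketch below uses invented Ct values; only the formula, the dilution range, and the 90-100% acceptance window come from the text.

```python
# Standard-curve fit for absolute qPCR quantification: Ct is regressed on
# log10(template copies) over the 10^1-10^8 dilution series, and efficiency
# follows from the slope. Ct values here are illustrative.

import numpy as np

log_copies = np.arange(1, 9)                                     # 10^1 .. 10^8 copies
ct = np.array([33.1, 29.8, 26.4, 23.1, 19.8, 16.5, 13.2, 9.9])   # made-up Cts

slope, intercept = np.polyfit(log_copies, ct, 1)
r = np.corrcoef(log_copies, ct)[0, 1]
efficiency = 10 ** (-1.0 / slope) - 1.0

print(f"slope = {slope:.3f}, R^2 = {r**2:.3f}, efficiency = {efficiency:.1%}")
# A slope near -3.32 corresponds to ~100% efficiency; 90-100% is the
# acceptance window stated in the text.
```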
The MBC of the CK + N2 group was significantly lower than that of the CK + N0 group (by 39.07%) and the CK + N1 group (by 38.97%) (p < 0.05). The MBN of the CK + N1 group was significantly higher than that of the CK + N0 group (by 24.61%) and the CK + N2 group (by 112.68%) (p < 0.05). The EC of the CK + N2 group was significantly higher than that of the CK + N0 group (by 3.13%) and the CK + N1 group (by 9.65%) (p < 0.05). In the CK treatment groups, MBN first increased and then decreased with increasing nitrogen application levels. The MBC/MBN ratio exhibited a trend of initially decreasing and then increasing, with values of 8.46, 6.78, and 8.81 at the three nitrogen application levels, respectively. Nitrogen application did not result in significant differences in MBP or pH in the CK treatment groups (Supplementary Figures S1C,E). At the same nitrogen level, the MBP, MBC, and MBN levels in the BIO group were significantly higher than those in the CK group (p < 0.05). Additionally, the EC level in the CK group was significantly higher than that in the BIO group (p < 0.05). There was no significant difference in the pH levels between the BIO and CK groups. The soil N content of the BIO + N0 group was significantly higher than that of the CK + N0 group (p < 0.05) (Supplementary Figure S1).

Regardless of HNFB inoculation, excessive nitrogen application had a negative impact on MBN and MBC in the rhizosphere soil, further exacerbating salt stress. N1 application significantly increased the MBC and MBP content in the rhizosphere soil while reducing the degree of salt stress. Inoculation with HNFB also significantly increased MBN, MBC, and MBP levels in the rhizosphere soil under both N0 and N1 conditions, leading to a reduction in salt stress. Under excessive nitrogen application, HNFB significantly increased MBC and MBP in the rhizosphere soil. Under low-nitrogen conditions, HNFB significantly increased the total nitrogen content in the rhizosphere soil. The results of the two-way ANOVA further confirmed that the nitrogen application level and bacterial treatment had significant effects on the physical and chemical properties of the soil and showed interaction effects on MBP, MBC, and EC (Table 1). In total, 365,930 valid AMF sequences (average length, 215 bp) were obtained, accounting for 98.86% of the original sequences and covering most of the AMF community in the rhizosphere soil (Supplementary Figure S2). Based on a 97% similarity analysis, 60 OTUs were classified from the valid AMF sequences, comprising one phylum, three families, three genera, and 20 species. There were 23 OTUs common to all six treatments, 35 OTUs common to the BIO groups, and 23 OTUs common to the CK groups. The results showed that HNFB added 12 OTUs in the apple rhizosphere (Figure 1). Therefore, excessive nitrogen application was not instrumental in increasing the number of AMF species in the rhizosphere soil, whereas HNFB inoculation enriched the AMF species in the rhizosphere soil at the N0 and N1 levels.
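A sketch of the nitrogen × inoculation two-way ANOVA referred to above (Table 1); the data frame, column names, and values below are hypothetical stand-ins for the 3 × 2 design with replicates:

```python
# Two-way ANOVA with interaction for a 3 (nitrogen) x 2 (bacteria) design,
# as used for Table 1. Data are invented for illustration.

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "nitrogen": ["N0", "N1", "N2"] * 6,
    "bio":      (["CK"] * 3 + ["BIO"] * 3) * 3,
    "mbp":      [21, 30, 22, 35, 48, 33, 20, 31, 21,
                 36, 47, 34, 22, 29, 23, 34, 49, 32],
})

model = ols("mbp ~ C(nitrogen) * C(bio)", data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)   # main effects + interaction term
print(anova)
```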
Inoculation with exogenous HNFB increased the rhizosphere AMF community richness (Simpson, ACE, and Chao indices; Table 2) under N0 conditions and maintained rhizosphere AMF community richness under N1 and N2 conditions. After inoculation with exogenous HNFB, excessive nitrogen application significantly reduced AMF richness and diversity compared with those at low or normal nitrogen levels (Table 2). Two-way analysis further confirmed that nitrogen fertilizer and HNFB application significantly affected the AMF richness of the rhizosphere soil, with an interaction effect on AMF richness; however, nitrogen fertilizer and HNFB had no significant effect on, or interaction affecting, AMF diversity (Supplementary Table S1). The results of qRT-PCR revealed a gradual decline in the abundance of nitrogen-fixing bacteria in soil with an increase in the nitrogen application rate. In addition, the abundance of AMF was significantly lower at N0 and N2 than at N1 (p < 0.05), and at the same level of nitrogen application, the abundances of nitrogen-fixing bacteria and AMF in the HNFB-inoculated group were significantly higher than those in the CK group (p < 0.05) (Supplementary Figure S3). The top 10 most abundant AMF are shown in Figure 2. We found that the main AMF genus was Glomus. In detail, Glomus-Glo7-VTX00214, Glomus-sp.-VTX00304, Glomus-viscosum-VTX00063, and Glomus-sp.-VTX00301 were the dominant species in each treatment group (Figure 2; Supplementary Figure S4). Among the CK groups, the relative abundance of s__Glomus-Wirsel-OTU16-VTX00156 in the N0 group was 15.48%, which was significantly higher than that in the N1 (2.68%) and N2 (0.81%) groups (p < 0.001) (Figure 3A; Supplementary Figures S4B,D,F). Among the BIO groups, the relative abundance of Glomus-MO-G17-VTX00114 was 4.56% in the N1 treatment group, which was significantly higher than that in the N2 (3.72%) and N0 (1.41%) groups (p < 0.05) (Figure 3B; Supplementary Figures S4A,C,E).

TABLE 1 footnote: F and p values represent the results of the interaction analysis between nitrogen application levels and bacterial treatment (two-way ANOVA). HNFB, halotolerant nitrogen-fixing bacteria; MBN, microbial biomass nitrogen; MBC, microbial biomass carbon; MBP, microbial biomass phosphorus; EC, electrical conductivity; N, nitrogen; BIO, treatment wherein the three nitrogen application levels were set up with bacterial inoculation (Bacillus subtilis HG-15 + Bacillus velezensis JC-K3).

FIGURE 1 Venn diagrams depicting the AMF community structure in the different treatment samples at the OTU level. OTU, operational taxonomic unit; AMF, arbuscular mycorrhizal fungi; CK, control wherein the three nitrogen application levels were set up without bacteria; BIO, treatment wherein the three nitrogen application levels were set up with bacterial inoculation (Bacillus subtilis HG-15 + Bacillus velezensis JC-K3); N0, low nitrogen level; N1, normal nitrogen level; N2, high nitrogen level.

FIGURE 3 legend notes: "N treatment" indicates significant differences between the nitrogen application level groups for the same treatment (one-way ANOVA, p < 0.05); "Microbial treatment" indicates significant differences between treatments at the same nitrogen application level (two-sided t-test).
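For reference, the richness and diversity indices reported in Table 2 can be computed from per-OTU read counts as below (Simpson and Chao1 shown; ACE follows a similar rare-abundance logic). Counts are illustrative, not study data.

```python
# Simpson diversity and Chao1 richness from one sample's OTU count vector.

def simpson(counts):
    """Simpson diversity, 1 - sum(p_i^2)."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def chao1(counts):
    """Chao1: S_obs + F1^2 / (2 * F2), with F1 singletons and F2 doubletons."""
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    if f2 > 0:
        return s_obs + (f1 * f1) / (2.0 * f2)
    return s_obs + f1 * (f1 - 1) / 2.0   # bias-corrected form when F2 == 0

otu_counts = [120, 85, 60, 33, 20, 9, 4, 2, 2, 1, 1, 1]
print(f"Simpson = {simpson(otu_counts):.3f}, Chao1 = {chao1(otu_counts):.2f}")
```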
FIGURE 2 The relative abundance of AMF in the apple rhizosphere soil across the different treatment groups at the species level. AMF, arbuscular mycorrhizal fungi; CK, control wherein the three nitrogen application levels were set up without bacteria; BIO, treatment wherein the three nitrogen application levels were set up with bacterial inoculation (Bacillus subtilis HG-15 + Bacillus velezensis JC-K3); N0, low nitrogen level; N1, normal nitrogen level; N2, high nitrogen level.

The relative abundance of s__Glomus-Wirsel-OTU16-VTX00156 in the BIO group increased with increasing levels of applied nitrogen. Furthermore, the relative abundance of this fungus in the BIO + N0 group was significantly lower than that in the CK groups (Figure 3C) (p < 0.001). For the N0 and N2 treatment groups, the relative abundance of Glomus-MO-G17-VTX00114 in the BIO group was significantly higher than that in the CK group (Figures 3C,D) (p < 0.05). The relative abundance of Glomus-MO-G17-VTX00114 decreased after excess nitrogen application with or without exogenous HNFB inoculation (Figures 3A,B) (p < 0.05).

FIGURE 3 Significant differences in the relative abundance of AMF in the apple rhizosphere soil from the different treatment groups at the species level. AMF, arbuscular mycorrhizal fungi; CK, control wherein the three nitrogen application levels were set up without bacteria; BIO, treatment wherein the three nitrogen application levels were set up with bacterial inoculation (Bacillus subtilis HG-15 + Bacillus velezensis JC-K3); N0, low nitrogen level; N1, normal nitrogen level; N2, high nitrogen level.

Effects of nitrogen fertilizer and exogenous HNFB application on AMF community function in the apple rhizosphere
The PICRUSt2 software was used to predict the function of the microbial community detected in the samples based on the amplicon sequencing results, and the enzyme types produced by the flora in the different treatment groups and their relative levels were determined (Supplementary Table S2). The relative abundances of beta-glucosidase and acid phosphatase in the BIO + N1 group were significantly higher than those in the BIO + N0 and BIO + N2 groups (Figure 4A) (p < 0.05). The relative abundance of beta-glucosidase in the CK + N1 group was significantly lower than that in the CK + N0 and CK + N2 groups (Figure 4A) (p < 0.05). Acid phosphatase levels in the BIO group first increased and then decreased with increasing nitrogen application levels, and the BIO + N1 group had significantly higher levels than the BIO + N2 and BIO + N0 groups (p < 0.05). Nitrogen application in the CK group had no significant effect on the relative abundance of acid phosphatase (Figure 4B). In contrast, in the BIO group, the relative abundance of alkaline phosphatase increased with increasing nitrogen levels; the N2 group had significantly higher levels than the N0 and N1 groups (Figure 4C) (p < 0.05). There was no significant difference in the relative abundances of alkaline phosphatase and urease between the CK + N1 and CK + N2 groups; however, both were significantly higher than those in the CK + N0 group (Figures 4C,D) (p < 0.05). Urease levels in the BIO + N0 group were significantly higher than in the BIO + N1 and BIO + N2 groups (p < 0.05), whereas urease levels in the CK + N0 group were significantly lower than in the CK + N1 and CK + N2 groups (Figure 4D) (p < 0.05). In conclusion, excessive nitrogen application reduced the relative abundance of beta-glucosidase, acid phosphatase, and urease, regardless of whether HNFB was inoculated. However, when HNFB was inoculated, the relative abundance of alkaline phosphatase significantly increased with increasing nitrogen application levels.
At the same nitrogen application level, the relative abundance of beta-glucosidase in the BIO group was significantly higher than that in the CK group (p < 0.05) (Figure 4A). The relative abundance of acid phosphatase in the BIO + N1 group was significantly higher than that in the CK + N1 group (p < 0.05); however, there was no significant difference due to HNFB inoculation at the N0 and N2 levels (Figure 4B). The relative abundances of alkaline phosphatase and urease in the CK + N0 and CK + N2 groups were significantly higher than those in the BIO + N0 and BIO + N2 groups (p < 0.05), and there were no significant differences between the CK + N1 and BIO + N1 groups (Figures 4C,D). Therefore, inoculation with exogenous HNFB significantly increased the relative abundance of beta-glucosidase at the N0 level but significantly reduced the abundance of alkaline phosphatase and urease; the relative abundances of beta-glucosidase and acid phosphatase significantly increased at the N1 level, and the relative abundances of beta-glucosidase, alkaline phosphatase, and urease significantly increased at the N2 level.

FIGURE 4 Relative levels of enzymes produced by the microbial community in the different treatment groups. The relative levels of (A) beta-glucosidase, (B) acid phosphatase, (C) alkaline phosphatase, and (D) urease in the rhizosphere after the different treatments were detected using PICRUSt2 software. AMF, arbuscular mycorrhizal fungi; CK, control wherein the three nitrogen application levels were set up without bacteria; BIO, treatment wherein the three nitrogen application levels were set up with bacterial inoculation (Bacillus subtilis HG-15 + Bacillus velezensis JC-K3); N0, low nitrogen level; N1, normal nitrogen level; N2, high nitrogen level.

Correlation analysis of environmental factors
We conducted an analysis of the variance inflation factor (VIF). The VIF values of the environmental factors MBP (VIF = 4.59), MBN (VIF = 5.90), pH (VIF = 1.48), EC (VIF = 7.70), and soil N (VIF = 1.53) were less than 10 and could be used for RDA. However, the VIF of MBC was 19.63, which was greater than 10, indicating collinearity with other environmental factors. Therefore, MBC was removed from the RDA to ensure an accurate assessment of the effect of soil physical and chemical factors on the community structure.

Significant differences in microbial structure were observed after the different nitrogen treatments and the addition of HNFB. By combining environmental factor analysis, we found that EC and pH were positively correlated with Glomus-viscosum-VTX00063 and s__Glomus-Wirsel-OTU16-VTX00156, soil N and MBP were positively correlated with Glomus-MO-G17-VTX00114, and MBN was positively correlated with Glomus-sp.-VTX00304 (Figure 5A). The results of the correlation analysis of the environmental factors were consistent with those of the RDA. The clustering relationship between EC and pH was similar, with EC being significantly negatively correlated with Glomus-MO-G17-VTX00114 (p < 0.05). The pH showed significant negative correlations with Glomus-perpusillum-VTX00287 (p < 0.01), Glomus-Glo2-VTX00280 (p < 0.05), and Glomus-group-B-Glomus-ORVIN-GLO4-VTX00278 (p < 0.05). Soil N was significantly positively correlated with Glomus-MO-G17-VTX00114 (p < 0.01) (Figure 5B).
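A sketch of the VIF screen described above, with hypothetical stand-in data; statsmodels' variance_inflation_factor reproduces the drop-if-VIF>10 rule that removed MBC:

```python
# Collinearity screen before RDA: compute VIF per soil variable and drop
# variables with VIF > 10 (MBC was reported at 19.63). Data are synthetic,
# with MBC deliberately constructed to be collinear with MBN.

import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

rng = np.random.default_rng(0)
env = pd.DataFrame({
    "MBP":    rng.normal(30, 5, 18),
    "MBN":    rng.normal(40, 6, 18),
    "pH":     rng.normal(7.6, 0.1, 18),
    "EC":     rng.normal(670, 30, 18),
    "soil_N": rng.normal(60, 8, 18),
})
env["MBC"] = 8 * env["MBN"] + rng.normal(0, 5, 18)   # deliberately collinear

X = add_constant(env)
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
keep = [c for c, v in vifs.items() if v < 10]
print(vifs)
print("retained for RDA:", keep)
```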
PERMANOVA was used to interpret the correlation between the different environmental factors and the AMF community structure in the various samples, and a permutation test was employed to determine the statistical significance of the partition. The results revealed that EC, soil N, and MBP significantly influenced the samples (R² = 0.677, p = 0.001) (Figure 6). Both the BIO + N0 and BIO + N2 groups, as well as the BIO + N1 and BIO + N2 groups, were significantly affected by MBP (p < 0.05) and MBC (p < 0.05). Similarly, the CK + N0 and CK + N2 groups were significantly affected by EC (p < 0.05) and MBN (p < 0.05), as were the BIO + N1 and CK + N1 groups. Additionally, the BIO + N2 and CK + N2 groups were significantly affected by EC (p < 0.05), MBN (p < 0.05), and MBC (p < 0.05). The pH did not have an overall significant effect, nor did it differ significantly between groups. Therefore, following inoculation with exogenous HNFB, the nitrogen treatments primarily influenced the AMF community structure in the rhizosphere soil by regulating the MBP and MBC content. In the absence of inoculation with exogenous HNFB, nitrogen primarily affected the AMF community structure by regulating the EC and MBN content of the rhizosphere soil. Under the N1 and N2 treatments, the significant differences in AMF community structure between the BIO and CK groups were mainly attributed to differences in EC and MBN.

This study examined the co-occurrence patterns of AMF under nitrogen fertilizer and HNFB application by constructing two molecular ecological networks. In the AMF co-occurrence networks of the CK and BIO samples, Glomus-Glo7-VTX00214 played a vital role (Figures 7A,B). Consistent with the community composition depicted in Figure 2, Glomus-Glo7-VTX00214 had the highest relative abundance in each treatment, and its relative abundance in the BIO group was greater than that in the CK group under the N0 and N1 treatments. Furthermore, the results of the random forest analysis demonstrated that Glomus-Glo7-VTX00214 was the most crucial predictor of the AMF structure in the apple rhizosphere soil (Figure 7C).
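A sketch of the random-forest importance ranking behind Figure 7C, with a synthetic abundance table; the taxon and group labels follow the text, everything else is illustrative:

```python
# Impurity-based feature importances from a random forest trained to
# discriminate treatment groups from AMF relative abundances. The abundance
# matrix is synthetic stand-in data, not the study's sequencing results.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
taxa = ["Glomus-Glo7-VTX00214", "Glomus-MO-G17-VTX00114",
        "Glomus-Wirsel-OTU16-VTX00156", "Glomus-viscosum-VTX00063"]
X = rng.dirichlet(np.ones(len(taxa)), size=36)        # 36 samples x 4 taxa
y = np.repeat(["CK+N0", "CK+N1", "CK+N2", "BIO+N0", "BIO+N1", "BIO+N2"], 6)

rf = RandomForestClassifier(n_estimators=500, random_state=1).fit(X, y)
for taxon, imp in sorted(zip(taxa, rf.feature_importances_),
                         key=lambda t: -t[1]):
    print(f"{taxon}: importance = {imp:.3f}")
```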
Differences between the effects of nitrogen application levels and exogenous HNFB inoculation on soil physical and chemical properties
In our results, nitrogen application led to significant changes in soil MBC and MBN content. Most notably, excessive N application reduced MBC and MBN regardless of whether HNFB was inoculated (Supplementary Figures S1A,B). The main reason underlying this change may be that excessive nitrogen can increase nitrogen uptake by crops, but it also alters soil nitrogen availability (Cairney, 2011), affecting nitrogen absorption and reabsorption by plants (Ostonen et al., 2011) and the exchange of metabolites between roots and microorganisms. The competition between plants and microorganisms for available nutrients, such as MBC and MBN, is intensified under salt stress, thereby reducing the available MBC and MBN (Ganeshamurthy and Reddy, 2015). Furthermore, exogenous HNFB significantly increased the microbial biomass in the rhizosphere soil under both low and normal nitrogen conditions and reduced salt stress. We speculate that the main reason for this finding is that inoculation with PGPR can increase the number and diversity of rhizosphere microorganisms, improve microbial nitrogen fixation, promote metabolite exchange, and enhance ion-transport efficiency between roots and rhizosphere microorganisms, thereby increasing nutrient-use efficiency (Zhang et al., 2022).

We found that after HNFB inoculation, although all soil MBC/MBN ratios were > 6, these ratios decreased with increasing nitrogen application levels (Supplementary Figure S1). On one hand, this indicates that fungi still play a beneficial role in the rhizosphere soil, ensuring a relatively stable soil carbon sequestration capacity (Chowdhury et al., 2011; Morrissey et al., 2013). On the other hand, excessive nitrogen application further reduces the MBC/MBN ratio and leads to a shift in the apple rhizosphere soil flora from fungi to bacteria. However, in the control group without HNFB inoculation, the soil MBC/MBN ratio increased with increasing nitrogen application levels (Supplementary Figure S1). This suggests that an improved HNFB composition might increase the presence of synergistic fungal species and mitigate the adverse effects on soil microbial biomass of the C/N imbalance caused by a purely bacterial flora.

Consistent with previously reported findings (Silva et al., 2013; Dynarski and Houlton, 2017), we observed a decrease in the abundance of nitrogen-fixing bacteria in response to increased nitrogen application rates (Supplementary Figure S3A). The main reason underlying this observation may be that nitrogen levels are negatively correlated with the number of nitrogen-fixing bacteria and their nitrogen-fixing activity; lower levels of nitrogen application have been shown to be more conducive to the nitrogen-fixing activity of nitrogen-fixing microorganisms (Sarathchandra et al., 1988; Bailey et al., 2002; Bagyaraj, 2010). These findings indicate that the nitrogen application rate can be controlled within a reasonable range to ensure a high amount and activity of microbial nitrogen fixation in saline-alkali soil.

FIGURE 6 PERMANOVA analysis of the AMF community in the six treatment groups. AMF, arbuscular mycorrhizal fungi; CK, control wherein the three nitrogen application levels were set up without bacteria; BIO, treatment wherein the three nitrogen application levels were set up with bacterial inoculation (Bacillus subtilis HG-15 + Bacillus velezensis JC-K3); N0, low nitrogen level; N1, normal nitrogen level; N2, high nitrogen level.
Phosphorus is a limiting factor that affects the activity of nitrogen-fixing microorganisms. However, neither low nor excessive nitrogen application enabled the microbial agents to increase MBP, which increased significantly (p < 0.05) only under normal nitrogen application levels (Supplementary Figure S1C). Under the same nitrogen application conditions, MBP in the CK group was significantly lower than that in the BIO group, which further confirms that HNFB plays a beneficial role in improving soil microbial vigor and nutrient content when applied to saline soil (Primieri et al., 2021; Liu et al., 2022). The interaction between nitrogen application and exogenous HNFB on MBP further confirms the critical role of the balance between nitrogen and phosphorus in maintaining soil microbial activity (Table 1). qRT-PCR results revealed that the abundance of arbuscular fungi was significantly higher under N1 conditions than under N0 and N2 conditions, and higher under N2 conditions than under N0 (Supplementary Figure S3B). The same trend was observed for MBP with HNFB inoculation. Although it is unclear how native AMF communities respond to diminishing soil phosphorus availability, AMF can obtain phosphorus via exoenzymes (Meeds et al., 2021). Therefore, the effects of nitrogen application and exogenous HNFB on AMF community structure are also important factors influencing phosphorus accumulation in the microbial biomass.

Generally, a high input of nitrogen can enhance soil nitrification, which increases the release of H+ during nitrogen transformation and leads to soil acidification (Wang et al., 2023). However, our results showed that the rhizosphere soil pH in the BIO + N1 group was significantly lower than that of the BIO + N0 and BIO + N2 groups, while there were no significant differences among the other groups and treatments (Supplementary Figure S1E). Therefore, applying excessive amounts of urea in saline soils may not promote nitrification, and it is difficult to significantly affect soil pH by simply adjusting the amount of nitrogen applied. In contrast, when HNFB was inoculated under nitrogen application, it reduced the soil pH. This suggests that normal nitrogen application helps maintain a stronger HNFB-plant biochemical response.

FIGURE 7 Co-occurrence network analysis and random forest model showing the relationships between key AMF species. Co-occurrence network analysis of the (A) CK and (B) BIO groups. (C) Random forest model, wherein the ordinate is the species and the abscissa is the measured value of species importance. AMF, arbuscular mycorrhizal fungi; CK, control wherein the three nitrogen application levels were set up without bacteria; BIO, treatment wherein the three nitrogen application levels were set up with bacterial inoculation (Bacillus subtilis HG-15 + Bacillus velezensis JC-K3); N0, low nitrogen level; N1, normal nitrogen level; N2, high nitrogen level.
Differences in the effects of nitrogen application and HNFB inoculation on the AMF community
In nitrogen-deficient ecosystems, plants increase their nitrogen use efficiency (NUE) to meet growth needs (Li et al., 2016). However, excessive nitrogen application inhibits the symbiotic relationship between plants and microorganisms (Laws and Graves, 2005), including AMF (Panneerselvam et al., 2023). The effects of nitrogen treatment and HNFB on AMF community composition and abundance during short-term fertilization can help explain the changes in soil physicochemical properties and enzyme activities. Low-nitrogen treatment, owing to the lack of carbon return from the host, reduces fungal colonization in plants, particularly by AMF (Irving et al., 2022). This may be why N1 had the highest AMF abundance and N0 the lowest in the CK group (Table 2). In contrast, the AMF community richness and diversity in the BIO group were higher than those in the CK group at the same nitrogen application level, suggesting a more sustainable low-input agricultural cropping system, consistent with a previous study (Primieri et al., 2021).

In the BIO group, there was no significant difference in AMF diversity between the treatment without nitrogen application and the N1 treatment; however, the diversity in the N0 group was significantly higher than that in the N2 group (Table 2). The negative effect of excess nitrogen on the AMF community in the apple rhizosphere aligns with the findings of previous studies (Zeng et al., 2021). Hence, there is compelling evidence to suggest that long-term and/or increased application of exogenous HNFB under salt stress conditions can effectively improve the microbial ecology of the crop rhizosphere soil, positively influencing AMF community structure and diversity. The bacterial members of exogenous HNFB are indispensable for maintaining functional stability.

Similarly, nitrogen and exogenous HNFB caused changes in the AMF community composition (Figures 2, 5). The dominant relative abundance of s__Glomus-Wirsel-OTU16-VTX00156 in the CK + N0 group decreased significantly with increased nitrogen content and with HNFB inoculation (Figure 2; Supplementary Figure S4). This suggests that this strain is better adapted to low-nitrogen conditions. It is likely that exogenous HNFB and this strain have a mutually inhibitory relationship, and the combination of HNFB and this strain may reduce the effect of bacterial action.
Effects of nitrogen application and exogenous HNFB inoculation on AMF community function
Previous studies have found that a sufficient nitrogen source is conducive to the synthesis of phosphatases (Braun et al., 2010), particularly alkaline phosphatases (Chen et al., 2023b). Our results demonstrate that the activities of beta-glucosidase, alkaline phosphatase, acid phosphatase, and urease were significantly affected by the nitrogen application level and exogenous HNFB (Figure 6). Nitrogen fertilizer has been reported to increase soil β-glucosidase and alkaline phosphatase activities while reducing urease activity; this contradicts the results of the β-glucosidase and urease assessments conducted in this study (Figure 4). One possible reason is that the number, diversity, and richness of microorganisms in salt-stressed soil are lower than in regularly cultivated land (Chen et al., 2021). The microbial community structure in the rhizosphere soil is influenced by crop root metabolites and differs significantly from that in the topsoil (Wang et al., 2021). Under the N1 treatment, nitrogen-fixing bacteria expedite the conversion of soil substances, enhance metabolism in plant roots, promote root shedding, and increase the soil organic matter content; as a result, the substrates for enzymatic reactions and enzyme activity are elevated (George et al., 2006). The increase in alkaline phosphatase levels may be associated with an increase in MBC content as nitrogen application levels rise: MBC is a major predictor of the abundance of microorganisms carrying phoD and is positively correlated with alkaline phosphatase activity (Luo et al., 2019). The decrease in urease activity can be attributed to inhibition by the hydrolysis product of urea (NH4+), as urease is involved in the nitrogen release process and NH4+ is considered an end product of urease activity. Microbial communities determine the relative abundance of enzyme-encoding genes and influence their expression. Therefore, significant differences in soil enzyme activity may be related to differences in microbial biomass and composition.

Exogenous HNFB, nitrogen application, indigenous flora, and plants are closely linked, and focusing solely on HNFB or nitrogen application may not provide a deeper understanding of the process of soil phosphorus transformation under real-world production conditions (Smith and Read, 2008). When soil phosphorus availability is low, AMF can accelerate soil organic phosphorus mineralization and inorganic phosphorus activation by secreting phosphatases and organic acids (Fujii et al., 2012; Fan et al., 2019). For example, nitrogen application enhances acid phosphatase activity, increases the hydrolysis of ester-phosphate bonds in soil organic phosphorus, and releases orthophosphate for plant uptake. Moreover, organic acids can release orthophosphate from the medium and other readily decomposable forms of soil inorganic phosphorus by chelating iron and aluminum (Lin et al., 2020). However, once AMF infect plant roots, if there is a change in the AMF community structure, the polyphosphate absorbed by the extraradical hyphae is degraded into orthophosphate (Solaiman et al., 1999). This degradation promotes soil phosphorus transformation and plant phosphorus absorption.
The positive response of key AMF species to exogenous HNFB
The triple symbiotic system consisting of nitrogen-fixing bacteria, AMF, and crops synergistically promotes crop biomass and nitrogen fixation; this growth-promotion effect is significantly superior to that of rhizobia or AMF symbioses with crops alone (Primieri et al., 2021). Previous studies have shown that AMF contribute more to host growth under stressful conditions than under normal conditions (Feng et al., 2020). Our results also confirm that AMF can establish effective interactions with exogenous HNFB and highlight the presence of core strains and functions that exert important synergistic effects. This synergy resembles that reported for other crops, such as Glycine max (Wang et al., 2011), Sibiraea angustata (He et al., 2021), and Amorpha canescens (Larimer et al., 2016), and it may play a crucial role in sustainable low-input agricultural planting systems (Artursson et al., 2006). However, these effects depend on the specific combination of species involved in the interaction, specifically HNFB and/or AMF. A meta-analysis has even reported conflicting results, contradicting the hypothesis that AMF and nitrogen-fixing bacteria always exhibit synergistic effects in different plants (Larimer et al., 2010). This suggests that the occurrence of any synergistic effect may depend on the properties of the microsymbionts involved, as well as the spatial and temporal scales, the host, and/or the environmental conditions. The findings of our study provide a theoretical reference for the interaction between HNFB and AMF, with an emphasis on the dominance of Bacillus. AMF can form a complex mycelial network with plants, affecting plant growth and stress resistance. Glomus is the dominant AMF genus in many soil types, and it was the only genus identified in this study. This result is not unique: for example, more than 740,000 valid sequences were obtained from 77 citrus root samples, 99% of which were of Glomus; furthermore, Glomus-MO-G17-VTX00114 was confirmed to be a key species for ensuring the stability of the AMF community in the rhizosphere and improving crop stress resistance (Song et al., 2015). Accordingly, our study further confirmed that the relative abundance of Glomus-MO-G17-VTX00114 is reduced by excessive nitrogen application (Figures 3A,B) and can be increased by HNFB application (Figures 3C,D). Moreover, Glomus-Glo7-VTX00214 showed potential as an auxiliary strain for HNFB in the apple rhizosphere AMF. Therefore, these results suggest an effective way to improve the relative abundance of key AMF species in the apple rhizosphere. Our proposed explanations are: (1) salt stress exerts a filtering effect on AMF (Rath et al., 2018), and Glomus has a significant advantage in terms of salinity tolerance or competitiveness; (2) plant variety is the most important factor affecting the composition of rhizosphere microorganisms (Edwards et al., 2023), and among AMF, Glomus may interact extremely closely with the root system of the examined apple variety. Several application studies have also found that Glomus can promote the growth of crops on saline-alkali land (Ouziad et al., 2006; Jahromi et al., 2008; Latef and He, 2014; Bharti et al., 2016; Amir and Samira, 2023), indicating that Glomus species have great potential for application in saline-alkali agricultural production.
In this study, the relative abundance of Glomus-MO-G17-VTX00114 under the N1 treatment in both the BIO and CK groups was significantly higher than that under the N0 and N2 treatments. Generally, moderate nitrogen fertilization is more beneficial to crops and the environment; this strain likely forms a synergistic relationship with the two co-inoculated nitrogen-fixing bacteria to improve the rhizosphere AMF community structure and the salt tolerance of apple trees on saline land (Figure 3). Glomus-Glo7-VTX00214 was confirmed to play an important role in the AMF community composition and was the most important predictor (Figure 6). This strain can be used as an auxiliary strain to improve the structural and functional stability of the core flora. Here, different nitrogen application levels were included to increase the accuracy of the results. Moreover, the assessments of flora composition and key species were based on accurately identified and named dominant strains, ensuring that key strains can be isolated and cultured. Given the key role of AMF in plant productivity, nutrient cycling, and ecosystem responses to global change, a deeper understanding of the cross-scale coupling between plant and AMF diversity will reduce the uncertainty of predicted ecosystem consequences of species gains and losses (Fei et al., 2021). Therefore, the results of this study provide an important reference for the construction of a multitrophic synthetic HNFB community with a more reasonable structure and greater species diversity.

In general, our results showed that adding inorganic nitrogen alone weakened the function of AMF, whereas inoculation with HNFB stimulated its activity, thereby improving the salt and alkali resistance of plants. Excessive nitrogen application has a negative effect on the structure and function of the AMF community that is worse than that observed under low-nitrogen conditions. The core and auxiliary strains, identified under the complex treatments and conditions of field experiments, are crucial references for the artificial synthesis of HNFB consortia that can stably colonize saline-alkali land and exhibit a stronger synergistic effect. Thus, these results provide an overall "landscape" of the apple-AMF association in agricultural settings in major apple-producing areas of China under different nitrogen application levels. They should serve as a guide for the future development of apple-adapted AMF as potential bio-fertilizers.
Conclusion

This study revealed the uniqueness of the AMF community in the apple rhizosphere in saline-alkali soil and showed that the community was significantly affected by different nitrogen application levels and HNFB inoculation. This is evidenced by the following findings. Inoculation with exogenous HNFB promoted soil nutrient accumulation and alleviated salt stress; simultaneously, exogenous HNFB increased the richness of rhizosphere AMF communities under low-nitrogen conditions and maintained their richness under both normal and high nitrogen application. However, excessive nitrogen application significantly reduced the richness and diversity of AMF after exogenous HNFB inoculation, highlighting the ecological risk that excessive nitrogen application in saline soil poses to the AMF community. Furthermore, within the apple rhizosphere AMF, the strains Glomus-MO-G17-VTX00114 and Glomus-Glo7-VTX00214 exhibited potential as core and auxiliary strains, respectively. The new strain combination analyzed in this study is expected to be developed into a multitrophic HNFB consortium with a more balanced structure and a greater variety of species. Considering the importance of AMF for crop growth in saline soil, understanding the response mechanisms of apple trees and rhizosphere microorganisms to nitrogen application and exogenous HNFB is essential for the sustainable management of apple agriculture in saline-alkali soil. Although this study did not precisely determine the appropriate amount of nitrogen fertilizer for apple tree growth on saline land, it provides data for exploring the relationships among exogenous HNFB, plants, and AMF communities under different nitrogen application levels in complex habitats. This study could contribute to enhancing the salt tolerance of crops on saline land and reducing the reliance on chemical nitrogen fertilizers.

FIGURE 5 (A) Redundancy analysis of the AMF community based on Bray-Curtis distances. Different colored points represent samples from the different treatments; the closer two sample points are, the more similar their species compositions. The red arrows indicate environmental factors, and the blue arrows indicate the dominant species. (B) Correlation between different environmental factors and AMF species. The X-axis and Y-axis represent environmental factors and species, respectively, and the right legend shows the color interval of the different R values. The left and upper sides show the clustering trees for the species and environmental factors, respectively. *p ≤ 0.05, **p ≤ 0.01, ***p ≤ 0.001. This analysis is based on a weighted UniFrac matrix. AMF, arbuscular mycorrhizal fungi; CK, control wherein the three nitrogen application levels were set up without bacterial inoculation; BIO, treatment wherein the three nitrogen application levels were set up with bacterial inoculation (Bacillus subtilis HG-15 + Bacillus velezensis JC-K3); N0, low nitrogen level; N1, normal nitrogen level; N2, high nitrogen level.

TABLE 1 Interaction analysis of nitrogen application levels and HNFB treatment.

TABLE 2 Diversity index of AMF in the apple rhizosphere soil samples from different treatment groups.
The Need for Person-Centered Measures for Dementia Research and Care

Abstract: The importance of person-centered medical and psychosocial care has become widely recognized, but there is abundant evidence that care is not always person-centered. In 2018, the Alzheimer's Association published their evidence-informed Dementia Care Practice Recommendations, which address nine domains, all grounded in a person-centered perspective. Following that work, the Association launched LINC-AD (Leveraging an Interdisciplinary Consortium to Improve Care and Outcomes for Persons Living with Alzheimer's and Dementia). An early effort of LINC-AD, and the focus of this symposium, examined what measures are available to guide care and assess outcomes, and the extent to which they embrace person-centeredness. The results have been disappointing. This session will highlight the importance of person-centered measures in five domains of the Dementia Care Practice Recommendations, based on comprehensive reviews of the literature. Each paper, presented by LINC-AD research advisors, will examine available measures and raise questions about gaps using a person-centered lens. Katie Maslow will describe frequently used measures and identify person-centered measures that could be added to studies of alternate procedures intended to increase detection and diagnosis. Drs. Mast and Molony will discuss a person-centered approach to item development and testing for assessment. Emilee Ertle will discuss the need to measure interpersonal and contextual factors associated with behavioral expressions. Drs. Prizer and Zimmerman will compare measures of dressing ability and their person-centered components. Dr. Calkins will examine the strengths and limitations of environmental assessment tools. As Discussant, Dr. Gitlin will integrate the findings from all five presentations, suggesting directions for the future.

Grandmothers living with or raising grandchildren who had just completed the final data point of an NIH-funded, national, behavioral RCT were asked to complete an additional data collection point to capture the effects of the COVID-19 pandemic on their families' access to healthcare and financial security. In Spring 2020, 258 grandmothers completed measures of access to healthcare and financial security (3- and 4-item composite scales), family strain, family functioning, and psychosocial and demographic variables. Financial security (Adj. R2 = .52) was explained by knowing other grandfamilies; better family functioning; and fewer financial worries, unmet service needs, and depressive symptoms. Access to healthcare (Adj. R2 = .24) was explained by being married, being employed, and having fewer financial worries and unmet service needs. The findings that family functioning, knowing other grandfamilies, and depressive symptoms contributed to financial security, and that marital and employment status affect access to healthcare, show the importance of support.

IMPLEMENTING AN INTERVENTION PROGRAM DURING THE COVID-19 PANDEMIC: CHALLENGES AND SUCCESSES
Nancy Mendoza, The Ohio State University, Ohio, United States

During the COVID-19 pandemic, the implementation of intervention programs for grandfamilies has faced multiple challenges. In this paper, we will present some of the challenges and successes of introducing the GRANDcares Plus Project (GRANDc+) during the COVID-19 pandemic.
As an intervention program, GRANDc+ has demonstrated positive outcomes for grandfamilies, such as increased satisfaction with life, knowledge of services, self-care practices, and supportive social networks. Due to the pandemic, the implementation of GRANDc+ has been met with many challenges, including training of facilitators, following the CDC's COVID-19 guidelines and recommendations, and considering grandfamilies' needs, concerns, and safety. The pandemic has had, and continues to have, detrimental effects on grandfamilies; this makes it more vital than ever to support grandfamilies through interventions like GRANDc+, despite the challenges we may face. Our presentation will provide insights into identifying, managing, and overcoming the challenges of implementing interventions during the COVID-19 pandemic.

THE NEED FOR PERSON-CENTERED MEASURES FOR DEMENTIA RESEARCH AND CARE
Chair: Sam Fazio; Co-Chair: Sheryl Zimmerman; Discussant: Laura Gitlin

ADDING PERSON-CENTERED MEASURES TO RESEARCH ON DETECTION AND DIAGNOSIS OF DEMENTIA
Katie Maslow, Gerontological Society of America, Washington, District of Columbia, United States

In the United States, numerous studies on detection and diagnosis of dementia show that large proportions of subjects refuse initial screening tests. Moreover, among those who accept the tests, score poorly, and are therefore referred for a diagnostic evaluation, large proportions do not follow up to get the evaluation.
Available data on the characteristics of subjects who refuse initial screening and follow-up evaluation suggest that incorporating procedures based on person-centered concepts and practices, such as procedures that acknowledge individuals' unique characteristics and attempt to involve, enable, and empower them, could lead to more effective detection and diagnosis. Based on the results of an analysis of measures used in studies conducted in the U.S. and elsewhere, this presentation will describe frequently used measures and identify person-centered measures that could be added to studies of alternate procedures intended to increase detection and diagnosis.

Person-centered principles continue to redefine the nature of dementia care, but less attention has been given to the integration of person-centered principles into clinical assessment and dementia research. As a result, identification of deficits and cognitive impairment tends to dominate clinical and research efforts, whereas strengths and positive characteristics need more research. This paper examines existing positive psychosocial measures of psychological wellbeing, hope, spirituality, resilience, social relationships, dignity, and at-homeness. Many of these measures demonstrate strong psychometric properties and have been identified as promising outcome measures for strengths-based studies and approaches to care. This paper will evaluate the extent to which these measures used a person-centered approach to item development and testing, and whether item content is consistent with person-centered principles. Future directions for instrument development require greater inclusion of people living with dementia and family caregivers.

PERSON-CENTERED ASSESSMENT OF BEHAVIOR CHANGES IN PEOPLE WITH DEMENTIA
Benjamin Mast (University of Louisville, Louisville, Kentucky, United States), Gail Mountain (University of Bradford, West Yorkshire, England, United Kingdom), Ann Kolanowski (Penn State, University Park, Pennsylvania, United States), Esme Moniz-Cook (University of Hull, Hull, England, United Kingdom), Margareta Halek (Witten/Herdecke University, Witten, Germany), and Emilee Ertle (University of Louisville, Louisville, Kentucky, United States)

Behavioral and psychological symptoms of dementia are increasingly being reconceptualized as expressions of distress and unmet needs. Measures that evaluate context are needed to increase our understanding of the factors that influence these expressions. This review evaluated measures for two common behavioral states that caregivers experience as challenging: apathy and resistance to care. A systematic literature search identified measures of apathy or resistance to care for people living with dementia. Eight measures of apathy and three measures of resistance to care were identified. The reliability and validity of these measures were evaluated using the COSMIN framework, along with the reported contextual factors within which the behavior occurs. The identified measures had fair to good reliability and validity in people living with dementia. However, available measures need to move beyond symptomatic constructs for this complex paradigm, and toward the varied interpersonal and contextual factors associated with behavioral expression.

In 2018, the Alzheimer's Association set forth Dementia Care Practice Recommendations in nine domains, one being support for activities of daily living (e.g., dressing, toileting, eating/nutrition).
For example, preservation of dressing independence is important for dignity and autonomy, and for decreasing caregiver burden. Measurement is necessary to guide care and assess outcomes related to dressing, but the availability of related measures to assess processes, structures, and outcomes of care has not been examined; moreover, the extent to which the related measures are person-centered is completely unexplored territory. This session will present a critical assessment of available measures grounded in the Donabedian Model. Of 21 identified measures, 4 assessed dressing alone,
Accounting for spatiality of renewables and storage in transmission planning

A B S T R A C T

The current governance process to plan the German energy system omits two options to substitute grid expansion: first, placing renewables closer to demand instead of where site conditions are best; second, utilizing storage instead of additional transmission infrastructure to prevent grid congestion. In this paper, we apply a comprehensive capacity expansion model based on the AnyMOD modelling framework to compare the status quo to alternative planning approaches for a fully renewable energy system. To represent the spatiality and fluctuations of renewables, the German electricity sector is modelled with great spatio-temporal detail, covering 38 NUTS2 regions and hourly time-steps. In addition to the German electricity sector, the analysis also accounts for the exchange of energy with the rest of Europe and for the demand for electricity and electricity-based fuels, like hydrogen or synthetic gases, from the industry, transport, and heat sectors. The results reveal that a first-best solution can be well approximated if the current planning approach also considered storage for congestion management. Placing renewables differently has no significant effect in our case, because the available potential must be exploited almost entirely, leaving little room for optimization. Furthermore, a sensitivity analysis of the first-best scenario prohibiting additional transmission lines entirely suggests that grid expansion can be substituted at tolerable costs.

Introduction

To decarbonize the energy system, the primary supply of energy has to shift towards renewables like wind and solar. As a result, the topology of power generation also shifts and new options for grid planning arise. First, the cost and potential of wind or solar greatly depend on location and, compared to fossil sources, the capacities of individual plants are an order of magnitude smaller [1].
This imposes a trade-off on their deployment: either place plants where site conditions are best and rely on the grid to bring electricity to consumers or, to reduce the need for transmission infrastructure, place plants close to demand. Second, with increasing shares of wind and solar, matching intermittent generation with demand increasingly requires storage systems [2]. If placed the right way, storage systems can be charged while the grid is underutilized and discharged when the grid is under stress to relieve congestion, making storage a substitute for grid expansion.

In Germany, planning transmission infrastructure is the responsibility of the transmission system operators (TSOs). In a continuous process, the four TSOs, under regulation of the Federal Network Agency, develop scenarios for the next 15 years of power supply and use these scenarios to identify impending congestion and outages. Since Germany, a single zonal market, pursues a "copper plate", meaning free flow of electricity within the country, it is the TSOs' task to prevent any congestion and enable market-based dispatch of all generators plus commercial exchanges with neighboring markets. Therefore, planning is focused on optimizing operation or expanding the transmission grid. Only in extreme situations, or as a temporary measure to manage congestion until other projects are completed, do TSOs adjust the market-based dispatch ex post, which is referred to as redispatch [3].

In addition, the outlined process does not account for the two options to substitute grid infrastructure in renewable systems: placing renewables closer to demand and storage systems. Investment into generation capacities is private and driven by a single zonal market and a support scheme for renewables that is largely independent of location. Consequently, the sites selected for renewables do not reflect the spatiality of demand or the bottlenecks of the transmission grid. In the past, this led to a concentration of investment in the north, contributing to congestion within the German market zone. For storage systems the situation is similar: the market design provides no incentive for regional investments, and TSOs do not include them in the planning process. Regulation in other European countries is similar, although smaller market zones often provide better incentives for regional investments [4].

Grimm et al. [5] and Kemfert et al. [6] investigate how including redispatch in the planning framework, and not just as a temporary measure, impacts grid expansion. Both papers base their analysis on the same TSO projections for 2035 but apply different models [7]. The multi-stage equilibrium model in Grimm et al. relies on a stylized grid representation but accounts for the different objectives of TSOs, private investors, and the central planner. The optimization model in Kemfert et al., on the other hand, is limited to the central planner but represents the power grid in greater detail. Both papers find that deviating from the zonal market dispatch increases social welfare and can substitute 57 percent of planned transmission lines according to Grimm et al., or 48 percent according to Kemfert et al., respectively. In addition, Grimm et al. point out that in a first-best case, where investment into generation considers grid constraints as well, the required transmission lines are reduced by two thirds. Using a very disaggregated model, Drechsler et al. [8] also find that the location of renewable energies has a clear impact on transmission requirements.
Following up on these findings, this paper investigates how the inclusion of redispatch and the placement of generation and storage systems impact system planning. In contrast to the sources above, we do not base our analysis on current energy scenarios by the TSOs, but on our own scenario that models a fully renewable energy system in Germany and Europe. This system is characterized by intermittent renewables, a consequent dependence on storage, and new demands for electricity outside the power sector. Thus, it fundamentally differs from the system analyzed in previous research. The applied model is introduced in section 2, followed by the comparative scenarios and the underlying data assumptions in section 3. The results obtained on this basis are discussed in section 4, before a summary of key findings, policy implications, and an outlook on future work follows in section 5.

Applied modeling framework

Quantification of the different planning processes follows a two-step procedure based on a techno-economic optimization model of the German energy system using the AnyMOD framework [9,10]. The model chooses from a range of technologies that generate, convert, or store energy carriers to efficiently satisfy an exogenous demand. Eqs. 1a to 1h provide a highly stylized version of the model formulation; to differentiate them, variables are written in capital and parameters in lower-case letters.

\min \; C^{inv} + C^{var}    (1a)

\sum_{te} ( G_{te,c,t} + S^{out}_{te,c,t} - S^{in}_{te,c,t} ) = d_{c,t}  \quad \forall c,t    (1b)

L_{te,t} = L_{te,t-1} + S^{in}_{te,t} - S^{out}_{te,t}  \quad \forall te,t    (1c)

S^{in}_{te,t} \le K^{st}_{te}, \quad S^{out}_{te,t} \le K^{st}_{te}  \quad \forall te,t    (1d)

L_{te,t} \le K^{size}_{te}  \quad \forall te,t    (1e)

G_{te,c,t} \le cf_{te,t} \, K_{te}  \quad \forall te,c,t    (1f)

C^{inv} = \sum_{te} c^{inv}_{te} \, K_{te}    (1g)

C^{var} = \sum_{te,c,t} c^{var}_{te} \, G_{te,c,t}    (1h)

According to the energy balance in Eq. 1b, generation G_{te,c,t} plus storage output S^{out}_{te,c,t} less storage input S^{in}_{te,c,t}, summed over all technologies te, has to match the demand given by the parameter d_{c,t} at each time-step t and for each energy carrier c. The following storage balance in Eq. 1c connects storage in- and output with the storage level L_{te,t} at each time-step for each storage technology. Eqs. 1d to 1f enforce capacity constraints on storage in- and output, storage levels, and generation, ensuring production does not exceed the capacity K_{te}. For generation, the capacity constraint includes a capacity factor cf_{te,t} that specifies the share of capacity available for generation at time-step t. Finally, the objective function in Eq. 1a is composed of total investment costs, computed from capacities and specific investment costs in Eq. 1g, and total variable costs, computed from generation and specific variable costs in Eq. 1h. For a full description of the underlying optimization model, which also includes the representation of different regions and how they can exchange energy carriers, see Göke [9].
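To make the structure of Eqs. 1a to 1h concrete, the following minimal sketch implements an analogous single-region linear program with the open-source PuLP library. It is not part of the AnyMOD framework: the technology set, all cost and demand numbers, and the assumed 4-hour storage energy-to-power ratio are illustrative assumptions.

# Minimal sketch of the stylized capacity-expansion LP in Eqs. 1a-1h,
# written with the PuLP modelling library; all data is illustrative.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

T = range(4)                                   # time-steps t
d = [80, 120, 100, 90]                         # demand d_t
cf = {"pv":   [0.0, 0.9, 0.5, 0.0],            # capacity factors cf_{te,t}
      "wind": [0.6, 0.2, 0.4, 0.7]}
c_inv = {"pv": 50, "wind": 90, "battery": 30}  # specific investment costs
c_var = {"pv": 1, "wind": 2}                   # specific variable costs

m = LpProblem("stylized_model", LpMinimize)
K    = {te: LpVariable(f"K_{te}", lowBound=0) for te in c_inv}
G    = {(te, t): LpVariable(f"G_{te}_{t}", lowBound=0) for te in cf for t in T}
Sin  = {t: LpVariable(f"Sin_{t}",  lowBound=0) for t in T}
Sout = {t: LpVariable(f"Sout_{t}", lowBound=0) for t in T}
Lvl  = {t: LpVariable(f"Lvl_{t}",  lowBound=0) for t in T}

# Eq. 1a: total investment costs (1g) plus total variable costs (1h)
m += lpSum(c_inv[te] * K[te] for te in c_inv) \
   + lpSum(c_var[te] * G[te, t] for te in cf for t in T)

for t in T:
    # Eq. 1b: energy balance for a single carrier, electricity
    m += lpSum(G[te, t] for te in cf) + Sout[t] - Sin[t] == d[t]
    # Eq. 1c: storage balance, cyclic over the modelled horizon
    m += Lvl[t] == Lvl[T[t - 1]] + Sin[t] - Sout[t]
    # Eq. 1d: storage in- and output limited by storage power capacity
    m += Sin[t] <= K["battery"]
    m += Sout[t] <= K["battery"]
    # Eq. 1e: storage level, assuming a 4-hour energy-to-power ratio
    m += Lvl[t] <= 4 * K["battery"]
    # Eq. 1f: generation limited by capacity times capacity factor
    for te in cf:
        m += G[te, t] <= cf[te][t] * K[te]

m.solve()
print({te: round(value(K[te]), 1) for te in c_inv})

Solving this toy instance prints the cost-optimal capacities; in the full model, the same structure is repeated for every region and energy carrier and coupled through exchange variables.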
In Figure 1, all considered technologies, depicted as gray circles, and their interactions with energy carriers, depicted as colored squares, are visualized. Entering edges of technologies refer to their input carriers; outgoing edges relate to their outputs. For example, the biomass plant uses biomass as an input to generate electricity. Storage technologies, like pumped hydro or compressed air energy storage (CAES), have an entering and an outgoing edge to represent charging and discharging. Due to its pivotal role for renewable systems, the model's focus is on electricity. For long-term storage of electricity, the analysis includes hydrogen and synthetic methane. Setting an exogenous demand for these carriers also captures the demand for synthetic fuels outside of the power sector, for example in aviation. Beyond that, the representation of other sectors is limited to their electricity demand induced by sector integration.

These demands are treated separately using the carriers "residential heat" for hot water and space heat, "process heat" for industrial heating, and "e-mobility" for electric vehicles. Demand from these sectors is exogenous, since the model does not include the deployment of technologies outside the power sector. The demand for each carrier has to be met by the various technologies for each considered time-step and region, whereby time-steps and regions can vary by energy carrier. For electricity, the model applies an hourly temporal resolution to capture the fluctuating nature of intermittent renewables. Hydrogen and synthetic gas are balanced daily, since they are less sensitive to short-term imbalances. Electric mobility uses a daily resolution, too, assuming vehicle charging is flexible. Lastly, residential and process heat apply a 4-hour resolution to account for the thermal inertia of buildings and load-shifting potentials in industry.

The spatial resolution is uniform for all energy carriers but varies by scenario for reasons that will be elaborated on in the following section. Figure 2 provides an overview of all regions (sources: [11], ENTSO-E [12]). These include 29 regions for European countries and 38 NUTS2 regions for Germany, which are modelled separately since our research question focuses on spatial effects and requires great regional detail. Furthermore, the model allows for regular trading: electricity, hydrogen, and synthetic gases can be exchanged between regions, given the required grid infrastructure. Investment and dispatch for this infrastructure is, analogously to technologies, calculated by the model. Since the paper focuses on Germany, other European countries are only included to account for cross-border trade of energy. Therefore, technology and grid capacities for these countries are exogenous, and the model only decides on their dispatch. The high spatio-temporal detail for Germany, paired with a representation of the European energy system and of the impact of sector integration on the system, is a unique feature of the model enabled by the AnyMOD framework.

For deciding on investment and dispatch of technology and grid capacities, the model considers investment, operating, and dispatch costs to find the least-cost solution that satisfies the given demand. Mathematically, our approach is thus a linear minimization of system costs, which, since demand is exogenous and therefore assumed to be inelastic, is equivalent to welfare maximization. The model is limited to a single year and omits the transformation from today's system to a renewable one. Also, the exchange of electricity neglects loop flows and how line expansion affects transmission losses. These simplifying assumptions are necessary to keep the computational complexity manageable.

Considered scenarios

Analysis of the different planning processes builds on several scenarios summarized in Figure 3. These scenarios differ regarding the sequence in which investment and dispatch decisions are determined. The first-best case on the very left only deploys the model once: investment and dispatch of generation, storage, and transmission are all determined simultaneously for each of the 38 German NUTS2 regions. As a result, the trade-offs between grid expansion and placing generation differently, using storage systems, or deviating from a market-based dispatch are all internalized by the model. The scenario thus corresponds to a social welfare optimum.
Note that this setting is not currently practiced, as it would require a major change to current regulation. A policy frequently proposed in the dedicated literature to achieve this optimum is nodal pricing [13]. To gain insight into the general importance of transmission, the analysis also includes a sensitivity of the first-best without any transmission expansion at all.

In all other scenarios, investment and dispatch of generation, storage, and transmission are determined not simultaneously but sequentially (a schematic sketch of this procedure is given after the scenario list below). The first step computes investment in generation and storage technologies, ignoring all grid constraints and assuming a free flow of electricity within Germany. Accordingly, the results correspond to a market-based dispatch with a single German zone. The second step introduces the grid to determine investment into transmission, but fixes technology investment depending on the scenario. Since in the absence of grid constraints the model is indifferent about where to place storage systems, these are distributed proportionally to renewable generation across the 38 regions. This is plausible given the assumed absence of regional prices, because investors have an incentive to place renewables and storage at the same sites to decrease costs for construction and grid access. If transmission losses incurred in the second step render the problem unsolvable, because demand cannot be fully met, the entire process is repeated with a correspondingly increased demand in the first step.

Since the sequential scenarios separate investment into generation and transmission, they contrast with the first-best and represent today's planning approach. In that case, the implementation of corresponding policies likely requires less regulatory change. The following list details all sequential scenarios, how they fix results from the first step in the second, and what kind of planning policy is simulated this way. The list follows the order from left to right in Figure 3.

• All storage: In this scenario, dispatch decisions in the second step can deviate from the market-based dispatch determined in the first. In addition, storage investment in the first step is not binding but serves as a lower limit instead. This means storage is considered for grid relief in the planning process, resulting in additional storage capacities on top of market-driven investments.

• Short-term storage: This scenario is equivalent to "All storage", but additional storage investment is limited to short-term storage, namely batteries and CAES.

• Long-term storage: The scenario is again equivalent to "All storage", but now additional investment is limited to technologies for the long-term storage of electricity, which are electrolysis, methanation, hydrogen plants, and gas plants.

• None: In this scenario, all technology investment, even storage, is fixed in the second step. However, dispatch in the second step can still deviate from the market dispatch computed in the first step.

In conclusion, only in the first-best scenario does system planning consider all three substitutes for grid expansion: the placement of generation, storage systems, and deviating from the zonal market dispatch. The following three scenarios consider storage and a deviating dispatch, but do not consider a different placement of generation. The last scenario only considers dispatching capacities differently.
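The sequential procedure can be summarized by the following schematic sketch. It is illustrative only: solve_model, distribute_by_renewables, scale_up, and all attribute and argument names are hypothetical stand-ins for the actual model runs, not part of any framework API.

# Schematic sketch of the sequential planning scenarios; solve_model and
# all helper names are hypothetical stand-ins for the actual model runs.
ADDITIONAL_STORAGE = {
    "all storage":        ["battery", "caes", "electrolysis", "methanation",
                           "hydrogen plant", "gas plant"],
    "short-term storage": ["battery", "caes"],
    "long-term storage":  ["electrolysis", "methanation",
                           "hydrogen plant", "gas plant"],
    "none":               [],
}

def sequential_planning(scenario, demand):
    while True:
        # Step 1: single German zone, free flow of electricity, no grid
        # constraints -> market-based investment in generation and storage
        step1 = solve_model(demand=demand, grid=False)
        # storage from step 1 is distributed proportionally to renewable
        # generation across the 38 regions before entering step 2
        storage = distribute_by_renewables(step1.storage_capacities)
        flexible = ADDITIONAL_STORAGE[scenario]
        # Step 2: introduce the grid; generation investment stays fixed,
        # listed storage technologies may be expanded beyond step 1
        step2 = solve_model(
            demand=demand, grid=True,
            fixed_generation=step1.generation_capacities,
            storage_lower_bounds={te: storage[te] for te in flexible},
            fixed_storage={te: cap for te, cap in storage.items()
                           if te not in flexible})
        if step2.feasible:   # demand met despite transmission losses
            return step2
        # otherwise repeat with demand increased by the unserved share
        demand = scale_up(demand, step2.unserved_share)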
Data

The following section summarizes the most important quantitative assumptions used in the model. To ensure consistency, as much data as possible is based on the same underlying scenario of a renewable European energy system, the "Societal Commitment" scenario developed in the openENTRANCE project [14]. For comprehensive information on all inputs, see the link in the supplementary material.

Supply

For the German NUTS regions, generation and storage capacities are determined according to the outlined scenarios based on investment and operating costs [15,14,16]. To account for cross-border trade, the other European countries are included in these scenarios as well, but their generation and storage capacities are fixed so as not to distort results. These capacities are instead computed in a preceding step using the same input data, but reducing Germany to a single node. For the sensitivity of the first-best case without grid expansion, this preceding step is carried out without any expansion of the European transmission grid. Capacity limits and factors of renewables for the other European countries are based on Auer et al. [14]. Capacity factors for the German NUTS regions are extracted from renewables.ninja [17,18].

An input not provided anywhere in the literature is the capacity limits of wind and photovoltaic (pv) broken down by German NUTS2 regions. Therefore, these assumptions were derived from publicly available sources specifically for this study (a simplified sketch of the derivation is given at the end of this subsection). To ensure consistency with the rest of the input data, the summed limits for Germany correspond to Auer et al. [14]. First, highly resolved satellite data on land use provides the urban, sub-urban, agricultural, and forested area in each NUTS2 region [19]. Other literature gives the share of those areas that is typically suited for wind and solar [20,21]. According to the product of area size and share suited for renewables, the total limit is distributed across all urban, sub-urban, agricultural, and forested areas. Next, the site quality of each of these areas is extracted from geodata on average full-load hours for wind and pv [22,23]. To derive renewable limits graded by quality in each NUTS2 region, areas are clustered into different groups based on site quality. Capacity factors for each group are derived by scaling the original time series according to site quality, while keeping the total energy potential of each NUTS region unchanged.

Fig. 4 shows the resulting energy potential per area for onshore wind, openspace pv, and rooftop pv. The potential for onshore wind and openspace pv is based on agricultural and forested areas; thus, it is highest in the least populated regions. The potential for rooftop pv, on the contrary, relates to urban and sub-urban areas, which means it concentrates in densely populated NUTS regions, in particular cities. To provide some context, Fig. 5 compares the potentials used in this paper to other literature. The derived capacity limits are sorted by full-load hours, aggregated, and plotted against energy quantities. Accordingly, the decreasing slope of these lines represents the declining site quality as the share of exploited potential increases. Other sources are represented as points; wherever these only specified a capacity limit, plotting assumed the same full-load hours as in our data. For onshore wind, the assumed potential is at the lower end of values found in the literature, whereas the assumptions for pv are largely in the middle of the observed range [24,25,21,26,27,28,29].
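The grading of regional potentials described above can be illustrated by a short sketch for a single NUTS2 region. It is a simplified illustration under stated assumptions, not the actual processing pipeline: the function name, the uniform capacity density, and the number of quality groups are all our own.

# Illustrative sketch of deriving quality-graded capacity limits for one
# NUTS2 region; all names, densities, and group counts are assumptions.
import numpy as np

def graded_limits(area_km2, share_suited, mw_per_km2, flh, base_series, n_groups=3):
    """area_km2 and flh are per-land-patch arrays for one NUTS2 region;
    base_series is the original hourly capacity-factor time series."""
    capacity = area_km2 * share_suited * mw_per_km2   # MW limit per patch
    base_flh = base_series.sum()                      # full-load hours of base series
    # cluster land patches into quality groups by average full-load hours
    order = np.argsort(flh)
    groups = np.array_split(order, n_groups)
    limits = []
    for g in groups:
        cap = capacity[g].sum()
        group_flh = np.average(flh[g], weights=capacity[g])
        # rescale the time series to the group's site quality; summing
        # cap * group_flh over all groups reproduces the region's total
        # energy potential, which is left unchanged by construction
        limits.append((cap, base_series * (group_flh / base_flh)))
    return limits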
The set potential for offshore wind amounts to 70 GW with 5,100 full-load hours, which strongly decrease due to wake effects as soon as installed capacities exceed 50 GW [30]. The potential is distributed across the NUTS regions currently connected to offshore wind parks.

Demand

Given the importance of sector integration, the analysis of renewable systems must consider all sectors, but our model only covers synthetic fuels and electricity explicitly. Therefore, heating and transport are implicitly included by adding the demand for synthetic fuels and electricity that the decarbonization of these sectors requires [14]. Fig. 6 provides the resulting demand for Germany; magnitude and structure are similar for other countries. The data distinguishes between two different synthetic fuels: synthetic methane and hydrogen. According to Auer et al. [14], synthetic methane is exclusively used to provide process heat for industrial processes. Hydrogen is also used for industrial processes, but to a small extent also for residential heating. The majority of hydrogen demand, namely 60 percent, stems from freight transport; a small share is also used in aviation. As explained in section 2, the model treats electricity demand from process heat, residential heat, and electric mobility separately. Process heat, in particular steam generation, constitutes the largest share of electricity demand. The shares for residential heat, mostly heat pumps, and electric mobility are considerably smaller. Finally, demand that does not fit into any of these categories, for example household appliances, is labelled "conventional". For Germany, the national demand from Auer et al. has to be distributed across the 38 NUTS2 regions modelled. For this purpose, electricity demand from process heat is distributed according to gross domestic product, residential heat …

Transmission

For representation in the model, the physical transmission infrastructure is aggregated according to the covered regions. Due to the long lifetime of transmission infrastructure, the current electricity grid displayed in Fig. 2 is available in the model without additional investments. For Europe, these pre-existing capacities build on TSO data on net transfer capacities and include all projects to be completed by 2025 [12]. Capacities between German NUTS2 regions are aggregated from a nodal dataset. Apart from electricity, the model also includes a representation of today's gas grid, assuming future utilization for hydrogen [11]. However, today's capacities were found to already exceed future needs; for this reason, transport restrictions for hydrogen are neglected within Germany. The model represents transmission infrastructure as net transfer capacities and consequently simplifies its dispatch to a transport problem, neglecting technical constraints. Investment costs and losses of transmission depend on the length of the aggregated lines displayed in Fig. 2 and amount to 2.29 million Euro per GWkm and 5 percent per 1,000 km, respectively [34,35].

Results

The results first focus on the first-best scenario and its sensitivity, to create an understanding of the modelled system in general and of the role of transmission infrastructure in that system in particular. This understanding is then necessary to comprehend the comparison of the first-best to the sequential planning scenarios in the second part. Fig. 7, the quantitative counterpart to Fig. 1, shows the energy flows for Germany that result from solving the model for the first-best scenario.
Energy flows for the other scenarios differ, of course, but show no fundamental differences. Total electricity demand amounts to 1,350 TWh and is covered by 766 TWh of generation from onshore wind, 200 TWh from offshore wind, 178 TWh from openspace pv, 74 TWh from rooftop pv, and lastly 39 TWh from hydro, which includes run-of-river and reservoirs. With regard to Fig. 7, this means all renewable technologies except rooftop pv fully exploit their energy potential. Since sector integration makes up most of the demand and is assumed to be flexible within certain limits, storage systems only play a relatively minor role: batteries provide 18 TWh of electricity, and 16 TWh of electricity are generated from stored hydrogen, while the larger share of hydrogen satisfies the exogenous hydrogen demand. In addition, 114 TWh of hydrogen are imported from other European countries. Electricity is both imported and exported, leading to an import surplus of 13 TWh. The demand for synthetic methane is entirely met through biomass, independently from the rest of the energy system.

First-best scenario

If the first-best is solved without grid expansion, generation from rooftop pv increases by 50 TWh, but plant capacities do not shift and just increase in regions with unexploited potential. Instead, long- and short-term storage substitute for grid expansion; generation from hydrogen turbines increases to 55 TWh, and output from batteries to 27 TWh. This substitution can also be observed when mapping grid and storage capacities, as done in Fig. 8. If grid expansion is disabled, less capacity is available to transport electricity from regions in the north with large potential to regions in the south and southwest with the highest demand. In return, capacities of hydrogen turbines and batteries increase substantially in these regions. Since grid expansion occurs in the first-best solution, disabling it raises system costs, with higher investment costs for generation and storage overcompensating the decrease in transmission costs. System costs for Germany rise by 2.5 percent, but the more meaningful comparison of European system costs shows an increase of 4.5 percent. This seems plausible given that Neumann and Brown [36] observe a 10 percent increase in system costs when limiting capacities to today's grid in a model of a renewable European power system, but note that the consideration of sector integration is likely to reduce this difference. Such a reduction could be explained by the added flexibility and the high utilization of the renewables potential resulting from sector integration.

Sequential scenarios compared to first-best

To compare the first-best with today's planning framework, grid expansion and system costs in Germany are compared for the different scenarios in Table 1. For a sensible benchmark of grid investment, the expansion of each line is multiplied by its length and totalled. In comparison, the pre-existing grid that is available without additional investments amounts to 39,653 GWkm. As expected, system costs and grid expansion are smallest for the first-best scenario. If grid expansion is determined after generation and storage but considers all storage technologies as a substitute, the results only show a slight increase in grid expansion. However, in the scenarios that only consider short-term storage or no storage at all, expansion increases substantially, doubling the capacity of the pre-existing grid. If the second planning step only allows for additional long-term storage, viz.
electrolyzers and hydrogen turbines, grid capacities increase by 50 percent. Long-term storage presumably has a more pronounced effect than short-term storage because, strictly speaking, it does not only allow for additional storage but also for shifting demand to some extent, since electrolyzers also have to satisfy an exogenous hydrogen demand and hydrogen can be freely transported within Germany.

Table 1: Key benchmarks of the scenarios compared.

Differences in system costs are closely correlated with grid expansion. Overall, the largest proportion of costs, about 77 percent in the first-best scenario, is incurred by generation. The next factor is transmission costs, accounting for 12 percent of costs, followed by long- and short-term storage with 9 and 2 percent, respectively. Since transmission only makes up a relatively small proportion of system costs, these are affected less severely by the different scenarios. Still, system costs increase by up to 8 percent if grid expansion does not consider long-term storage, which is higher than in the first-best case without grid expansion. Again, the scenarios show little difference with regard to the placement of generation, because capacity limits are almost fully exploited to satisfy demand. Differences are most significant when comparing the first-best to the "none" scenario: to compensate for the lower capacity factors that result from placing renewables closer to demand, the first-best installs 13 GW more rooftop pv and 5 GW more onshore wind.

Figure 9: Storage capacities and grid expansion compared to the first-best.

The trade-off between grid and storage is again visualized in Fig. 9. Between the first-best and the scenario with storage as a substitute ("all storage"), no differences are visible with regard to transmission capacities. For storage investment, on the other hand, there are significant differences. In the first-best, hydrogen turbines and batteries are exclusively located in importing regions with high demand. In the storage scenario, capacities are concentrated in these regions too, but since here only the second step of investment considers the grid, all regions have some capacity. The map on the right shows capacities if no substitute for grid expansion is considered. Here, storage capacities are evenly distributed, and grid capacities, especially from exporting regions in the north to importing regions in the south, are considerably higher. Also, the average utilization of transmission capacities decreases to 909 full-load hours, compared to 1,300 in the first-best.

Conclusions

In this paper we applied a capacity expansion model to investigate substitutes for transmission infrastructure in renewable energy systems and how the use of these substitutes depends on the underlying planning approach. The model is applied to the German power sector but takes detailed account of sector integration and cross-border exchange. The results show that the consideration of storage, in particular long-term storage, for congestion management greatly decreases grid investment and thus also system costs. Whether this option is enabled in a first-best setting that optimizes investment and dispatch of generation, storage, and transmission simultaneously, or, more similarly to today's planning process, independently from generation investment in a subsequent step together with transmission investment, does not make a significant difference. The findings also suggest that grid expansion can largely be substituted by storage placed in high-demand regions, causing a 4.5 percent increase in system costs.
On the other hand, very small effects on welfare and grid expansion were observed from considering grid constraints when placing renewables. These results are partly driven by the characteristics of the underlying scenario for the entire energy system used in this study. As section 3.2.1 shows, the assumed capacity limits for renewables, in particular for wind, are at the lower end of literature values. Therefore, the available potential for renewables is almost fully exploited to satisfy demand, leaving little room to optimize their placement.

Similar to the findings on redispatch in Grimm et al. [5], the results indicate that transmission planning can substantially benefit from modifications to the current policy framework. The first-best solution, which requires invasive changes like the introduction of nodal pricing, can be well approximated if planning considers storage as a substitute for grid expansion. Conceivable instruments to this end are an obligation for TSOs to consider storage investments or a split of the German market into a north-east and a south-west price zone to create incentives for private storage investment. In both cases, consumers in exporting regions benefit at the expense of importing regions, either in the form of lower grid charges or lower market prices. Apart from that, the relatively small sensitivity of system costs to grid expansion suggests that an exclusive focus on costs is too narrow. Given the public opposition transmission has faced in the past, a bearable increase in system costs might be preferable to high levels of grid expansion.

The applied capacity expansion framework AnyMOD captures all important features of renewable energy systems: intermittent renewables, the importance of storage, increased demand and added flexibility from sector integration, as well as cross-border trade of electricity. In return, some economic and technical aspects relevant for transmission planning had to be neglected, which may limit the significance of our results. On the economic side, the approach omits path dependencies by focusing on a single year, and it abstracts from the different agents involved in the planning process, which hinders the recommendation of more specific policy instruments. On the technical side, restrictions of operating power grids, like physical power flows or n-1 security, were omitted. Also, although dividing Germany into 38 different regions is comparatively detailed, it does not compare to the 500 nodes of the actual transmission grid represented in power system models. To cover these aspects, future research needs to adapt economic and technical models to the characteristics of renewable energy systems.
Don't Leave Home Without It: Planetary Protection for Robotic and Human Missions

In planetary exploration and the search for life beyond Earth, the unique capabilities provided by human explorers will be advantageous to science only if the biological contamination associated with human presence is understood and controlled. The practice of preventing cross-contamination between the Earth and other planetary bodies is called planetary protection. NASA has a planetary protection policy in place for solar system exploration missions, and compliance with it is mandatory. Thus, planetary protection must be incorporated into mission planning and development from the beginning. NASA's planetary protection policy is intended to prevent "forward contamination", the contamination of other solar system bodies by Earth microbes and organic materials, and "backward contamination", the contamination of Earth by potential alien life. As NASA's space exploration program expands to encompass human as well as robotic planetary missions, planetary protection will become a more complicated enterprise.

HISTORICAL INTRODUCTION

Planetary protection covers policies and practices that both protect other solar system bodies from contamination by terrestrial biological material, to preserve future opportunities for scientific investigation (forward contamination), and protect the Earth from harmful contamination by materials returned from outer space via robotic or human missions (backward contamination). Hazards to living organisms associated with the exploration of other worlds have been recognized for many years. Hypothetical scenarios for both forward contamination and backward contamination were published long before the beginning of the space age [e.g., 1,2]. In the 1950s, Nobel Laureate Joshua Lederberg and others expressed concerns about the risk of inadvertently transferring biological material from one planet to another. In 1956, the International Astronautical Federation considered issues regarding biological contamination of the Moon and other planetary bodies, specifically that the contamination of other planets with Earth life might permanently compromise the ability to do scientific research on potential life indigenous to those bodies, or that returned samples might carry organisms harmful to Earth [3].

In 1958 the International Council of Scientific Unions developed guidelines for preventing harmful cross-contamination of the planets and directed its newly created Committee on Space Research (COSPAR) to work with national space agencies on policies for preventing biological contamination during space exploration. At the same time, the US National Academy of Sciences formed the Space Studies Board as an advisory body on matters pertaining to human exploration of space, including the potential for interplanetary contamination. Late that year, the United Nations created its Committee on the Peaceful Uses of Outer Space (UN-COPUOS), which was chartered to organize "the mutual exchange and dissemination of information on outer space research", among other activities [4]. One of the first products of UN-COPUOS was a report on legal issues arising from the human exploration of space. This report recommended the development of international agreements to "minimize the adverse effects of possible biological, radiological, and chemical contamination".
Over the next decade, discussions within the UN emphasized the importance of legal principles for space exploration. In 1967 the US and USSR jointly proposed the "Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Celestial Bodies", known as the Outer Space Treaty, in which Article IX addresses planetary protection. The article states, in part, that signatories should "pursue studies of outer space, including the Moon and other celestial bodies, and conduct exploration of them so as to avoid their harmful contamination and also adverse changes in the environment of the Earth resulting from the introduction of extraterrestrial matter" [5]. The US and USSR signed this treaty within weeks of each other, and the US Senate ratified it shortly thereafter. The Outer Space Treaty still serves as the major international law governing the uses of outer space.

NASA has practiced planetary protection since the early exploration of the Moon, when the practice was called planetary quarantine. NASA implemented the first quarantine procedures for returned samples and astronauts during the Apollo program.

CURRENT PLANETARY PROTECTION POLICY

Under the current policy, missions are assigned to one of five categories depending on the target body and the type of mission. Category I includes any mission to locations considered not of direct interest for understanding chemical evolution or the origins of life; Category I missions have no implementation requirements beyond the categorization itself. Category II includes any mission to a body of significant interest relative to chemical evolution that poses only a remote chance of contamination that could jeopardize future exploration; Category II missions have implementation requirements of documentation only. Any mission planning to fly by or orbit a body of significant interest relative to chemical evolution and the origins of life, or for which contamination would jeopardize future exploration, is assigned to Category III. Requirements imposed on Category III missions include significant documentation as well as cleanliness and/or orbital lifetime restrictions. Category IV missions include landers and probes to the surfaces of planetary bodies of significant interest to chemical evolution and the origins of life, or for which contamination would jeopardize future exploration. Requirements for Category IV missions include thorough documentation as well as cleanliness requirements designed to minimize biological contamination of the target body.

All sample return missions are assigned to Category V for the return leg, with the outbound leg assigned the appropriate category for that mission and target combination. Category V missions are categorized as either 'unrestricted Earth return' for samples from locations not of biological concern, in which case documentation is the only requirement, or 'restricted Earth return' for samples from planetary bodies of biological concern. Samples designated 'restricted Earth return' are automatically considered hazardous to Earth until demonstrated otherwise by appropriate testing. For these samples the strictest possible precautions are mandated, including containment that will protect both the Earth from the sample and the sample from the Earth. This will require the construction of a facility operating at better than Biosafety Level 4 (the most stringent containment available for pathogens such as the smallpox or Ebola viruses).
Missions assigned to Category III or Category IV must meet specific constraints on spacecraft cleanliness before launch as well as on operations during the mission. However, the specific requirements a mission must meet to minimize forward biological contamination depend on the target planetary body, and are determined based on advice from the Space Studies Board as well as the Planetary Protection Subcommittee of the NASA Advisory Council. For example, missions to Europa must maintain a probability of less than 10⁻⁴ of contaminating a europan ocean over the lifetime of the mission. The Planetary Protection Subcommittee has recently recommended that other icy moons in the outer Solar System be protected using a similar probabilistic approach, with a maximum 10⁻⁴ probability of introducing a viable Earth microbe into a liquid water body on any icy moon, whether the liquid water is present naturally or induced by a spacecraft [9].

PLANETARY PROTECTION FOR MARS

For missions to Mars, limits on contamination are based on those established for the Viking missions, which set numerical limits on the number of heat-resistant culturable organisms, termed 'spores', that are most likely to survive heat sterilization treatment and also the trip to Mars. At that time, Mars was protected based on a probability-of-contamination requirement, so Viking mission planners calculated the maximum number of spores allowable at launch that would meet the probabilistic requirement. In the early 1990s COSPAR updated the policy for Mars, so subsequent lander missions to Mars must meet numerical rather than probabilistic limits. As it did for Viking, NASA currently utilizes a traditional, cultivation-based assay to evaluate the number of spores present on a spacecraft. This number is accepted as a proxy for spacecraft cleanliness, and newer methods are being approved that can identify more of the contaminants known to be present but missed by the spore assay. Category III missions, with spacecraft intended to fly by or orbit Mars, are allowed to carry no more than 5×10⁵ spores in total at launch. An alternative requirement for orbiting missions, should cleanliness requirements not be met, is to maintain a minimum orbital lifetime of greater than 50 years from the mission launch date. Category IV landed missions to Mars are assigned one of three subcategories (IVa, IVb, and IVc) with differing cleanliness requirements depending on the location of the landing site and the specific objectives of the mission. The surface of Mars is very cold and dry; in most places it is too cold or dry to permit the growth and reproduction of Earth organisms. However, the subsurface of Mars is likely to be warmer and wetter, and therefore more hospitable to Earth life. Certain geological formations on the martian surface suggest that liquid water may occasionally be present, and such formations have been termed Special Regions that merit special protection. A Mars Special Region is currently (2007) defined by COSPAR as "a region within which terrestrial organisms are likely to propagate, OR a region that is interpreted to have a high potential for the existence of extant Martian life forms." Thus, Special Regions as currently defined encompass both certain features on the surface of Mars and, conservatively, the entire subsurface below the depth where surface equilibrium conditions prevail.
Spacecraft or subsystems accessing Special Regions, and spacecraft carrying life-detection experiments, must meet the strictest limits on biological contamination, the same levels that were met by the Viking landers. Missions that will land on the surface of Mars in a 'non-special' region are designated Category IVa, and these spacecraft are allowed to carry 3×10⁵ spores on their exposed surfaces, limited to no more than 300 spores per square meter. Lander missions to Mars that carry experiments intended to search for life are designated Category IVb. Spacecraft or lander subsystems that are intended to access a 'special region', either through vertical or horizontal mobility, are designated Category IVc. Spacecraft or subsystems assigned either Category IVb or Category IVc may carry no more than 30 spores total on the exposed spacecraft surfaces. This level of cleanliness is obtained by meeting the Category IVa requirement of 3×10⁵ spores for the total surface area and then performing a sterilization treatment, such as Dry Heat Microbial Reduction (DHMR), that has been demonstrated to reduce the initial spore load by four orders of magnitude. Further definition of Special Regions is likely to involve a combination of specific parameters that can be measured accurately. The Mars Exploration Program Analysis Group has proposed two suitable parameters: temperature and water activity. Water activity is a measure of the availability of water to participate in chemical or biological reactions. In most cases, water activity can be considered as equivalent to the relative humidity of an environment divided by 100. The Planetary Protection Subcommittee has recommended setting limits on these two parameters to define Special Regions: a water activity of 0.5 or greater and, simultaneously, a temperature of -25 °C or warmer. These numeric limits will be revisited regularly and modified as appropriate based on the most up-to-date scientific information. The intent is to define as Special Regions only those locations on Mars that have available water at a temperature that could support life. Based on our current understanding of Mars, this Special Regions designation encompasses only a small fraction of the surface of the planet, excluding both equatorial and polar latitudes. The space community has expended considerable effort on planning for sample return missions to Mars since such a mission was first proposed several decades ago. The Space Studies Board has produced a number of advisory reports, and NASA has held a series of workshops yielding a 'Draft Protocol' for the containment and assessment of samples returned from Mars. The primary objective of the protocol is to protect the Earth against potential biological contamination from martian samples. However, the more difficult task is likely to be protecting the samples from contamination by biological materials from Earth, since contamination by Earth organisms or compounds could jeopardize the detection of martian life, past or present. Samples that are returned from Mars shall be held in the highest possible level of containment, to protect Earth from the samples and to protect the samples from Earth. The Space Studies Board recommends that samples demonstrated to contain no biohazardous materials may be released from the containment facility. However, samples for which this demonstration is not possible shall remain in containment unless they are sterilized prior to release.
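To make the Special Region thresholds and the Category IVa-to-IVb/IVc cleanliness arithmetic quoted above concrete, here is a minimal illustrative sketch in Python. The function and constant names are ours, not NASA's; the thresholds (water activity ≥ 0.5 together with temperature ≥ -25 °C) and the spore limits (3×10⁵ at Category IVa, reduced four orders of magnitude by DHMR to 30) are the values stated in the text, current as of 2007.

```python
# Illustrative sketch of the Special Region criteria and the
# Category IVa -> IVb/IVc spore arithmetic described above.
# Names are hypothetical; numeric limits are those quoted in the text.

WATER_ACTIVITY_LIMIT = 0.5      # dimensionless (~ relative humidity / 100)
TEMPERATURE_LIMIT_C = -25.0     # degrees Celsius

CAT_IVA_TOTAL_SPORES = 3e5      # allowed on exposed surfaces at launch
CAT_IVA_SPORES_PER_M2 = 300     # surface density limit
DHMR_REDUCTION_FACTOR = 1e4     # demonstrated four-orders-of-magnitude reduction

def is_special_region(water_activity: float, temperature_c: float) -> bool:
    """A site qualifies only if BOTH limits are met simultaneously."""
    return (water_activity >= WATER_ACTIVITY_LIMIT
            and temperature_c >= TEMPERATURE_LIMIT_C)

def spores_after_dhmr(initial_spores: float) -> float:
    """Expected spore load after Dry Heat Microbial Reduction."""
    return initial_spores / DHMR_REDUCTION_FACTOR

if __name__ == "__main__":
    # Meeting Category IVa and then applying DHMR yields the IVb/IVc limit.
    print(spores_after_dhmr(CAT_IVA_TOTAL_SPORES))                    # -> 30.0
    print(is_special_region(water_activity=0.6, temperature_c=-10))   # -> True
    print(is_special_region(water_activity=0.6, temperature_c=-40))   # -> False
```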
A resurgence of interest in Mars sample return, exemplified by the recent plan to collect samples in a 'cache' on the Mars Science Laboratory for collection and return by a future mission, has highlighted the need for updates to the Draft Protocol and sample containment facility design.

PLANETARY PROTECTION FOR HUMAN MISSIONS

Workshops held in the 1990s and early 2000s, in the US and jointly with international partners, have resulted in an international consensus on planetary protection policy for human missions [10-14]. This international consensus will serve as a basis for COSPAR guidelines and new NASA Procedural Requirements documents for human mission implementation. One outcome of these workshops has been the recognition that there are no basic differences between planetary protection principles for human and robotic missions. The highest priority is to protect Earth, and by extension the astronaut explorers. Forward contamination of the target body that might endanger scientific research must be avoided to the greatest extent feasible. Some fundamental assumptions regarding human mission activities underlie current thinking about planetary protection policy and requirements. Humans invariably carry associated microbial populations that are necessary for survival, so forward contamination is a significantly greater risk with human missions than with robotic missions. For this reason, the greater capabilities of human explorers can contribute to the astrobiological exploration of the solar system only if human-associated contamination is controlled and understood. Even with technology improvements, it will not be possible to eliminate contamination by conducting all human-associated processes and mission operations within entirely closed systems. For exploration targets like Mars, it may be sufficient to protect certain areas of the planet stringently (the so-called 'Special Regions' in which Earth organisms might be able to propagate) and allow limited contamination at other locations. Backward contamination is an ongoing risk for human missions during operations and return of the crew to Earth, in contrast to robotic missions, for which contamination can be controlled by containment of samples after return. Crewmembers exploring other planets will inevitably be exposed to planetary materials, as was first demonstrated during the Apollo program. According to the recent consensus on planetary protection for human missions, these exposures should occur, to the maximum extent practicable, under controlled conditions. It is understood, however, that exposure cannot be prevented entirely. Accordingly, careful planning will be required to understand the nature and consequences of such exposures, and to avoid the need for decisions about whether crew members are allowed to return to Earth. For some missions, the potential for human exposure to extraterrestrial life must be addressed in the plan. Nevertheless, safeguarding the Earth from harmful backward contamination must always be the highest planetary protection priority. These assumptions help to define a set of general policy considerations that apply to all human missions. To mitigate risk to astronauts and to the Earth, planetary protection must be considered a critical element for successful human missions. Compliance with planetary protection requirements should be addressed in the development of all human mission subsystems.
Planetary protection risks should be identified and evaluated together with other mission risks to be reduced, mitigated, or eliminated. To ensure proper implementation of planetary protection provisions during the mission, general human factors must be considered along with planetary protection issues when developing technologies and procedures. Likewise, planetary protection considerations should be included in human mission planning, training, operations protocols, and mission execution. Finally, to facilitate compliance and rapid mitigation when necessary, every human mission should have a crew member aboard assigned primary responsibility for the implementation of planetary protection requirements. Planetary protection provisions are too important, and in a crisis may become too urgent, to subject discussions to a potential 20-minute round-trip communication delay.

Considerations for Planetary Protection Implementation

Astronaut safety is one of the highest priorities for human missions. The Space Studies Board has recommended operational constraints for human mission activities that are designed to ensure the safety of astronauts. These constraints include the designation of "Zones of Minimum Biological Risk" (ZBRs), regions that have been demonstrated to be safe for humans. Astronauts will only be allowed in areas that have been demonstrated to be safe [10]. ZBRs for human landing sites can be identified through direct investigation by precursor missions, either on the ground or remotely. Areas around human habitats shall be cleared as "safe" through appropriate robotic exploration, after which human activity would be allowed. Special Regions shall only be accessed using sterilized, clean equipment, and facilities for transfer of collected samples under appropriate contamination control will be required. Crew health maintenance is critical for mission success. Standard tests on the medical condition of the crew and their responses to pathogens or adventitious microbes should be developed, provided, and employed regularly during the mission as part of routine health monitoring. This information will also be essential for evaluating the effects of exposure events, to understand their severity and assess the need for quarantine measures. To permit the isolation of potentially contaminated or infectious crew members, a quarantine capability for both individual crewmembers and for the entire crew should be provided during the mission. After the mission, a quarantine capability and appropriate medical testing should be provided, and could be implemented in conjunction with a health monitoring and stabilization program as the crew are integrated back into the general population. To minimize the potential for harmful exposure events, operations for human missions shall include isolation of humans from direct contact with planetary materials until testing can verify that exposure to the material is safe. Exploration, sampling, and base activities shall be performed in a manner that limits inadvertent exposure of humans to material from untested areas. For the initial landing site, testing will probably have been performed as a part of precursor mission activities, but a means for allowing controlled access to untested areas, or areas that are considered unsafe, must be provided during human missions.
Operational Constraints for Human Missions to Mars

In line with current planetary protection policy for robotic missions, human missions to Mars shall avoid the inadvertent introduction of Earth organisms or organic molecules into Mars Special Regions, as well as the inadvertent exposure of humans to martian materials. Cleanliness and containment capabilities must be factored into landing site selection and decisions about operational access to scientifically desirable locations. Exploration of Special Regions, including access to subsurface ice or water, shall be restricted as appropriate relative to the microbial and organic cleanliness of the human-associated or robotic systems utilized. Appropriate technological capabilities will be critical factors in establishing the levels and kinds of contamination allowed for any particular human mission activity. To control forward contamination during human missions, exploration, sampling, and base activities must be designed and developed to ensure safe and effective operations while maintaining the required level of planetary protection compliance. One particular challenge is extravehicular activity: specific technologies and procedures for field operations will need to be developed, characterized, and optimized. For example, systems will be required to allow controlled, sterile, surface and subsurface sampling operations, so that uncontaminated samples can be obtained. Sterilized and recleanable robots, under appropriate operational constraints, are one suitable approach for ensuring appropriate access. To assess levels of biological contamination and monitor changes, an inventory of microbial populations and organic materials carried aboard the spacecraft should be established prior to launch and updated throughout the mission. This will serve as a record of contamination potentially released by human-associated spacecraft and transportation systems, as well as a baseline in the case of unexpected developments. Monitoring technologies will be required for ongoing evaluation of contamination released by human-associated activities, as will technologies to mitigate contamination resulting from off-nominal release events. These inventory and monitoring activities will support both planetary protection and crew-health objectives.

SUMMARY

For space exploration to proceed safely, and to continue expanding our understanding of the Solar System, planetary protection must be an integral part of overall mission planning and execution. Robotic missions currently comply with planetary protection requirements based on mission objectives and target planetary body. Requirements for human missions are not fully developed, but the basic principles of planetary protection (protecting science and protecting the Earth) will not change. Human missions to Mars will be subject to planetary protection requirements that will protect astronauts, the Earth, and the potential for scientific discovery. Compliance with the requirements will be challenging, but is both necessary and worthwhile. Decades of experience with robotic missions, and lessons learned from the Apollo experience, have provided important insights into the nature and timing of planetary protection requirements. Proper planetary protection makes sense to ensure the success of future missions, and to protect the Earth from the unknowns that missions are sent to explore.
Accordingly, the design of appropriate technologies and procedures for planetary protection must be included in the development of both robotic and human missions. The result will be scientifically sound and productive missions that preserve the pay-off potential of future exploration, ensure the safety of Earth, and allow our astronauts to come safely home.
Multi-functional magnesium alloys containing interstitial oxygen atoms

A new class of magnesium alloys has been developed by dissolving large amounts of oxygen atoms into a magnesium lattice (Mg-O alloys). The oxygen atoms are supplied by decomposing titanium dioxide nanoparticles in a magnesium melt at 720 °C; the titanium is then completely separated out from the magnesium melt after solidification. The dissolved oxygen atoms are located at the octahedral sites of magnesium, which expand the magnesium lattice. These alloys possess ionic and metallic bonding characteristics, providing outstanding mechanical and functional properties. A Mg-O-Al casting alloy made in this fashion shows superior mechanical performance, chemical resistance to corrosion, and thermal conductivity. Furthermore, a similar Mg-O-Zn wrought alloy shows high elongation to failure (>50%) at room temperature, because the alloy plastically deforms with only multiple slips in the sub-micrometer grains (<300 nm) surrounding the larger grains (~15 μm). The metal/non-metal interstitial alloys are expected to open a new paradigm in commercial alloy design.

Because of the limited number of slip systems in the hexagonal close-packed crystal structure of magnesium, wrought magnesium alloys in sheet form provide limited deformability at ambient temperatures. Substantial effort has been devoted to protecting as-cast magnesium and its alloys from corrosion [13], by producing a protective layer [14] and/or by incorporating a couple of alloying elements to control their microstructures [15,16]. However, a cost-efficient and environmentally friendly coating technique remains challenging to find. Certain trace impurities, such as iron, nickel, and copper, are reportedly notably harmful to the corrosion resistance of magnesium [17]. The addition of zirconium, which easily reacts with such impurities and effectively removes them, can improve the corrosion resistance of magnesium [18]. Aluminum has also been shown to improve the corrosion resistance of magnesium by increasing the amount of the β-phase [19]. The addition of rare earth elements, such as yttrium, reduces the corrosion rates in dilute chloride solutions [20]. Despite the partial success of alloying approaches, the formation of passive films has not been realized. Large-scale studies to improve the formability of wrought magnesium alloys have been conducted. In the conventional thermomechanical processes used to fabricate sheets, a strong basal texture develops and remains essentially unchanged after recrystallization annealing [21-24]. The sheets mostly deform via the <a>-slip in the basal plane, which is aligned along the rolling direction, because the independent slip systems along the <c>-axis are difficult to activate. Mass flow along the transverse direction of the sheet occurs predominantly by mechanical twinning [24]. Thus, the different deformation modes of <a>-slip and mechanical twinning are simultaneously activated during deformation and do not allow an easy forming process. The contribution of mechanical twinning in the forming process can be mitigated by reducing the grain size of the sheet to approximately a few micrometers [25]. Such small grains can be developed using a special thermomechanical process, such as equal-channel angular pressing. In this study, we propose a new alloying method that uses oxygen atoms in Mg metal.
TiO₂ nanoparticles are selected as an oxygen source because they are inexpensive and easily decomposed at approximately 400 °C, and the cationic titanium can be completely separated from the magnesium melt [26]. In the initial casting stage, TiO₂ nanoparticles with high chemical potential energy are inserted into liquid Mg, and they decompose to O, Ti, and TiO⁻, as shown in Fig. 1 (stage 1). Most of the Ti and TiO⁻ sinks and separates from the liquid (because of the relatively high specific gravity), which enables the oxygen atoms to be alloyed (stage 2). Because the amount of dissolved oxygen atoms is not sufficient to form magnesium oxides, they become supersaturated at the octahedral sites of Mg after solidification and create the Mg-O alloy (stage 3). Using the mother Mg-O alloy, a third element, such as aluminum or zinc, is alloyed to develop engineered magnesium alloys, to elucidate the applicability of the Mg-O alloy as a casting material and a wrought sheet. The new Mg-O alloy can have significant potential for use in the magnesium industry, where high-performance, lightweight materials are in demand, and would address critical issues in the research field of interstitial alloys. To determine the position of oxygen atoms in a Mg-0.3 at. % O alloy, we observed the microstructure of the as-cast Mg-O alloy using high-resolution transmission electron microscopy (HRTEM) with energy-dispersive X-ray spectroscopy (EDS). Figure 2A reveals an EDS map of the as-cast Mg-O alloy, where Mg and O are presented in green and red, respectively. The detected oxygen atoms are randomly dispersed instead of being arranged with regular periodicity. Furthermore, the HRTEM image of the oxygen-containing area (marked region in Fig. 2A,B) shows typical Mg lattices instead of MgO. This result implies that oxygen atoms are dissolved in the Mg lattices without forming oxides. As revealed in the magnified image of the marked region (Fig. 2B), the Mg lattice is slightly expanded in most regions, with an estimated lattice spacing of 2.81 Å from the converted inverse fast Fourier transform (IFFT) image. This value is much larger than that of pure Mg (2.60 Å [27]). No face-centered cubic structures (which are typical of MgO) are observed throughout the specimen. The diffraction pattern of the distorted lattice demonstrates the typical basal (0002) plane. Moiré fringes are also observed because of the overlap of pure Mg and expanded Mg-O lattices. These observations support the hypothesis that oxygen atoms occupy octahedral sites within the Mg rather than forming oxides, a behavior that leads to expanded lattice spacing. The expanded lattice structure of the Mg-O alloy was further evaluated using X-ray diffraction (XRD). High-resolution XRD patterns of pure Mg and the Mg-O alloy (Fig. 2C) show peaks for the (10-10), (0002), and (10-11) planes. The peaks of all measured planes of the Mg-O alloy are shifted to lower angles compared to those of pure Mg, which indicates an increase of interplanar spacing as a result of lattice swelling. On the other hand, no peaks corresponding to magnesium oxides (29.3° and 38.5° in 2θ) are detected for either specimen in the XRD analysis.
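The reported peak shifts can be related to interplanar spacing through Bragg's law, d = λ / (2 sin θ). The sketch below uses the Cu Kα wavelength quoted in the Materials and Methods (λ = 1.5405 Å); the example 2θ values are hypothetical, chosen only to show that a peak shifted to a lower angle corresponds to a larger d-spacing, consistent with the lattice swelling described above.

```python
import math

CU_K_ALPHA = 1.5405  # Angstrom, as quoted in the Materials and Methods

def d_spacing(two_theta_deg: float, wavelength: float = CU_K_ALPHA) -> float:
    """First-order Bragg's law: lambda = 2 d sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# Hypothetical peak positions: the Mg-O peak sits at a slightly lower
# angle than the pure-Mg peak, so its d-spacing comes out larger.
d_pure = d_spacing(34.4)   # e.g. a pure-Mg (0002) reflection, d ~ 2.60 A
d_mgo  = d_spacing(34.0)   # the same reflection shifted to lower angle
print(f"d(pure Mg) = {d_pure:.3f} A, d(Mg-O) = {d_mgo:.3f} A")
```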
In the X-ray photoelectron spectroscopy (XPS) depth profiles, the oxygen concentration in both the Mg-O alloy and pure Mg was high (approximately 50 at. %) on the surface because of the surface oxide layer, whereas it steeply decreased with increasing depth from the surface (Fig. S2 and Table S1). However, after removing a contaminated surface layer, a large amount of oxygen (0.3 at. %) was detected in the depth range of 15.2-18.2 μm in the Mg-O alloy, whereas the oxygen concentration in pure Mg was negligible (Fig. 2D). Furthermore, a comparison of the O 1s peaks of the Mg-O alloy and pure Mg, obtained at certain positions (0, 0.7, 3, and 6 μm from the surface), reveals that oxygen atoms were dissolved in the Mg-O alloy throughout the entire depth range, whereas the oxygen concentration decreased to ~0% below a depth of 3 μm in pure Mg (Fig. S3). Based on these results, we conclude that oxygen atoms immersed in the magnesium liquid do not form oxides and are instead uniformly dispersed and supersaturated in the Mg lattices. Hence, they reveal no significant structural or chemical variation in the sample. In this type of Mg-O alloy, the elastic modulus increases to a value approximately 10% greater than that of pure Mg (Table S2). It is hypothesized that the incorporation of nonmetal atoms in the interstitial spaces of magnesium may increase the Mg-O ionic bonding character and the average Mg-Mg interatomic distance, which enhances the stiffness of magnesium [28]. The yield stress of the Mg-O alloy (~73 MPa) is also three times higher than that of pure Mg (Fig. 3 and Table S2). The lattice expansion, together with the enhancement of stiffness and strength, was also observed in a Mg-O alloy with 9 wt. % aluminum (Mg-O-9Al alloy) (Figs S4 and S5, respectively). According to the interstitial solid solution strengthening mechanism [29-31], the introduction of interstitial oxygen atoms into a magnesium crystal produces a lattice dilation that typically gives rise to a spherically symmetric stress field. Thus, the interaction of dislocations with the resultant stress field gives rise to enhanced flow stress properties [32]. For practical applications using this type of Mg-O alloy, the Mg-O-9Al alloy was studied further. As previously mentioned, this alloy exhibits enhanced mechanical properties (see supplementary text). In addition, when the magnesium alloy was exposed to an aqueous solution, it quickly eroded as a result of galvanic corrosion. This process occurred because of the potential difference between the matrix and the secondary phase [33,34]. This tendency is significantly mitigated when both phases contain oxygen atoms, as revealed in the Tafel plot (Fig. 4A). In particular, the corrosion potential increases by approximately 0.12 mV, and the corrosion current density is roughly 40 times lower in the Mg-O-9Al alloy than in the Mg-9Al alloy: 5.36 × 10⁻⁷ A/cm² for the Mg-O-9Al alloy versus 2.33 × 10⁻⁵ A/cm² for the Mg-9Al alloy. The results of electrochemical impedance spectroscopy (EIS) (Fig. S6) also support that the oxygen-containing alloys are covered with a more uniform protective film that promotes the formation of passivation layers. Furthermore, the Mg-O-Al alloys exhibit better corrosion resistance than Mg-Al alloys in a more corrosive solution (3.5 wt.% NaCl solution, Fig. S7). The details of the corrosion behavior of these Mg-O alloys are available in the supplementary text. To further mitigate the high intrinsic dissolution tendency of magnesium in aqueous solutions, a surface treatment, such as an anodizing or plating process, must be used. The environmentally friendly anodizing process is one of the most common and simple surface treatment processes used for this type of material.
In general, large pores and a poor interface are generated near the interface of the matrix and the coating layer when the Mg-9Al alloy is anodized by plasma electrolytic oxidation (PEO) [35] (Fig. 4B). However, in the case of the Mg-O-9Al alloy, a clean interface is developed (Fig. 4C). During the PEO process, a significant amount of oxygen is supplied from both the solution and the alloy containing oxygen atoms, producing a tight Mg(Al)O oxide coating layer (see supplementary text). A comparison of the surface morphology of the Mg-O-9Al (Fig. 5A) and Mg-9Al (Fig. 5B) alloys after the PEO treatment also reveals consistent results. The Mg-O-9Al alloy exhibits a relatively smooth surface, whereas the Mg-9Al alloy shows a large number of pores, which can cause pitting in the corrosive solution. The Mg-O alloy also exhibits a superior capacity for plastic deformation. In general, Mg alloys with a hexagonal close-packed structure have limited slip systems, so the plastic deformation of these materials can occur with the help of twinning activation at room temperature [36]. The typical flow curve of a Mg-2Zn sheet with an average grain size of 15 μm, developed using conventional thermomechanical processes, exhibits approximately 30% elongation to failure (Fig. 6), and deformation twins are observed in the test sample (Fig. 6A). However, for a Mg-O alloy containing 2 wt. % Zn (Mg-O-2Zn alloy), fabricated using conventional rolling and annealing processes, the sheet exhibits significantly greater (>50%) elongation to failure without revealing the activation of twins (Fig. 6B). The flow curve of the Mg-O-2Zn alloy is also significantly different from that of the Mg-2Zn sheet. In the early stages of deformation, the yield-drop phenomenon is observed, and constant flow stress remains until 1.5% elongation is reached (clearly shown in the inset), similar to the typical yield-drop phenomenon observed in low-carbon steel [6]. In general, during the hot-rolling process used to fabricate the sheet, many deformation twins with a low-angle twin boundary of 3.8° (tilted with respect to the matrix) are produced over an extensive area. These boundaries disappear when the sample is recrystallized during the annealing process, producing the final microstructure, as demonstrated in the Mg-2Zn sheet. In this range, dislocation activities along the c-axis are observed on the prismatic plane (Fig. S9). For the Mg-O-2Zn alloy sheet, however, many sub-boundaries form within the deformation twins as a result of the presence of accumulated oxygen atoms. These boundaries do not disappear, but instead develop into grain boundaries during the recrystallization process (see supplementary text, Fig. S8). A HRTEM image (Fig. 6C) of the deformed Mg-O-2Zn shows that the plastic deformation mainly occurs via non-basal slipping in sub-micrometer grains (<300 nm) that surround the larger grains (~15 μm), thereby providing an extraordinary deformation capacity at room temperature that has not been observed in conventional Mg alloys. The thermal conductivities of pure Mg and the Mg-O, Mg-9Al, and Mg-O-9Al alloys were evaluated at room temperature, as shown in Fig. 7. The alloys containing interstitial oxygen atoms exhibit much higher thermal conductivities (enhanced by >20 W/mK) than their monolithic counterparts.
A possible hypothesis to explain the high thermal conductivity of the Mg-O and Mg-O-9Al alloys is that the motion of interstitial oxygen atoms may help to transfer thermal energy in a preferential direction; the significant strain in the lattice surrounding the oxygen atoms may help them to easily jump to neighboring interstitial sites [37], which dramatically changes the electronic population around the oxygen atoms [38]. Excessively alloyed oxygen atoms in magnesium and in magnesium alloys increase the environmental resistance of these materials against mechanical loads or chemical corrosion, and significantly improve their plastic deformation and heat transfer capacities. Because oxygen atoms occupy the interstices of magnesium, ionic bonding character gradually dominates the system, which provides greater bonding energy. Changes in bonding energy alter the strength and formability of magnesium. Furthermore, the presence of oxygen atoms influences the corrosion properties of these alloys, leading to the formation of a protective layer via an anodization process. This allows anodizing particles, such as magnesium oxides, to be adsorbed homogeneously onto the magnesium surface. In addition, the solute oxygen atoms play an important role in the thermal behavior and thermomechanical processes that govern the development of the superior alloy microstructures. Here, we developed a new class of magnesium alloys that contain excessively dissolved oxygen atoms. The interstitial oxygen atoms, which do not form oxides and significantly expand the lattice spacing of magnesium, increase the environmental resistance of these materials against mechanical loads or chemical corrosion and improve their plastic deformation and heat transfer capacities. Further studies, particularly work on the underlying physics of the atomic bonding characteristics and their roles in the thermomechanical behavior of these materials, are required and can offer interesting perspectives on how to achieve exotic performance in metallic materials by atomic-level design.

Materials and Methods

The Mg-O alloy was fabricated using a gravity casting method. Pure Mg was melted in a boron nitride-coated low-carbon steel crucible under a dynamic SF₆ + CO₂ atmosphere, and 1.0 vol. % TiO₂ nanoparticles (with an average particle size of 50 nm) were added to the Mg melt at 720 °C. The alloy was held at this temperature for 30 min, and then the liquid material was slowly poured into a preheated rectangular steel mold (10 mm thickness). Pure Mg was also cast using the same method as a control. Basic characteristics, including the lattice structure of the specimens, were analyzed using XRD (CN2301; Rigaku) with a Cu Kα radiation source (λ = 1.5405 Å). To evaluate the amount of oxygen atoms in the Mg-O alloy and pure Mg, the materials were examined using XPS (Thermo VG) with a monochromatic Al Kα source. Microstructural examination was also carried out using HRTEM (JEOL 2100FX) equipped with an EDS spectrometer unit. Uniaxial tension and compression tests were carried out on the Mg-O alloy and pure Mg at a constant crosshead speed with an initial strain rate of 1 × 10⁻⁴ s⁻¹. The gage lengths for tension and compression were 10 and 5 mm, respectively. The elastic modulus was determined using an ultrasonic elastic constant measuring system (HKL-01-UEMT; Hankooklab).
Electrochemical tests were performed with a conventional three-electrode cell in which a carbon plate was used as the counter electrode and a calomel electrode was used as the reference electrode. The thermal conductivity was determined by multiplying the thermal diffusivity by the specific heat and the density. The thermal diffusivity was measured by means of the laser flash technique (LFA447, NETZSCH, Germany), the specific heat was measured by differential scanning calorimetry (DSC8000, Perkin Elmer, USA), and the density of the specimens was measured with a pycnometer (Ultrapycnometer 1000, Quantachrome Co. Ltd, USA).
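The conversion just described is a simple product, κ = α · c_p · ρ. A minimal sketch follows; the formula is the one stated above, while the sample input values are purely illustrative (roughly magnesium-like, not measurements from this work).

```python
def thermal_conductivity(diffusivity_m2_s: float,
                         specific_heat_j_kg_k: float,
                         density_kg_m3: float) -> float:
    """kappa = alpha * c_p * rho, returned in W/(m K)."""
    return diffusivity_m2_s * specific_heat_j_kg_k * density_kg_m3

# Illustrative inputs only, not values reported in the paper:
alpha = 8.7e-5   # m^2/s, as from a laser-flash measurement
c_p   = 1.02e3   # J/(kg K), as from DSC
rho   = 1.74e3   # kg/m^3, as from pycnometry
print(thermal_conductivity(alpha, c_p, rho))  # ~154 W/(m K)
```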
RESEARCH ON THE EFFECT FACTORS OF TECHNICAL PERFORMANCE ON SMEs BY INDUSTRIAL SECTORS

This study's research question is whether there is a crucial element for improving business performance among SMEs' success variables and competencies. In particular, the variables driving performance should differ across industry sectors. 1) The success variables of SMEs vary widely, but four were selected: technology, management, commercialization, and exit strategy. 2) The mediators are technology innovation competency and technology marketing competency. 3) The dependent variables are technical, financial, and non-financial performance. Previous literature studied only the effects of each of the three variable groups in isolation, so we established hypotheses and a research model focused on the causal links among them. According to the data group analysis for 330 CEOs of SMEs, the performance impact factors differed across the six industries. A comparative analysis of changes in performance impact by industry found the largest increases in the Information Technology (IT)/Software (S/W), Life/Food, and Crafts sectors. The key research finding is the verification of the essential elements of critical performance improvement. We showed that different success variables and competencies produce different performance across industries. The results are expected to contribute to practical applications by SMEs' CEOs, government policymakers, support organizations, academia, and industry.

Introduction

This study examined the concepts of entrepreneurs' success variables and entrepreneurs' competencies that can enhance technology-based SMEs' business success, to revitalize the technology-based entrepreneurship ecosystem. In addition, by analyzing the impacts and their characteristics, we verified the effects of the success factors on entrepreneurial competency and identified the correlations. To achieve this goal, we reviewed the problems and limitations of previous studies through a literature review and then explored the factors to derive, analyze, and evaluate the core factors. The theoretical research systematically analyzed previous studies related to the technical performance and influence factors of technology start-up companies that have been studied in Korea and abroad. In particular, this study examined the effects of technology start-up companies' success variables on technology marketing competency and technology innovation competency, the mediating components of technology startup capability. Also, we verified the impact of the technology startup success variables on the business performance of technology startup companies, set up a research model, and studied the effects using Smart PLS. The survey subjects were the CEOs of 330 start-up companies; online surveys were conducted across six industries, manufacturing types, and genders, and 205 questionnaires were collected and used for the empirical analysis. For statistical analysis, SPSS 22 was used for the basic statistics, and the results were presented using Smart PLS 3.2.9 for the evaluation and significance testing of the research model's measurement and structural models. We investigated the components of entrepreneurs' ability to perform entrepreneurship under poor start-up conditions and evaluated their influence, in order to improve entrepreneurs' ability and capacity for start-up success based on the characteristics of their capabilities and founding conditions.
Creating and expanding business performance by linking core competencies and start-up activity to successful start-ups is an important task. We reviewed previous studies to identify problems and limitations and to select variables. Success variables have been studied in various ways; in this study, technology, management, commercialization, and exit strategy were selected because they are essential success variables for companies. The mediating variables are SMEs' technology marketing competency and technology innovation competency, which affect performance, and we verify their mediating effect on performance. The dependent variables are financial performance, non-financial performance, and technical performance, which are indicators of performance. The objectives of this study are as follows. First, the causal relationship between the success variables of SMEs, competency, and performance. Second, the effect of success variables on performance through the mediating effect of competency. Third, the effect of competency on performance. Fourth, the effect on performance across six industries: electrics/electronics, machinery/parts, IT/SW, chemicals/textiles/materials, life/food, and crafts/others. The reason for selecting six industrial classifications is that, since 2001, the Korea Startup Promotion Agency has operated its business support policy by designating these six industrial classifications of small and medium-sized startups.

Theoretical background

Previous research on technology-based SMEs: technology-based startups and ventures already play a significant role in developing the national economy and serve as a growth engine for industrial innovation (Autio 1997; Kortum and Lerner 2001). A technology-based company is one focused on R&D or one that exploits new technical knowledge (Cooper, Williard and Woo 1986). In Korea, companies are classified into technology-based startups, ventures, and general startups according to the startup type defined by the Small and Medium Business Administration and the Korea Startup Association. Small and medium-sized startup companies are identified as newly established technology-based companies within seven years of founding, mainly in the manufacturing and knowledge service industries; the Small and Medium Business Startup Support Act defines a company within seven years after its establishment as a startup company. In a similar sense, a venture company means a new SME that is technology-intensive. It is also recognized as a high-tech or new-technology-based startup with high risk and high profit potential. However, the definitions are similar because the distinctions are not clear. Examples include spin-offs, technology-based spin-offs, new technology-based companies, research-based ventures, and high-tech startups (De Cleyn and Braet 2009). In another respect, research-based startups are defined as new business startups that develop and sell new products or services based on proprietary technologies, research, development, innovation, exports, and employment. In this regard, small and medium-sized startups have contributed to the economy and have played a key role in bringing new technologies to the market (Heirman and Clarysse 2004). A small and medium-sized start-up company means a start-up based on the entrepreneur's skills, experience, and expertise, and is also called a technology start-up.
The United States recognizes that technology-based start-ups are accompanied by personal and collective assets associated with advances in scientific and technical knowledge to create and maintain value (Bailetti 2012). In Korea, as of April 2019, according to the 2018 Entrepreneurship Research Results Report, there were 2,030,987 small and medium-sized startups. By organization type, 89.0% were individual entrepreneurs and 11.0% were corporate entrepreneurs. The distribution of organizational form by industry was high for individual entrepreneurs from the 1st to the 7th year, while corporate entrepreneurs maintained similar levels at around 10%. Start-ups in their first year accounted for the largest portion at 24.3%, followed by 20.6% in the 2nd year, 16.0% in the 3rd year, 12.6% in the 4th year, 10.2% in the 5th year, 8.8% in the 6th year, and 7.5% in the 7th year. In terms of the 18 major categories, the wholesale and retail industries were the highest at 26.5%, followed by the lodging and restaurant industry at 25.8%, manufacturing at 8.9%, and repair and other personal service industries at 7.8% (Start-up Promotion Agency 2019). The results of reviewing previous studies on the performance of SMEs can be summarized as follows. The effect of management ability on financial performance was studied (Kim and Seo 2017). The effects on technology competency, technical performance, and management performance were also studied (Lee I.K. 2016). However, these previous studies were limited to each influence variable considered individually. Also, only the effect of the success variables on management performance was partially studied, and there was no study on the mediating effect of SMEs' competency. Overall, previous work only identified success variables for management performance and individual relationships among competencies; such problems and limitations mean that the causal or dynamic relationships between the variables cannot be determined. The specific purposes of this study are as follows. First, we empirically examined the causal relationships between the technology startup success variables, capability, and the technical performance of technology startup companies. Second, we empirically analyzed the effect of the technology startup success variables on technical performance through the mediating effect of technology startup capability. Third, we empirically identified the effect of technology entrepreneurship capability on technology start-up companies' technical performance. Fourth, we empirically analyzed the influence of the technology startup success variables on technical performance using Partial Least Squares Structural Equation Modeling (PLS-SEM), a statistical technique for causal analysis. Fifth, we identified the impact factors affecting business performance in six industries: electrics/electronics, machinery/parts, IT/SW, chemicals/textiles/materials, life/food, and crafts/others. The moderating effect of differences in performance was confirmed using Data Group Analysis. This study's differences from previous work are summarized as follows: 1) The success variable, an independent variable, was selected with four sub-factors, and management performance, a dependent variable, was selected with three sub-factors: technical performance, financial performance, and non-financial performance.
2) As mediators, sub-factors of competency, technology innovation competency and technology marketing competency were selected, and their effect on business performance was studied. 3) The relationship between the entrepreneur's business and the impact on management performance according to industry was compared and verified.

Theory and hypotheses

As shown in the theoretical background on the effect of success variables on SMEs' competency and management performance, previous studies failed to comprehensively study potential influence variables such as management ability, technical ability, exit strategy, technology commercialization competency, and technology marketing competency. Therefore, in this study, the success variables were studied not only for their effect on corporate competency and business performance, but also for the effect of SMEs' competency on management performance. We established three hypotheses to test these research topics. Technical expertise and management skill were studied as factors influencing SMEs' innovation capability and competitiveness (Hwang, Choi and Shin 2020). A research model suggested that technological competence will have a positive (+) effect on the core competency of small and medium-sized entrepreneurs (Kim, Cho and Lee 2020). Six startup success factors, as reported by entrepreneurs, were studied (Prohorovs et al. 2019). Therefore, the success variables expected to affect entrepreneurs' ability, studied sporadically in previous studies, were summarized, while external factors such as entrepreneurship education, government support, and investment were excluded. In this study, it was necessary to focus on the management and technical factors that influence entrepreneurs' success, excluding external factors, so the effect of technology and management on SMEs' competency needs to be verified. For this reason, we propose the following hypothesis.

Hypothesis 1. Entrepreneurial success variables will have a positive effect on competency.

As an independent variable, the effect of corporate competencies on the success of business incubators was studied (Pauceanu, Alpenidze, Edu and Zaharia 2019). Dynamic competencies were shown to positively impact the business performance of start-ups (Seo and Lee 2019). An empirical study examined the effect of technology commercialization competency on management performance, with technology commercialization competency as the independent variable and technical competency and marketing competency as control variables (Park and Yang 2018). An empirical research model on the impact of technology commercialization competency on performance was presented (Bae, Song and Kim 2018). The effect of technology innovation and commercialization competencies on management performance was also studied (Kim and Park 2018). Therefore, the competencies that mediate the effect of success variables on performance need to be examined with a focus on technological competencies, and their effectiveness needs to be verified. The reason is that, to create business results, there should be variables that mediate the success factors, and this hypothesis needed testing. External factors such as entrepreneurship education, government support, and investment were excluded. As mediators of entrepreneurs' success, excluding external influences, it was necessary to focus on the technical factors; therefore, the performance impact on the technical side needs verification.
For this reason, the following hypothesis was proposed.

Hypothesis 2. SMEs' competencies will have a positive impact on performance.

The effect of SMEs' CEOs' technology competency on management performance was studied (Lee I.K. 2016). A research model was presented on the impact of technological competency on management performance (Yoon, J.H. 2018). A study of knowledge and networks in the global start-up process suggested a network perspective (Englis, Wakkee and Van Der Sijde 2007). The effect of core competencies and network competencies on SMEs' management performance was studied (Kim and Bang 2017). The effect of network competency on technological innovation capability and innovation performance was studied (Kim, J.Y. 2017). Therefore, the success variables affecting business performance have been studied from various perspectives. In this study, they were divided into a management perspective and a technical perspective. On the management side, it is necessary to categorize them into four sub-factors, management ability, exit strategy ability, technical ability, and technology commercialization ability, to verify their effectiveness. The reason is that, to create business results, it is difficult to identify the effect factors without excluding external factors. As success variables for entrepreneurs, it was necessary to focus on technology and management factors, excluding external influences. Therefore, the following hypothesis was proposed.

Hypothesis 3. Success variables will have a positive effect on technical performance.

The conceptual research model is shown in Figure 1. Studies on differences in performance and impact according to SMEs' industrial classification were not found in previous work. This study is commercially meaningful in that it can provide a realistic and feasible alternative. It confirmed whether there is a difference in the size of the impact on business performance by industry sector. Using Data Group Analysis (DGA) to validate the differences in competency and business performance across six industries is a unique point that differentiates this work from previous studies. The final structural research model is shown in Figure 2.

Materials and methods

The data were collected using an online questionnaire administered to the CEOs of 330 manufacturing-based SMEs. Questionnaires were collected from 205 CEOs, a response rate of 62%. The SMEs were less than five years past their founding. The industrial sectors are the six fields defined in the Korean start-up company classification criteria: electrics/electronics, machinery/parts, IT/SW, chemicals/textiles/materials, life/food, and crafts/others. A 5-point scale was used for the questionnaire items. The collected data were analyzed and verified, along with the basic statistics and the measurement and structural models, using SPSS 22 and Smart PLS 3.2.9. Among the collected data, insignificant measurement indicators were removed through factor analysis. To confirm the reliability and validity required for evaluating the reflective measurement model, the internal consistency reliability, convergent validity, and discriminant validity were evaluated by running the PLS algorithm. Internal consistency reliability was evaluated by Cronbach's α, Dijkstra-Henseler's rho_A, and Composite Reliability (CR).
Convergent validity was evaluated by outer loadings, measurement variable reliability, and Average Variance Extracted (AVE). Discriminant validity in a reflective measurement model was assessed using the Fornell-Larcker Criterion (FLC) and cross-loadings (Hair et al. 2017). The model of this study consisted of a reflective measurement model composed only of reflective indicators. The collected data were first analyzed using SPSS, and factor analysis was used to remove insignificant measurement indicators. In this study, internal consistency reliability, convergent validity, and discriminant validity were analyzed and evaluated. Cronbach's α, Dijkstra-Henseler's rho_A, and Composite Reliability (CR) are the criteria for evaluating internal consistency reliability, and Average Variance Extracted (AVE) is the criterion for evaluating convergent validity. If the square root of the AVE on the diagonal is larger than the correlations between the study variables below the diagonal, discriminant validity between the study variables is established. For the evaluation and interpretation of the reflective measurement model, the Fornell-Larcker Criterion was used together with the outer loadings, measurement variable reliability, AVE values, Cronbach's α, rho_A, and Composite Reliability (CR). PLS-SEM performs bootstrapping and blindfolding to evaluate the reflective structural model and verify the hypotheses; the multicollinearity, coefficient of determination (R²), effect size (f²), and predictive relevance (Q²) are then verified and analyzed. Finally, we confirmed that the structural model was suitable and evaluated the significance of the path coefficients and the fit of the model. Lastly, by introducing industry-specific variables, we confirmed the differences in influence by industry. By verifying the moderating effect on business performance as the dependent variable, we confirmed that this study's model was suitable and found that it had a moderating effect. The demographic characteristics are as follows. The gender distribution was 66.8% male and 33.2% female, and the company types were 60.5% private companies and 39.5% corporate companies. In the industry sectors: electrics/electronics 18.5%, machinery/parts 14.8%, IT/SW 17.6%, chemicals/textiles/materials 17.6%, life/food 12.7%, and crafts/others 19.0%. Regarding the respondents' years since founding, 32.2% were at two or more but less than three years, 32.2% at one or more but less than two years, 7.3% at less than one year, and 4.9% at more than five years. Looking at the previous year's sales, less than $0.1 million was the largest group at 35.1%, followed by $0.1 million to less than $0.3 million at 32.2%, $0.3 million to less than $0.5 million at 22.0%, $0.5 million to less than $1 million at 9.3%, and $1 million or more at 1.3%. In terms of the manufacturing method, combined outsourcing and in-house manufacturing accounted for the largest share at 62.0%, followed by outsourcing at 22.9% and in-house manufacturing at 15.1%. In terms of the number of employees, fewer than three people was the most common at 46.3%, followed by three to five people at 39.0%, five to ten people at 13.7%, and more than ten people at 1.0%. By age group, respondents in their 30s were the largest group at 40.0%, followed by the 40s at 38.5%, the over-50s at 12.7%, and the 20s at 8.8%.
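As a concrete illustration of the reliability and validity statistics named in this section, here is a small Python sketch computing Cronbach's α from item scores, plus AVE and composite reliability from outer loadings; the input data and loading values are made up for demonstration and are not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items). alpha = k/(k-1) * (1 - sum(var_i)/var_total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def ave(loadings: np.ndarray) -> float:
    """Average Variance Extracted: mean of squared outer loadings (>= 0.5 desired)."""
    return float(np.mean(loadings ** 2))

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum l)^2 / ((sum l)^2 + sum(1 - l^2)) (>= 0.7 desired)."""
    s = loadings.sum()
    return float(s ** 2 / (s ** 2 + np.sum(1 - loadings ** 2)))

# Made-up example: four reflective indicators of one latent variable.
rng = np.random.default_rng(0)
latent = rng.normal(size=200)
scores = np.column_stack([latent + rng.normal(scale=0.5, size=200) for _ in range(4)])
print(cronbach_alpha(scores))               # high, since items share one factor
lam = np.array([0.82, 0.78, 0.85, 0.74])    # hypothetical outer loadings
print(ave(lam), composite_reliability(lam)) # AVE > 0.5, CR > 0.7
```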
Management ability

In studies of managers' psychological characteristics, management ability refers to the creative innovation that enables the development of new products, technologies, and procedures through new ideas and research and development (Franco, Hope and Lu 2017). Research on management ability is significant because it can explain the relationship between managers' differences and management performance more systematically and concretely than research based on personal characteristics alone. The evaluation of observable management ability can give the company's manager a direction for the company's development. Early-stage SMEs are not precisely organized, so there is a limit to creating results based on organizational capabilities. Although the components of management ability vary from study to study, technical competence, strategic thinking ability, and organizational competency are commonly considered very important (Andreou, Karasamani, Louca and Ehrlich 2017).

Technical ability

Technical ability is an essential resource for promoting and supporting a company's innovation strategy and sustainable success, and an important result of innovation activities (Burgelman and Sayles 2004). A company's technical ability has been presented in seven categories, including learning ability, R&D ability, resource allocation ability, production ability, organizational ability, and strategic planning ability (Yam et al. 2004).

Technology commercialization ability

In a narrow sense, technology commercialization is limited to how products or services are created after the basic research or development stage of a technology development activity. It can be defined as a continuous process in which new technologies acquired through in-house research and development or external procurement move from prototype manufacturing, pilot production, and mass production system construction to the marketing and sales activities that link actual production and sales (Nevens 1990). It has been reported that systematic technological innovation ability and technology commercialization ability affect management performance when a long-term strategic plan is in place (Booz, Allen and Hamilton 1982).

Exit strategy ability

A study was conducted on the exit strategies of SME managers (Kim, S.Y 2014). The venture company's exit strategies and cases by type were studied (Kwon, O.H 2009). An empirical study was conducted on the effects of business commercialization and technological innovation on management performance (Bae, H.B., Song, M.K., and Kim, S.G 2018).

Technology innovation competency

Technology innovation competency is a critical factor that leads to the continuous growth of a company. At the same time, it is a comprehensive characteristic of a company that promotes and supports technological innovation (Burgelman, Christensen and Wheelwright 2008). On the other hand, it has been analyzed that the relationship between R&D investment level and business performance was negative or absent (Coombs and Bierly 2006). In a study of the technological innovation system framework and the entrepreneur's view of innovation, the technological innovation system generated valuable insights into the processes that need stimulation for the successful development and implementation of innovative sustainability technologies (Planko, Cramer, Hekkert, and Chappin 2017). It has also been shown that innovation capacity positively affects a company's performance (Saunila 2017).
Technology marketing competency

Analyses of the success or failure of technology development highlight the importance of marketing: only 20~40% of technical failures are due to defects in the technology itself (Miller and Power 2005). The rest are due to a lack of marketing competency, and for high-tech products in particular this share reaches 75% (Clugston 1995). The concept of technology marketing is interpreted differently depending on the researcher and is expressed in two ways. As a distinct research area of marketing, it refers to high-tech product marketing, that is, selling or purchasing technology-embodied products through marketing techniques.

Dependent variables: Technical Performance

Technical performance is relatively strongly affected by technical and technical management competency, production support, marketing competency, research and development competency, and new product development competency. It has a significant impact on market information as well as business performance. Securing superior technology can directly act as a determinant of investment by venture capital or other investment companies because it is directly related to the growth or profits of venture companies (Johannisson 1986).

Evaluation of the measurement model

The evaluation of the research model's measurement model was carried out using the PLS algorithm of Smart PLS 3.2.9 to analyze and evaluate internal consistency reliability, convergent validity, and discriminant validity. PLS path modeling was developed by Wold (1982); it is essentially a sequence of regressions in terms of weight vectors, and the weight vector obtained at convergence satisfies a fixed-point equation (Dijkstra 2010). The basic settings of the PLS algorithm for evaluating the reflective measurement model were the path weighting scheme, a maximum of 1000 iterations, and a stopping criterion of 10⁻⁷. Internal consistency reliability was assessed by Cronbach's α, Dijkstra-Henseler's rho_A, and Composite Reliability (CR), and convergent validity by outer loadings, measurement variable reliability, and AVE. The results are shown in Table 1. The abbreviations of the latent variables are as follows: Management Capability (MG-C), Technology Capability (TEC-C), Exit Capability (EX-C), Technology Commercialization Capability (TC-C), Technology Innovation Competency (TIC-C), Technology Marketing Competency (TM-C), Technical Performance (TECH-P). The measurement variables' outer loadings were all over the threshold of 0.7, indicating convergent validity. The results of the outer loading and cross-loading analyses are shown in Tables 2 and 3. The AVE values all met the threshold of 0.5 or more. The Fornell-Larcker Criterion (FLC) and cross-loadings are presented as criteria for determining the reflective measurement model's discriminant validity. Since the square root of the AVE on the diagonal is larger than the correlations between the study variables below the diagonal, discriminant validity between the study variables is supported. The results are shown in Table 4.

Evaluation of the structural model

The evaluation of the structural model can be seen as a procedure to confirm the research model designed and developed by the researcher and to verify that the structural model is suitable.
Once the structural model is found to be suitable, hypothesis testing can be performed. This study evaluated and confirmed multicollinearity, the coefficient of determination (R²), effect size (f²), and predictive relevance (Q²) for the evaluation of the structural model in PLS-SEM. Table 5 shows the inner VIF values obtained by executing the PLS algorithm to check multicollinearity. If the inner VIF values among the study variables are less than 5, it can be judged that there is no multicollinearity. As a result, all of them are less than 5, so it can be concluded that there is no multicollinearity. To evaluate the explanatory power of the endogenous research variables, the coefficients of determination R² obtained by executing the PLS algorithm are shown in Table 6. The effect size (f²) is used as a criterion for evaluating the relative influence of exogenous study variables (predictors or independent variables) on the endogenous study variables. An f² of 0.02 is evaluated as a small effect size, 0.15 as a medium effect size, and 0.35 as a large effect size. Table 7 shows the results of checking the effect sizes (f²). To evaluate whether the structural model has predictive relevance for specific endogenous study variables, predictive relevance (Q²) is used. If Q² is greater than 0 for a specific endogenous study variable, the structural model is judged to have predictive relevance for it. Blindfolding in Smart PLS 3.2.9 was performed to obtain the cross-validated redundancy results and to evaluate Q². The results are shown in Table 8.

Verification of hypotheses

Since the evaluation of the structural model is satisfactory, hypothesis verification can be carried out through bootstrapping. The significance and adequacy of the path coefficients are evaluated using the t-values calculated through bootstrapping. A hypothesis test was conducted on this basis. The significance and suitability evaluation of the path coefficients was carried out with bootstrapping in Smart PLS 3.2.9 and verified by checking the t-values, p-values, and confidence intervals required for hypothesis testing at the significance level of .05. The results are shown in Table 9 and Figure 3. Hypothesis 3 (MG-C→TECH-P) was supported: Management Capability (MG-C) strongly influenced Technical Performance (TECH-P).

Moderation effect verification

According to the industrial classification, the moderation effect on Technical Performance was marginally significant, with a path coefficient of .114, a t-statistic of 1.782, and a p-value of .075 (p < .10). The results are shown in Table 10 and Figure 4. As shown in Table 10 and Figure 4, it was confirmed that the factors influencing technical performance differ by industry. In particular, technology commercialization capability affects technical performance through the moderating effect of industrial classification. This result supports the study's premise that the influencing factors showing a moderating effect differ depending on the industry sector. A data group analysis was conducted to verify which latent variables have different effects depending on the industry sector and whether there are differences between industries.

Analysis result of technical performance impact by industry sectors

We compared and verified the differences in R², f², total indirect effects, specific indirect effects, and total effects on Technical Performance across the six industries.
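As an illustration of the structural-model statistics named above, here is a minimal sketch (hypothetical values, not taken from the study) of the f² effect size and a bootstrap confidence interval for a simple path coefficient.

```python
import numpy as np

def effect_size_f2(r2_included: float, r2_excluded: float) -> float:
    """Cohen's f^2: relative impact of one exogenous variable on an endogenous one."""
    return (r2_included - r2_excluded) / (1.0 - r2_included)

def bootstrap_path(x: np.ndarray, y: np.ndarray, n_boot: int = 1000, seed: int = 0):
    """Bootstrap a simple standardized path coefficient (here, a correlation,
    which equals the standardized beta in a one-predictor regression)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    coefs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)             # resample with replacement
        coefs.append(np.corrcoef(x[idx], y[idx])[0, 1])
    lo, hi = np.percentile(coefs, [2.5, 97.5])       # 95% confidence interval
    return float(np.mean(coefs)), (float(lo), float(hi))

print(effect_size_f2(0.70, 0.62))                    # ~0.27 -> medium-to-large
x = np.random.default_rng(1).normal(size=205)
y = 0.5 * x + np.random.default_rng(2).normal(size=205)
print(bootstrap_path(x, y))
```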
The six industries are DIV 1 (electrics/electronics), DIV 2 (machinery/parts), DIV 3 (IT/SW), DIV 4 (chemicals/textiles/materials), DIV 5 (life/food), and DIV 6 (crafts/others). We used Data Group Analysis to identify differences in Technical Performance according to the six industry classifications. The results are shown in Tables 11 to 15. In Table 11, R² shows that for DIV 1 (electrics/electronics) the explained variance of Technical Performance decreases from .858 to .810, but it increases in all other divisions (DIV 2 ~ DIV 6). In Tables 12 and 13, f² shows that the factors affecting Technical Performance differ according to DIV, which means that uniform learning and education methods for improving abilities and competencies do not create technological outcomes equally across industries; specialized education and learning for each industry sector is required. In Table 14, the total indirect effects of technological capability on Technical Performance were found to increase, except in DIV 4 (chemicals/textiles/materials) and DIV 6 (crafts/others). In Table 15, the specific indirect effect on Technical Performance is significant, except in DIV 3 (IT/SW) and DIV 6 (crafts/others), when technical capability acts through technology marketing capability. DIV 1 (electrics/electronics) showed an increase in indirect effects. In addition, the exit strategy capability (EX-C) increased significantly in DIV 5 (life/food) and DIV 6 (crafts/others) via technology marketing. Overall, technology marketing competency (TM-C) showed a high specific indirect effect. In Tables 16 and 17, the total effect on Technical Performance was found to increase in three or more factors in DIV 3 (IT/SW), DIV 1 (electrics/electronics), and DIV 5 (life/food). In particular, for DIV 3 (IT/SW), the total effects of technology capability, technology innovation capability, and technology commercialization capability on technical performance increased remarkably, showing that these are essential factors for IT/SW companies to increase their Technical Performance.

Conclusions and implications

Given that the success of young SMEs worldwide is one of the essential policies for each country's future survival, this study examined the causal relationships between the influential drivers of entrepreneurial capability and technical performance, and the degree of their influence. We analyzed and verified whether the mediation effects are significant and examined the impact on the business performance of six industries: electrics/electronics, machinery/parts, IT/SW, chemicals/textiles/materials, life/food, and crafts/others. The moderating effects, that is, how the factors and the degree of their influence differ, were verified. In particular, by verifying the moderating effect on technical performance across industry sectors, it was confirmed that technology capability is the main influence path on technical performance. As for the total effect on Technical Performance, it was confirmed that DIV 3 (IT/SW), DIV 1 (electrics/electronics), and DIV 5 (life/food) increased the total effect in three or more factors. For DIV 3 (IT/SW) in particular, the total effects of technology capability, technology innovation capability, and technology commercialization capability on technical performance increased remarkably, proving these to be essential factors for IT/SW companies seeking to increase their technical performance.
In the information technology and software industries, companies strive to survive and grow in the rapidly developing IT environment by enhancing technology capability, innovation capability, and technology commercialization capability through digital transformation to create technological performance. This study suggests how SMEs can overcome failures in technical performance and which capabilities and competencies to focus on for sustainability. Management capability and technology marketing competency were important, influential drivers of technical performance. Technology capability had a significant influence on both technology marketing and technological innovation capability. This study's results will be provided to government policymakers and practitioners of government support agencies to stimulate the success of young entrepreneurs.

The limitations of this study and future research subjects can be summarized as follows. First, this study was conducted on founding companies whose significant technical fields were electrics/electronics, machinery/parts, IT/SW, chemicals/textiles/materials, life/food, and crafts/others. In setting up the survey fields, there was a failure to subdivide all possible technology fields to which technology start-up companies belong. In the future, it will be necessary to conduct further research on more finely subdivided and clearly specified technical fields. Second, there was a practical limitation in that former founders could not be targeted, since this research was limited to the youth founding academy in Korea. Future research will include a broader range of founders, including technology start-ups by country, region, industrial complex, and industry. Third, some of the contents of the questionnaire were focused on the manufacturing field, so there was a practical limitation in reflecting as many diverse technical founders as possible. Groups of technologies can in the future be categorized into manufacturing, non-manufacturing, IT, and Industry 4.0, or into the eight projects included in the innovation growth performances and future plans jointly announced by the related ministries as reporting items for the Korea Innovation Growth Conference. It is necessary to expand further to core leading businesses and to conduct in-depth follow-up research building on this study.
Sub-wavelength diffraction-free imaging with low-loss metal-dielectric multilayers

We demonstrate numerically the diffraction-free propagation of sub-wavelength sized optical beams through simple elements built of metal-dielectric multilayers. The proposed metamaterial consists of silver and a high refractive index dielectric, and is designed using the effective medium theory as strongly anisotropic and impedance matched to air. Further it is characterised with the transfer matrix method, and investigated with FDTD. The diffraction-free behaviour is verified by the analysis of the FWHM of the PSF as a function of the number of periods. Small reflections, small attenuation, and reduced Fabry-Perot resonances make it a flexible diffraction-free material for arbitrarily shaped optical planar elements with sizes of the order of one wavelength.

Let us continue the introduction by linking the metal-dielectric multilayers for sub-wavelength imaging with concepts taken from Fourier Optics. We refer to the model of a linear shift invariant scalar system (LSI) [10,11] for the description of optical multilayers in a situation when they act as imaging nano-elements for coherent monochromatic light. In-plane imaging through a layered structure consisting of uniform and isotropic materials represents an LSI, for either TE or TM polarisations. Linearity of the system is the consequence of the linearity of materials and the validity of the superposition principle for optical fields. Shift invariance results from the assumed infinite perpendicular size of the multilayer and the freedom in the choice of an optical axis. A scalar description is valid for the TE and TM polarisations in 2D since all other field components may be derived from $E_y$ or $H_y$, respectively, where the co-ordinate system is oriented as in fig. 1. For the TM polarization, the magnetic field $H_y(x, z)$ may be represented with its spatial spectrum $\hat H_y(k_x, z)$ where, at least for lossless materials, the spatial spectrum is clearly separated into the propagating part $k_x^2 < k_0^2 \epsilon$ and the evanescent part $k_x^2 > k_0^2 \epsilon$. The transfer function $t(k_x)$ (TF) is the ratio of the output to incident field spatial spectra and corresponds to the amplitude transmission coefficient of the multilayer,

$$t(k_x) = \frac{\hat H_y(k_x, z = L)}{\hat H_y^{Inc}(k_x, z = 0)}.$$

Due to reflections, the incident field $\hat H_y^{Inc}(k_x, z = 0)$ differs from the total field $\hat H_y(k_x, z = 0)$. The point spread function (PSF) is the inverse Fourier transform of the TF and has the interpretation of the response of the system to a point signal $\delta(x)$. The response to an arbitrary input $H_y^{Inc}(x, z = 0)$ can be further expressed as its convolution with the PSF,

$$H_y(x, z = L) = H_y^{Inc}(x, z = 0) \ast PSF(x).$$

(Fig. 1: schematic of the silver-dielectric multilayer between the incidence plane and the output plane; a. symmetric, b. non-symmetric composition.)

The PSF of an imaging system usually provides clear information about the resolution, loss or enhancement of contrast, as well as the characteristics of image distortions. However, for subwavelength imaging, the PSF is not a straightforward measure of resolution and even imaging of objects smaller than the FWHM of the PSF is possible [12]. In this paper we demonstrate a diffraction-free material for subwavelength sized optical beams. We combine the following properties of the multilayer: a PSF with sub-wavelength size and little dependence on the thickness of the structure, high transmission, low losses, and a limited dependence of the imaging properties on the size of the external layers. Together, these properties allow the use of a multilayer as a flexible construction material for various optical imaging nano-devices.
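As a numerical companion to the LSI description above, the following sketch (with a made-up transfer function, not the multilayer's actual one) shows how a PSF follows from a transfer function by an inverse Fourier transform, and how an output field is the convolution of the input with that PSF.

```python
import numpy as np

# Spatial grid and spectral grid (wavelength-normalized units).
n = 1024
x = np.linspace(-10, 10, n)                  # transverse coordinate, in wavelengths
kx = 2 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])

# Hypothetical transfer function: flat amplitude and phase up to a spectral cutoff.
k_cut = 6 * np.pi                            # pass harmonics with |kx| < k_cut
t = np.where(np.abs(kx) < k_cut, 1.0, 0.0).astype(complex)

# PSF = inverse Fourier transform of the transfer function.
psf = np.fft.fftshift(np.fft.ifft(t))

# Output field = convolution of the input with the PSF (done spectrally).
h_inc = np.exp(-(x / 0.1) ** 2)              # sub-wavelength Gaussian input beam
h_out = np.fft.fftshift(np.fft.ifft(np.fft.fft(np.fft.ifftshift(h_inc)) * t))

def fwhm(field):
    mag = np.abs(field)
    above = x[mag >= mag.max() / 2]          # main lobe exceeds half maximum
    return above[-1] - above[0]

print("PSF FWHM ~", fwhm(psf), "wavelengths")
print("output beam FWHM ~", fwhm(h_out), "wavelengths")
```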
IMAGING WITH SUB-WAVELENGTH RESOLUTION IN METAL-DIELECTRIC MULTILAYERS

The dispersion relation of a two-component infinite stack for the TM polarisation has the form [13]

$$\cos(K_z \Lambda) = \cos(k_{1z} d_1)\cos(k_{2z} d_2) - \frac{1}{2}\left(\frac{\epsilon_2 k_{1z}}{\epsilon_1 k_{2z}} + \frac{\epsilon_1 k_{2z}}{\epsilon_2 k_{1z}}\right)\sin(k_{1z} d_1)\sin(k_{2z} d_2),$$

where $K_z$ is the Bloch wavenumber, $\Lambda = d_1 + d_2$ is the period of the stack, $d_i$ and $\epsilon_i$ are the layer thickness and permittivity of material $i = 1, 2$, the filling fraction of material 1 is $f = d_1/\Lambda$, and the local dispersion relations are $k_{iz}^2 = \epsilon_i k_0^2 - k_x^2$. The wavevector component $k_x$ is conserved at the layer boundaries and depends on the incidence conditions. The Bloch wavenumber $K_z$ is real for a Bloch mode in a lossless stack; however, for evanescent waves in a finite stack or for lossy materials $K_z$ may be complex. In the first BZ, the real part of $K_z$ satisfies $-\pi/\Lambda \le \mathrm{Re}(K_z) \le \pi/\Lambda$. The group velocity as well as the imaginary part responsible for absorption do not depend on the choice of BZ. When the layers are thin, $K_z\Lambda,\ k_{iz}\Lambda \ll 1$, the second order expansion of this dispersion relation over the arguments of the trigonometric functions leads to the dispersion relation of a uniaxially anisotropic effective medium,

$$\frac{K_z^2}{\epsilon_x} + \frac{k_x^2}{\epsilon_z} = k_0^2, \qquad \epsilon_x = f\epsilon_1 + (1-f)\epsilon_2, \qquad \frac{1}{\epsilon_z} = \frac{f}{\epsilon_1} + \frac{1-f}{\epsilon_2}.$$

This is the basis of the effective medium approximation thoroughly discussed by Wood et al. [2], also as the basis for applying the near-field approximation. The transmission coefficient of the Fabry-Perot (FP) slab consisting of the homogenised effective medium is then given as [2]

$$t(k_x) = \left[\cos(K_z L) - \frac{i}{2}\left(\eta + \frac{1}{\eta}\right)\sin(K_z L)\right]^{-1}, \qquad \eta = \frac{K_z\,\epsilon}{k_z\,\epsilon_x},$$

where $k_z$ and $\epsilon$ refer to the external medium, and $L$ is the total thickness of the slab. At the same time $t(k_x)$ is the already mentioned coherent amplitude transfer function of the imaging system [11]. When $|\epsilon_x/\epsilon_z| \ll 1$ (which is satisfied for a lossless metal when $\epsilon_1/\epsilon_2 = -d_1/d_2$ [7]), for a certain slab thickness the resonant FP condition becomes independent of the angle of incidence. This happens when $L = m\lambda/\sqrt{\epsilon_x}$, $m \in \mathbb{N}$. Then $t(k_x) \equiv \exp(-i K_z L)$ and the FP slab introduces the same phase shift for all harmonics of the spatial spectrum. Belov and Hao [7] proposed to combine this condition with impedance matching between the external medium and the effective FP slab, $\epsilon = \epsilon_x = f\epsilon_1 + (1-f)\epsilon_2$, and referred to that regime as canalization. However, Li et al. [8] questioned the importance of impedance matching in favour of the FP resonance condition. In fact, for a lossless metal and dielectric, the FP resonance is sufficient to entirely eliminate reflections, resulting in perfect imaging $t(k_x) \equiv 1$ without impedance matching. In reality, losses make the condition $\epsilon = \epsilon_x$ only approximate and, at the same time, the finite value of $\Lambda$ limits the validity of homogenisation. Moreover, transmission through a finite slab strongly depends on the material and thickness of the external layers and appears to be the largest for a symmetrically designed slab (Fig. 1a) with half-width dielectric layers located at the boundaries [3].

NUMERICAL DEMONSTRATION OF DIFFRACTION-FREE PROPAGATION OF SUB-WAVELENGTH SIZED OPTICAL BEAMS

After recalling the theory of sub-wavelength imaging in metal-dielectric multilayers, we now focus on an imaging regime which may be called diffraction-free. We note that the FP resonances are accompanied by a field pattern inside the slab similar to a standing wave [2,9]. Therefore we look for structures for which $t(k_x) \approx \mathrm{const}$ while the FP resonances remain weak. Let us now focus on an example of a metal-dielectric periodic multilayer consisting of silver and a high refractive index dielectric, which enables us to demonstrate and explain the diffraction-free propagation of sub-wavelength sized optical beams.
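To make the dispersion analysis concrete, here is a minimal sketch (using the layer parameters quoted in the next section as assumptions) that solves the Bloch relation for K_z and compares it with the effective-medium prediction. The near-pole value of the effective epsilon_z is very sensitive to the exact input permittivities, so the printed numbers should be read as indicative only.

```python
import numpy as np

# Assumed parameters (from the structure discussed below, at lambda = 422 nm).
lam = 422e-9
k0 = 2 * np.pi / lam
eps1, eps2 = -5.637 + 0.214j, 2.6**2            # silver, high-index dielectric
d1, d2 = 10e-9, 12e-9
Lam = d1 + d2
f = d1 / Lam

def kz(eps, kx):
    return np.sqrt(eps * k0**2 - kx**2 + 0j)

def bloch_Kz(kx):
    """TM Bloch wavenumber from the two-layer dispersion relation."""
    k1, k2 = kz(eps1, kx), kz(eps2, kx)
    rhs = (np.cos(k1 * d1) * np.cos(k2 * d2)
           - 0.5 * (eps2 * k1 / (eps1 * k2) + eps1 * k2 / (eps2 * k1))
           * np.sin(k1 * d1) * np.sin(k2 * d2))
    return np.arccos(rhs) / Lam                  # principal branch of the first BZ

def emt_Kz(kx):
    """Effective-medium prediction: Kz^2 = eps_x k0^2 - (eps_x/eps_z) kx^2."""
    eps_x = f * eps1 + (1 - f) * eps2
    eps_z = 1.0 / (f / eps1 + (1 - f) / eps2)
    return np.sqrt(eps_x * k0**2 - (eps_x / eps_z) * kx**2)

for r in (0.0, 2.0, 4.0):                        # propagating and evanescent inputs
    kx = r * k0
    print(f"kx/k0={r:.0f}: Bloch Kz*Lam={bloch_Kz(kx)*Lam:.3f}, "
          f"EMT Kz*Lam={emt_Kz(kx)*Lam:.3f}")
```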
The structure operates at the wavelength of $\lambda = 422$ nm, when the permittivity of silver is equal to $\epsilon_1 = -5.637 + 0.214i$ [14], and the permittivity of a dielectric such as TiO₂ or SrTiO₃ [10,15] is $\epsilon_2 = (2.6)^2$. Layer thicknesses are assumed to be $d_1 = 10$ nm, $d_2 = 12$ nm, with the periodic symmetric or non-symmetric composition shown in fig. 1a,b. The fraction $d_1/d_2$ may be justified with the use of EMT. The corresponding permittivity of the effective medium gives $\epsilon_x \approx 1.02 + 0.1i$ and $\epsilon_z \approx -158 + 191i$, which assures impedance matching with air, $\epsilon = 1 \approx \epsilon_x$, together with the condition for the extreme anisotropy $|\epsilon_x/\epsilon_z| \ll 1$. The imaginary part of the refractive index $n_x = \sqrt{\epsilon_x}$ is reduced by a factor of 50 compared to bulk silver, resulting in low-loss transmission. In fig. 2a we show the transfer function calculated rigorously with TMM for a range of filling factors $0 < d_1/\Lambda < 0.6$, when the total thickness of the structure is fixed at $L = 660$ nm. Indeed, the impedance matching which occurs when $d_1 = 10$ nm, $d_2 = 12$ nm results in reduced reflections, high transmission of both propagating and evanescent spatial harmonics, and a flat phase of the transfer function for a broad range of $k_x/k_0$. The size of the corresponding PSF is of the order of $\lambda/10 \approx 40$ nm, assuring the super-resolving properties of the device. Fig. 2b shows the FWHM of the PSF alongside the reflection and transmission coefficients as a function of the number of periods, for the symmetric, non-symmetric and effective-index compositions of the structure. The major properties of the structure are the following: the size of the PSF varies slowly with the size of the structure and is almost the same for the symmetric and non-symmetric compositions; the FP resonances observed in transmission are weak (as opposed to those observed in reflection); the attenuation is uniform as the number of layers is increased. These properties assure an almost diffraction-free propagation through the structure, with a similar attenuation and PSF size for the symmetric and non-symmetric compositions and for a broad range of structure sizes $L$. This behaviour is illustrated in fig. 3a with an FDTD [16] simulation showing the uniform non-diverging distribution of the Poynting vector inside the structure for a beam size of the order of $\lambda/10$. Transmission takes place along the direction normal to the layer boundaries with negligible divergence. Furthermore, our material may be used to fabricate slabs cut at an arbitrary angle to the layer surfaces. As is shown in fig. 3b,c we continue to observe the diffraction-free propagation for inclined layers, whether the aperture covers only a single or multiple periods of the grating. This may be heuristically explained by the similar PSF for the symmetric and non-symmetric compositions of the multilayer: both of them are encountered at cross sections drawn along the propagating beam, starting from various points inside the aperture and normal to the layer boundaries. Furthermore, it is possible to construct other simple optical nano-elements. In fig. 4 we demonstrate the operation of a double multilayer, which may be part of a cloaking device or an optical interconnect, as well as a prism for imaging subwavelength beams. We have demonstrated a diffraction-free, low-loss material which is impedance matched to air. The diffraction-free propagation is verified by the analysis of the FWHM of the PSF as a function of the number of periods.
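The transfer functions discussed above can be reproduced in outline with a standard characteristic-matrix (TMM) calculation. The sketch below is a generic textbook implementation, not the authors' code; it evaluates t(k_x) for the symmetric stack at the stated parameters.

```python
import numpy as np

lam = 422e-9
k0 = 2 * np.pi / lam
eps_m, eps_d = -5.637 + 0.214j, 2.6**2           # silver / dielectric at 422 nm
d_m, d_d = 10e-9, 12e-9
n_periods = 30                                    # total thickness L = 660 nm

def layer_matrix(eps, d, kx):
    """Characteristic matrix of one layer for TM polarization."""
    kzl = np.sqrt(eps * k0**2 - kx**2 + 0j)
    q = kzl / (k0 * eps)                          # TM admittance (up to a constant)
    c, s = np.cos(kzl * d), np.sin(kzl * d)
    return np.array([[c, -1j * s / q], [-1j * q * s, c]])

def t_slab(kx):
    """Amplitude transmission of the symmetric stack surrounded by air."""
    # Symmetric unit cell: half dielectric / metal / half dielectric.
    layers = [(eps_d, d_d / 2), (eps_m, d_m), (eps_d, d_d / 2)] * n_periods
    M = np.eye(2, dtype=complex)
    for eps, d in layers:
        M = M @ layer_matrix(eps, d, kx)
    q0 = np.sqrt(1.0 * k0**2 - kx**2 + 0j) / k0   # air on both sides
    denom = (M[0, 0] + M[0, 1] * q0) * q0 + (M[1, 0] + M[1, 1] * q0)
    return 2 * q0 / denom

for r in (0.0, 0.5, 2.0, 4.0):                    # kx/k0: propagating + evanescent
    print(f"kx/k0={r}: |t|={abs(t_slab(r * k0)):.3f}")
```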
The material consists of silver and a dielectric with a refractive index of $n = 2.6$ and operates at $\lambda = 422$ nm. Its effective imaginary part of the refractive index is smaller than that of silver by a factor of 50. The reflections are between −10 dB and −30 dB, and the FWHM of the PSF is of the order of 40 nm. Small reflections, small attenuation, and reduced Fabry-Perot resonances make it a flexible metamaterial for arbitrarily shaped optical planar devices with sizes of the order of one wavelength, such as the elements of optical interconnects or cloaks.
Analysis of vitexin in aqueous extracts and commercial products of Andean Passiflora species by UHPLC-DAD

INTRODUCTION

More than 500 species comprise the Passiflora genus, growing in the form of lianas or vines that climb by tendrils, or as arboreous or shrub-like species (Hernández and Bernal, 2000). Latin America has the highest occurrence of these species, commonly known as passion fruits; Colombia, Brazil, Ecuador, and Perú are the countries with the highest diversity of the species (Fischer and Rezende, 2008). In previous chemical studies on this genus, different classes of secondary metabolites have been identified (alkaloids, cyanogenic glycosides, fatty acids, terpenes and saponins), with flavonoids being reported as having most of the pharmacological properties described for these species (Dhawan et al., 2004; Patel et al., 2011; Farag et al., 2016). Vitexin is one of the most frequently reported flavonoids for Passiflora (Costa et al., 2013).

Based on several studies that show preclinical and clinical evidence of pharmacological activity of Passiflora, some species are included in official pharmacopeias. There are monographs of P. incarnata L. in the European, British and Spanish Pharmacopoeias (European Pharmacopoeia, 2011; British Pharmacopoeia, 2009; Real Farmacopea Española, 2002), while P. edulis and P. alata are included in the Brazilian Pharmacopoeia (Agência Nacional de Vigilância Sanitária, 2010). In all these monographs, one of the tests described to confirm the identity of the species is Thin Layer Chromatography, which evaluates the flavonoid fingerprint, including the presence of vitexin. In quantitative terms, the assay seeks to determine the total flavonoids, expressed as vitexin in Passiflora incarnata or as apigenin in P. alata and P. edulis, by a colorimetric method. Only the Brazilian Pharmacopoeia describes another identification test, which is based on the HPLC profile.

In this context, the aim of this work was to determine the vitexin content of some Colombian Andean Passiflora species and commercial products by Ultra-High Performance Liquid Chromatography (UHPLC), in order to evaluate the usefulness of this flavonoid as a chemical marker of the analyzed species.

General methods

For the extraction procedures, distilled water was used. HPLC-grade acetonitrile (Merck), formic acid RA (Merck), and water purified using a Milli-Q system (Millipore®) were filtered through a 0.22 µm membrane (CNW Technologies) and degassed in an ultrasound bath before UHPLC analysis. The reference standard used was vitexin (Fluka, 95%). The UHPLC analyses were carried out in a Thermo Scientific Dionex Ultimate 3000 equipped with Dionex Ultimate 3000 Diode Array Detection (DAD), a Dionex Ultimate 3000 RS pump, an on-line degasser and an autosampler. The data were processed using the software Chromeleon Client, version 6.80 SR15.
Plant material and extraction

Aerial parts of Passiflora species were collected from different places of the Colombian Andean region (Table 1). The leaves were air dried separately at 40°C and finely powdered. 10 g of leaves from each species were extracted, separately, by infusion with 100 mL of boiling water (95°C, plant:solvent 1:10, w/v) for 10 minutes, then filtered and centrifuged at 5000 rpm/30 min; finally, the supernatant was frozen and lyophilized to obtain the crude extract. The samples for UHPLC analysis were prepared by dissolving 1.0 mg of the dried crude extracts in 1 mL of methanol:water (1:1, v/v) and filtering through a 0.22 µm membrane before injection. (Table 1 note: ND = not detected; the data represent the mean ± SD of three replicates.)

Passiflora botanical drugs

For this study, we selected some Passiflora botanical drugs from health food stores in Bogotá, Colombia (Table 2). Only products containing exclusively Passiflora leaf extracts in their composition were included in the study; products containing flower extracts, mixtures of distinct plant extracts, and homeopathic products were excluded. For products in drops, 500 µL of the product was diluted with 500 µL of a methanol-water solution (1:1). In the case of capsules, 15 mg of the dry powdered content was dissolved in 1 mL of methanol-water (1:1). All samples were filtered through a 0.22 µm membrane before injection.

Validation of analytical procedures

The validation of analytical procedures was performed according to the ICH guidelines (ICH, 2005). The validated parameters were specificity, linearity, accuracy, precision (repeatability and intermediate precision), limit of detection (LOD) and limit of quantification (LOQ).

RESULTS AND DISCUSSION

As described earlier, flavonoids are the metabolites most frequently reported in species of the Passiflora genus. Among the species analyzed in this study, Passiflora ligularis, P. tarminiana, P. mixta, P. cumbalensis, P. tripartita var mollissima, P. tripartita var tripartita, and P. edulis var flavicarpa showed the most complex flavonoid profiles, with several peaks corresponding to these metabolites (identified by their DAD spectra; data not shown). All of them belong to the Tacsonia subgenus, also known as banana passion fruits, with the exception of P. edulis var flavicarpa, which belongs to the Passiflora subgenus (Ocampo et al., 2007). The complexity of the extract composition was one of the challenges to be overcome when developing the analytical method for the analysis of different Passiflora species. Although the developed methodology does not allow the separation of all flavonoid peaks in the samples, especially the complex ones, it enabled the differentiation of the flavonoid fingerprint of each extract.
Linearity and sensitivity

From the regression coefficient (r²) obtained, the method developed for the quantification of vitexin showed good correlation between the response and the concentration of the flavonoid. In addition to least-squares regression, an ANOVA analysis was performed to confirm the significance of the regression. The calculated F value was 17809.414, which is higher than the tabulated F value (F(1,19) = 4.381) at a 95% confidence level, demonstrating that the regression was significant. The limits of detection (LOD) and quantitation (LOQ) were determined by successive dilutions of the calibration curve until a signal-to-noise ratio of 10:1 was observed, with a relative standard deviation (% RSD) below 5%, for the LOQ, and a ratio of 3:1 for the LOD (Table 3).

Precision and accuracy

Precision was evaluated as repeatability and intermediate precision. Three concentration levels of the standard solutions were analyzed in triplicate within one day and on three consecutive days, respectively. Both parameters were satisfactory (Table 3), since the relative standard deviations (RSD) of all values were below 5%, according to the limit recommended in the ICH guideline. Accuracy was expressed as the recovery percentage obtained after spiking a sample with known amounts of the standard solution (Table 4); it was calculated using the equation: Recovery (%) = (Experimental content × 100)/Theoretical content. The reported data represent the average percentage of triplicates and its relative standard deviation.

Vitexin quantification in Passiflora extracts

In general, Passiflora crops require constant pruning throughout the harvest and postharvest seasons to improve the structure of the plants and increase their productivity (Ocampo and Wyckhuys, 2012). This good agricultural practice generates a large number of leaves that are considered waste but, due to their previously reported pharmacological activity, could be used to produce botanical drugs. However, one of the first steps necessary to develop safe, effective, high-quality botanical drugs is to implement analytical techniques to quantify the chemical or therapeutic markers of both the raw material and the final product. In this study, vitexin was detected in P. edulis var edulis, P. tripartita var mollissima, P. mixta, P. edulis var flavicarpa, P. tarminiana and P. tripartita var tripartita leaf extracts, although only P. edulis var edulis, P. tripartita var mollissima, and P. mixta contained vitexin above the quantification limit of the method: 0.3 ± 0.0 mg g⁻¹ of extract for P. edulis var edulis, 2.49 ± 0.2 mg g⁻¹ of extract for P. tripartita var mollissima, and 4.58 ± 1.23 mg g⁻¹ of extract for P. mixta, the highest amount (Table 1 and Figure 1). No vitexin was detected in P. cumbalensis, P. ligularis, P. quadrangularis, and P. alata. Our method was able to detect vitexin in some species even at low concentrations: P. edulis var flavicarpa, P. tarminiana, and P. tripartita var tripartita presented quantities of this flavonoid below the LOQ (Table 1). This result is consistent with data from the literature reporting the quantification in species such as P. quadrangularis, P. alata, P. edulis var flavicarpa, and P. edulis var edulis, in which the concentration of this compound is low, unquantifiable, or undetectable (Gomes et al., 2017; Costa et al., 2016; Zucolotto et al., 2012).
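As a sketch of the calibration arithmetic described under "Linearity and sensitivity" (with invented concentrations and peak areas, not the study's data), the slope and residual standard deviation of a least-squares fit give the common 3.3σ/S and 10σ/S estimates of LOD and LOQ, a standard alternative to the signal-to-noise dilution approach the study used.

```python
import numpy as np

# Hypothetical calibration data: vitexin standard (ug/mL) vs UHPLC-DAD peak area.
conc = np.array([0.5, 1.0, 2.5, 5.0, 10.0, 20.0])
area = np.array([10.4, 20.9, 52.1, 104.8, 209.0, 419.5])

# Least-squares line: area = slope * conc + intercept.
slope, intercept = np.polyfit(conc, area, 1)
resid = area - (slope * conc + intercept)
sigma = resid.std(ddof=2)                        # residual standard deviation
r2 = 1 - (resid**2).sum() / ((area - area.mean())**2).sum()

lod = 3.3 * sigma / slope                        # limit of detection
loq = 10.0 * sigma / slope                       # limit of quantification

# Spike-recovery check: recovery (%) = experimental / theoretical * 100.
recovery = 100 * 4.92 / 5.00                     # hypothetical spiked sample
print(f"r2={r2:.4f}, LOD={lod:.3f} ug/mL, LOQ={loq:.3f} ug/mL, rec={recovery:.1f}%")
```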
Therefore, the scarcity of vitexin indicates the low viability of this flavonoid as a chemical marker for the species analyzed in this study, except for P. mixta and P. tripartita var mollissima. On the other hand, vitexin derivatives such as isovitexin-2''-O-rhamnoside, vitexin-2"-O-xyloside and especially vitexin-2"-O-rhamnoside, available as a commercial standard, could be used as preferable analytical markers and even as potential therapeutic markers, as they are the major compounds in some Passiflora species (Costa et al., 2013) and have been related to their pharmacological activities. Some biological effects attributed to those vitexin derivatives are antioxidant activity, improvement in the survival and function of ADSCs (adipose-derived stem cells) in vitro, and, in the case of vitexin-2"-O-rhamnoside, strong inhibition of DNA synthesis in MCF-7 breast cancer cells (Wei et al., 2014; Ninfali et al., 2007). Also, in relation to vitexin-2"-O-xyloside, a recent study concluded that it has the ability to inhibit the proliferation of both CaCo-2 colon cancer cells and HepG2 liver cancer cells and that the effect was magnified by combination with avenanthramides (Scarpa et al., 2017). Other authors have confirmed the cytotoxicity in CaCo-2 tumor cell lines, and a synergistic effect when it is combined with other phytochemicals such as betalains, epigallocatechin-3-gallate and isothiocyanates (Farabegoli et al., 2017).

Passiflora botanical drugs

The Passiflora products licensed in Colombia are indicated as sedatives and adjuvants in the treatment of anxiety and sleep disorders of nervous origin. Only two species are approved for commercialization as botanical drugs, P. tripartita var mollissima and P. incarnata, with flowers and leaves considered raw material for extraction (INVIMA, Instituto Nacional de Vigilancia de Medicamentos y Alimentos, 2017). An analysis of the products most commonly found in Bogotá D.C., Colombia gave the following results: vitexin could be detected and quantified in four (A-1, A-2, B-1 and B-2) of the six botanical drugs analyzed (Table 2, Figure 2). It is important to highlight the presence of higher amounts of vitexin in P. tripartita var mollissima products compared to the low levels detected in our aqueous extract of the same species. These differences in vitexin content could be related mainly to the solvent used in the extraction process, since the botanical drugs claim to be produced from a hydroalcoholic extract while our extracts were aqueous infusions. The difference in the solubility of vitexin between ethanol and water is consistent with the observed results (Chen et al., 2017). Products C-1 and C-2, containing P. incarnata, the species reported in most of the Pharmacopoeias and the most recognized worldwide, did not contain vitexin (Figure 3). This indicates that additional efforts are needed to develop adequate methods for analyzing herbal products or botanical drugs that accurately characterize the species and enable quality control of the products.

CONCLUSIONS

Based on the results obtained for the validation parameters (linearity, precision, accuracy, LOD and LOQ), a reliable UHPLC-DAD method was developed for the quantification of vitexin in distinct matrices from different Passiflora species. Only three of the ten evaluated species contained quantifiable amounts of this flavonoid: P. mixta, P. tripartita var mollissima, and P. edulis var edulis. Three species contained vitexin below the limit of quantification, and four did not contain this flavonoid, or else it was found at concentrations lower than the limit of detection. Based on these results, the use of vitexin as a chemical marker for the quality control of highly cultivated Passiflora species in Colombia is not recommended. The method was also useful in the analysis of Passiflora products commercialized in Bogotá. It was possible to identify three products with significant quantities of vitexin, as well as one with low amounts, and two products of P. incarnata leaves with undetectable quantities.

Table 1. Passiflora species with their respective collection zones and quantification of vitexin in crude extracts.

Table 3. Calibration, sensitivity and precision data for vitexin standard solutions.
An Efficient Network Resource Management in SDN for Cloud Services

With the recent advancements in cloud computing technology, the number of cloud-based services has been gradually increasing. Symmetrically, users are asking for quality of experience (QoE) to be maintained or improved. To do this, it has become necessary to manage network resources more efficiently inside the cloud. Many theoretical studies for improving the users' QoE have been proposed. However, there are few practical solutions due to the lack of symmetry between implementation and theoretical research. Hence, in this study, we propose a ranking table-based network resource allocation method that dynamically allocates network resources per service flow based on flow information periodically collected from a software defined network (SDN). It dynamically identifies the size of the data transmission for each service flow on the SDN and differentially allocates network resources to each service flow based on this size. As a result, it maintains the maximum QoE for the user by increasing the network utilization. The experimental results show that the proposed method achieves 29.4% higher network efficiency than the general Open Shortest Path First (OSPF) method on average.

Introduction

Various services have been provided, based on recent advancements in cloud computing technologies [1], and their types and targets have been continuously expanding. Early cloud-based services mainly focused on storage, whereas various current services include big data analysis, games, and multimedia processing. Cloud-based services are expected to achieve the fastest rate of growth over the next few years. However, owing to advancements in these technologies and their market growth, processing procedures have become more complex and the amount of data required in the service provisioning procedures has drastically increased in symmetry. Therefore, an efficient management method for the internal resources of the cloud has symmetrically become essential for providing services more smoothly while maintaining the quality of experience (QoE). The traditional network structure currently used on the Internet cannot flexibly deal with dynamically requested services. Software defined networking (SDN) [2] technology has emerged to address this issue. SDN symmetrically separates the network structure into a control plane and a data plane and manages network switches through the OpenFlow [3] protocol. At the control plane, the controller conducts network control functions such as routing, flow table management, and topology management. At the data plane, the switches forward packets using the flow rules received from the controller in symmetry. The overhead of the control plane in SDN may increase a little. However, SDN can efficiently manage network resources because it enables a logical view of the network and allows user-desired functions to be dynamically added to the controller. Therefore, in this study, we propose a method for efficiently managing resources when providing cloud services in SDN. The proposed method collects and analyzes the traffic patterns of the services currently being transmitted on the network using SDN. Based on the analyzed information, the priority of services is symmetrically determined by predicting the usage of the services requested by users. The proposed method then dynamically allocates the network resources for each service based on the priority of the services, in symmetry.
Related Works

With the recent advancements and increasing demand for cloud computing technology, the delay problem in content delivery has emerged as a serious issue in the cloud. Various studies have been actively conducted to address this problem. Studies on the management of network resources have mainly focused on processing, such as traffic control, bandwidth load balancing, and path allocation, during the data transmission required to deliver services to the relevant processing servers. Many previous studies have mainly focused on server computing resource management. Recently, however, resource management methods considering both network resources and computational resources have been symmetrically gaining attention because of the need to account for delays actually occurring in the network. In [4,5], an orchestrator system for managing network resources inside the cloud was proposed. The proposed system manages network resources using a Python-based open source OpenFlow/SDN (POX) controller in a Xen-based cloud environment. This system determines whether each virtual machine (VM) is executed using the virtual machine monitor (VMM) handler and forwards the information to the engine. Next, the system allocates the network path to the determined VM. Here, it identifies the status of the network using the SDN controller. The core of these studies is to predict the load information of the network. To this end, the authors proposed a formula that predicts the network load based on previous log information and the current state of the network traffic. In [6], an integrated management framework based on SDN was proposed to reduce the resource usage during the network communication procedure. The proposed framework applies a real-time VM migration method to reduce the inefficiencies in dynamic network traffic and internal cloud links. The proposed migration method calculates the network load of the VMs from which the service is requested based on the S-CORE [7] algorithm, and then determines whether to apply migration based on this information. In [8], the network is automated using Intent-Based Networking (IBN), which is similar to SDN, and the quality of service (QoS) of the network is improved using cloud technology. It distributes the additionally required resources to the cloud by using cloud technology instead of physical backup resources when the network approaches its maximum overhead. In [9], the QoS for various services is satisfied using network monitoring and flow-level scheduling by employing SDN for a cloud data center network (CDCN). The proposed system suggests a resource allocation mechanism that maintains the maximum link utilization for all services, focusing on traffic management problems. In [10], a multi-service differentiated (MSD) traffic management strategy for SDN-based cloud data centers is proposed to process various types of increasing traffic. A Fibonacci tree optimization algorithm is applied to optimize the MSD traffic management. In [11], the authors comprehensively surveyed and evaluated the performance of existing delay-constrained least-cost (DCLC) routing algorithms in the SDN environment. They adapted 26 DCLC routing algorithms to the SDN controller and compared their performances within the 4D evaluation framework. According to their results, the Lagrange relaxation-based aggregated cost algorithm and the search space reduction delay-cost-constrained routing algorithm performed well in most of the evaluation space.
In [12], the authors studied the convergence performance of the Open Shortest Path First (OSPF) routing protocol in both the legacy network and the SDN architectures. They measured the packet forwarding delay and convergence time after a network failure. According to their results, the routing convergence time in the SDN architecture is less than that in the distributed legacy network architecture. In [13], the authors similarly analyzed the performance of network convergence in the legacy network and the SDN architectures. In [14], the authors modified the OSPF routing protocol to dynamically change the metric calculation based on the network requirements. They give the privilege to multimedia traffic flows based on network requirements to improve the quality of multimedia services. Their protocol increases the network bandwidth utilization and reduces the loss rate and the delay of multimedia traffic flows. However, the goal of their protocol differs from that of our mechanism. Their protocol focuses on service differentiation and provides more resources to multimedia traffic flows. Meanwhile, our mechanism focuses on the overall network resource utilization and provides more efficient network resources to the flows that can use them more. In addition, their protocol requires the flows to be explicitly classified into multimedia and non-multimedia types when the traffic enters the network. In [14], the authors used an artificial intelligence module, proposed in [15], for flow classification. However, that module is optimized only for separating small sensor data from multimedia data and does not consider diverse traffic data. In addition, their protocol requires the installation of a new message exchanging protocol on both the controller and the switches. In [16], the authors proposed a way to reduce the delay for significant traffic in SDN caused by the separation of the control and data planes. In [17], the authors proposed a hierarchical resource allocation scheme to decrease the interference between the macro and the micro cells in SDN-based cellular networks. Our study does not focus on the situation where there are so many network nodes that distributed protocols cannot operate on them. We also do not focus on the performance comparison of routing protocols between the legacy network and the SDN; that comparison was symmetrically analyzed well in [12,13]. In this study, we focus on efficient network resource management with SDN and propose a new network resource allocation mechanism that can be used in SDN. As mentioned above, many studies have been conducted on load balancing and routing in SDN. However, there has been no approach like ours that increases the overall network resource utilization by giving more efficient network resources to the flows that can use them more.

Need for Network Resource Management in SDN

In a cloud-based service, a flow can be created and removed when a service begins and ends, respectively. Each flow takes a different form depending on the transmission pattern of the packets, and this pattern can vary depending on the type of service requested and the state of the service. Figure 1 shows an example of the internal packet sequence of the flows developed when two services are created. As indicated in the figure, the number of packets entering each flow varies dynamically depending on the state of each service.
For example, regarding the flow for processing a real-time streaming service, the packets consistently come in at regular time intervals. In the flow for processing a web service, the packets explosively come in only when the web moves or starts, after which the packets occasionally enter to maintain the session. Therefore, the packet pattern of each flow differs owing to various factors when multiple users request multiple services. This difference dynamically changes the amount of network resources used by the flow.
A cloud system can use the network resources of various paths to deliver the desired content to the user. If highly effective network resources with low cost are first allocated to a flow that consumes a large amount of network resources, the user QoE can be significantly improved. Figure 2 shows an example of a cloud network topology with two paths of different network cost and three service flows whose amounts of transmitted data change dynamically. The cloud system shown in Figure 2a has two different paths between the user and server. It is assumed that the s1-s3-s5 path is less costly than the s1-s2-s4-s5 path and can support only a single service flow. Figure 2b shows the size of the data transmission required when processing the three services. In this scenario, if the cloud network is configured using a traditional network structure, the network resources cannot be dynamically allocated according to the amount of data. Therefore, in the traditional network structure, when Service1 is first requested, it is transmitted through the s1-s3-s5 path and the remaining services are transmitted through the s1-s2-s4-s5 path. When the network utilization is analyzed over time, Service1 effectively transmits most of the data in the t0-t1 section, thus using the path efficiently. However, in the t1-t2 section, Service2 is transmitting most of the data but cannot use the low-cost path. In the t2-t3 section, Service1 transmits hardly any data, whereas Service3 steadily transmits data, leading to a sharp decrease in network utilization. Consequently, in a traditional network structure, network resources cannot be efficiently used because they cannot be dynamically allocated according to the amount of data transmitted.
If the traffic pattern changes are detected in real time and paths are dynamically allocated using the SDN structure, network resources can be managed more efficiently. The SDN structure enables the changes in the amount of data transmitted by each service flow to be identified in real time. Thus, it can dynamically allocate the most efficient network resources to the service flow that transmits the most data per unit time, according to the identified amount of data transmission. If the SDN structure is adopted in the above example, the s1-s3-s5 network path can be switched and allocated to Service1, Service2, Service3, Service1, and Service3 per unit time. In this manner, the network efficiency continues to increase, thereby increasing the user's QoE. However, we should accurately predict the changes in flow arising from various factors to obtain such results. Therefore, in this study, we propose a method for more precisely detecting and processing the flow changes in an SDN structure.

Proposed Network Resource Management Method in SDN

The proposed method manages resources efficiently at the network level when providing services in a cloud environment. Figure 3 shows the proposed system structure for efficient network resource management. As the figure indicates, the SDN controller is periodically notified regarding the amount of data and the number of packets transmitted. The information collected by the SDN controller is sent to the priority calculator, which uses it to calculate the weight of the flows. The ranking table manager sorts and stores the flows by priority based on the calculated weight. It then determines the network path of each flow using the set threshold, and continues to update and store the priority of the services. The flow rule generator creates a flow rule by setting the differential network paths for the services after checking the ranking table.
The generated flow rule is sent back to the controller, which sends it to the switch through the OpenFlow protocol. Finally, each switch forwards packets based on the updated flow rule. It is difficult to predict exactly how much data a specific service will send at a specific point in the future. Therefore, in this study, we propose a scheduling method that predicts the amount of future traffic required by a service by analyzing the amount of traffic that the service flow has generated so far. The proposed method includes a procedure for calculating the priority and a procedure for allocating the network path based on the priority. The priority is contingent on how much data have been sent at a certain time in the past. The priority increases for flows that have recently sent large amounts of data, whereas it decreases for flows that have not. If the priority of flows did not change over time, currently high-demanding flows might not be able to use the more efficient network paths because past high-demanding flows would still occupy them. To solve this, we use the exponentially weighted moving average SP_k^t of the amount of data transmitted in the past for each flow k at time t, as expressed in Equation (1):

SP_k^t = α · B_k^t + (1 − α) · SP_k^{t−1}, (1)

where B_k^t refers to the amount of data transmitted by flow k between time t−1 and t, and α refers to the weight.
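The priority update and the ranking-based allocation can be sketched compactly. The following is a minimal Python sketch, assuming the standard EWMA form for Equation (1) (the extracted text names SP_k^t, B_k^t and α, but the printed formula was lost) and using illustrative names (update_priority, allocate_paths) that are not from the paper's implementation:

ALPHA = 0.5  # weighting factor; the paper's experiments use alpha = 0.5

def update_priority(sp_prev, bytes_sent, alpha=ALPHA):
    # Equation (1): EWMA of per-interval traffic for one flow.
    # sp_prev is SP_k^{t-1}; bytes_sent is B_k^t.
    return alpha * bytes_sent + (1.0 - alpha) * sp_prev

def allocate_paths(priorities, capacity):
    # Sort flows by priority and assign the top `capacity` flows to the
    # low-cost (optimal) paths; the rest use the non-optimal paths.
    ranked = sorted(priorities, key=priorities.get, reverse=True)
    optimal = set(ranked[:capacity])
    return {f: ("optimal" if f in optimal else "non-optimal") for f in ranked}

# Example: three flows, one optimal-path slot (cf. the Figure 2 scenario).
sp = {"Service1": 0.0, "Service2": 0.0, "Service3": 0.0}
traffic = {"Service1": 900.0, "Service2": 100.0, "Service3": 50.0}
sp = {k: update_priority(sp[k], b) for k, b in traffic.items()}
print(allocate_paths(sp, capacity=1))  # Service1 gets the low-cost path

In a deployment, logic of this kind would run in the application on top of the controller, with allocate_paths feeding the flow rule generator described above.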
Figure 4 shows the procedure for allocating the network path based on the priority through the ranking table in the application. The controller periodically calculates SP_k^t and forwards it to the ranking table. The ranking table then manages the collected flows by sorting them based on SP_k^t. It then applies a threshold for differential network path allocation. This threshold can be set differently depending on the characteristics of the service and is used as the criterion for allocating efficient network paths in the SDN.

Performance Evaluation

In this section, we evaluate the performance of the network management method based on the ranking table that is applied to a cloud system in an SDN environment. Because it is difficult to collect input data for the experiment from actual cloud-based services, we obtained them through the Ostinato [18] packet generator by analyzing user request patterns. Tables 1 and 2 show the settings used for the system and the experimental data applied during the experiments, respectively. The data used for the experiment are largely divided into multimedia and web service data. All the data have 180 flows. The packet size for the web service is generated randomly. It is assumed that the service request pattern of the web data is a random request made again within 10 min of an incoming request. Figure 5 shows the network topology used for the experiments. The topology consists of eight switches and sends data traffic through four network paths. The cost of each link is 1; hence, the total cost of each network path is the number of hops composing the path, and the network paths s1-s2-s8 and s1-s3-s8 are better than the network paths s1-s4-s6-s8 and s1-s5-s7-s8. It is assumed that the paths s1-s2-s8 and s1-s3-s8 can each carry no more than 60 flows. Therefore, not all flows can use the optimal paths; some of them have to use the non-optimal paths s1-s4-s6-s8 and s1-s5-s7-s8.
Figure 6 shows the data workloads used for the experiment, which comprise six workload types. Each workload represents the number of multimedia and web services requested over time. In the figure, the amount of data for multimedia and web services is the same for both Workload-E and Workload-F. In this study, we evaluated the performance of the six workload types in the experiments. These experiments aim to verify how much data traffic is transmitted through the optimal paths by efficiently allocating the network resources to the service flows. For performance comparison, we also implemented OSPF, which is the most commonly used of the conventional path selection algorithms. Many researchers have investigated its performance in the SDN environment [12,13] and compared their proposals with OSPF [14]. In our study, we also evaluate OSPF and compare it with our proposed algorithm. In the experiments, we set the weighting factor α of our proposed method to 0.5. We also update the ranking table every 5 min and distribute the flow rules to the switches at the same frequency. Figure 7 shows the performance evaluation results for Workload-A, in which the multimedia services increase and the web services decrease. The vertical axis indicates the network utilization, and the horizontal axis indicates the flow of time in minutes. The network utilization is the amount of data sent divided by the maximum amount of data that can be sent through the optimal paths; it shows how much data is sent through the network paths. In the experiments, up to 120 flows can be sent through the optimal paths and the measurement interval is 5 min. Therefore, the maximum amount of data that can be sent through the optimal paths over the past 5 min is the sum of the amounts of data in the top 120 flows, ranked by the amount of data entering during the corresponding interval. If the sum of the network utilizations of the optimal paths is 1, it indicates that all of the top 120 flows are assigned to the optimal paths.
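This utilization metric can be computed directly from per-flow traffic counters. Below is a minimal Python sketch of the metric; the function name and data layout are illustrative, not from the paper's evaluation scripts:

def utilization(sent_on_optimal, interval_traffic, optimal_capacity=120):
    # interval_traffic maps flow -> bytes entering during the 5-min interval.
    # The denominator is the traffic of the top `optimal_capacity` flows,
    # i.e. the most that could have crossed the optimal paths.
    top = sorted(interval_traffic.values(), reverse=True)[:optimal_capacity]
    max_possible = sum(top)
    return sent_on_optimal / max_possible if max_possible > 0 else 0.0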
In the graph, the empty blue square line represents the results of paths 3 and 4, which have long hop counts. The solid black square line represents the network utilization of the data transmitted through paths 1 and 2, which are the optimal paths owing to their small number of hops. The solid red triangle line represents the sum of the amounts of data transmitted through paths 1 and 2, and shows how much data flows through the optimal paths; if this value is 1, the maximum amount of data that could be sent through the optimal paths was actually sent through those paths. The results confirm that the network utilization of paths 1 and 2 gradually decreases over time for the OSPF algorithm, whereas it increases over time for the proposed algorithm. In the experiments, a total of 200 services were requested. During the first 0-5 min, the numbers of flows for multimedia services and web services were 20 and 180, respectively. Thus, many flows for the web services were allocated to the optimal paths (paths 1 and 2) in the beginning. In this situation, OSPF does not change the flows allocated to the optimal paths even though the number of flows for the multimedia services, which transmit a large amount of data, increases over time. In contrast, the proposed algorithm dynamically reallocates the paths based on the continuously checked priority of the flows. It lowers the priority of the web flows when new multimedia flows enter. Therefore, it changes the flows allocated to the optimal paths from the web services to the multimedia services when new multimedia services come in, thus leading to an increase in the efficiency of network usage, as shown in Figure 7b. The same results are observed in the other workloads, as shown in Figures 8-12. However, the performance evaluation results of Workload-C in Figure 9 show an insignificant difference between the two algorithms. This is because OSPF also uses the network efficiently: more multimedia services are initially requested than web services, and the number of multimedia services remains the same, so the multimedia services are transmitted through paths 1 and 2 from beginning to end. Figure 13 shows a graph of the total network utilization during the entire experiment time. The total network utilization refers to the amount of data transmitted through paths 1 and 2 divided by the maximum amount of data that can be transmitted for each workload. The results show that the total network utilization of the proposed algorithm is 29.4% higher than that of OSPF on average. The proposed algorithm shows 69.2% higher utilization than OSPF on Workload-A and 1% higher on Workload-C. Workload-A starts with 180 web services sending a relatively small amount of data and 20 multimedia services sending a relatively large amount of data. The number of flows of web services decreases and the number of flows of multimedia services increases gradually over time.
In this situation, OSPF mostly assigns the web flows to the optimal paths in the beginning and does not replace them with the incoming multimedia flows. In contrast, the proposed algorithm reallocates the optimal paths assigned to the earlier web flows to the new multimedia flows. As a result, the proposed algorithm provides much better performance than OSPF. The situation is slightly different on Workload-C. On that workload, the number of multimedia flows is initially 100 and does not change over time, whereas the number of web flows starts at 20 and grows to 180 over time.
In this situation, both OSPF and the proposed algorithm assign the optimal paths to the multimedia flows and do not replace them even though the number of web flows increases. Therefore, there is little difference in network utilization on Workload-C. In summary, the proposed algorithm shows higher performance than OSPF on all workloads. Consequently, the network resource management method proposed in this paper is clearly more effective than OSPF.

Conclusions

In this paper, we presented an efficient network resource allocation method in SDN to efficiently support cloud services. We dynamically calculated the priority of services over time using a ranking algorithm, based upon which we differentially allocated the network paths. Such a differential path allocation improves the bandwidth utilization of the network paths. To prove its efficiency, we simulated the proposed network resource management system on an SDN and evaluated its performance. The performance evaluation results showed that the proposed method improved the network utilization in comparison with the OSPF method. The network model and workloads used in this study may not accurately represent actual service models and workloads. Therefore, in future research, we will consider the configurations of actual network models under various situations. We will also study the effect of dynamic link cost changes and additionally evaluate the performance on network workloads gathered from the Internet. Finally, we have not yet found a similar mechanism whose goal is the same as ours; if we find one in the future, we will evaluate its performance and compare it with our mechanism.
2020-10-02T13:06:21.676Z
2020-09-21T00:00:00.000
{ "year": 2020, "sha1": "cfb8de6775366ccb109bc0a62088cc18a7287e55", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-8994/12/9/1556/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "4e7a69bb2ff1dc174b45075cebe5f8e326319716", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
8020926
pes2o/s2orc
v3-fos-license
BRICHOS - a superfamily of multidomain proteins with diverse functions

Background

The BRICHOS domain has been found in 8 protein families with a wide range of functions and a variety of disease associations, such as respiratory distress syndrome, dementia and cancer. The domain itself is thought to have a chaperone function, and indeed three of the families are associated with amyloid formation, but its structure and many of its functional properties are still unknown.

Findings

The proteins in the BRICHOS superfamily have four regions with distinct properties. We have analysed the BRICHOS proteins focusing on sequence conservation, amino acid residue properties, native disorder and secondary structure predictions. Residue conservation shows large variations between the regions, and the spread of residue conservation between different families can vary greatly within the regions. The secondary structure predictions for the BRICHOS proteins show remarkable coherence even where sequence conservation is low, and there seems to be little native disorder.

Conclusions

The greatly variant rates of conservation indicate different functional constraints among the regions and among the families. We present three previously unknown BRICHOS families: group A, which may be ancestral to the ITM2 families; group B, which is a close relative to the gastrokine families; and group C, which appears to be a truly novel, disjoint BRICHOS family. The C-terminal region of group C has nearly identical sequences in all species ranging from fish to man and is seemingly unique to this family, indicating critical functional or structural properties.

Neighbour-joining dendrograms were built for the collected BRICHOS proteins. These clearly separate into 12 groups: the 8 previously known families, 3 novel families, and one divergent group of only two sequences (cf Fig. 1). Group A is a novel family that clusters closely with the ITM2 families, albeit with low bootstrap values. The position in the dendrogram indicates that group A, with its primarily insect and Caenorhabditis sequences, may be ancestral to the ITM2 families. The divergent group branches off before group A, and its echinoderm and amphioxus sequences are compatible with an ancestral nature. GKN1, GKN2 and group B are closely related families that are also colocalised in the genome, suggesting that group B may be a third type of gastrokine. Group B is found only in mouse, rat, cow and dolphin, while GKN1 and GKN2 are found in a wide range of mammals (also frog and chicken, respectively). LECT1 and TNMD are widespread in vertebrates, from fish through armadillo and elephant to human, though TNMD has so far not been reported in frog. Group C is another novel family. Neither this nor proSP-C clusters strongly with any other family, but both are present in tetrapods. While group C is found in fish but not frog, the opposite is true for proSP-C, which is consistent with its role as a pulmonary surfactant constituent. BRICHOS proteins have four regions: hydrophobic, linker, BRICHOS and C-terminal (length distributions shown in Table 1). The hydrophobic region is most often a transmembrane segment (predictions and [3]) but may be a signal peptide in GKN1 and GKN2 [4]. In proSP-C it functions as both [5]. All families except GKN1 and GKN2 have an additional N-terminal region that is poorly conserved, highly variable in length and likely separated from the other regions by a membrane. This region is not further investigated in this study.
All statements regarding the C-terminal region exclude proSP-C, since this region is absent from that family.

Conservation and secondary structure

As shown in Tables 2, 3, 4 and 5, residue conservation differs considerably among the regions. The spread in ID (average pairwise percent identities) for the hydrophobic region is wide, from 26% in group A to 96% in proSP-C, indicating drastically different functional constraints. Conversely, for the BRICHOS region all families have 51-83% ID, indicating similar functions among the families. The remaining regions show wide ID spreads. The GC values (group conservation, Tables 2, 3, 4 and 5) show the largest spread for the hydrophobic region, with the highest values for proSP-C and ITM2A. The linker region shows the lowest GC values (8-46%). Despite high numbers for cscore and ID, the LECT1 linker region shows an extremely low GC value (8%) compared to its other regions (37-48%). The three ITM2 families show similar values in all regions except the hydrophobic one, whose 36-86% GC might indicate differing structural constraints. The regional conservation differs considerably between families (cf Fig. 2). proSP-C has its highest cscore in the hydrophobic region (96%), while for group C it is highest in the C-terminal region (76%). The hydrophobic region is the most conserved in ITM2A while it is the least conserved in group C. Fig. 3 shows alignments for each region. Remarkably, although the degree of conservation is high in individual families, only three residues are completely conserved in the superfamily: D144, C160 and C219 (human ITM2A numbering), all in the BRICHOS region. The corresponding cysteines in proSP-C form an internal disulphide bridge [6], which could be the case for all families. C244 and C261 in the C-terminal region are strictly conserved in all families, except in group A where they are absent from all sequences, and in TNMD where one stickleback sequence has tyrosine replacing the latter cysteine. However, since the stickleback genome project is still ongoing, this might represent a sequencing error. Thus, these cysteines might also form a disulphide bridge. The structure is still unknown for the BRICHOS proteins. However, while the degree of conservation across the superfamily is low, there is remarkable coherence in secondary structure, not only in the BRICHOS domain. Also, the few natively disordered regions are with few exceptions found N-terminally of the hydrophobic region, indicating that the proteins may have otherwise well defined tertiary structures.

Hydrophobic region

The hydrophobic region is strongly predicted to be helical (Fig. 3a). Notable exceptions are GKN1 and GKN2, where the first 6 residues of the predicted signal peptide show strand tendencies. The proSP-C prediction surprisingly shows strand tendencies, disagreeing with experimental evidence of a helical structure [7]. The remarkably high conservation in ITM2A, ITM2B and proSP-C (Fig. 2), and the high number of strictly conserved valines in proSP-C, are unusual for a transmembrane segment, indicating possible additional roles (e.g. protein interactions). The high degree of conservation in proSP-C is expected since it corresponds to mature SP-C [5,8].

Figure 1. Dendrogram of the BRICHOS superfamily.
12 groups are clearly distinguished: proSP-C (pulmonary surfactant protein C precursor), group C, GKN2 and GKN1 (gastrokine-2 and -1), group B, LECT1 (chondromodulin-1), TNMD (tenomodulin), the divergent group, group A, and ITM2A, ITM2C and ITM2B (integral membrane protein 2 A, C and B). UniProtKB sequences are denoted by accession number and identifier, e.g. O43736|ITM2A_HUMAN. GenomeLKPG sequences are denoted by their external identifier (Ensembl or NCBI) prepended with the organism's NCBI Taxonomic identifier, e.g. 13618.ENSMODP00000005214. Red circles highlight the bootstrap numbers for each family. Only sequences with less than 90% sequence identity are shown.

No interactions with other proteins have been described for mature helical SP-C, except for possible homodimerisation [9].

Linker region

The linker region (Fig. 3b) favours coil and strand conformations and shows a lower degree of conservation, except in proSP-C where the high degree of conservation in the hydrophobic region extends into this region.

BRICHOS region

The BRICHOS region shows the highest degree of conservation near the strictly conserved aspartic acid and first cysteine residues, but is less conserved in the C-terminal half (Fig. 3c). The initial section is predicted to form three short strands interspersed with short coils. The remainder is dominated by two helices that are conserved in all families, separated by a coil-strand-coil region. Surprisingly, proSP-C instead shows slight helical tendencies here. The BRICHOS domain of ITM2 has a conserved net negative charge correlated with a conserved net positive charge in the C-terminal region, being most extreme for ITM2A with net charges -5 and +6 in the different regions (Fig. 4). This characteristic is shared by group A, but less pronounced. Furthermore, group A lacks the remarkably high number of conserved hydrophobic residues seen in the ITM2 BRICHOS domains; it is more similar to the other families in this respect, in accordance with group A being ancestral to ITM2. LECT1 and TNMD are similar in many aspects but have drastically different conserved net charges, especially in the BRICHOS domain and C-terminal region. GKN1, GKN2 and group B may have a central natively disordered segment coinciding with a strongly predicted coiled segment (cf Fig. 3c, group B not shown). This is surprising since this characteristic is not shared by the other families.

C-terminal region

The C-terminal region is extremely well conserved in group C (Fig. 5), with nearly identical sequences in all species ranging from fish to man. However, three sequences have a poorly conserved insertion of 30-odd residues whose boundaries correlate with splice sites for surrounding exons, potentially stemming from spliceoforms or incorrect exon predictions. Excluding these increases the average cscore from 52% to 94%.

Table 1 caption (region lengths): numbers give minima, maxima, medians and standard deviations for the region lengths. The C-terminal region is absent from the proSP-C family, and consequently the length characteristics for this region are shown excluding proSP-C.

Table caption (linker region): conservation of the linker region for the different BRICHOS families, shown in percent; column headings as explained in Table 2.

Table data (family followed by three conservation values in percent): ITM2A 83 67 58; ITM2B 89 83 71; ITM2C 89 82 71; group A 66 57 39; GKN1 79 53 35; GKN2 82 74 50; TNMD 77 70 55; LECT1 78 64 37; group C 75 51 29; proSP-C 67 67 30.

Table caption (BRICHOS region): conservation of the BRICHOS region for the different BRICHOS families, shown in percent; column headings as explained in Table 2.
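The ID measure used in these conservation tables (average pairwise percent identity over an MSA) is straightforward to compute. The following minimal Python sketch is illustrative; in particular, the gap-handling convention (positions gapped in both sequences are ignored) is an assumption, as the paper does not spell it out:

from itertools import combinations

def percent_identity(a, b):
    # Compare two aligned sequences, skipping positions gapped in both.
    pairs = [(x, y) for x, y in zip(a, b) if not (x == "-" and y == "-")]
    if not pairs:
        return 0.0
    return 100.0 * sum(x == y for x, y in pairs) / len(pairs)

def average_pairwise_id(msa):
    # ID for a family: mean identity over all sequence pairs in the MSA.
    ids = [percent_identity(a, b) for a, b in combinations(msa, 2)]
    return sum(ids) / len(ids)

print(average_pairwise_id(["MKV-LL", "MKVALL", "MRVALL"]))  # about 77.8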
GKN1 and GKN2 show a low degree of conservation in this region, as does group A, which is surprising given its similarity to the well conserved ITM2 families. The C-terminal region is well conserved in ITM2, TNMD and LECT1, although LECT1 and TNMD have a long and less conserved insertion (Fig. 3d). These insertions may be largely natively disordered; however, while most of these segments are likely coiled, the initial parts of the segments are ascribed a moderate probability of being helical. Group A also shows signs of native disorder in this segment, contrarily to ITM2. Transmembrane predictors ascribe a moderate probability for group C to have a transmembrane helix here, which would be unexpected considering its predicted strand structure and extreme conservation.

Table caption (C-terminal region): conservation of the C-terminal region for the different BRICHOS families, shown in percent; column headings as explained in Table 2. The numbers for group C are presented excluding the insertions shown in Fig. 5.

Figure 2. Conservation profiles of BRICHOS proteins. Each row describes one BRICHOS family and each column describes one region. The vertical axis in each plot shows cscores from 0% to 100%, and the horizontal axes span the length of the corresponding family and region.

Conservation, secondary structure and native disorder

Surprisingly, conservation in LECT1, TNMD and group C increases near the C-terminus (Fig. 2). The decrease for TNMD stems from a truncated stickleback sequence. This part contains four strictly conserved cysteines which could potentially form disulphide bridges or coordinate metal ions. The C-terminal regions of the BRICHOS proteins have no detectable homologues in UniProtKB, making the well conserved C-terminal regions of group C, LECT1 and TNMD unique to this superfamily and especially interesting for further studies.

Disease-related mutations

Several mutations in the proSP-C BRICHOS region correlate with lung disease. Notably, N138T and N186S increase susceptibility to perinatal RDS [10] while substituting asparagine for the residue type that is most frequent in orthologues. Three substitutions are associated with SMDP2. A116D affects a strictly conserved position (except one arginine in frog). R167Q is a naturally occurring polymorphism and affects a non-conserved position. L188Q affects a strictly conserved position and is found in association with familial interstitial lung disease [11]. Also, mutant proSP-C L188Q does not function as a chaperone for unfolded SP-C [8]. The linker region also has disease-related substitutions. E66L is associated with abnormal targeting to early endosomes and likely toxic gain of function [12], and affects a strictly conserved position. I73T causes abnormal trafficking and accumulation of aberrantly processed proSP-C within alveoli [12]. Orthologues hold isoleucine, methionine and leucine; however, positions 71-72 are strictly conserved, suggesting the importance of this segment. Notably, protein sorting predictions [13][14][15][16] are unchanged following the substitution, and thus disagree with experimental results. In ITM2B, two stop codon disruptions associated with dementia yield amyloidogenic proteins elongated by 11 residues: a duplication of 10 nucleotides between the penultimate and final translated codons in FDD [17], and a single base substitution in FBD [18].
In the BRICHOS region of GKN1, E104T is associated with breast cancer [19]; this position is conserved as lysine in all other species (except asparagine in cow, and glutamine in mouse and rat).

Methods

Sequences were collected using HMMER [20], both with the BRICHOS model from PfamA [21] and a custom HMMER model with equal specificity and slightly higher sensitivity. Partial sequences were manually removed. MSAs were made using dialign-t [22] and mafft L-INS-i [23]. Neighbour-joining dendrograms were built using ClustalX [24]. Transmembrane topology was predicted using Phobius [25] and TMHMM [26]. Secondary structure elements were predicted using Prof [27], PredictProtein [28] and Psipred [29]. DISOPRED2 was used for native disorder prediction [30]. Due to its small size, group B was excluded from quantitative conservation comparisons.

Figure 5. Multiple sequence alignment of the C-terminal region of group C. Asterisks denote positions with at most one divergent residue. Sequence labels follow the same format as in Fig. 1.

Conservation scoring

The cscore is similar to the ClustalX qscore (see source code), being a diminishing function of the average Euclidean distance to the centroid of the substitution score vectors for the symbols in the MSA. However, this algorithm uses a linear distance-to-score transform and penalises partially gapped positions less severely than does the ClustalX variant. In the cscore algorithm, the centroid C_i is calculated as the mean of the score vectors of the symbols at position i that are described by S:

C_i = (1 / (N − N_u)) Σ_{j : M_{i,j} ∈ σ} S_{M_{i,j}},

where N denotes the number of sequences, M_{i,j} the symbol in sequence j at position i, S_x the score vector for residue type x, σ the set of n symbols described by S, and N_u the number of symbols in the position that are not described by S. Thus, unlike ClustalX, gaps and other symbols not in σ do not contribute to the placement of the centroid. Rather, when calculating the average Euclidean distance d_i to the centroid, these symbols are assigned a penalty distance d_λ, where d_λ is half the maximum distance between any two vectors in S. The transform from distance to cscore c_i is not exponential as in ClustalX, but rather a partially linear function of d_i (equation 3), in which d_u is defined so that c_i = 0 for positions where only one residue is in σ. Consequently, d_i can be greater than d_λ in exceptional cases (e.g. fully gapped positions), and the nonlinearity in equation 3 will assign c_i = 0 to such positions.
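A compact Python sketch of a cscore-like calculation is given below. The centroid follows the definition above; the exact penalty distance for out-of-σ symbols and the linear distance-to-score transform are assumptions (the printed equations 2 and 3 are not reproduced here), so this is an illustration of the scheme rather than the authors' implementation:

from itertools import combinations
import numpy as np

def cscore(column, S):
    # column: symbols of one MSA position; S: residue type -> score vector.
    d_lambda = max(np.linalg.norm(u - v)
                   for u, v in combinations(S.values(), 2)) / 2.0
    in_sigma = [S[x] for x in column if x in S]
    n_u = len(column) - len(in_sigma)       # gaps and other symbols
    if len(set(x for x in column if x in S)) <= 1:
        return 0.0                          # at most one residue type in sigma
    centroid = np.mean(in_sigma, axis=0)    # out-of-sigma symbols excluded
    dists = [np.linalg.norm(v - centroid) for v in in_sigma]
    dists += [d_lambda] * n_u               # assumed penalty distance
    d_i = float(np.mean(dists))
    return max(0.0, 1.0 - d_i / d_lambda)   # assumed linear transform

Averaged over all positions of a family MSA, such per-position scores would correspond to the kind of family cscores quoted in the tables above.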
Conclusions

We have characterised the BRICHOS superfamily and its four regions with distinct properties. We find large variation in conservation in both regions and families, which implies differences in functional constraints. Secondary structure elements are seemingly well conserved even in regions with low residue conservation. This, coupled with the apparent low degree of predicted native disorder, indicates that tertiary structure may be similarly conserved. We show that most of the known disease-related mutations are in highly conserved positions, and that in two cases related to proSP-C and RDS, it is the substitution from the atypical human asparagines to the otherwise strictly conserved threonine and serine that is associated with disease. We have identified three novel BRICHOS families: group A, which may be ancestral to the ITM2 families; group B, which is a close relative to the GKN families; and group C, which appears to be a truly novel, disjoint BRICHOS family. The C-terminal region of group C is unique to this family, with nearly identical sequences in all species ranging from fish to man, indicating critical functional or structural properties.
2016-10-09T20:37:22.850Z
2009-09-11T00:00:00.000
{ "year": 2009, "sha1": "671f8e6b0b2f60c7db1ccec40e07cf8f2c5c2777", "oa_license": "CCBY", "oa_url": "https://bmcresnotes.biomedcentral.com/track/pdf/10.1186/1756-0500-2-180", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "04f4152979f409224083a02d7e3c57d7d4c8ddc9", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
245575756
pes2o/s2orc
v3-fos-license
Energy Consumption Delay Evaluation of NB-IoT Terminal Based on Markov Model

The energy consumption of Internet of Things terminals has attracted much attention in the study of the smart Internet of Things. How to simulate the energy consumption process of the terminal at the theoretical level, so as to analyze the energy consumption and delay of the terminal, is an important issue. In this paper, taking the power monitoring terminal as an example, a Markov model is established for the Narrow-Band Internet of Things (NB-IoT) terminal with periodic automatic reporting. The working states of the terminal include PSM (Power Saving Mode), random access (RACH), data transmission and reception (Tx/Rx), short eDRX (Extended Discontinuous Reception), long eDRX and terminal disconnection (ERROR). According to the proposed model, the effects of network quality, the maximum possible number of RACH requests (Rmax) and the numbers of data retransmissions (N1, N2) on terminal energy consumption and delay are analyzed. The numerical results show that network quality, the maximum number of random access attempts and the maximum number of data retransmissions directly affect the energy consumption and service quality of the terminal. Reasonable configuration of the above indicators can effectively improve the service life of the terminal and meet the customer's requirements for terminal service quality under the condition of maximum power saving. The model provides a reference for energy consumption and delay optimization of NB-IoT terminals.

Keywords: NB-IoT · Markov · PSM · Energy Consumption Analysis · Time Delay Analysis

Introduction

In the mobile communication technology industry, research on terminal energy consumption optimization has never stopped. With the rapid development of Internet of Things communication technology in recent years, intelligent devices emerge endlessly. In order to make terminal equipment work for a longer time, the energy consumption requirements of mobile communication are more and more stringent [1]. The general objectives of NB-IoT include supporting large-scale connectivity, enhancing coverage, reducing cost and complexity, ultra-low power consumption and flexible delay characteristics. In the power monitoring scenario, most power monitoring terminals are powered by batteries, and the number of terminals is huge, the deployment range is wide, and the deployment areas are diverse [2], which makes the cost of replacing terminal batteries very high. Therefore, in order to effectively implement the technology, the energy consumption of terminal equipment must be kept to a minimum. To reduce the energy consumption of the terminal, the communication mechanism can be optimized to remove unnecessary energy consumption in the communication process, so as to reduce the overall energy consumption of the terminal and prolong its service life. Therefore, NB-IoT is optimized based on the discontinuous reception mechanism of the LTE system and introduces eDRX and PSM [3]. The performance of DRX is evaluated in [4][5][6], where it is found that the DRX mechanism can effectively reduce terminal energy consumption; in [7][8][9][10], the authors conducted modeling and analysis of the DRX mechanism. Tirronen et al. [11] discuss how DRX works in LTE and how it affects different machine-to-machine (M2M) traffic scenarios. Ramazanali et al. [12] establish a Markov chain model of the LTE/LTE-A DRX mechanism.
Their model covers both long and short DRX cycles, and their analysis shows that maximum power saving can be achieved while the wake-up delay requirements are met. Jin et al. [13] divide the time period of the steady-state power-saving operation into several independent parts, analyze the power-saving operation in each part, and thereafter combine the results into an aggregate result. In [14], a Markov model with the terminal working state as the state variable is established for the NB-IoT extended discontinuous reception mechanism, and the corresponding power consumption and delay models are given. Although the power consumption of the terminal is reduced, there are also potential delays. In order to optimize the DRX mechanism, much mechanism innovation has been carried out. In [15][16][17][18][19][20], the authors studied mechanism innovations for DRX. However, their methods mainly consider the interplay between QoS and DRX parameters, whereas in Internet of Things devices the control of energy consumption often ranks higher than the QoS requirements. In [21], an energy-saving resource and sleep planning scheme is proposed for 5G networks. In [22], improvements of the current DRX and paging mechanisms of LTE/LTE-A are considered to achieve an efficient and energy-saving IoT. In [23,24], a flexible discontinuous reception scheme is proposed to minimize the wake-up delay. In [25], the authors proposed and analyzed a model of battery power consumption in downlink data reception of narrowband Internet of Things devices. The above literature evaluates the impact of the DRX mechanism on terminal energy consumption and time delay, but does not consider whether the terminal is disconnected or not. In this paper, a Markov model is established for the NB-IoT terminal by adding a terminal disconnection state while refining the working states of the terminal. The main contributions of this article are summarized as follows:

- In order to describe the process of terminal energy consumption more accurately, a Markov model is established for the NB-IoT terminal with periodic automatic reporting. The working states of the terminal include PSM, RACH, Tx/Rx, short eDRX, long eDRX and ERROR.

- According to the proposed Markov model, the energy consumption and time delay analysis model is derived. In addition, the influence of network quality, the maximum possible number of RACH requests and the number of Tx/Rx attempts on terminal energy consumption and delay are analyzed in this paper. The numerical results show that network quality, the maximum number of random access attempts and the maximum number of data retransmissions directly affect the energy consumption and service quality of the terminal. Reasonable configuration of the above indicators can effectively improve the service life of the terminal and meet the customer's requirements for terminal service quality under the condition of maximum power saving.

This paper is organized as follows. Section II introduces the Markov model and each state of the terminal. Section III provides the theoretical analysis of terminal energy consumption and delay. Section IV presents the numerical analysis results of the above models. Section V concludes our work.

NB-IoT terminal energy consumption model

The scenario envisaged in this paper is that the NB-IoT terminal periodically sends uplink data and sleeps the rest of the time. Access connections are made periodically.
As mentioned above, the working states of an NB-IoT terminal can be divided into six types: PSM, random access, data receiving and sending, short eDRX, long eDRX and terminal disconnection. The states, shown in Fig. 1, are described as follows (PSM, RACH, Tx/Rx, short eDRX, long eDRX, and ERROR are represented by S1, S2, S3, S4, S5 and S6, respectively).

S1: This state indicates that the device has just been powered on or has transferred here after successful data reporting; after entering S1, the report cycle timer T1 is started. S2 (RACH) is entered when an abnormal condition occurs or the reporting cycle arrives.

S2: In this state, random access is performed. After entering S2, if the access is unsuccessful for Rmax consecutive times, it is judged that the device has suffered a fatal error such as network disconnection and the device enters state S6; otherwise it jumps to state S3.

S3: This state covers the reporting and receiving of data; after entering S3, the data transceiver timer T3 is started. If the data are reported successfully, the terminal jumps to S1. If the terminal still has not received the ACK when the T3 timer expires, the data upload has failed; if the number of failures is less than or equal to N1, the terminal jumps to S4, otherwise it jumps to S5.

S4: This state is the eDRX short cycle. After entering S4, the short cycle timer T4 is started. When T4 expires, the terminal jumps to S1 if it has received the ACK, otherwise it jumps to S3.

S5: This state is the eDRX long cycle. After entering S5, the long cycle timer T5 is started. When T5 expires and the terminal has received an ACK, the data were sent successfully and the terminal jumps to S1; otherwise the data transmission has failed. If the number of failures is greater than or equal to N2, the device is judged to have a fatal error such as network disconnection and jumps to state S6, otherwise it jumps to S3.

S6: This state is proposed for the first time in this paper. In practical applications, NB-IoT terminals are often disconnected due to a variety of unpredictable reasons, including network signals, hardware problems, the external environment, and software systems. This state indicates that the device has suffered a fatal error (such as device disconnection). After the device restarts, it jumps to state S1.

Suppose that the probability of each random access failure is p_r, the average backoff time is T_r, and the feedback information time follows the exponential distribution with parameter λ_r. Suppose the probability of data transmission failure is p_t and the average transmission time is T_t; after each successful data transmission, the response time follows the exponential distribution with parameter λ_t. Let p_ij be the state transition probability from state S_i to state S_j. The transition probability matrix is then given in equ.1. Since state 1 is awakened after an abnormal event or the end of sleep, and must enter state 2 after being awakened, p_12 = 1. p_26 is the probability of going from state 2 to state 6 after Rmax consecutive failures, so p_26 = p_r^Rmax. The condition for going from state 3 to state 4 is that the number of consecutive failures in state 3 is less than N1; as soon as a transmission succeeds, this state is no longer entered. p_61 means that the system needs to be restarted in case of a major error; since the transition from state 6 to state 1 is an inevitable event, p_61 = 1. Suppose that the steady-state probability of each state is [q_1 q_2 q_3 q_4 q_5 q_6]. According to the steady-state conditions q_j = Σ_i q_i · p_ij (equ.2) and Σ_i q_i = 1 (equ.3), the steady-state probability of each state can be obtained by combining equ.2 and equ.3.
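Since the printed matrix of equ.1 is not reproduced above, the following Python sketch builds a plausible version of it and solves equ.2 and equ.3 numerically. Only p_12 = 1, p_26 = p_r^Rmax and p_61 = 1 are stated explicitly in the text; the remaining entries (marked as assumed) are one consistent way to realize the S3/S4/S5 retry rules described above:

import numpy as np

def transition_matrix(p_r, p_t, Rmax, N1, N2):
    # States S1..S6 map to indices 0..5.
    p26 = p_r ** Rmax                    # Rmax consecutive RACH failures
    P = np.zeros((6, 6))
    P[0, 1] = 1.0                        # p12 = 1 (wake-up always accesses)
    P[1, 2] = 1.0 - p26                  # access succeeds within Rmax tries
    P[1, 5] = p26                        # p26 = p_r^Rmax
    P[2, 0] = 1.0 - p_t                  # ACK received, report succeeds
    P[2, 3] = p_t * (1.0 - p_t ** N1)    # assumed: failure count <= N1
    P[2, 4] = p_t * p_t ** N1            # assumed: failure count > N1
    P[3, 0] = 1.0 - p_t                  # assumed: ACK arrives during short eDRX
    P[3, 2] = p_t                        # retry after the short cycle
    P[4, 0] = 1.0 - p_t                  # assumed: ACK arrives during long eDRX
    P[4, 2] = p_t * (1.0 - p_t ** (N2 - N1))  # assumed retry split
    P[4, 5] = p_t * p_t ** (N2 - N1)     # failures reach N2 -> fatal error
    P[5, 0] = 1.0                        # p61 = 1 (restart, then report again)
    return P

def steady_state(P):
    # Solve q P = q (equ.2) together with sum(q) = 1 (equ.3).
    A = np.vstack([P.T - np.eye(6), np.ones(6)])
    b = np.zeros(7); b[-1] = 1.0
    q, *_ = np.linalg.lstsq(A, b, rcond=None)
    return q

print(steady_state(transition_matrix(p_r=0.3, p_t=0.5, Rmax=5, N1=2, N2=4)))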
Energy consumption and delay analysis of NB-IoT terminal

Let E_p and D_p be the total energy consumption and total delay of the system in the business process, and let E_p(k) and D_p(k) be the average energy consumption and average delay in state S_k, for k ∈ {1, 2, 3, 4, 5, 6}. Then E_p and D_p are given by equ.4 and equ.5:

E_p = Σ_{k=1}^{6} E_p(k), (4)
D_p = Σ_{k=1}^{6} D_p(k). (5)

Energy consumption analysis

For state 1, since the research is oriented to one session process, the starting point is the expiration of the data reporting cycle timer or the triggering of an abnormal event, after which the terminal immediately goes to state 2. Therefore, the time spent in state 1 can be ignored, and E_p(1) is given in equ.6. For state 2, it is assumed that the energy consumed by each random access process is E_RACH. The average number of random access failures is R̄ = Σ_{j=0}^{Rmax} j·(p_r)^j, so that E_p(2) is given in equ.7. For state 3, it is assumed that the energy consumed in each data sending and receiving process is E_T. In this state, the average number of data transmissions is N̄ = Σ_{j=0}^{N2} j·(p_t)^j, so that E_p(3) is given in equ.8. For state 4, it is assumed that the average power is W_4 and the sleep time is T_4. Since state 3 determines the average number of data transfers, the average number of times this state is entered is N_4 = P(3,4)·N̄, so that E_p(4) is given in equ.9:

E_p(4) = W_4 · T_4 · N_4. (9)

For state 5, it is assumed that the average power is W_5 = W_4 and the sleep time is T_5. Since state 3 determines the average number of data transfers, the average number of times this state is entered is N_5 = P(3,5)·N̄, so that E_p(5) is given in equ.10. For state 6, the system is restarted, and the energy consumption required to restart the system is assumed to be E_r. After the system restarts, it enters the PSM, and in this case the data reporting process is restarted immediately, so that E_p(6) is given in equ.11.

Delay analysis

For state 1, the PSM cycle is assumed to be T_p. Since the research is oriented to one session process, the terminal jumps to state 2 immediately after the end of the data reporting cycle or the triggering of an abnormal event; therefore, the time spent in state 1 can be ignored, and D_p(1) is given in equ.12. For state 2, since the average backoff time in each random access process is T_r and the feedback information time obeys the exponential distribution with parameter λ_r, D_p(2) is given in equ.13, where the first half represents the average delay of random access success and the second half represents the average delay of random access failure. For state 3, the average transmission time of each data transmission is T_t. After each successful data transmission, the response time follows the exponential distribution with parameter λ_t, so that D_p(3) is given in equ.14, where the first half represents the average delay of successful data transmission and the second half represents the average delay of data transmission failure. For state 4, the sleep time is set as T_4, and the response time for each successful data transmission follows the exponential distribution with parameter λ_t, so that D_p(4) is given in equ.15.
Delay analysis

For state 1, the PSM cycle is assumed to be T_p. Since the analysis considers a single session, the terminal jumps to state 2 immediately after the data-reporting cycle ends or an abnormal event is triggered; the time spent in state 1 can therefore be ignored, so D_p(1) is given by equ. 12:

D_p(1) = 0   (12)

For state 2, the average backoff time of each random access attempt is T_r and the feedback time follows an exponential distribution with parameter λ_r; D_p(2) is given by equ. 13, in which the first half represents the average delay of a successful random access and the second half the average delay of a failed one.

For state 3, the average transmission time of each data transmission is T_t and, after each successful transmission, the response time follows an exponential distribution with parameter λ_t; D_p(3) is given by equ. 14, in which the first half represents the average delay of a successful data transmission and the second half the average delay of a failed one.

For state 4, the sleep time is set to T_4 and the response time for each successful data transmission follows an exponential distribution with parameter λ_t; D_p(4) is given by equ. 15, in which the first half represents the average delay when the ACK response is received before sleep ends and the second half the average delay when no ACK has been received once sleep ends.

For state 5, the sleep time is set to T_5 and the response time for each successful data transmission follows an exponential distribution with parameter λ_t; D_p(5) is given by equ. 16, with the same interpretation of its two halves as equ. 15.

For state 6, the terminal must restart the system, so the average delay is given by equ. 17.

Numerical analysis of energy consumption and delay of NB-IoT terminal

Consider the energy consumption of a single NB-IoT terminal: to maximize the service life of the terminal, the energy consumption over a fixed time should be as small as possible. Assuming the energy consumption over the time period T_L is E, E is given by equ. 18, where p_suc is the success rate of a single service, expressed as shown in equ. 19. Consider next the delay for an NB-IoT terminal to complete a single service successfully. If the data are sent successfully, the session ends; because the success rate of a single service is p_suc, a single service success takes on average 1/p_suc sessions. Supposing that the delay of a successful single-service transmission is D, D is given by equ. 20, where T_6 represents the time required to restart the system.

According to references [2,14], the reference values of the parameters required for the simulation are shown in Table 1. Based on the above assumptions, a simulation analysis is carried out to verify the effectiveness of the model. The impact of Rmax on terminal energy consumption, delay and service success rate is analyzed. Since Rmax is directly affected by the degree of network congestion, p_r ∈ {0.1, 0.3, 0.5, 0.7, 0.8, 0.9} is used to represent the degree of network congestion. To ensure that the analysis results are driven mainly by Rmax, p_t = 0.5 is selected as the network environment parameter of the simulation. The other parameters remain unchanged, and the simulation results are shown in Fig. 2.

[Fig. 2: The influence of Rmax on the terminal over the time T_L; panel (c) shows the relationship between business success rate and Rmax.]

As shown in Fig. 2, when the network congestion is not serious, the added disconnect-and-restart state means that a too-small Rmax may lead to unsuccessful access and continuous restarts, in which case the energy consumption is very high. As Rmax increases, the access success rate rises and the energy consumption decreases. Once Rmax reaches a suitable value, the energy consumption becomes stable: even if Rmax is increased further, energy consumption does not grow, because access succeeds without reaching the Rmax limit. However, if the network congestion degree is above p_r = 0.7, then, as shown in Fig. 2, the service success rate gradually increases with Rmax, but the terminal energy consumption also starts to rise. Because the service success rate increases, the number of system restarts keeps falling, and the delay is correspondingly reduced.
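The Rmax sweep just described can also be checked with a tiny Monte Carlo model of one session. Timers and eDRX durations are omitted, so the sketch only reproduces the success/restart logic, and all parameter values are illustrative.

```python
import random

def session_succeeds(p_r, p_t, Rmax, N2):
    # One reporting session: random access (state 2), then data
    # transmission with up to N2 cumulative failures (states 3-5).
    for _ in range(Rmax):
        if random.random() > p_r:      # access attempt succeeds
            break
    else:
        return False                   # Rmax consecutive failures -> S6
    fails = 0
    while True:
        if random.random() > p_t:      # ACK received -> back to PSM
            return True
        fails += 1
        if fails >= N2:
            return False               # fatal error -> S6 (restart)

trials = 10_000
for Rmax in (1, 2, 3, 5, 8):
    ok = sum(session_succeeds(0.7, 0.5, Rmax, 4) for _ in range(trials))
    print(Rmax, ok / trials)
```

The printed success rate rises with Rmax and then saturates, mirroring the stabilization of the energy consumption curves in Fig. 2.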
These results also confirm that the degree of network congestion is an important factor in determining terminal energy consumption and delay.

Fig. 3 shows the relationship of average terminal power consumption, average delay and service success rate to network transmission quality over time. Assume that p_t = 0 represents the theoretically ideal network transmission quality and p_t = 1 represents an unavailable network; at the same time, assume that the network congestion degree is good, with a fixed threshold Rmax of random access attempts. As can be seen from the figure, as the network transmission quality deteriorates, the average power consumption and average delay of the terminal increase. When p_t reaches 0.7 or above, the average power consumption and average delay rise ever more steeply, while the service success rate falls ever more steeply; when p_t exceeds 0.8, the average power consumption and average delay increase exponentially. The curves in the figure are consistent with the phenomena observed in practice, which indirectly supports the correctness of the model.

The following describes the relationship of terminal energy consumption, delay and service success rate to the data retransmission thresholds N_1 and N_2 over the time T_L. N_1 is the threshold for entering long-cycle sleep: when the cumulative number of data transmission failures reaches N_1, the terminal is transferred to eDRX long-cycle sleep; otherwise it enters eDRX short-cycle sleep. N_2 is the threshold for a system restart: when the cumulative number of data transmission failures reaches N_2, the terminal enters state 6. N_2 − N_1 therefore represents the number of failures spent in eDRX long-cycle sleep. In Fig. 4, (p_r, p_t, Rmax) = (0.8, 0.8, 5) are taken as the basic simulation parameters. As shown in Fig. 4, when the network quality is poor, terminal energy consumption and delay initially decline as the number of retransmissions increases, but the success rate is very low. As the number of retransmissions increases further, the success rate rises slightly but remains very low, while terminal energy consumption and average delay rise. This is due to the terminal constantly retrying, which is consistent with the actual situation.

The analyses of Sections 2 and 3 show that adding the terminal-offline state makes the model describe the working state of the terminal in more detail. In practical applications, when Rmax, N_1 and N_2 are configured, the rationality of the parameter configuration can be verified with the model, which provides an important reference for the impact on terminal energy consumption and delay in different scenarios. In fact, relatively few studies in NB-IoT have used a Markov model to analyze terminal energy consumption and delay; the model proposed in this paper is closer to the actual application state, and a better parameter combination can be obtained through simulation analysis. For the comparative experiment, the model proposed in this paper is compared with the model without the S6 state as well as with a general model in which many eDRX parameters are idealized, and the energy consumption and delay under different network qualities are compared.
It can be seen that, under different network performance levels, the model proposed in this paper has an advantage in energy consumption, since the terminal hibernates when a connection cannot be made; its delay performance, however, deteriorates, because terminal disconnection inevitably increases the delay. The six-state transition model proposed in this paper describes the terminal's behavior, including the sleep mechanism, more completely and performs better than the traditional eDRX model.

Conclusion

To analyze the energy consumption and delay of an NB-IoT terminal, a Markov chain is introduced to model and analyze the terminal's working states. Taking a power monitoring terminal as an example, PSM, RACH, Tx/Rx, short eDRX, long eDRX and ERROR are taken as the state variables of the Markov model, and the influence of network quality, the maximum number of RACH requests and the number of Tx/Rx attempts on terminal energy consumption and delay is analyzed. The numerical results show that the proposed Markov model is accurate and effective for NB-IoT terminal energy consumption and delay analysis. The simulation results also show that, when the random access success rate is low, an appropriate increase in the maximum number of RACH requests helps to reduce the energy consumption and delay of the terminal, which provides an important reference for the later optimization of terminal energy consumption and delay.
2021-12-31T16:10:30.587Z
2021-12-29T00:00:00.000
{ "year": 2021, "sha1": "896d1101670aca4618f73639bbd4e473e1c0c077", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-1036939/latest.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "afa30b16c8eba37a02c90d0b58722730e43a3b93", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
218600136
pes2o/s2orc
v3-fos-license
Thioporidiols A and B: Two New Sulfur Compounds Discovered by Molybdenum-Catalyzed Oxidation Screening from Trichoderma polypori FKI-7382

Two new sulfur compounds, designated thioporidiol A (1) and B (2), were discovered by the MoS-screening program from a culture broth of Trichoderma polypori FKI-7382. The structures of 1 and 2 were determined as C13 lipid structures with an N-acetylcysteine moiety. The relative configuration at the C-5 and C-6 positions of 1 was determined via the α-methoxy-α-phenylacetic acid diester derivatives, and the absolute configuration of the N-acetylcysteine moiety was determined by advanced Marfey's analysis. Compounds 1 and 2 were evaluated for anti-microbial, cytotoxic and anti-malarial activities. Compound 2 exhibited anti-microbial activity against Candida albicans ATCC 64548.

Introduction

Natural products have been utilized as drugs and as agricultural and chemical reagents. Newman and Cragg reported that 64% of the metabolites approved as new drugs between 1981 and 2010 were directly or indirectly associated with natural products [1]. Furthermore, according to the KEGG MEDICUS (https://www.kegg.jp/kegg/medicus/) database, which presents information on commercially available medicines [2], 87% of low molecular-weight (0.1-1 kDa) medicines contain nitrogen atoms and 25% contain sulfur atoms. Therefore, searching natural resources for nitrogen- and sulfur-containing metabolites is expected to yield many unique medicines. The search for new medicines is often focused on screening for nitrogen compounds, since the methods used to screen for them are quite simple. For example, Dragendorff's reaction can identify tertiary or quaternary amines, and compounds with odd molecular weights can be presumed to contain at least one nitrogen atom (the nitrogen rule) [3,4]. However, there have been no reports of simple methods for finding sulfur compounds. Therefore, we established MoS-screening, a method for screening sulfur compounds using a combination of molybdenum-catalyzed oxidation and liquid chromatography-mass spectrometry (LC/MS) [5]. The MoS-screening approach allows us to identify compounds containing a sulfide moiety from microbial broths. If sulfur compounds are present in a broth, they are oxygenated by the Mo-catalyzed oxidation, which is generally dominant over olefin epoxidation and over the oxidation of primary and secondary alcohols. In other words, the sulfur compounds in a microbial broth can be identified through their sulfinyl and/or sulfonyl products using LC/MS analysis: when the original broth is compared with the oxidized broth, the oxidized peak is easily identified. Identification of the corresponding oxidation products is relatively straightforward, because the ultraviolet (UV) spectrum is unchanged by oxidation. In this study, to discover new sulfur compounds from microorganisms, we employed MoS-screening in combination with in-house databases and natural product databases such as the Dictionary of Natural Products (http://dnp.chemnetbase.com/). During our recent MoS-screening, a fungal metabolite with a mass-to-charge ratio (m/z) of 372.1835 [M + H]+ was deduced to be a new sulfur compound. The producing strain FKI-7382 was isolated from a sediment sample collected at Omuta city, Fukuoka Prefecture, Japan, and identified as Trichoderma polypori by DNA barcoding.
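On the data side, the plate comparison described above reduces to matching each control-plate ion against the oxidized plate shifted by one or two oxygen masses. The sketch below is an illustration of that matching step, not the authors' software; the peak-list format and the m/z tolerance are assumptions.

```python
O_MASS = 15.9949   # monoisotopic mass of oxygen (Da)
TOL = 0.005        # assumed m/z matching tolerance (Da)

def sulfur_candidates(control_peaks, oxidized_peaks):
    # Peaks are (retention_time_min, mz) tuples; a control ion is flagged
    # when its mass reappears in the oxidized plate shifted by +O (sulfinyl)
    # or +2O (sulfonyl), as expected for Mo-catalyzed sulfide oxidation.
    hits = []
    for rt, mz in control_peaks:
        for n_ox, label in ((1, "sulfinyl"), (2, "sulfonyl")):
            target = mz + n_ox * O_MASS
            if any(abs(mz2 - target) < TOL for _, mz2 in oxidized_peaks):
                hits.append((rt, mz, label))
    return hits

# The thioporidiol A ions reported below: [M+H]+ 372.1835 in the control
# broth and the sulfonyl product at 404.1745 in the oxidized broth.
print(sulfur_candidates([(8.82, 372.1835)], [(7.95, 404.1745)]))
# -> [(8.82, 372.1835, 'sulfonyl')]   (372.1835 + 2*15.9949 = 404.1733)
```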
As a result of purification guided by LC/MS analyses of the culture broth of the fungal strain FKI-7382, a new sulfur compound, designated thioporidiol A (1), was isolated together with its analog, thioporidiol B (2). Here, we report the fermentation, isolation, structure elucidation and biological activity of 1 and 2.

MoS-Screening of the Culture Broth of the Fungal Strain FKI-7382

The MoS-screening program was applied to screen microbial broths for new sulfur compounds. Microbial broths were prepared in 50% aqueous ethanol and dispensed across two 96-well plates. (NH4)6Mo7O24·4H2O and 30% H2O2 were added to the wells of one plate, while, as a control, only H2O was added to the wells of the other plate. After 6 h of shaking, all of the wells were analyzed by LC/MS and the data were compared between the two plates to identify any sulfur compounds. MoS-screening of the broth cultures from 150 different fungal strains yielded a single potential sulfur compound. The candidate compound (1) was produced by the fungal strain Trichoderma polypori FKI-7382 and showed a retention time of 8.82 min and a UV absorbance peak at 232 nm (Figure 1). High resolution electrospray ionization mass spectrometry (HRESIMS) data showed an [M+H]+ ion at m/z 372.1835, indicating a molecular formula of C18H29NO5S (calculated for m/z 372.1836). The red chromatogram in Figure 1 further indicated that the candidate compound was oxygenated to a sulfonyl product (7.95 min, m/z = 404.1745 [M+H]+). Comparison of the LC/MS data and UV spectrum of the candidate compound with those of the known natural products contained in the Dictionary of Natural Products database (http://dnp.chemnetbase.com/) confirmed that the candidate was a new sulfur compound.

Structure Elucidation of 1 and 2

Compound 1 was isolated as a colorless amorphous solid by silica gel and octadecylsilyl (ODS) column chromatography from a culture broth of the fungal strain FKI-7382. The 1H nuclear magnetic resonance (NMR) and heteronuclear multiple quantum correlation (HMQC) spectra in CD3OD (Table 1) indicated six olefinic protons, two oxymethine protons, five methylene protons, a methine proton adjacent to a heteroatom, an acetyl proton and a methyl proton. The 13C NMR spectrum indicated the presence of two carbonyl carbons, six unsaturated carbons, two oxygenated carbons, three carbons seemingly adjacent to a heteroatom and two methyl carbons. The gross structure of 1 was deduced from detailed analyses of 2D NMR data, including 1H-1H correlation spectroscopy (COSY), HMQC and heteronuclear multiple bond correlation (HMBC) spectra in CD3OD (Figures S1-S6). The 1H-1H COSY spectra revealed the presence of five partial structures, a-e, as shown in Figure 2A. The connectivity of partial structures a and b was determined by the HMBC cross-peaks of H-1 to C-3, H-2 to C-4, H-3 to C-1 and H-4 to C-2. The connectivity of partial structures c and d was determined by the HMBC cross-peaks of H-9 to C-11, H-10 to C-12, H-11 to C-9 and H-12 to C-10. The HMBC cross-peaks of H-6 to C-7 and H-7 to C-6 and their chemical shifts indicated the connectivity of C-6 and C-7, which constitute the 1,2-diol. For the partial structure e, the HMBC cross-peaks of H-1´ to C-3´, H-2´ to C-3´ and 5´-Me to C-4´ and their chemical shifts suggested an N-acetylcysteine moiety. Finally, the connectivity of C-13 and C-1´ via a sulfur atom was determined by the HMBC cross-peaks of H-13 to C-1´ and H-1´ to C-13.
The geometry of the three olefins was determined by detailed analysis of 1H-NMR homo-decoupling experiments (Table 1) and nuclear Overhauser effect (NOE) data. Thus, the planar structure of 1 was determined as shown in Figure 2A.

Figure 1. Comparison of total wavelength chromatograms (TWCs) obtained from liquid chromatography-mass spectrometry (LC/MS) analysis of the oxidized and non-oxidized culture broth of the strain FKI-7382.

Compound 1 has three chiral carbons at C-6, C-7 and C-2´. Freire et al. reported that the absolute or relative configuration of secondary/secondary (sec,sec)-1,2-diols can be determined by comparing the NMR spectra of the corresponding α-methoxy-α-phenylacetic acid (MPA) esters [6]. If the relative stereochemistry of the diol is anti, its absolute configuration can be determined directly from the differences in 1H-NMR chemical shifts between room temperature and low temperature (∆δT1T2) for the substituents of the diols. Meanwhile, if the diol is syn, assignment of its absolute configuration requires the preparation of both the bis-(R)- and bis-(S)-MPA esters, comparison of their room-temperature 1H-NMR spectra and calculation of the ∆δRS signs for the methines of the α-protons of the diols. The preparation of the MPA diester of 1 is described below. To protect the C-2´ carboxylic acid, compound 1 was derivatized with TMS-diazomethane to the methyl ester 3 (Figure 3). The sec,sec-1,2-diols at C-6 and C-7 of 3 were derivatized to the (R)-MPA diester 4. After separation of the crude product by preparative thin layer chromatography (TLC), final purification was conducted by preparative high-performance liquid chromatography (HPLC) to afford 4. Compound 4 was assigned by 1D and 2D NMR data. 1H NMR data measured at 213 K and analysis of the ∆δT1T2 values confirmed the anti stereochemistry for C-6 and C-7 (Figure 3, Figures S13 and S14). Finally, the absolute configuration of the N-acetylcysteine moiety of 1 was determined by advanced Marfey's analysis of the hydrolysate of 1 after desulfurization with Raney nickel as a catalyst [7,8]. After treatment of 1 with Raney nickel for desulfurization, the product was hydrolyzed with 6 M HCl. The advanced Marfey's procedure on the hydrolysate identified the stereochemistry of the alanine as L-alanine (Figure 4).
Compound 2 was isolated as a colorless amorphous solid and determined to have the molecular formula C18H29NO5S by HRESIMS (m/z 372.1837 [M+H]+, calculated for 372.1836). The 1H NMR, 13C NMR and UV spectra were similar to those of 1, suggesting that 2 is an analog of 1. The gross structure of 2 was deduced from detailed analyses of 2D NMR data, including 1H-1H COSY, HMQC and HMBC spectra in CD3OD (Figure 2B, Figures S6-S12). These results suggested that 2 is an enantiomer of 1. Unfortunately, the stereochemistry of compound 2 at C-6, C-7 and the N-acetylcysteine moiety could not be determined due to the low yield. Compounds 1 and 2 were evaluated for several biological activities, including anti-cancer, anti-microbial and anti-malarial activities. In these assays, 1 showed no activity. However, 2 showed anti-microbial activity against Candida albicans ATCC 64548, a strain sensitive to fluconazole [9]. When the strain ATCC 64548 was treated with 30 µg of 2 on a 6 mm paper disc, an 8 mm inhibition zone was produced. These results suggest that the stereochemistry is important for the anti-microbial activity against C. albicans ATCC 64548.
General Experimental Procedures

Silica gel and octadecylsilyl (ODS) silica were purchased from Fuji Silysia Chemical (Aichi, Japan). All solvents were purchased from Kanto Chemical (Tokyo).

Fermentation of Strain FKI-7382 and Isolation of 1 and 2

Strain FKI-7382 was grown on a modified Miura's medium slant (LcA: 0.1% glycerol, 0.08% KH2PO4, 0.02% K2HPO4, 0.02% MgSO4·7H2O, 0.02% KCl, 0.2% NaNO3, 0.02% yeast extract and 1.5% agar, adjusted to pH 6.0 before sterilization). A loop of spores of the strain was inoculated into five 500 mL Erlenmeyer flasks, each containing 100 mL of seed medium (2% glucose, 0.2% yeast extract, 0.5% hipolypeptone, 0.1% KH2PO4, 0.05% MgSO4·7H2O and 0.1% agar), which were shaken at 210 rpm on a rotary shaker at 27 °C for 3 days. Twenty bags, each containing 500 g of rice (Hanamasa, Tokyo), 5 g of seaweed tea (ITO EN, Tokyo) and 200 mL of tap water, were sterilized, and a 25 mL aliquot of the seed culture was added to each. The bags were allowed to ferment for 13 days at 25 °C. The total volume of cultured rice (10 kg) was extracted with 10 L of MeOH and centrifuged to separate the cells and the supernatant.

Absolute Configuration of the N-Acetylcysteine Moiety of 1

Advanced Marfey's analysis of the acid hydrolysate of 1 desulfurized with Raney nickel was used to determine the absolute configuration of its N-acetylcysteine moiety. To a stirred solution of 1 (1.1 mg, 2.96 µmol) in MeOH (1.0 mL), Raney Ni (17.9 mg) was added at room temperature. The suspension was then placed under an H2 atmosphere at 1 atm. After stirring for 21 h at room temperature, the suspension was filtered through a filter pad and washed with MeOH (1.0 mL × 3) and H2O (1.0 mL × 3). The MeOH and H2O filtrates were concentrated in vacuo. To hydrolyze the acetyl moiety of the N-acetylcysteine, the processed H2O filtrate was dissolved in 500 µL of 6 M hydrochloric acid (HCl) and heated at 100 °C for 12 h. The product was concentrated to dryness in vacuo and the residue was dissolved in 500 µL of H2O.
Twenty microliters of 1 M NaHCO3 and 50 µL of Nα-(5-fluoro-2,4-dinitrophenyl)-D-leucinamide (D-FDLA) were added to 50 µL of the hydrolysate, followed by incubation at 37 °C for 1 h. The mixture was neutralized by the addition of 20 µL of 1 M HCl and then concentrated to dryness in vacuo. The dried residue was dissolved in 1 mL of acetonitrile and passed through a filter. Standard L- and D-alanine were derivatized in the same way. The D-FDLA derivatives of the hydrolysate and the standard amino acids were subjected to LC/MS analysis at 40 °C using the following gradient program: solvent A, H2O with 0.1% formic acid; solvent B, MeOH with 0.1% formic acid; linear gradient from 5% to 100% B over 2 to 10 min.

Anti-Microbial Activity of 1 and 2

The antimicrobial activities of 1 and 2 against six microorganisms (Bacillus subtilis ATCC 6633, Kocuria rhizophila ATCC 9341, Staphylococcus aureus ATCC 6538P, Escherichia coli NIHJ, Xanthomonas oryzae pv. oryzae KB88 and Candida albicans ATCC 64548) were evaluated using the paper disc method. Agar plates were spread with the six strains, and paper discs containing 1 or 2 (final amount: 1, 3, 10 or 30 µg/disc) were placed on them. All microorganisms except X. oryzae were incubated at 37 °C; X. oryzae was incubated at 27 °C for 48 h. After incubation, the inhibition zones were measured.
2020-05-13T13:03:51.246Z
2020-05-01T00:00:00.000
{ "year": 2020, "sha1": "1538ba22aafaaebcd06ee9727e01425f36e276ce", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/antibiotics9050236", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "71e0eb60e186fd9d63694b173fead3146da1bb82", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
115200418
pes2o/s2orc
v3-fos-license
Exercise Overrides Blunted Hypoxic Ventilatory Response in Prematurely Born Men

Purpose: Pre-term birth provokes life-long anatomical and functional respiratory system sequelae. Although a blunted hypoxic ventilatory response (HVR) is consistently observed in pre-term infants, it remains unclear whether it persists with aging and, moreover, whether it influences hypoxic exercise capacity. In addition, it remains unresolved whether the previously observed prematurity-related alterations in redox balance could contribute to HVR modulation. Methods: Twenty-one prematurely born adult males (gestational age = 29 ± 4 weeks) and 14 age-matched controls born at full term (gestational age = 39 ± 2 weeks) underwent three tests in a randomized manner: (1) a hypoxia chemo-sensitivity test to determine the resting and exercise poikilocapnic HVR, and a graded exercise test to volitional exhaustion in (2) normoxia (FiO2 = 0.21) and (3) normobaric hypoxia (FiO2 = 0.13) to compare the hypoxia-related effects on maximal aerobic power (MAP). Selected prooxidant and antioxidant markers were analyzed from venous samples obtained before and after the HVR tests. Results: Resting HVR was lower in the pre-term (0.21 ± 0.21 L·min−1·kg−1) compared to the full-term born individuals (0.47 ± 0.23 L·min−1·kg−1; p < 0.05). No differences were noted in the exercise HVR or in any of the measured oxidative stress markers before or after the HVR test. The hypoxia-related reduction of MAP was comparable between the groups. Conclusion: These findings indicate that the blunted resting HVR in prematurely born men persists into adulthood. Also, active adults born prematurely seem to tolerate hypoxic exercise well and should hence not be discouraged from engaging in physical activities in hypoxic environments. Nevertheless, the blunted resting HVR and greater desaturation observed in the pre-term born individuals warrant caution, especially during prolonged hypoxic exposures.

INTRODUCTION

An estimated 10% of infants are born prematurely each year (Purisch and Gyamfi-Bannerman, 2017). Pre-term birth and the associated medical interventions hinder lung development and can result in life-long anatomical and functional sequelae of the respiratory (and many other) systems (Lovering et al., 2014). Numerous studies demonstrate the persistence of various respiratory limitations and symptoms in pre-term born individuals during the course of maturation (McLeod et al., 1996; Palta et al., 2001; Vrijlandt et al., 2007). It has previously been shown in both human and rodent studies that perinatal hyperoxia, often associated with the treatment of premature newborns, can provoke significant alterations in cardiorespiratory control which can subsequently affect ventilatory responses in normoxic and hypoxic conditions (Bisgard et al., 2003; Bavis, 2005). In addition, hyperoxia-related carotid chemoreceptor dysfunction also seems to result in abnormal ventilatory responses in prematurely born individuals (Bates et al., 2014). While the overall consequences of these cardio-respiratory dysfunctions are not well established, the blunted hypoxic ventilatory response (HVR), consistently reported in pre-term born infants (Katz-Salamon and Lagercrantz, 1994), might have important implications. Indeed, although limited, recent evidence indicates that the reduced HVR can persist with aging (Bates et al., 2014). Recent evidence also suggests that prematurely born adults could exhibit higher oxidative stress levels (Filippone et al., 2012) as compared to their full-term born counterparts.
Indeed, as recently reviewed by Martin et al. (2018), prematurity-related increases in oxidative stress may persist into adulthood and consequently negatively influence a number of physiological processes. Given that oxidative stress levels have previously been shown to importantly modulate HVR (Pialoux et al., 2009a,b), it seems important to investigate this relationship, as it might give us mechanistic insight into HVR modulation. The above-noted respiratory and systemic sequelae may importantly influence exercise capacity as well as the ability of pre-term born individuals to adapt to altitude/hypoxic environments. The reduced overall exercise capacity of the pre-term cohort is clearly established by numerous observational and interventional trials (Rogers et al., 2005; Vrijlandt et al., 2006; Saigal et al., 2007; Lovering et al., 2013; Svedenkrans et al., 2013). While the consequences of pre-term birth are rather well characterized for normoxic exercise, the potential influence of prematurity on hypoxic exercise tolerance remains largely unclear. Very few studies to date have compared hypoxic exercise capacities between pre-term and full-term born individuals. While Farrell et al. (2015) found significantly lower absolute power outputs in pre-term vs. full-term born individuals with relatively low aerobic capacity during graded exercise in normoxia, surprisingly, no such difference was noted during hypoxic exercise. Also, Duke et al. (2014) did not find any significant differences between pre-term born individuals with low diffusion capacity and healthy full-term born individuals in the relative cardio-respiratory and pulmonary gas exchange responses to graded exercise in either normoxia or hypoxia. It is important to note that all of the above studies scrutinized the effects of hypoxia in cohorts of untrained and inactive pre-term born individuals. Accordingly, there is an obvious lack of studies investigating hypoxic exercise responses in healthy, aerobically fit and active pre-term born individuals, who are the most likely to engage in high-altitude activities. Therefore, and given the ever-increasing number of individuals who visit high-altitude regions for sports participation or recreation, it is crucial to determine, identify and subsequently reduce the hypoxia-related clinical risks in active pre-term born individuals. The purpose of this study was to determine the effects of acute hypoxia on the poikilocapnic HVR and exercise performance in healthy, active pre-term born adult males and to compare them to their age- and aerobic-capacity-matched counterparts born at full term. We hypothesized that: (i) pre-term born adults will exhibit a lower HVR both at rest and during moderate exercise, and (ii) hypoxia will induce a significantly greater reduction in maximal aerobic power (MAP) in active pre-term vs. full-term born adults. Finally, we also tested the potential differences in systemic oxidative stress between the pre-term and full-term born individuals.

Participants

The recruitment procedure for the pre-term born participants was based on the national pre-term birth register established and maintained at the Clinical Centre in Ljubljana, Slovenia. The following inclusion criteria were used for the pre-term (gestational age ≤32 weeks; gestational body mass ≤1500 g; hyperoxic treatment at birth) and full-term (gestational age ≥38 weeks; gestational body mass ≥2500 g) born individuals.
Exclusion criteria included permanent altitude residence (≥1000 m), cardiopulmonary, hematological and/or kidney disorders, regular medication use, smoking, and altitude/hypoxia exposure (≥2000 m) within the last month prior to the study. Following initial medical record screening and individual telephone interviews, twenty-one prematurely born adult males (gestational age = 29 ± 4 weeks; gestational body mass = 1264 ± 297 g; mean ± SD) and fourteen age- and aerobic-capacity-matched controls born at full term (gestational age = 39 ± 2 weeks; gestational body mass = 3671 ± 499 g) were recruited. The baseline characteristics of both groups are detailed in Table 1. The participants of both groups were recreationally active (≥2 aerobic exercise sessions per week) and were provided with an extensive written and verbal explanation of the experimental procedures as well as the potential risks involved. All participants completed health and activity questionnaires and signed a written informed consent before the study onset.

Study Overview

The study comprised a familiarization visit, during which all participants underwent medical pre-screening, baseline anthropometric measurements and a lung function assessment, and three experimental visits on separate occasions, during which the participants underwent the following tests in a randomized order: the hypoxia chemo-sensitivity test and the graded exercise tests in normoxia and hypoxia.

Anthropometry and Lung Function Test

Baseline measurements of body mass and body height were performed using a stadiometer-scale (Libela ELSI, Celje, Slovenia). The % body fat was calculated using the Jackson and Pollock (1978) equation from nine skin-fold measurement sites (triceps, subscapular, chest, suprailiac, abdominal, thigh (3 sites) and inguinal). Lung function testing was performed using a pneumotachograph (Cardiovit AT-2plus, Schiller, Baar, Switzerland) in line with established criteria (Miller et al., 2005). The device was calibrated prior to each test using a 3-L syringe. Each test was performed three consecutive times, and the highest of the three values of forced vital capacity (FVC) and forced expiratory volume in 1 s (FEV1) were recorded and subsequently used to calculate the FEV1/FVC ratio. The percentage predicted FVC and FEV1 values were calculated based on the GLI ERS Task Force equations (Quanjer et al., 2012).

Exercise Tests

On two separate occasions, the participants performed two graded exercise tests in a randomized and single-blinded manner on an electromagnetically braked cycle ergometer (Ergo Bike Premium, Daum electronics, Fürth, Germany). The protocol commenced with a rest period (5 min in normoxia during the normoxic tests; 5 min in normoxia followed by 5 min in hypoxia during the hypoxic tests), followed by a 5-min warm-up at 60 W. Thereafter, the workload was increased by 40 W every 2 min until volitional exhaustion. The participants were required to maintain a cadence ≥60 revolutions·min−1 throughout the whole test, and the test was terminated when they were unable to maintain the assigned cadence. Strong verbal encouragement from the experimental personnel was provided during the later stages of each test. The exercise tests were always performed at the same time of day under standardized environmental conditions, with participants seated throughout the whole test. During both tests, the participants breathed through an oro-nasal mask (Vmask™, Hans Rudolph, Shawnee, KS, United States) connected to a two-way low-resistance valve (2700 NRBV; Hans Rudolph Inc., Shawnee, KS, United States).
On one occasion, the valve was connected to a 200-L Douglas bag containing ambient air (normoxia (placebo); FiO2 = 0.21; PiO2 = 147 mmHg). On the other occasion, the Douglas bag contained a normobaric hypoxic gas mixture (hypoxia; FiO2 = 0.13; PiO2 = 91 mmHg). Breath-by-breath gas exchange and ventilation responses during the resting and exercise phases were measured using a calibrated metabolic cart (Quark CPET, Cosmed, Rome, Italy). Peak O2 uptake (VO2peak) was defined as the highest 60-s average of O2 uptake during the test. The same time average was used to determine the cardiorespiratory values and ratings of perceived exertion at volitional exhaustion. The resting values were derived from the 60-s averages of the last minute of the rest period during the normoxic exercise tests and of the last minute of the resting period in the hypoxic condition during the hypoxic exercise tests. MAP was calculated using the following equation, as detailed previously (Debevec et al., 2010):

MAP = W_compl + (t/120) × 40 W

where W_compl corresponds to the last completed workload and t corresponds to the number of seconds completed during the final, uncompleted workload.
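A minimal sketch of this calculation, assuming the interpolation reading given above (last completed stage plus the completed fraction of the final 40 W / 2 min stage):

```python
def maximal_aerobic_power(w_compl, t, step_w=40.0, step_s=120.0):
    # Last completed workload (W) plus the completed fraction of the
    # final, uncompleted 40 W / 2 min stage.
    return w_compl + (t / step_s) * step_w

print(maximal_aerobic_power(280, 60))  # 280 + 0.5 * 40 = 300.0 W
```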
Blood Sampling and Biochemical Analyses

Venous blood samples were obtained before and immediately after the hypoxia chemo-sensitivity test to determine the plasma markers of oxidative stress (advanced oxidation protein products (AOPP), malondialdehyde (MDA) and nitrotyrosine) and antioxidant status (superoxide dismutase (SOD), ferric-reducing antioxidant power (FRAP), glutathione peroxidase (GPX) and catalase). Five-milliliter samples were, on each occasion, obtained from the antecubital vein and collected in EDTA tubes (Vacutainer K2E, Becton Dickinson, Plymouth, United Kingdom). The venous blood was immediately centrifuged (10 min at 3500 rpm; 4 °C) and the obtained plasma frozen at −80 °C in small (400-µL) aliquots for subsequent blinded analysis performed within 6 months. An additional 3 mL of whole blood was obtained before the test and analyzed immediately using the cytochemical impedance method (Pentra 120; Horiba ABX Diagnostics, Montpellier, France; coefficient of variation (CV) <2%) to determine the baseline hemoglobin levels. The prooxidant/antioxidant biochemical analysis was performed using a TECAN Infinite 2000 plate reader (Männedorf, Switzerland). Briefly, the AOPP plasma concentration was read at 340 nm and is expressed as µmol·L−1 of chloramine-T equivalents, as previously described (Witko-Sarsat et al., 1996); the intra-assay CV was 5.4%. Malondialdehyde concentrations were determined by extracting the pink chromogen formed from the reaction between MDA and 2-thiobarbituric acid at 100 °C and measuring its absorbance at 532 nm by spectrophotometry, using 1,1,3,3-tetraethoxypropane as the standard (Ohkawa et al., 1979); the intra-assay CV was 2.2%. Concentrations of plasma nitrotyrosine, the end product of peroxynitrite (ONOO−) protein nitration, were measured as previously described (Galinanes and Matata, 2002) via a competitive ELISA assay using an anti-nitrotyrosine primary antibody produced in rabbit (Sigma-Aldrich, Saint Louis, MO, United States; 1:10,000, 1.5 h at room temperature). The fixation of the anti-rabbit IgG-HRP-conjugate secondary antibody (Invitrogen, Carlsbad, CA, United States; 1:100, 1 h at room temperature) was then read by spectrophotometry at 450 nm, with a bovine serum albumin solution at different concentrations used as the standard; the intra-assay CV was 6.8%. Plasma SOD activity was measured using the method of Beauchamp and Fridovich (1971), slightly modified by Oberley and Spitz (1984), by spectrophotometry, from the degree of inhibition of the reaction between superoxide radicals, produced by a hypoxanthine-xanthine oxidase system, and nitroblue tetrazolium; the intra-assay CV was 5.6%. FRAP was determined using the method of Benzie and Strain (1996) by measuring the ability of the plasma to reduce ferric to ferrous iron; the FRAP of plasma was calculated against an aqueous solution of known Fe2+ concentration (FeSO4·7H2O) as the standard, at a wavelength of 593 nm, by spectrophotometry at a controlled temperature (37 °C); the intra-assay CV was 2.9%. Glutathione peroxidase activity in the plasma was assessed by the modified method of Paglia and Valentine (1967), following by spectrophotometry at 340 nm the rate of oxidation of NADPH to NADP+ after the addition of glutathione reductase (GR), reduced glutathione (GSH) and NADPH, with H2O2 as the substrate; the intra-assay CV was 4.6%. Catalase activity was determined by the method of Johansson and Borg (1988) using formaldehyde as the standard and hydrogen peroxide (H2O2) as the substrate; the activity was quantified from the rate of formaldehyde formation, read by spectrophotometry, in the reaction of methanol and H2O2 with catalase as the enzyme; the intra-assay CV was 3.1%.

Statistical Analysis

Data are presented as mean ± SD unless otherwise indicated. An independent-samples Student's t-test was used to compare the participants' baseline characteristics. A two-way unbalanced ANOVA (pre-term vs. full-term group × rest vs. exercise) was used to compare the resting and exercise HVR of the pre-term and full-term born participants. For the graded exercise test variables, a two-way unbalanced ANOVA with repeated measures (hypoxic vs. normoxic and/or pre vs. post) was employed to test for interactions and main effects for all exercise responses and prooxidant/antioxidant markers. If a significant F-ratio for a main effect or an interaction was observed, Tukey's HSD post hoc test was used to elucidate specific differences. Pearson's correlation coefficient was used to explore the bivariate correlations between the changes in MAP and HVR and the measured oxidative stress and antioxidant markers. Normality of distribution was confirmed using the Kolmogorov-Smirnov test. An a priori power analysis using the G*Power software (version 3.1.9.3) was conducted to determine the appropriate sample size: based on the data from previous reports (Bates et al., 2014; Duke et al., 2014; Farrell et al., 2015), 13 participants per group were required to yield the targeted analysis power of ≥0.8 at α = 0.05 for the main outcomes (HVR and MAP). A post hoc analysis indicated statistical powers of 0.91 and 0.93 for HVR and MAP, respectively, with the employed sample sizes (pre-term n = 21; full-term n = 14). The level of significance was defined a priori at p < 0.05 and the analyses were performed using Statistica 12.0 (StatSoft, Tulsa, United States).
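As a rough cross-check, the same kind of sample-size calculation can be reproduced with statsmodels instead of G*Power. The effect size d = 1.15 is an assumed value, chosen here only because it lands near the reported 13 participants per group.

```python
from statsmodels.stats.power import TTestIndPower

# Two-sided independent-samples t-test, alpha = 0.05, power = 0.80.
n_per_group = TTestIndPower().solve_power(effect_size=1.15, alpha=0.05,
                                          power=0.80, ratio=1.0)
print(n_per_group)  # ~13 participants per group
```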
Group Characteristics

As noted in Table 1, the groups were matched for all of the baseline anthropometric characteristics except body height, with the pre-term born participants being significantly shorter than the full-term ones (p = 0.04). Importantly, the groups had comparable indices of pulmonary capacity (FEV1 and FEV1/FVC ratio), hematological parameters and peak aerobic capacity.

Exercise Responses

A significantly lower absolute MAP was observed in the pre-term than in the full-term individuals in both normoxic (pre-term: 272 ± 39 W; full-term: 322 ± 33 W; p < 0.001) and hypoxic (pre-term: 235 ± 36 W; full-term: 273 ± 28 W; p < 0.001) conditions (Figure 2). However, the hypoxia-induced reduction of MAP was comparable between the two groups (pre-term: −8.6 ± 0.4 vs. full-term: −8.5 ± 0.5%; p = 0.35; Figure 2). It is also of note that the measured cardio-respiratory parameters at rest and at volitional exhaustion, detailed in Table 2, were similar between the two groups during both the normoxic and hypoxic graded tests. The only exception was a significantly higher VT observed at exhaustion in the full-term as compared to the pre-term born individuals during the normoxic exercise (p = 0.046).

[Table 2 note: Data are presented as mean ± SD. SpO2, capillary O2 saturation; HR, heart rate; VO2peak, peak oxygen uptake; VE, minute ventilation; VT, tidal volume; fR, respiratory frequency; RPEleg, ratings of perceived exertion for the legs; RPEdys, ratings of perceived exertion for dyspnea. * p < 0.05 denotes significant differences compared to the pre-term born individuals at exhaustion; no differences between the groups were noted at rest.]

Prooxidant/Antioxidant Balance

As detailed in Table 3, no baseline differences between the two groups were noted in any of the measured oxidative stress or antioxidant capacity markers (p > 0.05). Also, the hypoxia chemo-sensitivity test did not induce any significant between- or within-group changes in the measured markers (p > 0.05).

Correlations

No significant correlations were observed between the measured MAP values in hypoxia and the calculated resting and exercise HVR values (r: −0.2 to 0.3; p > 0.5). Also, no correlations were noted between the hypoxia-related MAP reductions and the VE, SpO2 and HR changes during the resting and exercise parts of the hypoxia chemo-sensitivity test, nor between the baseline oxidative stress markers and the HVR responses (r: −0.3 to 0.4; p > 0.4).
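The correlation screen above amounts to repeated Pearson tests across participant-level variables. A sketch with SciPy follows; the arrays are random placeholders, since the per-participant values are not tabulated here.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 35                                 # 21 pre-term + 14 full-term
map_hypoxia = rng.normal(250, 35, n)   # MAP in hypoxia (W), placeholder
resting_hvr = rng.normal(0.3, 0.2, n)  # resting HVR, placeholder
r, p = pearsonr(map_hypoxia, resting_hvr)
print(f"r = {r:.2f}, p = {p:.3f}")     # compare with the reported r of -0.2 to 0.3
```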
DISCUSSION

The primary aim of this study was to elucidate the effects of poikilocapnic hypoxia on HVR and exercise tolerance in prematurely born but otherwise healthy, physically active adults and to compare their responses to those observed in their age-, pulmonary- and aerobic-capacity-matched counterparts born at full term. This is thus the first study to demonstrate that the reductions in HVR previously observed in untrained individuals can also be observed in active or "trained" prematurely born adults, who are more likely to be exposed to altitude during mountainous sporting activities. Additionally, we sought to investigate the potential differences in systemic oxidative stress between the two cohorts and to further elucidate the relationship between oxidative stress levels and HVR modulation. Our findings indicate that the blunted resting HVR can still be observed in pre-terms following maturation. Interestingly, the reduced ventilatory response to acute normobaric hypoxia was not noted during moderate-intensity exercise and, moreover, the hypoxia-induced MAP reduction was comparable between the two cohorts. The obtained data also do not suggest that healthy, prematurely born adults exhibit significantly higher systemic oxidative stress levels.

Hypoxic Ventilatory Response

While research is limited, abnormal ventilatory responses to changes in ambient O2 availability have previously been reported in pre-term born infants (Katz-Salamon and Lagercrantz, 1994) and, recently, also in adults (Bates et al., 2014). These abnormalities might be a consequence of both perinatal periods of hypoxia and/or hyperoxia (Bavis, 2005). Perinatal hyperoxia, often employed in prematurity treatment, has been suggested as the key underlying factor for the altered resting cardiorespiratory control (Bisgard et al., 2003) and carotid chemoreceptor dysfunction (Bates et al., 2014) observed in otherwise healthy pre-term born individuals. Our data lend further support to this notion, since we show that in prematurely born individuals who underwent hyperoxic treatment at birth, the blunted resting HVR can still be observed after maturation (i.e., in adults). This abnormal ventilatory response might importantly compromise the hypoxic acclimatization ability of pre-term born individuals, especially given that HVR plays a prominent role in ventilatory acclimatization, known to be a key determinant of, at least, short-term acclimation/acclimatization to hypoxia (Powell et al., 1998; Teppema and Dahan, 2010). The fact that the prematurely born individuals, in addition to the blunted HVR, also demonstrated greater desaturation at rest during the hypoxia sensitivity test could also negatively influence their acclimatization to prolonged hypoxic exposure and/or result in higher susceptibility to acute and chronic altitude sickness. Unfortunately, no study to date has addressed this particular topic, and further work on the consequences of chronic hypoxia in the pre-term born population is clearly needed. In contrast to the resting condition, our data indicate that the exercise HVR seems comparable between pre-term and full-term born adults. Indeed, even low-to-moderate intensity exercise, as performed during the hypoxia chemo-sensitivity test, seems to abolish the blunted resting response in active and healthy pre-term born individuals. This can be explained by the preponderant role of the exercise-related respiratory drive (Forster et al., 2012; Duffin, 2014), which can override the blunted resting HVR and consequently enable an optimal ventilatory response to hypoxic exercise. In this regard, it is also prudent to note that aerobic capacity (e.g., training status) does not seem to importantly influence ventilatory responsiveness to hypoxia in humans (Sheel et al., 2006).

Exercise Responses

As mentioned previously, reduced exercise capacity is commonly observed in pre-term born individuals (Rogers et al., 2005; Vrijlandt et al., 2006; Saigal et al., 2007; Lovering et al., 2013; Svedenkrans et al., 2013). This is in line with the present results where, despite a comparable VO2peak and resting indices of pulmonary function, the pre-term born individuals exhibited significantly lower MAP values during both the normoxic and the hypoxic graded exercise tests (Figure 2). Interestingly, while the consequences of prematurity for normoxic exercise capacity have been well studied, exercise tolerance during hypoxia has received very little scientific attention. Contrary to their initial hypotheses, neither of the two studies to date that experimentally examined the effects of hypoxia on exercise performance in this population (Duke et al., 2014; Farrell et al., 2015) found a disproportionate hypoxia-related reduction in MAP in the pre-term born individuals.
Even though pulmonary gas exchange efficiency was not measured in the present study, it should be mentioned that neither Duke et al. (2014) nor Farrell et al. (2015) observed any differences in this potentially important exercise-limiting respiratory factor. The data of Farrell et al. (2015) also include an extremely interesting observation: whereas MAP was significantly lower in the pre-term vs. the full-term individuals in normoxia, this difference was diminished during hypoxic exercise. While they initially hypothesized that hypoxia would exacerbate the MAP reductions, their somewhat paradoxical finding suggests that pre-term born individuals might actually tolerate hypoxia during exercise better than their full-term born counterparts. One could thus speculate that pre-term birth, along with the associated medical and physiological consequences, might precondition pre-term born individuals for subsequent hypoxic exposures, as was recently demonstrated in a rodent model of neonatal hyperoxia (Goss et al., 2015). Nevertheless, the fact that the observed hypoxia-related reductions in MAP were comparable between the two groups clearly suggests that acute hypoxia does not compromise the exercise capacity of pre-term born individuals significantly more than it compromises the capacity of full-term born adults. Also, our data indicate that, regardless of the observed reduction in resting HVR in the pre-term group, the exercise ventilatory responses were comparable between the two cohorts. The only difference in ventilatory pattern observed between the groups was a higher tidal volume at normoxic volitional exhaustion in the full-term as compared to the pre-term born individuals. However, since both minute ventilation and respiratory frequency were similar between the groups, and since no such difference was observed at volitional exhaustion in hypoxia, we believe that this difference is physiologically negligible. It has to be emphasized again that these results were derived from a group of otherwise healthy and active pre-term and full-term born individuals who also displayed a relatively high aerobic capacity (mean VO2peak = 49-53 mL·kg−1·min−1). As mentioned in the introduction, the reason for testing aerobically fit and active pre-term born individuals was that they are the most likely to engage in activities at high altitudes. Taken together, the data from the present and previous (Duke et al., 2014; Farrell et al., 2015) studies suggest that exercising in normobaric hypoxia is well tolerated by healthy prematurely born adults. Nonetheless, the effects of prematurity per se on exercise hyperpnea, an extremely flexible and complex layered response comprising feed-forward, feed-back and adaptive components (Mitchell and Babb, 2006; Forster et al., 2012), under both normoxic and hypoxic conditions warrant further scrutiny.

Oxidative Stress and HVR Modulation

The third aspect of the present study was related to redox balance and its potential influence on HVR modulation. Although it is currently unclear whether the augmented systemic oxidative stress levels observed in pre-term born infants (Robles et al., 2001; Buonocore et al., 2002) persist into adulthood, recent evidence indicates that this might be the case (Filippone et al., 2012). Indeed, Filippone et al. (2012) found higher exhaled 8-isoprostane (an oxidative stress marker) levels in pre-term as compared to full-term born adolescents.
While one could therefore expect that prematurely born adults also exhibit significantly elevated systemic oxidative stress levels (Martin et al., 2018), our analysis of a number of oxidant and antioxidant plasma markers of redox balance did not show any significant baseline differences between the two cohorts. Moreover, it is also plausible that ROS overproduction potentially resulting from pre-term birth may not be detectable in plasma, since the higher oxidative stress marker levels in pre-term adolescents were previously demonstrated in exhaled breath and not in plasma (Filippone et al., 2012). These findings might also be importantly influenced by the selection criteria used in the present study (healthy, active, male survivors of prematurity with no chronic anatomical or clinical consequences). Based on the previously observed relation between oxidative stress and HVR in rodents (MacFarlane et al., 2008) and humans (Pialoux et al., 2009a,b), we also aimed to determine a potential relation between the two in the pre-term born individuals. Nevertheless, no significant correlation was observed between the measured baseline oxidant/antioxidant markers and the HVR responses. Also, the hypoxia sensitivity test did not provoke any measurable changes in the redox balance markers in either group, even though oxidative stress changes have previously been reported following such testing (Pialoux et al., 2009a). Additionally, both acute hypoxia (Magalhaes et al., 2005; Debevec et al., 2014) and exercise (Powers et al., 2011) have previously been shown to independently augment oxidative stress (Debevec et al., 2017a). A possible explanation could be that such an acute and short (8-min) normobaric hypoxic exposure combined with low-intensity exercise is not sufficient to importantly alter the redox balance in otherwise healthy pre-term born individuals.

Limitations and Methodological Considerations

While the present study provides novel insight into the hypoxic exercise tolerance of otherwise healthy, active pre-term born adults, there are a few limitations we would like to acknowledge. Firstly, as with essentially all human studies on the topic, the independent effects of hyperoxia and prematurity are almost impossible to discern, since the vast majority of prematurely born infants are subjected to high inspired O2 levels. Secondly, the reported responses were obtained in a rather narrow age group (adults between 18 and 24 years of age) comprised of males only, and prospective studies are warranted to investigate hypoxic exercise tolerance in both older and younger individuals of both sexes to provide additional insight into potential maturation- and sex-related effects. Thirdly, as we only tested the acute effects of poikilocapnic hypoxia on ventilatory responses and exercise tolerance, we cannot draw firm conclusions regarding the ability of pre-term born individuals to adapt efficiently to prolonged/chronic hypoxia. Accordingly, and especially given the blunted resting HVR observed in the present and other studies (Duke et al., 2014; Farrell et al., 2015), prospective investigations should also examine the ability and kinetics of physiological acclimatization to prolonged hypoxic exposure in pre-term born individuals. It is also of note that normobaric hypoxia was employed in the present study as a surrogate for high altitude-related hypoxia.
Since differential ventilatory (Savourey et al., 2003; Faiss et al., 2013) and oxidative stress (Ribon et al., 2016) responses between hypobaric and normobaric hypoxia have previously been demonstrated, this has to be taken into account. Accordingly, and especially given that all of the up-to-date studies, including this one, employed normobaric hypoxic exposures, future well-designed studies investigating acute and chronic responses to terrestrial (i.e., hypobaric) hypoxia are warranted. Finally, our data show a rather high inter-individual variability for some oxidative stress markers, in particular nitrotyrosine. This is, however, usual for such assays (Ogino and Wang, 2007), as various exogenous modulators, such as the intake of nitrates and dietary antioxidants, physical fitness level, and smoking, could explain the individual variation in plasma oxidative stress markers. By design, the participants were non-smokers and, moreover, since peak oxygen uptake, an established marker of physical fitness, was homogeneous within our population, it is unlikely that these parameters play a major role in the observed variability. Potentially, the large variability in these markers could be related to differences in exogenous nitrate and antioxidant intakes that were not controlled.

CONCLUSION

In summary, the obtained data show that the blunting of resting HVR observed in pre-term born infants persists well into adulthood. However, the diminished resting HVR is overridden by exercise-related respiratory drive already at low-to-moderate exercise intensities. Importantly, the present results clearly indicate that the hypoxia-related MAP reduction is comparable between the two groups and, moreover, that healthy and active adult survivors of pre-term birth do not seem to exhibit significantly higher resting systemic oxidative stress levels as compared to their age- and activity-matched counterparts. Collectively, these findings suggest that healthy prematurely born adults tolerate exercise in hypoxic conditions well and should hence not be discouraged from engaging in recreational or competitive activities in hypoxic conditions. Nevertheless, the observed blunted HVR and greater desaturation at rest in the pre-term vs. full-term born individuals warrant further investigation.

ETHICS STATEMENT

The study protocol was pre-registered at ClinicalTrials.gov (NCT02780908), approved by the National Committee for Medical Ethics at the Ministry of Health of the Republic of Slovenia (0120-101/2016-2) and conducted according to the guidelines of the Declaration of Helsinki.

AUTHOR CONTRIBUTIONS

TD, VP, GM, MM, and DO conceived and designed the research. TD, MM, and DO conducted the experiments. TD, VP, GM, AM, and MM analyzed the data. TD wrote the manuscript. All authors read and critically revised the manuscript.

FUNDING

Funded by the Slovene Research Agency (Grant No. J3-7536) and Ljubljana University Medical Centre (Grant No. TP20140088) grants. Part of the data reported in the present manuscript has been presented at the 20th International Hypoxia Symposium in Lake Louise, Canada and the 23rd annual Congress of the European College of Sport Science in Dublin, Ireland, with the abstracts published in the respective proceedings (Debevec et al., 2017b).
2019-04-16T13:31:26.366Z
2019-04-16T00:00:00.000
{ "year": 2019, "sha1": "533a3b2204b741547804451a99ec6511f36d3f3a", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2019.00437/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "533a3b2204b741547804451a99ec6511f36d3f3a", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
22992943
pes2o/s2orc
v3-fos-license
Tumor stem cell, or its niche: which plays the primary role in tumorigenesis?

Cancer research over the past decades has focused on neoplastic cells, or a fraction of them, i.e., tumor stem cells, as the ultimate causes of tumorigenesis. However, during recent years, scientists have come to realize that tumorigenesis is not a solo act of neoplastic cells, but rather a cooperative process in which the roles of numerous types of non-neoplastic cells should be recognized. These tumor-residing non-neoplastic cells constitute the so-called tumor-associated stroma, which in certain cases even greatly surpasses the neoplastic cellular compartment previously thought of as the sole determiner of the seemingly autonomous growth pattern. In this review, we summarize several recent research highlights that have unveiled many previously unappreciated roles for microenvironmental factors, especially during the initiation stage of tumorigenesis. It is becoming increasingly clear that the stroma's regulatory effects constitute not only an essential force for maintaining tumor growth, but also primary causes initiating tumorigenesis.

INTRODUCTION

In spite of the fact that tumor-caused mortality rates have actually declined by about 10%-18% over the last decade, tumors are still among the leading causes of human mortality from middle to old age. This situation, to a great extent, reflects a looming fact: the cellular and molecular bases underlying tumor origin and development still remain largely obscure to our comprehension. From the viewpoint of mainstream medical research, tumorigenesis is basically a process of neoplastic cell autonomy, wherein a few genetic or epigenetic alterations intrinsically occurring in a given somatic cell (presumably a somatic stem cell or progenitor) transform it into a tumorigenic cell, which, in turn, embarks on out-of-control growth, successfully defying the regulatory activities of surrounding normal tissue cells, such as contact inhibition, and immunosurveillance from the immune system. Nevertheless, even as early as several decades ago, occasional scientific reports emerged indicating that, at least for certain types of neoplasia, tumorigenic growth is not such a totally autonomous process but needs cooperative actions from certain environmental factors. Notably, around the turn of this century, this unorthodox thought gradually came into the spotlight. Accumulating work has since revealed that the versatile roles of the tumor stroma, to different extents, contribute to the development and clinical manifestation of neoplasia. In the following sections, we propose several representative scenarios, emphasizing the plausible primary contributions of tumor-associated stroma to the initiation of tumorigenesis.

MALIGNANT TRANSFORMATION

The primary roles of intrinsic defects within neoplastic cells in initiating tumor formation have long been established. Accordingly, it is easy to accept the notion that neoplastic cells will exert potent stimulatory effects to coax the stroma into a supportive microenvironment for the sake of their growth. It was probably hard to envision decades ago that certain primary factors or defects within the stroma might play a permissive role or provide an essential fueling force driving malignant transformation.
As illustrated in the case of AKT activation-driven anchorage-independent growth of melanocytes and their malignant transformation, the hypoxic environment of normal skin plays a permissive role via stimulating HIF1α activity [1]. Conversely, a normoxic environment will greatly inhibit HIF1α activity and, thus, inhibit the occurrence of melanoma even in the presence of oncogene activation. Subsequent studies suggested that the regulatory mechanisms of tumorigenesis may involve: (1) normoxia decreases HIF1α activity, allowing expression of integrin α5 that, in turn, prompts anoikis of pre-tumor stem cells (TSCs) of melanoma during the tumor budding stage; (2) HIF1α activation increases mRNA and protein levels of Notch1, which facilitates melanoma development even in xenograft models; and (3) HIF1α activates the expression of macrophage migration inhibitory factor to delay premature senescence.

In another study, a common genetic defect occurring in both focal neoplastic cells and stromal mast cells was shown to elaborate tumor formation of the neurofibroma, which is notably composed of multiple types of tissue cells including Schwann cells, fibroblasts, endothelial cells, hematopoietic cells and pericytes/smooth muscle cells. It was previously observed that loss of heterozygosity of the tumor suppressor gene neurofibromatosis type 1 (Nf1) in Schwann cells is necessary, but not sufficient, to fuel tumor formation. On the other hand, during neurofibroma formation in the Nf1-deficient mouse model, it was noticed that an infiltration and/or expansion of c-Kit+FcεRI+ mast cells into peripheral nerves preceded the manifestation of clinical tumors [2]. Remarkably, hematopoietic cells, the majority of which are actually mast cells, account for 3%-7% of tumor cellularity. Yang et al [2] elegantly demonstrated that a haploinsufficiency of Nf1 within hematopoietic mast cells is absolutely required for in vivo mast cell infiltration as well as for the tumor formation that is otherwise characteristic of the proliferative Nf1-/- Schwann cells. To further support an essential contribution from the mast cells, mast cells with a genetic defect in the c-Kit gene, or wild type mast cells with prior inhibition of c-Kit kinase activity, failed to support the tumorigenic proliferation of Nf1-/- Schwann cells. This study actually poses an exceptional case wherein tumor formation may not always arise from primary defects within a single cell, as the tumor clonal theory has claimed, and primary defects within two lineages of cells might be needed for the initiation and development of tumors.

PRIMARY ABNORMALITIES IN STROMA STIMULATE A NEOPLASIA-LIKE PHENOTYPE WITHOUT MALIGNANT TRANSFORMATION

What about situations wherein the primary defects occur only in stromal cells? Can a neoplasm arise that is mainly composed of non-stromal cells with a normal genetic background? Two elegant works, by Walkley et al [3] and Kim et al [4], have actually illustrated this unexpected scenario. The first study involved the development of myeloproliferative disorders (MPD) featuring a phenotype of granulocytosis, which have been largely regarded as a group of neoplasia intrinsic to hematopoietic stem cell (HSC) defects (such as in the case of JunB deficiency of HSCs). Intriguingly, Walkley et al [3] revealed a deficient hematopoietic microenvironment component that is sufficient to result in the development of a full-scale MPD phenotype in mouse models.
Although the exact cellular and molecular mechanisms still await further clarification, the reciprocal bone marrow transplantations between RARγ+/+ and RARγ-/- strains clearly pinpointed a retinoic acid signaling defect within the hematopoietic microenvironment, but not within the HSCs, as the primary cause of this special subtype of MPD. Notably, neither RARγ+/+ nor RARγ-/- hematopoietic cells within a RARγ-/- microenvironment were malignantly transformed by acquiring proliferative autonomy. In support of this scenario, in a similar case, a primary defective Notch activation arising from Mib1 deficiency within the microenvironmental compartment, but not within hematopoietic cells, also caused an MPD-like phenotype [4].

This scenario of abnormal proliferation of non-stromal cells fueled by a primary stromal defect is not restricted to liquid neoplasia. As revealed in a study of smooth muscle cell-targeted Lkb+/- or Lkb-/- mouse models, Katajisto et al [5] observed that the occurrence of Peutz-Jeghers syndrome, an abnormal epithelial proliferation along the gastrointestinal tract that carries a high risk of forming carcinoma, was attributed to a characteristic increase of the Sma+Desmin- myofibroblast component within the stromal area of gastrointestinal polyps. The myofibroblast-like cells cored the polyps, and a reduced Smad-2 phosphorylation level was evident within the epithelial cells of the proliferative zone, especially within those surrounding the Sma+ fibroblast-like cells, indicating that a molecular mechanism relating to a decreased production of TGFβ by the Lkb-/- stroma was responsible for the abnormal epithelial proliferation.

PRIMARY STROMAL DEFECTS MAY TRANSFORM NON-STROMAL CELLS INTO REAL TSC

Further, it is interesting to ask whether a primary stromal defect can serve as the ultimate cause underlying the malignant transformation of non-stromal compartments. The answer probably is yes. Indeed, in certain circumstances, the abnormal microenvironment can serve as a potent carcinogen, as illustrated in studies of the enhanced activities of stroma-derived metalloprotease-3 and -9 (MMP3 and MMP9) [6]. Abnormally elevated activity of MMPs was found to deplete the surface E-cadherin of mammary cells, which led to the loss of cell-cell adhesion, relocalization of β-catenin into the nucleus, expression of the Rac1b isoform, and the generation of reactive oxygen species [6]. Finally, the resulting epithelial-mesenchymal transition and genomic instability fueled the development of overt breast cancer at a high frequency. As mentioned above, it is well demonstrated that deficiency of TGFβ signaling in epithelial cells leads to their malignant transformation. On the other hand, recent work by Kim et al [7] indicated an unexpected scenario in which a primary TGFβ signaling defect within T lymphocytes, but not within the epithelium, triggered the generation of a familial juvenile polyposis-like syndrome that spontaneously evolved to metastatic gastrointestinal cancer. In analyses of two T helper lymphocyte-restricted conditional Smad4-/- mouse models [7], the authors discovered a prominent infiltration of IgA-secreting plasma cells into the epithelial neoplasm microenvironment, which indicated a skewed production of TH2-type cytokines, including IL-6, by Smad4-/- T lymphocytes. In this regard, strong evidence from both human and murine studies is available, revealing a common transforming mechanism whereby consistent IL-6 signaling through Stat3 activation is associated with malignant transformation of the gastrointestinal tract epithelium [8,9].
NEOPLASTIC CELLS RECRUIT NORMAL STROMA, OR EVEN SELECT THE OUTGROWTH OF ABNORMAL STROMAL CELLS

On the other hand, probably in most cases, we need to accept the notion that malignant neoplastic cells do predominate in the origin and progression of tumor tissues. However, even in these situations, the oncogenic activity of a primary defect within neoplastic cells has to be realized via the mediating role of otherwise normal stromal cells. This scenario is well demonstrated in understanding the oncogenic roles of the active Hedgehog (Hh) signaling status detected in many types of tumors. Numerous previous studies have indicated an autocrine mode of Hh in promoting the growth of neoplastic cells. However, in a recent analysis concerning the development of epithelial tumors [10], it was discovered that some previous reports presuming an inhibiting effect of Hh inhibitors on in vitro epithelial tumor growth via an autocrine mechanism of Hh signaling actually reflected "off-target" activity. In line with this, it was shown that epithelium-specific transgenic expression of SmoM2 itself, an active mutant of Smoothened, failed to induce malignant transformation of pancreatic cells. Based on analyses of human primary tumor sample-nude mouse xenograft models, Yauch et al [10] further demonstrated a relationship between the expression levels of IHh and SHh in inoculated tumor cells and those of Gli and Patch in host-derived stroma, while at least within some successfully implanted tumor samples, no evidence for Hh signaling activation within the neoplastic cells themselves was confirmed. As expected, in these xenograft models, the administration of an Hh signaling inhibitor or an Hh-neutralizing antibody indeed delayed the growth of tumors, and MEF cells from a wild type, but not from a Smo-/- background, were found to support the inoculation and growth of primary tumor cells expressing Hh, indicating a critical role for an Hh paracrine mechanism from tumor to stroma. The stroma would supposedly send feedback to neoplastic cells after Hh signaling activation, constituting an essential force fueling tumorigenesis of the epithelium.

Finally, it must be emphasized that, in certain circumstances, some detectable genetic or epigenetic abnormalities in stromal cells represent a secondary response to malignant tumor cell-derived stimuli or even stress, rather than a primary event. In a murine prostate cancer model generated by epithelial transgenic expression of Apt121, a potent Rb pathway inactivator, it was observed that tumorigenic progression was dependent on the genetic status of p53 within the stroma, i.e., wild type, heterozygous or null [11]. The TgAPT121 prostate tumor with a p53-/- background has been characterized as having an extensive hypercellular mesenchyme, even with a so-called stromal tumor. The phenotypic characterization of the Sma+S100A4+CK8- fibroblast-like stroma indicated that it was not derived from a possible epithelial-to-mesenchymal cell trans-differentiation. Most intriguingly, the proliferative mesenchyme within prostate cancers with p53+/+ and p53+/- backgrounds was found to experience a progressive loss of p53 copies, indicating that the stress response of the stroma to prostate cancer selectively favors the outgrowth of abnormal stroma with defective p53 function.

CONCLUSION

Do tumors develop independently of tumor microenvironment-derived supporting cues? Now the answer is clear.
The tumor microenvironment exerts a tremendous effect on tumor budding and progression, and sometimes the altered stroma even constitutes the sole ultimate cause fueling tumorigenesis. The interplay between the microenvironment and the evolving tumor cells is dynamic and complex, involving extensive reciprocal interactions. Changes in the context in which a tumor is hatching will largely determine whether the balance tips in favor of desirable tumor suppression or undesirable tumor promotion. Worth mentioning, these new findings convey at least two important biological implications: (1) for clinical tumors that need an essential contribution from certain primary defects of non-neoplastic stroma to originate and develop, the conventional method of measuring TSCs, based on the classical conception of the clonal nature of tumorigenesis, may fail by simply inoculating the sole neoplastic compartment of tumor tissues into normal syngeneic or several routinely used immunocompromised recipients, such as NOD/SCID mice; and (2) perhaps for all types of clinical tumors, the interplay pathways between tumor cells and non-neoplastic stroma represent new avenues open to influence by therapeutic interventions. Therefore, understanding and developing accurate strategies aimed at cancer-supportive or tumor-inductive microenvironments, in combination with standard anti-tumor approaches, seems most promising for preventing the development of, or eradicating, well-established tumors. Results from research on these approaches are eagerly anticipated.
2018-04-03T02:01:10.393Z
2010-05-15T00:00:00.000
{ "year": 2010, "sha1": "d93308ceb9be47c8d7ad723723323aef7535a67b", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.4251/wjgo.v2.i5.218", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "b6863a1e976ecf6b856ad5c92c80f31f1737053b", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
256088901
pes2o/s2orc
v3-fos-license
Direct lineage tracing reveals Activin-a potential for improved pancreatic homing of bone marrow mesenchymal stem cells and efficient ß-cell regeneration in vivo

Despite their potential, bone marrow-derived mesenchymal stem cells (BMSCs) show limitations for beta (ß)-cell replacement therapy due to inefficient methods to deliver BMSCs into the pancreatic lineage. In this study, we report the potential of the TGF-ß family member protein Activin-a to stimulate efficient pancreatic migration, enhanced homing and accelerated ß-cell differentiation. Lineage tracing of permanently green fluorescent protein (GFP)-tagged donor murine BMSCs transplanted either alone or in combination with Activin-a in diabetic mice displayed potential ß-cell regeneration and reversed diabetes. Pancreatic histology of Activin-a treated recipient mice reflected high GFP+BMSC infiltration into the damaged pancreas with normalized fasting blood glucose and elevated serum insulin. Whole-pancreas FACS profiling of GFP+ cells displayed significant homing of GFP+BMSC with Activin-a treatment (6%) compared to BMSC-alone transplanted controls (0.5%). Within islets, approximately 5% of GFP+ cells attained a ß-cell signature (GFP+ Ins+) with Activin-a treatment versus controls. Further, double immunostaining for the mesenchymal stem cell marker CD44 with GFP in infiltrated GFP+BMSC deciphered substantial endocrine reprogramming and ß-cell differentiation (6.4% Ins+/GFP+) within 15 days. Our investigation thus presents a novel pharmacological approach for stimulating direct migration and homing of therapeutic BMSCs that re-validates BMSC potential for autologous stem cell transplantation therapy in diabetes.

Introduction

Stem cell-derived β-cells present a clear proof-of-concept for cell-based diabetes medication. After the "Novacell protocol," substantial progress has been made in generating stem cell-derived ß-cells from embryonic stem (ES) cells, induced pluripotent stem (iPS) cells, or adult progenitor cells [1][2][3]. BMSCs, however, show great promise for diabetes mitigation both in rodents and in newly diagnosed individuals with type-1 diabetes [4][5][6][7]. Previous studies have shown that rodent BMSCs spontaneously differentiate into endocrine pancreatic cells [8][9][10]. The mechanism, however, remains largely unknown: can BMSCs transdifferentiate into β-cells in vivo, or are they required to provide paracrine support for existing β-cell growth and differentiation? Most studies addressing the contribution of BMSCs to β-cell survival or regeneration have used transplantation of naive BMSCs, while some others have used a genetic tag, i.e., GFP, for monitoring transplanted cells [11,12]. An early study by Hess et al. demonstrated a blood glucose-lowering effect within a week after intravenous infusion of GFP-tagged allogenic BMSCs into STZ-induced diabetic mice [4]. The authors reported a frequency as low as 0.5% of donor GFP+BMSC reaching the pancreas, while fewer still differentiated into insulin-producing cells within the host islets. In another similar study, only 1% allogeneic chimerism of repopulated BMSCs was shown to reach the recipient pancreas and reverse diabetes in NOD mice [12]. Ianus et al. also showed that infused BMSCs can differentiate into insulin-producing islet cells when transplanted into lethally irradiated mice [8]. Interestingly, a few other groups replicating similar studies with BMSCs reported no evidence for such an antidiabetic effect after BMSC infusion [11,13,14].
The migration of BMSCs to colonize the degenerating pancreas appears to be the key to stimulating β-cell regeneration [15,16]. Moreover, the lack of a method to stimulate pancreatic migration and trans-differentiation into β-cells limits their scope in cell therapy. It has been extensively demonstrated that the CXCR4-stromal-derived factor-1 axis is crucial for BMSC migration and homing, and that modulating the expression of the CXCR4 gene in BMSCs can facilitate their tissue-specific homing [17,18]. The use of chemical/biological modulators such as the TGF-β family member protein Activin-a has been shown to support differentiation of BMSCs [3,19,20]. Furthermore, Activin-a induces definitive endoderm differentiation by stimulating CXCR4 expression in ES/iPS cell-derived ß-cells and regulates migration to enhance homing [21][22][23]. We therefore used a lineage tracing approach in STZ-induced diabetic mice to demonstrate the potential of Activin-a in stimulating migration, improving pancreatic homing and driving efficient endogenous ß-cell differentiation.

Animals

Animal protocols were approved and performed as per the Committee for the Purpose of Control and Supervision on Experiments on Animals (CPCSEA) and our Institutional Animal Ethics Committee (IAEC, MSU Baroda) (License no: 938/PO/a/06/CPCSEA) guidelines. We used male Balb/c mice, 6-8 weeks old, weighing 25-30 g, housed at 26°C with a 12 h light-dark cycle and food/water ad libitum.

Diabetes induction and blood glucose measurement

Diabetes was induced by 5 days of multiple low-dose streptozotocin (STZ) injections (70 mg/kg b wt). Fasting blood glucose was monitored with an Accu-Chek glucometer.

Isolation and purification of bone marrow-derived mesenchymal stem cells

BMSCs were isolated from the tibia and femur bones of 4-week-old Balb/c mice by modifying the protocols adapted from Zhu et al. and Hsiao et al. using differential trypsinization steps [24,25]. For a more detailed protocol, please refer to the supplementary methods.

Generation of permanently labeled GFP+BMSC

One hundred thousand donor BMSCs were transfected with pPB-eGFP (1 μg) and pCYL43-PBase (2 μg) DNA vectors (a gift from the Sanger Institute, UK; for map, see supplementary figure-1) using Lipofectamine 2000 (Invitrogen) in a 1:3 volume ratio. Following transfection, stable GFP-expressing clones were selected on puromycin antibiotic at 300 μg/ml for the first 2 days and 900 μg/ml for the next 7 days. GFP+BMSC colonies were hand-picked using 3.2 mm clonal discs (Sigma Aldrich, USA) and FACS sorted for enriched GFP+ cells (see supplementary methods for the cloning and purification strategy).

Flow cytometry

Live BMSCs at p#25 or single islet cells were acquired on a BD Aria-III flow cytometer for GFP+ cell quantification. For immunocharacterization, formalin-fixed BMSCs or islet cells were stained for MSC surface markers (CD34, CD44, CD90, CD45, CD117, Vimentin, and SMA) and endocrine differentiation markers (CD49b and PDX1). Cells were fixed in 4% formalin (30 min on ice), permeabilized with Triton X-100, and stained with primary labeled antibodies overnight at 4°C for key MSC and endocrine differentiation markers (see Table 1 for details). Data were acquired on a BD Aria-III sorter using DIVA software and later analyzed with FlowJo software (FlowJo, USA).

In vitro differentiation of BMSC into islet-like clusters

GFP+BMSCs were assessed for in vitro islet differentiation and formation of ILCC using Activin-a (20 ng/ml) as described previously [26,27].
Transplantation of GFP-labeled BMSCs

For lineage tracing, one million GFP+BMSCs were pre-incubated with Activin-a (2.5 μg/ml) for 30 min prior to the transplants and then injected intravenously into STZ-induced diabetic recipient mice, followed by daily Activin-a injections (25 μg/kg b wt) for 15 days post-transplantation. Diabetic STZ-treated mice (control) received neither donor GFP-labeled BMSC nor Activin-a treatment, while a group of recipient mice received only donor GFP+BMSC without Activin-a treatment and served as the BMSC control.

Tissue preparation and immunohistochemistry

Pancreatic tissues from all mice at day 30 post diabetes induction were harvested, formalin-fixed, and sliced into 5 μm sections, while BMSCs for immunocytochemistry were fixed in 10% formalin overnight. For histology, tissue sections were deparaffinized with graded xylene and ethanol series and rehydrated in water. Tissue sections or cells were then permeabilized and blocked with 4% donkey serum (Sigma Aldrich, USA) for 1 h at RT, followed by primary antibody (see Table 1 for details) incubation overnight at 4°C. The next day, samples were washed and labeled with secondary antibodies (see Table 2 for details) for 30 min at RT. Nuclei were marked with DAPI and slides mounted with Fluoromount-G (VECTASHIELD, USA). Images were captured on an LSM710 confocal microscope and analyzed using Zen10 software (Carl Zeiss, USA).

Protein extraction and Western blotting

FACS-sorted green (GFP+) cells and single-cell islet suspensions from islets isolated from diabetic and recipient mice were lysed in RIPA buffer (1% Triton X-100, 1% sodium deoxycholate, 0.1% SDS, 0.15 mM NaCl, 0.01 M sodium phosphate, pH 7.2). Fifteen micrograms of protein, after Bradford quantification, was loaded on 12% SDS-PAGE and transferred onto a nitrocellulose membrane. Membranes were blocked with 1% BSA in PBS and probed with primary antibodies (see Table 1) at 4°C overnight. An HRP-labeled secondary antibody was then probed for 30 min at RT. Membranes were finally developed with chemiluminescence detection reagent, and images were captured on the gel documentation system (GE Healthcare). Densitometric protein expression was measured from pooled cell extracts from 3 mice in duplicates, and fold changes with SD were calculated using Fiji software.

Serum insulin ELISA

Serum insulin was measured using a mouse insulin ELISA kit (Mercodia Inc., USA).

Statistical analysis

All statistical analyses were performed with GraphPad Prism-6 software using either two-way ANOVA or the Bonferroni test for p value calculations with > 95% confidence. Statistics are described in the legend of each figure. The number of mice transplanted was limited to N = 3 due to the huge cost incurred for daily Activin-a injections.

Derivation, generation and characterization of GFP+BMSC

We isolated BMSCs from donor mice surgically by modifying the previously described protocols from Zhu et al. and Hsiao et al. [24,25]. A homogeneous population of BMSCs without hematopoietic and macrophage contamination was achieved by the differential trypsinization technique [25]. To perform lineage tracing of BMSCs, we created traceable BMSCs by permanent genomic integration of GFP using piggyBac transposon elements, and characterized the stable GFP+ clones for MSC surface markers (Fig. 1; Suppl. Fig-3). These marker expressions correspond to BMSCs according to the International Society for Cellular Therapy criteria [28]. It is well known that human BMSCs do not express the CD34 marker, but at least in mice, there are marked differences in the CD34 expression profile.
These have been discussed widely in two independent reports confirming the presence of CD34 expression in murine BMSCs [29,30].

Model of pancreatic injury and lineage tracing of GFP+BMSC contributing to new ß-cell formation

To evaluate the BMSC potential for repairing and restoring lost ß-cell mass, we adopted the STZ-induced diabetic model for partial ß-cell ablation and mild hyperglycemia. As per the National Institutes of Health (NIH) and the Animal Models of Diabetic Complications Consortium (AMDCC), USA, the recommended blood glucose level for diabetes induction in STZ-treated mice under a non-fasting state should be > 200 mg/dl (11.1 mmol/l), whereas for a fasted animal it should be > 150 mg/dl (8.4 mM) [32,33]. Hence, we injected STZ at 70 mg/kg body weight for 5 days to attain glycemia > 11 mM in the fasted condition. It has been well documented that pancreatic transcriptional reprogramming markers are only expressed at early time points, for a very short duration, during the β-cell regeneration process. To study BMSC-derived β-cell regeneration, it is therefore essential to perform lineage tracing studies early on post-transplantation. Hence, we performed lineage tracing at day 30. We designed an experimental approach to study pancreatic repair upon transplantation of donor allogeneic GFP-expressing BMSC (Fig. 2a). Fasting blood glucose above 15 mM for 30 days and depleted serum insulin levels confirmed the establishment of the model of ß-cell death (Fig. 2c, d). Histo-morphological assessment of pancreatic sections stained with hematoxylin and eosin (H&E) and immunohistochemistry for insulin (red) showed pancreatic injury and ß-cell damage in islets at day 30, resulting in hypoinsulinemia and hyperglycemia (Fig. 2b). Another set of diabetic un-transplanted representative mice (n = 2) was sacrificed and the pancreas harvested solely to survey GFP expression in pancreatic cells by flow cytometry and microscopy; it was found negative for GFP signals (Fig. 2e).

Established hyperglycemic recipient mice were intravenously transplanted with transgenic GFP+BMSC, and blood glucose and serum insulin levels were measured. Control non-diabetic mice retained physiological glycemic control over the total duration of the study, while non-transplanted diabetic mice exhibited a hyperglycemic response after STZ injection, with elevated blood glucose and severely depleted insulin levels (Fig. 2f, g). The data from mice transplanted with allogeneic GFP+BMSC without Activin-a treatment followed a similar glycemic pattern as the diabetic controls and failed to reverse diabetes. These findings coincide with earlier similar reports where BMSC failed to reverse hyperglycemia. Additionally, in another group of our experimental design, in which mice transplanted with GFP+BMSC were also treated with Activin-a for 15 days, we interestingly found a profound effect of this treatment, with lowered blood glucose and increased serum insulin levels after 30 days. We speculate that these results, reflecting the reversal of diabetes in Activin-a treated mice, could be attributed to two possibilities: (1) GFP+BMSC contribute to new ß-cell generation, resulting in increased insulin and reduced blood glucose; and (2) Activin-a treatment substantially stimulates insulin biosynthesis or release from pre-existing ß-cells.

Fig. 1 Generation of GFP-tagged mouse bone marrow-derived mesenchymal stem cells and differentiation into functional pseudo-islets. a Schematic representation for BMSC isolation and stable GFP+ clone selection. b Fluorescent images of stable GFP+ (green) expressing BMSC and flow-cytometric quantification of the sorted GFP+BMSC line. c Immunophenotyping of mesenchymal stem cell markers in GFP+BMSC in comparison with untransfected BMSC using flow cytometry. d Schematic representation of the islet differentiation protocol into functional islet-like cell clusters and representative microscopic images of GFP+BMSC at days 0, 4, and 7. Immunostaining images for vimentin (red) and insulin (green) are represented at initiation and completion of the differentiation steps. e Immunostaining images for insulin (green) and somatostatin (red); c-peptide (green) and glucagon (red); NeuroD1 (green) and Neurog3 (red); and Pdx1 (red) and Nestin (green) in differentiated islet-like clusters.

Fig. 2 Transplantation of GFP+BMSC into the STZ-induced diabetic mouse model. a Experimental design and timeline for the development of STZ diabetic mice and assessment of pancreatic regeneration with GFP+BMSC in combination with Activin-a. b Evidence for the establishment of diabetes and pancreatic injury after STZ injections by representative pancreatic histology (H&E) and immunostaining for insulin (red). Graphical representation of fasting c blood glucose and d serum insulin levels in control and STZ-treated mice. Data represent mean ± SEM with N = 3 mice per group. e Validation of GFP expression in recipient STZ-treated mice pancreas using flow cytometry and immunostaining for insulin (red). Graphs represent f fasting blood glucose and g serum insulin levels in controls and donor transgenic BMSC recipient mice. Data represent mean ± SEM with N = 3 mice per group. All statistical analyses were performed with GraphPad Prism software using two-way ANOVA and the Bonferroni test for p value calculations.

To further test this, we performed lineage tracing and surveyed GFP-expressing cells in the recipient mice's pancreas and liver, aiming to find evidence of transgenic BMSC contributing to ß-cell regeneration.

Activin-a treatment stimulates pancreatic migration and homing of GFP+BMSC

We hypothesized that the effect on blood glucose and serum insulin levels in Activin-a treated mice receiving bone marrow-derived stem cells is a result of new ß-cell formation. To investigate this, we first examined the migration pattern and homing of GFP-expressing BMSC in diabetic control and GFP+BMSC transplanted mice under the influence of Activin-a treatment. Pancreas and liver tissues harvested at day 30 from all groups of animals were digested to single-cell suspensions for FACS quantification of GFP+ cells. Whole-pancreas cell sorting from diabetic control and BMSC transplanted mice without Activin-a treatment displayed less than 1% (0.7 ± 0.44%) GFP+ cells migrating to the pancreas, whereas BMSC recipient mice treated with Activin-a presented a significantly higher fraction of GFP-expressing cells (6 ± 0.42%) (Fig. 3a). In contrast, no significant migration and homing into the liver was observed in any of the groups (Fig. 3b), suggesting that Activin-a promotes efficient pancreatic-lineage migration of GFP+BMSC but not migration into the liver. Further, to identify the specific molecular signature of pancreas-migrated GFP+ cells, we performed FACS profiling of GFP+ cells with CD44 (mesenchymal marker) in the single-cell population. Neither normal (0.12 ± 0.01%) nor diabetic control (0.13 ± 0.01%) mice islet cells presented CD44+ cells, indicating that MSCs do not considerably reside within the islets.
However, untreated diabetic recipient mice displayed approximately 0.31 ± 0.21% CD44+ cells, while Activin-a treated recipients showed a significantly higher number of CD44+ cells (2.12 ± 0.31%) within the total cell population (Fig. 3d, Suppl. Fig-4). Given that the recipient mice received donor allogeneic BMSC, we then quantified the presence of GFP+ cells specifically within the islet cell population. As expected, the pancreata of controls and untreated recipient diabetic mice contained an extremely low number of GFP+ cells out of the total islet population (control 0.75 ± 0.001%, diabetic control 0.83 ± 0.091%, and GFP-BMSC transplanted 0.51 ± 0.21%). Activin-a treated transplanted mice displayed a dramatically higher frequency of GFP+ cells (4.72 ± 0.87%) within the isolated islet cell population (Fig. 3e). This implied that Activin-a treatment in recipient mice could potentially stimulate efficient migration and improved homing of transplanted BMSCs to the injured pancreas.

If the donor BMSC were to contribute to new islet cell generation with Activin-a, we hypothesized that the subset of migratory GFP+ cells in islets should demonstrate loss of CD44 expression without losing GFP signals. The GFP+CD44- cells thereby present evidence of donor BMSC trans-differentiation into new islet cells. To test this, we FACS analyzed the dual-stained (GFP/CD44) islet cells in each group. Again, no dual-stained cells were detected in either control. A tiny fraction of undifferentiated GFP+CD44+ cells (0.13 ± 0.01%) was observed in untreated donor BMSC recipient mice. Similarly, Activin-a treated recipients demonstrated 3.67 ± 0.13% GFP+CD44- (differentiated) and only 0.57 ± 0.07% GFP+CD44+ (undifferentiated) cells (Fig. 3c, e). The extent of differentiation of donor BMSC can be calculated by subtracting the frequency of undifferentiated GFP+CD44+ cells from the total GFP+ cells quantified within the islets. We observed that 25% (0.13 out of 0.51%) of donor GFP+BMSC in untreated and 88% (4.15% out of 4.72%) of donor GFP+BMSC in Activin-a treated transplanted animals underwent transdifferentiation (Fig. 3f, Suppl. Fig-5). These observations collectively indicate that, despite their potential, the fairly low migration of transplanted donor BMSC into the injured pancreas meant that not enough BMSCs could deliver and transdifferentiate into new insulin-producing cells, which ultimately accounts for the failure of donor BMSC to mitigate hyperglycemia in the BMSC-alone control recipients. Activin-a treatment in conjunction with BMSC infusion in the recipient mice, on the other hand, demonstrated this proof-of-concept for BMSC transdifferentiation.
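To make the bookkeeping concrete, a minimal sketch of the subtraction rule described above, applied to the reported Activin-a group frequencies (the variable names are ours, not the authors'; this is an illustration, not their analysis code):

```python
# Differentiation extent from the dual FACS frequencies, following the
# subtraction rule stated in the text (variable names are ours):
total_gfp = 4.72        # % GFP+ cells within the islet population (Activin-a group)
double_positive = 0.57  # % GFP+CD44+ cells, i.e. mesenchymal signature retained

differentiated = total_gfp - double_positive   # GFP+CD44- fraction
extent = 100.0 * differentiated / total_gfp    # share of donor cells differentiated
print(f"{differentiated:.2f}% of islet cells; {extent:.0f}% of donor GFP+ cells")
# -> 4.15% of islet cells; 88% of donor GFP+ cells, matching the quoted values
```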
Transplanted GFP+ donor BMSC give rise to β-cells in the injured pancreas, revealing evidence of β-cell neogenesis with Activin-a treatment

To investigate endogenous β-cell regeneration, we compared the total number of insulin+ cells and GFP-expressing insulin+ cells in the pancreas. In our experimental model for lineage tracing using GFP+BMSC, as shown in Fig. 4a, at day 30, GFP-negative Ins+ cells would denote endogenous β-cell regeneration, while dual-positive GFP+Ins+ cells would confirm trans-differentiation of transplanted bone marrow-derived cells. Immunohistochemistry in diabetic control mice did not display any GFP-expressing cells, but a reduced insulin-immunopositive region depicted the β-cells damaged by STZ treatment (Fig. 4b). Further, occasional scattered GFP+ cells were observed in the acinar region of untreated BMSC transplanted mice, but, being devoid of insulin co-expression, they reflected the presence of undifferentiated BMSCs within islets. Moreover, in Activin-a treated recipient mice, we could find a high ratio of GFP+ cells in acinar, ductal, and islet regions. These animals presented 8.7 ± 0.46% GFP+ cells, of which 6.4 ± 0.30% were GFP+ β-cells per section of pancreatic tissue (Fig. 4c). We recorded GFP-expressing cells infiltrated in large-sized islets co-expressing insulin, as well as in small clusters or β-cell aggregates (Fig. 4d, Suppl. Fig-6). Interestingly, all cells in these clusters were found to be GFP-positive along with insulin co-immunostaining, representing an index of β-cell neogenesis.

Fig. 3 Quantification of GFP+BMSC in recipient mice pancreas and liver tissues. FACS analysis dot plots representing the percentage population migrating to the a pancreas and b liver tissues in diabetic and donor BMSC recipient mice. Graphs present quantification of the mean frequency of GFP+ cells in both pancreas and liver tissues in all groups of animals. Data represent mean ± SEM with N = 3 mice per group. c Immunophenotyping of CD44- and GFP-expressing cells in FACS-sorted total pancreatic cell suspension. Graphs representing quantification of d total CD44+ cells and e the CD44+GFP+ dual population in harvested mice pancreas. f Graph showing quantification of the extent of endocrine differentiation in migratory donor BMSCs by reduced CD44 expression, calculated by subtracting the CD44+GFP+ dual population from the total GFP+ population. Data represent mean ± SEM with N = 3 mice per group. All statistical analyses were performed with GraphPad Prism software using two-way ANOVA and the Bonferroni test for p value calculations.

Fig. 4 In vivo lineage tracing of transplanted GFP+BMSC in recipient mice pancreas. a Schematic representation of the experimental model to lineage trace GFP+BMSC contributing to new ß-cell generation. b Representative image from Activin-a treated GFP+BMSC recipient mouse pancreas showing GFP-labeled ß-cells by co-immunostaining for GFP (green) with insulin (red). Nuclei were stained with DAPI (blue). Graphs display quantification of c total GFP+ cells and d GFP+ ß-cells within the islets of the regenerating pancreas. Data represent mean ± SEM with N = 3 mice per group. All statistical analyses were performed with GraphPad Prism software using two-way ANOVA and the Bonferroni test for p value calculations. e Representative confocal microscopic images of immunostaining for insulin (red) and GFP (green) representing infiltration of transplanted GFP+BMSC in mature islets and ductal regions. Small clusters of GFP+ ß-cells present evidence for de novo ß-cell formation from the transplanted BMSC. Nuclei were stained with DAPI (blue). f Proteomic characterization by western blotting and densitometric quantification of key pancreatic endocrine differentiation transcription factors from FACS-sorted islet cells of 3 pooled representative mice pancreata, depicting evidence of new ß-cell differentiation markers. Chemiluminescence signals were exposed for 1-2 min, and images were captured on the gel documentation system (GE Healthcare) and analyzed with ImageJ software. A single cropped area of key proteins from each condition is represented, while the graph represents densitometric quantification for each protein with standard deviation from 3 pooled mice cell extracts in duplicates.

Activin-a-mediated Neurogenin-3 re-activation suggests the mechanism of trans-differentiation into GFP+BMSC-derived β-cells

Using FACS-sorted single GFP+ β-cells from the pancreata of BMSC controls and Activin-a treated BMSC recipient mice, we investigated the mechanism of new β-cell formation by protein expression. Western blot analysis for key mesenchymal stem cell markers (CD90 and CD44) and β-cell differentiation markers (Nestin, Pdx1, and Neurog3) suggested neuroendocrine reprogramming in GFP+ cells with Activin-a treatment. FACS-sorted green cells demonstrated high expression of CD90 in Activin-a treated animals compared to STZ-treated diabetic controls. Correspondingly, CD44 remained fairly undetectable in the diabetic control and Activin-a treated groups vs. untreated BMSC recipients, suggesting lineage transformation of GFP+BMSC into endocrine cells with Activin-a treatment. Increased protein expression of Nestin, Pdx1, and Neurog3 in Activin-a treated BMSC recipients provides clear evidence for pancreatic endocrine cell reprogramming with sequential activation of β-cell transcription factors. Neurog3 reactivation in GFP+BMSC deciphers the mechanism of transdifferentiation into new β-cells (Fig. 4f, Suppl. Fig-7).

Discussion

Over the past two decades, several studies have presented evidence for generating β-cells from BMSC [34,35]. Despite the BMSC potential, the underlying mechanism that governs the critical signals for migration and homing of BMSC remains elusive, and the efficacy of their migration toward β-cell regeneration has remained questionable. Hess et al. reported a lowering of blood glucose levels with 0.5-2% GFP+ cells reaching the pancreas [4]. Subsequently, others reported no significant transdifferentiation of BMSC into insulin-producing cells in vivo [14]. The present study suggests that permanently GFP-expressing BMSCs can efficiently reverse chemically induced hypoinsulinemia and hyperglycemia. Using an endocrine cell-differentiating agent, Activin-a, we were now able to force prominent migration and colonization of GFP+BMSCs into the injured pancreas. We believe that pre-incubation of BMSCs with Activin-a and enhanced CXCR4 expression in infused BMSCs could potentially accelerate pancreatic migration and trigger endocrine differentiation for β-cell trans-differentiation. Our results establish significant migration of GFP+BMSC (~6%) into the diabetic pancreas with Activin-a treatment, compared to 0.5-1% homing in BMSC transplanted/diabetic controls. Wang et al. in 2006 reported that transplantation of GFP+BMSCs into neonatal mice displayed 40% cell migration into the exocrine compartment, while only a few cells contributed to the endocrine compartment [36]. We further redefine this with Activin-a, which improves the absolute homing of BMSC specifically into the endocrine (islet) fraction. Flow analysis of the GFP and CD44 dual markers (Fig. 3e) and insulin/GFP imaging (Fig. 4b, d) confirm this observation. Other reports raised the concern of BMSC contributing to the development of fibrosis [37][38][39]; however, we did not observe this.
We anticipate this could potentially be an outcome of crude bone marrow population infusion (including hematopoietic cells), whereas we used a more enriched and characterized BMSC population. The presence of GFP+ β-cells within islets and of small β-cell clusters in Activin-a treated mice confers endogenous pancreatic regeneration by BMSC transdifferentiation with daily Activin-a injections. We believe this is attributable to the initiation of key endocrine transcriptional reprogramming with Activin-a treatment. Protein expression profiling of FACS-sorted green cells displays concrete evidence for new β-cell generation and reveals a new mechanism of transdifferentiation by sequential activation of β-cell differentiation markers, precisely via neurogenin-3 re-activation in migrated GFP+BMSCs. Our method of endogenous pancreatic regeneration using BMSCs and differentiation growth factors like Activin-a could substantially influence a newer paradigm of cell therapy for diabetes in a wider diabetic population using GMP-grade autologous BMSC transplantation.

Conclusion

Our study establishes the potential of Activin-a to promote migration, homing, and β-cell differentiation of transplanted BMSCs. This novel pharmacological approach for stimulating direct migration and homing of therapeutic BMSCs reignites the scope for autologous BMSC transplantation therapy to treat diabetes.
2023-01-23T14:26:31.852Z
2020-07-30T00:00:00.000
{ "year": 2020, "sha1": "02ea3197f6087298425fa1605998b4d3edeb9864", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s13287-020-01843-z", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "02ea3197f6087298425fa1605998b4d3edeb9864", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
14384498
pes2o/s2orc
v3-fos-license
Landau Gauge QCD: Functional Methods versus Lattice Simulations

The infrared behaviour of QCD Green's functions in Landau gauge has been the focus of intense study. Different non-perturbative approaches lead to a prediction in line with the conditions for confinement in local quantum field theory as spelled out in the Kugo-Ojima criterion. Detailed comparisons with lattice studies have revealed small but significant differences, however. But aren't we comparing apples with oranges when contrasting lattice Landau gauge simulations with these continuum results? The answer is yes, and we need to change that. We therefore propose a reformulation of Landau gauge on the lattice which will allow us to perform gauge-fixed Monte-Carlo simulations matching the continuum methods of local field theory, which will thereby be elevated to a non-perturbative level at the same time.

Introduction

The Green's functions of QCD are the fundamental building blocks of hadron phenomenology [1]. Their infrared behaviour is also known to contain essential information about the realisation of confinement in the covariant formulation of QCD, in terms of local quark and gluon field systems. The Landau gauge Dyson-Schwinger equation (DSE) studies of Refs. [2,3] established that the gluon propagator alone does not provide long-range interactions of a strength sufficient to confine quarks. This dismissed a widespread conjecture from the 1970's going back to the work of Marciano, Pagels, Mandelstam and others. The idea was revisited that the infrared dominant correlations are instead mediated by the Faddeev-Popov ghosts of this formulation, whose propagator was found to be infrared enhanced. This infrared behaviour is now completely understood in terms of confinement in QCD [1,4,5]; it is a consequence of the celebrated Kugo-Ojima (KO) confinement criterion. This criterion is based on the realization of the unfixed global gauge symmetries of the covariant continuum formulation. In short, two conditions are required by the KO criterion to distinguish confinement from Coulomb and Higgs phases: (a) the massless single-particle singularity in the transverse gluon correlations of perturbation theory must be screened non-perturbatively, to avoid long-range fields and charged superselection sectors as in QED; and (b) the global gauge charges must remain well-defined and unbroken, to avoid the Higgs mechanism.

In Landau gauge, in which the (Euclidean) gluon and ghost propagators,

    D^{ab}_{μν}(p) = δ^{ab} ( δ_{μν} - p_μ p_ν / p² ) Z(p²)/p² ,  and  D^{ab}_G(p) = - δ^{ab} G(p²)/p² ,   (1)

are parametrised by the two invariant functions Z and G, respectively, this criterion requires

    (2a):  lim_{p²→0} Z(p²)/p² < ∞ ,  and  (2b):  lim_{p²→0} G(p²) = ∞ .

The translation of (b) into the infrared enhancement of the ghost propagator (2b) thereby rests on the ghost/anti-ghost symmetry of the Landau gauge or the symmetric Curci-Ferrari gauges. In particular, this equivalence does not hold in linear covariant gauges with non-zero gauge parameter, such as the Feynman gauge. As pointed out in [5], the infrared enhancement of the ghost propagator (2b) represents an additional boundary condition on DSE solutions, which then leads to the prediction of a conformal infrared behaviour for the gluonic correlations in Landau gauge QCD, consistent with the conditions for confinement in local quantum field theory. In fact, this behaviour is directly tied to the validity and applicability of the framework of local quantum field theory for non-Abelian gauge theories beyond perturbation theory.
The subsequent verification of this infrared behaviour with a variety of different functional methods in the continuum was a remarkable success. These methods, which all lead to the same prediction, include studies of the Dyson-Schwinger Equations (DSEs) [5], Stochastic Quantisation [6], and the Functional Renormalisation Group Equations (FRGEs) [7]. This prediction amounts to the infrared asymptotic forms

    Z(p²) ~ (p²/Λ²_QCD)^{2κ_Z} ,  G(p²) ~ (p²/Λ²_QCD)^{-κ_G}   (3)

for p² → 0, which are both determined by a unique critical infrared exponent,

    κ_Z = κ_G = κ ,   (4)

with 0.5 < κ < 1. Under a mild regularity assumption on the ghost-gluon vertex [5], the value of this exponent is furthermore obtained as [5,6]

    κ = (93 - √1201)/98 ≈ 0.595 .   (5)

The conformal nature of this infrared behaviour in the pure Yang-Mills sector of Landau gauge QCD is evident in the generalisation to arbitrary gluonic correlations [8]: a uniform infrared limit of one-particle irreducible vertex functions Γ^{m,n} with m external gluon legs and n pairs of ghost/anti-ghost legs of the form

    Γ^{m,n}(p²) ~ (p²)^{(n-m)κ}   (6)

when all p²_i ∝ p² → 0, i = 1, ..., 2n+m. In particular, the ghost-gluon vertex is then infrared finite (with n = m = 1), as it must be [9], and the non-perturbative running coupling introduced in [2,3] via the definition

    α_S(p²) = α_S(μ²) Z(p²; μ²) G²(p²; μ²)   (7)

approaches an infrared fixed-point, α_S → α_c for p² → 0. If the ghost-gluon vertex is regular at p² = 0, the value of this fixed-point is maximised and given by [5]

    α_c ≈ 8.915/N_c .   (8)

Comparing the infrared scaling behaviour of DSE and FRGE solutions of the form of Eqs. (3), it has in fact been shown that, in the presence of a single scale, the QCD scale Λ_QCD, the solution with the infrared behaviour (4) and (6), with a positive exponent κ, is unique [10]. Because of its uniqueness, it is nowadays called the scaling solution.

This uniqueness proof does not rule out, however, the possibility of a solution with an infrared finite gluon propagator, as arising from a transverse gluon mass M, which then leads to an essentially free ghost propagator, with the free massless-particle singularity at p² = 0, i.e.,

    Z(p²) ~ p²/M² ,  and  G(p²) ~ const.  for p² → 0 .   (9)

The constant contribution to the zero-momentum gluon propagator, D(0) = 3/(4M²), thereby necessarily leads to an infrared constant ghost renormalisation function G. This solution corresponds to κ_Z = 1/2 and κ_G = 0. It does not satisfy the scaling relations (4) or (6). This is because in this case the transverse gluons decouple for momenta p² ≪ M², below the independent second scale given by their mass M. It is thus not within the class of scaling solutions considered above, and it is termed the decoupling solution in contradistinction [11]. The interpretation of the renormalisation group invariant (7) as a running coupling does not make sense in the infrared in this case, in which there is no infrared fixed-point and no conformal infrared behaviour. Without infrared enhancement of the ghosts in Landau gauge, the global gauge charges of covariant gauge theory are spontaneously broken. Within the language of local quantum field theory, the decoupling solution can thus be realised if and only if it comes along with a Higgs mechanism and massive physical gauge bosons. The Schwinger mechanism can in fact be described in this way, and it can furthermore be shown that a non-vanishing gauge-boson mass, by whatever mechanism it is generated, necessarily implies the spontaneous breakdown of global symmetries [12].
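As a quick numerical cross-check (our illustration, not part of the original analysis), the exponent (5) and its compatibility with both parts of the criterion (2) can be verified in a few lines:

```python
from math import sqrt

# Eq. (5): critical infrared exponent of the scaling solution
kappa = (93 - sqrt(1201)) / 98
print(f"kappa = {kappa:.6f}")   # 0.595353..., inside the window 0.5 < kappa < 1

# Scaling forms (3): Z(p^2) ~ (p^2)^(2*kappa), G(p^2) ~ (p^2)^(-kappa).
# (2a): the gluon propagator Z(p^2)/p^2 ~ (p^2)^(2*kappa - 1) is screened,
# since the exponent 2*kappa - 1 is positive (no massless pole survives):
assert 2 * kappa - 1 > 0        # ~0.191

# (2b): the ghost dressing G(p^2) ~ (p^2)^(-kappa) is infrared enhanced:
assert kappa > 0
```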
Landau Gauge QCD in the Continuum and on the Lattice

Early lattice studies of the gluon and ghost propagators supported their predicted infrared behaviour qualitatively well. Because of the inevitable finite-volume effects, however, these results could have been consistent with both the scaling solution and the decoupling solution. Recently, the finite-volume effects have been analysed carefully in the Dyson-Schwinger equations to demonstrate how the scaling solution is approached in the infinite volume limit there [13]. Comparing these finite-volume DSE results with the latest SU(2) lattice data on impressively large lattices [14,15], corresponding to physical lengths of up to 20 fm in each direction, finite-volume effects appear to be ruled out as the dominant cause of the observed discrepancies with the scaling solution. The lattice results are much more consistent with the decoupling solution, which poses the obvious question whether there is something wrong with our general understanding of covariant gauge theory, or whether we are perhaps comparing apples with oranges when applying inferences drawn from the infrared behaviour of the lattice Landau gauge correlations to local quantum field theory.

The latter language is based on a cohomology construction of a physical Hilbert space over the indefinite metric spaces of covariant gauge theory from the representations of the Becchi-Rouet-Stora-Tyutin (BRST) symmetry. But do we have a non-perturbative definition of a BRST charge? The obstacle is the existence of the so-called Gribov copies, which satisfy the same gauge-fixing condition, i.e., the Lorenz condition in Landau gauge, but are related by gauge transformations and are thus physically equivalent. In fact, in the direct translation of BRST symmetry to the lattice, there is a perfect cancellation among these gauge copies, which gives rise to the famous Neuberger 0/0 problem. It asserts that the expectation value of any gauge invariant (and thus physical) observable in a lattice BRST formulation will always be of the indefinite form 0/0 [16], and it has therefore prevented such formulations for more than 20 years now.

In present lattice implementations of the Landau gauge this problem is avoided because the numerical procedures are based on minimisations of a gauge-fixing potential w.r.t. gauge transformations. Finding absolute minima is not feasible on large lattices, as this is a non-polynomially hard computational problem. One therefore settles for local minima, which in one way or another, depending on the algorithm, sample gauge copies of the first Gribov region, among which there is no cancellation. For the same reason, however, this is not a BRST formulation. The emergence of the decoupling solution can thus not be used to dismiss the KO criterion of covariant gauge theory in the continuum.

Strong Coupling Limit of Lattice Landau Gauge

From the finite-volume DSE solutions of [13] it follows that a wide separation of scales is necessary before one can even hope to observe the onset of an at least approximately conformal behaviour of the correlation functions in a finite volume of length L. What is needed is a reasonably large number of modes with momenta p sufficiently far below the QCD scale Λ_QCD, whose corresponding wavelengths are all at the same time much shorter than the finite size L,

    π/L ≪ p ≪ Λ_QCD .   (10)

It was estimated that this requires sizes L of about 15 fm, especially for a power law of the ghost propagator of the form in (3) to emerge in a momentum range satisfying (10).
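For orientation, the window (10) can be quantified with a one-line conversion (a rough estimate of ours, using ħc ≈ 0.1973 GeV·fm for the lowest nonzero momentum π/L):

```python
from math import pi

hbarc = 0.1973                   # GeV*fm conversion constant
for L in (5.0, 15.0, 40.0):      # box lengths in fm
    p_min = pi * hbarc / L       # lowest nonzero momentum pi/L, in GeV
    print(f"L = {L:4.0f} fm  ->  p_min ~ {p_min:.3f} GeV")
# L =  5 fm -> 0.124 GeV; L = 15 fm -> 0.041 GeV; L = 40 fm -> 0.015 GeV,
# to be compared with Lambda_QCD of a few hundred MeV at the upper end of (10).
```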
A reliable quantitative determination of the exponents and a verification of their scaling relation (4), on the other hand, might even require up to L = 40 fm [13]. An alternative to the brute-force method of using ever larger lattice sizes for the simulations might therefore be to ask what one observes when the formal limit Λ_QCD → ∞ is implemented by hand. This should then allow one to assess whether the predicted conformal behaviour can be seen for the larger lattice momenta p, after the upper bound in (10) has been removed, in a range where the dynamics due to the gauge action would otherwise dominate and cover it up completely. Therefore, the ghost and gluon propagators of pure SU(2) lattice Landau gauge were studied in the strong coupling limit β → 0 in [17,18]. In this limit, the gluon and ghost dressing functions tend towards the decoupling solution at small momenta and towards the scaling solution at large momenta (in units of the lattice spacing a), as seen in Figure 1. The transition from decoupling to scaling occurs at around a²p² ≈ 1, independent of the size of the lattice. The observed deviation from scaling at a²p² < 1 is thus not a finite-size effect. The high momentum branch can be used to attempt fits of κ_Z and κ_G in (3), and the data is consistent with the scaling relation (4). With some dependence on the model used to fit the data, good global fits are generally obtained for κ = 0.57(3), with very little dependence on the lattice size. For the scaling solution one would expect the running coupling defined by (7) to approach its constant fixed-point value in the strong-coupling limit, and this is indeed observed for the scaling branch [17]: the numerical data for the product (7) level off at α_c ≈ 4 for large a²p². As expected for an exponent κ slightly smaller than the value in (5), see [5], this is just below the upper bound given by (8), α_c ≈ 4.45 for SU(2). When comparing various definitions of gauge fields on the lattice, all equivalent in the continuum limit, one furthermore observes that neither the estimate of the critical exponent κ nor the corresponding value of α_c is sensitive to the definition used [17]. This is in contrast to the decoupling branch for a²p² < 1, which is very sensitive to that definition. Different definitions, at order a² and beyond, lead to different Jacobian factors. This is well known from lattice perturbation theory where, however, the lattice Slavnov-Taylor identities guarantee that the gluon remains massless at every order by cancellation of all quadratically divergent contributions to its self-energy. The strong-coupling limit, where the effective mass in (9) behaves as M² ∝ 1/a², therefore shows that such a contribution survives non-perturbatively in minimal lattice Landau gauge. This contribution furthermore depends on the measure for gauge fields, whose definition from minimal lattice Landau gauge is therefore ambiguous. One might still hope that this ambiguity will go away at non-zero β, in the scaling limit. While this is true at large momenta, it is not the case in the infrared, at least not for commonly used values of the lattice coupling such as β = 2.5 or β = 2.3 in SU(2), as demonstrated in [17]. Lattice BRST and the Neuberger 0/0 Problem It would obviously be desirable to have a BRST symmetry on the lattice which could then provide lattice Slavnov-Taylor identities beyond perturbation theory.
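The power-law fits of κ_Z and κ_G mentioned here amount to straight-line fits in log-log coordinates. A minimal sketch with synthetic stand-in data (the exponent and noise level are made up; the actual fits in [17,18] are model-dependent):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the scaling branch of the ghost dressing
# function at large lattice momenta: G ~ (a^2 p^2)^(-kappa) plus noise.
kappa_true = 0.595
a2p2 = np.logspace(0.1, 1.5, 30)                    # a^2 p^2 > 1
G = a2p2**(-kappa_true) * rng.lognormal(0.0, 0.02, a2p2.size)

# Extract the exponent from a straight line in log-log coordinates:
#   ln G = -kappa * ln(a^2 p^2) + const.
slope, intercept = np.polyfit(np.log(a2p2), np.log(G), 1)
print(f"fitted kappa = {-slope:.3f}  (input {kappa_true})")
```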
In principle, such a lattice BRST symmetry could be achieved by inserting the partition function of a topological model with BRST-exact action into the gauge invariant lattice measure. Because of its topological nature, this gauge-fixing partition function Z_GF will be independent of the gauge orbit and the gauge parameter. The problem is that in the standard formulation this partition function calculates the Euler characteristic χ of the lattice gauge group, which vanishes [19]. Neuberger's 0/0 problem of lattice BRST arises because we have then inserted zero instead of unity (according to the Faddeev-Popov prescription) into the measure of lattice gauge theory. On a finite lattice, such a topological model is equivalent to a problem of supersymmetric quantum mechanics with Witten index W = Z_GF. Unlike the case of primary interest in supersymmetric quantum mechanics, here we need a model with non-vanishing Witten index to avoid the Neuberger 0/0 problem. Then, however, just like the supersymmetry of the corresponding quantum mechanical model, such a lattice BRST symmetry cannot break. In Landau gauge, with gauge parameter ξ = 0, the Neuberger zero, Z_GF = 0, arises from the perfect cancellation of Gribov copies via the Poincaré-Hopf theorem. The gauge-fixing potential V_U[g] for a generic link configuration {U} thereby plays the role of a Morse potential for gauge transformations g, and the Gribov copies are its critical points (the global gauge transformations need to remain unfixed, so that there are strictly speaking only (#sites − 1) factors of χ(SU(N)) = 0 in (11)). The Morse inequalities then immediately imply that there are at least 2^{(N−1)(#sites−1)} such copies in SU(N) on the lattice, or 2^{#sites−1} in compact U(1), and equally many with either sign of the Faddeev-Popov determinant (i.e., that of the Hessian of V_U[g]). The topological origin of the zero, originally observed by Neuberger in a certain parameter limit due to uncompensated Grassmann ghost integrations in standard Faddeev-Popov theory [16], becomes particularly evident in the ghost/anti-ghost symmetric Curci-Ferrari gauge with its quartic ghost self-interactions [20]. Due to its Riemannian geometry with symmetric connection and curvature tensor R_{ijkl} = (1/4) f^a_{ij} f^a_{kl} for SU(N), in this gauge the same parameter limit leads to computing the zero in (11) from a product of independent Gauss-Bonnet integral expressions, one for each site of the lattice. This corresponds to the Gauss-Bonnet limit of the equivalent supersymmetric quantum mechanics model in which only constant paths contribute [21]. The indeterminate form of physical observables as a consequence of (12) is regulated by a Curci-Ferrari mass term. While such a mass m decontracts the double BRST/anti-BRST algebra, which is well known to result in a loss of unitarity, observables can then be meaningfully defined in the limit m → 0 via l'Hospital's rule [20]. Lattice Landau Gauge from Stereographic Projection The 0/0 problem due to the vanishing Euler characteristic of SU(N) is avoided when fixing the gauge only up to the maximal Abelian subgroup U(1)^{N−1}, because the Euler characteristic of the coset manifold is non-zero. The corresponding lattice BRST has been explicitly constructed for SU(2) [19], where the coset manifold is the 2-sphere and χ(SU(2)/U(1)) = χ(S²) = 2. This indicates that the Neuberger problem might be solved when that of compact U(1) is, where the same cancellation of lattice Gribov copies arises because χ(S¹) = 0.
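The Morse-inequality bound quoted above grows so quickly that it also illustrates why only sampling, rather than enumerating, Gribov copies is an option on realistic lattices. A trivial arithmetic sketch:

```python
def gribov_copy_lower_bound(N: int, n_sites: int) -> int:
    """Morse-inequality lower bound 2^((N-1)(#sites-1)) on the number
    of Gribov copies for SU(N) lattice Landau gauge."""
    return 2 ** ((N - 1) * (n_sites - 1))

for L in (4, 8, 16):
    n_sites = L**4                                  # 4D hypercubic lattice
    bound = gribov_copy_lower_bound(2, n_sites)     # SU(2)
    print(f"{L}^4 lattice: at least 2^{n_sites - 1} "
          f"~ 10^{len(str(bound)) - 1} copies")
```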
A surprisingly simple solution to the compact U(1) problem is possible, however, by stereographically projecting the circle, S¹ → R, which can be achieved by a simple modification of the minimising potential [22]. The resulting potential, in contrast to the standard one, leads to a positive definite Faddeev-Popov operator for compact U(1), where there is thus no cancellation of Gribov copies, but Z^{U(1)}_{GF} = N_GC for N_GC Gribov copies. As compared to the standard lattice Landau gauge for compact U(1), their number is furthermore exponentially reduced. This is easily verified explicitly in low-dimensional models. While N_GC grows exponentially with the number of sites in the standard case, as expected, the stereographically projected version has only N_GC = N_x copies on a periodic chain of length N_x, and ln N_GC ∼ N_t ln N_x on a 2D lattice of size N_t × N_x in Coulomb gauge, for example; in both cases their number is verified to be independent of the gauge orbit. The general proof of Z^{U(1)}_{GF} = N_GC with stereographic projection, which avoids the Neuberger zero in compact U(1) [22], follows from a simple example of a Nicolai map [21]. Applying the same techniques to the maximal Abelian subgroup U(1)^{N−1}, the generalisation to SU(N) lattice gauge theories is possible when the odd-dimensional spheres S^{2n+1}, n = 1, . . . , N−1, of its parameter space are stereographically projected to R × RP(2n). In the absence of the cancellation of the lattice-artifact Gribov copies along the U(1) circles, the remaining cancellations between copies of either sign in SU(N), which will persist in the continuum limit, are then necessarily incomplete, however, because χ(RP(2n)) = 1. For SU(2) this program is straightforward. The standard and stereographically projected gauge fields on the lattice, A and Ã, are defined from the link variables, and the gauge-fixing conditions F = 0 and F̃ = 0 are their respective lattice divergences; in the language of lattice cohomology, F = δA and F̃ = δÃ. A particular advantage of the non-compact à is that it allows one to resolve the modified lattice Landau gauge condition F̃ = 0 by Hodge decomposition. This provides a framework for gauge-fixed Monte-Carlo simulations which is currently being developed for the particularly simple case of SU(2) in 2 dimensions. In the low-dimensional models mentioned above it can furthermore be verified explicitly that the corresponding topological gauge-fixing partition function is indeed given by the expected result from χ(RP(2)) = 1. The proof of this will be given elsewhere. Conclusions and Outlook Comparisons of the infrared behaviour of QCD Green's functions as obtained from lattice Landau gauge implementations based on minimisations of a gauge-fixing potential and from continuum studies based on BRST symmetry have to be taken with a grain of salt. Evidence of the asymptotic conformal behaviour predicted by the latter is seen in the strong coupling limit of lattice Landau gauge, where such a behaviour can be observed at large lattice momenta a²p² ≫ 1. There the strong coupling data is consistent with the predicted critical exponent and coupling from the functional approaches. The deviations from scaling at a²p² < 1 are not finite-volume effects, but are discretisation dependent and hint at a breakdown of BRST symmetry arguments beyond perturbation theory in this approach. Non-perturbative lattice BRST has been plagued by the Neuberger 0/0 problem, but its improved topological understanding provides ways to overcome this problem.
The most promising one at this point rests on stereographic projection to define gauge fields on the lattice, together with a modified lattice Landau gauge. This new definition has the appealing feature that it will allow gauge-fixed Monte-Carlo simulations in close analogy to the continuum BRST methods, which it will thereby elevate to a non-perturbative level.
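To make the projection itself concrete for compact U(1): the definitions below (a·A = sin φ for the standard compact field, a·Ã = 2 tan(φ/2) for the projected one) are assumed conventions for illustration only, not necessarily the normalisation used in [22]:

```python
import numpy as np

# Link angle phi in (-pi, pi] for a compact U(1) link, lattice spacing a.
#   standard:       a*A      = sin(phi)      -> bounded (compact)
#   stereographic:  a*A_proj = 2*tan(phi/2)  -> unbounded (non-compact)
phi = np.linspace(-3.0, 3.0, 7)
A_std = np.sin(phi)
A_proj = 2.0 * np.tan(phi / 2.0)

for p, s, t in zip(phi, A_std, A_proj):
    print(f"phi = {p:+.2f}:  a*A = {s:+.3f},  a*A_proj = {t:+.3f}")
# As phi -> +/-pi the projected field diverges, covering all of R:
# this is the stereographic projection S^1 -> R referred to in the text.
```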
2008-12-03T05:41:00.000Z
2008-12-03T00:00:00.000
{ "year": 2008, "sha1": "b00925c387bec46efa4af643f21361a283a2d007", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "b00925c387bec46efa4af643f21361a283a2d007", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
244068388
pes2o/s2orc
v3-fos-license
Gastric pneumatosis and concurrent aeroportia due to gastric outlet obstruction: A case report Introduction and importance Gastric pneumatosis with concurrent hepatic portal vein gas is an extremely rare condition in the adult population. It can be idiopathic or associated with well-known etiologies. Gastric outlet obstruction can progressively inflate the stomach and cause pneumatosis. Depending on abdominal signs and the presence of an acute abdomen, management varies from conservative measures to emergent surgical intervention. Case presentation We introduce an adult patient who presented to our hospital with weakness and dyspnea. After initial measures, we unexpectedly found intraabdominal free gas, concurrent gastric pneumatosis, and aeroportia. Due to the absence of positive abdominal signs, the patient was treated successfully without any surgical or endoscopic interventions. Discussion Gastric outlet obstruction is a well-known cause of gastric pneumatosis. Progressive dilation of the stomach due to pyloric stenosis is well described in both infant and adult populations. Conclusion In stable patients, gastric drainage and correction of electrolyte disturbance are the only required treatment. However, endoscopic and surgical interventions should be considered in unstable patients or those developing an acute abdomen. Introduction Gastric pneumatosis, also known as gastric emphysema, is defined as the presence of air inside the stomach wall. A variety of known etiologies have been described in the literature, including gastric outlet obstruction, trauma, severe vomiting, and ischemia [1]. This entity usually does not require surgical treatment because patients are hemodynamically stable and do not show any signs of acute abdomen [2]. It may be accompanied by some other nonspecific disorders, such as the presence of air inside the biliary tree [3]. Hereby, we introduce an adult patient with repetitive vomiting and severe electrolyte disturbance who developed severe gastric pneumatosis associated with pneumobilia. We reported the case in line with the Surgical Case Report (SCARE) guidelines [4]. Case presentation A 41-year-old man presented to our emergency department with weakness, fatigue, dyspnea, and a history of repetitive vomiting in the past few days. He also mentioned a history of weight loss in recent months without any other alarming symptoms. The patient was a heavy smoker and addicted to opioids, and he had no positive findings in his family history or physical examination. Due to his abnormal electrocardiogram (long QT interval), he was admitted and initial blood tests were requested. Laboratory data showed severe electrolyte imbalances, including hyponatremia, hypokalemia, and compensated metabolic alkalosis (Table 1). Although the chest x-ray was normal (Fig. 1), we performed a low-dose lung computed tomography (CT) scan due to the COVID-19 outbreak in the country. The results showed that both lungs were uninvolved, but we found some gas in the abdominal cavity. Since the patient had no abdominal signs on physical examination, we decided to perform an abdominopelvic CT with IV and oral contrast (Figs. 2 and 3). Unexpectedly, we found a hugely distended stomach that extended to the level of the umbilicus. Also, there was gas inside the stomach wall, the biliary tree, and even the abdominal cavity. According to these findings, a nasogastric tube was inserted immediately and about 2 L of gastric juice was drained. Also, the electrolyte disturbance was corrected properly.
Due to the absence of abdominal tenderness, we managed the patient with observation alone. Upper gastrointestinal endoscopy was performed during the first days of admission, and the diagnosis was gastric outlet obstruction due to inflammation; biopsies were taken. Conservative management with a high dose of intravenous pantoprazole was performed. The patient could eat orally on the third day and was discharged with appropriate oral medications. Discussion Gastric pneumatosis is a rare clinical entity defined as the presence of air inside the stomach wall. It may be associated with portal venous gas or pneumoperitoneum [5]. Although it is well described in infants in the context of hypertrophic pyloric stenosis [6], there are fewer reports of this condition in the adult population. Diagnosis is usually made by CT scan and the presence of air in the gastric wall. Because of its silent course and the absence of peritoneal signs, it is usually treated conservatively, and surgical exploration is rarely needed. There are some major well-known causes of this disorder, such as gastric outlet obstruction, trauma, or ischemia. Gastric outlet obstruction can be secondary to other conditions such as malignancy, pyloric stenosis, volvulus, and duodenal stenosis [7]. The pathophysiology of this disease is not known precisely; however, excessive dilation of the stomach seems to allow air to enter its wall. It can rupture into the peritoneal cavity and cause peritonitis, or it can penetrate microscopically without the development of an acute abdomen. Also, it can move into the biliary tree via different routes [8]. Noninvasive management is almost always recommended [2,3,5]; however, endoscopic treatment has also been well described by some authors [9]. According to a comparative review conducted in 2018, the short-term prognosis of conservative management seems to be good, and further imaging and endoscopic follow-ups are recommended [1]. Conclusion Gastric emphysema with concurrent hepatic portal vein gas seems to be a benign intra-abdominal condition. In hemodynamically stable patients, it rarely requires surgical treatment and can be managed conservatively. Also, if an underlying condition is recognized, it is better to treat that entity. If gastric outlet obstruction is the major cause of gastric emphysema, endoscopic follow-ups should be considered. Consent Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of the journal on request. Provenance and peer review Not commissioned, externally peer-reviewed. Ethical approval This article does not contain any studies with human participants or animals performed by any of the authors. Funding The authors did not receive any financial support for this report. Research registration number Not applicable. CRediT authorship contribution statement ME: Study design. SR: Case presentation. NG: Data gathering and writing the manuscript.
2021-11-13T16:02:39.653Z
2021-11-01T00:00:00.000
{ "year": 2021, "sha1": "95446f66e0a883ca8fd4a934e8b86d25b2bc5c57", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ijscr.2021.106584", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0dcccaaa20fb3c3d1cc9d5ea3e07be7a4dd6df20", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
208058325
pes2o/s2orc
v3-fos-license
Utility of Inflammatory Markers in Predicting Hepatocellular Carcinoma Survival after Liver Transplantation Inflammatory markers have been studied in cancers and chronic states of inflammation. They are thought to correlate with tumor pathology through disruption of normal homeostasis. Markers such as the neutrophil to lymphocyte ratio (NLR), among others, have shown promise as prognostic tools in various cancers. In this study, we evaluate complete blood count-based inflammatory markers in hepatocellular carcinoma (HCC) to predict overall and recurrence-free survival of patients after liver transplant. Between 2001 and 2017, all HCC-indicated liver transplants were retrospectively reviewed. Inclusion criteria included the presence of complete blood cell counts with differential within three months prior to transplantation. Exclusion criteria included retransplantation and inadequate posttransplant followup. A total of 160 patients with HCC were included in the study. Of those, 74.4% had hepatitis C virus as the underlying cause of HCC. Calculated Model for End-stage Liver Disease (MELD) scores were statistically worse in patients with elevated NLR (≥5), derived NLR (≥3), and low lymphocyte to monocyte ratio (LMR) (<3.45), whereas an elevated platelet to lymphocyte ratio (PLR) (≥150) did not correlate with MELD. Of the tumor characteristics, low LMR was associated with tumor presence and microvascular invasion on explant. Though overall survival trended towards better outcomes with low NLR and dNLR and high LMR, these did not reach statistical significance. High LMR also trended towards better recurrence-free survival without statistical significance. Low PLR was associated with statistically significant overall and recurrence-free survival benefits. In conclusion, while prior studies in HCC have identified NLR as a surrogate for tumor burden and survival, in this study we highlight that PLR is a good surrogate of mortality and recurrence-free survival in HCC transplant patients. Further, future study of PLR, NLR, and LMR in larger HCC populations before and after interventions may help clarify their clinical utility as simple and noninvasive prognostic markers. Introduction Hepatocellular carcinoma (HCC) is the fourth leading cause of cancer mortality and the sixth most common cancer worldwide [1]. According to the Scientific Registry of Transplant Recipients, there is steady growth in liver transplant (LT) numbers, with an annual 3% increase as of 2017. Among those on the wait list for LT, HCC is on the rise as an indication and accounts for nearly 10% of transplants in the same report [2]. HCC recurrence risk is perpetuated by the strong association with cirrhosis and the tumor's highly vascular nature. This limits the long-term outcomes of localized treatment options such as resection and ablative therapy, and thus LT remains the cornerstone curative treatment in the management of HCC [3,4]. Despite the growth in LT numbers, donor supply is a restrictive factor, and there is a continued need to identify the HCC patients who derive the most benefit from LT. Transplanted patients who suffer from HCC recurrence have a median survival of roughly one year after recurrence. One approach to mitigate this risk was the development of the Milan criteria, with which the risk of recurrence after LT is estimated to be around 10-20% [3,5-8]. In recent years, the use of the Milan criteria has been shown to be unreliable in predicting recurrence after LT [9,10].
In contrast, tumor size, response to local ablative treatment, alpha-fetoprotein (AFP), C-reactive protein, and microvascular invasion may better correlate with recurrence [7,11-13]. Nevertheless, most of these factors still fall short due to their collection in the posttransplant period and the need for explant pathology. Multiple studies have attempted to identify pretransplant predictors of HCC recurrence to improve patient selection and transplant outcomes. The development of cancer leads to neoangiogenesis and sets the stage for a chronic inflammatory response that disrupts the normal immunological pathway [14]. Involvement of leukocytes and platelets, along with recruitment of interleukins and growth factors, mediates tumor growth [9,13]. This disruption of normal homeostasis in these pathways can be detected with simple, easy to measure markers that may serve as surrogates of cancer aggressiveness and survival. The use of inflammatory marker ratios obtained from complete blood count (CBC) testing has been applied in different cancers and chronic states of inflammation [15-17]. We and others have previously published that the neutrophil to lymphocyte ratio (NLR) can be utilized to predict HCC recurrence [18]. In this study, we assess the ability of NLR as well as other CBC-derived inflammatory markers to predict overall and recurrence-free survival in all comers with HCC undergoing liver transplantation over a 17-year period. Materials and Methods Study Population. The liver transplant registry at the University of Florida in Gainesville, Florida, was used to identify adult patients who underwent LT with an indication of HCC during the period of 2001-2017. The study was approved by the hospital institutional review board. In a retrospective review of the institute's electronic medical records, patient locoregional therapy prior to transplant, demographics at the time of transplant, tumor pathology, and posttransplant survival were obtained. Inclusion criteria included the presence of a CBC with differential within three months prior to transplantation and a tumor burden within the Milan criteria at the time of transplant. Exclusion criteria included retransplantation, subsequent diagnoses of non-HCC malignancy (primarily cholangiocarcinoma) on explant pathology, and inadequate posttransplant followup. Inflammatory Surrogate Criteria. Routine CBC prior to transplant was used to obtain the white cell count differential and platelets. The relative increases in neutrophils, monocytes, and platelets, along with the decrease in lymphocytes in the inflammatory beds, were used as surrogate markers for inflammation and were calculated as the neutrophil to lymphocyte ratio (NLR), the derived neutrophil to lymphocyte ratio (dNLR) utilizing the absolute neutrophil count for the derivation, the lymphocyte to monocyte ratio (LMR), and the platelet to lymphocyte ratio (PLR). Each surrogate marker was divided into two groups for comparison. The cutoff values were based on prior publications, including NLR of 5 [18], dNLR of 3 [15], LMR of 3.45 [19], and PLR of 150 [20]. Tumor Characteristics, Followup after Transplant, Tumor Recurrence, and Survival. The AFP tumor marker for all patients was collected prior to transplant. These data were included if they were obtained within 6 months of the transplant date. Pretransplant treatments including locoregional therapy and/or resection for all patients were reviewed.
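The four ratios are straightforward to compute from a single CBC with differential. A minimal sketch follows; the dNLR formula used (ANC divided by the non-neutrophil white cells, WBC − ANC) is the commonly cited convention rather than a formula stated in this paper, and the example counts are made up:

```python
from dataclasses import dataclass

@dataclass
class CBC:
    wbc: float          # total white cells, 10^3/uL
    neutrophils: float  # absolute neutrophil count, 10^3/uL
    lymphocytes: float  # absolute lymphocyte count, 10^3/uL
    monocytes: float    # absolute monocyte count, 10^3/uL
    platelets: float    # 10^3/uL

def inflammatory_ratios(c: CBC) -> dict:
    return {
        "NLR": c.neutrophils / c.lymphocytes,
        # dNLR: ANC over all non-neutrophil leukocytes (common convention)
        "dNLR": c.neutrophils / (c.wbc - c.neutrophils),
        "LMR": c.lymphocytes / c.monocytes,
        "PLR": c.platelets / c.lymphocytes,
    }

# Dichotomize at the cutoffs cited in the text.
CUTOFFS = {"NLR": 5.0, "dNLR": 3.0, "LMR": 3.45, "PLR": 150.0}

cbc = CBC(wbc=7.2, neutrophils=4.8, lymphocytes=1.1, monocytes=0.6,
          platelets=180)
for name, value in inflammatory_ratios(cbc).items():
    group = "high" if value >= CUTOFFS[name] else "low"
    print(f"{name} = {value:6.2f} ({group}, cutoff {CUTOFFS[name]})")
```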
As the study period spanned across different MELD scoring patterns, to unify the data we collected the set of creatinine, total bilirubin, INR, and sodium values for each patient to calculate MELD scores at transplant. Following transplant, all explanted livers were evaluated and discussed in a multidisciplinary board review including hepatologists and a pathologist. These reviews were studied, and data including evidence of tumor cells on explant, histopathological grade and stage, and microvascular invasion were collected. If no tumor was found due to ablative treatment/necrosis, this was marked as no tumor on explant. If histopathological grade was not assessed due to lack of sufficient tissue pathology, we marked those grades as no histopathology on explant despite evidence of tumor cells. This explains the discrepancy between evidence of tumor on explant and the total number of patients with evidence of histological grade. After transplant, patients were followed in clinic and monitored by a hepatologist at least every three months in the first year, followed by every six months for the next two years and yearly thereafter. These visits allowed detection of biochemical and clinical changes in patient status, such as evidence of LFT abnormalities. The HCC recurrence screening protocol included AFP measurement and imaging every six months for the first three years after transplant. Patients were followed for five years after transplant or until end of study time or death, whichever occurred first. HCC versus Non-HCC Cirrhosis and Relation to Inflammatory Surrogate Markers. In order to validate the correlation of inflammatory surrogates with HCC versus underlying cirrhosis, we performed a subset analysis comparing the role of these surrogates in the survival of non-HCC transplants by using 1:1 propensity matching to our HCC patients based on age and gender. End Points and Statistical Analysis. Primary outcomes of the study were the effect of inflammatory surrogates on five-year overall and recurrence-free survival. The secondary endpoint was correlation of the biomarkers with established prognostication factors such as AFP, MELD scores, and tumor pathology. SPSS version 25.0 was used for statistical analysis. The χ² test was used to compare categorical variables and the t-test was used to compare continuous variables. Overall survival (OS) was defined from time of transplant to death or end of study. Recurrence was diagnosed based on imaging or biopsy-proven recurrence of cancer. Recurrence-free survival (RFS) was defined from time of transplant to recurrence, or censored for recurrence at end of study or death. End of study was defined as 5 years from transplant. Univariate and multivariate analyses to estimate hazard ratios were calculated using Cox regression analysis. Factors were included in the multivariate analysis if the P value was <0.1 in the univariate analysis. Kaplan-Meier analysis was used to estimate overall and recurrence-free survival. Factors significantly correlating with poor OS (Table 3) included AFP >300 and microvascular invasion on tumor explant. Of the inflammatory markers, higher NLR, dNLR, and PLR and lower LMR showed worse OS and higher hazard ratios, but of these, only PLR was statistically significant. Multivariate analysis of NLR, PLR, AFP, and microvascular invasion showed elevated hazard ratios for all included criteria, with P values of 0.9, 0.01, 0.33, and 0.09, respectively. Kaplan-Meier 5-year OS estimates stratified by inflammatory surrogates are shown in Figure 1.
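One way this survival pipeline could be reproduced outside SPSS is with the Python lifelines package; the sketch below uses a made-up dataframe with hypothetical column names purely to show the shape of the analysis:

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(1)
n = 160
# Hypothetical per-patient table; values and column names are illustrative.
df = pd.DataFrame({
    "years": rng.exponential(4.0, n).clip(0.05, 5.0),  # follow-up, 5-y cap
    "died": rng.integers(0, 2, n),
    "plr_high": rng.integers(0, 2, n),   # PLR >= 150
    "afp_high": rng.integers(0, 2, n),   # AFP > 300
})

# Kaplan-Meier curves stratified by PLR group
kmf = KaplanMeierFitter()
for grp, sub in df.groupby("plr_high"):
    kmf.fit(sub["years"], sub["died"],
            label=f"PLR {'>=150' if grp else '<150'}")
    print(kmf.median_survival_time_)

# Cox proportional hazards with both covariates
cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="died")
cph.print_summary()   # hazard ratios and P values per covariate
```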
Dispersion of inflammatory surrogate markers and survival is shown in Figure 2. Recurrence and Recurrence-Free Survival. Twelve patients (7.5%) had recurrence by the end of the study period. All recurrences occurred within the first two years after LT. Median recurrence-free time was 4.87 years and the 5-year RFS rate was 91.7%. Seven of the recurrences occurred as extrahepatic metastases and the rest recurred in the transplanted liver. Univariate analyses of the effects of patient and tumor characteristics on RFS are detailed in Table 3. Factors significantly correlating with poor RFS included race other than white, AFP >300, and microvascular invasion on tumor explant. Of the inflammatory markers, higher NLR, dNLR, and PLR and lower LMR showed worse RFS and higher hazard ratios, but of these, only PLR was statistically significant. Multivariate analysis of PLR, race, AFP, and microvascular invasion showed elevated hazard ratios for all included criteria, with P values of 0.001, 0.005, 0.15, and 0.07, respectively. Kaplan-Meier 5-year RFS estimates stratified by inflammatory surrogates are shown in Figure 3. Dispersion of inflammatory surrogate markers and recurrence is shown in Figure 4. Discussion HCC is a highly angiogenic tumor that arises in the setting of chronic inflammation and cirrhosis. The role of inflammation in cancer development has been under research for decades, and recent focus has been on attempts to utilize surrogate markers of inflammation to predict tumor virulence and survival. The NLR and other peripheral marker ratios calculated from the white cell count differential could serve as systemic inflammatory surrogates for tumor biology, aggressiveness, and risk of adverse outcomes. In this study, we examine long-term outcomes of patients with HCC undergoing liver transplant regardless of etiology and their relationship to various inflammatory surrogates derived from ratios of the differential white blood cell count. While we note that the NLR and dNLR are not reliable predictors of long-term HCC outcome (OS and RFS) at 5 years, we did find a correlation between LMR and microvascular invasion. Further, our study shows PLR to be an independent predictor of long-term survival in patients receiving a liver transplant for HCC. Neutrophils, along with monocytes and platelets, play an important role in inflammatory responses to injury, infection, or tumor. Their activation and migration to damaged tissue lead to tissue growth and angiogenesis as the immune system attempts to repair the wound. Lymphocytes act as regulatory cells in inflammatory states, activating prohealing cytokines. An imbalance between these cells, specifically suppression of lymphocytes and increased activation of platelets, neutrophils, and monocytes, can promote ongoing injury and tumor growth [14,21]. In a number of studies, the NLR has been shown to correlate with survival in solid tumors [17,18]. Our group and others have previously shown elevated NLR to correlate with OS and RFS in HCC after LT [18,22,23]. In instances when the NLR cannot be derived due to a missing lymphocyte count, calculating the dNLR may be a useful alternative [24]. In our study, patients with both NLR ≥5 and dNLR ≥3 showed a higher rate of microvascular invasion and worse long-term outcomes; however, these differences were not statistically significant (5-year OS: 60.5% vs 77.2%; RFS: 88% vs 92.8%). Other studies, such as that of Parisi et al., have also noted a lack of significant correlation between NLR and survival [10].
The lack of standardized timing (up to three months prior to LT) of when the CBC was obtained may contribute to the inconsistent observations. In addition, the severity of underlying cirrhosis and liver function compromised by HCC burden could be another factor explaining the elevations in NLR and dNLR in our population [25]. The finding supporting this explanation is the correlation between the patient's MELD score and the NLR/dNLR, with a statistical trend towards worse OS (P = 0.09 and 0.12). In our subgroup analysis, non-HCC patients with underlying cirrhosis did not show a correlation between NLR/dNLR and survival. Further studies including MELD scores for all transplanted patients, as well as a larger study size, may help better delineate these correlations. The LMR is another useful marker that has been investigated as an indicator of survival in solid tumors [19,26,27]. While there is ambiguity regarding the molecular processes behind the impact of LMR, low lymphocyte and high monocyte counts are implicated in cytokine production aiding tumor progression [28]. In addition, elevated monocyte counts are correlated with microvascular invasion and poorer prognosis of HCC [29]. These findings are supported by our results showing that patients with higher LMR (≥3.45) had significantly lower rates of tumor present in the explant with less microvascular invasion. Previous studies show a correlation between high LMR and improved OS and RFS in HCC after LT [26,30,31]. While we observed improved 5-year OS (78.8% vs 68.9%) and RFS (96.4% vs 88.7%) with higher LMR in our population, these findings did not reach significance. The severity of underlying liver disease likely contributes to this, as we noted in our subgroup analysis of non-HCC patients. For example, Raffetti et al. note that CBC-derived markers are not predictive of cancer development in HIV patients who are in a chronic inflammatory state [32]. The smaller sample size and the dynamic nature of CBC values preceding LT may explain some of the other differences in reaching statistical significance. Platelet activation releases cytokines that aid tumor angiogenesis. Platelets combined with lymphocytes as the PLR have been studied as an additional prognostic factor for survival in cancer [20,33,34]. Similar to prior publications, we found PLR ≥150 to be a strong predictor of worse 5-year OS (40.2% vs 79.4%) and RFS (70.2% vs 95.9%) in HCC patients, with hazard ratios of 3.18 for OS and 7.95 for RFS [22,35,36]. In addition, elevated PLR correlated closely with AFP >300 (P = 0.07) and tumor presence on explant (P = 0.08), which are well-established predictors of survival [7,11]. On multivariate analysis, PLR continues to show a significant correlation with OS and RFS. Interestingly, in contrast to the other CBC-derived inflammatory markers, PLR was not associated with the underlying MELD score. Though the mechanism is unclear, these findings suggest PLR could better reflect cancer-specific survival in tumors with a background of underlying inflammation such as HCC and cirrhosis. One remarkable incidental finding was evidence of worse RFS in patients of non-white races. This may correlate with patients' health literacy as well as financial means to follow up in clinic, and it warrants further investigation. In this study, most patients (130) underwent at least one form of locoregional therapy prior to transplant. Though this may be considered a confounding factor, we suggest otherwise.
Given the waitlist for transplant, different forms of ablative therapy are standard practice as bridging treatment and are in line with current guidelines [3,4]. If we were to exclude these patients, the study results would not reflect real-life conditions. Notable limitations of our research are the retrospective nature of the study and that it reflects a single-center experience. In addition, choosing three months as the window for the CBC may have resulted in some variability in results; however, a narrower time frame would have led to an underpowered analysis. A growing number of cancer studies show the importance of systemic inflammatory surrogate markers in the prognosis of patients. Our study is an important addition to the growing evidence that such markers may serve to prognosticate and predict the outcome of HCC patients after LT. Conclusion While prior studies in HCC have identified NLR as a tumor surrogate, in this study we highlight that PLR is a good surrogate of mortality and recurrence-free survival in HCC LT patients. This work supports further study of PLR in combination with other pretransplant markers, such as AFP and imaging, to determine if its addition into a model better captures transplant eligibility and benefits. Further, future study of PLR, NLR, and LMR in larger HCC populations before and after interventions may help clarify their clinical utility as simple and noninvasive prognostic markers. Data Availability The data used to support the findings of this study are available from the corresponding author upon request.
2019-10-17T09:05:44.350Z
2019-10-14T00:00:00.000
{ "year": 2019, "sha1": "5565ff5b35376141397e75096ebb9e1c098f05e9", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/bmri/2019/7284040.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c8c4c5186abde9fe1a0585be1c2a349b88a24895", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
16195198
pes2o/s2orc
v3-fos-license
Use of a multilocus variable-number tandem repeat analysis method for molecular subtyping and phylogenetic analysis of Neisseria meningitidis isolates Background The multilocus variable-number tandem repeat (VNTR) analysis (MLVA) technique has been developed for fine typing of many bacterial species. The genomic sequences of Neisseria meningitidis strains Z2491, MC58 and FAM18 are available for searching potential VNTR loci with computer software. In this study, we developed and evaluated a MLVA method for molecular subtyping and phylogenetic analysis of N. meningitidis strains. Results A total of 12 VNTR loci were identified for subtyping and phylogenetic analysis of 100 N. meningitidis isolates, which had previously been characterized by pulsed-field gel electrophoresis (PFGE) and multilocus sequence typing. The number of alleles ranges from 3 to 40 for the 12 VNTR loci; theoretically, these numbers of alleles can generate more than 5 × 10^11 MLVA types. In total, 93 MLVA types were identified in the 100 isolates, indicating that MLVA is powerful in discriminating N. meningitidis strains. In phylogenetic analysis with the minimal spanning tree method, clonal relationships established with MLVA types agreed well with those built with ST types. Conclusion Our study indicates that the MLVA method has a higher degree of resolution than PFGE in discriminating N. meningitidis isolates and may be a useful tool for phylogenetic studies of strains evolving over different time scales. Background Neisseria meningitidis is one of the major causative agents of bacterial meningitis and septicemia in children and young adults [1]. Periodically, it causes large epidemics in Africa, especially in the sub-Saharan meningitis belt, and in Asia [1]; however, it is still a serious problem in many industrialized countries [2,3]. Occasionally, a meningococcal pandemic occurs after large population movements, such as pilgrimages [4,5]. Epidemiological studies of N. meningitidis, using various subtyping methods, allow the identification of a disease outbreak and investigation of the disseminating meningococcal strains. With the advent of molecular biology, a number of molecular methods have been developed for epidemiological studies of N. meningitidis. Among these methods, pulsed-field gel electrophoresis (PFGE) and multilocus sequence typing (MLST) are the most frequently used subtyping techniques [6,7]. PFGE usually exhibits high discrimination for bacterial isolates, but it generates fingerprint image data that make comparisons between laboratories difficult. In contrast, MLST is based on sequence data from seven conserved housekeeping genes; sequences that differ at even a single nucleotide are assigned to different alleles. The combination of alleles at the seven housekeeping genes is designated the sequence type (ST) of the isolate; numerous STs can be obtained. A Neisseria MLST database has been established that allows STs to be compared electronically via the Internet. STs are grouped into clonal complexes by their similarity to a central allelic profile (genotype). These central genotypes are identified by a number of heuristic means, including BURST and split decomposition, along with feedback from public health laboratories and epidemiologists. Once a central genotype has been identified, clonal complexes are defined as including any ST that matches the central genotype at four or more loci, unless it more closely matches another central genotype [8].
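This matching rule is simple to state in code. The sketch below assigns a seven-locus allelic profile to the best-matching clonal complex only when it matches a central genotype at four or more loci; all profiles and complex names are made up for illustration:

```python
def assign_clonal_complex(profile, central_genotypes, min_matches=4):
    """Assign a 7-locus MLST profile to the complex whose central
    genotype it matches at the most loci, if at least min_matches."""
    best_name, best_score = None, 0
    for name, central in central_genotypes.items():
        score = sum(a == b for a, b in zip(profile, central))
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= min_matches else None

central_genotypes = {
    "ST-5 complex":  (1, 1, 2, 1, 3, 1, 3),
    "ST-11 complex": (2, 3, 4, 3, 8, 4, 6),
}
print(assign_clonal_complex((1, 1, 2, 1, 3, 1, 9), central_genotypes))
# -> "ST-5 complex" (6 of 7 loci match)
print(assign_clonal_complex((5, 9, 7, 9, 1, 9, 9), central_genotypes))
# -> None (no central genotype matched at 4 or more loci)
```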
The accumulation of nucleotide changes in housekeeping genes is a relatively slow process, and the allelic profile of a meningococcal strain is stable over time. Therefore, MLST is a powerful tool for the study of the global epidemiology of meningococci [6]. However, MLST provides lower discrimination than PFGE for fine typing of some clonal groups of N. meningitidis [9]. In recent years, the multilocus variable-number tandem repeat (VNTR) analysis (MLVA) technique has been developed for fine typing of many bacterial species [10-19]. In addition, Yazdankhah et al. [20] have recently developed a MLVA method with four VNTR loci for genotyping of N. meningitidis isolates and successfully differentiated serogroup W135 isolates from sporadic cases and outbreaks. In this study, we successfully developed a MLVA method with 12 VNTR loci to analyze a panel of N. meningitidis isolates which had previously been characterized by PFGE and MLST [9]. Of the 12 loci, at least 9 were located in the coding regions of annotated genes (Table 1). The VNTRDB program used each of the three genomic sequences in turn as a "parent" sequence to search for repeat loci and then located each of the loci in the other two genomes, so that a locus, for example NMTR9a, with only one repeat unit in strain MC58 but with 2 repeat units in strain Z2491 and 3 repeat units in strain FAM18, could be found (Table 1). MLVA genotyping The MLVA genotyping was performed on 100 N. meningitidis isolates, which were collected between 1996 and 2002 and whose PFGE patterns and ST types were characterized previously [9]. The results showed that the majority of the isolates carried only one copy of each of the 12 loci; however, five isolates carried an extra copy of the NMTR1, NMTR7, NMTR9 or NMTR18 locus, two isolates did not carry the NMTR1 locus, and three isolates did not carry the NMTR12 locus (Table 2). The number of alleles at each of the 12 loci ranged from 3 to 40, counted over the 100 isolates analyzed (Table 3). Six loci (NMTR1, NMTR2, NMTR7, NMTR9, NMTR10 and NMTR12) had more than 10 alleles and four loci (NMTR1, NMTR2, NMTR7 and NMTR9) had a high allelic polymorphism index (≥ 0.9) (Table 3). Based on the allele number for each of the 12 loci determined in this study, at least 5 × 10^11 MLVA allelic profiles (MLVA types) are expected. A total of 93 MLVA types were identified for the 100 isolates (Table 2). The majority of MLVA types represented only one isolate; however, the TW4, TW5, TW51, TW52, and TW62 types each represented two isolates and TW3 represented three isolates. TW62 was identified in two serogroup B isolates (NM255 and NM256), which were obtained from two cases in a meningococcal disease outbreak in a family. TW52 was identified in two serogroup C isolates (NM377 and NM378) with a close epidemiological relationship. TW3, TW4, and TW5 were identified in serogroup Y isolates collected from sporadic cases; the isolates were derived from a newly imported clone [9]. The two serogroup W135 isolates with the TW51 type were collected from cases at a 2-year interval. Phylogenetic tree built with MLVA profiles As shown in the previous study [9], PFGE exhibited a higher degree of discrimination than MLST for the isolates analyzed. However, the results of this study showed that MLVA exhibited much higher resolution than PFGE on the same panel of isolates.
MLVA discriminated all of the serogroup B isolates and 29 of 31 serogroup W135 isolates, which were collected from sporadic cases (Table 2). In contrast, only two ST types and four PFGE patterns were identified in the 31 serogroup W135 isolates (Table 2). Only one ST type and two PFGE patterns were identified in the 11 serogroup Y isolates (Table 2). However, these isolates were further discriminated into seven MLVA genotypes. Phylogenetic analysis The clonal relationships among the 100 isolates were constructed with the MLVA types by the minimal spanning tree (MST) method. In the analysis with 12 loci, MLVA types matching at eight or more loci were regarded as clonally related. Consequently, eight distinct MLVA groups were established, and the grouping feature established with the MLVA types had good agreement with that built with ST types (Figure 1). The two serogroup A isolates were characterized as different MLVA types (TW48 and TW59), differing at three loci; both carried the ST-7 type within the ST-5 complex. Five MLVA types (TW1, TW2, TW27, TW55 and TW63) were separated from the T4 group; however, they had a closer genetic relationship with the genotypes within the T4 group. All the MLVA types, except TW65 and TW88, representing the serogroup W135 isolates were clustered in the T7 group. The two MLVA types (TW25 and TW52), identified in three serogroup C isolates, had a closer clonal relationship with the W135 isolates than with isolates of other serogroups, although they differed at five loci from the closest MLVA types within the T7 group. A total of 32 MLVA types were identified in the 31 serogroup W135 and three serogroup C isolates; in contrast, only two ST types (ST-11 and its single-locus variant, ST-3016) were found in the isolates (Table 2). The isolates with the TW25 and TW52 types emerged in 2001 and 2002, respectively. Since TW25 and TW52 differed at as many as seven loci, the two MLVA strains are unlikely to be derived from a common imported strain. The serogroup Y isolates shared a close clonal relationship, as the seven MLVA types formed a compact cluster. Six MLVA types differed at only one or two loci from the founder type, TW3, which was identified in the earliest collected isolates in Taiwan. MLVA allelic profiles of isolates from patient-contact episodes Five isolates, collected from healthy contacts of four patients, were characterized by MLVA. The MLVA profiles were identical for isolates from three episodes. Two isolates from the fourth episode differed at a single locus, NMTR-7 (Table 4). Discussion Our data demonstrate that the MLVA method is powerful for subtyping and useful for phylogenetic investigation of N. meningitidis isolates. The MLVA exhibited a much higher discriminatory power than PFGE for the isolates tested, and the resulting data agreed well with the epidemiological observations. Our study showed that the clonal relationships between the isolates, established with MLVA types, were in good agreement with those built with ST types. As shown in Figure 1, strains within a ST complex or ST group shared more common VNTR loci. Among the 12 loci, four (NMTR1, NMTR2, NMTR7 and NMTR12) were highly polymorphic; they could have higher variation rates. The remaining loci could have moderate and low variation rates. Thus, different sets of VNTR loci may be useful for phylogenetic investigation of isolates evolving over different time scales. Phylogenetic investigations of the spread of N.
meningitidis strains over a long time scale will best be carried out using loci with a low or moderate variation rate. Forensic and outbreak investigations may use loci with a higher variation rate. In our study, the MST grouping features built with 10 or 11 loci, excluding one or two highly polymorphic loci such as NMTR1, NMTR2 or both from the 12 loci, remained similar to, but tighter than, that built with all 12 loci (data not shown). Therefore, use of more VNTR loci with a lower variation rate will increase the power of MLVA in phylogenetic studies of N. meningitidis strains evolving over a long time scale. The allelic profiles of the 11 serogroup Y isolates demonstrated the level of stability of the 12 VNTR loci. The comparison of the allelic profiles indicated that NMTR2 had the highest variation rate; five additional alleles at NMTR2, but only one at NMTR1, NMTR7 and NMTR9, evolved in the serogroup Y isolates over a 2-year time span. The stability of the VNTR loci was also demonstrated by the comparison of the allelic profiles of isolates from four patient-contact episodes. Although a single-locus variant was observed in isolates from one patient-contact episode (Table 4), this MLVA method should be stable enough for forensic and outbreak investigations. Since variation normally occurs in only a small portion of isolates from an outbreak [15], such variation is usually not a problem for the interpretation of MLVA data. Conclusion MLVA exhibits a higher degree of resolution than PFGE for fine typing of N. meningitidis isolates and produces portable data that can easily be used for comparisons between laboratories via the Internet. MLVA data can also be used to investigate phylogenetic relationships between N. meningitidis strains. Therefore, MLVA can be adopted as an epidemiological tool for forensics and disease outbreak investigations, and for investigating clonal relationships among meningococcal strains. However, the mutation rate for each VNTR locus is still unknown. To fully exploit the value of MLVA, more VNTR loci need to be explored and more N. meningitidis isolates of known epidemiological history need to be characterized. The VNTRDB program searches for tandem repeat loci in one of the three genomic sequences and then locates the positions of each of the loci in the other two compared genomes. The three genomic sequences are used in turn as the "parent" sequence, so that a locus with only one repeat unit in one genome, but with two or more repeat units in the other genomes, will not be missed. Searches found more than 300 repeat loci that were common to all three strains and had variable repeat units between the three strains. Twenty-three repeat loci that had a short repeat unit length (≤ 30 bp), more than 85% repeat sequence identity, and no indels were selected for further evaluation with 10 genetically distinct strains. Twelve loci, which were detected in all of the 10 test isolates and amplified as only one amplicon, were chosen for genotyping of N. meningitidis isolates (Table 1). Preparation of crude bacterial DNA Meningococcal isolates, stored at -70°C, were plated onto tryptic soy agar with 5% sheep blood and incubated overnight at 37°C under a 5% CO₂ atmosphere. A loopful (10 µl) of bacterial growth was removed from the plate, suspended in 100 µl of TE buffer (10 mM Tris-Cl, 1 mM EDTA, pH 8.0) in an Eppendorf tube, and boiled for 10 min. After centrifugation at 3700 g for 10 min, the supernatant was transferred to a new tube and used for PCR amplification.
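Returning to the MST analysis described above: an MST over MLVA profiles can be built from pairwise categorical (Hamming-style) distances, counting the number of differing loci. The sketch below uses scipy and made-up 12-locus profiles, and applies the text's rule that types matching at eight or more loci (i.e., differing at four or fewer) are regarded as clonally related:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Toy MLVA profiles: repeat-unit counts at 12 loci for four isolates
# (values are made up; real profiles come from Table 2).
profiles = np.array([
    [3, 5, 2, 7, 1, 4, 6, 2, 3, 5, 2, 8],
    [3, 5, 2, 7, 1, 4, 6, 2, 3, 5, 2, 9],   # single-locus variant of row 0
    [3, 6, 2, 7, 1, 4, 5, 2, 3, 5, 2, 9],
    [9, 1, 8, 2, 6, 7, 1, 5, 9, 1, 7, 3],   # unrelated profile
])

# Pairwise distance = number of differing loci.
n = len(profiles)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = np.sum(profiles[i] != profiles[j])

mst = minimum_spanning_tree(dist).toarray()

# Types differing at <= 4 of 12 loci count as clonally related.
for i, j in np.argwhere(mst > 0):
    d = int(mst[i, j])
    print(f"isolate {i} - isolate {j}: {d} loci differ, "
          f"clonally related: {d <= 4}")
```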
PCR amplification and analysis of VNTR regions The primer sets specific to the 12 VNTR regions are listed in Table 5. Before size analysis, the fluorescent amplicons were diluted in water, usually at a 1:100 or 1:200 ratio, then separated by capillary electrophoresis on an ABI Prism 3130 Genetic Analyzer with the GeneScan 500 LIZ Size Standard (cat # 4322682; Applied Biosystems). Data were collected and the lengths of amplicons were determined with GeneScan Data Analysis Software, ver 3.7 (Applied Biosystems). All amplicons with different lengths from each locus were subjected to nucleotide sequence determination to verify the repeat sequence and the numbers of repeat units in the amplicons. The primers (without dye label) used for nucleotide sequence determination were the same as the primer sets used for PCR amplification. DNA sequencing was performed using the ABI Prism BigDye Terminator cycle sequencing ready reaction kit and an ABI Prism 3130 Genetic Analyzer. The numbers of repeat units at the 12 VNTR loci (Table 1) and the predicted sizes of amplicons (Table 5) for the N. meningitidis strains Z2491, MC58 and FAM18 were taken as the standards to infer the number of repeat units at each locus for the isolates tested. Data analysis The numbers of repeat units for each locus were saved as "Character Type" data in BioNumerics software (version 3.5; Applied Maths, Kortrijk, Belgium) and then subjected to cluster analysis using the Minimum Spanning Tree method. The polymorphism information index, or Nei's diversity index (DI), was calculated for evaluating allele diversity as DI = 1 − Σ(allele frequency)². Bacterial strains A total of 105 N. meningitidis isolates, collected from meningitis patients and healthy contacts, were included in this study. The collection from patients comprised 2 serogroup A isolates, 52 serogroup B isolates, 3 serogroup C isolates, 31 serogroup W135 isolates, 11 serogroup Y isolates and 1 non-groupable isolate (Table 2). They were collected from sporadic cases between 1996 and 2002 in Taiwan, except for two pairs of isolates (NM255 and NM256; NM377 and NM378), which were, respectively, isolated from a meningococcal disease outbreak in a family and from two cases with a close temporal and spatial connection. All 100 isolates from patients had been characterized by PFGE and MLST in a previous study by Chiou et al. [9]. Five isolates from healthy contacts were collected from four independent patient-contact episodes (Table 4). Authors' contributions JC Liao designed all the primers and performed the MLVA analysis of all the isolates. CC Li was in charge of searching for potential VNTR loci by computer software and the MST clustering analyses. CS Chiou initiated and managed the project, analyzed data, and wrote the report. All authors read and approved the final manuscript.
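Nei's diversity index as defined above, and the theoretical number of distinguishable MLVA types (the product of allele counts per locus), are both one-liners; a short sketch with illustrative allele data:

```python
from collections import Counter
from math import prod

def diversity_index(alleles):
    """Nei's diversity index DI = 1 - sum(p_i^2) over allele frequencies."""
    counts = Counter(alleles)
    total = sum(counts.values())
    return 1.0 - sum((c / total) ** 2 for c in counts.values())

# Hypothetical repeat-unit counts observed at one locus across isolates:
print(round(diversity_index([3, 3, 4, 5, 5, 5, 6, 7]), 3))   # -> 0.75

# Theoretical MLVA type count: product of per-locus allele numbers.
# With 3-40 alleles over 12 loci this easily exceeds the 5 x 10^11
# quoted in the text; the per-locus counts below are illustrative only.
alleles_per_locus = [40, 31, 3, 25, 5, 17, 28, 4, 22, 12, 6, 15]
print(f"{prod(alleles_per_locus):.2e}")
```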
2016-05-04T20:20:58.661Z
2006-05-11T00:00:00.000
{ "year": 2006, "sha1": "e105e7569e71f7a2e80f8926e05f98be1bc0624b", "oa_license": "CCBY", "oa_url": "https://bmcmicrobiol.biomedcentral.com/track/pdf/10.1186/1471-2180-6-44", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8fa3ba5361b918831d84548cb6fafda9e55365d4", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
231584196
pes2o/s2orc
v3-fos-license
Effect of the Affordances of the FM New Media Communication Interface Design for Smartphones Smartphone equipment has promoted the widespread use of new media communication, and users have changed from passively to actively receiving information. This has changed people's lifestyles and has enriched the convenience and entertainment value of knowledge acquisition. However, new media communication systems can be too complex, and the interface design through which users interact with the new media may not be sufficiently intuitive, which causes interface usability problems. Therefore, this research study focuses on the concept of affordance and the impact of user perception pertinent to smartphone applications on frequency modulation (FM) new media interface design. The experiment is a between-subjects design using one-way ANOVA to examine three different operation modes, namely the Litchi, Himalayas and Archimedes types. The experimental data were obtained through task performance and subjective evaluation. The results indicate that: (1) Visual information presentation methods, such as viewing and deleting, affect user perception. The three operating modes revealed significant differences, with the Himalayas type taking the least task performance time. (2) There were significant differences among the different operation modes, with the Himalayas type being the best in terms of the users' subjective evaluation. (3) The overall analysis of task performance and satisfaction consistently showed that the Himalayas type was better than the Litchi and Archimedes types in all aspects. (4) Smartphone user interface applications provide users with cognitive, object, functional, and sensory affordances, which enhance the user's interactive experience of FM new media. Introduction With the birth and development of mobile media and new types of smart equipment, especially smartphone mobile equipment, the quality of new media communication has increased. This has changed people's lifestyles, the users' original media contact time and habits have been disrupted, and the fragmentation and dissemination of information have cultivated new reading and audiovisual habits [1]. On the one hand, the fragmentation of listening time and content enables users to listen across time and space. The traditional listening mode, on the other hand, requires a fixed time to hear the relevant broadcast information. With traditional radios, it is necessary to play and hear the information at a fixed time; the positioning of the program is uncertain, the information content tends to be singular, and the range of choices is narrow. The listener passively accepts a large amount of input information and cannot listen to the information according to his/her personal preferences. In contrast, the smartphone frequency modulation (FM) new media listening platform has a large capacity through "cloud" processing, and the information content is rich and of high quality, including audiobooks, language learning, and emotional columns, with information sourced from home and abroad. Its customization, personalization, and diversification have become a trend [2]. The FM new media use a smartphone application (app) to operate tasks that are simple, intuitive, and easy to perform.
[11] systematically combined and analyzed studies from the perspective of media communication to clarify the development of the concept of affordance. They suggested that it can be defined according to the research orientation, emphasizing the consistency of the cited conception and its application, and expanding its operational definition. They also pointed out that the development of the concept of affordance should follow threshold standards, namely Anonymity, Persistence, and Visibility. Their research results also show that the creation of affordance has positive significance for the application of new media communication. Norman [7] proposed perceptual affordance based on changes in people's living environment, which helps highlight the hidden meaning of artifacts and enhances intuitive interaction. It is broadly applied in the HCI field, and its concept and application are consistent. Gaver [12,13] put forward the term technological affordance, emphasizing its grouping support across time and space, which helps generate action through technical interaction. Hartson [14] further expanded the concept of affordance and identified four types, namely cognitive, object, functional, and sensory affordances. Based on the analysis of video media, Van Osch and Mendelson [15] proposed types of affordance among developers, users, and artifacts. Moreover, the cross-integration of new media and interaction design language will further broaden the development of this concept. New media serve as a form of social interaction and provide space for communication [9]. It has also been shown that new media products not only need to strengthen the usability of their design but also need to incorporate the sociality of emotional needs to improve user cognition. Given the consistency of concept and application, this study is based on the concept of perceptual affordance proposed by Norman [7]. Drawing on Hartson's [14] expansion of the affordance concept into four types, this study aimed to investigate the impact of affordance design on FM new media and the effect of interface design. Figure 1 shows the conceptual diagram of interaction design pertaining to the affordance factors generated in this study.
Affordances in a New Medium
With the change in communication methods pertaining to new media, users have changed from performing passive to active activities. Users tend to enter and receive commands through smart devices and interact with their virtual interfaces. The affordance provided by active devices can directly affect the usability and satisfaction of smart media [16]. The smart interfaces of new media take a variety of interface forms to establish interaction with users, such as touch graphics, voice control, and gestures. Therefore, the user interface of new media should provide reasonable capabilities in terms of vision, hearing, and touch to improve its usability. Users implement tasks through the interface icons on the touch screen, and these graphics are the main elements of the human-computer interaction. Only well-designed and reasonable icons can convey the intent of a complex system [17]; they are conducive to understanding, learning, and attracting attention, among other advantages [18][19][20], helping to achieve a multiplier effect.
Hartson [14] expanded affordance into four types, namely functional, object, cognitive, and sensory affordance, which can effectively promote the interaction of new media interfaces, help users operate tasks more conveniently, obtain the required services, and play a positive guiding role. The design features of cognitive affordance (i.e., visual cues or information) can help users understand and recognize how a task is performed. A lack of affordance cues can reduce search and click performance for action buttons on the display, whereas adding depth or contrast to the buttons (or both) improves search performance [21]. Object affordance helps users take action more easily when carrying out tasks. Functional affordance helps users complete tasks. Sensory affordance helps users perceive the characteristics of objects. The application of these four types of affordance in new media interface design is mutually integrated, which can improve the usability of direct interaction between smart devices and users. Norman [7] indicated that interaction design should follow six design principles to help users complete tasks quickly; these principles are Visibility, Feedback, Constraints, Mapping, Consistency, and Affordance. He also emphasized that the content of the interface design should be clear and concise to help reduce unnecessary information, and that it should fully mobilize the invisible implications of graphics or icons, which is beneficial to providing a sense of affordance. This would also help users easily use the new media interface, reduce their psychological load, and reduce their perceived uncertainty.
Interaction Design of FM New Media
The interface of smart devices provides a variety of interaction possibilities for new media communication and highlights usability. Usability focuses on the intuitiveness and ease of learning in the interaction between people and products [22]. As a platform to support and promote social activities, communication, and collaboration, new media communication must consider usability and guide users to participate in social interaction. Usability attracts designers' attention, while sociality brings together multiple identities to participate [23]. Zhao et al. [9] established three crucial components of social media design, namely social media as an IT artifact, user type, and the role in social media application domains. These refer, respectively, to the emphasis on content and form, the emphasis on user types in social media interaction design, and the solving of specific problems or the presentation of different functions. They believe that concept development is multi-dimensional and that social benefits are brought about by a comprehensive framework. These insights are all conducive to a comprehensive understanding of the application of affordance design on the new media virtual platform.
Research Objective and Questions
The purpose of this study is to investigate the user interface design of FM new media pertinent to smart operation modes. The experiment was designed to investigate how "operation mode" (i.e., the Litchi, Himalayas, and Archimedes types) on the smart FM new media interface would affect users' task performance, system usability (measured using the system usability scale [SUS]), and their subjective evaluations. Four research questions were raised accordingly.
Method and Materials
The experiment adopted a one-way ANOVA design. The independent variable is "operation mode".
It is a between-subjects factor, and the three levels were the Litchi, Himalayas and Archimedes types. The dependent variables were the users' task performance and scores from the SUS and subjective evaluations. The study tasks included five items for the purpose of investigating the participants' behavior in the new media operating environment. We implemented them on a HUAWEI P10 smartphone (HUAWEI, Shenzhen, China) with a 5-inch screen. According to online research on traffic, rankings, downloads, and audiences, these three apps rank high among FM platforms, especially the Himalayas and Litchi types, which have considerable influence and popularity. The content of the three types is distinctive and representative, and they have similar functions for the experimental tasks. That is, the three user interfaces are all smartphone apps, among which the Litchi type is a professional FM platform that focuses on "sound", and its interface content modules are more related to arts and entertainment information. The Himalayas type is an FM new media platform with more systematic and comprehensive functionality and content integration; its interface information is rich and diverse, and it has a great deal of social influence. The Archimedes type is a new media platform with many stations which falls between the professional and mass-market FM new media. All three apps were very popular in the market based on top online rankings and download rates (see Figure 2). The experimental results were analyzed based on the one-way ANOVA. That is, by using the SPSS software (IBM, Armonk, NY, USA), the participants' task completion time, SUS scores, and subjective evaluations were analyzed. Significant main effects were further analyzed by post hoc comparisons.
Participants
The convenience sampling method was used in this study because common mobile device users may use FM new media, and they are all potential participants. Therefore, there was no specific restriction in terms of participants. A total of 30 users participated in this experiment (i.e., nine males, 21 females). Their education level was bachelor's degree or above, and their age was between 20 and 40 years old. They all had experience in using mobile apps; nonetheless, their experience level was assessed based on their background information. Among them, 14 participants had not used FM new media before, seven had one year of use experience, and nine had more than two years of use experience. Based on the above information, 46.7% (i.e., 14 participants) were viewed as having lower use experience, while 53.3% (i.e., 16 participants) were viewed as having higher use experience. According to the distribution of the data, most participants had used such apps and could complete the experiment independently.
Procedure
According to the independent variable, the participants were divided into three groups. Each participant was required to use one of the three operation modes, that is, the Litchi, Himalayas, or Archimedes type. The collection of the research data adopted both quantitative and qualitative tools. More specifically, when the experiment was conducted, the participant was asked to sit in front of a table, on which were an informed consent form, a personal background information questionnaire, and task descriptions. After the participant filled in the information and understood the task features, s/he was given a mobile phone to conduct the assigned tasks in sequence (see Table 1), and each task completion time was recorded for further analysis.
Then s/he was asked to fill out the SUS and subjective evaluation questionnaires. In the end, a semi-structured interview was also conducted to help collect information pertinent to his/her task difficulty, personal feelings, etc. The detailed experimental process is provided in Figure 3. Task 5 (share songs to WeChat Moments): Please find the music from the category and share the third piece of music. More specifically, during the experiment, the participants were randomly assigned to three groups to test one of the three FM new media user interfaces. The participants were informed that they should perform five tasks in sequence as quickly and accurately as possible. After completing all the tasks, they were required to fill out the System Usability Scale (SUS) questionnaire (see Table 2). Each item of the SUS questionnaire was scored using a five-point Likert scale (from 1 "strongly disagree" to 5 "strongly agree"). After that, participants were also asked to complete a subjective evaluation questionnaire regarding the FM new media app. The questionnaire was designed based on a 7-point Likert scale (from 1 for less satisfied to 7 for greatly satisfied). Finally, we conducted semi-structured interviews with participants on related issues. The interviews mainly focused on the overall user experience of the interface design and design suggestions. The total experiment time was less than 45 min.
Table 2. The Likert scale items of the System Usability Scale (SUS) questionnaire.
1. I think that I would like to use this APP system frequently.
2. I found the APP system unnecessarily complex.
3. I thought the APP system was easy to use.
4. I think that I would need the support of a technical person to be able to use this APP system.
5. I found the various functions in this APP system were well integrated.
6. I thought there was too much inconsistency in this APP system.
7. I would imagine that most people would learn to use this APP system very quickly.
8. I found the APP system very cumbersome to use.
9. I felt very confident using the APP system.
10. I needed to learn a lot of things before I could get going with this APP system.
Results and Analysis
A one-way analysis of variance (ANOVA) was performed to analyze the collected data. The results generated from the one-way ANOVA of each task pertinent to participants' completion time are illustrated in Table 3.
Task Analysis
As shown in Table 3, the first task required the participants to "query the view list". The effect of the operation mode showed a significant difference (F(2, 27) = 6.375, p = 0.005 < 0.05). The subsequent post hoc comparison showed that the Himalayas type (M = 54.20, SD = 26.27) and the Litchi type (M = 103.32, SD = 28.04) had a significant difference (p = 0.012 < 0.05). The Himalayas type and the Archimedes type (M = 116.07, SD = 59.56) also showed a significant difference (p = 0.001 < 0.05). The Litchi type and the Archimedes type showed no significant difference (p = 0.492 > 0.05). The results indicated that the Himalayas type operation time was the shortest, while the Archimedes and Litchi types took longer. According to the results of the observations and interviews, the reason could be that participants using the Himalayas type interface can see the position of the ranking check, so that they can quickly complete the task. However, the other two types require more than two additional steps to perform the task, and the task-related information was not obvious, which made the implementation of the tasks not intuitive.
This is also because a lack of compelling visual cues can lead to reduced search and click performance [21]. The second task required the participants to "search for audiobooks". The results showed that the effect of the operation mode was not significant (F(2, 27) = 1.043, p = 0.366 > 0.05). The third task required the participants to "select download content". The effect of the operation mode was not significant (F(2, 27) = 0.943, p = 0.402 > 0.05). The fourth task required the participants to "delete the downloaded content". The effect of the operation mode showed a significant difference (F(2, 27) = 5.242, p = 0.012 < 0.05). The subsequent post hoc comparison showed that the Himalayas type (M = 16.23, SD = 7.80) and the Litchi type (M = 35.80, SD = 23.33) had no significant difference (p = 0.053 > 0.05). The Himalayas type and the Archimedes type (M = 47.24, SD = 28.32) showed a significant difference (p = 0.003 < 0.05). The Litchi type and the Archimedes type also showed no significant difference (p = 0.248 > 0.05). The results indicated that the Himalayas type operation time was the shortest, and the Archimedes type was the longest. According to observations and interviews, the reason could be that the download indicator on the Himalayas type interface is in the upper right corner of the homepage; the operation tasks tend to be intuitive, making it convenient to delete information. The download button of the Archimedes type may not be obvious to the user; when searching for content, a user needs to perform multiple clicking steps. Only well-designed icons can convey system intent [17]. The fifth task required the participants to "share songs to WeChat Moments". The effect of the operation mode was not significant (F(2, 27) = 1.173, p = 0.325 > 0.05).
System Usability Scale (SUS)
The participants were asked to fill out the SUS questionnaire after they had completed the tasks. The results generated from the one-way ANOVA are shown in Table 4. From Table 4, it can be seen that there was a significant difference in the effect of the operation mode (F(2, 27) = 7.067, p = 0.003 < 0.05). The subsequent post hoc comparison showed that the Himalayas type (M = 69.60, SD = 14.31) and the Litchi type (M = 45.50, SD = 18.55) had a significant difference (p = 0.012 < 0.05). The Himalayas type and the Archimedes type (M = 37.50, SD = 25.52) showed a significant difference (p = 0.001 < 0.05). The Litchi type and the Archimedes type showed no significant difference (p = 0.376 > 0.05). That is, the Himalayas type was better than the Litchi and Archimedes types. By SUS convention, a score greater than 68 means that the system met the users' needs and approval, while a score below 68 indicates that the participants were not satisfied. The participants were satisfied with the Himalayas type (i.e., 69.60 > 68). However, the Litchi and Archimedes types both scored below 68, which shows that participants were not satisfied with those two types. One possible reason is that the ease of use of a system shapes users' perception of its operation and determines whether they can complete the operation process quickly and smoothly. Well-designed products follow the principles of interaction design and are user-centric. The Himalayas type operating system is in line with the participants' visual perception of FM new media. (The standard scoring rule behind the SUS values in Table 4 is sketched below.)
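The SUS values in Table 4 follow Brooke's standard scoring rule: odd-numbered (positively worded) items contribute their rating minus one, even-numbered (negatively worded) items contribute five minus their rating, and the sum is multiplied by 2.5 to give a 0-100 score. A minimal Python sketch; the function name and the example responses are illustrative, not data from this study:

```python
def sus_score(responses):
    """Convert ten 1-5 Likert responses into a 0-100 SUS score.

    Odd-numbered items (1, 3, 5, ...) are positively worded: contribute (x - 1).
    Even-numbered items are negatively worded: contribute (5 - x).
    The summed contributions (0-40) are scaled by 2.5 to give 0-100.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly ten item responses")
    total = 0
    for i, x in enumerate(responses, start=1):
        total += (x - 1) if i % 2 == 1 else (5 - x)
    return total * 2.5

# Example: one hypothetical participant's ratings for the ten items in Table 2.
print(sus_score([4, 2, 4, 1, 4, 2, 5, 2, 4, 2]))  # -> 80.0
```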
The Himalayas interface's visual cues and displayed information are hierarchical and easy to understand, which can effectively guide participants; this reflects the design characteristics of cognitive affordance. The object affordance of the operation interface mode is helpful for implementing the task. The consistent design of function icons and text affects the effectiveness of the system functions, while functional affordance can help participants complete tasks effectively. The design features of the size of icons, text, and buttons promote participants' sense of affordance, which is beneficial to user perception and search tasks. Therefore, the Himalayas type has the design characteristics of the four factors of affordance, and participants can perceive the ease of use of the interface operating system, facilitating smooth interaction [14].
Subjective Evaluations
Using a 7-point Likert scale (i.e., from 1 for less satisfied to 7 for greatly satisfied), the results of the participants' subjective evaluations after completing the operational tasks are presented as follows. Table 5 illustrates the one-way ANOVA results for "interface aesthetics", "search convenience", "download intuitiveness", "ease of deleting content", "information reading fluency", "ease of use", "ease of query", "functional convenience" and "overall acceptance" of the interface based on the participants' subjective evaluations. For "interface aesthetics", the effect of the operation mode was not significant (F(2, 27) = 1.024, p = 0.373 > 0.05). A comparison of the means shows that the scores for the Litchi (M = 3.50, SD = 1.18) and Archimedes (M = 3.40, SD = 1.26) types are less than 4, while the Himalayas type (M = 4.10, SD = 1.10) is above 4. Therefore, participants' subjective evaluations of interface aesthetics tended to be neutral. For "search convenience", the effect of the operation mode was not significant (F(2, 27) = 0.515, p = 0.603 > 0.05). A comparison of the means shows that the score for the Archimedes type (M = 3.80, SD = 2.10) is less than 4, while the Litchi (M = 4.00, SD = 1.33) and Himalayas (M = 4.50, SD = 1.18) types are equal to or above 4. Therefore, participants' subjective evaluations of search convenience also tended to be neutral. For "download intuitiveness", the effect of the operation mode was not significant (F(2, 27) = 2.377, p = 0.112 > 0.05). A comparison of the means shows that the score for the Litchi type (M = 3.60, SD = 1.17) is less than 4, while the Himalayas (M = 4.80, SD = 1.03) and Archimedes (M = 4.40, SD = 1.51) types are above 4. Therefore, participants' subjective evaluations of download intuitiveness also tended to be neutral. For "ease of deleting content", the effect of the operation mode was not significant (F(2, 27) = 1.518, p = 0.237 > 0.05). A comparison of the means shows that the scores for the Litchi (M = 4.30, SD = 1.77), Himalayas (M = 5.30, SD = 1.16) and Archimedes (M = 4.20, SD = 1.69) types were all above 4. According to the evaluation criteria of the 7-point Likert scale, the participants were satisfied with all three operation modes. For "information reading fluency", the effect of the operation mode showed a significant difference (F(2, 27) = 4.134, p = 0.027 < 0.05).
The subsequent post hoc comparison showed that the Himalayas type (M = 5.20, SD = 1.32) and the Litchi type (M = 3.10, SD = 1.73) had a significant difference (p = 0.008 < 0.05). The Himalayas type and the Archimedes type (M = 4.00, SD = 1.83) showed no significant difference (p = 0.113 > 0.05). The Litchi type and the Archimedes type also showed no significant difference (p = 0.230 > 0.05). The results indicated that the Himalayas type received the highest satisfaction rating for information reading fluency, and the Litchi type the lowest. The reason may be that the Litchi type lacks an overall information column, and the differences between the information columns of each module are not clear, which causes reading difficulties. Due to the limited screen size of the mobile phone, the interface information might be too small for users to easily attend to the displayed content. For "ease of use", the effect of the operation mode showed a significant difference (F(2, 27) = 4.134, p = 0.027 < 0.05). The subsequent post hoc comparison showed that the Himalayas type (M = 4.40, SD = 1.17) and the Litchi type (M = 2.90, SD = 1.66) had a significant difference (p = 0.029 < 0.05). The Himalayas type and the Archimedes type (M = 2.80, SD = 1.48) also showed a significant difference (p = 0.020 < 0.05). The Litchi type and the Archimedes type showed no significant difference (p = 0.879 > 0.05). The results indicated that the Himalayas type received the highest satisfaction rating for ease of use, and the Litchi and Archimedes types the lowest. The reason may be that the overall arrangement of the Himalayas type interface information is more systematic and well organized. When the content is rich and rationally arranged, it can enhance participants' awareness of interface information and highlight the design characteristics of cognitive affordance. For "ease of query", the effect of the operation mode showed a significant difference (F(2, 27) = 4.134, p = 0.027 < 0.05). The post hoc comparison showed that the Himalayas type (M = 4.80, SD = 1.32) and the Litchi type (M = 2.80, SD = 1.44) had a significant difference (p = 0.004 < 0.05). The Himalayas type and the Archimedes type (M = 3.50, SD = 1.58) showed no significant difference (p = 0.053 > 0.05). The Litchi type and the Archimedes type also showed no significant difference (p = 0.285 > 0.05). The results indicated that the Himalayas type received the highest satisfaction rating for ease of query, and the Litchi type the lowest. For "functional convenience", the effect of the operation mode was not significant (F(2, 27) = 1.664, p = 0.208 > 0.05). A comparison of the means shows that the scores for the Litchi (M = 4.20, SD = 1.99), Himalayas (M = 5.40, SD = 1.78) and Archimedes (M = 4.10, SD = 1.52) types are all above 4. According to the evaluation criteria of the 7-point Likert scale, the participants were satisfied with all three operation modes. For "overall acceptance of the interface", the effect of the operation mode was not significant (F(2, 27) = 2.291, p = 0.121 > 0.05). A comparison of the means shows that the score for the Archimedes type (M = 3.20, SD = 1.14) was less than 4, while the Litchi (M = 4.00, SD = 1.15) and Himalayas (M = 4.40, SD = 1.51) types were equal to or above 4. Therefore, participants' subjective evaluations of overall acceptance of the interface also tended to be neutral. (All of the omnibus tests and post hoc comparisons above follow the same analysis pipeline, sketched below.)
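The omnibus tests and post hoc comparisons reported above follow a standard pipeline that can be reproduced outside SPSS; a minimal Python sketch with fabricated completion times (Tukey's HSD here merely stands in for the study's post hoc procedure, which is not specified):

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Fabricated task-completion times (seconds) for three groups of 10 participants.
litchi     = np.array([95.0, 110.2, 88.7, 120.4, 101.3, 99.8, 115.0, 97.6, 108.2, 96.9])
himalayas  = np.array([52.1, 60.4, 48.9, 55.3, 49.7, 58.2, 61.0, 50.5, 57.8, 47.1])
archimedes = np.array([118.3, 130.6, 99.4, 140.2, 121.7, 110.9, 135.5, 125.0, 108.8, 119.6])

# Omnibus one-way ANOVA: with 3 groups of 10 this yields F(2, 27).
f_stat, p_value = f_oneway(litchi, himalayas, archimedes)
print(f"F(2, 27) = {f_stat:.3f}, p = {p_value:.4f}")

# Post hoc pairwise comparisons when the omnibus test is significant.
times  = np.concatenate([litchi, himalayas, archimedes])
groups = ["Litchi"] * 10 + ["Himalayas"] * 10 + ["Archimedes"] * 10
print(pairwise_tukeyhsd(times, groups, alpha=0.05))
```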
Discussion
Previous research has shown that the perceived affordance of new media interaction design is multi-dimensional [9]. Users are able to carry out intuitive interaction through the user interface design [7]. The four types in the expanded affordance concept are cognitive, functional, sensory, and object affordance [14], which contribute to users' interactive experience with new media. Based on the experimental data, the results are discussed as follows.
Participants' Task Performance
In terms of task performance, the three operating interfaces of FM new media show significant differences. The Himalayas type takes the shortest time, and participants can quickly complete the assigned tasks. This means that the user interface is recognizable and easy to operate because of its higher affordance, as the analysis of the four types of affordance shows. Cognitive affordance emphasizes a simple and easy-to-operate interface design. Interface graphics and text should follow the principle of consistency, and the use of familiar icons and text can stimulate cognitive processing [24]. Affordance cues can affect search and click performance for call-to-action buttons [21]. They can fully mobilize the user's subjective initiative and encourage participation in the interaction. For example, for the function of deleting downloaded content and the sliding prompt, the interface configuration of the primary and secondary download information should be simple, intuitive, and logical. When hidden functions at different levels slide extended information in the horizontal and vertical directions, the interface should also provide easy-to-recognize guidance prompts, which can quickly and effectively help participants complete the deletion task. Object affordance refers to the possibility of helping users take action. The smart touch screen interface is an active device for new media. When users interact with the new media interface, although there is no explicit physical object, the touch screen interface carries implicit meaning (icons, icon size, shadow, color, and so on) and can provide implicit object affordance. One possible explanation is that the size of the touch screen also affects the interaction [25]; the corresponding interface design elements convey this invisible capacity and, when designed appropriately, trigger users to take action. The close connection between smartphone screen size and interface elements is directly related to usability issues. For example, a larger button can trigger action more easily than a smaller one. Moreover, when finger movement is too fast, the size of the icon on the touch screen may affect the rate of input failure [26]. At the same time, it may also affect user safety, task performance, and satisfaction. The precise input method corresponds to the content of the linked interface, system settings, and interactive group status [25]. The intuitiveness and clarity of functional affordance can help users quickly complete the FM interface operation tasks. In terms of usability function configuration, we learned from the "checking the music rankings" task that the display effectiveness of the Litchi and Archimedes types is lower than that of the Himalayas type.
This may be related to the inconsistency of the interface information, resulting in information content that is not sufficiently intuitive and that requires a dedicated search function to reach the relevant information. This not only affects the user's operability but also reduces visibility and makes reading more difficult. In the function menu, graphics and text should be designed with greater consistency and clarity. On the Litchi and Archimedes user interfaces, the meaning of the wording can easily confuse the audience, making it difficult to interpret the relationship between the sound and the program.
Participants' Subjective Evaluation
This research study further examines the impact of the three operation interfaces on the subjective evaluation of users. The overall results showed that there are significant differences between the three operating interfaces, with the participants consistently rating the Himalayas type better than the Litchi and Archimedes types. One possible explanation is that the Himalayas interface is easier to use, and its integration of information, functions, and icons is more systematic and well organized. It not only considers the usability of the design but also takes into account the social nature of the emotional needs of new media. It also follows the principles of interaction design, pays attention to the overall effect of the interface, avoids multiple clicks in the functional structure, and provides clear and intuitive semantic transmission. As an FM listening platform, the functions of "interface query view", "music ranking list", and "click to play order" are all frequently used options. The overall information structure of the interface should conform to public perception and interaction logic. For instance, functions for searching personal data and for downloading and storing information should be in the most conspicuous locations, and the interface texts should be clear. When a user searches for personal information in the Archimedes type, the semantic expression is not sufficiently intuitive, and the display position tends to be inconspicuous. Given the above analysis, designers should consider usability from multiple perspectives and strengthen the visual affordance of the interface according to user needs to help move the user's interactive experience from unfamiliar to familiar.
Conclusions
The focus of this study was on applying the concept of affordance and its factors to improve the usability of the FM new media communication platform. The study analyzed the current situation of new media and the impact of applying affordance factors to the design of a new media interface. The results show that the concept of affordance has a significant impact on users' task performance with the FM new media operation modes and affects users' subjective evaluations. We summarize the interface design of the FM smart app platform and provide several suggestions as follows. There are significant differences between the three user interfaces, with the Himalayas type better than the Litchi and Archimedes types; that is, its task performance time was the shortest. The subjective evaluation results show that the three user interfaces also differ significantly, and participants were most satisfied with the Himalayas type, as it meets the needs of the users.
The overall results show that visual affordance directly affects the user's perception and effectively improves the user's task performance and subjective evaluations. To support the effective development of FM new media products, we put forward the following points based on the summarized results of the FM platform interface design. The FM new media interface is the communication window of human-computer interaction. Designers should give full consideration to user needs, simplify interface operation complexity, provide clear and easy-to-recognize information content, and mobilize functional, sensory, and object affordance applications in the design [6]. It is important to take into account both the usability and the social benefits of the FM new media platform. In addition, the user's intuitive interaction with the smart touch screen during the use of FM new media is affected not only by the size of the active device's touch screen but also by its interface design. Therefore, designers need to consider direct or indirect object affordance to improve invisible affordance design. As traditional media switch to new media, users turn from passive to active behavior and perform tasks through visual cues or information. Enhancing the cognitive affordance of the interface design can help guide users in performing tasks. It is also important to clarify the interface information and help users complete tasks through functional icons. Designers should follow the principle of consistency in graphic design, make full use of information that users are familiar with, and use the affordance design concept to improve user experiences. It is recommended that future research combine this approach with the currently high-profile VR/AR technologies to better understand user needs and enhance interactivity. At the same time, incorporating studies pertinent to the facial expressions of social robots (e.g., Song & Luximon [27]) can also help in designing better innovative media platforms to promote social interaction.
Improved Treatment of Cosmic Microwave Background Fluctuations Induced by a Late-decaying Massive Neutrino
A massive neutrino which decays after recombination (t>10^{13} sec) into relativistic decay products produces an enhanced integrated Sachs-Wolfe effect, allowing constraints to be placed on such neutrinos from present cosmic microwave background anisotropy data. Previous treatments of this problem have approximated the decay products as an additional component of the neutrino background. This approach violates energy-momentum conservation, and we show that it leads to serious errors for some neutrino masses and lifetimes. We redo this calculation more accurately, by correctly incorporating the spatial distribution of the decay products. For low neutrino masses and long lifetimes, we obtain a much smaller distortion in the CMB fluctuation spectrum than have previous treatments. We combine these new results with a recent set of CMB data to exclude the mass and lifetime range m_h>100 eV, \tau>10^{12} sec. Masses as low as 30 eV are excluded for a narrower range in lifetime.
I. INTRODUCTION
Anisotropies in the cosmic microwave background (CMB) contain an enormous amount of information about the universe. Presently available data have been used [1]-[11] to constrain from two up to eight cosmological parameters. With the promise of ever more precise measurements of these anisotropies, it has become possible to envision CMB fluctuations as a tool to go beyond this minimal set and constrain other areas of physics. Recently proposed constraints include limits on Brans-Dicke theories [12], constraints on time variation in the fine-structure constant [13,14], tests of finite-temperature QED [15], and limits on various models for both stable [16] and unstable [17][18][19] massive neutrinos. All of these additional constraints, with one exception, are based on the high-precision fluctuation spectra expected from the MAP and PLANCK satellites. The sole exception is reference [18], in which Lopez et al. pointed out that the radiation from a neutrino decaying into relativistic decay products could produce such a large integrated Sachs-Wolfe (ISW) effect that a fairly large mass-lifetime range can be ruled out from current observations. Lopez et al. argued that a neutrino with a mass greater than 10 eV and a lifetime between 10^13 and 10^17 sec could be ruled out. (Although this calculation assumes nothing about the nature of the decay products other than that they are relativistic, this limit is most useful when applied to decay modes into "sterile" particles such as a light neutrino and a Majoron, since other, more restrictive limits apply to photon-producing decays.) Hannestad [19] showed that the MAP and PLANCK experiments should produce an even larger excluded region in the neutrino mass-lifetime plane. In this paper, we improve on a major approximation of references [18,19]. In these papers, the relativistic decay products were simply added to the background neutrino energy density in the program CMBFAST [20]. However, when the massive neutrinos decay, the spatial distribution of the decay products is determined by the distribution of the non-relativistic decaying particles; it is not identical to the distribution of the background massless neutrinos. In fact, the approach of references [18,19] violates energy-momentum conservation. Although in this approach energy and momentum are explicitly conserved at zeroth order (the mean), the first order perturbations violate energy-momentum conservation.
This may seem like a small effect, but it actually has significant consequences for the CMB fluctuation spectrum. In the next section, we discuss the formalism for the Sachs-Wolfe effect in the presence of neutrinos decaying after recombination. In section III, we present our results, showing the effects of correctly incorporating the spatial distribution of the decay products, and provide a simple physical explanation of these effects. In section IV, we show how our revised calculation affects the excluded region in the neutrino mass-lifetime plane, and in section V we briefly summarize our conclusions. A comparison of our new results with current data leads to the excluded region m_h > 100 eV, τ > 10^12 sec, although smaller masses can also be excluded for a smaller range of τ.
II. THE ISW EFFECT WITH AN UNSTABLE NEUTRINO: FORMALISM
To calculate the CMB fluctuations in the presence of a decaying massive neutrino, we first review the basic precepts of the pertinent linear perturbation theory. The perturbed homogeneous, isotropic FRW metric can be parametrized as

ds² = a²(τ) [ −(1 + 2ψ) dτ² + (1 − 2φ) δ_ij dx^i dx^j ] ,    (1)

where a is the scale factor normalized to unity today and τ is the conformal time defined by dτ = dt/a, t being the proper time of a comoving observer. This particular gauge is referred to as the conformal Newtonian gauge because the behavior of the potentials (φ, ψ) is akin, loosely speaking, to that of the Newtonian potential. These potentials determine the large scale CMB behavior. In particular, the photon temperature perturbation decomposed into its Fourier and angular modes can be shown to be [21]

Δ_ℓ(k, τ₀) = ∫₀^{τ₀} dτ e^{−κ(τ)} [ φ̇(k, τ) + ψ̇(k, τ) ] j_ℓ[k(τ₀ − τ)] ,    (2)

where the subscript '0' refers throughout to the present time and κ is the optical depth from the present to some conformal time τ in the past. For the purpose of clarity, all the sources contributing to the anisotropy from inside the last scattering surface have been set to zero in Eq. 2. The effect of the sources contributing to the anisotropy between last scattering and the present (as given in Eq. 2) is called the Integrated Sachs-Wolfe (ISW) effect. The power in the ℓ-th multipole is normally defined as ℓ(ℓ + 1)C_ℓ with [20]

C_ℓ = (4π)² ∫ k² dk P(k) |Δ_ℓ(k, τ₀)|² ,    (3)

where P(k) is the primordial power spectrum. The sources mentioned in connection with the ISW effect can be varied. At any time, the modes that are important to the ISW effect correspond to those scales which are smaller than the sound horizon of the whole fluid (matter+radiation) at that time. For these modes, the potentials can decay if there is radiation pressure or if the universe expands rapidly. In models with no cosmological constant, the main contribution to the ISW effect comes from just after recombination (since radiation redshifts faster than matter). Inclusion of a cosmological constant leading to a rapid expansion of the universe late in its history would boost the power on larger scales (small ℓ). Any other astrophysical process which contributes to the radiation content of the universe between last scattering and the present will lead to an increase in the total ISW effect. One such scenario is that of a massive particle decaying around or after last scattering. We will consider the case of a massive neutrino decaying non-relativistically into (effectively) massless particles. The details of the daughter particles turn out to be irrelevant.
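To get a feel for how a late-time source projects onto multipoles through the Bessel kernel in the line-of-sight integral of Eq. (2), the integral can be evaluated numerically for a toy source. In the sketch below, κ is set to zero and φ̇ + ψ̇ is modeled as a Gaussian pulse at a conformal time tau_dec; the shape, times, and normalization are entirely illustrative:

```python
import numpy as np
from scipy.special import spherical_jn

tau0, tau_dec, width = 1.0, 0.3, 0.05
tau = np.linspace(1e-4, tau0, 2000)
pulse = np.exp(-0.5 * ((tau - tau_dec) / width) ** 2)   # (phi_dot + psi_dot)(tau)

def delta_l(l, k):
    # Delta_l(k) = integral dtau e^{-kappa} (phi_dot + psi_dot) j_l[k(tau0 - tau)]
    return np.trapz(pulse * spherical_jn(l, k * (tau0 - tau)), tau)

# A source localized at tau_dec contributes mainly near l ~ k (tau0 - tau_dec).
for l in (5, 10, 20, 40):
    k = l / (tau0 - tau_dec)
    print(f"l = {l:2d}, k = {k:5.1f}: Delta_l = {delta_l(l, k):.4f}")
```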
To quantify the evolution of the massive neutrino density, we will consider the Boltzmann equation for its distribution. In a homogeneous and isotropic universe, the distribution of the collisionless massive neutrino decaying non-relativistically into two massless particles follows [22]

∂f⁰_h/∂τ = −(a² m_h)/(t_d ε_h) f⁰_h ,    (4)

where t_d is the mean lifetime of the neutrino, and ε_h and q_h are the comoving energy and momentum, ε_h² = q_h² + m_h² a²; a superscript '0' will be used throughout to denote unperturbed quantities. We make the following simplifications throughout our treatment: (1) we neglect inverse decays, (2) we neglect spontaneous emission, and (3) we neglect the Pauli blocking factor. The solution approaches the familiar exp(−t/t_d) behavior as the neutrino becomes non-relativistic. The evolution equation for the energy density of the unstable neutrino is the integral of Eq. 4. It reads

ρ̇⁰_h = −3 (ȧ/a) (ρ⁰_h + P⁰_h) − (a/t_d) m_h n⁰_h ,    (5)

where overdots represent differentiation with respect to conformal time. It should be noted (as it is important if the decay is not completely non-relativistic) that the right hand side contains the product of m_h and n⁰_h (the number density) and not ρ⁰_h. We now turn on the perturbations in the metric. Although the conformal Newtonian gauge is the most useful in which to understand the ISW effect, for computational purposes we will define all our variables in the synchronous gauge. Thus, we will express the integrand in Eq. 2 in terms of perturbations in the synchronous gauge. The synchronous gauge has the property that the coordinate time and the proper time of a freely falling observer coincide. All the perturbations are in the spatial part of the metric (g_ij = a² δ_ij + a² h_ij) in this gauge. The perturbation h_ij can be Fourier transformed and broken up into its trace and a traceless part as [23]

h_ij(x, τ) = ∫ d³k e^{ik·x} [ k̂_i k̂_j h(k, τ) + (k̂_i k̂_j − δ_ij/3) 6η(k, τ) ] .    (6)

Instead of working with the conjugate momentum in the perturbed space-time, we will use q_h and ε_h as defined above [24] and, in keeping with that, we will write out the perturbed massive neutrino distribution as

f_h(x, q_h, n̂, τ) = f⁰_h(q_h, τ) [ 1 + Ψ_h(x, q_h, n̂, τ) ] .    (7)

Because the decay term is linear in f_h, the form of the equation for the evolution of Ψ_h is identical to that of the stable massive neutrino, but with f⁰_h now given by Eq. 4. The stable massive neutrino case has been clearly worked out in Ref. [23]. The decay radiation rises exponentially from being negligible in the past to some maximum value at τ ∼ τ_d and then drops off as a⁻⁴ like normal radiation. It is therefore more informative to follow the quantity r_rd = ρ⁰_rd/ρ⁰_ν, where 'rd' denotes the decay radiation and ρ⁰_ν is the cosmological density in a massless neutrino. The evolution equation for r_rd is

ṙ_rd = (a/t_d) m_h n⁰_h / ρ⁰_ν .    (8)

The treatment of the perturbations in the decay radiation will be analogous to that of the massless neutrino as worked out in Ref. [23].
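As an aside, the background evolution implied by Eq. (5), together with the corresponding source feeding the decay radiation, is easy to integrate numerically. A toy sketch in proper time for a matter-dominated universe (a ∝ t^{2/3}, so H = 2/(3t)), treating the neutrino as fully non-relativistic so that m_h n⁰_h ≈ ρ⁰_h and P⁰_h ≈ 0; units and initial values are illustrative only:

```python
import numpy as np
from scipy.integrate import solve_ivp

t_d = 1.0e15                  # neutrino lifetime [s]
t0, t1 = 1.0e13, 4.0e17       # from after recombination to roughly today [s]

def rhs(t, y):
    rho_h, rho_rd = y
    H = 2.0 / (3.0 * t)                        # matter-dominated Hubble rate
    drho_h  = -3.0 * H * rho_h - rho_h / t_d   # dilution + decay (cf. Eq. 5)
    drho_rd = -4.0 * H * rho_rd + rho_h / t_d  # radiation redshift + source
    return [drho_h, drho_rd]

sol = solve_ivp(rhs, (t0, t1), [1.0, 0.0], rtol=1e-8, dense_output=True)

# The decay radiation peaks near t ~ t_d, then falls off as a^-4 ~ t^{-8/3}.
for t in np.geomspace(t0, t1, 7):
    rho_h, rho_rd = sol.sol(t)
    print(f"t = {t:.1e} s   rho_h = {rho_h:.3e}   rho_rd = {rho_rd:.3e}")
```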
To evolve the perturbations in the decay radiation, we will integrate out the momentum dependence in the distribution function by defining (in Fourier space)

F_rd(k, n̂, τ) ≡ [ ∫ q² dq q f⁰_rd(q, τ) Ψ_rd ] / [ ∫ q² dq q f⁰_ν(q) ] ,    (9)

where q = q n̂ and Ψ_rd is defined analogously to Eq. 7. The equation governing the evolution of F_rd can be worked out to give

Ḟ_rd + ikμ F_rd − 4 r_rd [ η̇ − (ḣ + 6η̇) μ²/2 ] = ṙ_rd Σ_p N_p P_p(μ) ,    (10)

where μ = k̂·n̂, P_n(μ) are the Legendre polynomials of order n, and the coefficients N_p are momentum integrals over the perturbed massive-neutrino distribution (Eq. 11). The series of terms arises because the perturbed quantities depend on the direction of momentum: to get the contribution to a daughter particle with momentum q, we need to integrate over all possible q_h, so the source in Eq. 10 depends on both μ and q·q_h. The situation simplifies enormously for non-relativistic decays because each term N_p, which contributes progressively to the p-th multipole, is of O(q_h^p/(a^p m_h^p)) or higher. In Eqs. 10 and 11, the series has been truncated by keeping only terms up to O(q_h²/(a² m_h²)) in the integrand. Similar equations for the evolution of perturbations in the decay radiation can be found in references [25], [27]. Apart from N₀, the terms on the right-hand side of Eq. (10) are completely negligible for non-relativistic decays. The use of Eq. 10 is our only difference from the treatment in ref. [18]. In the latter paper, the relativistic decay products were simply added to the neutrino background in CMBFAST. This is equivalent to setting the right hand side of Eq. 10 to zero. Since the perturbations in the decay products are determined by the perturbations in both the metric and the decaying massive particles, they are correctly described by Eq. 10. Although this may seem like a minor difference, it produces very large effects, as we now show.
III. THE ISW EFFECT WITH AN UNSTABLE NEUTRINO: RESULTS
The formalism outlined above for the evolution of an unstable neutrino and its decay products was integrated into the CMBFAST code [20]. We investigated a range of masses from 10 eV to 10^4 eV and lifetimes from 10^12 to 10^18 seconds. The underlying cosmology was taken to be a standard (Ω = 1) CDM model with h = 0.5 (where H₀ = 100h km s⁻¹ Mpc⁻¹), baryon density Ω_B h² = 0.02, and scale-invariant isentropic initial conditions (the same model was used in ref. [18]). Our results are shown in Fig. 1 for several masses and lifetimes, along with the results obtained by simply adding the decay products to the relativistic background. As pointed out in Ref. [18], there is indeed an enhancement in the spectra at relatively large scales due to the ISW effect produced by the decaying neutrino. We will see in section IV that for many values of neutrino mass and lifetime, the spectrum produced is far from that observed today, and therefore a large region of parameter space is ruled out due to this effect. The location of this ISW-induced bump is determined by the lifetime of the neutrino. For lifetimes shorter than the age of the universe, inhomogeneities on scales k project onto angular scales ℓ ∼ kτ₀, where τ₀ is the conformal time today, and we assume a flat universe. The potentials vary in time (and hence cause the ISW effect) most significantly at the time of decays on scales of order the sound horizon: k²_sh ≃ 3/(4 τ_d² w), where w = P/ρ. Therefore, the bump in the spectrum is produced at ℓ ∼ k_sh τ₀ ≃ (τ₀/τ_d)(4w/3)^{−1/2}. At these late times, the dominant contribution to w comes from the decay radiation; hence w ≃ Ω_rd/3, where Ω_rd is the fraction of critical density in decay radiation. Therefore, the ISW bump should be roughly at

ℓ_ISW ≃ (τ₀/τ_d) (4 Ω_rd/9)^{−1/2} .    (12)

For a matter dominated universe, conformal time and time are related as τ ∝ t^{1/3}. For a m_h = 10 eV, t_d = 10^15 sec neutrino, Ω_rd ≃ 0.15 and τ₀/τ_d ≃ (4 × 10^17 sec / 10^15 sec)^{1/3} ≃ 7.4. Therefore, in this case we expect ℓ_ISW ≃ 29. The actual peak occurs at a larger value of ℓ, due to entropy fluctuations which decrease w, thereby increasing k_sh and, finally, ℓ_ISW.
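A quick numerical check of the estimate in Eq. (12) with the example values just quoted (m_h = 10 eV, t_d = 10^15 s, Ω_rd ≃ 0.15, τ ∝ t^{1/3}):

```python
# Reproduce the order-of-magnitude estimate l_ISW ~ (tau_0/tau_d)(4*Omega_rd/9)^(-1/2).
t0_over_td = (4.0e17 / 1.0e15) ** (1.0 / 3.0)   # tau_0/tau_d in matter domination
omega_rd = 0.15
l_isw = t0_over_td * (4.0 * omega_rd / 9.0) ** -0.5
print(f"tau_0/tau_d ~ {t0_over_td:.1f}, l_ISW ~ {l_isw:.0f}")   # ~7.4 and ~29
```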
Notice from Figure 1 that we find quantitative disagreement with the results of Lopez et al. [18] (dashed curves). The new results show that a more accurate treatment of the spatial distribution of the decay products produces a surprisingly large change in the CMB fluctuation spectrum compared to the results of reference [18]. This difference is larger for smaller masses, as can be seen in the figure. At least for low masses, the most obvious difference between the old and new spectra is the smaller size of the ISW effect in the new case. This difference has a physical explanation: by not properly treating the perturbations in the decay radiation, one overestimates an important source of the potential decay that drives the ISW effect. To see this, we first expand the Boltzmann equation for the decay radiation perturbations, Eq. 9, in multipole moments, F_rd = Σ_ℓ (−i)^ℓ (2ℓ + 1) F_rd,ℓ P_ℓ(μ), to obtain the following hierarchy, shown here for ℓ ≤ 2:

δ̇_rd + (4/3) θ_rd + (2/3) ḣ = (ṙ_rd/r_rd) (δ_h − δ_rd) ,    (13)
θ̇_rd − k² (δ_rd/4 − σ_rd) = −(ṙ_rd/r_rd) θ_rd ,    (14)
σ̇_rd − (4/15) θ_rd + (3/10) (k/r_rd) F_rd,3 − (2/15) (ḣ + 6η̇) = −(ṙ_rd/r_rd) σ_rd ,    (15)

where δ_rd = F_rd,0/r_rd, θ_rd = 3k F_rd,1/(4 r_rd) and σ_rd = F_rd,2/(2 r_rd). The treatment of Lopez et al. [18] is equivalent to neglecting the right hand sides of the equations above. This simplification breaks down near τ ∼ τ_d, where ṙ_rd/r_rd is not negligible. Neglecting the ṙ_rd/r_rd terms in the Boltzmann equations for the decay radiation perturbations results in errors in the perturbations. Let us focus on θ_rd, which turns out to be primarily responsible for the big difference. Consider Eq. 14 for modes above the horizon at τ ∼ τ_d, since for these modes the approximate treatment of Ref. [18] gives wrong results. For these modes the k² terms calculated in the approximate scheme can be shown (see Appendix) to be roughly similar to their exact values. Then the exact solution θ_rd is related to the approximate solution θ^a_rd by

θ̇_rd ≃ θ̇^a_rd − (ṙ_rd/r_rd) θ_rd ,    (16)

where the superscript here and in what follows denotes the solution to the set of equations 13-15 obtained in the approximate scheme by neglecting the feedback terms on the right hand side. The exact solution for θ_rd is therefore much smaller than the approximate one. Examples for several different modes are shown in Figure 2. These large overestimates of θ_rd lead to correspondingly large overestimates of the ISW effect and are primarily responsible for the differences between our spectra and those generated in Ref. [18]. The Appendix demonstrates precisely how the perturbations in the decay-produced radiation affect the potentials that govern the ISW effect, and how treating the decay products as identical to the massless neutrinos violates energy-momentum conservation. The bottom line is that the ISW effect depends significantly on the behavior of θ_rd, and inaccuracies in it lead directly to inaccuracies in the C_ℓ's. Why does the approximation work better for higher mass neutrinos? The ISW effect is generated during times when the universe has appreciable radiation. For low-mass neutrinos whose decay radiation never dominates the energy density, the decay radiation redshifts away relative to the matter and is only important near τ ∼ τ_d. Therefore, neglecting the ṙ_rd/r_rd terms creates errors in the decay radiation perturbations at the crucial time when they are driving the ISW effect. If the neutrino is massive enough, then its decay products are important for a range of times with τ ≫ τ_d, when the approximation is good. So the approximate treatment works better for higher-mass neutrinos, like the m_h = 10 keV, t_d = 10^12 sec case.
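A toy integration illustrates the suppression encoded in Eq. (16): the exact θ_rd obeys θ' = S(τ) − (ṙ_rd/r_rd)θ, while the approximate θ^a obeys θ' = S(τ) with the same source S (standing in for the k² terms). Here r_rd(τ) ∼ 1 − exp(−τ/τ_d) is merely a stand-in for the buildup of decay radiation; all numbers are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

tau_d = 1.0

def r_ratio(tau):
    # rdot_rd / r_rd for r_rd ~ 1 - exp(-tau/tau_d): large for tau << tau_d.
    return np.exp(-tau / tau_d) / (tau_d * (1.0 - np.exp(-tau / tau_d)))

def rhs(tau, y):
    theta, theta_a = y
    source = 1.0                     # common source term (the k^2 pieces)
    return [source - r_ratio(tau) * theta, source]

sol = solve_ivp(rhs, (1e-3, tau_d), [0.0, 0.0], max_step=0.005)
theta, theta_a = sol.y[:, -1]
print(f"at tau = tau_d: exact theta = {theta:.3f}, approximate theta_a = {theta_a:.3f}")
# The feedback term substantially suppresses the exact solution near tau ~ tau_d.
```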
There are other visible differences between the anisotropy spectra generated in reference [18] and our more accurate treatment. One difference, which exacerbates the rise in power at large scales, is a drop in the small-scale ISW effect. For modes which enter the horizon when there is significant radiation, the δ_h term in Eq. (13) is an important source term. This increases δ_rd relative to δ^a_rd, and since δ_rd is a source for the evolution of θ_rd, it implies that θ^a_rd < θ_rd. Thus there is a decrease in the ISW effect at small scales in the approximate scheme of ref. [18]. This is not visible for the 10 eV unstable neutrino (in Fig. 1) because of the comparatively large signature of the first peak, but it is readily apparent for the 10 keV neutrino because of the large ISW effect at small scales.
IV. COMPARISON WITH CURRENT CMB DATA
Since the detection of anisotropies in the CMB by COBE [28], there have been dozens of observations of anisotropies on a wide variety of angular scales (refs. [29]-[40]). We now use these observations to place more accurate limits on neutrino mass and lifetime. In ref. [18], a very rough constraint was placed on decaying neutrino models: a model was excluded if the power at ℓ = 200 was greater than at ℓ = 10. As we have noted in the previous section, a more accurate treatment of the decaying neutrinos results in a much smaller distortion in the CMB spectrum for a certain range of neutrino masses and lifetimes. However, as we will see, consideration of all the data leads to constraints which are almost as stringent as the rough contours in ref. [18]. CMB experiments typically report an estimate of the band power

C̄_i = Σ_ℓ [(ℓ + 1/2)/(ℓ(ℓ + 1))] C_ℓ W_i,ℓ / Σ_ℓ [(ℓ + 1/2)/(ℓ(ℓ + 1))] W_i,ℓ ,    (17)

where W_i,ℓ is the window function, which depends on the beam size and chopping strategy of experiment i. Each of these comes with an error bar or, in the case of correlated measurements, an error matrix M⁻¹. The naive way to constrain parameters in a theory then is to form

χ² = Σ_ij [C̄_i^obs − C̄_i(C_ℓ)] M_ij [C̄_j^obs − C̄_j(C_ℓ)] .    (18)

Here we have explicitly written the dependence of C̄_i on the theoretical C_ℓ's, which in turn depend on the cosmological parameters. This naive statistic is useful only if the band power errors are Gaussian. In fact, the probability distribution is typically non-Gaussian, with a large tail at the high end and a sharp rise at the low end of the distribution. In recognition of this, and guided by some compelling theoretical arguments, Bond, Jaffe, and Knox [41] proposed forming an alternative statistic:

χ² = Σ_ij [Z_i^obs − Z_i(C_ℓ)] M^Z_ij [Z_j^obs − Z_j(C_ℓ)] ,    (19)

where

Z_i = ln(C̄_i + x_i) ,    (20)

with x_i an experiment-dependent quantity determined by the noise. The covariance matrix is now

M^Z_ij = (C̄_i^obs + x_i) M_ij (C̄_j^obs + x_j) .    (21)

Bond, Jaffe, and Knox [41] have tabulated and made available the relevant data from the experiments in refs. [28]-[40]. We use this information and formalism² to constrain the mass and lifetime of unstable neutrinos. The χ² in Eq. 19 depends on the parameters of the cosmological model. In principle, it would be nice to allow as many parameters as possible to vary in addition to the mass and lifetime of the neutrino. This must be balanced against the constraints imposed by the non-negligible time needed to run the modified version of CMBFAST³. Our strategy is to vary the mass and lifetime of the neutrino; the overall normalization of the C_ℓ's; the primordial spectral index (equal to one for Harrison-Zel'dovich fluctuations); and the calibration of each experiment. For the other cosmological parameters, we make "conservative" choices. That is, we choose values likely to make the power on small scales (ℓ ∼ 200) as large as possible compared with the power on large scales. This acts against the effect of the decaying neutrino, which boosts power on large scales, and therefore leads to more conservative limits. At each point in (m, τ) space, we use a Levenberg-Marquardt algorithm (see e.g. [43,44]) to find the values of normalization, spectral index, and calibration which minimize the χ² defined in Eq. 19. The contours in Fig. 3 show these best fit χ² values in the (m, τ) plane.
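The offset-lognormal statistic of Eqs. (19)-(21) is straightforward to evaluate once band powers, their covariance, and the noise offsets are in hand. A minimal Python sketch; all numbers below are illustrative placeholders, not actual experimental band powers:

```python
import numpy as np

def bjk_chi2(C_theory, C_obs, cov, x):
    """chi^2 computed on Z = ln(C + x) rather than on C directly (Eq. 19)."""
    weight = np.linalg.inv(cov)                      # M, the inverse error matrix
    scale = C_obs + x
    M_Z = scale[:, None] * weight * scale[None, :]   # Eq. (21)
    d = np.log(C_theory + x) - np.log(C_obs + x)     # Z_theory - Z_obs (Eq. 20)
    return d @ M_Z @ d

# Three uncorrelated mock band powers (uK^2) with 1-sigma errors and offsets.
C_obs = np.array([800.0, 1200.0, 900.0])
cov   = np.diag(np.array([200.0, 250.0, 220.0]) ** 2)
x     = np.array([150.0, 100.0, 120.0])
print(bjk_chi2(np.array([850.0, 1100.0, 950.0]), C_obs, cov, x))
```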
Figure 3 shows the constraints on the neutrino mass and lifetime for a Hubble constant h = 0.5 and Ω_B h² = 0.02 in a flat (Ω = 1) matter dominated (Ω_Λ = 0) universe. The high baryon content is above the favored value of Tytler and Burles [42] and serves to raise the power on small scales. Masses greater than 100 eV are ruled out for almost all lifetimes we have explored (τ > 10^12 sec). For lifetimes between 10^14 and 10^15 sec, masses as low as 30 eV are excluded at the two-sigma level. These results are similar to those of ref. [18], but more reliable because of the improvements in the calculated spectra and the more careful treatment of the data. We checked that the contours for a different set of (h, Ω_b) were similar to the contours in Fig. 3. Fig. 4 shows the results for a cosmological constant-dominated universe. Again, a sizable region is ruled out, reflecting the robustness of the constraint. Hannestad [19] performed a similar calculation, using future CMB experiments to rule out decaying neutrino models, but he used the same approximation as in reference [18]: the decay products were added into the background neutrino density. We expect that his excluded-region contours for low masses should shrink, since the ISW effect is the main discriminator for these masses. It has been noted that the decay products from a very massive neutrino could keep the universe substantially populated with radiation, or even radiation-dominated, for most of its history. The presence of radiation has the effect of stopping the growth of density perturbations, which in a matter-dominated universe would grow as δ ∼ a. Since these density perturbations should (eventually) collapse into the structure we see today, it is clear that structure formation arguments can also provide constraints on the neutrino mass and lifetime. Very coarse constraints on the radiation density can be placed by requiring that the scales relevant to structure formation are able to grow sufficiently (assuming, of course, that we know the initial perturbations), as is done in Ref. [26]. In fact, for a scale-invariant initial spectrum, the structure formation arguments of Ref. [26] also rule out a region at the bottom-right of our excluded region. A more detailed analysis yields more stringent constraints [27]. In light of this, it is important to understand that the constraints from the CMB are most useful for low masses, i.e., for massive decaying neutrinos which do not affect the late-time growth of the density perturbations appreciably. Future experiments (MAP and PLANCK) have the potential to constrain neutrino masses as low as 1 eV and maybe even lower [19]. In the end, CMB and large scale structure constraints on massive decaying neutrinos both overlap and complement each other.
² We also account for calibration uncertainty in the manner set down in ref. [41].
³ The modified version, accounting for decaying neutrinos, takes about ten times longer than the plain vanilla code.
V. CONCLUSIONS
Our results indicate that for calculations involving the effects of decaying particles on CMB fluctuations, exact conservation of energy-momentum (not just conservation of the mean energy-momentum) is crucial. When perturbations in the decay products are correctly treated as being determined by the perturbations in both the metric and the decaying massive particle, energy and momentum of the massive particle plus its decay products are conserved.
The result is a much smaller change (when an unstable neutrino is added) in the CMB fluctuation spectrum than was noted in ref. [18]. However, by using a comparison with current data, rather than a simple constraint on C_200/C_10, we have been able to obtain an excluded region only slightly less restrictive than that obtained in ref. [18]. This excluded region will grow as more data become available, culminating potentially in very restrictive limits from MAP and PLANCK [19]. Our results, of course, can be generalized to arbitrary decaying particles.

The CMB spectra used in this work were generated with a modified version of CMBFAST [20]. We thank Lloyd Knox for providing the data used to generate the constraints in Section IV. This work was supported by the DOE and the NASA grant NAG 5-7092 at Fermilab and by the DOE grant DE-FG02-91ER40690 at Ohio State.

APPENDIX

demonstrating the required cancellation. In our approximation, the decay radiation perturbations can be calculated [...]; the θ term is important, and inaccuracies in the large scale behavior are generated. The condition that α given by Eq. (A10) should match that obtained from Eq. (A11) is conservation of momentum for the massive neutrino and its decay products. In the approximate scheme, the unperturbed quantities for the decay radiation are calculated correctly, while the perturbations in it are set equal to those of the massless neutrino. This violates the energy-momentum conservation conditions for the system of the massive neutrino plus its decay products, and this is the reason that different combinations of the Einstein equations lead to different potential decay rates. It may be noted that the CMBFAST code [20] used to calculate the fluctuation spectrum implicitly assumes energy-momentum conservation. When this condition is violated, the code cannot produce internally consistent results. In Fig. 5, we have plotted the result of using Eq. (A10) (in place of Eq. (A11)) with the approximate scheme and, as expected, good agreement with the actual curves is obtained. This exercise clearly shows that it is important to check for energy-momentum conservation when using approximate methods to model any part of the energy-momentum tensor.

FIG. 5 (caption fragment): ... 0.08, h = 0.5) in the presence of a decaying neutrino with the indicated mass and lifetime (solid curve). The dashed curve is the spectrum obtained when the decay products are added to the background neutrinos and α is evolved as an explicit differential equation instead of being fixed as in the standard CMBFAST code. The dotted curve gives the fluctuation spectrum in the absence of a decaying neutrino.
Reduced incidence of acute pharyngitis and increased incidence of chronic pharyngitis under the COVID-19 control strategy in Beijing

In this study, we investigated the impact of the COVID-19 control regulations on the incidence of pharyngitis based on data from Beijing Jiangong Hospital. From 2019 to 2022, cases of acute pharyngitis decreased, while cases of chronic pharyngitis increased. During the COVID-19 pandemic, strict control regulations were enforced in Beijing. Wearing masks and social distancing would reduce the incidence of acute pharyngitis. However, repeatedly taking throat swabs would increase the incidence of chronic pharyngitis. This study will advance our understanding of the potential benefits and secondary harms of the strict COVID-19 control policy.

Dear editor, We read with interest the article by Tré-Hardy et al. on the kinetics of anti-spike IgG (S-IgG) levels after vaccination among healthcare workers who received the mRNA-1273 (Moderna) vaccine. 1 The authors reported that antibody levels markedly declined between 3 and 6 months after the second dose. We also investigated the kinetics of S-IgG levels for 6 months after receipt of the BNT162b2 (Pfizer-BioNTech) vaccine, and the T-cell response among low responders, in a cohort of workers at the National Center for Geriatrics and Gerontology, comprising a hospital and a research institute, in Japan. We have previously reported that seroprevalence against the nucleocapsid of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) among our staff was equivalent to that observed in the local community and that 99.4% of the participants had S-IgG. 2,3 However, the kinetics of the S-IgG titer after vaccination remain unclear. Additionally, T-cell-mediated cellular immunity may affect COVID-19 recovery, even with a low antibody response. 4 The relationship between humoral and cellular immune responses to vaccination has rarely been investigated. Of the 878 employees, 800 agreed to participate in the survey (participation rate: 91.1%). They received the first vaccination between February and June 2021 and the second dose 3 weeks later. Blood samples were obtained between June 14 and 18, 2021. This study was approved by the Institutional Review Board of the Ethics and Conflicts of Interest Committee (approval no. 1481). All participants provided written informed consent. We performed all laboratory tests in-house using two S-IgG chemiluminescence enzyme immunoassays: the SARS-CoV-2 S-IgG assay from Sysmex and the ARCHITECT SARS-CoV-2 IgG assay from Abbott. The positive cutoff values were 20 BAU/mL for the Sysmex assay and 50 AU/mL for the Abbott assay. The participants' characteristics are summarized in Table S1 (N = 800). The mean age ± SD was 41.0 ± 11.6 years, and 66.4% were women. Clinical staff (doctors, nurses, and allied healthcare professionals) accounted for 63.8%, whereas the others were engaged in basic research and investigation, general office duties, and other nonclinical work. Most participants (n = 642, 80.3%) received two doses of the BNT162b2 vaccine at least 1 week before their annual health checkups in June. Seven participants received the second dose in March, seven in April, 515 in May, and 113 in June. Of the 527 eligible participants who had received both vaccine doses by May 2021, all but one were positive for S-IgG. An age-dependent decline in antibody response was observed, with a Spearman correlation coefficient of -0.305 for the Sysmex test (p < 0.001; Fig. 1A).
Women tended to develop higher S-IgG titers in each age group. The difference was significant in the age group ≥51 years (Mann-Whitney U test with Bonferroni correction, p < 0.001), and the age difference was statistically significant regardless of sex (Kruskal-Wallis test, p < 0.001) (Fig. 1B). The antibody titer tended to decline post-vaccination in a time-dependent manner, with a Spearman correlation coefficient of −0.325 (p < 0.001; Fig. 1C). The median titer of samples collected 6-20 days after the second vaccination (n = 118) was 2636 BAU/mL for the Sysmex test. Among the individuals who received the second dose 21-50 days before the survey (n = 512), the median antibody titer decreased to 1599 BAU/mL. The antibody titer of participants who received the second dose > 50 days before sample collection (n = 12) was reduced to 28.5%. Similar results were obtained with the Abbott test (Fig. S1). Seven participants with low antibody titers were invited to an additional investigation of T-cell-dependent cellular immunity in October 2021. We collected peripheral blood mononuclear cells and detected T cells secreting interferon-γ (IFN-γ) in response to SARS-CoV-2 peptides using the T-SPOT Discovery SARS-CoV-2 kit (Oxford Immunotec). Two participants had counts comparable with those of participants showing higher antibody responses. One participant was receiving an immunosuppressant for kidney transplantation; he showed negative responses in the cellular immunity and seroprevalence tests. Two participants (one immunocompromised and the other with a history of Klippel-Trénaunay-Weber syndrome and protein-losing gastroenteropathy) with titers of < 50 also showed negative responses. A woman who was 20 weeks pregnant showed a negative result in the cellular immunity test, although her antibody titers were relatively high compared with those of the other participants. Further, 297 participants who received additional medical checkups on December 16 or 17, 2021 (checkups required for employees engaged in specific work activities, such as night shifts) donated a blood sample. The participant characteristics in the follow-up survey are presented in Table S2. A sharp decline in S-IgG titer was observed in the 6 months after receipt of both vaccine doses (Fig. 2A). Median titers were reduced by 88.4% (from 2821 to 327 BAU/mL) for those who had participated in the June survey 6-20 days post-vaccination and by 88.9% (1537 to 172 BAU/mL) for those who had participated at 21-50 days. However, most participants (n = 293, 98.7%) maintained a positive humoral immune response. Sex-dependent differences were not observed, whereas the titer significantly declined with age, with a Spearman correlation coefficient of −0.248 (Fig. 2B, C). Consistent with previous reports, 1,5,6 the anti-spike antibody titer resulting from mRNA vaccination was markedly reduced within a few months and continued to decline thereafter, suggesting the need for additional doses. Conversely, cellular responses, which remain broad to the wild type and variants, 7 were maintained even at 5-6 months after the second dose in those who developed a sufficient antibody titer. A survey of larger cohorts in the future is warranted to confirm our observation. Funding This work was supported by the Japan Health Research Promotion Bureau Research Fund [grant number 2020-B-09] and research funding for longevity sciences from the National Center for Geriatrics and Gerontology [grant number 21-48]. Data availability statement Not applicable. Declarations of Competing Interest None.
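As a methodological aside, the nonparametric comparisons used in the letter above are available in standard libraries; the sketch below applies them to synthetic titers. The generated arrays, group boundaries, and effect sizes are invented for illustration and are not the cohort data:

import numpy as np
from scipy.stats import spearmanr, mannwhitneyu, kruskal

rng = np.random.default_rng(0)

# Synthetic S-IgG titers (BAU/mL) that fall with age and days since dose 2.
n = 500
age   = rng.integers(22, 70, size=n)
days  = rng.integers(6, 60, size=n)
sex   = rng.integers(0, 2, size=n)          # 0 = men, 1 = women
titer = (4000.0 * np.exp(-0.02 * (age - 22)) * np.exp(-0.015 * days)
         * rng.lognormal(0.0, 0.4, size=n))

# Age- and time-dependent decline (Spearman rank correlations).
rho_a, p_a = spearmanr(age, titer)
rho_d, p_d = spearmanr(days, titer)
print(f"rho(age)={rho_a:.3f} (p={p_a:.2g}); rho(days)={rho_d:.3f} (p={p_d:.2g})")

# Sex comparison within one age band, Bonferroni-corrected over 4 bands.
band = age >= 51
u, p = mannwhitneyu(titer[band & (sex == 1)], titer[band & (sex == 0)])
print("Mann-Whitney (>=51 y), Bonferroni-corrected p:", min(4 * p, 1.0))

# Age-group comparison regardless of sex (Kruskal-Wallis).
groups = [titer[age < 35], titer[(age >= 35) & (age < 51)], titer[age >= 51]]
h, p_kw = kruskal(*groups)
print(f"Kruskal-Wallis H={h:.2f}, p={p_kw:.2g}")

# Percent decline of the median titer between two sampling windows.
early, late = np.median(titer[days <= 20]), np.median(titer[days > 50])
print(f"median decline: {100.0 * (1.0 - late / early):.1f}%")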
Observational studies have suggested that sex hormones contribute to differences in COVID-19 mortality. However, the alterations of sex hormones were accompanied by changes in many risk factors for severity and mortality of COVID-19, such as obesity, diabetes, and other comorbidities. It is therefore unconvincing to judge the causality between sex hormones and COVID-19 outcomes on the basis of epidemiological data, owing to the above confounding factors. Mendelian randomization (MR) analysis is an analytic method for estimating causal effects which overcomes the limitations of measurement error and confounding frequently encountered in observational studies. In this study, we performed a sex-stratified two-sample MR analysis to explore the causal relationship of serum sex hormone levels, including bioavailable testosterone (BAT), total testosterone (TT), and sex hormone binding globulin (SHBG), with COVID-19 outcomes. We obtained summary statistics for the sex hormones BAT, TT, and SHBG from a previous genome-wide association study based on genetic data in the UK Biobank. 2 The sex hormone datasets can be downloaded from https://www.ebi.ac.uk/gwas/. The GWAS summary statistics for the COVID-19 susceptibility (C2: covid vs population), hospitalization (B2: hospitalized covid vs population) and severe disease (A2: critically ill covid vs population) outcomes were obtained from the COVID-19 Host Genetics Initiative (https://www.covid19hg.org/, Release 5, European ancestry cohorts excluding UKBB). The selection of instrumental variables followed the previous procedure: (1) select SNPs whose effect on sex hormones was significant (P < 5 × 10^-8); (2) match these SNPs with the outcome dataset by rsid; (3) obtain independent SNPs through a clumping procedure based on linkage disequilibrium, with r^2 < 0.001 or a physical distance of more than 10,000 kb, via PLINK; (4) remove SNPs significantly associated with the outcome. 3 We calculated the proportion of variance explained (PVE) for all remaining instruments to evaluate the strength of the instrumental variables. To investigate the causal effect, we performed the MR analysis via the inverse variance weighted (IVW), weighted median, MR-Egger, MR-RAPS, MR-Lasso, and MR-PRESSO methods within males and females separately. All computations were performed using the MendelianRandomization package. In this study, we obtained 208, 141, and 216 instruments for serum TT, BAT, and SHBG levels, respectively, when studying the causal effect of sex hormones on COVID-19 susceptibility in females; these explained 7.42%, 5.44% and 10.95% of the variance of TT, BAT and SHBG, respectively. The number of instruments and the PVE for the instruments used in the analyses of the causal effect of sex hormones on COVID-19 hospitalization and severe disease were approximately the same as in the analysis of COVID-19 susceptibility. The results showed that each one standard deviation (SD) increase in serum BAT level was associated with a higher risk of COVID-19 hospitalization (OR: 1.214, 95% CI: 1.011-1.458, p = 0.038) and COVID-19 severe disease (OR: 1.336, 95% CI: 1.024-1.744, p = 0.033) based on the IVW method, and the results were verified by the MR-Lasso and MR-PRESSO methods (Fig. 1). For an SD increase in serum BAT level, we observed no statistically significant effect upon the odds of COVID-19 susceptibility (p = 0.800) (Fig. 1). Similarly, we observed no significant difference in the risk of COVID-19 outcomes associated with an SD increase in serum TT or SHBG levels based on all MR analysis methods (Fig. 1).
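As a minimal illustration of the fixed-effect IVW estimator at the core of this analysis (the summary statistics below are simulated, not the UK Biobank or Host Genetics Initiative data; the actual analysis used the MendelianRandomization package and additional robust methods):

import numpy as np

def ivw_mr(beta_x, beta_y, se_y):
    # Fixed-effect inverse-variance weighted MR: combine per-SNP Wald
    # ratios beta_y/beta_x with weights (beta_x/se_y)^2.
    ratio = beta_y / beta_x
    w = (beta_x / se_y) ** 2
    beta = np.sum(w * ratio) / np.sum(w)
    se = 1.0 / np.sqrt(np.sum(w))
    return beta, se

rng = np.random.default_rng(1)
bx = rng.uniform(0.02, 0.08, 50)             # SNP -> exposure effects (BAT, in SD)
by = 0.19 * bx + rng.normal(0.0, 0.005, 50)  # SNP -> outcome log-odds, true effect 0.19
sy = np.full(50, 0.005)                      # standard errors of the outcome effects

beta, se = ivw_mr(bx, by, sy)
lo, hi = beta - 1.96 * se, beta + 1.96 * se
print(f"OR per SD increase: {np.exp(beta):.3f} (95% CI {np.exp(lo):.3f}-{np.exp(hi):.3f})")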
The results suggested a causal effect of increased serum BAT levels on a higher risk of COVID-19 hospitalization and severe disease in females. No statistically significant difference in the risk of COVID-19 outcomes associated with an SD increase in serum BAT, TT or SHBG levels was found in males (Fig. 2). These results demonstrate no causal relationship between sex hormones and COVID-19 outcomes in males. In summary, we performed a sex-stratified two-sample Mendelian randomization analysis and demonstrated a causal relationship between increased BAT levels and a higher risk of COVID-19 hospitalization and severe disease in females, but not in males. A null causal relationship was observed for TT and SHBG levels with COVID-19 outcomes in both females and males. The relationships between gender differences, sex hormone differences and COVID-19 outcomes have been reported in several observational studies, 1,4 and various hypotheses have been postulated to explain them, including a sex-dependent difference in immune responses and sex-related expression differences of angiotensin-converting enzyme 2 and transmembrane protease serine 2. [5][6][7] Our study demonstrated a direct causal effect of BAT levels on COVID-19 hospitalization and severe disease in females based on MR analysis. The MR analysis exploited the natural random allocation of genetic variants and limited potential confounding and reverse causation, thus providing powerful evidence for the causal effect of testosterone on COVID-19 outcomes. In addition, BAT refers to testosterone loosely bound to albumin plus free-form testosterone, which participates in biological processes in vivo and may therefore be more relevant to the proposed causal relationship. This is consistent with our finding that only BAT levels, and not serum TT or SHBG levels, have a causal effect on COVID-19 outcomes. Another innovative point of our study is that the causal relationship of BAT with COVID-19 outcomes was observed only in females, not in males. We infer that the relationship between BAT and COVID-19 outcomes may be non-linear, with both high and low BAT levels being risk factors for a poor prognosis of COVID-19. Further studies based on a male population excluding patients with testosterone deficiency are needed to verify this relationship. In this study, we were unable to obtain sex-specific GWAS data for COVID-19 outcomes from updated datasets; further study should be performed. The finding that BAT increased the risk of COVID-19 hospitalization and severe disease in females helps us better understand the role of sex hormones in COVID-19 occurrence and progression, and provides evidence for hormone therapy in the treatment of COVID-19 in females. Funding This work was supported in part by grants from the National Natural Science Foundation of China (81770860) and the Key Research and Development Plan of Shandong Province (2017CXGC1214). Declaration of Competing Interest The authors declare that there is no conflict of interest. Recently in this Journal, Li and colleagues showed that a wild bird-origin H3N8 avian influenza virus can potentially adapt well to a mammalian host [1]. This suggests that H3N8 viruses may pose a potential threat to human health. Several previous studies have also shown that H3N8 viruses are associated with persistent infections and outbreaks in dogs and horses, and have been isolated from pigs, donkeys, and most recently seals [2][3][4].
A case of acute respiratory distress syndrome caused by H3N8 influenza virus has been identified in a child (Fig. 1). A 4-year-old boy with no significant medical history was transferred to the pediatric intensive care unit of Henan Provincial People's Hospital on April 10, 2022. His chief complaints were fever and drowsiness for 5 days, cough for 2 days, dyspnea for 1 day, and 5 h of extracorporeal membrane oxygenation (ECMO). Two days before admission, the child presented to the local hospital, where he was considered to have community-acquired pneumonia and received antibiotics and aerosol therapy. One day before admission, the child developed dyspnea and hypoxemia and was transferred to Zhumadian Central Hospital for endotracheal intubation and ventilator-assisted breathing. On April 10, bronchoalveolar lavage fluid and peripheral blood samples were taken for metagenomic next-generation sequencing (mNGS). Chest CT showed pneumonia in the upper and lower lobes of the right lung and in the left lung (Fig. 2A). After 1 day of treatment with antibiotics and oseltamivir, blood oxygen saturation fluctuated between 60% and 80% under high ventilator parameters, and the oxygenation index was 50. ECMO was recommended. With the consent of the child's parents, the ECMO team of our hospital rushed to the local hospital for femoral vein-internal jugular vein ECMO catheterization 5 h before admission to our hospital. The mNGS results of the bronchoalveolar lavage fluid and peripheral blood samples collected on April 10 showed 7,888,701 reads of suspected H3N8 virus in the bronchoalveolar lavage fluid and 192 reads in the blood. After admission, the patient was given ECMO combined with a ventilator for assisted respiration, oseltamivir as an antiviral, meropenem as anti-infective therapy, methylprednisolone sodium succinate to reduce inflammation, infusions of plasma and immunoglobulin, and other treatments. On April 12, bronchoalveolar lavage was performed again. Repeat mNGS showed 183 reads of viral sequence in the bronchoalveolar lavage fluid, and no viral sequences were detected in the blood. Part of the bronchoalveolar lavage fluid sample was sent to the Center for Disease Control and Prevention. Whole genome sequencing confirmed positivity for avian H3N8 influenza virus, and H3N8 influenza virus was successfully isolated from the bronchoalveolar lavage fluid. On April 19, chest CT showed extensive interstitial changes and consolidation in the lungs, with obvious small airway involvement (Fig. 2B). Multiple deep lavages were performed with a bronchoscope with an outer diameter of 2.8 mm. On April 20, the number of viral sequence reads in the bronchoalveolar lavage fluid had decreased to 10, and no virus was detected in peripheral blood. The child remains in critical condition and is still receiving ECMO combined with ventilator support. In 2014, Karlsson et al. found that the H3N8 avian influenza virus isolated from seals showed high affinity for mammalian receptors, could be transmitted via respiratory droplets, and could replicate efficiently in human lung cells in vitro [3]. Later studies by Hussein et al. also confirmed that seal H3N8 virus and bird H3N8 virus can bind to human lung tissue and replicate in human lung cancer cells [5]. These studies suggest the possibility of H3N8 virus transmission to humans. This case is the first human case of H3N8 infection. Fortunately, so far, no other cases of infection have been found among close contacts of the child.
This patient initially presented with fever and lethargy and then developed severe acute respiratory distress syndrome in a short period of time. H3N8 virus could still be detected in the lavage fluid on the 11th day of admission, which is similar to other severe avian influenza infections such as H7N9, whose clearance from the body is slower. Pan et al. found that the time from the onset of infection to the virus turning negative in 13 cases of severe human infection with H7N9 avian influenza was (15.9 ± 6.4) days, and the time from antiviral treatment to the virus turning negative was (9.8 ± 7.4) days [6]. In this case, chest CT review on the 9th day of admission revealed partial pulmonary fibrosis, which may be related to the pulmonary interstitial and diffuse alveolar damage caused by the virus. When healthy female mice were infected with H3N8 virus, their lung tissue showed progressively aggravated interstitial inflammatory hyperemia at 3 and 6 days after infection [7]. There are no reports on lung imaging of H3N8-related cases, but some studies have shown that the absorption time of severe H7N9 avian influenza lesions is more than 1 month, and residual fibrotic lesions, such as grid-like or cord-like opacities, were still being gradually absorbed in patients at 1-year follow-up [6]. This suggests that even if the patient in this case can be cured and discharged, it will still be necessary to follow the child's imaging over the long term. Declaration of Competing Interest The authors have no competing interests to declare. Supplementary materials Supplementary material associated with this article can be found, in the online version, at doi: 10.1016/j.jinf.2022.05.007. A novel clinical therapy to combat infections caused by hypervirulent carbapenem-resistant Klebsiella pneumoniae Dear editor, We read with great interest a recent study by Tao Lou and colleagues entitled "Risk factors for infection and mortality caused by carbapenem-resistant Klebsiella pneumoniae: a large multicentre case-control and cohort study" in this journal. 1 Carbapenem-resistant K. pneumoniae (CRKP) has been increasingly reported in China, reaching 21.9% of K. pneumoniae infections in hospitals according to the CHINET (China Antimicrobial Surveillance Network) 2021 report (http://www.chinets.com). 2 Data from this study showed a very high mortality rate for CRKP infections, reaching 24.2% for 28-day crude mortality and over 45% for bloodstream infections. It is urgent to develop novel therapies to combat CRKP infections in China. Here, we report a novel and effective orally administered therapy that we have developed to treat infections caused by CRKP and by carbapenem-resistant and hypervirulent K. pneumoniae (CR-HvKP), which exhibits a much higher mortality rate than CRKP. 3 We screened over 1500 FDA-approved pharmaceutical compounds from a commercial drug library (Selleckchem, Drug library L1300) 4 and found a compound, zidovudine (an HIV drug), which exhibited an antibacterial effect on various species of Gram-negative bacteria including E. coli, P. aeruginosa, Salmonella spp., and K. pneumoniae. The MIC of zidovudine against 30 clinical strains of CR-HvKP was determined and ranged from 1.25 to 5 μg/mL. Further tests showed that rifampicin exhibited a very strong synergistic effect with zidovudine. The MICs of rifampicin alone against these clinical CR-HvKP strains were between 16 μg/mL and > 64 μg/mL.
However, when rifampicin was used in combination with zidovudine at a ratio of 8:5, all 30 strains exhibited much lower MICs, ranging from 0.25/0.15625 μg/mL to 4/2.5 μg/mL (zidovudine/rifampicin). A clinical strain, CR-HvKP 1, was selected for further tests. Time-kill assays showed that at 20 μg/mL of zidovudine, the population size of CR-HvKP 1 was reduced over the first 4 h and then regrew to reach the same size as the no-treatment control at 24 h; at 128 μg/mL of rifampicin, the bacterial population was reduced at 6 h but regrew after 24 h, suggesting that CR-HvKP 1 could develop resistance to zidovudine or rifampicin after long-term treatment (Fig. 1a, b). When zidovudine and rifampicin were simultaneously present at 10 μg/mL and 16 μg/mL, respectively, CR-HvKP 1 could be eradicated within 24 h, confirming the synergistic antimicrobial effect of these two drugs (Fig. 1c). The synergistic effect was further tested in a murine sepsis infection model, in which ICR mice were injected intraperitoneally with 3 × 10^7 CFU of CR-HvKP 1. Different combinations of zidovudine and rifampicin were administered to the mice through oral gavage at 1 h post-infection and at 12-hour intervals thereafter. The mortality rate of mice without any treatment was 100% within 24 h; with zidovudine (10 mg/kg), 100% within 36 h; with rifampicin (8 mg/kg), 66% within 72 h; and with the combination of zidovudine (10 mg/kg) and rifampicin (8 mg/kg), 0% within 72 h. This finding demonstrated that the treatment efficacy of simultaneous use of zidovudine and rifampicin was much higher than that of either drug alone (Fig. 1d). A resistance development test was performed to assess the rate of drug-induced resistance in K. pneumoniae under rifampicin, zidovudine, or the combination of both. Incubation of rifampicin or zidovudine with CR-HvKP 1 increased the MIC to 4096 μg/mL or 1280 μg/mL by the first or second passage, respectively, whereas combinational use of the two drugs increased the MIC only to 32 μg/mL + 20 μg/mL over 6 days, a much slower rate of resistance development than with either drug alone (Fig. 1e). Both in vitro and in vivo toxicity tests were performed, with results showing that single and combinational use of rifampicin and zidovudine led to low levels of hemolysis and HepG2 cell lysis, and no significant damage to the liver and kidney of mice was observed after 7 days of treatment at a 10-fold therapeutic dose (Fig. 1). Zidovudine is an HIV drug that inhibits the propagation of HIV through interference with the activity of the viral reverse transcriptase. 5 We hypothesized that the antibacterial effect of zidovudine is probably due to inhibition of the bacterial RNA polymerase complex (RNAp), and hence of bacterial RNA transcription. We successfully co-expressed subunits of K. pneumoniae RNAp to prepare the core enzyme using a dual-plasmid expression system, pACYC-Duet and pET-Duet. 6 The interaction between RNAp and zidovudine was characterized by isothermal titration calorimetry (ITC), which showed that the K_d of zidovudine was 7.8 μM, with the reaction stoichiometry (N sites) and enthalpy (ΔH) being 0.239 ± 0.273 and -335 ± 520 kJ/mol, respectively (Fig. 2a, b). This implies that zidovudine has a high affinity for RNAp from a thermodynamic standpoint. Molecular docking of zidovudine to E. coli RNA polymerase (4YG2) 7 was performed using AutoDock Vina. 8
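Before turning to the docking results, note that checkerboard synergy of the kind reported above is conventionally summarized with the fractional inhibitory concentration index (FICI). A small sketch follows, using MIC values of the order quoted above for one strain; these particular numbers are drawn from the reported ranges for illustration, not from a specific checkerboard plate:

def fici(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    # Fractional inhibitory concentration index from a checkerboard assay;
    # FICI <= 0.5 is conventionally read as synergy, FICI > 4 as antagonism.
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

# Zidovudine alone ~2.5 ug/mL, rifampicin alone ~64 ug/mL, and in
# combination 0.25 and 0.15625 ug/mL, respectively.
print(f"FICI = {fici(2.5, 64.0, 0.25, 0.15625):.3f}")   # ~0.102 -> synergy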
In the docking model, zidovudine was shown to interact with the β' subunit of RNAp (RpoC), forming a tight contact (2.3 Å) with residue Asn 495 in chain J of the β' subunit, while rifampicin binds the β subunit of RNAp (RpoB) (Fig. 2c). 9 The binding pockets of zidovudine and rifampicin were located in the primary channel of RNAp near the active center, suggesting that zidovudine inhibits bacterial growth through a mechanism similar to that of rifampicin but via an alternative binding site. 10 We finally checked whether inhibition of RNAp could suppress virulence expression in CR-HvKP 1. Our data showed that rifampicin or zidovudine alone could reduce the string length and hypermucoviscosity (Fig. 2d, e) as well as the expression of the transcription regulator rmpA2 (Fig. 2f), which are hallmarks of K. pneumoniae virulence. (Figure legend fragments: histopathology (H&E staining) of kidney and liver from ICR mice gavaged with saline or a 10-fold therapeutic dose of the zidovudine-rifampicin combination for 7 days; string length (mm) of CR-HvKP 1 colonies on sheep blood agar containing different drugs; mucoviscosity of CR-HvKP 1 under different drug treatments, measured as the supernatant OD600 after cultures diluted to OD600 = 1.0 were centrifuged for 2 min at 1000 × g; rmpA2 expression under different drug treatments determined by qRT-PCR. Data represent the mean ± SD (n = 3); statistical analysis by one-way ANOVA for multiple groups; *P < 0.05, **P < 0.01, ***P < 0.001.) In conclusion, this study has developed an effective combination therapy against clinical CR-HvKP infections with low toxicity through a drug repurposing approach and has depicted its mechanism of action, paving the way for further clinical trials. Conflict of interest The authors declare that they have no conflict of interest. Immunosuppression impaired the immunogenicity of inactivated SARS-CoV-2 vaccine in non-dialysis kidney disease patients Dear editor, Patients with chronic kidney disease (CKD) are at higher risk of coronavirus disease 2019 (COVID-19)-related morbidity and mortality than the general population, and early vaccination should be prioritized for this vulnerable population. However, numerous uncertainties exist regarding the safety and efficacy of vaccination, especially in CKD patients on immunosuppressive drugs. 1 Concerned about safety and efficacy, a significant portion of CKD patients (1720/2509, 68.6%) showed hesitation toward vaccination in a telephone survey at our center (Fig. S1). Previous studies focused on exploring immune responses to COVID-19 vaccines in patients on dialysis or receiving kidney transplants. 2,3 Three recent studies reported the humoral response to mRNA severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) vaccines in patients with CKD, 4-6 yet no serial data are available for inactivated SARS-CoV-2 vaccines. We conducted a pilot, prospective study to survey the safety of and humoral response to inactivated SARS-CoV-2 vaccine in CKD patients receiving a 2-dose immunization (Fig. S2).
We recruited 45 CKD patients, together with 100 healthy controls, 100 hypertension patients and 100 diabetes patients with matched sampling times after vaccination, at Peking University First Hospital, Yunnan University Affiliated Hospital and the Center for Disease Control and Prevention in Hainan. Baseline characteristics of the participants are described in Table 1. All participants received a 2-dose immunization (3-5 weeks between doses) of an inactivated SARS-CoV-2 vaccine (SinoVac or Sinopharm). We collected serum samples 20-33 days after the second dose. To assess the humoral response to the inactivated SARS-CoV-2 vaccines, we measured neutralizing antibody using a competitive inhibition method, and anti-SARS-CoV-2 receptor-binding domain (RBD)-specific IgG and IgM using chemiluminescence immunoassay (MCLIA, Bioscience Co., Tianjin, China). We also recruited 31 CKD patients and 67 healthy controls receiving a third dose of inactivated SARS-CoV-2 vaccine (Table S1) to evaluate the immunological enhancement in this population. This study was approved by Peking University First Hospital (2021[441]) and Yunnan University (CHSRE2021021). The mean age of the diabetes patients was significantly higher than that of the CKD patients. None of the participants reported severe adverse effects after vaccination. Using a neutralizing antibody titer of 2 as the seroconversion cutoff, we found 84% (38 of 45) of CKD patients seropositive, a rate lower than that in healthy controls (98%), hypertension patients (98%) and diabetes patients (95%). The median neutralizing antibody titer in CKD patients was 6.29 (IQR, 2.78-14.62), significantly lower than that in healthy controls [8.91 (IQR, 6.14-16.01), P = 5.15 × 10^-3], hypertension patients [8.66 (IQR, 5.24-15.68), P = 0.02], and diabetes patients [9.14 (IQR, 4.62-16.22), P = 0.04] (Fig. 1A). Conversely, we did not observe anti-RBD IgG or IgM differences between CKD patients and controls (Fig. 1B, C), despite a strong association between neutralizing antibody and anti-RBD IgG levels (Fig. S3). To better understand the immunogenicity of inactivated SARS-CoV-2 vaccine in CKD patients, we further stratified the CKD patients into subgroups according to their disease diagnosis, estimated glomerular filtration rate (eGFR) level and medication status (receiving immunosuppressants or not). Notably, immunosuppressive medication (Fig. 1D), rather than eGFR level (Fig. 1E) or disease type (Fig. S4), showed an effect on the reduction of immunogenicity. Only 72.2% (13/18) of CKD patients receiving immunosuppressants tested seropositive after 2-dose vaccination. Moreover, we observed an immunosuppressant-dependent association between neutralizing antibody level and eGFR after adjusting for age and gender (r = 0.627, P = 0.02), suggesting that immunosuppressive agents could sensitize the neutralizing antibody response to kidney function in non-dialysis kidney patients. Interestingly, a third dose significantly boosted neutralizing antibody in CKD patients, while immunosuppressants impeded the boosting effect (Fig. S5). Our initial analysis showed that the majority (84%) of CKD patients acquired detectable neutralizing antibody against SARS-CoV-2 without severe adverse effects, although the antibody titers were lower than in controls. In contrast, we did not observe such a difference in anti-RBD IgG or IgM.
The discrepancy between the neutralizing antibody and anti-RBD IgG responses in CKD patients indicates that neutralizing antibody, rather than IgG, might be the most relevant marker of the humoral immune response in CKD patients. Subgroup analyses showed that immunosuppressive therapy, rather than eGFR level or disease diagnosis, impairs SARS-CoV-2 vaccine-induced immunity. Our analysis of non-dialysis kidney disease patients receiving inactivated SARS-CoV-2 vaccine greatly supplements previous studies showing that immunosuppressive therapy, 7,8 dialysis, 9 and kidney transplantation 10 impair mRNA vaccine responses. We found that taking immunosuppressants hampers the neutralizing antibody response in CKD patients and sensitizes the neutralizing antibody response to kidney function. Additionally, our data demonstrate that CKD patients, even those on immunosuppressive treatment, can benefit from a third vaccination boost through improved humoral immunity. Disclosure statement None declared. Declaration of Competing Interest Z.Z. served as a PI in a phase 4 clinical study sponsored by Sinovac Biotech Ltd. The funder had no role in the study design, implementation or manuscript writing of this study. Dear editor, We read with great interest an article by Ji-Young Min and colleagues documenting the simultaneous administration of adjuvanted recombinant zoster vaccine (RZV) and the 13-valent pneumococcal conjugate vaccine (PCV) in adults aged ≥50 years. 1 Their results suggest that adults may benefit from receiving RZV and a PCV at the same healthcare visit. We would like to provide additional context for the implications of those conclusions by sharing the results of a study we conducted to describe trends in invasive pneumococcal disease (IPD) incidence among LA County residents over six respiratory seasons, from 2015-16 to 2020-21. Los Angeles (LA) County recorded its first case of COVID-19 in January 2020 and implemented non-pharmaceutical interventions (NPIs), such as wearing face masks and social distancing, to mitigate community transmission beginning in March 2020. 2 Similar to the overall U.S. population, LA County experienced a reduction in circulating influenza and other viral respiratory pathogens after widespread implementation of NPIs. 3,4 Since Streptococcus pneumoniae is a respiratory pathogen that is also transmitted via contact with respiratory droplets, we hypothesized that implementation of COVID-19-related control measures would be associated with similar reductions in IPD incidence. The LA County Department of Public Health (DPH) conducts surveillance for 10 million residents across 86 cities (excluding the cities of Long Beach and Pasadena). Healthcare providers and clinical laboratories are mandated to report IPD cases to DPH. 5 A confirmed IPD case is defined as the isolation of S. pneumoniae from a normally sterile body site (e.g., blood or cerebrospinal fluid) in a resident of LA County. All reported cases are investigated by DPH to determine if they meet the case definition. The COVID-19 pandemic required DPH to redirect staff to support the local response, so no additional follow-up was conducted on IPD reports with incomplete information. Therefore, we established a probable IPD case definition for reports that occurred after January 1, 2020 and lacked sufficient information to establish whether the confirmed case definition was met. All reported cases are entered into a local surveillance data management system.
We analyzed data by respiratory season, defined as July through June of the following year. The date of occurrence of an IPD case was taken as the earliest of the symptom onset date, specimen collection date, specimen result date, date of diagnosis, date the report was received by the public health department, and date the case was added to the surveillance database. We described cases by age, sex, and race/ethnicity based on information included in the case reports to DPH. We were able to directly compare groups without statistical testing since we were analyzing a complete data set of all reported IPD cases in LA County. We described trends in IPD among LA County residents during six respiratory seasons, from 2015-16 to 2020-21. (Table notes: Q1, first quartile; Q3, third quartile. A confirmed case is defined as the isolation of Streptococcus pneumoniae from a normally sterile body site (e.g., blood or cerebrospinal fluid) in a resident of LA County; all reported cases are investigated by DPH to determine if they meet the case definition. A probable case lacked sufficient information to establish whether the confirmed case definition was met. There were 11 probable cases in the 2019-20 season and 10 probable cases in the 2020-21 respiratory season.) Corresponding with the introduction and widespread transmission of COVID-19 in 2020, the number of IPD cases declined from the 2019-20 season to the 2020-21 season. A reduction in IPD incidence during the COVID-19 pandemic has been observed in other regions of the world. Compared with the 2018-19 season, IPD incidence during the 2019-20 respiratory season in the United Kingdom was 30% lower. 6 In Hong Kong, where strict social control measures were put in place with near-universal masking, IPD cases decreased by 74.7% during 2020 compared with the comparable pre-pandemic period of 2015-2019. 7 Conversely, IPD incidence increased after social control measures were relaxed. In Switzerland, IPD incidence decreased by 73% from February to April 2020, remained low from April 2020 to February 2021, and then increased by 23% from March to May 2021 when social control measures were lifted. 8 The decline in IPD cases associated with the COVID-19 pandemic is likely multifactorial. Public health recommendations and mandates for wearing face masks, social distancing, and increased hand hygiene likely contributed to reduced transmission by decreasing the risk of coming into contact with respiratory droplets from someone with pneumococcal colonization. Moreover, school-age children have a higher prevalence of pneumococcal colonization than adults, and they are an important source of transmission to adults in their households and communities. Therefore, the closure of LA County schools and childcare centers during the early part of 2021 likely resulted in decreased community transmission of S. pneumoniae. 9 Finally, influenza activity declined substantially after the implementation of pandemic control measures; influenza activity was non-existent during the 2020-21 season. 10 As a result, it is possible that there were fewer IPD cases as a secondary complication of influenza infection. Our analysis is limited by the fact that we relied on a passive surveillance system. As a result, cases will not be reported if people were less likely to seek care or get tested for IPD during the pandemic. Under-ascertainment of cases during the pandemic is unlikely, however, because IPD tends to be a serious illness and most people will present for medical attention.
Therefore, the magnitude of the reduction in IPD incidence during the pandemic cannot be fully explained by differential health-seeking behavior alone. Additional analysis is necessary to determine the precise contribution of the various pandemic control measures to the overall reduction in the incidence of IPD and other respiratory pathogens. At a minimum, persons at high risk for IPD should consider wearing a face mask in public spaces even after pandemic control measures are lifted, given the evidence for the protection masks confer against respiratory infections. Declaration of Competing Interest None of the authors have any relevant conflicts of interest to disclose. Dear editor, We have read with great interest the paper by Yoon et al. (1) reporting that infectious SARS-CoV-2 samples were not detected after the third dose of remdesivir despite concomitant dexamethasone treatment. Remdesivir is a nucleoside prodrug of an adenosine analogue that was approved for the treatment of COVID-19 based on reduced hospitalization duration and a trend toward reduced mortality in a randomized setting (2). Besides renal and liver toxicities, remdesivir has been shown to be associated with bradycardia (3), a favorable side effect (4) of unknown mechanism that is potentially related to the similarity of its metabolites to adenosine (5). Adenosine levels have an important role in the control of inflammation and might dampen the host's antimicrobial response (6) and consequently promote bacterial superinfections. Despite this potential association, there are no published data on the occurrence of bacterial infections during remdesivir treatment in COVID-19 patients. In this paper we aim to investigate the occurrence of bacteremia in a large real-life cohort of remdesivir-treated COVID-19 patients in comparison to matched controls from our institution. Among a total of 5959 consecutively hospitalized COVID-19 patients treated at our institution from 3/2020 to 6/2021, we retrospectively evaluated 876 consecutive remdesivir-treated patients. They were compared to a matched case-control cohort of 876 patients. Matching was based on age, sex, Charlson comorbidity index, WHO severity of COVID-19 at presentation and maximal level of oxygen requirement at the time of remdesivir application (to account for the fact that remdesivir was given to respiratory deteriorating patients). All patients tested positive by either PCR or antigen test and had COVID-19 symptoms. All patients were adults and Caucasian. Patients were treated according to contemporary guidelines, with the majority receiving LMWH thromboprophylaxis and corticosteroids. The analyzed blood cultures were sampled during the whole hospitalization, and at least 48 h after initiation of remdesivir in remdesivir-treated patients. Clinically significant bacteremia was defined as positive blood cultures sampled based on the clinical reasoning of the treating physicians, with the exclusion of single blood cultures with isolates of contaminants such as coagulase-negative Staphylococcus (CoNS) or Corynebacterium spp., and of more than one positive blood culture with the same isolates but without a clinical course consistent with bloodstream infection. The MedCalc statistical program ver. 20.104 (MedCalc Software Ltd, Ostend, Belgium) was used for all statistical procedures. The Mann-Whitney U test, the χ^2 test, the log-rank test and Cox regression were used. P values < 0.05 were considered statistically significant.
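As a sanity check on the headline comparison that follows, the 2 x 2 test of bacteremia frequency can be reproduced approximately from the reported percentages. The counts below are reconstructed from 12.6% and 9.5% of 876 patients in each arm, so they are close to, but not necessarily identical to, the source data:

import numpy as np
from scipy.stats import chi2_contingency

# Rows: remdesivir-treated, matched controls; columns: bacteremia yes / no.
table = np.array([[110, 876 - 110],
                  [ 83, 876 -  83]])
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")   # p close to the reported 0.039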
A total of 1752 COVID-19 patients were evaluated (876 remdesivir-treated and 876 matched controls). Comparing remdesivir-treated and control patients, there were no significant differences in age (65 vs 66 years, P = 0.109), Charlson comorbidity index (3 vs 3 points, P = 0.115), or male sex (61.8% vs 61.8%, P = 1.000). Bacteremia was detected in 193 (11%) patients, more frequently in remdesivir-treated patients (12.6% vs 9.5%, P = 0.039). As shown in Fig. 1A, the trend toward more bacteremia with remdesivir treatment was present irrespective of the level of oxygen demand at the time of remdesivir application (regardless of whether remdesivir was given prior to or during the requirement for HFOT and MV), although no significant difference could be shown for particular subgroups. There was a statistically significant interaction between remdesivir use, bacteremia and death (P < 0.001): patients without bacteremia during hospitalization experienced significantly improved survival (HR = 0.70, 95% CI (0.58-0.85), P < 0.001, Fig. 1B), and no evident benefit of remdesivir use was present in patients experiencing bacteremia (HR = 0.92, 95% CI (0.66-1.27), P = 0.634, Fig. 1C). Frequencies of specific pathogens isolated from blood cultures in remdesivir-treated and matched control patients are shown in Table 1. Remdesivir use was significantly associated with a higher occurrence of bacteremia due to Gram-positive bacteria (P = 0.019), especially Enterococcus faecalis (P = 0.019). In addition, a nearly significant result was observed for bacteremia due to Acinetobacter baumannii (P = 0.093). To the best of our knowledge, this is the first report of a higher frequency of bacteremia in COVID-19 patients treated with remdesivir. Bacteremia occurs in a substantial proportion of severe and critical COVID-19 patients who are candidates for remdesivir, and the drug might especially predispose to Gram-positive bacteremia, particularly with Enterococcus faecalis. The mechanisms behind these observations remain uncertain. It can be speculated that remdesivir metabolites that are adenosine analogues alter innate and specific immunity in the same fashion as adenosine does (7). Thus, remdesivir use might attenuate inflammation and promote bacterial virulence, resulting in a higher frequency of sepsis. Since remdesivir is given intravenously, this might also facilitate bacteremia, especially in the context of personal protective equipment that might affect the dexterity of healthcare workers and impose difficulties in delivering healthcare (8). Importantly, no increase in bacteremia due to CoNS or Corynebacterium spp. was observed. Thus, the higher frequency of bacteremia is not driven by pathogens that could be associated with contamination of blood cultures. Also, since patients were matched on the level of oxygen demand at the time of remdesivir application, the two groups were balanced regarding HFOT and MV requirements, and the higher frequency of bacteremia is not likely to be driven by differences in the intensity of care. Our data show that the occurrence of bacteremia significantly moderates the relationship of remdesivir use with survival and attenuates the potential beneficial effects of remdesivir, implying that this phenomenon has important clinical consequences.
Bacterial co-infections substantially affect the in-hospital mortality of COVID-19 patients (9), and special consideration should be given to patients with bacterial co-infections who are candidates for remdesivir: carefully weighing risks and benefits on an individual basis, implementing measures of increased surveillance, or avoiding the drug altogether. Limitations of our work are the retrospective study design and the single-center experience. Our results are representative of a tertiary-level institution and of the treatment of severe or critical COVID-19 patients, and might not generalize to other clinical contexts. Nevertheless, our data, based on a large real-life cohort of remdesivir-treated and matched control patients, imply that remdesivir use might be associated with a higher frequency of bacteremia, and this might affect the prognosis of remdesivir-treated patients. Future studies on this very important subject are needed. Funding None. Ethical approval The study was approved by the University Hospital Dubrava Review Board (no. 2021/2503-04). Declaration of Competing Interest None. Outcome predictors in SARS-CoV-2 disease (COVID-19): the prominent role of IL-6 levels and an IL-6 gene polymorphism in a western Sicilian population Dear editor, Recently in this Journal, Grifoni and colleagues 1 reported that IL-6 levels at hospital admission seem to be the best prognosticator of negative outcomes in patients with coronavirus disease 2019 (COVID-19) caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Considering the preponderant role of the host response in influencing the clinical evolution of SARS-CoV-2 infection, we studied polymorphisms of the IL-6 gene -174G/C (rs1800795), as well as of its receptor IL-6R (rs2228145), TNFα (rs1800629), ANGP2 (rs55633437), and MX1 (rs2071430), in patients hospitalized with COVID-19 to identify any genetic predisposition to a worse outcome. Other clinical features, such as serum IL-6 level, and risk factors for severity and mortality were also analyzed. 2 We analyzed 316 Sicilian (Italy) patients, divided into two groups according to the type of oxygen and ventilation therapy they required during hospitalization: (1) non-critical (n = 271), patients admitted for reasons other than SARS-CoV-2 pneumonia with a positive preadmission screening, and patients hospitalized for SARS-CoV-2 infection without respiratory insufficiency; (2) critical (n = 45), subjects who underwent non-invasive ventilation (Suppl. Materials). Table 1 shows some demographic features, the P/F ratio (PaO2/FiO2) at admission, and the PSI as indicators of the severity of respiratory involvement in the two groups. Men were more prevalent in both groups, but without statistical significance. Average age increased significantly with increasing disease severity, in line with the literature. 1,2,3 The P/F ratio, indicative of the degree of respiratory failure at admission, was confirmed to be related to disease outcome (p < 0.0001). The PSI, split into two severity groups, was found to reflect the degree of disease impairment, being significantly less severe at baseline (grade 2) in non-critical than in critical subjects (p < 0.0001). By contrast, other indicators of the severity of pulmonary involvement, such as absence of pneumonia, pulmonary thickening, pulmonary consolidation, ground glass, and mixed CT features, did not show statistically significant differences between the groups (Table 1).
The presence of comorbidities such as COPD, diabetes, and hypertension did not differ significantly between the two groups (Table 1). Only the presence of ischemic heart disease was significantly correlated with disease severity (p < 0.001) (Table 1), a finding already described in the literature, with cardiovascular disease being one of the most important underlying conditions affecting the prognosis of patients with COVID-19. 4 In addition, we found significantly lower platelet counts in subjects with more severe disease (p < 0.03). Furthermore, other parameters known to be related to worse outcomes of the disease (i.e., D-dimer and NLR) 2,5 were confirmed in our series to be significantly related (p < 0.005 and p < 0.05, respectively) (Table 1). Finally, in accordance with Grifoni et al., 1 we found maximum serum IL-6 levels, which were significantly higher in critically ill than in non-critical subjects (p < 0.0001) (Table 1), to be the marker of inflammatory status most strongly correlated with the worst prognosis. For the genetic study, the IL-6 rs1800795, IL-6R rs2228145, TNFα rs1800629, ANGP2 rs55633437, and MX1 rs2071430 genotype distributions in critical and non-critical subjects are shown in Table 2 and Suppl. Table 1, and all followed the Hardy-Weinberg equilibrium. (Table 1 caption: demographic characteristics, blood gas analysis parameters, presence of comorbidities, laboratory parameters and chest CT features of the study population, divided by degree of respiratory involvement.) The CC genotype of IL-6 rs1800795 was significantly more frequent in critically ill than in non-critical patients (p = 0.044). A significantly greater association with critical illness was also observed for the rs1800795 polymorphism under the recessive model (CC vs. GG + CG) (p = 0.044) (Table 2). No significant associations were found in the genotype frequencies of the other 4 SNPs analyzed (Suppl. Table 1). Since the rs1800795 variant of IL-6 affects gene transcription and serum IL-6 levels, we analyzed the association between the presence of this variant and serum IL-6 levels (N = 138). In accordance with Smieszek et al., 7 no significant correlation between IL-6 levels and the IL-6 rs1800795 genotype was found (Suppl. Fig. 1A). In addition, since some studies have reported that IL-6R variants may correlate with circulating IL-6 levels, 8 we correlated the rs2228145 genotype with serum IL-6 levels, finding no significant correlation (Suppl. Fig. 1B). Subsequently, we examined the cumulative effects of the selected SNPs by developing a genetic risk score (GRS) that sums the number of risk alleles. For each SNP, a score of 0 was assigned for homozygous non-risk alleles, 1 for heterozygous risk and non-risk alleles, and 2 for homozygous risk alleles. A higher mean risk score was significantly associated with critical illness when the sum of the five scores for the rs1800629, rs1800795, rs2228145, rs2071430, and rs55633437 variants was considered for each patient (p = 0.026) (Suppl. Table 2). The mean gene count score was 3.67 ± 1.26 in the non-critical group and 4.26 ± 1.13 in the critical group. The cumulative effects of 4, 3, and 2 variants were also analyzed (Suppl. Table 2).
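The allele-count score just described is straightforward to compute; a minimal sketch with invented genotype rows, coded 0/1/2 per SNP as defined in the text:

import numpy as np

def genetic_risk_score(genotypes):
    # genotypes: (n_patients, n_snps) array with entries 0 (homozygous
    # non-risk), 1 (heterozygous), 2 (homozygous risk); the GRS is the
    # per-patient sum of risk-allele counts across the five SNPs.
    return np.asarray(genotypes).sum(axis=1)

# Columns: rs1800629, rs1800795, rs2228145, rs2071430, rs55633437.
# These genotype rows are hypothetical examples, not patient data.
g = [[1, 2, 0, 1, 0],
     [0, 1, 1, 0, 1],
     [2, 2, 1, 1, 0]]
print(genetic_risk_score(g))   # -> [4 3 6]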
After removing genetic variants from the score one at a time, the contribution of IL-6 rs1800795 appeared to be crucial for risk prediction: in all cases where critical patients showed significant differences in GRS compared with non-critical patients, the IL-6 gene variant was present. Moreover, in all cases of significant differences between critical and non-critical patients, the MX1 variant was always present together with the IL-6 variant, except in one case (i.e., for the 4 variants IL-6, MX1, ANGP2, and TNF-α) in which no significant difference was observed (p = 0.0516), possibly due to the small number of patients. This suggests that the co-presence of the MX1 and IL-6 variants could confer a greater risk of disease severity. Our study confirmed the prominent role of IL-6 levels and the genetic predisposition conferred by an IL-6 gene variant in the response to SARS-CoV-2 infection as predictors of COVID-19 disease severity and unfavorable outcomes, as well as the role of age and ischemic heart disease as important negative prognostic factors. Supplementary materials Supplementary material associated with this article can be found, in the online version, at doi: 10.1016/j.jinf.2022.04.043. Dear editor, In this Journal, we described the impact of non-pharmaceutical interventions during the COVID-19 pandemic on pertussis, scarlet fever and hand-foot-mouth disease in China. 1 As SARS-CoV-2 has swept the globe, the incidence spectrum of various diseases has been greatly affected by the different pandemic prevention policies and people's coping strategies in various countries. In China, under the strict COVID-19 control policy, nucleic acid testing and mask wearing are two of the major control methods. 2,3 Typical nucleic acid testing relies largely on taking throat swabs. Both wearing masks and taking throat swabs could potentially affect the incidence of pharyngitis. However, the trend in pharyngitis incidence during the COVID-19 pandemic in China remains inconclusive. Pharyngitis is a common specific or nonspecific inflammatory reaction of the pharyngeal mucosa, submucosal tissue and lymphoid tissue. 4 It can be divided into acute pharyngitis and chronic pharyngitis. Acute pharyngitis is often accompanied by upper respiratory infections and can be divided into bacterial, viral and fungal pharyngitis. 4 Chronic pharyngitis is part of chronic inflammation of the upper respiratory tract. Its course is long and prone to repeated exacerbation. Tissue lesions in adjacent sites, and physical and chemical stimuli such as high temperature, dryness and dust, can induce the onset or aggravation of chronic pharyngitis. We compared the reported cases of acute pharyngitis and chronic pharyngitis before and after the outbreak of COVID-19 to explore the impact of the COVID-19 control policies on pharyngitis during the pandemic. The monthly numbers of acute pharyngitis cases, chronic pharyngitis cases, and throat swabs from 2019 to 2022 were collected at Beijing Jiangong Hospital, most of whose patients come from the Baizhifang and Taoranting Subdistricts of Xicheng District, Beijing, China. Data analysis and visualization were performed in statistical software (GraphPad Prism 8.0). A paired-samples t-test was used to compare morbidity across groups. As shown in Fig. 1A, the monthly reported number of acute pharyngitis cases gradually declined after the outbreak of SARS-CoV-2. Based on the annual reported cases, the annual rate of change for acute pharyngitis was −32.9% in 2020 and −48.7% in 2021 (Fig. 1B and Table 1).
Compared with the first three months of 2021, the rate of change of acute pharyngitis in the first three months of 2022 was −81.4% (Fig. 1B and Table 1). We reason that more and more people wore masks and kept social distance during the COVID-19 pandemic, which could reduce the incidence of acute pharyngitis. The monthly number of chronic pharyngitis cases in 2020 decreased compared with that in 2019 (Fig. 1C). Interestingly, the monthly number of chronic pharyngitis cases increased from 2020 to 2022 (Fig. 1C). Based on the annual reported cases, the annual rate of change of chronic pharyngitis was −18% and 110.1% for 2020 and 2021, respectively (Fig. 1B and Table 1). The rate of change of chronic pharyngitis in the first three months of 2022 was 57.5% compared with the first three months of 2021 (Fig. 1B and Table 1). In Beijing, the strict COVID-19 regulations required constant nucleic acid testing for SARS-CoV-2. In most cases, throat swabs were taken for nucleic acid testing. As shown in Fig. 1D, the number of throat swabs in Beijing Jiangong Hospital greatly increased after the outbreak of SARS-CoV-2. The number of chronic pharyngitis cases and the number of throat swabs are positively linearly correlated (R² = 0.6525, p < 0.0001), suggesting that taking throat swabs could increase the incidence of chronic pharyngitis (Fig. 1E). During the COVID-19 pandemic, mask wearing and social distancing were required, which could reduce upper respiratory infections. 5,6 After the outbreak of COVID-19, China adopted a national SARS-CoV-2 nucleic acid testing strategy, which involved inpatient screening, rapid screening in fever clinics, travel screening, and large-scale population screening. 7 Taking throat swabs became routine, which could be a major physical stimulus increasing chronic pharyngitis. Since August 2021, China has been sticking to the "Dynamic COVID-Zero" strategy. 8 As a potential secondary disaster caused by the COVID-19 control policy, chronic pharyngitis should receive more attention during the Dynamic COVID-Zero period. Pharyngitis is more common in autumn, winter and spring, so there is a low incidence in summer (Fig. 1A and C). Under the influence of Beijing's medical insurance policy, the medical insurance balance returns to zero on January 1 every year. Combined with the upcoming Spring Festival from January to February, local residents in Beijing make more visits and follow-up visits before January 1, resulting in fewer hospital visits from January to February (Fig. 1A and C). In this study, we investigated the impact of COVID-19 control regulations on the incidence of pharyngitis based on data from Beijing Jiangong Hospital. From 2019 to 2022, cases of acute pharyngitis decreased, while cases of chronic pharyngitis increased. During the COVID-19 pandemic, strict control regulations were enforced in Beijing. Mask wearing and social distancing would reduce the incidence of acute pharyngitis. However, constantly taking throat swabs would increase the incidence of chronic pharyngitis. This study advances our understanding of the potential benefit and secondary disaster caused by China's strict COVID-19 control policy.

Declaration of Competing Interest
The authors declare that there are no conflicts of interest.

We read with interest the letter by Dimeglio et al. reporting the impact of vaccination and pre-immunity on the proliferation of Omicron BA.1 and BA.2 sublineages in France. 1
The newly emerging Omicron strain of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is currently spreading worldwide. The Omicron strain has multiple spike protein mutations compared with other variants of concern, such as the Alpha and Delta strains. 2 Consequently, there is concern that serum antibody activity against the Omicron strain in vaccinated or convalescent persons will be weaker than that against previous SARS-CoV-2 strains. 3,4 Because the characteristics of infectivity and treatment response differ among Omicron sublineages, 5,6 it is important to understand the evolutionary process in real time. To determine the viral lineage of SARS-CoV-2, we performed whole genome sequencing analyses or TaqMan assays using SARS-CoV-2-positive samples (n = 1298) collected consecutively in Yamanashi, Japan from September 2021 to March 2022 (Supplemental materials). 7,8,9 During this period, we identified the Delta strain (n = 159) and the Omicron strain (n = 1139). After the first case of Omicron was identified in January 2022, Omicron rapidly replaced Delta as the prevalent strain of SARS-CoV-2 (Fig. 1A). We next examined whether the viral load varied with patient age. There was no apparent correlation between patient age and viral load. In summary, this study indicates that after the expansion of the SARS-CoV-2 Delta strain, a rapid spread of the Omicron strain occurred. Sublineage BA.1 was very minor in Japan when Omicron was first discovered. First, sublineage BA.1.1 expanded dominantly and was then gradually replaced by sublineage BA.2. A transition from sublineage BA.1.1 to sublineage BA.2 was clearly observed over approximately one month. The results of the present study show that the viral load in nasopharyngeal swabs was higher for sublineage BA.2 than for sublineage BA.1.1. These epidemiological and viral characteristics indicate that Omicron sublineage BA.2 is more transmissible than sublineage BA.1.1. Although a high incidence of household COVID-19 infections stemming from young children has been reported, 10 our results indicate that the Omicron strain retains a fairly high viral load across age groups, which may contribute to the high infectivity of the Omicron strain and its accelerated spread. These data provide insights for determining appropriate COVID-19 prevention and control measures for homes, schools, workplaces, and facilities for the elderly during the spread of Omicron strain viruses.

Declaration of Competing Interest
None.

The negative impact of the COVID-19 pandemic on immunization and the positive impact on polio eradication in Pakistan and Afghanistan

Dear editor,
In a recent letter by Usman Ayub Awan et al., [1] the authors described the outbreak of wild poliovirus in Afghanistan. We agree with the concerns raised by the authors, and here we would like to discuss the current polio situation and the impact of the COVID-19 pandemic on polio vaccination in Pakistan and Afghanistan. The global polio eradication initiative was launched in 1988, when the wild poliovirus was endemic in 125 countries, paralyzing 350,000 individuals per year. Enormous progress has been made towards the eradication of polio, with a 99% reduction in global incidence due to immunization efforts [2]. Vaccines and the importance of immunization have taken center stage in public discourse like never before. As the world focuses on COVID-19 vaccination campaigns, what does this mean for other vaccine-preventable diseases such as polio?
The COVID-19 pandemic has compromised public health systems around the world and disrupted the delivery of essential health services, including efforts to achieve polio eradication. The three-decade-long drive to eradicate polio was sliding backwards even before COVID-19 hit, and the pandemic has made a bad situation worse. In 2019 and 2020, cases of wild poliovirus rose in Pakistan and Afghanistan, the last two countries where it is endemic. The ongoing COVID-19 pandemic is disrupting life-saving immunization services globally, with a particular impact on low-income countries, including Pakistan and Afghanistan. The highest number of unimmunized children live in South Asia, with 97% of them living in Pakistan, Afghanistan, and India [3]. Evidence from previous Ebola epidemics has demonstrated that even temporary interruptions of routine immunization services can lead to secondary public health crises, such as outbreaks of vaccine-preventable diseases including polio and measles. Pakistan and Afghanistan are the fifth and thirty-seventh most populous countries in the world. Both countries spend less than 1% of their gross domestic product on health services. Owing to this low investment in the health sector, many vaccine-preventable diseases (VPDs) are still endemic in Pakistan and Afghanistan, including polio and measles. The polio eradication program successfully reduced the number of confirmed polio cases from 306 in 2014 to only 12 in 2018 in Pakistan, and from 28 cases in 2014 to 21 cases in 2018 in Afghanistan. Unfortunately, there was a resurgence: 147 polio cases in 2019 and 84 in 2020 were reported in Pakistan. Similarly, 29 polio cases in 2019 and 56 in 2020 were reported in Afghanistan. Surprisingly, in the presence of millions of unimmunized children, only one polio case was reported in Pakistan and 4 polio cases in Afghanistan in 2021. Only one polio case has been reported in Afghanistan and zero polio cases in Pakistan in 2022. From 2010 to 2022, a total of 1,122 and 342 polio cases were reported from Pakistan and Afghanistan, respectively. Details of the reported polio cases in Pakistan and Afghanistan are given in Fig. 1. The coverage of routine immunization, including polio and measles vaccination, in Pakistan and Afghanistan is far lower than the 90% required for herd immunity, and immunization against polio declined by a further 50% during the COVID-19 pandemic, when 50 million children in Pakistan [3] and 23 million children in Afghanistan [4] did not receive a polio vaccination owing to the suspension of immunization activities. Even before the COVID-19 pandemic, more than 600,000 Afghan children missed polio vaccination because of parental refusal [5]. With an average of 18,593 live births per day in Pakistan and 3,986 per day in Afghanistan, the suspension of door-to-door polio immunization campaigns has left a huge and growing pool of children who are susceptible to polio in both countries. When the virus finds them, it will tear through the unimmunized population and may lead to more polio outbreaks in the near future. Recently, a polio outbreak was declared in Malawi; the poliovirus strain found in the child in Lilongwe has been linked to one circulating in Pakistan [6].
Owing to the disruption in immunization, there is crystal-clear evidence of an unprecedented upsurge in vaccine-preventable diseases such as measles in Pakistan and Afghanistan: 28,125 suspected measles cases, including 800 deaths, were reported from Pakistan [7], and 35,300 suspected measles cases, including 156 deaths, were reported from Afghanistan in 2021. Most other VPDs, such as diphtheria, pertussis, typhoid, TB, hepatitis, rotavirus, and rubella, are on the rise due to the pandemic-related disruption in immunization, yet poliovirus has declined unprecedentedly in both countries. Multiple hurdles to polio eradication remain in both countries, such as poor immunization coverage, vaccine hesitancy, vaccine refusal, conspiracy theories, political instability, an overcrowded population, the killing of polio workers, population movement, remote locations, and conflict. In the presence of multiple hurdles fueled by the COVID-19 pandemic, the more than 99% decline in polio cases in Pakistan and Afghanistan is difficult for the scientific community to understand. Verifying the sudden absence of polio circulation is a daunting task that cannot be done reliably because of the limitations of existing surveillance tools, which include acute flaccid paralysis surveillance supplemented by environmental surveillance. In a fully susceptible population, acute flaccid paralysis surveillance can detect one in 100 to 1,000 infections. However, with increasing population immunity, surveillance for clinical signs of poliovirus infection becomes much less sensitive, allowing poliovirus to circulate undetected for many years. If the decline in polio cases in Pakistan and Afghanistan is due to the vaccination campaigns, the key now is to prevent a resurgence in the coming peak season for polio transmission in both countries. The World Health Organization and partners in the global polio eradication initiative are committed to fully supporting the governments of Pakistan and Afghanistan in tackling polio in its last strongholds; failure to eradicate polio now could result in a resurgence of the disease, with as many as 200,000-300,000 new cases worldwide every year. Although Pakistan and Afghanistan face distinct polio eradication challenges, they are linked epidemiologically because of high rates of cross-border population movement. Transit-point vaccination must be maintained as emigration from Afghanistan potentially increases after the Taliban takeover. The persistence of polio in Pakistan and Afghanistan, and the increase in circulating vaccine-derived poliovirus, has been a warning call for intensifying control efforts despite the disruption imposed by the COVID-19 pandemic. Disruption of immunization in both countries, coupled with Taliban rule in Afghanistan, threatens to reverse significant achievements in polio eradication, and the world must seek innovative approaches to work with the Taliban to get polio and other VPDs under control. The COVID-19 pandemic is a magnifying glass that has highlighted the larger threat of existing public health challenges such as polio. The long-term trajectory of the pandemic is uncertain, and the risk of resurgence of epidemic-prone diseases, including polio, is very high. Additionally, the health system's capacity to perform a meaningful epidemiological analysis of the situation may also be impacted at the same time.
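The surveillance limitation noted above admits a simple worked example: if acute flaccid paralysis surveillance detects roughly one in 100 to 1,000 infections, the probability that an entire transmission chain produces zero detected cases can be substantial. The sketch below illustrates this; the outbreak sizes are hypothetical, and only the detection-probability bounds come from the text.

```python
# A minimal worked example of why polio circulation can go undetected:
# with a per-infection detection probability p, the chance that none of
# n independent infections is detected is (1 - p)^n.

def prob_missed(n_infections: int, detection_prob: float) -> float:
    """Probability that none of n independent infections is detected."""
    return (1.0 - detection_prob) ** n_infections

for p in (1 / 100, 1 / 1000):          # bounds quoted in the text
    for n in (50, 200, 1000):          # hypothetical outbreak sizes
        print(f"p_detect={p:.3f}, infections={n}: "
              f"P(all missed) = {prob_missed(n, p):.2%}")
```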
Pakistan and Afghanistan are already struggling to eradicate polio, and with the disruption of immunization owing to the pandemic, the risk of a polio outbreak is on the brink; this may provide the virus a milieu to spread further and faster, and could also result in the worldwide spread of polio infection. The world cannot afford another epidemic of polio after the devastating impact of the COVID-19 pandemic.

Dear editor,
We read with great interest the population-based study published by He-Ling Bao and colleagues in this journal, entitled "Prevalence of cervicovaginal human papillomavirus infection and genotypes in the pre-vaccine era in China: A nationwide population-based study". 1 Yet some points remain to be discussed, including whether the prevalence of high-risk human papillomavirus (HR-HPV) differs with the level of economic development. Many previous studies have shown significant disparities in HPV prevalence across regions at varying levels of economic development. [2][3][4][5] There might be several reasons for these results. Firstly, different women populations were selected. As the study by Bao et al. 1 mentioned, their main groups were mainly from urban or developed areas, which might affect the estimation. Secondly, the sensitivity and specificity of different HPV tests and reagents still vary significantly, 4,6 which might cause within-group variations in HPV prevalence. In addition, the lack of HPV infection data in some provinces, and the derivation of data from existing databases, may be further reasons for the inconsistency. We conducted a prospective study based on a large population to investigate the epidemiological characteristics of HR-HPV infection and the distribution of 14 HR-HPV genotypes in different regions of China using self-collected vaginal samples. The included population was divided into undeveloped, middle-developed and highly developed region groups according to regional Gross Domestic Product (GDP), and the relationship between HPV subtypes, especially HPV16/18, and GDP was analyzed. A total of 20,103 women aged 30-59 years were recruited from 13 provinces in China from September 2018 to July 2020; 35.81% of them were from remote and rural areas. All participants registered, signed electronic informed consent, and filled in individual information voluntarily using an internet-based cervical cancer screening (CCS) platform. Exfoliated cells of the vagina and cervix were obtained by self-sampling, using a purpose-designed sampling kit including a sampling brush, a storage card or preservation solution, and a printed graphic sampling instruction. The samples were tested for 14 HR-HPV types (16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59, 66, and 68) using the SeqHPV and BMRT HPV PCR assays. SPSS 22.0 statistical analysis software was used for data analysis. The overall prevalence of HR-HPV infection was 13.86% among the 20,103 women. The vast majority of HR-HPV infections (77.18%) were still of a single type. Inner Mongolia had the highest prevalence (21.55%), while the lowest prevalence was in Jilin (9.51%) among the 13 regions. HPV prevalence in Qinghai (14.08%, 195/1384) and Guangxi (10.26%, 195/1900) was also included. The overall HPV infection rates in the different regions are shown in Fig. 1.
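A between-region prevalence comparison of the kind reported here can be sketched with the two regions for which raw counts are given (Qinghai 195/1384, Guangxi 195/1900). The choice of a chi-square test below is our assumption for illustration, as the letter does not state which test produced its p-values.

```python
# A minimal sketch of a two-region HR-HPV prevalence comparison using
# the counts reported in the text; the test choice is an assumption.
from scipy.stats import chi2_contingency

#               HR-HPV+   HR-HPV-
table = [[195, 1384 - 195],   # Qinghai
         [195, 1900 - 195]]   # Guangxi

chi2, p, dof, expected = chi2_contingency(table)
print(f"prevalence Qinghai: {195/1384:.2%}, Guangxi: {195/1900:.2%}")
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```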
When stratified by regional GDP into highly developed, middle-developed, and undeveloped regions, significant differences were observed in the prevalence of HR-HPV across the three strata (p < 0.01). The prevalence of HPV16/18 also decreased as regional GDP increased. In addition, the undeveloped regions had the highest combined- and single-type prevalence (HPV16/18; HPV16/18/31/33/52/58; single-type infection). All HR-HPV infections except multiple-type infections were significantly associated with regional economic status (Table 1). The present study was a population-based study involving 13 provinces in China. It is so far the first as well as the largest prospective study in China to use HR-HPV testing of self-collected vaginal samples; 13.86% of women tested positive for HR-HPV. The HR-HPV prevalence in this cervical screening program differed significantly across geographical regions, ranging from 9.51% to 21.55%. Regional variation in HR-HPV prevalence was also evidenced by a previous study based on physician sampling. 3,4 In addition, our study included not only urban and suburban women but also rural women, so the data are more comprehensive. In this analysis of HR-HPV prevalence and subtypes in terms of GDP, the positive rate of HR-HPV and the combined- and single-type prevalence (HPV16/18; HPV16/18/31/33/52/58; single-type infection) differed statistically across GDP strata, whereas in the study by Bao et al. 1 there was no statistical difference by development status (p = 0.7301). In another large-sample study stratified by GDP per capita (low, middle, and high), the prevalence of HPV infection was 16.72%, 18.01% and 13.41%, respectively. 4 Meanwhile, the undeveloped regions in our study had the highest prevalence of specific combinations of HR-HPV subtypes, of which HPV16/18 infections decreased with increasing GDP. In addition, the highest prevalence of single-type HPV was found in the undeveloped regions in this study, and the relationship between education level and HR-HPV infection also provides evidence for varying HPV infection rates across different states of economic development. These data also provide a strong basis for HPV vaccination and accurate CCS in various regions, particularly given the low coverage of CCS, large population, imbalanced economic development, and cultural variations in China. 7,8 In summary, based on self-collected HPV testing, the undeveloped regions had the highest prevalence of specific combinations and single types of HR-HPV subtypes, of which HPV16/18 infections decreased with increasing GDP. These findings demonstrate that it is crucial to develop specific plans for CCS according to the distribution of the various subtypes. Finally, with the WHO recommending self-collected HPV testing as an optional primary CCS method, 9 internet-based self-sample screening can achieve much greater CCS coverage across different levels of GDP.

Ethics approval
Ethical approval of the study was granted by the Ethics Committee of the Peking University People's Hospital (2018PHB056-01), and the study was registered on the Chinese Clinical Trial Website (https://www.chictr.org.cn, ChiCTR2000032331).

Declaration of Competing Interest
All authors declare no conflict of interest.

Funding
The study was supported by the Association for Maternal and Child Health Studies (No. 2018AMCHS00801) and the National Key Research and Development Program of China (No.
2016YFC1302901).

Table 1. Characteristics of 13 cases of proctitis and 1 case of prostatitis caused by Neisseria meningitidis.

As most reported cases of Nm proctitis were caused predominantly by serogroup C lineages, 2 we believe that MenACWY vaccination should be provided to MSM, particularly those at high risk of STIs. Moreover, vaccines targeting Nm might help in reducing the probability of reinfection, even though they failed to reduce oropharyngeal carriage at 12 months. 10 In conclusion, proctitis and prostatitis by Nm were described in 14 MSM, with and without HIV infection, all successfully treated with ceftriaxone.

Declaration of Competing Interest
The authors have no conflicts of interest to disclose.

Dear editor,
We have read with great interest a recent study by Dr. Ying Luo suggesting that a GBM model may be of great benefit as a tool for the accurate identification of ATB. 1 In their study, a total of 2,619 participants (1,025 ATB and 1,594 LTBI) were enrolled in the discovery cohort. ATB patients had significantly higher levels of the tuberculosis-specific antigen/phytohemagglutinin ratio and of the coefficient of variation of red blood cell volume distribution width, and lower levels of albumin and lymphocyte count, than LTBI individuals. 1 In a related study, we have further found that functional cervical lymph node dissection can shorten the anti-tuberculosis medication treatment time in patients with cervical lymph node tuberculosis. We gathered inpatient and post-discharge follow-up data on 360 patients clinically diagnosed with cervical lymph node TB at Wuhan Pulmonary Hospital between January 2017 and August 2021, including surgical and non-surgical patients. The number of cervical lymph nodes and the period of anti-tuberculosis medication treatment were compared after patients returned home. The data distribution is described in Table 1. The patients ranged in age from 2 to 83 years old, with an average age of 38 and a standard deviation of 20. The surgical treatment group's patients ranged in age from 2 to 75 years old, with an average age of 38 and a standard deviation of 16. The non-surgical treatment group's patients ranged in age from 6 to 83 years old, with an average age of 45 and a standard deviation of 20. According to the follow-up of patients from one month to six months after discharge from the hospital, the average number of cervical lymph nodes decreased from 8.15 to 1.61 overall; in the surgical treatment group the average number decreased from 1 to 0.69, with a standard deviation of less than 1, and in the non-surgical treatment group it decreased from 4.57 to 4.17, with a standard deviation of around 1. This indicates that after surgery, patients have fewer remaining lymph nodes in the neck. In this study, we collected 360 HNTB patients, 258 of whom received surgical treatment and 66 of whom did not receive surgical treatment during their hospital stay. Color Doppler ultrasonography was adopted to track the quantitative changes in the lymph nodes at one, two, three, and six months. Standard deviation (SD) analysis is shown in Fig. 1; the average anti-tuberculosis time before hospitalization was 98.02 days for patients with surgical treatment and 96.13 days for patients without surgical treatment. The average anti-tuberculosis time during the hospital stay was 12.76 days for surgical treatment patients and 8.74 days for non-surgical treatment patients.
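The key between-group comparison, of post-discharge treatment durations, can be sketched as a Welch two-sample t-test computed from summary statistics, using the figures reported just below (205 ± 42.39 days surgical vs 372 ± 71.54 days non-surgical). The group sizes here (258 and 66) are taken from the counts given above; note that the letter reports slightly different counts elsewhere, so this is an illustration only, not the authors' analysis.

```python
# A minimal sketch of a Welch t-test from summary statistics for the
# post-discharge anti-tuberculosis treatment durations reported below.
# Group sizes are an assumption (the letter is internally inconsistent).
from scipy.stats import ttest_ind_from_stats

result = ttest_ind_from_stats(
    mean1=205.0, std1=42.39, nobs1=258,   # surgical group
    mean2=372.0, std2=71.54, nobs2=66,    # non-surgical group
    equal_var=False,                      # Welch's correction
)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.2e}")
```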
The average anti-tuberculosis time after discharge was 205 days, with a standard deviation of 42.39, for surgical treatment patients, and 372 days, with a standard deviation of 71.54, for non-surgical treatment patients. The diagnosis of HNTB is usually challenging due to the nonspecific presentation of systemic symptoms and the paucibacillary nature of the disease, hindering the clinical effectiveness of conventional testing. 2 In our study, out of a total of 360 patients, 269 patients (74.7%) required surgical treatment, while only 91 patients (25.3%) were cured with medical treatment alone. This is a high proportion compared with the study by Mohapatra and Janmeja, 3 who operated on half of their patients (50.8%, 33), and much higher than that of Monga S, who successfully treated 83% of patients with short-term six-month chemotherapy in a prospective study of 140 cases. 4 In functional cervical lymph node dissection, a transverse streak incision in the corresponding area of the cervical lymph node is selected as the surgical approach. The broad neck muscle and sternocleidomastoid muscle are separated, the internal jugular vein is located, and the internal cervical lymph nodes are cleaned one by one along the outside of the internal jugular vein. The anti-tuberculosis medicine therapy used in this hospital was based on the WHO's anti-tuberculosis treatment guideline. Patients with cervical lymph node tuberculosis received anti-tuberculosis medications whether or not they had surgery. 4 After discharge from the hospital, each patient returned to the hospital for regular evaluation, since the patient was required to continue taking anti-tuberculosis medications. We also performed follow-up and re-examination of patients with cervical lymph node TB in the outpatient department of Wuhan Pulmonary Hospital. We measured the size of the patients' cervical lymph nodes on a regular basis. The number of cervical lymph nodes was determined using color Doppler ultrasonography to better quantify the alterations in the lymph nodes. 5 Tuberculosis treatment necessitates long-term therapy with a cocktail of medications. 6 These medications are linked to a number of side effects that can cause serious morbidity, such as deafness, and in some cases, death. 7 Our research found that functional cervical lymph node dissection can shorten the anti-tuberculosis medication treatment time in patients with cervical lymph node tuberculosis.

Dear Editor,
We read with interest that infection with the Omicron variant can occur in patients who presented a high antibody titer, even when their antibody concentration was 2.4 times higher than in infection with the Delta variant [1]. The SARS-CoV-2 pandemic has shown a succession or superposition of epidemics linked to numerous viral variants [2]. Until recently, the overall rate of reinfection with SARS-CoV-2 had been relatively low, below 2% according to several international studies [3,4]. The Omicron (B.1.1.529) variant was described for the first time in November 2021 in Gauteng province, South Africa, and spread rapidly worldwide. One study conducted in South Africa demonstrated that it was associated with an increased hazard ratio of reinfection, suggesting its substantial ability to evade immunity from prior infection [5]. In addition, vaccine efficacy against this variant was reported to be reduced to around 56% for the Pfizer vaccine [6]. We report here the incidence and proportion of reinfections with the Omicron variant among patients diagnosed in our institute.
Our laboratory has massively screened for SARS-CoV-2 infections by real-time reverse transcription-PCR (qPCR) under the same conditions since the emergence of this virus in France in February 2020. We thus have a cohort of patients screened and diagnosed as infected for the period February 27, 2020 to March 6, 2022, making it possible to calculate the rate of reinfections over this entire period without the bias of a variable screening strategy or capacity. An automatic reinfection detection system was implemented through the laboratory information system of our institute's laboratory, on the basis of two qPCR-positive samples spaced at least 90 days apart with a negative qPCR between the two episodes, according to the CDC definition of a reinfection case [https://www.cdc.gov/coronavirus/2019-ncov/php/invest-criteria.html]. SARS-CoV-2 RNA genotyping was performed by sequencing or variant-specific qPCR, as described elsewhere [3]. From February 27, 2020 to March 6, 2022, 1,646 of 80,863 patients found SARS-CoV-2-positive experienced a reinfection. Their mean age ± standard deviation at the time of the second infection was 38.3 ± 16.4 years, ranging from 9 months to 97 years, and 60.1% were female. In Marseille, we observed five major epidemics of SARS-CoV-2 infections due to different mutants or variants (Fig. 1, which shows the dynamics of SARS-CoV-2 infections and reinfections (a) and of major SARS-CoV-2 variants (b) in patients diagnosed at IHU Méditerranée Infection, 2020-2022). Reinfection cases were observed from the second epidemic onwards, among patients whose first infection occurred during one of the five epidemics (Fig. 1). The overall mean time span between first infection and reinfection was 334 ± 146 days and significantly increased over time from one epidemic to the next (Supplementary Figure). The first patient reinfected with the Omicron variant was detected in mid-December. This variant then rapidly became predominant in reinfected patients until the study's endpoint, as of 6 March 2022, accounting for 885 cases out of 1,397 (Fig. 1). In earlier studies, we reported that the prevalence of reinfection among SARS-CoV-2 infections diagnosed in our institute was 0.2% (58/29,154 cases), 0.3% (41/12,283 cases) and 1.5% (110/7,152 cases) during the second, third and fourth epidemics (until 24 August 2021), respectively [3,8]. In the present study, we confirm a 1.5% reinfection rate (179/12,135 cases) during the entire fourth epidemic (until November 2021), and observe a marked increase in the reinfection rate, reaching 6.8% (1,397/20,542 cases) during the ongoing fifth epidemic (Fig. 2, which shows the prevalence of reinfection and the estimated risk of reinfection in SARS-CoV-2-infected patients according to the period of first infection). Contrary to our previous assessment that this estimated risk decreased over time [3], we observed little variation, between 2.4% and 3.0%, in this updated study. This is likely because of the cumulative number of reinfections over time, with new cases of reinfection diagnosed after our previous assessment. The increase in the proportion of reinfections with the Omicron variant is additional evidence that the genetic variability of SARS-CoV-2 has resulted in antigenic changes leading to reduced protection conferred by a previous infection.
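The reinfection rule described earlier in this letter (two qPCR-positive samples at least 90 days apart, with an intervening negative qPCR) is simple to express in code. Below is a minimal sketch of such a detector; the record layout is hypothetical, not the institute's actual laboratory information system.

```python
# A minimal sketch of the CDC-style reinfection rule described above:
# two positives >= 90 days apart with at least one negative in between.
from datetime import date

def is_reinfection(results: list[tuple[date, str]]) -> bool:
    """results: (sample_date, 'pos'/'neg') pairs for one patient."""
    results = sorted(results)
    for i, (d1, r1) in enumerate(results):
        if r1 != "pos":
            continue
        for j in range(i + 1, len(results)):
            d2, r2 = results[j]
            if r2 == "pos" and (d2 - d1).days >= 90:
                # require an intervening negative test between episodes
                if any(r == "neg" for _, r in results[i + 1 : j]):
                    return True
    return False

samples = [(date(2021, 1, 10), "pos"),
           (date(2021, 3, 2), "neg"),
           (date(2022, 1, 5), "pos")]
print(is_reinfection(samples))  # True: 360 days apart, negative in between
```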
Our recent observations of a lower severity of infections with the Omicron variants, as indicated by low rates of hospitalization, transfer to intensive care units, and death, are good news in this context [9,10]. The Omicron variant is likely only distantly related to other SARS-CoV-2 variants, which may account for its higher rate of reinfection and the lower efficacy of vaccines. Indeed, despite the currently considerable proportions of people vaccinated and/or infected in France, the present data and recently published data [5] suggest that the resulting immunity may not prevent an endemicization of SARS-CoV-2.

Ethical approval
This retrospective study was approved by the ethics committee of the University Hospital Institute Méditerranée Infection (No. 2022-016). Access to the patients' biological and registry data issued from the hospital information system was approved by the data protection committee of Assistance Publique.

Declaration of interest
The authors have no conflicts of interest to declare relative to the present study. Didier Raoult was a consultant for the Hitachi High-Technologies Corporation, Tokyo, Japan from 2018 to 2020. He is a scientific board member of the Eurofins company and a founder of a microbial culture company (Culture Top). Funding sources had no role in the design and conduct of the study; the collection, management, analysis, and interpretation of the data; or the preparation, review, or approval of the manuscript.

Surface electrostatic shift on spike protein decreased antibody activities against SARS-CoV-2 Omicron variant

Dear editor,
In this Journal, Pascarella and colleagues recently described the value of the electrostatic potentials of the spike receptor-binding and N-terminal domains in addressing the transmissibility and infectivity of SARS-CoV-2 variants of concern. 1 They interestingly found that the Omicron BA.1 variant has the highest net surface charge of the SARS-CoV-2 receptor-binding domain (RBD), which may enhance the affinity of Omicron RBD binding to the receptor angiotensin-converting enzyme 2 (ACE2). 1 However, whether this protein surface electrostatic shift affects antibody activities is still unknown. A vaccine's efficacy or effectiveness against SARS-CoV-2 infection usually ranges from 19% to 99%. 2 However, some vaccines show notably lower efficacy, and the reason for this remains unknown. Marked reductions in neutralizing activity against Omicron relative to the ancestral pseudovirus have been observed in plasma from convalescent individuals and from individuals who had been vaccinated against SARS-CoV-2. [3][4][5] The SARS-CoV-2 Omicron variant encodes 37 amino acid substitutions in the spike protein, 15 of which are in the RBD, thereby raising concerns about the effectiveness of available vaccines and antibody-based therapeutics. 3 Beyond these mutations, other possible reasons for the considerable decline in neutralizing activity against Omicron have yet to be documented. In this study, we computed sequence-based antibody epitopes on the spike proteins of SARS-CoV-2. Four epitopes with high surface-accessibility scores were found, named #RBD, #CS (S1/S2 cleavage site), #S2-1 (S2 subunit-1) and #S2-2 (S2 subunit-2), respectively (Fig. 1A, Supplementary Fig. 1, and Supplementary Table 1). These epitopes were then synthesized chemically. Ascites monoclonal antibodies against these epitopes were generated by inoculation of mice.
However, unlike the other three epitopes, the epitope #CS failed to generate any ascites antibodies, possibly because cleavage impairs its antigenicity. The mouse monoclonal antibody against the epitope #RBD showed a relatively high endpoint (dilution) titer against the SARS-CoV-2 original strain (Fig. 1B). A human neutralizing monoclonal antibody, CV30, 6,7 also in complex with the RBD, showed a much higher endpoint titer to the original strain than the mouse anti-#RBD antibody (Fig. 1C). This may be because CV30 binds more residues (32 residues) than the epitope #RBD (14 residues). However, the endpoint titers of both antibodies against the RBD were significantly lower when binding to the Omicron S protein, by a factor of 10 for CV30. This is consistent with a previous report that antibodies neutralizing Wuhan-Hu-1 retained detectable neutralization against Omicron, with decreases of about 21- to 39-fold. 3

[Fig. 1 caption: To present the sites more clearly, only one of the three monomers is labeled. (B) Endpoint (dilution) titers of mouse monoclonal antibodies against epitopes #RBD, #S2-1, and #S2-2, respectively. (C) Endpoint titers of human monoclonal antibodies against the RBD, S1 subunit, and S2 subunit, respectively. (D) Endpoint titers of mouse anti-#RBD, anti-#S2-1, and anti-#S2-2 antibodies after PNGase F treatment. (E) Endpoint titers of human anti-RBD, anti-S1, and anti-S2 antibodies after PNGase F treatment. Bars represent standard deviations of three independent replicates. Values followed by different letters are significantly different at P < 0.05 according to Duncan's multiple range test.]

Five of the 9 residues binding to CV30 at the C-terminus of the RBD were found to be mutated in Omicron (Supplementary Fig. 1). This might be an important reason for its decreased binding activity. The epitope #S2-1, with the highest SA score of 4.431, is located on the interface between subunits S1 and S2, which might be uncovered by transmembrane protease serine 2 (TMPRSS2) cleavage (Supplementary Fig. 2). However, neither TMPRSS2 nor its inhibitor Camostat 8 affected antibody activity (Supplementary Fig. 2). Unexpectedly, the mouse monoclonal antibody against the epitope #S2-2 showed the highest endpoint (dilution) titer to the original strain (3 times higher than the antibody against #RBD; Fig. 1B). The human monoclonal antibody against the S2 subunit confirmed this finding (a titer 2 times higher than that against the S1 subunit; Fig. 1C). The unexpectedly high activity of non-neutralizing antibodies against the S2 subunit is consistent with the fact that vaccine efficacies against severe disease are usually higher than 90%, no matter how low the vaccine efficacies against SARS-CoV-2 infection are. 2 Nevertheless, the activities of both anti-S2 antibodies declined 11- to 23-fold when binding to the Omicron S protein (Fig. 1B and C). Possible reasons for the dramatically reduced antibody activity against the Omicron variant were further investigated. Previous studies have suggested that N-linked glycosylation on the S protein may compromise antibody activities. 9,10 Most SARS-CoV-2 epitopes are shielded by glycans, and only areas of the protein surface at the apex of the S1 domain (RBD region) are not surrounded by glycosylation sequons (Fig. 1A). Consistent with the structural analysis, oligosaccharide-removing treatment with PNGase F increased all antibody titers, especially for the anti-S2 antibodies (Fig. 1D and E).
However, after PNGase F treatment, antibody titers against Omicron were still much lower than those against the original strain (Fig. 1D and E). Given that all three epitopes #RBD, #S2-1, and #S2-2 are completely conserved in all SARS-CoV-2 strains (Supplementary Fig. 1), the large decline in antibody activity against Omicron cannot be attributed to mutations. We noticed that 11 residues had been substituted with alkaline amino acids on the Omicron S protein, which may change the electrostatic potential on the surface of the protein. We therefore computed the electrostatic potential on the S protein. The surface of the Omicron S protein is uniformly positively charged. In contrast, a large part of the original SARS-CoV-2 S protein surface is electrically neutral or negatively charged, although its RBD is positively charged (Fig. 2A). In the original strain, epitopes #RBD and #S2-2 lie on electrically neutral areas, which are positively charged in the Omicron strain. When the pH value of the binding system was adjusted to 5.0 (at which the positive charge could be neutralized), antibody titers against Omicron increased exponentially. This effect was a doubling for anti-RBD antibodies and a 7- to 11-fold increase for anti-S2 antibodies (Fig. 2B and C). However, pH 9.0 decreased all antibody activities (Fig. 2D and E). The large decline in antibody activity against the Omicron RBD (neutralization) may be attributed to both residue mutations and the shift in electrostatic potential. The large decline in antibody activity against the Omicron S2 subunit (non-neutralizing antibody activity) may be mainly attributable to the shift in electrostatic potential on the surface of the protein. Because heterogeneity in antigen surface-charge distribution causes charge-related heterogeneity in monoclonal antibodies, 11 new vaccines against the full-length Omicron S protein may be developed that elicit both negatively charged neutralizing antibodies and negatively charged non-neutralizing antibodies with high affinity to the Omicron variant.
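The claim that basic-residue substitutions and pH together shift a protein's net charge can be illustrated with a Henderson-Hasselbalch net-charge estimate. This is a minimal sketch only: the pKa set and the two toy epitope sequences below are assumptions for illustration, whereas the letter's analysis was performed on full spike structures.

```python
# A minimal sketch of how net peptide charge shifts with pH and with
# basic-residue substitutions, via the Henderson-Hasselbalch equation.
# pKa values and sequences are illustrative assumptions.
PKA_BASIC = {"K": 10.5, "R": 12.5, "H": 6.0}
PKA_ACIDIC = {"D": 3.9, "E": 4.1, "C": 8.3, "Y": 10.1}
PKA_NTERM, PKA_CTERM = 9.0, 2.0

def net_charge(seq: str, ph: float) -> float:
    # positive contributions: protonated fraction of basic groups
    pos = 10 ** (PKA_NTERM - ph) / (1 + 10 ** (PKA_NTERM - ph))
    pos += sum(10 ** (p - ph) / (1 + 10 ** (p - ph))
               for aa in seq if (p := PKA_BASIC.get(aa)) is not None)
    # negative contributions: deprotonated fraction of acidic groups
    neg = 1 / (1 + 10 ** (PKA_CTERM - ph))
    neg += sum(1 / (1 + 10 ** (p - ph))
               for aa in seq if (p := PKA_ACIDIC.get(aa)) is not None)
    return pos - neg

original = "NGVEGFNCYF"   # hypothetical epitope fragment
mutated = "KGVKGFNCYF"    # same fragment with two basic substitutions
for ph in (5.0, 7.4, 9.0):
    print(f"pH {ph}: original {net_charge(original, ph):+.2f}, "
          f"mutated {net_charge(mutated, ph):+.2f}")
```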
Declaration of Competing Interest
The authors declare no competing interests.
THE IMPACT OF FINANCIAL LITERACY ON INVESTMENT DECISIONS: WITH SPECIAL REFERENCE TO UNDERGRADUATES IN WESTERN PROVINCE, SRI LANKA

Article History: Received: 23 October 2020; Revised: 13 November 2020; Accepted: 30 November 2020; Published: 21 December 2020

INTRODUCTION
In the last couple of years, financial literacy has received special attention from researchers, financial institutions and policy makers (Kumari, 2017; Lusardi, 2019). The capability to manage personal finances has become increasingly important in today's world. People must plan long-term investments for their retirement and children's education. They must also decide on short-term savings and borrowing for a vacation, education, emergencies, a house, a car loan, and other items. Additionally, they must manage their own medical and life insurance needs (Chen & Volpe, 1998). Financial literacy is a basic concept in understanding money and its use in daily life. This includes the way income and expenditure are managed and the ability to use common methods of exchanging and managing money. Financial literacy also incorporates an understanding of everyday financial situations such as savings, borrowing, credit and insurance (Roy & Jane, 2018; Singh & Kumar, 2017). The understanding of financial terminology and concepts includes an understanding of key financial ideas central to investing and managing funds to increase wealth and security. Individuals require an awareness of the features available for borrowing and investing. This awareness includes the understanding of brochures and annual statements, compound interest calculations, and delaying the use of funds for consumption. Individuals further need to be aware that high-return investments are also likely to involve high risk, to realize that market values fall as well as rise, and to understand the principles of diversification. This introduces a new, complex set of skills in relation to products and how they work, and their advantages and disadvantages. The other component of financial literacy is the ability to apply knowledge and understanding to make beneficial financial decisions (Kumari & Ferdous, 2019; Wagland & Taylor, 2009). According to Lusardi (2019), students need financial skills perhaps more now than ever before, because current developments in the financial market have focused renewed attention on the importance of people being both well informed about their financial options and discerning financial consumers; in short, being financially literate. Also, financial literacy can help prepare consumers for tough financial times by promoting strategies that mitigate risk, such as accumulating savings, diversifying assets, and purchasing insurance. Financial literacy has typically related individuals' knowledge of economics and finance to their financial decisions on savings, retirement planning, or portfolio choice. There is a basis in economic theory: where you have educated consumers, you will find strong competition and effective markets. In other words, financial literacy is essential for business, the economy, and the country in this age of globalization. Thus, in this study, financial literacy is defined as the understanding and knowledge of basic economic and financial concepts, as well as the ability to use that knowledge to manage financial resources. Therefore, this study examines the extent to which financial literacy influences the investment decisions of Sri Lankan undergraduates.
LITERATURE REVIEW
People make a variety of financial decisions in the course of everyday life, about saving, investing and borrowing. Day by day, the global marketplace is becoming increasingly risky and more vulnerable. One of the main implications is the increasing cost of goods and services, which pushes people to be able to make well-informed financial decisions (Lusardi & Mitchell, 2011). A more recent Australian study focuses on financial literacy relevant to investment decision-making in the context of retirement funds, through objective tests of both basic and advanced financial knowledge and understanding (Gallery, Gallery, Brown, Furneaux, & Palm, 2011). The researchers performed factor analysis and identified three areas of financial literacy: general financial matters, such as understanding compound interest; general investment matters, such as understanding the importance of diversification; and specific superannuation investment matters, such as understanding the relative risks and returns of investment options (Gallery et al., 2011). The Gallery et al. (2011) study builds on earlier work by emphasizing measures of financial literacy that are specific to decision-making in the context of retirement investment choice decisions. Indeed, research shows that individuals with higher levels of financial literacy tend to have higher disposable incomes and a greater capacity to 'spend, save and invest' (Garman, 1997). The results of Müller and Weber (2010) indicate that financial literacy is positively related to investments in low-cost funds. Nevertheless, they report that even the most sophisticated investors select actively managed funds instead of less expensive ETFs (exchange-traded funds) or index fund substitutes. Even finance professors with seemingly high financial literacy do not apply their knowledge when building their own portfolios. For example, Doran, Peterson, and Wright (2010) find that professors' insights regarding market efficiency and the corresponding optimal investment strategy are unconnected to their actual, realized behavior; their investment decisions are, despite their high financial literacy, driven by behavioral factors comparable to those of amateur investors. Notwithstanding the important economic and social impact of retirement, there appears to be limited prior Australian research investigating financial literacy and investment choice decisions. Though there have been a growing number of studies of financial literacy and retirement decisions conducted in Australia in recent years, they have mainly focused on retirement planning, portfolio allocation (Bateman, Louviere, Thorp, Islam, & Satchell, 2010) and savings intentions (Croy, Gerrans, & Speelman, 2010), with the exception of Gallery et al. (2011). However, there have also been recent studies that questioned the effectiveness of financial literacy in improving financial decision-making (Fernandes, Lynch Jr, & Netemeyer, 2014; Miller, Reichelstein, Salas, & Zia, 2014). Over the past two decades, a large body of research has explored whether individuals are well equipped to make financial decisions, particularly as individuals are increasingly in charge of their financial wellbeing during their working lives and after retirement.
Moreover, several studies document only a weak impact, if any, of financial literacy on the quality of investment decisions. Van Rooij, Lusardi, and Alessie (2011), in a survey of Dutch households, find that people with higher levels of financial literacy are significantly more likely to participate in the stock market; individuals with lower financial literacy are much less likely to invest in stocks. Lusardi and Mitchell (2007) find that an individual's financial literacy is a good indicator of changes in his or her portfolio. Using an investor's wealth and profession as a proxy for financial literacy, Dhar and Zhu (2006) find empirical evidence that more literate investors are less prone to the disposition effect. Hilgert, Hogarth, and Beverly (2003) document that a significant number of finance professors do not participate in the stock market at all. Over the past few decades in Australia, there have been substantial changes in the landscape for the management of individual and household wealth. One of the most obvious changes is that individuals are increasingly facing complex decisions for securing their own financial wellbeing in retirement. Adding to this complexity are the extensive choices available to fund members, in terms of both choice of retirement fund and choice of investment options. However, despite being offered these choices, research data indicate that the majority of fund members do not exercise choice and consequently join the default fund nominated by employers, and/or accept the default investment options nominated by fund trustees (Clare, 2006). The literature on personal and pension finance suggests that financial literacy is one of the key requirements for making informed financial decisions.

Knowledge about Financial Products and Investing Behaviors
Despite the many benefits that financial literacy has for consumers, the financial system and the broader economy, improving financial literacy remains a continuing challenge. This can be particularly ascribed to the increased necessity to save for retirement and the increased complexity of financial products and services, which make it more important but also more difficult to make informed investment decisions. Several studies examine the question of whether individuals are well prepared for this task (Hilgert et al., 2003; Lusardi & Mitchell, 2008; Lusardi & Mitchell, 2007); these studies generally indicate that financial illiteracy is widespread and that many individuals lack knowledge of even the most basic economic principles.
Capuano and Ramsay (2011) describe how financial literacy allows people to make the best use of financial products and invest without waste or unnecessary costs. With more disposable income and greater capacity to save and invest, financially literate people tend to hold more financial products and are more productive investors (Cole & Fernando, 2008). Conversely, financial illiteracy is thought to be associated with the spiraling debt problem affecting people with a 'buy now, pay later' credit mentality (Hall, 2008). Lusardi, Michaud, and Mitchell (2013) also identify differential wealth outcomes due to differences in the levels of financial knowledge that individuals possess. Financial wellbeing is more likely to be enhanced, as financially literate consumers will be more confident when making decisions about finance. Briefly, while participation in retirement pension plans in the US and UK is voluntary, Australia's mandatory superannuation regime means that nearly all working Australians are 'forced savers'. Additionally, a number of studies have examined financial literacy and its relationship with voluntary financial decisions, such as participating in the stock market or making portfolio choices (Jappelli & Padula, 2013; Van Rooij et al., 2011). Furthermore, the number of financial decisions that individuals have to make is increasing as a consequence of changes in the market and the economy (OECD, 2012). For example, longer life expectancy means that individuals need to ensure that they accumulate savings to cover much longer periods of retirement. In addition to the benefits identified for individuals, financial literacy is important to economic and financial stability for a number of reasons. Financially literate investors can "create a more competitive, innovative, safe, stable, accessible, disciplined and liquid financial system and markets" (Capuano & Ramsay, 2011). They are also less likely to react to market conditions in unpredictable ways and more likely to take appropriate steps to manage their risks. All of these factors will lead to a more well-organized financial services sector and possibly less costly financial regulatory and supervisory requirements (OECD, 2012). Recent literature has explored the relationship between financial literacy and retirement outcomes. In particular, extensive work by Lusardi and Mitchell (2007a); Lusardi and Mitchell (2007b); Lusardi and Mitchell (2008), using different sources of US data, shows that financial literacy is key for retirement planning and preparedness. More recently, the positive and statistically significant relationship between financial literacy and retirement planning has been confirmed using data from several other countries. For example, based on two surveys conducted before and after the financial crisis, Alessie, van Rooij, and Lusardi (2011) show that financial literacy is strongly related to retirement preparation in the Netherlands. Similar findings, namely that financial literacy is a strong predictor of financial planning for retirement, are also presented in research studies conducted in Germany (Bucher-Koenen & Lusardi, 2011) and in Russia (Klapper & Panos, 2011). However, research on the relationship between financial literacy and retirement planning has yielded mixed evidence in Australian and New Zealand studies.
In a customized survey of a representative sample of 1,024 Australians, Agnew, Balduzzi, and Sunden (2003) identify that aggregate levels of financial literacy were similar to those of comparable countries, with the young, least educated, unemployed and those not in the workforce most at risk of insufficient retirement planning. For instance, Benjamin, Brown, and Shapiro (2013) examine, for a sample of near-retirement English workers, how numerical ability and other cognitive functions affect wealth and retirement savings. The researchers find that numerical ability, measured by an index constructed using five basic numeracy questions, is strongly correlated with retirement savings and assets. The preceding discussion highlights the literature showing that financial literacy impacts decision-making in a range of financial situations, including participation in the stock market and in pension plans in the US. Besides financial literacy, a number of studies have explored other potential influences on financial decisions (Bailey, Nofsinger, & O'neill, 2003; Dulebohn, Murray, & Sun, 2000; Dvorak & Hanley, 2010). Contrasting results were obtained in research from the US. Hung, Yoong, and Brown (2012) examine the correlation between financial literacy and several aspects of individual choices related to retirement saving accounts. Using data from the RAND American Life Panel, the authors find evidence supporting a positive relationship between financial literacy and how much a respondent has thought about retirement. Lusardi and Mitchell (2011) examined financial literacy in the US and demonstrate that financial literacy is particularly low among the young, women, and the less educated. Moreover, Hispanics and African-Americans scored the lowest on financial literacy concepts. They also showed that people who score higher on the financial literacy questions were much more likely to plan for retirement, leaving them better positioned for old age (Arora, 2016). Research shows that better financial education is necessary if individuals are to achieve their retirement objectives, and that financial literacy is pivotal to making informed retirement saving decisions.

Knowledge about Investment Options and Investment Decision
In the context of superannuation investment choice decisions, making informed investment decisions requires fund members to have a certain level of financial knowledge to comprehend the range of information needed to evaluate and monitor the performance of alternative investment options. More specifically, to determine the best match to their risk-return preferences, fund members need to evaluate each option's investment strategy, investment portfolio, and the expected investment risks and returns (Brown, Gallery, & Gallery, 2002). Fund members also need to understand the various fee structures, such as entry, exit, management and investment fees, and the potential effects of these fees on net returns. Financial literacy has been shown to be associated with decision-making in a range of financial situations. For example, higher levels of financial literacy are linked with increased stock market participation (Christelis, Jappelli, & Padula, 2010; Van Rooij et al., 2011; Yoong, 2011), higher private retirement saving (Bucher-Koenen & Lusardi, 2011), and greater portfolio diversification and increased wealth holdings (Lusardi et al., 2013; Lusardi & Mitchell, 2007a).
Building on Lusardi and Mitchell (2006, 2008), Van Rooij et al. (2011) devised two special modules to measure financial literacy and study its relationship to stock market participation. They found that the majority of respondents display basic financial knowledge and have some grasp of concepts such as interest compounding, inflation, and the time value of money. Their estimates showed that the relationship between literacy and stock market participation remains positive and statistically significant in the Generalized Method of Moments regression, and the OLS estimates did not differ significantly from the GMM estimates. They found that financial literacy affects financial decision-making: those with low literacy are much less likely to invest in stocks. Student's Basic Money Management Behavior and Investment Decision Danes and Hira (1987) surveyed 323 college students from Iowa State University, using a questionnaire covering knowledge of credit cards, insurance, personal loans, record keeping, and overall financial management. They found that the participants have a low level of knowledge regarding overall money management, credit cards, and insurance. They also found that males know more about insurance and personal loans, while females know more about the issues covered in the section on overall financial management knowledge. Married students were found to be more knowledgeable about personal finance. There are many studies carried out among youths, including school and college students, on financial literacy. Some of them used purely demographic variables for evaluation, while others analyzed financial literacy based on stream of education and other personal characteristics. However, as Van Rooij et al. (2011) caution, although education is highly correlated with financial literacy, there is a large proportion of individuals with university degrees who display low levels of more advanced financial knowledge. Thus, more highly educated individuals do not necessarily have the requisite knowledge and skills to make investment decisions. Ibrahim, Harun, and Isa (2009) concluded that students' demographic variables, including social background, financial attitude, financial knowledge and family sophistication, significantly affect the financial literacy level of students. In the USA, Peng, Bartholomae, Fox, and Cravener (2007) stated that university students take on higher levels of personal financial responsibility. These students face more financial challenges and need relevant instruction. It is also more likely that college students are experiencing more challenges with finances as they pay bills, use credit cards, work, save, budget monthly expenses, and manage debt. Thus, the importance of financial literacy among college students is paramount. Jariah, Husniyah, Laily, and Britt (2004) examined the financial behavior of university and college students. Among the 1,500 students surveyed, 90% were interested in learning about specific subjects in financial education; the highest percentage identified the need for counseling services, followed by knowledge about savings and investment, budgeting, how to increase their income, and financial management. They further found that female students were more likely than males to enjoy shopping and buy items that were on sale, while males tended to hide their spending habits from their families.
Similarly, Shaari, Hasan, Mohamed, and Sabri (2013) used a questionnaire survey to examine financial literacy among 384 university students from local universities in Malaysia. The results of their study revealed that spending habit and year of study have a significant positive relationship with financial literacy, whereas age and gender are negatively associated with financial literacy. They concluded that financial literacy can inhibit university students from accumulating excessive debt, especially credit card debt. Lusardi, Mitchell, and Curto (2010) tested financial literacy among young adults. They showed that financial literacy is low; less than one-third of young adults possess basic knowledge of interest rates, inflation and risk diversification. Financial literacy is strongly related to sociodemographic characteristics and family financial sophistication. According to Mahdzan and Tabiani (2013), increasing financial literacy and ability encourages better financial decision-making, enabling better planning and management of life events such as education, housing purchase, or retirement. This is especially applicable to college students. Lusardi and Mitchell (2006) and Van Rooij et al. (2011) provide empirical evidence that individuals with low financial literacy are more likely to rely on informal sources of advice, such as family and friends, while more financially skilled individuals are more likely to consult formal sources of advice such as professional advisors. Similarly, in a study of German private pension plans, Bucher-Koenen and Lusardi (2011) find that individuals with higher financial literacy are more likely to solicit financial advice than those with lower literacy. Studies on the effect of sources of advice on financial decisions have often been conducted in the context of voluntary participation (Bailey, Nofsinger, & O'Neill, 2004; Duflo & Saez, 2002; Van Rooij et al., 2011). Although the results of the studies above are informative, the relationship between sources of advice on financial literacy and investment choice decisions in a compulsory superannuation setting still remains to be explored. Besides seeking advice from financial experts or their peers, individuals also resort to sourcing information from different channels in order to improve their financial literacy and to make informed financial decisions. A review by Capuano and Ramsay (2011) of 23 financial literacy surveys from the World Bank and a number of countries reveals a low level of financial understanding and awareness among respondents in these studies. Although the target audience in these surveys varies from high school students to adults, and the methodology differs from objective to subjective measures, these studies suggest that there are a significant number of people with low levels of financial literacy. This is disturbing given that retirement decision-making responsibility is progressively passed on from governments and trustees to individuals. However, Hung et al. (2012) find a strong effect of financial literacy on contributions to defined contribution plans. On the other hand, from a survey administered to a sample of 280 employees at a liberal arts college in New York, Dvorak and Hanley (2010) identify that individuals with high levels of financial knowledge are more likely to actively participate in the defined contribution plan by making personal contributions.
Financial Skills and Investment Decision Financial skills refer to the ability to apply the knowledge of financial services implied in financial literacy. Empirical evidence suggests that financial literacy has a significant impact on the financial status of an individual. Further, Atkinson and Messy (2011) noted that financially literate people tend to accumulate greater wealth. However, some researchers (e.g. Mahdzan and Tabiani (2013)) argued that not all persons with sound financial knowledge make accurate investment decisions. Chen and Volpe (1998) emphasize that any individual should have the skill to evaluate new and complex financial instruments and make informed judgments to maximize the benefits of financial decisions. Saha (2016) further argues that individuals are considered financially literate if they are competent and can demonstrate that they have used the knowledge they have learned in making investment decisions. Further, a person who lacks the ability to analyse available financial options cannot be considered a financially literate individual (Roy & Jane, 2018). Therefore, financial skills enable individuals to make informed decisions about their money and minimize their chances of being misled on financial matters. Accordingly, a person should have knowledge as well as skills to make better financial decisions in life (Singh & Kumar, 2017). Therefore, boosting financial literacy skills may well be critically important for financial and investment decisions. Roy and Jane (2018) further noted that as people become more experienced in financial matters, they increasingly become financially sophisticated, and it is predicted that individuals become more financially competent. However, in the present context, young people have financial knowledge but do not have the basic financial skills necessary to develop and maintain a budget, to understand credit, to understand investment vehicles, or to take advantage of the banking system (Lusardi, 2019; Rai, 2019; Saha, 2016; Singh & Kumar, 2017). Improving Usage of Financial Products and Investment Decision Improving financial literacy through education programs has become a resounding issue since a lack of financial literacy has been acknowledged as one of the aggravating factors of the global financial crisis (Gallery & Gallery, 2010). This discussion is centered mainly on the knowledge gaps that persist about the fundamental relationships between usage of financial products, education and behavior. Few studies have been able to construct sophisticated measures of financial literacy and definitively establish causal links between financial education, literacy and behavior. Indeed, some researchers argue that financial literacy is a secondary concern when it comes to decision-making, partly because evidence on financial education programs has been mixed. While early evaluations, notably from the US, suggested that workplace financial education initiatives increased pension plan participation (Bernheim & Garrett, 2003), more recent research has found minimal impacts, particularly when other factors such as peer effects and psychological traits were considered (Fernandes et al., 2014). To achieve optimal retirement outcomes, governments are increasingly aware of the need for individuals to have sound financial knowledge and skills. Yet, financial literacy studies on the general population as well as subgroups within the population indicate that financial illiteracy is widespread (OECD, 2005).
While early studies of workplace financial education programs, such as those of Rai (2019) and Bernheim and Garrett (2003), concluded that financial literacy is an antecedent to various healthy financial behaviors, several recent literature reviews have drawn different conclusions about the effects of financial literacy and financial education (Adams & Rau, 2011; Willis, 2008). In particular, Adams and Rau (2011) conclude, "both experimental and non-experimental studies demonstrate that understanding the basic principles of saving, such as compound interest, has a direct effect on financial preparation. This effect holds after controlling for demographic characteristics". However, Willis (2008) argues that research to date has yet to produce reliable, statistically significant evidence of the effectiveness of financial literacy programs in improving consumer financial conditions. Based on the previous literature explained in the review above, the conceptual framework (Figure 1) has been developed. HYPOTHESES The direct relationship between the independent and dependent variables is described first. Financial literacy was identified as a variable positively influencing the investment decision (Sahrawat, 2010). As explained by previous researchers, there is a strong causal relationship between financial literacy and investment decisions (Bonga & Mlambo, 2016; Park, Lee, & Lee, 2015; Thorat, 2006). Therefore, considering this previous literature, the researcher formulated a hypothesis to test the direct relationship as the main hypothesis. In addition to the main hypothesis, five sub-hypotheses were also formulated to test the dimensional impact of financial literacy on investment decisions. Research carried out mainly in developed countries has shown that financial literacy is an important component of sound financial decision-making and can have important implications for financial behavior; people with low financial literacy are more likely to have problems with debt (Lusardi, 2019). Financial literacy is the combination of consumers'/investors' understanding of financial products and concepts and their ability and confidence to appreciate financial risks and opportunities, to make informed choices, to know where to go for help, and to take other effective actions to improve their financial well-being (OECD, 2005). Also, greater financial understanding and knowledge allows those members of society who are otherwise excluded from the mainstream financial sector to get the opportunity to use financial products and services (Capuano & Ramsay, 2011). Based on the above literature, the following hypothesis H1a was developed. H1a: Financial product-related knowledge of undergraduate students significantly affects their investment decisions. According to the findings of Hogarth (2002), there is a consistent theme running through most definitions of financial literacy, including the following points: • Being knowledgeable, educated and informed on the issues of money. • Understanding the basic concepts underlying the management of money and assets (e.g. the time value of money in investments and the pooling of risks in insurance). • Using that knowledge and understanding to plan and implement financial decisions. Based on the above points, hypothesis H1b was developed. H1b: The experience of undergraduate students in using financial products affects their investment decisions.
Context is important: one of the most significant factors in financial decision-making is how the available options are 'framed', namely how these options relate to one another, how they are explained, and what other information is provided at the same time (Kahneman & Tversky, 1984). Also, decision-making in superannuation matters is complex and can have a substantial impact on retirement outcomes, so individuals need to have a sufficient level of financial literacy to understand and make informed decisions in superannuation matters. Individuals who do not understand financial issues, such as risk and return on investments and the level of savings needed to fund retirement, are likely to have considerably less retirement income than they desire (Lusardi & Mitchell, 2006). Based on the above literature, the following hypothesis H1c was developed. H1c: Investment option-related knowledge of undergraduate students affects their investment decisions. According to Lusardi and Mitchell (2011), personal finance comprises all financial decisions and activities of an individual, including budgeting, insurance, savings, investing, debt servicing, mortgages and more. Personal financial decisions may involve paying for education, financing durable goods such as real estate and cars, buying insurance, e.g. health and property insurance, and investing and saving for retirement. Based on the above literature, the researcher developed hypothesis H1d to test the impact on investment decisions. H1d: Undergraduates' basic money management behavior affects their investment decisions. Most researchers have found that each selected dimension of financial literacy has a significant impact on the degree of investment decision. For instance, Saha (2016) and Thilakaratna (2012) conducted research to test the relationship between financial skills and the financial decisions of shareholders. Some researchers identified a direct relationship between financial skills and the investment decisions of small businesses (e.g. Kumari, Ferdous, & Siti, 2020b). In addition, some researchers noted that many women have poor financial skills and that their financial skills are associated with their investment decisions (e.g. Kumari, Ferdous, & Siti, 2020a; Lusardi, 2019; Rai, 2019; Singh & Kumar, 2017). Based on the above discussion, the researcher developed hypothesis H1e to determine the relationship between financial skills and investment decisions. H1e: Financial skills affect the investment decisions of undergraduate students. METHODOLOGY In order to examine university students' financial literacy level and its impact on their investment decisions, this research was conducted under a positivist philosophy, a deductive research approach and a quantitative research strategy. The present study mainly focuses on identifying the influence of financial literacy on the investment decisions of the younger generation in Sri Lanka and on verifying the most significant determinants of financial literacy affecting investment decisions. In the initial phase of the study, an extensive literature review was carried out with the purpose of identifying the determinants of financial literacy. In the second phase, a survey was conducted among 200 undergraduates representing four government universities in the Western Province of Sri Lanka, with the assistance of a researcher-administered questionnaire.
The sample was selected using the convenience sampling method, and the unit of analysis was the individual belonging to the government university system. The reliability of scales was measured by Cronbach's Alpha coefficients. Furthermore, a partial least squares structural equation model (PLS-SEM) was employed as the principal data analysis approach, and Smart PLS 3 was employed as the main analytical software. The degree of financial literacy was tested based on 28 items identified by previous researchers, and Principal Component Analysis was employed to determine the key factors of financial literacy. RESULTS AND DISCUSSION The structural model denotes the relationships among the main constructs in the conceptual framework using path coefficients. Accordingly, the path coefficients represent the hypothesized relationships among the constructs in the model (Hair, Sarstedt, Ringle, & Gudergan, 2018; Ringle, Sarstedt, Mitchell, & Gudergan, 2020). The value of a path coefficient falls between -1 and +1; values approaching +1 indicate a strong positive relationship, and values approaching -1 a strong negative one. Whether a path coefficient is statistically significant depends on its standard error, which can be assessed using two types of criteria. As the bootstrap standard error enables computation of the t values and p values for all structural path coefficients, the p value can be used to assess the significance level of path coefficients (Hair et al., 2018). Generally, the 5% significance level is taken as the threshold, so the p value must be smaller than 0.05 to demonstrate a significant relationship among constructs. Further, the respective t value should fall outside the range -1.96 to +1.96 (i.e., |t| > 1.96) to confirm the significance of a path coefficient. This condition can be considered criterion 1. Moreover, Hair et al. (2018) suggest that researchers should check the bootstrap confidence intervals under the bias-corrected approach (BCa) in order to further test the significance of path coefficients when the first criterion is not satisfied. Accordingly, if the bootstrap confidence interval does not contain zero, the path coefficient is still significant; this can be considered criterion 2. The path diagram is given in Figure 2, and the summary of the statistics obtained by bootstrapping techniques is given in Table 1. Researchers should assess the R² values of all the endogenous constructs as a measure of the model's in-sample predictive power (Ringle et al., 2020). According to Hair et al. (2018), R² values of 0.25, 0.50, and 0.75 imply that the respective endogenous variables are explained weakly, moderately, and strongly, respectively. Therefore, one of the main parts of the structural model evaluation is the assessment of the coefficient of determination (R²). In the present research, financial literacy is the main construct explaining investment decision (the dependent variable). As per the estimated structural model given in Figure 2, the overall R² (0.689) is found to be at a moderate level. In this case, it suggests that the five dimensions of financial literacy, i.e. students' knowledge about financial products, accessing financial products, money management, knowledge about financial investment options, and financial skills, can jointly explain 68.9% of the variance of the endogenous construct (investment decision).
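As a rough illustration of the two significance criteria described above, the following Python sketch checks a single path coefficient using bootstrap t and p values (criterion 1) and a bootstrap confidence interval (criterion 2). It is a simplified, hypothetical example: a plain regression slope stands in for a PLS path coefficient, percentile intervals stand in for the bias-corrected (BCa) intervals recommended by Hair et al. (2018), and it is not the Smart PLS 3 implementation.

import numpy as np
from scipy import stats

def bootstrap_path_significance(x, y, n_boot=5000, alpha=0.05, seed=0):
    # Criterion 1: |t| > 1.96 and p < 0.05, using the bootstrap standard error.
    # Criterion 2: the bootstrap confidence interval excludes zero.
    rng = np.random.default_rng(seed)
    n = len(x)
    beta_hat = np.polyfit(x, y, 1)[0]            # point estimate of the "path"
    boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)              # resample cases with replacement
        boot[b] = np.polyfit(x[idx], y[idx], 1)[0]
    se = boot.std(ddof=1)                        # bootstrap standard error
    t_val = beta_hat / se
    p_val = 2 * (1 - stats.norm.cdf(abs(t_val)))
    lo, hi = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return {"beta": beta_hat, "t": t_val, "p": p_val, "ci": (lo, hi),
            "criterion1": abs(t_val) > 1.96 and p_val < alpha,
            "criterion2": not (lo <= 0.0 <= hi)}

A path would be retained when either criterion holds; in the results reported below, H1e (financial skills) passes both criteria, while H1b (accessing financial products) passes neither.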
The R² value is 0.689; it is shown inside the blue circle of the investment decision construct in the PLS diagram (see Figure 2). The individual path coefficients in the structural model were then assessed against the criteria above. The path for knowledge about financial products was positive and significant; therefore, H1a was accepted. The second sub-hypothesis (H1b) tested the relationship between undergraduate students' access to and use of financial products and their investment decisions. The output results revealed the following: the standardized β = 0.03 indicates a positive relationship between these two variables; p = 0.639 means the probability value exceeds the threshold value (0.05); t = 0.47 is less than the 1.96 cutoff; and the BCa (bias-corrected) confidence interval (lower = -0.09, upper = 0.16) contains zero. This confirms that students' access to financial products has an insignificant effect on their investment decisions; it was the least significant determinant of investment decision, and therefore H1b was rejected. Further, for the last sub-hypothesis, the path coefficient reported for financial skills was β = 0.722. That means there is a positive, significant relationship between the financial skills of undergraduates and their investment decisions. The other statistical values (p = 0.000; t = 10.275; BCa confidence interval lower = 0.576 and upper = 0.852) revealed a strong relationship between these two variables, and financial skills was the most significant determinant of financial literacy for undergraduates' investment decisions. Therefore, H1e was accepted. CONCLUSION In this age of globalization and financial development, it is crucial to research and find ways to improve the financial literacy skills of people, especially students, who are viewed as the future generation in every country. This study focuses on the impact of financial literacy on investment decisions. The study was conducted among university undergraduates in the Western Province of Sri Lanka. Two hundred students from 4 government universities (University of Colombo, University of Kelaniya, University of Sri Jayawardhanapura and Open University of Sri Lanka) in Sri Lanka participated in this study, making it the first comprehensive study on the state of financial literacy among university students in Sri Lanka. The framework used in this study assesses students' knowledge in money management, savings, borrowing and investing. In addition, the study examines students' application of financial knowledge and understanding in terms of their financial behavior and decision-making. Several important findings have emerged from this study, and these are summarized and discussed below. With regard to students' knowledge of financial literacy, the researcher observed that students' level of knowledge in savings is medium, whereas their knowledge in general finance, investment and insurance is low. This means that students have adequate knowledge of savings and borrowing but inadequate knowledge of the other components. The overall mean percentage of correct scores for the entire survey is 48.6%, indicating that, on average, the respondents answered less than half of the questions correctly. Thus, the findings show that lack of financial knowledge is widespread among university students in Sri Lanka. This study reveals that some amount of income is needed to promote high financial literacy.
Apart from the basic characteristics of respondents, family characteristics, the geographical areas where the students lived, the source of funds for education and participation in the financial market indicate where students could be exposed to financial matters. The difference in all the financial literacy dimensions and the entire survey is significant. The researcher observes from the university-level analysis that students who have to make decisions related to money management, in areas such as payment of fuel bills, insurance cover, repairs and maintenance, etc., have more financial literacy knowledge than students who do not have to make such decisions. Within the work environment, and in the process of working to support their education, they get exposed to financial issues such as money management, savings, borrowings, and investments. The researcher also finds that students with at least a personal account or an investment account are more financially literate than those without either. Although the money management practice of students is good in general, students who are financially knowledgeable have comparatively better practice than students who lack financial knowledge. The findings revealed that financial literacy positively and significantly influenced the undergraduates' investment decisions. Further, when focusing on the dimensions of financial literacy, three dimensions significantly impacted the level of investment decision. Among them, the most significant dimension was financial skills. Therefore, the researcher concluded that financial skills can be considered a main determinant of financial literacy to enhance undergraduates' investment decisions. Knowledge about financial investment options was identified as the second most influential dimension. Therefore, knowledge about financial investment options is also an important determinant of undergraduates' investment decisions.
2020-12-31T09:08:13.151Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "6003417fb38a3c90052effdd38e31e732c7459c6", "oa_license": null, "oa_url": "https://doi.org/10.18488/journal.137.2020.42.110.126", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "bff60808187bb3aeba6cca735197ef88f1cdf63e", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [] }
232164504
pes2o/s2orc
v3-fos-license
Effect of childhood overweight on distal metaphyseal radius fractures treated by closed reduction Background The medical community has recognized overweight as an epidemic negatively affecting a large proportion of the pediatric population, but few studies have been performed to investigate the relationship between overweight and failure of conservative treatment for distal radius fractures (DRFs). This study was performed to investigate the effect of overweight on the outcome of conservative treatment for DRFs in children. Methods We performed a retrospective study of children with closed displaced distal metaphyseal radius fractures in our hospital from January 2015 to May 2020. Closed reduction was initially performed; if closed reduction failed, surgical treatment was performed. Patients were followed up regularly after treatment, and redisplacement was diagnosed on the basis of imaging findings. Potential risk factors for redisplacement were collected and analyzed. Results In total, 142 children were included in this study. The final reduction procedure failed in 21 patients, all of whom finally underwent surgical treatment. The incidences of failed final reduction and fair reduction were significantly higher in the overweight/obesity group than in the normal-weight group (P = 0.046 and P = 0.041, respectively). During follow-up, 32 (26.4%) patients developed redisplacement after closed reduction and cast immobilization. The three risk factors associated with the incidence of redisplacement were overweight/obesity [odds ratio (OR), 2.149; 95% confidence interval (CI), 1.320–3.498], an associated ulnar fracture (OR, 2.127; 95% CI, 1.169–3.870), and a three-point index of ≥ 0.40 (OR, 3.272; 95% CI, 1.975–5.421). Conclusions Overweight increases the risk of reduction failure and decreases the reduction effect. Overweight children were two times more likely to develop redisplacement than normal-weight children in the present study. Thus, overweight children may benefit from stricter clinical follow-up and perhaps a lower threshold for surgical intervention. Background Distal radius fractures (DRFs) are the most common fractures in children. According to previous studies, DRFs account for 20 to 35% of all pediatric fractures [1][2][3]. These fractures are located in the epiphysis in 20% of children and in the metaphysis in 80% [4,5]. Management of DRFs is controversial. Although the use of percutaneous pin fixation for completely displaced fractures is advocated by some scholars because of its good fixation effect, closed reduction with cast immobilization is still the most common treatment for these fractures [6][7][8]. However, the high incidence of redisplacement is a clear limitation of conservative treatment. Some authors have reported that the incidence ranges from 10 to 91% according to different definitions of redisplacement [9,10]. In general, about one-third of patients may develop redisplacement during follow-up. Childhood overweight or obesity continues to be a serious problem worldwide despite recent increases in awareness and prevention initiatives. According to recent studies, about one-third of children and adolescents in the USA are classified as either overweight or obese [11,12]. This has resulted in increased awareness of the effects of overweight on the care of the pediatric population. Studies have shown that overweight in children is correlated with a higher incidence of extremity fractures and more complications following the treatment of some fractures [13,14]. 
For conservatively treated fractures, successful reduction and maintenance of proper alignment can be difficult in overweight or obese children because of the larger soft tissue envelope. In this way, overweight may lead to reduction failure or loss of fracture reduction, increasing the possibility of further surgical treatment. Few studies to date have been performed to investigate the relationship between overweight and failure of conservative treatment for DRFs, especially in children. Therefore, the present study was performed to analyze the effect of overweight on the outcome of conservative treatment. We hypothesized that overweight increases the risk of reduction failure and the incidence of fracture redisplacement. Patient population After receiving approval from our institutional review board, we performed a retrospective study of children with displaced DRFs in our hospital from January 2015 to May 2020. The inclusion criteria were an age of 2 through 16 years and confirmation of distal metaphyseal radius fractures by X-ray or computed tomography images. A metaphyseal fracture was defined as a fracture proximal to and within 4 cm of the growth plate of the distal radius [6]. Patients with open fractures, epiphyseal injuries, concomitant upper extremity fractures, inadequate follow-up, or fractures initially treated by K-wire/plate fixation were excluded from this study. Treatment procedure and follow-up All patients initially underwent closed reduction in the emergency room. When initial manipulation failed, additional reduction was performed under brachial block or general anesthesia in the operating room. If repeated reduction failed, the treatment was defined as failed final reduction and surgical treatment was planned. All reduction procedures were performed by experienced surgeons, and successful reductions were fixed by short-arm casts. Anteroposterior (AP) and lateral radiographs were taken before and after treatment. The patients were followed up at 1, 2, 4, and 6 weeks after casting. Radiographs were taken at each routine follow-up visit. Redisplacement was defined as any modification from the initial AP or lateral radiograph after treatment (Fig. 1). After the fracture had healed, the cast was removed and wrist function exercises were started. The follow-up was finished when satisfactory wrist joint function was achieved. Parameter evaluation Basic data including age, sex, height, and weight were collected from the electronic medical records. Each child's body mass index (BMI) percentile was defined using sex-specific BMI-for-age charts established by the Centers for Disease Control and Prevention. Normal-weight children were defined as those with a BMI-for-age percentile (BMI percentile) of < 85, and overweight children were defined as those with a BMI percentile of 85 to < 95. Children with a BMI percentile of ≥ 95 were included in the obese cohort. These cutoff values were based on the Centers for Disease Control and Prevention definitions for children and teenagers [15][16][17]. The parameters obtained from radiographs included the distance to the epiphysis (distance from the radius fracture to the growth plate), whether an associated distal ulnar fracture was present, assessment of initial displacement, and the extent of reduction. The assessment of initial displacement included fracture translation and fracture angulation. The extent of reduction was classified as anatomic, good, or fair.
Anatomic reduction was defined as complete anatomic fracture reduction with neither translation nor angulation; good reduction was defined as residual dorsal angulation of < 10° or residual translation of < 2 mm; and fair reduction was defined as angulation of 10 to 20° or translation of 2 to 5 mm [18]. The quality of immobilization was assessed using the three-point index as described by Alemdaroğlu et al. [18]. The corresponding measurements were obtained from radiographs after immobilization. This index was calculated as follows: Three-point index = [(proximal radial gap + ulnar fracture site gap + distal radial gap)/(contact between fracture fragments in the AP plane)] + [(proximal dorsal gap + volar fracture site gap + distal dorsal gap)/(contact between fracture fragments in the lateral plane)]. Two independent observers assessed the radiological findings in a blinded manner, and the mean values were used for the data analysis. Data analysis IBM SPSS Statistics for Windows, version 19.0 (IBM Corp., Armonk, NY, USA), was used to perform all statistical analyses. Categorical data were analyzed for significance by Fisher's exact probability method, and numerical data were analyzed by the independent-samples t test. Variables demonstrated to be potentially associated with redisplacement in the univariate analysis (P < 0.10) were entered into the multiple logistic regression analysis, and a P value of < 0.05 was considered statistically significant. Results After excluding patients who underwent initial wire or plate fixation, 142 children were included in this study. Among these patients, 101 (71.1%) were male and 41 (28.9%) were female. Their mean age at the time of fracture was 9.2 ± 3.1 years. According to the above-described criteria, 91 (64.1%) children were of normal weight, 32 (22.5%) were overweight, and 19 (13.4%) were obese (Table 1). All patients initially underwent closed reduction under a local block in the emergency room. Initial reduction failed in 39 patients, all of whom required additional manipulation in the operating room. The final reduction procedure failed in 21 of these patients, all of whom then underwent surgical treatment. After conservative treatment, radiographs confirmed anatomical reduction in 32 (26.4%) patients, good reduction in 49 (40.5%), and fair reduction in 40 (33.1%). The patients' data according to their different weight statuses at the various treatment stages are shown in Fig. 2. The incidence of failed initial reduction was higher in the overweight/obesity group than in the normal-weight group (37.3% and 22.0%, respectively), but the difference was not statistically significant (P = 0.077). The incidences of failed final reduction and fair reduction were significantly higher in the overweight/obesity group than in the normal-weight group (P = 0.046 and P = 0.041, respectively). During follow-up, 32 (26.4%) patients developed redisplacement after closed reduction and cast immobilization. Overall, 23 (71.9%) redisplacements occurred within 1 week after treatment, and 9 (28.1%) occurred more than 1 week after treatment. No patients developed any serious complications such as compartment syndrome or permanent median nerve dysfunction. We performed univariate and multivariate analyses to examine the effect of overweight on redisplacement; an illustrative sketch of this type of analysis is given below.
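As announced above, a hedged sketch of this kind of analysis follows. It is not the authors' code: the data frame, column names, and helper function are hypothetical, and it only illustrates the three-point index computation and a multivariate logistic regression reporting odds ratios with 95% confidence intervals via statsmodels.

import numpy as np
import pandas as pd
import statsmodels.api as sm

def three_point_index(ap_gaps, ap_contact, lat_gaps, lat_contact):
    # Sum of the three gap measurements divided by fragment contact,
    # computed in the AP plane and the lateral plane, then added.
    return sum(ap_gaps) / ap_contact + sum(lat_gaps) / lat_contact

def odds_ratios(df, predictors, outcome="redisplacement"):
    # df: one row per patient, with binary (0/1) predictors and outcome,
    # e.g. columns overweight_obese, ulnar_fracture, tpi_ge_040.
    X = sm.add_constant(df[predictors])
    fit = sm.Logit(df[outcome], X).fit(disp=0)
    or_est = np.exp(fit.params)                  # odds ratios
    ci = np.exp(fit.conf_int())                  # 95% CI on the OR scale
    return pd.DataFrame({"OR": or_est, "CI_low": ci[0], "CI_high": ci[1]})

# Example call (hypothetical data):
# odds_ratios(df, ["overweight_obese", "ulnar_fracture", "tpi_ge_040"])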
In the univariate analyses, we found that an overweight status (P = 0.002), the presence of an associated ulnar fracture (P = 0.004), initial translation of ≥ 50% (P = 0.013), and a high three-point index (P < 0.001) were potential risk factors associated with redisplacement after closed reduction, while other factors were not (P ≥ 0.10). The details of the univariate analyses are listed in Table 2. In the further multivariate logistic regression, the three factors associated with the incidence of redisplacement during follow-up were overweight/obesity [odds ratio (OR), 2.149; 95% confidence interval (CI), 1.320–3.498], an associated ulnar fracture (OR, 2.127; 95% CI, 1.169–3.870), and a three-point index of ≥ 0.40 (OR, 3.272; 95% CI, 1.975–5.421). Discussion Closed reduction and casting is a widely accepted treatment for displaced distal metaphyseal radius fractures. Although the initial reduction failed in some patients in the present study, the overall rate of successful reduction was 85.8%. However, the rate of redisplacement was 26.4% during follow-up. The incidences of failed final reduction and fair reduction were significantly higher in the overweight/obesity group than in the normal-weight group. The univariate and multivariate analyses showed that overweight/obesity, an associated ulnar fracture, and a high three-point index were independent risk factors associated with the incidence of redisplacement. Overweight children were two times more likely to develop redisplacement than normal-weight children. The medical community has recognized overweight and obesity as an epidemic negatively affecting a large proportion of the pediatric population across the nation, introducing new physiological and social problems. Although most of the associated health concerns involve endocrine abnormalities and an increased risk of cardiovascular disease later in life [19,20], clinicians should also be aware of the orthopedic issues associated with childhood overweight, including an increased risk of fracture and greater fracture severity [21][22][23]. Growing numbers of reports are detailing higher rates of complications associated with surgical and conservative treatments in overweight children [24,25]. Several studies have revealed a relationship between obesity and DRFs [26,27], but few have focused on the pediatric population. The large soft tissue envelope in the forearm of overweight children creates more difficulty in achieving effective reduction. In the present study, the incidence of failed initial reduction was higher in the overweight/obesity group than in the normal-weight group, but the difference was not statistically significant. We believe that this lack of significance may have been due to the relatively small sample size. The incidence of failed final reduction was significantly higher in the overweight/obesity group than in the normal-weight group, confirming our hypothesis that overweight increases the risk of reduction failure. In a previous study of pediatric both-bone forearm fractures, Okoroafor et al. [28] also found that a higher percentage of overweight and obese children than normal-weight children required surgical intervention after failure of nonsurgical management. Moreover, we found that the incidences of failed final reduction and fair reduction were significantly higher in the overweight/obesity group than in the normal-weight group, which supports the conclusion of Auer et al. [29] that obese children with distal radius and forearm fractures achieve poorer reduction. Fracture redisplacement in the present study usually resulted from fracture instability or weak external fixation.
Overweight/obesity, an associated ulnar fracture, and a high three-point index were the three potential risk factors associated with the incidence of fracture redisplacement. An associated ulnar fracture increases the instability of fractures and thus increases the risk of redisplacement. A high-quality cast following reduction is important, and a high three-point index, representing a poorly molded cast, can lead to unsatisfactory external fixation [30]. We speculate that the forearms of overweight or obese children have a disproportionate muscle-to-adipose tissue ratio, and the increased distance gives the cast less of a mechanical advantage to control the angulation or translation of the fracture. Children with these risk factors require significantly more frequent follow-up visits. This study has several limitations. First, this study was affected by the limitations inherent to its retrospective observational design. Second, this study only included patients with distal metaphyseal radius fractures. The results are not applicable to children with DRFs of the epiphysis. Third, a limited number of risk factors were investigated in the present study. Inclusion of other factors in future studies may provide more valuable information. Finally, the radiological measurements were performed without considering interobserver or intraobserver reliability, and the measurements could have been influenced by minor differences in the patients' forearm positioning during the radiographic examinations. Similar studies with a prospective design, inclusion of more factors, and reliable measurements are still necessary. Conclusion We found that the rate of successful reduction for displaced distal metaphyseal radius fractures was 85.8% and that the rate of redisplacement was 26.4% during follow-up. Overweight increases the risk of reduction failure and reduces the quality of reduction. Overweight/obesity, an associated ulnar fracture, and a high three-point index were demonstrated to be independent risk factors associated with the incidence of redisplacement. Overweight children are two times more likely to develop redisplacement than normal-weight children. These patients may benefit from stricter clinical follow-up and perhaps a lower threshold for surgical intervention.
2021-03-10T14:52:21.035Z
2021-03-10T00:00:00.000
{ "year": 2021, "sha1": "2c54080bf53fd64d1a58926473b5de41ed0c32da", "oa_license": "CCBY", "oa_url": "https://josr-online.biomedcentral.com/track/pdf/10.1186/s13018-021-02336-2", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2c54080bf53fd64d1a58926473b5de41ed0c32da", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
22495796
pes2o/s2orc
v3-fos-license
Correlation of cervical sagittal alignment parameters on full-length spine radiographs compared with dedicated cervical radiographs Background Radiographic parameters to evaluate the cervical spine in adult deformity using 36-inch films have been proposed. While 36-inch films are used to evaluate spinal deformity, dedicated cervical films are more commonly used to evaluate cervical spine pathology. The purpose of this study is to determine correlations between sagittal measures from a dedicated cervical spine radiographs and 36-inch spine radiographs. Methods Patients who had standing cervical and 36-inch radiographs within four weeks of each other were identified. On separate occasions, the following measures were determined: C0-C2, C0-C7, C1-C2 and C2-C7 sagittal Cobb angles; T1 slope; chin-brow-vertical angle (CBVA), C1-C7 sagittal vertical axis (SVA), C2-C7SVA, center of gravity-C7 sagittal vertical axis (COG-C7SVA). Paired t-tests and correlation analyses were done between parameters from the cervical and the 36-inch film. Results Radiographic measurements were collected on 40 patients (33 females and 7 males, mean age of 48.9 ± 14.5 years). All correlations were statistically significant at p < 0.001. C0-C2 Cobb had the strongest correlation (r = 0.81) and C2-C7 Cobb had the weakest (r=0.62). Among sagittal balance parameters, COG-C7SVA had the weakest correlation (r = 0.42) and C1-C7SVA (r = 0.64) and the C2-C7SVA (r = 0.65) had strong correlations. The T1 slope and the CBVA had correlation coefficients of 0.74 and 0.91, respectively. There was no statistically significant difference in measures taken from the cervical film and 36-inch film, except for the C0-C7 Cobb (p = 0.000) with a measurement difference of 7° and the T1 tilt (p = 0.000) with a measurement difference of 5°. Conclusion Except for COG-C7 SVA, strong correlations between most cervical spine parameters taken from a dedicated cervical film and those taken from a 36-inch film were seen. 36-inch radiographs provide a reasonable estimation of cervical sagittal spine parameters and may obviate the need for a dedicated cervical spine radiograph. Background Over the past 10 years, there has been an increased focus on the evaluation and treatment of adult scoliosis [1][2][3]. Several studies have examined the complexity of adult scoliosis patients, based in part on the interaction of the deformity with the normal aging processes of the spine [2][3][4][5][6][7]. The intersection between degeneration and deformity is most evident in relation to the lumbar spine. Typically, assessment of adult scoliosis patients involves evaluation of both the primary deformity and the lumbar spine [4,8,9]. Even when managing a primary thoracic curve, treatment decisions may revolve around the impact of any potential surgery on the unfused lumbar levels [4,[8][9][10][11]. Recently, attention has been directed to the impact of adult scoliosis or scoliosis treatment on the cervical spine [12][13][14][15][16][17][18][19][20]. Several authors have proposed a set of standardized radiographic parameters [21] to help evaluate the cervical spine in patients with adult spinal deformity using full-length 36-inch radiographs. While this is the standard radiograph used to evaluate spinal deformity, dedicated cervical spine radiographs are more commonly used to evaluate cervical spinal pathology. 
With the need to limit costs and exposure to radiation, there is a need to determine whether a separate cervical spine radiograph, aside from the long 36-inch radiograph, is necessary to evaluate the sagittal parameters of the cervical spine. Recent studies have reported a higher incidence of cancer in adolescent idiopathic scoliosis patients who have had multiple radiographs [22]. As the effect of radiation exposure is cumulative, decreasing the number of radiographs taken over an individual's lifetime, regardless of age, should be considered. The purpose of this study is to determine whether there is a correlation between sagittal measures of the cervical spine taken from the 36-inch spine radiographs and sagittal measures of the cervical spine taken from cervical spine radiographs. Methods From a multi-surgeon spine specialty clinic, patients who had a 36-inch spine radiograph as well as a separate standing cervical spine radiograph within four weeks of each other were identified. All radiographs were taken using a Picture Archiving and Communication System (PACS). All 36-inch standing radiographs were taken with the beam centered at the thoracic area in order for both femoral heads and the cervical spine to be visible. All 36-inch spine films were taken in the "clavicle" position [23]. In the "clavicle" position, the patient fully flexes both elbows with the hands in a relaxed fist, wrists flexed, hands centered in the supraclavicular fossae midway between the suprasternal notch and acromion, passively flexing the humerus forward. This position has been standard at our center since 2002. All dedicated cervical spine films were taken with the beam centered at C4, approximately the level of the angle of the mandible. T1 slope is the angle between a horizontal line and the upper end plate of T1. Sagittal plane translation of the cervical spine is measured through the C7 SVA, which is a plumb line in line with the posterior superior aspect of C7. C1-C7 SVA is the distance between a plumb line dropped from the anterior tubercle of C1 and the C7 SVA. C2-C7 SVA is the distance between a plumb line dropped from the centroid of C2 (or odontoid) to the C7 SVA. COG-C7 SVA is the distance between a plumb line dropped from the anterior portion of the external auditory canal to the C7 SVA. Paired t-tests and correlation analyses were performed between each sagittal radiographic parameter as measured on the cervical spine radiograph and the corresponding paired radiographic parameter on the 36-inch radiograph. Correlation coefficients between 0.60 and 0.80 indicate a marked degree of correlation, while coefficients between 0.80 and 1.00 indicate robust correlations [24]. This study was reviewed and approved by the University of Louisville Institutional Review Board (13.0757) and the Norton Healthcare Office of Research Administration (13-N0234). Results Radiographic measurements were collected on 40 patients. There were 33 females and 7 males with a mean age of 48.9 ± 14.5 years. All correlations were statistically significant at p < 0.001 (Table 1). All sagittal Cobb measures showed a marked correlation. The C0-C2 sagittal Cobb had the strongest correlation (r = 0.81) and the C2-C7 sagittal Cobb had the weakest (r = 0.62). Among the sagittal balance parameters, the COG-C7 SVA had the weakest correlation (r = 0.42), and the C1-C7 SVA (r = 0.64) and the C2-C7 SVA (r = 0.65) had strong correlations. The T1 slope and the CBVA had correlation coefficients of 0.74 and 0.91, respectively.
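As an illustrative sketch of the statistical comparison used here (array names are hypothetical and this is not the authors' code), each parameter measured on the two films can be compared with a Pearson correlation and a paired t test:

import numpy as np
from scipy import stats

def compare_films(cervical, long_film):
    # cervical, long_film: the same sagittal parameter measured on the
    # dedicated cervical film and on the 36-inch film, patient by patient.
    cervical = np.asarray(cervical, dtype=float)
    long_film = np.asarray(long_film, dtype=float)
    r, p_r = stats.pearsonr(cervical, long_film)     # agreement in trend
    t, p_t = stats.ttest_rel(cervical, long_film)    # systematic offset
    return {"r": r, "p_correlation": p_r,
            "mean_difference": float(np.mean(cervical - long_film)),
            "t": t, "p_paired": p_t}

Under the scale adopted above, a coefficient between 0.60 and 0.80 with a non-significant paired difference would support reading the parameter off the 36-inch film alone.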
Paired t-tests showed that there was no statistically significant difference in the measures taken from the cervical radiograph and the 36-inch radiograph (Table 2), except for the occiput-C7 sagittal Cobb angle (p = 0.000), with a measurement difference of 7°, and the T1 tilt (p = 0.000), with a measurement difference of 5°. Table 1 lists the correlation coefficients between the radiographic parameters measured on the 36-inch radiograph and the cervical spine radiograph; all correlations were statistically significant at p < 0.001. Discussion The importance of restoration of sagittal spinal alignment on treatment effectiveness and clinical outcomes during deformity correction has been the subject of numerous studies [9,25,26]. Most of these studies focus on the importance of the restoration of lumbar lordosis and its relation to the pelvic incidence [8,9,24]. Only recently has the role of cervical sagittal measures in outcomes for spine deformity been studied [12][13][14][15][16][17][18][19][20]. A study by Smith et al. [27] showed that surgical correction of positive sagittal spinopelvic malalignment results in improvement of abnormal cervical hyperlordosis. In contrast, Oh et al. [16] showed that cervical lordosis is commonly seen in patients with adult spinal deformity and does not appear to normalize after thoracic corrective surgery. Patients with substantial compensatory cervical lordosis have been shown to be at increased risk of sagittal spinopelvic malalignment [17]. Also, a study of adult spinal deformity patients showed that a more proximal upper end vertebra was predictive of the presence of neck pain complaints [28]. Thus, with the increasing evidence of the role of cervical sagittal parameters in clinical outcomes and disability in adult spinal deformity, along with the need to control costs and limit patient exposure to radiation, this study was undertaken to determine whether a separate cervical spine radiograph, aside from the long 36-inch radiograph, is necessary to evaluate the sagittal parameters of the cervical spine. Data from this study showed that, except for the COG-C7 SVA, there were strong correlations between most cervical spine parameters taken from a dedicated cervical spine radiograph and those taken from a 36-inch radiograph. In addition, measures taken from the cervical radiograph and the 36-inch radiograph were similar, except for the C0-C7 sagittal Cobb and the T1 tilt. Whether these differences have any clinical relevance needs to be further studied, especially since the only published study examining cervical sagittal parameters and clinical outcomes showed weak correlations between patient-reported outcomes and the C2-C7 SVA and COG-C7 SVA [29]. In certain patients, dedicated cervical spine films may still be indicated to rule out malignancy or other pathologies. Further studies with multiple observers should also be done to determine the reliability of these cervical measures as determined from a 36-inch radiograph and a dedicated cervical spine film. Conclusions A dedicated cervical spine radiograph may not be necessary to evaluate the sagittal parameters of the cervical spine when a full-length 36-inch radiograph has already been obtained.
Authors' contributions LYC: acquisition of data, analysis and interpretation of data, drafting of the manuscript, critical revision of the manuscript. CLS: acquisition of data, analysis and interpretation of data, critical revision of the manuscript. JRD: conception and design, analysis and interpretation of data, acquisition of data, critical revision of the manuscript. SDG: conception and design, analysis and interpretation of data, acquisition of data, critical revision of the manuscript. All authors read and approved the final manuscript.
2017-11-03T01:48:34.093Z
2014-11-01T00:00:00.000
{ "year": 2016, "sha1": "a56637495c9a21fa8aa8977aa19eef5c37decc6c", "oa_license": "CCBY", "oa_url": "https://scoliosisjournal.biomedcentral.com/track/pdf/10.1186/s13013-016-0072-0", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "a56637495c9a21fa8aa8977aa19eef5c37decc6c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
6141312
pes2o/s2orc
v3-fos-license
Robust asymptotic stabilization of nonlinear systems with non-hyperbolic zero dynamics: Part I In this paper we present a general tool to handle the presence of zero dynamics which are asymptotically, but not locally exponentially, stable in problems of robust nonlinear stabilization by output feedback. We show how it is possible to design locally Lipschitz stabilizers under conditions which do not rely upon any observability assumption on the controlled plant, by thus obtaining a robust stabilizing paradigm which is not based on design of observers and separation principles. The main design idea comes from recent achievements in the field of output regulation and specifically in the design of nonlinear internal models. In this sense the results presented in this paper also complement in a non trivial way a certain number of works recently proposed in the field of output regulation by presenting meaningful conditions under which a locally Lipschitz regulator exists. The present work is complemented by a part II paper submitted to this conference ([11]) in which possible applications of the presented tool in the context of the robust stabilization and regulation of minimum-phase nonlinear systems and robust nonlinear separation principle are presented. Introduction The problem of output feedback stabilization in the large for nonlinear systems has been the subject of a remarkable research attempt in the last twenty years or so (see [11]). The attempt has been initially turned to identify systematic design procedures for state-feedback stabilization of specific classes of nonlinear systems. To this respect it is worth mentioning the research current focused on back-stepping design procedures for lower triangular nonlinear systems with [15] for the global case and [27] for the semiglobal case. Then, the attention of the researchers shifted to the identification of partial-state and output feedback stabilization algorithms mainly addressed in a semi-global sense due to intrinsic limitations characterizing this class of problems (see [22]). Within the number of research directions undertaken in this field, a special role has been played by nonlinear separation principles based on the design of an explicit full state observer (see [28]). The main limitation of this approach is, thought, the lack of a guaranteed level of robustness of the resulting controller mainly due to the absence of a well-established theory of robust nonlinear state observers. Furthermore, full state observability of the controlled plant is not, in principle, a necessary condition for output feedback stabilization. A step forward to overcome these limitations has been taken in [27] with the definition of Uniform Completely Observable (UCO) statefeedback control law, namely a stabilizing state dependent law which can be expressed as nonlinear function of the control input and output and their time derivatives. In this case the issue is not to estimate the full-state but rather to reproduce directly the stabilizing law through the estimation of the input-output derivatives. This, in [27], has been achieved by a mix of back-stepping and partial-state observation techniques yielding an output feedback stabilizer which is robust in the measure in which the UCO function does not depend on the uncertainties and the UCO control law is vanishing on the desired asymptotic attractor. 
Furthermore, the asymptotic features of the resulting closed-loop system are subject to the requirement that the initial UCO-based state-feedback closed-loop system be locally exponentially stable; practical stability must be accepted otherwise (see also [5] in this regard). The latter limitation may be overcome by designing a local nonlinear observer in the spirit of [28], thus resorting again to nonlinear separation principles. In doing so, however, the same limitations outlined before reappear. Exponential stability assumptions are recurrent in several contexts of the nonlinear control literature when studying asymptotic behaviors of nonlinear systems. Backstepping ([7], [27]), in which the backstepped control law is usually required to exponentially stabilize the controlled dynamics; singular perturbation ([16], [30]), in which the so-called boundary layer system is required to possess an exponentially stable attractor; averaging ([24], [30]), in which exponential stability of the so-called averaged system is needed; stabilization by output feedback ([11], [27]), in which hyperbolic minimum-phase assumptions are usually required: these are just a few contexts, involving problems of both synthesis and analysis of nonlinear systems, where the possibility of concluding asymptotic (and not only practical) results relies upon the requirement that certain dynamics fulfill exponential stability assumptions. Particular mention must be made of the design of output feedback stabilizers for nonlinear systems which can be written in so-called normal form (see [11]). In this context exponential stability of the so-called zero dynamics (that is, hyperbolic zero dynamics) is very often a crucial prerequisite if one wishes to address robust output feedback stabilization by means of locally Lipschitz regulators. In this paper we present a tool to handle the presence of not necessarily hyperbolic zero dynamics in the stabilization of nonlinear systems by output feedback. As a particular application, the tool is then used to extend the main "UCO" results presented in [27], thereby overcoming the obstacle of exponential stability in the backstepping procedure and in the design of the output-derivatives observer. More specifically, by means of mathematical tools developed in the context of nonlinear output regulation (see [18], [3]), we show how a dynamic output feedback control law which asymptotically stabilizes a compact attractor can be designed starting from a UCO state-feedback control law which does not necessarily stabilize the desired asymptotic attractor exponentially and which does not necessarily vanish on it. We will show that these limitations can be removed by means of design techniques aimed at robustly getting rid of interconnection terms between nonlinear dynamics arising in the stability analysis which do not vanish on the desired asymptotic attractor and which, as a consequence, cannot be dominated by high gain alone. This leads to a dynamic backstepping algorithm and an extended partial-state observer which embed solution techniques typical of internal-model-based design. This work is organized as follows. In the next section the framework and the general result are given. Section 3 then discusses the proposed framework and solution, properly framing the result within the existing literature. Section 4, articulated in three subsections, is focused on the application of the proposed tool in the UCO context presented in [27].
Section 5 then presents a few conditions, obtained by mild adaptation of results proposed in the output regulation literature, useful to construct the dynamic regulator which solves the problem discussed in Section 2. Finally, Sections 6 and 7 conclude with an example and final remarks.

Notation

For x ∈ IR^n, |x| denotes the Euclidean norm and, for C a closed subset of IR^n, |x|_C = min_{y∈C} |x − y| denotes the distance of x from C. For S a subset of IR^n, cl S and int S are the closure and the interior of S respectively, and ∂S its boundary. A class-KL function β(·, ·) satisfying |s| ≤ d ⇒ β(t, s) ≤ N e^{−λt}|s| for some positive d, N, λ is said to be a locally exponential class-KL function. For a locally Lipschitz system of the form ż = f(z), the value at time t of the solution passing through z_0 at time t = 0 will be written as φ_f(t, z_0) or, if the initial condition and the system are clear from the context, as z(t) or z(t, z_0). For a smooth system ẋ = f(x), x ∈ IR^n, a compact set A is said to be LAS(X) (respectively LES(X)), with X ⊂ IR^n a compact set, if it is locally asymptotically (respectively exponentially) stable with a domain of attraction containing X. By D(A) we denote the domain of attraction of A if the latter is LAS/LES for a given dynamics. For a function f : IR^n → IR^n and a differentiable real-valued function q : IR^n → IR, L_f q(x) denotes the Lie derivative at x of q along f. For a smooth system ẋ = f(x), x ∈ IR^n, the ω-limit set of a subset B ⊂ IR^n, written ω(B), is the set of all points x ∈ IR^n for which there exists a sequence of pairs (x_k, t_k), with x_k ∈ B and t_k → ∞ as k → ∞, such that lim_{k→∞} φ_f(t_k, x_k) = x.

The framework and the main result

The main goal of this paper is to present a design tool to handle the presence of asymptotically but not necessarily exponentially stable zero dynamics in robust output-feedback stabilization problems for nonlinear systems. Although the tool we are going to present lends itself to a significant variety of control scenarios, in order to keep the discussion confined while maintaining a certain degree of generality, we focus our attention on the class of smooth systems of the form

ẋ = f(w, x, y), ẏ = κ A y + B (q(w, x, y) + v),   (1)

with measurable output y_m = C y, y_m ∈ IR, in which the linear system (A, B, C) is assumed to have relative degree r with the pair (A, C) observable, κ is a positive design parameter and v is a control input. In the previous system the variable w ∈ IR^s represents an exogenous variable which is governed by

ẇ = s(w),   (2)

with W a compact set which is invariant for (2). As a particular case, the signals w(t) generated by (2) may be constant signals, i.e. s(w) ≡ 0, namely constant uncertain parameters taking values in the set W and affecting the system (1). In general, the variables w can be considered as exogenous signals which, depending on the considered control scenario, may represent references to be tracked and/or disturbances to be rejected.

Remark 1 As a consequence of the fact that W is a (forward and backward) invariant set for (2), the closed cylinder C_{n+r} := W × IR^{n+r} is invariant for (1), (2). Thus it is natural to regard system (1), (2) on C_{n+r} and to endow the latter with the relative topology. This will be done from now on when referring to system (1), (2). Analogously, the dynamics described by the first n equations of (1) and by (2) will be thought of as evolving on the closed set C_n := W × IR^n, which will be endowed with the relative topology.
⊳ We shall study the previous system under the following "minimum-phase" assumption.

Assumption There exists a compact set A ⊂ C_n which is locally asymptotically stable for the system

ẇ = s(w), ẋ = f(w, x, 0).   (3)

Under this assumption, there exists a compact set X ⊂ C_n such that A ⊂ int X and A is LAS(X) for system (3). In this framework we consider the output feedback stabilization problem, which consists of designing a locally Lipschitz regulator of the form

χ̇ = ϕ_κ(χ, y_m), v = ρ_κ(χ, y_m), χ ∈ IR^ν,   (4)

and, given arbitrary bounded sets Y ⊂ IR^r and N ⊂ IR^ν, a positive κ⋆, such that for all κ ≥ κ⋆ and for some B ⊂ IR^{ν+n} the set B × {0} is LAS(N × X × Y) for the closed-loop system (1), (4). The important point here is that ϕ_κ and ρ_κ must be locally Lipschitz. This restriction has strong practical motivations, such as sensitivity to noise and numerical and discrete-time implementation. The goal of the following part is to present a result regarding the solution of the robust stabilization problem formulated above. In order to ease the notation, in the following we shall drop in (1) the dependence on the variable w, which, in turn, will be thought of as embedded in the variable x (with the latter varying in the set C_n). With a mild abuse of notation, this allows us to rewrite systems (1) and (2) in the more compact form

ẋ = f(x, y), ẏ = κ A y + B (q(x, y) + v),   (5)

and system (3) as ẋ = f(x, 0). The existence of a locally Lipschitz regulator solving the problem at hand will be claimed under an assumption involving the ability of asymptotically reproducing the function q(x(t), 0), where x(t) is any solution of ẋ = f(x, 0) generated by taking initial conditions on A, by means of a properly defined locally Lipschitz system. The following definition formally states the required reproducibility conditions, which will then be used in the forthcoming Theorem 2.

Definition 1 (LER, rLER). A triplet (F(·), Q(·), A), where F : IR^m → IR^m and Q : IR^m → IR are smooth functions and A ⊂ IR^m is a compact set, is said to be Locally Exponentially Reproducible (LER) if there exists a compact set R ⊇ A which is LES for ż = F(z) and, for any bounded set Z contained in the domain of attraction of R, there exist an integer p, locally Lipschitz functions ϕ : IR^p → IR^p, γ : IR^p → IR, and ψ : IR^p → IR^p, with ψ a complete vector field, and a locally Lipschitz function T : IR^m → IR^p, such that

γ(T(z)) = Q(z) for all z ∈ R,   (6)

and for all ξ_0 ∈ IR^p and z_0 ∈ Z the solution (ξ(t), z(t)) of

ż = F(z), ξ̇ = ϕ(ξ) + ψ(ξ) Q(z)   (7)

satisfies |ξ(t) − T(z(t))| ≤ β(t, |ξ_0 − T(z_0)|), where β(·, ·) is a locally exponentially class-KL function. Furthermore, the triplet in question is said to be robustly Locally Exponentially Reproducible (rLER) if it is LER and, in addition, for all locally essentially bounded v(t), for all ξ_0 ∈ IR^p and z_0 ∈ Z the solution (ξ(t), z(t)) of

ż = F(z), ξ̇ = ϕ(ξ) + ψ(ξ) (Q(z) + v)   (9)

satisfies |ξ(t) − T(z(t))| ≤ β(t, |ξ_0 − T(z_0)|) + ℓ(‖v‖_∞),   (10)

where β(·, ·) is a locally exponentially class-KL function and ℓ is a class-K function. ⊳ We postpone to Section 3 a broad discussion of this definition and to Section 5 the presentation of sufficient conditions for a triplet to be rLER. With this definition at hand, we can formulate the following theorem, which fixes a framework in which the stabilization problem formulated above can be solved by means of a locally Lipschitz regulator. The proof of this theorem can be found in Appendix A. Theorem 2 Let A be LAS(X) for the system ẋ = f(x, 0) for some compact set X ⊂ C_n. Assume, in addition, that the triplet (f(x, 0), q(x, 0), A) is LER.
Then there exist a locally Lipschitz regulator of the form (4), a compact set R ⊇ A, a continuous function τ : R → IR^ν, and, for any compact sets Y ⊂ IR^r and N ⊂ IR^ν, a positive constant κ⋆, such that for all κ ≥ κ⋆ the set graph τ × {0} is LES(N × X × Y) for (5), (4), and the set graph τ|_A × {0} is LAS(N × X × Y) for (5), (4). Furthermore, if A is also LES for the system ẋ = f(x, 0), the set R can be taken equal to A. Remark 3 By going through the proof of the previous theorem, it turns out that the regulator (4) solving the problem at hand has the form of the candidate controller (50) constructed in Appendix A, in which κ is a sufficiently large positive number and (ϕ(·), ψ(·), γ(·)) are the locally Lipschitz functions associated to the triplet (f(x, 0), q(x, 0), A) in the definition of local exponential reproducibility. ⊳ A brief digression about the problem The structure of (1) and the associated problem, apparently very specific, are indeed recurrent in a number of control scenarios in which robust nonlinear stabilization is involved. We refer to Section 4.1 for the presentation of a few relevant cases where this occurs. For the time being it is interesting to note how the previous formulation presents two main peculiarities which make the problem at hand particularly challenging. The first is that the function q(w, x, y), coupling the x- and y-subsystems in (1), does not necessarily vanish on the desired attractor A × {0}; namely, the desired attractor A × {0} is not necessarily forward invariant for (1) in the case v ≡ 0. In this respect the first crucial property required of the regulator is the ability to reproduce, through the input v, the uncertain coupling term q(w, x, 0), by providing a not necessarily zero steady-state control input. This issue is intimately connected to arguments usually addressed in the output regulation literature (see [18], [3], [10], [26]), in which the goal is precisely to render attractive a set, on which the regulation objectives are met, which is not invariant for the open-loop system. The second peculiarity, apparently uncorrelated with the previous one, lies in the fact that the set A is assumed to be "only" asymptotically stable for (3), with no exponential properties required. In this respect the study of the interconnection (1) is particularly challenging, as it is not sufficient, in general, to decrease the linear asymptotic gain ([29]) between the "inputs" x and the "outputs" y of the y-subsystem (which is what one would do by increasing the value of κ, since the matrix A is Hurwitz) in order to infer asymptotic properties of the interconnection. Indeed, the presence of a not necessarily linear asymptotic gain between the "inputs" y and the "outputs" x of the x-subsystem requires a nontrivial design of the input v, which, intuitively, should be chosen to confer a certain locally non-Lipschitz ISS gain on the y-subsystem. The rich available literature on nonlinear stabilization already provides successful tools to solve the problem at hand if the previous two pathologies are dropped, namely if the assumption is strengthened by asking that the set A be also LES(X) for (3) and that the "coupling" term q(w, x, y) vanish on A × {0}. As a matter of fact, under the previous conditions it is well known that the set A × {0}, which is forward invariant for (1) with v = 0, can be stabilized by means of a large value of κ, as formalized in [27], [2].
In the case where A is not exponentially stable for (3) and/or the coupling term q(w, x, y) does not vanish on the desired attractor, the problem becomes challenging and more sophisticated choices of v must be envisaged. In particular, while preserving the local Lipschitz property of the regulator, the only conclusion which can be drawn if v ≡ 0 is that the origin is semiglobally practically stable in the parameter κ; that is, the trajectories of the system can be steered arbitrarily close to the set A × {0} by increasing the value of κ (see [27], [2], [18]). Even in the simpler scenario in which q(w, x, 0) ≡ 0 for all (w, x) ∈ A, a large value of κ is not sufficient to enforce the desired asymptotic behavior if the set A fails to be exponentially stable for (3). In this case the asymptotic properties of the system have been studied in [5], where it is shown how the trajectories are attracted by a manifold which, only in a particular case depending on the linear approximation of the system, collapses to the origin (see Theorem 6.2 in [5]). In these critical scenarios an appropriate design of the control input v becomes inevitable in order to compensate for the coupling term q(w, x, y), which cannot be simply dominated by a large value of κ. In particular, a first possible option, motivated by small-gain arguments and gain assignment procedures for nonlinear systems (see [14], [13]), is to design the control v in order to assign to the y-subsystem a certain nonlinear ISS gain, suitably identified according to small-gain criteria and to the asymptotic gain of the x-subsystem in (1). This option, however, necessarily leads to control laws which are not, in general, locally Lipschitz close to the compact attractor and which thus violate a basic requirement of the above problem. An alternative option for designing the control v is to take inspiration from nonlinear separation principles (see, among others, [27], [28], [2], [11], [8]), namely to design an appropriate state observer yielding an asymptotic estimate (ŵ, x̂, ŷ) of the state variables, and to asymptotically compensate for the coupling term q(w, x, y) by implementing a "certainty equivalence" control law of the form v = −q(ŵ, x̂, ŷ). Indeed, under suitable conditions, the tools proposed in [28] would allow one to precisely fix the details and to solve the problem at hand in a rigorous way. This way of approaching the problem, though, presents a number of drawbacks which substantially limit its applicability. First, the design of the observer clearly requires the formulation of suitable observability assumptions 1 on the controlled plant, and in particular on its (w, x) components, which are not in principle necessary for the stabilization problem to be solvable and which may fail to hold in a number of relevant cases. Moreover, according to the state of the art of the observer design literature ([8]), the design of the observer may be a challenging (if not impossible) task in the presence of uncertain parameters affecting the observed dynamics. Finally, it is worth noting how approaching the problem according to the previous design philosophy leads to inherently redundant control structures, by requiring the explicit estimate of the full state (and of possible uncertainties) in order to reproduce the signal q(w, x, y).
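To make these obstructions concrete, the following numerical sketch (not taken from the paper; all specific functions are illustrative choices of ours) simulates a hypothetical scalar instance of the structure (1)-(2): s(w) = 0, zero dynamics ẋ = −x³, which is asymptotically but not exponentially stable at the origin, coupling q(w, x) = w + x, which does not vanish at x = 0, and r = 1 with A = −1, B = C = 1. With v = 0 the closed loop settles on an offset equilibrium x⋆ solving κ x⋆³ = w + x⋆, so the residual error decays only like (w/κ)^(1/3): semiglobal practical stability in κ, exactly as discussed above.

import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical instance of structure (1)-(2): constant uncertainty w,
# non-hyperbolic zero dynamics xdot = -x^3, coupling q = w + x that does
# not vanish on the attractor A = {x = 0}, and v = 0 (high gain alone).
def rhs(t, state, kappa, w):
    x, y = state
    v = 0.0
    return [-x**3 + y, kappa * (-y) + (w + x) + v]

w = 0.5
for kappa in (10.0, 100.0, 1000.0):
    sol = solve_ivp(rhs, (0.0, 2000.0), [1.0, 0.0], args=(kappa, w),
                    method="BDF", rtol=1e-8, atol=1e-10)
    # the residual offset only shrinks like the cube root of 1/kappa:
    print(f"kappa={kappa:7.1f}  |x|_final={abs(sol.y[0, -1]):.4f}  "
          f"(w/kappa)^(1/3)={(w / kappa) ** (1 / 3):.4f}")

The cube-root scaling visible in the printout is precisely the locally non-Lipschitz gain that rules out plain high-gain domination and motivates the internal-model-based design of v.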
As opposed to the previous strategies, Theorem 2 provides a design procedure which does not rely upon domination of the interconnection term q(w, x, y) but rather on its asymptotic reconstruction, which, however, is not based upon the design of an observer of the state variables (w, x, y). In this respect the crucial property underlying the theorem is the local exponential reproducibility property, which, according to its definition, relies upon two requirements. The key first requirement, for a triplet (F, Q, A) to be LER, is that there exists a set R which contains A and which is LES for the autonomous system ż = F(z). The second crucial requirement characterizing the definition is that there exists a locally Lipschitz system of the form

ξ̇ = ϕ(ξ) + ψ(ξ) u_ξ, y_ξ = γ(ξ),   (13)

with input u_ξ and output y_ξ, such that system (7), modelling the cascade connection of the autonomous system ż = F(z) with output y_z = Q(z) with the system (13), has a locally exponentially stable set described by graph T|_R and, on this set, the output y_ξ equals y_z (see (6)). The domain of attraction of graph T|_R is required to be of the form Z × IR^p, with Z any compact set in the domain of attraction of R (note that, according to the definition, system (13) is allowed to depend on the choice of Z). In this respect the second requirement can be regarded as the ability of the system (13) to asymptotically reproduce the output function Q(z(t)) of the system ż(t) = F(z(t)), with initial conditions of the latter taken in Z. Note how the "output reproducibility" property required of system (13) does not hide, in principle, any kind of state observability property of the system ż = F(z) with output y_z = Q(z). In other words, system (13) must not be confused with a state observer of the z-subsystem, as its role is to reproduce the output function Q(z(t)) and not necessarily to estimate its state. As for the definition of robust LER, we only note that, in addition to the previous properties, it is required that system (9) exhibit an ISS property (without any special requirement on the asymptotic gain) with respect to the exogenous input v.

Output feedback from UCO state feedback in the presence of non-hyperbolic attractors

In this part we show how the theory of robust nonlinear separation principles presented in [27], [2] can be extended with the tools developed in the previous sections. In particular we are interested in extending the theory of [27] by showing how to design a pure output-feedback semiglobal controller stabilizing an attractor when it is known how the latter can be asymptotically (but not exponentially) stabilized by means of a Uniformly Completely Observable (UCO) state-feedback controller. Consider the smooth system

ẇ = s(w), ż = A(w, z, u), y = C(w, z),   (14)

in which u and y are respectively the control input and the measured output, and W is a compact set which is invariant for ẇ = s(w). As discussed in the previous section, the variables w capture the possible presence of parametric uncertainties and/or disturbances to be rejected and/or references to be tracked (in the latter case the measurable output y more likely plays the role of a regulation/tracking error). As done before, in order to simplify the notation, we drop the dependence on the variable w and write system (14) in the more convenient form

ż = A(z, u), y = C(z),   (15)

which is supposed to evolve on a closed invariant set C_m endowed with the subset topology (such a closed set being, for the form (14), the closed cylinder C_m := W × IR^m).
We recall (see [27]) that a function ū : IR^m → IR is said to be UCO with respect to (15) if there exist two integers n_y, n_u and a C¹ function Ψ such that, for each solution of the augmented system

ż = A(z, u), u^(n_u+1) = v,   (16)

we have, for all t where the solution makes sense,

ū(z(t)) = Ψ(y(t), y^(1)(t), ..., y^(n_y)(t), u(t), u^(1)(t), ..., u^(n_u)(t)),   (17)

where y^(i)(t) denotes the i-th derivative of y at time t. Motivated by [27], we shall study system (15) under the following two assumptions: a) there exist a smooth function ū : IR^m → IR and compact sets A ⊂ C_m and Z ⊂ C_m such that A is LAS(Z) for system (15) with u = ū(z); 2 b) ū(z) is UCO with respect to (15). In this framework we shall be able to prove, under suitable reproducibility conditions specified later, that the previous two assumptions imply the existence of a locally Lipschitz dynamic output-feedback regulator able to asymptotically stabilize the set A. The main theorem in this direction is detailed next. In this theorem we refer to an integer ℓ_u ≥ n_u, defined as the number such that, for the system (18) obtained by augmenting (15) with a chain of integrators on the input (u = ξ_0, ξ̇_i = ξ_{i+1} for i = 0, ..., ℓ_u − 1, ξ̇_{ℓ_u} = v), there exist smooth functions C_i such that the first n_y + 1 time derivatives of y can be expressed as y^(i) = C_i(z, ξ_0, ..., ξ_{ℓ_u}), i = 0, ..., n_y + 1. Theorem 4 Consider system (15) and assume the existence of a compact set A ⊂ C_m and of a smooth function ū(z) such that properties (a) and (b) specified above are satisfied. Assume, in addition, that the triplets (19) and (20) below are rLER. Then there exist a locally Lipschitz output-feedback regulator of the form (21) and a compact set B such that the set A × B is LAS(Z × N) for the closed-loop system (15), (21). This result extends Theorem 1.1 of [27] in three directions. First, note that we are dealing with stabilization of compact attractors for systems evolving on closed sets. This is a technical improvement on which, though, we do not wish to put the emphasis. Second, note that the UCO control law ū(z) is not required to vanish on the attractor A, which, as a consequence, is not required to be forward invariant for the open-loop system (15) with u ≡ 0. In this respect the proposed setting can be seen as also able to frame output regulation problems. Finally, the previous result claims that, by means of a purely locally Lipschitz output-feedback controller, we are able to restore the asymptotic properties of a UCO controller without relying upon exponential stability requirements on the latter, and robustly with respect to uncertain parameters. The last two extensions are conceptually very relevant and can be seen as particular applications of the tools presented in the previous sections. Following the main lines of [27], the proof of the claim is divided into two subsections, which contain results interesting in their own right.

Robust Asymptotic Backstepping

In this part we discuss how the UCO control law ū can be robustly back-stepped through the chain of integrators of (16). As commented above, the forthcoming proposition extends in a nontrivial way the results of [27], to the extent that ū(z) is not required to vanish on the attractor and A is not necessarily locally exponentially stable for the closed-loop system. We show that the existence of the static UCO stabilizer for (15) implies the existence of a dynamic stabilizer for (18) using the partial state ξ_i, i = 0, ..., ℓ_u, and the output derivatives y^(i), i = 1, ..., n_y. This is formally proved in the next proposition. Proposition 1 Consider system (18) under assumption (a) formulated above. Assume that the triplet (19) is rLER. Then there exist a positive ν, a compact set R ⊃ A, a continuous function τ : R → IR^{ν+ℓ_u+1}, and, for any compact sets Ξ′ ⊂ IR^{ℓ_u+1} and N′ ⊂ IR^ν, a locally Lipschitz regulator of the form (22), with ξ = col(ξ_0, ...
, ξ_{ℓ_u}), such that the sets graph τ and graph τ|_A are respectively LES and LAS for the closed-loop system (18), (22). Proof. Consider the change of variables ξ̃_i = ξ_i − ū^(i)(z), where the ū^(i)(z), i = 1, ..., ℓ_u, are recursively defined as ū^(i)(z) = L_{A(z,ū(z))} ū^(i−1)(z), with ū^(0)(z) = ū(z), together with a further change of variable producing coordinates ζ_i, where g is a positive design parameter and the a_i's are coefficients of a Hurwitz polynomial. By letting ζ := col(ζ_0, ..., ζ_{ℓ_u−1}), system (18) in the new coordinates reads as (24), where B = col(0, ..., 0, 1), C = (1, 0, ..., 0), Ã(z, Cζ) = A(z, ξ̃_0 + ū(z)) − A(z, ū(z)), H is a Hurwitz matrix and ℓ_g(·) is a smooth function satisfying (25). As the triplet (19) is rLER, there exists a compact set R ⊇ A which is LES for ż = A(z, ū(z)) with D(R) ⊇ D(A). Furthermore, since (19) is rLER and by the definition of rLER, the triplet (A(z, ū(z)), −L^(ℓ_u+1)_{A(z,ū(z))} ū(z), R) is rLER as well. We now consider the zero dynamics, with respect to the input u_1 and output ζ_{ℓ_u}, of system (24), given by (26). For this system it can be proved (by means of arguments which, for instance, can be found in [18]) that for any compact set M ⊂ IR^{ℓ_u} there exists a g⋆ > 0 such that for all g ≥ g⋆ the sets R × {0} and A × {0} are respectively LES(Z × M) and LAS(Z × M) for (26). Fix, once and for all, g ≥ g⋆. By the previous facts, by (25), by the fact that the triplet (A(z, ū(z)), −L_{A(z,ū(z))} ū(z), R) is rLER, and by Proposition 6 in Appendix B, it follows that the triplet ((26), ℓ_g(z, ζ, 0), R × {0}) is LER. Now fix u_1 as in (27), where κ is a positive design parameter and v is a residual control input. From the previous results it follows that system (24) with (27) fits into the framework of Theorem 2, from which it is possible to conclude that there exist a locally Lipschitz controller of the form (28), a continuous function τ′ : R × {0} → IR^p and, for any compact sets M_{ℓ_u} ⊂ IR and N′ ⊂ IR^p, a positive constant κ⋆, such that for all κ ≥ κ⋆ the set graph τ′ is LES for (24), (27) and (28). Furthermore, by properly adapting the arguments at the end of the proof of Theorem 2, it is also possible to prove that the restriction of this graph over A × {0} is LAS for (24), (27) and (28).

Extended Dirty-Derivatives Observer

In this part we present a result which allows one to obtain a pure output-feedback stabilizer once a partial state-feedback stabilizer (namely, a stabilizer processing the output and a certain number of its time derivatives) is known. Along the lines pioneered in [6] and [27], the idea is to replace the time derivatives of the output by appropriate estimates provided by a "dirty-derivatives observer" (in the terminology of [27]). In our context, though, we propose an "extended" dirty-derivatives observer, where the adjective "extended" emphasizes the presence of a dynamic extension of the classical observer structure, motivated by the need to handle possibly non-exponentially stable attractors in the partial-state feedback loop and the fact that, on this attractor, the measured output does not necessarily vanish. More specifically, we assume, for the system (15), the existence of a dynamic stabilizer of the form

ς̇ = ϕ̂(ς, y, y^(1), ..., y^(n_y)), ς ∈ IR^d, u = ρ̂(ς, y, y^(1), ..., y^(n_y)),   (32)

such that the following properties hold for the closed-loop system: a) there exist a compact set R ⊃ A and a continuous function τ̂ : R → IR^d such that the sets graph τ̂ and graph τ̂|_A are respectively LES(Z × H) and LAS(Z × H) for the closed-loop system (15), (32), for some compact set H ⊂ IR^d; b) there exist smooth functions C_i, i = 0, ..., n_y + 1, such that the output derivatives y^(i) of the closed-loop system (15), (32) can be expressed as y^(i) = C_i(z, ς), i = 0, ...
, n_y + 1, and the following holds:

ρ̂(ς, y, y^(1), ..., y^(n_y))|_{graph τ̂} = ū(z).

Remark 5 Note that the previous conditions are automatically satisfied under the assumptions of Section 4.1 and by virtue of the results presented in the previous section. As a matter of fact, bearing in mind (18) and Theorem 1 (and specifically (22)), the main outcome of the previous section has been the design of a dynamic controller of the form (34), in which, according to (17) and to the definition of ℓ_u,

ū(z) = Ψ(y, y^(1), ..., y^(n_y), ξ_0, ..., ξ_{n_y}).

Replacing the output derivatives by the observer estimates then yields the regulator (21), such that the set A × B is LAS(Z × N) for the closed-loop system (15), (21).

Sufficient conditions for exponential reproducibility

Having established with Theorems 2 and 4 the interest of local exponential reproducibility for solving the problem of (robust) output feedback stabilization via a locally Lipschitz regulator, in this section we present a number of results which are useful for testing whether a triplet (F, Q, A) is rLER (and thus LER) and, eventually, for designing the functions (ϕ, ψ, γ). As commented above, the first requirement behind the definition is the existence of a compact set R ⊇ A which is LES for ż = F(z). In this respect we present a result claiming that the existence of a set R which is LES for ż = F(z) is automatically guaranteed if the set A is LAS for ż = F(z). Thus, put in the context of Theorem 2, the first requirement of the definition is not restrictive at all. Details of this fact are reported in Lemma 1 below, whose proof can be found in [20]. We now pass to analyze the second crucial requirement behind the definition of rLER, namely the existence of locally Lipschitz functions (ϕ, ψ, γ) and T such that conditions (6) and (10) are satisfied for system (9). Since the property in question is related to the ability of reproducing any signal Q(z(t)) generated by the system ż(t) = F(z(t)) by taking its initial conditions in the set R, it is not surprising that the theory of nonlinear output regulation, and specifically the design techniques proposed in the related literature to construct internal models, can be successfully used for this purpose (see [17]). In particular, in the following we present two techniques which are directly taken, with minor adaptations, from the output regulation literature. First, we follow [4] and present a method which draws its inspiration from high-gain design techniques for nonlinear observers. Specifically, it is possible to state the following proposition, which comes from Lemma 1 and from minor adaptations 3 of the main result of [4] (see the quoted work for the proof). Proposition 3 Let F : IR^m → IR^m and Q : IR^m → IR be given smooth functions and let A ⊂ IR^m be a given compact set which is LAS for ż = F(z). Assume, in addition, that there exist an integer m̄ > 0, a compact set S such that A ⊂ int S, and a locally Lipschitz function f : IR^m̄ → IR such that the differential equation

L_F^m̄ Q(z) = f(Q(z), L_F Q(z), ..., L_F^{m̄−1} Q(z))   (37)

holds on S. Then the triplet (F, Q, A) is rLER.
In particular, (ϕ, ψ, γ) can be taken as functions ϕ : IR^m̄ → IR^m̄, ψ : IR^m̄ → IR^m̄, γ : IR^m̄ → IR defined according to the high-gain construction of [4] (a copy of the chain formed by Q and its first m̄ − 1 Lie derivatives along F, corrected by output-injection terms weighted by increasing powers of the gain), with γ(ξ) = ξ_1, where L is a positive design parameter to be taken sufficiently large. Remark 6 It is well known (see, for instance, [8]) that a sufficient condition for a pair (F, Q) to satisfy property (37) locally with respect to a point z_0 is that its observability distribution at z_0 (see [9]) has dimension m at z = z_0, namely that the system ż = F(z) with output y_z = Q(z) satisfies the observability rank condition (in the terminology of [9]) at z_0. Such a condition represents an observability condition for the system ż = F(z) with output y_z = Q(z) which, however, is far from necessary for the property of rLER to hold. In this respect it must be stressed again that the property of local exponential reproducibility does not involve any state observability property of the system ż = F(z) with output y_z = Q(z), but rather a property of output reproducibility. ⊳ Clearly the high-gain technique for designing output observers behind Proposition 3 is not the only tool which can be used to design the functions (ϕ, ψ, γ) for a triplet which is rLER. In order to enrich the available tools, we now present a result motivated by the theory of state observers pioneered in [23] (and developed in [1], [25]), which, in turn, has inspired the technique for designing internal models developed in [18] in the context of nonlinear output regulation (see also [21]). In this respect it is interesting to observe that if, instead of asking (ϕ, ψ, γ) to be locally Lipschitz, these functions were required to be only continuous, the theoretical tools presented in [18] would be sufficient to prove that any smooth triplet (F, Q, A) is rLER if A is LAS for ż = F(z). In particular, if (H, G) ∈ IR^{p×p} × IR^{p×1}, p > 0, is an arbitrary controllable pair with H a Hurwitz matrix, and R is the set which is LES for ż = F(z) (whose existence is guaranteed by Lemma 1, since A is LAS for these dynamics), then it turns out that if the functions ϕ and ψ in (9) are chosen as ϕ(ξ) = Hξ and ψ(ξ) = G, then property (10) holds true with (see Proposition 1 in [18])

T(z) = ∫_{−∞}^{0} e^{−Hs} G Q(φ_F(s, z)) ds.   (39)

Furthermore, if p is chosen so that p ≥ 2m + 2 and the matrix H is taken so that σ(H) ∈ {ζ ∈ C : ℜ(ζ) < −ℓ} \ S, where S ⊂ C is a set of zero Lebesgue measure and ℓ is a sufficiently large positive number, then there always exists a class-K function ρ(·) such that the following partial injectivity condition holds (see Proposition 2 in [18]):

|Q(z_1) − Q(z_2)| ≤ ρ(|T(z_1) − T(z_2)|) for all z_1, z_2 ∈ R,   (40)

and, in turn, the latter guarantees the existence of a continuous function γ : IR^p → IR such that also property (6) holds true (see Proposition 3 in [18]). As shown in [21] (see Proposition 4 there), a possible expression for such a γ(·) is given by (41). The previous arguments, however, are not conclusive if system (13) is required to be locally Lipschitz. In this respect an extra condition (see the forthcoming (42), (43)) is needed to guarantee the existence of a locally Lipschitz γ, as precisely proved in [12]. The main results in this direction are presented in the next proposition (whose proof is a minor adaptation of the main result of [12]). Proposition 4 Let O be an open bounded set which is backward invariant for ż = F(z). Let Ω(z) be the distribution defined in (42). If there exists a constant c ≤ m such that the rank condition (43) holds, then for any compact set R ⊂ O, with R ⊇ A, the function ρ in (40) can be taken linear and the function γ in (41) is locally Lipschitz. As a consequence the triplet (F, Q, A) is rLER.
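Before the remarks that follow, a small numerical sketch may help fix ideas about the construction behind Proposition 3. The triplet below is a toy of our own choosing (not the paper's example): F(w, x) = (0, −x³), whose attractor A = {x = 0} × W is asymptotically but not exponentially stable, and Q(w, x) = w + x. Condition (37) holds with m̄ = 2, and a two-dimensional internal model of high-gain-observer type, driven by u_ξ = Q(z(t)) as in the cascade (7), reproduces Q(z(t)) at its output γ(ξ) = ξ_1; the gains and the saturation of f below are one plausible reading of the construction, not the exact formulas of [4].

import numpy as np
from scipy.integrate import solve_ivp

# Toy triplet: F(w, x) = (0, -x^3), Q(w, x) = w + x. Along F,
# eta1 = Q and eta2 = L_F Q = -x^3 satisfy d(eta2)/dt = 3 x^5,
# i.e. f(eta2) = -3*sign(eta2)*|eta2|^(5/3): condition (37) with m_bar = 2.
def f37(eta2):
    return -3.0 * np.sign(eta2) * np.abs(eta2) ** (5.0 / 3.0)

L, lam1, lam0 = 10.0, 2.0, 1.0          # s^2 + lam1*s + lam0 is Hurwitz

def cascade(t, s):                      # the cascade (7): z feeds Q(z) to xi
    w, x, xi1, xi2 = s
    u = w + x
    return [0.0, -x**3,
            xi2 + L * lam1 * (u - xi1),
            np.clip(f37(xi2), -50.0, 50.0) + L**2 * lam0 * (u - xi1)]

sol = solve_ivp(cascade, (0.0, 30.0), [0.5, 1.0, 0.0, 0.0],
                rtol=1e-9, atol=1e-12, max_step=0.01)
err = sol.y[0] + sol.y[1] - sol.y[2]    # Q(z(t)) - gamma(xi(t))
print(f"|Q - gamma| at t = 30: {abs(err[-1]):.1e}")   # decays toward zero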
Remark 7 Bearing in mind Remark 6 as well, it is worth noting how condition (43) does not require the observability distribution (38) to be full rank, namely the system ż = F(z) with output y_z = Q(z) to be observable in any sense. Rather, condition (43) can be regarded as a regularity condition on the observable part of the system ż = F(z) with output y_z = Q(z). ⊳ Remark 8 By going through the technical details in [12], it is possible to observe that the requirement about the existence of an open bounded set O which is backward invariant for ż = F(z) is uniquely motivated by the need of having the function T(z) in (39) well-defined and C². In this respect the requirement about the existence of a bounded invariant set O can be substituted by the requirement that the function T(z) in (39) be well-defined and C² for all z ∈ int R for a proper choice of the matrix H. In this case, by the details in [12], it turns out that the rank condition (43) must be substituted accordingly. ⊳ Remark 9 The requirement in Proposition 4 about the existence of a bounded set O which is backward invariant for ż = F(z) may be practically overcome by properly "clipping" the function F(z) outside the set A. As a matter of fact, since the property of rLER is related to the ability of reproducing the signals Q(z(t)) generated by the system ż(t) = F(z(t)) by taking its initial conditions in a neighborhood of A, it turns out that any triplet (F_c, Q_c, A), with F_c : IR^m → IR^m, Q_c : IR^m → IR functions which agree with F and Q on some compact set S containing A in its interior, can be used in place of (F, Q, A) to check whether the latter is rLER and eventually to design the functions (ϕ, ψ, γ). In this respect, the presence of a bounded backward invariant set O may be forced by properly clipping the vector field F(z) to zero outside A. Alternatively, bearing in mind the previous remark, the function T may be forced to be well-defined and C² by properly clipping the function Q(z) outside A. For reasons of space we omit the technical details rigorously proving the previous intuition and we refer the reader to the example in Section 6 for an illustration. ⊳

Example

Consider the system (44): ẇ = 0 together with x- and y-dynamics, with control input u ∈ IR and measured output y ∈ IR, in which w is a constant signal taking values in the interval W := [w̲, w̄]. By defining C_1 := W × IR and the compact set A as indicated, we address the problem of stabilizing the set A × {0}, which is invariant for the previous system with u = 0, by means of y-feedback. The set A is LAS for the zero dynamics of (44) with domain of attraction D(A) = C_1 and, by defining u = −κy + v, the previous problem fits into the framework of Section 2. Note, in particular, that by only increasing the value of κ while setting v = 0 the desired asymptotic stabilization objective cannot be met. As a matter of fact, different equilibria characterize the system according to the value of w. For w = 0, the y-component of the system has three equilibria, given by (0, √(1/κ³), −√(1/κ³)), which, for w ≠ 0, collapse into a single equilibrium, the solution of w + y = κ³y³, which tends to 0 as κ tends to ∞. So, with v = 0, only practical stabilization of the set A × {0} in the parameter κ can be achieved. In order to apply Theorem 2 and obtain asymptotic stabilization of the set A × {0} by means of dynamic feedback, we let z := col(w, x) and check the exponential reproducibility of the triplet (F(z), Q(z), A).
To this purpose, let S be a compact set chosen so that A ⊂ int S, and note that, by Lemma 1, there exists a set R, A ⊆ R ⊂ int S, which is LES with D(R) = D(A). Both Propositions 3 and 4 can be used to prove that the triplet in question is rLER and, indeed, to design the functions (ϕ, ψ, γ). Following Proposition 3, it is easy to check that condition (37) holds (with m̄ = 2), and thus the triplet (F(z), Q(z), A) is rLER. According to the proposition, the functions (ϕ, ψ, γ) can be designed as in (45), where λ_0 and λ_1 are such that s² + λ_1 s + λ_0 is a Hurwitz polynomial, L is a sufficiently large design parameter and f_c(·) is any smooth bounded function agreeing with the function of condition (37) on the relevant compact set. The functions (ϕ, ψ, γ) can be designed according to Proposition 4 as well. Note, however, that this proposition cannot be applied as such, due to the absence of a bounded invariant set containing A (indeed, finite escape times occur in backward time for initial conditions outside A). To overcome this obstacle, bearing in mind Remark 9, pick a smooth function a : IR → IR_{≥0} such that a(s) = 1 for all ∛w̲ − 1/2 ≤ s ≤ ∛w̄ + 1/2 and a(s) = 0 for all s ≤ ∛w̲ − 1 and s ≥ ∛w̄ + 1, and consider the system ż = a(x)F(z). For this system the bounded set O defined accordingly, which is open with respect to the subset topology induced by C_1, is invariant. Thus, Proposition 4 can be applied to the triplet (a(x)F(z), Q(z), A). In this specific case a direct computation (in which ∗ denotes an irrelevant term) shows that the rank condition (43) holds, which implies, by Proposition 4, that the triplet (a(x)F(z), Q(z), A) is rLER. But, as the functions F(z) and a(x)F(z) agree on S, the fact that the triplet (a(x)F(z), Q(z), A) is rLER can be shown to imply that the triplet (F(z), Q(z), A) is rLER as well. Thus, according to Proposition 4, the functions (ϕ, ψ, γ) can also be designed as in (46), where (H, G) ∈ IR^{5×5} × IR^{5×1} is an arbitrary controllable pair with the matrix H such that σ(H) ∈ {ζ ∈ C : ℜ(ζ) < −ℓ} \ S, where S ⊂ C is a set of zero Lebesgue measure and ℓ is a sufficiently large positive number, the function T is defined as in (39), and ρ is a sufficiently large positive number. As a result, output-feedback stabilization of the set A × {0} can be achieved, bearing in mind Theorem 2 and the subsequent remark, by a dynamic controller of the form indicated in Remark 3, with the functions (ϕ, ψ, γ) designed as in (45) or (46) and κ a sufficiently large number.

A Proof of Theorem 2

By the definition of LER of the triplet (f(x, 0), q(x, 0), A), there exists a set R ⊇ A which is LES for ẋ = f(x, 0) and, for any compact set X_1 ⊂ D(R), there exist an integer ν, locally Lipschitz functions ϕ : IR^ν → IR^ν, ψ : IR^ν → IR^ν, γ : IR^ν → IR, and a smooth function T : IR^n → IR^ν, such that (47) holds and, for all ξ_0 ∈ IR^ν and x_0 ∈ X_1, the solution (ξ(t), x(t)) of (48) satisfies (49), where β(·, ·) is a locally exponentially class-KL function. As A is LAS(X) and R ⊇ A, we have D(R) = D(A), and thus the previous properties hold, in particular, with X_1 = X. Furthermore, in case A is LES for ẋ = f(x, 0), it is possible to show 4 that (47) and (49) hold also with R replaced by A, possibly with a different class-KL function β(·, ·). Assume, without loss of generality (as (A, B, C) has relative degree r and (A, C) is observable), that the pair (A, C) is in the canonical observability form and that B = (0, ..., 0, 1)^T, and choose, as candidate controller, the system (50), which, by the structure of A and B, is of the form (4), since B^T A y = a_r y_m for some real number a_r.
Consider now the change of variables (51), and note that such a change of variables is well-defined for all y and η, as ψ is complete. Using (47) and the identities above, it turns out that the closed-loop dynamics (1), (50) in the new coordinates can be described as the feedback interconnection of a system of the form (52) and a system of the form

ẏ = κAy + B(q(x, 0) + γ(χ)) + l̃_2(x, χ, y),   (53)

in which f̃(x, y), l̃_1(x, χ, y) and l̃_2(x, χ, y) are locally Lipschitz functions satisfying f̃(x, 0) = 0, l̃_1(x, χ, 0) = 0 and l̃_2(x, χ, 0) = 0 for all x ∈ C and χ ∈ IR^ν, and with l̃_1 and l̃_2 possibly dependent on X. Let N be an arbitrary compact set of IR^ν and denote by Ξ the image of N under the function (51) (note that Ξ may depend on X). Since system (52) with y = 0 is nothing but (48), it turns out that graph T|_R is LES(X × Ξ) for system (52) with y = 0. Furthermore, by (47), the term q(x, 0) + γ(χ) in (53) is identically zero for (x, χ) ∈ graph T|_R. From these facts and the results in [27], [2], it follows that for any compact set Y ⊂ IR^r there exists a κ⋆ > 0 such that for all κ ≥ κ⋆ the set graph T|_R × {0} is LES(X × Ξ × Y) for (52), (53). By taking τ = T|_R, the previous result proves the first part of the theorem, namely that graph τ × {0} is LES(X × Y × N) for the closed-loop system (1), (50). We now prove the second claim of the theorem, namely that graph τ|_A × {0} is LAS(X × Y × N). Let κ ≥ κ⋆ be fixed and note that, as graph τ × {0} uniformly attracts the closed-loop trajectories leaving X × Y × N, Proposition 5 yields the equality of the corresponding omega-limit sets, in which ω(S) denotes the omega-limit set of a set S associated with the closed-loop system. We now prove that if (x, y, χ) ∈ ω(graph τ × {0}) then necessarily x ∈ ω(R), in which ω(R) denotes the omega-limit set of the set R associated with the system ẋ = f(x, 0). Indeed, consider a sequence {x_n, y_n, χ_n}, with (x_n, χ_n) ∈ graph τ (so, in particular, x_n ∈ R) and y_n ≡ 0, and a divergent sequence {t_n}, such that lim_{n→∞} x(t_n, x_n) = x̄, where x(t, x_n) and χ((x_n, χ_n), t) denote the solution of (55) with initial conditions (x_n, χ_n). Since x_n ∈ R, this implies x̄ ∈ ω(R). Now, considering the system given by the first dynamics in (55) and using the fact that A ⊆ R uniformly attracts the trajectories of this system leaving X, Proposition 5 in Appendix B yields that ω(X) = ω(R) = ω(A) ⊆ A. By this and the previous arguments we conclude that the x-components of the closed-loop trajectories are uniformly attracted by ω(A) ⊆ A. From this the result follows by standard arguments. ⊳

B Auxiliary results

Proposition 5 Let ż = f(z) (56) be a given smooth system and let S be a compact set which is forward invariant for (56) and which uniformly (in the initial condition) attracts the trajectories of (56) originating in a compact set D ⊃ S. Then ω(D) = ω(S) ⊆ S. Proof. First of all note that ω(D) and ω(S) exist and that, by definition, ω(S) ⊆ ω(D). Furthermore, ω(S) ⊆ S, as S is forward invariant for (56). To prove that ω(D) = ω(S), suppose it is not, namely that there exist a z̄ ∈ ω(D) and an ǫ > 0 such that |z̄|_S ≥ ǫ. As S uniformly attracts the trajectories of (56) originating from D, there exists a t_{ǫ/2} > 0 such that |z(t, z_0)|_S ≤ ǫ/2 for all z_0 ∈ D and for all t ≥ t_{ǫ/2}. Moreover, by definition of ω(D), there exist sequences {z_n}_{0}^{∞} and {t_n}_{0}^{∞}, with z_n ∈ D and lim_{n→∞} t_n = ∞, such that lim_{n→∞} z(t_n, z_n) = z̄. This, in particular, implies that for any ν > 0 there exists an n_ν > 0 such that |z(t_n, z_n) − z̄| ≤ ν for all n ≥ n_ν.
But, by taking ν = min{ǫ/2, ν_1}, with ν_1 such that t_n ≥ t_{ǫ/2} for all n ≥ n_{ν_1}, this contradicts the fact that S uniformly attracts the trajectories of the system originating from D. ⊳ Proposition 6 Consider a system of the form (57) and assume that there exist a compact set A ⊂ IR^{n_1} and a smooth function τ : IR^{n_1} → IR^{n_2} such that the set
On the Entropy Function and the Attractor Mechanism for Spherically Symmetric Extremal Black Holes

In this paper we elaborate on the relation between the entropy formula of Wald and the "entropy function" method proposed by A. Sen. For spherically symmetric extremal black holes, it is shown that the expression for the extremal black hole entropy given by A. Sen can be derived from the general entropy definition of Wald, without the help of the rescaling treatment of the AdS_2 part of the near-horizon geometry of extremal black holes. In our procedure, we only require that the surface gravity approaches zero, and it is easy to understand the Legendre transformation of f, the integral of the Lagrangian density over the horizon, with respect to the electric charges. Since the Noether charge form can be defined in an "off-shell" form, we define a corresponding entropy function, with which one can discuss the attractor mechanism for extremal black holes with scalar fields.

I. INTRODUCTION

The attractor mechanism for extremal black holes has been studied extensively in the past few years in supergravity theory and superstring theory. It was initiated in the context of supersymmetric BPS black holes [1,2,3,4,5,6] and generalized to more general cases, such as supersymmetric black holes with higher order corrections [7,8,9,10] and non-supersymmetric attractors [11,12,13,14,15]. Recently, A. Sen has proposed a so-called "entropy function" method for calculating the entropy of n-dimensional extremal black holes, where the extremal black holes are defined to be spacetimes which have the near-horizon geometry AdS_2 × S^{n−2} and the corresponding isometry [16,17,18,19]. It states that the entropy of this kind of extremal black hole can be obtained by extremizing the "entropy function" with respect to some moduli on the horizon, where the entropy function is defined as 2π times the Legendre transformation (with respect to the electric charges) of the integral of the Lagrangian over the spherical coordinates on the horizon, evaluated in the near-horizon field configuration. This method does not depend upon supersymmetry and has been applied or generalized to many solutions in supergravity theory, such as extremal black objects in higher dimensions, rotating extremal black holes, various non-supersymmetric extremal black objects and even near-extremal black holes [20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41]. In general, for spherically symmetric extremal black holes in a theory with Lagrangian L = L(g_ab, R_abcd, Φ_s, A^I_a), the near-horizon geometry of these black holes has the form AdS_2 × S^{n−2} [17,18]. Due to the SO(1, 2) × SO(n − 1) isometry of this geometry, the field configuration has the following form. The metric can be written as

ds² = v_1 (−ρ² dτ² + dρ²/ρ²) + v_2 dΩ²_{n−2},   (1.1)

where v_1, v_2 are constants which stand for the sizes of AdS_2 and S^{n−2}. Some other dynamical fields, such as the scalar fields and the U(1) gauge fields, are also taken to be constant: Φ_s = u_s and F^I_{ρτ} = e^I. The magnetic-type fields are also fixed, with magnetic charges p_i. Then, for this configuration, defining

f(v_1, v_2, u_s, e^I; p_i) = ∫ dx² ∧ ··· ∧ dx^{n−1} √(−g) L,   (1.2)

where the integration is taken on the horizon and {x², ···, x^{n−1}} are angular coordinates on S^{n−2}, the constant moduli can be fixed via the equations of motion

∂f/∂v_1 = 0, ∂f/∂v_2 = 0, ∂f/∂u_s = 0, ∂f/∂e^I = q_I,   (1.3)

where the q_I are electric-like charges for the U(1) gauge fields A^I_a.
To relate the entropy of the black holes to these definitions, one defines f_λ as (1.2) with the Riemann tensor part of L multiplied by a factor λ, and then one finds a relation between f_λ and the Wald formula for spherically symmetric black holes [44]: S_BH = −2π ∂f_λ/∂λ|_{λ=1}. Considering the structure of the Lagrangian, one finds (1.4); when the equations of motion are satisfied, the entropy of the black holes turns out to be S_BH = 2π(e^I q_I − f). Therefore, one can introduce the "entropy function" (1.5) for the extremal black holes, obtained by carrying out an integral of the Lagrangian density over S^{n−2} and then taking the Legendre transformation with respect to the electric fields e^I. For fixed electric charges q_I and magnetic charges p_i, the fields u_s, v_1 and v_2 are determined by extremizing the entropy function with respect to the variables u_s, v_1 and v_2. The entropy of the extremal black holes is then given by the extremum of the entropy function, obtained by substituting the values of v_1, v_2 and u_s back into the entropy function. In addition, let us notice that if the moduli fields u_s depend only on the charges q_I and p_i, the attractor mechanism is manifest, and the entropy is a topological quantity. This is a very simple and powerful method for calculating the entropy of such kinds of extremal black holes. In particular, one can easily find the corrections to the entropy due to higher derivative terms in the effective action. However, we notice that this method is established in a fixed coordinate system (1.1). If one uses another set of coordinates for the AdS_2 part, instead of the coordinates {ρ, τ}, it seems that one cannot define an entropy function as in (1.5), because the function f is not invariant under the coordinate transformation. In addition, the reason why, in order to get the entropy of black holes, one should take the Legendre transformation with respect to the electric charges, but not the magnetic charges, seems unclear in this procedure. Some authors have pointed out that the entropy function E resulting from this Legendre transformation of the function f with respect to the electric charges transforms as a function under electric-magnetic duality, while the function f does not [37]. But it is not easy to understand the Legendre transformation with respect to the angular momentum J in the rotating attractor cases [32]. There might be a more general formalism for the entropy function, in which the Legendre transformation can be naturally understood. In this paper, we will elaborate on these issues in the "entropy function" method and show that a general formalism of the "entropy function" method can be extracted from the black hole entropy definition due to Wald et al. [44,45,46]. In this procedure, we only require that the surface gravity of the black hole approaches zero. Our entropy expression reduces to the expression of A. Sen if we choose the same coordinates as in [17,18]. Extremal black holes are different objects from non-extremal ones due to the different topological structures of the Euclidean sector [49,50,51]. An extremal black hole has vanishing surface gravity and no bifurcation surface, so the Noether charge method of Wald cannot be directly used [44]. Thus, in this paper we regard extremal black holes as the extremal limit of non-extremal black holes, as in [17,18,42]. That is, we will first consider non-extremal black holes and then take the extremal limit.
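As a concrete check of the recipe just summarized (an illustration of Sen's method on a textbook case, not a computation from this paper), the following sympy sketch applies (1.2)-(1.5) to four-dimensional Einstein-Maxwell theory with L = R/(16π) − F²/4 in G = 1 units and the near-horizon ansatz (1.1). Extremizing the entropy function recovers v_1 = v_2 = q²/(4π) and the extremal entropy q²/4, where q is the charge normalized as in (1.3).

import sympy as sp

v1, v2, e, q = sp.symbols('v1 v2 e q', positive=True)

# Near-horizon ansatz (1.1) in 4d: sqrt(-g) = v1*v2*sin(theta),
# R = -2/v1 + 2/v2, and -(1/4)F_{mn}F^{mn} = e**2/(2*v1**2) for F_{rho,tau} = e.
f = 4*sp.pi*v1*v2*((-2/v1 + 2/v2)/(16*sp.pi) + e**2/(2*v1**2))   # eq. (1.2)

e_star = sp.solve(sp.Eq(sp.diff(f, e), q), e)[0]                 # eq. (1.3)
E = sp.simplify(2*sp.pi*(q*e_star - f.subs(e, e_star)))          # eq. (1.5)

sol = sp.solve([sp.diff(E, v1), sp.diff(E, v2)], [v1, v2], dict=True)[0]
print(sol)                        # {v1: q**2/(4*pi), v2: q**2/(4*pi)}
print(sp.simplify(E.subs(sol)))   # q**2/4, the extremal entropy

Note that E is linear in v_1 here, so the extremum is a genuine stationary point rather than a minimum; solving the stationarity conditions, rather than numerically minimizing, is the robust route.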
In this sense, the definitions of Wald are applicable. The paper is organized as follows. In Section II, we briefly review the entropy definition of Wald and give the required formulas. In Section III, we give the near-horizon analysis for extremal black holes and derive the general form of the entropy. In Section IV, we define the entropy function and discuss the attractor mechanism for black holes with various moduli fields. The conclusion and discussion are given in Section V.

II. THE DEFINITION OF WALD

In diffeomorphism covariant theories of gravity, Wald showed that the entropy of a black hole is a kind of Noether charge [44,45]. In this paper, we will use Wald's method to define the entropy functions for spherically symmetric black holes. Assume the diffeomorphism covariant Lagrangian of the n-dimensional space-time (M, g_ab) is

L = ǫ L(g_ab, R_abcd, Φ_s, F^I_ab),   (2.1)

where we have put the Lagrangian in the form of a differential form and ǫ is the volume element. R_abcd is the Riemann tensor (since we are mainly concerned with extremal black holes, we need not consider covariant derivatives of the Riemann tensor). {Φ_s, s = 0, 1, ···} are scalar fields, {A^I_a, I = 1, ···} are U(1) gauge potentials, and the corresponding gauge fields are F^I = dA^I. We will not consider Chern-Simons terms, as in [18]. The variation of the Lagrangian density L can be written as δL = E_ψ δψ + dΘ, where Θ = Θ(ψ, δψ) is an (n−1)-form, called the symplectic potential form; it is a local linear function of the field variation (we have denoted the dynamical fields by ψ = {g_ab, Φ_s, A^I_a}). E_ψ corresponds to the equations of motion for the metric and the other fields. Let ξ be any smooth vector field on the space-time manifold; then one can define a Noether current form as

J[ξ] = Θ(ψ, L_ξ ψ) − ξ · L,   (2.3)

where "·" means the inner product of a vector field with a differential form, while L_ξ denotes the Lie derivative of the dynamical fields. A standard calculation gives dJ[ξ] = −E_ψ L_ξ ψ, which implies that J[ξ] is closed when the equations of motion are satisfied. This indicates that there is a locally constructed (n−2)-form Q[ξ] such that, whenever ψ satisfies the equations of motion, we have J[ξ] = dQ[ξ]. In fact, the Noether charge form Q[ξ] can be defined in a so-called "off-shell" form, so that the Noether current (n−1)-form can be written as [46] J[ξ] = dQ[ξ] + ξ^a C_a, where C_a is locally constructed out of the dynamical fields in a covariant manner. When the equations of motion hold, C_a vanishes. For general stationary black holes, Wald has shown that the entropy of the black hole is a Noether charge [44] and may be expressed as

S_BH = 2π ∮_H Q[ξ],   (2.8)

where ξ is the Killing field which vanishes on the bifurcation surface of the black hole. It should be noted that the Killing vector field has been normalized here so that the surface gravity equals "1". Furthermore, it was shown in [45] that the entropy can also be put into the form

S_BH = −2π ∮_H E_R^{abcd} ǫ_ab ǫ_cd,   (2.9)

where ǫ_ab is the binormal to the bifurcation surface H, while E_R^{abcd} is the functional derivative of the Lagrangian with respect to the Riemann tensor with the metric held fixed. This formula is purely geometric and does not include the surface gravity term. In this paper, since we will treat a limit procedure with the surface gravity approaching zero, we will not normalize the Killing vector so that the surface gravity equals one. So we use the formula (2.8) to define the entropy of black holes, as in [17,18,42]. For an asymptotically flat, static, spherically symmetric black hole, one can simply choose ξ = ∂_t = ∂/∂t.
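As a quick numerical sanity check of formula (2.9) (our own illustration, not part of the original derivation): for pure Einstein gravity, L = R/(16π), the functional derivative is E_R^{abcd} = (g^{ac} g^{bd} − g^{ad} g^{bc})/(32π), and contracting twice with the binormal in an orthonormal frame gives the constant integrand −1/(8π), so that (2.9) returns the Bekenstein-Hawking value S = Area/4.

import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])       # orthonormal frame at the horizon
ginv = np.linalg.inv(eta)

# binormal of the bifurcation surface, eps_{ab} = (dt ^ dr)_{ab}:
eps = np.zeros((4, 4))
eps[0, 1], eps[1, 0] = 1.0, -1.0

# E_R^{abcd} = dL/dR_{abcd} for L = R/(16*pi):
E = (np.einsum('ac,bd->abcd', ginv, ginv)
     - np.einsum('ad,bc->abcd', ginv, ginv)) / (32.0 * np.pi)

integrand = np.einsum('abcd,ab,cd->', E, eps, eps)
print(integrand, -1.0 / (8.0 * np.pi))     # equal, so S = -2*pi*(-1/(8*pi))*Area = Area/4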
For the Lagrangian (2.1), the variations with respect to the U(1) gauge fields, the scalar fields and the metric g_ab give the corresponding equations of motion, respectively. The symplectic potential form is a sum of gravitational, scalar-field and gauge-field contributions. Let ξ be an arbitrary vector field on the space-time; the Lie derivatives of the fields along ξ are given in (2.14). Substituting these Lie derivatives into the symplectic potential form, we find that the first line of the resulting expression gives the Noether charge form, while the second line, together with the terms in ξ · L in Eq. (2.3), gives the constraint corresponding to the equations of motion for the metric. For example, the first term in the second line, combined with the scalar-field terms in ξ · L, gives the energy-momentum tensor for the scalar fields. Similarly, the second term in the second line enters the energy-momentum tensor for the U(1) gauge fields in the equations of motion for the metric. The last line gives the constraint corresponding to the equations of motion for the U(1) gauge fields. Thus, we find the explicit Noether charge form (2.18), in which Q_F denotes the gauge-field part. The "···" terms are not important for the following discussion, so we simply drop them for now; a discussion of these additional terms is given at the end of the next section. In particular, the constraint for the U(1) gauge fields is simply (2.21). The term Q_F in Q was not discussed explicitly in the earlier works of Wald et al. [44,45,46]. This is because the Killing vector vanishes on the bifurcation surface and the dynamical fields are assumed to be smooth there. However, in general, the U(1) gauge fields are singular on the bifurcation surface, so one has to perform a gauge transformation such that ξ^a A′_a vanishes on this surface, and with it Q_F. This gauge transformation modifies the data of the gauge potential at infinity and introduces an additional potential-charge term Φ δQ into the dynamics of the charged black holes from infinity, where Φ = ξ^c A_c|_H is the electrostatic potential on the horizon of the charged black hole and Q is the electric charge [47]. Another treatment is the following: we only require smoothness of the projection of the gauge potential onto the bifurcation surface, i.e., of ξ^a A_a instead of the gauge potential itself, so Q_F will generally not vanish on the bifurcation surface, and then Φ = ξ^c A_c|_H enters the law of black hole mechanics without the help of a gauge transformation [48]. Similarly, in the next sections of this paper we only require that the projection of the gauge potential onto the bifurcation surface be smooth. Since our final result will not depend on the gauge potential, the gauge transformation mentioned above will not affect our discussion; one can perform such a gauge transformation if necessary. In this paper, however, we will merely use the explicit form of the Noether charge (n−2)-form, and we will not discuss the first law. Certainly, it would be interesting to give a general discussion of the thermodynamics of these black holes; relevant discussion can be found in a recent paper [43].

III. ENTROPY OF EXTREMAL BLACK HOLES

In this section, we will use the formulas above to derive the general entropy function for static spherically symmetric extremal black holes. Assume that the metric for these black holes is of the form

ds² = −N(r) dt² + N(r)^{−1} dr² + γ(r) dΩ²_{n−2},   (3.1)

where N, γ are functions of the radial coordinate r and dΩ²_{n−2} is the line element of the (n−2)-dimensional sphere. The horizon r = r_H corresponds to N(r_H) = 0.
If the equations of motion are satisfied, the constraint C_a = 0, and we have J[ξ] = dQ[ξ]. Consider a near-horizon region ranging from r_H to r_H + Δr. Integrating this relation over a constant-time slice of the region gives

∫ J[ξ] = Q[ξ]|_{r_H+Δr} − Q[ξ]|_{r_H}.    (3.2)

If ξ is a Killing vector, then Θ = 0 and J[ξ] = −ξ · L, so we arrive at

Q[ξ]|_{r_H+Δr} − Q[ξ]|_{r_H} = −∫_{r_H}^{r_H+Δr} ξ · L.    (3.4)

Taking ξ = ∂_t (since we consider asymptotically flat space-times, N(r) has the property lim_{r→∞} N(r) = 1, such that ∂_t has unit norm at infinity), we have ∇_[a ξ_b] = (1/2)N′ ε_ab, and the gravitational part of the Noether charge integrates to (1/2)N′(r)B(r), where the function B(r) is defined as the integral appearing in Eq. (2.8), without the 2π factor, evaluated on the sphere of radius r. Note that the Q_F terms on the right-hand side of Eq. (3.4) can be written as Δr q_I ẽ^I, where A^I_t = (∂_t)^a A^I_a, ẽ^I ≡ F^I_rt(r_H), and the U(1) electric-like charges are defined as

q_I = −(1/(n−2)!) ∮_r (∂L/∂F^I_ab) ε_{ab a_1···a_{n−2}} dx^{a_1} ∧ ··· ∧ dx^{a_{n−2}}.    (3.8)

They do not change with the radius r; this is ensured by the Gauss law. Note that there is an integration over the sphere part in (3.8); therefore only F^I_rt in F^I_ab is relevant, so that we can simply write F^I_ab(r_H) as −ẽ^I ε_ab. Considering −2(ẽ^I)² = ẽ^I ε_ab ẽ^I ε^ab, we can trade the derivative with respect to F^I_ab for a derivative with respect to ẽ^I. Substituting this result into the definition of the electric charges, we find

q_I = −(∂/∂ẽ^I) ∮_{r_H} [L/(2(n−2)!)] ε^{ab} ε_{ab a_1···a_{n−2}} dx^{a_1} ∧ ··· ∧ dx^{a_{n−2}} = ∂f̃(r_H)/∂ẽ^I.    (3.10)

Here f̃(r_H) is defined in Eq. (3.12); it is nothing but the integral of the Lagrangian density L over the (n−2)-sphere at r_H. The last term on the right-hand side of Eq. (3.4) can likewise be written as −Δr f̃(r_H). Thus we arrive at

(1/2)N′B|_{r_H+Δr} − (1/2)N′B|_{r_H} = Δr q_I ẽ^I − Δr f̃(r_H).    (3.14)

Considering the limit Δr → 0, we find

d/dr [(1/2)N′(r)B(r)]|_{r_H} = q_I ẽ^I − f̃(r_H).    (3.15)

So far, we have not specialized to extremal black holes; therefore, the above results hold for general non-extremal black holes. For the extremal black hole limit with N′(r_H) → 0 while N″(r_H) ≠ 0, from (3.15) we have

B(r_H) = (2/N″(r_H)) (q_I ẽ^I − f̃(r_H)).    (3.16)

Since we view the extremal black holes as the extremal limit of non-extremal black holes, the entropy formula of Wald is applicable for the extremal black holes. Note that B(r_H) is nothing but the integration in Eq. (2.8) without the 2π factor. Thus, the entropy of the extremal black holes can be expressed as

S = 2π B(r_H) = (4π/N″(r_H)) (q_I ẽ^I − f̃(r_H)).    (3.17)

This is one of the main results of this paper. It is easy to see that this entropy form is very similar to the one obtained in the "entropy function" method of A. Sen.
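As an independent check of (3.17), one can evaluate it on the four-dimensional extremal Reissner-Nordström solution. The computation below is our illustration; the conventions L = R/(16π) − F²/4 and G = 1 are our assumptions, and R vanishes on the extremal horizon because the AdS₂ and S² radii coincide there.

\[
N(r) = \Big(1-\frac{r_H}{r}\Big)^2,\quad \gamma(r) = r^2,\quad
r_H^2 = \frac{q^2}{4\pi},\qquad N''(r_H) = \frac{2}{r_H^2},
\]
\[
\tilde e = F_{rt}(r_H) = \frac{q}{4\pi r_H^2},\qquad
\tilde f(r_H) = 4\pi r_H^2\Big[\frac{R}{16\pi}-\frac{F^2}{4}\Big]_{r_H}
= \frac{q^2}{8\pi r_H^2} = \frac{1}{2},
\]
\[
S = \frac{4\pi}{N''(r_H)}\big(q\,\tilde e - \tilde f(r_H)\big)
  = 2\pi r_H^2\Big(\frac{q^2}{4\pi r_H^2}-\frac{1}{2}\Big)
  = \pi r_H^2 = \frac{A_H}{4},
\]

the expected Bekenstein-Hawking value for the extremal Reissner-Nordström black hole.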
But some remarks are in order: (i) We have not stressed that the extremal black holes have the near-horizon geometry AdS₂ × S^{n−2} as in [17,18], although the vanishing surface gravity together with the metric assumption (3.1) may coincide with the definition through the near-horizon geometry. Note, however, that some extremal black holes have near-horizon geometries of the form AdS₃ times some compact manifold X. In our procedure, the near-horizon geometry need not be AdS₂ × S^{n−2}; the only requirement is a vanishing surface gravity. Therefore our procedure can also be used to discuss extremal black holes whose near-horizon geometry is of the form AdS₃ × X, by simply modifying the metric assumption in Eq. (3.1). (ii) Our result is explicitly invariant under coordinate transformations, as can easily be seen from the above derivation; we have not used the treatment of Eq. (1.4) employed by A. Sen. (iii) The Legendre transformation with respect to the electric charges appears naturally in this procedure, while a Legendre transformation with respect to the magnetic charges does not appear. (iv) If we choose a set of coordinates as in [17,18], our expression for the entropy is exactly the same as the one given by A. Sen. This can be seen as follows.

In the extremal limit N′(r_H) = 0, we can rewrite the metric near the horizon as

ds² ≈ −(1/2)N″(r_H)(r − r_H)² dt² + [2/(N″(r_H)(r − r_H)²)] dr² + γ(r_H) dΩ²_{n−2}.    (3.18)

Redefining the coordinates as

ρ = r − r_H,  τ = (1/2)N″(r_H) t,    (3.19)

the near-horizon metric can be further rewritten as

ds² = (2/N″(r_H)) (−ρ² dτ² + dρ²/ρ²) + γ(r_H) dΩ²_{n−2}.    (3.20)

The components of the gauge fields F^I_rt and f̃ depend on the coordinates; in this new set of coordinates they are

e^I ≡ F^I_ρτ = (2/N″(r_H)) ẽ^I,  f = (2/N″(r_H)) f̃.

Since the entropy is invariant under coordinate transformations, we find, in the coordinates {τ, ρ, ···},

S = 2π (q_I e^I − f).    (3.24)

This is nothing but the entropy formula given by A. Sen for extremal black holes. Since the factor 2/N″(r_H) in (3.17) disappears in this new set of coordinates, the entropy formula becomes simpler and cleaner looking; this is an advantage of this set of coordinates. But we stress that it is precisely the factor 2/N″(r_H) that makes the entropy expression (3.17) invariant under coordinate transformations. (v) Finally, the function f̃(r_H) is evaluated on the solution of the equations of motion, i.e., all the fields {g_ab, Φ_s, F^I_ab} are on shell. For example, if the near-horizon geometry has the form

ds² = v₁ (−ρ² dτ² + dρ²/ρ²) + v₂ dΩ²_{n−2},    (3.25)

and the equations of motion are satisfied, then we can express the entropy in the form (3.24). Here v₁ and v₂ should equal 2/N″(r_H) and γ(r_H), and N, γ, and the other fields should satisfy the equations of motion.

One may worry that the conserved charge form Q in Eq. (2.18) is not complete: for example, there is an additional term ε_{ab a_1···a_{n−2}} ξ^a ∇^b D(φ) if the action contains a dilaton coupling D(φ)R. In general, the conserved charge form can be written, up to such additions, as

Q[ξ] = Q_grav[ξ] + Q_F[ξ] + ξ^a W_a + Y(ψ, L_ξ ψ) + dZ,

where W_a, Y, and Z are smooth functions of the fields and their derivatives, and Y = Y(ψ, L_ξ ψ) is linear in the field variation [45,46]. Obviously, Y and dZ do not contribute to the near-horizon integration (3.2) if ξ is a Killing vector. It might seem that ξ^a W_a gives an additional contribution to this integration. For the extremal case, this contribution vanishes due to the smoothness of W_a and the vanishing surface gravity. For example, the term corresponding to the dilaton coupling mentioned above vanishes in the near-horizon integration, so the final form of the entropy (3.17) does not change. For the non-extremal case, this term does appear in the near-horizon integration if we add ξ^a W_a to Q. However, if necessary, we can always change the Lagrangian L to L + dµ, where µ is an (n−1)-form, and put the conserved charge form Q into the form (2.18) without the "···" terms. This change of Lagrangian affects neither the equations of motion nor the entropy of the black holes [45,46]. Then formulas (3.4) and therefore (3.15) remain formally correct in the non-extremal case after taking into account this ambiguity of the Lagrangian and therefore of f̃(r_H). But this ambiguity has no contribution to Eq. (3.17), which describes the entropy of the black hole in the extremal case.

IV. ENTROPY FUNCTION AND ATTRACTOR MECHANISM

In this section we show further that one can define an entropy function with the help of the entropy definition of Wald. The Noether current can always be written as J[ξ] = dQ[ξ] + ξ^a C_a, where C_a corresponds to the constraints. The constraint for the U(1) gauge fields is (2.21). If the equations of motion for the U(1) gauge fields hold, this constraint vanishes. In this section we assume that the equations of motion for the U(1) gauge fields are always satisfied, but not those for the metric and the scalar fields. In other words, we will not consider the constraint for the gauge fields.
Assuming that the metric of the extremal black holes has the form (3.1), on the horizon r = r_H of an extremal black hole one has N(r_H) = 0 and N′(r_H) = 0, but N″(r_H) ≠ 0. Thus the near-horizon geometry is fixed once N″(r_H) and γ(r_H) are specified. This means that taking the near-horizon geometry "off shell" corresponds to leaving the parameters N″(r_H) and γ(r_H) arbitrary. In the near-horizon region ranging from r_H to r_H + Δr, the off-shell current (2.7) yields the analogue of the balance (3.14) with the metric and scalar constraints retained. With this, we define our "entropy function" as 2π times the near-horizon charge combination obtained in this way, i.e., the analogue of B(r_H) in Eq. (3.16) with the fields off shell. If the equations of motion are satisfied, this E obviously reduces to the entropy of extremal black holes given in the previous section; therefore this definition is meaningful. Further, recalling that the equations of motion for the U(1) gauge fields have been assumed to hold throughout, and following the calculations of the previous section, we have

E = (4π/N″(r_H)) (q_I ẽ^I − f̃(r_H)).

This expression looks the same as the one given in the previous section. However, a crucial difference from the one in the previous section is that here the fields need not be solutions of the equations of motion. To obtain the entropy of the extremal black holes, we have to solve the equations of motion or, equivalently, extremize the entropy function with respect to the undetermined values of the fields on the horizon. It is easy to find that the entropy function has the form

E = E(N″, γ, u_s; q_I),    (4.7)

where, for simplicity, we have denoted N″(r_H) and γ(r_H) by N″ and γ, respectively, and u_s are the horizon values of the scalar fields. The derivatives u′_s do not appear because the kinetic terms of the scalar fields in the action always carry a vanishing factor N(r_H) = 0 on the horizon. Similarly, γ′(r_H) and γ″(r_H) do not appear because the components of the Riemann tensor containing them have to be contracted with the vanishing factors N(r_H) or N′(r_H). Certainly, this point can be directly understood from the near-horizon geometry in Eq. (3.20). So, extremizing the entropy function amounts to

∂E/∂N″ = 0,  ∂E/∂γ = 0,  ∂E/∂u_s = 0,

and the electric charges are determined by

q_I = ∂f̃(r_H)/∂ẽ^I.

The entropy of the black hole is obtained by solving these algebraic equations and substituting the solutions for N″, γ, u_s back into the entropy function. If the values of the moduli fields on the horizon are determined by the charges of the black hole, then the attractor mechanism is manifest, and the entropy takes the form of a topological quantity fully determined by the charges [17,18]. These definitions become simpler if one chooses the coordinates {τ, ρ, ···}, so that one can define e^I = (2/N″(r_H)) ẽ^I and f = (2/N″(r_H)) f̃; then the entropy function can be written as

E = 2π (q_I e^I − f),

where e^I are the gauge field components on the horizon in this set of coordinates, and q_I = ∂f/∂e^I are the electric charges, which do not change under the coordinate transformation. So, in this set of coordinates, our entropy function reduces to the entropy function defined by A. Sen [17,18].
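As a minimal illustration of this extremization (ours, with the same Einstein-Maxwell conventions as in the check after Eq. (3.17)), in the coordinates {τ, ρ} one finds

\[
f(v_1,v_2,e) = \int d\theta\, d\phi\,\sqrt{-g}\,L
= \frac{v_1-v_2}{2} + \frac{2\pi v_2 e^2}{v_1},
\qquad
\mathcal{E} = 2\pi\big(q e - f\big),
\]
\[
\frac{\partial\mathcal{E}}{\partial e}
= \frac{\partial\mathcal{E}}{\partial v_1}
= \frac{\partial\mathcal{E}}{\partial v_2} = 0
\;\Longrightarrow\;
v_1 = v_2 = \frac{q^2}{4\pi},\qquad e = \frac{q}{4\pi},\qquad
S = \mathcal{E}\big|_{\text{ext}} = \frac{q^2}{4}.
\]

All near-horizon moduli are fixed by the charge alone, which is the attractor mechanism in its simplest setting, and the entropy agrees with the direct evaluation of (3.17) above.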
V. CONCLUSION AND DISCUSSION

In this paper, we have shown that the "entropy function" method proposed by A. Sen can be extracted from the general black hole entropy definition of Wald [44]. For a spherically symmetric extremal black hole as described by the metric (3.1), we find that the entropy of the black hole can be put into a form which is similar to the one given in Refs. [17,18]. To obtain this entropy form, we have regarded the extremal black hole as the extremal limit of a non-extremal black hole, i.e., we have required (and only required) that the surface gravity approaches zero. In a special set of coordinates, i.e., {τ, ρ, ···}, this entropy is exactly of the same form as the one given by A. Sen.

We have obtained a corresponding entropy function (4.7). After extremizing this entropy function with respect to N″, γ, and the other scalar fields, one gets the entropy of the extremal black holes. Similarly, in the coordinates {τ, ρ, ···}, our entropy function reduces to the form of A. Sen. Note that in our procedure we have neither used the treatment of rescaling the AdS₂ part of the near-horizon geometry of extremal black holes, nor especially employed the form of the metric in the coordinates {τ, ρ, ···} as in Eq. (1.1). In this procedure, it can be clearly seen why the electric charge terms e^I q_I appear in the entropy function, but not the magnetic charge terms. Recently it was shown for some near-extremal black holes, with BTZ black holes being a part of the near-horizon geometry, that the "entropy function" method works as well [40]. A similar discussion for non-extremal D3, M2 and M5 branes has also been given in [41]. Therefore it is interesting to see whether the procedure developed in this paper works for near-extremal black holes. In this case, N′(r_H) is infinitesimal instead of vanishing, and Eq. (3.15) then gives

S = 2π B(r_H) = (4π/N″(r_H)) (q_I ẽ^I − f̃(r_H)) [1 + N′(r_H)/(N″(r_H) r*)]⁻¹,

where r* = B(r_H)/B′(r_H) approximately equals 1/(n−2) times the radius of the black hole if the higher-derivative corrections in the effective action are small. Thus, once the ambiguity in f̃(r_H) discussed above becomes very small, and for large r* (sometimes this corresponds to large charges), the entropy function method gives an approximate entropy for near-extremal black holes, but the attractor mechanism is destroyed [15]. In addition, it is also interesting to discuss extremal rotating black holes with the procedure developed in this paper. Certainly, in this case, the Killing vector which generates the horizon should be of the form χ = ∂_t + Ω_H ∂_φ instead of ξ = ∂_t. A term including the angular momentum J will then naturally appear in the associated entropy function [32]. This issue is under investigation.
Sophorolipid-Based Oligomers as Polyol Components for Polyurethane Systems

Due to reasons of sustainability and conservation of resources, polyurethane (PU)-based systems with preferably neutral carbon footprints are in increased focus of research and development. The proper design and development of bio-based polyols are of particular interest since such polyols may have special property profiles that allow the novel products to enter new applications. Sophorolipids (SL) represent a bio-based toolbox for polyol building blocks to yield diverse chemical products. For a reasonable evaluation of the potential for PU chemistry, however, further investigations in terms of synthesis, derivatization, reproducibility, and reactivity towards isocyanates are required. It was demonstrated that SL can act as crosslinker or as plasticizer in PU systems depending on the employed stoichiometry. (ω-1)-hydroxyl fatty acids can be derived from SL and converted successively to polyester polyols and PU. Additionally, (ω-1)-hydroxyl fatty acid azides can be prepared indirectly from SL and converted to A/B type PU by Curtius rearrangement.

Introduction

The main components of the well-known and very versatile polyurethanes (PU) [1,2] are polyisocyanates and polyols [3,4]. The morphology and thus the properties of PU are mainly determined by the intrinsic structure of the polyol component(s) since their content typically exceeds 60 wt.% of the PU [5][6][7]. The polyols most commonly utilized are polyether, polyester, and polycarbonate polyols [5,8]. All types of polyols are usually produced by means of petrochemical resources. Considering the need for sustainable economic activity [9], the development of polyols based on renewable resources and the research on their applicability for PU systems are of increased significance [10][11][12][13][14][15][16][17][18]. Among bio-based polyol platforms, sophorolipids (SL) are well documented and show high potential to provide a versatile toolbox for polyol building blocks [19][20][21][22][23][24][25][26]. Specific advantages of SL are their non-pathogenic production organisms, possessing high productivity and an efficient rate of substrate conversion [21]. SL consist of sophorose, a hydrophilic di-glucose, coupled to a hydrophobic hydroxyl fatty acid (HFA) by a glycosidic bond (Figure 1(1B)). The resulting SL is amphiphilic and thus readily utilized as bio-based surfactant in, e.g., surface technology [24,27,28]. The most effective organism to produce SL is Starmerella bombicola, which is capable of precipitating up to 400 g L−1 SL into the fermentation medium [20]. Moreover, structural variation is feasible by feeding different lipid derivatives with C16 and C18 fatty acids, which incorporate in particularly good yields [29,30]. The most common SL derivatives are lactonic SL (LSL) and acidic SL (ASL), as depicted in Figure 1 [24,31].
Dihydroxyl alkyl glycosides are structurally similar to SL and were already converted successfully to PU by reaction with diisocyanates [14,32]. In these procedures, the double bond of the fatty acid moiety is transformed into a vicinal diol structure, which is subsequently converted to PU by reaction with the isocyanate (NCO) groups of isophorone diisocyanate (IPDI). Consequently, SL should perform similarly to give PU. Moreover, LSL has six hydroxyl (OH) groups (two acetylated primary OH groups and four secondary OH groups) and ASL seven OH groups (two acetylated primary OH groups and five secondary OH groups) per molecule. This means that a successive conversion with (di)isocyanates is generally possible because the secondary OH groups are readily available for the reaction with NCO groups, whereas the primary OH groups become available after, e.g., saponification [33]. The completely deacetylated ASL should also allow a certain selectivity since primary OH groups react with NCO groups about three times faster than secondary OH groups [34][35][36][37][38]. Utilizing appropriate reaction conditions, primary OH groups can purposely be consumed first in order to produce linear and thus thermoplastic PU [38][39][40][41][42][43]. In contrast, complete conversion of all OH groups leads to significant crosslinking and therefore to a thermosetting material. Considering the Flory-Stockmayer relation [44], the degree of crosslinking and thus the nature of the thermosetting material can be tailored to a certain extent, e.g., by appropriate choice of the NCO/OH ratio (index) [44][45][46]. In summary, SL provide a high number of potential reaction sites with different reactivity toward NCO groups and should therefore be suited as crosslinking agent and potentially as a linear building block [32]. Additionally, SL are suited as internal emulsifier in PU dispersion technology because of their amphiphilic nature [47][48][49]. HFA from renewable resources like castor oil can be converted to polyester polyols or directly to PU [17,[50][51][52]. Typical for such HFA is that the OH group is localized in the center of the molecule. The resultant polyester or PU systems consequently contain dangling chains that reduce crystallinity and glass transition temperature [52,53]. In contrast to that, ω- or (ω-1)-HFA yield systems without side chains and with lower ester or urethane group concentrations. Therefore, dangling chains are not expected and lipophilicity is increased, suggesting enhanced hydrolysis resistance and improved interactions with hydrophobic surfaces [54]. ω- or (ω-1)-HFA with varying chain lengths can be obtained by fragmentation of SL [26] and successively converted chemically [55] or enzymatically [56][57][58][59][60] to yield polyesters via A/B self-condensation. Note that successive conversion with diisocyanates to high molecular weight PU requires two-fold OH termination, which can be accomplished, e.g., by initiation with or addition of suitable diols [55,61]. Additionally, the acid moiety of HFA can be converted to azide
in order to accomplish Curtius rearrangement [62], yielding A/B type polyurethanes [51,[63][64][65][66][67][68]. The present work shows that LSL can act as crosslinker or as plasticizer in PU systems depending on the employed stoichiometry. (ω-1) HFA-based polyester polyols and (ω-1) HFA-based PU systems are feasible; however, low amounts of glucose impurities seem to limit the versatility of the reaction since there is a substantial extent of crosslinking and branching. Additionally, we demonstrate the fundamental access to A/B type PU systems by applying the Curtius rearrangement to (ω-1) HFA azides obtained from LSL, confirming the Curtius approach to A/B type PU systems of other groups [51,[64][65][66][67][68][69].

Measurements and Equipment

Size exclusion chromatography (SEC) was carried out on a PSS Polymer SECcurity system based on Agilent 1260 hardware modules equipped with a SECcurity isocratic pump, vacuum degasser, refractive index and UV-Vis detector (254 nm), column oven, and a standard autosampler. A styrene-divinylbenzene copolymer column (SDV linear XL (100-3,000,000 Da)) with 5 µm particle size and 1000 Å porosity was calibrated with polystyrene ReadyCal Kit (PSS Polymer) standards. Measurements were carried out in THF at 30 °C with a flow rate of 1.0 mL min−1. Integration of the signals was performed via the software package "WinGPC Unity" from PSS Polymer. 1H- and 13C-NMR spectra were recorded using a Bruker Ascend 400 spectrometer (400 MHz) at room temperature using CDCl3 or THF-d8 as solvent
(sample conc. = 0.05 mg µL−1). Tetramethylsilane (TMS) was used in all experiments as internal standard. ATR-IR spectra were recorded using a Bruker Platinum-ATR equipped with a MIR-RT-DLaTGS detector and a KBr radiation plate. OPUS software of Bruker Optic GmbH was used for data handling. Spectra were recorded at room temperature, applying 24 scans and a resolution of 4 cm−1. Background spectra were recorded in ambient atmosphere prior to each measurement. For reaction monitoring, inline IR spectroscopy was used applying a Thermo Fisher Nicolet iS 50 spectrometer equipped with a probe coupler and the ZnSe-ATR-tube FlexiSpec® from art photonics GmbH, Berlin. The HgCdTe detector was cooled with liquid N2. The software "Macros Basic" enabled 20 automated scans per minute at a resolution of 2 cm−1. Background spectra were recorded in dry N2 prior to each measurement. "OMNIC" software was used to calculate peak areas. Differential scanning calorimetry (DSC) was conducted applying a Q2000 DSC from TA Instruments consisting of an autosampler and an RCS90 cooling unit calibrated by an indium standard. Samples were placed in Tzero aluminum crucibles. SL-containing samples were heated to 150 °C and cooled to −60 °C twice using a rate of 10 K min−1. The results of the second cycle were used for evaluation. X-ray diffraction (XRD) was conducted with a Bruker D2 Phaser 2nd Gen equipped with a Cu tube (λ = 1.54184 Å), fixed slit 0.4 mm, 4° grid, and a Lynx 1D Modus detector as single measurements in order to obtain data for crystallinity (angle of incidence: 5-80°; rotation: 0-15 rpm; increment: 0.03°; time per step: 0.02 s; scan type: Coupled Two Theta Theta; scan modus: Continuous PSD fast). Samples were applied as films or ground powder. For evaluation, Bruker's Diffrac EVA software was used. High-performance liquid chromatography (HPLC) was conducted using a Shimadzu Nexera XR equipped with a BM-20A communications bus module, two LC-20AD XR pumps, a SIL-D0ACXR autosampler, a column oven, and a VWR-ELSD 80 detector. Acid number, hydroxyl number, and the content of NCO groups were determined according to DIN EN ISO 660 2009, DIN 53240-2 (ASTM E1899-08), and DIN EN ISO 11909 2007, respectively. Automated titration was conducted with a TitroLine 7000 titration unit from SI Analytics. Shore A hardness was measured five times for each sample according to DIN EN ISO 868 applying SAUTER HBA 100-0 and SAUTER TI-A0 with a contact pressure of 5 kg, calibrated with a Durometer Test Block Kit AHBA-01 (SAUTER). Tensile tests were conducted according to DIN EN ISO 527-1/-2 (ASTM D 638) on a Shimadzu Autograph AG-X plus tensile testing machine (50 mm min−1). TrapeziumX (Shimadzu, Kyoto, Japan) software was used for evaluation. The samples were polymer films produced by casting the liquid polymer (2000 µm) on a Teflon plate with a squeegee. The preparation of SL solutions was as follows: 100 mg (0.145 mmol) of (1A) (LSL) or 90.5 mg (0.145 mmol) of (1B) (ASL) was transferred at room temperature to 2 mL of solvent and the solubility evaluated optically.

2.3. Synthesis of 17-Hydroxyoctadec-9-enoic Acid ((ω-1) HFA)

(ω-1) HFA (2) (Figure 2) was produced by a modification of published procedures in the absence of carcinogenic dioxane, but at the expense of reaction time [26,71]. A typical protocol was as follows. LSL (1A), derived from Starmerella bombicola (97% purity) [19], was dissolved in 5 M NaOH solution. To this solution, further 5 M NaOH was added dropwise until a constant pH was reached.
After pH adjustment to 3.5 with diluted HCl, the intermediate (1B) was crystallized at 7 °C and purified by lyophilization to give a white powder. A total of 5.0 g (0.008 mol) (1B) was placed in a 100 mL round-necked flask and dissolved under stirring in 50 mL 1 M HCl. The reaction proceeds at 80 °C, turning the clear liquid into a turbid solution comprising particles and further into a biphasic system with yellowish, oily droplets. The reaction progress was monitored by HPLC. After completion of the hydrolysis, the pH was adjusted to 3.5 by addition of diluted NaOH. The product was purified by solvent extraction (distilled H2O/CHCl3) and removal of excessive solvent in vacuo. The product (2) accumulated as a yellowish oil in 83.5% yield. Since dimers and oligomers were identified, (2) was stirred in 5 M NaOH at 80 °C for a further 16 h. Repeated solvent extraction and removal applying the same conditions yielded a brownish oil.

Synthesis of (ω-1) HFA-Based Polyester Diol

OH-terminated (ω-1) HFA-based polyester diol (3) (Figure 3) was prepared according to a published procedure [61]. A total of 9.0 g (0.03 mol) (ω-1) HFA (2) and 0.7 g (0.006 mol) 1,6-hexanediol was introduced into a 100 mL flask equipped with a Vigreux column, thermometer, and distilling link. The reaction mixture was heated to 200 °C within 1 h. Reaction water was removed by distillation. In contrast to the published procedure, no catalyst (SnCl2) was used. Instead, 7.0 mg (0.042 mmol) 4-tert-butylcatechol was added in order to prevent undesired reactions at the double bond. After 24 h, the reaction was stopped, leaving a highly viscous and sticky product, which was partially soluble in THF.

Synthesis of 12-Hydroxystearic Acid-Based Polyesterdiol

OH-terminated 12-hydroxystearic acid-based polyesterdiol (4) (Figure 4) was prepared according to a published procedure [61]. A total of 5.0 g (16.7 mmol) 12-hydroxystearic acid and 0.41 g (3.5 mmol) 1,6-hexanediol was introduced into a 20 mL flask equipped with a Vigreux column, thermometer, and distilling link. The reaction mixture was heated to 200 °C within 1 h. Reaction water was removed by distillation. After 24 h, SnCl2 was added in order to complete the conversion. After another 24 h, vacuum was applied to remove water residues, leaving a highly viscous liquid as the product.

Synthesis of 17-Hydroxyoctadec-9-Enoyl Azide ((ω-1) HFA Azide)

The synthesis of (5) (Figure 5) follows a modification and combination of several published procedures in order to improve reaction time and yield [72,73]. In a 100 mL flask, 0.75 g (2.50 mmol) (ω-1) HFA (2) was dissolved in 15 mL dry CH2Cl2. To this solution, 0.25 mL (3.0 mmol) oxalyl chloride and 2.30 µL (0.03 mmol) DMF were added, and the reaction mixture was stirred at room temperature for 1 h. Then the solution was cooled to 0 °C and 0.65 g (10 mmol) NaN3 as an aqueous solution was added dropwise. The resulting mixture was stirred at 0 °C for 3 h. The organic phase was extracted with 3 × 30 mL CHCl3. The combined organic layers were washed with 40 mL brine, dried over anhydrous Na2SO4, and concentrated under reduced pressure. The product accumulated as a yellow oil. The corresponding amounts of HIC, PIC, or HDI and DBTDL (Table 1) were added to the solution, and the reaction was monitored by inline IR and NCO titration. Residual amounts of diisocyanate were quenched with excessive MeOH prior to further investigations. Figure 6 shows characteristic IR spectra obtained applying this procedure, exemplarily for the sample LSL-HDI 1.1.

2.7.2. Synthesis of PU Based on (ω-1) HFA-Based Polyesterdiol and 12-Hydroxystearic Acid-Based Polyesterdiol

(ω-1) HFA-based polyesterdiol (3) was dissolved in THF for 7 d.
After this period, solubility was complete and Mn was measured to be 1561 g mol−1. A total of 25 mL THF solution containing 3.5 g (2.3 mmol) (3) was heated to 60 °C, and 500 ppm DBTDL and 0.58 g (3.5 mmol) HDI were added. According to NCO titration, the reaction was completed after 16 h, cooled to room temperature, and the solvent removed under reduced pressure. 12-Hydroxystearic acid-based polyesterdiol (4) was dissolved in THF for 7 d. After this period, Mn was measured to be 1862 g mol−1. A total of 25 mL THF solution containing 3.5 g (2.3 mmol) (4) was heated to 60 °C, and 500 ppm DBTDL and 0.47 g (2.8 mmol) HDI were added. According to NCO titration, the reaction was completed after 16 h, cooled to room temperature, and the solvent removed under reduced pressure.

Preparation of A/B Type PU

In a 100 mL round-necked flask, 0.80 g (2.5 mmol) 17-hydroxyoctadec-9-enoyl azide (5) was dissolved in 1 mL THF under N2 atmosphere. Curtius rearrangement and successive polymerization were started applying different temperatures and allowed to proceed for different periods of time (for details see Section 3). The resulting A/B type polymer (6) is depicted in Figure 7.

Ternary Urethane Systems

For kinetic investigations, 16.42 g (7.0 mmol) PBA and 5 g (7.0 mmol) LSL were dissolved in 80 mL acetone. At 50 °C, 6.1 g (0.048 mol) HIC or 5.7 g (0.048 mol) PIC was added. The reaction was immediately started by addition of 500 ppm DBTDL and monitored by NCO titration. For variation of the polyol composition, corresponding amounts of LSL and PBA (Table 2) were placed in an appropriate flask, dissolved in acetone, and heated to 50 °C. After addition of HDI and 500 ppm DBTDL, the reaction was started and monitored by NCO titration. After completion of the reaction, the product was cast on a Teflon plate applying a squeegee (2000 µm) and dried at room temperature.

Solubility Studies

Lactonic SL (LSL, Figure 1(1A)) and acidic SL (ASL, Figure 1(1B)) can be produced in good yields and with a purity of at least 97% [19,25,27]. The melting points of LSL and ASL range between 55 °C and 65 °C. Above this temperature range, the SL remain highly viscous.
The reaction with isocyanates in bulk is thus quite challenging, which is why the utilization of an appropriate solvent becomes highly favorable. The solubility of LSL and ASL in selected solvents is listed in Table 3. LSL is soluble in several solvents, particularly in acetone, MEK, and THF, all being solvents commonly used in PU synthesis. In contrast to that, ASL shows poor solubility in these solvents. In fact, solubility is sufficient in distilled water and methanol, both being solvents with relatively high polarity. This is assumed to be the result of the higher intrinsic polarity of ASL due to the additional OH- and COOH-functionalities. Note, however, that acceptable solubility is also measured using less polar DMF and DMSO, suggesting a more complex solubility behavior that is not determined by polarity alone. Water and methanol are well known to be inappropriate solvents to study the conversion of polyols with isocyanates since they react themselves with isocyanates, forming urea and urethane groups [1,3]. DMF and DMSO are improper solvents as well because DMF reacts with isocyanates forming amidines and biurets [74,75], and DMSO yields sulfide esters by reaction with the carboxyl group of ASL [76]. Hence, the reaction of SL with isocyanates in solution is studied applying LSL.

Reaction of LSL with Monoisocyanates

LSL comprises four secondary OH groups (Figure 1(1A)). The extent of the conversion of the OH groups can be investigated by reaction with the monofunctional isocyanates HIC and PIC. Assuming complete conversion, the reactions of LSL with HIC or PIC should yield products with Mn,theo,LSL-HIC = 1197 g mol−1 and Mn,theo,LSL-PIC = 1165 g mol−1, respectively. As shown in Figure 8, the experimental molecular weights are determined to be in the range of about 1000 g mol−1 for all systems applying a slight excess of isocyanate. This means that, on average, 2.5 OH groups of the LSL reacted with the isocyanate. In contrast to this, a five-fold excess of PIC yields a product with Mn ≈ 1155 g mol−1, close to Mn,theo,LSL-PIC, thus indicating complete conversion.
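The average number of converted OH groups quoted above follows directly from the SEC data. The short Python sketch below is ours; the molar mass of LSL is taken as the nominal value for the diacetylated lactonic sophorolipid (about 688 g mol−1), chosen such that full conversion reproduces the Mn,theo values given above.

M_LSL = 688.8                            # g/mol, lactonic SL (our assumed nominal value)
M_ISO = {"HIC": 127.2, "PIC": 119.1}     # g/mol, monoisocyanates

def converted_oh(mn_product: float, iso: str) -> float:
    """Each converted OH group adds one isocyanate mass to the molecule."""
    return (mn_product - M_LSL) / M_ISO[iso]

# Full conversion of all four secondary OH groups:
print(M_LSL + 4 * M_ISO["HIC"])              # ~1198 g/mol (quoted: 1197)
print(M_LSL + 4 * M_ISO["PIC"])              # ~1165 g/mol (quoted: 1165)
# Measured Mn of about 1000 g/mol at slight isocyanate excess:
print(round(converted_oh(1000, "HIC"), 2))   # ~2.45 converted OH groups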
In addition to the urethane products, by-product formation is observed (Figure 8). Residual isocyanates could not be detected in a significant amount. For the PIC systems with low isocyanate excess, the molecular weight of the by-product is about 140 g mol−1 and can be assigned to phenyl urea (Mn = 136.15 g mol−1), an impurity of PIC. It is well known that monoisocyanates dimerize to form uretdiones [79,80]. According to 13C-NMR analysis and conversion-time investigations (results not shown) [81], the signals at ca. 250 g mol−1 correspond to the uretdione dimers of HIC (Mn,theo,HIC-uretdione = 254.37 g mol−1) and PIC (Mn,theo,PIC-uretdione = 238.24 g mol−1), respectively. The dimerization thus explains excellently the incomplete conversion of the OH groups of LSL applying a minor excess of isocyanate. With large excess, uretdione formation becomes prominent as well, but remains insufficient to prevent completion of the reaction.

Reaction of LSL with Diisocyanates

Stoichiometric Influence

To produce high molecular weight polyurethanes, LSL has to be converted with isocyanates with a functionality f of at least two. In contrast to the reaction with monofunctional isocyanates, the reaction with HDI does not lead to measurable uretdione formation. However, because f_LSL is larger than two, crosslinking is substantial and thus the point of gelation must be considered when different stoichiometric NCO/OH ratios (indices) are applied. The point of gelation can be calculated applying the Flory-Stockmayer relation (1) [44][45][46]:

p_OH · p_NCO = 1 / [(f_OH − 1)(f_NCO − 1)]    (1)

with p_OH and p_NCO = conversion of OH and NCO groups, respectively, and f_OH and f_NCO = functionality of the OH and the NCO component. For the LSL/HDI system (f_OH = 4, f_NCO = 2), the point of gelation is calculated to be 1/3. Thus, gelation is to be expected within an index range of 0.33 to 3.03.
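The gel-point estimate and the quoted index window follow from Equation (1); the Python sketch below is ours and assumes complete conversion of the deficient component, i.e., p_NCO = 1 for an index of at most 1 and p_OH = 1 otherwise.

def gel_point(f_oh: float, f_nco: float) -> float:
    """Critical product p_OH * p_NCO at the gel point, Eq. (1)."""
    return 1.0 / ((f_oh - 1.0) * (f_nco - 1.0))

def gelation_index_range(f_oh: float, f_nco: float) -> tuple:
    """NCO/OH index window in which gelation is expected.

    For index r <= 1 the best case is p_NCO = 1 and p_OH = r, so gelation
    requires r >= p_c; for r > 1 the roles swap and gelation requires r <= 1/p_c.
    """
    p_c = gel_point(f_oh, f_nco)
    return (p_c, 1.0 / p_c)

p_c = gel_point(4, 2)                    # LSL (4 secondary OH) + HDI -> 1/3
lo, hi = gelation_index_range(4, 2)      # -> (0.33, 3.0)
print(f"gel point p_OH*p_NCO = {p_c:.2f}, gelation for index {lo:.2f} to {hi:.2f}")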
A series of experiments applying different indices allows detailed inspection of the reaction of LSL with HDI (Figure 9). The reaction of LSL with a 10-fold excess of HDI leads to a product mixture that is still soluble in THF. SEC shows the nature of the components of this mixture (Figure 9, dotted line). Peak c (Mn = 2190 g mol−1) relates to the oligomeric/polymeric adducts formed by the reaction of LSL with HDI. In addition to this, low molecular weight fractions can be detected (peaks a and b in Figure 9), which correspond to residual HDI and to mono- and diurethanes from quenching residual HDI with excessive MeOH. Reducing the index to 1.1 leads to a product mixture that is not completely soluble anymore, indicating significant crosslinking as predicted by the Flory-Stockmayer relation (Equation (1)). SEC analysis of the soluble fraction proves crosslinking/branching to a substantial extent (Figure 9, dashed line). Inverting the NCO/OH ratio to an index of 0.5 leads to a product with incomplete crosslinking (Figure 9, solid line). This appears to contradict the prediction for the lower limit of 0.33 using the Flory-Stockmayer relation. Presumably, unreacted OH groups are sterically shielded by vicinal urethane groups formed first. This reduces the effective amount of OH groups for the reaction with HDI. Consequently, the effective lower limit of the Flory-Stockmayer gelation range is somewhat increased and seems to be larger than 0.5.

Kinetic Investigations

The reaction between OH and NCO groups is generally accepted to follow second order kinetics [36,82] and can be studied by investigation of the corresponding second order plot and determination of k_app according to Equation (2) [38]:

1/c_NCO − 1/c0_NCO = k_app · t    (2)

with c_NCO = concentration of NCO groups at time t and c0_NCO = concentration of NCO groups at the beginning of the reaction. Figure 10 shows second order kinetic plots of reactions of PIC, HIC, and HDI with LSL and PBA, respectively. The relative reactivity of LSL and PBA towards the isocyanates is similar and follows the order PIC > HIC > HDI. All determined k_app are in the typical range for rate constants of conversions of isocyanates with primary or secondary alcohols [83]. In fact, PBA reacts with the corresponding isocyanates about 2.5 to 3.5 times faster than LSL (compare k_app in Figure 10a with the corresponding k_app in Figure 10b). This is because PBA is terminated by primary OH groups, whereas LSL contains secondary OH groups only. The difference in reactivity is also observable in the second order plots of ternary reaction mixtures comprising isocyanate and an equimolar mixture of PBA and LSL (Figure 11).
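k_app is obtained as the slope of the second order plot of Equation (2). The sketch below is ours, and the titration values in it are made-up placeholders of the right order of magnitude, not data from this work.

import numpy as np

# Placeholder NCO titration data (t in s, c_NCO in mol/L); replace with
# real titration values. Eq. (2): 1/c_NCO - 1/c0_NCO = k_app * t.
t = np.array([0.0, 600.0, 1200.0, 1800.0, 3600.0])
c_nco = np.array([0.60, 0.42, 0.33, 0.27, 0.17])

y = 1.0 / c_nco - 1.0 / c_nco[0]         # second order ordinate, L/mol
k_app, intercept = np.polyfit(t, y, 1)   # slope of the linear fit = k_app
print(f"k_app = {k_app:.2e} L mol^-1 s^-1")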
From these plots, two different k_app for each reaction can be calculated: k_app1 = 16.2 × 10−4 L mol−1 s−1 and k_app2 = 7.99 × 10−4 L mol−1 s−1 for the PBA/LSL/PIC mixture, and k_app1 = 4.64 × 10−4 L mol−1 s−1 and k_app2 = 2.61 × 10−4 L mol−1 s−1 for the PBA/LSL/HIC reaction. The difference between k_app1 and k_app2 is about a factor of two in each ternary system, suggesting that LSL and PBA do not affect each other in terms of reactivity towards isocyanates.

Figure 9. Size exclusion chromatograms of products from the reaction of LSL with HDI applying different NCO/OH ratios (index). Reaction conditions: solvent = acetone; T = 50 °C; c_DBTDL = 500 ppm. Peak a corresponds to residual HDI, b to mono- and diurethanes from quenching residual HDI with excessive MeOH, and c to oligomers/polymers from the reaction of LSL with 10-fold excess of NCO groups.

Behavior of Ternary Systems

Ternary PU systems containing LSL, PBA, and HDI are well-suited model formulations to study the influence of LSL on the product performance. For this, the content of LSL is systematically varied in the polyol mixture. To prevent significant crosslinking and enable reasonably high molecular weights, an index of 0.5 is applied. SEC analysis reveals a monomodal molecular weight distribution for the binary system PBA/HDI (0 mol% LSL) with an average molecular weight of Mn = 4820 g mol−1, which agrees well with Mn,theo of 4619 g mol−1 according to Flory (Figure 12) [84]. In contrast to that, increasing the amount of LSL in the polyol mixture leads to a broadening of the molecular weight distribution and an increase in modality. The SECs of the samples containing 70 mol% and 50 mol% LSL in particular reveal signals at molecular weights that correspond to pure LSL. This strongly suggests incomplete incorporation of LSL in the PU system. This is most likely because PBA is consumed first due to the reduced reactivity of the secondary OH groups of LSL. Consequently, residual LSL remains in the reaction mixture. An increased amount of LSL in the product leads also to a reduction of properties such as shore A hardness and crystallinity, suggesting a plasticizing character of LSL (Figure 13).
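The agreement between Mn and Mn,theo for the binary PBA/HDI system at index 0.5 can be rationalized with the Carothers/Flory treatment of stoichiometric imbalance. The sketch below is ours; M_PBA is inferred from the weighed-in amounts (16.42 g per 7.0 mmol), so a small deviation from the authors' Mn,theo is expected.

M_PBA = 16.42 / 7.0e-3      # g/mol, ~2346, inferred from the recipe above
M_HDI = 168.2               # g/mol, hexamethylene diisocyanate

r, p = 0.5, 1.0             # NCO/OH index and conversion of NCO groups
x_n = (1 + r) / (1 + r - 2 * r * p)      # Carothers: 3 monomer units on average
m_avg = (2 * M_PBA + M_HDI) / 3          # mean monomer mass at r = 0.5
mn_theo = x_n * m_avg                    # ~4.9 kg/mol
print(f"Xn = {x_n:.1f}, Mn_theo = {mn_theo:.0f} g/mol "
      "(reported: 4619 theoretical, 4820 measured)")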
(ω-1) HFA-Based Polyester Polyols and PU Systems Thereof

The preparation of (ω-1) HFA (2) proceeds according to the sequence shown in Figure 14. Successive alkaline and acidic hydrolysis leads to a mixture of monomeric HFA with its dimers and some oligomers in an overall yield of up to 74.5%. Further purification to produce pure monomeric HFA is possible by repeated saponification and solvent extraction, but reduces the overall yield to 48.4%.
Fortunately, the subsequent polymerization towards A/B-type polyesters does not require highly pure monomeric HFA, since the dimers condense in the same way and yield the desired A/B-type polyesters. The further polymerization is thus investigated applying the mixture of monomeric and dimeric/oligomeric (ω-1) HFA.

OH-terminated A/B-type polyesters can be produced by modifying a procedure published elsewhere [61]. In this synthesis, the mixture of monomeric and dimeric/oligomeric (ω-1) HFA is polymerized in the presence of 1,6-hexanediol. The product is highly viscous, sticky, and not completely soluble in THF (Mn of the soluble fraction ≈ 2200 g mol−1). For comparison, a polyester consisting of 12-hydroxystearic acid and 1,6-hexanediol is produced applying equal conditions. This product is oily and highly viscous as well; however, it remains fully soluble in THF (Mn ≈ 2600 g mol−1). These outcomes support other investigations [81], which indicate that notable crosslinking has occurred during the synthesis of (ω-1) HFA-based polyester diols, most likely as a result of inseparable glucose impurities remaining in the (ω-1) HFA. The reaction of HDI with the soluble fraction of the (ω-1) HFA-based polyester diols yields a dark brownish oily product that is not completely soluble in THF. IR and SEC analysis reveal the product to be PU with Mn ≈ 3500 g mol−1 and a significant amount of species with Mn > 10^6 g mol−1 (results not shown). This is also attributed to residual glucose moieties leading to notable crosslinking and branching during PU synthesis.

Direct Conversion of (ω-1) HFA to PU by Curtius Rearrangement

A/B type PU (6) can be obtained by Curtius rearrangement [51,[62][63][64][65][66][67][68]; thus, fatty acid derivatives such as (ω-1) HFA represent potential A/B type monomers for the synthesis of PU [69]. By modification of model reactions [72,73], a reaction procedure comprising (ω-1) HFA (2) → (ω-1) HFA azide (5) → (ω-1) HFA-based A/B PU (6) could be developed (Figure 15).
This sequence leads to macromolecular species with Mn of max. 30,000 g mol⁻¹ after extended reaction time (Figure 16a). At a reaction temperature of 60 °C, the product obtained after 16 h shows a notably high PDI that indicates significant side reactions such as crosslinking, branching, or the formation of macrocycles. Reducing the reaction temperature to 50 °C leads to species with Mn of about 15,000 g mol⁻¹ and PDIs in the range between 1.5 and 2.0. Increasing the temperature to 80 °C leads to a significant reduction of the obtained molecular weight, most likely because of the favored formation of macrocycles [51,69].
FT-IR reaction monitoring of the polymerization step clearly shows the typical strong asymmetric stretching frequency of triatomic N3 groups at 2150 cm⁻¹ for the starting material (Figure 16b) [85]. The reaction signals at 1795, 1720, and 1540 cm⁻¹ become distinct after 16 h, indicating the formation of the urethane functionality [86-89]. In addition to these signals, the asymmetric stretch vibration of the NCO group [85,86,90,91] at 2264 cm⁻¹ becomes evident after 30 min, suggesting effective decomposition of the azide.
Note that this signal is still observable after 16 h reaction time, indicating incomplete conversion and most likely the formation of NCO-terminated species.
Conclusions
In contrast to acidic and deacetylated sophorolipid (ASL), the lactonic and acetylated form of sophorolipid (LSL) is soluble in solvents commonly used in PU synthesis. The conversion of LSL with mono- or diisocyanates is feasible, and the products obtained show the expected behavior for PU systems based on polyols with functionalities higher than two, i.e., crosslinking in the reaction with diisocyanates at indices below the point of gelation. The reactivity of LSL is in the typical range for conversions of isocyanates with secondary alcohols and about two to three times lower compared with polyols containing exclusively primary OH end groups, such as commercially available PBA. This difference leads to incomplete incorporation of LSL in, e.g., ternary HDI/PBA/LSL systems at an index of 0.5. Consequently, LSL acts here like a plasticizer, reducing, for instance, crystallinity and Shore A hardness. (ω-1) HFA-based polyester polyols are in general producible; however, low amounts of residual glucose impurities seem to lead to considerable crosslinking during the synthesis.
Subsequent reaction of soluble fractions of this polyester with HDI leads to different PU products with Mn of about 3500 g mol⁻¹ and > 10⁶ g mol⁻¹, respectively, indicating substantial crosslinking and branching. (ω-1) HFA-based A/B-type PU systems are also feasible by applying a reaction procedure comprising (ω-1) HFA (2) → (ω-1) HFA azide (5) → (ω-1) HFA-based A/B PU (6). Such systems show molecular weights up to ca. 30,000 g mol⁻¹. The products obtained show a significant amount of NCO groups remaining in the system, suggesting incomplete conversion and the formation of NCO-terminated PU species.
2021-06-28T05:07:34.755Z
2021-06-01T00:00:00.000
{ "year": 2021, "sha1": "33a1a31787cafa7d34c77d0a17c7adbe1f62a393", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4360/13/12/2001/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "33a1a31787cafa7d34c77d0a17c7adbe1f62a393", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
81436345
pes2o/s2orc
v3-fos-license
Risk Factors of Autism Spectrum Disorder (ASD)
There were 112,000 ASD sufferers in Indonesia in 2012, and in 2015 it was estimated that there was 1 sufferer per 250 children, or 134,000 sufferers. The proportion of ASD cases was 62.8% in 2015, and the prevalence was 1.28 per 1000 children in 2016. This study aimed to determine the risk factors for the incidence of ASD in Pontianak City. The research method was analytic observational with a case-control design. The sample was 70 people (35 cases and 35 controls) taken by purposive sampling. Data were analyzed using the Chi-Square test (α = 0.05). The results showed that the factors associated with the incidence of ASD were the father's age (p = 0.03; OR = 4.00; CI = 1.250-12.804), a history of stress during pregnancy (p = 0.04; OR = 3.18; CI = 1.13-8.93), and insufficient months of birth, i.e., preterm birth (p = 0.036; OR = 4.88; CI = 1.22-19.4), while the mother's age during pregnancy, passive smoking, antenatal hemorrhage, and pregnancy interval were not associated with the incidence of ASD (p > 0.05). The conclusion of this study is that the father's age, a history of stress during pregnancy, and preterm birth are associated with the incidence of ASD.
INTRODUCTION
Child growth and development occur from the prenatal period and the natal period until after the child is born. This period is influenced by various internal and external factors, so that during the process the child is vulnerable to various health problems (Chamidah, 2009). One of these is a developmental disorder of the brain, which leads to impaired development of interaction, communication, and behavior. This disorder is known as autism spectrum disorder, or ASD. Autism spectrum disorder (ASD) is a developmental disorder characterized by a weak ability to socialize and communicate, often accompanied by stereotypical behavior against a background of limited interests and behavior, occurring before the age of 3 years. ASD limits the capacity of individuals to carry out daily activities and live productively. Some individuals can live independently and productively, but most others live with limitations, discrimination, and deprivation of the right to obtain health and education services. Children with ASD even have a high risk of becoming victims of violence, bullying, and torture. Based on the existing symptoms, ASD also affects aspects of development, learning, and health. The impact on development and learning for children with ASD begins at the age of 1 year. Children with ASD may read words but are unable to speak directly when they meet their peers, do not respond when their names are called, make no eye contact or gestures, and do not smile spontaneously at people who see them or say goodbye without being asked (WHO, 2016).
Children with ASD are rigid in how they learn and are unable to transfer information across different situations. They have difficulty seeing an object as a whole: they generally focus on certain aspects of an object; for example, a child with ASD may see only a wheel or a door of a toy car, or may focus on parts of the furnishings in a room, such as table legs or small pieces of paper on the floor. Health impacts are also felt by ASD sufferers: based on Croen's study (2015), known health effects include allergies, asthma, and metabolic disorders such as diabetes and heart disease, and ASD sufferers even have a 1.4 times higher risk of being obese and a 3 times higher risk of constipation.
The number of ASD cases in the world has increased significantly; based on UNESCO data, in 2011 there were 35 million cases of ASD worldwide, which means that 6 out of every 1000 people in the world suffer from ASD. Data from the CDC (Centers for Disease Control and Prevention) reveal an increase in the number of ASD cases from 2000 to 2016. In 2000-2002 the rate was 1 in 150 children; in 2004 it increased to 1 in 125 children, and two years later to 1 in 110 children. In 2008 the rate rose to 1 in 88, and from 2010 to 2016 ASD cases were reported to increase again to 1 in 68 children. The number of children with ASD in Indonesia is still not well documented and is difficult to establish clearly because there is no accurate ASD data collection. In 2012 the prevalence of children suffering from ASD was 1.68 per 1000, which means that more than 112,000 people with ASD in Indonesia were aged > 5 years; this number continues to increase from year to year. Based on outpatient data for children at the Bangkong River Mental Hospital, West Kalimantan Province, the number of autistic patient visits was 426 in 2012, 335 in 2013, and 256 in 2014. On average, one patient made three visits per week; over the last three years, around 21 autistic children in West Kalimantan have been undergoing therapy (Suharningsih, 2015). Based on data collected in all facilities, both clinics and special schools, in various parts of Pontianak City, the number of ASD cases in Pontianak City in 2016 was 137 (secondary data, 2016). The proportion of ASD according to Bina Anak Bangsa Special School data was 39% in 2014 and 37.8% in 2015. The proportion of cases based on data from the Pontianak City Autism Center Service Center in 2015 was 62.8%, and in 2016 the prevalence of ASD cases in Pontianak City was 1.28 per 1000 children. The cause of ASD is multifactorial, involving genetic factors as well as influential environmental factors (Ganaie et al., 2014). The CDC states that causes of ASD in children include gene disorders, having a twin sibling, consumption of valproic acid and thalidomide during pregnancy, and advancing parental age. The most plausible causes of ASD studied in recent years are genetic, biological, perinatal, neuroanatomical, immunological, biochemical, environmental, psychosocial, and family factors (Elwardany et al., 2013). In a preliminary survey, the researchers interviewed 10 mothers in two special schools in Pontianak City, of whom 8 had children with ASD; among these, 62.5% had husbands aged > 40 years, 50% were aged > 35 years while pregnant, 100% were exposed to their husband's cigarette smoke during pregnancy, 50% experienced pregnancy hemorrhage, 37.5% had a pregnancy interval of < 2 years from the previous child, 25% experienced stress during pregnancy, and 37.5% gave birth preterm. This study aimed to obtain information on the dominant risk factors for the incidence of Autism Spectrum Disorder (ASD) and to determine the magnitude of the risk associated with the various determinant factors for the occurrence of ASD.
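The various "1 in N" ratios above can be put on the per-1000 scale used for the Indonesian figures; a minimal sketch, where the implied population base is back-calculated for illustration and is not a figure reported in the text:

```python
# CDC ratios quoted in the text, as "1 in N children" per year.
cdc_ratios = {2000: 150, 2004: 125, 2006: 110, 2008: 88, 2010: 68}
for year, n in cdc_ratios.items():
    print(f"{year}: 1 in {n} = {1000 / n:.2f} per 1000 children")

# Population base implied by Indonesia's 2012 figures
# (112,000 sufferers at 1.68 per 1000):
cases, prevalence_per_1000 = 112_000, 1.68
print(f"implied population aged > 5 y: "
      f"{cases / prevalence_per_1000 * 1000:,.0f}")
```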
METHOD
The type of research used was analytic observational with a case-control study design. The study was conducted by retrospective observation of the role of the independent variables studied, namely the father's age, the mother's age during pregnancy, passive smoking, antenatal hemorrhage, pregnancy interval, history of stress during pregnancy, and preterm birth, on the dependent variable, the incidence of ASD. The sample consisted of case and control groups. The case group comprised children diagnosed with ASD by health professionals (pediatricians, psychiatrists, and psychologists) who were recorded as therapy participants in the two therapy facilities with the highest number of ASD cases, namely the UPTD Autism Center and the Bina Anak Bangsa Foundation, with ages ranging from 2 to 5 years. The control group was taken from children not affected by ASD who had almost the same sex and age characteristics as the case group, namely male and aged 2 to 5 years. The minimum sample size was calculated as 35, with a 1:1 ratio of cases to controls (35 cases and 35 controls), so the total sample was 70 people. The sampling technique was purposive sampling using inclusion and exclusion criteria. The inclusion criteria for the case group were children with ASD aged 2-5 years who had a living biological mother, lived in Pontianak City, were male, and were enrolled in therapy in Pontianak City; the exclusion criteria were respondents whose biological mothers were not in Pontianak City (working outside the area or deceased), respondents suffering from other conditions such as muteness or deafness, and respondents who were born at home. Data collection consisted of primary and secondary data: primary data were obtained through interviews with the respondents' mothers, guided by the research questionnaire, and secondary data were obtained from registers and medical records. Data were analyzed univariately and bivariately using the Chi-Square test (α = 0.05; CI = 95%) to determine the risk (odds ratio) of the studied factors for the incidence of ASD.
RESULTS AND DISCUSSION
The characteristics of the respondents in this study consisted of age, birth order, parental education, parental occupation, history of trauma or accident, delivery, and history of complications; they are shown in Table 1. The univariate analysis described all independent variables, namely the father's age, the mother's age during pregnancy, passive smoking, antenatal hemorrhage, pregnancy interval, history of stress during pregnancy, and preterm birth, by calculating frequencies and percentages. Table 2 shows that fathers aged 40 years and over were more common in the case group (40%), and the proportion of respondents whose mothers were aged 35 years and over during pregnancy was higher in the case group (28.6%) than in the control group. However, exposure to cigarette smoke during pregnancy (passive smoking) was more common in the control group (68.6%). A history of antenatal hemorrhage was more common in the case group (25.7%), while a risky pregnancy interval was more common in the control group (8.6%).
A history of stress during pregnancy was more frequent in the case group (48.6%), and preterm births ("insufficient months of birth") were also more frequent in the case group (31.4%) than in the control group. The bivariate analysis examined the relationships between the independent variables, namely the father's age, the mother's age during pregnancy, passive smoking, antenatal hemorrhage, pregnancy interval, history of stress during pregnancy, and preterm birth, and the dependent variable, the incidence of ASD. The results of the bivariate analysis showed a relationship between fathers aged ≥ 40 years and the incidence of ASD: the Chi-Square test yielded a p-value of 0.03 (p ≤ 0.05), and fathers aged ≥ 40 years at the time of the mother's pregnancy carried a 4 times greater risk of having a child with ASD compared with fathers aged < 40 years. This is in accordance with Budi's study (2015), which found a significant relationship between fathers aged ≥ 40 years and the incidence of ASD, with a 6.3 times greater risk of a child being born with ASD compared with fathers aged < 40 years. The explanation is that as a man's DNA is copied from one cell to another during sperm production, errors can occur, genetically termed "copy errors". With age the copying process becomes less efficient, so the sperm produced may carry abnormal chromosomal structures (de novo mutations); these sperm mutations then affect the development of the fetal brain and can eventually cause children to become autistic (Hultman et al., 2011). A 2012 New York Times article on research conducted by the deCode Genetics company (a gene-based disease research firm) on population DNA in Iceland suggested that older fathers increase the risk of mental illnesses such as schizophrenia and ASD compared with younger ones. This is due to random mutations that become more common with increasing age, contributing a 2% mutation risk. De novo mutations play a very large role in the occurrence of brain disorders because approximately 50% of the active genes involved play a role in the development of the brain's nerves.
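The reported OR of 4.00 with 95% CI 1.250-12.804 can be reproduced from a 2×2 table; a minimal sketch, where the cell counts (14/21 exposed/unexposed cases, 5/30 controls) are inferred from the reported 40% exposure among the 35 cases and the OR, not copied from the paper's Table 2:

```python
import math
from scipy.stats import chi2_contingency

# Assumed 2x2 table (father >= 40 years vs ASD); cell counts inferred,
# not taken verbatim from the paper.
a, b = 14, 21   # cases: exposed, unexposed
c, d = 5, 30    # controls: exposed, unexposed

odds_ratio = (a * d) / (b * c)
se_ln_or = math.sqrt(1/a + 1/b + 1/c + 1/d)    # Woolf's method
lo, hi = (math.exp(math.log(odds_ratio) + s * 1.96 * se_ln_or)
          for s in (-1, 1))
chi2, p, _, _ = chi2_contingency([[a, b], [c, d]])

print(f"OR = {odds_ratio:.2f}, 95% CI = {lo:.3f}-{hi:.3f}, p = {p:.3f}")
```

Woolf's log-OR interval reproduces the published CI of 1.250-12.804, and with Yates' continuity correction (the scipy default) the chi-square p-value comes out near 0.03, matching the paper; the same calculation applies to the other exposures.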
The results of the bivariate analysis showed no significant relationship between mothers aged ≥ 35 years during pregnancy and the incidence of ASD (p = 0.065; p > 0.05). This is in line with Arulita's (2014) study, which found no relationship between mothers aged ≥ 35 years during pregnancy and the incidence of ASD (p = 1.000). Budi's (2015) study showed similar results: maternal age ≥ 35 years during pregnancy had no significant relationship with the incidence of ASD (p = 0.261). Although no significant relationship with the incidence of ASD was found, a maternal age of ≥ 35 years is a high-risk gestational age; more difficult and prolonged deliveries and stillbirths are other problems found in pregnancy and childbirth among mothers aged ≥ 35 years (Sibuea et al., 2013). A high-risk pregnancy is one that can cause the pregnant woman and baby to become ill or die before labor takes place (Sinsin, 2008). This age group also lies outside the safe age range for pregnancy, which is between 20 and 34 years, the range in which the physical and psychological condition of the mother is optimal for pregnancy (Hardiyanti, 2014). The absence of a relationship in this study was influenced by the fact that most case respondents (71.4%) had mothers aged < 35 years, comprising 34.2% aged 30-34 years, 28.6% aged 28-29 years, and 8.6% aged 20-24 years; this tendency yielded a probability value without a statistical relationship.
The results of the bivariate analysis showed no significant relationship between passive smoking and the incidence of ASD (p = 0.801; p > 0.05). This is similar to the results of Nurbayatin's (2015) study, which found that pregnant women who were passive smokers were not associated with the incidence of ASD (p = 1.000). Exposure to cigarette smoke is as dangerous for pregnant women as smoking directly: the nicotine contained in cigarette smoke is a vasoconstrictor that constricts blood vessels and increases heart contraction, and consequently affects the supply of oxygen and nutrients to the fetus. An inadequate oxygen supply to the fetus causes hypoxia and disrupts fetal growth; in addition, a decreased nutrient supply leaves the fetus deficient in nutrients and interferes with its growth (Hanum, 2017). The lack of association with the incidence of ASD arose because the proportion of respondents exposed to cigarette smoke at home was higher among control respondents; it was also influenced by confounding, namely that pregnant mothers are not only at home, so exposure to cigarette smoke may also be acquired in other environments such as workplaces or public areas.
The results of the bivariate analysis showed no significant relationship between antenatal hemorrhage and the incidence of ASD (p = 0.155; p > 0.05). This result is in accordance with research conducted by Zahra (2014), which found that antenatal hemorrhage is not a risk factor for ASD in children. A study by Arulita (2014) also stated that antenatal hemorrhage in pregnant women is not related to the incidence of ASD (p = 0.145). Antenatal hemorrhage is a condition in which pregnant women experience vaginal bleeding at 28 weeks of pregnancy or more because of a disturbance in the placenta. Bleeding during pregnancy is one of several pregnancy complications that can cause the fetus to experience hypoxia (Gardener et al., 2009). The absence of a relationship reflects the fact that only 25.7% of case respondents had experienced bleeding during pregnancy; in addition, other factors influence the occurrence of antenatal hemorrhage, such as blood pressure during pregnancy and histories of anemia and hypertension. The results of the bivariate analysis showed no significant relationship between pregnancy interval and the incidence of ASD, in line with an earlier study (2015) which stated that there was no significant relationship between pregnancy interval and ASD (p = 0.20).
Although the results of this study showed no significant association with the incidence of ASD, a two-year pregnancy interval is the spacing recommended by the National Population and Family Planning Board (Badan Kependudukan dan Keluarga Berencana Nasional, BKKBN) to reduce maternal and infant mortality rates. An interval between pregnancies that is too short can endanger the baby to be born because the physical condition of the mother's uterus has not fully recovered; too short an interval means the mother does not have time to recover and is at risk of anemia due to iron deficiency in pregnancy (Ningrum, 2014). A pregnancy interval that is too short also increases the risk of bleeding, pregnancy complications, premature babies, and bleeding during childbirth (Sawitri, 2014). The absence of a relationship in this study is due to differences in the characteristics of the sample: respondents with siblings from previous births tended to be fewer than respondents who were first-born children; first children made up 68.6% of the case respondents, about twice the proportion of second and third children.
The results of the bivariate analysis showed a relationship between a history of stress during pregnancy and the incidence of ASD: the Chi-Square test yielded a p-value of 0.04 (p ≤ 0.05), and mothers with a history of stress while pregnant carried a 3.18 times greater risk of having a child with ASD than mothers with no history of stress during pregnancy. This result is in line with research conducted by Zhang (2010), in which a history of stress during pregnancy was related to the incidence of ASD in children with an odds ratio of 4.08. Salman's research (2016) similarly showed that stress during pregnancy affects the incidence of ASD. The stressors commonly experienced by the respondents in this study can be seen in Figure 1, which shows that the majority of respondents experienced stress due to a first pregnancy (28%), problems with family members, cousins, or in-laws, or divorce (16%), sleep trouble due to changes in sleep hours during pregnancy (16%), and workload (16%), while 12% of respondents experienced stress due to financial problems and 12% suffered from illness during pregnancy. A history of stress during pregnancy can increase the level of adrenaline in the mother's body, resulting in placental vasoconstriction and disrupting the flow of blood through the placenta directly to the fetus (Zhang et al., 2010). Research by Mulder, cited in Kinney (2008), found that prenatal stress can damage a child's brain development in several ways: it interferes with circulation in the placental and uterine blood flow, causes hypoxia in the fetus, stimulates the release of stress hormones that can spread through the placenta, increases complications of pregnancy and childbirth, changes gene responses, and disrupts hormonal patterns that are important for the development of brain structure and function. The results of the bivariate analysis showed
that there was a relationship between preterm birth and the incidence of ASD: the Chi-Square test yielded a p-value of 0.036 (p ≤ 0.05), and children born preterm carried a 4.88 times greater risk of experiencing ASD. This result is in line with research conducted by Arulita (2014), in which preterm birth carried a 9.75 times greater risk of ASD. A pregnancy is generally called full-term if it lasts between 37 and 41 weeks, while labor that occurs before 37 weeks is called preterm labor. Prematurity is associated with infant morbidity and mortality and is one of the biggest contributors to perinatal mortality and neonatal morbidity, both short- and long-term. Problems that can arise from preterm birth include neurological development problems varying from severe to mild, such as behavioral abnormalities, difficulties in learning and understanding language, and concentration/attention and hyperactivity disturbances (Sulistiarini, 2016). Babies born prematurely are found to have a smaller cortical gray matter surface, which inhibits child development, including cognitive development, social relations, and language and communication abilities (Goldin, 2015).
CONCLUSION
The risk factors for the incidence of ASD in Pontianak City in 2017 were the father's age, a history of stress during pregnancy, and preterm birth. Further research is needed to determine other determinant factors thought to influence the incidence of ASD, such as the environment, history of trauma or accident, low birth weight (LBW), hyperbilirubinemia, anemia during pregnancy, drug consumption, history of fever during pregnancy, diabetes mellitus, history of cosmetic use, history of amalgam use on teeth, folic acid consumption, postmature birth, and asphyxia.
2019-03-18T14:04:31.656Z
2018-07-01T00:00:00.000
{ "year": 2018, "sha1": "cca433331749504530d471a233d209dee9bba42f", "oa_license": "CCBY", "oa_url": "https://journal.unnes.ac.id/sju/index.php/ujph/article/download/20565/12013", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3cba3ae05fb4a76a65cc9bbf7f51bdc86cf254e5", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Medicine" ] }
225656264
pes2o/s2orc
v3-fos-license
Simulation of Transmission of Drinking Water Sources to Reservoirs: Case Study PDAM Tirta Jati, Cirebon, Indonesia
Population dynamics have very important effects on ecosystems, including those related to water availability. The availability of clean water is closely related to the condition of the population in an area. Every human being needs to drink eight glasses of water per day; plants and animals also need water, so water can be said to be a source of life. The Regional Water Supply Company (PDAM) is a company owned by the regional government that operates the drinking water supply for the community. Companies are encouraged to utilize developing technology to manage their drinking water assets effectively and efficiently so that they can improve services to the community. EPANET 2.0 is a suitable tool for building piping network models, and the Quantum Geographic Information System (QGIS) provides the hydraulic model whose parameters can be passed through a processing stage. The master network system model created in QGIS was imported into EPANET 2.0 for rasterization. The required data were then entered, such as pipe lengths, the elevation of each node, the water flow per tapping, the pipe roughness coefficient, and the peak hour factor. The results of the analysis, in the form of pipe hydraulics such as flow velocities and the pressure at each node, are used as a basis for network development plans. The ratio of total population per family in Sumber District in 2018 is 6 people per head of family. The population in the administrative area served by the PDAM is 22,560 people. Domestic water needs can be calculated from the average monthly water usage recorded at PDAM Tirta Jati, Cirebon Regency. The number of domestic customers was 2,446 in August 2019, with an average monthly water consumption of 15.02 m³/unit. Based on the analysis using EPANET and QGIS, the pipe flow velocities are 0.63-1.05 m/s, while the water pressure in the pipe reaches 140.95 m at node 274.
Introduction
High population growth has meant that not all components of the community can enjoy clean water. Providing water to meet community needs is an important item on the agenda for guaranteeing the basic needs of the community. Of all liquid fresh water, roughly 95% is groundwater, 2% is held as soil moisture, and 3% is in rivers, lakes, and other flows [1]. The availability of clean water is closely related to the condition of the population in an area. Rapid population growth and demands for quality of life in Cirebon require a sustainable supply of water for household needs. To ensure a safe supply of drinking water, it is necessary to evaluate the performance of water transmission and distribution from river water sources to reservoirs and households. For this reason, analytical methods are needed to improve the efficiency and control of the transmission network from water sources to reservoirs so that water distribution is guaranteed. Among the methods for analyzing pipeline systems are the EPANET 2.0 program [2] and the Geographic Information System (GIS), along with an information monitoring model for the quantity and continuity of water [3]. Utilizing technology related to the quantity and sustainability of services allows the PDAM to distribute water accurately. Water loss is the problem most often faced by every PDAM [4], and this study starts from the problem of the level of water loss in the water distribution network.
Non-Revenue Water (NRW) is strongly associated with pressure loss in pipes [5]. Geographic Information Systems (GIS) can be defined as a means of providing up-to-date information for planning and operational tasks. Facilitating water distribution requires stages covering the planning and design process, resource management, operation and exploitation, the reconditioning of areas, customer relations, and so on; these process stages are based on Geographic Information Systems [6]. GIS provide significant improvements in the quality and efficiency of work and the optimization of resources, and also contribute to managing the drinking water database [7]. In addition, GIS is very useful for integrating water network tools that are ready to run in the simulator [8]. Several attempts have been made to integrate GIS with hydraulic models; the new developments presented in this paper use QGIS 2.18 as the Geographic Information System. PDAM Tirta Jati Cirebon has a drinking water supply system fed by the Cigusti spring, which has a discharge capacity of 45 liters/second; the water is sent through a transmission pipe with a diameter of 250 mm (10 inches) over 7,000 meters to the reservoir in a gravity-flow system. The water source lies at an elevation of 198 m and the reservoir at 95 m; because of this difference in elevation, instruments are required to reduce the speed and pressure of the water so that the velocity and pressure in the transmission pipeline match the pipe specifications.
Theory
A Geographic Information System is a system for the management, storage, processing (manipulation), analysis, and delivery of data that are spatially associated with the earth [9]. EPANET is a computer program that simulates the hydraulics and water quality trends of water flowing in a pipeline network. The network itself consists of pipes, nodes (connection points), pumps, valves, and tanks or reservoirs [10]. One parameter that is measured is pressure; pressure-measuring devices such as pressure gauges (pressure indicators or manometers) mounted on a pipe show the pressure [11]. EPANET is software developed by the Drinking Water Research Division of the United States EPA (Environmental Protection Agency) that simulates hydraulic and water quality modeling in a distribution network system. The distribution network consists of points/nodes/junctions, pipes, pumps, valves, and reservoirs. In modeling a water distribution network, the first step is to enter the data above; then the initial conditions, the estimated water use, and the desired operating regime of the water distribution system are set. The EPANET program will then predict the direction and flow in each pipe, the pressure at each node, the water level in the tank, and the concentration of chemicals throughout the network during the simulation period [12].
Study locations
The most promising surface water source for services in Sumber District, Cirebon Regency, is the Cigusti spring, at an elevation of 275 m in Kuningan Regency. Data from PDAM Tirta Jati Cirebon for the service areas in Sumber District show that in August 2019 there were 2,446 house connections; generally, one house connection unit serves one family head. Based on data from the Cirebon Central Statistics Agency (BPS), the number of household connections in the Sumber District area in August 2019 was 2,446. Figures 1 and 2 present maps of the district and the housing service area.
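The gravity head quoted for this transmission line already fixes the order of magnitude of the uncontrolled flow; a minimal Hazen-Williams sketch, where the roughness coefficient C = 130 is an assumption rather than a value from the paper:

```python
import math

# Transmission-line figures from the text; C is an assumed
# Hazen-Williams coefficient for a 250 mm pipe.
length_m, diameter_m = 7000.0, 0.250
head_m = 198.0 - 95.0          # source minus reservoir elevation
C = 130.0                      # assumed roughness coefficient

slope = head_m / length_m                     # hydraulic gradient
radius = diameter_m / 4                       # hydraulic radius, full pipe
v = 0.849 * C * radius**0.63 * slope**0.54    # Hazen-Williams (SI), m/s
q = v * math.pi * (diameter_m / 2)**2         # flow, m^3/s

print(f"v = {v:.2f} m/s, Q = {q*1000:.0f} L/s")
```

The estimate comes out near 2 m/s and close to 100 L/s, well above the 45 L/s spring capacity, which is consistent with the stated need for instruments that reduce the velocity and pressure in the line.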
Sumber District is part of Cirebon Regency; 85.7% of its topography is lowland and the rest is hilly, with an altitude of less than 500 meters above sea level, over an area of 25.65 km². Geographically, Sumber District is bordered to the west by Dukupuntang District, to the north by Weru and Plumbon Districts, to the east by Talun District, and to the south by Kuningan Regency. The graph in Figure 3 compares the number of customers over the years 2017-2019 under 24-hour service; the data run from January 2017 to August 2019 and cover a total of 152 connection units. Reading this chart facilitates the analysis of flow during peak hours versus water usage during normal hours, so a field survey of the 24 (twenty-four) hour clean water service in the housing area was carried out. The graph in Figure 4 compares the level of water consumption between 2017 and 2019 under 24-hour service. Bearing in mind that the number of customers fluctuated up and down over the period shown in the graph, the level of water consumption is strongly influenced by the number of customers. This makes the graph easy to read when assessing water flow during peak hours or water consumption during normal hours, so field survey data were collected over 24 (twenty-four) hours of clean water service in the Taman Tukmudal Indah residential area.
Domestic Water Needs
Based on the above calculation, the PDAM service level in Sumber District, Cirebon Regency, is 65.05%. This value is still below the government's target for piped water supply services in the urban category, which is 100%, so efforts are needed to improve the service coverage of the drinking water distribution system. Domestic demand here means the clean water needed for domestic purposes, supplied through house connections (SR) [13]. The level of water loss can be expressed as the ratio between the water lost and the amount of water distributed into the piping network. Water loss can be calculated as physical loss minus non-technical loss. High leakage harms the company because of the imbalance between the amount of water distributed and the company's revenue from water sales [14].
Research method
This study used secondary data combined with primary data. Primary data are needed to verify that the data are up to date. Secondary data on water resources are generally issued by the relevant agencies, such as PDAM Tirta Jati Cirebon and the Office of Human Settlements and Spatial Planning. The process of creating a systematic water distribution network relies on the optimal use of GIS and EPANET [15]. The stages of the research are presented in Figure 6. Data collection was carried out by direct measurement of the water pressure at 6 (six) survey points on the pipe bridge. The water pressure was observed and recorded every hour, 24 times per day, with a manometer. In total, 7 (seven) survey points were taken as samples on the main distribution pipeline network, as shown in Figure 7.
Results and Discussion
Measurement of the flow velocity directly in the field at selected points represents the state of the entire network; an EPANET 2.0 model simulation was conducted to determine the accuracy of the measured field data.
Tables 2 and 3 present the results of the pressure measurements with the manometer alongside the simulations with the G-Hydrolic program and EPANET. To show the trend of the simulation results against the field measurements, Figure 8 presents the comparison. From Figure 8 we can see the trends of the water pressure graphs obtained with the G-Hydrolic simulation, EPANET 2.0, and the manometer (the pressure gauge in the field). The black line shows the results of the G-Hydrolic simulation, in which node 182 has a rather high water pressure of 124.9 meters, while the black dotted line is the trend of the EPANET 2.0 simulation model describing the node water pressures. There are differences between the two simulation models; between G-Hydrolic and EPANET 2.0 the largest difference, 7.47 m, occurs at node j18. The manometer readings in the field, taken at nodes 50, 98, 107, 182, 274, and 284, tend to lie below the values of the G-Hydrolic and EPANET 2.0 simulations. This situation shows that a simulation model of the piping network system serves only as a planning reference. Prevention efforts therefore have to be made by PDAM Cirebon to overcome this problem: various measures such as network modification, the addition of pumps, and others can be alternative solutions. However, pipe replacement within a design modification needs to be planned economically, and all aspects that affect cost should be considered as a whole. Factors that influence the difference between the EPANET model simulation and the direct measurements on the transmission pipeline are: the effect of the pipe type, which influences the Hazen-Williams coefficient; possible leakage in the transmission pipe, which lowers the water pressure at the time of the pressure measurement (whether the leakage is small or fairly large); and the effect of the air valves installed on the pipe bridge, whose openings have become smaller so that the release of air from the pipe is not optimal; the regulating valve therefore needs to be set much smaller than the available valve-regulation data would suggest. Pressure is one of the factors underlying community satisfaction with the PDAM service. In a piping system we distinguish static pressure and dynamic (hydraulic) pressure: static pressure is the pressure when the water is not flowing, and dynamic pressure is the pressure when the liquid is flowing [16]. Water is supplied to consumers through transmission and distribution pipes designed to serve customers up to the farthest location, with a minimum water pressure of 10 MKA (meters of water column) or 1 atm. In distributing water, to reach all service areas and maximize the level of service, the residual water pressure must be observed: the lowest residual water pressure is 5 MKA or 0.5 atm (1 atm = 10 m) and the highest is 8 atm, equivalent to 80 m [17]. The flow velocity in pipes is also limited to certain values (see Figure 9, which presents the flow rates). Flow velocities that are too high can cause scouring of the pipe surface, while very low velocities can cause deposition in the pipe, so the flow velocity limits for the pipe are as follows: maximum velocity = 2-3 m/s; minimum velocity = 0.3 m/s.
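The same EPANET engine used in the paper can also be driven from a script; a minimal sketch using the open-source WNTR package (a third-party Python wrapper around EPANET), where the .inp filename is a placeholder and the limits are the pressure/velocity bounds quoted above:

```python
import wntr

# Load the EPANET input file exported from QGIS (filename assumed).
wn = wntr.network.WaterNetworkModel("tirta_jati_transmission.inp")

sim = wntr.sim.EpanetSimulator(wn)
results = sim.run_sim()

pressure = results.node["pressure"]   # DataFrame: time x node, in m
velocity = results.link["velocity"]   # DataFrame: time x pipe, in m/s

# Flag nodes/pipes outside the limits discussed in the text.
low_p = pressure.min()[pressure.min() < 5.0]    # below 5 m water column
fast = velocity.max()[velocity.max() > 3.0]     # above 3 m/s
slow = velocity.max()[velocity.max() < 0.3]     # even peak flow < 0.3 m/s
print(low_p, fast, slow, sep="\n")
```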
From Figure 9 we can see the trends of the flow velocity graphs obtained with the G-Hydrolic simulation, EPANET 2.0, and the flowmeter (the field flow velocity measurement). The black line shows the results of the G-Hydrolic simulation, in which node 182 has a rather high value, at 124.9 meters of water, while the black dashed line is the trend of the EPANET 2.0 simulation model describing the node values. There is a difference between the two simulation models; between G-Hydrolic and EPANET 2.0 the largest difference, 0.15 m/s, occurs at node j98. The flowmeter readings in the field, taken at nodes 50, 98, 107, 182, 274, and 284, tend to lie below the values of the G-Hydrolic and EPANET 2.0 simulations.
Conclusion
Direct field measurements of the water pressure at points considered to represent the condition of the entire network should also be validated against the EPANET 2.0 calculation results to obtain results closer to the real situation in the field. The validation of the EPANET 2.0 simulation model was conducted to determine the accuracy of the EPANET 2.0 results against the measured field data; the comparison shows that the EPANET 2.0 calculation results are close to the data measured in the field. The domestic water demand can be calculated from the recorded average monthly water consumption at PDAM Tirta Jati Cirebon: the number of domestic customers/house connections was 2,446 units, with an average usage of 14.22 m³/unit per month in August 2019.
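The quoted customer count and per-unit consumption convert directly to an average flow; a minimal sketch, where the 30-day month and the example peak-hour factor are assumptions:

```python
connections = 2446            # domestic house connections (Aug 2019)
use_m3_per_month = 14.22      # average consumption per connection

monthly_m3 = connections * use_m3_per_month
avg_lps = monthly_m3 * 1000 / (30 * 24 * 3600)   # liters per second
print(f"{monthly_m3:,.0f} m3/month  ->  {avg_lps:.1f} L/s average demand")
# A peak-hour factor (e.g., 1.5-1.75, an illustrative range) would be
# applied on top of this average when sizing the network.
```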
2020-06-18T09:08:22.551Z
2020-06-16T00:00:00.000
{ "year": 2020, "sha1": "05f2ea5443fa3dca7f274c309f7d63b8a84dfd39", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/498/1/012072", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "9d6c71c6ae0a28efdba33202a522cfa21b0fa9d4", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Physics", "Environmental Science" ] }
250453625
pes2o/s2orc
v3-fos-license
YAP1-NUTM1 Gene Fusion in Eccrine Porocarcinoma with Late Metastatic Recurrence: A Case Report
Abstract is missing (Short communication)
Eccrine porocarcinoma (EPC) is a rare malignant neoplasm that arises from the intraepidermal part of the sweat gland duct and frequently manifests as a solitary nodule or plaque on the limbs or the head and neck area in elderly patients (1,2). EPCs exhibit invasive growth and may have high morbidity and mortality potential (1,2). Because of its low incidence, lack of distinct clinical features, variable histomorphological appearance, and absence of specific immunohistochemical methods, EPC diagnosis is often challenging (1-4). Recently, highly recurrent YAP1 and NUTM1 gene rearrangements have been described in cases of poromas and EPCs, highlighting the potential usefulness of immunohistochemical and molecular studies in the diagnosis of these neoplasms (3).
CASE REPORT
Clinical history. An otherwise healthy 51-year-old woman was referred to the department of Dermatology for evaluation of a 3-month history of enlarged and swollen lymph nodes on her right groin. Past medical history included the diagnosis of an EPC on the right popliteal fossa that had been completely excised 15 years previously (at that time, a sentinel lymph node biopsy was not considered), with no signs of locoregional recurrence after a 10-year clinical follow-up period (Fig. 1a-b). Physical examination disclosed several tender erythematous subcutaneous nodules located on the right groin corresponding to enlarged lymph nodes. The rest of the physical examination, including a complete gynaecological and pelvic examination, was unremarkable, and no signs of local recurrence were detected in the postsurgical scar on the right popliteal area.
Histological, immunohistochemical and fluorescence in situ hybridization findings. A core-needle biopsy was obtained, and the observed histopathological features were considered consistent with diffuse infiltration by a moderately differentiated squamous cell carcinoma (SCC). Positron emission tomography-computed tomography ruled out visceral involvement, and no other hypermetabolic foci were detected. An inguinal lymphadenectomy was performed, and histopathological examination showed diffuse lymph node infiltration by large pleomorphic round and oval cells arranged in groups and lobules, with capsular rupture in 2 of the 6 lymph nodes removed (Fig. 1c, d). These malignant cells expressed the immunohistochemical markers cytokeratin AE1/AE3 and epithelial membrane antigen (EMA). YAP (C-terminus) and NUT immunohistochemistry were also performed on both the original EPC cutaneous specimen and the lymph node sample. The same results were observed in both specimens: diffuse strong nuclear positivity for NUT, along with a total loss of YAP1 expression, which was consistent with the presence of an underlying YAP1-NUTM1 translocation (Fig. 1e). Fluorescence in situ hybridization (FISH) analysis confirmed the YAP1 gene rearrangement (Fig. 1f).
Treatment and follow-up. With the diagnosis of surgically resected metastatic EPC, and based on a multidisciplinary tumour board decision, adjuvant radiotherapy of the inguinal region was recommended, and the patient is currently undergoing close clinical and imaging surveillance every 3 months.
DISCUSSION
New molecular pathways involved in the pathogenesis of poroid neoplasms have been described recently (3,5-7).
Thus, cytogenetic translocations involving YAP1, specifically the YAP1-MAML2 and YAP1-NUTM1 fusions, have been identified in approximately 89% of poromas and 64% of EPCs (3). The tumorigenic role of YAP1 fusions might be explained by the activation of transcription factors and the promotion of anchorage-independent growth in epithelial cells (3). Such genomic rearrangements seem to be specific to poromas and EPCs, since YAP1 fusions have not been identified in other skin neoplasms (3). The current patient, to our knowledge, represents the first description of a YAP1-NUTM1 fusion in a metastatic EPC, demonstrating the presence of the genetic rearrangement in both the primary tumour and its metastasis. As in the current patient, the detection of a YAP1-NUTM1 gene fusion in cases of metastatic lymph node involvement from a malignant neoplasm of unknown origin favours the poroid nature of the primary tumour. The demonstration of specific molecular rearrangements in poroid neoplasms also represents an opportunity to use immunohistochemistry as a useful diagnostic tool for these neoplasms. Accordingly, recent studies have postulated that NUT immunohistochemistry might be considered a potential histological marker of poromas and EPCs, since NUT would be overexpressed in YAP1-NUTM1-rearranged tumours (3,5,8). In this sense, it has been shown that this marker could have a high specificity (close to 100%) in the diagnosis of poroid neoplasms, since other skin tumours (including histological mimics of EPC, such as SCC or hidradenocarcinoma) do not express NUT (5,8). Therefore, and given the high degree of concordance between the molecular and immunohistochemical results found in recent investigations (3,5), NUT immunohistochemistry may represent a simpler, faster and more accessible technique than the molecular approach to better characterize EPC cases. EPC represents a cutaneous neoplasm with high rates of extracutaneous spread. It has been demonstrated that 22.3% of EPC cases present as metastatic disease at the time of diagnosis, including regional lymph node (17%), distant (3.9%) and cutaneous metastases (1.5%) (1). Overall survival of patients with metastatic EPC has not been well characterized, but it is probably poor (9). Moreover, the time from EPC diagnosis until the development of metastases is variable, but in most cases it has been estimated to be less than 1 year (1). Nevertheless, the present case demonstrates that late metastases can develop in EPC cases. Therefore, long-term follow-up, with regular skin and lymph node examination, seems advisable for this malignancy. Given its low incidence, little guidance is available in the literature regarding EPC management. For local disease, surgical resection represents the main treatment and may include wide local excision or Mohs micrographic surgery (10). Sentinel lymph node biopsy should also be considered in EPC cases exhibiting high-risk features (1). Systemic treatment of unresectable metastatic EPC has not been established, and several therapeutic regimens have been proposed in different case reports with variable results (radiotherapy, chemotherapy-based regimens, cetuximab, and anti-PD1 agents, among others) (9,11). The identification of YAP1 fusions in EPC cases might also have therapeutic implications, representing a potential therapeutic target for this rare malignancy.
In conclusion, we present here a unique case of an aggressive EPC that developed regional lymph node metastasis 15 years after treatment of the primary tumour, in which the immunohistochemical pattern and FISH results were consistent with a YAP1-NUTM1 rearrangement. A better understanding of the molecular pathways involved in the development of these neoplasms would help not only to improve the characterization and diagnosis of EPC cases, but also to develop targeted therapies in the era of personalized medicine.
2022-07-13T06:16:23.688Z
2022-07-11T00:00:00.000
{ "year": 2022, "sha1": "07e487ca5911a6685d9e18ba75b9b68c1d74b351", "oa_license": "CCBYNC", "oa_url": "https://medicaljournalssweden.se/actadv/article/download/2417/6551", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0b3df9470aa94b3ca1eecbdd9e4860c59ee2e76a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
118836204
pes2o/s2orc
v3-fos-license
K-matter as Mach's principle realization
It is shown that if one takes into account Mach's principle in the form which follows from quantum theory, and considers it as a complementary constraint between the parameters which characterize the energy density and geometry of the universe in addition to the Einstein equations for a FRW universe, non-relativistic matter transforms into an analogue of K-matter. The exact solutions of the Einstein equations for a universe with such matter and a cosmological constant are found. It is demonstrated that the Machian universe under consideration with a nonzero cosmological constant is equivalent to the open de Sitter universe. In the limit of zero cosmological constant such a universe evolves as a Milne universe, but in contrast to it, it contains matter with nonzero energy density. The possible application of the proposed approach to the description of the present cosmological data is discussed. The problem of the age of the universe is considered as an example.
Introduction
As is well known, the standard ΛCDM model gives a satisfactory description of most of the present cosmological data under the assumption of the existence of dark energy as the largest constituent of mass-energy in the universe. It is believed that a high level of fine-tuning is required in this model. Even if the smallness of the cosmological constant and the "coincidence problem" (an almost equal contribution of matter and dark energy to the total energy budget of the universe at the present epoch) are not problems in themselves [1], one must nevertheless take into account the phenomenological character of the ΛCDM model as regards the choice of the form of the energy density and the equation of state. It should not be ignored that there have been some indications that specific cosmological observations differ from the predictions of the ΛCDM model at a statistically significant level [2]. This makes the search for alternative models not as unreasonable as it might seem. It has been noticed that models (called "coasting cosmology", since in such models the universe expands with constant velocity [3]) in which the scale factor R of the universe depends on the synchronous proper time t linearly, R(t) ∼ t, agree well enough with the cosmological observations [4,5]. In particular, this approach does not suffer from the horizon problem. It is in concordance with the early-universe nucleosynthesis constraints and fits type Ia supernovae data. One can expect that such a scenario will agree with estimates of the age of the universe in comparison with the ages of old objects and provide the degree scale for the first acoustic peak of the cosmic microwave background. It is compatible with constraints from large-scale structure formation and agrees with the physics of recombination as deduced from cosmic microwave background anisotropy. There are also expectations that the primordial lithium problem (a discrepancy between the primordial lithium predicted from the WMAP data and the stellar abundance determinations) [6] might be resolved under the assumption of a linear evolution of the scale factor [4,5]. The linear dependence of the scale factor on time can be associated with the Milne model of the universe. This model is based on the assumptions that the universe is open (k = −1) and that the gravitational action of matter can be neglected (the energy density ρ = 0).
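The linear growth R(t) ∼ t can be checked numerically; a minimal sketch integrating the open-universe Friedmann equation with an energy density falling as R⁻² (the K-matter behaviour discussed below), in units where 8πG/3 = c = 1 and with an arbitrary illustrative constant A:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Friedmann equation (units 8*pi*G/3 = c = 1, open universe k = -1):
#   dR/dt = sqrt(rho(R) * R**2 + 1),   rho = A / R**2.
# The right-hand side is then the constant sqrt(A + 1), so R(t) must
# grow linearly, as in a Milne universe (A = 0) but with nonzero
# energy density.  A = 0.5 is an arbitrary illustrative value.
A = 0.5

def rhs(t, y):
    R = y[0]
    rho = A / R**2
    return [np.sqrt(rho * R**2 + 1.0)]

sol = solve_ivp(rhs, (1e-8, 10.0), [1e-8], dense_output=True, rtol=1e-10)
for t in (1.0, 5.0, 10.0):
    R = sol.sol(t)[0]
    print(f"t = {t:5.1f}   R = {R:8.4f}   R/t = {R/t:.4f}")  # constant
```

The printed ratio R/t stays at sqrt(A + 1) ≈ 1.2247 throughout, confirming the coasting behaviour.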
It cannot be correct near the point of the initial cosmological singularity, t = 0, since in this limit the energy density of matter tends to infinity and gravity cannot be neglected. One attempt to settle this problem was to consider a model of a universe (called the "Dirac-Milne" universe, by analogy with the sea of positive- and negative-energy states proposed by Dirac) containing equal quantities of matter and antimatter, under the assumption that antimatter is characterized by a negative gravitational mass [5]. The linear dependence of the scale factor R on time can be achieved in cosmological models in which the energy density ρ ∼ R⁻² and the total pressure p = −(1/3)ρ. Such "coasting cosmologies" describe a universe dominated by exotic "K-matter", which may be related to cosmic strings [3,7]. In the present paper we show that if one takes into account an additional constraint between the parameters which characterize the energy density and geometry of the universe, in addition to the Einstein equations for a Friedmann-Robertson-Walker (FRW) universe, non-relativistic matter with the energy density evolving as ρ ∼ R⁻³ transforms into an analogue of K-matter. This constraint can be interpreted as Mach's principle [8] in Sciama's formulation [9,10]. Introduced to explain the inertial forces acting on a body via the quantity and distribution of matter in the whole universe, Mach's principle nowadays has many definitions [11,12]. Despite its simplified character, Sciama's linearized theory gives a specific mathematical relation between the parameters of the universe instead of a general statement of Mach's principle.

Quantum roots of Sciama's relation

Sciama's relation obtains a natural explanation in the framework of a quantum isotropic cosmological model [13,14]. Generally speaking, quantum theory adequately describes the properties of various physical systems. Its universal validity demands that the universe as a whole must obey quantum laws as well. Since quantum effects are not a priori restricted to certain scales, one should not conclude in advance that they cannot have any impact on processes at scales larger than the Planckian one (more detailed arguments can be found, e.g., in Refs. [15]). Quantum theory for a homogeneous and isotropic universe can be constructed on the basis of a Hamiltonian formalism with the use of a material reference system as a dynamical system [13,14]. Defining the time parameter, or the "clock" variable, it is possible to pass from the Wheeler-DeWitt equation to a Schrödinger-type equation. Similar equations containing a time variable defined by means of a coordinate condition were considered by a number of authors in the quantization of the FRW universe (see, e.g., Refs. [16]). Using the Schrödinger-type equation one can obtain equations of motion for the expectation values of the scale factor and its conjugate momentum. These equations pass into the equations of general relativity when the dispersion around the expectation values for the scale factor, matter fields and their conjugate momenta can be neglected. Under this approach, in the semi-classical limit, the equations of the theory are reduced to the form of the Einstein equations for the FRW universe [14]. Such a quantum theory predicts that the following relation must hold for the expectation value of the scale factor R in the state |M⟩ which describes the universe with a definite total amount of mass M much larger than the Planck mass, M ≫ M_P:

⟨M|R|M⟩ = GM,   (1)

where G is the Newtonian gravitational constant and units with c = 1 are used (for details, see Refs. [14]).
In the classical limit, it appears to be possible to pass from the expectation value ⟨M|R|M⟩ to the classical value of the scale factor R(t), which evolves in time in accordance with the Einstein equations for the FRW universe,

Ṙ² = (8πG/3) ρR² + (Λ/3) R² − k,   R̈ = −(4πG/3)(ρ + 3p) R + (Λ/3) R,   (2)

where

ρ = 3M / (4πR³)   (3)

is the energy density of matter with the mass M in the equivalent flat-space volume (4π/3)R³, Λ is the cosmological constant,

p = −(1/(4πR²)) dM/dR   (4)

is the isotropic pressure, and k = +1, 0, −1 for spatially closed, flat or open models. In the semi-classical limit, the relation (1) takes the form of Sciama's inertial force law, which describes Mach's principle [9,10]:

GM/R = 1.   (5)

The same equality between the mass and "radius" of the universe was considered by Whitrow and Randall [17]. It is also similar to the relation valid for the Einstein universe (see, e.g., Ref. [18]). For the present-day universe the radius of its observed part is estimated as R₀ ∼ 10²⁸ cm ∼ 10⁶¹ (in units of the Planck length l_P ∼ 10⁻³³ cm), the mass-energy is M₀ ∼ 10⁵⁶ g ∼ 10⁸⁰ GeV ∼ 10⁶¹ (in units of the Planck mass m_P ∼ 10¹⁹ GeV), and the mean energy density is ρ₀ ∼ 10⁻²⁹ g cm⁻³ ∼ 10⁻¹²² (in units of the Planck energy density ρ_P ∼ 10⁹³ g cm⁻³). This means that nowadays ρ₀ ∼ G⁻¹R₀⁻². Then, from the definition of the energy density, ρ₀ ∼ M₀R₀⁻³, it follows that the relation R₀ ∼ GM₀ must hold. The same conclusion can be made from the exact equation (5). Since this equation must be true for an arbitrarily chosen instant of time t, there arises the problem of mass increase, as interpreted from the point of view of classical cosmology. Namely, it follows that the total mass increases proportionally to the scale factor, M(t) ∼ R(t), if the gravitational constant G and the velocity of light c are both constant. This difficulty can be resolved, in particular, if one supposes that the natural constants G or c change with time. The questions arising in connection with these problems have been discussed in different frameworks and for different purposes. According to Dirac's large number hypothesis, the Newtonian constant G must depend on time, so that G ∼ t⁻¹ and R ∼ t^(1/3) [19], or G ∼ t⁻¹ and R ∼ t [20]. In the Brans-Dicke theory the constant G is related to the average value of some dynamical scalar field φ which is coupled to the mass density ρ of the universe, φ ≈ G⁻¹, where φ ∼ ρR² [21,22]. Models with a varying speed of light have been applied in order to solve the horizon, flatness, cosmological constant, and other cosmological problems (see, e.g., Refs. [23]). Matter creation processes in the context of cosmological models and their influence on the evolution of the universe were studied in Refs. [24]. If we go back and consider equation (5) as following from the relation (1), then we can interpret it in terms of quantum theory. In the quantum model the state vector of the isotropic universe is a superposition of all possible |M⟩-states, which are not orthogonal to one another, so that the inner product ⟨M₁|M₂⟩ ≠ 0, and the universe can transit spontaneously from the state with the mass M₁ to the state with the mass M₂ ≠ M₁ with the nonzero probability P(1 → 2) = |⟨M₁|M₂⟩|². For example, the probability of a transition of the universe from the ground state (with respect to the gravitational field) to any other state obeys the Poisson distribution with the mean number of occurrences n̄ = (1/2)(M₂ − M₁)² (for more details, see Refs. [14]). Then R₁ → R₂ when M₁ → M₂.
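The order-of-magnitude estimates above are straightforward to verify. The short Python sketch below is a minimal check, assuming only the rounded values quoted in the text (R₀ ∼ 10²⁸ cm, M₀ ∼ 10⁵⁶ g) and standard CGS constants; it confirms that GM₀/c² is of order R₀ and that both quantities are ∼ 10⁶¹ in Planck units:

```python
# Order-of-magnitude check of the Machian relation R0 ~ G*M0/c^2,
# using the rounded present-day values quoted above (CGS units).
G = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2
c = 2.998e10       # speed of light, cm/s
M0 = 1e56          # g, rounded mass-energy of the observed universe
R0 = 1e28          # cm, rounded radius of the observed part

l_P = 1.616e-33    # cm, Planck length
m_P = 2.176e-5     # g,  Planck mass

print(G * M0 / c**2)   # ~7e27 cm: the same order as R0
print(R0 / l_P)        # ~6e60: R0 ~ 10^61 in Planck units
print(M0 / m_P)        # ~5e60: M0 ~ 10^61 in Planck units
```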
If one tried to interpret this result in terms of Newtonian cosmology, describing the universe as a flat Euclidean 3-space filled with uniform matter with the energy density ρ(t) (3), such a transition would correspond to the passage to a sphere of radius R₂ > R₁ which includes a mass M₂ > M₁.

FRW equations with Mach's principle

If one assumes that Mach's principle is a fundamental law of nature, it must be implemented into the classical field equations. One point of view is that Einstein's field equations need not be modified, while Mach's principle should be considered as an additional condition. Such an approach was chosen by Wheeler, who proposed to understand Mach's principle as a selection rule (boundary condition) for the solutions of the field equations [25]. The Brans-Dicke theory mentioned above uses another way, in which the field equations are generalized to become Machian [21,26]. Since in our approach Mach's principle in the form (5) follows from quantum theory in the semi-classical limit, under a classical description it can be introduced as an additional constraint and added to the classical field equations (2). With account of the constraint (5), the energy density of matter (3) takes the form of the K-matter energy density with the corresponding equation of state,

ρ = 3 / (4πG R²),   p = −(1/3)ρ.   (6)

According to the common classification (see, e.g., Ref. [27]), matter with such an equation of state can be attributed to strings, since it naturally appears in string cosmology. But in this approach it does not mean that the universe is string-dominated. The energy density and pressure in the form (6) arise as an effect of an additional constraint between the global geometry and the total amount of matter in the universe as a whole. The field equations are reduced to the form

Ṙ² = (2 − k) + (Λ/3) R².   (7)

Their solution is

R(t) = √(3(2 − k)/Λ) sinh(√(Λ/3) t).   (8)

Expansion of this solution for small |Λt²/3| yields

R(t) ≈ √(2 − k) t (1 + Λt²/18 + ...).   (9)

From (8) the Hubble expansion rate H ≡ Ṙ/R is

H = √(Λ/3) coth(√(Λ/3) t),   (10)

and one obtains the expansion in the same limit

H ≈ (1/t)(1 + Λt²/9 + ...).   (11)

If Λ ≠ 0, the expressions for the scale factor (8) and the Hubble expansion rate (10) are equivalent to the respective expressions for the de Sitter model of the universe with k = −1. In the limiting case Λ = 0 it appears that

R(t) = √(2 − k) t.   (12)

This solution formally coincides with the solution of the Milne model of the open universe (k = −1), R(t) ∼ t. But in contrast to the Milne model, where the energy density of matter vanishes, ρ = 0, in the case under consideration the energy density of matter is nonzero,

ρ = 3 / (4πG(2 − k) t²).   (13)

For a spatially flat universe (k = 0) this density equals the critical density, ρ = ρ_c ≡ 3H²/(8πG). Equation (13) can be rewritten in the Whitrow-Randall form [17], Gρt² = 3/(4πn), i.e. Gρt² is an invariant determined by the parameter n = 2 − k characterizing the geometry of the universe. Introducing a dimensionless parameter K as in the model of K-matter, ρ ≡ 3K/(8πGR²), and using (6), one finds that K = 2. This value agrees with the observational constraints on the parameter K obtained by Kolb [3] and Gott and Rees [7]. Calculations with the parameters of the standard ΛCDM model give the same value of H₀t₀ for the present-day universe as follows from Eq. (12). Indeed, using the WMAP 7-year data [28] for the age of the universe, t₀ = 13.75 ± 0.13 Gyr, and the Hubble parameter, H₀ = 71.0 ± 2.5 km s⁻¹ Mpc⁻¹, one finds H₀t₀ = 0.998 ± 0.045. At the same time, substituting the cosmological constant Λ = (1.302 ± 0.143) × 10⁻⁵⁶ cm⁻², which corresponds to the dark energy density parameter Ω_Λ = 0.734 ± 0.029 [28], into Eq. (10) with the corresponding age of the universe t₀, one gets a somewhat excessive value: H₀t₀ = 1.233 ± 0.029.
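The two H₀t₀ figures quoted above are easy to reproduce. The sketch below is a minimal Python check; the unit conversions (1 Mpc = 3.0857 × 10²⁴ cm, 1 Gyr = 3.156 × 10¹⁶ s) are standard values assumed here, not taken from the paper:

```python
import math

Mpc, Gyr, c = 3.0857e24, 3.156e16, 2.998e10   # cm, s, cm/s

H0 = 71.0e5 / Mpc        # WMAP7 Hubble parameter, s^-1
t0 = 13.75 * Gyr         # WMAP7 age of the universe, s
print(H0 * t0)           # ~0.998, consistent with H*t = 1 from Eq. (12)

# Eq. (10) gives H*t = x*coth(x) with x = sqrt(Lambda/3)*c*t; evaluate it
# for the WMAP7 cosmological constant:
Lam = 1.302e-56          # cm^-2
x = math.sqrt(Lam / 3.0) * c * t0
print(x / math.tanh(x))  # ~1.233, the "somewhat excessive" value
```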
It is necessary to keep in mind, of course, that the use of the values of the parameters of the ΛCDM model in these estimations of H₀t₀ has only an illustrative character, since Eqs. (8)-(11) were obtained under different model assumptions. In the model where the scale factor depends on time linearly (12), the age of the universe and the Hubble expansion rate depend on the redshift z according to the simple laws t(z) = t₀/(1 + z) and H(z) = H₀(1 + z). For the present expansion rate measured by Hubble Space Telescope observations, H₀ = 73.8 ± 2.4 km s⁻¹ Mpc⁻¹ [29], the age of the universe appears to be equal to t₀ = 13.26 ± 0.43 Gyr. This value does not differ drastically from the value predicted by the WMAP 7-year data for the ΛCDM model, and it lies within the expected limit of 12 to 14 Gyr.

Concluding remarks

In the coasting cosmological models considered without reference to Mach's principle or matter creation, the existence of a specific form of matter is assumed: either K-matter, with an energy density which decreases under expansion as R⁻², or matter whose energy density can be neglected in the open universe (the Milne universe). The incorporation of Mach's principle into the theory does not change the physical properties of matter itself (such as a perfect fluid in the form of dust with the corresponding equation of state), but it takes into account the constraint which reflects the collective behavior of matter in the universe considered as a whole. The local properties of the matter are not affected by Mach's principle. The horizon problem, the luminosity distance-redshift relation, the angular diameter distance-redshift relation, and the galaxy number count as a function of redshift in the model of the FRW universe with energy density ρ ∼ R⁻² were studied by Kolb [3]. In the case of a K-dominated universe, kinematic tests limit the parameter K to K ≳ 1. In the model which takes into account Mach's principle in the form (5), the universe behaves as K-dominated with the parameter K = 2, which agrees with the analysis of Refs. [3,7]. There is some indication that in a cosmological model where the scale factor depends linearly on time, the light element abundances and the position of the first acoustic peak of the CMB can be satisfactorily described [4,5]. From the analysis of type Ia supernovae discovered by the Supernova Cosmology Project, it follows that the data are consistent with the model in which the mass density and cosmological-constant energy density vanish, (Ω_M, Ω_Λ) = (0, 0) [30]. This means that the model characterized by a linear dependence of the scale factor on time agrees well with the SNe Ia observations [4]. It was shown that the accelerating expansion of the present-day universe extracted from the observed luminosity of type Ia supernovae can be explained by a theory which takes into account the feedback coupling between geometry and matter (Mach's principle) [31]. In a model which accounts for Mach's principle, an assumption of large amounts of dark energy in the universe is not required to explain the cosmological observations. The cosmological model with the scale factor R which evolves in time according to equation (9) with a zero or small cosmological constant can be a good alternative to the standard cosmological model.
2011-07-14T09:18:43.000Z
2011-07-14T00:00:00.000
{ "year": 2011, "sha1": "319cb40a8f6204da8fe636688022f9eacdc012be", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "319cb40a8f6204da8fe636688022f9eacdc012be", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
237385602
pes2o/s2orc
v3-fos-license
Growth-enhanced Performance by Pleurotus ostreatus Cultivated on Salon Effluent and Spent Calcium-carbide Amended Substrates

Aims: To investigate the growth response of Pleurotus ostreatus, a wood-rotting fungus, to different growth substrates [sawdust (SD), dry banana leaves (BL) and a combination of both BL and SD (BLSD)] amended with waste [salon effluent (SE) and spent calcium-carbide (SC)].

Place and duration of study: Department of Plant and Ecological Studies, University of Calabar, Cross River State, Nigeria, between May 2015 and August 2015.

Methodology: Amendments were applied to growth substrates at different levels of concentration as follows: 0 ml and 0 g, 5 ml and 5 g, 10 ml and 10 g, 15 ml and 15 g per kg substrate. Mature mushrooms were harvested and assessed on the following parameters: number of fruitbodies, fresh weight, dry weight, stipe length, stipe girth and pileus area, using conventional methods.

Results: Number of fruitbodies, fresh weight, dry weight and stipe length increased with increasing concentration of the additives. The best performances of these growth parameters were obtained at the 15 g/kg and 15 ml/kg concentrations. The highest number of fruitbodies (with a peak mean value of 28.42 fruitbodies at the 15 g/kg concentration) and the highest values of fresh weight and dry weight were observed in SD. The longest stipe length, largest stipe girth and largest pileus area were observed in BLSD, though it exhibited the poorest performance in the other growth parameters. BLSD amended with salon effluent produced mushrooms with the largest pileus area (with a peak mean value of 53.8 cm² at the 15 ml/kg concentration) compared to the other substrates.

Conclusion: This study reveals that all growth parameters of P. ostreatus assessed were positively influenced by all the levels of amendments on the substrates used in this study. Therefore, these wastes could be used to increase the yield of P. ostreatus and possibly remediate sites polluted by these wastes.

INTRODUCTION

Effective waste management is one of the greatest challenges facing humanity and the planet. The ongoing trend of industrialization and economic growth has resulted in increased waste production, especially in cities with high populations [1]. It is on record that, of the wastes produced the world over, the majority result from industrial, agricultural and domestic activities. Concerns exist worldwide about the sound management of these wastes. If they are allowed to accumulate they become sources of severe environmental pollution [2]. There is a dire need for an adequate and safe waste management system that is inexpensive, easy to use, requires no after-storage liabilities for secondary waste, and even allows the treated materials to be reused thereafter [1]. The nutritional and medicinal values of mushrooms have long been known. Modern researchers, however, have found that particular strains of fungi may be used to manage waste and remediate certain polluted sites. This rising curiosity in the use of mushrooms for waste management stems from the numerous advantages they have over commercialized remediation technologies [3,4]. Many mushroom species across the globe have been used to rid the environment of wastes generated from industrial, household and agricultural activities [4-10].
Pleurotus, commonly referred to as the oyster mushroom, is a widespread edible mushroom, said to have been first domesticated in Germany for basic consumption at the time of the First World War. Today, however, it is grown as food for marketing around the world and is listed as the third most widely domesticated mushroom in the world [4]. The increasing interest in the cultivation and consumption of Pleurotus ostreatus is due to its easy and cheap production technology and higher biological efficiency. This mushroom has been cultivated on a lot of agricultural wastes and has also been reported to be of use for mycoremediation purposes. Pleurotus has been cultivated on diverse agro-wastes including cotton stalks, groundnut haulms, soybean straw, pigeon pea stalks and leaves, wheat straw, paddy straw, rice straw, sawdust, dry banana leaves, cereal straw, grass straw, cotton waste, corn cobs, sugarcane or sorghum bagasse, coffee waste, groundnut haulms, vinegar wastes, winery wastes, banana peel, palm kernel cake and groundnut cake, and animal dung [5-7, 9, 11-17]. All agro-wastes evaluated supported the growth of the mushroom, though with varying degrees of efficiency, thereby ensuring the production of healthy and safe food from waste for the population. Straw-bedded horse manure is said to be the most used substrate for cultivating mushrooms in Europe as well as in the USA and Canada [18]. Mandeel et al. [19] reported the use of untreated organic wastes such as chopped office papers, cardboard, sawdust and plant fibres for mushroom cultivation in Bahrain. Atikpo et al. [20] reported the use of fresh fish waste, cooked fish waste, sawdust from Tryplochyton scleroxylon and rice bran as substrates in the cultivation of Pleurotus ostreatus in Ghana. Mushrooms have also been extensively employed in the management of industrial waste, especially in soils contaminated with crude oil and spent engine oil. There have been several reports on the use of mushrooms to clean up oil spills, which have become a regular occurrence. For example, several fungi like Pleurotus ostreatus, Lentinus subnudus and Pleurotus tuber-regium have been used to decontaminate soils contaminated with petroleum hydrocarbons [21] [3] [10]. Ogbo and Okhuoya [8] showed that P. tuber-regium was able to decontaminate crude oil-contaminated soils, reducing the various petroleum hydrocarbons in crude oil to varying degrees. It was said that the contaminants improved the growth of the mushroom. In another experiment, Adenipekun (2008) noted that Pleurotus tuber-regium is able to increase nutrient contents in soils polluted with 1-40% engine-oil concentrations after six months of incubation. Solid sludge and effluent of both cardboard and handmade paper industries were composted for developing a mushroom-growing method to accomplish zero waste discharge. Results from this study revealed that when 50% paper industry waste was mixed with 50% (w/w) wheat straw, there was a significant increase (96.38%) in biological efficiency compared with what was obtained on wheat straw alone [22]. However, industrial waste is not limited to oil spillage alone. Calcium carbide is a chemical with the formula CaC₂. It is used industrially in the production of acetylene and calcium cyanamide. Acetylene, with the formula C₂H₂, is a very valuable hydrocarbon owing to the energy that is confined in the triple bond between its carbon atoms. This energy is used in different ways for an array of industrial purposes [23].
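For context, the hydrolysis that generates acetylene from calcium carbide, and leaves behind the calcium hydroxide-rich residue referred to in this study as "spent carbide", is the standard textbook reaction below (quoted here for the reader's convenience, not taken from the authors):

```latex
\mathrm{CaC_2 + 2\,H_2O \longrightarrow C_2H_2\uparrow + Ca(OH)_2}
```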
Acetylene gas is mostly used as a cylinder gas for metal construction. The gas is used to cut or bond various types of metals, clean them of surface deficiencies, and strengthen them by means of flame hardening. However, calcium carbide has a characteristic smell which is unpleasant to some. Spent calcium carbide also becomes a serious environmental pollutant if it is not treated properly. Calcium carbide is a rich source of the nitrification inhibitor acetylene and of the plant hormone ethylene [23-26]. Many research works support the use of CaC₂ as an effective inhibitor of the oxidation of NH₄⁺ into NO₃⁻, which results in increased availability of nitrogen to plants, thereby improving the tolerance, growth and yield of crops [27-30]. Hair relaxer is used worldwide for hair care and beautification. However, used hair-relaxer water can become a source of air and land pollution if not properly disposed of. A careful observation of the ingredients of most relaxers reveals their high nutritional content. Most hair-relaxer ingredients include shea butter, olive oil, mink oil, tea-tree oil, jojoba oil, milk protein, egg protein, vitamin E, honey and silk amino acids. These ingredients have a high nutrient content and hence may be a good supplement for mushroom cultivation. In the quest for an effective but low-cost waste management system, this research utilized two agro-wastes (sawdust and dry banana leaves) and two industrial wastes (spent carbide and salon effluent) in the cultivation of Pleurotus ostreatus.

MATERIALS AND METHODS

Location of Study

This research was carried out at the University of Calabar staff quarters, University of Calabar, Cross River State. The University of Calabar lies between latitude 4° 57′ 0″ North and longitude 8° 19′ 0″ East [5].

Source of Materials

The additives (spent carbide and hair-relaxer water) were sourced within Cross River State. The spent carbide was obtained from an automobile workshop at Charmley Street, and the salon effluent was obtained from a hairdressing salon at Eyo-ita Street, both in Calabar, Cross River State. Materials for substrate composition (dry banana leaves and sawdust) were obtained from the University of Calabar farms and a government-owned timber market at MCC Road, Calabar, respectively. The spawn of P. ostreatus was obtained from Royal Farms at Ikot-Effanga, Calabar. Rice bran was obtained from a rice mill in Ugep, Yakurr L.G.A. of Cross River State, and lime (gypsum) was purchased from Watt Market in Calabar.

Spawning

The pasteurized substrates were allowed to cool for 12 hours; then each substrate was divided into 1.0 kg portions and each of these portions was treated with 0 g, 5 g, 10 g or 15 g of spent carbide, or 0 ml, 5 ml, 10 ml or 15 ml of salon effluent, before being dispensed into plastic bags measuring 30 cm × 12 cm. Each of these bags was inoculated with spawn of Pleurotus ostreatus. The open bags were secured with PVC pipes 2 cm in diameter and length, wrapped with rubber bands and plugged with cotton wool [4,6,7]. Each experimental unit was replicated thrice.

Spawn Running

The bags were hung with ropes serially from the roof down, with each line carrying a maximum of eight bags. Room temperature was in the range of 25 °C to 30 °C, and relative humidity between 60% and 75% was achieved by spraying the compost bags and the walls of the spawn-running room 2 to 3 times daily with clean water; light penetration was limited [31,6,7].
Cropping and Harvesting

At the end of the spawn run (when the mycelium had completely colonized the substrate) the bags were moved to the cropping room and cropping was carried out as described by Markson et al. (2017a). Mature mushrooms (mushrooms with a fully opened pileus) were harvested and assessed on the following parameters: number of fruitbodies, fresh weight, dry weight, stipe length, stipe girth and pileus area, using conventional methods [11,4].

Data Analysis

The experimental design was a completely randomized design (CRD) with three replicates. Data collected were analyzed using SPSS version 21.0. Means were separated using Fisher's Least Significant Difference (LSD) test.
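For readers who wish to reproduce this kind of means separation outside SPSS, the sketch below runs a one-way ANOVA followed by Fisher's LSD in Python. It is a minimal illustration under stated assumptions: the yield values are invented placeholders rather than data from this study, and the three replicates per treatment simply mirror the CRD described above.

```python
import numpy as np
from scipy import stats

# Hypothetical fresh-weight yields (g) for three amendment levels,
# three replicates each, mimicking the CRD layout described above.
groups = {
    "0 g/kg":  np.array([31.2, 28.9, 30.4]),
    "10 g/kg": np.array([42.7, 45.1, 40.8]),
    "15 g/kg": np.array([52.3, 55.0, 54.6]),
}

data = list(groups.values())
k = len(data)                      # number of treatments
N = sum(len(g) for g in data)      # total observations
fstat, pval = stats.f_oneway(*data)
print(f"ANOVA: F = {fstat:.2f}, p = {pval:.4f}")

# Pooled within-group mean square (the ANOVA error term).
mse = sum(((g - g.mean()) ** 2).sum() for g in data) / (N - k)

# Fisher's LSD at alpha = 0.05: two means differ if their absolute
# difference exceeds t_crit * sqrt(MSE * (1/n1 + 1/n2)).
t_crit = stats.t.ppf(1 - 0.05 / 2, df=N - k)
names = list(groups)
for i in range(k):
    for j in range(i + 1, k):
        n1, n2 = len(data[i]), len(data[j])
        lsd = t_crit * np.sqrt(mse * (1 / n1 + 1 / n2))
        diff = abs(data[i].mean() - data[j].mean())
        verdict = "differ" if diff > lsd else "do not differ"
        print(f"{names[i]} vs {names[j]}: |diff| = {diff:.2f}, "
              f"LSD = {lsd:.2f} -> means {verdict} at P <= 0.05")
```

Fisher's LSD is only "protected" when the overall ANOVA is significant, which is why the F-test is printed before the pairwise comparisons.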
RESULTS AND DISCUSSION

All growth parameters were positively influenced by the treatments. The number of fruitbodies produced by spent carbide-amended substrates was significantly higher (P≤0.05) than that produced by salon effluent in all the substrates, across all the flushes (Table 1). Fig. 1 reveals that the highest numbers of fruitbodies were obtained at the 15 g/kg and 15 ml/kg concentrations in SD and BL, whereas in BLSD the numbers of fruitbodies obtained at the 10 and 15 (g/kg and ml/kg) concentrations were comparable (P≤0.05). Fresh and dry weights of fruitbodies produced on substrates treated with spent carbide were significantly higher (with peak mean values of 54.63 g and 6.45 g respectively, both on BLSD) than those produced on substrates treated with salon effluent (Tables 2 and 3). The least value of 0.00 g was observed in BL and BLSD substrates treated with salon effluent. Figs. 2 and 3 show that the best performances of fresh and dry weights were observed at 15 (g/kg and ml/kg), followed by 10 (g/kg and ml/kg). Mean values for stipe length, dry weight and pileus area on BLSD and SD treated with salon effluent were significantly higher (P≤0.05) than those produced on BLSD and SD treated with spent carbide in the first four flushes (Tables 4-6). However, the reverse was observed in BL, where the mean values for stipe length, dry weight and pileus area of mushrooms produced with spent carbide were significantly higher (P≤0.05) than those produced on BL treated with salon effluent across all the flushes. Figs. 4-6 indicate that the best performances of stipe length, stipe girth and pileus area for BL and BLSD were observed at 15 (g/kg and ml/kg), whereas for SD substrates stipe girth and pileus area were significantly higher (P≤0.05) at 10 (g/kg and ml/kg) but stipe length was significantly higher (P≤0.05) at 15 (g/kg and ml/kg). The number of fruitbodies produced following each treatment in all substrates was concentration-dependent. This explains the higher number of fruitbodies obtained at the 15 (g or ml per kg) concentration in the first three flushes, whereas in the fourth and fifth flushes the numbers of fruitbodies at concentrations 10 and 15 were comparable (P≤0.05), suggesting that at higher flushes the nutrient level required for the development of the mushroom fruitbodies declines with the declining growth vigor of the mycelium. The least number of fruitbodies was produced at the fourth and fifth flushes. This is a clear result of nutrient depletion from the substrates. From this observation, it is expedient to state that the number of mushroom flushes obtained in any mushroom culture at any time depends not only on the substrate type but also on the nutrient status of the substrate. SD substrates gave a significantly (P≤0.05) higher number of fruitbodies at the 10 and 15 concentrations compared to BL and BLSD substrates, indicating that the compatibility between the treatment and the substrate is higher in the SD substrate than in BL and BLSD. Comparing the effects of the two additives, spent carbide-amended substrates produced mushrooms with a significantly (P≤0.05) higher number of fruitbodies across all the flushes and in all substrates. This may imply that the quality or type of nutrient component necessary for mushroom growth is higher in spent carbide than in salon effluent and that certain components of carbide are very essential for mushroom growth. Calcium carbide (CaC₂) is an effective nitrification inhibitor [26,32,33]. A report by Banerjee et al. [34] asserted that CaC₂ inhibits Nitrosomonas activity and hence prolongs the stay of N in soil as the NH₄⁺ ion, which keeps nitrogen available for organisms to use. The work of many researchers also supports the use of CaC₂ as an effective inhibitor of the oxidation of NH₄⁺ into NO₃⁻ under both flooded and non-flooded soil conditions [27,29,30]. Oei [35] reported that for mushroom cultivation a nitrogen source such as rice bran should be supplemented in the substrate. The nitrogen is usually converted to ammonium nitrate, which is available for use by the mushrooms for growth and development. The fresh and dry weights of the fruitbodies produced by Pleurotus ostreatus followed the same pattern, with the highest values at the 15 g/kg or ml/kg concentration across the flushes. The trend was similar to the number of fruitbodies produced, which implies that the availability of more nutrients at the higher treatment levels (10 and 15 g/kg) promoted better growth, hence the higher dry weight, fresh weight and number of fruitbodies recorded. Stipe length was longest at the 15 g/kg or ml/kg concentration. Considering the treatments, BL treated with spent carbide produced mushrooms with significantly (P≤0.05) longer stipes than those produced on BL treated with salon effluent in all the flushes, which suggests a role of nitrogen in promoting stipe elongation. However, in BLSD and SD substrates, the stipe lengths of fruitbodies produced on spent carbide-treated substrates were significantly (P≤0.05) shorter than those produced on BLSD and SD treated with salon effluent. This negative impact on stipe length in the BLSD and SD substrates is probably due to the presence of the various growth inhibitors that have been reported in wood. It is likely that such substances negatively impacted stipe elongation in this case [6]. The stipe girth was highest at 15 g/kg or ml/kg in all the flushes except the third flush, where the highest value was at the 10 g/kg or ml/kg concentration. The BLSD substrate produced fruitbodies with significantly (P≤0.05) larger stipe girth than the other substrates at all concentrations. This could be a result of BLSD having the least number of fruitbodies. Pileus size was also treatment-dependent. At the 10 g/kg and 15 g/kg concentrations, the mean values of pileus area were comparable (P≤0.05) but significantly (P≤0.05) higher than the pileus areas produced at the other concentration levels. However, at the other treatment levels, BLSD produced fruitbodies with a pileus area significantly (P≤0.05) larger than those of the other two substrates.
Spent carbide in the BL substrate enhanced the production of fruitbodies with a significantly (P≤0.05) larger pileus area than when salon effluent was added, whereas the pileus areas of fruitbodies produced in BLSD and SD treated with spent carbide were smaller than those produced in BLSD and SD treated with salon effluent. There seems to be greater compatibility between spent carbide and the BL substrate, and between salon effluent and the sawdust substrates. The synergy between spent carbide and BL substrates resulting in a larger Pleurotus pileus area is likely a function of the ease of hydrolysis of the soft banana-leaf tissues by the fungal enzymes, coupled with the nitrogen made available by the nitrification-inhibiting action of spent carbide. These two factors appear to make nutrients available to the mushroom for expansion of its pileus. The affinity of salon effluent with sawdust substrates is likely a function of the nutrients (especially the protein, enzymes and hair-growth-promoting ingredients) in the hair relaxer that promote and stimulate fungal growth vigor and possibly the fungus's ability to produce more of the hydrolytic enzymes necessary for the degradation of sawdust, releasing nutrients for the mushroom which are then converted to tissues (pileus).

CONCLUSION

This work utilized two agro-wastes (sawdust and dry banana leaves) and two industrial wastes (spent carbide and salon effluent) in the cultivation of Pleurotus ostreatus. Results from this study reveal that all growth parameters of P. ostreatus assessed were positively influenced by all the levels of the amendments (salon effluent and spent carbide) on the substrates (BL, BLSD and SD) used in this study. The sawdust substrate was the best in supporting the number of fruitbodies and high fresh and dry weights, whereas the best results for stipe length, stipe girth and pileus area were identified in BLSD. The performances of these growth parameters were observed to be best at 15 ml and 15 g per kg substrate. Though both amendments positively influenced the growth parameters, the performance of spent calcium-carbide was found to be the best. Hence, these wastes could be used to increase the yield of P. ostreatus, which in turn results in the proper management of these wastes and possibly the remediation of sites polluted by them.
2021-09-01T15:15:29.038Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "0d9f9b4d3e58c544258a6c0744146e9847acfd5a", "oa_license": null, "oa_url": "https://www.journalajob.com/index.php/AJOB/article/download/30157/56572", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "cd56a6f5dc9f57921f865f4b6fbbcea7e21becdc", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
57293965
pes2o/s2orc
v3-fos-license
Association of capillary haemangioma with bilateral hydronephrosis in an infant

Introduction

Capillary (or strawberry) haemangiomas are benign localized tumours of blood vessels, usually occurring in the head and neck region 1. They have a malformational, hamartomatous basis and spontaneously disappear within the first few years of life 2. They occur in 1-2% of all neonates and have a female:male ratio of 3:1 1. They typically arise early in life, grow rapidly during a proliferative phase and then slowly regress in an involutional phase 1. We present a case of capillary haemangioma of the face, neck and upper part of the chest associated with bilateral hydronephrosis.

Case report

A 7-month-old boy came to the outpatient clinic with his parents. He was conscious and playful, immunized according to age, with normal development. His systemic examination was unremarkable. His mother had developed gestational diabetes mellitus, and she gave a history of swollen kidneys on the antenatal ultrasound examination of the fetus. The infant had developed a reddish skin rash (which later became hypertrophied) on the lower face including the left parotid region, part of the anterior neck, and extending to the upper part of the chest from the 18th day of life, which was diagnosed as a capillary haemangioma (Figures 1A and 1B).

*Permission given by parents to publish photograph.

He was treated with an oral β-blocker, propranolol 1.5 mg/kg, and timolol drops for local application on the affected site. No side effects were recorded. On further evaluation, his ultrasound scan of the abdomen revealed a right kidney 5.4 × 3.0 cm in size and a left kidney 6.1 × 4.1 cm in size at 4 months. Ultrasound examination did not reveal any intra-abdominal mass. Hence, to minimize the radiation hazard to the infant, CT scan or MRI of the abdomen was not performed to rule out the possibility of an intra-abdominal mass lesion. Later, bilateral hydronephrosis was confirmed by intravenous pyelography, which suggested left-sided vesicoureteral junction (VUJ) obstruction and right-sided pelviureteric junction (PUJ) obstruction (Figure 1C). The patient was surgically managed with ureteric reimplantation on the left side, and pyeloplasty was performed on the right side. The post-surgical course of the infant was uneventful, with normal renal function.
Discussion

Most haemangiomas are solitary; when multiple (with or without associated lesions in internal organs) or affecting a large segment of the body, the condition is known as multifocal angiomatosis 1. Kasabach and Merritt reported a case of haemangioma involving the skin and deep soft tissues of the thigh in 1940 3. Haemangiomas may present as small isolated lesions or as large masses that cause systemic symptomatology, impair vital or sensory functions or cause disfigurement, despite their self-limited course 1. Their pathogenesis and optimal management remain unknown 7. There was no external compression observed due to any mass producing the PUJ obstruction and VUJ obstruction in the right and left kidney, respectively, in this patient. Corticosteroids are the first line of treatment for infantile capillary haemangiomas. Other options include laser therapy, interferon alfa 3, vincristine and propranolol, which can inhibit the growth of these haemangiomas 7.
2018-10-14T07:31:40.027Z
2017-03-05T00:00:00.000
{ "year": 2017, "sha1": "e588770c7e72036d6e123f80f3048e088f668632", "oa_license": "CCBY", "oa_url": "http://sljch.sljol.info/articles/10.4038/sljch.v46i1.8077/galley/6120/download/", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e588770c7e72036d6e123f80f3048e088f668632", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248399706
pes2o/s2orc
v3-fos-license
Flight-Fecundity Trade-offs: A Possible Mechanistic Link in Plant–Herbivore–Pollinator Systems

Plant–herbivore and plant–pollinator interactions are both well-studied, but largely independent of each other. It has become increasingly recognized, however, that pollination and herbivory interact extensively in nature, with consequences for plant fitness. Here, we explore the idea that trade-offs in investment in insect flight and reproduction may be a mechanistic link between pollination and herbivory. We first provide a general background on trade-offs between flight and fecundity in insects. We then focus on Lepidoptera; larvae are generally herbivores while most adults are pollinators, making them ideal to study these links. Increased allocation of resources to flight, we argue, potentially increases a Lepidopteran insect pollinator's efficiency, resulting in higher plant fitness. In contrast, allocation of resources to reproduction in the same insect species reduces plant fitness, because it leads to an increase in herbivore population size. We examine the sequence of resource pools available to herbivorous Lepidopteran larvae (maternally provided nutrients to the eggs, as well as leaf tissue), and to adults (nectar and nuptial gifts provided by the males to the females), which potentially are pollinators. Last, we discuss how subsequent acquisition and allocation of resources from these pools may alter flight-fecundity trade-offs, with concomitant effects both on pollinator performance and the performance of larval herbivores in the next generation. Allocation decisions at different times during ontogeny translate into costs of herbivory and/or benefits of pollination for plants, mechanistically linking herbivory and pollination.

INTRODUCTION

Plant-herbivore and plant-pollinator interactions are both well-established, but largely independent, fields of study. Pollination is a mutually beneficial interaction and historically has been the most thoroughly studied of all mutualisms (Bronstein, 1994). The key issue in the study of pollination is how plants obtain and donate high-quality pollen to maximize reproductive output. In the case of the over 85% of plant species that are animal-pollinated (Ollerton et al., 2011), this involves attracting and rewarding partners that will transfer pollen among flowers of the same species. Herbivory, in contrast, is an antagonistic interaction between plants and animals. In some cases, consumption of leaves can dramatically reduce plant growth and survival (Lehndal and Ågren, 2015). Key issues in the study of herbivory have been how plants defend themselves against being eaten, and when and how herbivores are able to circumvent these defenses (Núñez-Farfán et al., 2007). In recent years, it has become increasingly well recognized that pollination and herbivory are not, as might be suggested by these contrasting concerns, independent of each other (Rusman et al., 2019). Rather, they interact in ways that synergistically contribute to a plant's reproductive success (Marquis, 1992; Bronstein et al., 2007; Jacobsen and Raguso, 2018; Haas and Lortie, 2020; Johnson et al., 2021). The presence of herbivore damage, for instance, can reduce the likelihood that pollinators will be attracted to flowers; it can also reduce the resources necessary to produce flowers, seeds, and fruits. Herbivores may also simply consume the flowers. In all of these cases, herbivory reduces plant fitness through the reduced effectiveness of pollination.
In other situations, however, the presence of herbivores actually enhances pollination. This occurs, for example, when a single species is both the pollinator and herbivore of the same plant species. In these cases, the probability of pollination and herbivory increase together. The best-known examples are highly specialized insects, such as fig wasps and yucca moths, that pollinate plants, then lay eggs in the flowers, with the pollinator's offspring subsequently destroying a portion of the developing seeds (Kato and Kawakita, 2017). More common, but not as well-studied, are cases in which insects feed on floral nectar, then lay eggs on the leaves of the same individual plant or on neighboring plants of the same species; the pollinator's offspring in this case are folivores of their host plant. The best known of these herbivorous pollinators are Lepidoptera, including but not restricted to those with narrow diet breadths (Bronstein et al., 2009; Altermatt and Pearse, 2011). Recent conceptual advances linking herbivory and pollination have largely adopted a plant perspective (e.g., Lucas-Barbosa, 2016; Jacobsen and Raguso, 2018; Kessler and Chautá, 2020). In this perspective, we develop a framework that links herbivory and pollination from the animal perspective instead. Specifically, we explore the idea that trade-offs between investment in flight vs. fecundity functionally link insect pollination and herbivory. Flight-fecundity trade-offs in insects are a well-studied phenomenon (Johnson, 1963; Roff, 1986, 1990, 1994; Rankin and Burchsted, 1992; Dingle, 1996; Zera et al., 1999; Zera and Brink, 2000; Zera and Larsen, 2001; Gu et al., 2006; Hanski et al., 2006; Karlsson and Johansson, 2008; Guerra and Pollack, 2009; Tigreros and Davidowitz, 2019). At a basic level, allocation of resources to flight will modify an insect pollinator's efficiency, with a resultant increase in plant fitness. In contrast, allocation of resources to fecundity leads to an increase in the herbivore population size produced in the next generation. Increased allocation of resources to fecundity may or may not translate linearly into herbivore damage: damage may differ among populations (Marquis, 1992); the strength of selection induced by the herbivore can differ (Agrawal et al., 2012); tolerance vs. resistance to herbivores may mitigate damage (McCall et al., 2020); the point during ontogeny at which herbivory occurs affects overall damage (Boege and Marquis, 2005); and the quality of the host plant and its effect on herbivore growth may mitigate damage (Davidowitz et al., 2003; Wilson et al., 2019), among other factors. Larval Lepidoptera are predominantly herbivores and most adults are pollinators (Hahn and Brühl, 2016), often of the same plant species (Altermatt and Pearse, 2011), making them ideal for addressing this link between herbivory and pollination. We note that this linkage exists whether the pollinator lays eggs on the same plant or on different individual plants of the same species, and whether the plant being eaten and the plant being pollinated are different species, which may result in differential costs and benefits of herbivory and pollination, respectively. Here, we associate resource allocation to flight with increased pollination efficiency and allocation to fecundity with herbivory damage. In addition to nectar foraging and pollen transfer, flight is of course also used for other functions, such as to find mates and host plants (Chai and Srygley, 1990; Willis and Arbas, 1991; Mitra et al., 2016).
However, because nectar foraging is the most relevant function of flight to a plant's fitness due to its resultant pollination, we focus on the nectar-foraging function of flight. The efficiency of an animal as a pollinator entails more than just flight. It encompasses numerous pollination-related traits, including multimodal signaling, used by the pollinator to find the flower (Raguso and Willis, 2002), the reliability of the signal used by the plant to attract the pollinator (Von Arx et al., 2012), proboscis-length matching with nectar tube length (Haverkamp et al., 2016; Soteras et al., 2020), flower handling time (Kunte, 2007; Riffell and Alarcón, 2013), pollen transport distances (Herrera, 1987), and floral constancy (Goulson et al., 1997). We focus on allocation to flight (flight muscles and wings), as this is the largest resource sink related to pollination (G. Davidowitz, unpublished data). Below, we first provide a general background on trade-offs between flight and fecundity in insects. We then examine the sequence of resource pools available to Lepidopteran herbivores and pollinators. Finally, we discuss how subsequent acquisition and allocation of resources from these pools may alter the flight-fecundity trade-off, with concomitant effects both on pollinator performance and the performance of larval herbivores in the next generation.

FLIGHT-FECUNDITY TRADE-OFFS

In insects, allocation to flight begins with an allocation to flight muscle and wings: larger flight muscles increase power output and larger wings reduce wing loading, both of which increase flight performance (Dudley, 2002). In general, resource allocation to flight is essential as it allows the adult to find mates, disperse, and forage for additional resources. In insect pollinators in particular, the dimensions of flight muscle and wings can have significant effects on pollinator flight (Dudley, 2002), affecting, for example, the ability to forage for nectar from flowers buffeted by the wind while hovering (Hedrick and Daniel, 2006; Sprayberry and Daniel, 2007). Subsequent investments are needed to fuel flight itself, which is the most energetically expensive mode of locomotion known (McCallum et al., 2013). In insects, flight can be 30-fold more costly than terrestrial locomotion (Harrison and Roberts, 2000). Insects that act as pollinators often hover while feeding on nectar, a behavior that is energetically demanding (Biewener and Patek, 2018). For example, hovering hawkmoths require 170 times more energy than basal metabolism (Bartholomew and Casey, 1978). The energy from nectar available to the insect differs across plant species and may differ among plant populations and communities as well (Lebeau et al., 2016). The nectar load itself can affect the stability and maneuverability of the insect in flight, with potential effects on feeding efficiency (Mountcastle et al., 2015). Feeding efficiency, in turn, may translate into pollinator effectiveness (Goulson, 1999). Flight distance is an important component of pollinator efficiency, as it may affect the pollen dispersal ability of the insect pollinator (Schulke and Waser, 2001; Pasquet et al., 2008). Allocation to reproduction involves investments into the reproductive system as well as into eggs.
Larval diet can affect the number of ovarioles in the ovary, and hence the maximum number of eggs that can be laid; fecundity is reduced on poor-quality larval diets due to fewer ovarioles (Sisodia and Singh, 2012; Aguila et al., 2013). In all insects, reproductive output is determined by the availability of nutritional resources, whether acquired during the larval or the adult stages (Wheeler, 1996; Papaj, 2000; Awmack and Leather, 2002). This is discussed in depth below. Investments in flight and fecundity trade off because both require the same macronutrient resources (proteins, carbohydrates, and lipids), all of which are often in limited supply (Baker and Baker, 1986; van Noordwijk and de Jong, 1986; Stearns, 1989; Zera and Harshman, 2001; Boggs, 2009; Saeki et al., 2014; Tigreros and Davidowitz, 2019). Although other limiting resources, such as the time available to devote to life-history activities, can also trade off, nutrient-based trade-offs are probably the dominant type of trade-off in nature (Zera and Harshman, 2001; Boggs, 2009; Agrawal, 2020). Tigreros and Davidowitz (2019) showed that in wing-monomorphic insect species, 76% of studies found a flight-fecundity trade-off when resource availability was manipulated. The more resources allocated to flight, the fewer resources are available for fecundity (and vice versa), resulting in a negative association between flight and fecundity. As a consequence, we can predict a negative association between the role of an insect as an herbivore and that as a pollinator (see above).
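To make the logic of this prediction concrete, the sketch below simulates a toy version of the acquisition-allocation ("Y") model of van Noordwijk and de Jong (1986), which is discussed again later in this perspective. It is an illustrative caricature under stated assumptions (a single resource budget split linearly between two functions; arbitrary parameter values), not a model fitted to any Lepidopteran data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # simulated individuals

def flight_fecundity_corr(sd_acquisition, sd_allocation):
    """Correlation between flight and egg investment when a single
    acquired budget A is split by an allocation fraction alpha."""
    A = np.clip(rng.normal(1.0, sd_acquisition, n), 0.1, None)    # budget
    alpha = np.clip(rng.normal(0.5, sd_allocation, n), 0.0, 1.0)  # to flight
    flight = alpha * A
    eggs = (1.0 - alpha) * A
    return np.corrcoef(flight, eggs)[0, 1]

# Allocation varies while acquisition is nearly fixed: a clear trade-off
# (strongly negative correlation) appears.
print(flight_fecundity_corr(sd_acquisition=0.02, sd_allocation=0.15))

# Acquisition varies strongly: the trade-off is masked and the
# correlation can even turn positive.
print(flight_fecundity_corr(sd_acquisition=0.40, sd_allocation=0.05))
```

The masking behavior in the second case anticipates a point made below: acquiring additional resources can reduce or hide an underlying trade-off.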
With this introduction to nutrient-based trade-offs between flight and fecundity, we next examine the sequence of nutrient pools available to Lepidoptera.

THE SEQUENCE OF RESOURCE POOLS

The timing of the acquisition and allocation of nutrients can influence the acquisition of additional resources (Figure 1). Some empirical studies suggest that allocation to traits related to acquisition ability, such as flight, may directly influence the further acquisition of resources (King et al., 2011; Descamps et al., 2016). Increased allocation to locomotion, for example, can improve an organism's ability to forage and acquire additional resources. The quantity and quality of resources that a juvenile herbivore acquires can modify its nectar preferences as an adult (Mevi-Schütz and Erhardt, 2003); this in turn may influence its effectiveness as a pollinator. We distinguish between plant-derived resources (foliage and nectar) and insect-derived resources (maternally provided provisions to the egg, and nuptial gifts that males provide to females during copulation). These resources are available at different times during an insect's ontogeny (Figure 1) and differ in their relative amounts of proteins, carbohydrates, and lipids (see below). These resource pools can have significant consequences for the growth of the herbivorous juvenile and the pollinating adult, with potential fitness consequences for the plant. Below, we examine each of these resource pools in the order they are available to the insect.

Maternally Provisioned Resources

The first resource pool to which herbivorous insects have access is provided by mothers, through the nutritional resources they deposit into eggs (Roach and Wulff, 1987; Bernardo, 1996; Fox and Czesak, 2000). In contrast to the leaf tissue that will be consumed once the insect emerges from the egg (see below), nutrients in eggs include substantial amounts of proteins (~40%-50%) and lipids (30%-40%). As a consequence, maternal egg provisioning of nutritional resources can have profound effects on offspring development and subsequent life-history traits (Mousseau and Dingle, 1991; Bernardo, 1996; Mousseau and Fox, 1998; Fox and Czesak, 2000; Hunt and Simmons, 2000). This in turn can influence flight-fecundity trade-offs once the offspring eclose as adults. At the same time, females experiencing flight-fecundity trade-offs may adjust the number of eggs they produce as well as the quantity of nutrients provisioned to each egg (Tigreros and Davidowitz, 2019). Females of the Speckled Wood butterfly, Pararge aegeria, that are forced to fly long distances, for example, produce smaller eggs and smaller offspring that take longer to develop (Gibbs et al., 2010). Similarly, females experiencing poor nutritional environments during either the larval or adult stage generally decrease the nutrients they put into eggs (Bernardo, 1996; Mevi-Schütz and Erhardt, 2005; Geister et al., 2008). In other cases, however, Lepidoptera may increase nutrient investment in eggs to improve offspring performance on low-quality host plants (Rotem et al., 2003). As a consequence, the provisioned egg itself may provide a link between the maternal and offspring resource acquisition and allocation strategies, as well as the associated life-history trade-offs (Figure 1).

Leaf Tissue

The larvae of most Lepidoptera feed on green plant tissues. These tissues contain large amounts of carbohydrates, but only a small fraction of the lipids and protein (nitrogen) that a larva needs. While some of the dietary carbohydrates are converted into lipids (Arrese and Soulages, 2010), the limited availability of dietary protein leads to a fundamental nutritional mismatch between Lepidoptera (as well as other herbivores) and their host plants (Slansky, 1978; Mattson, 1980; Wilson et al., 2019). For example, host plants of the cabbage butterfly (Pieris rapae) contain only 1.9%-5.9% N (~9.4%-36.9% protein), compared to about 13% N content in the adult bodies at eclosion (Morehouse and Rutowski, 2010). To make up such differences, insects engage in compensatory feeding, eating more of nutrient-poor diets to reach their nutritional requirements (Simpson and Simpson, 1990; Nestel et al., 2016). This nutritional mismatch in the larval stage often contributes to flight-fecundity trade-offs in Lepidoptera, because the limited nutritional resources from leaf tissue are differentially allocated to the flight (wings and flight muscle) vs. reproductive (ovaries and eggs) structures of the adult (Tigreros and Davidowitz, 2019). Furthermore, some of the resources acquired from the larval diet are stored and carried over through metamorphosis (Arrese and Soulages, 2010). After emerging, but before finding a nectar source, adults must maintain their bodies and fuel flight solely with larval stores. These endogenous reserves can be used, together with adult feeding, to produce eggs and fuel flight (Figure 1). Two contrasting scenarios of allocation of nutrients from leaf tissue can be envisioned. When juvenile resources are limited, due either to low abundance or to low nutritional value of the host plant, fewer resources will be available to "build" the adult. In the first scenario, we hypothesize that fewer resources are allocated to flight but allocation to fecundity is maintained, resulting in reduced efficiency of the adults during the feeding stage (when pollination occurs), while maintaining a high level of offspring herbivory.
A net reduction in plant fitness might result. Alternatively, in the second scenario, we hypothesize that reduced nutrients available to juvenile herbivores may result, in the adult stage, in reduced allocation of resources to fecundity but not to flight. In this case, pollination efficiency may remain high and herbivore populations may be smaller in the next generation, with net fitness benefits to the plant.

FIGURE 1 | Interaction between a plant and a Lepidopteran that is an herbivore as a larva and a pollinator as an adult. The central dashed box indicates resource pools available to the insect. Host-plant foliage is the resource for larvae (green arrows from the dashed box), nectar is a resource for adults (orange arrows), and nuptial gifts are a resource given to the female by the male (purple arrow). For simplicity, only resources relevant to flight-fecundity trade-offs are shown, and allocation to other functions, such as maintenance, is omitted. Blue lines indicate resources and green lines indicate effects on plant fitness. Larvae consume foliage for nutrient storage and growth (soma; straight blue arrows at top), which are available as resource pools in the adult following metamorphosis (curved blue arrows). Adult Lepidoptera can allocate resources to flight or fecundity (thick blue arrows). The consequences of flight-fecundity allocation decisions for the plant (double-lined green arrows) through herbivory and pollination are indicated by the thick green arrows. Allocation of resources to fecundity by males and females reduces plant fitness, green arrow (−), via herbivory. Allocation of resources to flight increases plant fitness, (+) green arrow, through pollination. Eggs produced by male allocation to nuptial gifts, and female allocation to fecundity, produce the next generation of herbivores (rightmost blue arrow).

RESOURCE ACQUISITION AND ALLOCATION IN POLLINATING ADULTS

Floral Nectar

Nutrient deficiencies in the larval stage, which can lead to flight-fecundity trade-offs, might be compensated for by the subsequent acquisition and allocation of nectar nutrients (Figure 1). A growing number of studies indicate that nectar can be as important as larval-derived reserves in supporting both flight and fecundity in adult females. Throughout their adult lives, moths and butterflies typically feed on floral nectars, which are carbohydrate-rich solutions (20%-50% sugars) enriched with small amounts of essential and non-essential amino acids (Baker and Baker, 1986; Lanza et al., 1995; Nicolson and Thornburg, 2007; Willmer, 2011). In general, females that feed on nectar produce more eggs than females that do not (Sasaki and Riddiford, 1984; von Arx et al., 2013). There are at least two explanations for this. First, carbohydrates from nectar provide the energy necessary to fuel flight (O'Brien, 1999), and contribute to the synthesis of non-essential amino acids for egg production (O'Brien et al., 2002, 2004). Second, contrary to the paradigm that essential amino acids can only be drawn from the larval diet (O'Brien et al., 2002), some studies have shown that nectar-derived essential amino acids enhance fecundity in Lepidoptera (Mevi-Schütz and Erhardt, 2005; Levin et al., 2017b), especially when the resources acquired by the larvae are limited (Mevi-Schütz and Erhardt, 2005). The resources acquired by male and female adult Lepidoptera (and other nectar-feeders) are not necessarily identical. In a comprehensive literature review, Smith et al.
(2019) showed that male and female pollinators differ in the species of flowers visited, as well as in their visitation frequencies. Female pollinators tend to visit a higher diversity of flowers than males, whereas males tend to forage over greater distances than females. These differences can potentially result in differences between conspecific males and females in their quality as pollinators (Smith et al., 2019). Once nectar has been ingested, how it is subsequently invested into life-history functions can also differ between sexes: females metabolize nectar-derived amino acids before utilizing larval-derived amino acids, whereas males preferentially use amino acids from larval stores before using those derived from nectar (Levin et al., 2017a). Males also allocate more nectar-derived amino acids to flight muscles than do females (Levin et al., 2017a). Finally, there are sex-related differences in how essential (EAA) and non-essential amino acids (NEAA) are allocated: after feeding, males metabolize EAAs more readily than females, whereas females preferentially allocate EAAs to reproduction (Levin et al., 2017a).

Male Nuptial Gifts

Adult females can acquire nutrients not only from nectar but also from nuptial gifts. These nutritional gifts are a type of reproductive investment that is widespread across animal taxa (Vahed, 1998; Lewis and South, 2012; Boggs, 2018). In insects, males transfer a structure called a spermatophore during mating, which includes both sperm and additional nutrients. These nutrients can be used by the female in oogenesis and somatic maintenance (Boggs, 1990, 1997; Karlsson, 1998). In contrast to leaf tissue and nectar, nuptial gifts contain substantial amounts of protein. For example, nuptial gifts in Pierid butterflies contain as much as 50% protein (Bissoondath and Wiklund, 1996; Karlsson, 1998; Tigreros, 2013), a large percentage of which is essential amino acids (~35%; Meslin et al., 2017). While providing an additional source of macronutrients for adult females, nuptial gifts have the potential to both ameliorate and magnify flight-fecundity trade-offs. In Pierids, a single nuptial gift can provide the necessary nutrients to produce 50-80 eggs, a substantial contribution to female fecundity (Karlsson, 1998; Wiklund et al., 1998; Wedell and Karlsson, 2003). Amino acids supplied through nuptial gifts can change female preference for amino acid-rich nectar (Mevi-Schütz and Erhardt, 2003), which may affect the pollination efficiency of the female. At the same time, because a nuptial gift is more than 80% water (Boggs and Watt, 1981), an important resource in arid environments (Contreras et al., 2013), female acquisition of nuptial gifts can increase the cost of flight by increasing wing loading. For example, a fresh spermatophore in P. rapae may add up to 10% of the female eclosion mass (Tigreros, unpublished). Males may rely on both larval- and adult-derived resources to produce nuptial gifts. For example, nitrogen content in larval diets can change the composition of nuptial gifts (Bonoan et al., 2015), and nectar uptake by males can increase the size of the nuptial gift by adding more nutrients than those derived from the larval diet (Watanabe and Hirota, 1999; Levin et al., 2016). Nuptial gifts can be costly to produce, representing up to 15% of the male body weight in Lepidoptera (Svärd and Wiklund, 1989).
As a consequence, males of species with substantial nuptial gift donation may prefer to mate with more fecund females (Rutowski, 1985; Tigreros et al., 2014), and may transfer more nutrients to them (Bonoan et al., 2015). In this case, a female's ability to acquire nutrients from this resource pool (Tigreros, 2013; Tigreros et al., 2014; Bonoan et al., 2015) would depend on how she had previously allocated resources to flight and fecundity (Figure 1).

THE EFFECTS OF SEQUENTIAL ACQUISITION AND ALLOCATION OF RESOURCES ON PLANT FITNESS

The acquisition of resources has typically been considered as a single event (the stem of the "Y" model, sensu van Noordwijk and de Jong, 1986). In most systems, however, resource acquisition and decisions governing resource allocation are not fixed, but rather dynamic processes that change continually across an organism's life (Zera and Harshman, 2001; Boggs, 2009; Kooijman, 2009; Figure 1). Acquisition of additional resources is predicted to reduce or mask potential trade-offs (Kaitala, 1987; Chippindale et al., 1993; Nijhout and Emlen, 1998; Zera and Harshman, 2001; Harshman and Zera, 2007). This suggests that organisms may have a means to modulate (and even ameliorate) the expression of a trade-off when acquiring resources from additional pools, with implications for plant fitness. For example, females of the Map butterfly, Araschnia levana, raised on low-quality larval diets prefer nectar with amino acids, whereas females raised on high-quality diets do not (Mevi-Schütz and Erhardt, 2003). These nectar amino acids can enhance butterfly fecundity, thereby increasing damage by the offspring herbivores (Mevi-Schütz and Erhardt, 2005). Thus, the sequential acquisition of resources may change their allocation to flight or to fecundity over time. Therefore, we may also expect the strength of the trade-off between flight and fecundity to change as the nutritional needs and nutrient availability change across an organism's life cycle (Figure 1). For example, an herbivore feeding on a nutritionally poor host plant might allocate more resources to flight at the expense of fecundity, with a potential fitness benefit to the plant. If, however, the emerged adult has access to an abundance of nutrient-rich nectar, it may shift these resources to increased fecundity (Sasaki and Riddiford, 1984; Levin et al., 2016, 2017a), thereby obviating the flight-fecundity trade-off imposed by larval resources. In another example, resources already allocated to flight may be reallocated to reproduction following flight muscle histolysis in aging butterflies (Jervis et al., 2005; Stjernholm et al., 2005), with a resultant increase in herbivory costs to the plant.

FUTURE DIRECTIONS

In this perspective, we have argued that trade-offs in resource allocation between flight and fecundity in insects can provide a mechanistic link between pollination and herbivory, with subsequent effects on plant fitness. To further develop this idea, we provide additional questions for future research.

1. Here, we have focused on Lepidoptera. Do flight-fecundity trade-offs in other insect pollinator taxa, such as solitary bees, flies, and beetles, have similar effects on plant fitness?

2. We have argued that flight-fecundity trade-offs should have a direct impact on plant reproduction. It will be exciting to explore, via models and empirical studies, how flight-fecundity trade-offs influence plant population dynamics and evolution.
Do different strengths of these trade-offs translate to different effects on the plants?

3. We have focused on insects that feed on leaves as juveniles and on nectar as adults. However, some specialized insect pollinators feed on seeds in the juvenile stage; still others shift from feeding on leaves to feeding on flowers when the latter become available. In many cases the adults do not feed at all (e.g., fig wasps and yucca moths; Kato and Kawakita, 2017). Do the flight-fecundity trade-offs discussed here illuminate these interactions as well?

4. In arid environments, water is another critical resource that adult insects gain from feeding on nectar (Contreras et al., 2013). Does this additional resource alter in any way the resource allocation trade-offs between flight and fecundity we discuss here?

5. Does plant density-dependence affect how the flight-fecundity trade-off affects plant fitness? More specifically, does the flight-fecundity trade-off differentially affect pollination when the pollinator has numerous, vs. few, plants available at which it can feed, and how does the flight-fecundity trade-off affect herbivory when the female can lay eggs on numerous versus few possible host plants?

These, and additional yet-to-be-identified, questions make flight-fecundity trade-offs an exciting area of future research into the mechanistic link between pollination and herbivory, and plant-insect interactions more broadly.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

AUTHOR CONTRIBUTIONS

GD, NT, and JB developed the ideas for the manuscript and all authors were involved in the writing and editing. All authors contributed to the article and approved the submitted version.

FUNDING

This work was supported by National Science Foundation (NSF-USA) grant IOS-2122282 to GD and NT.
2022-04-28T02:16:36.052Z
2022-04-25T00:00:00.000
{ "year": 2022, "sha1": "ea0e58fbe5e71214653b39331985292945d860ca", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "ea0e58fbe5e71214653b39331985292945d860ca", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
91148168
pes2o/s2orc
v3-fos-license
Root Structure and Belowground Biomass of Hybrid Poplar in Forestry and Agroforestry Systems in Mediterranean France

In poplar, one of the most used species of forestry and agroforestry, belowground biomass allocation plays an important role in providing anchorage as well as an efficient nutrient and water distribution channel. Available literature on this aspect is scarce for the hybrid poplar Populus euramericana I-214. Therefore, the study was aimed at finding how this species developed its root system and how much belowground biomass was allocated in a Forest System (FRS) and an Agroforest System (AFS). This was done using soil excavation and root coring methods. Coarse roots were distributed in all directions but their number and proximal cross section area (CSA) were not uniform. In the case of the AFS tree, maximum CSA was distributed in the south and south-west direction, while in FRS it was in the north-east and south-east direction. Fine roots were observed throughout the rooting zone along with coarse and medium roots, up to a maximum depth of 2.4 m in FRS and 2.8 m in AFS. Total belowground biomass was higher in the AFS tree (130 kg tree-1) than the FRS tree (120 kg tree-1). But on a hectare basis FRS accumulated more biomass (24.5 Mg ha-1) than AFS (18.1 Mg ha-1). However, if practiced on surplus agricultural land and the system is considered as a whole, AFS allows grain production in lieu of some biomass deficit.

Introduction

Poplars have consistently been part of the agriculture and forest resource sectors in temperate regions as well as in a tropical country like India, where cottonwood has been introduced substantially as block plantation and extensively as an agroforestry crop (Jha, 1999; Block et al., 2006; Chauhan et al., 2012; Gera, 2012). Immediate and long-term needs in both the agriculture and forest resource sectors have created a niche for the production of wood from managed plantations of native poplar species and their hybrid varieties (Jha, 1999; Block et al., 2006). In agricultural landscapes, the implementation of agroforestry systems has the potential to provide a high carbon sequestration capacity compared to other greenhouse gas mitigation strategies (Jose and Bardhan, 2012). Short rotation forestry crops are currently assuming growing importance in many countries where surplus agriculture and other land is becoming available and poplar stands are expanding on it, for example, Bulgaria, Canada, China, Germany, Serbia, Spain, USA etc. (Calfapietra et al., 2010). The aim is to benefit from the goods directly and from services like carbon sequestration indirectly. This system has covered thousands of hectares in Europe alone to generate renewable energy, mostly using poplars and willows (Herve and Ceulemans, 1996; Venendaal et al., 1997; Verwijst, 2001; Langeveld et al., 2012). European farmers are increasingly attracted to energy crops following the most recent changes in the common agricultural policy and the rapid development of the bioenergy sector (Spinelli et al., 2008). Wider use of poplar can contribute to European Union goals to ensure 20% of its energy consumption from renewable resources until 2020 and beyond (Jansons et al., 2014). Poplar based agroforestry has the capability of enhancing soil organic carbon by up to 83% (Singh et al., 1989). A longer duration carbon locking role is played by the root system of the vegetation (Kumar et al., 2006; Nair et al., 2009), which has some other roles, like nutrient and water acquisition, anchoring etc.
Fine and coarse roots are key contributors to belowground net primary productivity, and play critical roles in the biogeochemical cycling of forest and woodland ecosystems (Clark et al., 2001; Brunner and Godbold, 2007; Malhi et al., 2011; Smith et al., 2013; Raich et al., 2014). The storage capacity and the rate of carbon sequestration in this biogeochemical cycle depend on various factors such as the climate, soil type, tree species used for afforestation, current forestry practices, pre-afforestation management and land use history (Post and Kwon, 2000; Paul et al., 2002). Coarse roots are multifunctional tree components providing key functions such as transport (nutrients, photosynthate, water), storage (sugars and nutrients), biomechanical stabilization, as well as the framework upon which fine roots develop and connect (Resh et al., 2003; Cook and Weigh, 2005; Guo et al., 2013). Aboveground biomass in poplar plantations or forestry systems (FRS) and agroforestry systems (AFS) has been widely studied around the world (Laureysens et al., 2004; Zabek and Prescott, 2006; Fang et al., 2007; Christersson, 2010; Fortier et al., 2010; Truax et al., 2012). In spite of the crucial role of belowground parts for woody biomass production and carbon sequestration in soil (Berhongaray et al., 2015), disproportionately few studies have evaluated the belowground biomass of these systems (Fortier et al., 2013). In other words, the poplar root system still remains the most poorly studied and understood portion of the tree (Friend et al., 1991). Therefore, the objectives of the present study, conducted in AFS and FRS in the Mediterranean region of France, were to assess and compare (i) the distribution of belowground biomass to fine, medium and coarse roots, (ii) the orientation of coarse roots around the stump root, (iii) the extent of vertical and horizontal spread of roots in soil and (iv) the advantage of one system over the other.

Study sites

Two experimental plots, Forestry (Plantation) System (FRS or PLS) and Agroforestry System (AFS), were located side by side in the vicinity of Vezenobres township (longitude 4°9' E, latitude 44°2' N, elevation 138 m a.s.l.) in the Mediterranean region of France. The soil was a sandy alluvial fluvisol with 8% clay, 42% silt and 50% sand. Pure sand and gravel layers occurred at different depths, about 1.1-1.3 m and 2.5-2.9 m. The climate is sub-humid with an average temperature of 14.8 °C and an average annual rainfall of 1172 mm. Potential evapotranspiration (580 mm) was higher than average rainfall (267 mm) during the main growing season, May to August. Water table fluctuation was also common in the area (Mulia and Dupraz, 2006). The AFS and FRS plots were established in 1996 using the better performing I-214 and I-4551 clones of hybrid poplar (Populus euramericana). AFS trees were spaced 16 m (alley) x 4.5 m (row) while FRS trees had a spacing of 7 m x 7 m. The trees were pruned at 6 m and 10 m following a block design. Durum wheat was grown in AFS, keeping fallow every 3 or 4 years. The P. euramericana I-214 clone with 6 m pruning was selected for the present study in 2009. For the last 3 years the AFS plot had been devoid of agriculture.

Tree sample selection

The tree harvesting and dry matter estimation method was selected for structure and biomass estimation of roots. Since the harvesting method is time and resource consuming but more accurate, a trade-off was made and instead of multiple trees, single tree harvesting (Fang et al., 1999) was done in both AFS and FRS during summer 2009. Tree selection was done on the following parameters: (i) the tree was representative of the plantation, having the average diameter at breast height of all the trees in the plantation, (ii) it was from the inner area, not the border of the plantation, (iii) its neighbouring trees had normal form and vigour and (iv) both the trees were of the same clone (I-214) and same treatment (6 m pruning). The selected AFS tree matched all these qualifications in toto, but the FRS tree was of slightly higher girth (1.41 m) than the average (1.36 m) of the plantation. Therefore, biomass calculation for FRS was normalized in this case by a factor of 0.93, the square of the ratio of the average and the harvested tree girths (re-derived in the estimation sketch below) (Jha, 2017).

Root harvesting

Stump and different types of roots were harvested at different depths and breadths in the soil. Although multiple methods of belowground biomass harvesting have been recommended (Addo-Danso et al., 2016), the excavation method was used for harvesting of roots to capture lateral root variability in a larger volume of soil (Berhongaray et al., 2015). One quarter of the rooting zone of a single tree from both the plantations was selected randomly for excavation (Fortier et al., 2015b). This zone was divided into 3D voxels (volume elements of soil, analogous to pixels, of 1 m length x 1 m breadth x 0.5 m depth) by marking squares (1 m2) on the ground. All the voxels were given a unique identification number; for example, the first voxel with the tree stump in the centre had the identity 0,0,0 and adjacent voxels had 1,0,0 on the X axis (row), 0,1,0 on the Y axis (alley) and 0,0,0.5 on the Z axis. Harvesting was done from selected voxel columns (Fig. 1), starting from the farthest one near the excavation trench so that the task of removal of cut soil remained easy. These voxel columns were dug carefully using a soil pick (MBW, Slinger, WI, USA) releasing high pressure air (125 PSI). Roots collected from each voxel were brought to the laboratory and categorised into three groups based on size. Although roots are categorized and named differently (Lodhiyal et al., 1995; Laclau, 2003; Tufekcioglu et al., 2003; Das and Chaturvedi, 2005; Fortier et al., 2015a), three categories, viz., fine roots (<2 mm), medium size roots (2 mm to 10 mm) and coarse roots (>10 mm), were adopted in the present study. Among all roots, fine roots represent only a small fraction of total tree biomass, but fine root production and turnover are significant components of the biomass turnover (Amthor, 1986; Lambers et al., 2000; Chen et al., 2004; Al Afas et al., 2008). The stump root was excavated along with the proximal roots from the first voxel column. All the secondary roots on the stumps were numbered and their proximal diameter or girth was recorded with reference to the north, north-east, east, south-east, south, south-west, west and north-west directions using metal callipers or a tailor's tape in order to determine their cross section area (CSA). The soil coring method was also used in the present study for getting another set of fine root data, since the soil excavation method is reported to underestimate fine roots due to losses during excavation (Friend et al., 1991) and following the recommendation of the coring method for uniformly distributed fine roots (Mulia and Dupraz, 2006; Levillain et al., 2011).
Nine and six well spread coring points were selected in the alleys of the AFS and FRS trees, respectively (Fig. 2). Coring was done using a micro-caterpillar driller (Sondeuse EMCI 300C with core size 1.1 m by 0.1 m). Soil cores were drilled out from the maximum penetrable depth. The cores were divided into sub-cores of 0.2 m length and broken into two halves to observe the presence of living and dead, fine and coarse roots. The live roots were smooth, light coloured and non-friable as compared to the dead roots. Roots were counted on both the faces of core breaks for further use.

Root biomass estimation

Harvested roots were cleaned, weighed and their samples were dried in an oven at 90 °C till constant weight. The fresh to dry weight ratio was used to calculate the biomass for harvested voxels. For the remaining voxels, biomass was extrapolated mathematically. As per site observation and the trend in cellules' biomass, an exponential decrease in root growth was assumed and an exponential regression relationship between biomass and distance from the tree (along the Y axis) was developed. A linear decrease was adopted along the X axis for want of enough data and an indicative trend with distance in AFS. Values of non-sampled cellules were calculated using the exponential decay equation constants (SigmaPlot software). Weightage was applied to them cellule wise, since the contributions of these were different for a quarter of the scene. Exponential decrease was used for both the axes, X and Y, in FRS. The biomass values calculated so far were corrected by using a distance matrix representing the voxels. In this case also, weightage was applied in the biomass calculation for the cellules. Quarter root biomass was arithmetically extrapolated to determine total underground biomass. Fine root biomass by the coring method was estimated using fine root number, a density constant (143.55), specific root length (17.86 m g-1) and rooted volume, following Mulia (2005); a sketch of these calculations is given below.
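To make the estimation procedure concrete, the following minimal Python sketch illustrates the calculations described above: fitting the exponential decay RB = a*exp(-bY) to harvested voxels and extrapolating to non-sampled ones, converting core counts into fine root biomass, and re-deriving the 0.93 normalization factor. The exact combination of the density constant, rooted volume and specific root length used by Mulia (2005) is not reproduced in the text, so the fine-root formula here is an assumed reading, and all numeric inputs other than the paper's constants are invented placeholders.

```python
# Minimal sketch of the root-biomass calculations described above.
# All numeric inputs except the paper's constants are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(y, a, b):
    """RB = a * exp(-b * y): the regression form fitted to the voxels."""
    return a * np.exp(-b * y)

# Hypothetical harvested-voxel data: distance along Y (m) vs dry biomass (kg)
y_dist = np.array([0.5, 1.5, 2.5, 3.5, 4.5])
biomass = np.array([9.2, 3.1, 1.2, 0.45, 0.18])
(a, b), _ = curve_fit(exp_decay, y_dist, biomass, p0=(10.0, 1.0))
print("biomass extrapolated to an unsampled voxel at 6.5 m:",
      round(float(exp_decay(6.5, a, b)), 3), "kg")

# Fine root biomass from core counts -- the combination below is an
# assumed reading of Mulia (2005), not a reproduction of it.
DENSITY_CONST = 143.55   # converts counts to root length (paper's constant)
SRL = 17.86              # specific root length, m per g dry mass

def fine_root_biomass_g(n_counts, rooted_volume_m3):
    """Assumed: counts scale to total root length via the density constant
    and rooted volume; dividing by SRL (m/g) then gives dry mass in grams."""
    root_length_m = DENSITY_CONST * n_counts * rooted_volume_m3
    return root_length_m / SRL

# Normalization applied to the FRS tree: square of the girth ratio
print("FRS normalization factor:", round((1.36 / 1.41) ** 2, 2))  # -> 0.93
```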
Root structure and distribution

Soil excavation showed that roots were growing horizontally as well as vertically. Secondary roots in both the trees grew on the stump root in all the directions (Fig. 3), but the orientation of these roots was not uniform in any of the quarters as opposed to the counterpart quarters in the azimuth. Their number, thickness and orientation by depth varied within the two trees. The total number of secondary roots was higher in FRS (140) than AFS (54), while the total CSA of these roots was more in AFS (3,243 cm2) than FRS (3,082 cm2). Growth of the stump root terminated bluntly before 1.5 m in FRS, while it extended beyond 2.0 m in AFS, giving the appearance of a tap root. The horizontal roots radiated farther, beyond 7.0 m in AFS and 3.0 m in FRS. Vertical and oblique roots were also seen in some voxels far from the tree base. The pattern of coarse root orientation on the stump root revealed that the north-south orientation had more root CSA than east-west in both the trees. When the intermediary orientations, north-east and north-west, and south-east and south-west, were combined, the root area distribution was lopsided. In the case of the AFS tree, maximum distribution was in the south and south-west direction, while in FRS it was in the north-east and south-east direction (Fig. 3 a & b). The voxel-wise CSA distribution was 64%, 10% and 26% in the 0-50 cm, 51-100 cm and 100-150 cm depths, respectively. However, morphological observation of root orientation indicated that secondary roots were prominently coming out of the stump root in two tiers in the AFS tree with a gap of 60-70 cm, the first tier closer to the ground and the second at the bottom of the stump root. There were very few secondary roots growing on the stump root between the two tiers (photos in Fig. 3). In FRS no such tier differentiation was evident, since the secondary roots were growing in continuity all along the stump root. Fine roots were observed throughout the rooting zone along with coarse and medium roots. They were excavated from the sub-surface (10 cm) up to a maximum depth of 2.4 m in FRS and 2.8 m in AFS. However, rooting depth was variable along the horizontal distance from the tree. It seemed to increase from the tree line up to 1.7 m - 2.0 m distance, and afterwards there was a decrease in rooting depth with increase in distance. Fine root density also varied at different depths without showing any trend of increase or decrease (Fig. 4).

Belowground biomass distribution

Tree height, girth, and density in AFS and FRS were 30.7 m, 1.39 m and 139 trees ha-1 and 30.7 m, 1.41 m and 204 trees ha-1, respectively. Other results related to biomass are recorded in Table 1. The assessment of different components of root biomass was based on regression equations developed from root biomass (RB) and voxel distance (Y) from the tree. All the six equations (RB = a*exp(-bY)) related to the trend-lines presented in Fig. 5 were highly significant (r2 = 0.841** to 0.999**). Total belowground biomass was higher in the AFS tree (130 kg tree-1) than the FRS tree (120 kg tree-1). The pattern was similar in other components also, like fine roots, medium roots, coarse roots and stump root. Dry root mass allocation into the different components, fine root, medium root, coarse root and stump root, was 4%, 10%, 45% and 41%, respectively, in the AFS tree. In the case of the FRS tree, fine root and coarse root contributions remained the same but medium root was 1% lower and stump root 1% higher. The two methods of fine root biomass estimation resulted in varied quantities. Coring (7.7 kg tree-1, AFS; 5.9 kg tree-1, FRS) yielded higher biomass than excavation (5.4 kg tree-1, AFS; 4.6 kg tree-1, FRS) in both the trees. Biomass accumulation in the rooting space of a tree through fine roots depended on its density and length. However, the density of fine roots was not consistent through the depth of soil or distance from the tree in both the cases of AFS and FRS (Fig. 4). AFS stored fine root biomass down to 2.8 m soil depth, while in FRS the storage depth was restricted to 2.4 m. Fine root biomass storage (Fig. 6) in AFS varied from 0.96 kg (0-0.2 m) to 0.05 kg (2.6-2.8 m), with a similar decline in FRS. Generalization showed that there was maximum fine root biomass storage in the first meter (48% in AFS and 45% in FRS), followed by the second meter (27% in AFS and 40% in FRS) and then the third meter (25% in AFS and 15% in FRS).

Root excavation method

A possible reason for the higher fine root quantity estimated by coring than excavation in both the trees, AFS and FRS, could be the hypothesis that the excavation method results in sampling error since roots break off and get lost during excavation (Millikin and Bledsoe, 1999; Niyama et al., 2010). Bledsoe et al. (1999) also found that complete recovery of an entire deep rooted system was difficult even under ideal conditions. Similar to this, Friend et al.
(1991) observed in P. trichocarpa x P. deltoides clones that field excavation failed to recover at least 68% of fine root biomass. This loss appeared to be very high as compared to the present study, where there was 70-77% recovery of fine roots in relation to the coring method.

Root harvesting depth

Most workers, owing to the resource and time consumption factor coupled with the assumption of root presence in that area only, used the excavation method and explored fine roots to a limited depth, for example, 0.4 m (Ostonen et al., 2005), 0.5 m (Bayala et al., 2004), 0.3-0.5 m (Jiangen et al., 2008), 0.6 m (Tomlinson et al., 1998), 0.8 m (Moreno-Chacon and Lusk, 2004), 0.6-0.9 m (Misra et al., 1998), 1.0 m (Purbopuspito and Rees, 2002; Dowell et al., 2009), 1.5 m (Smith et al., 1999) and 2.0 m (Moreno et al., 2005) in different species. In the case of poplar, a few studies, like Puri et al. (1994), Fang et al. (2007) and McIvor et al. (2009), explored 0.3 m, 1.0 m and 1.4 m depth, respectively. However, in the present study, excavation was extended to 3.0 m depth, since roots were observed up to 2.4 m to 2.8 m during coring. This finding is supported by Mulia and Dupraz (2006) and Heilman et al. (1994), who also recorded poplar roots up to 3.0 m and beyond, respectively. Hansen et al. (2003) and Rosengren et al. (2006) reported that 95% of all fine roots were located within 1.0 m in temperate and boreal forest ecosystems. Callesen et al. (2016) suggested this depth as a pragmatic 'effective rooting depth', which is not in conformity with plantation systems in Mediterranean conditions, where medium and coarse roots were found much below this depth. A simple indication of the present finding is that the effective rooting depth should be beyond 1.0 m, otherwise there could be omission of a substantial amount of root recovery (52-55%; verified in the sketch below), since the distribution in the first, second and third meter depths, respectively, was 48%, 27% and 25% in AFS and 45%, 40% and 15% in FRS. As such, estimation of these deeper layer roots, being out of the ploughing layer, is very important from the viewpoint that they have a longer residence time in the soil, since they are better protected and undisturbed (Nair et al., 2009).

Fine and coarse root distribution

Fine roots were distributed in the deeper layers also, but their concentration was higher in the first layer (0-10 cm) in both the AFS and FRS systems. Quite a few workers in poplar (Dickman et al., 1996; Lukac et al., 2003; Al Afas et al., 2008) and other species like Scots pine, Japanese cedar, Khasi pine etc. (Friend et al., 2000; John et al., 2001; Janssens et al., 2002; Konopka et al., 2005, 2006) also reported concentration of fine roots in the upper layer. This variation of root concentration may be due to the varied presence of coarse roots, nutrient and moisture availability, soil structure, temperature and microbial activity in different soil layers. Interactions among these factors are more dynamic in the subsoil region in comparison to the deeper layers (Block et al., 2006; Konopka et al., 2006). Uneven distribution of roots in different directions or rooting quarters may have similar reasons. Many researchers (Kellman, 1979; Watson and O'Loughlin, 1990; Puri et al., 1994; Abernathy and Rutherford, 2001; McIvor et al., 2005) concluded that structural roots are largely confined to the top 0.3 m of the soil profile in poplar and other species. This understanding does not hold well in the present case, since more than 50% of root biomass was distributed beyond this depth.
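The omission figures quoted above follow directly from the per-meter depth distributions reported for the two systems; a minimal check:

```python
# Share of root biomass missed if sampling stops at 1.0 m depth,
# using the per-meter distributions reported above (percent).
depth_share = {
    "AFS": [48, 27, 25],   # first, second, third meter
    "FRS": [45, 40, 15],
}
for system, shares in depth_share.items():
    missed = sum(shares[1:])  # everything below the first meter
    print(f"{system}: {missed}% of root biomass below 1.0 m")
# AFS: 52%, FRS: 55% -- matching the 52-55% omission quoted above.
```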
However, there are a few more reports of deep seated coarse root distribution in cottonwood (Rood et al., 2011), loblolly pine (Albaugh et al., 2006), and dehesa vegetation (Moreno et al., 2005).

Root orientation and growth

Root number and CSA in Populus x euramericana (Tasman variety) varied between different depths but not in different directions (McIvor et al., 2005). Contrary to this, these two varied with change in direction as well in the present study (Populus euramericana I-214). Although Smith (2001) and Kalliokoski et al. (2008) noted a strong assumption of symmetrical dimensions of the root system, Puri et al. (1994) and McIvor et al. (2009) recorded highly asymmetric roots in poplar and other species, owing to the effect of non-symmetrical mechanical stress and heterogeneous nutrient availability in soil (Coutts et al., 1999; Casper et al., 2003). Root growth is essentially opportunistic in its timing and its orientation. It takes place whenever and wherever the environment provides water, oxygen, minerals, support and warmth (Perry, 1989). The variation in the number of secondary roots and their proximal CSA in different directions in the two different systems, and even within the same tree in the present investigation, indicated that the distribution of resources was not uniform. Substantial variation has been reported earlier as well (Harrington and DeBell, 1996). Therefore, exploring a limited portion or one quarter of the rooting space of a tree (Fortier et al., 2015b), and extrapolating the value from it, may not give an accurate estimation and may lead to either overestimation or underestimation of root growth. Henderson et al. (1983) had also confirmed in Picea sitchensis that no reliable estimate can be obtained from measuring only one quarter of the space. The two-tiered root orientation in the AFS tree, probably due to damage of the upper layer roots during ploughing of the inter-row space for agriculture, was reported earlier also in a sandy location by Perry (1989) in Pinus and other trees. This was done strategically to absorb water and nutrients from the surface layer by the first tier. The deep seated second tier allowed survival under drought or other adverse conditions. Rood et al. (2011) also observed that in drier regions the cottonwood becomes phreatophytic and produces a deeper root system to access moisture from ground water.

Belowground biomass

Total root biomass (18.1 Mg ha-1 in AFS and 24.5 Mg ha-1 in FRS) was within the reported range (14.8 Mg ha-1 to 29.6 Mg ha-1) of hybrid poplar buffers (Fortier et al., 2013), but fine root biomass (0.75 Mg ha-1 in AFS and 0.94 Mg ha-1 in FRS) was very low (vs. 1.86 Mg ha-1 to 2.62 Mg ha-1; Fortier et al., 2015a). The situation was similar in comparison with other systems, like young tree plantations (6 Mg ha-1 to 42 Mg ha-1; Lukac et al., 2003 and Block, 2004) and mature forest (5 Mg ha-1 to 52 Mg ha-1; Steele et al., 1997 and Pinno et al., 2010). Though the edapho-climatic factors govern biomass production, the reason for higher fine root biomass could be higher plantation density, as hypothesised by Berhongaray et al. (2013). This was confirmed in the present study also, as FRS had higher density and fine root biomass than AFS. However, the higher fine root or total root biomass on a per tree basis in AFS than FRS could be due to the different management regimes. FRS trees got only post-planting silvicultural treatment like pruning, while AFS got the additional advantage of environment manipulation like irrigation and fertilizer application to the alley crop.
The latter also had less inter-tree competition for underground resources like nutrients and moisture. Jha and Gupta (1991) and Banerjee et al. (2009) have also suggested that providing extra irrigation, fertilizer doses, weeding and hoeing during the early age of intercropping enhanced tree growth, resulting in more biomass accumulation (Singh and Sharma, 2007). Corroborating results were found in other studies, like agrisilviculture (Pingale et al., 2014), fruit trees (Raizada et al., 2013), young Populus deltoides plantation (Kern et al., 2004) and Acacia mangium (Danial et al., 1997).

Forest and agroforest systems

The root systems of the two differently nurtured trees were different on account of coarse root orientation and resource allocation, in spite of being of the same clone, age and locality. AFS showed more plasticity due to the changed culture regime. This is in line with the hypothesis of Mulia and Dupraz (2006) that trees grown in association with annual winter crops develop a different rooting pattern as compared to trees grown in pure forestry stands. Root depth and architecture are partly controlled by physical and agronomic factors (Bishopp, 2009; Fukaki and Tasaka, 2009) but substantially by the genotype and age (Wullschleger et al., 2005; Kell, 2012). But in the present case the genetic control hypothesis for biomass variation could be ruled out (both trees being of the same clone and age), and the variation could be assigned to soil structure and nutrient availability. An additional factor for deep rooting could also be the available moisture in the water table at around the 3.0 m level. There is indirect support from Hallgren (1989) that poplar is an opportunistic rooter and does not produce deep roots if the water table is at a higher level. As discussed earlier, the view that coarse and fine roots of poplar in plantation and agroforestry systems are located near the soil surface (Tufekcioglu et al., 1999; Douglas et al., 2010 etc.), with 1.0 m as the effective rooting depth (Callesen et al., 2016), may have some limitations. In contrast, the much deeper roots in the present case had the advantage of extracting nutrients and moisture from a larger area as well as acting as a safety net for trapping leachable nutrients from the upper layer (Allen et al., 2004; Dougherty et al., 2009). On this account AFS is more useful than FRS, since it had roots spread more deeply and widely. The plasticity of AFS roots, an adaptation feature (Perry, 1989), gets support from Gary (2000), who speculated that ploughing-effected pruning of lateral roots could be the reason driving the coarse roots down to deeper layers, since they were damaged and could not grow laterally beyond this in the tilled space. It is also possible that the presence of roots of the agriculture crop played a role in this plasticity (Yocum, 1937; Mulia and Dupraz, 2006).

Conclusions

The hybrid poplar had a deep seated root system in the fluvisol in the Mediterranean region. Coarse roots occupied the available space in all the directions, but their orientation in a section may not be the mirror image of any of the quarters or the half of the rooting zone, possibly because of uneven soil structure and uneven nutrient availability. Differences were found in the trees of the same species/clone at the same age but grown under two different systems - monoculture (FRS/PLS) and agrisilviculture (AFS). Secondary root orientation was tiered in the latter, possibly because of ploughing of the tree inter-row space and the presence of crop roots. Belowground allocation of biomass was higher in the different root components - fine, medium and coarse roots - in the AFS tree.
On a hectare basis it was more in FRS, mainly due to higher tree density and optimum use of available nutrients; the per-tree and per-hectare figures are reconciled in the short check below. If introduced on agricultural land, AFS has the advantage of grain production with some compromise on biomass vis-à-vis FRS.
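The apparent reversal between the per-tree and per-hectare comparisons follows directly from the stand densities reported above:

```python
# Per-tree belowground biomass (kg) and stand density (trees per ha)
systems = {
    "AFS": {"kg_per_tree": 130, "trees_per_ha": 139},
    "FRS": {"kg_per_tree": 120, "trees_per_ha": 204},
}
for name, s in systems.items():
    mg_per_ha = s["kg_per_tree"] * s["trees_per_ha"] / 1000  # kg -> Mg
    print(f"{name}: {mg_per_ha:.1f} Mg ha-1")
# AFS: 18.1 Mg ha-1, FRS: 24.5 Mg ha-1 -- matching the reported totals,
# so FRS leads per hectare despite AFS leading per tree.
```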
2019-04-02T13:11:24.093Z
2017-09-30T00:00:00.000
{ "year": 2017, "sha1": "d01dca9b1fcfad0bc17269449c1d1257fa21a824", "oa_license": "CCBY", "oa_url": "https://www.notulaebiologicae.ro/index.php/nsb/article/download/10155/8845", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d01dca9b1fcfad0bc17269449c1d1257fa21a824", "s2fieldsofstudy": [ "Environmental Science", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Environmental Science" ] }
1910361
pes2o/s2orc
v3-fos-license
Inhibition of Ca(2+) signaling by Mycobacterium tuberculosis is associated with reduced phagosome-lysosome fusion and increased survival within human macrophages.

Complement receptor (CR)-mediated phagocytosis of Mycobacterium tuberculosis by macrophages results in intracellular survival, suggesting that M. tuberculosis interferes with macrophage microbicidal mechanisms. As increases in cytosolic Ca2+ concentration ([Ca2+]c) promote phagocyte antimicrobial responses, we hypothesized that CR phagocytosis of M. tuberculosis is accompanied by altered Ca2+ signaling. Whereas the control complement (C)-opsonized particle zymosan (COZ) induced a 4.6-fold increase in [Ca2+]c in human macrophages, no change in [Ca2+]c occurred upon addition of live, C-opsonized virulent M. tuberculosis. Viability of M. tuberculosis and ingestion via CRs was required for infection of macrophages in the absence of increased [Ca2+]c, as killed M. tuberculosis or antibody (Ab)-opsonized, live M. tuberculosis induced elevations in [Ca2+]c similar to COZ. Increased [Ca2+]c induced by Ab-opsonized bacilli was associated with a 76% reduction in intracellular survival, compared with C-opsonized M. tuberculosis. Similarly, reversible elevation of macrophage [Ca2+]c with the ionophore A23187 reduced intracellular viability by 50%. Ionophore-mediated elevation of [Ca2+]c promoted the maturation of phagosomes containing live C-opsonized bacilli, as evidenced by acidification and accumulation of lysosomal protein markers. These data demonstrate that M. tuberculosis inhibits CR-mediated Ca2+ signaling and indicate that this alteration of macrophage activation contributes to inhibition of phagosome-lysosome fusion and promotion of intracellular mycobacterial survival.

Introduction

Tuberculosis is a global health problem with enormous impact on human morbidity and mortality (1). Approximately one-third of the world's population is infected with Mycobacterium tuberculosis, and three million people die of active disease each year. An essential virulence characteristic of M. tuberculosis is its ability to successfully parasitize monocytes and macrophages, despite the presence of multiple microbicidal mechanisms within these cells (2). The molecular mechanisms responsible for the intracellular survival of M. tuberculosis are unknown. Multiple host-pathogen interactions may impact the fate of M. tuberculosis within human monocytes and macrophages and, consequently, the presence or absence of disease in infected individuals. The earliest interaction between M. tuberculosis and mononuclear phagocytes is the binding and uptake of the bacilli by plasma membrane phagocytic receptors (3). Phagocytosis of M. tuberculosis, in either the presence or absence of serum, is predominantly mediated by the complement receptors CR1, CR3, and CR4 (4-6). In human monocytes and monocyte-derived macrophages (MDMs), the β2-integrin CR3 is the major phagocytic receptor for M. tuberculosis, and anti-CR3 Abs inhibit ingestion of tubercle bacilli by ~80% (5). In serum-free conditions, the macrophage mannose receptor also mediates mycobacterial phagocytosis, although its contribution to ingestion of M. tuberculosis is much less in the presence of complement proteins (7). The ability of M. tuberculosis to enter macrophages via the CR-mediated phagocytic pathway may contribute to its intracellular survival, as, in many cases, CR ligation does not trigger phagocyte microbicidal responses (8, 9).
Studies with murine macrophages demonstrate that the class of phagocytic receptor that mediates ingestion of M. tuberculosis has a strong influence on the extent of phagosomal maturation. CR-mediated phagocytosis of M. tuberculosis results in a phagosome that is unable to fuse with lysosomes (10). Conversely, if the bacillus is opsonized with M. tuberculosis-specific Abs, its ingestion is mediated by macrophage FcγRs, and the mycobacterial phagosome undergoes full maturation to a phagolysosome (11). These results suggest that FcγR-mediated ingestion of M. tuberculosis must mobilize signaling pathways that are distinct from those that are activated by CRs, which are responsible for the difference in phagosome maturation. The relevance of these observations to human disease has been questioned, because the antimycobacterial activity of murine macrophages is much more easily demonstrated in vitro than that of human macrophages. Although multiple investigators have demonstrated that CR-dependent ingestion of M. tuberculosis by human macrophages is also followed by defective phagosomal maturation (12), to our knowledge, no data is available on the effects of Ab opsonization on survival of M. tuberculosis within human macrophages. Furthermore, the biochemical mechanisms responsible for incomplete maturation of M. tuberculosis-containing phagosomes are unknown. Many distinct signal transduction pathways contribute to the activation of phagocyte antimicrobial defenses, but their integrative function and relative priority in the killing of specific pathogens is unknown. Stimulation-induced increases in cytosolic Ca2+ concentration ([Ca2+]c) are essential for activation of the phagocyte respiratory burst, production of nitric oxide, secretion of microbicidal granule constituents, and synthesis of proinflammatory mediators, including TNF-α (13-17). Based on these considerations, three questions of specific relevance to the pathogenesis of tuberculosis were investigated in this study: (a) Does virulent M. tuberculosis alter Ca2+-mediated signal transduction in human macrophages? If so, (b) Do these alterations in macrophage Ca2+ signaling contribute to incomplete phagosomal maturation and intracellular survival of M. tuberculosis, and (c) Does the route of entry into human macrophages, i.e., via CR- versus FcγR-mediated phagocytosis, affect the intracellular viability of M. tuberculosis?

Abs

Polyclonal (A-188) and monoclonal Abs (CS-40, CS-35) to lipoarabinomannan (LAM) from M. tuberculosis were provided by Drs. Patrick Brennan and John Belisle (Colorado State University, Fort Collins, CO; National Institutes of Health grant AI-75320). A-188 and CS-40 are specific to LAM from the virulent Erdman strain of M. tuberculosis, whereas CS-35 recognizes an epitope common to LAMs from several strains of M. tuberculosis. mAbs to CD18 (H52) and lysosome-associated membrane protein (LAMP)1 were obtained from the Developmental Studies Hybridoma Bank (University of Iowa, Iowa City, IA). F(ab′)2 fragments of α-CD18 were prepared by digestion with pepsin as previously described (20) and partially purified by protein G-Sepharose chromatography. Goat anti-human C3 IgG was obtained from Atlantic Antibodies, Inc.

Preparation of Macrophage Monolayers
PBMCs were isolated from healthy, purified protein derivative (PPD)-negative, adult volunteers and cultured in Teflon wells for 5 d in RPMI 1640 with 20% fresh autologous serum as previously described (21). Macrophages were purified by adherence to chromic acid-cleaned, collagen-coated glass coverslips for 2 h at 37 °C in 5% CO2. Monolayers were washed repeatedly and incubated in RPMI, 20 mM Hepes (RH), pH 7.4, 2.5% serum for use in experiments. Effects of experimental manipulations on macrophage viability were assessed by exclusion of trypan blue, and monolayer density was determined by nuclei counting with naphthol blue-black stain (22).

Bacteria

The Erdman, H37Rv, and H37Ra strains of M. tuberculosis were obtained from the American Tissue Type Culture Collection and were cultured and prepared for use in experiments as noted previously (5, 7, 21). In brief, aliquots of frozen M. tuberculosis stocks in 7H9 broth were thawed, cultured for 9 d on 7H11 agar at 37 °C in 5% CO2/95% air, scraped from agar plates, and suspended in RH by vortexing briefly in an Eppendorf tube containing two glass beads. After settling, the supernatant was transferred to a new tube and allowed to settle once again. Heat killing was accomplished by incubating this final suspension at 100 °C for 10 min and confirmed by absence of CFUs (23, 24). Gamma-irradiated (killed) M. tuberculosis was provided by Drs. Patrick Brennan and John Belisle (Colorado State University). For experiments requiring complement-opsonized (C-op) bacilli, aliquots of M. tuberculosis (live, heat-killed, or gamma-irradiated) were preopsonized in 50% human serum for 30 min at 37 °C and then washed three times in PBS. Ab opsonization of M. tuberculosis was achieved by incubating the bacilli with 10 μg/ml CS-40 or CS-35 or 10 μl of A-188 for 30 min, followed by washing in PBS. After opsonization, M. tuberculosis preparations were resuspended in HBSS using glass beads, and clumped organisms were allowed to settle, as described above. M. tuberculosis suspensions were counted in a Petroff-Hauser chamber, and the concentration of bacteria was adjusted for use in experiments. Final M. tuberculosis preparations contained >95% single bacteria, with ≥75% viability by determination of CFUs (5, 21). The effects of various experimental manipulations on the viability of M. tuberculosis were also determined by analysis of CFUs.

Analysis of Phagocytosis

Phagocytosis of M. tuberculosis was determined as previously described (5, 21). In brief, macrophage monolayers adherent to glass coverslips (~2 × 10^5 MDMs per coverslip) in 24-well tissue culture plates were incubated with M. tuberculosis (multiplicity of infection [MOI] of 10:1) in RH, 2.5% autologous nonimmune serum. After incubation for various intervals, monolayers were washed repeatedly to remove nonadherent bacteria, fixed in 10% formalin, and stained with auramine-rhodamine for 20 min (5, 21). Coverslips were washed with distilled water and incubated with acid alcohol for 3 min, washed, and incubated in KMnO4 for 2 min. Adherent bacteria were quantitated by fluorescence microscopy of triplicate coverslips for each experimental condition, and results of a set of experiments were expressed as the mean (± SEM) number of adherent M. tuberculosis per 100 macrophages (phagocytic index; a short sketch of this computation follows below). Previous electron microscopic studies of this assay have indicated that all adherent mycobacteria are phagocytosed, both under control conditions and in experiments in which phagocytosis is inhibited or augmented (5, 21).
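A minimal sketch of the phagocytic index computation described above, assuming per-coverslip counts of adherent bacilli and macrophages scored by microscopy; the counts shown are invented for illustration:

```python
# Phagocytic index: mean (± SEM) adherent M. tuberculosis per 100 macrophages,
# averaged over triplicate coverslips. Counts below are hypothetical.
import statistics

# (bacteria counted, macrophages counted) per coverslip
coverslips = [(412, 250), (388, 240), (451, 265)]

indices = [100 * bacteria / mdms for bacteria, mdms in coverslips]
mean_pi = statistics.mean(indices)
sem_pi = statistics.stdev(indices) / len(indices) ** 0.5

print(f"phagocytic index = {mean_pi:.1f} ± {sem_pi:.1f} (mean ± SEM, n=3)")
```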
Western Blot to Detect C3 Fixation to M. tuberculosis

Heat-killed or live M. tuberculosis was incubated in 50% human serum for 30 min at 37 °C. The bacteria were recovered by centrifugation at 12,000 g for 10 min, washed twice, and solubilized in SDS sample buffer (62.5 mM Tris/HCl, pH 6.8, 2% SDS, 10% glycerol, 75 mM β-ME, 0.0025% bromphenol blue). After SDS-PAGE on 10% gels, proteins were transferred to polyvinylidene difluoride membranes, Western blotted with goat anti-human C3 IgG, and detected by enhanced chemiluminescence as described (5, 21).

Determination of Intracellular Calcium

Calcium measurements were performed at the Cell Fluorescence Core Facility (Veterans Affairs Medical Center, Iowa City, IA). MDMs were adhered to collagen-coated glass coverslips and incubated with 10 μM Fura2-AM in HBSS for 30 min at 37 °C. [Ca2+]c in single MDMs, or the mean [Ca2+]c of groups of 10-20 cells, was determined using a Photoscan II spectrofluorometer (Photon Technology Intl.) with a Nikon microscope (Nikon, Inc.). [Ca2+]c was determined from the ratio of fluorescence emission intensities at 510 nm after excitation at 340 and 380 nm, respectively. Background fluorescence intensities at each excitation wavelength were subtracted from each data point. The ratios of the corrected fluorescence intensities (R) were then converted to the actual calcium concentration using the standard ratiometric formula [Ca2+]c = Kd × (R − Rmin)/(Rmax − R), where the maximum and minimum ratios, as well as the dissociation constant, were empirically derived from [Ca2+] curves generated with the instrument (a sketch of this conversion appears below). In certain experiments, the effects of the absence of extracellular Ca2+ were determined by incubation of MDMs in Ca2+-free HBSS with 3 mM EGTA. To chelate cytosolic Ca2+, MDMs were preincubated with MAPTAM (15-25 μM) for 30 min at 37 °C (26, 27).

Analysis of CFUs

MDMs, adherent to collagen-coated glass coverslips, were infected at an MOI of 1:1 with Erdman M. tuberculosis (preopsonized with complement or anti-LAM Abs) in RH, 2.5% heat-inactivated autologous serum. After 1 h, the monolayers were washed and repleted with buffer containing 1% heat-inactivated serum. 24, 48, and 96 h after infection, supernatants were transferred to sterile microfuge tubes, monolayers were lysed with ice cold sterile water, and SDS was added to a final concentration of 0.25%. Lysates were combined with their corresponding supernatants and resuspended in 7H9, and serial dilutions were plated in duplicate on 7H11 agar. Colonies were counted 2 wk after plating. To determine the effect of elevation of MDM intracellular [Ca2+] on mycobacterial survival, monolayers were infected at a 1:1 ratio with C-op M. tuberculosis in HBSS containing the Ca2+ ionophore A23187 (1 μM) or an equivalent volume of ethanol solvent (0.1%). After 20 min, monolayers were washed and repleted with 20 μg/ml phosphatidylcholine vesicles, 1% autologous serum in RH, to reverse the A23187-mediated influx of extracellular Ca2+ (28). DPPC vesicles were prepared by evaporation of a chloroform/methanol (2:1) solution under N2 and resuspension in HBSS by sonication for 10 min at 25 °C (21). CFUs were counted as described above.
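The ratio-to-concentration conversion above follows the standard Fura-2 calibration; a minimal sketch is below. The calibration constants (Kd, Rmin, Rmax) are placeholders — in the experiments they were derived empirically from instrument-generated [Ca2+] curves — and any scaling factor between free and bound dye is assumed to be folded into the effective Kd.

```python
# Convert background-corrected Fura-2 ratios (340/380 nm) to [Ca2+]c.
# Calibration values below are illustrative placeholders, not the
# empirically derived constants used in the experiments.
KD_EFF = 224.0   # effective dissociation constant, nM (assumed)
R_MIN = 0.25     # ratio at zero Ca2+ (assumed)
R_MAX = 5.0      # ratio at saturating Ca2+ (assumed)

def ca_from_ratio(r: float) -> float:
    """[Ca2+]c (nM) from the corrected fluorescence ratio R."""
    if not R_MIN < r < R_MAX:
        raise ValueError("ratio outside calibrated range")
    return KD_EFF * (r - R_MIN) / (R_MAX - r)

# Example: a resting cell vs. a stimulated peak (hypothetical ratios)
print(f"{ca_from_ratio(1.2):.0f} nM at rest")   # ~56 nM, within 50-150 nM
print(f"{ca_from_ratio(3.5):.0f} nM at peak")   # ~485 nM, within 300-800 nM
```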
Confocal Microscopy

The acidophilic dye LysoTracker Red (Molecular Probes, Inc.) was incubated at a 1:10,000 dilution with MDM monolayers in RH, 2.5% autologous serum, for 2 h at 37 °C. Unincorporated dye was removed by washing, followed by infection with M. tuberculosis for 1 h. After removal of nonadherent bacilli, LysoTracker Red was added to each well at the same concentration used for initial labeling. 24 h after infection, MDMs were fixed in 3.75% paraformaldehyde for 15 min and permeabilized with ice cold methanol/acetone (1:1). The localization of M. tuberculosis was ascertained by incubating monolayers with auramine for 20 min at 25 °C, followed by a 3-min incubation in acid alcohol. After thorough washing, monolayers were blocked with PBS, 5% BSA, 10% goat serum for 1 h. In parallel experiments using Abs to the lysosomal protein markers LAMP-1, cathepsin D, and CD63, coverslips were incubated with the appropriate primary Abs (diluted in blocking solution) for 1 h at 25 °C, washed, and then incubated with the corresponding fluorophore-conjugated secondary anti-IgG Ab for 1 h. After repeated washings, coverslips were mounted with buffered glycerol solution and sealed with nail polish. Confocal microscopy was performed on a Zeiss Laser Scan Inverted 510 microscope (Carl Zeiss, Inc.). An argon/krypton laser (excitation, 488 nm; emission, 505-530 nm) was used for detection of auramine fluorescence, and a helium/neon laser (excitation, 543 nm; emission, >585 nm) for detection of Texas Red and LysoTracker Red. The percentage of M. tuberculosis phagosomes colocalizing with the marker of interest was determined by counting >25 phagosomes from at least 10 different fields per condition.

Analysis of Data

Data from each experimental group were subjected to an analysis of normality and variance. Differences between experimental groups composed of normally distributed data were analyzed for statistical significance using Student's t test. Nonparametric evaluation of other data sets was performed with the Wilcoxon Rank Sum test (29).

C-op Zymosan Induces an Increase in Cytosolic Ca2+ in Human Macrophages

As binding and phagocytosis of M. tuberculosis by macrophages in the presence or absence of serum is primarily mediated by CRs (4, 5, 30), we first characterized macrophage Ca2+ signaling induced by the model particulate CR ligand, C-op zymosan (COZ) (31, 32). Previous studies in neutrophils have demonstrated that COZ induces a significant increase in [Ca2+]c due to stimulation of CR3 and, to a lesser extent, CR1 (33-37). In addition to CR3 and CR1, macrophages, unlike neutrophils, also express high levels of CR4 (38). Therefore, it was necessary to characterize in detail the effects of COZ on [Ca2+]c in human macrophages to serve as a control for subsequent experiments with M. tuberculosis. MDMs were purified from PBMCs of healthy, PPD-negative adult donors after 5-d culture in RPMI, 20% autologous serum, by adherence to collagen-coated glass coverslips (21). After loading of MDMs with the Ca2+-sensitive dye Fura2 (10 μM), monolayers were washed and placed in Ca2+, Mg2+-containing HBSS (CHBSS). Levels of [Ca2+]c in single MDMs were determined by fluorescence ratio imaging of Fura2 (25). The basal level of [Ca2+]c in resting MDMs ranged from ~50 to 150 nM (Fig. 1 A). Incubation with COZ at a particle/cell ratio of 10:1 resulted in a rapid increase in macrophage [Ca2+]c, which peaked in the 300-800 nM range and gradually returned to basal levels over the next 8-10 min (Fig.
1 A; fold-increase in [Ca2+]c = 4.6; range, 2.4-6.5-fold; n = 20). Subsequent addition of thapsigargin (1 μM), which inhibits the Ca2+-ATPase responsible for reaccumulation of [Ca2+]c into endoplasmic reticulum stores (39), resulted in a further increase in macrophage [Ca2+]c. This thapsigargin-induced increase in [Ca2+]c provided verification of the intact capacity of the intracellular Ca2+ storage pool and the functional integrity of the capacitative Ca2+ entry mechanism (40, 41). The COZ-induced increase in macrophage [Ca2+]c was due to both release of Ca2+ from intracellular stores and influx of extracellular Ca2+, as the average magnitude and duration of the elevated [Ca2+]c was significantly attenuated, but not abolished, by incubation of MDMs in Ca2+, Mg2+-free HBSS containing 3 mM EGTA (data not shown). Under these conditions, the residual elevation in [Ca2+]c is due to release from intracellular Ca2+ stores in the endoplasmic reticulum, as evidenced by the increase in Fura2 fluorescence upon addition of thapsigargin. Preincubation of MDMs with the intracellular Ca2+ chelator MAPTAM (12.5 μM), followed by placement in Ca2+-free HBSS, 3 mM EGTA (EHBSS), completely inhibited the increase in [Ca2+]c due to COZ (data not shown). To test the hypothesis that COZ-induced [Ca2+]c elevations were dependent on stimulation of the β2-integrins CR3 (CD11b/CD18) and CR4 (CD11c/CD18), MDMs were preincubated with F(ab′)2 fragments of α-CD18 mAb (H52). Subsequent addition of COZ did not cause a significant change in [Ca2+]c (Fig. 1 B). Inhibition of the increase in [Ca2+]c by α-CD18 F(ab′)2 fragments was specific for CR-dependent stimuli, as there was no effect on the [Ca2+]c elevation stimulated by platelet activating factor (PAF; Fig. 1 B). These experiments demonstrate that, similar to neutrophils (35, 36, 42), CR-dependent stimulation of human macrophages with COZ results in a marked increase in [Ca2+]c, which is derived from both intracellular and extracellular Ca2+ pools. Furthermore, the β2-integrins, CR3 and CR4, are responsible for the majority of macrophage CR-stimulated Ca2+ signaling.

Phagocytosis of M. tuberculosis Does Not Cause a Significant Increase in Macrophage [Ca2+]c

Addition of thapsigargin to M. tuberculosis-infected MDMs elicited an increase in [Ca2+]c (Fig. 2 A), the magnitude and duration of which were comparable to that of uninfected MDMs. This response to thapsigargin confirmed the adequacy of both intracellular Ca2+ stores and the capacitative coupling of store depletion to the influx of extracellular Ca2+ in M. tuberculosis-infected MDMs. The lack of an increase in macrophage [Ca2+]c was not due to a failure to bind or ingest M. tuberculosis. At an MOI of 10:1, the mean (± SEM) number of ingested bacilli per macrophage was 5.36 ± 0.41, and 73 ± 4% of MDMs ingested at least one tubercle bacillus (21). To ensure that each MDM phagocytosed at least one tubercle bacillus, select single-cell [Ca2+]c determinations were conducted with increased MOIs of 30:1 and 100:1. At these higher levels of infection, each MDM phagocytosed at least one bacillus, as determined by subsequent staining of cell monolayers with auramine-rhodamine (data not shown). However, even at MOIs of 30:1 (data not shown) and 100:1 (Fig. 2), no significant increase in [Ca2+]c was observed. Ca2+-mediated signal transduction is characterized by a complex series of positive and negative regulatory circuits, as well as distinct temporal and spatial determinants of signal propagation (49-52). To determine whether the defect in Ca2+ signaling accompanying infection with M.
Ca2+-mediated signal transduction is characterized by a complex series of positive and negative regulatory circuits, as well as distinct temporal and spatial determinants of signal propagation (49-52). To determine whether the defect in Ca2+ signaling accompanying infection with M. tuberculosis resulted in a global depression of macrophage Ca2+-dependent signal transduction, we tested the response of infected MDMs to PAF, a potent Ca2+-mobilizing ligand that binds to a G protein-coupled receptor (53). 10 min after infection with Erdman M. tuberculosis, macrophages were incubated with 100 nM PAF, and levels of [Ca2+]c were determined via fluorescence of Fura2. As demonstrated in Fig. 3 A, PAF induced a rapid and significant rise in [Ca2+]c in infected macrophages that did not differ in onset, amplitude, or duration from the response of uninfected cells to PAF (data not shown). Similarly, addition of PAF concurrent with Erdman M. tuberculosis also resulted in an intact [Ca2+]c response (Fig. 3 B). These PAF-induced elevations in [Ca2+]c indicate that infection with M. tuberculosis does not render the macrophage refractory to Ca2+-mediated signal transduction. To exclude a potential inhibitory

Inhibition of Macrophage Ca2+ Signaling Is Dependent on the Viability of M. tuberculosis. The specific virulence determinants that enable M. tuberculosis to survive within the phagosomes of human macrophages are unknown. In addition, there are no avirulent strains of M. tuberculosis that may be used to define the molecular mechanisms that regulate essential pathogenic interactions between tubercle bacilli and mononuclear phagocytes. Despite these limitations, considerable evidence indicates that the failure of M. tuberculosis-containing phagosomes to mature into acidic microbicidal phagolysosomes is an important component of tuberculous pathogenesis (24, 54, 55). Clemens and Horwitz have demonstrated that this inhibition of phagosomal maturation is dependent on the viability of M. tuberculosis, as phagosomes containing heat-killed M. tuberculosis develop into mature phagolysosomes (23, 24). We tested the hypothesis that the M. tuberculosis-induced inhibition of macrophage Ca2+ signaling would demonstrate a similar requirement for bacterial viability. Erdman M. tuberculosis was killed by heating to 100°C for 10 min, followed by opsonization in autologous, nonimmune serum as described above for live bacilli (23, 24). Particular care was taken to ensure that the preparation of heat-killed M. tuberculosis consisted of >95% single bacilli, as noted in Materials and Methods. The loss of viability of heat-killed M. tuberculosis was verified by absence of growth on 7H11 agar. Heat-killed Erdman M. tuberculosis induced a rapid and significant rise in macrophage [Ca2+]c (Fig. 4 A; fold-increase in [Ca2+]c = 3.8; range, 2.1-6.5-fold; n = 16), which closely resembled that induced by COZ. Utilization of an alternate protocol for heat killing (80°C, 60 min; reference 5) resulted in similar stimulation of increased [Ca2+]c by dead, C-op M. tuberculosis (data not shown). The increase in levels of macrophage [Ca2+]c induced by heat-killed M. tuberculosis was completely inhibited by preincubation of these cells with F(ab′)2 fragments of α-CD18 mAb (Fig. 4 B), indicating a major role for CR3 and/or CR4 in the initiation of this response. Studies with Ca2+-free media (Fig. 4 C) and intracellular Ca2+ buffering (Fig. 4 D) indicated that the increase in [Ca2+]c stimulated by heat-killed M. tuberculosis resulted from both release of Ca2+ from intracellular stores as well as influx of extracellular Ca2+. As heat killing of M. tuberculosis may induce changes in mycobacterial surface structures that could alter MDM Ca2+ signaling by mechanisms other than the loss of bacterial viability, similar studies were conducted with M. tuberculosis that had been killed by gamma irradiation.
Incubation of MDMs with gamma-irradiated M. tuberculosis resulted in an increase in [Ca2+]c (Table I). These results demonstrate that the lack of increase in macrophage [Ca2+]c during phagocytosis of C-op M. tuberculosis is associated with increased intracellular survival of mycobacteria.

Elevation of Macrophage [Ca2+]c Is Associated with Reduced Survival of C-op M. tuberculosis. The previous set of experiments demonstrated that the opsonin on M. tuberculosis and, consequently, the class of phagocytic receptor that primarily mediated its ingestion were major determinants of the extent of mycobacterial survival within human MDMs. Although these observations correlated with the difference in Ca2+ mobilization between Ab- and C-op M. tuberculosis, a causal relationship between [Ca2+]c and intracellular mycobacterial viability cannot be inferred, as multiple differences exist between FcγR- and CR-mediated phagocytosis (8, 9). Therefore, to directly test our hypothesis, we used the Ca2+ ionophore, A23187, to modulate the cytosolic Ca2+ levels in human MDMs during phagocytosis of C-op M. tuberculosis. After loading with Fura2 and washing to remove unincorporated dye, MDMs were placed in HBSS solutions in which the concentration of extracellular free Ca2+ was buffered in the range of 225-700 nM with EGTA. These levels of Ca2+ were chosen to approximate the [Ca2+]c that occurred in MDMs stimulated by COZ, C-op dead M. tuberculosis, and Ab-op mycobacteria. Addition of 1 µM A23187 resulted in a rapid equilibration of the intracellular and extracellular Ca2+ concentrations (Fig. 6 A). To mimic the temporally restricted elevation of [Ca2+]c initiated by the particulate stimuli noted above, the effects of A23187 were reversed after 20 min by addition of phosphatidylcholine vesicles (20 µg/ml) (28), which resulted in a rapid return of [Ca2+]c to a level approximating that of resting macrophages (Fig. 6 A). To examine the effects of cytosolic Ca2+ levels on survival of intracellular M. tuberculosis, parallel sets of infected MDM monolayers were lysed and viable mycobacteria quantitated 24 and 48 h after infection by analysis of CFUs. Compared with untreated, M. tuberculosis-infected macrophages, MDMs incubated with A23187 during infection contained ~50% less viable M. tuberculosis at the 24- and 48-h time points (Fig. 6 B and Table II). These results were not due to a direct bactericidal effect of A23187, as incubation of M. tuberculosis suspensions in the calcium ionophore, followed by addition of phosphatidylcholine vesicles, under the exact conditions applied to infected macrophages, did not result in alteration of mycobacterial viability (data not shown). These results indicated that ionophore-induced elevation of macrophage [Ca2+]c during phagocytosis of C-op M. tuberculosis was associated with decreased intracellular survival of the bacilli.
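The viability readout behind the ~50% figure is a comparison of CFU counts between treated and untreated infected monolayers. A minimal sketch of that computation; the triplicate counts are invented for illustration, not the study's data.

    import numpy as np

    def percent_reduction(cfu_treated, cfu_control):
        # percent fewer viable bacilli in treated vs. control infected MDMs
        return 100.0 * (1.0 - np.mean(cfu_treated) / np.mean(cfu_control))

    control = [2.1e5, 1.8e5, 2.4e5]   # invented CFU/monolayer, untreated, 24 h
    a23187  = [1.0e5, 1.1e5, 0.9e5]   # invented CFU/monolayer, A23187-treated
    print(f"{percent_reduction(a23187, control):.0f}% fewer viable bacilli")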
Elevation of Macrophage Cytosolic Ca2+ Correlates with Maturation of M. tuberculosis-containing Phagosomes to Acidic Phagolysosomes. A key aspect of tuberculous pathogenesis is the ability of M. tuberculosis to limit the maturation of its phagosome, thereby preventing the development of microbicidal phagolysosomes (10-12, 23, 24, 54, 55). We tested the hypothesis that mycobacterial inhibition of macrophage Ca2+ signaling contributes to retardation of phagosomal maturation (inhibition of phagosome-lysosome [P-L] fusion) by (a) characterizing the degree of maturation of phagosomes containing either live or killed C-op M. tuberculosis and (b) determining the effects of modulation of [Ca2+]c on P-L fusion. The extent of maturation of M. tuberculosis-containing phagosomes 24 h after infection was characterized by confocal microscopy, using three lysosomal protein markers (cathepsin D, LAMP-1, CD63), combined with the determination of phagosomal pH with the acidophilic fluorophore, LysoTracker Red. The three protein markers were used in combination, because use of a single marker can provide ambiguous results. For example, LAMP-1 localizes to both late endosomes and lysosomes (24). LysoTracker Red was employed for assessment of phagosomal acidification, as this fluorophore is stable to fixation, ensuring that biosafety conditions are maintained during confocal microscopy. 24 h after infection of human MDMs, live, C-op M. tuberculosis was located in immature phagosomes that exhibited low amounts of the lysosomal protein markers. The percentages of phagosomes positive for cathepsin D, LAMP-1, and CD63 were 32, 37, and 25%, respectively (Fig. 7). Additionally, only 41% of phagosomes containing live M. tuberculosis colocalized with LysoTracker Red. These results are in agreement with previous characterizations of the maturational state of M. tuberculosis-containing phagosomes in macrophages, as determined by epifluorescence, confocal immunofluorescence, and cryoimmunoelectron microscopy (10-12, 23, 24, 54, 55). To further evaluate the potential causal role of macrophage cytosolic Ca2+ in P-L fusion, the maturation of phagosomes containing live, C-op M. tuberculosis was determined after transient elevation of [Ca2+]c with A23187, followed by quenching with phosphatidylcholine vesicles. Ionophore-induced elevation of [Ca2+]c to ~500 nM for 20 min during phagocytosis of live, C-op M. tuberculosis resulted in a striking reversal of the block in phagosomal maturation. The percentage of phagosomes positive for cathepsin D increased from 32 (control) to 92%, LAMP-1 positivity increased from 37 to 82%, and CD63 positivity increased from 25 to 83% (Fig. 7). Elevation of [Ca2+]c also promoted increased phagosomal localization of LysoTracker Red, from a control value of 41 to 89% in MDMs treated with A23187. Elevation of [Ca2+]c was required for the A23187-induced increase in P-L fusion, as incubation of macrophages in EHBSS during ionophore treatment resulted in a profile of phagosomal staining for the lysosomal protein markers and LysoTracker Red that was indistinguishable from values for control, untreated MDMs (Fig. 7).
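Phagosomal maturation here is scored as a proportion (marker-positive phagosomes out of >25 counted per condition). The text does not name the test used to compare such proportions; Fisher's exact test is one standard option, sketched below with invented phagosome tallies rather than the study's counts.

    from scipy.stats import fisher_exact

    # invented tallies: cathepsin D-positive phagosomes out of 100 counted
    control_pos, control_n = 32, 100   # live C-op M. tuberculosis
    a23187_pos, a23187_n = 92, 100     # live bacilli + transient A23187

    table = [[control_pos, control_n - control_pos],
             [a23187_pos, a23187_n - a23187_pos]]
    odds, p = fisher_exact(table)
    print(f"odds ratio {odds:.2f}, p = {p:.2g}")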
In marked contrast to the intracellular compartmentation of live tubercle bacilli, phagosomes containing dead (gamma-irradiated) M. tuberculosis progressed to fully mature phagolysosomes, as determined by high levels of all three lysosomal protein markers (Fig. 8). 88% of phagosomes containing killed M. tuberculosis were positive for cathepsin D, whereas the corresponding values for LAMP-1 and CD63 were 77 and 76%, respectively. 88% of these phagosomes accumulated LysoTracker Red, consistent with their acidification. Incubation of macrophages in EHBSS or chelation of cytosolic Ca2+ with MAPTAM resulted in failure of phagosomes containing dead M. tuberculosis to accumulate lysosomal protein markers (Fig. 8). Compared with the percentage of phagosomes positive for cathepsin D, LAMP-1, and CD63 in Ca2+-containing media noted above, removal of extracellular Ca2+ resulted in significantly less colocalization with all three lysosomal protein markers: 66, 30, and 47%, respectively. Chelation of intracellular Ca2+ with 12.5 µM MAPTAM resulted in even more pronounced reductions in phagosomal accumulation of lysosomal markers: cathepsin D, 37%; LAMP-1, 24%; and CD63, 38%. As MAPTAM produces more significant reductions in basal and stimulated [Ca2+]c compared with EGTA (Fig. 4), these results are fully consistent with the hypothesis that [Ca2+]c regulates the maturation of phagosomes containing dead M. tuberculosis. Interestingly, MAPTAM but not EGTA produced significant decreases in accumulation of LysoTracker Red: untreated control, 88%; MAPTAM, 49%; and EGTA, 86% (Fig. 8). As removal of extracellular Ca2+ reduces but does not eliminate the increase in [Ca2+]c induced by killed M. tuberculosis, these results are consistent with the hypothesis that a lesser increase in [Ca2+]c is required for phagosomal acidification than for accumulation of lysosomal protein markers (especially LAMP-1 and CD63). In summary, the results of characterization of phagosome maturation via confocal microscopy strongly support the hypothesis that levels of cytosolic Ca2+ regulate P-L fusion in M. tuberculosis-infected human macrophages. In all cases, elevation of macrophage [Ca2+]c correlated with maturation of M. tuberculosis-containing phagosomes to phagolysosomes, and lack of elevation of [Ca2+]c correlated with incomplete phagosomal maturation. Furthermore, ionophore-induced increases in [Ca2+]c and the accompanying maturation of phagosomes containing live C-op M. tuberculosis correlated with decreased survival of mycobacteria within human macrophages.

Discussion

Macrophages possess multiple microbicidal mechanisms to eliminate phagocytosed microorganisms and, consequently, represent a strategic target for inactivation by potential pathogens (56). The molecular mechanisms that allow M. tuberculosis to successfully survive and replicate within mononuclear phagocytes are unknown. Our overall hypothesis is that Ca2+-dependent signaling mechanisms are potential targets for inhibition of macrophage activation by M. tuberculosis, as [Ca2+]c is a critical regulator of several antimicrobial responses, including generation of reactive oxygen and nitrogen intermediates, secretion of microbicidal proteins and peptides, and synthesis of antimycobacterial cytokines, such as TNF-α (13, 14, 57). This study demonstrates that multiple strains of pathogenic M. tuberculosis inhibit Ca2+-mediated signal transduction during infection of human macrophages. Inhibition of macrophage Ca2+ signaling is tightly coupled to the failure of mycobacterial phagosomes to mature into acidic, microbicidal phagolysosomes and to successful intracellular survival of M. tuberculosis. Two determinants of mycobacteria-induced inhibition of macrophage Ca2+ signaling have been defined. First, the bacilli must be viable, as killing of M. tuberculosis by heat or gamma irradiation reverses the inhibition of Ca2+-mediated signal transduction. Although the basis of this requirement is unknown, the dependence on mycobacterial viability has previously been demonstrated for the inhibition of P-L fusion in M. tuberculosis-infected human macrophages (12, 23, 24).
Second, infection of macrophages in the absence of increased [Ca2+]c is specific for phagocytosis via CRs. Redirecting the phagocytosis of M. tuberculosis to FcγRs, via opsonization with specific polyclonal or monoclonal Abs, reverses mycobacteria-induced impairment of macrophage Ca2+ signaling and, more importantly, reduces the intracellular survival of M. tuberculosis within human MDMs. The reduction in the intracellular survival of Ab-op bacilli was not due to a difference in phagocytosis, as both the phagocytic index (the number of bacilli ingested per macrophage) and the percentage of MDMs that phagocytosed at least one bacillus did not differ between the two groups. Although mechanisms other than induction of elevated [Ca2+]c may contribute to the decreased viability of Ab-op bacilli, direct evidence for a causal role of [Ca2+]c in regulating the survival of M. tuberculosis within human macrophages was obtained with the calcium ionophore, A23187. Thus, in this in vitro model of primary infection of human macrophages, the lack of an increase in [Ca2+]c during CR-mediated phagocytosis correlated with inhibition of P-L fusion and increased intracellular survival of M. tuberculosis. Conversely, elevation of [Ca2+]c was associated with increased P-L fusion and reduced intramacrophage viability. The essential role of [Ca2+]c in triggering multiple phagocyte antimicrobial defenses suggests that the inhibition of phagocytosis-initiated Ca2+ signaling confers a survival advantage on M. tuberculosis at the time of its entry into macrophages. Immunoelectron microscopy of M. tuberculosis-containing phagosomes supports the hypothesis that the bacilli's protected "intracellular niche" is established at a relatively early time point during infection of macrophages (24, 54). The large number of Ca2+-dependent biochemical reactions and cellular functions suggests that M. tuberculosis-induced inhibition of changes in [Ca2+]c may compromise several components of macrophage activation and antimicrobial function. The use of the term "inhibition" to characterize the lack of increase in [Ca2+]c during phagocytosis of M. tuberculosis is meant in an operational sense, as the mechanism remains unknown. Lack of initiation of a Ca2+ signaling pathway or its rapid termination could both yield the observed results. As CRs, especially CR3, are the primary mediators of phagocytosis of M. tuberculosis in human MDMs (5). As the fluorescent detection of [Ca2+]c is highly sensitive, our hypothesis is that no Ca2+ signal is initiated during phagocytosis of live M. tuberculosis. However, the biochemical signals that normally link CRs to increases in [Ca2+]c are unknown, and, therefore, we cannot ascertain whether these intermediate steps are "not initiated" or "initiated but inhibited." These mechanistic uncertainties are an additional reason that we have used the more general phrase, "inhibition of Ca2+ signaling." However, we recognize that further definition of the mechanism(s) by which CR-induced phagocytosis of C-op M. tuberculosis occurs in the absence of a change in macrophage [Ca2+]c may necessitate a revision of our current model and terminology. Comparison of our results with those recently reported by Majeed et al. (19) illustrates both the similarities and differences in the interactions of M. tuberculosis with mononuclear phagocytes versus neutrophils (PMNs). Although neutrophil ingestion of the attenuated H37Ra strain of M.
tuberculosis also occurred in the absence of a rise in [Ca2+]c, PMNs killed 73% of phagocytosed tubercle bacilli in 2 h (19). Whether induction of a rise in Ca2+ via physiologic or pharmacologic intervention would augment PMN P-L fusion or bactericidal activity toward M. tuberculosis was not reported, and no virulent strains of M. tuberculosis were used (19). Furthermore, M. tuberculosis does not successfully parasitize human neutrophils, and several studies have demonstrated that PMNs kill intracellular M. tuberculosis by both oxygen-dependent and -independent mechanisms (60-62). Finally, caution is required in comparing Ca2+-mediated signal transduction of neutrophils with that of macrophages, as these two classes of phagocytes have been reported to differ in the Ca2+ dependence of antimicrobial functions (26, 59). Zimmerli et al. have reported that human macrophages do not require an increase in [Ca2+]c for fusion of lysosomes with phagosomes containing COZ, coagulase-negative staphylococci, or the vaccine strain M. bovis BCG (27). The differences between their results and ours may be due, at least in part, to differences in both the methods of measuring phagosomal maturation and the characteristics of the phagocytosed particles. Zimmerli et al. used colocalization of the particles with LAMP-1 and endocytosed rhodamine dextran to define maturation of phagolysosomes. However, neither LAMP-1 nor dextran localize specifically to lysosomes, as both markers label late endosomes as well (24, 27). Perhaps the Ca2+ requirement for P-L fusion is influenced by characteristics of the particle, e.g., its virulence; M. tuberculosis is a highly virulent intracellular pathogen, whereas coagulase-negative staphylococci are extracellular pathogens of low virulence, BCG is nonpathogenic, and zymosan is a cell wall preparation from the nonpathogenic yeast, Saccharomyces cerevisiae. Finally, the lack of statistically significant differences in P-L fusion between control and Ca2+-buffered MDMs in their study (27) may have been influenced by the small sample size, as the average decrease in P-L fusion in MAPTAM-treated MDMs compared with control MDMs was as great as 23%, and the standard deviations ranged from 30 to 100% of the mean values. Further studies will be required to clarify the variables that affect the Ca2+ dependence of phagosomal maturation in human macrophages. A fascinating aspect of M. tuberculosis-induced inhibition of Ca2+ signaling is its particle specificity during concurrent or subsequent addition of a Ca2+-mobilizing stimulus. Although MDMs did not generate an increase in [Ca2+]c during infection by live, C-op bacilli, these same cells maintained the capacity to respond to other phagocytosed particles (COZ, killed or Ab-op M. tuberculosis) or soluble stimuli (PAF) by increasing their levels of [Ca2+]c. Therefore, viable, C-op M. tuberculosis does not introduce a generalized defect in the Ca2+ signaling pathways of human MDMs. These results suggest that CR-induced increases in [Ca2+]c are spatially restricted to each specific phagocytic event, i.e., each forming phagosome, although further studies will be required to directly test this hypothesis. This proposed focal nature of phagocytosis-associated Ca2+ signaling or, in the case of M. tuberculosis, the lack thereof, is consistent with previous studies demonstrating the tightly regulated spatial constraints of Ca2+-mediated signal transduction (36, 50, 51, 63). In fact, Stendahl et al.
(36) have recently demonstrated that during phagocytosis of COZ, [Ca2+]c levels are highest in the periphagosomal region (34). Whereas our data demonstrating increased P-L fusion after Ab opsonization of M. tuberculosis are in agreement with those of Armstrong and Hart (11), our studies differ with respect to the consequences for mycobacterial survival. In this study, opsonization of M. tuberculosis with specific monoclonal or polyclonal Abs resulted in significant decreases in intracellular viability within human macrophages. In contrast, Armstrong and Hart demonstrated that survival within murine macrophages was similar for serum- and Ab-op M. tuberculosis (11). As numerous investigators have documented significant differences between the tuberculocidal capacities of human versus murine macrophages (for review see references 3, 6, 46-48), we hypothesize that this species specificity is a major factor contributing to the contrasting effects of Ab opsonization on mycobacterial survival noted in our two studies. Zimmerli et al. recently demonstrated that Ab-mediated inhibition of individual CRs or the mannose receptor did not alter the intracellular survival of M. tuberculosis within human MDMs (58). This study differs from ours in two respects. First, the effect of FcγR-mediated phagocytosis on mycobacterial survival was not determined, and second, receptor-blocking reagents were used to direct phagocytosis of M. tuberculosis to unblocked receptors (58). However, blocking reagents may introduce confounding effects by stimulating the receptors to which they bind, and it is often difficult to block multiple receptor classes ligated by complex particles, such as M. tuberculosis. In contrast, opsonization of M. tuberculosis with specific ligands provides a direct, physiologically relevant analysis of the impact of individual receptor classes on the intracellular survival of M. tuberculosis. The lack of Ca2+ mobilization during ingestion of M. tuberculosis may represent an important mechanism of immune evasion that contributes to its survival within human macrophages. As the specific mechanism(s) by which human macrophages kill intracellular M. tuberculosis is unknown, it is difficult at present to define the means by which inhibition of Ca2+ signaling promotes mycobacterial survival, although inhibition of P-L fusion is likely to contribute. Despite these challenges, characterization of the molecular mechanisms responsible for M. tuberculosis-induced alterations in macrophage Ca2+ signaling and its specific contribution to intracellular survival will provide important insights into the pathogenesis of tuberculosis and may contribute to the development of novel therapies to treat this formidable disease.
2014-10-01T00:00:00.000Z
2000-01-17T00:00:00.000
{ "year": 2000, "sha1": "ed57e887fa605893e87e38da324b5b325454e8e3", "oa_license": "CCBYNCSA", "oa_url": "https://europepmc.org/articles/pmc2195750?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "ed57e887fa605893e87e38da324b5b325454e8e3", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
264522855
pes2o/s2orc
v3-fos-license
Exploring the basis of heterogeneity of cancer aggressiveness among the mutated POLE variants

Mechanisms and consequences of POLE mutations in human cancers.

B. MANUSCRIPT ORGANIZATION AND FORMATTING: Full guidelines are available on our Instructions for Authors page, https://www.life-science-alliance.org/authors. We encourage our authors to provide original source data, particularly uncropped/-processed electrophoretic blots and spreadsheets for the main figures of the manuscript. If you would like to add source data, we would welcome one PDF/Excel file per figure for this information. These files will be linked online as supplementary "Source Data" files.

***IMPORTANT: It is Life Science Alliance policy that if requested, original data images must be made available. Failure to provide original images upon request will result in unavoidable delays in publication. Please ensure that you have access to all original microscopy and blot data images before submitting your revision.***

---------------------------------------------------------------------------

Reviewer #1 (Comments to the Authors (Required)):

This is a review paper which attempts to describe the current knowledge about POLE variants, their influence on tumor mutation rates and clinical implications of POLE mutations. Overall it does cover a lot of the material that is currently in the literature about POLE variants. However, there are other recent reviews, including (PMID: 33255191, PMID: 37388540), that describe more detail about the points raised in this review. The current manuscript could add to the field by exploring some of the issues not discussed in other reviews, such as comparing the clinical relevance of different variants, a direct comparison between germline and somatic variants, response to therapeutics in POLE mutant cancers, morphological differences that are found in POLE mutant endometrial and colorectal cancers, and variants of unknown significance and how to determine their impact, as well as when the POLE mutations are thought to occur relative to other cancer driver mutations. The comparison of POLE and POLD1 mutations in hypermutated cancers is also something that could be added to make this review more robust. Overall, in many sections, primary and recent references are missing or incomplete. There are statements throughout the manuscript that warrant an original citation where none is added, or some sentences list a few recent citations while missing the original papers. For a review paper, the citations are a critical component. These authors should provide more comprehensive references and citations. It is expected that a review will include both original and recent citations, and present a comprehensive assessment of the literature. The inconsistent and incomplete references are the major shortcoming of this manuscript.

Importance of the 3' to 5' exonuclease proofreading activity of DNA Polymerase ε for accurate duplication of genomic DNA in human cells

This section comprises a decent evaluation of the current understanding of the mechanism of action of POLE.

POLE mutations and cancer
1. Major Criticism: This section starts with a description of MMR deficient cancers, then progresses into a description of POLE deficient cancers. The references for many sentences are incomplete or missing; here are some examples. (Page 5) "POLE mutations have also been detected, albeit less frequently, in other types of gastrointestinal cancer, as well as in brain, breast, ovary, prostate, lung, kidney, cervix, and bone tumors (13, 20-22)." does not include earlier papers that initially describe the variants, such as (17, 59, 60), and does reference databases such as COSMIC and cBioPortal, which is a bit confusing. Not including the appropriate references is a major problem throughout this manuscript.

2. The end of this section (page 6) contains this sentence: "Several POLE variants within the exonuclease domain such P286R, V411L, L424V, S459F, P286H, F367S and L424I show a decrease in exonuclease activity, as measured by biochemistry experiments using purified Polε." There are no references for this statement (this is a review article, accurate references are essential) and it seems out of context; the authors should describe more about these variants in this section, or leave it to a later section, but this one sentence alone seems incomplete and needs more discussion. In addition, the section above this sentence is all about germline variants; however, the variants listed in this sentence are found in somatic POLE mutations, and this should be made clear.

Heterogeneity of the POLE mutation impact

3. Major Criticism: Again, not including references or citing the earlier work is a major weakness for this review. For example, on page 7, the authors do not include all the references they can for many statements: "Besides the DNA mismatch repair defects that underlie Lynch syndrome, the mutations in POLE highlight the critical role of replication errors in predisposition to colorectal and endometrial cancers. This is in contrast to cancers of the breast and ovary, in which double-stranded DNA break repair is more significant in predisposition." There are many references to support this statement; however, none are present. Review articles are a good source of finding important references; however, this one does not include a large amount and misses that opportunity in multiple sections.

4. The title for this section implies they will discuss more details about the different variants; however, this section does not do that. This section would benefit from a larger discussion about what is currently known about the variants. This entire section can be expanded to go into detail about each variant.

Mutator phenotype and mutation signature in POLE variants
5. Major Criticism: Again, this section falls short of including references. For example, on page 8, this sentence is lacking appropriate references, only including later citations while ignoring earlier references (17, 59, 60): "a high increase in mutation rates have been documented in cancers with P286R, D275V, P286H, F367S, L424V, P436R, and S459F changes located close to the DNA binding cleft of Polε (31, 32)". And again, this sentence, "Importantly, all of these variants lead to a much higher mutation frequency as compared to the variant that completely eliminates proofreading", is included without any references. Again, "Among hypermutated cancers presenting ≥10 Mut/Mb, approximately 25% are associated with mutations in MMR genes alone, whereas a large number of ultra-hypermutated cancers (≥100 Mut/Mb) show mutations affecting both MMR and the replicative DNA polymerases, mainly POLE", is missing citations. Again, on page 8, "Specific signature errors have been shown to be enriched in mutated POLE cancers. These include three trinucleotide hotspot mutations: C>A transversions in TCT context (C>A-TCT), C>T-TCG and T>G-TTT" should include references 17, 15, 59.

6. They should state what types of cancers have MMR and POLE mutations together, which types of cancer that does not occur in, and in general expand this section to have a comprehensive review.

Mutated POLE tumors and immunogenic response

7. Major Criticism: This sentence mixes up very different types of high mutation load tumors and, consistent with other sections of this manuscript, is missing citations: "It is becoming increasingly clear that cancers with high intrinsic mutation load (MMR and POLE mutated cancers) or cancers related to mutagenic environmental genotoxic exposure such lung, melanoma and bladder cancers, which show all a high mutational burdens, respond generally well to immunotherapy." This entire section makes very bold statements with little to no references to back them up.

In this section, as in the rest of the manuscript, they do not make a clear distinction between somatic variants and germline. They discuss their N363K variant with no distinction that they are now describing a germline variant. There are recent papers examining POLE variants and response to therapeutics; in particular, (PMID: 34250404) is not included in this section, yet is a recent description of POLE variants and immune response.

In general, this review could benefit from a more complete survey of the current and past literature, adding relevant references, and expanding some sections as noted above.

Reviewer #2 (Comments to the Authors (Required)):

This is a well-written, concise review of the mechanisms and consequences of the different POLE mutations in cancer. The authors also propose some mechanistic basis underlining the mutation heterogeneity and discuss novel considerations for the choice and efficacy of therapies for POLE mutated tumors. My only comment is not a mandatory request. I leave it to the authors to decide if they would like to add a short mention in the section "POLE mutation and cancer" of the recent study on single-strand events that could indicate how POLE mutations arise in cancer. They could refer to the publication of Gilad Evrony (Mei Hong Liu et al. 2023 bioRxiv doi: https://doi.org/10.1101/2023.02.19.526140).
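The thresholds and hotspot contexts quoted in these criticisms (≥10 and ≥100 Mut/Mb; C>A-TCT, C>T-TCG, T>G-TTT) can be made concrete in a few lines. Below is a minimal sketch, with invented example variants, that classifies tumor mutational burden and normalizes a substitution to the pyrimidine-centered trinucleotide notation conventionally used for such signatures.

    COMPLEMENT = str.maketrans("ACGT", "TGCA")

    def tmb_class(mut_per_mb: float) -> str:
        # thresholds as quoted in the review text
        if mut_per_mb >= 100:
            return "ultra-hypermutated"
        if mut_per_mb >= 10:
            return "hypermutated"
        return "non-hypermutated"

    def sbs_context(trinucleotide: str, alt: str) -> str:
        # Return 'REF>ALT-CTX' with the mutated (middle) base normalized
        # to a pyrimidine (C or T), as in COSMIC signature notation.
        ref = trinucleotide[1]
        if ref in "AG":  # report the event on the reverse-complement strand
            trinucleotide = trinucleotide.translate(COMPLEMENT)[::-1]
            ref, alt = trinucleotide[1], alt.translate(COMPLEMENT)
        return f"{ref}>{alt}-{trinucleotide}"

    print(tmb_class(250))           # ultra-hypermutated, typical of POLE tumors
    print(sbs_context("TCT", "A"))  # C>A-TCT, a POLE hotspot
    print(sbs_context("AGA", "T"))  # same event on the other strand -> C>A-TCT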
Reviewer #3 (Comments to the Authors (Required)):

In review, the manuscript presents a summary of current literature on the mammalian DNA polymerases involved in the replication of the leading and lagging strands. The authors have assembled and analyzed the current mutations in DNA polymerases epsilon and delta. In part they consider the association between mutant DNA polymerases and cancer.

The major conclusion of the paper is that mutations in these DNA polymerases are associated with specific types of human cancers. The data is well supported but does not necessarily indicate that the association is causative. The paper indicates that human tumors contain large numbers of different mutations. While it would be easy to suggest additional experiments, these would involve many laboratories and many years of research.

I suggest that the authors consider adding two columns to Table 1, one stating the locations of each of the mutations, and the second indicating the enhancement of the mutation frequencies that has been reported. In addition, a clearer distinction between initiation and promotion of carcinogenesis appearing earlier in the paper may be useful. The legend to Figure 1 contains a new original concept. The authors hypothesize that mutations in DNA polymerase could result in enhanced mutagenesis without altering base specificity. In particular, studies on mutant polymerases may provide a new target for treatment options for specific cancers.

This section starts with a description of MMR deficient cancers, then progresses into a description of POLE deficient cancers. The references for many sentences are incomplete or missing; here are some examples. (Page 5) "POLE mutations have also been detected, albeit less frequently, in other types of gastrointestinal cancer, as well as in brain, breast, ovary, prostate, lung, kidney, cervix, and bone tumors (13, 20-22)." does not include earlier papers that initially describe the variants, such as (17, 59, 60), and does reference databases such as COSMIC and cBioPortal, which is a bit confusing. Not including the appropriate references is a major problem throughout this manuscript.

We have now included in the entire paragraph appropriate references as required by the reviewer, especially for the types of cancers for which POLE has been found mutated (Campbell et al., 2017; Cancer Genome Atlas, 2012; Cancer Genome Atlas Research et al., 2013; Cerami et al., 2012; Forbes et al., 2015; Grossman et al., 2016; Shinbrot et al., 2014) (see page 5).

2. The end of this section (page 6) contains this sentence, "Several POLE variants within the exonuclease domain such P286R, V411L, L424V, S459F, P286H, F367S and L424I show a decrease in exonuclease activity, as measured by biochemistry experiments using purified Polε." There are no references for this statement (this is a review article, accurate references are essential) and it seems out of context; the authors should describe more about these variants in this section, or leave it to a later section, but this one sentence alone seems incomplete and needs more discussion. In addition, the section above this sentence is all about germline variants; however, the variants listed in this sentence are found in somatic POLE mutations, and this should be made clear.
We apologize for the lack of references regarding the decrease in exonuclease activity of the POLE variants and the lack of clarity on germline vs somatic variants. We have now introduced original references showing that these POLE mutations affect the activity of the exonuclease (Korona et al., 2011; Parkash et al., 2019; Shinbrot et al., 2014) (see page 6). We have also better described the germline versus somatic POLE mutations: see page 5 the paragraph "The last decade has witnessed the identification in cancers from many tissue types of multiple somatically acquired missense mutations clustering in the sequence encoding the exonuclease proofreading domain of POLE……." and pages 5-6 the paragraph "Mutations in the exonuclease domain of POLE can also be inherited through the germline, leading to a rare autosomal dominant familial cancer predisposition syndrome documented as polymerase proofreading-associated polyposis (PPAP), characterized……."

Paragraph: Heterogeneity of the POLE mutation impact

3. Major Criticism: Again, not including references or citing the earlier work is a major weakness for this review. For example, on page 7, the authors do not include all the references they can for many statements: "Besides the DNA mismatch repair defects that underlie Lynch syndrome, the mutations in POLE highlight the critical role of replication errors in predisposition to colorectal and endometrial cancers. This is in contrast to cancers of the breast and ovary, in which double-stranded DNA break repair is more significant in predisposition." There are many references to support this statement; however, none are present. Review articles are a good source of finding important references; however, this one does not include a large amount and misses that opportunity in multiple sections.

We have also added in this paragraph some aspects on the comparison between POLE and POLD1 mutations in hypermutated cancers in order to make the review more robust (see page 7 the paragraph: "Generally, there are much less cancer driver mutations in POLD1 than in POLE in human cancers. This might be due to the reduced fitness and viability of POLD1 mutants, as Polδ holds multiple critical roles besides lagging-strand replication, including its ability to proofread in trans the errors made by Polα and Polε, its role during MMR and during Okazaki fragment maturation").

4. The title for this section implies they will discuss more details about the different variants; however, this section does not do that. This section would benefit from a larger discussion about what is currently known about the variants. This entire section can be expanded to go into detail about each variant.

The reviewer is correct and we apologize. As required by the reviewer, we have now expanded the paragraph on page 8 by describing more precisely the differential mutagenic impact of the POLE variants in haploid and diploid yeast, and the lack of correlation between the increase of mutation rate, the TMB and the frequency of the variant in tumors.
5. Major Criticism: Again, this section falls short of including references. For example, on page 8, this sentence is lacking appropriate references, only including later citations while ignoring earlier references (17, 59, 60): "a high increase in mutation rates have been documented in cancers with P286R, D275V, P286H, F367S, L424V, P436R, and S459F changes located close to the DNA binding cleft of Polε (31, 32)".

As required, we have now included earlier references to the section.

And again, this sentence, "Importantly, all of these variants lead to a much higher mutation frequency as compared to the variant that completely eliminates proofreading", is included without any references.

We apologize; we have now incorporated the paper by Kane and Shcherbakova in 2014, which described that cancer-associated POLE variants can produce an unusually strong mutator phenotype exceeding that of proofreading-deficient mutants by up to two orders of magnitude.

Again, "Among hypermutated cancers presenting ≥10 Mut/Mb, approximately 25% are associated with mutations in MMR genes alone, whereas a large number of ultra-hypermutated cancers (≥100 Mut/Mb) show mutations affecting both MMR and the replicative DNA polymerases, mainly POLE", is missing citations.

We have now incorporated the appropriate reference by Jansen et al. in 2016.

Again, on page 8, "Specific signature errors have been shown to be enriched in mutated POLE cancers. These include three trinucleotide hotspot mutations: C>A transversions in TCT context (C>A-TCT), C>T-TCG and T>G-TTT" should include references 17, 15, 59.

We have now incorporated these references.

6. They should state what types of cancers have MMR and POLE mutations together, which types of cancer that does not occur in, and in general expand this section to have a comprehensive review.

Paragraph: Mutated POLE tumors and immunogenic response

7. Major Criticism: This sentence mixes up very different types of high mutation load tumors and, consistent with other sections of this manuscript, is missing citations: "It is becoming increasingly clear that cancers with high intrinsic mutation load (MMR and POLE mutated cancers) or cancers related to mutagenic environmental genotoxic exposure such lung, melanoma and bladder cancers, which show all a high mutational burdens, respond generally well to immunotherapy." This entire section makes very bold statements with little to no references to back them up.

Again, we apologize, and we have now incorporated the appropriate references (Le DT 2015; Le DT 2017; McGranahan et al., 2016; Rizvi et al., 2015; Yarchoan et al., 2017).

In this section, as in the rest of the manuscript, they do not make a clear distinction between somatic variants and germline. They discuss their N363K variant with no distinction that they are now describing a germline variant.

We have already modified page 5 for this issue and better described the germline versus somatic POLE mutations. The effect of a POLE variant in terms of chromosome instability and DNA damage, and the associated mechanisms, is effective whenever the variant is expressed, whether somatic or germline, so this is the reason we discussed the general mechanistic aspects independently of the type of the variant.
There are recent papers examining POLE variants and response to therapeutics; in particular, (PMID: 34250404) is not included in this section, yet is a recent description of POLE variants and immune response.

We thank the reviewer for this reference, which we have now added on page 11 (Keshinro et al., JCO Precis Oncol 2021).

Reviewer #2: We thank this reviewer for his/her global positive appreciation of our review. We also thank the reviewer for his/her suggestion to add in the section "POLE mutation and cancer" the recent study on single-strand events that could indicate how POLE mutations arise in cancer and to refer to the publication of Gilad Evrony (Mei Hong Liu et al. 2023 bioRxiv). This work, which profiled samples from individuals with cancer-predisposition syndromes and defined single-strand mismatch signatures, shows correspondences between single-strand signatures and known double-strand mutational signatures induced by defective proofreading, so this is a possible mechanism of the generation of the mutation signature in POLE mutants. However, we have decided not to include it, since this mechanistic aspect is not developed in our review and would require the description of additional alternative mechanisms.

Reviewer #3: The reviewer proposes to incorporate into Table 1 the locations of each mutation and the corresponding enhancement of the mutation frequencies that has been reported. We thank the reviewer for this suggestion. We have corrected Table 1 with the required items.

The reviewer also proposes to include a clearer distinction between initiation and promotion of carcinogenesis.

We have better described such a distinction on page 5: "Such a defective proofreading activity producing a mutator phenotype, which has been established in model systems, such as yeast, bacteria, and mice, leads to tumorigenesis. These POLE variants are present in heterozygous tumors with no apparent loss of heterozygosity (LOH) and with high mutation loads, up to 500 mutations per megabase (Mut/Mb). A strong mutator phenotype in the presence of the wild-type allele is consistent with the participation of both the wild-type and the mutant polymerases in DNA replication, in contrast to mutated MMR tumors, where loss of both alleles is required to produce a mutator effect. It has been proposed that differential expression levels of the wild-type and mutant POLE alleles in the course of cancer progression may allow transient stages of hypermutation that promote tumor growth together with a threshold limiting excessive mutation load to maintain fitness."

Finally, we thank the reviewer who commented on a new original concept in Figure 1 and the consequences for new treatment options for specific cancers.

Thank you for submitting your revised manuscript entitled "Exploring the basis of heterogeneity of cancer aggressiveness among the mutated POLE variants". We would be happy to publish your paper in Life Science Alliance pending final revisions necessary to meet our formatting guidelines.
Along with points mentioned below, please tend to the following:

-please upload your figure as a single file, and remove it from the main manuscript text
-please add the Twitter handle of your host institute/organization as well as your own or/and one of the authors in our system
-please add the ORCID ID for the secondary corresponding author--they should have received instructions on how to do so
-please add an Author Contributions section to your main manuscript text
-please add your figure and table legends to the main manuscript text after the references section
-please add a conflict of interest statement to your main manuscript text
-please add a callout for Figure 1A to your main manuscript text

If you are planning a press release on your work, please inform us immediately to allow informing our production team and scheduling a release date.

LSA now encourages authors to provide a 30-60 second video where the study is briefly explained. We will use these videos on social media to promote the published paper and the presenting author (for examples, see https://twitter.com/LSAjournal/timelines/1437405065917124608). Corresponding or first authors are welcome to submit the video. Please submit only one video per manuscript. The video can be emailed to contact@life-science-alliance.org

To upload the final version of your manuscript, please log in to your account: https://lsa.msubmit.net/cgi-bin/main.plex You will be guided to complete the submission of your revised manuscript and to fill in all necessary information. Please get in touch in case you do not know or remember your login name.

To avoid unnecessary delays in the acceptance and publication of your paper, please read the following information carefully.

A. FINAL FILES: These items are required for acceptance.

--An editable version of the final text (.DOC or .DOCX) is needed for copyediting (no PDFs).
--High-resolution figure, supplementary figure and video files uploaded as individual files: See our detailed guidelines for preparing your production-ready images, https://www.life-science-alliance.org/authors
--Summary blurb (enter in submission system): A short text summarizing in a single sentence the study (max. 200 characters including spaces). This text is used in conjunction with the titles of papers, hence should be informative and complementary to the title. It should describe the context and significance of the findings for a general readership; it should be written in the present tense and refer to the work in the third person. Author names should not be mentioned.

B. MANUSCRIPT ORGANIZATION AND FORMATTING: Full guidelines are available on our Instructions for Authors page, https://www.life-science-alliance.org/authors

**Submission of a paper that does not conform to Life Science Alliance guidelines will delay the acceptance of your manuscript.**

**The license to publish form must be signed before your manuscript can be sent to production. A link to the electronic license to publish form will be sent to the corresponding author only. Please take a moment to check your funder requirements.**

**Reviews, decision letters, and point-by-point responses associated with peer-review at Life Science Alliance will be published online, alongside the manuscript. If you do want to opt out of having the reviewer reports and your point-by-point responses displayed, please let us know immediately.**

Thank you for your attention to these final processing requirements. Please revise and format the manuscript and upload materials within 7 days.
Thank you for this interesting contribution, we look forward to publishing your paper in Life Science Alliance. The final published version of your manuscript will be deposited by us to PubMed Central upon online publication. Your manuscript will now progress through copyediting and proofing. It is journal policy that authors provide original data upon request.

Reviews, decision letters, and point-by-point responses associated with peer-review at Life Science Alliance will be published online, alongside the manuscript. If you do want to opt out of having the reviewer reports and your point-by-point responses displayed, please let us know immediately.

***IMPORTANT: If you will be unreachable at any time, please provide us with the email address of an alternate author. Failure to respond to routine queries may lead to unavoidable delays in publication.***

Scheduling details will be available from our production department. You will receive proofs shortly before the publication date. Only essential corrections can be made at the proof stage, so if there are any minor final changes you wish to make to the manuscript, please let the journal office know now.

DISTRIBUTION OF MATERIALS: Authors are required to distribute freely any materials used in experiments published in Life Science Alliance. Authors are encouraged to deposit materials used in their studies to the appropriate repositories for distribution to researchers.

You can contact the journal office with any questions, contact@life-science-alliance.org

Again, congratulations on a very nice paper. I hope you found the review process to be constructive and are pleased with how the manuscript was handled editorially. We look forward to future exciting submissions from your lab.

Sincerely,

Eric Sawey, PhD
Executive Editor
Life Science Alliance
http://www.lsajournal.org

---------------------------------------------------------------------------

We are grateful to the reviewers for their constructive comments and suggestions on our review. Based on their comments, we changed critical parts throughout the manuscript. We have particularly included original citations and adequate references that were missing in the initial manuscript. We apologize for this weakness in the previous version of the paper and we agree that for a review, the citations are a critical component. The new parts of the text are highlighted in blue. Please see below our detailed point-by-point response to the specific comments of each reviewer:

Reviewer #1:

Paragraph: POLE mutations and cancer

1. Major Criticism:

---------------------------------------------------------------------------

Thank you for submitting your Review entitled "Exploring the basis of heterogeneity of cancer aggressiveness among the mutated POLE variants". It is a pleasure to let you know that your manuscript is now accepted for publication in Life Science Alliance. Congratulations on this interesting work.
2023-10-28T00:52:37.558Z
2023-10-27T00:00:00.000
{ "year": 2023, "sha1": "627666c289b1ce464f70bdfb519289f662e8bc83", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "b7092cc47fdc2169258355557622bcc0d6e17097", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247996634
pes2o/s2orc
v3-fos-license
Operational classical mechanics: Holonomic Systems

We construct an operational formulation of classical mechanics without presupposing previous results from analytical mechanics. In doing so, several concepts from analytical mechanics will be rediscovered from an entirely new perspective. We start by expressing the basic concepts of the position and velocity of point particles as the eigenvalues of self-adjoint operators acting on a suitable Hilbert space. The concept of holonomic constraint is shown to be equivalent to a restriction to a linear subspace of the free Hilbert space. The principal results we obtain are: (1) the Lagrange equations of motion are derived without the use of D'Alembert or Hamilton principles, (2) the constraining forces are obtained without the use of Lagrange multipliers, (3) the passage from a position-velocity to a position-momentum description of the movement is done without the use of a Legendre transformation, (4) the Koopman-von Neumann theory is obtained as a result of our ab initio operational approach, (5) previous work on the Schwinger action principle for classical systems is generalized to include holonomic constraints.

Introduction

Inspired by the rise of quantum mechanics, Koopman [1] and von Neumann [2] reformulated classical (Hamiltonian) statistical mechanics in terms of operators acting on a Hilbert space (see [3] for a review). Since then, the main reasons to study the Koopman-von Neumann (KvN) theory have been to investigate quantization rules and to make other comparisons with the quantum theory [4,5,6,7,8]. Another active area of research involving the KvN theory is the development of consistent quantum-classical hybrid theories [9,10,11,12,13,14,15]. A third and more recent line of research deals with quantum simulations of non-linear systems [16,17,18]. However, apart from any connection with quantum physics, operational methods have produced results entirely within the classical domain [19,20,21,22,23,24,25,26,27,28]. Hence, it seems worth considering the operational approach in a more encompassing context than the KvN theory. We will show in this work that it is possible to construct a classical operational dynamics independently of previous results from analytical mechanics (unlike the KvN theory, which uses the Poisson brackets and the Liouville theorem as the starting point). We will see that the operational approach gives new insights into many old results of analytical mechanics. We will restrict ourselves to holonomic scleronomous systems and leave the study of more general systems for future works.

This work is organized as follows: in section 2 we review the operational formulation of classical mechanics for the unconstrained case, as developed in [29,30]. Here we introduce most of the operators and the ket basis for the Hilbert space we deal with in the rest of the paper. As in quantum mechanics, the evolution of the system can be given by a Schrödinger equation for the state vectors, or by Heisenberg equations for the operators. We show that the passage from a velocity to a momentum description of the movement is given by a "quantum" canonical transformation as understood in [31]. In section 3 we define what we understand as a holonomic constraint in our operational approach. We will see how our procedure naturally leads to the concept of the tangent bundle of configuration space as given in the geometrical expositions of analytical mechanics.
In section 4 we will use the so-called "quantum" point transformation [34,35] to give the operational version of generalized coordinates and velocities. We emphasize that the change to generalized coordinates is an example of a "quantum" canonical transformation. Some of the most important results of this paper are in section 5. We will see that our operational version of holonomic constraints and the generalized coordinates allow us to derive the Lagrange equations of motion without the use of D'Alembert or Hamilton principles. The same procedure will allow us to obtain the constraining forces without using Lagrange multipliers. We then perform a "quantum" canonical transformation to a momentum description of the movement to get the KvN theory. In section 6 we derive the Lagrange equations of motion from the variation of the so-called Schwinger action, extending previous work on the Schwinger action principle [38] to systems with holonomic constraints. We comment on the possible relation of the Schwinger action principle with the Gauss principle of least constraints.

Unconstrained mechanics

We review in this section the operational formulation of classical mechanics for the unconstrained case. Several of the results will be stated without proof, and we refer to [29] for them and a more in-depth treatment. Let us have a system of N point particles that are free of constraints. Classically, we have to know the position and velocity of each particle to determine the state of this system; this is equivalent to specifying a point in the tangent bundle of $\mathbb{R}^{3N}$, i.e., $T\mathbb{R}^{3N} \cong \mathbb{R}^{6N}$. We will relax this definition of a classical state in favor of a statistical description in terms of probability amplitudes. As a starting point of the operational formulation of classical dynamics, we associate a ray $\{|r_1,...,r_N; v_1,...,v_N\rangle\}$ to each point in $T\mathbb{R}^{3N}$. The kets are postulated to obey the orthonormality condition

$$\langle r'_1,...,v'_N | r_1,...,v_N \rangle = \delta^{3N}(r'-r)\,\delta^{3N}(v'-v). \qquad (1)$$

The classical states are then defined to be finite-norm vectors given by a linear combination of the form

$$|\psi\rangle = \int \psi(r_1,...,v_N)\, |r_1,...,r_N; v_1,...,v_N\rangle\, d^{3N}r\, d^{3N}v, \qquad (2)$$

where ψ is a square-integrable function in $\mathbb{R}^{6N}$. The probability of finding the first particle around $r_1$ with velocity $v_1$, and so on for the other particles, is given by the Born rule

$$P(r_1,...,v_N) = |\psi(r_1,...,v_N)|^2. \qquad (3)$$

The kets $|r_1,...,r_N; v_1,...,v_N\rangle$ are simultaneous eigenvectors of the 3N position and 3N velocity operators, $\hat X_i |r_1,...,v_N\rangle = x_i |r_1,...,v_N\rangle$ and $\hat V_i |r_1,...,v_N\rangle = v_i |r_1,...,v_N\rangle$. The position and velocity operators form a complete set of commuting operators. We now postulate the existence of the conjugate operators $\hat\lambda_i$ and $\hat\pi_i$ by the relations

$$[\hat X_i, \hat\lambda_j] = i\,\delta_{ij}, \qquad [\hat V_i, \hat\pi_j] = i\,\delta_{ij}, \qquad (4)$$

with all other commutators among $\hat X$, $\hat V$, $\hat\lambda$, and $\hat\pi$ vanishing. The dynamics of the state vectors is encoded in a time evolution operator, the so-called Liouvillian operator

$$\hat L = \sum_{j=1}^{3N}\left(\hat V_j \hat\lambda_j + \frac{1}{m_j}\,\widehat{\hat F_j \hat\pi_j}\right), \qquad (5)$$

where $\widehat{\hat A \hat B} \equiv \frac{1}{2}(\hat A\hat B + \hat B\hat A)$, and $m_1 = m_2 = m_3$ is the mass of the first particle, and a similar convention is used for all other particles. In (5), the force operators can be functions of the position and the velocity. The form of the Liouvillian can be deduced from symmetry principles and some other quite reasonable assumptions [29,30]. In the Schrödinger picture, the evolution of the state vectors is given by the Schrödinger-like equation

$$i\,\frac{d}{dt}|\psi(t)\rangle = \hat L\,|\psi(t)\rangle. \qquad (6)$$

Just as in quantum mechanics, there is the possibility of using a Heisenberg picture, where the time dependence is passed on to the operators via the Heisenberg equations. In this case, the Heisenberg equations read

$$\frac{d\hat X_j}{dt} = i[\hat L, \hat X_j] = \hat V_j, \qquad \frac{d\hat V_j}{dt} = i[\hat L, \hat V_j] = \frac{1}{m_j}\hat F_j, \qquad (7)$$

with analogous equations for the operators $\hat\lambda$ and $\hat\pi$. Examples where the Schrödinger-like equation (6) leads to the expected dynamics of a classical system can be found in [29].
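As a concrete numerical illustration of the evolution (6), which is ours and not an example from the paper: in the (x, v) representation suggested by (4) one may take $\hat\lambda = -i\,\partial/\partial x$ and $\hat\pi = -i\,\partial/\partial v$, so that $i\,d\psi/dt = \hat L\psi$ becomes $\partial_t\psi = -v\,\partial_x\psi - (F/m)\,\partial_v\psi$, which transports the amplitude along the classical trajectories. The sketch below exploits this for a one-dimensional harmonic oscillator, where the classical flow is known in closed form; the masses, spring constant, and initial Gaussian are illustrative choices.

    import numpy as np

    m, k = 1.0, 1.0                  # mass and spring constant (illustrative)
    w = np.sqrt(k / m)               # angular frequency of the force F = -k x

    def psi0(x, v):
        # initial probability amplitude on the (x, v) space of eq. (2)
        return np.exp(-((x - 1.0) ** 2 + v ** 2))

    def psi(x, v, t):
        # Equation (6) with the Liouvillian (5) transports the amplitude
        # along classical trajectories, so psi(x, v, t) equals psi0 at the
        # phase-space point that flows into (x, v) after time t.
        x0 = x * np.cos(w * t) - (v / w) * np.sin(w * t)
        v0 = v * np.cos(w * t) + w * x * np.sin(w * t)
        return psi0(x0, v0)

    # Born-rule check (3): the total probability is conserved in time.
    xs, vs = np.meshgrid(np.linspace(-6, 6, 400), np.linspace(-6, 6, 400))
    dA = (12.0 / 399) ** 2
    for t in (0.0, 0.5, 2.0):
        print(t, np.sum(np.abs(psi(xs, vs, t)) ** 2) * dA)

The printed integrals of |ψ|² coincide at all times, as they must: the classical flow preserves phase-space volume, so the Schrödinger-like evolution is unitary.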
Lagrangian and canonical momentum operators.

Let us split the forces into a part $\hat{F}^{(C)}_j$ derivable from a generalized potential and a remainder, where, by definition, the components of $\hat{F}^{(C)}_j$ can be computed from a generalized potential $\hat{U} = \hat{U}(\hat{X}, \hat{V})$ according to

$$\hat{F}^{(C)}_i = -\frac{\partial \hat{U}}{\partial \hat{X}_i} + \frac{d}{dt}\frac{\partial \hat{U}}{\partial \hat{V}_i},$$

and then let us define the Lagrange operator (10) by

$$\hat{\mathcal{L}} = \sum_{i=1}^{3N} \frac{m_i \hat{V}_i^2}{2} - \hat{U}.$$

For forces that are independent of the acceleration, the generalized potential is restricted to be of the form [29]

$$\hat{U} = \hat{\phi} - \sum_{i=1}^{3N} \hat{A}_i \hat{V}_i,$$

where $\hat{\phi}$ and the $\hat{A}_i$ depend only on the position. We can then rewrite Eq (7) in terms of a family of superoperators $\Phi_j$, which we will call the Euler-Lagrange superoperators. It is natural to look for a definition of a canonical momentum operator once we have the Lagrangian (10). We define the components of the canonical momentum by (14),

$$\hat{P}_i = \frac{\partial \hat{\mathcal{L}}}{\partial \hat{V}_i} = m_i \hat{V}_i + \hat{A}_i.$$

It is straightforward to check the commutation relations obeyed by $\hat{P}$. The introduction of a canonical momentum allows the passage from a position-velocity description of the dynamics to a position-momentum one. Let us consider the transformation equations (18) and (19) for new auxiliary operators $\hat{\lambda}'$ and $\hat{\pi}'$. The commutation relations between the operators $\hat{X}$, $\hat{P}$, $\hat{\lambda}'$ and $\hat{\pi}'$ are then canonical (20). Hence, because of the commutation relations (20), we can conclude that Eqs. (14), (18), and (19) form a "quantum" canonical transformation as understood in [31]. This transformation is a composition of a scale transformation and a unitary transformation [29]; the action of this unitary operator on the basic operators can be checked explicitly. In terms of the momentum (14) and the operators (18) and (19), the Liouvillian reads (26). The procedure used to obtain equation (26) explains from first principles the origin of the minimal coupling in the KvN theory given in [33]. We can check that the Liouvillian (26) is consistent with (7) as follows; contrary to Ref. [29], we make the definition:

Finally, it can be shown that a wave mechanics version of the operational theory developed in this section leads to the KvN theory and, ultimately, to Hamiltonian mechanics.

Time-independent holonomic constraints

Consider a classical system composed of N point particles that obey the l independent holonomic constraints (30),

$$f_i(\mathbf{r}_1, \dots, \mathbf{r}_N) = C_i, \qquad i = 1, \dots, l,$$

where the $C_i$ are constants. Eq (30) can be equivalently written as (31),

$$\sum_{j=1}^{N} \nabla_j f_i \cdot \mathbf{v}_j = 0, \qquad i = 1, \dots, l.$$

On the other hand, equations (30) and (31) together define a 6N − 2l dimensional submanifold embedded in $\mathbb{R}^{6N}$, the tangent bundle of configuration space TQ. What is the meaning of a holonomic constraint in the operational version of mechanics? We will interpret the constraints as a restriction on the possible states $|\psi\rangle$ that our mechanical system can be in, such that the allowed states belong to a subset of the unconstrained Hilbert space, $H_{TQ} \subseteq H$. We postulate that the allowed states $|\psi_{TQ}\rangle$ obey the operational versions of (30) and (31), given by (32) and (33):

$$\big(f_i(\hat{X}) - C_i\big)\,|\psi_{TQ}\rangle = 0, \qquad \sum_{j=1}^{3N} \frac{\partial f_i}{\partial X_j}(\hat{X})\,\hat{V}_j\,|\psi_{TQ}\rangle = 0.$$

The set $H_{TQ} \subseteq H$ containing all the vectors that obey (32) is a linear subspace of H, since $H_{TQ}$ is clearly non-empty ($0 \in H_{TQ}$) and we have that $|\psi_{TQ1}\rangle + \lambda|\psi_{TQ2}\rangle \in H_{TQ}$ for any $|\psi_{TQ1}\rangle, |\psi_{TQ2}\rangle \in H_{TQ}$. Let us now investigate the form of the vectors belonging to $H_{TQ}$. By our initial definition (2), we can write any vector $|\psi_{TQ}\rangle \in H_{TQ} \subseteq H$ as an expansion in the basis kets for some function $\psi = \psi(\mathbf{r}_1, \dots, \mathbf{r}_N; \mathbf{v}_1, \dots, \mathbf{v}_N)$. Applying (32) and (33), we get the set of equations (35). Since $|\psi|^2$ is non-negative everywhere, ψ must vanish for the values of the positions and velocities that do not obey (30) and (31) in order for Eq. (35) to be true. That is, the support of ψ is TQ. Equivalently, we can write our vectors $|\psi_{TQ}\rangle$ via a restriction on the domain of integration.
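To make the operational constraints concrete, consider (as our own illustration, using the notation of (32) and (33)) a single particle confined to a sphere of radius r, the configuration space of the spherical pendulum treated later. The constraint $x^2 + y^2 + z^2 = r^2$ and its velocity counterpart become the operator conditions

$$\big(\hat{X}^2 + \hat{Y}^2 + \hat{Z}^2 - r^2\big)\,|\psi_{TQ}\rangle = 0, \qquad \big(\hat{X}\hat{V}_x + \hat{Y}\hat{V}_y + \hat{Z}\hat{V}_z\big)\,|\psi_{TQ}\rangle = 0,$$

so the amplitude ψ is supported only on positions lying on the sphere and on velocities tangent to it.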
Generalized coordinates

It is always possible to find a set of n = 3N − l independent generalized coordinates $(q_1, \dots, q_n)$ to fully describe a mechanical system with l holonomic constraints. It is not difficult to pass from wave functions on Cartesian coordinates to wave functions on generalized coordinates: $\psi(\mathbf{r}_1, \dots, \mathbf{r}_N; \mathbf{v}_1, \dots, \mathbf{v}_N) \to \psi(q_1, \dots, q_n; \dot{q}_1, \dots, \dot{q}_n)$. However, we will work with operators, since we want an operational version of mechanics. Let us review the (time-independent) transformation of operators to curvilinear coordinates as given in [34], and then we shall mention how to apply the procedure to coordinates in TQ. Since the original position and velocity operators commute with each other, the point transformation to curvilinear coordinates can be defined in an unambiguous manner, via (37) and (38):

$$\hat{q}_j = q_j(\hat{X}_1, \dots, \hat{X}_{3N}), \qquad \hat{v}_j = \sum_{i=1}^{3N} \frac{\partial q_j}{\partial X_i}(\hat{X})\,\hat{V}_i.$$

The above procedure works as long as the spectrum of the $\hat{q}$ runs from −∞ to +∞. Angular coordinates are more delicate, and we will deal with them later. From Eqs (37) and (38) and the commutator relations (4), it follows that the $\hat{q}_j$ and $\hat{v}_k$ all commute among themselves. We now insist that the transformation rule for the auxiliary operators $\hat{\lambda}$ and $\hat{\pi}$ be such that the point transformation is a "quantum" canonical transformation. In analogy with the definitions given in [34], we give the rules (40) and (41) for $\hat{\lambda}^{(q)}_j$ and $\hat{\pi}^{(q)}_j$. Notice that a term $\sum_{i=1}^{3N} \{\frac{\partial \hat{X}_i}{\partial \hat{v}_j}\,\hat{\lambda}_i\}_+$ is missing from Eq (41) because the inverse transformations $\hat{X}_i = \hat{X}_i(\hat{q}_1, \dots, \hat{q}_{3N})$ are independent of the $\hat{v}$. We can then verify that the following commutator relations (42) are obeyed:

$$[\hat{q}_j, \hat{\lambda}^{(q)}_k] = i\,\delta_{jk}, \qquad [\hat{v}_j, \hat{\pi}^{(q)}_k] = i\,\delta_{jk},$$

with all other commutators vanishing. The set $\{\hat{q}, \hat{v}\}$ is a complete set of commuting operators. Hence, there exists a basis of common eigenkets that can be labeled via their eigenvalues; that is, there exists a basis of kets $|q_1, \dots, q_{3N}; v_1, \dots, v_{3N}\rangle$ such that each $\hat{q}_j$ and $\hat{v}_j$ acts on it by multiplication by the corresponding eigenvalue. It so happens that the ket $|q_1, \dots, q_{3N}; v_1, \dots, v_{3N}\rangle$ is proportional to the corresponding Cartesian basis ket, with a proportionality factor involving $|J|$, where $|J|$ is the Jacobian determinant of the transformation. The new basis kets obey an orthonormality condition analogous to the Cartesian one. Finally, a general state can be written using the new coordinates as an expansion in this basis.

Reduction of variables

In analogy to analytical mechanics, we want to be able to describe a holonomic system of n degrees of freedom using just n position operators and n velocity operators. Consider a point transformation in which the last 3N − n coordinates are adapted to the constraints. The expectation value of the coordinates $\hat{q}_j$ (j = n + 1, ..., 3N) is then constant for all the states that obey the constraints (32):

$$\langle \hat{q}_j \rangle = \text{const}, \qquad j = n + 1, \dots, 3N. \qquad (48)$$

From (33) and (38), it follows that there are generalized velocity operators with vanishing expectation value,

$$\langle \hat{v}_j \rangle = 0, \qquad j = n + 1, \dots, 3N. \qquad (49)$$

We call these $\hat{v}_\perp$ the perpendicular velocity operators. We arrive at the desired conclusion that the state of a system of n degrees of freedom can be described using just n position and n velocity variables (operators). We will see in section 5 that for the dynamics it is also enough to define the states by (51) (as long as we are not interested in the constraining forces).

Angle coordinates

That angle operators in quantum mechanics are problematic has been known for a long time [36]. The same applies to our operational version of classical mechanics, and for the same reasons, most importantly the cyclic nature of their expected spectra. We need to modify the procedures of the preceding sections to accommodate angle variables. The easiest way to deal with angle variables is to use the sine and cosine functions: instead of defining the angle operator itself as $\hat{\theta}_i = \hat{\theta}_i(\hat{X}_1, \dots, \hat{X}_{3N})$, we define Hermitian sine and cosine operators by their action on the base kets [37]. To reiterate, the angle operators are well defined only when they are arguments of the trigonometric functions. We shall never work with angle operators by themselves. We also need to define the conjugate operators to complete the transformation of coordinates.
By definition, they obey the corresponding commutation relations with the trigonometric operators. It is worth pointing out that there is no problem with the associated velocities: to each angle coordinate we can associate a velocity operator $\hat{v}_{\theta_i}$ with spectrum between −∞ and ∞, and its corresponding conjugate operator $\hat{\pi}_i$ is given by (41). In the appendix we work out in detail the change of variables from Cartesian to spherical coordinates, and we use the results there to find the equations of motion of the spherical pendulum in section 5.1.

Operational dynamics in TQ

In this section we will express the Liouvillian in generalized coordinates, and in doing so we will find that the Heisenberg equations for the operators lead to the Lagrange equations of motion and to the equation for the constraint forces. We will designate by $\hat{F}_j$ the sum of all the impressed (non-constraint) forces and by $\hat{R}_j$ the sum of all the constraint forces acting on the j-th particle. As is usually the case in mechanics, we assume that the constraining forces are not known a priori; their role is to maintain the validity of the constraints. Consider the full Liouvillian of a mechanical system in Cartesian coordinates, now including the constraint forces $\hat{R}_j$ alongside the impressed forces. Changing to generalized coordinates and velocities, the Liouvillian becomes Eq (57). Since the mean values of the $\hat{q}_j$ and $\hat{v}_j$ for j = n + 1, ..., 3N have to remain constant due to the constraints, we must have the conditions (58) and (59), where we used the inverse function theorem,

$$\sum_{i=1}^{3N} \frac{\partial \hat{X}_i}{\partial \hat{q}_k}\,\frac{\partial \hat{q}_j}{\partial \hat{X}_i} = \delta_{kj},$$

and the equations are valid when evaluated on vectors belonging to $H_{TQ}$. Equation (58) is an identity that does not give any new information. On the other hand, Eq (59) gives the form of the constraining forces (60). The above system gives 3N − l equations for the 3N − l unknown constraining forces. We will return to analyze Eq (60) later; let us now focus on the equations of motion for the remaining variables. In view of Eqs (58) and (59) we can conclude that, when evaluated in $H_{TQ}$, $\hat{L}$ is independent of $\hat{\lambda}^{(q)}_j$ and $\hat{\pi}^{(q)}_j$ for j = n + 1, ..., 3N. Hence, we can rewrite Eq (57) as (61), where $\hat{f}$ is an operator independent of $\hat{\lambda}^{(q)}$ and $\hat{\pi}^{(q)}$ obtained from simplifying the anticommutators in (57). The operator $\hat{f}$ has no effect on the equations of motion of the position and velocity operators; its main function is to guarantee the Hermiticity of $\hat{L}$. The Heisenberg equations of motion obtained from (61) are (62) and (63). Now, the system of n equations (63) can be rewritten as (64), where the identities (65) and (66) have been used. Let us now consider equation (60) in more detail. Using the identities (65) and (66), we can solve (60) for the force of constraint as (68), where the components of the generalized constraint force $\hat{R}^{(gen)}_i$ are related to the Cartesian components via (69).

Example: Spherical pendulum

A spherical pendulum consists of a mass restricted to move on the surface of a sphere of radius r under the action of gravity. The constraining force in this case is the tension, which always points in the radial direction. The complete Liouvillian of the system in Cartesian coordinates follows from the general form above, with the tension $\hat{T}$ included among the forces. We are going to write the reduced Liouvillian taking into account the constraints and ignoring the constraining force $\hat{T}$. Using the transformation equations (98), (99), (102) and (103), we obtain (72), where $\hat{f}$ is independent of $\hat{\lambda}$ and $\hat{\pi}$, and, in view of the constraints (71), we have made the substitutions $\hat{r} \to r$ and $\hat{V}_r = 0$. We reiterate that these substitutions are only valid when the Liouvillian is restricted to act on vectors that obey (71).
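For reference, and as our own addition, these are the textbook Lagrangian-mechanics results that the operational treatment should reproduce for the spherical pendulum; we take the polar angle θ measured from the downward vertical (the paper's conventions for the spherical coordinates (98)-(103) are not shown, so this parameterization is an assumption):

$$\ddot{\theta} = \sin\theta\cos\theta\,\dot{\varphi}^{2} - \frac{g}{r}\sin\theta, \qquad \frac{d}{dt}\big(m r^{2}\sin^{2}\theta\,\dot{\varphi}\big) = 0, \qquad T = m r\big(\dot{\theta}^{2} + \sin^{2}\theta\,\dot{\varphi}^{2}\big) + m g\cos\theta.$$

The second relation expresses conservation of the vertical component of the angular momentum, and T is the magnitude of the tension entering (75).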
The Heisenberg equations for the velocities are then Eq (73). Now, the only constraining force is the tension, and it acts in the radial direction. In view of (69) and (98), we can write (74); the generalized force is then the magnitude of the tension. Using (68), the tension can then be calculated to be (75). We see that the equations of motion (73) and the tension (75) coincide with the ones obtained from Lagrangian mechanics.

Lagrangian and generalized momentum

This section follows the same structure as section 2.1, and the objective is the same: to pass from a position-velocity description of the movement to a position-momentum one via a "quantum" canonical transformation. We start by rewriting Eq (64) as (76), in terms of a kinetic energy operator, generalized forces $\hat{Q}_i$, and Euler-Lagrange superoperators defined in analogy with section 2.1. For generalized forces that can be derived from a generalized potential, $\hat{Q}_i$ takes the standard Euler-Lagrange form, and equation (76) can then be rewritten in the Euler-Lagrange form (82) in terms of a Lagrangian operator (83). With the Lagrangian (83) already defined, we can now define the momentum operators by (84),

$$\hat{p}_i = \frac{\partial \hat{\mathcal{L}}}{\partial \hat{v}_i}.$$

Just as in section 2.1, we demand that the passage to a momentum description of the dynamics entail a "quantum" canonical transformation. Consider the transformation rules (85) for the auxiliary operators. It can be checked that (84) and (85) obey the canonical commutation relations (86); hence, the transformations (84) and (85) give the desired "quantum" canonical transformation. Let us now investigate the form of the Liouvillian as a function of the momentum. For simplicity, we put α = 0. Replacing (84) and (85) into (61) we obtain (87). Remembering that the function of $\hat{f}$ is to make the Liouvillian (87) Hermitian, we find that the "quantum" canonical transformation (86) leads to the same Liouvillian obtained from the KvN theory (see Appendix B). We can proceed a bit further by defining the Hamiltonian operator (88). The Liouvillian (87) can then be written as (89). The form (89) coincides with the general definition of the Liouvillian in the KvN theory, with the exception of the symmetrization, which is not usually acknowledged (since it is not important for systems with scalar potentials in Cartesian coordinates).

Action principles

In this section, we show two methods to obtain the Heisenberg equations of motion for the position and velocity operators from the variation of an action operator. The results given here are an expansion of the ones given in [38]. Let us recall that the time evolution of operators in the Heisenberg picture is given by (90), where the time evolution operator obeys the Schrödinger equation (91). We can think of $\hat{q}(t)$ and $\hat{v}(t)$ as defining a path in operator space, and then we can consider a slightly deformed path given by (92) for some differentiable function η(t). We can write the operator version of the Hamiltonian action as

$$\hat{W}_H = \int_{t_1}^{t_2} \hat{\mathcal{L}}\, dt,$$

where the Lagrangian operator is given by (83). In the case where all forces are derivable from a generalized potential, the Euler-Lagrange equations in operational form (82) follow from demanding that the fixed end-points variation of the Hamiltonian action vanish; i.e., after performing the transformation (92) we impose that $\eta(t_1) = \eta(t_2) = 0 \Rightarrow \delta\hat{W}_H = 0$. This result mimics the standard form of the Hamilton principle because all the operators appearing in $\hat{\mathcal{L}}$ commute with each other [38]. The above, while correct, does not give any new insights compared to the usual methods of analytical mechanics. On the other hand, the operational formulation of mechanics allows us to derive the equations of motion from a different action and a different Lagrangian.
Consider the Schwinger action $\hat{W}_S$. Assuming the commutation relations (42), the Heisenberg equations of motion (62) and (63) follow from the fixed end-points variation of the Schwinger action, $\delta\hat{W}_S = 0$ [38]. By using the Liouvillian (61) and Eq (62), we can further write the Schwinger Lagrangian as (95). Notice that $\hat{\mathcal{L}}_S$ is a function of non-commuting operators and that the forces do not need to be derivable from a generalized potential. The Schwinger Lagrangian is not very illuminating when written in terms of the generalized coordinates and velocities. However, the situation slightly improves when $\hat{\mathcal{L}}_S$ is expressed in terms of Cartesian positions and velocities: the identities (65) and (66) and the inverse transformation allow us to simplify $\hat{\mathcal{L}}_S$ into (96). Let us remark that the forces appearing in (96) are the impressed ones; the constraining forces are missing. In the absence of constraining forces, the physical acceleration given by Eq (7) makes $\hat{\mathcal{L}}_S$ vanish identically. When there are forces of constraint present, the Cartesian accelerations $\frac{d}{dt}\hat{V}_j(t)$ are such that $\hat{W}_S$ is stationary. At the moment, the relation between the Schwinger action principle and the Hamilton principle is not entirely clear beyond the fact that both lead to the same equations of motion for the position and velocity operators. However, it is worth mentioning that the form of the Lagrangian (96), and the fact that it deals with accelerations and not velocities, is quite reminiscent of Gauss' principle of least constraint [39,40,41,42,43]. Gauss' principle states that the actual motion of a mechanical system occurring in nature is such that the accelerations minimize the so-called "constraint" function (97),

$$Z = \sum_{j=1}^{3N} \frac{1}{2 m_j}\big(m_j \ddot{x}_j - F_j\big)^2.$$

I conjecture that the constraint function (97) and the Schwinger Lagrangian are closely related, but I cannot give a proof at the moment.

Conclusions

We have systematically constructed an operational theory for classical mechanics. We started from the case free of constraints, and later we incorporated holonomic scleronomous constraints into our formalism. By taking this unusual approach to describe classical systems, we have rediscovered several well-known results from analytical dynamics in an entirely new way. Our procedure seems to put classical dynamics on new foundations that are logically independent of the ones invoked in analytical mechanics. Our main guiding principle was the demand that all transformations have to be "quantum" canonical transformations. Only in an a posteriori analysis can we compare the principles employed and show them to be (or not be) equivalent to each other. For example, our operational approach leads to the operational version of the Hamilton principle, while the usual version of the Hamilton principle leads to Hamiltonian mechanics, then to the KvN theory, and then to our operational approach (see Appendix B). On the other hand, and this is perhaps the biggest shortcoming of this paper, we have not given any operational formulation of the D'Alembert principle. Presently, it is not clear to the author how to even define the concept of virtual work using operators. The relationship between the usual Legendre transformation used to obtain Hamiltonian mechanics and the "quantum" canonical transformation we have used to define our momentum representation of the dynamics is also unclear. Also, the Hamiltonian (operator) seems to play a rather secondary role in our approach. We wrote it in Eq (88) just for the sake of completeness, but it is not needed at all to describe the dynamics in terms of the canonical momentum.
The following questions remain open: (1) The transformation to a momentum description of the dynamics is given in the constraint-less case by a composition of a scale and a unitary transformation. Can the same be done for equations (84) and (85)? (2) Is there a relation between the Schwinger action principle and the Gauss principle of least constraint? How can we justify the Schwinger principle using only the usual tools from analytical mechanics? (3) How can we justify our procedure to obtain the Lagrange equations of motion using only analytical tools?

Appendix A: Spherical coordinates

Expressions for the spherical operators can be obtained if desired; equations (98), (99), (102) and (103) used in section 5.1 are examples.

Appendix B: The KvN theory

The KvN theory starts from the Liouville equation for the probability density ρ = |ψ|², where ψ is the probability amplitude. The amplitude is a square-integrable function over the whole of the phase space Γ; that is,

$$\int_\Gamma |\psi|^2\, d\omega < \infty,$$

where $d\omega = \prod_{j=1}^{n} dq_j\, dp_j$ is the Liouville measure on the phase space. Now, the Liouvillian operator $\hat{L} = -i\{\,\cdot\,, H\}$ is self-adjoint in the Hilbert space $L^2(\Gamma)$. Because the Poisson bracket is linear in the derivatives, ψ obeys the same equation as ρ. A Lagrangian of the form

$$L = \sum_{i,j=1}^{n} \frac{1}{2}\, a_{ij}(q)\,\dot{q}_i \dot{q}_j - V(q)$$

gives rise to a Hamiltonian

$$H = \sum_{i,j=1}^{n} \frac{1}{2}\, (a^{-1})_{ij}(q)\, p_i p_j + V(q).$$

Hence, we can write the Liouvillian as a differential operator in q and p; the formula for the derivative of an inverse matrix was used in the computation. Defining the self-adjoint operators $\hat{q}_i = q_i$, $\hat{p}_i = p_i$, $\hat{\lambda}'_i = -i\,\partial/\partial q_i$ and $\hat{\pi}'_i = -i\,\partial/\partial p_i$, we arrive at the operator (113). The operator (113) has to be considered just a preliminary form of the true Liouvillian because it is not Hermitian, due to the non-commutativity of terms of the form $\hat{p}_i \hat{p}_\alpha \hat{\pi}'_j$. The correct Liouvillian has to be a symmetrized version of (113), but since the commutator between the $\hat{p}_i$ and the $\hat{\pi}'_j$ is a c-number, there exists a function f = f(q, p) such that the Liouvillian $\hat{L}$ of Eq (114) is Hermitian. The operator (114) coincides with (87); hence, we can conclude that the KvN theory is the wave mechanics version of the operational theory developed in this paper.
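As a concrete illustration of the KvN construction just reviewed (our own example, for a single particle of mass m with a scalar potential V(q)), the Liouvillian acts on amplitudes as the first-order differential operator

$$\hat{L}\,\psi(q, p) = -i\left(\frac{p}{m}\,\frac{\partial}{\partial q} - V'(q)\,\frac{\partial}{\partial p}\right)\psi(q, p),$$

which is manifestly linear in the derivatives, so ρ = |ψ|² obeys the same Liouville equation as ψ; in this simple Cartesian case with a scalar potential, no symmetrization issue arises.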
2022-04-07T05:23:01.325Z
2022-04-06T00:00:00.000
{ "year": 2022, "sha1": "06c570c74e2e8fcf66cc5c340e73ec500d5118a0", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "f5e31f18238d0593603253469621da984ee66387", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
232120220
pes2o/s2orc
v3-fos-license
Modeling transmission dynamics and effectiveness of worker screening programs for SARS-CoV-2 in pork processing plants

Pork processing plants were apparent hotspots for SARS-CoV2 in the spring of 2020. As a result, the swine industry was confronted with a major occupational health, financial, and animal welfare crisis. The objective of this work was to describe the epidemiological situation within processing plants, develop mathematical models to simulate transmission in these plants, and test the effectiveness of routine PCR screening at minimizing SARS-CoV2 circulation. Cumulative incidence of clinical (PCR-confirmed) disease plateaued at ~2.5% to 25% across the three plants studied here. For larger outbreaks, antibody prevalence was approximately 30% to 40%. Secondly, we developed a mathematical model that accounts for asymptomatic, pre-symptomatic, and background "community" transmission. By calibrating this model to observed epidemiological data, we estimated the initial reproduction number (R) of the virus. Across plants, R generally ranged between 2 and 4 during the initial phase, but subsequently declined to ~1 after two to three weeks, most likely as a result of implementation/compliance with biosecurity measures in combination with population immunity. Using the calibrated model to simulate a range of possible scenarios, we show that the effectiveness of routine PCR screening at minimizing disease spread was far more influenced by testing frequency than by delays in results, R, or background community transmission rates. Testing every three days generally averted about 25% to 40% of clinical cases across a range of assumptions, while testing every 14 days typically averted 7% to 13% of clinical cases. However, the absolute number of additional clinical cases expected and averted was influenced by whether there was residual immunity from a previous peak (i.e., routine testing is implemented after the workforce has experienced an initial outbreak). In contrast, when using PCR screening to prevent outbreaks or in the early stages of an outbreak, even frequent testing may not prevent a large outbreak within the workforce. This research helps to identify protocols that minimize risk to occupational safety and health and support continuity of business for U.S. processing plants. While the model was calibrated to meat processing plants, the structure of the model and insights about testing are generalizable to other settings where large numbers of people work in close proximity.

Introduction

Almost 9000 U.S. meat, poultry, and agricultural workers tested positive for SARS-CoV2 from March through May 2020 [1], and at the time, the numerous outbreaks that occurred at meat processing plants were amongst the largest and fastest growing workplace outbreaks in the country [2]. In addition, disease incidence appears to be elevated in communities in proximity to processing plants, suggesting that the presence of plants may amplify transmission in the surrounding population [3]. In some cases, >25% of workers tested positive [4], leading to plant closures or reduced operations affecting 25% of U.S. meat processing capacity during April and May 2020 [2]. As a result, the swine industry was confronted with a major occupational health, financial, and animal welfare crisis; at the time, it was feared that up to 700,000 pigs would need to be culled per week due to stalled and backlogged production chains [5], and U.S. meat production fell by 20% by the end of April [6].
Plants largely returned to normal capacity by June [7]. However, similar to other high-density places of work and study, worker health and safety remains an ongoing concern. Meat processing plants may be particularly susceptible to spread of respiratory diseases due to both environmental and human conditions [8], such as prolonged contact amongst employees working in high-density settings. This is often further compounded by shared transportation and accommodation, as well as socializing outside of work [1]. Between March 1 and May 31, 2020, 8,978 cases and 55 deaths were reported across 742 meat, poultry, and crop production workplaces in 30 states [1]. Testing strategies varied by workplace, and not all workplaces performed mass testing [1]. In addition, community-based testing was quite limited during this period, so the true number of cases in such workplaces relative to background rates of community transmission is largely uncertain. While continued operation of processing facilities is essential for food supply chains, analysis of county-level data reveals a worrisome trend in which counties with processing plants reported >200,000 excess cases and >4,000 excess deaths compared to nearby counties, with the majority of spread likely attributed to community transmission, but amplified by the presence of a plant [3]. Similar situations have also been reported in Germany, Portugal, and the United Kingdom [9]. Data related to this issue have thus far been reported in an aggregated manner [1,3,10], and there are limited data and epidemiological analyses of how outbreaks unfolded at individual plants (but see [4]). To better understand how outbreaks can be controlled or prevented, screening/testing protocols have been proposed to identify infected and asymptomatic individuals, often with an emphasis on health care environments [11,12], high-risk cohorts [13], and large populations [14]. For example, weekly screening of high-risk populations (such as healthcare workers) can reduce their contribution to disease spread by ~25% relative to a strategy based only on isolation of symptomatic cases [13]. However, the population-level effectiveness of different screening/testing protocols, such as screening asymptomatic workers on a routine basis, has not been rigorously tested for non-healthcare workplaces. Working in close partnership with industry stakeholders, the objective of this work was to first characterize the epidemic curves across three pork processing plants and summarize epidemiological data related to PCR and antibody testing at individual plants. We then develop a mathematical model to simulate the spread of SARS-CoV2 within these plants, accounting for pre-symptomatic, asymptomatic, and community transmission, and calibrate the model to observed epidemiological data. We used this model to estimate each outbreak's reproduction number (R) during the early phases of workplace transmission and calculated time-dependent reproduction numbers (R-TD) that allow the effective R to vary through time to measure the influence of interventions [15]. Finally, we used the model to evaluate a range of PCR-screening scenarios to test their effectiveness at minimizing SARS-CoV2 circulation. (University of Minnesota - https://vetmed.umn.edu/centers-programs/swine-program/research/industry-advisory-board).
The funder provided support in the form of salaries for author AB, but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. SD and JH are affiliated with commercial companies. These commercial affiliations did not play a role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript and only provided financial support in the form of authors' salaries. The specific roles of these authors are articulated in the 'author contributions' section. Competing interests: SD and JH are affiliated with commercial companies. We also disclose that JH is affiliated with one of the slaughtering facilities included in this manuscript. These commercial affiliations did not play a role in the study design, data analysis, or preparation of the manuscript and only provided financial support in the form of authors' salaries. This does not alter our adherence to PLOS ONE policies on sharing data and materials, and data availability is outlined in the Data Availability Statement. Data utilized in this manuscript are confidential and were collected by commercial companies. These companies provided data and approved the manuscript for publication, but did not play a role in study design, data analysis, or preparation of the manuscript.

Study populations and descriptive epidemiology

Data were available from three plants located in different states. These plants ranged in size from 750 to 2400 workers. Plant attributes are summarized in Table 1. Plants participated in this study on condition of confidentiality; therefore, company names and plant locations are not identified in this paper. Each plant had two types of data available: daily incidence of new self-reported SARS-CoV2 cases and company-initiated testing data. Each plant became involved in this project during the early stages of COVID-19 as part of a cooperative needs assessment for the swine industry, conducted by a newly formed interdisciplinary group of swine veterinarians, epidemiologists, medical doctors, and diagnosticians that contributed to the development of the model and interpretation of results. The driver of this novel collaboration was an intrinsic understanding of the impact that this human health problem would have on animal welfare and food production. As such, the plants included in this project were not randomly selected and were aggressive and progressive in implementing control measures for COVID-19 early in the pandemic.

Daily incidence of clinical cases. Daily incidence of self-reported cases was summarized as the number of new workers per day who reported a confirmed SARS-CoV2 positive PCR test administered by their private health care provider (Plants A-B) or through onsite company-sponsored testing (Plant C). Given that testing by health care providers was almost exclusively limited to symptomatic cases during this time period, self-reported cases were assumed to represent the incidence of clinical disease and not pre-symptomatic or asymptomatic infections (see Supplementary text A in S1 Appendix for further detail). From these datasets, we plotted the epidemic curve for each plant based on daily clinical cases as well as cumulative clinical cases (plotted as a proportion to facilitate comparisons between plants). Given that the very early reporting of clinical cases may be subject to error, epidemic curves were plotted relative to days elapsed since the first five clinical cases.
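For concreteness, the curve construction just described can be sketched in R as follows. This is our illustration only; the data frame `cases`, with columns plant, date, new_cases, and workforce, is hypothetical:

```r
library(dplyr)
library(ggplot2)

# Hypothetical input: one row per plant per day of new self-reported cases.
curves <- cases %>%
  group_by(plant) %>%
  arrange(date, .by_group = TRUE) %>%
  mutate(cum = cumsum(new_cases),
         day = as.numeric(date - min(date[cum >= 5]))) %>%  # day 0 = fifth case
  filter(day >= 0)

# Cumulative proportion of the workforce with PCR-confirmed clinical disease.
ggplot(curves, aes(day, cum / workforce, color = plant)) +
  geom_line() +
  labs(x = "Days since first five clinical cases",
       y = "Cumulative proportion of workforce")
```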
Company-initiated testing. All plants underwent company-initiated testing of workers, though testing strategy varied between plants (Table 1). Plants A and B tested a randomly selected subset of the workforce with an antibody test (IgG/IgM Rapid Test Cassette, Healgen Scientific, Houston, TX). The manufacturer reports the test's positive percent agreement as 86.7% and 96.7% for IgM and IgG, respectively. Negative percent agreement was 99.0% and 98.0% for IgM and IgG, respectively. Plant C also employed antibody tests (LIAISON® SARS-CoV-2 S1/S2 IgG, DiaSorin Inc, Stillwater, Minnesota). Reported test performance for IgG was 97.6% positive percent agreement and 99.3% negative percent agreement at >15 days post-infection. Testing was conducted on a single date at Plant A, spread over a series of weeks in Plant B (Table B in S1 Appendix), and on two distinct dates separated by ~2 months in Plant C.

Model framework

We developed deterministic and stochastic compartmental models to simulate the transmission of SARS-CoV2 within plants, assuming homogeneous mixing, which was deemed a reasonable assumption based on discussions with plant representatives and given the small scale of the model. We used the deterministic version of the model for the purposes of model calibration, to minimize computational time. However, the stochastic version of the model allowed us to better quantify variability in model outcomes, as well as introduce events into the model that only occurred at certain intervals (for example, worker screening/testing programs; red lines in Fig 1). In both versions of the compartmental model, all individuals in the worker population are classified as belonging to a series of discrete compartments, and the number of individuals in each compartment at any given time is modeled through time (Fig 1). Compartments utilized for this model include: Susceptible (S) - workers not yet infected; Exposed (E) - workers that have contracted the virus but are still in the latent period of infection. A proportion q of infected individuals move into the undetected class (U) - workers whose infection is either truly asymptomatic or so mild that it is not detected; the remainder (1 − q) of workers eventually become clinically infected (I_c) but will pass through a pre-clinical asymptomatic phase (I_p) prior to symptom onset. A proportion z of clinically infected workers will continue to go to work while infected, but the remainder will stay at home (H). After some length of time, infected individuals (I_c and U) and at-home individuals (H) will recover and return to work. Recovered individuals are assumed to be immune for at least the short time period of several months that is of interest here. Parameters that determine the rate at which individuals transition between compartments are defined in Table A (S1 Appendix). Workers in the model can become infected by two means. First, transmission can occur from infected workers (I_p, I_c, and U) based on the effective reproduction number R for workplace transmission, with the undetected class (mild or asymptomatic cases) being proportionally (R_c) less infectious than the clinical cases. Second, workers can become infected outside the workplace, which is represented by a constant background community transmission rate (b) and can be interpreted as a force of infection (the per capita rate at which susceptible individuals become infected).

Fig 1. Schematic for compartmental transmission model. Orange boxes represent classes of individuals that contribute to transmission and that are also detectable via PCR. Blue arrows represent processes that are modeled as differential equations in the deterministic model, and refer to transmission and disease progression processes. Red arrows represent screening and testing measures in the model, which occur at specific intervals. Red and blue arrows are included in the stochastic model. Parameters that determine the rate at which individuals transition between compartments are defined in Table A in the S1 Appendix. https://doi.org/10.1371/journal.pone.0249143.g001

Thus, the deterministic model is expressed as a system of ordinary differential equations (Eq (1); see the illustrative sketch below), where λ_1, λ_2 and γ were defined as the inverses of the latent, pre-clinical-infectious and clinical-infectious periods, respectively; β is the workplace transmission rate, derived from R and the duration of infectiousness; and h is defined as the inverse of the period of time during which infected workers remain at home. Given the short duration of the time period of interest, the population size was considered constant and closed. At the onset of the outbreak, the population was considered fully susceptible with a single latently infected worker. In the stochastic version of this model, we used the tau-leap method proposed by Gillespie [16] for incorporating stochasticity at each time step [17]: each transition term within the above equations is drawn from a Poisson distribution. For example, the number of workers leaving the exposed class was drawn from a Poisson distribution with mean (λ_1 E). In addition, multiple testing/screening protocols were introduced to the model as events that occurred at specified intervals (red lines in Fig 1). Each testing/screening protocol i can be turned on or off in the model, and applied to a proportion of workers (p_i) at a specified interval of days (Freq_i). The testing/screening protocol detects infected workers with a certain probability based on the sensitivity (se_i) of the method. Workers with positive tests are sent home either immediately (delay_i = 0 days) or after a user-specified number of days, reflecting delays in test results. Temperature-based screening was modeled to occur daily with no delays in results, but only detected clinically infected workers. PCR-based testing was modeled as being able to detect all workers in the I_p, I_c, or U classes, with a user-specified delay in testing results. This study was reviewed by the University of Minnesota Institutional Review Board (STUDY00012004), which made the determination that this research does not involve human subjects and waived requirements for informed consent; all data were aggregated and fully anonymized.

Model calibration

We tuned the deterministic model to the epidemiological data (cumulative clinical cases and testing results) from each plant. Given that the daily incidence data for two plants (Plants A-B) were based on self-reported PCR testing from private health care providers and one plant (Plant C) was based on ongoing company-provided testing of symptomatic workers, we assumed that workers who sought and received testing were experiencing clinical disease, because testing was generally restricted to symptomatic patients at that time. Self-reported PCR-positive workers were also absent from work; therefore, we assumed the number of self-reported cases (Plants A and B) was equivalent to the at-home "H" class in our model.
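As referenced above, a minimal sketch of the deterministic model of Eq (1) can be written in R (the language used for the models here) with the deSolve package. This is our illustrative reconstruction, not the authors' code: the force-of-infection form, the symptom-onset split between staying at work (z) and isolating (1 − z), and the choice beta = R / (infectious period) are assumptions based on the verbal description above.

```r
library(deSolve)  # ODE solver

# Deterministic sketch of within-plant transmission (illustrative only)
covid_plant <- function(t, y, p) {
  with(as.list(c(y, p)), {
    N   <- S + E + U + Ip + Ic + H + R          # closed workforce
    foi <- beta * (Ip + Ic + Rc * U) / N + b    # workplace + community force of infection
    dS  <- -foi * S
    dE  <-  foi * S - lambda1 * E
    dU  <-  q * lambda1 * E - gamma * U                 # undetected (asymptomatic/mild)
    dIp <- (1 - q) * lambda1 * E - lambda2 * Ip         # pre-clinical
    dIc <-  z * lambda2 * Ip - gamma * Ic               # clinical, still at work
    dH  <- (1 - z) * lambda2 * Ip - h * H               # clinical, isolating at home
    dR  <-  gamma * (U + Ic) + h * H                    # recovered (assumed immune)
    list(c(dS, dE, dU, dIp, dIc, dH, dR))
  })
}

pars <- c(beta = 3 / 7.5,  # R = 3 over a ~7.5-day infectious period (assumed)
          b = 0.00015, lambda1 = 1/5, lambda2 = 1/1.5,
          gamma = 1/6, h = 1/10, q = 0.5, z = 0.3, Rc = 0.75)
y0  <- c(S = 999, E = 1, U = 0, Ip = 0, Ic = 0, H = 0, R = 0)
out <- ode(y = y0, times = 0:90, func = covid_plant, parms = pars)
head(out)
```

The stochastic version replaces each of these flow terms with a Poisson draw at every time step, with testing/screening events layered on top.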
The percent of workers that were IgG-positive during company-initiated testing was assumed to be equivalent to the percent of workers in the R class at a particular point in time. Although it is possible for IgG to be detectable within two days of symptom onset [18], generally <70% of people had detectable IgG by 10 days after symptom onset [18,19], while >85% had detectable IgG after 11-15 days [19-21]. In our model, the length of time between infection and recovery was 11-15 days, which is why we believe that the percent of recovered individuals in the model is a reasonable approximation of the percent IgG-positive in the observed data. Because most parameter values in the model are uncertain, we conducted a multivariate calibration exercise on uncertain parameters (Table A in S1 Appendix) using Latin hypercube sampling (LHS) and rejection sampling [22-24]. We generated 10,000 parameter sets by sampling a Latin hypercube, which is an efficient method to sample multivariable parameter space. Model results were generated for each parameter set using the deterministic model. Parameter sets were then rejected if the modeled outbreak did not sufficiently resemble observed data, based on a set of criteria that was specific to each plant (cumulative number at-home, percent infected/PCR-positive or recovered/IgG-positive; see Table C in S1 Appendix for the criteria for each plant). Goodness-of-fit criteria were based on epidemiological data from the earlier phases of plant-based outbreaks, up until the completion of company-initiated testing. Parameter value medians and interquartile ranges were summarized from the candidate parameter sets that met the goodness-of-fit criteria. The median value was considered to be the most-likely value and used in subsequent model exploration. Model fit was checked by plotting the observed epidemiological data against the model's predictions. Here, 1000 simulations were performed with the stochastic model to estimate variability in model outcomes given the most-likely parameter values. For plants which experienced outbreaks of longer duration (B and C), it was apparent that, although the calibrated parameter values produced simulated outbreaks that resembled the observed data in the early phase of the outbreak, the number of clinical cases was overestimated in the post-testing period. Therefore, we allowed R and b to be re-calibrated based on the post-testing data to account for altered disease dynamics, which may have emerged as a result of higher adherence to biosafety protocols (reduced R) or changes in disease prevalence in the community (reduced b). We also performed a sensitivity analysis for the stochastic model using Latin hypercube sampling and random forest analyses, which is a common approach used for global sensitivity analyses in simulation modelling [22-25] (Supplementary text B in S1 Appendix).

Estimation of R

To further interrogate our estimate of R within the workforce population and to determine the sensitivity of R to the estimation method used, we estimated R using several accessory methods that use only data on the cumulative incidence of clinical cases, namely the exponential-growth (EG) method, maximum-likelihood (ML) method, and the time-dependent R (R-TD) method [26] (Supplementary text C in S1 Appendix).
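For illustration, the EG, ML, and TD estimates can be obtained from a daily incidence vector with the R0 package used for this analysis. This is a hedged sketch: the incidence values and the generation-time standard deviation are made up, and in practice the begin/end window would be chosen to match the near-exponential early phase.

```r
library(R0)

# Hypothetical daily counts of new clinical cases in the workforce
incid <- c(1, 0, 2, 3, 5, 8, 12, 15, 21, 18, 25, 22, 19, 14, 10)

# Generation time: gamma distribution with mean 6 days (sd of 3 assumed here)
mGT <- generation.time("gamma", c(6, 3))

# Exponential-growth, maximum-likelihood, and time-dependent estimates
est <- estimate.R(incid, mGT, begin = 1, end = 15,
                  methods = c("EG", "ML", "TD"))
est$estimates$EG$R   # early-phase R from exponential growth
est$estimates$ML$R   # maximum-likelihood R
est$estimates$TD$R   # daily time-dependent R values
```

The TD output can then be smoothed into weekly bins, as done for the results reported below.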
All three methods assume that subsequent cases observed in a population arise from cases occurring earlier in the same population (here, the workforce), and thus these methods could lead to higher estimates of R, as there is no way to account for the possibility that workers were infected by non-worker cases in the community. Despite this limitation, these estimation techniques were applied in order to evaluate the robustness of our R estimates across approaches with varying assumptions and limitations. The EG and ML methods estimate R for the early phase of an outbreak (identified via built-in functions in the R0 package as the period where cases approximated exponential growth). In contrast, time-dependent reproduction numbers (R-TD) allow R to be variable through time [26], and can be useful for monitoring epidemic trends, measuring the influence of interventions over time, and informing parameters for mathematical models [27]. We used a procedure proposed by [15] to estimate effective R-TDs for each plant from the observed epidemic curves and generation time distribution. R-TD values were calculated on a daily interval and smoothed to weekly bins. 95% confidence intervals (CI) were obtained through simulation, as described elsewhere [26].

Scenario testing

We evaluated the effectiveness of PCR-based screening (every 3, 7, 14, or 28 days) in reducing the number of clinical cases relative to a temperature-screening-only baseline. For each testing frequency, we also evaluated the effect of delays in testing results, wherein infected workers remained at work until results were received (after 1, 3, or 5 days); an illustrative single-day update of the stochastic model is sketched below. We also evaluated the effectiveness of PCR screening when 100% versus 75% of workers were tested, and across three population sizes (n = 100, 1000, and 2500 workers). Finally, we also evaluated the effectiveness of each scenario if used proactively to prevent disease introduction, or reactively during the early, peak, or post-outbreak stages. For the reactive scenarios, the starting number of individuals in each infection class was taken from the deterministic predictions (Early = first timepoint at which a cumulative 0.5% of workers are clinically infected; Peak = timepoint with the highest number of infected individuals; Post = 60 days after the peak). We also considered an outbreak stage (35%-Imm.) in which testing is used to prevent re-introduction to an already partially immune population (35% of workers start as recovered, to resemble the epidemiological situation of Plants B and C). Based on the sensitivity analysis, it was apparent that model results were sensitive to R and b; thus, all scenarios were run for both high and low transmission parameters, yielding 1500 total scenarios. The choice of values for R (high = 4, low = 2) comes from a consensus of values from the above R estimations. The high background rate (b = 0.005) falls within the approximate range of background rates estimated across all plants. At the time of writing (November 15, 2020), daily incidence of >10 per 10,000 has been reported for Midwestern states, and assuming that there is substantial underreporting, our high b is reasonable. The low background rate (b = 0.00015) is equivalent to ~1.5 new cases per 10,000, which would be considered low to moderate at the time of writing and approximated background transmission in summer 2020 in the rural Midwest [28].
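As referenced above, a single illustrative tau-leap day with an optional PCR screening event might look as follows. This is again our sketch, not the authors' code: the parameter names p_test and se are hypothetical, the event is shown for the zero-delay case, and a production version must also cap the random draws so that compartments stay non-negative.

```r
# One illustrative tau-leap day: Poisson/binomial draws for each flow,
# followed by an optional PCR screening event (zero result delay).
step_day <- function(st, p, test_today = FALSE) {
  with(as.list(c(st, p)), {
    N      <- S + E + U + Ip + Ic + H + R
    newE   <- rpois(1, (beta * (Ip + Ic + Rc * U) / N + b) * S)  # new exposures
    leaveE <- rpois(1, lambda1 * E)      # end of latent period
    toU    <- rbinom(1, leaveE, q)       # split: undetected vs. pre-clinical
    onset  <- rpois(1, lambda2 * Ip)     # symptom onset
    toHome <- rbinom(1, onset, 1 - z)    # fraction that self-isolates at onset
    recU   <- rpois(1, gamma * U)        # recoveries
    recIc  <- rpois(1, gamma * Ic)
    retH   <- rpois(1, h * H)            # return to work from home
    S  <- S - newE;              E  <- E + newE - leaveE
    U  <- U + toU - recU;        Ip <- Ip + (leaveE - toU) - onset
    Ic <- Ic + (onset - toHome) - recIc
    H  <- H + toHome - retH;     R  <- R + recU + recIc + retH
    if (test_today) {  # PCR detects Ip, Ic, and U; positives are sent home
      posIp <- rbinom(1, max(Ip, 0), p_test * se)
      posIc <- rbinom(1, max(Ic, 0), p_test * se)
      posU  <- rbinom(1, max(U, 0),  p_test * se)
      Ip <- Ip - posIp; Ic <- Ic - posIc; U <- U - posU
      H  <- H + posIp + posIc + posU
    }
    c(S = S, E = E, U = U, Ip = Ip, Ic = Ic, H = H, R = R)
  })
}
```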
Values for high/low R and b were compared to the results of the sensitivity analysis to ensure that the selected values spanned the variability in model outputs related to these parameters. Other parameters were set to a single most-likely value based on either our model calibration or published literature (R_c = 0.75 [29]; latent period = 5 days; pre-clinical period = 1.5 days; clinical period = 6 days; proportion undetected = 0.5; proportion that go to work when clinically ill = 0.3; at-home period = 10 days; temperature screening sensitivity = 0.7; PCR sensitivity = 0.9). Each scenario was simulated for 90 days, and 100 simulations were performed per scenario, yielding 150,000 total model runs.

Software

Models were coded in R statistical software v3.6.0. Sensitivity analysis and model calibration were performed using the tgp, randomForest, and randomForestExplainer packages [30-32], and estimation of R was performed using the R0 package [33]. An RShiny implementation of the stochastic version of this model was developed and is available at https://sumn.shinyapps.io/covidshcomp/. This modeling dashboard allows for customization of all parameter values and overlays the trajectories of outbreaks under user-defined testing/screening protocols.

Descriptive epidemiology

Based on confirmed PCR-positives, epidemic curves of the daily number and cumulative proportion of clinical cases plotted for each plant show that Plant C experienced the largest outbreak, which plateaued at ~25% of the workforce reporting clinical disease (only Plant C had company-sponsored PCR testing available on an on-going basis; Fig 2A). However, Plant B experienced the largest outbreak as measured by antibody prevalence, even though the cumulative number of PCR-positive cases leveled off with just under 10% of the workforce reporting clinical disease (PCR confirmed by private health providers; Fig 2A). Plant A experienced a much smaller outbreak of short duration. Mitigation measures employed by the plants are listed in Table 2. Antibody test results were compared against PCR results for Plant B workers (Table D in S1 Appendix). Of 42 workers that self-reported being previously PCR-positive (or presumptive positive), 35 (83%) were IgM- and IgG-positive. Additionally, of 419 workers who had never received a PCR test, 123 (29%) were IgM- and IgG-positive, and 11 (2.6%) were IgM-positive and IgG-negative.

Model calibration to outbreak data

The calibrated parameter values for each plant are summarized in Table A (S1 Appendix) and visually compared in Fig 3. The most likely values for parameters related to the progression of infection (e.g., duration of the latent period, pre-clinical period, and clinical period) were similar across plants, as was the proportion of workers estimated to go to work when experiencing clinical symptoms (most likely values ranging from 0.23 to 0.33). However, estimated R's differed substantially among plants, as did the background community transmission rate. Furthermore, a higher percentage of infections was estimated to be undetected at Plant A (~80%) than at Plant B (~50%) and Plant C (~30%), and the relative infectiousness of such undetected individuals (with presumed mild or asymptomatic infections) was estimated to be 0.26-0.46. The model fit was evaluated against the observed number of cumulative clinical infections.
The model fit reasonably well earlier in the outbreak, which was the period used for model tuning, but the number of cases was overestimated in the latter phase of the outbreaks (Fig B in S1 Appendix). Therefore, the parameter values of R and b were re-calibrated for the post-testing period. Based on a sensitivity analysis of the stochastic model, the parameters most associated with variation in the cumulative number of clinical cases were the transmission rate parameters (background transmission rate, R, and R_c; Fig 5A). Cumulative detected clinical cases decreased with higher q (proportion of infections that are undetected; Fig D in S1 Appendix). Given the fundamental importance of R and b, we used the random forest analysis to visualize the interaction between these two variables (Fig 5B).

Estimation of R

In addition to calibration of the compartmental model, we used three alternate methods to estimate R. Across all four methods, R ranged from 1.7-2.7 for Plant A and 3.0-4.4 for Plant B (Table E in S1 Appendix). Based on this analysis, R = 2 was considered "low," and R = 4 was considered "high" in subsequent scenario testing. Across all plants, the time-dependent R's initially were >2 for the first one to two weeks, and then fell to below or close to 1 by week two or three (Fig 6B). Results of these three methods are influenced by generation time assumptions. Here, we show the results assuming a generation time of six days, as that is most compatible with our calibrated parameter estimate of ~5 days for the latent period (Table 2). However, similar results were obtained when assuming a generation time of four days (Table E and S6, S7 Figs in S1 Appendix).

Scenario testing

To evaluate the effectiveness of different PCR screening protocols, we modeled how the number of workplace infections was influenced by the frequency of full workforce PCR testing (every 3, 7, 14, or 28 days) and delays in test results (1, 3, or 5 days) across three population sizes (n = 100, 1000, and 2500). We also explored scenarios in which screening began at five different outbreak stages (prevention, early, peak, post, and 35% immune). Due to model sensitivity to R and b, each scenario was modeled with a factorial combination of high/low R (R = 2 / 4) and b (b = 0.00015 / 0.005). For each 90-day scenario, we summarized the cumulative number of additional infections (only infections that occurred after the onset of PCR screening were counted), the cumulative number of additional clinical cases, the proportion of infections that were due to workplace transmission, the number of infections detected by PCR (Figs G-K in S1 Appendix), and the number and proportion of clinical cases averted (relative to a temperature-screening-only baseline; Fig 7). All analyses were repeated with only 75% of workers undergoing testing, and results were largely similar (Figs L-P in S1 Appendix).

Discussion

In this study, we described the epidemiological situation in three pork processing facilities in the United States, including characterizing each plant's epidemic curves and results of company-initiated cross-sectional sampling. We then developed a mathematical model, calibrated to each plant's data, that accounted for asymptomatic, pre-symptomatic, and background "community" transmission.
All plants experienced SARS-CoV2 infections in their workforce beginning in March and April 2020, a time period in which SARS-CoV2 was rapidly spreading throughout the country and scientific understanding of its epidemiology, transmission, and appropriate protective and mitigation measures was rapidly evolving. The epidemic curves of two of the three plants demonstrated relatively large outbreaks, with ~10% to 25% of workers reporting PCR-confirmed clinical disease. Daily increases in cases in the workforce plateaued after approximately two and six weeks for Plants B and C, respectively. The parameter values estimated during model calibration revealed both similarities and differences across plants. These differences were most apparent for the plants experiencing larger outbreaks, and may have been related to variability in plant location, case identification methods, and potentially implementation of and compliance with biosafety measures. Plant B, for example, was located in a region where high levels of community transmission early in the pandemic may have contributed to the steep epidemic curve. A greater role of community spread within this region was supported by the model calibration results for Plant B, in which the estimated background transmission rate was notably higher than in other plants. During model calibration of Plant B, it was difficult to find parameter combinations in which the cumulative clinical cases (at-home/PCR-confirmed) remained below the observed 10% while also achieving ~40% recovered (assumed to be IgG-positive). It is possible that the self-reported clinical cases (at-home sick / PCR-positive) underestimated the true number of clinical cases. Only cases that were PCR-confirmed were included, and some people may have been unable to get tested even though they were showing mild symptoms. Compared to the other plants, this plant's outbreak began in mid-March, a time period when testing was limited, many were unaware of COVID-19 clinical signs, and symptoms could have been confused with those of other seasonal illnesses. Alternatively, the discrepancy between cumulative clinical cases and the percent of the workforce recovered may have emerged because we assumed that IgG-positive individuals were exclusively found in the recovered class. In the model, the mean time from exposure to recovery (and assumed IgG-positivity) would be ~10-12 days. However, it is possible for individuals to have detectable IgG as early as 4 days post-infection, though >10 days would be more likely [19]. If some individuals have detectable IgG prior to recovery (10-12 days), then some non-recovered individuals would contribute to the IgG prevalence. This could explain why observed IgG prevalence was higher than the predicted proportion of people in the recovered class. Plant C was the only plant to offer company-sponsored PCR testing for diagnostic purposes on a rolling basis. Although testing was generally restricted to those showing clinical signs, including very mild clinical signs, the ease of access to PCR testing may have led to more cases being PCR-confirmed than at other plants, and thus an apparently larger outbreak, with a cumulative ~25% of workers clinically affected. Compared to other plants, the observed epidemic curve based on PCR-confirmed clinical cases was much more closely aligned with the antibody prevalence from cross-sectional surveillance, suggesting that fewer cases went undocumented.
This is also reflected in the much lower estimated values of q (proportion of infections that are undetected). Plant C also had a policy that asked all household and carpool contacts of a potential case to self-isolate at the same time as the employee showing clinical symptoms (i.e., isolated from the first report of clinical symptoms, not when the test result was received). These contacts were only tested if clinical signs developed within the 14-day quarantine period. Thus, when illness started in these contacts, they were already outside of the workplace, and their contact tracing led to few or no new workplace contacts. If they developed clinical signs while in quarantine but tested negative, they were still not released from quarantine before the end of the 14-day period. It is notable that the sero-positivity in this population was only slightly lower than that of Plant B, and Plant B's clinical curve leveled off at just 10%. These observations suggest potential under-documentation of clinical cases at Plant B, or a wider clinical definition of what would warrant administration of a PCR test at Plant C. Plant C offered PCR testing as needed on site if at least one clinical sign of COVID-19 was present, however mild, but testing was generally not offered to asymptomatic employees, even those in quarantine, due to limits in test availability at the time. Thus, it is difficult to determine whether Plant C truly experienced a larger outbreak, or simply better documentation of cases. With the above considerations in mind, there are several general insights from the model that warrant further discussion. The majority of R estimates across plants fell between ~2.5 and 3.5, regardless of estimation method, indicating that each infection resulted in around three additional infections during the outbreaks' early period. Interestingly, although this is a high-density population, this estimate is not markedly different from the values estimated for the general population [34-36]. Generally speaking, non-model estimates of R (ML, EG, and TD methods) were slightly higher than the model-calibrated estimates. This may be because these methods assume that all infections in the worker population were the result of transmission from earlier infections in the same population, discounting the possibility that some cases were the result of community transmission. That being said, all four methods of estimation have limitations, and the fact that each method produced overlapping confidence intervals provides support that the reported R's are robust to the varying assumptions and limitations inherent to each method. An analysis of R through time showed that R declined to around 1 within two to three weeks, suggesting that outbreaks were brought under control fairly rapidly. For example, Plant C experienced a decrease in cases at the end of April, corresponding to the date that screening questions were implemented at the point of pick-up (rather than at the plant's entrance) for those employees who take a bus from a nearby community. Plant C also experienced a spike in cases in early May, which corresponded to an outbreak at a neighboring meat processing plant and the discovery that employees from both plants sometimes shared a household.
The pre-shift screening questions were adjusted to ask about shared living arrangements or close contact with an employee from the neighboring plant over the previous three days; this resulted in a significant number of employees being asked to self-isolate (with pay) for the 14-day quarantine period. Other measures employed at the plants included staggered shifts to minimize overlap in workers, barriers and signage to enforce social distancing, extra buses to increase spacing during the commute, providing masks and hand-sanitizer, and contact tracing combined with self-isolating employees who showed any clinical symptoms or whose household members were experiencing clinical signs or being tested for COVID-19 (Table 2). While some of these measures were in place in plants even before substantial workplace cases were observed, discussions with company personnel indicate that compliance with biosafety measures may have increased after the company-initiated cross-sectional sampling was performed, due to greater awareness of the scope of the outbreak.

Interestingly, our results also suggest a potential role of population immunity in slowing infections, though we do not advocate that this should be a purposeful strategy for disease mitigation. If the initial value of R is assumed to be 3, then the critical immunization threshold to achieve herd immunity would be 66% (1 − 1/R₀). However, if biosafety measures reduce R by one half (R = 1.5), this shifts the threshold to 33% (a minimal calculation of this threshold shift is sketched at the end of this passage). Interestingly, this is not too different from the antibody prevalence reported in Plants B and C. If population immunity is indeed playing a role in these plants, then it would be expected that the worker population can no longer sustain a large-scale outbreak, though there still may be a consistent low level of cases due to exposure in the community. Indeed, Plant C showed a decrease in new cases during July, even though cases reported in the general community were increasing at this time. In addition, none of the plants had experienced a "second wave" as of November 2020, and case counts at the plants aligned well with what would be expected from concurrent incidence levels in surrounding communities. That being said, worker turnover and uncertainties about the duration of immunity could both change the percent of the population that is antibody-positive through time. Additionally, if worker contact rates change and/or compliance with biosafety measures relaxes, this would allow R to shift back upward, and consequently the critical immunization threshold needed for herd immunity would also shift upward.

One of the most challenging aspects of controlling SARS-CoV2 is the presence of asymptomatic carriers who are capable of transmitting the virus. In this project, these individuals would fall in the "Undetected" class, and the term "carrier" would be loosely applied to those testing PCR- or antibody-positive but who reported no symptoms. However, there was not adequate documentation or follow-up to definitively conclude that they were asymptomatic. Thus, the Undetected compartment in the model more accurately represented both true asymptomatic infections as well as those with mild symptoms who either were not aware of infection or were unable to obtain a test. It is possible that the estimated proportion of infections that were undetected was lower in Plant C (~30%) because company-sponsored PCR-testing was available on a rolling basis.
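The herd-immunity arithmetic referenced above is simple enough to sketch in a few lines. The following minimal Python snippet, with a function name of our own choosing, reproduces the 1 − 1/R₀ threshold for the two illustrative R values used in the text (3 and 1.5); it is an illustration, not part of the authors' calibrated model.

```python
def critical_immunization_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to achieve herd immunity."""
    if r0 <= 1.0:
        return 0.0  # an infection with R <= 1 cannot sustain an outbreak at all
    return 1.0 - 1.0 / r0

for r0 in (3.0, 1.5):
    print(f"R = {r0}: threshold = {critical_immunization_threshold(r0):.1%}")
# R = 3.0: threshold = 66.7%  (reported as 66% in the text)
# R = 1.5: threshold = 33.3%
```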
The 50% undetected rate in Plant B (located in a regional epicenter) may reflect that more workers sought and received testing there than at Plant A, which was distant from epicenters at the time (as discussed above, Plant C's on-site testing may have resulted in fewer undetected cases). Plant A had a modeled undetected rate of >75%, which matches reasonably well with the cross-sectional testing data, in which 93% of antibody-positive workers at Plant A reported no symptoms.

Community transmission is key to understanding and modeling the unfolding of outbreaks at plants. Here, we chose to handle community transmission as a constant background rate. As described above, it appears that plant-to-plant variability in estimated background rates reflected observed regional trends, with Plant B estimated to have the highest background rate during this period. However, we did not allow the background rate to change dynamically through time. Also, while we can compare our background transmission rates to CDC county-level incidence data, it is important to note that many workers tend to socialize and live together with other workers, both from the same plant and from other plants, sometimes as part of immigrant communities. Thus, the community transmission experienced by workers is likely not completely decoupled from the transmission dynamics within the workplace, nor is it reflective of dynamics in the general population. For these reasons, worker exposure to SARS-CoV2 in the community may be higher than that of the general public due to such heterogeneities in community contacts. It is also important to note that the nature of our data limits the conclusions we can draw about community transmission, given that comparable data from communities were not available. Thus, our estimates of community transmission should be interpreted cautiously. With this in mind, a general insight from our model is that the importance of community exposure depends on the stage of the outbreak. Community spread accounted for more transmission events than workplace transmission in the very early and post-peak periods of the simulations. However, once a workplace outbreak was established, workplace transmission accounted for the majority of transmission events (Fig C in S1 Appendix). Community transmission appeared to kickstart workplace outbreaks, resulting in the outbreak reaching a "tipping point" of exponential growth more quickly. Indeed, our sensitivity analysis suggests that there is a background transmission threshold of ~25 cases per 10,000 people, above which workplace outbreaks are distinctly larger (Fig 5); a toy simulation of this mechanism is sketched after this passage.

Finally, we used the calibrated model to evaluate the effectiveness of routine PCR-screening at reducing disease circulation within a plant across a range of scenarios (Fig 7, Figs G-P in S1 Appendix). As expected, more cumulative clinical cases occurred at higher values of R and b, though the differences between high and low b were much larger than the differences between high and low R (color of shading in Fig 7). Indeed, in a population of 1000, the baseline expected number of additional clinical cases (prevention stage) was 18 and 17 for the low and high R scenarios when b was low, and 268 and 382 for low and high R when b was high. Across all outbreak stages and population sizes, the proportion of clinical cases averted (cell labels in Fig 7) was most influenced by testing frequency and less influenced by delays in results, R, b, and the proportion of workers tested.
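To make the tipping-point role of community (background) transmission concrete, here is a toy discrete-time stochastic SEIR sketch with a constant per-capita background infection rate b added to the workplace force of infection. All parameter names and values are illustrative assumptions (chosen so that beta/gamma ≈ 3.5, within the R range reported above); this is a sketch of the mechanism, not the authors' calibrated model, and it does not attempt to reproduce the ~25 per 10,000 threshold quantitatively.

```python
import numpy as np

def simulate_outbreak(n=1000, beta=0.5, sigma=0.25, gamma=1/7, b=0.0,
                      days=120, seed=0):
    """Stochastic SEIR with a constant background (community) infection rate b."""
    rng = np.random.default_rng(seed)
    s, e, i, r = n, 0, 0, 0
    cumulative_infections = 0
    for _ in range(days):
        lam = beta * i / n + b                       # workplace + community exposure
        new_e = rng.binomial(s, 1 - np.exp(-lam))    # S -> E
        new_i = rng.binomial(e, 1 - np.exp(-sigma))  # E -> I
        new_r = rng.binomial(i, 1 - np.exp(-gamma))  # I -> R
        s -= new_e
        e += new_e - new_i
        i += new_i - new_r
        r += new_r
        cumulative_infections += new_i
    return cumulative_infections

# Sweep assumed per-capita daily background rates over a fixed time window:
# higher community pressure seeds the workplace epidemic earlier and yields
# larger cumulative counts within the window.
for b in (1e-4, 1e-3, 5e-3):
    print(f"b = {b:g}: cumulative infections = {simulate_outbreak(b=b)}")
```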
Testing every three days generally averted about 25% to 40% of clinical cases, while testing every seven days averted somewhere between 13% and 20% of clinical cases. Testing every 14 and 28 days typically averted 7% to 13% and 2% to 8% of clinical cases, respectively. However, it should be noted that the number of additional clinical cases expected was sometimes quite small (top number in each block in Fig 7), particularly for population sizes of 100. Thus, the absolute number of cases averted should be considered alongside other factors in selecting a mitigation strategy. It is also notable that even frequent testing may not prevent a large outbreak when background transmission is high and population-level immunity is low (prevention and early outbreak stages).

Our results are similar to those of other modeling studies examining routine PCR screening for SARS-CoV2 [14]. For example, testing every two days was deemed necessary to limit cases to controllable levels on college campuses [37]. Other models also emphasize the importance of biweekly testing [11,14], with the predicted reductions in transmission related to the assumed value of R, the other control measures employed [11], and delays in testing results [14]. Our results also mirror others in that testing frequency was found to be more important than test sensitivity [14,37]. Notably, in [14], the relative number of cases under each testing scenario was not strongly influenced by the use of an agent-based versus a homogeneous mixing modeling framework, which supports the use of a homogeneous mixing model for our smaller-scale workplace model.

In conclusion, we summarized the SARS-CoV2 outbreaks experienced in three pork processing plants during the spring of 2020. Through calibrating a mathematical model to the epidemiological data from these plants, we demonstrated that R was generally between 2 and 4 in the early stages of these outbreaks, but rapidly declined within two to three weeks. This slowing in transmission is clearly evident in the epidemic curves of each plant, and is most likely a consequence of implementation of and adherence to the biosafety measures employed at each plant, potentially in combination with population immunity. We found substantial heterogeneity across plants in undetected rates and in the relative infectiousness of such "carriers," but this variation is more likely an artifact of awareness, access to tests, and differences in reporting than of differences in biology. Finally, routine PCR-screening was shown to reduce the number of clinical cases in the workforce, but the absolute number of cases averted depended on assumptions about disease transmissibility and the stage of the outbreak. Even frequent testing may not prevent a large outbreak within the workforce. While the model was calibrated to meat processing plants, the structure of the model and the insights about testing are generalizable to any workplace where large numbers of people work in close proximity.
The Continuity of Images by Transmission Imaging Revisited

Transmission imaging, an important imaging technique widely used in astronomy, medical diagnosis, and biological science, has been shown in [49] to be quite different from the reflection imaging used in our everyday life. Understanding the structures of images (the prior information) is important for designing, testing, and choosing image processing methods, and good image processing methods are helpful for further uses of the image data, e.g., increasing the accuracy of object reconstruction methods in transmission imaging applications. In reflection imaging, the images are usually modeled as discontinuous functions, and even piecewise constant functions. In transmission imaging, it was shown very recently in [49] that almost all images are continuous functions. However, the author in [49] considered only the case of parallel beam geometry and relied on assumptions that are too strong, excluding some common cases such as cylindrical objects. In this paper, we consider more general beam geometries and simplify the assumptions by using totally different techniques. In particular, we will prove that almost all images in transmission imaging with both parallel and divergent beam geometries (the two most typical beam geometries) are continuous functions, under much weaker assumptions than those in [49], which admit almost all practical cases. Besides, taking our analysis into account, we compare two image processing methods for the removal of Poisson noise (which is the most significant noise in transmission imaging). Numerical experiments will be provided to demonstrate our analysis.

Introduction.

Imaging is an important technique which translates a physical scene to lower-dimensional (typically 2D) data for convenient observation and record. It has been applied to many fields, including our everyday life, medical diagnosis, exploration of the universe, and biological structure analysis. Many imaging systems and instruments, such as various digital cameras, X-ray computed tomography (CT), telescopes, and microscopes, have been developed. Different imaging systems are based on different physical principles. Digital cameras used in our everyday life record the reflected part of the incoming light [27], whereas transmission electron microscopes generate images by counting the electrons that have been transmitted through the scene [21,15,28]. We refer to these two kinds of imaging techniques as reflection imaging and transmission imaging in this paper for clarity. See Fig. 1. We will consider in this paper the most common case, in which objects are in R³ and images are 2D data. Reflection images are meaningful by themselves, whereas transmission images are not; the final objective of transmission imaging is to reconstruct the 3D object (density function) from many 2D images. Images usually contain various degradations, such as noise, owing to imperfections in the imaging procedure and to network transmission. For instance, images by reflection imaging often contain some Gaussian noise and blur effects, while images by transmission imaging are often contaminated by Poisson noise. As mentioned above, for transmission imaging we have to reconstruct the objects from the 2D images, and the noise in the 2D projection images will affect the accuracy of the reconstruction methods.
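Since Poisson noise plays a central role in what follows, here is a minimal sketch of how it is typically simulated on a projection image. The scaling constant controls the photon count per detector bin, echoing the MATLAB imnoise-with-a-factor recipe used in the experiments of the numerical section; the function name and factor value here are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_poisson_noise(projection, factor=100.0):
    """Poisson-corrupt a nonnegative projection; larger factor = more photons = less noise."""
    counts = rng.poisson(np.clip(projection, 0.0, None) * factor)  # expected photon counts
    return counts.astype(float) / factor

clean = np.full((64, 64), 0.5)       # a flat stand-in projection image
noisy = add_poisson_noise(clean, factor=100.0)
print(np.abs(noisy - clean).mean())  # mean deviation on the order of 0.05
```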
The strategies for noise reduction in transmission imaging can be divided into three groups: pre-reconstruction denoising, post-reconstruction denoising, and regularized iterative reconstruction methods with many forward and backward projection steps. Many pre-reconstruction denoising methods have been developed which operate on the raw projection data before image reconstruction [36,2,45,23,40]. Post-reconstruction processing includes methods for improving the image quality without affecting spatial resolution; however, the artifacts in the reconstructed images are then recognized as structures in the scanned object and will be enhanced. Regularized iterative reconstruction methods have demonstrated superior performance in undersampled tomographic imaging, showing tremendous power in image reconstruction with only a few projections [3,41,42,52]. However, these methods are seldom used for commercial purposes. For example, traditional filtered back-projection is still mainly used for image reconstruction by commercial CT scanners for several reasons, including speed and image quality [30]. Many new iterative algorithms for CT reconstruction introduced by major CT manufacturers still use pre-reconstruction processing, combined with post-reconstruction processing with only one step of backward projection, e.g., iterative reconstruction in image space (IRIS) and sinogram affirmed iterative reconstruction (SAFIRE) by Siemens Medical Solutions, adaptive iterative dose reduction (AIDR 3D) by Toshiba Medical Systems, and iDose by Philips Healthcare [48]. In all these algorithms, the pre-reconstruction processing is very important for noise reduction, and understanding the properties of transmission imaging is helpful for noise reduction in projection data (a toy sketch of this pre-reconstruction strategy appears after this passage).

Image processing methods often assume some prior knowledge about the image data. This prior knowledge describes the features of the images without any degradation, e.g., how to model the clean images. As is well known, images by reflection imaging can usually be modeled as discontinuous functions, and even as piecewise constant functions in most cases. See [49] for an analysis from physical principles for the case of parallel light imaging (see Fig. 1(a)), which can also be applied to the case of divergent light imaging (see Fig. 1(b)) and other reflection imaging cases. Consequently, most images by reflection imaging have sparse gradients. This is a very important property, based on which many image processing and segmentation techniques, models, and algorithms have been proposed in the literature, such as the popular total variation (TV) regularization [35].

In the following, we focus on transmission imaging in order to understand its properties. There are typically two types of wave beam geometries in transmission imaging, due to the application backgrounds and the capabilities of wave generators. See Fig. 1(c)(d). Transmission imaging with parallel beam geometry, such as the cryo-EM technique [15], is widely applied in the biological and medical sciences to detect molecular structures. Transmission imaging with divergent beam geometry is extensively applied in medical diagnosis, such as the X-ray CT technique [21,28]. For transmission imaging with parallel beam geometry, it has been shown in [49] that almost all images can be modeled as continuous functions. Let us explain this in more detail.
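As a toy illustration of the pre-reconstruction strategy just described (denoise the projection data, then reconstruct), the following sketch uses scikit-image's parallel-beam radon/iradon pair with a plain Gaussian filter standing in for the projection-domain denoisers cited above. The API details assume a reasonably recent scikit-image (where the iradon keyword is filter_name), and all parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)            # 200x200 test object
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=theta)                   # parallel-beam projections

rng = np.random.default_rng(0)
noisy = rng.poisson(np.clip(sinogram, 0.0, None) * 50.0) / 50.0  # Poisson noise

denoised = gaussian_filter(noisy, sigma=1.0)           # pre-reconstruction denoising

recon_from_noisy = iradon(noisy, theta=theta, filter_name="ramp")
recon_from_denoised = iradon(denoised, theta=theta, filter_name="ramp")

for name, recon in [("noisy", recon_from_noisy), ("denoised", recon_from_denoised)]:
    print(name, np.linalg.norm(recon - image))  # denoising first typically lowers the error
```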
In transmission imaging with parallel beam geometry, people take images (also called projections in the literature) from many different projection directions in order to reconstruct the density functions of the imaged objects. Each projection direction corresponds to one image. See Fig. 1(c). For a fixed projection direction, the source radiates a parallel wave beam, and the wave beam transmits through the 3D objects (such as a biological specimen), with a portion of the wave arriving at the image plane. The information recorded in the image plane is then used to infer the structure of the scene. The interaction between the wave and the objects in the scene depends on a certain density function of the objects. The image plane records the line integrals of the density function along the lines of the wave beam. In our case of R³, the projection directions can be regarded as points on the two-dimensional unit sphere S². It has been proved that for almost every projection direction, the generated image is a continuous function, even if the density functions of the imaged objects are discontinuous (discontinuous density functions are very common). The set of projection directions generating discontinuous images has measure zero on the unit sphere.

In transmission imaging with divergent beam geometry, as shown in Fig. 1(d), people take images from many points in R³. Each point in R³ outside of the support of the 3D objects (such as the human body to be scanned) corresponds to an image. For a fixed point, the source radiates a divergent wave beam, and the wave beam transmits through the 3D objects, with a portion of the wave arriving at the image plane. Again, the information recorded in the image plane is then used to infer the structure of the scene. The interaction between the wave and the objects in the scene depends on a certain kind of density function of the objects. The image plane records the line integrals of the density function along the lines of the wave beam. In total, we can take as many images as there are points in R³ outside of the support of the 3D objects. One of the purposes of this paper is to show that almost all images are continuous functions, even if the density functions of the imaged objects are discontinuous. The set of points generating discontinuous images has measure zero in R³ (a toy numerical illustration of this continuity appears at the end of this passage).

An essential mathematical tool for describing transmission imaging is the Radon transform [33]. As far as we know, theoretical results on the Radon transform in the literature focus on the analysis of the imaging procedure as a mapping operator [19,21,28], e.g., the invertibility of the operator. In addition, most analyses assume that the density function of the object to be imaged is a continuous or even Schwartz function over the whole Euclidean space [19,28,33]. Our analysis in this paper is quite different from those in the literature in two ways. Firstly, we consider the features of the images (2D projections) instead of the imaging procedure. Although the central topic in transmission imaging is the reconstruction of the 3D objects from their 2D projection images, restoration of these image data (before 3D reconstruction) is also important for improving the accuracy of the reconstruction methods, due to the involvement of noise and other degradations during the imaging procedure. In addition, it is easier to model the noise in projection images before reconstruction.
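As a brief aside, the continuity claim above can be checked numerically on the simplest discontinuous density, the indicator function of a ball: the density jumps across the sphere, yet its parallel-beam projection is the chord-length function 2·sqrt(R² − d²), which is continuous. The following toy computation of ours illustrates this; it is an example, not the general proof developed below.

```python
import numpy as np

def ball_density(x, y, z, radius=1.0):
    """Indicator density of a solid ball: discontinuous across its boundary."""
    return ((x**2 + y**2 + z**2) <= radius**2).astype(float)

def parallel_projection(x, y, n=4001):
    """Line integral of the density along the z-direction through (x, y, 0)."""
    z = np.linspace(-2.0, 2.0, n)
    return np.trapz(ball_density(x, y, z), z)

# Sample the projection across the silhouette boundary at distance 1 from the axis:
for d in (0.9, 0.99, 0.999, 1.0, 1.001):
    print(f"offset {d}: projection = {parallel_projection(d, 0.0):.4f}")
# The values decay continuously to 0 (analytically 2*sqrt(1 - d**2) for d <= 1),
# even though the density itself jumps from 1 to 0 across the sphere.
```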
A typical problem is how to remove the Poisson noise (which is the dominant noise in transmission imaging); see [3,11,24,16,31,38,51,55], etc. Studying the image features helps to choose and design better image processing models and algorithms, as mentioned before. Secondly, our analysis assumes discontinuous density functions of the objects, which are very common in practical applications.

In particular, we contribute the following in this paper. Firstly, we prove the continuity property of images by transmission imaging with parallel beam geometry under much weaker assumptions than those in [49]. The analysis in [49] excludes the case of cylindrical 3D objects; our analysis here includes almost all types of 3D objects. Secondly, we prove a similar continuity property of images by transmission imaging with divergent beam geometry, thus completing this kind of analysis for the two typical beam geometries in transmission imaging. Thirdly, we compare two currently popular image regularization techniques for images by transmission imaging, verifying our analysis and providing some information for choosing wavelet frame methods instead of the TV method for processing this kind of image.

The paper is organized as follows. In section 2, we present our main result, i.e., the continuity analysis of images by transmission imaging. Section 3 provides some numerical results to verify our theoretical analysis. Section 4 concludes the paper.

Notations and the main theorems.

Let p be a point in R³ and W a vector in R³. We denote by L(p,W) the half line in the direction W starting from p; more precisely, L(p,W) = {p + tW : t ≥ 0}. These straight lines will be the light beams we are going to study. Let Σ be a bounded smooth surface in R³ with Σ ⊂ D, where D is a bounded convex domain in R³. Let B be the boundary of D; in this paper we also assume B to be a smooth surface. We are going to study three types of beams in this paper:
1. the divergent beams with source located on B;
2. the divergent beams with source located in R³;
3. the parallel beams with directions on the unit sphere S².
When we use these beams to scan the surface Σ, it is possible to have "singular sources". In fact, if for a source point q there exists a direction W such that the intersection of the half line L(q,W) and the surface Σ includes an interval on L(q,W), then we call the point q a singular source. More precisely, we give the following definition.

Definition 2.1. Let q be a point in R³ and W a direction in R³. If there is a point p in Σ such that p ∈ L(q,W) and p + tW ∈ Σ for t ∈ (0, ε), for some positive ε, then q is a singular source, W is a singular direction, L(q,W) is a singular beam, and p is a singular point of the beam L(q,W). We also say that the beam L(q,W) is singular at the point p.

In particular, we specify three singular sets according to the three types of beams.
1. All the singular sources on B: Z = {q ∈ B : ∃W ∈ R³ such that L(q,W) is a singular beam}. This is the singular sources set for the divergent beams with source located on B.
2. All the singular sources in R³: Z̃ = {q ∈ R³ : ∃W ∈ R³ such that L(q,W) is a singular beam}. This is the singular sources set for the divergent beams with source located in R³. In particular, Z = Z̃ ∩ B.
3. All the singular directions: D = {W ∈ S² : ∃q ∈ R³ such that L(q,W) is a singular beam}. This is the singular directions set for the parallel beams with directions on S².
These singular sources sets and the singular directions set are determined by the surface Σ. We define X to be the set of all the singular points on Σ; precisely speaking, X = {p ∈ Σ : there exist a direction W and ε > 0 such that p + tW ∈ Σ for all t ∈ (0, ε)}. One may notice that a singular point is also a singular source. The structure of X is very important in the study of Z, Z̃, and D.

Lemma 2.2. For any point q ∈ Z̃, there are p ∈ X and W ∈ T_pΣ such that L(p,W) is singular at the point p and q ∈ L(p,W). And for any unit direction W ∈ D, there is a point p ∈ X such that L(p,W) is singular at the point p.

Proof. The proof is straightforward. For the first part, if q ∈ Z̃ and L(q,V) is singular at a point p₁ ∈ Σ, then there exists ε > 0 such that p₁ + tV ∈ Σ for any t ∈ (0, ε). Let p = p₁ + (ε/2)V and W = −V; then L(p,W) is singular at the point p and q ∈ L(p,W). For the second part, if W ∈ D, then there is a p ∈ Σ such that L(p,W) is singular at p. Therefore p ∈ X.

We note that the singular points set X can be open in Σ. For example, if Σ is a ruled surface, then every point on Σ is a singular point, i.e., X = Σ. But the singular sources sets Z, Z̃ and the singular directions set D are more restrictive. More precisely, the main theorem of this paper (Theorem 2.9 below) states that, for each type of beam, the corresponding singular set has measure zero in its ambient set. The proofs rely on Sard's theorem. Recall that a point is a critical point of a smooth map between manifolds if the tangent map at that point is not surjective; in a local coordinate chart {U, (u, v)} of Σ, where p(u, v) is the point of coordinates (u, v), a tangent vector in T_pΣ takes the form a ∂/∂u + b ∂/∂v. The images of the critical points are called critical values. In the next subsection, we are going to construct several smooth maps such that the singular sources and singular directions are critical values of the corresponding maps.

Construction of smooth maps for Sard's theorem.

In this subsection, we characterize the singular points set X using the second fundamental form. Then we construct several smooth maps of which the singular sources and singular directions are critical values. We start with a simple observation.

Lemma 2.5. For any point p ∈ X, there is at least one vector W ∈ T_pΣ such that the directional curvature of Σ at the point p in the direction W is zero.

Proof. For any singular point p ∈ X, there exist ε > 0 and a direction W such that p + tW ∈ Σ for t ∈ (0, ε). Then the directional curvature of Σ at p in the direction W is the same as the curvature of the straight line, which is zero.

The second fundamental form II is a symmetric quadratic form on the surface Σ. If (u, v) is a local coordinate of Σ, then we can express the second fundamental form as II = L du² + 2M du dv + N dv², or, equivalently, in the matrix form II = (L M; M N). Also, for each point p ∈ Σ and direction W ∈ T_pΣ, the directional curvature of Σ at p in the direction W is II(W, W)/⟨W, W⟩. In particular, the two real eigenvalues k₁ and k₂ of II are the principal curvatures, K = k₁k₂ is the Gauss curvature, and H = k₁ + k₂ is the mean curvature. The Gauss curvature and the mean curvature are smooth functions on the surface Σ.

According to the second fundamental form, we divide the surface Σ into Σ = Σ₊ ∪ Σ₀ ∪ Σ₁ ∪ Σ₋, where on Σ₊ the directional curvature is always positive, on Σ₀ both principal curvatures vanish, on Σ₁ exactly one principal curvature vanishes, and on Σ₋ the principal curvatures have opposite signs. Lemma 2.5 implies that the singular points set has no intersection with Σ₊, on each point of which the directional curvature is always positive. So we can divide X into the union of two parts, X = (X ∩ Σ₀) ∪ (X ∩ (Σ₁ ∪ Σ₋)). Now we are ready to define two different types of smooth maps according to these two parts of X.

Smooth maps for zero principal curvatures.

Let TΣ be the tangent bundle of Σ. We recall that TΣ is a 4-dimensional manifold. We define the following maps: g : TΣ → B, sending (p, W) with W ≠ 0 to the intersection point L(p,W) ∩ B; g̃ : TΣ → R³, with g̃(p, W) = p + W; and g_D : TΣ → S², with g_D(p, W) = W/|W| for W ≠ 0. Then we have

Proposition 2.6. The maps g, g̃, and g_D are smooth maps.
For any p ∈ Σ₀ ∩ X and W ∈ T_pΣ, the point (p, W) ∈ TΣ is a critical point of all three maps g, g̃, and g_D.

Proof. We fix one point p₀ ∈ Σ₀ ∩ X. For the computation of the tangent map, let {U, (u, v)} be a local coordinate chart around p₀, and let (u, v, a, b) give a local coordinate system of TU via (u, v, a, b) ↦ (p(u, v), a p_u + b p_v), in which p(u, v) is a local parametrization of Σ with p₀ = p(0, 0), p_u = ∂p/∂u, p_v = ∂p/∂v, and a p_u + b p_v is a vector in T_pΣ.

We first look at g : TΣ → B. According to the local coordinates on TU, we have g(u, v, a, b) = p(u, v) + r (a p_u + b p_v) for a function r. In order to prove the smoothness of g, we only need to show that r is a smooth function. Actually, the smooth surface B is locally defined as the zero set of a smooth function F(x, y, z), so the function r is the solution of the equation F(p(u, v) + r (a p_u + b p_v)) = 0. From our assumption that the domain D is convex, we know that the vector a p_u + b p_v is not tangent to B at the point g(u, v, a, b); therefore the partial derivative ∂F/∂r ≠ 0. Then, using the implicit function theorem, we conclude that the solution r(u, v, a, b) is a smooth function on TΣ.

We are now ready to compute the tangent map of g at (u, v, a, b). Let n_p be the normal vector of Σ at the point p(u, v), and write L = ⟨n_p, p_uu⟩, M = ⟨n_p, p_uv⟩, N = ⟨n_p, p_vv⟩. Using that the second fundamental form vanishes at p₀ ∈ Σ₀, we get
⟨n_p, g_u⟩|_{p₀} = r (a⟨n_p, p_uu⟩ + b⟨n_p, p_uv⟩)|_{p₀} = r (aL + bM)|_{p₀} = 0;
⟨n_p, g_v⟩|_{p₀} = r (a⟨n_p, p_vu⟩ + b⟨n_p, p_vv⟩)|_{p₀} = r (aM + bN)|_{p₀} = 0;
⟨n_p, g_a⟩|_{p₀} = ⟨n_p, g_b⟩|_{p₀} = 0.
This implies that the image of the tangent map g_* at the point (p₀, W) is perpendicular to n_{p₀}. We claim that n_{p₀} is not parallel to the normal vector n_{g(p₀,W)} of B at the point g(p₀, W). Then there exists a vector V in T_{g(p₀,W)}B such that V is not perpendicular to n_{p₀}; therefore the tangent map g_* at the point (p₀, W) is not onto. We prove the claim by contradiction: suppose n_{p₀} is parallel to n_{g(p₀,W)}; then the half line L(p₀,W) would be tangent to the surface B at the point g(p₀, W), but this implies that L(p₀,W) stays outside of the convex domain D, which contradicts the fact that p₀ ∈ Σ ⊂ D. This finishes the proof that the points (p, W) with p ∈ Σ₀ ∩ X are critical points of the map g.

Secondly, we consider the map g̃ : TΣ → R³. Using the above local coordinate system of TΣ, we have g̃(u, v, a, b) = p(u, v) + a p_u + b p_v, and the normal components of the tangent maps are
⟨n_p, g̃_u⟩|_{p₀} = (aL + bM)|_{p₀} = 0;
⟨n_p, g̃_v⟩|_{p₀} = (aM + bN)|_{p₀} = 0;
⟨n_p, g̃_a⟩|_{p₀} = ⟨n_p, g̃_b⟩|_{p₀} = 0.
Then again we get that the image of the tangent map g̃_* is perpendicular to n_{p₀}, which directly implies that the tangent map g̃_* into TR³ is not onto.

At last, let us look at the map g_D : TΣ → S². Using the above local coordinate system of TΣ, we have g_D(u, v, a, b) = (a p_u + b p_v)/|a p_u + b p_v|. Computing the tangent map, the vanishing second fundamental form at p₀ again implies that the image of (g_D)_* is perpendicular to n_p. And it is clear this time that n_p is not parallel to W; therefore there is a vector V ∈ T_{W/|W|}S² such that ⟨V, n_p⟩ ≠ 0. This implies that the tangent map (g_D)_* at the point (p, W) is not onto.

Smooth maps for different principal curvatures.

Let us consider the part of Σ with different principal curvatures; precisely speaking, Σₙ = {p ∈ Σ : k₁(p) ≠ k₂(p)}. In particular Σ₋ ∪ Σ₁ ⊂ Σₙ. Because the Gauss curvature K and the mean curvature H are both smooth functions on Σ, Σₙ is an open subset of Σ. Therefore we have a countable open cover Σₙ = ∪_{i∈N} U_i such that on each U_i there are two unit vector fields V₁ and V₂ corresponding to k₁ and k₂, i.e., the principal directions with II(V_j, V_j) = k_j for j = 1, 2, together with four associated vector fields W_αβ (α, β ∈ {0, 1}), built from V₁ and V₂, whose directions at each singular point comprise the directions of vanishing directional curvature, and with W₀₀ = −W₁₁ and W₀₁ = −W₁₀. Now we are ready to construct the following maps:
f_αβ : U_i → B, f_αβ(p) = L(p, W_αβ) ∩ B;
f̃_αβ : U_i × R → R³, f̃_αβ(p, t) = p + t W_αβ(p);
f_D_αβ : U_i → S², f_D_αβ(p) = W_αβ(p)/|W_αβ(p)|.
We note that these maps are not globally defined on Σₙ, since the principal curvatures k₁ and k₂ are not globally defined functions on Σ and the choice of V₁ and V₂ is not unique either. Topologically speaking, the collection of the pairs (p, W_αβ) gives a 4-sheeted covering space Σ̃ₙ of the manifold Σₙ.
Then the 4 locally defined maps f_αβ can be realized as one globally defined map on Σ̃ₙ. For the reader's convenience, we decided to avoid using too much algebraic topology and chose the local definition above. For these maps we have the following proposition.

Proposition 2.8. If p ∈ X ∩ U_i, then there exist α and β in {0, 1} such that
1. the point f_αβ(p) is a critical value of f_αβ;
2. the point f̃_αβ(p, t) is a critical value of f̃_αβ;
3. the point f_D_αβ(p) is a critical value of f_D_αβ.

Proof. Because p is a singular point, there exist a direction W ∈ T_pΣ and ε > 0 such that p + sW ∈ Σ for s ∈ (0, ε). Therefore the directional curvature of Σ at the point p in the direction W is zero, and then there exist α, β ∈ {0, 1} such that W_αβ/|W_αβ| = W/|W|. Without loss of generality, we assume that W₀₀/|W₀₀| = W/|W|. On the other hand, because Σ₀ ∩ Σₙ = ∅ and Σ₊ ∩ X = ∅, the singular point p lies in X ∩ (Σ₁ ∪ Σ₋). We divide the problem into two cases.

Case 1. If p ∈ Σ₋, then W₀₀ = −W₁₁ is not parallel to W₁₀ and W₀₁, which implies that W_αβ/|W_αβ| ≠ W/|W| in a neighbourhood of the point p for (α, β) ≠ (0, 0). But at any point p + sW with s ∈ (0, ε), the directional curvature in the direction W is still zero, i.e., II_{p+sW}(W, W) = 0. Therefore W₀₀/|W₀₀| = W/|W| at the point p′ = p + sW for sufficiently small s > 0. Then we get that f₀₀(p + sW) = f₀₀(p) for sufficiently small s > 0, so that (∂/∂s) f₀₀(p + sW)|_{s=0} = 0, which means that the tangent map (f₀₀)_* at p has a nontrivial kernel. But dim Σ = dim B = 2, so (f₀₀)_* is not onto at the point p, which means that p is a critical point of f₀₀ and f₀₀(p) is a critical value. Similarly, we have f_D₀₀(p + sW) = f_D₀₀(p) for sufficiently small s. Again we get that (f_D₀₀)_* kills the direction W; therefore p is a critical point of f_D₀₀ and f_D₀₀(p) is a critical value of f_D₀₀.

For the map f̃₀₀, we have f̃₀₀(p + sW, t) = p + sW + tW₀₀ = p + sW + tλ(s)W, where λ(s) is a positive function of s. The derivatives are (∂/∂s) f̃₀₀(p + sW, t) = (1 + tλ′(s))W and (∂/∂t) f̃₀₀(p + sW, t) = λ(s)W. Therefore, at the point (p, t) (where s = 0), the tangent map sends both ∂/∂s and ∂/∂t to multiples of W; in particular, the nonzero vector λ(0) ∂/∂s − (1 + tλ′(0)) ∂/∂t lies in its kernel, so (f̃₀₀)_* at the point (p, t) is not injective. Again, the fact that dim(Σ × R) = dim R³ = 3 implies that (p, t) is a critical point of f̃₀₀, and f̃₀₀(p, t) is a critical value for any t ∈ R.

Case 2. If p ∈ Σ₁, then p + sW ∈ Σ, and p + sW is also a singular point for each s ∈ (0, ε). If there is an s ∈ (0, ε) such that p′ = p + sW ∈ Σ₋, then we are back in the previous case. If p + sW ∈ Σ₁ for all s ∈ (0, ε), then we can assume that W₀₀ = W₁₀ are in the same direction as W, and we are back to the same computation as in the previous case. The proof is completed.

Proof of the main theorems.

In this subsection we give the proof of the main theorem.

Theorem 2.9. For each type of beam, the singular sources set or singular directions set has measure zero as a subset of the corresponding ambient set. More precisely,
1. Z has measure zero in B;
2. Z̃ has measure zero in R³;
3. D has measure zero in S².

Proof. For each point q ∈ Z, there are a point p ∈ X and a direction W ∈ T_pΣ such that L(p,W) is singular at p and q = L(p,W) ∩ B.
1. If p ∈ Σ₀ ∩ X, then Proposition 2.6 implies that (p, W) is a critical point of g. Therefore q is a critical value of g.
2. If p ∈ (Σ₁ ∪ Σ₋) ∩ X, then Proposition 2.8 implies that there are an i ∈ N and α, β ∈ {0, 1} such that p ∈ U_i and q is a critical value of f_αβ : U_i → B.
Then we have that Z is covered by the union of the critical values of the countably many maps g : TΣ → B and f_αβ : U_i → B for each i ∈ N and α, β ∈ {0, 1}. Then, using Sard's theorem, we get that Z is covered by a countable union of zero measure sets, which is itself a zero measure set.

For each point q ∈ Z̃, there are a point p ∈ X and a vector W ∈ T_pΣ such that L(p,W) is singular at p and q = p + W.
1. If p ∈ Σ₀ ∩ X, then Proposition 2.6 implies that (p, W) is a critical point of g̃. Therefore q is a critical value of g̃.
2. If p ∈ (Σ₁ ∪ Σ₋) ∩ X, then Proposition 2.8 implies that there are an i ∈ N and α, β ∈ {0, 1} such that p ∈ U_i and q is a critical value of f̃_αβ : U_i × R → R³.
Then we have that Z̃ is covered by the union of the critical values of the countably many maps g̃ : TΣ → R³ and f̃_αβ : U_i × R → R³ for each i ∈ N and α, β ∈ {0, 1}. Then, using Sard's theorem, we get that Z̃ is covered by a countable union of zero measure sets, which is itself a zero measure set.

For each point W ∈ D, there is a point p ∈ X such that L(p,W) is singular at p.
1. If p ∈ Σ₀ ∩ X, then Proposition 2.6 implies that (p, W) is a critical point of g_D. Therefore W is a critical value of g_D.
2. If p ∈ (Σ₁ ∪ Σ₋) ∩ X, then Proposition 2.8 implies that there are an i ∈ N and α, β ∈ {0, 1} such that p ∈ U_i and W is a critical value of f_D_αβ : U_i → S².
Then we have that D is covered by the union of the critical values of the countably many maps g_D : TΣ → S² and f_D_αβ : U_i → S² for each i ∈ N and α, β ∈ {0, 1}. Then, using Sard's theorem, we get that D is covered by a countable union of zero measure sets, which is itself a zero measure set.

Continuity of the image.

In this subsection we give the mathematical definition of the image functions and prove that the image functions are almost surely continuous. Let Ω be an open domain in R³, let Σ be the boundary of Ω, and let ρ : Ω ∪ Σ → R₊ be a continuous density function on the closure Ω̄ = Ω ∪ Σ. We further assume that for each straight line L in R³, the intersection L ∩ Σ is a countable union of points and intervals on L.

Definition 2.10. For a parallel beam with direction W, let H be the plane perpendicular to W which passes through the origin. Then the image function I_W : H → R is defined as I_W(q) = ∫_R Φ̄(q + tW) dt, where Φ̄ denotes ρ extended by zero outside Ω̄.

For a divergent beam, we define two different image functions: one is the spherical image function, and the other is the image function on a flat image plane.

Definition 2.11. For a divergent beam starting from a source point q, the spherical image function SI_q : S² → R is defined as SI_q(W) = ∫₀^∞ Φ̄(q + tW) dt for each W ∈ S². If there is a flat image plane H which intersects all the light beams from q through Ω, with q ∉ H, then the image function HI_q : H → R is defined as HI_q(p) = SI_q((p − q)/|p − q|).

The main theorem of this subsection is

Theorem 2.12. For almost every point q in R³ and almost every direction W ∈ S², the image functions SI_q, HI_q, and I_W are continuous functions.

In fact, we only need to show that if a point q ∈ R³ is not a singular source, then the image functions SI_q and HI_q of the divergent beam from q are continuous. Similarly, we will also show that if a direction W ∈ S² is not a singular direction, then the image function I_W of the parallel beam with direction W is continuous. The proof of Theorem 2.12 is based on the following lemma in real analysis.

Lemma 2.13. If f(x, y) : A × A′ → R is a compactly supported (lower, upper) continuous function, and f(x, y) is bounded, then the integral F(y) = ∫_A f(x, y) dx is a (lower, upper) continuous function from A′ to R.
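The source does not preserve a proof of Lemma 2.13. One standard route, sketched here in LaTeX for the lower continuous case under the lemma's assumptions (the upper case follows by applying the argument to −f), goes through Fatou's lemma:

```latex
% Sketch for the lower continuous case of Lemma 2.13, with F(y) = \int_A f(x,y)\,dx.
Let $y_n \to y$ in $A'$. Lower semicontinuity of $f$ gives
\[
  \liminf_{n\to\infty} f(x, y_n) \ \ge\ f(x, y) \qquad \text{for every } x \in A .
\]
Since $f$ is bounded and compactly supported, there are a compact set
$K \subset A$ with $f(x, \cdot) = 0$ for $x \notin K$ and a constant $C$ with
$f + C \ge 0$; applying Fatou's lemma to $f + C$ on $K$ and subtracting the
finite constant $C\,|K|$ yields
\[
  \liminf_{n\to\infty} F(y_n)
  \ \ge\ \int_A \liminf_{n\to\infty} f(x, y_n)\,dx
  \ \ge\ \int_A f(x, y)\,dx \ =\ F(y),
\]
so $F$ is lower continuous at $y$.
```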
[Proof of Theorem 2.12] Firstly, we define the following functions:
• Φ : R³ → R is the function which equals ρ on Ω and 0 elsewhere;
• Φ̄ : R³ → R is the function which equals ρ on Ω̄ and 0 elsewhere;
• Φ_Σ : R³ → R is the function which equals ρ on Σ and 0 elsewhere.
Clearly they satisfy Φ̄ = Φ + Φ_Σ. Because Ω is an open domain, we have that
• Φ is a compactly supported, bounded, and lower continuous function;
• Φ̄ is a compactly supported, bounded, and upper continuous function.

On the other hand, for any direction W ∈ S², there is a coordinate system (x, y, z) for R³ such that W = (0, 0, 1) and the perpendicular plane H = W⊥ is the x-O-y plane. Therefore the image function is
Ī_W(x, y) = ∫_R Φ̄(x, y, z) dz,   (2.6)
and the corresponding functions are I_W(x, y) = ∫_R Φ(x, y, z) dz and I_W^Σ(x, y) = ∫_R Φ_Σ(x, y, z) dz, so that Ī_W = I_W + I_W^Σ. Then Lemma 2.13 implies that I_W(x, y) is a lower continuous function and Ī_W(x, y) is an upper continuous function. If W is not a singular direction, then I_W^Σ(x, y) ≡ 0, so that I_W(x, y) = Ī_W(x, y) is both lower and upper continuous. In this case, the image function I_W(x, y) is a continuous function.

For any point q ∈ R³, let (r, θ, φ) be the polar coordinate system centered at the point q. Then the image function is
S̄I_q(θ, φ) = ∫₀^∞ Φ̄(r, θ, φ) dr,   (2.9)
and the corresponding functions are SI_q(θ, φ) = ∫₀^∞ Φ(r, θ, φ) dr and SI_q^Σ(θ, φ) = ∫₀^∞ Φ_Σ(r, θ, φ) dr, where Φ, Φ̄, and Φ_Σ are expressed in the polar coordinates centered at q. Then Lemma 2.13 implies that SI_q(θ, φ) is a lower continuous function and S̄I_q(θ, φ) is an upper continuous function. If q is not a singular source, then SI_q^Σ(θ, φ) ≡ 0, so that SI_q(θ, φ) = S̄I_q(θ, φ) is both lower and upper continuous. In this case, the image function SI_q(θ, φ) is a continuous function.

For the image function HI_q : H → R, consider the map T : H → S² with T(p) = L(o, p − q) ∩ S² for any p ∈ H, where o is the center of S². Because q ∉ H, the transformation T : H → S² is continuous and one-to-one. Therefore the image function HI_q = SI_q ∘ T is also continuous, as a composition of two continuous maps. At last, Theorem 2.9 shows that the singular directions and singular sources are zero measure sets in S² and R³, respectively. Therefore, the image functions are almost surely continuous.

Numerical Experiments.

In this section we compare several image denoising methods for Poisson noise removal to verify our analysis above. Currently there are many successful image denoising methods, such as the total variation (TV) model [35,3,11,24,16,31,38,51,55], wavelet frame thresholding and regularization methods [13,5,20,6], anisotropic diffusion [32,47], and the nonlocal means method [4]. We choose to compare only the TV model and the wavelet frame regularization method to demonstrate our analysis in the previous section, since these two methods use ℓ₁ minimization of different regularizers, and the regularizers reveal the smoothness of the underlying data. Compared to the TV model, which favors only piecewise constant images, wavelet frame regularization is more flexible and can model smooth images very well. As shown in [6], different wavelet frame based approaches can be used to approximate the TV method with different orders of differential operators, which also justifies that wavelet frame based approaches can be applied more successfully in this smooth-image situation. Actually, in the one-dimensional case, the TV model is equivalent to the simplest frame based model, i.e., the Haar framelet model, which approximates the first order differential operators [43].

We now compare these two methods for images generated by transmission imaging. In practice, an image function is recorded by sensing elements on the imaging plane. The sensing elements sample the image function.
However, the samples can approximate the image function very well as long as the resolution is high enough. Without loss of generality, we assume that the resolution is M × N, so that an image is an M × N real-valued 2-dimensional array. Suppose u = {u_{i,j}}, 1 ≤ i ≤ M, 1 ≤ j ≤ N, is an image. Its discrete gradient is given by ∇u = (D_x⁺u, D_y⁺u), where D_x⁺ and D_y⁺ are forward difference operators with periodic boundary conditions (u is periodically extended); see, e.g., [50]. Consequently, the fast Fourier transform can be adopted in our algorithm. The TV model for Poisson noise removal is as follows:
min_u α‖∇u‖₁ + ⟨u − f log u, 1⟩,   (3.1)
where f is the observed noisy image and α is a positive parameter.

To present the wavelet frame regularization model, we first introduce the discrete framelet transform. A discrete framelet transform applies some discrete convolutions to a signal. The convolution kernels are filters, including one low pass filter and some high pass filters. The low pass filter is the refinement mask of a refinable function, while the high pass filters are determined by the masks of framelets represented by the refinable function. Starting from a refinable function, one may construct an MRA-based tight frame system using the unitary extension principle (UEP) [34]. Also, a usual way to construct multivariate framelets is to use tensor products of univariate framelets. For a detailed and comprehensive introduction to MRA-based wavelet frame theory and the fast wavelet frame transform, please refer to [12]. In the experiments we use three simple but very useful univariate framelet systems: the Haar framelet, the piecewise linear B-spline framelet, and the piecewise cubic B-spline framelet. The corresponding wavelet frame regularized model for Poisson noise removal is
min_u ‖λ · Wu‖₁ + ⟨u − f log u, 1⟩,   (3.2)
where Wu is the wavelet frame transform of u and λ is a positive vector.

We now provide some numerical experiments to verify the analysis in the previous section. The experiments were performed under Ubuntu and Matlab R2011b (version 7.13.0) on a workstation with an Intel Xeon (6-core) E5645 2.40GHz and 50Gb of memory. We used ‖u^k − u^{k−1}‖₂ / ‖u^k‖₂ ≤ 5 × 10⁻⁵ as the stopping criterion. The decomposition levels of all three wavelet transforms were set to 1. The model parameters α and λ were carefully tuned for all images to achieve the optimal performance. In each example, the density functions were reconstructed from the different projection data using the same reconstruction method. In the experiments, the results from TV regularization, the Haar wavelet system, the piecewise linear B-spline wavelet system, and the piecewise cubic B-spline wavelet system were compared in terms of signal-to-noise ratio (SNR) values and Frobenius norms. The SNR value is defined as
SNR := 10 log₁₀ ( ‖u₀ − ū₀‖² / ‖u − u₀‖² ),
where u₀ is the original (clean) signal, ū₀ is the mean value of u₀, and u is the noisy or restored signal. The errors are computed as the Frobenius norms of the differences between the reconstructed density functions and the ground truth density function.

Example 3.1. This is an example for the 2D Shepp-Logan phantom density function of size 256 × 256 in Matlab. The projection and reconstruction are done by the Matlab built-in functions "fanbeam" and "ifanbeam". There are 360 projections with 509 detector values for each projection. For each projection, we add Poisson noise by using Matlab "imnoise" with a scaling of the projection image by a factor of 10⁻¹², i.e., "imnoise(f * factor, 'Poisson')/factor", where f is a clean projection image. These projections corrupted with Poisson noise are then processed by the TV and framelet based denoising models (3.1) and (3.2). We mention that here we did not compute the results of the Haar framelet based model, because the Haar framelet based model with level 1 is equivalent to the TV model in the 1D case, as mentioned before.
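To give a concrete, hedged sense of the comparison carried out in these examples, the sketch below denoises a Poisson-noisy stand-in projection with an off-the-shelf TV denoiser and a simple soft-thresholded wavelet transform, after an Anscombe variance-stabilizing transform. It substitutes scikit-image's Chambolle TV solver and a PyWavelets biorthogonal wavelet for the paper's models (3.1)-(3.2) and B-spline framelet systems, so the numbers are only indicative; the thresholds, weights, and test image are our own choices.

```python
import numpy as np
import pywt
from skimage.restoration import denoise_tv_chambolle

def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)        # variance-stabilizing transform

def inv_anscombe(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0          # simple algebraic inverse

def wavelet_denoise(img, wavelet="bior2.2", level=1, thresh=0.5):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    shrunk = [coeffs[0]] + [
        tuple(pywt.threshold(d, thresh, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(shrunk, wavelet)

rng = np.random.default_rng(0)
clean = 100.0 * np.outer(np.hanning(128), np.hanning(128))  # smooth stand-in projection
noisy = rng.poisson(clean).astype(float)

stabilized = anscombe(noisy)
tv_result = inv_anscombe(denoise_tv_chambolle(stabilized, weight=0.8))
wavelet_result = inv_anscombe(wavelet_denoise(stabilized))

def snr(u):  # the SNR definition used in this section
    return 10.0 * np.log10(np.sum((clean - clean.mean()) ** 2) / np.sum((u - clean) ** 2))

print(f"noisy: {snr(noisy):.1f} dB, TV: {snr(tv_result):.1f} dB, "
      f"wavelet: {snr(wavelet_result):.1f} dB")
```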
Fig. 2 shows the denoised results of two projections. It is clear that the wavelet frame systems with piecewise linear B-spline and piecewise cubic B-spline framelets generate better results, with higher SNR values, than the TV model. Also, the frame system with piecewise cubic B-spline framelets yields better results, with higher SNR values, than the frame system with piecewise linear B-spline framelets. The clean projections, the noisy projections, and the various denoised projections are then used to reconstruct the 2D phantom; see Fig. 3. It can be seen that better density functions, with smaller errors, have been reconstructed from the projections denoised by the frame based model. The frame based denoising model greatly outperforms the TV model for this kind of image, and its denoised projection data generate better density function reconstructions. The smoother the framelet is, the better the results are.

Example 3.2. In this example we show a numerical experiment for the 3D Shepp-Logan phantom data of size 128 × 128 × 128. The projection and reconstruction are done by the 3D cone beam CT projection/backprojection FDK Matlab code [22]. We obtain 84 projections, each of size 600 × 500. For each projection, we add Poisson noise using the Matlab built-in function "imnoise" with a scaling factor of 10⁻⁶. These projections are then processed by the TV and frame based denoising models (3.1) and (3.2). The noisy projections and the various denoised versions are used to reconstruct the density function of the 3D phantom. The reconstruction is done by the back-projection method with the "hamming" filter. Fig. 4 shows one randomly chosen projection denoised by the TV model, the Haar wavelet system, the piecewise linear B-spline system, and the piecewise cubic B-spline system. The piecewise linear and piecewise cubic B-spline systems both return smoother denoised results, with higher SNR values, than the Haar system and the TV model. Fig. 5 shows the 64th slice of the reconstructed density function. The SNR values of the denoised projections and the reconstruction errors demonstrate the advantage of the frame based model, especially the model with smoother framelets.

Example 3.3. The data used for this experiment are the 3D medical data of a human head, of size 401 × 401 × 401, from cone beam CT. In total there are 84 projections of size 800 × 700. Poisson noise is added to each projection by using the Matlab "imnoise" function with a scaling factor of 5 × 10⁻⁶. These projections are then processed by the TV and frame based denoising models (3.1) and (3.2). The noisy projections and the various denoised versions are used to reconstruct the 3D density function. The reconstruction is done by the back-projection method with the "hamming" filter. Fig. 6 shows the 30th projection denoised by the TV model, the Haar wavelet system, the piecewise linear B-spline system, and the piecewise cubic B-spline system, respectively. Both the piecewise linear and piecewise cubic B-spline systems return better results than the Haar wavelet system and the TV model. Fig. 7 shows the 120th slice of the reconstructed volume along the z-direction, and Fig. 8 gives the isosurface view of the 3D density function with function value 0.09. The SNR values of the noisy and denoised versions of some selected projections for all three experiments are summarized in the accompanying table.

Conclusion.
Transmission imaging is widely applied in astronomy and the biomedical sciences for macro- and micro-scale objects; its physical mechanisms and mathematical models are quite different from those of the reflection imaging frequently used in our everyday life for common-scale objects. In this paper, we improved the existing continuity analysis of images generated by transmission imaging in two respects. First, we considered both parallel and divergent beam geometries, i.e., two basic and important geometries in transmission imaging, while the existing analysis applies only to the parallel beam geometry. Second, we proved the continuity property of images generated by transmission imaging under much weaker conditions, which admit almost all cases arising in applications, while the previous analysis excludes some common cases such as cylindrical objects. Our analysis shows that images by transmission imaging with both parallel and divergent beam geometries are almost surely continuous functions, even if the density functions of the imaged objects are discontinuous (discontinuous density functions are very common). This is quite different from reflection imaging, where images are usually modeled as discontinuous functions.

Although the central topic in transmission imaging is the reconstruction of the density functions of objects, processing of the image (projection) data before reconstruction is also sometimes important, owing to the fact that these data usually involve degradations such as Poisson noise. Many commercial methods process the transmission images before performing the reconstruction, and understanding the structures of transmission images is important because better denoised transmission images give better reconstruction results. Our theoretical analysis may help us to understand the structures of images generated by transmission imaging and provide some information for choosing, designing, and testing image processing techniques for transmission imaging. Taking our analysis into account, we compared two popular image denoising methods for Poisson noise removal. Numerical experiments were provided to verify our theoretical analysis.

Fig. 4 (caption). The comparison between the TV and wavelet frame based models for denoising a projection of the 3D phantom. The first row shows the ground truth projection image, the noisy image, and the image denoised by the TV model. The second row shows the projection images denoised by the Haar wavelet system, the piecewise linear B-spline system, and the piecewise cubic B-spline system, respectively.

Fig. 5 (caption). The comparison of the 64th slice along the z-direction of the reconstructed 3D phantom object. The first row shows the ground truth and the objects reconstructed from noisy projections and from projections denoised by the TV model, respectively. The second row shows the objects reconstructed from the projections denoised by the Haar wavelet system, the piecewise linear B-spline system, and the piecewise cubic B-spline system, respectively.

Fig. 6 (caption). The comparison between the TV and wavelet frame based models for denoising a projection of the 3D medical data. The first row shows the ground truth, the noisy image, and the denoised result of the TV model. The second row shows the denoised results of the Haar wavelet system, the piecewise linear B-spline system, and the piecewise cubic B-spline system, respectively.

Table 2 (caption). The comparison of the Frobenius norm of the difference between the reconstructed object and the ground truth object.

Fig. 7 (caption). The comparison of the 120th slice of the reconstructed 3D medical density function. The first row shows slices of the ground truth and the density functions from noisy projection images and from projections denoised by the TV model, respectively. The second row shows the density functions from images denoised by the Haar wavelet system, the piecewise linear B-spline, and piecewise cubic B-spline systems, respectively.

Fig. 8 (caption). The comparison of isosurfaces (density function value = 0.09) of the reconstructed 3D medical density function. The first row shows the isosurfaces of the ground truth and of the density functions from noisy projections and from projections denoised by the TV model, respectively. The second row shows the isosurfaces of the density functions from images denoised by the Haar wavelet system, the piecewise linear B-spline system, and the piecewise cubic B-spline system, respectively.
Trait Emotional Intelligence and School Burnout Discriminate Between High and Low Alexithymic Profiles: A Study With Female Adolescents

Alexithymic traits, which entail finding it difficult to recognize and describe one's own emotions, are linked with poor trait emotional intelligence (TEI) and with difficulties in identifying and managing stressors. There is evidence that alexithymia may have detrimental consequences for wellbeing and health, beginning in adolescence. In this cross-sectional study, we investigated the prevalence and incidence of alexithymia in teenage girls, testing the statistical power of TEI and student burnout to discriminate between high- and low-alexithymic subjects. A sample of 884 female high school students (mean age 16.2 years, age range 14-19) attending three Italian academic-track high schools (social sciences and humanities curriculum) completed self-report measures of alexithymia, school burnout, and TEI. Main descriptive statistics and correlational analysis preceded the discriminant analysis. The mean alexithymia scores suggest a high prevalence of alexithymia in female adolescents; as expected, this trait was negatively correlated with TEI and positively associated with school burnout. Participants with high vs. low alexithymia profiles were discriminated by a combination of TEI and burnout scores. High scores on the emotionality and self-control dimensions of TEI were strongly associated with membership of the low-alexithymia group; high scores on the emotional exhaustion dimension of school burnout were indicative of membership of the high-alexithymia group. These findings suggest crucial focuses for educational intervention: efforts to reduce the risk of emotional exhaustion and school burnout should concentrate especially on enhancing emotional awareness and self-control skills, both strongly associated with low levels of alexithymia.

INTRODUCTION

Adolescence is widely known to be a sensitive period of exposure to risk factors for wellbeing and mental health (Patton et al., 2014; Erskine et al., 2015). Recent research also reports that older adolescents may suffer a decline in mental health outcomes, with girls showing poorer mental health than boys (Inchley et al., 2020). This evidence has highlighted the need to identify potential risk factors that can undermine adolescents' mental health in the school setting (Eccles and Roeser, 2011; Cavioni et al., 2020). Among these factors, alexithymia has been poorly investigated. Alexithymia is a psychological construct that denotes the state of finding it difficult, or being unable, to recognize and describe one's own emotions. The term alexithymia (derived from the Greek a = lack, lexis = word, and thymos = mood) was introduced by Sifneos (1973) to denote a cognitive-affective disorder that influences how individuals regulate their emotions. The construct is multifaceted and encompasses multiple manifestations, including difficulty in identifying emotions and distinguishing them from bodily sensations, difficulty in describing and verbalizing emotions, poor imagination, an externally oriented thinking (EOT) style, and reduced empathy (Taylor, 1987). Alexithymia has been explained by some as dysfunctional emotional regulation (e.g., Taylor, 2000); however, it is more appropriate to view it as a disruption of the prerequisites for managing emotion.
Those who have difficulty in identifying and naming their emotional states are consequently less able to find effective strategies for managing them (Ciucci et al., 2016; Sfärlea et al., 2019). Indeed, the literature has frequently reported an association between alexithymia and poor trait emotional intelligence (TEI). Although these two personality dimensions may be viewed as independent of one another, they overlap considerably and are strongly inversely related (Fukunishi et al., 2001; Parker et al., 2001); also, both are key factors in stress management. The literature clearly documents the negative impact of alexithymic traits, especially in females and during adolescence (Levant et al., 2009; Popa-Velea et al., 2017; Sfärlea et al., 2019), on the ability to promptly identify and therefore manage stressors. This adverse knock-on effect can become chronic in syndromes such as burnout (Lapa et al., 2017). In light of this background, we set out to explore the relations between alexithymia, risk of student burnout, and TEI in a group of female adolescents.

Alexithymia in Adolescence

To date, the literature on alexithymia has focused on young adult populations or clinical groups, implying that there is still a lack of research on alexithymia in adolescent populations, with regard both to the incidence of the phenomenon and to its associations with other salient psychological variables. Table 1 summarizes results from existing studies that investigated the prevalence of alexithymia in samples of adolescents, mainly girls. We selected these works because they reported the levels of alexithymia found in other countries when researchers administered the quantitative measure adopted in our study to broadly similar populations. Although the scores are not directly comparable (given that full information about the participants is lacking), they illustrate the overall pattern of scores in both general and specific populations. Although research with community adolescent populations is still relatively limited, findings obtained with clinical samples suggest that alexithymia may have the same adverse effects on wellbeing and health in adolescence as in adulthood (Parker et al., 2010). More specifically, alexithymia has been associated with behavioral problems (Zimmermann, 2006), dissociative tendencies (Sayar et al., 2005), eating disorders (Merino et al., 2002), and depression (Honkalampi et al., 2009), as well as with gambling and Internet addiction (Parker et al., 2005; Scimeca et al., 2014). Hence, expanding our knowledge of alexithymia and its implications and correlates in a susceptible population, such as adolescent girls, should inform more effective prevention of maladaptive behavior and/or mental health problems.

Indeed, although the literature defines alexithymia as a relatively stable disposition (Mikolajczak and Luminet, 2006; van der Velde et al., 2013) that tends to be associated with marked difficulty in emotion regulation, adolescence is a particularly sensitive period for the management of emotions and is characterized by a general increase in regulation issues. A considerable body of evidence suggests that adolescents experience negative emotions more frequently and intensely than do children and adults, and that they use significantly more maladaptive regulation strategies than adaptive ones, a phenomenon that Zimmermann and Iwanski (2014) have labeled a "maladaptive shift."
" This temporary increase in the deployment of maladaptive strategies is linked partly to neuroendocrine maturation processes and partly to the transition from hetero-regulation (enacted by caregivers) to emotional self-regulation strategies. Significant gender differences in this phenomenon have also been identified, including a stronger decrease in recourse to adaptive strategies in girls, who, for this reason, may be particularly sensitive and vulnerable to mental distress in adolescence (Cracco et al., 2017). A study conducted by Sfärlea et al. (2019) with two clinical groups of girls (diagnosed with anorexia nervosa and depression, respectively) and a healthy control group showed that alexithymia strongly predicted the use of maladaptive regulation strategies. The authors suggested, drawing on a study by Venta et al. (2013), that the link between alexithymia and reliance on maladaptive emotional regulation strategies may be mediated by experiential avoidance, or the reluctance to tolerate adverse private experiences. In other words, the tendency to avoid potentially negative experiences may inhibit access to personal information that would be useful for dealing with more challenging situations and communicating effectively. Finally, two recent studies on adolescents in different cultures seem to give a picture of alexithymic traits across gender. Ng and Chan (2020) studied the prevalence of alexithymia in 1606 Chinese adolescents: The prevalence in the whole sample was 36.6%, but the percentage among girls (40%) is significantly higher than those in boys (34.3%). A study with Italian adolescents highlighted a higher difficulty in identifying feelings in girls than in boys: authors, in line with the previous literature (i.e., Pascual et al., 2012), suggested that girls -compared to boys -experiment a more complex emotional experience, which is therefore more difficult to identify. For this reason, girls seem to be more prone to misinterpreting their own emotions and confound feelings with associated bodily sensations (Trentini et al., 2021). The propensity to make greater or lesser use of effective emotion regulation strategies is also linked to another personality trait: emotional intelligence. This construct may be defined as "a constellation of emotional self-perceptions and dispositions located at the lower levels of personality hierarchies" (Petrides et al., 2007, p. 26) and is conventionally assessed via self-report questionnaires. It essentially has to do with individuals' perceptions of how they manage their emotions and of how these emotion-coping strategies impact on their social relationships. Petrides (2009) theorized the existence of four different sub-dimensions of TEI: emotionality, self-control, wellbeing, and sociability. High levels of emotionality imply the ability to perceive, express, and connect with one's own emotions and those of others. Self-control is useful for managing emotions and stress, as well as for controlling impulses. Wellness involves having positive feelings over time in relation to one's past achievements, self-esteem, and expectations for the future. Finally, sociability refers to the ability to be socially assertive and aware and to effectively manage emotions while communicating and participating in social situations. All of these factors foster satisfying interpersonal relationships and consequently good social adaptation. As mentioned above, alexithymia and TEI are inversely related, but do not completely overlap. Coffey et al. 
(2003) suggested that these two constructs share two underlying dimensions, namely, attention to personal emotions and emotional facets of situations (as opposed to externally oriented and concrete thinking) and the ability to clearly comprehend and describe one's own emotional states. Not surprisingly, therefore, TEI and alexithymia are both personal features that are associated - albeit in different ways - with coping with stressful events and the risk of developing psychosomatic or depressive symptoms (Mavroveli et al., 2007; Fiorilli et al., 2020). Indeed, in line with the stress-alexithymia hypothesis proposed by Martin and Pihl (1985), alexithymic traits can negatively affect the ability to identify and cope with stressors. In turn, this difficulty can prolong exposure to stressful situations and favor the emergence of burnout symptoms (Lapa et al., 2017; Romano et al., 2019). School burnout can be defined as a psychological syndrome caused by an imbalance between spending and regaining energy in school work and by the perception of a lack of resources for dealing effectively with study demands (Schaufeli and Bakker, 2004). School burnout can be further divided into three sub-dimensions: exhaustion due to school needs, a cynical and detached attitude toward school, and feelings of inadequacy as a student (Salmela-Aro et al., 2009). Previous research found significant gender differences with respect to burnout risk in adolescence: Girls report higher levels of stress with respect to fulfilling school requirements; they experience internalizing symptoms, such as feelings of inadequacy and emotional exhaustion, to a greater extent than boys, and are more vulnerable to the negative effects of stress (Salmela-Aro et al., 2009). As a result, girls are more at risk of burnout (Salmela-Aro and Tynkkynen, 2012). Few studies in the literature have investigated the relations among alexithymia, TEI, and burnout, especially among high school students. Furthermore, to the best of our knowledge, no studies have investigated these three variables simultaneously. The literature indicates a general association between alexithymia and high levels of burnout (Bratis et al., 2009), but this link appears to be even stronger in academic settings, where the levels of stress perceived by students can be very high (Heinen et al., 2017). Most past studies that examined this relationship were conducted with medical students (or health practitioners): Research by Katsifaraki and Tucker (2013) showed that alexithymia (and in particular EOT) predicted burnout in student nurses. Popa-Velea et al. (2017) observed that, in medical students, alexithymia, together with stress and perceived social support, predicted different components of burnout. Similarly, a recent study with university students by Romano et al. (2019) found alexithymia to be directly associated with level of burnout and inversely associated with academic performance, a pattern of relationships that was also mediated by anxiety and resilience. Students with alexithymic traits may be more prone to incorrectly assessing or failing to identify emotionally stressful elements of problematic situations arising in the academic setting, which may heighten and protract their perceptions of tension and difficulty, ultimately leading them to become emotionally exhausted (Hamaideh, 2018). This phenomenon may be observed from high school onward and is attracting growing research interest (Fiorilli et al., 2020).
At the same time, studies on TEI suggest that this factor is associated with lower anxiety and stronger resilience in the face of stressful situations (see Armstrong et al., 2011; Liu and Ren, 2018), as well as - in educational settings - a healthy level of school adjustment in terms of pro-sociality, friendship, and cooperation (Nikooyeh et al., 2017). A recent study with a sample of high school students by Fiorilli et al. (2020) found that individuals with high TEI were less likely to experience school anxiety and more likely to exhibit resilience, which, in turn, reduced their risk of experiencing student burnout. However, to our knowledge, no studies have investigated the network of relations between alexithymia, emotional intelligence, and burnout in adolescent students. The Present Study The aim of this cross-sectional quantitative study was to contribute to the broader line of inquiry into risk factors for wellbeing and mental health in the pre-adult population via the production of two main research outputs. First, we observed the distribution of alexithymia in terms of its prevalence and incidence in a relatively large sample of female adolescents. Second, given emerging evidence that alexithymia may have detrimental consequences (i.e., represent a risk factor) for wellbeing and health in adolescence and adulthood, we tested for associations between alexithymia, TEI, and student burnout, while controlling for age. We expected that alexithymia would be negatively associated with TEI and positively correlated with student burnout. To the best of our knowledge, few quantitative studies have investigated alexithymia along with other socio-emotional variables in large cohorts of teenage students. Finally, we expected that a linear combination of measures of affectivity and sociality (namely, TEI and burnout) would have the statistical power to discriminate between adolescents with high alexithymia scores (HA) and low alexithymia scores (LA). The "high" and "low" categories are here defined relative to the actual distribution of scores in our sample, as expressed in absolute terms. More specifically, we expected that low scores on emotional intelligence would be associated with membership of the HA group, while higher emotional intelligence scores would be more characteristic of an LA profile. With regard to the relationship between alexithymia and student burnout, we did not formulate any directional hypothesis given that, to date, few studies have jointly examined these variables in adolescent populations. Hence, by focusing on alexithymia in adolescence, this study addresses a key gap in the current literature, as well as potentially informing targeted intervention programs for fostering knowledge and emotional literacy at a crucial stage of psychological development. Participants and Procedure The sample comprised 884 community-recruited female students attending high school. Participants' mean age was 16.2 years (SD = 1.52, min-max = 14-19) and they were distributed across years as follows: first year: 23.5% (N = 208); second year: 22.5% (N = 200); third year: 17.0% (N = 150); fourth year: 21% (N = 194); and fifth year: 15.2% (N = 134). The students were recruited at three high schools (all academic-track schools offering a social sciences and humanities curriculum) located in medium-SES urban areas of northern Italy. Written parental consent was obtained for the underage participants.
The research was conducted following the ethical principles and code of conduct of the American Psychiatric Association (2013). The students were free to withdraw from the study at any time, and no monetary or other financial rewards were provided to participants. Students' School Burnout The School Burnout Inventory evaluates burnout in 8-12th grade students [Salmela-Aro et al., 2009; Italian validation by Fiorilli et al. (2014)]. It comprises nine items, which the student is asked to rate on a 6-point Likert scale (from 1 = completely disagree to 6 = strongly agree). The inventory assesses students' school-related burnout across three different dimensions: exhaustion at school (four items; range 4-24; e.g., "I feel overwhelmed by my schoolwork"), cynicism about the meaning of school (three items; range 3-18; e.g., "I feel that I am losing interest in my schoolwork"), and sense of inadequacy at school (two items; range 2-12; e.g., "I often have feelings of inadequacy in my schoolwork"). Participants completed the Italian version of the questionnaire, which has also been confirmed to have a three-factor structure (Fiorilli et al., 2014). Each student was assigned a total score as well as a sub-score for each of the three dimensions (Cronbach's α > 0.80). Alexithymia The Toronto Alexithymia Scale [TAS-20, Bagby et al., 1994; Italian validation by Bressi et al. (1996)] consists of 20 items assessing three dimensions: difficulty identifying feelings (DIF; sample item: "I am often confused about what emotion I am feeling"), difficulty describing feelings (sample item: "It is difficult for me to reveal my innermost feelings even to close friends"), and EOT (sample item: "I prefer to analyze problems rather than just describe them"). Students are asked to express their level of agreement with each item via a 5-point Likert scale (from 1 = strongly disagree to 5 = strongly agree). Both the original validation study (Bagby et al., 1994) and the validation study of the Italian version (Bressi et al., 1996) bore out the three-dimensional structure of the TAS-20, but recorded better reliability for the global measure. For this reason, in the present study - as in several others in the literature (e.g., La Ferlita et al., 2007; Caretti et al., 2018) - TAS-20 was used as a global unidimensional measure of alexithymia (Cronbach's α = 0.79). Trait Emotional Intelligence The Trait Emotional Intelligence Questionnaire (TEIQue)-Short Form for adolescents [TEIQue-ASF, Petrides, 2009; Italian adaptation by Andrei et al. (2014)] is composed of 30 items to be rated on a 7-point Likert scale ranging from 1 (completely disagree) to 7 (completely agree). The scale comprises four factors: wellbeing (e.g., "I feel that I have a number of good qualities"), self-control (e.g., "I usually find it difficult to regulate my emotions"), emotionality (e.g., "Expressing my emotions with words is not a problem for me"), and sociability (e.g., "I'm usually able to influence the way other people feel"). The four factors may be combined to create a composite (global) emotional intelligence score. In this study, Cronbach's α > 0.75 for the global scale and for each of the sub-scales. Data Analysis Strategy: Multivariate Discriminant Analysis The classification and evaluation of observed cases are among the primary aims of scientific research (Huberty and Olejnik, 2006).
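All three instruments above are scored by summing Likert ratings, with internal consistency summarized by Cronbach's α. As a minimal sketch of how such an α value is computed (the ratings below are simulated for illustration and are not the study's data; the 20-item, 5-point format simply mirrors the TAS-20):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert ratings."""
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each single item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative only: 884 simulated respondents rating 20 items on a 1-5 scale,
# where each rating mixes a shared latent trait with item-specific noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(884, 1))
noise = rng.normal(size=(884, 20))
ratings = np.clip(np.round(3 + 0.5 * latent + noise), 1, 5)
print(f"alpha = {cronbach_alpha(ratings):.2f}")  # roughly 0.8 with this setup
```

Applied to real responses, the same function would reproduce the α values quoted above from the raw item-by-respondent matrices.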
Traditionally, two statistical techniques have been used to empirically classify observations into sub-groups: cluster analysis (CA) and discriminant analysis (DA; Bailey, 1994). CA is applied with a view to identifying naturally occurring (Edwards and Cavalli-Sforza, 1965) groups, starting from unclassified raw observation data. DA, on the other hand, is based on data that originally lent itself to being classified into mutually exclusive groups (Veronese and Pepe, 2017); it entails using MANOVA (multivariate analysis of variance) tests to estimate a linear quantitative equation with the power to divide a set of observations into previously defined groups. In DA, the equation g(x) = 0 indicates the linear combination of variables that separates the cases assigned to category ω1 from those allocated to ω2. In addition, the equation g(x) may be adopted to generate a new "hypothetical" group membership function [g(x) > 0 and g(x) < 0] that would use empirical observations, rather than a priori classification, to evaluate membership of a group. In other words, given a set of response variables and a single dichotomous grouping variable, DA evaluates how well group membership corresponds to the measured observations. In the context of the present study, the dichotomous variable was low/high alexithymia scores in a group of adolescents, where "low" and "high" were calculated according to the 20-80th percentile rule. The global TAS-20 alexithymia score (xT) is the sum of a participant's ratings of all 20 items forming the scale. The cut-off points initially applied during the diagnostic process were xT ≤ 51 non-alexithymia, 52 ≤ xT ≤ 60 possible alexithymia, and xT ≥ 61 alexithymia. However, as pointed out by Kooiman et al. (2002, p. 1089), the original TAS-20 thresholds were adopted "without reference to empirical research." Consequently, the data observed in the present study were grouped according to the conventional epidemiological standard of the 20-80th percentiles as a method of identifying high-risk and low-risk subjects with respect to a specific indicator (Diouf et al., 2015; Kardashian et al., 2020). In a two-group scenario of this kind, Fisher's linear discriminant function (Fisher, 1936) was used to transform a multivariate observation, x, into a univariate score, y, such that the y's computed for each of the populations to be classified would be separated to the greatest extent possible (Li et al., 2006). The statistical power of the discriminant function is ultimately based on two indicators: Wilks' lambda (λ) and the canonical discriminant coefficients. The λ value indicates the amount of total variance that is not accounted for by the difference between groups. The canonical discriminant coefficients express the magnitude and direction of the association between scores and their corresponding groups (McLachlan, 2004). In order to control for a potential source of covariation, the variable age was included in the discriminant function in our own analysis. The procedure followed also allowed us to test for the Yule-Simpson effect (i.e., the statistical scenario whereby an association holds for the full sample but not for each of the assessed cohorts; Simpson, 1951). Finally, we also calculated Press's Q statistic (Hair et al., 1995) to evaluate the classification matrix's discriminant power compared to a chance model (Hahs-Vaughn, 2016).
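Before turning to Press's Q, the two-group procedure just described can be sketched compactly in code. The block below runs a linear discriminant analysis on synthetic stand-ins for the predictors (four TEI facets, emotional exhaustion, and age); the group sizes match those reported later, but the means, spreads, and resulting coefficients are invented for illustration, not estimates from the study's data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

# Synthetic stand-ins for six predictors: 4 TEI facets, exhaustion, age.
# Group means are shifted so LA scores higher on TEI and HA on exhaustion.
n_la, n_ha = 176, 199
X_la = rng.normal(loc=[5.0, 4.8, 4.9, 4.7, 2.0, 16.5], scale=1.0, size=(n_la, 6))
X_ha = rng.normal(loc=[3.8, 3.9, 4.1, 4.0, 3.2, 16.0], scale=1.0, size=(n_ha, 6))
X = np.vstack([X_la, X_ha])
y = np.array([0] * n_la + [1] * n_ha)   # 0 = low alexithymia, 1 = high

lda = LinearDiscriminantAnalysis().fit(X, y)

# Re-classify the observations with the derived function, as in the study.
pred = lda.predict(X)
for g, name in [(0, "LA"), (1, "HA")]:
    print(f"{name} correctly re-classified: {(pred[y == g] == g).mean():.1%}")
print("discriminant coefficients:", lda.coef_.round(2))
```

The signs and relative sizes of `lda.coef_` play the role of the canonical discriminant coefficients discussed above, and the re-classification accuracies feed directly into Press's Q.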
Press's Q was obtained using the formula Q = [N − (n × k)]² / [N(k − 1)], where N is total sample size, n is the total number of correctly classified observations, and k is the number of groups. Before initiating the analysis, standard data exploration procedures were followed. Missing value analysis identified less than 1% of missing values in the dataset, and subjects who were missing data were deleted listwise (Chan and Dunn, 1972). Mahalanobis distances were computed (with the p value threshold set at 0.001) and no multivariate outliers were identified. The main descriptive statistics are presented together with the zero-order correlations among scores. Table 2 summarizes the main descriptive statistics along with the zero-order correlational patterns identified among the variables under study. Descriptive Statistics and Zero-Order Correlations: Incidence and Prevalence of Alexithymia in Adolescent Students With regard to the first main outcome of the study, alexithymia scores (see Figure 1) monotonically decreased as a function of participants' ages, with the function appearing to flatten out from the age of 18 onward. Interestingly, 95% confidence intervals remained quite stable across all age categories. The application of the 20-80th percentile range prompted adoption of the following thresholds: xT ≤ 47 non-alexithymia, 48 ≤ xT ≤ 64 possible alexithymia, and xT ≥ 65 alexithymia. The number of adolescents below the lower boundary (very low alexithymia scores, LA = 19.7%) was 176, whereas the number of participants above the upper boundary (high alexithymia scores, HA = 22.5%) was 199. Zero-order correlational analysis revealed that alexithymia scores were negatively associated with both trait emotional intelligence (specifically, this was a large association; r = −0.635, p < 0.001) and age (in this case, a small association; r = −0.144, p < 0.001). Finally, student burnout was positively associated with alexithymia scores (r = 0.301, p < 0.001). Analysis of the standardized discriminant coefficients showed that all dimensions of TEI played a key role in the discriminant function, as opposed to only one of the three dimensions of student burnout, namely, emotional exhaustion. With regard to TEI, the dimension that contributed most strongly to discriminating between LA and HA groups was emotionality: High scores on the emotionality measure were associated (β1 = 0.635) with very low scores for alexithymia. A second group of variables that were also positively associated with membership of the LA group comprised the remaining TEI dimensions of self-control, wellbeing, and sociability (with coefficients ranging from β2 = 0.304 to β4 = 0.218). In contrast, the analysis suggested a more limited association between alexithymia and student burnout. Specifically, among the three dimensions of student burnout, only levels of emotional exhaustion seemed to be associated with membership of the HA group. Finally, the variable age contributed significantly to the equation, in that being an older student was associated with belonging to the group with lower scores on the alexithymia measure. Discriminant Analysis To evaluate the accuracy of the discriminant equation in classifying LA and HA, the full set of observations collected in this study was re-classified using the derived function. The predictive accuracy of the model was 88.1 percent for the LA group and 85.4 percent for the HA group.
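As a consistency check on the formula above, plugging in the reported group sizes and re-classification accuracies reproduces the Q value quoted next. One assumption on our part: N is taken as the 375 classified LA and HA cases, since the paper does not state this explicitly.

```python
# Press's Q: Q = (N - n*k)**2 / (N * (k - 1))  (Hair et al., 1995), where
# N = classified observations, n = correctly classified ones, k = groups.
n_la, n_ha = 176, 199          # group sizes reported above
acc_la, acc_ha = 0.881, 0.854  # re-classification accuracies reported above

N = n_la + n_ha                       # 375 classified cases (our assumption)
n = acc_la * n_la + acc_ha * n_ha     # about 325 correct classifications
k = 2                                 # two groups, LA vs. HA

Q = (N - n * k) ** 2 / (N * (k - 1))
print(f"Press's Q = {Q:.1f}")  # ~201.7, matching the reported 201.56
                               # up to rounding of the accuracies
```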
The resulting Press's Q value was 201.56 (p < 0.001), meaning that the linear function was confirmed as discriminating above chance. Figure 2 summarizes the mean differences between high and low alexithymia groups in scores on the TEI and burnout dimensions included in the linear function.
FIGURE 1 | Mean scores for alexithymia: Adolescents' alexithymia scores monotonically decreased as a function of age. Ninety-five percent confidence intervals are reported. The function appears to flatten out from the age of 18 years.
DISCUSSION The aim of this cross-sectional study was to investigate the set of relations between alexithymia, TEI, and school burnout in a large sample of female adolescents. We obtained three main findings. First, the alexithymia scores in our sample were slightly higher than those reported in the previous studies with similar populations; second, alexithymia was found to be significantly associated with both TEI and student burnout; and finally, combinations of scores on trait emotional intelligence and burnout measures discriminated between participants with high alexithymia vs. low alexithymia. We now discuss each of these findings in detail. Prevalence and Incidence of Alexithymia Mean alexithymia scores in our sample of students (m = 55.33 ± 10.04) were similar to those reported for previously studied clinical groups: for example, a clinical sample of Italian patients (Bressi et al., 1996) or groups of female students diagnosed with anorexia nervosa or major depression (Sfärlea et al., 2019). In terms of cut-off scores for the scale, only 32.9% of the participants scored less than 51, meaning that alexithymia may be categorically excluded in only one-third of the sample (the non-alexithymia group). All other students fell within the categories of "possible alexithymia" (37.7%) and "alexithymia" (29.2%), suggesting an extremely high prevalence of alexithymia in female adolescents. This result should be treated with caution, due to two distinct but related methodological issues concerning the use of the alexithymia scale with a population of adolescents and interpretation of the scores thus obtained. First, as reported by Meganck et al. (2009), quantitative measures evaluating the construct of alexithymia are at risk of measuring other psychological constructs or internal mental states. If alexithymia is conceptualized as the inability to recognize and verbalize emotions (Sifneos, 1973), then unconscious procedural knowledge and cognitive processes might be inaccessible to conscious self-report (Nisbett and Wilson, 1977). Also, given that the ability to verbalize affect is a key proxy for alexithymia, it is worth asking whether high alexithymia scores may not reflect a general tendency to use a more restricted emotional lexicon (i.e., poor emotional literacy) rather than a true state of being alexithymic. For instance, research exploring the association between emotion language and alexithymia has suggested that psychosomatic patients display a more limited emotional lexicon and in general poorer resources for verbally expressing their internal states and feelings. The second methodological issue concerns the use of the original suggested cut-off scores. Indeed, the cutoffs for the TAS-20 were established with cohorts of adult participants and have not been explored with samples of adolescents or children (Loas et al., 2017).
This means that appropriate cut-off scores reflecting the specific characteristics of younger (adolescent or child) populations need to be defined if we are to go on studying the prevalence of alexithymia in these groups. While taking into account these possible methodological limitations, the incidence and prevalence profile of alexithymia in female adolescents that emerged here is in line with recent data collected using the TAS-20 with non-clinical subjects of the same age groups in both China and Italy. Girls in Ng and Chan's (2020) study obtained a mean alexithymia score of 58.35 (± 10.17) and only 24.4% of them obtained scores under the cutoff of 51. In the study by Trentini et al. (2021), girls' mean score was 52.76 (± 10.51): Even if we do not know the prevalence rates by gender for this sample, the average score is in the "moderate" range, thereby suggesting a prevalence similar to that reported in the present study. In light of this, although adolescents become increasingly competent in identifying and discriminating between feelings in themselves and others as they acquire more complex cognitive resources, future research should consider the possible existence of gender-specific developmental trajectories. Associations Between Alexithymia, Trait Emotional Intelligence, and School Burnout The second aim of the present study was to explore the associations between alexithymia, TEI, and school burnout. As expected, alexithymia was negatively correlated with TEI as measured via emotionality, self-control, wellbeing, and sociability. This result is in line with previous studies that reported significant inverse correlations between the two theoretical constructs. The inability to recognize and describe one's own emotions, which is typical of alexithymic subjects, is associated with low levels of emotion regulation, difficulty in identifying and managing potential stressors, a generalized sense of psychological frailty, and difficulty in managing emotions in communication and social interactions (see, for example, Coffey et al., 2003). Furthermore, in line with our next research hypothesis, alexithymia scores were positively associated with school burnout. During adolescence, especially in the case of girls, alexithymic traits can often be related to difficulties in identifying and dealing with stressful events (Levant et al., 2009; Popa-Velea et al., 2017; Sfärlea et al., 2019). The tendency to avoid negative situations while relying on maladaptive regulation strategies can come to the fore in potentially stressful situations: As subjects struggle to manage difficulties, their ongoing sense of distress is heightened and they become even more susceptible to burnout. This chain of relationships appears to be even stronger in academic settings, where students are constantly exposed to considerable levels of stress (Fiorilli et al., 2017; Heinen et al., 2017). It is also particularly evident during adolescence, a sensitive period for the management of emotions that is characterized by a higher risk of reliance on maladaptive regulation strategies and consequent mental distress (Zimmermann and Iwanski, 2014). We also found alexithymia to be negatively associated with age. This finding is in line with previous studies, which suggested that alexithymia tends to become less prevalent with age. van der Cruijsen et al. (2019) found that early adolescents obtained higher scores on measures of alexithymic traits than did late adolescents. Membership of Low- vs.
High-Alexithymia Groups: How Do Trait Emotional Intelligence and Burnout Discriminate? We found that a combination of TEI and burnout scores served to discriminate participants with high levels of alexithymia from those with low levels. More specifically, high scores for the emotionality and self-control dimensions of TEI were key potential indicators of membership of the group with low alexithymia scores. Wellbeing and sociability played a less important role in discriminating between the members of the two alexithymia groups. Thus, participants experiencing a sense of wellbeing did not necessarily belong to the group of adolescents with low levels of alexithymia. Although our results do not allow us to identify causal relationships between the study variables, it is plausible that emotionality and self-control may act as antecedents of low alexithymia, whereas wellbeing and sociability are more likely related to the expression of these individual traits in the socio-relational context. Again, these results are in line with the outcomes of previous studies that demonstrated a negative correlation between alexithymia and specific dimensions of TEI, such as emotional self-awareness, empathy, and stress management, which partly correspond to the emotionality and self-control dimensions of the TEIQue (Bagby et al., 1994; Parker et al., 2001). Furthermore, the recent study by Trentini et al. (2021) highlighted a direct association between female adolescents' greater difficulty in identifying feelings (DIF) and their greater personal distress. Such an association could expose girls to a higher risk of developing burnout symptoms. On the other hand, participants with high levels of emotional exhaustion - a typical component of student burnout - were more likely to belong to the group characterized as at high risk of alexithymia. In the context of burnout, emotional exhaustion occurs when the student perceives himself/herself as overwhelmed by the demands of school, with respect to which he/she does not feel adequately equipped. It thus acts as a proxy for emotional self-efficacy and a flag for potential alexithymic traits. This outcome of our study is in line with the association between emotional exhaustion and alexithymia observed in female medical students by Popa-Velea et al. (2017). Inadequacy and cynicism appear to play a less important part in discriminating participants with high levels of alexithymia. This is probably due to the fact that these dimensions may rely more heavily on contextual and relational factors than on individual ones: Studies on burnout generally indicate that an indifferent or distant attitude toward work (i.e., cynicism) and reduced feelings of competence, achievement, and accomplishment (i.e., inadequacy) are more strongly associated with lower academic achievement and lower school engagement than with personality traits (Salmela-Aro et al., 2009). These findings bear some interesting educational implications. Adults and educators who intend to implement educational interventions aimed at enhancing adolescents' emotional competence and reducing the risk of emotional exhaustion and school burnout should especially focus on activities aimed at improving emotionality and self-control skills, given their strong association with low levels of alexithymia. Finally, as age increases, the probability of being included in the low alexithymia group increases.
It is plausible that as subjects mature, they may gain competence in recognizing and managing their own emotions in stressful situations, or it may be that their environment or experiences foster the development of skills that compensate for the deficits associated with an alexithymic personal disposition. In this regard, and in light of existing evidence that breadth and quality of social relationships are positively associated with TEI and negatively associated with alexithymia (e.g., Austin et al., 2005), it would be of value for future studies to examine the role of adolescent students' school environment and the quality of their relationships with peers and teachers. Limitations and Future Research Directions The present study is not without its limitations. First, its cross-sectional design prevented us from identifying predictive relationships. Although our results are promising, only longitudinal research can truly assess direct and indirect effects between variables. A second limitation is the purely quantitative nature of the research, in which the variables were investigated via self-report questionnaires only. A mixed-method study, including both quantitative and qualitative data (such as interview transcripts or narratives), would offer deeper insights into the psychological aspects that may help to discriminate between adolescents at risk of alexithymia and those who are not at risk. Finally, a third limitation concerns the characteristics of our sample. Specifically, it was not representative of the broader female adolescent population, given that the data were only collected in restricted geographical areas, at a specific type of school, and without monitoring SES or other environmental variables. For this reason, our findings are not generalizable. In the future, it would be advisable to explore the associations among the study variables in a more representative sample. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin. AUTHOR CONTRIBUTIONS EF contributed to designing the study, collecting, interpreting, and discussing the data, and writing the manuscript. AP contributed to designing the study, analyzing and interpreting the data, and writing the manuscript. VO contributed to designing the study, interpreting and discussing the data, and drafting and revising the manuscript. VC contributed to discussing the findings. All authors contributed to the article and approved the submitted version. FUNDING This work was supported by a grant from the University of Milano-Bicocca assigned to EF for the year 2019.
2021-07-08T13:29:12.024Z
2021-07-08T00:00:00.000
{ "year": 2021, "sha1": "969ded343e0e3b13f3ed380aeabb4d053d3eb368", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2021.645215/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "969ded343e0e3b13f3ed380aeabb4d053d3eb368", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
253240217
pes2o/s2orc
v3-fos-license
Atezolizumab for the treatment of advanced recurrent basal cell carcinoma and urothelial carcinoma of bladder: a case report Background The use of checkpoint inhibitors has become increasingly important in the treatment of different cancers, including advanced muscle-invasive urothelial cancer and even basal cell carcinoma. We present the case of a patient with advanced basal cell carcinoma and metastatic muscle-invasive urothelial cancer, who was treated with the programmed death-ligand 1 inhibitor, atezolizumab, for both cancers. Case presentation A 72-year-old Caucasian female patient, with a history of smoking and without any comorbidities, developed periocular basal cell carcinoma, which was surgically removed but relapsed 4 years later. Surgical excision was carried out twice, but with positive margins; therefore, definitive radiotherapy was given. Subsequently, the patient developed non-muscle-invasive papillary urothelial carcinoma, which was removed by transurethral resection. Follow-up was irregular owing to the patient's inadequate compliance, and within 2 years, the patient's cancer relapsed and histology confirmed muscle-invasive urothelial carcinoma. Definitive radiochemotherapy was not accepted by the patient. Meanwhile, the patient's basal cell carcinoma had also progressed despite vismodegib therapy. Therefore, the patient was administered epirubicin-cisplatin. Having reached the maximum cumulative dose of epirubicin, treatment with this chemotherapeutic agent could not be continued. The patient developed bladder cancer metastasis in her left suprainguinal lymph nodes. Owing to the presence of both types of tumors, programmed death-ligand 1 inhibitor atezolizumab treatment was chosen. In just over 1 year, the patient received 17 cycles of atezolizumab altogether, which were tolerated well without any adverse events or side effects. Follow-up imaging scans indicated complete remission of the metastatic bladder cancer and stable disease of the basal cell carcinoma. The patient subsequently passed away in hospital due to a complication of COVID-19 infection. Conclusions Our patient attained stable disease in advanced basal cell carcinoma and complete remission in metastatic muscle-invasive urothelial cancer after receiving programmed death-ligand 1 inhibitor, atezolizumab, therapy. To our knowledge, this is the first paper to report the use of programmed death-ligand 1 inhibitor, atezolizumab, as treatment for advanced basal cell carcinoma. This case may also be of interest for clinicians when treating patients with two synchronous cancers. to metastasis [1,2]. Since immune mechanisms appear to play an important role in the pathogenesis of BCC, studies are being conducted to determine the role of immunotherapy, such as checkpoint inhibitors, in the treatment of relapsed-recurrent or metastatic BCC [3]. The use of checkpoint inhibitors has become increasingly important in the treatment of different cancers, including advanced muscle-invasive bladder cancer [4]. Even if patients initially undergo radical cystectomy with pelvic lymph node dissection, metastases may later develop in about half of the cases [5]. Therefore, systemic platinum-based antitumor therapy is a key element in the treatment of these patients, but checkpoint inhibitors such as pembrolizumab, nivolumab, and atezolizumab have become alternatives for second-line systemic treatments [6]. Earlier reports have shown a positive response to treatment with checkpoint inhibitors in advanced BCC [3,7].
Recently, the programmed death receptor-1 (PD-1) inhibitor cemiplimab-rwlc has been approved for locally advanced and metastatic BCC [8]. In the present case report, we describe the case of a patient with locally advanced basal cell carcinoma, who concurrently developed metastatic muscle-invasive urothelial bladder carcinoma. The patient's case is of interest since her BCC remained stable and the metastatic bladder cancer was in complete remission during her treatment with the programmed death-ligand 1 (PD-L1) inhibitor, atezolizumab. Case presentation A 72-year-old Caucasian female patient, with a 40-year history of smoking and without any comorbidities, presented at the ophthalmology clinic with a non-healing, ulcerated lesion on the inner canthus of the left eye. Surgical excision of the lesion was carried out in April 2008, with limited excision margins. The histological diagnosis was follicular basal cell carcinoma. After the patient had been asymptomatic for 4 years, the tumor relapsed and the patient underwent a second surgical excision in August 2012. Resection was incomplete, with positive margins, so the patient underwent another surgery to have the tumor removed; however, histological diagnosis once again showed evidence of positive margins. Finally, radiation therapy was chosen as definitive treatment for the patient. Between December 2012 and January 2013, the patient received 50 Gy of irradiation in 2-Gy fractions to the left halves of the forehead, glabella, and nose, and to the left inner periocular region. The patient attended follow-up examinations by the dermatologist every 6 months. Physical examination of the patient and an MRI scan of the affected area were carried out. The patient remained symptom-free until 2018. Figure 1 shows the main events of the patient's illnesses and treatment. In June 2016, the patient presented with hematuria and was subsequently admitted to the urology clinic for cystoscopy and transurethral resection (TUR) of urinary bladder cancer. Histological diagnosis confirmed non-muscle-invasive papillary urothelial carcinoma (G2 pT1). Following surgery, the patient did not attend regular follow-up at the urology clinic, only at the dermatology clinic. Until 2018, the patient remained symptom-free. In February 2018, the patient once again presented with hematuria at the urology clinic. First cystoscopy, then TUR was performed, and the histological diagnosis confirmed muscle-invasive papillary urothelial carcinoma (G3, pT2), indicating the relapse and progression of the patient's bladder tumor. Although both radical cystectomy and definitive radiochemotherapy were considered and offered as treatment options, neither was accepted by the patient. In the weeks prior to surgery, while the patient was blowing her nose, a cavity spontaneously opened between the inner canthus of her left eye and the middle meatus of the nasal cavity. Therefore, magnetic resonance imaging (MRI) of the facial skeleton, the cervical soft tissues, and the upper mediastinal regions was carried out in April 2018. The MRI scan showed evidence of the relapse of the basal cell carcinoma on the skin of the inner canthus of the left eye, with the tumor invading the ethmoidal air cells in the left regions of the inner canthus and nasal root, as well as spreading into the nasal cavity and frontal sinus, cranially destroying the rhinobasis (Fig.
2A, B). For this advanced, locally invasive disease, the patient was given three cycles of vismodegib as first-line treatment without any adverse events. A follow-up MRI scan of the region, in September 2018, showed disease progression and increased destruction of the skin in the left inner canthal region. To ascertain the histological type of the tumor, a biopsy of the sinuses was carried out, which confirmed the presence of invasive follicular BCC. Based on the decision of the tumor board, the patient was given eight cycles of epirubicin-cisplatin (CDDP) chemotherapy between January and September 2019. Follow-up MRI showed stable disease. Throughout the treatment, the patient's functional status remained ECOG 0, and no side effects were noted. Having reached the maximum cumulative dose of epirubicin, treatment with this chemotherapeutic agent could not be continued. Approval for PD-L1 inhibitor avelumab therapy was requested by the tumor board but was denied. Towards the end of her chemotherapy, in the summer, the patient felt a small lump appear in her left inguinal region, so computed tomography (CT) scans of the chest, abdomen, and pelvis were performed in September 2019 at the urology clinic. Although local recurrence of the cancer in the bladder could not be found, lymph node conglomerates could be detected in the aortic and left parailiac regions, and a soft tissue lesion in the left adrenal gland raised the possibility of a metastasis (Fig. 3C). Ultrasound-guided core biopsy was performed on the left suprainguinal lymph node conglomerates, with the subsequent histological diagnosis confirming the metastasis of the muscle-invasive papillary urothelial carcinoma (CK7+, CK20 focally+, CDX2−, p63, and GATA3+). In November 2019, the patient was referred to the National Institute of Oncology for further oncological treatment. Taking both types of tumor into consideration, PD-L1 inhibitor atezolizumab therapy was chosen. The patient tolerated atezolizumab treatment well; she did not experience any adverse events or side effects. Her blood tests were normal, and her performance status remained good (ECOG 0) throughout the treatment. Follow-up abdominal, chest, and pelvis CT scans in both February and June 2020 showed regression of the retroperitoneal and left parailiac lymph node conglomerates, while the lesion in the left adrenal gland remained stable, altogether indicating the regression of the patient's metastatic bladder cancer. Response to therapy was evaluated according to the RECIST criteria (Fig. 3D). CT scans of these regions showed evidence of complete remission in September and later on in December 2020 as well. Follow-up MRI scan of the facial skeleton and cervical region in September showed evidence of stable disease of the BCC. On 30 January 2021, the patient presented with symptoms of dehydration and arrhythmia at the emergency department. She was diagnosed with atrial fibrillation with a high ventricular rate and was found to be COVID-19 positive. Over the next 2 days, the patient's condition deteriorated: she developed bilateral pneumonia, became somnolent, and died. The cause of her death was considered to be the complications of COVID-19 infection. Discussion Our case report describes the steps of therapy (highlighting immunotherapy) taken to treat a patient with two primary cancers: recurrent, advanced BCC and metastatic urothelial carcinoma. For patients with locally advanced BCC that relapses following surgery, vismodegib, a drug targeting the hedgehog signaling pathway, which is often dysregulated in BCC, is a recommended treatment option [9,10].
The efficacy of vismodegib has been reported by previous studies; however, long-term results have not yet been documented, and resistance to vismodegib has also been found in recurrent periocular BCC [11][12][13]. In line with treatment recommendations, when our patient's BCC relapsed following surgery and invaded the adjacent ethmoidal air cells, nasal cavity, and frontal sinus, the patient was approved for treatment with vismodegib. Although sinusoidal progression of the tumor was halted, the destruction of the skin progressed, so overall the treatment was only partially successful. Non-muscle-invasive bladder cancers account for 70% of new bladder cancers and can mostly be treated curatively, with a high 5-year overall survival rate of 90% [14]. However, about 15-20% of non-muscle-invasive bladder cancers become muscle invasive, with high-grade papillary tumors having a higher chance of progressing to muscle-invasive bladder cancer than low-grade papillary tumors [15,16]. Cisplatin-based chemotherapy is an accepted systemic treatment in muscle-invasive bladder cancer; however, for a large proportion of patients, it is not a suitable choice owing to age- or disease-related comorbidities and risks [6]. In our patient's case, instead of radical cystectomy and definitive radiochemotherapy, which were not accepted by the patient, TUR and subsequent close monitoring was the chosen mode of treatment. However, urological follow-up was irregular, and when the patient presented at the urology clinic with a palpable lump in her left inguinal region 1 year later, core biopsy showed evidence of a lymphatic metastasis of the bladder cancer. Immunotherapy with a checkpoint inhibitor was a possible treatment modality at this point; therefore, based on the tumor board's decision, our patient was given 17 cycles of atezolizumab. Programmed death-1 (PD-1) and PD-L1 inhibitors are immune checkpoint inhibitors, which have been shown to be safe and effective for patients with advanced muscle-invasive bladder cancer: they have been approved for first-line, second-line, and maintenance therapy in advanced disease [6]. The IMvigor211 phase 3 study showed that, in platinum-refractory metastatic urothelial carcinoma overexpressing PD-L1, atezolizumab, a PD-L1 inhibitor, was associated with overall survival similar to that of patients treated with chemotherapy, but had a better safety profile and was well tolerated [17]. Immunotherapy and the use of PD-1 and PD-L1 inhibitors have also become an area of interest in the treatment of non-melanoma skin cancers, including advanced BCC, particularly in patients refractory to Hedgehog pathway inhibitors [10,18]. Several case reports and a small study involving eight patients reported favorable responses with pembrolizumab, nivolumab, and cemiplimab [3,7]; furthermore, the latter was approved for locally advanced and metastatic BCC by the US Food and Drug Administration [8]. Atezolizumab treatment was well tolerated by our patient. During the treatment, follow-up imaging showed evidence of complete remission of the metastatic bladder cancer and stable disease of the BCC. Thus, atezolizumab treatment appeared to be beneficial for both cancers. Prior to treatment, PD-L1 staining was carried out. Interestingly, whereas the BCC sample showed intensive staining, the sample from the urothelial carcinoma metastasis did not show any.
Although PD-L1 expression, measured by immunohistochemistry, is an accepted and relevant biomarker of response to checkpoint inhibitors, it has been found to be unreliable in some cases [19]. It is plausible that the patient's metastasis may have contained heterogeneous tumor tissue, or the protein's expression may have been downregulated, which could explain the efficacy of atezolizumab despite the observed PD-L1 negativity in the tumor sample. Although there was a positive response to PD-L1 inhibitor therapy, the patient passed away soon after being hospitalized for atrial fibrillation and dehydration. The direct cause of her death was possibly a consequence of complications of COVID-19 infection. Conclusions To our knowledge, this is the first paper to report the use of the PD-L1 inhibitor, atezolizumab, as treatment for advanced BCC. With the aging of the population, the incidence of BCC and concurrent morbidity in multiple cancers are likely to increase, and choosing the best treatment for two synchronous cancers may pose a challenge for clinicians. We described such a case, in which a patient with both advanced BCC and metastatic bladder cancer received atezolizumab treatment, and the patient's response to therapy was stable disease of the first and complete remission of the second cancer. Although this report describes only one individual case, and therefore cannot be generalized, the treatment steps may be of interest for clinicians when treating patients with two synchronous cancers.
2022-11-01T13:59:13.311Z
2022-10-31T00:00:00.000
{ "year": 2022, "sha1": "8151c9859e2f0ff24cdf95a05bbaf88f2fc46577", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "cfa80ce5488859b3deb581b103978301c40b3395", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
154173906
pes2o/s2orc
v3-fos-license
Study of Carbon Financing Innovation of Chinese Financial Institutions In the past few years, the entry of commercial banks and other financial institutions into financial product and financial service innovation has become a new bright spot in the development of the international financial market. This provides huge scope for developing a carbon finance system. China's low-carbon economy has made considerable progress, but its development is still at a relatively early stage. This essay examines several bottlenecks constraining the involvement of Chinese financial institutions in carbon finance, in terms of the establishment of a carbon credit trading platform, the strengthening of financial institutions' role as fund agents and trading agents, and the promotion of carbon financial product innovation. General Terms: Research Article. Keywords: Carbon finance; Carbon trading; Financial innovation. INTRODUCTION People around the world are paying more attention to the low-carbon economy, which has given Chinese financial institutions increasingly ample space to participate in carbon finance. As a branch of environmental finance, carbon finance covers financial activities related to carbon trading, such as direct investment, carbon index trading, and bank loans; that is, it is a new type of financial operating model for regulating society's carbon emissions. With its short history, carbon finance is considered an important financial innovation in international finance in recent years. The international carbon finance market has shown a trend of rapid development in recent years. Barclays Bank launched the first standardized OTC (over-the-counter) CER (certified emission reduction) futures contract in October 2006, and by 2008 the Netherlands Bank and Dresdner Bank of Germany had developed retail products tracking EU carbon emission quota futures; with that, investment banks began to participate in the carbon finance market in a more direct way, becoming involved in Clean Development Mechanism (CDM) emission reduction projects. Later on, Kanji Bank in Korea, as an emerging-market institution, also introduced a "Carbon Bank" program. Our country is extremely rich in carbon emission resources and has great potential for carbon emission reduction. According to UNDP (United Nations Development Program) statistics, China's carbon emission reductions account for about one-third of the global market, ranking second. This indicates great demand, profitable financial opportunities, and sustainable development prospects for the carbon finance market. [1] Carbon finance is different from traditional financial innovation and financial activities in that it promotes low-carbon economic development. The low-carbon economy's demand for financial services, in turn, brings the modern financial industry new areas and new space to expand into. However, the development of China's carbon finance faces a series of constraints, and how to break through these barriers is a problem that must be solved. THE NECESSITY OF DEVELOPING CARBON FINANCE BUSINESS INNOVATION OF THE FINANCIAL INSTITUTIONS Developing a carbon financial system has great significance for our country. On the one hand, it is of great importance to the economic transition: it contributes to the transition of the national real economy to a low-carbon model and can also speed up the adjustment of the economic structure.
On the other hand, for the financial industry, the great prospects of the low-carbon economy also mean the arrival of a historic opportunity. As a carrier of financial system innovation, carbon finance benefits the optimization of our country's financial system structure and improves cooperation between Chinese financial institutions and those of other countries. Commercial banks have also carried out carbon finance businesses, launching CDM project finance, structured products linked to carbon trading, and other services and products, such as the green credit business innovation of CIB (China Industrial Bank). However, although China's low-carbon-economy-related financial innovation has made considerable progress, its development is still at a relatively early stage, the types of business are simple, and many aspects need to be further improved. At this point, research into product design and institutional innovation by commercial banks and other financial institutions in carbon finance is particularly important. The "functional perspective" on financial institutions, raised in the mid-1990s by Zvi Bodie and Robert Merton, can be applied to China's carbon finance market: its requirements can be refined to fit the market's current situation and prospects, promoting the financial system and market while improving the efficiency of carbon finance resource allocation. [2] Take commercial banks, for example: CDM projects contain huge demand for financial intermediation services, and commercial banks can expand their intermediary business revenue and optimize their revenue structure by providing related financial services. As a new business, carbon finance requires commercial banks to innovate their modes of operation, which objectively promotes the innovation capacity of commercial banks. Moreover, CDM projects often involve cooperation between financial institutions of two or more countries; commercial banks can use this to improve their international business negotiation skills, strengthen ties with international financial institutions and businesses, and accumulate experience in international operations. In addition, the rise of carbon trading and new energy is becoming a huge market and an excellent opportunity to boost currency diversification; building a carbon financial system will help China gain more bargaining chips in the internationalization of the RMB. PROBLEMS OF FINANCIAL INSTITUTIONS PARTICIPATING IN CARBON FINANCING INNOVATION The construction of the carbon finance market in China lags far behind that of developed countries. China is now realistically moving toward establishing a national carbon finance market. This section discusses the problems and obstacles that China's financial institutions, represented by commercial banks, may face, and offers detailed proposals on product and institutional innovation for Chinese financial institutions participating in carbon finance. This is not only a preliminary theoretical exploration of establishing a carbon finance market in China, but also a complement to and expansion of the financial innovation of our country's financial institutions. 3.1. The development of China's carbon trading market is in its early stage, and a national carbon exchange is lacking. Developed countries currently dominate global carbon exchanges.
The Beijing Environment Exchange (established in 2008), the Shanghai Environment and Energy Exchange, and the Tianjin Emissions Exchange are the earliest three environmental rights trading institutions; the Shanxi Lvliang Emission Reduction Trading Centre and exchanges in Wuhan, Hangzhou, Kunming, and elsewhere have been established since 2009. The carbon trading system is gradually developing its own characteristics. However, Chinese carbon trading mainly follows rules developed for European buyers and sellers; a real Chinese carbon trading market has not yet appeared, and a nationwide carbon exchange has not been established.

3.2. The development of the relevant intermediaries is incomplete

Carbon emission rights are a virtual commodity under the CDM: the trading rules are stringent, the development process complex, and the contract period very long, so it is difficult for non-professional organizations to develop and implement such projects. Abroad, the assessment of CDM projects and the purchase of most emission reductions are handled by intermediaries, but local agencies in China are still in their infancy and find it difficult to develop or absorb a large number of projects. In addition, China currently lacks a professional technical advisory system to help financial institutions analyze, evaluate, and avoid project risk and transaction risk. [3]

3.3. Financial institutions are not mature

The carbon finance modes of operation, project development, and trading rules led by Chinese commercial banks are still imperfect. A series of risks and difficulties explain the slow pace of carbon finance business innovation. Industrial Bank is one of the few banks currently keeping a watchful eye on carbon finance.

ANALYSIS OF THE CARBON FINANCE DEVELOPMENT PATH OF FINANCIAL INSTITUTIONS

As national climate change policy continues to strengthen, financial markets have responded quickly and powerfully. First, mainstream international commercial banks are active in environmental impact assessment for project loans and strictly monitor environmental risk in the lending process; commercial banks are also actively expanding their loans to low-carbon projects. Second, various financial innovations related to climate change have emerged. Trading in CERs and ERUs, together with spread options based on the CER-ERU spread, can lock in, isolate, and hedge the risks associated with climate change, stabilize expectations about the future, and improve the efficiency of the price mechanism, allocating resources toward cleaner production technologies through a variety of market-linked arbitrage products. Moreover, through financial markets and transparent pricing of carbon emission targets, the carbon financial system has grown to include direct investment, bank loans, carbon funds, carbon credit trading, and carbon futures and options as a range of supporting financial instruments.

4.1. Construct a carbon credit trading platform

We need to further explore and develop a quota system for the carbon emission quota trading market, and to establish and improve standards for carbon risk assessment, so as to enhance China's pricing power in international carbon credit trading and create a stable institutional environment for the development of carbon finance.
The development of China's carbon trading platform should be policy-based and aimed at economic results. A multi-level system should be built that includes both spot trading and derivatives trading platforms. As the market is still in its initial incubation period, inactive trading prevents the platform from playing its role. We need a clearer policy direction to encourage Chinese enterprises to enter the trading platform during the "12th Five-Year Plan" period; only in that way can we promote the development of the spot trading platform and the subsequent addition of futures and other derivatives trading platforms. [4]

4.2. Encourage NGOs (non-governmental organizations) and financial institutions to join

We need to pay attention to the role of financial institutions as financial intermediaries and trading intermediaries, allowing financial intermediaries to purchase carbon reduction projects or develop them with the project owners. Beyond the secured loans against carbon rights and the related financial products already offered, commercial banks should explore more modes, especially intermediary service models, to satisfy the diverse financing needs around carbon credits. Investment banks and finance companies should explore carbon finance and provide new investment channels for investors. Accounting firms and asset appraisal agencies need to improve the assessment and advisory work for carbon financial products. Insurance companies and other guarantee institutions can provide related insurance products and promote the development and trading of carbon financial products by strengthening trust and guarantee systems.

4.3. Promote carbon finance product innovation

- Great efforts should be made to encourage commercial banks to launch carbon pledge loan businesses. Establishing carbon credits as collateral, under which an enterprise with a promising CDM (Clean Development Mechanism) project pledges its right to future CERs income in order to apply for a bank loan, is an innovative form of financing. Because the right to CERs income is a right to future earnings with great uncertainty, banks need to pay close attention to the risk that the CERs income right will not be realized when providing pledge loans. Enterprises must acquire CERs through approved CDM projects, with the authenticity and validity of the issued CERs verified; for CDM projects registered with the UN, the progress of the project needs to be closely tracked, and interest rates floated upward appropriately against risk (a schematic sketch of these pledge-loan mechanics is given at the end of this article).

- Develop financial leasing business based on carbon trading. Under a Clean Development Mechanism (CDM) project, companies do not necessarily have to purchase pollution discharge and treatment facilities outright, which frees up their working capital. In practice, to encourage companies to reduce carbon emissions, financial leasing can be combined with carbon-right pledge loans. This offers loan support to companies using leased equipment while reducing the risk attached to CERs income rights and the possibility of bad debts on carbon credit secured loans. In practical applications, it might also be combined with factoring and finance leasing.

- Actively develop carbon fund financial products.
Commercial banks, securities companies, fund companies, and other financial institutions can actively explore the carbon fund market. They can develop and design investment and management plans for specific targeted customers, or sell open-ended fund management plans to general public investors identified through analysis of potential target customers. The idle funds gathered from customers form a dedicated carbon fund applied to enterprises with promising, creditworthy Clean Development Mechanism (CDM) projects, providing CDM project financing. Customers then benefit from the profits on the carbon dioxide emission reduction credits sold by those enterprises. The CDM project development cycle is long, the approval process complex, and the business risky; therefore such financial products should be designed with a term of 2 to 3 years and, in general, a rate of return 20% to 30% higher than the deposit rate for the same period.

- Develop trust-type carbon finance products. The design philosophy is to set up a carbon trust investment fund for enterprises with environmental awareness and knowledge; the money is invested in CDM projects with development potential in order to obtain the corresponding CERs (certified emission reduction) credits through the development of these projects. Commercial banks or other financial institutions can operate these carbon trust financial products.

- Gradually promote the securitization of carbon financial assets. Companies sell the carbon assets of high-potential Clean Development Mechanism (CDM) projects to a special purpose vehicle (SPV), usually an investment bank; the SPV pools these carbon assets and issues securities backed by the cash flows generated by the asset pool. Commercial banks' carbon-right pledge loans, finance leases tied to carbon credits, open-account credit assets, and factoring of business related to carbon emission rights can likewise be pooled together for the issuance of asset-backed securities. Securitization improves the liquidity of carbon assets and transfers their risk, which is conducive to the development of carbon finance. To encourage investment banks to promote the securitization of carbon assets, it is necessary to strengthen the development of carbon asset risk assessment institutions and to establish a credit mechanism for carbon asset-backed securities.

In addition, although carbon finance is a huge market for financial institutions, it also carries hidden financial risks that traditional products do not have. Carbon insurance is still vacant in China and urgently needs exploration.

CONCLUSION

Developing "carbon finance" requires a system with specific standards. In accordance with the principles of sustainable development, government, regulators, and financial institutions must issue a mature carbon trading system, as well as a scientific and rational interest compensation mechanism, to improve the carbon finance market and carbon financial instruments. Financial institutions play an important role in the investment and financing activities of energy saving and sustainable economic development.
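As a purely illustrative sketch of the CERs pledge-loan mechanics discussed above, the valuation logic a bank might apply can be written out in a few lines; every number and name below (loan-to-value ratio, base rate, risk premium) is a hypothetical assumption for illustration, not a figure from the article.

```python
# Hypothetical sketch of a CERs income-right pledge loan appraisal.
# All parameters (haircut, base rate, risk premium) are illustrative
# assumptions, not values taken from the article.
from dataclasses import dataclass

@dataclass
class CerPledge:
    expected_cers: float         # tonnes CO2e expected from the CDM project
    cer_price: float             # price per tonne, in the loan currency
    delivery_probability: float  # chance the CERs are actually issued

    def collateral_value(self) -> float:
        """Expected value of the pledged future CERs income."""
        return self.expected_cers * self.cer_price * self.delivery_probability

def pledge_loan_terms(pledge: CerPledge,
                      loan_to_value: float = 0.6,
                      base_rate: float = 0.05) -> tuple[float, float]:
    """Return (max loan amount, floating rate) for a CERs pledge.

    The rate floats upward as delivery risk rises, mirroring the
    article's suggestion to raise interest appropriately against risk.
    """
    max_loan = loan_to_value * pledge.collateral_value()
    risk_premium = 0.04 * (1.0 - pledge.delivery_probability)
    return max_loan, base_rate + risk_premium

pledge = CerPledge(expected_cers=100_000, cer_price=10.0,
                   delivery_probability=0.8)
loan, rate = pledge_loan_terms(pledge)
print(f"max loan: {loan:,.0f}, floating rate: {rate:.2%}")
```

The design choice here is the one the article argues for: the uncertainty of the future CERs income right is priced twice, first as a haircut on collateral value and then as an upward-floating interest rate.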
2019-05-15T14:34:40.368Z
2013-12-10T00:00:00.000
{ "year": 2013, "sha1": "40442c074c0ed3788475668597c473e5b164ec8e", "oa_license": null, "oa_url": "https://doi.org/10.17722/ijme.v2i2.64", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "d531c28e54a633b74e8ac8f16041408f0561d94d", "s2fieldsofstudy": [ "Environmental Science", "Economics", "Business" ], "extfieldsofstudy": [ "Economics" ] }
13884940
pes2o/s2orc
v3-fos-license
Effects of fenugreek seed on the severity and systemic symptoms of dysmenorrhea.

BACKGROUND Primary dysmenorrhea is a prevalent disorder and its unfavorable effects deteriorate the quality of life of many people across the world. Based on some evidence on the characteristics of fenugreek as a medicinal plant with anti-inflammatory and analgesic properties, this double-blind, randomized, placebo-controlled trial was conducted. The main purpose of the study was to evaluate the effects of fenugreek seeds on the severity of primary dysmenorrhea among students.

METHODS Unmarried students were randomly assigned to two groups who received fenugreek (n = 51) or placebo (n = 50). For the first 3 days of menstruation, 2-3 capsules containing fenugreek seed powder (900 mg) were given to the subjects three times daily for two consecutive menstrual cycles. Pain severity was evaluated using a visual analog scale and systemic symptoms were assessed using a multidimensional verbal scale.

RESULTS Pain severity at baseline did not differ significantly between the two groups. Pain severity was significantly reduced in both groups after the intervention; however, the fenugreek group experienced significantly larger pain reduction (p < 0.001). With respect to the duration of pain, there was no meaningful difference between the two cycles in the placebo group (p = 0.07), but in the fenugreek group the duration of pain decreased between the two cycles (p < 0.001). Systemic symptoms of dysmenorrhea (fatigue, headache, nausea, vomiting, lack of energy, syncope) decreased in the fenugreek seed group (p < 0.05). No side effects were reported in the fenugreek group.

CONCLUSION These data suggest that prescription of fenugreek seed powder during menstruation can reduce the severity of dysmenorrhea.

Introduction

Dysmenorrhea, a Greek term, refers to painful uterine contractions during menstruation (1). It is associated with spasmodic pain in the abdomen during menstrual bleeding (2). Primary dysmenorrhea is the main cause of work absenteeism, decreased quality of life, and reduced ability to carry out daily activities (3,4). In primary dysmenorrhea, the pain is not accompanied by a pelvic disorder. In addition, it is more common in younger women but may last until the fifth decade of life (5). Dysmenorrhea results from uterine contractions associated with ischemia (6). Increased concentrations of prostaglandins, vasopressin, and leukotrienes, as well as emotional factors, may also result in dysmenorrhea (7). The prevalence of primary dysmenorrhea has been reported to range from 42 to 95% in different countries (8)(9)(10)(11). Various non-invasive nutritional and psychological interventions have been suggested as treatments. These include psychotherapy, yoga, hypnotherapy, massage, transcutaneous electrical nerve stimulation, vitamins, and nutritional supplements. Prescribed medications include inhibitors of prostaglandin synthesis and non-steroidal anti-inflammatory drugs (NSAIDs) for the relief of pain, as well as oral contraceptives. Non-pharmaceutical treatments include acupuncture and surgery. Several of these treatments may have adverse effects or may be contraindicated in certain groups of women (5,12). Due to the lack of side effects compared with synthetic drugs, approximately 60% of the world's population depends almost entirely on plants for medication. Natural products have been known to be effective therapies (13).
The use of herbal medicines was common for hundreds of years before pharmaceutical companies began to operate, and many drugs have a herbal basis. The consumption of herbal supplements has increased in developed countries, especially the USA, across different age groups and for various reasons (14,15). Traditional medicines like brewed herbs have been used to treat dysmenorrhea across the world (16). Many women believe that dysmenorrhea is a normal part of the menstrual cycle and does not need pharmacological treatment (14). Naturally occurring agents used to treat dysmenorrhea include herbal brews (e.g., mint, chamomile, and oregano), the roots of plants (e.g., carrots and turnips), and the petals of plants (marigold, hyacinth, and fenugreek) (12,17). In one study, 78% of the participants used fenugreek, mint, or green tea, with fenugreek used more than the others (14). Fenugreek [Trigonella foenum-graecum (Leguminosae)] is the most frequently used herbal galactagogue and is a member of the pea family (18). In Iran, fenugreek has been registered under the name "Shanbalile" (/Šambalile/) (19). Fenugreek is an annual herb with medicinal properties and has been known as the oldest herbal medicine in Egypt and Greece (20). Today, new information has been obtained on the benefits and pharmacological effects of fenugreek on human wellbeing (21,22). The fenugreek plant is native to West Asia and Iran (23). The Food and Drug Administration (FDA) in the USA lists it as a generally recognized as safe (GRAS) plant. It has been utilized around the world for centuries (18). Fenugreek is added to the ordinary foods of Indians, Egyptians, and Yemenis (13). Fenugreek seeds are used as spices in food preparations to improve or impart flavor and are good sources of protein, fat, minerals, and dietary fiber (24). The use of fenugreek dates back to ancient Egypt, when it was used to facilitate childbirth and increase mothers' milk. Egyptian women still consume fenugreek to decrease dysmenorrhea. Fenugreek was also used as a poultice to treat gout, inflamed glands, tumors, scars, wounds, and various skin inflammations (25,26). The ripe and young fenugreek seeds are rich in carbohydrates and sugar, galactomannan, amino acids, fatty acids, vitamins, folic acid, and saponins (22). The main chemical constituents of fenugreek are proteins rich in lysine and tryptophan, flavonoids (e.g., quercetin, trigonelline, saponins, and phytic acid), and polyphenols (24,27). The well-documented therapeutic uses of fenugreek rest on its hypoglycemic and hypolipidemic activity (28−30). It also protects the gastrointestinal (31) and cardiovascular systems (32). Fenugreek seeds are also used in traditional medicine to relieve the common cold, arthritic pain, and hyperglycemia. The extracts and isolates of fenugreek seeds have antioxidant and anti-inflammatory activities (33). The anti-inflammatory and analgesic effects of fenugreek have been demonstrated in experimental models (34−37). One study established the role of the serotonergic system in the analgesic effect of fenugreek extract in mice and raised the possibility of other analgesic mechanisms. It also noted superficial similarities between fenugreek extract and non-steroidal anti-inflammatory drugs, which combine analgesic, antipyretic, and anti-inflammatory effects (34). Phytoestrogens are herbal compounds with estrogenic activity; fenugreek contains phytoestrogen compounds (38).
The present study was conducted to evaluate the effects of oral administration of fenugreek on the severity of primary dysmenorrhea among students. The article also considered ongoing studies around the world in the field of traditional medicine and the abundance of plants mentioned in Iranian traditional medicine as herbs with analgesic and anti-inflammatory effects.

Methods

This was a double-blind, randomized, placebo-controlled trial. It involved unmarried students living in a dormitory at Shahid Beheshti University (Tehran, Iran) from October 2010 to April 2011 who experienced moderate-to-severe dysmenorrhea. The study protocol was approved by the Research and Ethics Committee of the Faculty of Nursing and Midwifery, Shahid Beheshti University of Medical Sciences, and is registered in the Iranian Registry of Clinical Trials (Number 201106196807N2). Students were informed about the purpose and methods of the study and provided written consent forms for participation. It was estimated that a sample size of 100 participants was required to reach statistical significance at the 95% confidence interval. Computer-generated random numbers were used to divide participants into two groups receiving fenugreek or placebo. Participants and researchers were kept blinded to treatment allocation. The variables related to dysmenorrhea and systemic symptoms, including age, age at menarche, age at onset of dysmenorrhea, and BMI, were matched between the two groups. Other variables, such as underlying diseases (diabetes, chronic hypertension, infectious diseases) that might affect fenugreek consumption or contraindicate its use, were controlled by excluding those participants from the study. Students who had irregular menstrual cycles, endometriosis, a history of medication use, acute stress, and/or vaginal symptoms (burning, irritation, itching, or discharge) were excluded from the study. People who were allergic to fenugreek or other plants, or who had used herbal drugs during the previous 3 months, were also to be excluded, though no such cases were found. Participants who showed an allergy to fenugreek when consuming it, did not use the capsules properly, used any other herbal drug during the intervention, stopped taking the capsules, or used 4 capsules or fewer daily were excluded from the study. Fenugreek seeds (from one geographical region) were purchased from Zardband Pharmaceuticals (Tehran, Iran). After identification and verification of the fenugreek seed samples in the Botanical Laboratory at the Faculty of Traditional Medicine at Shahid Beheshti University of Medical Sciences, the samples were ground down. The seed powder was placed into capsules (900 mg) using an automated machine. The placebo capsules contained potato starch. The capsules were similar with respect to shape, color, and packaging. For the first 3 days of menstruation, 2−3 capsules containing fenugreek seed powder (900 mg each) or placebo were taken three times daily. The intervention continued for two consecutive menstrual cycles. Participants were allowed to use NSAIDs such as ibuprofen and mefenamic acid if required. However, they were asked to take these medications ≥1 hr after taking the study capsule and to record pain severity before consumption of the sedative.
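For concreteness, the regimen above implies the following total exposure; this is a worked calculation from the stated numbers, not a figure reported by the authors:
\[
\text{daily dose} = (2\ \text{to}\ 3)\ \text{capsules} \times 3\ \text{doses/day} \times 0.9\ \mathrm{g} = 5.4\ \text{to}\ 8.1\ \mathrm{g/day},
\]
\[
\text{per cycle (3 days)} = 16.2\ \text{to}\ 24.3\ \mathrm{g},\qquad
\text{per intervention (2 cycles)} = 32.4\ \text{to}\ 48.6\ \mathrm{g}.
\]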
Before the intervention and during each treatment cycle, content and test-retest methods were used to assess the validity and reliability (r = 0.89) of the questionnaire, respectively. The following demographic data were collected: age, body mass index (BMI), educational level, occupation of the parents, exercise program, and stressful factors in the past 6 months. A self-reported checklist was used to collect information on the number of sedative drugs taken for dysmenorrhea, pain severity, and the systemic symptoms associated with menstruation. During the first three days of menstruation, the pain severity of each participant was measured three times a day on a 10 cm visual analog scale (VAS), at the time the participant felt the most pain during the periods 8−13, 13−18, and 18−24 o'clock, and was classified as "mild" (score of 1-2), "moderate" (3-7), or "severe" (8-10) (39). A multidimensional verbal scoring system from 0 to 3 was used to assess the severity of associated systemic symptoms (fatigue, diarrhea, syncope, nausea, vomiting, lack of energy, headache, and mood swings) (40). The validity of the VAS, which was used to measure pain, has been established in many studies. This scale has a wide range of applications and is considered one of the most useful and reliable pain measures (41,42). The questionnaire related to the multidimensional verbal scoring system is valid and has been used in numerous studies (40,41,43,44). The imprint codes for the capsules were recorded on a separate sheet.

Statistical analyses: SPSS ver. 16 (SPSS, Chicago, IL, USA) was used for the statistical analyses. Descriptive data are presented as frequencies, mean values, and standard deviations, and the t-test was used to compare age, age at menarche, and other variables between the two groups. The Friedman test was used to compare pain severity across the three menstrual cycles. The Mann-Whitney U test was used to compare the findings between the two groups. If the results of the Friedman test were significant, the therapeutic cycles were compared in pairs via the Wilcoxon signed-rank test with modification of the α level. P<0.05 was considered significant.

Results

Among 400 unmarried female dormitory residents, 221 reported primary dysmenorrhea. After exclusions, 106 individuals were enrolled in the study. The final analysis involved 101 students, 51 of whom received fenugreek and 50 of whom received placebo. There were no significant differences between the groups with respect to age, age at menarche, onset of dysmenorrhea, and BMI (Table 1). Pain severity at baseline did not differ significantly between the groups. In the fenugreek group, pain severity decreased from 6.4 at baseline to 3.25 in the second cycle, whereas in the placebo group it decreased from 6.14 to 5.96 (Table 2). Pain severity in each intervention cycle differed significantly between the two groups, with the pain reduction in each cycle being significantly larger in the fenugreek group (Table 3). The duration of pain in the intervention cycles was shorter in the fenugreek group (p=0.01). With respect to the duration of pain, there was no meaningful difference between the two cycles in the placebo group (p=0.07), but in the fenugreek group the duration of pain decreased between the two cycles (p<0.001). The mean number of sedative tablets required in the fenugreek group decreased significantly (Figure 1) (p<0.001).
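The analysis pipeline described above (Friedman across cycles, pairwise Wilcoxon, Mann-Whitney between groups) can be illustrated in a minimal sketch; this is not the authors' code, and the VAS data below are synthetic numbers loosely matching the reported group means.

```python
# Minimal sketch of the study's statistical comparisons on synthetic data;
# scipy is assumed available. Group sizes and means mimic the article.
import numpy as np
from scipy.stats import friedmanchisquare, mannwhitneyu, wilcoxon

rng = np.random.default_rng(0)

def classify_vas(score: float) -> str:
    """Map a 0-10 VAS score to the severity classes used in the study."""
    if score <= 2:
        return "mild"
    if score <= 7:
        return "moderate"
    return "severe"

# Columns: baseline, cycle 1, cycle 2 (per-subject mean VAS, synthetic).
fenugreek = np.clip(np.column_stack([
    rng.normal(6.4, 1.0, 51),
    rng.normal(4.5, 1.0, 51),
    rng.normal(3.25, 1.0, 51),
]), 0, 10)
placebo = np.clip(np.column_stack([
    rng.normal(6.14, 1.0, 50),
    rng.normal(6.05, 1.0, 50),
    rng.normal(5.96, 1.0, 50),
]), 0, 10)

# Friedman test: pain severity across the three cycles within one group.
stat, p = friedmanchisquare(*fenugreek.T)
print(f"Friedman (fenugreek): chi2={stat:.2f}, p={p:.4f}")

# If significant, compare cycles pairwise with Wilcoxon signed-rank tests
# (the alpha level would be adjusted, as the authors describe).
if p < 0.05:
    for i, j in [(0, 1), (0, 2), (1, 2)]:
        w, pw = wilcoxon(fenugreek[:, i], fenugreek[:, j])
        print(f"  Wilcoxon cycle {i} vs {j}: p={pw:.4f}")

# Mann-Whitney U test: between-group comparison in the second cycle.
u, pu = mannwhitneyu(fenugreek[:, 2], placebo[:, 2])
print(f"Mann-Whitney U (cycle 2): p={pu:.4f}")
```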
Furthermore, in the fenugreek group, the severity of the systemic symptoms associated with dysmenorrhea also decreased significantly (p<0.001) (Table 4).

Discussion

Women with dysmenorrhea suffer from increased uterine contractions (6). It has been shown that fenugreek has therapeutic effects against diabetes, infertility, and fungal infections, and that it has analgesic, anti-inflammatory, and antipyretic properties as well (45). The present study was the first to investigate the use of fenugreek in the treatment of dysmenorrhea. Studies have shown that fenugreek seed has been used for controlling dysmenorrhea and mastalgia (14,16,25,26). In the USA, fenugreek has been used for the treatment of post-menopausal vaginal dryness and dysmenorrhea since the nineteenth century (46). The antispasmodic effect of fenugreek on the gastrointestinal system has been recognized, and this may explain its effectiveness in dysmenorrhea. Moreover, the diuretic property of fenugreek decreases pelvic hyperemia, which may explain its effectiveness in dysmenorrhea and in the reduction of mastalgia (14). The chronic analgesic effect of fenugreek extract has been observed, and studies have shown that fenugreek seed reduces pain through the serotonergic system (36). Anti-inflammatory, antipyretic, and anti-anxiety effects of leaf extracts of fenugreek have been demonstrated in animal models (19, 34, 45−48). Phytochemical studies have revealed that alkaloids, glycosides, and phenols are the major components of fenugreek extracts (19,34). Although the presence of anti-inflammatory, analgesic, and antipyretic effects in the extracts suggests an NSAID-like mechanism, the presence of alkaloids together with the absence of flavonoids, saponins, and steroids does not. Therefore, the alkaloid compounds in the extracts may have several effects (21). Phytoestrogens are herbal compounds with estrogenic activity; fenugreek contains phytoestrogen compounds (49). Fenugreek has shown anti-inflammatory effects similar to those of dexamethasone and ibuprofen. Diosgenin, a steroidal sapogenin and one of the compounds of fenugreek extract, acts like cortisone and consequently reduces anxiety (19). In the present study, pain duration in the intervention cycles was shorter in the fenugreek group (p=0.01). Hence, fenugreek seems to be effective in reducing the duration of dysmenorrhea. Both groups exhibited a reduction in the severity of other symptoms associated with dysmenorrhea. However, in the placebo group, symptom alleviation was not significant except for the reduction in lack of energy (p=0.01). Therefore, fenugreek may reduce dysmenorrhea-associated systemic symptoms (nausea, vomiting, lack of energy, headache, diarrhea, mood swings, syncope, and fatigue). The antihistaminic effect of fenugreek may reduce premenstrual symptoms. The effectiveness of fenugreek has been observed in dysmenorrhea, but not on mood (14). The effects of fenugreek on systemic signs such as vomiting and anemia have also been reported (21). Anemia causes lack of energy and fatigue, and fenugreek leaves are a rich source of calcium, iron, β-carotene, and vitamins (49). One of the systemic symptoms associated with dysmenorrhea is headache, and fenugreek has been shown to alleviate this symptom (45). Fenugreek seed paste is used to treat abscesses, boils, ulcers, and burns. Consumption of fenugreek seed powder has therapeutic effects against gastritis and gastric ulcers due to bacterial infections (50).
Fenugreek is rich in minerals, which have positive effects on the immune system, and its therapeutic effects may be explained by its mineral content (51). The fenugreek group used fewer sedatives after the intervention. This is an important finding because NSAIDs have numerous adverse effects, including nausea, vomiting, dizziness, purpura, petechiae, hyperkalemia, peripheral edema, peptic ulcers, and gastric bleeding (52). In the present study, no complication was reported with regard to fenugreek consumption. Only very mild side effects of fenugreek have been reported (53−55). The existing evidence supports the nontoxicity of the aqueous extract of fenugreek. No adverse nutritional response has been observed in studies related to fenugreek (56). This plant contains nontoxic mucilage, alkaloids, and sugar and has not shown any specific side effect. One study reported side effects such as allergic reactions, but no hematological toxicity (13,22,57). The effectiveness of fenugreek against the symptoms of dysmenorrhea, and its harmlessness, have been observed (58). Based on the findings of the present study, further studies are needed to compare fenugreek with anti-inflammatory medications.

Conclusion

The present study showed that fenugreek reduced the severity of primary dysmenorrhea. Given that no adverse effects were reported for fenugreek, the herb can be administered safely for the management of this condition.
2016-05-12T22:15:10.714Z
2014-02-22T00:00:00.000
{ "year": 2014, "sha1": "338de410ed2428938ce6f12cfa4755a07a3a1c59", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "338de410ed2428938ce6f12cfa4755a07a3a1c59", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
40777476
pes2o/s2orc
v3-fos-license
Future Global in time einsteinian spacetimes with U(1) isometry group

We prove that spacetimes satisfying the vacuum Einstein equations on a manifold of the form $\Sigma \times U(1)\times R$, where $\Sigma$ is a compact surface of genus $G>1$ and where the Cauchy data are invariant with respect to U(1) and sufficiently small, exist for an infinite proper time in the expanding direction.

Introduction

In this paper we prove a global in time existence theorem, in the expanding direction, for a family of spatially compact vacuum spacetimes having spacelike U(1) isometry groups. The 4-manifolds we consider have the form V = M × R, where M is an (orientable) circle bundle over a compact higher genus surface Σ and where the spacetime metric is assumed to be invariant with respect to the natural action of U(1) along the bundle's circle fibers. We reduce Einstein's equations, à la Kaluza-Klein, to a system on the base Σ × R, where it takes the form of the 2+1 dimensional Einstein equations coupled to a wave map matter source whose target space is the hyperbolic plane. This wave map represents the true gravitational wave degrees of freedom that have descended from 3+1 dimensions to appear as "matter" degrees of freedom in 2+1 dimensions. The 2+1 metric itself contributes only a finite number of additional (Teichmuller parameter) degrees of freedom, which couple to the wave map and control the conformal geometry of Σ. After the constraints have been solved and coordinate conditions imposed, through a well defined elliptic system, nothing remains but the evolution problem for the wave map / Teichmuller parameter system, though the latter has now become non local in the sense that the "background" metric in which the wave map is propagating is a non local functional of the wave map itself, given by the solution of the elliptic system mentioned above. Thus even in the special "polarized" case which we concentrate on here, in which the wave map reduces to a pure wave equation, this wave equation is both non linear and non local. In addition to the simplifying assumption of polarization (which obliges us here to treat only trivial bundles, M = Σ × S^1), we shall need a smallness condition on the initial data, an assumption that the genus of Σ is greater than 1, and a restriction on the initial values allowed for the Teichmuller parameters. It seems straightforward to remove each of these restrictions except for the smallness condition on the initial data. In particular we believe that the methods developed herein can be extended to the treatment of non polarized solutions on non trivial bundles over surfaces including the torus (but not S^2), with no restriction on the initial values of the Teichmuller parameters. Some preliminary work in this direction has already been carried out. We do not know how to remove the small data restriction even in the polarized case, but we conjecture that long time existence should hold for arbitrarily large data, since the U(1) isometry assumption seems to suppress the formation of black holes (note that U(1) is here essentially a "translational" and not a "rotational" symmetry, since the existence of an axis of rotation would destroy the bundle structure). Of course there is as yet no large data global existence result for smooth wave maps in 2+1 dimensions, even on a given background, so there is no immediate hope for such a result in our still more non linear (and non local) problem, but the polarized case, though non linear and non local as well, seems more promising.
One knows how to control the Teichmuller parameters in pure 2+1 gravity, and a wave equation on a given curved background offers no special difficulty. But now the "background" metric is instead a functional of the evolving scalar field, and one needs to control this along with the Teichmuller parameters. Serious progress on this problem would represent a "quantum jump" forward in one's understanding of long time existence problems for Einstein's equations since, up to now, the only large data global results require simplifying assumptions that effectively reduce the number of spatial dimensions to one (e.g., Gowdy models and their generalizations, plane symmetric gravitational waves, spherically symmetric matter coupled to gravity) or zero (e.g., Bianchi models, 2+1 gravity). We hope that this work on small data global existence will lay the groundwork for such an eventual quantum jump. But why assume a Killing field if only small data results are aimed for in the current project? A small data global existence result already exists (Andersson-Moncrief, in preparation) for the Einstein equations on certain 3-manifolds of negative Yamabe class, a result which makes no symmetry assumption whatsoever. Shouldn't those methods be applicable to our problem, in which case the U(1) symmetry assumption could be removed? The answer to this question is far from obvious, for a somewhat subtle reason. In those cases where small data global existence can be established, the conformal geometry of the spatial slices (which represents the propagating gravitational wave degrees of freedom) tends to a well behaved limit. Therefore the various Sobolev "constants" (which are in fact functionals of the geometry) needed in the associated energy estimates tend to well behaved limits as well. This simplifying feature is however missing in the current problem since, during the course of our evolution, the conformal geometry of the circle bundles under study undergoes a kind of Cheeger-Gromov collapse in which the circular fibers shrink to zero length, and the various related Sobolev "constants" may careen out of control, making even small data energy estimates much more difficult. Of the various Thurston types of 3-geometries which compactify to negative or zero Yamabe class manifolds {H^3, H^2 × R, SL(2,R), Sol, Nil, R^3}, only the hyperbolics are immune from such degenerations, and the remaining (positive Yamabe class) Thurston types {S^3, S^2 × R} are not only subject to Cheeger-Gromov type collapse but also to recollapse of the actual physical geometry to "big crunch" singularities in the future direction. By focusing on negative (or zero) Yamabe class manifolds, which exclude (due to Einstein's equations) the occurrence of maximal hypersurfaces that would signal the onset of recollapse to a big crunch, we thereby concentrate on spacetimes that can expand indefinitely. That such Cheeger-Gromov collapse can be expected in solutions of Einstein's equations can be seen already in the basic compactified Bianchi models, wherein all the known solutions of negative Yamabe type except H^3 exhibit conformal collapse either along circular fibers {H^2 × R, SL(2,R)}, or collapse along T^2 fibers {Sol}, or even total collapse with non zero but bounded curvature of the Gromov "almost flat" variety {Nil}. The solutions we are considering here (of Thurston type H^2 × R, or eventually SL(2,R) in non polarized generalizations) extend results exhibiting such behavior to a large family of spatially inhomogeneous spacetimes.
We sidestep the extra complication of degenerating Sobolev constants by imposing U(1) symmetry and carrying out a Kaluza-Klein reduction to work on a spatial manifold of hyperbolic type (though now a 2-dimensional one) for which, as we shall show, collapse and the corresponding degeneracy of the needed Sobolev constants are suppressed. The reason why we avoid the base Σ = T^2 is that the 2-tori themselves tend to collapse under the Einstein flow, whereas the higher genus surfaces do not. On the other hand, one can probably compute the explicit dependence of the needed Sobolev constants on the Teichmuller parameters for the torus and eventually exploit this to treat the Thurston cases {R^3, Nil}, which compactify typically to trivial and non trivial S^1-bundles over T^2. The Sol case (which compactifies to T^2 bundles over S^1) tends to collapse the entire T^2 fibers (as seen from the Bianchi models). Thus to avoid degenerating Sobolev "constants" in this case it seems necessary to impose a full T^2 = U(1) × U(1) isometry group and Kaluza-Klein reduce to an S^1 spatial base manifold. This leads to a certain nice generalization of the Gowdy models defined on the "Sol-twisted torus", but it has effectively only one space dimension remaining. We exclude the Thurston types {S^2 × R, S^3}, which correspond to trivial and non trivial S^1 bundles over S^2 respectively, since they belong to the positive Yamabe class as we have mentioned and should not exhibit infinite expansion but rather recollapse to big crunch singularities. The eight Thurston types are the basic building blocks from which other (and conjecturally all) compact 3-manifolds can be built by gluing together along so-called incompressible 2-tori or (to obtain non prime manifolds) along essential 2-spheres. Very little is known about the Einstein "flow" on such more general manifolds, but it seems that a natural first step in this direction may be made by studying the Einstein flow on the basic building block manifolds themselves. This program seems tractable provided that a U(1) symmetry is imposed in the H^2 × R, SL(2,R) and perhaps Nil and R^3 cases, and provided that a U(1) × U(1) symmetry is imposed in the Sol case. No symmetries are needed in the H^3 case, due to the absence of Cheeger-Gromov collapse, but one can hope to remove the symmetry hypothesis in the other cases by learning how to handle degenerating Sobolev "constants". In this respect the Nil and R^3 cases may provide some guidance, since they seem to require a treatment of degenerating Sobolev constants but only in a 2-dimensional setting (when U(1) symmetry is imposed). The basic methods we use involve the construction of higher order energies to control the Sobolev norms of the scalar wave degrees of freedom, combined with an application of the "Dirichlet energy" function in Teichmuller space to control the Teichmuller parameter degrees of freedom. A subtlety is that the most obvious definition of wave equation (or, more generally, wave map) energies does not lead to a well defined rate of decay, so corrected energies must be introduced which exploit "information" about the lowest eigenvalue of the spatial laplacian entering the wave equation. Since the lowest eigenvalues vary with position in Teichmuller space, we find it convenient to choose initial data such that, during the course of the evolution, the lowest eigenvalue avoids a well known gap in the spectrum for an arbitrary higher genus surface.
If no eigenvalue drifts into this gap (which we enforce by suitable restriction on the initial data), then one can establish a universal rate of decay for the energies. If the lowest eigenvalue drifts into this gap and remains there asymptotically, then the rate of decay of these energies will depend upon the asymptotic value of the lowest eigenvalue and will no longer be universal. While it is straightforward to modify the definitions of the corrected energies to take this refinement into account, we shall not do so here, to avoid further complication of an already involved analysis. An extension of the definition of our corrected energies to the non polarized case and to the treatment of non trivial S^1 bundles is also relatively straightforward, but for simplicity we shall not pursue that here either. The sense in which our solutions are global in the expanding direction is that they exhaust the maximal range allowed for the mean curvature function on a manifold of negative Yamabe type, for which a zero mean curvature can only be asymptotically approached. The normal trajectories to our space slices all have an infinite proper time length. We do not attempt to prove causal geodesic completeness, but that would be straightforward to do given the estimates we obtain. Another question concerns the behavior of our solutions in the collapsing direction. Since our energies are decaying in the expanding direction, they are growing in the collapsing direction and will eventually escape the region in which we can control their behavior. In particular we cannot use these arguments to show that our solutions extend to their conjectured natural limit as the mean curvature function tends to −∞. There is however another approach to the U(1) problem which, although local in nature, can describe a large family of U(1)-symmetric spacetimes by convergent expansions about the big-bang singularities themselves. This method, which is based on work by S. Kichenassamy and its extensions by A. Rendall and J. Isenberg, can handle vacuum spacetimes that are "velocity dominated" at their big-bang singularities. Work by J. Isenberg and one of us (V.M.) shows that the polarized vacuum solutions on T^3 × R are amenable to this analysis. In fact there are two larger families of "half-polarized" solutions that can also be rigorously treated and shown to have velocity dominated singularities. By contrast, the general (non polarized) solution does not seem to be amenable to this kind of analysis, and indeed numerical work by B. Berger shows that such solutions should generically have "oscillatory" rather than velocity dominated singularities. The expansion methods which produce these solutions near their velocity dominated singularities are essentially local and should be readily adaptable to other manifolds such as circle bundles over higher genus surfaces. Thus one should be able to generate a large collection of initial data sets for the problem dealt with in this paper, which treats the further evolution globally in the expanding direction. The machinery thus seems to be at hand for treating a large family of U(1) symmetric solutions from their big-bang initial singularities to the limit of infinite expansion.

Equations.

The spacetime manifold V is a principal fiber bundle with one dimensional Lie group G and base Σ × R, with Σ a smooth 2-dimensional manifold which we suppose here to be compact. The spacetime metric is invariant under the action of G; the orbits are the fibers of V and are supposed to be spacelike.
We write it in the form

^{(4)}g = e^{−2γ} \, ^{(3)}g + e^{2γ} θ ⊗ θ,

where γ is a scalar function and ^{(3)}g a lorentzian metric on Σ × R, which reads

^{(3)}g = −N² dt² + g_{ab} (dx^a + ν^a dt)(dx^b + ν^b dt).

N and ν are respectively the lapse and shift of ^{(3)}g, while g is a riemannian metric on Σ, depending on t. The 1-form θ is a connection on the fiber bundle V, represented in coordinates (x³, x^α) adapted to the bundle structure by θ = dx³ + A_α dx^α. Note that A is a locally defined 1-form on Σ × R.

Twist potential. The curvature of the connection locally represented by A is a 2-form F on Σ × R, expressed in terms of a scalar function ω on V, called the twist potential, and of a representative H of the 1-cohomology class of Σ × R, for instance defined by a 1-form on Σ, harmonic for some given riemannian metric m.

Wave map equation. The fact that F is a closed form, together with the equation ^{(4)}R_{33} = 0, implies (with the choice H = 0) that the pair u ≡ (γ, ω) satisfies a wave map equation from (Σ × R, ^{(3)}g) into the hyperbolic 2-space, i.e. R² endowed with the riemannian metric 2(dγ)² + (1/2)e^{4γ}(dω)². This wave map equation is a system of hyperbolic type when ^{(3)}g is a known lorentzian metric. In this article we will consider only the polarized case, that is, we take ω and H to be zero. Some of the computations and partial results hold however in the general case; this is why we keep the wave map notation wherever possible, since we intend to extend our final result to the general case in later work. In the polarized case the wave map equation reduces to the wave equation for γ in the metric ^{(3)}g.

3-dimensional Einstein equations. When ^{(4)}R_{3α} = 0 and ^{(4)}R_{33} = 0, the Einstein equations ^{(4)}R_{αβ} = 0 are equivalent to Einstein equations on the 3-manifold Σ × R for the metric ^{(3)}g with source the stress energy tensor of the wave map, where a dot denotes a scalar product in the metric of the hyperbolic 2-space. We continue to use the same notation in the polarized case, that is, we set γ = u (the dot then reduces to an ordinary product). These Einstein equations decompose into:
a. Constraints.
b. Equations for the lapse and shift, to be satisfied on each Σ_t. These equations, as well as the constraints, are of elliptic type.
c. Evolution equations for the Teichmuller parameters, which are ordinary differential equations.

One denotes by k the extrinsic curvature of Σ_t as a submanifold of (Σ × R, ^{(3)}g); then, with ∇ the covariant derivative in the metric g, the momentum and hamiltonian constraints do not contain second derivatives of g or u transversal to Σ_t; they are the constraints. To transform the constraints into an elliptic system one uses the conformal method. We set

g = e^{2λ} σ,    k_{ab} = h_{ab} + (τ/2) g_{ab},

where σ is a riemannian metric on Σ, depending on t, on which we will comment later, and where τ is the g-trace of k, hence h is traceless. We denote by D a covariant derivative in the metric σ. From now on, unless otherwise specified, all operators are in the metric σ, and indices are raised or lowered in this metric. If τ is constant in space, a choice which we will make, the momentum constraint reads as a linear equation for h, independent of λ. The general solution is the sum of a transverse traceless tensor h^{TT} ≡ q and a conformal Lie derivative r; such tensors are L²-orthogonal on (Σ, σ). The hamiltonian constraint reads as a semilinear elliptic equation Δλ = f(λ) in λ.

Equations for lapse and shift. The lapse and shift are gauge parameters for which we obtain elliptic equations on each Σ_t as follows. We impose that the Σ_t's have constant (in space) mean curvature, namely that τ is a given negative increasing function of t.
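To make the polarized reduction explicit, one can write out the wave map system for the target metric stated above; the following Christoffel computation is a worked check of ours, not text recovered from the original. With target metric $2\,d\gamma^2 + \tfrac12 e^{4\gamma} d\omega^2$, the nonzero Christoffel symbols are $\Gamma^\gamma_{\omega\omega} = -\tfrac12 e^{4\gamma}$ and $\Gamma^\omega_{\gamma\omega} = 2$, so the wave map system reads
\[
\Box_{{}^{(3)}g}\,\gamma - \tfrac12 e^{4\gamma}\,\partial^\mu\omega\,\partial_\mu\omega = 0,
\qquad
\Box_{{}^{(3)}g}\,\omega + 4\,\partial^\mu\gamma\,\partial_\mu\omega = 0 .
\]
Setting $\omega \equiv 0$ (the polarized case) is consistent with the second equation and reduces the system to the single wave equation $\Box_{{}^{(3)}g}\gamma = 0$, whose nonlinearity and nonlocality then enter only through the metric ${}^{(3)}g$.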
The lapse N then satisfies a linear elliptic equation on each Σ_t. The equation to be satisfied by the shift ν results from the knowledge of σ_t. Indeed, the definition of k implies that ν satisfies a linear differential equation with an operator L, the conformal Lie derivative, which we first write in the metric g. The kernel of the dual of L is the space of transverse traceless symmetric 2-tensors, i.e. symmetric 2-tensors T such that ∇^a T_{ab} = 0 and g^{ab} T_{ab} = 0. These tensors are usually called TT tensors. The spaces of TT tensors are the same for two conformal metrics.

On a compact 2-dimensional manifold of genus G ≥ 2, the space T_eich of conformally inequivalent riemannian metrics, called Teichmuller space, can be identified (cf. Fischer and Tromba) with M_{−1}/D_0, the quotient of the space of metrics with scalar curvature −1 by the group of diffeomorphisms homotopic to the identity. M_{−1} → T_eich is a trivial fiber bundle whose base can be endowed with the structure of the manifold R^n, with n = 6G − 6. We require the metric σ_t to lie in some chosen cross section Q → ψ(Q) of the above fiber bundle. Let Q^I, I = 1, ..., n, be coordinates in T_eich; then ∂ψ/∂Q^I is a known tangent vector to M_{−1} at ψ(Q), that is, a traceless symmetric 2-tensor field on Σ, sum of a transverse traceless tensor field X_I(Q) and of the Lie derivative of a vector field on the manifold (Σ, ψ(Q)). The tensor fields X_I(Q), I = 1, ..., n, span the space of transverse traceless tensor fields on (Σ, ψ(Q)). The matrix with elements ∫_Σ X_I^{ab} X_{J,ab} μ_{ψ(Q)} is invertible.

Lemma 1. If we require the metric σ_t to lie in the chosen cross section, i.e. σ_t ≡ ψ(Q(t)), the solvability condition for the shift equation determines dQ^I/dt in terms of h_t.

Proof. The time derivative of σ is given by ∂_t σ_{ab} = (dQ^I/dt) X_{I,ab} + C_{ab}, where C is a Lie derivative, L²-orthogonal to TT tensors. The shift equation on Σ_t is solvable if and only if its right hand side f is L²-orthogonal to TT tensors, i.e. to each tensor field X_I. These conditions read ∫_{Σ_t} f^{ab} X_{J,ab} μ_{σ_t} = 0. We have seen that h is the sum of a tensor r which is in the range of the conformal Killing operator, hence L²-orthogonal to TT tensors, and a TT tensor. This last tensor can be written using the basis X_I of such tensors, with coefficients P^I depending only on t: h^{TT}_{ab} = P^I(t) X_{I,ab}. The orthogonality conditions read, using the fact that the transverse tensors X_I are orthogonal to Lie derivatives and are traceless:

∫_{Σ_t} [2N e^{−2λ} (r_{ab} + P^I X_{I,ab}) + (dQ^I/dt) X_{I,ab}] X_J^{ab} μ_σ = 0.

The tangent vector dQ^I/dt to the curve t → Q(t) and the tangent vector P^I(t) to T_eich are therefore linked by this invertible linear system.

We will now construct an ordinary differential system for the evolution of the Q^I and P^I by considering the as yet unsolved 3-dimensional Einstein equations.

Lemma 2. The constraint equations, together with the lapse and the wave map equations, imply that N(^{(3)}R_{ab} − ρ_{ab}), with ρ_{ab} ≡ ∂_a u · ∂_b u, is a transverse traceless tensor on each Σ_t.

Proof. The constraint equations ^{(3)}S_{00} = T_{00} and ^{(3)}S_{0a} = T_{0a}, the lapse equation, and the wave map equation imply, via the Bianchi identity in the 3-metric g and an elementary computation using the connection coefficients of ^{(3)}g and g, that the divergence in the metric g of N(^{(3)}R_{ab} − ρ_{ab}) vanishes and that this tensor is traceless. The tensor N(^{(3)}R_{ab} − ρ_{ab}) is therefore a traceless and transverse tensor on (Σ, g), and hence also on (Σ, σ), by conformal invariance of this property for symmetric 2-covariant tensors.
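The operator L and the orthogonality invoked in Lemma 1 can be written out; the normalization below is the standard one for the 2-dimensional conformal Killing operator and is our assumption, since the original display formulas were lost in extraction:
\[
(LY)_{ab} = D_a Y_b + D_b Y_a - \sigma_{ab}\, D_c Y^c .
\]
For any smooth TT tensor $T$ (i.e. $D^a T_{ab} = 0$, $\sigma^{ab}T_{ab} = 0$), integration by parts on the compact surface gives
\[
\int_{\Sigma} T^{ab} (LY)_{ab}\, \mu_\sigma
= -2 \int_{\Sigma} Y^b\, D^a T_{ab}\, \mu_\sigma = 0 ,
\]
which is the $L^2$-orthogonality of TT tensors and conformal Lie derivatives used repeatedly above.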
We deduce from this lemma that a necessary and sufficient condition for the previous equations to imply ^{(3)}R_{ab} − ρ_{ab} = 0 is that the tensor N(^{(3)}R_{ab} − ρ_{ab}) be L²-orthogonal to transverse traceless tensors on (Σ_t, σ_t), i.e. to each of the TT tensors X_I defined above through the cross section ψ where we choose σ_t, that is

∫_{Σ_t} N(^{(3)}R_{ab} − ρ_{ab}) X_I^{ab} μ_{σ_t} = 0,    I = 1, ..., 6G − 6.

We recall that ∂_0 is an operator on time dependent space tensors (cf. Choquet-Bruhat and York), defined, with L_ν the Lie derivative in the direction of ν, by ∂_0 ≡ ∂_t − L_ν. Using integration by parts and the transverse property of the X_I to eliminate second derivatives, we thus obtain an ordinary differential system of the form dP^I/dt = Φ^I(Q, P, dQ/dt), where Φ is a polynomial of degree 2 in P and dQ/dt, with coefficients depending smoothly on Q and, through the other unknowns, continuously on t.

3. Homogeneous solution.

Theorem 3. A particular solution, obtained by taking for u a constant wave map and for h the zero tensor, is given by a metric with σ a metric on Σ independent of t and of scalar curvature −1, λ and N constant in space, ν = 0, and θ a flat connection 1-form on the bundle.

Proof. The wave map equation is satisfied by any constant map, and such a map has zero stress energy tensor. The momentum constraint is then satisfied by h = 0. The hamiltonian constraint is satisfied by a λ constant in space; the shift equation is then satisfied by ν = 0 and the lapse equation by an N constant in space. A straightforward computation shows that ^{(3)}R_{ab} = 0. All the equations Ricci(^{(4)}g) = 0 are satisfied.

Cauchy problem.

The unknowns which permit the reconstruction of the spacetime metric in the gauge τ ≡ τ(t), given some smooth cross section Q → ψ(Q) of Teichmuller space T_eich, are on the one hand u = γ, satisfying the wave equation in the metric ^{(3)}g, and on the other hand λ, N and ν, which satisfy elliptic equations on each Σ_t, together with a curve Q(t) in T_eich which determines the metric σ_t ≡ ψ(Q(t)) on Σ_t. An intermediate unknown is the traceless tensor h, which splits into a transverse part and a conformal Lie derivative of σ_t in the direction of a vector Y which also satisfies an elliptic system on Σ_t. The transverse part is determined through a field of tangent vectors to T_eich at the points of Q(t).

Definition 5. The Cauchy data on Σ_{t_0}, denoted Σ_0, are:
1. A C^∞ riemannian metric σ_0 which projects onto a point Q(t_0) of T_eich, and a C^∞ tensor q_0 transverse and traceless in the metric σ_0.
2. Cauchy data (u_0, u̇_0) ∈ H_2 × H_1 for u, where H_s is the usual Sobolev space on (Σ, σ_0).

Functional spaces.

Definition 6. Let σ_t be a curve of C^∞ riemannian metrics on Σ, uniformly equivalent to the metric σ_0 for t ∈ [t_0, T] and C¹ in t. Such metrics are called regular for t ∈ [t_0, T].
1. The spaces W^p_s(t) are the usual Sobolev spaces of tensor fields on the riemannian manifold (Σ, σ_t). By the hypothesis on σ_t, the norms in W^p_s(t) are uniformly equivalent for t ∈ [t_0, T] to the norm in W^p_s(t_0). We set W²_s(t) = H_s(t). When working on one slice Σ_t we will often omit reference to the t dependence of the norm.
2. The spaces E^p_s(T) are the Banach spaces of t-dependent tensor fields f on Σ with f(t) ∈ W^p_s(t), continuously in t, for t ∈ [t_0, T].

We will proceed in two steps: Case a, where the data are estimated in the H_s = W²_s scale, and Case b, where they are estimated in the W^p_s scale with p > 1. The multiplication properties we need are proved as follows.

Proof. a. Since in dimension 2 the space H_2 is an algebra, products of H_2 functions remain in H_2; on the other hand, by the multiplication properties of Sobolev spaces and the standard Sobolev embedding theorem, H_1 embeds in L^q for all q < ∞, and so Du·Du lies in the required space. An analogous proof gives the result for the other products; using u completes the proof.
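As a worked check of Theorem 3 (our computation, with the usual sign conventions for the 2+1 constraints assumed): in two dimensions $R(g) = e^{-2\lambda}\bigl(R(\sigma) - 2\Delta_\sigma \lambda\bigr)$, and for the pure-trace extrinsic curvature $k = \tfrac{\tau}{2} g$ one has $|k|_g^2 = \tau^2/2$, so the vacuum hamiltonian constraint $R(g) = |k|_g^2 - \tau^2$ with $\lambda$ constant gives
\[
e^{-2\lambda} R(\sigma) = -\frac{\tau^2}{2},
\qquad R(\sigma) = -1
\ \Longrightarrow\
e^{2\lambda} = \frac{2}{\tau^2},
\]
consistent with the limit $e^{2\omega_0} \to 2/\tau_0^2$ quoted later for small data.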
Resolution of the elliptic equations for given Q(t), P(t) and u.

We have supposed chosen a smooth cross section Q → ψ(Q) of M_{−1} over the Teichmuller space T_eich. We suppose given a C¹ curve t → Q(t) contained, for t ∈ [t_0, T], in a compact subset of T_eich, and a continuous set of tangent vectors P to T_eich at points of this curve. By lift to M_{−1} we are then given a regular metric σ_t for t ∈ [t_0, T], with scalar curvature −1, together with a smooth symmetric 2-tensor h^{TT}_t ≡ q_t, transverse and traceless in the metric σ_t and depending continuously on t.

Determination of h. We have set h_{ab} = q_{ab} + r_{ab}, where q and r are traceless, q is transverse and r is a conformal Lie derivative, i.e. D^a q_{ab} = 0 and q^a{}_a = 0.

Determination of q. The traceless transverse tensor q on (Σ_t, σ_t) is deduced by lifting its given projection onto the tangent space to Teichmuller space at the point Q(t). It is smooth and depends continuously on t ∈ [t_0, T]. Denoting by X_I(Q), I = 1, ..., 6G − 6, a basis of traceless transverse tensor fields on (Σ, ψ(Q)), we have q_t = P^I(t) X_I(Q(t)).

Determination of r. The vector Y satisfies on each Σ_t an elliptic system with zero kernel (in accordance with the fact that (Σ, σ) does not admit conformal Killing fields when R(σ) < 0).

Case a, L ∈ E_2(T). It results from elliptic theory that the system satisfied by Y has, for each t ∈ [t_0, T], one and only one solution in H_4(t), with an elliptic estimate whose constant depends only on σ_t. The constant C_{σ_t} is invariant under diffeomorphisms acting on σ_t, that is, it depends only on its projection on the Teichmuller space of Σ, hence is uniformly bounded under the hypothesis made on σ_t. We denote by M_{σ,T} such a uniform constant; the corresponding estimates hold since the norms W^p_s(t) and W^p_s(t_0) are uniformly equivalent. Differentiation with respect to t of the equation for Y shows that, for a regular σ_t, the time derivatives obey analogous estimates.

Case b. The system for Y has one and only one solution in W^p_3(t) for each t ∈ [t_0, T]; then r_t is in W^p_2(t), with an estimate by a constant C_{σ_t}. One proves also that ∂_t r ∈ W^p_1(t), hence r ∈ E^p_2(T), with an estimate by a constant M_{σ,T}.

Case of initial values. On the initial manifold Σ_{t_0} we have given q_0 ∈ C^∞, and r_0 satisfies the corresponding inequality (we abbreviate by ‖·‖ the L² norm on (Σ, σ_0)); it is small when Du_0 and u̇_0 are small in H_1 norm.

Determination of the conformal factor λ. On each Σ_t the conformal factor λ_t satisfies the semilinear equation Δλ = f(λ), with Δ ≡ Δ_{σ_t} the laplacian in the metric σ_t (we omit the writing of t to simplify the notation), where the coefficients p entering f are determined by the data, with R(σ) = −1.

Case a. We suppose that the coefficients p are given functions in E_2(T). This hypothesis is consistent with Du, u̇ ∈ E_2(T) and h ∈ E_2(T). We know from elliptic theory that the semilinear elliptic equation for λ on (Σ_t, σ_t) admits a solution in H_4(t), which is included in C², if it admits a subsolution λ_− and a supersolution λ_+, i.e. C² functions with λ_− ≤ λ_+ such that Δλ_− ≥ f(λ_−) and Δλ_+ ≤ f(λ_+). We construct sub- and supersolutions as follows. We define the number ω to be the real root of the averaged equation obtained by integrating f over Σ_t, where the P's are the integrals of the p's on Σ_t. By the Gauss-Bonnet theorem the volume of (Σ_t, σ_t) is a constant when R(σ_t) is constant; we have here R(σ_t) = −1. One finds that e^{2ω} exists and is unique. We then define v ∈ H_4 as the solution with mean value zero on Σ_t of the linear equation Δv = f(ω); such a solution exists and is unique because f(ω) has mean value zero on Σ_t. Suitable constant shifts of ω + v then provide respectively a super- and a subsolution of the equation for λ.
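One quantitative fact behind this construction can be recorded explicitly (a standard computation of ours, not recovered text): the Gauss-Bonnet theorem fixes the slice volume, since
\[
\int_\Sigma R(\sigma_t)\,\mu_{\sigma_t} = 4\pi\chi(\Sigma) = 8\pi(1-G)
\quad\Longrightarrow\quad
\operatorname{Vol}(\Sigma,\sigma_t) = 8\pi(G-1)\ \ \text{when } R(\sigma_t) \equiv -1 ,
\]
so the averaged equation defining $\omega$ has a volume factor independent of $t$, and its coefficients vary only through the integrals $P$ of the $p$'s.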
The solution $\lambda \in H_4$ thus obtained for each $t \in [t_0, T]$ is unique, due to the monotonicity of the function $f$. Its $H_4$ norm depends continuously on $t$. Derivation with respect to $t$ of the equation satisfied by $\lambda$ shows that $\partial_t\lambda \in C^0([t_0, T], H_3)$. We have proved:

Theorem 9. The equation for $\lambda$ has one and only one solution $\lambda \in E_4(T)$ under the hypothesis a (where $p_i \in E_2(T)$).

Case b.

Theorem 10. The equation for $\lambda$ has one and only one solution $\lambda \in E^p_3(T)$ under the hypothesis b (where $p_i \in E^p_1(T)$).

Proof. Consider a Cauchy sequence of coefficient functions $p_i^{(n)}$. Applying elementary calculus inequalities to the estimate of $(a-b)^{-1}(e^a - e^b)$ and $(a-b)^{-1}(e^{-a} - e^{-b})$, one obtains a well-posed linear elliptic equation for $\lambda^{(n)} - \lambda^{(m)}$ and an inequality for its norm in $W^p_3$ for each $t \in [t_0, T]$. We thus have shown the convergence of the sequence to a limit $\lambda$ which satisfies the required equation. One can prove similarly that $\lambda \in E^p_3(T)$. The uniqueness of the solution results from the monotonicity of $f(\lambda)$.

Bounds for λ. When $\lambda \in C^2$ one obtains a lower bound by using the maximum principle: at a minimum of $\lambda$ we have $\Delta\lambda \ge 0$, hence a minimum $\lambda_m$ of $\lambda$ satisfies the corresponding inequality. When $\lambda \in E^p_3$ is a solution of the equation it satisfies the same inequality, since $W^p_3 \subset C^0$ and $\lambda$ can be obtained as a limit in $W^p_3$ of functions satisfying this inequality. An analogous argument gives an upper bound, with $e^{2\omega}$ the positive solution of the equation $P_1 e^{4\omega} + P_3 e^{2\omega} - P_2 = 0$.

Case of initial values. The above construction applies in particular on the initial surface $\Sigma_0$. In this case the functions $u_0$ and $\dot u_0$ are considered as given. We see that $e^{2\omega_0}$ tends to $\frac{2}{\tau_0^2}$ and $f(\omega_0)$ tends to zero when $q_0$ tends to zero as well as the $H_1$ norms of $Du_0$ and $\dot u_0$ (then the $L^2$ norm of $h_0$ also tends to zero).

Determination of the lapse N. The lapse $N$ satisfies an elliptic equation on $(\Sigma_t, \sigma_t)$ which, when $u$, $h$ and $\lambda$ are known, is well posed and has one and only one solution, always positive, in $E_4(T)$ in case a, in $E^p_3(T)$ in case b. Indeed $\lambda \in E_4(T)$, hence also $e^{2\lambda} \in E_4(T)$, and $\alpha \in E_2(T)$; the equation then has a solution $N \in E_4(T)$.

Upper bound of N. At a maximum $x_M$ of $N \in C^2$ we have $(\Delta N)(x_M) \le 0$, hence this maximum $N_M$ satisfies the corresponding bound. A reasoning analogous to that given for $\lambda$ shows that this upper bound also holds in case b.

Determination of the shift ν. The definition of $k$ implies that $n \equiv e^{-2\lambda}\nu$ satisfies a linear differential equation involving an operator $L$, the conformal Lie derivative, with injective symbol. The kernel of the dual of $L$ is the space of transverse traceless symmetric 2-tensors in the metric $\sigma_t$; the equation for $\nu$ admits a solution if and only if $f$ is $L^2$-orthogonal to all such tensors, i.e.

$$\int_{\Sigma_t} f_{ab}\,X^{ab}_I\,\mu_{\sigma_t} = 0, \qquad I = 1, \ldots, 6G-6.$$

This integrability condition will not in general be satisfied with an arbitrary choice of $P(t)$, that is, of $q_t \equiv h^{TT}_t$. In this subsection $P(t)$ is not considered as given. We set, in the expression of $f_{ab}$,

$$h_t = P^I(t)\,X_I(Q(t)) + r_t.$$

When $\sigma_t$ is a known $C^1$ function of $t$ the integrability condition determines $P(t)$ as a continuous field of tangent vectors to $T_{eich}$ by an invertible system of ordinary linear equations. When $h$ is so chosen the equation for $n$ has a solution, unique since $L_\sigma$ has a trivial kernel on manifolds with $R(\sigma) = -1$. It results from elliptic theory that $n \in E_4(T)$ in case a, and $n \in E^p_3(T)$ in case b. The same properties hold for $\nu$.

Wave equation, local solution.
The wave equation on $\Sigma \times \mathbb{R}$ in the metric ${}^{(3)}g$ takes the standard form. We suppose that $\sigma_t$ is a given regular riemannian metric for $t \in [t_0, T]$ and that $\lambda$, $N$, $\nu$ are given in $E^p_3(T)$ with $p > 1$ and $N > 0$; then ${}^{(3)}g$ has hyperbolic signature. It is easy to prove along standard lines that the Cauchy problem with data $u_0, (\partial_t u)_0 \in H_2 \times H_1$ has a solution such that $(u, \partial_t u) \in E_2(T) \times E_1(T)$ on $\Sigma \times [t_0, T]$. The initial value $(\partial_t u)_0$ is the product of the datum $\dot u_0$ by $e^{-2\lambda_0}$; it belongs to $H_1$ under the hypothesis made in section 1.1 on the Cauchy data.

Teichmuller parameters. We suppose known $h \in E^p_2(T)$, $\lambda, N, \nu \in E^p_3(T)$, $u \in E_2(T)$, and we suppose given $Q \mapsto \psi(Q)$, a smooth cross section of $M_{-1}$ over $T_{eich}$. The unknown is the curve $t \mapsto Q(t)$. We have $\sigma_t \equiv \psi(Q(t))$ and

$$\partial_t \sigma_{ab} = \frac{dQ^I}{dt}\,X_{I,ab} + C_{ab},$$

with $X_I(Q)$ a basis of the space of TT tensors on $(\Sigma, \psi(Q))$ and $C$ a conformal Lie derivative, $L^2$-orthogonal to TT tensors. The curve $t \mapsto Q(t)$ and the tangent vector $P^I(t)$ to $T_{eich}$ satisfy the ordinary differential system of section 2.3.3. This quasilinear first order system for $P$ and $Q$ has coefficients continuous in $t$ and smooth in $Q$ and $P$. The matrix of the principal terms, $X_{IJ}$, is invertible. There exists therefore a number $T > 0$ such that the system has one and only one solution in $C^1([t_0, T])$ with given initial data $P_0$, $Q_0$.

Local existence theorem. We can now prove the following theorem.

Theorem 11. The Cauchy problem with data $(u_0, \dot u_0) \in H_2 \times H_1$ on $\Sigma_{t_0}$ (denoted $\Sigma_0$), $Q_0$ a point in $T_{eich}$ and $P_0$ a tangent vector to $T_{eich}$, for the Einstein equations with U(1) isometry group (polarized case), has a solution with $\sigma_t$ a regular metric on $\Sigma_t$ for $t \in [t_0, T]$ and $u \in E_2(T)$, $T > t_0$, if $T - t_0$ is small enough. This solution is unique when $\tau$, depending only on $t$, is chosen together with a cross section of $M_{-1}$ over $T_{eich}$.

Proof. The proof is straightforward, using iteration to solve alternately the elliptic systems, the wave equation and the differential system satisfied by the Teichmuller parameters, with $\tau$ a given function of $t$ and $\sigma_t$ required to remain in a chosen cross section of $M_{-1}$ over $T_{eich}$. The iteration converges if $T - t_0$ is small enough. The limit can be shown to be a solution of the Einstein equations with ${}^{(3)}g$ in constant mean curvature gauge by standard arguments; the 2-metric $g$ is conformal, with the factor $e^{2\lambda}$, to a metric in the chosen cross section by construction. This local existence theorem can be extended to the non-polarized case.

5. Scheme for a global existence theorem.

As is well known, we will deduce from our local existence theorem a global one, i.e. on $\Sigma \times [t_0, \infty)$, if we can prove that the curve $Q(t)$ remains in a compact subset of $T_{eich}$ and that neither the $H_2 \times H_1$ norm of $(u(\cdot,t), \dot u(\cdot,t))$ nor the $E^p_3$ norms of $\lambda(\cdot,t)$, $N(\cdot,t)$, $\nu(\cdot,t)$ blow up when $t \in [t_0, \infty)$, while $N$ remains strictly positive. If the spacetime we construct is supported by the manifold $M \times [t_0, \infty)$ it will reach a moment of maximum expansion. It will be reached after an infinite proper time for observers moving along orthogonal trajectories of the hypersurfaces $M_t \equiv M \times \{t\}$ if the lapse function is uniformly bounded below by a strictly positive number. Our proof of this fact will rely on various refined estimates, using in particular corrected energies. The correction of the energies poses special problems in the non-polarized case, which we will treat in another paper.

Notations.
$|\cdot|$ and $|\cdot|_g$: pointwise norms of scalars or tensors on $\Sigma$, in the $\sigma$ or $g$ metric. $\|\cdot\|$ and $\|\cdot\|_p$: $L^2$ and $L^p$ norms in the $\sigma$ metric. $\|\cdot\|_g$: $L^2$ norm in the $g$ metric. A lower case index $m$ or $M$ denotes respectively the lower or upper bound of a scalar function on $\Sigma_t$; it may depend on $t$. When we have to make a choice of the time parameter $t$ we will set $\tau = -1/t$; then $t$ will increase from $t_0 > 0$ to infinity when, $\Sigma_t$ expanding, $\tau(t)$ increases from $\tau_0 < 0$ to zero. With this choice the upper bound on $N$ of subsection 4.3.4 reads $0 < N \le 2$.

Remark 13. Other admissible choices of $t$, for instance $\tau = t$, $t \in [t_0, 0)$, $t_0 = \tau_0 < 0$, would lead to the same geometrical conclusions.

Fundamental inequalities.

Lemma 1. Let $f$ be a scalar function on $\Sigma$; then the inequalities 1, 2, 3a and 3b hold.

Proof: The inequalities 1, 2 and 3a are trivial consequences of elementary identities, and of a corresponding equality for $D^2f$ or, more generally, for covariant 2-tensors. To prove 3b we use the identity obtained by two successive partial integrations and the Ricci formula with $R(\sigma) = -1$. We have $\Delta u = e^{2\lambda}\Delta_g u$, and $\|e^{2\lambda}\Delta_g u\| = \|e^{\lambda}\Delta_g u\|_g$. The given result follows.

Lemma 2. We denote by $C_\sigma$ any positive number depending only on $(\Sigma, \sigma)$.
1. Let $f$ be a scalar function on $\Sigma$. There exists $C_\sigma$ such that the $L^4$ norms of $f$ and $Df$ are estimated in terms of their $H_1$ norms.
2. For any $q$ such that $1 \le q < \infty$ there exists $C_\sigma$ such that the $L^q$ norm of $f$ is bounded by $C_\sigma$ times its $H_1$ norm.

Proof. 1. By the Sobolev inequalities there exists $C_\sigma$ giving the first result when combined with Lemma 1; an analogous argument leads to the second inequality. 2. The Sobolev embedding theorem and the compactness of $\Sigma$.

6.1 Bound of the first energy.

The 2+1 dimensional Einstein equations, with source the stress-energy tensor of the wave map $u$, contain the hamiltonian constraint. Recall the splitting of the covariant 2-tensor $k$ into a trace part and a traceless part, $k_{ab} = h_{ab} + \frac{1}{2}g_{ab}\tau$, hence

$$|k|_g^2 = g^{ac}g^{bd}k_{ab}k_{cd} = |h|_g^2 + \frac{1}{2}\tau^2 \qquad (16)$$

and the hamiltonian constraint equation can be rewritten accordingly. We define the first energy $E(t)$: it is the first energy of the wave map $u$ completed by the $L^2(g)$ norm of $h$ (recall that $|\cdot|_g$ and $\|\cdot\|_g$ denote respectively the pointwise norm and the $L^2$ norm in the metric $g$). We integrate the hamiltonian constraint on $(\Sigma_t, g)$ using the constancy of $\tau$ and the Gauss-Bonnet theorem, which reads, with $\chi$ the Euler characteristic of $\Sigma$,

$$\int_{\Sigma_t} R(g)\,\mu_g = 4\pi\chi.$$

We know from elementary calculus that on a compact manifold the integral of $\Delta_g N$ vanishes. We use the equation

$$N^{-1}\,{}^{(3)}R_{00} = \Delta_g N - N|k|_g^2 + \partial_t\tau = |u'|^2$$

together with the splitting of $k$ to write, after integration (since $\tau$ is constant in space), the needed identity. We use these results to compute the derivative of $E(t)$ and we find that it simplifies. We see that $E(t)$ is a non-increasing function of $t$ if $\tau$ is negative. The absence of the term $\|Du\|_g^2$ on the right-hand side does not permit an estimate of the rate of decay of $E(t)$; we will estimate this decay in a forthcoming section. Note in addition the appearance of $N$ in the right-hand side.

Second energy estimates.

In this paragraph indices are raised with $g$. We denote by $h^{ab}_g$ the contravariant components of $h_{ab}$ computed with the metric $g$.
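The identity (16) can be verified in one line; the following worked step is ours, not spelled out in the source. With $h^a{}_a = 0$ and $g^{ab}g_{ab} = 2$ in dimension 2:

$$|k|_g^2 = g^{ac}g^{bd}\Big(h_{ab}+\tfrac{\tau}{2}g_{ab}\Big)\Big(h_{cd}+\tfrac{\tau}{2}g_{cd}\Big) = |h|_g^2 + \tau\,h^a{}_a + \tfrac{\tau^2}{4}\,g^{ab}g_{ab} = |h|_g^2 + \tfrac{\tau^2}{2}.$$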
We define the energy of the gradient of $u$ by the corresponding formula. For an arbitrary function $f$ one has an identity which, after integration by parts on the compact manifold $\Sigma$, using the expression of $\partial_0$ and replacing $f$ by $J_0 + J_1$, gives a formula in which the shift does not appear explicitly. We define the operator $\bar\partial_0$ on time-dependent space tensors in terms of $\mathcal{L}_\nu$, the Lie derivative in the direction of the shift $\nu$. Using the commutation of the operator $\partial_0$ with the partial derivative $\partial_a$ (cf. C-B-York 1995) together with partial integration, we obtain the corresponding identity. The function $u$ satisfies the wave equation on $(\Sigma \times \mathbb{R}, {}^{(3)}g)$; after another integration by parts, and using the wave equation itself, we see that the terms in third derivatives of $u$ disappear in the derivative of $E^{(1)}(t)$. We have obtained an expression in which the $X$'s and $Y$'s are given by the above formulas. We read from these formulas the following theorem.

Theorem 14. The time derivative of the second energy $E^{(1)}$ satisfies the equality (19), with the quantity $Z$ given explicitly. For $\tau \le 0$ and $0 < N \le 2$, the right-hand side of (19) is less than $Z$, which can be estimated by nonlinear terms in the energies: all the terms which are only quadratic in the derivatives of $u$, i.e. linear in energy densities, have coefficients which contain $N - 2$, $\partial_a N$ or $h^{ab}_g$, or their derivatives. To estimate these terms we need bounds which will be deduced from estimates on the conformal factor and the lapse $N$.

Estimate of $\|h\|$. We have defined the auxiliary unknown $h$ by

$$h_{ab} \equiv k_{ab} - \frac{1}{2}g_{ab}\,\tau.$$

Its $L^2$ norm on $(\Sigma, \sigma)$ is bounded in terms of the first energy and an upper bound $\lambda_M$ of the conformal factor.

Estimate of $\|Dh\|$. The tensor $h$ is the sum of a TT tensor $h^{TT} \equiv q$ and a conformal Lie derivative $r$. It results from elliptic theory that on each $\Sigma_t$ the tensor $r$ satisfies an elliptic estimate, whose right-hand side we bound in terms of the first and second energies of $u$, using the bounds proven in section 4, the lower bound on $\lambda$ and the definitions of $\varepsilon$ and $\varepsilon_1$. It is known (cf. Andersson and Moncrief) that in dimension 2 the equation $D_a q^a{}_b = 0$, with $q^a{}_a = 0$, implies $D^cD_c q_{ab} = R(\sigma)\,q_{ab}$. When $R(\sigma) = -1$ this equation gives, by integration on $\Sigma_t$ of its contracted product with $q^{ab}$, the relation $\|Dq\| = \|q\|$; more generally, any $H_s$ norm of $q$ is a multiple of its $L^2$ norm.

First estimates. Recall that we denote respectively by $\|\cdot\|$ and $\|\cdot\|_p$ the $L^2(\sigma)$ and $L^p(\sigma)$ norms on $\Sigma$, and by $\|\cdot\|_g$ an $L^2(g)$ norm on $\Sigma$. The conformal factor $\lambda$ satisfies the equation above, where the coefficients $p_i$ are functions in $E_0 \cap E^p_1$, $1 < p < 2$, a hypothesis consistent with $Du, \dot u, h \in E_1$. Having chosen $R(\sigma) = -1$, we have seen that a lower bound $\lambda_m$ for $\lambda$ holds, where $v \in E_2 \cap E^p_3$ is the solution with mean value zero on $\Sigma_t$ of the linear equation $\Delta v = f(\omega)$, and $e^{2\omega}$, the positive solution of the equation

$$P_1 e^{4\omega} + P_3 e^{2\omega} - P_2 = 0,$$

is given, since $P_3 < 0$, $P_2 \ge 0$ and $P_1 = \frac{1}{4}\tau^2 V_\sigma$, by

$$e^{2\omega} = \frac{-P_3 + \sqrt{P_3^2 + 4P_1P_2}}{2P_1} \equiv \frac{-P_3\left(1 + \sqrt{1 + 4P_3^{-2}P_1P_2}\right)}{2P_1}.$$

This formula will permit an estimate of $e^{2\omega} - \frac{2}{\tau^2}$, a positive quantity, in terms of the energies.
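As a worked step (ours, not in the source): setting $X = e^{2\omega}$, the defining equation is the quadratic $P_1X^2 + P_3X - P_2 = 0$, whose unique positive root is the formula above. Since $P_2 \ge 0$ and $P_3 < 0$ imply $\sqrt{P_3^2 + 4P_1P_2} \ge -P_3$,

$$X = \frac{-P_3 + \sqrt{P_3^2 + 4P_1P_2}}{2P_1} \;\ge\; \frac{-2P_3}{2P_1} = \frac{-P_3}{P_1},$$

and with $P_1 = \frac{1}{4}\tau^2 V_\sigma$ together with $-P_3 = \frac{1}{2}V_\sigma$ (an identification suggested by the text's bound $e^{2\omega} \ge 2/\tau^2$, stated here as an assumption), this gives the asserted positivity of $e^{2\omega} - 2/\tau^2$.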
Indeed, using the elementary algebra inequality $\sqrt{1+a} \le 1 + \frac{1}{2}a$ for $a \ge 0$, we obtain a bound which, combined with the expressions of $P_2$, $P_3$ and $P_1 = \frac{1}{4}\tau^2 V_\sigma$, and setting $\varepsilon^2 \equiv E(t)$, estimates $e^{2\omega} - \frac{2}{\tau^2}$ in terms of the energies. We will now give estimates for $\lambda$.

Lemma 15. Denote by $\lambda_M$ the maximum of $\lambda$; one has the stated bound. Proof. The result follows from the expressions of $\lambda_-$ and $\lambda_+$.

Corollary 16. The corresponding inequality holds. Proof. Elementary calculus.

We set $\varepsilon_v$ as the relevant smallness measure for $v$. We have shown in the section on local existence that $\varepsilon_{v_0}$ tends to zero with the initial data $q_0$, $Du_0$ and $\dot u_0$.

Hypothesis $H_c$. We say that $v$ satisfies the hypothesis $H_c$ if there exists a number $c > \varepsilon_{v_0}$, independent of $t$, such that $\varepsilon_v \le c$. We suppose also that the initial data are such that $E(t_0) \equiv \varepsilon_0^2$ verifies a smallness inequality (we chose $\frac{1}{2}$ for simplicity of notation). Then, since $E(t)$ is non-increasing and the volume $V_\sigma$ of $(\Sigma, \sigma)$ is constant by the Gauss-Bonnet theorem, the corresponding bound holds for all $t$. Recall that we have denoted by $C_\sigma$ any positive number depending only on $(\Sigma, \sigma)$; we denote by $C$ any positive number depending only on $c$.

Theorem 17. When $\varepsilon_v \le c$ there exist numbers $C$ such that the conformal factor $\lambda$ satisfies estimates 1 and 2. Proof. 1. We find the result, using the estimate of $\omega$, from the hypotheses $H_c$ and $H_0$. 2. Is immediate.

The equation satisfied by $v$ implies a first bound, but the Poincaré inequality applied to the function $v$, which has mean value 0 on $\Sigma$, gives $\|v\| \le \Lambda^{-1/2}\|Dv\|$, where $\Lambda$ is the first (positive) eigenvalue of $-\Delta$ for functions on $\Sigma_t$ with mean value zero. Therefore on each $\Sigma_t$

$$\|Dv\| \le \Lambda^{-1/2}\,\|f(\omega)\|.$$

We use the Ricci identity and $R(\sigma) = -1$ to bound second derivatives; the equation satisfied by $v$ implies then the corresponding estimate. Assembling these various inequalities gives a bound on the $L^\infty$ norm of $v$ on $\Sigma_t$ in terms of the $L^2$ norm of $f(\omega)$, a Sobolev constant $C_\sigma$ and the lowest eigenvalue $\Lambda$ of $-\Delta$.

We now estimate the $L^2$ norm of $f(\omega)$. We split $f(\omega)$ into a constant part and a non-constant part $h_\omega$ by setting $h_\omega \equiv p_2 e^{-2\omega} + \frac{1}{2}|Du|^2$. Since the mean value $\bar f_\omega$ of $f(\omega)$ is zero and the mean value of a constant is equal to itself, $f(\omega)$ coincides with $h_\omega$ minus its mean value. By the isoperimetric inequality there exists a constant $I_\sigma$ bounding $\|f(\omega)\|$ by $\|Dh_\omega\|_1$. We want to bound this right-hand side in terms of the first and second energies of the wave map, using the definition of $h_\omega$.

Lemma 18. 1. The stated estimate holds. 2. It implies, under the hypothesis $H_c$, the stated bound.

Proof. 1. Previous elementary calculus gave

$$\|D^2u\|^2 = \|\Delta u\|^2 + \frac{1}{2}\|Du\|^2,$$

with $\Delta u = e^{2\lambda}\Delta_g u$ and $\|e^{2\lambda}\Delta_g u\| = \|e^{\lambda}\Delta_g u\|_g$, hence we have the inequality

$$\|D^2u\| \le e^{\lambda_M}\,\|\Delta_g u\|_g + \frac{1}{\sqrt{2}}\,\|Du\|_g,$$

which implies the given result 1. 2. Under the hypothesis $H_c$ the result follows from the definitions of $\varepsilon$ and $\varepsilon_1$.

Lemma 19. The stated estimate holds if $\varepsilon_v \le c$ (hypothesis $H_c$).

Proof. We have shown in a previous section that the $L^2$ norms of $h$ and $Dh$ can be estimated through the first and second energies; under the hypothesis $H_c$ the given result follows from the bound of $e^{2(\lambda_M - \omega)}$.

We now estimate the last term in $Dh_\omega$, i.e. $\frac{1}{2}e^{-2\omega}\|D|\dot u|^2\|_1$. We will use the following estimates of $L^4$ norms of $Du$, $u'$ and $h$.

Lemma 20. Under the hypothesis $H_c$ the $L^4$ norms of $Du$, $u'$ and $h$ are estimated by the stated bounds. Proof. Immediate consequence of the inequalities proved in the final section on local existence, and of the definitions.

The following theorem is a straightforward consequence of our lemmas.

Theorem 22. There exist numbers $C$ and $C_\sigma$ such that the $L^\infty$ norm of $v$ is bounded by the inequality below.

Proof.
Recall that there exists a Sobolev constant $C_\sigma$ such that

$$\|v\|_\infty \le C_\sigma\left\{\|D|Du|^2\|_1 + e^{-2\omega}\left(\|D|h|^2\|_1 + \|D|\dot u|^2\|_1\right)\right\}.$$

The three terms in the sum have been evaluated in previous lemmas.

Bound on derivatives. The equation satisfied by $\lambda$ implies, after multiplication by $\lambda - \bar\lambda$ and integration on $\Sigma$, an identity to which the Poincaré inequality applies. The $L^2(\sigma)$ norm of $f(\lambda)$ is bounded explicitly, which gives the following theorem.

Theorem 23. Under the hypothesis $H_c$ the $H_1$ norm of $D\lambda$ satisfies the stated inequality.

9. Estimates in $W^p_s$.

9.1 Estimates for $h$ in $W^p_2$.

The estimates of $h$ in $W^p_2$, with $1 < p < 2$ (for definiteness we will choose $p = \frac{4}{3}$), will be obtained using estimates for the conformal factor $\lambda$ which were themselves obtained using the $H_1$ norm of $h$.

Theorem 24. Under the hypothesis $H_c$ there exist positive numbers $C(c)$ and $C_\sigma$ such that the $W^p_2$ norm of $h$, choosing to be specific $p = \frac{4}{3}$, is bounded by the stated quantity.

Proof. We recall that for any function $f$ on a compact manifold one has, if $p \le 2$,

$$\|f\|_p \le V_\sigma^{\frac{2-p}{2p}}\,\|f\|.$$

We deduce therefore from the $H_s$ estimate of section 6.2 the corresponding bound ($C_0$ is a given number; $V_\sigma = |4\pi\chi|$ is a constant). To estimate $h$ in $W^p_2$ it remains to estimate $r$ in $W^p_2$. It results from elliptic theory that on each $\Sigma_t$ the tensor $r$ satisfies, for each $1 < p < \infty$, an elliptic estimate. Using previous estimates we obtain the stated bound by a straightforward calculation. The result of the theorem follows from the bound of $\varepsilon$ by $\varepsilon + \varepsilon_1$.

Proof of corollary. 1. The Sobolev embedding theorem.

9.2 $W^p_3$ estimates for $N$.

9.2.1 $H_2$ estimates of $N$.

Theorem 26. There exist numbers $C = C(c)$ and $C_\sigma$ such that the $H_2$ norm of $N$ satisfies the stated inequality.

Proof. We write the equation satisfied by $N$ in a suitable form, having chosen the parameter $t$ such that $\partial_t\tau = \tau^2$. The standard elliptic estimate applied to this form of the lapse equation gives a first bound. Since $0 < N \le 2$ and $e^{-2\lambda} \le \frac{1}{2}\tau^2$, the coefficients are controlled. The $L^4$ norms of $h$ and $u'$, as well as $\frac{1}{2}e^{2\lambda_M}\tau^2 - 1$, have been estimated in the section on conformal factor estimates. We deduce from these estimates the bound $\|\beta\| \le CC_\sigma(\varepsilon^2 + \varepsilon\varepsilon_1)$, which gives the result of the theorem. The corollary is a consequence of the Sobolev embedding theorem.

Theorem 28. Under the hypothesis $H_c$ there exist numbers $C$, depending only on $c$, and $C_\sigma$ such that the stated bound holds if $1 < p < 2$, for instance $p = \frac{4}{3}$.

Corollary 29. The gradient of $N$ satisfies the stated inequality.

We apply the standard elliptic estimate, now with $1 < p < 2$, $s = 1$; the Hölder bound above holds for any $p \le 2$, and we have already estimated $\|\beta\|$. We have therefore, with $\frac{1}{q} + \frac{1}{q'} = \frac{1}{p}$ and using the estimates obtained for $\lambda$ under the hypothesis $H_c$, the stated bound. To bound the first line we recall that the $L^p$ norms of $D\lambda$ and $DN$ are bounded by their $L^2$ norms, estimated before. To estimate the second line (except for $A$) we choose $p = \frac{4}{3}$, $q = 4$, $q' = 2$; we find quantities bounded before, and the $L^4$ norms of $D\lambda$ and $DN$, which can be estimated in terms of their $H_1$ norms, bounded before. To bound $A$ we write again, with $p = \frac{4}{3}$, a Hölder inequality; this inequality and the corresponding estimates for $h$ give the stated bound. The $H_1$ bound found above for $DN$ and $D\lambda$ permits obtaining the given result. The corollary is a consequence of the Sobolev embedding theorem and the relation between $\sigma$ and $g$ norms.

10. Corrected energy estimates.

We have obtained in section 6 a bound for the first energy and a decay for the second energy. These bounds prove insufficient to control the behaviour in time of the Teichmuller parameters.
The right-hand side of the first energy inequality is non-positive, as is the quadratic term of the right-hand side of the second energy inequality, but the space derivatives are lacking in those right-hand sides which would make them negative definite. The introduction of corrected energies enables one to obtain such definiteness, compensating some terms by others and leading to better decay estimates.

10.1 Corrected first energy.

10.1.1 Definition and lower bound.

One defines a corrected first energy $E_\alpha$, where $\alpha$ is a constant which we will choose positive, and where we denote by $\bar u$ the mean value of $u$, a scalar function, on $\Sigma_t$. An estimate of $E_\alpha$ will give estimates of the $L^2$ norms of the derivatives of $u$ and of $h$ if there exists a $K > 0$, independent of $t$, such that inequality (27) holds. We estimate the complementary term through the Cauchy-Schwarz inequality

$$\left|\int_{\Sigma_t} (u - \bar u)\,u'\,\mu_g\right| \le \|u - \bar u\|_g\,\|u'\|_g.$$

We will use the Poincaré inequality on the compact manifold $(\Sigma, \sigma)$ to estimate the $L^2(\sigma)$ norm of $u - \bar u$:

$$\|u - \bar u\|_g \le e^{\lambda_M}\,\Lambda_\sigma^{-1/2}\,\|Du\|,$$

where $\|\cdot\|$ denotes the $L^2$ norm on $\Sigma$ in the metric $\sigma$, $\lambda_M$ is an upper bound of the conformal factor $\lambda$ and $\Lambda_\sigma$ is the first positive eigenvalue of the operator $-\Delta \equiv -\Delta_\sigma$ acting on functions with mean value zero. Note that $\|Du\| = \|Du\|_g$. The inequality (27) is implied by two simpler inequalities, to be satisfied by all $x_0, x_1 \ge 0$; the quadratic form in the $x$'s which they involve is always non-negative if $K \ge 1$ and its discriminant is non-positive. A necessary and sufficient condition for the existence of a finite $K \ge 1$ satisfying the required conditions follows. It is known that, given a 2-manifold $\Sigma$ of genus $G > 1$, there is an open subset of Teichmuller space such that for metrics $\sigma \in M_{-1}$ projecting on this open set it holds that $\Lambda_\sigma \ge \frac{1}{8} + \delta$ for some $\delta > 0$ (cf. Lemma 37 below). We now choose $\alpha = \frac{1}{4}$. The condition $a < 1$ then reads, using estimates on the conformal factor, $C(\varepsilon^2 + C_\sigma\,\varepsilon\varepsilon_1) < \delta_\sigma$.

10.1.2 Time derivative of the corrected energy.

We set the correction term $R_\alpha$, in which the terms explicitly containing the shift $\nu$ give an exact divergence which integrates to zero. The function $\gamma \equiv u$ satisfies the wave equation; some elementary computations and integration by parts show the stated identity.

Lemma. If $u$ satisfies the wave equation, the quantity $\int_{\Sigma_t} u'\,\mu_g$ is conserved in time.

Proof. Integration on $(\Sigma_t, g)$ of the wave equation (multiplied by $N$) shows that, on a compact manifold where exact divergences integrate to zero, the time derivative of this quantity vanishes.

To simplify the proofs we will suppose in all that follows that $\bar u' = 0$. Then $R_\alpha$ reduces to a simpler expression, since $\partial_t\bar u$ is constant on $\Sigma_t$.

10.1.3 Decay of the corrected first energy.

In the corrected energy inequality we have seen appear the quantity $d\tau/dt$. To obtain a differential inequality we have to make a choice of $\tau$ as a function of $t$. We wish to work in the expanding direction of our spacetime, where $\tau$, with our sign convention for the extrinsic curvature, starts from a negative value $\tau_0$ and increases, eventually up to the moment of maximum expansion where $\tau = 0$. We have made (section 5, notations) the choice $\tau = -1/t$. We obtain, using the value of $dE/dt$ and $R_\alpha$, an expression for the derivative; we look for a positive number $k$ such that the difference can be estimated with higher order terms.
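The discriminant criterion invoked here is the standard one for binary quadratic forms; we recall it, since the display is lost in the extracted text:

$$q(x_0, x_1) = A\,x_0^2 - 2B\,x_0x_1 + C\,x_1^2 \;\ge\; 0 \ \text{ for all } x_0, x_1 \iff A \ge 0,\ C \ge 0,\ B^2 - AC \le 0,$$

and the text applies it with nonnegative $x_0$, $x_1$ standing for the norms produced by the Cauchy-Schwarz and Poincaré bounds, the condition $B^2 \le AC$ constraining $\alpha$ and $K$.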
We choose $\alpha = \frac{1}{4}$, $k = 1$. We have then an inequality whose right-hand side is the sum of a negative term and of a term which can be considered as nonlinear in the energies, because we have proved the relevant bounds (cf. section 9.2 on $N$ estimates). Therefore we obtain the following theorem (remember that $\tau < 0$).

Theorem 30. The corrected first energy with $\alpha = \frac{1}{4}$ satisfies the stated differential inequality.

10.2 Corrected second energy.

Definition and lower bound. We define a corrected second energy $E^{(1)}_\alpha$ by an analogous formula, with $\alpha$ some constant. This corrected second energy will give bounds on the derivatives of $Du$ and $u'$ if there exists a number $K > 0$ such that the analogue of (27) holds. The hypothesis $\bar u' = 0$ is not necessary here because on a compact manifold $\int_{\Sigma_t} \Delta_g u \cdot u'\,\mu_g = 0$. We obtain the estimate, analogous to the one obtained in the previous section, giving the lower bound. The same $K$ as in the previous section satisfies the required inequality when we choose $\alpha = \frac{1}{4}$.

Time derivative of the corrected second energy. In the next few lines indices are raised with $g$. Partial integration, together with the splitting $k_{ab} = h_{ab} + \frac{1}{2}g_{ab}\tau$ and the equation, gives a first identity; on the other hand, if $u$ satisfies the wave equation, we find a second one. These equalities give, if we make the choice $\tau = -\frac{1}{t}$, hence $\frac{d\tau}{dt} = \tau^2$, an expression containing terms such as $(N+1)\,\tau\,\Delta_g u\,u'$. We have found an equality of the stated form. We see that $Q$ contains also terms only quadratic in the first and second derivatives of $u$, but its integral will be bounded by nonlinear terms in the energies through previous estimates on $DN$ and $h$. We choose $\alpha = \frac{1}{4}$ and split the integral of $P_{1/4}$ into linear and nonlinear terms in the energies, the nonlinear terms $U$ being given explicitly. We are ready to prove the following theorem.

Theorem 31. With the choice $\alpha = \frac{1}{4}$ and $\tau = -\frac{1}{t}$, $t > 0$, the corrected second energy satisfies the stated inequality, where $B$ is a polynomial in $\varepsilon$ and $\varepsilon_1$ with all terms of order at least 3 and coefficients of the form $CC_\sigma$.

Proof. We have shown that $dE^{(1)}_{1/4}/dt$ is the sum of a linear part, of $Z$, of $\frac{1}{4}\int_{\Sigma_t}\tau Q\,\mu_g$ and of $\tau U$. We estimate the various terms of the right-hand side. We obtain, using the bound of $2 - N$ and the definition of $\varepsilon_1$,

$$|\tau U| \le CC_\sigma\,|\tau|^3\,(\varepsilon^2 + \varepsilon\varepsilon_1)(\varepsilon_1^2 + \varepsilon\varepsilon_1).$$

We now estimate $\int\tau Q\,\mu_g$, using its expression and the estimates of section 9; an interpolation inequality holds on a 2-dimensional compact manifold, and the bounds on the $L^4$ norms of $u'$ and $Du$ give, using previous estimates, the required bound. An inequality of the same type holds for $\|Du\|_{L^4(g)}$. The estimate of $|\int\tau Q\,\mu_g|$ by the product of $|\tau|^3$ with higher than 2 powers of the $\varepsilon$'s follows. We now estimate $Z$; previous estimates bound its first terms. To bound $Y_2$ we use the $L^4$ norm of $\nabla^2 N$, estimated in terms of its $W^p_3$ norm in the section on lapse estimates; indeed

$$Y_2 \le |\tau|\,\varepsilon_1\,\|\nabla^2 N\|_{L^4(g)}\,\|Du\|_{L^4(g)}.$$

The term $Y_3$ can be estimated using the Hölder inequality and elementary calculus. The $L^6$ norms can be estimated with $H_1$ norms using the Sobolev inequality applied to $f = u'$ and $f = |Du|$, together with the stated inequality; going back to the energies, and using laplacians and norms in conformal metrics with the previous estimate of $u'$, the bound just computed for $\|D^2N\|_4$ also gives a bound for $\|\Delta N\|_4$. Gathering the results gives the theorem.

11. Decay of the total energy.
We call total energy the sum of the first and second energies, and we define $y(t)$ to be the total corrected energy, the sum of the corrected first and second energies. The inequalities obtained for the corrected energies imply, with $\tau = -t^{-1}$, a differential inequality in which $A$ and $B$ are bounded by polynomials in $\varepsilon$ and $\varepsilon_1$ with terms of degree at least 3.

Lemma 32. Suppose that on $(\Sigma, \sigma)$ there is $\delta_\sigma > 0$ such that the first positive eigenvalue $\Lambda_\sigma$ satisfies $\Lambda_\sigma \ge \frac{1}{8} + \delta_\sigma$; then, if the energies are small enough, $1 - a_\sigma$ is bounded below. The numbers $C$ and $C_\sigma$ are known numbers depending respectively on the number $c$ of the hypothesis $H_c$ and on the metric $\sigma$.

Proof. By the definition of $a \equiv a_\sigma$, using the lower bound of $\lambda$ and Lemma 3 of section 8 ("conformal factor estimates"), it holds that

$$1 - a_\sigma \ge \delta_\sigma - CC_\sigma(\varepsilon^2 + \varepsilon\varepsilon_1),$$

from which the result follows.

Hypothesis $H_\sigma$: 1. The numbers $C_\sigma$ are uniformly bounded by a constant $M$ for all $t \ge t_0$ for which they exist. 2. There exists a constant $\delta > 0$ such that the numbers $\Lambda_\sigma$, the first positive eigenvalues of $-\Delta_{\sigma_t}$ for functions with mean value zero, satisfy $\Lambda_{\sigma_t} \ge \frac{1}{8} + \delta$.

Hypothesis $H_E$: The energies $\varepsilon_t^2$ and $\varepsilon_{1,t}^2$ satisfy, as long as they exist, an inequality of the form $C(c, M)(\varepsilon^2 + \varepsilon\varepsilon_1) \le \frac{\delta}{2}$, where $C$ is a number depending only on the numbers $c$ and $M$.

We will prove the following theorem.

Theorem 33. Under the hypotheses $H_c$, $H_E$ and $H_\sigma$ there exists a number $\eta$ such that, if the total energy is bounded at time $t_0$ by $\eta$, then it satisfies at time $t = -\tau^{-1} \ge t_0 > 0$ an inequality of the form $t\,y(t) \le M_{tot}\,t_0\,y_0$, where $M_{tot}$ depends only on $\delta$.

Proof. Under the hypotheses we have made, the polynomials $A$ and $B$ are bounded by polynomials in $y^{1/2}$ with terms of degree at least 3 and bounded coefficients depending only on $c$, $M$, $\delta$. Take $\eta$ such that $y_0 \equiv y(t_0) < 1$. Then all powers of $y_0$ greater than $3/2$ are less than $y_0$, hence $\frac{dy}{dt}(t_0) < 0$ and $y$ starts decreasing, therefore continues to satisfy $y < 1$. Therefore $A + B \le M_1 y^{3/2}$, and $y$ satisfies a differential inequality with always $y - M_1 y^{3/2} > 0$ and, consequently,

$$\frac{dy}{y(1 - M_1 y^{1/2})} + \frac{dt}{t} \le 0, \qquad \text{equivalently} \qquad \frac{dz}{z(1 - M_1 z)} + \frac{dt}{2t} \le 0, \quad y = z^2,$$

which gives by integration the monotonicity of $z\sqrt{t}/(1 - M_1 z)$. We suppose for instance $z_0 \le \frac{1}{2M_1}$; then $ty \le 4t_0y_0$. Recall that under the $H_c$, $H_E$ and $H_\sigma$ hypotheses these bounds hold as long as the solution exists. The inequality for $y$ implies therefore the announced decay with, as stated, $M_{tot}$ uniformly bounded.

12. Teichmuller parameters.

Let $s$ and $\sigma$ be two given metrics on $\Sigma$ and $\Phi$ a mapping from $\Sigma$ into $\Sigma$. The energy of the mapping $\Phi : (\Sigma, \sigma) \to (\Sigma, s)$ is by definition a positive quantity $E(\sigma, \Phi)$. Consider the metric $s$ as fixed. Elementary calculus shows that the energy $E(\sigma, \Phi)$ is invariant under a diffeomorphism $f$ of $\Sigma$, in the sense that $E(f^*\sigma, \Phi \circ f) = E(\sigma, \Phi)$. In the case where $s$ and $\sigma$ both have negative curvature it has been proved by Eells and Sampson that there exists one and only one harmonic map $\Phi_\sigma : (\Sigma, \sigma) \to (\Sigma, s)$ which is a diffeomorphism homotopic to the identity, i.e. $\Phi_\sigma \in D_0$. Such a harmonic map is equivariant under diffeomorphisms homotopic to the identity, i.e. $\Phi_{f^*\sigma} = \Phi_\sigma \circ f$. One is then led to the definition:

Definition 34. Given a metric $s \in M_{-1}$, the Dirichlet energy $D(\sigma)$ of the metric $\sigma \in M_{-1}$ is the energy of the harmonic map $\Phi_\sigma \in D_0$: $D(\sigma) \equiv E(\sigma, \Phi_\sigma)$. It depends on the choice of the fixed metric $s$, but is invariant under the action of diffeomorphisms included in $D_0$, hence defines a positive functional on the Teichmuller space $T_{eich} \equiv M_{-1}/D_0$.

Remark 35. The energy of the mapping $\Phi : (\Sigma, \sigma) \to (\Sigma, s)$, as well as the harmonic map $\Phi_\sigma$, are also invariant under conformal rescalings of $\sigma$.
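The energy display is lost in the extracted text; the usual harmonic map energy, stated here as an assumption about the intended formula, is

$$E(\sigma, \Phi) = \frac{1}{2}\int_\Sigma \sigma^{ab}\,\partial_a\Phi^i\,\partial_b\Phi^j\,s_{ij}(\Phi)\,\mu_\sigma,$$

and in dimension 2 the density $\sigma^{ab}\mu_\sigma$ is unchanged under $\sigma \to e^{2\lambda}\sigma$ (the volume element scales by $e^{2\lambda}$ while the inverse metric scales by $e^{-2\lambda}$), which is exactly the conformal invariance asserted in Remark 35.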
They can be used on the space of riemannian metrics of negative curvature before the rescaling which restricts them to metrics of curvature $-1$. The importance of the Dirichlet energy rests on the following theorem, which says that if $D(\sigma)$ remains in a bounded set of $\mathbb{R}$ then the equivalence class of $\sigma$ remains in a bounded set of $T_{eich}$.

Theorem 36 (Eells and Sampson). The Dirichlet energy is a proper function on Teichmuller space.

Estimate of the Dirichlet energy. We will require of the metric $\sigma_t$ that it remain, when $t$ varies, in some cross section of $M_{-1}$ (the space of $C^\infty$ metrics with scalar curvature $-1$) over the Teichmuller space, diffeomorphic to $\mathbb{R}^{6G-6}$, $G$ the genus of $\Sigma$.

Remark. Following Andersson and Moncrief one can choose the cross section as follows, having given some metric $s \in M_{-1}$. To an arbitrary metric $\zeta \in M_{-1}$ we associate another such metric by its pull-back through $\Phi_\zeta^{-1}$:

$$\psi(\zeta) = (\Phi_\zeta^{-1})^*\,\zeta.$$

For any $f \in D_0$ we have

$$\psi(f^*\zeta) = (\Phi_{f^*\zeta}^{-1})^*\,f^*\zeta = \psi(\zeta),$$

hence the metric $\psi$ depends only on the equivalence class $Q$ of $\zeta$ through $D_0$. Thus one gets a cross section of $M_{-1}$ over Teichmuller space, $Q \in T_{eich} \to \psi(Q) \in M_{-1}$. If $Q$ remains in a bounded set of $T_{eich}$ then $\psi(Q)$ remains in a bounded set of $M_{-1}$, i.e. all these metrics are uniformly equivalent.

We will estimate the Dirichlet energy $D(\sigma) \equiv E(\sigma, \Phi_\sigma)$, with $g_{ab} = e^{2\lambda}\sigma_{ab}$. If $\Phi_\sigma$ is a harmonic map from $(\Sigma, \sigma)$ into $(\Sigma, s)$ it is an extremal of the mapping $\Phi \mapsto E(\sigma, \Phi)$ and also an extremal of the mapping $\Phi \mapsto E(g, \Phi)$. We compute the time derivative of $D(\sigma_t)$, which gives, using previous notations and the vanishing of the integral of a divergence on a compact manifold, an explicit expression. Using $0 < N \le 2$ and $e^{-2\lambda} \le \frac{\tau^2}{2}$ we find a bound; the bound of $\|h\|_\infty$ found in the section on $h$ estimates then gives the stated estimate.

We recall the following lemmas.

Lemma 37. There exists an open subset $\Omega$ of $T_{eich}$ such that, if the equivalence class of $\sigma$ is in $\Omega$ and $\sigma$ is in a smooth cross section of $T_{eich}$, then there exists a number $\delta > 0$ such that $\Lambda(\sigma) \ge \frac{1}{8} + \delta$ and all constants $C_\sigma$ are bounded by a fixed number $M$.

We have shown that the hypotheses $X_t \le c$, $x_t \le c_E$, $Z_t \le d$ and smallness conditions on $x_0$ imply the existence of numbers $A_i$, depending only on $c$, $c_E$ and $d$, such that the corresponding bounds hold. Therefore there exists $\eta > 0$ such that $x_0 \le \eta$ implies that the triple belongs to the subset $U_1 \subset \mathbb{R}^3$ defined by the stated inequalities. For such an $\eta$ the triple either belongs to $U_1$ or to the subset $U_2$ defined by the complementary inequalities; these subsets are disjoint. We have supposed that for $t = t_0$ it holds that $(X_0, x_0, Z_0) \in U_1$, hence, by continuity in $t$, $(X_t, x_t, Z_t) \in U_1$ for all $t$. We have proved the required a priori bounds. The orthogonal trajectories to the space sections $M \times \{t\}$ have an infinite proper length, since the lapse $N$ is bounded below by a strictly positive number.

1. Supported in part by NSF contract no. PHY-9732629 to Yale University.

Acknowledgements. We thank L. Andersson for suggesting the use of corrected energies. We thank the University Paris VI, the ITP in Santa Barbara, the University of the Aegean in Samos and the IHES in Bures for their hospitality during our collaboration.
Field Plant Monitoring from Macro to Micro Scale: Feasibility and Validation of Combined Field Monitoring Approaches from Remote to in Vivo to Cope with Drought Stress in Tomato

Monitoring plant growth and development during cultivation to optimize resource use efficiency is crucial to achieve an increased sustainability of agriculture systems and ensure food security. In this study, we compared field monitoring approaches from the macro to micro scale with the aim of developing novel in vivo tools for field phenotyping and advancing the efficiency of drought stress detection at the field level. To this end, we tested different methodologies in the monitoring of tomato growth under different water regimes: (i) micro-scale (inserted in the plant stem) real-time monitoring with an organic electrochemical transistor (OECT)-based sensor, namely a bioristor, that enables continuous monitoring of the plant; (ii) medium-scale (<1 m from the canopy) monitoring through red–green–blue (RGB) low-cost imaging; (iii) macro-scale multispectral and thermal monitoring using an unmanned aerial vehicle (UAV). High correlations between aerial and proximal remote sensing were found with chlorophyll-related indices, although at specific time points (NDVI and NDRE with GGA and SPAD). The ion concentration and allocation monitored by the index R of the bioristor during the drought defense response were highly correlated with the water use indices (Crop Water Stress Index (CWSI), relative water content (RWC), vapor pressure deficit (VPD)). A high negative correlation was observed with the CWSI and, in turn, with the RWC. Although proximal remote sensing measurements correlated well with water stress indices, vegetation indices provide information about the crop's status at a specific moment. Meanwhile, the bioristor continuously monitors the ion movements and the correlated water use during plant growth and development, making this tool a promising device for field monitoring.

Introduction

Sustainable agriculture practices call for novel techniques to monitor plant growth, development and health with the aim of increasing crop yields to meet the demands of a rapidly growing population [1][2][3][4][5]. Several approaches can be applied to improve yields, reduce environmental threats and optimize input efficiency [6]. Recent approaches based on nanotechnology may improve in vivo nutrient delivery and ensure the precise distribution of nutrients: nanoengineered particles may improve crop growth and productivity, increasing fertilizer use efficiency [7]. The application of nanofertilizers (NFs) [8,9], nanoparticles [10,11] and organic compounds has shown promising results [12]. In this panorama, plant monitoring becomes central to sustainable agriculture. Technology and innovation can significantly improve the ability to monitor plant health during cultivation, thus fine-tuning farm management and improving agriculture sustainability [1,13].

Advances in precision agriculture (PA) technology can (i) significantly increase productivity, (ii) ensure high food quality and further decrease costs, and (iii) preserve crucial environmental resources [14,15]. Water savings, precision irrigation based on plant needs and the saving of natural resources are the main focus of PA [16] and are mandatory in view of the ongoing water crisis [17].
Several reviews describe platforms available for field monitoring and plant phenotyping on various observation scales [18][19][20][21][22][23][24][25][26]. Proximal and remote sensing (PRS) techniques are increasingly used for plant phenotyping because of their advantages in multidimensional data acquisition and analysis [27]. At the macro scale (aerial level), the rapid development of sensors and unmanned aerial vehicles (UAVs), imaging and data analysis algorithms, and improved computing capacities have enabled a broad range of possibilities for aerial precision farming, for example the measurement of vegetation indices [28][29][30][31][32][33][34]. Overall, images have been demonstrated to be a good proxy for the characterization of quantitative plant traits [35][36][37][38]. At the medium scale, proximal RGB images can also be acquired through low-cost imaging methods to derive color indices to be used in crop management [39][40][41]. For crops such as wheat and maize, RGB images have shown similar or even better performance in comparison to a multispectral index like the Normalized Difference Vegetation Index (NDVI) in applications like predicting grain yield under different growing conditions, including water status [42][43][44] and availability of nitrogen [23,45-47] and phosphorous [41,48]. In the sensor panorama, a central role is also played by soil monitoring sensors, soil being crucial for plant development and yield and for improving the optimization of water resources [34,49].

Real-time sensing is now required not only to index plant health at single time points but also to trace plant health and growth dynamics [16,50-54]. To this end, a novel smart organic electrochemical transistor (OECT)-based sensor named a bioristor has been developed and applied in plant stems for the continuous, precise and real-time monitoring of the changes occurring in the plant sap composition, during growth and development and upon drought stress and environmental changes, in controlled conditions [13,50,55,56]. Its application allowed for the early warning of drought stress [13] and for dynamically tracing the saline stress response in giant cane [55]. The possibility of monitoring the plant's health status and the early phases of drought stress directly from the stem can consistently improve water use efficiency in agriculture and increase crop production sustainability. High reproducibility and stability in measurements between tomato plant replicates have been reported [13,57]; moreover, the bioristor's scalability for use in diverse crop species has been reported [55,58].

Tomato (Solanum lycopersicum L.) is one of the most cultivated vegetables in the world, with about 189 million tons produced in 2021 according to the Food and Agriculture Organization of the United Nations (FAOSTAT, 2023; http://www.fao.org/faostat/en/#data/QC; accessed on 6 July 2023) and with a total addressable market (TAM) valued at USD 181.74 billion in 2022 [59]. During tomato field growth, several abiotic and biotic stresses occur and strongly affect final yield and quality [60,61].

In tomato, drought stress significantly affects yield [62]. The tomato plant is sensitive to lack of water during reproduction, especially during flowering and fruit growth [63]. Novel approaches are needed to reach the goal of a more sustainable agriculture that has lower water requirements and promotes resistance to biotic and abiotic stress [63].
So far, image-based remote and proximal sensing platforms have been individually applied to monitor the drought stress response of tomato. Examples are UAVs equipped with multispectral, hyperspectral [64] and thermal [65] sensors. Only recently, an innovative in vivo sensor named a bioristor was also used to monitor tomatoes in open fields [57] to improve irrigation efficiency.

The objectives of the present study were (a) to analyze the strength of a multiscale approach for PA, (b) to determine the relationships between vegetation indices and drought stress and (c) to demonstrate the effectiveness of the bioristor in monitoring the water needs of tomato plants in an open field at the micro-scale level. To this end, the performance of multispectral and thermal sensors mounted on a UAV will be compared with that of a low-cost RGB sensor and with that of a bioristor. The results are discussed in terms of the efficiency of the multiscale approach and the correlation between the acquired indices and the physiological or environmental traits investigated.

The Macro Scale: UAV Multispectral Remote Imaging

Acquisition of Multispectral and Thermal Vegetation Indices

CWSI values increased from 56 days after transplant (DAT) up to 82 DAT for 40% PAW, while a decrease in CWSI values was observed at 82 DAT for the 80% and 100% PAW treatments (Figure 1a). Regarding the multispectral VIs, GNDVI and NDRE values decreased from 56 DAT to 82 DAT (Figure 1b,c), while the NDVI value reached its maximum at 62 DAT (Figure 1d). The largest differences between the 40% PAW irrigation treatment and the 80% and 100% PAW treatments were observed at 82 DAT for the GNDVI, NDRE, NDVI and CWSI indices (Figure 1).
Medium Scale: RGB Imaging (Proximal)

During the field trial, plots corresponding to 100% and 80% PAW did not show significant differences in the GA index (Figure 2b). On the contrary, 40% PAW showed a significant difference in GA for the entire set of measures (for 17 days). During fruit-set development (from day 32), 40% PAW showed a 4% GA reduction compared to 100% PAW (p ≤ 0.001), reaching the minimum canopy at the ripening stage (day 62; 19% GA reduction compared with 100% PAW, p ≤ 0.001, Figure 2b).

GGA showed a rapid decrease in the 100% PAW plots (Figure 2c, Supplementary Figure S2c) because of the smaller fraction of green pixels captured in the canopy images and the rapid increase in red pixels in the 100% PAW plots during fruit set and ripening.

The CSI was also calculated based on GA and GGA. It supported the hypothesis of a faster ripening behavior of the 100% PAW plots and of a strong reduction of plant development and rapid senescence in the 40% PAW plots (Figure 2a).

Bioristor, the Micro-Scale Approach for In Vivo Plant (Ground) Monitoring

A bioristor was used to detect the changes occurring in the plant physiology under water shortage at the micro-scale level. During the experiments, tomato growth from transplant to harvest was monitored in real time and continuously for 60 days, giving a complex but informative picture of the changes occurring in the plant during growth and development under natural cultivation conditions (Figure 3).

An increase in R was observed during rainy events, proportional to the intensity of the rain (Figure 4), but also during the irrigation sessions. The R trend showed no significant changes in the overall plant health status, indicated by the changes in the slope of R over time up to 60 DAT (Figure 3). Differential irrigation was applied at 43 DAT, but the occurrence of several rainy events hampered a real differentiation of the overall water available in the soil as an effect of the different irrigation. From 65 to 90 DAT, the R trend showed an appreciable and significant difference among the treatments (p ≤ 0.001): the R of the 40% PAW plots dropped rapidly from 65 DAT to the end of the experiment, while the 100% and 80% PAW treatments were well separated according to the given amount of water only from 72 DAT (Figure 3).

The relative water content (RWC) and the SPAD values were evaluated and reported for all treatments to acquire a direct measurement of the plant status (Figure 5). A significant difference, mainly between the 100% and 40% treatments, was observed for RWC and SPAD on day 56 (Figure 5). Interestingly, the RWC data and the bioristor data are not in agreement in the initial days of phase 3 (Figures 4 and 6). The physiological data confirm the similarity of the health status at 100% and 80% PAW and highlight that the 40% PAW treatment is the most affected by drought stress. SPAD values do not show any significant difference over the entire length of the experiment (Figure 5a).

The total yield was 74.9 t ha−1, 94.3 t ha−1 and 100.2 t ha−1 for the three irrigation treatments (40%, 80% and 100% PAW, respectively; Table 1). Only the 40% PAW treatment showed a significant reduction in total as well as marketable production, due to a significant increase in the yield of rotten fruits caused by the severe drought stress (Table 1).

Based on the data collected, a correlation analysis was performed to verify the relationship between the RGB, multispectral, thermal, bioristor and physiological indices (Figure 5, Supplementary Figures S2-S4). RWC was highly correlated with water-related indices like CWSI and R, negatively and positively, respectively, and correlated, to a lower degree, with CSI (positively) and GA and GGA (negatively). The SPAD index showed a high correlation with the RGB index CSI (negatively), GGA, GNDVI and NDRE, was moderately correlated with NDVI, but had an extremely low correlation with R and CWSI. NDVI showed a good correlation with GGA, GNDVI and NDRE and a high negative correlation with CSI; a medium correlation was observed with the SPAD index. Similarly, NDRE was negatively correlated with GGA, GNDVI and SPAD and was highly anticorrelated with CSI. The CWSI showed a high negative correlation with R and RWC. CSI showed anticorrelation with almost all the analyzed indices: SPAD, NDVI, NDRE, GNDVI and GA. GGA showed a high correlation with SPAD, GA, NDVI and NDRE. The main finding resides in the analysis of the R correlation with the known indices: it showed a high correlation only with CWSI (r = −0.82) and a medium correlation with RWC (r = 0.51).

The Approach

To verify the multiscale approach applied in this research, a range of sensors covering the macro, medium and micro scales was applied, as summarized in Figure 6. A UAV was applied as a remote platform, RGB low-cost imaging as a proximal medium-scale sensor and a bioristor for in vivo micro-scale monitoring.
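The correlation analysis reported above pairs index values across plots and sampling dates. A minimal sketch of how such a Pearson correlation matrix can be computed, assuming a table with one row per plot and date and one column per index (the file and column names below are invented for illustration, not from the paper):

```python
import pandas as pd

# Hypothetical table: one row per (plot, date), one column per index.
df = pd.read_csv("indices_by_plot_date.csv")

cols = ["RWC", "SPAD", "NDVI", "NDRE", "GNDVI", "GA", "GGA", "CSI", "CWSI", "R"]
corr = df[cols].corr(method="pearson")  # pairwise Pearson r between indices

# The pairs highlighted in the text, e.g. r(R, CWSI) and r(R, RWC):
print(corr.loc["R", ["CWSI", "RWC"]])
```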
Field Trial Description and Stress Conditions

A field trial was carried out in 2019 at Podere Stuard, in Parma, Italy (60 m a.s.l., 44°48′29.88″ N, 10°16′29.074″ E). The tomato cv. Heinz was chosen for the field trial. A randomized block approach was adopted using three plots, divided into three rows, for each water treatment. The middle row of one plot for each water regime was monitored with a bioristor by measuring 5 plants. The irrigation treatments, expressed as plant available water (PAW), were established based on the irrigation advice defined by Irriframe (https://www.irriframe.it/Irriframe, accessed on 8 September 2023) as 100%, 80% and 40% PAW. The full list of field management and main operations, including watering, fertilization and soil tillage, is reported in Supplementary Table S1.

Environmental Conditions: Soil Humidity Sensors and Meteorological Data

Data on rainfall volume (mm) and relative humidity (RH%) at 2 m above the ground were collected by the agrometeorological station of the ARPAE network (https://simc.arpae.it/dext3r/, accessed on 20 October 2023; Supplementary Figure S1).

Bioristor Preparation and Implementation

The bioristor was prepared according to Janni et al., 2019. In brief, two textile fibers were treated by soaking them for 5 min in aqueous poly(3,4-ethylenedioxythiophene) doped with polystyrene sulfonate (Clevios PH1000, Starck GmbH, Munich, Germany), to which dodecyl benzene sulfonic acid (2% v/v) was added. The fibers were then baked at 130 °C for 30 min. The whole process, from deposition to heat treatment, was repeated 3 times to complete the preparation. Then, a treatment with concentrated sulfuric acid (95%) was performed for 20 min to increase the crystallinity of the polymer and, therefore, its electrical properties, as well as its durability over time. Before functionalization, each thread was cleaned by an oxygen plasma cleaner treatment (Femto, Diener electronic, Ebhausen, Germany) to increase its wettability and to facilitate the adhesion of the aqueous conductive polymer solution.

One bioristor was inserted in the plant stem of 5 plants at the 5-leaf stage by opening a hole using a needle; five replicas for each water regime were analyzed. The treated fiber was completely inserted into the plant stem and connected at each end to a metal wire with silver paste to stabilize the connections, forming the "source" and "drain" electrodes. The transistor device was completed by inserting a second fiber, functionalized in the same way, as the gate electrode (Figure 7A,B). A constant voltage (Vds = −0.1 V) was applied across the main transistor channel, along with a positive voltage at the gate (Vg = 0.5 V), which led to a decrease in channel conductivity due to the cations pushed from the electrolyte into the channel; the resulting current (Ids) was monitored continuously (Figure 2b). The sensor response (R) was acquired and reported in both experiments. It is proportional to the cations present in the electrolyte and is given by the expression |Ids − Ids0|/Ids0, where Ids0 represents the current across the channel when Vg = 0 V.
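The response defined above is straightforward to compute from logged current traces. A minimal sketch follows; the array contents are made up for illustration, and the paper's own acquisition software is custom and not public:

```python
import numpy as np

def bioristor_response(ids, ids0):
    """Sensor response R = |Ids - Ids0| / Ids0, computed sample by sample.

    ids  : channel current trace recorded with the gate biased (Vg = 0.5 V)
    ids0 : reference channel current recorded with Vg = 0 V
    R is dimensionless and tracks the cation content of the plant sap.
    """
    ids = np.asarray(ids, dtype=float)
    ids0 = np.asarray(ids0, dtype=float)
    return np.abs(ids - ids0) / ids0

# Example with made-up current magnitudes (arbitrary units):
r = bioristor_response(ids=[0.95, 0.90, 0.80], ids0=[1.0, 1.0, 1.0])
```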
The bioristor elements were connected to a NI USB-6343 multifunction I/O device (National Instruments, Austin, TX, USA), which is a multi-channel digital-analog converter, connected to a PC where current data were processed using home-made software and then saved in the cloud.

RGB-Based Imaging

Vegetation indices derived from RGB images were evaluated for each plot at ground level as reported by Gracia-Romero et al., 2019 [28], with slight modifications. One picture per plot was taken while a cell phone was held at 80 cm above the plant canopy. To facilitate the procedure, the camera was attached to a monopod to adjust and stabilize the distance between the camera and the top of the canopy. Images were saved in JPEG format at a resolution of 4608 × 3072 pixels. Two plots for each water regime were investigated using RGB imaging.
To calculate the vegetation indices, the RGB images were processed with MosaicTool (https://www.gitlab.com/sckefauver/MosaicTool, University of Barcelona, Barcelona, Spain), integrated as a plugin for FIJI (Fiji Is Just ImageJ; https://www.fiji.sc/Fiji/) [41], which enables the extraction of RGB indices in relation to different color properties of potential interest [44]. Derived from the hue-intensity-saturation (HIS) color space, average values from all the pixels of the image were determined for hue, referring to the color tint; saturation, an indication of how much the pure color is diluted with white color; and intensity, as an achromatic measurement of the reflected light. In addition, the portion of pixels with hue classified as green was determined with the Green Area (GA) and Greener Area (GGA) indices. GA is the percentage of pixels in the image with a hue range from 60° to 180°, including yellow to bluish-green color values. GGA is more restrictive, because it reduces the range from 80° to 180°, thus excluding the yellowish-green tones. Both indices are also used for the formulation of the Crop Senescence Index (CSI) [66], which provides a scaled ratio between yellow and green pixels to assess the percentage of senescent vegetation. From the CIELab and the CIELuv color space models (recommended by the International Commission on Illumination (CIE) for improved color chromaticity compared to the HIS color space), dimension L* represents lightness and is very similar to intensity from the HIS color space, whereas a* and u* represent the red-green spectrum of chromaticity, and b* and v* represent the yellow-blue color spectrum.

UAV Multispectral and Thermal Image-Based Indices

Unmanned aerial vehicle (UAV) multispectral and thermal images were collected using a DJI Matrice 210 RTK Quadcopter (SZ DJI Technology Co., Shenzhen, Guangdong, China) for three field campaigns (Supplementary Table S1) performed at 56 days after transplant (DAT), 62 DAT and 82 DAT on the entire field. The UAV was equipped with a DJI FLIR Zenmuse XT2 high-resolution radiometric thermal camera and a MicaSense RedEdge-MX multispectral camera (MicaSense, Seattle, WA, USA), which acquired five multispectral images [35]. Flights were performed in clear sky conditions, and the flight altitude was 30 m above ground level (AGL). The forward and lateral overlaps were set at 80% and 75% of the images, respectively. A light sensor mounted at the top of the UAV and a reflectance panel provided by MicaSense were used for the radiometric calibration of the multispectral images. The radiometric calibration and orthomosaic generation (both for multispectral and thermal images) were performed using the Pix4D mapper (Pix4D, S.A., Lausanne, Switzerland). Vegetation indices (VIs) such as the Green Normalized Difference Vegetation Index (GNDVI, [67]), Normalized Difference Red Edge Index (NDRE, [68]) and Normalized Difference Vegetation Index (NDVI, [69]) were calculated using the following equations:

GNDVI = (Rnir − Rgreen)/(Rnir + Rgreen)
NDRE = (Rnir − Rrededge)/(Rnir + Rrededge)
NDVI = (Rnir − Rred)/(Rnir + Rred)

where Rgreen, Rred, Rrededge and Rnir are reflectance values of vegetation in the green, red, red edge and near-infrared bands extracted from the multispectral orthomosaics.
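These band-ratio computations can be sketched as below on reflectance rasters; the arrays are hypothetical placeholders, and a small epsilon guards against division by zero.

```python
import numpy as np

def normalized_difference(b1, b2, eps=1e-9):
    """Generic normalized-difference index (b1 - b2) / (b1 + b2)."""
    return (b1 - b2) / (b1 + b2 + eps)

# Hypothetical reflectance rasters (float arrays in [0, 1]) per band
r_green, r_red, r_rededge, r_nir = np.random.rand(4, 100, 100)

ndvi = normalized_difference(r_nir, r_red)       # NDVI  [69]
gndvi = normalized_difference(r_nir, r_green)    # GNDVI [67]
ndre = normalized_difference(r_nir, r_rededge)   # NDRE  [68]

# Plot-level values are means over vegetation pixels (here: all pixels)
print(ndvi.mean(), gndvi.mean(), ndre.mean())
```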
The Crop Water Stress Index (CWSI) was calculated according to the methodology proposed by Idso et al. (1982) [70], using the following equations:

CWSI = [(Tc − Ta) − (Tc − Ta)LL] / [(Tc − Ta)UL − (Tc − Ta)LL]
(Tc − Ta)LL = a + b × VPD
(Tc − Ta)UL = a + b × VPG

where (Tc − Ta) is the difference between Tc, the canopy temperature extracted from the thermal orthomosaics, and Ta, the air temperature; (Tc − Ta)LL and (Tc − Ta)UL are the lower and upper limits of the canopy temperature difference, calculated using the vapor pressure deficit VPD (kPa) and the vapor pressure gradient VPG (kPa), the latter being the difference between the air-saturated water vapor pressure at temperature Ta and the air-saturated water vapor pressure at temperature Ta + a; a is the intercept and b is the slope of the linear regression.

The averages of the VIs and CWSI for each experimental plot were extracted from pure vegetation pixels, which were classified by applying a k-means clustering algorithm on the multispectral and thermal orthomosaics to segment vegetation from the soil. A full list of the variables considered is presented in Supplementary Table S2.

Physiological Measurements: Water Status and Fluorescence

Five plants for each treatment were analyzed for the leaves' relative water content (RWC) as reported by Janni et al., 2019 [13]. Chlorophyll content measurements were performed by using a SPAD-502 meter (Konica Minolta, Ramsey, NJ, USA) on three expanded leaves; the relative SPAD value was recorded.

Yield Assessment

Yield components were recorded at the end of the experiment for all plants for each water regime: total production (t ha−1), commercial yield (t ha−1), unripe product (t ha−1) and rotten product (t ha−1).

Data Analysis and Statistics

The R value was analyzed with MATLAB (https://uk.mathworks.com/) and Microsoft Excel 2016 to smooth day/night oscillations and scaled through a min-max normalization (0-1 range). Bioristor data were statistically analyzed by applying analysis of variance (ANOVA) in MatLab 2014a (8.3.0.532). Mean, standard deviation and standard error were calculated.

Discussion

Plant stress detection is considered one of the most critical areas for the improvement of crop yields in the compelling worldwide scenario of ongoing climate change [25,71]. Agricultural equipment has become more efficient, reliable, and precise thanks to automation and the increased use of robotics and sensors for plant monitoring. The multiscale approach for plant monitoring presented in this work can significantly improve the detection of water stress at the field level [72-75]. Remote sensing methods and image spectral analysis are applied in precision agriculture (PA); they can analyze soil state and vegetation health from a distance and are image-based [76]. Moreover, RGB or color cameras are the most basic vision-based sensors. Color data may be used to determine parameters such as texture and geometrical characteristics, which are important in agricultural applications [76]. Lastly, proximal sensors can measure soil qualities directly or indirectly and are close to, or even in contact with, the ground. The use of such advanced sensors and tools can provide farmers with valuable insights into crop growth and yield [77]. However, a complete lack of sensors enabling the dynamic and continuous monitoring of plant water stress was observed [29,47]. In this study, an in vivo biosensor named a bioristor was coupled with remote and proximal sensing techniques as a tool for precision agriculture, and it was demonstrated that the methodologies presented were capable of monitoring tomato plants' response to water conditions.
Our results showed that the highest correlations observed were between photosynthesis- and chlorophyll-related traits (SPAD and related indices with the RGB indices GGA and CSI) and between multispectral vegetation indices (NDVI, NDRE) and chlorophyll-related traits (SPAD), as previously reported [47,78]. The high correlation observed between chlorophyll-related indices such as GNDVI, NDRE and SPAD is in line with previously reported data [79,80].

In disagreement with reported data for grapes [81,82], NDVI and GNDVI do not correlate with transpiration-related traits such as RWC and CWSI [70,83]. Also, the bioristor R index does not show a correlation with NDVI. Moreover, the NDRE, NDVI, GGA and CSI trends with the time of measurement confirm their ability to trace the course of the first slow and then fast maturation of tomato plants and the possible use of these indices as integrative measurements of the overall amount and quality of photosynthetic material in plants, or of the combined effects of leaf chlorophyll content, canopy leaf area or architecture [78]. R, on the contrary, showed a specific and high correlation with the water-related indices that specifically trace the effects of drought stress on plants (RWC, CWSI). No correlation was observed with CSI and NDVI, confirming its high specificity in monitoring changes in the values of ions flowing in the transpiration stream and thus in monitoring the plant water status.

Of particular interest is the high negative correlation observed between R and CWSI (r = −0.82), a measure of the relative transpiration rate occurring in the plant and described as more accurate in determining the soil and plant water status [84]. These data support the negative correlation between R and VPD as reported by Vurro et al., 2019 [56], further demonstrating the bioristor's ability to detect physiological changes caused by transpiration. Under low VPD and high CWSI conditions, a high R was observed, indicating the efficacy of the bioristor in detecting the occurrence of transpiration during plant growth development and under water shortage. Due to the ability of the bioristor to monitor the plant water status continuously and in real time, this study further supports its use for precision irrigation; the bioristor is as accurate as CWSI but allows the dynamic and continuous tracing of the plant water status. In addition, when compared with the total yield, R showed a good correlation (r = 0.82), confirming the link between water use efficiency and yield. In this study, we validated the use of multiscale vegetation indices developed from UAV, low-cost proximal RGB and in vivo monitoring methods to predict tomato water needs and to determine local irrigation requirements [94]. An evaluation of each scale of monitoring is reported in Figure 8.

Conclusions

This work provides an overview of three phenotyping approaches for evaluating drought-related functional traits at various observational scales. First, the bioristor, presented as a micro-scale methodology, enabled the continuous monitoring of the plant water status; the bioristor's ion concentration measurements in the transpiration stream are used as a direct estimation of plant water use.
Having real-time information about the plant's status helps to identify when it starts responding to stress. However, the application of this methodology may be limited under field conditions, as a high number of sensors would be needed to obtain a precise representation. On the other hand, remote sensing methodologies based on the calculation of vegetation indices at the canopy level, presented at the medium and macro scales, have also been reported as good indicators of drought response. Unlike the micro-scale strategy, proximal remote sensing streamlines the selection process by reducing the time required to assess extensive experimental fields. In return, drought response is evaluated indirectly through estimations of green biomass using RGB and multispectral indices, which are highly correlated to the measures of chlorophyll content, as well as by measuring transpiration rates through canopy temperature assessment, which showed better associations with the water-content-related traits measured by the bioristor. Proximal remote sensing enhances throughput capacity but may sacrifice precision in estimating the response and the ability to determine the onset of stress. The main difference between the medium and macro scales is that UAV technology allows for the assessment of larger populations more quickly, although the distance between the target and the sensor can affect image resolution compared to ground-level evaluations. Finally, conventional RGB cameras used at the medium scale represent a cost-effective alternative to the more expensive methodologies used at the macro level. Despite these differences, measurements at both levels performed similarly when assessing tomato plants.

In summary, the combination of technologies described provides a comprehensive understanding of plants, their physiological functions, and their interaction with the environment.

Figure 3. Plot of the daily smooth average of the sensor response R; 100%, 80% and 40% irrigation treatment (PAW). Black asterisks indicate rainy events; red asterisks indicate abundant irrigations; dashed blocks indicate the R windows considered for the correlation with yield. Tomato maturation stages are indicated with colored bars: green, turning stage; red, fruit ripening. UAV measurements: 1, 56 DAT; 2, 62 DAT; 3, 82 DAT. DAT, days after transplant; PAW, plant available water.

Figure 6. Multiscale monitoring approach used in tomato field cultivation. Macro aerial traits analyzed: Normalized Difference Vegetation Index, NDVI; Normalized Difference Red Edge Index, NDRE; Green Normalized Difference Vegetation Index, GNDVI; Crop Water Stress Index, CWSI; Green Area, GA; Greener Area, GGA; Crop Senescence Index, CSI; Sensor Response Index, R (see Supplementary Table S2 for details).
Table 1. Yield traits analyzed. (a) Total yield and (b) marketable fruit expressed in t ha−1. Different letters indicate significant differences between irrigation treatments (HSD Tukey test, p < 0.05).

Figure 8. Advantages and disadvantages of the techniques used in the present work, from macro- to micro-scale phenotyping.
2023-11-17T16:39:09.198Z
2023-11-01T00:00:00.000
{ "year": 2023, "sha1": "11197cbbb65b39117fafa1f869b73ea6f3ef20b5", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2223-7747/12/22/3851/pdf?version=1699964607", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5dce09bec762c9bb36a119f28669eb12a623a5c9", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
255499953
pes2o/s2orc
v3-fos-license
One-Step Synthesis of Sulfur-Doped Nanoporous Carbons from Lignin with Ultra-High Surface Area, Sulfur Content and CO2 Adsorption Capacity

Lignin is the second-most available biopolymer in nature. In this work, lignin was employed as the carbon precursor for the one-step synthesis of sulfur-doped nanoporous carbons. Sulfur-doped nanoporous carbons have several applications in scientific and technological sectors. In order to synthesize sulfur-doped nanoporous carbons from lignin, sodium thiosulfate was employed as a sulfurizing agent and potassium hydroxide as the activating agent to create porosity. The resultant carbons were characterized by pore textural properties, X-ray photoelectron spectroscopy (XPS), X-ray diffraction (XRD), and scanning electron microscopy (SEM). The nanoporous carbons possess BET surface areas of 741-3626 m2/g and total pore volumes of 0.5-1.74 cm3/g. The BET surface area of the carbon was one of the highest reported for any carbon-based material. The sulfur contents of the carbons are 1-12.6 at.%, and the key functionalities include S=C, S-C=O, and SOx. The adsorption isotherms of three gases, CO2, CH4, and N2, were measured at 298 K, with pressure up to 1 bar. In all the carbons, the adsorbed amount was highest for CO2, followed by CH4 and N2. The equilibrium uptake capacity for CO2 was as high as ~11 mmol/g at 298 K and 760 torr, which is likely the highest among all the porous carbon-based materials reported so far. Ideally adsorbed solution theory (IAST) was employed to calculate the selectivity for CO2/N2, CO2/CH4, and CH4/N2, and some of the carbons showed very high selectivity values. The overall results suggest that these carbons can potentially be used for gas separation purposes.

Introduction

Sulfur-doped porous carbon is a unique form of heteroatom-doped carbon. Unlike other common types of heteroatoms, such as nitrogen, oxygen, or boron, sulfur atoms are significantly larger than carbon atoms, and therefore sulfur atoms protrude out of the graphene plane, giving rise to a few unique properties of the parent carbon, such as superconductivity, as revealed in theoretical studies [1,2]. In addition, the lone pair of electrons in the sulfur atom induces polarizability and interactions with oxygen [3]. There are several specific applications of sulfur-doped porous carbon, including electrocatalysis for fuel cells [4], electrodes for electrochemical capacitors [5], anode material for Li-ion batteries [6], cathodes for Li-S batteries [7], heavy metal removal [8], toxic gas removal [9], H2 storage [10], CO2 separation [11], and many others [12]. Most of the time, sulfur-doped carbons are synthesized by carbonizing S-bearing carbon precursors, like thiophenemethanol [13], cysteine [14], algae [15], ionic liquids [16], and others [12]. The detailed list of precursors that have been employed to synthesize sulfur-doped nanoporous carbons is given in [12]. The porosity within the sulfur-doped carbon is achieved by post-synthesis activation [17] or by utilizing templating strategies [13], including hard and soft templates. In our past research, we incorporated sodium thiosulfate (Na2S2O3) at elevated temperatures to introduce sulfur functionalities within the porous carbon [8,18-22]. The uniqueness of incorporating Na2S2O3 is that it does not require an S-bearing carbon precursor to synthesize sulfur-doped carbon, as the sulfur is contributed by the Na2S2O3.
Lignin is the second most naturally abundant biopolymer present in the environment. It is one of the key constituents of wood, along with cellulose and hemicellulose. Although there are three key structural constituents of lignin, namely coumaryl, guaiacyl, and sinapyl alcohol, these three components are randomly cross-linked with each other, giving rise to the structural heterogeneity of the lignin polymer. The exact structure of the lignin polymer depends on the wood (tree) type and processing conditions. Lignin is industrially produced as a by-product in pulp and paper industries and bio-refineries. Although there is much research on the use of lignin, it still lacks prominent value-added utilization. The majority of lignin is used as low-calorie fuel. Historically, lignin was used in several types of specialty carbons, including activated carbon [23,24], mesoporous carbon [25,26], and carbon fibers [27,28]. Synthesis of porous carbon from lignin may not only introduce sustainability into the synthesis but also influence the economy of lignin by increasing its value-added utilization. In this work, we synthesized sulfur-doped nanoporous carbon from lignin using a one-step approach. We incorporated varying ratios of sodium thiosulfate (Na2S2O3) and potassium hydroxide (KOH) to simultaneously introduce sulfur functionalities and porosity into the carbon matrix. The resultant carbon was employed for gas separation purposes.

Synthesis of Sulfur-Doped Carbons

For all the synthesis purposes, commercially available dealkaline lignin (TCI America) was employed. Typically, the desired amounts of lignin, sodium thiosulfate (Na2S2O3), and potassium hydroxide (KOH) were mixed in a coffee grinder and then loaded onto an alumina boat. The boat was introduced into a Lindberg/Blue M (USA) tube furnace. The tube furnace was heated to 800 °C at a ramp rate of 10 °C/min, dwelled at 800 °C for 2 min, and then cooled to room temperature. All the heating and cooling operations were performed under N2 gas. The final products were washed several times with DI water and then filtered and dried. The names of the carbons according to the ratio of lignin, Na2S2O3, and KOH are given in Table 1. The schematic of the synthesis is shown in Figure 1. From the table, it is clear that the total mixture was in the range of 7-12 g, which is the maximum amount of material that can be processed within the boat inside the tube furnace. The ratio of Na2S2O3 and KOH was also adjusted according to the literature and our previous findings; too low or too high amounts of these materials may result in improper impregnation/activation or breakdown of the entire carbon matrix.

Figure 1. Schematic of one-step synthesis of sulfur-doped nanoporous carbon from lignin.

Characterization of Sulfur-Doped Carbons

All the carbons were characterized with pore textural properties, X-ray photoelectron spectroscopy (XPS), and scanning electron microscopy (SEM). The pore textural properties, including BET specific surface area (BET SSA) and pore size distribution, were calculated using N2 adsorption isotherms at 77 K and CO2 adsorption isotherms at 273 K in Quantachrome's Autosorb iQ-any gas instrument (USA). The non-local density functional theory (NLDFT)-based pore size distribution below 12 Å was calculated using the CO2 adsorption isotherm, whereas the larger (>12 Å) pores were calculated using the N2 adsorption isotherm.
X-ray photoelectron spectroscopy (XPS) results were obtained in a Thermo Fisher K-alpha instrument (USA) with a monochromatic Al-Kα X-ray anode. The X-ray energy was set to 1486.6 eV, and the resolution was 0.5 eV. Scanning electron microscopy (SEM) images were obtained in a Carl Zeiss Merlin SEM microscope (USA) operating at 1 kV. X-ray diffraction patterns were obtained in a Rigaku MiniFlex XRD instrument. In order to capture the XRD pattern of the carbon, it was ground to a fine powder with a mortar and pestle and introduced into the sample holder.

Gas Adsorption Studies

Equilibrium adsorption isotherms of pure-component CO2, CH4, and N2 were measured on all the nanoporous carbons at a temperature of 298 K and pressures up to 760 torr in the same Autosorb iQ-any gas instrument. The temperature was maintained by an additional chiller (Julabo, USA). All the gases were of ultra-high purity (UHP) grade. About 80 mg of each sample was inserted in the sample tube along with a filler rod and a non-elutriation cap. Each sample was outgassed at 300 °C for 3 h before the adsorption experiment.

Material Characteristics

The N2 adsorption-desorption isotherms at 77 K are shown in Figure 2a. The sharp rise in the low-pressure region suggests the presence of microporosity. A narrow stretch of hysteresis loop is also observed in all the isotherms, signifying the presence of mesoporosity. The NLDFT-based pore size distribution is shown in Figure 2b. This figure shows that all the carbons have a few pores in the narrow micropore region, including 8.1, 5.5, and 4.7 Å; the pore width around 3.4 Å is attributed to the graphite layer spacing and is not a true pore. In the large micropore region, the carbons possess two distinct pores around the 14.7 and 19.3 Å regions. The majority of the carbons also demonstrated a distributed mesoporosity within 20-45 Å, supporting the presence of a hysteresis loop in Figure 2a. The detailed pore textural properties are shown in Table 2. It is observed that LS-3 has the highest BET SSA (3626 m2/g) and pore volume (1.74 cm3/g). A porous carbon with a BET surface area of more than 3000 m2/g is very difficult to produce and has been rarely reported in the literature [29-34]; only one work reported a BET surface area higher than that of LS-3 (a MOF-derived porous carbon with BET: 4300 m2/g) [34]. The lowest porosity belongs to LS-4 (BET: 280 m2/g; pore volume: 0.157 cm3/g), synthesized without KOH. It is clear that KOH is the primary agent in creating the porosity, whereas Na2S2O3 is primarily used to introduce sulfur functionalities. The influence of Na2S2O3 in creating porosity is very small.
The quantitative results for XPS are shown in Table 3. The C, S, and O contents were calculated by fitting the C-1s, S-2p, and O-1s peaks, and the representative peak fitting results for LS-3 and LS-5 are shown in Figure 3a-f. As observed in the table, LS-4 has the highest sulfur content (12.6 at.%), followed by LS-5 (8.9 at.%). It is quite intuitive to note that the sulfur content is directly proportional to the addition of Na2S2O3 in the course of synthesis; the sulfur content decreases in the order LS-4 > LS-5 > LS-2 > LS-1 > LS-3. Despite LS-5 and LS-4 having the same Na2S2O3 contents, the higher KOH in LS-5 causes removal of some of the sulfur content in the course of activation. It is also interesting to note that the carbon with the lowest sulfur content, LS-3, has about 1 at.% sulfur, which originated from the pristine lignin itself in the course of its industrial production. Within the different types of sulfur functionalities, the largest fraction of sulfur is associated with C-S contents in all the porous carbon samples, followed by SOx and S=C-O. From Table 3, it is obvious that higher sulfur content also caused higher oxygen content (except LS-3), which might have affected the sulfur functionalities, resulting in a lowering of the total carbon content. Within the oxygen-bearing functionalities, the largest group belonged to S=O/C=O/O-H, directly correlating the oxygen contents with sulfur. LS-4, which had the highest sulfur content, had the lowest total carbon content, only 51.3 at.%. According to XPS, C-C sp2 is the primary carbon structure, suggesting that all the carbons are mostly graphitic in nature. It also needs to be noted that LS-3, LS-4, and LS-5 contain sodium that may have originated from Na2S2O3 and/or during the commercial production of lignin.

The representative SEM images of LS-3 are shown in Figure 4a-d at different levels of magnification. The carbon particles are irregular in shape, with sizes around 20-100 µm. A three-dimensional network of larger pores is observed in the SEM images, with macropore sizes in the range of 1.5-4 µm. Such a pore system, along with the meso- and micropores, may have created a hierarchical porous network in the carbon matrix, which can be highly beneficial for faster diffusion of adsorbate molecules. The X-ray diffraction (XRD) patterns are shown in Figure 5. Two 'hump'-like and very broad peaks around 23° and 43° are observed for all the carbons. These peaks are remnants of graphitic ordering and are present in almost all sp2 hybridized carbons [35].
For LS-5, a few small peaks are observed, which may be associated with salts of Na and/or K; these could originate from Na2S2O3, KOH, and the impurities present in the pristine lignin itself. The relatively higher amount of Na also supports the XPS observation, and such sodium originates from the sodium thiosulfate and/or the pristine lignin. We did not pursue any further analysis to reveal details of these salts, as it is beyond the scope of this study.

Gas Adsorption Studies

The adsorption isotherms of CO2, CH4, and N2 are shown in Figure 6a-e for LS-1, -2, -3, -4, and -5, respectively. For all the plots, the CO2 adsorption amount is the highest, followed by CH4 and N2. The largest equilibrium adsorption capacity of CO2 was demonstrated by LS-3 (~10.89 mmol/g at 757 torr pressure), which has the highest BET surface area and micropore volume. Such a high equilibrium uptake of CO2 is probably the highest CO2 uptake capacity ever reported for any carbon-based material in the literature. The adsorption of all these gases is influenced by the micropore volume. As observed in this study, there is a linear trend in the adsorbed amounts of CO2, CH4, and N2, suggesting that the micropore volume played a pivotal role in the adsorption processes. In addition, CO2 adsorption may also be influenced by the presence of sulfur functionalities. It has been reported that mono- or dioxidized sulfur on the carbon surface causes a high enthalpy of CO2 adsorption of 4-6 kcal/mol, which may be attributed to the negative charge of an oxygen atom, possibly caused by the high positive charge on the sulfur atom [36].
Theoretical calculations also revealed that electron overlap between CO2 and sulfur functionalities on the carbon surface may enhance the interactions between CO2 and the carbon substrate [37]. As observed in Figure 5, there is a possible presence of Na and K salts, which might have originated from Na2S2O3, KOH, or the pristine lignin itself. To the best of our knowledge, these salts do not have any influence on the adsorption of CO2, CH4, or N2.

Working capacity is generally defined as the difference between the adsorbed amount at the adsorption pressure of 1 bar (760 torr) and that at the desorption pressure of 0.1 bar (76 torr). For a suitable adsorbent, a consistent working capacity over multiple cycles is required. In this work, we selected LS-5 as the adsorbent and CO2 as the adsorbate gas to study the cyclability of the working capacity, and the result is shown in Figure 7. As observed in this figure, the working capacity maintains a constant value over 10 cycles, with a standard deviation of no more than ±0.1.

The gas adsorption isotherms were fitted with the Sips isotherm model equation, given below:

q = a_m b p^(1/n) / (1 + b p^(1/n))   (1)

where q is the adsorbed amount (mmol/g), p is the pressure (torr), and a_m, b, and n are all Sips constants. The Sips equation was fit employing the solver function of Microsoft Excel, and the values are given in Table S1 of the supporting information.
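As a minimal sketch of this fitting step (the study used the Microsoft Excel solver; non-linear least squares with scipy is shown here as an assumed alternative), using hypothetical isotherm points; the working capacity defined above then follows directly from the fitted model.

```python
import numpy as np
from scipy.optimize import curve_fit

def sips(p, am, b, n):
    """Sips isotherm: q = am * b * p**(1/n) / (1 + b * p**(1/n))."""
    return am * b * p**(1.0 / n) / (1.0 + b * p**(1.0 / n))

# Hypothetical pure-component CO2 isotherm points (torr, mmol/g)
p = np.array([10, 50, 100, 200, 400, 600, 760], dtype=float)
q = np.array([0.6, 2.1, 3.5, 5.4, 7.9, 9.6, 10.9])

(am, b, n), _ = curve_fit(sips, p, q, p0=[12.0, 1e-3, 1.0])

# Working capacity: q(760 torr) - q(76 torr)
print("working capacity:", sips(760.0, am, b, n) - sips(76.0, am, b, n))
```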
Owing to the experimental difficulty in performing mixed-gas adsorption, it is common practice to calculate the selectivity from the pure-component gas adsorption isotherms. Selectivity provides an indication of the preference of the adsorbent material for one component over another when both species are present in the feed stream. The selectivity (α1/2) of component 1 (preferred adsorbate) over component 2 (non-preferred adsorbate) is defined as follows [38]:

α1/2 = (x1/x2) / (y1/y2)

where x and y are the mole fractions of adsorbate in the adsorbed phase and bulk gas phase, respectively. The most popular way of calculating selectivity from adsorption isotherms is the Ideally Adsorbed Solution Theory (IAST), originally proposed by Myers and Prausnitz [39].

The selectivity values for CO2/N2, CO2/CH4, and CH4/N2 are shown in Figure 8a-c, respectively. From Figure 8a, it is observed that LS-2 and LS-4 have the highest selectivity for CO2/N2 (about 180-120) at the lowest pressure, but it decreases significantly at higher pressure. At the higher pressure, the highest selectivity was demonstrated by LS-3, at about 80-60. The lowest selectivity was demonstrated by LS-1. For the selectivity of CO2/CH4 (Figure 8b), the highest selectivity was demonstrated by LS-4 (20-11), followed by LS-3, LS-2, LS-1, and LS-5. For CH4/N2, the highest selectivity was demonstrated by LS-2 and LS-5 (Figure 8c), which was about 80-140 in the lower pressure range and 23-9 in the higher pressure range. The selectivity of CO2/N2 is probably one of the highest among the S-doped porous carbons reported in the literature; only the sulfur-doped mesoporous carbon synthesized from resorcinol-formaldehyde in our previous work [22] demonstrated a slightly higher selectivity of 190 compared to that of LS-3 and LS-4. For CO2/CH4, the selectivity values lie within 3.3-15.7 in the literature [40]; the selectivity of CO2/CH4 for LS-4 (20-11) is higher than that reported in the literature. The selectivity of CH4/N2 was reported to be as high as 14 in the literature; LS-2 and LS-5 demonstrated a much higher selectivity than this value. It is also important to note that a high equilibrium uptake capacity of a pure preferred component does not always confirm its high selectivity over a non-preferred component; the selectivity also depends on the shape of the pure-component isotherms of both the preferred and non-preferred components. As an example, LS-5 demonstrated a very high equilibrium uptake capacity for CO2 and CH4; however, it represents the highest selectivity only for CH4/N2, owing to the linear nature of the isotherms.
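A minimal sketch of a binary IAST selectivity calculation built on Sips fits is given below; for the Sips form, the reduced spreading pressure integrates analytically to a_m·n·ln(1 + b·P^(1/n)). The parameter values are hypothetical placeholders, not the fitted constants of Table S1.

```python
import numpy as np
from scipy.optimize import brentq

def sips_pi(p, a, b, n):
    """Reduced spreading pressure of a Sips isotherm:
    integral of q(p')/p' dp' from 0 to p = a*n*ln(1 + b*p**(1/n))."""
    return a * n * np.log(1.0 + b * p**(1.0 / n))

def iast_selectivity(P, y1, s1, s2):
    """IAST selectivity of component 1 over component 2 for a binary
    mixture at total pressure P (torr) and gas-phase mole fraction y1.
    s1 and s2 are Sips parameter tuples (a_m, b, n) per component."""
    y2 = 1.0 - y1
    # Find the adsorbed-phase fraction x1 that equalizes the spreading
    # pressures of the two hypothetical pure adsorbed phases.
    f = lambda x1: sips_pi(P * y1 / x1, *s1) - sips_pi(P * y2 / (1.0 - x1), *s2)
    x1 = brentq(f, 1e-9, 1.0 - 1e-9)
    return (x1 / (1.0 - x1)) / (y1 / y2)

# Hypothetical Sips parameters (a_m, b, n) for CO2 and N2
co2, n2 = (12.0, 2.0e-3, 1.3), (3.0, 4.0e-4, 1.0)
print(iast_selectivity(760.0, 0.15, co2, n2))  # 15:85 CO2/N2 feed
```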
Conclusions

In this work, we successfully synthesized sulfur-doped nanoporous carbon with an ultra-high surface area from lignin by one-step carbonization, with the help of sodium thiosulfate as a sulfurizing agent and potassium hydroxide as an activating agent. The peak deconvolution results of XPS confirmed that the nanoporous carbons possess sulfur contents of 1 to 12.6 at.%. The porosity analysis revealed that the BET specific surface areas of the carbons are in the range of 741-3626 m2/g. The surface area of 3626 m2/g is one of the highest for carbon-based materials reported in the literature. Pure-component adsorption isotherms of CO2, CH4, and N2 were measured on all the porous carbons at 298 K, with pressure up to 760 torr. The carbon with the highest BET surface area demonstrated the highest CO2 uptake of more than 10.89 mmol/g, which is one of the highest for porous carbon-based materials reported in the literature. The IAST method was applied to calculate the selectivity of CO2/N2, CO2/CH4, and CH4/N2 from the pure-component isotherm data, and the results demonstrated that these materials can potentially be used for gas separation purposes.
2023-01-08T05:14:36.433Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "dbf53b87bf369c5c4ce52ace09a4006a80a251db", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1944/16/1/455/pdf?version=1672742647", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "dbf53b87bf369c5c4ce52ace09a4006a80a251db", "s2fieldsofstudy": [ "Chemistry", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
233937683
pes2o/s2orc
v3-fos-license
Place4Carers: a multi-method participatory study to co-design, piloting, and transferring a novel psycho-social service for engaging family caregivers in remote rural settings

Background: Family caregivers are key actors in the ageing society. They are mediators between practitioners and patients and usually also provide essential daily services for the elders. However, until now, few services have been deployed to help caregivers in their care tasks or to improve their mental health, which can experience severe burden due to caregiving duties. The purpose of the study is to implement a community-based participatory research project to co-design an innovative organizational model of social services for family caregivers of elderly health consumers living in remote rural areas in Italy.

Methods: This is a community-based participatory research project in the remote area of Vallecamonica involving four main phases: a quantitative analysis of caregiver needs, a scoping review of existing services for caregivers, co-design workshops with local stakeholders and caregivers to create a novel service, the piloting and first implementation of the service, and the assessment of project transferability to other contexts.

Results: As the hours dedicated to elder care increase, both the objective and the developmental caregiver burden significantly increase. Conversely, higher levels of engagement were associated with lower physical and emotional burden, and caregiver engagement was positively correlated with caregivers' perceived self-efficacy in managing disruptive patient behaviours. Based on these preliminary results, four co-design workshops with caregivers were conducted and led to the definition of the SOS Caregivers service, built on four pillars structured upon the previous needs analysis: a citizens' management board, training courses, peer-to-peer meetings, and project and service information. We found that co-design is an effective means of creating new services for family caregivers, whose experiential knowledge proved to be a key resource for the project team in delivering and managing services. Less positively, the transferability analysis indicated that local municipalities remain reluctant to acknowledge caregivers' pivotal role.

Conclusions: A dedicated support service for caregivers can ameliorate caregiving conditions and engagement levels. The service has proved a successful co-productive initiative for a psycho-social intervention for family caregivers. For the future, we suggest that family caregivers should be considered active partners in the process of designing novel psycho-social services, and not just recipients, to enhance a better ageing-in-place process.

Keywords: Patient engagement, Caregivers, Co-production, Co-design, Rural setting, Participatory action research, Active healthy ageing, Caregiver burden, Hard to reach populations

Background

The global ageing population is currently rising dramatically at a rate of 3% each year [1].
There is evidence that environmental factors related to living space and place play an increasingly important role in prolonging human life and, in particular, the quality of life of the elderly [2-4]. 'Ageing-in-place', that is, elderly people continuing to live at home for as long as possible, is recognized as a key strategy to improve the quality of life of elderly health consumers and to ensure the sustainability of social and welfare systems. In this scenario, family caregivers play a pivotal role in the daily care of elders, both by assisting with their care and by coordinating interventions and activities with all actors involved in enhancing elders' health. This requires caregivers to dedicate a lot of time to care tasks, which can impact negatively on their own quality of life. For this reason, there is a need for new services to support these caregivers and to help them support older people more effectively by developing evidence-based models. Engaging family caregivers in the care network is potentially a critical asset in implementing 'ageing-in-place', especially in remote and rural areas where family caregivers can, if effectively engaged, bridge the gaps caused by the fragmentation of social and welfare systems [5,6]. Family caregivers arrange and attend medical appointments, participate in both routine and high-stake treatment decisions, coordinate care and services, and help with daily tasks such as dressing, bathing, and managing medicines [7,8]. Family members have always been the primary source of support and assistance to their relatives in times of illness and when they can no longer function independently [9,10]. As such, family caregivers constitute an 'invisible workforce' within the health care team, but their important role and, in particular, their support needs often remain unrecognized [10,11]. Engaging family caregivers in the healthcare process is regarded as a key pillar of improved service effectiveness, sustainability, and patient-centricity [12-16], where engagement is defined as the process of enabling people to become actively and genuinely involved in defining issues of concern to them, in making decisions about factors that affect their lives, in formulating and implementing policies, in planning, developing, and delivering services, and in taking action to achieve change [17]. This paper reports the results of the Place4Carers project [18], a community-based participatory research project to co-design, pilot, and assess the transferability of a novel organizational model of psycho-social services for family caregivers of elderly health consumers living in the remote rural area of Vallecamonica in Italy. The co-design process was based on an extended review of the literature [19], which clearly evidenced the need for better coordinated interventions dedicated to supporting family caregivers in hard-to-reach rural areas, with a broad scope covering not only the improvement of healthcare literacy but also psychological support and dedicated spaces to foster caregivers' peer networks and collaborations. Moreover, there is a growing agreement in the scientific debate about the opportunity of engaging health consumers and patients in the process of planning and delivering healthcare services.
This is even more crucial when actions devoted to promoting psycho-social support are concerned: in this case, the involvement of the users of such services is essential not only to improve their final acceptance of the service but also to guarantee that it is fine-tuned to their expectations and needs. Our project's aim is to co-produce new services with local service providers and family caregivers to ensure ageing-in-place processes and to strengthen families' inclusion and engagement in a more effective partnership with the welfare system and local health organizations. An accurate process of caregivers' inclusion and involvement in all phases of the project was put in place and critically reviewed, as reported in a previous publication related to the project [20]. In this manuscript, we report the experience gained in the co-design, first implementation and transferability analysis of the service, discuss the results achieved and the need for improvements, and critically reflect on the lessons learnt for the future.

The study setting: Vallecamonica

Vallecamonica is a mountainous territory in the northern part of the Lombardy Region. Residential areas in Vallecamonica are geographically dispersed, and the territory's topography and infrastructural and public transport limitations make service delivery more difficult. On an ageing index (computed as the ratio of people aged 65 or more to those aged 14 or less), the area is characterised by a high proportion of elderly people, with a score of 157.3 compared to the Lombardy Region average of 152.6. This attests to a situation of diffused frailty, with a range of social and healthcare needs. In Vallecamonica, health consumers' requests for social services are processed by the Azienda Territoriale per i Servizi alla Persona (ATSP); when the agency receives such a request from a health consumer, it evaluates which services to activate and then organizes the delivery through more than 40 providers, which include social cooperatives, foundations, and associations. Data from ATSP's 2016 Social Balance Sheet show that the number of requests for social services received by municipality social assistants increased from 2536 in 2012 to 4820 in 2015; about 30% of these were requests for information about home care services, socio-health assistance, and access to nursing homes. The number of elderly people receiving home care services increased from 191 in 2012 (29,639 h of assistance delivered) to 235 in 2015 (38,699 h of assistance delivered). While these data show that family caregivers are an important social network node for elderly people living in the area, the increasing demand for health and social services attests to the difficulty (or inability) of family caregivers to support their aged relatives alone.

Methods

The whole community-based participatory research project in the remote area of Vallecamonica involved four phases (see [18] for a detailed description of the study protocol). In this paper, we shall focus on the results of the Place4Carers project related to the foundation, co-design, first piloting and transferability of the new service (namely, S.O.S. Caregivers).

Phase 1: foundation of the service concept: quantitative analysis of family caregivers' needs, services usage, and sustained costs

Preliminarily, the study involved a quantitative analysis of caregivers' needs, experiences, and expectations to best orient the co-design phase.
This analysis involved two main elements: a quantitative survey and a secondary database analysis. The quantitative survey addressed a sample of caregivers whose elders live in Vallecamonica. Eligibility criteria included being a family caregiver of a fragile elder, use of home care services provided by ATSP or one of four local nursing homes, and residency in Vallecamonica. ATSP representatives were responsible for caregiver recruitment. Because of privacy constraints, ATSP could directly contact only caregivers whose elders were using ATSP home care services. Caregivers whose elders used a home care service organized by one of the four nursing homes were first contacted by their nursing homes. If they were interested in participating in the project, their contact details were shared with ATSP. The survey aimed to measure the psychosocial, economic, and organizational needs of family caregivers and the status of their relatives. To that end, four measures were collected and assessed (see Table 1).

Demographic measure

We collected family caregivers' demographic data, including questions about gender, age, education, marital status, working conditions, position, and sector.

Psychosocial measures

To assess the level of caregivers' needs and engagement, the following widely recognized scales were used. The Caregiving Health Engagement Scale (CHE-s) investigates the psychological attitudes of caregivers in terms of their participation, competence, and motivation in responding to the care demands of their relatives [13]; measured on a 7-point Likert scale. The Caregiver Burden Scale measures feelings and well-being when caring for elders [21] on five dimensions: a) objective burden (time-dependent evaluation of stress caused by restrictions on one's personal life); b) developmental burden (sense of failure regarding one's hopes and expectations); c) physical burden (physical stress and somatic disorders); d) social burden (caused by role conflicts between job and family); and e) emotional burden (embarrassment or feelings of shame caused by the patient); measured on a 5-point Likert scale. The Caregiver Self-Efficacy Scale assesses caregivers' sense of self-efficacy in dealing with difficulties related to the care of elders [22], addressing three factors: Obtaining Respite, Responding to Disruptive Patient Behaviours, and Controlling Upsetting Thoughts; measured as a percentage (0-100). The Health Literacy Scale assesses caregivers' skills and capacities in understanding and analysing information related to health decisions [23]; measured on a 5-point Likert scale. The Caregiver Need Assessment Scale explores the needs of family members caring for fragile and vulnerable people [24] in terms of six factors: emotional needs, physical-functional needs, cognitive-behavioural needs, relational needs, social-organizational needs, and spiritual needs (along with a final overall value); measured on a 5-point Likert scale.

Organizational measures

To explore caregivers' satisfaction with existing health and social care services for their elders, we used a selection of ad hoc items on a 7-point Likert-type scale (e.g. 'I feel understood by the professional that mainly delivers home treatments'; 'I am comfortable in sharing my feelings with the professionals that mainly deliver home treatments'; 'The professionals that mainly deliver home treatments address all my questions and doubts').

Economic measures

To assess economic strain, we investigated both out-of-pocket expenditure and unpaid caregiver time costs.
To measure out-of-pocket expenditure, we used three ad hoc items referring to the average monthly medical and non-medical costs sustained by the family in caring for elders: 1) 'What monthly costs are sustained by caregivers and/or elders for the care of elders in terms of medications, specialized medical examinations, medical aids, special food and beverages, professional caregiver support and others?'. Among non-medical items, we included transportation costs, derived from 2) 'How many journeys do you make every month to accompany your relative on clinical visits?' and 'What is the average length of these journeys in km?', and calculated as the average price of fuel in Italy [25] multiplied by the total distance travelled by caregivers when accompanying their relatives on clinical visits:

Transportation costs = avg. price of fuel in Italy × number of journeys accompanying elders × avg. distance per journey.

To assess unpaid time costs, we used one item that estimates the cost of replacing informal caregiver support with professional assistance: 3) 'How many hours per month do you spend caring for your relative?'. More precisely, this replacement cost was calculated as the total monthly hours of informal caregiver support multiplied by professional caregivers' average hourly wage:

Replacement costs = avg. hourly wage of professional caregivers in Italy × number of hours of informal caregiving per month.

A database analysis was conducted to integrate the survey results with relevant secondary administrative information about family caregivers' relatives. Among the elders that used home care services provided by ATSP and the four local nursing homes, the analysis included only those whose family caregivers participated in the survey. ATSP and the four nursing homes linked the responses of each family caregiver to their elder's data, creating elder-caregiver dyads. Elders' information collected from the ATSP database and the four nursing homes included the following.

Status measures

As shown in Table 2, we were interested in elders' demographic and personal data and information about their clinical condition and service usage characteristics. The analysis of elder-caregiver dyads followed a few pre-defined steps. First, a descriptive analysis was performed to describe the sample of elders and family caregivers [26]. A correlation analysis was then performed using a nonparametric measure (Spearman's correlation) to identify any significant positive or negative relationships between variables [27]. More precisely, we correlated Caregiver Burden, Caregiver Needs and Caregiver Engagement with the other variables of interest specified in Tables 1 and 2. Given the large number of results, only significant correlations, and those of relevance to this study, are highlighted and discussed.

Phase 2: co-design workshops with caregivers and local stakeholders

Four co-design workshops [28,29] were conducted to collect ideas and insights for the novel caregiver services. These involved a selection of caregivers previously interviewed in the Phase 1 survey in order to 1) explore their caregiving experiences and the support needs that existing services in the area fail to meet and 2) co-design a new service to address caregivers' expectations as articulated in Phase 1. In addition to the co-design process, the workshops invited caregivers to reflect on good practices identified in the scoping review of the literature.
The workshops were conducted by two expert moderators (NM, EG), employing a non-directive style to enhance spontaneous participation. Further information can be found in a previously published paper [20], which describes this phase of the research in greater depth. The workshop transcripts were subjected to qualitative interpretative content analysis [30] to synthesise and map participants' contributions. Members of the research team performed the analysis as a joint and iterative process. To begin, NM and EG coded the content from the workshops and proposed a preliminary taxonomy. GG, CM, and SB discussed and supervised the first phase of coding and contributed to further synthesis and interpretation. The final synthesis of the workshop results provided a deep description of caregivers' service needs and expectations, along with a first prototype of the proposed new service. To optimize and finalize the proposal, it was presented for discussion with caregivers in a final workshop. Before proceeding to piloting (Phase 3), the research team presented and discussed the proposed service with the local healthcare organization (ATS della Montagna) and in a dedicated session with the local government committee. The purpose of this workshop was to discuss the feasibility of the new service prototype and to ensure the involvement of local stakeholders in piloting the service.

Phase 3: piloting and preliminary assessment

Feasible service ideas were implemented through service prototyping [31] in Vallecamonica (Breno, BS) over an 18-month period from April 2019 to November 2020. The pilot was suspended from March to September 2020 because the Covid-19 emergency made face-to-face activities impossible, especially for a vulnerable group like elders' family caregivers. Following a period of adaptation, the service was successfully moved online by October 2020. Although the target service users were family caregivers and fragile elders using home care services provided by ATSP and Vallecamonica's four local nursing homes, no exclusion criteria were applied. All meetings were published on the project's online channels (i.e. Facebook page, project website) and on the ATSP website, enabling everyone interested in the service to join free of charge. ATSP representatives were responsible for caregiver recruitment and implementation of the pilot. As active partners in implementation, family caregivers helped to raise awareness of the project and co-delivered some service activities (e.g. peer-to-peer meetings). The project team organized seven collective meetings (from April to November 2019) in which all members of the project team shared information about the service's progress and issues arising. At the same time, internal and informal meetings and activities were organised by smaller groups of team members. At the end of the pilot, service outcomes were assessed using both quantitative and qualitative methods; further information about these assessments can be found in a previously published paper [20], which describes the metrics and assessment results in greater depth.

Phase 4: assessment of transferability to other regions and stakeholder involvement

The final phase of the project assessed the transferability of the Place4Carers project to other remote and rural areas like Vallecamonica and involved the heads of social and welfare service providers in the new territory.
Given the particularity of the project setting, we decided to analyse its transferability to the neighbouring territory of Valtellina, which has similar geographical and demographic characteristics, comparable health and social care services and structures for elders, and few services for family caregivers of elderly people (see Table 3). Following a formal presentation of the project, the six heads of social and welfare service providers for vulnerable and elderly people in Valtellina (from Sondrio, Morbegno, Tirano, Dongo, Valchiavenna, and Alta Valtellina) were interviewed using a pre-defined set of questions. These structured interviews included two main sections. The first section explored providers' perspectives on the transferability of the Place4Carers project by asking them about factors affecting their willingness to implement the project in their area. The second section referred to a Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis performed by the Place4Carers researchers to investigate the extent to which service providers believed that transferring the Place4Carers project to Valtellina would produce the same internal/external achievements and issues as in Vallecamonica. This analysis helped to specify how the outcomes of the Place4Carers project might change in a new area. All the heads of social and welfare service providers were then invited to join a participatory workshop to further investigate the transferability of the Place4Carers project to their area. Three of the six heads accepted the invitation and discussed how the project should be adapted and modified for new contexts with researchers (n = 2), local practitioners (n = 4) and politicians (n = 1). The results of the structured interviews and workshop were subjected to qualitative thematic analysis to identify any possible future limiting factors when implementing the Place4Carers project in Valtellina. Triangulation enhanced the reliability of the results.

Results

Phase 1: foundation of the service concept: quantitative analysis of family caregivers' needs, services usage, and sustained costs

Using data from surveys and databases, this phase was devoted to assessing the unmet needs and expectations of family caregivers in their caregiving role. The analysis included 51 elder-caregiver dyads (see Table 4).

Overview of elders and family caregivers

The database of assisted elderly in the area of the study included 51 individuals (see Table 4 for demographics). Of these, 53.1% were widowed, and 40% lived alone. Almost 40% earned less than €10,000 per annum and lived alone. In terms of health status, almost 92% were registered as physically impaired, and 60% had at least one chronic disease, including neurological (37.3%), cardiological (27.5%), and other chronic conditions (35.3%). In terms of service demand, 31 (60.8%) were assisted by ATSP, and 20 (39.2%) used nursing home services. Overall, these elders received an average of 214.52 ± 140.12 h of assistance each year; those assisted by ATSP received a significantly higher number of hours of assistance (254.17 h) than those assisted by nursing homes (148.44 h) (p = 0.027). The greatest demand was for hygiene and mobility support; almost 90% requested two or more services, and about 55% requested more than three services. About 70% were supported by their sons, and 12% were supported by their husband or wife. We also analysed the data of the 51 family caregivers (see Table 4 for demographics).
In relation to caregivers' wellbeing, almost 60% of the sample reported a moderate or severe level of burden. Regarding their expectations of the social welfare system, caregivers' main concern was the need for information about local services. Finally, in terms of organizational and economic effort, 40% dedicated more than 70 h per week to caring for their elders (averaging 75.22 ± 54.14 h per week), and monthly out-of-pocket expenditure for elder support averaged €557. Satisfaction with current home care services and nursing homes was high (82%). Older and unemployed caregivers spent significantly more time caring for their relatives than younger caregivers (p = 0.002) or those who were employed (p < 0.001).

Caregiver burden

Based on our survey data, we analysed the association between demographic factors and caregiver burden (see Table 5). Overall, the level of burden was found to increase with caregiver age. In addition, younger caregivers reported a significantly lower social burden than older counterparts (60-65 years, p = 0.016; ≥ 65 years, p = 0.016). Social burden also decreased significantly with increased caregiver education (Spearman's Rho = −.323, p = 0.021). Caregivers with primary school education also reported higher levels of physical burden (M = 15.36 ± 2.36) than those with a high school education (p = 0.028). Unemployed caregivers reported significantly higher social burden than those who were employed (p = 0.005). Caregivers whose elders had activated an ATSP home care service reported higher levels of physical, social, and emotional burden than those assisted by nursing homes (p = 0.013, p = 0.002, and p = 0.032, respectively).

Caregiver engagement

No significant differences were found in engagement as assessed by the Caregiving Health Engagement Scale by age, gender, education (both caregiver and elder), caregiver employment status, or hours of assistance required. Higher levels of engagement were associated with lower physical and emotional burden (see Table 9). Caregiver engagement correlated positively with caregivers' perceived self-efficacy in managing disruptive patient behaviours (Self-Efficacy Scale, Spearman's Rho = .568, p < 0.001) and negatively with needs as assessed by the Caregiver Need Assessment. Higher engagement was associated with lower total (Spearman's Rho = −.339, p = 0.021), emotional (Spearman's Rho = −.398, p = 0.006) and spiritual needs (Spearman's Rho = −.305, p = 0.04).

Phase 2: co-design workshops with caregivers and local stakeholders

In total, 26 of the caregivers who participated in Phase 1 and 7 stakeholders who participated in the following part of the project (see Table 10) were involved in the co-design workshops to generate insights for the novel service devoted to supporting family caregiver engagement. The great majority were females in their sixties, mainly retired from work, with a low level of education.

Need for information

According to the co-design workshop results, caregivers reported great difficulties in accessing information about services, benefits, initiatives, and bureaucratic procedures. They attributed this lack of information to a lack of clear sources of reference for medical information in this area (e.g. lack of general practitioners) and to the extreme effort (in terms of energy and time) needed to collect and organize the relevant information. Interviewees noted the need for a single user-friendly and constantly updated source of information about medical, social, legal and practical aspects of caring for older people.
They also highlighted the need for a clear map that would enable them to locate dispersed services in the territory. 'The problem here in the valley is access to information; we do not know how to find information about procedures or facilities' (Caregiver 5, female). 'It would be useful to have a job description detailing what the doctor does and where he does it, and also a person dedicated to helping caregivers who may not be comfortable with online services' (Caregiver 1, female). Reference was also made to the need for a website and telephone line with trained volunteers to address questions and doubts: 'Especially when looking after someone with Alzheimer's, we need a help desk with a dedicated phone number and someone that can help us emotionally as well as with information' (Caregiver 8, female). This information system should be shared with general practitioners and social workers.

'The problem is that we are not prepared to face the illness. We find ourselves in a difficult position because we do not know how to act when a person has Alzheimer's; we recognize the illness, but what is and is not appropriate? When you encounter the situation directly, that's another story' (Caregiver 16, male). For the interviewed caregivers, the experience of feeling inadequate and lacking in solid expertise is a major factor in the potential emotional burden, and they reported a need 'to have proper training, not only to deal with the illness but also to learn about practical issues and how to help our relatives in their everyday life' (Caregiver 19, male).

Emotional needs

The emotional burden of caregiving was referred to repeatedly during the workshops, along with the need for spaces and occasions where these feelings can be expressed, with some form of empathetic listening and support. 'To be honest, I feel like I'm in prison since I began to look after my mother. We are at risk; I realise that I can become angry and very nervous' (Caregiver 20, male). 'I think we should organize as a group to share our experiences; WhatsApp is ok, but a proper physical encounter is more important' (Caregiver 9, female). All participants agreed about the need to express and share their emotional burden with peers. 'A group where I can share how I feel, not feel abandoned, and share our experiences and moments together' (Caregiver 2, female). Finally, thanks to the previous positive experiences of some caregivers, the importance of psychological counselling for emotional support also emerged. 'I found great comfort in a previous group at the hospital, where a psychologist helped us to find emotional support and relief. I think that is also important for reducing our burden' (Caregiver 2, female).

Phase 3: piloting and preliminary assessment

Service co-design

The researchers and ATSP personnel discussed the service ideas proposed by family caregivers in the Phase 2 co-design workshops (Tables 11 and 12), compared them with what emerged from the quantitative needs analysis conducted in Phase 1, and attempted to put them into practice. Activities that were considered feasible in economic and organizational terms were included in the new service pilot, which was called SOS Caregivers. The service was built on four pillars: a health consumers' management board, training courses, peer-to-peer meetings, and project and service information.
The last three of these pillars were based on caregivers' explicit suggestions. The health consumers' management board included ATSP representatives, researchers, and family caregivers (representing the great majority of board members). The board was open to any family caregivers who were interested in joining. The board had two purposes: to support and advise ATSP in implementing service activities, and to give family caregivers a voice and responsibility. The members met every 4 months to discuss service issues and possible improvements. All members had an equal say in final service decisions; in fact, the number of family caregivers at board meetings was always (at least) double the number of researchers and ATSP representatives. The direct contributions of family caregivers to the management and evaluation of service activities helped ATSP personnel and the researchers to understand caregivers' needs and preferences [32]. The involvement of family caregivers also helped to build strong relationships [33] with ATSP and a shared sense of community [34]. The training programme provided a set of practical courses for family caregivers to help them to care for their relatives. Enhancing caregivers' skills and capabilities has a positive impact on the quality of elder care and the well-being of elders and caregivers alike. Courses were organized monthly and were delivered by professionals including psychologists, social workers, educators, speech therapists, and physiotherapists. Course content reflected the needs and difficulties highlighted by family caregivers during the co-design workshops. Five courses were face-to-face (helping relatives to swallow, supporting relatives during routine activities, dealing with stressful situations, preventing relatives' falls, and managing relatives' medications), and two were delivered online (sanitary best practices in elder care, dealing with elders during the Covid-19 pandemic). At the end of each course, the professionals' material was shared with participants. Peer-to-peer meetings were attended by family caregivers and coordinated by one psychologist. At the beginning of each group meeting, the psychologist suggested a theme related to the caregivers' daily life and encouraged participants to express their opinions and experiences in this regard. The aim was to create a self-help network of caregivers, enhancing their sense of well-being and belonging to a community by sharing ideas. During these group meetings, the psychologist sought to promote equal and fair participation, and caregivers were asked to support the psychologist in co-delivering the service activity. While the psychologist acted as moderator, caregivers were responsible for developing the group discussion. The self-help meetings during the service pilot (five face-to-face and two online) encouraged serious reflection as well as more general discussion about the role of caregivers, managing one's private life, cultural and culinary habits in Vallecamonica, memories related to family life, and the difficulties of living in remote and rural areas. Project and service information provided caregivers with the information they needed in three ways (see Table 13). First, with the support of the four local nursing homes, the project team produced a report summarizing the bureaucratic procedures, admission constraints, costs, and activities associated with services in Vallecamonica for elderly people living at home.
Second, caregivers were informed about the new service activities during the pilot. Finally, one psychologist launched a WhatsApp group to facilitate direct communication with caregivers who had expressed their interest. These three formal and informal sources were complementary; while the report provided information about available activities for elderly people, the service activities kept people informed about opportunities for supporting caregivers, and the WhatsApp group collected caregivers' other informal requests for support. To reach as many recipients as possible, the report and service activities were disseminated through a new project website (https://www.place4carers.it/), a project Facebook page (https://www.facebook.com/place4carers), brochures, and alerts in the news section of the ATSP website and newsletter.

Service activities

Overall, Place4Carers, as a whole project, reached more than 150 family caregivers across the different phases of the research and intervention. Regarding the service itself, it involved a total of 69 caregivers (see Table 13 for more details about the contents and the numbers of participants in each activity). The fate of the service ideas that emerged from the co-design phase can be summarized as follows:

• Psychological support (economic feasibility: low; organizational feasibility: medium): partially implemented. As one-to-one psychological support was too costly, the psychologist got involved in the self-help group.
• Self-help groups (economic feasibility: high; organizational feasibility: high): implemented.
• TV commercial (economic feasibility: low; organizational feasibility: high): partially implemented. The project budget was too limited to fund a TV commercial; however, the project team delivered three interviews on local media.
• Bus to transport elders to local hospitals and ambulatories (economic feasibility: low; organizational feasibility: low): not implemented, owing to insufficient project budget and resources.

We think that at least two external constraints limited attendance. First, caregivers' initial mistrust meant that few participants were willing to participate in the first training session and group meeting, forcing the project team to postpone those activities. Second, caregivers' low digital literacy contributed to the low number of participants in the first online training sessions and group meetings (n = 0 online, n = 3 offline). Despite these setbacks, overall satisfaction with service activities was very high, confirming their positive effect on caregivers' well-being. For more information about the qualitative and quantitative analyses of the service pilot, see [20].

Supporting activities

The service was implemented along with supporting activities that ensured its success (Fig. 1). At the launch and again at the end of the pilot, an account of the service was presented to the local health agency (ATS della Montagna) and a government committee, and a brief overview was disseminated through local newspapers and broadcast media. During the pilot, ATSP and the project team raised awareness of the service through presentations to local service providers, including the local hospital, nursing homes, cooperatives, and social workers, and an official communication was released on local broadcast media. Additionally, the project team organized seven collective meetings and several internal operational meetings to manage and oversee project progress.

Phase 4: assessment of transferability to other regions and stakeholder involvement

The research team reflected on internal/external achievements and issues arising throughout the service pilot in Vallecamonica in order to assess the potential transferability of the service to other contexts. More precisely, the following issues were addressed.
• Direct achievements: factors that impacted directly on family caregivers or service providers (e.g. increased levels of trust between caregivers and service providers).
• Indirect achievements: actual or potential indirect impacts on one or more actors in the service ecosystem (e.g. increased motivation of service providers following direct collaboration in coproducing a new service).
• Internal issues: service requirements that negatively affected (or threatened to affect) family caregivers or service providers (e.g. time and effort invested in managing and delivering the service).
• External issues: external factors that affected (or threatened to affect) one or more actors in the service ecosystem (e.g. participation difficulties when caregivers were unable to move their relatives and/or leave them alone at home).

The research team then sought to determine whether the identified achievements and issues might transfer to the new service setting (Valtellina). To that end, we interviewed the heads of three social and welfare service providers who had made themselves available. All three felt the Place4Carers project was useful and interesting for their district because it would improve caregivers' wellbeing and the effectiveness of relatives' support. They also welcomed the active involvement of family caregivers in the service life cycle as a means of enhancing trustful relationships between caregivers and service providers, establishing a peer-to-peer community, and disseminating existing health and social care services. All three felt that equal collaboration would have no negative effects on caregivers or service providers; that is, caregivers would not feel useless, and professionals would not lose their authority or control. All three believed that caregivers would be interested in participating in the SOS Caregivers service, and that local stakeholders in Valtellina (e.g. nursing homes) would support this new service. However, two of the three interviewees expressed concern that project implementation might demand undue effort and resources, and only two felt that the project would increase the motivation of professionals and caregivers in relation to caring activities. To integrate and deepen these preliminary results, the researchers organized workshops involving the three heads of social and welfare service providers and their key actors, including local social workers, cooperatives, and political parties working with the social care services. During the discussion, at least two limiting factors emerged that would require service providers to modify the Place4Carers project prior to implementation in Valtellina. First, despite its relevance, participants believed that it was premature to invest in services for family caregivers alone, and that new services should target both caregivers and relatives. Most elderly people living in Valtellina do not receive appropriate health and social care support because the limited economic and human resources dedicated to caring activities cannot meet service demand in time. For these reasons, workshop participants argued that family caregivers should not be the only target for new services; in their view, relatives' needs are the root cause of caregivers' discomfort, and that strain can therefore be relieved by supporting services for relatives. Secondly, participants reflected on the difficulty of integrating and coordinating service activities in Valtellina.
In the area managed by the three social and welfare service providers, the list of stakeholders supporting elderly people is long (21 nursing homes, five cooperatives, and three volunteering organizations), and the service network is complex to manage. For that reason, participants believed that Valtellina's existing home care service network should be reinforced by increasing cohesion among stakeholders before launching the Place4Carers project, which requires different stakeholders to collaborate.

Discussion

Scholars have described family caregivers as the invisible backbone of the social and welfare systems [18]. As well as providing daily assistance, family caregivers play a pivotal role in linking and integrating the various actors and services that support elderly health consumers. This is particularly evident in remote and rural areas, where family caregivers can, if effectively engaged, bridge the gaps caused by the fragmentation of the social and welfare systems. According to our data, 40% of caregivers dedicate more than 70 h each week to their relatives; average monthly out-of-pocket expenditure on supporting relatives is €556; and satisfaction with existing services is high (home care services: 90%; nursing homes: 80%). These findings align with previous evidence that family caregivers play a key role in Western integrated care [34], and that caregiving overload has negative psycho-social consequences [35]. Specifically, as the hours dedicated to elder care increase, both objective and developmental burden increase significantly. Caregiver burden did not differ significantly by gender, family role, user age, pathology, or living arrangements. Our analysis of the association between caregiver burden and needs revealed that higher levels of social burden are associated with significantly greater needs, again aligning with previous evidence from other cultural and healthcare contexts [36]. Conversely, higher levels of engagement were associated with lower physical and emotional burden, and caregiver engagement was positively correlated with perceived self-efficacy in managing disruptive patient behaviours. This tends to confirm that family caregiver engagement is especially crucial for medically frail patients (e.g. children, elderly people affected by mental disorders or neurodegenerative disease) [37] and those living in rural or remote communities, where healthcare services are often fragmented and geographically dispersed. Failure to support active engagement among family caregivers may therefore be regarded as a missed opportunity for ensuring the sustainability of healthcare services and the effectiveness of clinical relationships [38] and all medical interactions. To date, however, little is known about the unique needs of elderly health consumers' family caregivers or their expectations regarding health and care services, and little attention has been devoted to their perspectives and communication needs in the healthcare environment, especially in rural and remote areas. The present findings also confirm that external support is entirely insufficient to meet the needs of service users, and family caregivers are often required to fill gaps in service users' care needs. This aligns with evidence from other studies regarding the crucial role of family members in caring for elderly patients [39,40].
The quantitative analysis further confirmed that female individuals are the pivotal family caregivers, providing both pragmatic and emotional support for their relatives. Again, this finding aligns with previous evidence that the caregiving burden is typically borne by women [38,40]. Female caregivers were also the main participants in the service co-design and early implementation phases of this project, confirming existing evidence that informal caregiving falls mainly to wives or daughters, who devote much of their life to caring tasks [38]. In a remote rural area like Vallecamonica, patterns of caregiving are also affected by the fact that young people tend to gravitate to more populated areas. The more highly educated and those looking for better jobs are motivated to move away, while people with lower levels of education tend to remain in the valley, and care duties therefore fall to them. As the scoping review indicated, there are few services to support family caregivers in rural and remote areas despite calls for more research in this regard [41]. Lack of resources, high service demand, and low service accessibility mean that local service providers invest fewer resources in services for relatives, and this is the root cause of caregivers' strain. What also emerged from the study was the clear need for peer-to-peer groups to allow caregivers to share their experiences in the pursuit of empathy and relief. As in other social contexts, it also appears that while technology can provide significant logistical support, people still need human encounters that cannot be provided digitally [19]. The present research demonstrates that family caregivers can and should play an active role in the design, delivery, and assessment of new elder services. Recent studies have highlighted several benefits of coproduction as 'an umbrella concept that captures a wide variety of activities that can occur in any phase of the public service cycle and in which state actors and lay actors work together to produce benefits' [42]. First, the study confirms that co-production increases the quality of services and outcomes because final solutions are designed and created on the basis of user needs and requests [43]. Secondly, co-production enhances user satisfaction and the use of co-produced solutions [44,45]. Third, close collaboration increases trust among users, service providers, and service organizations by fostering strong and trustful relationships [46]. Finally, close collaboration with service users taps into different points of view, increasing the level of service innovation [47]. In line with previous scientific evidence, we found that co-production is an effective means of creating new services for family caregivers, whose experiential knowledge proved to be a key resource for the project team in delivering and managing services [48]. In the co-design phase, caregivers drew on their personal daily experiences to identify the most useful ideas. In the co-delivery and co-assessment phases, they provided timely feedback on service effectiveness by reporting their personal experiences of service activities. The practical insights and suggestions of family caregivers enabled service providers to shape effective, satisfactory, and successful services that made caring activities more effective. Recognizing their crucial role, family caregivers sought to be treated as partners in elder care.
To that end, they sought to increase their technical knowledge and competences by requesting training courses and informational materials that would help them to become more effective and knowledgeable in managing and delivering care. In short, investing in caregivers' knowledge is likely to increase the effectiveness of elder care and support [49]. Less positively, the transferability analysis indicated that local municipalities remain reluctant to acknowledge caregivers' pivotal role. Given the increasing number of elders and the difficulties in caring for them, policy makers prefer to address elders' needs directly. However, as the Covid-19 emergency confirms, this model of care delivery may fail because resources are limited and demand is rising [50]. In these circumstances, valuing caregivers' support may offer a new and sustainable model of care that reduces the medium- and long-term strain on health and social care facilities [51]. Future studies should seek to confirm the relevance of caregiver engagement in all phases of the service life cycle.

Conclusions

This paper describes the foundation, co-design, first piloting and transferability analysis of a novel psycho-social support service dedicated to family caregivers of elderly people in the remote rural area of Vallecamonica. The process itself can be considered a good practice for innovating the healthcare and welfare support dedicated to ageing-in-place initiatives. Indeed, starting from a deep analysis of the population's characteristics, caregiving burden, service usage levels and expectations of support, it was then possible to directly engage family caregivers in the co-design of a new social service able to truly answer their needs and expectations, with full sensitivity to the cultural and anthropological specificities of their local community. The SOS Caregivers service described here is based on the idea that a dedicated support programme can enhance caregivers' ability to care more effectively for elderly health consumers and to become valuable partners in the social and welfare systems. In other words, family caregivers should be seen as active partners in elder care rather than as mere recipients, and policy makers and researchers should involve family caregivers in decision-making about relatives' care and services, acknowledging the value of their support.

Limitations

Future research should investigate the contextual, economic, demographic, and geographic factors that warrant investment in elder care rather than caregiver support. As the present analysis is context-specific, the results are less generalizable, and further research should investigate the effects of caregiver engagement in other settings, such as urban areas and developing countries. Finally, privacy issues made it impossible to access clinical data, which prevented comparison of patient-caregiver dyads.
Cosmicflows Constrained Local UniversE Simulations

This paper combines observational datasets and cosmological simulations to generate realistic numerical replicas of the nearby Universe. These latter are excellent laboratories for studies of the non-linear process of structure formation in our neighborhood. With measurements of radial peculiar velocities in the Local Universe (cosmicflows-2) and a newly developed technique, we produce Constrained Local UniversE Simulations (CLUES). To assess the quality of these constrained simulations, we compare them with random simulations as well as with local observations. The cosmic variance, defined as the mean one-sigma scatter of cell-to-cell comparison between two fields, is significantly smaller for the constrained simulations than for the random simulations. Within the inner part of the box where most of the constraints are, the scatter is smaller by a factor of 2 to 3 on a 5 Mpc/h scale with respect to that found for random simulations. This one-sigma scatter obtained when comparing the simulated and the observation-reconstructed velocity fields is only 104 ± 4 km/s, i.e. the linear theory threshold. These two results demonstrate that these simulations are in agreement with each other and with the observations of our neighborhood. For the first time, simulations constrained with observational radial peculiar velocities resemble the Local Universe up to a distance of 150 Mpc/h on a scale of a few tens of megaparsecs. When focusing on the inner part of the box, the resemblance with our cosmic neighborhood extends to a few megaparsecs (<5 Mpc/h). The simulations provide a proper Large Scale environment for studies of the formation of nearby objects.

INTRODUCTION

The formation of structures in the Universe, from tiny fluctuations at the era of recombination to the large diversity observed today, is a highly non-linear process. Its multi-scale nature is best studied by numerical experimentation. Cosmological simulations of structure formation rely on the cosmological principle which assumes the homogeneity of the Universe on large enough scales. The random nature of the primordial Gaussian perturbation field however implies that separate patches of the Universe are not identical. In that context, properties of patches of a few megaparsecs on a side vary widely. To overcome this cosmic variance, statistical comparisons to study the formation of structures are based on large observational datasets (e.g. Stoughton et al. 2002; Abazajian et al. 2003, 2009) and large cosmological simulations (e.g. Klypin et al. 2011; Alimi et al. 2012; Prada et al. 2012; Angulo et al. 2012; Watson et al. 2014; Klypin et al. 2014; Skillman et al. 2014). In order to also resolve small scale structures in the entire required large computational boxes, the mass resolution must be sufficiently high, resulting in simulations that can be time consuming and expensive. An alternative approach is to reduce the cosmic variance by focusing on the numerical study of structure formation in the nearby Universe and a direct comparison of theoretical models with local observations. There is a double advantage in producing simulations resembling the Local Universe. First, our neighborhood is without any doubt the best-observed volume of the Universe and as such hides its own cosmological treasures, often as fossils from its early epochs.
Second, because a very large box size is not required for such simulations, the desired high resolution can be more easily achieved without being overly time consuming. However, this double advantage requires a correct modeling of the initial conditions based on the observed structures in the Local Universe. Standard cosmological simulations are obtained with initial conditions drawn from a random realization of the primordial perturbation field for a given cosmological model. Observational data of the Local Universe are used as additional constraints on these initial conditions so that the resulting simulations resemble the Local Universe. The resulting constrained simulations attempt to describe the evolution of the observed structures in the nearby Universe. The billions of data points that characterize the initial conditions of a cosmological simulation cannot be constrained by only thousands of observational data points. The main aim of the different existing reconstruction techniques is to reduce the cosmic variance of the resulting constrained simulations. Local object candidates can then be identified to study their formation and evolution in the proper environment. Since the introduction of the POTENT method and the first attempt to reconstruct initial conditions from sparse observational data of the velocity field traced by galaxies (Bertschinger et al. 1990; Nusser & Dekel 1992), a lot of progress has been made over the last two and a half decades. The constrained realization technique proposed by Hoffman & Ribak (1991) constitutes such a significant step forward in developing the reconstruction method further. With this technique, Ganon & Hoffman (1993) constructed constrained initial conditions using the POTENT data that led to the first N-body simulation that mimicked the matter distribution around the Local Group in a 256 h−1 Mpc box (Kolatt et al. 1996). The first step in generating initial conditions for constrained simulations consists in reconstructing today's three-dimensional density field from sparse and noisy observational data of the local galaxies. These observational data can be either the positions and radial peculiar velocities of galaxies (Kravtsov et al. 2002; Klypin et al. 2003; Sorce et al. 2014a) or redshift catalogs (Heß et al. 2013). In a second step the initial linear density field must be retrieved to derive the initial conditions for the simulations. There are two possible ways: backwards, as described in section 2 of this paper, or forwards, as recently proposed by Kitaura (2013); Heß et al. (2013); Jasche & Wandelt (2013); Wang et al. (2013). In the latter case, the initial density field is sampled from a probability distribution function consisting of a Gaussian prior and a likelihood. A complete overview of the different methods is given in Wang et al. (2014). Coming back to the observational dataset used here to constrain the initial conditions, this paper is part of the CLUES project (Constrained Local UniversE Simulations, http://www.clues-project.org/; Gottlöber et al. 2010; Yepes et al. 2014) which focuses on using peculiar velocities as constraints. Within this project, indeed, a number of constrained simulations of the Local Universe, using peculiar velocities from Karachentsev et al. (2004); Willick et al. (1997); Tonry et al. (2001) as observational constraints, have been run.
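To make the constrained-realization step concrete, the following is a minimal, purely illustrative one-dimensional sketch of the Hoffman & Ribak (1991) algorithm. The Gaussian correlation function, grid size, constraint positions, imposed values and noise level are toy assumptions for this sketch, not the actual CLUES setup.

```python
# Toy 1D Hoffman & Ribak (1991) constrained realization (illustrative only).
import numpy as np

rng = np.random.default_rng(42)
n, L, r0 = 256, 100.0, 5.0            # grid cells, box length, correlation length

x = np.linspace(0.0, L, n, endpoint=False)
r = np.abs(x[:, None] - x[None, :])
r = np.minimum(r, L - r)              # periodic separation
cov = np.exp(-0.5 * (r / r0) ** 2)    # toy prior covariance <delta_i delta_j>

# Random realization delta_RR drawn from the prior
delta_rr = rng.multivariate_normal(np.zeros(n), cov + 1e-10 * np.eye(n))

# Constraints: noisy field values imposed at a few cells
idx = np.array([30, 90, 150, 210])                    # constrained cells
sigma = 0.1                                           # assumed observational error
c_obs = np.array([1.0, -0.5, 0.8, 0.2])               # imposed "observed" values

xi_dc = cov[:, idx]                                   # <delta(x) C_j>
xi_cc = cov[np.ix_(idx, idx)] + sigma**2 * np.eye(4)  # <C_i C_j> including noise

# Values of the same constraints "observed" in the random realization
c_rr = delta_rr[idx] + rng.normal(0.0, sigma, 4)

# delta_CR(x) = delta_RR(x) + <delta C_i> <C_i C_j>^-1 (C_j - C~_j)
delta_cr = delta_rr + xi_dc @ np.linalg.solve(xi_cc, c_obs - c_rr)
```

The same linear-algebra structure carries over to three dimensions, where the correlation functions are evaluated from the assumed power spectrum rather than prescribed directly.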
Although estimating peculiar velocities constitutes an observational challenge, there is a double advantage in using them: first, they are highly linear and correlated on large scales; second, they are excellent tracers of the underlying gravitational field as they account for both the baryonic and the dark matter. However, the first generation of these constrained simulations was affected by a substantial shift in the positions of objects recovered at redshift zero. To reduce this shift a new technique has been developed to account for the cosmic displacement field (Doumler et al. 2013a,b,c) and the noisy velocity information (Sorce et al. 2014a). The method has been applied to the first catalog of the Cosmicflows project (http://www.ipnl.in2p3.fr/projet/cosmicflows/; Tully et al. 2008) producing simulations resembling the Local Universe down to a few megaparsecs within 30 h−1 Mpc (Sorce et al. 2014a). At the end of 2013 (Tully et al. 2013), a second catalog of the Cosmicflows project was released. Superior in size (number of constraints, ∼ 8000 against 2000) and extent (∼ 150 h−1 Mpc against 30 h−1 Mpc) to the first catalog, it constitutes an ideal supplier of observational data to construct initial conditions constrained by the observed peculiar velocities of galaxies. Uniting the second release of the Cosmicflows data and the newly developed technique to build more accurate constrained initial conditions, this paper aims at demonstrating by how much the cosmic variance is decreased with respect to random simulations and how far the resemblance with the Local Universe can be extended. The structure of the paper is as follows. In the second section, we briefly review the methodology described in Sorce et al. (2014a) and Sorce (2015) which combines several techniques to reduce biases in the observational catalog and to build constrained initial conditions. The third section presents the resulting constrained simulations of the Local Universe: the cosmic variance is estimated, the Large Scale Structure is compared with galaxies from the 2MASS redshift catalog (Huchra et al. 2012) and with the three-dimensional reconstruction of the overdensity and velocity fields of the Local Universe at redshift zero. In addition, the simulated Laniakea Supercluster of galaxies, a basin of attraction of local velocity flows, is compared with the observed Laniakea Supercluster (Tully et al. 2014). Finally, before concluding, dark matter halo candidates for well known nearby clusters, such as Virgo, are identified in the simulations.

METHODOLOGY

In this section we discuss how the initial conditions have been constructed from the observational data: a set of positions and radial velocities of galaxies. Important steps in this procedure are the grouping of galaxies and the minimization of biases.

Observational data: Cosmicflows-2

Cosmicflows-2 is the second generation observational catalog of galaxy distances built by the Cosmicflows collaboration. Published by Tully et al. (2013), it contains more than 8,000 accurate galaxy distances. Distance measurements come mostly from the Tully-Fisher relation (Tully & Fisher 1977) and the Fundamental Plane methods (Colless et al. 2001). Cepheids (Freedman et al. 2001), Tip of the Red Giant Branch (Lee et al. 1993), Surface Brightness Fluctuation (Tonry et al. 2001), supernovae of type Ia (Jha et al.
2007) and other miscellaneous methods also contribute to this large dataset, but to a minor extent (∼ 12%), although they individually have higher weights because of smaller errors. Since the final goal of the paper is to build initial conditions constrained by this catalog in order to run cosmological simulations, working above the scale of galaxy virial motions in clusters (non-linear displacements) is required. Therefore, in this paper, the grouped version of cosmicflows-2 is used. With a method similar to that described in Tully (2015a,b), 552 groups and 4303 single galaxies can be identified in the dataset, shrinking the number of constraints to 4855. The resulting number density is large enough to construct constrained initial conditions as demonstrated by Doumler et al. (2013a,b,c). Figure 1 shows the normalized cumulative distribution of distances in cosmicflows-2 as well as the mean and median distances. The catalog extends up to 230 h−1 Mpc but on average constraints are within ∼ 66 h−1 Mpc (∼ 61 h−1 Mpc for the median). In fact, about 60% of the constraints are within a sphere of 70 h−1 Mpc radius, approximately 80% are within 100 h−1 Mpc, and ∼ 98% are within 160 h−1 Mpc. From this distribution, the inner box is expected to be the most constrained part of the simulation and beyond 160 h−1 Mpc the random component is anticipated to overcome the constraints. Intermediate results should be found in between.

Minimizing Biases in the Observational Catalog

Tully et al. (2013) warned us that cosmicflows-2 is affected by biases with effects that cannot be ignored anymore as the effects are stronger with increasing distance. There are four known biases:

• In the literature the first bias has been given a number of terms that are used interchangeably. These are Problem I, Selection Effect/Bias, r against V, Distance-dependent, Frequentist, Calibration problem, M-bias of the second kind (Kapteyn 1914; Malmquist 1922; Han 1992; Sandage 1994; Teerikorpi 1997, 1993, 1990; Hendry & Simmons 1994; Willick 1994; Teerikorpi 1995). This bias is analogous to a selection effect in magnitude (dim galaxies are selectively excluded from the observational sample) resulting in underestimated distances. This bias has been nulled in the observational cosmicflows-2 catalog (e.g. Tully & Courtois 2012; Sorce et al. 2013, 2014b).
• The second bias, referred to as Homogeneous Malmquist Bias or Problem II, General Malmquist Bias, Geometry Bias, V against r, Classical, Bayesian, Inferred-distance problem, M-bias of the first kind in the literature (Kapteyn 1914; Malmquist 1920; Lynden-Bell et al. 1988; Han 1992; Teerikorpi 1997, 1993, 1990; Sandage 1994; Hendry & Simmons 1994; Teerikorpi 1995; Strauss & Willick 1995), is due to the fact that the number of observable galaxies from our perspective increases with the distance. There are more galaxies available to scatter inward due to errors than outward, creating the tendency to locate galaxies closer than they should be, namely to underestimate distances.
• In addition, because the Universe is inhomogeneous on small scales, galaxies are more likely to be scattered from high density regions towards low density regions than the opposite, resulting in the Inhomogeneous Malmquist Bias (e.g. Dekel 1994; Hudson 1994; Landy & Szalay 1992).
• On top of this last bias, there is a lognormal error distribution, or an asymmetry in the distribution of fractional errors on distances, because distances are derived from distance moduli via a logarithmic function. Thus, if a given galaxy is located farther than it should be rather than closer, the error on the distance is larger, although this is not reflected by the assigned fractional errors.

These biases lead mainly to a major infall onto the Local Volume. An iterative method to minimize the infall and reduce spurious non-gaussianities in the radial peculiar velocity distribution was applied to obtain a new distribution of radial peculiar velocities and corresponding distances (Sorce 2015).

Building Initial Conditions

To build initial conditions for dark matter only numerical simulations (a set of particles with velocities and positions at a starting redshift) constrained by peculiar velocities, we rely on four techniques assuming a prior cosmological model:

• The Wiener-Filter method (WF, Zaroubi et al. 1995) to reconstruct the cosmic displacement field required to account for the displacement of constraints from their precursors' locations,
• the Reverse Zel'dovich Approximation (RZA) to relocate constraints at the positions of their progenitors (Doumler et al. 2013c,a,b) and to replace noisy radial peculiar velocities by their 3D reconstructions (Sorce et al. 2014a),
• the Constrained Realization (CR, Hoffman & Ribak 1991) of Gaussian field technique to produce overdensity fields constrained by observational data, adding a random realization to compensate for the missing power spectrum. These latter are then converted into white noise that can be used to increase the resolution,
• the resolution is increased by adding some random small scale features in the white noise, then the white noise is converted back to build initial conditions for cosmological simulations.

These four steps can be summarized in a set of equations:

ψ_α^WF(r) = v_α^WF(r) / (a H(t) f(t)) = (1 / (a H(t) f(t))) ⟨v_α(r) C_i⟩ ⟨C_i C_j⟩^{-1} C_j ,   (1)

where α = x, y, z and C_i are the constraints plus their uncertainties and f(t) = d ln D(t) / d ln a(t) is the growth rate (D the growth factor and a the scale factor). Brackets denote correlation functions depending solely on the assumed prior cosmological model (here the power spectrum); v is the velocity field and ψ is the displacement field. The 'WF' exponent means that a field is obtained with the Wiener-Filter method.

x_init^RZA = r − ψ^WF(r) ,   (2)

where x_init^RZA is the approximate location of the constraints' precursors (linear theory at first order, valid down to 2 h−1 Mpc) and r is the measured position of the constraints. Note that the left-hand side of Eq. 2 is an extension of equation 14 in Nusser et al. (1991).

δ^CR(r) = δ^RR(r) + ⟨δ(r) C_i⟩ ⟨C_i C_j⟩^{-1} (C_j − C̃_j) ,   (3)

where δ stands for overdensity fields. δ^RR stands for the random realization field and each C̃_j represents a random constraint drawn from δ^RR. δ^CR represents the constrained realization field.

δ(k) = ω(k) √(P(k) n / V) ,   (4)

where ω(k) stands for the white noise in Fourier space, P represents the power spectrum, n is the number of particles and V the volume of the simulated box. This last equation is first used in reverse to convert the overdensity fields into white noises, then random small scale features are added to increase the resolution and the equation is used again to obtain the higher resolution density fields to prepare the initial conditions.
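As a purely illustrative complement to equations (2) and (4), the following numpy sketch shows the RZA shift and one possible density-to-white-noise conversion. The Fourier normalization, the √(P(k) n / V) factor, follows the convention written above and is an assumption; the production code may adopt a different one.

```python
# Minimal numpy sketch of equations (2) and (4); the normalization convention
# is an assumption for illustration, not the exact production-code choice.
import numpy as np

def rza_initial_positions(r, psi_wf):
    """Equation (2): x_init^RZA = r - psi^WF(r) for (N, 3) position arrays."""
    return r - psi_wf

def _k_grid(n, box_size):
    """Magnitude of the wave vector on an n^3 FFT grid of side box_size."""
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    kmag.flat[0] = 1.0           # dummy value; the k=0 mode is zeroed anyway
    return kmag

def density_to_whitenoise(delta, pk, box_size):
    """Equation (4) in reverse: omega(k) = delta(k) / sqrt(P(k) n / V).

    pk is a callable returning P(k) for an array of k magnitudes."""
    amp = np.sqrt(pk(_k_grid(delta.shape[0], box_size)) * delta.size / box_size**3)
    omega_k = np.fft.fftn(delta) / amp
    omega_k.flat[0] = 0.0        # remove the mean mode
    return omega_k

def whitenoise_to_density(omega_k, pk, box_size):
    """Equation (4) forwards: delta(k) = omega(k) sqrt(P(k) n / V)."""
    amp = np.sqrt(pk(_k_grid(omega_k.shape[0], box_size)) * omega_k.size / box_size**3)
    return np.fft.ifftn(omega_k * amp).real
```

Increasing the resolution then amounts to embedding ω(k) in a larger Fourier grid, filling the new high-k modes with fresh random numbers, and applying the forward conversion again.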
We apply the whole scheme to the observational catalog cosmicflows-2 within the framework of the Planck cosmology (Ω_m = 0.307).

CONSTRAINED LOCAL UNIVERSE SIMULATIONS

In the second section, we noted that the largest distance of the cosmicflows-2 catalog is 230 h−1 Mpc, thus the size of the computational box should be sufficiently large to avoid spurious effects due to periodic boundary conditions where the observational data still have a constraining power. Tests have shown that a 500 h−1 Mpc box meets this requirement. Such a computational volume extends the study up to the Shapley supercluster (the location of the farthest constraint) and, with 512^3 particles, the mass resolution is 8 × 10^10 h−1 M⊙, sufficient to resolve large groups and clusters of galaxies. Two of the simulations were also run at higher resolution (1024^3 particles) to check that none of the conclusions drawn in this paper are affected by the 512^3 choice. As none are, we settle on the reasonable choice of 512^3 particles. A total of 25 constrained simulations and 15 random simulations have been performed in order to study the residual cosmic variance. The initial conditions of these different simulations have been constructed in various ways:

• The first fifteen constrained initial conditions are built out of different random realization fields δ^RR plus the observational dataset (see equations 1 to 3). This first step uses the Wiener Filter, the reverse Zel'dovich approximation and the constrained realization technique (Zaroubi et al. 1999; Doumler et al. 2013c; Sorce et al. 2014a; Hoffman & Ribak 1992) to generate 256^3 density grids in accordance with the expected minimum scale on which the constraints are effective. The corresponding white noise field is used as an input to increase the resolution to 512^3 particles with the Ginnungagap code (https://github.com/ginnungagapgroup/ginnungagap). This fully parallel 'MPI+OpenMP' code adds random small scale fluctuations in real space to increase the resolution to any level within a given cosmology; the resolution limit is dictated only by the total memory of the supercomputer. A final simulation is then characterized by two random seeds: the random realization field δ^RR and the added small scale features;
• The additional ten constrained initial conditions share the same random realization field δ^RR, but different seeds are used to increase the resolution to 512^3. They are thus expected to differ only on scales smaller than that of the input white noise field;
• Finally, a set of fifteen random, i.e. not constrained, initial conditions has been constructed. They share the same seeds (same δ^RR) as the fifteen constrained initial conditions.

The simulations based on all these initial conditions have been performed with Gadget-3 (Springel 2005) from redshift 60 to redshift 0 with a 25 h−1 kpc force resolution. In Figure 2, we first compare the resulting power spectra at z = 0 to the linear power spectrum of the chosen cosmology. As expected, the 15 random simulations (grey area) scatter at large scales around the linear input power spectrum (blue solid line). The 15 constrained simulations (dotted black line) tend to have less power on large scales, an effect which decreases with the box size as shown in the Appendix, because a smaller and smaller fraction of the box is constrained. The bottom panel of Figure 2 represents the power spectra divided by the Planck power spectrum as well as the mean values.
Although on the low side, the power spectra of the constrained simulations are within the scatter obtained with those of the random simulations. Their mean is on average smaller by a factor 1.3 (a factor that decreases with the scale) than that of the random simulations on large scales. As for the ten constrained simulations built out of the same random realization field, they share the same Large Scale power spectrum (red dashed lines), in agreement with the fact that the random features added to increase the resolution affect only the small scales. In the middle panel, the ratio of the power spectra to their mean for the three samples (15 constrained simulations, 10 constrained simulations with the same seed in 256³, and 15 random simulations) is displayed. By construction, this ratio is indistinguishable from a straight line for the 10 simulations where only small scale structures have been added.

Next, the Amiga Halo Finder (Knollmann & Knebe 2009) is applied to each simulation to compile a list of dark matter halos. In Figure 3, using M200 defined with respect to the critical density, the cumulative mass functions of the different simulations are compared with the same color code as that of Figure 2. The blue color now stands for the Tinker cumulative mass function as defined by Tinker et al. (2008), using the online mass calculator of Murray et al. (2013) and the Planck cosmological parameters as defined in section 2 of this paper. In the left panel of the figure, the cumulative mass functions for the entire 500 h⁻¹ Mpc boxes are shown (top), as well as their cumulative mass functions divided by the Tinker cumulative mass function (middle) and their scatters around their respective means (bottom). The cumulative mass functions of the different simulations overlap and, as expected, a smaller scatter is observed for the constrained simulations sharing the same random realization field (in red).

In order to evaluate the effect of constraints on the cumulative mass functions, the latter are derived in a sphere of radius 160 h⁻¹ Mpc centered on the box center. A radius of 160 h⁻¹ Mpc is a reasonable choice as it encompasses ∼98% of the constraints (see section 2). The corresponding cumulative mass functions are shown in the right panel of Figure 3. One can clearly see that at high mass ranges the cumulative mass functions of the constrained simulations are on the low side compared to the cumulative mass functions of the random realizations computed in the same sphere. The latter, contrary to the former, tend to scatter symmetrically around the Tinker cumulative mass function. As expected, the scatter of the cumulative mass functions in the sphere is smaller for constrained simulations than for random simulations (bottom of the right panel of Figure 3) and is the smallest for constrained simulations sharing the same random realization field. In this respect, the cosmic variance is reduced in constrained simulations with respect to random simulations. We investigate the residual cosmic variance more thoroughly in the following subsection.

Reduced Cosmic Variance

In this section we compare and quantify the cosmic variance within the different sets of simulations, i.e. constrained and random. To this end, a cloud-in-cell scheme on a 512³ grid is applied to the particle distributions of the initial conditions (z=60) and of the simulations at z=0, with a subsequent Gaussian smoothing on a scale of 5 h⁻¹ Mpc.
Normalized by the mean density, the resulting smoothed density fields of any pair of constrained or random simulations are compared cell by cell. For each pair we build a density-density plot (the density field of a first simulation versus the density field of a second simulation). If the two simulations were identical, all points would follow the 1:1 relation. We define the cosmic variance between two simulations as the one-sigma, hereafter 1σ, scatter (or standard deviation) around this 1:1 relation. We repeat this procedure for the 105 pairs of the 15 random simulations and those of the 15 constrained simulations, as well as for the 45 pairs of the 10 constrained simulations. Then we calculate the mean and the variance of the 1σ scatters. The result is three points with error bars at the x-axis value of 500 h⁻¹ Mpc in each of Figures 4 and 5, at z=60 and z=0 respectively (filled dark grey, black and light grey circles for each of the simulation types: random, constrained, and constrained sharing the same random realization). We repeat this procedure in smaller sub-boxes of size 400, 300, 250 h⁻¹ Mpc, etc., centered on the original box, in order to measure the cosmic variance in the smaller central volumes where most of the observational data are, i.e. where they are the most effective.

In Figure 4, at the starting redshift, the mean scatter of the random initial conditions is independent of the size of the sub-box, with an increasing variance in smaller sub-boxes. On the contrary, the mean scatter of the 15 constrained initial conditions starts to decrease substantially below 300 h⁻¹ Mpc. As the median distance in cosmicflows-2 is only 61 h⁻¹ Mpc and ∼98% of the measurements are within 160 h⁻¹ Mpc, the standard deviation between density fields in the initial conditions is reduced in the expected large volume. The 10 constrained initial conditions with the same random realization field and added small scale features lie on a single line, because adding modes at the scale of ∼0.98 (500/512) h⁻¹ Mpc does not affect the smoothed density distribution shown here. It only slightly increases the variance in the smallest sub-box.

The top of Figure 5 shows the mean 1σ scatters at redshift z = 0 as a function of the sub-box size for the different types of simulations (random and constrained). The effect of non-linear clustering is visible on small scales. As the system evolves and becomes more non-linear, a larger fraction of the sub-box is covered by low density regions (voids) than by high density regions (clusters). Thus, voids become dominant in the volume. The densities of these nearly empty regions tend asymptotically to zero regardless of the initial field, while those of high density regions, rising from small but positive differences in the initial field, are magnified. Consequently, the probability of comparing a cell in a low density region with one in another low density region, with similar values, increases, reducing the scatter for each pair of random simulations. When considering sub-boxes smaller than 100 h⁻¹ Mpc, the 1σ scatters decrease on average for the random simulations, although the variance of these scatters increases because there is still a probability of finding a high density region in one simulation of a pair versus a low density region in the other simulation of the pair. This is a limitation of the comparison method: the smaller the sub-box considered, the higher the probability of finding the same kind of field, even if the probability of hitting different types of fields is non-null.
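A minimal numpy/scipy sketch of this cell-to-cell comparison is given below. It is an illustration under stated assumptions rather than the authors' code: the fields are assumed to be periodic grids already produced by the cloud-in-cell assignment, and the division by √2 (treating the scatter as the perpendicular distance to the 1:1 line) is one possible convention that the paper does not spell out:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pair_scatter(field_a, field_b, box_size, r_smooth=5.0, sub_box=None):
    """1-sigma scatter around the 1:1 relation of a cell-to-cell comparison.

    field_a, field_b : 3D periodic grids (density normalized by the mean,
                       or one velocity component), same shape (n, n, n)
    box_size         : full box side (500 Mpc/h in the paper)
    r_smooth         : Gaussian smoothing scale (5 Mpc/h in the paper)
    sub_box          : optional central sub-box side for the comparison
    """
    n = field_a.shape[0]
    cell = box_size / n
    a = gaussian_filter(field_a, sigma=r_smooth / cell, mode="wrap")
    b = gaussian_filter(field_b, sigma=r_smooth / cell, mode="wrap")
    if sub_box is not None:
        half = int(round(sub_box / (2 * cell)))
        c = n // 2
        sl = slice(c - half, c + half)
        a, b = a[sl, sl, sl], b[sl, sl, sl]
    # scatter of the (a, b) points around the 1:1 line
    return np.std(a - b) / np.sqrt(2.0)
```

Averaging this quantity over all pairs of random simulations and over all pairs of constrained simulations gives the curves of Figures 4 and 5, and the ratio of the two means is the 'constraining power' used later in Figure 6.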
The effect is not as pronounced for the constrained simulations, because of the existence of similar (by construction) structures close to the center of the box (such as the Great Attractor and the Virgo cluster), and it disappears for the constrained simulations with the same random realization field. We repeat the same procedure for the three components of the velocity field in the bottom panel of Figure 5 and make the same observation. In addition, as a reference to assess the low values of the 1σ scatters, and thus the small discrepancies between the constrained simulated velocity fields, one can consider the validity of the Wiener-Filter reconstructed velocity field, which is ± [100-150] km s⁻¹ (Sorce 2015).

To summarize, Figures 4 and 5 reveal that the cosmic variance, measured with the 1σ scatter of cell-to-cell comparisons, is considerably reduced for constrained simulations, by a factor 2 to 3 on a scale of 5 h⁻¹ Mpc for both density and velocity fields in the inner part of the box, when compared to that obtained for random simulations. In addition, the error bars of these 1σ scatters are smaller by a factor of at least 2 when considering constrained simulations with respect to random simulations. As an example, taking a sub-box of 150 h⁻¹ Mpc, the cosmic variance is decreased from 0.5 to 0.3 for the density fields normalized by the mean density at z=0, and from 150 to 50 km s⁻¹ for the velocity fields.

In Figure 5, we compare density and velocity fields, in (sub-)boxes, of constrained and random simulations. The observations, however, approximate a spherical distribution with a large radial dependence, as demonstrated in Figure 1. Thus, corners of the (sub-)boxes appear at first to be less constrained than other regions at the edges of the (sub-)boxes. One might argue that spherical sub-volumes would be better suited for comparisons between random and constrained simulations. Using spherical regions to derive the 1σ scatters, we find no difference.

Figure 5. Mean (circles) and scatter (error bars) of one-sigma scatters (standard deviations) obtained with cell-to-cell comparisons carried out on pairs of density fields normalized by the mean density (top) and of velocity fields (bottom), smoothed on a 5 h⁻¹ Mpc scale at z=0. Scatters are given as a function of the sub-box size. Pairs are constituted of two random (dark grey), two constrained (black), or two constrained sharing the same random realization field (light grey) simulations.

Next, we apply different smoothings to the density fields normalized by the mean density to determine the constraining power of the observational dataset, combined with the method to build initial conditions, not only as a function of the box size but also as a function of the resolution. We define the resolution as 2π/R, where R is the value of the Gaussian smoothing in h⁻¹ Mpc, and the constraining power as the ratio of the mean 1σ scatter obtained for the random simulations to that derived for the constrained simulations. We give R values ranging from 1 to 8 h⁻¹ Mpc. Figure 6 displays the constraining power of the observational peculiar velocities, combined with the method to build initial conditions, in a resolution versus sub-box size plane: the darker the grey, the higher the ratio of the 1σ scatters and the higher the constraining power (the more the cosmic variance is decreased in the constrained simulations with respect to the random simulations). As expected, the larger the smoothing (the smaller the resolution), the higher the constraining power.
On the other hand, with smaller smoothing (higher resolution), the cosmic variance is larger. In addition, the larger the sub-box size, the smaller the constraining power, because of the decreasing number of constraints. One can also notice that the effect of non-linearities becomes prominent as the smoothing decreases: small scales are essentially not constrained by the observational data. The non-linear theory threshold is reached; on smaller scales, shell crossing has wiped out the initial correlations. In the next section we compare the constrained simulations to the observed Local Universe.

Figure 6. Constraining power at z=0 of the dataset, combined with the method to build initial conditions, defined as the ratio σ_R/σ_C of the mean one-sigma scatter obtained when comparing pairs of density fields normalized by the mean density of random simulations to that obtained for the pairs of density fields normalized by the mean density of constrained simulations. The higher the ratio, the darker the grey and the higher the constraining power (the smaller the cosmic variance) of the observational data combined with the method to build the initial conditions, in a sub-box size versus resolution plane. The resolution is defined as 2π/R_s, where R_s is the value of the Gaussian smoothing in h⁻¹ Mpc.

Simulation of the Large Scale Structure

In the previous subsection, a study of the cosmic variance demonstrated that the constrained simulations are remarkably similar to each other in the constrained part of the simulation box. To compare these simulations with the observed Local Universe, an observer is assumed to be at the center of the box, and the three supergalactic coordinates are defined similarly to the observational supergalactic coordinates. Comparing observations with simulations is not an easy task. Two possibilities are available, although both involve their own limitations: 1) comparing the Large Scale Structure in the simulations to the distribution of observed galaxy surveys, which are however magnitude limited; 2) comparing the fields of the simulations with the reconstructed fields obtained with the Wiener-Filter technique from the observational data. The latter, however, constitute only the linear fields and tend to the null field in the absence of data or in the presence of noisy data. These limitations highlight the importance of the simulations, which give access not only to the formation history but also to the full fields of the Local Universe, including non-linearities.

Before comparing reconstructions, simulations and redshift surveys, we begin with a description of the observed structures in both the reconstruction and in one randomly chosen simulation, as shown in Figure 7. Note that the choice of the simulation has no impact on the following discussion, as the simulations present the same Large Scale Structure in the constrained ∼200 h⁻¹ Mpc radius area, namely the high density regions and voids in this area are present in every simulation. In this figure, both the reconstructed and simulated (over)density (contours) and velocity (arrows) fields are displayed in a 5 h⁻¹ Mpc thick slice of the XY supergalactic plane. On top of the fields, red dots represent galaxies from the 2MASS redshift catalog in a 10 h⁻¹ Mpc thick slice. These galaxies are superimposed for comparison purposes, and one can notice fingers-of-god in the galaxy distribution.
Several well-known structures can be identified in both the reconstructed and the simulated Local Universe: the Perseus-Pisces (PP), Shapley and Coma superclusters, but also voids such as the Sculptor void. In addition to the major structures and voids, the Zone of Avoidance (ZOA), due to the dust of our Milky Way, is marked. Note that no structures have been reconstructed in that zone beyond 50 h⁻¹ Mpc from the center of the box, due to a lack of information in the observed data. However, the simulation shows structures in this region, in particular connections between objects above and below the ZOA. Little is known about structures in the Zone of Avoidance, but the simulation reproduces the observations quite well also in that zone:

• A potential supercluster, at a distance of about -60 h⁻¹ Mpc in the SGX direction, situated in the zone of obscuration (Kraan-Korteweg et al. 1994), lies on an extension of the filament departing from the Hydra and Antlia clusters and going across the Zone of Avoidance to reach the region of the Great Attractor (∼Centaurus Supercluster, Kraan-Korteweg et al. 1994).

• Kraan-Korteweg et al. (1994) noted a clustering at a distance greater than -100 h⁻¹ Mpc in the SGX direction, in the zone hidden by our galaxy's dust, a potential connection between the Horologium and Shapley superclusters. The simulation contains a high density zone beyond -100 h⁻¹ Mpc in that direction.

Regarding the comparison with redshift surveys, there is qualitatively a good agreement between the 2MASS redshift catalog and the simulation in Figure 7. To assess this agreement, we use the cosmic web based on the velocity shear tensor rather than on the gravitational tidal tensor (e.g. Hahn et al. 2007) or on the displacement tensor (Lavaux & Wandelt 2010). The eigenvalues of the velocity shear tensor permit one to determine whether a region of the Universe with such a velocity field constitutes a knot, a filament, a sheet or a void. It is thus straightforward to determine the position of a galaxy in the cosmic web. With a null threshold and the definition used in Hoffman et al. (2012) for the velocity tensor, three negative eigenvalues correspond to a void, while three positive values correspond to a knot. Two negative and one positive values constitute a sheet, while the opposite stands for a filament. If the simulations are in good agreement with the observations, there should be approximately the same number of galaxies in filaments and sheets (∼ 35-45%) and fewer in knots and in voids (∼ 10%) (e.g. Forero-Romero & González 2015; Libeskind et al. 2012; although they chose a threshold slightly higher than zero, while we choose zero, they checked that the general results are quasi-independent of the threshold choice as long as the chosen value is reasonable). Averaged over the fifteen cosmic webs computed from the fifteen different constrained simulations, 6±1% of the galaxies are in knots, 35±2% are in filaments, 48±2% are in sheets and 10±1% are in voids. The galaxies are distributed as expected, in agreement with the observations.

As for the comparison with the reconstruction, coming back to Figure 7, the Wiener-Filter reconstructs the Local Universe fairly well in the center of the box, although it shows only the linear fields and tends to the null field in the absence of data or in the presence of noisy data; thus the reconstruction and the simulation agree qualitatively very well. The reconstruction presents more features in the center relative to its edges, but the loss of precision with the distance from the center of the box is the cause.
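Returning to the velocity-shear classification used above, a minimal sketch of the eigenvalue counting is shown below. It assumes velocity components already gridded (e.g. by cloud-in-cell) and smoothed; the sign convention and Hubble normalization follow the definition of Hoffman et al. (2012) as described in the text, but the implementation itself is illustrative, not the authors':

```python
import numpy as np

def cosmic_web(vx, vy, vz, cell_size, h0=100.0, threshold=0.0):
    """Classify each grid cell from the velocity shear tensor eigenvalues.

    Returns the number of eigenvalues above `threshold` per cell:
    0 = void, 1 = sheet, 2 = filament, 3 = knot (null threshold).
    h0 is the Hubble constant in the velocity/length units of the grids
    (100 if lengths are in Mpc/h and velocities in km/s).
    """
    v = (vx, vy, vz)
    # partial derivatives d v_j / d x_i on the grid
    grad = [[np.gradient(v[j], cell_size, axis=i) for j in range(3)]
            for i in range(3)]
    # symmetric, dimensionless shear tensor Sigma_ij
    sigma = np.empty(vx.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            sigma[..., i, j] = -(grad[i][j] + grad[j][i]) / (2.0 * h0)
    eig = np.linalg.eigvalsh(sigma)          # shape (..., 3)
    return np.sum(eig > threshold, axis=-1)  # 0..3 per cell
```

Galaxies are then assigned the class of the cell they fall in, which yields the knot/filament/sheet/void fractions quoted above.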
The simulation allows one to go deeper into the Zone of Avoidance and to extend the study of the Large Scale Structure further and, more importantly, it supplies the whole density field, including non-linearities. An estimation of the agreement between the reconstruction and the simulations can be made with cell-to-cell comparisons between the velocity fields which, unlike the densities, are highly linear. The 1σ scatter is on average of the order of 100-150 km s⁻¹ (i.e. 2-3 h⁻¹ Mpc, the linear theory threshold, in terms of displacement). From this comparison between a simulation and the reconstruction, it can be concluded that the major attractors and voids of the Local Universe are properly simulated.

(Figure panel titles: Wiener-Filter Reconstruction; Constrained Simulation.)

The variance between the different constrained simulations is relatively low (see the previous subsection). If a Large Scale Structure feature is present in one of the simulations, there is a high probability of recovering it in all the other realizations. The Large Scale environment is thus robustly simulated. Since this Large Scale environment has been suggested to play an essential role in the formation and evolution of local objects (e.g. Garrison-Kimmel et al. 2014), these constrained simulations are ideal for studying local objects. First, we turn towards a recently discovered Large Scale feature in our neighborhood, the Laniakea supercluster of galaxies (Tully et al. 2014).

Example of the Laniakea Supercluster

In this subsection, we focus on a particular structure of the Local Universe, the Laniakea supercluster of galaxies discovered and defined in Tully et al. (2014). This supercluster is constituted of a local basin of attraction and every object with a perturbative motion toward it. Local flows encompassed in that region converge onto the local attractor. To compare the simulated superclusters with the observed-reconstructed one, the divergent field is evaluated. For a chosen volume, the divergent field corresponds to the velocities due solely to the densities in this volume. In order to remove most of the non-linear components, the density and velocity grids are smoothed with a 5 h⁻¹ Mpc Gaussian. With grids obtained via a cloud-in-cell scheme and a sphere of 60 h⁻¹ Mpc radius centered at [-47,13,-5] h⁻¹ Mpc, similarly to the definition given by Tully et al. (2014), we are able to find the local basin of attraction in the different simulations. Enlarging the radius, the contours of the simulated Laniakea superclusters of galaxies are recovered. The result obtained with one realization is given in the right panel of Figure 8. The left panel shows the Wiener-Filter reconstructed supercluster. In this figure, the gradient of colors corresponds to densities, while streamlines stand for the velocity fields. From blue to red the density increases, namely voids are in blue-black and high density regions are in yellow-red. The simulation and the reconstruction look alike. The Laniakea supercluster streamlines are very well simulated and converge at a similar location, about [−47,11,0] h⁻¹ Mpc. The Laniakea supercluster is surrounded by cosmic gravitational streams flowing towards the Perseus-Pisces supercluster on the positive SGX side, towards Shapley on the negative SGX side and towards Coma on the positive SGY side, in both the simulation and the reconstruction. Tully et al.
(2014) estimate the mass of the supercluster at around 6.5 × 10¹⁶ h⁻¹ M⊙ on the scale of this paper, in relatively good agreement with the simulations, where the number of dark matter particles contained in the Laniakea supercluster, simplified as a sphere of 60 h⁻¹ Mpc radius centered at [-47,13,-5] h⁻¹ Mpc, gives a mass of approximately 2 ± 0.3 × 10¹⁶ h⁻¹ M⊙ over the different realizations. To evaluate the agreement between the Wiener-Filter reconstruction and the simulations, we proceed as in the previous section with cell-to-cell comparisons within the Laniakea supercluster region. The 1σ scatter is 104 km s⁻¹ on average, with a standard deviation of 4 km s⁻¹; the median is identical to the mean. Cell-to-cell comparisons in this region between constrained simulations give on average 1σ scatters of 45 ± 6 km s⁻¹ for the velocity fields and of 0.29 ± 0.02 for the density fields at z=0. By comparison, the average 1σ scatters obtained when comparing random simulations in that region are about 145 ± 35 km s⁻¹ for the velocity fields and 0.43 ± 0.09 for the density fields.

Observed Clusters & Simulated Dark Matter Halos

Finally, we turn our attention to the study of halos at redshift zero. Lists of dark matter halos obtained with the Amiga Halo Finder (Knollmann & Knebe 2009) are used to match halos in the simulations with clusters in the Local Universe. Virgo is one of the largest clusters in our neighborhood, and it is the closest. It is natural to look for candidates of that cluster, as it should be the most efficiently constrained. In addition, attempts to find candidates for Centaurus (restricted to one component of a complex in the region), Coma and Perseus are made.

We begin with the Virgo cluster. Candidates are found at less than 3-4 h⁻¹ Mpc from its observational position in all the simulations. The replicas have masses (M200 with respect to the critical density) between 2.7 and 4.3 × 10¹⁴ h⁻¹ M⊙ across the simulations, which is in good agreement with current values given for this cluster (e.g. 2-6 × 10¹⁴ h⁻¹ M⊙ in the Planck cosmology, Karachentsev & Nasonova 2010), especially considering that, within simulations, the masses of cluster candidates can vary between half and twice the cluster mass (Ludlow & Porciani 2011) and that the estimated mass of a cluster depends on the method/definition used, as does the mass of a simulated halo.

Several parameters can be compared between cluster candidates and the observed cluster. To estimate the agreement between candidates and the observed Virgo cluster, we define the inaccuracy as the difference between the simulated and the observed (mass, distance, coordinates) or Wiener-Filter reconstructed (velocity) component, divided by: 1) the mean distance in cosmicflows-2 for the coordinates and distances; 2) the mean velocity of halos of approximately the same mass as the Virgo candidates for the velocities and their components; 3) the mean estimated mass of the observed cluster for the masses. Note that the definition we choose for the inaccuracy differs from that of the relative change or difference, as we normalize neither by the reference (i.e. observed or reconstructed) parameter nor by a function of the simulated and observed or reconstructed parameters. We justify this choice by the fact that, with the latter definitions (i.e. division by the distance), halos further from the box center would artificially appear in Figure 9 to be in better agreement with the observed clusters than their counterparts.
The same artificial observation would happen if more massive (faster) halos were normalized by the mass (velocity) of the halos. For this reason, the normalization is made with reference values independent of the cluster characteristics. The compared parameters are the three supergalactic coordinates and the square root of the sum of their squares, namely the distance, the three components of the velocity, the 3D velocity, and the mass (M200 with respect to the critical density for the simulations). Figure 9 gathers the means and scatters of the inaccuracies of the parameters as defined above for the Virgo candidates found in the different constrained simulations. The top panel shows the mean inaccuracies (filled black circles) and their standard deviations (error bars) for the Virgo candidates in the 15 simulations with a different random realization field, while the bottom panel gives the inaccuracies and standard deviations of the Virgo candidates found in the ten simulations sharing the same random realization field. There are two observations: 1) there is a very good agreement between the observed-reconstructed and simulated parameters, as the inaccuracies are close to zero; 2) the standard deviations between the different simulations are quite small (less than 10-15% of the considered parameter). As expected, the ten simulations which differ only in the random small scale features give Virgo candidates in slightly better agreement with each other (scatter less than 5-10% of the considered parameter).

We proceed similarly for Centaurus, Coma and Perseus, although we expect the candidates to be somewhat more scattered because the simulations, although constrained on the Large Scale, are not as well constrained as they are in the very center of the box, near the Virgo candidates. Rather than using the estimated masses, we settle for looking for halos more massive than 5 × 10¹³ h⁻¹ M⊙ close to the positions of the observed clusters. If present, these halos confirm a high density region where expected. In only one of the fifteen constrained simulations are we unable to locate a halo massive enough within a reasonable radius around the observed positions of the Coma and Centaurus clusters. As for the Centaurus location, in two simulations we have two candidates of approximately the same mass nearby (the distance between the two is less than ≈3 h⁻¹ Mpc); we select the one closest to the estimated observed position. Note that finding two halos close to each other is not surprising, as Centaurus is constituted of several massive components.

Figure 9. Inaccuracies of the parameters of the Virgo, Coma, Perseus and Centaurus dark matter halo candidates. The inaccuracy is defined as the difference between the simulated and the observed or Wiener-Filter reconstructed component, divided by, respectively, the mean distance in cosmicflows-2 for the supergalactic coordinates and the distance, the mean velocity of halos of approximately the same mass as the candidates for the velocity components and the velocity vector, and the mean estimated mass of the observed cluster for the mass. Top: candidates in the fifteen constrained simulations based on different random realization fields. Bottom: candidates in the ten constrained simulations sharing the same random realization field. For clearer visibility, datapoints corresponding to Virgo candidates are shifted to the left on the x-axis with respect to datapoints for the other cluster candidates. As expected, the Virgo candidates are the best constrained because they are close to the center of the box.
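In formula form, the inaccuracy used above is a difference scaled by a fixed reference that does not depend on the candidate itself. A hedged sketch, with the normalization choices named in the text supplied as an argument (any numeric values passed in would be assumptions):

```python
def inaccuracy(simulated, reference, norm):
    """Inaccuracy of one compared parameter, as defined in the text.

    simulated : value measured on the halo candidate
    reference : observed or Wiener-Filter reconstructed value
    norm      : fixed scale, independent of the candidate, e.g. the mean
                cosmicflows-2 distance (positions), the mean velocity of
                similar-mass halos (velocities), or the mean estimated
                observed cluster mass (masses)
    """
    return (simulated - reference) / norm
```

Keeping `norm` independent of the candidate avoids the artifact discussed above, where dividing by the distance itself would flatter distant halos.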
We then proceed as for the Virgo candidates, i.e. we derive the inaccuracies for the Centaurus, Coma and Perseus candidates, although we fix the mass estimate to 5 × 10¹⁴ h⁻¹ M⊙ as the reference for these three other clusters. We justify our choice by the facts that 1) the simulations are not as well constrained at the positions of these other clusters as they are at Virgo's position; 2) the estimated mass of these clusters varies with the measurement method and the defined boundaries of the cluster (as an example, the estimated mass of the Coma cluster is 5.1 × 10¹⁴ h⁻¹ M⊙ according to the weak lensing measurements of Gavazzi et al. (2009), while it is 1.4 × 10¹⁵ h⁻¹ M⊙ when using the radius of second turnaround as defined by Tully (2010)); and 3) the definition of the observed mass is different from the mass of the simulated halos. Results are given in Figure 9, with three different colors for the three different clusters. They are slightly shifted to the right on the x-axis with respect to the datapoints obtained for the Virgo candidates, for clearer visibility. As for the Virgo candidates, the inaccuracies are small in terms of positions, in agreement with the fact that the Large Scale Structure is well constrained. The scatters are larger than for the Virgo candidates, but that was expected, as we are looking at less constrained regions. It is not surprising that the highest scatter in velocities is observed for the Centaurus-like halos, as Centaurus is an ensemble of objects rather than a compact object: non-linear motions are involved in that zone and, because these regions are very dense, there are several massive halos with a high probability of picking a different one in each realization. Looking at the bottom of Figure 9, which shows the candidates found in the simulations sharing the same random realization field, the scatters are decreased by at least a factor 2. Only objects with low mass are heavily modified by the random small scale features added to increase the resolution, while the Large Scale Structure is quite unaffected by them. Considering that a goal of the CLUES project is to build a large statistical sample of Local Group-like entities, by selecting simulations containing all the appropriate cluster-like objects we will be able to build a factory of look-alikes of the Local Group in the proper environment, to study their statistical properties due to both the Large Scale Structure (constrained with different random realizations) and the small scale structures (constrained but sharing the same random seed) in a decoupled way.

CONCLUSION

The first generation of simulations constrained by observational peculiar velocities produced by the CLUES project was affected by a substantial shift in the positions of objects recovered at redshift zero. In this paper, we present a double improvement with respect to this first generation. First, we use the second catalog of the observational Cosmicflows project, which is superior in size (number of constraints), extent (∼ up to 150 h⁻¹ Mpc) and accuracy compared to the previously used catalogs. Second, we combine the newly developed techniques, involving the grouping of galaxies, the minimization of biases, and the reverse Zel'dovich approximation based on the Wiener-Filter method, with the constrained realization technique to build more accurate constrained initial conditions.
We are able to show not only that the constrained simulations exhibit a lower cosmic variance than random simulations, but also that they are in agreement with our cosmic neighborhood down to the non-linear scale (2-3 h⁻¹ Mpc). To do so, we compare a set of 15 random and 25 constrained simulations (10 of the 25 simulations share the same Large Scale random phase and differ only by the small scale features added to increase the resolution) of 512³ particles within a 500 h⁻¹ Mpc boxsize. A check with two 1024³-particle simulations showed that the results are not affected by the number of particles. We apply a cloud-in-cell scheme to all the simulations and smooth the resulting velocity and density grids with a 5 h⁻¹ Mpc Gaussian. We define the cosmic variance as the one-sigma scatter (or standard deviation) in density-density plots (the field of a first simulation versus the field of a second simulation) obtained from cell-to-cell comparisons between pairs of simulations of the same nature (random or constrained). We average the results over the different pairs and find that the 1σ scatters obtained for the constrained simulations are not only minimal when comparing the inner parts of the boxes, where most of the constraints are, but are also smaller by a factor 2 to 3 with respect to those found for the random simulations. The best constrained part of the simulations is the inner box within approximately 100 h⁻¹ Mpc for the smallest (cluster) scales; the resemblance extends to 300 h⁻¹ Mpc on larger scales (5 to a few tens of megaparsecs). This agreement meets expectations, as the cosmicflows-2 catalog extends to 230 h⁻¹ Mpc with 98% of the distance measurements within 160 h⁻¹ Mpc and a median distance of 61 h⁻¹ Mpc.

We found that on average the constrained simulations tend to have less power on large scales than the random simulations, although they are well within the expected scatter. This effect could be due to the observational data and/or to the way they are processed to build the initial conditions. We are working on an improved algorithm which reduces or removes this effect by taking into consideration the reduced accuracy of the reconstruction at large distances. This will be important when studying large scale velocity flows. However, the results presented in this paper are in no way affected by this effect. In fact, artificially adding power to the largest modes would only slightly change the mass of the most massive objects. We will study this effect in more detail with a series of new simulations and discuss it in a forthcoming paper.

To compare the simulations with the observed Local Universe, we use cell-to-cell comparisons between the reconstructed and the simulated velocity fields. We find that the simulations at redshift zero agree with the Wiener-Filter reconstruction obtained from the observations to within 100-150 km s⁻¹, or 2-3 h⁻¹ Mpc, namely the linear theory threshold. Taking as an example the Laniakea supercluster of galaxies, defined as a local basin of attraction and all flows going towards it, we show that the simulated and reconstructed Laniakea superclusters are in relatively good agreement. The mean 1σ scatter obtained from cell-to-cell comparisons between the reconstructed and simulated velocity fields is 104 ± 4 km s⁻¹. When comparing the simulations among themselves, the mean 1σ scatter of the simulated Laniakea superclusters' fields is 45 ± 6 km s⁻¹ for the velocity fields and 0.29 ± 0.02 for the density fields.
By comparison, similar regions between random simulations differ by 145 ± 35 km s⁻¹ for the velocity fields and 0.43 ± 0.09 for the density fields. Finally, we give an overview of the Virgo candidates, as well as of other well-known nearby clusters, at redshift zero and show that, again, the scatter among the simulated dark matter halo candidates themselves, and also between them and the observed-reconstructed clusters, in terms of position, velocity and mass, is only of the order of 10%. These comparisons show that the simulations are in agreement with each other and, above all, with the reconstruction. Because the reconstruction recovers fairly well all the major attractors and voids of the Local Universe, these must also be present in all the simulations at redshift zero for the latter to be similar to the reconstruction, i.e. to the observations. The method to build more accurate constrained initial conditions is extremely efficient. We produced the first simulations constrained with observational radial peculiar velocities which resemble the Local Universe up to 150 h⁻¹ Mpc, with increasing accuracy towards the inner part of the box.
2015-10-16T14:44:47.000Z
2015-10-16T00:00:00.000
{ "year": 2015, "sha1": "32961678428ae62ef716a0c998bf538e93eb3853", "oa_license": null, "oa_url": "https://academic.oup.com/mnras/article-pdf/455/2/2078/18513260/stv2407.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "32961678428ae62ef716a0c998bf538e93eb3853", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
269335072
pes2o/s2orc
v3-fos-license
L-carnitine and Ginkgo biloba Supplementation In Vivo Ameliorates HCD-Induced Steatohepatitis and Dyslipidemia by Regulating Hepatic Metabolism

Treatment strategies for steatohepatitis are of special interest given the high prevalence of obesity and fatty liver disease worldwide. This study aimed to investigate the potential therapeutic mechanism of L-carnitine (LC) and Ginkgo biloba leaf extract (GB) supplementation in ameliorating the adverse effects of hyperlipidemia and hepatosteatosis induced by a high-cholesterol diet (HCD) in an animal model. The study involved 50 rats divided into five groups, including a control group, a group receiving only an HCD, and three groups receiving an HCD along with either LC (300 mg LC/kg bw), GB (100 mg GB/kg bw), or both. After eight weeks, various parameters related to lipid and glucose metabolism, antioxidant capacity, histopathology, immune reactivity, and liver ultrastructure were measured. LC + GB supplementation reduced serum total cholesterol, triglyceride, low-density lipoprotein cholesterol, glucose, insulin, HOMA-IR, alanine transaminase, and aspartate transaminase levels and increased high-density lipoprotein cholesterol levels compared with those in the HCD group. Additionally, treatment with both supplements improved antioxidant ability and reduced lipid peroxidation. The histological examination confirmed that the combination therapy reduced liver steatosis and fibrosis while also improving the ultrastructural appearance of hepatocyte organelles. Finally, the immunohistochemical analysis indicated that cotreatment with LC + GB upregulated the immune expression of GLP-1 and β-Cat in liver sections to levels similar to those of the control animals. Mono-treatment with LC or GB alone substantially, but not completely, protected the liver tissue, suggesting that the combined use of LC and GB may be more effective in treating liver damage caused by high cholesterol than either supplement alone, by regulating hepatic oxidative stress and the protein expression of GLP-1 and β-Cat.
Introduction

Hepatic steatosis, or fatty liver, is a gastrointestinal disorder marked by excessive lipid accumulation in the liver (>5% by weight), induced by alcoholic or nonalcoholic causes [1]. Dyslipidemia is a metabolic disorder characterized by elevated blood levels of low-density lipoprotein cholesterol (LDL-C), total cholesterol (TC), and triglycerides (TGs), accompanied by a reduction in high-density lipoprotein cholesterol (HDL-C) levels [2]. Dyslipidemia leads to nonalcoholic fatty liver disease (NAFLD), including cirrhosis, mild steatosis, and nonalcoholic steatohepatitis (NASH) [3]. A high-cholesterol diet (HCD) promotes lipid accumulation in the liver, leading to both microvesicular and macrovesicular steatosis and triggering inflammation, which, when combined with obesity, reduces the body's ability to defend against the oxidative stress induced by excessive reactive oxygen species (ROS) production [3]. The progression from simple steatosis to steatohepatitis is significantly influenced by the accumulation of cholesterol rather than triacylglycerol [4]. Maintaining a diet rich in supplements enhances overall health and guards against a range of disorders [5].

L-carnitine (LC), a quaternary ammonium molecule (β-hydroxy-γ-N-trimethylaminobutyric acid), is synthesized from L-lysine and L-methionine in the kidneys and liver. It exerts various physiological effects by participating in energy metabolism [6][7][8]. Its primary role involves serving as an essential mitochondrial respiratory cofactor, facilitating long-chain fatty acid transport into the mitochondrial matrix for β-oxidation, thereby increasing energy production [9]. LC not only functions as a transporter in fatty acid β-oxidation but also exhibits numerous other beneficial functions, including increasing glutathione levels, reducing liver lipid peroxidation [10], decreasing hepatic inflammatory cytokines [11], ameliorating hepatic dysfunction induced by a high-fat diet [8], and suppressing heart fibrosis [12]. Additionally, LC effectively normalized insulin sensitivity in type 2 diabetic patients by regulating the synthesis of key glycolytic and gluconeogenic enzymes [13]. Scientific interest persists in exploring LC as a potential therapy for kidney disease, cardiovascular issues, diabetes, and symptoms related to carnitine deficiency and mitochondrial disorders [14][15][16].

There is a pressing need to explore alternative natural compounds derived from medicinal plants, herbs, and spices, owing to the numerous unwanted side effects associated with the synthetic chemical drugs used to treat metabolic disorders and obesity. Ginkgo biloba (GB, G.
biloba; syn.: Salisburia adiantifolia) has been used by humans for more than 2000 years as a valuable herb and is regarded as a living fossil [17]. It possesses a rich history in traditional medicine and has been demonstrated to offer therapeutic advantages, encompassing anti-inflammatory, photoprotective, hepatoprotective, cardioprotective, and antioxidant properties [17]. The ability of Ginkgo biloba leaf extract to protect organs from damage has been attributed to its active constituents: terpenoids, such as the sesquiterpene bilobalide; the diterpenoid ginkgolides A, B, C, M, and J; flavonoid glycosides, such as quercetin, kaempferol, and isorhamnetin; proanthocyanidins; biflavones; alkylphenols; 4-O-methylpyridoxine; simple phenolic acids; 6-hydroxykynurenic acid; and polyprenols [18]. Once GB is administered orally or parenterally, ginkgolides A and B, flavonoids, and bilobalide become available [19]. Flavonoid glycoside extracts ameliorate hepatic steatosis by inhibiting lipid absorption and hepatic lipogenesis and by improving hepatic fatty acid oxidation, which leads to an improvement in dyslipidemia [20]. The antioxidant defense-enhancing and anti-inflammatory properties of flavonoids provide protection against HDL dysfunction and cardiovascular disorders in the context of inflammatory disease conditions, such as atherosclerosis or obesity [21]. Additionally, Sikder et al. [22] reported the protective effect of common flavonoids against hepatotoxicity and inflammation induced by a high-cholesterol diet (2%), as the increases in body weight, liver function enzymes, lipid levels, lipid peroxidation levels, and the expression of markers of inflammation were significantly prevented when quercetin or rutin was taken along with an HCD. When GB was used to treat hyperlipidemia, it showed adaptive and regulatory effects [23]. Because GB can neutralize ferryl ion-induced peroxidation and scavenge free radicals, its activity has been linked to the treatment and prevention of diseases related to oxidative stress [24].

L-carnitine and Ginkgo biloba are commercial products widely used as nutraceutical herbs. According to many previous studies, LC and GB have antioxidant, anti-inflammatory, hypolipidemic, and anti-obesity effects that benefit insulin sensitivity, protein nutrition, and dyslipidemia [8,22,24,25]. In various animal models, LC and GB decreased body weight and visceral fat accumulation and accelerated the normalization of food intake [8,[25][26][27]. As a result, they cause weight loss and hypoglycemia by increasing insulin sensitivity and decreasing insulin resistance, through lowering free fatty acid levels or regulating cellular energy metabolism. GB and LC are a healthy combination that regulates the brain health status of kindled rats, and they significantly relieve alterations in the levels of monoamines in the hypothalamus and hippocampus of rats, reflecting the powerful anticonvulsant effects of LC and GB [26][27][28]. To the best of the authors' knowledge, there are no reports on the role of the combination of LC and GB in modulating HCD-induced hyperlipidemia. Therefore, this study sought to assess the underlying mechanisms through which supplementation with L-carnitine, Ginkgo biloba, or their combination mitigates HCD-induced hyperlipidemia and liver steatosis.
Materials

The standard rodent diet and high-cholesterol diet were procured from the Egyptian Company of Oils and Soap in Kafr-Elzayat, Egypt. L-carnitine (1 g in each tablet) and Ginkgo biloba capsules were obtained from Health Shop (an online Puritan's Pride Products Store in Egypt, support@ths-egypt.com); each GB hard gelatin capsule contained 260 mg of GB leaf extract powder, standardized to 24% ginkgo flavone glycosides and 6% total ginkgolides (terpene lactones).

Animals and Treatment

Adult male Wistar albino rats weighing between 130-145 g (9-10 weeks old) were purchased from the experimental animal facility in Helwan, Egypt. The care of the rats and all experimental procedures were conducted in compliance with institutional animal ethics guidelines and sanctioned by the Institutional Animal Care and Use Committee of Menoufia University under approval No. MUFS/F/HI/1/22. Animal welfare was ensured by maintaining the animals in a controlled environment with a temperature of 25 ± 5 °C and humidity between 50-70%, under a 12 h light/dark cycle. Fifty rats were randomly allocated into five groups, each consisting of 10 rats, and were observed for nine weeks (one week for adaptation and 8 weeks for the trial duration) [29]. The rats in each group were assigned to two cages (n = 5 per cage). The control group (G1) was fed standard commercial chow (100 g) containing 76% wheat, 5% corn starch, 10% casein, 4.7% cellulose, 1% vitamin mixtures, and 3.3% mineral mixtures. The remaining four groups were fed a high-cholesterol diet (HCD) to induce fatty liver. The prepared HCD (100 g) contained 17.48% protein, 52.99% carbohydrates, 10% cholesterol, 6.85% fat, 4.08% ash, and 2.16% vitamins and minerals [30]. The second group (G2) received only an HCD without any further treatment. The remaining three groups received an HCD along with daily oral administration of either LC (G3; 300 mg LC/kg bw), GB (G4; 100 mg GB/kg bw) [27,28], or both LC + GB (G5) at the same doses as before.

Serum Biochemical Analysis

After the 8-week experiment concluded, animals that had fasted overnight were sedated with 100% isoflurane (2 mg/kg body weight, or equivalent to a 2% inhaled concentration) and dissected. Blood was collected via cardiac puncture into a dry Eppendorf tube, allowed to coagulate for 30 min at room temperature, and then centrifuged for 10 to 15 min at 3000 rpm.

Histological Lesion Scoring and Image Analysis

The quantitative grading and scoring of histological lesions were conducted through microscopic examination of H. and E.-stained liver sections, assessing 10 random microscopic power fields (MPF) at 20× magnification. A quantitative scoring system for evaluating nonalcoholic steatohepatitis was used with slight modifications [38]. The intensity of liver fibrosis was evaluated on Masson trichrome-stained sections with the modified Knodell scoring system [39]. Semiquantitative scoring of the percentage of blue-stained collagen deposition areas in Masson's trichrome sections, and of the brown-stained immunohistochemical expression of GLP-1R and β-Cat, was performed on 10 random digital images of high microscopic power fields (HPFs; 40×) and then analyzed with the ImageJ software system (a Java-based application for analyzing images, version 1.51j8, USA) [40].
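The ImageJ area-fraction measurement described above can be approximated in a few lines of Python. This sketch is purely illustrative and not the authors' macro: the color thresholds separating blue (collagen) pixels are assumed values, and a real analysis would calibrate them against stained control slides:

```python
import numpy as np
from skimage import io

def collagen_area_fraction(image_path, blue_min=0.45, dominance=1.2):
    """Percentage of pixels whose color is dominantly blue, a rough
    stand-in for the Masson-trichrome collagen area fraction.

    blue_min, dominance: assumed illustrative thresholds, not the
    paper's settings.
    """
    rgb = io.imread(image_path)[..., :3].astype(float) / 255.0
    # a pixel counts as collagen if its blue channel is strong and
    # clearly dominates the red channel
    blue = (rgb[..., 2] > blue_min) & (rgb[..., 2] > dominance * rgb[..., 0])
    return 100.0 * blue.mean()
```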
Ultrastructure Study

Tiny liver tissue samples measuring 1 mm³ were first preserved in a fixative solution containing formalin and glutaraldehyde (4F1G) in phosphate-buffered saline. Subsequently, they were postfixed with 2.0% osmium tetroxide and embedded in resin, as described by Hayat [41]. Semithin sections were produced and stained with toluidine blue for observation under a light microscope. For ultrathin sections, specific areas from the semithin sections were chosen, placed on copper grids, and treated with uranyl acetate and lead citrate for staining. These ultrathin sections were analyzed using a JEOL transmission electron microscope (JEM-1400; Japan) at the Electron Microscope Unit, Alexandria University, Egypt.

Statistical Analysis

The data are presented as the mean ± standard deviation (SD). Statistical significance was determined through one-way ANOVA, and subsequently the Tukey post hoc test was employed for multiple comparisons. The statistical analysis was carried out using SPSS software (version 25.0). p-values less than 0.05 were considered to indicate statistical significance.

Body Weight

The current data in Table 1 show that body weight is significantly greater in the HCD group (G2), by approximately 64.79%, than in the control group (G1). Compared with the control group (G1), body weights in the HCD + LC (G3), HCD + GB (G4), and HCD + LC + GB (G5) treatment groups are also significantly greater, by ~38.8% for G3, 44.6% for G4, and 21.67% for G5. Compared with the HCD group (G2), however, the body weights in all the treated groups are notably lower: 15.78% lower for G3, 12.25% lower for G4, and 26.17% lower for G5.

Glucose Homeostasis

Investigations were conducted across all groups to evaluate fasting blood glucose (FBG), insulin, and HOMA-IR (Table 1). The HCD-fed rats have approximately 60.4%, 29.6%, and 36.7% greater levels of FBG, insulin, and HOMA-IR, respectively, than the control rats. Furthermore, treatment with LC or GB alone has a significant effect on FBG, which changes by approximately 13.91% and 13.71%, respectively, and on HOMA-IR, by 4.31% and 6.27%, respectively, compared with the HCD-fed rats. However, insulin does not change significantly in these treated groups (approximately 12.82% and 23% relative to the HCD group). Moreover, in the group treated with both LC and GB, the FBG, insulin, and insulin resistance indices are significantly lower (p < 0.05) than those in the HCD group, by approximately 31.44%, 29.5%, and 19.86%, respectively.
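For reference, HOMA-IR is conventionally computed from fasting glucose and insulin with the Matthews et al. (1985) formula; the unit conversion below is an assumption, since the paper does not state its glucose units. A minimal sketch:

```python
def homa_ir(glucose_mg_dl, insulin_uU_ml):
    """Homeostatic Model Assessment of Insulin Resistance.

    Standard formula: fasting insulin [uU/mL] x fasting glucose [mmol/L] / 22.5.
    Glucose given in mg/dL is converted with 1 mmol/L = 18 mg/dL (assumed units).
    """
    return insulin_uU_ml * (glucose_mg_dl / 18.0) / 22.5

# Example with hypothetical values: ~110 mg/dL glucose, ~12 uU/mL insulin
print(round(homa_ir(110, 12), 2))  # -> 3.26
```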
Lipid Profile

The findings from the serum lipid profile analysis revealed that treatment with LC or GB nearly normalizes the lipid profile (TC, TG, HDL-C, and LDL-C) of HCD-fed rats (Figure 1). Compared with the control group, the HCD-fed group exhibited notable increases in the serum levels of TC, TG, and LDL-C, with increases of 26.6%, 42.8%, and 62.69%, respectively. However, the serum level of HDL-C is 15.3% lower in HCD-fed rats than in control rats. Treatment of HCD-fed rats with LC or GB decreases the serum levels of TC, TG, and LDL-C and increases the serum HDL-C. The cholesterol, triglyceride, and LDL-C levels are reduced by approximately 2.58%, 27.44%, and 7.15%, respectively, in the HCD + LC group, and by 22.03%, 37.73%, and 16.45%, respectively, in the HCD + GB group compared with those in the HCD group. Additionally, the level of HDL-C increased by 36.82% in the HCD + LC group and by 65.55% in the HCD + GB group compared with that in the HCD group. The results of the HCD + LC + GB treatment group indicate that this treatment reduces the serum levels of TC, TG, and LDL-C to levels close to those of the control group, while the level of HDL-C significantly increases, by 52.3%, compared with that of the control group.

Liver Enzymes

Compared with those in the control group, the ALT and AST levels in the serum of HCD-fed rats without any therapy are significantly (p < 0.05) elevated, by approximately 90.31% and 75.66%, respectively (Table 1). On the other hand, ALT and AST are significantly lower in the treated HCD-fed animals than in the untreated HCD-fed animals (31.82% and 28.92% lower for HCD + LC, 32.37% and 28.72% for HCD + GB, and 40% and 31.93% for HCD + LC + GB, respectively).
Antioxidant Markers

In HCD-fed rats, treatment with LC and GB resulted in a significant increase in the levels of the enzymatic antioxidants SOD and CAT and a significant decrease in the level of the oxidative stress marker malondialdehyde (MDA). The SOD, CAT, and MDA data are summarized in Figure 2. The activities of the antioxidant enzymes (SOD and CAT) and the MDA levels were measured in the control and exposed groups. The activities of the antioxidant enzymes (SOD and CAT) are significantly lower in the HCD group (~55.46% and ~94.57%, respectively) than in the control group. However, CAT and SOD activities are not significantly elevated in the HCD + LC, HCD + GB, and HCD + LC + GB groups (2.17%, 7.75%, 3.27% and 2.96%, 2.91%, 1.17%, respectively). In HCD-fed animals, the MDA concentration in the serum significantly (p < 0.05) increased, by approximately 494.2%, compared with that in the control group. Compared with the HCD group, the MDA levels in the HCD + LC, HCD + GB, and HCD + LC + GB groups are lower by ~24.26%, 40.67%, and 46%, respectively.

The Histological Observations

Liver H. and E. sections of the control rats exhibited typical hepatic parenchyma of classic lobules with radially arranged polyhedral hepatocytes and blood sinusoids lined by endothelium and some von Kupffer cells (Figure 3a). After 8 weeks of HCD feeding, liver sections revealed multiple histopathological lesions throughout the hepatic parenchyma, including microvesicular and macrovesicular hepatic steatosis, balloon degeneration, inflammation, focal necrotic areas with pyknotic nuclei, and abundant apoptosis. Hepatocellular degeneration occurred with enlarged vacuolated hepatocytes, the nuclei of which were displaced toward the cell periphery by diffuse intracytoplasmic fat globules. In addition, portal hepatitis and mononuclear inflammatory cell infiltration around dilated congested blood vessels and proliferated bile ducts of the portal areas were frequently observed (Figure 3b,c). However, liver sections from animals fed an HCD supplemented with LC and/or GB for 8 weeks exhibited noticeable alleviation of these histopathological features (Figure 3d-f). LC or GB mono-treatment resulted in mild hepatocellular vacuolation, dilated sinusoids, increased Kupffer cells, and a few pyknotic nuclei scattered throughout the tissue (Figure 3d,e). All histopathological lesions became less prominent, and the improvement was most pronounced in livers obtained from rats cotreated with both LC and GB, where the hepatocytes, blood sinusoids, and central veins retained a normal appearance similar to that of the control animals (Figure 3f).
In this study, another noteworthy histological finding was the presence of portal fibroplasia in the Masson's trichrome sections, characterized by a notable increase in the percentage of blue collagen fibers extending from the portal tracts into the liver parenchyma, indicating the development of hepatic fibrosis in rats fed an HCD (G2) in comparison with those in the control group (G1), which were fed a standard diet (Figure 4a-c). Mild fibroblastic proliferation and collagen fiber deposition, restricted to the portal area, were observed in liver sections from HCD-fed rats mono-treated with LC (G3) (Figure 4d). Marked improvement, with less evidence of fibroblastic proliferation and collagen fiber deposition, was recorded in the rats fed an HCD and treated with GB alone or with both LC and GB (G4 and G5) (Figure 4e,f). Compared with that in the control group, the percentage of collagen fibers in the HCD-fed rats is markedly greater (Figure 4). Conversely, HCD-fed rats treated with either GB alone, LC alone, or a combination of both LC and GB demonstrate a substantial decrease in collagen fiber content, with reductions of approximately 66.28%, 74.73%, and 89.25%, respectively, in comparison with the HCD group. Table 2 shows the results from the histological quantitative grading and scoring system for the previous histological lesions in the rat livers of all the current groups.
Values are the mean ± SD. Within each column, values followed by different superscript letters, a and b, differ significantly (p < 0.05) between groups in comparison with the control and HCD groups. Immunohistochemical Observations Figures 5 and 6 show the protein expression of membranous β-Cat and cytoplasmic GLP-1 in immunohistochemical liver sections from the study groups. Livers from the control group displayed strong membranous immune expression of β-Cat, including cell borders, and cytoplasmic expression of GLP-1 in parenchymal hepatocytes (Figures 5a and 6a, respectively). Compared with those in the control group, immune reactivity to β-Cat and GLP-1 is significantly decreased in the HCD-fed rats, by approximately 59.84% and 47.57%, respectively (Figures 5b and 6b, respectively). Moreover, immune reactivity to β-Cat and GLP-1 is restored in HCD-fed rats mono-treated with LC or GB (Figure 5c,d and Figure 6c,d, respectively). Compared with those in HCD-fed rats, β-Cat and GLP-1 levels in HCD + LC, HCD + GB, and HCD + LC + GB rats are higher (27.24%, 88.22%, and 104.63% for β-Cat and 49.65%, 63.21%, and 68.04% for GLP-1). In comparison, livers from HCD-fed rats that received both LC and GB displayed significantly increased immunoreactivity to β-Cat and GLP-1, similar to that of the control rats (Figures 5e and 6e, respectively). Ultrastructure Alterations Transmission electron microscopic examination of liver sections from control rats revealed a typical hepatic ultrastructure, where hepatocytes exhibited normal nuclei, rough endoplasmic reticulum, a Golgi apparatus, and mitochondria with well-developed cristae scattered within the cytoplasm. In addition, bile canaliculi with short microvilli were present between hepatocytes (Figure 7a,b). Rats fed an HCD for 8 weeks exhibited several dramatic ultrastructural alterations, with degenerated cytoplasmic organelles, including irregular cell membranes, abnormal small marginal nuclei with dispersed chromatin and prominent nucleoli, a rarified cytoplasm with reduced glycogen granules, a dilated rough endoplasmic reticulum, multiple lipid vacuoles of variable size, and degenerated, dense, fused, and pleomorphic mitochondria with partial or complete loss of cristae (Figure 7c,d). Ultrathin sections of livers obtained from rats fed an HCD supplemented with LC or GB for 8 weeks exhibited noticeable alleviation of these ultrastructural alterations (Figure 7e-h), although some abnormalities, such as a dilated rough endoplasmic reticulum, small vacuoles, and a few lysosomes, were still observed after LC or GB mono-treatment (Figure 7e,f). The LC and GB dual-treatment group displayed an advanced degree of improvement, and the restored glycogen granules and cell organelles appeared seminormal (Figure 7g,h).
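The mean ± SD values with superscript letters used above and in Table 2 follow the standard convention for reporting post-hoc multiple comparisons. As a hedged illustration only (the text does not state which test produced the letters), a one-way ANOVA followed by Tukey's HSD on made-up group values would look like this:

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Made-up measurements for three groups, n = 6 per group as in the figures.
rng = np.random.default_rng(0)
control = rng.normal(10.0, 1.0, 6)
hcd = rng.normal(16.0, 1.0, 6)
hcd_lc = rng.normal(12.0, 1.0, 6)

f_stat, p_value = f_oneway(control, hcd, hcd_lc)  # overall group effect
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate([control, hcd, hcd_lc])
labels = ["control"] * 6 + ["HCD"] * 6 + ["HCD+LC"] * 6
# Pairs flagged as significant would receive different superscript letters.
print(pairwise_tukeyhsd(values, labels, alpha=0.05))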
Discussion A high-cholesterol diet disturbs the balance between lipid processing and absorption, leading to disturbances in lipid metabolism. These disruptions are particularly prevalent in obese people and can lead to various metabolic disorders, such as diabetes, fatty liver disease, high blood pressure, and atherosclerosis [42]. The current study demonstrated that rats fed an HCD for 8 weeks exhibited increased weight gain and fatty liver compared with those fed a standard diet, due to increased caloric intake and reduced energy expenditure, which subsequently led to adipose tissue accumulation. However, the body weights of rats in the different treatment groups (HCD + LC, HCD + GB, and HCD + LC + GB) were significantly lower than those of rats in the HCD group without any treatment. These findings agree with previous studies demonstrating that rats fed an HCD experience significant weight gain, which is indicative of obesity, within a period ranging from four to eight weeks [8,43]. LC supplementation in mice fed an HFD without exercise resulted in a significant reduction in weight gain [44]. LC and GB treatments balanced the increase in body weight in different animal models. Notably, in this study, TG, TC, and LDL-C levels were significantly elevated in the HCD group compared with those in the control group. Conversely, the concentration of HDL-C decreased significantly, indicating potential lipid-related metabolic issues. However, treatment with LC and/or GB effectively inhibited these increases (Figure 1) and significantly reduced the serum TG, TC, and LDL-C levels, which were close to their levels in the control group. Additionally, the levels of LDL-C and HDL-C, which are important indicators of nonalcoholic fatty liver disease (NAFLD) progression, were altered by an HCD but normalized with LC and/or GB treatment. Obese female rats exhibited a significant increase
in the serum levels of TC, TG, VLDL, and LDL-C, along with a notable reduction in HDL-C levels [45,46]. Abnormal elevations in TC and TG have been linked to NAFLD development [25], and hypercholesterolemia is considered a risk factor for liver injury [47]. Excessive triglyceride deposition within hepatocytes leads to liver lipotoxicity and impaired fatty acid oxidation, contributing to NAFLD progression [48]. Cholesterol and triglycerides play crucial roles in metabolism, acting as energy sources, membrane components, and precursors for various biological molecules. Monitoring their levels in blood samples is clinically significant due to their implications for liver health and overall metabolic function [49]. Moreover, Fidèle et al. [50] reported that the accumulation of cholesterol resulted in an increase in the production of steroid hormones, including cortisol, estrogens, and testosterone; this hormonal increase in rats fed a high-fat diet was associated with weight gain. LDL-C levels increase in HCD-fed rats. LDL-C is known as "bad" cholesterol: its fat content exceeds its protein content, and it transports cholesterol from the liver to all parts of the body. LDL-C is a significant component of cholesterol related to a greater risk of atherosclerosis, as it serves as a physiological carrier for cholesterol distribution to peripheral tissues and deposits it especially on the walls of blood vessels, promoting blood clots. In contrast, HDL-C is known as "good" cholesterol: its protein content exceeds its fat content, and it transports cholesterol from different parts of the body to the liver for recycling and excretion in bile [49]. Treatment with LC altered hepatic lipid metabolism, potentially through carnitine-mediated lipid metabolism, modulation of hyperglycemia, and enhancement of self-antioxidant capacity. In the present study, this intervention not only reduced serum lipid, triglyceride, glucose, and liver enzyme levels but also improved the cholesterol profile compared with that of rats fed a high-cholesterol diet. Similarly, rats fed an HFD had a high level of LDL (153.33 ± 1.38 mg/dL) but a low serum level of HDL (40.60 ± 1.16 mg/dL) [51]. Several studies support these observations. Kim et al. [52] reported the beneficial effects of LC in lowering lipid levels in both the blood and liver, highlighting its role in promoting fat oxidation and thus reducing fat accumulation. Similarly, Mayes et al. [53] suggested that LC functions as a fat burner by optimizing fat oxidation, thereby limiting its storage. Research by González-Ortiz et al. [54] demonstrated that supplementing obese rats with LC led to a reduction in the serum levels of TC, TG, LDL, free fatty acids, and very low-density lipoprotein (VLDL), while it particularly increased the level of HDL. Furthermore, LC was found to ameliorate fatty liver, dyslipidemia, and hepatitis by modulating lipid metabolism, antioxidant capacity, and inflammatory responses, as noted by Su Chang Chao and colleagues [55]. Additionally, Zhang et al. [23] revealed that GB has a multifaceted impact on the lipid profile of the rat metabolome: it regulates polyunsaturated fatty acids, limits cholesterol absorption, and deactivates 3-hydroxy-3-methylglutaryl-coenzyme A (HMG-CoA) reductase. A meta-analysis by Fan et al. [56] suggested that combining GB with statin therapy improved TC and TG levels and enhanced HDL-C compared with statin therapy alone. Moreover, in a study involving male rabbits, Hussein et al.
[45] reported that, compared with no treatment, GB treatment significantly reduced plasma TG and cholesterol levels while increasing HDL-C levels. These findings align with the results presented in Figure 1 of the current study. Ginkgo biloba and L-carnitine have also provided substantial advantages in conventional medicine, such as promoting weight loss and exhibiting antidiabetic, antihypertensive, and antilipidaemic effects [13,43,57,58]. These attributes hold promise for treating metabolic syndrome and reducing the heightened risk of cardiovascular events [59]. In this HCD-fed rat experiment, metabolic dysregulation characterized by hyperglycemia, hyperinsulinemia, and elevated insulin resistance (HOMA-IR) was observed, potentially stemming from increased energy storage in triglycerides and elevated circulating free fatty acid levels due to high cholesterol intake, leading to insulin resistance [8]. The heightened blood glucose levels may result from reduced insulin efficacy in controlling hepatic glucose production [57]. Treatment with LC and/or GB significantly (p < 0.05) reduced the fasting blood glucose, insulin, and insulin resistance indices compared with those in the HCD group, consistent with the findings of Jing et al. [60] for GB. Disruptions in glucose regulation were associated with significant alterations in the serum lipid profile. Similarly, rats fed an HFD had significantly greater blood glucose levels and lipid profiles [51]. In a study by Li et al. [25], NAFLD induced by a high-fat diet resulted in hepatic steatosis, lipid accumulation, inflammation, liver injury, glucose intolerance, and insulin resistance in mice. However, treatment with GB significantly improved these conditions, potentially through enhancing insulin receptor substrate 1 (IRS-1) signal activation and reducing nuclear factor kappa B (NF-κB) and endoplasmic reticulum stress (ERS) signal activation. The potential mechanism underlying the antihyperlipidemic effect of GB may involve alterations in the activity of cholesterol biosynthesis enzymes and/or modifications in lipolysis levels, both of which are regulated by insulin [61]. Treatment of HCD-fed mice with LC or GB improved glucose intolerance, insulin resistance, lipid accumulation, hepatic steatosis, and liver injury [62,63]. In this study, lipid peroxidation (MDA) levels were significantly greater, and the levels of the principal antioxidants (CAT and SOD) significantly lower, in HCD-fed rats than in control rats, indicating elevated free radical levels and increased oxidative stress. Conversely, the HCD + LC, HCD + GB, and HCD + LC + GB groups exhibited a marked reduction in MDA levels and a significant increase in CAT and SOD compared with those in the HCD group. These results suggest that an HCD induces hepatic oxidative stress, potentially because HCD-associated hyperglycemia enhances polyol pathway activity and inhibits the pentose phosphate pathway, thereby reducing intracellular NADPH levels. NADPH is essential for regenerating GSH from its oxidized form [64]. The results indicated that LC and GB have protective effects against oxidative damage, likely due to their antioxidant properties and ability to scavenge free radicals. These results align with a study by Tousson et al.
[26], which demonstrated a significant decrease in MDA levels and an increase in SOD and CAT activities in the kidneys of rats treated with GB and LC compared with those of control rats. Sharma and Yadav [65] suggested that LC is a successful dietary treatment for improving impaired biochemical parameters and renal function in chronic kidney disorders. LC and GB have potent protective and therapeutic effects against pentylenetetrazol-induced epilepsy through their antiepileptic actions and correction of oxidative imbalances [27,28]. This finding aligns with that of Zhang et al. [66], who demonstrated that GB improved the antioxidant status and reduced free radical-induced lipid peroxidation in the central nervous system of rats fed an HFD. Furthermore, Ginkgo biloba was shown to reduce MDA levels and increase GSH levels in aortic tissue [45]. In obese individuals, the presence of ROS leads to oxidative stress [67,68], which, in turn, triggers inflammation, leading to the release of inflammatory factors [69]. The redox and oxidative defense system consists primarily of enzymatic and nonenzymatic antioxidants, which efficiently eliminate ROS and break down peroxides [70,71]. The antioxidant system, which includes CAT and SOD, plays a crucial role in scavenging free radicals and safeguarding cells from oxidative stress [72]. The end product of lipid peroxidation (MDA) is very toxic to cells and their membranes [73,74]. Elevated MDA levels serve as an indicator of oxidative state imbalance in newly diagnosed patients [75]. In this study, HCD model rats exhibited not only increased body weight, glucose, insulin resistance, lipid levels, and lipid peroxidation but also elevated liver enzymes (Table 1) and exacerbated deposition of lipid droplets and collagen fibers in their liver sections (Figure 3b,c and Figure 4b,c). However, treatment with LC and/or GB significantly mitigated these effects (p < 0.05, Table 2). The lesions observed in the liver tissues of HCD-fed animals in this study included several histological and ultrastructural disorders involving both the portal areas and the hepatocytes, including progressive enlargement of sinusoids, micro- and macrovesicular fatty degeneration, damage to intracytoplasmic organelles, steatohepatitis, and periportal fibrosis. Previous research has demonstrated that feeding rats and rabbits an HCD results in significant lipid accumulation in the liver [58]. An HFD induced several hepatic histopathological and ultrastructural changes in prediabetic rats, as demonstrated by the presence of degenerated cytoplasmic hepatocytes with mitochondrial swelling, endoplasmic reticulum dilation, plasma membrane blebbing, cytoplasmic vacuoles, and lipid droplets [8,76]. Two primary histological patterns of hepatic steatosis define fatty disorders. Microvesicular steatosis involves the substitution of cytoplasm with small fat vacuoles while the nucleus remains centrally positioned. On the other hand, macrovesicular steatosis is characterized by the presence of large fat droplets, where the cytoplasm is largely replaced by a sizable fat vacuole that shifts the nuclei towards the cell's outer edge [77,78]. Similarly, hepatocellular ballooning is a crucial histological indicator for diagnosing NASH. Several studies have verified that the release of liver enzymes into the bloodstream is a significant distinguishing characteristic, indicating an elevated risk of disease progression [79,80]. In this study, the HCD triggered severe microvesicular steatosis. Moreover, ultrastructural alterations in hepatocytes, including mitochondria, rough
endoplasmic reticulum, and cell nuclei, were noted. These results are in accordance with those of Abdel-Emam et al. [81], who concluded that the coadministration of LC with chlorpheniramine maleate or cetirizine hydrochloride induced less severe histopathological alterations in the liver. Additionally, LC alleviated hepatic cord distortion, blood vessel congestion and dilatation, and hepatocyte degeneration induced by letrozole in female rats [82]. Rashad et al. [83] reported the mitigating impact of LC on atrazine-induced hepatotoxicity in rats. These findings are in line with those of Li et al. [25], who reported that Ginkgo biloba extract has a potential therapeutic effect on liver injury in HFD-fed rats by attenuating liver inflammation and fibrosis. Consideration should also be given to circumstances in which GB is required as a therapy, because it can cause hypothyroidism in rats [84]. These findings confirmed previous data indicating that HCD-induced liver oxidative stress triggers inflammatory and fibrogenic signaling pathways that promote liver steatosis and fibrosis progression. These alterations may be due to increased ROS production associated with an HCD, resulting in chronic inflammatory response stimulation, which is commonly related to NAFLD and may also increase liver damage [85]. Thus, the present study evaluated histological inflammatory markers in the liver tissues of untreated and treated HCD-fed rats. LC and/or GB alleviated lipid accumulation and hepatic steatosis in HCD-fed rats (Figure 3 and Table 2). The elevated serum levels of the liver enzymes ALT and AST in rats fed an HCD, which were considerably reduced (p < 0.05) following LC and/or GB treatment, provided further evidence of the impact of LC and/or GB (Table 1). Elevated AST and ALT serum levels may be caused by inflammation inside liver cells, which releases more proinflammatory cytokines and further damages hepatocytes [8]. Serum ALT and AST are useful biomarkers of liver injury [86]. Elevated serum levels of ALT and AST were recorded in a modified dyslipidemia model [87]. Other studies revealed that treatment with a high dose of GB or LC significantly reduced the serum levels of the ALT and AST markers [88,89]. However, Nobili et al.
[90] showed no effect on reducing ALT, inflammation, or steatosis in 53 patients with NAFLD in a two-year clinical trial of patients treated with vitamin C and vitamin E (600 IU/day), which is consistent with current evidence on inflammation but not on ALT or steatosis. On the other hand, ursodeoxycholic acid, a drug for primary biliary cirrhosis, along with vitamin E (400 IU twice daily) for the same period (2 years) had no effect on inflammation in 48 patients with advanced nonalcoholic fatty liver disease (NAFLD) but decreased alanine transaminase (ALT) and steatosis [91]. Overall, this study showed that giving rats an HCD for eight weeks caused hepatic steatosis, which harmed hepatocyte ultrastructure and released liver injury enzymes into the bloodstream, possibly through an increase in lipids and oxidative stress indicators. LC and/or GB were effective at inhibiting liver dysfunction. Furthermore, the current biochemical, histological, and ultrastructural data confirmed that combination therapy (HCD + LC + GB) reversed the metabolic damage caused by an HCD, restored hepatic structure and function, and increased the number of organelles in hepatocytes. Based on these outcomes, and to correlate the hepatic alterations with the related underlying mechanisms, it was necessary to study markers involved in hepatic metabolism. This metabolic dysfunction was immunohistochemically characterized in the present study, which indicated that an HCD decreased the protein expression of hepatic GLP-1R and β-catenin, while HCD rats that received LC and/or GB exhibited significantly restored protein expression levels that were similar to those of the control rats. Tveden-Nyborg et al. [92] reported similar hepatic histological and ultrastructural features in dyslipidemic guinea pigs, which were attributed to the breakdown of mitochondrial polyunsaturated lipids, causing cell death and heightened oxidative stress. Similarly, in steatotic rats, Ellatif et al. [76] observed abnormal hepatocyte ultrastructures associated with oxidative stress, manifested as an enlarged rough endoplasmic reticulum, degenerated mitochondria, increased lysosomes, damaged nuclei, and misshapen nuclear membranes. Natural products have demonstrated efficacy in mitigating hepatocellular damage by exerting antioxidative effects [93].
Dysregulated β-catenin is pivotal for abnormal hepatic growth and the development of health issues such as obesity, diabetes, NAFLD, and metabolic syndrome [94]. Altered β-catenin expression is associated with NAFLD and other liver diseases by influencing genes controlling glucose and nutrient metabolism through interactions with transcription factors such as forkhead box protein O, T-cell factor, and hypoxia-inducible factor 1α [95,96]. β-catenin is a critical component of the Wnt signaling pathway and affects cell differentiation and proliferation, thus impacting liver physiology and development. Changes in β-catenin activity are implicated in NAFLD pathogenesis [96][97][98]. The current study utilized immunohistochemistry to identify β-catenin (a multifunctional cadherin-associated protein at cell boundaries that serves as an intracellular signaling transducer in the Wnt signaling pathway) in liver tissue. β-catenin acts as a hepatic metabolism indicator involved in liver homeostasis and the regulation of cell adhesion, regeneration, proliferation, hypoxia resistance, apoptosis, steatosis, cholesterol metabolism, and other biological processes [99], as it regulates the expression of genes that control glucose, xenobiotic, and nutrient metabolism. Changes in β-catenin signaling trigger the stimulation of hepatic stellate cells, causing fibrosis and contributing to nonalcoholic steatohepatitis pathogenesis [96]. Lehwald et al. [100] reported that reagents that affect Wnt-β-catenin signaling might be potential therapeutic options for treating liver disease, as nutrient oxidative stress compromises the function of β-catenin, which plays a crucial role in maintaining mitochondrial homeostasis and controlling ATP synthesis via fatty acid oxidation, oxidative phosphorylation (OXPHOS), and the tricarboxylic acid cycle. LC promoted neurogenesis in mesenchymal stem cells via the Wnt-β-catenin signaling pathway [101]. Furthermore, immunohistochemical analysis of GLP-1R (the glucagon-like peptide-1 receptor) in liver sections in the present study played a critical role in clarifying the beneficial effect of the combined treatment of LC and GB on HCD-fed rats. GLP-1, the ligand of this receptor, is secreted by intestinal L cells and acts as a metabolic regulator, influencing processes in the liver, intestine, and brain, and is associated with weight loss promotion and improved insulin sensitivity [102,103]. It operates through direct activation of G protein-coupled receptors (GPRs) of the class B family and indirect nonreceptor-mediated pathways to protect the liver from NASH progression. Notably, Li et al. [104] observed a significant increase in GLP-1R protein expression in the duodenum and liver of NAFLD rats after eugenol administration, which may be inconsistent with prior research. In obese rats, reduced protein expression of both β-catenin and GLP-1R exacerbates hepatic steatosis induced by fatty acids in rat hepatocytes [37,105]. Conversely, Gao et al.
[94] reported that β-catenin and GLP-1R improved steatohepatitis caused by a high-fructose diet. Additionally, a prior study of HFD-induced NASH demonstrated a significant increase in β-catenin expression, which is potentially linked to impaired glucose metabolism [106]. The ameliorative ability of LC and/or GB may be linked to their antitoxic characteristics: their phytoconstituents confer an antioxidant capacity that controls oxidative stress and helps regulate the body's response to the impact of an HCD. Several findings, such as decreased lipid levels, glucose levels, insulin resistance, and oxidative stress, together with increased antioxidant activity, support this conclusion. Additionally, light microscopic analysis of histopathological liver sections revealed a noticeable improvement in the liver structure as well as in the immunohistochemical expression of the GLP-1 and β-catenin proteins. TEM analysis, which revealed the restoration of the hepatocyte ultrastructure, supported the beneficial impact of LC and/or GB. All of these effects produced substantial reductions in body weight and serum liver enzymes in the LC and/or GB groups compared with those in the HCD group. Conclusions The outcomes of this work have valuable clinical importance, as LC and GB supplements act as hepatoprotective, hypoglycemic, and antidyslipidemic agents against HCD disorders. Concurrent supplementation with both LC and GB exhibited enhanced efficacy in alleviating liver damage induced by an HCD, surpassing the effects of the individual supplements. These valuable effects could occur through ROS scavenging, inhibition of free radical production, antioxidant reactivation, and upregulation of hepatic β-catenin and GLP-1R expression. Therefore, consuming GB with LC supplements can benefit overall health and potentially protect against and treat disorders associated with hyperlipidemia. Figure 1. (A-D) Lipid profile charts. All values are expressed as the mean ± SD (n = 6). Triglycerides (A), cholesterol (B), HDL-C ((C) high-density lipoprotein cholesterol), and LDL-C ((D) low-density lipoprotein cholesterol) are shown. The letters a-d and e indicate significant differences (p-value < 0.05) between groups in comparison with the control, HCD, HCD + LC, HCD + GB, and HCD + LC + GB groups, respectively. Table 1. Effects of LC and/or GB treatment on body weight, serum glucose homeostasis, and liver enzymes. Table 2. Quantitative grading and scoring system for nonalcoholic steatohepatitis-related histopathological lesions in the livers of rats in different groups.
2024-04-25T15:33:36.062Z
2024-04-23T00:00:00.000
{ "year": 2024, "sha1": "5fd6906743f1e460639aed2dbbcf995065e0b631", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4409/13/9/732/pdf?version=1713879492", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a929b602318c6fb602d50826398fae4b94f25262", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
10089399
pes2o/s2orc
v3-fos-license
Using WordNet-based Context Vectors to Estimate the Semantic Relatedness of Concepts In this paper, we introduce a WordNet-based measure of semantic relatedness by combining the structure and content of WordNet with co-occurrence information derived from raw text. We use the co-occurrence information along with the WordNet definitions to build gloss vectors corresponding to each concept in WordNet. Numeric scores of relatedness are assigned to a pair of concepts by measuring the cosine of the angle between their respective gloss vectors. We show that this measure compares favorably to other measures with respect to human judgments of semantic relatedness, and that it performs well when used in a word sense disambiguation algorithm that relies on semantic relatedness. This measure is flexible in that it can make comparisons between any two concepts without regard to their part of speech. In addition, it can be adapted to different domains, since any plain text corpus can be used to derive the co-occurrence information. Introduction Humans are able to quickly judge the relative semantic relatedness of pairs of concepts. For example, most would agree that feather is more related to bird than it is to tree. This ability to assess the semantic relatedness among concepts is important for Natural Language Understanding. Consider the following sentence: He swung the bat, hitting the ball into the stands. A reader likely uses domain knowledge of sports along with the realization that the baseball senses of hitting, bat, ball and stands are all semantically related, in order to determine that the event being described is a baseball game. Consequently, a number of techniques have been proposed over the years that attempt to automatically compute the semantic relatedness of concepts to correspond closely with human judgments (Resnik, 1995; Jiang and Conrath, 1997; Lin, 1998; Leacock and Chodorow, 1998). It has also been shown that these techniques prove useful for tasks such as word sense disambiguation (Patwardhan et al., 2003), real-word spelling correction (Budanitsky and Hirst, 2001) and information extraction (Stevenson and Greenwood, 2005), among others. In this paper we introduce a WordNet-based measure of semantic relatedness inspired by Harris' Distributional Hypothesis (Harris, 1985). The distributional hypothesis suggests that words that are similar in meaning tend to occur in similar linguistic contexts. Additionally, numerous studies (Carnine et al., 1984; Miller and Charles, 1991; McDonald and Ramscar, 2001) have shown that context plays a vital role in defining the meanings of words. (Landauer and Dumais, 1997) describe a context vector-based method that simulates learning of word meanings from raw text. (Schütze, 1998) has also shown that vectors built from the contexts of words are useful representations of word meanings. Our Gloss Vector measure of semantic relatedness is based on second order co-occurrence vectors (Schütze, 1998) in combination with the structure and content of WordNet (Fellbaum, 1998), a semantic network of concepts. This measure captures semantic information for concepts from contextual information drawn from corpora of text. We show that this measure compares favorably to other measures with respect to human judgments of semantic relatedness, and that it performs well when used in a word sense disambiguation algorithm that relies on semantic relatedness.
This measure is flexible in that it can make comparisons between any two concepts without regard to their part of speech. In addition, it is adaptable since any corpora can be used to derive the word vectors. This paper is organized as follows. We start with a description of second order context vectors in general, and then define the Gloss Vector measure in particular. We present an extensive evaluation of the measure, both with respect to human relatedness judgments and also relative to its performance when used in a word sense disambiguation algorithm based on semantic relatedness. The paper concludes with an analysis of our results, and some discussion of related and future work. Second Order Context Vectors Context vectors are widely used in Information Retrieval and Natural Language Processing. Most often they represent first order co-occurrences, which are simply words that occur near each other in a corpus of text. For example, police and car are likely first order co-occurrences since they commonly occur together. A first order context vector for a given word would simply indicate all the first order co-occurrences of that word as found in a corpus. However, our Gloss Vector measure is based on second order co-occurrences (Schütze, 1998). For example, if car and mechanic are first order co-occurrences, then mechanic and police would be second order co-occurrences, since they are both first order co-occurrences of car. Schütze's method starts by creating a Word Space, which is a co-occurrence matrix where each row can be viewed as a first order context vector. Each cell in this matrix represents the frequency with which two words occur near one another in a corpus of text. The Word Space is usually quite large and sparse, since there are many words in the corpus and most of them don't occur near each other. In order to reduce the dimensionality and the amount of noise, non-content stop words such as the, for, a, etc. are excluded from being rows or columns in the Word Space. Given a Word Space, a context can then be represented by second order co-occurrences (a context vector). This is done by finding the resultant of the first order context vectors corresponding to each of the words in that context. If a word in a context does not have a first order context vector created for it, or if it is a stop word, then it is excluded from the resultant. For example, suppose we have the following context: The paintings were displayed in the art gallery. The second order context vector would be the resultant of the first order context vectors for painting, display, art, and gallery. The words were, in, and the are excluded from the resultant since we consider them as stop words in this example. Figure 1 shows how the second order context vector might be visualized in a 2-dimensional space. Intuitively, the orientation of each second order context vector is an indicator of the domains or topics (such as biology or baseball) that the context is associated with. Two context vectors that lie close together indicate a considerable contextual overlap, which suggests that they are pertaining to the same meaning of the target word. Gloss Vectors in Semantic Relatedness In this research, we create a Gloss Vector for each concept (or word sense) represented in a dictionary. While we use WordNet as our dictionary, the method can apply to other lexical resources.
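As a minimal sketch of this idea (the toy corpus, stop list, and sentence-level co-occurrence window are simplified assumptions, not the authors' setup): each row of the co-occurrence matrix is a first order context vector, and a context's second order vector is the resultant (sum) of the rows of its words.

from collections import Counter
from itertools import combinations

corpus = [
    "the paintings were displayed in the art gallery",
    "the gallery displayed art from the museum",
]
stop_words = {"the", "were", "in", "from"}

vocab = sorted({w for s in corpus for w in s.split()} - stop_words)
index = {w: i for i, w in enumerate(vocab)}

# First order vectors: co-occurrence counts within the same sentence.
word_space = {w: Counter() for w in vocab}
for sentence in corpus:
    words = [w for w in sentence.split() if w not in stop_words]
    for a, b in combinations(words, 2):
        word_space[a][b] += 1
        word_space[b][a] += 1

def second_order_vector(context):
    # Sum the first order vectors of the context's non-stop words.
    vec = [0.0] * len(vocab)
    for w in context.split():
        for co_word, count in word_space.get(w, {}).items():
            vec[index[co_word]] += count
    return vec

print(second_order_vector("paintings displayed art gallery"))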
Creating Vectors from WordNet Glosses A Gloss Vector is a second order context vector formed by treating the dictionary definition of a concept as a context, and finding the resultant of the first order context vectors of the words in the definition. In particular, we define a Word Space by creating first order context vectors for every word w that is not a stop word and that occurs above a minimum frequency in our corpus. The specific steps are as follows: 1. Initialize the first order context vector of w to a zero vector. 2. Find every occurrence of w in the given corpus. 3. For each occurrence of w, increment those dimensions of the vector that correspond to words from the Word Space present within a given number of positions around w in the corpus. The first order context vector of w, therefore, encodes the co-occurrence information of the word w. For example, consider the gloss of lamp: "an artificial source of visible illumination". The Gloss Vector for lamp would be formed by adding the first order context vectors of artificial, source, visible and illumination. In these experiments, we use WordNet as the corpus of text for deriving first order context vectors. We take the glosses for all of the concepts in WordNet and view that as a large corpus of text. This corpus consists of approximately 1.4 million words, and results in a Word Space of approximately 20,000 dimensions, once low frequency and stop words are removed. We chose the WordNet glosses as a corpus because we felt the glosses were likely to contain content rich terms that would distinguish between the various concepts more distinctly than would text drawn from a more generic corpus. However, in our future work we will experiment with other corpora as the source of first order context vectors, and other dictionaries as the source of glosses. The first order context vectors as well as the Gloss Vectors usually have a very large number of dimensions (usually tens of thousands) and it is not easy to visualize this space. Figure 2 (First Order Context Vectors and a Gloss Vector) attempts to illustrate these vectors in two dimensions. The words tennis and food are the dimensions of this 2-dimensional space. We see that the first order context vector for serve is approximately halfway between tennis and food, since the word serve could mean to "serve the ball" in the context of tennis or could mean "to serve food" in another context. The first order context vectors for eat and cutlery are very close to food, since they do not have a sense that is related to tennis. The gloss for the word fork, "cutlery used to serve and eat food", contains the words cutlery, serve, eat and food. The Gloss Vector for fork is formed by adding the first order context vectors of cutlery, serve, eat and food. Thus, fork has a Gloss Vector which is heavily weighted towards food. The concept of food, therefore, is in the same semantic space as and is related to the concept of fork. Similarly, we expect that in a high dimensional space, the Gloss Vector of fork would be heavily weighted towards all concepts that are semantically related to the concept of fork. Additionally, the previous demonstration involved a small gloss for representing fork. Using augmented glosses, described in section 3.2, we achieve better representations of concepts to build Gloss Vectors upon. Augmenting Glosses Using WordNet Relations The formulation of the Gloss Vector measure described above is independent of the dictionary used and is independent of the corpus used.
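A hedged sketch of the measure itself, reusing a word_space mapping like the one in the previous sketch (the names are illustrative, not the authors' code): a concept's Gloss Vector is the sum of the first order vectors of its gloss words, and relatedness is the cosine between two such vectors.

import math

def gloss_vector(gloss, word_space, vocab_index, stop_words=frozenset()):
    # Sum the first order context vectors of the gloss's non-stop words.
    vec = [0.0] * len(vocab_index)
    for w in gloss.lower().split():
        if w in stop_words:
            continue
        for co_word, count in word_space.get(w, {}).items():
            vec[vocab_index[co_word]] += count
    return vec

def relatedness(v1, v2):
    # Cosine of the angle between two Gloss Vectors.
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
    return dot / norm if norm else 0.0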
However, dictionary glosses tend to be rather short, and it is possible that even closely related concepts will be defined using different sets of words. Our belief is that two synonyms that are used in different glosses will tend to have similar Word Vectors (because their co-occurrence behavior should be similar). However, the brevity of dictionary glosses may still make it difficult to create Gloss Vectors that are truly representative of the concept. (Banerjee and Pedersen, 2003) encounter a similar issue when measuring semantic relatedness by counting the number of matching words between the glosses of two different concepts. They expand the glosses of concepts in WordNet with the glosses of concepts that are directly linked by a WordNet relation. We adopt the same technique here, and use the relations in WordNet to augment glosses for the Gloss Vector measure. We take the gloss of a given concept, and concatenate to it the glosses of all the concepts to which it is directly related according to WordNet. The Gloss Vector for that concept is then created from this big concatenated gloss. Other Measures of Relatedness Below we briefly describe five alternative measures of semantic relatedness, and then go on to include them as points of comparison in our experimental evaluation of the Gloss Vector measure. All of these measures depend in some way upon WordNet. Four of them limit their measurements to nouns located in the WordNet is-a hierarchy. Each of these measures takes two WordNet concepts (i.e., word senses or synsets) c1 and c2 as input and returns a numeric score that quantifies their degree of relatedness. (Leacock and Chodorow, 1998) find the path length between c1 and c2 in the is-a hierarchy of WordNet. The path length is then scaled by the depth of the hierarchy (D) in which they reside to obtain the relatedness of the two concepts. (Resnik, 1995) introduced a measure that is based on information content, which consists of numeric quantities that indicate the specificity of concepts. These values are derived from corpora, and are used to augment the concepts in WordNet's is-a hierarchy. The measure of relatedness between two concepts is the information content of the most specific concept that both concepts have in common (i.e., their lowest common subsumer in the is-a hierarchy). (Jiang and Conrath, 1997) extend Resnik's measure by combining the information content of the two concepts with that of their lowest common subsumer. (Lin, 1998) also extends Resnik's measure, by taking the ratio of the shared information content to that of the individual concepts. (Banerjee and Pedersen, 2003) introduce Extended Gloss Overlaps, which is a measure that determines the relatedness of concepts proportional to the extent of overlap of their WordNet glosses. This simple definition is extended to take advantage of the complex network of relations in WordNet, and allows the glosses of concepts to include the glosses of synsets to which they are directly related in WordNet. Evaluation As was done by (Budanitsky and Hirst, 2001), we evaluated the measures of relatedness in two ways. First, they were compared against human judgments of relatedness. Second, they were used in an application that would benefit from the measures. The effectiveness of the particular application was an indirect indicator of the accuracy of the relatedness measure used. Comparison with Human Judgment One obvious metric for evaluating a measure of semantic relatedness is its correspondence with the human perception of relatedness. Since semantic relatedness is subjective, and depends on the human view of the world, comparison with human judgments is a self-evident metric for evaluation.
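For reference, the standard formulations of these measures, as given in the cited papers (the prose above describes them only qualitatively; len is the is-a path length, D the depth of the hierarchy, lcs the lowest common subsumer, and IC(c) = -log P(c) the corpus-derived information content):

\begin{align*}
\mathrm{rel}_{\mathrm{LCH}}(c_1, c_2) &= -\log\frac{\mathrm{len}(c_1, c_2)}{2D} \\
\mathrm{rel}_{\mathrm{Resnik}}(c_1, c_2) &= \mathrm{IC}\big(\mathrm{lcs}(c_1, c_2)\big) \\
\mathrm{dist}_{\mathrm{JC}}(c_1, c_2) &= \mathrm{IC}(c_1) + \mathrm{IC}(c_2) - 2\,\mathrm{IC}\big(\mathrm{lcs}(c_1, c_2)\big) \\
\mathrm{rel}_{\mathrm{Lin}}(c_1, c_2) &= \frac{2\,\mathrm{IC}\big(\mathrm{lcs}(c_1, c_2)\big)}{\mathrm{IC}(c_1) + \mathrm{IC}(c_2)}
\end{align*}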
This was done by (Budanitsky and Hirst, 2001) in their comparison of five measures of semantic relatedness. We follow a similar approach in evaluating the Gloss Vector measure. We use a set of 30 word pairs from a study carried out by (Miller and Charles, 1991). These word pairs are a subset of the 65 word pairs used by (Rubenstein and Goodenough, 1965) in a similar study almost 25 years earlier. In this study, human subjects assigned relatedness scores to the selected word pairs. The word pairs selected for this study ranged from highly related pairs to unrelated pairs. We use these human judgments for our evaluation. Each of the word pairs has been scored by humans on a scale of 0 to 5, where 5 is the most related. The mean of the scores of each pair from all subjects is considered as the "human relatedness score" for that pair. The pairs are then ranked with respect to their scores. The most related pair is the first on the list and the least related pair is at the end of the list. We then have each of the measures of relatedness score the word pairs, and another ranking of the word pairs is created corresponding to each of the measures. Spearman's rank correlation coefficient (Spearman, 1904) is used to assess the equivalence of two rankings. If the two rankings are exactly the same, the Spearman's correlation coefficient between these two rankings is 1. A completely reversed ranking gets a value of −1. The value is 0 when there is no relation between the rankings. We determine the correlation coefficient of the ranking of each measure with that of the human relatedness. We use the relatedness scores from both of the human studies: the Miller and Charles study as well as the Rubenstein and Goodenough research. Table 1 summarizes the results of our experiment. We observe that the Gloss Vector measure has the highest correlation with humans in both cases. Note that in our experiments with the Gloss Vector measure, we have used not only the gloss of the concept but augmented that with the gloss of all the concepts directly related to it according to WordNet. We observed a significant drop in performance when we used just the glosses of the concept alone, showing that the expansion is necessary. In addition, the frequency cutoffs used to construct the Word Space played a critical role. The best setting of the frequency cutoffs removed both low and high frequency words, which eliminates two different sources of noise. Very low frequency words do not occur enough to draw distinctions among different glosses, whereas high frequency words occur in many glosses, and again do not provide useful information to distinguish among glosses. Application-based Evaluation An application-oriented comparison of five measures of semantic relatedness was presented in (Budanitsky and Hirst, 2001). In that study they evaluate five WordNet-based measures of semantic relatedness with respect to their performance in context-sensitive spelling correction. We present the results of an application-oriented evaluation of the measures of semantic relatedness. Each of the six measures of semantic relatedness was used in a word sense disambiguation algorithm described by Banerjee and Pedersen (2003). Word sense disambiguation is the task of determining the meaning (from multiple possibilities) of a word in its given context. For example, in the sentence The ex-cons broke into the bank on Elm street, the word bank has the "financial institution" sense as opposed to the "edge of a river" sense.
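A minimal sketch of the ranking comparison described above, assuming no tied scores for simplicity:

def spearman_rho(human_scores, measure_scores):
    # rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), with d the rank differences.
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i], reverse=True)
        r = [0] * len(xs)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    hr, mr = ranks(human_scores), ranks(measure_scores)
    n = len(human_scores)
    d2 = sum((a - b) ** 2 for a, b in zip(hr, mr))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Identical rankings give 1.0; a fully reversed ranking gives -1.0.
print(spearman_rho([5.0, 3.2, 0.4], [0.9, 0.6, 0.1]))  # 1.0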
Banerjee and Pedersen attempt to perform this task by measuring the relatedness of the senses of the target word to those of the words in its context. The sense of the target word that is most related to its context is selected as the intended sense of the target word. The experimental data used for this evaluation is the SENSEVAL-2 test data. It consists of 4,328 instances (or contexts) that each include a single ambiguous target word. Each instance consists of approximately 2-3 sentences and one occurrence of a target word. 1,754 of the instances include nouns as target words, while 1,806 are verbs and 768 are adjectives. We use the noun data to compare all six of the measures, since four of the measures are limited to nouns as input. The accuracy of disambiguation when performed using each of the measures for nouns is shown in Table 2. Gloss Vector Tuning As discussed in earlier sections, the Gloss Vector measure builds a word space consisting of first order context vectors corresponding to every word in a corpus. Gloss vectors are the resultant of a number of first order context vectors. All of these vectors encode semantic information about the concepts or the glosses that the vectors represent. We note that the quality of the words used as the dimensions of these vectors plays a pivotal role in getting accurate relatedness scores. We find that words corresponding to very specific concepts, which are highly indicative of a few topics, make good dimensions. Words that are very general in nature and that appear all over the place add noise to the vectors. In an earlier section we discussed using stop words and frequency cutoffs to keep only the high "information content" words. In addition to those, we also experimented with a term frequency · inverse document frequency cutoff. Term frequency and inverse document frequency are commonly used metrics in information retrieval. For a given word, the term frequency (tf) is the number of times the word appears in the corpus. The document frequency is the number of documents in which the word occurs. The inverse document frequency (idf) is then computed as idf = log(Number of Documents / Document Frequency) (1). The tf·idf value is an indicator of the specificity of a word. The higher the tf·idf value, the lower the specificity. Figure 3 shows a plot of the tf·idf cutoff on the x-axis against the correlation of the Gloss Vector measure with human judgments on the y-axis. The tf·idf values ranged from 0 to about 4200. Note that we get lower correlation as the cutoff is raised. Analysis We observe from the experimental results that the Gloss Vector measure corresponds the most with human judgment of relatedness (with a correlation of almost 0.9). We believe this is probably because the Gloss Vector measure most closely imitates the representation of concepts in the human mind. (Miller and Charles, 1991) suggest that the cognitive representation of a word is an abstraction derived from its contexts (encountered by the person). Their study also suggested that the semantic similarity of two words depends on the overlap between their contextual representations. The Gloss Vector measure uses the contexts of the words and creates a vector representation of these. The overlap between these vector representations is used to compute the semantic similarity of concepts.
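A hedged sketch of the tf·idf cutoff of equation (1), treating each gloss as a "document" and keeping only words whose tf·idf does not exceed the cutoff (the function and threshold are assumptions for illustration, not the authors' code):

import math
from collections import Counter

def tf_idf_filter(glosses, cutoff):
    # Keep words whose tf.idf is at or below the cutoff.
    tf = Counter(w for g in glosses for w in g.split())
    df = Counter(w for g in glosses for w in set(g.split()))
    n_docs = len(glosses)
    return {w for w in tf if tf[w] * math.log(n_docs / df[w]) <= cutoff}

glosses = ["an artificial source of visible illumination",
           "cutlery used to serve and eat food"]
print(tf_idf_filter(glosses, cutoff=1.0))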
(Landauer and Dumais, 1997) additionally perform singular value decomposition (SVD) on their context vector representation of words, and they show that reducing the number of dimensions of the vectors using SVD more accurately simulates learning in humans. We plan to try SVD on the Gloss Vector measure in future work. In the application-oriented evaluation, the Gloss Vector measure performed relatively well (about 41% accuracy). However, unlike in the human study, it did not outperform all the other measures. We think there are two possible explanations for this. First, the word pairs used in the human relatedness study are all nouns, and it is possible that the Gloss Vector measure performs better on nouns than on other parts of speech. In the application-oriented evaluation the measure had to make judgments for all parts of speech. Second, the application itself affects the performance of the measure. The Word Sense Disambiguation algorithm starts by selecting a context of 5 words from around the target word. These context words contain words from all parts of speech. Since the Jiang-Conrath measure assigns relatedness scores only to noun concepts, its behavior would differ from that of the Vector measure, which would accept all words and would be affected by the noise introduced from unrelated concepts. Thus the context selection factors into the accuracy obtained. However, for evaluating the measure as being suitable for use in real applications, the Gloss Vector measure proves relatively accurate. The Gloss Vector measure can draw conclusions about any two concepts, irrespective of part of speech. The only other measure that can make this same claim is the Extended Gloss Overlaps measure. We would argue that Gloss Vectors present certain advantages over it. The Extended Gloss Overlap measure looks for exact string overlaps to measure relatedness. This "exactness" works against the measure, in that it misses potential matches that intuitively would contribute to the score (for example, silverware with spoon). The Gloss Vector measure is more robust than the Extended Gloss Overlap measure, in that exact matches are not required to identify relatedness. The Gloss Vector measure attempts to overcome this "exactness" by using vectors that capture the contextual representation of all words. So even though silverware and spoon do not overlap, their contextual representations would overlap to some extent. Related Work (Wilks et al., 1990) describe a word sense disambiguation algorithm that also uses vectors to determine the intended sense of an ambiguous word. In their approach, they use dictionary definitions from LDOCE (Procter, 1978). The words in these definitions are used to build a co-occurrence matrix, which is very similar to our technique of using the WordNet glosses for our Word Space. They augment their dictionary definitions with similar words, which are determined using the co-occurrence matrix. Each concept in LDOCE is then represented by an aggregate vector created by adding the co-occurrence counts for each of the words in the augmented definition of the concept. The next step in their algorithm is to form a context vector. The context of the ambiguous word is first augmented using the co-occurrence matrix, just like the definitions. The context vector is formed by taking the aggregate of the word vectors of the words in the augmented context.
To disambiguate the target word, the context vector is compared to the vectors corresponding to each meaning of the target word in LDOCE, and that meaning is selected whose vector is mathematically closest to that of the context. Our approach differs from theirs in two primary respects. First, rather than creating an aggregate vector for the context, we compare the vector of each meaning of the ambiguous word with the vectors of each of the meanings of the words in the context. This adds another level of indirection in the comparison and attempts to use only the relevant meanings of the context words. Secondly, we use the structure of WordNet to augment the short glosses with other related glosses. (Niwa and Nitta, 1994) compare dictionary-based vectors with co-occurrence-based vectors, where the vector of a word is the probability that an origin word occurs in the context of the word. These two representations are evaluated by applying them to real world applications and quantifying the results. Both measures are first applied to word sense disambiguation and then to the learning of positives or negatives, where it is required to determine whether a word has a positive or negative connotation. It was observed that the co-occurrence-based idea works better for word sense disambiguation and the dictionary-based approach gives better results for the learning of positives or negatives. From this, the conclusion is that the dictionary-based vectors contain some different semantic information about the words and warrant further investigation. It is also observed that for the dictionary-based vectors, the network of words is almost independent of the dictionary that is used, i.e., any dictionary should give us almost the same network. (Inkpen and Hirst, 2003) also use gloss-based context vectors in their work on the disambiguation of near-synonyms, words whose senses are almost indistinguishable. They disambiguate near-synonyms in text using various indicators, one of which is context-vector-based. Context Vectors are created for the context of the target word and also for the glosses of each sense of the target word. Each gloss is considered as a bag of words, where each word has a corresponding Word Vector. These vectors for the words in a gloss are averaged to get a Context Vector corresponding to the gloss. The distance between the vector corresponding to the text and that corresponding to the gloss is measured (as the cosine of the angle between the vectors). The nearness of the vectors is used as an indicator to pick the correct sense of the target word. Conclusion We introduced a new measure of semantic relatedness based on the idea of creating a Gloss Vector that combines dictionary content with corpus-based data. We find that this measure correlates extremely well with the results of the human relatedness studies, and this is indeed encouraging. We believe that this is due to the fact that the context vector may be closer to the semantic representation of concepts in humans. This measure can be tailored to particular domains depending on the corpus used to derive the co-occurrence matrices, and makes no restrictions on the parts of speech of the concept pairs to be compared. We also demonstrated that the Vector measure performs relatively well in an application-oriented setup and can be conveniently deployed in a real world application.
It can be easily tweaked and modified to work in a restricted domain, such as bioinformatics or medicine, by selecting a specialized corpus to build the vectors.
2014-07-01T00:00:00.000Z
2006-01-01T00:00:00.000
{ "year": 2006, "sha1": "0f10c6f57e7b640a2e89e94b5ca7d2b0d46d3925", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ACL", "pdf_hash": "0f10c6f57e7b640a2e89e94b5ca7d2b0d46d3925", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
36790533
pes2o/s2orc
v3-fos-license
Bridging stent repair of type III endoleak causing aortocaval fistula after branched aortic endovascular repair A 62-year-old man presented to our department with abdominal pain and diarrhea for 3 weeks on a background of previous branched endovascular repair for a thoracoabdominal aneurysm. A triple-phase computed tomography scan of his abdomen and pelvis showed a large aortocaval fistula caused by a type III endoleak from a dislodged superior mesenteric artery stent. He was successfully treated with a BeGraft (Bentley Innomed, Hechingen, Germany) by using an endovascular technique. Very few cases of aortocaval fistulas (ACFs) after endovascular aneurysm repair (EVAR) have been reported in the literature. Most case reports have described an ACF in the context of a ruptured abdominal aortic aneurysm (AAA) or an iatrogenic injury to the inferior vena cava during laparoscopic repair of an AAA. Only four reports of an ACF caused by a persistent type II endoleak have been published. We describe the case of a 62-year-old man who presented with a symptomatic ACF caused by a type III endoleak that was successfully managed using an endovascular technique. Consent was obtained from the patient to publish this case report along with the images. CASE REPORT A 62-year-old man was referred to our unit with a 2-week history of crampy abdominal pain and diarrhea. His medical history was significant for controlled hypertension and former smoking. His surgical history was notable for a previous endoluminal repair of a large thoracoabdominal aneurysm using a standard branched Cook endograft (Cook Medical, Bloomington, Ind) with four visceral branches for the celiac artery, the superior mesenteric artery (SMA), and the right and left renal arteries. He underwent this procedure 4 years before this presentation. The patient was afebrile and hemodynamically stable. He had been passing four to five watery stools daily for the past 2 weeks. The initial laboratory investigations, namely a full blood count, inflammatory markers, and stool culture, were nondiagnostic for an acute infective process. The patient denied a history of recent travel or previous viral illness. A triple-phase computed tomography scan of the thorax, abdomen, and pelvis showed dilated bowel loops measuring up to 6 cm (maximum transverse diameter) in the SMA territory. A dislodged SMA stent was noted causing a large type III endoleak with decompression into the inferior vena cava via a large ACF. The endoleak was arising from the junction of the branch with the connecting stent (Fig 1). There was a marginal expansion of the sac in the interim period, from 11 to 11.4 cm. The fistulation meant that the sac size did not increase significantly. After informed consent was obtained, the patient was brought forward for an endoluminal repair of the dislodged SMA stent as a semi-elective case. Access was gained via a left axillary cutdown. Because of the tortuous nature of the branches, the SMA stent was initially cannulated. The significant gap from the end of the stent to the native SMA made it difficult for traditional catheters to gain access. A steerable catheter was successfully used to bridge the gap to the native SMA. A 10-mm × 57-mm BeGraft (Bentley Innomed, Hechingen, Germany) was successfully deployed (Fig 2). Selective digital subtraction angiography runs of the SMA showed the type III endoleak was completely excluded. Delayed runs showed no filling of the ACF.
The patient made an uneventful recovery, with complete resolution of his symptoms, and was discharged home 2 days later. Repeat imaging showed sealing of the endoleak and no further fistula flow. The sac size had also stabilized at the 1-month follow-up computed tomography scan (Fig 3). At the 6-month follow-up, he was symptom free.

DISCUSSION

An ACF is a rare condition that was first described by James Syme in 1831. 1 In a study reported by Akwei et al, ACFs constitute <6% of all arteriovenous fistulas. 2 An erosion or spontaneous rupture of the AAA into the vena cava is the most common cause of an ACF. 3,4 The incidence of an ACF in patients treated with an endoprosthesis for an aortic aneurysm repair is low. Increased cardiac preload and venous hypertension are understood to be the sequelae of shunting of blood from a high-pressure arterial system into a low-pressure venous system. This may clinically manifest in some patients as acute lower limb edema or hematuria caused by impaired renal perfusion. 5,6 Lau et al 5 have described these symptoms as prevalent in 50% to 80% of patients with a diagnosis of an ACF. Leon et al 7 mention that the symptoms related to an arteriovenous fistula vary according to the size of the fistula. Patients with a small fistula could be asymptomatic.

Endovascular and conservative treatment options have been described in the surgical literature for the management of an ACF. The traditional approach for treatment of an ACF was an open surgical repair, which had a mortality rate of 30%, particularly in patients with cardiac decompensation. 5 The first endovascular repair was performed by Beveridge et al 8 in 1998, after which other authors have described successful treatment using endovascular techniques. Van de Luijtgaarden et al 9 report that, in the absence of systemic repercussions, persistent ACFs caused by a type II endoleak after EVAR may be managed conservatively and that favorable remodelling of the aneurysm sac might be possible.

In our case, the patient was symptomatic from the type III endoleak due to the dislodged SMA stent. This could have occurred as a result of remodelling of the aorta after aneurysm repair. The bowel symptoms were related to ischemia caused by alteration of the arterial supply due to the stent dislodgment and may also have been related to the venous hypertension. Because the patient's symptoms resolved immediately after the endovascular treatment, this was thought to be the most likely cause.

The Bentley Innomed stent is an approved graft in Australia and New Zealand. These grafts are extremely versatile and became popular after Atrium V12 stent grafts (Atrium Medical Corp, Hudson, NH) became unavailable in New Zealand. In our case, an indirect approach was used to treat the ACF without using an endovenous stent. However, a covered endovenous stent or direct embolization of the aortic sac by injecting Onyx glue (ev3 Endovascular, Inc, Plymouth, Minn) could be considered if there were evidence of aortic sac expansion in the future.

CONCLUSIONS

A high degree of vigilance is required in patients presenting with atypical symptoms of abdominal pain. Timely diagnosis and management are important in the treatment of an ACF because this has been shown to improve patient outcomes. 10 Endovascular treatment of ACFs is possible even after remodelling of the stent grafts from a previous EVAR.
Diversity in wine and medicine

'A lot of different flowers make a bouquet.'

The first question I asked myself when I was appointed Professor of Academic Medicine and Director of (undergraduate) teaching and learning at Trinity College, Dublin, was: Is there an optimum way of selecting medical students? Apparently not. In Ireland, as in many countries, the number of applicants for places in medical schools far exceeds the number available. The selection process in Ireland depends on the number of 'points' a candidate obtains in the national 'leaving certificate' examination (the so-called CAO system). Although this system has been widely criticised, it provides a degree of fairness and prohibits parents from applying undue pressure to have their children selected for entry. I recollect many phone calls (exclusively from mothers) extolling the family's contribution to the university and asking me to favour the selection of their offspring. Luckily (for me), I always said there was nothing I could do, as it all depended on the grades achieved by the prospective candidate, and the contribution of the family to the university was irrelevant.

In the UK, selection of medical students depends not only on examination grades but also heavily on an interview with the prospective candidate. While an interview may provide important information, the pressure imposed on the selection board may become overwhelming, especially in a small country (such as Ireland). In France, each academic institution may have different admission criteria, and medical studies are divided into three cycles. The first cycle (PCEM) lasts 2 years. At the end of the first year, there's a difficult examination which determines who goes into the second year. Usually, only 15-20% of students pass. This method of selection offers an opportunity for many students to enter medical school, but after the first year only students who achieve high grades progress to the next year to continue their medical studies.

In the USA, there is a mixture of state universities and private medical schools. In 2019, a scandal erupted when a criminal conspiracy was uncovered. This revealed an attempt by parents to influence undergraduate admissions at several top American universities. To date, >50 people have been charged with bribery, money laundering and document fabrication, and several have received prison sentences. I do not know if any medical schools were involved, but the scandal certainly tainted the 'fairness' and diversity of admissions policies.

In a 2021 paper in the New England Journal of Medicine, Morris et al. report on Diversity of the National Student Body [1]. They found, not surprisingly, that the number of male African-American, Hispanic and other racial and ethnic group enrollees was well below the percentages of these groups in the national census. Richter et al., writing in the same journal in 2020, point out that women are much less likely to be promoted to higher academic positions than men. The authors offer a number of factors, including an 'old boys' club' mentality, which I presume means misogyny [2]. So, diversity is still an issue.

Without going through every country, it is clear that there is no universally acceptable system of selecting medical school entrants and that there is diversity in the system of selection. Whatever about diversity in selection, is there diversity in the social class or racial background of enrollees? It seems to me from observation that most medical undergraduates come from middle- or upper-class backgrounds.
In Trinity College, Dublin, an attempt was made to widen the social background of students entering medical school. The Trinity Access Programme (TAP) is an attempt to widen the scope of entrants to medical school. A number of TAP students who entered the school of medicine unfortunately dropped out. This was due not only to a combination of ongoing financial/social difficulties but also to snobbery from middle-class colleagues. White, a freelance journalist in the UK, took the medical profession and medical schools to task in 2019 [3]. She says that the UK government's social mobility advisor lambasted the medical profession because too few people from socially and educationally disadvantaged backgrounds were being encouraged to become doctors. Although progress is being made, it is very slow. The racial and gender balance has been tackled, and in 2014, 41% of entrants to medical school were from ethnic minority backgrounds. In many medical schools in Europe, the gender balance has swung in favour of females, with their numbers now exceeding the number of male applicants. Some countries have introduced another entrance examination (for example, the Health Professions Admission Test in Ireland) in an effort to increase the percentage of male applicants, but this is of doubtful value and is a highly contested idea. The idea that one's score cannot be improved by extra tuition is clearly fatuous.

Is diversity an issue in the wine trade? Yes. I have already alluded to misogyny [4] in the world of sommeliers, which is a most unpleasant topic, but things may be improving in some quarters. Robinson [5], writing in the Weekend FT, reports on the recent Taylor's Port Golden Vines scholarships. Apparently 42 wine professionals from all over the world applied for scholarships which would help with the considerable costs of studying for a Master of Wine or a Master Sommelier qualification. Many of the applicants were female and, as she says, '…a wine world that can sometimes be seen as a rather stuffy, Anglo-centric institution'. It looks like this is changing, slowly.

Whether you believe in global warming or not, the range of grapes now used for making wine has certainly expanded. Likewise, the number of countries making quality wines is protean. A quick glance at the number of countries making quality wines sold by The Wine Society in the UK reveals at least 17, but excludes China and India. Long gone are the days when the only countries worth considering were France, Germany, Italy, Spain, Australia and the USA. Diversity in grape varieties is also on the rise. Gapper [6], writing in the Weekend FT, points out that wine farmers will have to adapt to climate change if they are to survive. However, wine farmers not only have to cope with harsh weather conditions [7] but also with snobbery about the provenance of certain wines. A 'burgundy' which is not made in Burgundy may be difficult to sell, but personally I have no problem drinking sparkling wine made by méthode champenoise from the south of England.
Grape varieties which make excellent wines include Nero d'Avola, Fiano di Avellino, Falanghina and Greco di Tufo, and, of course, the grape Primitivo, from the heel of Italy, is now known (since the 1960s) to be the same as Zinfandel from California. But in an added twist, it seems the grape originated in Croatia. Wines made from Grüner Veltliner (Styria, south of Vienna), Fig. 1, make a lovely and inexpensive aperitif. Pinot Blanc (Bianco in Italy) is an underrated grape, and some wine makers use it to make an excellent and, again, inexpensive aperitif. Furmint from Hungary, Assyrtiko from Greece and Bombino Bianco from the heel of Italy [7] are all becoming more popular. Malbec from Mendoza and from Cahors in France has recently become very popular. This is by no means an exhaustive selection of grapes but points out that there are many varieties and therefore diversity for wine drinkers.

Is there diversity in wine making? Yes. Fermentation may take place in stainless steel tanks, wooden barrels and cement tanks, and now amphorae, Fig. 2, are making a comeback. In Beaujolais, most wine makers use carbonic maceration as a means of fermentation, Fig. 3, using whole bunches of grapes fermented in an anaerobic environment where CO2 replaces O2, so that fermentation takes place without the necessity for yeasts. One piece of good news, at least for exporters of sparkling wine to the UK, is that Rishi Sunak, the chancellor, announced the end of a premium duty on sparkling wines.

So, when buying wine, be brave and do not stick to the wines/grapes you have always consumed; there is more diversity out there with which to experiment.
The Evolution of Blockchain: from Lit to Dark

Transactions submitted through the blockchain peer-to-peer (P2P) network may leak out exploitable information. We study the economic incentives behind the adoption of blockchain dark venues, where users' transactions are observable only by miners on these venues. We show that miners may not fully adopt dark venues in order to preserve rents extracted from arbitrageurs, hence creating execution risk for users. The dark venue neither eliminates frontrunning risk nor reduces transaction costs. It strictly increases the payoff of miners, weakly increases the payoff of users, and weakly reduces arbitrageurs' profits. We provide empirical support for our main implications and show that they are economically significant. A 1% increase in the probability of being frontrun raises users' adoption rate of the dark venue by 0.6%. Arbitrageurs' cost-to-revenue ratio increases by a third with a dark venue.

Introduction

Blockchain was initially conceived by Nakamoto (2008) as the backbone technology behind digital currencies and decentralized trustless payment systems. Over time, with the development of smart contract technologies, blockchain systems have enabled additional services, such as tokenization of assets, crowdfunding, and decentralized finance (typically abbreviated as DeFi). See, for instance, Yermack (2017), Cong et al. (2020b), Gan et al. (2021) and Harvey et al. (2021). As blockchain evolves from a payment system to an infrastructure for financial services, transparency of information becomes a key concern. Because of the anonymity of the blockchain network, many users cannot send their transactions directly to miners but have to broadcast their transactions through the blockchain peer-to-peer (P2P) network in order to get them executed. Those pending transactions are observable by any node in the network before execution, including malicious arbitrageurs. Arbitrageurs can exploit the leaked information and execute frontrunning or backrunning attacks on those pending transactions (see, for instance, Park (2021) and Daian et al. (2020)). These vulnerabilities are especially pronounced for DeFi transactions. Arbitrages exploiting pending DeFi transactions have generated significant losses for users (Qin et al. (2021); Wang et al. (2022)), and the losses are often referred to as miner extractable value (MEV). 1 Moreover, arbitrage transactions make the underlying blockchain more congested, and thus increase transaction costs, which in turn imposes negative externalities on other users of the same blockchain.

Most innovations in blockchain systems have targeted improvements of the consensus protocol and of system performance. However, few of them have considered the communication mechanism between nodes (especially between users and miners) in the P2P network, which causes the "built-in" information leakage problem. In mid-2021, relay services such as Flashbots and Eden Network were introduced 2 with the objective of providing protection against frontrunning attacks and mitigating the negative externalities generated by the high transaction costs imposed by arbitrageurs on users. Relay services create venues for users to send their transactions directly to miners. We call these venues dark because pending transactions submitted through them are not publicly observable, and thus the transaction information cannot be exploited by arbitrageurs.
We build a game theoretical model to analyze the economic incentives behind the adoption of blockchain dark venues. Our model aims at answering the following questions: Will the dark venue be adopted by participants of the blockchain ecosystem? Will the adoption reduce frontrunning arbitrage and transaction costs? Is the introduction of a dark venue welfare enhancing?

We show that the dark venue is at least partially adopted by miners and utilized by at least one arbitrageur. The introduction of a dark venue neither eliminates frontrunning arbitrage nor reduces transaction costs. It strictly increases the payoff of miners who adopt the dark venue, but weakly decreases the payoff of miners who stay on the lit venue. After the introduction of the dark venue, the payoff of frontrunnable users increases, while the payoff of arbitrageurs decreases. Aggregate welfare is maximized when all miners adopt the dark venue. However, this outcome may not be attainable in equilibrium because miners have a strong incentive to maintain the rents extracted from arbitrageurs. We propose a self-financing payment transfer which resolves the misalignment of incentives between miners and users.

Our model features three types of agents, i.e., miners, users, and arbitrageurs, and two transaction submission venues, i.e., a dark venue (relay) and a lit venue (the P2P network). Miners need to decide whether or not to join the dark venue. Users submit transactions to the blockchain either through the lit venue or through the dark venue. Transactions sent through the lit venue are publicly observable by all agents, while transactions submitted through the dark venue are observable only by miners who join the dark venue. One user faces frontrunning risk when she submits transactions through the lit venue. We refer to her as the frontrunnable user, and to her transaction as a frontrunnable transaction. The remaining users do not face frontrunning risk and are referred to as non-frontrunnable users. Arbitrageurs who identify a frontrunnable transaction in the lit venue compete to exploit the opportunity. The adoption rate of miners determines the execution probability in the dark venue. In turn, the venue selection decisions of users and arbitrageurs determine the benefit of joining the dark venue for miners.

Users and arbitrageurs face a trade-off between execution risk and information leakage. On the one hand, using the dark venue alone presents execution risk to users. Transactions submitted to the dark venue face the risk of not being observed by the miner updating the blockchain, who may not have adopted the dark venue. On the other hand, users who only submit through the dark venue avoid the risk of being frontrun. Arbitrageurs who only use the dark venue do not leak out information about the identified opportunity to their competitors. They also gain prioritized execution for their orders because miners on the dark venue prioritize transactions sent through such venue. We show that both arbitrageurs and the frontrunnable user will submit their transactions through the dark venue if sufficiently many miners adopt it. If instead the execution risk is high, arbitrageurs will use both the lit and the dark venue: through the dark venue they gain prioritized execution, and through the lit venue they are guaranteed execution. Because of arbitrageurs' competition, paid transaction fees are above the minimum required for transactions to be executed on the blockchain.
Those fees are passed to miners, and thus miners and arbitrageurs share MEV.

We investigate the trade-off faced by miners. Each miner can observe more transactions (i.e., in addition to those submitted to the lit venue) if he were to join the dark venue. However, if sufficiently many miners join this venue, execution risk becomes sufficiently low to incentivize users to migrate from the lit to the dark venue. This, in turn, eliminates the frontrunning arbitrage opportunities that generate MEV. As a result, it may not be incentive compatible for miners to adopt the dark venue.

We characterize the subgame perfect equilibrium of the game. If the frontrunning problem is severe, there exists a unique equilibrium where miners fully adopt the dark venue. The reason is that the frontrunnable user would only submit her transaction through the dark venue, but not through the blockchain lit venue. In equilibrium, miners fully adopt the dark venue to attract this user and earn the right to observe her transaction. In such a case, the incentives of miners and users are perfectly aligned. By contrast, if the frontrunning problem is not too severe, there exists an equilibrium where miners do not fully adopt the dark venue. The frontrunnable user would still broadcast through the lit venue and bear the risk of being frontrun. Miners have insufficient incentives to mitigate frontrunning risk because they do not want to forgo MEV. As a result, miners only partially adopt the dark venue and create execution risk, which in turn leads users to prefer submitting transactions through the lit venue and to be subject to potential frontrunning by arbitrageurs.

In equilibrium, we show that both the aggregate transaction fee per block and the minimum transaction fee required for inclusion in the blockchain increase if a dark venue is present. This may, at first, appear surprising because a dark venue should at least weakly reduce the block space occupied by frontrunning arbitrage orders, and thus weakly decrease transaction costs. However, this is not the case, for the following reasons. First, miners adopt the dark venue if and only if their expected transaction fee revenue increases. Second, the creation of a dark venue raises the number of transactions, because it attracts those which would otherwise not be submitted to the blockchain due to high frontrunning risk. Third, a dark venue increases competition between arbitrageurs and thus raises the bid transaction fees.

Our analysis generates welfare implications. Miners who join the dark venue are the only participants of the blockchain ecosystem whose welfare strictly increases in the presence of a dark venue. The positional advantage of miners, that is, the ability to determine the execution risk faced by other agents, allows them to extract a larger rent with a dark venue. Welfare of arbitrageurs is reduced because a larger portion of their profits is extracted by miners. The payoff of users remains unchanged if miners adopt the dark venue only partially to preserve the MEV generated from frontrunning arbitrage. Aggregate welfare of all participants in the ecosystem is maximized if the dark venue is fully adopted by miners. Full adoption eliminates the frontrunning problem, and the entire block space gets allocated to users. However, this outcome is not always attainable in equilibrium because it may not be incentive compatible for miners to fully adopt the dark venue.
We propose a self-financing transfer from the frontrunnable user to miners which aligns their incentives. We show that if the frontrunnable user commits to subsidize the dark venue and those subsidies are then passed to miners, the blockchain ecosystem moves to a new full-adoption, welfare-maximizing equilibrium.

We provide empirical support for our model implications. Our dataset contains dark-venue transaction-level data for the Ethereum blockchain collected from the Flashbots API, Ethereum block data, and transaction-level data from the Uniswap V2 and Sushiswap AMMs. Our empirical analysis confirms that the dark venue is partially adopted, and further estimates the dark venue adoption rate at around 60% as of July 2021. Our analysis also shows that miners who join the dark venue have higher revenue than those who stay on the lit venue. Our estimates indicate that joining the dark venue increases miners' revenue by around 0.16 ETH (500 USD) per block. Consistent with our model prediction that users migrate from the lit to the dark venue if frontrunning risk is large, we find that the probability of being frontrun is positively correlated with the proportion of frontrunnable user transactions submitted through the dark venue. A 1% increase in the probability of being frontrun increases the proportion of transactions sent through the dark venue by 0.6%. Our empirical results also confirm that the welfare of arbitrageurs decreases after the introduction of a dark venue. We show that the introduction of the dark venue increases arbitrageurs' cost-to-revenue ratio by around a third.

Literature Review. Our paper contributes to the scarce literature on the information structure of blockchain. Cong and He (2019) analyze how blockchain reshapes agents' information and incentives. Our study highlights how different transaction submission channels affect blockchain participants' incentives. More broadly, our work is related to the existing literature on the economic analysis of blockchain (see, e.g., Prat and Walter (2021)). We focus on the economic incentives behind the adoption of blockchain dark venues, designed to mitigate the consequences of information leakage. Our paper is also related to the branch of the market microstructure literature which has analyzed dark venues (e.g., Zhu (2014), Buti et al. (2017), Degryse et al. (2009)). These papers study how the introduction of a dark venue impacts market quality and the welfare of market participants. Zhu (2014) studies how execution risk arising in the dark venue leads to better information discovery in the lit venue. In his dark pool setting, execution risk arises because informed traders may overcrowd one side of the dark market. In our setting, instead, execution risk in the blockchain dark venue arises because miners earn rents from MEV opportunities, which acts as a disincentive for them to adopt the dark venue. 3

The paper proceeds as follows. We provide background knowledge on relay services in Section 2. We introduce the game theoretical model in Section 3. We solve for the subgame perfect equilibrium and examine its economic properties in Section 4. We analyze welfare implications in Section 5. Section 6 provides empirical support for our model implications. Proofs of technical results are relegated to the Appendix.

3 In the context of market design adoption, Budish et al. (2019) show that rent extraction can provide a disincentive for stock exchanges to eliminate sniping risk.
Background on Relay Services

In this section, we explain the "built-in" information leakage problem of blockchain and discuss the principles of relay services.

Blockchain and Information Leakage

A blockchain is a decentralized database maintained by distributed participants over a P2P network. Every participant can issue transactions and broadcast them to every node in the P2P network. Miners, also referred to as validators, collect transactions, add them into blocks, and append blocks to the existing blockchain. Users attach an upfront fee to their submitted transactions. Fees allow users to gain execution priority, as miners execute transactions in decreasing order of fees. Each node on the blockchain may observe pending transactions in the P2P network. This transparency is not a concern if blockchain is used as a technology for digital payments, because there is no gain to be made from frontrunning a payment transaction. Information leakage becomes worrisome if blockchain is used as an infrastructure for financial intermediation. For example, the Ethereum blockchain enables DeFi applications, through which smart contracts act as financial intermediaries and provide a broad range of financial services, including borrowing and lending, token exchanges, leverage trading, and flash loans.

In a displacement attack, an attacker observes a profitable transaction from a victim user. She then broadcasts her own transaction with the same arbitrage strategy but with a higher transaction fee. The frontrunning transaction will then be executed in advance of the victim transaction; the attacker takes the profit, while the victim transaction fails. In an insertion attack, an attacker observes a frontrunnable transaction from a victim user. She then broadcasts two transactions: one (the frontrunning transaction) with a higher transaction fee than the victim transaction, and the other (the backrunning transaction) with a lower transaction fee. After the frontrunning transaction is completed, the market price changes. Consequently, the price paid by the victim transaction will be higher than if no attack had taken place. This results in a worse exchange rate and financial losses to the victim, and the attacker receives the profit with the backrunning transaction (a numerical sketch of this attack is given at the end of this section). In a suppression attack, an attacker observes an attackable transaction from a victim user. She then broadcasts transactions with a higher transaction fee in order to prevent the victim transaction from being included in the block. Note that the suppression attack is very expensive, because the attacker tries to consume as much gas as possible to reach the capacity limit of the block. In the current DeFi market, insertion frontrunning attacks are the most common (see Torres et al.).

Relay Services

Relay services are an implementation of the dark venue, which provides a private communication channel between users and miners. A centralized relay service receives transactions from users and forwards them directly to miners, without broadcasting them on the P2P network. Therefore, users' transactions cannot be observed by malicious arbitrageurs. To ensure that miners in a private channel do not misuse observed information, the relay platform screens miners before they join the relay service and monitors their activities. 4 The first relay service, Flashbots, was launched in January 2021. Miners who join the private channel also have to prioritize execution of the highest bidding transactions by including them at the top of a block.
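The following is a minimal numerical sketch of the insertion attack described above, run on a constant-product AMM pool (the pricing rule behind Uniswap V2 and Sushiswap, the exchanges studied in Section 6). The reserve sizes, trade sizes, and the 0.3% swap fee are illustrative assumptions, not calibrated values.

```python
def swap_out(x_reserve, y_reserve, dx, fee=0.003):
    """Amount of Y received for dx of X in an x*y=k pool with a proportional fee."""
    dx_eff = dx * (1 - fee)
    return y_reserve * dx_eff / (x_reserve + dx_eff)

x, y = 10_000.0, 10_000.0      # initial reserves of tokens X and Y (illustrative)
victim_in = 500.0              # victim swaps 500 X for Y
attacker_in = 500.0            # attacker frontruns with 500 X

# No attack: the victim trades against the untouched pool.
victim_out_clean = swap_out(x, y, victim_in)

# Insertion attack, step 1 (frontrun): the attacker buys Y first.
atk_y = swap_out(x, y, attacker_in)
x1, y1 = x + attacker_in, y - atk_y

# Step 2: the victim's swap now executes at a worse rate.
victim_out_attacked = swap_out(x1, y1, victim_in)
x2, y2 = x1 + victim_in, y1 - victim_out_attacked

# Step 3 (backrun): the attacker sells the Y back for X (reserves reversed).
atk_x_back = swap_out(y2, x2, atk_y)

print(f"victim receives without attack: {victim_out_clean:.2f} Y")
print(f"victim receives under attack:   {victim_out_attacked:.2f} Y")
print(f"attacker gross profit:          {atk_x_back - attacker_in:.2f} X")
```

In this toy run, the attacker's gross gain is close to the victim's shortfall, which mirrors the model's convention that the arbitrageur's profit c is the frontrunnable user's loss; in practice, gas costs and the bid transaction fee would be subtracted from the attacker's profit.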
The execution order of transactions submitted through the private channel is typically determined by a one-round, sealed-bid, first-price auction. 5 Hence, the transaction submitter neither knows the transactions submitted by other users nor the attached transaction fees. By contrast, in the P2P network, the transaction fee bidding takes the form of an ascending price auction, and it consists of multiple rounds of bid submission. Moreover, pending transactions and their fees are publicly observable.

4 The Flashbots Fair Market Principles (FFMP) can be found at https://hackmd.io/@Flashbots/fair-market-principles.
5 Flashbots utilizes Coinbase as an additional payment channel between users and miners, in addition to the transaction fee attached to the transaction. See https://docs.Flashbots.net/Flashbots-auction/searchers/advanced/coinbase-payment/

Model Setup

The timeline of our model consists of three periods indexed by t, t = 1, 2, 3. There are three types of agents: blockchain users, arbitrageurs, and miners. All agents are risk-neutral.

Miners. There is a continuum of homogeneous, rational miners. All miners have the same probability of earning the right to append a new block to the blockchain. At the end of period 3, the miner who appends the next block is drawn randomly from a uniform distribution. The winning miner earns the fees attached to the transactions included in the block plus a fixed reward. 6 The miner can at most include B transactions in a block due to limited capacity. There exist two transaction submission venues: the lit venue (blockchain P2P network) and the dark venue (relay service). In period 1, miners decide whether to join the dark venue. We assume that joining this venue is costless for miners. 7 We denote by α the portion of miners who join the dark venue in period 1. All miners can observe the transactions submitted through the lit venue, but only miners who join the dark venue can observe the transactions submitted through the dark venue. We assume that miners who join the dark venue do not disclose transaction information.

6 The reward amount does not affect our analysis. Regardless of whether or not a miner adopts the dark venue, his expected block reward remains constant, unlike the transaction fees earned.
7 As discussed in https://docs.Flashbots.net/Flashbots-auction/miners/faq/, the Flashbots relay is open-source software and does not charge any fee for usage.

At the end of period 3, the miner who successfully mines the block will select the B transactions whose attached fees are the highest. The winning miner can only select from the transactions he observes. We assume that any tie will be broken uniformly at random. The miner decides the execution order as follows. If the miner has joined the dark venue, then he prioritizes the transactions submitted through the dark venue and executes them first. 8 Those transactions will be executed in decreasing order of bid fees. Subsequently, the winning miner will include the transactions submitted through the lit venue, again in decreasing order of fees. A miner who has not joined the dark venue will only include the transactions from the lit venue, in decreasing order of fees. Since a miner's adoption decision does not affect the probability of mining the next block, a miner decides whether to join the dark venue to maximize the expected transaction fees conditional on him successfully mining the next block.
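Before turning to the venue-dependent fee revenues, the selection and ordering rule just stated can be summarized in a short sketch. This is a minimal illustration, not the paper's formal model; the transaction records, fee values, and block capacity B are illustrative assumptions.

```python
def build_block(lit_txs, dark_txs, B, adopted_dark):
    """Return the transactions a winning miner includes, in execution order.

    lit_txs / dark_txs are lists of (tx_id, fee) pairs; adopted_dark says
    whether this miner joined the dark venue (non-adopters never see dark_txs).
    """
    observed = [(tid, fee, "lit") for tid, fee in lit_txs]
    if adopted_dark:
        observed += [(tid, fee, "dark") for tid, fee in dark_txs]
    # Select the B highest-fee transactions among those the miner observes ...
    chosen = sorted(observed, key=lambda t: t[1], reverse=True)[:B]
    # ... then place dark-venue transactions at the top; each group stays
    # sorted by decreasing fee because `chosen` is already fee-sorted.
    return [t for t in chosen if t[2] == "dark"] + [t for t in chosen if t[2] == "lit"]

lit = [("u1", 5.0), ("u2", 3.0), ("u3", 1.0)]
dark = [("arb", 2.0)]
print(build_block(lit, dark, B=3, adopted_dark=True))   # dark order at the top
print(build_block(lit, dark, B=3, adopted_dark=False))  # lit transactions only
```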
The expected transaction fees earned from adopting the dark venue and from using only the lit venue are both contingent on the choices of users and arbitrageurs. We denote the expected fee revenue of the winning miner from adopting the dark venue by r_dark(·), and from using the lit venue only by r_lit(·).

Users. There are two types of users, and the type depends on the exogenously specified nature of their transactions. The first type is a user whose pending transaction is subject to a frontrunning attack if it is submitted through the lit venue and identified by arbitrageurs. We refer to this user as frontrunnable and to her transaction as a frontrunnable transaction. If the frontrunnable transaction is successfully written on the blockchain, it generates a benefit v_0 to the initiator, i.e., to the frontrunnable user. We assume that v_0 is common knowledge. However, if the pending transaction is identified by an arbitrageur, then the arbitrageur can frontrun it and earn a profit c ≥ 0. This, in turn, results in a loss of c for the frontrunnable user. The second type of users are those whose transactions are not frontrunnable, even if they are broadcast through the lit venue. We refer to this type of users as non-frontrunnable users and to their transactions as non-frontrunnable transactions. Without loss of generality, we assume there exist B + 1 non-frontrunnable users, indexed by i ∈ {1, 2, ..., B + 1}, whose transactions have valuations v_i, i ∈ {1, 2, ..., B + 1}, which are common knowledge. 9 We also impose a technical assumption to rule out corner cases in our analysis.

In period 2, users simultaneously decide the venue to which they send their transactions. A user can broadcast her transaction through the lit venue or through the dark venue, or choose not to submit her transaction. If a frontrunnable transaction is broadcast through the lit venue, it will face the risk of being identified and frontrun by arbitrageurs. If instead a transaction is only broadcast through the dark venue, then it will not be observed by miners who do not adopt the dark venue. Its probability of being included in the next block is at most α, which means that the execution risk of the dark venue is determined by miners' dark venue adoption rate. We index the frontrunnable user as user 0. We denote the venue chosen by user i, i ∈ I = {0, 1, 2, ..., B + 1}, by C_i ∈ {Dark, Lit, None}. User i also attaches a transaction fee f_i to her transaction. User i chooses her submission venue C_i and attached fee f_i to maximize her expected payoff E[(v_i − f_i) · 1_{Executed,i} − c · 1_{frontrun,i}], where 1_{Executed,i} is the indicator function for the event "transaction by user i is included in the block by the miner", and 1_{frontrun,i} is the indicator function for the event "transaction by user i is frontrun by arbitrageurs". We assume that users break any tie in favour of the lit venue. Our assumption is justified by the fact that using the dark venue usually requires more sophistication, and the interface for the lit venue is, in general, much easier for users to use.

Arbitrageurs. There are two competing arbitrageurs, indexed by j ∈ J = {1, 2}. The arbitrageurs first have to screen for the pending frontrunnable transaction in the lit venue and then exploit it. An arbitrageur who successfully exploits the opportunity earns a profit c ≥ 0. For any pending frontrunnable transaction, each arbitrageur has a probability p of independently identifying the frontrunning opportunity and exploiting it.
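A minimal numerical sketch of the frontrunnable user's venue trade-off, using the payoff just defined. For illustration only, it assumes that a lit-venue submission is always included while a dark-venue submission is included with probability α, that the user is frontrun whenever at least one of the two arbitrageurs detects her transaction, and that she attaches the same fee in either venue; the parameter values are arbitrary. The strict inequality matches the tie-breaking rule in favour of the lit venue.

```python
v0, f, c, p = 10.0, 1.0, 4.0, 0.5   # valuation, fee, frontrunning loss, detection prob.

def payoff_lit():
    p_frontrun = 1 - (1 - p) ** 2   # at least one of the two arbitrageurs detects
    return (v0 - f) - p_frontrun * c

def payoff_dark(alpha):
    return alpha * (v0 - f)         # included only if an adopter mines the block

for alpha in (0.2, 0.5, 0.8, 1.0):
    choice = "dark" if payoff_dark(alpha) > payoff_lit() else "lit"
    print(f"alpha={alpha:.1f}  lit={payoff_lit():.2f}  "
          f"dark={payoff_dark(alpha):.2f}  ->  {choice}")
```

As the adoption rate α grows, the dark venue's execution risk shrinks and the user eventually migrates, which is the threshold behaviour derived formally below.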
In practice, to identify an arbitrage opportunity and exploit it, an arbitrageur has to screen at least hundreds of pending transactions in a few seconds, calculate the profitability of frontrunning them, construct arbitrage orders, and bid appropriate transaction fees. As a result, not all arbitrage opportunities can be detected and exploited by arbitrageurs, and the probability p captures the difficulty of the above process.

In period 3, the two arbitrageurs first search for potential arbitrage opportunities independently. For any identified exploitable opportunity, the arbitrageur will create an order and decide to which venue to send it: the lit venue, the dark venue, or both. We assume that if the arbitrageur decides to send an arbitrage order to both venues, then he will give both transactions the same nonce, that is, a unique identifier. Since each nonce can be used only once, at most one of these two transactions will be executed. If the winning miner observes both transactions, he will only include the one with the highest transaction fee. If the order of an arbitrageur is broadcast through the lit venue, the other arbitrageur will observe it and identify the opportunity. The leaked information then leads to more competition for arbitrage execution. 10 If instead the arbitrage order is only sent to the dark venue, then it may be executed only if the next block is mined by a miner who adopts the venue. Hence, sending arbitrage orders only through the dark venue may limit the probability of the order getting executed, and thus presents execution risk to arbitrageurs. We denote the venue chosen by arbitrageur j by V_j ∈ {Lit, Dark, Both}. We denote the transaction fee bid by arbitrageur j in the private channel by f^D_j, and in the lit venue by f^L_j. Arbitrageurs choose their strategies to maximize their expected payoff E[(c − f_executed,j) · 1_{wins,j}], where 1_{wins,j} is the indicator function for the event "the order by arbitrageur j is executed before the order by the other arbitrageur", and f_executed,j is the transaction fee paid by arbitrageur j. Arbitrageurs employ a mixed strategy when choosing transaction fees. This guarantees the existence of a Nash equilibrium for the subgame in period 3. The tie-breaking rule for arbitrageurs is that "both venues" is their first choice, the "lit venue" is their second choice, and the "dark venue" is their third choice.

Transaction Fee Bidding. The arbitrageur who bids the highest fee can exploit the opportunity. The transaction fee bidding mechanisms in the two venues are different. Transaction fee bidding in the lit venue is a variant of an English auction, i.e., an open-outcry ascending-price auction. The auction only has r rounds, where r is a random variable which obeys a geometric distribution with success rate λ. There exists a random deadline for the transaction fee bidding auction, since the time required for miners to mine the block is random. In each round, only one arbitrageur moves, and the bid increment has to be larger than a minimum increment. If only one arbitrageur identifies an opportunity and decides to broadcast his order through the lit venue, then he moves first. If both arbitrageurs identify the same opportunity and decide to send their orders through the lit venue, then the first mover can be either of them with the same probability. To minimize the downside risk from the arbitrage execution, arbitrageurs deploy a smart contract. The smart contract would terminate the transaction if the arbitrage opportunity no longer exists.
In this case, the transaction would be deemed failed, and the corresponding transaction fee is negligible and assumed to be equal to zero in our model. The transaction fee bidding in the dark venue is a one-round, sealed-bid, first-price auction, where all bidders only have to submit their bids once to the relay, without leaking any information to other bidders. If two arbitrageurs submit the same order to exploit the same opportunity, then only the arbitrageur who pays the highest transaction fee will be considered by miners.

Equilibrium. We solve for the subgame perfect equilibrium (SPE) of the game described above. The equilibrium actions are the dark venue adoption rate of miners α*, the venue selection and transaction fee bidding strategies of users, and the venue selection and transaction fee bidding strategies of arbitrageurs. The strategy of user i is a mapping from the dark venue adoption rate of miners, α, to her transaction submission venue C_i and transaction fee bid f_i. The strategy of arbitrageur j is a mapping from the dark venue adoption rate of miners, α, and users' actions, (C_i, f_i)_{i∈I}, to his selected venue V_j and transaction fee bids (f^D_j, f^L_j).

Model Analysis

In this section, we solve for the SPE of the game. We begin by analyzing the venue choice of arbitrageurs. We subsequently study the equilibrium adoption rate of the dark venue, and investigate the corresponding welfare implications.

Venue Choice of Arbitrageurs

We analyze arbitrageurs' venue selection strategies, for any dark venue adoption rate α and assuming that the frontrunnable user chooses the lit venue. Note that it suffices to consider only this choice for the frontrunnable user, because if she were to submit her transaction through the dark venue, such a transaction would not be observable by arbitrageurs. Hence, they would not be able to submit any arbitrage order at t = 3. The main trade-off faced by arbitrageurs is as follows. On one hand, if an arbitrageur chooses only the dark venue, his detected opportunity would not be visible to his competing arbitrageur. This, in turn, reduces competition and thus the arbitrageur's cost from transaction fee bidding. Moreover, the arbitrageur gains prioritized execution, because transactions submitted through the dark venue are placed at the top of the block by miners who join the dark venue. On the other hand, using only the dark venue presents execution risk, because a fraction of miners may never observe transactions submitted to the dark venue. The following proposition characterizes the arbitrageurs' venue choice in equilibrium.

Proposition 1 (Venue Selection of Arbitrageurs). There exist two critical thresholds 0 < α_1 < α_2 ≤ 1, such that:
1. If α ≤ α_1, then the two arbitrageurs send transactions to both the lit and the dark venue in equilibrium.
2. If α_1 < α ≤ α_2, then there are two equilibria. In each equilibrium, one arbitrageur uses both venues while the other arbitrageur only uses the dark venue.
3. If α > α_2, then both arbitrageurs only use the dark venue in equilibrium.

The main intuition behind the above result is as follows. If only a small fraction of miners adopt the dark venue, the execution risk is high. As a result, arbitrageurs will submit their transactions to both venues. The reason why arbitrageurs would not use only the lit venue is to gain prioritized execution through the dark venue. If instead a large fraction of miners joins the dark venue, execution risk becomes small.
The benefit of using the dark venue, that is, of hiding arbitrage opportunities and avoiding intense transaction fee bidding competition, dominates its cost, that is, execution risk. Hence, arbitrageurs only use the dark venue. The next proposition characterizes the transaction fee bidding strategies of arbitrageurs.

Proposition 2 (Transaction Fees Bid by Arbitrageurs). Let α_1, α_2 be the critical thresholds identified in Proposition 1. The following statements hold:
1. If α ≤ α_1, then in equilibrium both arbitrageurs bid c in the dark venue. In the lit venue, one of the arbitrageurs places an opening bid v_{B−2}, and afterwards, in each of his bidding rounds, he increases the previous highest bid by the minimal increment.
2. If α_1 < α ≤ α_2, then in equilibrium the arbitrageur who uses both venues bids v_{B−2} in the lit venue and c in the dark venue. The other arbitrageur, who only participates in the dark venue, bids c if he observes a bid in the lit venue from the other arbitrageur, and bids v_{B−2} otherwise.
3. If α > α_2, then in equilibrium both arbitrageurs bid a transaction fee g drawn from a mixed-strategy probability distribution.

If execution risk is high, i.e., α < α_1, arbitrageurs submit their transactions through both venues. Since both arbitrageurs broadcast through the lit venue, if one arbitrageur detects an opportunity the other arbitrageur will also discover it. Hence, to exploit an opportunity, arbitrageurs have to outbid their competitors. Recall that transactions sent through the dark venue will be prioritized by miners who join this venue. To gain this benefit, both arbitrageurs submit to the dark venue and bid truthfully, that is, they bid transaction fees equal to their profits. In this case, the dark venue induces an arms race for prioritized execution between arbitrageurs. If execution risk is low, i.e., α > α_2, both arbitrageurs use only the dark venue to hide their opportunities. Hence, arbitrageurs do not know whether their competitors have also detected the same opportunity, so the equilibrium must be in mixed strategies. As arbitrageurs no longer bid their true valuation in the dark venue, the competition in the dark venue is less intense relative to the case when execution risk is high. Recall that if α_1 < α ≤ α_2, one arbitrageur only uses the dark venue, while the other arbitrageur uses both the lit and the dark venue. On the one hand, since the latter arbitrageur uses the lit venue, any arbitrage opportunity detected by him will be discovered by the other arbitrageur who uses only the dark venue. This again leads to an arms race for prioritized execution where both arbitrageurs bid truthfully. On the other hand, any arbitrage opportunity detected by the arbitrageur who uses only the dark venue will not be visible to the other arbitrageur. Hence, there will not be any competition, and the arbitrageur who uses only the dark venue can bid the minimum transaction fee. Observe that the transaction fees paid by arbitrageurs are pocketed by the winning miners. Because of competition, the transaction fees bid by arbitrageurs are always higher than v_{B−2}, that is, the minimum fee which guarantees a transaction to be executed by miners. This suggests that miners extract a portion of MEV.

Venue Choice of Users

We analyze the venue selection strategy of the frontrunnable user, for an exogenously specified relay adoption rate α. The main trade-off faced by the frontrunnable user is straightforward.
Using the dark venue exposes her to execution risk but eliminates the risk of being frontrun. Unlike arbitrageurs, the frontrunnable user does not use the dark venue to outbid competitors but merely to avoid frontrunning. When the dark venue adoption rate of miners is sufficiently large, the execution risk is small, and the user will also adopt the dark venue to avoid frontrunning. The thresholds for adoption of the dark venue by the arbitrageurs and by the users depend on the probability p that an arbitrageur detects the opportunity. The following corollary characterizes how these thresholds vary with p, keeping every other parameter fixed.

Corollary 1 (Sensitivity Analysis) characterizes the signs of the sensitivities of the thresholds α's and λ's with respect to p.

As p increases, the risk of being frontrun increases, and thus the benefit of using the dark venue for the frontrunnable user increases. Hence, the threshold for the adoption of the dark venue decreases. Conversely, as p increases it becomes easier to detect an arbitrage opportunity, reducing the value of information about the arbitrage opportunity. Hence, arbitrageurs are less incentivized to use the dark venue for protecting their private information.

Miners' Adoption and Equilibrium

We derive the equilibrium dark venue adoption rate of miners, α*, and characterize the SPE. For any α > 0, the miners who join the dark venue receive a higher payoff than those who only stay in the lit venue: r_dark(α) > r_lit(α). This is because transactions submitted through the dark venue can only be observed by miners who adopt the dark venue. As a result, if the actions of users and arbitrageurs are fixed, each individual miner has an incentive to join the dark venue. The situation changes once we account for the strategic responses of users and arbitrageurs. If sufficiently many miners join the dark venue, that is, if α is large enough, then the payoff of each miner may be lower than their payoff when α = 0. This is because the frontrunnable user may then route her transaction from the lit to the dark venue if the execution risk in the dark venue is small enough. The migration of this transaction would eliminate frontrunning opportunities and thus reduce MEV.

We first characterize the equilibrium strategy of the frontrunnable user in the benchmark case where there is no dark venue. This is obtained from our game theoretical framework by setting α = 0 and considering the subgame at periods t = 2, 3.

Proposition 4 (Only Lit Venue Benchmark). When α = 0, there exists a threshold c_1 ≥ 0 such that the frontrunnable user submits the transaction to the blockchain if and only if c ≤ c_1.

If the frontrunning problem is severe, i.e., c > c_1, then the frontrunnable user is not willing to submit her transaction to the blockchain, because the cost of being frontrun exceeds the benefit of executing her transaction. Conversely, if the frontrunning problem is not too severe, i.e., c ≤ c_1, then the frontrunnable user submits to the blockchain even if she faces the risk of being frontrun. We next characterize the SPE of our model. We refer to the equilibrium where the relay adoption rate α* = 1 as the full adoption equilibrium, the equilibrium where the relay adoption rate α* ∈ (0, 1) as the partial adoption equilibrium, and the equilibrium where the relay adoption rate α* = 0 as the no adoption equilibrium.

Proposition 5 (Characterization of the Equilibrium). Let c_1 be the critical threshold identified in Proposition 4. The following statements hold for the SPE of the game:
1. If c > c_1, there exists a unique full adoption equilibrium where the relay adoption rate is α* = 1, the frontrunnable user selects the dark venue, and the arbitrageurs do not submit arbitrage orders.
2. If c ≤ c_1, there exists a partial adoption equilibrium where the relay adoption rate is α* < 1, the frontrunnable user submits her transaction through the lit venue, and the arbitrageurs send their orders to the dark venue only or to both venues.

The dark venue will be, at least partially, adopted by miners, and the equilibrium outcome is contingent on the severity of the frontrunning problem. Suppose the frontrunning problem is severe. In the absence of a dark venue, it is too costly for the frontrunnable user to submit transactions to the blockchain. To incentivize the frontrunnable user to submit and earn her transaction fee, miners adopt the dark venue. In equilibrium, all miners decide to join the dark venue so that they are able to observe the transaction submitted by the frontrunnable user. Suppose instead that the frontrunning problem is not too severe. Even without a dark venue, the frontrunnable user would still submit her transaction to the blockchain, even though she bears the risk of being frontrun. Frontrunning arbitrage generates MEV for miners. To maintain their MEV, only a small fraction of miners choose to adopt the dark venue, which creates high execution risk. As a result, the frontrunnable user prefers to submit through the lit venue and face frontrunning risk. In such a case, the introduction of a dark venue does not prevent frontrunning arbitrage.

Welfare Implications

We investigate how the introduction of a dark venue impacts transaction costs on the blockchain. We also analyze how the welfare of market participants is impacted by a dark venue. We impose the following equilibrium selection criterion: among all equilibria characterized in part 3 of Proposition 5, we select the equilibrium which maximizes the aggregate payoff of all miners. We pick this specific equilibrium because it is the most likely to be coordinated upon by miners. Big mining pools can coordinate and move a sufficiently large mass of mining power from one venue to the other. Hence, they can always achieve the equilibrium that maximizes their aggregate payoff. We also remark that our results are robust to this equilibrium selection, and selecting any partial adoption equilibrium in part 3 of Proposition 5 will yield the same results on welfare.

Transaction Costs on Blockchain

We begin by showing that the introduction of a dark venue does not serve its intended purpose of reducing blockchain congestion and transaction costs. This result implies that the negative externality induced by MEV cannot be mitigated by the introduction of a dark venue, because it is not incentive-compatible for miners to give up the rents they extract from users and arbitrageurs.

Welfare Analysis

We study how the introduction of a dark venue affects the welfare of the agents in the blockchain ecosystem as well as the aggregate welfare.

Proposition 7 (Welfare of miners, user, and arbitrageur). The introduction of the dark venue leads to
1. a strict increase in the aggregate welfare of miners,
2. a strict increase in welfare for miners who adopt the dark venue, and a decrease in welfare for miners who do not adopt the dark venue,
3. an increase in welfare for the frontrunnable user,
4. a reduction in welfare for arbitrageurs.
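Before unpacking the proposition, a stylized back-of-the-envelope computation helps show why miners may prefer partial adoption in the first place. This is not the paper's equilibrium calculation: it simply compares the winning miner's fee revenue when a frontrunning order, bidding roughly the prize c, occupies a block slot, against the full adoption case where the frontrunnable user migrates to the dark venue and arbitrage disappears. All numbers are illustrative assumptions.

```python
B = 3
user_fees = [1.0, 0.8, 0.6, 0.5]   # fees attached by non-frontrunnable users
f0 = 0.7                            # fee of the frontrunnable user
c = 2.5                             # frontrunning prize, competed away to the miner

# Partial adoption: the frontrunnable user stays lit, and an arbitrage order
# bidding roughly c crowds one user transaction out of the block.
rev_partial = c + sum(sorted(user_fees + [f0], reverse=True)[:B - 1])

# Full adoption: arbitrage disappears; the block holds only user transactions.
rev_full = sum(sorted(user_fees + [f0], reverse=True)[:B])

print(f"winning miner's fees, partial adoption: {rev_partial:.2f}")  # 4.30
print(f"winning miner's fees, full adoption:    {rev_full:.2f}")     # 2.50
```

When c is large relative to user fees, the MEV term dominates, which is the force behind part 2 of Proposition 5: miners keep execution risk positive to preserve frontrunning rents.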
Miners' increase in welfare can be decomposed into two parts: an increase in MEV, and an increase in transaction fees due to a higher demand for block space. First, recall from Proposition 2 that the introduction of the dark venue exacerbates competition between arbitrageurs and increases MEV. This, in turn, leads to a reduction in welfare for arbitrageurs, because a higher portion of their arbitrage profits is transferred to miners. Second, recall that the presence of a dark venue may incentivize the frontrunnable user to submit her transaction to the blockchain and thus increase the demand for block space. This, in turn, increases miners' revenue from transaction fees. Even though the aggregate welfare of miners increases after the introduction of the dark venue, the welfare of miners who do not join the dark venue weakly decreases. This is because some transactions migrate from the lit to the dark venue, and miners who stay in the lit venue can no longer observe them. The welfare of the frontrunnable user increases because she now has access to a privacy-preserving transaction submission venue. It is worth observing that her welfare does not necessarily increase strictly. Unless the frontrunning problem is very severe, miners adopt the dark venue only partially and create execution risk. As a result, the frontrunnable user may find it preferable to stay in the lit venue and bear frontrunning risk. We next analyze aggregate welfare, defined as the sum of expected payoffs of miners, users, and arbitrageurs. Proposition 8 (Aggregate Welfare). The following statements hold: 1. The aggregate welfare is maximized when the dark venue is fully adopted by miners. 2. The introduction of the dark venue weakly raises aggregate welfare. 3. If c > c_1, then the unique full adoption equilibrium attains the maximum aggregate welfare; if c ≤ c_1, then any partial adoption equilibrium yields an aggregate welfare strictly below the maximum. The above result can be intuitively understood as follows. The profit of arbitrageurs and the fee revenue of miners are merely transfers of wealth from users. Although MEV is extracted from arbitrageurs by miners, it is only a fraction of the profits that arbitrageurs extract from users. As a result, aggregate welfare is maximized if the sum of the valuations of the users' transactions added to the block is maximized. In particular, maximum welfare can only be achieved if frontrunning arbitrage does not take up any block space. If the dark venue is fully adopted by miners, execution risk is small, and the frontrunnable user submits through the dark venue. Because no arbitrageur demands block space, the block only includes the B users' transactions with the highest valuations, and the aggregate welfare is then maximized. We have shown that the introduction of the dark venue weakly improves aggregate welfare. Moreover, the private and social optimum coincide if the frontrunning problem is severe. However, if the frontrunning problem is not too severe, the ecosystem coordinates on a partial adoption equilibrium where frontrunning arbitrage is still present, and the block space allocation is not efficient. The welfare-maximizing outcome is then unattainable because miners have a positional advantage and can determine other participants' execution risk. For miners, the dark venue merely serves to extract larger rents.
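To make the block-space accounting concrete, the following is a minimal numerical sketch of the welfare comparison behind Proposition 8, not the paper's formal argument. The valuations, block size B, and detection probability p are illustrative, and the slot accounting (two arbitrage legs displacing the two lowest-valuation user transactions that would otherwise fit) is deliberately simplified.

```python
# Illustrative sketch of the aggregate-welfare comparison (Proposition 8).
# Assumed, not from the paper: 10 user valuations, block size B = 5, and
# detection probability p = 0.5. Fees, MEV, and arbitrage profits are pure
# transfers, so welfare is the sum of the included users' valuations.

valuations = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]  # sorted, highest first
B = 5    # block capacity in transactions
p = 0.5  # probability that a single arbitrageur detects the opportunity

def welfare(n_user_slots):
    # Sum of the valuations of the highest-value user transactions included.
    return sum(valuations[:n_user_slots])

# Full adoption: the frontrunnable user goes dark, no frontrunning occurs,
# and the block carries the B highest-valuation user transactions.
w_full = welfare(B)

# Partial adoption (simplified): with probability 1 - (1 - p)^2 at least one
# arbitrageur detects the opportunity, and the two arbitrage legs take up two
# block slots that would otherwise hold user transactions.
p_frontrun = 1 - (1 - p) ** 2
w_partial = (1 - p_frontrun) * welfare(B) + p_frontrun * welfare(B - 2)

print(f"full adoption:    {w_full}")         # 40
print(f"partial adoption: {w_partial:.2f}")  # 40 - 0.75 * (7 + 6) = 30.25
```

Under any such accounting, the partial adoption outcome is weakly below full adoption, which is the content of parts 1 and 3 of the proposition.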
We propose a self-financing transfer from the frontrunnable user to miners so that the misalignment of incentives is resolved, and the resulting full adoption equilibrium achieves the welfare-maximizing outcome. Proposition 9 (Attaining Full Adoption). There exists θ ≥ 0 such that if the frontrunnable user commits at t = 1 to make a payment θ to the winning miner on the dark venue, then (i) a unique full adoption equilibrium is attained; (ii) the expected payoff of all miners strictly increases; (iii) the expected payoff of the frontrunnable user does not decrease. In the partial adoption equilibrium, the MEV earned by miners is only a fraction of the arbitrageurs' profit extracted from the frontrunnable user. If the frontrunnable user commits to make a payment to the winning miner on the dark venue, and this payment is above miners' expected MEV in the partial adoption equilibrium, then it is incentive compatible for all miners to adopt the dark venue, and the aggregate welfare is maximized. The payoff of the frontrunnable user in the full adoption equilibrium, net of the payment, is strictly higher than her payoff in the partial adoption equilibrium (where no transfer between the user and miners occurs). This proposed transfer could be implemented in a straightforward manner. The relay service can set up a reward pool which allows users to voluntarily deposit ERC-20 tokens into it. Any miner who joins the relay service and successfully mines a new block that includes transactions sent through the relay service can claim the tokens deposited in the reward pool. Empirical Analysis In this section, we provide empirical support for the implications of our model. Section 6.1 lists the model implications we validate. Section 6.2 describes our dataset. Section 6.3 defines the key variables and stylized facts. Section 6.4 describes our empirical results. Testable Implications Our model generates the following implications: 1. The blockchain dark venue will be partially adopted by miners (see Proposition 5). 2. Miners who adopt the dark venue have a higher expected payoff than miners who stay in the lit venue (see part 2 of Proposition 7). 3. Users submit transactions through the dark venue when the frontrunning risk is high (see Proposition 5). 4. Arbitrageurs' transaction costs increase after the introduction of the dark venue; this is implied by part 4 of Proposition 7. Data We use transaction-level data from Uniswap and Sushiswap to identify frontrunning arbitrages. We run our own Ethereum node to get access to the blockchain history. A modified geth client is used to export all transaction receipts where a swap event was triggered by a Uniswap or Sushiswap pool. We acquire the Ethereum block data from Blockchair, available at https://gz.blockchair.com/ethereum/blocks/. The data cover the period from May 1, 2020 to July 31, 2021, and include the gas fee revenues earned by miners. Definition of Variables and Stylized Facts We describe the main variables used in our statistical analysis and provide empirical regularities observed in our data. Descriptive Statistics and Stylized Facts. Table 1 presents summary statistics of the data. Figure 1 plots the estimated adoption rate of the dark venue. For miners who join the dark venue, we plot the proportion of extracted revenue in Figure 2. We can clearly observe that dark venue transactions contribute a nontrivial portion (around 15%) to the revenues of miners who joined the dark venue. The distribution of the cost-to-revenue ratio of arbitrageurs is plotted in Figure 3.
Comparing panels (a)-(c), we observe that the cost-to-revenue ratio for arbitrageurs who submit through the dark venue is right-skewed and higher than that of arbitrageurs who use the lit venue. The average cost-to-revenue ratio increases after the introduction of the dark venue. Figure 4 plots the daily average cost-to-revenue ratio of arbitrageurs in the lit and the dark venue. After the introduction of the dark venue, the cost-to-revenue ratio in the dark venue steadily increases while the cost-to-revenue ratio in the lit venue decreases. Our model offers a plausible explanation for this observed pattern: as the miner adoption rate of the dark venue increases, more arbitrageurs migrate from the lit venue to the dark venue, which increases competition and raises transaction costs. Recall that transactions sent through the dark venue face execution risk. When the block is not mined by miners who join the dark venue, arbitrage transactions sent through the lit venue are executed, and the transaction cost is lower because of the smaller competition. The Adoption of the Dark Venue. The average adoption rate from February to March is 0.02, with a standard deviation of 0.02. The average adoption rate from April to May is 0.348, with a standard deviation of 0.01. The average adoption rate from June to July is 0.597, with a standard deviation of 0.033. These estimates support our model prediction that the dark venue will be at least partially adopted. Revenue of Miners in the Dark Venue and in the Lit Venue. We estimate the following linear model to compare the revenues of miners who adopt the dark venue with the revenues of miners who stay in the lit venue: MinerRevenue_t = ρ_0 + ρ_1 · 1_Dark + γ_t + ε_t, where t indexes the date, MinerRevenue_t is the revenue of a miner per block, γ_t is the day fixed effect, 1_Dark is a dummy variable for Flashbots blocks, and ε_t is an error term. We cluster our standard errors at the day level. The coefficient ρ_1 quantifies the sensitivity of a miner's revenue per block to whether he joins the dark venue. Table 2 indicates that joining the dark venue on average increases miners' revenue by around 0.16 ETH per block. This supports our model prediction that the expected payoff of miners who join the dark venue is higher than the expected payoff of miners who stay in the lit venue. In addition, the coefficient estimates reveal that these relationships are statistically and economically significant. Cost-to-Revenue Ratio of Arbitrageurs. We estimate the following linear models to compare the cost-to-revenue ratio of arbitrageurs before and after the introduction of the dark venue: CostRevRatio = ρ_0 + ρ_2 · 1_After + ε (model a) and CostRevRatio = ρ_0 + ρ_4 · 1_Dark + ε (model b), where CostRevRatio is the cost-to-revenue ratio of arbitrage transactions, 1_After is a dummy variable for the period after the introduction of the dark venue, 1_Dark is a dummy variable for transactions submitted through the dark venue, and ε is an error term. The coefficient ρ_2 quantifies the difference in the cost-to-revenue ratio of arbitrages before and after the introduction of the dark venue. The coefficient ρ_4 quantifies the difference between the cost-to-revenue ratio of arbitrages sent through the lit venue and arbitrages sent through the dark venue, after the introduction of the dark venue.
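The estimation itself is standard; the sketch below shows how the miner-revenue specification could be run with statsmodels, and the cost-to-revenue models can be estimated the same way with the 1_After and 1_Dark dummies as regressors. The file and column names (blocks.csv, miner_revenue, is_dark, block_date) are invented for illustration, since the paper does not publish its estimation code.

```python
# Minimal sketch of the miner-revenue regression with day fixed effects and
# day-clustered standard errors. Column names are hypothetical; the dataframe
# is assumed to hold one row per mined block.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("blocks.csv")  # hypothetical file: one row per block

# MinerRevenue_t = rho_0 + rho_1 * 1{Dark} + day fixed effects + error
model = smf.ols("miner_revenue ~ is_dark + C(block_date)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["block_date"]})

# rho_1 is the average extra revenue (in ETH per block) of Flashbots blocks;
# the paper reports an estimate of roughly 0.16 ETH per block.
print(result.params["is_dark"], result.bse["is_dark"])
```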
Table 3: Results from regressing the cost-to-revenue ratio of arbitrages on whether the dark venue is introduced and whether the arbitrage order is sent through the dark venue. The regression sample covers the period from May 4, 2020 to Jul 31, 2021. Asterisks denote significance levels (* p < 0.1, ** p < 0.05, *** p < 0.01). Table 3(a) indicates that, after the introduction of the dark venue, the average cost-to-revenue ratio of arbitrageurs increases by around 0.09, an increment that is almost a third of the average cost-to-revenue ratio before the introduction of the dark venue (around 0.3). Table 3(b) indicates that the average cost-to-revenue ratio of arbitrageurs in the dark venue is 0.44 higher than that of arbitrageurs using the lit venue. This suggests that the increase in the cost-to-revenue ratio after the introduction of the dark venue can be mostly attributed to arbitrageurs who use the dark venue. All results are statistically and economically significant. The regression results support our model prediction that the introduction of the dark venue increases arbitrageurs' transaction costs. The Migration of Users We estimate the following linear model to measure the relationship between users' probability of being frontrun and their venue choice: ProportionDark = κ_0 + κ · FrontrunProb + ε, where ProportionDark is the proportion of frontrunnable transactions sent through the dark venue, FrontrunProb is the probability of being frontrun for transactions sent through the lit venue, and ε is an error term. The coefficient κ quantifies the sensitivity of users' venue selection to the frontrunning risk faced by users. Table 4: Results from regressing the proportion of frontrunnable transactions sent through the dark venue on the probability of being frontrun. The regression sample covers the period from Feb 11, 2020 to May 1, 2021. Asterisks denote significance levels (* p < 0.1, ** p < 0.05, *** p < 0.01). Table 4 indicates that an increase in the probability of being frontrun is positively correlated (60% correlation) with a higher proportion of transactions sent through the dark venue. A 1% increase in the probability of being frontrun is associated with a 0.6% increase in the proportion of frontrunnable transactions submitted through the dark venue. The coefficient estimates indicate that these relationships are statistically and economically significant. In summary, Table 4 supports our model prediction that frontrunnable users migrate from the lit venue to the dark venue when they face a higher frontrunning risk. A Technical Results and Proofs Proofs of Propositions 1 and 2. We first outline all six potential equilibrium outcomes for the venue selection of arbitrageurs. We then solve for the equilibrium transaction fee bidding strategies in all six cases. Finally, we solve for the equilibrium venue selection strategies of arbitrageurs. There are six potential equilibrium outcomes for arbitrageurs' venue selection. Case 1: Both arbitrageurs choose the dark venue. We show that there is no pure strategy Nash equilibrium (PNE), and that there exists a unique mixed strategy Nash equilibrium (MNE) where both arbitrageurs bid g ∈ [v_{B−2}, c], with g following the mixed-strategy probability distribution. We prove the non-existence of a PNE in two steps. First, we show that there is no symmetric PNE using a contradiction argument. Second, we show that there is no asymmetric PNE. Assume there is a symmetric PNE where both arbitrageurs bid the same transaction price f_i^D = f_j^D = g, and the expected utility of arbitrageur i is not higher than the expected utility of arbitrageur j. We argue that there exists a unilateral deviation which allows arbitrageur i to improve its expected utility.
If g < c, arbitrageur i can increase its expected utility by changing its strategy to f_i^D = g + ε for a small ε > 0; its expected payoff would then be A_i' = c − (g + ε) > A_i. If g = c, the expected utility of arbitrageur i is 0, and arbitrageur i can then deviate to a strategy that yields a strictly positive expected utility. Next, assume there is an asymmetric PNE; we argue that one of the bidding arbitrageurs can improve its expected utility by deviating from its strategy. If f_i^D = g > v_{B−2}, the expected utility of arbitrageur i is A_i = p · (c − g), and arbitrageur i can profitably deviate; likewise, arbitrageur j can profitably deviate. Hence, there exists no asymmetric PNE. Next, we discuss the MNE. We show that no pure strategy yields a higher expected utility than the mixed strategy for either player. When arbitrageur i plays the mixed strategy, its expected utility is constant over the support. We then show that other bidding strategies cannot outperform the MNE strategy. We first consider the pure strategy where f_i^D ≥ (c − v_{B−2}) · p + v_{B−2}. The bidder will then always win the game; its expected utility does not exceed the mixed-strategy payoff, which indicates that bidders are not better off deviating. Next, we consider a pure strategy inside the support of the mixed strategy; writing out the expected utility of arbitrageur i shows that it equals the mixed-strategy payoff. Therefore, deviating to another strategy f_i^D cannot increase the expected utility of arbitrageur i when the other bidder plays the mixed strategy, and a combination of pure strategies cannot outperform the mixed strategy. In the case where both arbitrageurs choose both venues, they both bid truthfully in the dark venue. This is because the bidding mechanism is a sealed-bid, first-price auction where both arbitrageurs have the same valuation. In the lit venue, they both use the same bidding strategy as in Case 4. We then calculate the expected equilibrium payoff of each arbitrageur in all six cases and construct the corresponding payoff matrix, where γ > 1 and γ v_{B−2} < c. We next solve for the equilibrium venue selection strategy of arbitrageurs, which depends on the adoption rate α. In each of the three parameter regions for α, the corresponding pair of conditions ensures the stated equilibrium: both arbitrageurs choosing the dark venue; both arbitrageurs choosing both venues (using the tie-break rule); or one arbitrageur choosing both venues and the other choosing the dark venue only. Proof of Proposition 3. We only prove the proposition in the case α ∈ (α_2, 1]; the other two cases can be shown using the same procedure. If α ∈ (α_2, 1], by Proposition 1, both arbitrageurs choose the dark venue. The frontrunnable user's expected payoff can be computed in the dark venue and in the lit venue; comparing the payoffs in the two venues yields the threshold at which the frontrunnable user chooses the dark venue. Proof of Proposition 4. Suppose α = 0. If the frontrunnable user submits to the lit venue, her payoff is positive if and only if c < c_1, where c_1 is proportional to 1/(1 − (1 − p)²). If it is positive, then the frontrunnable user will submit her transaction; otherwise, she will not submit to the blockchain. Proof of Proposition 5. If c > c_1, the frontrunnable user will only use the dark venue, because using the lit venue generates a negative payoff. Miners in the lit venue earn r_lit(α) = B · v_{B+1} after mining a block. For any sufficiently small mass δ > 0 of miners who migrate from the lit to the dark venue, they earn r_dark(α + δ) = B · v_B > B · v_{B+1}. In equilibrium, all miners adopt the dark venue.
If c ≤ c_1, we can show that λ_1 is an equilibrium, and it is easy to verify that the other equilibria are λ_2, λ_3, and 1. At λ_1, for a sufficiently small mass δ > 0 of miners migrating to the dark venue, the payoff they would earn in the dark venue does not exceed the payoff they earn in the lit venue; hence, there is no incentive for them to migrate. Similarly, for a sufficiently small mass δ > 0 of miners in the lit venue, migrating to the dark venue does not increase their payoff, so there is no incentive for them to migrate: if α > λ_1, the frontrunnable user migrates to the dark venue, and there is no longer a frontrunning arbitrage. At λ_1, the frontrunnable user still submits to the lit venue, as shown in Proposition 3, and the arbitrageurs submit to both venues, as shown in Proposition 1. Proof of Proposition 6. If c > c_1, the frontrunnable trader does not submit transactions to the mempool; thus, the minimum fee that guarantees the execution of a transaction is v_B, and the total fee of all transactions is B · v_B. With the introduction of a dark venue, the execution fee increases to v_{B−1}, while the total fee increases to B · v_{B−1}. If c ≤ c_1, the minimum fee that guarantees the execution of a transaction is always v_{B−2}, and the expected total fee of all transactions before the introduction of a dark venue follows accordingly. Proof of Proposition 7. We compare the welfare of miners, frontrunnable users, and arbitrageurs separately before and after the introduction of the dark venue. Before the introduction of the dark venue, with probability 1 − (1 − p)², the transaction of the frontrunnable user will be observed by arbitrageurs; the expected payoffs of the frontrunnable user, of the winning miner, of the arbitrageurs, and of all non-frontrunnable users follow from this. Then, we consider the welfare of the different stakeholders in the Nash equilibria. In the first Nash equilibrium, the frontrunnable user selects the lit venue and the arbitrageurs select both venues; the expected payoffs of the frontrunnable user and of the winning miner who joins the dark venue can be computed analogously. Proof of Proposition 8. If c > c_1, then the frontrunnable trader does not submit transactions before the introduction of the dark venue. Therefore, the aggregate social welfare of the stakeholders is Σ_{i=1}^{B} v_i. Because full adoption is the only equilibrium in this scenario, the aggregate social welfare will increase to Σ_{i=0}^{B−1} v_i after the introduction of the dark venue. If c ≤ c_1, the expected aggregate social welfare of the stakeholders before the introduction of the dark venue is (1 − p)² Σ_{i=0}^{B−2} v_i + (1 − (1 − p)²) Σ_{i=0}^{B−1} v_i. The expected aggregate social welfare after the introduction of the dark venue can be computed analogously in each equilibrium (for instance, when both arbitrageurs select the dark venue). Therefore, the introduction of the dark venue weakly raises aggregate welfare in all Nash equilibria. If the dark venue is fully adopted, then the sum of the valuations of the transactions included in the block is Σ_{i=0}^{B−1} v_i. If the dark venue is only partially adopted, arbitrage transactions might be included in the block when the winning miner joins the dark venue. As arbitrage transactions do not generate social welfare and substitute non-frontrunnable transactions, the largest expected aggregate social welfare in any partial adoption NE (which carries a weight of 1 − α on the no-arbitrage outcome) lies strictly below the maximum. Proof of Proposition 9.
If c > c_1, there exists a unique full adoption equilibrium at which the aggregate welfare is maximized. The required payment is then zero. If c ≤ c_1, then there exists a partial adoption equilibrium. At the partial adoption equilibrium, the adoption rate of the dark venue is α* ∈ {λ_1, λ_2, λ_3}. We only prove the case where α* = λ_1; the other two cases can be shown with the same procedure. B.1 Frontrunning Arbitrages In this section, we explain the methodology used to identify frontrunning arbitrages. We identify a two-legged trade (T_A1, T_A2) as a frontrunning arbitrage, and a transaction T_V as the corresponding victim transaction, if the following conditions are met: 1. T_A1 and T_A2 are included in the same block, and T_A1 is executed before T_A2. T_A1 and T_A2 have different transaction hashes. 2. T_A1 and T_A2 swap assets in the same liquidity pool, but in opposite directions. The input amount for the swap in T_A2 is equal to the output amount of the swap in T_A1. In this way, the transaction T_A2 closes the position built up in the first leg T_A1. 3. T_V is executed between T_A1 and T_A2. T_V swaps assets in the same liquidity pool as T_A1 and T_A2. T_V swaps assets in the same direction as T_A1. 4. Every transaction T_A2 is mapped to exactly one transaction T_A1. There exist frontrunning arbitrages where T_A1 and T_A2 are placed in different blocks. However, arbitrageurs normally prefer to include T_A1 and T_A2 in one block to minimize inventory risk. Nonetheless, the above procedure allows us to find a lower bound for the number of frontrunning arbitrages. The revenue of a frontrunning arbitrage is the difference between the output of T_A2 and the input of T_A1, and the profit is the revenue minus the gas fees paid for these two transactions. B.2 Frontrunnable Transactions In this section, we describe the methodology used to identify transactions vulnerable to frontrunning arbitrages. Observe that not all frontrunnable transactions are exploited by arbitrageurs. There were 17,644,672 transactions in the given time frame. The input token of 9,003,759 of these transactions is ETH. We only focus on those transactions, because most arbitrageurs are bots and only conduct arbitrages where ETH serves as the input token. For each transaction, we calculate the optimal revenue that an arbitrageur can attain by frontrunning it. If the revenue is positive, then we identify the transaction as frontrunnable. A swap transaction often has a slippage tolerance threshold m which specifies the minimum amount of output token to be received in the transaction. If the price impact of the frontrunning transaction T_A1 is too large, the slippage tolerance threshold of the victim transaction T_V may be triggered and T_V will automatically fail. In this case, the arbitrage will not be profitable. This is why we account for the slippage tolerance threshold of each swap transaction in our calculation. Formally, let v be the amount of input token specified in the victim transaction T_V, and m the minimum amount of output token to be received. Let x be the amount of input token swapped in the frontrunning transaction T_A1. Let r_1 and r_2 represent the liquidity reserves of the input token and the output token in the pool.
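The computation set up above can be sketched directly under the assumption of a Uniswap-v2-style constant-product pool with the standard 0.3% fee (both Uniswap and Sushiswap use this rule); a simple grid search over x stands in for the closed-form optimum, and all numbers are illustrative.

```python
# Sketch of the optimal frontrunning-revenue computation of Appendix B.2,
# assuming a constant-product pool (x * y = k) with a 0.3% fee on the input.
# v: victim's input amount; m: victim's minimum output (slippage tolerance);
# r1, r2: pool reserves of the input and output token.

def swap_out(amount_in, r_in, r_out, fee=0.003):
    """Output of a constant-product swap, with the fee charged on the input."""
    amount_in_after_fee = amount_in * (1 - fee)
    return amount_in_after_fee * r_out / (r_in + amount_in_after_fee)

def frontrun_revenue(x, v, m, r1, r2):
    # Leg T_A1: the arbitrageur swaps x of the input token first.
    out_a1 = swap_out(x, r1, r2)
    r1, r2 = r1 + x, r2 - out_a1
    # Victim transaction T_V: fails if its output drops below m.
    out_v = swap_out(v, r1, r2)
    if out_v < m:
        return None  # slippage threshold triggered; arbitrage not profitable
    r1, r2 = r1 + v, r2 - out_v
    # Leg T_A2: close the position by swapping back the output of T_A1.
    out_a2 = swap_out(out_a1, r2, r1)
    return out_a2 - x  # revenue before gas fees

# Illustrative numbers only:
v, m, r1, r2 = 10.0, 9.0, 1000.0, 1000.0
best_rev, best_x = 0.0, None
for i in range(1, 500):  # grid search over the frontrunner's input amount x
    x = i / 10.0
    rev = frontrun_revenue(x, v, m, r1, r2)
    if rev is not None and rev > best_rev:
        best_rev, best_x = rev, x
print(f"optimal revenue ~ {best_rev:.4f} at x ~ {best_x}")
```

A transaction is then classified as frontrunnable whenever this optimal revenue is positive.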
2022-01-23T16:54:01.699Z
2022-02-11T00:00:00.000
{ "year": 2022, "sha1": "178a865ecfddf550c2130653bebf7d7472c73b82", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "4eca37c60e6f7e3396353018983780b98ec23bf9", "s2fieldsofstudy": [ "Economics", "Computer Science", "Business" ], "extfieldsofstudy": [ "Economics" ] }
214259261
pes2o/s2orc
v3-fos-license
Modular Multifunctional Composite Structure for CubeSat Applications: Preliminary Design and Structural Analysis: CubeSats usually adopt aluminum alloys for primary structures, and a number of studies exist on Carbon Fiber Reinforced Plastic (CFRP) primary structures. The internal volume of a spacecraft is usually occupied by battery arrays, reducing the volume available to the payload. In this paper, a CFRP structural/battery array configuration has been designed in order to integrate the electrical power system with the spacecraft bus primary structure. The configuration has been designed according to the modular design philosophy introduced in the AraMiS project. The structure fits on an external face of a 1U CubeSat. Its external side houses two solar cells and the opposite side houses power system circuitry. An innovative cellular structure concept has been adopted, and a set of commercial LiPo batteries has been embedded between two CFRP panels and spaced out with CFRP ribs. Compatibility with launch mechanical loads and vibrations has been shown with a finite element analysis. The results suggest that, even with a low degree of structural integration applied to a composite structural battery, more volume and mass can be made available for the payload with respect to traditional, functionally separated structures employing aluminum alloy. The low degree of integration is introduced to allow the use of relatively cheap, commercial-off-the-shelf components. Introduction CubeSats, in recent years, have experienced growing interest, and will be employed in support of larger spacecraft or as the space segment itself. The original CubeSat standard was conceived in 1999 as a spacecraft with a cubic shape and 100 mm sides (standardized today as 1 unit, 1U) and a mass up to 1 kg [1] (today up to 1.33 kg [2]). The original objectives of the CubeSat project were mostly educational, although in recent years an increasing number of CubeSat missions has gone beyond the original objectives and is aiming at cutting-edge technological and scientific objectives [3]. Currently, the majority of CubeSats operates in Low Earth Orbit (LEO) and the prevailing form factor is 3U [4] (the combination of three 1U units, with a maximum mass of 4 kg [2]). Current missions rely on aluminum alloy primary structures, and composite materials are sometimes employed in secondary structures. Multifunctional integration is usually limited to a few components, e.g., solar cells mounted on external panels, and is not employed systematically [4]. On the other hand, space mission objectives and architectures are becoming increasingly complex, leading to sophisticated but overcrowded CubeSats [5]. Another issue has to do with the structural mass ratio, i.e., the ratio of structural mass to spacecraft total mass, which may not always be satisfactory. For example, the excellent STRaND-1 CubeSat has a structural mass ratio of 30%, and the authors did not consider this figure satisfactory [6].
Although some nanosatellite standards have employed structural integration systematically [7], with composite primary structures the integration can be taken further, and a higher volume and mass can be allocated to the payload. The capacities of composite materials in terms of tailoring and embedding are greatly attractive and are promising alternatives if the designer's objective is to obtain highly functionalized, lightweight structures. Composite primary structures for CubeSats, if properly designed, can produce lower stresses and displacements, and higher fundamental frequencies, with respect to traditional and widely employed aluminum alloy structures [8]. In this paper, the design of a Carbon Fiber Reinforced Plastic (CFRP) integrated primary structure component is addressed. It is part of a 1U satellite bus and it integrates an embedded Electrical Power System (EPS), to demonstrate the possibilities of functionalized, advanced composite primary structures for pico- and nanosatellites and in particular for CubeSats. The integrated EPS components are solar cells, batteries, and the necessary electronic circuitry. The integration of the EPS with the CubeSat primary structure allows the overall mass and volume to be reduced by eliminating non-energy-storing EPS components, e.g., the case and the electrodes; in addition, the CFRP laminates provide containment and protection for the batteries. The CFRP integrated structure is conceived to fit in a 1U commercial, aluminum alloy frame and occupies one external face of the cube. The structure under study shares the design philosophy of the AraMiS project [9]. Its goal is to design, produce, and test a set of "smart tiles" (Table 1) that are mounted on the six faces of the above-mentioned commercial frame (Figure 1a). The main requirement for the tiles is their modularity. As a result, a tile accommodates mainly the components of one bus subsystem (Figure 1b). The modularity allows design, assembly, and testing time and costs to be reduced; a further reduction of costs is obtained with the use of Commercial-Off-the-Shelf (COTS) components. Clearly, an adequate level of redundancy must be considered. Existing smart tiles include the telecommunication tile, with Tracking, Telemetry, and Command (TT&C) subsystem components; the reaction wheel tile, with Attitude and Orbit Control System (AOCS) components; and the power management tile. The aim of the present design is to improve the previous 1B8 power management tile. In that case, the primary structure function was given by Printed Circuit Boards (PCBs), which are not made with structural materials. Moreover, no energy storing function was implemented. Besides space applications, energy storing composite structural components are studied in the automotive [10], aeronautical [11], and marine [12] sectors. In those sectors, vehicle mass saving is more useful than internal volume saving, with respect to the CubeSat case [13]. It has been shown that energy storing composites can significantly increase the range of aircraft, as shown in [11] for piloted electric aircraft. Energy storing composites can also improve the performance of uninhabited, long-endurance aircraft such as High-Altitude Long-Endurance (HALE) vehicles exploiting solar energy or fuel cells [14,15].
Materials and Methods The functionalization of the smart tiles must take into account a strong constraint imposed by the CubeSat deployment procedure. CubeSats are usually accommodated inside the launch vehicle in standard deployers, for example the Poly-Picosatellite Orbital Deployer (P-POD) for 1U-3U form factors [2]. The P-POD is an anodized aluminum prismatic box. A spring in the deployer pushes the CubeSats into space, and they are guided by their rails (the vertical rods shown in Figure 1a), in contact with rails on the deployer. For this reason, no component is allowed to protrude more than 6.5 mm normal to the plane of the rails (CubeSat mechanical requirements 3.2.3 and 3.2.3.1 [2]). The functionalization of the primary structure began with the design of the reaction wheel tile prototype. Reaction wheels are attitude actuators that usually occupy the CubeSat's internal volume. The project consisted of merging the reaction wheel with its mechanisms and electronic components, the solar cells with the related circuitry, and the primary structure. The structure was still made with PCBs, made of epoxy resin and glass fibers. This material is usually referred to as FR-4, where FR stands for Flame Retardant. The next step is represented by the design of an energy storing smart tile prototype employing composite materials. This will free internal volume for the payload and will allow spacecraft bus mass saving. Power Storage Composite Structures There is a wide variety of integrated energy storage systems that can be conveniently described in terms of their degree of integration [11]. Traditional energy storing subsystems with no structural integration have a zero degree of integration. An example of a low degree of integration assembly is the embedded battery, where existing energy storing components are included in structural elements. Although the assembly has both functions, there is functional separation at the component level. At the other extreme, examples of high degree of integration assemblies are genuine structural batteries, whose components have the structural and the electrical function at the same time. One example of a structural battery has been recently developed [10]. It consists of a unidirectional carbon fiber lamina where, in addition to the usual structural functions of the constituents, the fibers act as battery electrodes and the matrix acts as the battery electrolyte. As a result, the lamina can store electrical energy. Embedded battery systems come in a variety of configurations; the most usual ones are the laminate structure [16], the sandwich structure [12,17], and the stiffener structure [12]. The laminate structure consists of a classical composite material laminate with a set of batteries accommodated in the inner layers. The batteries are mechanically connected with the laminae thanks to a resin-rich region that usually has an irregular shape. The sandwich structure houses the batteries in cavities of the core material, which usually has inferior mechanical properties with respect to the skin material. Finally, batteries can be placed in the inner regions of stiffeners. Although high degree of integration batteries are promising and allow for greater mass and volume savings, a low degree of integration has been chosen to comply with the AraMiS project design philosophy. Indeed, COTS components are relatively cheap and can easily be assembled and tested.
Power Management Tile Architecture For the present design, an innovative configuration was chosen. It is a cellular configuration previously conceived for sailplane wing boxes, in particular for the upper and lower skins [18] (Figure 2b). An evolution of this concept can also be developed for other applications, such as beam-like structures (Figure 2a) or innovative multifunctional structures (Figure 3). A CAD section and top view of the smart tile are shown in Figure 3a,b, respectively. The experimental model top view is shown in Figure 3c. The cellular configuration has some advantages with respect to the classical configurations in [12] and [16]. It is more rigid than the solid laminate structure [16], thanks to the distance of the CFRP panels from the neutral axis (a back-of-the-envelope illustration of this argument is given in the sketch at the end of this subsection). Moreover, the laminate structure required holes to house the batteries, which are not necessary in the cellular structure. The cellular configuration has a greater shear and bending stiffness than the sandwich structure [12], thanks to the stiffeners. In addition, the CFRP cells formed by the upper and lower panels and the stiffeners provide protection and containment for the batteries. Tile Mechanical and Electrical Configuration The upper and lower structural panels are composite laminates and hold the PCBs for the solar cells (upper panel in Figure 3a) and the electrical subsystem circuitry (lower panel in Figure 3a). The batteries are placed between the two panels, with four CFRP stiffeners among them. Some electronic details are shown in Figure 4: the magnetorquer (a) is buried in the solar cells PCB. The triple-junction GaAs solar cells include a protection diode (b) and transmit electrical power through electrical conduits (c). Electrical power is carried to the inner PCB via through-holes (c, d), where it is collected in the battery array. Mechanical connection between the batteries and the CFRP laminates is obtained with a typical silicone resin, to provide thermal protection and vibration damping for the batteries [19]. The resin adheres to all six sides of the batteries. Electrical connection between the batteries and the inner PCB (lower PCB in Figure 3a) is obtained with holes in the CFRP lower panel to allow the passage of the battery connectors (shown in Figure 3a as thin metal strips). The mechanical connection of the tile with the commercial aluminum structure is obtained with four screws passing either through the external cylindrical holes or through the internal cylindrical holes (Figure 3b,c). The screws are typical aerospace inserts with a diameter of 3 millimeters. The use of external or internal holes is dictated by the choice of the cubic aluminum structure face. With respect to the classical configurations [12,16], there is additional area on the PCBs to contain two solar cells, the set of electronic components, and their electrical connections to allow the proper operation of the electrical power subsystem. Moreover, it has been proven that lithium batteries, once mounted on CubeSat external panels, offer additional radiation shielding and thermal regulation [20]. However, it has also been proven that radiation has negative effects on battery performance [21]. Protection against space debris has yet to be studied. However, there is a reasonable level of redundancy due to the adoption of six batteries. Although a degradation of the electrical system performance inevitably happens in case one battery malfunctions, the remaining five can be designed to be electrically independent.
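To illustrate the stiffness argument made above for the cellular configuration, the following back-of-the-envelope sketch applies the parallel-axis theorem to two CFRP skins separated by the battery cells and compares them with a solid laminate using the same amount of material. The skin thickness is assumed for illustration only and is not taken from the tile drawings.

```python
# Back-of-the-envelope comparison of bending stiffness per unit width:
# solid laminate vs. two identical CFRP skins separated by the battery
# cells (parallel-axis theorem, stiffener webs neglected). Dimensions are
# assumed for illustration only.

t_skin = 0.5   # mm, thickness of each CFRP panel (assumed)
h_cell = 4.5   # mm, cell height set by the battery thickness

# Solid laminate with the same amount of CFRP (total thickness 2 * t_skin):
I_solid = (2 * t_skin) ** 3 / 12.0                    # mm^4 per mm of width

# Cellular section: two skins offset by d/2 from the neutral axis, where
# d is the distance between the skin midplanes.
d = h_cell + t_skin
I_cellular = 2 * (t_skin ** 3 / 12.0 + t_skin * (d / 2) ** 2)

print(f"I_solid    = {I_solid:.3f} mm^4/mm")
print(f"I_cellular = {I_cellular:.3f} mm^4/mm")
print(f"stiffness ratio = {I_cellular / I_solid:.0f}x")  # ~75x with these numbers
```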
The production of the CFRP structure can be made with standard industrial procedures and is the object of current work. Aluminum molds can be employed with a hand lay-up process of standard prepregs. The assembly then undergoes an autoclave cure cycle. PCBs and batteries are mounted after the polymerization. Commercial Batteries Selection Once the Lithium Polymer (LiPo) battery was chosen as the battery type, a preliminary investigation was performed among commercial batteries. Since AraMiS is not payload-specific, the objective was to find the maximum energy storage capacity, to accommodate the widest possible range of payloads. The maximum battery thickness was set to 4.5 mm. This value has been set with the help of the CAD model (Figure 3a,b), considering the total thickness of the stack including one battery, the CFRP laminates, and the PCB for the solar cells and its electronic components, once mounted on the commercial structure (Figure 1b). The maximum thickness of the stack is 6.5 mm from the plane of the rails, with an appropriate tolerance, due to the CubeSat mechanical requirements discussed in Section 2. Various prismatic batteries have been considered, and the potential tile energy storage capacity was estimated. The selected battery is a Batimex LP452540 with a typical capacity of 1776 mWh (details are given in Table 2). The chosen array of six commercial batteries allows approximately 11 Wh to be stored in one power management tile. The total capacity is in line with actual commercial 1U-2U EPS systems [22]. It is known that batteries with similar dimensions and superior capacities exist [20,23]. However, these batteries are not COTS and have a considerable cost; thus, they do not comply with the AraMiS project requirements. Since their shape is prismatic, it would be reasonably simple to integrate them in the present design if they become suitable for the project in the future. One of the objectives of the present work is to demonstrate the feasibility of CubeSat subsystems with COTS components. To increase the EPS overall capacity, the smart tiles' modular approach can be exploited. With two or three power management tiles mounted on the same 1U CubeSat it is possible to reach 22 or 33 Wh, respectively. Moreover, in multiples of 1U, the area of some smart tiles is increased and thus more commercial batteries can be stored in the same power management tile. For example, a 2U tile has twice the area of a 1U tile, and this allows up to 22 Wh to be stored with the present design. Moreover, the above-mentioned optimization can be repeated to obtain a higher energy capacity per unit of tile surface. Structural Analysis A set of Finite Element (FE) analyses has been performed to assess the compatibility of the design with the launch mechanical environment. The Vega launch vehicle has been chosen for the analysis. The applicable launch condition [24] imposes a Limit Load (LL) of 7 g. The adopted Safety Factor (SF) value is 1.8. It is known that other authors adopted lower safety factors, e.g., Ampatzoglou et al. chose 1.25 [8]. However, with the intention of applying this technology to aeronautics as well, a safety factor of 1.5 with an additional special factor of 1.2 has been chosen, to take into account the variability of material properties, as suggested for example by CS-VLA 619 and AMC VLA 619. As a result, the Ultimate Load (UL) becomes: UL = SF × LL = 1.8 × 7 g = 12.6 g ≈ 13 g. (1)
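As a quick numerical cross-check of the figures used in this section (the stored energy of the six-cell array and the ultimate load factor), the following sketch reproduces the arithmetic; all inputs are the values quoted in the text.

```python
# Cross-check of the tile energy capacity and the Vega ultimate load factor.

cell_capacity_mWh = 1776      # Batimex LP452540 typical capacity
n_cells = 6
total_Wh = n_cells * cell_capacity_mWh / 1000.0
print(f"tile capacity = {total_Wh:.1f} Wh")   # ~10.7 Wh, quoted as ~11 Wh

LL = 7.0        # g, limit load from the Vega User's Manual
SF = 1.5 * 1.2  # safety factor with the additional special factor
UL = SF * LL
print(f"ultimate load = {UL:.1f} g")          # 12.6 g, rounded up to 13 g
```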
The 13 g acceleration can be applied in any direction, due to the arbitrary position of the tile on the aluminum frame and to the arbitrary orientation of the CubeSat inside the deployer. The worst-case condition has been found to occur with acceleration orthogonal to the tile plane and will be described in the results section. The power management tile is mounted on the aluminum alloy frame with screws passing either through the external holes or through the internal holes (Figure 3b). Both conditions have been considered in the analysis. The degrees of freedom of nodes belonging to fastened holes have been preliminarily imposed to be zero. The FE model (Figure 5a) considers the contact of the above-mentioned components, i.e., batteries, resin, CFRP laminates, and PCBs. Mechanical properties of the batteries cannot easily be estimated; thus, values similar to the ones found in the open literature [16] have been adopted. The mechanical properties of the materials can be found in Table 3. The CFRP panels are carbon/epoxy with a carbon fiber fabric; the lamination is [±45]_s. The CFRP panels and stiffeners have been discretized with Quad4 elements with sizes ranging from 0.25 to 0.80 mm. The resin and the batteries have been discretized with Tet4 and Hex8 elements, respectively, with sizes ranging from 0.80 to 1.68 mm. It has been observed that the mechanical layout of commercial EPSs [22,23] is rather common among commercial solutions, and thus a similar configuration has been analyzed. As shown in Figure 5b, the commercial architecture is simpler than the proposed one and consists just of a PCB and six batteries of the same type. The thickness of the PCB has been increased with respect to the proposed solution, according to commercial EPS architectures. The cellular and commercial structures have been analyzed with the MSC Nastran™ software [25] and the results have been compared. A static analysis has been conducted to evaluate strains, stresses, and maximum displacements. In addition, the natural frequencies and modes have been evaluated. Simple, preliminary thermal analysis results have shown that the temperature of the batteries is below the maximum operating temperature of 60 °C; a detailed thermal analysis and experimental tests are the object of future work. At least two conditions have to be considered. The first is during the launch, when the launch vehicle ejects the payload fairings. The second is along the nominal Low Earth Orbit (LEO) orbit, and must consider as inputs the irradiation from the Sun, the terrestrial albedo, and internal electrical dissipations.
Mass Breakdown The mass breakdown is shown in Table 4. The mass of the filling, being almost equal to the mass of the batteries, needs to be improved and is the object of future studies. The ratio between the total tile mass and the CubeSat maximum mass is 10%. Although it is a partial result and does not involve the whole CubeSat with its payload, it is lower than the structural mass ratio of 30% identified as unsatisfactory in the STRaND-1 project [6]. There is still margin to place the remaining subsystem components. A comparison of total and partial masses can be done with a commercial embedded EPS [22] and a commercial aluminum alloy structure [26] (Table 5). For the latter structure, one face of the 1U cube has been considered. The PCB mass of the proposed design is lower than that of the commercial embedded EPS (7.6 g and 25.6 g, respectively). Clearly, in the proposed design there is no intended structural function of the PCBs, and thus their thickness can be reduced, while for the commercial EPS the PCB panel is the only component able to withstand loads, and thus it has a greater thickness. The structural mass of the present design (15 g) is given by the mass of the CFRP elements. It is clearly lower than the structural mass of the commercial aluminum alloy panel (28 g), due to the use of composite materials. However, the overall mass of the proposed design is greater than the mass of the commercial EPS. This is mainly due to the resin employed for the mechanical and thermal insulation of the batteries, whose mass can be reduced with lighter materials. Nevertheless, in the author's opinion, the present design has advantages with respect to its commercial counterpart. The CFRP cells provide containment and protect the rest of the spacecraft from the batteries, and future experimental qualification tests are aimed at evaluating the protection of the batteries from vibrations, excessive temperatures, and thermal shocks. Furthermore, with two lighter PCBs, more area can be devoted to EPS electronic components and circuits. In addition, no internal volume is subtracted from the payload. Finally, the proposed design includes additional components with respect to the commercial design [22], i.e., solar cells, magnetorquers, and additional electronic circuitry. Finite Element Analysis The FE static analysis (Figure 6) has revealed that the structure of the smart tile can easily survive the launch, with a maximum displacement of 4.8 µm in the worst case, i.e., with the external holes blocked. The results are given in Table 6. As expected, the maximum displacement with the external holes blocked is greater than the maximum displacement with the internal holes blocked (4.8 and 4.6 µm, respectively). All the stresses and strains are far below the material allowable limits, with both external and internal holes blocked. The maximum stresses on the batteries and PCBs are in the order of tens and hundreds of kilopascals, respectively. These values are far below the materials' maximum acceptable stresses. The modal analysis (Figure 7) has provided fundamental frequencies in the order of 900 Hz for both constraint cases. The Vega User's Manual [24] prescribes that the lateral-axis fundamental frequency, f_lat, must be equal to or greater than 15 Hz: f_lat ≥ 15 Hz, (2) and that the longitudinal-axis fundamental frequency, f_long, must be in the range: 20 Hz < f_long < 45 Hz or f_long > 60 Hz. (3)
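The two requirements can be checked directly against the fundamental frequencies of about 900 Hz found in the modal analysis; as noted below, the formal requirement applies to the assembled CubeSat, so this is only an order-of-magnitude verification.

```python
# Check of the Vega frequency requirements (Equations (2) and (3)) against
# the ~900 Hz tile-level fundamental frequency from the modal analysis.

f_fund = 900.0  # Hz, order of magnitude from the modal analysis

lateral_ok = f_fund >= 15.0
longitudinal_ok = (20.0 < f_fund < 45.0) or (f_fund > 60.0)

print(f"lateral requirement (f >= 15 Hz):              {lateral_ok}")
print(f"longitudinal requirement (20-45 Hz or >60 Hz): {longitudinal_ok}")
```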
Although this is a partial result about the power management tile's local resonance, the above requirements are satisfied. The requirements of Equations (2) and (3) will have to be verified on the whole CubeSat. At the moment, the FEM model does not consider the details of the electrical connections between the PCBs and the batteries. The vibration resistance of the electrical connections will be proven during experimental qualification tests. Future work will establish the amount of damage accumulated in the batteries and in their electrical connections due to the launch mechanical environment. Comparison with Commercial Architecture The FE static and modal analysis has been repeated with the mechanical architecture of the commercial EPS (Figure 5b). Results are summarized and compared with the proposed design in Table 7. The advantages of the proposed design are evident, since its maximum displacement is smaller and its fundamental frequency higher than those of the commercial EPS. Comparison with Similar Studies on Embedded Battery Structures Besides the structural results comparison with the commercial EPS for CubeSats (Section 3.3), wider comparisons can be made with similar systems for space [20,22,23], automotive [17], and marine [12] environments. Regarding the CubeSat EPS systems [20,22,23], it is evident that the energy capacity of some models is higher than the capacity of the EPS under discussion (11 Wh for the present case, against 22 Wh for [20,23]). However, in addition to the batteries, the proposed design includes solar cells, mounted on the outer PCB, and additional electronic circuitry, mounted on the inner PCB, and some versions of the tile include magnetic torquers (Table 1) within the PCBs, in order to provide, together with the other tiles, a spacecraft bus as complete as possible and not only the power storage function. These components were not included in the other battery systems [20,22,23]. Moreover, the present solution is designed to be mounted on the aluminum frame of the CubeSat, to leave internal volume available for the payload, while the other battery systems [20,22,23] are accommodated inside the CubeSat. The solutions presented in [12,17] have a different structural configuration, i.e., solid beams [12] and sandwich beams [12,17], so a comparison of the structural performances would not be useful. In any case, for both systems the functional integration of the electrical energy storage did not degrade the structural performance below acceptable limits. On the contrary, the bending performances were only marginally affected by the inclusion of the batteries. The present study confirms that, if the system is properly designed, acceptable bending properties can be obtained (Table 6). Although it has been shown that resonance frequencies are affected by the batteries [16], their introduction did not lower the fundamental frequency below acceptable limits (Section 3.2), thanks to the adoption of CFRP. Mockup The geometric compliance of the proposed design has been tested with the production of a mockup of the tile (Figure 8). The CFRP laminates have been 3D printed and the PCB boards have been produced. The dimensional congruence of the CFRP structure has been verified by inserting the chosen batteries in their slots and by stacking the CFRP structure and the PCBs in the correct position. A final check was made to assure the compliance of the assembly with the commercial aluminum structure.
Figure 1. AraMiS CubeSat: (a) Prototype and (b) General architecture. Integrated bus functions include: power management (in the previous power management tile 1B8 and current work), Tracking, Telemetry and Command (TT&C, telecommunication tile 1B9), and attitude control by reaction wheels (reaction wheel tile 1B213A). Some functions like power generation are distributed over several tiles.
Figure 2. Aeronautical cellular structures: (a) Cellular beam and (b) Rectangular cellular panel. Panel thicknesses: top and bottom skin 2 mm, cell width 33 mm, cell height 12 mm, and side walls 1 mm.
Figure 3. Power management tile: (a) CAD model section (the cross-section is colored in white); (b) CAD model top view; and (c) Experimental model top view.
Figure 4. Power management tile electronic details: magnetorquer (a), solar cells connectors with protection diode (b), solar cells electrical conduit (c), and electrical connection with battery array (d).
Figure 6. Static deformations: (a) External holes blocked and (b) Internal holes blocked. Displacements are given in millimeters.
Figure 7. Fundamental modes: (a) External holes blocked and (b) Internal holes blocked. Displacements are given in millimeters.
Figure 8. Mockup of the proposed design.
Table 3. Materials mechanical properties for the Finite Element (FE) analysis.
Table 4. Mass breakdown for the proposed design.
Table 6. FE static and modal analysis results.
Table 7. Comparison between proposed and commercial architectures.
2020-02-27T09:33:53.773Z
2020-02-24T00:00:00.000
{ "year": 2020, "sha1": "ba0b006079ca70d816a3dbfcd1362b8170be72f1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2226-4310/7/2/17/pdf?version=1583060763", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "75f878d748439ec2fde3b9254718ded3bbfbb752", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
230562498
pes2o/s2orc
v3-fos-license
A Comparative Study of the Behavioral Profile of the Behavioral Variant of Frontotemporal Dementia and Parkinson's Disease Dementia Background Executive dysfunction is the common thread between pure cortical dementia like the behavioral variant of frontotemporal dementia (bvFTD) and subcortical dementia like Parkinson's disease dementia (PDD). Although there are clinical and cognitive features to differentiate cortical and subcortical dementia, the behavioral symptoms differentiating these 2 conditions are still not well known. Objective To evaluate the behavioral profile of bvFTD and PDD and compare them to find out which behavioral symptoms can differentiate between the two. Methods Twenty consecutive patients with bvFTD (>1 year after diagnosis) and 20 PDD patients were recruited according to standard diagnostic criteria. Behavioral symptoms were collected from a reliable caregiver by means of a set of questionnaires and then compared between the 2 groups. Results bvFTD patients had more severe disease and more behavioral symptoms than PDD patients. bvFTD patients differed from PDD patients in their significantly greater: loss of basic emotion (p < 0.001, odds ratio [OR] 44.33), loss of awareness of pain (p < 0.001, OR 44.33), disinhibition (p < 0.001, OR 35.29), utilization phenomenon (p = 0.008, OR 22.78), loss of taste discrimination (p < 0.001, OR 17), neglect of hygiene (p = 0.001, OR 13.22), loss of embarrassment (p = 0.003, OR 10.52), wandering (p = 0.004, OR 9.33), pacing (p = 0.014, OR 9), selfishness (p = 0.014, OR 9), increased smoking (p = 0.014, OR 9), increased alcohol consumption (p = 0.031, OR 7.36), social avoidance (p = 0.012, OR 6.93), mutism (p = 0.041, OR 5.67), and failure to recognize objects (p = 0.027, OR 4.33). The bvFTD patients were also significantly less suspicious (p = 0.001, OR 0.0295), less inclined to have a false belief that people were in their home (p = 0.014, OR 0.11), and had fewer visual illusions/hallucinations (p = 0.004, OR 0.107) than PDD patients. Conclusion Behavioral symptoms are helpful to distinguish bvFTD from PDD, and thus also cortical dementia with frontal-lobe dysfunction from subcortical dementia. Introduction The frontal lobe plays a crucial role in human behavior. Some of the most striking neurobehavioral syndromes are coupled with frontal-lobe dysfunction. Cummings [1] described 3 clinical syndromes that involve frontal-lobe circuitry: (a) apathy and akinesia resulting from damage to the mesial frontal or anterior cingulate pathway, (b) disinhibition, emotional dysregulation, and distractibility resulting from damage to the orbitofrontal cortex, and (c) deficits in executive function and motor programming due to damage to the dorsolateral prefrontal cortex. Among degenerative neurological disorders, the behavioral variant of frontotemporal dementia (bvFTD) presents with behavioral alterations very early in the disease course, even before cognitive dysfunction becomes evident. As the mesial frontal lobe is connected to the hippocampus and the orbital frontal lobe is connected to the inferior and superior temporal lobes, cortical dementias like Alzheimer's disease (AD) also demonstrate marked behavioral changes [2]. The prefrontal cortex also has extensive connections with subcortical structures such as the basal ganglia and thalamus, and disturbance of these frontosubcortical circuits also results in abnormal behaviors [1,3]. Thus, behavioral abnormalities occur both in cortical dementia (bvFTD, AD, etc.)
and subcortical dementia (Huntington's disease, Parkinson's disease dementia [PDD], and vascular dementia [VaD]) [4]. Common behavioral abnormalities observed in PDD include depression, anxiety, and apathy. PDD patients may also have excessive daytime sleepiness, visual hallucinations, delusions, paranoia, and confusion [5]. The characteristics of bvFTD are early and profound alterations in personality, social conduct, and behavior. However, these symptoms are not specific to this disease. There is a significant overlap between the symptoms of bvFTD and psychiatric diseases such as bipolar disorder and schizophrenia [6]. Degeneration of the dorsolateral prefrontal cortex also leads to disturbed executive function in patients with bvFTD. The cognitive disturbance of subcortical dementia manifests as impairment of executive function, impaired attention, reduced speed of information processing, and a retrieval defect in memory tasks that improves with cueing. Patients with subcortical dementia often exhibit frontal behavioral features like those of bvFTD, which can create diagnostic difficulties [7]. Clinicians rely upon the cognitive profile and the presence of additional motor features and sphincter disturbance to distinguish subcortical from cortical dementia. To clinically differentiate between various dementias, efforts have been made to investigate the behavioral and psychiatric symptoms of these patients [8][9][10]. However, only a few studies have attempted to explore the behavioral profile to differentiate cortical and subcortical dementia [2]. We hypothesized that core frontal-lobe behaviors, such as impaired social cognition, disinhibition, alterations of feeding, sexual behavior, and sensory perception, environmental dependency, etc., are relatively uncommon in subcortical dementia, and can thus be used to clinically differentiate cortical and subcortical dementia. We used a questionnaire prepared by Bathgate et al. [11] to capture the behavior of patients. This questionnaire was developed with the primary aim of determining the discriminating value of behavioral characteristics in differentiating bvFTD from 2 common forms of dementia: AD and subcortical vascular dementia. To differentiate FTD from AD, informant-based behavioral interviews were also developed by Bozeat et al. [12] and Ikeda et al. [13]. However, Bathgate et al. [11] demonstrated that changes in emotions and insight, selfishness, disinhibition, personal neglect, gluttony and sweet food preference, wandering, stereotypies, loss of sensitivity to pain, echolalia, and mutism were more characteristic of bvFTD, and differentiated most bvFTD from AD and vascular dementia. The Frontal Systems Behavior Scale (FrSBe) is another questionnaire which measures behavior associated with frontal subcortical deficits, but it tests only 3 items: apathy, disinhibition, and executive dysfunction [2]. We planned to evaluate the behavioral profiles of bvFTD, as representative of cortical dementia, and PDD, as representative of subcortical dementia, and to compare them to find out which behavioral symptoms differentiate cortical and subcortical dementia. Methods This was a questionnaire-based, observational, comparative study conducted between March 2015 and October 2016. A purposive sampling technique was employed for the recruitment of patients from the Cognitive and Movement Disorders Clinic (MDC) of our institute. We selected patients with probable bvFTD, diagnosed according to the international consensus criteria for bvFTD [6].
Patients with bvFTD of > 1 year's duration who were regularly attending follow-up were included in the study. Parkinsonian patients who had regularly attended follow-up at the MDC for > 5 years and had received a primary diagnosis of idiopathic Parkinson's disease (IPD) were recruited. At the point of their recruitment, these patients had cognitive dysfunction and fulfilled the clinical diagnostic criteria for PDD [14]. We excluded (a) patients and caregivers who were not willing to participate in the study, (b) patients with overlapping features of both degenerative and vascular dementia, (c) patients with such severe dementia that they could not be assessed, and (d) patients with incomplete clinical evaluation and brain imaging data. A detailed history was obtained from each patient/a reliable caregiver, and neurological examination and assessment of cognitive functions were conducted. Attention was tested using the continuous performance task/test, digit span test, and serial subtraction test. Memory was tested with the verbal learning test as per the Kolkata Cognitive Screening Battery [15]. For language, we used the Bengali version of the Western Aphasia Battery [16]. The visuospatial domain was tested using the letter cancellation task and line bisection test, while visuoperceptual testing was done via dot-counting, fragmented letters, and progressive silhouettes. Frontal-lobe function testing was done using the Frontal Assessment Battery (FAB) [17]. All the above tools and tests were validated for the Bengali language, and we have used them in previous studies as well [18,19]. All patients underwent detailed laboratory investigations, including complete blood count, erythrocyte sedimentation rate, and blood biochemical tests such as the thyroid function test, serum vitamin B12 level, fasting and postprandial blood glucose, and glycosylated hemoglobin. Lipid profile was obtained, and tests of liver and renal function as well as magnetic resonance imaging (MRI) of the brain were conducted. Behavioral symptoms were collected from a reliable caregiver via a set of questionnaires developed by Bathgate et al. [11]. In this checklist, various behaviors were subcategorized into 7 major categories: affect and social behaviors, sensory behaviors, eating and vegetative behaviors, repetitive and compulsive/ritual behaviors, environmental dependency, cognitively mediated behaviors, and behaviors related to psychosis. We used the Bengali translated version. It was emphasized that a "symptom" should represent a marked change from the patient's premorbid condition and not a longstanding trait. We ensured that each symptom being assessed was understood by the caregivers. The response to each question was recorded as the absence or presence of symptoms. Interviews were conducted by a single interviewer. Statistical Analysis For statistical analysis, data were entered in a Microsoft Excel spreadsheet and then analyzed with SPSS v24.0 (SPSS Inc., Chicago, IL, USA) and GraphPad Prism v5. We used descriptive statistics for analyzing the baseline demographics, Bengali Mental Status Examination (BMSE), Clinical Dementia Rating (CDR), and FAB scores. Data were summarized as mean and SD for continuous variables and n (%) for categorical variables. The various domains of behavioral symptoms in the bvFTD and PDD patients were compared by the χ2/Fisher's exact test. For the comparison of continuous variables between groups, the Mann-Whitney U test was performed. A p value < 0.05 was considered significant.
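To make the analysis concrete, the following minimal sketch (Python with scipy; not the study's own analysis code) reproduces one of the reported effect sizes from its 2x2 contingency table. The counts are back-calculated from the percentages reported below for disinhibition (95% of 20 bvFTD vs. 35% of 20 PDD patients); every other symptom comparison follows the same pattern.

from scipy.stats import fisher_exact

table = [[19, 1],   # bvFTD: symptom present, symptom absent
         [7, 13]]   # PDD:   symptom present, symptom absent
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.2g}")
# -> OR = 35.29, p < 0.001, matching the values reported for disinhibition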
Results Twenty bvFTD patients and 20 PDD patients were recruited for the study. The demographic profiles of our patients are given in Table 1. The bvFTD patients were younger than the PDD patients, but the difference was not statistically significant. There was a male predominance in both groups. The mean duration of dementia at presentation was 3.85 (±1.57) years in the bvFTD group, i.e., comparable to the duration of dementia in the PDD group (4.3 ± 2.25 years). There was also no significant difference in years of education between the bvFTD (6.35 ± 5.16) and PDD (7.35 ± 4.93) patients. bvFTD patients were cognitively more impaired than PDD patients. bvFTD patients had more affected behavioral domains than PDD patients. While all (100%) bvFTD patients had ≥3 affected domains of behavioral symptoms, 25% of PDD patients had < 3 affected domains (p = 0.047). Table 2 shows the distribution of domains affected in the 2 groups of patients. Overall, all major behavioral domains were more commonly observed in bvFTD than in PDD, except for neuropsychiatric behavior (Table 3). Abnormal affect and social behavior were observed in all bvFTD patients and in 80% of the PDD patients. Regarding social behavior, the following subdomains were significantly more common in bvFTD than in PDD: loss of basic emotion (p < 0.001, odds ratio [OR] 44.33), disinhibition (p < 0.001, OR 35.29), neglect of hygiene (p = 0.001, OR 13.22), loss of embarrassment (p = 0.003, OR 10.52), social avoidance (p = 0.012, OR 6.93), and selfishness (p = 0.014, OR 9). Although most other subdomains were also observed more often in bvFTD than in PDD, the differences were not statistically significant. Abnormal sensory behavior was also more common in bvFTD than in PDD. Loss of the sensation of pain was significantly more common in bvFTD (p < 0.001, OR 44.33). Loss of smell as well as exaggerated heat and cold responses were observed at almost similar rates in the 2 groups. Eating and vegetative behaviors were significantly more abnormal in patients with bvFTD (85%) than in those with PDD (45%). Loss of taste discrimination (p < 0.001, OR 17), increased consumption of alcohol (p = 0.031, OR 7.36), and increased smoking (p = 0.014, OR 9) were significantly more evident in bvFTD. Although not statistically significant, overeating was reported more frequently in bvFTD (p = 0.065). Of the vegetative behaviors, wandering (p = 0.004, OR 9.33) and pacing (p = 0.014, OR 9) were found to be significantly more common in bvFTD, while other behaviors were common but did not attain statistical significance. In the repetitive and compulsive behavior domain, no subdomain was significantly more common in either group. However, simple and complex motor stereotypies, and verbal stereotypies or perseverations, were reported more commonly in bvFTD. While 40% of bvFTD patients had environmental dependency, only 15% of PDD patients showed similar behavior. Utilization behavior was observed in 35% of the bvFTD patients but in none of the PDD group (p = 0.008, OR 22.78). Although the differences did not attain significance, echolalia (20%) and echopraxia (10%) were observed only in bvFTD patients. Cognitively mediated behavior was also more common in bvFTD (70%) than in PDD (60%). Failure to recognize objects (p = 0.027, OR 4.33) and mutism (p = 0.041, OR 5.67) were significantly more common in bvFTD. Of all these behavioral symptoms, neuropsychiatric behavior was more commonly observed in PDD (80%) than in bvFTD (50%).
Suspiciousness (p = 0.001, OR 0.0295), the false belief that people were in their home (p = 0.014, OR 0.11), and visual illusions/hallucinations (p = 0.004, OR 0.107) were significantly less common in bvFTD than in PDD. Although not statistically significant, the misidentification phenomenon (p = 0.053) and delusions of theft (p = 0.176) were more common in PDD. We analyzed cognitive performance in relation to the behavioral symptoms of the bvFTD and PDD patients. In the bvFTD group, the FAB score was significantly lower in patients with neglect of hygiene (p = 0.04), loss of interest (p = 0.021), and loss of smell (p = 0.001), but higher in those who got upset if their routine was disrupted (p = 0.04) and who had verbal stereotypies/perseverations (p = 0.015). Patients with hypersomnia had a higher BMSE score (p = 0.026). Although not statistically significant, patients with hyposomnia had lower BMSE and FAB scores. In the PDD group (Table 4), FAB scores were significantly lower in patients with disinhibition (p = 0.019) and hyposomnia (p = 0.043), and higher in patients with excessive worrying (p = 0.02) and verbal stereotypies/perseverations (p = 0.007). The BMSE score was higher in patients who needed to do things immediately (p = 0.011), did excessive clock-watching (p = 0.019), and paid excessive attention to detail (p = 0.028), but it was much lower in patients with a loss of basic emotion (p = 0.002). Discussion Executive dysfunction is common in bvFTD and the subcortical dementias, owing to disturbances of the dorsolateral prefrontal cortex and its subcortical connections. Patients with bvFTD often have associated parkinsonism and other motor features as well as incontinence, and patients with PDD exhibit behavioral changes like depression, apathy, and anxiety, sometimes making it difficult to reach a diagnosis. In an attempt to find differentiating features between these 2 dementia syndromes, we explored the behavioral profiles of these patients. We recruited clinically diagnosed bvFTD patients regularly attending follow-up for > 1 year at the Cognitive Clinic, and IPD patients regularly attending follow-up for > 5 years at the MDC who had developed cognitive dysfunction that met the criteria for PDD. This was done to improve the diagnostic certainty of the selected patients. Although the bvFTD patients were younger than the PDD patients, they were cognitively more impaired. The clinical manifestation of bvFTD is dominated by altered behavior and changes in social cognition. In our study, several behavioral symptoms showed very high sensitivity for bvFTD. The most sensitive features favoring bvFTD were: loss of basic and social emotions, selfishness, disinhibition, neglect of personal hygiene, loss of awareness of pain, loss of taste discrimination, wandering, pacing, the utilization phenomenon, failure to recognize objects, and mutism. The presence of these features makes a diagnosis of bvFTD more likely. Neuropsychiatric symptoms like suspiciousness, the false belief that people were in their home, and visual illusions/hallucinations were more often observed in PDD patients, with significant discriminating value. These features clearly differentiate bvFTD and PDD patients. We found loss of basic emotion, selfishness, and social avoidance to be common in our bvFTD cohort, and this helped in distinguishing bvFTD from PDD. Loss of embarrassment was also an important feature of social cognition that favored a diagnosis of bvFTD. Similar observations have been made by other researchers as well [4,11,20].
While IPD and PDD patients also exhibit features of apathy and a lack of empathy, these do not manifest often, appear late, and are often mixed with depression [21][22][23][24]. Another prominent discriminatory symptom of impaired social cognition was disinhibition (95 vs. 35%, p < 0.001, OR 35.29). Disinhibition has also been described in other studies [11,20,25]. Marked disinhibition is rare in PDD, with reported frequencies varying from 8 to 24% [24,26]. The amygdala and the orbitofrontal and medial prefrontal cortices (including the anterior cingulate cortex) are responsible for Theory of Mind (TofM). TofM is the ability of a person to understand what others are thinking and to read their emotions. It is important for empathy, and a lack of it is an important feature of bvFTD. Neglect of personal hygiene was another discriminating feature observed in a relatively high proportion of the patients with bvFTD (85 vs. 30%, p = 0.001, OR 13.22). Other studies have also reported a high proportion (87-100%) of bvFTD patients with this symptom [4,11], and it is rare in patients with PDD, where it is often mixed with depression or apathy [22,[25][26][27]. Early loss of insight was an important feature of bvFTD and many researchers have reported this, in 25-100% of patients [11,28]; in PDD, however, it is relatively uncommon [22]. Loss of insight was observed in a large proportion of our bvFTD (95%) and PDD (80%) patients. That many of our PDD patients had loss of insight was possibly due to the manner in which the question was framed to elicit insightfulness. Rather than simply asking about their awareness of the disease, the question was whether the subject reacted to difficulties by becoming upset, distressed, or anxious, losing confidence, or withdrawing, and this might have elicited a more positive response from the PDD patients. Additionally, apathy (65%) and hidden depression might also have contributed in these patients. Exaggerated emotion, aggressiveness, and irritability were present in 25-70% of the bvFTD patients and 15-55% of the PDD patients. A similar excess of emotional responses has also been reported in other studies [11,22,24,28,29]. As in other studies [30,31], altered sensory perception, including a lack of pain sensation, was found in a significantly high proportion of our bvFTD patients. Hyposmia was less frequent in the bvFTD (55%) patients than in the PDD (60%) patients. Hyposmia is a prominent nonmotor symptom, occurring in about 80-90% of PD patients [32,33]. Although self-reporting, or reporting by the caregiver, may not provide a true account of smell perception, our observation of an abnormal perception of smell in a high proportion of the patients with bvFTD needs to be validated by proper evaluation in future studies. Abnormal eating and vegetative behaviors were observed more frequently in bvFTD than in PDD, similar to the reports of other investigators [11,13], although sweet food preference and food faddism were not common in our bvFTD patients. A lack of taste discrimination, increased consumption of alcohol, and increased smoking clearly discriminated bvFTD from PDD. Altered eating behaviors are relatively uncommon in PDD. Several studies observed that PDD patients lose weight (20-30%), and that this is related to changed eating habits, bradykinesia, an altered perception of taste and smell, or the effect of medication [13,24,34]. Recent changes in patterns of smoking or drinking, or the recent adoption of these habits, are behavioral features strongly suggesting bvFTD rather than any subcortical dementia.
Eating continuously when food is present, food-cramming, seeking out food, stealing food from others' plates, etc. were observed in a higher proportion of our bvFTD patients than of our PDD patients, but did not have discriminatory value. Wandering (70 vs. 20%, p = 0.004, OR 9.33) and pacing (50 vs. 10%, p = 0.014, OR 9) were reported more frequently in our bvFTD patients than in the PDD patients. These have also been reported at similar frequencies in bvFTD by others [11,31], but generally do not manifest in PDD. Although poorly understood, repetitive motor behavior is considered to be due to a disruption of coordinated function within the basal ganglia or corticostriatal structures [35]. Alteration of sexual behavior was also observed more commonly in our bvFTD patients, with 50% reported as having hyposexuality and 20% as having hypersexuality. A similar observation was made by Bathgate et al. [11], with hyposexuality in 58% and hypersexuality in 19%. One Indian study observed this behavior in 10% of their bvFTD patients [36]. Among our PDD patients, 30% reported hyposexuality and 10% reported hypersexuality. Hypersexuality has been reported in around 3.5% of IPD subjects as part of an impulse control disorder (ICD) [37]. Culturally, Indians are reluctant to discuss their sexual habits publicly. Moreover, hyposexuality is reported in many chronic diseases and may not be a feature specific to any disease. A change in sleep pattern was found at nearly equal frequency in both groups (80% in bvFTD and 75% in PDD), with no discriminatory value. As in other neurodegenerative diseases, hypersomnolence is reported in 30-47% of patients with bvFTD [11,38]. The literature suggests that 80% of IPD patients suffer from insomnia [39], while 50% experience excessive daytime sleepiness [40], and 30-90% exhibit REM sleep behavior disorder [41]. Although repetitive and compulsive behaviors were more commonly found in our bvFTD cohort, we did not find them to discriminate bvFTD from PDD. Mendez and Perryman [28] observed perseverative and stereotyped behavior, including compulsive-like acts, in 45.3% of FTD patients at presentation, and this increased to 88.7% at the 2-year follow-up. ICDs are observed in IPD patients receiving dopamine agonists and include compulsive gambling, buying, sexual, and eating behaviors, punding (stereotyped, repetitive, purposeless behaviors), and hobbyism (e.g., compulsive internet use, artistic endeavors, and writing) [42]. ICDs result from a dysregulation of the mesocorticolimbic dopamine system and alterations in the opiate and serotonin systems [42]. Around 13.6% of IPD patients report ICDs while on dopamine agonists [43]. However, there are limited data about this characteristic behavior in both PDD and bvFTD patients. Chiu et al. [24] showed that 16% of PDD patients had repetitive behaviors. Another compulsive behavior observed in IPD patients receiving short-acting potent dopaminergic drugs like L-dopa is dopamine dysregulation syndrome, which manifests as compulsive medication overuse [44]. Environmental dependency-related behaviors were observed in both groups. Among these, utilization behavior was significantly more frequent in bvFTD, observed in 35% of the patients but in none of the PDD patients (p = 0.008, OR 22.78), suggesting this to be a strong discriminating feature. This is a very interesting clinical sign, observed in > 75% of bvFTD patients in an Indian study [45], though others have reported that this behavior occurs less frequently [5,11,15,28].
The questionnaires also tried to assess cognition and spatial function; 70% of the bvFTD patients and 60% of the PDD patients responded that they had problems with cognition and spatial dysfunction. What is intriguing is that many of our bvFTD caregivers responded positively to the questions relating to disturbance in visuospatial and perceptual functions. Symptoms related to visuospatial-perceptual function were detected in 65% of bvFTD and 40% of PDD patients. As this was a questionnaire-based assessment and our bvFTD cohort was more severely demented than the PDD patients, the result might not be a true indication of the deficits in this sphere of cognition and would require a more detailed evaluation. Interestingly, mutism was found to be significantly more common in bvFTD and is therefore a discriminatory feature. Psychotic symptoms were more prominent in PDD (80%) than in bvFTD (50%). Delusional behaviors with suspiciousness, the belief that someone is in the home, and visual illusions/hallucinations were found to be significantly more frequent in PDD patients than in bvFTD patients, making these notable discriminatory features. Psychotic symptoms, particularly visual hallucinations, are very characteristic of PDD and dementia with Lewy bodies, primarily because of the involvement of visual association areas. Delusions and hallucinations have been reported in PD in 30-40% and 45-65% of patients, respectively [22,24]. These studies also suggest that visual hallucinations in PD predict the development of dementia. Delusions and hallucinations are relatively uncommon in bvFTD, except in some patients such as those with bvFTD due to the C9orf72 mutation [46]. While correlating cognitive performance with behavioral symptoms, we observed a trend. In the PDD group, although not statistically significant, patients with some of the behavioral symptoms tended to have a lower mean FAB score (< 8; Table 4). These symptoms were: loss of basic emotion, loss of embarrassment, social avoidance, neglect of hygiene, and loss of interest, which might be considered "negative" symptoms. On the other hand, patients with some of the other symptoms had higher mean FAB scores (> 8). These symptoms were: verbal stereotypies/perseverations, excessive worrying, the need to do things immediately, exaggerated emotional display, irritability, aggression, and seeking out social contact, which might be considered "positive" symptoms. This indicates that, in PDD, more advanced disease with lower FAB scores might attenuate the expressive or "positive" behavioral symptoms to some extent. However, it was not possible to generalize, as the FAB scores were significantly lower in the patients displaying disinhibition. Moreover, it was difficult to comment on the bvFTD patients, as most of them had very low FAB scores. Hence, these findings should be evaluated in larger studies to arrive at an accurate conclusion. Dementia with a predominantly frontal dysexecutive pattern may be seen in both cortical (e.g., bvFTD) and subcortical (e.g., PDD) disorders. Behavioral symptoms form an important part of such dementias, and our study suggests that some of these behaviors have discriminatory value for differentiating between these disorders. We found that several behavioral symptoms increased the odds for bvFTD versus PDD. In a previous study, Moretti et al.
[47] observed prominent abnormalities in personality and social conduct, with a significant loss of insight, in frontal-lobe dementia when compared to subcortical vascular dementia. Hence, although larger-scale comparative studies are required, it can be inferred that behavioral symptoms help to distinguish cortical dementia with frontal-lobe dysfunction from subcortical dementia. This study has a few limitations. The small sample size was a major limitation, which could have reduced the power of the study. As the features of degenerative disease evolve over time, a cross-sectional assessment does not fully depict the spectrum of behavior of these patients. Moreover, the lack of pathological confirmation of the diagnoses was another limitation. However, we carefully chose our patients after a long follow-up before recruiting them, so as to ensure a more secure diagnosis. The systematic analysis of symptoms of behavioral change was the strength of the study. To conclude, we observed significant differences in behavioral symptoms between bvFTD and PDD patients, with several symptoms showing higher frequency in bvFTD. It may be that degeneration of the prefrontal cortex is mostly responsible for frontal behavioral symptoms, and that these are relatively uncommon in subcortical diseases despite the rich reciprocal connections between cortical and subcortical structures.
2020-12-17T09:12:07.628Z
2020-12-15T00:00:00.000
{ "year": 2020, "sha1": "b176b14645596845a0cd6233e2689d0c7945f1bb", "oa_license": "CCBYNCND", "oa_url": "https://www.karger.com/Article/Pdf/512042", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e276ad5ce52e6a21c6554bb3e43a060ebbba7083", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
73342425
pes2o/s2orc
v3-fos-license
Prenatal diagnosis of hemoglobinopathies: from fetoscopy to coelocentesis Prenatal diagnosis of hemoglobinopathies involves the study of fetal material from blood, amniocytes, trophoblast, coelomatic cells and fetal DNA in the maternal circulation. Its first application dates back to the 1970s and involved globin chain synthesis analysis on fetal blood. In the 1980s molecular analysis was introduced, as well as amniocentesis and chorionic villi sampling under high-resolution ultrasound imaging. The application of direct sequencing and polymerase chain reaction-based methodologies improved the DNA analysis procedures and reduced the sampling age for invasive prenatal diagnosis from 18 to 16-11 weeks, allowing fetal genotyping within the first trimester of pregnancy. In recent years, fetal material obtained at 7-8 weeks of gestation by coelocentesis and isolation of fetal cells has provided new platforms on which to develop diagnostic capabilities, while non-invasive technologies using fetal DNA in the maternal circulation are starting to develop. Introduction Hemoglobinopathies can be subdivided into expression defects (thalassemias) and structural defects (abnormal hemoglobins).[4] Migration from native areas has spread hemoglobinopathies to Northern Europe, the Americas and many other immigration areas where these conditions were previously rare or absent. Approximately 1.5% of the global population are carriers of the β-thalassemias, with more than 50,000 newly affected births every year.5,6 Well organized control programs based upon information, screening for carrier identification, genetic counseling and prenatal diagnosis have dramatically reduced the number of affected newborns (incidence).7,8 To date, carrier screening, genetic counseling, and prenatal diagnosis for hemoglobinopathies are among the most frequently performed genetic analyses worldwide.9 Carrier screening for hemoglobinopathies should be offered to all women of reproductive age and extended to the partner if the woman is suspected or found to be a carrier. Ideally, the screening should be done pre-conceptionally or as early as possible in the pregnancy. The traditional basic hematological tests for thalassemia are the measurement of the mean corpuscular volume (MCV), the mean corpuscular hemoglobin (MCH) value and the quantities of HbA2, HbF and any Hb variants. However, most carriers of HbS, HbC, HbD and other relevant traits are not microcytic. Therefore the results of the blood count should not preclude the separation and measurement of the Hb fractions, and to date the most sensible protocol is to perform separation and measurement using high performance liquid chromatography (HPLC) or capillary electrophoresis (CE) regardless of, and in parallel with, the blood count.[12] Invasive prenatal diagnosis can be performed from 7 weeks of gestation by coelocentesis13 to 22 weeks by cordocentesis.14 DNA analysis can be done using cells obtained from coelomatic fluid, chorionic villi,15 amniotic fluid or fetal blood, and the molecular defects should be identified in both parents, preferably before or together with prenatal diagnosis. The first attempts at prenatal diagnosis of hemoglobinopathies were for thalassemia major, making use of fetal blood sampling and in vitro globin chain synthesis to measure directly the expression of the mutated genes.16 Subsequently, techniques were developed to isolate fetal DNA.17
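Before turning to the individual DNA methods, a minimal sketch of the carrier screening triage outlined above may be useful. The cut-offs used here (MCV < 80 fL, MCH < 27 pg, HbA2 > 3.5%) are commonly quoted reference values, not figures taken from this review, and each laboratory calibrates its own; the key design point from the text is that HPLC/CE is run in parallel with the blood count, since many structural variant carriers are not microcytic.

def carrier_screen(mcv_fl, mch_pg, hba2_pct, variant_on_hplc_ce):
    flags = []
    if mcv_fl < 80 or mch_pg < 27:   # microcytosis/hypochromia
        flags.append("possible thalassemia trait")
    if hba2_pct > 3.5:               # raised HbA2
        flags.append("suggestive of beta-thalassemia trait")
    if variant_on_hplc_ce:           # HbS, HbC, HbD, ... seen on HPLC/CE
        flags.append("structural Hb variant detected")
    if flags:
        flags.append("extend screening to the partner")
    return flags or ["no hemoglobinopathy trait suggested"]

print(carrier_screen(mcv_fl=64, mch_pg=20, hba2_pct=5.2, variant_on_hplc_ce=False))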
Several methods have been used to analyze the DNA, including direct identification of the molecular lesion by restriction endonuclease mapping,18 determination of the chromosome carrying the abnormal globin gene by linkage using restriction fragment length polymorphisms (RFLPs),19 or detection of specific mutations directly with oligonucleotide probes.20 A greater impulse was given by the advent of polymerase chain reaction (PCR) technology, which allowed the development of several PCR-based techniques, together with automatic non-radioactive sequencing analysis, to perform rapid and safe molecular diagnosis.21 Fetal blood sampling: fetoscopy and cordocentesis The biological feasibility of prenatal diagnosis of β-thalassemia was reported for the first time in 1970.22 The authors also reported that the erythrocytes of fetuses homozygous for β-thalassemia are morphologically indistinguishable from normal, because the cells are largely filled with fetal hemoglobin and their volume is about 100 fL. Other investigators determined the normal range of fetal β-, α- and γ-globin chain synthesis and their ratios in the normal, β-trait and affected β-thalassemia fetus.23 Diagnosis of β-thalassemia in the second trimester of pregnancy posed, however, several problems: i) the use of safe methods to obtain fetal blood without risk for the fetus; ii) the small amount of blood obtained by fetoscopy; iii) fetal blood samples contaminated by maternal cells; iv) the low synthesis of β-chains in the second trimester, i.e. less than 10% in the normal human fetus,24,25 with a further reduction of β-chain synthesis in both β-trait and affected fetuses. Many of these problems were subsequently solved, except the last but not least: the emotional burden of interrupting an advanced pregnancy. At the beginning of 1974 it became possible to obtain fetal blood during the second trimester of pregnancy, at 18-20 weeks.26 Initially, the technique used for aspiration of fetal blood was placentocentesis27 and subsequently fetoscopy.28 The first involved aspiration of blood from the placenta under ultrasound guidance.29 A long 20-gauge spinal needle was introduced transabdominally into the placenta and several small samples of blood were aspirated. The second involved insertion of a fiberoptic endoscope into the amniotic cavity and taking a small sample of fetal blood under direct visualization, by passing a long 27-gauge needle through the side-arm of the fetoscopy cannula (Figure 1). Some groups used other approaches, consisting of ultrasound-guided placement of a needle into the hepatic portion of the umbilical vein or ultrasound-guided fetal cardiac puncture.30 The reported rate of fetal loss varied from center to center: about 4.8% for prenatal diagnosis of genetic disease in Montreal, 5.1% in the two centers of the United States that actively performed fetoscopy, and 7.4% in other centers. In about 13% of cases more than one attempt was necessary to obtain an adequate sample.31
The loss rate was significantly higher in the centers in which a smaller number of cases were studied, and therefore the presence of an experienced obstetrical team was needed. During aspiration the blood sample had to be quickly analyzed to ensure the presence of fetal blood and the absence of maternal contamination. Each sample was immediately analyzed on a Coulter Channelyzer, which detects the difference in size between maternal and fetal red cells due to the large size of fetal erythrocytes (MCV = 100-120 fL), while the MCV of adult cells of β-thalassemia carriers is usually less than 70-80 fL. This analysis had to be conducted within a few seconds, so that the procedure could be stopped as soon as a suitably pure sample had been obtained. The percentage of fetal cells could also be confirmed by the Kleihauer-Betke test.32,33 Laboratory methods All methods used for fetal blood collection are bound to produce samples more or less contaminated with maternal blood. A sample enriched in fetal cells was essential for globin chain synthesis analysis and, since fetal and adult erythrocytes differ in several characteristics, such as size, type of hemoglobin, membrane antigens and enzyme levels,34 elimination of maternal contamination has been carried out in different ways: i) transfusion of the mother to suppress maternal reticulocyte globin chain synthesis;35 ii) fetal red cell concentration by differential agglutination using anti-i serum;36 iii) fetal red cell enrichment with antibodies against blood group antigens (ABO, MNs, Rh systems);37 iv) differential lysis of maternal cells with the Orskov-Stewart-Jacobs reaction.38 This last method was the most useful, simple and efficient for globin chain synthesis in samples of placental blood with less than 5% fetal cells. The method is based on differences in the carbonic anhydrase levels of fetal and adult red blood cells and results in selective hemolysis of the contaminating maternal cells. When red cells are suspended in NH4Cl and NH4HCO3, NH3 enters the cells and, if carbonic anhydrase is present, CO2 is converted to HCO3-, which then exchanges for Cl-. The latter is trapped by NH4+ and eventually leads to osmotic lysis after H2O also enters the cells. In the fetal cells, the low level of carbonic anhydrase is further inhibited by a small amount of acetazolamide, and hemolysis occurs much more slowly. Thus the proper choice of concentrations of NH4HCO3 and acetazolamide, combined with an appropriate reaction time, leads to a marked enrichment of fetal cells in mixed samples.39 Globin gene biosynthesis and separation The fetal blood is incubated with 3H-leucine to radiolabel newly synthesized globin chains, which are then separated by carboxymethyl cellulose chromatography40 (Figure 2). The diagnosis of β-thalassemia is made when the normal β-chains are either absent or substantially reduced. The biosynthesis of β-chains increases slowly during the first and second trimesters. At 16-23 weeks of gestation, the synthesis of β-chains relative to γ-chains was calculated to be between 5 and 15% in normal fetuses, between 3.5 and 6% in carriers of β-thalassemia, and below 3% in affected fetuses.41
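A minimal sketch of how these bands translate into a call, using only the ranges quoted above (at 16-23 weeks: affected < 3%, carrier 3.5-6%, normal 5-15%); note that the carrier and normal ranges overlap between 5 and 6%, which is one reason why, as noted next, each center calibrated its own cut-offs.

def classify_beta_gamma_ratio(ratio_pct):
    # beta/gamma synthesis ratio (%) from carboxymethyl cellulose chromatography
    if ratio_pct < 3.0:
        return "affected (homozygous beta-thalassemia)"
    if 3.5 <= ratio_pct < 5.0:
        return "beta-thalassemia trait (carrier)"
    if 6.0 < ratio_pct <= 15.0:
        return "normal"
    # 3.0-3.5% lies between bands; 5-6% is shared by carrier and normal ranges
    return "indeterminate: repeat or interpret against local reference ranges"

print(classify_beta_gamma_ratio(2.4))  # affected (homozygous beta-thalassemia)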
However, each center determined its own synthesis ratios, depending on the heterogeneity of β-thalassemia in that particular region and on the technical aspects associated with the separation of the newly synthesized globin chains.[44] Fetal tissue sampling: amniocentesis and villocentesis The advent of molecular analysis allowed prenatal diagnosis early in the second trimester. In the 1980s different techniques were developed for isolating fetal DNA from material obtained by amniocentesis or chorionic villi sampling (CVS). Analysis of fetal DNA was done by linkage analysis of the β-globin gene cluster, by RFLPs, or by direct identification of the molecular defect by restriction endonuclease mapping.45 Amniocentesis allowed prenatal diagnosis to be anticipated by at least two weeks. Fifteen to twenty milliliters of amniotic fluid obtained at 16 weeks of gestation contain sufficient fetal cells to provide enough DNA for molecular analysis of the globin genes. Subsequently, aspiration of a biopsy of chorionic villi at 11 weeks of gestation provided new practical and psychological advantages. DNA preparation Different methods have been described for DNA extraction from whole blood, amniotic cells and chorionic villi. The oldest method, still used in many laboratories, is phenol-chloroform extraction. Subsequently the salting-out method was introduced,48 and different kits, which give sufficient DNA recovery, are commercially available. Before DNA extraction from a CVS, microscopic dissection to remove traces of blood and maternal tissue is needed. Usually, 10-20 µg of DNA are obtained from a CVS, while the yield of DNA obtained from amniotic fluid cells is smaller. Molecular prenatal diagnosis The purpose of DNA-based prenatal diagnosis is always to determine whether the fetus has inherited the disease-causing mutations identified in both parents. A variety of technical innovations have been developed since the early 1970s. Many of the earlier manual, semi-automated or automated genotyping methods are still used, with improved performances of the detection systems.
Restriction fragment length polymorphism linkage analysis Throughout the globin gene clusters there are single base changes that produce either new restriction enzyme sites or remove previously existing ones. These polymorphic sites were very useful in the past because they were employed as markers for the globin gene complexes. The arrangement of RFLPs along the β-globin gene cluster is not random but specific in different populations. Within any ethnic group it is usual to find individual β-thalassemia mutations strongly associated with a specific RFLP haplotype in the β gene cluster. The strong association of a particular gene with a polymorphism is an example of what is termed linkage disequilibrium. Kan and Dozy51 first reported the association of a mutant gene, the HbS allele, with a restriction enzyme polymorphism. Although specific β-thalassemia genes are in linkage disequilibrium with various restriction sites, there is no single DNA polymorphism that is generally useful for the analysis of families at risk, because the same haplotypes occur frequently in normal subjects in the same populations. Therefore RFLPs were not frequently used for prenatal diagnosis, except in a few cases in which individual RFLPs were found in strong linkage disequilibrium with specific mutations. Since so few of these allele-linked polymorphisms have been found to date, and since their linkage to the different thalassemia mutations was not absolute, in order to identify whether a fetus had inherited two β-thalassemia chromosomes it was usually necessary to establish the RFLP linkage by a family study (father, mother and all four grandparents), and it is often not possible to use this approach for prenatal diagnosis. Due to these problems, and to the fact that searching for appropriate polymorphisms within a family and establishing their linkage with the normal and mutant genes is a laborious process, the use of this approach for prenatal diagnosis has remained limited.52 Oligonucleotide probes The molecular background of globin gene defects can be roughly subdivided into two categories: large deletions and point mutations. Molecular analysis of large deletions is common for α-thalassemia and for some less common β-thalassemia defects such as δβ-thalassemia and γδβ-thalassemia.53,54 However, the majority of β-globin gene defects are point mutations or deletions/insertions of one or two nucleotides. Many of these mutations were recognized using an appropriate DNA probe prepared from the 5' portion of the β-globin gene. This approach was quite sensitive, allowing the use of a small amount of DNA isolated from fetal material, but the main problem was the presence of the many different mutations that affect the β-globin gene, so that it was necessary to perform different tests with specific probes. Furthermore, these probes were too long, and it was often difficult to detect the single base changes that cause many forms of thalassemia.55
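A minimal sketch of the linkage reasoning just described, under the simplest fully informative assumption and with hypothetical data: the family study has established that, in both parents, the β-thalassemia mutation lies in cis with the "+" RFLP allele (restriction site present) and the normal gene with "-", so counting "+" alleles in the fetal genotype counts the inherited mutant chromosomes. Real pedigrees are rarely this clean, and incomplete linkage disequilibrium or recombination can defeat the approach, which is part of why its use remained limited.

def infer_fetal_status(fetal_rflp):
    # fetal_rflp: the two RFLP alleles inherited by the fetus, e.g. "+-"
    n_mutant = fetal_rflp.count("+")  # valid only under the phase assumed above
    return ["unaffected", "carrier", "affected (homozygous)"][n_mutant]

print(infer_fetal_status("+-"))  # carrier
print(infer_fetal_status("++"))  # affected (homozygous)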
Subsequently an alternative technique was introduced, based on high-specific-activity labeling of short synthetic DNA fragments (oligonucleotides or oligomers) that hybridize to their homologous sequences but not to heterologous sequences. Upon assay with both the normal and the corresponding mutant synthetic probes, the genotype of an individual with respect to the mutation can be established without reference to additional data.[57,58] The latter found that short probes consisting of 19 nucleotides were very efficient at hybridizing perfectly with their homologous sequences. The method was sufficiently sensitive to be used with 10 µg or less of DNA per gel lane, but careful control of the hybridization and washing conditions was required. Polymerase chain reaction-based techniques In 1987 it became possible to amplify regions of the β-globin gene directly from genomic DNA.59 For the analysis of globin genes in fetal DNA, it is necessary to perform PCR on the genomic regions in which the mutations previously identified in the parents are located. A number of PCR-based techniques have been developed for the identification of known and unknown mutations of the globin genes (Table 1). These techniques include dot-blot (DB) analysis, restriction endonuclease analysis, the amplification refractory mutation system (ARMS), reverse dot-blot analysis, denaturant gradient gel electrophoresis (DGGE), GAP-PCR and sequencing analysis. Each method has advantages and disadvantages, and each laboratory must choose the most suitable methods in relation to the type and variety of mutations present in a specific population.60 Restriction enzyme analysis of amplified DNA Many β-thalassemia alleles can be easily identified by restriction enzymes, because mutations can create or abolish a restriction endonuclease site, altering the normal pattern of digestion fragments visualized after electrophoretic separation on agarose or polyacrylamide gel.61 If a thalassemic allele does not present a specific restriction endonuclease site, it is possible to create an artificial site adjacent to the point mutation during PCR, using a primer that inserts new bases into the amplified product (amplification-created restriction sites method). This method was applied to the detection of different β-thalassemia mutations, especially the second most common mutation in the Mediterranean area, the IVS1 nt 110 defect.62
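A minimal sketch of the create/abolish principle with short hypothetical sequences (a real assay digests the actual β-globin amplicon): the normal 20-bp fragment contains the BamHI recognition site GGATCC, a single T-to-A change destroys it, and the digestion fragment patterns of the two alleles therefore differ on the gel. For simplicity the function cuts at the start of each site, whereas a real enzyme cuts at a defined position within it.

def digest_lengths(seq, site):
    # return fragment lengths after cutting at the start of each site occurrence
    fragments, start = [], 0
    pos = seq.find(site)
    while pos != -1:
        if pos > start:
            fragments.append(pos - start)
        start = pos
        pos = seq.find(site, pos + 1)
    fragments.append(len(seq) - start)
    return fragments

normal = "AAATTGGATCCTTAAACCGG"  # hypothetical allele containing GGATCC
mutant = "AAATTGGAACCTTAAACCGG"  # T->A substitution abolishes the site
print(digest_lengths(normal, "GGATCC"))  # [5, 15] -> two fragments: allele cuts
print(digest_lengths(mutant, "GGATCC"))  # [20]    -> uncut: site abolished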
Dot-blot The use of amplified DNA and radioactive oligoprobes was one of the first methods used for the detection of known point mutations of the β-globin gene. The principle of the dot-blot is that a single short strand of DNA (an oligonucleotide) can hybridize to complementary single-stranded amplified DNA. This approach requires the immobilization of the amplified DNA target on a nitrocellulose or nylon membrane, and a single hybridization procedure is performed with one specific oligonucleotide probe labeled at the 5' end with [γ-32P]ATP. During this procedure any non-specifically bound probe is removed by washing the membrane, leaving only the probe that is base-paired to the blotted DNA. Filters are then exposed to X-ray films and the results are developed. After removal of the radioactive material by washing at 95°C in 0.1× standard saline citrate (SSC) (1× SSC = 0.15 M sodium chloride and 0.015 M sodium citrate), 0.1% sodium dodecyl sulphate, the filters can be subjected to another cycle of hybridization with another probe. For genotyping a carrier subject many oligonucleotides are required until the mutation is identified, while for prenatal diagnosis two oligonucleotide probes are necessary for each paternal and maternal mutation, or one probe complementary to the normal and one to the mutant DNA sequence if the parents carry the same mutation. Although this methodology is adequate and accurate, it is tedious, expensive, and requires considerable time to identify the mutation(s) in couples at risk for β-thalassemia: about 24 h to screen one mutation and many days to screen the most common mutations present in a specific population.63 Reverse dot-blot The reverse dot-blot (RDB) was first described by Cai et al.64 and Saiki et al.,65 and subsequently by Maggio et al.,66,67 who used this method to screen β-, α-, and δ-globin gene mutations in the Sicilian population. Unlike dot-blot analysis, the reverse dot-blot provides for the immobilization of the oligonucleotide probes on the membrane rather than of the DNA samples. The advantage of this methodology is that a genomic DNA sample can be amplified and tested against the whole panel of immobilized probes in a single hybridization step. Amplification refractory mutation system The amplification refractory mutation system technique is based on the principle that a mismatch at the 3' end of a PCR primer inhibits the amplification of a target DNA under appropriate conditions. An ARMS assay for the detection of a mutation consists of two pairs of primers. One pair is used to screen for the presence of the specific mutation, while the other pair generates a control amplification product from a different region of the genome under the same PCR conditions. The internal control PCR fragment must always be present, indicating that the reaction is working well, while the presence of a mutation is revealed by a second fragment generated by the primer complementary to the mutated allele. This methodology has been used to screen many common and rare mutations in a one-step reaction. For prenatal diagnosis, two separate reactions for each mutation are required if the parents carry different mutated alleles, or an ARMS reaction complementary to the normal sequence and an ARMS reaction complementary to the mutated allele for genotyping the fetal DNA if the parents carry the same mutated allele. This methodology is simple, rapid and inexpensive.68
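A minimal sketch of the ARMS 3'-mismatch principle with hypothetical short sequences (real ARMS primers are 20-30 nt; 5 nt is used here only for readability). The sequence context mimics the sickle mutation at β-globin codon 6 (GAG->GTG, an A->T change on the sense strand), used as a convenient illustration; the control amplicon of a real assay is omitted.

def arms_amplifies(template, primer):
    # allele-specific priming: the primer's 3'-terminal base must match the
    # template exactly (a 3' mismatch is refractory to extension)
    anchor = primer[:-1]
    pos = template.find(anchor)
    if pos == -1 or pos + len(primer) > len(template):
        return False
    return template[pos + len(anchor)] == primer[-1]

allele_normal = "CCTGAGGAGAAGTCTGCC"  # ...CCT GAG GAG... (codon 6 GAG)
allele_mutant = "CCTGTGGAGAAGTCTGCC"  # ...CCT GTG GAG... (codon 6 GTG)
primer_mutant = "CCTGT"               # 3' base pairs only with the mutant allele
print(arms_amplifies(allele_mutant, primer_mutant))  # True: product formed
print(arms_amplifies(allele_normal, primer_mutant))  # False: refractory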
SNaPshot method This genotyping technique is a variant of the ARMS methodology. The method is based on the principle of single-base primer extension, generating fluorescently labeled fragments that are visualized on an automated DNA fragment analyzer. More than one known mutation can be screened at the same time, in three steps. The first step consists of a multiplex PCR reaction to generate amplicons including the mutations to be screened. The second step consists of a multiplex single-base extension assay of the probe primers using a commercial kit (SNaPshot Kit; Applied Biosystems, Foster City, CA, USA). In this step, DNA polymerase incorporates the complementary dye-conjugated dideoxynucleotide at the 3' end of each probe primer annealed immediately proximal to the mutation to be screened. In the last step, capillary electrophoresis is performed to determine the size of the extended probe primers and the fluorescent dye types. The SNaPshot method has already been used for typing Y-chromosome and mitochondrial single nucleotide polymorphisms (SNPs) in population analyses and for identifying mutations commonly associated with human genetic pathologies. Usually it is possible to use a common primer for mutations in a restricted zone and a different forward or reverse primer to detect each mutation. The primers used as probes are designed to anneal immediately adjacent to the mutated site, and their lengths differ by at least five bases through the addition of a poly(T) tail to the 5' end, so that they can be distinguished by capillary electrophoresis. This method can be multiplexed to screen many mutations.69 GAP-polymerase chain reaction GAP-PCR is a simple technique to identify known large gene deletions. Two primers are used, complementary to the sense and antisense strands in the DNA regions that flank the deletion. For small deletions, the primer pair will generate two different products, one of which arises from the deleted allele. For larger deletions, the distance between the two flanking primers is too great to amplify the normal allele, and the only product obtained is from the deletion allele. In these cases the normal allele is detected by amplifying across one of the deletion breakpoints, using a third primer complementary to part of the deleted sequence near the flanking normal DNA sequence. This technique is the standard method for the diagnosis of the α+ deletion defects (-α3.7 and -α4.2), of the common Mediterranean and Southeast Asian α0-thalassemia deletions (-20.5/αα, -Med/αα, -Cal/αα, -SEA/αα, -FIL/αα, -THAI/αα), and of the triplicated α-globin gene alleles (αααanti3.7 and αααanti4.2).72-74 Multiplex ligation-dependent probe amplification Although more than 90% of the deletion defects can be detected using GAP-PCR, other methods are necessary to identify unknown, very rare or very large deletions. Multiplex ligation-dependent probe amplification (MLPA) is a high-resolution method to detect deletions in genomic sequences. Each probe consists of two oligonucleotides that can be ligated to each other when hybridized to a target sequence. All ligated probes have identical sequences at their 5' and 3' ends, permitting simultaneous and quantitative amplification in a single PCR containing only one primer pair. Each probe gives rise to an amplification product of unique size. Owing to the fluorescent labeling of the primer, the resulting products can be separated according to size on a capillary electrophoresis system. The fragments are analyzed by dedicated software. Peak heights are compared with those of a control sample
and ratios are calculated. MLPA has rapidly gained acceptance in genetic diagnostic laboratories due to its simplicity compared to other methods, relatively low cost, capacity for reasonably high throughput, and robustness.75,76 DNA sequencing Sequencing analysis was first developed in 1976-1977 by Maxam and Gilbert, using chemical modification of DNA.77 Two years later Sanger and Coulson described a new sequencing method based on the in vitro synthesis of DNA from amplified products.78 This technology became more popular because it is more efficient than the first method and does not use toxic chemicals or high amounts of radioactivity. Initially, genomic DNA was fragmented into random pieces, cloned into a DNA vector and amplified in Escherichia coli. Short DNA fragments purified from individual bacterial colonies were individually sequenced and assembled electronically into one long, contiguous sequence. The classical chain-termination method requires a single-stranded DNA template, a DNA primer, a DNA polymerase, normal deoxynucleotide triphosphates (dNTPs) and modified nucleotides (dideoxy NTPs) that terminate DNA strand elongation. Initially, radioactive labeling was used. The DNA sample is divided into four separate sequencing reactions, each containing all four of the standard deoxynucleotides (dATP, dGTP, dCTP and dTTP) and the DNA polymerase. To each reaction only one of the four dideoxynucleotides (ddATP, ddGTP, ddCTP, or ddTTP) is added; these are the chain-terminating nucleotides, lacking the 3'-OH group required for the formation of a phosphodiester bond between two nucleotides, thus terminating DNA strand extension and resulting in DNA fragments of varying length. The newly synthesized and labeled DNA fragments are heat denatured and separated by size (with a resolution of just one nucleotide) by gel electrophoresis on a denaturing polyacrylamide-urea gel in four separate lanes (lanes A, T, G, C); the DNA bands are then visualized by autoradiography or UV light and the DNA sequence can be read directly off the X-ray film or gel image. More recently, methods that allow sequencing in a single reaction, rather than in four, have been developed. In dye-terminator sequencing, each of the four dideoxynucleotide chain terminators is labeled with a fluorescent dye, each of which emits light at a different wavelength. Fluorescent dye-terminator chain-termination methods have greatly simplified DNA sequencing. Chain-termination-based kits are commercially available that contain the reagents needed for sequencing, pre-aliquoted and ready to use. As the globin genes are relatively short, about 2000 bases, it is possible to sequence each gene using 2-3 amplified fragments by cycle sequencing analysis in a few days. This method is very rapid and easy, permitting the direct check of the whole sequence of the gene without the use of radioactive labels. A relative disadvantage is the cost of a sequencing instrument and the deterioration of the sequencing signal after 800-1000 bases.79 Denaturant gradient gel electrophoresis Myers et al.80
in 1986 described a technique that uses heteroduplex formation between wild-type and mutated DNA strands to identify mutations. When a DNA fragment is analyzed by electrophoresis through a linearly increasing gradient of denaturants, the fragment remains double-stranded until it reaches the concentration of denaturants equivalent to a melting temperature (Tm) that causes the partial melting of its lower-Tm domains. At this point, the branching of the molecule caused by partial melting sharply decreases the mobility of the fragment in the gel. The lower-melting-temperature domains of DNA fragments differing by as little as a single-base substitution will melt at slightly different denaturant concentrations, because of differences in the stacking interactions between adjacent bases in each DNA strand. These differences in melting cause the two DNA fragments to begin slowing down at different levels in the gel, resulting in their separation from each other. Although DGGE can identify a mutant, it does not reveal the nature or exact location of the mutation; this information can be obtained by sequencing analysis. DGGE can detect mutations only in the lower-temperature melting domains, so it is important to determine which portion of the molecule becomes single-stranded when the DNA melts. This can be determined with a computer algorithm, developed by L. Lerman, that predicts DNA melting behavior solely from the base-pair sequence. However, DGGE will not separate DNA fragments differing by a base change in the highest-temperature melting domain, owing to the loss of sequence-dependent gel migration upon complete strand separation. To permit the detection of point mutations along the complete amplified fragment of the gene under analysis, a GC-rich segment of 30-60 bases may be attached to the 5' end of one primer of the pair. This tail has a high Tm, while the rest of the DNA fragment has a lower Tm.81,82 Denaturing high-pressure liquid chromatography Denaturing high-pressure liquid chromatography (DHPLC) is a relatively new technique, analogous to DGGE. Heteroduplex molecules are separated from homoduplex molecules by ion-pair, reverse-phase liquid chromatography on a special column matrix with partial heat denaturation of the DNA strands.
The method is based on the different retention times of homoduplex and heteroduplex molecules within a mixture of denatured and reannealed DNA fragments. Heteroduplex molecules have a lower retention time than the corresponding homoduplex molecules, and the eluted DNA fragments are detected by UV. This is a semi-automated method and, by maintaining the HPLC column at a temperature that favors partial strand denaturation in the presence of base-pair mismatching, the sensitivity of the analysis is maximized. The advantages of DHPLC are its high specificity and sensitivity and its low running costs. However, the method is not suitable for the direct analysis of homozygous samples, since the DNA must be mixed with a reference sample of known genotype prior to analysis.85,86 Best practice for molecular prenatal diagnosis of hemoglobinopathies In order to reduce the occurrence of misdiagnoses, some fundamental strategies and methodologies have been introduced into the laboratory procedures: i) analysis of two different pieces of chorionic villus; ii) tests to rule out maternal contamination, non-paternity or sample exchange. It is advisable to analyze both DNA extracts, and the analysis should be conducted by two different operators using different methodologies. It is strongly recommended that, in addition to screening for the parental mutations, prenatal diagnosis should include a maternal cell contamination test. The potential presence of maternal contamination in CVS or amniotic fluid samples is the main risk in prenatal diagnosis when sensitive PCR-based methods are used. The effect of contamination after CVS can be limited by careful dissection of the maternal decidua. Amniotic fluid can be contaminated with maternal blood, and this is easily visible because the sample appears bloodstained; it should be noted, however, that blood in the amniotic fluid can also be of fetal origin. For these reasons it is necessary to use sensitive assays and highly polymorphic microsatellite markers by quantitative fluorescent polymerase chain reaction (QF-PCR). Best practice also requires the analysis of both maternal and paternal samples.[89] Prenatal diagnosis earlier in pregnancy Early amniocentesis has been associated with pulmonary hypoplasia, and the low number of cells obtained often makes definitive diagnosis difficult.91,92 In addition, the relative safety of early amniocentesis is lower than that of CVS.93,94 The inherent problems with current prenatal diagnosis, coupled with the benefits of earlier reassurance or termination, would make an alternative earlier method very valuable. At days 11 and 12 post-conception, a new population of cells appears between the inner surface of the cytotrophoblast and the membrane that lines the primary yolk sac (Heuser's membrane).95 These cells, which are derived from the yolk sac, form a fine, loose connective tissue, the extraembryonic mesoderm, which eventually fills all the space between the trophoblast externally and the amnion and Heuser's membrane internally (Figure 3). The coelomic or extraembryonic cavity develops during the fourth week of gestation, within the extraembryonic mesoderm that surrounds the bilaminar embryonic disk. The developing coelomic cavity splits the extraembryonic mesoderm into two layers, the somatic and splanchnic mesoderm.96
During the first nine weeks of gestation the coelomic cavity represents the largest space inside the gestational sac, reaching its maximum volume at 7-9 weeks and subsequently disappearing at around 13 weeks. The advent of high-resolution transvaginal ultrasound transducers has allowed a more accurate examination of the early pregnancy. Although coelomatic cells cannot be successfully cultured, the embryonic genetic material can be analyzed using PCR techniques.97 Sampling The coelomatic fluid can be sampled at 7-9 weeks of gestation under transvaginal sonography, by insertion of a 20 G needle through the vagina into the coelomic cavity.98 Debates on the safety of coelocentesis are, however, ongoing. Ross et al.99 observed that coelocentesis is associated with a high rate of miscarriage; they reported a 25% fetal loss rate within 2-3 days following coelocentesis. These authors report that difficulty in the aspiration of coelomic fluid may require an increased number of punctures, further compromising safety. Santolaya-Forgas et al.,100 examining the safety of coelocentesis in baboons, reported that this procedure might be safe for the pregnancy if it is performed at about 40 days of gestation using a 20 G needle and if less than 3 mL of coelomic fluid is aspirated. In another study, Ross et al.99 reported the results of coelocentesis in 107 women and 337 controls, showing that the risk of fetal loss between 7 and 9 weeks of gestation is about 2%, comparable to the fetal loss following CVS. Coelomic fluid characteristics and limitations for prenatal diagnosis by coelocentesis As reported by different authors, coelomic fluid contains fetal cells.97 These cells were initially used to determine fetal sex, sickle cell disease (SCD)101 and other single-gene defects, but a diagnosis was often impossible because of the high rate of maternal contamination. The main problems encountered during prenatal diagnosis using coelomic fluid are the following: - The number of fetal cells. Jouannic et al.102 reported a cell count variable from complete absence to more than 10,000 cells/mL. In our studies we observed a variable number of cells, but at least 15-20 fetal cells were always present in 1 mL of coelomic fluid. - Furthermore, Jouannic et al. reported that the amount of total DNA in the coelomic fluid was very low and variable, and often insufficient. The total concentration of amplified DNA using a real-time PCR assay ranged from undetectable to 777.5 ng/mL (median: 17.9 ng/mL).103 - It has been suggested that coelomic cells may arise from the villous mesenchymal core or from the yolk sac, including hematopoietic progenitors. However, the precise cellular origin of this fluid remains unknown.104 Nevertheless, Renda et al.105 demonstrated that most of the fetal cells in the coelomatic fluid are functional embryonic erythroid precursors, megaloblasts. - The presence of a variable number of maternal and fetal cells in the coelomatic fluid, determining contamination, was an obstacle to prenatal diagnosis. Contamination and the consequent potential misdiagnosis are considered the major risks of any diagnostic system, becoming more significant when the number of fetal cells decreases.
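A minimal sketch (with hypothetical peak areas) of the QF-PCR logic used to quantify contamination at an informative microsatellite, i.e. one where the mother carries an allele the fetus did not inherit: the maternal-specific peak can only come from contaminating maternal cells. This is deliberately simplified, since the shared maternal allele also contributes signal, so real workflows combine corrected estimates across several markers.

def maternal_specific_fraction(peak_areas, maternal_only_allele):
    # peak_areas: dict mapping allele size (bp) -> fluorescence peak area
    total = sum(peak_areas.values())
    return peak_areas.get(maternal_only_allele, 0) / total

# hypothetical marker: fetal alleles at 150/154 bp, maternal-only allele at 158 bp
areas = {150: 4200, 154: 3900, 158: 950}
print(f"{maternal_specific_fraction(areas, 158):.0%}")
# -> ~10%, i.e. near the purification threshold used in the protocol below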
In our center we have analyzed more than 100 coelomatic fluids and observed that about 10% of samples were without maternal contamination, while about 86% showed maternal contamination between 10% and 70% and 4% presented maternal contamination of more than 70%. The cellular composition of coelomatic fluid has made it possible to develop an antibody-based method to purify the samples. We were able to purify the contaminated samples using anti-CD45 and anti-CD105 MicroBeads (Miltenyi Biotec, Bergisch Gladbach, Germany) for negative selection, removing the principal cells of maternal origin (white blood cells, CD45+, and mesenchymal/endothelial cells, CD105+). 15

Analysis of coelomatic fetal cells
In order to establish a suitable protocol, preclinical experiments on coelomatic fluid were carried out in our laboratory and then validated before voluntary interruption of pregnancy. Despite the difficulties encountered in finding the best target cell and the best method for its enrichment and isolation, several attempts have been made in recent years to transfer the results of this research into clinical practice. The problems regarding contamination of coelomic samples suggested that contamination detection should be an integral part of coelomic fluid analysis in a prenatal diagnostic setting. 106 The parallel analysis of microsatellite allele sizes, compared to those of the parents, is strongly recommended to check for the presence of maternal contamination (Figure 4). Giambona et al. 15 suggested that at 7-9 weeks the use of protocols involving fluorescently labeled amplicons is also recommended, since these are much more sensitive for low DNA copy numbers, and QF-PCR is a suitable technique for the detection of maternal contamination in coelomatic fluid. In addition, the analysis of polymorphic markers linked to the disease-causing locus, within the same amplified fragment containing the point mutation, is recommended in order to support the accuracy of the genotyping result.

Laboratory method
More than 100 prenatal diagnoses for hemoglobinopathies have been performed in our lab using coelomatic fluid. 15
After one insertion of a 20 G needle into the coelomatic cavity, three samples of coelomatic fluid are aspirated into different syringes (0.2, 0.2, 0.6 mL). The first two samples are usually discarded because of the high chance of maternal contamination, while the third sample is analyzed. Fifty μL of the third sample is first used to evaluate the presence and the percentage of maternal contamination by QF-PCR (a worked sketch of this calculation follows below). If maternal cells are absent or present at a percentage lower than 10%, the whole sample is lysed and the β-globin gene is analyzed by direct sequencing. In case of higher contamination (more than 10%), the entire third sample is purified using anti-CD45 and anti-CD105 MicroBeads (Miltenyi Biotec). Any residual trace of maternal contamination is evaluated by QF-PCR after the negative selection of maternal cells.

Non-invasive prenatal diagnosis
The possibility of performing noninvasive DNA tests on the fetus, avoiding the risk of fetal loss due to invasive procedures, has long been sought. Both fetal cells and free fetal DNA allow the development of protocols for noninvasive prenatal diagnosis (NIPD) of single-gene disorders (and chromosome abnormalities in the case of fetal cells). From the 1960s a variety of approaches were investigated for the isolation of fetal nucleated cells from maternal plasma, but the complexity of the cell isolation process (due also to the low cell numbers) and the lack of reproducibility precluded the use of this approach in clinical practice. 107 Likewise, many studies have been performed to achieve NIPD by analysis of free fetal DNA (ffDNA) in the maternal blood circulation. This has been applied successfully for fetal sex determination, RhD genotyping, and trisomy 21 detection; cell-free fetal DNA is detectable very early during pregnancy, and the average amount of ffDNA during the first and second trimesters is ~10% of the total amount of cell-free DNA. 108 Recently, a method was developed for NIPD of β-thalassemia major and SCD that combines a pyrophosphorolysis-activated polymerization (PAP) assay and melting curve analysis (MCA); PAP is able to detect mutations in free fetal DNA in the highly contaminating environment of maternal plasma, while MCA is a helpful screening method to identify which of the SNPs are informative in the family. In contrast to other methods used for NIPD, the combined PAP and MCA analysis detecting the normal paternal allele is also applicable for couples at risk carrying the same mutation, provided that a previously born child is available for testing to determine the linkage to the paternal SNPs. 108
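To make the 10% decision threshold above concrete, the sketch below shows how a maternal-contamination percentage can be estimated at a single fully informative microsatellite locus, i.e., one where both maternal alleles differ from both fetal alleles. This is a hypothetical illustration: the locus name, allele labels, and peak areas are invented, and a real QF-PCR analysis combines several informative markers using dedicated fragment-analysis software.

```python
# Hypothetical illustration of a QF-PCR contamination estimate.
# Peak areas, allele names, and the locus are invented.

def maternal_contamination(peaks, maternal_alleles, fetal_alleles):
    """Estimate the maternal DNA fraction at one fully informative STR locus,
    where the two maternal alleles differ from both fetal alleles.
    `peaks` maps allele label -> fluorescence peak area."""
    maternal = sum(peaks.get(a, 0.0) for a in maternal_alleles)
    fetal = sum(peaks.get(a, 0.0) for a in fetal_alleles)
    total = maternal + fetal
    return maternal / total if total else float("nan")

# Invented example: coelomic fluid sample before antibody selection.
peaks = {"D21S11-28": 5200, "D21S11-30": 4900,   # fetal alleles
         "D21S11-31": 2100, "D21S11-33": 1900}   # maternal-specific alleles
mc = maternal_contamination(peaks,
                            maternal_alleles=("D21S11-31", "D21S11-33"),
                            fetal_alleles=("D21S11-28", "D21S11-30"))
print(f"estimated maternal contamination: {mc:.0%}")
# ~28% here, above the 10% threshold, so the sample would be purified
# with the anti-CD45/anti-CD105 MicroBeads before sequencing.
```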
Preimplantation and preconceptional genetic diagnosis
Preimplantation and preconceptional genetic diagnosis (PGD) is an established reproductive alternative to prenatal diagnosis, offered in quite a number of specialized centers throughout the world. It is used for monogenic diseases or known chromosomal disorders and can be applied to any genetic disease for which there is enough sequence information to facilitate the design of specific primers or probes. Preimplantation genetic diagnosis is performed either by biopsy of one to two blastomeres from eight-cell embryos after in vitro fertilization or by biopsy of trophectoderm cells from the blastocyst. Preconceptional genetic diagnosis is based on the analysis of the first polar body of unfertilized eggs followed by analysis of the second polar bodies after fertilization, which is performed to avoid misdiagnosis resulting from recombination during the first meiosis; diagnosis is obtained by multiplex nested PCR analysis to detect the mutations as well as polymorphic alleles at the β-globin gene cluster. 107

Conclusions
Advanced imaging techniques as well as molecular biology methods have opened up new perspectives in prenatal diagnosis of hemoglobinopathies and of a number of severe hereditary disorders. Improvements include anticipating the diagnosis earlier in pregnancy and decreasing the diagnostic time. Standard methods, based on molecular techniques, require a few days and a very small sample of fetal material. The ideal prenatal diagnostic method should have three important characteristics: i) diagnostic accuracy; ii) fetal and maternal safety; and iii) widespread applicability. Non-directive but thorough genetic counseling is essential in association with prenatal diagnosis procedures and, together with information and carrier screening, is one of the pillars of the prevention of genetic diseases. Parents should understand the severity of the disease and the details of the fetal analysis used for prenatal diagnosis (feasibility, risk for the fetus, failure, and possible misdiagnosis), and they should be able to take a well-informed reproductive decision in the event that the fetus is found to be affected. 109 The woman or the couple should take a decision according to their moral and/or religious feelings of responsibility. Religious leaders may oppose abortion of severely affected children or consider it an act of mercy, while public health authorities may consider it a valid alternative to reduce human suffering and public health expenses. 110,111 In several Muslim countries abortion is permissible up to 120 days of gestation to save the life of the mother or to prevent a severely affected child, while, for example, in Morocco, a country with a very high rate of illegal abortion, the intervention is punished by law unless the mother's life is in danger. 112,113
Furthermore, people who have made a decision should be supported, and they should not be discriminated against for their decision, whether terminating the pregnancy or giving birth to an affected child. If the pregnancy is continued, the best possible medical care should be provided after birth. Furthermore, the couple at risk should be informed about the possibility of carrying out HLA typing on fetal DNA to assess whether a previously born normal or heterozygous child is HLA-identical and suitable for bone marrow transplantation. Coelocentesis might be a realistic alternative to the standard invasive prenatal procedures. The reliability of this procedure was 96%, and ongoing studies will increase its feasibility even in cases where the sample of coelomatic fluid is very poor in fetal cells; the accuracy was 100%, without any discordance between results obtained on coelomatic fluid and controls by amniocentesis or on abortive tissue. The analysis is rapid, and within one working day it is possible to obtain the results of the prenatal diagnosis.

Techniques
- β-thalassemia mutations: sequencing analysis; ARMS; reverse dot-blot analysis; GAP-PCR or MLPA (β-gene deletions)
- α-thalassemia mutations: sequencing analysis; GAP-PCR; MLPA
- δβ-thalassemia deletions: GAP-PCR
- Hb Lepore: GAP-PCR
- Hb variants: sequencing analysis; restriction enzyme analysis; ARMS
ARMS, amplification refractory mutation system; GAP-PCR, gap-polymerase chain reaction; MLPA, multiplex ligation-dependent probe amplification.

The advantages of reverse dot-blot (RDB) analysis are that many samples can be amplified and screened simultaneously for a variety of mutations in a working day and that it is a nonradioactive method. Unlabeled amino-modified oligonucleotide ASO probes complementary to normal and mutant β-globin gene alleles are fixed to a nylon membrane strip and hybridized with non-radioactively labeled amplified genomic DNA. During the PCR reaction it is possible to label the DNA by adding biotin-16-dUTP instead of dTTP or by using 5'-modified primers conjugated with biotin. The protocol requires that the strips be hybridized individually with each denatured amplified DNA sample for about 45 min. The strips are then collectively washed and transferred to a buffer containing avidin-alkaline phosphatase for 30 min at room temperature. Color detection is performed using nitroblue tetrazolium salt/5-bromo-4-chloro-3'-indolylphosphate p-toluidine salt. The strategy of RDB is to screen in a single reaction the most common mutations present in a specific population; if no result is obtained, one needs to screen for other, less common mutations, either with a second membrane containing other probes or by direct sequencing. The major advantage of this method is the simultaneous screening of a large number of mutations and the use of nonradioactive material. The most critical point of this assay is the optimization of the hybridization conditions and washing temperature for each particular experiment. To date, several commercial kits to screen α- and β-globin gene mutations are available.
Figure 4. Contamination control by quantitative fluorescent polymerase chain reaction evaluation of the parents' alleles: A) father; B) coelomatic fluid before antibody selection; C) coelomatic fluid after antibody selection using anti-CD45 and anti-CD105 for negative selection; D) amniotic fluid; E) mother.
Figure 5. Progress in prenatal diagnosis of hemoglobinopathies.
Targeted Isolation of Antioxidant Constituents from Plantago asiatica L. and In Vitro Activity Assay
Plantago asiatica L. is widely distributed in Eastern Asia and is a commonly used drug in China, Korea, and Japan for diuretic and antiphlogistic purposes. The present study was performed to isolate antioxidant molecules based on the DPPH scavenging activity assay and to discover the bioactive compounds that contribute to the function of Plantago asiatica L. Each fraction was chosen for further isolation guided by the DPPH scavenging activity assay. Two potential bioactive molecules, aesculetin and apigenin, were then isolated and assessed for in vitro antioxidant activity in cells. Hydrogen-peroxide-induced oxidative stress led to decreased cell viability, impaired intercellular junctions, and damage to the cell membrane and DNA. Furthermore, aesculetin ameliorated the decreased cell viability induced by hydrogen peroxide via upregulation of antioxidant-related genes, and apigenin also protected against H2O2, mainly by improving the glutathione (GSH) antioxidant system, for example by increasing the activity of glutathione peroxidase (GPX) and glutathione reductase (GR) and the ratio of GSH to glutathione disulfide (GSSG). Above all, these findings suggest that aesculetin and apigenin may be bioactive compounds responsible for the antioxidant function of Plantago asiatica L.

Introduction
Oxidative stress is a consequence of an increased generation of free radicals and a reduced antioxidant defense against free radicals [1]. Oxidative stress can result in DNA damage, and oxidative DNA adducts such as 8-oxodG have been implicated in the tumorigenic process [2]. Antioxidant supplementation has become an increasingly popular practice to maintain body function [3]. However, several synthetic antioxidants such as butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT), and tert-butylhydroquinone (TBHQ) have been reported to be harmful at high levels in animal experiments, and TBHQ is restricted in some countries, such as Canada and Japan [4]. Plants are the richest sources of natural antioxidants. However, phytochemicals such as flavonoids are not easily absorbed, and higher concentrations of flavonoids are more likely to be achieved in the lumen of the gastrointestinal tract, where the flavonoids exert their antioxidant function [5]. The interaction between endogenous reactive oxygen species (ROS) and dietary antioxidants first takes place in the gastrointestinal tract [6]. There are several important aspects that should be carefully considered when it comes to the application of antioxidants, including their scavenging capacities, their possible role in the endogenous antioxidant network, and their bioavailability [7]. Plantago asiatica L. is widely distributed in Eastern Asia and a commonly used drug in China, Korea, and Japan for diuretic and antiphlogistic purposes [8]. Phytochemical studies have shown that the Plantago genus contains a great number of natural products such as iridoids, flavonoids, and tannins. The absorbance of the different extracts between 0 and 1000 nm was scanned (Figure 1). The ethyl acetate and ethyl acetate:methanol (10:1) extracts showed a similar absorbance pattern.
Similarly, in another study aimed at characterizing the different polarity extracts obtained from Plantago major, the methanol and ethyl acetate extracts had higher phenol concentrations than the dichloromethane and hexane extracts, and only the ethyl acetate extract had the highest flavonoid concentration, including gallic acid, luteolin, quercetin, catechin, and galangin [12]. Thus, the ethyl acetate extract was further analyzed, and apigenin (P2) was isolated from it. From these observations and through comparison with literature NMR data [14], we concluded that P2 was apigenin.

Hydrogen-Peroxide-Induced Oxidative Stress in Caco-2 Cells
Hydrogen peroxide (H2O2) above 150 μM significantly decreased cell viability, and 1000 μM H2O2 further reduced cell viability to about 50% (Figure 2A). A similar result, in which 1 mM H2O2 applied for 6 h significantly reduced the viability of Caco-2 cells, has been observed [15]. Additionally, treatment with 0.8 mM H2O2 for 24 h led to increased release of lactate dehydrogenase (LDH), an indicator of cell membrane injury, in IPEC-J2 cells [16]. LDH is a stable intracellular enzyme which can be released into the cell culture medium upon damage to the plasma membrane [17]. The leakage of LDH under H2O2 treatment was measured in this study (Figure 2B). A total of 250-2000 μM H2O2 significantly increased the LDH level in the culture media, which indicated that H2O2 can lead to cell membrane damage. Transepithelial electrical resistance (TEER) is regarded as an indicator of monolayer integrity and paracellular permeability [18]. In Figure 2C, 250-1000 μM H2O2 decreased the TEER after 4 h of hydrogen peroxide treatment. Consistent with our experiments, a previous study showed that treatment with 500 μM H2O2 for 6 h also caused a significant decrease in TEER in Caco-2 cell monolayers [19]. ROS can be tumorigenic by inducing DNA damage, leading to a genetic lesion that initiates tumorigenicity [20]. Moreover, as shown in Figure 2D, 1000 μM H2O2 significantly increased the tail DNA percentage, tail moment, and olive tail moment, which were measured by comet assay, as shown in Figure 2E. Similarly, in another study, DNA damage as measured by comet assay was significantly higher at concentrations >100 μM H2O2 than in the control in Caco-2 cells [21]. Above all, H2O2 treatment led to decreased cell viability, which may result from impaired cell membranes, damaged intercellular junctions, and DNA damage. A concentration of 1000 μM H2O2 was selected for the following study to induce cell oxidative stress.
Aesculetin and Apigenin Ameliorated Oxidative Damage Induced by Hydrogen Peroxide through Different Mechanisms
Aesculetin significantly ameliorated the decreased cell viability caused by hydrogen peroxide (Figure 3A). Nuclear factor-E2-related factor 2 (Nrf2) is a transcription factor that is sensitive to oxidative stress and promotes the transcription of a wide variety of antioxidant genes, such as superoxide dismutase (SOD), catalase (CAT), glutathione S-transferase (GST), heme oxygenase (HO-1), gamma-glutamylcysteine synthase (γ-GCS), and glutathione peroxidase (GPX) [22]. In our study, aesculetin at 100 and 300 μg/mL dramatically enhanced the mRNA expression of Nrf2 and its downstream genes SOD, CAT, and GCS compared to H2O2 treatment, while H2O2 treatment only slightly increased the mRNA level of Nrf2 (Figure 3B). Several kinds of natural and synthetic compounds have been reported to activate the Nrf2/Keap1/ARE system: (1) diphenols, quinones, and phenylenediamines; (2) natural components from plants, such as curcumin, resveratrol, luteolin, and quercetin; (3) hydrogen peroxide and 4-tert-butyl hydrogen peroxide; and (4) components rich in trace elements such as selenium, arsenic, and other substances [23]. Therefore, aesculetin, with its ortho-hydroxyl structure, can further activate Nrf2 and enhance the transcription of SOD, CAT, and GPX, as observed in this experiment. H2O2 significantly increased the activity of glutathione peroxidase but inhibited the activity of glutathione reductase (GR) (Figure 3C,D). However, aesculetin did not show any significant effect on the activity of GPX or GR.
Apigenin is a flavone that exists widely in many fruits and vegetables such as onions, oranges, and tea [24]. Apigenin at 125 and 250 μg/mL significantly ameliorated the decreased cell viability caused by hydrogen peroxide (Figure 4A). In a hydrogen-peroxide-induced oxidative stress model in the MC3T3-E1 mouse osteoblastic cell line, pretreatment of cells with apigenin attenuated the reduced cell viability and upregulated the gene expression of SOD1, SOD2, and GPX [24]. However, in our study apigenin reversed the increased mRNA expression of Nrf2 and GCS caused by H2O2 and decreased the mRNA expression of CAT, but enhanced the transcription level of SOD (Figure 4B). SOD catalyzes the dismutation of O2^- to H2O2 [25]. Additionally, apigenin further increased the activity of GPX, which was slightly enhanced by H2O2 (Figure 4C). Different changes in GPX and CAT activity have been observed in a similar cell model: GPX activity increased with higher H2O2 concentrations, while CAT activity remained constant under different H2O2 treatments, which indicated that GPX was more active than CAT in scavenging H2O2 [21]. CAT plays an important role as a primary defense enzyme against H2O2 at low concentrations, but at higher concentrations of H2O2, GPX works as the primary defense enzyme against oxidative damage [21]. Moreover, apigenin reversed the decreased activity of GR caused by H2O2 (Figure 4D). GPX and GR are two key enzymes involved in the glutathione (GSH) redox cycle, where GPX uses GSH to reduce organic peroxides and H2O2, and GR reduces glutathione disulfide (GSSG) to GSH in a nicotinamide adenine dinucleotide phosphate (NADPH)-dependent manner [25]. Since GPX and GR activity were increased by apigenin compared to the H2O2-treated group, the intracellular GSH level was examined in the following study.
Glutathione (GSH) is at the heart of one of the most important cellular antioxidant systems; it is capable of scavenging reactive oxygen species (ROS) and contributes to maintaining redox homoeostasis [26]. H2O2 significantly decreased the total GSH, which was further decreased by apigenin (Figure 5A). The biosynthesis of GSH is catalyzed by the action of two ATP-dependent enzymes, γ-glutamylcysteine synthetase (γ-GCS) and glutathione synthase (GS); GCS catalyzes the formation of γ-glutamylcysteine from glutamate and cysteine in the presence of ATP, which is the rate-limiting step in the biosynthesis [27]. Similarly, the oxidized dimer of GSH (GSSG), decreased by H2O2, was further reduced by apigenin (Figure 5B). The decreased total GSH may have resulted from the downregulation of GCS transcription. In cells, GSH can be regenerated from GSSG by GR, and GR is responsible for maintaining the supply of reduced GSH [26]. The ratio of GSH/GSSG has been regarded as an index of oxidative stress [28]. Notably, apigenin alleviated the decrease in the ratio of GSH to GSSG caused by H2O2 (Figure 5C), which was related to the increased mRNA expression of GR.
Free Radical Scavenging Ability on DPPH and Absorbance Spectrum
DPPH scavenging activity was determined according to the method reported by Brand-Williams [29], with some modifications. A total of 25 mg DPPH was dissolved in 50 mL of 80% ethanol as a stock solution. The DPPH stock solution was diluted with 80% ethanol at a ratio of 1:2 (v/v) before measurement to prepare the DPPH working solution. Each sample was serially diluted 1:1 to give 11 concentrations. After dilution, 100 µL of solution at each concentration, or 80% ethanol (as blank), was added to a flat-bottomed 96-well plate. Then, 100 µL of DPPH working solution was added to each well, and the absorbance was determined at 517 nm after reaction at room temperature in the dark for 30 min. The elimination percentage (E%) of DPPH at the steady state was determined using the following equation: E% = (Abs_blank − Abs_sample)/Abs_blank × 100. EC50, the concentration of a sample discoloring 50% of the DPPH, was calculated with GraphPad Prism (Version 7.04, GraphPad Software, San Diego, CA, USA, www.graphpad.com). The absorbance of the different extracts at the same concentration between 0 and 1000 nm was scanned with a Microplate Reader (Spectra Max i3x, Molecular Devices, San Jose, CA, USA).
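As a minimal sketch of the E% and EC50 calculations just described, the following snippet applies the formula above to one dilution series and fits a two-parameter log-logistic (Hill-type) curve. The absorbance values and concentrations are invented, and the original analysis used GraphPad Prism rather than this code.

```python
import numpy as np
from scipy.optimize import curve_fit

abs_blank = 0.92                                            # DPPH + 80% ethanol only
conc = np.array([500, 250, 125, 62.5, 31.25, 15.6, 7.8])    # ug/mL, 1:1 dilutions
abs_sample = np.array([0.10, 0.18, 0.33, 0.52, 0.68, 0.80, 0.87])

# E% = (Abs_blank - Abs_sample) / Abs_blank * 100
e_pct = (abs_blank - abs_sample) / abs_blank * 100

def hill(c, ec50, n):
    """Two-parameter log-logistic curve, response between 0 and 100%."""
    return 100.0 * c**n / (ec50**n + c**n)

(ec50, n), _ = curve_fit(hill, conc, e_pct, p0=(60.0, 1.0))
print(f"EC50 ~ {ec50:.1f} ug/mL (Hill slope {n:.2f})")
```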
Cell Culture and H2O2 Exposure
Caco-2 cells were purchased from the Institute of Animal Science of CAAS (Beijing, China). Caco-2 cells were cultured in DMEM supplemented with 10% FBS, 1% penicillin, and streptomycin at 37 °C in humidified air containing 5% CO2. Caco-2 cells (2 × 10^4/well) grown for 24-48 h were allowed to attach to the culture plate before being pretreated with aesculetin or apigenin overnight. Cells were then treated with 1 mM H2O2 in DMEM without FBS for 2 h. The culture media and cells were collected for the further measurements.

Cell Viability and LDH Assay
Cell viability was measured using a CCK-8 kit in accordance with the manufacturer's instructions (Dojindo, Kumamoto, Japan). Briefly, Caco-2 cells were seeded in a 96-well plate at 2 × 10^4 cells per well and cultured overnight. Cells were pretreated with the compounds isolated from Plantago asiatica L. for 18 h after plate attachment, and then treated with H2O2 for 2 h. The culture media was collected for the LDH (lactate dehydrogenase) assay and replaced by 200 µL DMEM supplemented with 10% CCK-8 per well, and the plates were incubated at 37 °C for 2 h. Afterward, the absorbance was measured at 450 nm on a Microplate Reader (Spectra Max i3x, Molecular Devices, USA). The LDH activity in the culture media was measured, using a commercial kit obtained from the Nanjing Jiancheng Bioengineering Institute (Nanjing, China), based on the reaction between LDH and lactic acid, which generates pyruvic acid; pyruvic acid turns brown in an alkaline environment with 2,4-dinitrophenylhydrazine.

Measurement of Transepithelial Electrical Resistance (TEER)
TEER was measured based on the method reported by Shao et al. [30]. Briefly, Caco-2 cells were grown on a Transwell filter, and the transepithelial electrical resistance was monitored daily before differentiation by use of a Millicell Electrical Resistance System-2 (Millipore Corp., Bedford, MA, USA) and expressed as Ω × cm².

Comet Assay
The comet assay, namely the single-cell gel electrophoresis assay, is a relatively convenient and sensitive technique for the analysis of DNA breakage in individual cells and is commonly used for the investigation of antioxidants in intervention studies. DNA strand breaks were determined by comet assay according to the method reported by Fernández-Blanco, with some modifications [31]. Briefly, Caco-2 cells (2.0 × 10^5 cells/well) were seeded in a 6-well plate and grown for 42 h. Cells were then treated with 0-1 mM H2O2 for 2 h and suspended in prewarmed low-melting-point agarose. The suspension was rapidly transferred to a slide precoated with agarose and covered with a coverslip. The coverslip was removed after gelling for 10 min at 4 °C, and a second layer of low-melting-point agarose was added and allowed to gel for 10 min at 4 °C. Slides were then placed in lysis buffer (2.5 M NaCl, 100 mM Na-EDTA, 10 mM Tris, 250 mM NaOH, 10% DMSO, and 1% Triton X-100) for 30 min at 4 °C and, after removal of the residual lysate, incubated in fresh electrophoresis buffer (300 mM NaOH, 1 mM Na-EDTA) for 20 min to unwind the DNA. After electrophoresis for 40 min (25 V, 300 mA), slides were washed with neutralization buffer (0.4 M Tris, pH 7.5) three times. Slides were stained with 500 µL PI (20 µg/mL), covered with a coverslip, and kept at 4 °C for 40 min. Slides were visualized under a fluorescence microscope (Leica universal microscope). At least 30 randomly selected single cells were analyzed with the Comet Assay Software Project (http://casplab.com/). The DNA damage in cells was expressed as the percentage of total DNA content in the tail, the tail moment, and the olive tail moment (tail moment = tail length × tail DNA%; olive tail moment = tail DNA% × (TailMeanX − HeadMeanX), i.e., the percent of DNA in the tail multiplied by the distance between the center of gravity of DNA in the tail and the center of gravity of DNA in the head in the x-direction).

Transcription Level Analysis by RT-PCR
Briefly, cells seeded in a 6-well plate were collected after the treatments. Total RNA extraction was carried out with an Eastep Super Total RNA Extraction Kit (Promega Co., Shanghai, China). RNA quantity was measured by Nanodrop at 260 and 280 nm. Total RNA was then reverse-transcribed into cDNA with a PrimeScript RT reagent Kit with gDNA Eraser (Perfect Real Time) (Takara, Japan), and gene expression was determined with SYBR Premix Ex Taq (Tli RNaseH Plus, Takara, Japan) in accordance with the manufacturer's protocol. Gene primers are reported in Table 2, and 2^(-ΔΔCt) was calculated to express the gene expression level.
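A minimal sketch of the 2^(-ΔΔCt) calculation mentioned above follows. The Ct values are invented, and the choice of reference gene here is an assumption (the actual primers and reference gene are those reported in Table 2).

```python
# 2^(-ddCt) relative-expression calculation; all Ct values are invented.

def rel_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^(-ddCt): fold change of a target gene versus the control group,
    normalized to a reference gene."""
    d_ct = ct_target - ct_ref                  # normalize the treated sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl   # normalize the control sample
    return 2.0 ** (-(d_ct - d_ct_ctrl))

# Hypothetical example: Nrf2 in aesculetin + H2O2 cells vs untreated control,
# with an assumed housekeeping reference gene.
fold = rel_expression(ct_target=24.1, ct_ref=16.0,
                      ct_target_ctrl=26.0, ct_ref_ctrl=16.2)
print(f"Nrf2 fold change: {fold:.2f}")  # ~3.2-fold up-regulation
```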
Measurement of Antioxidant Enzymes
Glutathione peroxidase (GPX) and glutathione reductase (GR) activities were analyzed using commercial kits (Nanjing Jiancheng Bioengineering Institute, China). Briefly, the activity of GPX was determined based on the reaction between GSH and 5,5'-dithiobis(2-nitrobenzoic acid) (DTNB). The activity of GR was detected via the change in NADPH accompanying the transformation of GSSG to GSH under catalysis by GR. Total GSH and reduced GSH were determined using a commercial GSH and GSSG kit (Beyotime Biotechnology Co., Beijing, China). Briefly, DTNB and NADPH are converted to 5'-thionitrobenzoic acid (TNB) and NADP+ under catalysis by GR in the presence of GSSG and GSH; there is a positive correlation between the generation of TNB and the total glutathione content.

Statistical Analysis
Statistical analysis was carried out using SPSS version 15, and data are expressed as mean ± SD. The differences between groups were analyzed with one-way ANOVA, and p < 0.05 was considered statistically significant.

Conclusions
The results of the present study indicate that aesculetin and apigenin isolated from Plantago asiatica L. can ameliorate the Caco-2 cell damage caused by H2O2. Aesculetin protected cells from oxidative damage by activating Nrf2 and its downstream genes such as SOD, CAT, and GCS and by increasing the activity of GPX, thereby enhancing the intracellular antioxidant defense system. Apigenin exerted its protection against H2O2 mainly by improving the GSH antioxidant system, for example by increasing the activity of GPX and GR and the ratio of GSH/GSSG. These findings suggest that aesculetin and apigenin may be the bioactive substances responsible for the antioxidant function of Plantago asiatica L.
Topologically nontrivial solution in Einstein-Dirac gravity on the Hopf bundle
A topologically nontrivial solution in Einstein-Dirac gravity with a cosmological constant is obtained. The spacetime has the Hopf bundle as a spatial section. It is shown that the Hopf invariant is related to the spinor current density. Two Dirac spinors are used to obtain a diagonal energy-momentum tensor. Solutions of the nongravitating Dirac equation on the background of a Lorentzian spacetime with the Hopf bundle as a spatial section are also obtained. The nongravitating solutions of the Dirac equation are defined by two half-integer quantum numbers $m, n$.

I. INTRODUCTION
In general relativity, there are a lot of solutions with gravitating fundamental fields: scalar, electromagnetic, and non-Abelian fields. However, very little is known about solutions in Einstein-Dirac gravity. For example, we do not know any asymptotically flat solution with a gravitating spinor field. Perhaps the problem here is related to the fact that the spinor field has a spin. As a result, the energy-momentum tensor for a spinor field has nondiagonal components, and not only $T_{t\varphi}$, as happens for the Kerr metric. Gravitating spinors with nonlinear self-interactions are well investigated in cosmology [1]-[5]. These papers study the role of a spinor field in the evolution of the anisotropic Universe described by the Bianchi type VI, VI$_0$, V, III, I, or isotropic Friedmann-Robertson-Walker (FRW) models. In Refs. [6] and [7] models of the Universe with tachyonic and fermionic fields interacting through a Yukawa-type potential are investigated. In Ref. [8] a class of exact cosmological solutions with a neutral scalar field and a Majorana fermion field is found. A Dirac spinor in $D = 3$ dimensions coupled to topologically massive gravity is investigated in [9]. In Ref. [10] a mechanism whereby quantum oscillations of the Dirac wave functions can prevent the formation of the big bang or big crunch singularity is analysed. A simpler problem is that of looking for solutions on the background of a curved spacetime. In the textbook [11] Dirac's equation on the background of the Kerr geometry is considered. In Ref. [12] topologically trivial solutions of the Dirac equation on a 3D sphere $S^3$ are obtained. Here we will consider two topologically nontrivial Dirac spinors coupled to Einstein gravity with a cosmological constant. The 3D spatial section of the spacetime metric is a Hopf bundle. The topological nontriviality means that the current of the spinor field is connected with the Hopf invariant. We consider two Dirac spinors. The energy-momentum tensor of each spinor field has nondiagonal components, which is related to the fact that the spinor has spin. For our choice of spinors the total energy-momentum tensor has diagonal components only. We will also investigate the nongravitating Dirac equation. Some special solutions will be found for the case when the Dirac spinor can be decoupled into Weyl spinors (this is the case when the mass of the spinor is zero, $m = 0$). A numerical solution for nonzero mass $m \neq 0$ will also be presented.

III. THE SOLUTION
We seek the solution of the Einstein-Dirac equations (1) and (2) in the form (10), where the space metric (12) is the metric on the Hopf bundle and $r$ is the radius of the 3D sphere.
The Hopf bundle can be presented as an $S^3$ sphere with the topological mapping $S^3 \to S^2$, where the fibre $S^1$ is spanned by the coordinate $\psi$, and the metric on the base $S^2$ of the bundle is $dl^2 = (r/2)^2 \left( d\theta^2 + \sin^2\theta \, d\varphi^2 \right)$. In order to calculate all these quantities, we have to define the tetrad $e^a_{\ \mu}$ for the metric (10).

A. Dirac equations on the Hopf bundle
The spinors (11) transform in a definite way under a rotation by the angle $2\pi$. Taking into account the exponents $e^{i n_{1,2} \chi}$, $e^{i m_{1,2} \varphi}$ from (11), we see that the numbers $m_{1,2}, n_{1,2}$ should satisfy a quantization condition involving an integer $N$. After substituting the Ansatz (11) into the Dirac equation (2), we obtain a set of equations in which, for brevity, we have omitted the index: $\Omega = \Omega_1$, $m, n = m_1, n_1$; $\bar{\Omega} = r\Omega$ and $\bar{\mu} = r\mu$. For $\Sigma_{1,2,3,4}$ we have the same equations. Some special solutions of this set of equations are presented in Appendixes A and B.

B. Energy-momentum tensor for massless spinor fields, m = 0
In order to solve the Einstein-Dirac equations (1) and (2), we use the following Ansätze for the spinors $\psi_{1,2}$.

C. The solution of the Einstein-Dirac equations
The Einstein tensor $G_{ab} = R_{ab} - \frac{1}{2}\eta_{ab} R$ for the tetrad (13) is given in (25). In order to have the same structure on the right-hand side of the Einstein equations as in (25), we have to use the sum of the energy-momentum tensors from (21) and (22). In this case the Einstein equations can be written in dimensionless form, where $\bar{\Theta} = \bar{\Theta}_{1,2}$ and $l_{Pl}$ is the Planck length. The solution of these equations in dimensionless form, $\bar{\Theta} \, l_{Pl}^{3/2} = \frac{1}{32\pi} (\ldots)$, is given in (28) and (29). The solution of the Einstein-Dirac equations (1) and (2) is given by (31)-(33), with $\bar{\Theta}$, $\epsilon$ from (28) and (29); here $r$ is the radius of the 3D sphere that is the total space of the Hopf bundle.

D. Current and the Hopf invariant
The covariant current for the spinor $\psi_1$ is defined in the standard way, and calculations with the spinor (11) yield the expression (36). A similar result can be obtained for the spinor $\psi_2$ by changing $\Theta \to \Sigma$; for the solution (32) this current can be evaluated explicitly. The Hopf bundle with the metric (12) has the Hopf invariant defined by (37), where $\Upsilon$ is a 1-form which can be related to the spatial part of the covariant current (36), and $V = \sin\theta \, d\chi \, d\theta \, d\varphi$ is the volume form of the $S^3$ sphere of unit radius. We construct the 1-form $\Upsilon$ from the spatial part of the current (36), with $i = \chi, \theta, \varphi$. Substitution of $\Upsilon$ into (37) gives the Hopf invariant (39). We see that the topological nontriviality of the solution (32) is connected with the Hopf invariant (39).
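As an aside, a Hopf-type integral of the form $\int \Upsilon \wedge d\Upsilon$ can be checked symbolically for the standard connection 1-form on the Hopf bundle, $\Upsilon = d\chi + \cos\theta \, d\varphi$, which serves here only as a stand-in for the current-derived $\Upsilon$ of the text (the normalization in (37) is not reproduced). With $d\Upsilon = -\sin\theta \, d\theta \wedge d\varphi$, one has $\Upsilon \wedge d\Upsilon = -\sin\theta \, d\chi \wedge d\theta \wedge d\varphi$, and integrating over $\chi \in [0, 4\pi]$, $\theta \in [0, \pi]$, $\varphi \in [0, 2\pi]$ gives $-16\pi^2$, a nonzero multiple of the unit Hopf charge.

```python
import sympy as sp

chi, th, ph = sp.symbols('chi theta phi')

# Stand-in 1-form on the Hopf bundle: Upsilon = dchi + cos(theta) dphi.
# Then dUpsilon = -sin(theta) dtheta ^ dphi, so
# Upsilon ^ dUpsilon = -sin(theta) dchi ^ dtheta ^ dphi.
integrand = -sp.sin(th)

# Integrate over the full S^3 coordinate ranges:
# chi in [0, 4*pi] (Hopf fibre), theta in [0, pi], phi in [0, 2*pi].
I = sp.integrate(integrand, (chi, 0, 4*sp.pi), (th, 0, sp.pi), (ph, 0, 2*sp.pi))
print(I)  # -16*pi**2: nonzero, i.e. a unit Hopf charge up to sign/normalization
```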
E. Comparison with the Taub-NUT and Friedmann solutions
Let us compare the solution (31)-(33) with the Taub-NUT solution, where $m, l$ are constants. The Taub-NUT metric describes an empty spacetime with a nontrivial topology: the spatial section is a 3D sphere $S^3$ that is the total space of the Hopf bundle. The 2D space with the metric $d\theta^2 + \sin^2\theta \, d\varphi^2$ is the base of the bundle, and the coordinate $\chi$ is spanned along the fibre $S^1$. We can interpret the solution (28), (29), (31)-(33) as the Taub-NUT spacetime filled with the spinor fields $\psi_{1,2}$ satisfying the Dirac equations (2). Physically, this means that the empty, topologically nontrivial Taub-NUT spacetime with the Hopf bundle as the spatial section has a nontrivial evolution in time; but if we fill the spacetime with the two spinors $\psi_{1,2}$ and a cosmological constant, then the spacetime becomes static. The FRW metric is (42), where $dS_3^2$ is the metric on a 3D sphere $S^3$; $dS_3^2$ can be written in the standard way, $dS_3^2 = d\psi^2 + \sin^2\psi \left( d\theta^2 + \sin^2\theta \, d\varphi^2 \right)$, or as the metric (12) on the Hopf bundle. The metric (42) describes a Friedmann Universe filled with matter. The Universe is not static and evolves from an initial singularity to its maximum size and then to the final singularity. As we see from the solution (28), (29), (31)-(33), if the Friedmann Universe is filled with the two spinor fields $\psi_{1,2}$ and the cosmological constant instead of matter, then the Universe becomes static, without any evolution in time.

IV. DISCUSSION AND CONCLUSIONS
We have considered Einstein-Dirac gravity with a cosmological constant and obtained a topologically nontrivial solution for two gravitating Dirac spinors. We have used two spinors and a special choice of the quantum numbers $m, n$ in order to obtain a diagonal energy-momentum tensor. The diagonal energy-momentum tensor allows us to derive the required solution, which:
• is not asymptotically flat;
• has no event horizon;
• is topologically nontrivial, since it is defined on the Hopf bundle and the Hopf invariant is related to the current of the spinor field;
• cannot describe any quantum particle because of the nonlinearity of the Einstein-Dirac equations: the spinor cannot be normalized to unity;
• can be regarded as the Taub-NUT spacetime filled with the spinor fields + Λ;
• can be regarded as a Friedmann Universe filled with two spinor fields (instead of matter), without time evolution.
Macromolecular Composition Dictates Receptor and G Protein Selectivity of Regulator of G Protein Signaling (RGS) 7 and 9-2 Protein Complexes in Living Cells
Background: RGS7 and RGS9-2 regulate G protein signaling in the striatum, but the selectivity of their action is largely unknown. Results: RGS protein complexes show distinct patterns of receptor and G protein selectivity. Conclusion: Macromolecular composition dictates receptor and G protein selectivity of the RGS7 and RGS9-2 protein complexes. Significance: These data demonstrate novel mechanisms contributing to the regulation of striatal G protein signaling.
Regulator of G protein signaling (RGS) proteins play essential roles in the regulation of signaling via G protein-coupled receptors (GPCRs). With hundreds of GPCRs and dozens of G proteins, it is important to understand how RGS proteins regulate selective GPCR-G protein signaling. In neurons of the striatum, two RGS proteins, RGS7 and RGS9-2, regulate signaling by the μ-opioid receptor (MOR) and the dopamine D2 receptor (D2R) and are implicated in drug addiction, movement disorders, and nociception. Both proteins form trimeric complexes with the atypical G protein β subunit Gβ5 and a membrane anchor, R7BP. In this study, we examined the GTPase-accelerating protein (GAP) activity as well as the Gα and GPCR selectivity of RGS7 and RGS9-2 complexes in live cells using a bioluminescence resonance energy transfer-based assay that monitors dissociation of G protein subunits. We show that RGS9-2/Gβ5 regulated both Gi and Go, with a bias toward Go, but RGS7/Gβ5 could serve as a GAP only for Go. Interestingly, R7BP enhanced the GAP activity of RGS7 and RGS9-2 toward Go and Gi and enabled RGS7 to regulate Gi signaling. Neither RGS7 nor RGS9-2 had any activity toward Gz, Gs, or Gq in the absence or presence of R7BP. We also observed no effect of the GPCRs (MOR and D2R) on the G protein bias of the R7 RGS proteins. However, the GAP activity of RGS9-2 showed a strong receptor preference for D2R over MOR. Finally, RGS7 displayed an ~4-fold greater GAP activity relative to RGS9-2. These findings illustrate the principles involved in establishing the G protein and GPCR selectivity of striatal RGS proteins.
Signal transduction through G protein-coupled receptors (GPCRs) regulates fundamental processes in the nervous system, including neuronal excitability and neurotransmitter release (1). GPCRs activate heterotrimeric G proteins, which in turn engage a wide range of intracellular effectors to produce a cellular response. Activation of G proteins entails their binding to GTP and the resulting dissociation into Gα-GTP and Gβγ subunits. The extent and duration of signaling in GPCR pathways is critically controlled by the regulator of G protein signaling (RGS) proteins that limit G protein activity (2,3). RGS proteins bind directly to activated Gα and facilitate GTP hydrolysis, thus serving as GTPase-accelerating proteins (GAPs). In humans, 17 Gα, about 865 GPCR, and ~30 RGS genes have been identified (4-6). With most of these components expressed in the nervous system, this forms a formidable array of possible combinations. However, activation of individual GPCR pathways often produces unique cellular and behavioral responses. Understanding the mechanisms of this signaling selectivity is one of the biggest challenges in studying neuronal GPCR pathways.
Neurons of the striatum, a nucleus that plays a major role in reward behavior and motor control, express a number of GPCRs that respond to many neurotransmitters, including dopamine, opioids, serotonin, and acetylcholine (7,8). Several studies have demonstrated that the long splice isoform of RGS9 (RGS9-2) serves as a critical GAP in these neurons (9,10). In particular, RGS9-2 has been shown to regulate signaling downstream of D2 dopamine (D2R) and μ-opioid (MOR) receptors and has been implicated in drug addiction and movement disorders (11)(12)(13)(14)(15). However, no studies have directly examined the impact of RGS9-2 on G protein dynamics activated by D2R and MOR. Recent behavioral studies implicated another RGS protein in the striatum, RGS7, in controlling the effects of addictive drugs and suggested that it may be differentially involved in controlling MOR and D2R signaling (16). RGS9-2 and RGS7 share extensive homology in their macromolecular organization. In addition to the catalytic RGS domain, they possess the N-terminal Dishevelled, EGL-10, Pleckstrin (DEP) domain with its helical extension module and a G protein γ subunit-like domain (10). The G protein γ subunit-like domain forms a constitutive complex with the atypical G protein β subunit Gβ5 (17,18), and both RGS7 and RGS9-2 exist as obligatory dimers with Gβ5 (19). In the striatum, RGS/Gβ5 dimers associate with the membrane anchor R7BP (20), which recruits them to the plasma membrane and potentiates their GAP activity (21,22). What remains completely unexplored is the relative activity and selectivity of the RGS7 and RGS9-2 complexes, as well as the role of R7BP in this process. In this study, we examined the ability of the RGS7/Gβ5 and RGS9-2/Gβ5 complexes to regulate G protein signaling by MOR and D2R in the native environment of living cells using a bioluminescence resonance energy transfer (BRET)-based assay. We report marked differences in the catalytic activity of the complexes as well as in their G protein and GPCR selectivity, depending on their macromolecular composition. Our findings illustrate mechanisms for establishing the selective regulation of striatal GPCR signaling pathways.

Fast Kinetic BRET Assay
Agonist-dependent cellular measurements of BRET between masGRK3ct-Rluc8 and Gβ1γ2-Venus were performed to visualize G protein signaling in living cells, as described previously, with slight modifications (25). 16 to 24 h post-transfection, HEK293T/17 cells were washed once with PBS containing 5 mM EDTA (EDTA/PBS) and detached by incubation in EDTA/PBS at room temperature for 10 min. Cells were harvested by centrifugation at 500 × g for 5 min and resuspended in PBS containing 0.5 mM MgCl2 and 0.1% glucose (BRET buffer). Approximately 50,000-100,000 cells/well were distributed in 96-well flat-bottomed white microplates (Greiner Bio-One). The Rluc substrate, coelenterazine-h (Nanolight Technologies), was dissolved in acidified alcohol at a final concentration of 5 mM and stored at -20 °C. Acidified alcohol was prepared by adding 200 μl of 3 N HCl to 10 ml of ethanol. Aliquots were dissolved in BRET buffer immediately before use and added to the cell suspension at a final concentration of 5 μM. BRET measurements were made using a microplate reader (POLARstar Omega, BMG Labtech) equipped with two emission photomultiplier tubes, allowing us to detect the two emissions simultaneously with the highest possible resolution of 50 ms for every data point. All measurements were performed at room temperature.
The BRET signal is determined by calculating the ratio of the light emitted by Gβ1γ2-Venus (535 nm) over the light emitted by masGRK3ct-Rluc8 (475 nm). The average baseline value (basal R) recorded prior to agonist stimulation is subtracted from the BRET signal values, and the resulting difference (ΔR) is normalized against the maximal ΔR value (Rmax) recorded upon agonist stimulation. The rate constants (1/τ) of the activation and deactivation phases were obtained by fitting a single exponential curve to the traces. kGAP rate constants were determined by subtracting the basal deactivation rate (kapp) from the deactivation rate measured in the presence of exogenous RGS protein. The obtained kGAP rate constants were used to quantify GAP activity (see the sketch after this section).

Western Blotting
For each sample, ~5,000,000 cells were lysed in 500 μl of sample buffer (125 mM Tris (pH 6.8), 4 M urea, 4% SDS, 10% 2-mercaptoethanol, 20% glycerol, 0.16 mg/ml bromphenol blue). Western blot analysis of proteins was performed following SDS-PAGE. Blots were blocked with 5% skim milk in PBS containing 0.1% Tween 20 (PBST) for 30 min at room temperature, followed by a 90-min incubation with specific antibodies diluted in PBST containing 1% skim milk. Blots were washed in PBST and incubated for 45 min with a 1:10,000 dilution of secondary antibodies conjugated with horseradish peroxidase in PBST containing 1% skim milk. Proteins were visualized on x-ray film with SuperSignal West Femto substrate (Pierce). Band densities were quantified using ImageJ software by measuring the integrated intensity. The relative expression level of RGS proteins was determined by subtracting the background densities in the absence of exogenous RGS proteins and normalizing the resulting value as a fraction of the brightest band intensity, i.e., that expressing the maximal amount of RGS protein.

CRE-Luciferase Reporter Gene Assays
HEK293T/17 cells were transfected with the CRE-luc2P reporter (Promega), MOR, Gαi1, Gβ1 and Gγ2, or Gβ5 with or without R7 RGS at a 1:1:1:1:1:1:1 ratio between cDNA constructs using Lipofectamine LTX reagent in a 96-well plate. 16 h after transfection, cells were treated with 50 nM isoproterenol together with serial doses of morphine for 5 h. The level of expressed luciferase was determined using a Bright-Glo luciferase assay kit (Promega) according to the instructions of the manufacturer.

Statistical Analysis
Linear regression was used to relate the kGAP value to the expression level of RGS7 or RGS9-2 proteins. To compare the activities of the RGS proteins on Go versus Gi (Fig. 5) and their activities in the absence versus presence of R7BP (Figs. 6 and 7), the differences in the slopes of the regression lines were evaluated by calculating a p value (two-tailed) testing the null hypothesis that the slopes are all identical using GraphPad Prism 5. Kruskal-Wallis one-way analysis of variance on ranks followed by Tukey's post hoc test was performed to compare kGAP values in Fig. 8E using SigmaPlot 11. IC50 values in Fig. 9C were compared by one-way analysis of variance followed by Tukey's post hoc test using SigmaPlot 11.
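A minimal sketch of the rate-constant extraction described under "Fast Kinetic BRET Assay" follows: a single exponential is fitted to the deactivation phase with and without RGS, and kGAP is the difference of the fitted rates. The traces below are simulated, and the rate constants are arbitrary illustration values.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = np.arange(0, 60, 0.05)                      # s, 50-ms resolution

def decay(t, k, amp):
    """Normalized BRET signal during the deactivation phase."""
    return amp * np.exp(-k * t)

# Simulated deactivation traces (rate constants chosen for illustration).
basal = decay(t, 0.14, 1.0) + rng.normal(0, 0.01, t.size)      # no RGS
with_rgs = decay(t, 0.38, 1.0) + rng.normal(0, 0.01, t.size)   # + RGS/Gbeta5

(k_app, _), _ = curve_fit(decay, t, basal, p0=(0.1, 1.0))
(k_obs, _), _ = curve_fit(decay, t, with_rgs, p0=(0.1, 1.0))
k_gap = k_obs - k_app                            # GAP activity measure
print(f"k_app = {k_app:.3f} 1/s, k_obs = {k_obs:.3f} 1/s, k_GAP = {k_gap:.3f} 1/s")
```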
Live Cell Receptor-based Assays Allow Examination of the GAP Activity of R7 RGS Proteins in a Physiological Context
To visualize RGS action in living cells, we reconstituted HEK293T cells with GPCRs, Gα subunits, and BRET sensors (Gβγ-Venus and masGRK3ct-Rluc8, recently developed by Hollins et al. (25)) (Fig. 1A). In this assay system, activation of a GPCR by an agonist promotes the interaction of the Venus-tagged Gβγ subunits with the Rluc8-tagged masGRK3ct reporter, producing the BRET signal. Conversely, application of an antagonist quenches GPCR-driven G protein activation and results in BRET signal decay. Indeed, activation of the heterologously expressed D2R by dopamine resulted in the generation of a robust BRET response, and the addition of haloperidol deactivated the signal in a time-dependent manner (Fig. 1B). The rise and decay of the BRET signal are well described by single exponential fitting, allowing us to obtain rate constants for the activation and deactivation phases, respectively. No significant agonist-elicited response was observed when D2R or Gαo was not transfected (Fig. 1C), indicating that the BRET signal recorded in the system originates from the functional coupling of the exogenously expressed receptors and G proteins and not from the endogenous proteins of HEK293T cells. We next performed a series of control experiments to demonstrate the utility of the assay in studying the GAP activity of the R7 RGS proteins. Because the GAP action of RGS proteins promotes heterotrimer reformation, they are expected to influence the kinetics of the deactivation phase of the response. However, the concurrent presence of the GPCR in the system makes it necessary to ensure that the kinetics of the deactivation phase are determined by the RGS action and are not influenced by changes in GPCR activity. First, we transfected cells with increasing amounts of the D2R construct and stimulated cells by application of a saturating concentration of dopamine (Fig. 1, D-F). An increase in the amount of D2R did not affect the amplitude of the response (Fig. 1D), suggesting an equal extent of total G protein mobilization in the cells, but accelerated the onset kinetics, consistent with an enhanced speed of G protein activation (Fig. 1E). Importantly, an increase in functional D2R had no effect on the deactivation kinetics (Fig. 1F), indicating that the GPCR concentration does not influence the antagonist-induced termination of the response. Second, we applied different concentrations of dopamine to cells expressing a fixed amount of D2R (Fig. 1, G-I). We observed a typical sigmoidal dose-response curve of the response amplitude (EC50 = 1.91 × 10^-7 ± 1.67 × 10^-8 M), indicating that an increase in the agonist concentration results in an increase in the pool of activated G proteins. As expected, enhanced GPCR activation also resulted in an increase in the rate of G protein activation (Fig. 1H). However, no changes in the deactivation phase of the response were noted (Fig. 1I). These data indicate that the amount of active G protein also does not change the deactivation rates. Finally, we tested the effect of increasing the RGS concentration on the deactivation kinetics of D2R-Go signaling (Fig. 1, J-L). In contrast to the manipulations of GPCR and G protein concentration, an increase in the RGS7 concentration substantially accelerated the deactivation rates of the response (Fig. 1, J and K). In fact, within the concentration range tested, the calculated GAP activity of RGS7 showed a clear linear relationship with its expression level (Fig. 1L). The final set of control experiments ensured that the expression of components of the RGS complex did not affect the levels of the receptor, Gα subunits, or BRET sensors in cells (Fig. 2). We thus conclude that, under the assay conditions, GTP hydrolysis by G proteins is the rate-limiting step that dictates the deactivation phase of the response. The function of the R7 RGS proteins can thus be quantitatively analyzed by measuring the deactivation kinetics, which show no sensitivity to fluctuations in GPCR and/or G protein concentration.
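As a sketch of the dose-response analysis above (EC50 = 1.91 × 10^-7 M for dopamine at D2R), the snippet below fits a log-dose sigmoid to normalized response amplitudes. The amplitude values are simulated, not the published data.

```python
import numpy as np
from scipy.optimize import curve_fit

dose = np.array([1e-9, 1e-8, 3e-8, 1e-7, 3e-7, 1e-6, 1e-5])   # M dopamine
amp = np.array([0.02, 0.07, 0.14, 0.36, 0.58, 0.85, 0.99])    # simulated dR/Rmax

def sigmoid(logc, log_ec50, hill):
    """Standard log-dose sigmoid, response between 0 and 1."""
    return 1.0 / (1.0 + 10.0 ** ((log_ec50 - logc) * hill))

(log_ec50, hill), _ = curve_fit(sigmoid, np.log10(dose), amp, p0=(-7.0, 1.0))
print(f"EC50 = {10**log_ec50:.2e} M, Hill slope = {hill:.2f}")
```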
We thus conclude that, under the assay conditions, GTP hydrolysis by G proteins is the rate-limiting step that dictates the deactivation phase of the response. The function of the R7 RGS proteins can thus be quantitatively analyzed by measuring the deactivation kinetics, which show no sensitivity to fluctuations in GPCR and/or G protein concentration.

RGS7 and RGS9-2 Complexes Selectively Regulate the Gαi/o Subfamily in the Absence or Presence of R7BP-Members of the R7 RGS family have been shown previously to be selective GAPs for the Gαi/o proteins in in vitro biochemical assays (26). However, their selectivity was never examined in the physiological context of living cells. Furthermore, on the basis of genetic evidence from Caenorhabditis elegans, it was proposed recently that R7BP might unlock the GAP activity of the R7 RGS proteins toward other Gα subunits, e.g. Gαq (27). We therefore used the in-cell BRET assay to re-examine the regulation of the Gα GTPase by R7 RGS complexes in living cells. In these experiments, we chose a panel of representative GPCRs well known to activate each of the Gα subunits examined, e.g. D1R for Gαs, M1R for Gαq, D2R for Gαo, and MOR for Gαz. Consistent with the in vitro studies, we found that, in cells, both RGS7/Gβ5 and RGS9-2/Gβ5 potently terminated a D2R-driven response via Gαo, a representative member of the Gαi/o family (Fig. 3, A, D, E, and H). However, we observed no regulation of the M1R-Gαq or D1R-Gαs response termination by either RGS7/Gβ5 (Fig. 3, B, C, and D) or RGS9-2/Gβ5 (Fig. 3, F, G, and H). These data indicate that the RGS7/Gβ5 and RGS9-2/Gβ5 dimers are Gi/o subfamily-selective regulators in living cells. Next, we examined the possibility that R7BP may enable Gαq or Gαs regulation by RGS7 or RGS9-2 complexes. Again, we found that the trimeric RGS7/Gβ5/R7BP and RGS9-2/Gβ5/R7BP complexes potently regulated Gαo deactivation but had no effect on the deactivation rates of Gαq and Gαs (Fig. 3, I-P). Because the expression levels of RGS7 and RGS9-2 were found to be similar across cells reconstituted with the various G protein subunits (data not shown), the lack of GAP activity toward Gαq and Gαs cannot be attributed to a lower expression of RGS7 and RGS9-2. Thus, we conclude that complex formation with R7BP does not influence subfamily selectivity and that the RGS7/Gβ5/R7BP and RGS9-2/Gβ5/R7BP trimers are also Gi/o subfamily-selective GAPs.

Varying G Protein Selectivity of the RGS7 and RGS9-2 Complexes within the Gαi/o Family-In addition to Gαo, the Gαi/o family contains the highly homologous proteins Gαi1-3 and the atypical subunit Gαz, which is characterized by an extremely slow intrinsic GTP hydrolysis rate (28). Because no previous studies examined the regulation of the Gαz GTPase by R7 RGS proteins, we addressed this question using the cell-based BRET assay. Morphine application induced a robust BRET signal in cells reconstituted with MOR and Gαz. Consistent with biochemical measurements, Gαz responses showed very slow deactivation kinetics upon termination of signaling at MOR, with a rate constant of (3.7 ± 0.1) × 10^-3 s^-1. These slow deactivation kinetics were completely unaffected by the addition of either the RGS9-2/Gβ5 or RGS7/Gβ5 complex, in both the presence and absence of R7BP (Fig. 4). These observations suggest that R7 RGS complexes do not regulate Gαz signaling. We next examined the impact of RGS9-2 and RGS7 on Gαi deactivation in comparison to Gαo.
In these experiments, we titrated the amount of RGS proteins and normalized their expression levels by post hoc quantitative Western blotting. We used the slope of k_GAP versus RGS concentration as a measure of RGS catalytic efficiency. As evident from Fig. 5A, RGS9-2/Gβ5 can regulate both Gαo and Gαi1 in a system containing D2R. Comparison of the two slopes indicates that RGS9-2/Gβ5 shows an ~3-fold greater preference for Gαo over Gαi1. In contrast, although RGS7/Gβ5 also effectively terminated Gαo-mediated responses, it had no detectable activity toward Gαi1 in the D2R-based system (Fig. 5B). We confirmed that the expression levels of Gαo and Gαi1 are similar (Fig. 5, E and F), indicating that the G protein selectivity of these two RGS proteins is not due to different levels of G protein expression. Because in vivo R7 RGS proteins also regulate MOR signaling, we next tested the possible effect of the GPCR on the Gα selectivity of RGS proteins. As with D2R signaling, RGS9-2/Gβ5 exerted a more efficacious GAP activity toward Gαo relative to Gαi1 (3-fold) (Fig. 5C). Again, RGS7/Gβ5 selectively regulated Gαo but not Gαi signaling when MOR was used instead of D2R to drive the responses (Fig. 5D). These data indicate that the RGS9-2/Gβ5 dimer is a GAP for both Gαo and Gαi, with a preference toward Gαo; that the RGS7/Gβ5 dimer is strictly selective for Gαo and does not regulate Gαi; and that the G protein selectivity of these RGS proteins does not depend on the GPCR.

[Figure 1 legend, continued: (n = 4). K, the expression levels of RGS7 in the cells used in J were examined by Western blotting with an RGS7-specific antibody. L, the deactivation rate constant measured in the absence of RGS7 (0.1427 ± 0.0028 s^-1) was subtracted from the values measured in the presence of RGS7, and the resulting k_GAP parameters were plotted against the expression level of RGS7 measured in K (n = 4).]

[FIGURE 2. Overexpression of the R7 RGS complex does not influence the expression levels of signaling components reconstituted in HEK293T cells. HEK293T cells were transfected with D2R, Gαo, Venus-Gβγ, masGRK3ct-Rluc8, and Gβ5 with or without R7 RGS and R7BP. The expression levels of signaling components were analyzed by Western blotting with the indicated specific antibodies. An anti-Rluc (Renilla luciferase) antibody was used to detect masGRK3ct-Rluc8. Because transfection of Gβ5 without R7 RGS does not change the parameters Rmax and the activation and deactivation rates (data not shown), Gβ5 was transfected in all conditions.]

R7BP Potentiates the GAP Activity of RGS7 and RGS9-2 and Enables the Regulation of Gαi by RGS7 with Both D2R and MOR-We examined the effects of R7BP on the activity of RGS7 and RGS9-2 by comparing their GAP activity toward Gαo and Gαi1 in the D2R receptor system. Coexpression with R7BP potentiated the GAP activity of both RGS7 and RGS9-2 for Gαo. The extent of the regulation was similar and reached ~2-fold (Fig. 6, A and B). This increase in activity was independent of the regulation of RGS9-2 expression levels by R7BP because the calculated k_GAP values were normalized to the relative expression levels of the proteins. A similar stimulatory effect of R7BP (2.5-fold) was observed for RGS9-2 when Gαi was used in the assay instead of Gαo (Fig. 6C). Strikingly, coexpression with R7BP dramatically affected the ability of the RGS7/Gβ5 complex to regulate Gαi, essentially resulting in an all-or-nothing effect (Fig. 6D).
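The slope-based catalytic-efficiency estimate described above can be sketched with SciPy's linregress on hypothetical titration data; the comparison of two slopes via a t statistic below is one common approximation to the Prism test named under Statistical Analysis, and all numbers are invented.

import numpy as np
from scipy.stats import linregress, t as t_dist

# hypothetical relative RGS expression levels and measured k_GAP values (s^-1)
rgs = np.array([0.1, 0.25, 0.5, 0.75, 1.0])
kgap_go = np.array([0.02, 0.05, 0.10, 0.14, 0.19])   # toward Galpha-o
kgap_gi = np.array([0.01, 0.02, 0.03, 0.05, 0.06])   # toward Galpha-i1

fit_go = linregress(rgs, kgap_go)   # slope = catalytic efficiency toward Go
fit_gi = linregress(rgs, kgap_gi)

# two-tailed test of equal slopes (normal-theory approximation)
t_stat = (fit_go.slope - fit_gi.slope) / np.hypot(fit_go.stderr, fit_gi.stderr)
df = len(rgs) + len(rgs) - 4
p = 2.0 * t_dist.sf(abs(t_stat), df)
print(f"Go slope {fit_go.slope:.3f}, Gi slope {fit_gi.slope:.3f}, p = {p:.3g}")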
Although, in the absence of R7BP, we detected no GAP activity of RGS7/Gβ5 toward Gαi1, the RGS7/Gβ5/R7BP trimer was capable of efficiently regulating Gαi inactivation, with k_GAP values approaching those of the RGS9-2/Gβ5/R7BP complex. We next tested the possible effect of GPCRs on the function of R7BP. For this purpose, we switched receptors from D2R to MOR (Fig. 7). With MOR, we observed effects of R7BP on GAP activity similar to those in cells transfected with D2R. R7BP potentiated RGS9-2 GAP activity toward Gαo and Gαi1 ~2-fold (Fig. 7, A and C). Likewise, R7BP enhanced RGS7 activity toward Gαo 2.5-fold (Fig. 7B) and played a permissive role in enabling the regulation of Gαi1 GTPase activity (Fig. 7D). Thus, we conclude that GPCRs do not have a significant effect on the ability of R7BP to augment the GAP activity of both RGS7 and RGS9-2.

RGS9-2 Is a Less Efficacious but More Selective GAP Relative to RGS7-Next, we addressed questions pertaining to the relative efficiencies of RGS7 versus RGS9-2 in G protein deactivation and the possible GPCR selectivity of their effects (Fig. 8). To enable such comparisons, we obtained absolute quantitative values for RGS activity. Direct comparison of RGS7/Gβ5 and RGS9-2/Gβ5 indicates that they accelerate deactivation rates in the D2R- and MOR-based systems to different extents (Fig. 8, A and B). The expression levels of RGS7 and RGS9-2 were quantified by Western blot analysis with purified RGS proteins as standards (Fig. 8, C and D). Given the linear relationship between RGS concentration and G protein deactivation rates, these values were used to normalize k_GAP values to derive the specific activity of each RGS protein in each GPCR system. The results allow direct comparisons of RGS efficiencies between MOR and D2R. As evident from such an analysis, RGS7 produced a 3- and 4.5-fold higher activity than RGS9-2 on D2R and MOR signaling, respectively (Fig. 8E). The more efficacious GAP activity of RGS7 is likely explained by its higher affinity toward the transition state of Gαo, as evidenced by the pull-down assay between recombinant R7 RGS/Gβ5 complexes and native brain lysates (Fig. 8F). Thus, we conclude that RGS7 is a more potent GAP than RGS9-2, irrespective of the GPCR used in the assay. However, although RGS7 exerted a similar GAP activity in both the D2R and MOR systems, RGS9-2 was about 2-fold more effective in deactivating D2R relative to MOR signaling (Fig. 8E). Thus, RGS9-2 exhibits GPCR selectivity and preferentially regulates D2R signaling.

RGS7 and RGS9-2 Differentially Control the Gαi-mediated Inhibition of Adenylate Cyclase Activity-To examine how the selectivity of RGS7 and RGS9-2 in G protein inhibition propagates to the regulation of downstream effector signaling, we chose the adenylyl cyclase (AC) system for its central role in cellular signaling. AC is stimulated by Gαs and inhibited by Gαi, thus integrating G protein inputs (Fig. 9A). Using this system, we analyzed the effects of RGS9-2 and RGS7 on the ability of MOR to suppress cAMP production using a CRE-luciferase reporter construct. Stimulation of MOR with morphine caused a dose-dependent inhibition of β2AR agonist isoproterenol-mediated CRE-luciferase induction (Fig. 9B). Cotransfection of RGS9-2 resulted in a rightward shift of the dose-response curve, increasing the IC50 value ~3-fold (from 8.38 ± 0.68 nM to 25.82 ± 2.39 nM) (Fig. 9C). This indicates that RGS9-2 reduces the potency of MOR-Gαi-AC signaling and is consistent with the action of RGS9-2 as a negative regulator of Gαi.
In contrast, cotransfection of RGS7 did not significantly affect MOR signaling to AC (Fig. 9, B and C), consistent with the lack of RGS7 activity on Gαi revealed in the BRET assays. Thus, these data, together with the data from the BRET assays, indicate that RGS proteins differentially control GPCR-mediated signaling to downstream effectors, consistent with their G protein selectivity profiles.

DISCUSSION

The main result of this study is the establishment of the selectivity of two major striatal RGS proteins in their ability to regulate physiologically relevant GPCRs in the native environment of a living cell. The GAP activity of RGS proteins is usually assayed in in vitro systems with purified components, where RGS proteins, and often only their catalytic domains, are studied in isolation from protein-protein interactions, receptors, and the membrane environment. Under those conditions, RGS proteins display very few differences in their substrate selectivity and specific activity. However, it is becoming increasingly appreciated that, in vivo, RGS proteins function in tight association with other components of the GPCR signaling cascades and exist in larger macromolecular complexes (29). For example, both RGS7 and RGS9 form complexes with a range of partners that include Gβ5; the membrane anchors R7BP and R9AP (30); and the GPCRs mGluR6 (31), D2R (13, 32), MOR (33-35), m3 muscarinic (36), and GPR158/179 (37). Nevertheless, how these interactions shape RGS action in cells is poorly understood. We used a cell-based BRET assay system to study the influence of multisubunit RGS complexes on the kinetics of G protein subunit reassociation following termination of GPCR activity. We developed quantitative measures of RGS protein GAP activity in this system and applied them to investigate the activity and selectivity of the striatal RGS proteins RGS7 and RGS9-2.

[FIGURE 6. R7BP augments the GAP activity of RGS7 and RGS9-2 toward Gαi1 and Gαo in the D2R-based system. Gαi1 and Gαo were reconstituted in HEK293T cells with D2R, and the GAP activity of RGS7 and RGS9-2 was examined in the absence (○) or presence (●) of R7BP. Changes in the k_GAP values are plotted as a function of RGS concentration. Representative results of two to three independent experiments yielding similar results are shown. Slope values obtained from linear regression analysis are 4.8 × 10^-3 ± 3.1 × 10^-4 for D2R-Go-RGS9-2-R7BP, 1.8 × 10^-3 ± 1.1 × 10^-4 for D2R-Go-RGS9-2, 3.3 × 10^-3 ± 5.9 × 10^-4 for D2R-Go-RGS7-R7BP, 1.5 × 10^-3 ± 8.7 × 10^-5 for D2R-Go-RGS7, 6.7 × 10^-4 ± 4.2 × 10^-5 for D2R-Gi-RGS9-2-R7BP, 3.7 × 10^-4 ± 3.7 × 10^-5 for D2R-Gi-RGS9-2, 9.8 × 10^-4 ± 8.8 × 10^-5 for D2R-Gi-RGS7-R7BP, and 3.1 × 10^-5 ± 8.5 × 10^-5 for D2R-Gi-RGS7. Four to six replicate samples were used for obtaining each data point.]

The following are the key conclusions of our study (Fig. 10). First, we show that in the cellular environment RGS9-2/Gβ5 and RGS7/Gβ5 display a strong preference for Gαo over Gαi. These results are in overall agreement with published in vitro data (26, 38). The RGS7 complex showed the greatest selectivity for Gαo and was completely unable to inactivate Gαi in the absence of R7BP. Second, we demonstrate that the GAP activity of the RGS9-2/Gβ5 complex shows a receptor preference for D2R over MOR in the regulation of Gαo deactivation. To our knowledge, this is the first clear example of GPCR selectivity of R7 RGS action.
Although the mechanisms behind this receptor preference of the RGS9-2 complex need to be established, we speculate that they are likely determined by selective interactions of RGS9-2 with the receptors, as suggested by studies on RGS4, which selectively interacted with the δ-opioid receptor over the μ-opioid receptor (39). Interestingly, no receptor preference was revealed for the RGS7/Gβ5 action. Third, we found that R7BP acted universally to potentiate the action of both RGS7 and RGS9-2 on both Gαi and Gαo and with both D2R and MOR. It had the most pronounced, all-or-nothing effect on the ability of RGS7 to regulate Gαi, essentially switching it on. Although the design of our study did not allow us to distinguish between allosteric effects and the general effects of positioning RGS complexes on the plasma membrane, we think that both mechanisms are likely involved in the action of R7BP, as exemplified in studies on the related membrane anchor R9AP (24, 41). Finally, we report that, in living cells, RGS7 shows a much more potent activity relative to the RGS9-2 complex. In contrast, previous in vitro observations with purified proteins reported approximately equal catalytic activities of these two proteins (26). This illustrates the importance of considering a native, physiologically relevant cellular environment when comparing the activities of complex RGS proteins.

[FIGURE 7. R7BP augments the GAP activity of RGS7 and RGS9-2 toward Gαi1 and Gαo in the MOR-based system. Gi1 and Go signaling were reconstituted in HEK293T cells with MOR, and the GAP activity of RGS7 and RGS9-2 was examined in the absence (○) or presence (●) of R7BP. Changes in the k_GAP values are plotted as a function of RGS concentration. Representative results of two to three independent experiments yielding similar results are shown. The slope values obtained from linear regression analysis are 1.7 × 10^-3 ± 3.4 × 10^-4 for MOR-Go-RGS9-2-R7BP, 6.0 × 10^-4 ± 1.0 × 10^-4 for MOR-Go-RGS9-2, 3.6 × 10^-3 ± 6.5 × 10^-4 for MOR-Go-RGS7-R7BP, 1.8 × 10^-3 ± 2.9 × 10^-4 for MOR-Go-RGS7, 6.7 × 10^-4 ± 4.2 × 10^-5 for MOR-Gi-RGS9-2-R7BP, 3.7 × 10^-4 ± 3.7 × 10^-5 for MOR-Gi-RGS9-2, 3.0 × 10^-4 ± 5.9 × 10^-5 for MOR-Gi-RGS7-R7BP, and 1.2 × 10^-5 ± 6.8 × 10^-5 for MOR-Gi-RGS7. Four to six replicate samples were used for obtaining each data point.]

It is interesting to consider the observed differences in selectivity and activity of RGS complexes in the context of striatal G protein signaling regulation. We have reported recently that changes in neuronal excitability and oxygenation trigger a remodeling of RGS complexes in the striatum (40). During this remodeling, RGS9-2 undergoes degradation, and the vacated R7BP recruits RGS7 to the plasma membrane compartments. Furthermore, multiple studies have demonstrated that exposure to addictive drugs (e.g. cocaine, morphine, amphetamine) that influence D2R and MOR signaling also changes RGS9-2 expression (9) and, thus, likely influences the composition of RGS complexes in striatal neurons. Taken together with the results of this study, these observations suggest a model in which remodeling of RGS complexes is used to adjust the strength and selectivity of striatal G protein signaling. For example, an increase in dopamine and opioid signaling may be counteracted by tweaking the GAP complex (41) to substitute the less efficient RGS9-2 with the stronger RGS7 catalytic subunit. Substituting a more selective GAP for a less selective one will likely also affect the relative balance of D2R versus MOR signaling in the striatum. It thus appears that RGS protein complexes are more than just blunt, indiscriminate tools and rather contribute to the homeostatic scaling of G protein signaling in a GPCR- and G protein-selective fashion.

[FIGURE 8. Comparison of catalytic activities and receptor preferences of RGS7 and RGS9-2 complexes. A and B, D2R (A) and MOR (B) signaling was reconstituted in HEK293T cells with Go, and the GAP activity of the RGS7/Gβ5 and RGS9-2/Gβ5 dimers was examined. Trace lines represent the exponential fit of the deactivation phase. C and D, the protein levels of RGS7 (C) and RGS9-2 (D) in the transfected cells used for the BRET assay were determined by Western blotting. Band intensities obtained from recombinant proteins were used to generate a standard curve. Band intensities (1.49 ± 0.05 for RGS7 and 1.97 ± 0.08 for RGS9-2) from the triplicate samples in question were plotted on the calibration curve and used to determine the amount of RGS7 and RGS9-2. E, the k_GAP values obtained from the BRET assay were divided by the amount of RGS proteins determined by Western blot analysis to compare the activity of RGS7 and RGS9-2. Six replicate samples were used for each experiment. The experiment was performed independently three times, and all showed similar results. Error bars represent mean ± S.E. *, p < 0.05, Kruskal-Wallis one-way analysis of variance on ranks followed by Tukey's post hoc test; ns, not significant. F, purified His-tagged RGS7/Gβ5 and RGS9-2/Gβ5 were incubated with mouse brain membrane fractions treated with GDP and AlF4-. After detergent treatment, complexes containing His-tagged proteins were purified by nickel-nitrilotriacetic acid chromatography and analyzed by Western blotting with Gαo and Gβ5 antibodies.]

[FIGURE 9. Differential effects of RGS7 and RGS9-2 on Gαi-mediated inhibition of adenylate cyclase activity. A, schematic of the cross-talk signaling pathway from the endogenous β2-adrenergic receptor (β2AR) and MOR to the firefly luciferase gene regulated by the CRE response element. B, HEK293T cells were transfected with the CRE-luc2P reporter, MOR, Gαi1, Gβ1, Gγ2, and Gβ5 with or without R7 RGS. After 5 h of treatment with 50 nM isoproterenol (ISO) together with serial doses of morphine, luciferase activity was measured. The highest dose of morphine treatment of cells transfected without RGS or with RGS7 or RGS9-2 inhibited ISO-induced luminescence by 68.5 ± 1.3%, 62.9 ± 2.5%, and 61.2 ± 4.7% (S.E., n = 4), respectively. The average luminescence at the highest dose of morphine treatment was subtracted as background, and the resulting difference (ΔR) was normalized against the maximal value upon stimulation by ISO only (Rmax). C, IC50 values were obtained by fitting a four-parameter logistic curve to the inhibition data using GraphPad Prism 5. **, p < 0.01, one-way analysis of variance followed by Tukey's post hoc test.]

[FIGURE 10. A model for GPCR and G protein selectivity of striatal R7 RGS complexes. RGS7 and RGS9-2 differentially regulate D2R- and MOR-mediated signaling to Gαi and Gαo in the absence (A) or presence (B) of R7BP. The thickness of the T-shaped arrows indicates the relative strength of the GAP activity observed in this study; a thicker line represents stronger activity. RGS7 is a stronger GAP than RGS9-2. RGS9-2, but not RGS7, is capable of regulating Gαi in the absence of R7BP. Both RGS9-2 and RGS7 preferentially regulate Gαo in the absence or presence of R7BP.]
Although RGS7 does not show a GPCR preference, RGS9-2 complexes selectively regulate D2R over MOR.
2018-04-03T00:44:40.186Z
2013-07-15T00:00:00.000
{ "year": 2013, "sha1": "200cf29134b76ab6e3774b3322c93000b21ac353", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/288/35/25129.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "ae540bd4c24e0a70764325f999f4d82a4f62e225", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
8423671
pes2o/s2orc
v3-fos-license
Cooperation and Defection at the Crossroads

We study a simple traffic model with a non-signalized road intersection. In this model the car arriving from the right has precedence. The vehicle dynamics far from the crossing are governed by the rules introduced by Nagel and Paczuski, which define how drivers behave when braking or accelerating. We measure the average velocity of the ensemble of cars and its flow as a function of the density of cars on the roadway. An additional set of rules is defined to describe the dynamics at the intersection, assuming a fraction of drivers who do not obey the rule of precedence. This problem is treated within a game-theory framework, where the drivers who obey the rule are cooperators and those who ignore it are defectors. We study the consequences of these behaviors as a function of the fraction of cooperators and defectors. The results show that cooperation is the best strategy because it maximizes the flow of vehicles and minimizes the number of accidents. A rather paradoxical effect is observed: for any percentage of defectors the number of accidents is larger when the density of cars is low, because of the higher average velocity.

Introduction

Urban transportation systems are a source of numerous inefficiencies and negative externalities. Traffic problems worsen due to heavy congestion; additionally, there are environmental issues such as smog and noise pollution, and huge economic losses due to congested traffic. In order to improve efficiency and reduce externalities it is important to understand traffic dynamics in a controlled environment and to identify optimal control strategies that could help alleviate the problem. Traffic flow problems have received much attention for decades. Many investigations have been carried out from different points of view, considering various aspects of traffic phenomena, in order to better understand the overall quality of traffic flow. One of the main questions in the study of traffic is how to better accommodate the demand for mobility in a system. The pioneering descriptions of traffic flow on a highway derive from observations made by Greenshields, first published about 75 years ago [2]. Greenshields carried out tests to measure traffic flow, traffic density and velocity, using photographic measurement methods for the first time. He was able to develop a model of uninterrupted traffic flow that predicts and explains the trends observed in real traffic. Nowadays the search for the mechanisms behind the complex interactions between drivers, vehicles and road infrastructure continues, while traffic congestion has worsened considerably. Recently, traffic problems have attracted the attention of physicists because of their non-equilibrium properties and various nonlinear dynamical phenomena. Several approaches have been proposed to investigate the behavior of vehicular traffic. Most are classified into macroscopic and microscopic models, based on how the movement of vehicles is considered. In the macroscopic approach a traffic stream is viewed as a continuous medium. The collective vehicle dynamics is described in terms of the spatial vehicular density per lane and the average velocity as a function of the freeway location and time. The first major step in macroscopic modeling of traffic was carried out by Lighthill and Whitham in 1955 [3], when they compared the "traffic flow on long crowded roads" with "flood movements in long rivers".
A year later, Richards (1956) [4] complemented the idea by introducing "shock-waves on the highway" with an identical approach. That is the origin of the LWR model, and it is common to refer to this class of models as first-order models. Another kind of macroscopic model, second-order models, contain an additional partial differential equation for the average velocity and take into account the finite relaxation time needed to adapt the velocity to changing traffic conditions [5,6]. In the microscopic approach the motion of each vehicle in a traffic stream is considered. Thus, the dynamic variables of the model represent microscopic properties such as the position and velocity of a single vehicle. The so-called car-following models focus on the nonlinear interaction and dynamics of single vehicles. The driving behavior of a vehicle depends significantly on the motion of the preceding vehicle: the acceleration is a function of the vehicle's distance to the preceding one and of its own and relative velocities [7][8][9]. These models are used only for detailed studies (e.g. on-ramp traffic, bottlenecks, effects of traffic optimization measures), as they consume an enormous amount of CPU time because of the large number of variables involved. An alternative approach is given by cellular automaton (CA) models, which permit the simulation of minimal models of traffic dynamics faster than real time [1,10,11]. Cellular automata use integer variables to describe the dynamic properties of the system by discretizing space and time. The Nagel-Schreckenberg (NaSch) model [10] is a basic CA model describing one-lane traffic flow. Based on this model, many CA have been extended to investigate the properties of systems with realistic traffic features such as highway junctions, crossings, tollbooths and speed limit zones [12][13][14]. An extensive overview of traffic modeling can be found in a review article by Helbing [15], who considers empirical data and reviews the main approaches to modeling pedestrian and vehicular traffic. Control strategies including ramp metering [16,17] and variable speed limits [18] have also been widely studied. At the most basic level, traffic dynamics are often discussed on a homogeneous roadway; next it becomes necessary to consider road intersections. Modeling intersections is difficult, since intersection models are phenomenological by nature. They describe, for instance in the case of a merge, the local priority rules. At an intersection a limited space must be shared by vehicles from different directions, and various approaches have been used to resolve the resulting traffic conflicts. There are schemes that require a vehicle to come to a full stop, e.g. stop signs or traffic lights. Other types of schemes try to avoid the full stop of vehicles, like traffic circles or roundabouts [19,20]. In this paper we extend the original discrete model proposed by Nagel and Paczuski [1] in order to account for a non-signalized intersection. This is a common problem at street intersections within cities, particularly in old cities where crossings are neither rotary nor signalized. Earlier, Zhang et al. [21] considered the intersection problem within a game-theory framework; their model will be revisited in the next sections. Perc [22] has also studied the effect of competing strategies in a different discrete traffic model.
In this paper we study the effects of cooperator and defector behavior on the flow and average velocity of vehicles, as well as the incidence of accidents when cars do not stop at crossings. The paper is organized as follows. In the next section we define the model and the rules that describe the behavior of vehicles on the street and at an intersection. Next we describe the setting of the simulations and present the results obtained. Finally, we conclude and discuss the potential relevance of this work to the solution of real traffic problems.

Model and Methods

Let us present a model for traffic dynamics at a single intersection and describe the flow in the system. We choose to model the motion of a vehicle on a single-lane street using the automaton rules proposed by Nagel and Paczuski [1]. The interaction of vehicles arriving at the intersection, and which one is going to pass first, is determined by the set of rules that we specify hereafter. We consider a system of two streets, s1 and s2, that cross at a given point X. The streets have a defined sense of circulation: South to North for s1 and East to West for s2 (see Fig. 1). Each street is a ribbon with L slots, with periodic boundary conditions. On each street we place N vehicles (initially at random), giving a linear density ρ = N/L, each identified by an index i. Double occupancy of the sites is prohibited (except at X). Following Ref. [1] we consider a variable associated with the distribution of cars in the streets, the gap g_i, that is, the number of empty sites in front of car i up to the car ahead. At the intersection the gap needs further specification, as will be explained below. Each car has a time-dependent velocity v_i(t) that takes discrete values between 0 and v_max. Time proceeds discretely, and v_i is the number of sites each vehicle advances during one time step. At the beginning of the simulation all cars have zero velocity. For the purpose of defining the interaction at the crossroad we identify the cars nearest to the intersection as c1 and c2 (in streets s1 and s2, respectively). Streets are equivalent, i.e. the same traffic rules are applied to both streets.

General rules of vehicle motion

Firstly, let us describe the dynamics of a vehicle away from the intersection. For every configuration of the model, one iteration consists of the following steps, performed simultaneously for all vehicles:

1. If g_i <= v_i(t) - 1, car i will reduce its velocity: v_i(t+1) = g_i with probability q, or v_i(t+1) = g_i - 1 with probability 1 - q (a further reduction of velocity).
2. If g_i >= v_i(t) + 1 and v_i(t) < v_max, the car will accelerate: v_i(t+1) = v_i(t) + 1 with probability p, or keep v_i(t+1) = v_i(t) with probability 1 - p.
3. If g_i = v_i, or if v_i = v_max and the gap is g_i >= v_max, the velocity v_i does not change.
4. After updating the velocities, each car advances v_i(t+1) sites.

Note that, in rules 1 and 2, Nagel and Paczuski [1] consider p = q = 1/2. We prefer to keep some flexibility in the choice of the probabilities. In the simulations reported below we set v_max = 5, but any value v_max >= 2 gives the same qualitative behavior (a code sketch of these motion rules is given below).

Dynamics at the intersection

In order to define the rules that regulate the movement of vehicles at the uncontrolled intersection, one needs to determine the priorities when crossing the intersection, just like at real crossroads, making the transit fluid and avoiding collisions. Zhang et al. [21] considered a similar problem in a game-theoretical framework. In their model, drivers approaching the intersection behave either as cooperators (C) or defectors (D), allowing (or not) the cars arriving on the other street to pass.
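As anticipated above, motion rules 1-4 translate into a compact parallel update. The following minimal Python sketch is ours (the function and variable names are assumptions, not taken from the paper); it implements one iteration on a periodic street.

import random

def step(positions, velocities, L, vmax=5, p=0.5, q=0.5):
    # One parallel update of rules 1-4 on a ring of L sites.
    # positions must be sorted along the ring; velocities[i] belongs to positions[i].
    n = len(positions)
    new_v = []
    for i in range(n):
        gap = (positions[(i + 1) % n] - positions[i] - 1) % L  # empty sites ahead
        v = velocities[i]
        if gap <= v - 1:                       # rule 1: brake
            v = gap if random.random() < q else max(gap - 1, 0)
        elif gap >= v + 1 and v < vmax:        # rule 2: accelerate
            v = v + 1 if random.random() < p else v
        # rule 3: otherwise (gap == v, or v == vmax with gap >= vmax) keep v
        new_v.append(v)
    moved = [(x + v) % L for x, v in zip(positions, new_v)]    # rule 4: advance
    order = sorted(range(n), key=moved.__getitem__)            # re-sort after wrap
    return [moved[i] for i in order], [new_v[i] for i in order]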
Furthermore, drivers always adopt complementary strategies; that is, if c1 is C (D), then c2 is D (C). Thus the dynamics at the intersection is deterministic, since all pairs are of C-D or D-C type, and cooperators will always let the defectors cross. Zhang et al. set the probability of cooperating between 0 and 0.5 (and the probability of defecting between 0.5 and 1). Since cooperators stop at the intersection, the street with the larger number of defectors exhibits a higher average velocity than the other one. However, the average total flow of both streets appears to be rather independent of the probability of cooperation. We believe that the use of cooperating and defecting strategies is a fair approach for the description of the interaction of drivers at crossroads in many real situations. A true game description must take into account the full set of possible interacting strategies. That is, drivers arriving at the intersection may well both be cooperators (C-C) or both defectors (D-D). Since the authors of Ref. [21] penalize only cooperators, it is better to be a defector, and it follows that a driver should not behave a priori in a cooperative way. It is precisely due to the relative payoffs of the complete set of interactions that the formal games of Hawks and Doves or the Prisoner's Dilemma gain their interest in the description of social systems. In our model a driver has a strategy that determines his behavior at the intersection. Strategies are set at random at the beginning of the simulation, with probability p_c for cooperation and (1 - p_c) for defection. In order to avoid deterministic or synchronization artifacts arising from the periodic boundary conditions, when a car reaches the end of the lane and re-enters the street, drivers are reassigned new strategies, at random with the same probability p_c. In this way, heterogeneity in drivers' behavior is incorporated into the model. Let us specify the interaction at the crossroad in a way that imitates what happens in real situations. We impose a single traffic rule:

Rule 1: Drivers must always yield to cars approaching from the right.

Rule 1 is a widespread right-of-way traffic rule that applies to equivalent streets in the absence of control devices in almost all countries with right-hand driving. If street s1 runs from South to North, and street s2 from East to West (see Fig. 1), the driver c1 must respect the priority of c2 and let him pass first. However, traffic rules are not always respected, and some drivers may try to cross disregarding the rule. This behavior may impact the traffic flow in different ways depending on the density of cars, and it is the phenomenon that we aim to study. Cooperators respect Rule 1, whereas defectors disregard it; in either case, a driver approaching the intersection with a car present on the other street measures the gap up to X (neither C nor D will crash intentionally). Determination of velocities: (a) If c1 is a cooperator and is at site X - 1, it yields. If the velocity of c2 is such that it will cross the intersection, c1 sets its velocity to 0 ("stop at the intersection"). However, if the speed of c2 is not large enough to cross at that step, c1 sets its velocity according to the general rules and keeps advancing. Note that c1 yields disregarding the strategy of c2. (b) If c1 is a defector and c2 is a cooperator, both may try to cross at the same time (if their velocities are large enough to allow it in the current time step). In this case, the velocities are set up in such a way that the cars advance only up to the intersection (not further, not before).
This rule slightly favors the right-hand driver (the cooperator) with respect to the left-hand one (the defector), by penalizing the defector with a reduction of speed. However, neither car stops. This simulates an "almost crash", where both drivers lose some time (the defector more than the cooperator). At the next time step they will accelerate if allowed by the traffic density. (c) If c1 and c2 are both defectors, both may try to cross at the same time (again, both speeds need to be large enough given their current positions). In this case neither car yields, and we penalize both of them with a crash. Cars are given velocities just enough to bring them to the intersection, and they are flagged to have their speeds set to 0 at the next time step. So they will stay at X one more time step, causing an interruption of the traffic flow, which will pile up behind them. At the following time step the cars will accelerate if allowed by the traffic density. Given this set of rules (a minimal code sketch of rules (a)-(c) follows below), we performed various simulations; in the next section, we present the results obtained.

Results and Discussion

In the simulations, the length of each street is L = 1000 sites. N cars are distributed at random in each street, giving a density ρ = N/L. Starting from an initial condition in which all vehicles have zero velocity, we wait a reasonable transient time in order to reach a stationary phase, defined by the average velocity in both streets. We then calculate the flow of cars, defined as the average velocity times the density, w = ⟨v⟩ρ, additionally averaged over the stationary state. We also calculate various statistical properties of the distribution of velocities. Let us first observe a comparison between a pure Nagel system and street s1 of our model (Fig. 2). The system parameters correspond to rather noisy driver behavior, with p = q = 0.5, and a density of ρ = 0.2 cars per site. For the pure Nagel system (see Fig. 2A), there are congestion clusters (jams), which form randomly due to velocity fluctuations of the cars. These cars either stop moving or move very slowly, and can accelerate to full speed only after having left the jam, keeping this velocity until the next one. Thus, the stationary state is characterized by an inhomogeneous mixture of jam-free regions and higher-density jammed regions. These jammed regions decrease the average flow in the system. In Fig. 2B we show the results of our model. We observe that, from the beginning, the intersection acts as an ordering defect. Even though its action is local, its effect is far-reaching. When the cars reduce their velocity, and even stop at the intersection, a free space is created ahead of the defect. After crossing the intersection cars can accelerate to maximum velocity, resulting in an almost completely ordered flow that persists downstream. So, the intersection acts as a source of order in the traffic: by allowing vehicles to pass one at a time it effectively destroys the spontaneous jams observed by Nagel. Now we study macroscopic fundamental flow diagrams for a variety of traffic scenarios. These diagrams show the relation between the flow and the density and are represented in Fig. 3.
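One possible encoding of the intersection rules (a)-(c) specified above is the following helper; the crossing condition (a car passes X this step when its speed is at least its distance to X) and the calling convention are our assumptions, since the text does not fix them exactly.

def resolve_crossing(v1, d1, strat1, v2, d2, strat2):
    # v1, v2: tentative speeds of the lead cars c1 (street s1) and c2 (street s2)
    # d1, d2: sites remaining from each car to the crossing site X
    # strat1, strat2: 'C' (cooperator) or 'D' (defector)
    will_cross1 = v1 >= d1
    will_cross2 = v2 >= d2
    crash = False
    if strat1 == 'C' and d1 == 1 and will_cross2:
        v1 = 0                        # rule (a): cooperator on s1 stops and yields
    elif will_cross1 and will_cross2:
        if strat1 == 'D' and strat2 == 'C':
            v1, v2 = d1, d2           # rule (b): both advance only up to X
        elif strat1 == 'D' and strat2 == 'D':
            v1, v2 = d1, d2           # rule (c): crash; both halt at X and their
            crash = True              # speeds are set to 0 at the next time step
    return v1, v2, crash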
Three phases are observed: (1) a low-density phase, with freely flowing traffic at the maximum speed (where the flow grows linearly with the density); (2) a high-density phase, corresponding to heavily congested traffic and very slow speeds, with the flow depending inversely on the density of cars; (3) an intermediate-density phase where the flow remains on a plateau independent of the density, so the average velocity is in inverse proportion to the density. The first two phases are also present in Nagel-Paczuski's model [1]; the third phase has been observed by Zhang et al. [21]. The transition between the free-flow phase and the plateau is a crossover that, in street s1 (panels A and C in Fig. 3), appears as a peak. We looked at the dynamics in this region in detail, and this peak does not correspond to any abrupt phase transition. A close-up of one of the peaks appears as an inset in Fig. 3A. The fast reduction of flow in the yielding street (s1) must be interpreted, precisely, as the yielding vehicles stopping at the first stages of the jamming produced by an increased density. We remark that, since the flows in both streets are considered separately, the intersection can be seen as a defect in the street. However, the fact that one of the streets is the preferential one makes the flow different in the yielding street and the preferential one. We performed simulations for two pairs of values of the probabilities p and q (see Model and Methods). On one side, p = q = 0.5 represents the behavior of undecided or cautious drivers, which we call a noisy system. These are drivers that half of the time do not accelerate to the maximum possible velocity, and the rest of the time brake more than is strictly necessary. This set of values was used by Nagel and Paczuski [1] and will serve as a reference. The other pair of values, p = q = 0.9, represents more "deterministic" drivers, who mostly try to optimize their motion. Observe, in Fig. 3, that the behavior of the system is qualitatively the same in both cases. Nevertheless, the flow of the noisy system is a little lower than that of the more deterministic one (the corresponding curves on the left panels of Fig. 3 are higher than those on the right). In addition, we explored a wide range of values of the probability of cooperation p_c, ranging from zero cooperators to a fully cooperating system. As expected, the flow is the same in both streets when p_c = 0 (all defectors, black squares). When p_c > 0 the flow is greater in street s2 than in street s1. Observe that the impact of cooperation is less relevant in the noisy case, p = q = 0.5. In Fig. 4 we plot the total flow as a function of the probability of cooperation p_c, for several fixed values of the density, in order to visualize the effect of cooperation. This is shown for the two scenarios, p = q = 0.9 and p = q = 0.5. The trends are similar in both cases, even though the flow is greater in the deterministic case. More interesting is the fact that, for less congested systems, the dependence on cooperation is non-monotonic. There is a maximum flow at an intermediate value of p_c, indicating that an excess of cooperation may induce a jam at the intersection. On the other hand, for high densities (e.g. ρ = 0.7, as shown) the flow decreases monotonically with the probability of cooperation.
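Flow curves of the kind underlying these diagrams can be generated by combining the update rule sketched earlier with the definition w = ⟨v⟩ρ. The following stationary-state measurement for a single isolated street (no intersection) is a minimal sketch, with illustrative transient and averaging windows of our own choosing.

import random

def measure_flow(L=1000, density=0.2, vmax=5, p=0.5, q=0.5,
                 transient=2000, window=2000):
    # average flow w = <v> * rho in the stationary state of one isolated street,
    # reusing the step() function sketched above
    n = int(density * L)
    positions = sorted(random.sample(range(L), n))
    velocities = [0] * n
    for _ in range(transient):                 # discard the transient
        positions, velocities = step(positions, velocities, L, vmax, p, q)
    mean_v = 0.0
    for _ in range(window):                    # average over the stationary state
        positions, velocities = step(positions, velocities, L, vmax, p, q)
        mean_v += sum(velocities) / n
    return (mean_v / window) * density         # w = <v> * rho

Scanning the density argument over, e.g., 0.05 to 0.9 traces out a fundamental diagram of the kind shown in Fig. 3 for the homogeneous street.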
Indeed, scenarios with very high densities usually perform better, in real life, when drivers switch to a new strategy (neither C nor D), alternating turns to cross the intersection instead of constantly stopping to yield to the vehicles on the right. We must remark that the flow is an average measure of the traffic. In order to get a complementary description we analyzed the distribution of velocities. In Fig. 5 we plot the mean value, v̄, the standard deviation, σ, and the skewness, k, of this distribution as functions of the density ρ. One can see that for small densities the velocity stays very near the maximum v_max (laminar flow), and then sharply decreases when the cars start to pile up at the intersection jam. The intermediate-density region follows a perfect inverse power law ρ^(-1), corresponding to the plateau of the flow shown before. A break in this law is seen at ρ ≈ 0.6, corresponding to the beginning of the high-density regime. The standard deviation is close to zero in the laminar-flow region. It then starts to grow when the flow enters the plateau region and exhibits a maximum for intermediate values of the density, indicating a large dispersion of the velocities. The dispersion diminishes for high values of the density: the traffic becomes more uniform and slower. For very high densities the average velocity is very small and so is the standard deviation. The skewness of the distribution complements this information. For small densities it is negative, since the maximum of the distribution corresponds to large values of the velocity. When increasing the density the skewness goes through zero, indicating a symmetric distribution. For high values of the density the distribution is centered at low velocities, and the skewness is positive. Another important variable to consider is the number of crashes. A crash does not only cause a time delay in traffic but also, in real systems, has an economic impact. In Fig. 6 we plot the number of crashes per car and per unit time, x, as a function of the density of traffic, for different values of the probability of cooperation p_c. One can see that there is a peak in the number of accidents at low densities. This is due to the fact that even though the number of cars is small they move fast, and so the number of accidents per unit time is large. On the contrary, when the density is high the average velocity is small and the number of accidents per unit time decreases almost to zero. For all densities, we also verify that the number of accidents decreases when increasing the probability of cooperation (and goes to zero when all drivers are cooperators). Also, more deterministic systems (not shown) display more crashes due to the higher vehicle speeds. The correlation between the rate of crashes and the flow is analyzed in Fig. 7. A hysteresis loop is observed. When the flow increases at low density the number of crashes increases very fast (due to the high vehicle speed), attaining a maximum when the flow enters the plateau (at a density ρ ≈ 0.1). Within the plateau the number of crashes diminishes, and when the flow decreases at high densities the number of crashes decreases further, going finally to zero at very high densities where the traffic comes to a standstill. Finally, in Fig. 8 we show the number of crashes normalized by the average velocity, x/v̄, versus the probability of cooperation p_c. This function decreases monotonically when increasing the probability of cooperation, as expected.
Moreover, as the curves corresponding to four values of the vehicle density show, there is a nearly universal exponential behavior as a function of the cooperation, independent of the density. This behavior could provide a good field test of our model in real situations and will be studied in further work.

Alternative rules

The rules of interaction between a defector in s1 and a cooperator in s2 (rule 2.b) imply some advantage for the latter, which is the opposite of some paradigmatic games of defection and cooperation like the Prisoner's Dilemma. In order to study this point we have analyzed some alternative rules to 2.b. They may be summarized as follows: 2.b1) The defector keeps his velocity as if the intersection did not exist, while the cooperator reduces his velocity to arrive exactly at the intersection at X. This rule clearly favors the defector by penalizing the cooperator (who has the priority). The cooperator is obliged to reduce his velocity and the situation is an almost crash but, differently from the original rule, the defector is not penalized. At the next time step the cooperator c2 accelerates from the point X. 2.b2) A slight variant of 2.b1: c2 reduces his velocity to arrive exactly at the intersection X and stops there, i.e. it will continue in the next step accelerating from v = 0. The situation for the defector is the same as in rule 2.b1. It is interesting and reassuring that these alternative rules produce no significant changes in the presented results, and for this reason we do not include new figures. Some minor differences arise when considering rule 2.b2, because the cooperators driving in street s2 are obliged to stop completely at the intersection; thus, they exhibit a lower average speed. But these changes are not relevant enough and show only small variations in the numerical values of the measured variables when compared with the general results already presented.

Conclusions

We studied the flow and speed of cars circulating on intersecting one-lane streets, where drivers coming from the right have precedence. The drivers may be cooperators, when they respect the right-hand precedence, or defectors if they ignore this rule. We observed some significant trends in the results, which we detail below. The flow increases linearly with the density of cars for very low densities and then remains constant over a wide range of densities. For high densities the flow decreases dramatically to values very near zero (see Fig. 2). This indicates the existence of two critical densities: the first when the system enters the plateau of constant flow and the second when it leaves the plateau. Within the plateau region the flow is constant, suggesting that the street has a "capacity" up to ρ ≈ 0.7. However, the width of the plateau decreases with the number of defectors in both streets s1 and s2. Also, "undecided" drivers who accelerate less than the maximum possible or brake more than needed reduce the global performance: the flow is much lower when p = q = 0.5 than when p = q = 0.9. These results can be confirmed by observing the velocity as a function of the density (Fig. 5). The velocity is maximal for low densities, then decreases in inverse proportion to the density for intermediate values and reaches zero for high densities. Again, the plateau region provides a good traffic flow, but the dispersion of the speeds is high. It is curious that the behavior of cooperation or defection is not very relevant.
Indeed, for intermediate or high densities of cars it is more important to be a "decided" driver, accelerating the most or braking the least, than to cooperate. The maximum flow is obtained when half of the drivers are defectors at intermediate densities, or when all the drivers are defectors at high densities. As a matter of fact, at high densities the absolute respect of the right-hand precedence can completely block the circulation in street s1, a well-known phenomenon in many real traffic situations. Nevertheless, one must keep in mind that defectors may be particularly dangerous when the density is low. In this case the average speed is high and the number of accidents can also be very high (see Fig. 7). One can conclude that a significant fraction of defectors is very dangerous at low densities, or in regions of high speed, but a number of them is necessary to increase the flow at intermediate or high densities of cars. Indeed, for very high densities the right-hand precedence becomes a hindrance and alternate crossing should be preferred. Comparisons with real data are in progress, as is the study of two-lane streets and the comparison between non-signalized and signalized crossings. We also plan to extend our model to the study of two-dimensional block-model cities, along the lines of Refs. [23,24].
2016-05-12T22:15:10.714Z
2013-04-16T00:00:00.000
{ "year": 2013, "sha1": "69b1228f771e0063d9f72d15826d67fd827f76c6", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0061876&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "69b1228f771e0063d9f72d15826d67fd827f76c6", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
220304292
pes2o/s2orc
v3-fos-license
Tracking Changes in SARS-CoV-2 Spike: Evidence that D614G Increases Infectivity of the COVID-19 Virus

Summary

A SARS-CoV-2 variant carrying the Spike protein amino acid change D614G has become the most prevalent form in the global pandemic. Dynamic tracking of variant frequencies revealed a recurrent pattern of G614 increase at multiple geographic levels: national, regional, and municipal. The shift occurred even in local epidemics where the original D614 form was well established prior to introduction of the G614 variant. The consistency of this pattern was highly statistically significant, suggesting that the G614 variant may have a fitness advantage. We found that the G614 variant grows to a higher titer as pseudotyped virions. In infected individuals, G614 is associated with lower RT-PCR cycle thresholds, suggestive of higher upper respiratory tract viral loads, but not with increased disease severity. These findings illuminate changes important for a mechanistic understanding of the virus and support continuing surveillance of Spike mutations to aid with the development of immunological interventions. Understanding whether mutations arising during the pandemic are relevant to the fitness or antigenic profile of the virus is important to ensure the effectiveness of the vaccines and immunotherapeutic interventions as they advance to the clinic. In response to the urgent need to develop effective vaccines and antibody-based therapeutics against SARS-CoV-2, over 90 vaccine and 50 antibody approaches are currently being explored (Cohen, 2020; Yu et al., 2020). Most target the trimeric Spike protein, which mediates host cell binding and entry and is the major target of neutralizing antibodies (Yuan et al., 2020). Spike monomers are comprised of an N-terminal S1 subunit that mediates receptor binding and a membrane-proximal S2 subunit that mediates membrane fusion (Hoffmann et al., 2020a; Walls et al., 2020; Wrapp et al., 2020). SARS-CoV-2 and SARS-CoV-1 share ~79% sequence identity (Lu et al., 2020), and both use angiotensin-converting enzyme 2 (ACE2) as their cellular receptor. Antibody responses to the SARS-CoV-1 Spike are complex. In some patients with rapid and high neutralizing antibody responses, an early decline of these responses was associated with increased severity of disease and a higher risk of death (Ho et al., 2005; Liu et al., 2006; Temperton et al., 2005; Zhang et al., 2006). Some antibodies against the SARS-CoV-1 Spike mediate antibody-dependent enhancement (ADE) of infection in vitro and exacerbate disease in animal models (Jaume et al., 2011; Wan et al., 2020; Wang et al., 2014; Yip et al., 2016). Most current SARS-CoV-2 immunogens and testing reagents are based on the Spike protein sequence of the Wuhan reference sequence, and first-generation antibody therapeutics were discovered based on the early pandemic infections and evaluated using Wuhan reference sequence proteins. Alterations from the reference sequence as the virus propagates in human-to-human transmission could potentially alter the viral phenotype and/or the efficacy of immune-based interventions. Therefore, we have designed bioinformatic tools to create an "early warning" strategy to evaluate Spike evolution during the pandemic, to enable the testing of mutations for phenotypic implications and the generation of appropriate antibody breadth evaluation panels as vaccines and antibody-based therapeutics progress.
Phylogenetic analysis of the global sampling of SARS-CoV-2 is being very capably addressed by the Global Initiative for Sharing All Influenza Data (GISAID) database (www.gisaid.org; Elbe and Buckland-Merrett, 2017; Shu and McCauley, 2017) and Nextstrain (nextstrain.org; Hadfield et al., 2018). In a setting of low genetic diversity like that of SARS-CoV-2, however, with very few de novo mutational events, phylogenetic methods that use homoplasy to identify positive selection (Crispell et al., 2019) have limited statistical power. Additionally, recombination can add a confounding factor to phylogenetic reconstructions: recombination is known to play a role in natural coronavirus evolution (Graham and Baric, 2010; Lau et al., 2011; Li et al., 2020; Oong et al., 2017; Rehman et al., 2020), and recombinant sequences (although potential sequencing artefacts) have been found among SARS-CoV-2 sequences (De Maio et al., 2020). Given these issues, we developed an alternative indicator of potential positive selection, by identifying variants that are recurrently becoming more prevalent in different geographic locations. If increases in the relative frequency of a particular variant are repeatedly observed in distinct geographic regions, that variant becomes a candidate for conferring a selective advantage. Single amino acid changes are worth monitoring, as they can be phenotypically relevant. Among coronaviruses, point mutations have been demonstrated to confer resistance to neutralizing antibodies in MERS-CoV (Tang et al., 2014) and SARS-CoV-1 (Sui et al., 2008; ter Meulen et al., 2006). In the HIV Envelope, single amino acid changes are known to alter host species susceptibility, increase expression levels (Asmal et al., 2011), change the viral phenotype from Tier 2 to Tier 1, causing an overall change in neutralization sensitivity (Gao et al., 2014; LaBranche et al., 2019), and confer complete or near-complete resistance to classes of neutralizing antibodies (Bricault et al., 2019; Sadjadpour et al., 2013; Zhou et al., 2019). We have developed a bioinformatic pipeline to identify Spike amino acid variants that are increasing in frequency across many geographic regions by monitoring GISAID data. By early April 2020, it was clear that the Spike D614G mutation was exhibiting this behavior, and G614 has since become the dominant form in the pandemic. We present experimental evidence that the G614 variant is associated with greater infectivity, as well as clinical evidence that it is associated with higher viral loads. We continue to monitor other mutations in Spike for frequency shifts at regional and global levels and to provide regular updates at a public web site (cov.lanl.gov).

Website Overview

Our analysis pipeline to track SARS-CoV-2 mutations in the COVID-19 pandemic is based on regular updates from the GISAID SARS-CoV-2 sequence database (GISAID acknowledgments are in Table S1). GISAID sequences are generally linked to the location and date of sampling. Our website provides visualizations and summary data that allow regional tracking of SARS-CoV-2 mutations over time. Hundreds of new SARS-CoV-2 sequences are added to GISAID each day, so we have automated steps to create daily working alignments (Kurtz et al., 2004) (Fig. S1). The analysis presented here is based on a May 29, 2020 download of the GISAID data, when our Spike alignment included 28,576 sequences; updated versions of the key figures can be recreated at our website (cov.lanl.gov).
The overall evolutionary rate for SARS-CoV-2 is very low, so we set a low threshold for a Spike mutation to be deemed "of interest", and we track all sites in Spike at which at least 0.3% of the sequences differ from the Wuhan reference sequence, monitoring them for increasing frequency over time in geographic regions, as well as for recurrence in different geographic regions. Here we present results for the first amino acid variant to stand out by these metrics, D614G.

The D614G variant
Increasing frequency and global distribution.
The Spike D614G amino acid change is caused by an A-to-G nucleotide mutation at position 23,403 in the Wuhan reference strain; it was the only site identified in our first Spike variation analysis in early March that met our threshold criterion. At that time, the G614 form was rare globally, but gaining prominence in Europe, and GISAID was also tracking the clade carrying the D614G substitution, designating it the "G clade". The D614G change is almost always accompanied by three other mutations: a C-to-T mutation in the 5' UTR (position 241 relative to the Wuhan reference sequence); a silent C-to-T mutation at position 3,037; and a C-to-T mutation at position 14,408 that results in an amino acid change in RNA-dependent RNA polymerase (RdRp P323L). The haplotype comprising these 4 genetically linked mutations is now the globally dominant form. Prior to March 1, it was found in 10% of 997 global sequences; between March 1-March 31, it represented 67% of 14,951 sequences; and between April 1-May 18 (the last data point available in our May 29th sample) it represented 78% of 12,194 sequences. The transition from D614 to G614 occurred asynchronously in different regions throughout the world, beginning in Europe, followed by North America and Oceania, then Asia (Figs. 1-3, S2-S3). We developed two statistical approaches to assess the consistency and significance of the D614-to-G614 transition. In general, to observe a significant change in the frequency of variants in a geographic region, three requirements must be met. First, both variants must at some point be co-circulating in the geographic area. Second, there must be sampling over an adequate duration to observe a change in frequency. Third, enough samples must be available for adequate statistical power to detect a difference. Both of our approaches enable us to systematically extract all GISAID local and regional data that meet these three requirements. Our first approach requires that there be an "onset", defined as the first day on which the cumulative number of sequences reached 15 and both forms were represented at least 3 times; we further require that there be at least 15 sequences available at least two weeks after the onset. Each geographic region that meets these criteria is extracted separately, based on the hierarchical geographic/political levels designated in GISAID (Fig. 1B). A two-sided Fisher's exact test compares the counts in the pre-onset period to the counts after the two-week delay period and provides a p-value against the null hypothesis that the fraction of D614 vs. G614 sequences did not change. All regions that met the above criteria and that showed significant change in either direction (p<0.05) are included. Almost all shifted towards increasing G614 frequencies: 5/5 continents; 16/17 countries (two-sided binomial p-value of 0.00027); 16/16 regions (p = 0.00003); and 11/12 counties and cities (p = 0.0063).
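As a hedged illustration of the two tests just described, the sketch below uses scipy; the per-region counts are invented for the example, and only the 16-of-17 country tally comes from the text.

from scipy.stats import fisher_exact, binomtest

pre_onset  = [12, 3]    # [D614, G614] counts before the onset (hypothetical)
post_delay = [4, 26]    # counts at least two weeks after the onset (hypothetical)
_, p = fisher_exact([pre_onset, post_delay])  # two-sided by default
print(f"Fisher's exact p = {p:.4g}")          # small p => the D614:G614 ratio changed

# Across regions with a significant change, ask whether shifts toward G614
# outnumber chance expectation; e.g., 16 of 17 countries shifted upward.
print(binomtest(16, n=17, p=0.5, alternative="two-sided").pvalue)  # ~0.00027

Running the binomial test on the reported 16/17 country tally reproduces the two-sided p-value of 0.00027 quoted above.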
While a shift in any single region might be explained by the expansion of the new form due to stochastic effects or serial re-introductions, or by apparent emergence due to sampling biases, the consistency of the shift to G614 across regions is striking. The increase in G614 often continued after national stay-home-orders were implemented, and in some cases beyond the 2-week maximum incubation period. We found two exceptions to the pattern of increasing G614 frequency in Figure 1B; details regarding these cases are shown in Figure S4. The first is Iceland. Changes in sampling strategy during a regional molecular epidemiology survey conducted through the month of March might explain this exception (Gudbjartsson et al., 2020). In early March, only high-risk people were sampled, the majority being travelers from countries in Europe where G614 dominated. In mid-March, screening began to include the local population; this coincided with the appearance of the D614 variant in the sequence data set. The second exception is Santa Clara county, one of the most heavily sampled regions in California (Deng et al., 2020). The D614 variant dominates sequences from the Santa Clara Department of Public Health (DPH) to date; the G614 variant was apparently not established in that community. In contrast, a smaller set of Santa Clara county sequences, sampled mid-March to early April, were specifically noted to be from Stanford: the Stanford samples had a mixture of both forms co-circulating (Fig. S4), suggesting that the two communities within Santa Clara County are effectively distinct. A June 19th GISAID update for several California counties is provided in Figure S4C, and the G614 form is present in the most recent Santa Clara DPH samples. Our second statistical approach to evaluating the significance of the D614-to-G614 transition (Fig. 3) uses the time-series data in GISAID more fully. Here we extracted all regional data from GISAID that had a minimum of 5 sequences representing each of the D614 and the G614 variants, and at least 14 days of sampling. We then modeled the daily fraction of G614 as a function of time using isotonic regression, testing the null hypothesis that this fraction does not change over time (i.e., it remains roughly flat, with equally likely random fluctuations of increase or decrease). We then separately tested the null against two alternative hypotheses: that the fraction of G614 either increases, or that it decreases. Figure 3A shows separate p-values for all subcountries/states and counties/cities that met the minimal criteria. 30 of 31 subcountries/states with a significant change in frequency were increasing in G614; a binomial test indicates that G614 increases are highly significantly enriched (p-value = 2.98e-09). This was also found in 17 of 19 counties/cities (p-value = 0.0007). Figure 3B shows examples for 3 cities, plotting the daily fraction of G614 as a function of time. Country summaries (similar to Fig. 3A) and plots for all regions (similar to Fig. 3B) are included in Data S1.

Origins of the D614G 4-base haplotype.
The earliest examples of sequences carrying parts of the 4-mutation haplotype that characterizes the D614G GISAID G clade were found in China and Germany in late January, and they carried 3 of the 4 mutations that define the clade, lacking only the RdRp P323L substitution (Fig. S5D). This may be an ancestral form of the G clade. One early Wuhan sequence and one early Thai sequence had the D614G change, but not the other 3 mutations (Fig. S5D); these may have arisen independently.
The earliest sequence we detected that carried all 4 mutations was sampled in Italy on Feb. 20 (Fig. S5D). Within days, this haplotype was sampled in many countries in Europe.

Structural implications of the Spike D614G change.
D614 is located on the surface of the Spike protein protomer, where it can form contacts with the neighboring protomer (Fig. 4A). Cryo-EM structures (Walls et al., 2020; Wrapp et al., 2020) indicate that the side chains of D614 and T859 of the neighboring protomer (Fig. 4B) form a between-protomer hydrogen bond, bringing together a residue from the S1 unit of one protomer and a residue of the S2 unit of the other protomer (Fig. 4C). The change to G614 would eliminate this side-chain hydrogen bond, possibly increasing main-chain flexibility and altering between-protomer interactions. In addition, this substitution could modulate glycosylation at the nearby N616 site, influence the dynamics of the spatially proximal fusion peptide (Fig. 4D) of the neighboring protomer, or have other effects.

G614 is associated with potentially higher viral loads in COVID-19 patients but not with disease severity.
SARS-CoV-2 sequences from 999 individuals presenting with COVID-19 disease at the Sheffield Teaching Hospitals NHS Foundation Trust were available and linked to clinical data. The Sheffield data include age, sex, date of sampling, hospitalization status (defined as outpatient, OP; inpatient, IP, requiring hospitalization; or admittance into the intensive care unit, ICU), and the cycle threshold (Ct) for a positive signal in E-gene based RT-PCR. The Ct is used here as a surrogate for relative viral loads; lower Ct values indicate higher viral loads (Corman et al., 2020), though not all viral nucleic acids represent infectious viral particles. RT-PCR methods changed during the course of the study due to limited availability of testing kits. The first method involved nucleic acid extraction; the second method, heat treatment (Fomsgaard and Rosenstierne, 2020). A generalized linear model (GLM) used to predict PCR Ct based on the RT-PCR method, sex, age, and D614G status showed only the RT-PCR method (p < 2e-16) and D614G status (p = 0.037) to be statistically significant (Fig. 5A). Lower Ct values were observed in G614 infections. While our paper was in revision, the G614-variant association with low Ct values in vivo (Fig. 5) was independently reported by two other groups (Lorenzo-Redondo et al., 2020; Wagner et al., 2020), in preprints that have not yet been peer reviewed. We found no significant association between D614G status and disease severity as measured by hospitalization outcomes. A comparison of D614G status and hospitalization (combining IP+ICU) was not significant (p = 0.66, Fisher's exact test), although comparing ICU admission with (IP+OP) did have borderline significance (p = 0.047) (Fig. 5B). Regression analysis reinforced the result that G614 status was not associated with greater levels of hospitalization, but that higher age (Dowd et al., 2020; Promislow, 2020), male sex (Conti and Younes, 2020; Promislow, 2020) and higher Ct values (lower viral loads) were each highly predictive of hospitalization. Further analysis showed that viral load was not masking a potential D614G status effect on hospitalization (see STAR Methods). Univariate analysis also found highly significant associations between age and male sex and hospitalization (see STAR Methods).

G614 is associated with higher infectious titers of spike-pseudotyped virus.
We quantified the infectious titers of pseudotyped single-cycle vesicular stomatitis virus (VSV) and lentiviral particles displaying either D614 or G614 SARS-CoV-2 Spike protein. For both the VSV and lentiviral pseudotypes, G614-bearing viruses had significantly higher infectious titers (2.6- to 9.3-fold increase) than their D614 counterparts; this was confirmed in multiple cell types (Figure 6A-C). Similar results recently reported in a preprint that has not yet been peer reviewed also suggest that G614 increases both spike stability and membrane incorporation. TMPRSS2, a type-II transmembrane serine protease, cleaves the viral spike after receptor binding to enhance entry of MERS-CoV, SARS-CoV and SARS-CoV-2 (Hoffmann et al., 2020b; Kleine-Weber et al., 2018; Matsuyama et al., 2020; Millet and Whittaker, 2014; Park et al., 2016; Shulla et al., 2011; Zang et al., 2020). Spike 614 is in a pocket adjacent to the fusion peptide near the expected TMPRSS2 cleavage site, suggesting there could be differences in the propensity and/or requirement for TMPRSS2 of the G614 variant. To test this hypothesis, we infected 293T cells stably expressing the ACE2 receptor in the presence or absence of TMPRSS2, and quantified the titer of infectious virus. We found similar fold-changes in the titers between D614 and G614 regardless of TMPRSS2 expression (Fig. 6A). Hence, entry of G614-bearing viruses into 293T-ACE2 cells, as compared to D614-bearing viruses, is not enhanced by TMPRSS2. Further studies are required to determine if the G614 variant shows increased titers in lung cells, which may recapitulate native protease expression levels more faithfully, and to determine if this variant increases the fitness of authentic SARS-CoV-2. We also tested whether the D614 and G614 variants would be similarly neutralized by polyclonal antibodies. The convalescent sera of six San Diego residents, likely infected in early to mid-March when both D614 and G614 were circulating, each demonstrated equivalent or better neutralization of G614-bearing pseudovirus compared to D614-bearing pseudovirus (Fig. 6D-E). Although we do not know with which virus each of these individuals was infected, these initial data suggest that despite increased fitness in cell culture, G614-bearing virions are not intrinsically more resistant to neutralization by convalescent sera.

Additional sites of interest in the Spike gene with rare mutations
Spike has very few mutations overall. A small set has reached >0.3% of the global population sample, the threshold for automatic tracking at the cov.lanl.gov website (Fig. 7A and B, details provided in Table S2). Regions in the alignment where entropy is relatively high compared to the rest of Spike (i.e., local clusters of rare mutations) are also tracked (Table S2). Genetic mutations of interest are mapped as amino acid changes onto a Spike structure (Figure 4). The mutation resulting in the signal peptide L5F change recurs many times in the tree and is stably maintained in about 0.6% of the global GISAID data. There are several clusters of mutations in the region of the spike gene encoding the N-terminal domain (NTD) and RBD, which are potential targets for neutralizing antibodies (Chen et al., 2017; Zhou et al., 2019a; Sui et al., 2008; Tang et al., 2014; ter Meulen et al., 2006). The RBD cluster (positions 475-483) spans two positions, at 475 and 476, that are located within 4Å of bound ACE2 (Fig. 4D) (Yan et al., 2020).
The fusion peptide contains a cluster of amino acid changes between 826-839; this cluster is highlighted in Fig. 7 to illustrate our web-based tools for tracking variation (Fig. 7A-C). The fusion core of HR1, next to the helix break in pre-fusion Spike, also contains a cluster of amino acid changes, spanning positions in the 930s through 940 (Fig. 4E). The motif SXSS (937-940) may enhance the association of helices (Dawson et al., 2002; Salamango and Johnson, 2015). The cytoplasmic tail of Spike also contains a site of interest, P1263L.

Discussion
Our data show that over the course of one month, the variant carrying the D614G Spike mutation became the globally dominant form of SARS-CoV-2. Phylogenetic tracking of SARS-CoV-2 variants at Nextstrain reveals complex webs of evolutionary and geographical relationships (nextstrain.org; Hadfield et al., 2018); travelers dispersed G614 variants globally, and likely would have introduced and reintroduced G614 variants into different locations. Still, D614-prevalent epidemics were very well established in many locations when G614 first began to appear (see Fig. S2 for examples). The mutation that causes the D614G amino acid change is transmitted as part of a conserved haplotype defined by 4 mutations that almost always track together (Fig. S5 and S6). The pattern of increasing G614 frequency within many different populations where both D614 and G614 were co-circulating is highly significant, suggesting that G614 may be under positive selection (Figs. 1B, 3). We also found G614 to be associated with higher levels of viral nucleic acid in the upper respiratory tract in human patients (Fig. 5) (suggestive of higher viral loads), and with higher infectivity in multiple pseudotyping assays (Fig. 6). Given that most G614 variants belong to the G clade lineage, phylogenetic methods that depend upon recurrence of mutational events for their signal are poorly powered to resolve whether D614G is under positive selection. The GISAID data, however, provided the opportunity to look into the relationships among the SARS-CoV-2 variants in the context of time and geography, enabling us to track the increase in frequency of G614 as an early indicator of possible positive selection. This approach is potentially subject to founder effects and sampling biases, and so we generally view this strategy as simply an early indicator of an amino acid change that should be monitored further and tested. The G614 variant stood out, however, in our early detection framework for several reasons. First was the consistency of increase across geographic regions, which was highly significantly non-random (Figs. 1B and 3). Second, if the two forms were equally likely to propagate, one would expect the D614 form to persist in many locations where the G614 form was introduced into the ongoing well-established D614 epidemics. Instead, we found that even in such cases, G614 increased in frequency (Figs. S2 and S3). Third, the increase in G614 frequency often continued well after national stay-at-home orders were in place, when serial reseeding from travelers was likely to be significantly reduced (Figs. 2, S2 and S3). Our global tracking data show that the G614 variant in Spike has spread faster than D614. We interpret this to mean that the virus is likely to be more infectious, a hypothesis consistent with the higher infectivity of G614 Spike-pseudotyped viruses we observed in vitro (Fig. 6), and with the G614 variant association with lower patient Ct values, indicative of potentially higher in vivo viral loads (Fig. 5).
Interestingly, we did not find evidence of G614 impact on disease severity; i.e., it was not significantly associated with hospitalization status. However, an association between the G614 variant and higher fatality rates has been reported in a comparison of mortality rates across countries, although this kind of analysis can be complicated by different availability of testing and care in different nations (Becerra-Flores and Cardozo, 2020). While higher infectiousness of the G614 variant may fully account for its rapid spread and persistence, other factors should also be considered. These include epidemiological factors, as viral spread also depends on whom it infects, and epidemiological influences can also cause changes in genotype frequency that mimic evolutionary pressures. In all likelihood, a combination of evolutionary selection for G614 and the founder effects of being introduced into highly mobile and connected populations may together have contributed to its rise. The G-clade mutations in the 5' UTR or in the RdRp protein might also have an impact. In addition, there could be immunological consequences resulting from the G614 change in Spike. The G614 variant is sensitive to neutralization by polyclonal convalescent sera (Fig. 6), which is encouraging in terms of immune interventions, but it will be important to determine whether the D614 and G614 forms of SARS-CoV-2 are differentially sensitive to neutralization by vaccine-elicited antibodies or by antibodies produced in response to infection with either form of the virus. Also, if the G614 variant is indeed more infectious than the D614 form (Fig. 6), it may require higher antibody levels for protection by vaccines or antibody therapeutics than the D614 form. Antibodies against an immunodominant linear epitope spanning Spike 614 in SARS-CoV-1 were associated with ADE activity (Wang et al., 2016), and so it is possible that this mutation may impact ADE. Tracking mutations in the Spike gene has been our primary focus to date because of its relevance to vaccine and antibody-based therapy strategies currently under development. Such interventions take months to years to develop. For the sake of efficiency, contemporary variation should be factored in during development to ensure that the interventions will be effective against circulating variants when they are eventually deployed. To this end, we built a data-analysis pipeline to enable the exploration of potentially interesting mutations in SARS-CoV-2 sequences. The analysis is updated daily as the data become available through GISAID, enabling experimentalists to make use of the most current data available to inform vaccine development, reagents for evaluating antibody responses, and experimental design. The speed with which the G614 variant became the dominant form globally suggests the need for continued vigilance.

Limitations of this study
Shifts in frequency towards the G614 variant in any given geographic region could in principle result from either founder effects or sampling biases; it was the consistency of this pattern across regions where both forms of the virus were initially co-circulating that led us to suggest that the G614 form might be transmitted more readily due to an intrinsic fitness advantage; however, systematic biases across many regions could impact the levels of significance we observed. The lack of association between G614 and hospitalization that we report may miss impacts on disease severity that are more subtle than we can detect.
The experimental approach taken here to acquire laboratory evidence of increased fitness of the D614G mutation is based on two different pseudovirus models of infection in established cell lines. The extent to which this model faithfully recapitulates wild-type virus infection in natural target cells of the respiratory system is still being determined, and our laboratory experiments do not directly address the biology and mechanics of natural transmission. Infectiousness and transmissibility are not always synonymous, and more studies are needed to determine if the D614G mutation actually led to an increased number of infections, not just higher viral loads during infection. We encourage others to study this phenomenon in greater detail with wild-type virus in natural infection and varied target cells (Hou et al., 2020), and in relevant animal models. Finally, the neutralization assays performed were based on sera from SARS-CoV-2 infected individuals with an unknown D614G status. Thus, while they show that the G614 variants are neutralization sensitive, more work is needed to resolve whether the potency of neutralization is affected when the variant that initiated the immune response differs from the test variant, or when monoclonal antibodies are used.
Acknowledgments
We thank Nguyen for sharing preliminary MD data, Alessandro Sette and Shane Crotty for survivor sera, and Sharon Schendel for manuscript edits. We also acknowledge Barney Graham. We gratefully acknowledge the team at GISAID for creating the SARS-CoV-2 global database, and the many people who provided sequence data (Table S1).
Declaration of Interests
The authors declare no competing interests.
Main Figure Titles and Legends
Fig. 1. Running counts of the D614 (orange) and G614 (blue) variants in different continents between January 12 and May 12. The measure of interest is the relative frequency over time. The shape of the overall curve just reflects sample availability: sequencing was more limited earlier in the epidemic (hence the left-hand tail), and there is a time lag between viral sampling and sequence availability in GISAID (hence the right-hand tail). Weekly running count plots were generated with Python Matplotlib (Hunter, 2007); all elements of this figure are frequently updated at cov.lanl.gov.
Fig. 2. Some countries had essentially G614 epidemics when sampling began, but even in these cases small traces of D614 found early on were soon lost (e.g., France and Italy). The Italian epidemic started with the D614 clade, but Italy had the first sampled case of the full G614 haplotype, and had shifted to all-G614 samples prior to March 1 (see Fig. S5). European nations that began with a mixture of D614 and G614 most clearly reveal the frequency shifts (e.g., Germany, Spain and the United Kingdom). The UK is richly sampled, and so is subdivided into smaller regions: England, Wales, and Scotland, then further divided to display two well-sampled English cities. Even in settings with very well-established D614 epidemics (e.g., Wales and Nottingham; also see Figs. S2 and S3), G614 becomes prevalent soon after its appearance. The increase in G614 frequency often continues well after stay-at-home orders are in place (pink line) and past the subsequent two-week incubation period (pink transparent box). The figures shown here can be recreated with contemporary data from GISAID at the cov.lanl.gov website. UK stay-at-home order dates were based on the date of the national proclamation; others were documented on the web (Schramm and Melin, 2020).
Fig. 3. A. The one significantly decreasing time window in California was driven by sampling from Santa Clara county, a rare region that has retained the D614 form (Fig. S4). In the May 29th data set used here, Santa Clara county was sampled later in May than any other region in California, so the California G614 frequency dips at this last available time point. If Santa Clara county is removed from the California sample, the pattern of increasing levels of G614 is restored (red asterisk). B. Three examples for cities, plotting the daily fraction of G614 as a function of time, accompanied by plots of running weekly counts. The dot size is proportional to the number of sequences sampled that day. The staircase line is the maximum likelihood estimate under the constraint that the logarithm of the odds ratio is non-decreasing. Two typical examples are shown highlighted in blue (Sydney and Cambridge), and one exception is shown highlighted in orange (Yakima). Yakima had a brief sampling window enriched for G614 early in the sampling period, but otherwise G614 maintained a low frequency. Summaries and plots for all regional data at levels 2-4 (including country) are included in Data S1.
Fig. 5. A) G614 infections were associated with lower RT-PCR Ct values, suggestive of higher viral loads. The PCR method was changed part way through April due to shortages of nucleic-acid extraction kits (Fomsgaard and Rosenstierne, 2020). Ct levels for the two PCR methods (nucleic acid extraction vs. simple heat inactivation) differ, and so we used a GLM to evaluate the statistical impact of D614G across methods. B) D614G status was not statistically associated with hospitalization status (outpatient (OP), inpatient (IP), or ICU) as a marker of disease severity, but age was highly correlated. The number of counts in each category is noted in the upper right-hand corner of each graph. See the main text and methods for statistical details.
Fig. 6. Viral infectivity and D614G associations. A) Recombinant VSV pseudotyped with the G614 Spike grows to higher titer than with the D614 Spike in Vero, 293T-ACE2 and 293T-ACE2-TMPRSS2 cells, as measured in terms of focus forming units (ffu). Four asterisks (****) indicate p < 0.0001 by a Student's t test in pairwise comparisons. Experiments were repeated twice, each time in triplicate. Using a GLM to assess viral infectivity of the D614 and G614 variants across cell types and to account for repeat experiments, we found the G614 variant had, on average, a 3-fold higher infectious titer than D614, and this difference was highly significant (p = 9x10^-11) (see STAR Methods). B, C) Recombinant lentiviruses pseudotyped with the G614 Spike were more infectious than corresponding D614 Spike-pseudotyped viruses in (B) 293T/ACE2 (6.5-fold increase) and (C) TZM-bl/ACE2 cells (2.8-fold increase, p < 0.0001). Relative luminescence units (RLUs) of Luc reporter gene expression (Naldini et al., 1996) were standardized to the p24 content of the pseudoviruses (p24 content of pseudoviruses for 293T/ACE2 cells: D614 = 269 ng/ml, G614 = 255 ng/ml; p24 content of pseudoviruses for TZM-bl/ACE2 cells: D614 = 680 ng/ml, G614 = 605 ng/ml). Background RLU was measured in wells that received cells but no pseudovirus. D, E) Convalescent sera from six individuals in San Diego (4765-4767 and 4774-4777) can neutralize both D614- (orange) and G614- (blue) bearing VSV pseudoviruses. Samples 1592 and 1616 (grey) are negative control normal human sera. Percent relative infection is plotted vs. log polyclonal antibody concentration.
Other sites of interest cluster in a main lineage, but are occasionally found in other parts of the tree in distant geographic regions, and are thus likely to be recurring at a low level. B. Table indicating the sites of interest being tracked (see also Table S2). Contemporary versions of these figures can be created at cov.lanl.gov. Care should be taken to try to avoid systematic sequencing errors and processing artefacts among rare variants (for example, see the nanopore primer-trimming artifact described with the supplemental figures).
Fig. S1. The intent of these procedures is to generate, for each of several regions, a set of contiguous codon-aligned sequences, complete in that region, without extensive uncalled bases, large gaps, or regions that are unalignable or highly divergent, in reasonable running time for n > 30,000 ~30 kb viral genomes. This allows daily processing of GISAID data to enable us to track mutations. This process provides the foundational data to enable the generation of Figs. 1-3 and 7. A. Processing procedures: 1. Download all SARS-CoV-2 sequences from GISAID.org (34,607 as of 2020-05-29). The downloaded sequences are stored in compressed form (via bzip2: https://sourceware.org/git/bzip2.git). 2. Align sequences to the SARS-CoV-2 reference sequence (NC_045512), trim to desired endpoints, and filter for coverage and quality. These steps are incorporated in a single Perl script, 'align_to_ref.pl', briefly summarized here: sequences are compressed for identity, then mapped against the given reference sequence using 'nucmer' from the 'MUMmer' package (Kurtz et al., 2004). The nucmer 'delta' file contains locations of matching regions and is parsed and used to, first, partition the sequences into "good" and "bad" subsets, and then to generate alignments from the "good" sequences. B. Categories of sequences included in and excluded from our automated alignments. A series of criteria is used successively to exclude sequences with large internal gaps, excessive five- and three-prime gaps, large numbers of mismatches or ambiguities (>30) overall, or regions with a high concentration of mismatches or ambiguities (>10 in any 100 nt subsequence); the counts of these categories of "bad" sequences are shown for the different regional genome alignments. We then create the following regional subalignments: CODING-REGIONS ("FULL"), from the 5'-most start codon (orf1ab) to the 3'-most stop codon (ORF10), NC_045512 bases 266-29,674; SPIKE, the complete surface glycoprotein coding region, bases 21,563-25,384; NEAR-COMPLETE ("NEARCOMP"), the most commonly sequenced region of the genome, bases 55-29,836; COMPLETE, matching the NC_045512 sequence from the start up to the poly-A tail, bases 1-29,870; and 5' UTR, the five-prime untranslated region, bases 1-265 only. Generally speaking, the smaller the region, the more sequences are included.
Sequences are trimmed to the extent of the reference (with minimum allowed gaps at 5' and 3' ends), following which the pairwise alignments are generated from the matching regions, and a multiple sequence alignment is constructed from the pairwise alignments. 3. De-duplicate. To reduce computational demands, sequences are compressed by identity following trimming to the desired region, by computing a hash value for each sequence (currently the SHA-1 message digest, 160 bits encoded as a 40-character hex string). To prevent the loss of parsimony-informative characters when they occur in identical strings, however, multiple sequences are reduced to a minimum of two occurrences. 4. Codon-align. Gaps are introduced into the entire compressed alignment so that the alignment column containing the last base of each codon has a number divisible by three; this simplifies processing of translations. Code for this procedure is derived from the GeneCutter tool from the LANL HIV database (https://www.hiv.lanl.gov/content/sequence/GENE_CUTTER/cutter.html). 5. Partition (full/spike-only). For subalignments that encompass the spike protein and substantial additional sequence, the spike region is extracted separately, to allow matched comparisons. 6. Build parsimony trees. A brief parsimony search (parsimony ratchet, with 5 replicates) is performed with 'oblong' (Goloboff, 2014). This is intended as an efficient clustering procedure rather than an explicit attempt to achieve an accurate phylogenetic reconstruction, but it appears to yield reasonable results in this situation of a very large number of sequences with a very small number of changes, where more complex models may be subject to overfitting. When multiple most-parsimonious trees are found, only the shortest of these (under a p-distance criterion) is retained. Distance scoring is performed with PAUP* (Swofford, 2003). 7. Re-duplicate (expand, i.e., uncompress). The original sequence names and occurrence counts are restored to FastA format files and the appropriate leaf taxa are added to the parsimony trees. 8. Sort alignment by tree. Sequences in the FastA files are sorted by the expanded tree, allowing patterns of mutation to be discerned by inspection. 9. Mutations of interest can be readily tracked on the trees to resolve whether they are identified predominantly in single clades or distributed throughout the tree and likely to be recurring (e.g., Fig. 7, sites of interest with low-frequency amino acid substitutions, and Fig. S6, site 614).
Fig. S2. This figure is formatted as in Figs. 2 and S3; the Fig. 1 legend has details about how to read these figures. When a particular stay-at-home order date was known for a state or county it is shown as a pink line, followed by a light pink block indicating the maximum two-week incubation time. Different counties in California had different stay-at-home order dates (Mar. 16-19), so these are not highlighted; more detail regarding California can be seen in Fig. S4. The decline in D614 frequency often continues well after the stay-at-home orders were initiated, and sometimes beyond the 14-day maximum incubation period, when serial reintroduction of G614 would be unlikely. On the right, Washington State is shown, with details from two heavily sampled counties, Snohomish and King. Both counties had well-established ongoing D614 epidemics when G614 variants were introduced, undoubtedly by travelers. Washington state's stay-at-home order was initiated March 24.
At this time there were 1170 confirmed cases in King County and 614 confirmed cases in Snohomish County (confirmed COVID-19 case count data from: COVID-19 Data Repository by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University). Testing was limited, so this is a lower bound on actual cases. Of the sequences sampled by March 24, 95% from King County (153/161) and 100% from Snohomish County (33/33) were the original D614 form (Part B, details at cov.lanl.gov). By mid-April, D614 was rarely sampled. Whatever the geographic origin of the G614 variants that entered these counties, and whether one or multiple G614 variants were introduced, the rapid expansion of G614 variants occurred in the framework of well-established local D614 variant epidemics. Santa Clara county is one of the two exceptions to the pattern of D614 decline in Fig. 1B; details are provided in Fig. S4A and C.
Fig. S3. This figure is formatted as in Fig. 2 and Fig. S2; the Fig. 1 legend has details about how to read these figures. The plot representing national sampling in Australia is on the left, with two regional subsets of the data on the right. In each case a local epidemic started with the D614 variant, and despite it being well established, the G614 variant soon dominates the sampling. Only limited recent sampling from Asia is currently available in GISAID; to include more samples on the map, the 10-day period between March 11-20 is shown rather than the period between March 21-30; even the limited sampling in mid-March supports the repeated pattern of a shift to G614. The Asian epidemic was overwhelmingly D614 through February, and despite this, G614 repeatedly becomes prominent in sampling by mid-March.
Fig. S4. A. Details regarding Santa Clara county, one of the two exceptions to the pattern of D614 decline in Fig. 1B. Many samples from the Santa Clara County Department of Public Health (DPH) were obtained from March into May, and D614 has steadily dominated the local epidemic among those samples. The subset of Santa Clara county samples specifically labeled "Stanford", however, were sampled over a few weeks mid-March through early April, and have a mixture of both the G614 and D614 forms. These distinct patterns suggest relatively little mixing between the two local epidemics. Why Santa Clara county DPH samples should maintain the original form is unknown, but one possibility is that they may represent a relatively isolated community that had limited exposure to the G614 form, so G614 may not have had the opportunity to become established in this community, though this may be changing (see Part C). The local stay-at-home orders were initiated relatively early, March 16, 2020. B. Details regarding Iceland, the only country with an exceptional pattern in Fig. 1B. All Icelandic samples are from Reykjavik, and only G614 variants were initially observed there, with a modest but stable introduction of the original form D614 in mid-March. This atypical pattern might be explained by local sampling. The Icelanders conducted a detailed study of their early epidemic (Gudbjartsson et al., 2020), and all early March samples were collected from high-risk travelers from Europe and people in contact with people who were ill; the majority of the traveler samples from early March were from people coming in from Italy and Austria, and G614 dominated both regions. On March 13, they began to sequence samples from local population screening, and on March 15, more travelers from the UK and USA with mixed G614/D614 infections began to be sampled in the high-risk group; those events were coincident with the appearance of D614. C.
Updated data regarding California from the June 19, 2020 GISAID sampling. Most of the analysis in this paper was undertaken using the May 29, 2020 GISAID download, but as California was an interesting outlier, and more recent sampling conducted while the paper was under review was informative, we have included some additional plots from California data that were available at the time of our final response to review, on June 19th. Informative examples from well-sampled local regions are shown. Stay-at-home order dates are shown as a pink line, followed by a light pink block indicating the maximum two-week incubation time. N indicates the number of available sequences. Overall California, and specifically San Diego and San Joaquin, show a clear shift from D614 to G614. The transition for San Joaquin came well after the stay-at-home orders and incubation period had passed. San Francisco shows a trend towards G614. Santa Clara DPH, which was essentially all D614 in our May 29th GISAID download, had 7 G614 forms sampled in late May that were evident in our June 19th GISAID download. Ventura is an example of a setting that was essentially all G614 when it began to be sampled significantly in early April, so a transition cannot be tracked; i.e., we cannot differentiate in such cases whether the local epidemic originated as a G614 epidemic, or whether it went through a transition from D614 to G614 prior to sampling. The figures in Parts A, B, and C can be recreated with more current data at https://cov.lanl.gov.
Fig. S5. A. Weekly running counts, formatted as in Figures 1 and 2, of the earliest forms of the virus carrying G614. B. The GISAID G clade is based on a 4-base haplotype that distinguishes it from the original Wuhan form. Part B shows the tallies of all of the variants among the 4 base mutations that are the foundation of the G clade haplotype, versus the bases found in the original Wuhan reference strain, and highlights some of the earliest identified sequences bearing these mutations. We first consider just the 3 mutations that are in coding regions, to enable using an alignment that contains a larger set of sequences. One is in the RdRp protein (nucleotide C14408T, resulting in a P323L amino acid change), one is in Spike (nucleotide A23403G, resulting in the D614G amino acid change) and one is silent (C3037T). The other mutation is in the 5' UTR (C241T), and tallies based on all 4 positions are done separately, as they are based on an alignment with fewer sequences. The earliest examples of a partial haplotype, TTCG, are found in Germany and China (Parts A and B). This form was present in Shanghai but never expanded (Part A). A cluster of infections that all carried this form were identified in Germany, but they did not expand and were subsequently replaced with the original Wuhan form, only to be replaced again by sequences that carry the full 4-base haplotype variant TTTG. The first example in our alignments (see Fig. S1) found to carry the full 4-base haplotype was sampled in Italy on Feb. 20. The first cases in Italy were of the original Wuhan CCCA form, but by the end of February TTTG was the only form sampled in Italy, and it is the TTTG form that has come to dominate the pandemic. Of note, the TTCG form did not expand, and it lacked the RdRp P323L change, raising the possibility that the P323L change may contribute to a selective advantage of the haplotype. The 4 positions are almost always linked in SARS-CoV-2 genomes: >99.9% of sequences carry either the full TTTG or the full CCCA pattern (Part B).
The number of cases where the clade haplotype is disrupted, and the form of the disruption, are all noted in Part B (not including ambiguous base calls). Some cases of a disrupted haplotype may be due to recombination events and not de novo mutations. Given that these two forms were co-circulating in many communities throughout the spring of 2020, and that disruptions in the 4-base pattern are rare, recombination appears to be relatively rare overall among pandemic sequences.
Fig. S6. This tree is the same as the tree shown in Fig. 6A, but highlights complementary information: the G614 substitution, and the patterns of bases that underlie the clades. It is based on the "FULL" alignment of 17,760 sequences, from the June 2 alignment described in the pipeline in Fig. S1, from the first start codon (orf1ab) to the last stop codon (ORF10), NC_045512 bases 266-29,674. The outer element is a radial presentation of a full-coding-region parsimony tree; branches are colored by the global region of origin for each virus isolate. The inner element is a radial bar chart showing the identity of common mutations (any of the top 20 single-nucleotide mutations from the June 2 alignment), so that sectors of the tree containing a particular mutation at high frequency are subtended by an inner colored arc; mutations not in the top 20 are presented together in gray. The tree is rooted on a reference sequence derived from the original Wuhan isolates (GenBank accession number NC_045512), at the 3 o'clock position. Branch ends representing sequence isolates bearing the D614G change are decorated with a gray square; sectors of the tree containing that mutation are subtended by a dark blue arc in the inner element; other mutations are denoted by different colors. As an example, in this tree, the region from approximately 12:30 to 3 o'clock represents GISAID's "GR" clade, defined both by the mutations we are tracking in this paper that carry the G614 variant (the GISAID G clade, defined by mutations A23403G, C14408T, C3037T, and a mutation in the 5' UTR, C241T, not shown here), and an additional 3-position polymorphism: G28881A + G28882A + G28883C. These base substitutions are contiguous and result in two amino acid changes, including N-G204R, hence GISAID's "GR clade" name. Close examination of this triplet in sequences from the Sheffield dataset suggests the mutations are not a sequencing artifact. The outer phylogenetic tree was computed using oblong (see STAR Methods) and plotted with the APE package in R. The inner element is a bar chart plotted with polar coordinates using the ggplot2 package in R. The frequency of the GR clade appears to be increased in the UK and Europe as a subset of the regional G clade expansion, given that both carry G614.
A. Read alignment files from nanopore sequencing data of amplicons produced by the ARTIC network protocol. Raw data from amplicon 81 contain a portion of adapter sequence which is homologous to the reference genome, apart from the C variants which lead to an S943P mutation call. This region is therefore included in variant calling if location-based trimming is not carried out. Subsequent panels show that this region is soft-clipped when trimming adapters and primers and is therefore not available for variant calling. B. Base frequencies at position 24389 in 23 samples from the Sheffield data show that C is present in half of the reads in the raw data, but is absent from trimmed and primer-trimmed data. This figure is also associated with the STAR Methods.
RESOURCE AVAILABILITY
Lead Contact
Further information and requests for resources should be directed to and will be fulfilled by the Lead Contact, Bette Korber (btk@lanl.gov).
Materials Availability
This study did not generate new unique reagents.
Data and Code Availability
All sequence data used here are available from the Global Initiative for Sharing All Influenza Data (GISAID), at https://gisaid.org. The user agreement for GISAID does not permit redistribution of sequences. Web-based tools to recreate much of the analyses provided in this paper, based on contemporary GISAID data downloads, are available at cov.lanl.gov. Code to create the alignments as described in Fig. S1 and to perform the isotonic regression analysis in Fig. 3 is also available.
EXPERIMENTAL MODEL AND SUBJECT DETAILS
Human Subjects
999 individuals presenting with active COVID-19 disease were sampled for SARS-CoV-2 sequencing at Sheffield Teaching Hospitals NHS Foundation Trust, UK, using samples collected for routine clinical diagnostic use. This work was performed under approval by the Public Health England Research Ethics and Governance Group for the COVID-19 Genomics UK consortium (R&D NR0195). Of the 999 individuals, 593 were female, 399 male, and 6 had no gender specified; ages were 15-103 (median 55) years.
DETECTION AND SEQUENCING OF SARS-CoV-2 ISOLATES FROM CLINICAL SAMPLES
Samples for PCR detection of SARS-CoV-2 (Fig. 5A) were all obtained from either throat or combined nose/throat swabs. Nucleic acid was extracted from 200 µl of sample on the MagnaPure96 extraction platform (Roche Diagnostics Ltd, Burgess Hill, UK). SARS-CoV-2 RNA was detected using primers and probes targeting the E gene and the RdRp genes for routine clinical diagnostic purposes, with thermocycling and fluorescence detection on an ABI Thermal Cycler (Applied Biosystems, Foster City, United States) using previously described primer and probe sets (Corman et al., 2020). Nucleic acid from positive cases underwent long-read whole genome sequencing (Oxford Nanopore Technologies (ONT), Oxford, UK) using the ARTIC network protocol (accessed the 19th of April; https://artic.network/ncov-2019). Following base calling, data were demultiplexed using ONT Guppy using a high accuracy model. Reads were filtered based on quality and length (400 to 700 bp), then mapped to the Wuhan reference genome, and primer sites were trimmed. Reads were then downsampled to 200x coverage in each direction. Variants were called using nanopolish (https://github.com/jts/nanopolish) and used to determine changes from the reference. Consensus sequences were constructed using the reference and the called variants.
PSEUDOTYPED VIRUS INFECTIVITY
VSV System
Plasmids for full-length SARS-CoV-2 Spike were generated from synthetic codon-optimized DNA (Wuhan-Hu-1 isolate, GenBank: MN908947.3) through sub-cloning into the pHCMV3 expression vector, with a stop codon included prior to the HA tag. The D614G variant was generated by site-directed mutagenesis. Positive clones were fully sequenced to ensure that no additional mutations were introduced. Lentiviruses for stable cell line production were generated by seeding 293T cells at a density of 1x10^6 cells/well in a 6-well dish.
Once the cells reached confluency, they were transfected with 2 µg pCaggs-VSV-G, 2 µg of the lentiviral packaging vector pSPAX2, and 2 µg of the lentiviral expression plasmid pCW62 encoding ACE2-V5 and the puromycin resistance gene (pCW62-ACE2.V5-PuroR) or TMPRSS2-FLAG and the blasticidin resistance gene (pCW62-TMPRSS2.FLAG-BlastR), using Trans-IT transfection reagent according to the manufacturer's instructions. 24 hours post-transfection, media was replaced with fresh DMEM containing 10% FBS and 20 mM HEPES. 48 hours post-transfection, supernatants were collected and filtered using a 0.45 µm syringe filter (VWR Catalog #28200-026). Recombinant SARS-CoV-2-pseudotyped VSV-∆G-GFP were generated by transfecting 293T cells with phCMV3 expressing the indicated version of codon-optimized SARS-CoV-2 Spike using TransIT according to the manufacturer's instructions. At 24 hr post-transfection, the medium was removed, and cells were infected with rVSV-G pseudotyped ∆G-GFP parent virus (VSV-G*∆G-GFP) at MOI = 2 for 2 hours with rocking. The virus was then removed, and the cells were washed twice with OPTI-MEM containing 2% FBS (OPTI-2) before fresh OPTI-2 was added. Supernatants containing rVSV-SARS-2 were removed 24 hours post-infection and clarified by centrifugation. Viral titrations were performed by seeding cells in 96-well plates at a density sufficient to produce a monolayer at the time of infection. Then, 10-fold serial dilutions of pseudovirus were made and added to cells in triplicate wells. Infection was allowed to proceed for 12-16 hr at 37 °C. The cells were then fixed with 4% PFA, washed two times with 1x PBS and stained with Hoechst (1 µg/mL in PBS). After two additional washes with PBS, pseudovirus titers were quantified as the number of focus forming units (ffu/mL) using a CellInsight CX5 imager (ThermoScientific) and automated enumeration of cells expressing GFP.
Lentiviral System
Additional assessments of corresponding D614 and G614 Spike pseudotyped viruses were performed by using lentiviral vectors and infection in 293T/ACE2.MF and TZM-bl/ACE2.MF cells (both cell lines kindly provided by Drs. Mike Farzan and Huihui Mu at Scripps). Cells were maintained in DMEM containing 10% FBS, 1% Pen-Strep and 3 µg/ml puromycin. An expression plasmid encoding codon-optimized full-length spike of the Wuhan-1 strain (VRC7480) was provided by Drs. Barney Graham and Kizzmekia Corbett at the Vaccine Research Center, National Institutes of Health (USA). The D614G amino acid change was introduced into VRC7480 by site-directed mutagenesis using the QuikChange Lightning Site-Directed Mutagenesis Kit from Agilent Technologies (Catalog # 210518). The mutation was confirmed by full-length spike gene sequencing. Pseudovirions were produced in HEK 293T/17 cells (ATCC cat. no. CRL-11268) by transfection using Fugene 6 (Promega Cat# E2692). Pseudovirions for 293T/ACE2 infection were produced by co-transfection with a lentiviral backbone (pCMV ∆R8.2) and firefly luciferase reporter gene (pHR' CMV Luc) (Naldini et al., 1996). Pseudovirions for TZM-bl/ACE2 infection were produced by co-transfection with the Env-deficient lentiviral backbone pSG3∆Env (kindly provided by Drs. Beatrice Hahn and Feng Gao). Culture supernatants from transfections were clarified of cells by low-speed centrifugation and filtration (0.45 µm filter) and used immediately for infection in 96-well culture plates.
293T/ACE2.MF cells were preseeded at 5,000 cells per well in 96-well black/white culture plates (Perkin-Elmer Catalog # 6005060) one day prior to infection. Sixteen wells were inoculated with 50 µl of a 1:10 dilution of each pseudovirus and incubated for three days. Luminescence was measured using the Promega Luciferase Assay System (Catalog # E1501). For infection of TZM-bl/ACE2.MF cells, 10,000 freshly trypsinized cells were added to 16 wells of a 96-well clear culture plate (Fisher Scientific) and inoculated with undiluted pseudovirus. Luminescence was measured after 2 days in a solid black plate using the Britelite Plus Reporter Gene Assay System (Perkin-Elmer). Luminescence in both assays was measured using a PerkinElmer Life Sciences Model Victor2 luminometer. HIV-1 p24 content (produced by the backbone vectors) was quantified using the Alliance p24 ELISA Kit (PerkinElmer Health Sciences, Cat# NEK050B001KT). Reported relative luminescence units (RLUs) were adjusted for p24 content.
Neutralization Assay
Pre-titrated amounts of rVSV-SARS-CoV-2 (D614 or G614 variant) were incubated with serially diluted human sera at 37 °C for 1 hr before addition to confluent Vero monolayers in 96-well plates. Infection proceeded for 12-16 hrs at 37 °C in 5% CO2 before cells were fixed in 4% paraformaldehyde and stained with 1 µg/mL Hoechst. Cells were imaged using a CellInsight CX5 imager and infection was quantitated by automated enumeration of total cells and those expressing GFP. Infection was normalized to the percent of cells infected with rVSV-SARS-CoV-2 incubated with normal human sera. Data are presented as the relative neutralization for each concentration of sera.
Background and General Approach
The Global Initiative for Sharing All Influenza Data (GISAID) (Elbe and Buckland-Merrett, 2017; Shu and McCauley, 2017) has been coordinating SARS-CoV-2 genome sequence submissions and making data available for download since early in the pandemic. At the time of this writing, hundreds of sequences were being added every day. These sequences result from extraordinary efforts by a wide variety of institutions and individuals; while an invaluable resource, they are mixed in quality. The complete sequence download includes a large number of partial sequences, with variable coverage, and extensive 'N' runs in many sequences. To assemble a high-quality dataset for mutational analysis, we constructed a data pipeline using some off-the-shelf bioinformatic tools and a small amount of custom code. From the SARS-CoV-2 sequences available from GISAID, we derived a "clean" codon-aligned dataset comprising near-complete viral genomes, without large insertions or deletions ("indels") or runs of undetermined or ambiguous bases. For convenience in mutation assessment, we generated a codon-based nucleotide multiple sequence alignment, and extracted translations of each reading frame, from which we generated lists of mutations. The cleaning process was in general a process of deletion, with alignment of retained sequences; the following criteria were used to exclude sequences (a sketch of such filters follows below): 1. Fragmented matching (>20 nt gap in match to reference); 2. Excessive five- or three-prime gaps; 3. Large numbers of mismatches or ambiguities (>30 overall); 4. Regions with concentrated ambiguity calls (>10 in any 50 nt window). Any sequence matching any of the above criteria was excluded in its entirety.
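The sketch below illustrates these exclusion filters in Python; the authors' implementation is the Perl script align_to_ref.pl, so the helper name, the assumed end-gap cutoff, and the restriction to ambiguity (rather than mismatch) counting are simplifying assumptions for this example.

def passes_qc(seq, max_internal_gap=20, max_end_gap=100,
              max_ambig_total=30, max_ambig_window=10, window=50):
    """Apply the four exclusion criteria to one reference-mapped sequence."""
    s = seq.upper()
    core = s.strip("-")
    # 2. Excessive five- or three-prime gaps (cutoff not given in the text;
    #    max_end_gap is an assumed placeholder value).
    if len(s) - len(core) > max_end_gap:
        return False
    # 1. Fragmented matching: an internal gap of >20 nt in the match.
    run = 0
    for base in core:
        run = run + 1 if base == "-" else 0
        if run > max_internal_gap:
            return False
    # 3. Too many ambiguous calls overall (mismatch counting vs. the
    #    reference is omitted here for brevity).
    ambig = [i for i, b in enumerate(core) if b not in "ACGT-"]
    if len(ambig) > max_ambig_total:
        return False
    # 4. Concentrated ambiguity: >10 ambiguous calls inside any 50 nt window.
    for j in range(len(ambig) - max_ambig_window):
        if ambig[j + max_ambig_window] - ambig[j] < window:
            return False
    return True

# Example: a clean fragment passes, a gappy one fails.
print(passes_qc("ACGT" * 100))                          # True
print(passes_qc("ACGT" * 10 + "-" * 30 + "ACGT" * 10))  # False (internal gap)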
Sequence Mapping and Alignment
Sequences were mapped to a reference (bases 266-29,674 of GenBank entry NC_045512; i.e., the first base of the ORF1ab start codon to the last base of the ORF10 stop codon) using "nucmer" from the MUMmer package (version 3.23; Kurtz et al., 2004). The nucmer output "delta" file was parsed directly using custom Perl code to partition sequences into the various exclusion categories (Sequence Mapping Table) and to construct a multiple sequence alignment (MSA). The MSA was refined using code derived from the Los Alamos HIV database "Gene Cutter" tool code base. At this stage, alignment columns comprising an insertion of a single "N" in a single sequence (generating a frame-shift) were deleted, and gaps were shifted to conform with codon boundaries. Using the initial "good-sequence" alignment, a low-effort parsimony tree was constructed. Initially, trees were built using PAUP* (Swofford, 2003) with a single-replicate heuristic search using stepwise random sequence addition; subsequently, a parsimony ratchet was added; currently, oblong (Goloboff, 2014) is used. Sequences in the alignment were sorted vertically to correspond to the (ladderized) tree, and reference-sequence reading frames were added. See Fig. S1 for a pipeline schematic.
Data partitioning and phylogenetic trees
Alignments were made and trees inferred for three distinct data partitions; the longer the alignment, the fewer the sequences (Fig. S1). The full genome tree was used for Fig. 7. Trees were inferred by either of two methods: 1. neighbor-joining using a p-distance criterion (Swofford, 2003), or 2. a parsimony heuristic search using a version of the parsimony ratchet (Goloboff, 2014). The general conclusions in Fig. 7 were substantiated by both; the parsimony tree is shown.
Global Maps
The COVID-19 pie chart map is generated by overlaying Leaflet (a JavaScript library for interactive maps) pie charts on maps provided by OpenStreetMap. The interface is presented using rocker/shiny, a Docker image for Shiny Server.
SYSTEMATIC REGIONAL ANALYSIS OF D614/G614 FREQUENCIES
To observe a significant change in the frequency of two SARS-CoV-2 variants in a geographic region, three minimal requirements must be met. Both variants must have been introduced into an area and be co-circulating, data must be sampled for a long enough period to observe a change in frequency, and there must be enough data to be powered adequately to detect a difference. We use the bioinformatic approaches described above to extract from GISAID all the politically defined geographic regions within the data that met these criteria, to track changes in frequency in a systematic way using all available data. The political/geographical regions we use are strictly hierarchically segmented based on the naming conventions used in GISAID. GISAID data are labeled such that the geographic source is noted first as a continent or Oceania; we call this Level 1. Level 2 is the country of origin of a sample. Level 3 comprises sub-countries and states, although occasionally level 3 includes a major city in a small country. For this purpose, England, Scotland, and Wales are considered sub-countries of the United Kingdom and assigned level 3; the sampling in the UK has been the most extensive globally to date. Level 4 is the county or city of origin. The levels are strictly hierarchical, and within a given level, the geographical regions do not overlap.
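A small sketch of splitting a GISAID-style location string into the strictly hierarchical levels just described; the "/"-separated convention follows GISAID sequence headers, and the example string is invented.

def location_levels(location):
    """Map 'Continent / Country / Sub-country / County-or-city' to levels 1-4."""
    fields = [f.strip() for f in location.split("/") if f.strip()]
    return {level: name for level, name in enumerate(fields[:4], start=1)}

print(location_levels("Europe / United Kingdom / England / Sheffield"))
# -> {1: 'Europe', 2: 'United Kingdom', 3: 'England', 4: 'Sheffield'}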
In some cases (e.g., Nepal_Kathmandu and Nepal, Greece_Athens and Greece, Italy_Veneto_Verona and Italy_Veneto, or Iceland_Reykjavik and Iceland) the sampling in a sub-level exactly matches the sampling in the corresponding upper level, in which case the sub-level is not presented. Levels 3 and 4 are not always available, and the day of sampling is also not always available. The statistical strategies we use are then applied separately in each country, region or city, and we do not assume that outbreaks in each political subdivision are independent and identically distributed. Instead, our model assumption is that the individuals we test within a region are independent. This assumption may fail if there are sampling biases in a region that change over a given period of time. The G614 form is part of the G clade haplotype that is introduced by travelers, as we discuss in the text, and it is rare for it to arise independently. Our null hypothesis is that the observed shifts in frequency are random nondirectional drift. We have taken two statistical approaches to test this.

Fisher's exact comparison

For this comparison, we used a two-sided Fisher's exact test to compare the G614 and D614 counts in the pre-onset and the post-delay periods, as described in the text, which provides a p-value against the null hypothesis that the fraction of D614 and G614 sequences did not change. To be included in the analysis, 15 sequences were required pre-onset, with a mixture of D614 and G614 present such that the rarer form was present at least 3 times; we also required a minimum of 15 sequences be sampled at least 2 weeks later, to create a post-delay set. Only regions for which p<0.05 are considered, based on a two-sided test. We then use a binomial test to evaluate the null hypothesis that, in regions where we saw a significant change in sampling frequency over time, the shift was as likely to be an increase as a decrease in G614 across geographic regions. This analysis is presented in Fig. 1B.

Isotonic Regression

Isotonic regression forms the basis of a one-sided test of the hypothesis of positive selection, based on fitting the indicator that the typed strain is G as a logistic regression in which the logarithm of the odds ratio is a non-decreasing function of time. We use the residual deviance of the fitted model as our test statistic. To be included in this analysis, a region was required to have at least 5 sequences each of D614 and G614, and a minimum of 14 sampling days of data available. While we have a composite null hypothesis (the log-odds ratio is non-increasing), assuming that the log-odds ratio remains constant over time leads to tests with the largest power. While the classical chi-square approximation does not hold, we can sample from the null distribution under a constant log-odds ratio by permuting the vector of variant labels and refitting the isotonic logistic regression. We performed 400 randomizations of the data in each region. Hence the lowest p-value we can obtain is 0.0025. The reverse hypothesis, namely that the fraction of the G variant decreases with time, is also tested by fitting a non-increasing function of time. The isotonic logistic regression was done using R and the cgam package. We applied the binomial test across regions with a significant change in one direction, as we did for the Fisher's test results.
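For concreteness, here is a minimal sketch of the Fisher's exact stage and the cross-region binomial stage using SciPy; the counts are illustrative placeholders, not values from the data, and the inclusion thresholds quoted above would be checked upstream.

```python
from scipy.stats import fisher_exact, binomtest

# Per-region 2x2 table: rows = (pre-onset, post-delay), cols = (D614, G614).
pre_onset = (40, 5)
post_delay = (12, 33)
odds_ratio, p = fisher_exact([pre_onset, post_delay], alternative="two-sided")
print(f"region-level p = {p:.4g}")

# Across all regions with p < 0.05, test whether an increase in G614 is as
# likely as a decrease (null probability 0.5). Counts are illustrative.
significant_regions_up = 14
significant_regions = 15
p_binomial = binomtest(significant_regions_up, significant_regions, p=0.5).pvalue
print(f"cross-region binomial p = {p_binomial:.4g}")
```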
This analysis is presented in Fig. 3 and Data S1.

Modeling PCR Ct

Two PCR Ct methods were used as a surrogate for estimating in vivo viral load in the upper respiratory tract, switching methods in mid-April due to a shortage of kits. The first method involved nucleic acid extraction; the second method, heat treatment (Fomsgaard and Rosenstierne, 2020). To assess the impact of available clinical parameters on viral load as measured by PCR Cts, we used a linear model, predicting Ct from PCR method, sex, age and D614G variant. This revealed that only the PCR method and the D614G variant were statistically significant. A negative coefficient for the G variant indicated that patients infected by the latter have, on average, a higher viral load, and that viral load is impacted by neither age nor sex. The results from the smaller model are reported in the main text. Results comparing D614G status for the two methods were also evaluated independently: the first method showed a significant association between lower Ct values and presence of G614 (Wilcoxon p = 0.033), but the second method, with many fewer samples, did not reach significance.

Predicting Hospitalization

The simple Fisher's exact test analysis in Fig. 5 indicates that the D614G status is not predictive of hospitalization, even though it is predictive of viral load. We can make a first analysis to predict hospitalization from viral load, gender, age and D614G status. As somewhat expected, the D614G status is not statistically significant, even though viral load is, but the coefficient goes in the opposite direction from what we would have intuited: a lower viral load is predictive of a higher probability of hospitalization. Sex (male) and age both increase the probability of hospitalization.

Predicting Hospitalization, revisited

Although the above analysis indicates that aa614G does not predict hospitalization directly, it does predict viral load, and viral load predicts hospitalization; so there is a concern that aa614G might affect hospitalization, but that this effect is "masked" by the viral load. To explore this hypothesis, we "unmask" the aa614G by using the residuals from the regression of Ct on extraction method and D614G status to get a second predictive model for hospitalization. In these regression analyses, the estimated coefficients for age, sex and viral load (corrected or not for method and strain) remain mostly unchanged, and strain still does not have an effect. All other comparisons were not significant. All coding was done using R. Results of these analyses are presented in the main text and in Fig. 5.

Modeling pseudotype virus infectivity

We used a log-normal generalized linear model (GLM) to test whether the G614 variant grew to higher titers than the wildtype D614 virus in Vero, 293T-ACE2 and 293T-ACE2-TMPRSS2 cell lines. The full experiment was repeated twice, each time in triplicate, and the 2 experimental repeats were considered random effects. Viral variant and cell line were considered as fixed effects. On average, across all cell lines, G614 grows to about a 3-fold (2.95) higher titer than D614 (p = 9 × 10⁻¹¹). A significant interaction was found between viral variant and cell line (p=0.002), indicating that the relative increase of G614 compared with D614 differed across cell lines. Results of these analyses are presented in Fig. 6A.
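A hedged sketch of how such a model could be fit in Python follows. The paper states only that experimental repeats were treated as random effects; a random-intercept mixed model on log titers stands in for the exact GLM specification here, and the file and column names are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pseudotype_titers.csv")   # hypothetical input file
df["log_titer"] = np.log(df["titer"])       # log-normal model: fit on log scale

# Fixed effects: variant, cell line, and their interaction; a random
# intercept per experimental repeat approximates the random effect.
model = smf.mixedlm("log_titer ~ variant * cell_line", data=df,
                    groups=df["experiment"])
result = model.fit()
print(result.summary())

# On the log scale, the variant coefficient exponentiates to a fold change;
# the paper reports roughly 2.95-fold higher titers for G614 overall.
```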
Sequence quality control

We discovered a sequence-processing error that gave rise to what appeared at first to be a mutation of interest at position 943 (24389 A>C and 24390 C>G) in Spike that was evident in sequences from Belgium. It was frequent enough to be a site of interest, and was tracked. We contacted the group in Belgium, the source of the data, who were already aware of the issue, concurred with our interpretation, and had been in touch with GISAID with a request to remove the problematic sequences. We identified the issue with this site as part of another study using a method to detect systematic sequencing errors (Freeman et al., 2020); we are interrogating the quality of available sequencing data, and these positions were highlighted as suspect. We interrogated these positions in the raw sequencing data from Sheffield, and although these two variants are not present in the final consensus sequence from any of the Sheffield isolates, the raw, untrimmed bam files show their presence in only one of the amplicons covering the site (Fig. S7 A&B). We noticed that in fact this position is to the left of the 5' primer of amplicon 81, in what we believe to be an adapter sequence. Comparison of the Wuhan reference and the adapter sequence reveals similarity around this position (see the short comparison sketch after this section):
Nanopore adapter sequence: CAGCACCTT
The Wuhan reference sequence: CAGCAAGTT
In our validation set, we see a C present at around 50% of called bases at both these positions in raw data, but this region is trimmed by the ARTIC pipeline and is therefore not used to call variants or contribute to the final consensus sequence. Although it is evident in amplicon 81, there is no evidence for these variants in the data from amplicon 80, which also covers these positions. We include a figure (Fig. S7) to explain our finding. In summary, this is an error that has arisen due to a combination of improper trimming of adapter and primer regions from raw sequencing reads before downstream analysis, and the coincidental homology between the nanopore adapter sequence and the Wuhan reference genome in this region. This is included here as a cautionary note; resolving rare biological mutations and sequencing error will be an important balance going forward in terms of interpretation of rare mutations (De Maio et al., 2020). A recurrent amino acid change like L5F (Fig. 7) could potentially result from a recurrent sequencing or sequence-processing error (De Maio et al., 2020), or alternatively, it may be of particular interest if it is a naturally recurring homoplasy.

Supplemental Data

Data S1. Modeling the daily fraction of the G614 variant as a function of time in local regions using isotonic regression, Related to Figure 3.

Highlights:
- A SARS-CoV-2 variant with Spike G614 has replaced D614 as the dominant pandemic form
- The consistent increase of G614 at regional levels may indicate a fitness advantage
- G614 is associated with lower RT PCR Ct's, suggestive of higher viral loads in patients
- The G614 variant grows to higher titers as pseudotyped virions

Sequence Processing Pipeline Notes:
1. Multiply occurring identical sequences are reduced to 2 occurrences so that parsimony-informative sites do not become unique.
2. "Coding regions" subset includes sequences passing error filtering, bounded from the ORF1ab start codon to the ORF10 stop codon (NC_045512 genome positions 266-29674).
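The "coincidental homology" noted above can be checked in a few lines; both 9-mers are quoted in the text, and the comparison below simply counts their mismatched positions.

```python
# The nanopore adapter 9-mer differs from the Wuhan reference 9-mer at only
# two positions, which is why adapter-derived bases can masquerade as variants.
adapter = "CAGCACCTT"
reference = "CAGCAAGTT"
mismatches = [i for i, (a, r) in enumerate(zip(adapter, reference)) if a != r]
print(mismatches, f"{9 - len(mismatches)}/9 identical")  # [5, 6] 7/9 identical
```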
Traceable Research Data Sharing in a German Medical Data Integration Center With FAIR (Findability, Accessibility, Interoperability, and Reusability)-Geared Provenance Implementation: Proof-of-Concept Study

Background: Secondary investigations into digital health records, including electronic patient data from German medical data integration centers (DICs), pave the way for enhanced future patient care. However, only limited information is captured regarding the integrity, traceability, and quality of the (sensitive) data elements. This lack of detail diminishes trust in the validity of the collected data. From a technical standpoint, adhering to the widely accepted FAIR (Findability, Accessibility, Interoperability, and Reusability) principles for data stewardship necessitates enriching data with provenance-related metadata. Provenance offers insights into the readiness for the reuse of a data element and serves as a supplier of data governance.

Objective: The primary goal of this study is to augment the reusability of clinical routine data within a medical DIC for secondary utilization in clinical research. Our aim is to establish provenance traces that underpin the status of data integrity and reliability, and consequently, trust in electronic health records, thereby enhancing the accountability of the medical DIC. We present the implementation of a proof-of-concept provenance library integrating international standards as an initial step.

Methods: We adhered to a customized road map for a provenance framework and examined the data integration steps across the ETL (extract, transform, and load) phases. Following a maturity model, we derived requirements for a provenance library. Using this research approach, we formulated a provenance model with associated metadata and implemented a proof-of-concept provenance class. Furthermore, we seamlessly incorporated the internationally recognized World Wide Web Consortium (W3C) provenance standard, aligned the resultant provenance records with the interoperable health care standard Fast Healthcare Interoperability Resources, and presented them in various representation formats. Ultimately, we conducted a thorough assessment of provenance trace measurements.

Results: This study marks the inaugural implementation of integrated provenance traces at the data element level within a German medical DIC. We devised and executed a practical method that synergizes the robustness of quality- and health standard-guided (meta)data management practices. Our measurements indicate commendable pipeline execution times, attaining notable levels of accuracy and reliability in processing clinical routine data, thereby ensuring accountability in the medical DIC. These findings should inspire the development of additional tools aimed at providing evidence-based and reliable electronic health record services for secondary use.

Conclusions: The research method outlined for the proof-of-concept provenance class has been crafted to promote effective and reliable core data management practices. It aims to enhance biomedical data by imbuing it with meaningful provenance, thereby bolstering the benefits for both research and society. Additionally, it facilitates the streamlined reuse of biomedical data. As a result, the system mitigates risks, as data analysis without knowledge of the origin and quality of all data elements is rendered futile.
While the approach was initially developed for the medical DIC use case, these principles can be universally applied throughout the scientific domain.

Introduction

Provenance, a piece of metadata, is considered fundamental information in the data life cycle because it expresses the traceability of the processed data and facilitates the reproducibility of the results [1,2]. The availability of provenance throughout the data life cycle is deemed a crucial factor for maintaining trust in the data at all stages [3]. The data life cycle encompasses data generation, processing, validation, analysis, reporting, and application for decision-making in any context, culminating in storage within a specified retention period [4]. Medical data integration centers (DICs), particularly those established within the German Medical Informatics Initiative, must enhance accountability for their activities. This is particularly crucial for the methods used in extracting, transforming, and loading sensitive patient data from heterogeneous clinical routine systems into (standardized) research data repositories for subsequent secondary use [5]. In this given context, it is necessary to understand the limitations of the provided data [6]. Collecting comprehensive and pertinent contextual provenance information along these processing pipelines is one approach to enhance the accountability of the medical DIC (Textbox 1). Provenance and integrity must be systematically evaluated and documented in routinely collected data sets to facilitate their reuse in clinical trials [7].

Textbox 1. Accountability in a German medical data integration center. Accountability means accepting responsibility for activities and, in this context, entails all procedures and processes for data management pipelines [8]. This includes keeping the movement of data elements transparent and traceable. Provenance traces enable documentation of this movement and hence generate trust in the data integrity and reliability of the provided data for secondary use.

To achieve reproducibility [9] and integrity when exchanging data between academia and industry, researchers must adhere to essential research principles, particularly following good practice guidelines (eg, good clinical practice and good research/scientific practice, commonly referred to as GxP) [10]. Ensuring and evaluating data integrity and data provenance are anticipated to be prerequisites for clinical trial data [11]. For instance, the clinical research data quality standard ALCOA+ (Attributable, Legible, Contemporaneous, Original, and Accurate+) articulates enhanced data integrity properties and fundamentally contributes to provenance information [12]. These properties pertain to attributable, legible, contemporaneous, original, accurate, complete, consistent, enduring, and available data characteristics [10].

In addition to adhering to good scientific practice [13], heightened legal requirements, such as compliance with the General Data Protection Regulation (GDPR) in the European Union or contractual obligations, mandate evidence-based data processing for both deidentification and reidentification of data, encompassing the life cycle of the patient's consent [14].
A crucial factor in advancing these objectives is the metadata acquired from the data transformation and integration process throughout the data life cycle. The field of biological research has already acknowledged the significance of metadata, as outlined in ISO norms such as ISO/CD 20961 [15] and ISO/TC 276/WG5 on data processing and integration [16]. ISO 20961, for example, specifies requirements for the consistent formatting and documentation of data and metadata. Furthermore, the FAIR (Findability, Accessibility, Interoperability, and Reusability) guiding principles for data management and data stewardship emphasize the overall relevance of metadata for the data itself, including those used in infrastructures and services [17]. Aspects of the FAIR recommendations explicitly address provenance capture. As such, the FAIR principle "R1.2" demands machine-accessible and readable metadata, which include provenance information about the data creation or generation. Related metadata accumulate not only during the data transformation itself but also within the software used [18]. The principle "R1.3" expects metadata to adhere to domain-relevant community standards such as the HL7 Fast Healthcare Interoperability Resources (FHIR) or Dublin Core [1]. FHIR is an internationally recognized standard that supports the exchange of data between different software systems within the health care sector [19]. In this vein, the FHIR resource "Provenance" records entities and processes involved in creating a specific resource. From a technical point of view, the FHIR Provenance resource is founded on the framework of the open W3C standard PROV data model definition and ontology [20], the successor to the Open Provenance Model [21]. Here, the concepts of linked entities, activities, and agent resources enable the establishment of a provenance model. Such resources can be described with the W3C Resource Description Framework (RDF) method [22]. RDF is a data model, commonly stored in formats such as RDF/XML (.rdf) or JSON-LD (.json). All formats represent a knowledge graph.

As of now, the capture of provenance in health care is not adequately or uniformly implemented in German medical DICs, as revealed in a recent study on their data management status [23]. The results demonstrated that provenance is indeed a factor strongly influenced by the maturity level of data management practices. Following complex transformations in the data integration process, the provenance of data elements is often lost, making it difficult to impossible to assess the (measurement) quality of a data element. This reduction in traceability diminishes trust in the validity of the collected data.

The primary objective of this study is to improve the reusability of clinical routine data within a medical DIC for its secondary application in clinical research. Our goal is to enhance processed clinical routine data by incorporating appropriate semantic metadata, a key requirement guided by the FAIR principles [17]. Furthermore, our intention is to bolster the accountability of our DIC by mitigating the risks associated with the reuse of compromised data in clinical research. To our knowledge, this is the first demonstration of provenance integration within a medical DIC.
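To make the W3C PROV concepts of entities, activities, and agents concrete, here is a minimal sketch using the open-source Python prov package (pip install prov); the namespace and identifiers are illustrative and are not taken from the paper's implementation.

```python
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace("dic", "http://example.org/dic/")  # illustrative namespace

raw = doc.entity("dic:source-record")        # input data element
clean = doc.entity("dic:fhir-resource")      # transformed output
etl = doc.activity("dic:etl-transform")      # the ETL step
steward = doc.agent("dic:data-steward")      # responsible agent

# Link the components with the standard PROV relations.
doc.used(etl, raw)
doc.wasGeneratedBy(clean, etl)
doc.wasDerivedFrom(clean, raw)
doc.wasAssociatedWith(etl, steward)

print(doc.serialize(indent=2))               # PROV-JSON serialization
```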
Materials

We used test data to develop and test our provenance class. Test data elements were chosen to reflect the composition of a typical data integration repository. We created exemplary dummy data element definitions with comprehensive annotation (Textbox 2). We defined 7 data element types and generated 100,000 data elements for each data element type, yielding a total of 700,000 provenance records, using a Python (Python Foundation) script.

Proof-of-Concept Solution

Following the tailor-made provenance framework [3], we developed a proof-of-concept provenance solution. This framework complements a standard software engineering cycle (requirements, design, coding, testing, and implementation) with insights from a comprehensive literature search and uses established works as a guide for the users of the framework. The expanded requirements analysis is substantiated by the topics identified through the literature search. Details are described in Figure 1.

Overview

An interdisciplinary team of internal stakeholders in the University Medicine Mannheim DIC (lead, medical experts, computer scientists, technical staff, and process owner of the ETL [extract, transform, and load] process) performed the requirements analysis for the research approach. Initially, we engaged in discussions, documented feedback, and obtained approval for our own data pipeline processes, based on the WH questions (what, when, where, who, why, how, which, whose). This was done to ensure accurate and risk-managed data processing pipelines. Our focus centered on questions related to data governance, annotation, documentation, interoperability, data integrity and accuracy, data sharing, and information technology operations. This emphasis aligns with a prior investigation of data management practices in German DICs [23], where these questions were identified as integral to tracing patient data through the DICs.

Building on the previous steps, we initiated the process by visualizing the scope definition (system border and context) of the planned provenance tracking system. Using notation according to DeMarco [24], we generated a data flow diagram. Following this, we documented the resultant requirements, representing them in free text and as a unified modeling language (UML) class diagram to address various requirements perspectives [25].
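As an illustration of the scale involved, the following hedged sketch shows how such dummy data could be generated; the 7 type names and the field layout are assumptions, since the paper's actual definitions are given in Textbox 2.

```python
import uuid
from datetime import datetime, timezone

# 7 assumed data element types x 100,000 elements = 700,000 records.
ELEMENT_TYPES = ["lab-value", "diagnosis", "procedure", "medication",
                 "encounter", "observation", "consent"]

def make_elements(n_per_type: int = 100_000):
    for etype in ELEMENT_TYPES:
        for _ in range(n_per_type):
            yield {
                "id": str(uuid.uuid4()),
                "type": etype,
                "recorded": datetime.now(timezone.utc).isoformat(),
            }

total = sum(1 for _ in make_elements())
print(total)  # 700000
```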
System Border and Context

The context view (Figure 2) is used to delineate the scope of our system, establishing the boundary between functionalities that are considered in and out of scope. The system to be modeled, known as the Provenance Information System Traces (PISA), is depicted as a circle in the center (outlined by the dotted red line in Figure 2). At the conceptual level, we established the system border to encompass all aspects within the object scope. We delineated the system context (depicted in green as a freehand drawing) with aspects (A to H) that impact the planned provenance tracking system in our medical DIC. The processes that were modeled had been previously defined by local stakeholders and were influenced by the processes of the Medical Informatics Initiative community [5]. The core process, the ETL process (D), includes valid documents (G) (eg, statutes, standard operating procedures, European Union GDPR) and the involvement of stakeholders within and beyond the organizational unit (H), representing the primary focus of our development efforts. Existing software and hardware systems (A-C), as well as the processes of secondary usage for data requests (E) and long-term archiving (F), are outside the scope of this study.

Data Flow

Given the multitude of processes within a DIC, we confined our focus to the requirements related to the data integration process (Figure 2; ETL, letter D). We scrutinized the data flow and derived a data flow diagram, illustrating the functional requirements perspective (Figure 3). As part of the Medical Informatics Initiative, all DICs in Germany modeled a comparable, generic data flow. This data flow delineates the movement of data among processes (ETL), storage entities (staging area, data warehouse, FHIR server, and research data repository), and involved actors (staff in the DIC, researchers, and a trusted third party). Processes encapsulate functions responsible for transforming and processing data. These processes consume input data from diverse systems, manage these data, and convey the results to an output. Storage ensures data persistence, allowing processes to access the storage in read or write modes. Actors actively engage in information exchange with the system.

Requirements Description

In a previous publication, we conducted interviews with various German medical DICs [23]. Through these interviews, we identified the most crucial requirements, emphasizing assessments of data quality, traceability, and information capability. Additionally, transparency in processing steps, workflows, and data sets emerged as a significant consideration. Other identified requirements encompassed aspects such as debugging or performance evaluation. Additionally, there was a focus on compliance with regulations, reproducibility, support of the scientific utilization process, increased confidence in data, and clear regulation of responsible parties [23].

In alignment with this study, we established preconditions and requirements along the data flow for implementing the provenance tracking system. We identified the intended features for the implementation of PISA and derived the system's requirements (Table 1). In general, PISA should have the capability to trace the complete production history of a data element while incorporating domain-specific characteristics of the data element. These provenance traces for an individual data element must be captured along the presented data flow.

Table 1. Requirements (functional and nonfunctional) for the proof of concept for PISA, with explanations.
1. PISA must have the capability to track the complete processing history of a data element, and the provenance information must be stored in a database. This encompasses all derivation steps performed on data elements during their processing steps. Explanation: it includes all the information (metadata) required for producing a specific data set or a data element while preserving its data integrity status, such as data source, data destination, method, tools, software, and versions used. The benchmark should align with the "entities" and "activities" components of the W3C model.
2. PISA must possess the capability to trace organizational responsibilities and the means used. Explanation: it includes information (metadata) about all the agents involved in producing a data set or data elements, such as staff, standard operating procedures, and guidance. The benchmark should align with the "agent" component of the W3C model.
3. PISA must be analyzable by an authorized user and capable of producing diverse representations and export formats for the provenance traces. Explanation: detailed provenance traces are accessible and exportable to support evaluation by users, including formats such as log files, FHIR provenance, W3C RDF/XML, and RDF/JSON-LD provenance.
4. PISA must be able to track the quality status and assessment of data elements. Explanation: the provenance information for a data element is expanded to include the quality status of the processed data element.
5. PISA must be able to track the status of the script execution. Explanation: at a minimum, the provenance information should encompass the verification status and time stamp of the processed scripts.
6. PISA should facilitate easy integration into ETL pipelines with transfer interfaces, allowing seamless integration with established technologies. Moreover, it must be easy to install, for example, by supporting widely used and easily set up databases.
7. PISA must provide a high level of ease of use for ETL programmers and should be usable without requiring in-depth knowledge of provenance terms and concepts.
8. PISA must be time-efficient and capable of ensuring acceptable performance. Explanation: time measurements per data element must take place and be evaluated to verify the feasibility of the proof-of-concept approach.
9. PISA must pass testing: verification by unit tests with code coverage >80%.

PISA: Provenance Information System Traces; FHIR: Fast Healthcare Interoperability Resources; W3C: World Wide Web Consortium; RDF: Resource Description Framework; ETL: extract, transform, and load.

Development of the Logical Data Model

Based on the aforementioned requirements (Table 1) and the DIC maturity model [23], we constructed the logical data model as a UML class diagram, identifying classes and their associations (Figure 4).

Metadata Strategy

Our metadata strategy centered on characterizing the data elements and their associated artifacts throughout their processing pipeline. Aligned with the requirements and the logical model, we extracted the pertinent provenance metadata and aligned this provenance profile with the W3C components entity, agent, and activity. Simultaneously, we diligently enforced documentation efforts and annotation, guided by good documentation practices such as the ALCOA(+) principles for the identified components [10]. The annotation process we implemented enhanced comprehension, increased understanding, and improved the traceability of the processed data elements.
The FAIR principles R1.2 and R1.3 guided us to enrich (R) data elements with meaningful (provenance) metadata. Consequently, we characterized data elements by collecting content-rich contextual and technical metadata that narrate the story of the entire data processing workflow and link to related artifacts (Table 2). During the transformation processes, we documented quality procedures and incorporated coding practices and versioning information. Individual characteristics were recorded per data element during a processing step, such as ID, name, description, source and destination information from the Data Store Level, description of the transformation approach, description of the quality check (testing and validation approach), privacy and security status, and information from the Script Level and Data Element Level (eg, the "id", "occurredDateTime", and "recorded" time stamps). Examples of expanded metadata elements are more detailed descriptions of the transformation, the quality check, and the status of the data element in scope, or the results of the used log files. The metadata gathering for provenance comprises both manual annotation and an automated collection process, representing a hybrid form of provenance [26].

Ontology

We organized, annotated, and represented information using WebProtégé 4.0.2 (Protégé Team in the Biomedical Informatics Research Group at Stanford University), a tool designed for collaboratively creating complex ontologies [27]. The W3C PROV ontology and the fundamental relationships between entities, activities, and agents served as a framework for representing the provenance graph [20]. More specifically, we mapped processes onto activities, actors onto agents, and input/output data onto entities. The attributes of the provenance data model were aligned with the attributes of the data set. An instantiation of the provenance model, reflecting the W3C PROV vocabulary and layout convention, is illustrated in Figure 5. Additionally, W3C PROV supports interoperable interchange of provenance in heterogeneous environments.

Implementation and Verification Approach

Finally, building on the preceding steps, we developed an open-source Python class "Data Provenance" with associated methods and validated our approach in an exemplified data integration pipeline [28]. Provenance traces were mapped exemplarily onto the W3C RDF/XML and HL7 FHIR resource "Provenance" in its current maturity level (version R5). We utilized peewee (version 3.15.4), a Python object-relational mapping library that supports the binding of objects to relational databases such as SQLite, MySQL, or PostgreSQL [29]. To visualize the provenance traces, we used the Mermaid plotting framework [30].

The verification and validation approach for the developed provenance class involved an independent code review and unit tests to ensure that the code meets the requirements of the design. We assessed efficiency (storage space in kilobytes and computing time) and ensured the maintainability of the program (code structure, modularity, comments in code, currency, and comprehensibility of documentation).

While creating provenance records, we conducted a runtime experiment to measure the performance of our developed class. We recorded the time that the program took to run for proper execution. The runtime environment comprised the operating system Ubuntu 22.04.2 LTS (Canonical Ltd), 32 GB of memory, and an 8-core Intel Xeon Platinum 8276 CPU at 2.20 GHz.
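A minimal sketch of what such peewee-backed tables could look like follows; the class names DataElement and DataProvenance appear in the paper's validation table, but the specific fields below are our assumptions based on the metadata levels described for Table 2.

```python
from peewee import (SqliteDatabase, Model, CharField,
                    DateTimeField, ForeignKeyField, TextField)

db = SqliteDatabase("provenance.db")

class BaseModel(Model):
    class Meta:
        database = db

class DataElement(BaseModel):
    element_id = CharField(index=True)
    name = CharField()
    script_name = CharField()        # eg, the processing script "etl_st.py"
    script_version = CharField()
    verified_at = DateTimeField()    # verification time stamp of the script

class DataProvenance(BaseModel):
    element = ForeignKeyField(DataElement, backref="provenance")
    source_system = CharField()      # Data Store Level: origin
    target_system = CharField()      # Data Store Level: destination
    transformation = TextField()     # description of the transformation
    quality_status = CharField(default="placeholder")  # see "Future Work"
    recorded = DateTimeField()

db.connect()
db.create_tables([DataElement, DataProvenance])
```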
As a runtime environment, we used a virtual machine running on top of this machine. The runtime period was defined as the duration when the program was actively running.

We conducted measurements per data element and per provenance record on 9 virtual machines, each utilizing different data element block sizes (starting with 1, 10, 100, 1000, 10,000, and 100,000 up to 9, 90, 900, 9000, 90,000, and 900,000 data elements). For the analysis of runtime measurements, we used R version 4.2.0 (2022; R Foundation for Statistical Computing), and figures were generated using the ggplot2 package [31]. The code is available in a git repository under the Massachusetts Institute of Technology (MIT) license [32].

Ethical Considerations

Given the nature of the proof-of-concept study relying on dummy test data, ethics approval, informed consent, and deidentification were not applicable.

Provenance Traces Representation

All the gathered provenance information is in a machine-readable format. Additionally, FHIR health care standards were used [33]. We developed an FHIR profile based on the "Provenance" resource, resulting in a record that delineates the entities and processes involved in producing, delivering, or otherwise influencing that resource. This was accomplished by mapping the contextual and technical metadata to the corresponding resource provenance elements (Table 2).

Through the integration of all metadata levels, we facilitated the traceability of each data element. We illustrated the traceability using a data flow diagram and presented it in a human-readable text form. Additionally, the provenance information was exported into various formats such as FHIR-JSON, W3C-RDF/XML, W3C-RDF/JSON-LD, or a text-based log file. This approach aligns with data obtained in other studies [34].

Measurement of Provenance Traces

As anticipated, the specified provenance class successfully generated the database and the metadata tables according to the UML class diagram (illustrated in Table 2). Provenance records were automatically appended to the provenance table throughout the execution of the exemplified data integration pipeline. We recorded runtime measurements of the algorithm, displayed separately for the storage duration of a data element and of a record, as well as the corresponding increase in the database (Figure 6). As evident, the runtime complexity of the algorithm per data element indicates a nearly linear relationship with the size of the input data. We observed an acceptable runtime duration ranging from 0.0039 to 0.02601 seconds per data element. However, when measuring the runtime for a provenance record, we encountered an increasing duration, ranging from 0.0271 to 0.1882 seconds. Given that our approach incorporates novel aspects, we were unable to find comparable studies for this measurement. Nevertheless, the data obtained here suggest that using this approach to establish provenance traces can yield accurate and timely information.

Verification and Validation

The validation status for our proof-of-concept provenance class is outlined in Table 3. We anticipate that our results can be readily adopted for additional metadata components and seamlessly transferred to decision-making applications.
Table 3 lists the validation result per requirement number:

1. Metadata for data elements and their processing were introduced and collected automatically during the ETL job running in the data flow. Relevant tables (DataProvenance, DataElement, and associated tables) in the provenance database were created and continuously updated during processing.
2. Organizational topics (DataGovernance, DataSteward, and DataOwner) were recorded in the provenance database and continuously updated during processing.
3. Provenance traces were created in different formats. Detailed provenance traces are accessible and exportable to support evaluation by users (eg, FHIR provenance, W3C RDF/XML, and RDF/JSON-LD provenance).
4. The quality status of a processed data element is tracked and currently presented with a placeholder value in the DataProvenance table (see the "Future Work" section).
5. The verification status of used scripts and time stamps were recorded in the table DataElement. More specific content-related provenance information needs to be added in a second step; this comprises detailed annotation about the performed transactions and can be used for handling inconsistencies and rules for conflict resolution (see the "Future Work" section).
6-7. Easy integration into the ETL pipeline: setup requires only 3 lines of code, and setup per data element requires 1 line (see the "Future Work" section).
8. Time measurements confirmed satisfying results.
9. We achieved a code coverage of >90%, confirming that the code is comprehensively verified (a quality aspect for software).

We successfully verified the provenance with unit tests and validated all results against the defined requirements.

Principal Findings

Our study introduces the first ready-to-use library designed to record provenance information from clinical data processing pipelines in a German medical DIC. This current research extends previous work in provenance by using an approach that systematically combines detailed insights from medical, data management, and information technology operational experts. This method aims to facilitate the reuse of enriched patient data with precision and rigor. We demonstrated that our research approach successfully facilitates the implementation of traceability in the processing of data elements. This, in turn, contributes to the promotion of good data management and documentation practices, ultimately ensuring sufficient provenance quality. Furthermore, these good practices pave the way for the (automated) generation of annotations [23] and prevent poor data integrity, thereby enhancing data quality [35]. Through this, we hypothesize that our work could contribute to the reliability and safety of quality-assured patient data for secondary use. Simultaneously, we mitigate the risks associated with the reuse of weak data in clinical research.

We fulfilled the requirement for FAIR (Findability, Accessibility, Interoperability, and Reusability) provenance information by adhering to standards for syntactic and semantic interoperability, including JSON, W3C PROV, and FHIR mapping. Compared with the FHIR resource Provenance, we noted that our metadata recording offers significantly more detailed contextual information for each data element. We suggest that improvements to the FHIR Provenance resource, particularly for data within medical DICs, be deliberated and harmonized with existing FHIR resources such as "AuditEvent" or the "FiveWs Pattern" [19].
The strengths of this study are (1) the provision of provenance information for data elements with export options to interchange standard formats such as FHIR-JSON or W3C RDF/XML; (2) the simplicity of integrating this provenance class into ETL and other data pipelines; and (3) the extensibility of the metadata components, along with acceptable runtime measurements.

Related Work

In general, research on provenance and related management has progressed significantly in recent years. Numerous studies have been conducted, both domain specific and domain independent, focusing on provenance. Recently submitted scoping review results on provenance tracking have yielded valuable insights and provided an extensive summary of current approaches and criteria [3]. The scoping review revealed technical, implementation, and knowledge gaps, with a specific emphasis on modeling and metadata frameworks for (sensitive) scientific biomedical data. Moreover, the primary focus of the research was centered on workflow provenance. This involved the utilization of models such as the Open Provenance Model or the W3C PROV data model across various semantic levels and tools in scientific workflows or experiments, as demonstrated in frameworks such as BioWorkbench or the OpenPREDICT use case [36,37]. Additionally, other work has delved into different yet more general approaches for metadata usage and harvesting [38,39]. A systematic literature analysis on functional requirements for medical data integration outlined general requirements for data traceability and metadata management [40].

While these prior efforts are crucial, they still lack the specific requirements and considerations tailored for a DIC use case. By contrast, our approach is finely tuned to the unique needs of a DIC, providing a comprehensive exploration of provenance that imparts medical meaning and understanding to the data elements, thereby enhancing their reusability.

Lessons Learned

We discovered that interdisciplinary competence profiles; fostering communication between medical experts, data stewards, and information technology developers; and establishing a common language were pivotal factors leading to significant progress in our specific DIC use case. Implementing proper data governance and comprehensive data management documentation, such as data management plans, would be instrumental in mitigating the risk of incorrect use of the data.

The lessons learned from our description could serve as motivation for other researchers aiming to establish FAIR-oriented provenance. This would not only advance the reuse of their research data and results but also underscore the importance of maintaining overall responsibility for the data, even after project funding concludes.
Future Work

Future work should also prioritize the development of a strategy for assessing data privacy, data integrity, and the related quality of a data element. Integrating this information into the framework would enhance the expressiveness of the provenance information and enable the derivation of quality dimensions. For this reason, data elements may need to be accompanied by additional properties (refer to Table 2) that are significant for interpretability, helping determine limitations or detect duplications for use in similar research studies. Addressing the adequacy and relevance of a data element for upcoming research questions aids in supporting interpretation and, consequently, the reuse of the data element, as already highlighted in a draft Food and Drug Administration guidance [41]. To facilitate easy integration with other programming languages, we will provide an application programming interface.

Future studies should also explore ways to enhance the script for generating the provenance class in alignment with the FAIR for Research Software Principles [42]. Determining appropriate software metadata that accurately describe the specific characteristics of the software is an essential aspect to be addressed [18].

Before the future implementation and integration of the provenance class into real-world data integration processes, it is advisable to seek recommendations for risk measures. Factors such as the confidentiality level and security of provenance information, storage considerations, performance issues, and scalability should be carefully considered. In addition, it is crucial to consider experiences gained from maintaining metadata management and interoperable technologies, especially from professional data stewards. Ongoing exchanges with stakeholders and conducting usability evaluations are essential aspects that should be taken into account. This work also contributes to a broader community effort, the "Minimal Requirements for Automated Provenance Information Enrichment" (MIRAPIE) project [43].

Limitations

As the library has only been tested with simulated data, the next step, testing in a real environment, is currently in preparation. Despite the straightforward ETL integration approach, we will carefully assess the complexity and associated costs of implementation within the medical DIC. We recognize the need to bolster the overall qualification and validation concept. We believe it is crucial to expand the current provenance class to one that is inspection- or audit-ready, although accreditation demands additional measures and efforts. Additionally, further scalability analysis should be incorporated into the research approach.

Trust involves more than just the provenance of data elements; it also implies correctness and security against malicious users. This challenge can only be addressed through technical access limitations and organizational measures. Nevertheless, automated provenance traces can contribute to building trust in the transformation and movement of data within the DIC. Moreover, they empower us to confidently assess the quality and validity of the original data points even after they undergo complex transformations within a data warehouse.
Conclusions

We have designed, developed, and implemented provenance traces at the data element level for a German medical DIC, with the potential for extension at the national level. The described research method for the proof-of-concept provenance class has been crafted to promote effective and reliable core data management practices, enriching biomedical data with meaningful provenance. This, in turn, strengthens the benefits for research and society while simplifying the reuse of biomedical data. While the approach was initially developed for the medical DIC use case, these principles can be applied universally throughout the scientific domain. The implementation and analysis of provenance traces play a crucial role in minimizing risks associated with undetected or unintended data integrity breaches. Hence, provenance traces significantly contribute to building trust in routine clinical data and enhancing the accountability of a medical DIC. We are confident that by adhering to this advanced practice, the existing gaps between industry (pharmaceutical companies), service providers, and academia can be mitigated. Consequently, this can lead to an increase in the secondary use of (sensitive) patient data in clinical investigations.

The outcomes of our research prompt additional questions, particularly regarding how in-depth exploration of further provenance analysis can predict the quality of data using machine learning methods. The limitations identified in our study indicate the need for further investigations into provenance theory, standards, and practices in the clinical field.

Figure 1. Overview of the road map steps.

Figure 2. Aspects in the system context and border of the Provenance Information System Traces (PISA). EU: European Union; GDPR: General Data Protection Regulation; SOP: standard operating procedure.

Figure 3. Simplified general data flow diagram in the data integration center. The diagram provides information about the components participating in the data flow: the different hospital or laboratory systems donating the data; the independent trust center (trusted third party) enabling the separate processing of identifying data (IDAT) and medical data (MDAT); and the data integration center with the different integration phases staging, data warehouse, FHIR, and the research data repository (RDR). Individual DICs may deviate from this general data flow. FHIR: Fast Healthcare Interoperability Resources.

Figure 4. The logical data model as a UML class diagram (technology-agnostic).
Table 2. Levels of contextual and technical metadata and their related FHIR mapping: a mapping example of our metadata to the FHIR Provenance resource. The FHIR Provenance elements are aligned with the W3C PROV model elements. The table's columns are Level (corresponding to the maturity level of the data integration center), Description (of the possible content or annotation), Possible mapping (to the Health Level 7 FHIR resource "Provenance"), and Exemplified output (one possible output extract as a serialization in FHIR JSON; some mappings are not yet or only partly implemented, or not applicable). Documented descriptions and examples include: the name and version of the standard operating procedures or regulation (eg, "DIC_ETL-ST.pdf, v1"); the name of the (hospital) department and the responsible person owning the patient data (eg, physician or stakeholder name); the name of the responsible data steward (eg, the person who takes care of data management); the used input or created output data files of the processing pipeline (eg, names of the original source system and the target system); the scripts or programs developed to process the data, with script version, name, and creator (eg, "etl_st.py, v1, MZ"); and mappings to terminology systems such as "http://terminology.hl7.org/CodeSystem/v3-ActReason" and "http://terminology.hl7.org/CodeSystem/iso-21089-lifecycle", with exemplified FHIR-JSON output such as "policy": ["http://example.org/policy/1234..."]. FHIR: Fast Healthcare Interoperability Resources; W3C: World Wide Web Consortium.

Figure 5. Exemplary instantiation of the provenance information model. SOP: standard operating procedure.

Figure 6. Provenance runtime experiment presenting storage duration per element and per record.

Table 3. Validation status of requirements.
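Since Table 2's exemplified FHIR-JSON output column did not survive extraction intact, the following hedged sketch shows what a serialization of such a Provenance resource could look like; the element names follow the FHIR (R5) standard, while the references, policy URL, and time stamps are placeholders rather than the paper's actual output.

```python
import json

# Illustrative FHIR Provenance resource assembled as a plain dict.
provenance = {
    "resourceType": "Provenance",
    "target": [{"reference": "Observation/example-data-element"}],
    "occurredDateTime": "2023-02-01T10:00:00+01:00",
    "recorded": "2023-02-01T10:00:05+01:00",
    "policy": ["http://example.org/policy/1234"],
    "agent": [{"who": {"display": "data steward"}}],
    "entity": [{"role": "source",
                "what": {"display": "source-system record"}}],
}
print(json.dumps(provenance, indent=2))   # FHIR-JSON serialization
```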
C-reactive protein (CRP) promotes malignant properties in pancreatic neuroendocrine neoplasms

Objective: Elevated pre-operative C-reactive protein (CRP) serum values have been reported to be associated with poor overall survival for patients with pancreatic neuroendocrine neoplasms (pNEN). The aim of this study was to identify mechanisms linking CRP to poor prognosis in pNEN. Methods: The malignant properties of pNENs were investigated using the human pNEN cell lines BON1 and QGP1 exposed to CRP or IL-6. Analyses were performed by ELISA, Western blot, flow cytometry and immunocytochemistry, as well as invasion and proliferation assays. To compare cytokine profiles and CRP levels, 76 serum samples of pNEN patients were analyzed using Luminex technology. In parallel, the expression of CRP and growth signaling pathway proteins was assessed on cell lines and paraffin-embedded primary pNEN. Results: In BON1 and QGP1 cells, inflammation (exposure to IL-6) significantly upregulated CRP expression and secretion as well as migratory properties. CRP stimulation of BON1 cells increased IL-6 secretion and invasion. This was accompanied by activation/phosphorylation of the ERK, AKT and/or STAT3 pathways. Although known CRP receptors – CD16, CD32 and CD64 – were not detected on BON1 cells, CRP uptake by pNEN cells was shown after CRP exposure. In patients, increased pre-operative CRP levels (≥5 mg/L) were associated with significantly higher serum levels of IL-6 and G-CSF, as well as with increased CRP expression and ERK/AKT/STAT3 phosphorylation in pNEN tissue. Conclusion: The malignant properties of pNEN cells can be stimulated by CRP and IL-6 promoting ERK/AKT/STAT pathway activation as well as invasion, thus linking systemic inflammation and poor prognosis.

Introduction

Pancreatic neuroendocrine neoplasm (pNEN), the most aggressive neuroendocrine malignancy, has an increasing incidence of 3.2 cases per 1,000,000 inhabitants. It remains a challenge to define the molecular features associated with prognosis in these tumors. Recently, elevated pre-operative C-reactive protein (CRP) values have been shown to be prognostic for survival in patients with pNENs; patients with elevated pre-operative CRP levels had a significantly shorter overall survival (6). However, the molecular mechanisms for this phenomenon remained unclear.

CRP is an acute-phase protein which is produced mainly in the liver and rarely in atherosclerotic lesions, kidney, neurons and alveolar macrophages (7,8,9). CRP synthesis is triggered largely through secretion of interleukin 6 (IL-6) from macrophages and T-cells (10). Any type of inflammatory process is able to activate IL-6, thus causing increased concentrations of CRP in the systemic circulation (11). The function of CRP was initially considered to be related to its role in the innate immune system (12). CRP activates the complement system, which is then able to enhance the ability of antibodies and phagocytic cells to clear microbes and damaged cells from the organism. CRP binds to certain Fc-receptors, such as CD16, CD32 and CD64, whose names are derived from their binding specificity to the antibody region Fc (Fragment crystallizable). Additionally, it acts as an opsonin for various pathogens. Interaction of CRP with Fc-receptors, for example its internalization in human aortic endothelial cells (13), leads to the secretion of pro-inflammatory cytokines, enhancing the inflammatory response (14,15).
Another protein that is involved in the response to inflammatory products is STAT3 (signal transducer and activator of transcription 3). It is a crucial regulator of gene expression in response to pro-inflammatory cytokines (including IL-6), as it plays an important role in regulating growth, survival, differentiation and pathogen resistance (16,17). Constitutive activation of the STAT3 signaling pathway has been observed in a growing number of human cancers, such as breast cancer, multiple myeloma, head and neck tumors and both ovarian and prostate cancers (18,19,20,21). Nothing is known about the role of STAT3 in pNEN.

CRP and STAT3 may be part of the link between inflammation and cancer (22). Tumor-associated inflammatory cues in the tumor microenvironment, such as macrophages or tumor-associated fibroblasts and proinflammatory cytokines (including TNF-alpha and IL-6), contribute to genomic instability and are important tumor progression-promoting signals (23,24). Chronic inflammation is able to stimulate endocrine cells, leading to their hyperplasia and neoplastic transformation (25); for example, IL-1 was able to direct cancer cells into neuroendocrine differentiation. In other cancer entities, such as pancreatic ductal adenocarcinoma, squamous cell carcinoma and adenocarcinoma of the esophagus, melanoma, soft tissue sarcoma and gastric cancer, circulating proinflammatory cytokines are also related to the pathogenesis and tumor progression (26,27,28,29,30). Additionally, in all the above-mentioned entities, elevated pre-operative CRP levels were also reported to be significantly associated with poor prognosis (31,32,33,34,35). We, therefore, aimed to identify the mechanisms linking CRP and inflammation to poor prognosis in pancreatic neuroendocrine neoplasms.

Clinical samples

For this study, snap-frozen tumor tissue (n = 14), paraffin-embedded tissue (n = 20) as well as serum (n = 76) from patients with pNEN were obtained from the Pancobank of the European Pancreas Center (EPZ/Department of Surgery, University Hospital Heidelberg; Ethical Approval Votes no. 301/2001 and 159/2002), a member of the BMBH/Biomaterial Bank Heidelberg. Written informed consent was obtained from all patients after full explanation of the purpose and nature of all procedures used. The histological examination of formalin-fixed, paraffin-embedded and H&E-stained pancreatic tissue sections was performed by an experienced pancreas pathologist (F B), and only tumor sections with more than 90% pNEN tissue were used. Characteristics of the patients from whom the clinical samples derived are summarized in Table 1.

pNEN cell lines BON1 and QGP1

The human metastasized adherent pNEN cell line BON1 was obtained as a gift from Dr M Kidd, Yale University School of Medicine, and has been authenticated by STR analysis. BON1 cells were cultured as a monolayer in T-75 flasks (Corning) in RPMI 1640:Ham's F-12 medium in a 1:1 volume ratio (Invitrogen) supplemented with 10% fetal bovine serum (Life Technologies) and penicillin/streptomycin (100 IU/mL) at 37°C with 5% CO₂, as previously described (36). The human adherent pNEN cell line QGP1 was purchased from the Japanese Cancer Research Resources Bank (JCRB), which is part of the National Institutes of Biomedical Innovation, Health and Nutrition, and was cultivated as a monolayer in T-75 flasks in RPMI 1640 supplemented with 10% fetal bovine serum (Life Technologies) and penicillin/streptomycin (100 IU/mL) at 37°C with 5% CO₂.
To investigate the role of CRP as well as inflammation in pNEN cell proliferation and metastasis, BON1 and QGP1 cells were treated with the pro-inflammatory cytokine IL-6 (recombinant human IL-6, 25 ng/mL, Cell Signaling) or recombinant human CRP (20 µg/mL) (R&D Systems) for 48 h.
Protein extraction and Western blot analysis
Proteins from frozen pNEN tissue specimens (60-80 mg) pulverized in liquid nitrogen or from BON1 cell pellets were extracted using RIPA buffer containing complete protease inhibitors (Sigma-Aldrich) and supplemented with phosphatase inhibitor sets 1 and 2 (Sigma-Aldrich). Using an ultrasonic homogenizer (SonoPuls mini20, Bandelin, Berlin, Germany), the suspensions were subjected to a 30-s sonication step on ice (amplitude 80%, 0.99 kJ) and subsequently centrifuged at 16,000 × g for 10 min. Supernatants were collected and divided into aliquots, and the total protein concentration was determined using a BCA Protein Determination Kit (Thermo Scientific) following the manufacturer's instructions.
Immunohistochemistry and immunocytochemistry
The expression of CRP and IL-6 was assessed on paraffin-embedded primary tumor samples by immunohistochemistry and on BON1 cells by immunocytochemistry, as previously described (38,39). Briefly, tissue sections were de-paraffinized and rehydrated with a graded ethanol series, and heat-based antigen retrieval was carried out in citrate buffer. Both tissue sections and BON1 cells were blocked with Universal Blocking Reagent (BioGenex, Fremont, CA, USA) and incubated with anti-CRP antibody (Abcam) or anti-IL-6 antibody (Abcam) before being washed and developed using goat anti-rabbit secondary antibody (CRP, Dako) or goat anti-mouse secondary antibody (IL-6, Dako) and the DAB kit (Dako). After the tissue sections were counterstained with hematoxylin and washed with water, they were incubated in graded alcohol solutions and Roticlear (Roche) and then mounted. Optical imaging and analysis were performed using a Zeiss Axioplan2 microscope (Zeiss) with Axio Vision and K5400 Zeiss software. Ten different high-power fields (400×) were selected for each slide, and integrated optical density was used as the measure of staining intensity. Negative controls were processed in the absence of primary antibodies.
ELISA and PETIA
The determination of CRP and IL-6 levels in cell extracts and culture supernatants was performed using the human C-Reactive-Protein ELISA Kit (Abcam) and the human interleukin-6 ELISA Kit (Abcam) according to the manufacturer's instructions. CRP was additionally measured with PETIA (particle-enhanced turbidimetric immunoassay) using a Dimension EXL (Siemens Healthineers) at the central clinical laboratory of the University Hospital of Heidelberg.
Multiplex cytokine analysis
A multiplex assay for quantitative determination of inflammatory mediators was applied to assess the concentrations of selected cytokines in serum. The analysis was performed using the Bio-Plex Pro Human
Proliferation assays
The proliferation capability of BON1 cells was first assessed by an MTS cell proliferation colorimetric assay kit (Biovision, Milpitas, CA, USA) according to the manufacturer's instructions. Briefly, BON1 cells (5 × 10³/well) were seeded in a 96-well plate and incubated for 3 days with IL-6 (25 ng/mL, Cell Signaling) or CRP (20 µg/mL, R&D Systems). Twenty microliters of MTS reagent were added to each well. After 3 h of incubation, optical density was read at 490 nm and 650 nm using a SYNERGY/HTX multi-mode reader (BioTek Instruments).
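Although the paper does not show its readout arithmetic, MTS viability is conventionally computed from the two wavelengths read above: the 650 nm reading is subtracted as background and values are normalized to untreated controls. A minimal Python sketch with entirely hypothetical OD values, not data from the study:

```python
import numpy as np

# Hypothetical raw plate readings (triplicate wells): A490 signal, A650 background.
a490 = {"control": [0.82, 0.79, 0.84], "CRP": [1.05, 1.10, 1.02]}
a650 = {"control": [0.08, 0.07, 0.08], "CRP": [0.09, 0.08, 0.09]}

def corrected(group):
    """Background-corrected absorbance per well: A490 - A650."""
    return np.array(a490[group]) - np.array(a650[group])

# Viability of treated cells relative to the untreated control mean.
rel_viability = corrected("CRP").mean() / corrected("control").mean()
print(f"relative viability after CRP: {rel_viability:.2f}x control")
```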
Secondly, we used the BrdU Cell Proliferation ELISA Kit (Abcam) in order to assess proliferation by an independent method, according to the manufacturer's instructions.
RNA extraction and quantitative reverse transcriptase PCR (qRT-PCR)
Total mRNA extraction from cells as well as cDNA synthesis was performed using the MPLC RNA Isolation Kit (Roche) according to the manufacturer's instructions. For PCR, gene-specific primer pairs were used. Gene expression was quantified by qRT-PCR on a LightCycler 480 II using LightCycler 480 SYBR Green I Master reagent (Roche). The quantitative PCR was performed in a 20 µL reaction solution containing 6 µL nuclease-free water, 2 µL cDNA template, 10 µL Master Mix and 1 µL each of forward and reverse primer. The amplification conditions were initial denaturation at 95°C for 15 min, followed by 45 cycles of denaturation at 95°C for 10 s, annealing at 60°C for 30 s and elongation at 72°C for 20 s. The experiment was performed in triplicate. A qualitative PCR was also performed in order to confirm the presence of single and appropriate bands for each primer set. PCR data were analyzed using the ΔΔCT method as previously described (40).
Cell migration and invasion assays
The migration and invasion potential of BON1 cells was assessed using a 24-well plate containing inserts with 12 µm pore size polycarbonate membranes (Merck Millipore), which were coated with a uniform layer of dried basement membrane matrix solution (1 mg/mL; Corning) on the upper surface of the inserts' membrane. The lower compartment was filled with RPMI 1640/Ham's F-12 medium containing 30% FBS. BON1 cell suspensions (3 × 10⁵ cells/well in 300 µL serum-free medium) were added to the inside of each insert. After 48 h of incubation, inserts were removed and cells were fixed and stained with 0.05% crystal violet. Cells remaining on the upper surface of the insert membrane (non-invading cells) were removed with a cotton swab. Membranes were cut out. Cells that were able to invade and migrate (taken from the bottom of the membrane and the bottom of the outer well) were counted with a Zeiss Axioskop 2 microscope in nine individual fields per membrane (Fig. 1B).
Inhibition of endocytosis and immunofluorescence staining
In order to investigate CRP internalization, 5 × 10⁵ BON1 or QGP1 cells/mL were seeded in six-well plates on collagen-coated cover slides and cultured for 24 h. After the 24-h incubation, cells were additionally treated with the prepared FITC-conjugated CRP (20 µg/mL) for 24 h, harvested and seeded on collagen-coated glass plates. Then, cells were fixed in ice-cold acetone for 10 min. Cells were washed three times with PBS for 5 min. Afterward, the cells were permeabilized using 0.3% Triton (Sigma-Aldrich) for 20 min. Cells were blocked in blocking buffer (0.2% FCS, 2% BSA, 0.2% fish skin gelatin in 100 mL 1× PBS) for 45 min at RT. Subsequently, cells were incubated with DAPI (1:10,000) for 1 h. After another washing step, the cells were examined by fluorescence microscopy.
Statistical analysis
Statistical analyses were performed using Prism 5 (GraphPad Software). The Mann-Whitney test was used to determine the statistical difference between two groups. Comparisons between more than two groups were performed using the Kruskal-Wallis test, followed by Dunn's post hoc test where appropriate. Statistical significance is indicated by an asterisk. A P value <0.05 was considered statistically significant.
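To make the two analyses named above concrete, here is a minimal Python sketch of the 2^-ΔΔCT relative-quantification formula and a Mann-Whitney comparison of two groups; all Ct values and readouts are hypothetical, not data from the study:

```python
import numpy as np
from scipy import stats

def fold_change(ct_target_trt, ct_ref_trt, ct_target_ctl, ct_ref_ctl):
    """Relative expression by the 2^-ddCT (Livak) method.

    dCt = Ct(target) - Ct(reference); ddCt = dCt(treated) - dCt(control).
    """
    d_ct_trt = np.mean(ct_target_trt) - np.mean(ct_ref_trt)
    d_ct_ctl = np.mean(ct_target_ctl) - np.mean(ct_ref_ctl)
    return 2.0 ** -(d_ct_trt - d_ct_ctl)

# Hypothetical triplicate Ct values: target gene vs housekeeping gene,
# in treated and control cells. Lower Ct means more transcript.
print(fold_change([22.1, 22.3, 22.0], [18.0, 18.1, 17.9],
                  [25.0, 24.8, 25.2], [18.1, 18.0, 18.2]))  # ~7-fold up

# Two-group comparison of a hypothetical readout, as done with Prism above.
control = [1.2, 1.5, 1.1, 1.4, 1.3]
treated = [2.1, 2.4, 1.9, 2.6, 2.2]
u, p = stats.mannwhitneyu(control, treated, alternative="two-sided")
print(f"Mann-Whitney U = {u}, P = {p:.4f}")
```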
Impact of CRP on IL-6 secretion, invasion and proliferation of BON1 and QGP1 cells
Given the fact that systemically elevated CRP is part of an inflammatory response and associated with poor prognosis in pNEN, we first examined the impact of the exposure of pNEN cells such as BON1 and QGP1 to CRP alone. Upon treatment with CRP (20 µg/mL for 48 h), IL-6 secretion by BON1 cells was significantly increased (Fig. 1A), while QGP1 cells did not secrete more IL-6 (data not shown). Next, the functional significance of CRP with respect to essential malignant properties of cancer cells was examined. Invasion (Fig. 1C and D) as well as viability/proliferation measured by the MTS assay (Fig. 1E) were both significantly increased upon exposure to CRP in comparison to unstimulated BON1 cells, while QGP1 cells did not show any differences (data not shown). Proliferation, measured by BrdU-ELISA, was not affected by CRP in BON1 (Fig. 1F) and QGP1 cells (data not shown).
Impact of pro-inflammatory IL-6 on CRP secretion, proliferation and invasion of BON1 and QGP1 cells
To investigate whether BON1 and QGP1 cells spontaneously express and secrete CRP under normal culture conditions, CRP concentrations were measured in cell supernatants using ELISA and the high-sensitivity CRP test of the Heidelberg University Hospital Clinical Laboratory. Both assays revealed marginal amounts of CRP in the supernatants of BON1 and QGP1 cells (control groups, Fig. 2B and C). Since CRP synthesis by hepatocytes is triggered by inflammatory mediators such as IL-6, IL-1β and TNF-α (11), BON1 and QGP1 cells were exposed to the pro-inflammatory cytokine IL-6. Three days after treatment with IL-6 (25 ng/mL), strong intracellular CRP expression by BON1 cells was found (Fig. 2A). Concordantly, a twofold increased CRP secretion by BON1 (OD) and a ninefold increased CRP secretion by QGP1 (pg/mL) cells were measured (Fig. 2B and C). We also sought to investigate the functional significance of IL-6 exposure with respect to invasion as a key property of cancer cells, which was examined using a gel on top of a semi-permeable membrane with 12 µm pores. Invasion of cells after 48 h of IL-6 exposure was found to be increased seven-fold in BON1 and 1.5-fold in QGP1 cells (Fig. 2E and F). A correlation between CRP secretion and cell invasion was not observed, since the strong inflammatory effect of IL-6 exceeded the relatively low CRP concentration in the supernatant (secretion). Another explanation for the lower invasiveness of QGP1 cells may be the stronger QGP1 cell-cluster formation in the presence of IL-6.
Impact of IL-6 and CRP on signaling pathways in BON1 and QGP1 cells
To examine the impact of pro-inflammatory IL-6 on signaling pathways in BON1 cells, we first determined STAT3 expression in IL-6-treated and PBS-treated BON1 cells. IL-6 treatment led to robust STAT3 activation, as reflected by increased phosphorylated STAT3 in IL-6-treated BON1 cells in comparison to control cells, which did not show any STAT3 phosphorylation on the Western blot. The potential activation of the ERK and AKT pathways was also examined, and a strong activation of AKT, but not of ERK, was found in these cells (Fig. 3A, B and C). Whether the effects of IL-6 on STAT3 phosphorylation translated into active STAT3-mediated transcription was investigated as well. The expression of two STAT3-responsive genes, BCL2 and MMP-9, was examined by qRT-PCR. Both genes were strongly upregulated in IL-6-treated BON1 cells (Fig. 3D and E).
CRP triggered phosphorylation of ERK (growth signaling pathway). STAT3 and AKT were not affected by CRP treatment (Fig. 3F, G and H). In QGP1 cells, none of those effects on signaling pathways were seen (data not shown).
CRP receptors in BON1 and QGP1 cells
Having shown that CRP itself was able to directly promote proliferation and invasion by phosphorylation of ERK in BON1 cells, we next examined the presence of putative CRP receptors on BON1 and QGP1 cells. It has been previously demonstrated that CRP is capable of binding to three receptors: FcγRI (CD64), FcγRII (CD32) and probably FcγRIII (CD16) on neutrophils (41,42,43). Using FACS analysis, none of these FcγRs could be detected on BON1 cells (Fig. 4A). Therefore, FITC-labeled CRP was used in immunofluorescence microscopy in order to visualize CRP on neuroendocrine tumor cells. Indeed, FITC-labeled CRP was detectable within BON1 and QGP1 cells (Fig. 4B, middle), thereby suggesting an internalization of CRP either through endocytosis or another, yet unknown receptor. In order to specify this mechanism of internalization, clathrin-mediated endocytosis was inhibited by treatment with Dyngo. Subsequently, FITC-labeled CRP was added. Figure 4B (right) shows that this treatment visually tended to result in less CRP internalization. Endocytosis as a possible mechanism is therefore not excluded.
CRP and IL-6 expression in human pNEN are associated with STAT3, AKT and ERK pathway activation and systemic cytokine levels
Given the impact of CRP on proliferation and invasion of BON1 cells, we examined CRP expression and growth signaling pathways in human pNEN tissue. Both CRP and IL-6 expression were significantly higher, in immunohistochemistry and Western blot analysis, in pNEN of patients with pre-operative CRP serum concentration above 5 mg/L compared to pNEN of patients with pre-operative CRP serum concentration below 5 mg/L (Fig. 5A, B and C). Additionally, high serum CRP concentrations were associated with increased phosphorylation of STAT3, AKT and ERK, as well as strongly increased IL-6 levels, in comparison to low CRP serum concentrations (Fig. 5C and D). The increase of serum CRP was also associated with increased systemic levels of IL-6 and G-CSF (Fig. 5E), accompanying CRP secretion. These results confirm the link observed in vitro between pNEN and systemic inflammation in the human disease.
Discussion
In this study, we showed that pancreatic neuroendocrine neoplasm cells (BON1 and QGP1) upregulate and secrete CRP when stimulated by inflammation (IL-6). Additionally, IL-6 is secreted by BON1 cells during exposure to CRP. Both CRP and IL-6 increased invasion of BON1 and QGP1 cells. Furthermore, the ERK, AKT and/or STAT3 signaling pathways were delineated as critical effector pathways that are strongly associated with the essential malignant properties of cancer cells. Those in vitro observations were confirmed in human neuroendocrine disease. Increased CRP and IL-6 expression, as well as ERK/AKT/STAT3 phosphorylation, in tissue of pNEN patients with elevated CRP serum levels were identified. Although the role of CRP in the innate immune response as an opsonin is well established, its role in tumorigenesis and progression, particularly in pancreatic neuroendocrine neoplasms, is unknown.
Although in breast cancer cell lines increased invasion was only observed in synergy with sphingosine-1-phosphate (44), elevated CRP levels at the time of diagnosis of breast cancer were found to be significantly correlated with stage, size and grade of the tumor and metastasis (45).
(Figure 5E legend: Both IL-6 and G-CSF were significantly increased in serum from pNEN patients with high pre-operative serum CRP concentration (≥5 mg/L) (ELISA, *P < 0.05). NML, normal pancreas tissue; pNEN-L, pNEN patients with low pre-operative serum CRP concentrations; pNEN-H, pNEN patients with high pre-operative serum CRP concentrations.)
Increased pre-operative serum CRP has also been demonstrated to enhance the metastasis of renal cell carcinoma (46). Consistent with these studies, we demonstrated that elevated CRP increased the invasion capability of BON1 and QGP1 cells. A low CRP concentration (<5 mg/L) was revealed to inhibit proliferation of vascular endothelial cells (47); however, a high CRP concentration (≥5 mg/L) increased proliferation of multiple myeloma cells and macrophages (48,49,50). In this study, we show activation of growth signaling pathways in neuroendocrine tumor cells (BON1) and increased viability and proliferation using the MTS assay, while in the BrdU assay, high CRP concentrations were not shown to be pro-proliferative in the pNEN cell lines BON1 and QGP1. It is remarkable that the two pNEN cell lines BON1 and QGP1 reacted differently to the same stimuli and secreted CRP in different concentrations. The different behavior of the pNEN cell lines may correspond to the biological heterogeneity (51,52,53) that is well known from human pNENs (5). Our study is limited by the small number of pNEN cell lines currently available for research. One important goal of future studies has to be the generation of new pNEN cell lines, especially from well-differentiated tumors. Another limitation of this study is that in vitro experiments were conducted with highly proliferative pNEN cell lines, whereas the patient cohort included a majority of G1/G2 pNENs (Table 1). IL-6 has also been shown to stimulate the proliferation and metastasis of a diverse range of tumors (54,55), in which signaling involves several distinct downstream intracellular signaling cascades, including the AKT/STAT3 pathway. This ultimately leads to transcriptional changes that have an impact on survival, proliferation, differentiation and migration (56). We examined the IL-6 secretion of BON1 cells and determined the effects of IL-6 exposure on proliferation and migration/invasion. The response to IL-6 is most likely mediated by STAT3 activation, which has been associated with poor prognosis in renal cell carcinoma, for example (57). In agreement with these findings, our data showed that the effects of IL-6 in augmenting both BON1 and QGP1 cell migration and proliferation are likely mediated via the AKT/STAT3 pathway. Interaction of CRP with monocytes/macrophages was reported to induce the secretion of IL-1 and TNF as well as IL-1-induced IL-6 production (58,59). Moreover, monocytes/macrophages were shown to synthesize CRP (8) and IL-6 (59), thus suggesting a positive feedback mechanism between CRP and IL-6 in monocytes/macrophages in the promotion of systemic inflammation. The present study indicates that CRP may play a role in pNEN tumor progression via IL-6/AKT/STAT3/CRP signaling, as shown in Fig. 6.
CRP either binds to an unknown receptor or becomes internalized by endocytosis, triggers IL-6 production and activates the IL-6/AKT/STAT3/CRP axis in BON1 and QGP1 cells. The mutual stimulation between CRP and IL-6 in those cells may represent a positive feedback mechanism which promotes tumor progression as well.
Figure 6. Schematic model of possible signal transduction stimulated by CRP in BON1 cells. In the pNEN microenvironment, specialized immune cells and hepatocytes are one source of CRP and/or IL-6. The other source of CRP and IL-6 is the BON1 or QGP1 cells themselves. The binding of IL-6 to its ligand-binding receptor initiates activation of AKT, which in turn phosphorylates and activates STAT3 transcription factors. Activated STAT3 binds to specific sites in the target gene promoters, inducing transcription of BCL-2 and MMP-9, thus resulting in proliferation and metastasis (invasion/migration) of pancreatic neuroendocrine tumor cells, as well as the expression and secretion of CRP. Secreted CRP may exert its effects after internalization either through an unknown receptor (?) or via endocytosis. It activates ERK as well as IL-6 secretion, thereby leading to proliferation and metastasis in pancreatic neuroendocrine tumor cells.
Public Health Awareness Building in the field of Safe Motherhood
Methods: A total of 400 sample households (HHs) were represented from 20 wards of 10 VDCs in four districts of the country during the fieldwork from April to August 2006. The sampled VDCs and wards were selected through meetings with the respective stakeholders at the district and village levels. A semi-structured questionnaire and ethnographic observation were administered for gathering the required data.
INTRODUCTION
The life expectancy is 63.3 years, and the Infant Mortality Rate was 64 per 1000 live births in 2006 [2,3]. Complicated pregnancy and problems during childbirth are the major causes affecting maternal health. The total numbers of MCHW and nurse/ANW were 3190 and 11637, respectively, in the country during 2006. In this subsistence economy, the majority of people (65%) fully depend on agriculture with livestock, and 31% of the population is still below the poverty line [5]. With education, training, empowerment and awareness through the efforts of the government and other stakeholders, the situation at present has been gradually improving. However, some maternal health-related problems persist: maternal mortality is still high, and uterine prolapse and other diseases affecting women are widely found in rural and distant areas of the country. During fieldwork, even in Chhapmi, the VDC adjoining the Ring Road in Lalitpur District, most of the women of wards 1 and 6 complained that the complicated disease related to safe motherhood in this VDC is uterine prolapse. Experience also showed that the avoidance of the three delays is imperative to achieve the goal of reducing maternal mortality. The three delays include delay in seeking care, delay in reaching care and delay in receiving care [6,7].
METHODS
This was a field-based study of four districts representing the hill and Terai regions of the country: Saptari, Dhanusa, Kavrepalanchock and Lalitpur Districts. The districts were selected based on the suggestion of NHRC. Twenty wards from 10 VDCs of the four districts were studied. The sampled VDCs and wards were selected through meetings with the respective stakeholders at the district and village levels. The fieldwork was carried out from April to August 2006. The total population during the field visit was 2783 in the sample VDCs. As per the research design, two districts from each region were taken as samples, with 80 HHs sampled from each hilly district and 120 HHs from each Terai district; in this way, a total sample of 400 questionnaires was determined. Given the limited time frame and budget, along with the approval of NHRC, the proportion of VDCs differs between the hill and Terai regions. For the collection of the required first-hand data, a semi-structured survey questionnaire and ethnographic observation were administered as research techniques. Secondary sources were also used for the discussion in this article. Thus, both primary and secondary sources were applied.
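The allocation arithmetic can be checked directly; a minimal Python sketch under the design stated above (two hill districts at 80 HHs each, two Terai districts at 120 HHs each):

```python
# Sampling design as reported: 80 households per hill district,
# 120 per Terai district, across the four study districts.
allocation = {
    "Kavrepalanchock": 80,   # hill
    "Lalitpur": 80,          # hill
    "Saptari": 120,          # Terai
    "Dhanusa": 120,          # Terai
}

total_households = sum(allocation.values())
assert total_households == 400  # matches the 400 questionnaires reported
print(allocation, "->", total_households, "households")
```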
Upon the selection of the sampled districts and VDCs, a survey questionnaire including the required checklists was prepared. After the field preparation, the study team (principal investigator and field supervisors) moved to the headquarters of Kavre district, where the team arranged a meeting of district-level stakeholders (DHO, FPAN, and other health-related organizations) for the appropriate selection of sample VDCs in the district. The same procedure was followed in the other sample districts. Supervision and monitoring of the fieldwork were also managed. After editing, the field data were analysed using the latest version of the Statistical Package for the Social Sciences (SPSS). Ethical issues were also considered throughout the study period. After the approval of the tools and techniques for the fieldwork, the VDCs were selected with the help of meetings of the respective stakeholders. The questionnaire was administered by visiting each sample HH. The data were collected during the leisure time of the informants. The privacy of each informant was respected.
RESULTS
The results are discussed at two levels: first, the socio-economic information is analysed, and second, focus is given to the different issues of safe motherhood. Sex is considered a significant variable among the social, economic and demographic components. The present study first reveals the distribution of the total population by sex/sex ratio and household size. The total population of the four sampled districts is found to be 2783, of which 1395 (50.1%) are males and 1388 (49.9%) are females. Of the total 2783 population, the highest number is in Saptari, 1026 (36.8%), followed by Dhanusa, 767 (27.5%), Kavre, 571 (20.5%), and Lalitpur, 419 (15%). The overall sex ratio of the four sampled districts is 1.005, while the national sex ratio is slightly lower, i.e., 0.997 (CBS, 2001). If the sex ratio is compared among the sampled districts, it is the highest in Saptari (1.115), followed by Dhanusa (0.992), Lalitpur (0.976) and Kavre (0.866). Similarly, the overall HH size is 6.95 for all four districts, which is higher than the national average HH size of 5.44 (CBS 2001). This is attributable to the majority of sampled VDCs having a rural background. The biggest HH size (8.55) is in Saptari and the smallest (5.23) is in Lalitpur district (Table 1). A bigger HH size indicates a high fertility rate, a low educational level, massive poverty and other encumbrances. During fieldwork, posters, leaflets and brochures related to safe motherhood were also observed. Calendars would also be useful materials, either for day-to-day activities or for learning more about safe motherhood. With the help of such materials, the awareness level in the sampled VDCs has been found to be gradually improving.
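The headline demographic figures above follow from simple ratios; a short Python sketch reproducing them from the reported counts (assuming the 2783 persons belong to the 400 sampled households, consistent with the reported mean HH size):

```python
males, females, households = 1395, 1388, 400
population = males + females  # 2783

sex_ratio = males / females             # males per female
mean_hh_size = population / households  # persons per sampled household

print(f"sex ratio = {sex_ratio:.3f}")        # ~1.005, as reported
print(f"mean HH size = {mean_hh_size:.2f}")  # ~6.96 (reported as 6.95)
```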
Because multiple responses were allowed, a total of 616 responses were recorded on this issue (Table 5). Of these, the majority (33%) reported health workers as an important source, followed by mothers-in-law (30%), TBAs (25.8%) and others, including neighbors, sisters and sisters-in-law (10.7%). Comparing the sampled districts with respect to mothers-in-law as a source of knowledge, the highest share (50.4%) is in Kavre, followed by Dhanusa (30.6%), Saptari (23.6%) and Lalitpur (18.9%). Feeding colostrum to infants is essential for their health and growth. The field data revealed that the majority of the respondents (69%) reported feeding colostrum to infants. Compared across the sample districts, a higher proportion in Kavre (84.8%) practice colostrum feeding, followed by Lalitpur (81.6%), Dhanusa (61.7%) and Saptari (58%). By ecological belt, a larger majority of respondents in the hilly sampled districts practiced colostrum feeding than in the Terai sampled districts (Table 7). During fieldwork, it was found that uterine prolapse is a critical problem in the sampled districts. Most of the female respondents argued that maternal health has been greatly affected by this disease. Furthermore, they suggested that uterine prolapse should be examined by organizing health camps in distant areas. In Ballakathal VDC of Dhanusa district, a critical case was encountered: a Kurmi woman sent her husband to the field study team's night camp so that they would hear her problem and find a way for her to be free from the disease. In reality, it is a hidden problem, because the patient eats regular food and performs other activities properly; family members do not easily believe she has a problem. Therefore, such hidden cases are not exposed until the affected person falls into continuous illness or complete bed rest.
DISCUSSION
Safe motherhood is one of the important components of sexual and reproductive health. There are various components related to safe motherhood, and they are discussed in turn. Maternal health can be affected by marital age. In total, in the sampled districts of this study, 29% of community people were found to have married below 18 years of age. From a health perspective, it is said that a woman's health can be weakened if she marries below the age of 18. Therefore, the government of Nepal has legislated against marrying below the age of 18 years. Furthermore, the study reveals that a high proportion (58.37%) married at the age of 18 to 22 years. The results nearly coincide with those of FPAN: according to FPAN (2005), 54.7% of respondents were found to have married at 15-19 years of age. Many pregnant women suffer at the time of delivery because of the lack of proper knowledge of reproductive and sexual health, place of delivery, assistance during pregnancy, available health facilities and so forth. The present study shows that a high majority (80.5%) reported home delivery in the four sampled districts. This finding was closely supported by the same FPAN result [8]. The present study also indicates that, of the total respondents, most (87.5%) have clear knowledge about ante-natal care (ANC). This result contradicts DoHS, which found different results across different fiscal years: during fiscal year 2001/02, 42.7% of expected pregnant mothers received ANC services at least once, 53.1% during 2002/03 and 66.1% during 2003/04 [2].
Among the different factors of safe motherhood, the place of delivery should be considered. The present study demonstrated that the majority of respondents (33%) reported health workers as the prime source of assistance during delivery. Furthermore, the same study found that 30% of mothers-in-law assisted pregnant women during delivery. However, DoHS reported quite a contradictory result: health workers assisted 18.3% of mothers during delivery in the fiscal year 2003/04 [2]. During the delivery period, the instruments used for cutting the placenta should be carefully managed. The field data of the present study indicate that the majority of recipients (58.9%) used blades, followed by sterilized blades (26.1%) and others (15%). During fieldwork, posters, leaflets and brochures related to safe motherhood were also observed. With the help of such materials, the awareness level in the sampled VDCs has been found to be gradually increasing. NHEICC found some specific results in this regard: of the 179 respondents exposed to safe motherhood posters, 93% reported having learned from the posters. The proportion of respondents learning from posters was highest in the Terai (95.4%), followed by the hills (92.6%) and the mountains (90.9%) [9]. Breast milk contains all types of nutrients required for a child in the right proportion, with good quality and composition. Furthermore, the initiation of the first breast milk is very important for babies because it contains colostrum [10]. In the field of maternal and child health, colostrum is a very useful nutrient for infants' health and growth. Proper use of colostrum helps both child and maternal health. Sometimes, some mothers may be affected due to the improper use of the first breast milk. The present study reveals that 69% of respondents fed the first breast milk to infants; in Kavre district alone, 84.8% fed colostrum to infants. In this study, it was also observed that uterine prolapse has created complications for maternal health. Most of the female respondents of the sampled VDCs expressed that uterine prolapse should be examined by organizing health camps in distant areas. In this sense, most of the women in remote areas have to be made aware of critical conditions such as this problem on the one side, and the government and NGOs have to organize regular mobile clinics or health camps prioritizing maternal health in the country on the other.
CONCLUSION
Safe motherhood is a significant component of reproductive health. The majority of people still practised delivery at home rather than in public health institutes: the study indicates that 80.5% of the population reported home delivery, followed by health institutes (18.3%). Similarly, 30% of respondents expressed that mothers-in-law assist during delivery, and 69% of informants reported that they fed colostrum to infants. Despite some positive achievements in this field, some areas are still weak and critical. Hence, it is suggested that the government and NGOs arrange awareness campaigns focusing on the critical problem areas.
Table 1. Distributions of Sample Population
Marital age plays a vital role in women's health. Therefore, the team gathered information on the marital age practiced in each sampled cluster. Table 2 indicates that, in total, 29% of community people practice marriage below 18 years. Generally, it is said that a woman's health can be weakened if she marries below 18 years. Compared across the sampled districts, more than half of the respondents of both Terai districts, Dhanusa and Saptari, marry below 18 years, a larger share than in the other sampled districts. Being an advanced area of the country, no marriage below 18 years occurs in Lalitpur district. Furthermore, in Lalitpur, 46.3% reported that the marital age practiced in the community is above 22 years, influenced by the nearby urban area of Kathmandu.
Table 2. Marital Age Practicing in the Sample Districts
Table 3. Knowledge of ANC
Table 4. Place of Recent Delivery
Table 5. Assistants During Delivery
Table 7. Practicing of Colostrum to Infants
Thyroid Hormone Action in Muscle Atrophy
Skeletal muscle atrophy is a condition associated with various physiological and pathophysiological conditions, such as denervation, cachexia, and fasting. It is characterized by an altered protein turnover in which the rate of protein degradation exceeds the rate of protein synthesis, leading to substantial muscle mass loss and weakness. Muscle protein breakdown reflects the activation of multiple proteolytic mechanisms, including lysosomal degradation, apoptosis, and the ubiquitin-proteasome system. Thyroid hormone (TH) plays a key role in these conditions. Indeed, skeletal muscle is among the principal TH target tissues, where TH regulates proliferation, metabolism, differentiation, homeostasis, and growth. In physiological conditions, TH stimulates both protein synthesis and degradation, and an alteration in TH levels is often responsible for a specific myopathy. Intracellular TH concentrations are modulated in skeletal muscle by a family of enzymes named deiodinases; in particular, deiodinases type 2 (D2) and type 3 (D3) are both present in muscle. D2 activates the prohormone T4 into the active form triiodothyronine (T3), whereas D3 inactivates both T4 and T3 by the removal of an inner-ring iodine. Here we will review the present knowledge of TH action in skeletal muscle atrophy, in particular the molecular mechanisms presiding over the control of intracellular T3 concentration in wasting muscle conditions. Finally, we will discuss the possibility of exploiting the modulation of deiodinases as a possible therapeutic approach to treat muscle atrophy.
Introduction
Thyroid hormone (TH) affects virtually every organ system in the body, including skeletal muscle. Among its widespread biological functions, TH controls tissue development and homeostasis, cellular growth, differentiation, and metabolism [1]. In particular, specific TH levels are critical for the development of different tissues and are essential for the regulation of metabolic processes in life. Skeletal muscle is one of the largest tissues in humans, accounting for about 40% of body weight. It is a dynamic and plastic tissue that is primarily responsible for locomotion, but it also plays important roles in many other physiological processes such as glucose and energy metabolism. The skeletal muscle is a well-known TH target tissue; TH affects muscle development, contractile function, myogenesis, and bioenergetic metabolism [2]. Alterations of circulating TH levels are associated with several muscle symptoms and signs. Since the 1970s, many studies have shown that thyroid disorders can induce a specific myopathy. Notably, THs have been demonstrated to be one of the elements controlling muscle mass, and their alteration can be responsible for the development of muscle atrophy [3][4][5]. Muscle atrophy is a common comorbidity among patients with chronic and/or advanced disease but can also be induced by muscle disuse or muscular dystrophies. The resulting condition derives from the combined effects of muscle atrophy and muscle stem cell death, which lead to an overall loss of muscle mass and a decrease in muscle strength [6]. In catabolic conditions, protein breakdown is enhanced and exceeds protein synthesis, thereby resulting in myofiber atrophy [7]. THs control many processes involved in the development of muscle atrophy. Here, we will review the present knowledge of the role of TH in skeletal muscle atrophy.
In particular, we will discuss the molecular mechanisms presiding over the control of local intracellular T3 concentration in muscle and how these are involved in the development of muscle atrophy.
TH Action
The thyroid gland secretes mainly thyroxine (T4) and, in small amounts, triiodothyronine (T3) [1]. T3 is the most metabolically active form of TH, whereas T4, due to its low affinity for TH receptors, is considered a prohormone and constitutes a large reserve of T3. It was long unclear how a pleiotropic agent such as TH could regulate a multitude of cellular processes in virtually all cells of the body without gross modification of its circulating levels. Indeed, homeostatic regulation, operated by the hypothalamic-pituitary-thyroid (HPT) axis, guarantees a very stable concentration of circulating TH under healthy conditions [8,9]. When circulating TH levels are too high or too low, the hypothalamus and pituitary gland can modulate thyroid-stimulating hormone (TSH) and TSH-releasing hormone secretion to compensate for this imbalance through the negative feedback exerted by TH. Beyond this central regulation of circulating TH levels, there is a second system that acts at the intracellular level, modulating the amount of intracellular TH. Different cells, even if exposed to the same amount of circulating TH, can have a dramatically different thyroid status from each other, and that condition can be rapidly modified. The discovery of TH transporters (THT) and deiodinases, together with a better understanding of the peripheral metabolism of TH, dramatically changed our understanding of TH action [1,10]. The TH intracellular concentration depends on the expression of THTs and deiodinases. Three different families of THTs have been identified: monocarboxylate transporters (MCT), organic anion-transporting polypeptides (OATP), and L-type amino acid transporters (LAT). THTs differ in TH affinity and tissue distribution. MCT8, the first THT identified, transports T4 and T3 equally and is expressed in the heart, skeletal muscle, kidney, liver, placenta, and testis, whereas OATP1C1 is a high-affinity transporter for T4 and is mainly expressed in the brain. The physiological relevance of THTs has been proven by evidence that human mutations in the MCT8 and MCT10 genes (decreasing their activities) are associated with psychomotor retardation (Allan-Herndon-Dudley syndrome) and neurodegenerative diseases, respectively [10,11]. Once inside the cell, the intracellular concentration of TH is tightly controlled by deiodinases. Deiodinases type 1 and type 2 (D1 and D2) are able to activate T4 into T3. Deiodinase type 3 (D3) is the principal physiological inactivator of TH; it prevents the activation of T4 by converting it into reverse T3 (rT3), or terminates T3 action by converting it into T2. D1 is expressed in the thyroid, liver, and kidney, and it is the only deiodinase capable of both activating and inactivating TH. D1 is induced by TH. D2 is expressed in key thyroid-responsive tissues, such as brain tissue, brown adipose tissue, and, relevant to this review, skeletal muscle, wherein D2 can rapidly enhance TH signaling by increasing the intracellular production of T3. D2 has a very short half-life (about 30 min); moreover, T4 increases the proteasome-mediated degradation of D2, while T3 decreases its expression, thus making D2 a very sensitive rheostat of thyroid status.
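The logic of this pre-receptor control can be illustrated with a deliberately simplified mass-balance sketch, which is not taken from the review: treat intracellular T3 as produced from T4 at a D2-dependent rate and removed at a D3-dependent (plus basal clearance) rate, with first-order kinetics and hypothetical rate constants.

```python
# Toy model: d[T3]/dt = k_d2*[T4] - (k_d3 + k_clear)*[T3].
# All rate constants are hypothetical, chosen only to illustrate how the
# D2 (activating) / D3 (inactivating) balance sets intracellular T3.

def steady_state_t3(t4, k_d2, k_d3, k_clear=0.1):
    """Intracellular T3 level where D2 production balances D3 inactivation."""
    return k_d2 * t4 / (k_d3 + k_clear)

t4 = 1.0  # normalized circulating T4, identical for both cell states
print("high D2, low D3 (e.g., differentiating muscle):",
      steady_state_t3(t4, k_d2=1.0, k_d3=0.05))
print("low D2, high D3 (e.g., proliferating cell):",
      steady_state_t3(t4, k_d2=0.2, k_d3=1.0))
```

With identical circulating T4, the two parameter sets yield intracellular T3 levels differing by more than an order of magnitude, which is the point the deiodinase literature makes qualitatively.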
D3 is mainly expressed in fetal tissues; its expression declines after birth, and in adult life it is restricted to the brain, placenta, skin, and pancreatic β-cells. However, D3 expression can be reactivated in different tissues during specific physiological processes, such as muscle regeneration, or by pathological conditions, such as tumors. Taken together, the actions of the three deiodinases constitute a potent mechanism for pre-receptor regulation of TH action at the cellular level [1]. Intracellular T3 exerts its biological activity by binding to TH nuclear receptors (TRs) and regulating target gene expression (genomic action), but also by binding to cytosolic proteins and activating intracellular signals (non-genomic action). TRs function as ligand-modulated transcription factors. The binding of T3 to TRs modifies the transcription of target genes by an exchange between co-repressor and co-activator complexes. TRs are encoded by two genes, Thra and Thrb [12]. They are differentially expressed across tissues and during development. TRα is mainly expressed in skeletal muscle, heart, intestine, brain, and skeleton, whereas TRβ is mainly expressed in pituitary and liver tissue. During the last ten years, it has been demonstrated that the specific modulation of local TH concentration is essential to the progression of complex processes, such as thermogenesis, myogenesis, and neurogenesis [13][14][15][16][17]. D2 and D3 are both expressed in muscle tissue, as are the thyroid hormone transporters MCT8 and MCT10 and the thyroid hormone receptor TRα1. The generation of genetically modified mouse models has shed light on the role of the local control of TH concentration as a key regulator of muscle development, metabolism, and homeostasis (Table 1).
Table 1. Muscular phenotype of murine thyroid-hormone-signaling knockout models.
- Dio2 KO: changes in contractile function and fiber type composition; in contrast with local hypothyroidism, mice show increased fast characteristics of the soleus and increased myofiber CSA, contraction rate, and fatigue resistance [18].
- Dio3 (global KO): neonatal thyrotoxicosis followed by central hypothyroidism, decrease in body weight, and partial perinatal lethality [19].
- Dio3 (MuSC-specific KO): massive apoptosis of MuSC, impairment of the initial phases of muscle regeneration, and delay of the repair process [14].
- Thra/Thrb, Thra, Thrb KO: similar to the absence of thyroid hormone; fast-to-slow MHC isoform switching and decrease in body and muscle weights [20].
- Slc16a2 (Mct8)/OATP1C1 KO: hyperthyroid state of skeletal muscle, shift from slow to fast fibers, muscle hypoplasia, and impaired muscle regeneration [21].
- Slc16a2 (Mct8) KO: hyperthyroid state of skeletal muscle, shift from slow to fast fibers, and impaired muscle regeneration [21].
- Slc16a10 (Mct10) KO: increased aromatic amino acid levels in plasma, muscle, and kidney [21].
Effects of TH on Skeletal Muscle Physiology
TH induces the expression of multiple genes that code for proteins essential to defining muscle contractile and metabolic properties (Figure 1). The molecular effects of TH on skeletal muscle are evident from development, when TH and neuronal innervation trigger the transition of muscle fibers from the embryonic/neonatal phenotype to the adult phenotype. Notably, TH stimulates the conversion of muscle fiber type from slow to fast by inducing the transition of MYH7 to MYH2, MYH2 to MYH1, and MYH1 to MYH4 [22].
Hypothyroid conditions delay this switch, as Thra deletion alone in mice is sufficient to increase MYH7 expression [20]. However, slow muscle is considered more sensitive to TH alterations than fast muscle. Rat slow muscle, such as soleus, has higher T3 levels compared to fast extensor digitorum longus muscle (EDL), even if T4 levels are similar [23]. This is in agreement with the higher Dio2 expression in slow-twitch than fast-twitch mouse skeletal muscle [24]. Nonetheless, in a mouse model of Dio2 global knockout (D2KO), unexpectedly, the absence of Dio2 in soleus muscle induced a marked increase in the number of fast, glycolytic type IIB fibers [18]. Fast fibers are more glycolytic, whereas slow fibers are oxidative and have more mitochondria. Interestingly, peroxisome proliferator-activated receptor gamma coactivator 1α (PGC-1α) gene expression, a TH-dependent regulator of slow muscle fiber type, was decreased by 50% in D2KO soleus [18] compared to WT. PGC-1α favors oxidative metabolism by increasing mitochondrial biogenesis [25]. Thus, the reduction in PGC-1α expression in D2KO slow-twitch muscle could explain the increase in fast glycolytic fibers. However, the single overexpression of Dio2 in the C2C12 muscle cell line induces a shift from oxidative phosphorylation to glycolysis [26].
Figure 1. TH induces the expression of multiple genes that code for proteins essential to defining muscle contractile and metabolic properties.
TH also plays an important role in myogenesis, a multistep process responsible for normal skeletal muscle development, maintenance, and repair of adult myofibers. The initial step consists of the activation of satellite cells, their amplification, differentiation, and fusion to form new myofibers. The correct progression through these phases is orchestrated by myogenic regulatory factors (MRFs) such as Myf5, MyoD, and Myogenin.
Several studies showed that a finely tuned sequential expression of D3 and D2, more than plasmatic T3 levels, is important for customizing the intracellular amount of TH in muscle stem cells (satellite cells) during the different phases of the myogenic lineage. Satellite cells and C2C12 cells cultured under proliferative conditions express elevated D3 levels that decline upon differentiation [13,14]. D3 genetic ablation from satellite cells increases intracellular T3 concentration and induces massive apoptosis through the activation of the FoxO3 signaling pathway [13,14]. D2 has an opposite pattern of expression versus D3. Dio2 expression increases during differentiation, similar to MyoD. The genetic ablation of Dio2 in these cells impairs myogenic differentiation, a phenotype that is reverted by culturing the cells in the presence of T3 [13]. These studies highlight the complex role of local control of TH in muscle physiology. Of note, it is evident from these studies that the plasmatic concentration of the active hormone T3 is not adequate for the different processes in which stem cells are involved. Accomplishing these functions requires customizing T3 concentrations, which must be dynamically regulated by the specific action of deiodinases as well as the expression of TH transporters and receptors.
Muscle Atrophy and Thyroid Dysfunction
Muscle atrophy is a condition characterized by a decrease in muscle mass. Different causes can be responsible for muscle atrophy, ranging from progressive muscle disuse associated with aging, to muscle diseases such as dystrophies, to systemic diseases such as cancer, sepsis, or burn injury. The loss in muscle mass is associated with a loss in strength, muscle weakness, and hypofunction, which make the subject more fragile [27]. The balance between protein synthesis and degradation, and the ability to regenerate muscle fibers, are some of the elements that determine muscle mass. Several factors affect this balance, e.g., physical activity, nutritional status, hormones, and diseases. The impact of TH on muscle physiology is shown by the muscular consequences of systemic hyperthyroidism or hypothyroidism, as well as by mutations impairing the function of even just one of the genes involved in intracellular TH signaling (i.e., receptors or transporters) [13,28]. Hypothyroidism and hyperthyroidism are conditions associated with a spectrum of muscle symptoms and signs, ranging from myalgias, cramps, and easy fatigability to atrophic myopathy and rhabdomyolysis. The severity of the myopathy correlates with the severity and the duration of the hypothyroidism or hyperthyroidism [4,5]. Hypothyroid subjects show a characteristic myopathy especially affecting type II fibers. The most severely clinically affected patients show type II muscle fiber atrophy and loss, with increased central nuclear counts and the presence of mitochondrial abnormalities [29]. Overt and subclinical hyperthyroidism is associated with a decrease in muscle strength and cross-sectional area that usually improves after treatment with anti-thyroid drugs. It is thought that the muscle atrophy observed in hyperthyroid patients depends on a faster protein turnover. The hyperthyroid state increases basal metabolism and, therefore, energy expenditure. Thus, the extra energy demands are satisfied by the augmented oxidation of lipids and proteins. The catabolic effect on protein metabolism causes accelerated protein breakdown and thus atrophy.
The evidence that mild degrees of thyroid dysfunction might be associated with muscle alterations suggests that this condition should probably be treated, especially in a fragile population such as the elderly. Sarcopenia is an age-related muscle-atrophic condition that is due to a reduction in muscle regeneration and a progressive decrease in anabolism with an increase in catabolism. The prevalence of subclinical thyroid disorders increases with age. Interestingly, subclinical hyperthyroidism, but not subclinical hypothyroidism, influences muscle mass and strength in elderly subjects. In an elderly Korean population, it has been observed that higher levels of T4 are associated with sarcopenia. A Chinese study also showed that higher free triiodothyronine (FT3) concentrations within the normal range are correlated with muscle mass and muscle function in elderly subjects [30]. However, low FT3 has been identified as a possible marker of frailty in the elderly [31]. It has been proposed that TH deficiency might act at different levels (nerve, neuromuscular junction, and muscle fiber) [32] through the alteration of glycogenolytic and oxidative metabolism, the expression of contractile proteins, and neuro-mediated damage. It is interesting to observe that not only systemic alterations in TH concentration but also mutations of single genes coding for some of the proteins involved in TH signaling are associated with muscle disorders. Thyroid hormone transporter MCT8 deficiency is an inherited disorder that is characterized by severe intellectual disability, an impaired ability to speak, diminished muscle tone (hypotonia), muscle wasting, and/or movement abnormalities. SECIS-binding protein 2 (SBP2) is involved in the synthesis of selenoproteins such as deiodinases. Deficits of SBP2 are associated with thyroid dysfunction and neurocognitive deficits, as well as azoospermia, muscular dystrophy, photosensitivity, and immune dysfunction [33]. Understanding the molecular mechanisms through which TH controls muscle mass will allow a better comprehension of the development of muscle atrophy and will eventually allow the identification of therapeutic targets for the treatment of muscle loss.
Effects of TH on Skeletal Muscle Pathophysiology
Skeletal muscle has a remarkable regenerative capacity and can undergo several rounds of repair in response to injury and/or pathophysiologic conditions such as sarcopenia or dystrophies [34,35]. Muscle regeneration is a process highly sensitive to TH concentration. In D2KO mice, muscle regeneration induced by cardiotoxin injury is impaired despite the normal concentration of circulating TH, supporting the concept that increased requirements for TH at regeneration sites are satisfied by the local action of D2. Moreover, in D2KO mice, MyoD levels (a well-known TH-responsive gene) are reduced, leading to a marked delay in muscle differentiation/regeneration. Thus, during regeneration, adequate MyoD levels are ensured by the local rise of TH mediated by D2. In skeletal muscle, D2 is positively regulated by FoxO3a, a forkhead transcription factor that is implicated in muscle differentiation and in the activation of autophagy. Accordingly, the expression of Dio2 is significantly reduced in FoxO3-null mice. Muscle stem cells from FoxO3-null mice fail to fully differentiate due to the absence of MyoD, and treatment with TH is able to rescue complete myogenic differentiation. D3 is highly expressed in myoblasts during the initial phase of the regenerative process.
D3 action reduces intracellular TH signaling, allowing the proliferation of satellite cells. Satellite cell-specific genetic D3 depletion severely compromises skeletal muscle regeneration, with irreversible stem cell death due to excessive intracellular TH levels. D3 plays a crucial role in the early phase of muscle regeneration, when cells have to be protected from the "normal" circulating TH that is sufficient to trigger a death pathway via the FoxO3-MyoD axis in these cells [14]. Muscle atrophy is the result of altered protein and cell turnover. Indeed, an increase in muscle turnover that leads to premature exhaustion of satellite cells, or a reduction in regenerative potential due to a decrease in the satellite cell pool, could be a cause of muscle atrophy. The number of satellite cells is progressively reduced during aging, impairing regenerative capacity and leading to the loss of muscle mass. Sarcopenia has been correlated with alterations in TH signaling. It has been observed that serum TH levels decrease with aging, and this seems to influence skeletal muscle physiology [36]. During aging, the reduction of muscle mass is associated with an overall shift from fast to slower fibers: a decrease in type 2b fibers, an increase in type 2x, and the conversion of type 2a to type 1 fibers [37][38][39]. These changes are similar to what is observed in hypothyroid conditions [40] and are consistent with the lower level of TH detected during aging. Interestingly, administration of T3 to aged rats induces an increase of Myh2 and Myh1 expression in the slow-twitch soleus, whereas no significant changes are observed in the fast-twitch EDL after the treatment [41]. TH controls mitochondrial function in skeletal muscle, which is also altered during aging [3]. Indeed, in muscle fibers, ATP production and mtDNA are reduced with aging [42][43][44]. In mouse muscle, overexpression of p43 (the mitochondrial THRA isoform) stimulated mitochondrial biogenesis and activity, leading to increased oxidative stress and reduced mitochondrial function. This condition is probably the cause of the muscle sarcopenia observed starting from 6 months of age [45]. In several muscular dystrophies, regenerative muscle processes are impaired, and this leads to muscle wasting, progressive weakness, and metabolic disorder. Duchenne muscular dystrophy (DMD) is the most common form of muscular dystrophy, caused by the absence of functional dystrophin. The dystrophic myofibers undergo continuous cycles of degeneration and regeneration, resulting in the loss of muscle tissue, a process likely due to a progressive decrease in the satellite cell reservoir. Intriguingly, alterations in circulating TH levels have a profound impact on the phenotype of a mouse model of Duchenne named mdx [46]. McIntosh and Anderson demonstrated that systemic hypothyroidism induces an increase in myogenic precursor cell proliferation but a delay in myotube formation, worsening the phenotype of mdx mice. Of note, McArdle and colleagues demonstrated that hypothyroidism induced by PTU (an anti-thyroid drug that significantly reduces plasma TH levels) improves the dystrophic phenotype, preventing necrosis in the muscle of 21-day-old mdx mice and eliminating the characteristic elevation in serum creatine kinase (a marker of muscle damage). This study provided the first demonstration that experimental manipulation of TH levels alters the onset of necrosis in mdx mice [47].
It was later demonstrated that thyroid antagonist (PTU or MMI) treatments reduce the rate of muscle degeneration in a model of avian muscular dystrophy. Indeed, if hypothyroidism is established immediately after hatching, the muscle function of dystrophic chicks significantly improves. The beneficial effects of anti-thyroid drugs on avian dystrophy are specifically due to induced hypothyroidism, since TH deprivation increases muscle function (righting ability) and reduces plasma CK activity in dystrophic chickens [48]. In contrast, in regenerating mdx muscle, hyperthyroidism induces an increase of necrosis and an early differentiation of myogenic precursor cells that impairs the growth and formation of new myotubes [49,50]. It is not known whether this is the result of a decreased proliferation rate of the myogenic precursor cells or an increased and premature fusion of the cells. Taken together, these data are consistent with the role of intracellular TH signaling during myogenesis and indicate that changes in TH levels deeply impact the dystrophic phenotype.
Common Pathways and Shared Molecular Mechanisms between TH and Muscle Wasting
Although the etiologies of muscle wasting can be very different, a common feature is the activation of multiple proteolytic mechanisms, including lysosomal degradation, apoptosis, and ubiquitin-proteasome protein breakdown. In pathological conditions characterized by a catabolic state, such as cancer, heart failure, and sepsis, the increased energy requirements are satisfied by increased protein degradation, with consequent progressive muscle wasting. In particular, the ubiquitin-proteasome system, the autophagy-lysosome system, and Akt/FoxO signaling pathways have been intensively studied and implicated in these conditions. Under physiological conditions, TH stimulates both protein synthesis and degradation, whereas supraphysiological TH levels shift this balance towards increased protein catabolism. Interestingly, many connections between TH action and proteolytic pathways have been identified (Figure 2). The ubiquitin-proteasome system is one of the most important proteolytic systems controlling protein turnover in muscle.
In atrophic muscle, an increased expression of ubiquitin-conjugating enzymes (E2), ubiquitin-protein ligases (E3), and proteasome subunits has been observed. Atrogin-1 and MuRF1 are two muscle-specific ubiquitin ligases whose expression is strongly increased in several catabolic muscle conditions; they are considered master genes of muscle atrophy. Tawa et al. demonstrated a clear effect of TH in activating the proteasome-dependent proteolytic pathway during atrophy [51]. They showed that, in rats, hypothyroidism leads to a decrease in proteasome-dependent muscle protein degradation [51]. O'Neal et al. later demonstrated that T3 administration in rats upregulates the expression of atrogin-1 and MuRF1 in skeletal muscle [52]. It is not known whether the increase in protein degradation and the expression of atrogin-1 and MuRF1 are due to a direct or indirect effect of T3 [53,54]. However, overexpression of the mitochondrial TH receptor p43 in skeletal muscle induces atrophy and a strong increase in Atrogin-1 and MuRF1 expression [45]. Accumulating evidence has demonstrated that TH also induces the autophagy-lysosomal system in different tissues, such as liver and muscle. In 1978, De Martino and Goldberg were the first to associate the effects of TH on protein degradation with lysosomal enzyme activities in rat liver and skeletal muscle [55]. They observed that TH regulates protein degradation by increasing lysosomal enzyme activity, i.e., cathepsin B and D. In 2009, O'Neal et al. showed that this increase was responsible for the muscle wasting observed in hyperthyroidism [52]. Furthermore, TH increases autophagy in skeletal muscle by inducing the expression of key autophagy genes, including microtubule-associated protein light chain 3 (LC3), sequestosome 1 (p62), Unc-51-like kinase 1 (Ulk1), and FoxO1/3a [28]. Indeed, the expression of a dominant-negative TRα mutant in skeletal muscle reduced autophagy and mitochondrial turnover and altered the muscle fiber phenotype [56]. It has been shown that the simultaneous activation and coordinate regulation of the lysosomal and proteasomal pathways for protein degradation are mediated by Forkhead box O (FoxO) [57]. Sandri et al. were the first to implicate FoxO in the control of muscle size during atrophy [58,59]. Under physiological conditions, the FoxO transcription factors are negatively regulated by the PI3K-AKT signaling pathway, which promotes protein synthesis. In muscle atrophy, the decreased activity of the PI3K/AKT pathway leads to the activation of FoxO.
In particular, a specific transcription factor, FoxO3, induces transcription of Atrogin-1 and a dramatic decrease in fiber size in different models of atrophy. Constitutively active FoxO3 induces the expression of atrogin-1 and the aberrant atrophy of myotubes and muscle fibers [58]. Conversely, when FoxO3 activity is blocked by a dominant-negative construct in myotubes or by RNAi in mouse muscles in vivo, atrogin-1 induction during starvation or glucocorticoid administration is hampered. We also demonstrated that FoxO3 is transcriptionally induced by TH; but, as mentioned before, FoxO3 also indirectly sustains T3 concentration by inducing D2 [14]. Overall, these data suggest an interplay between muscle TH status and the development of muscle atrophy.
Conclusions
Overall, TH has a large influence on muscle physiology, and it is clear that TH acts on different molecular pathways in a spatially and temporally regulated manner. The evidence that both hypothyroidism and hyperthyroidism alter muscle physiology, as well as recent data showing the relevance of intracellular control of TH action, implies that muscle TH concentration should be maintained in a narrow but dynamically regulated range. TH serves as a crucial regulator of the atrophic process by activating the canonical atrophic molecular pathways and controlling fiber regeneration. The local control of muscle TH concentration through deiodinase action makes it reasonable to exploit the modulation of deiodinases as a possible therapeutic approach to treat muscle-wasting disorders. Furthermore, the development of specific TH agonists represents a further scenario of intervention to modulate TH action at specific sites. In the future, the development of selective deiodinase modulators will allow the modification of the intracellular concentration of TH, avoiding systemic alteration of TH and thus the related side effects. One challenging aspect of the development of these modulators is the evidence that muscle fibers and cells may have different TH requirements during different phases of the same process. To this aim, it will be important to have very manageable molecules (i.e., with short half-lives and low IC50).
Conflicts of Interest: The authors declare no conflict of interest.
2021-10-28T15:20:56.016Z
2021-10-25T00:00:00.000
{ "year": 2021, "sha1": "893c26ae00ad7ae0c66991dfb5ce7752d9b9aaf1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2218-1989/11/11/730/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e7b0ee67dcd3f6f43a045a7b53285ea326a05f75", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
115600121
pes2o/s2orc
v3-fos-license
Distributed energy management for community microgrids considering phase balancing and peak shaving
Abstract: In this study, a distributed energy management for community microgrids considering phase balancing and peak shaving is proposed. In each iteration, the house energy management system (HEMS) installed in each house minimises its electricity costs and the costs associated with the discomfort of customers due to deviations in indoor temperature from customers' set points. At the community level, the microgrid central controller (MCC) schedules the distributed energy resources (DERs) and energy storage based on the received load profiles from customers and the forecast energy price at the point of common coupling. The MCC updates the energy price for each phase based on the amount of unbalanced power between generation and consumption. The updated energy price and unbalanced power for each phase are distributed to the HEMSs on the corresponding phases. When the optimisation converges, the unbalanced power of each phase is close to zero. Meanwhile, the schedules of DERs and energy storage systems and the energy consumption of each house are determined by the MCC and the HEMSs, separately. In particular, phase balancing and peak shaving are considered in the proposed distributed energy management model. The effectiveness of the proposed distributed energy management has been demonstrated by case studies.
Nomenclature
The symbols used in this paper are defined as follows. A symbol with (k) at the upper right position stands for its value at the kth iteration. A bold symbol stands for its corresponding vector.
Introduction
A microgrid can be defined as a low-voltage power system comprising various distributed energy resources (DERs) collocated with loads. It can be operated either grid-connected or islanded from the utility grid [1]. When grid-connected, a microgrid interacts with the utility distribution system through the point of common coupling (PCC). Power can be imported from or exported to the utility system under an agreement. In addition, a microgrid can provide various ancillary services, such as frequency regulation and voltage support, in response to the request of the utility [2,3]. Once the utility distribution network is faulted, a microgrid automatically transforms from grid-connected mode into islanded mode and continues to serve its islanded portion without any interruption. By integrating renewable generation, energy storage devices, flexible loads, advanced control and information and communication technology, a microgrid provides a new scheme of electricity supply with high reliability and low costs and emissions [4]. These benefits of microgrids have prompted a growing amount of research from both academia and industry [5]. In general, a microgrid central controller (MCC) performs the energy management of a microgrid in both modes.
The MCC determines the output power of controllable distributed generators (DGs), the charging/discharging power of energy storage devices, and the power imported from/exported to the utility system by solving an optimisation problem. The optimisation problem is usually formulated to minimise the system operating cost while satisfying various constraints related to the characteristics of components and/or the reliability of the electricity supply [6]. Various models for microgrid energy management in islanded mode [7][8][9] and grid-connected mode [10][11][12][13][14][15][16] have been proposed in the existing literature. In particular, Bracco et al. [10], Li and Xu [11] and Martinez Cesena et al. [12] proposed deterministic programming models that ignore the uncertainty of renewable generation, while stochastic and robust programming models that consider the uncertainty of renewable generation have been presented in [13][14][15][16]. In general, the vast majority of the microgrid energy management systems proposed in the existing literature are based on solving a centralised optimisation problem. Although these methods are straightforward and easy to implement, two problems are associated with them. First, energy consumption by consumers is mostly considered as fixed or interruptible loads in the form of direct load control (DLC). However, customers are usually very careful about allowing a utility or an MCC to directly control their appliances, such as water heaters and heating, ventilation and air-conditioning (HVAC) systems, because of various issues such as psychological safety and privacy protection. Second, the centralised optimisation model is subject to 'the curse of dimensionality': as the number of customers increases, the solution efficiency is reduced rapidly. To overcome these issues, distributed optimisation models based on dual decomposition are proposed in [17][18][19][20]. Since the variables of the DERs and of each house are only coupled by the power balance constraint, the centralised optimisation model can be decoupled into a subproblem for the DERs and a subproblem for each house. The home energy management system (HEMS) in each house solves the corresponding subproblem to determine the schedule of house appliances, while the MCC solves the DER subproblem to determine the schedule of DGs and energy storage systems. During each iteration, the Lagrangian multipliers are adjusted based on the imbalance of power supply and consumption. Then, the schedules of DERs and house appliances are updated by the MCC and each HEMS, separately. The iteration repeats until power supply and consumption are balanced, i.e. the power balance constraint is satisfied. By this method, the MCC no longer needs to control the appliances of customers directly. Similar to dual decomposition, other distributed optimisation algorithms, such as the predictor corrector proximal multiplier [21], have been applied to solve the microgrid energy management problem in a distributed manner. To ensure the convergence of the dual decomposition algorithm, convexity and finiteness are assumed for all subproblems. In practice, however, the subproblems are usually non-convex owing to binary constraints (on/off of generators and HVAC systems), leading to non-convergence of the optimisation. In this paper, a new distributed energy management for community microgrids is developed.
The single-phase model in [22] is extended into a practical three-phase unbalanced model to enable the adjustment of loads and generation in specific phases. The alternating direction method of multipliers (ADMM) algorithm is used to decompose the centralised optimisation into parallel subproblems of the DERs and the HEMSs [23]. The main contributions of this paper in addition to [22] are as follows:
• Reducing the phase unbalance of the microgrid at the PCC to avoid mal-operation of zero-sequence protections and potentially provide a phase balancing service. In particular, phase-wise price signals are proposed to enable the adjustment of loads and generation in specific phases. Note that the phase balancing function is an innovative contribution of this paper.
• Adding the function of peak shaving for the community microgrid controller to reduce the demand charge. This function can also be used for emergency load shedding by setting the PCC power limits in specified intervals to a certain percentage of the normal PCC power.
For the rest of this paper, community microgrids and the thermal dynamic model of buildings are introduced in Section 2. In Section 3, centralised optimisation-based microgrid energy management is formulated. On the basis of that, distributed energy management for microgrids is developed in Section 4. The proposed distributed energy management is demonstrated on a community microgrid in Section 5. This paper is finally concluded in Section 6.
Community microgrid
As an alternative way to increase local energy supply independence and resilience, a community microgrid is a special kind of microgrid normally serving a residential community. Various DGs and energy storage systems are installed on-site to ensure a continuous and reliable electricity supply to customers, even in the face of widespread blackouts. For each house, a HEMS is installed; it schedules all appliances in the house based on user settings, such as the desired indoor temperature, and communicates with the MCC for price signals or control orders. In a centralised optimisation-based microgrid energy management system, the user settings, the consumption schedule of house appliances, as well as the detailed house thermal dynamic model are all forwarded to the MCC by the HEMSs. On the basis of this received information, as well as the rate/price from the utility, the MCC determines the optimal schedule of DGs, energy storage and home appliances by solving a centralised optimisation that minimises the total cost of operating the community microgrid while preserving customer comfort. The optimal schedules of home appliances are sent to the corresponding HEMSs, which will control the HVAC and other appliances accordingly. An example of a community microgrid is shown in Fig. 1. Unlike in centralised optimisation-based microgrid energy management, the HEMSs withhold user settings, house thermal parameters and other load information from the MCC in the proposed distributed energy management system. Specifically, the HEMS schedules the appliances in the house based on the price signal received from the MCC to minimise the electricity bill while ensuring user comfort. Meanwhile, the MCC determines the schedule of DERs at the microgrid level to optimise certain objectives. The price signal is iteratively updated based on the power unbalance between generation and load. When this iterative process converges, the HEMSs and MCC reach a consensus on the price signal and energy consumption of each house.
HVAC system
An HVAC system is usually controlled by a thermostat. In the case of cooling, for instance, the HVAC is switched on when the indoor temperature reaches the upper limit of the allowed indoor temperature range, then continues running until the lower limit is reached. Thermostats based on this automatic temperature control scheme are widely used. In a departure from the automatic temperature control scheme described, the HEMS intelligently optimises the consumption of the HVAC as well as other house appliances to reduce the electricity cost and discomfort of customers. As is known, the change of indoor temperature is a gradual process because of the thermal inertia of the house. A house can be regarded as a thermal storage facility, which gives the HEMS extra flexibility in scheduling the HVAC system. Specifically, the HVAC system can be switched on to precool/preheat the house during times when electricity prices are low or renewable generation is high. Thus, the house can ride through peak-price periods without high electricity consumption and still maintain the indoor temperature within an allowable range. This method is expected to achieve significant electricity cost savings compared with autonomous temperature control [15].
Building thermal dynamic model
Intelligent control of the HVAC system requires modelling the house as a thermostatically controlled load, which involves a number of factors, including the thermal capacitance of indoor air, inner walls and the house envelope; the thermal resistance between indoor air, inner walls, the house envelope and ambient air; the effective window area; etc. There has been extensive research on the accurate modelling of the thermal dynamics of houses/buildings. In this work, a two-layer thermal insulation model is utilised to model each house. On the basis of the rules of heat transfer, the thermal dynamic characteristic of a house can be modelled as first-order differential equations in continuous time. These differential equations can then be transformed into an equivalent discrete-time model by using Euler discretisation (i.e. zero-order hold) with a constant sampling time [24]. In this paper, the thermal dynamic model of a house is described by the following state-space model:
x_φh(t+1) = A_φh x_φh(t) + B_φh u_φh(t), (1)
where x_φh is the state vector, which indicates the temperatures at the different layers of the house, and u_φh is the input vector, which includes the ambient temperature, solar irradiance and the heat transferred by the HVAC system. Specifically, u_φht^H = 1 corresponds to the heating mode of the HVAC system and u_φht^C = 1 corresponds to the HVAC cooling mode. The coefficients of the matrices A_φh and B_φh of a house h on phase φ can be determined from the thermal parameters of the house and the time step size of the optimisation horizon. More details on the mathematical modelling and parameter estimation of the house can be found in [24]. The indoor temperature of house h is limited to a comfortable range, as in the equation below:
T_φh^min ≤ T_φht ≤ T_φh^max, ∀φ, ∀h, ∀t. (2)
Centralised community microgrid energy management
In community microgrids, on-site DERs and the utility distribution feeder together supply electricity to all houses. The on-site DERs are divided into two categories: dispatchable units and non-dispatchable units. The dispatchable units include various DGs (e.g. diesel generators, combined heat and power, fuel cells etc.) and energy storage systems (e.g. batteries and pumped-hydro), which can be dispatched by the MCC. On the contrary, renewable generation resources (e.g.
photovoltaic systems) are non-dispatchable units whose power output depends on weather conditions. The community microgrid imports/exports power from/to the utility through the PCC. All DGs, renewable generation resources and energy storage systems are assumed to be three-phase balanced. Each house is associated with an HVAC load, and the other loads in the house are aggregated as an interruptible load, a certain percentage of which can be shed. All house loads are assumed to be single-phase, and the houses are distributed evenly across the three phases. Under these assumptions, a centralised optimisation problem for the energy management of community microgrids is formulated. The objective is usually to minimise the operation and maintenance cost as well as the discomfort of customers due to indoor temperature deviations, as in (3). Specifically, the piecewise linear operation cost and start-up cost of the DGs are represented in lines 1 and 2; the purchasing/selling cost/benefit of the microgrid at the PCC is in line 3; the degradation cost of energy storage systems is in line 4; and finally, the discomfort and inconvenience of customers caused by indoor temperature deviations and voluntary load shedding for each house are described in line 5. The reliable and efficient operation of a community microgrid is subject to various limits and constraints from the DERs, the customers, as well as the MCC, for example:
u_bt^C + u_bt^D ≤ 1, ∀b, ∀t,
SOC_bt^min ≤ SOC_bt ≤ SOC_bt^max, ∀b, ∀t,
P_btφ^C = P_bt^C / N_Φ, ∀b, ∀t, ∀φ.
The power output of a DG is divided into several blocks in (4). Each block is limited by a maximum value, as in constraint (5). The operating cost in each block is assumed linear; thus, the operating cost of the DG is piecewise linear. The minimum and maximum power outputs of the DGs are limited by constraint (6). In particular, DG outputs are three-phase balanced, which is ensured by (7). For energy storage, the charging and discharging powers of an energy storage system are constrained by (8) and (9). The charging and discharging states of energy storage are mutually exclusive, a condition enforced by (10). The state of charge (SOC) of an energy storage system at the end of the current time period t is determined by the SOC at the previous time period t − 1, plus/minus the energy charged/discharged during the current time period. This coupling relationship is described in (11). To avoid overcharging and undercharging of an energy storage system, the SOC is limited as in (12). The loss during the charging and discharging process is described by the parameters η_b^C and η_b^D. Similar to the DGs, energy storage devices are phase balanced, which is ensured by the constraints (13) and (14). For DGs and energy storage systems with unbalanced output, we could simply relax (7), (13) and (14) to make sure that the sum of the generated/consumed power over all phases equals the total power for each DG and energy storage system. For each house, the total load of house h on phase φ at time t equals the HVAC load plus the aggregated remaining loads, as in constraint (15). The heating and cooling states of an HVAC system are mutually exclusive, as is guaranteed by constraint (16). The voluntary load shedding of house h during time period t is limited to a certain percentage of the aggregated load specified by the customer, as shown in constraint (17). Note that the thermal dynamic characteristic of a house (1) and the indoor temperature requirements (2) should be included as constraints as well.
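To make the discrete-time thermal model (1)-(2) concrete, the following Python sketch simulates a toy two-state house (indoor air and wall temperature) under simple thermostat control. It is only an illustration: the matrices A and B and all numerical values are invented for the sketch and are not the estimated parameters from [24].

```python
import numpy as np

dt = 0.25  # 15-min step, in hours
# Toy state-transition and input matrices for x = [air temp, wall temp];
# inputs u = [ambient temp, solar irradiance, HVAC cooling state].
A = np.array([[0.90, 0.08],
              [0.02, 0.97]])
B = np.array([[0.02, 0.001, -2.0],   # cooling removes heat from the air node
              [0.01, 0.002,  0.0]])

x = np.array([23.0, 24.0])           # initial temperatures, degC
T_amb, solar = 32.0, 600.0           # constant exogenous inputs for the sketch

for t in range(8):                   # simulate 2 hours
    u_hvac = 1.0 if x[0] > 25.0 else 0.0   # thermostat-style on/off cooling
    u = np.array([T_amb, solar, u_hvac])
    x = A @ x + B @ u                # discrete-time update, cf. (1)
    print(f"t={t*dt:4.2f} h  indoor={x[0]:5.2f} degC  HVAC={int(u_hvac)}")
```

The HEMS replaces the thermostat rule with an optimised schedule of u_hvac, which is what allows precooling during low-price periods while still respecting the comfort band (2).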
Besides the constraints associated with each component or house, there are several system-level constraints, such as generation and load balance, peak load limits and maximum phase unbalance at the PCC. The generation and load balance on each phase is guaranteed by (18). In particular, wind and photovoltaic (PV) systems can be balanced three-phase or single-phase sources. The peak load seen by the utility at the PCC is limited, as in (19), which could be requested by the utility or required by the MCC to reduce the demand charge. Note that the peak shaving in this paper is realised by real-time pricing; other methods such as demand charge [25] will be investigated in future work. To avoid mal-operation of zero-sequence protections, the maximum phase unbalance at the PCC is constrained by (20). Note that phase coupling has been ignored for two reasons. First, the feeders in community microgrids are mostly single-phase laterals. Second, these feeders are usually short because of the limited capacity and low-voltage level of microgrids. It should be noted that the centralised optimisation is a mixed-integer linear programming (MILP) problem, except for the two logic terms in the objective function (3), which can easily be formulated in linear or MIL form by introducing auxiliary variables. Specifically, by introducing a binary variable, the start-up cost of a DG (in line 2) can be represented in MIL form [26]. As for the absolute value of the indoor temperature deviation (in line 5), it can be substituted by an auxiliary variable X_φht with constraints (21)-(23):
X_φht ≥ T_φht − T_φh^set, ∀φ, ∀h, ∀t, (21)
X_φht ≥ T_φh^set − T_φht, ∀φ, ∀h, ∀t, (22)
X_φht ≥ 0, ∀φ, ∀h, ∀t. (23)
In general, this centralised optimisation problem can be solved by various commercial MILP solvers. More complicated phase-coupled internal models of DGs and energy storage systems need to be considered if their outputs are unbalanced or if reactive power is considered. In this situation, piecewise linearisation techniques can be used to formulate these non-linear models as special-ordered-sets-of-type-2 constraints [27]. As a result, the dimension of the problem, especially the number of binary variables, will increase significantly. Nevertheless, the optimisation model is still MILP.
Distributed community microgrid energy management
The centralised community microgrid energy management presented in Section 3 is straightforward and easy to solve. However, this model is subject to dimensionality and privacy issues in practical implementation, since the HVAC systems and voluntary load shedding are directly controlled by the MCC. First of all, the dimension of the optimisation problem rises rapidly with the number of customers, which compromises the solution efficiency; as a result, more computing resources are required by the MCC. Besides, the MCC requires access to the thermal dynamic models and detailed load information of all houses, whereas customers generally prefer to conceal all information behind the meters and control their home appliances by themselves. Therefore, our objective is to break down the centralised optimisation and obtain a distributed, scalable, privacy-preserving microgrid energy management. In this section, we propose a distributed algorithm to solve the centralised optimisation model (1)-(23) using ADMM [23].
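Before the decomposition is formalised, the following Python sketch previews the price-coordination skeleton. The two subproblem solvers are stubs with invented linear price responses (both are MILPs in the paper), the sign convention for the residual is an assumption, and ρ here is scaled to the toy slopes; the paper itself uses ρ = 0.1 and the stopping criterion R_φt ≤ 0.5 reported in the case studies.

```python
import numpy as np

rho, R_max = 0.005, 0.5        # toy penalty factor and stopping threshold
n_phases, n_steps = 3, 96      # 24 h at 15-min resolution
lam = np.full((n_phases, n_steps), 0.10)   # initial price, $/kWh

def solve_mcc(lam):            # stub: generation rises with price, kW
    return 100.0 * lam

def solve_hems(lam):           # stub: consumption falls with price, kW
    return 40.0 - 100.0 * lam

for k in range(200):
    gen, load = solve_mcc(lam), solve_hems(lam)
    R = load - gen             # unbalanced power per phase and interval
    if np.abs(R).max() <= R_max:   # consensus: generation matches load
        print(f"consensus after {k} iterations, price {lam.max():.2f} $/kWh")
        break
    lam = lam + rho * R        # excess demand raises the phase price
```

With these stubs the loop converges immediately to a price at which supply and demand balance on every phase; in the real algorithm each update requires solving the MCC and HEMS MILPs.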
The centralised optimisation model has a separable structure, since the power balance constraint (18) is the only complicating constraint involving variables from both the microgrid level and the house level. Therefore, we use ADMM to decompose the centralised optimisation model into optimisation subproblems at the microgrid and house levels. The subproblems are solved by the MCC and the HEMSs separately, and the solutions are coordinated by an iterative process. Initially, at k ← 0, the HEMSs schedule their appliances randomly and communicate the schedules to the MCC. In the meantime, the MCC sets initial price curves for each phase and schedules the microgrid-level resources randomly. At the beginning of each iteration, the MCC updates the unbalanced power of each phase according to (24) and communicates the primal residual R_φt^(k) and price signal λ_φt^(k) to the HEMSs connected to the corresponding phase. Then the following process occurs: (i) each HEMS solves the HEMS subproblem (25), subject to the house-level constraints; (ii) the MCC solves the microgrid-level subproblem, subject to (4)-(14) and (19)-(20). At the end of each iteration, the HEMSs communicate their updated schedules P_φht^(k) and P_φht^LS,(k) to the MCC; the MCC then updates the primal residual R_φt^(k+1) and the price signal λ_φt^(k+1) according to (24) and (27). This iterative process is repeated until convergence occurs. In this distributed optimisation scheme, prices are iteratively negotiated between customers and generators (including DGs, energy storage and the utility). Therefore, this approach has the advantage of being self-contained, in the sense that it does not need other price constraints, because price procurement is based on an iterative negotiation process. A complete description of the proposed distributed energy management system can be found in Algorithm 1 (see Fig. 2). Compared with the traditional dual decomposition algorithm, ADMM adds an augmented Lagrangian term with a penalty factor ρ > 0. This augmented Lagrangian term is introduced in part to bring robustness to the dual decomposition algorithm, and particularly to yield convergence when the assumptions of strict convexity and finiteness of (3) are no longer valid. In other words, ADMM improves the classic dual decomposition algorithm with the superior convergence properties of augmented Lagrangian methods. Note that, like other non-convex optimisation algorithms, ADMM cannot guarantee a global optimum. The convergence of ADMM for non-convex problems has not been proved mathematically. In practice, the solution might oscillate around an optimum because of the non-convexity introduced by the integer variables. In this situation, a suboptimal solution can still be obtained for practical use by increasing the penalty factor ρ or relaxing the convergence criterion R_max. Nevertheless, it is often the case that ADMM converges to modest accuracy within a few tens of iterations [23]. In this distributed energy management system, the power unbalances and price signals of each phase are broadcast to the corresponding HEMSs through the advanced metering infrastructure. For each house, the HEMS subproblem (25) is solved; then the total consumption P_φht^(k) and voluntary load shedding P_φht^LS,(k) are communicated to the MCC. In this way, customers have absolute control over their HVAC systems and other appliances behind the meter. In addition, the schedules of individual appliances and customer preferences are concealed from the MCC. Therefore, the privacy of customers is preserved. A minimal sketch of a HEMS-style subproblem is given below.
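The Python sketch below conveys the flavour of a HEMS subproblem: minimise the price-weighted HVAC energy plus a linearised discomfort term over a short horizon. It is a toy instance, not the paper's model (25): the one-node thermal dynamics, every numerical value, and the omission of the augmented-Lagrangian penalty and load-shedding terms are all simplifying assumptions.

```python
import pulp

T = 4                             # a few 15-min intervals for the sketch
lam = [0.10, 0.12, 0.30, 0.15]    # price signal from the MCC, $/kWh (assumed)
P_hvac, dt = 5.0, 0.25            # 5 kW HVAC unit, 15-min step
T_set, w = 23.0, 0.05             # set point (degC) and discomfort weight, $/degC

m = pulp.LpProblem("hems_sketch", pulp.LpMinimize)
on = pulp.LpVariable.dicts("on", range(T), cat="Binary")      # HVAC on/off
Tin = pulp.LpVariable.dicts("Tin", range(T + 1), 21.0, 25.0)  # comfort band
X = pulp.LpVariable.dicts("X", range(T), lowBound=0)          # |Tin - T_set|

# Objective: energy cost plus linearised temperature-deviation penalty.
m += pulp.lpSum(lam[t] * P_hvac * dt * on[t] + w * X[t] for t in range(T))
m += Tin[0] == 24.0                                # initial indoor temperature
for t in range(T):
    # Toy one-node thermal balance: ambient heat gain vs. HVAC cooling.
    m += Tin[t + 1] == Tin[t] + 0.5 - 1.2 * on[t]
    m += X[t] >= Tin[t + 1] - T_set                # abs-value linearisation,
    m += X[t] >= T_set - Tin[t + 1]                # cf. constraints (21)-(23)

m.solve(pulp.PULP_CBC_CMD(msg=False))
print([int(pulp.value(on[t])) for t in range(T)], pulp.value(m.objective))
```

The solver schedules the HVAC to run in cheap intervals just often enough to stay inside the comfort band, which is exactly the precooling behaviour reported in the case studies.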
Fig. 3 illustrates the information exchange between the MCC and the HEMSs.
Case studies
The proposed distributed energy management was tested using an Oak Ridge National Laboratory microgrid test system, as shown in Fig. 4. All DGs, PVs and batteries are assumed to have three-phase-balanced output/input. Their parameters can be found in [16]. The community microgrid supplies electricity to 20 houses. A 5 kW HVAC system is installed in each house. The coefficients of performance of all HVAC systems are set as η_h = 3. For simplicity, the desired room temperatures of all houses during the whole scheduling horizon are set at 23°C. To avoid excessively rapid cycling of the HVAC systems, the indoor temperature is allowed to deviate ±2°C from the set point. The temperature deviations lead to discomfort of customers, which is penalised at $0.05/°C. The maximum voluntary load shedding is limited to 50% of the non-HVAC load in each house, with the price of lost load set at twice the PCC price. The thermal dynamic models of the houses are taken from [24] (Table 7.1 in [24]); small random errors are introduced to represent the diversity of the houses. The ambient temperature and solar irradiance are the measured data of the Oak Ridge, Tennessee, area on 1 August 2015 [28], which is a typical summer day in the southern states of the USA. The forecast non-HVAC load of house #1 and the electricity price at the PCC are shown in Fig. 5. All houses are assumed to have the same non-HVAC loads. The penalty parameter ρ is set as 0.1. The initial price is set as 0.1 $/kWh for all time intervals. The simulation is conducted over a day (24 h) with 15 min time resolution. Note that the forecasts of load and renewable resources are subject to errors, which usually increase rapidly as the forecast horizon increases. To handle the uncertainty of forecasts, model predictive control (MPC) has been used in power system balancing models considering uncertain forecasts [29]. For example, if the load and renewable forecasts are updated every hour, the optimisation should be run every hour, but only the schedules for the first hour will be implemented in the system; the rest will be discarded. The parameters of MPC in this case are summarised in Table 1. In this work, the forecast errors for load and renewable resources have been neglected in the simulation, since the MPC approach does not change the proposed optimisation model and solution algorithm but repeatedly runs the optimisation with the most recent forecast data. All problems are solved using the commercial MILP solver CPLEX 12.6. With a pre-specified duality gap of 0.5%, the solution time of the centralised optimisation is about 15 min on a 2.66 GHz Windows-based personal computer with 4 GB of random access memory. For the distributed optimisation, the solution time of each subproblem is <10 s using the same computer. Since all subproblems are solved in parallel, the total solution time of the distributed optimisation is around 3 min.
Comparing costs of different cases
The total operating costs of the community microgrid in various cases are compared in this section. The test cases include autonomous thermostatic control (without considering building thermal dynamics in the optimisation), the base case (considering building thermal dynamics in the optimisation), the base case with phase balancing constraints, the base case with peak shaving constraints, and finally, the base case with both phase balancing and peak shaving constraints.
In the case of autonomous thermostat control, (1) is solved first; then the thermostats determine the on/off state of the HVAC system based on the internal temperature-controlled relay circuit. Given the HVAC states, we solve the centralised optimisation and obtain the total operating cost as in (3). The costs are compared in Table 2. First, compared with autonomous thermostat control, the total operating cost of the base case is reduced by 26.11% by integrating building thermal dynamics in the optimisation. The cost savings are significant for community microgrids, in which HVAC systems dominate the load. Second, compared with the base case, adding the phase balancing and/or peak shaving constraints has very little effect on the total operating cost. Note that the base case can be considered as our previous work in [22]; in other words, the functions of phase balancing and peak shaving are realised in this paper with very little additional cost compared with [22]. This also indicates that HVAC energy consumption can easily be shifted for phase balancing and peak shaving without obvious effects on the system operating cost and customer discomfort. Third, as mentioned earlier, ADMM converges to a local optimum. However, comparing the costs of the centralised optimisation and of the distributed optimisation, the proposed distributed algorithm has almost the same performance as the centralised optimisation; i.e. the solution is very close to the global optimum. The calculated indoor temperature and HVAC status of house #1, in the case of autonomous thermostat control and in the base case, are compared in Fig. 6. Comparing Fig. 6a with Figs. 6b and c clearly shows that, in the base case (considering building thermal dynamics in the optimisation), the HVAC cooling is switched on around 9 AM, when the indoor temperature is exactly 23°C, i.e. the set point. Thus, the house is precooled during low-price periods to avoid or reduce the consumption of the HVAC during peak-price periods (1-2 PM and 7-8 PM), so that the electricity bill of the customer is reduced. Since the PCC can be seen as the marginal unit in this case, the price for house #1 converges to the PCC price, as shown in Fig. 6c.
Distributed energy management with constraints of phase balancing and/or peak shaving
The maximum phase unbalances and the total power at the PCC in the different cases, solved by the proposed distributed energy management method, are compared in Fig. 7. For the phase balancing constraint, the maximum difference in power between any two phases for all time intervals is limited to <20 kW. For the peak shaving constraint, the peak demand at the PCC is limited to <50 kW. As can be seen in Fig. 7a, the maximum phase unbalance is reduced to <20 kW for the cases with the phase balancing constraint. Similarly, the total power at the PCC is reduced to <50 kW for the cases with the peak shaving constraint, as can be seen in Fig. 7b. When both constraints are considered, the maximum phase unbalance and the total power at the PCC are reduced to the corresponding limits simultaneously. It should be noted that unwanted demand spikes occur during the periods of 3-4 AM and 10-11 PM. The reason is that the energy price at the PCC is very low during these hours, which causes the shutdown of all DGs. It should also be noted that the peak shaving function can be used for emergency load shedding.
For example, if the community microgrid is requested by the utility to reduce its imported power at the PCC by 50% in the next 15 min, the MCC can simply reset the PCC power limit in the next time interval to 50% of the power imported in a normal situation.
Convergence of the proposed distributed energy management
For the case with both phase balancing and peak shaving constraints, the converged price signals for all three phases and the utility rate at the PCC are compared in Fig. 8. As can be seen, the price curves for all three phases generally follow the utility rate at the PCC, except for three periods (9-10 AM, 3-4 PM and 9-10 PM). In the periods 9-10 AM and 3-4 PM, the prices in phase C are extremely high. This is because the load in phase C is much higher than in phases A and B (see Fig. 7a). Thus, high price signals are generated to reduce the energy consumption of the houses connected to phase C during these periods. Like congestion pricing in the transmission-level wholesale market, the synchronised consumption on phase C causes an unbalance issue in this case; as a result, a penalty price is generated to solve this issue. Similarly, during the period 9-10 PM, the price signals for all three phases are higher than the utility rate at the PCC. This is because the total load at the PCC exceeds the peak demand limit (see Fig. 7b). As a result, high price signals are generated to reduce the energy consumption of the houses on all three phases during this period. For the case with both phase balancing and peak shaving constraints, the load curtailments on all three phases over the scheduling horizon are shown in Fig. 9. As can be seen, during the periods of 9-10 AM and 3-4 PM, only loads on phase C are curtailed, for phase balancing. However, during the period 9-10 PM, the loads on all three phases are curtailed, for peak shaving. Although there are several convergence criteria for the ADMM algorithm (e.g. primal residual convergence, objective convergence and dual variable convergence), a reasonable convergence criterion for the proposed distributed energy management for community microgrids is that the primal residual R_φt must be very small, i.e. the total generation equals the total consumption for each phase. The satisfaction of this criterion means that the MCC and the customers (both DERs and consumers) reach an agreement on the price and the amount of electricity generation/consumption. The primal residual can be calculated according to (24), and the stopping criterion used in the simulation is R_φt ≤ 0.5. The primal residual of the three phases as a function of the iteration number is shown in Fig. 10. For each iteration, the primal residuals of all time intervals are included and shown in chronological order. As can be seen, the proposed distributed optimisation converges after eight iterations. ADMM can be very slow to converge at high accuracy; however, it usually produces acceptable results for practical use within a few tens of iterations.
Conclusions
In this paper, a distributed energy management model for community microgrids considering phase balancing and peak shaving is proposed. Given the price signals and the unbalanced power between generation and demand received from the MCC, the HEMS in each house optimises its electricity cost and the customer's comfort, considering the thermal dynamic model of the house. Then, the MCC optimises the output of DERs and energy storage at the microgrid level, updates the price signals and unbalanced power and then communicates them to the houses
2019-04-16T13:29:25.254Z
2019-01-15T00:00:00.000
{ "year": 2019, "sha1": "5aee3170727bcfae414fedb75d49ac18cf3878c3", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1049/iet-gtd.2018.5881", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "a1737d98d0a5f4af1fbebfa1656eae6d99557ebc", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
2339578
pes2o/s2orc
v3-fos-license
The Sodium Glucose Cotransporter SGLT1 Is an Extremely Efficient Facilitator of Passive Water Transport*
The small intestine is void of aquaporins adept at facilitating vectorial water transport, and yet it reabsorbs ∼8 liters of fluid daily. Implications of the sodium glucose cotransporter SGLT1 in either pumping water or passively channeling water contrast with its reported water transporting capacity, which lags behind that of aquaporin-1 by 3 orders of magnitude. Here we overexpressed SGLT1 in MDCK cell monolayers and reconstituted the purified transporter into proteoliposomes. We observed the rate of osmotic proteoliposome deflation by light scattering. Fluorescence correlation spectroscopy served to assess (i) SGLT1 abundance in both vesicles and plasma membranes and (ii) flow-mediated dilution of an aqueous dye adjacent to the cell monolayer. Calculation of the unitary water channel permeability, p_f, yielded similar values for the cell and proteoliposome experiments. Neither the absence of glucose or Na⁺, nor the lack of membrane voltage in vesicles, nor the directionality of water flow grossly altered p_f. Such weak dependence on protein conformation indicates that a water-impermeable occluded state (glucose and Na⁺ in their binding pockets) lasts for only a minor fraction of the transport cycle or, alternatively, that occlusion of the substrate does not render the transporter water-impermeable, as was suggested by computational studies of the bacterial homologue vSGLT. Although the similarity between the p_f values of SGLT1 and aquaporin-1 makes a transcellular pathway plausible, it renders water pumping physiologically negligible, because the passive flux would be orders of magnitude larger.
The small intestine reabsorbs ∼8 liters of fluid daily. The underlying transport mechanism has thus far remained enigmatic. Although renal vectorial water transport is mediated by aquaporins, there has been no direct evidence for a role of aquaporins in intestinal water reabsorption (1). Although aquaporins 7, 10, and 11 are expressed in the jejunum (2, 3), they preferentially adopt an intracellular location and/or appear to have rather low water permeability (4-6). Consequently, other transporters were proposed to facilitate water reabsorption. The sodium-glucose cotransporter (SGLT1) is one of the most prominent candidates. SGLT1 is highly expressed in the enterocytes on the villus of brush border membranes (7). It facilitates water transport, albeit at reportedly low rates: its published unitary water permeability (p_f) of only 4.5 × 10⁻¹⁶ cm³/s (8) is so low that it signifies a physiologically negligible contribution to passive water flux by SGLT1. Even if such an incredibly high number as 10⁶ transporters were expressed in the apical membrane of an epithelial cell, their combined permeabilities would only amount to 4.5 × 10⁻¹⁰ cm³/s. The lipid matrix of the apical membrane of MDCK cells would conduct three times that much, assuming that its water permeability (P_f) is ∼5 µm/s (9) and its area (S_ap) amounts to ∼250 µm² (10). Much higher p_f values of 4.7 × 10⁻¹⁵ cm³/s (11) and 2.7 × 10⁻¹³ cm³/s (12) were reported by molecular dynamics simulations of water passage through the homologous bacterial transporter protein from Vibrio parahaemolyticus, vSGLT.
Although the two computed p_f values appear to be very different, they do not reflect a discrepancy in the simulations themselves; rather, they mirror a difference in the methods used to extract p_f from the simulations: the first value is computed from the cumulative sum of the discrete efflux and influx permeation events per time interval (11), whereas the second was calculated from the velocity of water movement along a permeation path across the center of mass of water molecules (12). Thus, both values may be considered as upper and lower bounds for the computed microstate of the transporter, i.e. the sugar- and sodium-bound inward-facing state of vSGLT. Because the p_f values for the other conformations of the transporter may be very different, the in silico p_f may not closely match the experimentally observed values, which are essentially ensemble averages over all the states that might arise during the time course of these measurements. It is interesting, however, to note that the upper boundary of the in silico p_f compares well with that of aquaporin-1 (13), which, if shared by the ensemble average, would transform SGLT1 into the main facilitator of transcellular water movement through the intestinal epithelium. To date there is no general agreement as to whether water takes a transcellular or a paracellular route. According to transepithelial resistance and junctional morphology, the epithelium of the jejunum is classified as an intermediate epithelium (14), offering the possibility for cellular junctions to serve as the main water route. Thus, solving the debate about the p_f of SGLT1 will offer crucial insight into how the 8 liters of water are reabsorbed in the human intestine daily. Establishing the true passive water transporting capacity of SGLT1 would also shed new light on the long-nourished hypothesis of secondary active water transport. This hypothesis was born because there was no alternative explanation for the large water flux observed in response to sugar and Na⁺ uptake in the jejunum (15). The water flux size clearly required either the transepithelial osmotic gradient or the epithelial water permeability to be large, because the flux is calculated as the product of both parameters. However, the osmotic gradient cannot be larger than a few mOsm in size, because it can only be observed in the immediate epithelial vicinity (16), i.e. within the adjacent stagnant aqueous layers (unstirred layers (USLs)), where transport occurs solely by diffusion. The osmotic permeability seemed to be minute, too, because neither efficient water channel proteins nor a highly water-permeable lipid matrix were found. It was known that the apical membrane of epithelial cells offers an extremely small P_f because of its lipid composition (9). In consequence, the most plausible solution to the conundrum of SGLT1-triggered water transport seemed to be the assignment of 260 additional water molecules to every turnover of the transporter, along with two Na⁺ ions and one glucose molecule (15). Considering a turnover rate of ∼100/s, every SGLT1 molecule would accordingly pump N_P = 26,000 H₂O molecules/s. It would do so independently of the osmotic gradient, i.e., if necessary, even against the osmotic gradient (15). In the latter case, N_P is opposed by the number of water molecules N_w that may cross a unitary SGLT1 molecule in the direction of the osmotic gradient,
N_w = p_f · Δc_osm · N_A,
where Δc_osm and N_A are the transmembrane osmotic gradient and Avogadro's number, respectively.
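A quick numerical check of this relation in Python reproduces the N_w figures quoted in the next sentence; the only step added here is expressing a 5 mM gradient in mol/cm³ so that the units of p_f cancel.

```python
N_A = 6.022e23          # Avogadro's number, 1/mol
dc_osm = 5e-6           # 5 mM expressed as mol/cm^3
for label, p_f in [("in vitro (8)", 4.5e-16), ("in silico (12)", 2.7e-13)]:
    N_w = p_f * dc_osm * N_A   # passive water flux per transporter, 1/s
    print(f"{label}: N_w = {N_w:.2e} molecules/s")
# -> roughly 1.35e3 and 8.1e5 molecules/s, to be compared with the
#    proposed pumping capacity of N_P = 26,000 molecules/s.
```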
For a reasonably small Δc_osm of only 5 mM, N_w varies between 1350 and 810,000 s⁻¹, depending on whether the in vitro (8) or in silico (12) estimates of p_f are used for the calculation. Secondary active water transport may merely attain physiological importance in the former case, because N_P ≫ N_W. In the latter case, it would be completely insignificant, because N_P ≪ N_W, i.e. the passive flux would always be orders of magnitude larger than the water pumping capacity of the transporter. To determine p_f, we used SGLT1-expressing tight MDCK-C7 monolayers, which do not possess paracellular permeability (17). We assessed water flow from solute dilution in the immediate vicinity of the monolayer and simultaneously measured SGLT1 membrane abundance (18). Additional reconstitution experiments of the purified transporter into small vesicles and the extraction of p_f from the deflation kinetics of proteoliposomes (13) provided ultimate proof of the physiological role of the water channeling capability of SGLT1 and allowed insight into the water pathway within SGLT1.
Materials and Methods
Cell Culture-The human sodium-glucose cotransporter SGLT1 DNA (kindly provided by Drs. N. K. Tyagi and R. Kinne) was subcloned into a pEGFP-N3 vector (Clontech) to create an eGFP tag at the C terminus of SGLT1. Madin-Darby canine kidney (MDCK-C7) cells and the stable cell line expressing human sodium-glucose cotransporter 1 tagged with eGFP (MDCK-SGLT1) were cultured in DMEM supplemented with nonessential amino acids, 5% fetal calf serum (v/v), 20 mM HEPES, penicillin, and streptomycin (all from PAA) at 37°C in 7.5% CO₂. MDCK-SGLT1 cells were kept under G418 selection (500 µg ml⁻¹). The cells (5 × 10⁵) were seeded onto polyester permeable supports (Transwell (0.33 cm², 0.4-µm pores), Corning Life Sciences). Cell culturing on Transwells continued (for 4-5 days in general) until a tight monolayer was formed. We used the cells within 6 days after plating. All experiments were carried out in Hanks' balanced salt solution buffer at 37°C.
Fluorescence Correlation Spectroscopy (FCS)-We used FCS to estimate the SGLT1 plasma membrane abundance of MDCK cells and to detect the concentration of the reporter dye dextran-RhB in the immediate vicinity of the MDCK monolayers, as previously described (18). In brief, temporal fluctuations of the fluorescence intensity (I) were measured in the focal volume of a commercial laser scanning microscope (LSM 510) equipped with an FCS unit (Confocor3; Carl Zeiss, Jena, Germany). The corresponding autocorrelation function G(τ) allowed us to extract both the equilibrium average number ⟨N⟩ of diffusing SGLT1 molecules in the focal area of radius r and their diffusion coefficient D (19). A C-Apochromat 40×/1.2W objective was used. The pinhole for all measurements was 1 Airy unit. The cells were illuminated at 488 nm (30-milliwatt argon laser, 50% power, 1% transmission), and the fluorescence was detected with a band pass filter (505-550 nm). Calibration and control of setup performance were done routinely with Rhodamine 6G. We estimated the total transporter abundance by assuming that SGLT1 is uniformly distributed in the cell membrane. The dextran-RhB fluorescence was excited with a 561-nm DPSS laser and detected with a 580-nm long-pass filter. We also exploited FCS to determine the number n of SGLT1 molecules per proteoliposome, as previously described (20, 21).
In brief, we first obtained the numbers n_v and n_pl per unit volume of lipid vesicles and proteoliposomes, respectively. All vesicles were labeled, because the lipid mixture contained 0.004% (w/w) N-(lissamine-rhodamine-sulfonyl) phosphatidylethanolamine. All protein-bearing vesicles had a second label because of the GFP tag on SGLT1. Thus, recordings of the temporal fluctuations of the fluorescence intensity I in both the lipid channel and the protein channel allowed calculation of two autocorrelation curves (22):
G(τ) = 1 + (1/⟨N⟩) (1 + 4D_v τ/r²)^(-1) (1 + 4D_v τ/z²)^(-1/2),
where ⟨N⟩, z, and D_v are the number of particles in the focal volume, the elongation of the focus in the direction of the laser beam, and the diffusion coefficient of the particles, respectively. Dividing ⟨N⟩ by the focal volumes of the lipid and protein channels returned n_v and n_pl, respectively. We subsequently dissolved the vesicles in detergent (2% SDS + 2% OG + 2% Fos-Choline12) and repeated the FCS measurements with the resulting micelles. Equation 3 served to determine their number in the focal volume of the protein channel. Dividing that number by the focal volume returned the number of micelles n_m per unit volume. Assuming that each micelle contained exactly one SGLT1 molecule, we assigned n to the calculated ratio n_m/n_pl.
Transepithelial Water Flow Rate-The measurements were conducted as previously described (18). In brief, the outer chamber (buffer volume, 1.5 ml) was placed on the heated stage of the confocal microscope. Transwell inserts with cells (buffer volume inside, 150 µl) were fixed on a micromanipulator and positioned 100-120 µm above the glass bottom of the outer chamber. The buffer in the outer chamber contained the reporter dye, dextran-RhB. The inability of dextran-RhB to leak into the inner chamber served as an indication of a tight monolayer. Water flow across the cell monolayer brought about time- and position-dependent changes in the dextran-RhB concentration, measured using FCS. To extract the osmotic water permeability of cell monolayers from the measured kinetics of the dextran-RhB concentration, we built a computational convection-diffusion model (18). The two-dimensional model is realized in COMSOL Multiphysics and takes into account the rotationally symmetric geometry of our experimental setup: the sizes of (i) the insert (inner compartment), (ii) the outer chamber (outer compartment), and (iii) the cleft between them, as well as the volume of the buffer.
Rate of Transepithelial Glucose Transport-An Amplex Red glucose assay kit allowed us to assess glucose transport through confluent cell monolayers. Prior to the experiment, the culture medium was replaced by a glucose-free medium for 1 h at 37°C. We used Krebs-Ringer-Henseleit (KRH) solution containing 120 mM NaCl, 4.7 mM KCl, 1.2 mM MgCl₂, 2.2 mM CaCl₂, 5.5 mM sorbitol, 10 mM HEPES (pH 7.4) for this purpose. After the incubation period, the KRH buffer inside the Transwell inserts was isotonically exchanged for the glucose-containing buffer (5.5 mM glucose, sorbitol omitted), and the outer KRH buffer was replenished with fresh solution. 25-µl samples of the outer KRH buffer were collected 0, 15, 30, and 60 min after the buffer exchange and tested for glucose concentration according to the manufacturer's protocol, using calibration solutions with known glucose concentrations. The porous membrane with the SGLT1-expressing MDCK cells was excised from the insert at the end of the experiment, and the protein abundance in 10 cells was analyzed with FCS.
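To illustrate how ⟨N⟩ and D_v are extracted from such curves, the following Python sketch fits the standard single-component 3D-diffusion model to synthetic data. The focal dimensions and all numerical values are assumptions for the sketch, not the calibrated parameters of the instrument used in this study.

```python
import numpy as np
from scipy.optimize import curve_fit

r, z = 0.2e-4, 1.0e-4   # focal radius and axial elongation, cm (assumed)

def G(tau, N, D):
    """Standard single-component 3D-diffusion FCS autocorrelation."""
    return 1.0 + (1.0 / N) / ((1 + 4 * D * tau / r**2)
                              * np.sqrt(1 + 4 * D * tau / z**2))

tau = np.logspace(-6, 0, 200)                 # lag times, s
data = G(tau, 5.0, 3e-8) + np.random.normal(0, 1e-3, tau.size)  # synthetic
(N_fit, D_fit), _ = curve_fit(G, tau, data, p0=(1.0, 1e-8))
print(f"<N> = {N_fit:.2f}, D = {D_fit:.2e} cm^2/s")
```

Dividing the fitted ⟨N⟩ by the focal volume then yields particle concentrations such as n_v, n_pl, and n_m, exactly as described above.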
Cloning and Expression

The C292A mutation was introduced into SGLT1 by site-directed mutagenesis and confirmed by Sanger sequencing. Both the native and the mutant SGLT1 sequences were cloned into the baculovirus transfer vector pACEBac1 (Geneva Biotech). A C-terminal myc-GFP-His tag was inserted during cloning. Bacmids were prepared using the DH10MultiBac virus backbone (Geneva Biotech), and baculoviruses were generated in Sf9 cells. For protein expression, Tni cells at a density of 1 × 10⁶ cells/ml were infected with either the native or the mutant SGLT1-containing baculovirus, and expression was performed at 21 °C. Cells were harvested 72-96 h after infection and frozen in liquid nitrogen.

SGLT1 Purification and Reconstitution

For SGLT1-eGFP purification from Tni cells, we adopted a previously published protocol (23). The cells were resuspended in breaking buffer (50 mM sodium phosphate, pH 7.4, 10% glycerol) supplemented with a protease inhibitor mixture and homogenized using an EmulsiFlex-C5 (Avestin) at 20,000 psi. Cell debris was removed by spinning the homogenate at 6,000 × g for 10 min. We collected membrane pellets after centrifuging the supernatant at 100,000 × g for 45 min. To remove adhering and peripheral proteins, we washed the membranes with breaking buffer containing 4 M urea for 2-3 h. After 1 h of centrifugation at 100,000 × g, the membrane pellet was collected and solubilized in TG buffer (20 mM Tris-HCl, pH 8, 1 M NaCl, 20% glycerol) with 1.2% Fos-Choline12 (Anatrace) for 2-4 h or overnight. After removal of the insoluble pellet by centrifugation at 100,000 × g for 1 h, the supernatant was mixed with equilibrated nickel-nitrilotriacetic acid Superflow beads (Qiagen) and left to bind for 3 h or overnight. After washing the beads with 100 column volumes of TG buffer (0.6% Fos-Choline12, 20 mM imidazole), we eluted the protein (TG buffer, 0.2% Fos-Choline12, 250 mM imidazole). Collected protein fractions were concentrated to ~500 μl using ultrafiltration spin columns (Vivaspin, Sartorius) and subjected to size exclusion chromatography (ÄKTA pure, GE Life Sciences). We controlled the quality of the collected fractions by SDS-PAGE. The chosen fractions were concentrated and immediately used for reconstitution, without freezing the protein. All procedures were carried out at 4 °C. A lipid extract from liver or the polar Escherichia coli lipid extract, supplemented with 25 mol % cholesterol (all from Avanti Polar Lipids), was doped with 0.004% RhPE and used to form multilamellar vesicles in 150 mM salt, 10 or 50 mM HEPES (pH 7.5), and 1.3% octyl glucoside at a final lipid concentration of 20 mg/ml. Subsequent to bath sonication, the clear suspension was incubated with 5 mM 8-aminonaphthalene-1,3,6-trisulfonic acid, 22 mM N-dodecyl-N,N-dimethylamino-3-propane, and purified SGLT1 at 4 °C for 1 h under constant shaking. Biobeads SM-2 (Bio-Rad) removed the detergent in a stepwise manner within 36 h. Proteoliposomes were harvested by ultracentrifugation. The resuspended vesicles were centrifuged to remove aggregates and extruded through 100-nm polycarbonate filters to produce a homogeneous suspension. Control vesicles were treated in the same way. All samples were assayed without delay.

Determination of Unitary Water Permeability of Reconstituted SGLT1

As described previously (13), we monitored the intensity I of scattered light at a wavelength of 546 nm.
It is related to the volumes of proteoliposomes, V_SGLT1(t), and bare vesicles, V_bare(t) (Equation 4), where α is the fraction of bare vesicles. The parameter a is calculated from the initial osmolyte concentration inside the vesicles, the incremental osmolyte concentration in the external solution caused by sucrose addition, and two fitting parameters, b and d. The rate of the osmotically driven volume decrease allowed calculation of the vesicular water permeability P_f = P_f,c + P_f,l, which reflects the permeabilities P_f,c and P_f,l of all channels and of the lipid bilayer, respectively (Equation 5), where V_w, V_0, A, and L are the molar volume of water, the vesicle volume at time zero, the surface area of the vesicle, and the Lambert function, defined by L(x)e^L(x) = x, respectively (13). The system of Equations 4 and 5 was globally fitted to the whole set of shrinking curves from a particular reconstitution sample to extract the fitting parameters b, d, α, P_f,c, and P_f,l. P_f,l was common to all shrinking curves (including the protein-free sample), whereas P_f,c varied with the protein concentration in the bilayers.

Glucose Transport into Lipid Vesicles

To determine the glucose transport activity of SGLT1 reconstituted into the lipid vesicles, proteoliposomes and control liposomes, both prepared in glucose-free high-K⁺ buffer, were mixed 1:4 (v/v) with low-K⁺ buffer containing 10 mM glucose. Valinomycin (0.5 μM) was added to establish a potential across the membrane. After 30 min of incubation at room temperature, the liposomes were washed on PD10 columns to remove external glucose. To estimate the amount of liposomes and the protein abundance in proteoliposomes, samples collected after PD10 column washing were checked using FCS. To release the accumulated glucose, aliquots of the prepared samples were dissolved in a detergent mixture (1% Fos-Choline12 + 1% n-dodecyl-β-D-maltoside). Glucose concentrations in dissolved and intact samples were determined using an Amplex Red glucose assay kit according to the manufacturer's protocol. By combining the amount of released glucose, the number of proteoliposomes, and their total inner volume, we calculated the average concentration of glucose accumulated inside the vesicles.

Results

Water Flow through Epithelial Monolayers

We stably transfected eGFP-tagged human SGLT1 into MDCK-C7 cells. The cells were allowed to grow on Transwell filters until they reached confluence. SGLT1 sorted into the plasma membrane, as revealed by confocal fluorescence microscopy (Fig. 1). We observed rapid bleaching while focusing on the plasma membrane, indicating that a considerable SGLT1-eGFP fraction was immobile (Fig. 2A). We assumed that both the mobile and the immobile cotransporter fractions contributed to the initial fluorescence intensity, whereas the steady-state FCS signal reflected only the mobile fraction (Fig. 1). The mobile fraction amounted to 75.2 ± 7.2% for cells that were cultured for 1 day and 38.4 ± 3.4% for cells that were cultured for 4-5 days. We confirmed the size S of the immobile fraction by observing fluorescence recovery after photobleaching (FRAP) in an area with a radius r_FRAP of 2-4 μm (Fig. 2B). S was calculated as

S = 1 - (F_∞ - F_0)/(F_i - F_0),

where F_i, F_0, and F_∞ are the initial fluorescence intensity, the intensity immediately after bleaching, and the intensity after fractional recovery, respectively (24). We monitored the whole cell area to exclude both overall bleaching and focal plane drift.
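As a worked illustration of the immobile-fraction formula, the short sketch below applies S = 1 - (F_∞ - F_0)/(F_i - F_0) to hypothetical intensity readings; the numbers are illustrative only.

```python
# Worked illustration of the FRAP immobile-fraction formula
#   S = 1 - (F_inf - F_0) / (F_i - F_0)
# using hypothetical intensity readings (arbitrary units).

def immobile_fraction(F_i, F_0, F_inf):
    """F_i: pre-bleach, F_0: right after bleach, F_inf: post-recovery plateau."""
    return 1.0 - (F_inf - F_0) / (F_i - F_0)

# complete recovery -> fully mobile pool, S = 0
print(immobile_fraction(F_i=100.0, F_0=20.0, F_inf=100.0))  # 0.0
# partial recovery to a plateau of 50 -> S = 0.625
print(immobile_fraction(F_i=100.0, F_0=20.0, F_inf=50.0))   # 0.625
```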
We also calculated the SGLT1 membrane diffusion coefficient D_SGLT from the half-time of the fractional fluorescence recovery in FRAP. The values of 0.24 ± 0.01 and 0.18 ± 0.04 μm²/s derived for D_SGLT from FCS (Equation 2) and FRAP, respectively, agreed reasonably well with each other. By counting the mobile SGLT1 fraction by FCS and accounting for the bleached immobile SGLT1 fraction, we found a density of ~218 transporters/μm² in the apical membrane. Isotonic addition of glucose to the apical side of the cell monolayer yielded a time-dependent increase in glucose concentration in the basal buffer that was larger for the cotransporter-expressing cells than for the parental cells (Fig. 3A). Glucose uptake by nontransfected confluent cultures of MDCK cells has been observed before. This background transport activity can be inhibited by both organic and inorganic trivalent arsenicals in a time- and concentration-dependent manner (25). By transfection with SGLT1, we provided the MDCK cells with an additional uptake system. As expected, it was inhibited by phlorizin. We calculated the transporter turnover rate R from ΔGl, Δt, and N_c, which are the increment in the number of glucose molecules in the basal buffer of SGLT1-expressing cells as compared with parental cells, the time allowed for glucose accumulation, and the number of cells per Transwell insert (~2 × 10⁵), respectively. Assuming an apical membrane area S_ap of ~250 μm² (10), we obtained an apparent R of 250 ± 40 s⁻¹ (mean ± S.E., n = 10), indicating that SGLT1 was fully functional. The SGLT1-expressing cell monolayers displayed an increase in transepithelial water permeability, P_e. P_e was derived by rendering the basolateral compartment hyperosmotic (or hypoosmotic) and by observing the resulting dilution (or concentration increase) of the reporter dye, dextran-RhB, in the immediate monolayer vicinity (Fig. 3, B and C). FCS served to measure the dye concentration within the 50-μm-wide aqueous cleft between the basal membrane and the glass bottom of the measurement chamber (18). To demonstrate that SGLT1 facilitated water transport, we inhibited SGLT1 with phlorizin (0.5 mM). P_e of MDCK-SGLT1 monolayers decreased from 30 μm/s to 10 μm/s. As expected, the drug did not alter the P_e of 6 μm/s of parental MDCK cells (Fig. 3C). To derive the single-cotransporter water permeability p_f from P_e, we took into account that P_e depends on both the apical and the basolateral membrane permeabilities, P_ap and P_bl, where F = 7.63 (10) is the ratio of the basolateral to apical membrane areas; see the numerical sketch below. Because our MDCK cells expressed SGLT1 in both the apical and the basolateral membranes, P_ap and P_bl represent the sums of transporter-mediated (P_t,ap and P_t,bl) and lipid-mediated (P_l,ap and P_l,bl) permeabilities. P_t,ap and P_t,bl can be expressed as the product of p_f and the respective SGLT1 surface density. Because p_f exceeded the value previously measured in oocytes (8) by 3 orders of magnitude, we decided to test the water-transporting capacity of SGLT1 in a reconstituted system. We therefore purified the protein from overexpressing insect cells and reconstituted it into lipid vesicles. Proteoliposome exposure to a hyperosmotic solution resulted in their deflation. Measurements of scattered light intensity I served to monitor the vesicle volume V(t) as a function of time (Fig. 4) via our new adaptation of the Rayleigh-Gans-Debye equation (13). In turn, V(t) allowed calculation of the vesicle permeability P_f. Increasing the number n of SGLT1 molecules per vesicle accelerated the rate of shrinkage (Fig. 4).
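The step from P_e to p_f can be illustrated numerically. The sketch below is ours, not the authors' fit: it assumes (i) that the relation referenced above combines the two membranes in series as 1/P_e = 1/P_ap + 1/(F·P_bl) and (ii), purely for illustration, equal SGLT1 densities and lipid permeabilities in both membranes; the parameter values follow the text, so the output is an order-of-magnitude illustration only.

```python
# Sketch: estimating the single-transporter water permeability p_f from the
# transepithelial permeability P_e. Assumed series-barrier reading:
#   1/P_e = 1/P_ap + 1/(F * P_bl).
# Toy simplification: P_ap = P_bl = P_l + rho * p_f (equal densities and
# lipid background in both membranes), which is NOT the authors' treatment.
F = 7.63      # basolateral-to-apical membrane area ratio (ref. 10)
P_e = 30e-4   # transepithelial permeability, cm/s (30 um/s)
P_l = 6e-4    # background permeability, cm/s (parental-cell value, ~6 um/s)
rho = 218e8   # SGLT1 per cm^2 (218 per um^2, apical density from FCS)

P = P_e * (1.0 + 1.0 / F)   # membrane permeability consistent with P_e
p_f = (P - P_l) / rho       # cm^3/s per transporter
print(f"p_f ~ {p_f:.1e} cm^3/s")  # ~1.3e-13 cm^3/s
```

Despite the crude assumptions, the estimate lands within the order of magnitude of the p_f values reported below.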
In contrast, substitution of Na⁺ for K⁺ had no significant effect on p_f, suggesting that passing the Na⁺ binding site may not be rate-limiting for water. Water does not seem to drag Na⁺ through a constriction site either, because changing the transmembrane voltage with CCCP and valinomycin did not alter p_f; an effect would have been expected had a streaming potential built up (Fig. 5, C and D). Finally, we substituted Ala for Cys at position 292. Although this mutation triples the Na⁺ leak current through SGLT1 in the oolemma (26), it did not augment the p_f of the reconstituted transporter (Fig. 5B). Thus, the introduced mutation removes a barrier that obstructs the Na⁺ leak (either steric or electrostatic) but does not eliminate a significant barrier to water movement. Previous oocyte experiments reported contradictory results: removal of external Na⁺ either eliminated the water-conducting ability of SGLT1 (27) or augmented it (28). The addition of 1 or 2 mM glucose somewhat decreased the water flux. To obtain a statistically significant effect, we had to add larger amounts of glucose to the proteoliposomes: 20 mM glucose resulted in a 17% drop in p_f (Fig. 5A), suggesting that at least some of the water molecules share the pathway of glucose and are delayed by its presence. In contrast, phlorizin did not significantly inhibit water transport through the reconstituted SGLT1. This result is not at odds with the partial inhibition observed with MDCK cells (Fig. 3), mainly because (i) the membrane-impermeable phlorizin may bind to only 50% of the randomly oriented transporter molecules in vesicles, (ii) inhibition of glucose-sodium cotransport across the apical plasma membrane must decrease the cytoplasmic osmolyte concentration adjacent to that membrane, thereby decreasing water flux, and (iii) the inhibitory effect in MDCK cells may partly be mediated by inhibition of the Na⁺/K⁺-ATPase; the pump generates a local osmotic gradient that in turn drives water transport in epithelia (16). The reconstituted transporter was functional, as indicated by the accumulation of 2.5 mM glucose within the proteoliposomes as compared with control vesicles (Fig. 6). Glucose transport into the vesicles was driven by both a glucose gradient and a membrane potential that built up in the presence of valinomycin because of a transmembrane potassium gradient. We calculated p_f from a plot of P_f as a function of n (29). The slope of that plot corresponds to a p_f of (3.3 ± 0.4) × 10⁻¹³ cm³/s at 5 °C (Fig. 6). Assuming the typical activation energy of ~4 kcal/mol for water transport through channels, we find that p_f equals (7 ± 0.8) × 10⁻¹³ cm³ s⁻¹ at 37 °C (see the sketch below). The p_f of the reconstituted transporter is in line with the one obtained from SGLT1-expressing cell monolayers. The agreement is surprising, because the SGLT1 conformations in cells and vesicles are not necessarily the same: the membrane potential is maintained at -30 mV in living MDCK cells, whereas it is either positive inside shrunken vesicles (because of the up-concentration of Na⁺) or clamped to 0 by the protonophore CCCP and the K⁺ ionophore valinomycin. Previous oocyte experiments also revealed only a weak potential dependence: the water permeability of SGLT1-overexpressing oocytes changed by only ~20% when the membrane potential was decreased from -20 to -100 mV (28).
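The temperature correction from 5 °C to 37 °C follows from an Arrhenius factor. The sketch below reproduces the quoted value under the stated ~4 kcal/mol activation energy; the standard Arrhenius scaling is assumed here, as the paper does not spell out the formula.

```python
# Sketch: Arrhenius rescaling of p_f from 5 C to 37 C, assuming the standard
#   p_f(T2) = p_f(T1) * exp(-(Ea/R) * (1/T2 - 1/T1))
# with the ~4 kcal/mol activation energy quoted in the text.
import math

Ea = 4000.0              # cal/mol, typical for water channels
R = 1.987                # cal/(mol K)
T1, T2 = 278.15, 310.15  # 5 C and 37 C in kelvin

factor = math.exp(-(Ea / R) * (1.0 / T2 - 1.0 / T1))
p_f_5C = 3.3e-13         # cm^3/s, slope value measured at 5 C
print(f"scaling factor {factor:.2f}, p_f(37 C) ~ {p_f_5C * factor:.1e} cm^3/s")
# -> factor ~2.1, i.e. ~7e-13 cm^3/s, matching the value quoted above
```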
Discussion

The p_f measurements of SGLT1, both in live cells and in a reconstituted system, reveal an unexpectedly high water conductivity. This places the transporter in the top league of water facilitators, alongside aquaporin-1 (13). Taking into account that aquaporin-1 represents the main water pathway in the leaky epithelium of the proximal renal tubule, this observation suggests that SGLT1 may enable transcellular water movement through the intermediate epithelium of the small intestine. The high p_f value immediately leads to a second very important conclusion: it rules out the possibility of secondary active water transport via SGLT1. SGLT1 has repeatedly been proposed to pump water (reviewed in Ref. 30). The simple calculation according to Equation 1 shows that even a modest osmotic gradient of only 5 mOsm would drive an osmotic flux ~1000 times as large as the largest flux attainable by pumping (compare Introduction; a numerical sketch is given at the end of the Discussion). That is, even if water were carried along with glucose and Na⁺, the resulting flux would be physiologically negligible. Our conclusion is in line with other reports that likewise found only passive water flux and no indication of water pumping. (i) The first report explained the increment in water influx caused by SGLT1 expression in oocytes (15) by the near-membrane accumulation of the cotransported, osmotically active Na⁺ ions and glucose molecules (31, 32). The transport of both solutes away from the oocyte membrane was hampered by unstirred layers (USLs). Because the multiple oolemma invaginations act to increase the size of the USLs, solute accumulation is more pronounced than in epithelial cells. In combination with the large p_f of SGLT1, the USLs ensure that even small osmotic gradients produce large water fluxes. (ii) Recent molecular dynamics simulations captured such a small number of water molecules in the occluded state of vSGLT that pumping them through vSGLT or any other homologous protein was deemed unlikely to make a significant contribution to water transport (11). (iii) Another experimental study explored the possibility that not only SGLT1 but also the potassium chloride cotransporter has falsely been implicated in water pumping (33). It showed that the decrease in transepithelial water flux upon pharmacological inhibition of the potassium chloride cotransporter has nothing to do with water pumping but is due to a reduced potassium efflux from the cell monolayer. Administering the K⁺/H⁺ exchanger nigericin rescued the cellular K⁺ efflux, which in turn allowed full recovery of the transepithelial water flux (33). Because the mobile carrier nigericin is too small to shuttle several hundred water molecules per cycle, there must be alternative pathways for water and potassium. Our experimentally determined p_f is 3 orders of magnitude larger than the one derived for SGLT1 from experiments in the oocyte expression system (8). The difference is too large to place the sole blame on oocyte USLs (31). During swelling experiments, oocytes generally increase their volume by 20-30%. The required increase in surface area is impossible to achieve by membrane stretching, which is limited to 5% (34); it requires unfolding of the oolemma, which has a 9-fold larger surface area than a sphere of the size of the oocyte (8). Attachment of SGLT1 to the cytoskeleton is likely to hamper unfolding of oolemma invaginations and to prevent microvilli from stretching out.
Figure 6. Water and glucose transport across reconstituted SGLT1. A, the water permeability p_f of one SGLT1 molecule was measured at 5 °C and subsequently recalculated for 37 °C. B, dependence of the vesicular water permeability P_f on the number n of reconstituted SGLT1 molecules per vesicle; p_f was found as the slope of the linear regression multiplied by the membrane area of an individual vesicle. Substituting 150 mM Na⁺ (red) for 150 mM K⁺ had no significant effect on p_f. The buffer also contained 10 mM HEPES (pH 7.5). C, the reconstituted transporter was functional, as tested by a colorimetric assay of glucose uptake into the vesicles. External K⁺ and valinomycin served to establish an initial membrane potential.

If the fraction of SGLT1 molecules in oocytes that is thus rendered immobile is as large as that in MDCK cells (Fig. 2), oolemma unfolding must face considerable resistance. This explains why the water permeability of oocytes transfected with SGLT1 saturates at 38 μm/s (27), whereas levels of 300 μm/s are easily achievable by transfection with aquaporins (35). Because the structures of the SGLT1 family members vSGLT and LeuT do not show an aqueous pore (36-38), water is likely to take the same routes as glucose or Na⁺ through these transporters (Fig. 7). We found experimental support for a shared pathway with glucose, but none in favor of water taking the Na⁺ pathway. That is, neither the presence nor the absence of Na⁺, nor modification of the membrane potential (by CCCP and/or valinomycin), exerted any effect on p_f. Furthermore, the wild-type protein and the C292A mutant conduct water equally well, although the mutant is known to facilitate a 3-fold larger Na⁺ leak current. Nevertheless, we cannot exclude that (i) the mutation only removed a barrier to Na⁺ and (ii) the diffusion of both species is completely decoupled within the part of the pathway that they may share. A shared pathway for water and glucose is supported by the observed decrease in p_f upon addition of saturating glucose concentrations. That is, glucose partially blocked the water pathway when its concentration exceeded, by more than an order of magnitude, the Michaelis constant K_m = 0.6 mM at which SGLT1 achieves half of its maximum rate. This observation clearly shows that the empty transporter is also water-permeable; so far, only the inward-facing substrate-bound state had been reported, by computational analysis, to facilitate water transport (12). It must be noted, however, that the experimentally obtained p_f value is in surprisingly good agreement with the in silico p_f derived for a single conformational state of the homologous vSGLT (12). SGLT1 overexpression in MDCK cells and reconstitution into proteoliposomes both provide the first experimental evidence for a highly efficient water pathway through the transporter. Because SGLT1 is not equipped with a special water selectivity filter, the constriction site in the middle of the protein, consisting of four hydrophobic residues, is likely to fulfill that function. The large p_f value of SGLT1 renders transcellular water transport in the jejunal epithelium plausible and at the same time proves, by reductio ad absurdum, that SGLT1 water pumping is out of the question.
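For the passive-versus-pumped comparison invoked in the Discussion, the following sketch assumes Equation 1 has the form N_W = p_f · Δc_osm · N_A, which is consistent with the N_W values quoted at the start of this section; the pumping capacity N_P is entered as an order-of-magnitude placeholder, not a measured value.

```python
# Numerical sketch of the passive-vs-pumped comparison. Equation 1 is assumed
# to read N_W = p_f * dc_osm * N_A (consistent with the N_W values quoted
# earlier); N_P below is an illustrative placeholder, not a measured number.
N_A = 6.022e23   # Avogadro's number, 1/mol
p_f = 7e-13      # cm^3/s per SGLT1 at 37 C (this work)
dc_osm = 5e-6    # 5 mOsm expressed in mol/cm^3

N_W = p_f * dc_osm * N_A   # passive water flux per transporter, 1/s
N_P = 2e3                  # assumed pumping capacity, 1/s (placeholder)
print(f"N_W ~ {N_W:.1e} /s, N_W/N_P ~ {N_W / N_P:.0f}")
# -> N_W ~ 2.1e6 /s; for N_P of order 1e3 /s the passive flux is ~1000x larger
```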